You can use Insights to dive into the data collected by Honeybadger and the logs and other events that you send to our Events API. We provide a query language (that we lovingly call BadgerQL) that enables quick discovery of what's happening inside your applications. The Insights UI also lets you chart the results of those queries and add those charts to dashboards that you can share with your team.
Our query language strives to be minimalist, yet powerful. With it you can specify which fields you want to see, filter the kinds of events that should be returned, perform aggregations and calculations, and more. When you first load the Insights UI, you will see a query box that has a default query to help you get started:
fields @ts, @preview | sort @ts
This query selects a couple of special fields — the timestamp and a preview of the fields that are available in the event — and sorts the results by time, with the most recent results first. Each row of the query is piped through the following row, which allows you to apply filters, formatting functions, and so on. Let's do a quick walk-through to see how it works, and to see how it can be used to create visualizations of your data.
Here's an example of working with some Honeybadger data. First, filter the data to see only the results of uptime checks:
fields @ts, @preview | filter event_type::str == "uptime_check" | sort @ts
You can see that we've piped the initial results through filter, which accepts a variety of conditions, such as the string comparison shown here. You'll also notice that we specified the data type of the event_type field (str) so the query parser can validate the functions and comparisons that you use on the field data.
Clicking on the disclosure arrow will show all the fields that were stored for an event:
Additional disclosure controls appear inside the event detail view when the event has nested objects.
Let's filter on some additional data that is present in these events. We can limit the results to show only the uptime checks that originated from our Virginia location, and we can change the fields that we display so we can see some info about the results of each check:
fields @ts, location::str, response.status_code::int, duration::int | filter event_type::str == "uptime_check" | filter location::str == "Virginia" | sort @ts
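Conceptually, each stage of the query is a transformation applied to the output of the previous stage. As a rough illustration only (this is not the actual query engine, and the sample events are invented), the fields/filter/sort pipeline above behaves something like this:

```python
# Illustrative Python analogy for the BadgerQL pipeline above.
# The sample events are hypothetical; real events come from the Events API.
events = [
    {"ts": "2024-01-01T00:00:00Z", "event_type": "uptime_check",
     "location": "Virginia", "response": {"status_code": 200}, "duration": 120},
    {"ts": "2024-01-01T00:05:00Z", "event_type": "uptime_check",
     "location": "Oregon", "response": {"status_code": 200}, "duration": 95},
    {"ts": "2024-01-01T00:10:00Z", "event_type": "deploy",
     "location": "Virginia", "duration": 30},
]

# filter event_type::str == "uptime_check" | filter location::str == "Virginia"
rows = [e for e in events
        if e["event_type"] == "uptime_check" and e["location"] == "Virginia"]

# fields @ts, location, response.status_code, duration | sort @ts
rows = sorted(
    ({"ts": e["ts"], "location": e["location"],
      "status_code": e["response"]["status_code"], "duration": e["duration"]}
     for e in rows),
    key=lambda r: r["ts"], reverse=True)

print(rows)
```

Each pipe stage narrows or reshapes the result set, which is why the order of filters and field selections matters.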
Now let's summarize the data to find the average response duration for all successful checks:
fields duration::int | filter event_type::str == "uptime_check" | filter location::str == "Virginia" | filter response.status_code::int == 200 | stats avg(duration) by bin(1h) as time | sort time
Here we use stats to perform all kinds of calculations, such as averages, and by allows us to specify the grouping for those calculations. Grouping by bin gives us time-series data, which makes it easy to create a chart by clicking the Line button.
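To make the stats/bin step concrete, here is a sketch of what "average duration grouped into 1-hour bins" computes, using invented sample data (this mirrors the calculation, not Honeybadger's implementation):

```python
from collections import defaultdict
from datetime import datetime, timezone

# Hypothetical check results: (timestamp, duration in ms)
checks = [
    (datetime(2024, 1, 1, 10, 5, tzinfo=timezone.utc), 100),
    (datetime(2024, 1, 1, 10, 40, tzinfo=timezone.utc), 140),
    (datetime(2024, 1, 1, 11, 10, tzinfo=timezone.utc), 90),
]

# bin(1h): truncate each timestamp to the start of its hour
bins = defaultdict(list)
for ts, duration in checks:
    bins[ts.replace(minute=0, second=0, microsecond=0)].append(duration)

# avg(duration) by bin(1h) as time | sort time
series = sorted((time, sum(ds) / len(ds)) for time, ds in bins.items())
for time, avg in series:
    print(time.isoformat(), avg)  # two hourly points: 120.0, then 90.0
```

Each bin becomes one point on the resulting line chart, so a smaller bin size produces a finer-grained series.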
From there you can experiment with different visualizations, update the query to change the chart (try changing bin(1h) to bin(15m), for example), and add the chart to a custom dashboard.
Of course, this functionality isn't limited to only the data that is generated by Honeybadger. Your error data is also available for querying (event_type::str == "notice"), and you can send logs and events to our API to be able to query and chart your own data.
Our API accepts newline-delimited JSON, where each line is a JSON object that describes an event that you care about. Sending structured logs in a JSON format (like lograge produces) allows you to correlate what's happening in your app with the error data that you are already sending to Honeybadger. But Insights isn't limited to logs! You can send any kind of event (e.g., user signup, billing failure) or metric that you'd like to query and visualize.
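A newline-delimited JSON payload is simply one JSON object per line. As an illustration (the event fields here are invented, and the endpoint and auth details are assumptions; see our API documentation for the exact request format):

```python
import json

# Two hypothetical events; any JSON-serializable fields you care about are fine.
events = [
    {"event_type": "user_signup", "ts": "2024-01-01T12:00:00Z", "plan": "pro"},
    {"event_type": "billing_failure", "ts": "2024-01-01T12:05:00Z", "amount": 49},
]

# Newline-delimited JSON: one object per line -- no wrapping array, no commas
# between lines.
payload = "\n".join(json.dumps(e) for e in events)
print(payload)

# This string would then be POSTed to the Events API with your project API key
# (endpoint and header names are assumptions here -- check the API docs).
```

Structured log lines from your app can be shipped the same way, which is what makes them queryable alongside your error data.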
To get your Heroku logs into Insights, create a new log drain for your Heroku app, using an API key displayed on the API keys tab of the project settings page:
heroku drains:add "https://logplex.honeybadger.io/v1/events?api_key=Your project API key"
Use our fork of Fly.io's log shipper app to ship logs from your apps hosted by Fly.io. First, create a new app config:
# Make a directory for your log shipper app
mkdir logshipper
cd logshipper

# Create the app but don't deploy just yet
fly launch --no-deploy --image honeybadgerindustries/fly-log-shipper:latest

# Set some secrets. Setting HONEYBADGER_API_KEY enables the shipping of logs to your Honeybadger project.
fly secrets set ORG=personal # The org you chose when running "fly launch"
fly secrets set ACCESS_TOKEN=$(fly auth token)
fly secrets set HONEYBADGER_API_KEY=Your project API key
Edit the generated fly.toml file, replacing the [http_service] section with this:
[[services]]
  http_checks = []
  internal_port = 8686
Then deploy the app:

fly deploy
Once that's done, you should see logs from your apps flowing into Insights. See the Fly.io docs for more information about using the log shipper app.
See our API documentation to learn how to use Vector to watch your existing log files and send those logs to our API.
Coming soon! For now, you can open the inline docs via the book icon in the top-right corner of the query box.