Host metrics

Track your infrastructure’s health by sending host metrics to Honeybadger Insights. Monitor CPU usage, memory consumption, and disk space alongside your application errors and logs.

The easiest way to collect host metrics is with the Honeybadger CLI. Download a prebuilt binary from the GitHub releases page, or install with Go:

go install github.com/honeybadger-io/cli@latest

See the CLI installation guide for other options, including Homebrew.

Start the metrics agent with your project API key:

hb agent --api-key PROJECT_API_KEY

The agent collects CPU, memory, and disk metrics every 60 seconds and sends them to Insights. You can customize the interval with the -i, --interval flag (see the CLI reference for details).
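For example, to collect every 30 seconds instead of the default 60 (the `-i`/`--interval` flag is from the CLI reference; the exact value format is an assumption, so verify with `hb agent --help`):

```shell
# Shorter collection interval -- value format is an assumption, check `hb agent --help`
hb agent --api-key PROJECT_API_KEY --interval 30
```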

If you’re running the agent on multiple hosts, add tags to identify and group them:

hb agent --api-key PROJECT_API_KEY \
  --tag environment=production \
  --tag role=web-1

Tags appear as top-level fields on every metric event. You can also override the default hostname with --tag host=custom-name, which is useful when hostnames are auto-generated (e.g. IP-based names from cloud providers).

Tags can also be set in the configuration file (~/.honeybadger-cli.yaml):

api_key: PROJECT_API_KEY
agent:
  tags:
    environment: production
    role: web-1

CLI flags take precedence over configuration file tags. See the CLI reference for details and examples of reserved field names that cannot be used as tag keys.

Once tagged, you can filter and group metrics in Insights:

fields @ts, host::str, used_percent::float
| filter event_type::str == "report.system.cpu"
| filter environment::str == "production"
| filter role::str == "web-1"

Once metrics are flowing, you can query them in Insights. Each metric type sends a separate event:

{"@id": "ca4dee56-bede-453d-a41e-a6fd93d30eaf", "@stream.id": "3XepYQVyo5to", "@ts": "2026-01-12 22:22:11.000", "total_bytes": 994662584320, "used_bytes": 544694333440, "free_bytes": 449968250880, "used_percent": 54.76, "device": "/dev/disk3s1s1", "event_type": "report.system.disk", "host": "vonnegut.lan", "mountpoint": "/", "fstype": "apfs"}
{"@id": "d76ca037-3bab-4c1c-beb1-a18b9e6ff765", "@stream.id": "3XepYQVyo5to", "@ts": "2026-01-12 22:22:11.000", "total_bytes": 51539607552, "used_bytes": 38632865792, "free_bytes": 164954112, "available_bytes": 12906741760, "used_percent": 74.96, "event_type": "report.system.memory", "host": "vonnegut.lan"}
{"@id": "5b7c4060-1ff3-4d52-90d8-a9d3af17174a", "@stream.id": "3XepYQVyo5to", "@ts": "2026-01-12 22:22:11.000", "num_cpus": 14, "used_percent": 32.85, "load_avg_1": 3.35009765625, "load_avg_5": 3.73046875, "load_avg_15": 3.86083984375, "event_type": "report.system.cpu", "host": "vonnegut.lan"}
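Memory events can be summarized the same way. Here's a sketch of a query that averages memory usage per host, assuming the `stats` clause from the BadgerQL reference (the alias `avg_used_percent` is illustrative):

```
fields @ts, host::str, used_percent::float
| filter event_type::str == "report.system.memory"
| stats avg(used_percent::float) as avg_used_percent by host::str
```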

Here’s an example BadgerQL query to get a snapshot of disk usage:

fields @ts, mountpoint::str, used_percent::float
| filter event_type::str == "report.system.disk"
| sort used_percent desc
| limit 1 by mountpoint::str
| @ts (TIME EDT) | mountpoint (STR) | used_percent (FLOAT) |
| --- | --- | --- |
| 2026-01-12 16:15:06.000 | / | 55.01 |
| 2026-01-12 16:14:21.000 | /data | 11.91 |
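The same event fields work for threshold-style checks. A sketch of a query that surfaces disks that are nearly full (the 90% cutoff is an arbitrary example):

```
fields @ts, host::str, mountpoint::str, used_percent::float
| filter event_type::str == "report.system.disk"
| filter used_percent::float > 90.0
| sort used_percent desc
```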

If you need more flexibility or are already using Vector in your infrastructure, you can use it to send host metrics to Insights instead.

Here’s a sample configuration:

# Put this in /etc/vector/vector.yaml
sources:
  host:
    type: "host_metrics"
sinks:
  honeybadger_events:
    type: "http"
    inputs: ["host"]
    uri: "https://api.honeybadger.io/v1/events"
    request:
      headers:
        X-API-Key: "PROJECT_API_KEY"
    encoding:
      codec: "json"
    framing:
      method: "newline_delimited"
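If you only want a subset of metrics, the `host_metrics` source accepts options to limit which metric groups are collected and how often. A sketch, using option names from the Vector docs (verify them against your installed Vector version):

```yaml
sources:
  host:
    type: "host_metrics"
    # Only collect these metric groups, every 60 seconds
    collectors: ["cpu", "memory", "filesystem", "load"]
    scrape_interval_secs: 60
```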

The easiest way to run Vector is via Docker. Here’s a sample Docker Compose configuration, assuming your Vector configuration is in a file named vector.yaml:

version: "3.2"
services:
  vector:
    image: timberio/vector:latest-alpine
    volumes:
      - "./vector.yaml:/etc/vector/vector.yaml:ro"
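If you'd rather not use Compose, an equivalent one-off `docker run` looks like this (container name is illustrative; the config is bind-mounted read-only from the current directory):

```shell
# Run Vector in the background with the local vector.yaml mounted read-only
docker run -d --name vector \
  -v "$(pwd)/vector.yaml:/etc/vector/vector.yaml:ro" \
  timberio/vector:latest-alpine
```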

See the Vector documentation for more configuration options.

Vector’s metrics are structured like this:

{
  "@id": "01922983-149f-7a69-b5e1-ddca928d815e",
  "@stream.id": "cEhUcrZrnny0",
  "@ts": "2025-09-25 14:08:26.048",
  "gauge": {
    "value": 1.25
  },
  "tags": {
    "collector": "load",
    "host": "api-10-0-11-252"
  },
  "kind": "absolute",
  "name": "load15",
  "namespace": "host"
}

Here’s an example BadgerQL query to get a snapshot of disk usage:

fields @ts, tags.mountpoint::str, round(gauge.value::float * 100, 2) as used_percentage
| filter namespace::str == "host"
| filter name::str == "filesystem_used_ratio"
| filter gauge.value::float > 0.0
| filter tags.filesystem::str not in ["tmpfs", "devtmpfs", "squashfs"]
| sort @ts
| limit 1 by tags.mountpoint
| @ts (TIME EDT) | tags.mountpoint (STR) | used_percentage (FLOAT) |
| --- | --- | --- |
| 2025-09-25 10:45:11.047 | / | 29.77 |
| 2025-09-25 10:45:11.047 | /efs | 0 |
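Other `host_metrics` series follow the same shape. For example, a sketch of a query for available memory per host — the metric name `memory_available_bytes` is an assumption based on Vector's default memory collector, so check the names actually arriving in your stream:

```
fields @ts, tags.host::str, gauge.value::float as available_bytes
| filter namespace::str == "host"
| filter name::str == "memory_available_bytes"
| sort @ts desc
```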