
Working with LLMs

Large language models (LLMs) can help you troubleshoot your applications and fix issues faster. When integrated with Honeybadger’s error tracking and application monitoring tools, they become even more effective at helping you squash bugs and keep your systems running smoothly.

Honeybadger Model Context Protocol (MCP) server


The Honeybadger MCP server provides structured access to Honeybadger’s API through the Model Context Protocol, allowing AI assistants to interact with your Honeybadger projects and monitoring data.

Instead of manually copying error details or switching between tools, your AI assistant can automatically fetch error data, analyze patterns, and provide contextual debugging suggestions—all within your existing workflow.

The Model Context Protocol (MCP) is a standard that enables LLMs to interact with external services in a structured and safe manner. Think of it as giving your AI assistant the ability to use tools - in this case, tools to manage your Honeybadger data and investigate errors and production issues.
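
Under the hood, your MCP client discovers the server's tools and invokes them with JSON-RPC messages over stdio. As an illustration only (the list_faults tool name and its arguments here are hypothetical shorthand; see the tools documentation linked below for the actual catalog), a tool call looks roughly like this:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "list_faults",
    "arguments": { "project_id": 12345 }
  }
}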

The easiest way to get started is with Docker:

docker pull ghcr.io/honeybadger-io/honeybadger-mcp-server:latest

You’ll need your Honeybadger personal authentication token to configure the server, which you can find under the “Authentication” tab in your Honeybadger user settings.
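
To smoke-test the image, you can run the server directly. It speaks MCP over stdin/stdout, so it will wait silently for a client to connect; press Ctrl+C to exit:

docker run -i --rm -e HONEYBADGER_PERSONAL_AUTH_TOKEN="your personal auth token" ghcr.io/honeybadger-io/honeybadger-mcp-server:latest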

Run this command to configure Claude Code:

claude mcp add honeybadger -- docker run -i --rm -e HONEYBADGER_PERSONAL_AUTH_TOKEN="your personal auth token" ghcr.io/honeybadger-io/honeybadger-mcp-server:latest
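
To verify the registration, Claude Code can list its configured servers:

claude mcp list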

Put this config in ~/.cursor/mcp.json for Cursor, or ~/.codeium/windsurf/mcp_config.json for Windsurf. See Anthropic’s MCP quickstart guide for how to locate your claude_desktop_config.json for Claude Desktop:

{
  "mcpServers": {
    "honeybadger": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "-e",
        "HONEYBADGER_PERSONAL_AUTH_TOKEN",
        "ghcr.io/honeybadger-io/honeybadger-mcp-server"
      ],
      "env": {
        "HONEYBADGER_PERSONAL_AUTH_TOKEN": "your personal auth token"
      }
    }
  }
}
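
Note that -e HONEYBADGER_PERSONAL_AUTH_TOKEN appears in the args without a value: Docker forwards that variable from the parent environment, which your MCP client populates from the env map, so the token stays out of the command line itself.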

For VS Code, add the following to your user settings or to .vscode/mcp.json in your workspace:

{
  "mcp": {
    "inputs": [
      {
        "type": "promptString",
        "id": "honeybadger_auth_token",
        "description": "Honeybadger Personal Auth Token",
        "password": true
      }
    ],
    "servers": {
      "honeybadger": {
        "command": "docker",
        "args": [
          "run",
          "-i",
          "--rm",
          "-e",
          "HONEYBADGER_PERSONAL_AUTH_TOKEN",
          "ghcr.io/honeybadger-io/honeybadger-mcp-server"
        ],
        "env": {
          "HONEYBADGER_PERSONAL_AUTH_TOKEN": "${input:honeybadger_auth_token}"
        }
      }
    }
  }
}

See Use MCP servers in VS Code for more info.

Add the following to your Zed settings file in ~/.config/zed/settings.json:

{
  "context_servers": {
    "honeybadger": {
      "command": {
        "path": "docker",
        "args": [
          "run",
          "-i",
          "--rm",
          "-e",
          "HONEYBADGER_PERSONAL_AUTH_TOKEN",
          "ghcr.io/honeybadger-io/honeybadger-mcp-server"
        ],
        "env": {
          "HONEYBADGER_PERSONAL_AUTH_TOKEN": "your personal auth token"
        }
      },
      "settings": {}
    }
  }
}

If you don’t have Docker, you can build the server from source (you’ll need Go installed):

git clone git@github.com:honeybadger-io/honeybadger-mcp-server.git
cd honeybadger-mcp-server
go build -o honeybadger-mcp-server ./cmd/honeybadger-mcp-server
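
To check the build, you can run the binary in stdio mode with your token in the environment (the same invocation your MCP client will use):

HONEYBADGER_PERSONAL_AUTH_TOKEN="your personal auth token" ./honeybadger-mcp-server stdio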

And then configure your MCP client to run the server directly:

{
  "mcpServers": {
    "honeybadger": {
      "command": "/path/to/honeybadger-mcp-server",
      "args": ["stdio"],
      "env": {
        "HONEYBADGER_PERSONAL_AUTH_TOKEN": "your personal auth token"
      }
    }
  }
}

For detailed development instructions, check out the full documentation on GitHub.

Once you have Honeybadger’s MCP server running, your AI assistant gains the following capabilities:

  • Project management: List, create, update, and delete projects, and get detailed project reports.
  • Error investigation: Search and filter errors, view occurrences and stack traces, see affected users, and analyze error patterns.

We’re actively developing additional tools for working with Honeybadger Insights data, account and team management, uptime monitoring, and other platform features. More to come!

For a complete list of available tools and their parameters, see the tools documentation in the GitHub README.

Here are some things you might ask your AI assistant to help with. Always review the results closely; LLMs can make mistakes.

“Fix this error [link to error]”

Your assistant can look up the project and error details, open the source file from the stack trace, and fix the bug. You could also try phrases like “Tell me more about this error,” “Help me troubleshoot this error,” etc.

“What’s happening with my Honeybadger projects?”

Your assistant can list your projects, show recent error activity, and filter faults by time and environment to provide a quick overview or help you triage.

“Create an interactive chart that shows error occurrences for my ‘[project name]’ Honeybadger project over time. Use your excellent front end skills to make it look very professional and well polished.”

Your assistant can fetch time-series data from your project and generate an interactive chart showing error trends.

We provide machine-readable versions of our documentation optimized for LLMs. These files follow the llms.txt standard and are automatically generated from our documentation content.

  • /llms.txt - Index page with links to full and abridged documentation, plus specialized subsets
  • /llms-full.txt - Complete documentation in text format
  • /llms-small.txt - Abridged documentation with non-essential content removed

The abridged version (llms-small.txt) is optimized for token efficiency while preserving essential technical information.

We also provide focused documentation subsets for specific use cases:

  • The Honeybadger Data (REST) API and reporting APIs
  • Honeybadger Insights and BadgerQL
  • Honeybadger’s user interface and product features
  • Individual documentation sets for each client library (Ruby, JavaScript, Python, PHP, Elixir, etc.)

Visit /llms.txt for the complete list with links to download.

These files are designed to be consumed by LLMs in several ways:

  1. Directly: Some LLM tools can fetch and process llms.txt files automatically
  2. As context: Copy and paste relevant sections into your AI assistant
  3. Via automation: Build tools that fetch and inject documentation into LLM prompts (see the sketch after this list)
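
As a minimal sketch of the automation approach (assuming the documentation is served from docs.honeybadger.io, and with your-llm-cli standing in for whatever prompt pipeline you use):

# Fetch the abridged, token-efficient docs
curl -s https://docs.honeybadger.io/llms-small.txt -o honeybadger-docs.txt

# Inject them as context ahead of your question
cat honeybadger-docs.txt my-question.txt | your-llm-cli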

Honeybadger includes full backtraces in Slack error notifications to provide the context that AI coding assistants need for effective debugging.

The backtrace appears as formatted code in Slack, allowing you to copy and paste it to AI debugging assistants like Cursor, Windsurf, or Copilot. If you use Cursor’s Background Agents, you can install their Slack integration to ask Cursor to fix the error directly from Slack:

[Image: Slack notification showing a Honeybadger error and integration with AI debugging tools for quick issue resolution.]

We include similar information when creating issues for errors in GitHub and other issue trackers, which should help you—for example—assign bugfixes to GitHub Copilot.

See our integration docs to learn more about our 3rd-party integrations.

As we continue to develop LLM integrations, we’re exploring ways to make automated monitoring and debugging more intelligent.

Do you have ideas for how LLMs could improve your Honeybadger experience? We’d love to hear from you! Drop us a line at support@honeybadger.io.