Working with LLMs

Large language models (LLMs) can help you troubleshoot your applications and fix issues faster. When integrated with Honeybadger's error tracking and application monitoring tools, they become even more effective at helping you squash bugs and keep your systems running smoothly.

Honeybadger Model Context Protocol (MCP) server

The Honeybadger MCP server provides structured access to Honeybadger's API through the Model Context Protocol, allowing AI assistants to interact with your Honeybadger projects and monitoring data.

Instead of manually copying error details or switching between tools, your AI assistant can automatically fetch error data, analyze patterns, and provide contextual debugging suggestions, all within your existing workflow.

What is the Model Context Protocol?

The Model Context Protocol (MCP) is a standard that enables LLMs to interact with external services in a structured and safe manner. Think of it as giving your AI assistant the ability to use tools: in this case, tools to manage your Honeybadger data and investigate errors and production issues.
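
Under the hood, MCP clients and servers exchange JSON-RPC 2.0 messages over stdio. As a rough sketch of what that looks like (the tool name `list_faults` and its arguments here are illustrative, not a documented contract), a request from your assistant to the Honeybadger MCP server might resemble:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "list_faults",
    "arguments": {
      "project_id": 12345,
      "q": "RecordNotFound",
      "limit": 10
    }
  }
}
```

The server runs the corresponding API query and returns the results as a structured tool response, which the assistant can then reason about.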

Quick start

Tip: For detailed development instructions, check out the full documentation on GitHub.

The easiest way to get started is with Docker:

```bash
docker pull ghcr.io/honeybadger-io/honeybadger-mcp-server:latest
```
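
If you'd like to sanity-check the image before wiring it into an editor, you can run it directly with the same arguments the client configs below use. The server speaks MCP over stdin/stdout, so it will simply wait for input (press Ctrl+C to exit):

```bash
# Start the server on stdio with your token; MCP clients normally launch
# this for you, so this is just a quick manual check that the image runs.
docker run -i --rm \
  -e HONEYBADGER_PERSONAL_AUTH_TOKEN="your personal auth token" \
  ghcr.io/honeybadger-io/honeybadger-mcp-server
```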

You'll need your Honeybadger personal authentication token to configure the server; you can find it under the "Authentication" tab in your Honeybadger User settings.

Cursor, Windsurf, and Claude Desktop

Put this config in ~/.cursor/mcp.json for Cursor, or in ~/.codeium/windsurf/mcp_config.json for Windsurf. For Claude Desktop, see Anthropic's MCP quickstart guide to locate your claude_desktop_config.json:

```json
{
  "mcpServers": {
    "honeybadger": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "-e",
        "HONEYBADGER_PERSONAL_AUTH_TOKEN",
        "ghcr.io/honeybadger-io/honeybadger-mcp-server"
      ],
      "env": {
        "HONEYBADGER_PERSONAL_AUTH_TOKEN": "your personal auth token"
      }
    }
  }
}
```

VS Code

Add the following to your user settings or .vscode/mcp.json in your workspace:

```json
{
  "mcp": {
    "inputs": [
      {
        "type": "promptString",
        "id": "honeybadger_auth_token",
        "description": "Honeybadger Personal Auth Token",
        "password": true
      }
    ],
    "servers": {
      "honeybadger": {
        "command": "docker",
        "args": [
          "run",
          "-i",
          "--rm",
          "-e",
          "HONEYBADGER_PERSONAL_AUTH_TOKEN",
          "ghcr.io/honeybadger-io/honeybadger-mcp-server"
        ],
        "env": {
          "HONEYBADGER_PERSONAL_AUTH_TOKEN": "${input:honeybadger_auth_token}"
        }
      }
    }
  }
}
```

See Use MCP servers in VS Code for more info.

Zed

Add the following to your Zed settings file in ~/.config/zed/settings.json:

```json
{
  "context_servers": {
    "honeybadger": {
      "command": {
        "path": "docker",
        "args": [
          "run",
          "-i",
          "--rm",
          "-e",
          "HONEYBADGER_PERSONAL_AUTH_TOKEN",
          "ghcr.io/honeybadger-io/honeybadger-mcp-server"
        ],
        "env": {
          "HONEYBADGER_PERSONAL_AUTH_TOKEN": "your personal auth token"
        }
      },
      "settings": {}
    }
  }
}
```

Running without Docker

If you don't have Docker, you can build the server from source:

```bash
git clone git@github.com:honeybadger-io/honeybadger-mcp-server.git
cd honeybadger-mcp-server
go build -o honeybadger-mcp-server ./cmd/honeybadger-mcp-server
```

And then configure your MCP client to run the server directly:

```json
{
  "mcpServers": {
    "honeybadger": {
      "command": "/path/to/honeybadger-mcp-server",
      "args": ["stdio"],
      "env": {
        "HONEYBADGER_PERSONAL_AUTH_TOKEN": "your personal auth token"
      }
    }
  }
}
```

For detailed development instructions, check out the full documentation on GitHub.

What can you do with the MCP server?

Once you have Honeybadger's MCP server running, your AI assistant gains the following capabilities:

  • Project management: List, create, update, and delete projects, and get detailed project reports.
  • Error investigation: Search and filter errors, view occurrences and stack traces, see affected users, and analyze error patterns.

We're actively developing additional tools for working with Honeybadger Insights data, account and team management, uptime monitoring, and other platform features. More to come!

Example workflows

Here are some things you might ask your AI assistant to help with. Always review its output closely; LLMs can make mistakes.

"Fix this error [link to error]"

Your assistant can look up the project and error details, open the source file from the stack trace, and fix the bug. You could also try phrases like "Tell me more about this error," "Help me troubleshoot this error," etc.

"What's happening with my Honeybadger projects?"

Your assistant can list your projects, show recent error activity, and filter faults by time and environment to provide a quick overview or help you triage.

"Create an interactive chart that shows error occurrences for my '[project name]' Honeybadger project over time. Use your excellent front end skills to make it look very professional and well polished."

Your assistant can fetch time-series data from your project and generate an interactive chart showing error trends.

Responding to alerts in Slack and GitHub

Honeybadger includes full backtraces in Slack error notifications to provide the context that AI coding assistants need for effective debugging.

The backtrace appears as formatted code in Slack, allowing you to copy and paste it to AI debugging assistants like Cursor, Windsurf, or Copilot. If you use Cursor's Background Agents, you can install their Slack integration to ask Cursor to fix the error directly from Slack:

[Screenshot: Slack notification showing a Honeybadger error and integration with AI debugging tools for quick issue resolution.]

We include similar information when creating issues for errors in GitHub and other issue trackers, which should help you assign bugfixes to GitHub Copilot, for example.

See our integration docs to learn more about our third-party integrations.

Going further

As we continue to develop LLM integrations, we're exploring ways to make automated monitoring and debugging more intelligent.

Do you have ideas for how LLMs could improve your Honeybadger experience? We'd love to hear from you! Drop us a line at support@honeybadger.io.

Additional resources