Sentry, the well-known tool for monitoring and debugging production code, has added another capability to its lineup. The company recently launched a new feature called AI Autofix, which leverages the contextual data Sentry collects from a company’s production environment to suggest fixes for errors as they occur.
Don’t let the name fool you: AI Autofix is not a fully automated system. Instead, it is a human-in-the-loop tool that stands ready to help developers in times of need. To quote the company, it’s “like having a junior developer ready to help on-demand.”
“Rather than thinking about the performance of your application — or your errors — from a system infrastructure perspective, we’re really trying to focus on evaluating it and helping you solve problems from a code-level perspective,” explained Sentry engineering manager Tillman Elser when asked about where this new feature fits into the company’s product lineup.
Elser went on to highlight the unique value proposition of AI Autofix compared to other AI-based coding tools. While these other tools excel at auto-completing code in an IDE, they lack access to a company’s production environment. This means they cannot proactively identify and address issues. With Autofix, however, developers can speed up the process of triaging and resolving errors because the tool has knowledge of the context in which the code is running. As Elser puts it, “We’re trying to solve problems in production as fast as possible. We’re not trying to make you a faster developer when you’re building your application.”
Leveraging an agent-based architecture, Autofix constantly monitors for errors and uses its discovery agent to evaluate whether a code change could resolve the issue; if it can’t, Autofix provides an explanation instead. It’s worth noting that developers remain fully involved in this process. For instance, they can add context for the AI agents if they already have an idea of what the problem may be. Alternatively, they can simply hit the “gimme fix” button and see what the AI comes up with.
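Sentry hasn’t published Autofix’s internals, but a minimal sketch of that human-in-the-loop shape might look like the following. Every name here is hypothetical, and the “agent” is a placeholder for an LLM call armed with production context:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Issue:
    title: str
    stack_trace: str
    developer_hint: Optional[str] = None  # optional context a human can supply

def discovery_agent(issue: Issue) -> Optional[str]:
    """Hypothetical stand-in for Autofix's discovery step: decide whether
    a code change could plausibly resolve the issue."""
    if not issue.stack_trace:
        return None  # not enough context to propose a code-level fix
    hint = f" (using hint: {issue.developer_hint})" if issue.developer_hint else ""
    return f"Fix plan for '{issue.title}'{hint}"

def autofix(issue: Issue) -> str:
    """Human-in-the-loop entry point, triggered when a developer asks for a fix."""
    plan = discovery_agent(issue)
    if plan is None:
        # Mirrors the behavior described above: if no code change can
        # resolve the issue, return an explanation instead of a fix.
        return "No code-level fix applies; see explanation."
    return plan

# A developer can pass along a hunch before asking for a fix...
print(autofix(Issue("TypeError in checkout", "trace...", "cart may be None")))
# ...or just press the button and see what comes back.
print(autofix(Issue("TypeError in checkout", "trace...")))
```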
The AI goes through multiple steps to assess the issue and creates an action plan to fix it. As part of this process, Autofix generates a diff that outlines the changes and, once the developer signs off, creates a pull request to merge them.
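That final step, opening the pull request, maps onto GitHub’s standard pulls endpoint. Sentry hasn’t documented exactly how Autofix performs it, so treat the helper below as an illustrative sketch: the endpoint and request fields are GitHub’s real REST API, while the function name and arguments are assumptions.

```python
import requests

def open_fix_pr(owner: str, repo: str, branch: str, plan: str, token: str) -> str:
    """Open a pull request for a generated fix via GitHub's REST API.
    Assumes the fix has already been committed to `branch`."""
    resp = requests.post(
        f"https://api.github.com/repos/{owner}/{repo}/pulls",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
        json={
            "title": "Autofix: proposed fix",
            "head": branch,  # branch holding the generated change
            "base": "main",
            "body": plan,    # the diff summary / action plan, for human review
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["html_url"]  # link the developer reviews before merging
```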
Autofix boasts support for all major programming languages, although Elser admits that the team primarily tested it with JavaScript and Python code. Of course, it won’t always get everything right. As Elser explains, the most common failure case occurs when the AI lacks context — perhaps because the team did not set up enough instrumentation to collect the required data for Autofix to work with.
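That failure mode is worth underscoring: a tool like Autofix can only reason about the context it can see, so baseline instrumentation matters. In Python, for example, Sentry’s real SDK is initialized like this (the DSN below is a placeholder):

```python
import sentry_sdk

sentry_sdk.init(
    # Placeholder DSN; use the one from your Sentry project settings.
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",
    # Capture performance traces as well as errors, so tools that
    # consume Sentry data have richer context to work with.
    traces_sample_rate=1.0,
    # Keep personally identifiable information out of events by default.
    send_default_pii=False,
)
```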
It’s worth mentioning that while Sentry is working on its own models, it currently relies on third-party models from companies like OpenAI and Anthropic. This means users must opt in and allow their data to be shared with these third-party services in order to use Autofix. Elser says the company may revisit this approach in the future, potentially offering an internally developed LLM fine-tuned on Sentry’s own data.