can produce an incredible amount of content in a short span, whether that's building new features, reviewing production logs, or fixing bug reports.
The bottleneck in software engineering and data science has shifted from writing code to reviewing what coding agents build. In this article, I discuss how I efficiently review Claude Code's output to become an even more effective engineer.
This infographic highlights the main topic of this article: how to review the output of coding agents more efficiently, and thereby become an even more efficient engineer. Image by ChatGPT.
Why optimize output reviewing
You might wonder why you need to optimize reviewing code and output. Just a few years ago, the biggest bottleneck (by far) was writing code to produce results. Now, however, we can produce code by simply prompting a coding agent like Claude Code.
Producing code is simply not the bottleneck anymore
Thus, since engineers are always striving to identify and minimize bottlenecks, we move on to the next bottleneck, which is reviewing the output of Claude Code.
Of course, we need to review the code it produces through pull requests. However, there is so much more output to review if you’re using Claude Code to solve all possible tasks (which you definitely should be doing). You need to review:
- The report Claude Code generated
- The errors Claude Code found in your production logs
- The emails Claude Code made for your outreach
You should be trying to use coding agents for absolutely every task you do: not only programming tasks, but also your commercial work, presentations, log reviews, and everything in between. Thus, we need special techniques to review all this content faster.
In the next section, I'll cover some of the techniques I use to review the output of Claude Code.
Techniques to review output
The review technique I use varies by task, but I’ll cover specific examples in the following subsections. I’ll keep it as specific as possible to my exact use cases, and then you can attempt to generalize this to your own tasks.
Reviewing code
Obviously, reviewing code is one of the most common tasks you do as an engineer, especially now that coding agents have become so quick and efficient at producing code.
To more effectively perform code reviews, I’ve done two main things:
- Set up a custom code review skill that has a full overview of how to efficiently perform a code review, what to look for and so on.
- Have an OpenClaw agent automatically run this skill whenever I’m tagged in a pull request.
Thus, whenever someone tags me in a pull request, my agent automatically runs this skill, sends me a message with the resulting code review, and proposes to post that review to GitHub. All I need to do is look at the summary of the pull request and, if I agree, press send on the proposed review. This uncovers a lot of issues that could otherwise have made it to production undetected.
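As a rough illustration, a custom skill in Claude Code is a folder containing a `SKILL.md` file with a short frontmatter block and instructions. Everything below, including the skill name and the checklist items, is a hypothetical sketch rather than my actual skill:

```markdown
---
name: code-review
description: Perform a structured review of a pull request
---

When asked to review a pull request:
1. Summarize what the PR changes and why.
2. Check correctness: edge cases, error handling, race conditions.
3. Check style and consistency with the surrounding codebase.
4. Flag anything touching security, auth, or data migrations.
5. End with a short verdict: approve, request changes, or needs discussion.
```

The value of writing this down as a skill is that every review follows the same checklist, instead of depending on whatever the agent decides to look at that day.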
This is probably the most valuable and time-saving reviewing technique I use, and I would argue efficient code review is one of the most important things companies can focus on right now to increase speed, given how much more code coding agents produce.
Reviewing generated emails
This image shows some example emails (not real data) previewed in HTML, which makes it very efficient to analyze the output my coding agent produced and quickly give it feedback. To make the feedback loop even faster, I use Superwhisper to record my voice while looking through the emails and transcribe the feedback directly into Claude Code. Image by the author.
Another common task is generating emails, either for a cold outreach tool or as replies to people. I often want to review these emails with their formatting intact, for example when they contain links or bold text.
Reviewing them in a text-only interface such as Slack is not ideal: it clutters the Slack channel, and Slack doesn't always render the formatting correctly.
Thus, one of the most efficient ways of reviewing generated emails and in general formatted text I’ve found is to ask Claude Code to generate an HTML file and open it in your browser.
Claude Code can generate formatted content incredibly quickly, making it easy to review. The HTML view can show not just the formatted emails but also which person receives which email, and if you're sending email sequences, those are easy to lay out as well.
Using HTML to review outputs is one of the secret hacks that saves me hours of time every week.
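The workflow above can be sketched in a few lines of Python. The emails, file name, and layout here are hypothetical examples of the kind of file I'd ask Claude Code to generate; in practice the agent writes this itself:

```python
import html
import webbrowser
from pathlib import Path

# Hypothetical example data; in practice the coding agent generates the emails.
emails = [
    {"recipient": "jane@example.com", "subject": "Quick question",
     "body": "Hi Jane,<br><b>Short</b> follow-up on our chat."},
    {"recipient": "sam@example.com", "subject": "Intro",
     "body": "Hi Sam,<br>Here is the <a href='https://example.com'>link</a>."},
]

def render_preview(emails, out_path="email_preview.html"):
    """Write one HTML card per email so links and bold text render exactly
    as the recipient will see them."""
    cards = "\n".join(
        f"<div style='border:1px solid #ccc;margin:1em;padding:1em'>"
        f"<p><b>To:</b> {html.escape(e['recipient'])} | "
        f"<b>Subject:</b> {html.escape(e['subject'])}</p>"
        f"<hr>{e['body']}</div>"
        for e in emails
    )
    Path(out_path).write_text(f"<html><body>{cards}</body></html>", encoding="utf-8")
    return out_path

path = render_preview(emails)
# Pop a browser tab so the review can start immediately.
webbrowser.open(Path(path).resolve().as_uri())
```

Each email becomes a card showing recipient, subject, and the rendered body, which is exactly the overview that's hard to get in a text-only channel.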
Reviewing production log reports
Another very common task I use Claude Code for is reviewing production log reports. I typically run a daily query that analyzes production logs, looking for errors, warnings, and anything else I should be aware of.
This is incredibly useful because reporting services that send alerts on errors are often very noisy, and you end up getting a lot of false alerts.
Thus, instead, I prefer to have a daily report sent to me, which I can then analyze. This report is sent with an OpenClaw agent, but the way I preview the results is incredibly important, and this is where HTML file formatting comes in again.
When reviewing these production logs, there is a lot of information: the different error messages, the number of times each one occurred, and the IDs associated with each error message, which you also want to display simply. All of this is difficult to present well in plain text, in Slack for example, but it is very pleasant to preview in an HTML file.
Thus, after my agent has reviewed the production logs, I ask it to present its report as an HTML file, which makes it easy to review all the output and quickly get an overview of what's important and what I can skip.
Another pro tip: don't just generate the HTML file, but also ask Claude Code to open it in your browser, which it does automatically. You quickly get an overview, and you're effectively notified the moment the agent is done, because a new tab with the generated report pops up on your screen.
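A minimal sketch of this kind of report, assuming hypothetical log lines of the form `LEVEL [request-id] message` (the real query, log format, and file name will differ), groups messages, counts occurrences, collects the IDs per message, and renders everything as an HTML table:

```python
import html
import webbrowser
from collections import Counter
from pathlib import Path

# Hypothetical log lines; in practice these come from a daily production query.
log_lines = [
    "ERROR [req-101] TimeoutError: upstream call exceeded 30s",
    "ERROR [req-102] TimeoutError: upstream call exceeded 30s",
    "WARNING [req-103] Retrying payment webhook",
    "ERROR [req-104] KeyError: 'user_id'",
]

def build_report(lines, out_path="log_report.html"):
    """Group identical messages, count them, and track request IDs per message."""
    counts, ids = Counter(), {}
    for line in lines:
        level, req_id, message = line.split(" ", 2)
        key = (level, message)
        counts[key] += 1
        ids.setdefault(key, []).append(req_id.strip("[]"))
    rows = "".join(
        f"<tr><td>{lvl}</td><td>{html.escape(msg)}</td><td>{n}</td>"
        f"<td>{', '.join(ids[(lvl, msg)])}</td></tr>"
        for (lvl, msg), n in counts.most_common()  # most frequent first
    )
    table = ("<table border='1'><tr><th>Level</th><th>Message</th>"
             f"<th>Count</th><th>Request IDs</th></tr>{rows}</table>")
    Path(out_path).write_text(
        f"<html><body><h1>Daily log report</h1>{table}</body></html>",
        encoding="utf-8")
    return out_path

# Opening the report doubles as a "the agent is done" notification.
webbrowser.open(Path(build_report(log_lines)).resolve().as_uri())
```

The table puts message, count, and related IDs side by side, which is exactly the layout that plain-text channels struggle with.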
Conclusion
In this article, I've covered some of the specific techniques I use to review Claude Code output. I discussed why optimizing output review matters, highlighting how the bottleneck in software engineering has shifted from producing code to analyzing its results. Since reviewing is now the bottleneck, we want to make it as efficient as possible. I walked through the different use cases I use Claude Code for and how I efficiently analyze the results. Improving the way you analyze the output of your coding agents will only become more important going forward, so I urge you to spend time optimizing this process and thinking about how to make your own reviews more efficient. I've covered the techniques I use day to day, but there are many others, and your own set of tasks will likely require techniques that differ from mine.
👉 My free eBook and Webinar:
🚀 10x Your Engineering with LLMs (Free 3-Day Email Course)
📚 Get my free Vision Language Models ebook
💻 My webinar on Vision Language Models
👉 Find me on socials:
💌 Substack
🐦 X / Twitter

