As I work toward transitioning into AI engineering, I wanted to build something practical that combined my software engineering background with what I’ve been learning about large language models. The result is AI Code Reviewer, a web app that uses Claude to analyze code and provide structured feedback.
What it does
You paste code directly or provide a GitHub file URL, and the app sends it to Claude for review. The response includes a quality score, key strengths, critical issues, code quality concerns, performance notes, and actionable recommendations. It supports many languages, including Python, JavaScript, Java, and Swift.
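The post doesn't show the fetch logic, but one simple way to handle the GitHub URL path is to rewrite a `github.com` file URL into its `raw.githubusercontent.com` equivalent before downloading. This is a minimal sketch under that assumption; the helper name `to_raw_url` is mine, not from the project:

```python
from urllib.parse import urlparse

def to_raw_url(github_url: str) -> str:
    """Convert a github.com file URL to its raw-content equivalent.

    e.g. https://github.com/owner/repo/blob/main/app.py
      -> https://raw.githubusercontent.com/owner/repo/main/app.py
    """
    parts = urlparse(github_url)
    if parts.hostname != "github.com":
        raise ValueError("not a github.com URL")
    # Path looks like /owner/repo/blob/<branch>/<path...>
    owner, repo, kind, *rest = parts.path.strip("/").split("/")
    if kind != "blob" or not rest:
        raise ValueError("expected a file URL like /owner/repo/blob/branch/path")
    return f"https://raw.githubusercontent.com/{owner}/{repo}/{'/'.join(rest)}"
```

The raw URL can then be fetched with any HTTP client and the response body treated the same as pasted code.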
How it’s built
The backend is built with FastAPI, a modern Python web framework. The frontend uses Jinja2 templates styled with Tailwind CSS. The AI layer calls the Anthropic API, using a Claude Sonnet model.
A few things I was deliberate about: validating GitHub URLs to prevent SSRF attacks, handling token limits gracefully, and structuring the Claude prompt to return consistent, readable output.
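The SSRF guard and token handling described above can be sketched roughly like this. The allow-list, the character budget, and both function names are illustrative assumptions; a production version would count tokens rather than characters:

```python
from urllib.parse import urlparse

ALLOWED_HOSTS = {"github.com", "raw.githubusercontent.com"}
MAX_CHARS = 40_000  # crude stand-in for a real token budget

def validate_github_url(url: str) -> str:
    """Reject anything that isn't an https URL on an allowed GitHub host,
    so the server never fetches attacker-chosen internal addresses (SSRF)."""
    parts = urlparse(url)
    if parts.scheme != "https":
        raise ValueError("only https URLs are allowed")
    # .hostname strips userinfo and ports, so tricks like
    # https://github.com@evil.com/ resolve to the real host "evil.com".
    if parts.hostname not in ALLOWED_HOSTS:
        raise ValueError("only GitHub file URLs are allowed")
    return url

def truncate_for_model(code: str, max_chars: int = MAX_CHARS) -> str:
    """Trim oversized files instead of failing the whole request."""
    if len(code) <= max_chars:
        return code
    return code[:max_chars] + "\n# ... truncated for length ..."
```

Checking `hostname` against an allow-list (rather than substring-matching the URL) is the part that actually closes the SSRF hole.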
Why I built this
In my current role at Scale AI, I evaluate AI-generated code every day: identifying failure patterns, assessing quality, and writing structured feedback. This project was a natural extension of that work. I wanted to see what it would look like to automate a version of that review process using Claude.
It’s a prototype, not a production system, but it was a great exercise in working with LLM APIs, engineering prompts, and building an end-to-end Python web app.
Check out the code on GitHub.