Hi, Franck here 🎤
As a solo developer, I write a lot of code alone. The problem? I’m terrible at reviewing my own work. I always think “it looks good enough”… until I spot a dumb bug two days later.
I got tired of shipping code with blind spots. So I built a small open-source tool:
llm-review-framework
LLM-powered code review as a pre-commit hook. Works with any LLM provider, any project.
Install
curl -fsSL https://raw.githubusercontent.com/francklebas/llm-review-framework/main/install.sh | bash
Setup a project
cd your-project
llmfwk init
The interactive setup will:
- Ask which LLM provider to use (Gemini, Ollama, Claude, OpenAI, Mistral, Groq, GitHub Copilot, or custom)
- Check if the provider CLI is installed (and offer to install it)
- Let you pick a model
- Check if pre-commit is installed (and offer to install it)
- Optionally create project-specific prompts from templates
- Write `.llm-review.yaml` and `.pre-commit-config.yaml`
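For a rough idea of what ends up in the config, here is a hypothetical `.llm-review.yaml`. The field names are illustrative only; the actual schema is whatever `llmfwk init` writes for you:

```yaml
# Hypothetical .llm-review.yaml (illustrative field names, not the tool's real schema)
provider: ollama               # gemini | ollama | claude | openai | mistral | groq | copilot | custom
model: llama3
prompts: .llm-review/prompts   # optional project-specific prompts
```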
How it works
On every commit, the llm-codereview hook:
- Reads your `.llm-review.yaml` config
- Gets the staged diff
- Loads review prompts (project-specific or base defaults)
- Sends everything to your LLM provider
- Outputs a markdown code review
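Conceptually, that pipeline can be sketched in a few lines of shell. This is an illustration of the flow above, not the tool's actual implementation; `LLM_CMD` stands in for whichever provider CLI you configured (e.g. `ollama run llama3`) and defaults to `cat` so the sketch runs anywhere:

```shell
#!/bin/sh
# Sketch of the review pipeline (illustrative, not llm-codereview's real code).
LLM_CMD="${LLM_CMD:-cat}"   # your provider CLI would go here, e.g. "ollama run llama3"

build_review_request() {
  # 1. Load a review prompt (a built-in default stands in here).
  printf 'Review this diff for bugs, security issues, and style problems.\n\n'
  # 2. Append the staged diff (empty when nothing is staged or outside a repo).
  git diff --cached 2>/dev/null || true
}

# 3. Send everything to the LLM and print its markdown review.
build_review_request | $LLM_CMD
```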
The output is readable by you and consumable by the LLM in your editor (Claude Code, opencode, Cursor, etc.).
Prompts
Base prompts (built-in)
If you skip project-specific prompts during llmfwk…
It turns any LLM into a pre-commit code reviewer that runs automatically on every git commit.
Why I Needed This
In a team, you open a PR and get feedback.
As a solo dev, you’re on your own. Self-review checklists quickly become wishful thinking.
I wanted something simple that:
- Gives honest, consistent feedback
- Runs locally or with any provider
- Doesn’t slow me down
- Doesn’t assume my code is perfect (spoiler: it never is)
How It Works (Really Simple)
- Install it once:
curl -fsSL https://raw.githubusercontent.com/francklebas/llm-review-framework/main/install.sh | bash
- In any project, run:
llmfwk init
A quick interactive wizard asks:
Which LLM do you want? (Ollama, Gemini, Claude, Groq, OpenAI, Mistral, Copilot, or custom)
Which model?
It sets up the pre-commit hook for you! 🎉
And... that’s it.
From now on, every commit triggers a review. The tool grabs your staged changes, sends them to your LLM with solid prompts (SOLID, security, anti-patterns, error handling, etc.), and shows you a clean Markdown review in the terminal.
Here’s an example of a generated review:
### Review of `auth.py`
- **Security**: The hardcoded API key on line 42 should be moved to an environment variable.
- **Style**: Function `login_user` is too long, consider refactoring.
If something looks off — you fix it and commit again.
Make It Understand Your Project
You can add two files to give it project context (the project context lives in the parent directory):
codereview-context.md → your stack, architecture choices, naming rules
codereview-guidelines.md → what you care about most in this project
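As an illustration, a minimal `codereview-context.md` could look like this (the contents are entirely up to you and your project):

```markdown
<!-- codereview-context.md: example content, adapt to your project -->
## Stack
- Python 3.12, FastAPI, PostgreSQL via SQLAlchemy

## Conventions
- snake_case everywhere, type hints required
- No business logic in route handlers
```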
The LLM then reviews your code like someone who actually knows your codebase.
Why I Like It
- Works completely locally with Ollama → zero cost, zero data leaks
- Super fast with Groq when I’m in a hurry
- Fully customizable
- AGPL-3.0 and 100% FOSS
It’s not magic, and it won’t replace human review when you have teammates. But for solo work, it’s a game-changer: I now commit with much more confidence.
Try It
Go to the repo:
Try llm-review-framework on GitHub 🚀
Run the install, do llmfwk init in one of your projects, and make a small change + commit. You’ll see the difference immediately.
I’d love to hear what you think, especially if you’re also a solo dev fighting the same problem.
What’s your current trick to avoid shipping "mediocre" code alone?
Happy coding (and reviewed commits)! 🚀😁
Top comments (1)
This is a useful layer for reducing solo dev blind spots.
Running LLM review at pre-commit improves consistency, but it is still advisory.
The gap is execution-time governance.
An LLM can suggest changes, but it does not enforce a Decision Boundary before code is committed.
Without that, the system generates feedback but still allows drift to accumulate.
This is the difference between review and control.
Pre-commit review increases awareness.
Execution-time governance enforces what is allowed to proceed.