Show HN: AI that edits your files directly, no approvals
2 points by acro-v 9 hours ago
Hey HN, I'm building Aye Chat (https://github.com/acrotron/aye-chat), an open source AI-powered terminal workspace that lets you edit files, run shell commands, and ask AI to modify your codebase directly, all in one REPL session.
I built this because I got tired of the "suggest -> review -> approve" loop in existing AI coding tools. As models generate correct code more often than not, manual approval started to feel unnecessary, as long as there is a strong safety net that makes it easy to roll back changes.
Aye Chat applies changes automatically, but every AI edit is snapshotted locally, so you can instantly undo any change with a single command. Automatic file updates backed by a safety net are the core idea.
In the same session, you can run shell commands, open Vim, and ask the AI to modify your code.
It supports multiple models via OpenRouter, direct OpenAI API usage with your key, and also includes an offline local model (Qwen2.5 Coder 7B).
You can watch a ~1-minute demo here: https://youtu.be/i-vGI6-kP4c
Basically, the typical workflow goes like this (instead of a chat window, you stay in your terminal):
$ aye chat # starts the session
> fix the bug in server.py
Fixed undefined variable on line 42
> vim server.py
[opens real Vim, returns to chat after]
> refactor: make it async
Updated server.py with async/await
> pytest
Tests fail
> restore
Reverted last changes
I use Aye Chat both in my work projects and to build Aye Chat itself. Recently, I used it to implement a local vector search engine in just a few days.
Lower-level technical details that went into the tool:
The snapshot engine is a Python-based implementation that serves as a lightweight version control layer.
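The engine itself isn't shown in the post, but a minimal sketch of the idea might look like this (the .aye/snapshots path is a hypothetical storage location, and a flat project layout is assumed for brevity):

    # Minimal sketch of a snapshot layer: copy each file aside before an AI
    # edit, keep an ordered pile of snapshots, and restore the newest one.
    # Not Aye Chat's actual code; names and layout are assumptions.
    import shutil, time
    from pathlib import Path

    SNAP_DIR = Path(".aye/snapshots")  # hypothetical location

    def snapshot(path: str) -> None:
        """Copy `path` into the snapshot dir, tagged with a timestamp."""
        SNAP_DIR.mkdir(parents=True, exist_ok=True)
        shutil.copy2(path, SNAP_DIR / f"{time.time_ns()}__{Path(path).name}")

    def restore_last() -> None:
        """Undo the most recent snapshotted edit (the `restore` command)."""
        snaps = sorted(SNAP_DIR.iterdir())
        if snaps:
            last = snaps[-1]
            shutil.copy2(last, last.name.split("__", 1)[1])
            last.unlink()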
For retrieval, we intentionally avoided PyTorch to keep installs lightweight. Instead, we use ChromaDB with ONNXMiniLM-L6_V2 running on onnxruntime.
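That stack maps onto Chroma's bundled ONNX embedding function; a minimal sketch of such a retrieval setup (the index path, collection name, and documents are made up):

    # Sketch: local vector search with ChromaDB's bundled ONNX MiniLM model
    # (all-MiniLM-L6-v2 on onnxruntime), so no PyTorch dependency is pulled in.
    import chromadb
    from chromadb.utils.embedding_functions import ONNXMiniLM_L6_V2

    client = chromadb.PersistentClient(path=".aye/index")  # hypothetical path
    chunks = client.get_or_create_collection(
        name="code_chunks",
        embedding_function=ONNXMiniLM_L6_V2(),
    )

    # Index a couple of made-up code chunks...
    chunks.add(
        ids=["server.py:handle_request", "server.py:main"],
        documents=["def handle_request(req): ...", "def main(): ..."],
    )

    # ...and pull back the ones most relevant to a prompt.
    hits = chunks.query(query_texts=["where is the request handler?"], n_results=2)
    print(hits["ids"])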
File indexing runs in the background using a fast coarse pass followed by AST-based refinement.
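The indexing code isn't in the post either; one plausible reading of "coarse pass followed by AST-based refinement" is a cheap text scan to narrow the file set, then Python's ast module for exact definitions:

    # Sketch of a two-pass indexer: a raw substring scan narrows the file
    # set, then the stdlib ast module extracts precise definitions.
    # This is a guess at the shape of the approach, not the actual code.
    import ast
    from pathlib import Path

    def coarse_pass(root: str, needle: str) -> list[Path]:
        """Fast pass: keep only files whose raw text mentions the needle."""
        return [p for p in Path(root).rglob("*.py")
                if needle in p.read_text(errors="ignore")]

    def refine(path: Path) -> list[tuple[str, int]]:
        """Slow pass: parse and list (name, line) of every definition."""
        tree = ast.parse(path.read_text(errors="ignore"))
        return [(n.name, n.lineno) for n in ast.walk(tree)
                if isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef))]

    for f in coarse_pass(".", "handle_request"):
        print(f, refine(f))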
What I learned:
The key realization was that the bottleneck in AI coding is often the interface, not the model.
I also learned that early users are reluctant to trust a custom snapshot engine, so to make it production-grade we are now integrating it with git refs.
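One way that integration can work, sketched with git plumbing (the refs/aye/ namespace is an assumption, not our actual scheme): record each snapshot as a dangling commit and pin it under a custom ref, so it never touches HEAD or the working branch.

    # Sketch: "git stash create" builds a commit of the working tree without
    # moving HEAD; "git update-ref" pins it under refs/aye/ so gc keeps it.
    import subprocess, time

    def git(*args: str) -> str:
        return subprocess.run(["git", *args], check=True,
                              capture_output=True, text=True).stdout.strip()

    def snapshot() -> str:
        sha = git("stash", "create", "aye snapshot") or git("rev-parse", "HEAD")
        git("update-ref", f"refs/aye/snap-{time.time_ns()}", sha)
        return sha

    def restore(sha: str) -> None:
        # Check the snapshot's files back out over the working tree.
        git("checkout", sha, "--", ".")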
What I'd love feedback on:
- Does the snapshot safety net give you enough confidence to let the AI write files directly, or does it still feel too risky?
- Shell integration: does the ability to execute native commands and prompt the AI from a unified terminal interface solve the context-switching problem for you?
There is a 1-line quick install:
pip install ayechat
A Homebrew formula and a Windows installer are also available.
It's early days, but Aye Chat is working well and is legitimately the tool I reach for first when I want to iterate faster. I would love to get your feedback. Feel free to hop into the Discord (https://discord.gg/ZexraQYH77) and let me know how it goes. If you find it interesting, a repo star would mean a lot!
Instead of implementing your own command loop, why not just have a command that prompts the LLM and uses the result to edit and commit a file (using hard-coded logic for the commit rather than parsing a tool request)? Then you're always in "stay at the terminal" mode anyway, and you can use Git normally to "restore". If you want a loop anyway to reuse a process (and avoid persisting context between queries, or whatever), experienced terminal users can just background it. For me, everything aside from the actual prompt interface here is window dressing.
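For concreteness, the one-shot command described above could be as small as this (the model name, prompt format, and choice of the OpenAI client are all placeholders):

    # Sketch of the alternative: no REPL, just one command that prompts an
    # LLM, writes the result over the file, and commits with hard-coded git
    # logic, so plain git is the whole "restore" story.
    import subprocess, sys
    from openai import OpenAI  # any LLM client would do

    def edit(path: str, instruction: str) -> None:
        source = open(path).read()
        resp = OpenAI().chat.completions.create(
            model="gpt-4o-mini",  # placeholder model
            messages=[{"role": "user", "content":
                       f"{instruction}\n\n{source}\n\nReturn only the full revised file."}],
        )
        open(path, "w").write(resp.choices[0].message.content)
        subprocess.run(["git", "add", path], check=True)
        subprocess.run(["git", "commit", "-m", f"ai: {instruction}"], check=True)

    if __name__ == "__main__":
        edit(sys.argv[1], sys.argv[2])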
Anyway, safety for these tools is only minimally about having backups. The much more important part is the amount of agency extended to the tool. The LLM itself can only generate instructions; it's up to the rest of the agent to actually follow them. So it really all depends on which commands you recognize (is there anything beyond a "write text X to file Y" command?) and what is hard-coded (e.g. invoking `git commit` after each write).
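That agency boundary is easy to make concrete with an allow-list dispatch: the model only proposes actions, the agent refuses anything outside a fixed table, and the commit is hard-coded rather than model-visible (the action names here are invented):

    # Sketch: the LLM output is parsed into a proposal dict; nothing outside
    # this hard-coded table ever executes, and the commit is not a command
    # the model can issue.
    import subprocess

    def write_file(path: str, text: str) -> None:
        open(path, "w").write(text)
        subprocess.run(["git", "add", path], check=True)
        subprocess.run(["git", "commit", "-m", f"ai edit: {path}"], check=True)

    ALLOWED_ACTIONS = {"write_file": write_file}  # the only mutation allowed

    def execute(proposal: dict) -> None:
        handler = ALLOWED_ACTIONS.get(proposal.get("action"))
        if handler is None:
            raise ValueError(f"refusing unrecognized action: {proposal!r}")
        handler(proposal["path"], proposal["text"])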