Comment by anougaret
we don't do the LLM part per se
we instrument your code automatically, which is a compiler-like approach under the hood, then we aggregate the traces
this lets us context-engineer the most exhaustive & informative prompt for LLMs to debug with
now if they still fail to debug, at least we gave them everything they should have needed
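For a rough picture, here is a minimal sketch of that idea in plain Python. It uses the standard `sys.settrace` hook as a stand-in for the actual compiler-like rewriting (which isn't described here) and aggregates call/return events into the kind of trace you could fold into a prompt:

```python
import sys

# Collected trace events: (event, function, line). An illustrative stand-in
# for whatever trace format the real tool aggregates.
TRACE = []

def tracer(frame, event, arg):
    """sys.settrace hook: record every call/return; returning itself
    keeps tracing active inside nested frames."""
    if event in ("call", "return"):
        TRACE.append((event, frame.f_code.co_name, frame.f_lineno))
    return tracer

def buggy(x):
    return x // (x - 3)  # raises ZeroDivisionError when x == 3

sys.settrace(tracer)
try:
    buggy(3)
except ZeroDivisionError:
    pass
finally:
    sys.settrace(None)

print(TRACE)  # the aggregated trace you'd fold into an LLM debugging prompt
```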
Okay, this sounds better. But aren't there other continuous debuggers out there? It doesn't seem hard to roll my own: I can definitely get vim to run pdb in a buffer every time I save my file (or whatever condition), roughly like the sketch below. But this does seem quite expensive for minimal benefit. Usually people turn to print statements because it's easier than the debugger. Is it iterative, so you don't do the full trace and only roll back the stack to where the failure occurs? That's much more complex.
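A minimal version of that DIY setup, assuming a single-file script named my_script.py (the filename and the polling approach are assumptions, not anyone's product): it re-runs the file on every save and drops into pdb post-mortem when it crashes.

```python
import pdb
import sys
import time
import traceback
from pathlib import Path

TARGET = Path("my_script.py")  # hypothetical script under edit

def run_once():
    """Run the target file; on any crash, open pdb at the failing frame."""
    try:
        code = compile(TARGET.read_text(), str(TARGET), "exec")
        exec(code, {"__name__": "__main__"})
    except Exception:
        traceback.print_exc()
        pdb.post_mortem(sys.exc_info()[2])

def watch(poll=0.5):
    """Poll the file's mtime and re-run it whenever it changes, i.e. on save."""
    last = 0.0
    while True:
        mtime = TARGET.stat().st_mtime
        if mtime != last:
            last = mtime
            run_once()
        time.sleep(poll)

if __name__ == "__main__":
    watch()
```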
And critically, why are you holding my code for 48 hrs? Why is anything leaving my machine at all?
valid concerns of course
- we are planning a hosted AI debugging feature that can aggregate multiple traces & code snippets from different related codebases and feed it all into one LLM prompt (sketched after this list); that benefits a lot from having it all centralized on our servers
- for now the rewriting algorithms are quite unstable, and it helps me debug them to have the failing code files in sight
- we only store your code for 48 hours, as I assume it's completely unnecessary to store it for longer
- a self-hosted version will be released for users who can't accept this, for valid reasons
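The aggregation from the first bullet, as a rough sketch only (the `Trace` fields and the prompt shape are assumptions, not the actual service):

```python
from dataclasses import dataclass

@dataclass
class Trace:
    """One failing run from one codebase; fields are illustrative."""
    repo: str
    snippet: str  # the code around the failure
    events: str   # the aggregated instrumentation trace

def build_debug_prompt(traces):
    """Merge related traces & snippets from several codebases into one prompt."""
    parts = ["You are debugging a failure that spans multiple services.\n"]
    for t in traces:
        parts.append(f"### repo: {t.repo}\ncode:\n{t.snippet}\ntrace:\n{t.events}\n")
    parts.append("Explain the root cause and propose a fix.")
    return "\n".join(parts)

print(build_debug_prompt([
    Trace("api-server", "def handler(req): ...", "handler -> db.query -> TimeoutError"),
    Trace("db-client", "def query(sql): ...", "query -> socket timeout after 30s"),
]))
```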
Are you saying that LLMs will generate shitty code and then you fix that by using your LLM? That seems... inconsistent...