Comment by _boffin_ 19 hours ago

Not sure this counts as "successful" yet (invite-only beta, still rough), but I'm building a full product almost entirely via LLM-assisted coding.

Tangents (https://tangents.chat) is an Angular/Nest/Postgres app for thinking-with-LLMs without losing the thread.

- Branch: select any span (user or assistant) and branch it into a tangent thread so the main thread stays coherent.

- Collector: collect spans across messages/threads into curated context, then prompt with it.

- Preview: inspect a "what the model will see" view before prompting, with a stored context-assembly manifest of what was actually sent.
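To make the span/manifest idea concrete, here is a minimal TypeScript sketch of what those data shapes and the preview assembly might look like. These names and fields are my illustration, not Tangents' actual schema.

```typescript
// Hypothetical data shapes, illustrating the span + manifest idea.

/** A reference to a selected span of text inside one message. */
interface SpanRef {
  threadId: string;
  messageId: string;
  role: "user" | "assistant";
  start: number; // character offset where the selection begins
  end: number;   // character offset where the selection ends
}

/** A stored record of exactly what context was assembled for a prompt. */
interface ContextManifest {
  id: string;
  createdAt: string;     // ISO timestamp
  spans: SpanRef[];      // collected spans, in the order they are sent
  tokenEstimate: number; // rough size of the assembled context
}

/** Build the "what the model will see" preview text from collected spans. */
function assemblePreview(
  spans: SpanRef[],
  lookup: (ref: SpanRef) => string // resolves a span to its text
): string {
  return spans.map((s) => `[${s.role}] ${lookup(s)}`).join("\n---\n");
}
```

Storing the manifest alongside each prompt is what makes the preview trustworthy: you can diff what the model saw across turns instead of guessing.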

Vibe-coding aspect: about 600 commits and about 120k LOC (tests included), and I have not hand-written the implementation code. I do write specs/docs/checklists, and I run tests/CI as normal.

What made it workable for something larger than a static page:

- Treat the model like a junior dev: explicit requirements plus acceptance criteria, thin slices, one change at a time.

- Keep "project truth" in versioned docs (design system plus interface spec) so the model does not drift.

- Enforce guardrails: types, lint, tests, and a strict definition of "done."

- The bottleneck is not generating code; it is preventing context/spec drift and keeping invariants stable across hundreds of changes.
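The guardrails point can be sketched as a tiny "definition of done" gate: a change counts as done only when type-check, lint, and tests all pass. The command strings below are assumptions for a typical Angular/Nest repo, not my actual CI config, and the runner is injected so the gate itself is pure.

```typescript
// Hypothetical "definition of done" gate.

type Runner = (cmd: string) => boolean; // true if the command succeeded

// Assumed commands for a typical Angular/Nest repo.
const CHECKS: Array<[string, string]> = [
  ["types", "npx tsc --noEmit"], // type-check without emitting output
  ["lint", "npx eslint ."],      // lint the whole repo
  ["tests", "npm test"],         // run the test suite
];

/** Returns the names of failed checks; an empty array means "done". */
function definitionOfDone(run: Runner): string[] {
  return CHECKS.filter(([, cmd]) => !run(cmd)).map(([name]) => name);
}
```

In practice the runner would shell out (e.g. via `child_process.execSync`) and the gate would run in CI, so the model's output never merges on vibes alone.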

If you define "vibe coding" as "I never look at the code," I do not think serious production apps fit that. But if you define it as "the LLM writes the code and you steer via specs/tests," it is possible to build something non-trivial.

Happy to answer specifics if anyone cares (workflow, tooling, what breaks first, etc.).