machtiani-chat 6 days ago

Just use codex and machtiani (mct). Both are open source; machtiani was open sourced today. Mct can find context in a haystack, and it's efficient with tokens. Its embeddings are generated locally thanks to its hybrid indexing and localization strategy. No file chunking. No internet, if you want to be hardcore. Use any inference provider, even local. The demo video shows solving an issue in the VSCode codebase (133,000 commits and over 8,000 files) with only Qwen 2.5 Coder 7B. But you can use anything you want, like Claude 3.7. I never max out context in my prompts - not even close.

https://github.com/tursomari/machtiani

asar 5 days ago

This sounds really cool. Can you explain your workflow in a bit more detail? i.e. how exactly do you work with codex to implement features, fix bugs, etc.?

  • machtiani-chat 5 days ago

    Say I'm chatting in a git project directory, `undici`. I can show you a few ways I work with codex.

    1. Follow up with Codex.

    `mct "fix bad response on h2 server" --model anthropic/claude-3.7-sonnet:thinking`

    Machtiani will stream the answer, then automatically apply any git patches suggested in the convo.

    Then I could follow up with codex.

    `codex "See unstaged git changes. Run tests to make sure it works and fix and problems with the changes if necessary."

    2. Codex and MCT together

    `codex "$(mct 'fix bad response on h2 server' --model deepseek/deepseek-r1 --mode answer-only)"`

    In this case codex will dutifully implement the changes suggested by mct, saving tokens and time.

    The key for the second example is `--mode answer-only`. Without this flag, mct will try to apply the patches itself. But here codex does it, since mct withholds the patches when given that flag.

    3. Refer codex to the chat.

    Say you did this

    `mct "fix bad response on h2 server" --model gpt-4o-mini --mode chat`

    Here, I used `--mode chat`, which tells mct to stream the answer and save the chat convo, but not to apply git changes (different from `--mode answer-only`).

    You'll see that mct prints out something like:

    `Response saved to .machtiani/chat/fix_bad_server_response.md`

    Now you can just tell codex.

    `codex "See .machtiani/chat/fix_bad_server_resonse.md, and do this or that...."`

    *Conclusion*

    The example concepts should cover day-to-day use cases. There are other exciting workflows, but I should really post a video on that. You could do anything with unix philosophy!
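
    For example (untested, just applying the unix-pipe idea with the flags shown above; the prompt and file path here are made up), you could tee an answer-only response into a notes file for later:

    `mct "explain how the h2 response is handled" --model gpt-4o-mini --mode answer-only | tee notes/h2-response.md`

    Since `--mode answer-only` just streams the answer to stdout without touching the repo, you can pipe it into anything.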

    • asar 4 days ago

      Amazing, really excited to try this out. And thanks for the time you took to write this up!

evnix 6 days ago

How does this compare to aider?

  • machtiani-chat 5 days ago

    I skipped aider, but I've heard good things. I needed to work with large, complex repos, not vibe codebases. And agents always require top-notch models, which are expensive and don't run well locally. So when Codex came out, I skipped straight to that.

    But mct leverages weak models well, doing things that aren't possible otherwise. And it does even better with stronger models: it rewards stronger models but doesn't punish smaller ones.

    So basically, you can save money and do more using mct + codex. But I hear aider is a terminal tool, so maybe try mct + aider?
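
    For example (untested, and assuming aider's `--message` flag for one-shot prompts), the same pattern as the codex example might work:

    `aider --message "$(mct 'fix bad response on h2 server' --mode answer-only)"`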