Show HN: Light like the Terminal – Meet GTK LLM Chat Front End

(github.com)

35 points by icarito 2 days ago

14 comments

Author here. I wanted to keep my conversation with #Gemini about code handy while discussing something creative with #ChatGPT and using #DeepSeek in another window. I think Electron apps are a waste of resources, so I wanted to chat with LLMs on my own terms. When I discovered the llm CLI tool I really wanted convenient, pretty-looking access to my conversations, so I wrote gtk-llm-chat - a plugin for llm that provides an applet and a simple window for interacting with LLM models.
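
For context, gtk-llm-chat sits on top of llm's Python API. A minimal sketch of the kind of streamed conversation the chat window wraps (the model name is just an example - use whatever you've configured):

    import llm

    # Grab a configured model and start a conversation that keeps context
    model = llm.get_model("gpt-4o-mini")
    conversation = model.conversation()

    # Stream the reply chunk by chunk, the way a chat UI renders it
    response = conversation.prompt("Say hello in one short sentence.")
    for chunk in response:
        print(chunk, end="", flush=True)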

Make sure you've configured llm first (https://llm.datasette.io/en/stable/).

I'd love to get feedback, PRs and who knows, perhaps a coffee! https://buymeacoffee.com/icarito

guessmyname 2 days ago

It'd be better if it were written in C or at least Vala. With Python, you have to wait a couple hundred milliseconds for the interpreter to start, which makes it feel less native than it could. That said, the latency of the LLM responses is much higher than that of the UI, so I guess the slowness of Python doesn't matter.

  • icarito 2 days ago

    Yeah, I agree - I've been thinking about using Rust. But ultimately it's also a GTK3 vs GTK4 problem: if we could reuse the Python interpreter from the applet, that would speed things up, but GTK4 has no support for AppIndicator icons(!).

    I've been pondering whether to backport to GTK3 for this sole purpose. I find that after the initial startup delay, the app's speed is okay...

    Porting to Rust is not really planned, because I'd lose the llm Python base - but it's still something that triggers my curiosity.
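
    For anyone curious, the GTK3 route would look roughly like this - a minimal tray-applet sketch using the Ayatana AppIndicator bindings (the icon and menu contents are illustrative, and it assumes libayatana-appindicator and its GObject introspection data are installed):

        import gi
        gi.require_version("Gtk", "3.0")
        gi.require_version("AyatanaAppIndicator3", "0.1")
        from gi.repository import Gtk, AyatanaAppIndicator3 as AppIndicator

        # A minimal menu; a real applet would list recent conversations here
        menu = Gtk.Menu()
        item = Gtk.MenuItem(label="New conversation")
        item.connect("activate", lambda *_: print("open a chat window"))
        menu.append(item)
        menu.show_all()

        indicator = AppIndicator.Indicator.new(
            "gtk-llm-chat-demo",   # unique id for the indicator
            "chat-message-new",    # themed icon name
            AppIndicator.IndicatorCategory.APPLICATION_STATUS,
        )
        indicator.set_status(AppIndicator.IndicatorStatus.ACTIVE)
        indicator.set_menu(menu)

        Gtk.main()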

  • cma 2 days ago

    What's the startup time now on a 9950X3D, after a prior start so the .pyc files are cached in RAM?

    • icarito a day ago

      Hey, I felt bad that there was such a long delay, so I made sure to lazy-load everything I could and managed to bring the startup time down from 2.2 seconds to 0.6 on my machine! Massive improvement - thanks for the challenge!
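
      The trick was mostly deferring heavy imports until they're actually needed. A minimal sketch of the pattern (the function name is illustrative, not the actual gtk-llm-chat code):

          def open_chat_window():
              # Heavy GUI imports are deferred until a window is actually
              # needed; until then, startup pays only for this module's
              # cheap top-level code.
              import gi
              gi.require_version("Gtk", "4.0")
              from gi.repository import Gtk

              win = Gtk.Window(title="gtk-llm-chat demo")
              win.present()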

      • cma 13 hours ago

        Nice, that's a huge difference.

    • icarito 2 days ago

      I wonder! In my more modest setup, it takes a couple of seconds perhaps. After that it's quite usable.

    • cma 2 days ago

      On a laptop 7735HS under WSL2, I get 15ms for the interpreter to start and exit without any imports.

      • icarito 2 days ago

        I've got an i5-10210U CPU @ 1.60GHz.

        You've triggered my curiosity. The chat window consistently takes 2.28s to start, while the Python interpreter itself takes roughly 30ms. I'll be doing some profiling.
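
        For the profiling, CPython's -X importtime flag is handy: python -X importtime -c "from gi.repository import Gtk" prints a per-module import-cost tree. A cruder sketch of the same measurement, assuming the delay is import-dominated:

            import time

            t0 = time.perf_counter()
            import gi
            gi.require_version("Gtk", "4.0")
            from gi.repository import Gtk
            print(f"gi + Gtk imports took {time.perf_counter() - t0:.3f}s")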

Gracana 2 days ago

This looks quite nice. I would like to see the system prompt and inference parameters exposed in the UI, because those are things I'm used to fiddling with in other UIs. Is that something that the llm library supports?

  • icarito 2 days ago

    Yeah, absolutely - I've just got to the point where I'm happy with the architecture, so I'll continue to add UI. I've just added support for fragments, and I've thought about presenting them as if they were attached documents. On my radar are switching models mid-conversation and perhaps the ability to roll back a conversation or remove some messages. But yeah, exposing the system prompt and parameters would be nice too! Thanks for the suggestions!
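
    For reference, the underlying llm Python API already accepts a system prompt, and model options are passed as keyword arguments, so it's mostly a matter of surfacing them in the UI. A rough sketch (option names vary by model; the values here are just examples):

        import llm

        model = llm.get_model("gpt-4o-mini")
        response = model.prompt(
            "Summarize GTK in one sentence.",
            system="You are terse and precise.",  # system prompt
            temperature=0.2,                      # a model option
        )
        print(response.text())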

    • Gracana 2 days ago

      Awesome. It would be great to see a nice GTK-based open-source competitor to LM Studio and the like.

indigodaddy 2 days ago

Does this work on Mac, or is it Linux-only?

  • icarito 2 days ago

    I'd truly like to know, but I have no access to a Mac to try it on. If you can, give it a try and let me know? If it works, please send a screenshot!