zlwaterfield 2 days ago

Correct, but I'm going to look into a locally running LLM so it would be free.

  • Tepix 2 days ago

    Please do. When you add support for a custom API URL, please make sure it supports HTTP Basic authentication.
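    On the client side, Basic auth is just an extra request header. A minimal sketch (the function name and credentials here are hypothetical, not from any existing client):

    ```python
    import base64

    def basic_auth_header(user: str, password: str) -> dict:
        """Build an HTTP Basic Authorization header (RFC 7617):
        base64-encode "user:password" and prefix with "Basic "."""
        token = base64.b64encode(f"{user}:{password}".encode()).decode()
        return {"Authorization": f"Basic {token}"}

    # Merge this dict into the headers of any request sent to the custom API URL.
    print(basic_auth_header("alice", "secret"))
    # → {'Authorization': 'Basic YWxpY2U6c2VjcmV0'}
    ```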

    That's super useful for people who run say ollama with an nginx reverse proxy in front of it (that adds authentication).
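    For reference, a minimal sketch of that nginx setup (assuming Ollama listens on localhost:11434; the hostname, certificate paths, and htpasswd path are placeholders):

    ```nginx
    server {
        listen 443 ssl;
        server_name ollama.example.com;                  # placeholder hostname

        ssl_certificate     /etc/ssl/certs/ollama.pem;   # your certificate
        ssl_certificate_key /etc/ssl/private/ollama.key;

        location / {
            auth_basic           "Ollama";
            auth_basic_user_file /etc/nginx/.htpasswd;   # created with htpasswd

            proxy_pass http://127.0.0.1:11434;           # Ollama's default port
            proxy_http_version 1.1;
            proxy_buffering off;                         # don't buffer streamed tokens
            proxy_read_timeout 300s;                     # allow long generations
            proxy_set_header Host $host;
        }
    }
    ```

    With `proxy_buffering off`, streamed responses reach the client token by token instead of arriving in one buffered chunk.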

  • nickthegreek 2 days ago

    Look into allowing it to connect to either an LM Studio endpoint or Ollama, please.

Szpadel 2 days ago

Yes, but gpt-4o-mini costs very little, so you will probably spend well under $1/month.

  • miguelaeh 2 days ago

    I don't think the point here should be the cost, but the fact that you are sending everything you write to OpenAI to train their models on your information. The option of a local model allows you to preserve the privacy of what you write. I like that.

    • nickthegreek 2 days ago

      OpenAI does not train models on data that comes in through the API.

      https://openai.com/policies/business-terms/

      • punchmesan 2 days ago

        Assuming for the moment that they aren't saying that with their fingers crossed behind their back, that doesn't change the fact that they store the inputs they receive and promise to protect them (paraphrasing the Content section of the link above). Even if the data is never fed back into the LLM, storing inputs anywhere for any period of time is a huge privacy risk; after all, a breach is a matter of "when", not "if".