swatcoder 2 days ago

You lob it the beginning of a document and let it toss back the rest.

That's all that the LLM itself does at the end of the day.

All the post-training to bias results, routing to different models, tool calling for command execution and text insertion, injected "system prompts" to shape the user experience, etc. are just layers built on top of the "magic" of text completion.

And if your question was more practical: where it's made available, you get access to that underlying layer via an API or through a self-hosted model, and can make use of it with your own code or with a third-party site/software product.
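
A minimal sketch of what touching that raw layer looks like, assuming the openai Python package pointed at an OpenAI-compatible completions endpoint (a self-hosted llama.cpp or vLLM server, for example); the base_url, API key, and model name below are placeholders, not any particular vendor's values:

  from openai import OpenAI

  # Point the client at whatever serves the plain text-completion endpoint.
  client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

  # Lob it the beginning of a "document"; it tosses back the rest.
  completion = client.completions.create(
      model="my-base-model",  # placeholder model name
      prompt="Sourdough, step by step:\n1. Mix flour and water",
      max_tokens=64,
      temperature=0.7,
  )
  print(completion.choices[0].text)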

behnamoh 2 days ago

The same way we used GPT-3: "the following is a conversation between the user and the assistant. ..."
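
Concretely, that old trick is just string formatting in front of a plain completion call: transcribe the turns so far and leave the assistant's next line open for the model to fill in. A rough sketch, assuming a generic OpenAI-compatible completions endpoint behind the openai Python client; the URL, model name, and stop sequence are placeholder assumptions:

  from openai import OpenAI

  client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")  # placeholders

  # GPT-3-era framing: describe the conversation, write out the transcript,
  # then let the model complete the assistant's next turn.
  history = [
      ("User", "What's the capital of France?"),
      ("Assistant", "Paris."),
      ("User", "And roughly how many people live there?"),
  ]
  prompt = "The following is a conversation between the user and the assistant.\n\n"
  prompt += "\n".join(f"{role}: {text}" for role, text in history)
  prompt += "\nAssistant:"

  completion = client.completions.create(
      model="my-base-model",  # placeholder model name
      prompt=prompt,
      max_tokens=64,
      stop=["\nUser:"],  # stop before the model starts writing the user's next turn
  )
  print(completion.choices[0].text.strip())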

  • nrhrjrjrjtntbt 2 days ago

    Or just:

    1 1 2 3 5 8 13

    Or:

    The first president of the united

    • CGMthrowaway 2 days ago

      And that's better? Isn't that just SMS autocomplete?

      • d-lisp 2 days ago

        If that's SMS autocomplete, then chat LLMs are just SMS autocomplete with sugar on top.

        • CGMthrowaway a day ago

          That's what I have always thought: SMS autocomplete with more intermediate iteration and better source data compression.