Comment by hahahahhaah 4 days ago
I'd love to "chat" to that model see how it behaves
I've done this out of curiosity with the base model of Llama 3.1 405B. I vibe-coded a little chat harness where the prompt was a few short exchanges between "system" and "user", with "user:" as the stop word so I could enter my message. It worked surprisingly well, and I didn't get any sycophancy or clichéd AI responses.
I highly recommend it. As a tip, you can quite easily get into a chat-like state simply by using in-context learning: have a few turns of conversation pre-written and generate from there. The model will continue the conversation (for both parties), so you just stop generation when it starts writing on your behalf.
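A minimal sketch of that harness in Python. The `complete` function here is a hypothetical stand-in for whatever base-model completion endpoint you're using (llama.cpp server, vLLM, etc.); it's stubbed out so the stop-word logic is visible on its own.

```python
# Chat harness over a raw base-model completion call (sketch).
# Seed the prompt with a few pre-written turns, then truncate each
# generation at "\nuser:" so the model doesn't speak on our behalf.

STOP = "\nuser:"  # stop word: cut off when the model starts our next turn

SEED_CONVERSATION = """\
user: What's the capital of France?
system: Paris.
user: And of Japan?
system: Tokyo.
"""

def complete(prompt: str) -> str:
    # Hypothetical stub: a real backend would return the model's raw
    # continuation, which eventually opens a new "user:" turn itself.
    return " Sure, ask me anything.\nuser: something the model invented"

def chat_turn(history: str, user_message: str) -> tuple[str, str]:
    """Append the user's turn, generate, truncate at the stop word."""
    prompt = history + f"user: {user_message}\nsystem:"
    raw = complete(prompt)
    reply = raw.split(STOP, 1)[0]  # drop the model's invented "user:" turn
    return prompt + reply + "\n", reply.strip()

history, reply = chat_turn(SEED_CONVERSATION, "And of Italy?")
print(reply)
```

Each call extends `history`, so the in-context examples plus the growing transcript are what keep the base model "in character" as a chat participant.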
That said, it's useful for so much more beyond chat. Outline the premise of a book, then add "what follows is that book\n#Chapter 1:" and watch it rip. Base models are my preferred way of using LLMs by a long margin.