Comment by simonw
Anyone seen a low-friction way to run prompts through this yet, either via a hosted API or chat UI or a convenient GGML or MLX build that runs in Ollama or llama.cpp or LM Studio?
Thanks, it looks like that's this one: https://huggingface.co/FractalSurfer/TimeCapsuleLLM-v2-1800-...
There's a "Use this model" button on that page that can launch it in LM Studio.
I'm currently running it in LM Studio, which can download it directly from Hugging Face. It generates incoherent text, though:
===
You:
I pray you, who is this Master Newton?
timecapsulellm-v2-1800-1875-mlx:
TI offer to pay you the very same fee as you did before. It was not in the power of your master to deliver the letter to your master. He did. I will be with you as soon as I can keep my word. It is not at all clear, whether the letter has been sent or not. It is not at all clear: but it is clear also that it was written by the person who gave it. "No," I said, "I cannot give it to you." There, the letter was sent to me. "The letter is yours, I believe," I said. "But, I hope, you will not refuse to give it to me?