Comment by permo-w
I was wondering this. What is the minimum amount of text an LLM needs to be coherent? Fun as this idea is, the samples of its responses are basically babbling nonsense. Going further, a lot of what makes LLMs so strong isn't their original training data but the RLHF done afterwards, and RLHF would be very difficult in this case.