Comment by aucisson_masque 2 days ago
It could be used to spot LLM-generated text.
Compare the frequency of words to those used in natural human writing and you can spot the computer from the human.
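A minimal sketch of that idea, assuming tiny hypothetical corpora and using cosine similarity between word-frequency vectors as one possible distance measure (real detectors use far larger baselines and better statistics):

```python
from collections import Counter
import math

def word_freq(text):
    """Relative frequency of each lowercase word in a text."""
    words = text.lower().split()
    counts = Counter(words)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def cosine_similarity(freq_a, freq_b):
    """Cosine similarity between two word-frequency vectors (1.0 = identical distribution)."""
    common = set(freq_a) | set(freq_b)
    dot = sum(freq_a.get(w, 0.0) * freq_b.get(w, 0.0) for w in common)
    norm_a = math.sqrt(sum(v * v for v in freq_a.values()))
    norm_b = math.sqrt(sum(v * v for v in freq_b.values()))
    return dot / (norm_a * norm_b)

# Hypothetical "human baseline" corpus and a sample to score.
human_baseline = "the cat sat on the mat and looked out the window"
sample = "the dog sat on the rug and looked at the door"
score = cosine_similarity(word_freq(human_baseline), word_freq(sample))
```

A sample whose word-frequency profile drifts far from the human baseline would get a low score; the catch, as the replies below note, is that the generator can be tuned to match the baseline.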
> The more we use AI, the more we integrate LLMs and other tools into our life, the more their output will influence us
Hmm, I don’t disagree, but I think it will be a valuable skill going forward to write text that doesn’t read like it was written by an LLM.
This is an arms race that I’m not sure we can win though. It’s almost like a GAN.
> ... compare the frequency of words to those used in human natural writings and you spot the computer from the human.
But that's a losing endeavor: if you can do that, you can immediately ask your LLM to adjust its output so that it passes that test (and many others). It can introduce typos, make small errors on purpose, and do anything else you can think of to make it look human.
It may work for a short time, but after a while natural language will evolve through natural exposure to those new words and word patterns, and even humans will write in ways that, while different from the LLMs, are also different from the snapshot captured by any such detector. It's already the case that we wrote differently 20 years ago than 50 years ago, and even more so 100 years ago.
It could be used to differentiate LLM text from pre-LLM human text, maybe. The thing is, our AIs may not be very good at learning, but our brains are. The more we use AI, the more we integrate LLMs and other tools into our lives, the more their output will influence us. I believe there was a study (or a few anecdotes) where college papers checked for AI material were flagged as AI-written even though they were written by humans, because the students had used AI during their studying and learned from it.