Comment by DroneBetter 3 days ago

the problem is essentially the same as with generative adversarial networks: the ability to automatically detect some set of hallmarks of LLM output is equivalent to the ability to avoid producing them, and LLMs are trained to predict (i.e. be indistinguishable from) their source corpus of human-written text.

so reliable LLM detection is (theoretically) impossible against SOTA LLMs; in practice it could be easier, because the RLHF stage inserts idiosyncrasies.
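to make the GAN analogy concrete, here is a minimal sketch (PyTorch is my choice, and the Gaussians are toy stand-ins for the text distributions): any differentiable detector supplies exactly the gradient signal a generator needs to evade it, which is the equivalence claimed above.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
dim = 8

# Toy stand-in for "human text": a fixed Gaussian in R^8.
def human_samples(n):
    return torch.randn(n, dim) + 2.0

generator = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, dim))
detector = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(detector.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    fake = generator(torch.randn(64, dim))
    real = human_samples(64)

    # Detector step: learn to label real 1, generated 0.
    d_loss = bce(detector(real), torch.ones(64, 1)) \
           + bce(detector(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: reuse the detector's own gradients to look "real".
    g_loss = bce(detector(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# At equilibrium the detector is reduced to chance on generated samples:
# the signal that detects a hallmark is the same signal that removes it.
with torch.no_grad():
    p = torch.sigmoid(detector(generator(torch.randn(256, dim)))).mean().item()
print(f"detector's mean 'real' probability on generated samples: {p:.2f}")
```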

arendtio 2 days ago

Sure, a 100% reliable system is impossible, as you have laid out. However, if I understand the announcement correctly, this is about volume, and I wonder whether a tool could flag articles that show obvious signs of LLM usage. A crude sketch of what I mean is below.
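purely as illustration, a triage tool could be as simple as a phrase scan that surfaces candidates for human review (the phrase list here is hypothetical, and per the parent comment it would be trivially evaded by anyone who cares):

```python
import re

# Hypothetical telltales; a real tool would need a curated, evolving list.
TELLTALES = [
    r"as an AI language model",
    r"I cannot fulfill",
    r"certainly! here(?:'s| is)",
    r"it is important to note that",
]
PATTERN = re.compile("|".join(TELLTALES), re.IGNORECASE)

def flag_article(text: str) -> list[str]:
    """Return the telltale phrases found, for a human reviewer to triage."""
    return [m.group(0) for m in PATTERN.finditer(text)]

if __name__ == "__main__":
    sample = "It is important to note that, as an AI language model, ..."
    print(flag_article(sample))
```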

  • warkdarrior a day ago

    The point is that this leads to an arms race: if arXiv uses a top-of-the-line LLM for, say, 20 minutes per paper, cheating authors will use a top-of-the-line LLM for 21 minutes to beat it.