Terretta 2 days ago

> I'd love to see by how much the use of the word "delve" has increased since 2021...

There are charts and graphs in the link, covering both the period since 2021 and earlier.

The final graph suggests the phenomenon started earlier, possibly correlated with Malaysian / Indian usages of English.

It does seem OpenAI's family of GPTs, as implemented in ChatGPT, unspools concepts in a blend of India-based-consultancy English and American freshman essay structure, frosted with superficially approachable, upbeat blogger prose ingratiatingly selling you something.

Anthropic has clearly made efforts to steer this differently; Mistral and Meta have as well, though to a lesser degree.

I've wondered if this reflects the training material (the "SEO is ruining the Internet" theory), or is more simply explained by the selection of pools of humans hired for RLHF.

chipdart 2 days ago

From the submission you're commenting on:

> As one example, Philip Shapira reports that ChatGPT (OpenAI's popular brand of generative language model circa 2024) is obsessed with the word "delve" in a way that people never have been, and caused its overall frequency to increase by an order of magnitude.
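The "order of magnitude" claim is straightforward to sanity-check on any two text samples by comparing relative token frequencies. A minimal sketch, using tiny hypothetical strings in place of real pre- and post-ChatGPT corpora (plain word counting, no lemmatization):

```python
import re
from collections import Counter

def word_freq(text: str, word: str) -> float:
    """Relative frequency of `word` per million tokens in `text`."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    return Counter(tokens)[word] / len(tokens) * 1_000_000

# Hypothetical samples standing in for pre- and post-2022 corpora.
before = "We delve into the data once, then summarize the findings plainly."
after = "Let's delve deeper. We delve into nuance, delve into context, and delve again."

ratio = word_freq(after, "delve") / word_freq(before, "delve")
print(f"'delve' became {ratio:.1f}x more frequent")
```

Shapira's analysis works on real corpora (e.g. PubMed abstracts by year), but the arithmetic is the same: per-million-token rates, then a ratio.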

slashdave 2 days ago

Amusing that we now have a feedback loop. Let's see... delve delve delve delve delve delve delve delve. There, I've done my part.

dqv 2 days ago

Same for me but with the word “crucial”.

xpl 2 days ago

The fun thing is that while GPTs initially learned from humans (because ~100% of the content was human-generated), future humans will learn from GPTs, because soon almost all available content will be GPT-generated.

This will surely affect how we speak. It's possible that human language evolution could come to a halt, stuck in time as AI datasets stop being updated.

In the worst case, we will see a global "model collapse" with human languages devolving along with AI's, if future AIs are trained on their own outputs...
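The resampling dynamic behind that worst case can be illustrated with a toy simulation: if each "generation" of training data is just a sample drawn from the previous generation's output, vocabulary diversity drifts downward and can only be lost, never regained. This is a deliberately simplified sketch (Wright–Fisher-style drift over a made-up five-word vocabulary), not an actual training loop:

```python
import random

random.seed(0)

# Toy "language": a diverse mix of near-synonyms.
corpus = ["delve", "explore", "examine", "probe", "study"] * 20

def retrain(corpus, sample_size=100):
    """One generation: the next model's training data is resampled
    from the previous model's output (sampling with replacement)."""
    return [random.choice(corpus) for _ in range(sample_size)]

for generation in range(15):
    corpus = retrain(corpus)

# Words absent from one generation can never reappear in later ones.
print(sorted(set(corpus)))
```

Real model collapse involves full probability distributions, not word lists, but the one-way loss of rare forms is the same mechanism.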