Comment by ensocode 7 hours ago

Interesting thought, thanks. So what would we need? What if everyone effectively had a permanent microphone — via smartphones, smart speakers, cars, wearables — and all of that lived, spoken, emotionally charged data were fed into future LLMs?

On the surface, that sounds like a path toward richer models: less elite-written text, more everyday language, more non-academic thinking, more embodied culture. But it also raises a deeper question: whose reality would actually be learned?

Because even if the data were global, the selection, labeling, weighting, and training objectives would still be controlled somewhere.

And then there’s preference. Would people eventually choose their models the way they choose media ecosystems today? A Californian-progressive LLM. A post-socialist Eastern European LLM. A Palestinian LLM for discussing geopolitics. A deeply conservative, tradition-preserving LLM that treats modernity itself as suspect.

If that happens, AI wouldn’t homogenize thought; it would solidify worldviews into software, much as media ecosystems do today. Dialogue might actually become harder, not easier.

So the risk may not be “AI Californication” alone, but AI Balkanization ...

The open question is whether we can build models that don’t just represent cultures, but can genuinely inhabit multiple, conflicting ontologies without collapsing them into a single moral frame. That may be the hardest problem of all, and one that current LLMs, trained mostly on the English-speaking upper layers of the internet, are nowhere near solving.