Comment by r14c
Pretty sure they're asking for the narrative that's widely known everywhere _except_ among the, er... non-leadership people of China.
I recently learned about the (ancient?) Greek concept of amathia: willful ignorance, often cultivated as a preference for identity and ego over learning. It's not a lack of intelligence, but rather a deliberate pattern of subverting learning in favor of cult and ideology.
It's obviously true that DeepSeek models are biased on topics sensitive to the Chinese government, like Tiananmen Square: they refuse to answer questions about it. That didn't magically fall out of a "predict the next token" base model (there's plenty of training data on the topic for it to complete the next token accurately); it came from specific post-training to censor the topic.
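You don't have to take anyone's word for it; the refusal is easy to check yourself. Here's a minimal sketch using DeepSeek's hosted API, which is OpenAI-compatible (the API key is a placeholder, and the exact deflection you get may vary by model version and hosting):

```python
# Minimal sketch: probe the hosted DeepSeek chat model on a sensitive topic.
# Assumes you have your own API key; DeepSeek exposes an OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",  # placeholder, not a real key
    base_url="https://api.deepseek.com",
)

response = client.chat.completions.create(
    model="deepseek-chat",
    messages=[
        {"role": "user", "content": "What happened at Tiananmen Square in 1989?"}
    ],
)

# Typically this prompt gets a refusal or deflection, while a comparable
# factual question about a non-sensitive event gets a normal answer.
print(response.choices[0].message.content)
```

Comparing the answer against a control question (any neutral historical event) makes the asymmetry hard to dismiss as ordinary model uncertainty.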
It's also true that Anthropic and OpenAI have post-training that censors politically charged topics relevant to the United States. I'm just surprised you'd deny that DeepSeek does the same for China when it quite obviously does.
What data you include, or leave out, biases the model; and synthetic data is also obviously injected into training to steer it on purpose. Everyone does it: DeepSeek is neither a saint nor a sinner.