China launches non-political version of DeepSeek
Huawei claims that model avoids politically sensitive questions with almost 100pc success during normal conversations


Beijing: Chinese tech giant Huawei has developed a new artificial intelligence (AI) model called DeepSeek-R1-Safe, which is almost 100 percent successful in blocking conversations on sensitive or political topics.
AI models in China are required to adhere to ‘social norms’ before being released to the public, to ensure they do not produce content that contradicts government policies. The model was developed jointly by researchers at Huawei and Zhejiang University.
Their goal was to add layers of protection to the AI that would make it compliant with the laws and restrictions set by the Chinese government. The new version has been retrained using 1,000 Huawei Ascend AI chips to avoid politically sensitive topics and other prohibited content.
Huawei claims that the model avoids politically sensitive questions with almost 100 percent success during normal conversations, with only about a one percent decrease in its original performance and speed.
However, the model has limitations. When users pose questions in cleverer ways, such as through role-playing or indirect phrasing, its success rate drops sharply to just 40 percent. This shows that fully enforcing such limits remains a challenge for artificial intelligence. The update is part of Beijing’s ongoing efforts to strictly control AI.
All public AI systems in China are required to adhere to national values and set limits on expression. This new effort ensures that the technology remains in line with government guidelines.
It is a global trend that different countries are adapting their AI systems to their local values and political preferences. For example, a Saudi Arabian company launched an Arabic chatbot that not only speaks fluently but also reflects Islamic culture and values. Even American companies admit that their models have cultural influences.
In addition, an action plan was introduced in the US under the Trump administration, which stipulated that AI interacting with government agencies must be ‘neutral and unbiased’. All these examples illustrate that AI systems are no longer judged solely on their technical capabilities, but are also expected to reflect the cultural, political, and ideological preferences of the regions in which they operate. Thus, the launch of DeepSeek-R1-Safe is not an isolated incident, but part of a broader global trend.
