Technology

China launches non-political version of DeepSeek

Huawei claims that model avoids politically sensitive questions with almost 100pc success during normal conversations

GNN Web Desk
Published 2 hours ago on Sep 22nd 2025, 11:43 am

Beijing: Chinese tech giant Huawei has developed a new artificial intelligence (AI) model called DeepSeek-R1-Safe, which is almost 100 percent successful in blocking conversations on sensitive or political topics.

AI models in China must adhere to ‘social norms’ before public release, so that they do not produce content that contradicts government policies. The model was developed jointly by researchers at Huawei and Zhejiang University.

Their goal was to add layers of protection to the AI that would make it compliant with the laws and restrictions set by the Chinese government. The new version was retrained on 1,000 Huawei Ascend AI chips to avoid politically sensitive topics and other prohibited content.

Huawei claims that the model deflects politically sensitive questions with almost 100 percent success during normal conversations, while showing only about a one percent decrease in its original performance and speed.

However, the model has limitations. When users pose questions cleverly, for example through role-playing or indirect prompts, its success rate drops sharply to just 40 percent. This shows that fully enforcing such limits remains a challenge for artificial intelligence. The update is part of Beijing’s ongoing efforts to tightly control AI.

All public AI systems in China are required to adhere to national values and set limits on expression. This new effort ensures that the technology remains in line with government guidelines.

Adapting AI systems to local values and political preferences is a global trend. For example, a Saudi Arabian company launched an Arabic chatbot that not only speaks fluently but also reflects Islamic culture and values. Even American companies acknowledge that their models carry cultural influences.

In addition, an action plan was introduced in the US under the Trump administration, which stipulated that AI interacting with government agencies must be ‘neutral and unbiased’. All these examples illustrate that AI systems are no longer judged solely on their technical capabilities, but are also expected to reflect the cultural, political, and ideological preferences of the regions in which they operate. Thus, the launch of DeepSeek-R1-Safe is not an isolated incident, but part of a broader global trend.
