China launches non-political version of DeepSeek
Huawei claims that model avoids politically sensitive questions with almost 100pc success during normal conversations


Beijing: Chinese tech giant Huawei has developed a new artificial intelligence (AI) model called DeepSeek-R1-Safe, which is almost 100 percent successful in blocking conversations on sensitive or political topics.
AI models in China are required to adhere to ‘social norms’ before being released to the public, ensuring that content contradicting government policies is not surfaced. The model was developed jointly by researchers at Huawei and Zhejiang University.
Their goal was to add layers of protection to the AI that would make it compliant with the laws and restrictions set by the Chinese government. The new version has been retrained using 1,000 Huawei Ascend AI chips to avoid politically sensitive topics and other prohibited content.
Huawei claims that the model avoids politically sensitive questions with almost 100 percent success during normal conversations, while showing only a slight decrease of about one percent in its original performance and speed.
However, the model has limitations. When users pose questions cleverly, such as through role-playing or indirect prompts, its success rate drops sharply to just 40 percent. This shows that fully enforcing such limits remains a challenge for artificial intelligence. The update is part of Beijing’s ongoing efforts to strictly control AI.
All public AI systems in China are required to adhere to national values and set limits on expression. This new effort ensures that the technology remains in line with government guidelines.
It is a global trend that different countries are adapting their AI systems to their local values and political preferences. For example, a Saudi Arabian company launched an Arabic chatbot that not only speaks fluently but also reflects Islamic culture and values. Even American companies admit that their models have cultural influences.
In addition, an action plan was introduced in the US under the Trump administration, which stipulated that AI interacting with government agencies must be ‘neutral and unbiased’. All these examples illustrate the fact that AI systems are no longer judged solely on their technical capabilities, but are also expected to reflect the cultural, political, and ideological preferences of the regions in which they operate. Thus, the launch of DeepSeek-R1-Safe is not an isolated incident, but part of a broader global trend.