Claude AI will end ‘persistently harmful or abusive user interactions’

Published Aug 20, 2025, 2:00 pm
By Web Desk

Anthropic’s Claude AI chatbot can now end conversations deemed “persistently harmful or abusive,” as spotted earlier by TechCrunch. The capability is now available in the Opus 4 and 4.1 models, and allows the chatbot to end a conversation as a “last resort” after users repeatedly ask it to generate harmful content despite multiple refusals and attempts at redirection. The move is meant to support the “potential welfare” of AI models, Anthropic says, by terminating the types of interactions in which Claude has shown “apparent distress.”
If Claude chooses to cut a conversation short, users won’t be able to send new messages in that conversation. They can still create new chats, as well as edit and retry previous messages if they want to continue a particular thread.
During its testing of Claude Opus 4, Anthropic says it found that Claude had a “robust and consistent aversion to harm,” including when asked to generate sexual content involving minors or to provide information that could contribute to violent acts and terrorism. In these cases, Anthropic says Claude showed a “pattern of apparent distress” and a “tendency to end harmful conversations when given the ability.”
Anthropic notes that conversations triggering this kind of response are “extreme edge cases,” adding that most users won’t encounter this roadblock even when chatting about controversial topics. The AI startup has also instructed Claude not to end conversations if a user is showing signs that they might want to hurt themselves or cause “imminent harm” to others. Anthropic partners with Throughline, an online crisis support provider, to help develop responses to prompts related to self-harm and mental health.
Last week, Anthropic also updated Claude’s usage policy as rapidly advancing AI models raise more concerns about safety. Now, the company prohibits people from using Claude to develop biological, nuclear, chemical, or radiological weapons, as well as to develop malicious code or exploit a network’s vulnerabilities.