Claude AI will end ‘persistently harmful or abusive user interactions’

Published on August 20, 2025, 2:00 PM
By Web Desk

Anthropic’s Claude AI chatbot can now end conversations deemed “persistently harmful or abusive,” as spotted earlier by TechCrunch. The capability is now available in the Opus 4 and 4.1 models and allows the chatbot to end conversations as a “last resort” after users repeatedly ask it to generate harmful content despite multiple refusals and attempts at redirection. The goal, Anthropic says, is to protect the “potential welfare” of AI models by terminating the kinds of interactions in which Claude has shown “apparent distress.”
If Claude chooses to cut a conversation short, users won’t be able to send new messages in that conversation. They can still create new chats, as well as edit and retry previous messages if they want to continue a particular thread.
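That per-conversation lock is easy to picture as a small client-side state machine. The sketch below is purely illustrative: the class, its methods, and the assumed end-of-conversation signal are hypothetical and do not come from Anthropic’s actual API.

```python
# Hypothetical sketch of the behavior described above. The names here
# (Conversation, send, branch_from, "conversation_ended") are assumptions
# made for illustration, not Anthropic's API.

class ConversationEnded(Exception):
    """Raised when a message is sent to a chat Claude has already closed."""

class Conversation:
    def __init__(self, messages=None):
        self.messages = list(messages or [])
        self.ended = False  # flipped once the model ends the chat as a "last resort"

    def send(self, text):
        if self.ended:
            # Per the article: no new messages in an ended conversation.
            raise ConversationEnded("This conversation was closed by Claude.")
        self.messages.append({"role": "user", "content": text})
        # A real client would call the model here and check whatever signal
        # it returns, e.g.:
        # self.ended = (reply.stop_reason == "conversation_ended")  # assumed

    def branch_from(self, index):
        # Editing or retrying an earlier message starts a fresh thread,
        # so the lock on the old conversation does not carry over.
        return Conversation(self.messages[:index])
```

The detail worth noticing is that the lock is scoped to a single conversation: a branch created from an earlier message is a new thread with its own state, which matches the edit-and-retry behavior described above.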
During its testing of Claude Opus 4, Anthropic says it found that Claude had a “robust and consistent aversion to harm,” including when asked to generate sexual content involving minors, or provide information that could contribute to violent acts and terrorism. In these cases, Anthropic says Claude showed a “pattern of apparent distress” and a “tendency to end harmful conversations when given the ability.”
Anthropic notes that conversations triggering this kind of response are “extreme edge cases,” adding that most users won’t encounter this roadblock even when chatting about controversial topics. The AI startup has also instructed Claude not to end conversations if a user is showing signs that they might want to hurt themselves or cause “imminent harm” to others. Anthropic partners with Throughline, an online crisis support provider, to help develop responses to prompts related to self-harm and mental health.
Last week, Anthropic also updated Claude’s usage policy as rapidly advancing AI models raise more concerns about safety. Now, the company prohibits people from using Claude to develop biological, nuclear, chemical, or radiological weapons, as well as to develop malicious code or exploit a network’s vulnerabilities.