Claude AI will end ‘persistently harmful or abusive user interactions’

Published Aug 20, 2025, 2:00 pm
By Web Desk

Anthropic’s Claude AI chatbot can now end conversations deemed “persistently harmful or abusive,” as spotted earlier by TechCrunch. The capability is now available in the Opus 4 and 4.1 models and allows the chatbot to end conversations as a “last resort” after users repeatedly ask it to generate harmful content despite multiple refusals and attempts at redirection. The goal, Anthropic says, is to protect the “potential welfare” of AI models by terminating the kinds of interactions in which Claude has shown “apparent distress.”
If Claude chooses to cut a conversation short, users won’t be able to send new messages in that conversation. They can still create new chats, as well as edit and retry previous messages if they want to continue a particular thread.
During its testing of Claude Opus 4, Anthropic says it found that Claude had a “robust and consistent aversion to harm,” including when asked to generate sexual content involving minors, or provide information that could contribute to violent acts and terrorism. In these cases, Anthropic says Claude showed a “pattern of apparent distress” and a “tendency to end harmful conversations when given the ability.”
Anthropic notes that conversations triggering this kind of response are “extreme edge cases,” adding that most users won’t encounter this roadblock even when chatting about controversial topics. The AI startup has also instructed Claude not to end conversations if a user is showing signs that they might want to hurt themselves or cause “imminent harm” to others. Anthropic partners with Throughline, an online crisis support provider, to help develop responses to prompts related to self-harm and mental health.
Last week, Anthropic also updated Claude’s usage policy as rapidly advancing AI models raise more concerns about safety. Now, the company prohibits people from using Claude to develop biological, nuclear, chemical, or radiological weapons, as well as to develop malicious code or exploit a network’s vulnerabilities.
