Anthropic’s crawler is ignoring websites’ anti-AI scraping policies
Anthropic’s ClaudeBot web crawler has aggressively hammered iFixit’s website, seemingly violating the repair company’s Terms of Use in the process.

The ClaudeBot web crawler that Anthropic uses to scrape training data for AI models like Claude has hammered iFixit’s website almost a million times in a 24-hour period, seemingly violating the repair company’s Terms of Use in the process.


“If any of those requests accessed our terms of service, they would have told you that use of our content is expressly forbidden. But don’t ask me, ask Claude!” said iFixit CEO Kyle Wiens on X, posting images that show Anthropic’s chatbot acknowledging that iFixit’s content was off limits. “You’re not only taking our content without paying, you’re tying up our devops resources. If you want to have a conversation about licensing our content for commercial use, we’re right here.”
“The rate of crawling was so high that it set off all our alarms and spun up our devops team,” Wiens tells The Verge. “iFixit gets a lot of traffic. Being one of the internet’s top sites makes us pretty familiar with web crawlers and bots. We can handle that load just fine, but this was an anomaly.”
iFixit’s Terms of Use policy states that “reproducing, copying or distributing” any content from the website is “strictly prohibited without the express prior written permission” from the company, with specific inclusion of “training a machine learning or AI model.” When Anthropic was questioned on this by 404 Media, however, the AI company linked back to an FAQ page that says its crawler can only be blocked via a robots.txt file extension.
Wiens says iFixit has since added the crawl-delay extension to its robots.txt. “Based on our logs, they did stop after we added it to the robots.txt,” Wiens says.
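For context, Crawl-delay is a non-standard but widely recognized robots.txt directive that asks a bot to pause between requests. A file along these lines would slow Anthropic’s crawler down or opt out of it entirely; the delay value and the choice to block the whole site are illustrative, not iFixit’s actual configuration:

```
# Illustrative robots.txt (not iFixit's actual file)

# Ask Anthropic's crawler to wait 10 seconds between requests
User-agent: ClaudeBot
Crawl-delay: 10

# Or, to opt out of its scraping entirely:
# User-agent: ClaudeBot
# Disallow: /
```

Whether any of this is honored is entirely up to the crawler, which is exactly the weakness Wiens and others are pointing to.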
“We respect robots.txt and our crawler respected that signal when iFixit implemented it,” Anthropic spokesperson Jennifer Martinez tells The Verge.
iFixit doesn’t seem to be alone, with Read the Docs co-founder Eric Holscher and Freelancer.com CEO Matt Barrie saying in Wiens’ thread that their sites had also been aggressively scraped by Anthropic’s crawler. This also doesn’t seem to be new behavior for ClaudeBot, with several months-old Reddit threads reporting a dramatic increase in Anthropic’s web scraping. In April this year, the Linux Mint web forum attributed a site outage to strain caused by ClaudeBot’s scraping activities.
Disallowing crawlers via robots.txt files is also the opt-out method of choice for many other AI companies like OpenAI, but it doesn’t give website owners any flexibility to specify what kind of scraping is and isn’t permitted. Another AI company, Perplexity, has been known to ignore robots.txt exclusions entirely. Still, it is one of the few options companies have to keep their data out of AI training materials, an approach Reddit has applied in its recent crackdown on web crawlers.
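To show what the robots.txt contract actually covers, here is a minimal sketch, using Python’s standard-library urllib.robotparser, of the checks a well-behaved crawler performs; the policy text and URLs are hypothetical examples, not any real site’s file:

```python
# Minimal sketch of how a compliant crawler evaluates a robots.txt policy.
# The policy below is a hypothetical example, not any real site's file.
import urllib.robotparser

ROBOTS_TXT = """\
User-agent: ClaudeBot
Crawl-delay: 10
Disallow: /guides/

User-agent: *
Disallow:
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A compliant crawler checks each URL before fetching it...
print(parser.can_fetch("ClaudeBot", "https://example.com/guides/123"))  # False
print(parser.can_fetch("ClaudeBot", "https://example.com/about"))       # True

# ...and honors any requested pause between requests.
print(parser.crawl_delay("ClaudeBot"))  # 10
```

Note that the format only expresses allow or deny by path and user agent; there is no way to say “crawl for search indexing but not for AI training,” which is the lack of flexibility at issue here.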
Updates, July 25th: Added statements from Wiens and Anthropic.