AI crawlers, fetchers are blowing up websites; Meta, OpenAI are worst offenders (theregister.com)
87 points by rntn 3 hours ago | 36 comments




I recently, for pretty much the first time ever in 30 years of running websites, had to blanket ban crawlers. I now whitelist a few, but the rest (and all other non-UK visitors) have to pass a CloudFlare challenge[1].

AI crawlers were downloading whole pages and executing all the javascript tens of millions of times a day - hurting performance, filling logs, skewing analytics and costing too much money in Google Maps loads.

Really disappointing.

[1] https://developers.cloudflare.com/cloudflare-challenges/
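
A rough sketch of that policy as a Cloudflare WAF custom rule (a simplified illustration, not the exact rule in use: it assumes the cf.client.bot "known bots" flag as a stand-in for a hand-picked whitelist, with the rule action set to Managed Challenge in the dashboard):

  ip.src.country ne "GB" and not cf.client.bot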


At the same time, it's so practical to ask a question and have it open 25 pages to search and summarize the answer. That's more or less what I was trying to do by hand before. Maybe not 25 websites, because with crap SEO the top 10 results contain BS content, so I curated the list, but the idea is the same, no?

My personal experience is that OpenAI's crawler was hitting a very, very low-traffic website I manage 10s of 1000s of times a minute non-stop. I had to block it in Cloudflare.

Where is caching breaking so badly that this is happening? Are OpenAI failing to use etags or honour cache validity?

Their crawler is vibe-coded.

Same here.

I run a very small browser game (~120 weekly users currently), and until I put its Wiki (utterly uninteresting to anyone who doesn't already play the game) behind a login-wall, the bots were causing massive amounts of spurious traffic. Due to some of the Wiki's data coming live from the game through external data feeds, the deluge of bots actually managed to crash the game several times, necessitating a restart of the MariaDB process.


Sure, but if the fetcher is generating "39,000 requests per minute", then surely something has gone wrong somewhere?

Even if it is generating 39k req/minute, I would expect most of the pages to already be locally cached by Meta, or served statically by their respective hosts. We have been working hard on caching websites and it has been a solved problem for the last decade or so.

Could they be serving no-cache headers? Seems like yet another problem stemming from every website being designed as if it were some dynamic application when nearly all of them are static documents. nginx doing 39k req/min to cacheable pages on an n100 is what you might call "98% idle", not "unsustainable load on web servers".
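
To illustrate: even a tiny microcache in front of a dynamic app makes that kind of crawler burst cheap. A minimal nginx sketch (hypothetical paths and upstream; proxy_cache_path lives in the http block):

  proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=microcache:10m max_size=1g inactive=10m;

  server {
      listen 80;
      location / {
          proxy_cache microcache;
          proxy_cache_valid 200 1m;          # even a 1-minute TTL absorbs repeated bot hits
          proxy_cache_use_stale updating;    # serve stale while one request refreshes the entry
          proxy_pass http://127.0.0.1:8080;  # hypothetical backend app
      }
  }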

The data transfer, on the other hand, could be substantial and costly. Is it known whether these crawlers respect caching at all? Do they send If-Modified-Since/If-None-Match or anything like that?
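
For reference, sending those validators is trivial. A sketch in Python with the requests library (hypothetical URL) of what a polite fetcher could do on a revisit:

  import requests

  url = "https://example.com/page"  # hypothetical target

  # First visit: remember the validators the origin hands out.
  first = requests.get(url, timeout=10)
  headers = {}
  if "ETag" in first.headers:
      headers["If-None-Match"] = first.headers["ETag"]
  if "Last-Modified" in first.headers:
      headers["If-Modified-Since"] = first.headers["Last-Modified"]

  # Revisit: a well-behaved fetcher gets a cheap 304 instead of the full body.
  second = requests.get(url, headers=headers, timeout=10)
  print(second.status_code)  # 304 if the origin honours the validators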


They're not very good at web queries; if you expand the thinking box to see what they're searching for, like half of it is nonsense.

e.g. they'll take an entire sentence the user said and put it in quotes for no reason.

Thankfully search engines started ignoring quotes years ago, so it balances out...


OpenAI straight up DoSed a site I manage for my in-laws a few months ago.

What is it about? I'm curious what kinds of things people ask that floods sites.

I suppose they just keep referring to the website in their chats, and they probably have the search function selected, so before every reply the fetcher hits the website.

A bit off-topic but wtf is this preview image of a spider in the eye? It’s even worse than the clickbait title of this post. I think this should be considered bad practice.

Isn't there a class action lawsuit coming from all this? I see a bunch of people here indicating these scrapers are costing real money to people who host even small niche sites.

Is the reason these large companies don't care because they are large enough to hide behind a bunch of lawyers?


This article and the "report" look like a submarine ad for Fastly services. At no point does it mention the human/bot/AI bot ratio, making it useless for any real insights.

They mention Anubis, Cloudflare, robots.txt – does anyone have experience with how much any of them help?

CDNs like Cloudflare are the best. Anubis is a rate limiter for small websites where you can't or won't use CDNs like Cloudflare. I have used Cloudflare on several medium-sized websites and it works really well.

Anubis's creator says the same thing:

> In most cases, you should not need this and can probably get by using Cloudflare to protect a given origin. However, for circumstances where you can't or won't use Cloudflare, Anubis is there for you.

Source: https://github.com/TecharoHQ/anubis


CloudFlare's Super Bot Fight Mode completely killed the surge in bot traffic for my large forum.

robots.txt is obviously only effective against well-behaved bots (a sample robots.txt for those is sketched below, after the Cloudflare rule). OpenAI etc. are usually well behaved, but there's at least one large network of rogue scraping bots that ignores robots.txt, fakes the user-agent (usually to some old Chrome version) and cycles through millions of different residential proxy IPs. On my own sites, this network is by far the worst offender and the "well-behaved" bots like OpenAI are barely noticeable.

To stop malicious bots like this, Cloudflare is a great solution if you don't mind using it (you can enable a basic browser check for all users and all pages, or write custom rules to only serve a check to certain users or on certain pages). If you're not a fan of Cloudflare, Anubis works well enough for now if you don't mind the branding.

Here's the Cloudflare rule I currently use (the vast majority of bot traffic originates from these countries):

  ip.src.continent in {"AF" "SA"} or
  ip.src.country in {"CN" "HK" "SG"} or
  ip.src.country in {"AE" "AO" "AR" "AZ" "BD" "BR" "CL" "CO" "DZ" "EC" "EG" "ET" "ID" "IL" "IN" "IQ" "JM" "JO" "KE" "KZ" "LB" "MA" "MX" "NP" "OM" "PE" "PK" "PS" "PY" "SA" "TN" "TR" "TT" "UA" "UY" "UZ" "VE" "VN" "ZA"} or
  ip.src.asnum in {28573 45899 55836}
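
On the robots.txt point above: for the crawlers that do behave, opting out is a short file listing their published user-agent tokens. A sketch (the tokens below are the commonly documented ones; check each vendor's docs before relying on them):

  User-agent: GPTBot
  User-agent: ClaudeBot
  User-agent: Google-Extended
  User-agent: CCBot
  User-agent: meta-externalagent
  Disallow: /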

Xe Iaso is my spirit animal.

> "I don't know what this actually gives people, but our industry takes great pride in doing this"

> "unsleeping automatons that never get sick, go on vacation, or need to be paid health insurance that can produce output that superficially resembles the output of human employees"

> "This is a regulatory issue. The thing that needs to happen is that governments need to step in and give these AI companies that are destroying the digital common good existentially threatening fines and make them pay reparations to the communities they are harming."

<3 <3


I run a symbol server, as in, PDB debug symbol server. Amazon's crawler and a few others love requesting the ever loving shit out of it for no obvious reason. Especially since the files are binaries.

I just set a rate limit in Cloudflare because no legitimate symbol server user will ever be excessive.


There's so much bullshit on the internet; how do they make sure they're not training on nonsense?

By paying a pretty penny for non-bullshit data (Scale AI). That, along with Nvidia, are the shovels in this gold rush.

I mean...they don't. That's part of the problem with "AI answers" and such.

Much of it is not training. The LLMs fetch webpages to answer current questions, summarize or translate a page at the user's request, etc.

Any bot that answers daily political questions like Grok has many web accesses per prompt.


While it’s true that chatbots fetch information from websites in response to requests, the load from those requests is tiny compared to the volume of requests indexing content to build training corpuses.

The reason is that user requests are similar to other web traffic because they reflect user interest. So those requests will mostly hit content that is already popular, and therefore well-cached.

Corpus-building crawlers do not reflect current user interest and try to hit every URL available. As a result these hit URLs that are mostly uncached. That is a much heavier load.


But surely there aren't thousands of new corpuses built every minute.

Why would the Register point out Meta and OpenAI as the worst offenders? I'm sure they do not continuously build new corpuses every day. It is probably the search function, as mentioned in the top comments.

Is an AI chatbot fetching a web page to answer a prompt a 'web scraping bot'? If there is a user actively prompting the LLM, isn't it more of a user agent? My mental model, even before LLMs, was that a human being present changes a bot into a user agent. I'm curious if others agree.

The Register calls them "fetchers". They still reproduce the content of the original website without the website gaining anything but additional high load.

I'm not sure how many websites are searched and discarded per query. Since it's the remote, proprietary LLM that initiates the search I would hesitate to call them agents. Maybe "fetcher" is the best term.


But they're (generally speaking) not being asked for the contents of one specific webpage, fetching that, and summarizing it for the user.

They're going out and scraping everything, so that when they're asked a question, they can pull a plausible answer from their dataset and summarize the page they found it on.

Even the ones that actively go out and search/scrape in response to queries aren't just scraping a single site. At best, they're scraping some subset of the entire internet that they have tagged as being somehow related to the query. So even if what they present to the user is a summary of a single webpage, that is rarely going to be the product of a single request to that single webpage. That request is going to be just one of many, most of which are entirely fruitless for that specific query: purely extra load for their servers, with no gain whatsoever.


I'm absolutely pro AI crawlers. The internet is so polluted with garbage, compliments of marketing. My AI agent should find and give me concise and precise answers.

The second I get hit with bot traffic that makes my server heat up, I would just slam some aggressive anti-bot stuff in front. Then you, my friend, are getting nothing with your fancy AI agent.

I've never run any public-facing servers, so maybe I'm missing the experience of your frustration. But mine, as a "consumer", is wanting clean answers, like what you'd expect when asking your own employee for information.

So the fancy AI agent will have to get really fancy and mimic human traffic, and all is good until the server heats up from all those separate human trafficionados - then what?


