They just don't need to hammer sites into the ground to do it. This wouldn't be an issue if the AI companies were a bit more respectful of their data sources, but they are not; they don't care.

All this effort to block AI scrapers would be unnecessary if they respected rate limits, knew how to back off when a server starts responding too slowly, or cached frequently visited pages. Instead, some of these companies will do everything, including routing through residential ISPs, to ensure they can piledrive the website of some poor dude who's just really into lawnmowers, or the git repo of some open source developer who just wants to share their work.
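And none of that is hard to implement. Here's a minimal sketch of what that baseline respect could look like (Python with the requests library; the bot name, thresholds, and in-memory cache are illustrative assumptions, not any real crawler's config):

    import time
    import requests

    CACHE = {}            # url -> (fetched_at, body); stand-in for a real HTTP cache
    CACHE_TTL = 3600      # reuse responses for an hour instead of re-fetching
    MIN_DELAY = 1.0       # baseline: at most one request per second
    SLOW_THRESHOLD = 2.0  # responses slower than this mean the server is straining

    def polite_fetch(url, delay=MIN_DELAY):
        """Fetch a URL while caching, rate-limiting, and backing off."""
        cached = CACHE.get(url)
        if cached and time.time() - cached[0] < CACHE_TTL:
            return cached[1]  # serve from cache, no request at all

        while True:
            start = time.time()
            resp = requests.get(
                url, timeout=30,
                headers={"User-Agent": "polite-bot/0.1 (contact@example.com)"},
            )
            elapsed = time.time() - start

            if resp.status_code == 429:  # server explicitly says "too fast"
                # Retry-After can also be an HTTP date; assuming seconds here
                wait = float(resp.headers.get("Retry-After", delay * 2))
                time.sleep(wait)
                delay *= 2  # exponential backoff
                continue

            if elapsed > SLOW_THRESHOLD:
                delay *= 2   # server is struggling: slow down further
            time.sleep(delay)  # enforce spacing before the next request

            CACHE[url] = (time.time(), resp.text)
            return resp.text

Thirty-odd lines, and the lawnmower guy's server stays up.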

Very few people would actually be against AI crawlers if the crawlers showed just the tiniest amount of respect, but they don't. I think Drew DeVault said it best: "Please stop externalizing your costs directly into my face."
