This is a cute idea, but I wonder what the sustainable solution is to this emerging, fundamental problem: As content publishers, we want our content to be accessible to everyone, and we're even willing to pay for server costs sized to our intended audience -- but a new, outsized flood of scrapers was not part of that cost calculation, and it is messing up the plan.
It seems all options have major trade-offs. We can host on big social media and lose all that control and independence. We can pay for outsized infrastructure just to feed the scrapers, but the cost may actually be prohibitive, and it seems like such a waste to begin with. We can move as much as possible to SSG and put it all behind Cloudflare, but that comes with vendor lock-in and just isn't architecturally feasible in many applications. We can do real "verified identities" for bots and just let through the ones we know and like, but that only perpetuates corporate control and makes healthy upstart competition (like Kagi) much more difficult.
If the LLMs are the "new Google", one solution would be for them to pay you when they scrape your content, so you both have an incentive: you're more willing to be scraped, and they'll try not to abuse you because every visit costs them. If your content is valuable and keeps showing up in prompts, they'll scrape you more, and so on. Honestly, I can't see other solutions. For now they've decided to go full evil and abuse everyone.
The only way that would work is if they were legally required to. And even then, it probably wouldn’t work unless failure to comply was a criminal offense. You know what? Even then it still might not work.
Or turn your blog into a frontend/backend combo. Keep the frontend as an SPA so the page itself has nothing on it, and have your backend send the data in encrypted form; the AI scrapers would then need to do a tonne of work to figure out what your data is. If everyone uses a different key and a different encryption algorithm, suddenly all their server time is burned decrypting stuff.
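To make the idea concrete, here's a minimal sketch of what the backend side could look like, assuming a Node server and AES-GCM; the function names (encryptPayload etc.) and the choice of algorithm are illustrative, not a prescription:

```typescript
// Hypothetical sketch of the "encrypted API response" idea: the API never
// serves plaintext, and the SPA decrypts the blob client-side.
import { randomBytes, createCipheriv } from "node:crypto";

// Per-deployment secret; in the scheme described above each site picks its
// own key (and could even vary the algorithm) to waste scrapers' time.
const KEY = randomBytes(32); // AES-256 key

function encryptPayload(data: unknown): { iv: string; tag: string; body: string } {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", KEY, iv);
  const body = Buffer.concat([
    cipher.update(JSON.stringify(data), "utf8"),
    cipher.final(),
  ]);
  return {
    iv: iv.toString("base64"),
    tag: cipher.getAuthTag().toString("base64"),
    body: body.toString("base64"),
  };
}

// The SPA would receive this blob and decrypt it in the browser
// (e.g. with Web Crypto's crypto.subtle.decrypt) before rendering.
console.log(encryptPayload({ posts: ["hello world"] }));
```

Since the browser has to decrypt it, the key (or the code that derives it) ultimately ships to the client, so this is obfuscation rather than real secrecy, which is exactly the trade-off discussed below.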
Ehm, what would stop AI scrapers from using a browser like a normal user would? Googlebot already does: it can execute JS and read SPA client-side generated content, which proves it can be done at scale, and I'm pretty sure some AI scrapers already do it.
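Just to illustrate how low the bar is, here's a rough sketch assuming Playwright; the page's own JS runs, including any client-side decryption, so the rendered text is available to the scraper just as it is to a human:

```typescript
// Sketch of a scraper driving a real headless browser (Playwright).
import { chromium } from "playwright";

async function scrapeRendered(url: string): Promise<string> {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  // Wait until network activity settles so the SPA has hydrated.
  await page.goto(url, { waitUntil: "networkidle" });
  const text = await page.innerText("body"); // content after client-side rendering
  await browser.close();
  return text;
}

scrapeRendered("https://example.com").then(console.log);
```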
What if the scrapers' IPs are millions of smartphones? If I were as evil as an AI scraper company that ignores robots.txt, I would totally build/buy thousands of small mobile games/apps and use them as jumphosts to scrape the web. This is probably happening already.
In my case my application doesn't use pagination, it uses infinite scroll. Even with a million devices running Google Chrome, they would all load page 1, and if that progressively decreasing req/minute thing is implemented, once they start scrolling endlessly they will all hit the rate limits sooner or later. The thing is, a human is not going to scroll down 100 pages, but a bot will. Once that difference has been factored in, it won't matter how many unique devices they bring into the battle.
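A rough sketch of that "progressively decreasing req/minute" idea, assuming an Express backend serving the infinite-scroll feed; the per-IP tracking, the depth heuristic, and the exact numbers are all illustrative assumptions, not the commenter's actual implementation:

```typescript
// The per-minute budget shrinks as a client requests deeper pages, so a bot
// that scrolls forever trips the limit while a human who stops after a few
// screens never notices it.
import express from "express";

const app = express();
const seen = new Map<string, { depth: number; windowStart: number; count: number }>();

app.get("/api/feed", (req, res) => {
  const key = req.ip ?? "unknown";
  const page = Number(req.query.page ?? 1);
  const now = Date.now();

  const state = seen.get(key) ?? { depth: 0, windowStart: now, count: 0 };
  if (now - state.windowStart > 60_000) {
    state.windowStart = now; // new one-minute window
    state.count = 0;
  }
  state.depth = Math.max(state.depth, page);
  state.count += 1;
  seen.set(key, state);

  // 60 req/min at page 1, shrinking with scroll depth down to a floor of 5.
  const budget = Math.max(5, 60 - state.depth);
  if (state.count > budget) {
    res.status(429).send("Too many requests, slow down");
    return;
  }
  res.json({ page, items: [] /* ...page data... */ });
});

app.listen(3000);
```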
So why not just do that for these scrapers, rather than complicate it with encrypting and decrypting, which is just obfuscation since the decryption key is necessarily available to the end user?
Tbh, I didn't add the encrypt/decrypt for the AI scrapers at all. A lot of people were previously trying to download data directly from the API that my frontend uses, and that kinda pissed me off, so I added encryption/decryption to the mix and will release the newer version. As I mentioned earlier: can someone sit down and decrypt it? Yes. Will 99% of them bother? No! That's where I win.
At this point it seems like the problem isn't internet bandwidth, but simply that it's expensive for a server to handle all the requests because it has to process each one. Does that seem correct?
So, what are we to do?