Yes, and then we can avoid the entire issue. It's patronizing to assume users wouldn't notice a 10x or 50x slowdown. You can tell those who think that way are not web developers; we know every millisecond of latency has a real, nonlinear financial cost.
Of course, the issue then becomes: what latency and cost does a scraper incur to maintain and load-balance across a large pool of IPs? If that turns out to be cheap for scrapers, we need another solution. Perhaps the user's browser computes tokens in the background and then serves them to sites alongside a certificate or hash that binds them to the user (to prevent people from simply buying and selling these tokens).
We solve the latency issue by moving the work offline, and just accept the tradeoff that users will periodically have to spend some compute to identify themselves in an increasingly automated world.
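A minimal sketch of what that could look like, assuming a hashcash-style proof of work and an Ed25519 key for the binding (the names mint_token and verify_token, the difficulty value, and the token format are my own illustrative choices, not a real spec): the browser grinds out tokens tied to its own public key in the background, and a site only accepts one if the proof of work checks out and the client can also sign a fresh challenge with the matching private key, so a purchased token is useless without that key.

```python
# Illustrative sketch only -- names, difficulty, and token format are assumptions.
import hashlib
import os
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

DIFFICULTY_BITS = 18  # toy value; ~260k hash attempts on average


def mint_token(pubkey: bytes, difficulty: int = DIFFICULTY_BITS) -> bytes:
    """Done offline / in the background: find a nonce such that
    sha256(pubkey || nonce) has `difficulty` leading zero bits."""
    target = 1 << (256 - difficulty)
    counter = 0
    while True:
        nonce = counter.to_bytes(8, "big")
        digest = hashlib.sha256(pubkey + nonce).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        counter += 1


def verify_token(pubkey: bytes, nonce: bytes, difficulty: int = DIFFICULTY_BITS) -> bool:
    """Server side: a single hash to confirm the work is bound to this public key."""
    digest = hashlib.sha256(pubkey + nonce).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty))


if __name__ == "__main__":
    # Browser: long-lived identity key, token minted in the background.
    sk = Ed25519PrivateKey.generate()
    pk = sk.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw
    )
    t0 = time.time()
    token = mint_token(pk)
    print(f"minted token in {time.time() - t0:.2f}s")

    # Site: fresh challenge; the client must sign it to prove it owns the key
    # the token was ground against, so buying someone else's token gains nothing.
    challenge = os.urandom(32)
    signature = sk.sign(challenge)

    pow_ok = verify_token(pk, token)
    try:
        Ed25519PublicKey.from_public_bytes(pk).verify(signature, challenge)
        sig_ok = True
    except InvalidSignature:
        sig_ok = False
    print("accepted" if pow_ok and sig_ok else "rejected")
```

A real deployment would presumably also want expiry and replay protection on tokens, but the key binding is the piece that kills a resale market.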