Comment by shiomiru
If it's such a common issue, I'd have thought Google already ignored searches from clients that don't enable JavaScript when computing results?
Besides, you already got auto-blocked for using it in slightly unusual ways. Google hasn't worked over Tor in forever, and recently I also got blocked a few times just for searching through my text browser, which uses libcurl for its network stack. So I imagine a botnet built on curl wouldn't last very long either.
My guess is it had more to do with squeezing out more profit from that supposed 0.1% of users.
Given that curl-impersonate[1] exists, and that a major player in this space is even looking for experience with that library, I'm pretty sure forcing the execution of JS that touches the DOM would be a much more effective deterrent against scraping than fingerprint checks.
[1] https://github.com/lwthiker/curl-impersonate
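To illustrate what I mean by a DOM-bound check: here's a hypothetical sketch (all names are made up, not any real anti-bot product) where the token mixes a server nonce with a value that only a layout engine can supply, e.g. an element's offsetWidth. A client that just fetches HTML, even with a perfectly spoofed TLS fingerprint, never produces the right token:

```javascript
// Hypothetical sketch of a DOM-bound challenge token.
// The server embeds a nonce in the page; the script must combine it with a
// value that only exists after layout (e.g. a probe element's offsetWidth),
// so plain HTTP clients like curl can't answer without a real DOM.
function challengeToken(nonce, layoutValue) {
  // Simple deterministic mix (31-based rolling hash) of nonce + layout value.
  let h = layoutValue >>> 0;
  for (const ch of nonce) {
    h = ((h * 31) + ch.charCodeAt(0)) >>> 0;
  }
  return h.toString(16);
}

// In a real page, layoutValue would come from the rendering engine, e.g.:
//   const probe = document.createElement("div");
//   probe.textContent = nonce;
//   document.body.appendChild(probe);
//   const token = challengeToken(nonce, probe.offsetWidth);
```

The point isn't that the hash is hard to compute, it's that obtaining layoutValue requires running a layout engine, which is exactly the cost you want to impose on scrapers.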