Comment by pwg 2 days ago

The reality is that the ratio of "total websites" to "websites with an API" is likely on the order of 1M:1 (a guess). From the scraper's perspective, the chances of even finding a website with an API are so low that they don't bother. Retrieving the HTML gets them 99% of what they want, and works with 100% of the websites they scrape.

Investing the effort to 1) recognize, without programmer intervention, that some random website has an API and then 2) automatically, without further programmer intervention, retrieve the website data from that API and make intelligent use of it, is just not worth it to them when retrieving the HTML just works every time.

edit: corrected inverted ratio

JimDabell 2 days ago

I’ve implemented a search crawler before, and detecting and switching to the WordPress API was one of the first things I implemented because it’s such an easy win. Practically every WordPress website had it open and there are a vast number of WordPress sites. The content that you can pull from the API is far easier to deal with because you can just pull all the articles and have the raw content plus metadata like tags, without having to try to separate the page content from all the junk that whatever theme they are using adds.

> The reality is that the ratio of "total websites" to "websites with an API" is likely on the order of 1M:1 (a guess).

This is entirely wrong. Aside from the vast number of WordPress sites, the other APIs the article mentions are things like ActivityPub, oEmbed, and sitemaps. Add on things like Atom, RSS, JSON Feed, etc. and the majority of sites have some kind of alternative to HTML that is easier for crawlers to deal with. It’s nothing like 1M:1.
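To make the point concrete, here is a minimal sketch (standard library only, names are my own) of how a crawler might discover the feed alternatives mentioned above: most sites that offer RSS, Atom, or JSON Feed advertise them with `<link rel="alternate">` tags in the page head.

```python
# Sketch: discover machine-readable alternatives (RSS, Atom, JSON Feed)
# advertised via <link rel="alternate"> tags. Assumes nothing beyond the
# standard library; FEED_TYPES is a non-exhaustive list of common MIME types.
from html.parser import HTMLParser

FEED_TYPES = {
    "application/rss+xml",
    "application/atom+xml",
    "application/feed+json",
    "application/json",  # older JSON Feed sites use this
}

class FeedLinkParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.feeds = []  # list of (mime_type, href) tuples

    def handle_starttag(self, tag, attrs):
        if tag != "link":
            return
        a = dict(attrs)
        rel = (a.get("rel") or "").lower()
        mime = (a.get("type") or "").lower()
        if rel == "alternate" and mime in FEED_TYPES and a.get("href"):
            self.feeds.append((mime, a["href"]))

def discover_feeds(html: str):
    """Return (mime_type, href) pairs for any advertised feeds."""
    parser = FeedLinkParser()
    parser.feed(html)
    return parser.feeds
```

A crawler can run this check once per page and fall back to HTML scraping only when the list comes back empty.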

> Investing the effort to 1) recognize, without programmer intervention, that some random website has an API and then 2) automatically, without further programmer intervention, retrieve the website data from that API and make intelligent use of it, is just not worth it to them when retrieving the HTML just works every time.

You are treating this like it’s some kind of open-ended exercise where you have to write code to figure out APIs on the fly. This is not the case. This is just “Hey, is there a <link rel=https://api.w.org/> in the page? Pull from the WordPress API instead”. That gets you better quality content, more efficiently, for >40% of all sites just by implementing one API.
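The whole check fits in a few lines. This is a hedged sketch, not JimDabell's actual crawler code: it looks for the `https://api.w.org/` link relation WordPress emits and, if found, pulls posts from the standard `wp/v2/posts` route (function names and the regex are my own).

```python
# Sketch: detect the WordPress REST API advertised in a page and pull
# posts from it instead of scraping theme HTML. Standard library only.
import json
import re
import urllib.request

def wordpress_api_base(html: str):
    """Return the REST API base URL if the page advertises one.

    WordPress emits a tag like:
      <link rel='https://api.w.org/' href='https://example.com/wp-json/' />
    """
    m = re.search(
        r"<link[^>]+rel=['\"]https://api\.w\.org/['\"][^>]*href=['\"]([^'\"]+)['\"]",
        html,
    )
    return m.group(1) if m else None

def fetch_posts(api_base: str, per_page: int = 10):
    """Pull raw posts (rendered content plus metadata) from the posts route."""
    url = f"{api_base.rstrip('/')}/wp/v2/posts?per_page={per_page}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

# Usage (network calls shown but not run here):
# html = urllib.request.urlopen("https://example.com/").read().decode()
# base = wordpress_api_base(html)
# if base:
#     for post in fetch_posts(base):
#         print(post["title"]["rendered"], post["date"])
```

Each post in the response already carries title, date, tags, and raw content, which is exactly the "no theme junk" win described above.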

alsetmusic a day ago

> Investing the effort to 1) recognize, without programmer intervention, that some random website has an API

Hrm…

>> Like most WordPress blogs, my site has an API.

I think WordPress is big enough to warrant the effort. The fact that AI companies are destroying the web isn't news. But they could certainly do it with a little less jackass. I support this take.

danielheath 2 days ago

Right - the scraper operators already have an implementation which can use the HTML; why would they waste programmers' time writing an API client when the existing system already does what they need?

sdenton4 2 days ago

If only there were some convenient technology that could help us sort out these many small cases automatically...