Comment by JimDabell 2 days ago

I’ve implemented a search crawler before, and detecting the WordPress API and switching to it was one of the first things I added because it’s such an easy win. Practically every WordPress website had it open, and there are a vast number of WordPress sites. The content you can pull from the API is far easier to deal with: you can pull all the articles and get the raw content plus metadata like tags, without having to separate the page content from all the junk that whatever theme the site is using adds.
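
For a sense of what that looks like, here is a minimal sketch in Python (the function name and the use of `requests` are mine, not from my original crawler) of pulling every post from a site’s default `/wp-json/` route, with `_embed` so tag and category names come back alongside the content:

```python
import requests

def fetch_wordpress_posts(site):
    """Yield every post from a WordPress site via its REST API.

    Assumes the default /wp-json/ route prefix; some sites remap it,
    in which case the root should come from the page's
    <link rel="https://api.w.org/"> element instead.
    """
    endpoint = f"{site.rstrip('/')}/wp-json/wp/v2/posts"
    page = 1
    while True:
        # _embed pulls in linked resources such as tag/category names;
        # a bare post only carries numeric term IDs.
        resp = requests.get(
            endpoint,
            params={"per_page": 100, "page": page, "_embed": "1"},
            timeout=30,
        )
        resp.raise_for_status()
        for post in resp.json():
            yield {
                "url": post["link"],
                "date": post["date"],
                "title": post["title"]["rendered"],
                # Article HTML only: no theme navigation, sidebars, or ads.
                "content": post["content"]["rendered"],
                # _embedded["wp:term"] is a list of term lists
                # (categories, tags, custom taxonomies).
                "terms": [
                    term["name"]
                    for group in post.get("_embedded", {}).get("wp:term", [])
                    for term in group
                ],
            }
        # The X-WP-TotalPages header tells the client when to stop paginating.
        if page >= int(resp.headers.get("X-WP-TotalPages", "1")):
            break
        page += 1
```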

> The reality is that the ratio of "total websites" to "websites with an API" is likely on the order of 1M:1 (a guess).

This is entirely wrong. Aside from the vast number of WordPress sites, the other APIs the article mentions are things like ActivityPub, oEmbed, and sitemaps. Add on things like Atom, RSS, JSON Feed, etc., and the majority of sites have some kind of alternative to HTML that is easier for crawlers to deal with. It’s nothing like 1M:1.
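
Most of those alternatives advertise themselves the same way. As a sketch (stdlib-only Python; the class name is mine), feed autodiscovery is just scanning the page head for `<link rel="alternate">` elements with a feed MIME type:

```python
from html.parser import HTMLParser

# MIME types used in <link rel="alternate"> feed autodiscovery.
FEED_TYPES = {
    "application/rss+xml",    # RSS
    "application/atom+xml",   # Atom
    "application/feed+json",  # JSON Feed (older sites may use application/json)
}

class FeedFinder(HTMLParser):
    """Collect feed URLs advertised in a page's <head>."""

    def __init__(self):
        super().__init__()
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        if tag != "link":
            return
        a = dict(attrs)
        if (a.get("rel") or "").lower() == "alternate" and a.get("type") in FEED_TYPES:
            self.feeds.append(a.get("href"))

# Usage: feed the fetched page source to the parser.
finder = FeedFinder()
finder.feed("<head><link rel='alternate' type='application/rss+xml' href='/feed/'></head>")
print(finder.feeds)  # ['/feed/']
```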

> Investing the effort to 1) recognize, without programmer intervention, that some random website has an API and then 2) automatically, without further programmer intervention, retrieve the website data from that API and make intelligent use of it, is just not worth it to them when retrieving the HTML just works every time.

You are treating this like it’s some kind of open-ended exercise where you have to write code to figure out APIs on the fly. This is not the case. This is just “Hey, is there a <link rel="https://api.w.org/"> in the page? Pull from the WordPress API instead.” That gets you better-quality content, more efficiently, for >40% of all sites just by implementing one API.
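
Concretely, detection is a few lines. A minimal sketch (the function name and regexes are mine; a real crawler would reuse its existing HTML parser rather than regexes): WordPress advertises the API root both in an HTTP `Link` response header and as a `<link>` element in the page head, so either can be checked:

```python
import re
import requests

API_LINK = re.compile(
    r'<link[^>]+rel=["\']?https://api\.w\.org/["\']?[^>]*>', re.I
)
HREF = re.compile(r'href=["\']([^"\']+)["\']')

def wordpress_api_root(page_url):
    """Return the WordPress REST API root advertised by a page, or None.

    Checks the HTTP Link header first (WordPress sends
    Link: <https://example.com/wp-json/>; rel="https://api.w.org/"),
    then falls back to scanning the HTML for the <link> element.
    """
    resp = requests.get(page_url, timeout=30)
    link_header = resp.headers.get("Link", "")
    m = re.search(r'<([^>]+)>;\s*rel="https://api\.w\.org/"', link_header)
    if m:
        return m.group(1)
    tag = API_LINK.search(resp.text)
    if tag:
        href = HREF.search(tag.group(0))
        if href:
            return href.group(1)
    return None
```

Once that returns a URL, the crawler switches to pulling `wp/v2/posts` from it, as in the earlier sketch, instead of scraping the rendered pages.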