Comment by hyperpape 2 days ago

The reality is that HTML+CSS+JS is the canonical form, because it is the form that humans consume, and, at least for the time being, we're the most important consumer.

The API may be equivalent, but it is still conceptually secondary. If it went stale, readers would still see the site, and it makes sense for a scraper to follow what readers can see (or, alternatively, to consume both and mine both).

The author might be right to be annoyed with the scrapers for many other reasons, but I don't think this is one of them.

pwg 2 days ago

The reality is that the ratio of "total websites" to "websites with an API" is likely on the order of 1M:1 (a guess). From the scraper's perspective, the chances of even finding a website with an API are so low that they don't bother. Retrieving the HTML gets them 99% of what they want, and works with 100% of the websites they scrape.

Investing the effort to 1) recognize, without programmer intervention, that some random website has an API, and then 2) automatically, without further programmer intervention, retrieve the website data from that API and make intelligent use of it, is simply not worth it to them when retrieving the HTML just works every time.

edit: corrected inverted ratio

  • JimDabell 2 days ago

    I’ve implemented a search crawler before, and detecting and switching to the WordPress API was one of the first things I implemented because it’s such an easy win. Practically every WordPress website had it open and there are a vast number of WordPress sites. The content that you can pull from the API is far easier to deal with because you can just pull all the articles and have the raw content plus metadata like tags, without having to try to separate the page content from all the junk that whatever theme they are using adds.

    > The reality is that the ratio of "total websites" to "websites with an API" is likely on the order of 1M:1 (a guess).

    This is entirely wrong. Aside from the vast number of WordPress sites, the other APIs the article mentions are things like ActivityPub, oEmbed, and sitemaps. Add on things like Atom, RSS, JSON Feed, etc. and the majority of sites have some kind of alternative to HTML that is easier for crawlers to deal with. It’s nothing like 1M:1.

    > Investing the effort to 1) recognize, without programmer intervention, that some random website has an API, and then 2) automatically, without further programmer intervention, retrieve the website data from that API and make intelligent use of it, is simply not worth it to them when retrieving the HTML just works every time.

    You are treating this like it’s some kind of open-ended exercise where you have to write code to figure out APIs on the fly. This is not the case. This is just “Hey, is there a <link rel=https://api.w.org/> in the page? Pull from the WordPress API instead”. That gets you better quality content, more efficiently, for >40% of all sites just by implementing one API.
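
    Roughly (a minimal sketch assuming the standard WordPress REST conventions; requests and BeautifulSoup here are stand-ins, not any particular crawler's actual stack):

        # Minimal sketch: detect the WordPress REST API from a page, then pull
        # structured posts instead of scraping the themed HTML.
        import requests
        from bs4 import BeautifulSoup

        def wordpress_api_root(page_url):
            """Return the API root advertised via <link rel="https://api.w.org/">, if any."""
            html = requests.get(page_url, timeout=10).text
            link = BeautifulSoup(html, "html.parser").find("link", rel="https://api.w.org/")
            return link.get("href") if link else None

        def fetch_posts(api_root, per_page=10):
            """Pull posts as JSON: title, rendered content, tags -- no theme junk."""
            resp = requests.get(api_root.rstrip("/") + "/wp/v2/posts",
                                params={"per_page": per_page}, timeout=10)
            resp.raise_for_status()
            return resp.json()

        api_root = wordpress_api_root("https://example-blog.com/")  # hypothetical site
        if api_root:
            posts = fetch_posts(api_root)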

  • alsetmusic a day ago

    > Investing the effort to 1) recognize, without programmer intervention, that some random website has an API

    Hrm…

    >> Like most WordPress blogs, my site has an API.

    I think WordPress is big enough to warrant the effort. The fact that AI companies are destroying the web isn't news. But they could certainly do it with a little less jackass. I support this take.

  • danielheath 2 days ago

    Right - the scraper operators already have an implementation which can use the HTML; why would they waste programmers' time writing an API client when the existing system already does what they need?

  • sdenton4 2 days ago

    If only there were some convenient technology that could help us sort out these many small cases automatically...

dlcarrier 2 days ago

Not only is abandonment of the API possible, but hosts may restrict it on purpose, requiring paid access to use accessibility/usability tools.

For example, Reddit encouraged those tools to use the API; then, once it gained traction, they began charging exorbitant fees, effectively blocking such tools.

  • culi 2 days ago

    That's a good point. Anyone who used the API properly was left with egg on their face, while anyone who misused the site and just scraped HTML ended up unharmed.

    • ryandrake 2 days ago

      Web developers in general have a horrible track record with many notable "rug pulls" and "lol the old API is deprecated, use the new one" behaviors. I'm not surprised that people don't trust APIs.

      • dolmen 2 days ago

        This isn't about people.

        • KK7NIL 2 days ago

          APIs are always about people; they're an implicit contract. This is also why API design is largely the only difficult part of software design (there are sometimes tough technical challenges too, but they are much easier to plan for and contain).

modeless 2 days ago

I want AI to use the same interfaces humans use. If AIs use APIs designed specifically for them, then eventually in the future the human interface will become an afterthought. I don't want to live in a world where I have to use AI because there's no reasonable human interface to do anything anymore.

You know how you sometimes have to call a big company's customer support and try to convince some rep in India to press the right buttons on their screen to fix your issue, because they have a special UI you don't get to use? Imagine that, but it's an AI, and everything works that way.

sowbug 2 days ago

I'm reminded of Larry Wall's advice that programs should be "strict in what they emit, and liberal in what they accept." Which, to the extent the world follows this philosophy, has caused no end of misery. Scrapers are just recognizing reality and being liberal in what they accept.

llbbdd 2 days ago

Yeah, APIs exist because computers used to require very explicitly structured data. With LLMs, a lot of the ambiguity of HTML disappears as far as a scraper is concerned.

  • swatcoder 2 days ago

    > With LLMs, a lot of the ambiguity of HTML disappears as far as a scraper is concerned

    The more effective way to think about it is that "the ambiguity" silently gets blended into the data. It might disappear from superficial inspection, but it's not gone.

    The LLM is essentially just doing educated guesswork without leaving a consistent or thorough audit trail. This is a fairly novel capability, and there are times when it can be sufficient, so I don't mean to understate it.

    But it's a different thing than making ambiguity "disappear" when it comes to systems that actually need true accuracy, specificity, and non-ambiguity.

    Where it matters, there's no substitute for "very explicit structured data" and never really can be.

    • llbbdd 2 days ago

      Disappear might be an extremely strong word here, but yeah, as you said, as the delta closes between what a human user and an AI user are able to interpret from the same text, it becomes good enough for some nines of cases. Even if on paper it became mathematically "good enough" for high-risk cases like medical or government data, structured data will still have a lot of value. I just think more and more structured data is going to be cleaned up from unstructured data, except for those higher-precision cases.

  • dmitrygr 2 days ago

    "computers used to require"

    Please do not write code. Ever. Thinking like this is why people now think that 16GB RAM is too little and 4 cores is the minimum.

    API -> ~200,000 cycles to get data, RAM O(size of data), precise result

    HTML -> LLM -> ~30,000,000,000 cycles to get data, RAM O(size of LLM weights), results partially random and unpredictable
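
    Taking the figures above at face value (they are rough estimates, not measurements), the gap is about five orders of magnitude:

        # Back-of-envelope using the estimates above (guesses, not measurements).
        api_cycles = 200_000             # fetch and parse a structured API response
        llm_cycles = 30_000_000_000      # run an LLM over the scraped HTML
        print(llm_cycles // api_cycles)  # ~150,000x more compute per page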

    • hartator 2 days ago

      If the API doesn’t have the data you want, this point is moot.

      • dotancohen 2 days ago

        Not GP, but I disagree. I've written successful, robust web scrapers without LLMs for decades.

        What do you think the E in Perl stands for?

    • llbbdd 2 days ago

      A lot of software engineering is recognizing the limitations of the domain that you're trying to work in, and adapting your tools to that environment, but thank you for your contribution to the discussion.

      EDIT: I hemmed and hawed about responding to your attitude directly, but do you talk to people anywhere but here? Is this the attitude you would bring to normal people in your life?

      Dick Van Dyke is 100 years old today. Do you think the embittered and embarrassing way you talk to strangers on the internet is positioning your health to enable you to live that long, or do you think the positive energy he brings to life has an effect? Will you readily die to support your animosity?

    • shadowgovt 2 days ago

      On the other hand, I already have an HTML parser, and your bespoke API would require a custom tool to access.

      Multiply that by every site, and that approach does not scale. Parsing HTML scales.

      • swiftcoder 2 days ago

        You already have a JSON and XML parser too, and the website offers standardised APIs in both of those formats.

        • shadowgovt 2 days ago

          Not standardized enough; I can't guarantee the API is RESTful, I can't know a priori what the response format is (arbitrary servers on the internet can't be trusted to set Content-Type headers properly) or how to crawl it given the response data, etc. We ultimately never solved the problem of universal self-describing APIs, so a general crawling service can't trust that they work.

          In contrast, I can always trust that whatever is returned for the browser to consume is in a format the browser can consume, because if it isn't, the site isn't a website. HTML is pretty much the only format guaranteed to work.
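
          A minimal sketch of the header problem (hypothetical endpoint; you sniff the body rather than trusting the server):

              import requests

              resp = requests.get("https://example.com/api/items", timeout=10)  # hypothetical
              # Servers frequently mislabel responses, so try parsing instead of
              # trusting the Content-Type header.
              try:
                  data = resp.json()  # parses fine even if the header says text/html
              except ValueError:
                  data = None         # not JSON after all; fall back to HTML parsing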

      • dmitrygr 2 days ago

        parsing html -> lazy but ok

        using an llm to parse html -> please do not

        • llbbdd 2 days ago

          > Lazy but ok

          You're absolutely welcome to waste your own free time on whatever feels right

          > using an llm to parse html -> please do not

          Have you used any of these tools with a beginner's mindset in, like, five years?

    • venturecruelty 2 days ago

      Weeping and gnashing of teeth because RAM is expensive, and then you learn that people buy 128 GB for their desktops so they can ask a chatbot how to scrape HTML. Amazing.

      • llbbdd a day ago

        The more I've thought about it, the RAM part is hardly the craziest bit. Where the fuck do you even buy a computer with less than 4 cores in 2025? Pawn shop?

      • llbbdd 2 days ago

        Isn't it ridiculous? This is Hacker News. Nobody with the spare time to post here is living on the street. Buy some RAM or rent it. I honestly can't believe how many people on here I see bemoaning the fact that they haven't upgraded their laptops in 20 years, as if it's somehow anyone else's problem.

      • lechatonnoir 2 days ago

        It's kind of hard to tell what your position is here. Should people not ask chatbots how to scrape HTML? Should people not purchase RAM to run chatbots locally?

      • shadowgovt 2 days ago

        I may be out of the loop; is system RAM key for LLMs? I thought they were mostly graphics RAM constrained.

cr125rider 2 days ago

Exactly. This parallels "the most accurate docs are the passing test cases".

  • btown 2 days ago

    I like to go a level beyond this and say: "Passing tests are fine and all, but the moment your tests mock or record-replay even the smallest bit of external data, the only accurate docs are your production error logs, or lack thereof."

1718627440 a day ago

This is something the XML ecosystem (which is now getting killed) actually got right, and it's the primary reason people don't want it killed.

echelon 2 days ago

[flagged]

  • edent 2 days ago

    As I wrote:

    > Like most WordPress blogs, my site has an API.

    WordPress, for all its faults, powers a fair number of websites. The schema is identical across all of them.

    • gldrk 2 days ago

      If you decide to move your blog to another platform, are you going to maintain API compatibility?

  • sdenton4 2 days ago

    Shouldn't the LLM that all this scraping is powering be able to help figure out which websites have an API and how to use it?

    • _puk 2 days ago

      Is there a meta tag to point to the API / MCP?