Permit 2 days ago

> Once network effects crowded a few winners, the drawbridges slowly pulled up. Previously simple APIs evolved into complicated layers of access controls and pricing tiers. Winning platforms adjusted their APIs so you could support their platforms, but not build anything competitive. Perhaps the best example of this was Twitter’s 2012 policy adjustment which limited client 3rd party apps to a maximum of 100,000 users (they’ve since cut off all 3rd party clients).

One thing I haven't seen written about much is how these APIs turned into massive liabilities for privacy. If a Twitter API allows me to siphon tweets off of Twitter, you can never delete them. If a Facebook API allows (user-approved apps) to view the names of my friends and the pages they like, this data can be used to create targeted political ads for those users[1].

So a company considering creating a public-facing API must deal with the fact that:

1. This API could be helping my competitor

2. This API makes internal changes more difficult (typically there is a strong effort to maintain backwards compatibility).

3. If company XXX uses the API to extract data (that users have given them explicit access to), the ensuing scandal will not be called the "XXX Data Scandal", but rather the "MYCOMPANY-XXX Data Scandal"[1].

[1] https://en.wikipedia.org/wiki/Facebook%E2%80%93Cambridge_Ana...

  • mb7733 2 days ago

    > One thing I haven't seen written about much is how these APIs turned into massive liabilities for privacy. If a Twitter API allows me to siphon tweets off of Twitter, you can never delete them.

    Is that really a privacy concern? Tweets are public. As soon as you post them, others can just save the page. No need for an API.

    • bloppe 2 days ago

      Data brokers don't care about easy APIs anyway. They'll save that tweet even if it takes a dozen engineers, a global botnet, and millions in cloud spend to do it at scale.

  • skybrian 2 days ago

    Nowadays we expect popular tweets to be screenshotted, just as popular webpages are usually archived somewhere.

    Bluesky has decided that it’s not a bug and is not going to be fixed: you can delete a post, but someone could have saved it, and worse, it’s digitally signed.

    • pfraze 2 days ago

      We generally would characterize the monopolies as the bug, not the public nature of the data

      • skybrian 2 days ago

        Yeah, I don’t think it’s the wrong decision. Maybe I should have called it a design tradeoff.

        Edit: editing posts is nice to have.

      • bloppe 2 days ago

        Are you saying the phenomenon is different on Twitter than on Bluesky?

    • cryptonector 2 days ago

      > Bluesky has decided that it’s not a bug and is not going to be fixed:

      It's called an "analog hole". It's very difficult to prevent analog holes. By difficult I mean: impossible.

    • bunderbunder 2 days ago

      I haven't read it in 10 years, but this used to be pretty explicitly spelled out in Twitter's privacy policy, in plain language, in a way that I really appreciated. (Not that anyone ever reads the privacy policy.)

      But it really does make sense. Nothing you publicly tweet can ever be private, nor is there any real way you can reliably take it back. Because as soon as the tweet's been transferred to someone else's device, they now have every bit as much control over that content as they do over any other content that makes it onto their device.

      I'm a pretty pro-privacy person, to the point where I generally avoid social media sites. But this was also my policy back when I administered an oldschool Web forum: once it's posted, it's out of your control. Period. That's really the only policy for a public forum that makes any sense at all. If that's scary to you then maybe the things you're posting should be, y'know, kept private instead of being broadcast to the entire world.

      tl;dr: group chats are actually pretty cool.

  • veqq 2 days ago

    Precisely what kneecapped the semantic web. Why make it easier for the competition to take all of your data?

    • Y_Y 2 days ago

      I remember when the internet was collaborative rather than competitive. I think then tech companies got so big that they ran out of scientists and engineers and had to hire fairground hucksters.

      • pixl97 2 days ago

        Yes and no.

        The internet was collaborative when it was very small. You still had islands like AOL and Compuserve and such.

        Then as it got bigger the big islands like AOL broke, and the views started going to larger and larger websites (think things like news sites). These sites had to work with vendors (Microsoft/Apache) to be able to support the load without crashing. While this is occurring hardware got a lot faster and databases more performant (along with things like K/V caching).

        This led to the last 'social media' wave, where just a few large companies could host enough servers to serve everyone on the internet (within reason). These companies sucked a lot of wind out of the smaller companies that were successful. You could wake up one day and find out Google had implemented your entire business model and was giving it away for 'free'.

        But free was never free. Those big companies need your eyeballs. They need your attention. And they will do anything, regardless of the ethics, to keep it (what are small fines between friends). There was not much more room to expand into; you're only expanding into other companies. You take over/replace the ones that give their data away and 'compete/fight with' the ones that don't.

        • theendisney 20 hours ago

          The Amish are still laughing at us, and it just keeps getting more embarrassing.

          Big tech companies are full of extremely competent people who, for the most part, can't get shit done. A handful of cooperating people armed with curiosity and the desire to make something useful can do things tens to thousands of times better.

          What are these websites they make that need hundreds of requests to show a bit of text? I can't view source without repeatedly screaming with laughter.

          Maybe the answer to the riddle is to force the pattern: make usefulness, as well as asking for help, requirements for participation.

      • pas 16 hours ago

        ... alas, that was fundamentally "borrowed time". Since our culture did not change (to a radically open, cooperative, supportive one), the Internet became more and more like our society as more and more of our life moved online.

    • jandrewrogers 2 days ago

      More precisely, one of several things that kneecapped the semantic web.

  • ImPostingOnHN 2 days ago

    > If a Twitter API allows me to siphon tweets off of Twitter, you can never delete them. If a Facebook API allows (user-approved apps) to view the names of my friends and the pages they like, this data can be used to create targeted political ads for those users[1].

    Not only is this already possible (I can open up Twitter and press "control-P"; I can open up Facebook and see names)*, but it's already being done by those companies. If you thought Cambridge Analytica was bad, imagine what Facebook is doing with even more user data.

    That indicates that the issue isn't protecting users from that sort of abuse (since they are the abusers in that sense), but preventing business competitors from doing the same and reducing user choice (e.g. for users who don't want their eyes to bleed while reading their content on these sites).

    If the goal is to keep information secret from X, disclosing it to X via one programmatic means while restricting it via another fails to achieve that goal.

    > So a company considering creating a public-facing API must deal with the fact that:

    1. It could be helping users, which is more important to users than Facebook winning some corpo-war-on-data-access. Is it more important to Facebook et al, though? Clearly not, and therein lies the ethical failing of Facebook et al.

    * - "but wait" I hear some saying, "you're just a human, you can't do that at scale!" Well: the data got on my computer screen programmatically, and it's trivial to reuse those methods to get the data you want. It's just an extra step or two that frustrates legitimate users.
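    The "reuse those methods" point is easy to illustrate: anything a browser renders arrives as markup a few lines of code can parse. A minimal sketch, using only the standard library; the page snippet and the "friend-name" class are made-up stand-ins, not any real site's markup:

```python
# Sketch: the same HTML your browser renders can be scraped in a few lines.
# PAGE and the "friend-name" class are hypothetical sample markup.
from html.parser import HTMLParser

PAGE = """
<div class="friend-list">
  <span class="friend-name">Alice</span>
  <span class="friend-name">Bob</span>
</div>
"""

class FriendScraper(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_name = False  # True while inside a friend-name span
        self.names = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs
        if tag == "span" and ("class", "friend-name") in attrs:
            self.in_name = True

    def handle_data(self, data):
        if self.in_name:
            self.names.append(data.strip())
            self.in_name = False

scraper = FriendScraper()
scraper.feed(PAGE)
print(scraper.names)  # ['Alice', 'Bob']
```

    Rate limits and obfuscated markup only add steps; they don't change the fundamentals.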

  • MichaelZuo 2 days ago

    It does seem like there are so many inherent disadvantages that the original proponents must have been confused or intentionally ignoring realistic factors…

    It’s like they never even tallied up all plausible advantages and disadvantages in the first place. So how did anyone determine it was an overall net positive?

    • __MatrixMan__ 2 days ago

      Are you proposing that interoperability is not an overall net positive? If it's getting a bad rap right now it's just because it's not always simultaneously a competitive advantage. But that line of thinking is a race to the bottom.

      I mean, why not just kill your competitors? Then your product, however bad, would be the only one. Clearly a net negative, but a competitive advantage.

      What has changed is that we've recently lowered the bar for how much of a net positive we plan on shooting for. Top dog on the trash heap is, I guess, now an enviable position.

      • MichaelZuo 2 days ago

        Privacy, reputation risk, etc., seem like huge disadvantages… so it’s not clear at all if it’s a net positive overall.

        Someone has to actually do that analysis in the first place. It doesn’t just automatically become true.

walterbell 2 days ago

> don’t expect the platforms to let you compete easily.

Regulatory support of interoperability and competition:

  1. EU mandated interoperability on mobile and messages.
  2. US won antitrust legal case against Google. Remedy TBD.
  3. Epic lawsuit enabled non-Apple payments and lower fees for content sale.
  4. US has mandated that banks open up payment history data to 3rd parties.
  5. US halted Facebook/Meta Libra/Diem digital currency.
  6. China halted Ant Group digital currency.
exabrial 2 days ago

OAuth/APIs were a beautiful thing until the marketing departments figured out they could use it to spam even more people.

robertheadley 3 days ago

I am still mad that Facebook mostly abandoned the Open Graph protocol on their own sites.

  • mxmilkiib 3 days ago

    for me, when both Facebook and Google rejected Jabber/XMPP federation :(

    but yeah, in general, what happened to the dream of true Data Portability?

    • Lammy 2 days ago

      > for me, when both Facebook and Google rejected Jabber/XMPP federation :(

      I agree with you in principle, but this is only half-true. You're right that Facebook's XMPP was always just a gateway into their proprietary messaging system, but Google did support XMPP federation. What Google did not support was server-to-server TLS, and thus it was “us” who killed Google XMPP federation.

      In late 2013 there was an XMPP community manifesto calling for mandatory TLS (even STARTTLS) for XMPP server-to-server communication by a drop-dead date in mid 2014: https://github.com/stpeter/manifesto/blob/master/manifesto.t...

      "The schedule we agree to is:

      - January 4, 2014: first test day requiring encryption

      - February 22, 2014: second test day

      - March 22, 2014: third test day

      - April 19, 2014: fourth test day

      - May 19, 2014: permanent upgrade to encrypted network, coinciding with Open Discussion Day <http://opendiscussionday.org/>"

      Well-intentioned for sure, but the one XMPP provider with an actual critical mass of users (Google Talk) remained non-TLS-only, all Google Talk users dropped off the federated XMPP network after May 2014, and so XMPP effectively ceased to matter. I'm sure Google were very happy to let us do this.

    • JumpCrisscross 3 days ago

      > what happened to the dream of true Data Portability?

      It got muddled into the privacy/security debate and then we all got distracted.

    • rahoulb 2 days ago

      As other posters have said - capitalism.

      But also privacy - it would be amazing to just be able to connect to any app or service you want, interact and react to stuff that's happening _over there_.

      However, do you want any old app or service connecting to _your_ data, siphoning it and selling it on (and, at best, burying their use of your data in a huge terms of service document that no-one reads, at worst, lying about what they do with that information)? So you have to add access controls that are either intrusive and/or complex, or, more likely, just ignored. Then the provider gets sued for leaking data and we're in a situation where no-one dares open up.

    • immibis 3 days ago

      Capitalism happened. You can't extract value if the usership can flow away from your site like water.

    • julik 3 days ago

      Capitalism happened. My hope is on regulation - I don't see any other force being capable of prying these moat cans open.

  • [removed] 3 days ago
    [deleted]
bsenftner 2 days ago

The moment MCP was announced, my first thoughts were "oh, those summer children". MCP is idyllic and not for this world.

  • TeMPOraL 2 days ago

    Yup, same here. But it's also super painful to watch it being neutered by people who tried to force-fit MCP to their usual smelly business models, and then started to make a fuss about "security issues" that are actually core features of MCP and LLMs in general. In most cases, it's not MCP that was a problem, it's someone's -as-a-Service business model they cling to.

armchairhacker 2 days ago

Today, an external camera can record your computer screen and audio, AI can extract the data and metadata, and a 2D contraption can physically move your mouse to interact. In the future, these will probably become more effective and cheaper (eventually the AI becoming possible to run locally, though even today it’s possible with a good GPU on simple UIs).

Lots of other comments argue for regulation mandating open APIs. I disagree, instead we should remove and prevent regulations that block scraping. We should also create alternative monetization paths for companies who charge for access or use ads, since they’ll lose those paths, and they’re already suffering from piracy and illegal scraping.

  • pixl97 2 days ago

    The biggest problem here is preventing said scraping from shutting down the sites with cost.

    Over the years most of the problems I had with sites getting overloaded were from valid 'scrapers' like Google. Quite often providers were adding more memory/cpu just to ensure Google could index the site.

    While hosting costs are cheaper than ever, being on the internet can still get very expensive very fast.

    • armchairhacker 2 days ago

      In theory it could be solved by websites charging a very small fee (maybe crypto) for incoming requests, to pay operating costs. The fees from human browsing (even excessive like 1 site / second) would be negligible. APIs that use scraping would forward the fee to their users. Training or search (index) data would cost a lot to generate, but probably still insignificant compared to training the ML model or operating the search.

      It already costs a small amount of electricity for clients to send requests, so maybe paying for the server to handle them wouldn’t be a big difference, but I don’t know much about networking.

      Although in practice, similar things have been tried and haven't worked out, e.g. Coil. It would require adoption by most browsers or ISPs; otherwise participating sites would be mostly ignored (most clients wouldn't pay the fee, so they wouldn't get access, and they wouldn't care, because it's probably a small site and/or there are alternatives).
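      The server side of this idea maps neatly onto HTTP's long-reserved 402 Payment Required status. A hypothetical sketch; the `X-Payment` header and the token check are invented placeholders, since no widely deployed payment network supports this flow today:

```python
# Hypothetical sketch of per-request micropayments via HTTP 402.
# The X-Payment header and verify_payment logic are invented placeholders.
from http import HTTPStatus

FEE = 1  # nominal per-request fee, in some tiny unit

def verify_payment(token: str, amount: int) -> bool:
    # Placeholder: a real implementation would settle the token against
    # a payment processor or ledger; here any non-empty token passes.
    return bool(token)

def handle_request(headers: dict) -> tuple[int, str]:
    """Return (status, body), charging a tiny fee per request."""
    token = headers.get("X-Payment")
    if token is None:
        return (HTTPStatus.PAYMENT_REQUIRED, "Fee required: 1 unit per request")
    if not verify_payment(token, FEE):
        return (HTTPStatus.PAYMENT_REQUIRED, "Invalid or insufficient payment")
    return (HTTPStatus.OK, "<html>...page content...</html>")

print(handle_request({})[0])                     # 402
print(handle_request({"X-Payment": "tok"})[0])   # 200
```

      The hard part isn't the server logic; it's getting browsers or ISPs to transparently attach the payment, which is exactly where efforts like Coil stalled.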

renewiltord 3 days ago

It's inevitable. You can't afford to just provide a platform for free that someone else monetizes. I wonder what API plans are reasonable:

* Just let your users pay for API access at a per-call rate

* Charge app developer per user

The problem is that ultimately the LTV of the average user is high, but this is skewed up by the most valuable users who will switch to a different app that will inevitably attempt to hijack your userbase once they control enough of your users.

A classic example is that imgur became a social network of its own once it had enough Reddit users and only Reddit doing their own image/video hosting stemmed that bleeding.

And then there's the fact that if you choose the payment-based approaches, one app will suction the data out and compete with you for it; inevitably some user will lose his data through some app breach and blame you; and the basic app any newbie developer will build will be "yours but ad-free" which is fine for him because you're paying the development and hosting costs of the entire infra.

It's no surprise everyone converges on preventing API access. Even Metafilter does.

I'm curious if anyone has an idea for API access that can nonetheless be a successful company. Everyone's always got some idea with negative margin and negative feedback loops which they bill as "but that won't make you a billionaire" (that's true, because your company will fail) but I wonder if there is some way that could work without ruining social network network-effects etc.

  • immibis 3 days ago

    Probably not. But there can be API access from a nonsuccessful noncompany - look at Fediverse or whatever.

ChrisMarshallNY 3 days ago

…news broke that rival Meta, opens new tab is taking…

(emphasis mine)

Been a while since I’ve seen this kind of content error.

  • io84 2 days ago

    I wonder if that’s a dictation artefact

    • dbreunig 2 days ago

      Not dictation…copy/paste I think. Thanks, fixed.

seydor 3 days ago

I'm optimistic, because LLMs can understand plain language. MCP won't last, as the article correctly states, but you will always be able to tell your AI to open your email and search for whatever. And companies cannot block you from doing that as long as it is your own PC / phone.

If we do allow companies to block AI agents from accessing our own computers and data, then the users are to blame for falling again into another BigTech trap.

  • bobbiechen 2 days ago

    I am less optimistic. Even paid products like Netflix or the Amazon Kindle are ad-monetized now.

    I think the current useful state of consumer LLMs is a temporary subsidy, and the incentives to add ads are too large. And that will change everything, even tools that should work for the user. I recently wrote a blog post on this: https://digitalseams.com/blog/the-ai-lifestyle-subsidy-is-go...

  • msgodel 3 days ago

    I think the demand for this will actually kill closed ecosystems like iOS. I feel strongly enough about this that I'm shorting Apple over it. They won't be able to get it right because every integration will have to be canned while companies giving the LLMs/users a shell will allow them to do anything. People get confused because that used to not matter, most users couldn't do anything with a shell. That's no longer the case with LLMs.

    • robertlagrant 2 days ago

      > I feel strongly enough about this that I'm shorting Apple over it.

      How long do you think it will take for this to meaningfully override Apple's share price?

      • msgodel 2 days ago

        I think it's already starting. Apple can't produce anything people just have to have anymore because of the attitude that's causing this. You can see this in their sales numbers.

    • skybrian 2 days ago

      I think you’re extrapolating too much from the enthusiasm of early adopters? There is widespread skepticism about AI. A lot of people aren’t that eager to use it and resent having new AI features pushed on them by overenthusiastic vendors.

      Maybe users would rather keep their data safe than have it exfiltrated by a confused AI?

  • TeMPOraL 2 days ago

    > but you will always be able to say to your AI to open your email and search whatever.

    Can you actually even do that today? Not on iOS, I presume, definitely not on Android, at least not without hacking it six ways to Sunday with Tasker and Termux and API access to LLM providers.

    (And no, firing up Gemini and asking it to kindly maybe search your GMail doesn't count - because GMail is not the only e-mail provider, and the GMail app is not the only e-mail client. If I want this to be possible with FastMail as the provider and FairEmail as the app, it's hack o'clock again.)

    Vendors all across the board really hate to give users useful features, because useful features tend to side-step their monetization efforts. And if history is any lesson, they'll find ways to constrain and shut down general-purpose use of LLMs. "Security" and "privacy" were the main fig leaves used in the past, so watch out for any new "improvements" there.

  • _heimdall 2 days ago

    MCP is, in part, a response to the difficulties LLM companies had when trying to have LLMs interact online by visually navigating the screen.

    They need APIs for it to be efficient. For whatever reason they didn't choose to use accessibility tooling to automate agents, and we haven't written REST APIs for 20+ years - they're left hoping a newly designed protocol will fix it.

    • TeMPOraL 2 days ago

      > For whatever reason they didn't choose to use accessibility tooling to automate agents

      That surprises me too. It's arguably the only way forward that has a chance of surviving for more than a moment, because accessibility actually has a strong cultural and (occasionally) legal backing, so companies can't easily close that off.

      • _heimdall 2 days ago

        I was genuinely (maybe naively) impressed when Google pushed for HTTPS everywhere. Maybe there were nefarious reasons behind it that I missed, but it did a lot of good for the average web user.

        LLM companies could easily have made a similar impact by leaning on accessibility tooling. Pushing companies to better support ARIA standards online would have made a huge impact for the better.

        Heck, throw a little of that LLM money towards browser vendors to even better support ARIA - personally I'd love to see a proper API for directly accessing the accessibility tree.
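        Agents built on the accessibility tree would consume something like the role/name hierarchy below. A sketch that walks such a snapshot; the nested-dict shape loosely mirrors what Playwright's accessibility snapshot returns, but the sample data here is hand-written, not output from a real browser:

```python
# Sketch: walking an accessibility-tree snapshot. The dict shape loosely
# mirrors a browser accessibility snapshot; the data is hand-written.
def iter_nodes(node):
    """Yield (role, name) for every node in the snapshot, depth-first."""
    if node is None:
        return
    yield node.get("role"), node.get("name")
    for child in node.get("children", []):
        yield from iter_nodes(child)

snapshot = {
    "role": "WebArea", "name": "Example page",
    "children": [
        {"role": "heading", "name": "Welcome"},
        {"role": "button", "name": "Sign in"},
    ],
}

roles = [role for role, _ in iter_nodes(snapshot)]
print(roles)  # ['WebArea', 'heading', 'button']
```

        An agent working from this tree sees "a button named Sign in" regardless of how the pixels are styled, which is exactly why well-maintained ARIA would make agents both cheaper and more robust than screen-reading.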

  • visarga 3 days ago

    Computer use over screen and keyboard comes to the rescue

bigmattystyles 3 days ago

Laughs/Cries in SAP

  • _jholland 3 days ago

    I have made it my mission to conquer SAP and gain control of our own critical financial data.

    As a business, they uniquely leverage inefficient and clunky design to drive profit. Simply because they haven’t documented their systems sufficiently, it is “industry standard practice” to go straight to a £100/hr+ consultant to build what should be straightforward integrations and perform basic IT Admin procedures.

    Through many painful late nights I have waded through their meticulously constructed labyrinth of undocumented parameters and gotchas built on foot-guns to eventually get to both build and configure an SAP instance from scratch and expose a complete API in Python.

    It is for me a David and Goliath moment, carrying more value than the consultancy fees and software licences I've spared my company.

    • piva00 3 days ago

      It's unfortunate it is your employer's IP; this shim on top of SAP would be extremely valuable if you sold it as a product to enable internal teams in SAP-world corporations to develop without knowledge of SAP arcana.

      • robertlagrant 2 days ago

        Yes I would strongly recommend monetising this, even though you'd have to rebuild it from scratch. Worth filling in a Y Combinator application?

        • dbreunig 2 days ago

          Yes, look up Winshuttle.

          A very successful company with some of the happiest customers I’ve ever seen, whose entire product was a SAP hack that allowed people to enter their data using Excel. As someone unfamiliar with SAP, absolutely blew my mind.

    • jgraettinger1 2 days ago

      Hi, I’m a cofounder / CTO of estuary.dev. Our whole mission is democratizing and enabling use of data within orgs.

      Open to a conversation about your work here? Reach me at johnny at estuary dot dev.

eadmund 3 days ago

At the end of the day, servers and software engineers cost money. One way to pay for things is ads, but ads are hostile to integrations (because there is no good way to guarantee ads will be shown) — I believe this is why Twitter and Reddit killed their third-party clients. But there are alternate ways to pay for things, e.g. subscriptions. The good news here is that the sorts of things one pays for are IMHO more likely to be the sorts of things worth MCPing together. Using MCP to post to Reddit or Twitter? Low value, to oneself and to society. Using MCP to work with one’s AWS account? Higher value.

Incidentally, why do the article’s links all use strikethrough rather than underlines? Is this a deliberate style choice, or some Chrome/Firefox/Safari incompatibility? It’s pretty ugly.

tempodox 2 days ago

> But it didn’t last.

Of course not. All this gatekeeping is how every Tom, Dick and Harriette makes their money and wrestles for dominance. Believing that any specific tech would fundamentally change that is hopelessly naive. The honeymoon phases that make it look like it could be different this time around are merely there to lock in lots of users.

It's in the nature of capitalism and that's not a technological issue.