johnnyanmac 5 hours ago

Maybe this was more of an intro/pitch to something I already support, so I wasn't quite the audience here.

But I feel that talking about the open social web without addressing the reasons current ones aren't popular/get blocked doesn't lead to much progress. Ultimately, big problems with an open social web include:

- moderation

- spam, which now includes scrapers bringing your site to a crawl

- good faith verification

- posting transparency

These are all hard problems, and they lead me to believe the future of a proper community lies in charging a small premium. Even a one-time charge of one dollar removes 99% of spam and puts a price on bad faith: a banned actor needs another dollar to re-enter, which eases the moderation burden. But charging money for anything online these days can cause a lot of friction.

_heimdall 6 minutes ago

In my opinion, both spam and moderation are only really a problem when content is curated (usually algorithmically). I don't need a moderator and don't worry about spam in my RSS reader, for example.

A simple chronological feed of content from feeds I chose to follow is enough. I do have to take on the challenge of finding new content sources, but for me that's a worthwhile tradeoff: I'm not inundated with spam, and I don't feel dependent on someone else to moderate what I see.
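The "no algorithm" point is easy to make concrete. A minimal sketch in Python (feed names and entries are made up): the only ranking is a sort by timestamp over feeds the user explicitly follows.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Entry:
    feed: str        # which followed feed this came from
    title: str
    published: datetime

def chronological_feed(feeds: dict[str, list[Entry]]) -> list[Entry]:
    """Merge entries from explicitly followed feeds, newest first.

    No ranking and no recommendations: the only 'algorithm' is a
    sort by publication time, so there is nothing for spam to game."""
    merged = [e for entries in feeds.values() for e in entries]
    return sorted(merged, key=lambda e: e.published, reverse=True)

# Hypothetical follow list with one entry per feed.
feeds = {
    "blog-a": [Entry("blog-a", "Post 1", datetime(2024, 1, 2, tzinfo=timezone.utc))],
    "blog-b": [Entry("blog-b", "Post 2", datetime(2024, 1, 3, tzinfo=timezone.utc))],
}
timeline = chronological_feed(feeds)
```

Spam only appears here if you follow a feed that publishes it, at which point unsubscribing is the whole moderation story.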

AnthonyMouse 3 hours ago

> Ultimately, big problems with an open social web include:

These two seem like the same problem:

> moderation

> spam

You need some way of distinguishing high quality from low quality posts. But we kind of already have that. Make likes public (what else are they even for?). Then show people posts from the people they follow or that the people they follow liked. Have a dislike button so that if you follow someone but always dislike the things they like, your client learns you don't want to see the things they like.

Now you don't see trash unless you follow people who like trash, and then whose fault is that?
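A toy sketch of that scheme (all names and thresholds are invented for illustration): visibility comes only from your follow list and your followees' public likes, and your dislikes teach the client to down-weight a particular followee's likes.

```python
from collections import defaultdict

class Feed:
    """Toy model: you see posts from people you follow, plus posts
    they liked; disliking what someone likes lowers your trust in
    that person's future likes."""

    def __init__(self, following: set[str]):
        self.following = following
        # Trust in each followee's likes; starts at full trust.
        self.like_weight = defaultdict(lambda: 1.0)

    def visible(self, post_author: str, liked_by: set[str]) -> bool:
        if post_author in self.following:
            return True
        # Otherwise, show the post only if a still-trusted followee liked it.
        likers = liked_by & self.following
        return any(self.like_weight[u] > 0.2 for u in likers)

    def dislike(self, liked_by: set[str]) -> None:
        # You disliked a post; halve the trust in followees who liked it.
        for u in liked_by & self.following:
            self.like_weight[u] *= 0.5
```

There is no global moderator in this picture: each client computes its own view from public follows and likes.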

> which now includes scrapers bringing your site to a crawl

This is a completely independent problem from spam. It's also something decentralized networks are actually good at: if more devices are requesting some data, then there are more sources of it. Let the bots get the data from each other. Track share ratios so that high-traffic nodes with bad ratios get banned for leeching, making it cheaper for them to get a cloud node somewhere with cheap bandwidth and actually upload than to buy residential proxies to fight bans.
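A rough sketch of the share-ratio idea (the ratio and traffic thresholds here are invented for illustration):

```python
class PeerLedger:
    """Track bytes served to and received from each peer; ban
    high-traffic peers that only download and never upload."""

    def __init__(self, min_ratio: float = 0.1, traffic_floor: int = 10_000_000):
        self.sent_to = {}        # bytes we uploaded to each peer
        self.received_from = {}  # bytes each peer uploaded to us
        self.min_ratio = min_ratio
        self.traffic_floor = traffic_floor  # ignore low-volume peers

    def record(self, peer: str, sent: int = 0, received: int = 0) -> None:
        self.sent_to[peer] = self.sent_to.get(peer, 0) + sent
        self.received_from[peer] = self.received_from.get(peer, 0) + received

    def is_leech(self, peer: str) -> bool:
        sent = self.sent_to.get(peer, 0)
        received = self.received_from.get(peer, 0)
        if sent < self.traffic_floor:
            return False  # too little traffic to judge fairly
        return received / sent < self.min_ratio
```

The traffic floor matters: ordinary light users never hit it, so only bulk scrapers face the ratio check.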

> good faith verification

> posting transparency

It's not clear what these are, but they sound like kind of the same thing again. In particular, they sound like elements of the authoritarian censorship toolbox, which you don't actually need or want once you start showing people the posts they actually want to see instead of a bunch of spam from anons that nobody they follow likes.

  • johnnyanmac 2 hours ago

    >You need some way of distinguishing high quality from low quality posts.

    Yes. But I see curation as a second-order problem to solve once the basics are taken care of. Moderation focuses on addressing the low-quality posts, while curation makes sure the high-quality posts receive focus.

    The tools needed for curation (filtering, finding similar posts/comments, popularity, following) are different from those needed to moderate or self-moderate (ignoring, downvoting, reporting). The content the latter targets poisons a site before it can really start to curate for its users.

    >This is a completely independent problem from spam.

    Yeah, thinking more about it, it probably is a distinct category. It simply has a similar result of making a site unable to function.

    >It's not clear what these are but they sound like kind of the same thing again

    I can clarify. In short, posting transparency focuses more on the user and good faith verification focuses more on the content. (I'm also horrible at naming, so I welcome better terms for these.)

    - Posting transparency at this point has one big goal: ensure you know when a human or a bot is posting. But it extends to ensuring there's no impersonation, that there's no abuse of alt accounts, and no voting manipulation.

    It can even extend in some domains to making sure that, e.g., a person who says they worked at Google actually worked at Google. But this is definitely a step that can overstep privacy boundaries.

    - Good faith verification refers more to a duty to properly vet and fact-check information that is posted. It may include addressing misinformation and hate, or removing unvetted high-stakes advice like legal/medical claims made without sources or proper licensing. It essentially boils down to ensuring that "bad but popular" advice doesn't proliferate, as it otherwise tends to do.

    >they sound like elements in the authoritarian censorship toolbox which you don't actually need or want once you start showing people the posts they actually want to see

    Yes, they are. I think we've seen enough examples of how dangerous "showing people what they actually want to see" can be if left unchecked, and the incentives to keep it that way are equally dangerous on an ad-driven platform. Being able to address that naturally requires somewhat more authoritarian approaches.

    That's why "good faith" is an important factor here. Any authoritarian measure you introduce can only work on trust, and is easily broken by abuse. If we want incentives to shift from "maximizing engagement" to "maximizing quality and community", we need to cull malicious information.

    We already accept some authoritarianism by having moderators we trust to remove spam and illegal content, so I don't see it as a giant overstep to let them do this as well.

Seattle3503 2 hours ago

A lot of tech folks hate government ID schemes, but I think MDL with some sort of pairwise pseudonyms could help with spam and verification.

It would let you identify users uniquely, but without revealing too much sensitive information. It would let you verify things like "This user has a Michigan driver's license, and they have an ID 1234, which is unique to my system and not linkable to any other place they use that ID."

If you ban that user, they wouldn't be able to use that ID again with you.
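One way to sketch such a pairwise pseudonym (a hypothetical construction for illustration; real mDL/eIDAS schemes specify their own derivations) is an HMAC over the relying party's identifier, keyed by a secret held in the user's identity wallet:

```python
import hashlib
import hmac

def pairwise_pseudonym(wallet_secret: bytes, relying_party: str) -> str:
    """Derive a per-site pseudonym from a credential-holder secret.

    The same user always gets the same ID at one site (so a ban
    sticks), but IDs at different sites cannot be linked without
    knowing the secret."""
    mac = hmac.new(wallet_secret, relying_party.encode(), hashlib.sha256)
    return mac.hexdigest()[:16]

# Hypothetical secret held in the user's identity wallet.
secret = b"held-by-the-users-identity-wallet"
forum_id = pairwise_pseudonym(secret, "forum.example")
shop_id = pairwise_pseudonym(secret, "shop.example")
```

Banning `forum_id` blocks this user from the forum without revealing anything about their identity at `shop.example`.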

The alternative is that we continue to let unelected private operators like Cloudflare "solve" this problem.

  • Gigachad 28 minutes ago

    Telegram added a feature where if someone cold dms you, it shows their phone number country and account age. When I see a 2 month old account with a Nigeria phone number I know it's a bot and I can ignore it.

  • isodev 2 hours ago

    The EU’s eIDAS 2.0 specification for their digital wallet identity explicitly supports the use of pseudonyms for this exact purpose of “Anonymous authentication”.

prisenco 4 hours ago

Having worked on the problem for years, I find decentralized social networking such a tar pit of privacy, security, and social problems that I can't get excited about it anymore. We are now clear on what the problems with mainstream social networking at scale are, and decentralization only seems to make them worse and more intractable.

I've also come to the conclusion that a tightly designed subscription service is the way to go. Cheap really can be better than "free" if done right.

  • Gigachad 26 minutes ago

    Yeah, kind of agree. Decentralised protocols are forced to expose a lot of data that could normally be kept private, like users' own likes.

    • EnglishMobster 18 minutes ago

      Dunno necessarily if they are _forced_ to expose that data.

      Something like OAuth means that you can give different levels of private data to different actors, based on what perms they request.

      Then you just have whoever is holding your data anyway (it's gotta live somewhere) also handle the OAuth keys. That's how the Bluesky PDS system works, basically.

      Now, there is an issue with blanket requesting/granting of perms (which an end user isn't necessarily going to know about), but IMO all that's missing from the Bluesky-style system is to have a way to reject individual OAuth grants (for example, making it so Bluesky doesn't have access to reading my likes, but it does have access to writing to my likes).
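The per-grant rejection idea reduces to simple set arithmetic (the scope names here are invented for illustration; atproto/Bluesky OAuth uses its own scope scheme):

```python
def grant(requested: set[str], denied_by_user: set[str]) -> set[str]:
    """Per-scope consent: the user strikes out individual permissions
    instead of accepting or rejecting the whole bundle."""
    return requested - denied_by_user

def allowed(granted: set[str], action: str) -> bool:
    """Check a single action against the granted scope set."""
    return action in granted

# The app asks for three scopes; the user denies just one of them,
# e.g. allowing the app to write likes but not read them back.
REQUESTED = {"read:posts", "read:likes", "write:likes"}
granted = grant(REQUESTED, denied_by_user={"read:likes"})
```

The hard part isn't the mechanism but the UI: surfacing each scope so users actually understand what they're striking out.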

      • prisenco 16 minutes ago

        In a federated system, the best you can do is a soft delete request, and ignoring that request is easier than satisfying it.

        If I have 100 followers on 100 different nodes, that means each node has access to (and holds on to) some portion of my data by way of those followers.

        In a centralized system, a user having total control over their data (and the ability to delete it) is more feasible. I'm not saying modern systems are great about this (GDPR was necessary to force their hands), but federation makes it more technically difficult.

  • johnnyanmac 2 hours ago

    It's unfortunate, and I don't necessarily want to say decentralization isn't viable at all. But at best I see decentralization addressing the issue of scraping. It's solving different problems without necessarily addressing the core ones needed to make a new community functional. But I think both kinds of tech can execute on addressing these issues.

    I'm not against subscriptions per se, but I do think a one-time entry cost is really all that's needed to achieve many of the desired effects. I'm probably in the minority as someone who'd rather pay $10 once to enter a community than $1-2/month to maintain my participation, though. I'm just personally tired of feeling like I'm paying a tax to construct something that may one day be good, rather than buying into a decently polished product upfront.

  • JuniperMesos an hour ago

    If I have to pay you to access a service, and I'm not doing so through one of a small number of anonymity-preserving cryptocurrencies such as Bitcoin or Monero, then the legitimate financial system has an ultimate veto on what I can say online.

    • prisenco 6 minutes ago

      It has that veto even if you don't pay to access the service, because the financial system is the underpinning of their ad network.

      Even in a federated system, you can be blacklisted although it does take more coordination and work.

      i2p and writing to the blockchain are attempts to deal with that, but that permanence is not without its own (serious) problems.

  • krapp 4 hours ago

    >I've also come to the conclusion that a tightly designed subscription service is the way to go. Cheap really can be better than "free" if done right.

    "Startup engineer" believes the solution to decentralization is a startup, what a shock. We look forward to your launch.

    • prisenco 3 hours ago

      I'm a consultant that builds for startups. I'm not an entrepreneur myself.

      If I were to build something like this, I'd use a services non-profit model.

      Ad-supported apps result in way too many perverse economic incentives in social media, as we've seen time and time again.

      I worked on open source decentralized social networking for 12 years, starting before Facebook even launched. Decentralization, specifically political decentralization which is what federation is, makes the problems of moderation, third order social effects, privacy and spam exceedingly more difficult.

      • krapp 3 hours ago

        >Decentralization, specifically political decentralization which is what federation is, makes the problems of moderation, third order social effects, privacy and spam exceedingly more difficult.

        I disagree that federation is "specifically political decentralization" but how so?

        You claim that decentralization makes all of the problems of mainstream social networking worse and more intractable, but I think most of those problems come from the centralized nature of mainstream social media.

        There is only one Facebook, and only one Twitter, and if you don't like the way Zuckerberg and Musk run things, too bad. If you don't like the way moderation works with an instance, you don't have to federate with it, you can create your own instance and moderate however you see fit.

        This seems like a better solution than everyone being subject to the whims of a centralized service.

        • prisenco 39 minutes ago

          To clarify, I don't mean big-P Politics. I mean political in the sense that each node is owned and operated separately, so there are competing interests and a need to coordinate between them that extends beyond the technical. Extrapolated to N potential nodes, this creates a lot of conflicting incentives and perspectives that have to be managed. And if the network ever becomes concentrated in a handful of nodes, or even one of them (which is not unlikely), then we're effectively back at square one.

          > if you don't like the way Zuckerberg and Musk run things, too bad

          It's important to note we're optimizing for different things. When I say third-order social effects, it means the way that engagement algorithms and virality combine with massive scale to create a broadly negative effect on society. This comes in the form of addiction, how constant upward social comparison can lead to depression and burnout, or how in extreme situations, society's worst tendencies can be amplified into terrible results with Myanmar being the worst case scenario.

          You assume centralization means total monopolization, which neither Twitter nor Facebook nor Reddit nor anyone else has achieved. You may lose access to a specific audience, but nobody has a right to an audience. You can always put up a website, start a blog, write an op-ed for your local newspaper, hold a sign in a public square, etc. The mere existence of a centralized system with moderation is not a threat to freedom of speech.

          Federation is a little bit more resilient, but accounts can be blacklisted, and whole nodes can be blacklisted because of the behavior of a handful of accounts. And unfortunately, that little bit of resilience amplifies the problem of spam and bots, which for the average user is a much bigger concern than losing their account. Not to mention privacy concerns; it's self-evident why an open system is harder on that front than a closed one.

          I'll concede that "worse" was poor wording, but intractable certainly wasn't. These problems become much more difficult to solve in a federated system.

          However, most advocates of federation aren't interested in solving the same problems as I am, so that's where the dissonance comes from.

BrenBarn 3 hours ago

Those are important reasons, but there are other reasons as well, such as concentration of market power in a few companies, which allows those companies to erect barriers to entry and shape law in ways that benefit themselves, as well as simply creating network effects that make it hard for new social-web projects to establish a foothold.

  • johnnyanmac 2 hours ago

    That's an even harder problem to solve. I do agree we should make sure that policy isn't manipulated by vested powers to make competing even harder.

    But network effects seem to be a natural phenomenon of people wanting to establish a familiar routine. I look at Steam as an example here: while it has its own shady schemes behind the scenes (which I hope are addressed), it otherwise doesn't engage in the same dark patterns as other monopolies. Yet it still creates a strong network effect nonetheless.

    I think the main solace here is that you don't need to be dominant to create a good community. You need to focus instead on getting above a certain critical mass, where you keep a healthy stream of posting and participation that can sustain itself. Social media should ultimately be about establishing a space for a community to flourish, and small communities are just as valid.

numpy-thagoras 4 hours ago

> - moderation
>
> - spam, which now includes scrapers bringing your site to a crawl
>
> - good faith verification
>
> - posting transparency

And we have to think about how to hit these targets while:

- respecting individual sovereignty

- respecting privacy

- meeting any other obligations or responsibilities within reason

and of course, it must be EASY and dead simple to use.

It's doable, we've done far more impossible-seeming things just in the last 30 years, so it's just a matter of willpower now.

echelon 5 hours ago

It'd be cool if you had to pay a certain amount of money to publish any message.

And then if you could verify you'd paid it in a completely P2P decentralized fashion.

I'm not a crypto fan, but I'd appreciate a message graph where high signal messages "burned" or "donated money" to be flagged for attention.

I'd also like it if my attention were paid for by those wishing to have it, but that's a separate problem.

  • Groxx 4 hours ago

    it's pure waste-generation, but hashcash is a fairly old strategy for this, and it's one of the foundations of Bitcoin. there's no "proof of payment to any beneficial recipient", sadly, but it does throttle high-volume spammers pretty effectively.
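    For reference, hashcash-style proof-of-work fits in a few lines (a simplified sketch, not the original hashcash stamp format):

```python
import hashlib
from itertools import count

def mint(message: str, bits: int = 18) -> int:
    """Find a nonce whose SHA-256 hash has `bits` leading zero bits.

    Minting costs ~2**bits hash attempts on average; verifying costs
    one. That asymmetry is what throttles high-volume spammers."""
    target = 1 << (256 - bits)
    for nonce in count():
        digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify(message: str, nonce: int, bits: int = 18) -> bool:
    """Check a claimed stamp with a single hash."""
    digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - bits))
```

    As the comment notes, the cost is pure waste: the work proves effort was spent but the effort benefits nobody, unlike a donation.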

    • echelon 4 hours ago

      Maybe if you could prove you sent a payment to a charity node and then signed your message in the receipt for verification...

      • Terr_ 3 hours ago

        Imagine a world where every City Hall has a vending machine you can use to donate a couple bucks to a charity of your choice and receive an anonymous, one-time-use "some real human physically present donated real money to make this" token.

        You could then spend the token with a forum to gain basic trust for an otherwise anonymous account.