Comment by AnthonyMouse 5 hours ago

> Ultimately, big problems with an open social web include:

These two seem like the same problem:

> moderation

> spam

You need some way of distinguishing high quality from low quality posts. But we kind of already have that. Make likes public (what else are they even for?). Then show people posts from the people they follow or that the people they follow liked. Have a dislike button so that if you follow someone but always dislike the things they like, your client learns you don't want to see the things they like.

Now you don't see trash unless you follow people who like trash, and then whose fault is that?
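To make that concrete, here's a rough sketch of what such a client-side ranker could look like (Python; every name and data structure here is made up for illustration, not any existing client's API):

```python
# Hypothetical sketch of the feed logic described above: rank posts by the
# likes of people you follow, weighted by how often you agree with each of
# them (your +1 likes / -1 dislikes on things they previously liked).
from collections import defaultdict

def affinity(my_reactions, their_likes):
    """Weight for one followed account: average of my reactions (+1/-1)
    to posts that account liked in the past."""
    scores = [my_reactions[p] for p in their_likes if p in my_reactions]
    return sum(scores) / len(scores) if scores else 0.0

def rank_feed(candidate_posts, follows, likes_by_user, my_reactions):
    """Score each candidate post by summing the affinity of every followed
    account that liked it; drop posts with a non-positive score."""
    weights = {u: affinity(my_reactions, likes_by_user.get(u, set())) for u in follows}
    scored = defaultdict(float)
    for post in candidate_posts:
        for user in follows:
            if post in likes_by_user.get(user, set()):
                scored[post] += weights[user]
    return sorted((p for p in scored if scored[p] > 0), key=scored.get, reverse=True)
```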

> which now includes scrapers bringing your site to a crawl

This is a completely independent problem from spam. It's also something decentralized networks are actually good at: if more devices are requesting some data, then there are more sources of it. Let the bots get the data from each other. Track share ratios so that high-traffic nodes with bad ratios get banned for leeching, and it becomes cheaper for them to get a cloud node somewhere with cheap bandwidth and actually upload than to buy residential proxies to fight bans.
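A rough sketch of that ratio bookkeeping, purely illustrative (the thresholds and names are invented, and this isn't any existing protocol's API):

```python
# Hypothetical sketch of the share-ratio policy described above: track bytes
# served vs. bytes fetched per peer, and refuse high-traffic peers whose
# upload/download ratio stays below some threshold.
from dataclasses import dataclass

@dataclass
class PeerStats:
    uploaded: int = 0    # bytes this peer served to others
    downloaded: int = 0  # bytes this peer fetched from us

class RatioTracker:
    def __init__(self, min_ratio=0.5, grace_bytes=100 * 2**20):
        self.min_ratio = min_ratio       # required upload/download ratio
        self.grace_bytes = grace_bytes   # ignore ratio until this much traffic
        self.peers: dict[str, PeerStats] = {}

    def record(self, peer_id: str, uploaded: int = 0, downloaded: int = 0):
        stats = self.peers.setdefault(peer_id, PeerStats())
        stats.uploaded += uploaded
        stats.downloaded += downloaded

    def allowed(self, peer_id: str) -> bool:
        stats = self.peers.get(peer_id)
        if stats is None or stats.downloaded < self.grace_bytes:
            return True  # low-traffic peers aren't penalized
        return stats.uploaded / stats.downloaded >= self.min_ratio
```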

> good faith verification

> posting transparency

It's not clear what these are, but they sound like kind of the same thing again. In particular, they sound like elements of the authoritarian censorship toolbox, which you don't actually need or want once you start showing people the posts they actually want to see instead of a bunch of spam from anons that nobody they follow likes.

johnnyanmac 4 hours ago

>You need some way of distinguishing high quality from low quality posts.

Yes. But I see curation more as a second-order problem to solve once the basics are taken care of. Moderation focuses on addressing the low quality, while curation makes sure the high-quality posts receive focus.

The tools needed for curation (filtering, finding similar posts/comments, popularity, following) are different from those needed to moderate or self-moderate (ignoring, downvoting, reporting). The latter problem poisons a site before it can really start to curate for its users.

>This is a completely independent problem from spam.

Yeah, thinking more about it, it probably is a distinct category. It simply has a similar result of making a site unable to function.

>It's not clear what these are but they sound like kind of the same thing again

I can clarify. In short, posting transparency focuses more on the user, and good faith verification focuses more on the content. (I'm also horrible with naming, so I welcome better terms for these.)

- Posting transparency at this point has one big goal: ensuring you know whether a human or a bot is posting. But it extends to ensuring there's no impersonation, no abuse of alt accounts, and no vote manipulation.

It can even extend in some domains to making sure, e.g., that a person who says they worked at Google actually worked at Google. But this is definitely a step that can overstep on privacy.

- Good faith verification refers more to a duty to properly vet and fact-check the information that is posted. It may include addressing misinformation and hate, or removing non-transparent intimate advice, like legal/medical claims made without sources or proper licensing. It essentially boils down to ensuring that "bad but popular" advice doesn't proliferate, as it otherwise tends to do. (A rough sketch of both ideas follows below.)
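To make both of those concrete, here's a purely illustrative sketch (all names, fields, and rules are invented): posting transparency as metadata a post carries about its author, and good faith verification as a simple triage rule that flags unsourced, unlicensed legal/medical advice for human review.

```python
# Hypothetical sketch of the two ideas above. Nothing here is a real spec:
# the fields and the review rule are stand-ins for whatever a platform
# actually decides to attest and enforce.
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class AuthorKind(Enum):
    HUMAN = "human"
    BOT = "bot"

@dataclass
class AuthorDisclosure:
    handle: str
    kind: AuthorKind                      # declared up front: human or bot
    primary_account: Optional[str] = None # set if this is a declared alt account
    attested_claims: dict[str, str] = field(default_factory=dict)
    # e.g. {"employer": "Google", "license": "MD"}, backed by some attestation

@dataclass
class Post:
    body: str
    topics: set[str]       # e.g. {"medical"}
    cites_sources: bool
    author: AuthorDisclosure

RISKY_TOPICS = {"medical", "legal"}

def needs_review(post: Post) -> bool:
    """Flag unsourced advice on risky topics from authors with no relevant credential."""
    if not (post.topics & RISKY_TOPICS):
        return False
    licensed = "license" in post.author.attested_claims
    return not post.cites_sources and not licensed
```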

>they sound like elements in the authoritarian censorship toolbox which you don't actually need or want once you start showing people the posts they actually want to see

Yes, they are. I think we've seen enough examples of how dangerous "showing people what they actually want to see" can be if left unchecked. And the incentives to keep it that way are equally dangerous on an ad-driven platform. Being able to address that naturally requires some more authoritarian approaches.

That's why "good faith" is an important factor here. Any authoritarian act you introduce can only work on trust, and is easily broken by abuse. If we want incentives to change from "maximizing engagement" to "maximizing quality and community", we need to cull out malicious information.

We already grant some authoritarianism by having moderators we trust to remove spam and illegal content, so I don't see it as a giant overstep to make sure they can do this as well.