Comment by kimixa 2 days ago

That just means you're effectively acting as a moderator yourself, only with a whitelist. It's just your own direct curation of sources.

And how did you discover those feeds in the first place? Or find new ones?

I know people have tried to have a relatively closed mesh-of-trust, but you still need people to moderate new applicants, otherwise you'll never get any new ideas or fresh discussion. And if it keeps growing, scale means that group will slowly gather bad actors. Maybe directly, by putting up whatever front they need to get into the mesh or onto existing in-mesh accounts. Maybe existing accounts get hacked. Maybe previously-'good' account owners have changed, be it in opinion or situation, and now take advantage of their in-mesh position. It feels like a speedrun of the internet itself growing.

yason 2 days ago

> That just means you're effectively acting as a moderator yourself, only with a whitelist. It's just your own direct curation of sources.

That's exactly how a useful social information system works. I choose what I want to follow and see, and there's no gap between what moderation thinks and what I think. Spam gets dealt with the moment I see something spammy (or just about any kind of thing I don't want to see).

This is how Usenet worked: you subscribed to the groups you found interesting and where participants were of sufficient quality. And you further could block individuals whose posts you didn't want to see.

This is how IRC worked: you joined channels that you deemed worth joining. And you could further ignore individuals that you didn't like.

That is how the whole original internet actually worked: you were reading pages and using services that you felt were worth your time.

Ultimately, that's how human relationships work. You hang out with friends you like and who are worth your time, and you ignore people who you don't want to spend your time with, especially assholes.
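
A minimal sketch in Python of that kind of purely client-side curation, just to make the model concrete: a subscription whitelist plus a personal ignore list, and nothing else. The data shapes and names here are invented for illustration, not any particular client's implementation.

    from dataclasses import dataclass

    @dataclass
    class Post:
        group: str    # newsgroup / IRC channel / feed the post arrived on
        author: str
        body: str

    # The only "moderation" involved is my own two lists.
    subscriptions = {"comp.lang.c", "rec.music.classical"}  # what I chose to follow
    ignored = {"spammer@example.com"}                        # who I never want to see

    def my_feed(posts):
        """Keep posts from subscribed groups, drop anything from ignored authors."""
        return [p for p in posts if p.group in subscriptions and p.author not in ignored]

    sample = [
        Post("comp.lang.c", "alice@example.org", "How does restrict work?"),
        Post("comp.lang.c", "spammer@example.com", "BUY NOW!!!"),
        Post("alt.whatever", "bob@example.net", "not something I follow"),
    ]
    for p in my_feed(sample):
        print(p.author, "-", p.body)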

  • jasode 2 days ago

    >This is how Usenet worked: you subscribed to the groups you found interesting and where participants were of sufficient quality. And you further could block individuals whose posts you didn't want to see.

    Your explanation actually demonstrates why Usenet doesn't work anymore: that kind of client-side moderation is unusable these days. I was on Usenet in the 1980s, before the World Wide Web arrived in 1993, and continued up until 2008.

    Why did I quit Usenet?!? Because it worked better when the internet was much smaller and consisted of universities federating NNTP servers. But Usenet's design can't handle the massive growth of the internet, such as commercial entities being allowed to connect in 1992 and the "Eternal September" flood of users from AOL. Spam gets out of control. The signal-to-noise ratio goes way down. Usenet worked better in the "collegial" atmosphere of a smaller internet where it was mostly good actors. Its fundamental design doesn't work for a big internet full of bad actors.

    This is why a lot of us ex-Usenet users are here on a web forum that's moderated instead of a hypothetical "nntp://comp.lang.news.ycombinator" with newsgroup readers. With "https://news.ycombinator.com", I don't need to do extra housekeeping of "killfiles" or wade through a bunch of spam.

    Whatever next-gen social web gets invented, it can't work like Usenet if it's going to be usable.

    >Spam gets dealt with the moment I see something spammy

    Maybe consider that you're unusual in that preference, because most of us don't want our eyeballs to see the spam at all. The system's algorithms should filter it out automatically. We don't want to take on the extra digital housekeeping work of "dealing with spam" ourselves.

    • pixl97 a day ago

      I think most users who have not run these systems themselves really have no clue how bad spam is. It can quickly spiral to the point where 99.9% of the incoming posts on a system are spam, porn where it doesn't belong, or otherwise illegal content. Simply put, even if you as the user filter out 99.5% of the spam, what's left in front of you is still majority spam.

      IP blocks and initial filtering typically make a massive difference in total system load, so you can get to the point where the majority of the posts are 'legitimate'. After that, bot filtering is needed to remove the more complex attacks against the system.
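
      As a back-of-the-envelope check, taking the figures above at face value (99.9% of raw posts are spam, and the user's own filter catches 99.5% of that spam), a few lines of Python show why what gets through is still mostly spam:

          spam_share = 0.999        # fraction of raw incoming posts that are spam
          filter_rate = 0.995       # fraction of that spam the user's filter catches

          legit = 1.0 - spam_share                       # 0.1% of posts are legitimate
          spam_left = spam_share * (1.0 - filter_rate)   # spam that slips through the filter

          visible_spam_share = spam_left / (spam_left + legit)
          print(f"{visible_spam_share:.0%} of what the user still sees is spam")  # ~83%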

  • malcolmxxx 2 days ago

    You are right. /ignore is all the mod you need.

    • immibis 2 days ago

      Incorrect when accounts are free. Usenet providers are forced to police users changing their email addresses or signing up multiple times, or else they get de-peered. IRC networks do IP address bans.

      • ndiddy a day ago

        For at least the past 20 years, Usenet has been so full of spam that it’s been made virtually unusable. If de-peering is an option, then why haven’t the providers that allow spammers to operate gotten de-peered?

        • immibis 18 hours ago

          Most spam was from Google. Google was kicked off Usenet last year or the year before that.

_heimdall 2 days ago

> That just means you're effectively acting as a moderator yourself, only with a whitelist

Agreed, though when you are your own moderator that really is more about informed consent or free will than moderation. Moderation, at least in my opinion, implies a third party.

> And how did you discover those feeds in the first place? Or find new ones?

The same way I make new friends. Recommendations from those I already trust, or "friend of a friend" type situations. I don't need an outside matchmaker to introduce me to people they think I would be friends with.

  • jryle70 a day ago

    Well, then the risk is that you build your own bubbles of like-minded people. Maybe that's all you need, maybe not.

    • _heimdall a day ago

      Anecdotal, but I feel like I've done a better job curating diverse opinions in my feed than any algorithm has.

      Surely an algorithm focused on that would best me, but the only ones out there today are motivated by selling ad space and data. It's a bit of an unfair fight since that isn't my goal, but I don't expect anyone to fund a social media platform whose algorithm has goals similar to mine.

    • 1shooner a day ago

      There is much more diversity of perspective among friends and colleagues than there is in my algorithmic social media feed. This is the whole problem: we no longer see perspectives we respect but don't share.

matheusmoreira 2 days ago

> you're effectively acting as a moderator yourself

Honestly, that's how things should work. People should simply avoid, block and hide the things they don't like.

  • immibis 2 days ago

    If 99% of what I see on the platform is stuff I have to block, if I have to spend half an hour every day blocking stuff, I'm quitting the platform.

    • _heimdall 2 days ago

      That isn't a problem if your feed is filled only with content from those you chose to follow.

      If you hit follow on someone and 99% of what they post should be blocked, unfollow them and move on.

virtuous_sloth 2 days ago

ActivityPub allows one to follow hashtags in addition to accounts. Pick some hashtags of interest, find some people in those posts to follow. Lather, rinse, repeat.
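
A rough sketch of that loop in Python, assuming a Mastodon-style server: the instance URL and hashtags are placeholders, and the tag-timeline endpoint used here is a feature of the server software rather than something ActivityPub itself defines.

    import requests

    INSTANCE = "https://mastodon.example"         # placeholder: your server of choice
    HASHTAGS = ["retrocomputing", "selfhosting"]  # placeholder hashtags of interest

    def accounts_posting_about(tag, limit=40):
        """Collect account handles seen in a server's public timeline for one hashtag."""
        resp = requests.get(
            f"{INSTANCE}/api/v1/timelines/tag/{tag}",
            params={"limit": limit},
            timeout=10,
        )
        resp.raise_for_status()
        return {status["account"]["acct"] for status in resp.json()}

    # Lather, rinse, repeat: skim the authors and follow the ones worth following.
    for tag in HASHTAGS:
        print(f"#{tag}:", ", ".join(sorted(accounts_posting_about(tag))))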

  • immibis 2 days ago

    ActivityPub has no provision to follow a hashtag - that's a local server feature.
