Comment by godelski a day ago

  > This is a really common problem with science reporting in general.
There's a lot that actually becomes really sinister with this. I'm sure the people writing those news articles pass them off as little white lies, "close enough", or whatever. And in some sense, there's a lot of truth to that. But the problems stem from what grows out of these "small errors" and into far larger ones.

1) It certainly contributes to a significant part of the g̶r̶o̶w̶i̶n̶g̶ distrust in science. People are getting their science from the news, not from the horse's mouth. If you report something silly while saying "scientists say", then people will point the finger at the scientists more than at the reporter. The scientists aren't writing in plain English, after all[0]. Think of chocolate being healthy for you, red meat causing cancer, or machine learning quantum black holes. There's always an element of truth to these things, but truth is not sensational. Truth and complexity go hand in hand. As complexity decreases, you have to sacrifice accuracy. This is a tough balance to strike[1].

2) The public isn't very scientifically literate, and it is easy to misconstrue meaning. Hell, even scientists routinely struggle with this stuff. Let's take the red meat issue as an example. It's true, but a lot of those studies were looking at daily intakes of 50g or 100g, with results like "25% higher risk of rectal cancer" at the higher end of the estimates. For context, a Costco hotdog is 110g while a Nathan's hotdog is 48g. We're talking about 1-2 hotdogs per day. We're also talking about a percent increase in risk, not a percent risk. The CDC site says approximately 3.9% of men and women will be diagnosed with colorectal cancer. If that's our baseline, then a 25% increase takes you to a 4.9% risk. That's a disgusting number of hotdogs to move from a ~4% cancer risk to ~5%. Concerning on a national level, but not on a personal one. This feeds back into #1: people interpret the news as saying "eating hotdogs makes you likely to get cancer" while observing heavy hotdog eaters around them not getting cancer. Their observation doesn't run counter to what the research says, but it does run counter to the narrative on the news. That incongruence between observation and understanding does justify mistrust. And there are just more ways to misinterpret something than there are to interpret it correctly (this relates to [2]). This failure mode becomes self-reinforcing, enough that I think anyone who spends any time on the internet will be aware of it. (Communicating is fucking hard; communicating accurately is even harder.)
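To make the relative-vs-absolute distinction concrete, here's a quick sketch of the arithmetic above, using the comment's own figures (Python purely for illustration):

```python
# Relative vs. absolute risk, using the figures quoted above.
baseline_risk = 0.039      # ~3.9% lifetime colorectal cancer risk (the CDC figure cited above)
relative_increase = 0.25   # the reported "25% higher risk" at high daily intake

absolute_risk = baseline_risk * (1 + relative_increase)
print(f"baseline:        {baseline_risk:.1%}")   # 3.9%
print(f"with increase:   {absolute_risk:.1%}")   # 4.9%
print(f"absolute change: {(absolute_risk - baseline_risk) * 100:.1f} percentage points")  # ~1 point
```

The headline "25% higher risk" is multiplicative on a small baseline, so the absolute change is roughly one percentage point.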

3) (Perhaps the worst part.) Scientists are primarily measured by their citations (raw count or "h-index"[3]). Unless you do something groundbreaking[4] (which is rare), this is the main way to "measure" performance. A great way to boost citations is getting media attention. Unfortunately, there are just a lot of papers published, and a primary driver of citation count is knowledge of a paper's existence. You don't need to be an Avi Loeb type (though it doesn't hurt) when we're talking about small numbers. If a Cal State grad student and an MIT grad student published the same paper, we'd expect the latter to get more citations due to MIT's greater visibility. MIT has a media wing, and its papers are much more easily picked up by larger news orgs. This is why so many scientists use platforms like Twitter: your work doesn't mean anything (to your personal success and your ability to continue doing that work) if you can't get enough citations. There's an obvious slippery slope here, one that can create a feedback loop to misreporting. The fiercer the competition, the riskier this situation becomes. It's really easy to slightly embellish your work. No one is checking at time of publication. Replication happens later, and the system devalues replication. Plus, while replicating, it is much easier to assume you've made a mistake than that the paper was in error (or seriously in error).

All this is to say that shit is messy. And I don't think any of it is particularly any one person's fault. It's more an emergent phenomenon of compounding effects. Little things here and there add up when we're talking about millions of events over many years. I know we all want things to be simple, and simplicity has a lot of benefits, but it can also be a big trap. "As simple as possible, but no simpler" does not mean something isn't extremely complex[5]. It's a trap that makes people think they can read a few lines of Wikipedia and understand something (read the whole article; it's still not enough). A trap with growing consequences in a world that grows in complexity[6].

[0] And I don't think w̶e̶ they should. Papers are a peer-to-peer communication network: open and visible, but that's how the peer-to-peer communication takes place. Expert-to-public communication has traditionally been done through the news or other science communicators. Asking scientists to write papers for the general public is like asking you to communicate with your coworker about your code as if your coworker knows nothing about code or the context it runs in (all because a layman may overhear). Good luck getting any work done...

[1] While news orgs and science communicators (especially pop-sci communicators... ugh...) are doing harm here, there are defenses anyone can take. Recognize that your understanding is always wrong to some degree. Don't take in information as binary true/false statements but as probabilities: e.g. likely true / maybe true / maybe false / likely false. Fundamentally, this is a good defense because it is always a more accurate interpretation. Scientists don't find truth through confirmation but through negation. What I mean is that w̶e̶ they rule things out. A scientist converges to truth[2].

[2] This also helps you sniff out conmen from the scientists. The scientist always has some doubt. At first they may present as highly confident, but as you press on details they start weakening their language. This isn't foolproof and isn't gonna work for every question, but it is common. Scientists are now getting more training in media literacy, because there's recognition that while this is the right way to talk to peers, it gives the public the sense that they lack expertise rather than that they are aware of complexity. The best signal is just getting them to talk about their domain. They won't stop, and they will get very detailed.

[3] The largest number h such that you have h papers with at least h citations each. An h-index of 10 means 10 papers with >= 10 citations; an h-index of 100 means 100 papers with >= 100 citations.
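The definition is easy to make concrete with a short sketch (just an illustration of the definition, not any official bibliometric tool):

```python
def h_index(citations):
    """h-index: the largest h such that at least h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:   # the paper at this rank still clears the bar
            h = rank
        else:
            break
    return h

# A researcher with papers cited [10, 8, 5, 4, 3] has an h-index of 4:
# four papers each have at least 4 citations, but there aren't five with >= 5.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

Note how insensitive the metric is to outliers: one paper with 10,000 citations moves the h-index by at most one.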

[4] If you do something groundbreaking, no one gives a shit about citation count. But doing something groundbreaking will surely make your citation count skyrocket (it often drives your h-index up too, as you've simply gained more attention and more people are reading your other works; the work doesn't change, but its visibility does). This fact is often used to justify the use of citation metrics. Citation metrics are fine, but they're also easy to hack and highly context driven. I mentioned Avi Loeb, and controversy is beneficial to this metric: every paper that cites him to say he's wrong is a point for him. Controversy is a way to gather points from a whole new source! Not those building on your work, but those building against your work.

[5] "If you can't explain it simply, you don't understand it well enough" is a laughable phrase. The reason you can't teach "a barmaid" Quantum Chromodynamics isn't that you don't understand it well enough. You can't explain it simply because you can't even use the word "color" without a layman thinking a quark is literally red (can you even get into how it is impossible to have "red" at that scale?).

[6] Progress necessitates an increase in complexity. Look at a Taylor expansion and its relationship to computational difficulty. I'll let you all figure this out; my comment is already too long.

makeitdouble 17 hours ago

> Truth and complexity go hand in hand

Yes. It's pretty incredible when someone comes up with a way to convey that complexity in a gradual and easily graspable form.

Looking at other subjects: we have movies explaining extremely intricate heist plans, and people stick with them for 3 hours. I wish we had more incentives to spend that kind of talent and money on knowledge stuff.

Watching the "Doctor Stone" anime, I was baffled at how roughly it presents engineering and scientific processes. As it is, the popularity of the show is crazy, and I wonder how much of a hit it would be if it took much more time to explain trial and error and how much it takes to design and build anything from scratch.

  • godelski 8 hours ago

    I think we're making progress in that direction. I mean, you have people like Veritasium and 3Blue1Brown being excellent science and math communicators. They are fairly high level, but have you ever checked out the Summer of Math Exposition (#SoME) submissions? There are some amazing videos in there that are as informative as, if not more than, lectures I had at uni. Not just the average lecture, either; some rival or surpass the best lectures I've had.

    It's hard, but I think there's hope. Academia isn't aligned for this right now though, so it is often coming from outside. But writing my thoughts on that is going to be another, if smaller, rant lol

cantor_S_drug 13 hours ago

> Not those building on your work, but those building against your work.

In our brains we have inhibitory neurons, so why can't the citation mechanism incorporate negative feedback? The final score would be a combination of positive and negative citations.
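A minimal sketch of what such a signed score might look like (the function and its weighting are entirely hypothetical; nothing like this is in actual use):

```python
# Hypothetical "signed citation" score. The weight is a made-up
# illustration of the idea, not any real bibliometric.
def net_citation_score(supporting, refuting, refute_weight=0.5):
    """Combine positive citations with down-weighted negative ones.

    supporting:    count of citations building on the work
    refuting:      count of citations arguing against it
    refute_weight: how much each refutation subtracts (a free parameter)
    """
    return supporting - refute_weight * refuting

# A controversial paper with 40 supporting and 60 refuting citations:
# plain counting scores it 100; the signed score is 40 - 0.5*60 = 10.
print(net_citation_score(40, 60))  # 10.0
```

Of course, `refute_weight` is doing all the work here, and choosing it is exactly the kind of judgment call raw citation counts pretend to avoid.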

  • godelski 8 hours ago

    It could, but that could also have negative impacts on the system. I don't actually want people like Loeb to stop publishing. Off-the-wall thinking can be helpful. The reason is that you're still doing work.

    It is exploration, so think about it this way. Everyone says "there's no gold over there" and then someone says "well did anybody check?" It helps when someone comes back like "yep, I checked, no gold."

    Even if the answer was very likely, it is good to know. Those big breakthroughs only happen by challenging the current paradigm; that's almost true by definition. You're not going to create paradigm shifts by maintaining the current paradigm, right? So you have to keep space open for wild and crazy ideas. Hell, take the famous multiverse theory in physics. It is almost certainly wrong, but it can still provide some utility. The same is true of String Theory. It has to be okay to be wrong in science. You won't progress if that isn't possible. It would also mean you can't progress even if you're right but no one listens!

    The problem is that you're trying to measure something fundamentally intangible. With scientists you're trying to measure impact, but the only way to measure impact is through hindsight, which sometimes takes hundreds of years. Every single measure you take, no matter how obvious and simple it may seem, is a proxy. Your measurement cannot be perfectly aligned with the thing you intend to measure. Many times the alignment difference is almost nothing, so it doesn't matter. But with things like this? Sorry, you're not going to be able to measure scientific output unless you can travel to the future to make that measurement. (Similarly, you can't measure a student's educational outcome until after they have left school and applied that knowledge, including their metaskills.)

    You should measure, but with things like this, measures are a dangerous trap.

4rt a day ago

thank you for your service, it was a very long comment but i read it all and found it insightful!

  • godelski a day ago

    Thanks for reading. Topic hits close to home for me lol