Comment on 2015 mRNA paper suggests data re-used in different contexts
(pubpeer.com)
155 points by picture 2 days ago
An (agarose?) gel.
There are partial holes in it at one end. You insert a small amount of dyed solution containing DNA (etc.) into each. Apply an electrical potential across the gel. The DNA gradually moves along. Smaller DNA fragments move faster. So, at a given time, you can coarsely measure the fragment size of a given sample. Your absolute scale is given by "standards", aka "ladders", which are samples containing fragments of multiple, known sizes.
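To make the "ladder" idea concrete: migration distance is roughly linear in the log of fragment size, so you can calibrate against the ladder bands and interpolate. A minimal sketch (all band positions and sizes below are made-up illustrative numbers, not from any real gel):

    import numpy as np

    # Hypothetical ladder: known fragment sizes (bp) and their measured
    # migration distances (mm) on the gel. All values are illustrative.
    ladder_bp = np.array([10000, 5000, 2000, 1000, 500, 250])
    ladder_mm = np.array([12.0, 20.5, 31.0, 40.5, 51.0, 60.0])

    # Migration distance is roughly linear in log10(size): fit a line,
    # then invert it to estimate the size of an unknown band.
    slope, intercept = np.polyfit(ladder_mm, np.log10(ladder_bp), 1)

    def estimate_size(distance_mm):
        """Estimate fragment size (bp) from migration distance (mm)."""
        return 10 ** (slope * distance_mm + intercept)

    # An unknown band at 35 mm lands between the 1 kb and 2 kb rungs.
    print(round(estimate_size(35.0)))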
The paper authors cheated (allegedly) by copy + pasting images of the gel. This is what was caught, so it implies they may have made up some or all results in this and other papers.
Close - this is an SDS-PAGE gel, and you run it using proteins. The bands in the first two rows are from a western blot (the gel is transferred to a membrane), where you use antibodies against those specific proteins to detect them. The Pon S row is Ponceau S, a dye that non-specifically detects all proteins - so it's used as a loading control, to make sure that the same amount of total protein is loaded in each lane of the gel.
Is it conceivable that the control was run once because the key result came from the same run? I can see a reviewer asking for it in all three figures, whereas they may have drafted it only in one.
Additional context, to speculate about OP's intentions: within the academic world there was a major scandal where a semi-famous researcher was exposed for faking decades of data (Google: Pruitt). Ever since, people have been hungry for more drama of the same shape.
This is protein on a western blot but the general idea is the same.
what happens to people who do this? are they shunned forever from scientific endeavors? isn't this the ultimate betrayal of what a scientist is supposed to do?
This guy made some videos about it
I've always wondered about gel image fraud -- what's stopping fraudulent researchers from just running a dummy gel for each fake figure? If you just loaded some protein with a similar MW / migration / concentration as the one you're trying to spoof, the bands would look more or less indistinguishable. And because it's a real unique band (just with the wrong protein), you wouldn't be able to tell it's been faked using visual inspection.
Perhaps this is already happening, and we just don't know it... For this reason I've always thought gel images were more susceptible to fraud vs. other commonly faked images (NMR / MS spectra etc., which are harder to spoof).
Gel electrophoresis data or Western/Southern/Northern blots are not hard to fake. Nobody seeing the images can tell what you put into each well of your gel. And for the blots nobody can tell which kind of antibody you used. It's still not totally effortless to fake, as you have to find another protein with the right weight, which is not necessarily something you have just lying around.
I'd also suspect that fraud does not necessarily start at the beginning of the experiments, but might happen at a later stage when someone realizes their results didn't turn out as expected or wanted. At that point you already did the gels and it might be much more convenient to just do image manipulation.
Something like NMR data is certainly much more difficult to fake convincingly, especially if you'd have to provide the original raw datasets at publication (which unfortunately isn't really happening yet).
Shifting the topic from research misconduct to good laboratory practices, I don't really understand how someone would forget to take pictures of their gels often enough that they would feel it necessary to fake data. (I think you're recounting something you saw someone else do, so this isn't criticizing you.) The only reason to run the experiment is to collect data. If there's no data in hand, why would they think the experiment was done? Also, they should be working from a written protocol or a short-form checklist so each item can be ticked off as it is completed. And they should record where they put their data and other research materials in their lab notebook, and copy any work (data or otherwise) to a file server or other redundant storage, before leaving for the day. So much has to go wrong to get to research misconduct and fraud from the starting point of a little forgetfulness.
I mean, I've seen people deliberately choose to discard their data and keep no notes, even when I offered to give them a flash drive with their data on it, so I understand that this sort of thing happens. It's still senseless.
"Whats stopping?" nothing, and that is why it is happening constantly. A larger and larger portion of scientific literature is riddled with these fake studies. I've seen it myself and it is going to keep increasing as long as the number of papers published is the only way to get ahead.
They have a playlist of 3500 videos showing images like this one
https://youtube.com/playlist?list=PLlXXK20HE_dV8rBa2h-8P9d-0...
I was curious how the video creators were able to generate so many videos in such a short timeframe. It looks like it might be automated with this tech: https://rivervalley.io/products/research-integrity
Very cool. I wish these guys would have a podcast discussing high profile papers, how influential they are, what sorts of projects have been built on top of them, and then be like "uh oh, it looks like our system detected something strange about the results".
I wish wish wish there was something similar also for computer science. If I got paid for every paper that looked interesting but could not be replicated, I would be rich.
There is so little content and context to this link that it is essentially flame war bait in a non-expert forum like HN.
The title was edited, presumably by HN moderators, after I posted it. I actually ran into this YouTube channel and thought it was very interesting, since I didn't realize academia seems to make so many mistakes all the time. https://news.ycombinator.com/item?id=42728742
For reference, the title of the paper this appeared in is "Novel RNA- and FMRP-binding protein TRF2-S regulates axonal mRNA transport and presynaptic plasticity"
Google Scholar reports 43 citations: https://scholar.google.com/scholar?q=Novel+RNA-and+FMRP-bind...
The images still seem to be visible in both PubMed and Nature versions.
PubMed version: https://pubmed.ncbi.nlm.nih.gov/26586091/
Nature version: https://www.nature.com/articles/ncomms9888
Nature version (PDF): https://www.nature.com/articles/ncomms9888.pdf
Just for context:
The senior author is Mark Mattson: one of the world's most highly cited neuroscientists, with amazing productivity and a large lab at NIH when this work was done.
https://scholar.google.com/citations?user=N3ObarMAAAAJ&hl=en...
Mattson is well known as a biohacker and an expert on intermittent fasting and its health benefits.
https://en.wikipedia.org/wiki/Mark_Mattson
He retired from the National Institute on Aging in 2019 and is now at Johns Hopkins University. He is still an active researcher.
https://nihrecord.nih.gov/2019/08/23/mattson-expert-brain-ag...
Whereas actually Spotify funds artificial bands because they're more profitable
https://harpers.org/archive/2025/01/the-ghosts-in-the-machin...
If you just looked at all the undergrads trying to find ways to cheat on their homework, exams, and job interviews, it'd be easy to imagine that university lab science conducted by those same people is also full of cheating whenever they thought they could get away with it.
But I've wondered whether maybe some of the fabrications are just sloppy work tracking so many artifacts.
You might be experienced enough with computers to have filing conventions and workflow tools, around which you could figure out how to accurately keep track of numerous lab equipment artifacts, including those produced by multiple team members, and have traceability from publication figures all the way to original imaging or data. But is this something everyone involved in a university lab would be able to do reliably?
I'm sure there's a lot of dishonesty going on, because people going into the hard sciences can be just as shitty as your average Leetcode Cadet. But maybe some genuine scientists could use better computer tools and skills?
Would this imply that someone faked data in a paper they published?
Could this be a repeat of the Xerox image duplication bug? https://www.dkriesel.com/en/blog/2013/0802_xerox-workcentres...
Any reason Hanlon's razor doesn't apply here? Honest question, I'm just a regular four-year-degree, off-to-work guy.
Yeah, I have mixed feelings about Hanlon's razor. Giving people the benefit of the doubt is good, and some people don't do it enough, but there are also a lot of people who overextend the benefit of the doubt to the point that they're almost doing damage control for fraudsters.
There are perverse incentives in scientific publishing, and there are not many alternative explanations.
Here's how the razor applies: There is no real malice behind all the fraud in science publications. The authors aren't usually out to specifically harm others.
However, in the long run it is stupid because of two and a half reasons:
- it reduces people's trust in science, because it is obvious we cannot trust the scientists, which in the long run will reduce public funding for the grift
- it causes misallocation of funds by people misled by the grift, and this may lead to actual harm (e.g., what if you develop Alzheimer's but there is no cure because someone lied about the causes 20 years ago?)
- (1/2) there is a chance that you will get caught and, like the former president of Stanford, not be allowed to continue bilking the gullible. This only gets half a point because the repercussions are generally not immediate and definitely not devastating to those who do it skillfully.
The former president of Stanford is the CEO of Xaira now.
The opportunity here is to automate detection of fake data used in papers.
It could be hard to do without access to data, and integration could be costly. And like shorting, the difficulty is how to monetize. It could also be easy to game. Still...
The nice thing about the business is that the market (publishing) is flourishing. Not sure about the state of the art or the availability of such services.
For sales: run it on recent publications, and quietly ping the editors with findings and a reasonable price.
Unclear though whether to brand in a user-visible way (i.e., where the journal would report to readers that you validate their stuff). It could drive uptake, but a glaring false negative would be a risk.
Structurally, perhaps should be a non-profit (which of course can accumulate profits at will). Does YC do deals without ownership, e.g., with profit-sharing agreements?
Elizabeth Bik (who is known for submitting such reports to journals) has a nice interview about this problem[0], which covers software as well.
> After I raised my concerns about 4% of papers having image problems, some other journals upped their game and have hired people to look for these things. This is still mainly being done I believe by humans, but there is now software on the market that is being tested by some publishers to screen all incoming manuscripts. The software will search for duplications but can also search for duplicated elements of photos against a database of many papers, so it’s not just screening within a paper or across two papers or so, but it is working with a database to potentially find many more examples of duplications. I believe one of the software packages that is being tested is Proofig.
Proofig makes a lot of claims but they also list a lot of journals: https://www.proofig.com/
[0]: https://thepublicationplan.com/2022/11/29/spotting-fake-imag...
The image with meaningless blotches, technical diagrams and implied dubiousness feels like the beginning of a "please check and comment" meme.
There's a video that's quite convincing: https://youtu.be/K0Xio5yo_x8
It inverts the second image and passes the first and third images under it, and when there is a complete overlap the combined images make a nearly perfectly gray rectangle, showing that they cancel out.
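For anyone who wants to try the inversion trick themselves, here's a minimal sketch with NumPy and Pillow (the crop filenames are hypothetical, and the crops are assumed pre-aligned and the same size): inverting one image and averaging it with the other turns identical regions into flat mid-gray.

    import numpy as np
    from PIL import Image

    # Hypothetical grayscale crops of the two suspect band regions.
    a = np.asarray(Image.open("band_fig2.png").convert("L"), dtype=np.float64)
    b = np.asarray(Image.open("band_fig5.png").convert("L"), dtype=np.float64)

    # Average one image with the inverse of the other. If a == b pixel
    # for pixel, (a + (255 - b)) / 2 is 127.5 everywhere: flat mid-gray.
    overlay = (a + (255.0 - b)) / 2.0
    Image.fromarray(overlay.astype(np.uint8)).save("overlay.png")

    # Identical crops give ~0 here; real differences leave structure.
    print("max deviation from gray:", np.abs(overlay - 127.5).max())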
The linked video makes it pretty clear by subtracting one image from the other and showing the difference: https://www.youtube.com/watch?v=K0Xio5yo_x8
Ironically there was a whole post about basically exactly this the other day: https://news.ycombinator.com/item?id=42655870
In any image manipulation program with layers, like Photoshop, you put the suspect images on top of one another, use a blend operation to subtract one layer from the other (I'm not sure which operation works best; it might be multiply or divide), and then work to align the two layers. Differences and similarities become extremely obvious.
You can also get the raw pixel information by converting to a bitmap and comparing values, but it's easier visually because it's pretty trivial for a simple image modification to change all of the pixel values but still have the same image.
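For what it's worth, the Photoshop blend mode that does this per-pixel subtraction is "Difference". A minimal NumPy version of the same idea, including the alignment step (filenames are hypothetical, and brute-forcing small offsets stands in for the manual layer nudging):

    import numpy as np
    from PIL import Image

    # Hypothetical same-sized grayscale crops of the suspect images.
    a = np.asarray(Image.open("suspect_a.png").convert("L"), dtype=np.float64)
    b = np.asarray(Image.open("suspect_b.png").convert("L"), dtype=np.float64)

    def mean_abs_diff(dx, dy):
        """Mean |a - b| over the overlap when b is shifted by (dx, dy)."""
        h, w = a.shape
        aa = a[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)]
        bb = b[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
        return np.abs(aa - bb).mean()

    # Brute-force small shifts to align the two "layers"; a near-zero
    # score at some offset means one image duplicates the other.
    score, dx, dy = min((mean_abs_diff(dx, dy), dx, dy)
                        for dx in range(-10, 11) for dy in range(-10, 11))
    print(f"best shift ({dx}, {dy}) with mean difference {score:.2f}")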
There is a desperate need for automated experiment verification and auditing. Something as simple as submitting EXIF data + archiving at time of capture, for crying out loud.
An imgur for scientific photos with hash-based search or something. We have the technology for this.
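Perceptual hashing is the usual building block for that kind of search. A minimal sketch using the imagehash library (the filenames, sources, and distance threshold are all illustrative): near-duplicate images hash to nearby values even after rescaling or recompression.

    import imagehash
    from PIL import Image

    # Hypothetical archive mapping perceptual hash -> image provenance.
    archive = {}

    def register(path, source):
        """Hash an uploaded figure and flag near-duplicates on file."""
        h = imagehash.phash(Image.open(path))
        for known, origin in archive.items():
            # Hamming distance between hashes; small values mean a
            # near-duplicate even after rescaling or recompression.
            if h - known <= 5:  # threshold is illustrative
                print(f"{path} looks like a duplicate of {origin}")
        archive[h] = source

    register("fig2_blot.png", "Paper A, Fig 2")
    register("fig5_blot.png", "Paper B, Fig 5")  # flagged if it reuses the blot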
Can someone change the title to:
"Comment on Nature paper on 2015 mRNA paper suggests data re-used in different contexts"
The current title would suggest music to most lay-people.
As someone clueless about music and mRNA I've got to say this wouldn't help me much.
Ok, we've changed it. Submitted title was "Same three bands appear in three different presentations with different labels".
picture (the submitter) had the right idea—it's often better to take a subtitle or a representative sentence from the article when an original title isn't suitable for whatever reason, but since in this case it's ambiguous, we can change it.
If there's a better phrase from the article itself, we can change it again.
I guess I'll bite - what am I looking at here?