Perseids 5 days ago

I'm dumbfounded they chose the name of the infamous NSA mass surveillance program revealed by Snowden in 2013. And even more so that there is just one other comment among 320 pointing this out [1]. Has the technical and scientific community in the US already forgotten this huge breach of trust? This is especially jarring at a time when the US is burning its political goodwill at an unprecedented rate (at least unprecedented during the lifetimes of most of us) and talking about digital sovereignty has become mainstream in Europe. As a company trying to promote a product, I would stay as far away from that memory as possible, at least if you care about international markets.

[1] https://news.ycombinator.com/item?id=46787165

  • ZpJuUuNaQ5 4 days ago

    >I'm dumbfounded they chose the name of the infamous NSA mass surveillance program revealed by Snowden in 2013. And even more so that there is just one other comment among 320 pointing this out

    I just think it's silly to obsess over words like that. There are many words that take on different meanings in different contexts and can be associated with different events, ideas, products, time periods, etc. Would you feel better if they named it "Polyhedron"?

    • jll29 4 days ago

      What the OP was talking about is the negative connotation that goes with the word; it's certainly a poor choice from a marketing point of view.

      You may say it's "silly to obsess", but it's like naming a product "Auschwitz" and saying "it's just a city name" -- it ignores the power of what Geoffrey N. Leech called "associative meaning" in his taxonomy of "Seven Types of Meaning" (Semantics, 2nd ed.): speaking that city's name evokes images of piles of corpses of gassed, undernourished human beings, walls of gas chambers with fingernail scratches, and lampshades made of human skin.

    • black_puppydog 4 days ago

      I have to say I had the same reaction. Sure, "prism" shows up in many contexts. But here it shows up in the context of a company and product that is already constantly in the news for its lackluster regard for other people's expectations of privacy and copyright, and for generally trying to "collect it all", as it were; and all that, as GP mentioned, in an international context that doesn't put these efforts in the best light.

      They're of course free to choose this name. I'm just also surprised they would do so.

    • mc32 4 days ago

      Plus there are lots of “legacy” products with the name prism in them. I also don’t think the public makes the connection. It’s mainly people who care to be aware of government overreach who think it’s a bad word association.

    • jimbokun 4 days ago

      But the contexts are closely related.

      Both are large-scale technology projects that people are suspicious and anxious about. A lot of people are anxious that AI will be used for mass surveillance by governments. So you pick the name of another project that was used for mass surveillance by a government.

    • bergheim 4 days ago

      Sure. Like Goebbels. Because they gobble things up.

      Also, Nazism. But different context, years ago, so whatever, I guess?

      Hell, let's just call it Hitler. Different context!

      Given what they do, it is an insidious name. Words matter.

      • fortyseven 4 days ago

        You're comparing words with unique widespread notoriety to a simple, everyday one. Try again.

        • rvnx 4 days ago

          Prism, in tech, is very well known as a surveillance program.

          Coming from a company involved in sharing data with intelligence services (it's the law; you can't escape it), this is not wise at all. Unless nobody at OpenAI had heard of it.

          It was one of the biggest scandals in tech 10 years ago.

          They could have called it "Workspace": clearer, more useful, no need for a code word (that would have been fine for internal use).

      • ZpJuUuNaQ5 4 days ago

        So you have to resort to the most extreme examples in order to make it a problem? Do you also think of Hitler when you encounter the word "vegetarian"?

    • mayhemducks 4 days ago

      You do realize that obsessing over words like that is a pretty major part of what programming and computer science are, right? Linguistics is highly intertwined with computer science.

  • sunaookami 5 days ago

    >Has the technical and scientific community in the US already forgotten this huge breach of trust?

    Have you ever seen the comment section of a Snowden thread here? A lot of users here call for Snowden to be jailed, call him a Russian asset, play down the reports, etc. These are either NSA sock puppet accounts or people who won't bite the hand that feeds them (employees of companies willing to breach their users' trust).

    Edit: see my comment here in a Snowden thread: https://news.ycombinator.com/item?id=46237098

    • jll29 4 days ago

      What Snowden did was heroic. What was shameful was the world's underwhelming reaction. Where were the images in the media of protest marches, like those against the Vietnam War?

      Someone once said, "Religion is opium for the people." Today, give people a mobile device and some doom-scrolling social media celebrity nonsense app, and they wouldn't notice if their own children didn't come home from school.

      • vladms 4 days ago

        Looking back, I think handing more centralized control over various forms of media to private parties did much worse overall than government surveillance in the long run.

        For me the problem was not surveillance; the problem is addiction-focused app building (plus the monopoly), and that never seemed to be a secret. Only now are there some attempts to do something (like Australia and France banning children, which I'm not sure is feasible or effective, but at least it's more than zero).

      • sunaookami 4 days ago

        Remember when people and tech companies protested against SOPA and PIPA? Remember the SOPA blackout day? Today even worse laws are passed with cheers from the HN crowd, such as the OSA. Embarrassing.

      • linkregister 4 days ago

        Protests in 2025 alone have outnumbered those during the Vietnam War.

        Protesting is a poor proxy for American political engagement.

        Child neglect and missing children rates are lower than they were 50 years ago.

    • linkregister 4 days ago

      Are you asserting that anyone who disagrees with you is either part of a propaganda campaign or a cynical insider? Nobody who opposes you has a truly held belief?

    • TiredOfLife 5 days ago

      Him being (or, best case, becoming) a Russian asset turned out to be true.

      • omnimus 5 days ago

        Like it would matter for any of the revelations. And like he had any other choice to avoid prison. Look at how it worked out for Assange.

      • lionkor 4 days ago

        If the messenger has anything to do with Russia, even after the fact, we should dismiss the message and remember to never look up.

      • sunaookami 4 days ago

        In what way did it "turn out to be true"? Because he has Russian citizenship and is living in a country that is not allied with his home country, which is/was actively trying to kill him (and revoked his US passport)?

      • vezycash 4 days ago

        Truth is truth, no matter the source.

      • jimmydoe 4 days ago

        He could have been a Chinese asset, but CCP is a coward.

  • pageandrew 5 days ago

    These things don't really seem related at all. It's a pretty generic term.

    • Phelinofist 5 days ago

      FWIW, my immediate reaction was the same: "That reminds me of NSA PRISM."

    • kakacik 4 days ago

      I came here based on the headline expecting some more CIA & NSA shit; that word has been tarnished for a few decades in the better part of the IT community (the part that actually cares about this craft beyond a paycheck).

    • vaylian 5 days ago

      And yet, the name immediately reminded me of the Snowden revelations.

  • JasonADrury 5 days ago

    This comment might make more sense if there was some connection or similarity between the OpenAI "Prism" product and the NSA surveillance program. There doesn't appear to be.

    • Schlagbohrer 5 days ago

      Except that this lets OpenAI gain research data and scientific ideas by stealing from their users, using their huge mass surveillance platform. So, tremendous overlap.

      • concats 5 days ago

        Isn't most research and scientific data already shared openly (usually in publications)?

      • cruffle_duffle 4 days ago

        "Except that this lets OpenAI gain research data and scientific ideas by stealing from their users, using their huge mass surveillance platform. So, tremendous overlap."

        Even if what you say is completely untrue (and who really knows for sure), it still creates that mental association. It's a horrible product name.

      • isege 5 days ago

        This comment allows ycombinator to steal ideas from their users' comments, using their huge mass news platform. Tremendous overlap indeed.

  • aa-jv 5 days ago

    >Has the technical and scientific community in the US already forgotten this huge breach of trust?

    Yes, imho, there is a great deal of ignorance of the actual contents of the NSA leaks.

    The agitprop against Snowden as a "Russian agent" has successfully occluded the actual scandal, which is that the NSA has built a totalitarian-authoritarian apparatus that is still in wide use.

    Autocrats' general hubris about their own superiority has been weaponized against them. Instead of actually addressing the issue with America's repressive military industrial complex, they kill the messenger.

  • LordDragonfang 4 days ago

    Probably gonna get buried at the bottom of this thread, but:

    There's a good chance they just asked GPT5.2 for a name. I know for a fact that when some of the OpenAI models get stuck in the "weird" state associated with LLM psychosis, three of the things they really like talking about are spirals, fractals, and prisms. Presumably, there's some general bias toward those concepts in the weights.

  • saidnooneever 5 days ago

    Tons of things are called Prism.

    (Full disclosure: yes, they will be handing in PII on demand, the same kind of deals; this is "normal" - 2012 shows us no one gives a shit.)

  • alfiedotwtf 5 days ago

    > Has the technical and scientific community in the US already forgotten this huge breach of trust?

    We haven’t forgotten… it’s mostly that we’re all jaded given that there have been zero ramifications, so what’s the use of complaining? You’re better off pushing shit up a hill.

  • teddyh 4 days ago

    We used to have “SEO spam”, where people would try to create news (and other) articles associated with some word or concept to drown out some scandal associated with that same word or concept. The idea was that people searching on Google for the word would see only the newly created articles, and not see anything scandalous. This could be something similar, but aimed at future LLMs trained on these articles. If LLMs learn that the word “Prism” means a certain new thing in a surveillance context, the LLMs will unlearn the older association, thereby hiding the Snowden revelations.

  • cruffle_duffle 4 days ago

    As a datapoint, when I read this headline, the very first thing I thought was: "wasn't PRISM some NSA shit? Is OpenAI working with the NSA now?"

    It's a horrible name for any product coming out of a company like OpenAI. People are super sensitive to privacy and government snooping and OpenAI is a ripe target for that sort of thinking. It's a pretty bad association. You do not want your AI company to be in any way associated with government surveillance programs no matter how old they are.

  • wmeredith 4 days ago

    I get what you're saying, but that was 13 years ago. How long before the branding statute of limitations runs out on usage for a simple noun?

  • bandrami 5 days ago

    I mean, it's also the name of the national engineering education journal and a few other things. There are only 14,000 five-letter words in English, so you're going to have collisions.
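
    As a rough sanity check on that claim, here is a back-of-the-envelope birthday-problem sketch in Python (taking the 14,000 figure at face value and assuming, unrealistically, that names are drawn uniformly at random):

      def collision_prob(n, words=14_000):
          # Probability that at least two of n products drawn uniformly
          # from `words` candidate names end up sharing a name.
          p_unique = 1.0
          for k in range(n):
              p_unique *= (words - k) / words
          return 1 - p_unique

      print(round(collision_prob(100), 2))  # ~0.3 with just 100 products
      print(round(collision_prob(250), 2))  # ~0.89 with 250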

  • hcfman 4 days ago

    Yeah, to be fair I would be hesitant to have anything to do with any program called prism as well. Hard to imagine that no one brought this up when they were thinking of a name.

  • yayitswei 4 days ago

    Fwiw I was going to make the same comment about the naming, but you beat me to it.

  • lrvick 4 days ago

    Considering OpenAI is deeply rooted in an anti-freedom ethos and surveillance capitalism, I think it is quite a self-aware and fitting name.

  • johanyc 4 days ago

    I did not make the association at all

  • observationist 4 days ago

    I think it's probably just apparent to a small set of people; we're usually the ones yelling at the stupid cloud technologies that are ravaging online privacy and liberty, anyway. I was expecting some sort of OpenAI automated user data handling program, with the recent venture into adtech, but since it's a science project and nothing to do with surveillance and user data, I think it's fine.

    If it was part of their adtech systems and them dipping their toe into the enshittification pool, it would have been a legendarily tone deaf project name, but as it is, I think it's fine.

  • aargh_aargh 5 days ago

    I still can't get over the Apple thing. Haven't enjoyed a ripe McIntosh since. </s>

vitalnodo 5 days ago

Previously, this existed as crixet.com [0]. At some point it used WASM for client-side compilation, and later transitioned to server-side rendering [1][2]. It now appears that there will be no option to disable AI [3]. I hope the core features remain available and won’t be artificially restricted. Compared to Overleaf, there were fewer service limitations: it was possible to compile more complex documents, share projects more freely, and even do so without registration.

On the other hand, Overleaf appears to be open source and at least partially self-hostable, so it’s possible some of these ideas or features will be adopted there over time. Alternatively, someone might eventually manage to move a more complete LaTeX toolchain into WASM.

[0] https://crixet.com

[1] https://www.reddit.com/r/Crixet/comments/1ptj9k9/comment/nvh...

[2] https://news.ycombinator.com/item?id=42009254

[3] https://news.ycombinator.com/item?id=46394937

  • crazygringo 5 days ago

    I'm curious how it compares to Overleaf in terms of features. Putting aside the AI aspect entirely, I'm simply curious whether this is a viable Overleaf competitor -- especially since it's free.

    I do self-host Overleaf which is annoying but ultimately doable if you don't want to pay the $21/mo (!).

    I do have to wonder for how long it will be free or even supported, though. On the one hand, remote LaTeX compiling gets expensive at scale. On the other hand, it's only a fraction of a drop in the bucket compared to OpenAI's total compute needs. But I'm hesitant to use it because I'm not convinced it'll still be around in a couple of years.

    • efficax 5 days ago

      Overleaf is a little curious to me. What's the point? Just install LaTeX. Claude is very good at manipulating LaTeX documents and I've found it effective at fixing up layouts for me.

      • radioactivist 5 days ago

        In my circles the killer features of Overleaf are the collaborative ones (easy sharing, multi-user editing with track changes/comments). Academic writing in my community basically went from emailed draft-new-FINAL-v4.tex files (or a shared folder full of those files) to basically people just dumping things on Overleaf fairly quickly.

      • bhadass 5 days ago

        collaboration is the killer feature tbh. overleaf is basically google docs meets latex.. you can have multiple coauthors editing simultaneously, leave comments, see revision history, etc.

        a lot of academics aren't super technical and don't want to deal with git workflows or syncing local environments. they just want to write their fuckin' paper (WTFP).

        overleaf lets the whole research team work together without anyone needing to learn version control or debug their local texlive installation.

        also nice for quick edits from any machine without setting anything up. the "just install it locally" advice assumes everyone's comfortable with that, but plenty of researchers treat computers as appliances lol.

        • joker666 4 days ago

          I am curious whether Git plus a local install can solve this collaboration issue with pull requests.

      • jdranczewski 5 days ago

        To add to the points raised by others, "just install LaTeX" is not imo a very strong argument. I prefer working in a local environment, but many of my colleagues much prefer a web app that "just works" to figuring out what MiKTeX is.

      • crazygringo 5 days ago

        I can code in monospace (of course) but I just can't write in monospace markup. I need something approaching WYSIWYG. It's just how my brain works -- I need the italics to look like italics, I need the footnote text to not interrupt the middle of the paragraph.

        The visual editor in Overleaf isn't true WYSIWYG, but it's close enough. It feels like working in a word processor, not in a code editor. And the interface overall feels simple and modern.

        (And that's just for solo usage -- it's really the collaborative stuff that turns into a game-changer.)

      • warkdarrior 5 days ago

        Collaboration is at best rocky when people have different versions of LaTeX packages installed. Also, merging changes from multiple people in git is a pain when dealing with scientific, nuanced text.

        Overleaf ensures that everyone looks at the same version of the document and processes the document with the same set of packages and options.

      • baby 5 days ago

        LaTeX is such a nightmare to work with locally.

      • lou1306 4 days ago

        The first three things are, in this order: collaborative editing, collaborative editing, collaborative editing. Seriously, this cannot be overstated.

        Then: the LaTeX distribution is always up to date; you can run it on limited resources; it has an endless supply of conference and journal templates (so you don't have to scavenge them yourself off a random conference/publisher website); the Git backend means (a) you can work offline and (b) version control comes for free. These are just off the top of my head.

      • MuteXR 4 days ago

        "Just install LaTeX" is really not a valid response when the LaTeX toolchain is a genuine nightmare to work with. I could do it but still use Overleaf. Managing that locally is just not worth it.

      • spacebuffer 5 days ago

        I'd use git in this case. I'm sure there are other reasons to use Overleaf, otherwise it wouldn't exist, but this seems like a solved issue with git.

      • 3form 5 days ago

        The LaTeX ecosystem is a UX nightmare; this is coming from someone who had to deal with it recently. Overleaf just works.

  • vicapow 5 days ago

    The deeper I got, the more I realized that really supporting the entire LaTeX toolchain in WASM would mean simulating an entire Linux distribution :( We wanted to support Beamer, LuaLaTeX, mobile (which wasn't working with WASM because of resource limits), etc.

    • seazoning 5 days ago

      We had been building literally the same thing for the last 8 months, along with a great browsing environment over arXiv -- we might just have to sunset it.

      Any plans to integrate Typst anytime soon?

      • vicapow 5 days ago

        I'm not against Typst. I think its integration would be a lot easier and more straightforward; I just don't know if it's really that popular yet in academia.

    • storystarling 4 days ago

      The WASM constraints make sense given the resource limits, especially for mobile. If you are moving that compute server-side though I am curious about the unit economics. LaTeX pipelines are surprisingly heavy and I wonder how you manage the margins on that infrastructure at scale.

    • BlueTemplar 5 days ago

      But what's the point?

      To end up with yet another shitty (because it runs inside a browser, in particular its interface) web app?

      Why not focus efforts on making a proper program (you know, with IBM menu bars and keyboard shortcuts), but with collaborative tools too?

      • jll29 4 days ago

        You are right in pointing out that the Web browser isn't the most suitable UI paradigm for highly interactive applications like a scientific typesetting system/text editor.

        I have occasionally lost a paragraph just by accidentally marking a few lines and pressing [Backspace].

        But at the moment, there is no better option than Overleaf, and while I encourage you to write what you propose if you can, Overleaf will be the bar that any such system needs to be compared against.

        • BlueTemplar 4 days ago

          OP is talking about developing an alternative to Overleaf. But they are still trying to do it inside a browser!

  • regenschutz 4 days ago

    I was using Crixet before I switched over to Typst[0] for all of my writing. However, back when I did use Crixet, I never used its AI features. It was just a much better alternative to Overleaf for me. Sad to see that AI will be forced on all Crixet users now.

    [0]: https://typst.app

  • songodongo 5 days ago

    So this is the product of an acquisition?

    • vitalnodo 5 days ago

      > Prism builds on the foundation of Crixet, a cloud-based LaTeX platform that OpenAI acquired and has since evolved into Prism as a unified product. This allowed us to start with a strong base of a mature writing and collaboration environment, and integrate AI in a way that fits naturally into scientific workflows.

      They’re quite open about Prism being built on top of Crixet.

  • realaaa 4 days ago

    Great context - thanks! So yeah, maybe Overleaf is the way to go now :)

  • doctorpangloss 5 days ago

    It seems bad for OpenAI to make this about LaTeX documents, which will now be associated, visually, with AI slop. The opposite of what anyone wants, really. Nobody wants you to know they used a chatbot!

    • eloisant 4 days ago

      This is just because LaTeX is widely used by researchers.

      Also, yes: LaTeX being source code, it's much easier to get an AI to generate LaTeX than to integrate with MS Word.

    • y1n0 5 days ago

      Please refrain from incorporating em dashes into your LaTeX document. In summary, the absence of em dashes in LaTeX.

    • amitav1 5 days ago

      Am I missing something? LaTeX is associated with slop now?

      • nemomarx 5 days ago

        If a common AI tool produces LaTeX documents, the association will be created, yeah. Right now, LaTeX would be a high indicator of manual effort, right?

i2km 5 days ago

This is going to be the concrete block which finally breaks the back of the academic peer review system, i.e. it's going to be a DDoS attack on a system which didn't even handle the load before LLMs.

Maybe we'll need to go back to some sort of proof-of-work system, i.e. only accepting physical mailed copies of manuscripts, possibly hand-written...

  • thomasahle 4 days ago

    I tried Prism, but it's actually a lot more work than just using Claude Code. The latter allows you to "vibe code" your paper with no manual interaction, while Prism actually requires you to review every change.

    I actually think Prism promotes a much more responsible approach to AI writing than "copying from ChatGPT" or the like.

  • haspok 5 days ago

    > This is going to be the concrete block which finally breaks the back of the academic peer review system

    Exactly, and I think this is good news. Let's break it so we can fix it at last. Nothing will happen until a real crisis emerges.

  • aembleton 5 days ago

    Maybe Open AI will sell you 'Lens' which will assist with sorting through the submissions and narrow down the papers worth reviewing.

  • jltsiren 4 days ago

    Or it makes gatekeepers even more important than before. Every submission to a journal will be desk-rejected, unless it is vouched for by someone one of the editors trusts. And people won't even look at a new paper, unless it's vouched for by someone / published in a venue they trust.

  • make3 5 days ago

    Overleaf basically already has the same thing

  • csomar 4 days ago

    That will just create a market for hand-writers. Good thing the economy is doing very well, right? So there aren't that many desperate people who will do it en masse and for peanuts.

  • boxed 5 days ago

    Handwriting is super easy to fake with plotters.

    • eternauta3k 4 days ago

      Is there something out there to simulate the non-uniformity and errors of real handwriting?

  • 4gotunameagain 5 days ago

    > i.e. only accepting physical mailed copies of manuscripts, possibly hand-written...

    And you think the Indians will not hand-write the output of LLMs?

    Not that I have a better suggestion myself...

syntex 5 days ago

The Post-LLM World: Fighting Digital Garbage https://archive.org/details/paper_20260127/mode/2up

Mini paper: the future isn't AI replacing humans; it's humans drowning in cheap artifacts. New unit of measurement proposed: verification debt. Also introduces: Recursive Garbage → model collapse.

(A little joke on Prism.)

  • Springtime 5 days ago

    > The Post-LLM World: Fighting Digital Garbage https://archive.org/details/paper_20260127/mode/2up

    This appears to just be LLM output itself? It credits GPT-5.2 and Gemini 3 exclusively as authors, has a public domain license (appropriate for AI output), and is only several paragraphs in length.

    • doodlesdev 5 days ago

      Which proves its own points! Absolutely genius! The cost asymmetry of producing versus checking garbage truly is becoming a problem in recent years, with the advent of LLMs and generative AI in general.

      • [removed] 5 days ago
        [deleted]
      • parentheses 5 days ago

        Totally agree!

        I feel like this means that working in any group where individuals compete against each other results in an AI vs AI content generation competition, where the human is stuck verifying/reviewing.

        • dormento 4 days ago

          > Totally agree!

          Not a dig on your (very sensible) comment, but now I always do a double take when I see anyone effusively approving of someone else's ideas. AI turned me into a cynical bastard :(

    • syntex 5 days ago

      Yes, I did it as a joke inspired by the PRISM release. But unexpectedly, it makes a good point. And the funny part for me was that the paper lists only LLMs as authors.

      Also, in a world where AI output is abundant, we humans become the scarce resource: the "tools" in the system that provide some connection to reality (grounding) for LLMs.

  • mrbonner 5 days ago

    Plot twist: humans become the new Proof of Work consensus mechanism. Instead of GPUs burning electricity to hash blocks, we burn our sanity verifying whether that Medium article was written by a person or a particularly confident LLM.

    "Human Verification as a Service": finally, a lucrative career where the job description is literally "read garbage all day and decide if it's authentic garbage or synthetic garbage." LinkedIn influencers will pivot to calling themselves "Organic Intelligence Validators" and charge $500/hr to squint at emails and go "yeah, a human definitely wrote this passive-aggressive Slack message."

    The irony writes itself: we built machines to free us from tedious work, and now our job is being the tedious work for the machines. Full circle. Poetic even. Future historians (assuming they're still human and not just Claude with a monocle) will mark this as the moment we achieved peak civilization: where the most valuable human skill became "can confidently say whether another human was involved."

    Bullish on verification miners. Bearish on whatever remains of our collective attention span.

    • kinduff 5 days ago

      Human CAPTCHA exists to figure out whether your clients are human or not, so you can segment them and apply human pricing. Synthetics, of course, fall into different tiers. The cheaper ones.

    • direwolf20 5 days ago

      Bullish on verifiers who accept money to verify fake things

JBorrow 5 days ago

From my perspective as a journal editor and a reviewer, these kinds of tools cause many more problems than they actually solve. They make the 'barrier to entry' for submitting vibed, semi-plausible journal articles much lower, which I understand some may see as a benefit. The drawback is that scientific editors and reviewers provide those services for free, as a community benefit. One example was a submission from someone using their undergraduate affiliation (in accounting) to submit a paper on cosmology, entirely vibe-coded and vibe-written. This just wastes our (already stretched) time. A significant fraction of submissions are now vibe-written and come from folks who are looking to 'boost' their CV (even having a 'submitted' publication is seen as a benefit), which is really not the point of these journals at all.

I'm not sure I'm convinced of the benefit of lowering the barrier to entry to scientific publishing. The hard part always has been, and always will be, understanding the research context (what's been published before) and producing novel and interesting work (the underlying research). Connecting this together in a paper is indeed a challenge, and a skill that must be developed, but is really a minimal part of the process.

  • SchemaLoad 5 days ago

    GenAI largely seems like a DDoS on free resources. The effort to review this stuff is now massively more than the effort to "create" it, so really, what is the point of even submitting it? The reviewer could have generated it themselves. Seeing it in software development where coworkers are submitting massive PRs they generated but hardly read or tested. Shifting the real work to the PR review.

    I'm not sure what the final state will be here, but it seems we are going to find it increasingly difficult to find any real factual information on the internet going forward. Particularly as AI starts ingesting its own generated fake content.

    • cryzinger 5 days ago

      More relevant than ever:

      > The amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it.

      https://en.wikipedia.org/wiki/Brandolini%27s_law

      • trees101 5 days ago

        The P≠NP conjecture in CS says, roughly, that checking a solution is easier than finding one. Verifying a Sudoku is fast; solving it from scratch is hard. But Brandolini's Law says the opposite: refuting bullshit costs way more than producing it.

        Not actually contradictory. Verification is cheap when there's a spec to check against. 'Valid Sudoku?' is mechanical. But 'good paper?' has no spec. That's judgment, not verification.
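
        To make the "spec" point concrete, here is a minimal Python sketch (illustrative only; the grid is assumed to be a completed 9x9 list of lists of ints): checking a Sudoku against its spec is a few mechanical lines, while nothing comparable can be written down for "is this a good paper?".

          def is_valid_sudoku(grid):
              # A completed grid is valid iff every row, column, and 3x3 box
              # contains exactly the digits 1..9 -- a purely mechanical check.
              def ok(cells):
                  return sorted(cells) == list(range(1, 10))

              rows = all(ok(row) for row in grid)
              cols = all(ok([grid[r][c] for r in range(9)]) for c in range(9))
              boxes = all(
                  ok([grid[r + dr][c + dc] for dr in range(3) for dc in range(3)])
                  for r in (0, 3, 6) for c in (0, 3, 6)
              )
              return rows and cols and boxes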

      • monkaiju 5 days ago

        Wow, the three comments from OC to here are all bangers; they combine into a really nice argument against these toys.

    • overfeed 5 days ago

      > The effort to review this stuff is now massively more than the effort to "create" it

      I don't doubt the AI companies will soon announce products that will claim to solve this very problem, generating turnkey submission reviews. Double-dipping is very profitable.

      It appears LLM-parasitism isn't close to being done, and keeps finding new commons to spoil.

      • fooker 5 days ago

        There are a dozen startups that do this.

    • wmeredith 4 days ago

      > Seeing it in software development where coworkers are submitting massive PRs they generated but hardly read or tested. Shifting the real work to the PR review.

      I've seen this complaint a lot of places, but the solution to me seems obvious. Massive PRs should be rejected. This was true before AI was a thing.

    • Spivak 5 days ago

      In some ways it might be a good thing that shorthand signals of quality are being destroyed because it forces all of us to meaningfully engage with the work. No more LGTM +1 when every PR looks good.

    • toomuchtodo 5 days ago
      • Cornbilly 5 days ago

        This one is hilarious. https://hackerone.com/reports/3516186

        If I submitted this, I'd have to punch myself in the face repeatedly.

        • toomuchtodo 5 days ago

          The great disappointment is that the humans submitting these just don’t care it’s slop and they’re wasting another human’s time. To them, it’s a slot machine you just keep cranking the arm of until coins come out. “Prompt until payout.”

  • InsideOutSanta 5 days ago

    I'm scared that this type of thing is going to do to science journals what AI-generated bug reports are doing to bug bounties. We're truly living in a post-scarcity society now, except that the thing we have an abundance of is garbage, and it's drowning out everything of value.

    • willturman 5 days ago

      In a corollary to Sturgeon's Law, I'd propose Altman's Law: "In the Age of AI, 99.999...% of everything is crap"

      • SimianSci 5 days ago

        Altman's Law: 99% of all content is slop

        I can get behind this. This assumes a tool will need to be made to help determine the 1% that isn't slop. At which point I assume we will have reinvented web search once more.

        Has anyone looked at reviving PageRank?

    • techblueberry 5 days ago

      There's this thing where all the thought leaders in software engineering ask "What will change about building a business when code is free?" and while there are some cool things, I've also thought it could have some pretty serious negative externalities. I think this question is going to become big everywhere - business, science, etc. - which is: OK, you have all this stuff, but is it valuable? Which of it actually takes away value?

      • jimbokun 4 days ago

        I think about this more and more when I see people online talking about their "agents managing agents" producing...something...24/7/365.

        Very rarely is there anything about WHAT these agents are producing and why it's important and valuable.

        • 2sk21 3 days ago

          Indeed - there is a lot of fake "productivity" going on with these swarms of agents

      • SequoiaHope 5 days ago

        To be fair, the question “what will change” does not presume the changes will be positive. I think it’s the right question to ask, because change is coming whether we like it or not. While we do have agency, there are large forces at play which impact how certain things will play out.

      • wmeredith 4 days ago

        The value is in the same place: solving people's problems.

        Now that the code is cheaper (not free quite yet) skills further up the abstraction chain become more valuable.

        Programming and design skills are less valuable. However, you still have to know what to build: product and UX skills are more valuable. You still have to know how to build it: software architect skills are more valuable.

    • jcranmer 5 days ago

      The first casualty of LLMs was the slush pile -- the unsolicited submission pile for publishers. We've since seen bug bounty programs and open source repositories buckle under the load of AI-generated contributions. And all of these have the same underlying issue: the LLM makes it easy to produce things that don't immediately look like garbage, which makes the volume of submissions skyrocket while the time-to-reject also goes up slightly, because it passes the first (but only the first) absolute-garbage filter.

      • storystarling 5 days ago

        I run a small print-on-demand platform and this is exactly what we're seeing. The submissions used to be easy to filter with basic heuristics or cheap classifiers, but now the grammar and structure are technically perfect. The problem is that running a stronger model to detect the semantic drift or hallucinations costs more than the potential margin on the book. We're pretty much back to manual review which destroys the unit economics.

    • jll29 5 days ago

      Soon, poor people will talk to an LLM; rich people will get human medical care.

      • Spivak 5 days ago

        I mean I'm currently getting "expensive" medical care and the doctors are still all using AI scribes. I wouldn't assume there would be a gap in anything other than perception. I imagine doctors that cater to the fuck you rich will just put more effort into hiding it.

        No one, at all levels, wants to do notes.

        • golem14 5 days ago

          My experience has been that the transcriptions are way more detailed and correct when doctors use these scribes.

          You could argue that not writing down everything provides a greater signal-noise ratio. Fair enough, but if something seemingly inconsequential is not noted and something is missed, that could worsen medical care.

          I'm not sure how this affects malpractice claims - it's now easier to prove (with notes) that the doc "knew" about some detail that would otherwise not have been noted down.

  • jll29 5 days ago

    I totally agree. I spend my whole day from getting up to going to bed (not before reading HN!) on reviews for a conference I'm co-organizing later this year.

    So I was not amused about this announcement at all, however easy it may make my own life as an author (I'm pretty happy to do my own literature search, thank you very much).

    Also remember, we have no guarantee that these tools will still exist tomorrow, all these AI companies are constantly pivoting and throwing a lot of things at the wall to see what sticks.

    OpenAI chose not to build a serious product, as there is no integration with the ACM DL, the IEEE DL, SpringerNatureLink, the ACL Anthology, Wiley, Cambridge/Oxford/Harvard University Press etc. - only papers that are not peer reviewed (arXiv.org) are available/have been integrated. Expect a flood of BS your way.

    When my students submit a piece of writing, I can ask them to orally defend their opus maximum (more and more often, ChatGPT's...); I can't do the same with anonymous authors.

    • MITSardine 5 days ago

      Speaking of conferences, might this not be the way to judge this work? You could imagine only orally defended work to be publishable, or at least have the prestige of vetting, in a bit of an old-school science revival.

      • Majromax 4 days ago

        Chicken and egg problem: since conferences have limited capacity, you need to pre-filter submissions to see who gets a presentation spot.

  • bloppe 5 days ago

    I wonder if there's a way to tax the frivolous submissions. There could be a submission fee that would be fully reimbursed iff the submission is actually accepted for publication. If you're confident in your paper, you can think of it as a deposit. If you're spamming journals, you're just going to pay for the wasted time.

    Maybe you get reimbursed for half, as long as there are no obvious hallucinations.

    • JBorrow 5 days ago

      The journal that I'm an editor for is 'diamond open access', which means we charge no submission fees and no publication fees, and publish open access. This model is really important in allowing legitimate submissions from a wide range of contributors (e.g. PhD students in countries with low levels of science funding). Publishing in a traditional journal usually costs around $3000.

      • NewsaHackO 5 days ago

        Those journals are really good for getting practice in writing and submitting research papers, but sometimes they are already seen as less impactful because of the quality of accepted papers. At least where I am at, I don't think the advent of AI writing is going to affect how they are seen.

        • agnishom 5 days ago

          In the field of Programming Languages and Formal Methods, many of the top journals and conference proceedings are open access

      • lupire 5 days ago

        Who pays the operating expenses?

    • willturman 5 days ago

      If the penalty for a crime is a fine, then that law exists only for the lower class

      In other words, such a structure would not dissuade bad actors with large financial incentives to push something through a process that grants validity to a hypothesis. A fine isn't going to stop tobacco companies from spamming submissions that say smoking doesn't cause lung cancer, or social media companies from spamming submissions that say their products aren't detrimental to mental health.

      • Majromax 4 days ago

        > In other words, such a structure would not dissuade bad actors with large financial incentives to push something through a process that grants validity to a hypothesis.

        That's not the right threat model. The existing peer review process is already weak to high-effort but conflicted research.

        Instead, the threat model is one closer to that of spam, where the submitting authors don't care about the content of their submission at all but need X publications in high-impact outlets for their CV or grant application. Predatory journals exploit this as part of a pay-to-play problem, but the low reputation of those journals limits their desirable impact factor.

        This threat model relies on frequent but low-quality submissions, and a submission fee would make taking multiple kicks at the can unviable.

      • bloppe 5 days ago

        I'm sure my crude idea has its shortcomings, but this feels superfluous. Deep-pocketed propagandists can do all sorts of things to pump their message whether a slop tax exists or not. There may or may not be existing countermeasures at journals for that. This just isn't really about that. It's about making sure that, in the process of spamming the journal, they also fund the review process, which would otherwise simply bleed time and money.

    • s0rce 5 days ago

      That would be tricky. I often submitted to multiple high-impact journals, going down the list until someone accepted it. You try to ballpark where you can go, but it can be worth aiming high. Maybe this isn't a problem and there should be payment for the effort to screen the paper, but then I would expect the reviewers to be paid for their time.

      • noitpmeder 5 days ago

        I mean, your methodology also sounds suspect. You're just going down a list until it sticks. You don't care where it ends up (I'm sure within reason), just as long as it is accepted and published somewhere (again, within reason).

    • azan_ 4 days ago

      You must have no idea how scientific publishing works. The typical acceptance rate for an OK/good journal is 10-20% (and it was like that even before LLMs). Also, it's a great idea to make the business of scientific publishing even more predatory: now scientists writing articles for free, reviewing for free, and then having to pay for publication will also have to pay to even submit something, with a 90% chance of rejection. Also think about what kind of incentives that will create.

    • throwaway85825 5 days ago

      Pay to publish journals already exist.

      • bloppe 5 days ago

        This is sorta the opposite of pay to publish. It's pay to be rejected.

      • olivia-banks 5 days ago

        I would think it would act more like a security deposit, and you'd get back 100%, no profit for the journal (at least in that respect).

      • eloisant 4 days ago

        I'm pretty sure the reviewers of those are still volunteers, the publisher is just making even more money!

    • pixelready 5 days ago

      I’d worry about creating a perverse incentive to farm rejected submissions. Similar to those renter application fee scams.

    • mathematicaster 5 days ago

      Pay to review is common in Econ and Finance.

      • skissane 5 days ago

        Variation I thought of on pay-to-review:

        Suppose you are an independent researcher writing a paper. Before submitting it for review to journals, you could hire a published author in that field to review it for you (independently of the journal), and tell you whether it is submission-worthy, and help you improve it to the point it was. If they wanted, they could be listed as coauthor, and if they don't want that, at least you'd acknowledge their assistance in the paper.

        Because I think there are two types of people who might write AI slop papers: (1) people who just don't care and want to throw everything at the wall and see what sticks; (2) people who genuinely desire to seriously contribute to the field, but don't know what they are doing. Hiring an advisor could help the second group of people.

        Of course, I don't know how willing people would be to be hired to do this. Someone who was senior in the field might be too busy, might cost too much, or might worry about damage to their own reputation. But there are so many unemployed and underemployed academics out there...

    • utilize1808 5 days ago

      Better yet, make a "polymarket" for papers where people can bet on which papers will make it, and rely on "expertise arbitrage" to punish spam.

      • ezst 5 days ago

        Doesn't stop the flood, i.e. the unfair asymmetry between the effort to produce vs. effort to review.

      • direwolf20 5 days ago

        Now accepting money from slop companies to verify their slop as notslop

    • petcat 5 days ago

      > There could be a submission fee that would be fully reimbursed if the submission is actually accepted for publication.

      While well-intentioned, I think this is just gate-keeping. There are mountains of research that result in nothing interesting whatsoever (aside from learning about what doesn't work). And all of that is still valuable knowledge!

      • ezst 5 days ago

        Sure, but now we can't even assume that such research is submitted in good faith anymore. There just seems to be no perfect solution.

        Maybe something like a "hierarchy/DAG? of trusted-peers", where groups like universities certify the relevance and correctness of papers by attaching their name and a global reputation score to it. When it's found that the paper is "undesirable" and doesn't pass a subsequent review, their reputation score deteriorates (with the penalty propagating along the whole review chain), in such a way that:

        - the overall review model is distributed, hence scalable (everybody may play the certification game and build a reputation score while doing so)
        - trusted/established institutions have an incentive to keep their global reputation score high and either put a very high level of scrutiny to the review, or delegate to very reputable peers
        - "bad actors" are immediately punished and universally recognized as such
        - "bad groups" (such as departments consistently spamming with low quality research) become clearly identified as such within the greater organisation (the university), which can encourage a mindset of quality above quantity
        - "good actors within a bad group" are not penalised either, because they could circumvent their "bad group" on the global review market by having reputable institutions (or intermediaries) certify their good work

        There are loopholes to consider, like a black market of reputation trading (I'll pay you generously to sacrifice a bit of your reputation to get this bad science published), but even that cannot pay off long-term in an open system where all transactions are visible.

        Incidentally, I think this may be a rare case where a blockchain makes some sense?
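
        A minimal sketch of the penalty propagation described above (every name and number here is hypothetical, purely to pin down what "propagating along the whole review chain" could mean):

          # Certifiers vouch for a paper in a chain, from the closest reviewer
          # up to the top institution. If the paper is later found "undesirable",
          # the penalty travels up the chain, decaying at each hop.
          reputation = {"prof_c": 60.0, "dept_b": 80.0, "uni_a": 100.0}

          def penalize(chain, penalty=10.0, decay=0.5):
              for certifier in chain:
                  reputation[certifier] -= penalty
                  penalty *= decay  # blame thins out as it propagates upward

          penalize(["prof_c", "dept_b", "uni_a"])
          # reputation is now {"prof_c": 50.0, "dept_b": 75.0, "uni_a": 97.5}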

  • Rperry2174 5 days ago

    This keeps repeating in different domains: we lower the cost of producing artifacts, and the real bottleneck becomes evaluating them.

    For developers, academics, editors, etc., in any review-driven system the scarce resource is good human judgement, not text volume. AI doesn't remove that constraint, and arguably puts more of a spotlight on the ability to separate the shit from the quality.

    Unless review itself becomes cheaper or better, this just shifts work further downstream and disguises the change as "efficiency".

    • SchemaLoad 5 days ago

      This has been discussed previously as "workslop", where you produce something that looks at surface level like high quality work, but just shifts the burden to the receiver of the workslop to review and fix.

    • vitalnodo 5 days ago

      This fits into the broader evolution of the visualization market. As data grows, visualization becomes as important as processing. This applies not only to applications, but also to relating texts through ideas close to transclusion in Ted Nelson’s Xanadu. [0]

      In education, understanding is often best demonstrated not by restating text, but by presenting the same data in another representation and establishing the right analogies and isomorphisms, as in Explorable Explanations. [1]

      [0] https://news.ycombinator.com/item?id=40295661

      [1] https://news.ycombinator.com/item?id=22368323

    • lonelyasacloud 4 days ago

      > Unless review itself becomes cheaper or better, this just shifts work further downstream and disguises the change as "efficiency"

      Or unless the providers of the models become capable of providing accepted/certified guarantees as to the quality of the output that their models and systems produce.

  • pickleRick243 5 days ago

    I'm curious if you'd be in favor of other forms of academic gatekeeping as well. Isn't the overall lower quality of submissions (an ongoing trend with a history far pre-dating LLMs) an issue? Isn't the real question (that you are alluding to) whether there should be limits to the democratization of science? If my tone seems acerbic, it is only because I sense cognitive dissonance between the anti-AI stance common among many academics and the purported support for inclusivity measures.

    "which is really not the point of these journals at all"- it seems that it very much is one of the main points? Why do you think people publish in journals instead of just putting their work on the arxiv? Do you think postdocs and APs are suffering through depression and stressing out about their publications because they're agonizing over whether their research has genuinely contributed substantively to the academic literature? Are academic employers poring over the publishing record of their researchers and obsessing over how well they publish in top journals in an altruistic effort to ensure that the research of their employees has made the world a better place?

    • JBorrow 4 days ago

      I don't really understand how my saying that this tool isn't good for science counts as gatekeeping. The vibe-written papers that I am talking about have little-to-no valuable scientific content, and as such would always be rejected. It's just that it's way easier than before to produce something that _looks_ reasonable from a five-second glance, and that causes additional load on an already strained system.

      I also don't understand your second paragraph at all.

    • agnishom 5 days ago

      > whether there should be limits to the democratization of science?

      That is an interesting philosophical question, but not the question we are confronted with. A lot of LLM assisted materials have the _signals_ of novel research without having its _substance_.

      • pickleRick243 5 days ago

        LLMs are tools. In the hands of adept, conscientious researchers, they can only be a boon, assisting in the crafting of the research manuscript. In the hands of less adept, less conscientious users, they accelerate the production of slop. The poster I'm responding to seems to be noting an asymmetry: those who find the most use from these tools could be inept researchers who have no business submitting their work. This is because experienced researchers find writing up their results relatively easy.

        To me, this is directly relevant to the issue of democratization of science. There seems to be a tool that is inconveniently resulting in the "wrong" people accelerating their output. That is essentially the complaint here rather than any criticism inherent to LLMs (e.g. water/resource usage, environmental impact, psychological/societal harm, etc.). The post I'm responding to could have been written if LLMs were replaced by any technology that resulted in less experienced or capable researchers disproportionately being able to submit to journals.

        To be concrete, let's just take one of prism's capabilities- the ability to "turn whiteboard equations or diagrams directly into LaTeX". What a monstrous thing to give to the masses! Before, those uneducated cranks would send word docs to journals with poorly typeset equations, making it a trivial matter to filter them into the trash bin. Now, they can polish everything up and pass off their chicken scratch as respectable work. Ideally, we'd put up enough obstacles so that only those who should publish will publish.

    • Eridrus 5 days ago

      The people on the inside often like all the gatekeeping.

  • MITSardine 5 days ago

    If I may be the Devil's advocate, I'm not sure I fully agree with "The hard part always has been, and always will be, understanding the research context (what's been published before) and producing novel and interesting work (the underlying research)".

    Plenty of researchers hate writing and will only do it at gunpoint. Or rather, delegate it all to their underlings.

    I don't see an issue with generative writing in principle. The Devil is in the details, but I don't see this as much different from "hey grad student, write me this paper". And generative writing already exists as copy-paste, which makes up like 90% of any random paper given the incrementality of it all.

    I was initially a little indignant at the "find me some plausible refs and stick them in the paper" section of the video but, then again, isn't this what most people already do? Just copy-paste the background refs from the colleague's last paper introduction and maybe add one from a talk they saw in the meantime, plus whatever the group & friends produced since then.

    My experience is most likely skewed (as all are), but I haven't met a permanent researcher that wrote their own papers yet, and most grad students and postdocs hate writing. Literally the only times I saw someone motivated to write papers (in a masochistic way) were just before applying to a permanent position or while wrapping up their PhD.

    Onto your point, though, I agree this is somewhat worrisome in that, by reaction, the barrier to entry might rise by way of discriminating based on credentials.

    • Otterly99 4 days ago

      Thank you for bringing this nuanced view.

      I also am not sure why so many people are vehemently against this. I would bet that at least 90% of researchers would agree that the writing up is definitely not the part of the work they prefer (to stay polite). As you mentioned, the work is usually delegated to students, and those students already had access to LLMs if they wanted to generate it.

      In my opinion, most of these tools become problematic when people use them without caution. Unfortunately, even in the sciences, people are not as careful and pragmatic as we would like to imagine, and a lot of people are cutting corners, especially in those "lesser" areas like writing up and presenting your work.

      Overall, I think this has the potential to reshape the publication system, which is long overdue.

    • raphman 4 days ago

      I am a rather slow writer who certainly might benefit from something like Prism.

      A good tool would encourage me, help me while I am writing, and maybe set up barriers that keep me from taking shortcuts (e.g. pushing me to re-read the relevant paragraphs of a paper that I cite).

      Prism does none of these things - instead it pushes me towards sloppy practices, such as sprinkling citations between claims. Why won't ChatGPT tell me how to build a bomb but Prism will happily fabricate fake experimental results for me?

  • jjcm 5 days ago

    The comparison to make here is that a journal submission is effectively a pull request to humanity's scientific knowledge base. That PR has to be reviewed. We're already seeing the effects of this with open source code: the number of PR submissions has skyrocketed, overwhelming maintainers.

    This is still a good step in a direction of AI assisted research, but as you said, for the moment it creates as many problems as it solves.

  • maxkfranz 5 days ago

    I generally agree.

    On the other hand, the world is now a different place as compared to when several prominent journals were founded (1869-1880 for Nature, Science, Elsevier). The tacit assumptions upon which they were founded might no longer hold in the future. The world is going to continue to change, and the publication process as it stands might need to adapt for it to be sustainable.

    • ezst 5 days ago

      As I understand it, the problem isn't publication or how it's changing over time; it's the challenge of producing new science when the existing body of work is muddied with plausible lies. That warrants a new process for assessing the inherent quality of a paper, but even if that process comes as a globally distributed one, the cheats have a huge advantage given the asymmetry between the effort to vibe-produce and the tedium of human review.

      • maxkfranz 5 days ago

        That’s a good point. On the other hand, we’ve had that problem long before AI. You already need to mentally filter papers based on your assessment of the reputability of the authors.

        The whole process should be made more transparent and open from the start, rather than adding more gatekeeping. There ought to be openness and transparency throughout the entire research process, with auditability automatically baked in, rather than just at the time of publication. One man's opinion, anyway.

  • mrandish 5 days ago

    As a non-scientist (but long-time science fan and user), I feel your pain with what appears to be a layered, intractable problem.

    > who are looking to 'boost' their CV

    Ultimately, this seems like a key root cause - misaligned incentives across a multi-party ecosystem. And as always, incentives tend to be deeply embedded and highly resistant to change.

  • SecretDreams 5 days ago

    I appreciate and sympathize with this take. I'll just note that, in general, journal publications have gone considerably downhill over the last decade, even before the advent of AI. Frequency has gone up, quality has gone down, and checking whether everything in an article is actually valid becomes quite challenging as frequency rises.

    This is a space that probably needs substantial reform, much like grad school models in general (IMO).

  • i000 5 days ago

    Perhaps the real issue is the gate-keeping scientific publishing model. Journals had a place and role, and peer review is a critical aspect of the scientific process, but new times (the internet, citizen science, higher levels of scientific literacy, and now AI) diminish the benefits of journals creating "barriers to entry", as you put it.

    • desolate_muffin 5 days ago

      I for one hope not to live in a world where academic journals fall out of favor and are replaced by vibe-coded papers by citizen scientists with inflated egos from one too many “you’re absolutely right!” Claude responses.

      • i000 5 days ago

        Me neither, but what you present is a false dichotomy. Science used to be a pastime of the wealthy elites; it became a profession. By opening it up, progress was accelerated. The same will happen when publication is made more open and accessible.

        • BlueTemplar 4 days ago

          And then, Einstein was a "citizen scientist", wasn't he?

  • boplicity 5 days ago

    Is it at all possible to have a policy that bans the submission of any AI written text, or text that was written with the assistance of AI tools? I understand that this would, by necessity, be under an "honor system" but maybe it could help weed out papers not worth the time?

    • currymj 5 days ago

      This is probably a net negative, as there are many very good scientists without strong English skills.

      The early years of LLMs (when they were good enough to correct grammar but not good enough to generate entire slop papers) were an equalizer. We may end up here, but it would be unfortunate.

      • BlueTemplar 4 days ago

        But then, assuming we are fine with this state of things with LLMs:

        why would it be upon them to submit in English, when reviewers and readers can instead use an LLM translator to read the paper?

  • egorfine 4 days ago

    > these kinds of tools cause many more problems than they actually solve

    For whom? For OpenAI, these tools are definitely solutions. They are developing by throwing various AI-powered products at the wall to see what sticks. These tools also demonstrate to investors that innovation has not stalled and that AI usage is growing.

    Same with Microsoft: none of the AI features they are shoving down users' throats was actually designed for the users. All of it exists to grow token usage for the shareholders to see.

    Similar with Google, although no one can deny real innovation is happening there.

  • jascha_eng 5 days ago

    Why not filter out papers from people without credentials? And also publicly call them out and register them somewhere, so that their submission rights can be revoked by other journals and conferences after "vibe writing".

    These acts simply must have consequences so that people stop committing them. You can use AI if you are doing it well, but if you are wasting everyone's time you should be excluded from the discourse altogether.

    • direwolf20 5 days ago

      What do credentials have to do with good science? There are already some roadblocks to publishing in important-sounding journals, but it's important for the neutrality of the scientific process that, in principle, anyone can do it.

      • jascha_eng 3 days ago

        Fair, but if spam becomes an issue that blocks good research from happening, maybe adding some filters improves the end result.

  • jasonfarnon 5 days ago

    I'm certain your journal will be using LLMs to review incoming articles, if it isn't already. I also don't think this is in response to the flood of LLM-generated articles. Even if authors were the same as pre-LLM, journals would succumb to the temptation, at least at the big 5 publishers, which already have a contentious relationship with referees.

  • [removed] 5 days ago
    [deleted]
  • parentheses 5 days ago

    This dynamic would create even more gate-keeping using credentials, which is already a problem with academia.

  • keithnz 5 days ago

    Wouldn't AI actually be good for filtering, given it's going to be a lot better at knowing what has been published? It also seems possible that it could work out which papers contain novel ideas, or at least come up with some kind of likelihood score.
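
    A crude sketch of what such a filter might look like, assuming the OpenAI embeddings API (the corpus, the model choice, and the idea that surface similarity approximates novelty are all my assumptions, and the last one is the shakiest):

        # Hypothetical novelty filter: embed a new abstract and compare it
        # against embeddings of already-published abstracts.
        from openai import OpenAI
        import numpy as np

        client = OpenAI()  # needs OPENAI_API_KEY in the environment

        def embed(texts):
            resp = client.embeddings.create(model="text-embedding-3-small",
                                            input=texts)
            return np.array([d.embedding for d in resp.data])

        published = ["abstract one ...", "abstract two ..."]  # placeholder corpus
        corpus = embed(published)

        def novelty_score(abstract):
            # Cosine similarity to the nearest published abstract.
            v = embed([abstract])[0]
            sims = corpus @ v / (np.linalg.norm(corpus, axis=1) * np.linalg.norm(v))
            return 1.0 - float(sims.max())  # ~1 unlike anything seen, ~0 near-duplicate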

  • eloisant 4 days ago

    The real problem is that researchers are pushed to publish because publications are the only way their career can advance. It's not even about "boosting" your CV; as a researcher, your publication history IS your CV.

    It was already a problem 25 years ago when I did my Ph.D., and I don't think things changed that much since then.

    This encourages researchers to publish barely valuable results, or to cut one article into multiple ones with small variations to increase their publication count. Publishers, in turn, create more conferences and more journals to meet researchers' need to publish.

    I remember many experienced professors telling me cynically about this, about all the techniques they had to blow up one small finding into many articles.

    Anyway - research slop started way before AI. AI is probably going to make the problem worse, but the root issue has been there for a long time.

  • lupsasca 5 days ago

    I am very sympathetic to your point of view, but let me offer another perspective. First off, you can already vibe-write slop papers with AI, even in LaTeX format--tools like Prism are not needed for that. On the other hand, it can really help researchers improve the quality of their papers. I'm someone who collaborates with many students and postdocs. My time is limited and I spend a lot of it on LaTeX drudgery that can and should be automated away, so I'm excited for Prism to save time on writing, proofreading, making TikZ diagrams, grabbing references, etc.

    • fuzzfactor 4 days ago

      This is what I see: you need more of an active, accomplished helper at the keyboard.

      If I can't have that, the next best thing is a helper while I'm at the keyboard my damn self.

      >Why LaTeX is the bottleneck: scientists spend hours aligning diagrams, formatting equations, and managing references—time that should go to actual science, not typesetting

      This is supposed to be only a temporary situation until people recover from the cutbacks of the 1970s, and a larger share of scientists once again have their own secretary.

      Looks like the engineers at Crixet were tired of waiting.

    • CJefferson 5 days ago

      What the heck is the point of a reference you never read?

      • lupsasca 5 days ago

        By "grabbing references" I meant queries of the type "add paper [bla] to the bibliography" -- that seems useful to me!

    • noitpmeder 5 days ago

      AI generating references seems like a hop away from absolute unverifiable trash.

tarcon 4 days ago

This is an actual prompt in the video: "What are the papers in the literature that are most relevant to this draft and that I should consider citing?"

They probably wanted: "... that I should read?" That way this would at least be marketed as more than a fake-paper generation tool.

  • mFixman 4 days ago

    You can tell that they consulted 0 scientists to verify the clearly AI-written draft of this video.

    The target audience of this tool is not academics; it's OpenAI investors.

  • jtr1 4 days ago

    At last, our scientific literature can turn to its true purpose: mapping the entire space of arguable positions (and then some)

  • floitsch 4 days ago

    I felt the same, but then thought of experts in their field. For example, my PhD advisor would already know all these papers. For him the prompt would actually be similar to what was shown in the video.

parentheses 5 days ago

It feels generally a bit dangerous to use an AI product to work on research when (1) it's free and (2) the company hosting it makes money by shipping productized research.

  • roflmaostc 4 days ago

    I am not so skeptical about AI usage for paper writing, as the paper will often be public days later anyway (via pre-print servers such as arXiv).

    So yes, you use it to write the paper, but soon it is public knowledge anyway.

    I am not sure there is much to learn from the authors' draft.

  • GorbachevyChase 4 days ago

    I think the goal is to capture high quality training data to eventually create an automated research product. I could see the value of having drafts, comments, and collaboration discussions as a pattern to train the LLMs to emulate.

  • biscuit1v9 4 days ago

    Why do you think these points would make the usage dangerous?

  • z3t4 5 days ago

    They have to monetize somehow...

raincole 5 days ago

I know many people have negative opinions about this.

I'd also like to share what I saw. Since GPT-4o became a thing, everyone I know who submits academic papers in my non-English-speaking country (N > 5) has been writing papers in our native language and translating them with GPT-4o exclusively. It has been the norm for quite a while. If hallucination is such a serious problem, it has been one for a year and a half already.
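
For the curious, the workflow amounts to something like the sketch below (assuming the OpenAI Python SDK; the prompt and the language handling are illustrative, not anyone's actual setup):

    # Draft in your native language, then translate with GPT-4o.
    from openai import OpenAI

    client = OpenAI()  # needs OPENAI_API_KEY in the environment

    def translate(text, source_lang):  # source_lang e.g. "Spanish" (placeholder)
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system",
                 "content": f"Translate the following {source_lang} academic text "
                            "into formal English. Preserve LaTeX, citation keys, "
                            "and technical terms; do not add or drop content."},
                {"role": "user", "content": text},
            ],
        )
        return resp.choices[0].message.content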

  • direwolf20 5 days ago

    Translation is something Large Language Models are inherently pretty good at, without controversy, even though the output should still be independently verified. It's a language task and they are language models.

    • kccqzy 5 days ago

      Of course. Transformers were originally invented for Google Translate.

    • biophysboy 5 days ago

      Are they good at translating scientific jargon specific to a niche within a field? I have no doubt LLMs are excellent at translating well-trodden patterns; I'm a bit suspicious otherwise.

      • andy12_ 4 days ago

        In my experience of using it to translate ML work between English->Spanish|Galician, it seems to literally translate jargon too eagerly, to the point that I have to tell it to maintain specific terms in English to avoid it sounding too weird (for most modern ML jargon there really isn't a Spanish translation).

      • mbreese 5 days ago

        It seems to me that jargon would tend to be defined in one language and minimally adapted in other languages. So I'm not sure that would be much of a concern.

        • fuzzfactor 4 days ago

          I would look at non-English research papers along with the English ones in my field, and the more jargon, plain numbers, and equations there were, the more I could get out of them without much further translation.

      • disconcision 5 days ago

        for better or for worse, most specific scientific jargon is already going to be in English

        • [removed] 4 days ago
          [deleted]
    • [removed] 5 days ago
      [deleted]
  • ivirshup 5 days ago

    I've heard that now that AI conferences are starting to check for hallucinated references, rejection rates are going up significantly. See also the NeurIPS hallucinated-references kerfuffle [1].

    [1]: https://statmodeling.stat.columbia.edu/2026/01/26/machine-le...
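
    The basic check is cheap to run, too; a rough sketch against the Crossref REST API (the matching heuristic and the example reference list are my own placeholder assumptions):

        import requests

        def reference_exists(title):
            # Look the cited title up on Crossref; flag it if nothing close matches.
            r = requests.get("https://api.crossref.org/works",
                             params={"query.bibliographic": title, "rows": 1},
                             timeout=10)
            r.raise_for_status()
            items = r.json()["message"]["items"]
            if not items:
                return False
            found = " ".join(items[0].get("title", [""])).lower()
            # Crude containment heuristic; real checkers also match authors/year.
            return title.lower() in found or found in title.lower()

        for ref in ["Attention Is All You Need"]:  # placeholder reference list
            print(ref, "->", "found" if reference_exists(ref) else "FLAG: no match")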

    • doodlesdev 5 days ago

      Honestly, hallucinated references should simply get the submitter banned from ever submitting again. Anyone turning in papers with hallucinated references should be publicly shamed. The problem isn't only the LLMs hallucinating; it's lazy and immoral humans who don't bother to check the output, wasting everyone's time and corroding public trust in science and research.

      • lionkor 4 days ago

        I fully agree. Not reading your own references should be grounds for banning, but that's impossible to check. Hallucinated references cannot be read, so by definition, they should get people banned.

        • fuzzfactor 4 days ago

          >Not reading your own references

          This could be considered in degrees.

          Like when you only need a single table from another researcher's 25-page publication: you would cite it to be thorough, but it wouldn't be so bad if you didn't read much of their other text. Perhaps none at all.

          Maybe one of the very helpful things is not just reading every reference in detail, but actually looking up every one in detail to begin with?

    • SilverBirch 4 days ago

      Yeah, that's not going to work for long. You can draw a line in 2023 and say "every paper before this isn't AI". But in the future, you're going to have AI-generated papers citing other AI slop papers that slipped through the cracks. Because of the cost of doing research vs. the cost of generating AI slop, the slop papers will start to outcompete the real research papers.

      • BlueTemplar 4 days ago

        How is this different from flat earth / creationist papers citing other flat earth / creationist papers?

      • fuzzfactor 4 days ago

        >the cost of doing research vs the cost of generating

        >slop papers will start to outcompete the real research papers.

        This started to rear its ugly head when electric typewriters got more affordable.

        Sometimes all it takes is faster horses and you're off to the races :\

  • utopiah 5 days ago

    It's quite a safe use case if you maintain provenance, because there is a ground truth to compare against: the untranslated paper.