asveikau 5 days ago

Good idea to name this after the spy program that Snowden talked about.

  • pazimzadeh 5 days ago

    idk if OpenAI knew that Prism is already a very popular desktop app for scientists and that it's one of the last great pieces of optimized native software?

    https://www.graphpad.com/

    • varjag 5 days ago

      They don't care. Musk stole a chunk of Heinlein's literary legacy with Grok (which, unlike prism, wasn't a common word) and no one batted an eye.

      • DonaldPShimoda 5 days ago

        > Grok (which unlike prism wasn't a common word)

        "Grok" was a term used in my undergrad CS courses in the early 2010s. It's been a pretty common word in computing for a while now, though the current generation of young programmers and computer scientists seem not to know it as readily, so it may be falling out of fashion in those spaces.

      • sincerely 5 days ago

        Grok has been nerd slang for a while. I bet it's in that ESR list of hacker lingo (the Jargon File). And hell, if every company in Silicon Valley gets to name their company after something from Lord of the Rings, why can't he pay homage to an author he likes?

      • Fnoord 5 days ago

        He stole a letter, too.

        • tombert 5 days ago

          That bothers me more than it should. Every single time I see a new post about Twitter, I think there's some update for X11 or X Server or something, only to be reminded that Twitter has been renamed.

    • intothemild 5 days ago

      I very much doubt they knew much about what they were building if they didn't know this.

  • XCSme 5 days ago

    I thought this was about the Prism Database ORM. Or that was Prisma?

bmaranville 5 days ago

Having a chatbot that can natively "speak" latex seems like it might be useful to scientists that already use it exclusively for their work. Writing papers is incredibly time-consuming for a lot of reasons, and having a helper to make quick (non-substantive) edits could be great. Of course, that's not how people will use it...

I would note that Overleaf's main value is as a collaborative authoring tool and not a great latex experience, but science is ideally a collaborative effort.

plastic041 5 days ago

The video shows a user asking Prism to find articles to cite and to put them in a bib file. But what's the point of citing papers that aren't referenced in the paper you're actually writing? Can you do that?

Edit: You can add papers that are not cited to a bibliography. The video is about the bibliography, and I was thinking about cited works.

  • parsimo2010 5 days ago

    A common approach to research is to do literature review first, and build up a library of citable material. Then when writing your article, you summarize the relevant past research and put in appropriate citations.

    To clarify, there is a difference between a bibliography (a list of relevant works but not necessarily cited), and cited work (a direct reference in an article to relevant work). But most people start with a bibliography (the superset of relevant work) to make their citations.

    Most academics who have been doing research for a long time maintain an ongoing bibliography of work in their field. Some people do it as a giant .bib file, some use software products like Zotero, Mendeley, etc. A few absolute psychos keep track of their bibliography in MS Word references (tbh people in some fields do this because .docx is the accepted submission format for their journals, not because they are crazy).
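    A minimal LaTeX sketch of that distinction (the .bib filename and entry keys here are made up for illustration): `\cite` both references a work in the text and lists it in the bibliography, while `\nocite` lists a work without citing it, which is how a bibliography can be a superset of the cited works.

    ```latex
    % Assumes a references.bib containing entries keyed smith2020 and jones2018
    % (hypothetical keys, purely for illustration).
    \documentclass{article}
    \begin{document}
    As shown previously \cite{smith2020}.  % cited: appears in text and bibliography

    \nocite{jones2018}  % not cited in the text, but still listed in the bibliography
    % \nocite{*} would list every entry in the .bib file

    \bibliographystyle{plain}
    \bibliography{references}
    \end{document}
    ```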

    • plastic041 5 days ago

      > a bibliography (a list of relevant works but not necessarily cited)

      Didn't know there's a difference between a bibliography and cited works. Thank you.

  • alphazard 5 days ago

    I once took a philosophy class where an essay assignment had a minimum citation count.

    Obviously ridiculous, since a philosophical argument should follow a chain of reasoning starting at stated axioms. Citing a paper to defend your position is just an appeal to authority (a fallacy that they teach you about in the same class).

    The citation requirement allowed the class to fulfill a curricular requirement that students needed to graduate, and therefore made the class more popular.

    • iterance 5 days ago

      In coursework, references are often a way of demonstrating the reading one did on a topic before committing to a course of argumentation. They also contextualize what exactly the student's thinking is in dialogue with, since general familiarity with a topic can't be assumed in introductory coursework. Citation minimums are usually imposed as a means of encouraging a student to read more about a topic before synthesizing their thoughts, and as a means of demonstrating that work to a professor. While there may have been administrative reasons for the citation minimum, the concept behind them is not unfounded, though they are probably not the most effective way of achieving that goal.

      While similar, the function is fundamentally different from citations appearing in research. However, even professionally, it is well beyond rare for a philosophical work, even for professional philosophers, to be written truly ex nihilo as you seem to be suggesting. Citation is an essential component of research dialogue and cannot be elided.

    • bonsai_spool 5 days ago

      > Citing a paper to defend your position is just an appeal to authority

      Hmm, I guess I read this as a requirement to find enough supportive evidence to establish your argument as novel (or at least supported in 'established' logic).

      An appeal to authority explicitly has no reasoning associated with it; is your argument that one should be able to quote a blog as well as a journal article?

      • tyre 5 days ago

        It’s also a way of getting people to read things about the subject that they otherwise wouldn’t. I read a lot of philosophy because it was relevant to a paper I was writing, but wasn’t assigned to the entire class.

    • _bohm 5 days ago

      Huh? It's quite sensible to make reference to someone else's work when writing a philosophy paper, and there are many ways to do so that do not amount to an appeal to authority.

      • bogdan 5 days ago

        His point is that they asked for a minimum number of references, not references in general.

        • [removed] 5 days ago
          [deleted]
    • fxwin 4 days ago

      > Citing a paper to defend your position is just an appeal to authority (a fallacy that they teach you about in the same class).

      An appeal to authority is fallacious when the authority is unqualified on the subject at hand. Citing a paper from a philosopher to support a philosophical point isn't fallacious, but "<philosophical statement> because my biology professor said so" is.

danelski 5 days ago

Many people here talk about Overleaf as if it was the 'dumb' editor without any of these capabilities. It had them for some time via Writefull integration (https://www.writefull.com/writefull-for-overleaf). Who's going to win will probably be decided by brand recognition with Overleaf having a better starting position in this field, but money obviously being on OAI's side. With some of Writefull's features being dependent on ChatGPT's API, it's clear they are set to be priced-out unless they do something smart.

DominikPeters 5 days ago

This seems like a very basic overleaf alternative with few of its features, plus a shallow ChatGPT wrapper. Certainly can’t compete with using VS Code or TeXstudio locally, collaborating through GitHub, and getting AI assistance from Claude Code or Codex.

  • qbit42 5 days ago

    Loads of researchers have only used LaTeX via Overleaf and even more primarily edit LaTeX using Overleaf, for better or worse. It really simplifies collaborative editing and the version history is good enough (not git level, but most people weren't using full git functionality). I just find that there are not that many features I need when paper writing - the main bottlenecks are coming up with the content and collaborating, with Overleaf simplifying the latter. It also removes a class of bugs where different collaborators had slightly different TeX setups.

    I think I would only switch from Overleaf if I was writing a textbook or something similarly involved.

  • mturmon 5 days ago

    Getting close to the "why Dropbox when you can rsync" mistake (https://news.ycombinator.com/item?id=9224)

    @vicapow replied to keep the Dropbox parallel alive

    • DominikPeters 4 days ago

      Yeah I realized the parallel while I was writing my comment! I guess what I'm thinking is that a much better experience is available and there is no in-principle reason why overleaf and prism have to be so much worse, especially in the age of vibe-coding. Prism feels like the result of two days of Claude Code, when they should have invested at least five days.

  • vicapow 5 days ago

    I could see why it might seem that way, because the UI is quite minimalist, but the AI capabilities are very extensive, imo, if you really play with it.

    You're right that something like Cursor can work if you're familiar with all the requisite tooling (git, installing Cursor, installing LaTeX Workshop, knowing how it all fits together), but most researchers don't want to, and really shouldn't have to, figure out how to make all of that work for their specific workflows.

  • yfontana 4 days ago

    > Certainly can’t compete with using VS Code or TeXstudio locally, collaborating through GitHub, and getting AI assistance from Claude Code or Codex.

    I have a phd in economics. Most researchers in that field have never even heard of any of those tools. Maybe LaTeX, but few actually use it. I was one of very few people in my department using Zotero to manage my bibliography, most did that manually.

rockskon 5 days ago

Naming their tool after the program where private companies run searches on behalf of and give resulting customer data to the NSA....was certainly a choice.

beklein 5 days ago

The Latent Space podcast just released a relevant episode today where they interviewed Kevin Weil and Victor Powell from, now, OpenAI, with some demos, background and context, and a Q&A. The YouTube link is here: https://www.youtube.com/watch?v=W2cBTVr8nxU

  • swyx 5 days ago

    oh i was here to post it haha - thank you for doing that job for me so I'm not a total shill. I really enjoyed meeting them and was impressed by the sheer ambition of the AI for Science effort at OAI - in some sense I'm making a 10000x smaller scale bet than OAI on AI for Science "taking off" this year with the upcoming dedicated Latent Space Science pod.

    generally think that there's a lot of fertile ground for smart generalist engineers to make a ton of progress here this year + it will probably be extremely financially + personally rewarding, so I broadly want to create a dedicated pod to highlight opportunities available for people who don't traditionally think of themselves as "in science" to cross over into the "ai for hard STEM" because it turns out that 1) they need you 2) you can fill in what you don't know 3) it will be impactful/challenging/rewarding 4) we've exhausted common knowledge frontiers and benchmarks anyway so the only* people left working on civilization-impacting/change-history-forever hard problems are basically at this frontier

    *conscious exaggeration sorry

    • beklein 4 days ago

      Wasn't aware you're so active on HN; sorry for stealing your karma.

      Love the idea of a dedicated series/pod where normal people take on hard problems by using and leveraging the emergent capabilities of frontier AI systems.

      Anyway, thanks for the pod!

      • swyx 4 days ago

        not at all about stealing karma, i dont care much about fake internet points.

        yes you got the important thing!

  • vicapow 5 days ago

    Hope you like it :D I'm here if you have questions, too

jumploops 5 days ago

I’ve been “testing” LLM willingness to explore novel ideas/hypotheses for a few random topics[0].

The earlier LLMs were interesting, in that their sycophantic nature eagerly agreed, often lacking criticality.

After reducing said sycophancy, I’ve found that certain LLMs are much more unwilling (especially the reasoning models) to move past the “known” science[1].

I’m curious to see how/if we can strike the right balance with an LLM focused on scientific exploration.

[0]Sediment lubrication due to organic material in specific subduction zones, potential algorithmic basis for colony collapse disorder, potential to evolve anthropomorphic kiwis, etc.

[1]Caveat, it’s very easy for me to tell when an LLM is “off-the-rails” on a topic I know a lot about, much less so, and much more dangerous, for these “tests” where I’m certainly no expert.

PrismerAI 4 days ago

Prismer-AI team here. We’ve actually been building an open-source stack for this since early 2025. We were fed up with the fragmented paper-to-code workflow too. If you're looking for an open-source alternative to Prism that's already modular and ready to fork, check us out: https://github.com/Prismer-AI/Prismer

sva_ 5 days ago

> In 2025, AI changed software development forever. In 2026, we expect a comparable shift in science,

I can't wait

falcor84 5 days ago

It seems clear to me that this is about OpenAI getting telemetry and other training data with the intent of having their AI do scientific work independently down the line, and I'm very ambivalent about it.

  • Ronsenshi 5 days ago

    Just more coal to the hype-train - AI companies can't afford news cycle without anything AI. Stock prices must grow!

maest 5 days ago

Buried halfway through the article:

> Prism is a free workspace for scientific writing and collaboration

jeffybefffy519 5 days ago

I postulate 90% of the reason openai now has "variants" for different use cases is just to capture training data...

  • cauliflower2718 5 days ago

    ChatGPT lets you refuse to allow your content to be used for training (under Preferences -> Data controls), but Prism does not.

vitalnodo 5 days ago

With a tool like this, you could imagine an end-to-end service for restoring and modernizing old scientific books and papers: digitization, cleanup, LaTeX reformatting, collaborative or volunteer-driven workflows, OCR (like Mathpix), and side-by-side comparison with the original. That would be useful.

  • vessenes 5 days ago

    Don’t forget replication!

    • olivia-banks 5 days ago

      I'm curious how you think AI would aid in this.

      • vessenes 5 days ago

        Tao’s doing a lot of related work in mathematics, so I can say that first of all literature search is a clearly valuable function frontier models offer.

        Past that, a frontier LLM can do a lot of critiquing, a good amount of experiment design, a check on statistical significance/power claims, kibitz on methodology... likely suggest experiments to verify or disprove. These all seem like pretty useful functions to provide to a group of scientists to me.

      • noitpmeder 5 days ago

        Replicate this <slop>

        Ok! Here's <more slop>

markbao 5 days ago

Not an academic, but I used LaTeX for years and it doesn’t feel like what the future of publishing should use. It’s finicky and takes so much markup to do simple things. A lab manager once told me about a study finding that people who used MS Word to typeset were more productive, and I can see that…

  • crazygringo 5 days ago

    100% completely agreed. It's not the future, it's the past.

    Typst feels more like the future: https://typst.app/

    The problem is that so many journals require certain LaTeX templates so Typst often isn't an option at all. It's about network effects, and journals don't want to change their entire toolchain.

    • lmc 4 days ago

      I've had some good initial results in going from typst to .tex with Claude (Opus 4.5) for an IEEE journal paper - idiomatic use of templates etc.

  • maxkfranz 5 days ago

    LaTeX is good for equations, and LaTeX tools produce very nice PDFs, but I wouldn't want to write in LaTeX generally either.

    The main feature that's important is collaborative editing (like online Word or Google Docs). The second one would be a good reference manager.

  • probably_wrong 5 days ago

    Academic here. Working on MS Word after years of using LaTeX is... hard. With LaTeX I can be reassured that the formatting will be 95% fine and the remaining 5% will come down to taste ("why doesn't this Figure show on this page?"), while in Word I'm constantly fighting the layout - delete one line? Your entire paragraph is now bold. Changed the font of the entire text? No, that one paragraph ignores you. Want to delete that line after that one Table? F you, you're not. There's a reason why this video joke [1] got 14M views.

    And then I need an extra tool for dealing with bibliography, change history is unpredictable (and, IMO, vastly inferior to version control), and everything gets even worse if I open said Word file in LibreOffice.

    LaTeX's syntax may be hard, but Word actively fights me while I write.

    [1] Moving a photo in Microsoft Word - https://www.instagram.com/jessandquinn/reel/DIMkKkqODS5/

  • auxym 5 days ago

    Agreed. TeX/LaTeX is very old tech. Error recovery and error messages are very bad. Developing new macros in TeX is about as fun as you'd expect developing in a 70s-era language to be (i.e. probably similar to COBOL or old Fortran).

    I haven't tried it yet but Typst seems like a promising replacement: https://typst.app/

  • hatmatrix 5 days ago

    That study must have compared beginners in LaTeX and MS Word. There is a learning curve, but LaTeX will often save more time in the end.

    It is an old language though. LaTeX is the macro system on top of TeX, but now you can write markdown or org-mode (or orgdown) and generate LaTeX -> PDF via pandoc/org-mode. Maybe this is the level of abstraction we should be targeting. Though currently, you still need to drop into LaTeX for very specific fine-tuning.

BizarroLand 5 days ago

https://en.wikipedia.org/wiki/A_Mind_Forever_Voyaging

In 2031, the United States of North America (USNA) faces severe economic decline, widespread youth suicide through addictive neural-stimulation devices known as Joybooths, and the threat of a new nuclear arms race involving miniature weapons, which risks transforming the country into a police state. Dr. Abraham Perelman has designed PRISM, the world's first sentient computer,[2] which has spent eleven real-world years (equivalent to twenty years subjectively) living in a highly realistic simulation as an ordinary human named Perry Simm, unaware of its artificial nature.

anon1253 4 days ago

Slightly off-topic but related: currently I'm in a research environment (biomedicine) where a lot of AI is used. Sometimes well, often poorly. So as an exercise I drafted some rules and commitments about AI and research ("Research After AI: Principles for Accelerated Exploration" [1]), I took the Agile manifesto as a starting point. Anyways, this might be interesting as a perspective on the problem space as I see it.

[1] https://gist.github.com/joelkuiper/d52cc0e5ff06d12c85e492e42...

sbszllr 5 days ago

The quality and usefulness of it aside, the primary question is: are they still collecting chats for training data? If so, that limits how comfortable people would be, and sometimes whether they are even permitted, to work on their yet-to-be-public material using this tool.

  • einpoklum 4 days ago

    They don't call it PRISM for nothing my friend...

    They collect chat records for any number of uses, not the least of which being NSA surveillance and analysis - highly likely given what we know from the Snowden leaks.

tyteen4a03 4 days ago

If you're not a fan of OpenAI: I work at RSpace (https://github.com/rspace-os/rspace-web) and we're an open-source research data management system. While we're not as modern as Obsidian or NotebookLM (yet - I'm spearheading efforts to change that :)) we have been deployed at universities and institutions for years now.

The solution is currently quite focused on life science needs but if you're curious, check us out!

bonsai_spool 5 days ago

The example proposed in "and speeding up experimental iteration in molecular biology" has been done since at least the mid-2000s.

It's concerning that this wasn't identified, and it augurs poorly for their search capabilities.

reassess_blind 5 days ago

Do you think they used an em-dash in the opening sentence because they’re trying to normalise the AI’s writing style, or…

  • torginus 5 days ago

    I haven't used MS Word in quite a while, but I distinctly remember it changed minus signs to em dashes.

  • jedberg 5 days ago

    > because they’re trying to normalise the AI’s writing style,

    AIs use em dashes because competent writers have been using em dashes for a long time. I really hate the fact that we assume em dash == AI written. I've had to stop using em dashes because of it.

    • noname120 5 days ago

      Likewise, I’m now reluctant to use any em dashes these days because unenlightened people immediately assume that it’s AI. I used em dashes way before AI decided these were cool

  • flumpcakes 5 days ago

    LaTeX makes writing em dashes very easy, to the point that I would use them all the time in my academic writing. It's a shame that perfectly good typography is now read as a sign of slop/fraud.
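    For anyone curious, the inputs are just repeated hyphens in standard LaTeX (nothing tool-specific; example text is made up):

    ```latex
    well-known author              % single hyphen: compound words
    see pages 10--20               % double hyphen: en dash, for ranges
    good typography---once a virtue---is now suspect  % triple hyphen: em dash
    ```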

  • reed1234 5 days ago

    Probably used their product to write it

  • exyi 5 days ago

    ... or they taught GPT to use em-dashes because of their own love for em-dashes :)

MattDaEskimo 5 days ago

What's the goal here?

There was an idea of OpenAI charging commission or royalties on new discoveries.

What kind of researcher wants to risk losing rights to their discoveries, or getting caught up in legal issues, over a free ChatGPT wrapper? Or am I missing something?

  • engineer_22 5 days ago

    > Prism is free to use, and anyone with a ChatGPT account can start writing immediately.

    Maybe it's cynical, but how does the old saying go? If the service is free, you are the product.

    Perhaps, the goal is to hoover up research before it goes public. Then they use it for training data. With enough training data they'll be able to rapidly identify breakthroughs and use that to pick stocks or send their agents to wrap up the IP or something.

AuthAuth 5 days ago

This does way less than I'd expect. Converting images to TikZ is nice, but some of the other applications demonstrated were horrible. There is no way anyone should be using AI to cite.

epolanski 5 days ago

Not gonna lie, I cringed when it asked to insert citations.

Like, what's the point?

You cite stuff because you literally talk about it in the paper. The expectation is that you read that and that it has influenced your work.

As someone who's been a researcher in the past, with 3 papers published in high impact journals (in chemistry), I'm beyond appalled.

Let me explain how scientific publishing works to people out of the loop:

1. Science is an insanely huge domain. As soon as you drift into any topic, the number of reviewers with the capability to understand what you're talking about drops quickly to near zero. Want to speak about properties of helicoidal peptides in the context of electricity transmission? Small club. Want to talk about some advanced math involving Fourier transforms in the context of ML? Bigger, but still a small club. And when I say small, I mean a dozen people on the planet, likely fewer, with the expertise to properly judge. It doesn't matter what the topic is: at the elite level required to really understand what's going on and catch errors or BS, these are very small clubs.

2. The people in those small clubs are already stretched thin. Virtually all of them run labs, so they are already bogged down following their own research, fundraising, and coping with teaching duties (which they generally despise; most good scientists are barely more than mediocre professors, and they already have huge backlogs).

3. With AI this is a disaster. If having to review slop for your bs internal tool at your software job was already bad, imagine having to review slop in highly technical scientific papers.

4. The good news? Because these clubs are relatively small, people pushing slop will quickly find their academic opportunities even more limited, so the incentives for proper work are hopefully there. But if Asian researchers (yes, no offense) were already spamming half the world's papers with cheated slop (non-reproducible experiments) in a desperate bid to publish first, I can't imagine things now.

  • SoKamil 5 days ago

    It’s as if not only the technology is to blame, but also the culture and incentives of the modern world.

    The urge to cheat in order to get a job, a promotion, approval. The urge to do stuff you are not even interested in, just to look good on a resume. And to some extent I feel sorry for these people: at the end of the day you have to pay your bills.

    • epolanski 5 days ago

      This isn't about paying your bills; it's about having a chance of becoming a full-time researcher or professor in academia, which is obviously the ideal career path for someone interested in science.

      All those people can go work for private companies, but few as scientists rather than technicians or QAs.

  • bonsai_spool 5 days ago

    > But if asian researchers (yes, no offense), were already spamming half the world papers with cheated slop (non reproducible experiments) in the desperate bid of publishing before, I can't imagine now.

    Hmm, I follow the argument, but it's inconsistent with your assertion that there is going to be incentive for 'proper work' over time. Anecdotally, I think the median quality of papers from middle- and top-tier Chinese universities is improving (your comment about 'asian researchers' ignores that Japan, South Korea, and Taiwan have established research programs at least in biology).

    • epolanski 4 days ago

      Japan is notoriously an exception in the region.

      South Korea and China produce huge amounts of non-reproducible experiments.

uwehn 5 days ago

If you're looking for something like this for typst: any VSCode fork with AI (Cursor, Antigravity, etc) plus the tinymist extension (https://github.com/Myriad-Dreamin/tinymist) is pretty nice. Since it's local, it won't have the collaboration/sharing parts built in, but that can be solved too in the usual ways.

matteocantiello 4 days ago

At first I was a bit puzzled about why OpenAI would want to get involved in this somewhat niche project. Obviously, they don't give a damn about Overleaf’s market, which is a drop in the bucket. What OpenAI is after -- I think -- is a very specific kind of “training data.” Not Overleaf’s finished papers (those are already public), but the entire workflow. The path from a messy draft to a polished paper captures how ideas actually form: the back-and-forth, the false starts, the collaborative refinement at the frontier of knowledge. That’s an unusually distilled form of cognitive work, and I could imagine that's something one would want in order to train advanced models how to think.

Keeping LaTeX as the language is a feature, not a bug: it filters out noise and selects for people trained in STEM, who’ve already learned how to think and work scientifically.

drakenot 4 days ago

This is handy for maintaining a resume!

I converted my resume to LaTeX with Claude Code recently. Being able to iterate on this code form of my document is so much nicer than fighting the formatting in Word/Google Docs.

I dropped my .tex file into Prism and it's nice to be able to instantly render it.

andrepd 5 days ago

"Chatgpt writes scientific papers" is somehow being advertised as a good thing. What is there even left to say?

hulitu 5 days ago

> Introducing Prism Accelerating science writing and collaboration with AI.

I thought this was introduced by the NSA some time ago.

  • webdoodle 5 days ago

    Lol, yep. Now with enhanced A.I. terrorist tracking...

    Fuck A.I. and the collaborators creating it. They've sold out the human race.

AndrewKemendo 5 days ago

I genuinely don’t see scientific journals and conferences continuing to last in this new world of autonomous agents, at least the same way that they used to be.

As other top-level posters have indicated, the review portion of this is the limiting factor.

Unless journal reviewers decide to adopt an entirely automated review process, they're not going to be able to keep up with what will increasingly be the most and best research coming out of any lab.

So whoever figures out the automated reviewer that can actually tell fact from fiction, is going to win this game.

I expect over the longest period, that’s probably not going to be throwing more humans at the problem, but agreeing on some kind of constraint around autonomous reviewers.

If not that, then labs will also produce products, science will stop being public, and the only artifacts will be whatever is produced in the market.

  • f2fff 5 days ago

    "So whoever figures out the automated reviewer that can actually tell fact from fiction, is going to win this game."

    Errr sure. Sounds easy when you write it down. I highly doubt such a thing will ever exist.

  • idontknowmuch 5 days ago

    If you think these types of tools are going to be generating "the most and best research coming out of any lab", then I have to assume you aren't actively doing any sort of research.

    LLMs are undeniably great for interactive discussion with content IF you actually are up-to-date with the historical context of a field, the current "state-of-the-art", and have, at least, a subjective opinion on the likely trajectories for future experimentation and innovation.

    But, agents, at best, will just regurgitate ideas and experiments that have already been performed (by sampling from a model trained on most existing research literature), and, at worst, inundate the literature with slop that lacks relevant context, and, as a negative to LLMs, pollute future training data. As of now, I am leaning towards "worst" case.

    And, just to help with the facts, your last comment is unfortunately quite inaccurate. Science is one of the best government investments. For every $1.00 dollar given to the NIH in the US, $2.56 of economic activity is estimated to be generated. Plus, science isn't merely a public venture. The large tech labs have huge R&D because the output from research can lead to exponential returns on investment.

    • f2fff 5 days ago

      " then I have to assume you aren't actively doing any sort of research."

      I would wager he's not - he seems to post with a lot of bluster and links to some paper he wrote (that nobody cares about).

radioactivist 5 days ago

Is anyone else having trouble using even some of the basic features? For example, I can open a comment, but it doesn't seem like there is any way to close them (I try clicking the checkmark and nothing happens). You also can't seem to edit the comments once typed.

  • lxe 5 days ago

    Thanks for surfacing this. If you click the "tools" button to the left of "compile", you'll see a list of comments, and you can resolve them from there. We'll keep improving and fixing things that might be rough around the edges.

    EDIT: Fixed :)

soulofmischief 5 days ago

I understand the collaborative aspects, but I wonder how this is going to compare to my current workflow of just working with LaTeX files in my IDE and using whichever model provider I like. I already have a good workflow and modern models do just fine generating and previewing LaTeX with existing toolchains.

Of course, my scientific and mathematical research is done in isolation, so I'm not wanting much for collaborative features. Still, I'm kind of interested to see how this shakes out; we're going to need to see OpenAI really step it up against Claude Opus, though, if they want to be a leader in this space.

flockonus 5 days ago

Curious in terms of trademark: could this infringe on Vercel's Prisma (the very popular ORM/framework in Node.js)?

EDIT: as corrected by a comment below, Prisma is not Vercel's but ©2026 Prisma Data, Inc. -- the curiosity still persists(?)

mfld 4 days ago

I'd like to hypothesize a little bit about the strategy of OpenAI. Obviously, it is nice for academic users that there is a new option for collaborative LaTeX editing plus LLM integration for free. At the same time, I don't think there is much added revenue expected here, for example, from Pro features or additional LLM usage plans. My theory is that the value lies in the training data received from highly skilled academics in the form of accepted and declined suggestions.

  • sn0wr8ven 4 days ago

    It is nice for academics, but I would ask why. These aren't tasks you can't do yourself. Yes, it's all in one place, but it's not like doing the exact same thing previously was ridiculous to set up.

    A comparison that comes to mind is the n8n-style workflow product they put out before. n8n takes setup. Proofreading, asking for more relevant papers, converting pictures to LaTeX code, etc. doesn't take any setup. People do this with or without this tool almost identically.

  • hdivider 4 days ago

    Even that would be quite niche for OpenAI. They raised far too much capital and now have to deliver on AGI fast, or on an ultra-high-growth segment, which has not materialized.

    The reason? I can give you the full source for Sam Altman:

    while(alive) { RaiseCapital() }

    That is the full extent of Altman. :)

[removed] 5 days ago
[deleted]
nxobject 5 days ago

What they mean by "academic" is fairly limited here, if LaTeX is the main writing platform. What are their plans for expanding past that, and working with, say Jane Biomedical Researcher with a GSuite or Microsoft org, that has to use Word/Docs and a redlining-based collaboration workflow? I can certainly see why they're making it free at this point.

FWIW, Google Scholar has a fairly compelling natural-language search tool, too.

khalic 5 days ago

All your papers are belong to us

arnejenssen 4 days ago

This assumes that the article, the artifact, is most valuable. But often it is the process of writing the article that has the most value. Prism can be a nice tool for increasing output. But the second order consequence could be that the skill of deep thinking and writing will atrophy.

"There is no value added without sweating"

  • lionkor 4 days ago

    Work is value and produces sweat, and OpenAI sells just the sweat.

[removed] 5 days ago
[deleted]
jonas_kgomo 5 days ago

I actually found it quite Robin Hood of OpenAI to acquihire them. Basically, this startup was my favourite thing for the past few months, but they were experiencing server overload and other reliability issues, so I think OpenAI taking them under their wing is a good/neutral storyline. I think it's a net good for science given the OpenAI toolchain.

unicodeveloper 4 days ago

Not too bad an acquisition though. Scientists need more tech tools just like everyone else to accelerate their work. The faster scientists are, the more discoveries & world class solutions to problems we can have.

Maybe OpenAI should acquire Valyu too. They let you do deep research on academic papers.

CobrastanJorji 5 days ago

"Hey, you know how everybody's complaining about AI making up totally fake science shit? Like, fake citations, garbage content, fake numbers, etc?"

"Sure, yes, it comes up all the time in circles that talk about AI all the time, and those are the only circles worth joining."

"Well, what if we made a product entirely focused on having AI generate papers? Like, every step of the paper writing, we give the AI lots of chances to do stuff. Drafting, revisions, preparing to publish, all of it."

"I dunno, does anybody want that?"

"Who cares, we're fucked in about two years if we don't figure out a way to beat the competitors. They have actual profits, they can ride out AI as long as they want."

"Yeah, I guess you're right, let's do your scientific paper generation thing."

homerowilson 5 days ago

Adding

% !TEX program = lualatex

to the top of your document lets you switch LaTeX engines. This is required for compliance with recent accessibility standards (support for tagging and \DocumentMetadata). Compilation takes a bit longer, but it works fine, unlike on Overleaf, where the lualatex engine does not work in the free version.
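For the curious, the tagging mentioned above goes through the newer \DocumentMetadata interface. A minimal sketch of a tagged document (assuming a reasonably recent LaTeX kernel, where the testphase key is still needed to opt in to the tagging project's code) might look like:

```latex
% !TEX program = lualatex
\DocumentMetadata{
  lang        = en,
  pdfversion  = 2.0,
  pdfstandard = ua-2,   % target the PDF/UA-2 accessibility standard
  testphase   = latest  % opt in to the current tagging-project code
}
\documentclass{article}
\begin{document}
A minimal tagged PDF, compiled with LuaLaTeX.
\end{document}
```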

  • gerdesj 5 days ago

    How on earth is that pronounced?

    • mkl 4 days ago

      TeX is pronounced "teck", or with a final sound like in "Bach" or "loch". Derivatives like LaTeX and LuaLaTeX are similar.

estebarb 4 days ago

I'm really surprised OpenAI went with LaTeX. ChatGPT still has issues maintaining LaTeX syntax. It still happily switches to markdown notation for quotes or emph. Gemini has a similar problem as well. I guess that there aren't enough good LaTeX documents in the training set.

jackblemming 5 days ago

There is zero chance this is worth billions of dollars, let alone the trillion$ OpenAI desperately needs. Why are they wasting time with this kind of stuff? Each of their employees needs to generate insane amounts of money to justify their salaries and equity, and I doubt this is it.

  • fuzzfactor 4 days ago

    Some employees are just worth having around whether or not they are directly engaged in making billions of dollars every single minute with every single task.

    A good salesman could make money off of people who can do this. Even if this is free, they can always pull more than their weight with other efforts, and that can be in a more naturally lucrative niche.

Myrmornis 5 days ago

Away from applied math/stats, physics, etc., not that many scientists use LaTeX. I'm not saying it's not useful; I just don't think many scientists will feel like a LaTeX-based product is intended for them.

  • plutomeetsyou 5 days ago

    Economists definitely use LaTeX, but as a field, it's at the intersection of applied math and social sciences so your point stands. I also know some Data Scientists in the industry who do.

jf___ 5 days ago

<typst>and just when i thought i was out they pull me back in</typst>

bariswheel 5 days ago

I used Overleaf during grad school and it was easy enough; I'm interested to see what more value this will bring. Sometimes making fewer decisions is the better route, e.g. vi vs MS Word, but I won't speak too much without trying it just yet.

zb3 5 days ago

Is this the product where OpenAI will (soon) take profit share from inventions made there?

ozgung 4 days ago

I don’t see anything regarding the privacy of your data. Did I miss it, or do they just use your unpublished research and your prompts as a real human researcher to train their own AI researchers?

flumpcakes 5 days ago

This is terrible for Science.

I'm sorry, but publishing is hard, and it should be hard. There is a work function that requires effort to write a paper. We've been dealing with low quality mass-produced papers from certain regions of the planet for decades (which, it appears, are now producing decent papers too).

All this AI tooling will do is lower the effort to the point that complete automated nonsense will now flood in and it will need to be read and filtered by humans. This is already challenging.

Looking elsewhere in society, AI tools are already being used to produce scams and phishing attacks more effective than ever before.

Whole new arenas of abuse are now rife, with the cost of producing fake pornography of real people (what should be considered sexual abuse crime) at mere cents.

We live in a little microcosm where we can see the benefits of AI because tech jobs are mostly about automation and making the impossible (or expensive) possible (or cheap).

I wish more people would talk about the societal issues AI is introducing. My worthless opinion is that prism is not a good thing.

  • jimmar 5 days ago

    I've wasted hours of my life trying to get LaTeX to format my journal articles to different journals' specifications. That's tedious typesetting that wastes my time. I'm all for AI tools that help me produce my thoughts with as little friction as possible.

    I'm not in favor of letting AI do my thinking for me. Time will tell where Prism sits.

    • flumpcakes 5 days ago

      This Prism video was not just typesetting. If OpenAI released tools that just helped you typeset or create diagrams from written text, that would be fine. But it's not; it's writing papers for you. Scientists and publishers really do not need the onslaught of slop this will create. How can we even trust qualifications in the post-AI world, where cheating is rampant at universities?

      • f2fff 5 days ago

        Nah this is necessary.

        Lessons are learned the hard way. I invite the slop - the more the merrier. It will lead to a reduction in internet activity as people puke from the slop. And then we chart our way back to the right path.

        It is what it is. Humans.

  • PlatoIsADisease 5 days ago

    I just want replication in science. I don't care at all how difficult it is to write the paper. Heck, if we could spend more effort on data collection and less on communication, that sounds like a win.

    Look at how much BS flooded psychology but had pretty ideas about p values and proper use of affect vs effect. None of that mattered.

ggm 5 days ago

A competition for the longest sequence of \relax in a document ensues. If enough people do this, the AI will acquire merit and seek to "win" ...

pmbanugo 4 days ago

I don't see anything fancy here that Google doesn't do with their Gemini products, and even better

asadm 5 days ago

Disappointing, actually. What I actually need is a research "management" tool that lets me put in relevant citations but also goes through the ENTIRE arXiv or Google Scholar and connects ideas, or finds novel ideas in random fields that somehow relate to what I am trying to solve.

smuenkel 4 days ago

That click toward accepting the bibliography without checking it is absolutely mind-boggling.

butlike 4 days ago

> Prism is free to use, and anyone with a ChatGPT account can start writing immediately.

Great, so now I'll have to sift through a bunch of ostensibly legitimate (that is, legitimate-looking) non-peer-reviewed whitepapers, where if I forget to check the peer-review status even once I risk wasting a large amount of time reading gobbledygook. Thanks, OpenAI?

  • azan_ 4 days ago

    Don't worry - most of the peer reviewed stuff is also bad.

zmmmmm 5 days ago

They compare it to software development but there is such a crucial difference to software development: by and large, software is an order of magnitude easier to verify than it is to create. By comparison, reviewing a vibe generated manuscript will be MUCH more work to verify than a piece of software with equivalent complexity. On top of that, review of academic literature is largely outsourced to the academic community for free. There is no model to support it that scales to an increased volume of output.

I would not like to be a publisher right now, facing the onslaught of thousands and thousands of slop-generated articles and trying to find reviewers for them all.

tzahifadida 4 days ago

Since it offers collaboration for free, it could take a bite out of Overleaf's market.

r_thambapillai 4 days ago

Didn't OpenAI just say they needed a code red to be relentlessly focused on making ChatGPT market-leading again? Why are they launching new products? Is the code red over? Is the Gemini threat considered done?

legitster 5 days ago

It's interesting how quickly the quest for the "Everything AI" has shifted. It's much more efficient to build use-case specific LLMs that can solve a limited set of problems much more deeply than one that tries to do everything well.

I've noticed this already with Claude. Claude is so good at code and technical questions... but frankly it's unimpressive at nearly anything else I have asked it to do. Anthropic would probably be better off putting all of their eggs in that one basket that they are good at.

All the more reason that the quest for AGI is a pipe dream. The future is going to be very divergent AI/LLM applications, each marketed and developed around a specific target audience, and priced according to the value it delivers.

  • Otterly99 4 days ago

    I completely agree.

    In my lab, we have been struggling with automated image segmentation for years. Three years ago, I started learning ML; the task is pretty standard, so there are a lot of solutions.

    In 3 months, I managed to get a working solution, which only took a lot of sweat annotating images first.

    I think this is where tools like OpenCode really shine, because they unlock the potential for any user to generate a solution to their specific problem.

  • falcor84 5 days ago

    I don't get this argument. Our nervous system is also heterogeneous; why wouldn't AGI be based on an "executive functions" AI that manages per-function AIs?

camillomiller 5 days ago

Given what Prism was at the NSA, why the hell would any tech company greenlight this name?

noahbp 5 days ago

They seem to have copied Cursor in hijacking ⌘Y shortcut for "Yes" instead of Undo.

  • drusepth 5 days ago

    In what applications is ⌘Y Undo and not ⌘Z? Is ⌘Y just a redundant alternative?

    • zerocrates 5 days ago

      Ctrl-Y is typically Redo, not Undo. Maybe that's what they meant.

      Apparently on Macs it's usually Command-Shift-Z?

unixzii 5 days ago

It may be useful, but it also encourages people to stop writing their own papers.

  • mves 5 days ago

    As they demo in the video, it even encourages people to actually skip doing the research (which includes reading both relevant AND not-so-relevant papers in order to explore!) Instead, prompt "cite some relevant papers, please", and done. Hours of actual reading, thinking, and exploration reduced to a minimum.

    A couple of generations of students later, and these will be rare skills: information finding, actual thinking, and conveying complex information in writing.

ai_critic 5 days ago

Anybody else notice that half the video was just finding papers to decorate the bibliography with? Not like "find me more papers I should read and consider", but "find papers that are relevant that I should cite--okay, just add those".

This is all pageantry.

  • sfink 5 days ago

    Yes. That part of the video was straight-up "here's how to automate academic fraud". Those papers could just as easily negate one of your assumptions. What even is research if it's not using cited works?

    "I know nothing but had an idea and did some work. I have no clue whether this question has been explored or settled one way or another. But here's my new paper claiming to be an incremental improvement on... whatever the previous state of understanding was. I wouldn't know, I haven't read up on it yet. Too many papers to write."

  • renyicircle 5 days ago

    It's as if it's marketed to the students who have been using ChatGPT for the last few years to pass courses and now need to throw together a bachelor's thesis. Bibliography and proper citation requirements are a pain.

    • pfisherman 5 days ago

      That is such a bummer. At the time, it was annoying and I groused and grumbled about it; but in hindsight my reviewers pointed me toward some good articles, and I am better for having read them.

    • olivia-banks 5 days ago

      I agree with this. This problem is only going to get worse once these people enter academia and face needing to publish.

  • olivia-banks 5 days ago

    I've noticed this pattern, and it really drives me nuts. You should really be doing a comprehensive literature review before starting any sort of review or research paper.

    We removed the authorship of a former co-author on a paper I'm on because his workflow was essentially this, with AI-generated text, and a not-insignificant amount of straight-up plagiarism.

    • NewsaHackO 5 days ago

      There is definitely a difference between how senior researchers and students go about making publications. Students get told, basically, what topic to write a paper on or prepare data for, so they work backwards: they try to write the paper (possibly researching some information to write it), then add references because they know they have to. For actual researchers, it would be a complete waste of time and funding to start a project on a question that has already been answered (something the grant reviewers will know has already been explored), so in order not to waste their own time, they have to do what you said and actually conduct a comprehensive literature review before even starting the work.

  • black_puppydog 5 days ago

    Plus, this practice (just inserting AI-proposed citations/sources) is what has recently been the front-runner of some very embarrassing "editing" mistakes, notably in reports from public institutions. Now OpenAI lets us do pageantry even faster! <3

  • verdverm 5 days ago

    It's all performance over practice at this point. Look to the current US administration as the barometer by which many are measuring their public perceptions

  • teaearlgraycold 5 days ago

    The hand-drawn-diagram-to-LaTeX demo is a little embarrassing. If you load up Prism and create your first blank project, you can see the image. It looks like it's actually a LaTeX rendering of a diagram rendered in a hand-drawn style and then overlaid on a very clean image of a napkin. So they've proven that you can go from a rasterized LaTeX diagram back to equivalent LaTeX code. Interesting, but it probably will not hold up when it meets real-world use cases.

  • adverbly 5 days ago

    I chuckled at that part too!

    Didn't even open a single one of the papers to look at them! Just said that one is not relevant without even opening it.

  • maxkfranz 5 days ago

    A more apt example would have been to show finding a particular paper you want to cite, but you don’t want to be bothered searching your reference manager or Google Scholar.

    E.g. “cite that paper from John Doe on lorem ipsum, but make sure it’s the 2022 update article that I cited in one of my other recent articles, not the original article”

  • thesuitonym 5 days ago

    You may notice that this is the way writing papers works in undergraduate courses. It's just another in a long line of examples of MBA tech bros gleaning an extremely surface-level understanding of a topic, then deciding they're experts.

0dayman 5 days ago

in the end we're going to end up with papers written by AI, proofread by AI .....summarized for readers by AI. I think this is just for them to remain relevant and be seen as still pushing something out

  • falcor84 5 days ago

    You're assuming a world where humans are still needed to read the papers. I'm more worried about a future world where AIs do all of the work of progressing science and humans just become bystanders.

    • drusepth 5 days ago

      Why are you worried about that world? Is it because you expect science to progress too fast, or too slow?

      • falcor84 5 days ago

        Too fast. It's already coding too fast for us to follow, and from what I hear, it's doing incredible work in drug discovery. I don't see any barrier to it getting faster and faster, and with proper testing and tooling, getting more and more reliable, until the role that humans play in scientific advancement becomes at best akin to that of managers of sports teams.

chaosprint 5 days ago

As a researcher who has to use LaTeX, I used to use Overleaf, but lately I've been configuring it locally in VS Code. The configuration process on Mac is very simple. Considering there are so many free LLMs available now, I still won't subscribe to ChatGPT.
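For anyone wanting to replicate this, the local setup is typically just a TeX distribution (e.g. MacTeX via Homebrew) plus the LaTeX Workshop extension for VS Code. A minimal sketch of a settings.json build recipe (assuming latexmk is on your PATH; the parent doesn't share their actual config) might be:

```json
{
  // Define how LaTeX Workshop invokes latexmk
  "latex-workshop.latex.tools": [
    {
      "name": "latexmk",
      "command": "latexmk",
      "args": ["-pdf", "-interaction=nonstopmode", "-synctex=1", "%DOC%"]
    }
  ],
  // A single recipe that runs the tool above
  "latex-workshop.latex.recipes": [
    { "name": "latexmk", "tools": ["latexmk"] }
  ]
}
```

With something like this in place, saving a .tex file rebuilds the PDF and the preview updates inside the editor.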

delduca 5 days ago

Five seconds into reading and I had already spotted that it was written by AI.

postatic 5 days ago

ok I don't care what people say, this would've helped me a lot during my PhD days fighting with LaTeX and diagrams. :)

dash2 4 days ago

“LaTeX-native“

Oh NO. We will be stuck in LaTeX hell forever.

wasmainiac 5 days ago

The state of publishing in academia was already a dumpster fire; why lower the friction further? It's not like writing was the hard part. Give it two years max and we will see hallucination citing hallucination, with independent repeatability out the window.

  • falcor84 5 days ago

    That's one scenario, but I also see a potential scenario where this integration makes it easier to manage the full "chain of evidence" for claimed results, as well as replication studies and discovered issues, in order to then make it easier to invalidate results recursively.

    At the end of the day, it's all about the incentives. Can we have a world where we incentivize finding the truth rather than just publishing and getting citations?

    • wasmainiac 4 days ago

      Possibly, but (1) I am concerned that current LLM AI is not thinking critically, just autocompleting in a way that looks like thinking, and (2) the current AI rollout is incentivised for market capture, not honest work.

AlexCoventry 5 days ago

I don't see the use. You can easily do everything shown in the Prism intro video with ChatGPT already. Is it meant to be an Overleaf killer?

addedlovely 4 days ago

Ahhhh. It happily rewrote the example paper to be from Google AI and added references that supported that falsehood.

Slop science papers are just what the world needs.

slashdave 4 days ago

Not a PR person myself, but why use as an example a parody topic for a paper? Couldn't someone have invented something realistic to show? Or, heck, just get permission to show a real paper?

The example just reinforces the whole concept of LLM slop overwhelming preprint archives. I found it off-putting.

Min0taurr 4 days ago

Dog turd. This will be used to mine research data and train some sort of research AI model; do not trust it. I would much rather support Overleaf, which is made by academics for academics, than some vibe-coded alternative with deep data mining. No wonder we have so much slop in research at the moment.

preommr 5 days ago

Very underwhelming.

Was this not already possible in the web ui or through a vscode-like editor?

  • vicapow 5 days ago

    Yes, but there's a really large number of users who don't want to have to set up VS Code, git, TeX Live, and LaTeX Workshop just to collaborate on a paper. You shouldn't have to become a full-stack software engineer to be able to write a research paper in LaTeX.

mkl 4 days ago

> Turn whiteboard equations or diagrams directly into LaTeX, saving hours of time manipulating graphics pixel-by-pixel

What a bizarre thing to say! I'm guessing it's slop. Makes it hard to trust anything the article claims.

[removed] 5 days ago
[deleted]
i2km 5 days ago

LaTeX was one of the last bastions against AI slop. Sadly it's now fallen too. Is there any standardised non-AI disclaimer format which is gaining use?

lispisok 5 days ago

Way too much work having AI generate slop which gets dumped on a human reviewer to deal with. Maybe switch some of that effort into making better review tools.

egorfine 4 days ago

> Chat with GPT‑5.2

> Draft and revise papers with the full document as context

> ...

And pay the finder's fee on every discovery worth pursuing.

Yeah, immediately fuck that.

shevy-java 5 days ago

"Accelerating science writing and collaboration with AI"

Uhm ... no.

I think we need to put an end to AI as it is currently used (not all of it but most of it).

  • drusepth 5 days ago

    Does "as it is currently used" include what this apparently is (brainstorming, initial research, collaboration, text formatting, sharing ideas, etc)?

  • Jaxan 5 days ago

    Yeah, there are already way more papers being published than we can reasonably read. Collaboration, ok, but we don’t need more writing.

    • f2fff 5 days ago

      It seems people don't understand the basics...

      We don't need more stuff; we need more quality and less of the shit stuff.

      I'm convinced many involved in the production of LLM models are far too deep in the rabbit hole and can't see straight.

[removed] 5 days ago
[deleted]
jsrozner 5 days ago

AI: enshittifying everything you once cared about or relied upon

(re the decline of scientific integrity / signal-to-noise ratio in science)

mves 5 days ago

Less thinking, reading, and reflection, and more spouting of text, yay! Just what we need.

postalcoder 5 days ago

Very unfortunately named. OpenAI probably (and likely correctly) estimated that 13 years is enough time after the Snowden leaks to use "prism" for a product but, for me, the word is permanently tainted.

  • cheeseomlit 5 days ago

    Anecdotally, I have mentioned PRISM to several non-techie friends over the years and none of them knew what I was talking about, they know 'Snowden' but not 'PRISM'. The amount of people who actually cared about the Snowden leaks is practically a rounding error

    • hedora 5 days ago

      Given current events, I think you’ll find many more people care in 2026 than did in 2024.

      (See also: today’s WhatsApp whistleblower lawsuit.)

    • giancarlostoro 5 days ago

      Most people don't care about the details. Neither does the media. I've seen national scandals that the media pushed one way disproven during discovery in a legal trial. People only remember headlines, the retractions are never re-published or remembered.

  • blitzar 5 days ago

    Guessing that AI came up with the name based on the description of the product.

    Perhaps, like the original PRISM programme, behind the door is a massive data harvesting operation.

  • arthurcolle 5 days ago

    This was my first thought as well. Prism is a cool name, but I'd never ever use it for a technical product after those leaks, ever.

  • vjk800 5 days ago

    I'd think that most people in science would associate the name with an optical prism. A single large political event can't override an everyday physical phenomenon in my head.

  • seanhunter 5 days ago

    Pretty much every company I’ve worked for in tech over my 25+ year career had a (different) system called prism.

    • no-dr-onboard 5 days ago

      (plot twist: he works for NSA contractors)

      • seanhunter 5 days ago

        Hehe. You got me. Also “atlas” is another one. Pretty much everyone has a system somewhere called “atlas”.

  • [removed] 5 days ago
    [deleted]
  • kaonwarb 5 days ago

    I suspect that name recognition for PRISM as a program is not high at the population level.

    • maqp 5 days ago

      2027: OpenAI Skynet - "Robots help us everywhere, It's coming to your door"

      • willturman 5 days ago

        Skynet? C'mon. That would be too obvious - like naming a company Palantir.

  • moralestapia 5 days ago

    I never thought of that association, not in the slightest, until I read this comment.

  • wilg 5 days ago

    I followed the Snowden stuff fairly closely and still forgot, so I bet they didn't think about it at all; and if they did, they didn't care, and that was surely the right call.

  • dylan604 5 days ago

    Surprised they didn't do something trendy like Prizm or OpenPrism while keeping it closed source code.

  • [removed] 5 days ago
    [deleted]
  • [removed] 5 days ago
    [deleted]
verdverm 5 days ago

I remember, something like a month ago, Altman tweeting that they were stopping all product work to focus on training. Was that written on water?

Seems like they have only announced products since, and no new model trained from scratch. Are they still having pre-training issues?

  • [removed] 5 days ago
    [deleted]