Posted by meetpateltech 3 hours ago

Prism (openai.com)
195 points | 106 comments
JBorrow 1 hour ago|
From my perspective as a journal editor and a reviewer, these kinds of tools cause many more problems than they actually solve. They make the 'barrier to entry' for submitting vibed, semi-plausible journal articles much lower, which I understand some may see as a benefit. The drawback is that scientific editors and reviewers provide those services for free, as a community benefit. One example was a submission from someone using their undergraduate affiliation (in accounting) to submit a paper on cosmology, entirely vibe-coded and vibe-written. This just wastes our (already stretched) time. A significant fraction of submissions are now vibe-written and come from folks who are looking to 'boost' their CV (even having a 'submitted' publication is seen as a benefit), which is really not the point of these journals at all.

I'm not sure I'm convinced of the benefit of lowering the barrier to entry to scientific publishing. The hard part always has been, and always will be, understanding the research context (what's been published before) and producing novel and interesting work (the underlying research). Connecting this together in a paper is indeed a challenge, and a skill that must be developed, but is really a minimal part of the process.

InsideOutSanta 48 minutes ago||
I'm scared that this type of thing is going to do to science journals what AI-generated bug reports are doing to bug bounties. We're truly living in a post-scarcity society now, except that the thing we have an abundance of is garbage, and it's drowning out everything of value.
techblueberry 4 minutes ago|||
There's this thing where all the thought leaders in software engineering ask "What will change about building a business when code is free?" and while there are some cool things, I've also thought it could have some pretty serious negative externalities. I think this question is going to become big everywhere - business, science, etc. - which is like: OK, you have all this stuff, but is it valuable? How much of it actually takes away value?
jcranmer 5 minutes ago|||
The first casualty of LLMs was the slush pile--the unsolicited-submission pile at publishers. We've since seen bug bounty programs and open source repositories buckle under the load of AI-generated contributions. And all of these have the same underlying issue: the LLM makes it easy to produce things that don't immediately look like garbage, which makes the volume of submissions skyrocket while the time-to-reject also goes up slightly, because each submission passes the first (but only the first) absolute-garbage filter.
bloppe 1 hour ago|||
I wonder if there's a way to tax the frivolous submissions. There could be a submission fee that would be fully reimbursed iff the submission is actually accepted for publication. If you're confident in your paper, you can think of it as a deposit. If you're spamming journals, you're just going to pay for the wasted time.

Maybe you get reimbursed for half as long as there are no obvious hallucinations.
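
A back-of-the-envelope version of that incentive, with numbers I'm inventing purely for illustration (in LaTeX, fittingly):

    % refundable submission fee f, returned iff the paper is accepted
    \[ \mathbb{E}[\mathrm{cost}] = f \cdot \Pr(\mathrm{reject}) \]
    % careful author:  Pr(reject) ~ 0.2  => expected cost ~ 0.2f
    % serial spammer:  Pr(reject) ~ 0.99 => expected cost ~ f per attempt

So honest authors pay little on average, while spamming costs roughly the full fee every single time.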

JBorrow 57 minutes ago|||
The journal that I'm an editor for is 'diamond open access', which means we charge no submission fees and no publication fees, and publish open access. This model is really important in allowing legitimate submissions from a wide range of contributors (e.g. PhD students in countries with low levels of science funding). Publishing in a traditional journal usually costs around $3000.
NewsaHackO 14 minutes ago|||
Those journals are really good for getting practice in writing and submitting research papers, but sometimes they are already seen as less impactful because of the quality of accepted papers. At least where I am at, I don't think the advent of AI writing is going to affect how they are seen.
methuselah_in 18 minutes ago|||
Welcome to the new world of fake stuff, I guess.
s0rce 1 hour ago||||
That would be tricky; I often submitted to multiple high-impact journals, going down the list until someone accepted it. You try to ballpark where you can go, but it can be worth aiming high. Maybe this isn't a problem and there should be payment for the effort to screen the paper, but then I would expect the reviewers to be paid for their time.
noitpmeder 53 minutes ago||
I mean your methodology also sounds suspect. You're just going down a list until it sticks. You don't care where it ends up (I'm sure within reason) just as long as it is accepted and published somewhere (again, within reason).
niek_pas 21 minutes ago||
Scientists are incentivized to publish in as high-ranking a journal as possible. You’re always going to have at least a few journals where your paper is a good fit, so aiming for the most ambitious journal first just makes sense.
pixelready 52 minutes ago||||
I’d worry about creating a perverse incentive to farm rejected submissions. Similar to those renter application fee scams.
petcat 53 minutes ago||||
> There could be a submission fee that would be fully reimbursed if the submission is actually accepted for publication.

While well-intentioned, I think this is just gate-keeping. There are mountains of research that result in nothing interesting whatsoever (aside from learning about what doesn't work). And all of that is still valuable knowledge!

throwaway85825 1 hour ago||||
Pay to publish journals already exist.
bloppe 1 hour ago|||
This is sorta the opposite of pay to publish. It's pay to be rejected.
olivia-banks 1 hour ago|||
I would think it would act more like a security deposit, and you'd get back 100%, no profit for the journal (at least in that respect).
utilize1808 38 minutes ago|||
Better yet, make a "polymarket" for papers where people can bet on which papers make it, and rely on "expertise arbitrage" to punish spam.
ezst 7 minutes ago||
Doesn't stop the flood, i.e. the unfair asymmetry between the effort to produce and the effort to review.
mrandish 12 minutes ago|||
As a non-scientist (but long-time science fan and user), I feel your pain with what appears to be a layered, intractable problem.

> who are looking to 'boost' their CV

Ultimately, this seems like a key root cause - misaligned incentives across a multi-party ecosystem. And as always, incentives tend to be deeply embedded and highly resistant to change.

boplicity 18 minutes ago|||
Is it at all possible to have a policy that bans the submission of any AI written text, or text that was written with the assistance of AI tools? I understand that this would, by necessity, be under an "honor system" but maybe it could help weed out papers not worth the time?
Rperry2174 37 minutes ago|||
This keeps repeating in different domains: we lower the cost of producing artifacts and the real bottleneck is evaluating them.

For developers, academics, editors, etc. - in any review-driven system, the scarcity is around good human judgement, not text volume. AI doesn't remove that constraint, and arguably puts more of a spotlight on the ability to separate the shit from the quality.

Unless review itself becomes cheaper or better, this just shifts work further downstream and disguises the change as "efficiency".

vitalnodo 24 minutes ago||
This fits into the broader evolution of the visualization market. As data grows, visualization becomes as important as processing. This applies not only to applications, but also to relating texts through ideas close to transclusion in Ted Nelson’s Xanadu. [0]

In education, understanding is often best demonstrated not by restating text, but by presenting the same data in another representation and establishing the right analogies and isomorphisms, as in Explorable Explanations. [1]

[0] https://news.ycombinator.com/item?id=40295661

[1] https://news.ycombinator.com/item?id=22368323

maxkfranz 46 minutes ago|||
I generally agree.

On the other hand, the world is now a different place as compared to when several prominent journals were founded (1869-1880 for Nature, Science, Elsevier). The tacit assumptions upon which they were founded might no longer hold in the future. The world is going to continue to change, and the publication process as it stands might need to adapt for it to be sustainable.

ezst 11 minutes ago||
As I understand it, the problem isn't publication or how it's changing over time; it's the challenge of producing new science when the existing body of work is muddied with plausible lies. That warrants a new process by which to assess the inherent quality of a paper, but even if such a process were globally distributed, the cheats have a huge advantage, considering the asymmetry between the effort to vibe-produce and the tedious human review.
usefulposter 1 hour ago|||
Completely agree. Look at the independent research that gets submitted under "Show HN" nowadays:

https://hn.algolia.com/?dateRange=pastYear&page=0&prefix=tru...

https://hn.algolia.com/?dateRange=pastYear&page=0&prefix=tru...

SecretDreams 54 minutes ago|||
I appreciate and sympathize with this take. I'll just note that, in general, journal publications have gone considerably downhill over the last decade, even before the advent of AI. Frequency has gone up, quality has gone down, and actually checking whether everything in an article is valid becomes quite challenging as frequency rises.

This is a space that probably needs substantial reform, much like grad school models in general (IMO).

lupsasca 1 hour ago||
I am very sympathetic to your point of view, but let me offer another perspective. First off, you can already vibe-write slop papers with AI, even in LaTeX format--tools like Prism are not needed for that. On the other hand, it can really help researchers improve the quality of their papers. I'm someone who collaborates with many students and postdocs. My time is limited and I spend a lot of it on LaTeX drudgery that can and should be automated away, so I'm excited for Prism to save time on writing, proofreading, making TikZ diagrams, grabbing references, etc.
CJefferson 51 seconds ago|||
What the heck is the point of a reference you never read?
noitpmeder 51 minutes ago|||
AI generating references seems like a hop away from absolute unverifiable trash.
ai_critic 1 hour ago||
Anybody else notice that half the video was just finding papers to decorate the bibliography with? Not like "find me more papers I should read and consider", but "find papers that are relevant that I should cite--okay, just add those".

This is all pageantry.

sfink 3 minutes ago||
Yes. That part of the video was straight-up "here's how to automate academic fraud". Those papers could just as easily negate one of your assumptions. What even is research if it doesn't actually use the works it cites?

"I know nothing but had an idea and did some work. I have no clue whether this question has been explored or settled one way or another. But here's my new paper claiming to be an incremental improvement on... whatever the previous state of understanding was. I wouldn't know, I haven't read up on it yet. Too many papers to write."

renyicircle 27 minutes ago|||
It's as if it's marketed to the students who have been using ChatGPT for the last few years to pass courses and now need to throw together a bachelor's thesis. Bibliography and proper citation requirements are a pain.
pfisherman 3 minutes ago|||
That is such a bummer. At the time, it was annoying and I groused and grumbled about it; but in hindsight my reviewers pointed me toward some good articles, and I am better for having read them.
olivia-banks 23 minutes ago|||
I agree with this. This problem is only going to get worse once these people enter academia and face needing to publish.
olivia-banks 1 hour ago|||
I've noticed this pattern, and it really drives me nuts. You should really be doing a comprehensive literature review before starting any sort of review or research paper.

We removed the authorship of a former co-author on a paper I'm on because his workflow was essentially this--with AI-generated text--and a not-insignificant amount of straight-up plagiarism.

NewsaHackO 6 minutes ago||
There is definitely a difference between how senior researchers and students go about making publications. Students basically get told what topic to write a paper on or prepare data for, so they work backwards: write the paper (possibly researching some information along the way), then add references because they know they have to. For actual researchers, it would be a complete waste of time/funding to start a project on a question that has already been answered (and something the grant reviewers are going to know has already been explored), so in order not to waste their own time, they have to do what you said and actually conduct a comprehensive literature review before even starting the work.
adverbly 10 minutes ago|||
I chuckled at that part too!

Didn't even open a single one of the papers to look at them! Just said that one is not relevant without even opening it.

black_puppydog 46 minutes ago|||
Plus, this practice (just inserting AI-proposed citations/sources) is what has recently been behind some very embarrassing "editing" mistakes, notably in reports from public institutions. Now OpenAI lets us do pageantry even faster! <3
verdverm 40 minutes ago|||
It's all performance over practice at this point. Look to the current US administration as the barometer by which many are measuring their public perceptions
teaearlgraycold 41 minutes ago|||
The hand-drawn-diagram-to-LaTeX demo is a little embarrassing. If you load up Prism and create your first blank project, you can see the image. It looks like it's actually a LaTeX rendering of a diagram, rendered in a hand-drawn style and then overlaid on a very clean image of a napkin. So you've proven that you can go from a rasterized LaTeX diagram back to equivalent LaTeX code. Interesting, but probably will not hold up when it meets real-world use cases.
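
(For what it's worth, the hand-drawn look is itself a stock TikZ trick. A minimal sketch of the kind of source the demo could be rendering -- the package and parameter choices here are my guess, not anything OpenAI has shown:

    \documentclass[tikz]{standalone}
    \usetikzlibrary{decorations.pathmorphing}
    \begin{document}
    \begin{tikzpicture}[
        % jitter every path so it looks sketched by hand
        sketchy/.style={decorate,
          decoration={random steps, segment length=4pt, amplitude=0.8pt}}]
      \draw[sketchy] (0,0) rectangle (2,1);      % "input" box
      \node at (1,0.5) {input};
      \draw[sketchy] (4,0) rectangle (6,1);      % "model" box
      \node at (5,0.5) {model};
      \draw[sketchy,->] (2.1,0.5) -- (3.9,0.5);  % arrow between them
    \end{tikzpicture}
    \end{document}

Rasterize that, composite it onto a napkin photo, and you've got the demo image.)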
thesuitonym 19 minutes ago||
You may notice that this is the way writing papers works in undergraduate courses. It's just another in a long line of examples of MBA tech bros gleaning an extremely surface-level understanding of a topic, then deciding they're experts.
reassess_blind 32 seconds ago||
Do you think they used an em-dash in the opening sentence because they’re trying to normalise the AI’s writing style, or…
vitalnodo 2 hours ago||
Previously, this existed as crixet.com [0]. At some point it used WASM for client-side compilation, and later transitioned to server-side rendering [1][2]. It now appears that there will be no option to disable AI [3]. I hope the core features remain available and won’t be artificially restricted. Compared to Overleaf, there were fewer service limitations: it was possible to compile more complex documents, share projects more freely, and even do so without registration.

On the other hand, Overleaf appears to be open source and at least partially self-hostable, so it’s possible some of these ideas or features will be adopted there over time. Alternatively, someone might eventually manage to move a more complete LaTeX toolchain into WASM.

[0] https://crixet.com

[1] https://www.reddit.com/r/Crixet/comments/1ptj9k9/comment/nvh...

[2] https://news.ycombinator.com/item?id=42009254

[3] https://news.ycombinator.com/item?id=46394937

crazygringo 1 hour ago||
I'm curious how it compares to Overleaf in terms of features. Putting aside the AI aspect entirely, I'm simply curious whether this is a viable Overleaf competitor -- especially since it's free.

I do self-host Overleaf, which is annoying but ultimately doable if you don't want to pay the $21/mo (!).

I do have to wonder for how long it will be free or even supported, though. On the one hand, remote LaTeX compiling gets expensive at scale. On the other hand, it's only a fraction of a drop in the bucket compared to OpenAI's total compute needs. But I'm hesitant to use it because I'm not convinced it'll still be around in a couple of years.

efficax 1 hour ago||
Overleaf is a little curious to me. What's the point? Just install LaTeX. Claude is very good at manipulating LaTeX documents and I've found it effective at fixing up layouts for me.
radioactivist 53 minutes ago|||
In my circles the killer features of Overleaf are the collaborative ones (easy sharing, multi-user editing with track changes/comments). Academic writing in my community basically went from emailed draft-new-FINAL-v4.tex files (or a shared folder full of those files) to basically people just dumping things on Overleaf fairly quickly.
crazygringo 53 minutes ago||||
I can code in monospace (of course) but I just can't write in monospace markup. I need something approaching WYSIWYG. It's just how my brain works -- I need the italics to look like italics, I need the footnote text to not interrupt the middle of the paragraph.

The visual editor in Overleaf isn't true WYSIWYG, but it's close enough. It feels like working in a word processor, not in a code editor. And the interface overall feels simple and modern.

(And that's just for solo usage -- it's really the collaborative stuff that turns into a game-changer.)

bhadass 51 minutes ago||||
collaboration is the killer feature tbh. overleaf is basically google docs meets latex.. you can have multiple coauthors editing simultaneously, leave comments, see revision history, etc.

a lot of academics aren't super technical and don't want to deal with git workflows or syncing local environments. they just want to write their paper.

overleaf lets the whole research team work together without anyone needing to learn version control or debug their local texlive installation.

also nice for quick edits from any machine without setting anything up. the "just install it locally" advice assumes everyone's comfortable with that, but plenty of researchers treat computers as appliances lol.

warkdarrior 15 minutes ago|||
Collaboration is at best rocky when people have different versions of LaTeX packages installed. Also, merging changes from multiple people in git is a pain when dealing with scientific, nuanced text.

Overleaf ensures that everyone looks at the same version of the document and processes the document with the same set of packages and options.

vicapow 1 hour ago|||
The deeper I got, the more I realized that really supporting the entire LaTeX toolchain in WASM would mean simulating an entire Linux distribution :( We wanted to support Beamer, LuaLaTeX, mobile (which wasn't working with WASM because of resource limits), etc.
seazoning 17 minutes ago||
We've been building literally the same thing for the last 8 months, along with a great browsing environment over arxiv -- might just have to sunset it.

Any plans to integrate Typst anytime soon?

songodongo 1 hour ago||
So this is the product of an acquisition?
vitalnodo 1 hour ago||
> Prism builds on the foundation of Crixet, a cloud-based LaTeX platform that OpenAI acquired and has since evolved into Prism as a unified product. This allowed us to start with a strong base of a mature writing and collaboration environment, and integrate AI in a way that fits naturally into scientific workflows.

They’re quite open about Prism being built on top of Crixet.

DominikPeters 1 hour ago||
This seems like a very basic overleaf alternative with few of its features, plus a shallow ChatGPT wrapper. Certainly can’t compete with using VS Code or TeXstudio locally, collaborating through GitHub, and getting AI assistance from Claude Code or Codex.
vicapow 1 hour ago||
I could see why it seems that way, because the UI is quite minimalist, but the AI capabilities are very extensive, imo, if you really play with it.

You're right that something like Cursor can work if you're familiar with all the requisite tooling (git, installing Cursor, installing LaTeX Workshop, knowing how it all works) -- tooling that most researchers don't want to, and really shouldn't have to, figure out for their specific workflows.

jstummbillig 1 hour ago||
Accessibility does matter
jumploops 1 hour ago||
I’ve been “testing” LLM willingness to explore novel ideas/hypotheses for a few random topics[0].

The earlier LLMs were interesting, in that their sycophantic nature eagerly agreed, often lacking criticality.

After reducing said sycophancy, I’ve found that certain LLMs are much more unwilling (especially the reasoning models) to move past the “known” science[1].

I’m curious to see how/if we can strike the right balance with an LLM focused on scientific exploration.

[0] Sediment lubrication due to organic material in specific subduction zones, potential algorithmic basis for colony collapse disorder, potential to evolve anthropomorphic kiwis, etc.

[1] Caveat: it’s very easy for me to tell when an LLM is “off-the-rails” on a topic I know a lot about; much less so, and much more dangerous, for these “tests” where I’m certainly no expert.

radioactivist 41 minutes ago||
Is anyone else having trouble using even some of the basic features? For example, I can open a comment, but it doesn't seem like there is any way to close them (I try clicking the checkmark and nothing happens). You also can't seem to edit the comments once typed.
lxe 36 minutes ago|
Thanks for surfacing this. If you click the "tools" button to the left of "compile", you'll see a list of comments, and you can resolve them from there. We'll keep improving and fixing things that might be rough around the edges.
postalcoder 2 hours ago||
Very unfortunately named. OpenAI probably (and likely correctly) estimated that 13 years is enough time after the Snowden leaks to use "prism" for a product but, for me, the word is permanently tainted.
cheeseomlit 2 hours ago||
Anecdotally, I have mentioned PRISM to several non-techie friends over the years and none of them knew what I was talking about, they know 'Snowden' but not 'PRISM'. The amount of people who actually cared about the Snowden leaks is practically a rounding error
hedora 1 hour ago||
Given current events, I think you’ll find many more people care in 2026 than did in 2024.

(See also: today’s WhatsApp whistleblower lawsuit.)

arthurcolle 1 hour ago|||
This was my first thought as well. Prism is a cool name, but I'd never ever use it for a technical product after those leaks, ever.
vjk800 1 hour ago|||
I'd think that most people in science would associate the name with an optical prism. A single large political event can't override an everyday physical phenomenon in my head.
blitzar 59 minutes ago|||
Guessing that AI came up with the name based on the description of the product.

Perhaps, like the original PRISM programme, behind the door is a massive data harvesting operation.

kaonwarb 2 hours ago|||
I suspect that name recognition for PRISM as a program is not high at the population level.
maqp 47 minutes ago||
2027: OpenAI Skynet - "Robots help us everywhere, It's coming to your door"
seanhunter 2 hours ago|||
Pretty much every company I’ve worked for in tech over my 25+ year career had a (different) system called prism.
no-dr-onboard 1 hour ago||
(plot twist: he works for NSA contractors)
dylan604 2 hours ago|||
Surprised they didn't do something trendy like Prizm, or OpenPrism while keeping the code closed source.
songodongo 1 hour ago|||
Or the JavaScript ORM.
moralestapia 2 hours ago|||
I never thought of that association, not in the slightest, until I read this comment.
locusofself 1 hour ago|||
this was my first thought as well.
wilg 2 hours ago||
I followed the Snowden stuff fairly closely and still forgot, so I bet they didn't think about it at all; and if they did, they didn't care, which was surely the right call.
sva_ 48 minutes ago||
> In 2025, AI changed software development forever. In 2026, we expect a comparable shift in science,

I can't wait

pwdisswordfishy 6 minutes ago|
Oh, like that mass surveillance program!