
Posted by meetpateltech 1/27/2026

Prism(openai.com)
781 points | 524 comments
Perseids 1/28/2026|
I'm dumbfounded they chose the name of the infamous NSA mass surveillance program revealed by Snowden in 2013. And even more so that there is just one other comment among 320 pointing this out [1]. Has the technical and scientific community in the US already forgotten this huge breach of trust? This is especially jarring at a time when the US is burning its political goodwill at an unprecedented rate (at least unprecedented during the lifetimes of most of us) and talking about digital sovereignty has become mainstream in Europe. As a company trying to promote a product, I would stay as far away from that memory as possible, at least if you care about international markets.

[1] https://news.ycombinator.com/item?id=46787165

ZpJuUuNaQ5 1/28/2026||
>I'm dumbfounded they chose the name of the infamous NSA mass surveillance program revealed by Snowden in 2013. And even more so that there is just one other comment among 320 pointing this out

I just think it's silly to obsess over words like that. There are many words that take on different meanings in different contexts and can be associated with different events, ideas, products, time periods, etc. Would you feel better if they named it "Polyhedron"?

jll29 1/28/2026|||
What the OP was talking about is the negative connotation that goes with the word; it's certainly a poor choice from a marketing point of view.

You may say it's "silly to obsess", but it's like naming a product "Auschwitz" and saying "it's just a city name" -- it ignores the power of what Geoffrey N. Leech called "associative meaning" in his taxonomy of "Seven Types of Meaning" (Semantics, 2nd ed. 1989): speaking that city's name evokes images of piles of corpses of gassed, undernourished human beings, walls of gas chambers with fingernail scratches, and lampshades made of human skin.

ZpJuUuNaQ5 1/28/2026|||
Well, I don't know anything about marketing and you might have a point, but the severity of impact of these two words is clearly very different, so it doesn't look like a good comparison to me. It would raise quite a few eyebrows, and more, if someone released a Linux distro named "Auschwitz OS"; meanwhile, even in the software world, there are multiple products that incorporate the word prism in various ways[1][2][3][4][5][6][7][8][9]. I don't believe that an average user encountering the word "prism" immediately starts thinking about the NSA surveillance program.

[1] https://www.prisma.io/

[2] https://prism-pipeline.com/

[3] https://prismppm.com/

[4] https://prismlibrary.com/

[5] https://3dprism.eu/en/

[6] https://www.graphpad.com/features

[7] https://www.prismsoftware.com/

[8] https://prismlive.com/en_us/

[9] https://github.com/Project-Prism/Prism-OS

vladms 1/28/2026|||
I think the idea was to explain why choosing such a name is a problem, not to compare the intensity or importance of the two cases.

I am not sure you can make an argument of "other people are doing it too". Lots of people do things that are not in their interest (smoking, to pick an easy example).

As others mentioned, I did not have the negative connotation of the word prism either, but I am not sure how one could check that anyway. I have been surprised often enough over the years by what other people think, so who knows... Maybe someone with experience in marketing could explain how it is done.

adammarples 1/28/2026||
But without the extremity of the Auschwitz example, it suddenly is not a problem. Prism is an unbelievably generic word, and I had not even heard of the Snowden one until now, nor would I remember it if I had. Prism is one step away from "Triangle" in terms of how generic it is.
jackphilson 1/28/2026|||
Triangle kind of reminds me of the Bermuda Triangle. You know how many people died there?
ConceptJunkie 1/28/2026||
People? Do you know how many of them were murderers, fraudsters, and all-around finks? That's a terrible thing to mention.
order-matters 1/28/2026|||
One more perspective to add: while I did not know the NSA program was called Prism, it did give me pause to find out in this thread. OpenAI surely knows what it was called; at least they should. So it raises the question of why.

If they claim in a private meeting with people at the NSA that they did it as a tribute to them and a bid for partnership, who would anyone here be to say they didn't? Even if they didn't... which is only relevant because OpenAI processes an absolute shitton of data the NSA would be interested in.

helsinkiandrew 1/28/2026||||
And of course there is the prism itself:

https://en.wikipedia.org/wiki/Prism_(optics)

I remember the NSA Prism program, but hearing prism today I would think first of Newton, optics, and rainbows.

bicepjai 1/28/2026|||
When you’re as high profile as OpenAI, you don’t get judged like everyone else. People scrutinize your choices reflexively, and that’s just the tax of being a famous brand: it amplifies both the upsides and the blowback.

Most ordinary users won’t recognize the smaller products you listed, but they will recognize OpenAI and they’ll recognize Snowden/NSA adjacent references because those have seeped into mainstream culture. And even if the average user doesn’t immediately make the connection, someone in their orbit on social media almost certainly will and they’ll happily spin it into a theory for engagement.

946789987649 1/28/2026||||
Do a lot of people know that Prism is the name of the program? I certainly didn't, and I consider myself fairly switched on in general.
BlueTemplar 1/28/2026||
It's likely to be an age thing too. Were you in hacker-related spaces when the Snowden scandal happened?

(I expect a much higher than average share of people in academia were also part of these spaces.)

andrewinardeer 1/28/2026||||
We had a local child day care provider call themselves ISIS. That was a blast.
ConceptJunkie 1/28/2026|||
There was a TV show called "The Mighty Isis" in the 70s. What were they thinking?! (Well, with Joanna Cameron around, I wouldn't be able to think too clearly either.)
SoftTalker 1/28/2026|||
We had a local siding company call themselves "The Vinyl Solution". Some people are just tone-deaf.
FrustratedMonky 1/28/2026|||
I think the point is that on the sliding scale of words that are no longer acceptable to use, "Prism" does not reach the level of "Auschwitz".

Most people don't even remember Snowden at this point.

black_puppydog 1/28/2026||||
I have to say I had the same reaction. Sure, "prism" shows up in many contexts. But here it shows up in the context of a company and product that is already constantly in the news for its lackluster regard for other people's expectation of privacy, copyright, and generally trying to "collect it all" as it were, and that, as GP mentioned, in an international context that doesn't put these efforts in the best light.

They're of course free to choose this name. I'm just also surprised they would do so.

mc32 1/28/2026||||
Plus there are lots of “legacy” products with the name prism in them. I also don’t think the public makes the connection. It’s mainly people who care to be aware of government overreach who think it’s a bad word association.
jimbokun 1/28/2026||||
But the contexts are closely related.

Large scale technology projects that people are suspicious and anxious about. There are a lot of people anxious that AI will be used for mass surveillance by governments. So you pick a name of another project that was used for mass surveillance by government.

bergheim 1/28/2026||||
Sure. Like Goebbels. Because they gobble things up.

Also, Nazism. But different context, years ago, so whatever I guess?

Hell, let's just call it Hitler. Different context!

Given what they do, it is an insidious name. Words matter.

fortyseven 1/28/2026|||
You're comparing words with unique, widespread notoriety to a simple, everyday one. Try again.
rvnx 1/28/2026||
Prism in tech is very well known to be a surveillance program.

Coming from a company involved in sharing data with intelligence services (it's the law; you can't escape it), this is not wise at all. Unless nobody at OpenAI heard of it.

It was one of the biggest scandals in tech 10 years ago.

They could have called it "Workspace". Clearer, more useful, and no need for a code word; that would have been fine for internal use.

ZpJuUuNaQ5 1/28/2026|||
So you have to resort to the most extreme examples in order to make it a problem? Do you also think of Hitler when you encounter the word "vegetarian"?
collingreen 1/28/2026|||
Is that what you think Hitler was most famous for?

The extreme examples are an analogy that highlights the shape of the comparison with a more generally loathed, less niche example.

OpenAI is a thing with lots and lots of personal data that consumers trust OpenAI not to abuse or lose. They chose a product name that matches a US government program that secretly and illegally breached exactly that kind of trust.

"Hitler the vegetarian" isn't a great analogy because vegetarianism isn't related to what made Hitler bad. Something closer might be Exxon or BP making a hair gel called "Oilspill", or DuPont making a nail polish called "Forever Chem".

They could have chosen anything, but they chose one specifically matching a recent data-stealing and abuse scandal.

gegtik 1/28/2026|||
Huh... it seems like a head-scratcher why it would be relevant to this argument to select objectionable words instead of benign, inert ones.
mayhemducks 1/28/2026|||
You do realize that obsessing over words like that is a pretty major part of what programming and computer science are, right? Linguistics is highly intertwined with computer science.
sunaookami 1/28/2026|||
>Has the technical and scientific community in the US already forgotten this huge breach of trust?

Have you ever seen the comment section of a Snowden thread here? A lot of users here call for Snowden to be jailed, call him a Russian asset, play down the reports, etc. These are either NSA sock-puppet accounts or people who won't bite the hand that feeds them (employees of companies willing to breach their users' trust).

Edit: see my comment here in a Snowden thread: https://news.ycombinator.com/item?id=46237098

jll29 1/28/2026|||
What Snowden did was heroic. What was shameful was the world's underwhelming reaction. Where were all the images in the media of protest marches, like those against the Vietnam War?

Someone once said "Religion is the opium of the people." Today, give people a mobile device and some doom-scrolling social-media celebrity-nonsense app, and they wouldn't notice if their own children didn't come home from school.

vladms 1/28/2026|||
Looking back, I think handing centralized control of various forms of media to private parties did far more harm in the long run than government surveillance.

For me the problem was not surveillance; the problem is addiction-focused app building (plus the monopoly), and that never seemed to be a secret. Only now are there some attempts to do something about it (like Australia and France banning children, which I am not sure is feasible or effective, but at least it is more than zero).

sunaookami 1/28/2026||||
Remember when people and tech companies protested against SOPA and PIPA? Remember the SOPA blackout day? Today even worse laws are passed with cheers from the HN crowd, such as the OSA. Embarrassing.
linkregister 1/28/2026|||
Protests in 2025 alone have outnumbered those during the Vietnam War.

Protesting is a poor proxy for American political engagement.

Child neglect and missing children rates are lower than they were 50 years ago.

linkregister 1/28/2026||||
Are you asserting that anyone who disagrees with you is either a propaganda campaign or a cynical insider? Nobody who opposes you has a truly held belief?
sunaookami 1/28/2026||
So you hate waffles?
TiredOfLife 1/28/2026|||
Him being (or, best case, becoming) a Russian asset turned out to be true.
omnimus 1/28/2026|||
As if it would matter for any of the revelations. And as if he had any other choice that didn't end in prison. Look at how it worked out for Assange.
jll29 1/28/2026|||
They both undertook something they believed in and showed extreme courage.

And they did manage to get the word out. They are both relatively free now, but it is true, they both paid a price.

Idealism means following your principles despite that price, not escaping or evading the consequences.

BlueTemplar 1/28/2026|||
Assange became a Russian asset *while* in a whistleblowing-related job.

(And he is also the reason why Snowden ended up in Russia. Though it's possible that the flight plan they had was still the best one in that situation.)

Matl 1/28/2026|||
So exposing corruption of Western governments is not worthwhile because it 'helps' Russia? Aha, got it.

I am increasingly wondering what there remains of the supposed superiority of the Western system if we're willing to compromise on everything to suit our political ends.

The point was supposed to be that the truth is worth having out there for the purpose of having an informed public, no matter how it was (potentially) obtained.

In the end, we may end up with everything we fear about China but worse infrastructure and still somehow think we're better.

BlueTemplar 1/28/2026||
No, exposing Western corruption is all well and good, but the problem is that at some point Assange seems to have decided "the enemy of my enemy is my friend", which was a very bad idea when applied to Putin's Russia.
Matl 1/29/2026||
> Assange seems to have decided "the enemy of my enemy is my friend", which was a very bad idea when applied to Putin's Russia

What if he simply decided that the information he obtained is worth having out there no matter the source? It seems to me that you're simply upset that he dared to do so and are trying very hard to come up with a rationalization for why he's a Bad Guy(tm) for daring to turn the tables. It's a transparent and rather lackluster attempt to shift the conversation from what to who.

BlueTemplar 1/29/2026||
No, I'm upset that he took money from the Kremlin and hosted a show on Russia Today. (At least it was before 2014 I guess...)
Matl 1/29/2026||
One can only hope that you're at least as upset at the double-tapping criminals he exposed.
observationist 1/28/2026|||
Obama and Biden chased him into a corner. They actually bragged about chasing him into Russia, because it was a convenient narrative to smear Snowden with after the fact.

It was Russia, or vanish into a black site, never to be seen or heard from again.

lionkor 1/28/2026||||
If the messenger has anything to do with Russia, even after the fact, we should dismiss the message and remember to never look up.
vezycash 1/28/2026||||
Truth is truth, no matter the source.
TiredOfLife 1/28/2026||
Whole Truth is truth.

https://en.wikipedia.org/wiki/Lie#:~:text=citation%20needed%...

rvnx 1/28/2026||
There is also the truth that you say, and the truth that you feel
sunaookami 1/28/2026||||
In what way did it "turn out to be true"? Because he has Russian citizenship and is living in a country that is not allied with his home country, which is/was actively trying to kill him (and revoked his US passport)?
jimmydoe 1/28/2026|||
He could have been a Chinese asset, but CCP is a coward.
pageandrew 1/28/2026|||
These things don't really seem related at all. It's a pretty generic term.
Phelinofist 1/28/2026|||
FWIW, my immediate reaction was the same "That reminds me of NSA PRISM"
addandsubtract 1/28/2026|||
It reminded me of the code highlighter[0], and the ORM Prisma[1].

[0] https://prismjs.com/

[1] https://www.prisma.io/

wmeredith 1/28/2026||||
It reminded me of the album cover to Dark Side of The Moon by Pink Floyd.
karmakurtisaani 1/28/2026|||
Same here.
3form 1/28/2026||
Same, to the point where I was wondering if someone deliberately named it so. But I expect that whoever made this decision simply doesn't know or care.
kakacik 1/28/2026||||
I came here based on the headline, expecting some more CIA & NSA shit; that word has been tarnished for a few decades in the better part of the IT community (the part that actually cares about this craft beyond a paycheck).
vaylian 1/28/2026||||
And yet, the name immediately reminded me of the Snowden revelations.
ImHereToVote 1/28/2026|||
They are farming scientists for insight.
JasonADrury 1/28/2026|||
This comment might make more sense if there was some connection or similarity between the OpenAI "Prism" product and the NSA surveillance program. There doesn't appear to be.
Schlagbohrer 1/28/2026||
Except that this lets OpenAI gain research data and scientific ideas by stealing from their users, using their huge mass surveillance platform. So, tremendous overlap.
concats 1/28/2026|||
Isn't most research and scientific data already shared openly (usually in publications)?
cruffle_duffle 1/28/2026||||
"Except that this lets OpenAI gain research data and scientific ideas by stealing from their users, using their huge mass surveillance platform. So, tremendous overlap."

Even if what you say is completely untrue (and who really knows for sure).... it creates that mental association. It's a horrible product name.

isege 1/28/2026|||
This comment allows Y Combinator to steal ideas from their users' comments, using their huge mass news platform. Tremendous overlap indeed.
WiSaGaN 1/28/2026|||
OpenAI has a former NSA director on its board. [1] This connection makes the dilution of the term "PRISM" in search results a potential benefit to NSA interests.

[1]: https://openai.com/index/openai-appoints-retired-us-army-gen...

aa-jv 1/28/2026|||
>Has the technical and scientific community in the US already forgotten this huge breach of trust?

Yes, imho, there is a great deal of ignorance of the actual contents of the NSA leaks.

The agitprop against Snowden as a "Russian agent" has successfully occluded the actual scandal, which is that the NSA has built a totalitarian-authoritarian apparatus that is still in wide use.

Autocrats' general hubris about their own superiority has been weaponized against them. Instead of actually addressing the issue with America's repressive military industrial complex, they kill the messenger.

LordDragonfang 1/28/2026|||
Probably gonna get buried at the bottom of this thread, but:

There's a good chance they just asked GPT-5.2 for a name. I know for a fact that when some of the OpenAI models get stuck in the "weird" state associated with LLM psychosis, three of the things they really like talking about are spirals, fractals, and prisms. Presumably, there's some general bias toward those concepts in the weights.

saidnooneever 1/28/2026|||
Tons of things are called Prism.

(Full disclosure: yes, they will be handing over PII on demand, the same kind of deals; this is 'normal'. 2012 showed us no one gives a shit.)

alfiedotwtf 1/28/2026|||
> Has the technical and scientific community in the US already forgotten this huge breach of trust?

We haven’t forgotten... it’s mostly that we’re all jaded, given that there have been zero ramifications. So what’s the use of complaining? You’re better off pushing shit up a hill.

teddyh 1/28/2026|||
We used to have “SEO spam”, where people would try to create news (and other) articles associated with some word or concept to drown out some scandal associated with that same word or concept. The idea was that people searching on Google for the word would see only the newly created articles, and not see anything scandalous. This could be something similar, but aimed at future LLMs trained on these articles. If LLMs learn that the word “Prism” means a certain new thing in a surveillance context, they will unlearn the older association, thereby hiding the Snowden revelations.
cruffle_duffle 1/28/2026|||
As a datapoint, when I read this headline, the very first thing I thought was "wasn't PRISM some NSA shit? Is OpenAI working with the NSA now?"

It's a horrible name for any product coming out of a company like OpenAI. People are super sensitive to privacy and government snooping and OpenAI is a ripe target for that sort of thinking. It's a pretty bad association. You do not want your AI company to be in any way associated with government surveillance programs no matter how old they are.

bandrami 1/28/2026|||
I mean, it's also the name of the national engineering education journal and a few other things. There are only about 14,000 five-letter words in English, so you're going to have collisions.
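The collision claim above can be sanity-checked with the standard birthday-problem calculation. A rough sketch (the 14,000 figure is the comment's own estimate, and the assumption that names are drawn uniformly at random is obviously a simplification):

```python
from math import prod

def collision_prob(n_words: int, n_products: int) -> float:
    """Birthday-problem probability that at least two of n_products
    names, each drawn uniformly from n_words candidates, collide."""
    # Probability all draws are distinct: (n/n) * (n-1)/n * (n-2)/n * ...
    p_all_unique = prod((n_words - i) / n_words for i in range(n_products))
    return 1 - p_all_unique

# With ~14,000 five-letter words and only 200 products picking names,
# a collision is already more likely than not (roughly 0.76).
print(collision_prob(14_000, 200))
```

So even a modest number of products choosing short generic names makes collisions the expected outcome, not a surprise.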
wmeredith 1/28/2026|||
I get what you're saying, but that was 13 years ago. How long before the branding statute of limitations runs out on usage for a simple noun?
yayitswei 1/28/2026|||
Fwiw I was going to make the same comment about the naming, but you beat me to it.
hcfman 1/29/2026|||
Yeah, to be fair, I would be hesitant to have anything to do with any program called Prism as well. It's hard to imagine that no one brought this up when they were thinking of a name.
CalRobert 1/28/2026|||
Do they care what anyone over 30 thinks?
lrvick 1/28/2026|||
Considering OpenAI is deeply rooted in an anti-freedom ethos and surveillance capitalism, I think it is quite a self-aware and fitting name.
chromanoid 1/28/2026|||
Sorry, did you read this https://blog.cleancoder.com/uncle-bob/2018/12/14/SJWJS.html?

I personally associate Prism with [Silverlight - Composite Web Apps With Prism](https://learn.microsoft.com/en-us/archive/msdn-magazine/2009...) due to personal reasons I don't want to talk about ;))

johanyc 1/28/2026|||
I did not make the association at all
observationist 1/28/2026|||
I think it's probably just apparent to a small set of people; we're usually the ones yelling at the stupid cloud technologies that are ravaging online privacy and liberty, anyway. I was expecting some sort of OpenAI automated user data handling program, with the recent venture into adtech, but since it's a science project and nothing to do with surveillance and user data, I think it's fine.

If it was part of their adtech systems and them dipping their toe into the enshittification pool, it would have been a legendarily tone deaf project name, but as it is, I think it's fine.

igleria 1/28/2026|||
money is a powerful amnesiac
alexpadula 1/28/2026|||
That’s funny af
aargh_aargh 1/28/2026||
I still can't get over the Apple thing. Haven't enjoyed a ripe McIntosh since. </s>
vitalnodo 1/27/2026||
Previously, this existed as crixet.com [0]. At some point it used WASM for client-side compilation, and later transitioned to server-side rendering [1][2]. It now appears that there will be no option to disable AI [3]. I hope the core features remain available and won’t be artificially restricted. Compared to Overleaf, there were fewer service limitations: it was possible to compile more complex documents, share projects more freely, and even do so without registration.

On the other hand, Overleaf appears to be open source and at least partially self-hostable, so it’s possible some of these ideas or features will be adopted there over time. Alternatively, someone might eventually manage to move a more complete LaTeX toolchain into WASM.

[0] https://crixet.com

[1] https://www.reddit.com/r/Crixet/comments/1ptj9k9/comment/nvh...

[2] https://news.ycombinator.com/item?id=42009254

[3] https://news.ycombinator.com/item?id=46394937

crazygringo 1/27/2026||
I'm curious how it compares to Overleaf in terms of features. Putting aside the AI aspect entirely, I'm simply curious whether this is a viable Overleaf competitor -- especially since it's free.

I do self-host Overleaf which is annoying but ultimately doable if you don't want to pay the $21/mo (!).

I do have to wonder for how long it will be free or even supported, though. On the one hand, remote LaTeX compiling gets expensive at scale. On the other hand, it's only a fraction of a drop in the bucket compared to OpenAI's total compute needs. But I'm hesitant to use it because I'm not convinced it'll still be around in a couple of years.

efficax 1/27/2026||
Overleaf is a little curious to me. What's the point? Just install LaTeX. Claude is very good at manipulating LaTeX documents and I've found it effective at fixing up layouts for me.
radioactivist 1/27/2026|||
In my circles the killer features of Overleaf are the collaborative ones (easy sharing, multi-user editing with track changes/comments). Academic writing in my community basically went from emailed draft-new-FINAL-v4.tex files (or a shared folder full of those files) to basically people just dumping things on Overleaf fairly quickly.
crazygringo 1/27/2026||||
I can code in monospace (of course), but I just can't write in monospace markup. I need something approaching WYSIWYG. It's just how my brain works -- I need the italics to look like italics, I need the footnote text to not interrupt the middle of the paragraph.

The visual editor in Overleaf isn't true WYSIWYG, but it's close enough. It feels like working in a word processor, not in a code editor. And the interface overall feels simple and modern.

(And that's just for solo usage -- it's really the collaborative stuff that turns into a game-changer.)

bhadass 1/27/2026||||
collaboration is the killer feature tbh. overleaf is basically google docs meets latex.. you can have multiple coauthors editing simultaneously, leave comments, see revision history, etc.

a lot of academics aren't super technical and don't want to deal with git workflows or syncing local environments. they just want to write their paper.

overleaf lets the whole research team work together without anyone needing to learn version control or debug their local texlive installation.

also nice for quick edits from any machine without setting anything up. the "just install it locally" advice assumes everyone's comfortable with that, but plenty of researchers treat computers as appliances lol.

warkdarrior 1/27/2026|||
Collaboration is at best rocky when people have different versions of LaTeX packages installed. Also, merging changes from multiple people in git is a pain when dealing with scientific, nuanced text.

Overleaf ensures that everyone looks at the same version of the document and processes the document with the same set of packages and options.

vicapow 1/27/2026|||
The deeper I got, the more I realized that really supporting the entire LaTeX toolchain in WASM would mean simulating an entire Linux distribution :( We wanted to support Beamer, LuaLaTeX, mobile (which wasn't working with WASM because of resource limits), etc.
seazoning 1/27/2026||
We had been building literally the same thing for the last 8 months, along with a great browsing environment over arXiv -- we might just have to sunset it.

Any plans of having typst integrated anytime soon?

songodongo 1/27/2026||
So this is the product of an acquisition?
vitalnodo 1/27/2026||
> Prism builds on the foundation of Crixet, a cloud-based LaTeX platform that OpenAI acquired and has since evolved into Prism as a unified product. This allowed us to start with a strong base of a mature writing and collaboration environment, and integrate AI in a way that fits naturally into scientific workflows.

They’re quite open about Prism being built on top of Crixet.

i2km 1/28/2026||
This is going to be the concrete block which finally breaks the back of the academic peer review system, i.e. it's going to be a DDoS attack on a system which didn't even handle the load before LLMs.

Maybe we'll need to go back to some sort of proof-of-work system, i.e. only accepting physical mailed copies of manuscripts, possibly hand-written...

thomasahle 1/28/2026||
I tried Prism, but it's actually a lot more work than just using Claude Code. The latter allows you to "vibe code" your paper with no manual interaction, while Prism requires you to review every change.

I actually think Prism promotes a much more responsible approach to AI writing than copying from ChatGPT or the like.

haspok 1/28/2026|||
> This is going to be the concrete block which finally breaks the back of the academic peer review system

Exactly, and I think this is good news. Let's break it so we can fix it at last. Nothing will happen until a real crisis emerges.

suddenlybananas 1/28/2026|||
There are problems with the medical system; therefore we should set hospitals on fire to motivate people to make them better.
port11 1/28/2026||||
Disrupting a system without good proposals for its replacement sounds like a recipe for disaster.
butlike 1/28/2026||
Reign of terror https://en.wikipedia.org/wiki/Reign_of_Terror
reciprocity 1/29/2026|||
Very myopic comment.
aembleton 1/28/2026|||
Maybe OpenAI will sell you 'Lens', which will assist with sorting through the submissions and narrowing down the papers worth reviewing.
jltsiren 1/28/2026|||
Or it makes gatekeepers even more important than before. Every submission to a journal will be desk-rejected, unless it is vouched for by someone one of the editors trusts. And people won't even look at a new paper, unless it's vouched for by someone / published in a venue they trust.
make3 1/28/2026|||
Overleaf basically already has the same thing
csomar 1/28/2026|||
That will just create a market for hand-writers. Good thing the economy is doing so well right now, so there aren't that many desperate people who would do it en masse and for peanuts.
boxed 1/28/2026|||
Handwriting is super easy to fake with plotters.
eternauta3k 1/28/2026||
Is there something out there to simulate the non-uniformity and errors of real handwriting?
4gotunameagain 1/28/2026||
> i.e. only accepting physical mailed copies of manuscripts, possibly hand-written...

And you think Indians will not hand-write the output of LLMs?

Not that I have a better suggestion myself...

syntex 1/27/2026||
The Post-LLM World: Fighting Digital Garbage https://archive.org/details/paper_20260127/mode/2up

Mini paper: the future isn't AI replacing humans; it's humans drowning in cheap artifacts. New unit of measurement proposed: verification debt. Also introduces: Recursive Garbage → model collapse.

(A little joke on Prism.)

Springtime 1/28/2026||
> The Post-LLM World: Fighting Digital Garbage https://archive.org/details/paper_20260127/mode/2up

This appears to just be the output of LLMs itself? It credits GPT-5.2 and Gemini 3 exclusively as authors, has a public domain license (appropriate for AI output) and is only several paragraphs in length.

doodlesdev 1/28/2026|||
Which proves its own point! Absolutely genius! The cost asymmetry between producing garbage and checking for it truly has become a problem in recent years, with the advent of LLMs and generative AI in general.
parentheses 1/28/2026|||
Totally agree!

I feel like this means that working in any group where individuals compete against each other results in an AI vs AI content generation competition, where the human is stuck verifying/reviewing.

dormento 1/28/2026||
> Totally agree!

Not a dig on your (very sensible) comment, but now I always do a double take when I see anyone effusively approving of someone else's ideas. AI turned me into a cynical bastard :(

syntex 1/28/2026|||
Yes, I did it as a joke inspired by the Prism release. But unexpectedly, it makes a good point. And the funny part for me was that the paper lists only LLMs as authors.

Also, in a world where AI output is abundant, we humans become the scarce resource: the "tools" in the system that provide some connection to reality (grounding) for the LLMs.

mrbonner 1/28/2026||
Plot twist: humans become the new Proof of Work consensus mechanism. Instead of GPUs burning electricity to hash blocks, we burn our sanity verifying whether that Medium article was written by a person or a particularly confident LLM.

"Human Verification as a Service": finally, a lucrative career where the job description is literally "read garbage all day and decide if it's authentic garbage or synthetic garbage." LinkedIn influencers will pivot to calling themselves "Organic Intelligence Validators" and charge $500/hr to squint at emails and go "yeah, a human definitely wrote this passive-aggressive Slack message."

The irony writes itself: we built machines to free us from tedious work, and now our job is being the tedious work for the machines. Full circle. Poetic even. Future historians (assuming they're still human and not just Claude with a monocle) will mark this as the moment we achieved peak civilization: where the most valuable human skill became "can confidently say whether another human was involved."

Bullish on verification miners. Bearish on whatever remains of our collective attention span.

kinduff 1/28/2026|||
Human CAPTCHA exists to figure out whether your clients are human or not, so you can segment them and apply human pricing. Synthetics, of course, fall into different tiers. The cheaper ones.
direwolf20 1/28/2026|||
Bullish on verifiers who accept money to verify fake things
JBorrow 1/27/2026||
From my perspective as a journal editor and a reviewer, these kinds of tools cause many more problems than they actually solve. They make the 'barrier to entry' for submitting vibed, semi-plausible journal articles much lower, which I understand some may see as a benefit. The drawback is that scientific editors and reviewers provide those services for free, as a community benefit. One example was a submission from someone using their undergraduate affiliation (in accounting) to submit a paper on cosmology, entirely vibe-coded and vibe-written. This just wastes our (already stretched) time. A significant fraction of submissions are now vibe-written and come from folks who are looking to 'boost' their CV (even having a 'submitted' publication is seen as a benefit), which is really not the point of these journals at all.

I'm not sure I'm convinced of the benefit of lowering the barrier to entry to scientific publishing. The hard part always has been, and always will be, understanding the research context (what's been published before) and producing novel and interesting work (the underlying research). Connecting this together in a paper is indeed a challenge, and a skill that must be developed, but is really a minimal part of the process.

InsideOutSanta 1/27/2026||
I'm scared that this type of thing is going to do to science journals what AI-generated bug reports are doing to bug bounties. We're truly living in a post-scarcity society now, except that the thing we have in abundance is garbage, and it's drowning out everything of value.
techblueberry 1/27/2026|||
There's this thing where all the thought leaders in software engineering ask "What will change about building a business when code is free?" and while there are some cool things, I've also thought it could have some pretty serious negative externalities. I think this question is going to become big everywhere - business, science, etc. - which is: OK, you have all this stuff, but is any of it valuable? Which of it actually takes away value?
jcranmer 1/27/2026|||
The first casualty of LLMs was the slush pile--the unsolicited submission pile for publishers. We've since seen bug bounty programs and open source repositories buckle under the load of AI-generated contributions. And all of these have the same underlying issue: the LLM makes it easy to do things that don't immediately look like garbage, which makes the volume of submission skyrocket while the time-to-reject also goes up slightly because it passes the first (but only the first) absolute garbage filter.
bloppe 1/27/2026|||
I wonder if there's a way to tax the frivolous submissions. There could be a submission fee that would be fully reimbursed iff the submission is actually accepted for publication. If you're confident in your paper, you can think of it as a deposit. If you're spamming journals, you're just going to pay for the wasted time.

Maybe you get half of it reimbursed, as long as there are no obvious hallucinations.

JBorrow 1/27/2026|||
The journal that I'm an editor for is 'diamond open access', which means we charge no submission fees and no publication fees, and publish open access. This model is really important in allowing legitimate submissions from a wide range of contributors (e.g. PhD students in countries with low levels of science funding). Publishing in a traditional journal usually costs around $3000.
NewsaHackO 1/27/2026|||
Those journals are really good for getting practice in writing and submitting research papers, but sometimes they are already seen as less impactful because of the quality of accepted papers. At least where I am at, I don't think the advent of AI writing is going to affect how they are seen.
methuselah_in 1/27/2026|||
Welcome to the new world of fake stuff, I guess.
s0rce 1/27/2026||||
That would be tricky. I often submitted to multiple high-impact journals, going down the list until someone accepted the paper. You try to ballpark where you can go, but it can be worth aiming high. Maybe this isn't a problem and there should be payment for the effort to screen the paper, but then I would expect the reviewers to be paid for their time.
noitpmeder 1/27/2026||
I mean your methodology also sounds suspect. You're just going down a list until it sticks. You don't care where it ends up (I'm sure within reason) just as long as it is accepted and published somewhere (again, within reason).
niek_pas 1/27/2026||
Scientists are incentivized to publish in as high-ranking a journal as possible. You’re always going to have at least a few journals where your paper is a good fit, so aiming for the most ambitious journal first just makes sense.
pixelready 1/27/2026||||
I’d worry about creating a perverse incentive to farm rejected submissions. Similar to those renter application fee scams.
petcat 1/27/2026||||
> There could be a submission fee that would be fully reimbursed if the submission is actually accepted for publication.

While well-intentioned, I think this is just gate-keeping. There are mountains of research that result in nothing interesting whatsoever (aside from learning about what doesn't work). And all of that is still valuable knowledge!

throwaway85825 1/27/2026||||
Pay to publish journals already exist.
bloppe 1/27/2026|||
This is sorta the opposite of pay to publish. It's pay to be rejected.
olivia-banks 1/27/2026|||
I would think it would act more like a security deposit, and you'd get back 100%, no profit for the journal (at least in that respect).
utilize1808 1/27/2026|||
Better yet, make a "polymarket" for papers where people can bet on which papers make it, and rely on "expertise arbitrage" to punish spam.
ezst 1/27/2026||
Doesn't stop the flood, i.e. the unfair asymmetry between the effort to produce vs. effort to review.
mrandish 1/27/2026|||
As a non-scientist (but long-time science fan and user), I feel your pain with what appears to be a layered, intractable problem.

> who are looking to 'boost' their CV

Ultimately, this seems like a key root cause - misaligned incentives across a multi-party ecosystem. And as always, incentives tend to be deeply embedded and highly resistant to change.

boplicity 1/27/2026|||
Is it at all possible to have a policy that bans the submission of any AI written text, or text that was written with the assistance of AI tools? I understand that this would, by necessity, be under an "honor system" but maybe it could help weed out papers not worth the time?
Rperry2174 1/27/2026|||
This keeps repeating in different domains: we lower the cost of producing artifacts and the real bottleneck is evaluating them.

For developers, academics, editors, etc., in any review-driven system the scarcity is good human judgement, not text volume. AI doesn't remove that constraint and arguably puts more of a spotlight on the ability to separate the shit from the quality.

Unless review itself becomes cheaper or better, this just shifts work further downstream and disguises the change as "efficiency".

vitalnodo 1/27/2026||
This fits into the broader evolution of the visualization market. As data grows, visualization becomes as important as processing. This applies not only to applications, but also to relating texts through ideas close to transclusion in Ted Nelson’s Xanadu. [0]

In education, understanding is often best demonstrated not by restating text, but by presenting the same data in another representation and establishing the right analogies and isomorphisms, as in Explorable Explanations. [1]

[0] https://news.ycombinator.com/item?id=40295661

[1] https://news.ycombinator.com/item?id=22368323

maxkfranz 1/27/2026|||
I generally agree.

On the other hand, the world is now a different place as compared to when several prominent journals were founded (1869-1880 for Nature, Science, Elsevier). The tacit assumptions upon which they were founded might no longer hold in the future. The world is going to continue to change, and the publication process as it stands might need to adapt for it to be sustainable.

ezst 1/27/2026||
As I understand it, the problem isn't publication or how it's changing over time; it's the challenge of producing new science when the existing body of work is muddied with plausible lies. That warrants a new process for assessing the inherent quality of a paper, but even if it's globally distributed, the cheats have a huge advantage given the asymmetry between the effort to vibe-produce and the tedious human review.
usefulposter 1/27/2026|||
Completely agree. Look at the independent research that gets submitted under "Show HN" nowadays:

https://hn.algolia.com/?dateRange=pastYear&page=0&prefix=tru...

https://hn.algolia.com/?dateRange=pastYear&page=0&prefix=tru...

SecretDreams 1/27/2026|||
I appreciate and sympathize with this take. I'll just note that, in general, journal publications have gone considerably downhill over the last decade, even before the advent of AI. Frequency has gone up, quality has gone down, and actually checking whether everything in an article is valid becomes quite challenging as frequency rises.

This is a space that probably needs substantial reform, much like grad school models in general (IMO).

lupsasca 1/27/2026||
I am very sympathetic to your point of view, but let me offer another perspective. First off, you can already vibe-write slop papers with AI, even in LaTeX format--tools like Prism are not needed for that. On the other hand, it can really help researchers improve the quality of their papers. I'm someone who collaborates with many students and postdocs. My time is limited and I spend a lot of it on LaTeX drudgery that can and should be automated away, so I'm excited for Prism to save time on writing, proofreading, making TikZ diagrams, grabbing references, etc.
CJefferson 1/27/2026|||
What the heck is the point of a reference you never read?
noitpmeder 1/27/2026|||
AI generating references seems like a hop away from absolute unverifiable trash.
tarcon 1/28/2026||
This is an actual prompt in the video: "What are the papers in the literature that are most relevant to this draft and that I should consider citing?"

They probably wanted: "... that I should read?" So that this is at least marketed as more than a fake-paper generation tool.

mFixman 1/28/2026||
You can tell that they consulted 0 scientists to verify the clearly AI-written draft of this video.

The target audience of this tool is not academics; it's OpenAI investors.

jtr1 1/28/2026|||
At last, our scientific literature can turn to its true purpose: mapping the entire space of arguable positions (and then some)
floitsch 1/28/2026||
I felt the same, but then thought of experts in their field. For example, my PhD advisor would already know all these papers. For him the prompt would actually be similar to what was shown in the video.
parentheses 1/28/2026||
It feels generally a bit dangerous to use an AI product to work on research when (1) it's free and (2) the company hosting it makes money by shipping productized research
roflmaostc 1/28/2026||
I am not so skeptical about AI usage for paper writing, as the paper will often be public days later anyway (on pre-print servers such as arXiv).

So yes, you use it to write the paper but soon it is public knowledge anyway.

I am not sure there is much to learn from the authors' drafts.

GorbachevyChase 1/28/2026|||
I think the goal is to capture high quality training data to eventually create an automated research product. I could see the value of having drafts, comments, and collaboration discussions as a pattern to train the LLMs to emulate.
biscuit1v9 1/28/2026|||
Why do you think these points would make the usage dangerous?
z3t4 1/28/2026||
They have to monetize somehow...
raincole 1/28/2026||
I know many people have negative opinions about this.

I'd also like to share what I saw. Since GPT-4o became a thing, everyone I know who submits academic papers in my non-English-speaking country (N > 5) has been writing papers in our native language and translating them with GPT-4o exclusively. It has been the norm for quite a while. If hallucination is such a serious problem, it has been so for a year and a half.

direwolf20 1/28/2026||
Translation is something Large Language Models are inherently pretty good at, without controversy, even though the output still should be independently verified. It's a language task and they are language models.
kccqzy 1/28/2026|||
Of course. Transformers were originally invented for Google Translate.
biophysboy 1/28/2026||||
Are they good at translating scientific jargon specific to a niche within a field? I have no doubt LLMs are excellent at translating well-trodden patterns; I'm a bit suspicious otherwise.
andy12_ 1/28/2026|||
In my experience of using it to translate ML work between English->Spanish|Galician, it seems to literally translate jargon too eagerly, to the point that I have to tell it to maintain specific terms in English to avoid it sounding too weird (for most modern ML jargon there really isn't a Spanish translation).
mbreese 1/28/2026||||
It seems to me that jargon would tend to be defined in one language and minimally adapted in other languages. So I'm not sure that would be much of a concern.
fuzzfactor 1/28/2026||
I would look at non-English research papers along with the English ones in my field and the more jargon and just plain numbers and equations there were, the more I could get out of it without much further translation.
disconcision 1/28/2026|||
for better or for worse, most specific scientific jargon is already going to be in english
ivirshup 1/28/2026|||
I've heard that now that AI conferences are starting to check for hallucinated references, rejection rates are going up significantly. See also the Neurips hallucinated references kerfuffle [1]

[1]: https://statmodeling.stat.columbia.edu/2026/01/26/machine-le...
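
For what it's worth, the simplest version of such a check can be automated. Below is a minimal sketch (an illustration, not any conference's actual tooling) that pulls DOI-like strings out of a reference section and, assuming the public Crossref REST API (which returns HTTP 404 for unknown DOIs), tests whether each one resolves:

```python
import re
import urllib.error
import urllib.parse
import urllib.request

# Rough pattern for DOIs: "10.<registrant>/<suffix>".
# Trailing punctuation from surrounding prose is trimmed afterwards.
DOI_RE = re.compile(r"\b10\.\d{4,9}/\S+")

def extract_dois(text):
    """Pull DOI-like strings out of free text (e.g. a reference section)."""
    return [m.rstrip(".,;)\"'") for m in DOI_RE.findall(text)]

def doi_resolves(doi, timeout=10):
    """Return True if the Crossref REST API knows this DOI (HTTP 200).

    A 404 from Crossref is strong (though not conclusive) evidence the
    DOI was hallucinated; non-Crossref DOIs may need other registries.
    """
    url = "https://api.crossref.org/works/" + urllib.parse.quote(doi, safe="")
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False
```

This only catches references that carry a DOI and that happen to be registered with Crossref; references cited by title alone still need a search-based or human check.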

doodlesdev 1/28/2026|||
Honestly, hallucinated references should simply get the submitter banned from ever applying again. Anyone submitting papers or anything with hallucinated references shall be publicly shamed. The problem isn't only the LLMs hallucinating, it's lazy and immoral humans who don't bother to check the output either, wasting everyone's time and corroding public trust in science and research.
lionkor 1/28/2026||
I fully agree. Not reading your own references should be grounds for banning, but that's impossible to check. Hallucinated references cannot be read, so by definition, they should get people banned.
fuzzfactor 1/28/2026||
>Not reading your own references

This could be considered in degrees.

Like when you only need a single table from another researcher's 25-page publication, you would cite it to be thorough but it wouldn't be so bad if you didn't even read very much of their other text. Perhaps not any at all.

Maybe one of the very helpful things is not just reading every reference in detail, but actually looking up every one in detail to begin with?

SilverBirch 1/28/2026|||
Yeah, that's not going to work for long. You can draw a line in 2023 and say "every paper before this isn't AI". But in the future, you're going to have AI-generated papers citing other AI slop papers that slipped through the cracks; because of the cost of doing research vs. the cost of generating AI slop, the slop papers will start to outcompete the real research papers.
BlueTemplar 1/28/2026|||
How is this different from flat earth / creationist papers citing other flat earth / creationist papers?
fuzzfactor 1/28/2026|||
>the cost of doing reseach vs the cost of generating

>slop papers will start to outcompete the real research papers.

This started to rear its ugly head when electric typewriters got more affordable.

Sometimes all it takes is faster horses and you're off to the races :\

utopiah 1/28/2026||
It's quite a safe case if you maintain provenance because there is a ground truth to compare to, namely the untranslated paper.
asveikau 1/27/2026||
Good idea to name this after the spy program that Snowden talked about.
pazimzadeh 1/27/2026||
idk if OpenAI knew that Prism is already a very popular desktop app for scientists and that it's one of the last great pieces of optimized native software?

https://www.graphpad.com/

varjag 1/27/2026|||
They don't care. Musk stole a chunk of Heinlein's literary legacy with Grok (which, unlike prism, wasn't a common word) and no one batted an eye.
DonaldPShimoda 1/27/2026|||
> Grok (which unlike prism wasn't a common word)

"Grok" was a term used in my undergrad CS courses in the early 2010s. It's been a pretty common word in computing for a while now, though the current generation of young programmers and computer scientists seem not to know it as readily, so it may be falling out of fashion in those spaces.

Fnoord 1/28/2026|||
Wikipedia about Groklaw [1]

> Groklaw was a website that covered legal news of interest to the free and open source software community. Started as a law blog on May 16, 2003, by paralegal Pamela Jones ("PJ"), it covered issues such as the SCO-Linux lawsuits, the EU antitrust case against Microsoft, and the standardization of Office Open XML.

> Its name derives from "grok", roughly meaning "to understand completely", which had previously entered geek slang.

[1] https://en.wikipedia.org/wiki/Groklaw

milleramp 1/28/2026||||
He is referencing the book Stranger in a Strange Land, written in 1961.
varjag 1/28/2026|||
Grok was specifically coined by Heinlein in _Stranger in a Strange Land_. It's been used in nerd circles for decades before your undergrad times but was never broadly known.
DonaldPShimoda 1/30/2026||
I'm aware of the provenance; I was specifically addressing the parent comment's assertion that it is not "a common word". It's a well-known word in the realm of computing, though perhaps less these days as the upcoming generation seems less inclined to learn archaic pop culture.
sincerely 1/28/2026||||
Grok has been nerd slang for a while. I bet it's in that ESR list of hacker lingo. And hell, if every company in Silicon Valley gets to name itself after something from Lord of the Rings, why can't he pay homage to an author he likes?
Fnoord 1/27/2026|||
He stole a letter, too.
tombert 1/27/2026||
That bothers me more than it should. Every single time I see a new post about Twitter, I think there's some update for X11 or X Server or something, only to be reminded that Twitter's name has been changed.
intothemild 1/27/2026|||
I very much doubt they knew much about what they were building if they didn't know this.
XCSme 1/28/2026||
I thought this was about the Prism Database ORM. Or that was Prisma?
bmaranville 1/27/2026|
Having a chatbot that can natively "speak" latex seems like it might be useful to scientists that already use it exclusively for their work. Writing papers is incredibly time-consuming for a lot of reasons, and having a helper to make quick (non-substantive) edits could be great. Of course, that's not how people will use it...

I would note that Overleaf's main value is as a collaborative authoring tool and not a great latex experience, but science is ideally a collaborative effort.
