I just think it's silly to obsess over words like that. There are many words that take on different meanings in different contexts and can be associated with different events, ideas, products, time periods, etc. Would you feel better if they named it "Polyhedron"?
You may say it's "silly to obsess", but it's like naming a product "Auschwitz" and saying "it's just a city name" -- it ignores the power of what Geoffrey N. Leech called "associative meaning" in his taxonomy of "Seven Types of Meaning" (Semantics, 2nd ed., 1989): speaking that city's name evokes images of piles of corpses of gassed, undernourished human beings, of gas-chamber walls scratched by fingernails, and of lampshades made of human skin.
[2] https://prism-pipeline.com/
[6] https://www.graphpad.com/features
[7] https://www.prismsoftware.com/
I am not sure you can make an argument of "other people are doing it too". Lots of people do things that are not in their interest (e.g. smoking, to pick the easy one).
As others mentioned, I did not have the negative connotation with the word prism either, but I'm not sure how one could check that anyway. It's not like I haven't been surprised over the years by what some other people think, so who knows... Maybe someone with experience in marketing could explain how it is done.
If they claim in a private meeting with people at the NSA that they did it as a tribute to them and a bid for partnership, who would anyone here be to say they didn't? Even if they didn't... which is only relevant because OpenAI processes an absolute shitton of data the NSA would be interested in.
https://en.wikipedia.org/wiki/Prism_(optics)
I remember the NSA Prism program, but hearing prism today I would think first of Newton, optics, and rainbows.
Most ordinary users won’t recognize the smaller products you listed, but they will recognize OpenAI and they’ll recognize Snowden/NSA adjacent references because those have seeped into mainstream culture. And even if the average user doesn’t immediately make the connection, someone in their orbit on social media almost certainly will and they’ll happily spin it into a theory for engagement.
(I expect a much higher than average share of people in academia are also part of these spaces.)
Most people don't even remember Snowden at this point.
They're of course free to choose this name. I'm just also surprised they would do so.
Large-scale technology projects that people are suspicious and anxious about. There are a lot of people anxious that AI will be used for mass surveillance by governments. So you pick the name of another project that was used for mass surveillance by a government.
Also, Nazism. But different context, years ago, so whatever I guess?
Hell, let's just call it Hitler. Different context!
Given what they do it is an insidious name. Words matter.
Coming from a company involved with sharing data with intelligence services (it's the law, you can't escape it), this is not wise at all. Unless nobody at OpenAI has heard of it.
It was one of the biggest scandals in tech 10 years ago.
They could have called it "Workspace". Clearer, more useful, and no need to use a code word (that would have been fine for internal use).
The extreme examples are an analogy that highlights the shape of the comparison with a more generally loathed / less niche example.
OpenAI is a thing with lots and lots of personal data that consumers trust OpenAI not to abuse or lose. They chose a product name that matches a US government program that secretly and illegally breached exactly that kind of trust.
Hitler's vegetarianism isn't a great analogy because vegetarianism isn't related to what made Hitler bad. Something closer might be Exxon or BP making a hair gel called "Oilspill", or DuPont making a nail polish called "Forever Chem".
They could have chosen anything but they chose one specifically matching a recent data stealing and abuse scandal.
Have you ever seen the comment section of a Snowden thread here? A lot of users here call for Snowden to be jailed, call him a Russian asset, play down the reports, etc. These are either NSA sock-puppet accounts or people who won't bite the hand that feeds them (employees of companies willing to breach their users' trust).
Edit: see my comment here in a snowden thread: https://news.ycombinator.com/item?id=46237098
Someone once said "Religion is the opium of the people." Today, give people a mobile device and some doom-scrolling social-media celebrity nonsense app, and they wouldn't notice if their own children didn't come home from school.
For me the problem was never surveillance; the problem is addiction-focused app building (plus the monopoly), and that never seemed to be a secret. Only now are there some attempts to do something about it (like Australia and France banning children from social media - which I'm not sure is feasible or effective, but it's at least more than zero).
Protesting is a poor proxy for American political engagement.
Child neglect and missing children rates are lower than they were 50 years ago.
And they did manage to get the word out. They are both relatively free now, but it is true, they both paid a price.
Idealism is that you follow your principles despite that price, not escaping/evading the consequences.
(And he is also the reason why Snowden ended up in Russia. Though it's possible that the flight plan they had was still the best one in that situation.)
I am increasingly wondering what there remains of the supposed superiority of the Western system if we're willing to compromise on everything to suit our political ends.
The point was supposed to be that the truth is worth having out there for the purpose of having an informed public, no matter how it was (potentially) obtained.
In the end, we may end up with everything we fear about China but worse infrastructure and still somehow think we're better.
What if he simply decided that the information he obtained is worth having out there no matter the source? It seems to me that you're simply upset that he dared to do so and are trying very hard to come up with a rationalization for why he's a Bad Guy(tm) for daring to turn the tables. It's a transparent and rather lackluster attempt to shift the conversation from what to who.
It was Russia, or vanish into a black site, never to be seen or heard from again.
https://en.wikipedia.org/wiki/Lie#:~:text=citation%20needed%...
Even if what you say is completely untrue (and who really knows for sure).... it creates that mental association. It's a horrible product name.
[1]: https://openai.com/index/openai-appoints-retired-us-army-gen...
Yes, imho, there is a great deal of ignorance of the actual contents of the NSA leaks.
The agitprop against Snowden as a "Russian agent" has successfully occluded the actual scandal, which is that the NSA has built a totalitarian-authoritarian apparatus that is still in wide use.
Autocrats' general hubris about their own superiority has been weaponized against them. Instead of actually addressing the issue with America's repressive military industrial complex, they kill the messenger.
There's a good chance they just asked GPT-5.2 for a name. I know for a fact that when some of the OpenAI models get stuck in the "weird" state associated with LLM psychosis, three of the things they really like talking about are spirals, fractals, and prisms. Presumably, there's some general bias toward those concepts in the weights.
(full disclosure: yes, they will be handing in PII on demand, like the same kind of deals; this is 'normal' - 2012 shows us no one gives a shit)
We haven’t forgotten… it’s mostly that we’re all jaded given that there have been zero ramifications, so what’s the use of complaining - you’re better off pushing shit up a hill.
It's a horrible name for any product coming out of a company like OpenAI. People are super sensitive to privacy and government snooping and OpenAI is a ripe target for that sort of thinking. It's a pretty bad association. You do not want your AI company to be in any way associated with government surveillance programs no matter how old they are.
I personally associate Prism with [Silverlight - Composite Web Apps With Prism](https://learn.microsoft.com/en-us/archive/msdn-magazine/2009...) due to personal reasons I don't want to talk about ;))
If it was part of their adtech systems and them dipping their toe into the enshittification pool, it would have been a legendarily tone deaf project name, but as it is, I think it's fine.
On the other hand, Overleaf appears to be open source and at least partially self-hostable, so it’s possible some of these ideas or features will be adopted there over time. Alternatively, someone might eventually manage to move a more complete LaTeX toolchain into WASM.
[1] https://www.reddit.com/r/Crixet/comments/1ptj9k9/comment/nvh...
I do self-host Overleaf which is annoying but ultimately doable if you don't want to pay the $21/mo (!).
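For anyone curious what "annoying but doable" looks like, here's a minimal sketch assuming you go through the official Overleaf Toolkit (a wrapper around a Docker-based Community Edition); exact paths and flags may differ from its current README, so treat this as an outline rather than gospel:

```shell
# Clone the Overleaf Toolkit (wrapper scripts around docker compose)
git clone https://github.com/overleaf/toolkit.git overleaf-toolkit
cd overleaf-toolkit

# Generate the default configuration files under config/
bin/init

# Start the containers (Overleaf app, MongoDB, Redis) in the background
bin/up -d

# First-time setup: create the initial admin account in the browser
# (the /launchpad path is what the toolkit docs describe)
#   http://localhost/launchpad
```

The annoying part is mostly afterwards: installing extra TeX Live packages inside the container and handling upgrades, since the Community Edition ships a slimmer image than overleaf.com runs.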
I do have to wonder for how long it will be free or even supported, though. On the one hand, remote LaTeX compiling gets expensive at scale. On the other hand, it's only a fraction of a drop in the bucket compared to OpenAI's total compute needs. But I'm hesitant to use it because I'm not convinced it'll still be around in a couple of years.
The visual editor in Overleaf isn't true WYSIWYG, but it's close enough. It feels like working in a word processor, not in a code editor. And the interface overall feels simple and modern.
(And that's just for solo usage -- it's really the collaborative stuff that turns into a game-changer.)
a lot of academics aren't super technical and don't want to deal with git workflows or syncing local environments. they just want to write their paper.
overleaf lets the whole research team work together without anyone needing to learn version control or debug their local texlive installation.
also nice for quick edits from any machine without setting anything up. the "just install it locally" advice assumes everyone's comfortable with that, but plenty of researchers treat computers as appliances lol.
Overleaf ensures that everyone looks at the same version of the document and processes the document with the same set of packages and options.
Any plans to integrate Typst anytime soon?
They’re quite open about Prism being built on top of Crixet.
Maybe we'll need to go back to some sort of proof-of-work system, i.e. only accepting physical mailed copies of manuscripts, possibly hand-written...
I actually think Prism promotes a much more responsible approach to AI writing than "copying from chatgpt" or the likes.
Exactly, and I think this is good news. Let's break it so we can fix it at last. Nothing will happen until a real crisis emerges.
And you think the Indians will not hand-write the output of LLMs?
Not that I have a better suggestion myself..
Mini paper: that future isn’t the AI replacing humans; it’s about humans drowning in cheap artifacts. New unit of measurement proposed: verification debt. Also introduces: Recursive Garbage → model collapse.
a little joke on Prism)
This appears to just be the output of LLMs itself? It credits GPT-5.2 and Gemini 3 exclusively as authors, has a public domain license (appropriate for AI output) and is only several paragraphs in length.
I feel like this means that working in any group where individuals compete against each other results in an AI vs AI content generation competition, where the human is stuck verifying/reviewing.
Not a dig on your (very sensible) comment, but now I always do a double take when I see anyone effusively approving of someone else's ideas. AI turned me into a cynical bastard :(
Also, in a world where AI output is abundant, we humans become the scarce resource: the "tools" in the system that provide some connectivity to reality (grounding) for the LLM.
"Human Verification as a Service": finally, a lucrative career where the job description is literally "read garbage all day and decide if it's authentic garbage or synthetic garbage." LinkedIn influencers will pivot to calling themselves "Organic Intelligence Validators" and charge $500/hr to squint at emails and go "yeah, a human definitely wrote this passive-aggressive Slack message."
The irony writes itself: we built machines to free us from tedious work, and now our job is being the tedious work for the machines. Full circle. Poetic even. Future historians (assuming they're still human and not just Claude with a monocle) will mark this as the moment we achieved peak civilization: where the most valuable human skill became "can confidently say whether another human was involved."
Bullish on verification miners. Bearish on whatever remains of our collective attention span.
I'm not sure I'm convinced of the benefit of lowering the barrier to entry to scientific publishing. The hard part always has been, and always will be, understanding the research context (what's been published before) and producing novel and interesting work (the underlying research). Connecting this together in a paper is indeed a challenge, and a skill that must be developed, but is really a minimal part of the process.
Maybe you get reimbursed for half as long as there are no obvious hallucinations.
While well-intentioned, I think this is just gate-keeping. There are mountains of research that result in nothing interesting whatsoever (aside from learning about what doesn't work). And all of that is still valuable knowledge!
> > who are looking to 'boost' their CV
Ultimately, this seems like a key root cause - misaligned incentives across a multi-party ecosystem. And as always, incentives tend to be deeply embedded and highly resistant to change.
For developers, academics, editors, etc., in any review-driven system the scarcity is around good human judgement, not text volume. AI doesn't remove that constraint and arguably puts more of a spotlight on the ability to separate the shit from the quality.
Unless review itself becomes cheaper or better, this just shifts work further downstream and disguises the change as "efficiency".
In education, understanding is often best demonstrated not by restating text, but by presenting the same data in another representation and establishing the right analogies and isomorphisms, as in Explorable Explanations. [1]
On the other hand, the world is now a different place as compared to when several prominent journals were founded (1869-1880 for Nature, Science, Elsevier). The tacit assumptions upon which they were founded might no longer hold in the future. The world is going to continue to change, and the publication process as it stands might need to adapt for it to be sustainable.
https://hn.algolia.com/?dateRange=pastYear&page=0&prefix=tru...
This is a space that probably needs substantial reform, much like grad school models in general (IMO).
They probably wanted: "... that I should read?" So that this is at least marketed to be more than a fake-paper generation tool.
The target audience of this tool is not academics; it's OpenAI investors.
So yes, you use it to write the paper but soon it is public knowledge anyway.
I am not sure if there is much to learn from the draft of the authors.
I'd also like to share what I saw. Since GPT-4o became a thing, everyone I know who submits academic papers in my non-English-speaking country (N > 5) has been writing papers in our native language and translating them with GPT-4o exclusively. It has been the norm for quite a while. If hallucination is such a serious problem, it has been one for a year and a half already.
[1]: https://statmodeling.stat.columbia.edu/2026/01/26/machine-le...
This could be considered in degrees.
Like when you only need a single table from another researcher's 25-page publication, you would cite it to be thorough but it wouldn't be so bad if you didn't even read very much of their other text. Perhaps not any at all.
Maybe one of the very helpful things is not just reading every reference in detail, but actually looking up every one in detail to begin with?
>slop papers will start to outcompete the real research papers.
This started to rear its ugly head when electric typewriters got more affordable.
Sometimes all it takes is faster horses and you're off to the races :\
"Grok" was a term used in my undergrad CS courses in the early 2010s. It's been a pretty common word in computing for a while now, though the current generation of young programmers and computer scientists seem not to know it as readily, so it may be falling out of fashion in those spaces.
> Groklaw was a website that covered legal news of interest to the free and open source software community. Started as a law blog on May 16, 2003, by paralegal Pamela Jones ("PJ"), it covered issues such as the SCO-Linux lawsuits, the EU antitrust case against Microsoft, and the standardization of Office Open XML.
> Its name derives from "grok", roughly meaning "to understand completely", which had previously entered geek slang.
I would note that Overleaf's main value is as a collaborative authoring tool, not as a great LaTeX experience -- but science is ideally a collaborative effort.