As other top-level posters have indicated, the review portion of this is the limiting factor.
Unless journals decide to utilize an entirely automated review process, they're not gonna be able to keep up with what will increasingly be the most and best research coming out of any lab.
So whoever figures out the automated reviewer that can actually tell fact from fiction is going to win this game.
I expect that in the long run, the answer is probably not throwing more humans at the problem, but agreeing on some kind of constraint around autonomous reviewers.
If not that, then labs will also produce products, science will stop being done in public, and the only artifacts will be whatever is produced in the market.
Errr sure. Sounds easy when you write it down. I highly doubt such a thing will ever exist.
LLMs are undeniably great for interactive discussion with content IF you actually are up-to-date with the historical context of a field, the current "state-of-the-art", and have, at least, a subjective opinion on the likely trajectories for future experimentation and innovation.
But agents, at best, will just regurgitate ideas and experiments that have already been performed (by sampling from a model trained on most existing research literature), and, at worst, inundate the literature with slop that lacks relevant context and, to the detriment of future LLMs, pollutes future training data. As of now, I am leaning towards the "worst" case.
And, just to help with the facts, your last comment is unfortunately quite inaccurate. Science is one of the best government investments: for every $1.00 given to the NIH in the US, an estimated $2.56 of economic activity is generated. Plus, science isn't merely a public venture. The large tech labs have huge R&D budgets because the output from research can lead to exponential returns on investment.
I would wager he's not - he seems to post with a lot of bluster and links to some paper he wrote (that nobody cares about).
EDIT: Fixed :)
Of course, my scientific and mathematical research is done in isolation, so I'm not wanting for collaborative features. Still, I'm kind of interested to see how this shakes out; we're going to need to see OpenAI really step it up against Claude Opus if they want to be a leader in this space.
EDIT: as corrected by comment, Prisma is not Vercel, but ©2026 Prisma Data, Inc. -- curiosity still persists(?)
A comparison that comes to mind is the n8n workflow-type product they put out before. n8n takes setup. Proofreading, asking for more relevant papers, converting pictures to LaTeX code, etc. don't take any setup. People do this with or without this tool almost identically.
The reason? I can give you the full source for Sam Altman:
while (alive) { RaiseCapital(); }
That is the full extent of Altman. :)
FWIW, Google Scholar has a fairly compelling natural-language search tool, too.
Even if y'all don't train off it, he'll find some other way.
“In one example, [Friar] pointed to drug discovery: if a pharma partner used OpenAI technology to help develop a breakthrough medicine, [OpenAI] could take a licensed portion of the drug's sales”
https://www.businessinsider.com/openai-cfo-sarah-friar-futur...