
Posted by meetpateltech 6 hours ago

Prism (openai.com)
268 points | 168 comments | page 3
sbszllr 4 hours ago|
The quality and usefulness of it aside, the primary question is: are they still collecting chats for training data? If so, it limits how comfortable, and sometimes even how permitted, people would be working on their yet-to-be-public research using this tool.
MattDaEskimo 4 hours ago||
What's the goal here?

There was an idea of OpenAI charging commission or royalties on new discoveries.

What kind of researcher wants to risk losing their work, or getting caught up in legal issues, because of a free ChatGPT wrapper? Or am I missing something?

engineer_22 3 hours ago|
> Prism is free to use, and anyone with a ChatGPT account can start writing immediately.

Maybe it's cynical, but how does the old saying go? If the service is free, you are the product.

Perhaps, the goal is to hoover up research before it goes public. Then they use it for training data. With enough training data they'll be able to rapidly identify breakthroughs and use that to pick stocks or send their agents to wrap up the IP or something.

AuthAuth 4 hours ago||
This does way less than I'd expect. Converting images to TikZ is nice, but some of the other applications demonstrated were horrible. There is no way anyone should be using AI to generate citations.
khalic 4 hours ago||
All your papers are belong to us
vicapow 4 hours ago|
> Users have full control over whether their data is used to help improve our models
chairhairair 1 hour ago|||
Never trust Sam Altman.

Even if y'all don't train off it, he'll find some other way.

“In one example, [Friar] pointed to drug discovery: if a pharma partner used OpenAI technology to help develop a breakthrough medicine, [OpenAI] could take a licensed portion of the drug's sales”

https://www.businessinsider.com/openai-cfo-sarah-friar-futur...

danelski 51 minutes ago|||
Only the defaults matter.
asadm 2 hours ago||
Disappointing, actually. What I need is a research "management" tool that lets me put in relevant citations, but also goes through the ENTIRE arXiv or Google Scholar and connects ideas, or finds novel ideas in random fields that somehow relate to what I am trying to solve.
jeffybefffy519 3 hours ago||
I postulate that 90% of the reason OpenAI now has "variants" for different use cases is just to capture training data...
flumpcakes 2 hours ago||
This is terrible for science.

I'm sorry, but publishing is hard, and it should be hard. There is a work function that requires effort to write a paper. We've been dealing with low quality mass-produced papers from certain regions of the planet for decades (which, it appears, are now producing decent papers too).

All this AI tooling will do is lower the effort to the point that complete automated nonsense will now flood in and it will need to be read and filtered by humans. This is already challenging.

Looking elsewhere in society, AI tools are already being used to produce scams and phishing attacks more effective than ever before.

Whole new arenas of abuse are now rife, with the cost of producing fake pornography of real people (what should be considered sexual abuse crime) at mere cents.

We live in a little microcosm where we can see the benefits of AI because tech jobs are mostly about automation and making the impossible (or expensive) possible (or cheap).

I wish more people would talk about the societal issues AI is introducing. My worthless opinion is that Prism is not a good thing.

jimmar 2 hours ago||
I've wasted hours of my life trying to get LaTeX to format my journal articles to different journals' specifications. That's tedious typesetting that wastes my time. I'm all for AI tools that help me produce my thoughts with as little friction as possible.
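(The reformatting pain above usually boils down to swapping the document class and then reworking everything around the body text. A minimal sketch; the paper and author here are hypothetical, though `IEEEtran` is a real journal class:)

```latex
% Version A: generic single-column manuscript.
\documentclass[12pt]{article}

% Version B: the same paper for an IEEE journal -- a different class,
% which in practice also means different author blocks, abstract
% placement, and bibliography style:
% \documentclass[journal]{IEEEtran}

\title{A Hypothetical Paper}
\author{A.~Researcher}

\begin{document}
\maketitle
\section{Introduction}
The body text is unchanged between venues; the tedium is in
everything around it.
\end{document}
```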

I'm not in favor of letting AI do my thinking for me. Time will tell where Prism sits.

flumpcakes 2 hours ago||
The Prism video showed more than just typesetting. If OpenAI released tools that only helped you typeset or create diagrams from written text, that would be fine. But it doesn't stop there; it writes papers for you. Scientists and publishers really do not need the onslaught of slop this will create. How can we even trust qualifications in the post-AI world, where cheating is rampant at universities?
PlatoIsADisease 2 hours ago||
I just want replication in science. I don't care at all how difficult it is to write the paper. Heck, if we could spend more effort on data collection and less on communication, that sounds like a win.

Look at how much BS flooded psychology while still having pretty p-values and proper use of "affect" vs. "effect". None of that mattered.

Onavo 1 hour ago||
It would be interesting to see how they would compete with the incumbents like

https://Elicit.com

https://Consensus.app

https://Scite.ai

https://Scispace.com

https://Scienceos.ai

https://Undermind.ai

Lots of players in this space.

AndrewKemendo 1 hour ago|
I genuinely don’t see scientific journals and conferences continuing to last in this new world of autonomous agents, at least the same way that they used to be.

As other top-level posters have indicated, the review portion of this is the limiting factor.

Unless journal reviewers adopt an entirely automated review process, they're not going to be able to keep up with what will increasingly be the most, and best, research coming out of any lab.

So whoever figures out an automated reviewer that can actually tell fact from fiction is going to win this game.

Over the long run, I expect that's probably not going to mean throwing more humans at the problem, but agreeing on some kind of constraint around autonomous reviewers.

If not that, then labs will just produce products, science will stop being public, and the only artifacts will be whatever is produced in the market.
