There was an idea floated of OpenAI charging a commission or royalties on new discoveries.
What kind of researcher wants to risk losing out, or getting caught up in legal issues, because of a free ChatGPT wrapper? Or am I missing something?
Maybe it's cynical, but how does the old saying go? If the service is free, you are the product.
Perhaps the goal is to hoover up research before it goes public and use it as training data. With enough training data they'll be able to rapidly identify breakthroughs and use that to pick stocks, or send their agents in to wrap up the IP, or something.
Even if y'all don't train off it, he'll find some other way.
“In one example, [Friar] pointed to drug discovery: if a pharma partner used OpenAI technology to help develop a breakthrough medicine, [OpenAI] could take a licensed portion of the drug's sales”
https://www.businessinsider.com/openai-cfo-sarah-friar-futur...
I'm sorry, but publishing is hard, and it should be hard. There is a work function to writing a paper: it takes real effort. We've been dealing with low-quality, mass-produced papers from certain regions of the planet for decades (which, it appears, are now producing decent papers too).
All this AI tooling will do is lower that effort to the point that completely automated nonsense floods in, and humans will still need to read and filter it. That is already challenging.
Looking elsewhere in society, AI tools are already being used to produce scams and phishing attacks more effective than ever before.
Whole new arenas of abuse are now rife, with the cost of producing fake pornography of real people (what should be considered a sexual abuse crime) down to mere cents.
We live in a little microcosm where we can see the benefits of AI because tech jobs are mostly about automation and making the impossible (or expensive) possible (or cheap).
I wish more people would talk about the societal issues AI is introducing. My worthless opinion is that Prism is not a good thing.
I'm not in favor of letting AI do my thinking for me. Time will tell where Prism sits.
Look at how much BS flooded psychology despite having tidy p-values and proper use of "affect" vs. "effect". None of that mattered.
Lots of players in this space.
As other top-level posters have indicated, the review portion of this is the limiting factor:
unless journal reviewers decide to adopt an entirely automated review process, they're not going to be able to keep up with what will increasingly be the most (and best) research coming out of any lab.
So whoever figures out the automated reviewer that can actually tell fact from fiction is going to win this game.
I expect that, over the long run, the answer probably isn't throwing more humans at the problem, but agreeing on some kind of constraint around autonomous reviewers.
If not that, then labs will just produce products, science will stop being done in public, and the only artifacts will be whatever is produced in the market.