Posted by meetpateltech 9 hours ago

Prism (openai.com)
430 points | 264 comments
andrepd 4 hours ago|
"Chatgpt writes scientific papers" is somehow being advertised as a good thing. What is there even left to say?
epolanski 4 hours ago||
Not gonna lie, I cringed when it asked to insert citations.

Like, what's the point?

You cite stuff because you literally talk about it in the paper. The expectation is that you read it and that it influenced your work.

As someone who's been a researcher in the past, with 3 papers published in high impact journals (in chemistry), I'm beyond appalled.

Let me explain how scientific publishing works to people out of the loop:

1. Science is an insanely huge domain. As soon as you drift into any niche topic, the number of reviewers with the capability to understand what you're talking about drops quickly to near zero. Want to speak about properties of helicoidal peptides in the context of electricity transmission? Small club. Want to talk about some advanced math involving Fourier transforms in the context of ML? Bigger, but still a small club. By small, I mean fewer than a dozen people on the planet, likely less, with the expertise to properly judge. It doesn't matter what the topic is; at the elite level required to really understand what's going on and catch errors or BS, it's very small clubs.

2. The people in those small clubs are already stretched thin. Virtually all of them run labs, so they are already bogged down following their own research, fundraising, and coping with teaching duties (which they generally despise; most good scientists are barely more than mediocre professors, and they already have huge backlogs).

3. With AI this is a disaster. If having to review slop for your BS internal tool at your software job was already bad, imagine having to review slop in highly technical scientific papers.

4. The good? Because these clubs are relatively small, people pushing slop will quickly find their academic opportunities even more limited. So the incentives for proper work are hopefully there. But if Asian researchers (yes, no offense) were already spamming half the world's papers with cheated slop (non-reproducible experiments) in a desperate bid to publish, I can't imagine now.

bonsai_spool 2 hours ago||
> But if Asian researchers (yes, no offense) were already spamming half the world's papers with cheated slop (non-reproducible experiments) in a desperate bid to publish, I can't imagine now.

Hmm, I follow the argument, but it's inconsistent with your assertion that there is going to be an incentive for 'proper work' over time. Anecdotally, I think the median quality of papers from middle- and top-tier Chinese universities is improving (your comment about 'Asian researchers' ignores that Japan, South Korea, and Taiwan have established research programs, at least in biology).

SoKamil 3 hours ago||
It's as if not only the technology is to blame, but also the culture and incentives of the modern world.

The urge to cheat in order to get a job, a promotion, approval. The urge to do stuff you are not even interested in, just to look good on a resume. And to some extent I feel sorry for these people. At the end of the day you have to pay your bills.

epolanski 3 hours ago||
This isn't about paying your bills, but about having a chance of becoming a full-time researcher or professor in academia, which is obviously the ideal career path for someone interested in science.

All those people can go work for private companies, but few will do so as scientists rather than technicians or QA.

legitster 7 hours ago||
It's interesting how quickly the quest for the "Everything AI" has shifted. It's much more efficient to build use-case-specific LLMs that can solve a limited set of problems much more deeply than one that tries to do everything well.

I've noticed this already with Claude. Claude is so good at code and technical questions... but frankly it's unimpressive at nearly anything else I have asked it to do. Anthropic would probably be better off putting all of their eggs in that one basket that they are good at.

All the more reason that the quest for AGI is a pipe dream. The future is going to be very divergent AI/LLM applications, each marketed and developed around a specific target audience, and priced according to value.

falcor84 4 hours ago|
I don't get this argument. Our nervous system is also heterogeneous; why wouldn't AGI be based on an "executive functions" AI that manages per-function AIs?
oytmeal 5 hours ago||
Some things are worth doing the "hard way".
falcor84 5 hours ago|
Reminds me of that dystopian virtual sex scene in Demolition Man (slightly nsfw) - https://youtu.be/E3yARIfDJrY
ai_critic 7 hours ago||
Anybody else notice that half the video was just finding papers to decorate the bibliography with? Not like "find me more papers I should read and consider", but "find papers that are relevant that I should cite--okay, just add those".

This is all pageantry.

sfink 5 hours ago||
Yes. That part of the video was straight-up "here's how to automate academic fraud". Those papers could just as easily negate one of your assumptions. What even is research if it doesn't actually use the works it cites?

"I know nothing but had an idea and did some work. I have no clue whether this question has been explored or settled one way or another. But here's my new paper claiming to be an incremental improvement on... whatever the previous state of understanding was. I wouldn't know, I haven't read up on it yet. Too many papers to write."

renyicircle 6 hours ago|||
It's as if it's marketed to the students who have been using ChatGPT for the last few years to pass courses and now need to throw together a bachelor's thesis. Bibliography and proper citation requirements are a pain.
pfisherman 5 hours ago|||
That is such a bummer. At the time, it was annoying and I groused and grumbled about it; but in hindsight my reviewers pointed me toward some good articles, and I am better for having read them.
olivia-banks 6 hours ago|||
I agree with this. This problem is only going to get worse once these people enter academia and face needing to publish.
olivia-banks 6 hours ago|||
I've noticed this pattern, and it really drives me nuts. You should really be doing a comprehensive literature review before starting any sort of review or research paper.

We removed the authorship of a former co-author on a paper I'm on because his workflow was essentially this, with AI-generated text, and a not-insignificant amount of straight-up plagiarism.

NewsaHackO 5 hours ago||
There is definitely a difference between how senior researchers and students go about making publications. Students basically get told what topic to write a paper on or prepare data for, so they work backwards: they write the paper (possibly researching some information along the way), then add references because they know they have to. For actual researchers, it would be a complete waste of time and funding to start a project on a question that has already been answered (and something that the grant reviewers are going to know has already been explored), so in order to not waste their own time, they have to do what you said and actually conduct a comprehensive literature review before even starting the work.
adverbly 5 hours ago|||
I chuckled at that part too!

Didn't even open a single one of the papers to look at them! Just said that one is not relevant without even opening it.

black_puppydog 6 hours ago|||
Plus, this practice (just inserting AI-proposed citations/sources) is what has recently been behind some very embarrassing "editing" mistakes, notably in reports from public institutions. Now OpenAI lets us do pageantry even faster! <3
verdverm 6 hours ago|||
It's all performance over practice at this point. Look to the current US administration as the barometer by which many are measuring their public perception.
teaearlgraycold 6 hours ago|||
The hand-drawn-diagram-to-LaTeX demo is a little embarrassing. If you load up Prism and create your first blank project you can see the image. It looks like it's actually a LaTeX rendering of a diagram, rendered in a hand-drawn style and then overlaid on a very clean image of a napkin. So you've proven that you can go from a rasterized LaTeX diagram back to equivalent LaTeX code. Interesting, but it probably will not hold up when it meets real-world use cases.
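For context, that hand-drawn look is trivial to produce directly in LaTeX. Here's a minimal TikZ sketch (node labels made up, not what Prism actually generates) using the decorations.pathmorphing library to jitter the paths:

    \documentclass[tikz]{standalone}
    \usetikzlibrary{decorations.pathmorphing}
    \begin{document}
    \begin{tikzpicture}[
        % "random steps" perturbs every path so it looks hand-sketched
        sketch/.style={decorate,
          decoration={random steps, segment length=4pt, amplitude=1pt}}
      ]
      \node[draw, sketch, rounded corners] (a) at (0,0) {input};
      \node[draw, sketch, rounded corners] (b) at (3,0) {model};
      \draw[sketch, ->] (a) -- (b); % jittered arrow between the boxes
    \end{tikzpicture}
    \end{document}

Rasterize that, composite it over a napkin photo, and you get exactly the kind of image in the demo.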
thesuitonym 6 hours ago||
You may notice that this is the way paper writing works in undergraduate courses. It's just another in a long line of examples of MBA tech bros gleaning an extremely surface-level understanding of a topic, then deciding they're experts.
0dayman 7 hours ago||
In the end we're going to end up with papers written by AI, proofread by AI... summarized for readers by AI. I think this is just for them to remain relevant and be seen as still pushing something out.
falcor84 4 hours ago|
You're assuming a world where humans are still needed to read the papers. I'm more worried about a future world where AIs do all of the work of progressing science and humans just become bystanders.
drusepth 2 hours ago||
Why are you worried about that world? Is it because you expect science to progress too fast, or too slow?
falcor84 43 minutes ago||
Too fast. It's already coding too fast for us to follow, and from what I hear, it's doing incredible work in drug discovery. I don't see any barrier to it getting faster and faster, and with proper testing and tooling, getting more and more reliable, until the role that humans play in scientific advancement becomes at best akin to that of managers of sports teams.
wasmainiac 5 hours ago||
The state of publishing in academia was already a dumpster fire; why lower the friction further? It's not like writing was the hard part. Give it two years max and we will see hallucinations citing hallucinations, with independent reproducibility out the window.
falcor84 5 hours ago|
That's one scenario, but I also see a potential scenario where this integration makes it easier to manage the full "chain of evidence" for claimed results, as well as replication studies and discovered issues, in order to then make it easier to invalidate results recursively.

At the end of the day, it's all about the incentives. Can we have a world where we incentivize finding the truth rather than just publishing and getting citations?
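As a toy illustration of the recursive invalidation I have in mind (entirely hypothetical; the papers and helper names are made up), here's a short Python sketch over a citation graph:

    from collections import defaultdict

    # evidence -> results that rest on it
    deps = defaultdict(set)

    def cite(evidence: str, claim: str) -> None:
        deps[evidence].add(claim)

    def invalidate(paper: str, flagged=None) -> set:
        """Collect every downstream result that transitively rests on `paper`."""
        flagged = set() if flagged is None else flagged
        for dependent in deps[paper]:
            if dependent not in flagged:
                flagged.add(dependent)
                invalidate(dependent, flagged)
        return flagged

    cite("A", "B")  # B's result rests on A
    cite("B", "C")  # C's result rests on B
    print(invalidate("A"))  # {'B', 'C'}: both need re-review if A is retracted

With the full chain of evidence machine-readable, a retraction could flag everything downstream automatically instead of lingering for years.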

hulitu 7 hours ago||
> Introducing Prism: Accelerating science writing and collaboration with AI.

I thought this was introduced by the NSA some time ago.

webdoodle 40 minutes ago|
Lol, yep. Now with enhanced A.I. terrorist tracking...

Fuck A.I. and the collaborators creating it. They've sold out the human race.

zb3 4 hours ago|
Is this the product where OpenAI will (soon) take a profit share from inventions made there?