Posted by meetpateltech 4 hours ago

Prism (openai.com)
195 points | 106 comments
markbao 2 hours ago|
Not an academic, but I used LaTeX for years and it doesn’t feel like what the future of publishing should use. It’s finicky and takes so much markup to do simple things. A lab manager once told me about a study showing that people who typeset in MS Word were more productive, and I can see that…
crazygringo 1 hour ago||
100% completely agreed. It's not the future, it's the past.

Typst feels more like the future: https://typst.app/

The problem is that so many journals require certain LaTeX templates so Typst often isn't an option at all. It's about network effects, and journals don't want to change their entire toolchain.

maxkfranz 1 hour ago|||
LaTeX is good for equations, and LaTeX tools produce very nice PDFs, but I wouldn't want to write in LaTeX generally either.

The main feature that's important is collaborative editing (like online Word or Google Docs). The second would be a good reference manager.

auxym 2 hours ago||
Agreed. TeX/LaTeX is very old tech. Error recovery and error messages are very bad. Developing new macros in TeX is about as fun as you'd expect developing in a 70s-era language to be (i.e. probably similar to COBOL or old Fortran).
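To illustrate with a toy example (hypothetical macro names, but the standard LaTeX2e idiom): something as mundane as an optional argument requires \@ifnextchar token-peeking and delimited \def parameters rather than any named-parameter syntax:

```latex
% Toy example: a \greet macro with an optional argument, plain LaTeX2e style.
% \makeatletter is needed because internal names contain "@".
\makeatletter
% Peek at the next token: if it's "[", dispatch with the user's argument,
% otherwise fall back to the default "[World]".
\newcommand{\greet}{\@ifnextchar[{\greet@opt}{\greet@opt[World]}}
% Delimited-parameter definition: #1 is whatever sits between the brackets.
\def\greet@opt[#1]{Hello, #1!}
\makeatother
```

Usage: \greet[Reader] expands to "Hello, Reader!" and \greet alone to "Hello, World!". Modern \newcommand with a default argument hides some of this, but anything beyond one optional argument pushes you back into this style.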

I haven't tried it yet but Typst seems like a promising replacement: https://typst.app/

WolfOliver 3 hours ago||
Check out MonsterWriter if you are concerned about the acquisition behind this tool.

It also offers LaTeX workspaces

see video: https://www.youtube.com/watch?v=feWZByHoViw

vitalnodo 2 hours ago||
With a tool like this, you could imagine an end-to-end service for restoring and modernizing old scientific books and papers: digitization, cleanup, LaTeX reformatting, collaborative or volunteer-driven workflows, OCR (like Mathpix), and side-by-side comparison with the original. That would be useful.
vessenes 2 hours ago|
Don’t forget replication!
olivia-banks 2 hours ago||
I'm curious how you think AI would aid in this.
vessenes 1 hour ago|||
Tao’s doing a lot of related work in mathematics, so I can say that, first of all, literature search is a clearly valuable function that frontier models offer.

Past that, a frontier LLM can do a lot of critiquing, a good amount of experiment design, a check on statistical significance/power claims, kibitzing on methodology… it can likely suggest experiments to verify or disprove. These all seem like pretty useful functions to provide to a group of scientists to me.

noitpmeder 2 hours ago|||
Replicate this <slop>

Ok! Here's <more slop>

olivia-banks 1 hour ago||
I don't think you understand what replication means in this context.
sbszllr 2 hours ago||
The quality and usefulness of it aside, the primary question is: are they still collecting chats for training data? If so, it limits how comfortable people would be, and sometimes even whether they are permitted, to work on their yet-to-be-public work using this tool.
AuthAuth 2 hours ago||
This does way less than I'd expect. Converting images to TikZ is nice, but some of the other applications demonstrated were horrible. There is no way anyone should be using AI to cite.
MattDaEskimo 2 hours ago||
What's the goal here?

There was an idea of OpenAI charging commission or royalties on new discoveries.

What kind of researcher wants to risk losing rights to their work, or getting caught up in legal issues, because of a free ChatGPT wrapper? Or am I missing something?

engineer_22 1 hour ago|
> Prism is free to use, and anyone with a ChatGPT account can start writing immediately.

Maybe it's cynical, but how does the old saying go? If the service is free, you are the product.

Perhaps, the goal is to hoover up research before it goes public. Then they use it for training data. With enough training data they'll be able to rapidly identify breakthroughs and use that to pick stocks or send their agents to wrap up the IP or something.

AlexCoventry 1 hour ago||
I don't see the use. You can easily do everything shown in the Prism intro video with ChatGPT already. Is it meant to be an Overleaf killer?
legitster 2 hours ago||
It's interesting how quickly the quest for the "Everything AI" has shifted. It's much more efficient to build use-case specific LLMs that can solve a limited set of problems much more deeply than one that tries to do everything well.

I've noticed this already with Claude. Claude is so good at code and technical questions... but frankly it's unimpressive at nearly anything else I have asked it to do. Anthropic would probably be better off putting all of their eggs in that one basket that they are good at.

All the more reason that the quest for AGI is a pipe dream. The future is going to be very divergent AI/LLM applications - each marketed and developed around a specific target audience, and priced respectively according to value.

falcor84 22 minutes ago|
I don't get this argument. Our nervous system is also heterogeneous; why wouldn't AGI be based on an "executive functions" AI that manages per-function AIs?
hit8run 1 hour ago|
They are really desperate now, right?