I've noticed this already with Claude. Claude is so good at code and technical questions... but frankly it's unimpressive at nearly anything else I have asked it to do. Anthropic would probably be better off putting all their eggs in the one basket they're good at.
All the more reason that the quest for AGI is a pipe dream. The future is going to be highly divergent AI/LLM applications, each marketed and developed around a specific target audience and priced according to the value it delivers.
This is all pageantry.
"I know nothing but had an idea and did some work. I have no clue whether this question has been explored or settled one way or another. But here's my new paper claiming to be an incremental improvement on... whatever the previous state of understanding was. I wouldn't know, I haven't read up on it yet. Too many papers to write."
We removed the authorship of a former co-author on a paper I'm on because his workflow was essentially this, with AI-generated text, and a not-insignificant amount of straight-up plagiarism.
E.g. “cite that paper from John Doe on lorem ipsum, but make sure it’s the 2022 update article that I cited in one of my other recent articles, not the original article”
Didn't even open a single one of the papers to look at them! Just declared them not relevant without even opening them.
I thought this was introduced by the NSA some time ago.
Fuck A.I. and the collaborators creating it. They've sold out the human race.
At the end of the day, it's all about the incentives. Can we have a world where we incentivize finding the truth rather than just publishing and getting citations?