I'm sorry, but publishing is hard, and it should be hard. There is a work function to writing a paper: it requires genuine effort. We've been dealing with low-quality, mass-produced papers from certain regions of the planet for decades (regions which, it appears, are now producing decent papers too).
All this AI tooling will do is lower that effort to the point that fully automated nonsense floods in, and all of it will still need to be read and filtered by humans. That is already challenging.
Looking elsewhere in society, AI tools are already being used to produce scams and phishing attacks more effective than ever before.
Whole new arenas of abuse are now rife, with the cost of producing fake pornography of real people (which should be considered a sexual abuse crime) down to mere cents.
We live in a little microcosm where we can see the benefits of AI because tech jobs are mostly about automation and making the impossible (or expensive) possible (or cheap).
I wish more people would talk about the societal issues AI is introducing. My worthless opinion is that Prism is not a good thing.
I'm not in favor of letting AI do my thinking for me. Time will tell where Prism sits.
Lessons are learned the hard way. I invite the slop - the more the merrier. It will lead to a reduction in internet activity as people puke from the slop, and then we chart our way back to the right path.
It is what it is. Humans.
Look at how much BS flooded psychology while dressed up in pretty p-values and the proper use of "affect" vs. "effect". None of that mattered.
I would not like to be a publisher right now, facing the onslaught of thousands and thousands of slop-generated articles and trying to find reviewers for them all.
Great, so now I'll have to sift through a bunch of ostensibly legitimate (really just legitimate-looking) non-peer-reviewed whitepapers, where if I forget to check the peer-review status even once, I risk wasting a large amount of time reading gobbledygook. Thanks, OpenAI?