(re the decline of scientific integrity / signal-to-noise ratio in science)
Uhm ... no.
I think we need to put an end to AI as it is currently used (not all of it but most of it).
We don't need more stuff; we need more quality and less of the shit.
I'm convinced many of those involved in producing LLMs are far too deep in the rabbit hole and can't see straight.
(See also: today’s WhatsApp whistleblower lawsuit.)
Perhaps, like the original PRISM programme, behind the door is a massive data harvesting operation.
Seems like they have only announced products since then, with no new model trained from scratch. Are they still having pre-training issues?