The example just reinforces the whole concept of LLM slop overwhelming preprint archives. I found it off-putting.
What a bizarre thing to say! I'm guessing it's slop. Makes it hard to trust anything the article claims.
Uhm ... no.
I think we need to put an end to AI as it is currently used (not all of it but most of it).
We don't need more stuff - we need more quality and less of the shit stuff.
I'm convinced many involved in the production of LLM models are far too deep in the rabbit hole and can't see straight.