
Posted by Brajeshwar 1 day ago

‘Overworked, underpaid’ humans train Google’s AI(www.theguardian.com)
276 points | 140 comments
back2dafucha 23 hours ago|
Diminishing returns is an ugly business. And that's obviously where we're at: the end, not the beginning, of LLM "innovation".

Any technology that creates "Sisyphean" tasks is not worth anyone's time. That includes LLMs and "Big Data". The "herculean effort" that never ends is the proof in the pudding: the tech doesn't work.

It's like using machine learning for self-driving instead of having an actual working algorithm. You're bust.

yanis_t 1 day ago||
From my shallow understanding, it seems that human training is involved heavily in the post-training/fine-tuning stage, after the base model has been solidified already.

In that case, how is the notion of truthiness (what the model accepts as right or wrong) shaped during this stage by human beings, versus being sealed into the base model itself, i.e. truthiness deduced by the method as part of its world model?

dolphinscorpion 1 day ago||
"Google" posted a job opening. They applied for and took the job, agreeing to the posted pay and conditions. End of story. It's not up to the Guardian to decide.
xkbarkar 1 day ago||
I agree, article is pretty low quality ragebait. Not good journalism at all.
lysace 1 day ago||
It is amazing how much their quality levels have fallen during the past two decades.

I used to point to their reporting as something that my nation’s newspapers should seek to emulate.

(My nation’s newspapers have since fallen even lower.)

jimnotgym 19 hours ago||
Is it amazing? They are struggling to make money, like every other news organisation, and have to keep cutting costs to survive. Then they need as many click-throughs from social platforms as possible so they can sell at least some advertising. I would say it is inevitable.
lysace 18 hours ago||
It is inevitable that the journalistic integrity of the Guardian goes to shit?
anthonj 1 day ago||
Not so easy. What if you get hired as a physiotherapist somewhere, but on your first day you find out you will be working in a brothel?

Or you join a hospital as a nurse, but then you are asked to perform surgery as if you were a doctor?

There are serious issues outlined in the article.

lysace 23 hours ago||
This is not what the article is outlining.
anthonj 8 hours ago||
The article mentions some stories, such as the one about a lady asked to edit medical-related info without having any qualifications to evaluate its correctness.

Or the one about handling disturbing content with no prior warning and no counseling.

kerblang 1 day ago||
Are other AI companies doing the same thing? Would like to see more articles about this...
thepryz 1 day ago||
Scale AI’s entire business model was using people in developing countries to label data for training models. Once you look into it, it comes across as rather predatory.

This was one of the first links I found re: Scale’s labor practices https://techcrunch.com/2025/01/22/scale-ai-is-facing-a-third...

Here’s another: https://relationaldemocracy.medium.com/an-authoritarian-work...

lawgimenez 1 day ago|||
A couple of months ago I received a job invite for Kotlin AI trainers from the team at Upwork. I asked what the job was about and she said something like "for the opportunity to review & evaluate content for generative AI." And I'm from a developed country too.
jhbadger 1 day ago|||
Karen Hao's recent book "Empire of AI", about the rise of OpenAI, goes into detail about how people in Africa and South America were hired (and arguably exploited) for training efforts.
maltelandwehr 23 hours ago||
Can you explain the exploited part?

My understanding is they performed work and were paid for it at market rate. So just regular capitalism. Or was there more to it?

jhbadger 22 hours ago|||
According to the book they kept dropping the rates paid per item forcing people to work ridiculous 12+ hours/day just to get enough to live on, even in the low cost of living places they were in. It was like something in a cyberpunk dystopia but real.
intended 22 hours ago|||
This is a weird sentence, because it's got many assumptions baked in that pull the answers in different directions, if they have to conform with the implied definitions you are using.

Global South nations do not have the same level of judicial recourse, work-safety norms, and health infrastructure as, say, America. So people doing labelling work who then go ahead and kill themselves after getting PTSD are just costs of doing business.

This can be put under many labels, to transfer the objectionable portion to some other entity or ideology - in your case "capitalism".

That doesn't mean it is actually capitalism. In this case it's exploiting gaps in global legal infrastructure.

I used to bash capitalism happily, but it's becoming a white whale and a catch-all. We don't even have capitalism anywhere, since you can find far too many definitions for that term today.

benreesman 1 day ago|||
There's nontrivial historical precedent for this exact playbook: when a new paradigm (Lisp machines and GOFAI search, GPU backprop, softmax self-attention) is scaling fast, a lot of promises get made, a lot of national security money gets involved, and AI Summer is just balmy.

But the next paradigm breakthrough is hard to forecast, and the current paradigm's asymptote is just as hard to predict, so it's +EV to say "tomorrow" and "forever".

When the second becomes clear before the first, you turk and expert-label like it's 1988: you bridge the gap with expert labeling and compute, and pray the next paradigm breakthrough comes before you run out of money and the DoD guy stops taking your calls. AI Winter is cold.

And just like Game of Thrones, no one, and I mean no one, not Altman, not Amodei, not Allah Most Blessed, knows when the seasons in A Song of Math and Grift will change.

jkkola 1 day ago||
There's a YouTube video titled "AI is a hype-fueled dumpster fire" [0] that mentions OpenAI's shenanigans. I haven't fact checked that but I've heard enough stories to believe it.

[0] https://youtu.be/0bF_AQvHs1M?si=rpMG2CY3TxnG3EYQ

sjfaljf 19 hours ago||
How convenient: throw the economy into shambles, coerce professionals into labeling labor in an effort to make humanity obsolete. Will it work?
luke-stanley 23 hours ago||
It's strange that the Guardian mentions OpenAI's "O3" model and not GPT-5. Maybe they think o3 is SOTA still, but they should at least name it correctly, in lowercase as OpenAI does.
agigao 19 hours ago||
Isn't this misleading, to say the least?
wslh 1 day ago||
It feels like déjà vu of earlier Amazon Mechanical Turk[1] discussions[2], but with AI.

[1] https://www.mturk.com/

[2] https://tinyurl.com/4r2p39v3

throwawaysleep 16 hours ago||
I work for a few of these during meetings and such. Some are so picky about getting everything done quickly that I can’t believe the data is very valuable.
lysace 1 day ago|
with wages starting at $16 an hour for generalist raters and $21 an hour for super raters, according to workers

That’s sort of what I expect the Guardian’s UK online non-sub readers to make.

Perhaps GlobalLogic should open a subsidiary in the UK?
