
Posted by SEJeff 17 hours ago

I won a championship that doesn't exist (ron.stoner.com)
185 points | 107 comments | page 3
cemoktra 2 hours ago|
I'm now thinking about creating a GitHub repo that contains nonsense code solutions to many problems. If that gets stars and many forks, it could have an effect.
CrzyLngPwd 16 hours ago||
So it's trivial for an individual to poison the LLMs, but imagine what a state with billions of American dollars could achieve.

We can easily look ahead a few years and see how people will rely on the LLMs to be a source of truth in the same way people looked at Google that way, or newspapers.

Rewriting history has been happening for a while, and with LLMs being the one-stop shop for guidance and truth, the rewrite will be complete.

Doubly so since most people see these things as artificial intelligence, and soon to be superintelligence...so how can they be wrong?

wodenokoto 4 hours ago||
> Trust Laundering

> This is the part that really matters.

I can't tell if this is slop or parody!

Havoc 16 hours ago||
Like a FIFA peace prize?
standeven 17 hours ago||
I've had LLMs regurgitate satire as fact many, many times.
duskwuff 16 hours ago|
[dead]
poglet 15 hours ago||
I made a post on Reddit asking for help with a TV, in which I had made some (likely incorrect) technical assumptions about the issue. Several years later I asked an LLM about the TV, and it used my own post as a citation to tell me what was wrong with it.

I am paranoid that this is happening every time I ask a LLM for a product recommendation or a shop recommendation. In the same way as SEO, anyone wanting to sell or convince needs to do as much as they can to influence the LLM.

cityofdelusion 13 hours ago|
This is becoming a problem real fast. I asked an LLM to find me some reasonable tank-fill inkjet printers with good ratings. It did some research and linked some Reddit threads as proof. The results looked fishy to me, so I cross-checked against prosumer printer review sites, and the suggested models were recognized as junk with very poor print quality. Not sure why the LLM rated random redditors higher than, say, printer SMEs. I feel like I dodged a bullet.
NooneAtAll3 12 hours ago||
so it's just https://xkcd.com/1958/
shevy-java 16 hours ago||
So like Frank Dux! Despite what the epilogue of the movie Bloodsport claims, he didn't actually do that.

It's almost like he was a better Chuck Norris than Chuck Norris. By his own ... testimony ...

nonameiguess 16 hours ago||
Pales in comparison to what Frank Dux and Frank Abagnale convinced much of the world they did, with no evidence other than their own stories. Who knows how much of recorded and believed history is complete bullshit? Not to get too far into sacred territory, but claims around Siddhartha Gautama, Jesus Christ, and the Prophet Muhammad are quite a bit less plausible than the legends of Ragnar Lodbrok or the tales of Jonathan Swift, but nonetheless widely believed.
adornKey 2 hours ago|
Good point. Also, most humans seem to have no problem believing even stories that are self-contradictory. Philosophers across the ages have often observed that the situation with the human mind and reasoning is almost hopeless.

The news here is that AI puts too much trust in the internet. The first time I allowed tool-calling, it started googling up some nonsense instead of thinking... But I think it's at least possible for the AI to evaluate the quality of a source: you just have to ask for an analysis, and you'll get a reasonable evaluation. With humans, something like that just doesn't work; they'll get aggressive or might even start throwing bananas...

alex-yost 6 hours ago|
[flagged]