Posted by mustaphah 6 days ago
I outright stopped using Facebook.
We are doomed if AI is allowed to punish us.
Now obviously this won’t stop with private entities; state and federal law enforcement are gung-ho to leverage these sorts of systems and have been for ages. It doesn’t help that the US specifically is moving in a direction that promotes such authoritarian policies.
Medical insurance is quickly becoming a simple scam where you are forced to pay a private entity that refuses to ever perform its function.
They haven’t approved a single insurance claim I’ve submitted without my calling and fighting it out with them. Each rejection letter looks plausible, although often nonsensical given the situation.
Medicine itself is very first-world. But medical insurance is one of those "worse than developing country" things. The fact that Americans need medical insurance at all is appalling to many countries, first world and otherwise.
And of course, by funny I mean "I can only laugh otherwise I'd cry"
And also military, though I'm not sure if that's something to be proud of.
Then you simply use the services of another private company. There is no particular danger here; after all, private companies provide services to people because it is profitable for them to do so.
- There is real competition. It's less and less the case for many important things, such as food, accommodations, health, etc.
- Companies pay a price for misbehaving that is much higher than what they got from misbehaving. Also less and less the case, thanks to lobbying, huge law firms, corruption, etc.
- The cost of switching is fair. Moving to another place is very expensive. Doing it several times in a row is rarely possible for most people.
- The bad practices are not already generalized across the whole industry. In IT, tracking is, spying is, and preventing you from managing your own device is more and more trendy.
Basically, this view you are presenting is increasingly naive and even dangerous for any citizen practicing it.
If this kind of low-quality AI moderation is the future, I'm not sure if these major platforms will even remain usable.
I suspect sites like Reddit don't care about a false-positive rate of a few percent. They fail to consider that bot farmers literally do not care (they'll just make another free account), while genuine users who are falsely actioned will have their attitude towards the site turn significantly negative.
Don't worry, Reddit's day of reckoning comes when the advertisers figure out what percent of Reddit's traffic that they're paying to serve ads to are just bots.
This is surreptitious jamming of communications at levels that constitute and exceed thresholds for consideration as irregular warfare.
Genuine users no longer matter, only the user counts which are programmatically driven to distort reflected appraisal. The users are repressed and demoralized because of such false actions, and the platform has no solution because regulation failed to act at a time they could have changed these outcomes.
What comes later will simply be comparable to why "triage" is done on the battlefield.
Adtech is just a gloriously indirect means of money laundering in fiat money-printing environments. Credit/debt being offered, when it is unbacked by proper reserves, is money-printing.
edit: This has definitely soured my already poor opinion of reddit. I mostly post there about video games, or to help people in /r/buildapc or /r/askculinary. I think I'd rather help people somewhere I'm not going to get blackholed because an AI misinterpreted my comments.
Check out this post [1] in which the post includes part of the LLM response ("This kind of story involves classic AITA themes: family drama, boundary-setting, and a “big event” setting, which typically generates a lot of engagement and differing opinions.") and almost no commenter points this out. Hilarious if it weren't so bleak.
1: https://www.rareddit.com/r/AITAH/comments/1ft3bt6/aita_for_n... (using rareddit because it was eventually deleted)
If there's no literacy, there is no critical thinking.
The only solution is to deliver high quality education to all folks and create engaging environments for it to be delivered.
Ultimately it comes down to influencing folks to think deeper about what's going on around them.
Most of the people between the age of 13-30ish right now are kinda screwed and pretty much a write off imo.
No it won't, we'll all have to upload our IDs and videos of our faces just to register or use Reddit or any social media. They will know who is a real monetizable user or not.
But I don't have any alt accounts...??? Appeal process is a joke. I just opted to delete my 12 year old account instead and have stopped going there.
Oh well, probably time for them to go under and be reborn anyway. The default subs and front page have been garbage for some time.
They IP-banned and hardware-banned my device, it's crazy! Any appeal is auto-rejected and I can't make new accounts.
The other thing is that it is simply a complete waste of time. Reading books, working on projects, or otherwise interacting with people in the real world beats commenting on pop culture or news or whatever. We don't have much time on Earth, and I am not sure I want to keep spending so much of it in cyberspace.
https://sustainableviews.substack.com/p/the-day-i-kissed-com...
I'm not sure if they actually got taken over by private equity, but they acted like it since about a year before the third-party app tantrum.
Real moderation actions should not be taken without human input and should always be appealable, even if the appeal is just another mod looking at it to see if they agree.
You have malevolent third-party bots taking advantage of poor moderation, conflating pairs of identical or similar words used in completely different contexts, to silence communication.
For example, the Reddit AI bots consider "ricing" to be the same as "rice boy". The latter definitely is pejorative, but the former is not.
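Mechanically, my guess is it's crude stem-level keyword matching. Here's a toy Python sketch of how that produces exactly this false positive (purely hypothetical; the stemmer and blocklist are invented, this is not Reddit's actual pipeline):

    import re

    def toy_stem(word: str) -> str:
        # Crude Porter-style step: strip "ing", then restore a trailing "e"
        # on short consonant-vowel-consonant stems ("ric" -> "rice").
        if word.endswith("ing"):
            word = word[:-3]
            if (len(word) == 3 and word[0] not in "aeiou"
                    and word[1] in "aeiou" and word[2] not in "aeiou"):
                word += "e"
        return word

    BLOCKED = {"rice"}  # lifted from the pejorative phrase "rice boy"

    def flagged(comment: str) -> list[str]:
        words = re.findall(r"[a-z]+", comment.lower())
        return [w for w in words if toy_stem(w) in BLOCKED]

    print(flagged("I'm ricing my Linux shell, check it out"))  # ['ricing'] - false positive
    print(flagged("look at that rice boy"))                    # ['rice']   - actual slur

Once both uses collapse onto the same stem, the filter has no way to tell benign jargon from a slur without actually reading the context.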
Just wild and absolutely crazy-making that this is even allowed, since communication is the primary means to inflict compulsion and torture these days.
Intolerable acts without due process or a rule of law lead to only one possible outcome. Coercion isn't new, but the stupid people are trying their hand for another bite at the apple.
The major platforms will not remain usable because eventually you get this hollowing out of meaning, and this behavior will either drive away all your rational intelligent contributors, or lead to accelerated failures such as evaporative cooling in the social networks. People use things because they provide some amount of value. When that stops being the case, the value disappears not overnight, but within a few months.
Just take a look at the linuxquestions subreddit since the mod exodus. It has an automated trickle of the same questions that never really get sufficiently answered. It's all slop.
All the experienced people who previously shared their knowledge as charity have moved on, driven out by caustic harassment and the lack of proper moderation to prevent it. The mod list now even hides who the mods are, so people who have had action taken against them can't name, in an appeal to the Reddit administrators, the specific moderator who acted like a fascist dictator incapable of the basic reading comprehension common to grade schoolers (AI).
One source claimed "rice" was race-inspired (some custom, I forget), while the other claimed a link with Asian street racing.
I'm speculating further, but the imports were cheap and had a thriving aftermarket of bolt-on parts, e.g. body and turbo kits. The low barrier to entry afforded anybody the opportunity to play. "Ricing" was probably a pejorative issued by domestic enthusiasts that was adopted ironically by Asian-import enthusiasts. There was a lot of diversity, from people who would bolt body kits onto clapped-out Civics to people pushing 700hp with extensively tuned cars with no adornments. I think "ricing" in particular referred to the more aesthetically motivated end of the crowd.
This was later adopted by computer enthusiasts who like to add embellishments to their desktops: things like Rainmeter/RocketDock, Windows/Linux skins, etc.
<Victim> "I'm ricing my Linux Shell, check it out." <Bot> That's Racist!
<Bot Brigade> Moderator this person is violating your rules and being racist!
<Moderator> I'm just using AI to determine this. <Bot Brigade> Great! Now they can't contribute. Let's find another.
TL;DR: Words have specific meanings, and a growing number of words have been corrupted purposefully to prevent communication, to the detriment of all. You get the same ultimate outcomes when people do this as with any other false claim: abuses pile up until, in the absence of functioning non-violent conflict resolution, violence forces the system to reform.
Have you noticed that your implication is circular based on the indefinite assumption (foregone conclusion) that the two are linked (tightly coupled)?
You use a lot of ambiguous manipulative language and structure. Doing that makes any reasonable person think you are either a bot, or a malicious actor.
"hung" means to "suspend", so the process is suspended
I’m not sure that AI would necessarily make that mistake, but a semiliterate mod very much could.
I think the real issue is the absolute impossibility of appeal. This is a big problem for outfits like Google or Microsoft, where stories of businesses being shut down by false-positive bans are fairly common.
In my experience, on the other hand, Apple has always been responsive to appeal. I have released a lot of apps, and have had fairly frequent rejections. The process is annoying, because they seldom tell you exactly what caused the rejection, but I usually figure it out, after one or two exchanges. They are almost always word-choice issues, but not bad words. Rather, they don’t like stuff that can step on their brands.
I once had an app rejected, because it had the word “Finder” in its name (it was an app for finding things).
The annoying thing was that the first rejection said it was because the app was a simple re-skinning of a Web site. I’m positive that what happened was that a human assessor accidentally tapped the wrong button on their dashboard.
Just look at their list of directors. It's the fortune 500 right there.
I also had a CV rejection letter with AI-generated rejection reasons in it, which was frustrating because none of the reasons matched my CV at all, in my opinion. I am still not sure whether the CV was actually reviewed by a human or by an AI, but I am assuming the latter.
I absolutely hated looking for a new job pre-AI and when times were good. Now I'm feeling completely disillusioned with the whole process.
If you advertise on Facebook you’re almost guaranteed to have your ad account restricted for no apparent reason, with no human being to appeal to, even if you spend big money.
It’s so bad that it’s common knowledge you should start a fan page, post random stuff, and buy page likes for 5-7 days before you start advertising; otherwise their system will just flag your account.
We've got the real criminal right here.
(LinkedIn ramped up anti-bot/inauthentic-user heuristics like that a few years ago. Sadly they are necessary. It's near-impossible for heuristics to distinguish real humans from bots when their behavior looks inauthentic or suspiciously commercial.)
In both cases for me, I had signed up and logged in for the first time, and was met with an immediate ban. No rhyme or reason why.
I, too, needed it for work so had no prior history from my IPs in the case of Facebook at least. So maybe that's why, but still. Very aggressive and annoying blocking algorithm behavior like that cost them some advertising money as we just decided it wasn't worth it to even advertise there.
The only real solution is locally running models, but that goes against the business model. So instead they will seek regulation to create privacy by fiat. Fiat privacy still has all the same problems as telling your therapist that you killed someone, or keeping your wallet keys printed out on paper in a safe. It's dependent on regulations and definitions of greater good that you can't control.
Not if you are selling hardware. If I were Apple, Dell, or Lenovo, I would be pushing for locally running models with Hugging Face support while developing, at full speed, systems that can do inference locally.
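The tooling for that is already here. A minimal sketch of fully local inference with the Hugging Face transformers library (the model name is just an example of a small instruct model; any locally downloaded causal LM would do):

    # pip install transformers torch
    from transformers import pipeline

    # Everything below runs on-device; no prompt or output leaves the machine.
    generator = pipeline(
        "text-generation",
        model="Qwen/Qwen2.5-0.5B-Instruct",  # example; swap in any local model
        device_map="auto",                   # GPU if present, otherwise CPU
    )

    result = generator(
        "Explain in one sentence why local inference preserves privacy.",
        max_new_tokens=60,
    )
    print(result[0]["generated_text"])

The hardware pitch writes itself: the faster the local chip, the bigger the model you can run without ever touching a cloud API.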
Getting customers to pay for the weights would be entirely dependent on copyright law, which OpenAI already has a complicated relationship with. Quite the needle to thread: it's okay for us to ingest and regurgitate data with total disregard for how it's licensed, but under no circumstances can anyone share these weights.
That's assuming weights are even covered by copyright law, and I have a feeling they are not in the US, since they aren't really a "work of authorship"
It sounds a lot like the browser wars, where the winning strategy was to aggressively push one's platform for free (which was rather uncommon then), aiming at market dominance for later benefits.
Provide the weights as an add-on for customers who pay for hardware to run them. The customers will be paying for weights + hardware. I think it is the same model as buying the hardware and getting macOS for free. Apple spends $35B a year on R&D; training GPT-5 cost ~$500M. It is a nothing-burger for Apple to create a model that runs locally on their hardware.
They could take a lesson from churches. If LLM providers and their employees were willing to commit to privacy and were willing to sacrifice their wealth and liberty for the sake of their clients, society would yield.
I remember seeing a video of a certain Richard Masten, a CrimeStoppers coordinator, destroying the information he had on a confidential source right in the courtroom under the threat of a contempt charge and getting away with a slap on the wrist.
In decent societies standing up for principles does work.
Isn't his company, OpenAI, the one that said they monitor all communications and will report anyone they think is a threat to the government?
https://openai.com/index/helping-people-when-they-need-it-mo...
> If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement.
I get that they are trying to do something positive overall. At the same time, I don't want corp-owned AI monitoring everything I ask it.
IIRC it is illegal for the phone company to monitor and censor communications. The government can ask a judge for permission for police to monitor a line, but otherwise it's illegal. Now, with AI transcription, it won't be long until a company can monitor every call, transcribe it, and feed it to an LLM to judge and decide which lists you should be on.
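The pipeline is already trivially easy to assemble. A schematic sketch: the transcription part is the real openai-whisper API, while llm_judge is a hypothetical stand-in for whatever model does the judging:

    # pip install openai-whisper
    import whisper

    model = whisper.load_model("base")  # speech-to-text, runs locally

    def classify_call(audio_path: str) -> str:
        # 1. Transcribe the recorded call.
        transcript = model.transcribe(audio_path)["text"]
        # 2. Hand the transcript to an LLM to decide "which lists you
        #    should be on". llm_judge is hypothetical; any hosted or
        #    local model could slot in here.
        return llm_judge(
            "Given this phone call transcript, label the caller for any "
            "watchlists:\n" + transcript
        )

Run that over every call a carrier handles, and the only remaining barrier is legal, not technical.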
I understand that people assume LLMs are private, but there isn’t any guarantee that is the case, especially when law enforcement comes knocking.
You do realize he became kingmaker by positioning himself in YC, which owns and operates Hacker News. What makes you think you are not being traced here, and that your messages are not being used to train his LLM?
As for him being a conman: if you haven't realized that most of the SV elite this place worships are conmen (see the Trump dinner this week), with clear ties to the intelligence agencies (see the newly appointed generals who are C-suite at several Mag 7 corps), who will placate a fascist in order to push their agendas, then you simply aren't paying attention.
His scam coin is the most insipid item on his rap sheet at this point, and I say this as a person who has seen all kinds of grifting in that space.
EDIT: I want to add that "training on chat logs" isn't even the issue. In fact it understates the danger. It's better to imagine things like this: when a future ad-bot or influence-bot talks to you, it will receive your past chatlogs with other bots as context, useful to know what'll work on you or not.
EDIT 2: And your chatlogs with other people I guess, if they happened on a platform that stored them and later got desperate enough to sell them. This is just getting worse and worse as I think about it.
They can just prompt "given all your chats with this person, how can we manipulate him to do x"
No real expertise needed at all; let the AI do all the lifting.
https://cybersecuritynews.com/fraudgpt-new-black-hat-ai-tool...
If you make an app for interacting with an LLM, and in the app the user has access to all sorts of stolen databases and other conveniences for black hats, then you've got what was described above. Or am I missing something?
Surplus value isn't really that useful of a concept when it comes to understanding the world.
This is so far from the reality of so many things in life, it's hard to believe you've thought this through.
Maybe it works in the academic, theoretical sense, but it falls down in the real world.
No "artisanal" product, from food to cosmetics to clothing and furniture is ever worth it unless value-for-money (and money in general) is of no significance to you. But people buy them.
I really can't go through every product class, but take furniture as a painfully obvious example. The amount of money you'd have to spend to get furniture of a quality similar to IKEA's is mind-boggling. Trust me, I've done it. Yet I know people in Sweden who put considerable effort into acquiring second-hand furniture because IKEA is somehow beneath them.
Again, there are situations where economies of scale don't exist, and situations where a business may not be interested in selling a cheaper or superior product. But they are rarer than we'd like to admit.
This solves the problem of seeing ads that are not best for the user.
Ads are there to change your behavior to make you more likely to buy products, e.g., by putting downward pressure on your self-esteem to make you feel "less than" unless you live a lifestyle that happens to involve buying product X.
They are not made in your best interest; they are adversarial psycho-tech with a side effect of building an economic and political profile on you, for whoever needs to know what messaging might resonate with you.
https://brandingstrategyinsider.com/achieving-marketing-obje...
"Your ultimate marketing goal is behavior change — for the simple reason that nothing matters unless it results in a shift in consumer actions"
Brainwashing is the systematic effort to get someone to adopt a particular loyalty, instruction, or doctrine.
You have described one type of ad. There are many many types of ads.
If you were actually knowledgeable about this, you'd know that basic fact.
> Each Shiftkey nurse is offered a different pay-scale for each shift. Apps use commercially available financial data – purchased on the cheap from the chaotic, unregulated data broker sector – to predict how desperate each nurse is. The less money you have in your bank accounts and the more you owe on your credit cards, the lower the wage the app will offer you.
https://pluralistic.net/2024/12/18/loose-flapping-ends/#luig...
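To make the mechanism concrete, here's a purely illustrative Python sketch of desperation-based wage pricing. The fields, coefficients, and floor are all invented, not ShiftKey's actual model; the point is how little code the scheme needs once the broker data is in hand:

    def offered_rate(base_rate: float, bank_balance: float, card_debt: float) -> float:
        # Crude "desperation score": low balance plus high debt pushes it up.
        desperation = max(0.0, card_debt - bank_balance) / 10_000
        # The more desperate the nurse looks, the lower the offer,
        # floored so the shift still gets filled.
        return max(base_rate * (1 - 0.05 * desperation), base_rate * 0.6)

    print(offered_rate(60.0, bank_balance=200.0, card_debt=9_000.0))   # 57.36 - lowball
    print(offered_rate(60.0, bank_balance=8_000.0, card_debt=500.0))   # 60.0  - full rate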
I'd rather see totally irrelevant ads because they're easy to ignore or dismiss. Targeted ads distract your thought processes explicitly because they know what will distract you; make you want something where there was previously no wanting. Targeted advertising is productised ADHD; it is anti-productive.
Like the start of Madness' One Step Beyond: "Hey you! Don't watch that, watch this!"
Feel free to call me an accelerationist but I hope AI makes social media so awful that no one wants to use it anymore. My hope is that AI is the cleansing fire that burns down social media so that we can rebuild on fertile soil.
...hm. maybe I am worried about the Basilisk, then.
Even in real life, the police in the UK now deploy live face recognition and make tonnes of arrests based on it (sometimes wrongly). Shops are now looking to deploy live face recognition to detect shoplifters (although it's legally unclear what they will actually do about it).
The UK can compel any person travelling through the UK to hand over their passwords and devices, with no right to appeal. Refusing to hand over a password can get you arrested under the Terrorism Act, under which they can hold you indefinitely. When arrested under any terrorism offence you also have no right to legal representation.
The days of privacy sailed unnoticed.
When I owned a cellphone, I never left the house with it. Whenever I used it, my location was known.
None of this is terribly new. For decades our names and addresses were collected in telephone books (along with our numbers) that were given away to everyone. Telecorp doxxing. That too was involuntary.
The company whose buses I carded into (instead of paying cash) knew where I got on and got off. Once you get tuned into these 'services', you are in a position to limit their accuracy. That skill can be refined. We don't have to become predictable.
But an LLM is not a human, and I think OpenAI and all the others should make it clear that you are NOT talking to a human. Repeatedly.
I think if society were trained to treat AI as NOT human, things would be better.
I've been learning a hell of a lot from LLMs, and am doing way more coding these days for fun, even if they are doing most of the heavy lifting.
Could you elaborate on why? I am curious, but you haven't actually given an argument.
That chatbot you're interacting with is not your friend. I take it as a fact (assumption? axiom?) that it can never be your friend. A friend is a human - animals, in some sense, can be friends - who has your best interests at heart. But in fact, that chatbot "is" a megacorp whose interests certainly aren't your interests - often, their interests are at odds with your interests.
Google works hard with branding and marketing to make people feel good about using their products. But, at the end of the day, it's reasonably easy to recognize that when you use their products, you are interacting with a megacorp.
Chatbots blur that line, and there is a huge incentive for the megacorps to make me feel like I'm interacting with a safe, trusted "friend" or even mentor. But... I'm not. In the end, it will always be me interacting with Microsoft or OpenAI or Google or whoever.
There are laws, and then there is culture. The laws for AI and surveillance capitalism need to be in place, and we need lawmakers who are informed and who are advocates for the regular people who need to be protected. But we also need to shift culture around technology use. Just like social customs have come in that put guard rails around smartphone usage, we need to establish social customs around AI.
AI is a super helpful tool, but it should never be treated as a human friend. It might trick us into thinking that it's a friend, but it can never be or become one.
AI chatbots are not humans, they don't have ethics, they can't be held responsible, they are the product of complex mathematics.
It really takes the bad parts from social media to the next level.
It seems like having LLM providers not train on user data is a big part of it. But is using traditional ML models to do keyword analysis considered “AI” or “surveillance”?
The author…and this community in general…are much better prepared than most to make full recommendations about what AI surveillance policy should be. We should be very careful to enact good regulation without killing innovation in the process.
Like nuclear fission, AI should never have been developed.
And people should own all data about themselves, all rights reserved.
It's ironic that copyright is the law that protects against this kind of abuse. And this is of course why big "AI" companies are trying to weaken it by arguing that model training is not derivative work.
Or by claiming that writing a prompt in 2 minutes is enough creative work to own copyright of the output despite the model being based on 10^12 hours of human work, give or take a few orders of magnitude.
The groups that didn't restrict training to public-domain content would have an advantage if this were implemented as a rule moving forward, at least for some time.
New models following the rule could face a gap.
I'm sure competition, as we've seen from open-source models, will be able to close it.
Just because everyone is doing it doesn't mean it's right or legal. Only that a lot of very rich companies deserve to get punished and to pay the creators.
I'm not arguing, just debating the legality of what the model makers have done.
Anthropic just paid a settlement. But they also bought a ton of books and scanned them, which might be more than other model makers did. Maybe it's a sign of things to come.
Copyright was designed at a time when reproducing a work in a way that was neither verbatim nor obviously modified to avoid detection (like synonym replacement) would require a lot of human work and be too costly to be worth doing. Now it's automated. That fundamentally changes everything.
Human work is what's to be rewarded, according to its amount and quality.
> That’s why we (at DuckDuckGo) started offering Duck.ai for protected chatbot conversations and optional, anonymous AI-assisted answers in our private search engine. In doing so, we’re demonstrating that privacy-respecting AI services are feasible.
I don't know if it's a great idea; I just wonder what makes it feasible. But there is a kind of implied recommendation here.
By "killing innovation" do you just mean: "we need to allow these companies to make money in possibly a predatory way, so they have the money to do... something else"? Or what is the precise concern here? What facet needs to be innovated upon?
I believe that LLMs will have the capability to fill in for human workers in many important ways. It’s like getting an economic infusion without the associated population growth normally required.
But we aren’t there yet, so innovation looks like continuing to build out how to efficiently use AI tools. Not necessarily better models, but longer memory, more critical reasoning, etc.
At the same time…there are winner-take-all dynamics and possibility to weaponize that are not good for society in the long-term.
So we need to encourage innovation while making sure we don’t kill each other in the process.
"Wipeth thine ass with what is written" should be engraved above the doorway of the National Constitution Center.
Most of my close friends are non-technical and expect me to be a cheerleader for US AI efforts. They were surprised when I started mentioning the recent Stanford study finding that 80% of US startups are using Chinese models. I would like us to win, but we seem too hype-focused and not focused enough on engineering and practical applications.
Then they came for medical science, but I said nothing because I was not a doctor.
Then they came for specialists and subject matter experts, and I said nothing because I was an influencer and wanted the management position.
As I write this, sitting in Peet's Coffee in downtown Los Altos, I count three different cameras recording me, and I'm using their public wifi, which I assume is also being used to track me. That's the world we have now.
If spaces like that irk you, stop going there. Limit your use of the Internet to when you're at home on your own network. Do we truly need to be browsing HN and other sites when we're out of the house?
Ditch the smartphone. Most things that people claim you need a smartphone for "in order to exist in modern society" can also be done via a laptop or a desktop, including banking. You don't need access to everything in the world tucked neatly away in your pocket when you're going grocery shopping, for instance.
Buy physical media so that your viewing habits aren't tracked relentlessly. Find hobbies that get you away from it all (I backpack!).
Fuck off from social media. Support hobby-based forums and small websites that put good faith into not participating in tracking and advertising, if possible.
Limit when you use the internet, and how.
It's hard to escape from it, but we can significantly limit our interactions with it.
For example,
Wi-Fi signals can uniquely identify every single heartbeat in real time within a certain range of the AP; multiple linked access points extend this range up to a mile. The radio devices you carry around unknowingly beacon at set intervals, tracking your location just like an animal on a tracking collar. This includes the minute RFID chips sewn into your clothing and fabrics.
Phones don't turn off their radios in airplane mode. Your vehicle has at least three different layers that uniquely beacon a set of identifiable characteristics to anyone with a passive radio: the OBD-II uplink, TPMS sensors (one per wheel), and telematics.
Home Depot, in cooperation with Flock, has, without disclosure, captured your biometrics, tracked your minute movements, and put that up for sale to the highest bidder through subscription-based profiling.
Ultrasonic beacons are emitted from your phone to associate geographically local devices with individual people. All of this is visible to anyone with an SDR, manipulable by anyone with a Flipper Zero, and treated as a distinct source of truth in a layered approach.
All aspects of social interaction with the wider world have now been replaced with a surrogate that runs through a few set centralized points that can be turned off/degraded to drive anyone they wish into poverty with no visible indicator, or alternative.
Imagine you are a job seeker, and the AI social-credit algorithm they've developed (to court wealthy people on one side and to torture or "correct" people on the other) incorrectly identifies you as a subversive. They not only degrade your existing services but intermittently isolate your communications from everyone else through induced failures, following a statistical approach similar to Turing's in WW2.
Imagine the difficulty of finding work in any specialized field in which you have experience, where you can never receive callbacks because callbacks are inherently interrupt-driven, and interrupt-driven calls are jammed without your being able to recognize the jamming. Such communications are vulnerable to erasure.
Should any system ever exist whose sole purpose or effect is to prevent an arbitrary target, through interference, from finding legitimate work, feeding themselves, or exercising their existential rights?
In effect, such a system of control silently makes these people into slaves, without recourse or informed disclosure. It fundamentally violates their human rights, and "these systems exist".
The failure of government to uphold, in a timely way, the promises of the social contract and the specifics of the constitution becomes, after the fact, purposeful intent through gross negligence and the failure to uphold constitutional oaths. History has shown repeatedly that, if the civilization survives at all, it reforms itself through violence. That is something no good person wants; but given the narrowing of agency and choice to affect the future, it is the only alternative left when the cliff of existential extinction is present (whether people realize it or not).
Or maybe it never was and this fact is just becoming more transparent.