Posted by tencentshill 5 hours ago
Fortunately, we have more powerful policy tools to clean the air than attacking individual gearheads... convert America to an electric car system. You need to attack these problems at the point of production. Consumption side approaches are petty and not very effective.
The point isn't that you can't run the deep research. Everyone now has more capabilities, and if you want to waste time and tokens you can do it. The point is someone has done the work compiling these, and made it available once, for everyone to read. Think "caching". It has the exact amount of information needed to show the details of every attack. There is a lot. Sadly making it "concise" will remove information -- there is that much.
I do usually make edits to an article after I get it from an AI, as an editor would do when a writer submits something. I hate having AI shibboleths like "It's not X. It's Y". So I make it more humanized. But at the end of the day, the article does what it's supposed to do: make people aware of things in one place, rather than have to research it themselves every time.
Just like I don't want to look at AI art or listen to AI music, I don't want to read AI written blogslop.
The web is now full of shit. What a waste.
Why don't you write all your assembly code yourself? Why do you use a compiler? Why do you generate images, when you can draw them yourself? You're supposed to add value.
I don't think preparing a list of all the threats, editing it and publishing it for others is a "waste". I'm not publishing random stuff, this is important and in line with what I want people to know.
Some people on HN downvote any criticism of AI, other people complain that things are written by AI. If you're such big fans of AI being used more and more, then accept the consequences!
I'm seeing increasing numbers of people credulously citing ChatGPT/Claude/Gemini output as ground-truth fact. Many more are increasingly lulled into a false sense of security by the citations models append (to the point of neglecting even a bare-minimum skim of the cited sources, much less critically evaluating/contextualizing the nature of the sources themselves). My fear is that most people are blissfully ignorant of the new paradigms of propaganda that AI could enable; most of us here wouldn't be taken in by the "slop" image-gen deepfakes (right now), but can you say the same about a couple of citations taken out of context?
We already know how trivial it is to win over a sizeable chunk of society by introducing red-herrings, misrepresenting statistical data, etc. -- oil companies perfected that art, and now as a result a huge number of voters in the US believe that climate change (doesn't exist|isn't man-made|is unavoidable). And that effort was "fully manual" and carried out without the aid of extensive psychological profiling at the individual level via an ad-surveillance complex. Today, society is almost completely defenseless against the extreme granularity/subtlety of manipulation that ownership of frontier AI models enables, especially when it's armed with even a fraction of the torrent of personal data that's being collected on each of us every day.
That's kinda fair, like it's still useful to prepare a list, but if you didn't go research your information yourself, why would I start from a position of charity when I read it? When I research something with LLMs, I know to double-check everything myself before I use it as a basis for my thought or repeat it to other people. Knowing an article is AI-written forces me to doubt every sentence. Or maybe it's worse: I have to assume nobody cared about the sentence. The old format was a guarantee that someone gave enough shits to put the article together. Relevance comes implicitly bundled in each sentence. It's like someone talking to you in public in that there's often a reason to pay attention.
It's not as though that person is going to say something correct, or ethical, but I've had a lifetime of dealing with human kinds of wrongness. When stuff is wrong, I'll know it's wrong because the article is slanted or wrong because the author was lazy etc., which will let me discount it selectively and still get value from it when, e.g., a slanted author contradicts themselves. Reading an LLM article I have no clue whether the person who put it up even read the whole thing, so when I read sentences, I have no guarantee that the sentence communicates something worth paying attention to. I dislike that ambiguity and would prefer to guarantee that the text is slop by asking a bot myself. Then I know its worth upfront. I'd be fine with it if these sites included a direct statement in bold at the top: HEY THIS IS AI SLOP IF YOU DONT WANT THAT LEAVE. Then I know exactly how to parse it.
I spent way too much time actually building this (with Claude, and double-checking everything), so the article I publish is OK to push out. We aren't building a bridge for thousands of cars here; it's an article.
A lot of things are automated and 95% of the time they are correct. The key is knowing whether the last mile is worth fixing, if the consequences are minor.
What the startup does is make a verifiably trusted, zero-configuration, turnkey environment for businesses to move their data into and run AI workloads on, without worrying about their data being stolen, or some Agents doing unpredictable things. The environment is super-secured, with no SSH. It's an appliance, with over-the-air M-of-N updates. Think more "Tesla car" and less "OpenClaw". That's the foundation.
That environment then builds everything around a graph database, for people, organizations, and even code. We have Grokers that can ingest a codebase statically once, and then present the graph database as a far better "RAG" than cosine similarity and Pinecone-style vector databases.
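To make the contrast concrete, here is a toy sketch of graph-based code retrieval: every name here (`build_code_graph`, `callers_of`, the edge data) is made up for illustration and is not the startup's actual API. The point is that a call graph answers structural questions exactly, where a vector store can only return "similar-looking" snippets.

```python
from collections import defaultdict

def build_code_graph(edges):
    """edges: (caller, callee) pairs from a one-time static pass.
    Returns a reverse index: callee -> set of direct callers."""
    callers = defaultdict(set)
    for caller, callee in edges:
        callers[callee].add(caller)
    return callers

def callers_of(callers, symbol, depth=2):
    """Walk the graph to collect everything that (transitively,
    up to `depth` hops) calls `symbol`."""
    seen, frontier = set(), {symbol}
    for _ in range(depth):
        frontier = {u for v in frontier for u in callers[v]} - seen
        seen |= frontier
    return seen

g = build_code_graph([
    ("handler", "validate"), ("validate", "parse"),
    ("cli", "parse"), ("tests", "handler"),
])
print(sorted(callers_of(g, "parse")))  # -> ['cli', 'handler', 'validate']
```

An agent asking "what breaks if I change `parse`?" gets an exact dependency neighborhood from the graph, which embedding similarity cannot guarantee.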
At its most basic level: Agents can't be trusted. We want predictable Workflows, not agents. They can do 99% of everything Agents can, if done properly, and the remaining 1% are the dangerous parts https://safebots.ai/agents.html
It's a lot of innovations at once, including:
Collaborative Bots that are safer than agents.
Workflows and tools that can read, reason and propose actions.
Policies that must be satisfied before actions can be taken.
Logging of everything. Verifiable security and audits for SOC2 compliance etc. etc.
Everything is configurable and designed for serious businesses, not a grandma who finished a Chinese course on how to install OpenClaw on her terminal and not get pwned.
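A minimal sketch of the "policies must be satisfied before actions can be taken" point from the list above: the `Action` shape and the two example policies are invented here for illustration, not taken from the actual product.

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    kind: str                               # e.g. "read", "write", "delete"
    target: str                             # e.g. a file path or table name
    log: list = field(default_factory=list)  # everything gets logged

# Hypothetical policies: each is a predicate the action must satisfy.
POLICIES = [
    lambda a: a.kind != "delete",               # destructive ops never auto-run
    lambda a: not a.target.startswith("/etc"),  # no touching system paths
]

def execute(action):
    """Run the action only if every policy passes; log the decision either way."""
    if all(policy(action) for policy in POLICIES):
        action.log.append(f"EXECUTED {action.kind} {action.target}")
        return True
    action.log.append(f"BLOCKED {action.kind} {action.target}")
    return False

execute(Action("read", "/data/report.csv"))    # passes both policies
execute(Action("delete", "/data/report.csv"))  # blocked by the first policy
```

This is the Workflow-not-Agent idea in miniature: the model can propose actions, but only actions that clear an explicit, auditable policy gate ever run.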
This is why you should write things yourself. There is no way an AI would write something so insane in response to that question. Since I can now read your true understanding of the world, I know not to waste my time on your AI slop. I have no reason to believe you fact-checked the 'research' done using AI if you can't even understand how the research should have been done in the first place. You want to waste the time of others but aren't even willing to sacrifice a bit yourself.
From https://news.ycombinator.com/newsguidelines.html:
> Don't post generated comments or AI-edited comments. HN is for conversation between humans.
This is an app for deliberately causing pollution. The users of that app should be criminally prosecuted and lose their license/spend a few months in prison. The price differential between this device/app and a generic OBD dongle you can buy on Amazon for ~$10 is entirely made up by the criminal features EZ Lynk offers.
The app being software versus hardware doesn't change the legal or moral situation involving it. Much like the DOJ demanded the identities of people importing PlayStation 1 modchips back in the 90s, the identities of this equally criminal application's users will be provided to the DOJ.
I think people should have the freedom to do what they want; if you want to have a truck that has horrible exhaust, fine, but we'll have it piped back into your cab for you to breathe instead of the people behind you, and if you want a car that sounds like a thousand go-carts racing down the street fine, but it'll be through headphones destroying your hearing every time you hit the gas.
Hey congrats, you discovered Society! This is what all those rules and shit are all about - your impact on other people, and their impact on you! It turns out that just saying “people should be able to do what they want” doesn’t actually solve anything, because other people also exist, and some of them are you!
I also absolutely loathe the coal-rollers and everything about what they do, and if I could snap my fingers and have them lose both their trucks and their licenses to drive with no other consequences beyond their frustration, I'd do it.
Nevertheless, we cannot allow this good reason, on which we both agree, to be used as a wedge to let the state just wholesale collect data for whatever reason it wants.
Very soon, the reason the state wants to wholesale collect data will be a reason we entirely disagree with. That is not an "IF", it is a "WHEN".
So, no, this isn't a justification.
But if they get this thing taken off the market, that's a huge loss for all of us, because there are a ton of things this type of tool enables, many of them things people like us would be very interested in. Such as disabling privacy-invasive telematics, or disabling features like stop-start, which I can personally attest has caused significant repair issues with the engine on my last car.
Having access to a tool like this is to a car what having an administrator account is on a PC. Without it, you are merely a guest, not an owner of the system.
For example, the reason we don't have super-efficient turbodiesel subcompacts that are perfectly legal in the EU is the so-called "Clean" Air Act. Since the law is based on vehicle weight, I can go buy an 8,000-pound truck, commute to work alone in it, and pollute all I want. But if I want a super-clean 80 MPG diesel subcompact that's 1/4 the gross weight, that's supposedly bad for the environment.
But it gets worse in all sorts of ways: the law grandfathers coal plants out of all these emissions standards. One coal plant can emit more pollution than millions of trucks. Guess which polluter the government is aggressively pursuing and whose rights it's violating? You guessed it: car enthusiasts who downloaded an app. Give me a break.
Why is this administration, which is all for coal, oil, and against environmental policies pursuing THIS?
This DOJ is all about pursuing cases for retribution. It could be that they already know someone they want to punish, and already found they're using the device. Or they'll use it as a source for finding people they want to punish.
This issue is just not directly politically important enough to get the "don't touch" treatment.
Donors and party power brokers aren't rolling coal.