
Posted by aphyr 8 hours ago

The future of everything is lies, I guess: Where do we go from here? (aphyr.com)
390 points | 400 comments
andyjohnson0 2 hours ago|
From here in the UK, the site just says:

"Unavailable Due to the UK Online Safety Act [...] Now might be a good time to call your representatives."

So I fired up a VPN, and it appears to be a personal blog about AI risks.

The geo-block is kind of a shame, as the writing is good and there appears to be nothing about the site that makes it subject to the OSA.

Terr_ 41 minutes ago|
> there appears to be nothing about the site that makes it subject to the OSA.

The regulators of OSA say otherwise. Or at any rate, they refuse to agree and won't rule it out.

____________

For the geo-blocked, reproducing relevant content [0]:

> A few months back I wound up concluding, based on conversations with Ofcom [1] that aphyr.com might be illegal in the UK due to the UK Online Safety Act.

> [...] This blog has the same problem: people use email addresses to post and confirm their comments. I think my personal blog is probably at low risk, but a.) I’d like to draw attention to this legislation, and b.) my risk is elevated by being gay online

[0] https://aphyr.com/posts/395-geoblocking-multiple-localities-...

[1] https://blog.woof.group/announcements/updates-on-the-osa

green_wheel 2 hours ago||
I wonder if the author would advocate for us to stop driving cars as well.
skyberrys 7 hours ago||
The reasons laid out in this article are why it's so important to share how we are using AI and what we are getting in return. I've been trying to contribute toward a positive outcome for AI by tracking how well the big AI companies are doing at being used to solve humanitarian problems. I can't really follow most of the suggestions in the article; they seem like a way to slow progress. I don't want to slow AI progress; I want the technology we already have to be deployed for useful and helpful things.
catapart 8 hours ago||
The epilogue is what speaks to me most. All of the work I've done with LLMs takes that same kind of approach. I never link them to a git repo, and I only ever ask them to make specific, well-formatted changes so that I can pick up where they left off. My general feeling is that LLMs make the bullshit I hate doing a lot easier: project setup, integrating theming, preparing/packaging resources for installability/portability, basic dependency preparation (Vite for JS/TS, UI libs for C#, stuff like that), UI layout scaffolding (main panel, menu panel, theme variables), auto-update fetch-and-execute loops, etc.

And while I know they can do the nitty-gritty UI work fine, I feel like I can work just as fast, or faster, on UI without them than with them. With them it's a lot of "no, not that, you changed too much/too little/the wrong thing"; without them I just execute, because it's a domain I'm familiar with.

So my general idea of them is that they are "90% machines". Great at doing all of the "heavy lifting" bullshit of initial setup or large structural refactoring (that doesn't actually change functionality, just prepares for it) that I never want to do anyway, but not necessary and often unhelpful for filling in that last 10% of the project just the way I want it.

Of course, since any good PM knows that 90% of the code written only means 50% of the project finished (at best), it still feels like a hollow win. So I often consider the situation in the same way as that last paragraph. Am I letting the ease of the initial setup degrade my ability to set up projects without these tools? Does it matter, since project setup and refactoring are one-and-done, project-specific, configuration-specific quagmires where the less thought about fiddly, perfect text-matching, the better? Can I keep using these things and still use them well (direct them on architecture/structure) if I lose my grounded concept of what the underlying work is? Good questions, as far as I'm concerned.

egonschiele 8 hours ago||
I've been thinking about this a lot recently, and I don't know if it is possible to stop. I've been thinking the most impactful thing would be to create open-source tools to make it easier to build agents on top of open-source models. We have a few open-source models now, maybe not as good as Gemini, but if the agent were sufficiently good, could that compensate?

I think that would democratize some of the power. Then again, I haven't been super impressed with humanity lately and wonder if that sort of democratization of power would actually be a good thing. Over the last few years, I've come to realize that a lot of people want to watch the world burn, way more than I had imagined. It is much easier to destroy than to build. If we make it easier for people to build agents, is that a net positive overall?

thushar10 2 hours ago||
Pareto almost never goes away. Democratization usually improves the baseline (rights, resources, time), but it rarely flattens the power distribution. Even with open-source models, power will likely tilt toward those with the most compute or the best feedback loops. So, considering the imbalance inevitable, the discussion should be about ensuring the new baseline for humanity is actually net positive.
miltonlost 8 hours ago||
> If we make it easier for people to build agents, is that a net positive overall?

If we make it easier for people to drive and have cars, isn't that a net positive? If we make it easier for X, isn't that better? No, not necessarily, that's the entire point of this series of essays. Friction is good in some cases! You can't learn without friction. You can't have sex without friction.

willrshansen 8 hours ago||
If there's too many lies, "source or gtfo" becomes more important
ipython 8 hours ago||
you would have to trust that the person listening to the lies would know the difference, and that's the rub...
jbxntuehineoh 7 hours ago||
that's the neat part, the source is also going to be bullshit slop!
engeljohnb 7 hours ago||
Therefore, you can dismiss whatever claim is being made. That's the reason to ask for the source: so you can judge whether it's reliable.
munificent 37 minutes ago|||
Great, I dismissed it.

Unfortunately, the several million other people who live in the same voting unit as me didn't and ended up electing an asshat anyway.

warkdarrior 41 minutes ago|||
> That's the reason to ask for the source: so you can judge whether it's reliable.

So the solution to checking whether an article is reliable is to check whether its sources are reliable? How far back do you go? Or do you disregard immediately any article that does not cite only sources you already trust?

ori_b 7 hours ago||
Some people like roasting marshmallows. Others think that setting the house on fire may have downsides.
gmuslera 7 hours ago||
The epilogue looked weak to me. The previous sections explored why it is essentially wrong to use current LLM technology (the answers can be wrong, or not even wrong) and why it has to be that way. The epilogue focuses more on (our) obsolescence in a paradigm shift toward a widespread-LLM-use scenario, and not on whether they do their work right or wrong.

And that should be the core. There is a new, emergent technology: should we throw everything away and embrace it, or are there structural reasons why it is something to be taken with big warning labels? Avoiding these tools because they do their work too well may be a global-system approach, but decision makers optimize locally, for their own budget/productivity/profit. But if there are perceived risks, because the tools are not perfect, that is another thing.

Jeff_Brown 7 hours ago||
As a consequentialist who shares the author's concerns, I feel fine (ethically) using AI without advancing it. Forgoing opportunities meaningful to yourself for deontological reasons, when it won't have any impact on society, is pointless.
camgunz 5 hours ago|
We should consider how we came to be so powerless. The cringe "people gave their lives for that flag" line is actually true, and we're trading it away for what? Not having to get out of our gaming chairs?

The reason you can't beat index funds is the people who build the market built a system that benefits them and them alone; the index fund is the pitchfork dividend (what you pay to avoid getting pitchforked). The reason you can't get your congressperson on the line is (mostly) they built a system where the only way to influence them is to enrich them; voting is the pitchfork dividend.

The way to build a society that runs on reality is to build it by whatever means possible, then defend it by any means necessary. The only societies that matter are the ones that survive.

I want to build it. I don't wanna build a fuckin crypto app, a stupid ass agent harness, or yet another insipid analytics platform. I want to build a society that furthers the liberation of humankind from the vicissitudes of nature, the predation of tyranny and the corruption of greed. I believe it is possible, and I want to prove it out.
