Posted by ramimac 7 hours ago
The net result is a (mostly) American business model predicated on celebrity C-suites doing highly visible things while those doing the hard work of creating value are shunted into offices and paid less relative to productivity gains over time. It shouldn’t be a surprise that social media and the internet have supercharged this, especially with groups like YC, SoftBank, a16z, and other VCs splashing out capital on flash over substance, exploitation over business fundamentals, “disruption” over societal benefit and symbiosis.
The net result is a growing schism of resentment by those who do the work towards those who get the credit, glory, and reward: people who bask in stardom and truly believe they can replace the perceived entitlement of labor wholesale with an instant-gratification machine and somehow survive the resulting societal collapse such a device would bring about.
Circa 1970, Isaac Asimov wrote an essay that started with a personal anecdote about how amazed he was that he could get a thyroidectomy for his Graves' disease for about what he made writing one essay. Regardless of how good or bad healthcare really is today, you're not going to see people express that kind of wonder and gratitude about it.
This discussion circles around it
https://news.ycombinator.com/item?id=47074389
but I think the real working-class stance is that you want protection from economic shocks more than "participation", "ownership", "a seat at the table", "upside", etc. This might be a selfish and even antisocial thing to ask for over 80 years on, near the start of the third millennium, but I think it would sell if it was on offer. It's not on offer very much because it's expensive.
One could make the case that what we really need is downward mobility. Like what would have happened if Epstein had been shot down the first time or if Larry Summers had "failed down" instead of "failing up?" My experience is that most legacy admissions are just fine but some of them can't test their way out of a paper bag and that's why we need a test requirement.
Got it in one. Would I like to travel First Class and stay in fancy hotels? Sure, but I’d much rather have a house that I can improve to meet my needs instead. Would I like a fancy luxury car with all the trimmings over my sixteen-year-old Honda? Absolutely, but the latter is paid off and gets us around just fine. Would I like that spiffy Hasselblad X2D and some lenses? You betcha, but I’d rather take a proper holiday for the first time in fifteen years instead of buying another thing.
The problem is that society at present isn’t organized to prioritize necessities like shelter and healthcare, favoring wealth extraction and exploitation instead. Workers don’t want megayachts and hypercars and butlers, we just want to live more than we work.
It can mean moving within a class.
Surely most people want to better their station. To argue against that is insane and counter to every observable fact about human nature.
Many things changed around that specific time, and I think it does deserve scrutiny. The implied cultural factors seem to be merely correlates of a greater historical tide, such as https://en.wikipedia.org/wiki/Bretton_Woods_system#Nixon_sho...
My take here is a monetarist one.
Understanding the interconnectedness of systems beyond your own realm of expertise is how you learn what needs to be done to fix issues - and avoid falling for snake oil “silver bullets”/“one weird trick” populist positions.
Naturally, unmentioned are those shut out of reasonable opportunities for meaningful productivity, regardless of technical potential (but largely in line with (lack of) social capital). A few years of this maybe encourages an entrepreneurial spirit. Two decades is quite convincing that there's no place for them in the current order.
The upwardly-mobile opportunity hoarders need to understand, much as the wealth hoarders ought to, that the whole thing falls apart without buy-in from the "losers".
Tang ping bai lan.
Steinmetz contributed heavily to AC systems theory, which helped people understand and expand transmission, while Scott contributed a lot to transformer theory and design (I have to find his transformer book).
In addition to the limits of human planning and intellect, I'd also add incentives:
as cynical as it sounds, you won't get rewarded for building a safer, more robust, and more reliable machine or system until it is agreed that the risks or problems you address actually occur, and that the cost of prevention actually pays off.
For example, there would be no insurance without laws and governments, because no person or company would ever pay into a promise that has never had to be honored.
That sounds like the onset of a certain type of dark age. Eventually the shiny bits will fall off too, when the underlying foundation crumbles. It would be massively ironic if the age of the "electronic brains" brought about the demise of technological advancement.
Windows is maintained by morons, and gets shittier every year.
Linux is still written by a couple of people.
Once people like that die, nobody will know how to write operating systems. I certainly couldn’t remake Linux. There’s no way anyone born after 2000 could, their brains are mush.
All software is just shit piled on top of shit. Backends in JavaScript, interfaces which use an entire web browser behind the scenes…
Eventually you’ll have lead engineers at Apple who don’t know what computers really are anymore, but just keep trying to slop more JavaScript in layer 15 of their OS.
I think I did ok. Would I compare myself to the greats? No. But plenty of my coworkers stacked up to the best who'd ever worked at the company.
Do I think MS has given up on pure technical excellence? Yes, they used to be one of the hardest tech companies to get a job at, with one of the most grueling interview gauntlets and an incredibly high rejection rate. But they were also one of only a handful of companies even trying to solve hard problems, and every engineer there was working on those hard problems.
Now they need a lot of engineers to just keep services working. Debugging assembly isn't a daily part of the average engineer's day to day anymore.
There are still pockets solving hard problems, but it isn't a near universal anymore.
Google is arguably the same way; they used to only hire PhDs from top-tier schools. I didn't even bother applying when I graduated because they weren't going to give a bachelor's degree graduate from a state school a callback.
All that said, Google has plenty of OS engineers. Microsoft has people who know how to debug ACPI tables. The problem is that those companies don't necessarily value those employees as much anymore.
> I certainly couldn’t remake Linux
Go to the OSDev wiki. Try to make your own small OS. You might surprise yourself.
I sure as hell surprised myself when Microsoft put me on a team in charge of designing a new embedded runtime.
Stare at the wall looking scared for a few days then get over it and make something amazing.
This is certainly false. There are plenty of young people that are incredibly talented. I worked with some of them. And you can probably name some from the open source projects you follow.
In fact, today on GitHub alone you can find hobbyist OSs that are far, far more advanced than what Linus's little weekend turd ever was originally.
Their success is not gated by technical aspects.
How is that? It's easily the software project with the largest number of contributors ever (I don't know if that's true, but it could be).
Rent-seeking and promo-seeking are the only motivations for the people with the power.
None of that class wants to make a better product, or make life better or easier for the people.
Nowadays there are no tastemakers, and thus you need to be a public figure in order to even find your audience / niche in the first place.
That's always been the case depending on what you're trying to do, though. If you want to be Corporation Employee #41,737, or work for the government, you don't need a "personal brand"; just a small social network who knows your skills is good enough. If you're in your early 20s and trying to get 9 figures of investment in your AI startup, yeah you need to project an image as Roy from the article is doing.
It's amplified a bit in the social media world, but remember that only ~0.5% of people actively comment or post on social media. 99.5% of the world is invisible and doing just fine.
That being dismissed as a "nice to have" is like watching people wave flags while strapping C4 to civilizational progress.
He writes COBOL and maintains a banking system that keeps the world running. Literally like a billion people die if the system he maintains fails. I maintain a VC funded webpage that only works half the time. I make more than him, a lot more.
This has to be an exaggeration.
Maybe it will be worse now but I kind of feel like the 90% is just more visible than it used to be.
I find this a great choice for an opener. If linemen across the nation go on strike, it's a week before the power is off everywhere. A lot of people seem to think the world is simple, and a reading of 'I, Pencil' would go far toward enlightening them as to how complicated things are.
> secure the internet...
Here, again, are we doing a good job? We keep stacking up turtles, layers and layers of abstraction rather than replace things at the root to eliminate the host of problems that we have.
Look at Docker, look at Flatpaks... We have turned these into methods to "install software" (now with added features) because it was easier to stack another turtle than it was to fix the underlying issues...
I am a fan of the LLM-derived tools, use them every day, love them. I don't buy into the AGI hype, and I think it is ultimately harmful to our industry. At some point we're going to need more back-to-basics efforts (like systemd) to replace and refine some of these tools from the bottom up rather than add yet another layer to the stack.
I also think that agents are going to destroy business models: cancel this service I can't use, get this information out of this walled garden, summarize the news so I don't see all the ads.
The AI bubble will "burst", much like the Dotcom one. We're going to see a lot of interesting and great things come out of the other side. It's those with "agency" and "motivation" to make those real foundational changes that are going to find success.
> Individual intelligence will mean nothing once we have superhuman AI, at which point the difference between an obscenely talented giga-nerd and an ordinary six-pack-drinking bozo will be about as meaningful as the difference between any two ants. If what you do involves anything related to the human capacity for reason, reflection, insight, creativity, or thought, you will be meat for the coltan mines.
Believing this feels incredibly unwise to me. I think it's going to do more damage than the AI itself will.
To any impressionable students reading this: the most valuable and important thing you can learn is to think critically and communicate well. No AI can take it away from you, and the more powerful AI gets, the more you will be able to harness its potential. Don't let the people saying this shit discourage you from building a good life.
I have heard some form of this advice for over 30 years. Not one single penny I have earned in my career came from my critical thinking. It came from someone taking a big financial risk with the hope that they would come out ahead. In fact, I've had jobs that actively discouraged critical thinking. I have also been told that the advice to think critically wasn't meant for me.
I can't help but wonder whether the person who told you that the advice to think critically "wasn't meant for [you]" really had YOUR best interests at heart, and/or was a wise person.
I also worked jobs where I was actively discouraged to think critically. Those jobs made me itchy and I moved on. Every time I did it was one step back, three steps forward. My career has been a weird zigzag like that but trended up exponentially over 25 years.
We all have our anecdotes we can share. But ask yourself this: if you get better at making decisions and communicating with other people, who is that most likely to benefit?
It seems you are unnecessarily muddying the water.
/s if not obvious
This. Just thinking that those with power would even allow that leveling seems on the verge of impossible. In a sense, you can already see it in practice. Online models are carefully 'made safe' (neutered is my preferred term), while online inference is increasingly more expensive.
And that does not even account for whether the 'bozo' will be able to use the tool right... because an expert with a tool will still beat a non-expert.
It is a brain race. It may differ in details, but the shape remains very much the same.
But this is veering into lit crit territory, so agree to disagree
Imagination knows no negation.
I'm not saying this for social reasons, just for the definition:
"superhuman intelligence" at what?
Calculations? Puzzles? Sudokus?
Or more like...
image classification? ("is this a thief?", "is this a rope?", "is this a medical professional?", "is this a tree?")
Oh, applying the former to the latter would be a pretty stupid category error.
It's almost as if people had this figured out centuries ago...
Maybe if you read past these paragraphs it would have been clearer?
The first time an LLM solves a truly significant, longstanding problem without help is when we will know we are at AGI.
I genuinely like the author's style ( not in the quote above; its here for a different reason ). It paints a picture in a way that I still am unable to. I suck at stories.
Anyway, back to the quote. If that is true, then we are in a pickle. Claw and its security issues are just a symptom of that 'break things' spirit. And yes, this has been true for a while, but we keep increasing both in terms of speed and scale. I am not sure what the breaking point is, but at a certain point the real world may balk.
Yes, sometimes people who barrel forward can create a mess, and there are places where careful deliberation and planning really pay off. But in most cases, my observation has been that the "do-ers" produce a lot of good work, letting the structure of the problem space reveal itself as they go along and adapting as needed, without getting hung up on academic purity or aesthetically perfect code. In contrast, some others can fall into pathological over-thinking and over-planning, slowing down the team with nitpicks that don't ultimately matter, demanding to know what your contingencies are for x, y, z, and w without accepting "I'll figure it out when or if any of those actually happen." Meanwhile their own output is much slower, and while it may be more likely to work according to their own plan the first time without bugs, it wasn't worth the extra time compared to the first approach. It's premature optimization, but applied to the whole development process instead of just a piece of code.
I think the over-thinkers are more prone to shun AI because they can't be sure that every line of code was done exactly how they would do it, and they see (perhaps an unwarranted) value in everything being structured according to a perfect human-approved plan and within their full understanding; I do plan out the important parts of my architecture to a degree before starting, and that's a large part of my job as a lead/architect, but overall I find the most value in the do-er approach I described, which AI is fantastic at helping iterate on. I don't feel like I'm committing some philosophical sin when it makes some module as a blackbox and it works without me carefully combing through it - the important part is that it works without blowing up resource usage and I can move on to the next thing.
The way the interviewed person described fast iteration with feedback has always been how I learned best - I had a lot of fun and foundational learning playing with the (then-brand-new) HTML5 stuff like making games on canvas elements and using 3D rendering libraries. And this results in a lot of learning by osmosis, and I can confirm that's also the case using AI to iterate on something you're unfamiliar with - shaders in my example very recently. Starting off with a fully working shader that did most of the cool things I wanted it to do, generated by a prompt, was super cool and motivating to me - and then as I iterated on it and incorporated different things into it, with or without the AI, I learned a lot about shaders.
Overall, I don't think the author's appraisal is entirely wrong, but the result isn't necessarily a bad thing. Motivation to accomplish things has always been the most important factor, and now other factors are somewhat diminished while the motivation factor is amplified. Intelligence and expertise can't be discounted, but the importance of front-loading them can easily be overstated.
I recently traveled to San Francisco and as an outsider this was pretty much the reaction I had.
(on the other hand, in DC there's ads on the metro for new engine upgrades for fighter jets, and i've gotten used to that.)
I do get that it is not nice to be constantly reminded of work. Trees would make a nicer view.
I think that I shall never see
A billboard lovely as a tree
Indeed, unless the billboards fall
I’ll never see a tree at all.
Song of the Open Road - Ogden Nash

Linux gets some fame and recognition; meanwhile, OpenBSD and FreeBSD are the ones that power routers, CDNs, and so much other cool shit, while also being legit good systems that even deserve attention for the desktop.
And of course, there's no downside for the investors. If you backed a con artist, you're not culpable - you're a victim.
Why wouldn't investors give these people money? It's not like being an investor implies having morals; all they care about is making money, whether it's legal or not, and luckily for them crime not only pays but is legal now too.
Most VCs have no idea how to accurately judge startups based on their core merit, or how to make good decisions in startups (though they may think they do), so instead they focus on things like "will this founder be able to hype up this startup and sell the next round so I can mark it up on my books".
I can believe that. But just a couple of years ago it was clearly happening because the VCs wanted those people to sell the companies to some mark and return real money to them. I wonder when the investors became the marks.
The hardest part of startups is probably the making-good-decisions part. To be a good VC you need to be better than founders at judging startup decisions, AND you need to be good at LP deal flow, AND you need to be good at startup deal flow. LP deal flow has to come first (otherwise there is no fund), and because of ZIRP a lot of VCs got funds up without good startup deal flow or the ability to judge startups well.
In other words, it's hard to be a good VC too, but for a while it was artificially easy to be a bad VC.
Basically: nobody wants AI, but soon everyone needs AI to sort through all the garbage being generated by AI. Eventually you spend so much time managing your AI that you have no time for anything else, your town has built extra power generators just to support all the AI, and your stuff is more disorganized than before AI was ever invented.
I do have a deep fondness for SF billboards being building-stuff oriented. I don't care for consumerism.
The vapidity of the products created is remarkable, however.
San Francisco is a tolerant place. Tolerance is how you get Juicero or Theranos and whatever Cluely seems to have pivoted to, but it’s also how you get Twitter, Uber, Dropbox.. and thousands of others.
So it is crucial to consider proportionality. Taking some bad with some good results in getting a little bit of bad and a hell of a lot of good. But if you aren’t careful, all you’ll see is the bad.