But for me the biggest issue with all this, one I don't see covered in here (or maybe just a little bit in passing), is what all of this is doing to beginners and the learning pipeline.
> There are people I once respected who, apparently, don’t actually enjoy doing the thing. They would like to describe what they want and receive Whatever — some beige sludge that vaguely resembles it. That isn’t programming, though.
> I glimpsed someone on Twitter a few days ago, also scoffing at the idea that anyone would decide not to use the Whatever machine. I can’t remember exactly what they said, but it was something like: “I created a whole album, complete with album art, in 3.5 hours. Why wouldn’t I use the make it easier machine?”
When you're a beginner, it's totally normal to not really want to put in the hard work. You try drawing a picture, and it sucks. You try playing the guitar, and you can't even get simple notes right. Of course a machine where you can just say "a picture in the style of Pokémon, but of my cat" and get a perfect result out is much more tempting to a 12-year-old kid than the prospect of having to grind for 5 years before being kind of good.
But up until now, you had no choice but to keep making crappy pictures and playing crappy songs until you actually start to develop a taste for the effort, and a few years later you find yourself actually pretty darn competent at the thing. That's a pretty virtuous cycle.
I shudder to think where we'll be if the corporate-media machine keeps hammering the message "you don't have to bother learning how to draw, drawing is hard, just get ChatGPT to draw pictures for you" to young people for years to come.
Tech never prevents people who really want to hone their skills from doing so. The 100m sprint world record kept improving even after the car was invented. The world record for memorizing digits of pi kept improving even though a computer does it incomparably better.
It's ridiculous to think drawing will become a lost art because of LLMs/diffusion models when we live in a reality where powerlifting is a thing.
As ever, the standard defence of LLMs and all gen AI tech rests on this reduction of complex subjectivity to something close to objectivity: the picture looks like other pictures, therefore it is a good picture. The sentence looks plausibly like other sentences, therefore it is a good sentence. That this argument is so pervasive tells me only that the audience for 'creative work' is already so inundated with depthless trash that they can no longer tell the difference between painting and powerlifting.
It is not the artists who are primarily at risk here, but the audience for their work. Artists will continue to disappear for the same reason they always have: because their prospective audience does not understand them.
The majority of artists, and of all other groups, are in fact mediocre with mediocre virtues, so enough incentives would turn most of them into Whatever shillers like the post describes.
So a non-expert cannot easily determine, even if they do stumble upon “Serious art” by happenstance, whether it’s just another empty scheme or indeed something more serious.
Maybe if they spend several hours puzzling over the artist’s background, incentives, network, claims, past works, etc., they can be 99% sure. But almost nobody likes any particular piece of work that much on first glance, so almost nobody puts in that much effort.
Anyone can claim to have “real art”.
Far fewer people make their living as musicians than did even thirty years ago, and being a musician is no longer a viable middle-class career. Jaron Lanier, who has written on this, has argued that it's the direct result of the advent of the internet, music piracy, and streaming -- two of which originally were expected or promised to provide more opportunities for artists, not take them away.
So there really are far fewer drummers, and fewer, worse opportunities for those who remain, than there were within the living memory of even most HN users, not because some specific musical technology advanced but because technological advancement provided an easier, cheaper alternative to human labor.
Sound familiar yet?
What he is comparing against was a brief moment in history when the music industry was at its absolute peak.
We have just gone back to the norm: most people can't make money being a musician, just as being an actor is not really a viable middle-class career option.
Sure, when I graduated high school you could make a living in a local rock band, because everyone wanted to be in a band to be the next Guns N' Roses.
To me, it is like how even Hitler wanted to be a painter, because everyone wanted to be a painter at that time. The way everyone wanted to be a rock star when I was a teenager.
Times change and collective artistic tastes change with them. So many musicians are doing better than ever before because of YouTube too.
I play the baroque lute and I can tell you that it is much tougher to get a gig in a bar today than it was in 1650 in France.
The best lutenists, though, are killing it on YouTube with Bach videos.
Could you provide data defending this claim? Without it, and even with it, all I see in your comment is that you're begging the question or shrugging your shoulders at the data and saying, "so what," not actually or substantively disagreeing with anything Lanier has said or written.
What caused the decline? You seem very sure you know the answer, and yet your answer basically seems to be to stop asking the question or investigating: "music was at its peak, so obviously it declined." If music was at some absolute peak, why was that? "It was at its peak" isn't an answer. It's a restatement of the question.
And can you show me that there were fewer musicians per capita, making less money in adjusted terms, twenty or thirty years earlier?
And do you have any data showing that more than a tiny, minuscule fraction of musicians are doing "better than ever before" thanks specifically to YouTube? "So many" is slippery and frustratingly difficult to quantify in a manner that lets me evaluate its accuracy.
What's your basis for this claim? Please provide some data showing the number of drummers over time, or at least musicians, over the last fifty years or so. I tried searching and couldn't find anything, but you're so confident, I'm sure you have a source you could link.
How many musicians or artists are finding their need to explore similarly met by opportunities that simply didn’t exist in 2002? If art is expression, then we should expect the people who might have wielded a brush or guitar to be building software instead.
If this is you, I recommend Rick Rubin’s The Creative Act. It’s as pure an expression of the way I like to work in music, as it is aligned with how I think about code and product design.
I would not buy a calculator that hallucinated wrong answers part of the time. Or a microwave oven that told you it grilled the chicken when it didn't, and you die of Salmonella poisoning.
(I’ve heard the fans that you hear are there to reflect the microwaves and make them bounce all over the place, but I don’t know if that’s true. Regardless, most models have a spinning plate which will constantly reposition the food as it cooks.)
Composition is part of it, but it isn’t the whole story. A microwave oven is a resonant cavity. There are standing electromagnetic waves in there, in several different modes. They have peaks and nulls. That’s why many microwaves have a rotating plate. It physically moves the food relative to the standing waves.
Older microwaves had a fan-like metal stirrer inside the cooking box that would continuously re-randomize where the waves went. This has been out of fashion for several decades.
Like:
1- Put it on the edge of the plate, not in the middle
2- Check every X seconds and give it a stir
3- Don't put metal in
4- Don't put sealed things in
5- Adjust time for wetness
6- Probably don't put in dry things? (I believe you need water for a microwave to really work? Not sure, haven't tried heating a cup of flour or making caramel in the microwave)
7- Consider that some things heat weirdly; for example, bread heats stupid quick and then turns to stone equally quick once you take it out.
...
So should AI also indicate this to me? That the job will suck, and there would be bad coworkers around me in the job?
Obviously if one product hallucinates and one doesn't, it's a no-brainer (cough, Intel FPUs). But in a world where the only calculators available hallucinated at the 0.5% level, you'd probably still have one in your pocket.
And obviously if the calculator hallucinated 90% of the time on a task which could otherwise be automated, you'd just use that other approach.
Slide rules are good for only a couple of digits of precision. That's why shopkeepers used abacuses, not slide rules.
I have a hard time understanding your hypothetical. What does it mean to hallucinate at the 0.5% level? That repeating the same question has a 0.5% chance of giving the wrong answer but otherwise it's precise? In that case you can repeat the calculation a few times to get high certainty. Or that even if you repeat the same calculation 100 times and choose the most frequent response then there's still a 0.5% chance of it being the wrong one?
Or that values can be consistently off by within 0.5% (like you might get from linear interpolation)? In that case you are a bit better than a slide rule for estimating, but not accurate enough for accounting purposes, to name one.
Does this hypothetical calculator handle just plus, minus, multiply, and divide? Or everything that a TI 84 can handle? Or everything that WolframAlpha can handle?
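On the first interpretation (each run independently wrong 0.5% of the time), repetition does drive the error down fast. A minimal sketch, assuming independent errors and, pessimistically, that all wrong runs agree on the same wrong answer:

    from math import comb

    def majority_wrong(p_err: float, n: int) -> float:
        """P(majority of n independent runs is wrong), assuming each run
        errs with probability p_err and all wrong runs agree (worst case)."""
        return sum(comb(n, k) * p_err**k * (1 - p_err)**(n - k)
                   for k in range(n // 2 + 1, n + 1))

    for n in (1, 3, 5):
        print(n, majority_wrong(0.005, n))
    # 1 -> 5.0e-03, 3 -> ~7.5e-05, 5 -> ~1.2e-06

Three repeats already push a 0.5% per-run error below one in ten thousand; a consistent bias (the second interpretation) can't be voted away like this, though.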
If you had a slide rule and knew how to use it, when would you pay $40/month for that calculator service?
https://en.wiktionary.org/wiki/couple - "(informal) a small number"
FWIW, "Maximum accuracy for standard linear slide rules is about three decimal significant digits" - https://en.wikipedia.org/wiki/Slide_rule
While yes, "Astronomical work also required precise computations, and, in 19th-century Germany, a steel slide rule about two meters long was used at one observatory. It had a microscope attached, giving it accuracy to six decimal places" (same Wikipedia page), remember that this thread is about calculating devices one might carry in one's pocket, have on one's person, or otherwise be able to "grab".
(There's a scene in a pre-WWII SF story where the astrogators on a large interstellar FTL spacecraft use a multi-meter long slide rule with a microscope to read the vernier scale. I can't remember the story.)
My experience is that I can easily get two digits, but while I'm close to the full three digits, I rarely achieve it, so I wouldn't say you get three decimal digits from a slide rule of the sort I thought was relevant.
I'm a novice at slide rules, so to double-check I consulted archive.org and found "The slide rule: a practical manual" at https://archive.org/details/sliderulepractic00pickrich/page/...
> With the ordinary slide rule, the accuracy obtainable will largely depend upon the precision of the scale spacings, the length of the rule, the speed of working, and the aptitude of the operator. With the lower scales it is generally assumed that the readings are accurate to within 0.5 per cent. ; but with a smooth-working slide the practised user can work to within 0.25 per cent
That's between 2 and 3 digits. You wouldn't do your bookkeeping with it.
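For concreteness, converting those relative errors into significant digits with the usual -log10 rule of thumb (my arithmetic, not the manual's):

    from math import log10

    # A relative error of r distinguishes about 1 part in 1/r,
    # i.e. roughly -log10(r) significant decimal digits.
    for rel_err in (0.005, 0.0025):
        print(f"{rel_err:.2%} -> {-log10(rel_err):.1f} digits")
    # 0.50% -> 2.3 digits, 0.25% -> 2.6 digits: between 2 and 3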
New Merriam-Webster dictionary, 1989, def. 4 "an indefinite small number" - https://archive.org/details/newmerriamwebste00spri/page/180/...
Pocket Oxford English dictionary, 2005, def. 3 "(informal) an indefinite small number" - https://archive.org/details/pocketoxfordengl0000unse_p5e4/pa...
The Random House college dictionary, 1975, def. 6, "a couple of, (Informal) a small number of, a few"
Shopkeepers did integer math, not decimal. They had no need for a slide rule: an abacus is faster at integer math, while a slide rule is for dealing with real numbers.
We teach our kids about microwave oven safety for this reason.
As an anecdote, in my country there is a very popular brand of supermarket pizzas, Casa Tarradellas. I never buy them but a friend of mine used to eat them really frequently. So once he shows up at my house with one, and I say OK, I'm going to heat it. I serve it, he tries a bite and is totally blown away. He says "What did you do? I've been eating these pizzas for years and they never taste like this, this is amazing, the best Casa Tarradellas pizza I've ever had".
The answer was that he used the microwave and I had heated it in the regular oven...
I have never had that issue when heating stuff up. Your pizza example is not reheating (and generally you never want to reheat anything that’s supposed to be crispy in the microwave; though not on the stove top either).
They do a lot more things though, which microwaves don't. Pizza, for example, has to be cooked properly, not with a microwave. If I can only have one, I'll take the mini conventional oven.
(thanks Rory Sutherland for this analogy)
If an LLM hallucinates and you don't know better, it can be bad. Hopefully people are double checking things that really matter, but some things are a little harder to fact check.
Alec Watson of Technology Connections points out that GPS routing defaults to minimizing time, even when that may not be the most logical way to get somewhere.
His commentary, which starts at https://youtu.be/QEJpZjg8GuA?t=1804 , is an example of his larger argument about the complacency of letting automation do things for you.
His example is a Google Maps route which saves one minute by going a long way round to use a fast expressway (plus a $1 toll), rather than more direct but slower state routes and surface streets. It optimizes one variable - time - out of the many which might be important to you: wear and tear, toll costs, and the delight of knowing more about what's going on in the neighborhood.
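As a sketch of what "optimize for more than time" could look like: score each road segment with a weighted sum of the variables you care about, and feed that to the shortest-path search. The fields and weights below are made-up illustrations, not anything Google Maps actually exposes:

    from dataclasses import dataclass

    @dataclass
    class Segment:
        minutes: float
        toll_usd: float
        km: float  # stand-in for wear and tear

    def cost(s: Segment, w_time=1.0, w_toll=5.0, w_km=0.2) -> float:
        # One scalar per segment; any shortest-path algorithm (e.g. Dijkstra)
        # can minimize this instead of minutes alone.
        return w_time * s.minutes + w_toll * s.toll_usd + w_km * s.km

    expressway = Segment(minutes=20, toll_usd=1.0, km=25)
    backroads  = Segment(minutes=21, toll_usd=0.0, km=15)
    print(cost(expressway), cost(backroads))  # 30.0 vs 24.0: the slower route wins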
He makes the point that he is not calling for a return to paper maps, but rather for rejecting automation complacency, which I'll interpret as letting the GPS figure everything out for you.
We've all heard stories of people depending on their GPS too much then ending up stuck on a forest road, or in a lake, or other place which requires rescue - what's the equivalent failure mode with a calculator?
I'm also aware of the failure modes with GPS complacency, including its incomplete knowledge of the variables that I find important.
And that's with something that makes mistakes far less often than LLMs and related technology.
Which is why I don't think that your mention of GPS use is a strong counter-example to bryanrasmussen's comment against using hallucinating devices.
I use it all the time, pretty much zero issues.
I don't know how to respond to your comment in that context.
Do you double-check your calculator all the time, to ensure it's not giving you the wrong answer?
As to Alec Watson's commentary about GPS, how do you know the area well enough to make judgements if you always follow the GPS routing which avoids the neighborhood?
If I spend a lot of time in an area I learn it and don't need GPS to navigate it; however, I might use GPS just to find specific addresses, as I don't usually memorize every street name. I also usually find that Google Maps chooses perfectly sensible routes anyway, so I don't see much point in trying to second-guess it. Oh, maybe I can save a minute or two, or a few kilometers, by avoiding a highway; honestly, who cares? I certainly don't. It will usually offer multiple route alternatives anyway, so your ideal route or something close to it is probably among them.
However, I do have a pressure cooker and a rice cooker that get a lot of use. They're extremely reliable, don't use much electricity, and I can schedule what they do, which means bulk cooking without me having to care about it while it happens.
I just type the address into Google maps, or place a pin manually, then hit the start button. It'll tell me every step of the way. Keep right at the fork. In a hundred meters, turn left. Turn left. Take the second exit in the roundabout. Your destination is a hundred meters ahead on the right.
It's great and it works almost flawlessly. Even better if you have a passenger able to keep an eye on it for those times when it isn't flawless.
Citation needed.
Being good at coming up with ideas, at critically reading something, at synthesizing research, at writing and editing, are all things that take years to learn. This is not the same as learning the mechanics that a calculator does for you.
In two years, that won't be the case.
It's the same for virtually all other arts-based jobs. An economy that currently supports, say, 100% of those people will at most be able to support 10-30% in a few years' time.
> It's ridiculous to think drawing will become a lost art because of LLMs/diffusion models
Map reading is pretty much a dead art now (as someone who leads hikes, I've seen it first hand)
Memorising books/oral history is also a long dead art.
Oral storytelling is also a dead art, as is folk music, compared to its peak.
Sure, _rich_ people will be able to do all the arts they want. Everyone else won't.
For example, I have no knowledge of film editing or what “works” in a sequence, but if I wanted to I could create something more than passable with AI.
Why would someone buy a plate off her, when they could get one from IKEA for 1.50 eur?
Yet ceramics is not a dead art. Beats me?
But 200 years ago there were loads of ceramics manufacturers, employing hundreds of thousands of skilled potters. Even 50 years ago, there were thousands of skilled ceramists in the UK. Now it's single-person artisans, like your very talented other half.
Now, that reduction in workforce took 200 years and mirrors the industrial revolution. GenAI looks like it's going to speed-run that in ~5-7 years.
I should be more clear: there is a difference between a dead art (memorizing stories) and a career that's non-viable for all but 1% of the people it supports now. I'm talking about the latter.
curious rich people.
Obesity rates have kept "improving" since the car was invented, to the point of becoming a major public health crisis and the main amplifier of complications and mortality when the pandemic struck.
Oh, and the 100m sprint world record has stood for more than a decade and a half now, which means either we've reached the human optimum, or progress in anti-doping technology has forced a regression in performance.
A very good example! (...although probably not how you think it is ;)
Indeed the world record is achieved by a very limited number of people under stringent conditions.
Meanwhile people by and large† take their cars to go to the bakery which on foot would be 10 minutes away, to disastrous effect on their health.
And by "cars" I mean "technology", which, while a fantastic enabler of things impossible before, has turned more people into couch potatoes than athletes.
† Comparatively to world record holders.
No it's not (like OP's article says). With a calculator you punch in 10 + 9 and get 2 immediately, and this was 50+ years ago. With an LLM you type in "what is 10 + 9" and get three paragraphs of text after a few seconds. (this is false, I just tried it and the response is "10 + 9 = 19" but I'm exaggerating for dramatic effect). With a microwave you yeet in food and press a button and stuff happens the same way, every time.
Sure, if you abstract it to "doing things in an easier and lazier way", LLMs are just the next step, like IDEs with built-in error checking and code generation have been for 20 years. But it's more vague than press-button-to-do-a-thing.
Your calculator is broken.
> With an LLM you type in "what is 10 + 9" and get three paragraphs of text after a few seconds. (this is false, I just tried it and the response is "10 + 9 = 19" but I'm exaggerating for dramatic effect).
So you’re arguing against a strawman?
You generally don't need a lengthy explanation because it's common sense. When someone doesn't get it, people have to go into lengthy, convoluted explanations, because they are trying to elucidate common sense to someone who doesn't get it.
I mean how else do I elucidate it?
LLMs are different from any revolutionary technology that came before. The first thing is that we don't understand them conceptually: we understand the learning algorithm that trains the weights, but not how the resulting model works. They are black boxes, and we have limited control over them.
You are talking to a thing that understands what you say to it, yet we don't understand how it works. Nobody in the history of science has created anything similar. And yet we get geniuses like you who, with a simple analogy, reduce the creation of an LLM to something like the invention of the car and think there's utterly no difference.
There is a sort of inflection point here. It hasn't happened yet but a possible future is becoming more tangible. A future where technology surpasses humanity in intelligence. You are talking to something that is talking back and could surpass us.
I know the abundance of AI slop has made everyone numb to the events of the past couple of years. But we need to look past that. Something major has happened, something different from the achievements and milestones humanity has reached before.
Maybe you're new here friend...
Perhaps you do not understand it, but many software engineers do understand.
Of the human brain we can still say that we don't understand it.
No, they do not. LLMs are by nature a black-box problem-solving system. This is not true of all the other machines we have, which may be difficult for specific or even most humans to understand, but which allow specialists to understand WHY something is happening. That question is unanswerable for an LLM, no matter how good you are at Python or the math behind neural networks.
Let me put it plainly: if we understood LLMs, we would understand why hallucinations happen, and we would subsequently be able to control and stop them. But we can't. We can't control the LLM because of a lack of understanding.
All the code is available on a computer for us to modify every single parameter. We have full access, and we still can't control the LLM, because we don't understand or KNOW what to do. This is despite the fact that we have absolute control over the value of every single atomic unit of an LLM.
Wrong.
> You are talking to a thing that understands what you say to it
Wrong.
Not trying to be insulting here. But genuinely, if you think humanity is better than AI, why is your response to me objectively WORSE than AI slop?? Prove your own statement by being better yourself; otherwise your own statement is in itself proof against your point.
The industrial revolution automated a lot of blue-collar work; AI is starting to seriously automate white-collar work, to the point that fewer people are needed.
They are automating the mind; there's not much else left with which to provide value. Which color should people pivot to?
Even though that is a generalization that you cannot prove, you implicitly admit that it will prevent everybody else from getting any skills. Which is quite a bad outcome.
> powerlifting is a thing
Those people have a different motivation: looks, competition, prestige, power. That doesn't motivate people to learn to draw.
Your easy dismissal is undoubtedly shared by many, but it is hubris.
Well you sure showed them.
TFA makes a very concrete point about how the Whatever machine is categorically different from a calculator or a handsaw. A calculator doesn't sometimes hallucinate a wrong result. A saw doesn't sometimes cut wavy lines instead of straight ones. They are predictable and learnable tools. I don't see anyone addressing this criticism, only straw-manning.
I guess the analogy isn't that bad! I'd be pretty upset if a professional cook made my steak in a microwave.
There are clear differences. First of all, a calculator and a microwave are quite different, but so is an LLM. Both are time savers, in the sense that a microwave saves time defrosting and a calculator saves time calculating versus a human.
They save time to achieve a goal. However, calculators come with a penalty: by making multiplication easier, they make the user worse at it.
LLMs are like calculators, but worse. Both are effort savers, and thus come with a huge learning penalty, and LLMs are imprecise enough that you need to learn to know better than them.
It’s time to move on. The history of tech is a steady march of tools that demand less prep, less precision, and less friction from their users.
All this hand-wringing seems to show is that a workforce whose aggregate work ethos usually mocks "get off my lawn" attitudes in others was hiding a significant "but I never thought it would happen to me", often couched in some variety of "but think of the children".
Besides, it could be worse: it’s not like when other professions went obsolete practically overnight, like switchboard operators, or cuirassiers.
We go from a society where only a very few people are literate in math to one where everyone has a literal supercomputer at all times. What do you think that would do for math literacy in a society? Would everyone suddenly want to learn algebra and calculus? Or would the vast majority of people use the easy machine and accept its answers without question or understanding?
We need to learn to make technology truly benefit the many. Also in terms of power.
I've heard people express that they liked working in retail. By extension somebody must have enjoyed being a bank teller. After all, why not? You get to talk to a lot of people, and every time you solve some problem or answer a question and get thanked for it you get a little rush of endorphins.
Many jobs that suck only suck due to external factors like having a terrible boss or terrible customers, or having to enforce some terrible policy.
But imagine working in a nice cafe in a quiet small town, or a business that's not too frantically paced, like a clothing store. Add some perks like not always doing the same job and a decent boss, and it shouldn't be too bad. Most any job can be drastically improved by decreasing the workload, cutting hours and adding some variety. I don't think being a cashier is inherently miserable. It's just the way we structure things most of the time makes it suck.
Just like you think a human touch makes art special, a human touch can make a mundane job special. A human serving coffee instead of a machine can tell you about what different options are available, recommend things, offer adjustments a machine might not, chat about stuff while it's brewing... You may end up patronizing a particular cafe because you like the person at the counter.
There's a common fallacy that tries to argue that it'll be alright over time, no matter what happens. Given enough time, you can also say that about atomic wars. But that won't help the generations that are affected.
If you just sit on your hands complaining about the lack of opportunities then you won't get any sympathy from me. People aren't entitled to live wherever they want, humanity's entire thing is adaptability. So adapt. Life is what you make it.
I wouldn't be surprised if at some point in the near future something like "Adapt. Life is what you make it" could be read in big bold letters above the entrance of a place like Alligator Alcatraz.
I hadn't heard about alligator Alcatraz until now, I'm not American so I don't keep up with all of Trump's shenanigans. I feel compelled to make it clear that I in no way support Trump. The fact that the US has elected that clown not once but twice is frankly embarrassing.
People adapt to all kinds of stuff all the time. Saying adapting isn't a thing for most people is ridiculous. Of course it's a thing. It's what you do when your current situation isn't working. You adapt.
Sure, you can smugly say that the hard-working will survive. But I don't want to imagine what the USA will be like with millions of unemployed and under-provisioned Americans. Poverty and the process of falling off the socioeconomic ladder are ugly for everyone, unless you're wealthy enough to afford to insulate yourself from the consequences.
The fact that this nuance appears to be lost on you makes me suspicious of your motives for posting your opinion.
The ironic thing in all this is that these rural people you're talking about are probably the exact people responsible for electing him. Evokes images of leopards eating faces and such.
That said, yes, what about them? These are people with real skin in the game - people who spent years learning their craft expecting it to be their lifelong career.
Do we simply exclaim "sucks to be you!"?
Do we tell out-of-work coal miners to switch to a career in programming with the promise it will be a lucrative career move? And when employment opportunities in software development collapse, then what?
All while we increasingly gate health care on being employed?
Software dev opportunities won't collapse any time soon, any half decent dev who's tried vibe coding will tell you that much. It's a tool developers can use, it's not a replacement.
What's your solution to the miners of West Virginia?
https://www.wvva.com/2025/06/25/coal-miners-face-layoffs-fed...
"As West Virginians face possible cuts to Medicaid and SNAP, they are also being hit hard in the job market."
"“I’m worried for the people that are laid off, and are they going to be able to find another job? You know, are they my age? How are you going to start over? You’ve got to find a job back in what you know, because you can’t start over at my age,” said Ricky Estes, a former Coal Mining Safety Representative, who was laid off. "
"Even before these possible cuts, affordable healthcare can be hard to find currently in the mountain state"
I mention mining -> programming because that was the hyped solution a decade ago, eg, https://www.wtrf.com/community/from-coal-to-coding-new-progr... .
How well did that work out?
I wasn't talking about the recent LLM fad, but rather the decades of mass government funding of STEM[1], and programming training in particular (like Joe Manchin's Mined Minds), with the carrot of a high-paying job at the end, leading to a surplus of coders who, as a result, flood the job market and lower salaries and individual employee power.
[1] STEM government funding doesn't seem to end up in, say, marine biology or sociology or the theory of unbounded operators or other fields of science and math that don't make companies a lot of money.
I'm not opposed to having programs to help these people, not at all. I'm from Norway where we have free healthcare, education, social security nets etc. I'm all for that stuff, it benefits all of us.
All I'm saying is if new opportunities don't fall into your lap you need to find them yourself.
Or other jobs in West Virginia, with its long history of coal mining, with profits ending up in the pockets of mine owners, not employees.
Since you think people are only looking for jobs that fall in their lap, I'm certain you have no idea of the issues.
So, now you need to find a new job yourself, and it requires specific training, so you spend your savings on a one-year training program, only to find that, once done, the job market has changed and now you also need two years of job experience... then what?
In the meanwhile, your breathing has gotten more difficult. You think it might be black lung. Your union helped pass the law which helps provide health and financial support to miners who get black lung, and you became a miner expecting this protection, but Trump DOGE'd it so who knows when you'll get your legally required support.
I also can't say I have much sympathy for people who voted for Trump and then got screwed by Trump. West Virginians overwhelmingly voted red. You can't vote against social security nets and then complain that you don't have a social security net.
Pull yourselves up by your bootstraps, red blooded patriots. No one can tread on you.
It’s why it’s so exciting.
What's the benefit of LLMs to the many who can barely operate a search engine?
I am sorry, but thinking this will benefit the many is delusional. Its main use is making rich people richer by saving expenses on people's wages. Tell me, how are these people going to get a job once their skills are made useless?
Now with Claude Code, I've cleaned up years of technical debt, added proper test coverage, got everything building in CI again with automated release-on-tag. (Yes, CC will literally debug your GitHub Actions yaml.) My blind users are getting updates again after years of nothing.
I'm nobody special. By basic statistics, if LLMs are this useful to one random developer, they're probably useful to millions of others maintaining their own small projects.
Yes, job displacement is a real concern. But the idea that LLMs [only] help the wealthy get wealthier? I'm living proof that's not true. I'm using them to resurrect accessibility tools that the market wouldn't support. That's not exactly a venture capital use case.
Tech Company: At long last, we have created the Torment Nexus from classic sci-fi novel Don't Create The Torment Nexus
(Original source: https://xcancel.com/AlexBlechman/status/1457842724128833538?... )
In terms of the artist being accessible to overseas fans it's a great improvement, but I do wonder if I had grown up with this, would I have had any motivation to learn?
For a specific example, when 2 grammar points seem to mean the same thing, teachers here in Japan would either not explain the difference, or make a confusing explanation in Japanese.
It's still private-ish/only for myself, but I generated all of this with LLMs and am using it to learn (I'm around N4~N3):
- Grammar: https://practice.cards/grammar
- Stories, with audio (takes a bit to load): https://practice.cards/stories
It's true though that you still need the motivation, but there are 2 sides of AI here and just wanted to give the other side.
But my man, how do you know if it explains perfectly well or is just generating plausible-sounding slop? You're learning, so by definition you don't know!
I also checked with some Japanese people, and my own notes contain more errors than the LLM's output by a large margin.
Similarly, I used the Busuu app for a while. One of its features is that you can speak or type sentences, and ask native speakers to give feedback. But of course, you have to wait for the feedback (especially with time zone differences), so they added genAI feedback.
Like, what’s the point of this? It’s like that old joke: “We have purposely trained him wrong, as a joke”!
It's killing the accumulative, progressive way of learning that rewards those who try and fail many times before getting it right.
The "learning" is effectively starting to be killed.
I just wonder what would happen to a person after many years of using "AI" and suddenly not having access to it. My guess is that you become useless, with a highly diminished capacity to perform even the most basic things by yourself.
This is one of many reasons why I'm so against all the hype that's going on in the "AI" space.
I keep doing things the old school way because I fully comprehend the value of reading real books, trying, failing and repeating the process again and again. There's no other way to truly learn anything.
Does this generation understand the value of it? Will the next one?
The only silver lining I can see is that a new perspective may be forced on how well or badly we’ve facilitated learning, usability, generally navigating pain points and maybe even all the dusty presumptions around the education / vocational / professional-development pipeline.
Before, demand for employment/salary pushed people through. Now, if actual and reliable understanding, expertise and quality is desirable, maybe paying attention to how well the broader system cultivates and can harness these attributes can be of value.
Intuitively though, my feeling is that we’re in some cultural turbulence, likely of a truly historical magnitude, in which nothing can be taken for granted and some “battles” were likely lost long ago when we started down this modern-computing path.
At any point of progress in history you can look backwards and forwards and the world is different.
Before tractors a man with an ox could plough x field in y time. After tractors he can plough much larger areas. The nature of farming changes. (Fewer people needed to farm more land. )
The car arrives, horses leave. Computers arrive, the typing pool goes away. Typing was a skill, now everyone does it and spell checkers hide imperfections.
So yeah LLMs make "drawing easier". Which means just that. Is that good or bad? Well I can't draw the old fashioned way so for me, good.
Cooking used to be hard. Today cooking is easy, and very accessible. More importantly good food (cooked at home or elsewhere) is accessible to a much higher % of the population. Preparing the evening meal no longer starts with "pluck 2 chickens" and grinding a kilo of dried corn.
So yeah, LLMs are here. And yes things will change. Some old jobs will become obsolete. Some new ones will appear. This is normal, it's been happening forever.
There's also the principle of the matter that we shouldn't have to pay for a share of something that was built using our collective unpaid labor/property without our consent.
Firstly the "theft component" isn't exactly new. There have always been rich and poor.
Secondly everyone is standing on the shoulders of giants. The Beatles were influenced by the works of others. Paul and John learned to write by mimicking other writers.
That code you write is the pinnacle of endless work done by others. By Ada Lovelace, and Charles Babbage, and Alan Turing, and Brian Kernighan and Dennis Ritchie and Doug Engelbart, and thousands and thousands more.
By your logic the entire output of all industries for all foreseeable generations should be universally owned. [1]
But that's not the direction we have built society on. Rather society has evolved in the US to reward those who create value out of the common space. The oil in Texas doesn't belong to all Texans, it doesn't belong to the pump maker, it belongs to the company that pumps the oil.
Equally there's no such thing as 'your data'. It's your choice to publish or not. Information cannot be 'owned'. Works can be copyrighted, but frankly you have a bigger argument on that front going after Google (and Google Books, not to mention the Internet Archive) than AI. AI may train on data you produced, but it does not copy it.
[1] I'm actually for a basic income model, we don't need everyone working all day like it's 1900 anymore. That means more taxes on companies and the ultra wealthy. Apparently voters disagree as they continue to vote for people who prefer the opposite.
The two parties that end up viable tend to be financed quite heavily by said wealthy, including being proped by the media said wealthy control.
The more right-wing side will promise tax cuts (including for the poor, which don't seem to materialize), while the more left-wing side will promise to tax the rich (but in an easily dodgeable way that only ends up affecting the middle class).
Many people understand this and it is barely part of the consideration in their vote. The last election in the US was a social battle, not really an economic one. And I think the wealthy backers wanted it that way.
I would contest some of your points though.
Firstly, not every country votes, and not all that vote have 2 viable parties, so that's a flaw in your argument.
Equally, most elections produce a winner. That winner can, and does, get stuff done. The US is paralyzed because it takes 60% to win the Senate, which hasn't happened for a while. So US elections are set up so "no one wins", which of course leads to the overreach etc. that we're seeing currently.
There's a danger when living inside a system that you assume everywhere else is the same. There's a danger when you live in a system that heavily propagandizes its own superiority, that you start to feel like everywhere else is worse.
If we are the best, and this system is the best, and it's terrible, then clearly all hope is lost.
But what if, maybe, just maybe, all those things you absolutely, positively know to be true are not true? Is that even worth thinking about?
But I know people whose preference would be something like Ron Paul > Bernie Sanders > Trump > Kamala, which might sound utterly bizarre until you realize that there are multiple factors at play and "we want tax cuts for the rich" is not one of them.
People are welcome to whatever preference they like. Democracy lets them choose. But US democracy is deliberately designed to prefer the "no one wins" scenario. That's not the democracy most of the world uses.
The difference is that, for better or worse, our society chose to follow the model that artists own the rights to their work. That work was used for commercial purposes without the consent of the artists. Therefore it's theft.
I actually do believe all industries should be worker owned because the capitalists have proven they can't be trusted as moral and ethical stewards, but I'm specifically talking about GenAI here.
I think it's disingenuous to say that people have a choice to publish data or not in an economic system that requires them to publish or produce in order to survive. If an artist doesn't produce goods, then they aren't getting paid.
Also this is kind of a pedantic rebuttal but the GenAI software technically does first have to copy the data to then train on it :) But seriously, it can be prompted to reproduce copyrighted works and I don't think the rights holders particularly care how that happens, rather that it can and does happen at all.
This is 100% just the mechanization of a cultural refinement process that has been going on since the dawn of civilization.
I agree with you regarding how the bounty of GenAI is distributed. The value of these GenAI systems is derived far more from the culture they consume than the craft involved in building them. The problem isn't theft of data, but a capitalist culture that normalizes distribution of benefit in society towards those that are already well off. If the income of those billionaires and the profits of their corporations were more equitably taxed, it would solve a larger class of problems, of which this problem is an instance.
If we don't have something better to do we'll all be at home doing nothing. We all need jobs to afford living, and already today many have bullshit jobs. Are we going to a world where 99.9% of the people need a bullshit job just to survive?
>> We all need jobs to afford living
In many countries this is already not true. There is already enough wealth that there is enough for everyone.
Yes, the Western mindset is kinda "you don't work, you don't get paid". The idea that people can "free load" on the system is offensive at a really deep emotional level. If I suggest that a third of the people work, and the other two thirds do nothing but get supported, most will get distressed [1]. The very essence of US society is that we are defined by our work.
And yet if 2% of the work force is in agriculture, and produce enough food for all, why is hunger a thing?
As jobs become ever more productive, perhaps just -considering- a world where worth is not linked to output is a useful thought exercise.
No country has figured this out perfectly yet. Norway is pretty close. Lots of Europe has endless unemployment benefits. Yes, there's still progress to be made there.
[1] Of course, even in the US, it's already OK for only a third to work. Children don't work. Neither do retirees. Both essentially live off the labor of those in between. But imagine if we keep raising the start-working age while reducing the retirement-benefits age....
And universally, if you have nothing, you lead a very poor life. You live in minimal housing (a trailer park, slums, or housing without running water or working sewage). You don't have a car, you can't travel, and education opportunities are limited.
Most kids want to become independent, so they have control over their spending and power over their own lives. Poor retirees are unhappy, sometimes even have to keep working to afford living.
Norway is close because they have oil to sell, but if no one can afford to buy oil, or cars, or other products made with oil, Norway will soon run out of money.
You can wonder why Russia is attacking Ukraine; Russia has enough land and doesn't need more. But in the end there will always be people motivated by more power and money, which makes it impossible to create this communism 2.0 that you're describing.
I'm not suggesting equality or communism. I'm suggesting a bottom threshold where you get enough even if you don't work.
Actually Norway gets most of that from investments, not oil. They did sell oil, but invested that income into other things. The sovereign wealth fund now pays out to all citizens in a sustainable way.
Equally, your understanding of dole living in Europe is incomplete. A person on the dole in the UK is perfectly able to live in a house with running water, etc. I know people who are.
Creating a base does not mean "no one works". Lots of people in Europe have jobs despite unemployment money. And yes, most if not all jobs pay better than unemployment. And yes, lifestyles are not equal. It's not really communism (as you understand it).
This is not about equal power or equal wealth. It's about the idea that a job should not be linked to survival.
Why is 60 the retirement age? Why not 25? That sounds like a daft question, but understanding it can help you understand how some things that seem cast in stone really aren't.
Living on welfare in the Netherlands is not a good life, and definitely not something we should accept for the majority of the people.
Being retired on only a state pension is a bad life, you need to save for retirement to have a good life. And saving takes time, that's why you can't retire at 25.
I'm saying that the blind acceptance of the status quo does not allow for that status to be questioned.
You see the welfare amounts, or retirement amounts as limited. Well then, what would it take to increase them? How could a society increase productivity such that more could be produced in less time?
Are some of our mindsets preventing us from seeing alternatives?
Given that society has reinvented itself many times through history, are more reinventions possible?
If you're a nationalist, your worry is obvious enough, but if you're a humanist, then it's wonderful that the more downtrodden are going to improve their station, while the better off wait for them.
With the WWW we thought everyone having access to all information would enlighten them, but without knowledge people do not recognize the right information, and are more likely to trust (mis)information that they think they understand.
What if LLMs give us all the answers that we need to solve all problems, but we are too uninformed and unskilled to recognize these answers? People will turn away from AI, and return to information that they can understand and trust, even if it's false.
Anyway, nothing new actually, we've seen this with science for some time now. It's too advanced for most people to understand and validate, so people distrust it and turn to other sources of information.
Before that, I had a TI-99/4A at home without a tape drive, with the family TV as a display. I was mainly into creating games for my friends. I did all my programming on paper, as the "screen time" needed to be maximized for actually playing the games after typing them in from the paper notebook. Believe it or not, bugs were very rare.
Much later at uni there were computer rooms with Macs with a floppy drive. You could actually just program at the keyboard, and the IDE even had a debugger!
I remember observing my fellow students endlessly type-run-bug-repeat until it "worked" and thinking "these guys never learned to reason through their program before running it. This is just trial and error. Beginners should start on paper".
Fortunately I immediately caught myself and thought, no, this is genuine progress. Those that "abuse" it would more than likely not have programmed 'ye old way' anyways, and some others will genuinely become very good regardless.
A second thing: in the early home computer year(s) you had to program. The computer just booted into the (most often BASIC) prompt, and there was no network or packaged software. So anyone that got a computer programmed.
Pretty soon, with systems like the Vic-20, C64 and ZX Spectrum there was a huge market in off the shelf game cassettes. These systems became hugely popular because they allowed anyone to play games at home without learning to program. So only those that liked programming did. Did that lose beginner programmers? Maybe some, for sure.
This should be comparable to how much fewer people in the west today know how to work a farm or build machinery. Each technological shift comes at a cost of population competence.
I do have a feeling that this time it could be different. Because this shift has this meta-quality to it. It has never been easier to acquire, at least theoretical, knowledge. But the incentives for learning are shifting in strange directions.
Fair point; I think this feeling is exacerbated by all the social media being full of people looking like they're good at what they do already, but it rarely shows the years of work they put in beforehand. But that's not new, compare with athletes, famous people, fictional characters, etc. There's just more of it and it's on a constant feed.
It does feel like people will just stop trying though. And when there's a shortcut in the form of an LLM, that's easy. I've used ChatGPT to write silly stories or poems a few times; I look at it and think "you know, if I were to sit down with it proper I could've written that myself". But that'd be a time and effort investment, and for a quick gag that will be pushed down the Discord chat within a few minutes anyway, it's not worth it.
But personally, I don't feel as upset over all this as he does. It seems that all my tech curmudgeonliness over the years is paying off these days, in spades.
Humbly, I suggest that he and many others simply need to disconnect more from The Current Thing or The Popular Thing.
Let's look at what he complains about:
* Bitcoin. Massive hype, never went anywhere. He's totally right. That's why I never used it and barely even read about it. I have no complaints because I don't care. I told myself I'd care if someone ever built something useful with Bitcoin. 10 years later they haven't. I'm going back to bed.
* Windows. Man I'm glad I dodged that bullet and became a Linux user almost 15 years ago. Just do it. Stuff will irk you either way but Linux irks don't make you feel like your dignity as a human being is being violated. Again, he's right that Windows sucks; I just don't have to care, because I walked away.
* Bluesky, Twitter, various dumb things being said on social media. Those bother him too. Fortunately, these products are optional. I haven't logged into my Twitter account for three years. I'll certainly never create a Bluesky one. On some of my devices I straight up block many of these crapo social sites like Reddit etc. in /etc/hosts. I follow the RSS feeds of a few blogs, and one for the local timeline of a Mastodon instance. Takes ten minutes and then I go READ BOOKS in my spare time. That's it. He is yet again right, social media sucks; it's the place where you hear about all this dumb stuff like Bitcoin. I just am not reading it.
I'm not trying to toot my own horn here it's just that when you disconnect from all the trash, you never look back, and the frustrations of people who haven't seem a little silly. You can just turn all of this stuff off. Why don't you? Is it an addiction? Treat it like one if so. I used to spend 6 hours a day on my phone and now it's 1 hour, mainly at lunch, because the rest of the time it's on silent, in a bag, or turned off, just like a meth addict trying to quit shouldn't leave meth lying around.
Listen to Stallman. Listen to Doctorow. These guys are right. They were always right. The free alternatives that respect you exist. Just make the leap and use them.
IMO, because it's good in one way or another. I'm not reading your writing because I imagine you toiled over every word of it, but simply because I started reading and it seemed worthwhile to read the rest.
Or, to use a different metaphor, these comments are mentally nutritional Doritos, not a nicely prepared restaurant meal. If your restaurant only serves Dorito-level food, I won't go there even if I do consume chips quite often at home.
LLMs will accelerate the pace of this assimilation. New trends and new things will become popular and generic so fast that we'll have to get really inventive to stay ahead of the curve.
Basically most of it is all noise but eventually something blows ahead of the curve and shines for a couple of seconds before the curve/wave catches up and engulfs it again.
Most of what "goes" ahead of the "curve" is achievements of other people. You, I, and most other people are just watching other exceptional or lucky individuals shine for a couple of seconds. We are all watchers. There's no difference from our perspective if the people we are watching are humans or AI. It's not us anyway.
If you are an exceptional genius who regularly achieves things that go past the "curve" then you will be affected. You will watch AI achieve things faster than you, do better than you, etc. In this case you are affected.
Keep in mind that this "curve" I talk about is both global and has many many localized instances. Like a superstar at a start up is a localized example of an individual blowing past a localized curve, and Einstein is a global example of an individual excelling past a global curve.
The best artists will spot holes in the culture, and present them to us in a way that's expertly composed, artful and meticulously polished. The tools will let them do it faster, and to reach a higher peak of polish than in the past, but the artfulness will still be the artist's.
Futuristic tools aren't replacing art, they're creating a substrate for a higher order of art. Collages are art, and at its most crude, this higher order art reduces to digital collages of high quality generated assets with human intention. With futuristic tools, art becomes reductive rather than constructive. To quote Michelangelo's response to how he made David: "It is simple, I just removed everything that wasn't David"
Two years later he thought of a project he really wanted to make. He didn't succeed, but it's very clear he changed his mind.
> up until now, you had no choice but to keep making crappy pictures and playing crappy songs until you actually start to develop a taste for the effort, and a few years later you find yourself actually pretty darn competent at the thing. That's a pretty virtuous cycle.
https://www.deviantart.com/scotchi/art/keep-tryin-690533685
Exactly.
Only putting in the work is going to get anyone anywhere. And yes, it takes _time_, like, tons, and there's no shortcut.
And I can explain in excruciating detail how to do an ollie, or even a kickflip, and from a physics point of view you would totally get it, but to land the damn thing you simply have to put a shitload of time in on the board and fail over and over and over again.
We come from a place where we've been trained as engineers or whatever to do this or that and - somewhat - critically think about things. Instead picture yourself in the shoes of a beginner: how would you, a beginner who has not built their own mental model of discipline $foo, even begin to be critical of AI output?
But we're being advertised magic powders and sweat-inducing overalls and what have you that make you lose weight a) instantly† and b) without going to the gym and, well, putting in the effort††.
LLMs are the speed diet of the mind.
† comparatively
†† not that putting in any arbitrary amount of effort is going to get you places; there _is_ such a thing as wasteful effort; but NOT putting in the effort is a solid guarantee that you won't.
I have not yet figured out why anyone would choose this behaviour in a text editor. You have to press something to exit the delimited region anyway, whether that be an arrow key or the closing delimiter, so just… why did the first person even invent the idea, which just complicates things and also makes it harder to model the editor’s behaviour mentally? Were they a hunt-and-peck typist or something?
In theory, it helps keep your source valid syntax more of the time, which may help with syntax highlighting (especially of strings) and LSP/similar tooling. But it’s only more of the time: your source will still be invalid frequently, including when it gets things wrong and you have to relocate a delimiter. In practice, I don’t think it’s useful on that ground.
Poor typists always slow down processes, and frequently become a bottleneck, local or global. If you can speed up a process by just ten seconds per Thing, by improving someone's typing skills or by fixing bad UI and workflow, you only have to process 360 Things in a day (roughly one Thing per minute over a working day) to have saved an entire hour.
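The arithmetic, spelled out with the comment's own numbers:

    360 \times 10\,\mathrm{s} = 3600\,\mathrm{s} = 1\,\mathrm{h}

i.e. a full hour recovered per day.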
It can be very eye-opening to watch a skilled typist, experienced with a software system designed for speed, at work. In more extreme cases, one person can do the work of ten. In more human-facing things, it can still be at least a 50% boost, so that two skilled people can beneficially replace three mediocre ones.
I'm nowhere near a hiring position, but if I were, I'd add an assessment of that to the application procedure.
It feels like this is part of a set of growing issues, with millennials being the only generation between gen X / boomers and gen Z that has computer skills and can do things like manage files, or read a whole paragraph of text without a computer-generated voice + RSVP [0] + Subway Surfers gameplay in the background.
But it was also millennials who identified their own quickly diminishing attention spans, during the rise of Twitter, YouTube, Netflix and the like [1].
I want to believe all of this is giving me some job security at least.
[0] https://en.wikipedia.org/wiki/Rapid_serial_visual_presentati...
[1] https://randsinrepose.com/archives/nadd/ (originally published 2003, updated over time to reference newer trends)
Quite literally laughed out loud at this. I still cannot believe this is a thing people actually do; I thought it was a joke "genre" at first.
If you have to use multiple keyboards, the arrow keys, End, Home, etc. tend to be in different positions on each. Hardly better than using a mouse.
That's where old-school vi / emacs shine. Ctrl? Always in the same area, so Ctrl-F to go forward is the same gesture on whatever brand of laptop I have to work on.
But I agree that in normal input it is often annoying.
Maybe this is just an XKCD moment https://xkcd.com/1172/ ...
There's no need to break normal editing behavior.
How is it ever wrong, though? If I insert a (, and then a {, and the editor appends so that it's ({}), that's always correct, no? Can it ever not be?
Maybe it's because { is a bit awkward on a Norwegian keyboard, but I like it. Even if we're 5 levels deep with useEffect(() => {(({[{[ I can just press ctrl+shift+enter and it magically finishes everything and puts my caret in the correct place, instead of me trying to type ]}]})) in the correct order.
Whenever you edit something that already has the ), ] or } further down, and you end up with ()), []] or {}}. Or when you select some text you want to replace and start typing with a quote, only to end up with "the text you wanted to replace" wrapped in quotes instead of the expected ".
I never notice when it works but get annoyed every time it doesn't, so I feel like it never works and always sucks.
I guess it's muscle memory and some people are used to it, but it feels fundamentally wrong to me to have the editor do different basic editing things based on which character is being pressed.
e.g.:
(a + b > c) -> ((a + b > c) -> (()a + b > c) -> no, I was aiming for ((a + b) > c)
(it sounds like you're talking about a different feature/implementation, though, since in the annoying case there's no 'completion' shortcut; it just appears)
I think you're talking about a different thing here -- completing already-opened parentheses/quotes/whatever with content in between, not the pre-insertion of paired braces or quotation marks, which is what the author meant, no?
This feature is useful for me. So are LLMs. If someone doesn't want to use this or that, they are not obliged to. But don't tell me that features that I find useful "suck".
You can always insert the second " as a ghost(?) character to keep syntax highlighting working. But it's not like any modern language server really struggles with this anyways.
You can perhaps imagine an editor that only inserts the delimiter if you type the start-string symbol in the middle of a line.
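A minimal sketch of that heuristic (my reading of it; the pair table, the mid-line test, and the type-over rule are illustrative assumptions, not any particular editor's actual implementation):

    // Hypothetical auto-pair logic: insert the closing delimiter only when
    // the caret sits mid-line (the next character is non-whitespace), and
    // "type over" an existing closer instead of duplicating it.
    const PAIRS: Record<string, string> = { "(": ")", "[": "]", "{": "}", '"': '"' };
    const CLOSERS = new Set(Object.values(PAIRS));

    function onKeyPress(line: string, caret: number, key: string): [string, number] {
      const next = line[caret] ?? "";
      // Type-over: pressing the closer already under the caret just steps
      // past it (the behaviour mocked a few comments up).
      if (CLOSERS.has(key) && next === key) {
        return [line, caret + 1];
      }
      // Auto-pair only mid-line: before non-whitespace text, insert the pair
      // and leave the caret between the two delimiters.
      if (key in PAIRS && next.trim() !== "") {
        return [line.slice(0, caret) + key + PAIRS[key] + line.slice(caret), caret + 1];
      }
      // Otherwise insert the key alone, as a plain editor would.
      return [line.slice(0, caret) + key + line.slice(caret), caret + 1];
    }

Note that this only narrows the feature rather than fixing it: it would still fire in the ((a + b > c) wrapping case complained about above.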
Maybe it's like a pale imitation of structural editing? There are editing modes for some editors that more or less ensure the syntax is always valid as you edit, and they of course include this feature.
Can't speak to other editors though... I don't want to sound like I'm trolling, but they generally feel quite clunky compared to Emacs (ducks, runs ;p )
It's just matching and reflecting the way different humans think and reason, that's all.
(yes, said in jest)
Hah, the fun thing is that I press exactly the matching symbol (`}` for `{`, etc.) to exit the delimited region, and VS even understands what I want! Incredibly useless feature.
On a French keyboard, ~#{[|\\^@]} all require the "Alt Gr" modifier key, which is usually just right of the space bar. So totally outside the realm of Shift, Caps Lock, Ctrl or Alt.
In my experience, every time a delimiter is opened, it automatically closes, allowing you to move away from it without thinking.
Even in places where this is not available (Slack, comment boxes, etc.), I close the delimiter as soon as I open it.
I remember when PayPal came to Australia, I was so confused by it as I could just send money via internet banking. Then they tried to lobby the government to make our banking system worse so they could compete, much like Uber.
You literally enter an IBAN and the transfer appears in the other account the next day. And if you need the money in the target account immediately (within 10 seconds), you can do that too by ticking a checkbox for a small fee; that fee will drop to ZERO across the EU in October 2025.
Edit: Do you mean that the speed of the transfers was the problem?
Zelle, previously known as clearXchange, and whatever else; if you had an account at one of the bigger banks, it has long been trivial to send money to one another.
https://en.wikipedia.org/wiki/Zelle
> In April 2011, the clearXchange service was launched. It was originally owned and operated by Bank of America, JPMorgan Chase, and Wells Fargo.[6][7] The service offered person-to-person (P2P), business-to-consumer (B2C), and government-to-consumer (G2C) payments.[8]
> I don’t want to help someone who opens with “I don’t know how to do this so I asked ChatGPT and it gave me these 200 lines but it doesn’t work”.
In the same vein, I've actually worked on crypto projects in both the DeFi and NFT spaces, and agree with the "money for criminals" joke assessment of crypto, even if the technology is quite fascinating.
The skill has not been obliterated. We still need to fix the slop written by the LLMs, but it is not that bad.
Some people copy and paste snippets of code without knowing what it does, and in a sense, they spread technical debt around.
LLMs bring the technical debt spread by the clueless down to a lower baseline.
The issue I see is that the amount of code having this level of technical debt is created at a much faster speed now.
The copy-paste of usable code snippets is somewhat comparable to any use of a library or framework, in the sense that there's an element of not understanding what the entire thing is doing, or at least how. So every time this is done it adds to the knowledge debt: a borrowing of the time, energy and understanding needed to come up with the thing being used.
By itself this isn't a problem; realistically it's impossible to avoid, and in a lot of cases you may never get to the point where you have to pay it back. But there's also a limit on the rate of debt accumulation: how fast you can pull in libraries, code snippets and other abstractions. And as you said, LLMs' ability to produce text at a superhuman rate potentially serves to _rapidly_ increase the rate of knowledge-debt accumulation.
If debt as an economic force is seen as something that can stimulate short-term growth then there must be an equivalent for knowledge debt, a short-term increase in the ability of a person to create a _thing_ while trading off the long-term understanding of it.
"Take this snippet of code; this is what each part means, and how you can change it."
It doesn't explain how it is implemented, but it explains the syntax and the semantics, and that's enough.
Good documentation makes all the difference, at least for me.
I'm SO stealing this!! <3
Yeah? What about the things LLMs do help with? Do you have no code that could use translation (move code that looks like this to code that looks like that)? LLMs are really good at that, and they save dozens of hours on single-sentence prompt tasks, even if you have to review the output.
Or is it all bad? I have made $10ks this year alone on what LLMs do, for $10s of dollars of input, but I must understand what I am doing wrong.
Or do you mean that if you are a man with a very big gun, you must understand what that gun can do before you pull the trigger? Can only the trained pull the trigger?
You don't want more technical debt.
Ideally, you want zero technical debt.
In practice only a hello world program has zero technical debt.
Only bad code needs that, and what takes the time is understanding it, not rewriting it; the LLM doesn't make that part any quicker.
> they save dozens of hours on single sentence prompt tasks, even if you have to review them
Really? How are you reviewing quicker than you could write? Unless the code is just a pile of verbose whatever, reviewing it is slower than writing it, and a lot less fun too.
Well, humans typically read way faster than they write, and if you own the code, have a strict style guide, etc., it is often pretty simple to understand new or modified code, unless you are dealing with a novel concept, which I wouldn't trust an LLM with anyway.
Also, these non-human entities we are discussing tend to output code very fast.
When it's just reading, perhaps, but to review you have to read carefully and understand. It's like the classic quote that if you're writing code at the limits of your ability you won't be able to debug it.
> if you own the code, have a strict style guide, etc, it is often pretty simple to understand new or modified code, unless you are dealing with a novel concept, which I wouldn't trust a LLM with anyway
The way I see it, if the code is that simple and repetitive, then that repetition should probably be factored out and the code made a lot shorter. The code should only need to express the novel/distinctive parts of the problem - which, as you say, are the parts we wouldn't trust an LLM with.
No one is becoming a thoughtless know-it-all by using LLMs, and anyone saying they are is lying and pushing a narrative.
Humans still correct things, humans understand systems have flaws, and they can utilize them and correct them.
This is like saying someone used Word's grammar correction feature and accepted all the corrections. It doesn't make sense, and the people pushing the narrative are disingenuous.
That’s a nice description, to be honest.
And thank fuck it happened. All the shell and obscure Unix tools that require a brain molded in the 80s to use day to day should've been superseded by something user-friendly a long time ago.
And I'm not the only one saying this, but the bit about LLMs is likely throwing the baby out with the bathwater. Yes, the "AI-ification" of everything is horrible, and people are shoehorning it into places where it's not useful. But to say that every single LLM interaction is wrong or not useful is just not true (though it might be if you limit yourself to only freely available models!). Using LLMs effectively is a skill in itself, and not one to be underestimated. Just because you failed to get it to do something it's not well suited to doesn't mean it can't do anything at all.
Though the conclusion (do things, make things) I do agree with anyway.
We live in a world now where people scare one another into making significant choices with limited information. Person A claims it's the future you don't want to miss, Person B takes that at face value and starts figuring out how to get in on the scam, and Person C looks at A and B and says "me too." Rinse and repeat.
That's why so much of the AI world is just the same app with a different name. I'd imagine a high percentage of the people involved in these projects don't really care about what they're working on, just that it promises to score them more money and influence (or so they think).
So in a way, for the majority, it is just stupid and greedy behavior, but perhaps less conscious.
I have a feeling that line of thinking is going to be of diminishing consolation as the world veers further into systemic and environmental collapse.
I think it is a defense mechanism; you see it everywhere, and you have to wonder, "why are people thinking this way?"
I think those with an ethical or related argument deserve to be heard, but the opposite of that looks like full blinders: ignoring the reality presented before us.
I clicked halfheartedly, started to read halfheartedly, and got sucked into a read that threw me back into the good old days of the internet.
A pity that the micropayments mentioned in the post never materialized, I'd surely throw a few bucks at the author but the only option is a subscription and I hate those.
- there are zero digs of any kind about sexuality in this piece
- the only reference in the text I can possibly find that someone might have considered a "micro-dig at people who are white" is
> This is the driving force behind clickbait, behind thumbnails of white guys making 8O faces, behind red arrows, behind video essayists who just read Wikipedia at you three times a week like clockwork, [...]
To me this feels more like it's identifying a specific thing than a "micro-dig", but opinions may differ.
I'd really like to know why tubgirl spam raids are bad but unicorn wieners are not; I don't really see much difference between the two, TBH.
Edit: I apologize; the author has pre-GPT posts that use em dashes, so it's likely just part of their writing style.
Color me naive, but honestly I'm pretty sure it's not. We all tend to be worse at distinguishing AI from human text than we think, but the text sounds genuine to me and reveals an author with a quirky personality that seems difficult for an LLM to imitate. And that could include using em dashes.
> And the only real hope I have here is that someday, maybe, Bitcoin will be a currency, and circulating money around won’t be the exclusive purview of Froot Loops. Christ
PLEASE NO. The only thing this will lead to is people who didn't get rich with this scheme funding the returns of people who bought in early.
Whatever BTC becomes, everyone who advocates for funneling public money of people who actually work for their salary into Bitcoin is a fraud.
I don't think the blog author actually wants this, but vaguely calling for Bitcoin to become "real money" indirectly will contribute to this bailout.
And yes, I'm well aware that funneling pension funds money etc into this pyramid scheme is already underway. Any politician or bank who supports this should be sued if you ask me.
Why is that? You can just buy 0.00000001 BTC.
Let's say me and my friends agree to carve off 0.00002 BTC supply and pretend that is the whole world of currency. We could run a whole country using that 0.00002 BTC as money. Except that anyone who has 1 BTC can break into our walled garden and, with a tiny fraction of their holdings, buy the entire walled garden, and there's no way to prevent this as long as our money is fungible with theirs. It's the same reason you wouldn't use immibiscoins as a currency: I could just grant myself a zillion of them and buy everything you have. Except that in the case of bitcoin the grant is pre-existing.
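To put a number on "a tiny fraction", using just the figures in this example:

    \frac{1\ \mathrm{BTC}}{0.00002\ \mathrm{BTC}} = 50{,}000

so an outsider holding a single bitcoin commands fifty thousand times the walled garden's entire money supply.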
Deflationary currencies are fundamentally unstable, just like currencies that one guy can print at will, because they decorrelate quantity and merit.
Luckily there's more than one cryptocurrency. Many of the current generation have asymptotically constant tail emissions, which doesn't really solve the underlying mismatch between emission and demand, but at least doesn't make it deliberately bad.
Well, there was one that tried to maintain a constant US$ price by cross-leveraging all of the risk onto a sister currency, but that crashed pretty hard (partly by the algorithm not working as well as thought, and partly by deliberate rugpull).
One that I'm aware of is radically different and allows both positive and negative balances that decay towards zero; although I don't really like that one's implementation, that feels like an idea worth exploring. It's pretty much incompatible with any traditional currency though.
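A minimal sketch of that decay idea (my own illustration; the comment names neither the currency nor its decay function, so exponential decay and the rate used here are assumptions):

    // Hypothetical demurrage-style balance: positive and negative balances
    // both decay exponentially toward zero over time.
    const DECAY_RATE = 0.05; // assumed: ~5% of the balance melts away per year

    function decayedBalance(balance: number, years: number): number {
      // Symmetric by construction: -100 decays toward 0 just like +100.
      return balance * Math.exp(-DECAY_RATE * years);
    }

    console.log(decayedBalance(100, 10));  // ≈ 60.65
    console.log(decayedBalance(-100, 10)); // ≈ -60.65

A debt that quietly decays to nothing has no analogue in ordinary banking, which is presumably part of the incompatibility mentioned above.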
Last time I named specific cryptocurrencies I got downvotes for advertising so I won't.
He wants normal banking and money transfer... but just to anybody, and for any reason. As an example, he'd like people to be able to pay him to draw bespoke furry porn for them. Or as another example, why can't a US citizen pay an Iranian citizen to do some work for them? (e.g. write a computer program)
That is totally possible. The only things that stand in his way, and drive him into the arms of the cryptocurrency frauds, are moralising and realpolitiking governments that intentionally use their control of banks to control what bank customers can do with their money.
In an ideal world, government would only regulate banks on fiscal propriety and fair-dealing, and would not get in the way of consenting adults exchanging money for goods and services. But because government does fuck with banks, and sometimes the banks just do the fuckery anyway and government doesn't compel them to offer services to all (e.g. Visa/Mastercard refuse to allow porn merchants?), normal people start listening to the libertarians, the sovereign citizens, and the pump-and-dump fraudsters hyping cryptocurrencies.
He wants decentralised digital cash. How can it be done, if not Bitcoin et al?
Also, I'm not sure if a radical lack of regulation / full decentralization is a good thing when we are talking about money.
In my opinion, money should be regulated by governments.
But this discussion tends to escalate and the arguments have been made ad nauseam, so I'm tuning out here, sorry.
If you were to create a decentralized and limited supply currency, how would you distribute it so that it's “fair”?
Sounds a bit like if the world ran only on proprietary software created by Microsoft and you criticized the move to open source because it would enrich Linus Torvalds and other code creators/early adopters.
Are people better off continuing to use centralized, broken software that they have to pay a subscription for (inflation), or making a lump-sum purchase of a GNU/Linux distro copy from a random guy and being liberated for the rest of their lives?
Technological progress makes the world deflationary. Your money should be able to buy more over time as we improve the productive efficiency of everything. And for poor countries, the best thing they could get is a censorship-resistant and value-preserving tool.
Even if there were a tail emission, newer generations wouldn't have the capital needed for mining rigs. That's not unique to this case; the same happens with stocks, real estate or any other investment asset.
The author lost me a little on the AI rant. Yes, everything and everyone is shoving LLMs into places that I don't want it. Just today Bandcamp sent me an email about upcoming summer albums that was clearly in part written by AI. You can't get away from it, it's awful. That being said, the tooling for software development is so powerful that I feel like I'd be crazy not to use it. I save so so much time with banal programming tasks by just writing up a paragraph to cursor about what I want and how I want it done.
You're a platform drone, you have no mind, yada. Yet, we are reading the author's blog.
The author may hate LLMs, but they will lead many people to realize things they were never aware of, like the author's superficial ability to take information and present it in a way that engages others. Soon that will be common knowledge. Not many will make money sharing information in prose.
What the author refers to as "LLMs" today, will continually improve and "get better" at everything the author has issues with, maybe in novel ways we can't think of at the moment.
Alternative take:
"Popular culture" has always been a "lesser" ideal of experience, and now that ontological grouping now includes the Internet, as a whole. There are no safe corners, everything you experience on the Internet, if someone shared it with you, is now "Popular culture".
Everyone knows what you know, and you are no longer special or have special things to share, because awareness is ubiquitous.
This is good for society in many ways.
For example, information asymmetry let assholes make other people their food; it will become less common for people to be food.
Things like ad-driven social networks will fade away as this realization becomes normalized.
Unfortunately, we are at the very early stages of this, and it takes a very long time for people to become aware of things like hoaxes.
Yes, that is roughly the takeaway here. LLMs are getting so popular in programming not because they are good at solving problems, but because they are good at reproducing a solution to some minor variation of an existing problem that has already been solved many times.
Most of the work that most of the industry does is just re-solving the same set of problems. This is not just because of NIH but also because code reuse is a hard problem.
This is not to say that everything is the same product. The set of problems you solve and how you chain those solutions together (the overarching architecture), as well as the small set of unique problems you solve, are the real value in a product. But it's often not the majority of any single codebase.