Same with AI. I'm notably more autistic (or more aspie, or whatever) than my friend group, and I also recognize AI text and images as uncanny slop much more easily, while my friends are more easily wowed by it. Maybe AI output has the same "superficially impressive but empty inside" quality as the stuff that sociopaths say.
Most likely yet another flawed output from the human LLM (4chan), so online schizos have something to identify themselves with.
But ChatGPT does help me work through some really difficult mathematical equations in the newest research papers by adding intermediate steps. I can easily confirm when it gets them right and when it doesn't, as I do have some idea. It's super useful.
If you are not able to make LLMs work for you at all, and complain about them on the internet, you are an old man yelling at clouds. The blog post devolves from an insightful viewpoint into a long sad ramble.
It’s 100% fine if you don’t want to use them yourself, but complaining to others gets tired quick.
Most of the Internet is crap. Most media is crap. This doesn't need to stop you (or me) from creating.
The irony of this rant next to the AI rant.
Progress is not uniformly distributed I guess.
But I disagree on LLMs being "worse than useless".
Sure, "vibe coding" an entire app from a short prompt will always give you fragile, subtly broken nonsense. *Code is the spec*. In most cases, you can't meaningfully "compress" your requirements into a short informal prompt. We need better formal languages for expressing requirements concisely and declaratively! Think: Prolog, Haskell...
LLMs are good at small tasks that you can review much more quickly than doing them yourself. Something tedious, like doing some local refactoring, or writing ad-hoc Bash scripts, SQL queries, and FFmpeg commands. I use Bash and SQL regularly, but somehow I always have to google the exact syntax. I already use ShellCheck, by the way. It's a must, and it helps a lot when reviewing LLM output.
I like the autocomplete feature too. It often saves time when writing repetitive or obvious code. `if bad_stuff {` usually autocompletes to `return Err(BadStuff)` for me. A `MyStruct {` initializer usually autocompletes the list of fields for me. I know that incorrect suggestions piss some people off and make it a net negative for them. Incorrect suggestions are common, but they don't bother me in practice.
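To make that concrete, here's a minimal Rust sketch built around those snippets. The `build` function, its fields, and the unit-struct error `BadStuff` are made up for illustration; the `return Err(BadStuff)` early return and the `MyStruct { ... }` field list are the kind of obvious continuations an assistant usually fills in after the opening brace.

```rust
// Made-up placeholders around the snippets mentioned above.
#[derive(Debug)]
struct BadStuff;

#[derive(Debug)]
struct MyStruct {
    id: u64,
    name: String,
    retries: u32,
}

fn build(id: u64, name: String, bad_stuff: bool) -> Result<MyStruct, BadStuff> {
    // Typing `if bad_stuff {` is usually enough for the early return to be suggested.
    if bad_stuff {
        return Err(BadStuff);
    }
    // Typing `MyStruct {` is usually enough for the field list to be suggested.
    Ok(MyStruct {
        id,
        name,
        retries: 0,
    })
}

fn main() {
    println!("{:?}", build(1, "example".into(), false));
    println!("{:?}", build(2, "example".into(), true));
}
```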
UPDATE: I've turned this comment into a blog post. https://home.expurple.me/posts/my-take-on-llms-for-coding/
> But yes, thanks: I was once offered this challenge when faced with a Ren’Py problem, so I grit my teeth and posed my question to some LLM. It confidently listed several related formatting tags that would solve my problem. One teeny tiny issue: those tags did not and had never existed. Just about anything might be plausible! It can just generate Whatever! I cannot stress enough that this is worse than useless to me.
The probabilistic machine generated a probabilistic answer. Unable to figure out a use for the probabilistic machine in two tries, I threw it into the garbage.
Unfortunately, humans are also probabilistic machines. Despite a near-lifetime of speaking English, my finger-based output streams constantly produce errors. So I'm okay talking to the machine that might be wrong, in addition to the human that might be wrong.
> It feels like the same attitude that happened with Bitcoin, the same smug nose-wrinkling contempt. Bitcoin is the future. It’ll replace the dollar by 2020. You’re gonna be left behind. Enjoy being poor.
I mean, you were left behind. I was left behind. I am not enjoying being poor. Most of us were left behind. If we had invested in Bitcoin like it was the future in 2011, we'd all be surfing around on yachts right now, given the current valuation.
The last line of the article summarizes it perfectly:
> Do things. Make things. And then put them on your website so I can see them.
I subscribe fully to the first two sentences, but the last one is bullshit. The gloom in the article is born from the author attaching the value of "making things" to the recognition received for the effort. Put your stuff out there if you think it is of value to someone else. If it is, cool, and if it's not, well, who cares.
> I can’t remember exactly what they said, but it was something like: “I created a whole album, complete with album art, in 3.5 hours. Why wouldn’t I use the make it easier machine?” This is kind of darkly fascinating to me, because it gives rise to such an obvious question: if anyone can do that, then why listen to your music? It takes a significant chunk of 3.5 hours just to listen to an album, so how much manual work was even done here? Apparently I can just go generate an endless stream of stuff of the same quality! Why would I want your particular brand of Whatever?
This gem implies that the value of the music (or art in general) is partially or even wholly dependent on whether or not someone else thinks it's good. I can't even...
If you eliminate the back-patting requirements, and the stuff we make is genuine, then its value is intrinsic. The "Whatever" machines are just tools for making things, like the rest of the tools we use. So, just make your things and get on with it.
I had an interesting discussion with a piano teacher once. Some of his students, he told me, would play for themselves but never for any kind of audience. As the saying goes: if a musician plays a piano in a closed room with no one to hear it, does it make a sound?
Obviously there's nothing wrong with extremely personal art that never gets released to the wider public - not every personal diary should be a blog. But there's also the question of what happens to art when none of it gets shared around, and vibrant art communities are, in my opinion (and I think also the author's), something to encourage.
I get what you're after, but that's not a very good example. If a musician is playing an instrument, then of course the musician hears it.
Now, imagine instead that it's a player piano, and the lone "musician" is not actually playing anything at all, but hears the tones they randomly generated with a "Whatever" machine resonating through the struck strings and resonant body of an actual piano, and the hair on the back of their neck stands on end. Then the music ends, the vibrations stop, and all that is left of the moment is whatever memory the "musician" retains.
Was that music while it was being heard by the "musician"? Is it music when it's just a melody in the "musician's" head? What if it wasn't a piano at all, but just birds singing? Is it still music? If it is, is it "good" music?
Yes, the world is changing fast, and no, we humans don't seem to handle it well. I agree with the article in that sense. But I see no use in categorizing technology as dystopian, just because it's been misused. You don't have to misuse it yourself, or even use it at all if you don't want to. Complaining about it though... we humans are great at that.
hmmm