Posted by maksimur 3 days ago
- Use the AI and ask for answers. It'll generate something! It'll also be pleasant, because it'll replace the thinking you were planning on doing.
- Use the AI to automate away the dumb stuff, like writing a bespoke test suite or new infra to run those tests. It'll almost certainly succeed, and be faster than you. And you'll move on to the next hard problem quickly.
It's funny, because these two things represent wildly different vibes. In the first, work is so much easier: AI is doing the job. In the second, work is harder. You've compressed all your thinking work back-to-back, and you're just doing hard thing after hard thing, because all the easy work happens in the background via LLM.
If you're in a position where there's any amount of competition (like at work, typically), it's hard to imagine the people operating in the second mode not wildly outpacing the people operating in the first, in both quality and volume of output.
But also, it's exhausting. Thinking always is, I guess.
[0] Rijnard, about https://sourcegraph.com/blog/how-not-to-break-a-search-engin...
“Almost certainly succeed” requires that you mostly plan out the implementation for it, and then monitor the LLM to ensure that it doesn’t get off track and do something awful. It’s hard to get much other work done in the meantime.
I feel like I’m unlocking, like, 10% or 20% productivity gains. Maybe.
The rest are just catching up to the reality now.
Burning out a substantial portion of the workforce for short-term gains is going to cause far more long-term decline than those gains are worth.
There is a certain amount of regular work that I don't want to automate away, even though maybe I can. That regular work keeps me in the domain. It leads to epiphanies regarding the hard problems. It adds time, and something to do, in between the hard problems.
Exactly, some kinds of refactors are like this for me. Pretty mindless, kind of relaxing, almost algebraic. It's a pleasant way to wander around the code base, just cleaning and improving things while you walk down a data or control flow. If you're following a thread then you don't even really make decisions, but you also get better acquainted with parts you don't know, and subconsciously get practice holding some kind of gestalt in your head.
This kind of almost dream-like "grooming" seems important and useful, because it preps you for working on design problems later. Formatting and style-type trivia should absolutely be automated, and real architecture/design work requires active engagement. But there's a sweet spot in the middle.
Even before LLMs maybe you could automate some of these refactors with tools for manipulating ASTs or CSTs, if your language of choice had those tools. But automating everything that can be automated won't necessarily pay off if you're losing fluency that you might need later.
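For a sense of what that can look like, here's a minimal sketch using Python's built-in ast module; the old_name/new_name identifiers are made up for illustration, not taken from any real code base:

    import ast

    class RenameCalls(ast.NodeTransformer):
        """Mechanically rename references to `old_name` -- the 'mindless' kind of refactor above."""
        def visit_Name(self, node):
            if node.id == "old_name":
                node.id = "new_name"
            return node

    source = "total = old_name(1, 2) + old_name(3, 4)"
    tree = ast.parse(source)
    tree = ast.fix_missing_locations(RenameCalls().visit(tree))
    print(ast.unparse(tree))  # total = new_name(1, 2) + new_name(3, 4)

Tools like this take care of the mechanics, but they also skip the walk through the code base that the manual version gives you, which is exactly the trade-off being described.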
You cannot exclusively do hard things back to back to back every 8 hour day without fail. It will either burn you out, or you will make mistakes, or you will just be miserable.
Human brains do not want to think hard, because millions of years of evolution built brains to be cheap, and they STILL use something like 20% of our daily energy.
Because as soon as you realize that the output doesn't do exactly what you need, or has a bug, or needs to be extended (and has gotten beyond the complexity that AI can successfully update), you now need to read and deeply understand a bunch of code that you didn't write before you can move forward.
I think it can actually be fine to do this, just to see what gets generated as part of the brainstorming process, but you need to be willing to immediately delete all the code. If you find yourself reading through thousands of lines of AI-generated code, trying to understand what it's doing, it's likely that you're wasting a lot of time.
The final prompt/spec should be so clear and detailed that 100% of the generated code is as immediately comprehensible as if you'd written it yourself. If that's not the case, delete everything and return to planning mode.
Yes, but you are thinking about the wrong things, so the effort gets spent poorly.
It is usually much more efficient to build your own mental model than to try to search externally for a solution that does exactly what you need. Without that mental model it is hard to evaluate whether the external solution even does what you want, so it's something you need to do either way.
For example, I believe writing unit tests is way too important to be fully relegated to the most junior devs, or even to LLM generation! In other fields, "test engineer" is an incredibly prestigious position to hold; "lead test engineer" at SpaceX or NASA ain't a slouch job -- you are literally responsible for some of the most important validation and engineering work done at the company.
So I do question the notion that we can offload the "simple" stuff and just move on with life. It hasn't really worked out in every field: for example, has outsourcing the boring stuff like manufacturing really made things way better? The best companies making the best things typically vertically integrate.
But this is the idea behind compilers, type checkers, automated testing, version control, and etc. It's perfectly valid.
You don't have to remember everything. You have to remember enough entry points and the shape of what follows, trained through experience and going through the process of thinking and writing, to reason your way through meaningful knowledge work.
When you're writing, you can often take your time. Too little knowledge, though, and it will require a lot of homework.
1. You may remember only the initial state and the brain does the rest, like with mnemonics.
2. You may remember only the initial steps towards a solution, like knowing the assumptions and one or two insights of a mathematical proof.
I'd say a Zettelkasten user would agree with you if you mean 1.
The point, I believe, was that the more you remember, the better you can think. As in you should strive to remember stuff, and not just be lazy and rely on LLMs. I agree with that.
Brains are for thinking. Documents / PKM systems / tools are for remembering.
IOW: take notes, write things down.
FWIW I have a degree in cognitive psychology (psychobiology, neuroanatomy, human perception) and am an amateur neuroscientist. Somewhat familiar w/ the brain. :)
I'd read Spontaneous Brain by Northoff (Copernican, irreducible neuroscience) or Buzsáki on oscillatory neurobiology.
The brain is lossless.
I would agree that external forms of memory are evolutionarily progressive, but the ability to utilize those external forms requires a lossless relationship.
Once we grasp that the infinitely inferior, arbitrary externals (symbols, words) are correlated through superior, lossless, concatenated internals (action-neural-spatial syntax), then until we can externalize that direct perception, the externals remain deeply inferior, lossy forms.
“We refute (based on empirical evidence) claims that humans use linguistic representations to think.” -- Ev Fedorenko, Language Lab, MIT, 2024
Nothing is lossless.
Without direct perception, and using such poor tools as symbols and narratives to externalize memory, we're deeply impoverished as to the nature of memory and our ability to access it. But once we have a better grasp of the neuronal units, spatial-syntax, we will unlock every memory.
https://pmc.ncbi.nlm.nih.gov/articles/PMC10500127/
Also to consider are the shapes and phases between oscillation. "It’s high dimensional complexity; the mind is an attractor in high dimensional phase space formed between neural oscillators." Emergent properties are not reducible to their constituent parts.
To my knowledge, practical Fourier transforms set a number of sine waves they will calculate and a window of time to look at; these limitations result in loss.
But just taking the brain: at some point the person will die and decompose. How are you going to get the oscillations back out of the rotted flesh? There has to be some form of loss in the brain.
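To make the windowing point above concrete, here is a small NumPy sketch (the sample rate and tone are arbitrary choices for illustration): a finite window only gives you a fixed set of frequency bins, so a tone that falls between bins gets smeared across its neighbours.

    import numpy as np

    fs = 100                              # sample rate in Hz, arbitrary
    t = np.arange(0, 1, 1 / fs)           # a 1-second window -> bins spaced 1 Hz apart
    tone = np.sin(2 * np.pi * 10.5 * t)   # 10.5 Hz sits between the 10 Hz and 11 Hz bins

    spectrum = np.abs(np.fft.rfft(tone))
    peak = spectrum.argmax()
    print(peak)                           # reported as 10 or 11, never the true 10.5
    print(spectrum[peak - 2 : peak + 3])  # energy has leaked into the neighbouring bins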
In terms of brains, the math is used to model the irreducible occurrences in brains - that everything is still in there. So the math only gives us a window into the complexity. Brains don't compute or calculate necessarily. As an analog, or analoga of differences, it never has to exclude, or experience loss.
For the details: Rhythms of the Brain, or Unlocking the Brain (both volumes).
Reading one does not make YOU a neurobiologist.
If humans did not have any facilities for abstraction, sure. But then "knowledge work" would be impossible.
You need to remember some set of concrete facts for knowledge work, sure, but it's just one—necessary but small—component. More important than specific factual knowledge, you need two things: strong conceptual models for whatever you're doing and tacit knowledge.
You need to know some facts to build up strong conceptual models but you don't need to remember them all at once and, once you've built up that strong conceptual understanding, you'll need specifics even less.
Tacit knowledge—which, in knowledge work, manifests as intuition and taste—can only be built up through experience and feedback. Again, you need some specific knowledge to get started but, once you have some real experience, factual knowledge stops being a bottleneck.
Once you've built up a strong foundation, the way you learn and retain facts changes too. Memorization might be a powerful tool to get you started but, once you've made some real progress, it becomes unnecessary if not counterproductive. You can pick bits of info up as you go along and slot them into your existing mental frameworks.
My theory is that the folks who hate memorization are the ones who were able to force their way through the beginner stages of whatever they were doing without dull rote memorization, and then, once there, really do not need it any more. Which would at least partly explain why there are such vehement disagreements about whether memorization is crucial or not.
> So, coming back to the initial starting point that “you don’t have to remember anything”. The opposite is true. You have to remember EVERYTHING.
I see it like this: it is absolutely wrong to think that you don't have to remember anything. In fact, ideally you would remember everything. The more you remember, the better you can think. Now in practice, it's impossible to remember absolutely everything, so we should strive to remember as much as we can. And of course we need to be clever in how we select what we remember (but that seems obvious).
The point is really that it is common to say "it's useless to remember it because you can ask your calculator or an LLM", and the article strongly disagrees with that.
And the more experience with computers I get, the more I realize that there are actually not that many pure, unique, and mutually orthogonal _concepts_ in computer science and software engineering. Yes, a competent engineer must know, feel, live these concepts, and it takes a lot of work and exposure to crystallize them in the brain from all the libraries, books, programs, and architectures one has seen. But there are not a lot of them! And once you are intimate with all of them, you can grok anything computer-related quickly and efficiently: your brain will just quickly find the "coordinates" of that thing in the concept space, and that's all you'll have to understand and recall later.
While the article makes some reasonable points, this is too far gone. You don't need to know how to "weigh each minute spent on flexibility against the minutes spent on aerobic capacity and strength" to put together a reasonable workout plan. Sure, your workouts might not be as min-maxed as they possibly could be, but that really doesn't matter. So long as the plan is not downright bad, the main thing is that you keep at it regularly. The same idea extends to nearly every other domain: you don't need to be a deep expert to get reasonably good results.
It's simply different people we're talking about. Certain personalities are always going to gravitate to the "search for reason" model in life rather than "reason about facts".
https://www.youtube.com/watch?v=QQnc6-z7AO8
Post exercise muscle soreness is caused by lactic acid buildup.
https://my.clevelandclinic.org/health/body/24521-lactic-acid
Icing is an effective treatment for minor musculoskeletal injuries.
https://www.yalemedicine.org/news/rice-protocol-for-injuries
Weight training for children will stunt their growth.
https://peterattiamd.com/belindabeck/
Stretching before exercise prevents injuries.
https://mcpress.mayoclinic.org/nutrition-fitness/does-stretc...
I could go on and on but I think you get the point. Almost everything you hear in this field is opinions, extrapolations, and educated guesses rather than anything we could really call a "fact".
I think there are at least two models of work that require knowledge:
1. Work when you need to be able to refer to everything instantly. I don't know if this is actually necessary for most scenarios other than live debates, or some form of hyper-productivity in which you need to have extremely high-quality results near-instantaneously.
(HN comments are, amusingly, also an example – comments that are in-depth but come days later aren't relevant. So if you want to make a comment that references a wide variety of knowledge, you'll probably need to already know it, in toto.)
2. Work when you need to "know a small piece of what you don't remember as a whole", or in other terms, know the map, but not necessarily the entire territory. This is essentially most knowledge work: research, writing, and other tasks that require you to create output, but that output doesn't need to be right now, like in a debate.
For example, you can know that person X said something important about topic Y, but not need to know precisely what it was – just look it up later. However, you do still need to know what you're looking for, which is a kind of reference knowledge.
--
What is actually new lately, in my experience, is that AI tools are a huge help for situations where you don't have either Type 1 or Type 2 knowledge of something, and only have a kind of vague sense of the thing you're looking for.
Google and traditional search engines are functionally useless for this, but you can ask ChatGPT something like, "I am looking for people who said something like XYZ," and actually get somewhere. This previously required someone to have asked the exact same question on Reddit or a forum; now you can get a pretty good answer from AI.
IMO, this is the whole point of the article: AI tools "help" a lot when we are completely uninformed. But in doing that, they prevent us from getting informed in the first place. Which is counter-productive in the long term.
I like to say that learning goes in iterations:
* First you accept new material (the teacher shows some mathematical concept and proves that it works). It convinces you that it makes sense, but you don't know enough to actually be sure that the proof was 100% correct.
* Then you try to apply it, with whatever you could memorise from the previous step. It looked easy when the teacher did it, but when you do it yourself it raises new questions. But while doing this, you memorise it. Being able to say "I can do this exercise, but in this other one there is this difference and I'm stuck" means that you have memorised something.
* Now that you have memorised more, you can go back to the material, and try to convince yourself that you now see how to solve that exercise you were stuck with.
* etc.
It's a loop of something like "accept, understand, memorise, use". If, instead, you prompt until the AI gives you the right answer, you're not learning much.
Great way of framing it - simple and cuts straight to the heart of the issue.
That might be a good criterion for how much to memorize: do you want to be able to do it live?
It’s the same with a lot of other things. AI and search engines help a lot but you are at an advantage if at least you have some ability to gauge what should be possible and how to do it.
People do the same with AI: they ask it about something they know little about and then assume it's correct, rather than checking the answer against known values or concepts they could use to error-check.
This, and knowing by heart all the simple formulas/rules for area/volume/density and energy measurements.
The classic example being pizza diameter.
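For anyone who wants the pizza arithmetic spelled out, here's the usual illustration (the 12" and 18" diameters are just the textbook example):

    import math

    def pizza_area(diameter):
        # Area grows with the square of the diameter -- that's the whole trick.
        return math.pi * (diameter / 2) ** 2

    print(round(pizza_area(18)))      # ~254 square inches
    print(round(2 * pizza_area(12)))  # ~226 square inches: two 12" pizzas < one 18"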
If you never memorize anything, but are highly adept at searching for that information, your brain has only learned how to search for things. Any work it needs to do in the absence of searching will be compromised due to the lack of conditioning/experience. Maybe that works for you, or maybe that works in the world that's being built currently, but it doesn't change the basic premise at all.
“Try to learn something about everything and everything about something.”
Then the internet came, and we asked the internet. The internet wasn't correct, but it was a far higher % correct than asking a random person who was near you.
Now AI comes. It isn't correct, but it's far higher % correct than asking a random person near you, and often asking the internet which is a random blog page which is another random person who may or may not have done any research to come up with an answer.
The idea that any of this needs to be 100% correct is weird to me. I lived a long period in my life where everyone accepted what a random person near them said, and we all believed it.
There, I've shaved a ton of the spread off of your argument. Possibly enough to moot the value of the AI, depending on the domain.
Much like with Wikipedia, using AI to start on this journey (rather than blindly using quick answers) makes a lot of sense.
However, the sibling commenter about books, journals, etc., is also an excellent suggestion.
How about we pick an LLM evaluation and get specific? They have strengths and weaknesses. Some do outperform humans in certain areas.
Often I see people latching on to some reason that “proves” to them “LLMs cannot do X”. Stop and think about how powerful such a claim has to be. Such claims are masquerading as impossibility proofs.
Cognitive dissonance is a powerful force. Hold your claims lightly.
There are often misunderstandings here on HN about the kinds of things transformer based models can learn. Many people use the phrase “stochastic parrots” derisively; most of the time I think these folks are getting it badly wrong. A careful reading of the original paper is essential, not to mention follow up work.
What I'm skeptical about isn't LLMs as a utilitarian tool to enhance productivity in specific use cases, but rather treating LLMs as sources of information in their own right, especially given their defining characteristic of generating novel text through stochastic inference.
I'm 100% behind RAG powering the search engines of the future. Using LLMs to find reliable sources within the vast ocean of dubious information on the modern internet? Perfect -- ChatGPT, find me those detailed blog posts by people competent in the problem domain. Asking LLMs to come up with their own answers to questions? No thanks. That's just an even worse version of "ask a random person to make up an answer on the spot".
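As a toy sketch of that retrieve-then-answer idea (none of this is a real search or LLM API; the corpus, scoring, and prompt are made up for illustration), the point is that the model is handed checkable sources instead of being asked to invent an answer:

    # Toy retrieval step: score documents by keyword overlap, keep the best ones,
    # and build a prompt that forces the model to work from (and cite) those sources.
    documents = {
        "https://example.com/post-a": "a detailed write-up on index sharding in code search",
        "https://example.com/post-b": "notes on ranking heuristics for symbol search",
        "https://example.com/post-c": "a travel blog about hiking in Patagonia",
    }

    def retrieve(query, docs, k=2):
        q_words = set(query.lower().split())
        scored = sorted(docs.items(),
                        key=lambda kv: len(q_words & set(kv[1].lower().split())),
                        reverse=True)
        return scored[:k]

    query = "how does ranking work in code search"
    sources = retrieve(query, documents)
    prompt = ("Answer using ONLY these sources, and cite them:\n"
              + "\n".join(f"- {url}: {text}" for url, text in sources)
              + f"\n\nQuestion: {query}")
    print(prompt)   # this prompt would then go to the LLM; the sources stay checkable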