Adderall is similar. It makes people feel a lot more productive, but research on its effectiveness[0] seems to show that, at best, we get only a mild improvement in productivity, along with a marked deterioration of cognitive abilities.
The ADHD was caught early and treated, but the dyslexia was not. He thought he was a moron for much of his early life, and his peers and employers did nothing to discourage that self-diagnosis.
Since learning of his dyslexia and starting to treat it, he has spent most of his career as an engineer at Intel (not that I envy him, right now).
Their benefits when used as intended are solidly documented in research literature.
People without ADHD take it, believing that it makes them “super[wo]men.”
That said, I'll leave the conclusions about whether it's valuable for those with ADHD to the mental health professionals.
If legitimate research had found it to be drastically better, that study would definitely have been published in a big way.
Unscientifically, I personally know quite a number of folks who sincerely believed that they couldn’t function without it, but have since learned that they do far better on their own. I haven’t met a single one whose productivity actually declined after giving up Adderall (after an adjustment period, of course). In fact, I know several whose careers really took off after giving it up.
"Antibiotics don't improve your life, but can damage your health" would likely be the outcome on 13 randomly selected healthy individuals. But do the same study on 13 people with a bacterial infection susceptible to antibiotics and your results will be vastly different.
They'll need to learn, the same way I see lots of people learn.
It's been around long enough, though, that all the Adderall-eating people should have established a Gattaca-like "elite," with all us "undermen," scrabbling around at their feet.
Not sure why that never happened...
It seems that with such small groups and effects you could run the same “study” again and again until you get the result that you initially desired.
[0] https://pmc.ncbi.nlm.nih.gov/articles/PMC3489818/table/tbl1/
Not everything should make sense. Playing, trying, and failing are crucial to making our world nicer. Not overthinking is key; see later what works and why.
Waymo's driving people around with an injuries-per-mile rate that's lower than having humans do it. I don't see how that reconciles with "obviously made no sense".
It would be, if there weren't actual, important work that needs funding.
* Getting good results from AI forced me to think things through and think clearly - up front, and even harder than before.
* AI almost forces me to structure and break down my thoughts into smaller, more manageable chunks - which is a good thing. (You can't just throw a giant project at it - it gets really far off from what you want if you do that.)
* I have to make a habit of reading what code it has added - so I understand it and can point out improvements or, more rarely, fixes (Claude).
* Everyone has what they think are uninteresting parts of a project that they have to put effort into to see the bigger project succeed - AI really helps with those mundane, cog-in-the-wheel things. It not only speeds things up; personally, it gives me more momentum/energy to work on the parts that I think are important.
* It's really bad at reusability - most humans will automatically know "oh, I have a function I wrote to do this thing in this project which I can use in that project." At some point they will turn that into a library. With AI, that amount of context is a problem. I found that filling in for the AI here is just as much work, and I'm best off doing it myself up front before feeding it to the AI - then I have a hope of getting it to understand the dependency structure and what does what.
* Domain-specific knowledge - I deal with Google Cloud a lot and use Gemini to understand what features exist in some GCP product and how I can use it to solve a problem - works amazingly well and saves me time. At the least, scoping out options for a solution is a big part of the work it makes easier.
* Your Git habits have to be top notch so you can untangle any mess the AI creates - you reach a point where you have iterated over a feature addition using AI, it's a mess, and you know it went off the rails at some point. If you only made one or two commits, now you have to unwind everything and hope you can recover the good parts, or try to get the AI to clean it up, which can be risky. (A sketch of the commit-often workflow I mean follows this list.)
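To make that last point concrete, here is a minimal sketch of the commit-often workflow I mean - all standard git commands; the commit message and the angle-bracket names are just placeholders:

    git add -p            # stage hunks selectively, reviewing each AI-written change
    git commit -m "parser wired up"   # small, labeled checkpoint after each working step
    # ...iterate with the AI, committing after every step that works...
    git log --oneline     # find where things went off the rails
    git revert <bad-commit>                    # undo one bad step without losing the good ones
    git checkout -b retry <last-good-commit>   # or branch off the last good point and try again

With small checkpoints, rolling back an AI detour is one revert instead of an archaeology project.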
True in the long run. Like a car with high acceleration but a low top speed.
AI makes you start fast, but you regret it later because you don't have the top speed.
Chill. Interesting times. Learn stuff, like always. Iterate. Be mindful and intentional, and don't just chase mirages - be practical.
The rest is fluff. You know yourself.
100% this. I fear that AI will cause us to be stuck in a local optimum for the next few decades, where most code will be Python or JS because these are the languages best supported by LLMs. Don't get me wrong, Python and JS are mature and productive languages. That's fine. But we could have it so much better if more effort were put into a next generation of tools that take all the harsh lessons learnt from the tools before and "just" do it better. I acknowledge that we get incremental improvements here and there, but some things are just unfixable without breaking existing ecosystems.
- Autocomplete in Cursor. People think of AI agents first when they talk about AI coding, but LLM-powered autocomplete is a huge productivity boost. It merges seamlessly with your existing workflow, prompting is just writing comments, it can edit multiple lines at once or redirect you to the appropriate part of the codebase, and if the output isn’t what you need you don’t waste much time because you can just choose to ignore it and write code as you usually do.
- Generating code examples from documentation. Hallucination is basically a non-problem with Gemini Pro 2.5, especially if you give it the right context. This gets me up to speed on a new library or framework very quickly. Basically a Stack Overflow replacement.
- Debugging. Not always guaranteed to work, but when I’m stuck on a problem for too long, it can provide a solution or give me a fresh new perspective.
- Self-contained scripts. It’s ideal for this: things like package installers, CMake configurations, data processing, serverless microservices, etc.
- Understanding and brainstorming new solutions.
- Vibe coding parts of the codebase that don’t need deep integration. E.g. create a web component with X and Y features, a C++ function that serves one well-defined purpose, or a simple file browser. I do wonder if a functional programming paradigm would be better when working with LLMs, since by avoiding side effects you can work around their weaknesses when it comes to large codebases (a small sketch of what I mean follows below).
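To illustrate that last thought, a hypothetical Python sketch - the function name and behavior are invented for the example, not taken from any real codebase. A pure function depends only on its inputs and touches no shared state, so an LLM can generate it and you can review and test it in complete isolation:

    # Pure function: the output depends only on the inputs; no hidden state.
    # Easy to ask an LLM for, and easy to verify without any surrounding context.
    def normalize_scores(scores: list[float]) -> list[float]:
        """Scale scores linearly into the range [0, 1]."""
        if not scores:
            return []
        lo, hi = min(scores), max(scores)
        if hi == lo:
            return [0.0 for _ in scores]
        return [(s - lo) / (hi - lo) for s in scores]

    # Checking it needs no mocks, fixtures, or setup:
    assert normalize_scores([2.0, 4.0, 6.0]) == [0.0, 0.5, 1.0]

The less a generated unit can touch, the less of the codebase you have to re-read before you can trust it.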