Posted by abhaynayar 9/13/2025

AI coding (geohot.github.io)
410 points | 284 comments
CompoundEyes 9/13/2025
It takes time. There are cycles of “Oh wow!”, “Oh wait...”, “What if?”, and “Aha!” Each of those has made me more effective and resulted in reliable benefits, with less zigzagging back and forth.
8cvor6j844qw_d6 9/13/2025
Just started using Claude Code recently.

It seems to speed up feature development, but it requires one to have a good understanding of the codebase to guide it, and to be aware of the edge cases it misses.

Also, it doesn't seem to be able to take advantage of the latest information or new SDK features unless deliberately informed. Not sure if I'm doing it right, but I've resorted to feeding it documentation when it can't seem to get something right.

The only thing I'm still unsure about is context management with /compact and /clear.

h4ch1 9/14/2025
Dunno how it'll work with Claude Code, but I use the context7 MCP to feed up-to-date docs into Copilot.

Sometimes that also doesn't work, so I make a guiding document using whatever llms.txt the framework/language/platform offers, or just wget the entire documentation, condense it into a single text file, and use that while prompting.

Seems to work ok, especially for Svelte 5 where most LLMs except Gemini don't give me runes from the get-go.
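A minimal sketch of that condensing step, assuming hypothetical doc URLs (the endpoints below are placeholders, not any framework's real docs):

    # Rough sketch of the docs-condensing step described above.
    # The URLs are hypothetical; point them at whatever llms.txt
    # or documentation pages your framework actually publishes.
    import urllib.request

    DOC_URLS = [
        "https://example.com/llms.txt",         # placeholder llms.txt index
        "https://example.com/docs/runes.html",  # placeholder doc page
    ]

    def fetch(url: str) -> str:
        with urllib.request.urlopen(url) as resp:
            return resp.read().decode("utf-8", errors="replace")

    # Concatenate everything into one file to reference while prompting.
    with open("condensed-docs.txt", "w", encoding="utf-8") as out:
        for url in DOC_URLS:
            out.write(f"\n\n=== {url} ===\n\n")
            out.write(fetch(url))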

faangguyindia 9/16/2025
>Dunno how it'll work with Claude Code, but I use the context7 MCP to feed up-to-date docs in copilot.

No wonder you've no idea about the current capabilities of LLM-based coding; Copilot is far behind the competition.

Havoc 9/13/2025
>It’s not precise in specifying things.

That's the point - it's a higher level of abstraction.

>highly non-deterministic

...not unlike, say, a boss telling a junior to change something?

The bet here isn't that AI can be as precise as something hand-coded, but rather that you can move up a step in the abstraction layer. To use his compiler example: I don't care what the resulting assembly instructions look like, just whether it works. It's the same thing here, just one level higher.

skydhash 9/13/2025
> That's the point - it's a higher level of abstraction.

That's not what abstraction is. When I type `echo Hello, World`, I don't have to deal with graphics drivers and text rendering to get the text on the screen. And I don't have to worry that "Goodbye" will appear instead.
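A toy illustration of the distinction being drawn here (the second function is a stand-in for sampled output, not how any real LLM works):

    import random

    # A conventional abstraction: same input, same output, every time.
    def greet() -> str:
        return "Hello, World"

    # A toy stand-in for sampled LLM output: same prompt, varying results.
    def llm_greet() -> str:
        return random.choice(["Hello, World", "Hi there!", "Goodbye"])

    assert greet() == greet()        # always holds
    print(llm_greet(), llm_greet())  # may differ between calls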

> Not unlike say a boss telling a junior to change something?

Juniors don't stay junior for long. And bosses usually give juniors less time to grow than people are giving AI tools to prove themselves. GitHub Copilot launched more than 3 years ago. Today a new hire is expected to be productive on day one.

redhale 9/13/2025
> The only reason it works for many common programming workflows is because they are common. The minute you try to do new things, you need to be as verbose as the underlying language.

You can stop here. This is enough to change the game. SO MUCH time and money is spent on mostly-boilerplate line-of-business apps. If you don't understand this, you are in a bubble of some kind. The corporate world's IT departments and consulting agencies are filled with developers who write nothing but what boils down to custom-tailored CRUD apps. AI is absolutely KILLER for this kind of work.

If you're doing novel research, perhaps not. But that describes a relatively small portion of the worldwide developer community.

goshx 9/13/2025
This kid is 35, by the way.
coolThingsFirst 9/14/2025
George, you post this after everyone has already realized that LLMs have plateaued.

Before that you were crying about the singularity, how we're super close to it, and how in 6 months AI would basically be Einstein. Nothing has given me more joy than to see this AI slop fail.

Here's a 100-page document generated with AI, and the other person will shrink it with AI. At the end of the day, those 2 lines of actual cognitive effort would've been a better deal than sending slop back and forth.

ripped_britches 9/13/2025
Who hurt you, Georgie?
mikewarot 9/13/2025
LLMs are a tool to help match human thought to what computers can do. People would like them to have exact, reproducible results, but they're on the other end of the spectrum, more like people than tools. George correctly points out that there is a vast space closer to the compute hardware that might profitably be explored. Thanks to the same LLMs, that's about to get a whole lot easier. If you marveled at the instant response of Turbo Pascal and IDEs, you're in for a whole lot more.

--- (that was the tl;dr, here's how I got there) ---

As a mapper[3], I tend to bounce all the things I know against each new bit of knowledge I acquire. Here's what happens when that coincides with GeoHot's observation about LLMs vs compilers. I'm sorry this is so long; right now it's mostly stream of thought with some editing. It's an exploration of bouncing the idea of impedance matching against the things that have helped advance programming and computer science.

--

I've got a cognitive hammer that I tend to over-use, and that is seeing the world through the lens of a Ham Radio operator: impedance matching[2]. In a normal circuit, maximum power flows when the source of power and the load being driven have the same effective resistance. In radio frequency circuits (and actually any AC circuit) there's another aspect, reactance, a time-shifted form of current. This is trickier because there are now two dimensions to consider instead of one, but most of the time a single value, VSWR, is sufficient to tell how well things are matched.

VSWR is adequate to know whether a transmitter is going to work, or whether power bouncing back from the antenna might destroy equipment; but making sure it will work across a wide range of frequencies yields at least a third dimension. As time progresses, if you actually work with those additional dimensions, it slowly sinks in what works, and how, and what had previously seemed like magic becomes engineering.
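For concreteness, here's the standard relationship between mismatch and VSWR (textbook RF formulas, not specific to any one rig; the 50-ohm reference is just the usual convention):

    # Reflection coefficient: gamma = (Z_load - Z0) / (Z_load + Z0)
    # VSWR:                   (1 + |gamma|) / (1 - |gamma|)
    def vswr(z_load: complex, z0: complex = 50 + 0j) -> float:
        gamma = (z_load - z0) / (z_load + z0)
        mag = abs(gamma)
        return (1 + mag) / (1 - mag) if mag < 1 else float("inf")

    print(vswr(50 + 0j))   # 1.0  -> perfect match
    print(vswr(100 + 0j))  # 2.0  -> modest mismatch
    print(vswr(50 + 50j))  # ~2.6 -> reactance alone raises VSWR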

For example, vacuum-tube-based transmitters have higher resistances than almost any antenna; transformers, and coupling through elements that shift power back and forth between the two dimensions, allow optimum transfer without losses, at the cost of complexity.

On the other hand, semiconductor-based transmitters tend to have the opposite problem: their impedances are lower, so different patterns work for them. But most people still just see it as "antenna matching" and focus on the single number, ignoring the complexity.

{{Wow... this is a book, not an answer on HN... it'll get shorter after a few edits, I hope, NOPE... it's getting longer...}}

Recently, a tool that used to cost thousands of dollars, the Vector Network Analyzer, has become available for less than $100. It allows measuring resistance, reactance, and gain simultaneously across frequency. Like compilers, it makes manageable things that otherwise seemed too complex. It only took a few sessions playing with a NanoVNA to understand things that previously would have required some intense EE classwork with Smith Charts.

Similarly, tools like Software Defined Radios for $30, and GNU Radio (for $0.00) allowed understanding digital signal processing in ways that would have been equally difficult without professional instruction. With these tools, you can build a signal flow graph in an interactive window, and in moments have a working radio for FM, AM, Sideband, or any other thing you can imagine. It's magic!
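For a flavor of what those flowgraph blocks compute under the hood, here's a numpy sketch of quadrature FM demodulation on a synthetic signal (roughly the math inside a GNU Radio quadrature-demod block; no actual SDR hardware assumed):

    import numpy as np

    fs = 48_000                        # sample rate (Hz)
    t = np.arange(fs) / fs
    message = np.sin(2 * np.pi * 5 * t)  # 5 Hz test tone
    deviation = 1_000                     # frequency deviation (Hz)

    # FM modulate: phase is the running sum of instantaneous frequency.
    phase = 2 * np.pi * deviation * np.cumsum(message) / fs
    iq = np.exp(1j * phase)               # complex baseband signal

    # Demodulate: the phase difference between consecutive samples is
    # proportional to the instantaneous frequency, i.e. the message.
    demod = np.angle(iq[1:] * np.conj(iq[:-1])) * fs / (2 * np.pi * deviation)

    print(np.max(np.abs(demod - message[1:])))  # tiny reconstruction error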

-- back to computing and HN --

In the Beginning was ENIAC, a computer that took days to set up and get working on a given problem, by a team with some experience. Then John von Neumann came along and added the idea of stored programs, which involved sacrificing the inherently parallel nature of the machine, losing 70% of its performance, but making it possible to set it up for a task simply by loading a "program" onto its bank of instruction switches.

Then came cards and paper tape storage, further increasing the speed at which data and programs could be handled.

It seems to me that compilers were like one of the above tools: they made it possible for humans to do things that only Alan Turing or others similarly skilled could do in the early days of programming.

Interactive programming increased the availability of compute, and made machines that were much faster than the programmers more easily shared among teams of programmers.

IDEs were another. Turbo Pascal allowed compiling, linking, and execution to happen almost instantly. It threw the space for experimentation wide open, reducing the time required to get feedback from minutes to almost zero.

I've done programming on and off through 4 decades of work. Most of my contemplation is as an enthusiast rather than a professional. As for compilers and the broader areas of Computer Science I haven't formally studied, it seems to me that LLMs, especially the latest "agentic" versions, will let me explore things far more easily than I might otherwise have done. LLMs have helped me match my own thoughts across a much wider cognitive impedance landscape. (There's that analogy/hammer in use...)

Compilers are an impedance-matching mechanism: allowing a higher level of abstraction gives flexibility. One of the ideas I've had in the past for better interaction between people and compilers is to allow compilers that also work backwards.[1] I'm beginning to suspect that with LLMs I might actually be able to attempt to build this system; it has always seemed out of reach because of the levels of complexity involved.
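A toy sketch of that round-trip idea, using a tiny invented expression language (real bidirectionality as described in [1] is far harder than this):

    # Forward: AST -> stack-machine code. Backward: code -> the same AST.
    from dataclasses import dataclass

    @dataclass
    class Num:
        value: int

    @dataclass
    class Add:
        left: "Num | Add"
        right: "Num | Add"

    def compile_expr(e) -> list:
        """Forward direction: AST to stack-machine instructions."""
        if isinstance(e, Num):
            return [("PUSH", e.value)]
        return compile_expr(e.left) + compile_expr(e.right) + [("ADD",)]

    def decompile(code: list):
        """Backward direction: instructions back to an AST."""
        stack = []
        for op in code:
            if op[0] == "PUSH":
                stack.append(Num(op[1]))
            else:
                r, l = stack.pop(), stack.pop()
                stack.append(Add(l, r))
        return stack.pop()

    expr = Add(Num(1), Add(Num(2), Num(3)))
    assert decompile(compile_expr(expr)) == expr  # the round trip holds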

I have several other ideas that might warrant a new attempt, now that I'm out of the job market, and have the required free time and attention.

{{Sorry this turned out to be an essay... I'm not sure how to condense it back down right now}}

[1] https://wiki.c2.com/?BidirectionalCompiler

[2] https://en.wikipedia.org/wiki/Impedance_matching

[3] https://garden.joehallenbeck.com/container/mappers-and-packe...
