
Posted by GeneralMaximus 3 hours ago

I built a programming language using Claude Code(ankursethi.com)
70 points | 90 comments
andsoitis 2 hours ago|
> While working on Cutlet, though, I allowed Claude to generate every single line of code. I didn’t even read any of the code. Instead, I built guardrails to make sure it worked correctly (more on that later).

Impressive. As a practical matter, one wonders what the point would be in creating a new programming language if the programmer no longer has to write or read code.

Programming languages are, after all, the interface that a human uses to give instructions to a computer. If you’re not writing or reading it, the language, by definition, doesn’t matter.

marssaxman 2 hours ago||
The constraints enforced in the language still matter. A language which offers certain correctness guarantees may still be the most efficient way to build a particular piece of software even when it's a machine writing the code.

There may actually be more value in creating specialized languages now, not less. Most new languages historically go nowhere because convincing human programmers to spend the time it would take to learn them is difficult, but every AI coding bot will learn your new language as a matter of course after its next update includes the contents of your website.

raincole 1 hour ago|||
> every AI coding bot will learn your new language

If there are millions of lines on github in your language.

Otherwise the 'teaching AI to write your language' part will occupy so much context and make it so much less efficient than just using TypeScript.

calvinmorrison 1 hour ago||
Uh, not really. I am already having Claude read and then one-shot proprietary ERP code written in a vintage, closed-source, OOP-oriented BASIC with sparse documentation... I just needed to feed in the millions of lines of code I have, and it works.
vrighter 1 hour ago||
"i haven't been able to find much" != "there isn't much on the entire internet fed into them"
UncleOxidant 1 hour ago||||
> but every AI coding bot will learn your new language as a matter of course after its next update includes the contents of your website.

That's assuming that your new, very unknown language gets slurped up in the next training session which seems unlikely. Couldn't you use RAG or have an LLM read the docs for your language?

clickety_clack 1 hour ago|||
Agreed - unpopular languages and packages have pretty shaky outcomes with code generation, even ones that have been around since before 2023.
almog 1 hour ago|||
Neither RAG nor loading the docs into the context window would produce any effective results. Not even including the grammar files and just a few examples in the training set would help. To get any usable results you still need many, many usage examples.
marssaxman 9 minutes ago||
And yet it works well enough, regardless. I have a little project which defines a new DSL. The only documentation or examples which exist for this little language, anywhere in the world, are on my laptop. There is certainly nothing in any AI's training data about it. And yet: codex has no trouble reading my repo, understanding how my DSL works, and generating code written in this novel language, at my request.
danielvaughn 1 hour ago||||
In addition, I think token efficiency will continue to be a problem. So you could imagine very terse programming languages that are roughly readable for a human, but optimized to be read by LLMs.
Insanity 1 hour ago|||
That's an interesting idea. But IMO the real 'token saver' isn't in the language keywords but it's in the naming of things like variables, classes, etc.

There are languages that are already pretty sparse with keywords. E.g. in Go you can write 'func Run() string' with no need to declare that it's public, static, etc. So combining a less verbose language with 'codegolfing' the variables might be enough.
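The token-savings idea above can be roughly illustrated. This is a hypothetical sketch: it uses the common "~4 characters per token" rule of thumb as a stand-in for a real BPE tokenizer (which would also split long identifiers into multiple subword tokens), so the numbers are only indicative.

```python
# Rough illustration: identifier length, not keywords, drives most of
# the token-count difference between "readable" and "golfed" code.
verbose = "func CalculateTotalPrice(itemPrices []float64, taxRate float64) float64"
terse = "func tp(p []float64, r float64) float64"

def rough_tokens(src: str) -> int:
    # Crude estimate: ~4 characters per token. A real LLM tokenizer
    # (e.g. a BPE tokenizer) would give different absolute numbers,
    # but the relative gap from long identifiers remains.
    return max(1, len(src) // 4)

print(rough_tokens(verbose), rough_tokens(terse))
```

Under this estimate the golfed signature costs roughly half the tokens of the descriptive one, which is the trade-off the thread is debating.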

gf000 47 minutes ago||
Go is one of the most verbose mainstream programming languages, so that's a pretty terrible example.
Insanity 14 minutes ago|||
Maybe not a perfect example but it’s more lightweight than Java at least haha
giancarlostoro 16 minutes ago||||
To you maybe, but Go is running a large amount of internet infrastructure today.
LtWorf 36 minutes ago|||
Well LLMs are made to be extremely verbose so it's a good match!
idiotsecant 1 hour ago|||
I think I remember seeing research right here on HN that terse languages don't actually help all that much
thomasmg 42 minutes ago||
I would be very interested in this research... I'm trying to write a language that is simple and concise like Python, but fast and statically typed. My gut feeling is that anything more concise than Python (J, K, or some code-golfing language) is bad for readability, but so is the verbosity of Rust, Zig, or Java.
quotemstr 38 minutes ago||||
Those constraints can be enforced by a library too. Even humans sometimes make a whole new language for something that can be a function library. If you want strong correctness guarantees, check the structure of the library calls.

Programming languages function in large parts as inductive biases for humans. They expose certain domain symmetries and guide the programmer towards certain patterns. They do the same for LLMs, but with current AI tech, unless you're standing up your own RL pipeline, you're not going to be able to get it to grok your new language as well as an existing one. Your chances are better asking it to understand a library.
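The "enforce constraints in a library, check the structure of the calls" idea can be sketched with a hypothetical example (the `Query` builder and its method names are invented for illustration): a library that rejects malformed call sequences at runtime, standing in for guarantees a specialized language might bake into its grammar.

```python
# Hypothetical sketch: a builder library that enforces call-structure
# constraints (one table, select before where) instead of a DSL grammar.
class Query:
    def __init__(self):
        self._table = None
        self._columns = []
        self._filters = []

    def from_table(self, name):
        if self._table is not None:
            raise ValueError("from_table() may only be called once")
        self._table = name
        return self

    def select(self, *columns):
        self._columns.extend(columns)
        return self

    def where(self, clause):
        if not self._columns:
            raise ValueError("call select() before where()")
        self._filters.append(clause)
        return self

    def build(self):
        if self._table is None or not self._columns:
            raise ValueError("query needs a table and at least one column")
        sql = f"SELECT {', '.join(self._columns)} FROM {self._table}"
        if self._filters:
            sql += " WHERE " + " AND ".join(self._filters)
        return sql

print(Query().from_table("users").select("id", "name").where("age > 21").build())
```

An LLM that misorders the calls gets an immediate, explicit error to react to, which is the same feedback loop a strict compiler would provide.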

imiric 1 hour ago|||
> every AI coding bot will learn your new language as a matter of course after its next update includes the contents of your website.

How will it "learn" anything if the only available training data is on a single website?

LLMs struggle with following instructions when their training set is massive. The idea that they will be able to produce working software from just a language spec and a few examples is delusional. It's a fundamental misunderstanding of how these tools work. They don't understand anything. They generate patterns based on probabilities and fine tuning. Without massive amounts of data to skew the output towards a potentially correct result they're not much more useful than a lookup table.

dmd 25 minutes ago|||
It's wild to me the disconnect between people who actually use these tools every day and people who don't.

I have done exactly the above with great success. I work with a weird proprietary esolang sometimes that I like, and the only documentation - or code - that exists for it is on my computer. I load that documentation in, and it works just fine and writes pretty decent code in my esolang.

"But that can't possibly work [based on my misunderstanding of how LLMs work]!" you say.

Well, it does, so clearly you misunderstand how they work.

Zak 52 minutes ago|||
They don't understand anything, but they sure can repeat a pattern.

I'm using Claude Code to work on something involving a declarative UI DSL that wraps a very imperative API. Its first pass at adding a new component required imperative management of that component's state. Without that implementation in context, I told Claude the imperative pattern "sucks" and asked for an improvement just to see how far that would get me.

A human developer familiar with the codebase would easily understand the problem and add some basic state management to the DSL's support for that component. I won't pretend Claude understood, but it matched the pattern and generated the result I wanted.

This does suggest to me that a language spec and a handful of samples is enough to get it to produce useful results.

voxleone 58 minutes ago|||
In the 90s people hoped Unified Modeling Language diagrams would generate software automatically. That mostly didn’t happen. But large language models might actually be the realization of that old dream. Instead of formal diagrams, we describe the system in natural language and the model produces the code. It reminds me of the old debates around visual web tools vs hand-written HTML. There seems to be a recurring pattern: every step up the abstraction ladder creates tension between people who prefer the new layer and those who want to stay closer to the underlying mechanics.

Roughly: machine code --> assembly --> C --> high-level languages --> frameworks --> visual tools --> LLM-assisted coding. Most of those transitions were controversial at the time, but in retrospect they mostly expanded the toolbox rather than replacing the lower layers.

One workflow I’ve found useful with LLMs is to treat them more like a code generator after the design phase. I first define the constraints, objects, actors, and flows of the system, then use structured prompts to generate or refine pieces of the implementation.

abraxas 42 minutes ago||
I agree with the sentiment but want to point out that the biggest drive behind UML was the enrichment of Rational Software and its founders. I doubt anyone ever succeeded in implementing anything useful with Rational Rose. But the Rational guys did have a phenomenal exit and that's probably the biggest success story of UML.

I'm being slightly facetious of course, I still use sequence diagrams and find them useful. The rest of its legacy though, not so much.

spelunker 2 hours ago|||
Like everything generated by LLMs though, it is built on the shoulders of giants - what will happen to software if no one is creating new programming languages anymore? Does that matter?
idiotsecant 1 hour ago||
I think the only hope is that AGI arises and picks up where humanity left off. Otherwise I think this is the long dark teatime of human engineering of all sorts.
_aavaa_ 2 hours ago|||
I don’t agree with the idea that programming languages don’t have an impact on an LLM's ability to write code. If anything, I imagine that, all else being equal, a language where the compiler enforces multiple levels of correctness would help the AI get to a goal faster.
phn 2 hours ago|||
A good example of this is Rust. Rust is memory safe by default compared to, say, C, at the expense of you having to be deliberate in managing memory. With LLMs this equation changes significantly, because that harder/more verbose code is being written by the LLM, so it won't slow you down nearly as much. Even better, the LLM can interact with the compiler if something is not exactly as it should be.

On a different but related note, it's almost the same as pairing django or rails with an LLM. The framework allows you to trust that things like authentication and a passable code organization are being correctly handled.

jetbalsa 2 hours ago|||
That is why TypeScript is the main one used by most people vibe coding. The LLMs do like to work around the type engine in it sometimes, but strong typing and linting can help a ton.
onlyrealcuzzo 1 hour ago|||
> Impressive. As a practical matter, one wonders what the point would be in creating a new programming language if the programmer no longer has to write or read code.

I'm working on a language as well (hoping to debut by end of month), but the premise of the language is that it's designed like so:

1) It maximizes local reasoning and minimizes global complexity

2) It makes the vast majority of bugs / illegal states impossible to represent

3) It makes writing correct, concurrent code as maximally expressive as possible (where LLMs excel)

4) It maximizes optionality for performance increases (it's always just flipping option switches - mostly at the class and function input level, occasionally at the instruction level)

The idea is that it should be as easy as possible for an LLM to write it (and especially to convert other languages into it), and as easy as possible for you to understand, while being almost as fast as absolutely perfect C code. By virtue of the design of the language, at the human review phase you have minimal concerns about hidden gotcha bugs.

idiotsecant 1 hour ago||
How does a programming language prevent the vast majority of bugs? I feel like we would all be using that language!
onlyrealcuzzo 10 minutes ago|||
See Rust with use-after-free prevention, fearless concurrency, etc.

My language is a step ahead of Rust, but not as strict as Ada, while being easier to read than Swift (especially where concurrency is involved).

gf000 41 minutes ago|||
I agree with your questioning of it being capable of preventing bugs, but your second point is quite likely false -- we have developed a bunch of very useful abstractions in "research" languages 50 years ago, only to re-discover them today (no null, algebraic data types, pattern matching, etc).
koolala 1 hour ago|||
Saves tokens. The main reason though is to manage performance for what techniques get used for specific use cases. In their case it seems to be about expressiveness in Bash.
johnfn 2 hours ago|||
> If you’re not writing or reading it, the language, by definition doesn’t matter.

By what definition? It still matters if I write my app in Rust vs., say, Python, because the Rust version still has better performance characteristics.

johnbender 2 hours ago|||
In principle (and we hope in practice) the person is still responsible for the consequences of running the code and so it remains important they can read and understand what has been generated.
andyfilms1 2 hours ago||
I've been wondering if a diffusion model could just generate software as binary that could be fed directly into memory.
entropie 2 hours ago||
Yeah, what could go wrong.
asciimov 17 minutes ago||
This takes all the satisfaction out of spending a few well thought out weekends to build your own language. So many fun options: compiled or interpreted; virtual machine, or not; single pass, double pass, or (Leeloo Dallas) Multipass? No cool BNF grammars to show off either…

It’s missing all the heart, the soul, of deciding and trading off options to get something to work just for you. It’s like you bought a rat bike from your local junkyard and are trying to pass it off as your own handmade cafe racer.

gopalv 16 minutes ago||
> More addictive than that is the unpredictability and randomness inherent to these tools. If you throw a problem at Claude, you can never tell what it will come up with. It could one-shot a difficult problem you’ve been stuck on for weeks, or it could make a huge mess. Just like a slot machine, you can never tell what might happen. That creates a strong urge to try using it for everything all the time.

That is the part of the post that stuck with me, because I've also picked up impossible challenges and tried to get Claude to dig me out of a mess without giving up from very vague instructions[1].

The effect feels like the Loss-Disguised-As-Win feeling of the video-games I used to work on at Zynga.

Sure it made a mistake, but it is right there, you could go again.

Pull the lever, doesn't matter if the kids have Karate at 8 AM.

[1] - https://github.com/t3rmin4t0r/magic-partitioning

bobjordan 1 hour ago||
I've been working on a large codebase that was already significant before LLM-assisted programming, leveraging code I’d written over a decade ago. Since integrating Claude and Codex, the system has evolved and grown massively. Realistically, there’s a lot in there now that I simply couldn't have built in a standard human lifetime without them.

That said, the core value of the software wouldn't exist without a human at the helm. It requires someone to expend the energy to guide it, explore the problem space, and weave hundreds of micro-plans into a coherent, usable system. It's a symbiotic relationship, but the ownership is clear. It’s like building a house: I could build one with a butter knife given enough time, but I'd rather use power tools. The tools don't own the house.

At this point, LLMs aren't going to autonomously architect a 400+ table schema, network 100+ services together, and build the UI/UX/CLI to interface with it all. Maybe we'll get there one day, but right now, building software at this scale still requires us to drive. I believe the author owns the language.

aleksiy123 58 minutes ago||
On the topic of LLMs not doing well with UI and visuals:

I've been trying a new approach I call CLI-first. I realized CLI tools are designed to be used both by humans (command line) and machines (scripting), and are perfect for LLMs since they're a text-only interface.

Essentially, instead of trying to get an LLM to generate a fully functioning UI app, you focus on building a local CLI tool first.

A CLI tool is cheaper and simpler, but still has a real human UX that pure APIs don't.

You can get the LLM to actually walk through the flows and journeys like a real user, end to end, and it will actually see the awkwardness or gaps in the design.

Your command structure will very roughly map to your resources or pages.

Once you are satisfied with the capability of the CLI tool (which may actually be enough on its own, or serve as just a local UI), you can get it to build the remote storage, then the APIs, and finally the frontend.

All the while, you can still tell it to use the CLI to test through the flows and journeys against real tasks that you have, and iterate on it.

I did this recently for pulling some of my personal financial data and reporting on it. And now I'm doing this for another TTS automation I've wanted for a while.
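The CLI-first workflow described above can be sketched in miniature. This is a hypothetical example (the `finance` tool, its subcommands, and the in-memory store are all invented): subcommands roughly map to the resources a later UI would expose, and a `--json` flag gives agents machine-readable output while humans get plain text.

```python
# Hypothetical "CLI first" sketch: one tool usable by humans and by an
# LLM agent driving it through text, before any UI exists.
import argparse
import json

def build_parser():
    parser = argparse.ArgumentParser(prog="finance")
    sub = parser.add_subparsers(dest="command", required=True)

    add = sub.add_parser("add", help="record a transaction")
    add.add_argument("amount", type=float)
    add.add_argument("category")

    report = sub.add_parser("report", help="summarize by category")
    report.add_argument("--json", action="store_true",
                        help="machine-readable output for scripts/agents")
    return parser

def run(argv, store):
    args = build_parser().parse_args(argv)
    if args.command == "add":
        store.append({"amount": args.amount, "category": args.category})
        return f"recorded {args.amount} in {args.category}"
    if args.command == "report":
        totals = {}
        for tx in store:
            totals[tx["category"]] = totals.get(tx["category"], 0) + tx["amount"]
        return json.dumps(totals) if args.json else "\n".join(
            f"{cat}: {total}" for cat, total in totals.items())

store = []
print(run(["add", "42.50", "groceries"], store))
print(run(["report", "--json"], store))
```

An agent can exercise the same end-to-end flow a person would (`add`, then `report`), which is what makes gaps in the design visible before any frontend is built.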

pluc 1 hour ago||
Claude Code built a programming language using you
dybber 26 minutes ago||
I have been trying this as well, and you can get very far very quickly.

However, I fear that agents will always work better on programming languages they have been heavily trained on. So for agent-based development, inventing a new domain-specific language (e.g. for internal use in a company) might not be as efficient as using a generic programming language that models are already trained on, and just living with the extra boilerplate.

tines 2 hours ago||
Next you can let Claude play your video games for you as well. Gads we are a voyeuristic society aren’t we.
ajay-b 1 hour ago||
Why not let Claude do our dating? I'm surprised someone hasn't thought of this: AI dating, let the AI find and qualify a date for you, and match with the person who meets you, for you!
g3f32r 1 hour ago||
I suspect this is going to be an iteration of the Simpsons meme soon, but...

Black Mirror did it first https://en.wikipedia.org/wiki/Hang_the_DJ

theblazehen 2 hours ago|||
Here's Claude playing Detroit: Become Human https://www.youtube.com/watch?v=Mcr7G1Cuzwk
jetbalsa 2 hours ago|||
I am kind of doing that now. I put Kimi K2.5 into a Ralph loop to make a Screeps.com AI. So far it's been awful at it. If you want to track its progress, I have its dashboard at https://balsa.info
knicholes 32 minutes ago||
Honestly some of the most fun I had playing Ultima Online was writing scripts to play it for me.
ramon156 2 hours ago||
AI-written code with a human-written blog post; that's a big step up.

That said, it's a lot of words to say not a lot of things. Still a cool post, though!

ivanjermakov 2 hours ago||
> with a human-written blog post

I believe we're at a point where it's not possible to accurately decide whether a text is completely written by a human, by a computer, or something in between.

wavemode 1 hour ago||
We're definitely not at that point.

If this blog post is unedited LLM output, the blog owner needs to sell whatever model, setup and/or prompt he used for a million dollars, since it's clearly far beyond the state-of-the-art in terms of natural-sounding tone.

craigmart 1 hour ago||
You can make an LLM sound very natural if you simply ask for it and provide enough text in the tone you’d like it to reproduce. Otherwise, it’s obvious that an LLM with no additional context will try to stick to the tone the company aligned it to produce
exitb 45 minutes ago||
”I named it Cutlet after my cat. It’s completely legal to do that.”

I’ve never seen an LLM able to produce this kind of absurdist joke. Or any jokes, really.

Bnjoroge 1 hour ago||
Agree. I've been yearning for more insightful posts and there's just not a lot of them out there these days.
shadeslayer 16 minutes ago|
It’s been a while friend

Congratulations on getting to the front page ;)
