
Posted by GeneralMaximus 5 hours ago

I built a programming language using Claude Code (ankursethi.com)
80 points | 118 comments
jcranmer 4 hours ago|
I recently tried using Claude to generate a lexer and parser for a language I was designing. As part of its first attempt, this was the code to parse a float literal:

  fn read_float_literal(&mut self) -> &'a str {
    let start = self.pos;
    while let Some(ch) = self.peek_char() {
      if ch.is_ascii_alphanumeric() || ch == '.' || ch == '+' || ch == '-' {
        self.advance_char();
      } else {
        break;
      }
    }
    &self.source[start..self.pos]
  }
Admittedly, I do have a very idiosyncratic definition of floating-point literal for my language (I have a variety of syntaxes for NaNs with payloads), but... that is not a usable definition of float literal.
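For contrast, a conventional float-literal reader (digits, an optional fraction, an optional signed exponent) might look like the sketch below. The `Lexer` struct and the `peek_char`/`advance_char` helpers are my stand-ins modeled on the snippet above, not the commenter's actual code:

```rust
// Minimal sketch, not the parent comment's real lexer.
struct Lexer<'a> {
    source: &'a str,
    pos: usize,
}

impl<'a> Lexer<'a> {
    fn peek_char(&self) -> Option<char> {
        self.source[self.pos..].chars().next()
    }

    fn advance_char(&mut self) {
        if let Some(ch) = self.peek_char() {
            self.pos += ch.len_utf8();
        }
    }

    // Digits, at most one '.', then an optional exponent with an
    // optional sign -- roughly the shape most languages use, before
    // adding anything exotic like NaN payloads.
    fn read_float_literal(&mut self) -> &'a str {
        let start = self.pos;
        while matches!(self.peek_char(), Some(c) if c.is_ascii_digit()) {
            self.advance_char();
        }
        if self.peek_char() == Some('.') {
            self.advance_char();
            while matches!(self.peek_char(), Some(c) if c.is_ascii_digit()) {
                self.advance_char();
            }
        }
        if matches!(self.peek_char(), Some('e' | 'E')) {
            self.advance_char();
            if matches!(self.peek_char(), Some('+' | '-')) {
                self.advance_char();
            }
            while matches!(self.peek_char(), Some(c) if c.is_ascii_digit()) {
                self.advance_char();
            }
        }
        &self.source[start..self.pos]
    }
}
```

On input like `1.5e-3+x` this stops after `1.5e-3`, whereas the generated version above, which accepts any run of alphanumerics plus `.`, `+`, and `-`, would happily consume the entire string, including the `+x`.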

At the end of the day, I threw out all of the code the AI generated and wrote it myself, because the AI struggled to produce code that was functional to spec, much less code that would allow me to easily extend it to the other kinds of operators I knew I would need in the future.

dboreham 3 hours ago|
I had a somewhat similar experience with Claude coding an Occam parser, but I just let it do its thing, and once I had presented it with a suitable suite of test source code, it course-corrected, refactored, and ended up with a reasonable solution. The journey was a bit different from an experienced human developer's, but the results were much the same and perhaps 100X cheaper.
jcranmer 2 hours ago||
Some of the issues are undoubtedly that I have a decidedly non-standard architecture for my system that the AI refuses to acknowledge--it hallucinated things like integers, which aren't part of my system, simply because what I have looks almost like a standard example expression grammar, so clearly I must have all of the standard example expression grammar things. (This is a pretty common failure mode I've noticed in AI-based systems--when the thing you're looking for is very similar to a very notable, popular thing, AI systems tend to assume you mean the latter as opposed to the former.)
atoav 3 hours ago||
I rolled a fair die using ChatGPT.
craigmcnamara 4 hours ago||
Now anyone can be a Larry Wall, and I'm not sure that's a good thing.
nz 4 hours ago|
This is not exactly novel. In the 2000s, someone made a fully functioning Perl 6 runtime in a very short amount of time (a month, IIRC) using Haskell. The various Lisps/Schemes have always given you the ability to implement specialized languages even more quickly and ergonomically than Haskell (IMHO).

This latest fever for LLMs simply confirms that people would rather do _anything_ other than program in a (not necessarily purely) functional language that has meta-programming facilities. I personally blame functional fixedness (psychological concept). In my experience, when someone learns to program in a particular paradigm or language, they are rarely able or willing to migrate to a different one (I know many people who refused to code in anything that did not look and feel like Java, until forced to by their growling bellies). The AI/LLM companies are basically (and perhaps unintentionally) treating that mental inertia as a business opportunity (which, in one way or another, it was for many decades and still is -- and will probably continue to be well into a post-AGI future).

zahirbmirza 3 hours ago||
"Just one more prompt..." I can relate. Who else has been affected by this?
ractive 10 minutes ago|
Yes, it completely sucks you in and you do "just one more prompt" until late in the night. And somehow you wake up with a headache the next morning...
shevy-java 3 hours ago||
That was step #1.

Step #2 is: get real people to use it!

mriet 4 hours ago||
Wait. You built a new language, one that there's thus no training data for.

Who the hell is going to use it then? You certainly won't, because you're dependent on AI.

logicprog 4 hours ago||
"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."

https://news.ycombinator.com/newsguidelines.html

Bnjoroge 3 hours ago|||
It's a valid question and one that everyone should be asking, unless of course it's for fun, which is what I believe this is.
croes 3 hours ago|||
It isn’t shallow.

Who’s going to use it?

koolala 4 hours ago||
With clear examples in their context they don't need training data.
iberator 3 hours ago||
Nope. You didn't write it. You plagiarized it. AI is bad
cptroot 23 minutes ago|
If you read TFA, you'll find that the author agrees with you - at least on your first point.

While I agree "AI is bad", well-written posts like this one can provide real insight into the process of using them, and reveal more about _why_ AI is bad.

kerkeslager 4 hours ago||
> While working on Cutlet, though, I allowed Claude to generate every single line of code. I didn’t even read any of the code. Instead, I built guardrails to make sure it worked correctly (more on that later).

The "more on that later" was unit tests (also generated by Claude Code) and sample inputs and outputs (which is basically just unit tests by a different name).

This is... horrifically bad. It's stupidly easy to make unit tests pass with broken code, and even more stupidly easy when the test is also broken.

These "guardrails" are made of silly putty.
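To make the failure mode concrete, here's a hypothetical sketch (not the article's code) of how a broken implementation and a test generated from the same wrong assumption pass together:

```rust
// Hypothetical example: a tokenizer stub that is wrong in exactly the
// same way as the generated test asserting on it.
fn count_tokens(src: &str) -> usize {
    // Bug: splits on whitespace only, so "a+b" is counted as one
    // token instead of three.
    src.split_whitespace().count()
}
```

A generated "guardrail" like `assert_eq!(count_tokens("a+b"), 1);` encodes the same wrong assumption, so it passes and the broken tokenizer ships.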

EDIT: Would downvoters care to share an explanation? Preferably one they thought of?

octoclaw 4 hours ago|
[dead]