
Posted by alpaylan 13 hours ago

LLMs could be, but shouldn't be compilers (alperenkeles.com)
108 points | 115 comments | page 2
dpweb 12 hours ago|
Compilation is transforming one computing model into another. LLMs aren't great at everything, but they seem particularly well suited for this purpose.

One of the first things I tried to have an LLM do is transpile. These days that works really well. You find an interesting project in Python, I'm a JS guy, boom: JS version. Very helpful.

echelon 12 hours ago|
Forest for the trees.

You see a business you like, boom competing business.

These are going to turn into business factories.

Anthropic has a business factory. They can make new businesses. Why do they need to sell that at all once it works?

We're focusing on a compiler implementation. Classic engineering mindset. We focus on the neat things that entertain us. But the real story is what these models will actually be doing to create value.

somesortofthing 8 hours ago||
One thing that's missing from this is that the specification itself only matters insofar as it meets its own meta-specification of "what people will use/pay for". LLMs may have an easier time understanding that than understanding what a specific developer wants from them: a perfect implementation of an unmarketable product is mostly pointless.
plastic-enjoyer 12 hours ago||
> It’s that the programming interface is functionally underspecified by default. Natural language leaves gaps; many distinct programs can satisfy the same prompt. The LLM must fill those gaps.

I think this is an interesting development, because we (linguists and logicians in particular) have spent a long time developing a highly specified language that leaves no room for ambiguity. One could say that natural language was considered deficient – and now we are moving in the exact opposite direction.

calebm 9 hours ago||
My biggest AI win so far was using ChatGPT as a transpiler to convert from vanilla JS code to GLSL. It took 7 prompts and about 1.5 hours, but without the AI, I would have been thrilled to have completed the project in a week.
aethrum 12 hours ago||
Can't we just turn the temp down to 0?
helloplanets 11 hours ago||
Even if you turn the temperature down to 0, it's not deterministic. Floating-point math is messy: if there's even a tiny difference in the order of operations among the billions of parallelized floating-point operations running on the actual GPU, it's very possible to end up with a different top-probability logit.
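The order sensitivity is easy to see even on a CPU. A minimal sketch in Python (the values are made up for illustration; the same effect, scaled to billions of parallel GPU operations, is what can flip a logit):

```python
# Floating-point addition is not associative: regrouping the same
# operands changes the last bits of the result.
a, b, c = 0.1, 0.2, 0.3
print((a + b) + c == a + (b + c))  # False

# Summing the same values in a different order can change the answer
# entirely, because small terms get absorbed by large ones:
vals = [1.0, 1e16, -1e16]
print(sum(vals))            # 0.0  (the 1.0 is swallowed by 1e16)
print(sum(reversed(vals)))  # 1.0  (the big terms cancel first)
```

Parallel reductions on a GPU do not guarantee a fixed summation order across runs, which is exactly this effect at scale.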
kibwen 11 hours ago|||
That doesn't make a difference here. Even with a nonzero temperature, an LLM could still be deterministic as long as you have control of its random seed. As the article says:

"This gets to my core point. What changes with LLMs isn’t primarily nondeterminism, unpredictability, or hallucination. It’s that the programming interface is functionally underspecified by default."

abm53 12 hours ago|||
More to the point: is randomness of representation or implementation an inherent issue if the desired semantics of a program are still obeyed?

This is not really a point about whether LLMs can currently be used as English compilers, but more questioning whether determinism of the final machine code output is a critical property of a build system.

MyHonestOpinon 8 hours ago|||
I suppose that even with temp down to zero the model itself changes over time.
hollowturtle 11 hours ago||
They're giant pattern regurgitators, impressive for sure, but they can only be as good as their training data, which is why they seem to be more effective for TypeScript, Python, etc. Nothing less, nothing more. No AGI, no "Job X is done". Hallucinations are a feature; otherwise they would just spit out training data.

The thing is, the whole discussion around these tools is so miserable that I'm pondering the idea of canceling every corner of the internet. The fatigue is real, and pushing back on the hype feels so exhausting, worse than crypto, NFTs, and web3. I'm a user of these tools; I push back on the hype because its ripple effects reach inside my day job, and I'm exhausted by people handing you generated shit just to try to make a point and saying "see? like that".
slopusila 10 hours ago||
Two engineers implementing the same task are non-deterministic,

yet nobody complains about this.

In fact, engineers appreciate that: "we are not replaceable code-monkey cogs in the machine, as management would like."

MyHonestOpinon 8 hours ago|
But once the code is written, tested, etc., it becomes deterministic.
smallnix 10 hours ago||
> Computer science has been advancing language design by building higher and higher level languages

Why? Because new languages have an IR in their compilation path?

lambda-lollipop 8 hours ago||
cf. Dijkstra, "On the foolishness of 'natural language programming'": https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667...

>From one gut feeling I derive much consolation: I suspect that machines to be programmed in our native tongues —be it Dutch, English, American, French, German, or Swahili— are as damned difficult to make as they would be to use.

behnamoh 12 hours ago|
> Specifying systems is hard; and we are lazy.

The more I use LLMs, the more I find this true. Haskell made me think for minutes before writing one line of code. The result? I stopped using Haskell and went back to Python, because with Python I can "think while I code". The separation of thinking|coding phases in Haskell is what my lazy mind didn't want to tolerate.

Same goes for LLMs. I want the model to "get" what I mean, but oftentimes (esp. with Codex) I must be very specific about the project scope and spec. Codex doesn't let me "think while I vibe", because every change is costly and you'd better have a good recovery plan (git?) for when Codex goes astray.
