
Posted by indigodaddy 1 day ago

If AI writes your code, why use Python? (medium.com)
890 points | 944 comments | page 15
brontosaurusrex 1 day ago|
Is there a blocker that would prevent a future AI from writing perfect assembler (for n architectures) on the first pass?
Decabytes 1 day ago||
I legit have had this same thought. If we are going to be writing programs with AI, we should be programming in a more performant and explicit way, with statically typed languages that can encode the program's invariants, even if that style would be tedious for humans.
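One way to picture "encoding the invariants", even in Python: push a constraint into a type so invalid values can't be constructed. This is a minimal sketch; the `Probability` type and `combine` function are invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical example: encode the invariant "a probability lies in [0, 1]"
# in the type itself, instead of trusting every caller to check.
@dataclass(frozen=True)
class Probability:
    value: float

    def __post_init__(self) -> None:
        if not 0.0 <= self.value <= 1.0:
            raise ValueError(f"not a probability: {self.value}")

def combine(p: Probability, q: Probability) -> Probability:
    # Product of two values in [0, 1] stays in [0, 1],
    # so the result is valid by construction.
    return Probability(p.value * q.value)

print(combine(Probability(0.5), Probability(0.4)).value)
```

A statically typed language would reject many misuses at compile time; here the invariant is at least enforced at one choke point rather than scattered across the codebase.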
pvelagal 1 day ago||
Nice perspective on languages in the AI era. I think AI should be used to build the best-performing and most scalable software systems.
serf 1 day ago||
1) Python is one of the most heavily trained-upon languages

2) it's practically verbose, not technically

3) it resembles pseudocode

4) batteries included shortcuts a lot of work

all of these reasons are a boon for LLM work.

sakesun 1 day ago||
Python is more of a UI for human comprehension of logic: a mathematical notation, not code to drive a computer.

And a prompt does not replace that.

devin 1 day ago||
Clojure is better. REPL + immutable defaults.
vegnus 22 hours ago|
Clojure, or something Clojure-like, will become the default for LLMs; or rather, it should. It seems too good to ignore.
fxj 1 day ago||
One thing to consider:

The (well-known) Sapir–Whorf hypothesis (if you don't know it, look it up) is often invoked for natural languages, but there's a pretty direct analogue for programming languages: the language you "think in" while solving a problem biases which abstractions and idioms you reach for first.

If you force an LLM to first solve a problem in a highly abstract language (Lisp, APL, Prolog) and only later translate that solution to C++ or Rust, you're effectively changing the intermediate representation the model works in. That IR has very different affordances, e.g.:

- Lisp pushes you toward recursive tree/list processing, higher‑order functions and macro‑like decomposition. (Some nice web frameworks were initially written in Lisp, Scheme, etc.)

- APL pushes you toward whole‑array transforms, point‑free pipelines and exploiting data parallelism. (Banks are still using it because of performance.)

- Prolog pushes you toward facts/rules, constraint satisfaction, and backtracking search. (It is a very high level of abstraction, but it might suit LLMs very well.)

When you then translate that program into C++/Rust/Python, a lot of this bias leaks through. You often end up with:

- Rule engines, constraint solvers, or table‑driven dispatch code when the starting point was Prolog.

- Iterator/functor pipelines and EDSL‑like combinators when the starting point was Lisp.

- Data‑parallel kernels and "vectorized" loops when the starting point was APL.
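A rough Python illustration of how the same (invented) task can land in any of these three idioms, depending on which paradigm the solution was first "thought" in. The task and all names are made up for illustration:

```python
# Invented task: label numbers as "fizz"/"buzz"/"fizzbuzz"/"other".
nums = [3, 5, 7, 15]

# Prolog-ish residue: table-driven rules with first-match-wins dispatch.
rules = [
    (lambda n: n % 15 == 0, "fizzbuzz"),
    (lambda n: n % 3 == 0, "fizz"),
    (lambda n: n % 5 == 0, "buzz"),
    (lambda n: True, "other"),  # catch-all fact
]
table_driven = [next(label for pred, label in rules if pred(n)) for n in nums]

# Lisp-ish residue: a higher-order pipeline mapping a pure function.
def classify(n: int) -> str:
    return ("fizzbuzz" if n % 15 == 0 else
            "fizz" if n % 3 == 0 else
            "buzz" if n % 5 == 0 else "other")

pipeline = list(map(classify, nums))

# APL-ish residue: one whole-array transform (a comprehension here,
# standing in for a real vectorized kernel such as NumPy).
vectorized = [classify(n) for n in nums]

print(table_driven)  # all three approaches agree on the labels
```

Same answers, very different shapes, which is exactly the "leaked bias" the comment describes.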

In principle, an LLM could generate those idioms directly in C++/Rust. In practice, however, models are heavily shaped by their training distribution and default prompts. If you just say "write in Rust", they tend to regress towards the most common patterns in the corpus (framework‑heavy, imperative, not very aggressively functional or data‑parallel), even when the language would support richer abstractions.

By inserting a "thinking" step in a different paradigm, you bias the search over solution space before you ever get to Rust/C++. That doesn’t magically make the code better, but it does change which regions of the design space the model explores.

The same holds for Python, which is already a multi-idiomatic language. So it might be a good idea to learn a portfolio of different languages and then pick one deliberately for a given problem, instead of automatically reaching for Python/Go/Rust because of performance.

Something to consider...

p.s. How would a problem be solved if the LLM had to write it first in Erlang? Would it then be automatically distributed?

p.p.s. The "design patterns" of the GoF come automatically to my mind, which might be a good hint to give the LLM.

markb139 23 hours ago||
Why use any high-level language at all if AI is writing the software? High-level languages exist mostly because humans can't handle the complexity. That's not an issue for an automated bot.
Myzel394 1 day ago||
Bullshit article. AI is not meant to be a black box that you just spit a prompt at so it generates you a whole app where you don't understand a single line. That WILL eventually fail. There was an article here some time ago where someone described it pretty well: "use AI as autocomplete on steroids". So use any language you can actually debug and know well, and use AI as a tool, not as your replacement. And don't use it to port your Electron app to Rust if you don't know Rust, Jesus.
axegon_ 1 day ago|
> you just spit at it and it'll generate you a whole app and you don't even understand a single line

So we are going to pretend this isn't happening everywhere now? And that it isn't failing on a daily basis? I'm sorry, but I've been saying this for years now and it's my main argument for not using slop machines: no one writes the code and no one reads the code. I can name dozens of Fortune 500 companies where "tokens used" is a performance metric for developers, as in, more tokens = better performance; all code is written by slop machines and all reviews are made by slop machines, and developers simply add "this is intended" in code reviews.

cultofmetatron 1 day ago|
I can't imagine a better output for LLMs than Python. Not because it's particularly good; far from it, it's got dynamic typing and more or less sets you up for runtime failure. However, it probably has the largest corpus of training data aside from JavaScript.
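The "sets you up for runtime failure" point in a nutshell: type annotations in Python are not enforced, so a type mistake only surfaces when the code actually runs. A minimal sketch (function name invented):

```python
# The annotation documents intent but is not enforced at runtime.
def total_price(prices: list[float]) -> float:
    return sum(prices)

print(total_price([9.99, 5.00]))      # works as intended

try:
    total_price(["9.99", "5.00"])     # a static checker like mypy flags this,
except TypeError as e:                # but plain Python fails only at runtime
    print("runtime failure:", e)
```

A statically typed language (or running mypy in CI) would reject the second call before it ever ships.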

Part of me worries that all this push toward LLMs will marginalize niche programming languages in startups, since the lack of training data means falling back to hand-coding, a skill I have a feeling will get increasingly niche over time. I feel capitalism will basically render programming languages into a build artifact over time.
