If Mojo succeeds, it could be the one language spanning those levels, while simplifying heterogeneous hardware programming.
The build/run/deploy system was the other major issue. All the Python versions, virtual envs, etc. seemed like a mess. A compiled language is so much better (e.g. Go, Rust) IMHO.
Just wanted to provide an easy counterpoint to the logical fallacy by IceDane.
What’s that supposed to mean?
Since there is not much Mojo code in the wild for LLMs to have been trained on, I wonder how it will work in practice.
Probably the agents will make lots of mistakes and you will spend 10x the tokens compared to using a language the models are well versed in.
But then I read this:
> AI native
> Mojo is built from the ground up to deliver the best performance on the diverse hardware that powers modern AI systems. As a compiled, statically-typed language, it's also ideal for agentic programming.
Well, no thank you. I know the irony here but I want nothing to do with a language made for robots.
Go on, give it a shot. It stops being intimidating soon! And remember that the uv we all love was heavily influenced by Cargo.
Some alternatives are as old as 1958.
I remember Rust very fondly, in fact. And I had the same experience as you: learning Rust made me a better JavaScript programmer. Let's see if a little neural network can be as fun.
That's a very big claim.
Now I will probably rewrite the model in Rust if I want to do anything with it (mostly for the WebAssembly target, as I want this thing to run in browsers), but I will for sure be using Julia for further experimentation. Lovely language.
Funny you should say that... there was recently a very interesting announcement for a Julia-to-WASM compiler and a full-stack signals-based web framework:
https://discourse.julialang.org/t/ann-experimental-wasmtarge...
> Both repos were built iteratively with LLM coding agents
I think I would rather just use Rust.
Modular has at least publicly promised that they will open-source Mojo this year, but somehow here that is a problem.
Unbelievable.
For instance, you can write Python without using CUDA. CUDA's existence doesn't make Python less useful. But what do you do when you bump into a bug in Mojo? You have no ability to fix it yourself. At best, you can report it to the authors and hope they care enough about it to put in the work and release an update. If you run into a Python problem, you, or someone in your org, or a paid consultant, can fix it even if the Python core team doesn't care about it.
But somehow it is a problem when Modular gives a single promise to open-source their Mojo compiler?
> For instance, you can write Python without using CUDA. CUDA's existence doesn't make Python less useful. But what do you do when you bump into a bug in Mojo? You have no ability to fix it yourself.
You don't use AI training / inference with bare Python at all.
PyTorch (which almost all AI researchers use) primarily uses CUDA as the default, and it is less useful without it (all other backends are slower). If there is a bug anywhere from PyTorch down to the silicon, you need to investigate whether it is a PyTorch problem, a C++ or Python issue (or both), or a CUDA driver issue.
So it's a bug in one place (Mojo) vs. a bug in four different places, one of which (CUDA) will never be open source. The latter is worse.
> At best, you can report it to the authors and hope they care enough about it to put in the work and release an update. If you run into a Python problem, you, or someone in your org, or a paid consultant, can fix it even if the Python core team doesn't care about it.
You are assuming Modular will never open-source the Mojo compiler, when it is clear that Nvidia has been completely hostile to opening anything related to CUDA and its compiler.