
Posted by sbt567 1 day ago

Mojo 1.0 Beta(mojolang.org)
355 points | 222 comments
totalperspectiv 15 hours ago|
Having written a lot of Mojo over the last two years, just for fun, it's a really cool language. Ownership model adjacent to Rust, comptime that is more powerful than Zig's, a rich type system, first-class SIMD support, etc.

Performance-wise it's the first language in a long time that isn't just an LLVM wrapper. LLVM is still involved, but they are using it differently than, say, Rust or Zig.

Very excited for Mojo once it's open sourced later this year.

ainch 1 day ago||
As someone in ML who's interested in performance, I'm keen for Mojo to succeed - especially the prospect of mixing GPU and CPU code in the same language. But I do wonder if the changes they're making will dissuade Python devs. The last time I booted it up, I tried to do some basic string manipulation just to test stuff out, but spent an hour puzzling out why `var x = 'hello'; print(x[3])` didn't work, and neither did `len(x)` (turns out they'd opted for more specific byte-vs-codepoint representations, but the docs contradicted the actual implementation).
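
For comparison, plain Python's own byte-vs-codepoint split looks like this (standard CPython, nothing Mojo-specific):

```python
# Plain Python: str is indexed by codepoint, bytes by raw byte.
s = "héllo"
codepoints = len(s)                 # 5 codepoints
utf8 = s.encode("utf-8")
byte_count = len(utf8)              # 6 bytes; 'é' takes two in UTF-8
second_char = s[1]                  # 'é', a whole codepoint
second_byte = utf8[1]               # 0xC3, half of the encoded 'é'
```

Mojo choosing a more explicit representation is defensible, but it does break the muscle memory above.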

Hopefully they get Mojo to a good place for more general ML, but at the moment it still feels quite limited - they've actually deprecated some of the nice builtins they had for Tensors etc... For now I'll stick with JAX and check in periodically, fingers crossed.

rao-v 12 hours ago||
I still don’t understand why we lack a language that will take uncomplicated computation heavy code and turn it into SIMD / multi thread / multiprocessing / GPU code with minimal additional syntax.

Surely this is the sort of thing compiler / language design nerds dream about?

It doesn’t have to guarantee efficiency or provide cutting edge performance in any context … it should just exist!

My understanding is that we can make such a language … but it hasn't caught the fancy of someone who could do it.

tosh 7 hours ago|||
Still a bit early but I'm working on kiwi, a k-dialect that can lower to Apple MLX.

Currently supports CPU and GPU on macOS, and CPU on Linux.

https://kiwilang.com

https://github.com/kiwi-array-lang/kiwi

Kiwi runs computations on small dense arrays in its own runtime; when they are larger it will lower to MLX CPU, and eventually to MLX GPU when it is worth it.

As a user you don't have to change any code, you just write k.

I'm sure there are other languages designed to take advantage of modern GPUs.

But even just with SIMD you can get quite far with array-oriented code, and many array language implementations will make use of it (BQN, ngn/growler/k, goal; ktye's k has a version with SIMD support, …)

rao-v 6 hours ago||
Thanks for sharing, this is neat!

I’ve yet to find a language that does SIMD / multithreading / GPU with minimal tweaks, let alone multiprocessing.

oldmanhorton 12 hours ago||||
Both ahead-of-time and JIT compilers often perform autovectorization of tight loops. The problem is that many hot loops are not simple loops, and in particular a lot of source code is written in a way that uses sequential dependencies which can’t be modeled in SIMD code. Undefined behavior in C/C++ aside, most compilers will decline to autovectorize when doing so would change the behavior of your code, even very slightly, in a hard-to-understand way.
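
A minimal illustration of the two cases, in plain Python (illustrative only; a real autovectorizer works on compiled loops):

```python
x = [1.0, 2.0, 3.0, 4.0]

# Independent iterations: each output depends only on x[i], so a
# compiler can map iterations onto SIMD lanes freely.
doubled = [v * 2.0 for v in x]

# Loop-carried dependency: each iteration reads the previous result,
# so a naive 1:1 mapping onto SIMD lanes is impossible, and most
# compilers refuse to autovectorize rather than risk changing the
# result (e.g. by reassociating float additions).
prefix = []
acc = 0.0
for v in x:
    acc += v
    prefix.append(acc)
```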
rao-v 11 hours ago||
Surely a high-level language can own the contract of making sane choices about when to auto-vectorize and when not to (or just inefficiently auto-vectorize, that is fine too!)
throwup238 9 hours ago||
That’s like saying “surely a high level language can solve the halting problem.”

Yes, it can, but only by eliminating the features that make it Turing complete. It’s relatively easy to vectorize map with a closure that can’t mutate anything but once you have nontrivial control flow, the compiler can’t make those kinds of assumptions.
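
A sketch of the easy case vs the hard case, in plain Python (the function names are made up for illustration):

```python
x = [-2.0, -1.0, 0.0, 1.0, 2.0]

# Branchy scalar code: a data-dependent jump per element. To vectorize
# this, a compiler must prove the untaken path has no side effects.
def relu_branchy(v):
    if v > 0:
        return v
    return 0.0

# Branch-free "select" form: the condition becomes a mask and a
# multiply/select replaces the jump. This is the shape SIMD hardware
# wants, and the rewrite the compiler must justify automatically.
def relu_select(v):
    keep = v > 0
    return v * keep
```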

rao-v 7 hours ago||
It’s really not! We’re not requiring the language to make optimal choices, just that it convert the same code to these different paradigms (and honestly you could just brute force run the 12 versions and choose the fastest one). Absolutely no theory barriers apply!
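
The "brute force the variants" idea is basically autotuning; a toy sketch in plain Python (the variants here are hypothetical stand-ins for the SIMD / threaded / GPU paths a compiler might emit from one source function):

```python
import timeit

# One semantic function, two candidate implementations.
def sum_loop(xs):
    total = 0
    for v in xs:
        total += v
    return total

variants = {"plain_loop": sum_loop, "builtin": sum}
data = list(range(10_000))

# Time every variant on representative input and keep the fastest,
# exactly the "run the 12 versions and choose" strategy.
timings = {name: timeit.timeit(lambda f=f: f(data), number=20)
           for name, f in variants.items()}
best = min(timings, key=timings.get)
```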
bobbyswiss 2 hours ago||
You should design it!
mappu 6 hours ago||||
Intel's ISPC is a compiler for a C superset language that targets CPU SIMD and GPUs.
rao-v 5 hours ago||
A beautiful find! It’s what, 12+ years old at this point?

Definitely the closest thing so far (doesn’t do multiprocessing), but it does seem to do SIMD / multithreading and GPU auto-parallelizing!

Any idea why it’s so little known?

JBits 12 hours ago||||
If you're happy with NumPy's API, then surely JAX is exactly what you're looking for.
rao-v 11 hours ago||
JAX can’t do what Numba can, for example. I just want one way to write simple math-y code like you normally would and automagically convert it to run via one of the above approaches.

That’s what compilers and high level languages are supposed to be for!

teleforce 10 hours ago|||
>I still don’t understand why we lack a language that will take uncomplicated computation heavy code and turn it into SIMD / multi thread / multiprocessing / GPU code with minimal additional syntax.

It already (partly) exists: the D language. By default it's garbage collected (GC), but it can also be programmed without GC, or in a hybrid style. It's modern, backward compatible with C, and included in GCC.

D's linear algebra system, Mir GLAS, is a standalone BLAS implementation written directly in D [1]. It was already shown to be faster than widely used conventional BLAS libraries like OpenBLAS back in 2016, about ten years ago!

The popular OpenBLAS includes Fortran-based LAPACK (yes, you read that right, Fortran) and is currently used by almost all data-processing languages: Matlab, Julia, Rust, and also Mojo [2].

Interestingly, there is a very-early-stage standalone BLAS implementation written directly in Mojo, namely mojoBLAS, similar to Mir GLAS and started very recently [3].

>Surely this is the sort of thing compiler / language design nerds dream about?

You can say that again.

Especially on the GC side of the programming language, since the SIMD / multi-thread / multiprocessing / GPU parts can be abstracted away.

Actually, someone recently proposed VGC, a virtualized garbage collector for Python implemented in C++ for heterogeneous GC [4],[5]. However, the current evaluation excludes JIT compilation, AOT optimization, SIMD acceleration, and GPU offloading.

[1] Numeric age for D: Mir GLAS is faster than OpenBLAS and Eigen:

http://blog.mir.dlang.io/glas/benchmark/openblas/2016/09/23/...

[2] OpenBLAS:

https://en.wikipedia.org/wiki/OpenBLAS

[3] mojoBLAS:

https://github.com/shivasankarka/mojoBLAS

[4] Virtual Garbage Collector (VGC): A Zone-Based Garbage Collection Architecture for Python's Parallel Runtime:

https://arxiv.org/abs/2512.23768

[5] VGC-for-arxiv:

https://github.com/Abdullahlab-n/VGC-for-arxiv

MohamedMabrouk 3 hours ago|||
I don't think Mojo depends on OpenBLAS or any other BLAS implementation. I remember they took a lot of pride in the early days in how linalg primitives like matmul, written completely in Mojo, were faster than MKL, OpenBLAS, and other implementations.
rao-v 7 hours ago|||
Delightful, thank you! Would love to see a version of D that auto-vectorizes to Vulkan or something.
sureglymop 1 day ago|||
Mojo is cool but I just don't understand the python backwards compat thing. They're holding themselves back with that.

All the flaws I can think of in Kotlin are due to the Java compatibility. They could've made it work here by being more explicit but the way it currently works seems doomed.

geodel 18 hours ago|||
> All the flaws I can think of in Kotlin are due to the Java compatibility.

All the use of Kotlin in industry is due to Java compatibility. Otherwise Kotlin would have ~0% marketshare.

loglog 16 hours ago|||
Mojo is NOT Python compatible (although they initially wanted it to be). So they got all downsides without the upsides.
fiedzia 11 hours ago|||
They claim you can easily mix them so there is some degree of compatibility.
Conscat 8 hours ago||
Every reasonable language has a Python interop story. All it takes is C FFI. But what Mojo promised early on was the eventuality of compiling a large amount of Python code if not entire wheels as Mojo.
melodyogonna 4 hours ago||
I don't recall them promising that. They promised it'd be a superset, but Mojo introduces new keywords. Mojo could support all Python features today exactly as they're supported in Python, and you still wouldn't be able to copy Python code into Mojo and compile it.
boxed 6 hours ago|||
"All downsides"? What do you mean?
jasonjmcghee 15 hours ago|||
There is unfortunately likely a lot of truth to this. I like Kotlin, but, anecdotally, I've only ever chosen it due to needing JVM
davidatbu 18 hours ago||||
I'm pretty sure that they have decided that backwards-compat is not the best path for Mojo. Matter of fact, the following is the _last_ item on the roadmap on the home page:

> Supporting more of Python's dynamic features like classes, inheritance, and untyped variables to maximize compatibility with Python code.

What's more, note how it says "to maximize compatibility" not "to achieve full compatibility."

pjmlp 23 hours ago||||
Same story with C and Objective-C, C and C++, JavaScript and TypeScript, Java and Scala, Java and Clojure, ...

Yes, the underlying platform they based their compatibility on is the reason they got some design flaws, some more than others.

However, that compatibility is the reason they won wide adoption in the first place.

tasuki 1 day ago||||
They coulda made it Scala!
boxed 6 hours ago|||
> Mojo is cool but I just don't understand the python backwards compat thing. They're holding themselves back with that.

In reality I think they've dropped that pretty hard. Literally you can't even get the length of a string with `len(s)` in the latest release. They also removed negative indexing, which I find baffling and frustrating. The roadmap does say they don't intend to have any "syntax sugar" until later in the implementation, but negative indexing is such a core part of what makes Python so much nicer to work with compared to say C++...

coldtea 1 day ago|||
>As someone in ML who's interested in performance, I'm keen for Mojo to succeed - especially the prospect of mixing GPU and CPU code in the same language. But I do wonder if the changes they're making will dissuade Python devs.

Unless it's open sourced, it's a moot point, as most Python devs won't come anyway.

flakiness 18 hours ago|||
https://mojolang.org/docs/roadmap/#contributing-to-mojo

> We're committed to open-sourcing all of Mojo, but the language is still very young and we believe a tight-knit group of engineers with a common vision moves faster than a community-driven effort. So we will continue to plan and prioritize the Mojo roadmap within Modular until more of its internal architecture is fleshed out.

I hope they stick to their original promise. And the 1.0 release would be a great time to deliver this.

sidkshatriya 5 hours ago|||
> but the language is still very young and we believe a tight-knit group of engineers with a common vision moves faster than a community-driven effort.

This is a false dichotomy.

For years Golang was developed in the open but strictly moved on the vision of its creators rather than being "community-driven". Many other venerable open source projects don't involve the community in serious strategy discussions. The community mainly acts as a bug finder/fixer. Mojo could do the same: be open source but choose its own priorities internally.

I'm guessing that Mojo is still looking for a monetization strategy. Keeping important things proprietary in Mojo at this stage helps I'm sure (nothing wrong with that).

But I feel the era of the proprietary programming language play is over. Unless you create some hardware (which the Mojo guys don't), it's going to be tough.

chrislattner 15 hours ago||||
Indeed, this fall 100%
nextaccountic 7 hours ago||
Why didn't you just do this the SQLite way and open source it some time ago?

Release the source, but don't take code from external contributors; take issues and discussion instead.

bmandale 17 hours ago||||
open source does not mean open community. you can just throw tarballs over the wall
adamnemecek 18 hours ago||||
This is exactly how the open sourcing of Swift went so I imagine it will be the same.
otabdeveloper4 18 hours ago|||
> We're committed to open-sourcing all of Mojo

Translated from corporatese it means "it will never happen".

jlundberg 17 hours ago||
With Chris Lattner's track record, there is little reason to doubt they actually will open source this.
ModernMech 17 hours ago|||
It’s not Chris Lattner who gets to make the call though. He has investors to the tune of $300 million, and making them happy is the reason it hasn’t been done yet. A lot of people, very reasonably, believe it’s not possible to satisfy both the investors and the development community, and when push comes to shove it’ll be the investors who win, because they have the money. So it’s not Chris Lattner’s track record that makes people worried; it’s the track record of investors choosing control over openness, which is a pretty solid record.
MohamedMabrouk 16 hours ago||
How is it in investors' self-interest to keep a programming language (something nobody makes money on today) closed? It also means library authors can't reason about their code well enough, because they don't know the language internals, and that hurts ecosystem growth. There is no money to be made with a closed language that nobody uses. Modular's investors probably know this.
otabdeveloper4 4 hours ago|||
"We're committed" in official speech means "this thing has absolute lowest priority".
Certhas 1 day ago||||
This is a bit ironic, given that people seem to have no problem using CUDA all over the place... Plus they promise to open source with the 1.0 release. We'll see...
_aavaa_ 1 day ago|||
I don’t see irony there. We’re locked into CUDA due to past decisions. And in new decisions we don’t want to repeat that mistake.
pjmlp 19 hours ago|||
CUDA won because AMD and Intel made a mess out of OpenCL, and Khronos had no vision to support anything beyond C99 dialect until it was too late.

Doesn't matter if it was closed, when the alternatives were much worse.

physicsguy 15 hours ago|||
Plus NVIDIA clocked that it was also about the developer library ecosystem, and even now there just aren’t equivalents. The AMD rocFFT library wasn’t even complete compared to FFTW until very recently, and cuFFT got there more than a decade ago.
zozbot234 18 hours ago|||
SYCL is the de facto successor to OpenCL that supports higher level languages. So the vision was and is there.
pjmlp 18 hours ago||
As mentioned, Khronos only changed their mind when it was too late.

I can also recite the whole story: the missteps in OpenCL 2.x, OpenCL C++, the OpenCL 3.0 reboot, how SYCL came to be, CodePlay's being the only proper available implementation, Intel's acquisition of CodePlay, and everything else.

ktm5j 18 hours ago||||
I'm really not sure that's true... I can't think of a single Python dev I've worked with who cared about open source. All they cared about was the language being easy and free to use.
physicsguy 15 hours ago||
The people that write the libraries care, why do you think Python is where we’re writing ML code and not MATLAB?
zbentley 14 hours ago|||
Mojo is free, though. MATLAB costing money is a bigger issue than it being closed source. R was too late to the game and catered too much to professional math/stats/datascience people rather than programming generalists. Python (with native code interop) hit the sweet spot for breadth/accessibility to the market and capability.
IshKebab 14 hours ago|||
Because MATLAB isn't free to use...

(Among other reasons, but that's easily the main one.)

physicsguy 3 hours ago||
Most of the scientific libraries of note originate in academia, where MATLAB is effectively free to users. The crossover to Python was well under way by ~2014.
MohamedMabrouk 1 day ago|||
I think the plan is to open source the compiler with 1.0, which is expected this summer, so in ~3-4 months' time.
digdugdirk 17 hours ago||
It does almost seem like they're trying to recreate the Nim programming language in this regard.
maxloh 56 minutes ago||
Modular is going to open source the entire mojo SDK later this year, including the compiler.

> Mojo 1.0 will be finalized later this year, along with opening the compiler and providing language stability.

https://www.modular.com/blog/modular-26-3-mojo-1-0-beta-max-...

armchairhacker 1 day ago||
> We have committed to open-sourcing Mojo in Fall 2026.

https://docs.modular.com/mojo/faq/#will-mojo-be-open-sourced

jlundberg 17 hours ago||
Good catch in the noise. Thanks!
dismalaf 8 hours ago||
Nice. I'd love to see the source of an actual state of the art MLIR program.
tkocmathla 5 hours ago||
HEIR [1] is a homomorphic encryption compiler built on modern MLIR.

IREE [2] is a very actively developed ML compiler + runtime, also MLIR-based.

[1] https://github.com/google/heir

[2] https://github.com/iree-org/iree

mathisfun123 5 hours ago||
https://github.com/triton-lang/triton

https://github.com/tenstorrent/tt-mlir

https://github.com/onnx/onnx-mlir

https://github.com/openxla/stablehlo

plenty more - just google

fibonacci112358 1 day ago||
Sadly for them, Nvidia didn't stay still in the meantime and created the next generation of CUDA: CuTile for Python, and soon for C++, through CUDA Tile IR (using a similar MLIR-based compiler stack).

Even though it's not portable, it will likely see far greater usage than Mojo just by being heavily promoted by Nvidia, integrated into dev tools, and working alongside existing CUDA code.

Tile IR was more likely a response to the threat of Triton than to Mojo, at least from the POV of how easy it is to write a decently performing LLM kernel.

pjmlp 1 day ago||
Not to be left behind, Intel and AMD are making similar efforts, and then we have the whole CPython JIT finally happening after so many attempts.

Not to mention efforts like GraalPy and PyPy.

And all these efforts work today in Windows, which is quite relevant in companies where that is the assigned device to most employees, even if the servers run Linux distros.

I keep wondering if this isn't going to be another Swift for Tensorflow kind of outcome.

IshKebab 16 hours ago||
The CPython JIT has barely had any impact on its performance. CPython is always going to be dog slow.
pjmlp 5 hours ago||
Of course, it is still in its baby steps and has to be explicitly enabled by installing the right build.

It only has to be good enough to keep the ecosystem going, so that the porting cost isn't worthwhile when Mojo finally reaches parity.

Conscat 8 hours ago|||
My understanding from speaking with a few Tile IR devs on dates is that its primary motivation was providing better portability for programming tensor cores than PTX offers. Nobody ever told me they saw it as a response to anything other than customer feedback.
melodyogonna 1 day ago|||
People keep mistaking Mojo for merely good syntax for writing GPU code, and so imagine Nvidia's Python frameworks already do that. But... would CuTile work on AMD GPUs and Apple Silicon? Whatever Nvidia does will still have vendor lock-in.
pjmlp 1 day ago||
Indeed, but Intel and AMD are also upping their Python JIT game, and in the end Mojo code isn't portable anyway.

You always need to touch the hardware/platform APIs at some level, because even if the same code executes the same way, the observed performance, or in the case of GPUs the numeric accuracy, has visible side effects.

melodyogonna 23 hours ago||
It is portable in that you can write code to target multiple platforms in the same codebase. Mojo has powerful compile-time metaprogramming that allows you to tell the compiler how to specialise using a compile-time conditional, e.g. https://github.com/modular/modular/blob/9b9fc007378f16148cfa...
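
A rough Python analogy for that compile-time conditional (not Mojo syntax; `word_width` is a made-up example): pick one specialization up front, so callers only ever see the selected path.

```python
import sys

# In Mojo the equivalent conditional is resolved by the compiler, so
# the untaken branch never exists in the binary; here Python resolves
# it once at definition time instead.
if sys.maxsize > 2**32:       # stands in for a compile-time target check
    def word_width() -> int:
        return 64
else:
    def word_width() -> int:
        return 32
```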

Of course, this won't be necessary in most cases if you're building on top of abstractions provided by Modular.

You don't get this choice using vendor-specific libraries; you're locked into this or that.

pjmlp 23 hours ago||
Yes you do, you get PyTorch or whatever else, built on top of those vendor-specific libraries.

That is the thing with Mojo: by the time it arrives at 1.0, LLM progress and the investment being made in GPU JITs for Python will make it largely irrelevant for large-scale adoption.

Sure, some customers might stay around and keep Modular going; the golden question is how many.

melodyogonna 20 hours ago||
Pytorch is built on an amalgamation of these different frameworks, not on one of them used to target different vendors.
pjmlp 19 hours ago||
The point still stands as middleware.
melodyogonna 18 hours ago||
Have you ever wondered how much work the PyTorch team would have saved if they could have used just CUDA for all the platforms they support? If they didn't have to write compatibility abstractions or layers, and could instead have just focused on the problem of training neural networks? What if all the primitives they used from CUDA and cuDNN worked just as well on AMD GPUs, Apple GPUs, and probably Google's TPUs as they do on Nvidia GPUs?

Mojo and Modular's Max platform would do to heterogeneous compute what LLVM did to programming language development. People who dismiss the real value offering here know nothing. Modular have already raised $350m+ from industry giants (including Nvidia and Google) to solve this, and I believe they will.

pjmlp 5 hours ago|||
Yes, because graphics programming was one of my hobbies for a long time, and I keep observing how FOSS folks misunderstand the games industry: what gets talked about at GDC and IGDA events isn't one API to rule them all.
bigyabai 9 hours ago|||
> What if all the primitives they used from Cuda and cuDNN worked just as well on AMD GPUs, Apple GPUs, and probably Google's TPUs as they did on Nvidia GPUs?

Why should they? CUDA is a GPGPU paradigm, AMD/Apple/Intel all ship diverse raster-focused hardware, and TPUs are a systolic array. How much can you realistically expect to abstract with unified primitives? How much performance do you perceive to be left on the table with native CUDA-based implementations?

Pytorch's abstractions answer this by ignoring raster hardware conventions entirely. The underlying ATen library is basically a CUDA wrapper, which is not much of a surprise since nobody else is willing to standardize a better alternative. We learned as much when OpenCL died, and now that Khronos is riding into the sunset it's unlikely we'll even see that level of paltry early-2010s cooperation. Mojo really should have taken Vulkan's lessons to heart; you need stakeholders to succeed, simply "disrupting" the proprietary status quo is a recipe for coming dead last in adoption rates.

> People who dismiss the real value offering here know nothing.

So explain the value, then. This is not an "optimize this IR for MIPS and x86" problem; the Lattner Fairy can't shoehorn shaders into every CUDA Compute Capability to make raster GPUs a viable GPGPU platform. If you followed Geohot's gradual descent into (sadly, quite literal) insanity then this would have been glaringly obvious from the outset. Tinygrad has an IR, industry-scale support, multiplat deployment, and it's still a dumpster fire. The project exacerbated all of the issues in ROCm and Metal, without contributing to any form of upstream cooperation between CUDA's competitors. If Mojo goes the same route with a more ambitious goal, they'll end up entrenching CUDA and obsoleting themselves. As much as people hate to admit it, CUDA is less of a software moat and more of a hardware one.

melodyogonna 4 hours ago||
> Why should they? CUDA is a GPGPU paradigm, AMD/Apple/Intel all ship diverse raster-focused hardware, and TPUs are a systolic array. How much can you realistically expect to abstract with unified primitives?

Ah, it seems impossible to you. These are very different kinds of hardware... It is hard enough to achieve compatibility among different hardware from the same vendor; it's very difficult to imagine building primitives for hardware with completely different memory layouts.

> How much performance do you perceive to be left on the table with native CUDA-based implimentations?

Zero is the idea. And I wasn't saying there should be a native CUDA-based implementation; I'm asking you to imagine how much easier everything would have been if CUDA were cross-platform without any performance or ergonomic penalties.

Mojo is a foundational step here. The big HOW is powerful parametric programming: so much information can be passed at compile time, which the compiler uses to specialize.

brcmthrowaway 1 day ago||
Interesting, how big an impact will CuTile have?
modeless 1 day ago||
When I first heard about Mojo I somehow got the impression that they intended to make it compatible with existing Python code. But it seems like they are very far away from that for the foreseeable future. I guess you can call back and forth between Python and Mojo but Mojo itself can't run existing Python code.
ainch 1 day ago||
In their original pitch that was definitely part of it: take Python code, add type hints, get a big speedup. As they've built it out it seems to have diverged.
melodyogonna 17 hours ago||
It was always going to be a long-term thing, if it were even possible. You can't make a compiler that can compile Python into efficient machine code in just a year (which was how long Mojo had been in development when it was announced).

The messaging was changed because people got sold too hard on that, and kept trying Mojo with the expectation that it could compile existing Python code when it couldn't. What Modular did was change the messaging to reflect what Mojo is today, and provide a roadmap[1] of what they hope it'll turn into in the future. As it evolves, the messaging will evolve with it to continue reflecting current capabilities.

1. https://mojolang.org/docs/roadmap/

infraredshift 12 hours ago||
[dead]
dtj1123 1 day ago|||
They also advertised a 36,000x speedup over equivalent Python if I remember correctly, without at any point clarifying that this could only be true in extreme edge cases. Feels more like a pump-dump cryptography scheme than an honest attempt to improve the Python ecosystem.
jdiaz97 18 hours ago|||
The modern way to advertise: lie a lot.
boxed 1 day ago||||
Well... the article made self-deprecating fun of the clickbait title, showed the code every step of the way, and actually did achieve the claim (albeit with wall-clock time, not CPU/GPU time).

And it wasn't "equivalent Python", whatever that means; they did loop unrolling and SIMD and so on. That can't be done in pure Python at all, so there literally is no equivalent Python.

dtj1123 15 hours ago||
Watch Chris Lattner's interview with Lex Fridman. He talks about mojo as a 36,000x speedup over Python without any indication that you need to think about vectorization to achieve it.
boxed 6 hours ago||
I'm looking at this transcript and I'm getting a different picture than what you describe https://podscripts.co/podcasts/lex-fridman-podcast/381-chris... . Yea, he doesn't specifically say vectorization and multi-threading or whatever but he also doesn't say you don't need some skill to get to huge speedups.
dtj1123 3 hours ago||
Does he say that you _do_ need skill to get huge speedups?

In fairness it's been a long time since I watched this, but I remember being struck by how obviously dishonest Lattner was throughout. For example, at one point he talks about approaching Mojo from a first-principles perspective, using the speed of light as a limiting factor for what's computationally possible. Complete bullshit. You'd have to be working at the hardware layer for that to begin to be relevant, and even then photonic computation is years away. It's essentially technobabble.

dtj1123 15 hours ago|||
Crypto*
Certhas 1 day ago|||
If you paid very close attention, it was actually clear from the start that the idea was to build a next-gen systems language, taking the lessons from Swift and Rust, targeting CPU/GPU/heterogeneous hardware, and building around MLIR. But then also building it with an eye towards eventually embedding/extending Python relatively easily. The Python framing almost certainly helped raise money.

Chris Lattner talked more about the relationship between MLIR and Mojo than Python and Mojo.

pjmlp 1 day ago||
So basically Chapel, which is actually being used in HPC.
Certhas 23 hours ago|||
I don't know Chapel in detail, I was more thinking Hylo. I don't think Chapel has a clear value/reference semantics or ownership/lifetime story? Am I wrong here?

The Mojo docs include two sections dedicated to these topics:

https://mojolang.org/docs/manual/values/

https://mojolang.org/docs/manual/lifecycle/

The metaprogramming story seems to take inspiration from Zig, but the way comptime, parameters and ownership blend in Mojo seems relatively novel to me (as a spectator/layman):

https://mojolang.org/docs/manual/metaprogramming/

I was sort of paying attention to all these ideas and concepts two-three years ago from the sidelines (partially with the idea to learn how Julia could potentially evolve) but it's far from my area of expertise, I might well be getting stuff wrong.

pjmlp 22 hours ago||
You make use of 'owned', 'shared', 'unmanaged', 'borrowed'.

https://chapel-lang.org/docs/language/spec/classes.html#clas...

Certhas 22 hours ago||
I see, it seems like the design is not complete and a work in progress (which is the same for Mojo's Origins concept, I think):

"The details of lifetime checking are not yet finalized or specified. Additional syntax to specify the lifetimes of function returns will probably be needed."

I think Rust proved that lifetimes, ownership and borrow checking can be useful for a mainstream language. The discussions in the Mojo context revolve on how to improve the ergonomics of these versus Rust.

pjmlp 21 hours ago||
Contrary to Mojo, plenty of people are using it in HPC, and it is open source.

https://hpsf.io/blog/2026/hpsf-project-communities-to-gather...

https://developer.hpe.com/platform/chapel/home

See "Projects Powered by Chapel".

Certhas 21 hours ago||
So? What point are you making? A different language with different design philosophy, has success in a different niche than Mojo is targeting?
pjmlp 20 hours ago||
One is used in production already by key laboratories in HPC research; the other wants to be, and is far away from 1.0.

Chapel current version is 2.8.0.

Certhas 4 hours ago|||
I don't think Mojo is targeting HPC at all.
MohamedMabrouk 19 hours ago||||
I don't understand this framing. So? C++ and Julia are more widely adopted and used in HPC; that doesn't mean people shouldn't start and learn new languages.
pjmlp 19 hours ago||
In the LLM age, maybe the focus should be elsewhere instead of syntax.
MohamedMabrouk 18 hours ago||
Is that so? People still read their code to understand it and to ask questions (or make modifications). Even in the LLM age of language design, readability is as relevant as before.

Superficial comparisons of "why this new Y when we have X" are not really helpful. Languages and systems get adopted not for their stated goals alone, but for the underlying capabilities and good design, which translate into better user experience and ecosystem growth.

melodyogonna 19 hours ago|||
Mojo isn't that far away from 1.0. Some point this year is the target
zzzoom 12 hours ago|||
Is it? Spack has only one package that depends on chapel.
mastermage 1 day ago|||
That was what was originaly advertised, they wanted to be what Kotlin is to Java but for Python. They quickly turned tails on this.

That, and the not-completely-open-source development model, is what has always felt very vaporware-y to me.

victorio 1 day ago|||
From the site:

> Python interop: Mojo natively interoperates with Python so you can eliminate performance bottlenecks in existing code without rewriting everything. You can start with one function, and scale up as needed to move performance-critical code into Mojo. Your Mojo code imports naturally into Python and packages together for distribution. Likewise, you can import libraries from the Python ecosystem into your Mojo code.

fwip 18 hours ago|||
That's because Mojo told you that. https://web.archive.org/web/20231221132631/https://docs.modu...

> Our long-term goal is to make Mojo a superset of Python (that is, to make Mojo compatible with existing Python programs). Python programmers should be able to use Mojo immediately, and be able to access the huge ecosystem of Python packages that are available today.

simplyvibecode 15 hours ago||
Mojo has refocused on Python interoperability vs. superset, though yes, the original idea was being a superset.

It's possible the language evolves to that in the long term, but it's not the short-term goal.

We published a Mojo roadmap on Mojolang.org that helps contextualize this: https://mojolang.org/docs/roadmap/

Note: I work at Modular

pansa2 1 day ago|||
> they intended to make it compatible with existing Python code

That was the original claim, but it was quietly removed from the website. (Did they fall for the common “Python is a simple language” misconception?).

Now they promise I can “write like Python”, but don’t even support fundamentals like classes (which are part of stage 3 of the roadmap, but they’re still working on stage 1).

Maybe Mojo will achieve all its goals, but so far it has been over-promising and under-delivering. It's starting to remind me of the V language.

simplyvibecode 15 hours ago||
[dead]
samuell 1 day ago|||
The messaging had me try to run some very simple Python code (reading a file line by line), assuming it would of course just run, but it didn't work at all.

For me this was a big disappointment, and I wonder how much this has backfired across developers.

kjsingh 1 day ago|||
isn't that achieved by Codon?
haskman 1 day ago|||
Really the only thing good about Python is its ecosystem.
coldtea 1 day ago|||
Nah, it's also a very fine language for getting an idea down quickly.

Might not have the niceties purists like, but perhaps that's exactly why it's a great language for that.

It's like executable pseudocode, and unlike other languages, all the ceremony is optional.

People flocked to it way before it became a "must" for ML and CS thanks to that ecosystem becoming dominant.

mastermage 1 day ago|||
but that ecosystem is really good.
haskman 19 hours ago||
That it is
jdiaz97 18 hours ago||
They just lie a lot: they make fake blog posts with fake benchmarks, and then they delete them.
csvance 11 hours ago||
Mojo looks neat but I'm pretty satisfied with Julia at this point for high performance numerical computing across CPU, GPU, etc. I can't help but feel this niche is already mostly solved beyond having Python like syntax. Even Python has things like Numba and Triton that are effective for less complicated / more self contained type problems.
smartmic 1 day ago||
Advertising prominently with "AI native" seems necessary today, at least for some folks. To me, that's kind of off-putting, since it doesn't really say anything.

Can any of the AI enthusiasts here explain why, or what is meant by

> As a compiled, statically-typed language, it's also ideal for agentic programming.

jpnc 1 day ago||
It's been really interesting to see all the desperation on hero pages for all these products and services ever since AI came into prominence. The funniest for me was opening the IBM DB2 product page and seeing it labeled as an 'AI database'. Hysterical.

> why, or, what is meant by

More errors caught at compile time means an agent can quickly check its work statically, without unit and other tests.

Derbasti 16 hours ago|||
Current LLMs have been trained on extensive libraries of past code. Therefore, LLMs will for the foreseeable future work better for established languages than new ones, especially languages with a lot of open source code available, like Python. That's a big problem for newcomers without any existing code to train LLMs on.

Thus this desperate "AI native" marketing is probably necessary to even be considered relevant in an "agentic" world. Whether it's enough, only time will tell.

chillfox 1 day ago|||
I don’t really consider myself an “AI enthusiast”, but I do use it.

So, agents tend to do better the more feedback they can get. Type checking is pretty good for catching a bunch of dumb mistakes automatically.

The point is more hints for the agent is more better most of the time.

phyrog 1 day ago||
So just like for humans...
kstrauser 21 hours ago|||
It’s the new “…on the blockchain”.

Python+ruff+pycheck and TypeScript are compiled to bytecode instead of machine code. They’re not statically typed in the Rust sense. And yet, I’ve watched models crank out good, valid code in both of those without needing to be either strictly “compiled” or “statically typed”. Turns out AI couldn’t care less about those properties as long as you have good tooling to quickly check the code and iterate.

fuzztester 14 hours ago||
>It’s the new “…on the blockchain”.

yes, except it's more ... along the same lines, just to hammer the point home:

it's web 2, it's SaaS, it's the latest weekly, er, sorry, daily, hottest JS framework, it's the latest rap / punk / hippie / dreadlock / crewcut / swami / grunge / guru hairstyle, it's agile, it's functional programming, it's OOP, it's OOAD, it's UML, it's the Unix philosophy, it's Booch notation, it's CASE tools, ... going back even further, it's structured programming, it's high-level languages, it's assemblers, it's veganism, it's the keto diet, it's the Atkins diet, it's the paleo diet, it's cholesterol is bad, no, it's good, etc etc etc.

fuzztester 13 hours ago||
iow, it's the equivalent of your common or garden variety of teenager proclaiming that this new thing they just found is gr8, all else is shite, only to jump on the next bandwagon next week, month, or more rarely, year.
andrekandre 6 hours ago||
it's what Alan Kay talks about when he says programming in general isn't a serious discipline and is instead a pop culture...

  > only to jump on the next bandwagon next week, month
good for marketing as well; there is no shortage of juniors who are mesmerized by the new shiny
Reubend 1 day ago|||
I don't know what they meant by it, and I share your opinion that "AI native" is somewhat meaningless for a programming language like this.

Regarding compilation and static typing, it's extremely helpful to be able to detect issues at compile time when doing agentic programming. That way, you don't run into as many problems at runtime, which of course the agent has more difficulty addressing. Unit tests can help bridge the gap somewhat but not entirely.
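A minimal sketch of that feedback loop in Python (function name invented for illustration; assumes a standard static checker such as mypy): the annotations let a checker reject a bad call before anything runs, which is exactly the cheap, fast signal an agent can iterate on.

```python
# Hypothetical example: annotated code a static checker can verify.
# A call like mean(["a", "b"]) is flagged by mypy before execution;
# at runtime the same bug would only surface deep inside sum().

def mean(xs: list[float]) -> float:
    """Arithmetic mean; the annotations make misuse statically checkable."""
    return sum(xs) / len(xs)

print(mean([1.0, 2.0, 3.0]))  # → 2.0
```

A compiled, statically-typed language gives this signal by default on every build, which is presumably what the "ideal for agentic programming" claim is gesturing at.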

What's not stated on their website is that Mojo is likely a bad choice for agentic programming simply because there isn't much Mojo training data yet.

boxed 1 day ago||
I've recently used Claude to write quite a bit of Mojo (https://github.com/boxed/TurboKod) and I can quite confidently say that Claude will write deprecated Mojo syntax a lot, but the compiler tells it and it fixes it pretty fast too. The only reason I notice is that I watch Claude while it's working and see the compilation warnings (and sometimes Claude is lazy and doesn't compile, so I have to see it).

But yeah, writing Mojo 1.0 code correctly even after getting errors might take a new training round, so next or even next-next models.

msaelices 14 hours ago||
Have you used the Mojo syntax skill with modern LLMs? It is updated to latest Mojo and I can say nearly 100% of my code is written by AI, with good quality, and the compiler helping it too.
melodyogonna 1 day ago|||
https://mojolang.org/docs/tools/skills/
rmnclmnt 1 day ago||
Because a coding agent (when instructed well) will try to make a piece of code work in a loop. Static typing and compilation help in the process (no more undefined variables discovered at runtime, for instance). But that’s not bulletproof at all, as most of us know.
pjmlp 1 day ago||
Julia is more mature for the same purposes, and since last year Nvidia has had feature parity between Python and C++ tooling on CUDA.

Python cuTile JIT compiler allows writing CUDA kernels in straight Python.

AMD and Intel are following up with similar approaches.

Whether Mojo will still arrive in time to gain wider adoption remains to be seen.

adev_ 21 hours ago|
> Python cuTile JIT compiler allows writing CUDA kernels in straight Python.

It is currently not straight Python and will never be.

All these "performance friendly" Python dialects (Triton, Pythran, cuTile, Numba, Pycell, CuPy, ...) appear like Python but are nothing like Python as soon as you scratch the surface.

They are DSLs with a Python-looking syntax, but made to be optimized, typed and inferred properly. And it feels like it when you use them: in each of them, there are many (most?) Python features you simply cannot use, while you still suffer from inherent Python issues.

Let's not lie to ourselves: Python is inherently bad for efficiency and performance.

And that goes way beyond the GIL: dynamic typing, reference semantics, monkey patching, an ultra-dynamic object model, the CPython ABI, BigInt by default, a runtime module system, ... are all technical choices that make sense for a small scripting language but terribly suck for HPC and efficiency.
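A few of those choices are easy to demonstrate from the interpreter. This sketch (illustrative class and names, not from any real codebase) shows why an ahead-of-time compiler can assume almost nothing about a Python object:

```python
# Illustrative only: each line below is legal Python resolved at
# runtime, which is what defeats static optimization.

class Point:
    def __init__(self, x):
        self.x = x

p = Point(1)
p.y = 2                                # attributes added after construction
Point.norm = lambda self: abs(self.x)  # a method monkey-patched in later
big = 2 ** 100                         # ints silently promote to arbitrary precision

print(p.norm(), big.bit_length())  # → 1 101
```

Because any attribute or method can appear (or change) at any time, and any int can overflow into a bignum, the compiler must keep everything behind dictionary lookups and boxed values.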

The entire NumPy/SciPy ecosystem is itself already just a hack around Python's limitations for simple CPU-bound tensor arithmetic, mainly because built-in Python performance is so bad that a simple for loop would make Excel look like a racehorse.
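The gap being described can be measured in a few lines (a rough sketch; assumes NumPy is installed, and the exact ratio depends on the machine). Both expressions compute the same sum of squares; the interpreted loop pays per-element bytecode dispatch and boxing costs that the vectorized call avoids:

```python
# Compare an interpreted Python loop against the NumPy vectorized
# equivalent on the same 100k-element data.
import timeit
import numpy as np

xs = list(range(100_000))
arr = np.arange(100_000, dtype=np.int64)

loop = timeit.timeit(lambda: sum(x * x for x in xs), number=10)
vec = timeit.timeit(lambda: int((arr * arr).sum()), number=10)

# Identical result, wildly different cost per element.
assert sum(x * x for x in xs) == int((arr * arr).sum())
print(f"loop: {loop:.4f}s  vectorized: {vec:.4f}s")
```

The vectorized version wins by pushing the inner loop into compiled C, which is exactly the "hack around Python" the comment describes.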

Mojo is different.

Mojo tries to start from a clean sheet instead of hacking the existing crap.

And it tries to provide a "Python-like experience", but on top of a well-designed language built on decades of language design experience (Python is >30 years old).

And just for that, I wish them success.

jdiaz97 18 hours ago|||
> Mojo tries to start from a clean sheet instead of hacking the existing crap.

Their whole original pitch was to be a superset of Python btw.

adev_ 16 hours ago||
> Their whole original pitch was to be a superset of Python btw.

To my understanding, they offer full Python compatibility but guide the user toward something else.

For instance, Mojo itself is statically typed.

kstrauser 20 hours ago||||
> All these "Performance friendly" python dialects (Tryton, Pythran, CuTile, Numba, Pycell, cuPy, ...) appears like Python but are nothing like Python as soon as you scratch the surface.

Which is the whole point. Python has properties that make it bad for massive, fast number twiddling. However, it’s exceptionally nice for doing all the command line parsing and file loading and setup and other wrapping tasks required to run those pipelines.

Fortran’s fantastic at math stuff. I’d sure hate to have to write all the related non-math stuff in it.

And yes, Python’s slower than other languages. But in production, most Python code spends a huge chunk of its time waiting for other code to execute. It takes more CPU for Python to parse an HTTP request or load data files than an AOT language would take, but it’s just as efficient sitting there twiddling its thumbs waiting for a DB query or numeric library to finish.
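The "wrapping tasks" in question are the sort of thing a few lines of stdlib Python cover. A hypothetical pipeline entry point (names invented for illustration; argv passed explicitly so the sketch is self-contained):

```python
import argparse
import pathlib

# Python does the parsing/validation glue; the heavy numeric work
# would be dispatched to a fast kernel elsewhere.
parser = argparse.ArgumentParser(description="run the numeric pipeline")
parser.add_argument("config", type=pathlib.Path)
parser.add_argument("--repeat", type=int, default=1)

args = parser.parse_args(["settings.json", "--repeat", "3"])
print(args.config, args.repeat)  # → settings.json 3
```

None of this is performance-critical, and doing it in Fortran or C++ would take far more ceremony for no gain.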

IshKebab 14 hours ago||
I wouldn't call it "exceptionally nice". Decentish if you use uv & strict Pyright... sure.

> most Python code spends a huge chunk of its time waiting for other code to execute.

Highly dependent on what you are doing. That hasn't been my experience most of the time.

jz391 3 hours ago||
> I wouldn't call it "exceptionally nice"

I guess it depends on your reference point :-) I recall Python, in the beginning, offering an easier/more readable alternative to Perl, which itself was a step up from awk/sed/sh scripts (for the tasks/uses GP mentions)

loglog 16 hours ago||||
> on top of a well designed language constructed over past language design experience

While I believe that Chris Lattner is a great compiler designer, his language design record has been less stellar. Swift's bidirectional type inference, for instance, feels like it was implemented because they had a compiler algorithm they wanted to use rather than out of a genuine need, and it is a completely avoidable problem. Trying to make an HPC language that is also Python compatible was doomed from the start. Hopefully the damage from going in this direction will remain limited.

pjmlp 21 hours ago|||
I love how dialects of C and C++ count as proper C and C++, and are even argued to be more relevant than the ISO standards themselves, but when anyone else does the same, it is no longer the same language.

As for Python not being the ideal, there we agree, but solutions with proper performance already exist: Lisp, Scheme, Julia, Futhark, ...

Heck maybe someone could dig out StarLisp.

adev_ 20 hours ago||
> I love when dialects for C and C++ count as being proper C and C++, are even argued as being more relevant than ISO standards by themselves

I did not argue about CUDA being proper C++ :)

I honestly believe that the best days of C++ as an accelerator language are behind it.

That is the main problem currently: we are missing a modern language for systems programming that plays well with accelerators. C++ is not (really) one of them (hello, aliasing).

I do not know if Mojo will succeed there, but I wish them good luck.

pjmlp 19 hours ago||
I would argue Chapel or Futhark could be such languages, but they aren't cool.
adev_ 16 hours ago||
Chapel maybe, but it is too low-level to attract a large audience outside of the HPC community.
bobajeff 11 hours ago|
I've been keeping my eye on Mojo. Honestly, though, the thing I least like about Python is its syntax.

Someone else here is bringing up Julia, which I think is a fine language, but the compiler error messages and the library documentation are not what I would want in a language as far along as it is. I'm also worried about the correctness issues I read about in a blog a while back. Also, I don't feel like I can make the kind of Python module I want with it (because of binary size and time to first x).

That being said I'm only hoping that Mojo can become an option. But I really like to use a REPL and I like the dynamicness of Python. So I might not ever get around to doing anything outside of maybe Numpy for performance.

archargelod 9 hours ago||
> the thing I least like about Python is its syntax

For me it's the opposite: the only thing I like about Python is its syntax. That's why I really like Nim - you get C speed, "comptime", metaprogramming, a powerful type system, memory safety, and code that is often short and elegant.

Mojo seems interesting too, but so far they're mostly focused on ML stuff and not general programming. And I believe the compiler is still not open source?

Recurecur 11 hours ago||
I’m a big fan of Mojo’s design. It isn’t comparable to Julia since it has deterministic memory management.

I also think Mojo is more focused on being an industrial-strength language. I was shocked to see that the first iteration of Julia's ahead-of-time compilation did not provide file I/O.

More comments...