Posted by jspdown 2 days ago
I do think there may be a limit to how far it can be improved, though. For example, typed nil means that a variable of an interface type (say, coming from pure Go code) should enter Lisette as Option<Option<http.Handler>>. Sure, one can match on Some(Some(h)) to avoid two unwrapping steps, but it's a bit awkward anyway. (Note: this double Option is not a thing in Lisette, at least as of now.)
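For anyone unfamiliar with the typed-nil issue being referenced, a minimal sketch in plain Go (the handler type is just an illustrative choice):

    package main

    import (
        "fmt"
        "net/http"
    )

    // getHandler returns a nil *http.ServeMux stored in an http.Handler
    // interface. The interface value is NOT nil: it carries the concrete
    // type (*http.ServeMux) alongside the nil pointer.
    func getHandler() http.Handler {
        var mux *http.ServeMux // nil pointer
        return mux
    }

    func main() {
        h := getHandler()
        fmt.Println(h == nil) // false: the "typed nil"
    }

This is the distinction the Option<Option<...>> encoding would have to surface: "interface is nil" versus "interface holds a nil pointer".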
Lisette also doesn't remove the need to call defer (as opposed to RAII) in the very awkward way Go requires, e.g. de facto requiring a double close on any file opened for writing.
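The double-close pattern being referred to, as a minimal sketch in standard Go (path and data are placeholders):

    package main

    import "os"

    // writeFile shows the common "double close" idiom for files opened
    // for writing: a deferred Close as a safety net on error paths, plus
    // an explicit Close whose error is actually checked, since buffered
    // data may only reach disk at that point.
    func writeFile(path string, data []byte) error {
        f, err := os.Create(path)
        if err != nil {
            return err
        }
        defer f.Close() // safety net; its error is discarded on the happy path

        if _, err := f.Write(data); err != nil {
            return err
        }
        return f.Close() // the Close error that matters for writes
    }

    func main() {
        if err := writeFile("/tmp/lisette_demo.txt", []byte("hello")); err != nil {
            panic(err)
        }
    }

On the happy path Close runs twice: once explicitly, once via defer (which then reports "file already closed" and is ignored).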
Typescript helps write javascript, but that's because until WASM there was no other language option to actually run in the browser. So even typescript would be a harder sell now that WASM can do it. Basically, why try to make Go more like Rust when Rust is right there? And fair enough, the author may be aiming for somewhere in between. And then there's the issue of existing codebases; not everything is greenfield.
So this seems best suited for existing Go codebases, or when one (for some reason) wants to use the Go runtime (which sure, it's at least nicer than the Java runtime), but with a better language. And it does look like a better language.
So I guess what's not obvious to me (and I mentioned this to the author) is what's the quick start guide to having the next file be in Lisette and not Go. I don't think this is a flaw, but just a matter of filling in some blanks.
[1] https://blog.habets.se/2025/07/Go-is-still-not-good.html
Go gives you access to a compute- and memory-efficient concurrent GC that has few or no equivalents elsewhere. It's a great platform for problem domains where GC is truly essential (fiddling with spaghetti-like reference graphs), even though you're giving up the enormous C-FFI ecosystem (unless you use Cgo, which is not really Go in a sense) due to the incompatibilities introduced by Go's weird user-mode stackful fibers approach.
The average developer moves a lot faster in a GC language. I recently tried making a chatbot in both Rust and Python, and even with some experience in Rust I was much faster in Python.
Go is also great for making quick lil CLI things like this https://github.com/sa-/wordle-tui
Similar to how even smaller problems are better suited for just writing a bash script.
When you can have the whole program basically in your head, you don't need the guardrails that prevent problems. Similar to how it's easy to keep track of object ownership with pointers in a small and simple C program. There's no fixed size after which you can no longer say "there are no dangling pointers in this C program". (but it's probably smaller than the size where Python becomes a problem)
My experience writing TUIs in Go and Rust has been much better in Rust. Though to be fair, the Go TUI libraries may have improved a lot by now, since my Go TUI experience predates my playing with Rust's ratatui.
Only in the old "move fast and break things" sense. RAII augmented with modern borrow checking is not really any syntactically heavier than GC, and the underlying semantics of memory allocations and lifecycles is something that you need to be aware of for good design. There are some exceptions (problems that must be modeled with general reference graphs, where the "lifecycle" becomes indeterminate and GC is thus essential) but they'll be quite clear anyway.
No, definitely not only in that sense. GC is a boon to productivity no matter how you slice it, for projects of all sizes.
I think the idea that this is not the case perhaps stems from the fact that Rust specifically has a better type system than Java specifically, so that becomes the default comparison. But not every GC language is Java. They don't all have lax type systems where you have to tiptoe around nulls. Many are quite strict and are definitely not "move fast and break things" types of languages.
A Lua interpreter written in Rust+GC makes a lot of sense.
A simplified Rust-like language written in, and compiling to, Rust+GC makes a lot of sense too.
A simplified language written in Rust and compiling to Go is a no-go.
Not saying those are the only two GC languages, just circling back to the post spawning these comments.
Syntax is simple and small without too many weird/confusing features, it's cross platform, has a great runtime and GC out of the box, "errors as values" so you can build whatever kind of error mechanism you want on top, green threading, speedy AOT compiler. Footguns that apply when writing Go don't apply so much when just using it as a compile target.
I've been writing a tiny toy functional language targeting Go and it's been really fun.
Go's defer is generally good, but it interacts weirdly with error handling (a huge wart on Go language design) and has surprising scoping rules (function-scoped rather than block-scoped).
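A minimal sketch of the error-handling interaction (the file path is illustrative): a plain `defer f.Close()` silently drops Close's error, so surfacing it requires a named return value and a closure.

    package main

    import (
        "fmt"
        "os"
    )

    // writeGreeting uses a named return (err) so the deferred closure can
    // report a Close failure without masking an earlier error from the body.
    func writeGreeting(path string) (err error) {
        f, err := os.Create(path)
        if err != nil {
            return err
        }
        defer func() {
            // Only surface Close's error if the body itself succeeded.
            if cerr := f.Close(); cerr != nil && err == nil {
                err = cerr
            }
        }()
        _, err = fmt.Fprintln(f, "hello")
        return err
    }

    func main() {
        if err := writeGreeting("/tmp/greeting.txt"); err != nil {
            panic(err)
        }
        fmt.Println("ok")
    }

None of this ceremony exists in RAII languages, where the destructor/Drop path is structured into the type.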
> Go was not satisfied with one billion dollar mistake, so they decided to have two flavors of NULL
Thanks for raising this kind of thing in such a comprehensible way.
Now what I don't understand is that TypeScript, even if it was something to make JavaScript more bearable, didn't fix this! TS is even worse in this regard. And yet no one seems to care in the NodeJS ecosystem.
<selfPromotion>That's why I created my own Option type package in NPM in case it's useful for anyone: https://www.npmjs.com/package/fp-sdk </selfPromotion>
But yeah it's a fair point. Sometimes I think I should just write my own lang (a subset of typescript), in the same fashion that Lisette dev has done.
You can't enforce it in any normal codebase because null is used extensively in the third party libraries you'll have to use for most projects.
Go allows creating lightweight threads to the point where it's a good pattern to just spin off goroutines left and right to your heart's content. That's more of a concurrency primitive than async. Sure, you combine it with a channel, and you've created an async future.
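The goroutine-plus-channel "future" mentioned above, as a small sketch (the helper name and timings are made up):

    package main

    import (
        "fmt"
        "time"
    )

    // future spawns the work immediately in a goroutine and hands back a
    // channel to await; the buffered channel lets the goroutine finish
    // even if nobody ever receives.
    func future(work func() int) <-chan int {
        ch := make(chan int, 1)
        go func() { ch <- work() }()
        return ch
    }

    func main() {
        f := future(func() int {
            time.Sleep(10 * time.Millisecond) // simulate work
            return 42
        })
        fmt.Println(<-f) // receive blocks until the goroutine is done
    }
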
The explicit passing of contexts is interesting. I initially thought it would be awkward, but it works well in practice. Except of course when you need to call a blocking API that doesn't take context.
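The common workaround for that last case is wrapping the blocking call in a goroutine and selecting on the context; a sketch (names are illustrative, and note the blocking call itself keeps running, Go can't kill the goroutine):

    package main

    import (
        "context"
        "errors"
        "fmt"
        "time"
    )

    // callWithContext lets the caller stop waiting on a context-unaware
    // blocking API. The channel is buffered so the goroutine's send never
    // blocks forever after the caller has given up.
    func callWithContext(ctx context.Context, blocking func() (string, error)) (string, error) {
        type result struct {
            s   string
            err error
        }
        ch := make(chan result, 1)
        go func() {
            s, err := blocking()
            ch <- result{s, err}
        }()
        select {
        case r := <-ch:
            return r.s, r.err
        case <-ctx.Done():
            return "", ctx.Err()
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Millisecond)
        defer cancel()
        _, err := callWithContext(ctx, func() (string, error) {
            time.Sleep(time.Second) // simulates a blocking API
            return "done", nil
        })
        fmt.Println(errors.Is(err, context.DeadlineExceeded)) // true
    }
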
And in environments where you can run a multitasking runtime, that's pretty cool. Rust's async is more ambitious, but has its drawbacks.
Go's concurrency story (I wouldn't call it an async story) is way more yolo, as is the rest of the Go language. And in my experience that Go yolo tends to blow up in more hilarious ways once the system is complex enough.
But like I said, in my opinion this compares with Go not having an async story at all.
If you want to look at Rust peer languages though, I do think the direction the Zig team is heading with 0.16 looks like a good direction to me.
https://github.com/ivov/lisette/issues/12
I have a few approaches in mind and will be addressing this soon.
Here are a few things that I noticed.
- Third-party Go code support (like go-chi) is an absolute must-have. This is THE feature that could skyrocket Lisette adoption. So something like stubs, maybe along the lines of what ReScript has for its JS interop (https://rescript-lang.org/docs/manual/external). The CLI tool could probably infer and generate these stubs semi-easily, as the Go type system is fairly simple.
- The HM claim did confuse me. It does not infer when matching on an enum; I have to manually annotate the enum type to get the compiler to agree on what is being matched. Note, this is a HARD problem (OCaml probably does it best), and maybe outside the scope of Lisette, but if so maybe tweak the docs (e.g. "infers some things, but not all things").
- Can this be adopted gradually? Meaning a part is Go code, and a part generated from Lisette. Something like Haxe perhaps. This ties to issue 1 (3rd party interop)
But so far this is the BEST compile to Go language, and you are onto something. This might get big if the main issues are resolved.
It's a really valid FFI concern though! And I feel like superset languages like this live or die on their ability to be integrated smoothly side-by-side with the core language (F#, Scala, Kotlin, Typescript, Rescript)
In C/C++ you have the #line preprocessor directive. It would be nice if Go had something similar.
I'm curious about the compiled Go output though. The Result desugaring gets pretty verbose, which is totally fine for generated code, but when something breaks at runtime you're probably reading Go, not Lisette. Does the LSP handle mapping errors back to source positions?
Also wondering about calling Lisette from existing Go code (not just the other direction). That feels like the hard part for adoption in a mixed codebase.
Is the goal here to eventually be production-ready or is it more of a language design exploration? Either way it's a cool project.
The CLI command `lis run` supports a `--debug` flag to insert `//line source.lis:21:5` directives into the generated Go, so stack traces from runtime errors point back to the original Lisette source positions. The LSP handles compile-time errors, which reference `.lis` files by definition.
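For anyone unfamiliar, Go's `//line` directive remaps file/position info for everything that follows it; a minimal sketch of the effect (filename and positions are illustrative):

    package main

    import (
        "fmt"
        "runtime"
    )

    //line source.lis:21:5
    func whereAmI() (string, int) {
        // runtime.Caller(0) reports the remapped position: the directive
        // above makes the toolchain attribute these lines to source.lis.
        _, file, line, _ := runtime.Caller(0)
        return file, line
    }

    func main() {
        file, line := whereAmI()
        fmt.Println(file, line)
    }

Panics and stack traces from this function would likewise point at source.lis rather than the generated .go file.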
Calling Lisette from existing Go is not yet supported and is the harder direction, as you noted. This is on my mind, but the more immediate priority is enabling users to import any Go third-party package from Lisette.
Lisette began as an exploration, but I intend to make it production-ready.
I'm asking because your goal is to make it production-ready: what are you doing to assure people this is more than just another vibe-coded language (of which there are countless examples by now)?
Like I said, these LLM-driven language projects have proliferated recently, and they follow a common pattern:
- Dump hundreds of thousands of lines of code into a brand-new, blank repo.
- Throw up a polished-looking LLM generated website (they all look the same).
- Post about the project on a bunch of tech sites like HN.
- Claim it's a real project with deep roots despite there being no evidence.
Here's another one:
https://www.reddit.com/r/ProgrammingLanguages/comments/1sa1a...
These things are so common that r/programminglanguages had to ban them, because they were being posted constantly. So my concern is: what differentiates your project from the sea of others exactly like it? From following these projects, the pattern I see is that the main dev quickly grows bored once the agent starts having trouble building features, and the project is silently abandoned.
The merits of any project are yours to evaluate.
To me, I see some encouraging thoughtfulness here. However, again, it's true most projects like this don't achieve liftoff.
But I can't help wondering:
If it is similar to Rust, why not make it the same as Rust wherever the features match?
Why import "foo.bar" instead of use foo::bar?
Why Bar.Baz => instead of Bar::Baz =>? What are you achieving here?
Why make it subtly different, so someone who knows Rust has to learn yet another language?
And someone who doesn't know Rust learns a language that is different enough that the knowledge doesn't transfer to writing Rust 1:1/naturally?
Also: int but float64?
Edit: typos
As for int and float64, this comes from Go's number type names. There's int, int64, and float64, but no float. It's similar to how Rust has isize but no fsize.
isize is the type for signed memory offsets, fsize is completely nonsensical.
Then realized Rust wasn't that hard.
Rust devs' continued belief that they're the center of the universe is amusing.
Look at Gleam; it's a fresh take on a nice DX.
Lisette brings you the best of both worlds.
1. Struct fields are really important in Rust because of auto-traits. Your life as a Rust programmer is easier if all fields fit on the screen, because one of them may be the reason your struct is `!Sync` or whatever.
2. Impl blocks can have different generic bounds from the struct itself, which is a nice shorthand that avoids repeating the same generic bounds across a series of related methods. So you need to be able to write multiple impl blocks per type anyway. It would be confusing if there were an "implied" impl block to look for as well.
3. It helps emphasize that Rust is a language that wants you to think about the shape of your data.
    struct Example {
        number: i32,
    }

    impl Example {
        fn boo() {
            println!("boo! Example::boo() was called!");
        }
    }

    trait Thingy {
        fn do_thingy(&self);
    }

    impl Thingy for Example {
        fn do_thingy(&self) {
            println!("doing a thing! also, number is {}!", self.number);
        }
    }
This could be expressed as:

    struct Example {
        number: i32,

        impl {
            fn boo() {
                println!("boo! Example::boo() was called!");
            }
        }

        impl Thingy {
            fn do_thingy(&self) {
                println!("doing a thing! also, number is {}!", self.number);
            }
        }
    }

    trait Thingy {
        fn do_thingy(&self);
    }
Keeping related things together is just infinitely more readable, in my opinion. In fact, the confusing nature of "impl <struct>" becoming "impl <trait> for <struct>" is obviated by internal impl blocks. Keeping them separate just seems so artificial, if not downright dogmatic.

Edit: No, it is still not open source. There are still the same promises of open-sourcing eventually, but there is no source, despite the URL and the website claiming it's an open language. What's "open" here is "MAX AI kernels", not Mojo. They refer to this as "750k lines of open source code": https://github.com/modular/modular/tree/main/max/kernels
This feels icky to me.
Static Python can transpile to Mojo. I haven't seen an argument for which concepts can be expressed only in Mojo and not in static Python.
Borrow checker? For sure. But I'm not convinced most people need it.
Mojo is therefore a great intermediate language to transpile to, at the same level of abstraction as Go and Rust.
And this is before we talk about the real selling point, which is enabling portable heterogenous compute.
py2many can compile static Python to Mojo, in addition to Rust and Go.
Is it comprehensive? No. But it's deterministic. In the age of LLMs, with sufficient GPU you can either:
* Get the LLM to enhance the transpiler to cover more of the language/stdlib
* Accept the non-determinism for the hard cases
The way Mojo solves it is by stuffing two languages into one; there are two ways to write a function, for example.

I don't think the cost imposed by a transpiler is worse. In fact, it gets better over time: as the transpiler improves, you stop thinking about the generated code.
Due to the closed source nature, every mojo announcement I see I think "whatever, next"
If the actual intent is to open-source, just do it, dump out whatever you have into a repo, call it 'beta'
Valuable technologies are not so easily dismissed
Last commit was 9 years ago though, so targets Python 2.7.
"Python to Rust transpiler" -> pyrs (py2many is a successor)
"Python to Go transpiler" -> pytago
Grumpy was written around a time when people thought Go would replace Python. Google stopped supporting it a decade ago.
Even the 2022 project by a high school student got more SEO
I'm curious what compilation times are like. Are there theoretical reasons it'd be an order of magnitude slower than Go? I assume it does much less than the Rust compiler...
Relatedly, I'd be curious to see some of the things from Rust this doesn't include, ideally in the docs. E.g. I assume borrow checking, various data types, maybe async, etc. are intentionally omitted?
Go with more expressive types and a bit stricter compiler to prevent footguns would be a killer backend language. Similar to what TypeScript was to JavaScript.
My 2 cents would be to make it work well with TypeScript frontends. I think TypeScript is so popular in backends because 1. you can share types between frontend code and backend code and 2. it's easy for frontend devs to make changes to backend code.