I found rhai's syntax very straightforward, and I could almost accomplish my needs just by looking at some basic examples.
I use Rhai in wasm, and it can handle real-time audio blocks, which is really impressive.
Rhai also offers some safety guarantees: no panics, no stack overflows, etc. Rhai seemingly requires slightly less ceremony when interfacing with Rust: many things can be used directly, as opposed to implementing a trait to interact with Koto.
(Disclaimer: I spent 5 minutes skimming the docs of each.)
On the other hand, you can always write your own Rhai interpreter if necessary. And if you restrict the language to a limited set of features, which you need to do anyway to keep it realtime-safe, you could even compile it to native code.
> and it can handle real-time audio blocks, which is really impressive:
Any scripting language can do this as long as you stick to operations that don't cause memory allocations, system calls or other non-realtime-safe operations.
For example, you can use Lua to write process functions for Pd objects: https://agraef.github.io/pd-lua/tutorial/pd-lua-intro.html#s...
The question is rather how much you can do in a given audio callback.
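To make "realtime-safe" concrete, here's a minimal Rust sketch (types and names invented for illustration, not from any of the libraries discussed): all state is allocated up front, and the per-block function only does arithmetic on an existing buffer, so it's safe to call from an audio thread.

```rust
// A trivial gain effect. The struct is built outside the audio
// thread; `process` touches only the slice it's given -- no heap
// allocation, no I/O, no locking.
struct Gain {
    amount: f32,
}

impl Gain {
    // Called once per audio block on the realtime thread.
    fn process(&self, block: &mut [f32]) {
        for sample in block.iter_mut() {
            *sample *= self.amount;
        }
    }
}

fn main() {
    let gain = Gain { amount: 0.5 };
    // Buffer allocated ahead of time, outside the callback.
    let mut block = [1.0_f32, -1.0, 0.5, 0.0];
    gain.process(&mut block);
    assert_eq!(block, [0.5, -0.5, 0.25, 0.0]);
}
```

Whether the host is Lua, Rhai, or anything else, the same rule applies: the dangerous part isn't the language, it's any operation inside the callback that can block or allocate.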
---
All that being said, Glicol is very cool!
I gave Lua a shot, but getting the toolchain set up in wasm was a hassle:
https://bytedream.github.io/litbwraw/introduction.html
So at least I can say Rhai has some advantages in syntax and wasm compatibility.
I just don't really understand why Rhai uses an AST-walking interpreter; that's basically the least efficient way to implement a scripting language. Once you have an AST, a bytecode compiler/interpreter is not really hard to implement, so I'm wondering why they knowingly leave so much performance on the table...
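For anyone unfamiliar with the distinction, here's a minimal Rust sketch of both approaches for a toy expression language (the names are invented for the example; this is not Rhai's internals). The AST walker recurses and chases pointers per node; the bytecode version compiles once to a flat instruction list and then runs a tight dispatch loop over contiguous memory, which is where the usual speedup comes from.

```rust
enum Expr {
    Num(i64),
    Add(Box<Expr>, Box<Expr>),
    Mul(Box<Expr>, Box<Expr>),
}

// AST-walking evaluation: one recursive call per node.
fn eval(e: &Expr) -> i64 {
    match e {
        Expr::Num(n) => *n,
        Expr::Add(a, b) => eval(a) + eval(b),
        Expr::Mul(a, b) => eval(a) * eval(b),
    }
}

enum Op {
    Push(i64),
    Add,
    Mul,
}

// One-time compilation of the AST into flat bytecode.
fn compile(e: &Expr, out: &mut Vec<Op>) {
    match e {
        Expr::Num(n) => out.push(Op::Push(*n)),
        Expr::Add(a, b) => {
            compile(a, out);
            compile(b, out);
            out.push(Op::Add);
        }
        Expr::Mul(a, b) => {
            compile(a, out);
            compile(b, out);
            out.push(Op::Mul);
        }
    }
}

// Stack-machine execution: a flat loop, no recursion.
fn run(code: &[Op]) -> i64 {
    let mut stack = Vec::new();
    for op in code {
        match op {
            Op::Push(n) => stack.push(*n),
            Op::Add => {
                let (b, a) = (stack.pop().unwrap(), stack.pop().unwrap());
                stack.push(a + b);
            }
            Op::Mul => {
                let (b, a) = (stack.pop().unwrap(), stack.pop().unwrap());
                stack.push(a * b);
            }
        }
    }
    stack.pop().unwrap()
}

fn main() {
    // (2 + 3) * 4
    let ast = Expr::Mul(
        Box::new(Expr::Add(Box::new(Expr::Num(2)), Box::new(Expr::Num(3)))),
        Box::new(Expr::Num(4)),
    );
    let mut code = Vec::new();
    compile(&ast, &mut code);
    assert_eq!(eval(&ast), 20);
    assert_eq!(run(&code), 20);
}
```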
I'd start by trying UniFFI [1] which looks much simpler than the approach of manually writing a C API and using that as a foundation for higher-level language bindings.
This would also likely be the starting point for a package management system (if there ends up being demand for one). Rust doesn't have a stable ABI so to make sure that dynamically loaded Rust packages are compatible, either Koto would need to be in the business of Rust toolchain management so that packages can be recompiled when needed, or an API layer would be needed. There are some projects that provide ABI-compatibility shims but I don't like the idea of having two separate approaches to FFI, so I'd want to try to build on the foreign-FFI layer once it's in place.
I'm half hoping that by the time I'm interested in working on this Rust will have decided to pursue ABI stability. And there's also something in the back of my mind that's yelling 'Wasm!' at me but I would need someone wiser to convince me that it would be the right direction.
I'm fine with the Rust API breaking compatibility.
Before 1.0 I'd want to address at least:
- the FFI / package management topics mentioned above
- async support: https://github.com/koto-lang/koto/issues/277
- extend the parser to support an autoformatter: https://github.com/koto-lang/koto/issues/286
...and then have a larger number of people using it in projects without major issues coming up for a good while, e.g. a year+.
At the time we did it with Lua.
We extended Nginx and Envoy proxy with a Rust library (and a server), and added a Lua interface so users can further tweak the config and the flow.
https://github.com/zuriby/curiefense/tree/main/curiefense/cu...
After looking at this for 5 minutes, it seems better than Rhai according to my metric, but not necessarily better than Lua.
If it helps, here's a git blame helper script I made for Helix (as an example of practical code without any effort put into making it readable for others): https://github.com/irh/dotfiles/blob/main/scripts/.scripts/g...
e.g. https://github.com/irh/dotfiles/blob/main/scripts/.scripts/t...
Nice! Now, if only it had type support.
Edit: Sure enough, "whitespace is important in Koto, and because of optional parentheses, `f(1, 2)` is not the same as `f (1, 2)`. The former is parsed as a call to f with two arguments, whereas the latter is a call to f with a tuple as the single argument."
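Koto itself won't run here, but Rust spells out explicitly the same distinction that Koto encodes with whitespace: a call with two arguments versus a call with a single tuple argument. (Function names below are invented for the illustration.)

```rust
// Corresponds to Koto's `f(1, 2)`: two separate arguments.
fn two_args(a: i32, b: i32) -> i32 {
    a + b
}

// Corresponds to Koto's `f (1, 2)`: one tuple argument.
fn one_tuple(t: (i32, i32)) -> i32 {
    t.0 + t.1
}

fn main() {
    assert_eq!(two_args(1, 2), 3);
    assert_eq!(one_tuple((1, 2)), 3);
    // In Rust the two forms have different types, so mixing them up
    // is a compile error; in Koto a stray space silently switches
    // between them, which is the footgun being discussed.
}
```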
I'm guessing because "it's cleaner/simpler", but that's a shallow understanding of those words. Just because there are two characters fewer on the screen doesn't make the code simpler. Simple semantics are what you should aim for and inconsistencies like these throw a wrench into that.
For example, strings in JSON vs YAML. Isn't it "simpler" to not have to quote every string? So long as your string isn't "no", that may be true. So now instead of a simple mental model of "every string must be quoted" it's "strings don't need quotes except for these specific exceptions that cause issues: <list>". So much simpler... sigh.
<virtual high five>
My thinking is that the potential footgun here is outweighed by the win of paren-free calls for quick scripting / rapid iteration, but it certainly counts towards the language strangeness budget [1] so I figured it was worth pointing out in the guide.
[1] https://steveklabnik.com/writing/the-language-strangeness-bu...
One of the problems of terse syntaxes is that one typo away from a syntactically valid program lies another syntactically valid program with entirely different semantics.
I prefer a syntax that has enough "gaps" between syntactic constructs, so that a single typo usually leads to an obvious syntax error. In this regard, Python or Java or TS are comfortable, Haskell or Lisp or C are okay, and CoffeeScript or Scala or C++ are terrible.