Posted by em-bee 10/26/2024
That is, the HotSpot VM was such a phenomenal engine that lots of other languages sprang up to take advantage of it: Clojure, Scala, Kotlin, etc.: https://en.m.wikipedia.org/wiki/List_of_JVM_languages . Even with the Java language itself, syntactic changes happen much more frequently than VM-level bytecode changes.
With an interpreted language like JavaScript, the dividing line is a little grayer, because the shippable code isn't bytecode, it's still just text. But it still seems to make sense to me to target a "core", directly interpretable language, and then let all the syntactic sugar be precompiled down to that (especially since most JS devs have a compilation step now anyway). Heck, we basically already did this with asm.js, the precursor to WebAssembly.
asm.js came about because it was a very optimizable subset of JavaScript, then it was superseded by WebAssembly, then the proposal in TFA is basically asking for asm.js back, but perhaps the better answer is to make WebAssembly fully support all of what JS could originally do.
This is perhaps why, as I get older, I sometimes feel like I want to get out of software development and become a goose farmer like that dude on LinkedIn - lots of times it feels more like spiraling in circles than actually advancing.
Look at the stronghold grip of C/C++ and how long it’s taken Rust to gain a meaningful foothold in those realms for example.
Google wanted to flat-out replace JS once already; that was the entire origin of Dart. They only pivoted to the cross-platform mobile framework as its primary target after it failed to gain traction as a standard.
Dartium was cancelled and AdWords team, having just migrated from GWT to AngularDart, saved the Dart team.
Eventually many left the team.
Somewhere at Google, Flutter started, and when they decided to replace JavaScript in their original design, they reached out to the Dart team.
So Dart got a new purpose in life, being Flutter's language.
In the process it was rebooted from a dynamic language into a statically typed one, with JIT and AOT toolchains.
How long Flutter, and by association Dart, remain relevant remains to be seen.
It was actually really fortunate that for a long time it didn't have a big community behind it. They just put a lot of very smart language designers on the team, who had ten years to try various approaches and learn from the mistakes of not only themselves but others, without a lot of outside noise.
But there’s no other language I would prefer to write applications in. It’s just a really nice mix of ergonomic, expressive and powerful.
Additionally, we already have LLVM all over the place, alongside the JVM and CLR; it is the most widely deployed compiler infrastructure, with contributions at the same level as the Linux kernel.
Javagator / Jazilla!
https://www.cnet.com/tech/tech-industry/javagator-down-not-o...
The current active proposal for it is the Component Model: https://component-model.bytecodealliance.org/design/why-comp....
Anything you can do in JavaScript, including access to the DOM, can be put into a JavaScript function. You can import that function into a WebAssembly Module, and you can use WebAssembly Memory to transfer large or complicated data as an efficient side channel. It all works.
This is what StackOverflow tells me (2020):
> Unfortunately, the DOM can only be accessed within the browser's main JavaScript thread. Service Workers, Web Workers, and Web Assembly modules would not have DOM access. The closest manipulation you'll get from WASM is to manipulate state objects that are passed to and rendered by the main thread with state-based UI components like Preact/React.
> JSON serialization is most often used to pass state with postMessage() or Broadcast Channels. Bitpacking or binary objects could be used with Transferrable ArrayBuffers for more performant messages that avoid the JSON serialization/deserialization overhead.
This feels like "we can have DOM access at home" meme.
Web Workers can't directly access the DOM in JavaScript either. This is not a WebAssembly problem. If you want a Web Worker to manipulate your document, you're going to post events back and forth to the main thread, and Web Assembly could call imported functions to do that too.
I don't even know what he's on about with Preact/React...
Save the following as "ergonomic.html" and you'll see that WebAssembly is manipulating the DOM.
<!doctype html><title>Not that hard</title>
<script type="module">
  document.addEventListener('DOMContentLoaded', () => {
    /* Compile this module with wat2wasm to make the binary below:
      (module
        (import "env" "easy" (func $easy (param i32)))
        (func $run (param) (result)
          (call $easy (i32.const 123))
          (call $easy (i32.const 456))
        )
        (memory $mem 1)
        (export "run" (func $run))
        (export "mem" (memory $mem))
      )
    */
    const binary = new Uint8Array([
      0, 97, 115, 109, 1, 0, 0, 0,
      1, 8, 2, 96, 1, 127, 0, 96,
      0, 0, 2, 12, 1, 3, 101, 110,
      118, 4, 101, 97, 115, 121, 0, 0,
      3, 2, 1, 1, 5, 3, 1, 0,
      1, 7, 13, 2, 3, 114, 117, 110,
      0, 1, 3, 109, 101, 109, 2, 0,
      10, 14, 1, 12, 0, 65, 251, 0,
      16, 0, 65, 200, 3, 16, 0, 11,
    ]);
    const imports = {
      easy(arg) {
        const div = document.createElement("div");
        div.textContent = "DOM this: " + String(arg);
        document.body.appendChild(div);
      }
    };
    const module = new WebAssembly.Module(binary);
    const instance = new WebAssembly.Instance(module, { env: imports });
    instance.exports.run();
  });
</script>
That `easy(arg)` function could do much more elaborate things, and you could pass lots of data in and out using the memory export.

I'd like to believe a simple standalone example like this would be enough to get people to shut up about the DOM thing, but I know better. It'll be the same people who think you need to link with all of SDL in an Emscripten project in order to draw a line on a canvas.
> This feels like "we can have DOM access at home" meme.
And I'm sure somebody (maybe you) will try to move the goal posts and claim some other meme applies.
Around 10 years ago, I was having lunch in a food court and overheard "Luckily I don't have to use javascript, just jquery".
Around 5 years ago, a co-worker admitted he still had issues distinguishing what functionality was python and what came from Django (web framework), despite having used them both daily for years. He thought it was because he learned both at the same time.
I wouldn't be surprised if this was more of the same, and just getting worse as we pile more and more abstractions on top.
but what bothers me a bit is that this example still uses custom javascript code.
i tried to find an answer, but essentially what appears to be missing is the ability to access js objects from wasm. to access the document object it looks like i need a wrapper function in js:
  function jsdocument(prop, arg) {
    document[prop](arg);
  }
so far so good, i can import this jsdocument() function and use it to call any property on the document object, but if document[prop](arg) returns another DOM object, then what? maybe something more elaborate:
  function callDOMobj(prop, arg, prop2, arg2) {
    document[prop](arg)[prop2](arg2);
  }
i can call this function with the arguments ("getElementById", "foo", "append", "<div>more foo</div>") in any WASM language and it will result in calling document.getElementById("foo").append("<div>more foo</div>"); which allows some basic DOM manipulation already. but then i want to continue with that object, so maybe i can do this:

  var objlist = []

  function getDOMobj(prop, arg) {
    var len = objlist.push(document[prop](arg));
    return len - 1;
  }

  function callDOMobj(pos, prop, arg) {
    objlist[pos][prop](arg);
  }
can you see what i am getting at here? building up some kind of API that allows me to access and manipulate any DOM object via a set of functions that i can import into WASM to work around the fact that i can't access document and other objects directly. it looks like this is similar to this answer here: https://stackoverflow.com/a/53958939

solving this problem is what i mean when i ask for direct access to the DOM. i believe such an interface should be written only once so that everyone can use it without having to reinvent it like it appears to be necessary at the moment.
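to make it concrete, here is a rough sketch of the kind of handle table i have in mind (all the names are invented for illustration, this isn't any existing API):

  // sketch only: the js side keeps the real DOM objects in a table and hands
  // integer handles to the wasm side, which never touches the objects directly
  const objects = [document];   // handle 0 = the document itself

  const domImports = {
    // objects[target][method](arg) -> handle of whatever was returned
    callMethod(target, method, arg) {
      const result = objects[target][method](arg);
      return objects.push(result) - 1;
    },
    // same idea for plain property reads, e.g. property(h, "parentNode")
    property(target, name) {
      return objects.push(objects[target][name]) - 1;
    },
    // free a slot so the DOM node can be garbage collected
    release(handle) { objects[handle] = undefined; }
  };

  // imported the same way as the `easy` example above:
  //   new WebAssembly.Instance(module, { dom: domImports });
  // in practice `method`, `name` and string-valued `arg` would arrive as
  // pointer/length pairs into WebAssembly.Memory and be decoded with
  // TextDecoder on the js side first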
It's nice of you to say so. Thank you.
> can you see what i am getting at here?
I mostly can, but I'm not sure we're clear what we're talking about yet.
I see a lot of people who repeat something about "WebAssembly isn't usable because it can't manipulate the DOM". Ok, so I show an example of WebAssembly manipulating the DOM. That should put that to rest, right? If not, I'm curious what they meant.
> building up some kind of API that allows me to access and manipulate any DOM object via a set of functions that i can import into WASM to work around the fact that i can't access document and other objects directly,
This is a shortcoming in the language implementation, or the library for the language. The machinery is already there at the WebAssembly level. If your language is low level (Rust, C, or C++), and doesn't have what you want, you could roll your own. If your language is high level (Python or Lua), you're at the mercy of the person who built your version of Python.
The core of WebAssembly is a lot like a CPU. It's analogous to AMD64 or AArch64. It'd be weird to say you need changes to your CPU just to use a function called `getElementByName()` or `setAttribute()`. Some WebAssembly extensions have added features to make that "CPU" more like a Java style virtual machine. There are (or will be) garbage collected references to strings, arrays, and structs. This might make it better for implementing Go and Java style languages, and it could help with a fresh implementation of Python or Pike too. And maybe some of those changes will give controlled access to JavaScript style objects.
There's a last discussion to be had about performance. Maybe the bridge between WebAssembly imports and exports is too slow for intensive use. That's a debate that should be backed up with benchmarks of creative solutions. Maybe accessing JavaScript strings is so common, so important, and so slow that it really does require an enhancement to the standard.
Thanks for confirming that WebAssembly still cannot manipulate DOM in 2024.
It can only call custom javascript functions that manipulate DOM AND I need to write some arcane function signature language for every DOM manipulating function I want to call.
I'll give another 4 years and see if they fixed this.
You really don't know that you can create WebAssembly in other languages?!? I used WAT to keep the example short, but that's clearly lost on you.
> I'll give another 4 years and see if they fixed this.
In that time, there are a lot of things you could be learning. Embracing ignorance and belligerence isn't likely to serve you well in the long term.
I'm glad someone liked it :-)
> I love his description of Forth as "a weird backwards lisp with no parentheses"
I've been interested in that duality between Forth and Lisp before, but my progression always seems to follow this path:
- Since Forth is just Lisp done backwards and without parens, and since it's not hard to write an sexpr parser, I might as well do Lisp to check the arity on function calls.
- But in addition to arity errors, I'd really like the compiler to catch my type errors too.
- And since I've never seen an attractive syntax for Lisp with types, I might as well have a real grammar...
And then I've talked myself out of Forth and Lisp! Oh well.
https://donhopkins.medium.com/the-shape-of-psiber-space-octo...
Not coincidentally, James Gosling designed the NeWS window system and implemented its PostScript interpreter, years before designing and implementing Java. And before that he designed and implemented "MockLisp" in his Unix version of Emacs, which he self-effacingly described as: "The primary (some would say only) resemblance between Mock Lisp and any real Lisp is the general syntax of a program, which many feel is Lisp's weakest point."
https://news.ycombinator.com/item?id=29954778
James Gosling's Emacs Mocklisp was like FEXPRs on PCP, with support for prompting the user to supply omitted arguments.
https://news.ycombinator.com/item?id=14312249
DonHopkins on May 10, 2017, on: Emacs is sexy
Hey at least Elisp wasn't ever as bad as Mock Lisp, the extension language in Gosling (aka UniPress aka Evil Software Hoarder) Emacs.
It had ultra-dynamic lazy scoping: It would defer evaluating the function parameters until they were actually needed by the callee (((or a function it called))), at which time it would evaluate the parameters in the CALLEE's scope.
James Gosling honestly copped to how terrible a language MockLisp was in the 1981 Unix Emacs release notes:
https://archive.org/stream/bitsavers_cmuGosling_4195808/Gosl
12.2. MLisp - Mock Lisp
Unix Emacs contains an interpreter for a language
that in many respects resembles Lisp. The primary
(some would say only) resemblance between Mock Lisp
and any real Lisp is the general syntax of a program,
which many feel is Lisp's weakest point. The
differences include such things as the lack of a
cons function and a rather peculiar method of
passing parameters.
"Rather peculiar" is an understatement. More info, links and code examples:https://news.ycombinator.com/item?id=8727085
Comparison of PostScript with Forth and Lisp:
https://news.ycombinator.com/item?id=22456471
[...]
PostScript is much higher level than Forth, and a lot more like Lisp than Forth, and has much better data structures than Forth, like polymorphic arrays (that can be used as code), dictionaries (that can be used as objects), strings, floating point numbers, and NeWS "magic dictionaries" that can represent built-in objects like canvases, processes, events, fonts, etc.
Yet Forth doesn't even have dynamically allocated memory, although in a few pages of code you can implement it, but it's not standard and very few Forth libraries use it, and instead use the linear Forth dictionary memory (which is terribly limited and can't be freed without FORGETting everything defined after you allocated it):
https://donhopkins.com/home/archive/forth/alloc.f
PostScript is homoiconic. Like Lisp, PostScript code IS first class PostScript data, and you can pass functions around as first class objects and call them later.
https://en.wikipedia.org/wiki/Homoiconicity
[...]
There was a language called "V" a while back, different than a more recent language called V. It was basically a Forth where quoting was done with square brackets. This replaced the colon-semi notation for defining words, and it was also nice for nested data structures. This language seems to have fallen off the web though.
You mentioned FExprs. I never looked at Mock Lisp, and it sounds like Gosling doesn't think I should! However, I'm sure you're aware of Kernel. I think of Scheme as the "prettiest programming language I don't want to use", and I think the vau stuff in Kernel makes it even prettier. (But I still don't want to use it.)
For homoiconicity, I've also considered something like Tcl or Rebol/Red. The latter two blur the lines between lexer and parser in a way that I'd like to learn more about.
But really, I always come back to wanting static typing. Both for compile time error checking, and to give the compiler a leg up in runtime performance. Instead of using separate declarations like you see in Typed Racket and some others, I wonder if a Lisp with the addition of one "colon operator" to build typed-pairs would do it. Just one step down the slippery slope of infix syntax sugar. In the context of WebAssembly, something like this:
(import (foo a:i32 b:f64):())
(export (bar x:i64 y:f32):(i32 i32)
(code goes here)
)
Using colons to specify the types of the parameters and return result(s). It'd also be nice to have colon pairs like this for key:value in hash tables, or as cond:code in switch/conditional constructs.

I.e. they are correct that it is arcane. What percentage of programmers today do you think have ever seen code written in any Lisp dialect, let alone understand it?
That seems like an easy question, but there are a lot of choices which complicate it. I'm sure people have compiled CPython to WebAssembly, but I think you only get the WebAssembly imports they (or Emscripten) have chosen for you. I can't use that as an example of what I was trying to show.
It looks like py2wasm is aiming for a true Python to WASM compiler, rather than compiling an interpreter. However, I don't think it supports user-defined imports/exports yet. There's a recent issue thread about this (https://github.com/wasmerio/py2wasm/issues/5).
> how would it compare with non-wasm javascript?
I'm not sure I understand the question. If you're just using JavaScript, it just looks like JavaScript.
for the second question the example you gave is equivalent to what in plain html/js?
No. It means you only get what the person who ported CPython or py2wasm gave you. It's not a limitation in WebAssembly, and maybe they have some other (hopefully better) API than the `easy(123)` example I was trying to show.
> for the second question the example you gave is equivalent to what in plain html/js?
If I understand what you're asking, it's just:
<!doctype html><title>plain jane</title>
<script type="module">
  function easy(arg) {
    const div = document.createElement("div");
    div.textContent = "DOM this: " + String(arg);
    document.body.appendChild(div);
  }
  easy(123);
  easy(456);
</script>

that's what i meant. it's not possible until someone adds the necessary features to the wasm port of the language. makes sense of course, like any feature of a new architecture.
> If I understand what you're asking
exactly that, thank you. it is easier to understand the example if there is no doubt as to what is the js part and what is wasm (it also didn't help that the code was not easy to read on a phone)
Flaunting your ignorant anti-intellectualism isn't a good look.
You do know this is 2024, you have Internet access, and you can just look shit up or ask ChatGPT to learn new things, instead of cultivating ignorance and pointlessly criticising programmers trying to raise awareness, share their experiences, and educate themselves and other people.
In case you've been living under a rock and didn't realize it, JavaScript, the topic of this discussion, is essentially a dialect of Lisp, with garbage collection, first class functional closures, polymorphic JSON structures instead of s-expressions, a hell of a lot more like and inspired by Lisp and Scheme than C++, and LOTS of people know it.
If you’re trying to raise awareness of something, don’t act like the reader is stupid if they don’t already understand. Insisting that something is obvious, especially when it is not, means any reader who does not understand it will likely perceive the comment as snobby. As does including snide remarks such as “in case you’ve been living under a rock”.
> Flaunting your ignorant anti-intellectualism isn’t a good look.
Why do you assume that I personally don’t know what s-expressions are just because I agree that they’re arcane? Labelling someone as an ignorant anti-intellectual just because they disagree with something you said isn’t a good look either.
Why don't you just go away and let other people have their interesting discussions without you, instead of bitterly complaining about things you purposefully know nothing about and refuse to learn? How does it hurt your delicate feelings to just shut up and not bitch and whine about discussions you're not interested in?
I think you’re assuming that all of the comments you’re talking about are written by the same person, when they’re not. I haven’t been attacking anyone, and I don’t think I’ve replied to anyone who’s tried to explain it.
> things you purposefully know nothing about and refuse to learn
Why do you still assume I don’t know what they are? I’ve already pointed out that my belief that s-expressions are arcane doesn’t mean I don’t know what they are.
As another illustration of my point, I just stumbled across this comment on another post:
> But maybe the whole "ease of use" budget is blown by using a Lisp in the first place.[0]
The fact is that Lisp syntax is understood by relatively few programmers, which meets the definition of arcane. You immediately flying off the handle when someone calmly points this out will not help your goal.
I, like most devs, know what Lisp is.
I, like most devs, just don't care.
I knew Lisp the way I know that that guy walking down the street is my neighbor Bob. But since I've never had a conversation with Bob, I actually have no idea who he is.
When I see Korean writing in hangeul, I know it is Korean writing, but can't read a letter of it (nor speak a word of Korean).
These examples are like knowing what Lisp is.
The thing I had not expected was how the knowledge in the Lisp world and its perspectives are very informative about a whole lot of non-Lisp!
Nobody is trying to make you stop talking about it. We’re trying to make you understand that the way you’re talking about it is elitist. When someone said they were confused by the syntax, you could have just explained it without judgement. Instead, you felt compelled to flaunt your membership of the in-group who understands Lisp, and try to make others feel stupid by implying that people who don’t understand it aren’t good programmers, or are anti-intellectual.
You’re doubling down on it in this comment, too, still insistent on making people feel like they’re “less than” because they don’t know Lisp:
> so other more knowledgeable and curious people
If I didn’t know Lisp, and my first exposure to it was from someone who sees this kind of toxicity as a reasonable way to speak to people, would I want to join their community?
Wouldn't (didn't!) faze me. Every community has it. The most popular languages, platforms and tools in fact bring out unbridled hostility. Probably, hostility finds a peak in the second most popular camps. :)
We have already lost people who are influenced by this sort of fluff, because those people will be turned away from Lisp by the anti-Lisp trolling about parentheses, niches and slow processing over everything being a list, and so on. There aren't enough Lisp people around to counter it.
Infantile language tribalism, though, has no place in engineering and is blatant ignorance when coming from a supposed adult.
Implementations of Lisp are no more niche than other languages with managed run-times.
Lisp has been used for even operating system development: Lisp code taking interrupts, and driving ethernet cards and disks and so on.
Which member of the Lisp family are you talking about, and what do you think is the niche?
No more niche than Java, C# .NET and Python? Right...
> Which member of the Lisp family are you talking about, and what do you think is the niche?
You can combine all of the Lisp family together and still it wouldn't scratch the popularity, demand or job positions of any of the top languages.
Look, nobody denies Lisp'like languages are being used. Just like Fortran. :)
Fortran has a niche: numeric computing in scientific areas. However, even Fortran is not your grandfather's Fortran 66 or 77 any more. I had a semester of the latter once, as part of an engineering curriculum before switching to CS.
It supposedly has OOP in it, and operator overloading and such.
I don't know modern Fortran, so I wouldn't want to look ignorant spreading decades-old misinformation about Fortran.
They just don't care enough to invest time in it because it is niche. And proponents tend to tirelessly spam about it from their ivory towers like it's flawless and everyone who didn't learn it is somehow inferior, somehow justifying personal attacks like yours. Classy as usual.
We’re becoming annoyed not because people are trying to talk about something interesting, but because they are being intentionally insulting and condescending and then using bad faith arguments like this one when they’re called out on it.
Which of these quotes represent the commenter “trying to talk about something interesting”?
> flaunting your ignorance and your anti-intellectualism
> The point is that anyone who's distracted by the arcanity of Web Assembly Text Format obviously doesn't understand the first thing about WASM
> You do know this is 2024, you have Internet access, and you can just look shit up
> In case you've been living under a rock and didn't realize it
> but you're whining, lashing out, and attacking people who are trying to explain it, and trying to police and derail discussions between other people who are more knowledgeable and interested in it, which makes you a rude anti-intellectual asshole… Why don't you just go away and let other people have their interesting discussions without you, instead of bitterly complaining about things you purposefully know nothing about and refuse to learn? How does it hurt your delicate feelings to just shut up and not bitch and whine about discussions you're not interested in?
> And that proves my point that you're flaunting your ignorance and your anti-intellectualism. But you be you. There's no point in trying to make other people stop talking about Lisp by complaining about how proudly ignorant you are, and how you want to remain that way, so you don't want anyone else to talk about it. This really isn't the place for that, since you always have the option of not reading, shutting up, and not replying and interrupting and derailing the discussion, so other more knowledgeable and curious people can have interesting discussions without you purposefully harassing them like a childish troll.
> Look, it's pretty clear I stepped on some insecurity.
Only the last quote is mine, and I stand by it.
- I said WebAssembly can already manipulate the DOM with functions.
- He asked for an ergonomic example because StackOverflow told him it can't be done. The "we can have DOM access at home" bit seems like the start of things to come.
- I provided a concise example, and expressed skepticism that this would settle the discussion.
- He responded with sarcasm, and weirdly accused me of sarcasm.
- I reacted poorly to his bitchy and ungrateful reply.
My best guess is that the WAT format confused him. He didn't know it was a programming language, and he didn't know you could do it with other programming languages, so he got insecure and lashed out.
Do you have a better explanation for the weird transition from technical discussion to flame war and hurt feelings?
I feel like the WASM fervor has more to do with the fact that people don't enjoy using frontend tools or JavaScript than with the actual utility tradeoffs.
>> WASM isn’t the silver bullet everyone seems to cling to.
And it isn't the silver bullet exactly for the reason that it's horribly complicated to access normal JS objects, including strings. The copy dance goes roughly like the sketch below.

Copy from JavaScript to WebAssembly:

- Use TextEncoder to convert a JS String to a Uint8Array
- Copy the bytes from the Uint8Array into WebAssembly.Memory

Copy from WebAssembly to JavaScript:

- Copy the bytes from WebAssembly.Memory into a Uint8Array
- Use TextDecoder to convert the Uint8Array to a JS String
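In code, that round trip is roughly this (a sketch only: `alloc` and `mem` are hypothetical exports of the module, not standard names):

  // sketch of the copy-in / copy-out dance; assumes the module exports its
  // memory as `mem` and an allocator as `alloc` (both names are made up here)
  function passStringToWasm(instance, jsString) {
    const bytes = new TextEncoder().encode(jsString);             // JS string -> UTF-8 bytes
    const ptr = instance.exports.alloc(bytes.length);             // hypothetical allocator export
    new Uint8Array(instance.exports.mem.buffer, ptr, bytes.length).set(bytes);
    return { ptr, len: bytes.length };
  }

  function readStringFromWasm(instance, ptr, len) {
    const view = new Uint8Array(instance.exports.mem.buffer, ptr, len);
    return new TextDecoder().decode(view);                        // UTF-8 bytes -> JS string
  }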
JS Strings are pretty much always going to be "rope data structures". Trying to provide anything other than copy-in and copy-out is going to expose implementation details that are complicated as fuck and not portable between browsers.

https://github.com/WebAssembly/js-string-builtins/blob/main/...
"the overhead of importing glue code is prohibitive for primitives such as String, ArrayBuffer, RegExp, Map, and BigInt where the desired overhead of operations is a tight sequence of inline instructions, not an indirect function call"
I guess the more elegant and universal stringref proposal is DEAD now !?
https://github.com/WebAssembly/stringref/blob/main/proposals...
I don't really mind, as it keeps the wasm bytecode cleaner.
We don't yet have consensus on this proposal in the Wasm standardization group, and we may never reach there, although I think it's still possible. As I understand them, the objections are two-fold:

- WebAssembly is an instruction set, like AArch64 or x86. Strings are too high-level, and should be built on top, for example with (array i8).
- The requirement to support fast WTF-16 code unit access will mean that we are effectively standardizing JavaScript strings.
I really like stringref and hope the detractors can be convinced of its usefulness. Dealing with strings is not fun right now.

And dealing with strings isn't fun in many other languages or runtimes or OSes.
e.g.1. C# "Strings in .NET are stored using UTF-16 encoding. UTF-8 is the standard for Web protocols and other important libraries. Beginning in C# 11, you can add the u8 suffix to a string literal to specify UTF-8 encoding. UTF-8 literals are stored as ReadOnlySpan<byte> objects" - https://learn.microsoft.com/en-us/dotnet/csharp/language-ref...
e.g.2. Erlang/BEAM/Elixir: "The Erlang string type is implemented as a single-linked-list of unicode code points. That is, if we write “Hello” in the language, this is represented as [$H, $e, $l, $l, $o]. The overhead of this representation is massive. Each Cons-cell use 8 bytes for the code point and 8 bytes for the pointer to the next value. This means that the 5-byte ASCII-representation of “Hello” is 5*16 = 80 bytes in the Erlang representation." - https://medium.com/@jlouis666/erlang-string-handling-7588daa...
This refers just to Erlang's string() type, not BEAM strings in general; it's just a bad default. If you're not using binaries, you're doing it wrong, and that's exactly why Elixir's strings are UTF-8 binaries.
I agree about keeping wasm bytecode cleaner. The core plus simd stuff is such a great generalization of the ARM and X86 CPUs we mostly use. The idea of gunking it all up with DOM related stuff is distasteful.
It supports nearly arbitrary imports of anything you want. How much more flexibility do you need? You could provide an `eval` function to run arbitrary code with a small amount of effort.
Is the problem that Emscripten and/or Rust haven't laid it all out on a platter?
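For instance, a sketch of such an import (string decoding done the usual copy-out way; `eval_js` and `mem` are made-up names, and this is an illustration rather than a recommendation):

  let memory;  // set to instance.exports.mem after instantiation

  const imports = {
    env: {
      // WASM passes a pointer/length into its own memory; JS decodes and evaluates it
      eval_js(ptr, len) {
        const src = new TextDecoder().decode(new Uint8Array(memory.buffer, ptr, len));
        (0, eval)(src);  // indirect eval runs in global scope
      }
    }
  };

  // const instance = new WebAssembly.Instance(module, imports);
  // memory = instance.exports.mem;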
But what if it was?
What I would like to see is:
- a bytecode "language" that roughly corresponds to Javascript semantics, and that is what the engines interpret (and JIT compile)
- browsers still include a compiler to compile JS sourcecode to the bytecode. Possibly wasm could work, although it would need a lot more functionality, like native support for GC, and DOM access, etc.
- browsers include a disassembler/decompiler to improve debugging of the bytecode
Then simple sites, and development can use plain JS source, but for higher performance you can just deploy the pre-compiled bytecode.
Java is a product of the JVM, which was the innovation, not the reverse. A successful language moving, post-success, to a new bytecode format would be, as far as I know, unprecedented.
The idea that JavaScript is an interpreted language is also fairly shaky. It’s JIT compiled as soon as it arrives to your browser. Honestly, a modern JS engine is not different from any other VM.
The question as you rightfully pointed really is what do you send to the browser and under it lies the fundamental question of what is a browser actually. Is it a way to browse hypertext content or a standardised execution environment?
https://blog.stenmans.org/theBeamBook/#_compiler_pass_core_e...
Splitting the language might make sense from an engineering perspective but what about all the extra energy and bandwidth that will be needed?
1) JavaScript, the original assembly language of the internet, does not need new language features.
2) JavaScript, the front-end web development language is a fractal of infinitely many sub-languages that transpile back to ES5.
The proposal, as I read it, is: Let's stop adding front-end web features to the assembly language; it doesn't get easier, better or faster if we change the underlying, slowly adopting and hard-to-optimize foundation.
When you want a new language feature, add it to the fractal part that transpiles back to the part well-supported and highly optimized in existing runtimes. The only loss is that you need to transpile, so your build pipeline becomes non-trivial. But it's either "code assembly" or "get a modern build tool".
There are still some new language features that need to be transpiled, but most projects do not need to worry about transpiling const/let/arrow functions/etc.
I mean even newer features like nullish coalescing and optional chaining are at 93-94% support.
At the end of the day, I would say tools like babel for transpiling are less and less important. Yes, you still use a bundler because the web has a lot of runtime constraints other native applications don’t have (gotta ship a tiny bundle so the page loads fast), but it’s better for the language features to be implemented in the VM and not just faked with more JS.
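To make that concrete, here is optional chaining plus nullish coalescing next to a hand-simplified ES5-style transform (not the exact output of Babel or any particular tool):

  // Modern syntax vs. roughly what a transpiler emits for older targets.
  function getPortModern(config) {
    return config?.server?.port ?? 8080;
  }

  function getPortTranspiled(config) {
    var _server;
    var port =
      config == null
        ? undefined
        : (_server = config.server) == null
          ? undefined
          : _server.port;
    return port != null ? port : 8080;
  }

  console.log(getPortModern(undefined), getPortTranspiled(undefined));  // 8080 8080
  console.log(getPortModern({ server: { port: 3000 } }),
              getPortTranspiled({ server: { port: 3000 } }));           // 3000 3000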
I did assess the ES6 coverage of ~97% a month ago.
My point was just that while it sounds high, 3% of users is a lot of people to cut off if your JavaScript is essential.
E.g. Firefox sits at ~2.7% browser market share. (Not, incidentally, the part that doesn't support ES6, but it's a demographic the size of my own.)
These are probably the 3% that won’t affect your business much. They’re more likely to be on older hardware and also have less discretionary income. Or browsing on really weird hardware that is also unlikely to lead to a sale.
JavaScript for some scripting, any other language for bigger applications.
Keep the single threaded event loop approach but kill the JS semantics.
https://developer.chrome.com/blog/wasmgc/
https://v8.dev/blog/wasm-gc-porting
But languages like C# want more features in WasmGC:
https://github.com/dotnet/runtime/issues/94420
No direct DOM access yet. You still have to use JavaScript glue code to get at the DOM.
And Scala.js has shipped it. [1] Although technically experimental, it has no known bugs and it has full support of things like manipulating DOM objects from Scala.js-on-Wasm code.
[1] https://www.scala-js.org/news/2024/09/28/announcing-scalajs-...
I want fixed-size buffer-backed structs for JS. Basically a DataView as a C struct. This would massively benefit interop and solve some shortcomings of DataView.
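For reference, this is the kind of hand-rolled offset bookkeeping such structs would replace (the { id: u32, x: f64, y: f64 } layout is just an invented example):

  // Manual offsets for one record; unlike a C struct there is no automatic
  // padding/alignment or named-field access.
  const STRUCT_SIZE = 4 + 8 + 8;

  function readPoint(buffer, index) {
    const view = new DataView(buffer, index * STRUCT_SIZE, STRUCT_SIZE);
    return {
      id: view.getUint32(0, true),   // true = little-endian
      x: view.getFloat64(4, true),
      y: view.getFloat64(12, true),
    };
  }

  function writePoint(buffer, index, { id, x, y }) {
    const view = new DataView(buffer, index * STRUCT_SIZE, STRUCT_SIZE);
    view.setUint32(0, id, true);
    view.setFloat64(4, x, true);
    view.setFloat64(12, y, true);
  }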
There was a proposal for a binary AST for JS several years ago [1]. Why not just use that as JS0? It's separate and can offer new possibilities as well.
How would this be useful for a JS0?
But there's still far to go. Large parts of the browser API are still not directly available in WASM.
I very much look forward to WASM reaching stability. It's very enjoyable to run Rust code in the browser.
WebAssembly can call arbitrary JavaScript through imports. You could literally provide an `eval` function if you were motivated to.
I also have to wonder if people are excited about replacing Javascript, why they would want to have HTML/CSS/DOM on top of WASM. A different front-end UI tech could be much better than slow, old DOM.
Providing access to an already proven DOM would be the better solution.
The download isn't much different to a typical website. That Flutter demo in wasm is 2 megabytes.
Avalonia UI's WebAssembly uses canvas in C#: https://avaloniaui.net/
Uno Platform's WebAssembly implementation uses the DOM rather than drawing to canvas: https://platform.uno/
Uno's philosophy is to use platform native controls. The benefit is that you get platform native characteristics, the cost is it will never be exactly the same in each browser and platform.
A 2mb base floor before any code or assets is not acceptable for most use cases.
A) Get your hands dirty and write what you want. Once.
B) Chant along with the mob who doesn't even understand what they're asking for.
C) Wait several years for some super complicated solution to be designed by committee.
I wouldn't even want direct access to the DOM if we had it today. The DOM as an API is atrocious.
Instead, I want a set of nice functions that do things like put a graphical chart on the page - all in one call. Or one call to pass a bunch of 3D triangles or splats to visualize in a WebGL canvas. Or one call to play some audio samples. Or a function to poll for recording audio. And so on...
I choose option A.
wait for a framework that implements option A.
if option A works, why aren't there any frameworks yet that implement it?
maybe all the framework devs are waiting for C?
but why?
you could be right about A but at present the majority view seems to be that C is the right option. which is what pushes me into going with B because i have no interest in developing my own framework.
if a framework appears that implements option A i'll gladly consider it. (just as long as it isn't tightly coupled with a backend)
A random drawn rectangle is not a UI, it’s not accessible, not inspectable, not part of the de facto OS native toolkit.
If all we wanted is a random cross-platform canvas element to draw onto from a vm, it could be solved in a weekend. There are million examples of that.
Of course it is. All screen based user interfaces are blinking lights.
> it’s not accessible
It's best to read the documentation first. It's a low effort thing to do:
https://docs.flutter.dev/ui/accessibility-and-internationali...
https://medium.com/flutter/accessibility-in-flutter-on-the-w...
> The Flutter team would like to eventually turn the semantics on by default in Flutter Web. However, at the moment, this would lead to noticeable performance costs in a significant number of cases, and requires some optimization before the default can be changed
But good. That's a kind of progress.
did you mean there are snappy web applications running in WASM? if you have any examples, i'd be curious to learn more.
That doesn't have to be true.
Eventually WASM will get direct access to the full browser API, without going through JavaScript.
The browser exposes a browser API to the JavaScript VM it hosts, so things like the DOM are available.
Those things aren't available in other JavaScript VMs, like Node. (There's no DOM to interact with.)
And they're not yet available in the WASM VM in the browser, either.
The reason is that the WASM APIs/ABIs have not stabilised. It takes time to make right, but there is progress.
> Eventually WASM will get direct access to the full browser API, without going through JavaScript.
well, that is what i am waiting for. my point is that it's not the case yet, while the gp seemed to suggest that it's not needed because access through the host is available
> [...] go through the javascript host, which is slow
And now you admit:
> this is going beyond my level of experience [...]
> how much slower, i don't know
I guess people just repeat what they hear without questioning or understanding it, and then it becomes dogma.
> did you mean there are snappy webapplications running in WASM?
No. I meant that all existing web apps go through the "javascript host", using JavaScript. So if any of them are fast enough, and some certainly are, the problem isn't the "javascript host".
i am only talking about webapps running inside WASM. are there any WASM based webapps that are as fast as pure js webapps?
Lol, your question asked me what I meant. I told you what I meant.
> are there any WASM based webapps that are as fast as pure js webapps?
You can browse links from Google for examples and benchmarks. Maybe one of these will scratch your itch, but I won't vouch for any of them:
https://madewithwebassembly.com/
But really, JavaScript and WebAssembly are both very fast. I don't think speed is the reason to choose one or the other.
For me, I like WebAssembly because it lets me program in languages other than JavaScript. JavaScript makes me want to scratch my eyes out.
I'd rather we just move to native cross platform applications and stop using a document browser to build interactive applications.
What's more likely is that all of this will probably be eclipsed by LLM and virtual assistants - which could be controlled by native apps with a dynamically generated GUI or voice.
I think APIs exposing data and executing functions will fundamentally change what we think the web is.
Go back to garbage "cross platform" UI toolkits and having to help users manage software dependencies on their machine? No thanks.
Here you go. Do both native and wasm:
Flutter example:
It’s unfortunate there isn’t a more native “app like” UI toolkit. Especially on mobile, web apps generally are bad and a lot of the reason is trying to shoehorn an app experience onto the dom.
If you're using Safari it's true that Safari's WebAssembly implementation is behind the other browsers. But that's a Safari problem more than a WebAssembly problem.
Cool stuff, let's kill web for good.
Also... do you really think it's wise to rewrite V8 to target WebAssembly?
Here's a demo of Dart and Flutter compiled to WebAssembly:
https://flutterweb-wasm.web.app/
WebAssembly enables you to use any language. And when you can use any language, why would you use JavaScript?
Google has started migrating parts of Google Sheets to WebAssembly. They're compiling Java to WebAssembly and seeing a 100% performance increase:
https://web.dev/case-studies/google-sheets-wasmgc
Amazon has been migrating its Prime Video app from JavaScript to WebAssembly. They're compiling Rust to WebAssembly and they've seen increased performance and lower memory usage:
https://www.amazon.science/blog/how-prime-video-updates-its-...
But you can use vanilla js, you say? Yes, it's true, but I find js very terrible and I want to write things in common lisp or, if need be, Go. Neither of these requires any bullshit with tooling or anything hard. You can learn enough Go in a few hours to be productive (like C), and llms are super at it (unlike js/ts, where they produce things that had breaking updates 40 times since the llm knowledge cutoff, for no reason at all). With the vugu framework you don't need to touch js either. Common lisp is also not hard to learn, a bit harder, but with even better tooling, and you don't need to know all of it to write nice stuff; there is the clog framework, which is basically all you need to get going, as it is an ide and web dev environment, and you get away with writing almost no js.
> Neither of these require any bullshit with tooling or anything hard.
JS requires any text editor and a browser.
TS requires Node and npm i -D typescript.
> You can learn enough Go in a few hours to be productive (like C)
You want to say that C and Go are easier to learn than JS? JS is literally primitive types, objects and functions at its core.
So you need to install node and npm and then typescript. Go is one binary to download and throw in a dir.
I don't find js hard to learn (I have been programming in it since it came out; I've been in the CMS business since the early 90s), I just find it ugly and annoying to work with. But yeah, I guess the ecosystem definitely doesn't help, as it's hard to see the proliferation of terrible software/frameworks and habits as totally separate from the language; apparently there is something in it that attracts these terrible practices.
It has an amazing coroutines library and it started with a nice set of features, but it failed to evolve. Sealed types are a joke compared to union types in TS. No inline types, so you're forced to create stupid data classes everywhere even if one is used only once. A constant fight between wannabe functional programmers, who try to replicate Rust's Result monad without official language support, and the exceptions crowd. Static delegation. Still no pattern matching, when even freaking Java has it nowadays. Hilarious. Constant focus on KMM, even though the language has stagnated for a while.
Rust is just a pain to develop in. It's slow to compile and you constantly have to please the borrow checker. I'm not sure if you're joking, but you can't seriously think that Rust is a better language for prototyping than JS/TS.
> designed much later and not so hastily, so they have less warts than JavaScript as a result.
That’s irrelevant. Modern JS has evolved over the years and is a joy to use now.
On a serious note I don’t see the point in turning browsers into an OS on top of the OS. I know it’s some kind of Google wet dream so they can suck up even more data than they already do but still. If you want to ship applications, just do that. The sandboxing should be done at the OS level where it belongs.
- at work they expect me to write code with the latest features
- my colleagues write code with the latest features and I have to review/extend/build upon using the same style
- the community, books, copilots, LLMs, libraries, tooling are "forcing" all that new stuff upon me
- etc.
Transparent polyfills (or anything that really needs to be loaded "first") also don't work, since async means you can't specify the order to load things. This means that every single module you write has to explicitly mention which exact polyfill it's going to end up using (hope you don't accidentally skip one or specify a conflicting polyfill) ... or you just abandon modules and load your polyfills with a normal script (which means that now your source is a bastard mixture).
Lazy imports are technically possible via dynamic imports, but unnecessarily annoying and break in all sorts of places. Granted, the standardization of the "leaky browser abstraction" makes this pretty awful regardless.
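For reference, the dynamic-import workaround looks something like this (`./charts.js` is a made-up module path):

  // Lazy loading via dynamic import(): it works, but every call site becomes
  // async, and there is still no way to say "load this polyfill before
  // everything else" for the static imports in the rest of the graph.
  async function onChartButtonClick() {
    const { renderChart } = await import("./charts.js");  // hypothetical module
    renderChart(document.querySelector("#chart"));
  }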
You can choose to just work with the core and maybe a minimal sugar library. Which will probably be faster and don't include "all the language features that were forced upon you".
Yet every language has either that or BigDecimal. Even if Google's frontend devs haven't found a use, there also exist JS devs outside of Google who certainly have found uses (though possibly more of them on the backend).
Similarly, not every developer has a compilation step in their JS work. And there are places where you can't have one, e.g. in the browser console. Develop the language instead of tons of incompatible tools.
in pike, bigint and int are integrated in such a way that the transition is automatic. there is no overflow but as long as the values fit in an int, that is used internally to keep the code fast.
This is where you say something about "exact" vs "inexact" as though that will hand wave it away.
The numeric tower in Scheme describes general number types, with "above" in the tower graphic (in the Wikipedia article) meaning "subtype of". Double-precision floats and arbitrary-precision integers are representations of numbers. Both would also be Real numbers.
I'm not familiar with this debate, but how is that a hand wave? The article describes a reasonable-sounding way to extend the tower with a second dimension of precision. Following those rules, you would never just convert between bigint and float, but an expression involving both would output a float.
float(0.5) +
bigint(9007199254740993)
== float(9007199254740992)
I wouldn't parade it around as a triumph over the problem, and it's arguably better to require people to be explicit about whether converting the float to bigint, or the bigint to float, is what you wanted.

Basically, ULP-level inaccuracy is a problem inherent to having float at all, even without bignum interactions. They would be a menace even if you had a pure tower from 32 bit int to double to complex to more.
The real point is that you can get some non-intuitive answers from letting that numeric tower make conversion decisions for you. It's just a rule, and it's not an amazing rule.
Then it's even less of issue. Yes if you convert to a float you get rounding, what did you expect when you introduced a float?
It's somewhat unintuitive but that's the nature of floating point.
> The real point is that you can get some non-intuitive answers from letting that numeric tower make conversion decisions for you. It's just a rule, and it's not an amazing rule.
But again, you can have the same kind of issue without bignums. It's not a tower problem it's a float problem.
And that says nothing about whether implicit conversions are a good idea or not.
But more importantly, I'm saying that the problematic rounding can occur even if your tower does not have both bigint and float. It can happen even if every layer can completely represent every value of the layer above it. Do you have any complaints that are unique to a tower that has both bigint and float, and don't apply to towers that only have float?
To elaborate on that, an implicit cast directly from a single bigint to a single float won't happen with the rules in the wikipedia article. You'd have to do something like bigint+float, which can have horrible rounding errors, but those horrible rounding errors are also present in float+float.
And you can even have these problems without a tower. So I don't see how the bigint and float scenario is an argument against towers.
N < Z < Q < R < C < H
ℕ ⊂ ℤ ⊂ ℚ ⊂ ℝ ⊂ ℂ ⊂ ℍ
That's a nice statement about idealized sets of numeric values. So `integer?` implies `rational?` implies `real?` implies `complex?` implies `number?` in Scheme predicates and type conversions.

But no programming language can have "Reals" (they aren't computable), so floats are a common/useful approximation. And in actuality `bigint?` doesn't imply `floating?`, and `floating?` doesn't imply `bigint?`. Neither is a strict subset of the other, and because of this you can easily find examples where implicit conversion does something "questionable". You've made it about rounding errors, but I'm trying to criticize something about pretending they are subtypes/subsets. Claiming it's a tower and hand waving about exact/inexact doesn't make it a tower, and so I think implicit conversion for these is a poor choice.
You can have little subset relations for implicit conversions:
float32? implies float64?
float64? implies complex64?
float32? implies complex32?
complex32? implies complex64?
fixint? implies bigint?
fixint? implies rational?
bigint? implies rational?
Since this is supposedly in a discussion about JavaScript, maybe even:

fixint? implies float64?

All of those relate true subsets for the collection of values they can represent, but it's not much of a tower. It's more a collection of DAGs.

> I'm trying to criticize something about pretending they are subtypes/subsets. Claiming it's a tower and hand waving about exact/inexact doesn't make it a tower
I thought we established right away that it's not a single tower. The description in the wikipedia page is two towers with links between them. (Or at least it's two if you don't waste effort on things like having both float64 and complex32.)
But I don't see any hand waving. The relationships and conversions are very clear. That's why I interpreted your complaint as being more about the specific operation. So with your correction, I need you to explain where you see hand-waving.
If you just don't like the name "Tower" for an implementation that has both bignums and floats then okay I agree I guess?
Where did we say that? The first picture on the Wikipedia page shows the tower as a linear stack of items from set theory. The Scheme predicates are named similarly. This is the appealing myth.
> The description in the wikipedia page is two towers with links between them.
Not on the page I'm seeing. Are you reading the English page? At the bottom, I see a tree of abstract types (sets).
This shows that you can traverse (Integer to Rational to Real) and (Float to Real) to find the common abstract type Real. But there isn't actually a Real type you can do operations with. You've got concrete BigInt and Float64, and even if Real is implemented as a C-style tagged-union of the two types, you still need to pick one or the other for doing operations like addition. Then the Scheme standard says stuff like, "try to be exact when you can, but inexact is ok sometimes". So all the set theory justification is out the window, and it's really just an ad hoc rule.
It's just not as elegant as it seems, and it gives an unsound justification to making implicit conversions.
> If you just don't like the name "Tower" then okay I agree I guess?
Please don't do that. I've tried to clarify details in response to your questions, but if you're just going to dismiss it with some snarky crap like that then you can go fuck yourself.
Reply if you want, but I'm guessing we're done here.
In the section where the wikipedia page talks about exact and inexact, the specific thing you were calling out, it says "Another common variation is to support both exact and inexact versions of the tower or parts of it; R7RS Scheme recommends but does not strictly require this of implementations. In this case, similar semantics are used to determine the permissibility of implicit coercion: inexactness is a contagious property of numbers,[6] and any numerical operation involving both exact and inexact values must yield inexact return values of at least the same precision as the most precise inexact number appearing in the expression, unless the precision is practically infinite (e.g. containing a detectable repetend), or unless it can be proven that the precision of the result of the operation is independent of the inexactness of any of its operands (for example, a series of multiplications where at least one multiplicand is 0).".
I want to especially highlight the phrase "exact and inexact versions of the tower or parts of it" Which I then reacted to by saying "The article describes a reasonable-sounding way to extend the tower with a second dimension of precision." Once you have two dimensions it's no longer a single tower. I thought that was the common ground that we were talking on, that if you use that method it's not a true tower anymore.
> Please don't do that. I've tried to clarify details in response to your questions, but if you're just going to dismiss it with some snarky crap like that then you can go fuck yourself.
That wasn't snark. I am really trying to understand your argument, because it looks like we've been talking about different things the entire time.
I had thought we established from the very start that the description on the wiki page wasn't actually a single tower. If you are still trying to convince me it's a more complicated graph, then I agree with you, and I don't understand how we got so far without that being clear. Sorry for sounding reductionist about it.
So please, honest question for clarification, do you object to the graph of number types described by that paragraph, do you object to using the word "tower" to talk about it, or do you object to both? Please don't get mad at me for asking, or think I'm trying to dismiss you.
And if someone builds a pure tower that goes int32, double, complex, quaternion, do you think that's inherently self-defeating because it can't live up to the promises of a tower? It doesn't have the issue of floats versus bignums; it's strict subsets all the way down.
> do you object to using the word "tower" to talk about it
No, I don't really care about the terminology, except when it helps to communicate.
> do you object to the graph of number types described by that paragraph
I think the problem boils down to using a flawed analogy to arrive at a conclusion and then pretending the conclusion is sound and elegant. There are really two things going on:
First, we've got a tower, or tree, or DAG of "abstract" types. These are mathematical constructs or Platonic ideals. So you can build a tower that says "All Integers are Rationals" and "All Rationals are Reals". And it's supported by Set Theory! So you conclude that you can use an Integer anywhere that a Rational or Real is allowed. Then, knowing that we're going to apply this to a programming language, you add "All Floats are Reals". Fine, we've got abstract Floats, and it looks lovely.
Second we've got actual "concrete" data types. These are things like Float64, Int32, or BigInt. Importantly, you can't have an implementation of Real anything. In general, Real numbers can't be processed on a Turing machine. You can have a tagged union of Computable things, but that's not really the same as "Real" in bold quotes.
Ok, so the mistake comes when you try to combine those first and second sets of things. We say concrete BigInt is like the abstract Integers, and concrete Float64 is like the abstract Floats. So far so good. Then we look at the abstract tower, we decide that Integers and Floats need to become Real, so we say BigInt and Float64 need to use Reals to get a common type. But there is no common type. We said the concrete types are analogous to the abstract types and made an unsound conclusion.
Finally, we write the compiler, and reality hits us. So we go back to the standard and add some bits about "Some things really should be Exact. Conforming implementations should try to avoid Inexact when they can." It's not a separate tower - it's a bandaid for flawed logic.
Anyways, this is all a bit too philosophical. I'm not actually passionate about it, but our discussion kept going, and you kept asking, so I kept trying to explain. Most people like implicit conversions in their programming languages, and so you've got to make up some rules. I just don't like pretending the rules are not ad hoc, and it's nothing a smug lisp weenie should really be smug about.
Assuming the obvious implementation of complex and quaternion built on two or four doubles, it's fine. Each type represents a set that is a proper subset of the next type in the list.
Annoyingly, it'll all go to crap if you have int64 though.
btw, i just checked, typeof() no longer shows the difference between int and bigint. it did in the past if i remember correctly
Saying, "That's nice." is a cliche condescension. You're free to disagree, but I think his intent was clear.
However, that works for int and bigint, but Number (double precision) can represent numbers that BigInt can not, and BigInt can represent numbers which Number can not. There isn't a graceful way to automatically promote/degrade one to the other in all cases, and a silent conversion will do the wrong thing in many cases.
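A quick illustration of the mismatch in today's JS (these are just facts of the language, not a proposal):

  console.log(typeof 1n);                   // "bigint"
  console.log(Number(9007199254740993n));   // 9007199254740992 -- silent precision loss
  console.log(BigInt(9007199254740993));    // 9007199254740992n -- the Number literal had already rounded
  // BigInt(0.5) throws a RangeError: fractions have no BigInt representation.
  // 1n + 0.5 throws a TypeError: implicit mixing of BigInt and Number is disallowed.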
If performance and complexity really are the primary concerns then the language must stop pandering to children. We already know what high performance looks like. I wrote about it here: https://github.com/prettydiff/wisdom/blob/master/performance...
If the goal really is higher performance and lower complexity the most desirable solution is to create a new language with forced simplicity directly in the image of JavaScript, where simple means less and not easy, and transition to that new language slowly over time. Yes, I understand Google went down this road in the past with Dart, but Dart solved for all the wrong problems and thus was dead on arrival. Instead the idea is to take something that works well and shave off all the bullshit reducing it down to the smallest possible thing.
Forced simplicity means absolutely ignoring all vanity/stylistic concerns and instead focusing only on fewer ways of doing things. As an example, consider a language that requires strong typing like TypeScript and thereby eliminates type coercion. Another example is operators that have a single purpose (not overloaded), are a single character, and have no redundant alternatives.
Will there be a lot of crying about vanity bullshit... yes, absolutely. Just ignore it, because you cannot reasonably expect to drive above 120mph on a child's tricycle. If people wish to offer their own stylistic considerations, they should include performance metrics and explanations of how their suggestions reduce the quantity of operations without unnecessary abstraction.
Language Evolution: Problems, and What Can We Do About It? - https://news.ycombinator.com/item?id=41795190 - Oct 2024 (1 comment)
Proposal of JavaScript becoming a compiled language: JS0 and JSSugar - https://news.ycombinator.com/item?id=41764825 - Oct 2024 (2 comments)
And btw, the TypeScript tooling scene is far from ready to be standardized. TypeScript is basically a Microsoft thing, and we don't see a single non-official TypeScript tool that can do type-checking. And there's no plan to port the official tools to a faster language like Rust. And tsc is not designed for doing traditional compiler optimizations. The TypeScript team made it clear that the goal of tsc is only to produce idiomatic JavaScript.
I agree that the tooling/UI around this could be better, but by focusing on this approach, things like Typescript get better as well.
If the browser starts treating JS as assembly, then there would probably be a greater onus for features like this.
Are those being supplied with every website you use?
For example: TypeScript's sourceMap [1], Elm's time-travelling debugger [2], Vue.js DevTools [3], just to name a few I've tried. Especially well-typed languages tend to behave well at run-time once they pass type-checking. Or rather, I have not made enough front-end code to discover transpiler bugs.
[1]: https://www.typescriptlang.org/tsconfig/#sourceMap
[2]: https://elm-lang.org/news/time-travel-made-easy (2014)
[3]: https://devtools.vuejs.org/
As easy, certainly. But how are they easier?
Elm's debugger lets you step forwards and backwards in the application's state.
TypeScript's type system lets you catch bugs before you run the code.
Vue.js's DevTools extend the browser's with a component-based overview, so you can interactively see what's going on at a high level of abstraction. (I'm sure something similar exists for most frameworks similar to Vue.js, and possibly even frameworks made in vanilla ES5, I'm just picking one I've tried.)
With vanilla ES5 you get interactive debugging.
So if I agree with GP then I just haven't found the right tooling yet?