Posted by em-bee 1 day ago
I agree that the tooling/UI around this could be better, but by focusing on this approach, things like TypeScript get better as well.
If the browser starts treating JS as assembly, then there would probably be a greater onus on tooling to provide features like this.
Are those being supplied with every website you use?
For example: TypeScript's sourceMap option [1], Elm's time-travelling debugger [2], and Vue.js DevTools [3], just to name a few I've tried. Well-typed languages in particular tend to behave well at run time once they pass type-checking. Or rather, I haven't written enough front-end code to run into transpiler bugs.
[1]: https://www.typescriptlang.org/tsconfig/#sourceMap
[2]: https://elm-lang.org/news/time-travel-made-easy (2014)
[3]: https://devtools.vuejs.org/
As easy, certainly. But how are they easier?
Elm's debugger lets you step forwards and backwards through the application's state.
TypeScript's type system lets you catch bugs before you run the code.
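To make that concrete, here's a toy example (the names are made up); tsc rejects the typo'd call before anything ever runs, whereas vanilla ES5 would only blow up (or silently misbehave) at run time:

    interface User {
      name: string;
    }

    function greet(user: User): string {
      return "Hello, " + user.name.toUpperCase();
    }

    greet({ name: "Ada" });    // fine
    // greet({ nmae: "Ada" }); // compile error: 'nmae' does not exist in type 'User'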
Vue.js's DevTools extend the browser's devtools with a component-based overview, so you can interactively see what's going on at a high level of abstraction. (I'm sure something similar exists for most frameworks similar to Vue.js, and possibly even for frameworks written in vanilla ES5; I'm just picking one I've tried.)
With vanilla ES5 you get interactive debugging.
So if I agree with GP then I just haven't found the right tooling yet?
Seriously, frontend is already the most fragmented and fastest-changing area of web development there is. Don’t split the language.
Who cares? If backwards compatibility is maintained then this fails to have any impact on my experience as a developer. It sounds like the VM maintainers are busy making their own lives hell. Not my problem.
I do. Maybe if someone programs in one language it's okay for them to keep up with language changes, but if you have to constantly juggle multiple languages it becomes a real chore to stay up to date with every one of them.
Thankfully, both maintain reasonable backwards compatibility where security is not otherwise implicated.
You still need to be aware of them when you encounter unfamiliar syntax.
I think if it was that simple, it would be done that way already (maybe it is, for some features). Two big arguments for doing the "desugaring" offline are (1) the speed and (2) the security of the browser. Those two things also conflict somewhat if addressed on the client, since faster but more complex compiler code increases the surface area for potential exploits.
But if you do this compile step offline, you don't need to worry about compromising the performance or security of the browser.
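As a rough sketch of what that offline step looks like in practice (the emitted code below is only approximately what a transpiler such as Babel or tsc targeting ES5 would produce):

    // What the developer writes, using modern syntax:
    //   const port = config?.server?.port ?? 8080;

    // What actually ships to the browser, using only the older core language:
    var config: { server?: { port?: number } } | undefined = { server: { port: 3000 } };

    var _server = config == null ? undefined : config.server;
    var _port = _server == null ? undefined : _server.port;
    var port = _port != null ? _port : 8080;

    console.log(port); // 3000

The browser never needs to know that optional chaining or nullish coalescing exist; it only ever parses the desugared output.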
If you read the original slides from the proposers, they're presenting a framing where there is an inherent tension between "serving the user" and "helping the developer". They argue that there is too much of the latter, and that a formalized splitting should push more to the former.
From an end-user perspective, it definitely makes more sense that the JS doesn't have to be transformed locally before it can be interpreted. I think your suggestion is not compatible with the motivations of the proposal.
The problem is distributing the runtime(s). By having developers transpile to a small core, anyone can freely invent new language features without waiting for the rest of the internet to download support for them.
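For instance, the pipeline operator is a real TC39 proposal that no engine ships yet; here's a hypothetical sketch of how a transpiler plugin could make it usable today by emitting only the existing core language (the exact output is illustrative):

    // Hypothetical source, once a plugin understands the proposed syntax:
    //   const result = value |> double(%) |> square(%);

    // What the plugin could emit right now:
    const double = (n: number) => n * 2;
    const square = (n: number) => n * n;

    const value = 3;
    const result = square(double(value));
    console.log(result); // 36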
JS0 should be a subset of current JS
JS1 should be current JS
JSSugar should be current JS plus future features
BigInt failing to take off has, I think, more to do with the ergonomics around it: BigInts are a bit unwieldy, and they can’t be used with the built-in Math object functions.
They also have zero JSON support out of the box, which is a huge miss.
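Both limitations are easy to hit. A quick sketch (assuming an ES2020+ target for the BigInt literal; error messages are V8's wording):

    const big = 10n ** 20n;

    // The built-in Math functions only accept Numbers:
    // Math.sqrt(big)  -> TypeError: Cannot convert a BigInt value to a number
    Math.sqrt(Number(big)); // works, but the conversion can silently lose
                            // precision once values exceed Number.MAX_SAFE_INTEGER

    // And JSON has no BigInt support out of the box:
    // JSON.stringify({ big })  -> TypeError: Do not know how to serialize a BigInt
    JSON.stringify({ big: big.toString() }); // the usual workaround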
Honestly, it should have been roadmapped to replace the built-in Number type.
But the idea is that it should have been proposed with a roadmap of what it would look like to have it eventually supplant Number
If anything, I expect those existing VMs to slowly be replaced by WebAssembly due to how crucial and complicated that very specific sandbox requirement is - and how useful that is once you have it working reliably.
Personally I never want to run untrusted code on any of my computers outside of a robust sandbox. I look forward to a future where every application I might install runs in a sandbox that I can trust.
The more important thing to consider, however, is the fact that CLR, JVM, etc. provide internal memory safety whereas Wasm runtimes don't.
E.g. a C program that goes sufficiently far out of bounds on an array will usually segfault when run natively, but that runtime error does not necessarily occur on a Wasm target. That is to say, the program in the sandbox can have totally strange runtime behavior (for example, silently overwriting other data in the module's linear memory) that is still defined behavior according to Wasm, even though the program has undefined behavior in the source language. In the case of JVM languages, this can't really happen.
> As told in JavaScript: The First Twenty Years, Brendan Eich joined Netscape in April 1995.
> [..]
> However, Eich didn’t think he’d have to write a new language from scratch. There were existing options available — such as the research language, Scheme, or a Unix-based language like Perl or Python. So when he joined, Eich “was expecting to implement Scheme in the browser.” But the increasingly fractious politics of the software companies of the day (it was, basically, everyone against Microsoft) soon saw the project take a more creative turn.
> On 23 May 1995, Sun Microsystems launched a new programming language into the world: Java. As part of the launch, Netscape announced that it would license Java for use in the browser. This was all well and good, but Java didn’t really fit the bill for the web. Java is a general-purpose programming language that promised Write Once, Run Anywhere (WORA) functionality, but it was too complicated for web designers and other non-programmers to use. So Netscape decided it needed a scripting language, which was a trendy term at the time for a smaller, easier to learn programming language.
There's a whole lot more interesting stuff but I think that part directly answers most of what you're wondering.
Look at the Java bytecode, and you'll see it features such things as a goto with an arbitrary offset: https://en.m.wikipedia.org/wiki/List_of_Java_bytecode_instru...
They had to build a verifier that attempts to ensure the bytecode isn't doing anything bad. That proved to be fairly difficult, and comes at a considerable cost.
The JVM and CLR are poor compilation targets for C and C++, because those languages weren't designed to target those runtimes and those runtimes weren't designed to run those languages. (C++/CLI isn't C++.) It's possible to get something working, and a few people have tried, but you run into a lot of impedance mismatches and compatibility issues. I think you would see people run into a lot more problems trying to get their code running on the JVM or CLR than they in fact run into trying to get it running on WebAssembly. (Though I think the CLR is less bad about this than the JVM.)
As for the idea of using LLVM bitcode as an interchange format, we don't have to guess how that would have gone, because it was actually tried! Google implemented this in Chrome and called it PNaCl, and some sites and extensions relied on it for a while. They ultimately withdrew it in favor of WebAssembly. I don't understand all the reasons why it failed, but I think part of the problem is that it ran into a bunch of "the spec is whatever LLVM happens to do" type issues that were real problems for would-be toolchain authors and made the other browser vendors (including Apple, LLVM's de facto primary corporate sponsor) reluctant to support it. WebAssembly has a relatively short and simple standard that you can actually read; writing a WebAssembly interpreter is an undergraduate exercise, though of course writing a highly performant one is much more work.
Also, as far as I can tell, LLVM hasn't been optimized much for the use case of runtime code generation, where the speed of the compiler is about as important as that of the generated code. The biggest dynamic language I know that uses LLVM is Julia, which is a decently big deal, but the overwhelming majority of LLVM usage is for ahead-of-time compilation of languages like C, C++, Swift, and Rust.
On a bigger-picture note, I'm not sure I at all understand why adopting an existing bytecode language would have made things easier. Yes, it would have been much easier to reuse existing Java code if the JVM had been adopted, or to reuse existing C# code if the CLR had been adopted, but those options are mutually exclusive; the goal was something that would work at least okay for all the languages. Python doesn't have a stable bytecode format, and Rust and Haskell compile to LLVM bitcode (which LLVM has no problem lowering to WebAssembly since WebAssembly was designed to make that straightforward), so I don't see how those languages are in any way disadvantaged by the choice of WebAssembly as the target bytecode language instead of some alternative.
Or are your concerns about I/O? That's a bigger can of worms, and you'd need to explain how you imagine this would work, but the short version is that reusing the interfaces that existing OSes provided would not have worked well, because the browser has a different (and in many ways better) security model.
I'm not a huge fan of WASM, but it's easy to see that the authors clearly wouldn't want to leave control in the hands of Microsoft or Oracle (and as a result all of us are hostages to Google instead, because of the evil that is Chromium).
https://ecma-international.org/publications-and-standards/st...
And it was used by some browsers; there was just no consensus between the different vendors, due to politics. The problem largely solved itself by... only one vendor remaining: Chromium.
I like that JavaScript now has modules/imports, destructuring, Proxies, async/await, etc. These were all new features at one point. But yeah, why did Symbol.species get in? Seems like it’s to enable some odd subclassing pattern? I’m an anti-OOP zealot, so my hot take would be that maybe OOP subclassing is unnecessarily complex already, so stuff like that shouldn’t make it in. We got the OOP syntactic sugar, which is enough. Stop there.
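For reference, the subclassing pattern it enables, sketched briefly:

    class MyArray extends Array<number> {}

    const xs = new MyArray();
    xs.push(1, 2, 3);

    // Array.prototype.map consults the constructor's [Symbol.species] to decide
    // what kind of array to build; by default that's the subclass itself:
    const doubled = xs.map(x => x * 2);
    console.log(doubled instanceof MyArray); // true

    // A subclass can override `static get [Symbol.species]()` to opt back into
    // getting plain Arrays from map/filter/slice instead.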
How much of the extra complexity comes from stuff like that, which is rarely used? Maybe we just need to be a lot more conservative about what makes it in, but stopping changes and forcing everything into more tooling complexity is not the direction I’d like to go in. We need to reduce tooling, not increase it.