Posted by ffin 7 hours ago

Servo is now available on crates.io (servo.org)
338 points | 115 comments
nicoburns 6 hours ago|
Some notes:

- The docs.rs docs are still building, but the docs from the recent RC are available [0]

- The Slint project has an example of embedding Servo into Slint [1], which is a good example of how to use the embedding API and should be relatively easy to adapt to any other GUI framework that renders using wgpu.

- Stylo [2] and WebRender [3] have both also been published to crates.io, and can be useful standalone (Stylo has actually been getting monthly releases for about a year, but we never really publicised that).

- Ongoing releases on a monthly cadence are planned

[0]: https://docs.rs/servo/0.1.0-rc2/servo

[1]: https://github.com/slint-ui/slint/tree/master/examples/servo

[2]: https://docs.rs/stylo

[3]: https://docs.rs/webrender

apitman 5 hours ago|
Tangent, but Slint is a really cool project. Not being able to dynamically insert widgets from code was the only thing that turned me off of it for my use case.
simonw 4 hours ago||
Here's a vibe-coded "servo-shot" CLI tool which uses this crate to render an image of a web page: https://github.com/simonw/research/tree/main/servo-crate-exp...

  git clone https://github.com/simonw/research
  cd research/servo-crate-exploration/servo-shot
  cargo build
  ./target/debug/servo-shot https://news.ycombinator.com/
Here's the image it generated: https://gist.github.com/simonw/c2cb4fcb15b0837bbc4540c3d398c...
scrame 4 hours ago||
That's pretty cool. I'm guessing it would need some tweaking to handle things like cookies, or does it just need a pointer to the cookie jar? I'm not too familiar with Servo.
simonw 1 hour ago||
It's a VERY simple initial demo, I expect things like cookies would require quite a lot more work.
echelon 4 hours ago||
This is super useful! I have immediate use for this.

Do you know if Servo is 100% Rust with no external system dependencies? (i.e., can it get away with rustls only?)

Can this do Javascript? (Edit: Rendering SPAs / Javascript-only UX would be useful.)

Edit 2: Can it do WebGL? Same rationale for ThreeJS-style apps and 3D renders. (This in particular is right up my use case's alley.)

simonw 4 hours ago|||
It depends on stuff like SpiderMonkey so not pure Rust.

It should be able to render JavaScript, but I've seen it hit bugs on simple pages, no doubt because my vibe-coded thing is crap, not because Servo itself can't handle them.

minimaxir 4 hours ago|||
I have been building/vibecoding a similar tool and unfortunately came to the conclusion that, in practice, there are just too many features dependent on the full Chrome stack, so it's more pragmatic to use a real Chromium installation despite the file size. Performance/image-generation speed is still fine, though.

In Rust, the chromiumoxide crate is a performant way to interface with it for screenshots: https://crates.io/crates/chromiumoxide

rafaelmn 4 hours ago||
This should be the real benchmark of AI coding skills: how fast we get the safe/modern infrastructure and tooling that everyone agrees we need but nobody can fund.

If Anthropic wants marketing for Mythos without publishing it, show us a Servo contribution log or something like that. It aligns nicely with their fundamental infrastructure-safety goals.

I'd trust that way more than x% increase on y bench.

Hire a core contributor on Servo or Rust, give them unlimited model access, and let's see how far we get with each release.

mort96 4 hours ago||
We do not need vibe-coded critical infrastructure.
falcor84 4 hours ago|||
As I see it, the focus should not be about the coding, but about the testing, and particularly the security evaluation. Particularly for critical infrastructure, I would want us to have a testing approach that is so reliable that it wouldn't matter who/what wrote the code.
bawolff 4 hours ago|||
I don't think that will ever be possible.

At some point security becomes: the program does the thing the human asked it to do, but the human didn't realize they didn't actually want that.

No amount of testing can fix logic bugs due to bad specification.

skrtskrt 2 hours ago|||
AI as advanced fuzz-testing is ridiculously helpful though - hardly any bug you can find in this sort of advanced system is a specification logic bug. It's low-level security stuff: finding ways to DDoS a local process, working around OS-level security restrictions, etc.
bawolff 36 minutes ago|||
I'm kind of doubtful that AI is all that great at fuzz testing. Putting that aside though, we are talking about web browsers here. Security issues from bad specification, or from misunderstanding the specification, are relatively common.
thephyber 58 minutes ago|||
Re-read the thread you are replying to.

Each of the last 4 comments in this thread (including yours) is conflating what it means by AI.

falcor84 3 hours ago|||
Well, yes, agreed - that is the essential domain complexity.

But my argument is that we can work to minimize the time we spend on verifying the code-level accidental complexity.

bawolff 2 hours ago||
Sure, but that is what we've been doing since the early 2000s (e.g. aslr, read only stacks, static analysis, etc).

And we've had some successes, but I wouldn't expect any game-changing breakthroughs any time soon.

mort96 4 hours ago|||
I disagree. Thorough testing provides some level of confidence that the code is correct, but there's immense value in having infrastructure which some people understand because they wrote it. No amount of process around your vibe slop can provide that.
px43 4 hours ago|||
That's just status quo, which isn't really holding up in the modern era IMO.

I'm sure we'll have vibed infrastructure and slow infrastructure, and one of them will burn down more frequently. Only time will tell who survives the onslaught and who gets dropped, but I personally won't be making any bets on slow infrastructure.

falcor84 3 hours ago|||
I somewhat agree, but even then would argue that the proper level at which this understanding should reside is at the architecture and data flow invariants levels, rather than the code itself. And these can actually be enforced quite well as tests against human-authored diagrammatical specs.
t43562 3 hours ago|||
If you don't fully understand the code, how do you know it implements your architecture exactly, without doing it in a way that has implications you hadn't thought of?

As a trivial example, I just found a piece of irrelevant crap in some code I generated a couple of weeks ago. It worked in the simple cases, which is why I never spotted it, but it would have had some weird effects in more complicated ones. Perhaps it was my prompting that didn't explain things well enough, but how was I to know I'd failed without reading the code?

mort96 3 hours ago|||
I disagree. The code itself matters too.
rafaelmn 4 hours ago||||
If you're trusting core contributors without AI I don't see why you wouldn't trust them with it.

Hiring a few core devs to work on it should be a rounding error to Anthropic and a huge flex if they are actually able to deliver.

mort96 3 hours ago|||
I trust people to understand the code they write. I don't trust them to understand code they didn't write.
t43562 3 hours ago||||
It's extremely tempting to write stuff and not bother to understand it, similar to the way most of us don't decompile our binaries and look at the assembly when we write C/C++.

So, should I trust an LLM as much as a C compiler?

jddj 1 hour ago|||
What if it impairs judgement?
andai 1 hour ago||||
They're getting really good at proofs and theorems, right?
scrame 4 hours ago||||
Unfortunately we're going to get it whether or not we need it.
teaearlgraycold 2 hours ago|||
Well if the big players want to tell me their models are nearly AGI they need to put up or shut up. I don't want a stochastically downloaded C compiler. I want tech that improves something.
Night_Thastus 1 hour ago|||
The problem with such infrastructure is not the initial development overhead.

It's the maintenance. The long term, slow burn, uninteresting work that must be done continually. Someone needs to be behind it for the long haul or it will never get adopted and used widely.

Right now, at least, LLMs are not great at that. They're great for quickly creating smaller projects. They get less good the older and larger those projects get.

rafaelmn 53 minutes ago||
I mean, the claim is that next-generation models are better and better at executing on larger contexts. I find that GPT 5.4 xhigh is surprisingly good at analysis even on larger codebases.

https://x.com/mitchellh/status/2029348087538565612

Stuff like this, where these models are root-causing nontrivial large-scale bugs, is already there in SOTA.

I would not be surprised if next-generation models can both resolve those more reliably and implement the fixes better. At that point they would be sufficiently good maintainers.

They are suggesting that new models can chain multiple newly discovered vulnerabilities into RCE, privilege escalation, etc. You can't do this without larger-scope planning/understanding, at least not reliably.

nicoburns 4 hours ago|||
> show us servo contrib log or something like that

Servo may not be the best project for this experiment, as it has a strict no-AI-contributions policy.

andai 2 hours ago|||
Replicating Chromium as a benchmark? ;)

Replicating Rust would also be a good one. There are many Rust-adjacent languages that ought to exist and would greatly benefit mankind if they were created.

dabinat 2 hours ago|||
The true solution to this is to fund things that are important, especially when billion-dollar companies are making a fortune from them.
raincole 35 minutes ago|||
Perhaps, you know, not every thing, especially not every thread on HN, has to be about AI?

I read the link twice and no AI or LLM mentioned. I don't know why people are so eager to chime in and try to steer the conversation towards AI.

manx 4 hours ago|||
Agreed. Which other software does society need badly?
beepbooptheory 1 hour ago||
Oh good, I was worried for a sec that people wouldn't be talking about AI in this thread.
phaistra 6 hours ago||
Is there a table of implemented RFCs? Something similar to http://caniuse.com where we can see what HTML/JS/CSS standards and features are implemented? If it exists, I can't seem to find it. The closest thing seems to be the "experimental features" page, but it's not quite detailed enough.
lastontheboat 5 hours ago||
Oh, I forgot that https://arewebrowseryet.com/ exists for this too!
lastontheboat 5 hours ago|||
https://doc.servo.org/apis.html is auto-generated from the WebIDL interfaces that exist in Servo. It's not great, but it's better than nothing.
jszymborski 5 hours ago|||
Closest is perhaps the web platform tests

https://servo.org/wpt/

that_lurker 5 hours ago||
Their blog has monthly posts on changes: https://servo.org/blog/
giovannibonetti 3 hours ago||
For those of you using a browser to generate PDFs, the Rust crate you should look into is Typst [1]. Regardless of your application language, you can use their CLI.

It takes some time to get used to their DSL to write PDFs, but nowadays with AI that shouldn't take too long.

[1] https://crates.io/crates/typst

andai 1 hour ago|
I keep hearing about this one as a LaTeX alternative. I shall have to take a proper look.
givemeethekeys 4 hours ago||
So, since this is the top post on Hacker News, and the website's description is a bit too high level for me, what does Servo let me do? By "web technologies", does it mean "put a web browser inside your desktop app"?
01HNNWZ0MV43FF 48 minutes ago||
Yes, Servo is an embeddable web browser / webview, like the Chromium Embedded Framework (CEF).

Electron = Node.js + CEF

Tauri = Rust + webview

Tauri has an experimental branch that uses Servo to provide a bundled webview. Currently it relies on a system-level webview: Edge on Windows, Safari on macOS, and webkit-gtk on Linux.

givemeethekeys 21 minutes ago||
Thank you!
caminanteblanco 4 hours ago||
It's an alternative browser engine, like Ladybird
swiftcoder 3 hours ago||
Specifically, it's the browser engine that spun out of Mozilla's early efforts toward a Rust-based browser, and it's one of the motivating projects for the entire Rust ecosystem
apitman 5 hours ago||
> As you can see from the version number, this release is not a 1.0 release. In fact, we still haven’t finished discussing what 1.0 means for Servo

Wait, crate versions go up to 1.0?

EDIT: Sorry, while crate stability may be an interesting conversation, this isn't the place for it. But I can't delete this comment. Please downvote it. Mods feel free to delete or demote it.

mort96 5 hours ago||
The fundamental problem with Rust versioning is that 0.3.5 is compatible with 0.3.6, but not with 0.4.0 or 1.0.0; when the major version is 0, the minor takes the role of major and the patch takes the role of minor. So packages iterate through 0.x versions and eventually reach a version that's "stable".

If version 0.7 turned out to hit the right API and not require backward-incompatible changes, releasing a version 1.0 would be as disruptive as a major version change to your users, and would communicate through version semantics that it is a breaking change.

Semver declares that version 0.x is for initial development where there is no stability guarantee at all. This is the right semantics for a versioning system, but Cargo doesn't follow this part of semver. Providing stability guarantees throughout the 0.x cycle inevitably results in projects getting stuck in 0.x.

This is one of my biggest gripes with Cargo. But Rust people seem to universally consider it a non-issue so I don't think it'll ever be fixed.
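The grouping rule described above can be sketched in a few lines of plain Rust (an illustration only; `cargo_compatible` is a made-up name, not Cargo's actual resolver code):

```rust
// Sketch of Cargo's default ("caret") compatibility rule, using only std:
// everything up to and including the leftmost nonzero component must match.
fn cargo_compatible(a: (u64, u64, u64), b: (u64, u64, u64)) -> bool {
    match a {
        (0, 0, patch) => b.0 == 0 && b.1 == 0 && b.2 == patch, // 0.0.z: exact only
        (0, minor, _) => b.0 == 0 && b.1 == minor,             // 0.y.z: minor acts as major
        (major, _, _) => b.0 == major,                         // x.y.z: normal semver
    }
}

fn main() {
    assert!(cargo_compatible((0, 3, 5), (0, 3, 6)));  // 0.3.6 is a compatible upgrade
    assert!(!cargo_compatible((0, 3, 5), (0, 4, 0))); // 0.4.0 is treated as breaking
    assert!(!cargo_compatible((0, 7, 0), (1, 0, 0))); // so is going from 0.7 to 1.0
    assert!(cargo_compatible((1, 2, 0), (1, 9, 3)));  // past 1.0, minor bumps are fine
    println!("ok");
}
```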

kibwen 50 minutes ago|||
> If version 0.7 turned out to hit the right API and not require backward incompatible changes, releasing a version 1.0 would be as disruptive as a major version change

Nope, this is what the semver trick is for: https://github.com/dtolnay/semver-trick

TL;DR: You take the 0.7 library, release it as 1.0, then make a 0.7.1 release that does nothing other than depend on 1.0 and re-export all its items. Tada, a compatible 1.0 release that 0.7 users will get automatically when they upgrade.

Even more interesting is that you can use this to coordinate only partially-breaking changes, e.g. if you have 100 APIs in your library but only make a breaking change to one, you can re-export the 99 unbroken APIs and only end up making breaking changes in practice for users who actually use the one API with breaking changes.
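A minimal sketch of what that bridge release looks like, with a hypothetical crate name `mylib` (see dtolnay's repo above for the real-world details):

```rust
// Cargo.toml of the 0.7.1 bridge release (hypothetical crate `mylib`):
//
//     [package]
//     name = "mylib"
//     version = "0.7.1"
//
//     [dependencies]
//     mylib = "1.0"   # a crate may depend on an incompatible version of itself
//
// src/lib.rs of 0.7.1 then contains nothing but a wildcard re-export, so the
// 0.7 and 1.0 items are literally the same items and interoperate freely:
pub use mylib::*;
```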

sheepscreek 4 hours ago||||
> The fundamental problem with Rust versioning is that 0.3.5 is compatible with 0.3.6, but not 0.4.0 or 1.0.0

That’s a feature of semver, not a bug :)

Long answer: You are right to notice that minor versions within a major release can introduce new APIs and changes, but generally they should not break existing APIs until the next major release.

However, this rule only applies to libraries after they reach 1.0.0. Before 1.0.0, one shouldn’t expect any APIs to be frozen really.

mort96 4 hours ago||
No, it's explicitly not. Semver says:

> Major version zero (0.y.z) is for initial development. Anything MAY change at any time. The public API SHOULD NOT be considered stable.

Cargo is explicitly breaking with Semver by considering 0.3.5 compatible with 0.3.6.

demurgos 4 hours ago||
To go further, semver provides semantics and an ordering, but it says nothing about version-requirement syntax. The caret operator for describing a range of versions is not part of the spec; it was introduced by early semver-aware package managers such as npm or gem. Cargo decided to default to the caret operator, but that's a Cargo convention, not part of semver itself.

In practice, there's no real issue with using the first non-zero component to define the group of API-compatible releases and most package managers agree on the semantics.
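Concretely, in a Cargo.toml these requirement spellings select the same range (hypothetical crate names, just to illustrate the defaults described above):

```toml
[dependencies]
example-a = "0.3.5"             # bare version: defaults to the caret operator
example-b = "^0.3.5"            # explicit caret
example-c = ">=0.3.5, <0.4.0"   # the range the caret expands to
```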

steveklabnik 3 hours ago||
Thank you.

Eventually this will get cleared up. I’m closer than I’ve ever been to actually handling this, but it’s been 9 years already, so what’s another few months…

Starlevel004 4 hours ago||||
The standard library has a whole bunch of tools that let them test and evolve APIs behind a required opt-in, but every single ecosystem package has to get it right on the first try, because Cargo will silently and forcibly update packages, and those evolution tools aren't available to third-party packages.

Such a stupid state of affairs.

moron4hire 4 hours ago|||
Personally, I think the 0 major version is a bad idea. I understand the desire not to have to make guarantees about stability in the early stages of development, and not wanting people to depend on it. But hiding that behind "v0.x" doesn't change the fact that you are releasing versions and people are depending on them.

If you didn't want people to depend on your package (hence the word "dependency") then why release it? If your public interface changes, bump that major version number. What are you afraid of? People taking your project seriously?

jaapz 4 hours ago|||
0.x is not about not wanting people to depend on it; you just don't want them to come and complain when you quickly introduce some breaking changes. The project is still in development: it might be stable enough for use in "real projects(tm)", but it might also still change significantly. It is up to the user to decide whether they are OK with this.

1.x communicates (to me at least) you are pretty happy with the current state of the package and don't see any considerable breaking changes in the future. When 2.x comes around, this is often after 1.x has been in use for a long time and people have raised some pain points that can only be addressed by breaking the API.

OtomotO 4 hours ago|||
But people will complain, so ex falso quodlibet
moron4hire 4 hours ago|||
If you are at the point that other people can use your software, then you should use v1. If you are not ready for v1, then you shouldn't be releasing to other people.

Take this comment: "The project is still in development, it might be stable enough for use in "real projects(tm)", but it might also still significantly change." That describes every project. Every project is always in development. Every project is stable until it isn't. And when it isn't, you bump the major number.

the__alchemist 4 hours ago||
I think we can come up with a reason why bumping the version number each breaking change isn't an elegant solution either: You would end up with version numbers in the hundreds or thousands.
zokier 1 hour ago||
Browser version numbers are in the hundreds and it doesn't seem to be a problem.
the__alchemist 1 hour ago||
Indeed! I think both 0-based versioning and this (maybe?) downside I bring up address the tension between wanting to limit the damage caused by breaking changes and retaining the ability to make them.
mort96 4 hours ago||||
Versioning is communication. I find it useful to communicate, through using version 0.x, "this is not a production ready library and it may change at any time, I provide no stability guarantees". Why might I release it in that state? Because it might still be useful to people, and people who find it useful may become contributors.
moron4hire 4 hours ago||
Any project may change at any time. That's why they bump from v1 to v2. But by not using the full precision of the version number, you're not able to communicate as clearly about releases. A minor release may not be 100% compatible with the previous version, but people still expect some degree of similarity such that migrating is not a difficult task. But going from v0.n to v0.(n+1) uses that field to communicate "hell, anything could happen, YOLO."

Nobody cares that Chrome's major version is 147.

mort96 4 hours ago||
By releasing a library with version 1.0, I communicate: "I consider this project to be in a state where it is reasonable to depend on it".

By releasing a library with version 0.x, I communicate: "I consider this project to be under initial development and would advise people not to depend on it unless you want to participate in its initial development".

I don't understand why people find this difficult or controversial.

steveklabnik 1 hour ago||
There is additional subtlety here.

For example, sometimes projects with a 0.y version get depended on a lot, and moving to 1.0.0 can be super painful. This is the case with the libc crate in Rust, for which the 0.1.0 -> 0.2.0 transition was super painful for the ecosystem. Even though it should be a 1.0.0 crate, it is not, because the pain of causing an ecosystem split isn't considered worth the version-number change.

the__alchemist 4 hours ago|||
Hey - many Rust libraries adopt 0-based versioning (https://0ver.org/). That link can describe it more elegantly than I can.
Fokamul 4 hours ago||
If you want to lure Microslop into migrating all their "great" apps to Servo?

Easy, just add bloat code so it uses 5GB of RAM by default. That's instant adoption by MS.

nmvk 1 hour ago||
Really excited to see this. I contributed to Servo open source 10 years ago, and it was a very cool experience.
tracker1 3 hours ago||
I was a little curious to see if there was any Tauri integration, and it looks like there is (tauri-runtime-verso) ... Not sure where that comes out size-wise compared to Electron at that point, though. My main desire there would be Linux/Flathub distribution of an app I've been working on.
solomatov 5 hours ago|
What could this crate be used for?