Posted by atilimcetin 5 days ago
There are probably good reasons for all of that, but it is just both bad DX and bad UX. It feels like you need to be a hardcore LaTeX expert, or consult with one, to accomplish the most mundane things. Especially in a reliable way that won't break when you make seemingly unrelated changes, or break other things itself.
I used Typst for a few weeks. It already feels much more understandable, consistent, hackable, and customizable. I guess that is the difference between an ad hoc macro system and an actually thought-through programming language.
The only drawback I can see is the ecosystem being smaller and less mature. That is, however, counteracted by being able to do things on your own, without immersing yourself deeply in LaTeX for years. Also, it will improve with time.
LaTeX is great, don't get me wrong. But its heritage and historical baggage are really dragging it down.
Posts/discussion I found interesting:
- http://www.goodmath.org/blog/2008/01/10/the-genius-of-donald...
- https://tex.stackexchange.com/q/24671
- https://news.ycombinator.com/item?id=15733381
In particular, it's interesting how people seem to think TeX itself is actually quite nice to use, but its popularity and LaTeX packages created a huge mess of a system.
Added to that, academics specifically are more willing to suffer old crufty stuff than software engineers tend to be. After all, their job is to absorb fields of material, whether good or bad, and the technology tends to lag behind the bleeding edge in many subfields anyway, so TeX doesn't even necessarily stand out.
Bingo. Compared to troff and what preceded it, TeX was amazing just in its usage. But its real value was in the quality of its typesetting. Knuth put a lot of effort into the beauty and historical correctness of the output, so much so that it solves optimization problems to calculate line breaks. MS Word still can't break a line properly in 2026.
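That line-breaking idea can be sketched as a tiny dynamic program: pick the set of breakpoints that minimizes total "badness" over the whole paragraph, instead of filling each line greedily the way word processors do. This is only a toy under simplifying assumptions (unit-width spaces, cubed slack as badness, a free last line); the real Knuth-Plass algorithm in TeX also models glue stretch/shrink, penalties, and hyphenation.

```python
def break_lines(word_widths, line_width, space=1):
    """Choose line breaks minimizing total badness over the paragraph."""
    n = len(word_widths)
    INF = float("inf")
    best = [INF] * (n + 1)   # best[i] = min total badness for words[:i]
    back = [0] * (n + 1)     # back[i] = start index of the last line
    best[0] = 0
    for j in range(1, n + 1):
        for i in range(j):   # try a line holding words[i:j]
            width = sum(word_widths[i:j]) + space * (j - i - 1)
            if width > line_width:
                continue
            slack = line_width - width
            badness = 0 if j == n else slack ** 3  # last line is free
            if best[i] + badness < best[j]:
                best[j] = best[i] + badness
                back[j] = i
    # Walk back pointers to recover the (start, end) word spans per line.
    lines, j = [], n
    while j > 0:
        i = back[j]
        lines.append((i, j))
        j = i
    return list(reversed(lines))
```

Because the cost is summed over the whole paragraph, a slightly looser early line can be chosen to avoid a very ugly later one, which a greedy first-fit filler can never do.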
Celebrating batch-mode typesetting in 2026 feels like some weird cyberpunk fixation.
Programmable like Emacs (but via Scheme), interfaced with major Computer Algebra Systems, tree-structured documents that are live-queryable and modifiable, and typesetting that rivals TeX without using TeX - TeXmacs provides all that, and much more (https://www.texmacs.org/tmweb/home/videos.en.html)
Further, many docs from that era are plagued with abandonware.
TeX did one thing well for an era when often the only interface to the machine was over a Xyplex terminal server connecting to a tty at 9600 baud.
Another part is that many people built their own solution to their own corner of this domain, and not all of them had a deep appreciation for how the rest of the TeX system works.
I hear similar complaints about "Make web page look good", which is popular but also a huge mess of a system.
Even just a sane layout renderer is incredibly hard. A decade ago I wrote a bespoke DNA sequence typesetter (in svg) and I had claude build an extension, for whatever reason it chose to build it from scratch instead of using the components I had built, and it did everything wrong.
The backslash-based syntax allows for some really powerful typesetting, far above anything that exists today. At the same time, using a backslash-based language right down to the bottom, in the form of macros, is what causes the frustration.
Typst kind of solves that by having the markup syntax implemented in Rust.
I won't lie: It takes getting used to and you need to learn a lot if you want to achieve fancy complex typesetting effects. But - it's not half as inconvenient as it once was.
So whether the resulting file looks right depends on whether the rendering engine chooses the correct font. Looks like it's supposed to be Nimbus Sans or something metric compatible with that, but the serif font chosen by Typst looks obviously wrong.
The legend and title were generated by Gadfly.jl.
When I went through such a selection process years ago, we took all sorts of tests before and even after the selection process. Towards the end, the head instructor told us they don't really have a good way to measure who will make it through. What he did tell us though is that top physical fitness test scores were not indicative that a candidate will make it to the end.
Is there a PDF version or instructions for building your thesis? I'd like to read it.
But I think the main things it has going for it are that it produces nice output and all the journals accept it. Does there exist a tool that renders Typst to LaTeX? That could play nicely with the existing ecosystem.
That's why people take the math subset of LaTeX and use it in other contexts - exactly like this product.
How is this even achieved with HTML or Word?
E.g. variable-width numbers like
4.53
13.98765
7
-1,000,234.76
Not perfect in TeX either, but at least possible.

I don't worry too much about HTML output still being WIP. Even if TeX had a massive head start, Typst has a good development speed, and a little bit of slope makes up for a lot of y-intercept.
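To make that layout rule concrete: aligning variable-width numbers on the decimal point just means padding every number so the integer parts share a common width. A minimal, purely illustrative Python sketch of the rule (at the typesetting level this is roughly what siunitx's S column does in LaTeX):

```python
def align_on_decimal(values):
    """Pad strings so the decimal points (or where they would be) line up."""
    strs = [str(v) for v in values]

    def int_width(s):
        # Width of everything before the '.', or the whole string if none.
        return len(s.split(".")[0])

    left = max(int_width(s) for s in strs)
    return [" " * (left - int_width(s)) + s for s in strs]

for row in align_on_decimal(["4.53", "13.98765", "7", "-1,000,234.76"]):
    print(row)
```

In a monospaced column this puts every decimal point (and the implied one after a bare integer) at the same horizontal position, which is exactly what HTML and Word make surprisingly hard to express.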
It's worth noting that TeX was developed in the same time period that the details of lexical scope were being nailed down by Guy Steele in the Rabbit compiler for Scheme. It's not that TeX is an ad hoc system; it's more the case that people didn't actually know how to implement a better system at the time.
What is Tony famous for? Well, lots of things, including his very important comparison sort algorithm Quicksort, but, in this context, how about the Billion Dollar Mistake? That's a pretty nasty booboo in many programming languages, for which Tony blames himself because it was his idea.
Like your parent said, TeX shipped a long time ago and we have learned a lot since then. It is not a surprise that we know how to do better today; in fact, it would be a serious black mark for computer science if we couldn't.
People did know how to implement things back then, and TeX is a great example of that. It is just that our definitions of what we consider better have changed over the years.
This seems like the _perfect_ use for an LLM: systematically porting over as much of the "ecosystem" to Typst as possible. Is anyone doing that?
doesn't appear indifferent or hostile
* Higher-priority work currently being done on arXiv (moving from Perl to Python/cloud)
* No "standard" Typst distro
* Support team needs to be re-trained for a new language
* Persistence: TeX has 30+ years of history; will Typst be around in 30 years? Will current code compile? Will existing documents be supported?
I can actually just write my own functions when I need to. I don't think I have ever written a LaTeX macro without having to look up a lot of stuff.
It's like the JSX of LaTeX: markup in a programming language, not a programming language pretending to be markup.
> The only drawback I can see is the ecosystem being smaller and less mature. That is, however, counteracted by being able to do things on your own, without immersing yourself deeply in LaTeX for years. Also, it will improve with time.
Most "matches KaTeX" claims I've seen in the wild rely on screenshot eyeballing, which collapses on edge cases like spacing primes, integral subscripts, and matrix delimiters that scale.
One thing I'd be curious about: how are font fallbacks handled when the same Rust core ships to platforms with different system font availability?
KaTeX bundles fonts and assumes they load cleanly; CoreGraphics and Skia bring their own glyph caches and metrics.
Does the display list carry metric snapshots from the host text shaper, or does the core compute layout from a bundled metric file independent of the backend?
The webpage also reads like it was at least heavily LLM-assisted, which makes it a bit hard to trust.
That all said, this is definitely something I'd be interested in using for Zulip if it is indeed going to be a well-maintained open source project.
(We currently have a Node server component that the Zulip server runs only to render LaTeX.)
The `render` binary weighed 4.0 MB on disk when I compiled it a few minutes ago. Not sure if that's what you were looking for, but just in case it is, there you go.
Here's the logs, if you want: https://gist.github.com/ethmarks/8df92a68c3076ea2f4a5aedba9f...
[1]: https://keenwrite.com/screenshots.html
Is accessibility anywhere on the roadmap for RaTeX?
On a related note: is MathML more accessible than AI-generated text of how a human would read the mathematical or chemical formula?
Yes, screen readers would typically allow you to navigate the formulas in ways that are more sophisticated than text (not to mention the issues with translating to Braille, which I don't claim to understand, at all). In fact alternative text is a poor substitute for structured information about the formula, which is what you get with MathML.
Plus, the MathML + screen reader combo is deterministic and debuggable, as opposed to OCR'ing an image.
> There is katex-rs[0] that outputs html and mathML. I'd assume the two could work together and the mathML would be fed to whatever the screen reader receives instead of the image?
Maybe! You are parsing the input twice, but it could be a pragmatic solution. I don't know myself how native apps are supposed to expose MathML to screen readers (or if it is even possible without an embedded browser!).
KaTeX depends on the browser engine to do most of the layout, so generating an image would require a tool which converts the HTML nodes back into an image. There are some tools that can do that (like html2canvas), but using LaTeX to generate a PDF and then converting that to an image is probably going to be easier.
But the native and library nature of RaTeX is very interesting, especially with the provided C ABI.
Just thought I'd mention since it's related and I really like the project.
There are probably enough tests to support a Rust rewrite of docutils and Sphinx with Python extension compatibility; docutils.rs and sphinxdoc.rs?
That one jumped out to me too. The phrasing is so wiggly but technically correct it feels intentional. When I saw it I didn't blame it on the LLM, which is worse.
Otherwise it's a super cool looking project
After a bit of tinkering and learning the idiosyncrasies of Typst, the joy of having reliable, consistent, beautiful, data-driven resumes and cover letters is immeasurable. It basically removed any barrier to applying, whereas whatever I had before had always felt like a burden.
On top of that, I can add hiring-process data directly to the YAML file to run further analysis.
Can LaTeX do this? Most probably, but the learning curve is the difference.