Posted by _kush 4 days ago
1. Even though there are newer XSLT standards, XSLT 1.0 is still dominant. It is quite limited and weird compared to the newer standards.
2. Resolving performance problems in XSLT templates is hell. XSLT is a Turing-complete functional-style language, with performance very much abstracted away. We had XSLT templates that worked fine for most documents, but then one document came in with a ~100-row table and it blew up. It turned out the template that processed the table was O(N^2) or worse, with no obvious way to optimize it (it might even have had an XPath on each row that was itself O(N) or worse). I don't remember exactly how it manifested, but as I recall the document was processed by XSLT for more than 7 minutes.
JS might have other problems, but not being able to resolve algorithmic complexity issues is not one of them.

Features like key (indexes) are now available to greatly speed up the processing. A good XSLT implementation like Saxon definitely helps on the performance front as well.
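For the row-lookup case specifically, the usual fix is xsl:key, which turns a per-row document scan into an indexed lookup. A minimal sketch, with hypothetical element and attribute names (rows joined to price records by id):

    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <!-- Index all <price> elements by their @id attribute. -->
      <xsl:key name="price-by-id" match="price" use="@id"/>

      <xsl:template match="row">
        <!-- key() is typically a hash lookup, instead of re-scanning the
             whole document with //price[@id = current()/@ref] on every row. -->
        <cell><xsl:value-of select="key('price-by-id', @ref)"/></cell>
      </xsl:template>
    </xsl:stylesheet>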
When it comes to transforming XML into something else, XSLT is quite handy for structuring the logic.
XSLT 2+ was more about side effects.
I never really grokked later XSLT and XPath standards though.
XSLT 1.0 had a steep learning curve, but it was elegant in the way poetry is elegant: because of the extra restrictions imposed on it compared to prose. You really had to stretch your mind to do useful stuff with it. Anyone remember Muenchian grouping? It was gorgeous.
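For anyone who never saw it, a minimal sketch from memory, assuming a flat list of hypothetical <item> elements carrying a category attribute:

    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <!-- XSLT 1.0 grouping: index items by key, then pick each group's first. -->
      <xsl:key name="by-cat" match="item" use="@category"/>

      <xsl:template match="items">
        <!-- An item starts a group if it is the very node that key()
             returns first for its own category value. -->
        <xsl:for-each select="item[generate-id() =
                                   generate-id(key('by-cat', @category)[1])]">
          <group name="{@category}">
            <xsl:copy-of select="key('by-cat', @category)"/>
          </group>
        </xsl:for-each>
      </xsl:template>
    </xsl:stylesheet>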
Newer standards lost elegance and kept the ugly syntax.
No wonder they lost mindshare.
My biggest problem with XSLT is that I've never encountered a problem that I wouldn't rather solve with an XPath library and literally any other general purpose programming language.
When XSLT was the only thing with XPath you could rely on, maybe it had an edge, but once everyone has an XPath library what's left is a very quirky and restrictive language that I really don't like. And I speak Haskell, so the critic reaching for the reply button can take a pass on the "Oh you must not like functional programming" routine... no, Haskell is included in that set of "literally any other general purpose programming language" above.
There's clearly value in XSLT's near-universal support as a web-native system. It provides templating out of the box without invoking JavaScript, and there's demand for that[1]. But it still lacks decent in-browser debugging which JS has in spades.
[1] https://justinfagnani.com/2025/06/26/the-time-is-right-for-a...
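The zero-JS hook, for reference, is just a processing instruction in the XML document itself. A minimal sketch, with hypothetical file and element names:

    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="blog.xsl"?>
    <!-- The browser fetches blog.xsl and renders the transformed output;
         no JavaScript is involved. -->
    <posts>
      <post title="Hello world">First post!</post>
    </posts>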
The XML world is full of ugly standards and failed contenders. Nobody remembers RELAX NG, but it had richer expressive power than XML Schema and a human-readable syntax.
https://github.com/Juniper/libslax/wiki/Intro
It looks like it was developed by Juniper and has shipped in their routers?
Just to frame this for people: imagine a JSON-based programming language for transforming JSON files into other JSON files, where the program itself is also JSON and Turing complete. Now imagine it's not JSON but XML! Now any program can read it! Universal code, magic!
The idea behind XXSLT, then, is that we actually have a program whose job is to specify a program. So we have an XML file which specifies a second XML file, which is the program, whose job is to transform XML to XML. As we all know, layers of abstraction are always good, and common formats such as XML are especially good, so what we have now is the ability to generate a whole family and diverse ontology of programs, all of them XML, all of them by and for XML. Imagine compiling it with your favourite XML-based compilation chain!
XML (the data structure) needs a non-XML serialization.
Similar to how the Semantic Web's OWL has several different serializations, only one of them being the XML serialization. (E.g. OWL can be represented in Functional, Turtle, Manchester, JSON, and N-Triples syntaxes.)
That's YAML, and it is arguably worse. Here's a sample YAML 1.2 document straight from their spec:
    %TAG !e! tag:example.com,2000:app/
    ---
    - !local foo
    - !!str bar
    - !e!tag%21 baz
Nightmare fuel. Just by looking at it, can you tell what it does?
Some notes:
- SemWeb also has JSON-LD serialization. It's a good compromise that fits modern tooling nicely.
- XML is still a damn good compromise between human readable and machine readable. Not perfect, but what is perfect anyway?
- HTML5 is now more complex than XHTML ever was (all sorts of historical caveats in this claim, I know, don't worry).
- Markup beauty is relative, we should accept that.
KDL is a very interesting attempt, but my impression is that people are already trying to shove way too much unnecessary complexity into it.
IMO, KDL's document transformation language is not a really good example of a better XSLT, though. I mean, it's better, but it probably can still be improved a lot.
But the fundamental problem here is the same: no matter what new things are added to the spec, the best you can hope for in browsers is XSLT 1.0, even though we've had XSLT 3.0 for 8 years now.
Trying to close the gap often ends up creating more complexity than intended, or maybe even more than XML in some hands.
It definitely would be an interesting piece.
- Xee: https://github.com/Paligo/xee
- xrust: https://docs.rs/xrust/latest/xrust/xslt/
- XJSLT (compiles XSLT to JS): https://github.com/egh/xjslt
Xee is WIP AFAIK and I don't know the maturity of xrust and XJSLT.
Also, I want a cookie & a pony.
How, where? In 2013 I was still working a lot with XSLT and 1.0 was completely dead everywhere one looked. Saxon was free for XSLT 2 and was excellent.
I used to do transformation of both huge documents, and large number of small documents, with zero performance problems.
Obviously, that means there's a lot of legacy processes likely still using it.
The easiest way to improve the situation seems to be to upgrade to a newer version of XSLT.
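As a concrete taste of what upgrading buys: the grouping dance sketched upthread collapses into xsl:for-each-group in XSLT 2.0+. A sketch, assuming the same hypothetical <item>/@category input:

    <xsl:stylesheet version="2.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:template match="items">
        <!-- Built-in grouping replaces the key()/generate-id() trick. -->
        <xsl:for-each-group select="item" group-by="@category">
          <group name="{current-grouping-key()}">
            <xsl:copy-of select="current-group()"/>
          </group>
        </xsl:for-each-group>
      </xsl:template>
    </xsl:stylesheet>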
In the early days XSL was all interpreted, and it was slow. From ~2004 or so, all the XSLT engines came to be JIT-compiled. XSLT benchmarks used to be a thing, but they rapidly declined in value from then onward because the performance differences just stopped mattering.
Besides, XML is not universally loved.
It's not my first choice, but I won't rule it out because I know how relatively flexible and capable it can be.
XSLT might just need a higher abstraction level on top of it?
The idea behind XSLT is brilliant, but the real essence of it is XPath, which makes it possible. And we've seen XPath evolve into CSS Selectors, and be useful on its own.
So in essence there are two sides of the transformation:
- selection - when you designate which parts of the tree match
- transformation - when building the new tree
And while there are established approaches to the first part, perhaps XSLT is the only one which fits the definition of 'generally accepted' when it comes to the transformation.

But one can argue the transformation is possible with jq; it's just that I definitely don't like its over-engineered syntax. IMHO the champion of transformation syntax is yet undecided, even though in 2025 XSLT is still more or less king. Which is fascinating, as XML has long since stopped being the usual choice of preference.
Don’t get me wrong, XPath is by far the best thing to come out of the xml ecosystem, but the actual idea at the core of xslt is the match/patch during traversal, and talking about it in terms of selection misses that boat entirely. Select / update is how you manipulate a tree with jQuery, or really the average xml library.
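In XSLT terms, that match/patch idea is the identity transform plus overrides: copy everything by default, and "patch" only the nodes you match as the traversal reaches them. A minimal sketch, with a hypothetical price element:

    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <!-- Identity: copy every attribute and node as-is by default. -->
      <xsl:template match="@*|node()">
        <xsl:copy>
          <xsl:apply-templates select="@*|node()"/>
        </xsl:copy>
      </xsl:template>

      <!-- Patch: only <price> elements get rewritten, wherever the
           traversal encounters them. -->
      <xsl:template match="price">
        <price currency="EUR"><xsl:value-of select=". * 1.1"/></price>
      </xsl:template>
    </xsl:stylesheet>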
Streaming is not supported until later versions.
But in the end the core problem is XSLT, the language. Despite being a complete programming language, your options are very limited for resolving performance issues when working within the language.
I worked with a guy who knew all about complexity analysis, but was quick to assert that "n is always small". That didn't hold - but he'd left the team by the time this became apparent.
I've seen a couple of blue-chip websites that could be completely taken down just by requesting the sitemap (more than once per minute).
PS: That being said, it is an implementation issue. But it may speak for itself that 100% of the XSLT projects I've seen had it.
I'm pretty sure that's because implementing XSLT 2.0 needs a proprietary library (Saxon XSLT[0]). It was certainly the case in the oughts, when I was working with XSLT (I still wake up screaming).
XSLT 1.0 was pretty much worthless. I found that I needed XSLT 2.0, to get what I wanted. I think they are up to XSLT 3.0.
I think the guy behind Saxon may be one of the XSLT authors.
That said, Saxon does (or at least did) have an open source version. It doesn't have all the features, e.g. no schema validation or query optimization, but what it does have stays within the boundaries of the spec. The bigger problem there is that Saxon is written in Java, and browsers understandably don't want to take a dependency on that just for XSLT 2+.
It looks great, then you design your stuff and it goes great, then you deploy to the real world and everything catches fire instantly, and every time you put one fire out another one starts.
Generally speaking I feel like this is true for a lot of stuff in programming circles, XML included.
New technology appears, some people play around with it. Others come up with using it for something else. Give it some time, and eventually people start putting it everywhere. Soon "X is not for Y" blogposts appear, and usage finally starts to decrease as people rediscover "use the right tool for the right problem". Wait yet some more time, and a new technology appears, and the same cycle begins again.
Seen it with so many things by now that I think "we" (the software community) will forever be stuck in this cycle, and the only way to win is to explicitly jump out of the cycle, watch it from afar, pick up the pieces that actually make sense to continue using, and ignore the rest.
We eventually said, "what if we made databases based on JSON" and then came MongoDB. Worse performance than a relational database, but who cares! It's JSON! People have mostly moved away from document databases, but that's because they realized it was a bad idea for the majority of usecases.
I think the only left-out part is people currently believing in the current hyped way, "because this time it's right!" or whatever they claim. Kind of the way TypeScript people always appear when you say that TypeScript is currently one of those hyped things and will eventually be overshadowed by something else, just like the other languages before it; then, soon enough, someone will share why TypeScript happens to be different.
>wasting cycles to manifest structured data in an unstructured textual format
JSON IS a structured textual format, you doofus. What you're complaining about is that the message defines its own schema.
>has massive overhead on the source and destination sides
The people that care about the overhead use MessagePack or CBOR instead.
I personally hope that I will never have to touch anything based on protobufs in my entire life. Protobuf is a garbage format that fails at the basics. You need the schema one way or another, so why isn't there a way to negotiate the schema at runtime in protobuf? Easily half or more of the questionable design decisions in protobuffers would go away if the client retrieved the schema at runtime. The compiler-based workflow in protobuf doesn't buy you a significant amount of performance in the average JS or JVM based webserver, since you're copying from a JS object or POJO to a native protobuf message anyway. It's inviting an absurd amount of pain for little to no benefit. What I'm seeing here is a motte-and-bailey justification for making the world a worse place. The motte is the argument that text-based formats are computationally wasteful, which is easily defended. The bailey is the implicit argument that hard-coding the schema the way protobuf does is the only way to implement a binary format.
Note that I'm not arguing particularly in favor of MessagePack here or even against protobuf as it exists on the wire. If anything, I'm arguing the opposite. You could have the benefits of JSON and protobuf in one. A solution so good that it makes everything else obsolete.
Please avoid snark.
https://en.wikipedia.org/wiki/XML_appliance
E.g.
https://www.serverwatch.com/hardware/power-up-xml-data-proce...
> An XML appliance is a special-purpose network device used to secure, manage and mediate XML traffic.
Holy moly
JSON has its own set of problems like lack of comments and for some reason no date type.
But in the end they are just data file formats. We have bigger things to worry about.
I would say that the main benefit of XML is that it has a very mature ecosystem around it that JSON is still very much catching up with.
I think part of the problem is focusing on the wrong aspect. In the case of XSLT, I'd argue its most important properties are being pure, declarative, and extensible. Those can have knock-on effects, like enabling parallel processing, untrusted input, static analysis, etc. The fact it's written in XML is less important.
Its biggest competitor is JS, which might have nicer syntax but it loses those core features of being pure and declarative (we can implement pure/declarative things inside JS if we like, but requiring a JS interpreter at all is bad news for parallelism, security, static analysis, etc.).
When fashions change (e.g. XML giving way to JS, and JSON), we can end up throwing out good ideas (like a standard way to declare pure data transformations).
(Of course, there's another layer to this, since XML itself was a more fashionable alternative to S-expressions; and XSLT is sort of like Lisp macros. Everything old is new again...)
I don't believe this is true. Machine language doesn't need the kind of verbosity that XML provides. SGML/HTML/XML were designed to allow humans to produce machine-readable data. So they were meant for humans to talk to machines and vice versa.
Markup languages are a fine and useful and powerful way for modeling documents, as in narrative documents with structure meant for human consumption.
XML never had much to recommend it as the general purpose format for modeling all structured data, including data meant primarily for machines to produce and consume.
1. the browsers were inconsistent in 1990-2000 so we started using JS to make them behave the same
2. meanwhile the only things we really needed were good CSS styles (which were not yet present) and consistent behaviour
3. over the years the browsers started behaving the same (mainly because of Highlander rules - there can be only one - but Firefox is also coping well)
4. but we had already got used to having frameworks that would make pages look the same on all browsers. Also, the paradigm switched to rendering JSON data
5. with current technology we could cope with server-generated old-school web pages, because they would have a low footprint, work faster and require less memory.
Why do I say that? Recently we started working on a migration from a legacy system. It looks like a standard 2000s page-per-HTTP-request app. Every action, like add, remove, etc., requires an HTTP refresh. However, it works much faster than our React system. Because:
1. Nowadays the internet is much faster
2. Phones have a lot of memory which is wasted by js frameworks
3. in the backend all's almost same old story - CRUD CRUD and CRUD (+ pagination, + transactions)
It works well here on HN for example as it is quite simple.
There are a lot of other examples where people most likely should do a simple website instead of using JS framework.
But "we could all go back to full page reloads" is not true, as there really are proper "web applications" out there for which full page reloads would be a terrible UX.
To summarize there are:
"websites", "web documents", "web forms" that mostly could get away with full page reloads
"web applications" that need complex stuff presented and manipulated while full page reload would not be a good solution
Let's face it, most uses of JS frameworks are for blogs or things where you would not even notice a full page reload: nowadays browsers are advanced and only redraw the screen when they have finished loading the content, meaning that out of the box they mostly do what React does (only repaint DOM elements that have changed), so reloading a page that only changes one button at the UI level does not result in a flicker or a visible reload of the whole page.
BTW, even React now suggests running the code server-side if possible (it's the default in Next.js), since it makes the project easier to maintain, debug, and test, as well as getting a better SEO score from search engines.
I'm still a fan of the "old" MVC models of classical frameworks such as Laravel, Django, Rails, etc., which to me make projects that are overall easier to maintain, because all code runs in the backend (except maybe some jQuery animation client-side), the model is well separated from the view, there is no API to maintain, etc.
grug remember ancestor used frames
then UX shaman said frame bad all sour faced frame ugly they said, multiple scrollbar bad
then 20 years later people use fancy js to emulate frames grug remember ancestor was right
https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/...
Even with these problems, classic frames might have been salvageable, but nobody bothered to fix them.
https://pubs.opengroup.org/onlinepubs/9799919799/
They can navigate targeting any other frame. For example, clicking "System Interfaces" updates the bottom-left navigation menu, while keeping the state of the main document frame.
It's quite simple, just uses the `target` attribute (target=blank remains popular as a vestigial limb of this whole approach).
This also worked with multiple windows (yes, there were multi-window websites that could present interactions that handled multiple windows).
The popular iframe is sort of salvaged from frame tech; it is still used extensively and is not deprecated.
Classic frames are simple. Too simple. Your link goes to the default state of that frameset. Can you link me any non-default state? Can I share a link to my current state with you?
Most frames are used for a menu, navigation, a frame for the data, and a frame for additional information about the data. And they are great for that. I don't think frames are different instances of the browser engine(?), but that doesn't matter in the slightest(?). They are fast and lightweight.
> The header/footer/sidebar frames are subordinate and should not navigate freely.
They have the ability to navigate freely but obviously they don't do that, they navigate different frames.
History doesn't work right
Bookmarks don't work right -- this applies to link sharing and incoming links too
Back button doesn't work right
The concept is good. The implementation is bad.
> Bookmarks don't work right -- this applies to link sharing and incoming links too
> Back button doesn't work right
Statements that apply to many JS webpages too.
pushState/popState came years after frames lost popularity. These issues are not related to their downfall.
Relax, dude. I'm not claiming we should use frames today. I'm saying they were simple good tools for the time.
And, ironically, the best way to fix these problems with frames is to use JavaScript.
They were good enough.
> For some sites, it wasn't a big deal
Precisely my point.
> POSIX specs or Javadocs
Hey, they work for me.
> the best way to fix these problems with frames is to use JavaScript.
Some small amounts of javascript. Mainly, proxy the state for the main frame to the address bar. No need for virtual dom, babel, react, etc.
--
_Again_, you're arguing like I'm defending frames for use today. That's not what I'm doing.
Many websites follow a "left navigation, center content" overall layout, in which the navigation stays somehow stationary and the content is updated. Frames were broken, but were in the right direction. You're nitpicking on the ways they were broken instead of seeing the big picture.
Along with other issues, this gave rise to AJAX and SPAs and JS frameworks. A big part of how we got where we are today is because the people making the web standards decided to screw around with XHTML and "the semantic web" (another directionally correct but badly done thing!) and other BS for about a decade instead of improving the status quo.
So we can and often should return to ancestor but if we're going to lay blame and trace the history, we ought to do it right.
Frames gave way to (the incorrect use of) tables. The table era was way worse than what we have today. Transparent gif spacers, colspan... it was all hacks.
The table era gave birth to a renewal of web standards. This ran mostly separately from the semantic web (W3C is a consortium, not a single central group).
The table era finally gave way to the jQuery era. Roughly around this time, browser standards got their shit together... but vendors didn't.
Finally, the jQuery era ended with the rise of full JS frameworks (backbone first, then ember, winjs, angular, react). Vendors operating outside standards still dominate in this era.
There's at least two whole generations between frames and SPAs. That's why I used the word "ancestor", it's 90s tech I barely remember because I was a teenager. All the other following eras I lived through and experienced first hand.
The poison on the frames idea wore off ages ago. The fact that websites not built with frames resemble their layout is proof of that; they just don't share the same implementation. The "idea" is seen with kind eyes today.
The key point about frames in the original context of this thread as I understood it was that they allowed a site to only load the content that actually changes. So accounting for the table-layout era doesn't really change my perspective: frames were so bad, that web sites were willing to regress to full-page-loads instead, at least until AJAX came along -- though that also coincides with the rise of the (still ongoing) div-layout era.
I agree wholeheartedly that the concept of partial page reloading in a rectilinear grid is alive and well. Doing that with JavaScript and CSS is the whole premise of an SPA as I understand it, and those details are key to the difference between now and the heyday of frames. But there was also a time when full-page-loading was the norm between the two eras, reflecting the disillusionment with frames as they were implemented and ossified.
The W3C (*) spent a good few years working on multiple things most of which didn't pan out. Maybe I'm being too harsh, but it feels like a lot of their working groups just went off and disconnected from practice and industry for far too long. Maybe that was tangential to the ~decade-long stagnation of web standards, but that doesn't really change the point of my criticism.
* = Ecma has a part in this too, since JavaScript was standardized by them instead of W3C for whatever reason, and they also went off into la-la land for roughly the same period of time
Probably, yes!
> So accounting for the table-layout era doesn't really change my perspective: frames were so bad, that web sites were willing to regress to full-page-loads instead
That's where we disagree.
From my point of view, what brought sites to full page loads were designers. Design folk wanted to break out of the "left side navigation, right content" mold and make good looking visual experiences.
This all started with sites like this:
https://www.spacejam.com/1996/
This website is an interstitial fossil between frames and the full table nightmare. The homepage represents what (at the time) was a radical way of experiencing the web.
It still carries vestiges of frames in other sections:
https://www.spacejam.com/1996/cmp/jamcentral/jamcentralframe...
However, the home is their crown jewel and it is representative of the years that followed.
This new visual experience was enough to discard partial loading. And for a while, it stayed like this.
JS up to this point was still a toy. DHTML, hover tricks, trinkets following the mouse cursor. It was unthinkable to use it to manage content.
It was not until CSS zen garden, in 2003, that things started to shift:
https://csszengarden.com/pages/about/
Now, some people were saying that you could do pretty websites without tables. By this time, frames were already forgotten and obsolete.
So, JS never killed frames. There was a whole generation in between that never used frames, but also never used JS to manage content (no AJAX, no innerHTML shenanigans, nothing).
Today, websites look more like the POSIX spec (in structure and how content is loaded) than the SpaceJam website that defined a generation. The frames idea is kind of back in town. It doesn't matter that we don't use the same 90s tech, they were right about content over style, right about partial loading, right about a lot of structural things.
I should clarify. I don't think JS killed frames, that's not what I meant. If anything, I think JS could have saved frames. But the failure of frames left a gap that eventually JS (esp. with AJAX) filled. Lots of other stuff was going on at this time too, including alternative tech like Java, Flash, and ActiveX, all of which were trying to do more by bypassing the "standard" tech stack entirely.
I think the ossification of web standards from ca. 1999 to 2012, combined with the rapidly growing user base, and with web developers/designers aggressively pushing the envelope of what the tech could do, put the standard stuff on the back foot pretty badly. Really, I'm talking about the whole ecosystem and not just the standards bodies themselves; there was an era where e.g. improving HTML itself was just not the active mentality. Both inside and outside of W3C (etc.), it seemed that nobody cared to make the standard stuff better. W3C focused on unproductive tangents; web devs focused on non-standard tech or "not the intended use" (like tables for layout).
So I think we can say that <frameset> frames died a somewhat unfair death, caused partly by their initial shortcomings, partly by people trying to break outside of the (literal) boxes they imposed, and partly by the inability of the standard tech to evolve and address those shortcomings in a timely fashion. But just as there was a reason they failed, there was a reason they existed too.
Take the POSIX specs linked in a sibling comment.
Or take the classic Javadocs. I am currently looking at the docs for java.util.ArrayList. Here's a link to it from my browser's URL bar: https://docs.oracle.com/javase/8/docs/api/
But you didn't go to the docs for java.util.ArrayList, you went to the starting page. Ok, fine, I'll link you directly to the ArrayList docs, for which I had to "view frame source" and grab the URL: https://docs.oracle.com/javase/8/docs/api/java/util/ArrayLis...
Ok, but now you don't see any of the other frames, do you? And I had one of those frames pointing at the java.util class. So none of these links show you what I saw.
And if I look in my history, there is no entry that corresponds to what I actually saw. There are separate entries for each frame, but none of them load the frameset page with the correct state.
These are strongly hyperlinked reference documents. Classic use of HTML. No JavaScript or even CSS needed.
> Ok, fine, I'll link you directly to the ArrayList docs, for which I had to "view frame source" and grab the URL:
You could've just right-clicked the "frames" link and copied the URL: https://docs.oracle.com/javase/8/docs/api/index.html?java/ut... . They use javascript to navigate based on the search params in the URL. It's not great, it should update the URL as you navigate; maybe you can send them a PR for that. (And to change the state of the boxes on the left too.)
Also browser history handling is really messy and hard to get right, regardless of frames.
> And if I look in my history, there is no entry that corresponds to what I actually saw.
? If you write a javascript +1 button that updates a counter, there won't be a corresponding entry in your history for the actual states of your counter. I don't see how that is a fundamental problem with javascript(?).
I don't understand how pre-HTML5, non-AJAX reference docs qualify as an "SPA". This is just an ordinary web site.
Then again, it was a long time. Maybe it's me misremembering.
This was maybe 2008?
jQuery took over very quickly though for all of those.
Almost sure it was available on IE6. But even if not, you could emulate it using hidden iframes to call pages which embedded some javascript interacting with the main page. I still have fond memories of using mootools for lightweight nice animations and less fond ones of dojo.
Kuro5hin had a dynamic commenting system based on iframes like you describe.
Internet Explorer didn’t support DOM events, so addEventListener wasn’t cross-browser compatible. A lot of people put work in to come up with an addEvent that worked consistently cross-browser.
The DOMContentLoaded event didn’t exist, only the load event. The load event wasn’t really suitable for setting up things like event handlers because it would wait until all external resources like images had been loaded too, which was a significant delay during which time the user could be interacting with the page. Getting JavaScript to run consistently after the DOM was available, but without waiting for images was a bit tricky.
These kinds of things were iterated on in a series of blog posts from several different web developers. One blogger would publish one solution, people would find shortcomings with it, then another blogger would publish a version that fixed some things, and so on.
This is an example of the kind of thing that was happening, and you’ll note that it refers to work on this going back to 2001:
https://robertnyman.com/2006/08/30/event-handling-in-javascr...
When jQuery came along, it was really trying to achieve two things: firstly, incorporating things like this to help browser compatibility; and second, to provide a “fluent” API where you could chain API calls together.
In 2002, I was using "JSRS", and returning HTTP 204/No Content, which causes the browser to NOT refresh/load the page.
Just for small interactive things, like a start/pause button for scheduled tasks. The progress bar etc.
But yeah, in my opinion we lost about 15 years of proper progress.
"The network is the computer" came true.
The SUN/JEE model is great.
It’s just that monopolies stifle progress and better standards.
Standards are pretty much dead, and everything is at the application layer.
That said.. I think XSLT sucks, although I haven’t touched it in almost 20 years. The projects I was on, there was this designer/xslt guru. He could do anything with it.
XPath is quite nice though
Internet Explorer 6 was released in 2001 and didn’t drop below 3% worldwide until 2015. So that’s a solid 14 years of paralysis in browser compatibility.
DOM manipulation of that sort is JS dependent, of course, but I think considering language features and the environment, like the DOM, to be separate-but-related concerns is valid. There were less kitchen-sink-y libraries that only concentrated on language features or specific DOM features. Some may even consider a few parts in a third section: the standard library, though that feature set might be rather small (not much more than the XMLHTTPRequest replacement/wrappers?) to consider its own thing.
> For stuff which didn't need JS at all, there also shouldn't be much need for JQuery.
That much is mostly true, as it by default didn't do anything to change non-scripted pages. Some polyfills for static HTML (for features that were inconsistent, or missing entirely in, usually, old-IE) were implemented as jQuery plugins though.
--------
[1] Though I don't think they were called that back then, the term coming later IIRC.
[2] Method chaining³, better built-in searching and filtering functions⁴, and so forth.
[3] This divides opinions a bit though was generally popular, some other libraries did the same, others tried different approaches.
[4] Which we ended up coding repeatedly in slightly different ways when needed otherwise.
HTML was the original standard, not JS. HTML was evolving early on, but the web was much more standard than it was today.
Early-to-mid 1990s web was awesome. HTML served over HTTP, and pages used header tags, text, hr, then some background color variation and images. CGI in a cgi-bin dir was used for server-side functionality, often written in Perl or C: https://en.m.wikipedia.org/wiki/Common_Gateway_Interface
Back then, if you learned a little HTML, you could serve up audio, animated gifs, and links to files, or Apache could just list files in directories to browse like a fileserver without any search. People might get a friend to let them have access to their server, or their university's, and put content up on it. You might be on a server where they had a cgi-bin script or two to email people or save/retrieve from a database, etc. There was also a mailto in addition to href for the a (anchor) tag for hyperlinks, so you could just put your email address there.
Then a ton of new things were appearing. PHP on the server side. JavaScript came out but wasn't used much except for a couple of party tricks. ColdFusion on the server side. Around the same time was VBScript, which was nice but just for IE/Windows, but it was big. Perl and then PHP were also big on the server side. If you installed Java you could use Applets, which were neat little applications on the page. Java Web Server came out server-side and there were JSPs. Java Tomcat came out on the server side. ActionScript came out to basically replace VBScript but do it server-side with ASPs. VBScript support went away.
During this whole time, JavaScript had just evolved into more party tricks and things like form validation. It was fun, but it was PHP, ASP, JSP/Struts/etc. server-side in the early 2000s, with Rails coming out and ColdFusion mostly going away. Facebook was PHP mid-2000s, and the LAMP stack, etc. People were breaking up images using tables, and CSS was coming out with slow adoption. It wasn't until the mid-to-late 2000s that JavaScript started being used much for UI, with Google's fostering of it and development of V8, where it was taken more seriously, because it was slow before then. And when it finally got big, there was an awful several years of framework-after-framework super-JavaScript ADHD, which drove a lot of developers to leave web development, because of the move from server-side to client-side, along with NoSQL DBs, and seemingly stupid things happening like client-side credential storage, ignoring ACID for data, etc.
So- all that to say, it wasn’t until 2007-2011 before JS took off.
The thing was that it was really hard to write code that did the same DOM + placement on all the browsers, and if a framework could do that, it was a great help. I started my webpage development around 2000 with `if (document.forms /* is ie */) ...` and was finding a way to run IE on my Linux computer to test the webpage rendering there. And CSS 2 was released in 1998 and could change everything and was the deus ex machina everyone expected, except it didn't work, especially on IE (which had the majority of the market, and especially if you developed a business application you had to count it as the majority of all your clients, if not the only ones). So in CSS 2 you could __allegedly__ do things you really needed, like placing things together or in a related position, instead of calculating the browser's sizes etc., but it didn't work correctly, so you had to fall back to javascript `document.getElementById().position = screenWidth/2` etc.
So according to my memory, (1) these were the dark times mainly because of m$ being lazy and abusing their market position, (2) we used javascript to position elements, colorize them, make complicated bevels, borders etc., (3) this created the gap that Google could use to gain power (which we admired at the time as the saviours of the web), (4) Opera was the thing and a Resistance icon (boasting of fulfilling all standards and being fast, though they failed a few times too)
also DSL, LAN internet sharing and AOL (in Poland 0202122 ppp/ppp), tshshshshshs, tidutidumtidum, tshshshshshsh ...
I've got a .NET/Kestrel/SQLite stack that can crank out SSR responses in no more than ~4 milliseconds. Average response time is measured in hundreds of microseconds when running release builds. This is with multiple queries per page, many using complex joins to compose view-specific response shapes. Getting the data in the right shape before interpolating HTML strings can really help with performance in some of those edges like building a table with 100k rows. LINQ is fast, but approaches like materializing a collection per row can get super expensive as the # of items grows.
The closer together you can get the HTML templating engine and the database, the better things will go in my experience. At the end of the day, all of that fancy structured DOM is just a stream of bytes that needs to be fed to the client. Worrying about elaborate AST/parser approaches when you could just use StringBuilder and clever SQL queries has created an entire pointless, self-serving industry. The only arguments I've ever heard against using something approximating this boil down to arrogant security hall monitors who think developers can't be trusted to use the HTML escape function properly.
Unfortunately, they're not actually wrong though :-(
Still, there are ways to enforce escaping (like preventing "stringly typed" programming) which work perfectly well with streams of bytes, and don't impose any runtime overhead (e.g. equivalent to Haskell's `newtype`)
unless you have a high latency internet connection: https://news.ycombinator.com/item?id=44326816
-- edit --
by the way, in 2005 I programmed using a very funny PHP framework, PRADO, that sent every change in the UI to the server. Boy, it was slow and server-heavy. This was a direction we should never have gone...
Not a good example. I can't find it now, but there was a story/comment about a realtor app that people used to sell houses. Often when they were out with a potential buyer they had bad internet access, and loading new data and pictures for houses was a pain. It wasn't until they switched to using a frontend framework to preload everything, with occasional updates, that the app became usable.
Latency affects any interaction with a site. Even Hacker News is a pain to read over a high-latency connection and would improve if new comments were loaded in the background. The problem creeps up on you faster than you think.
It can also be imposed by the client, e.g. via a https://en.wikipedia.org/wiki/Web_accelerator
By the way, GWT did it before.
I'm probably guilty of some of the bad practice: I have fond memories of (ab)using XSLT includes back in the day with PHP stream wrappers to have stuff like `<xsl:include href="mycorp://invoice/1234">`
This may be out-of-date bias, but I'm still a little uneasy letting the browser do the transformation locally, just because it used to be a minefield of incompatibility.
Last thing I really did with XML was a technology called EXI, a transfer method that converted an XML document into a compressed binary data stream. Because translating a data structure to ASCII, compressing it, sending it over HTTP etc and doing the same thing in reverse is a bit silly. At this point protobuf and co are more popular, but imagine if XML stayed around. It's all compatible standards working with each other (in my idealized mind), whereas there's a hard barrier between e.g. protobuf/grpc and JSON APIs. Possibly for the better?
I was curious about how it is implemented and I found the spec easy to read and quite elegant: https://www.w3.org/TR/exi/
For a transport tech XML was OK. Just wasted 20% of your bandwidth on being a text encoding. Plus wrapping your head around those style sheets was a mind twister. Not surprised people despise it. As it has the ability to be wickedly complex for no real reason.
XPath is kind of fine. It's hard to remember all the syntax but I can usually get there with a bit of experimentation.
XSLT is absolutely insane nonsense and needs to die in a fire.
https://rimworldwiki.com/wiki/Modding_Tutorials/PatchOperati...
True, and it's even more sad that XML was originally just intended as a simplified subset of SGML (HTML's meta syntax with tag inference and other shortforms) for delivery of markup on the web and to evolve markup vocabularies and capabilities of browsers (of which only SVG and MathML made it). But when the web hype took over, W3C (MS) came up with SOAP, WS-this and WS-that, and a number of programming languages based on XML including XSLT (don't tell HNers it was originally Scheme but absolutely had to be XML just like JavaScript had to be named after Java; such was the madness).
If your document has namespaces, xpath has to reflect that. You can either tank it or explicitly ignore namespaces by foregoing the shorthands and checking `local-name()`.
Instead of just writing something like

    /*bookstore/*book/*title

it's been some godawful mess like

    /*[name()='bookstore']/*[name()='book']/*[name()='title']

... I guess because they couldn't bear to have it just match on tags as they are in the file, and it had to be tethered to some namespace stuff that most people don't bother with. A lot of XML is ad-hoc without a namespace defined anywhere.

It's like:

Me: Hello XPath, here's an XML document, please find all the bookstore/book/title tags.

XPath: *gasps* Sir, I couldn't possibly look for those tags unless you tell me which namespace we are in. Are you some sort of deviant?

Me: oh ffs *googles xpath name() syntax*
Whether the tags appear "as they are in the file" is not actually relevant, and is not information the average XML processor even receives. If the file uses a default namespace (xmlns), then the elements are namespaced, and anything processing the XML has to either properly handle namespaces or explicitly ignore them.
> A lot of XML is ad-hoc without a namespace defined anywhere
If the element is not namespaced xpath does not require a prefix, you just write
    //bookstore/book/title
my:book is a different thing from your:book and you generally don't want to accidentally match on both. Keeping them separate is the entire point of namespaces. Same as in any programming language.
    /*:bookstore/*:book/*:title
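The namespace-aware way is to bind your own prefix to the document's namespace URI and use it in the path. A minimal sketch in XSLT, assuming the source document declares a hypothetical default namespace xmlns="urn:example:books":

    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        xmlns:b="urn:example:books">
      <!-- The prefix is local to the stylesheet; only the URI has to match
           the one used in the source document. -->
      <xsl:template match="/">
        <xsl:copy-of select="/b:bookstore/b:book/b:title"/>
      </xsl:template>
    </xsl:stylesheet>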
Until some joker decided to employ XML namespaces; then everything turns ugly real fast. I am not sure I can articulate why it is so unpleasant, something about how everything gets super verbose and the API now needs all sorts of extra state.
XML is a markup language system. You typically have a document, and various parts of it can be marked up with metadata, to an arbitrary degree.
JSON is a data format. You typically have a fixed schema and things are located within it at known positions.
Both of these have use-cases where they are better than the other. For something like a web page, you want a markup language that you progressively render by stepping through the byte stream. For something like a config file, you want a data format where you can look up specific keys.
Generally speaking, if you’re thinking about parsing something by streaming its contents and reacting to what you see, that’s the kind of application where XML fits. But if you’re thinking about parsing something by loading it into memory and looking up keys, then that’s the kind of application where JSON fits.
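The difference is easiest to see with mixed content, which markup handles natively and JSON has no natural shape for. A tiny illustrative (made-up) example:

    <para>The patient reported <emphasis>severe</emphasis> pain in the
      <anatomy code="12345">left knee</anatomy> after the fall.</para>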
It works surprisingly well; the only issue I ever ran into was a decades-old bug in Firefox where it doesn't support rendering HTML content directly from the XML document. I.e., if the blog post content is HTML inside CDATA, I needed a quick script to force Firefox to render that text as innerHTML rather than showing the raw CDATA text.
I learned quickly to leave this particular experience off of my resume as sundry DoD contractors contacted me on LinkedIn for my "XML expertise" to participate in various documentation modernization projects.
The next time you sigh as you use JSX to iterate over an array of Typescript interfaces deserialized from a JSON response remember this post - you could be me doing the same in XSLT :-).
Thank you for reading the specs.
Thank you for making the tool.
XML is the C++ of text based file formats if you ask me. It's mature, batteries included, powerful and can be used with any language, if you prefer.
Like old and mature languages with their own quirks, it's sadly fashionable to complain about it. If it doesn't fit the use case, it's fine, but treating it like an abomination is not.
Just imagine how fast websites would have rendered if we went that route
I would rather they introduced support for v3, as that would make it easier to serve static webpages with native support for templating.
What exactly is the difference between generating HTML using the browser's XSLT 1.0 runtime and SaxonJS's XSLT 3.0 runtime? If you're about to say the goal is to not have to deal with JS: you've already accomplished that goal. You don't need to touch NPM, webpack, React, JSX, etc.
Blocking first party JS is lunacy by the way.
Several hundred kB (compressed) of runtime, for one. It could make sense for browsers to have something like that built-in like they did with pdf.js, though Saxon is proprietary so it would not be that thing.
It might not scale for larger businesses, but for regular people on the web who just want to put something out in the world and have minimal churn keeping it up, it can have great value!
It wasn't that bad. We used tomcat and some apache libraries for this. Worked fine.
Our CMS was spitting out XML files with embedded HTML that were very cachable. We handled personalization and rendering to HTML (and js) server side with a caching proxy. The XSL transformation ran after the cache and was fast enough to keep up with a lot of traffic. Basically the point of the XML here was to put all the ready HTML in blobs and all the stuff that needed personalization as XML tags. So the final transform was pretty fast. The XSL transformer was heavily optimized and the trick was to stream its output straight to the response output stream and not do in memory buffering of the full content. That's still a good trick BTW. that most frameworks do wrong out of the box because in memory buffering is easier for the user. It can make a big difference for large responses.
These days, you can run whatever you want in a browser via wasm of course. But back then javascript was a mess and designers delivered photoshop files, at best. Which you then had to cut up into frames and tables and what not. I remember Google Maps and Gmail had just come out and we were doing a pretty javascript heavy UI for our CMS and having to support both Netscape and Internet Explorer, which both had very different ideas about how to do stuff.
I updated an XSLT system to work with then latest Firefox a couple of years ago. We have scripts in a different directory to the documents being transformed which requires a security setting to be changed in Firefox to make it work, I don't know if an equivalent thing is needed for Chrome.
??
I was transforming XML with, like, three lines of VBScript in classic ASP.
It was exactly because of the "holy grail of host-anywhere static templating". But somehow everybody that knew about it made a vow of silence and was forbidden from actually saying it.
Plug: here is a small project to get the basic information about the XSLT processor and available extensions. To use with a browser find the 'out/detect.xslt' file there and drag it into the browser. Works with Chrome and Firefox; didn't work with Safari, but I only have an old Windows version of it.
You needed the jvm and saxon and that was about it...
Depressed and quite pessimistic about the team’s ability to orchestrate Java development in parallel with the rapid changes to the workbook, he came up with the solution: a series of XSLT files that would automatically build Java classes to handle the Struts actions defined by the XML that was built by Visual Basic from the workbook that was written in Excel.
https://raganwald.com/2008/02/21/mouse-trap.html
HN Discussions:
https://news.ycombinator.com/item?id=120379 · https://news.ycombinator.com/item?id=947952