But there are a lot more CSS features now. While in the past, Turing completeness in CSS required humans to click on checkboxes, now CSS can emulate an entire CPU without JavaScript or user interaction.[1] So I wonder if DOOM could be purely CSS too, in real time.
[0]: https://keithclark.co.uk/labs/css-fps/

[1]: https://lyra.horse/x86css/
> Yes, Lyra Rebane built an x86 CPU entirely in CSS, but that technique is simply not fast enough to handle the game loop. So the result is something that uses a lot of JavaScript.
IDDQD and IDKFA did not work, unfortunately.

EDIT: https://cssdoom.wtf/
Interestingly, it was more choppy in Chromium.
I could not find a key for moving sideways ("strafing").
All in all, quite mind-boggling.
Firefox's WebRender is truly a great creation. While Chrome is faster at most things, especially anything involving JS, Firefox puts so much of its rendering on the GPU that moving elements around is incredibly fast.
CSS started as purely declarative styling, but between things like conditionals, math functions, and now these rendering tricks, it’s slowly creeping into “programmable system” territory. Not because it’s the right tool for it, but because browsers are becoming the real runtime. The interesting question isn’t “can Doom run in CSS”, it’s how much logic we’ll keep pushing into layers that were never meant to handle it.
At what point is CSS powerful enough to become a malware vector?
For static content like documents the distinction is easy to determine. When you think about applications, widgets, and other interactive elements the line starts to blur.
Before things like flex layout, positioning content with a 100% height was hard, resulting in JavaScript being used for layout and positioning.
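As a minimal sketch, the once-tricky "fill the remaining height" layout now takes a few lines of flexbox (class names here are illustrative):

```css
/* Full-height column layout that used to need JS measurement */
body {
  display: flex;
  flex-direction: column;
  min-height: 100vh;
}
main {
  flex: 1; /* stretches to fill whatever height the header/footer leave over */
}
```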
Positioning a dropdown menu, tooltip, or other content required JavaScript. Now you can specify the anchor position of an element via CSS properties. Determining which anchor position to use also required JavaScript, but with things like if() it can now be done directly in CSS.
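A sketch of what that looks like with CSS anchor positioning (still limited browser support at the time of writing; the `--menu-anchor` name and classes are made up for illustration):

```css
/* The element the dropdown should attach to */
.menu-button {
  anchor-name: --menu-anchor;
}

/* The dropdown, tethered to the button without any JS measurement */
.dropdown {
  position: absolute;
  position-anchor: --menu-anchor;
  position-area: block-end; /* place below the anchor */
}
```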
Implementing disclosure elements had to be done with a mix of JavaScript and CSS. Now you can use the details/summary elements and CSS to style the open/close states.
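For example, assuming markup like `<details><summary>More</summary>…</details>`, the open/close states can be styled without a line of script:

```css
/* Replace the default marker with a custom one that flips on open */
details > summary {
  cursor: pointer;
  list-style: none; /* hide the built-in triangle */
}
details > summary::before {
  content: "▸ ";
}
details[open] > summary::before {
  content: "▾ ";
}
```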
Animation effects when opening an element, on hover, etc. such as easing in colour transitions can easily be done in CSS now. Plus, with the reduced motion media query you can gate those effects to that user preference in CSS.
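A small sketch of that pattern (the `.button` class is illustrative): the transition is declared normally, then switched off for users who have asked for reduced motion.

```css
.button {
  transition: background-color 200ms ease-in;
}

/* Respect the user's OS-level motion preference, no JS feature detection */
@media (prefers-reduced-motion: reduce) {
  .button {
    transition: none;
  }
}
```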
FYI, if you want to use inspect element: the viewport div consumes mouse events. You can get rid of this with
#viewport {
pointer-events: none;
}
#viewport * {
pointer-events: initial;
}

> The problem: CSS can compute a number – 0 for visible and 1 for hidden – but you can’t directly use that number to set visibility. There is a new feature coming to CSS that solves this: if(), but right now it has only just shipped in Chrome.
> So I used a trick called type grinding. You create a paused animation that toggles visibility between visible and hidden. Then you set the animation-delay based on the computed value to determine which keyframe is used:
animation: cull-toggle 1s step-end paused;
animation-delay: calc(var(--cull-outside) * -0.5s);
@keyframes cull-toggle {
0%, 49.9% { visibility: visible; }
50%, 100% { visibility: hidden; }
}
> A negative animation delay on a paused animation jumps to that point in the timeline. So a delay of 0s lands in the visible range, and -0.5s lands in the hidden range. It’s a hack, but a functional one. When CSS if() gets wider support, we can replace this with a clean conditional.
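For comparison, here is a sketch of what the clean conditional could look like with CSS if() (Chrome-only as of writing; `--cull-outside` is the computed 0/1 flag from the quoted text, and the `.cell` selector is illustrative):

```css
/* Same toggle, no paused-animation hack: branch on the custom property */
.cell {
  visibility: if(style(--cull-outside: 1): hidden; else: visible);
}
```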
[1] Can Doom Run It? An Adding Machine in Doom https://blog.otterstack.com/posts/202212-doom-calculator/
Would that be better or worse for webdev? I don't know. But I like to ponder.
Good question. I personally think that separating by concerns is good. But when problems arise, like boundaries that get crossed, or preprocessors like Sass bolting language features onto CSS, maybe it proves that those things are actually not two concerns but one.
Lately I have been using Catch2 (a C++ testing framework) and wanted to benchmark some code. My first instinct was to look for a benchmarking framework. But to my surprise, Catch2 also has benchmarking support built in!
Most people would argue that a testing framework should not include a benchmarking framework. But using it myself showed me that the two concerns, benchmarking for performance regressions and testing, are similar. Similar enough that I would prefer to have them together.

Most people, me included, ask: "Should this be split into more pieces?" But seldom do we ask: "Should this be merged into one?"