Posted by speckx 3 days ago

Game devs explain the tricks involved with letting you pause a game (kotaku.com)
404 points | 223 comments
netcoyote 10 hours ago|
One of the fun features that I developed for Warcraft (the RTS) was to fade the screen to grayscale when the game is paused.

Since the game uses a 256 color palette, it was only necessary to update a few bytes of data (3x256) instead of redrawing the whole screen, so the effect was quick.
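
The trick can be sketched in C like this (struct layout and names are invented; the original Warcraft code isn't public). Only the 3x256 palette bytes change; the framebuffer is untouched:

```c
#include <stdint.h>

/* Hypothetical 256-color palette entry. */
typedef struct { uint8_t r, g, b; } PalEntry;

/* Convert the whole palette to grayscale in place using the Rec. 601
 * luma weights. Writing 3*256 bytes is far cheaper than redrawing
 * every pixel on screen. */
void palette_to_gray(PalEntry pal[256]) {
    for (int i = 0; i < 256; i++) {
        uint8_t y = (uint8_t)(0.299 * pal[i].r + 0.587 * pal[i].g + 0.114 * pal[i].b);
        pal[i].r = pal[i].g = pal[i].b = y;
    }
}
```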

I also used this trick when the game stalled due to missing network packets from other players. Initially the game would remain responsive when no messages were received, so that you could still interact and send commands. After a few seconds the game would go into a paused state with the grayscale screen to signal to the player that things were stuck. Then several seconds after that, a dialog box would show allowing a player to quit the game.

This was much less disruptive than displaying a dialog box immediately on network stall.
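
The escalation timeline could be sketched like this (the thresholds and names here are invented; the original values aren't public):

```c
/* UI state driven by how long we've gone without a network packet. */
typedef enum {
    NET_OK,                  /* packets flowing normally */
    NET_STALLED_RESPONSIVE,  /* stalled, but still accepting commands */
    NET_PAUSED_GRAY,         /* paused, screen faded to grayscale */
    NET_SHOW_QUIT            /* finally offer a quit dialog */
} NetUiState;

NetUiState stall_state(double seconds_since_last_packet) {
    if (seconds_since_last_packet < 1.0)  return NET_OK;
    if (seconds_since_last_packet < 4.0)  return NET_STALLED_RESPONSIVE;
    if (seconds_since_last_packet < 10.0) return NET_PAUSED_GRAY;
    return NET_SHOW_QUIT;
}
```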

upmostly 7 hours ago||
One of my favourite things about being on HN is reading comments like this, namely from devs who worked on games I played growing up. I absolutely love hearing stories from their past about little technical nuances like this comment. The more technical / specific, the better.

I'd honestly love to compile a book of "war stories" told by devs like netcoyote.

Maybe I will.

Net, if you're interested, hit me up.

ryanisnan 7 hours ago||
This is a great idea, but respectfully, if you're going to get traction you need to be the one reaching out and getting people to talk to you. Have a pitch, have an explicit ask, and be willing to put effort into making it happen.

Fantastic idea though, you should do it.

acomjean 5 hours ago||
There are a few of these floating around for older games, but the world needs more:

Ars Technica has a War Stories feature on game development.

https://arstechnica.com/video/series/war-stories

For Apple II games, John Romero did a podcast. It's decent, but he seems to have stopped doing them.

https://appletimewarp.libsyn.com/ (also on YouTube)

The Ted Dabney Experience has a lot of interesting interviews with older arcade game designers:

https://www.teddabneyexperience.com/episodes

colechristensen 2 hours ago||
Sid Meier's Memoir! is exactly that: Sid Meier wrote a memoir which is indeed mostly war stories of his involvement in making games.
RobRivera 8 hours ago|||
Omg I love this! I have been finding excuses to do little animation engine features that aren't on the critical path of development, for the sake of creative self-indulgence. One such feature that shipped was alpha-channel-based fading using the fundamental OpenGL fade parameter (under the hood it's a linear interpolation of alpha values over 256 levels, pieced together over a provided pair of timestamps).

I tell you what I'll do today on my dev time: I'll try implementing grayscale-on-pause without any research, and then compare notes (I'm assuming this WC code is available somewhere, which may be a bad assumption).

bombcar 8 hours ago|||
WC code is likely not (legally) available, but Wolfenstein and Doom both did similar palette tricks and are documented in the Black Books for each - https://fabiensanglard.net/three_books_update/index.html

Code for those is available.

RobRivera 7 hours ago||
Oh rad! Thanks for the heads up. I'll do a post-dev comparison to see what I land on and what was done here.
netcoyote 5 hours ago|||
While I don't have the original code, it's something along the lines of this:

    // for each palette entry:
    pal.r = pal.g = pal.b = (byte) (0.299 * pal.r + 0.587 * pal.g + 0.114 * pal.b);
RobRivera 1 hour ago|||
So I was able to create all the bits necessary to introduce the palette change in a similar manner (3x256 changes) on the triggers, and at the moment of truth, instead of grey I got a GREEN and PURPLE fadeout (I wasn't sure if you meant rbg or rgb for the ratios so I tried both).

I also tried 128 across the board for grey, and it just made a dull fade which may be the best I can do with my method.

I think it may simply be because, rather than having palettes controlled by RGB, I load predrawn sprites using SFML's sprite and texture classes. So the default RGBA is 255,255,255,255, and I have a sidequest to figure out the RIGHT WAY of applying RGB changes to predrawn sprites.

It may very well be a simple matter of "sfml does it differently" or perhaps having grey variants of all sprites and toggling. I feel there has to be a way to accomplish the fade to grey programmatically. Fun little dive tho! I'll have to post an update when I figure it out.

RobRivera 4 hours ago|||
I haven't gotten behind the console, but that's like, exactly what I was gonna do, except precompute like 5 or 6 tween values for r,g,b between 255 and the target for greyscale.

But rather than do that and cache them for timing triggers, I kind of like the scaling down by multiplication approach.

Edit: manipulate the rgb values, that is. I wouldn't have converged on those hard values on my own.

KellyCriterion 9 hours ago|||
Palette rotation was also heavily used by Ultima & Origin games up to U8 - Pagan
netcoyote 5 hours ago|||
Oh, and I forgot to mention that pause had to be synchronized across the network, so the pause button would pause for all players.

And in the "this is why we can't have nice things", that also introduced problems, because we didn't want a player who was losing to keep pausing the game until the winning player quit out of frustration, so I think we kept a per-player pause counter, which would only be restored if other players also paused? (I don't quite remember all the details, just that we had to prevent yet another abuse vector).

spike021 4 hours ago||
I remember something like this back when playing StarCraft (maybe Brood War, it's been a minute) online.
herodoturtle 8 hours ago||
This was a neat design choice; I remember it well.

And also that my “sound card works perfectly!”

vintermann 17 hours ago||
One of the things that impressed me in Quake (the first one) was the demo recording system. The system was deterministic enough that it could record your inputs/the game state and just play them back to get a gameplay video. Especially given that Quake had state-of-the-art graphics, and video playback on computers was otherwise a low-res, resource-intensive affair at the time, it was way cool.

It always surprised me how few games had that feature - though a few important ones, like StarCraft, did - and it only became rarer over the years.

ndepoel 16 hours ago||
It wasn't really that much to do with determinism. Quake uses a client-server network model all the time, even when you're only playing a local single-player game. What the demo recording system does is capture all of the network packets that are being sent from the server to the client. When playing back a demo, all the game has to do is run a client and replay the packets that it originally received from the server. It's a very elegant system that naturally flows out of the rather forward-looking decision to build the entire engine around a robust networking model.
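
A minimal sketch of that capture-the-packets approach, in the spirit of Quake's .dem files (all names and sizes here are invented, not id Software's):

```c
#include <stdint.h>
#include <string.h>

/* Append every server-to-client message verbatim; playback just feeds the
 * same bytes, in order, to the same client parser that handled live traffic,
 * so the client can't tell a demo from the network. */
#define DEMO_MAX_MSGS 64
#define DEMO_MAX_LEN  256

typedef struct {
    int     count;
    int     len[DEMO_MAX_MSGS];
    uint8_t data[DEMO_MAX_MSGS][DEMO_MAX_LEN];
} Demo;

void demo_record(Demo *d, const uint8_t *msg, int len) {
    if (d->count >= DEMO_MAX_MSGS || len > DEMO_MAX_LEN) return;
    memcpy(d->data[d->count], msg, (size_t)len);
    d->len[d->count++] = len;
}

/* Replay: hand each captured message to the client-side parser. */
void demo_play(const Demo *d, void (*client_parse)(const uint8_t *, int)) {
    for (int i = 0; i < d->count; i++)
        client_parse(d->data[i], d->len[i]);
}
```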
anonymous_sorry 15 hours ago|||
I don't see why it makes a difference for this purpose that you're replaying network packets or controller inputs or any other interface to the game engine. The important thing is that there is some well-defined interface. I guess designing for networked multiplayer does probably necessitate that, but if the engine isn't deterministic it still isn't going to work.

There was a twitter thread years ago (which appears to be long gone) about how the SNES Pilot Wings pre-game demo was just a recording of controller inputs. For cartridges manufactured later in the game's life, a plane in the demo crashes rather than landing gracefully, due to a revised version of a chip in the cartridge. The inputs for the demo were never re-recorded, so the behaviour was off.

ndepoel 15 hours ago|||
It does make quite a big difference. The network packets received from the server in Quake will tell you exactly what state the game is in at any point in time. They contain information about the position and state of every entity and their motion, compressed via delta encoding. That means there's very little room for misinterpretation on the client side that would lead to de-sync issues. In fact clients have quite a lot of freedom in how they want to represent said game state, and can for example add animation interpolation to smoothen things out.

The example you mention of demo playback de-syncing when the circumstances slightly change, that is exactly what you get when you only record inputs from the player. Doom actually did this too for its networking model and demo playback system. That relies much more on the engine being deterministic and the runtime environment behaving consistently, because each client that replays those inputs has to run the exact same game simulation, in order for the resulting game states to match.

vvanders 15 hours ago||||
Look into dead reckoning vs lock step for networking. Lockstep requires determinism at the simulation layer, dead reckoning can be much more tolerant of differences and latency. Quake and most action games tend to be dead reckoning (with more modern ones including time rewind and some other neat tricks).

Very common that replay/demo uses the network stack if it's present in a game.
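
The core of dead reckoning is just extrapolation between updates; a sketch (field names are illustrative, real engines track full transforms and blend corrections):

```c
/* Last authoritative update received for an entity: position, velocity,
 * and when the update applied. */
typedef struct {
    float x;                 /* 1-D position for simplicity */
    float vx;                /* reported velocity */
    float last_update_time;  /* when x/vx were valid */
} Entity;

/* Predicted position at time `now`, assuming constant velocity since the
 * last update. When the next update arrives, a real engine would snap or
 * smoothly blend toward the corrected position. */
float dead_reckon_x(const Entity *e, float now) {
    return e->x + e->vx * (now - e->last_update_time);
}
```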

mjlee 12 hours ago|||
I used to be a professional sailor, and love finding nautical terminology in programming. At sea dead reckoning is navigating using the speed and direction of the ship, and adding tide and wind to calculate a fix based on the last known position. The term dates back to the 1600s.

It is fun to point at a chart and confidently state “We’re here! I reckon...”

drzaiusx11 11 hours ago|||
There's a book I read a while back named "Longitude" that traces the storied scientific quest to improve upon dead reckoning by devising ever more accurate timepieces for use on ships. Iirc it was a fun read, if anyone else finds that sort of thing interesting (as I do.)
mjlee 9 hours ago|||
It's a great read! A story of how the scientific elite stalled progress because the right answer wasn't the one they hoped it would be, and didn't come from the sort of person they thought it should.

If you get the chance, you can see some of Harrison's chronometers at the Royal Observatory in London, though I don't know if they're always on display.

I'll add a recommendation for Sextant by David Barrie.

drzaiusx11 3 hours ago||
Thanks for the recommendation, I'll add it to the shortlist!
sebg 9 hours ago|||
What other books do you like?
Jare 9 hours ago|||
I think the introduction of the term in networking simulations and games came with SIMNET https://en.wikipedia.org/wiki/SIMNET and continued more widely in the DIS https://en.wikipedia.org/wiki/Distributed_Interactive_Simula...

I first learned of it in some writing about a 1997 multiplayer game called, heh, Dead Reckoning.

cubefox 13 hours ago|||
An interesting thing about a lockstep solution which only considers inputs is that any RNG required in the game must be generated from the input history somehow. This could lead to players being able to manipulate their luck with extremely precise inputs.
mackman 11 hours ago|||
The other interesting trick is that you need a separate RNG for visual-only effects such as particles, distinct from the one you use for the physics simulation. Depending on the game, during replays you could position the camera differently, and then particle effects would render differently depending on what's on screen. Obviously that shouldn't affect the way objects decide to break during the physics simulation.
10000truths 10 hours ago||||
> a lockstep solution which only considers inputs

Nothing stops you from adding a PRNG seed parameter to initialize your deterministic game engine.

Jare 9 hours ago|||
Typical deterministic game engines will do this, send it to every machine as part of the initial game state, and also check the seed across machines on every simulation frame (or periodically) to detect desyncs.
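
A toy version of that seed-and-check pattern (xorshift32 stands in for whatever PRNG a real engine uses; the desync check here compares raw generator state, where real engines typically checksum broader game state):

```c
#include <stdint.h>

typedef struct { uint32_t state; } Rng;

/* Deterministic PRNG: same seed, same sequence, on every machine. */
uint32_t rng_next(Rng *r) {
    uint32_t x = r->state;
    x ^= x << 13;
    x ^= x >> 17;
    x ^= x << 5;
    return r->state = x;
}

/* Periodic desync check: nonzero while two machines still agree. */
int rng_in_sync(const Rng *a, const Rng *b) {
    return a->state == b->state;
}
```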
cubefox 9 hours ago|||
That requires synchronizing the seed though, which could lead to other subtle problems elsewhere if you can't do it. E.g. when comparing offline speedruns, everyone would have a different seed, so some players could have more luck than others even with the same inputs, which would be unfair. (Though I can't think of anything else at the moment.)
chowells 7 hours ago||
That's not more of a problem than synchronizing the player names at game start. It's table stakes for an online game.
DonHopkins 7 hours ago|||
https://news.ycombinator.com/item?id=30359560

DonHopkins on Feb 16, 2022 | parent | context | favorite | on: Don't use text pixelation to redact sensitive info...

When I implemented the pixelation censorship effect in The Sims 1, I actually injected some random noise every frame, so it made the pixels shimmer, even when time was paused. That helped make it less obvious that it wasn't actually censoring penises, boobs, vaginas, and assholes, because the Sims were actually more like smooth Barbie dolls or GI-Joes with no actual naughty bits to censor, and the players knowing that would have embarrassed the poor Sims.

The pixelized naughty bits censorship effect was more intended to cover up the humiliating fact that The Sims were not anatomically correct, for the benefit of The Sims own feelings and modesty, by implying that they were "fully functional" and had something to hide, not to prevent actual players from being shocked and offended and having heart attacks by being exposed to racy obscene visuals, because their actual junk that was censored was quite G-rated. (Or rather caste-rated.)

But when we later developed The Sims Online based on the original The Sims 1 code, its use of pseudo random numbers initially caused the parallel simulations that were running in lockstep on the client and headless server to diverge (causing terribly subtle hard-to-track-down bugs), because the headless server wasn't rendering the randomized pixelization effect but the client was, so we had to fix the client to use a separate user interface pseudo random number generator that didn't have any effect on the simulation's deterministic pseudo random number generator.
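
The fix amounts to keeping two independent streams; a sketch (the LCG constants are the classic Numerical Recipes ones, and everything else is invented, not the actual The Sims Online code):

```c
#include <stdint.h>

/* One generator that the deterministic simulation owns, and one that only
 * the renderer touches, so a headless server that never draws the
 * pixelation shimmer still advances the simulation stream identically. */
typedef struct { uint32_t sim; uint32_t ui; } GameRngs;

static uint32_t lcg(uint32_t *s) { return *s = *s * 1664525u + 1013904223u; }

uint32_t sim_rand(GameRngs *g) { return lcg(&g->sim); } /* affects game state */
uint32_t ui_rand(GameRngs *g)  { return lcg(&g->ui); }  /* shimmer/FX only */
```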

[4/6] The Sims 1 Beta clip ♦ "Dana takes a shower, Michael seeks relief" ♦ March 1999:

https://www.youtube.com/watch?v=ma5SYacJ7pQ

(You can see the shimmering while Michael holds still while taking a dump. This is an early pre-release so he doesn't actually take his pants off, so he's really just sitting down on the toilet and pooping his pants. Thank God that's censored! I think we may have actually shipped with that "bug", since there was no separate texture or mesh for the pants to swap out, and they could only be fully nude or fully clothed, so that bug was too hard to fix, closed as "works as designed", and they just had to crap in their pants.)

Will Wright on Sex at The Sims & Expansion Packs:

https://www.youtube.com/watch?v=DVtduPX5e-8

The other nasty bug involving pixelization that we did manage to fix before shipping, but that I unfortunately didn't save any video of, involved the maid NPC, who was originally programmed by a really brilliant summer intern, but had a few quirks:

A Sim would need to go potty, and walk into the bathroom, pixelate their body, and sit down on the toilet, then proceed to have a nice leisurely bowel movement in their trousers. In the process, the toilet would suddenly become dirty and clogged, which attracted the maid into the bathroom (this was before "privacy" was implemented).

She would then stroll over to toilet, whip out a plunger from "hammerspace" [1], and thrust it into the toilet between the pooping Sim's legs, and proceed to move it up and down vigorously by its wooden handle. The "Unnecessary Censorship" [2] strongly implied that the maid was performing a manual act of digital sex work. That little bug required quite a lot of SimAntics [3] programming to fix!

[1] Hammerspace: https://tvtropes.org/pmwiki/pmwiki.php/Main/Hammerspace

[2] Unnecessary Censorship: https://www.youtube.com/watch?v=6axflEqZbWU

[3] SimAntics: https://news.ycombinator.com/item?id=22987435 and https://simstek.fandom.com/wiki/SimAntics

setr 13 hours ago||||
If it's deterministic lockstep, then all you need to do is record inputs and replay them, since the engine itself is guaranteed to behave the same. If it's client/server and non-deterministic, then you need to record the entire state of the system at every step (which you'll naturally receive from the server) to replay. The main difference would be in how large a replay file gets; more dynamism naturally implies more information to record. Large unit quantities in e.g. an RTS behave more sanely with deterministic replay.

The other negative with deterministic input-based replay is what you've said: if the engine deviates in any manner, the replay becomes invalidated. You'd probably have to ship every version of the engine and run each replay on the relevant release. Just replaying and re-recording the inputs on the new version wouldn't help, because the outcome behavior would inevitably fall out of sync with the original.

I'm also not sure how one would support scrubbing, except by also having inverse operations defined for every action or by fully-capturing state at various snapshots and replaying forward at like 10x speed.
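
The snapshot-plus-replay-forward idea can be sketched like this (`State` and `step` are trivial stand-ins for a real engine's state and tick function):

```c
/* Keep a full state snapshot every SNAP_INTERVAL ticks; to scrub to an
 * arbitrary tick, restore the nearest earlier snapshot and re-simulate
 * forward from the input log. */
#define SNAP_INTERVAL 100

typedef struct { int tick; long score; } State;

/* Stand-in for one deterministic simulation step. */
void step(State *s, int input) { s->tick++; s->score += input; }

/* snapshots[k] holds the state at tick k*SNAP_INTERVAL; inputs[t] is the
 * input applied on tick t. */
State seek(const State *snapshots, const int *inputs, int target) {
    State s = snapshots[target / SNAP_INTERVAL]; /* nearest keyframe */
    while (s.tick < target)
        step(&s, inputs[s.tick]);                /* replay forward */
    return s;
}
```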

forrestthewoods 5 hours ago|||
> I don't see why it makes a difference for this purpose that you're replaying network packets or controller input

Building a simulation that has perfect determinism is incredibly time consuming. Incredibly. Especially one that is identical across platforms, chipsets, and architectures.

Deterministic simulation replay also breaks anytime you change the simulation. Which is kind of obvious. But quite meaningful.

In any case, I’ve shipped games that use both solutions. And let me tell you, deterministic simulation from input is an order of magnitude more effort to build, test, and maintain!

_jackdk_ 13 hours ago||||
Carmack wrote a really interesting .plan about this. It seems to be written between Q2 and Q3A, and cites the Windows message queue as a big inspiration:

https://github.com/ESWAT/john-carmack-plan-archive/blob/mast...

Narishma 10 hours ago||||
I'm not sure that's the reason since Doom and Wolfenstein 3d before it also had such demo systems but they didn't use a client/server model.
Jare 9 hours ago|||
Doom and Wolf3d and many other multiplayer games of the 90s (including some I worked on) were deterministic/lockstep and machines only needed to exchange inputs (in a deterministic manner ofc).

Quake was completely different. The client/server term was aimed at describing that the game state is computed on the server, updated based on client inputs sent to the server, and then the game state is sent from the server to the clients for display. Various optimizations apply.

Deterministic/lockstep games more often used host/guest terminology to indicate that a machine was acting as coordinator/owner of the game, but none of them were serving state to others. This terminology is not strict and anyone could use those terms however they wanted, but it is a good ballpark.

lelanthran 9 hours ago|||
It doesn't need a client server model, but it does need a message pump design.

Then you record the messages as they are received, and if networked, tx and rx the messages in the main pump loop.

If not networked, everything still works as normal: game engine itself never knows the difference.

trashface 9 hours ago||||
The engine needs to save the RNG seed too and various other details, the goal is definitely to make it as deterministic as possible (and yes saving the packets is part of that).
syspec 10 hours ago||||
"All the game has to do is run a client and replay the packets"

---

Sure, after you build the sophisticated system that supports that, then you "just" do as you described. EASY!

Rohansi 9 hours ago||
It sounds like it would be complicated but it's really not! The server should already be sending a snapshot of the world when you connect and then stream deltas after that. If you capture all of the packets the server sends you can mock the connection to the server and it should just work because the client renders everything based on that data. You'll only need to do a bit of work to disable client input etc.
alfg 10 hours ago||||
It's just capturing inputs and replaying them.
furyofantares 10 hours ago||
That's not true though, is what they're saying. Quake demo files are server-to-client packets, results of the simulation, not client-to-server packets, the inputs.

If you wanted to add random critical hits and random bullet spread based on the pixels in a live feed of a lava lamp cam, clients could still record .dem files and they would still work.

andrewmcwatters 3 hours ago||||
[dead]
kahrl 11 hours ago||||
[flagged]
kahrl 11 hours ago|||
[flagged]
mackman 11 hours ago||
Instant replays that require long-term deterministic behavior have to be bit-perfect in a way that is dramatically harder to implement if you're also trying to do network synchronization. The hard parts of each of those are fundamentally different, and trying to do them at the same time terrifies me. I have shipped console games doing both (independently) and was responsible for debugging determinism.
purple-leafy 12 minutes ago|||
Hey I am actually working on a browser game that is fully-deterministic (except for player inputs) and so I can basically replay games entirely. Think like a chess engine game.

I’ve been obsessing over this determinism and replayability for months, to the point where any game played is fully replayable with the exact same events as the original game. So you can play, then watch a recording and spectate your played games from different actors' perspectives (enemy perspective etc).

My rendering and game logic are fully decoupled.

I wrote the “engine” for the game from scratch. I think I only use one third party library currently.

Cool to see this discussion

klodolph 12 hours ago|||
> The system was deterministic enough that it could record your inputs/the game state and just play them back to get a gameplay video.

NOT how demos work in Quake. It’s more like Quake uses a client/server architecture, and the demo is a capture of the messages.

https://www.gamers.org/dEngine/quake/Qdem/dem-1.0.2.html

eterm 17 hours ago|||
Related to that is the ability to watch games using the game-client too.

This used to be a promoted feature in CS, with "HLTV/GOTV", but sadly disappeared when they moved to CS2.

Spectating in-client is such a powerful way to learn what people are doing; you can't always see that even in a recording from their perspective.

gryfft 16 hours ago|||
> Related to that is the ability to watch games using the game-client too.

Halo 3's in-engine replay system was the high water mark of gaming for me.

manuhabitela 16 hours ago||||
It also allowed you to watch games _live_! Long before streaming video was a reality.

Ah, the good old days of watching live competition of quake through the game itself, chatting with others basically through the game console.

Pretty cool system.

dabber21 15 hours ago|||
I think some games allow this; I remember watching DotA 2 tournaments this way.

The game engine, Source, is also using client-server architecture

Paradigma11 13 hours ago|||
Also allows maphacks, not cool.
saulr 16 hours ago|||
This absolutely still exists - I have a library for reading Source 2 (CS2, Deadlock etc) demo files and streams (HTTP ones like CSTV).

https://github.com/saul/demofile-net

eterm 16 hours ago||
Demo files work, but I'm talking about spectating live. The "Watch" tab was removed and the ability to just browse and spectate the top games currently being played.

I'm sure the technology still exists in the engine, but it's no longer the key feature it once was. HLTV/GOTV was launched with some fanfare back in the day.

Timshel 15 hours ago|||
Guessing too much potential for abuse if the same server was handling both match and spectating.
saulr 15 hours ago||
Spectators don't watch the game on the same server that's hosting the game. The host server sends the traffic to a 'relay' on a delay, which spectators then connect to. Similarly for the HTTP streamed games, the game server is writing the data for spectators on a delay.
throwthrowuknow 15 hours ago|||
Absolutely crazy they haven’t revived this yet given the popularity of streaming.
greazy 16 hours ago|||
Interesting you mention StarCraft. The replay feature could diverge due to the non-deterministic nature of the game.

https://news.ycombinator.com/item?id=21920508

raincole 13 hours ago|||
The author of the comment you linked to doesn't know what they're talking about. (Edit: given the context, they know what they're talking about, but you don't.)

A game having random mechanisms has absolutely nothing to do with whether it's deterministic.

Slay the Spire is 100% deterministic, gameplay-wise. So are all the online poker games.

rcxdude 15 hours ago||||
That's not the kind of non-determinism that would cause replay divergence. The PRNG seed is stored in the replay (if it weren't, almost every replay would diverge very quickly; and since the multiplayer works the same basic way, the game would basically not function at all).
RedNifre 15 hours ago||||
The way I remember it was that replay playback would only break if you played a replay with a different game version than it was recorded with.
esrauch 12 hours ago|||
There were definitely sync bugs with replays at various points.

There were even desync bugs in live multiplayer games; there was detection that ended the game on a desync, which in turn meant exploits that would intentionally cause a desync (typically involving cancelled Zerg buildings, for some reason).

shaan7 13 hours ago|||
or, if you are replaying a single-player game that you saved+loaded (i.e. the replay only worked if the full game happened in one go without any loads).
cataflam 12 hours ago|||
You misinterpreted the comment you are citing.

This non-determinism would not and did not cause replays to diverge (the PRNG seed was most likely stored and would reproduce exactly the same results).

applfanboysbgon 17 hours ago|||
Checking in as a random indie developer who still prioritises determinism in my engine. I don't understand why so many games/engines sacrifice it when it has so much utility.
JoshTriplett 16 hours ago|||
I think if it were as simple as "remember the RNG seed", game developers would do it every time. But determinism also means, for instance, running the physics engine at a deterministic timestep regardless of the frame rate, to avoid differences in accumulated error or collision detection. And that's something that needs designing in from day one.

Thank you for still prioritizing it.
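
That "deterministic timestep regardless of frame rate" point is the classic fixed-timestep accumulator pattern; a minimal sketch (the counter stands in for a real `simulate_one_step()`):

```c
/* Rendering runs at whatever rate it likes; simulation always advances in
 * exact DT-sized steps, so the same inputs yield the same states at any
 * frame rate. */
#define DT 0.01 /* 100 Hz simulation */

typedef struct { double accumulator; int sim_steps; } Loop;

/* Called once per rendered frame with that frame's wall-clock duration.
 * Returns how many fixed steps were simulated this frame. */
int advance(Loop *l, double frame_time) {
    int steps = 0;
    l->accumulator += frame_time;
    while (l->accumulator >= DT) {
        l->accumulator -= DT;
        l->sim_steps++; /* a real engine would simulate one step here */
        steps++;
    }
    return steps;
}
```

The leftover fraction stays in the accumulator, so a slow frame is made up for on the next one instead of being lost.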

magicalhippo 16 hours ago|||
> running the physics engine at a deterministic timestep

As well as using special library versions of floating-point functions which don't behave the same across different processors I suppose, if you want to be safe.

Eg cr-libm[1] or more modern alternatives like core-math[2].

[1]: https://github.com/SixTrack/crlibm

[2]: https://core-math.gitlabpages.inria.fr/

Krutonium 16 hours ago|||
Tying physics to framerate at all is a mistake. Like, should be filed as a bug mistake.

There's no scenario in which that's desirable.

And yet even Rockstar gets it wrong. (GTA V has several framerate-dependent bugs.)

mrob 15 hours ago|||
It's desirable for arcade games, which have fixed hardware including the display. There's no possibility of upgrading for better framerate, and the game can be designed so slowdown is rare or non-existent. Tying the physics to the framerate gives you very low and very consistent input latency with minimum developer effort.
Grimblewald 1 hour ago||
Right, all valid points, but consider the scale of a game like those coming out of Rockstar. I'd understand for indie games and arcade games, but a single-player RPG that will likely never be seen in arcade settings? Seems odd to me to see it here. Rockstar has the resources to do it properly, one would think, no?
JoshTriplett 16 hours ago||||
I completely agree, but it's an easy mistake to make.
Keyframe 16 hours ago|||
not framerate of rendering but physics running at (its own) fixed frame rate.
nottorp 16 hours ago|||
Every game logic update, not only physics, should run on a timer that's fully independent from the frame rate.

The only place where that doesn't matter is fixed hardware - i.e. old generation consoles, before they started to make "pro" upgrades.

JoshTriplett 7 hours ago||
> i.e. old generation consoles, before they started to make "pro" upgrades.

And before it was realistically possible to port a game to run on multiple consoles without a complete rewrite.

bregma 15 hours ago|||
I think you mean timestep. The video frames get updated on one timestep (the so-called "frame rate" because it is the rate at which video frames get redrawn, the inverse of its timestep), physics gets updated on a separate timestep, and gameplay or input or network polling can be updated on its own timestep.
Keyframe 10 hours ago||
pretty much, over the dozen or so game and rendering engines I made over the decades name mutated from tick to timestep to frame (rate) to refresh rate (hz) to tick again.. it doesn't matter as long as every system is decoupled and rendering is unbounded (if hardware/display combo supports it). This needs thinking from day one. Cool stuff you can do then is determinism, you can do independent timers which go forward, halt, backward in time, different speed multipliers over those (so some things run slower, faster, everything goes slower / faster), etc.
Jare 9 hours ago||||
Mostly because determinism is an all-or-nothing proposition. Either EVERYTHING in game logic is perfectly deterministic and isolated from everything else, or it's pretty much as if nothing was. So if you want to commit to determinism, you have to be constantly vigilant and keep debugging these maddening types of bugs. Whether this investment is worth it or not is up to each dev.

Sometimes you can find small areas of the game that can be deterministic and worth it. In a basketball game I worked on in the 90s, I designed the ball physics to be deterministic (running at 100 Hz). The moment the ball left the player's hands it ran deterministically; we knew if it was going to make the shot and, if not, where the rebound would go.

bob1029 17 hours ago||||
Determinism isn't essential to achieve record/playback experiences. You could just record the transform of the player at some fixed interval and then replay it with interpolation. This is arguably more "deterministic" in a strategic sense of shipping a viable product.
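
That sample-and-interpolate alternative looks roughly like this (1-D position only; a real recorder would store full transforms for every entity, and names here are invented):

```c
/* Record position every SAMPLE_DT seconds; on playback, linearly
 * interpolate between the two nearest samples. No determinism required:
 * the recording is the data, not the means to recompute it. */
#define SAMPLE_DT 0.1f /* 10 samples per second */

/* Playback position at time t. Assumes t lies within the recording, so
 * samples[i] and samples[i + 1] both exist. */
float playback_x(const float *samples, float t) {
    int   i = (int)(t / SAMPLE_DT);
    float u = (t - i * SAMPLE_DT) / SAMPLE_DT; /* 0..1 between samples */
    return samples[i] + u * (samples[i + 1] - samples[i]);
}
```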
magicalhippo 16 hours ago||
The player is just one entity, you'd need to do the same to any other non-trivial entity. And you couldn't use fixed intervals and naive interpolation, otherwise you'd have entities clipping the ground when bouncing etc.
Cthulhu_ 16 hours ago|||
Probably (armchair HN reader, not a game developer here) due to dealing with multiplayer latency and / or performance / multithreading / scalability.
jval43 17 hours ago|||
Bungie's Marathon series (1994) had the same recording system, as other commenters mentioned, due to networked multiplayer.

What's totally insane is that the modern engine rewrite Aleph One can also play back such old recordings, for M2 Durandal (1995) and Infinity (1996) at least.

GaelFG 17 hours ago|||
I'm pretty sure it's because it's in fact 'just' a cool side effect of a common network architecture optimisation from the time, when you couldn't send the 'state' of the entire game even with only delta modifiers, so you make the game deterministic and only synchronize inputs :) An example article I remember: https://www.gamedeveloper.com/programming/1500-archers-on-a-...

The main downside, which probably caused the disappearance, is that any patch to the game makes the replay file unusable. Also, at the time (not sure about Quake) games often ran at a fixed framerate; today the upsides of delta-time-based frame calculation AND multithreading/multi-platform targets probably make it harder to stay deterministic (especially for games where you want to optimize input latency).

amiga386 17 hours ago|||
I think it's more the patching thing that made "collect and replay inputs" less common.

Networked games have a "tickrate", just for the networking/state aspect. For example, Counter-Strike 2 has a 64Hz tickrate by default. They also typically have a fixed time interval for physics engines. Both of these should be completely independent of framerate, because that's jittery and unpredictable.
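The decoupling described here (fixed simulation tickrate, variable render framerate) is usually implemented with a time accumulator. A minimal sketch in Python; the 64 Hz figure matches the Counter-Strike 2 example above, everything else is illustrative:

```python
TICK = 1.0 / 64.0  # fixed 64 Hz simulation tickrate

def run(frame_times):
    """frame_times: wall-clock duration of each render frame, in seconds.
    Returns how many fixed physics ticks were executed in total."""
    accumulator = 0.0
    ticks = 0
    for dt in frame_times:
        accumulator += dt
        # Run as many whole ticks as fit into the accumulated time;
        # any remainder carries over to the next frame.
        while accumulator >= TICK:
            ticks += 1       # one deterministic simulation step goes here
            accumulator -= TICK
    return ticks

# One second of game time yields 64 ticks whether it was rendered
# as 64 even frames or as 2 long ones.
assert run([1.0 / 64.0] * 64) == run([0.5, 0.5]) == 64
```

(1/64 is a power of two, so it is exactly representable in binary floating point and the assertion holds exactly.)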

pornel 2 hours ago||||
Quake 1 ran game logic and model animation at 10 ticks per second (a reasonable choice when PCs running Quake at 20fps were impressive).

Camera and linear motion of objects were interpolated to the framerate.

foota 17 hours ago||||
Fun fact: Overwatch must have done a similar thing, because they would let you play back games up until some release, after which you could no longer replay them unless you'd saved the render.

I think if I remember right there were also funny moments where things didn't look right after patches?

MattRix 12 hours ago||
Overwatch also has “kill cams”, which basically create an entire alternate game state to show you how the enemy killed you, and they have the “Play of the Game” system that replays the coolest moment of the game at the end. It’s impressive tech.
Silphendio 16 hours ago|||
You don't need to run the whole game at a fixed framerate, only the physics. That's actually common practice.

The bigger problem is that floating point math isn't deterministic. So replays need to save key frames to avoid drift.

Quake used fixed point math.

anthk 16 hours ago||
Quake needs an FPU; if that were true it would run on a 486 SX.
Silphendio 16 hours ago||
You're right, I must have gotten that mixed up. Sorry.

I guess floats are still mostly deterministic if you use the exact same machine code on every PC.

mackman 11 hours ago|||
One of the hardest determinism bugs I had to solve on the PlayStation 3 was that the PPU and the SPU actually used different instruction sets and had different internal floating point register sizes. We had a multithreaded physics simulation, and during instant replay we had to ensure that the job scheduler sent the exact same work to the same cores, or we got back subtly different floating point values, which of course immediately caused major divergences.
12_throw_away 6 hours ago|||
> I guess floats are still mostly deterministic if you use the exact same machine code on every PC.

Nope, they are not. Part of the problem is that "mostly deterministic" is a synonym for "non-deterministic".

kbolino 4 hours ago||
Floating-point non-determinism issues usually come from one of these sources:

- Inconsistent use of floating-point precision, which can arise surprisingly easily if e.g. you had optional hardware acceleration support, but didn't truncate the intermediate results after every operation to match the smaller precision of the software version and other platforms.

- Treating floating-point arithmetic as associative, whether at the compiler level, the network level, or even the code level. Common violations here include: changing compiler optimization flags between versions (or similar effects from compiler bugs/changes), not annotating network messages or individual events with strictly consistent sequence numbers, and assuming it's safe to "batch" or "accumulate" deltas before applying them.

- Relying on hardware-accelerated trigonometric etc. functions, which are implemented slightly differently from FPU to FPU, since these, unlike basic arithmetic, are not strictly specified by IEEE. There are many reasons for them to vary, too, like: which argument ranges should be closest to correct, should a lookup table be used to save time or avoided to save space, after how many approximation iterations should the computation stop, etc.
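The associativity point above is easy to demonstrate: IEEE-754 addition rounds after every operation, so the grouping matters.

```python
# IEEE-754 addition is not associative: the same three values summed
# with a different grouping can round differently. Any reordering
# (threads finishing in a different order, batched deltas, changed
# compiler flags) can therefore break bit-exact determinism.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c
right = a + (b + c)

assert left != right  # differs in the last bit on standard doubles
```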

pornel 2 hours ago|||
It wasn't deterministic. It didn't record the inputs. It recorded the basic state of the objects you could see.

Deterministic game sync is a completely different approach more often used in RTS games. Quake had non-deterministic authoritative central server + clients getting an incomplete view of the world.

zaptheimpaler 10 hours ago|||
Age of Empires 4 also does this. It's very cool and saves a lot of space, but it does have some significant downsides, at least the way it's implemented there: you can't rewind replays, and they become unwatchable when the game updates significantly.
reitzensteinm 9 hours ago||
You can still rewind by storing checkpoints, resuming at the most recent before the seek time and fast forwarding from there.
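That checkpoint-and-fast-forward scheme fits in a few lines, assuming a deterministic step function. A sketch; the names and the trivial `step` are illustrative:

```python
CHECKPOINT_EVERY = 100  # snapshot the full game state every N ticks

def step(state, inp):
    # Stand-in for one deterministic simulation tick.
    return state + inp

def record(initial, inputs):
    """Simulate once through the whole input log, keeping snapshots."""
    checkpoints = {0: initial}
    state = initial
    for tick, inp in enumerate(inputs, start=1):
        state = step(state, inp)
        if tick % CHECKPOINT_EVERY == 0:
            checkpoints[tick] = state
    return checkpoints

def seek(checkpoints, inputs, target):
    """Rewind/seek: load the latest checkpoint at or before the target
    tick, then re-simulate forward from there."""
    base = max(t for t in checkpoints if t <= target)
    state = checkpoints[base]
    for tick in range(base, target):
        state = step(state, inputs[tick])
    return state
```

Seeking backwards then costs at most CHECKPOINT_EVERY ticks of re-simulation instead of a full replay from the start.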

The updates thing is a shame. You can store multiple configuration files for balance patches, but executable code is much harder.

limaoscarjuliet 14 hours ago|||
In some games - most famously Doom - the entire multiplayer is based on exchanging just the inputs, and the games on all connected computers are deterministic enough to produce the same outcome on all of them.

I am one of the authors of the Fire Fight game (1996-ish) and we pulled the same stunt. It was actually easy; we just had to build our own "random number generator" and fix all the bugs with uninitialized memory :-)
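The "own random number generator" trick usually means a tiny seeded PRNG that every peer runs in lockstep, so the same seed plus the same inputs yields the same rolls on every machine. A sketch using classic linear-congruential constants (illustrative, not Fire Fight's actual generator):

```python
class LCG:
    """Deterministic PRNG: identical seeds produce identical sequences
    on every platform, unlike a system RNG whose algorithm may vary."""

    def __init__(self, seed):
        self.state = seed & 0xFFFFFFFF

    def next(self):
        # Well-known 32-bit LCG constants (from Numerical Recipes).
        self.state = (1664525 * self.state + 1013904223) & 0xFFFFFFFF
        return self.state

    def roll(self, n):
        """Integer in [0, n), e.g. for damage or AI decisions."""
        return self.next() % n

# Two peers seeded identically stay in sync roll after roll.
p1, p2 = LCG(1996), LCG(1996)
assert [p1.roll(6) for _ in range(10)] == [p2.roll(6) for _ in range(10)]
```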

ductionist 6 hours ago||
I love Fire Fight! I got the demo on a PC Gamer disc when I was a kid and played it forever. And a couple of years ago when I built a retro PC, it was one of the first games I loaded up :)
awakeasleep 11 hours ago|||
The alternative to this was the first WORMS game where, if I remember properly, replays were nondeterministic and the next turn picked up from the replay's outcome, not the initial action's.
nickjj 12 hours ago|||
Demos were really useful for helping validate competitive play too. While certain anti-cheat programs were available such as PunkBuster (Quake 3), having gaming ladders request everyone records a demo and upload it from their POV was a very low friction way to deter cheating. The idea being, no one looked at them unless there was suspicion so it wasn't even a time sink for administrators.

No fancy kernel level anti-cheats. Just ensure matches were played on legitimate servers and demos were recorded.

Also, back then live streaming while playing was usually too much of a computational and network burden (56k modems), but casting was just coming around as being a thing and certain Quake 3 mods had spectator modes that let someone streaming spectate you from the first person live which also helped deter cheating. There was even split screen spectating modes so you can follow the action (useful for 4v4 games, etc.).

Carmack and team really made something special back then. The ideas they had and what they did with their tech on relatively low end hardware was remarkable.

7373737373 16 hours ago|||
Supreme Commander 2 savefiles appear to be a list of timestamped user interface inputs and unit commands
slightlygrilled 3 hours ago||
Yeah, they used determinism; a bit can be read about it here:

https://gafferongames.com/post/floating_point_determinism/

https://www.forrestthewoods.com/blog/synchronous_rts_engines...

silisili 17 hours ago|||
Quake 1 was my first love. From the old DOS version to GLQuake to grappling hooks on Trinicom. I was amazed not only by said demo system but by QuakeC and how heavily it was used, especially in college communities. I remember MIT and UWisc both being unreasonably productive modders in that language.

As a kid, I couldn't wait to see what came next. Sadly, Q1 was rather one of a kind, and it was many years until anything else like it showed up.

joelanman 12 hours ago|||
You could play back multiplayer Halo 3 matches in 3D, with a free camera. It was really interesting to see how matches played out, how you got killed and so on, and great for taking cool screenshots.
jmorenoamor 14 hours ago|||
Factorio follows that method too; given its complexity, it's quite an achievement.
ErneX 16 hours ago|||
Rocket League is a relatively recent game that allows match recording. It’s nice.
zimpenfish 15 hours ago|||
It'd be nice if they fixed whatever bug they have on the Switch 2 that makes every replay I previously downloaded[0] unplayable, and that makes the game crash every time I now try to download a replay.

But then it'd also be nice if they fixed the "game crashes randomly when joining games" bug too.

(To give them credit, it doesn't now take 5 minutes after waking the Switch 2 before Rocket League reconnects me to the Epic servers like it did a couple of months ago...)

[0] Also the stupidly low limit on how many you can download - it's my storage cost, not yours, wtf.

avereveard 16 hours ago||||
Also Trackmania, and it's a common way to catch cheaters, as PBs on the leaderboard have inputs attached.
zimpenfish 15 hours ago||
This[0] is a good intro to the topic (see also [1] for the written report)

[0] https://www.youtube.com/watch?v=yDUdGvgmKIw

[1] https://donadigo.com/tmx1

larrry 16 hours ago|||
Replays are very common in fighting games as well, rollback netcode gets you most of the way to a replay system already (replaying game state from inputs is a core requirement for online play)
abejfehr 14 hours ago|||
You might find this talk interesting: https://youtu.be/W20t1zCZv8M

It’s one of my favourites

diath 15 hours ago|||
If memory serves, that worked by replaying network packets, which is what some other games do as well. The problem with that approach is that for live-service games, unlike old games that were often "set in stone", the protocol always changes, so it's a huge maintenance burden. You either add conversion tools, keep maintaining backwards compatibility with older protocol versions, or accept that replays quickly become outdated.
krautsauer 15 hours ago|||
Or you bundle a copy of the engine and game content with every recording…
TacticalCoder 12 hours ago|||
> ... or you accept that replays quickly become outdated

That's how Warcraft 3's fully deterministic save files would work. Old replay files would only work tied to one specific patch of the game.

But here's the thing: it's still a godsend while in development and it was still a godsend to players too. "Battlenet user spiritwolf beat me even though I had the upper hand, how did he do it? Let's check the replay immediately".

Also if you really think about it: if you plan for it from day one, there's not much preventing your game engine from having a pluggable system where you could have the various different patches of the game engine ship with every subsequent release of the game.

So when 1.03c is out but you want to play a replay meant for version 1.02b, the game automatically just uses that version of the game engine.

The only case where this basically doesn't work is a patch for a security exploit: that would probably need to stay patched for good.

But for all other cases, backward compatibility for replay files / deterministic game engines is totally doable. It may not be how things are done, but it's totally doable.

Jyaif 8 hours ago||
> not much preventing your game engine from having a pluggable system where you could have the various different patches of the game engine ship with every subsequent release of the game

Starcraft 2 does that. It's still quite an achievement.

maccard 16 hours ago|||
I worked on this for a pretty big game. We recorded the network traffic, played it back, and simulated the game, so: same problem with patches. It also had the awkward side effect of exposing a metric crap ton of "join in progress"-style bugs, because our game didn't support JiP.
stephbook 15 hours ago|||
The best replay feature was in "Heroes of Newerth." (DotA 1.5 in 2009)

Warcraft 3 replays couldn't jump in time, only fast-forward. HoN could. It was amazing.

For a few months they even made ALL replays searchable on a website. Every game of HoN played globally.

Lerc 17 hours ago|||
I had a puzzle game where all of the solutions it would show were playbacks of my keypresses as I solved it myself. As the puzzles got more difficult, it got harder and harder to record a solution without pausing to think about what to do next.
retsibsi 13 hours ago||
I don't mean this in an "I know better" way, just genuine curiosity: why couldn't you record a solution with pauses and then strip them from the replay file?
Lerc 3 hours ago||
I tried but the change in behaviour immediately before and after the pause could be seen in the playback.

It's the time it takes to go "uhh, I'm stuck, I'd better pause" and then the bit before your brain kicks in following a pause.

nagaiaida 17 hours ago|||
saving rocket league replays to watch yourself play from your opponent's perspective was super helpful in 1v1
pull_my_finger 8 hours ago|||
I always wondered how NES games, which were notoriously low on memory, could have gameplay demos on the start screens. Think Super Mario Bros, but there are many others: if no input is received at the start menu, the game starts playing a demo run. You always see videos and posts about how developers were dissecting sprites and swapping color palettes to work around the small memory, so how in the heck did they manage the gameplay demos?
amiga386 6 hours ago||
The demo playback on 8-bit games was rarely more than a few seconds long, and it's just a capture of input data.

Here's Super Mario Bros's demo replay data: https://gist.github.com/1wErt3r/4048722#file-smbdis-asm-L108...

21 bytes of joypad input and 21 bytes of input timings
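That run-length layout (one byte of joypad state, one byte of how long to hold it) is cheap to decode back into per-frame inputs. A sketch with made-up values, not SMB's actual data:

```python
RIGHT, A = 0x01, 0x80  # hypothetical button bitmasks

# (buttons, frames_held) pairs: a few bytes drive seconds of gameplay.
demo = [(RIGHT, 120), (RIGHT | A, 20), (RIGHT, 60)]

def expand(demo):
    """Decode run-length demo data into one input byte per frame; the
    normal game loop consumes these as if a player pressed them."""
    for buttons, frames in demo:
        for _ in range(frames):
            yield buttons

frames = list(expand(demo))
assert len(frames) == 200  # ~3.3 seconds at 60 fps from 6 values
```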

dSebastien 17 hours ago|||
I wish I kept my demo files!
TacticalCoder 12 hours ago|||
> The system was deterministic enough ...

I wrote about it here many times over the years, but in 1991 I wrote a little DOS game (I had a publisher and a deal but it never came out, and yet it's how my career started, but that's another story) and at some point I had an "impossible to find" bug because it was so hard to reproduce.

So I modified my game engine to be entirely deterministic: I'd record "random seed + player input + frame at which user(s) [two players but non-networked] input was happening". With that I could make tiny save files and replay (and I did find my "impossible to find" bug thanks to that).

The first time I remember someone talking about it was a Gamasutra article by an Age of Empires dev (an article another poster already mentioned in this thread): they had a 100% deterministic engine. FWIW I wrote an email to the author of that article back then and we discussed deterministic game engines.

Warcraft 3 definitely had a deterministic game engine: save files, even for 8 players (networked) games were tiny. But then you had another issue: when units, over different patches, would be "nerfed" to balance the game (or any other engine change really), your replay files wouldn't play correctly anymore. The game wouldn't bother shipping with older engines: no backward compatibility for replay files.

I had a fully deterministic game engine in 1991 and, funnily enough, a few days ago, with the help of Claude Code CLI / Sonnet 4.6, I compiled that old game of mine again (I may put it on a public repo one day): I still had the source files and assets after all those years, but not the tooling anymore (no more MASM / no more linker), so I had to "fight" a bit (for example, I had not one but two macros that now clashed with macros/functions used by the assembler: "incbin" and another one I forgot) to be able to compile it again (now using UASM, to compile for DOS but from Linux).

Another fun sidenote... A very good friend of mine wrote "World Rally Fever" (published by Team 17) and I was a beta tester of the game. Endless discussions with my friend, because I was pissed off that his engine was so "non-deterministic" that hitting the Turbo button on my 486 (I think it was a 486) while I was playing the game would change the behavior of the (computer) opponents.

https://youtu.be/NhRQWNqbvTk

To me a deterministic game engine, unless you're a massively networked multi-player game, just makes sense.

Blizzard could do it for Warcraft 3 in 2002 for up to 8 players and hundreds of units. Several games had it already in the nineties.

It simplifies everything, and I'd guesstimate that something like 99% of all the games out there that don't do it could actually do it.

But it touches on something much more profound: state, and how programmers think about state and reproducibility. Hint: most don't think about that at all.

Some do though: I was watching a Clojure conf video the other day and they kept hammering that "view is a function of state". And it is. That's how things are. It was true in 1991 when I wrote my DOS game, it was true for Age of Empires, Warcraft 3 and many other games. And it is still true today.

But we're in 2026 and there are still many devs insisting that "functional programming sucks" and that we should bow to the mutability gods for that is the only way and they'll fight you to death if you dare to say that "view <- fn(state)".

This explains that.

henriksoerensen 10 hours ago||
> I had a fully deterministic game engine in 1991 and, funnily enough, a few days ago with the help of Claude Code CLI / Sonnet 4.6 I compiled that old game of mine again (I may put it on a public repo one day): I still had the source files and assets after all those years, but not the tooling anymore (no more MASM / no more linker) so I had to "fight" a bit (for example I had not one but two macros that now clashed with macros/functions used by the assembler: "incbin" and another one I forgot) to be able to compile it again (now using UASM, to compile for DOS but from Linux).

Fun thing: I'm working on modernizing a legacy Fortran / Win32 application into something a bit more modern, and ran into similar issues with the toolchain not being available anymore; further, some libraries need source to compile, but we only have binaries of those libraries.

Claude Code was amazing at creating stubs by looking at the function calls used and how, then getting just enough in place to call the existing binaries correctly, and further updating the code to align with Fortran specs that existing compilers accept. But it was a "fight".

saagarjha 15 hours ago|||
Super Smash Bros Brawl does this too for replays. I remember being a child and just learning about how computers worked and being very confused at how such a long video (which I knew to be "big") could possibly fit in such a small number of "blocks" on the Wii while screenshots were larger. I think the newer games do this too but they have issues because the game can be updated and then the replays no longer work.
glenneroo 4 hours ago||
Too bad they didn't ask any VR developers. It's truly another beast, especially if you're developing with Unity for the Quest platform, since setting TimeScale to 0 in Unity effectively disables the physics engine, which means things like hands/controllers no longer work, which in turn breaks one of the requirements to even be able to release in the Meta store (handle the pause state).

The workaround used by Half-Life: Alyx (as told to me by the Hurricane VR developer when I asked years ago how to deal with pausing) is to clone your hands and disable/delete all physics-related components (e.g. Rigidbodies) on the new "paused" hands. If you are using laser pointers, you'll have to switch those out as well. If you have any active effects, particles or objects that obstruct the player's vision and/or visibility of the pause/resume UI, you'll want to either disable them or at least dim them substantially so the player can interact with the resume button, e.g. with a laser pointer. You might also want to adjust the lighting to indicate that the game is paused.

Outside of VR, Unity offers a nice AudioListener.pause to pause all audio, but then any sound effects in your pause menu, like when a user changes a setting, no longer work, requiring more hacky fun (or you just set it to true and ignore user complaints about no audio in menus when paused).

On top of that, you have things like Animators for characters, which also have to be paused manually by setting their speed to 0 (TimeScale = 0 doesn't seem to do it). Some particle systems also don't seem to be affected by TimeScale. If you have a networked game then you also (obviously) have to send the pause command to clients. If you have things in motion, pausing/restarting the physics engine might cause weird issues with (at least angular) velocity, so you might have to save/restore those on any moving objects.

ninkendo 12 hours ago||
I’d love to hear about the 2020 release of Microsoft Flight Simulator, which had an “active pause” feature that they hyped as a big innovation for that release. You could pause and switch camera angles and see what was going on, then quickly resume. Pretty much the whole game was still interact-able, but with your plane’s position paused. It was supposed to be a nice user-friendly way to pause while you checked gauges or fiddled with cockpit settings or whatever.

It never worked. You’d pause, and the plane was frozen in place yes, but the instrument cluster would still animate and show your altitude/speed changing as if you never paused. But you couldn’t control anything until unpaused. So you’d resume, and your momentum would suddenly leap to where the accumulated deltas ended up. So if you active-paused at full throttle, you’d unpause and start going way too fast… if you active paused while stalling, you’d unpause and your speed would be near zero… you’d even consume fuel while paused.

It’s like they literally just froze the plane’s position, left every other aspect of the physics engine untouched, never tested it, shipped it, and even did a bunch of marketing about how great the feature was, when it was so obviously broken.

I came back to the game after a year or so of updates, and not a thing had improved; it was every bit as broken as when they shipped it.

The 2024 release seems to have largely fixed it though from what I can see. It’s just nuts they had such a clearly broken feature for that long.

shmeeed 10 hours ago||
I was going to comment on MSFS2020's Active Pause as well. It's a curious implementation, but I always assumed the behaviour was intentional. After all, there's a "regular" pause mode as well.

You gotta learn and understand its quirks, though. As long as your flight state is rock stable (e.g. on Auto Pilot) and/or you're not fiddling with the controls while paused, it's pretty much always worked fine for me.

I've also used its interactivity to my advantage and saved the plane from an otherwise unsaveable flight state, e.g. by gaining airspeed while paused.

shakow 2 hours ago|||
> which had an “active pause” feature that they hyped as a big innovation for that release

So my memory might be playing tricks on me, but wasn't that already the case in FS2002 and 2004? I seem to remember using the pause as a kid to look at my plane from every angle.

recursivecaveat 17 hours ago||
The strangest pause bug I know is in Mario Sunshine: pausing will misalign the collision logic (which runs 4 times per frame) and the main game loop. So certain specific physics interactions will behave differently depending on how many times the game has been paused modulo 4.
positive-spite 11 hours ago||
There is another great one in one of the Legend of Zelda games.

The game world is paused whenever Link pulls an item from a chest, but because his animation does not loop perfectly (it's missing a frame), he slowly slides across the ground and even through walls.

One of the minimum-% speedruns abuses this by looping the animation for many hours in order to glitch through a wall without collecting a progression item, which would count towards the collection percentage.

jwitthuhn 5 hours ago||
My favorite as a kid was also in a Zelda game.

In the original (and maybe also DX) release of Link's Awakening, the game uses a top-down view with the world split up into tiles. Walking off the left side of a screen makes you end up on the right side of the next screen over.

What you could do is pause at the right frame during the screen transition, and you would end up on the new screen but Link's position would not change. So you walk off the left side of a screen and end up on the left side of the new screen. Lots of fun to be had skipping important stuff with that.

butvacuum 16 hours ago||
Really? Is one state the one where you fall through bridges? I can't play Sunshine because of that.
zamadatix 14 hours ago||
If it's the one in Pianta Village there is a well known glitch on that one to do with watersliding over it. I haven't heard of a general bridge glitch though.
varunramesh 5 hours ago||
Pausing is unintuitive in Unity because you don't control the main loop: all active objects get updated every frame. The recommended way to pause is to set the "time scale" to zero and have menu animations use special timers that ignore time scale. If you control the game loop, you can usually just get away with an "if (paused)" [0].

[0] https://github.com/rameshvarun/marble-mouse/blob/8b25684a815...
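When you do own the loop, the "if (paused)" approach amounts to skipping gameplay updates while keeping input and UI alive. A minimal sketch (illustrative, not taken from the linked project):

```python
class Game:
    """Gameplay time freezes while paused; UI time always advances so
    menus can still animate and react to input."""

    def __init__(self):
        self.paused = False
        self.game_time = 0.0
        self.ui_time = 0.0

    def frame(self, dt, toggle_pause=False):
        if toggle_pause:          # e.g. the player pressed Escape
            self.paused = not self.paused
        self.ui_time += dt        # menus/UI tick regardless
        if not self.paused:
            self.game_time += dt  # physics, AI, animation go here

g = Game()
g.frame(0.016)                     # normal frame
g.frame(0.016, toggle_pause=True)  # open pause menu
g.frame(0.016)                     # world frozen, UI still ticking
g.frame(0.016, toggle_pause=True)  # resume
```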

qwery 12 hours ago||
Like a lot of issues in gamedev, pausing the game is a surprisingly difficult problem to solve. It's especially difficult to provide a one size fits all solution which would obviously be desirable for the popular modern engines that try to be a general solution.

I see a lot of comments here saying something along the lines of "isn't it just a state in the state machine?" which isn't wrong, but is an extremely simplistic way of thinking about it. In, say, 1983, you could get away with something like this:

- pause the game: write "PAUSED" to the tilemap

- paused main loop: check input to unpause

- unpause the game: erase the "PAUSED" / restore the previous tiles

But at that time you could already see the same sort of issues as today. Something somewhat common in Famicom/NES games is the sprites disappearing when the game is paused. Perhaps deliberate/desirable in some cases (e.g. Tetris) but a lot of the time, probably just a result of the 'is paused' conditional branch in the main loop skipping the sprite building code[0].

There's an extremely large problem space and ultimately, each game has its own way to define what "paused" actually means.

You might be interested in the features Godot provides[1] for this. Particularly, the thing that makes it interesting is the 'process mode' that each node in the scene tree has. This gives the developer quite a lot of control over what pausing actually means for a given game. It's not a complete solution, but a useful tool to help solve the various problems.

[0] Simplified description of course. Also, the sprite building code often ended up distributed throughout the various gameplay logic routines, which you don't want to run in the paused state.

[1] https://docs.godotengine.org/en/stable/tutorials/scripting/p...

[ed] Just adding that Tetris is only an example of a game where you might want that behaviour, not a comment about how any of the Tetris games were actually made.

an0malous 9 hours ago|
I read your comment and the article and I’m still not really clear on why this isn’t as simple as saving the current state or pausing the render or time loop.
jayd16 6 hours ago|||
Saving the current state? What does that even mean? You don't normally save transient things like visual particle positions, yet you expect them to persist across a pause.

Pausing the render? But not the physics, so you keep falling? You need to pause many systems. At the very least you'd want to pause gameplay, sound, and physics. You'd want to keep input and some of the UI unpaused. If you have a fancy UI with VFX, you need to make sure those aren't using the paused game time. Etc., etc.

glenneroo 3 hours ago||
And pausing sound in Unity via AudioListener.pause = true is supposed to make life easier, but it ends up being useless if changing settings or clicking buttons has audio feedback, or if changing the volume plays audible feedback to tell you how loud it is, or if you allow changing the voice style, and on and on.

Repeat that for every system - all those edge cases for each system can waste a lot of time and energy.

dwaltrip 6 hours ago|||
It’s only simple if your state machine has zero implicit state and all transitions are perfectly and precisely articulated. Good luck with that!

P.s. And once you are done achieving the above, you can then make sure you haven’t caused performance issues :)

But yes, conceptually, it’s a relatively simple idea. The devil is always in the details.

bel8 17 hours ago||
I quite like when games keep playing some visual-only animations when paused.

Like torch flames and trees swaying in the wind.

entuno 16 hours ago||
Against the Storm (an excellent roguelite city-builder) does this in a really cool way. Pausing is a core mechanic of the game, and you frequently pause while you place buildings and the like, and all the visual animations stop (fire, rain, trees swaying, etc).

But when you find a broken ancient seal in the forest, the giant creepy eyeball moving around in it keeps moving even when you pause the game, which helps emphasise how other-worldly it is.

rahkiin 16 hours ago|||
I find it confusing: for me, a clear indicator that the game is paused is all animations also pausing. Some games do not pause in menus, for example. And some do, but not when in a multiplayer session.
nkrisc 13 hours ago||
I think it really just depends on the game and what purposes “pausing” serves in that game. Take a game like solitaire, for example: there is no meaningful “pause” feature you could add, since the game state only advances in response to a user action.

Others pause some underlying simulation while still letting you modify the game state, as an expected part of gameplay, like a city builder. As the user might spend a significant amount of time in a paused state building things, it would be pretty visually unappealing to have the entire world completely frozen the whole time.

Others might pause all gameplay entirely, such as for displaying a menu, in which case pausing even environmental animations might make more sense since the user isn’t actively playing.

For the second type, I would much prefer some GUI element to indicate the simulation is paused rather than freezing the whole game world, such as a border around the screen or maybe a change of color theme of the GUI or similar.

Rendello 7 hours ago|||
I'm the opposite, it drives me crazy! Along with sound effects / music playing.
adrianton3 16 hours ago|||
Torch flames and trees swaying in the wind do not affect gameplay at all - they're most likely done in a shader and I think it's easier to keep updating a time uniform than to add extra conditions everywhere :D
mjfisher 16 hours ago|||
That's usually because the system that runs those things is independent of the timing of the main game loop. And then when someone finally gets around to implementing the pause screen, they still run even with the main game time stopped. And you look at it and think "eh, you know what - looks cool - we'll leave it".
bob1029 10 hours ago||
It's nice when a bug manifests as a feature.
pcblues 13 hours ago||
This is silly reporting with a couple of interesting stories. Forget about the technical ways of doing it. Doing it at all changes the game experience.

Pausing a game has a massive impact on the game experience. It lets you break the fourth wall experientially. Not wrong, but it changes the dynamic of the game.

Same as saving at any time does. As losing your loot or your life permanently does. Not wrong, but a hard choice that appeals to some players and not to others.

I used to pause pacman on my Atari 800 so I could run to church and sing in the choir or be an altar boy. Then I ran home and unpaused to continue. Sometimes in summer the computer over-heated and I lost everything while I was at church.

Lessons learnt? None, I think :)

mylasttour 15 hours ago|
There used to be a funny bug in Dota 2:

While the game is paused, if a player were to click on the "level up" buttons for their skills, each click actually advanced the game by 1 frame - so it was possible for people to die etc. during a pause screen.
