
Posted by azhenley 10/31/2025

John Carmack on mutable variables (twitter.com)
515 points | 627 comments
EastLondonCoder 10/31/2025|
After a 2 year Clojure stint I find it very hard to explain the clarity that comes with immutability to programmers used to triggering effects with a mutation.

I think it may be one of those things you have to see in order to understand.

rendaw 10/31/2025||
I think the explanation is: when you mutate a variable you implicitly create an ordering dependency - later uses of the variable rely on previous mutations. However, this dependency isn't modeled by the language, so reordering won't cause any errors.

With a very basic concrete example:

  x = 7
  x = x + 3
  x = x / 2

vs.

  x = 7
  x1 = x + 3
  x2 = x1 / 2

Reordering the first will have no error, but you'll get the wrong result. The second will produce an error if you try to reorder the statements.

Another way to look at it is that in the first example, the 3rd calculation doesn't have "x" as a dependency but rather "x in the state where addition has already been completed" (i.e. it's 3 different x's that all share the same name). Doing single assignment is just making this explicit.
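A runnable sketch of that hazard (Python for concreteness, hypothetical function names): swapping two lines in the mutating version silently changes the answer, while the same swap in the single-assignment version fails fast.

```python
# Hypothetical sketch: the same reordering mistake in both styles.

def mutating_reordered():
    x = 7
    x = x / 2   # accidentally swapped with the line below
    x = x + 3
    return x    # silently wrong: 6.5 instead of the intended 5.0

def single_assignment_reordered():
    x = 7
    x2 = x1 / 2  # swapped: x1 isn't bound yet, so this raises
    x1 = x + 3
    return x2
```

Calling `single_assignment_reordered()` raises `UnboundLocalError`, so the mistake is caught immediately instead of producing a wrong result.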

jstimpfle 10/31/2025|||
The immutable approach doesn't conflate the concepts of place, time, and abstract identity, like in-place mutation does.

In mutating models, typically abstract (mathematical / conceptual) objects are modeled as memory locations. Which means that object identity implies pointer identity. But that's a problem when different versions of the same object need to be maintained.

It's much easier when we represent object identity by something other than pointer identity, such as (string) names or 32-bit integer keys. Such a representation allows us to materialize different versions (or even the same version) of an object in multiple places at the same time. This allows us to concurrently read or write different versions of the same abstract object. It's also an enabler for serialization/deserialization: not requiring an object to be materialized in one particular place allows saving objects to disk or sending them around.

SpaceNoodled 10/31/2025||
The hardware that these programs run on stores objects in linear memory, so it makes sense to treat it as such.
wtallis 10/31/2025|||
DRAM is linear memory. Caches, less so. Register files really aren't. CPUs spend rather a lot of transistors and power to reconcile the reality of how they manipulate data within the core against the external model of RAM in a flat linear address space.
jstimpfle 11/1/2025|||
Can you clarify?
repstosb 11/1/2025||
Modern CPUs do out-of-order execution, which means they need to identify and resolve register sharing dependencies between instructions. This turns the notional linear model of random-access registers into a DAG in practice, where different instructions that might be in flight at once actually read from or write to different "versions" of a named register. Additionally, pretty much every modern CPU uses a register renaming scheme, where the register file at the microarchitecture level is larger than that described in the software-level architecture reference, i.e. one instruction's "r7" has no relationship at all to another's "r7".

Caches aren't quite as mix-and-match, but they can still internally manage different temporal versions of a cache line, as well as (hopefully) mask the fact that a write to DRAM from one core isn't an atomic operation instantly visible to all other cores.

Practice is always more complicated than theory.

FooBarBizBazz 11/1/2025|||
Realistically, the compiler builds a DAG (SSA form), and then the CPU builds a DAG to do out-of-order execution, so at a fine grain -- the basic block -- it seems to me that the immutable way of thinking about things is actually closer to the hardware.
jstimpfle 11/2/2025|||
That doesn't affect what I said though. Register renaming and pipelining does not make mutation go away and doesn't allow you to work on multiple things "at once" through the same pointer.

It's still logically the same thing with these optimizations, obviously -- since they aren't supposed to change the logic.

EastLondonCoder 10/31/2025||||
I agree that the explicit timeline you get with immutability is certainly helpful, but I also think it's much easier to understand the total state of a program. When an imperative program runs you almost always have to reproduce a bug in order to understand the state that caused it; fairly often in Clojure you can actually deduce what's happening.
ryandv 10/31/2025|||
That's right - immutability enables equational reasoning, where it becomes possible to actually reason through a program just by inspection and evaluation in one's head, since the only context one needs to load is contained within the function itself - not the entire trace, where anything along the thread of execution could factor into your function's output, since anybody can just mutate anybody else's memory willy-nilly.

People jump ahead using AI to improve their reading comprehension of source code, when there are still basic practices of style, writing, & composition that for some reason are yet to be widespread throughout the industry despite already having a long standing tradition in practice, alongside pretty firm grounding in academics.

adrianN 10/31/2025||
In theory it’s certainly right that imperative programs are harder to reason about. In practice programmers tend to avoid writing the kind of program where anything can happen.
ryandv 10/31/2025||
> In practice programmers tend to avoid writing the kind of program where anything can happen.

My faith in this presumption dwindles every year. I expect AI to only exacerbate the problem.

Since we are on the topic of Carmack, "everything that is syntactically legal that the compiler will accept will eventually wind up in your codebase." [0]

[0] https://www.youtube.com/watch?v=1PhArSujR_A&t=15m54s

alain94040 10/31/2025||||
That example is too simple for me to grasp the benefit. How would you code a function that iterates over an array to compute its sum? No cheating with a built-in sum function. If you had to code each addition, how would that work? Curious to learn (I could probably google this or ask Claude to write me the code).
supergarfield 10/31/2025|||
Carmack gives updating in a loop as the one exception:

> You should strive to never reassign or update a variable outside of true iterative calculations in loops.

If you want a completely immutable setup for this, you'd likely have to use a recursive function. This pattern is well supported and optimized in immutable languages like the ML family, but is not super practical in a standard imperative language. Something like

  def sum(l):
    if not l: return 0
    return l[0] + sum(l[1:])
Of course this is also mostly insensitive to ordering guarantees (the compiler would be fine with the last line being `return l[-1] + sum(l[:-1])`), but immutability can remain useful in cases like this to ensure no concurrent mutation of a given object, for instance.
bmacho 10/31/2025|||
You don't have to use recursion, that is, you don't need language support for it. Having first class (named) functions is enough.

For example you can modify sum such that it doesn't depend on itself, but on a function which it receives as an argument (and which will be itself).

Something like:

  def sum_(f, l):
    if not l: return 0
    return l[0] + f(f, l[1:])

  def runreq(f, *args):
    return f(f, *args)

  print(runreq(sum_, [1,2,3]))
DemocracyFTW2 11/1/2025||
> You don't have to use recursion

You're using recursion. `runreq()` calls `sum_()` which calls `sum()` in `return l[0] + f(f, l[1:])`, where `f` is `sum()`

bmacho 11/1/2025||
> You're using recursion.

No, see GP.

> `runreq()` calls `sum_()` which calls `sum()` in `return l[0] + f(f, l[1:])`, where `f` is `sum()`

Also no, see GP.

DemocracyFTW2 11/1/2025||
I am too stupid to understand this. This:

    def sum_(f, l):
      if not l: return 0
      return l[0] + f(f, l[1:])

    def runreq(f, *args):
      return f(f, *args)

    print(995,runreq(sum_, range(1,995)))
    print(1000,runreq(sum_, range(1,1000)))
when run with python3.11 gives me this output:

    995 494515
    Traceback (most recent call last):
      File "/tmp/sum.py", line 9, in <module>
        print(1000,runreq(sum_, range(1,1000)))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/tmp/sum.py", line 6, in runreq
        return f(f, *args)
               ^^^^^^^^^^^
      File "/tmp/sum.py", line 3, in sum_
        return l[0] + f(f, l[1:])
                      ^^^^^^^^^^^
      File "/tmp/sum.py", line 3, in sum_
        return l[0] + f(f, l[1:])
                      ^^^^^^^^^^^
      File "/tmp/sum.py", line 3, in sum_
        return l[0] + f(f, l[1:])
                      ^^^^^^^^^^^
      [Previous line repeated 995 more times]
    RecursionError: maximum recursion depth exceeded in comparison
A RecursionError seems to indicate there must have been recursion, no?
hermitdev 10/31/2025|||
While your example of `sum` is a nice, pure function, it'll unfortunately blow up in Python on even moderately sized inputs (we're talking thousands of elements, not millions) due to the lack of tail calls in Python (currently) and the restrictions on recursion depth. The CPython interpreter as of 3.14 [0] is now capable of using tail calls in the interpreter itself, but it's not yet in Python, proper.

[0]: https://docs.python.org/3/whatsnew/3.14.html#a-new-type-of-i...

dragonwriter 10/31/2025||
Yeah, to actually use tail-recursive patterns (except for known-to-be-sharply-constrained problems) in Python (or, at least, CPython), you need to use a library like `tco`, because of the implementation limits. Of course many common recursive patterns can be cast as map, filter, or reduce operations, and all three of those are available as functions in Python's core (the first two) or stdlib (reduce).

Updating one or more variables in a loop naturally maps to reduce, with the updated variable(s) being the accumulator object (or, when there is more than one, fields of it).
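A small illustration of that mapping (hypothetical names): a loop that updates two variables becomes a `reduce` whose accumulator is a tuple holding both.

```python
from functools import reduce

def sum_and_count(xs):
    # The loop "total += x; count += 1" becomes a reduce whose
    # accumulator tuple packs both updated variables.
    return reduce(lambda acc, x: (acc[0] + x, acc[1] + 1), xs, (0, 0))
```

For example, `sum_and_count([1, 2, 3])` steps through (0, 0) → (1, 1) → (3, 2) → (6, 3), with no variable ever reassigned in user code.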

raincole 10/31/2025||||
Yet even Rust allows you to shadow variables with another one with the same name. Yes, they are two different variables, but for a human reader they have the same name.

I think that Rust made this decision because the x1, x2, x3 style of code is really a pain in the ass to write.

wongarsu 10/31/2025|||
In idiomatic Rust you usually shadow variables with another one of the same name when the type is the only thing meaningfully changing. For example

   let x = "29";
   let x = x.parse::<i32>();
   let x = x.unwrap();
These all use the same name, but you still have the same explicit ordering dependency because they are typed differently. The first is a &str, the second a Result<i32, ParseIntError>, the third an i32, and any reordering of the lines would produce a compiler error. And if you add another line `let y = process(x)` you would expect it to do something similar no matter where you introduce it in these statements, provided it accepts the current type of x, because the values represent the "same" data.

Once you actually "change" the value, for example by dividing by 3, I would consider it unidiomatic to shadow under the same name. Either mark it as mutable or, preferably, make a new variable with a name that represents what the new value expresses.

waffletower 10/31/2025|||
In a Clojure binding this is perfectly idiomatic, but symbolically shared bindings are not shadowed, they are immutably replaced. Mutability is certainly available, but it is explicit. And the type dynamism of Clojure is a breath of fresh air for many applications, despite the evangelism of junior developers steeped in laboratory Haskell projects at university. That being said, I have a Clojure project where dynamic typing is thoroughly exploited at a high level; it allows for flexible use of Clojure's rational math mixed with floating point (or one or the other entirely), while for optimization deeper within the architecture a Rust implementation via JVM JNI is utilized for native performance, ensuring homogeneous unboxed types are computed to make the overall computation tractable. Have your cake and eat it too. Types have their virtues, but not without their excesses.
nayuki 11/1/2025||||
In Rust, one way I use shadowing is to gather a bunch of examples into one function, but you can copy and paste any single example and it would work.

    fn do_demo() {
        let qr = QrCode::encode_text("foobar", Ecc::MEDIUM);
        print_qr(qr);

        let qr = QrCode::encode_text("1234", Ecc::LOW);
        print_qr(qr);

        let qr = QrCode::encode_text("the quick brown fox", Ecc::HIGH);
        print_qr(qr);
    }
In other languages that don't allow shadowing (e.g. C, Java), the first example would declare the variable and be syntactically correct to copy out, but the subsequent examples would cause a compile error (duplicate declaration) when copied out.
ymyms 10/31/2025||||
Another idiomatic pattern is using shadowing to transform something using itself as input:

    let x = Foo::new().stuff()?;
    let x = Bar::new(x).other_stuff()?;

So with the math example and what the poster above said about type changing, most Rust code I write is something like:

    let x: plain_int = 7;
    let x: added_int = add(x, 3);
    let x: divided_int = divide(x, 2);

where the function signatures would be `fn add(foo: plain_int, n: int) -> added_int` and `fn divide(bar: added_int, n: int) -> divided_int`,

and this can't be reordered without triggering a compiler error.

combyn8tor 11/1/2025|||
I did this accidentally the other day in Rust:

let x = some_function();

... A bunch of code

let x = some_function();

The values of x are the same. It was just an oversight on my part but wondered if I could set my linter to highlight multiple uses of the same variable name in the same function. Does anyone have any suggestions?

suspended_state 10/31/2025||||
Or they got inspired by how this is done in OCaml, which was the host language for the earliest versions of Rust. Actually, this is behaviour found in many FP languages. Regarding OCaml, there was even an experimental version of the REPL where one could access the different variables carrying the same name using an ad-hoc syntax.
airstrike 10/31/2025|||
I do find shadowing useful. If you're writing really long code blocks in which it becomes an issue, you are probably doing too much in one place.
Tarean 10/31/2025||||
Sometimes keeping a fixed shape for the variable context across the computation can make it easier to reason about invariants, though.

Like, if you have a constraint is_even(x) that's really easy to check in your head with some informal Floyd-Hoare logic.

And it scales to extracting code into helper functions and multiple variables. If you must track which set of variables form one context x1+y1, x2+y2, etc I find it much harder to check the invariants in my head.

These 'fixed state shape' situations are where I'd grab a state monad in Haskell and start thinking top-down in terms of actions+invariants.

amy_petrik 11/1/2025||||
Guys, guys, I don't think we're on the same page here.

The conversation I'm trying to have is "stop mutating all the dynamic self-modifying code, it's jamming things up". The concept of non-mutating code, only mutating variables, strikes me as extremely OCD and overly bureaucratic. Baby steps. Eventually I'll transition from my dynamic recompilation self-modifying code to just regular code with modifying variables. Only then can we talk about higher level transcendental OOP things such as singleton factory model-view-controller-singleton-const-factories and facade messenger const variable type design patterns. Surely those people are well reasoned and not fanatics like me

zerd 10/31/2025||||
It's funny that converting the first example to the second is a common thing a compiler does - static single-assignment form [0] - to make various optimizations easier to reason about.

[0] https://en.wikipedia.org/wiki/Static_single-assignment_form

skeezyjefferson 10/31/2025||||
What's the difference between immutable and constant, which has been in use far longer? And why are you calling it mutable?
munificent 10/31/2025|||
"Constant" is ambiguous. Depending on who you ask, it can mean either:

1. A property known at compile time.

2. A property that can't change after being initially computed.

Many of the benefits of immutability accrue to properties whose values are only known at runtime but which are still known not to change after that point.

throwaway2037 11/3/2025||
DotNet/C# makes this distinction in the form of (1) const and (2) readonly.
inanutshellus 10/31/2025||||
"Constant" implies a larger context.

As in - it's not very "constant" if you keep re-making it in your loop, right?

Whereas "immutable" throws away that extra context and means "whatever variable you have, for however long you have it, it's unchangeable."

skeezyjefferson 10/31/2025||
> As in - it's not very "constant" if you keep re-making it in your loop, right?

you cant change a constant though

veilrap 10/31/2025||
He's implying that the variable is being defined within the loop. So: constant, but repeatedly redefined.
ghurtado 10/31/2025||
That's the opposite of what any reasonable engineer means by "constant".
fnordsensei 10/31/2025|||
That’s the point, you’re just haggling about scopes now. All the way from being new per program invocation to new per loop.

Immutability doesn’t have this connotation.

davrosthedalek 10/31/2025||
How? I think the same argument applies: If it's changing from loop to loop, seems mutable to me.
fnordsensei 10/31/2025|||
I think you’re after something other than immutability then.

You’re allowed to rebind a var defined within a loop, it doesn’t mean that you can’t hang on to the old value if you need to.

With mutability, you actively can’t hang on to the old value, it’ll change under your feet.

Maybe it makes more sense if you think about it like tail recursion: you call a function and do some calculations, and then you call the same function again, but with new args.

This is allowed, and not the same as hammering a variable in place.
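A minimal sketch of that framing (Python, hypothetical names): each "iteration" is a fresh call with fresh bindings; nothing is overwritten in place.

```python
def count_up(i, total):
    # Instead of mutating i and total in a loop, each "iteration"
    # is a new call with new bindings; the old ones are untouched.
    if i > 5:
        return total
    return count_up(i + 1, total + i)
```

`count_up(0, 0)` sums 0 through 5 by rebinding `i` and `total` on each call rather than hammering one variable in place.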

Zambyte 10/31/2025|||
I can give a specific example.

    for (0..5) |i| {
        i = i + 1;
        std.debug.print("foo {}\n", .{i});
    }
In this loop in Zig, the reassignment to i fails, because i is a constant. However, i is a new constant bound to a different value each iteration.

To potentially make it clearer that this is not mutation of a constant between iterations, technically &i could change between iterations, and the program would still be correct. This is not true with a c-style for loop using explicit mutation.

skeezyjefferson 11/6/2025||
I argue in your example there are 6 constants, not 1 constant with 6 different values, though this could be semantics ie we could both be right in some way
Zambyte 11/10/2025||
Exactly. As said higher in this comment chain:

> So, constant, but repeatedly redefined.

skeezyjefferson 11/11/2025||
it wasn't constant if it could have been redefined, by the basic definition of what constant means. So no, not one constant, but constants
davrosthedalek 10/31/2025|||
No? It has a lifetime of one loop duration, and is constant during that duration. Seems perfectly fine to me.
Thorrez 10/31/2025||||
Immutable and constant are the same. rendaw didn't use the word mutable. One reason someone might use the word "mutable" is that it's a succinct way of expressing an idea. Alternative ways of expressing the same idea are longer words (changeable, non-constant).
a4isms 10/31/2025|||
In languages like JavaScript, immutable and constant may be theoretically the same thing, but in practice "const" means a variable cannot be reassigned, while "immutable" means a value cannot be mutated in place.

They are very, very different semantically, because const is always local. Declaring something const has no effect on what happens with the value bound to a const variable anywhere else in the program. Whereas, immutability is a global property: An immutable array, for example, can be passed around and it will always be immutable.

JS has always had 'freeze' as a kind of runtime immutability, and tooling like TS can provide readonly types that give immutability guarantees at compile time.

everforward 10/31/2025||
Arrays are a very notable example here. You can append to a const array in JS and TS, even in the same scope it was declared const.

That’s always felt very odd to me.

DemocracyFTW2 11/1/2025|||
I think JavaScript has a language / terminology problem here. It has to be explained constantly (see) to newcomers that `const a = []` does not imply you cannot say `a.push( x )` (mutation), it just keeps you from being able to say `a = x` further down (re-binding). Since in JavaScript objects always start life as mutable things, but primitives are inherently immutable, `const a = 4` does guarantee `a` will be `4` down the line, though. The same is true of `const a = Object.freeze( [] )` (`a` will always be the empty list), but, lo and behold, you can still add elements to `a` even after `const a = Object.freeze( new Set() )` which is, shall we say, unfortunate.

The vagaries don't end there. NodeJS' `assert` namespace has methods like `equal()`, `strictEqual()`, `deepEqual()`, `deepStrictEqual()`, and `partialDeepStrictEqual()`, which is both excessive and badly named (although there's good justification for what `partialDeepStrictEqual()` does); ideally, `equal()` should be both `strict` and `deep`. That this is also a terminology problem is borne out by explanations that oftentimes do not clearly differentiate between object value and object identity.

In a language with inherent immutability, object value and object identity may (conceptually at least) be conflated, like they are for JavaScript's primitive values. You can always assume that an `'abc'` over here has the same object identity (memory location) as that `'abc'` over there, because it couldn't possibly make a difference were it not the case. The same should be true of an immutable list: for all we know, and all we have to know, two immutable lists could be stored in the same memory when they share the same elements in the same order.

a4isms 11/1/2025||||
There is no exception for ANY data structure that includes references to other data structures or primitives. Not only can you add or remove elements from an array, you can change them in place.

A const variable that refers to an array is a const variable. The array is still mutable. That's not an exception, it's also how a plain-old JavaScript object works: you can add and remove properties at will. You can change its prototype to point to something else and completely change its inheritance chain. And it could be a const variable to an unfrozen POJO all along.

That is not an exception to how things work, it's how every reference works.

everforward 11/1/2025||
I know, and I do agree it's consistent, but then it doesn't make any sense to me as a keyword in a language where non-primitives are always by-reference.

You can't mutate the reference, but you _can_ copy the values from one array into the data under an immutable reference, so const doesn't prevent basically any of the things you'd want to prevent.

The distinction makes way more sense to me in languages that let you pass by value. Passing a const array says don't change the data, passing a const reference says change the data but keep the reference the same.

a4isms 11/2/2025||
The beauty of `const` in JS is that it's almost completely irrelevant. Not only does it have nothing to do with immutability, it's also local. Which means, if I were to write `let` instead of `const`, I could still see whether my code reassigned that variable at a glance. The keyword provides very little in the way of a guarantee I could not otherwise observe for myself.

Immutability is completely different. Determining whether a data structure is mutated without an actual immutable type to enforce is impractical, error-prone, and in any event impossible to prove for the general case.

raddan 10/31/2025|||
That's because in many languages there is a difference between a stored reference being immutable and the contents of the thing the reference points to being immutable.
skeezyjefferson 10/31/2025||||
but we already had the word variable for values that can change. on both counts it seems redundant
Thorrez 11/1/2025||
Oh, good point. I misunderstood your previous question.

Is there a name that refers to the broader group that includes both constants and variables? In practice, and in e.g. C++, "variable" is used to refer to both constants and actual variables, due to there not being a different common name that can be used to refer to both.

kgwxd 10/31/2025|||
They aren't the same for object references. The reference can't be changed, but the properties can.
Thorrez 11/1/2025||
Depends on the language. In C++

  const std::vector<int>& foo = bar.GetVector();
foo is a constant object reference that cannot have its properties changed (and also cannot be changed to refer to a new object).

  std::vector<int>& foo = bar.GetVector();
foo is an object reference that can have its properties changed (but cannot be changed to refer to a new object).
scott_w 10/31/2025|||
In plenty of languages, there's not really a difference. In Rust, there is a difference between a `let var_name = 10;` and `const var_name: u64 = 10;` in that the latter must have its value known at compile-time (it's a true constant).

> why are you calling it mutable?

Mostly just convention. Rust has immutable by default and you have to mark variables specifically with `mut` (so `let mut var_name = 10;`). Other languages distinguish between variables and values, so var and val, or something like that. Or they might do var and const (JS does this I think) to be more distinct.

ape4 10/31/2025|||
It would be nicer if you gave x1 and x2 meaningful names
catlifeonmars 10/31/2025||
What would those names be in this example?
ape4 10/31/2025||
In a real application meaningful names are nearly always possible, eg:

    const pi = 3.1415926
    const 2pi = 2 * pi
    const circumference = 2pi * radius
tmtvl 10/31/2025|||
Calling tau 2pi is the most cursed thing I've seen all day. Appropriate for Halloween.
smrq 10/31/2025||
If you call a variable tau in production code then you're being overly cute. I know what it means, because I watch math YouTube for fun, but $future_maintainer in all likelihood won't.
lock1 11/1/2025||
Where do you draw the line then? Stopping at `tau` just because `$future_maintainer` might get confused feels like an arbitrary limit to me.

What about something like `gamma`? Lorentz factor? Luminance multiplier? Factorial generalization?

Why not just use the full sentence rather than assign it to an arbitrary name/symbol `gamma` and leave it dependent on the context?

And it's not that hard to add an inline comment to dispel the confusion

  const tau = 2*pi; // Alternate name for 2pi is "tau"
catlifeonmars 10/31/2025|||
Agree in real life you can come up with meaningful names (and should when the names are used far away from the point of assignment), but it doesn’t make sense for GPs example, where the whole point was to talk about assignments in the abstract.
zelphirkalt 10/31/2025|||
I had a similar experience with Scheme. I could tell people whatever I wanted; they wouldn't really realize how much cleaner and easier to test things could be if we just used functions instead of mutating things around. And since I was the only one who had done projects in an FP language, and they had only used non-FP languages like Java, Python, JavaScript and TypeScript before, they would continue to write things based on needless mutation. The issue was also that in Python it can be hard to write functional-style code in a readable way; even JS seems to lend itself better to that. What's more, one will probably be hard pressed to find the functional data structures one might want, and one needs to work around recursion limits in those languages.

I think it's simply the difference between the curious mind, who explores stuff like Clojure off the job (or is very lucky to get a Clojure job) and the 9 to 5 worker, who doesn't know any better and has never experienced writing a FP codebase.

DemocracyFTW2 11/1/2025|||
I'm really afraid that the weak point of the argument is Scheme having a Lisp syntax. One might say syntax is the most superficial thing about a language, but as a matter of fact it's the mud pool in front of the property where everybody's wheels get stuck, and they feel their only option is to go into reverse and maybe try another day, or never. The same happens with APL: sure, it took a genius to invent it, and tic-tac-toe in a single short line of code is cool - but that doesn't mean many people get over the syntax.

FWIW I believe that JS for one would greatly benefit from much better support for immutable data, including time- and space-efficient ways to produce modified copies of structured data (you don't think twice when you do `string.replace(...)`, which does in fact produce a copy; `list.push(...)` could conceivably operate similarly).

zelphirkalt 11/1/2025||
They don't even have to be true copies. Structural sharing is a thing that enables many or most functional data structures and avoids excessive memory usage. I agree with your point, and it would put JS higher on my list of liked languages.
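A toy illustration of structural sharing (Python, hypothetical names): a persistent cons list where "adding" an element reuses the existing list instead of copying it.

```python
def cons(head, tail):
    # An immutable list cell; tuples can't be modified after creation.
    return (head, tail)

empty = None
a = cons(3, cons(2, cons(1, empty)))
b = cons(4, a)  # O(1): b shares all of a's cells instead of copying them
# a is unchanged and still equals (3, (2, (1, None)))
```

`b` is a "modified copy" of `a` built in constant time and space; real persistent vectors and maps generalize the same trick with trees.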
SoftTalker 10/31/2025||||
JS is much more of a functional language than it was given credit for a long time. It had first-class functions and closures from day one if I'm not mistaken.
inopinatus 10/31/2025||
I’m fond of saying “JS is a Lisp”. It’s not a hill I’d bother dying on, however.
waynesonfire 11/2/2025|||
Python is like a mutation wet-dream. The language is so broken in modern times.
emil0r 10/31/2025|||
The way I like to think about it is that with immutable data as the default and pure functions, you get to treat the pure functions as black boxes. You don't need to know what's going on inside, and the function doesn't need to know what's going on in the outside world. The data shape becomes the contract.

As such, localized context, everywhere, is perhaps the best way to explain it from the point of view of a mutable world. At no point do you ever need to know about the state of the entire program, you just need to know the data and the function. I don't need the entire program up and running in order to test or debug this function. I just need the data that was sent in, which CANNOT be changed by any other part of the program.
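A minimal sketch of that "black box" point (Python, hypothetical names): the function can be tested with nothing but input data, and the caller's data cannot be changed behind their back.

```python
def apply_discount(order, percent_off):
    # Pure: builds a new dict instead of mutating the argument.
    # Integer cents avoid floating-point surprises in the example.
    return {**order,
            "total_cents": order["total_cents"] * (100 - percent_off) // 100}

order = {"id": 1, "total_cents": 10000}
discounted = apply_discount(order, 20)
# order still has total_cents == 10000; discounted has 8000
```

Testing or debugging `apply_discount` needs no running application, just a dict literal - which is exactly the "localized context" being described.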

DrScientist 10/31/2025||
Sure modularity, encapsulation etc are great tools for making components understandable and maintainable.

However, don't you still need to understand the entire program, as ultimately that's what you are trying to build?

And if the state of the entire program doesn't change, then nothing has happened - i.e. there still has to be mutable state somewhere. So where has it moved to?

raddan 10/31/2025|||
In functional programs, you very explicitly _do not_ need to understand an entire program. You just need to know that a function does a thing. When you're implementing a function-- sure, you need to know what it does. But you're defining it in such a way that the user should not know _how_ it works, only _what_ it does. This is a major distinction between programs written with mutable state and those written without. The latter is _much_ easier to think about.

I often hear from programmers that "oh, functional programming must be hard." It's actually the opposite. Imperative programming is hard. I choose to be a functional programmer because I am dumb, and the language gives me superpowers.

DrScientist 10/31/2025||
I think you missed the point. I understand that if you're writing a simple function with an expected interface/behaviour then that's all you need to understand. Note this isn't something unique to a functional approach.

However, somebody needs to know how the entire program works - so my question was: where does that application state live in a purely functional world of immutables?

Does it disappear into the call stack?

MetaWhirledPeas 10/31/2025||
It didn't disappear; there's just less of it. Only the stateful things need to remain stateful. Everything else becomes single-use.

Declaring something as a constant gives you license to only need to understand it once. You don't have to trace through the rest of the code finding out new ways it was reassigned. This frees up your mind to move on to the next thing.

raddan 10/31/2025|||
> Only the stateful things need to remain stateful.

And I think it is worth noting that there is effectively no difference between “stateful” and “not stateful” in a purely functional programming environment. You are mostly talking about what a thing is and how you would like to transform it. Eg, this variable stores a set of A and I would like to compute a set of B and then C is their set difference. And so on.

Unless you have hybrid applications with mutable state (which is admittedly not uncommon, especially when using high performance libraries) you really don’t have to think about state, even at a global application level. A functional program is simply a sequence of transformations of data, often a recursive sequence of transformations. But even when working with mutable state, you can find ways to abstract away some of the mutable statefulness. Eg, a good, high performance dynamic programming solution or graph algorithm often needs to be stateful; but at some point you can “package it up” as a function and then the caller does not need to think about that part at all.

DrScientist 11/3/2025|||
And what about the state that needs to exist - like application state (for example, this text box has state in terms of keeping track of text entered, cursor position, etc.)?

Where does that go?

Are you creating a new immutable object at every keystroke that represents the addition of the latest event to the current state?

Even then you need to store a pointer to that current state somewhere right?

fwip 10/31/2025||||
It's moved toward the edges of your program. In a lot of functional languages, places that can perform these effects are marked explicitly.

For example, in Haskell, any function that can perform IO has "IO" in the return type, so the "printLine" equivalent is: "putStrLn :: String -> IO ()". (I'm simplifying a bit here). The result is that you know that a function like "getUserComments :: User -> [CommentId]" is only going to do what it says on the tin - it won't go fetch data from a database, print anything to a log, spawn new threads, etc.

It gives similar organizational/clarity benefits as something like "hexagonal architecture," or a capabilities system. By limiting the scope of what it's possible for a given unit of code to do, it's faster to understand the system and you can iterate more confidently with code you can trust.

emil0r 11/1/2025||||
You are very right in that things need to change. If they don't, nothing interesting happens and we as programmers don't get paid :p. State changes are typically moved to the edges of a program. Functional Core, Imperative Shell is the name for that particular architecture style.

FCIS can be summed up as: R->L->W where R are all your reads, L is where all the logic happens and is done in the FP paradigm, and W are all your writes. Do all the Reads at the start, handle the Logic in the middle, Write at the end when all the results have been computed. Teasing these things apart can be a real pain to do, but the payoff can be quite significant. You can test all your logic without needing database or other services up and running. The logic in the middle becomes less brittle and allows for easier refactoring as there is a clear separation between R, L and W.
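To make the R->L->W shape concrete, here is a tiny Python sketch (all names invented; the core is pure, and the shell owns all the I/O):

```python
# Functional core: pure logic, testable with plain data, no services needed.
def total_per_user(rows: list) -> dict:
    totals = {}  # local mutation only; the function is pure as seen from outside
    for row in rows:
        totals[row["user"]] = totals.get(row["user"], 0) + row["amount"]
    return totals

# Imperative shell: all Reads first, all Writes last.
def run(read_rows, write_report):
    rows = read_rows()             # R: the only place input I/O happens
    report = total_per_user(rows)  # L: pure logic in the middle
    write_report(report)           # W: the only place output I/O happens
```

In tests, `read_rows` and `write_report` can be plain lambdas over in-memory data; no database or service needs to be running.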

For your first question: yes, and I might misunderstand the question, so give me some rope to hang myself with, will ya ;). I would argue that what you really need to care about is the data that you are working with. That's the real program. Data comes in, you do some type of transformation of that data, and you write it somewhere in order to produce an effect (the interesting part).

The part where FP becomes really powerful is when you have data that always has a certain shape, and all your functions understand and can work with the shape of that data. When that happens, the functions start to behave more like lego blocks. The data shape is the contract between the functions, and as long as they keep to that contract, you can swap functions in and out as needed.

And so, to answer the question: yes, you do need to understand the entire program, but only as the programmer. The function doesn't, and that's the point. When the code inside a function doesn't need to worry about the state of the rest of the program, you can reason about the logic inside without worrying that some other part of the program will do something that messes up the code inside the function.

Debugging in FP typically involves knowing the data and the function that was called. You rarely need to know the entire state of the program.

Does it make sense?

DrScientist 11/4/2025||
I'm trying to work out in my head if it helps the true challenge of programming - not writing the program in the first place, but maintaining it as requirements evolve.

The examples for functional programming benefits always seem to boil down to composable functions operating on lists of stuff where the shape has to be the same or you convert between shapes as you go.

It's very useful, but it's not a whole programme - unless you have some simple server-side data processing pipeline - and I'd argue those aren't difficult programs.

Programming gets difficult when you have to manage state - so I accept that the parts that don't have to do that are much simpler; however, you have just moved the problem, not solved it.

And you say you've moved it to the edge of the program - that's fine with a simple in->function->out, but in the case of a GUI, isn't state at the core of the program?

In that case isn't something with a central model that receives and emits events, easier to reason over and mutate?

emil0r 11/4/2025||
Even the GUI can follow the FCIS architecture. It helps immensely with testing and moving things around.

For a bigger program that handles lots of things, you can still build it around the FCIS architecture, you just end up with more in->chains of functions->out. The things at the edges might grow, but at a much slower pace than the core.

My experience with both sides is what's driven me to FP+immutability.

For your last question: I believe it's a false belief. I believed the same when I started with FP+immutability. I just did not understand where I should put my changes, because I was so used to mutating a variable. It turned out that I only really need to mutate when I store in a db of some sort (frontend or backend), send data over the wire (socket, websocket, http response, gRPC, pub/sub, etc), or have an object hiding inherent complexity (hardware state like a push button, mouse, keyboard, etc). Graphics would also qualify, but that's one area where I think FP+immutability is ill suited.

Hit me up if you have any more questions :).

jimbokun 10/31/2025||||
> However, don't you still need to understand the entire program as ultimately that's what you are trying to build.

Of course not, that's impossible. Modern programs are way too large to keep in your head and reason about.

So you need to be able to isolate certain parts of the program and just reason about those pieces while you debug or modify the code.

Once you identify the part of the program that needs to change, you don't have to worry about all the other parts of the program while you're making that change as long as you keep the contracts of all the functions in place.

DrScientist 10/31/2025||
> Once you identify the part of the program that needs to change,

And how do you do that without understanding how the program works at a high level?

I understand the value of clean interfaces and encapsulation - that's not unique to functional approaches - I'm just wondering in the world of pure immutability where the application state goes.

What happens if the change you need to make is at a level higher than a single function?

jimbokun 10/31/2025||
Yes, obviously a program with no mutability only heats up the CPU.

The point is to determine the points in your program where mutation happens, and the rest is immutable data and pure functions.

In the case of interacting services, for example, mutation should happen in some kind of persistent store like a database. Think of POST and PUT vs GET calls. Then a higher level service can orchestrate the component services.

Other times you can go a long way with piping the output of one function or process into another.

In a GUI application, the contents of text fields and other controls can go through a function and the output used to update another text field.

The point is to think carefully about where to place mutability into your architecture and not arbitrarily scatter it everywhere.
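As a sketch of that idea in Python (an invented, minimal editor model, not any particular framework): each keystroke builds a new immutable state value, and one deliberately chosen mutable reference points at the latest version.

```python
from typing import NamedTuple

class EditorState(NamedTuple):
    text: str
    cursor: int

def type_char(state: EditorState, ch: str) -> EditorState:
    """Pure transition: the old state is untouched and stays inspectable."""
    return EditorState(state.text[:state.cursor] + ch + state.text[state.cursor:],
                       state.cursor + 1)

current = EditorState("", 0)          # the one place mutability lives
for ch in "hi":
    current = type_char(current, ch)  # rebinds the name; never mutates a value
assert current == EditorState("hi", 2)
```

All the interesting logic lives in pure transitions like `type_char`, which can be tested without any GUI running.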

DrScientist 11/3/2025||
> The point is to think carefully about where to place mutability into your architecture and not arbitrarily scatter it everywhere.

So you mean like having a centralised stateful application model for example?

maleldil 10/31/2025||||
> there still has to be mutable state somewhere - so where is it moved to?

This is one way of thinking about it: https://news.ycombinator.com/item?id=45701901 (Simplify your code: Functional core, imperative shell)

SatvikBeri 10/31/2025||||
A pretty basic example: I write a lot of data pipelines in Julia. Most of the functions don't mutate their arguments, they receive some data and return some data. There are a handful of exceptions, e.g. the functions that write data to a db or file somewhere, or a few performance-sensitive functions that mutate their inputs to avoid allocations. These functions are clearly marked.

That means that 90% of the time, there's a big class of behavior I just don't need to look for when reading/debugging code. And if it's a bug related to state, I can pretty quickly zoom in on a few possible places where it might have happened.
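A rough Python equivalent of that convention (the `_in_place` suffix is just an invented marker here, playing the role of Julia's trailing `!`):

```python
def normalized(xs: list) -> list:
    """Default style: returns a new list, input untouched."""
    total = sum(xs)
    return [x / total for x in xs]

def normalize_in_place(xs: list) -> None:
    """Clearly marked exception: mutates its argument to avoid allocations."""
    total = sum(xs)
    for i in range(len(xs)):
        xs[i] /= total

assert normalized([1.0, 3.0]) == [0.25, 0.75]
```

When debugging, only the clearly marked functions can be responsible for state-related surprises.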

scott_w 10/31/2025||||
> However, don't you still need to understand the entire program as ultimately that's what you are trying to build.

Depends on what I'm trying to do. If what I'm trying to handle is local to the code, then possibly not. If the issue is what's going into the function, or what the return value is doing, then I likely do need that wider context.

What pure-functional functions do allow is certainty the only things that can change the behaviour of that function are the inputs to that function.

bcrosby95 10/31/2025|||
It lets you refine when and where it happens more than other methods of restricting state change, such as in imperative OOP.
rafaelmn 10/31/2025|||
I would say it's more than immutability - it's the "feel" of working with values. I've worked with at least 6 languages professionally, and likely more for personal projects, over the last 20 years. I can say that Clojure was the most impactful language I learned.

I tried to learn Haskell before but I just got bogged down in the type system and formalization - that never sat well with me (ironically, in retrospect, Monads are a trivial concept that the community obfuscated to oblivion; "yet another Monad tutorial" was a meme at the time).

I used F# as well but it is too multi paradigm and pragmatic, I literally wrote C# in F# syntax when I hit a wall and I didn't learn as much about FP when I played with it.

Clojure had the lisp weirdness to get over, but its homoiconicity combined with the powerful semantics of its core data structures made it the first place where the concept of working with values vs objects 'clicked' for me. I would still never use it professionally, but I would recommend it to everyone who does not have a background in FP and/or lisp experience.

MarkMarine 10/31/2025||
I have dreams of being at a “Clojure shop” but I fear daily professional use might dull my love for the language. Having to realize that not everyone on my team wants to learn lisp (or FP) just to work with my code (something I find amazing and would love to be paid to do) was hard.

On a positive note I have taken those lessons from clojure (using values, just use maps, Rich’s simplicity, functional programming without excessive type system abstraction, etc) and applied them to the rest of my programming when I can and I think it makes my code much better.

StopDisinfo910 10/31/2025|||
I think the advantage is often oversold and people often miss how things actually exist on a continuum and just plainly opposing mutable and immutable is sidestepping a lot of complexity.

For example, it's endlessly amusing to me to see all the efforts the Haskell community makes to basically reinvent mutability in a way which is somehow palatable to their type system. Sometimes they fail to even realise that that's what they are doing.

In the end, the goal is always the same: better control and guarantees about the impact of side effects, with minimum fuss. Carmack's approach here is sensible. You want practices which make things easy to debug and reason about while maintaining flexibility where it makes sense, like iterative calculations.

pxc 10/31/2025|||
If you read through the Big Red Book¹ or its counterpart for Kotlin², it's quite explicit about the goals with these techniques for managing effects, and goes over rewriting imperative code to manage state in a "pure" way.

I think the authors are quite aware of the relationship between these techniques and mutable state! I imagine it's similar for other canonical functional programming texts.

Besides the "pure" functional languages like Haskell, there are languages that are sort of immutability-first (and support sophisticated effects libraries), or at least have good immutable collections libraries in the stdlib, but are flexible about mutation as well, so you can pick your poison: Scala, Clojure, Rust, Nim (and probably lots of others).

All of these go further and are more comfortable than just throwing `const` or `.freeze` around in languages that weren't designed with this style in mind. If you haven't tried them, you should! They're really pleasant to work with.

----

1: https://www.manning.com/books/functional-programming-in-scal...

2: https://www.manning.com/books/functional-programming-in-kotl...

MetaWhirledPeas 10/31/2025||
> If you read through the Big Red Book

This is a thoughtful response, but I can't help but chuckle at a response that starts with "just read this book!"

pxc 11/1/2025||
For me, well-written books are an enjoyable way to learn, and I'll admit I'm partial to that!

But of course you can learn in whatever way you like. Books are just a convenient example to point to as an indicator of how implementers, enthusiasts, and educators working with these techniques make sense of them and compare them to mutating variables. They're easy to refer to because they're notable public artifacts.

Fwiw, there's also an audiobook of the Red Book. To really follow the important parts, you'll want to be reading and writing and running code, but you can definitely get a sense of the more basic philosophical orientation just listening along while doing chores or whatever. :)

eyelidlessness 10/31/2025||||
> Sometimes they even fail to even realise that it's what they are doing.

Because that’s not what they’re doing. They’re isolating state in a systemic, predictable way.

StopDisinfo910 10/31/2025||
Lenses are mutation by another name. You are basically recreating state on top of an immutable system. Sure, it's all immutable underneath, but conceptually it doesn't really change anything. That's what makes it hilarious.

In the end, the world is stateful and even the purest abstractions have to hit the road at some point. But the authors of Haskell were fully aware of that. The monadic type system was conceived as a way to easily track side effects after all, not banish them.

eyelidlessness 10/31/2025||
But there isn’t anything hilarious about that.

It’s a clear-minded and deliberate approach to reconciling principle with pragmatic utility. We can debate whether it’s the best approach, but it isn’t like… logically inconsistent, surprising, or lacking in self awareness.

pxc 11/1/2025|||
You might also think of it a bit like poetry: creativity emerging from the process of working within formal constraints. By asking how you can represent something familiar in a specially structured way, you can learn both about that structure and the thing you're trying to unite with it. Occasionally, you'll even create something beautiful or powerful, as well.

Maybe in that sense there's an "artificial" challenge involved, but it's artificial in the sense of being deliberate rather than merely arbitrary or absurd.

eyelidlessness 11/1/2025||
This is a fantastic way to put it, thank you for adding it!
StopDisinfo910 11/1/2025|||
You don’t see what’s hilarious about recreating, one abstraction level up, the very thing you are pretending to remove?

Anyway, I have great hopes for effect systems as a way to approach this in a principled way. I really like what OCaml is currently doing with concurrency. It’s clear to me that there is great value to unlock here.

eyelidlessness 11/1/2025||
I don’t agree with your characterization that anyone is “pretending”. The whole point of abstraction is convenience of reasoning. No one is fooling themselves or anyone else, nor trying to. It’s a conscious choice, for clear purposes. That’s precisely as hilarious as using another abstraction you might favor more, such as an effect system.
Maxatar 10/31/2025|||
>For exemple, it's endlessly amusing to me to see all the efforts the Haskell community does to basically reinvent mutability in a way which is somehow palatable to their type system.

That's because Haskell is a predominantly a research language originally intended for experimenting with new programming language ideas.

It should not be surprising that people use it to come up with or iterate on existing features.

ndr 10/31/2025|||
Clojure also makes it very easy; it'd require too much discipline to do such a thing in Python. Even Carmack, who I think still mostly writes Python by himself rather than with a team, is having issues there.
MetaWhirledPeas 10/31/2025||
> it'd require too much discipline to do such a thing in Python

Is Python that different from JavaScript? Because it's easy in JavaScript. Just stop typing var and let, and start typing const. When that causes a problem, figure out how to deal with it. If all else fails: "Dear AI, how can I do this thing while continuing to use const? I can't figure it out."

codethief 10/31/2025|||
I agree that Python is not too different, and in general I treat my Python variables as const. One thing, however, where I resort to mutating variables more often than I'd like is when building lists & dictionaries. Lambdas in Python have horrible DX (no multi-line, no type annotations, bad type checker support even in obvious cases), which is why the functional approach to building your list with map() and filter() is much more cumbersome than in JS. As a result, whenever a list comprehension becomes too long, you end up building your list the old-fashioned way, using a for loop and the_list.append().
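For example (a toy illustration): the comprehension form stays immutable, and the loop form is what you fall back to once the logic outgrows one line:

```python
prices = [3, 55, 12, 99, 7]

# Comprehension: the list is built in one expression, never mutated afterwards.
discounted = [p * 0.9 for p in prices if p > 10]

# The fallback once the logic grows: a for loop plus .append(), i.e. mutation.
discounted2 = []
for p in prices:
    if p > 10:
        discounted2.append(p * 0.9)

assert discounted == discounted2
assert len(discounted) == 3
```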
filoeleven 11/1/2025||||
Javascript only enforces reassignments to const. So this,

  const arr = []
  arr.push("grape nuts")
is just peachy in JS and requires the programmer to avoid using it.

More importantly, because working immutably in JS is not enforced, trying to use it consistently either limits which libraries you can use and/or requires you to wrap them to isolate their side effects. ImmerJS can help a lot here, since immutability is its whole jam. I’d rather work in a language where I get these basic benefits by default, though.

throwaway2037 11/3/2025||

    > Javascript only enforces reassignments to const.
Java is the same with the keyword final. I never heard anyone complain about it. Are you asking for the special hell that is C++ const correctness?
SAI_Peregrinus 11/3/2025||||
Python doesn't have constants at the language level. You can create classes without setter properties, only getter properties, to have constant objects. This is rare, usually people just write the name in SCREAMING_SNAKE_CASE to document it's supposed to be a constant but Python will still allow mutating it.
ndr 11/1/2025|||
In Python there's no let, var, or const. So yes.
ErroneousBosh 10/31/2025|||
I guess I'm not that good a programmer, because I don't really understand why variables that can't be varied are useful, or why you'd use that.

How do you write code that actually works?

jimbokun 10/31/2025|||
The concept is actually pretty simple: instead of changing existing values, you create new values.

The classic example is a list or array. You don't add a value to an existing list. You create a new list which consists of the old list plus the new value. [1]

This is a subtle but important difference. It means any part of your program with a reference to the original list will not have it change unexpectedly. This eliminates a large class of subtle bugs you no longer have to worry about.

[1] Whether the new list has a completely new copy of the existing data, or references it from the old list, is an important optimization detail, but either way the guarantee is the same. It's important to get these optimizations right to make the language practically efficient, but while using the data structure you don't have to worry about those details.
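In Python terms, tuples make the guarantee concrete (a toy sketch):

```python
old = (1, 2, 3)
new = old + (4,)   # "adding" builds a new tuple instead of mutating

assert new == (1, 2, 3, 4)
assert old == (1, 2, 3)  # every existing reference still sees the old value
```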

ErroneousBosh 11/2/2025|||
> The classic example is a list or array. You don't add a value to an existing list. You create a new list which consists of the old list plus the new value. [1]

Getting back to this, though - where would this be useful? What would do this?

I'm not getting why having a new list that's different from the old list, with some code working off the old list and some working off the new list, is anything you'd ever want.

Can you give a practical example of something that uses this?

Why doesn't the list just have a mutex?

ErroneousBosh 10/31/2025|||
> It means any part of your program with a reference to the original list will not have it change unexpectedly.

I don't get why that would be useful. The old array of floats is incorrect. Nothing should be using it.

That's the bit I don't really understand. If I have a list and I do something to it that gives me another updated list, why would I ever want anything to have the old incorrect list?

jimbokun 10/31/2025|||
You pass in an array to a function meant to perform a transformation on each item of the array and return the result.

You pass in an array of 10 values.

While the function is executing, some other thread adds two more values to the array.

How many values should the result of the function call have? 10 or 12? How do you guarantee that is the case?

ErroneousBosh 10/31/2025||
> While the function is executing, some other thread adds two more values to the array.

This is not something that can happen.

jimbokun 11/1/2025||
Why not? Which language?
ErroneousBosh 11/1/2025||
Well, the stuff I'm writing is in C, but in general it would make no sense for anything to attempt to add items to a fixed-sized buffer.

If you have something so fundamentally broken as to attempt that, you'd probably want to look at mutexes.

Why on earth would you have something attempt to expand a fixed-sized buffer while something else is working on it?

filoeleven 11/1/2025||
There’s a mismatch between your assumptions coming from C and GP’s assumptions coming from a language where arrays are not fixed-length. Having a garbage collector manage memory for you is pretty fundamental to immutable-first languages.

Rich Hickey asked once in a talk, “who here misses working with mutable strings?” If you would answer “I do,” or if you haven’t worked much in languages where strings are always immutable and treated as values, it makes describing the benefits of immutability more challenging.

Von Neumann famously thought Assembly and higher-level language compilers were a waste of time. How much that opinion was based on his facility with machine code I don’t know, but compilers certainly helped other programmers to write more closely to the problem they want to solve instead of tracking registers in their heads. Immutable state is a similar offloading-of-incidental-complexity to the machine.

ErroneousBosh 11/1/2025||
I must admit I do regard assembly language with some suspicion, because the assembler can make some quite surprising choices. Ultra-high-level languages like C are worse, though, because they can often end up doing things like allocating really wacky bits of memory for variables and then having to get up to all sorts of stunts to index into your array.
b_e_n_t_o_n 10/31/2025|||
State exists in time, a variable is usually valid at the point it's created but it might not be valid in the future. Thus if part of your program accesses a variable expecting it to be from a certain point in time but it's actually from another point in time (was mutated) that can cause issues.
jayd16 10/31/2025||||
If you need new values you just make new things.

If you want to do an operation on fooA, you don't mutate fooA. You call fooB = MyFunc(fooA) and use fooB.

The nice thing here is you can pass around pointers to fooA and never worry that anything is going to change it underneath you.

You don't need to protect private variables because your internal workings cannot be mutated. Other code can copy it but not disrupt it.
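A small Python illustration of the same idea: each step gets its own name, so nothing upstream is disturbed and every intermediate value stays inspectable in a debugger:

```python
raw = "  Hello, World  "
stripped = raw.strip()       # each step gets a descriptive name...
lowered = stripped.lower()   # ...so every intermediate value is still
words = lowered.split(", ")  # visible when stepping through in a debugger

assert raw == "  Hello, World  "  # nothing upstream was disturbed
assert words == ["hello", "world"]
```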

ErroneousBosh 10/31/2025|||
> If you want to do an operation on fooA, you don't mutate fooA. You call fooB = MyFunc(fooA) and use fooB.

This is the bit I don't get.

Why would I do that? I will never want a fooA and a fooB. I can't see any circumstances where having a correct fooB and an incorrect fooA kicking around would be useful.

takinola 10/31/2025|||
It is about being able to think clearly about your code logic. If your code has many places where a variable can change, then it is hard to go back and understand exactly where it changed if you have unexpected behavior. If the variable can never change then the logical backtrace is much shorter.
jayd16 10/31/2025|||
As Carmack points out, naming the intermediate values aides in debugging. It also helps you write code as you can give a name to every mutation.

But also keep in mind that correct and incorrect is not binary. You might want to pass a fooA to another class that does not want the fooB mutation.

If you just have foo, you end up with situations where a copy should have happened but didn't and then you get unwanted changes.

ErroneousBosh 10/31/2025||
But that's just it, why would a copy ever happen? Why would you want a correct and an incorrect version of your variable hanging about?
kubanczyk 11/1/2025||
Taking your point of view: you assigned a value1 to a name. Then you assigned a value2 to the same name.

You say that value2 is correct. It logically follows that value1 was incorrect. Why did you assign it then?

The names are free, you can just use a correct name every single time.

zaphirplane 11/2/2025||
Because the account owner withdrew money. The player scored a goal, the month ticked over, the rain started, the car accelerated, a new comment was added to the thread.

The world by definition mutates over time.

kubanczyk 11/2/2025||
Ah, true. If the var is a part of a long-living state, all good. That's just rarely seen in CRUD apps, more common in games.
MetaWhirledPeas 10/31/2025|||
> If you need new values you just make new things.
>
> If you want to do an operation on fooA, you don't mutate fooA. You call fooB = MyFunc(fooA) and use fooB.

The beautiful thing about this is you can stop naming things generically, and can start naming them specifically what they are. Comprehension goes through the roof.

ErroneousBosh 10/31/2025||
Yes, but you can do that without having loads of stale copies of incorrect data lying around, presumably.
MetaWhirledPeas 11/3/2025||
With techniques like method chaining you can avoid using a constant or a variable and just pass return values around.
mleo 10/31/2025||||
It forces you to consider when, where and why a change occurs and can help reason later about changes. Thread safety is a big plus.
ErroneousBosh 10/31/2025||
Okay, so for example I might set something like "this bunch of parameters" immutable, but "this 16kB or so of floats" are just ordinary variables which change all the time?

Or then would the block of floats be "immutable but not from this bit"? So the code that processes a block of samples can write to it, the code that fills the sample buffer can write to it, but nothing else should?

stickfigure 10/31/2025||
Sounds like you have a data structure like `Array<Float>`. The immutable approach has methods on Array like:

   Array<Float> append(Float value);
   Array<Float> replace(int index, Float value);
The methods don't mutate the array, they return a new array with the change.

The trick is: How do you make this fast without copying a whole array?

Clojure includes a variety of collection classes that "magically" make these operations fast, for a variety of data types (lists, sets, maps, queues, etc). Also on the JVM there's Vavr; if you dig around you might find equivalents for other platforms.

No it won't be quite as fast as mutating a raw buffer, but it's usually plenty fast enough and you can always special-case performance sensitive spots.

Even if you never write a line of production Clojure, it's worth experimenting with just to get into the mindset. I don't use it, but I apply the principles I learned from Clojure in all the other languages I do use.
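A toy Python version of that API using plain tuples (no structural sharing, so every call copies; that copy is exactly the cost the persistent collections avoid):

```python
def append(arr: tuple, value) -> tuple:
    """Returns a new array with value added; the input is untouched."""
    return arr + (value,)

def replace(arr: tuple, index: int, value) -> tuple:
    """Returns a new array with one slot changed."""
    return arr[:index] + (value,) + arr[index + 1:]

a = (1.0, 2.0)
b = append(a, 3.0)
c = replace(b, 0, 9.0)
assert a == (1.0, 2.0)       # every older version stays intact
assert c == (9.0, 2.0, 3.0)
```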

ErroneousBosh 10/31/2025||
> The methods don't mutate the array, they return a new array with the change.

But then I need to update a bunch of stuff to point to the new array, and I've still got the old incorrect array hanging around taking up space.

This just sounds like a great way to introduce bugs.

stickfigure 10/31/2025||
It ends up being quite the opposite - many, many bugs come from unexpected side effects of mutation. You pass that array to a function and it turns out 10 layers deeper in the call stack, in code written by somebody else, some function decided to mutate the array.

Immutability gives you solid contracts. A function takes X as input and returns Y as output. This is predictable, testable, and thread safe by default.

If you have a bunch of stuff pointing at an object and all that stuff needs to change when the inner object changes, then you "raise up" the immutability to a higher level.

    Universe nextStateOfTheUniverse = oldUniverse.modifyItSomehow();
If you keep going with this philosophy you end up with something roughly like "software transactional memory" where the state of the world changes at each step, and you can go back and look at old states of the world if you want.

Old states don't hang around if you don't keep references to them. They get garbage collected.

ErroneousBosh 10/31/2025||
Okay, so this sounds like it's a method of programming that is entirely incompatible with anything I work on.

What sort of thing would it be useful for?

The kind of things I do tend to have maybe several hundred thousand floating point values that exist for maybe a couple of hundred thousandths of a second, get processed, get dealt with, and then are immediately overwritten with the next batch.

I can't think of any reason why I'd ever need to know what they were a few iterations back. That's gone, maybe as much as a ten-thousandth of a second ago, which may as well be last year.

stickfigure 11/1/2025||
It is useful for the vast majority of business processing. And, if John Carmack is to be believed, video game development.

Carmack's post explains it: if you make a series of immutable "variables" instead of reassigning one, it is much easier to debug. This is a microcosm of time-travel debugging; it lets you look at the state of those variables several steps back.

I don't know anything about your specific field, but I am confident that getting to the point where you deeply understand this perspective will improve your programming, even if you don't always use it.
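A minimal sketch of that "series of immutable variables" style (the tax and shipping numbers are made up for illustration):

```python
def total_price(base: float) -> float:
    # One name per step: a breakpoint on the return line still
    # shows every intermediate value, several steps back.
    with_tax = base * 1.08
    with_shipping = with_tax + 10.0
    rounded = round(with_shipping, 2)
    return rounded
```

Reassigning a single `price` variable three times would compute the same result, but the earlier values would be gone by the time the debugger stops.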

bjoli 11/1/2025|||
I spent some time in another thread discussing why the foreach loop is so bad in many languages. Most of the bugs I write come from me managing state, yet if I want to do much more than go from start to end of a collection, I have to either use methods that are slower than a proper loop or manage all the state myself.

In Common Lisp you have the loop macro (or better: iterate), in Racket you have the for loops. I wrote a thing for Guile Scheme [0]. Other than that I don't know of many nice looping facilities. In many languages you can achieve all that with combinators and whatnot, but always at the cost of performance.

I think this is an opportunity for languages to become safer and easier to use without changing performance.

0:https://rikspucko.koketteriet.se/bjoli/goof-loop

runeks 11/1/2025|||
One big problem with mutation is that it makes it too easy to violate many good design principles, e.g. modularity, encapsulation and separation of concerns.

Because any piece of code that holds a reference to a mutable variable is able to, at a distance, modify the behavior of a piece of code that uses this mutable variable.

Conversely, a piece of code that only uses immutable variables, and takes as argument the values that may need to vary between executions, is isolated against having its behavior changed at a distance at any time.
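A small Python sketch of that action at a distance (the function names are hypothetical):

```python
def report_total(prices: list) -> float:
    total = sum(prices)
    prices.append(total)  # surprise: mutates the caller's list
    return total

def report_total_pure(prices: tuple) -> float:
    # Takes an immutable tuple, so the caller's data cannot change.
    return sum(prices)

cart = [1.0, 2.0]
report_total(cart)
print(cart)  # [1.0, 2.0, 3.0] -- changed under the caller's feet
```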

nvarsj 11/1/2025|||
> I think it may be one of those things you have to see in order to understand.

Or the person doesn't understand, then declares the language to be too difficult to use. This probably happens more than the former, sadly.

ex. I've heard people argue for rewriting perfectly working Erlang services in C++ or Java, because they find Erlang "too difficult". Despite it being a simpler language than either of those.

dwwoelfel 10/31/2025|||
Carmack is talking about variable reassignment here, which Clojure will happily let you mutate.

For example:

  (let [result {:a 1}
        result (assoc result :b 2)]
    ...)

He mentions that C and C++ allow const variables, but Clojure doesn't support that.

clj-kondo has a :shadowed-var rule, but it will only find cases where you shadow a top-level var (not the case in my example).

manoDev 10/31/2025|||
That's not mutation though.

The `assoc` on the second binding is returning a new object; you're just shadowing the previous binding name.

This is different than mutation, because if you were to introduce an intermediate binding here, or break this into two `let`s, you could be holding references to both objects {:a 1} and {:a 1 :b 2} at any time in a consistent way - including in a future/promise dereferenced later.
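A rough Python analogue of the point above, showing that both versions remain usable at once (dict literals standing in for the Clojure maps):

```python
result = {"a": 1}
result2 = {**result, "b": 2}  # builds a new dict; no mutation

# Both versions coexist, so a reference captured earlier
# (say, by a callback or a future) stays consistent.
assert result == {"a": 1}
assert result2 == {"a": 1, "b": 2}
```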

potetm 11/1/2025|||
regardless of the mechanism, you still run into the exact same problem John had.
didibus 10/31/2025|||
It's more nuanced, because the shadowing is block-local, so when the lexical scope exits the prior bindings are restored.

I think in practice this is the ideal middle ground of convenience (putting version numbers at the end of variables being annoying), but retaining mostly sane semantics and reuse of prior intermediate results.

m463 10/31/2025|||
I think a lot of this kind of stuff should have language support (like he mentions), even if it is not that functional and is just as a hint.

That said, utopias are not always a great idea. Making all your code functional might be philosophically satisfying, but sometimes there are good reasons to break the rules.

ratelimitsteve 10/31/2025|||
the flash of enlightenment I had when I understood the incredible power the rules of functional programming give you as a coder is probably the biggest one I've had in my career so far. Idempotence, immutability and statelessness on their own let you build a thing once, in a disciplined way, and then use it all willy-nilly anywhere you want without having to think about anything other than "things go into process, other things come out", and it's so nice.
oldpersonintx2 10/31/2025|||
[dead]
m_rpn 10/31/2025||
salutes from a WestLondonCoder
hyperhello 10/31/2025||
> I wish it was the default, and mutable was a keyword.

I wish the IDE would simply provide a small clue, visible but graphically unobtrusive, that it was mutated.

In fact, I end up wishing this about almost every language feature that passes my mind. For example, I don't need to choose whether I can or can't append to a list; just make it unappendable if you can prove I don't append. I don't care if it's a map, list, set, listOf, array, vector, arrayOf, Array.of(), etc unless it's going to get in my way because I have ten CPU cores and I'll optimize the loop when I need to.

throwaway2037 10/31/2025||
In my IntelliJ (a recent version), if I write a small Java function like this:

    private static void blah()
    {
        final int abc = 3;
        for (int def = 7; def < 20; ++def)
        {
            System.out.print(def);
        }
    }
The variable 'def' is underlined. Mouse-over hint shows: 'Reassigned local variable'. To be clear, 'abc' is not underlined. When I write Java, I try to use the smallest variable scopes possible with as much final (keyword) as possible. It helps me to write more maintainable code, that is easier to read.
shagie 10/31/2025|||
As an aside, you might also enjoy the inline inferred annotations.

https://www.jetbrains.com/help/idea/annotating-source-code.h...

Seeing @NotNull in there even if the author hasn't specifically written that can help in understanding (and not needing to consider) various branches.

xxs 10/31/2025||||
1st) you use ++def in a loop, don't be weird; 2nd) if 'abc' is to be used in the loop body, define it in the loop, e.g. for (int def = 7, abc = 3; ...); 3rd) this is an IntelliJ bug - both 'def' and 'abc' in the sample are always defined.
throwaway2037 11/6/2025|||

    > you use ++def in a loop, don't be weird
I come from a C++ background where it is always advised to use ++i instead of i++. It's just a habit. Does it stress you to read ++i over i++?
xxs 11/10/2025||
>Does it stress you to read ++i over i++?

Of course not, I do use both. Admittedly i++ is a lot more common in Java, and in for loops i++ is the standard idiom. Not using the standard idiom usually implies less practice, e.g. str.indexOf('x') < 0 is the standard check, not == -1. Even backwards iteration uses a postfix decrement:

  for (int i = array.length; i-- > 0;) doStuff(array[i]);
pacoverdi 10/31/2025||||
3) looks like you read 'underlined' as 'undefined'
xxs 10/31/2025||
true that, thanks!
sitzkrieg 10/31/2025|||
the only thing that is weird is your lack of understanding of temporary variables
xxs 10/31/2025||
perhaps... yet, Java doesn't have a definition for temporary variables
e-topy 10/31/2025|||
This works in RustRover as well! Super useful.
sn9 10/31/2025||
Rust's type system specifically facilitates more powerful tools: https://github.com/willcrichton/flowistry
NathanaelRea 10/31/2025|||
I don't think this is the best option, there could be very hard bugs or performance cliffs. I think I'd rather have an explicit opt-in, rather than the abstraction changing underneath me. Have my IDE scream at me and make me consider if I really need the opt-in, or if I should restructure.

Although I do agree with the sentiment of choosing a construct and having it optimize if it can. Reminds me of a Rich Hickey talk about sets being unordered and lists being ordered, where if you want to specify a bag of non-duplicate unordered items you should always use a set to convey the meaning.

It's interesting that small hash sets are slower than small arrays, so it would be cool if the compiler could notice size or access patterns and optimize in those scenarios.

bbminner 10/31/2025||
Right, SQL optimizers are a good example: in theory the optimizer should "just know" the optimal way of doing things, but because these decisions are made at runtime based on query analysis, small changes to logic can cause huge changes in performance.
nielsbot 10/31/2025|||
I use Swift for work. The compiler tell you this. If a mutable variable is never mutated it suggests making it non-mutable. And vice versa.
bartvk 10/31/2025|||
Yup, it's pretty great. You get into the habit of suspiciously eyeing every variable that's not a constant.
qmmmur 10/31/2025|||
As will Typescript, at least using Biome to lint it does.
nielsbot 10/31/2025|||
My very minor complaint about TypeScript is you have to use `const`, which is 2 additional letters.

Seriously though, I do find it slightly difficult to reason about `const` vars in TypeScript because while a `const` variable cannot be reassigned, the value it references can still be mutated. I think TypeScript would benefit from more non-mutable value types... (I know there are some)

Swift has the same problem in theory, but it's very easy to use non-mutable value types in Swift (`struct`), so it's mitigated a bit.

maleldil 10/31/2025|||
eslint has this too: https://eslint.org/docs/latest/rules/prefer-const
estimator7292 10/31/2025|||
Your IDE probably supports this as an explicit action. JetBrains has a feature that can find all reads and writes to a variable
Denvercoder9 10/31/2025||
It also has the ability to style mutated variables differently.
greenicon 10/31/2025||
Yes, depending on your highlighting scheme. Not every highlighting scheme shows this by default, unfortunately.

To me, this seems initially like some very minor thing, but I find this very helpful working with non-trivial code. For larger methods you can directly discern whether a not-as-immutable-declared variable behaves immutable nonetheless.

bee_rider 10/31/2025|||
I don’t have any useful ideas here but if you make a linter for this sort of thing, I suggest calling it “mutalator.”
considerdevs 10/31/2025|||
Could Pylint help? It at least has a check for variable redefinition: https://pylint.pycqa.org/en/latest/user_guide/messages/refac...
spidersouris 11/1/2025||
For type only though.
worthless-trash 10/31/2025|||
If you write in erlang, emacs does this by default ;)
HDThoreaun 10/31/2025||
Clang-tidy's misc-const-correctness warns for this. Hook it up to claude code and it'll const all non mutated mutables.
slifin 10/31/2025||
Yeah I wish variables were immutable by default and everything was an expression

Oh well continues day job as a Clojure programmer that is actively threatened by an obnoxious python take over

sunrunner 10/31/2025||
As a Python programmer at day job, that is Clojure-curious and sadly only gets to use it for personal projects, and is currently threatened by an obnoxious TypeScript take over, I feel this.
hn_throw2025 10/31/2025|||
In the context of the original discussion, TypeScript (and ES6) has const and let.
dragonwriter 10/31/2025|||
Neither let nor even const are immutable (const prevents reassignment but not mutation if the value is of a mutable type like object or array).
Kailhus 10/31/2025|||
Yep, I believe you'd need to call Object.seal(foo) to prevent mutation. Haven't really had the chance to use it.
mwcz 11/1/2025||
Object.freeze is the one you're looking for.

const + Object.freeze is a lot to remember and cumbersome to use throughout a codebase, very relevant to Carmack's wish for immutability by default. I'm grateful Rust opted for that default.

sunrunner 10/31/2025||||
Fair enough about const and let, the obnoxiousness for me is a combination of the language ergonomics, language ecosystem, but mostly the techno-political decision making behind it.
AstroBen 10/31/2025|||
well yeah except const doesn't make objects or arrays immutable
MetaWhirledPeas 10/31/2025|||
Yeah it makes their structure immutable? Something like that. Not useless but not what you would expect.

But for non-objects and non-arrays it's fine.

catlifeonmars 10/31/2025|||
I feel that Java’s “final” would have been a better choice than “const”. It doesn’t have the same confusing connotation.
Garlef 10/31/2025||||
If you avoid metaprogramming and stick to the simple stuff, python and typescript are almost the same language.

To be fair, comprehensions (list/object expressions) are a nice feature that I miss a lot in JS/TS. But that's about it.

dude250711 10/31/2025|||
Removing barriers to sloppy code is a language feature.

That is why vibe coding, JavaScript and Python are so attractive.

Xelbair 10/31/2025||
Removing barriers to civil engineering building design is a feature.

Who needs to calculate load bearing supports, walls, and floors when you can just vibe oversize it by 50%.

huflungdung 10/31/2025||
Well if it does the job. So what?
ziml77 10/31/2025|||
Rust taught me that a language does not have to be purely functional to have everything be an expression, and ever since I first used Rust years ago I've been wishing every other language worked that way. It's such a nice way to avoid or limit the scope of mutations
sgt 10/31/2025|||
Clojure will always be faster than Python. So you have that, at least.
nvader 10/31/2025||
You are not a Clojure programmer. You use Clojure to solve problems in a professional context. I'm sorry that there's a political tribal war based on language going on at your workplace.

But especially now that coding agents are radically enabling gains in developer productivity, you don't need to feel excluded by the artificial tribal boundaries.

If you haven't, I recommend reading: https://www.kalzumeus.com/2011/10/28/dont-call-yourself-a-pr...

sunrunner 10/31/2025|||
I remember that post and essentially agree with everything in it and your points too.

However there's a real-world factor that I don't think it covers, which is that having ten years of experience in the ecosystem for any language almost guarantees that you're going to be faster, more efficient, more idiomatic, and generally more comfortable through familiarity with that language and its ecosystem than with any other 'drop in replacement'. And you'll also probably be more aware of what doesn't work, which is just as useful. You can always tell when someone knows their tools well when they can immediately tell you what sucks about them, and possibly even the history of it and why it might happen to make sense, even if seems bad.

This isn't an argument for favouring speed or efficiency, just an acknowledgement of what is lost when you choose, or are forced, to move to a different environment.

Languages are a lot more than just syntax. Language-specific features, conventions and common idioms, language implementation details that end up being valuable to understand, familiarity with the core library, familiarity with third-party libraries (including the ones that are so well-known as to almost be considered core), package management, documentation standards, related tooling, foreign-function interfaces and related tools to make that workable, release concerns. The list goes on.

There's no tribal boundary here, just a belief that time spent with a given tool and all its idiosyncrasies (and programming languages are their idiosyncrasies, otherwise they wouldn't be different) is valuable and not something to pass up, even if I agree with the thesis of the article.

Can you bootstrap your way to a passable, possibly even idiomatic, solution with coding agents? Yes. Does that mean you've managed to short circuit the results of long-term experience? I'm not so sure. Does it matter? Depends on the person or environment, I guess.

I don't think the learning curve for a new tool is a straight line (I imagine more logarithmic), so it's not that you'd need the same amount of exposure in terms of time, but that does imply the cost of changing is up-front.

There's also a difference between choosing to investigate a new language out of your own interest and having the time to do it properly, versus having some top-down mandate that you must now use <X>, meanwhile still having to meet the same deadlines as before.

jimbokun 10/31/2025|||
You know I read this when it came out but have gotten out of the habit of applying it.

Thanks for the reminder. Will work on putting these ideas back into practice again.

gwbas1c 10/31/2025||
Years ago I did a project where we followed a lot of strict immutability for thread safety reasons. (Immutable objects can be read safely from multiple threads.)

It made the code easier to read because it was easier to track down what could change and what couldn't. I'm now a huge fan of the concept.

piker 10/31/2025|
You should check out Rust
gwbas1c 10/31/2025||
Rust wasn't available at the time.

It probably won't come as a surprise to you, but I am a big fan of Rust.

nixpulvis 10/31/2025||
I try to keep deeper mutation to where it belongs, but I'll admit to shadowing variables pretty often.

If I have a `result` and I need to post-process it, I'm generally much happier doing `result = result.process()` than having something like `preresult`. Works nicely in cases where you end up moving it into a condition, or commenting it out to test an assumption while developing. If there's an obvious name for the intermediate result, I'll give it that, but I'm not over here naming things `result_without_processing`. You can read the code.
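A minimal sketch of the two styles being debated here, with string methods standing in for the hypothetical `process()`:

```python
raw = "  Hello, World  "

# Reassigning one name: concise, but earlier values are gone.
result = raw.strip()
result = result.lower()

# One name per step: every intermediate stays inspectable.
stripped = raw.strip()
lowered = stripped.lower()

assert result == lowered == "hello, world"
```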

furyofantares 10/31/2025||
You're using really generic terms which I have to think is mostly because you're talking about it in the abstract. In most scenarios I find there are obvious non-generic names I can use for each step of a calculation.
eugenekolo 10/31/2025|||
I disagree you'd find "obvious" non-generic names easily. After all, "naming" is one of the hardest things in computer science.
nixpulvis 10/31/2025|||
I mean, I use `result` in a function named `generate` within a class `JSON < Generator`. Stuff like this is pretty common.
philipov 10/31/2025|||
if you're already committing to generic names, what's wrong with a name like `processed_result`?
snarfy 10/31/2025|||
In the flow he describes you end up with processed_processed_processed_result.
WhyNotHugo 10/31/2025||
Java mentioned!
strbean 10/31/2025||
AbstractFactoryResultFactoryProcessedResultProcessedResultProcessorBeanFactory
codr7 11/1/2025||
...BeanFactoryContextConfig

First you configure a context, then you can use that to get a bean factory and start processing your whatevers.

catlifeonmars 10/31/2025||||
That name is kind of redundant, since `result` implies `processed` in the first place.
throwway120385 10/31/2025|||
I think what they're getting at is that they sometimes use composition of functions in places where other people might call the underlying functions as one procedure and have intermediate results.

At the end of the day, you're all describing different ways of keeping track of intermediate results. Composition just has you drop the intermediate results when they're no longer relevant, and you can decompose if you want the intermediates.

MetaWhirledPeas 10/31/2025|||
> Stuff like this is pretty common.

Common != Good

jimbokun 10/31/2025|||
Yes, but there are often FP tricks and conveniences that make this unnecessary.

Like chaining or composing function calls.

result = x |> foo |> bar |> baz (-> x foo bar baz)

Or map and reduce for iterating over collections.

Etc.

nixpulvis 10/31/2025||
Yea, very true. Not every language makes this nice though.
swid 10/31/2025|||
I'm going to ignore the actual names used here - you can use any name you want. I think this pattern is prone to introducing security bugs. I'm imagining process being some kind of sanitization or validation. Then you have this thing called result, and some of the time it might be "safe" or processed, and sometimes not. Sometimes people will process it more or less than once, with real consequences.

So yeah, definitely it is much better to name the first one in a way that makes it more clear it hasn't been processed yet.
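A concrete Python version of that hazard, with HTML escaping standing in for the hypothetical sanitization step:

```python
import html

user_input = "Tom & Jerry"

safe = html.escape(user_input)      # 'Tom &amp; Jerry'

# If one name holds both raw and sanitized text, it is easy to
# run the step twice somewhere down the line:
double = html.escape(safe)          # 'Tom &amp;amp; Jerry' -- corrupted
```

Naming the unsanitized value distinctly (e.g. `unsafe_text` vs `safe_text`) makes both the missing and the doubled escape much harder to write.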

nailer 10/31/2025|||
result.process()

That doesn’t make logical sense. You already have a result. It shouldn't need processing to be a result.

riskable 10/31/2025||
It also doesn't make sense for `process()` to be an attribute of `result`. Why would you instantiate a class and call it result‽
Bayko 10/31/2025|||
A more common example for me at work is getting a response from url. Then you gotta process it further like response.json() or response.header or response.text etc etc. and then again select the necessary array index or doc value from it. Giving a name like pre_result or result_json etc etc would just become cumbersome.
nixpulvis 10/31/2025||
I would never do `response = response.json()`. I use it when it's effectively the same type, but with further processing which may be optional.
nomel 10/31/2025||
Depends on how clear it is.

I usually write code to help local debug-ability (which seems rare). For example, this allows one to trivially set a conditional breakpoint and look into the full response:

    response = get_response()
    response = response.json()
The fact that the first response is immediately overwritten proves to the reader it's not important/never used, so they can forget about it, where a temp variable would add cognitive load since it might be used later.

and I think is just as clear as this:

    response = get_response().json()

This motivated by years of watching people through code, and me working with deeply non-software engineers, and is always very much appreciated.
nailer 11/1/2025||

    get_response().json() 
is ideal, and I'm assuming yoiu're writing an HTTP wrapper since decoding JSON is a sensible default.

If you need to add an intermediary variable, name it as clearly as possible:

    raw_response = get_response()
    response = raw_response.json()
nomel 11/4/2025||
> The fact that the first response is immediately overwritten proves to the reader it's not important/never used, so they can forget about it, where a temp variable would add cognitive load since it might be used later.

I strive to write code that reduces cognitive load. To me, putting it in a temp variable is more of a habit from old languages, mixed with a bit of cargo cult.

nailer 11/4/2025||
> To me, putting it in a temp variable is more of habit of old languages

If you do want an intermediate variable, naming it non-deceptively will reduce cognitive load. If you don't want one, that's fine too. There's no deception with a name that doesn't exist.

feoren 10/31/2025|||
> Why would you instantiate a class and call it result‽

Are you suggesting that the results of calculations should always be some sort of primitive value? It's not clear what you're getting hung up on here.

nailer 11/3/2025||
No, the result of a calculation could be a key value or list or other compound value - whatever the result is. I am getting hung up on deceptive naming. If you have a 'result', the calculation is done. You have a result.
MetaWhirledPeas 10/31/2025||
> result.process()

What result? What process?

...says every person who has to read your code later.

b_e_n_t_o_n 10/31/2025||
I mean, it's probably pretty clear when you look at result's initial assignment...
lopatin 10/31/2025||
How fast this got to the top, you would think John Carmack just invented nuclear fusion.
AndrewOMartin 10/31/2025||
I have no doubt that he could, not only invent nuclear fusion, but get it running on a Pentium 90.
devnullbrain 10/31/2025|||
I was part of the Carmack cult but the illusion was broken when I saw him use the same authoritative tone on a subject I'm more knowledgeable about.
carabiner 10/31/2025|||
I don't think Gell-Mann amnesia is really a material issue here. Carmack is indubitably an expert in his field, but that doesn't mean he's an expert in every field (like aerospace or AI). I'm an expert in some things, but I've probably said some stupid shit in fields where I dabble, such as cooking, playing music, or raising cats.
metaltyphoon 10/31/2025||||
Such as? Any links to this?
maleldil 10/31/2025|||
Good old Gell-Mann Amnesia! It doesn't mean he's incompetent in his core area of expertise, though.
trallnag 10/31/2025|||
How many hours of discussions went into topics like code formatting and naming things like variables, endpoints, classes, etc?
throwaway314155 10/31/2025|||
Right? This isn't even a hot take - it's just standard software engineering advice we all learn in school or on the job.
villgax 10/31/2025|||
Just like AGI he was supposedly brought on board for but…..checks notes. Nothing.
AnotherGoodName 10/31/2025|||
>Just like AGI he was supposedly brought on board for but…..checks notes. Nothing.

His AGI work was entirely his own? As in he literally stepped down from a high level corporate role where he was responsible for Oculus (3D games/applications) to do this in his own time. Similar to his work on Armadillo Aerospace.

yeasku 11/2/2025||
Oculus, which also failed hard.
jpgvm 10/31/2025|||
Dude isn't a god.

That said it's worth listening when he chimes in about C/C++ or optimisation as he has earned his respect when it comes to these fields.

Will he crack AGI? Probably not. He didn't crack rockets either. Doesn't make him any less awesome, just makes him human.

TheAceOfHearts 10/31/2025||
There are certain figures who are very experienced and knowledgeable in certain domains, so when they speak up about a topic it's usually worth listening to them. That doesn't mean they're always going to be correct, and they shouldn't be worshiped as superhuman entities, but it's almost always a bad idea to completely ignore them.
viraptor 10/31/2025|||
Sometimes it's nice to be reminded of some basic good ideas. Even if you already know. Also https://xkcd.com/1053/
bmitc 10/31/2025||
People worship this guy, but other than being a good C++ graphics programmer, it isn't clear what he's actually done.
tom_ 10/31/2025|||
Well some people do, I'm sure, but most people just pay ordinary levels of attention to him. And they do that because he's made interesting contributions to multiple products that people like using - which is enough, surely?

(Regarding this specific tweet, this seems to be him visiting his occasional theme of how to write C++ in a way that will help rather than hinder the creation of finishable software products. He's qualified to comment.)

rvba 10/31/2025||||
Doom, which countless people have fun with to this day making mods? (E.g. the great Myhouse.wad that was perhaps FPS of the year... 2023)

Quake, which was a good game, but arguably an even better engine, one that led to things like Half-Life 1?

Other games?

Shared the code to Doom and Quake?

I guess you don't understand how big of a game Doom was. The first episode holds up surprisingly well to this day, even after hundreds of "doom clones", as they used to call FPS games.

bmitc 10/31/2025||
But he didn't design the Doom game. He designed its graphics engine.
zenlot 10/31/2025||
You understand how games work, do you? Please tell me you do. Else, what kind of argument is it?
tredre3 10/31/2025|||
The engine enables people with actual creativity to realize their vision. I believe Carmack was part of that creative process, but the condescending tone of your comment really isn't appropriate, because in most games the engine is just a means to an end (nobody sane idolizes games because they're using Unreal underneath, for example).
zenlot 11/1/2025||
Just decided to talk shit on a Friday? What you believe doesn't matter. Facts matter, and your knowledge of the facts is far from the truth.
bmitc 10/31/2025|||
What is your point?
zenlot 11/1/2025||
[flagged]
modeless 10/31/2025|||
His engines are open source, and graphics are far from the only interesting thing about them. If you don't know what he's done that's on you; it's no secret.
bmitc 10/31/2025||
So again, what has he done successfully besides C++ graphics?
zenlot 10/31/2025|||
More than you will ever do mate.
bmitc 10/31/2025||
That's not part of my argument.
zenlot 11/1/2025||
You don't have any argument at all.
anymouse123456 10/31/2025||
I completely agree with the assertion and the benefits that ensue, but my attention is always snagged by the nomenclature.

I know there are alternate names available to us, but even in the context of this very conversation (and headline), the thing is being called a "variable."

What is a "variable" if not something that varies?

tialaramex 10/31/2025||
In the cases we're interested in here the variable does vary, what it doesn't do is mutate.

Suppose I have a function which sums up all the prices of products in a cart, the total so far will frequently mutate, that's fine. In Rust we need to mark this variable "mut" because it will be mutated as each product's price is added.

After calculating this total, we also add $10 shipping charge. That's a constant, we're (for this piece of code) always saying $10. That's not a variable it's a constant. In Rust we'd use `const` for this but in C you need to use the C pre-processor language instead to make constants, which is kinda wild.

However for each time this function runs we do also need to get the customer ID. The customer ID will vary each time this function runs, as different customers check out their purchases, but it does not mutate during function execution like that total earlier, in Rust these variables don't need an annotation, this is the default. In C you'd ideally want to label these "const" which is the confusing name C gives to immutable variables.
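For illustration, those three kinds map roughly onto Python like this (the checkout logic is made up, and unlike Rust, Python enforces none of these distinctions):

```python
SHIPPING = 10.0  # a constant: the same value on every run

def checkout(customer_id: str, prices: list) -> float:
    # customer_id varies between calls but never mutates within one.
    total = 0.0               # genuinely mutable: updated per product
    for price in prices:
        total += price
    return total + SHIPPING
```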

ajross 10/31/2025|||
> In the cases we're interested in here the variable does vary, what it doesn't do is mutate.

Those are synonyms, and this amounts to a retcon. The computer science term "variable" comes directly from standard mathematical function notation, where a variable reflects a quantity being related by the function to other variables. It absolutely is expected to "change", if not across "time" then across the domain of the function being expressed. Computers are discrete devices, and a variable that "varies" across its domain inherently implies that it's going to be computed more than once. The sense Carmack is using, where it is not recomputed and just amounts to shorthand for a longer expression, is a poor fit.

I do think this is sort of a wart in terminology, and the upthread post is basically right that we've been using this wrong for years.

If I ever decide to inflict a static language on the masses, the declaration keywords will be "def" (to define a constant expression) and "var" (to define a mutable/variable quantity). Maybe there's value in distinguishing a "var" declaration from a "mut" reference and so maybe those should have separate syntaxes.

zahlman 10/31/2025|||
> Those are synonyms, and this amounts to a retcon.

The point is that it varies between calls to a function, rather than within a call. Consider, for example, a name for a value which is a pure function (in the mathematical sense) of the function's (in the CS sense) inputs.

ajross 10/31/2025||
Or between iterations of the loop scope in which it's defined, const/immutable definitions absolutely change during the execution of a function. I understand the nitpicky argument, I just think it's kinda dumb. It's a transparent attempt to justify jargon that we all know is needlessly confusing.
tialaramex 10/31/2025||
Ah! Actually this idea that the immutable variables in a loop "change during execution" is a serious misunderstanding and some languages have tripped themselves up and had to fix it later when they baked this mistake into the language.

What's happening is that each iteration of the loop creates new variables that happen to have the same name; they're not the same variables with a different value. When a language designer assumes those are the same thing, the result is confusing for programmers, and it usually ends up requiring a language-level fix.

e.g. "In C# 5, the loop variable of a foreach will be logically inside the loop"

ajross 10/31/2025||
Seems like you're coming around to my side of the fence that calling these clearly distinct constant expressions "variables" is probably a mistake?
tialaramex 10/31/2025||
I don't think so? I've been clear that there are three distinct kinds of thing here - constants, immutable variables, and mutable variables.

In C the first needs us to step outside the language to the macro pre-processor, the second needs the keyword "const", and the third is the default.

In Rust the first is a const, the second we can make with let and the third we need let mut, as Carmack says immutable should be the default.

ajross 10/31/2025||
There are surely more than three! References can support mutation or not, "constants" may be runtime or compile time.

The point is that the word "variable" inherently reflects change. And choosing it (a-la your malapropism-that-we-all-agree-not-to-notice "immutable variables") to mean something that doesn't is (1) confusing and (2) tends to force us into worse choices[1][2] elsewhere.

A "variable" should reflect the idea of something that can be assigned.

[1] In Rust, the idea of something that can change looks like a misspelled dog, and is pronounced so as to imply that it can't speak!

[2] In C++, they threw English out the window and talk about "lvalues" for this idea.

kelipso 11/1/2025||
The term "variable" comes from math and is hundreds of years old. Variables in pure functional languages are used exactly the same way they're used in math. The idea of mutating and non-mutating variables is pretty old too, and used in math as well. Neither is going to change.
IshKebab 10/31/2025|||
Well maybe global constants shouldn't be called "variables", but I don't see how your definition excludes local immutable variables from being called "variables". E.g.

  fn sin(x: f64) -> f64 {
    let x2 = x / PI;
    ...
Is x2 not variable? Its value varies depending on how I assign x.

Anyway this is kind of pointless arguing. We use the word "variable". It's fine.

barisozmen 10/31/2025||||
Even if the term 'variable' has roots in math, where it is acceptable that it might not mutate, I think for clarity the naming should be different. It's awkward to think about something that can vary but not mutate. Clearer names can be found.
tredre3 10/31/2025|||
> In Rust we'd use `const` for this but in C you need to use the C pre-processor language instead to make constants, which is kinda wild.

I get that you're not very familiar with C? Because in C we'd use const as well.

    const int x = 2;
    x = 3; // error: assignment of read-only variable 'x'
tialaramex 10/31/2025|||
That's not a constant, that's an immutable variable which is why your diagnostic said it was read-only.

   const int x = 2;
   int *p = &x;
   *p = 3; // Now x is 3
And since I paid for the place where I'm writing this with cash earned writing C a decade or so ago, I think we can rule out "unfamiliar with C" as a symptom.
tredre3 11/2/2025||
Now x is 3 but you also get a compiler warning telling you not to do that.

In my opinion it's a bit disingenuous to argue that it isn't a const just because you can ignore the compiler and shoot yourself in the foot. If you listen to the compiler, it is reflected in the assembly that it is a constant value the same as #define x 2.

Is Rust better at enforcing guarantees? Of course. Is `const` in C `const` if you don't ignore compiler warnings and errors? Also of course.

> And since I paid for the place where I'm writing this with cash earned writing C a decade or so ago

Ditto!

superblas 10/31/2025|||
Perhaps they're conflating how you can't use "const" as a compile-time constant (e.g., you can't declare the size of an array with a "const" variable). If so, C23 solves this by finally getting the constexpr keyword from C++.
nayuki 10/31/2025|||
> What is a "variable" if not something that varies?

If I define `function f(x) { ... }`, even if I don't reassign x within the function, the function can get called with different argument values. So from the function's perspective, x takes on different values across different calls/invocations/instances.
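A sketch of the same point in Rust (my example; the function name is arbitrary):

```rust
// `x` is never reassigned inside `square`, yet its value varies
// from one call to the next.
fn square(x: i32) -> i32 {
    x * x
}

fn main() {
    println!("{} {}", square(2), square(5)); // 4 25
}
```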

layer8 10/31/2025|||
Variables are called variables because their values can vary between one execution of the code and the next. This is no different for immutable variables. A non-variable, aka a constant, would be something that has the same value in all executions.

Example:

  function circumference(radius)
      return 2 * PI * radius
Here PI is a constant, while radius is a variable. This is independent of whether radius is immutable or not.

It doesn’t have to be a function parameter. If you read external input into a variable, or assign to it the result of calling a non-pure function, or of calling even a pure function but passing non-constant expressions as arguments to it, then the resulting value will in general also vary between executions of that code.

Note how the term “variable” is used for placeholders in mathematical formulas, despite no mutability going on there. Computer science adopted that term from math.

https://en.wikipedia.org/wiki/Variable_(mathematics)

Warwolt 10/31/2025|||
It's a variable simply because it doesn't refer to a specific object, but to any object assigned to it, either as a function argument or as the result of a computation.

It's in fact we programmers who are the odd ones out, compared to how the word "variable" has been used by mathematicians and logicians for a long time.

throwaway_7274 10/31/2025|||
Right, yeah, it’s a funny piece of terminology! The sense in which a ‘variable’ ‘varies’ isn’t that its value changes in time, but that its value is context-dependent. This is the same sense of the word as used in math!
jayd16 10/31/2025|||
A common name for these is "value": you can call them immutable values and mutable variables.

Another way to look at it is that variables are separate from compile-time constants, whether you mutate them or not.

garethrowlands 10/31/2025|||
The term 'variable' is from mathematics. As others have said, the values of variables do vary but they do not mutate.
astrobe_ 10/31/2025|||
Yes, and math has the notion of "free variable" and "bound variable" [1].

[1] https://en.wikipedia.org/wiki/Free_variables_and_bound_varia...

1-more 11/3/2025|||
They are often called bindings, not variables, so as to make it clear that they are just a name for a thing that will not change.
usrusr 10/31/2025|||
Some languages like Kotlin have var and val, introducing the distinction between variables (which are expected to get reassigned, to vary over time) and values, which are just that: a value that has been given a name. I like these small improvements.

(unfortunately, Kotlin then goes on and introduces "val get()" in interfaces, overloading the val term with the semantics of "read only, but may very well change between reads, perhaps you could even change it yourself through some channel other than simple assignment which is a definite no")

didibus 10/31/2025|||
That's why in some languages they don't call them variables, but bindings instead.

(let [a 10] a)

Let the symbol `a` be bound to the value `10` in the enclosing scope.

hannasm 10/31/2025|||
You could always interpret a variable from the perspective of its memory address. It is clearly variable in the sense that it can and will change between allocations of that address; however, an immutable variable is intended to remain constant as long as the current allocation of it remains.
MetaWhirledPeas 10/31/2025|||
> What is a "variable" if not something that varies?

Really it's a constant. But they are referenced like variables, so people just get a little lazy (or indifferent) talking about it.

ychen306 10/31/2025||
I try to avoid this ambiguity by calling such variables "values".
ordu 11/1/2025||
It leads to a further ambiguity, because "value" is something that is assigned to a variable (or whatever we call it). For example some Rust code:

  let v1 = Vec::new();
  let v2 = v1;
The first line creates a value of type Vec and places it into variable v1. The second line moves the value from variable v1 to variable v2. If we rename "variable" to "value", then my description of the code will become unreadable.

If I were as pedantic as the OP, I'd use "lexical binding" instead of "variable". But I'm not sure how that would work with C and C++, because their semantics assume that a variable has memory associated with it that can hold a value of a given type. Modern compilers are smarter than that, but they still try hard to preserve the original semantics. The variable in C/C++ is not just a name that ceases to exist after the compiler has done its work. So if we call C/C++ variables "lexical bindings", we may well get more pedants accusing us of improper use of words, even if we never change the values of those variables.

DashAnimal 10/31/2025||
https://nitter.net/id_aa_carmack/status/1983593511703474196
DoctorOW 11/1/2025|
I wish this were one of HN's auto-formatting rules: automatically replace the link with one of these frontends.
DashAnimal 11/3/2025||
Tbh I think it's ok - as much as I also want to avoid Twitter, I do encourage original sourcing, especially since these nitter services can have downtime fairly often. As long as someone jumps in and shares a link.
agentultra 10/31/2025||
Agree. After working seriously on a large production Haskell codebase for several years I definitely took it for granted. Now that I’m writing stuff in C again I do think immutability should be the default.

const isn’t really it though. It could go further.

1718627440 10/31/2025||
Well, in C you actually cannot mutate an argument directly; you can only reassign your local copy, since everything is passed by value. To mutate the caller's object you work around that by passing a pointer to it instead. In that sense mutability is kind of a language keyword: '&'. When you just want to hand over the object, you pass object; if you need the callee to modify it, you pass &object. This is something I hate in C++: random function invocations can mutate arguments without it being obvious in the call syntax.
astrobe_ 10/31/2025||
I think that's why the * is generally preferred over the & for this purpose. It also can give some hints about ownership issues. This "pass by reference" thing is syntactic sugar and sometimes is great to have, but as Perlis said, "Syntactic sugar causes cancer of the semicolon" [1].

[1] https://www.cs.yale.edu/homes/perlis-alan/quotes.html

nixpulvis 10/31/2025||
Are Rust's defaults far enough?
sunrunner 10/31/2025|
I like the idea of immutable-by-default, and in my own musings on this I've imagined a similar thing except that instead of a mutable keyword you'd have something more akin to Python's with blocks, something like:

    # Immutable by default
    x = 2
    items = [1,2,3]

    with mutable(x, items):
        x = 3
        items.append(4)

    # And now back to being immutable, these would error
    x = 5
    items.append(6)  
I have put almost zero thought into the practicality of this for implementation, the ergonomics for developers, and whether it would be different enough and useful enough to be worth having.
pizza234 10/31/2025||
This is in essence a mutable borrow - by looking at Rust's borrow checker, one can see the complexities of the concept.
danenania 10/31/2025||
Clojure has transients—a similar idea I believe. Basically bounded mutation.
unrealhoang 10/31/2025||
Without a borrowck, inside your mutable block another variable can reference the mutable version of your x or items, and it can then be mutated outside of that block.
teo_zero 10/31/2025||
Not if you're only allowed to get an immutable reference from an immutable variable.