Posted by HexDecOctBin 1 day ago

The provenance memory model for C (gustedt.wordpress.com)
218 points | 133 comments
gavinray 1 day ago|
Also of interest to folks looking at this might be TySan, the recently-merged LLVM Type-Based Aliasing sanitizer:

https://clang.llvm.org/docs/TypeSanitizer.html

https://www.phoronix.com/news/LLVM-Merge-TySan-Type-Sanitize...

aengelke 1 day ago||
It's probably worth noting that TySan currently only catches aliasing violations that LLVM would be able to exploit. For some types, e.g. unions, Clang doesn't emit accurate type-based aliasing information and therefore TySan won't catch these.
flohofwoe 20 hours ago||
Which is fine, I think, considering that union type punning is legal in C (and even in C++, where union type punning is UB, I have never seen it break; theoretically it might, of course).
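For reference, a minimal sketch of my own of the kind of punning meant here (assuming the usual 32-bit float); legal in C, formally UB in C++:

    #include <stdint.h>

    static uint32_t float_bits(float f) {
        union { float f; uint32_t u; } pun = { .f = f };
        /* Reading a member other than the one last written: defined
           (the bytes are reinterpreted) in C, undefined in C++. */
        return pun.u;
    }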
uecker 14 hours ago||
The problem might be that Clang does not even implement type-based aliasing correctly. So I assume it checks its broken rules, instead of the ones specified in the C standard.
lioeters 1 day ago||
Looks like a code block didn't get closed properly, before this phrase:

> the functions `recip` and `recip⁺` and not equivalent

Several paragraphs after this got swallowed by the code block.

Edit: Oh, I didn't realize the article is by the author of the book, Modern C. I've seen it recommended in many places.

> The C23 edition of Modern C is now available for free download from https://hal.inria.fr/hal-02383654

zmodem 1 day ago||
> Looks like a code block didn't get closed properly

This seems to have been fixed now.

perching_aix 1 day ago||
I still see it, even after clearing caches, visiting from a separate browser from a separate computer (even a separate network).
johnisgood 1 day ago|||
It is a great book. I prefer the second edition though, not the latest one with what I call "bloated C".
laqq3 1 day ago||
I'm wondering if you could elaborate? I'd be curious to hear more about "bloated C" and the differences between the 2nd and 3rd edition.
shakabrah 1 day ago||
It made immediate sense to me that it was Jens once I saw the code samples given
Measter 6 hours ago||
In the section about the ambiguous provenance from synthesising pointers, it's explained that the compiler will infer the correct provenance from usage. Would it not be worth having some way for the programmer to inform the compiler directly, with something analogous to Rust's Strict Provenance ptr::with_addr?

To convert it to C syntax, it's a function with roughly this signature:

    void* with_addr(void* ptr, uintptr_t addr)
Where the returned pointer has the address of `addr` and the provenance of `ptr`.
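For illustration, a sketch of my own of how that could be used; `with_addr` here is the hypothetical function above, not anything the article or the standard provides:

    #include <stdint.h>

    /* Hypothetical: address taken from addr, provenance inherited from ptr. */
    void* with_addr(void* ptr, uintptr_t addr);

    /* Align a pointer down to a 64-byte boundary without laundering it
       through a bare integer-to-pointer cast of unclear provenance. */
    void* align_down_64(void* p) {
        uintptr_t a = (uintptr_t)p;
        return with_addr(p, a & ~(uintptr_t)63);
    }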
cryptonector 3 hours ago||
I'd also like to have builtin functions and/or function attributes for designating allocation and deallocation. malloc() and free() (and realloc()) should not be special because of their names -- they should be special because of their declared attributes or their derived attributes given their internals.
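GCC already goes some way in that direction; a minimal sketch of its existing attribute (GCC 11 and later, as far as I know), not something the standard requires:

    #include <stddef.h>

    void my_free(void *p);

    /* Declares that my_alloc returns fresh, non-aliasing storage and that
       my_free(ptr) is its matching deallocator, so the compiler can apply
       the same analyses and warnings (e.g. -Wmismatched-dealloc) that
       malloc/free get. */
    __attribute__((malloc, malloc(my_free, 1)))
    void *my_alloc(size_t n);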
charleslmunger 4 hours ago||
This is doable via this trick:

https://github.com/protocolbuffers/protobuf/blob/ae0129fcd01...

tialaramex 1 day ago||
Presumably this was converted from markdown or similar and the conversion partly failed or the input was broken.

From the PVI section onward it seems to recover, but if the author sees this please fix and re-convert your post.

[Edited, nope, there are more errors further in the text, this needed proper proofreading before it was posted, I can somewhat struggle through because I already know this topic but if this was intended to introduce newcomers it's probably very confusing]

gustedt 1 day ago|
The problem is that wordpress changes these things once you edit some part. I will probably regenerate the whole thing.
gustedt 1 day ago||
Randomly introduced translation errors from markdown to wordpress-internal should be fixed now. Sorry for the inconvenience!
cryptonector 14 hours ago|
There are some grammar errors here and there, but TFA is very nice. Thank you for your hard work!
zombot 1 day ago||
Does C allow Unicode identifiers now, or is that pseudo code? The code snippets also contain `&`, so something definitely went wrong with the transcoding to HTML.
pjmlp 1 day ago||
Besides the sibling comment on C23, it does work fine on GCC.

https://godbolt.org/z/qKejzc1Kb

Whereas clang loudly complains,

https://godbolt.org/z/qWrccWzYW

qsort 1 day ago|||
Quoting cppreference:

An identifier is an arbitrarily long sequence of digits, underscores, lowercase and uppercase Latin letters, and Unicode characters specified using \u and \U escape notation(since C99), of class XID_Continue(since C23). A valid identifier must begin with a non-digit character (Latin letter, underscore, or Unicode non-digit character(since C99)(until C23), or Unicode character of class XID_Start)(since C23)). Identifiers are case-sensitive (lowercase and uppercase letters are distinct). Every identifier must conform to Normalization Form C.(since C23)

In practice depends on the compiler.
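For example (a minimal snippet of my own), the escape form has been valid since C99:

    int caf\u00e9 = 1;   /* the identifier café, spelled with a universal
                            character name, usable even if the source
                            character set can't encode é directly */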

dgrunwald 1 day ago||
But the source character set remains implementation-defined, so compilers do not have to directly support unicode names, only the escape notation.

Definitely a questionable choice to throw off readers with unicode weirdness in the very first code example.

qsort 1 day ago||
If it were up to me, anything outside the basic character set in a source file would be a syntax error, I'm simply reporting what the spec says.
ncruces 1 day ago|||
I use unicode for math in comments, and think it makes certain complicated formulas far more readable.
kzrdude 1 day ago||
I've just been learning pinyin notation, so now I think the variable řₚ should have a value that first goes down a bit and then up.
zelphirkalt 1 day ago||
I am not sure it is a good idea to mix such specific phonetic script ideas about diacritic marks with the behavior of the program over time. Even considering the shape, it does not align with the idea of first down a little, then up a lot.
kzrdude 10 hours ago||
To be sure, it's a joke. Mostly trying to joke at the expense of these excessively complicated variable names (that are only there because it's pseudocode) :)

And yeah, the Chinese tone in practice does not align with the idea of "down a little up a lot" either. It depends on context...

guipsp 1 day ago|||
What a "basic character set" is depends on locale
qsort 1 day ago|||
https://en.cppreference.com/w/c/language/charset.html
account42 1 day ago|||
Anything except US-ASCII in source code outside comments and string constants should be a syntax error.
guipsp 1 day ago||
You are aware other languages exist? Some of which don't even use the Latin script?
nottorp 1 day ago|||
Dunno about the OP but I'm very aware as I'm not an english speaker.

I still don't want anything as unpredictable as Unicode in my code. How many different encodings will display as the same variable name and how is the compiler supposed to decide?

If you're thinking of comments and user facing strings, the OP already excluded those.

cryptonector 3 hours ago||
The language and compiler & linker should reject Zalgo in identifiers, and they should reject confusable script mixes in identifiers, but otherwise they should treat all equivalent strings as equivalent. To make it easier on the linker, compilers should normalize all symbols to one common form (e.g., NFC).
account42 11 hours ago||||
And those are not programming languages, or at least not the C programming language which only needs a very limited character set.
steveklabnik 1 hour ago||
C does allow for limited unicode in identifiers, though you need to use the \u prefix and write the code point out. Compilers like clang let it work like C++ and follow TR31, though this is nonstandard.
Y_Y 1 day ago|||
What, like APL‽
Y_Y 1 day ago|||
Implementation-defined until C99, explicitly possible via UCNs since C99, possible with explicit encoding since C23, but literals are still implementation-defined.
unwind 1 day ago||
I can't even view the post, I just get some kind of content-management-system-like view with the page as JSON or something, in pink-on-white. I'm super confused. :|

The answer to your question seems to (still) be "no".

nikic 22 hours ago||
At least at a skim, what this specifies for exposure/synthesis for reads/writes of the object representation is concerning. One of the consequences is that dead integer loads cannot be eliminated, as they may have an exposure side effect. I guess C might be able to get away with it due to the interaction with strict aliasing rules. Still quite surprised that they are going against consensus here (and reduces the likelihood that these semantics will get adopted by implementers).
ben0x539 20 hours ago||
Can you say more about what the consensus is that this is going against?
nikic 11 minutes ago||
That type punning through memory does not expose or synthesize provenance. There are some possible variations on this, but the most straightforward is that pointer to integer transmutes just return the address (without exposure) and integer to pointer transmutes return a pointer with nullary provenance.
comex 21 hours ago|||
> I guess C might be able to get away with it due to the interaction with strict aliasing rules.

But not for char-typed accesses. And even for larger types, I think you would have to worry about the combo of first memcpying from pointer-typed memory to integer-typed memory, then loading the integer. If you eliminate dead integer loads, then you would have to not eliminate the memcpy.
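A minimal sketch of my own of the scenario being described, assuming the proposed semantics make representation reads count as exposure:

    #include <stdint.h>
    #include <string.h>

    extern int x;

    void f(void) {
        int *p = &x;
        uintptr_t u;
        memcpy(&u, &p, sizeof u);   /* reads the object representation of p */
        /* u is never used again, so this looks like a dead load/store chain.
           But if reading the representation exposes &x, deleting it changes
           which later integer-to-pointer casts elsewhere in the program may
           legitimately reconstruct a pointer to x. */
    }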

alextingle 9 hours ago|||
I don't imagine that the exposed state would need to be represented in the final compiler output, so the optimiser could mark the pointer as exposed, but still eliminate the dead integer load.

Or from a pragmatic viewpoint, perhaps if the optimiser eliminates a dead load, then don't mark the pointer as exposed? After all, the whole point is to keep track of whether a synthesised pointer might potentially refer to the exposed pointer's storage. There's zero danger of that happening if the integer load never actually occurs.

uecker 22 hours ago||
(Never mind, I misread your comment at first.) Yes, the representation access needs to be discussed... I took a couple of years to publish this document. More important would be if the ptr2int exposure could be implemented.
hinkley 22 hours ago||
> Unfortunately no C compiler can do this optimization automatically:

> The functions recip and recip⁺ and not equivalent.

This is one of those examples of how optimizing code can improve legibility, robustness, or both.

The first implementation allows for side effects to change the outcome of the function. But the problem is that the code is not written expecting someone to modify the values in the middle of the loop. It's incorrect behavior, and you're paying a performance penalty for it to boot.
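A generic illustration of my own (not the article's recip/recip⁺ code): whether a load can be hoisted out of a loop depends on whether anything in the loop might write to it.

    /* If a and r can alias, *a must be re-read on every iteration: */
    void scale(const double *a, double *r, int n) {
        for (int i = 0; i < n; i++)
            r[i] *= *a;              /* a store to r[i] might change *a */
    }

    /* Hoisting the load takes a snapshot, so aliasing (or concurrent)
       writes no longer affect the result -- usually what was intended: */
    void scale_hoisted(const double *a, double *r, int n) {
        double s = *a;
        for (int i = 0; i < n; i++)
            r[i] *= s;
    }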

Functional Core code tends not to have this problem, in that we pass in a snapshot of data and it either gets an answer or an error.

I've seen too much code that checks 3 times if a user is either still logged in or has permission to do a task, and not one of them was set up to deal with one answer for the first call and a different one for any of the subsequent ones. They just go into undefined behavior.

jvanderbot 1 day ago||
I love Rust, but I miss C. If C can be updated to make it generally socially acceptable for new projects, I'd happily go back for some decent subset of things I do. However, there's a lot of anxiety and even angst around using C in production code.
flohofwoe 1 day ago||
> to make it generally socially acceptable for new projects...

Or better yet, don't let 'social pressure' influence your choice of programming language ;)

If your workplace has a clear rule to not use memory-unsafe languages for production code that's a different matter of course. But nothing can stop you from writing C code as a hobby - C99 and later is a very enjoyable and fun language.

Y_Y 1 day ago|||
I don't want to summon WB, but honest-to-god, D is a good middle ground here.
TimorousBestie 1 day ago||||
> Or better yet, don't let 'social pressure' influence your choice of programming language ;)

It’s hard. Programming is a social discipline, and the more people who work in a language, the more love it gets.

spauldo 1 day ago||
If you're on UNIX or working in the embedded space, C is still everywhere and gets lots of love. C tends to get lots of libraries anyway because everything can FFI to it.
xxs 1 day ago|||
I was about to reply that no amount of pressure can tell me how to program. C was totally fine for the ESP32
bnferguson 1 day ago|||
Feels like Zig is starting to fill that role in some ways. Fewer sharp edges and a bit more safety than C, more modern approach, and even interops really well with C (even being possible to mix the two). Know a couple Rust devs that have said it seems to scratch that C itch while being more modern.

Of course it's still really nice to just have C itself being updated into something that's nicer to work with and easier to write safely, but Zig seems to be a decent other option.

dnautics 1 day ago|||
(self-promotion) in principle one should be able to implement a fairly mature pointer provenance checker for zig, without changing the language. A basic proof of concept (don't use this, branches and loops have not been implemented yet):

https://www.youtube.com/watch?v=ZY_Z-aGbYm8

purplesyringa 1 day ago||||
How close are Zig's safety guarantees to Rust's? Honest question; I don't follow Zig development. I can't take C seriously because it hasn't even bothered to define provenance until now, but as far as I'm aware, Zig doesn't even try to touch these topics.

Does Zig document the precise mechanics of noalias? Does it provide a mechanism for controllably exposing or not exposing provenance of a pointer? Does it specify the provenance ABA problem in atomics on compare-exchange somehow or is that undefined? Are there any plans to make allocation optimizations sound? (This is still a problem even in Rust land; you can write a program that is guaranteed to exhibit OOM according to the language spec, but LLVM outputs code that doesn't OOM.) Does it at least have a sanitizer like Miri to make sure UB (e.g. data races, type confusion, or aliasing problems) is absent?

If the answer to most of the above is "Zig doesn't care", why do people even consider it better than C?

dnautics 1 day ago||
safety-wise, zig is better than C because if you don't do "easily flaggable things"[0] it doesn't have buffer overruns (including protection in the case of sentinel strings), or null pointer exceptions. Where this lies on the spectrum of "C to Rust" is a matter of judgement, but if I'm not mistaken it is easily a majority of memory-safety related CVEs. There's also no UB in debug, test, or release-safe. Note: you can opt-out of release-safe on a function-by-function basis. IIUC noalias is safety checked in debug, test, and release-safe.

In a sibling comment, I mentioned a proof of concept I did that if I had the time to complete/do correctly, it should give you near-rust-level checking on memory safety, plus automatically flags sites where you need to inspect the code. At the point where you are using MIRI, you're already bringing extra stuff into rust, so in practice zig + zig-clr could be the equivalent of the result of "what if you moved borrow checking from rustc into miri"

[0] type erasure, or using "known dangerous types, like c pointers, or non-slice multipointers".

tialaramex 23 hours ago||
This is very much a "Draw the rest of the fucking owl" approach to safety.
dnautics 22 hours ago||
what percentage of CVEs are null pointer problems or buffer overflows? That's what percentage of the owl has been drawn. If someone (or me) builds out a proper zig-clr, then we get to, what? 90%. Great. Probably good enough, that's not far off from where rust is.
comex 21 hours ago||
Probably >50% of exploits these days target use-after-frees, not buffer overflows. I don’t have hard data though.

As for null pointer problems, while they may result in CVEs, they’re a pretty minor security concern since they generally only result in denial of service.

Edit 2: Here's some data: In an analysis by Google, the "most frequently exploited" vulnerability types for zero-day exploitation were use-after-free, command injection, and XSS [3]. Since command injection and XSS are not memory-unsafety vulnerabilities, that implies that use-after-frees are significantly more frequently exploited than other types of memory unsafety.

Edit: Zig previously had a GeneralPurposeAllocator that prevented use-after-frees of heap allocations by never reusing addresses. But apparently, four months ago [1], GeneralPurposeAllocator was renamed to DebugAllocator and a comment was added saying that the safety features "require the allocator to be quite slow and wasteful". No explicit reasoning was given for this change, but it seems to me like a concession that applications that need high performance generally shouldn't be using this type of allocator. In addition, it appears that use-after-free is not caught for stack allocations [2], or allocations from some other types of allocators.

Note that almost the entire purpose of Rust's borrow checker is to prevent use-after-free. And the rest of its purpose is to prevent other issues that Zig also doesn't protect against: tagged-union type confusion and data races.

[1] https://github.com/ziglang/zig/commit/cd99ab32294a3c22f09615...

[2] https://github.com/ziglang/zig/issues/3180.

[3] https://cloud.google.com/blog/topics/threat-intelligence/202...

dnautics 17 hours ago||
yeah I don't think the GPA is really a great strategy for detecting UAF, but it was a good try. It basically creates a new virtual page for each allocation, so the kernel gets involved and ?I think? there is more indirection for any given pointer access. So you can imagine why it wasn't great.

Anyways, I am optimistic that UAF can be prevented by static analysis:

https://www.youtube.com/watch?v=ZY_Z-aGbYm8

Note since this sort of technique interfaces with the compiler, unless the dependency is in a .so file, it will detect UAF in dependencies too, whether or not the dependency chooses to run the static analysis as part of their software quality control.

comex 1 hour ago||
Fair enough. In some sense you’re writing your own borrow checker. But (you may know this already) be warned: this has been tried many times for C++, with different levels of annotation burden imposed on programmers.

On one side are the many C++ “static analyzers” like Coverity or clang-analyzer, which work with unannotated C++ code. On the other side is the “Safe C++” proposal (safecpp.org), which is supposed to achieve full safety, but at the cost of basically transplanting Rust’s type system into C++, requiring all functions to have lifetime annotations and disallow mutable aliasing, and replacing the entire standard library with a new one that follows those rules. Between those two extremes there have been tools like the C++ Core Guidelines Checker and Clang’s lifetimebound attribute, which require some level of annotations, and in turn provide some level of checking.

So far, none of these have been particularly successful in preventing memory safety vulnerabilities. Static analyzers are widely used in industry but only find a fraction of bugs. Safe C++ will probably be too unpopular to make it into the spec. The intermediate solutions have some fundamental issues (see [1], though it’s written by the author of Safe C++ and may be biased), and in practice haven’t really taken off.

But I admit that only the “static analyzer” side of the solution space has been extensively explored. The other projects are just experiments whose lack of adoption may be due to inertia as much as inherent lack of merit.

And Zig may be different… I’m not a Zig programmer, but I have the impression that compared to C++ it encourages fewer allocations and smaller codebases, both of which may make lifetime analysis more tractable. It’s also a much younger language whose audience is necessarily much more open to change.

So we’ll see. Good luck - I’d sure like to see more low-level languages offering memory safety.

[1] https://www.circle-lang.org/draft-profiles.html

tialaramex 1 hour ago|||
One of the key things in Sean's "Safe C++" is that, like Rust, it actually technically works. If we write software in the safe C++ dialect we get safe programs just as if we write ordinary safe (rather than ever invoking "unsafe") Rust we get safe programs. WG21 didn't take Safe C++ and it will most likely now be a minor footnote in history, but it did really work.

"I think this could be possible" isn't an enabling technology. If you write hard SF it's maybe useful to distinguish things which could happen from those which can't, but for practical purposes it only matters if you actually did it. Sean's proposed "Safe C++" did it, Zig, today, did not.

There are other obstacles - like adoption, as we saw for "Safe C++" - but they're predicated on having the technology at all, you cannot adopt technologies which don't exist, that's just make believe. Which I think is already the path WG21 has set out on.

steveklabnik 1 hour ago|||
> Safe C++ will probably be too unpopular to make it into the spec.

Not just that, but the committee accepted a paper that basically says its design is against C++'s design principles, so it's effectively dead forever.

tialaramex 22 minutes ago||
This was adopted as standing document SD-10 https://isocpp.org/std/standing-documents/sd-10-language-evo...

Here's somebody who was in the room explaining how this was agreed as standing policy for the C++ programming language.

"It was literally the last paper. Seen at the last hour. Of a really long week. Most everyone was elsewhere in other working group meetings assuming no meaningful work was going to happen."

pjmlp 1 day ago|||
As usual, the remark that much of Zig's safety over C has been present since the late 1970s in languages like Modula-2, Object Pascal and Ada, but sadly they weren't born with curly brackets, nor did they bring a free OS to the uni party.
mikewarot 1 day ago|||
If you can stomach the occasional Begin and End, and a far less confusing pointer syntax, Pascal might be the language for you. Free Pascal has some great string handling, so you never have to worry about allocating and freeing them, and they can store gigabytes of text, even Unicode. ;-)
jvanderbot 1 day ago|||
If my fellow devs cringe at C, imagine their reaction to Pascal
mikewarot 1 day ago||
C has all the things to hate in a programming language

  CaSe Sensitivity
  Weird pointer syntax
  Lack of a separate assignment token
  Null terminated strings
  Macros - the evil scourge of the universe
On the plus side, it's installed everywhere, and it's not indent sensitive
zelphirkalt 1 day ago|||
You mean "mere string replacement macros, instead of hygienic macros", of course : )
ioasuncvinvaer 1 day ago||||
Except for null-terminated strings these don't seem like major issues to me. Can you elaborate?
jvanderbot 1 day ago||||
At this point, you're talking to someone who isn't here
cryptonector 14 hours ago||||
> C has all the things to hate in a programming language

> CaSe Sensitivity

Wait, what, you.. you want a case-insensitive language? Like SQL?

I love SQL, but please no more case-insensitive programming languages!

1718627440 1 day ago|||
> Lack of a separate assignment token

What does that mean?

kbolino 1 day ago||
Assignment is = which is too close to equality == and thus has been the source of bugs in the past, especially since C treats assignment as an expression and coerces lots of non-boolean values to true/false wherever a condition is expected (if, while, for). Most compilers warn about this at least nowadays.
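The classic trap, as a minimal example of my own:

    int n = 0;
    if (n = 1) {    /* assigns 1 to n, then tests the result: always true */
        /* always taken */
    }
    if (n == 1) {   /* the comparison that was probably intended */
        /* taken only when n is 1 */
    }
    /* gcc/clang -Wparentheses flags the first form and asks for extra
       parentheses if the assignment really is intended. */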
tialaramex 22 hours ago||
Even with warnings this is just terrible. People need to stop inventing languages where "False" is true, or an empty container is false or other insane "coercions" of this kind.

True is true, and false is false, if you're wondering whether this Doodad is Wibbly, you should ask that question not rely on a convention that Wibbly Doodads are somehow "truthy" while the non-Wibbly ones are not.

tgv 1 day ago|||
Or try Ada.
modeless 1 day ago|||
Fil-C is a modified version of Clang that makes C and C++ memory safe. It supports things you wouldn't expect to work like signal handling or setjmp/longjmp. It can compile real C projects like SQLite and OpenSSL with minimal to no changes, today. https://github.com/pizlonator/llvm-project-deluge/blob/delug...
tialaramex 22 hours ago||
Fil-C does seem like a quicker route if your existing idea was something like "rewrite it in Java" and it exists today whereas both C and C++ have only vague ambitions to deliver some future language which might meet your needs.

I will be very surprised if there's widespread adoption of Fil-C for many new projects though.

cryptonector 15 hours ago||
A big stumbling block is that Fil-C requires all C in the program to be built with Fil-C, including all libraries. That means that Debian and such would need to either adopt Fil-C (perhaps for some distros) or ship Fil-C and non-Fil-C libraries for all pkgs with libraries. The alternative is that you have to build everything yourself, and this gets painful if you need to support ELFs/DLLs.
bmn__ 6 hours ago|||
https://github.com/tsoding/crust
uecker 1 day ago||
Do you really love Rust, or do you feel pressured to say so?
grg0 1 day ago|||
He grew up in a very stringent household. Everybody was writing Rust and he was like, "damn, I wish I could write C."
smcameron 1 day ago|
Ugh. Are unicode variable names allowed in C now? That's horrific.
1over137 1 day ago||
Horrific? You might not think so if your (human) language used a different alphabet.
Joker_vD 1 day ago|||
My language uses Cyrillic and I personally prefer English-based keywords and variable names precisely because they are not words of my (human) language. It introduces an easy and obvious distinction between the machine-oriented and the human-oriented.
cryptonector 3 hours ago|||
Yes, I also think the whole world should program in English.

That's half tongue in cheek. I am fluent in three languages, but I program "in English" and I greatly appreciate that my colleagues who are fluent in languages other than the ones I'm fluent in (except English) also do. Basically English is the world's lingua franca today. Nonetheless if a company in France wants to use French for their symbol names, or a company in Mexico wants to use Spanish for their symbol names, or a company in China wants to use Chinese for their symbol names, who am I to stop them?! Surely it's not my place.

ZoomZoomZoom 1 day ago|||
I know what you mean and I shudder when I see code that uses words from my native lang, but most code is human-oriented.
eqvinox 1 day ago||||
Yes but also no. The thing about software is that 90% of it is not culturally bound. If you're writing, say, some tax reporting tool, a grammar reference, or something religious… sure, it makes sense to write that in your language. So, yeah, C should support that.

However, everything else, from spreadsheet software to CAD tools to OS kernels to JavaScript frameworks is universal across cultures and languages. And for better or for worse (I'm not a native English speaker either), the world has gone with English for a lot of code commons.

And the thing with the examples in that post isn't about supporting language diversity, it's math symbols which are no one's native language. And you pretty much can't type them on any keyboard. Which really makes it a rather poor flex IMHO. Did the author reconfigure their keyboard layout for that specific math use case? It can't generically cover "all of math" either. Or did they copy&paste it around? That's just silly.

[…could some of the downvoters explain why they're downvoting?]

OkayPhysicist 1 day ago|||
When I was doing a lot of Physics simulation in Julia, I had a Vim extension which would just allow me to type something like \gamma, hit tab, and get γ. This was worth the (minimal) hassle, because it made it very easy to spot check formulas. When you're shuffling data around in a loosely-described space like most of web dev, descriptive function and variable names are important because the description of what you're doing and what you're doing it to is the important information, and the actual operations you're taking are typically approximately trivial.

In heavily mathematical contexts, most of those assumptions get turned on their head. Anybody qualified to be modifying a model of electromagnetism is going to be intimately familiar with the language of the formulas: mu for permeability, epsilon for permittivity, etc. With that shared context,

1/(4*π*ε)*(q_electron * q_proton)/r^2 is going to be a lot easier to see, at a glance, as Coulomb's law

compared to

1 / (4 * Math.Pi * permittivity_of_free_space)*(charge_electron * charge_proton)/distance_of_separation

Source code, like any other language built for humans, is meant to be read by humans. If those humans have a shared context, utilizing that shared context improves the quality and ease of that communication.

eqvinox 1 day ago||
Hrm. Fair point. But will the other humans, even if they have the shared context, also have the ability to type in these symbols, if they want to edit the code? They probably don't have your vim extension…

I guess maybe this is an argument for better UI/UX for symbolic input…

cryptonector 3 hours ago|||
> […could some of the downvoters explain why they're downvoting?]

Because you made false assertions ("And you pretty much can't type them on any keyboard").

eqvinox 2 hours ago||
Please show me the keyboard layout that has keys for ⁺, ř and ₚ.

(Unless you're being pedantic because I wrote "keyboard" rather than "keyboard layout", or ignored the qualifying "pretty much". In either of those cases you're unwilling to communicate cooperatively and I can't help you.)

ajross 1 day ago|||
Little to no source code is written for single (human) language development teams. Sure, everyone would like the ability to write source code in their native language. That's natural.

Literally no one, anywhere, wants to be forced to read source written in a language they can't read (or more specifically in this case: written in glyphs they can't even produce on their keyboard). That idea, for almost everyone, seems "horrific", yeah.

So a lingua franca is a firm requirement for modern software development outside of extremely specific environments (FSB malware authors probably don't care about anyone else reading their cyrillic variable names, etc...). Must it be ASCII-encoded English? No. But that's what the market has picked and most people seem happy enough with it.

OkayPhysicist 1 day ago||
> Little to no source code is written for single (human) language development teams.

This is blatantly false. I'd posit that a solid 90% of all source code written is done so by single, co-located teams (a substantial portion of which are teams of 1). That certainly fits the bill for most companies I've worked at.

mananaysiempre 1 day ago|||
“Now” as in since C99, twenty-five years ago, yes. (It seemed like a good idea at the time.)
kevincox 1 day ago|||
Being able to program in languages that don't fit into ASCII is a good idea. Using one-character variable names is a bad idea.
RossBencina 21 hours ago|||
Mathematics is a language that doesn't fit into ASCII and commonly uses one-character variable names. If you are implementing a documented mathematical algorithm (i.e. one with a description in a paper or book) then sticking to the notation of the paper (i.e. using one character variable names) makes sense to me.
kevincox 21 hours ago|||
I find math far easier to read when the authors use proper names for variables. But I understand that it isn't the idiomatic style and agree that it can be useful to match the paper when re-implementing an algorithm.
mananaysiempre 21 hours ago|||
Unfortunately, many of the things of this nature that you'll want to implement use indices, which are inevitably going to start at 1. So you've still got plenty of hours of unpleasant debugging ahead of you, and a non-obvious correspondence to the original paper at the end of it.
adrianN 1 day ago|||
Using variable names that are different but render (almost) the same can be a bad idea.
90s_dev 1 day ago|||
See also https://www.ethiocloud.com/bunnascript.aspx and https://en.wikipedia.org/wiki/Non-English-based_programming_...
OkayPhysicist 1 day ago|||
Why shouldn't they be? It's not the 00's anymore, Unicode support is universal. You'd have to dust off some truly ancient tech to find something incapable of rendering it.

Source code is for humans, and thus should be written in whatever way makes it easiest to read, write, and understand for humans. If your language doesn't map onto ASCII, then Unicode support improves that goal. If your code is meant to directly implement some physics formula, then using the appropriate unicode characters might make it easier to read (and thus spot transcription errors, something I find far too often in physics simulations).

bigstrat2003 1 day ago|||
They shouldn't be precisely because it makes the code harder to read and write when you include non-ASCII characters.
wheybags 1 day ago||||
Hot take, but I've always felt the world would be better served if mathematicians and physicists would stop using terrible short variable names and use longCamelCaseDescriptiveNames like the rest of us, because paper is cheap, and abbreviations are confusing. I know it's nicer when you're writing by hand, but when you clean up a proof or formula for publishing, would it really be so hard to switch to descriptive names?

I'm a practitioner of neither though, so I can't condemn the practice wholeheartedly as an outsider, but it does make me groan.

nsingh2 1 day ago|||
Better served to students and those unfamiliar with the field, but noisy to those familiar. Considering that much of mathematical work is done using pen/paper, it would be a total pain to write out huge variable names every time.

Consider a simple programming example, in C blocks are delimited by `{}`, why not use `block_begin` and `block_end`? Because it's noisy, and it doesn't take much to internalize the meaning of braces.

senbrow 1 day ago|||
Long names are good for short expressions, but they obfuscate complex ones because the identifiers visually crowd out the operators.

This can be especially difficult if the author is trying to map 1:1 to a complex algorithm in a white paper that uses domain-standard mathematical notation.

The alternative is to break the "full formula" into simpler expression chunks, but then naming those partial expression results descriptively can be even more challenging.

someplaceguy 1 day ago|||
> using the appropriate unicode characters might make it easier to read

It's probably also a great way to introduce almost undetectable security vulnerabilities by using Unicode characters that look similar to each other but in fact are different.

OkayPhysicist 1 day ago||
This would cause your compilation to fail, unless you were deliberately declaring and using near identical symbols. Which would violate the whole "Code is meant to be easily read by humans" thing.
someplaceguy 1 day ago||
> unless you were deliberately declaring and using near identical symbols.

Yes, that would probably be one way to do it.

> Which would violate the whole "Code is meant to be easily read by humans" thing.

I'd think someone who's deliberately and sneakily introducing a security vulnerability would want it to be undetectable, rather than easily readable.

loeg 1 day ago|||
Math people shouldn't be allowed to write code. It's not the unicode, so much as the extremely terse variable names.
perching_aix 1 day ago||
Isn't that basically all C/C++ code? Admittedly I don't have much exposure to it, but it's pretty much a trope in and of itself, along with Java and C# suffering from the opposite problem.

Such a silly issue too, you'd think we'd have come up with some automated wrangling for this, so that those experienced with a codebase can switch over and see super short versions of identifiers, while people new to it all will see the long stuff.

flohofwoe 20 hours ago||
> Isn't that basically all C/C++ code?

Maybe for code that was written in the early 90's, but the only 'tradition' that has survived is calling the vanilla loop variable 'i'.

SV_BubbleTime 1 day ago||
> void recip(double* aₚ, double* řₚ)
> {
>   for (;;)
>   {
>     register double Π = (*aₚ)*(*řₚ);

My first thought before I saw this was "I wonder if this is going to be an article from people who build things or something from 'academics' who don't."

At least it was answered quickly.
