Posted by JNRowe 5 days ago
I once wrote a function to parse a date format from log files that Go doesn't natively support, and forgot to add November. I quit that job in April, so I never saw any issues. But when the 1st of November came, my ex-colleagues saw no logs for that day, and when they found out the reason they created a hashtag, #nolognovember, which you can probably find somewhere to this day :)
Faced with this symptom I would bet there was a "No" in a yaml somewhere :-)
Therein lies the importance of runtime assertions (so we can sanity check that parsing actually succeeded rather than silently failing) and monitoring (so we can sanity check that, for example, we don't ever go 24 hours without receiving data from the parsing job).
This attitude is defeatist. The success of property-based testing (see QuickCheck in Haskell or Hypothesis in Python), especially when combined with fuzzing, shows that instead of looping over every possible input, looping over thousands of inputs tends to be good enough in practice to catch bugs.
Throwing infinity out as a cop-out is a lazy opinion held by people who don't understand infinity, or rather, the concept of countable infinity. Everything we model on a computer is at most countably infinite. When we have multiple such countably infinite sets, the standard dovetailing construction guarantees that their union is countable. Their Cartesian product is also countable. You can always obtain an interesting prefix of such an infinite set for testing purposes.
What I'm saying is that it's foolish not to take any measures at runtime to validate that the system is behaving correctly.
Who's to say that the logs themselves are even formatted correctly? Your software could be perfectly bug-free and you'd still have problems without knowing it, due to bugs in some other person's software. That's the point you're missing - no matter how many edge cases you account for, there's always another edge case.
I didn't say anything about measures at runtime to validate things. That's complementary to good tests.
So yeah, you need monitoring and assertions. A decent coverage of unit tests is good, but I wouldn't bother trying to invest in some sort of advanced fuzzing or quickcheck system. In my experience the juice isn't worth the squeeze.
When I was writing a nontrivial data structure library I was amazed (and humbled) by how many bugs were caught by PBT (again, combined with copious assertions) but not by my unit tests (which tried to cover all the "obvious" edge cases).
You know how tests for a function like

func MonthToString(month int) string {
switch month {
case 1: return "January"
case 2: return "February"
...
case 10: return "October"
case 12: return "December"
default: panic(fmt.Errorf("invalid month number: %d", month))
}
}

are usually written? You take the switch's body, shove it into the test function, and then replace "case/return" with a regexp into "assert.Equal" or something:

func TestMonthToString(t *testing.T) {
assert.Equal(t, "January", MonthToString(1))
assert.Equal(t, "February", MonthToString(2))
...
assert.Equal(t, "October", MonthToString(10))
assert.Equal(t, "December", MonthToString(12))
assert.PanicsWithError(t, "invalid month number: 13", func() { MonthToString(13) })
}
Look ma, we got that sweet 100% code coverage!

for i := 1; i < 13; i++ {
assert.Equal(t, i, StringToMonth(MonthToString(i)))
}

The reverse composition is harder to test.

This is patently false. Any Undefined Behavior is harmful because it allows the optimizer to insert totally random code, and this is not purely theoretical behavior; it's been repeatedly demonstrated happening. So even if your UB code isn't called, the simple fact that it exists may make some seemingly unrelated code behave wrongly.
For example, in clang/llvm, currently, doing arithmetic UB (signed overflow, out-of-range shifts, offsetting a pointer outside its allocation bounds, offsetting a null pointer, converting an out-of-range float to int, etc) will never result in anything bad, as long as you don't use it (where "using it" includes branching on or using as a load/store address or returning from a function a value derived from it, but doesn't include keeping it in a variable, doing further arithmetic, or even loading/storing it). Of course that's subject to change and not actually guaranteed by any documentation. Not a thing to rely on, but currently you won't ever need to release an emergency fix and get a CVE number for having "void *mem = malloc(10); void *tmp[1]; tmp[0] = mem-((int)two_billion + (int)two_billion); if (two_billion == 0) foo(tmp); free(mem);" in your codebase (..at least if compiling with clang; don't know about other compilers). (yes, that's an immense amount of caveats for an "uhh technically")
This is fortunately not true. If it were, it would make runtime checks pointless. Consider this code:

free(ptr);
already_freed = true;
if (!already_freed) {
free(ptr);
}

The second free would be undefined behavior, but since it doesn't run, the snippet is fine.

Undefined to whom, though? Specific platforms and toolchains have always attached defined behavior to stuff the standard lists as undefined, and provided ways (e.g. toolchain-specific volatile semantics, memory barriers, intrinsic functions) to exploit that. Even things like inline assembly live in this space of dancing around what the standard allows. And real systems have been written to those tools, successfully. At the bottom of the stack, you basically always have to deal with stuff like this.
Your point is a pedantic proscription, basically. It's (heh) "patently false" to say that "Any Undefined Behavior is harmful".
The bigger issue with ISO C and POSIX is everything around 'struct sockaddr': you don't have any way of knowing what types the implementation is internally reading or writing. If you give it a cast pointer to a 'struct sockaddr_in' but it reads the sa_family through the 'struct sockaddr *', that's UB; ditto if accept() gives you a 'struct sockaddr_in' and you read the sa_family through a 'struct sockaddr *'. Or if you use 'struct sockaddr_storage' at all, that's also UB. IIRC, the latest POSIX edition just tells implementations to "pretty please allow aliasing between these types in particular!"
Of course, POSIX has nothing on Windows APIs, many of which encourage the caller to cast around pointers with impunity. As far as I'm aware, MSVC doesn't care about strict aliasing at all, and only has a minimal set of optimizations for 'restrict' pointers.
So they punted and left it up to the toolchains, and the toolchains admirably picked up the slack and provided good tools. The problem then becomes the pedants who invent rules like "any UB is harmful" above, not realizing the UB-reliant code is plugging the holes keeping their system afloat.
By contrast, I'd assume any other report by ubsan to be fair game for the optimizer to do its thing and generate whatever code is going to be different from what was likely the developer's intention. If not in the current version, maybe in a future one.
With that being said, I would definitely expect that the small set of UB that ubsan reports about is actually undefined for the compiler that implements the sanitizer (meaning: either problematic now or problematic in some future update).
What are you even saying? What is your definition of "random code"? FYI, UB is exactly (one of) the places where an optimizer can insert optimized code.
The typical optimization showcase (better code generation for signed integer loop counts) only works when the (undefined behaviour) signed integer overflow doesn't actually happen (e.g. the compiler is free to assume that the loop count won't overflow). But when the signed integer overflow happens all bets are off what will actually happen to the control flow - while that same signed integer overflow in another place may simply wrap around.
Another similar example is to specifically 'inject' UB by putting a `std::unreachable` into the default case of a switch statement. This enables an optimization that the compiler omits a range check before accessing the switch-case jump table. But if the switch-variable isn't handled in a case-branch, the jump table access may be out-of-bounds and there will be a jump to a random location.
In other situations the compiler might even be able to detect at compile time that the UB is triggered and simply generate broken code (usually optimizing away some critical part), or if you're lucky the compiler inserts a ud2 instruction which crashes the process.
You might think this code would be fine if address 0 were mapped to RAM, but both gcc and clang know it's undefined behavior to use the null pointer like that, so they add "random code" that forces a processor exception.
To clarify, the undefined behavior here is that the sanitizer sees `free` trying to access memory outside the bounds of what was returned by `malloc`.
It's perfectly valid to compute the address of a struct just before memory pointed to by a pointer you have, as long as the result points to valid memory:
void not_free(void *p) {
struct header *h = (struct header *) (((char *)p) - sizeof(struct header));
// ...
}
In the case of `free`, that resulting pointer is technically "invalid" because it's outside what was returned by `malloc`, even though the implementation of `malloc` presumably returned a pointer to memory just past the header.

I saw a string library once which took advantage of this. The library passed around classic C-style char* pointers. They work in printf, and basically all C code that expects a string. But the strings had extra metadata stored before the string content. That metadata contained the string's current length and the total allocation size. As a result, you could efficiently get a string length without scanning, append to a string, and do all sorts of other useful things that are tricky to do with bare allocations. All while maintaining support for the rest of the C ecosystem. It's a very cool trick!
I'm not very fond of this design, as it's easy to accidentally pass a "normal" C string where the fat one is expected; it compiles because BSTR is just a typedef for it.
You can allocate the exact same data structure, but store a pointer to the size prefix instead of to the first byte: you avoid that issue, and can still pass the data field to anything expecting a zero-terminated string:
struct WeirdString { int size; char data[]; };
struct WeirdString* ws = ...;
fopen(ws->data, "r");
[1] BSTR - https://learn.microsoft.com/en-us/previous-versions/windows/...

Small nitpick: the UB sanitizer also has some checks specific to C++ https://clang.llvm.org/docs/UndefinedBehaviorSanitizer.html
; asr rd, rs1, rs2 ; rd = signed(rs1) >> rs2
and rt, rs1, 0x8000 ; isolate sign bit
lsr rt, rt, rs2 ; shift sign bit to final position
neg rt, rt ; sign-extended part of final result
lsr rd, rs1, rs2 ; base part of final result
or rd, rd, rt ; combine both parts
It might be easier to understand broken down this way for anyone who didn't understand the article's one-liner.