What's noteworthy is that the compiler isn't required to generate a warning if the array is too small. That's just GCC being generous with its help. The official stance is that it's simply undefined behaviour to pass a pointer to an object which is too small (yes, only to pass, even if you don't access it).
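To make that concrete, a minimal sketch (function and variable names are mine, assuming the `[static N]` form the article discusses): the call below is undefined behaviour on its own, and any warning about it is a courtesy of the compiler, not a requirement.

void fill(int buf[static 8]);  /* caller promises at least 8 ints */

void caller(void)
{
    int small[4];
    fill(small);  /* undefined behaviour: the object is too small; GCC happens to warn, no diagnostic is required */
}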
> In my testing, it's between 1.2x and 4x slower than Yolo-C. It uses between 2x and 3x more memory. Others have observed higher overheads in certain tests (I've heard of some things being 8x slower). How much this matters depends on your perspective. Imagine running your desktop environment on a 4x slower computer with 3x less memory. You've probably done exactly this and you probably survived the experience. So the catch is: Fil-C is for folks who want the security benefits badly enough.
(from https://news.ycombinator.com/item?id=46090332)
We're talking about a lack of fat pointers here; switching to GC and accepting a 4x slower computer experience is not required for that.
The fact that the correct type signature, a pointer to a fixed-size array, exists and that you can create a struct containing a fixed-size array member and pass that in by value completely invalidates any possible argument for having special semantics for fixed-size array parameters. Automatic decay should have died when it became possible to pass structs by value. Its continued existence keeps leading people to write objectively inferior function signatures (though part of this is the absurdity of C type declarations making the objectively correct type a pain to write or use, another of the worst actual design mistakes).
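For concreteness, a sketch of the two alternatives that paragraph has in mind (sizes and names are made up):

#define KEY_SIZE 32

/* pointer to a fixed-size array: the size is part of the type */
void expand_key(unsigned char (*key)[KEY_SIZE]);

/* fixed-size array wrapped in a struct, passed by value */
struct key { unsigned char bytes[KEY_SIZE]; };
void whiten(struct key k);

void demo(void)
{
    unsigned char raw[KEY_SIZE] = {0};
    struct key k = {{0}};
    expand_key(&raw);  /* a wrong-sized array here is a type mismatch, not silent decay */
    whiten(k);         /* all KEY_SIZE bytes are genuinely copied */
}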
Fat pointers or argument-aware non-fixed size array parameters are a separate valuable feature, but it is at least understandable for them to not have been included at the time.
That's not entirely accurate: "fixed-size" array parameters (unlike pointers to arrays or arrays in structs) actually say that the array must be at least that size, not exactly that size, which makes them way more flexible (e.g. you don't need a buffer of an exact size, it can be larger). The examples from the article are neat but fairly specific because cryptographic functions always work with pre-defined array sizes, unlike most algorithms.
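A quick sketch of that flexibility (assuming the `[static N]` spelling; names are mine): the parameter states a minimum, so a larger buffer is fine.

void hash_block(const unsigned char block[static 64]);

void demo(void)
{
    unsigned char stream[4096] = {0};
    hash_block(stream);  /* fine: stream has at least 64 bytes */
}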
Incidentally, that was one of the main complaints about Pascal back in the day (see section 2.1 of [1]): it originally had only fixed-size arrays and strings, with no way for a function to accept a "generic array" or a "generic string" with size unknown at compile time.
[1] https://www.cs.virginia.edu/~evans/cs655/readings/bwk-on-pas...
The problem is that they are attractive for reducing repeated declarations:
#define THING_SIZE 42
typedef unsigned char thing_t[THING_SIZE];
struct red_box_with_a_hook {
    thing_t thing1, thing2;
};
void shake_hands_with(thing_t *thing);
That is all well. But thing_t is an array type, which still decays to a pointer. It looks as if thing_t can be passed by value, but since it is an array, it sneakily isn't passed by value:
void catch_with_net(thing_t thing); // thing's type is actually "unsigned char *"
// ...
unsigned char x[42];
catch_with_net(x); // pointer to first element passed; type checks
#include <stddef.h>
void foo(size_t n, int b[static n]);
https://godbolt.org/z/c4o7hGaG1
It is not limited to compile-time constants. Doesn't work in clang, sadly.
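Fleshing that declaration out into a call (the caller is mine): the size expression is an ordinary parameter, and per the linked Godbolt, GCC can diagnose a call where the buffer is provably smaller than n, which is the part Clang apparently doesn't do.

#include <stddef.h>

void foo(size_t n, int b[static n]);  /* b must point to at least n ints */

void demo(void)
{
    int buf[4];
    foo(4, buf);   /* fine */
    foo(16, buf);  /* buf is smaller than the promised 16; GCC can flag this */
}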
There are perhaps only 3 numbers: 0, 1, and lots. A fair argument might be made that 2 also exists, but for anything higher, you need to think about your abstraction.
"There was a unanimous vote that the feature is ugly, and a good consensus that its incorporation into the standard at the 11th hour was an unfortunate decision." - Raymond Mak (Canada C Working Group), https://www.open-std.org/jtc1/sc22/wg14/www/docs/dr_205.htm
That is the current state of both gcc and clang: they will both happily, without warnings, pass a NULL pointer to a function with a `[static N]` parameter, and then REMOVE ANY NULL CHECK from the function, because the argument can't possibly be NULL according to the function signature, so the check is obviously redundant.
See the example in [1]: note that in the assembly of `f1` the NULL check is removed, while it's present in the "unsafe" `f2`, making it actually safer.
Also note that gcc will at least tell you that the check in `f1()` is "useless" (yet no warning about `g()` calling it with a pointer that could be NULL), while clang sees nothing wrong at all.
For example, both compilers do complain if you try to pass a literal NULL to `f1` (because that can't possibly be right), the same way they warn about division by a literal zero but give no warnings about dividing by a number that is not known to be nonzero.
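Since that example isn't reproduced in the thread, here is a minimal reconstruction of the shape being described (the names f1, f2 and g come from the comments above; the bodies are my guess):

#include <stddef.h>

int f1(int a[static 1])
{
    if (a == NULL)  /* the signature says a can't be NULL, so compilers may delete this check */
        return -1;
    return a[0];
}

int f2(int *a)
{
    if (a == NULL)  /* kept: a plain pointer is allowed to be NULL */
        return -1;
    return a[0];
}

int g(int *p)
{
    return f1(p);  /* no warning, even though p may well be NULL */
}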
Inside a project that's all compiled together, however, it tends to work as expected. It's just that you must make sure your nullable pointers are being checked (which of course one can enforce with annotations in C).
TLDR: Explicit non-null pointers work just fine, but you shouldn't be using them on external interfaces; if you are using them in general, you should be annotating and/or explicitly checking your nullable pointers as soon as they cross your external interfaces.
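A sketch of what that annotation-plus-boundary-check discipline can look like, using the GCC/Clang `__attribute__((nonnull))` (the Clang-only `_Nonnull`/`_Nullable` qualifiers are another option; function names are mine):

#include <stddef.h>

/* internal function: documented and enforced as never seeing NULL */
__attribute__((nonnull))
static int internal_len(const char *s)
{
    int n = 0;
    while (s[n] != '\0')
        n++;
    return n;
}

/* external interface: nullable input is checked as soon as it crosses in */
int api_len(const char *s)
{
    if (s == NULL)
        return -1;
    return internal_len(s);
}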
https://clang.llvm.org/docs/AttributeReference.html#counted-...
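For readers who don't follow the link: `counted_by` goes on a flexible array member and names the field that holds its element count. A sketch (struct and field names are mine; needs a recent Clang or GCC):

#include <stddef.h>

struct packet {
    size_t len;  /* number of valid elements in data */
    unsigned char data[] __attribute__((counted_by(len)));
};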
For reference: https://digitalmars.com/articles/C-biggest-mistake.html