Posted by jandeboevrie 3 days ago
Do they do this? I thought they swapped this as well.
The most common way is to start from the most significant digit and read left-to-right until the last two digits, which you then read right-to-left (as in German, where 24 is "vierundzwanzig", literally "four-and-twenty").
A less common alternative is to read right-to-left starting from the least significant digit.
Now that I think about it though, I've only seen the latter way used for the year in a date.
Most (all?) Western languages say their numbers in big-endian order, as do East Asian languages like Chinese, Japanese and Korean. It is only natural that we write our numbers down in big endian; it can be argued that the mistake was making little-endian CPUs.
Big endian: byte N (0-indexed from the most significant end) has place value 256^(L-1-N), and bit n has value 2^n or 2^(l-1-n) depending on the architecture (some are effectively little-endian by bit but big-endian by byte), where L and l are the size of the whole integer in bytes and bits respectively.
Design hardware or even write arbitrary precision routines, and you'll quickly realise that "big endian is backwards, little endian is logical".
Language is messy, some more than others [1].
Yeah, I know there are exceptions, but on average most human languages are big endian.
presented at Embedded Linux Conf
Of course, endianness only matters to C programmers who take endless pleasure in casting raw data from external sources into structs.
https://gist.github.com/siraben/cb0eb96b820a50e11218f0152f2e...
Nice article! But it's a pity it does not elaborate on how...
Eh, is it? There aren't any big-endian systems left that matter for anyone who isn't doing super niche stuff. Unless you are writing a truly foundational library that you want to work everywhere (like libc, zlib, libpng etc.), you can safely assume everything is little endian. In C++ I usually just add a static_assert that the system is little-endian.