Also one of my favourite kernel patch messages: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...
As you might've guessed, it lacked November, but no one noticed for 4+ months, and I had since left the company. It created a local meme, #nolognovember, and even made it out to the public (it was in Russia: https://pikabu.ru/story/no_log_november_10441606)
That hardware real-time clocks keep time as a date and time drives me batty. And no one does the right thing, which is just a 64-bit counter counting 32 kHz ticks. Then use canned, tested code to convert that to butt-scratching monkey time.
Story: my old boss designed an STD Bus RTC card in 1978 or something. It kept time as YY:MM:DD HH:MM:SS plus 1/60 sec, was battery backed, and had shadow registers that latched the time. A couple of years later he redesigned it as a 32-bit seconds counter with a 32 kHz sub-seconds counter, plus a 48-bit offset register. What had been a whole card was now a couple of 4000-series ICs on the processor card. He wrote 400 bytes of Z80 assembly to convert that to date and time. He said it was tricky to get right, but once done it was done.
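Just to illustrate the "canned, tested code" point, here's a minimal C sketch. The register layout and names (rtc_counters, rtc_to_civil) are made up for illustration, assuming a free-running 32.768 kHz tick counter plus an offset register, and it leans on the C library for the calendar math rather than doing it by hand:

    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    /* Hypothetical register layout, made up for illustration: a free-running
     * 32.768 kHz tick counter plus a settable offset, as in the comment above. */
    struct rtc_counters {
        uint64_t ticks;   /* ticks since power-up */
        int64_t  offset;  /* ticks to add so that 0 == the Unix epoch */
    };

    /* Convert the raw counters to broken-down UTC, using the C library's
     * canned, tested calendar math instead of hand-rolled conversion.
     * (gmtime_r is POSIX; plain gmtime works too if reentrancy doesn't matter.) */
    static struct tm rtc_to_civil(const struct rtc_counters *rtc, uint32_t *subsec)
    {
        uint64_t total = rtc->ticks + (uint64_t)rtc->offset;
        time_t seconds = (time_t)(total / 32768);
        if (subsec)
            *subsec = (uint32_t)(total % 32768);   /* sub-second ticks */
        struct tm out;
        gmtime_r(&seconds, &out);
        return out;
    }

    int main(void)
    {
        /* One (non-leap) year's worth of ticks past the epoch. */
        struct rtc_counters rtc = { .ticks = 32768ULL * 86400 * 365, .offset = 0 };
        uint32_t sub;
        struct tm t = rtc_to_civil(&rtc, &sub);
        printf("%04d-%02d-%02d %02d:%02d:%02d + %u/32768 s\n",
               t.tm_year + 1900, t.tm_mon + 1, t.tm_mday,
               t.tm_hour, t.tm_min, t.tm_sec, (unsigned)sub);
        return 0;
    }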
I know of one that draws 0.5uA in normal mode but 12uA in binary counter mode.
But to be fair, it doesn't seem that onerous an issue - the biggest problem would have been if this was completely undocumented. One obvious workaround is to read the time immediately on wake up, and then ignore the result until reading the time returns something different.
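A rough sketch of that workaround, assuming a hypothetical rtc_read_seconds() driver call (a real driver would probably poll without blocking, or bound the wait):

    #include <stdint.h>

    /* Hypothetical driver call, made up for illustration: returns whatever
     * the RTC currently reports, which right after wake-up may still be the
     * stale, frozen value. */
    extern uint32_t rtc_read_seconds(void);

    /* The workaround described above: remember the first value read on
     * wake-up and ignore the clock until it reports something different. */
    uint32_t rtc_read_after_wakeup(void)
    {
        uint32_t stale = rtc_read_seconds();
        uint32_t now;
        do {
            now = rtc_read_seconds();
        } while (now == stale);
        return now;
    }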
>.< haha i remember this
On a QWERTY keyboard, the O key is also next to the I key. It's also possible someone accidentally fat-fingered "GenuineIontel", noticed something was off, moved their cursor between the "o" and "n", and accidentally hit Delete instead of Backspace.
Maybe an unlikely set of circumstances, but I imagine a random bit flip at the hardware level is rarer still, since it would likely have caused other problems if something more important had been flipped.
"GenuineIotel" is definitely odd, but difficult to research more about; I suspect these CPUs might actually end up being collector's items sometime in the future.
because inserting no-op instructions after them prevents the issue.
The early 386s were extremely buggy and needed the same workaround: https://devblogs.microsoft.com/oldnewthing/20110112-00/?p=11...
> For example, there was one bug that manifested itself in incorrect instruction decoding if a conditional branch instruction had just the right sequence of taken/not-taken history, and the branch instruction was followed immediately by a selector load, and one of the first two instructions at the destination of the branch was itself a jump, call, or return.
Even if you write up a comprehensive test plan for the branch predictor, and for selector loads, and so on, it might easily not include that particular corner case. And pre-silicon testing is expensive and slow, which also limits how much of it you can do.
Nevertheless, the states of the internal pipelines, which were supposed to be stopped, flushed and restarted cleanly by taken branches, depended on whether the previous branches had been taken or not taken.
Thus, on an 80386 or 80486 CPU, not-taken branches behaved like correctly predicted branches on a modern CPU, and taken branches behaved like mispredicted branches on a modern CPU.
The 80386 bug described above was probably caused by some kind of incomplete flushing of some pipeline after a taken branch, which left it in a partially invalid state that could be exposed by a specific sequence of subsequent instructions.
Though the bugs we were looking to catch there were definitely not of the multiple-interacting-subsystems type, but more of the "corner cases in input data values in floating-point instructions" variety.
This also wasn't that uncommon: SPARC also had a delay slot that operated similarly to the one on MIPS.
That is not the workaround in the documentation that was just linked.
> Workarounds:
> The solution to this problem is to put two instructions that do not require write back data after the mul instruction.
This seems reasonable for your compiler vendor to implement without getting rid of multiplication altogether. They also mention in the next sentence that they adopted the "correct" workaround (by providing a multiplication library function for the compiler to call).
That being said, the disabling of MUL is being done at a software project level here, not by the CPU vendor. It's in the same linked commit that added the NOP instructions to the arithmetic routines.
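For illustration, the library-function workaround mentioned above might look roughly like this: a plain shift-and-add multiply that the toolchain calls instead of emitting the buggy MUL. The name and signature are just a sketch (gcc's libgcc equivalent is __mulsi3), not the project's actual code:

    #include <stdint.h>

    /* Shift-and-add multiply: correct modulo 2^32, so it also works for
     * signed operands under two's complement. */
    uint32_t soft_mul32(uint32_t a, uint32_t b)
    {
        uint32_t result = 0;
        while (b != 0) {
            if (b & 1u)
                result += a;   /* add the shifted multiplicand for each set bit */
            a <<= 1;
            b >>= 1;
        }
        return result;
    }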