Posted by onurkanbkrc 18 hours ago
Even then, only some file systems guarantee it, and even then the file-size update isn't atomic, AFAIK.
I'm not so sure about the size update being atomic in this case, but fairly sure about the rest.
matklad had a post or video about this.
There's also a tool called ALICE, and its authors published a paper on this subject.
And there was a blog post about how the Badger database fixed some issues around this problem.
If there's a failure like a crash or a power outage, it doesn't work like that.
In terms of reliability, you might as well be pushing into an in-memory data structure and writing it to disk at program exit.
POSIX says that for a file opened with O_APPEND "the file offset shall be set to the end of the file prior to each write." That's it. That's all it does.
That tradeoff is at the root of why most notify APIs are either approximate (events can be dropped) or rigidly bounded by kernel settings that prevent truly arbitrary numbers of watches. fanotify and some implementations of kqueue are better at efficiently triggering large recursive watches, but that’s still just a mitigation on the underlying memory/performance tradeoffs, not a full solution.
inotify is the way to shovel these events out of the kernel; after that, normal userspace processing rules apply. It's maybe not elegant from your POV, but it's simple.
There are sample "drivers" in easily modified Python that are fast enough for casual use.
`if (condition) { do the thing; }`
With that said, at least for C and C++, the behavior of (std::)atomic when dealing with interprocess interactions is slightly outside the scope of the standard, but in practice (and at least recommended by the C++ standard) (atomic_)is_lock_free() atomics are generally usable between processes.
In the interview when they were describing this problem, I asked why they didn't just put each new release in a new dir and use symlinks to roll forward and backward as needed. They kind of froze and looked at each other, and all had the same 'aha' moment. I ended up not being interested in taking the job, but they still made sure to thank me for the idea, which I thought was nice.
Not that I'm a genius or anything, it's something I'd done previously for years, and I'm sure I learned it from someone else who'd been doing it for years. It's a very valid deployment mechanism IMO, of course depending on your architecture.
Just git branch (one branch per region because of compliance requirements) -> the branch build creates a tar.gz with a predefined name -> an automated system downloads the new tar.gz, checks release date, revision, etc. -> new symlink & PHP (serverless!!!) graceful restart and ka-b00m.
Rollbacks worked by pointing back to the old dir & restart.
Worked like a charm :-)
that's how Chrome updates itself, but without the symlink part
The OS core is deployed as a single unit and is a few GB in size, pretty small when internal storage is into the hundreds of GB.