Posted by dxdxdt 2 days ago

Defrag.exfat Is Inefficient and Dangerous (github.com)
33 points | 10 comments
zapzupnz 1 day ago|
Does nobody else think the responses from the person who wrote the code read like the usual sycophantic “you’re absolutely right!” tone you get from AI these days?
pajko 1 day ago|
There are at least 2 AIs.
zapzupnz 1 day ago||
Probably. The way nobody is calling that out in the thread is wild.
ycombinatrix 1 day ago||
>We prioritized simplicity and correctness first, and plan to incrementally introduce performance optimizations in future iterations.

Sir, this is a correctness issue.

forgotpwd16 1 day ago||
>After reviewing the core defrag logic myself, I've come to a conclusion that it's AI slop.

I'd call it human slop. AI may've given them some code, but they certainly haven't used it fully. I uploaded defrag.c to ChatGPT, asked it to review for performance/correctness/safety, and it pointed out the same issues as you (alongside a bunch of others, but I'm not interested in reviewing those at the moment).

dxdxdt 23 hours ago||
I did the same. Was genuinely curious. Didn't get much from it. I'm still confused.

The code base is huge for an LLM to handle; perhaps it was generated over multiple prompts, idk. Not sure if someone could train a model on the kernel code or exfatprogs and generate the code. I doubt someone with that kind of expertise would even go through the process when they could just write the code themselves, which is much easier.

stuaxo 1 day ago||
Talk about a baptism of fire for the dev.

Seems like they are very new to things and didn't expect it to be adopted, but were hoping for a bit of feedback.

burnt-resistor 1 day ago|
Sigh. Piss-poor engineering, likely by humans. For the love of god, do atomic updates by duplicating data first, such as with a move-out-of-the-way-first strategy, before doing metadata updates. And keep a backup of the metadata at each point in time to maximize crash consistency and crash recovery while minimizing the potential for data loss. An online defrag kernel module would likely be much more useful, but I don't trust them to be able to handle such an undertaking.
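To make that first point concrete, here's a rough file-level sketch of the copy-first, flip-metadata-last pattern (not the actual defrag.exfat code; the paths and names are purely illustrative, and a real cluster-level defragmenter works below the file API): the data is duplicated into a temporary file and fsynced before a single rename(), so a crash at any point leaves either the old copy or the new one intact.

    /* Sketch only: rewrite a file via a temporary copy, flush it, then make
     * the metadata change (the rename) the last, atomic step. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    static int rewrite_atomically(const char *path)
    {
        char tmp[4096];
        snprintf(tmp, sizeof tmp, "%s.defrag-tmp", path);

        int in = open(path, O_RDONLY);
        int out = open(tmp, O_WRONLY | O_CREAT | O_EXCL, 0644);
        if (in < 0 || out < 0)
            return -1;

        char buf[1 << 16];
        ssize_t n;
        while ((n = read(in, buf, sizeof buf)) > 0)
            if (write(out, buf, n) != n)
                return -1;

        if (fsync(out) != 0)      /* data must be durable before any metadata change */
            return -1;
        close(in);
        close(out);

        /* A crash before this point leaves the original untouched;
         * a crash after it leaves the new, rewritten copy. */
        return rename(tmp, path);
    }

    int main(int argc, char **argv)
    {
        if (argc != 2 || rewrite_atomically(argv[1]) != 0) {
            fprintf(stderr, "usage: rewrite-one-file <path>\n");
            return 1;
        }
        return 0;
    }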

If a user has double storage available, it's probably best to do the old-fashioned "defrag" by single-threaded copying all files and file metadata to a newly-formatted volume.
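Something like this single-threaded tree copy would do it (a sketch under assumptions: /mnt/old and /mnt/new are made-up mount points, error handling is minimal, and in practice plain cp -a or rsync gets you the same result):

    /* Sketch: walk the old volume and copy every file, one at a time,
     * onto a freshly formatted volume. */
    #define _XOPEN_SOURCE 500
    #include <ftw.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/stat.h>
    #include <unistd.h>
    #include <utime.h>

    static const char *src_root = "/mnt/old";   /* fragmented volume (placeholder) */
    static const char *dst_root = "/mnt/new";   /* freshly formatted volume (placeholder) */

    static int copy_entry(const char *path, const struct stat *sb,
                          int typeflag, struct FTW *ftwbuf)
    {
        char dst[4096];
        snprintf(dst, sizeof dst, "%s%s", dst_root, path + strlen(src_root));

        if (typeflag == FTW_D) {
            mkdir(dst, sb->st_mode & 0777);     /* recreate the directory */
            return 0;
        }
        if (typeflag != FTW_F)
            return 0;                           /* skip anything that isn't a regular file */

        int in = open(path, O_RDONLY);
        int out = open(dst, O_WRONLY | O_CREAT | O_TRUNC, sb->st_mode & 0777);
        if (in < 0 || out < 0) { perror(path); exit(1); }

        char buf[1 << 16];
        ssize_t n;
        while ((n = read(in, buf, sizeof buf)) > 0)
            if (write(out, buf, n) != n) { perror("write"); exit(1); }

        fsync(out);                             /* flush before moving on to the next file */
        close(in);
        close(out);

        /* carry over timestamps, about the only per-file metadata exFAT keeps */
        struct utimbuf t = { sb->st_atime, sb->st_mtime };
        utime(dst, &t);
        return 0;
    }

    int main(void)
    {
        if (nftw(src_root, copy_entry, 16, FTW_PHYS) != 0) {
            perror("nftw");
            return 1;
        }
        return 0;
    }

Writing each file sequentially onto an empty volume should come out essentially unfragmented, which is the whole point.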

dxdxdt 23 hours ago||
Yeah. Pretty much.

Read the defrag code in other well-established filesystems like ext4 or btrfs. They all have limitations (or caveats, if you will). It's one of those problems where you just have to throw money at it and hope for the best. Even Microsoft kinda just gave up on it because it's really a pointless exercise in this day and age.

doubled112 1 day ago||
That last paragraph sums up the ZFS defrag procedure at one shop I worked at. Buy new disks and send/receive the pool.

At our size and use case, the timing was usually close to perfect: the pools were getting close to full and fragmented just as larger disks became inexpensive.