
Posted by jnsgruk 3 days ago

Introducing architecture variants(discourse.ubuntu.com)
238 points | 142 comments
mobilio 2 days ago|
The announcement was here: https://discourse.ubuntu.com/t/introducing-architecture-vari...

and key point: "Previous benchmarks we have run (where we rebuilt the entire archive for x86-64-v3) show that most packages show a slight (around 1%) performance improvement and some packages, mostly those that are somewhat numerical in nature, improve more than that."

ninkendo 2 days ago||
> show that most packages show a slight (around 1%) performance improvement

This takes me back to arguing with Gentoo users 20 years ago who insisted that compiling everything from source for their machine made everything faster.

The consensus at the time was basically "theoretically, it's possible, but in practice, gcc isn't really doing much with the extra instructions anyway".

Then there's stuff like glibc which has custom assembly versions of things like memcpy/etc, and selects from them at startup. I'm not really sure if that was common 20 years ago but it is now.

It's cool that after 20 years we can finally start using the newer instructions in binary packages, but it definitely seems to not matter all that much, still.
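
For illustration, the kind of startup-time selection glibc does can be sketched with GCC's ifunc attribute (a minimal sketch with invented function names, not glibc's actual internals; assumes GCC or Clang on an ELF platform):

    #include <stddef.h>

    /* Baseline implementation, works on any x86-64 CPU. */
    static long sum_scalar(const int *v, size_t n) {
        long s = 0;
        for (size_t i = 0; i < n; i++) s += v[i];
        return s;
    }

    /* Same code, compiled with AVX2 enabled so the compiler may vectorize it. */
    __attribute__((target("avx2")))
    static long sum_avx2(const int *v, size_t n) {
        long s = 0;
        for (size_t i = 0; i < n; i++) s += v[i];
        return s;
    }

    /* The resolver runs once, when the dynamic linker relocates the binary,
       and returns the implementation that all callers get bound to. */
    static long (*resolve_sum(void))(const int *, size_t) {
        __builtin_cpu_init();
        return __builtin_cpu_supports("avx2") ? sum_avx2 : sum_scalar;
    }

    long sum(const int *v, size_t n) __attribute__((ifunc("resolve_sum")));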

Amadiro 2 days ago|||
It's also because around 20 years ago there was a "reset" when we switched from x86 to x86_64. When AMD introduced x86_64, it made a bunch of the previously optional extensions (SSE up to a certain version, etc.) a mandatory part of x86_64. Gentoo systems could already be optimized for those instructions on x86, but from 2004 or so every system using x86_64 was automatically taking full advantage of all of them*.

Since then we've slowly started accumulating optional extensions again: newer SSE versions, AVX, encryption and virtualization extensions, probably some more newfangled AI stuff I'm not on top of. So, very slowly, it may have started to make sense again for an approach like Gentoo's to exist**.

* usual caveats apply; if the compiler can figure out that using the instruction is useful etc.

** but the same caveats as back then apply. A lot of software can't really take advantage of these new instructions, because newer instructions have been getting increasingly use-case-specific; and applications that can greatly benefit from them will already have alternative code paths to take advantage of them anyway. Also, a lot of the hardware-acceleration work has moved to GPUs, which have a feature discovery process independent of the CPU instruction set anyway.

slavik81 1 day ago|||
The llama.cpp package on Debian and Ubuntu is also rather clever in that it's built for x86-64-v1, x86-64-v2, x86-64-v3, and x86-64-v4. It benefits quite dramatically from using the newest instructions, but the library doesn't have dynamic instruction selection itself. Instead, ld.so decides which version of libggml.so to load depending on your hardware capabilities.
ignoramous 1 day ago||
> llama.cpp package on Debian and Ubuntu is also rather clever … ld.so decides which version of libggml.so to load depending on your hardware capabilities

Why is this "clever"? This is pretty much how "fat" binaries are supposed to work, no? At least, such packaging is the norm for Android.

mikepurvis 1 day ago|||
> AVX, encryption and virtualization

I would guess that these are domain-specific enough that they can also mostly be enabled by the relevant libraries employing function multiversioning.
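
A rough sketch of what function multiversioning looks like with GCC/Clang's target_clones attribute (the dot-product function here is hypothetical, not from any particular library):

    #include <stddef.h>

    /* The compiler emits one clone per listed target plus a hidden
       runtime dispatcher that picks the best supported clone on
       first call. */
    __attribute__((target_clones("default", "avx2", "avx512f")))
    double dot(const double *a, const double *b, size_t n) {
        double s = 0.0;
        for (size_t i = 0; i < n; i++)
            s += a[i] * b[i];
        return s;
    }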

izacus 1 day ago||
You would guess wrong.
mikepurvis 1 day ago|||
Isn’t the whole thrust of this thread that most normal algorithms see little to no speedup from things like AVX, and that multiversioning the things that do therefore makes more sense than compiling the whole OS for a newer set of CPU features?
ploxiln 1 day ago||||
FWIW the cool thing about gentoo was the "use-flags", to enable/disable compile-time features in various packages. Build some apps with GTK or with just the command-line version, with libao or pulse-audio, etc. Nowadays some distro packages have "optional dependencies" and variants like foobar-cli and foobar-gui, but not nearly as comprehensive as Gentoo of course. Learning about some minor custom CFLAGS was just part of the fun (and yeah some "funroll-loops" site was making fun of "gentoo ricers" way back then already).

I used Gentoo a lot, jeez, 15 to 20 years ago, and the install guide walking me through partitioning disks, formatting them, unpacking tarballs, editing config files, running grub-install, etc., was so incredibly valuable to me that I have trouble expressing it.

mpyne 1 day ago|||
I still use Gentoo for that reason, and I wish some of those principles around handling of optional dependencies were more popular in other Linux distros and package ecosystems.

There's lots of software applications out there whose official Docker images or pip wheels or whatever bundle everything under the sun to account for all the optional integrations the application has, and it's difficult to figure out which packages can be easily removed if we're not using the feature and which ones are load-bearing.

zerocrates 1 day ago||||
I started with Debian on CDs, but used Gentoo for years after that. Eventually I admitted that just Ubuntu suited my needs and used up less time keeping it up to date. I do sometimes still pull in a package that brings a million dependencies for stuff I don't want and miss USE flags, though.

I'd agree that the manual Gentoo install process, and those tinkering years in general, gave me experience and familiarity that's come in handy plenty of times when dealing with other distros, troubleshooting, working on servers, and so on.

michaelcampbell 1 day ago||||
Someone has set up an archive of that site; I visit it once in a while for a few nostalgic chuckles.

https://www.shlomifish.org/humour/by-others/funroll-loops/Ge...

viraptor 1 day ago|||
Nixpkgs exposes a lot of options like that. You can override both options and dependencies and supply your own cflags if you really want.
oivey 1 day ago||||
This should build a lot more incentive for compiler devs to try and use the newer instructions. When everyone uses binaries compiled without support for optional instruction sets, why bother putting much effort into developing for them? It’ll be interesting to see if we start to see more of a delta moving forward.
Seattle3503 1 day ago||
And application developers to optimize with them in mind?
hajile 1 day ago||||
According to this[0] study of the Ubuntu 16.04 package repos, 89% of all x86 code consisted of just 12 instructions (mov, add, call, lea, je, test, jmp, nop, cmp, jne, xor, and -- in that order).

The extra issue here is that SIMD (the main optimization) simply sucks to use. Auto-vectorization has been mostly a pipe dream for decades now as the sufficiently-smart compiler simply hasn't materialized yet (and maybe for the same reason the EPIC/Itanium compiler failed -- deterministically deciding execution order at compile time isn't possible in the abstract and getting heuristics that aren't deceived by even tiny changes to the code is massively hard).

Doing SIMD means delving into x86 assembly and all its nastiness/weirdness/complexity. It's no wonder that devs won't touch it unless absolutely necessary (which is why the speedups are coming from a small handful of super-optimized math libraries). ARM vector code is also rather byzantine for a normal dev to learn and use.

We need a simpler option that normal programmers can easily learn and use. Maybe it's way less efficient than the current options, but somewhat slower SIMD is still generally going to beat no SIMD at all.

[0] https://oscarlab.github.io/papers/instrpop-systor19.pdf
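
For a sense of what that delving looks like: summing floats is a one-line loop in portable C, but the hand-written AVX2 version is roughly the following (a sketch only; it assumes n is a multiple of 8 and that the caller has already verified AVX2 support):

    #include <immintrin.h>
    #include <stddef.h>

    float sum_avx2(const float *v, size_t n) {
        __m256 acc = _mm256_setzero_ps();
        for (size_t i = 0; i < n; i += 8)   /* 8 floats per iteration */
            acc = _mm256_add_ps(acc, _mm256_loadu_ps(v + i));
        /* Horizontal reduction of the 8 lanes down to a single float. */
        __m128 lo = _mm256_castps256_ps128(acc);
        __m128 hi = _mm256_extractf128_ps(acc, 1);
        __m128 s  = _mm_add_ps(lo, hi);
        s = _mm_hadd_ps(s, s);
        s = _mm_hadd_ps(s, s);
        return _mm_cvtss_f32(s);
    }

(Build with -mavx2 or better; the equivalent scalar loop needs none of this.)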

badlibrarian 1 day ago|||
Agner Fog's libraries make it pretty trivial for C++ programmers at least. https://www.agner.org/optimize/
kccqzy 1 day ago||||
The Highway library is exactly that kind of simpler option for SIMD. It's less efficient than hand-written assembly, but you can easily write good-enough SIMD for multiple different architectures.
JonChesterfield 1 day ago|||
The sufficiently smart vectoriser has been here for decades. CUDA is one. Uses all the vector units just fine, may struggle to use the scalar units.
suprjami 1 day ago||||
I somehow have the memory that there was an extremely narrow time window where the speedup was tangible and quantifiable for Gentoo, as they were the first distro to ship some very early gcc optimisation. However it's open source software so every other distro soon caught up and became just as fast as Gentoo.
harha 1 day ago|||
Would it make a difference if you compile the whole system vs. just the programs you want optimized?

As in, are there any common libraries or parts of the system that typically slow things down? Or was this more relevant at a time when hardware was more limited, so improving everything made things feel faster in general?

juujian 2 days ago|||
Are there any use cases where that 1% is worth any hassle whatsoever?
wongarsu 2 days ago|||
If every computer built in the last decade gets 1% faster and all we have to pay for that is a bit of one-off engineering effort and a doubling of the storage requirement of the ubuntu mirrors that seems like a huge win

If you aren't convinced by your ubuntu being 1% faster, consider how many servers, VMs and containers run ubuntu. Millions of servers using a fraction of a percent less energy multiplies out to a lot of energy

vladms 2 days ago|||
Don't have a clear opinion, but you have to factor in all the issues that can be due to different versions of software. Think of unexposed bugs in the whole stack (that can include compiler bugs, but also software bugs related to numerical computation or just uninitialized memory). There are enough heisenbugs without worrying that half the servers run slightly different software.

It's not for nothing that some time ago "write once, run everywhere" was a selling proposition (not that it was actually working in all cases, but definitely working better than alternatives).

sumtechguy 2 days ago||||
That comes out to about 1.5 hours saved per week for many tasks, if you are running full tilt. That seems like an easy win.
alkonaut 1 day ago||||
If I recompile a program to fully utilize my cpu better (use AVX or whatever) then if my program takes 1 second to execute instead of 2, it likely did not use half the _energy_.
darkwater 1 day ago|||
Obviously not. But scale it out to a fleet of 1000 servers running your program continuously, you can now shut down 10 for the same exact workload.
zymhan 23 hours ago|||
Sure, but we're talking about compiled packages being distributed by a package manager.
alkonaut 4 hours ago||
Yes but my point is: if I download the AVX version instead of the SSE version of a package and that makes my 1000 servers 10% _quicker_, that is not the same as being 10% more _efficient_.

Because typically these modern features make the CPU do things faster by eating more power. There may be savings from having fewer servers etc., but savings in _speed_ are not the same as savings in _power_ (and sometimes they even work the opposite way)

duskdozer 1 day ago|||
how much energy would we save if every website request weren't loaded down with 20MB of ads and analytics :(
Aissen 2 days ago||||
You need 100 servers. Now you need to only buy 99. Multiply that by a million, and the economies of scale really matter.
iso1631 2 days ago||
1% is less than the difference between negotiating with a hangover or not.
gpm 2 days ago|||
What a strange comparison.

If you're negotiating deals worth billions of dollars, or even just millions, I'd strongly suggest not doing so with a hangover.

Pet_Ant 2 days ago|||
> If you're negotiating deals worth billions of dollars, or even just millions, I'd strongly suggest not doing so with a hangover.

...have you met salespeople? Buying lap dances is a legitimate business expense for them. You'd be surprised how much personal rapport matters and facts don't.

In all fairness, I only know about 8 and 9 figure deals, maybe at 10 and 11 salespeople grow ethics...

bregma 1 day ago|||
I strongly suspect ethics are inversely proportional to the size of the deal.
glenstein 1 day ago||||
That's more an indictment of sales culture than a critique of computational efficiency.
squeaky-clean 1 day ago|||
Well sure, because you want the person trying to buy something from you for a million dollars to have a hangover.
tclancy 1 day ago|||
Sounds like someone never read Sun Tzu.

(Not really, I just know somewhere out there is a LinkedInLunatic who has a Business Philosophy based on being hungover.)

gpm 1 day ago||
Appear drunk when you are sober, and sober when you are drunk

- Sun Zoo

PeterStuer 2 days ago||||
A lot of improvements are very incremental. In aggregate, they often compound and are very significant.

If you would only accept 10x improvements, I would argue progress would be very small.

wat10000 2 days ago||||
It's rarely going to be worth it for an individual user, but it's very useful if you can get it to a lot of users at once. See https://www.folklore.org/Saving_Lives.html

"Well, let's say you can shave 10 seconds off of the boot time. Multiply that by five million users and thats 50 million seconds, every single day. Over a year, that's probably dozens of lifetimes. So if you make it boot ten seconds faster, you've saved a dozen lives. That's really worth it, don't you think?"

I put a lot of effort into chasing wins of that magnitude. Over a huge userbase, something like that has a big positive ROI. These days it also affects important things like heat and battery life.

The other part of this is that the wins add up. Maybe I manage to find 1% every couple of years. Some of my coworkers do too. Now you're starting to make a major difference.

ilaksh 1 day ago||||
They did say some packages were more. I bet some are 5%, maybe 10 or 15. Maybe more.

Well, one example could be llama.cpp. It's critical for it to use every single extension the CPU has to move more bits at a time. When I installed it I had to compile it.

This might make it more practical to start offering OS packages for things like llama.cpp

I guess people that don't have newer hardware aren't trying to install those packages. But maybe the idea is that packages should not break on certain hardware.

Blender might be another one like that which really needs the extensions for many things. But maybe you do want to allow it to be used on some oldish hardware anyway, because it still has valid uses on those machines.

adgjlsfhk1 2 days ago||||
it's very non-uniform: 99% see no change, but 1% see 1.5-2x better performance
2b3a51 2 days ago|||
I'm wondering if 'somewhat numerical in nature' relates to LAPACK/BLAS and similar libraries that are actually dependencies of a wide range of desktop applications?
adgjlsfhk1 2 days ago||
BLAS and LAPACK generally do manual multi-versioning by detecting CPU features at runtime. This is more useful one level up the stack, in things like compression/decompression, ODE solvers, image manipulation and so on: code that still works with big arrays of data but doesn't have a small number of kernels (or as much dev time), so it typically relies on the compiler for auto-vectorization.
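
Roughly, that manual multi-versioning pattern amounts to picking a kernel once at library load and routing calls through a function pointer; a sketch with invented names, not OpenBLAS's actual internals:

    #include <stddef.h>

    typedef void (*axpy_fn)(size_t n, double a, const double *x, double *y);

    static void axpy_generic(size_t n, double a, const double *x, double *y) {
        for (size_t i = 0; i < n; i++) y[i] += a * x[i];
    }

    /* Same loop, compiled with AVX2+FMA enabled so the compiler may vectorize it. */
    __attribute__((target("avx2,fma")))
    static void axpy_avx2(size_t n, double a, const double *x, double *y) {
        for (size_t i = 0; i < n; i++) y[i] += a * x[i];
    }

    static axpy_fn axpy_impl = axpy_generic;

    /* Runs when the shared library is loaded and selects the kernel. */
    __attribute__((constructor))
    static void select_kernels(void) {
        __builtin_cpu_init();
        if (__builtin_cpu_supports("avx2")) axpy_impl = axpy_avx2;
    }

    void my_daxpy(size_t n, double a, const double *x, double *y) {
        axpy_impl(n, a, x, y);
    }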
Insanity 2 days ago|||
I read it as, across the board a 1% performance improvement. Not that only 1% of packages get a significant improvement.
IAmBroom 2 days ago|||
In a complicated system, a 1% overall benefit might well be because of a 10% improvement in just 10% of the system (or more in a smaller contributor).
darkwater 1 day ago|||
The announcement is pretty clear on this:

   > Previous benchmarks (...) show that most packages show a slight (around 1%) performance improvement and some packages, mostly those that are somewhat numerical in nature, improve more than that.
locknitpicker 1 day ago||||
> Are there any use cases where that 1% is worth any hassle whatsoever?

I don't think this is a valid argument to make. If you were doing the optimization work then you could argue tradeoffs. You are not, Canonical is.

Your decision is which image you want to use, and Canonical is giving you a choice. Do you care about which architecture variant you use? If you do, you can now pick the one that works best for you. Do you want to win an easy 1% performance gain? Now you have that choice.

godelski 1 day ago||||

  > where that 1% is worth any hassle
You'll need context to answer your question, but yes there are cases.

Let's say you have a process that takes 100hrs to run and costs $1k/hr. You save an hour and $1k every time you run the process. You're going to save quite a bit. You don't just save the time to run the process, you save literal time and everything that that costs (customers, engineering time, support time, etc).

Let's say you have a process that takes 100ns and similarly costs $1k/hr. You now run in 99ns. Running the process 36 million times is going to be insignificant. In this setting even a 50% optimization probably isn't worthwhile (unless you're a high frequency trader or something)

This is where the saying "premature optimization is the root of all evil" comes from! The "premature" part is often disregarded and the rest of the context goes with it. Here's more context to Knuth's quote[0].

  There is no doubt that the holy grail of efficiency leads to abuse. Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.

  Yet we should not pass up our opportunities in that critical 3%. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified.
Knuth said: "Get a fucking profiler and make sure that you're optimizing the right thing". He did NOT say "don't optimize".

So yes, there are plenty of times where that optimization will be worthwhile. The percentages don't mean anything without the context. Your job as a programmer is to determine that context. And not just in the scope of your program, but in the scope of the environment you expect a user to be running on. (i.e. their computer probably isn't entirely dedicated to your program)

[0] https://dl.acm.org/doi/10.1145/356635.356640 (alt) https://sci-hub.se/10.1145/356635.356640

dehrmann 2 days ago||||
Anything at scale. 1% across FAANG is huge.
dabinat 1 day ago|||
If those kinds of optimizations are on the table, why would they not already be compiling and optimizing from source?
darkwater 1 day ago||
I'm not a hyperscaler; I run a thousand machines. If I get the optimization by just changing the base image I use to build those machines - in an already automated process - then it's basically free. Well, unless it triggers some new bug that wasn't there before.
Havoc 2 days ago||||
Arguably the same across consumers too. It's just harder to measure than in centralized datacenters.
notatoad 1 day ago||
nah, performance benefits are mostly wasted on consumers, because consumer hardware is very infrequently CPU-constrained. in a datacentre, a 1% improvement could actually mean you provision 99 CPUs instead of 100. but on your home computer, a 1% CPU improvement means that your network request completes 0.0001% faster, or your file access happens 0.000001% faster, and then your CPU goes back to being idle.

an unobservable benefit is not a benefit.

bandrami 1 day ago|||
Isn't Facebook still using PHP?
dehrmann 1 day ago|||
They forked PHP into Hack. They've diverged pretty far by this point (especially with data structures), but it maintains some of PHP's quirks and request-oriented runtime. It's jitted by HHVM. Both Hack and HHVM are open source, but I'm not aware of any major users outside Meta.
speed_spread 1 day ago|||
Compiled PHP. I'm pretty sure they ran the numbers.
gwbas1c 1 day ago||||
> some packages, mostly those that are somewhat numerical in nature, improve more than that

Perhaps if you're doing CPU-bound math you might see an improvement?

rossjudson 1 day ago||||
Any hyperscaler will take that 1% in a heartbeat.
colechristensen 2 days ago|||
Very few people are in the situation where this would matter.

Standard advice: You are not Google.

I'm surprised and disappointed 1% is the best they could come up with, with numbers that small I would expect experimental noise to be much larger than the improvement. If you tell me you've managed a 1% improvement you have to do a lot to convince me you haven't actually made things 5% worse.

noir_lord 1 day ago||
No but a lot of people are buying a lot of compute from Google, Amazon and Microsoft.

At scale marginal differences do matter and compound.

horizion2025 1 day ago|||
How many additions have there even been outside of AVX-x? And even AVX2 is from 2011. If we ignore AVX-x, the last I can recall are the few instructions added in the bit-manipulation sets BMI/ABM, but they are Haswell/Piledriver/Jaguar era (2012-2013). While some specific cases could benefit, it doesn't seem like a goldmine of performance improvements.

Further, maybe it has not been a focus for compiler vendors to generate good code for these higher-level archs if few are using the feature. So Ubuntu's move could improve that.

dang 2 days ago|||
Thanks - we've merged the comments from https://news.ycombinator.com/item?id=45772579 into this thread, which had that original source.
pizlonator 2 days ago|||
That 1% number is interesting but risks missing the point.

I bet you there is some use case of some app or library where this is like a 2x improvement.

alternatex 1 day ago||
Aggregated metrics are always useless as they tend to show interesting and sometimes exciting data that in actuality contains zero insight. I'm always wary of people making decisions based on aggregate metrics.

Would be nice to know the per app metrics.

jwrallie 1 day ago||
Is it worth losing the ability to just put your HDD into an older laptop and boot it in an emergency?
random29ah 3 hours ago||
I'm really "new" to x64 (I only migrated from 32-bit in 2020...) and the difference I noticed between x86-64-v1 and x86-64-v3 was only with video (with ffmpeg), audio (mp3/ogg/mp4...) and encryption; the rest remains practically the same.

Naively, I believe it might be more appropriate to have x86-64-v1 and x86-64-vN options only for specific software and leave the rest as x86-64-v1.

AVX seemed to give the biggest boost to things.

Regarding those who are making fun of Gentoo users, it really did make a bigger difference in the past, but with the refinement of compilers, the difference has diminished. Today, for me, who still uses Gentoo/CRUX for some specific tasks, what matters is the flexibility to enable or disable what I want in the software, and not so much the extra speed anymore.

As an example, currently I use -Os (x86-64-v1) for everything, and only for things related to video/sound/cryptography (I believe for things related to mathematics in general?) I use -O2 (x86-64-v3) with other flags to get a little more out of it.

Interestingly, in many cases -Os with -mtune=nocona generates faster binaries even though I'm only using hardware from Haswell to today's hardware (who can understand the reason for this?).

theandrewbailey 2 days ago||
A reference for x86-64 microarchitecture levels: https://en.wikipedia.org/wiki/X86-64#Microarchitecture_level...

x86-64-v3 is AVX2-capable CPUs.
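
If you want to check where your own CPU lands, recent compilers accept the level names directly (a small sketch, assuming GCC 12+ or a similarly recent Clang; older compilers only take individual feature names like "avx2"):

    #include <stdio.h>

    int main(void) {
        __builtin_cpu_init();
        const char *level = "x86-64 (baseline)";
        if (__builtin_cpu_supports("x86-64-v2")) level = "x86-64-v2";
        if (__builtin_cpu_supports("x86-64-v3")) level = "x86-64-v3";
        if (__builtin_cpu_supports("x86-64-v4")) level = "x86-64-v4";
        printf("highest supported level: %s\n", level);
        return 0;
    }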

jsheard 2 days ago|
> x86-64-v3 is AVX2-capable CPUs.

Which unfortunately extends all the way to Intels newest client CPUs since they're still struggling to ship their own AVX512 instructions, which are required for v4. Meanwhile AMD has been on v4 for two generations already.

theandrewbailey 2 days ago||
At least Intel and AMD have settled on a mutually supported subset of AVX-512 instructions.
wtallis 2 days ago|||
The hard part was getting Intel and Intel to agree on which subset to keep supporting.
cogman10 1 day ago||
Even on the same chip.

Having a non-uniform instruction set for one package was a baffling decision.

jsheard 1 day ago|||
I think that stemmed from their P-core design being shared between server and client. They needed AVX512 for server so they implemented it in the P-cores, and it worked fine there since their server chips are entirely P-cores or entirely E-cores, but client uses a mixture of both so they had to disable AVX512 to bring the instruction set into sync across both sides.
wtallis 1 day ago||
Server didn't really have anything to do with it. They were fine shipping AVX 512 in consumer silicon for Cannon Lake (nominally), Ice Lake, Tiger Lake, and most damningly Rocket Lake (backporting an AVX 512-capable core to their 14nm process for the sole purpose of making a consumer desktop chip, so they didn't even have the excuse that they were re-using a CPU core floorplan that was shared with server parts).

It's pretty clear that Alder Lake was simply a rush job, and had to be implemented with the E cores they already had, despite never having planned for heterogeneous cores to be part of their product roadmap.

jiggawatts 1 day ago|||
It’s a manifestation of Conway’s law: https://en.wikipedia.org/wiki/Conway%27s_law

They had two teams designing the two types of cores.

zozbot234 2 days ago||
What are the changes to dpkg and apt? Are they being shared with Debian? Could this be used to address the pesky armel vs. armel+hardfloat vs. armhf issue, or for that matter, the issue of i486 vs. i586 vs. i686 vs. the many varieties of MMX and SSE extensions for 32-bit?

(There is some older text in the Debian Wiki https://wiki.debian.org/ArchitectureVariants but it's not clear if it's directly related to this effort)

Denvercoder9 1 day ago||
Even if technically possible, it's unlikely this will be used to support any of the variants you mentioned in Debian. Both i386 and armel are effectively dead: i386 is reduced to a partial architecture only for backwards compatibility reasons, and armel has been removed entirely from development of the next release.
zozbot234 1 day ago||
What you said is correct wrt. official support, but Debian also has an unofficial ports infrastructure that could be repurposed towards enabling Debian for older architecture variants.
mwhudson 1 day ago|||
> Could this be used to address the pesky armel vs. armel+hardfloat vs. armhf issue

No, because those are different ABIs (and a debian architecture is really an ABI)

> the issue of i486 vs. i586 vs. i686 vs. the many varieties of MMX and SSE extensions for 32-bit?

It could be used for this but it's about 15 years too late to care surely?

> (There is some older text in the Debian Wiki https://wiki.debian.org/ArchitectureVariants but it's not clear if it's directly related to this effort)

Yeah that is a previous version of the same design. I need to get back to talking to Debian folks about this.

bobmcnamara 1 day ago||
This would allow mixing armel and softvfp ABIs, but not hard float ABIs, at least across compilation unit boundaries (that said, GCC never seems to optimize ABI bottlenecks within a compilation unit anyway)
watersb 1 day ago||
Over the past year, Intel has pulled back from Linux development.

Intel has reduced its number of employees, and has lost lots of software developers.

So we lost Clear Linux, their Linux distribution that often showcased performance improvements due to careful optimization and utilization of microarchitectural enhancements.

I believe you can still use the Intel compiler, icc, and maybe see some improvements in performance-sensitive code.

https://clearlinux.org/

"It was actively developed from 2/6/2015-7/18/2025."

dooglius 1 day ago|
icc was discontinued FWIW. The replacement, icx, is AIUI just clang plus some proprietary plugins
watersb 18 hours ago||
I wonder how this relates to Intel's "One API", which extends a single C code base across the various CPU targets (such as the core ALU, base vector units, AVX-512, NPU) and Intel GPU accelerators.

Not the same thing, or perhaps an augmentation of Intel performance libraries (which required C++, I believe).

Sure, harmonizing all of this may have suggested that there were too many software teams. But device drivers don't write themselves, and without feedback from internal software developers, you can't validate your CPU designs.

Hasz 1 day ago||
Getting a 1% across-the-board, general-purpose improvement might sound small, but it is quite significant. Happy to see Canonical invest more heavily in performance and correctness.

Would love to see which packages benefited the most in terms of percentile gain and install base. You could probably back out a kWh/tons of CO2 saved metric from it.

dfc 2 days ago||
> you will not be able to transfer your hard-drive/SSD to an older machine that does not support x86-64-v3. Usually, we try to ensure that moving drives between systems like this would work. For 26.04 LTS, we’ll be working on making this experience cleaner, and hopefully provide a method of recovering a system that is in this state.

Does anyone know what the plans are to accomplish this?

dmoreno 2 days ago||
If I were them I would make sure the v3 instructions are not used until late in the boot process, and add an apt command that makes sure all installed packages match the subarchitecture of the running system, reinstalling as necessary.

But that does not sound like a simple solution for non-technical users.

Anyway, non-technical users moving an installation to another, older computer? That sounds unusual.

mwhudson 1 day ago|||
I am probably going to be the one implementing this and I don't know what I am going to do yet! At the very least we need the failure mode to be better (currently you get an OOPS when the init from the initrd dies due to an illegal instruction exception)
theandrewbailey 3 days ago||
A reference for x86-64 microarchitecture levels: https://en.wikipedia.org/wiki/X86-64#Microarchitecture_level...

x86-64-v3 is AVX2-capable CPUs.

mananaysiempre 20 hours ago|
Right, though compared to what one generally thinks of as an “AVX2-compatible” CPU, it curiously omits AES-NI and CLMUL (both relevant to e.g. AES-GCM). Yes, they are not technically part of AVX2, but they are present in all(?) the qualifying Intel and AMD CPUs (like many other technically-not-AVX2 stuff that did get included, like BMI or FMA3).
benatkin 2 days ago||
There's an unofficial repo for ArchLinux: https://wiki.archlinux.org/title/Unofficial_user_repositorie...

> Description: official repositories compiled with LTO, -march=x86-64-vN and -O3.

Packages: https://status.alhp.dev/

rock_artist 1 day ago|
So if I got it right, this is mostly a way to have branches within a specific release for various levels of CPUs and their support for SIMD and other modern opcodes.

And if I have it right, the main advantage should come with the package manager and open source software, where the compiled binaries would be branched to take advantage of newer CPU features.

Still, this would be noticeable mostly for apps that benefit from those features, such as audio DSP, or, as mentioned, SSL and crypto.

jeffbee 1 day ago|
I would expect compression, encryption, and codecs to have the least noticeable benefit because these already do runtime dispatch to routines suited to the CPU where they are running, regardless of the architecture level targeted at compile time.
WhyNotHugo 1 day ago||
OTOH, you can remove the runtime dispatching logic entirely if you compile separate binaries for each architecture variant.

Especially in the binaries for the newest variant, since they can entirely drop the conditionals/branching for all the older variants.

jeffbee 1 day ago||
That's a lot of surgery. These libraries do not all share one way to do it. For example zstd will switch to static BMI2 dispatch if it was targeting Haswell or later at compile time, but other libraries don't have that property and will need defines.
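
Something like this preprocessor pattern, sketched here with invented names rather than zstd's actual macros: when the whole build already targets a level that guarantees the feature, the runtime branch disappears.

    #include <stddef.h>
    #include <stdint.h>

    static size_t decode_generic(const uint8_t *src, size_t n) {
        size_t s = 0;
        for (size_t i = 0; i < n; i++) s += src[i];
        return s;
    }

    __attribute__((target("bmi2")))
    static size_t decode_bmi2(const uint8_t *src, size_t n) {
        size_t s = 0;
        for (size_t i = 0; i < n; i++) s += src[i];
        return s;
    }

    size_t decode(const uint8_t *src, size_t n) {
    #if defined(__BMI2__)
        /* Built with -march=x86-64-v3 or later: BMI2 is guaranteed. */
        return decode_bmi2(src, n);
    #else
        /* Portable build: keep the runtime check. */
        __builtin_cpu_init();
        return __builtin_cpu_supports("bmi2") ? decode_bmi2(src, n)
                                              : decode_generic(src, n);
    #endif
    }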