Posted by rurban 4 hours ago

BarraCUDA: Open-source CUDA compiler targeting AMD GPUs (github.com)
136 points | 49 comments
h4kunamata 2 hours ago|
>Requirements

>A will to live (optional but recommended)

>LLVM is NOT required. BarraCUDA does its own instruction encoding like an adult.

>Open an issue if there's anything you want to discuss. Or don't. I'm not your mum.

>Based in New Zealand

The Oceania sense of humor is like no other, haha.

The project owner strongly emphasizes the no-LLM dependency; in a world of AI slop, this is so refreshing.

The sheer amount of knowledge required to even start such a project is really something else, and proving the manual wrong at the machine-language level is something else entirely.

When it comes to AMD, "no CUDA support" is the biggest "excuse" to join NVIDIA's walled garden.

Godspeed to this project; the more competition there is, the less NVIDIA can keep destroying PC parts pricing.

querez 2 hours ago||
> The project owner strongly emphasizes the no-LLM dependency; in a world of AI slop, this is so refreshing.

The project owner is talking about LLVM, a compiler toolkit, not an LLM.

kmaitreys 1 hour ago||
It's actually quite easy to spot whether LLMs were used or not.

A very small total number of commits, and AI-like documentation and code comments.

But even if LLMs were used, the overall project does feel steered by a human, given some decisions like not using bloated build systems. If this actually works then that's great.

butvacuum 1 hour ago|||
Since when is squashing noisy commits an AI activity instead of good manners?
natvert 53 minutes ago|||
Says the clawdbot
ekianjo 5 minutes ago|||
LLVM, nothing to do with LLMs
wild_egg 2 hours ago|||
This project most definitely has significant AI contributions.

Don't care though. AI can work wonders in skilled hands, and I'm looking forward to using this project.

ZaneHam 1 hour ago||
Hello! I didn't realise my project was posted here but I can actually answer this.

I do use LLMs (specifically Ollama), particularly for test summarisation and writing up some boilerplate, and I've also used Claude/ChatGPT on the web when my free tier allows. It's good for when I hit problems such as AMD SOP prefixes being different from what I expected.
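
For context, "does its own instruction encoding" mostly means packing bitfields by hand against the ISA guide. A simplified sketch of the idea (not the actual BarraCUDA code; the SOP1 field layout follows the public RDNA3 ISA guide, and the opcode value is an assumption worth double-checking against it):

    #include <stdint.h>
    #include <stdio.h>

    /* Hand-encode one RDNA3 scalar instruction. SOP1 format per the
     * public ISA guide: ENCODING [31:23] = 0b101111101, SDST [22:16],
     * OP [15:8], SSRC0 [7:0]. Opcode values are assumptions: check the
     * guide, then check the hardware, since the guide isn't always right. */
    static uint32_t encode_sop1(uint32_t op, uint32_t sdst, uint32_t ssrc0)
    {
        return (0x17Du << 23)            /* SOP1 encoding bits     */
             | ((sdst  & 0x7Fu) << 16)   /* scalar destination reg */
             | ((op    & 0xFFu) << 8)    /* opcode                 */
             |  (ssrc0 & 0xFFu);         /* scalar source operand  */
    }

    int main(void)
    {
        /* s_mov_b32 s0, s1, assuming S_MOV_B32 is SOP1 opcode 0 on RDNA3 */
        printf("0x%08X\n", encode_sop1(0, 0, 1));  /* expect 0xBE800001 */
        return 0;
    }

Multiply that by every instruction format on the card and you have a backend with no LLVM anywhere in sight.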

magicalhippo 1 hour ago|||
> Oceania sense of humor is like no other haha

Reminded me of the beached whale animated shorts[1].

[1]: https://www.youtube.com/watch?v=ezJG0QrkCTA&list=PLeKsajfbDp...

colordrops 31 minutes ago|||
I'm still blown away that AMD hasn't made this their top priority. I've said this for years: if I were AMD, I would spend billions upon billions if necessary to make a CUDA compatibility layer for AMD. It would certainly still pay off, and it almost certainly wouldn't cost that much.
andy_ppp 26 minutes ago||
It's a moving target. Honestly, just get PyTorch working fully (loads of stuff just doesn't work on AMD hardware), and make it work on all graphics cards from a certain generation onwards. The support matrix of GFX cards, architectures, and software is quite astounding, but yes, they should at least have that working, plus equivalent custom kernels.
dboreham 32 minutes ago|||
Unrelated: just returned from a month in NZ. Amazing people.
samrus 1 hour ago|||
> >LLVM is NOT required. BarraCUDA does its own instruction encoding like an adult.

> The project owner strongly emphasizes the no-LLM dependency; in a world of AI slop, this is so refreshing.

"Has tech literacy deserted the tech insider websites of silicon valley? I will not beleove it is so. ARE THERE NO TRUE ENGINEERS AMONG YOU?!"

bigyabai 1 hour ago||
> and prove the manual wrong on the machine language level

I'll be the party pooper here, I guess. The manual is still right, and no amount of reverse-engineering will fix the architecture AMD chose for their silicon. It's absolutely possible to implement a subset of CUDA features on a raster GPU, but we've been doing that since OpenCL and CUDA is still king.

The best thing the industry can do is converge on a GPGPU compute standard that doesn't suck. But Intel, AMD and Apple are all at-odds with one another so CUDA's hedged bet on industry hostility will keep paying dividends.

piker 3 hours ago||
> # It's C99. It builds with gcc. There are no dependencies.

> make

Beautiful.

parlortricks 1 hour ago|
You gotta love it, simple and straight to the point.
esafak 2 hours ago||
Wouldn't it be funny and sad if a bunch of enthusiasts pulled off what AMD couldn't :)
bri3d 2 hours ago||
The lack of CUDA support on AMD is absolutely not that AMD "couldn't" (although I certainly won't deny that their software has generally been lacking); it's clearly a strategic decision.

Supporting CUDA on AMD would only build a bigger moat for NVidia; there's no reason to cede the entire GPU programming environment to a competitor. And indeed, this was a good gamble: as time goes on, CUDA has become less and less essential or relevant.

Also, if you want a practical path towards drop-in replacing CUDA, you want ZLUDA; this project is interesting and kind of cool, but the limitation to a C subset and the lack of replacement libraries (BLAS, DNN, etc.) make it not particularly useful in comparison.
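
To make the library point concrete: a lot of production CUDA code is mostly calls into NVIDIA's libraries rather than hand-written kernels. A small illustration using standard cuBLAS calls (nothing project-specific here); even if a compiler handles your kernels, something still has to stand in for this on the AMD side:

    #include <stdio.h>
    #include <cuda_runtime.h>
    #include <cublas_v2.h>

    int main(void)
    {
        /* Multiply two 2x2 matrices (column-major) with cuBLAS. */
        const int n = 2;
        const float h_A[] = {1, 2, 3, 4};  /* A = [1 3; 2 4] */
        const float h_B[] = {5, 6, 7, 8};  /* B = [5 7; 6 8] */
        float h_C[4] = {0};
        float *d_A, *d_B, *d_C;
        cudaMalloc((void **)&d_A, sizeof(h_A));
        cudaMalloc((void **)&d_B, sizeof(h_B));
        cudaMalloc((void **)&d_C, sizeof(h_C));
        cudaMemcpy(d_A, h_A, sizeof(h_A), cudaMemcpyHostToDevice);
        cudaMemcpy(d_B, h_B, sizeof(h_B), cudaMemcpyHostToDevice);

        cublasHandle_t handle;
        cublasCreate(&handle);
        const float alpha = 1.0f, beta = 0.0f;
        /* C = alpha * A * B + beta * C */
        cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                    &alpha, d_A, n, d_B, n, &beta, d_C, n);
        cublasDestroy(handle);

        cudaMemcpy(h_C, d_C, sizeof(h_C), cudaMemcpyDeviceToHost);
        printf("C[0][0] = %g\n", h_C[0]);  /* 1*5 + 3*6 = 23 */
        cudaFree(d_A); cudaFree(d_B); cudaFree(d_C);
        return 0;
    }

A drop-in replacement has to map those library calls onto ROCm equivalents too, which is the gap ZLUDA tries to fill.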

enlyth 1 hour ago|||
Even disregarding CUDA, NVidia has had like 80% of the gaming market for years without any signs of this budging any time soon.

When it comes to GPUs, AMD just has the vibe of a company that basically shrugged and gave up. It's a shame because some competition would be amazing in this environment.

cebert 31 minutes ago||
What about PlayStation and Xbox? They use AMD graphics and are a substantial user base.
ekianjo 1 minute ago||
Because AMD has the APU category that mixes x86_64 cores with powerful integrated graphics. Nvidia does not have that.
fdefitte 1 hour ago|||
Agreed on ZLUDA being the practical choice. This project is more impressive as a "build a GPU compiler from scratch" exercise than as something you'd actually use for ML workloads. The custom instruction encoding without LLVM is genuinely cool though, even if the C subset limitation makes it a non-starter for most real CUDA codebases.
h4kunamata 2 hours ago|||
Many open-source projects turned out to be far better than their proprietary counterparts because open source doesn't have to please shareholders.

What sucks is that such projects at some point become too big and make so much noise that big tech companies buy them, and everybody gets fuck all.

All it takes to beat a proprietary walled garden is somebody with the knowledge and the will to make things happen. Linus with Git and Linux is the perfect example of it.

Fun fact: BitKeeper said fuck you to the Linux community in 2005, and Linus created Git within 10 days.

BitKeeper made their code open source in 2016, but by then nobody knew who they were lol

So give it time :)

guerrilla 1 hour ago|||
> couldn't

More like wouldn't* most of the time.

Well, isn't that the case with a few other things? FSR4 on older cards is one example right now: AMD still won't officially support it. I think they will though; there's too much negativity around it. Half the posts on r/AMD are people complaining about it.

DiabloD3 1 hour ago||
Because FSR4 is currently slower on RDNA3 due to the lack of FP8 support in hardware, and switching to FP16 makes it almost as slow as native rendering in a lot of cases.

They're working the problem, but slandering them over it isn't going to make it come out any faster.

guerrilla 1 hour ago||
> Because FSR4 is currently slower on RDNA3 due to the lack of FP8 support in hardware, and switching to FP16 makes it almost as slow as native rendering in a lot of cases.

It works fine.

> They're working the problem, but slandering them over it isn't going to make it come out any faster.

You have insider info everyone else doesn't? They haven't said any such thing, last I checked. If that were true, they should have said so.

wmf 2 hours ago||
We have HIP at home.
ByThyGrace 49 minutes ago||
How feasible is it for this to target earlier AMD archs, down to even GFX1010, the original RDNA series, aka the poorest of the GPU poor?
gzread 1 hour ago||
Nice! It was only a matter of time until someone broke Nvidia's software moat. I hope Nvidia's lawyers don't know where you live.
bravetraveler 2 hours ago||
> No HIP translation layer.

Storage capacity everywhere rejoices

yodon 2 hours ago||
<checks stock market activity>
7speter 29 minutes ago||
Will this run on cards that don't have ROCm/latest ROCm support? Because if not, it's only gonna be a tiny subset of a tiny subset of cards that this will allow CUDA to run on.
whizzter 3 hours ago||
Not familiar with CUDA development, but doesn't CUDA support C++? Skipping Clang/LLVM and going "pure" C seems quite limiting in that case.
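
As far as I understand, kernels like the following (an illustrative snippet, not from any particular codebase) are everyday CUDA, and templates alone put them outside any C-only subset:

    #include <cstdio>

    // A bog-standard templated CUDA kernel: y = a*x + y.
    // Templates and the <<<...>>> launch syntax live on the C++ side.
    template <typename T>
    __global__ void saxpy(int n, T a, const T *x, T *y)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main()
    {
        const int n = 1 << 20;
        float *x, *y;
        cudaMallocManaged(&x, n * sizeof(float));
        cudaMallocManaged(&y, n * sizeof(float));
        for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

        // Instantiate for float; a C-only frontend has no notion of templates.
        saxpy<float><<<(n + 255) / 256, 256>>>(n, 3.0f, x, y);
        cudaDeviceSynchronize();

        printf("y[0] = %f\n", y[0]);  // expect 5.000000
        cudaFree(x); cudaFree(y);
        return 0;
    }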
hackyhacky 2 hours ago|||
Honestly, I'm not sure how good LLVM's support for AMD GFX11 machine code is. It's a pretty niche backend; even if it exists, it may not produce ideal output. And it's a huge dependency.
h4kunamata 2 hours ago||
Real developers never depended on AI to write good-quality code; in fact, the amount of slop code flying left and right is due to LLMs.

Open-source projects are being inundated with PRs from AIs; not depending on them doesn't limit a project.

That project owner seems pretty knowledgeable about what is going on, and keeping a project free of dependencies is not an easy skill. Many developers would have written the code with tons of dependencies and copy/paste from an LLM. Some call the latter coding :)

gsora 2 hours ago|||
LLVM and LLM are not the same thing
brookman64k 2 hours ago|||
LLVM (Low Level Virtual Machine) != LLM (Large Language Model)
latchkey 1 hour ago|
Note that this targets GFX11, which is RDNA3. Great for consumer cards, but not the enterprise (CDNA) level at all. In other words, not a "CUDA moat killer".