And you can build mythologies around falsehoods to further reinforce it: "I have a legal obligation to maximize shareholder value." No, buddy, you have some very specific restrictions on your ability to sell the company to your cousin (ha!) for a handful of glass beads. You also have a legal obligation to bin your wafers the way it says on your own box, but that doesn't seem to bother you.
These days I get a machine like the excellent ASUS ProArt P16 (grab one of those before they're all gone if you can) with a little 4060 or 4070 in it that can boot up PyTorch and make sure the model will run forwards and backwards at a contrived size (something like the sketch below), and then go rent a GB200 or whatever from Latitude or someone (seriously, check out Latitude, they're great), or maybe one of those wildly competitive L40-series Fly machines (Fly whips the llama's ass like nothing since Winamp, check them out too). The GMKtec EVO-X1 is a pretty capable little ROCm inference machine for under $1,000, and its big brother is nipping at the heels of a DGX Spark for under $2k. There is good stuff out there, but it's all from non-incumbent angles.
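For the curious: that laptop smoke test is nothing fancy. A minimal sketch of the idea, with a made-up toy model and contrived sizes standing in for whatever you're actually going to train:

    import torch
    import torch.nn as nn

    # Hypothetical stand-in for the real model, shrunk to a contrived size
    # so it fits on a laptop 4060/4070 before renting the big iron.
    model = nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model=256, nhead=4, batch_first=True),
        num_layers=2,
    ).cuda()

    x = torch.randn(2, 128, 256, device="cuda")  # tiny batch, short sequence
    y = model(x)                                  # runs forwards...
    y.sum().backward()                            # ...and backwards
    torch.cuda.synchronize()
    print("forward/backward OK:", y.shape)

If that runs clean, the shapes and the training step are sane, and the only question left for the rented GB200 is scale.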
I don't game anymore, but if I did I would be paying a lot of attention to Arc; I've heard great things.
Fuck the cloud and their ancient Xeon SKUs that cost more than Latitude charges for 5GHz EPYC. Fuck the NVIDIA gaming retail rat race; it's an electrical as well as a moral hazard in 2025.
It's a shame we all have to be tricky to get what used to be a halfway fair deal 5-10 years ago (and 20 years ago they passed a HUGE part of the scaling bonanza down to the consumer), but it's possible to compute well in 2025.
But I do spend a lot of effort finding good deals on modern-ass compute. This is the shit I use to get a lot of performance on a budget.
Will people pay you to post on HN? How do I sign up?
Dude, no one talks about this and it drives me up the wall. The only way to guarantee modern CPUs from any cloud provider is to explicitly provision really new instance types. If you use any higher-level abstracted services (Fargate, Cloud Run, Lambda, whatever) you get Salvation Army second-hand CPUs from 15 years ago; you're billed by the second, so the slower, older CPUs screw you there; and you pay a 30%+ premium over the lower-level instances because it's a "managed service". It's insane and extremely sad that so many customers put up with it.
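To make "explicitly provision really new instance types" concrete, here's a minimal boto3 sketch that pins a current-generation EPYC family instead of taking whatever silicon the scheduler hands you. The AMI ID and region are placeholders, and m7a is just one example of a recent family:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Pin a modern instance family explicitly (m7a = 4th-gen EPYC).
    # Abstracted services pick the hardware for you; this does not.
    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
        InstanceType="m7a.large",
        MinCount=1,
        MaxCount=1,
    )
    print(resp["Instances"][0]["InstanceId"])

The point is that the instance type lives in your config, reviewable and pinnable, instead of being an implementation detail of a "managed service".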
It's also way faster to deploy and easier to operate now. And mad global: I've needed to do it all over the world (in a lot of places the shit works flawlessly, and you can get Ryzen SKUs for nothing).
Protip: burn a partition with Ubuntu 24.04 LTS (it's the default on everything) and use that as "premium IPMI", even if you run Ubuntu as your main OS. You can always boot into a known-perfect thing with all the tools to tweak whatever. If I have to even restart one, I just image it; it's faster than launching a VM on EC2.
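If the box uses GRUB, the one-shot hop into that rescue partition can even be scripted. A minimal sketch, assuming GRUB_DEFAULT=saved is configured and using a made-up menu entry title:

    import subprocess

    # Boot the rescue partition on the NEXT boot only, then fall back
    # to the normal default (requires GRUB_DEFAULT=saved in /etc/default/grub).
    RESCUE_ENTRY = "Ubuntu 24.04 LTS (rescue)"  # hypothetical menu entry title

    subprocess.run(["sudo", "grub-reboot", RESCUE_ENTRY], check=True)
    subprocess.run(["sudo", "reboot"], check=True)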
The spoiled-gamer mentality is getting old for those of us who actually work daily in GPGPU across industries, develop with RTX Kit, do AI research, etc.
Yes, they've had some marketing and technical flubs, as any giant publicly traded company will, but their balance of research-driven development alongside corporate profit necessities is unmatched.
Also, nobody ever said they hate their researchers.
Whoa, the stuff covered in the rest of the post is just as egregious. Wow! Maybe it's time to figure out which AMD model compares performance-wise and sell this thing, jeez.
In other news, I hope Intel pulls their thumb out of their ass, because AMD is crushing it, and that's gonna end the same way.
Not for me. I prefer Intel's offerings: open and Linux-friendly.
I even hope they'll release the next-gen RISC-V boards with Intel graphics.
NVIDIA Keynote from the upcoming RISC-V Summit China: "Enabling RISC-V application processors in NVIDIA compute platforms"
I feel like this is a misunderstanding, though I admit I'm splitting hairs here. DLSS is a form of TAA, and so are FSR and most other modern upscalers. You generally don't need an extra antialiasing pipeline if you're getting an artificially supersampled image.
We've seen this technique variably developed across the lifespan of realtime raster graphics: first with checkerboard rendering, then TAA, and now DLSS/frame generation. It has upsides and downsides, and some TAA implementations were actually really good for their time.
MSAA only helps with geometric edges; shader aliasing can be combatted with prefiltering, but even then it's difficult to get rid of it completely. MSAA also needs beefy multisample intermediate buffers, which makes it pretty much a non-starter in heavily deferred rendering pipelines, which throw away coverage information to fit their framebuffer budget. On top of that, the industry moved to stochastic effects for rendering all kinds of things that were too expensive before, the latest being actual realtime path tracing. I know people moan about TAA and DLSS, but doing realtime path tracing at 4K is sort of nuts, really. I still consider it a bit of a miracle we can do it at all.
Personally, I wish there were more research by big players into things like texture-space lighting, which makes shading aliasing mostly go away, plays nice with alpha blending, and would make MSAA viable again. The hard part is shading only the stuff you see and not wasting texels.
SSAA is an even older technique than MSAA, but the results are not visually the same as just having a really high-DPI screen with no AA.
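For anyone fuzzy on the term: SSAA is just "render at N× resolution, then filter down". A minimal numpy sketch of the resolve step; the shapes and simple box filter here are illustrative, real resolves often use better filters:

    import numpy as np

    def ssaa_resolve(img: np.ndarray, factor: int = 2) -> np.ndarray:
        """Average each factor x factor block of the high-res render
        down to one output pixel (a simple box-filter resolve)."""
        h, w, c = img.shape
        assert h % factor == 0 and w % factor == 0
        return img.reshape(h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))

    # e.g. a 4K render resolved down to 1080p (2x2 supersampling)
    hi_res = np.random.rand(2160, 3840, 3).astype(np.float32)
    frame = ssaa_resolve(hi_res, factor=2)  # -> (1080, 1920, 3)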
Ray tracing? Temporal accumulation and denoising. Irradiance cache? Temporal accumulation and denoising. Most modern lighting techniques cannot be computed in time within a single frame. Add to that the fact that deferred or hybrid rendering makes implementing MSAA anywhere between "miserable" and "impossible", and you have the situation we're in today.
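At its core, the temporal accumulation everyone leans on is just an exponential moving average of noisy per-frame samples; real renderers add reprojection through motion vectors and history rejection, which this minimal sketch (with an arbitrary blend factor) omits:

    import numpy as np

    ALPHA = 0.1  # blend factor: lower = more history, smoother, more ghosting

    def accumulate(history: np.ndarray, current: np.ndarray) -> np.ndarray:
        """Blend this frame's noisy samples into the running history.
        Real implementations also reproject history via motion vectors
        and reject stale samples; this sketch skips both."""
        return (1.0 - ALPHA) * history + ALPHA * current

    # noisy 1-sample-per-pixel frames converge toward the mean over time
    history = np.zeros((1080, 1920, 3), dtype=np.float32)
    for _ in range(60):
        noisy_frame = np.random.rand(1080, 1920, 3).astype(np.float32)
        history = accumulate(history, noisy_frame)

Spreading the cost over many frames like this is exactly why these techniques can't produce a clean image from any single frame in isolation.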
You need a 4K rendering resolution, at least. Modern effects look stunning at that res.
Unfortunately, nothing runs well at 4K with all the effects on.
Customers don’t matter, the company matters.
Competition sorts out that kind of attitude quick smart, but AMD never misses a chance to copy NVIDIA's strategy in every way, and Intel is well behind.
So for now, you’ll eat what Jensen feeds you.
You are safe.