Posted by PaulHoule 1/25/2026

Implementing a tiny CPU rasterizer (2024) (lisyarus.github.io)
118 points | 27 comments
delta_p_delta_x 7 days ago|
This is a great resource. Some others along the same lines:

TinyRenderer: https://haqr.eu/tinyrenderer/

ScratchAPixel: https://www.scratchapixel.com/index.html

3D Computer Graphics Programming by Pikuma (paid): https://pikuma.com/courses/learn-3d-computer-graphics-progra...

Ray-tracing:

Ray Tracing in One Weekend: https://raytracing.github.io/

Ray Tracing Gems: https://www.realtimerendering.com/raytracinggems/

Physically Based Rendering, 4th Edition: https://pbr-book.org/

Both:

Computer Graphics from Scratch: https://gabrielgambetta.com/computer-graphics-from-scratch/

I'll also link a comment[1] I made a while back about learning 3D graphics. There's no better teacher than manually implementing the rasterisation and ray-tracing pipelines.

[1]: https://news.ycombinator.com/item?id=46410210#46416135

ggambetta 7 days ago||
May I add Computer Graphics From Scratch, which covers both rasterization and raytracing? https://gabrielgambetta.com/computer-graphics-from-scratch/i...

I have to admit I'm quite surprised by how eerily similar this website feels to my book. The chapter structure, the sequencing of the concepts, the examples and diagrams, even the "why" section (mine https://gabrielgambetta.com/computer-graphics-from-scratch/0... - theirs https://lisyarus.github.io/blog/posts/implementing-a-tiny-cp...)

I don't know what to make of this. Maybe there's nothing to it. But I feel uneasy :(

delta_p_delta_x 7 days ago|||
Ah yes, great book; thanks for pointing it out. Added to the list.

As for the similarity, I think the sections you've highlighted are broadly similar, but I can't detect any phrase-for-phrase copy-pasting, nor the kind of find-and-replace rewording typical of an LLM or a thesaurus pass. I feel that the topic layout and the motivations of any tutorial or course covering the same subject matter will eventually converge on the same broad ideas.

The website's sequence of steps is also a bit different from your book's. And most telling, the code, diagrams, and maths on the website are all different (copied assets are usually an instant giveaway of plagiarism). You've got pseudocode; the website makes extensive use of the C++ standard library.

If it were me, I might rest a little easier :)

lisyarus 7 days ago||
Hi! Blog post author here. I have heard of the "Computer Graphics from Scratch" book before, but I haven't read it myself, so it would be quite hard for me to plagiarize it. I guess some similarities are expected when talking about a well-established topic.
globalnode 7 days ago|||
It's a standard pipeline, so everything from everyone will look roughly similar; your book likely looks something like previous work too. I wouldn't worry about it. PS: I really loved your web tutorials back in the day.
gopla 7 days ago|||
An additional resource on rasterisation, using the scan conversion technique:

https://kristoffer-dyrkorn.github.io/scanline-rasterizer/

gustavopezzi 6 days ago|||
Thanks for mentioning pikuma. :-)

The 3D software rendering course is still the most popular one from our school, even after all these years. That really surprises me, because we spend a lot of time talking about some old platforms and techniques (MS-DOS, Amiga, ST, Archimedes, etc.). But it's fun to see how much doing things manually helps students understand the math and the data movement that the GPU automates and vectorizes in modern systems.

Levitating 7 days ago||
I can vouch for Scratchapixel; it taught me the basics of 3D projection.
Sohcahtoa82 7 days ago||
I tried doing this in Python a while ago. It did not go well, and it really showed how SLOW Python is.

Even with just a 1280x720 window, setting every pixel to a single color by writing values into a byte array and then using a PyGame function to hand it the full frame to draw, I maxed out at around 10 fps. I tried so many things and simply could not get it any faster.

feelamee 7 days ago||
> Triangles are easy to rasterize

Sure, rasterizing a triangle is not so hard, but... you know, rasterizing a rectangle is far, far easier.

qingcharles 7 days ago||
Rasterizing triangles is a nightmare, especially if performance is a goal. One of the biggest issues is getting abutting triangles to render so you don't have overlapping pixels or gaps.

I did this stuff for a living 30 years ago. Just this week I had Deep Think create a 3D engine with a triangle rasterizer in 16-bit x86 for the original IBM XT.

ralferoo 7 days ago|||
It's fairly easy to get triangle rasterisation performant if you think about the problem hard enough.

Here's an implementation I wrote for the PS3 SPU many moons ago: https://github.com/ralferoo/spugl/blob/master/pixelshaders/t...

That does perspective-correct texture mapping, and from a quick count of the instructions in the main loop it comes to approximately 44 cycles per 8 pixels.

The half-line equation approach used there also doesn't suffer from overlapping pixels or gaps, as long as shared vertices use exactly the same coordinates in both triangles and you use fixed-point arithmetic.

The key trick is to rework each line equation so that it's effectively x*dx + y*dy + C = 0. You can then evaluate A = x*dx + y*dy + C at the top left of the square that encloses the triangle. For every pixel to the right you just add dx, and for every pixel down you just add dy. The sign bit indicates whether the pixel is or isn't inside that side of the triangle, and you can AND/OR the 3 sides' sign bits together to determine whether a pixel is inside or outside the triangle. (Whether to use AND or OR depends on how you've decided to interpret the sign bit.)
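
To make that concrete, here's a rough, untested C++ sketch of that incremental half-line test (the names are made up for illustration). It uses plain integer pixel coordinates, no sub-pixel precision, and no fill rule, so pixels exactly on a shared edge can still be claimed by both triangles:

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    struct Vec2i { int x, y; };

    // One half-line ("edge") equation E(x,y) = A*x + B*y + C.  With the
    // winding assumed here (clockwise in screen space, y pointing down),
    // E >= 0 on the interior side; flip the test for the other winding.
    struct Edge {
        int64_t A, B, C;
        static Edge from(Vec2i p0, Vec2i p1) {
            Edge e;
            e.A = p0.y - p1.y;  // added when stepping one pixel right
            e.B = p1.x - p0.x;  // added when stepping one pixel down
            e.C = int64_t(p0.x) * p1.y - int64_t(p0.y) * p1.x;
            return e;
        }
        int64_t eval(int x, int y) const { return A * x + B * y + C; }
    };

    // Fill a clockwise-wound triangle into a width*height RGBA framebuffer.
    void rasterize(Vec2i v0, Vec2i v1, Vec2i v2, uint32_t colour,
                   std::vector<uint32_t>& fb, int width, int height) {
        const Edge e01 = Edge::from(v0, v1);
        const Edge e12 = Edge::from(v1, v2);
        const Edge e20 = Edge::from(v2, v0);

        const int minX = std::max(0, std::min({v0.x, v1.x, v2.x}));
        const int maxX = std::min(width  - 1, std::max({v0.x, v1.x, v2.x}));
        const int minY = std::max(0, std::min({v0.y, v1.y, v2.y}));
        const int maxY = std::min(height - 1, std::max({v0.y, v1.y, v2.y}));

        // Evaluate each edge once at the top-left of the bounding box...
        int64_t r0 = e01.eval(minX, minY);
        int64_t r1 = e12.eval(minX, minY);
        int64_t r2 = e20.eval(minX, minY);

        for (int y = minY; y <= maxY; ++y) {
            int64_t w0 = r0, w1 = r1, w2 = r2;
            for (int x = minX; x <= maxX; ++x) {
                // ...then inside/outside is just a sign test; OR-ing the three
                // values keeps the sign bit set if any of them is negative.
                if ((w0 | w1 | w2) >= 0)
                    fb[y * width + x] = colour;
                w0 += e01.A; w1 += e12.A; w2 += e20.A;  // step right
            }
            r0 += e01.B; r1 += e12.B; r2 += e20.B;      // step down
        }
    }

A real rasteriser would evaluate the edges in fixed point at pixel centres and add a tie-break rule for shared edges; more on that further down the thread.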

The calculation of all the values consumed by the rasteriser (C, dx, dy) for all 3 sides of a triangle, given the 3 vertex coordinates, is here: https://github.com/ralferoo/spugl/blob/db6e22e18fdf3b4338390...

Some of the explanations I wrote down while trying to understand barycentric coordinates (from which this stuff kind of just falls out) ended up here: https://github.com/ralferoo/spugl/blob/master/doc/ideas.txt

(Apologies if my memory/terminology is a bit hazy on this - it was a very long time ago now!)

IIRC, in terms of performance, this software implementation filling a 720p screen with perspective-correct texture-mapped triangles could hit 60Hz using only 1 of the 7 SPUs, although the triangles weren't overlapping so there was no overdraw. The biggest problem was actually saturating the memory bandwidth, because I wasn't caching the texture data, since an unconditional DMA fetch from main memory always completed before the values were needed later in the loop.

qingcharles 7 days ago|||
It's definitely not "fairly easy" once you get into perspective-correct texture mapping on the triangles, and making sure the pixels along the diagonal of a quad aren't all janky, leaving the texture with an obvious line across it. Then you add on whatever methods you're using to light/shade it. It gets horrible really quickly. To me, at least!
ralferoo 7 days ago||
The first link I posted, specifically lines 200-204 ( https://github.com/ralferoo/spugl/blob/master/pixelshaders/t... ), isn't quite what I remembered, as this seems to be doing a texture-correct visualisation of s, t, k used for calculating mipmap levels rather than actually doing the texture fetch - you'll have to forgive me, it's been 17 years since I looked at the code, so I forgot where everything is.

It looks like the full texture mapper including mipmap levels is only in the OLD version of the code here: https://github.com/ralferoo/spugl/blob/master/old/shader.c

This does full perspective-correct texture mapping, including mipmapping, and then effectively does trilinear filtering: it samples the 4 nearest pixels from each of 2 mipmap levels, blends both sets of 4 pixels, and then interpolates between the mipmap levels.

But anyway, to do any interpolation perspective-correctly, you interpolate 1/w across the triangle, along with each attribute divided by w (r/w, g/w, b/w for flat colours, or u/w, v/w for texture coords). You then need 1 reciprocal per pixel to recover w from the interpolated 1/w, and you multiply all the interpolated parameters by that.
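
As a rough sketch in plain floats (names are illustrative; in a real implementation the per-vertex divisions by w are hoisted into triangle setup, so only the reciprocal is left per pixel):

    // Per-vertex data after projection: clip-space w plus the attributes to
    // interpolate (just texture coordinates u, v here).
    struct Vertex { float x, y, w, u, v; };

    // b0, b1, b2 are the barycentric weights for the pixel (e.g. the three
    // edge-function values divided by twice the triangle area, summing to 1).
    // The quantities that vary linearly in screen space are u/w, v/w and 1/w,
    // so interpolate those and recover perspective-correct u, v with one divide.
    inline void interpolateUV(const Vertex& v0, const Vertex& v1, const Vertex& v2,
                              float b0, float b1, float b2,
                              float& u, float& v) {
        float invW   = b0 / v0.w        + b1 / v1.w        + b2 / v2.w;
        float uOverW = b0 * v0.u / v0.w + b1 * v1.u / v1.w + b2 * v2.u / v2.w;
        float vOverW = b0 * v0.v / v0.w + b1 * v1.v / v1.w + b2 * v2.v / v2.w;
        float w = 1.0f / invW;   // the one reciprocal per pixel
        u = uOverW * w;
        v = vOverW * w;
    }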

In terms of the "obvious line across it", it could be that you're just not clamping u and v between 0 and 1 (or whatever texture coordinates you're using), or that you're clamping rather than wrapping a wrapped texture. And if you're not doing mipmapping and are just doing nearest-pixel sampling on a high-res texture, then you will get sparklies.

I've got a very old and poor quality video here, and it's kind of hard to see anything because it was filmed using a phone pointing at the screen: https://www.youtube.com/watch?v=U5o-01s5KQw I don't have anything newer as I haven't turned on my linux PS3 for probably at least 15 years now, but even though it's low quality there's no obvious problem at the edges.

ralferoo 7 days ago|||
Forgot to add that when you're calculating these fixed values for each triangle, you also get hidden surface removal (backface culling) for free. If you have a consistent CW or CCW orientation, the sign of the base value of C tells you whether the triangle is facing towards you or away.
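
Concretely, that base value is just twice the signed area of the screen-space triangle; a rough sketch (which sign counts as front-facing depends on your winding and y direction, so treat the test below as one possible convention):

    #include <cstdint>

    // Twice the signed area of the screen-space triangle (v0, v1, v2).
    // With a consistent winding convention its sign says whether the triangle
    // faces the camera; which sign means "front" depends on whether your
    // screen-space y axis points up or down.
    inline bool isBackFacing(int x0, int y0, int x1, int y1, int x2, int y2) {
        const int64_t area2 = int64_t(x1 - x0) * (y2 - y0)
                            - int64_t(y1 - y0) * (x2 - x0);
        return area2 <= 0;   // zero-area (degenerate) triangles are culled too
    }
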
qingcharles 7 days ago||
It does, but all hell breaks loose if you start having translucent stuff going on :(
ralferoo 7 days ago||
I'm not sure I understand the problem you're having.

Obviously, if you have translucency then you need to draw those objects last, but if you're using the half-line method, two triangles that share an edge will follow that edge exactly, as long as you're using fixed-point math (and doing it properly, I guess!). A pixel will end up in one triangle or the other, not both.

The only issue would be if you wanted to do MSAA; then yes, it gets more complicated, but I'd say it's conceptually simpler to render at 2x resolution and downsample later. I didn't attempt to tackle MSAA, but one optimisation would be to write a 2x2 block from a single calculated pixel while evaluating the half-line equations at the finer resolution to determine which of the 2x2 pixels receive the contribution. Then, after you render everything, do a 2x2 downsample on the final image.
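
The final downsample itself is cheap; a rough sketch for an RGBA8 buffer rendered at exactly 2x in each dimension (names and layout are just for illustration):

    #include <cstdint>
    #include <vector>

    // Box-filter a 2x-supersampled RGBA8 buffer (2*dstW by 2*dstH pixels)
    // down to the final resolution by averaging each 2x2 block per channel.
    void downsample2x2(const std::vector<uint32_t>& src, int dstW, int dstH,
                       std::vector<uint32_t>& dst) {
        const int srcW = dstW * 2;
        for (int y = 0; y < dstH; ++y)
            for (int x = 0; x < dstW; ++x) {
                uint32_t out = 0;
                for (int shift = 0; shift < 32; shift += 8) {  // R, G, B, A
                    uint32_t sum = 0;
                    for (int dy = 0; dy < 2; ++dy)
                        for (int dx = 0; dx < 2; ++dx)
                            sum += (src[(2 * y + dy) * srcW + (2 * x + dx)] >> shift) & 0xFF;
                    out |= ((sum + 2) / 4) << shift;  // rounded average of 4 samples
                }
                dst[y * dstW + x] = out;
            }
    }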

tubs 6 days ago||||
As the other posters have shown, it's not that hard.

Most graphics specs will explicitly state how the tie-break rules work.

The key is to work in fixed point (16.8 or even 16.4 if you’re feeling spicy). It’s not “trivial” but in general you write it and it’s done. It’s not something you have to go back to over and over for weird bugs.
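
For illustration, a rough sketch of the snapping plus the usual top-left tie-break bias. It assumes a clockwise screen-space winding (y down) and an edge function E(x,y) = A*x + B*y + C that is positive strictly inside the triangle; conventions vary, so treat the signs and names as assumptions rather than gospel:

    #include <cstdint>
    #include <cmath>

    // Snap a floating-point screen coordinate to 28.4 fixed point.
    inline int32_t toFixed28_4(float v) { return int32_t(std::lround(v * 16.0f)); }

    // Per-edge bias for the top-left fill rule, for an edge p0 -> p1 of a
    // clockwise-wound (screen space, y down) triangle.  Fold the bias into C
    // once at setup and then test E >= 0 at pixel centres: a pixel that lands
    // exactly on an edge is kept only for "top" and "left" edges, so a shared
    // edge is owned by exactly one of the two triangles.
    inline int64_t topLeftBias(int32_t x0, int32_t y0, int32_t x1, int32_t y1) {
        const int32_t dx = x1 - x0, dy = y1 - y0;
        const bool topEdge  = (dy == 0 && dx > 0);  // horizontal, interior below
        const bool leftEdge = (dy < 0);             // edge heading up the screen
        return (topEdge || leftEdge) ? 0 : -1;
    }
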

Wide lines are a more fun case…

nottorp 7 days ago|||
> One of the biggest issues is getting abutting triangles to render so you don't have overlapping pixels or gaps.

> I did this stuff for a living 30 years ago.

So you did CAD or something like that? Since that matters far less in games.

qingcharles 7 days ago||
It still looks janky in games. Look at any old 8/16-bit pre-GPU game. Getting this stuff right is hard.
nottorp 6 days ago||
It does. But gamers accepted it and played the hell out of it. That's why I assumed you did CAD.
maximilianburke 7 days ago||
…as long as all points are co-planar.
nottorp 7 days ago||
With discrete GPUs pricing themselves out of the consumer space, we may actually need to switch back to software rendering :)
cyber_kinetist 7 days ago|
That's too much of a stretch, but I believe games of the next era will be optimized more for integrated GPUs (such as AMD's iGPUs in the Steam Deck and Steam Machine).

When hardware is priced out of reach for most consumers (along with a global supply chain collapse due to tariffs and a potential Taiwan invasion), a new era awaits where performance optimization is going to be critical again for games. I expect existing game engines like Unity and Unreal Engine to fall out of favour because of all the performance issues they have, and maybe we can return to a temporary "wild west" era where everyone has their own hacky solution to cram stuff into limited hardware.

nottorp 6 days ago|||
> everyone has their own hacky solution to cram stuff into limited hardware

Limited hardware gave us a lot of classic titles and fundamental game mechanics.

Off the top of my head:

Metal Gear's stealth was born because they couldn't draw enough enemy sprites to make a shooting game. Instead they drew just a few and made you avoid them.

Ico's and Silent Hill's foggy atmospheres were partly a product of their polygon budgets. They didn't have the hardware to draw distant scenery, so they hid it in fog.

bool3max 6 days ago|||
That's wishful thinking. The reality is that your iGPU will be used to decode the video stream of an Unreal game running on a dedicated GPU on some cloud server which you pay a monthly subscription fee for.
phendrenad2 7 days ago|
I'm surprised more indie games don't use software rendering, just to get a more unique style. 640x400 ought to be enough for anybody!