Posted by luthenwald 1/15/2026
Sketch synthesis is an area I've been pretty interested in lately; I'm currently exploring similar things, using CLIP to guide fitness, a natural evolution strategy to optimize the rendered results, and an implicit neural representation for the pen plotter paths (rather than a series of explicit curves/strokes).[2]
For Bézier curves in particular, iteratively constraining the search around initial matches seems key to retaining detail (see the “rep” argument in Fogleman’s work), for example in the eyes of the Vermeer portrait in the OP.
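For anyone curious what that looks like in practice, here's a rough sketch of the "constrain the search around the current match" idea (not Fogleman's actual code): hill-climb the control points with a search radius that shrinks each round, so early rounds find coarse matches and later rounds only nudge them. `render` and `error` stand in for whatever rasteriser and loss you're already using.

```python
import random

def refine(curves, target, render, error, rounds=20, radius=16.0):
    """Hill-climb Bézier control points with a shrinking search radius, so
    early rounds find coarse matches and later rounds only make small,
    detail-preserving adjustments around them."""
    best = error(render(curves), target)
    for r in range(rounds):
        step = radius * 0.8 ** r              # progressively constrain the search
        for curve in curves:                  # each curve: list of (x, y) control points
            for i, (x, y) in enumerate(curve):
                old = curve[i]
                curve[i] = (x + random.uniform(-step, step),
                            y + random.uniform(-step, step))
                score = error(render(curves), target)
                if score < best:
                    best = score              # keep the improvement
                else:
                    curve[i] = old            # revert the move
    return curves
```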
I think I've lost the code, but the initial version was a genetic algorithm that randomly placed overlapping polygons; a later, improved version used connected polygons that shared points, which was far cheaper computationally.
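To make the shared-points idea concrete (this is a guess at the representation, since the code is lost): keep one flat vertex table and have the polygons index into it, so mutating a single vertex deforms every polygon that touches it, and the genome has far fewer free parameters than independently placed overlapping polygons.

```python
import random

# Hypothetical shared-vertex genome: one flat vertex table, polygons index into it.
vertices = [(random.uniform(0, 256), random.uniform(0, 256)) for _ in range(64)]
polygons = [tuple(random.sample(range(len(vertices)), 3)) for _ in range(100)]
colours  = [tuple(random.randint(0, 255) for _ in range(3)) for _ in polygons]

def mutate(vertices, sigma=4.0):
    """Jitter one shared vertex; every polygon referencing it deforms together."""
    i = random.randrange(len(vertices))
    x, y = vertices[i]
    vertices[i] = (x + random.gauss(0, sigma), y + random.gauss(0, sigma))
    return i
```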
Another method I explored was to compose a representative image via a two-colour binarised bitmap, which provided a pixelated version of the image as a placeholder.
The core idea is that you drop the image straight into the page as a small Data URI, then fetch the high-detail version later. From the user's perspective, the page is usable very early on, even on poor connections.
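A minimal sketch of that, assuming Pillow on the build side and leaving the client-side swap to taste (`loading="lazy"` or a couple of lines of JS both work):

```python
import base64, io
from PIL import Image

def placeholder_tag(path, full_url, size=(32, 32)):
    """Inline a tiny two-colour version of the image as a data URI placeholder."""
    img = Image.open(path).resize(size).convert("1")   # 1-bit, pixelated stand-in
    buf = io.BytesIO()
    img.save(buf, format="PNG")
    uri = "data:image/png;base64," + base64.b64encode(buf.getvalue()).decode("ascii")
    # The placeholder renders immediately from the HTML itself; a small script
    # swaps in data-src once the high-detail file has arrived.
    return f'<img src="{uri}" data-src="{full_url}" style="image-rendering: pixelated">'
```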
I made a similar thing a few years ago using randomly placed translucent circles of random sizes and a fixed opacity (about 20%, as I recall). Initially the circles had unknown colours; after placing them all, for each of the R, G and B channels I used a linear programming solver to solve exactly for the intensities in that channel that would minimise the overall error (I had to use the L1 distance, since the more usual squared error couldn't be expressed in the solver). This produced some quite nice images that were also fun to animate :)
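For anyone who wants to try this, here's roughly how the per-channel L1 fit can be posed as a linear program (a sketch using scipy's HiGHS backend, not necessarily the solver used above). `A[p, i]` is the known compositing weight of circle `i` at pixel `p` (geometry and the fixed ~20% opacity baked in), `b` is that channel of the target image flattened to a float vector, and `x` holds the unknown per-circle intensities:

```python
import numpy as np
from scipy.optimize import linprog
from scipy.sparse import csr_matrix, identity, hstack, vstack

def fit_channel_l1(A, b):
    """Solve min ||A x - b||_1 subject to 0 <= x <= 1 as a linear program."""
    A = csr_matrix(A)
    m, n = A.shape                                      # m pixels, n circles
    I = identity(m, format="csr")
    # Variables: [x (n circle intensities), t (m per-pixel absolute errors)]
    c = np.concatenate([np.zeros(n), np.ones(m)])       # minimise sum(t)
    A_ub = vstack([hstack([A, -I]),                     #   Ax - b <= t
                   hstack([-A, -I])])                   # -(Ax - b) <= t
    b_ub = np.concatenate([b, -b])
    bounds = [(0.0, 1.0)] * n + [(0.0, None)] * m
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:n]                                    # per-circle intensities
```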
GPU acceleration is a really good idea; my CPU implementation of a similar idea with triangles (https://github.com/anematode/triangle-stacking) was compute-constrained for smaller images (which was fixable with good SIMD optimizations) but became bandwidth-constrained for very large images that don't fit in L2. In hindsight, a port to OpenCL would have been worthwhile.
[0] https://bottosson.github.io/posts/oklab/. The better a color space matches human perception, the easier it is to perform certain processing operations, such as converting to grayscale while preserving the perceived brightness.
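As a quick illustration of the grayscale point, here's a sketch that uses the Oklab lightness L directly as the gray value (coefficients copied from the linked post; input assumed to be sRGB in 0..1):

```python
import numpy as np

def srgb_to_linear(c):
    """Undo the sRGB transfer function (values in 0..1)."""
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def oklab_lightness(rgb):
    """Perceptual lightness L from Oklab; rgb is an (..., 3) array of sRGB values."""
    r, g, b = np.moveaxis(srgb_to_linear(np.asarray(rgb, dtype=float)), -1, 0)
    l = 0.4122214708 * r + 0.5363325363 * g + 0.0514459929 * b
    m = 0.2119034982 * r + 0.6806995451 * g + 0.1073969566 * b
    s = 0.0883024619 * r + 0.2817188376 * g + 0.6299787005 * b
    # L is defined on the cube roots of the LMS-like intermediate values.
    return 0.2104542553 * np.cbrt(l) + 0.7936177850 * np.cbrt(m) - 0.0040720468 * np.cbrt(s)
```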