Posted by meetpateltech 11/12/2025
All of these have fairly "exact" representations, and generation techniques are also often fairly "exact", aiming to create worlds that won't break physics engines (a big part) or rendering engines. These are usually hand-crafted algorithms, but nothing really stopped neural networks from being used at a higher level.
One important detail of most generation systems in games is that they are often built to be controllable: either by game logic (think of how Minecraft generates the world to include biomes, villages, etc.) or, more or less, by artists.
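The usual trick behind that controllability is seeded, deterministic generation. A minimal sketch (the biome list and hash-based noise are illustrative, not any particular engine's scheme):

```python
import hashlib

def chunk_biome(world_seed: int, cx: int, cz: int) -> str:
    """Deterministically pick a biome for chunk (cx, cz).

    Hashing (seed, chunk coords) means the same seed always yields
    the same world -- the property game logic and artists rely on.
    """
    h = hashlib.sha256(f"{world_seed}:{cx}:{cz}".encode()).digest()
    biomes = ["plains", "desert", "forest", "village"]
    return biomes[h[0] % len(biomes)]

# Same seed + coordinates -> same biome, every run, on every machine.
assert chunk_biome(42, 10, -3) == chunk_biome(42, 10, -3)
```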
3D scanning has often relied on point clouds, but those were heavy, full of holes, etc., and long infeasible for direct rendering, so many methods were developed to turn them into decent polygon meshes.
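For reference, libraries like Open3D expose the classic Poisson reconstruction pipeline for turning such point clouds into meshes (a sketch; the file paths are placeholders):

```python
import open3d as o3d

# Load a scanned point cloud (placeholder path).
pcd = o3d.io.read_point_cloud("scan.ply")

# Poisson reconstruction needs oriented normals.
pcd.estimate_normals()

# Fit an implicit surface and extract a triangle mesh;
# higher depth = finer detail but more memory.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9
)
o3d.io.write_triangle_mesh("scan_mesh.ply", mesh)
```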
NeRFs and Gaussian splatting (GS) started appearing a few years back. These are more "approximate" and ignore polygon generation entirely, instead quantizing the world into neural-network "fields" (NeRF) or fuzzy point clouds (GS). Visually these have been impressive, since they manage to capture "real" images well.
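Roughly, each GS primitive is just a small bundle of parameters; a simplified sketch of the per-splat data (real implementations store color as spherical-harmonic coefficients and pack millions of these):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Splat:
    """One anisotropic 3D Gaussian in a splat scene (simplified)."""
    position: np.ndarray  # (3,) center in world space
    scale: np.ndarray     # (3,) per-axis extent of the ellipsoid
    rotation: np.ndarray  # (4,) quaternion orienting the ellipsoid
    opacity: float        # alpha used when blending splats front-to-back
    color: np.ndarray     # (3,) RGB (real systems: SH coefficients)

# A scene is millions of these, projected and alpha-blended --
# no polygon mesh anywhere, which is why mesh export needs extra work.
```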
This system is built on GS, which probably meshed fairly well with the token and diffusion techniques used for encoding the inputs (images, text).
They do mention mesh exports (there has been some research into polygon generation from GS).
If the system scales to huge worlds this could change game dev, and the control methods suggest some aim in that direction, but it'd probably require more control and world/asset management, since long-term production needs predictability around existing assets (same as with code agents).
A typical game has thousands of hand-placed nodes in 3D space that do things like place lights, trigger story beats, handle physics and collisions, etc. That wouldn't change with Gaussian splats, but if you needed to edit the world then, even with deterministic generation, the whole world might change, and all your gameplay nodes would now be misplaced.
That doesn't matter for some games, but I think it does matter for most.
That said, all those collisions, triggers, lights, etc. could be authored together with blockouts in Unity, Godot, or some other level editor that integrates with the rest of the game authoring process.
If they create a way to keep the contexts of generation (or rebuild them from marker objects with prompts that are kept in the level editor and continuously re-imported), and allow a sane way to re-generate and keep chunks, then I feel this could be fairly bad for world artists (yes, they'd probably still be needed to adjust things so they don't look like total slop).
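Something like the following could live in the level editor: each marker remembers the prompt/seed that produced its chunk, so regeneration can be replayed per chunk instead of invalidating the whole world (all names are hypothetical, sketching the idea, not any real API):

```python
from dataclasses import dataclass

@dataclass
class GenMarker:
    """Editor-side anchor that remembers how its chunk was generated."""
    chunk_id: tuple  # which chunk this marker pins down
    prompt: str      # text prompt fed to the generator
    seed: int        # seed for reproducible regeneration
    offset: tuple    # gameplay-node position relative to a generated landmark

def regenerate_chunk(marker: GenMarker, generator):
    """Re-run generation for one chunk and re-anchor its gameplay node.

    `generator` is a stand-in for the (hypothetical) world-model API.
    Keeping prompt + seed per chunk means edits stay local: other
    chunks, and the triggers/lights placed in them, are untouched.
    """
    chunk = generator.generate(prompt=marker.prompt, seed=marker.seed)
    landmark = chunk.find_landmark()  # hypothetical landmark lookup
    return tuple(l + o for l, o in zip(landmark, marker.offset))
```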
The issue with real voxels (not Minecraft-style) is that they fill fixed spaces, which can create gaps once you start animating; you probably have the same issue with GS (but that's probably why they are doing exports).
Along with entertainment, they can be used for simulation training for robots, and they allow for imagining potential trajectories.
Other "world models" map an image + (keyboard input) to video or streaming images, effectively functioning like a game engine / video hybrid.
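The control loop for those is essentially autoregressive: the current frame plus the player's input conditions the next frame (a schematic sketch; `model` and `get_key` are stand-ins, not any real API):

```python
def rollout(model, first_frame, get_key, steps=600):
    """Run an interactive 'game engine as video model' loop.

    Each step, the network predicts the next frame conditioned on
    what the player just pressed -- the engine *is* the model.
    """
    frame = first_frame
    frames = [frame]
    for _ in range(steps):
        action = get_key()                    # e.g. "W", "A", "S", "D", None
        frame = model.predict(frame, action)  # hypothetical inference call
        frames.append(frame)
    return frames
```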
edit: Just tried it and it doesn't, but it does a good job of creating something like a CS map.
Presumably de_dust2
I linked it elsewhere in this thread.
Update - yes you can. To be tested.
Update - it is a paid feature
Seems anything to do with asteroids (or explosions, I imagine) is blocked.