Posted by spacemarine1 11 hours ago
Entering flow is one of the beautiful things I love about programming. And being knocked out of it often feels like a physical jolt.
Lobster seems to take the idea of optimisation and speed to new levels. Entering and remaining in flow must be even easier. First though, I'll need to put the time into learning enough to be able to do it!
Edit: I was mistaken about what Lobster is (it's compiled rather than JIT-compiled), but the main point stands.
I'm working on an indie game project and just got frustrated with Unity, so I'm porting everything over to Godot.
I even learned about using Kotlin with Godot today [0] and I am really hopeful this is stable (it seems so), because I favor a more functional style of programming and C# ends up making everything 5 times more verbose than Kotlin.
As if you can't program C# in a functional style.
Ref: https://aardappel.github.io/lobster/language_reference.html
Recent changes to Xcode have meant that on device debugging now launches WAY slower for me every time.
Once it’s going it’s fine. But an extra 20 seconds every time you start the app just kills things for me. It was never instant but now it’s trash.
Right now the editor has a UI driven minimalistic language for specifying quests and other gameplay actions.
Also, does it have a single world grid (I saw you say octree somewhere), or many separate elements?
Yes, there is a single world grid, so all world objects are axis-aligned and the same size.
On top of that it can have floating "sprites" which are used for monsters, particles and such.
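A minimal sketch of what that split could look like: one flat grid of same-size blocks, plus a separate list of free-floating sprites. The names, fields, and flat-array layout are my guesses for illustration, not the engine's actual structures:

```python
from dataclasses import dataclass, field

@dataclass
class Sprite:
    pos: tuple        # free world-space position, not snapped to the grid
    texture_id: int

@dataclass
class World:
    size: tuple                    # (nx, ny, nz) in blocks
    blocks: bytearray = None       # one material id per cell, 0 = air
    sprites: list = field(default_factory=list)

    def __post_init__(self):
        nx, ny, nz = self.size
        if self.blocks is None:
            self.blocks = bytearray(nx * ny * nz)

    def _index(self, x, y, z):
        nx, ny, _ = self.size
        return (z * ny + y) * nx + x

    def set_block(self, x, y, z, material):
        self.blocks[self._index(x, y, z)] = material

    def get_block(self, x, y, z):
        return self.blocks[self._index(x, y, z)]

# All blocks live on the grid; monsters/particles float freely as sprites.
w = World((16, 16, 16))
w.set_block(3, 4, 5, 7)
w.sprites.append(Sprite(pos=(3.5, 6.2, 5.0), texture_id=1))
```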
Because that's where I always get stuck. There are so many cool algorithms and ideas I have, like combining ray tracing with meshing, or impostors for the distant chunks.
But this gets very complicated with octrees and raytracing/marching, etc.
Some reasons why we don't have a super far render distance, in order of importance:
The biggest is GPU memory. The octree that holds the world gets gigantic at large sizes. We'd have to swap parts out as the camera moves. We can do that but haven't gotten there.
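A back-of-envelope sketch of why the tree blows up; these are my numbers for the dense worst case, not the engine's (a sparse octree prunes empty space but still grows quickly with world size):

```python
def octree_nodes(depth):
    # Dense octree over a cube of side 2**depth voxels: sum of 8**i, i = 0..depth.
    return (8 ** (depth + 1) - 1) // 7

def octree_bytes(depth, node_bytes=4):
    return octree_nodes(depth) * node_bytes

# A 1024^3 world (depth 10) at only 4 bytes/node is already ~4.6 GB,
# hence the need to swap parts in and out as the camera moves.
gib = octree_bytes(10) / 2**30
```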
Noise: raytracing looks incredibly noisy if you simply cast rays far into small geometry. Hence we have LOD even for blocks: it isn't needed for efficiency, but it is needed for visual stability.
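For instance, a simple distance-based LOD pick (thresholds invented for illustration): past some base distance, every doubling of distance doubles the block size, so far rays hit large, stable targets instead of shimmering sub-pixel detail.

```python
import math

def lod_level(distance, base=32.0, max_level=6):
    # Level 0 = full-size blocks; each level up doubles the block size.
    if distance <= base:
        return 0
    return min(max_level, int(math.log2(distance / base)) + 1)
```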
If you're unlucky and a ray has a lot of near misses with geometry, it does more work than other rays, which causes GPU performance degradation. It's rare, but to raytrace far you have to optimize for a certain number of ray steps, so we actually put a bound on this.
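The bound might look roughly like this in a raymarcher (a crude fixed-step walk, purely illustrative; a real tracer would walk the octree):

```python
def march(solid, origin, direction, max_steps=128, step_size=0.5):
    # `solid(x, y, z)` returns True where there is geometry.
    x, y, z = origin
    dx, dy, dz = direction
    for i in range(max_steps):          # hard cap on work per ray
        if solid(x, y, z):
            return (i, (x, y, z))       # hit after i steps
        x += dx * step_size
        y += dy * step_size
        z += dz * step_size
    return None                         # budget exhausted: treat as miss/fog
```

The cap trades a few false misses in the far distance for a predictable worst case per ray, which keeps one unlucky ray from stalling the whole wavefront.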
We find fog gives some visual unity and a sense of scale. With the fog pushed far away, the world looks VERY busy visually.
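Classic exponential fog does this kind of blend (a standard formula, not necessarily what this engine uses): each pixel is mixed toward the fog colour with weight 1 - exp(-density * distance), so distant clutter fades out smoothly.

```python
import math

def fog_factor(distance, density=0.02):
    # 0 at the camera, approaching 1 far away.
    return 1.0 - math.exp(-density * distance)

def apply_fog(color, fog_color, distance, density=0.02):
    f = fog_factor(distance, density)
    return tuple((1 - f) * c + f * fc for c, fc in zip(color, fog_color))
```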
It'd be really neat to have some way of enabling really long-distance raytraced voxels so you can make planet-scale worlds look good, but as far as I'm aware no one's really nailed the technical implementation yet. A few companies and engines seem to have come up with pieces of what might end up being the final puzzle, but I haven't seen anything close to a complete solution yet.
We have a "depth of field" implementation for when you're in dialog with an NPC. There it looks nice, because you're focused on one thing. But when looking around, it's not that great.
Ideally you want it close to native res in the distance, but without any wobble produced by noise as you move. This is really hard.
Ideally, we would be able to do physics in voxel space itself (sort of like cellular-automaton-based classical mechanics), but that doesn't seem to be possible.
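A toy version of that idea is a falling-sand automaton, which also hints at the limitation: purely local per-cell rules can move material around, but they can't easily carry momentum, rotation, or rigid-body constraints across cells.

```python
EMPTY, SAND = 0, 1

def step(grid):
    # One tick of a falling-sand cellular automaton on a list-of-rows grid.
    h, w = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for y in range(h - 2, -1, -1):     # bottom-up so each grain falls once/tick
        for x in range(w):
            if grid[y][x] == SAND and new[y + 1][x] == EMPTY:
                new[y + 1][x] = SAND
                new[y][x] = EMPTY
    return new
```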