Posted by Naulius 3 hours ago
Interesting point about the RL cold start. One could definitely use the paths first discovered through the evolutionary exploration to seed an RL agent's initial experience, which could help it skip the early random-flailing phase.
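Roughly what I have in mind, as a minimal sketch (all names here are made up for illustration; `discovered_paths` is assumed to be a list of trajectories, each a list of (state, action, next_state) steps from the evolutionary search):

```python
import random
from collections import deque

def seed_replay_buffer(discovered_paths, capacity=100_000):
    """Pre-fill a replay buffer with transitions taken from paths found by the
    evolutionary exploration, so the RL agent starts from useful experience
    instead of purely random actions."""
    transitions = []
    for path in discovered_paths:
        for state, action, next_state in path:
            # Reward of 0.0 is a placeholder; the real signal would come from
            # whatever objective the RL agent is actually trained on.
            transitions.append((state, action, 0.0, next_state))
    # Shuffle so the agent isn't fed whole trajectories in order.
    random.shuffle(transitions)
    return deque(transitions, maxlen=capacity)
```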
The key difference from RL is the goal. We're not trying to learn an optimal policy for playing the game; instead, we're trying to explore as much of the state space as possible to find bugs. In Part 2 we plug in a behavior model that validates correctness at every frame during exploration (velocity constraints, causal movement checks, collision invariants). The combination is where it gets interesting: autonomous exploration discovers the states, and the behavior model catches when the game violates its own rules.

For testing, the main reason we even care about completing each level is that a completed path serves as the base for more extensive exploration at every point along it. If the exploration can't reach the end, by definition we miss a large part of the state space.
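For anyone curious what "validates correctness at every frame" could look like in practice, here's a minimal sketch under my own assumptions (invented names and constants, not the actual Part 2 API): a check run on each pair of consecutive frames that flags violations of the kinds of invariants mentioned above.

```python
from dataclasses import dataclass

@dataclass
class FrameState:
    x: float
    y: float
    vx: float
    vy: float
    inside_geometry: bool  # whether the player overlaps solid level geometry

MAX_SPEED = 12.0   # illustrative speed cap, not a real game value
DT = 1.0 / 60.0    # assumed fixed frame time

def check_frame(prev: FrameState, curr: FrameState) -> list[str]:
    """Return a list of invariant violations between two consecutive frames."""
    violations = []
    # Velocity constraint: the engine should never exceed its own speed cap.
    if abs(curr.vx) > MAX_SPEED or abs(curr.vy) > MAX_SPEED:
        violations.append("velocity exceeds engine speed cap")
    # Causal movement: the position change must be explained by the reported velocity.
    if abs(curr.x - (prev.x + prev.vx * DT)) > 1e-3:
        violations.append("position changed without matching velocity (teleport?)")
    # Collision invariant: the player should never end a frame inside solid geometry.
    if curr.inside_geometry:
        violations.append("player embedded in collision geometry")
    return violations
```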