# Join paths multiply exponentially when the Sales department exists in multiple buildings
%employee = ('1' => { name => 'Alice', dept => 'Sales', building => 'West', project => 'Alpha' });
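To make the problem concrete, here's a hypothetical second record (Bob is my invention, not part of the original example): once two flat records both mention Sales, nothing forces them to agree about its building.

# Hypothetical: Bob is also in Sales, but his record claims a different building.
%employee = (
    '1' => { name => 'Alice', dept => 'Sales', building => 'West', project => 'Alpha' },
    '2' => { name => 'Bob',   dept => 'Sales', building => 'East', project => 'Alpha' },
);
# "Which building is Sales in?" now has two answers, one per join path.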
# Each B-tree indexes one specific relationship: employee_id → dept_id → building_id
%employee = ('1' => { name => 'Alice', dept_id => '1' });
%department = ('1' => { name => 'Sales', building_id => '1' });
%building = ('1' => { name => 'West' });
sub get_employee_location {
    my ($emp_id) = @_;
    # Follow the single defined path: employee -> department -> building
    my $dept_id = $employee{$emp_id}{dept_id};
    return $building{ $department{$dept_id}{building_id} }{name};
}
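A quick sanity check, assuming the three hashes above:

print get_employee_location('1'), "\n";   # three lookups, one per B-tree: prints "West"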
The normalized approach means each B-tree has a clear purpose and every lookup follows one defined path. The "universal relation" seems intuitive, but maintaining it is computationally expensive.
What Graham proved was specific to relational databases: maintaining consistent joins across all possible paths (aka "join consistency") is an NP-complete problem. The "Universal Relation Problem" requires ALL possible join paths to give consistent results.
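To get a feel for the blow-up (an illustrative sketch, not Graham's construction; the schema graph and count_join_paths are made up for this example), count the simple join paths between two relations when relations share joinable attributes:

# Edge = "these two relations share a joinable attribute".
my %schema = (
    employee   => [ 'department', 'project' ],
    department => [ 'employee', 'building', 'project' ],
    project    => [ 'employee', 'department', 'building' ],
    building   => [ 'department', 'project' ],
);

sub count_join_paths {
    my ($from, $to, $seen) = @_;
    return 1 if $from eq $to;
    $seen = { %{ $seen // {} }, $from => 1 };   # copy so sibling branches stay independent
    my $n = 0;
    $n += count_join_paths($_, $to, $seen)
        for grep { !$seen->{$_} } @{ $schema{$from} };
    return $n;
}

print count_join_paths('employee', 'building'), "\n";   # 4 paths even in this tiny schema

Four relations already yield four distinct employee-to-building paths; a universal relation has to keep every one of them consistent, and the count grows combinatorially as relations are added.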
Prolog doesn't have to verify consistency across an exponential number of join paths, so I fail to see how it is a "universal relation" system.
Datomic's approach is more like a temporal log of facts; it isn't trying to maintain universal join consistency across all possible paths. The EAV model lets you treat everything as "facts over time" (Datomic's strength, yes) instead of trying to make every possible join path consistent.
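A minimal sketch of that model (a toy of my own, not Datomic's actual API): store each fact as [entity, attribute, value, tx, added] and answer "as of" queries by replaying the log, with no cross-path consistency check anywhere.

# Toy EAV log: a retraction is just another fact with added = 0.
my @facts = (
    [ 'alice', 'dept',     'Sales', 100, 1 ],
    [ 'alice', 'building', 'West',  100, 1 ],
    [ 'alice', 'building', 'West',  200, 0 ],   # retract old value
    [ 'alice', 'building', 'East',  200, 1 ],   # assert new value
);

sub value_as_of {
    my ($entity, $attr, $as_of_tx) = @_;
    my $value;
    for my $f (sort { $a->[3] <=> $b->[3] } @facts) {   # replay in tx order
        my ($ent, $att, $val, $tx, $added) = @$f;
        next if $ent ne $entity or $att ne $attr or $tx > $as_of_tx;
        $value = $added ? $val : undef;
    }
    return $value;
}

print value_as_of('alice', 'building', 150), "\n";   # West
print value_as_of('alice', 'building', 250), "\n";   # East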