A research-first artificial intelligence lab exploring new architectures, engineered insight, and the future of machine understanding.
The Causal Measure
We introduce the Causal Measure, a normalized scalar weighting scheme over finite directed acyclic graphs. For purposes of the measure, causal substance is modeled as a non-negative scalar. Given the premise that causes precede effects, the measure assigns to each node a value D(i) = W(i)/W_T · P, where W(i) denotes the causal weight of node i's territory, W_T = Σ W(j) is the total causal weight across all nodes, and P is the total causal pool. By construction, the measure satisfies the normalization identity Σ D(i) = P.
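As a minimal sketch of the normalization identity, the definition above can be computed directly over a dictionary of node weights. The node names and weight values below are illustrative, not taken from the paper, and we assume the reading W_T = Σ W(j):

```python
def causal_measure(weights, pool):
    """Assign each node D(i) = W(i) / W_T * P, where W_T is the
    total causal weight summed over all nodes in the DAG."""
    total = sum(weights.values())  # W_T
    return {node: w / total * pool for node, w in weights.items()}

# Hypothetical territory weights W(i) and total causal pool P.
W = {"a": 2.0, "b": 3.0, "c": 5.0}
P = 100.0
D = causal_measure(W, P)

# Normalization identity: the values D(i) sum back to the pool P.
assert abs(sum(D.values()) - P) < 1e-9
```

Because each D(i) is just the node's share W(i)/W_T of the pool, the identity Σ D(i) = P holds for any non-negative weights with W_T > 0.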
Independently of the measure, we establish the Incompatibility Theorem for Positive Inputs and Predecessor Closure: no finite, well-founded DAG satisfying five structural conditions can contain a node with positive causal input without requiring at least one predecessor outside the modeled system. The proof proceeds via three lemmas: root zero-input, walk termination, and positivity propagation. Hence any exogenous positive source demanded by the theorem lies strictly outside the modeled set A and cannot be internalized as a node while preserving the axioms.
Together, the normalization identity and the Incompatibility Theorem identify a structural boundary condition: under the stated axioms, such a causal system cannot be self-originating within its own formal domain.
We bridge ancient wisdom traditions with frontier AI research, building systems that don't just process information — they develop understanding.
Neural Architecture
Novel approaches to machine learning systems and cognitive architectures
Engineered Insight
AI systems that develop deeper understanding, not just pattern recognition
Understanding
Research into how artificial systems can achieve genuine comprehension