Benchmarks

Performance targets and current results. Updated each session.


Current Results

| Metric | Target | Result | Notes |
|---|---|---|---|
| Sparsity | 1–4% | 2.43% | ✅ Fixed in Session 2 via clusters |
| Search efficiency | > 95% | 99.1% | ✅ Only 1% of nodes searched per query |
| Compression ratio | > 1.0x | 1.78x | ✅ 44% less memory than dense approach |
| Build time (100k nodes) | < 5 sec | 0.87s | ✅ 257x speedup via vectorized NumPy |
| Search speed (bucket) | < 10ms | 3.5ms | ✅ Fast enough for real-time inference |
| Clustering (100k → 32) | < 30 sec | 4.4s | ✅ K-means with vectorized cosine similarity |
| Search speed (cluster) | < 50ms | 30.9ms | ✅ Searches whole cluster (~3,125 nodes) |
| Context boost | boosted > unboosted | 3431 → 3506 | ✅ 2% more nodes activated |
| .nci save (100k) | < 5 sec | 0.4s | ✅ Binary format, ~34 MB |
| .nci load (100k) | < 5 sec | 1.6s | ✅ Full round-trip verified identical |
| Consolidation | reinforced > fresh | 0.50 → 0.70 | ✅ 10 reinforcements tested |
| Inter-cluster connections | > 0 per cluster | 3.0 avg | ✅ 96 total across 32 clusters |
| Long-term compression | 10x–100x | Future | ⚪ Requires semantic signatures |
| RPi inference speed | 2–5 tok/sec | Future | ⚪ End goal |
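The cluster-search rows above (4.4s clustering, 30.9ms per-cluster search) both rest on vectorized cosine similarity. A minimal sketch of that kernel is below; the function name, signature shape, and cluster size are illustrative assumptions, not the actual NCI API.

```python
import numpy as np

def cosine_search(signatures: np.ndarray, query: np.ndarray, top_k: int = 5):
    """Vectorized cosine-similarity search over one cluster's node signatures.

    `signatures` is (n_nodes, dim); `query` is (dim,). Hypothetical names:
    the real NCI interface may differ.
    """
    # Normalize rows once so a single matrix-vector product yields cosines.
    sig_norm = signatures / np.linalg.norm(signatures, axis=1, keepdims=True)
    q_norm = query / np.linalg.norm(query)
    scores = sig_norm @ q_norm              # (n_nodes,) cosine similarities
    top = np.argsort(scores)[::-1][:top_k]  # indices of best matches first
    return top, scores[top]

# Toy cluster of ~3,125 nodes, matching the per-cluster size in the table.
rng = np.random.default_rng(0)
cluster = rng.normal(size=(3125, 64))
idx, sims = cosine_search(cluster, cluster[42])
print(idx[0])  # the query's own row ranks first with similarity 1.0
```

One matrix-vector product replaces a Python loop over nodes, which is the kind of vectorization the 257x build-time speedup in the table refers to.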

Brain vs NCI vs Traditional AI

| Concept | Human Brain | NCI | Traditional AI |
|---|---|---|---|
| Knowledge unit | Neuron: fires on the right input | Resonance Node: activates on signature match | Weight in a matrix: always participates |
| Sparsity | 1–4% active at any moment | Targeting 1–4%, built in | 100% of weights used every inference |
| Compression | Native: concepts stored as relationships | Native: compression is the model | Optional afterthought |
| Consolidation | Sleep consolidates memory | Consolidation score on every node | No equivalent |
| Autonomous learning | Continuous | Designed for it | Impossible without full retraining |
| Power required | ~20 watts | Target: modest CPU, no GPU | Thousands of watts |
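The sparsity row is the one metric that can be checked with a one-liner: count the fraction of nodes whose activation clears a threshold. The sketch below uses synthetic activations and a hypothetical threshold; the real NCI activation rule is not shown here.

```python
import numpy as np

def sparsity(activations: np.ndarray, threshold: float = 0.5) -> float:
    """Fraction of nodes whose activation exceeds the threshold.

    Illustrative assumption: a fixed scalar threshold; NCI's actual
    activation criterion may be different.
    """
    return float(np.mean(activations > threshold))

rng = np.random.default_rng(1)
# Toy activations over 100k nodes: ~2.5% strongly active, the rest near zero,
# mimicking the 2.43% measured result in the benchmarks table.
acts = np.where(rng.random(100_000) < 0.025, 0.9, 0.01)
print(f"{sparsity(acts):.2%} of nodes active")
```

Anything inside the 1–4% target band passes; a dense model would report 100% on the same check.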