# HippocampAIF — Fully Biological Sub-Symbolic Cognitive Framework
A brain-inspired cognitive architecture built from computational neuroscience first principles, grounded in three papers: **Lake et al. BPL** (Science 2015), **Distortable Canvas one-shot learning** (oneandtrulyone), and **Friston's Free-Energy Principle** (Trends Cogn Sci, 2009).
## User Review Required
> [!IMPORTANT]
> **Scale & Scope:** This is an 80+ component biological framework. The plan is phased — each phase produces tested, working code before moving on. Given the constraint of no PyTorch/TF/JAX, everything uses NumPy + SciPy only.
> [!WARNING]
> **Performance Targets:**
> - **MNIST**: >90% accuracy with ONE sample per digit (10 total training images). The Distortable Canvas paper achieves 90% with just 4 examples.
> - **Breakout**: Master the game in under 5 episodes. This is extremely ambitious and requires strong innate priors (Spelke's physics core knowledge) plus hippocampal fast-learning.
> [!CAUTION]
> **No POMDP / VI Active Inference / MCMC:** Per user directive, we replace these with biologically-grounded gradient-descent free-energy minimization (Friston-style) + hippocampal index memory + Spelke's core knowledge priors. The "common sense" stack replaces MCMC sampling.
---
## Architecture Overview
```
hippocampaif/
├── __init__.py
├── core/                        # Phase 1: Core infrastructure
│   ├── __init__.py
│   ├── tensor.py                # Lightweight ndarray wrapper with sparse ops
│   ├── free_energy.py           # Variational free-energy engine (Friston)
│   ├── message_passing.py       # Hierarchical prediction-error message passing
│   └── dynamics.py              # Continuous-state dynamics & gradient descent
│
├── retina/                      # Phase 2: Retinal processing
│   ├── __init__.py
│   ├── photoreceptor.py         # Center-surround, ON/OFF channels
│   ├── ganglion.py              # Magno/Parvo/Konio pathways
│   └── spatiotemporal_energy.py # Adelson-Bergen energy model
│
├── visual_cortex/               # Phase 3: V1-V5 visual hierarchy
│   ├── __init__.py
│   ├── v1_gabor.py              # 2D Gabor filter bank + simple/complex cells
│   ├── v1_disparity.py          # Binocular disparity energy model
│   ├── v2_contour.py            # Contour integration, border-ownership
│   ├── v3_shape.py              # Shape-from-contour, curvature
│   ├── v3a_motion.py            # Motion processing (dorsal link)
│   ├── v4_color_form.py         # Color constancy + intermediate form
│   ├── v5_mt_flow.py            # Optic flow, motion integration
│   └── hmax.py                  # HMAX model (S1-C1-S2-C2 hierarchy)
│
├── hippocampus/                 # Phase 4: Hippocampal complex
│   ├── __init__.py
│   ├── dentate_gyrus.py         # Pattern separation (sparse coding)
│   ├── ca3_autoassociation.py   # Pattern completion (attractor network)
│   ├── ca1_comparator.py        # Match/mismatch detection
│   ├── entorhinal_cortex.py     # Grid cells, spatial representation
│   ├── index_memory.py          # Fast one-shot index-based memory (BPL replacement)
│   └── replay.py                # Memory consolidation replay
│
├── core_knowledge/              # Phase 5: Spelke's core knowledge systems
│   ├── __init__.py
│   ├── object_system.py         # Object permanence, cohesion, contact
│   ├── agent_system.py          # Intentional agency, goal-directedness
│   ├── number_system.py         # Approximate number system, subitizing
│   ├── geometry_system.py       # Geometric/spatial relations + Distortable Canvas
│   ├── social_system.py         # Social evaluation, in-group preference
│   └── physics_system.py        # Gravity, friction, mass priors (believed, not computed)
│
├── neocortex/                   # Phase 6: Neocortical processing
│   ├── __init__.py
│   ├── prefrontal.py            # Working memory, executive control
│   ├── temporal.py              # Object recognition, semantic memory
│   ├── parietal.py              # Spatial attention, sensorimotor integration
│   └── predictive_coding.py     # Hierarchical predictive coding (Friston Box 3)
│
├── attention/                   # Phase 6b: Attention & salience
│   ├── __init__.py
│   ├── superior_colliculus.py   # Saccade control, salience map
│   ├── precision_modulation.py  # Synaptic gain / precision (Friston attention)
│   └── competition.py           # Hemifield competition, biased competition
│
├── learning/                    # Phase 7: One-shot & fast learning
│   ├── __init__.py
│   ├── distortable_canvas.py    # From oneandtrulyone paper
│   ├── amgd.py                  # Abstracted Multi-level Gradient Descent
│   ├── one_shot_classifier.py   # One-shot classification pipeline
│   └── hebbian.py               # Hebbian/anti-Hebbian learning rules
│
├── action/                      # Phase 8: Action & motor control
│   ├── __init__.py
│   ├── motor_primitives.py      # Motor primitive library
│   ├── active_inference.py      # Action as free-energy minimization (NOT VI/POMDP)
│   └── reflex_arc.py            # Innate reflexive behaviors
│
├── agent/                       # Phase 9: Integrated agent
│   ├── __init__.py
│   ├── brain.py                 # Full brain integration (all modules)
│   ├── mnist_agent.py           # MNIST one-shot benchmark agent
│   └── breakout_agent.py        # Breakout game agent
│
└── tests/                       # All phases: Component tests
    ├── test_core.py
    ├── test_retina.py
    ├── test_visual_cortex.py
    ├── test_hippocampus.py
    ├── test_core_knowledge.py
    ├── test_neocortex.py
    ├── test_learning.py
    ├── test_action.py
    ├── test_mnist.py            # MNIST >90% one-shot benchmark
    └── test_breakout.py         # Breakout mastery <5 episodes
```
---
## Proposed Changes
### Phase 1: Core Infrastructure (`core/`)
The foundation: lightweight tensor operations, the free-energy engine, and hierarchical message passing. Everything else builds on this.
#### [NEW] [tensor.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/core/tensor.py)
- Sparse ndarray wrapper over NumPy — supports lazy computation, sparsity masks
- The brain is "lazy and sparse" — this is computationally modeled here
- Key ops: sparse dot, threshold activation, top-k sparsification
#### [NEW] [free_energy.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/core/free_energy.py)
- Implements Friston's variational free energy: **F = Energy − Entropy**
- `F = −⟨ln p(y,ϑ|m)⟩_q + ⟨ln q(ϑ|μ)⟩_q`
- Laplace approximation: q specified by mean μ and conditional precision Π(μ)
- Gradient descent on F w.r.t. internal states (perception) and action parameters
- **NOT** variational inference in the ML sense — this is biological FEP
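As a sketch of this engine (names hypothetical, single Gaussian level), perception can be written as gradient descent on a quadratic F under the Laplace approximation, with constant entropy terms dropped:

```python
import numpy as np

def free_energy(mu, y, eta, sig_y2=1.0, sig_p2=1.0):
    # One Gaussian level: prior p(v) = N(eta, sig_p2),
    # likelihood p(y|v) = N(v, sig_y2); mu is the Laplace mean.
    return 0.5 * ((y - mu) ** 2 / sig_y2 + (mu - eta) ** 2 / sig_p2)

def perceive(y, eta, sig_y2=1.0, sig_p2=1.0, lr=0.1, steps=100):
    # Perception = gradient descent on F w.r.t. the internal state mu.
    mu = float(eta)                     # start at the prior mean
    trace = [free_energy(mu, y, eta, sig_y2, sig_p2)]
    for _ in range(steps):
        dF = -(y - mu) / sig_y2 + (mu - eta) / sig_p2   # dF/dmu
        mu -= lr * dF
        trace.append(free_energy(mu, y, eta, sig_y2, sig_p2))
    return mu, np.array(trace)

mu, trace = perceive(y=2.0, eta=0.0)
# with equal precisions, mu settles midway between prior (0) and data (2)
```

The free-energy trace is monotonically non-increasing, which is exactly what the convergence tests in `test_core.py` would check.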
#### [NEW] [message_passing.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/core/message_passing.py)
- Hierarchical prediction-error scheme (Friston Box 3, Figure I)
- Forward (bottom-up): prediction errors ε from superficial pyramidal cells
- Backward (top-down): predictions μ from deep pyramidal cells
- Lateral: precision-weighted error at same level
- ε⁽ⁱ⁾ = μ⁽ⁱ⁻¹⁾ − g(μ⁽ⁱ⁾) − Λ(μ⁽ⁱ⁾)ε⁽ⁱ⁾ (recognition dynamics)
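A toy version of one recognition step, assuming scalar states and a linear g (names hypothetical), shows how precision weighting balances bottom-up against top-down errors:

```python
def recognition_step(mu, y, prior_mu, pi_low=4.0, pi_high=1.0, lr=0.05):
    # Bottom-up: error of the data against this level's prediction.
    eps_low = y - mu
    # Top-down: error of this state against the higher level's prediction.
    eps_high = mu - prior_mu
    # The state moves to balance the precision-weighted errors.
    mu = mu + lr * (pi_low * eps_low - pi_high * eps_high)
    return mu, eps_low, eps_high

mu = 0.0
for _ in range(200):
    mu, e_lo, e_hi = recognition_step(mu, y=1.0, prior_mu=0.0)
# equilibrium is the precision-weighted compromise, 4/(4+1) toward the data
```

Raising `pi_low` (sensory precision) pulls the state toward the data; raising `pi_high` pulls it toward the prior, which is the hook the attention module later exploits.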
#### [NEW] [dynamics.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/core/dynamics.py)
- Continuous-state generalized coordinates of motion (Friston Box 2, Eq. I)
- y(t) = g(x⁽¹⁾,v⁽¹⁾,θ⁽¹⁾) + z⁽¹⁾
- Hierarchical state transitions with random fluctuations
- Euler integration of recognition dynamics
#### [NEW] [test_core.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/tests/test_core.py)
- Tests sparse ops, free-energy computation convergence, message passing stability
---
### Phase 2: Retinal Processing (`retina/`)
The eye's computational front-end: center-surround antagonism, ON/OFF channels, and motion energy.
#### [NEW] [photoreceptor.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/retina/photoreceptor.py)
- Difference-of-Gaussians (DoG) center-surround
- ON-center/OFF-surround and OFF-center/ON-surround channels
- Luminance adaptation (Weber's law)
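A minimal NumPy/SciPy sketch of the DoG front-end (function name and sigmas are illustrative placeholders):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_channels(img, sigma_center=1.0, sigma_surround=3.0):
    # Difference-of-Gaussians center-surround; half-wave rectification
    # splits the signed response into ON and OFF channels.
    img = img.astype(float)
    dog = gaussian_filter(img, sigma_center) - gaussian_filter(img, sigma_surround)
    return np.maximum(dog, 0.0), np.maximum(-dog, 0.0)   # (ON, OFF)

img = np.zeros((32, 32))
img[16, 16] = 1.0                     # a bright spot on a dark background
on, off = dog_channels(img)
# the ON channel peaks at the spot; the OFF channel lights up in the surround ring
```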
#### [NEW] [ganglion.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/retina/ganglion.py)
- Magnocellular (motion/flicker), Parvocellular (color/detail), Koniocellular (blue-yellow) pathways
- Temporal filtering: transient vs sustained responses
#### [NEW] [spatiotemporal_energy.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/retina/spatiotemporal_energy.py)
- Adelson-Bergen spatio-temporal energy model for local motion detection
- Oriented space-time filters (quadrature pairs)
- Motion energy = sum of squared quadrature pair outputs
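The three bullets above could be sketched as follows, using one space dimension plus time and hypothetical frequency/σ choices; the opponent (reversed-drift) channel gives near-zero energy:

```python
import numpy as np

def st_quadrature(nx=16, nt=16, fx=0.15, ft=0.15, sigma=4.0):
    # Space-time Gabor quadrature pair tuned to rightward drift
    # (fx cycles/pixel, ft cycles/frame).
    x = np.arange(nx) - nx // 2
    t = np.arange(nt) - nt // 2
    X, T = np.meshgrid(x, t, indexing="ij")
    env = np.exp(-(X**2 + T**2) / (2 * sigma**2))
    phase = 2 * np.pi * (fx * X - ft * T)
    return env * np.cos(phase), env * np.sin(phase)

def motion_energy(stim, ft=0.15):
    # Adelson-Bergen: energy = sum of squared quadrature-pair outputs.
    even, odd = st_quadrature(ft=ft)
    return np.sum(stim * even) ** 2 + np.sum(stim * odd) ** 2

# rightward-drifting grating, array shape (space, time)
x = np.arange(16) - 8
t = np.arange(16) - 8
X, T = np.meshgrid(x, t, indexing="ij")
stim = np.cos(2 * np.pi * (0.15 * X - 0.15 * T))
e_right = motion_energy(stim, ft=0.15)    # matched drift direction
e_left = motion_energy(stim, ft=-0.15)    # opponent direction
```

This is the drifting-grating check `test_retina.py` proposes: matched-direction energy should dominate the opponent channel.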
#### [NEW] [test_retina.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/tests/test_retina.py)
- Tests DoG produces expected center-surround, motion energy detects drifting gratings
---
### Phase 3: Visual Cortex V1–V5 + HMAX (`visual_cortex/`)
The ventral "what" and dorsal "where/how" streams, built from established computational-neuroscience models.
#### [NEW] [v1_gabor.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/visual_cortex/v1_gabor.py)
- 2D Gabor filter bank: G(x,y) = exp(−(x'² + γ²y'²)/2σ²) × cos(2πx'/λ + ψ)
- Multiple orientations (0°, 45°, 90°, 135°, ...), spatial frequencies, phases
- Simple cells: linear filtering. Complex cells: energy model (sum of squared quadrature)
- Half-wave rectification + normalization (divisive normalization)
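A compact sketch of the filter bank and the complex-cell energy stage (sizes and parameters are illustrative, not tuned):

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor(size=11, theta=0.0, lam=6.0, sigma=3.0, psi=0.0, gamma=0.5):
    # G(x,y) = exp(-(x'^2 + gamma^2 y'^2)/(2 sigma^2)) * cos(2 pi x'/lam + psi)
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xp = x * np.cos(theta) + y * np.sin(theta)
    yp = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xp**2 + gamma**2 * yp**2) / (2 * sigma**2)) \
         * np.cos(2 * np.pi * xp / lam + psi)

def complex_cell(img, theta, lam=6.0):
    # Energy model: sum of squared quadrature-pair (simple cell) outputs.
    even = fftconvolve(img, gabor(theta=theta, lam=lam, psi=0.0), mode="same")
    odd = fftconvolve(img, gabor(theta=theta, lam=lam, psi=np.pi / 2), mode="same")
    return even**2 + odd**2

# a grating varying along x drives the theta=0 channel hardest
xx = np.arange(32)
img = np.tile(np.cos(2 * np.pi * xx / 6.0), (32, 1))
r0 = complex_cell(img, theta=0.0).mean()
r90 = complex_cell(img, theta=np.pi / 2).mean()
```

The quadrature-pair energy makes the complex-cell response phase-invariant, which is what the orientation-tuning tests would verify.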
#### [NEW] [v1_disparity.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/visual_cortex/v1_disparity.py)
- Binocular disparity energy model (Ohzawa et al.)
- Left/right eye Gabor responses → phase-difference disparity tuning
- Position and phase disparity computation
#### [NEW] [v2_contour.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/visual_cortex/v2_contour.py)
- Contour integration via association fields
- Border-ownership signals
- Texture boundary detection
#### [NEW] [v3_shape.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/visual_cortex/v3_shape.py)
- Shape-from-contour: curvature computation
- Medial axis / skeleton extraction
#### [NEW] [v3a_motion.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/visual_cortex/v3a_motion.py)
- Motion processing bridging V1 → V5 (MT)
- Pattern motion vs component motion selectivity
#### [NEW] [v4_color_form.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/visual_cortex/v4_color_form.py)
- Color constancy (von Kries adaptation)
- Intermediate form representation (curvature-selective)
#### [NEW] [v5_mt_flow.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/visual_cortex/v5_mt_flow.py)
- Optic flow computation (Lucas-Kanade style with biological plausibility)
- Motion integration / intersection of constraints
#### [NEW] [hmax.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/visual_cortex/hmax.py)
- HMAX hierarchy: S1 (Gabor) → C1 (MaxPool) → S2 (learned patches) → C2 (MaxPool)
- Position/scale invariance through max-pooling
- Crucial for the MNIST one-shot pipeline
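The S1 → C1 stages might look like this in NumPy/SciPy (a sketch with assumed pool sizes; S2/C2 omitted):

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.ndimage import maximum_filter

def s1_layer(img, thetas=(0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4),
             size=9, lam=5.0, sigma=2.5, gamma=0.5):
    # S1: rectified Gabor responses at several orientations.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    maps = []
    for th in thetas:
        xp = x * np.cos(th) + y * np.sin(th)
        yp = -x * np.sin(th) + y * np.cos(th)
        g = np.exp(-(xp**2 + gamma**2 * yp**2) / (2 * sigma**2)) \
          * np.cos(2 * np.pi * xp / lam)
        maps.append(np.abs(fftconvolve(img, g, mode="same")))
    return np.stack(maps)                 # (n_orientations, H, W)

def c1_layer(s1_maps, pool=4):
    # C1: local max pooling + subsampling gives position tolerance.
    pooled = maximum_filter(s1_maps, size=(1, pool, pool))
    return pooled[:, ::pool, ::pool]

img = np.zeros((32, 32))
img[:, 14:18] = 1.0                       # a vertical bar
feats = c1_layer(s1_layer(img))           # shape (4, 8, 8)
```

Max pooling (rather than averaging) is what buys HMAX its position/scale tolerance, and it is the invariance property `test_visual_cortex.py` is meant to exercise.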
#### [NEW] [test_visual_cortex.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/tests/test_visual_cortex.py)
- Tests Gabor filter orientations, HMAX produces invariant features, disparity tuning curves
---
### Phase 4: Hippocampal Complex (`hippocampus/`)
The fast-learning, index-memory, pattern-differentiation engine. This replaces MCMC by providing rapid one-shot binding and retrieval.
#### [NEW] [dentate_gyrus.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/hippocampus/dentate_gyrus.py)
- Pattern separation via sparse expansion coding
- Input → high-dimensional sparse representation (expansion ratio ~5–10×)
- Winner-take-all competitive inhibition
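A toy pattern-separation stage, using a random expansion matrix and top-k winner-take-all (sizes illustrative): two similar inputs come out noticeably less similar.

```python
import numpy as np

def pattern_separate(x, W, k):
    # Sparse expansion + winner-take-all: project into a larger space,
    # keep only the top-k units active (binary code).
    h = W @ x
    out = np.zeros_like(h)
    out[np.argsort(h)[-k:]] = 1.0
    return out

rng = np.random.default_rng(0)
W = rng.standard_normal((500, 64))        # ~8x expansion of a 64-d input
a = rng.standard_normal(64)
b = a + 0.1 * rng.standard_normal(64)     # a slightly perturbed copy
sa = pattern_separate(a, W, k=25)
sb = pattern_separate(b, W, k=25)
overlap_in = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
overlap_out = sa @ sb / 25.0              # fraction of shared active units
```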
#### [NEW] [ca3_autoassociation.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/hippocampus/ca3_autoassociation.py)
- Attractor network for pattern completion
- Recurrent connections with Hebbian learning
- Given partial input, settles to stored pattern
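Pattern completion can be sketched as a classic Hopfield-style autoassociator (a simplification of CA3 dynamics, class name hypothetical):

```python
import numpy as np

class CA3:
    """Hopfield-style autoassociator: Hebbian storage, iterative recall."""
    def __init__(self, n):
        self.W = np.zeros((n, n))

    def store(self, p):
        # Hebbian outer-product learning for a +/-1 pattern; no self-connections.
        self.W += np.outer(p, p)
        np.fill_diagonal(self.W, 0.0)

    def complete(self, cue, steps=10):
        # Iterate the recurrent dynamics until the state settles.
        s = cue.copy()
        for _ in range(steps):
            s = np.sign(self.W @ s)
            s[s == 0] = 1.0
        return s

rng = np.random.default_rng(1)
n = 200
pattern = rng.choice([-1.0, 1.0], size=n)
net = CA3(n)
net.store(pattern)
cue = pattern.copy()
cue[:60] = 0.0                     # degrade 30% of the cue
recalled = net.complete(cue)       # settles back to the stored pattern
```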
#### [NEW] [ca1_comparator.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/hippocampus/ca1_comparator.py)
- Match/mismatch detection between CA3 recall and direct entorhinal input
- Novelty signal generation
- Drives encoding vs retrieval mode switching
#### [NEW] [entorhinal_cortex.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/hippocampus/entorhinal_cortex.py)
- Grid-cell-like spatial coding (hexagonal pattern formation via self-organization)
- Conjunctive representations (space Γ— item)
#### [NEW] [index_memory.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/hippocampus/index_memory.py)
- **Key innovation for BPL replacement:** one-shot binding of cortical representations
- Store: bind HMAX feature vector ↔ label in single exposure
- Retrieve: given new input, find nearest stored representation
- "Good enough" threshold (~60%) + gap filling from core knowledge priors
- No MCMC — just direct hippocampal fast-mapping
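A minimal index-memory sketch, assuming cosine similarity and the ~60% threshold described above (class name and dimensions hypothetical):

```python
import numpy as np

class IndexMemory:
    """One-shot index: bind a feature vector to a label in a single
    exposure; retrieve by cosine similarity with a 'good enough' gate."""
    def __init__(self, threshold=0.6):
        self.keys, self.labels = [], []
        self.threshold = threshold

    def store(self, features, label):
        self.keys.append(features / np.linalg.norm(features))
        self.labels.append(label)

    def retrieve(self, features):
        q = features / np.linalg.norm(features)
        sims = np.array([k @ q for k in self.keys])
        best = int(np.argmax(sims))
        if sims[best] >= self.threshold:
            return self.labels[best], float(sims[best])
        return None, float(sims[best])    # defer to slower canvas matching

rng = np.random.default_rng(2)
mem = IndexMemory()
protos = {d: rng.standard_normal(128) for d in range(3)}
for d, vec in protos.items():
    mem.store(vec, d)                                 # one exposure each
label, conf = mem.retrieve(protos[1] + 0.3 * rng.standard_normal(128))
unknown, low = mem.retrieve(rng.standard_normal(128))  # novel input
```

Returning `None` below threshold is the hand-off point to the Distortable Canvas refinement in Phase 7.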
#### [NEW] [replay.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/hippocampus/replay.py)
- Memory consolidation via offline replay
- Strengthens hippocampal→cortical transfer
#### [NEW] [test_hippocampus.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/tests/test_hippocampus.py)
- Tests pattern separation orthogonality, pattern completion from partial cues, one-shot store/retrieve accuracy
---
### Phase 5: Spelke's Core Knowledge (`core_knowledge/`)
Innate priors — not tabula rasa. These are "believed, not computed."
#### [NEW] [object_system.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/core_knowledge/object_system.py)
- Object permanence: objects persist when occluded
- Cohesion: objects move as bounded wholes
- Contact: objects don't pass through each other
- Continuity: objects trace continuous paths
- Implemented as hard constraint priors on object state transitions
#### [NEW] [agent_system.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/core_knowledge/agent_system.py)
- Goal-directedness detection: efficient action toward goals
- Contingency: agents respond to other agents
- Self-propulsion: agents can initiate motion
#### [NEW] [number_system.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/core_knowledge/number_system.py)
- Approximate Number System (ANS): Weber ratio-based numerosity
- Subitizing: exact enumeration for ≤4 items
- Ordinal comparison
#### [NEW] [geometry_system.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/core_knowledge/geometry_system.py)
- Geometric/spatial relations (left, right, above, below, inside, outside)
- **Boosted by Distortable Canvas** from oneandtrulyone paper
- Smooth deformations as canvas-based geometric transformations
- Surface layout representations
#### [NEW] [social_system.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/core_knowledge/social_system.py)
- Social evaluation: helper vs hinderer distinction
- In-group preference priors
#### [NEW] [physics_system.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/core_knowledge/physics_system.py)
- **Believed, not computed** — these are hardcoded priors on world dynamics:
- Gravity: objects fall downward (constant downward acceleration prior)
- Friction: moving objects slow down without force
- Mass: heavier objects are harder to move
- Elasticity: objects bounce on collision
- Support: unsupported objects fall
- Critical for Breakout: ball trajectory prediction, paddle physics understanding
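As one possible encoding of these priors for Breakout, a closed-form trajectory prediction with elastic wall bounces (function name and geometry assumptions, e.g. y increasing toward the paddle, are illustrative):

```python
def predict_ball_x(x, y, vx, vy, paddle_y, width):
    """Hardcoded physics prior: straight-line flight with elastic
    reflection off the side walls (no gravity on the ball).
    Returns the x position where the ball crosses the paddle row."""
    if vy <= 0:
        return x                    # ball moving away; hold position
    t = (paddle_y - y) / vy         # time to reach the paddle row
    raw = x + vx * t                # unfolded straight-line x
    period = 2 * width              # fold back to account for wall bounces
    raw %= period
    return raw if raw <= width else period - raw

# one bounce off the right wall: 10 -> 40 (wall) -> back to 30
target = predict_ball_x(10.0, 0.0, 20.0, 5.0, 10.0, 40.0)
```

Because this is a believed prior rather than a learned model, the agent can aim the paddle from the very first episode.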
#### [NEW] [test_core_knowledge.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/tests/test_core_knowledge.py)
- Tests object permanence tracking, numerosity discrimination (Weber ratio), physics predictions match intuition
---
### Phase 6: Neocortex + Attention (`neocortex/`, `attention/`)
Higher cognitive processing, predictive coding, and precision-based attention.
#### [NEW] [predictive_coding.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/neocortex/predictive_coding.py)
- Full hierarchical predictive coding (Friston Box 3)
- SG (supragranular) layer: prediction errors (superficial pyramidal cells)
- L4: state estimation
- IG (infragranular) layer: predictions (deep pyramidal cells)
- Recognition dynamics via gradient descent on free-energy
#### [NEW] [prefrontal.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/neocortex/prefrontal.py)
- Working memory buffer (limited capacity ~7±2)
- Executive control: task switching, inhibition
- Goal maintenance
#### [NEW] [temporal.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/neocortex/temporal.py)
- Object recognition pathway (ventral "what" stream terminus)
- Semantic memory / category formation
#### [NEW] [parietal.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/neocortex/parietal.py)
- Spatial attention, sensorimotor integration
- Coordinate transformations (retinotopic → egocentric → allocentric)
#### [NEW] [superior_colliculus.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/attention/superior_colliculus.py)
- Bottom-up salience map (intensity, color, orientation contrasts)
- Saccade target selection
#### [NEW] [precision_modulation.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/attention/precision_modulation.py)
- Attention as precision optimization (Friston): precision parameters λ follow a gradient descent on free energy, λ̇ ∝ −∂F/∂λ
- Synaptic gain control per hierarchical level
- Top-down precision weighting of prediction errors
#### [NEW] [competition.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/attention/competition.py)
- Hemifield competition (visual field rivalry)
- Biased competition model (Desimone & Duncan)
---
### Phase 7: One-Shot Learning (`learning/`)
The Distortable Canvas + hippocampal fast-mapping pipeline for one-shot classification.
#### [NEW] [distortable_canvas.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/learning/distortable_canvas.py)
- From oneandtrulyone paper:
- Images as smooth functions on elastic 2D canvas
- Canvas deformation field u(x,y), v(x,y) — smooth via Gaussian regularization
- Color distortion: pixel-wise intensity distance
- Canvas distortion: geometric warping energy (Jacobian penalty)
- Dual distance = color_dist + λ × canvas_dist
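A simplified reading of the dual distance (a sketch, not the paper's exact formulation; the smoothing sigma, λ weighting, and deformation-energy term are assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def dual_distance(img_a, img_b, u, v, lam=0.5, sigma=2.0):
    # Warp img_a by a Gaussian-smoothed deformation field (u, v), then:
    #   color distance  = mean squared intensity difference after warping
    #   canvas distance = energy of the deformation field itself
    u = gaussian_filter(u, sigma)
    v = gaussian_filter(v, sigma)
    yy, xx = np.mgrid[0:img_a.shape[0], 0:img_a.shape[1]].astype(float)
    warped = map_coordinates(img_a, [yy + v, xx + u], order=1, mode="nearest")
    color_dist = np.mean((warped - img_b) ** 2)
    canvas_dist = np.mean(u**2 + v**2)
    return color_dist + lam * canvas_dist

a = np.zeros((16, 16))
a[4:12, 4:12] = 1.0
zero = np.zeros_like(a)
d0 = dual_distance(a, a, zero, zero)          # identical, no deformation
d_shift = dual_distance(a, np.roll(a, 3, axis=1), zero, zero)
```

The λ trade-off is the key knob: a large λ penalizes geometric warping, so two images only match cheaply when they differ by a small, smooth deformation.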
#### [NEW] [amgd.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/learning/amgd.py)
- Abstracted Multi-level Gradient Descent from oneandtrulyone
- Coarse-to-fine optimization of canvas deformation
- Multiple resolution levels, warm-starting from coarser solutions
- Step size adaptation
#### [NEW] [one_shot_classifier.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/learning/one_shot_classifier.py)
- Full pipeline: Retina → V1 Gabor → HMAX → Hippocampal Index Memory → Classify
- For each test image: extract HMAX features, compare to stored prototypes
- Distortable Canvas distance as similarity metric for ambiguous cases
- "Good enough" (>60%) confidence β†’ classify; otherwise β†’ refine with canvas
#### [NEW] [hebbian.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/learning/hebbian.py)
- Hebbian learning: Δw = η × pre × post
- Anti-Hebbian for decorrelation
- BCM rule for selectivity
- Used for online adaptation within cortical layers
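The rules above might be sketched as follows; note that Oja's rule stands in here as a bounded Hebb variant (it is not named in the plan, and all parameters are illustrative):

```python
import numpy as np

def hebbian_step(w, pre, post, eta=0.01):
    # Plain Hebb for a weight matrix: delta_w = eta * outer(post, pre).
    return w + eta * np.outer(post, pre)

def oja_step(w, pre, eta=0.005):
    # Oja's rule: Hebb plus a decay that keeps the weight vector bounded;
    # w converges toward the input's first principal direction.
    post = w @ pre
    return w + eta * post * (pre - post * w)

rng = np.random.default_rng(3)
# 2-D inputs whose dominant variance lies along (1, 1)/sqrt(2)
raw = rng.standard_normal((2000, 2)) * np.array([3.0, 0.3])
c, s = np.cos(np.pi / 4), np.sin(np.pi / 4)
data = raw @ np.array([[c, s], [-s, c]])
w = rng.standard_normal(2) * 0.1
for x in data:
    w = oja_step(w, x)
pc1 = np.array([c, s])
align = abs(w @ pc1) / np.linalg.norm(w)   # cosine with the true direction
```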
---
### Phase 8: Action & Active Inference (`action/`)
Action as free-energy minimization — NOT POMDP/VI.
#### [NEW] [active_inference.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/action/active_inference.py)
- Action selection via ȧ = −∂F/∂a (Friston Box 1)
- Action changes sensory input to fulfill predictions
- Prior expectations about desired states → action to reach them
- For Breakout: prior = "ball stays in play" → paddle moves to predicted ball position
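A one-dimensional toy of action as free-energy descent, with a hypothetical quadratic F encoding the prior "paddle under the ball":

```python
def active_inference_step(paddle_x, predicted_ball_x, precision=1.0, lr=0.5):
    # The agent 'expects' the paddle to sit under the predicted ball
    # position: F = 0.5 * precision * (paddle_x - predicted_ball_x)^2.
    # Action follows a_dot = -dF/da.
    dF = precision * (paddle_x - predicted_ball_x)
    return paddle_x - lr * dF

x = 0.0
for _ in range(20):
    x = active_inference_step(x, predicted_ball_x=8.0)
# the paddle converges under the predicted interception point
```

Rather than maximizing reward directly, the action simply fulfills the prior; the reward-like behavior falls out of what the agent expects to sense.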
#### [NEW] [motor_primitives.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/action/motor_primitives.py)
- Library of basic motor actions (move left, move right, stay, fire)
- Motor commands mapped from continuous action signals
#### [NEW] [reflex_arc.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/action/reflex_arc.py)
- Innate reflexive behaviors (e.g., tracking moving objects)
- Fast pathway bypassing full cortical processing
---
### Phase 9: Integrated Agent (`agent/`)
Wire everything together for benchmarks.
#### [NEW] [brain.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/agent/brain.py)
- Full brain integration: all modules connected
- Processing pipeline: Retina → V1-V5 → Hippocampus ↔ Neocortex → Action
- Free-energy minimization loop running across all levels
- Sparse "lazy" processing — only activates needed pathways
#### [NEW] [mnist_agent.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/agent/mnist_agent.py)
- One-shot MNIST classification agent
- Stores 1 exemplar per digit (10 total)
- Pipeline: raw image → retinal processing → V1 Gabor → HMAX features → hippocampal matching + Distortable Canvas refinement
#### [NEW] [breakout_agent.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/hippocampaif/agent/breakout_agent.py)
- Breakout game agent using gymnasium[atari] + ale-py
- Physics core knowledge: predicts ball trajectory (gravity-free, elastic bouncing)
- Visual tracking: retina + V1 motion energy β†’ ball/paddle/brick detection
- Hippocampal fast-learning: after first 1-2 episodes, learns brick patterns and optimal strategies
- Active inference: prior = "keep ball alive" + "maximize brick destruction"
---
### Phase 10: Dependencies & Setup
#### [NEW] [setup.py](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/setup.py)
- Package setup with minimal dependencies: `numpy`, `scipy`, `Pillow`
- Optional: `gymnasium[atari]`, `ale-py` for Breakout benchmark only
#### [NEW] [requirements.txt](file:///C:/Users/User/Desktop/debugrem/clawd-one-and-only-one-shot/requirements.txt)
- `numpy>=1.24`, `scipy>=1.10`, `Pillow>=9.0`
- `gymnasium[atari]>=1.0`, `ale-py>=0.9` (Breakout only)
---
## Verification Plan
### Automated Tests
Each phase includes unit tests that verify **real** functionality (not stubs):
```bash
# Run all tests
python -m pytest hippocampaif/tests/ -v
# Phase-by-phase
python -m pytest hippocampaif/tests/test_core.py -v # Free-energy convergence, message passing
python -m pytest hippocampaif/tests/test_retina.py -v # DoG, motion energy
python -m pytest hippocampaif/tests/test_visual_cortex.py -v # Gabor orientations, HMAX invariance
python -m pytest hippocampaif/tests/test_hippocampus.py -v # Pattern separation/completion, index memory
python -m pytest hippocampaif/tests/test_core_knowledge.py -v # Object permanence, physics, numerosity
python -m pytest hippocampaif/tests/test_neocortex.py -v # Predictive coding convergence
python -m pytest hippocampaif/tests/test_learning.py -v # Distortable Canvas, AMGD, one-shot
python -m pytest hippocampaif/tests/test_action.py -v # Active inference action selection
```
### Benchmark Tests (End-to-End)
```bash
# MNIST one-shot (target: >90% accuracy with 1 sample per digit)
python -m pytest hippocampaif/tests/test_mnist.py -v -s
# Breakout mastery (target: master under 5 episodes)
python -m pytest hippocampaif/tests/test_breakout.py -v -s
```
### Manual Verification
- Inspect HMAX feature visualizations to confirm Gabor filters look biologically plausible
- Review Distortable Canvas deformation fields to confirm smooth warping
- Monitor free-energy curves during perception to confirm they decrease (convergence)
- Watch Breakout agent play to verify it tracks the ball and learns brick patterns
---
## Implementation Order & Dependencies
| Phase | Component | Depends On | Estimated Effort |
|-------|-----------|------------|-----------------|
| 1 | Core infrastructure | Nothing | Foundation |
| 2 | Retina | Core | Small |
| 3 | Visual Cortex V1-V5 + HMAX | Core, Retina | Large |
| 4 | Hippocampus | Core | Medium |
| 5 | Core Knowledge | Core | Medium |
| 6 | Neocortex + Attention | Core, Visual Cortex | Medium |
| 7 | One-Shot Learning | Visual Cortex, Hippocampus, Core Knowledge | Medium |
| 8 | Action | Core, Core Knowledge | Small |
| 9 | Integrated Agent | All above | Medium |
| 10 | Setup & packaging | All above | Small |
> [!TIP]
> **Build-then-verify loop**: Each phase ends with passing tests before moving to the next. This prevents cascading errors and ensures each biological component genuinely works.