# Implementation Plan - Vector-HaSH Financial Trader
|
|
| ## Objective |
Implement the Vector-HaSH algorithm for price prediction on XAUUSD 3-minute bars, using only price-derived features, inside Google Colab (T4 GPU). Evaluate the strategy via strict anchored Walk-Forward Optimization (WFO) to eliminate look-ahead bias.
|
|
| ## Proposed Strategy Architecture |
|
|
| ### 1. Feature Engineering |
| We will rely **ONLY** on pure price transformations. |
| - Compute rolling features: Log returns, rolling volatility, and sequence windows of size $W$ (e.g. 15 bars). Let the state at time $t$ be $\mathbf{x}_t \in \mathbb{R}^{W}$. |
- **Discrete Quantization**: To map the continuous state vectors onto discrete elements, analogous to the visual sensory codebook ("sbook") in Vector-HaSH, we will use `flash-kmeans` (with $K$ clusters) to quantize the historical $\mathbf{x}_t$ vectors into discrete sensory classes $\mathbf{s}_t$.
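
A dependency-free sketch of this step (the window size `W`, volatility window, and `K` below are illustrative choices, and a plain NumPy Lloyd's-iteration k-means stands in for `flash-kmeans`):

```python
import numpy as np

def build_windows(close, W=15, vol_win=15):
    """Log returns, rolling volatility, and length-W state windows x_t.
    W and vol_win are illustrative, not values fixed by the plan."""
    r = np.diff(np.log(close))                      # log returns
    vol = np.array([r[max(0, i - vol_win + 1):i + 1].std()
                    for i in range(len(r))])        # rolling volatility
    # Stack the last W returns into a state vector x_t (causal: uses <= t only)
    X = np.stack([r[i - W + 1:i + 1] for i in range(W - 1, len(r))])
    return X, vol[W - 1:]

def kmeans_quantize(X, K=8, iters=20, seed=0):
    """Plain NumPy Lloyd's k-means, standing in for `flash-kmeans`."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), K, replace=False)].copy()  # init from data
    for _ in range(iters):
        d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)                        # nearest centroid = class s_t
        for k in range(K):
            if (labels == k).any():
                C[k] = X[labels == k].mean(0)
    return labels, C
```

In the WFO loop, the centroids `C` are fit on the training slice only and then reused to label test-slice windows.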
| ### 2. Vector-HaSH Memory Scaffold |
| Instead of a 2D spatial grid, we will use a **1D Continuous Track** (approximating time). |
| - **Grid Scaffold ($\mathbf{g}_t$)**: Synthesize multiscale 1D grid cell representations (using sine/cosine waves or cyclic shifts). |
| - **Place Cells ($\mathbf{p}_t$)**: Project Grid cells into a sparse higher-dimensional space: $\mathbf{p}_t = \sigma(\mathbf{W}_{pg} \mathbf{g}_t)$. |
| - **Hetero-associative Memory**: Train the sensory-to-place map $\mathbf{W}_{sp}$ dynamically using Recursive Least Squares (RLS), mimicking the [pseudotrain_2d_iterative_step](file:///C:/Users/User/Desktop/debugrem/Vector-HaSH-agent-trader/VectorHaSH-main/MTT.py#133-140) seen in [MTT.py](file:///C:/Users/User/Desktop/debugrem/Vector-HaSH-agent-trader/VectorHaSH-main/MTT.py). |
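
A minimal sketch of these three components, assuming one-hot modular phases for the 1D grid code, top-$k$ sparsification for place cells, and textbook RLS. The module periods, sparsity level, and RLS hyperparameters below are illustrative, not values taken from MTT.py:

```python
import numpy as np

def grid_code(t, periods=(3, 5, 7)):
    """Multiscale 1D grid code g_t: one-hot phase per module, concatenated."""
    parts = []
    for lam in periods:
        v = np.zeros(lam)
        v[t % lam] = 1.0                       # phase of this module at step t
        parts.append(v)
    return np.concatenate(parts)

def place_cells(g, W_pg, k=10):
    """Sparse place code p_t: top-k activations of a random projection of g_t."""
    a = W_pg @ g
    p = np.zeros_like(a)
    p[np.argsort(a)[-k:]] = 1.0                # keep only the k largest units
    return p

class RLSMap:
    """RLS learner for the sensory-to-place map W_sp.
    A generic RLS sketch, not a line-for-line port of MTT.py."""
    def __init__(self, n_in, n_out, delta=1.0, lam=1.0):
        self.W = np.zeros((n_out, n_in))
        self.P = np.eye(n_in) / delta          # inverse-correlation estimate
        self.lam = lam                         # forgetting factor
    def update(self, s, p):
        Ps = self.P @ s
        k = Ps / (self.lam + s @ Ps)           # RLS gain
        err = p - self.W @ s                   # recall error before update
        self.W += np.outer(err, k)
        self.P = (self.P - np.outer(k, Ps)) / self.lam
        return err
```

The returned `err` is exactly the recall-error signal consumed by the XGBoost wrapper in the next section.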
| ### 3. Machine Learning Wrapper (XGBoost) |
| - At time $t$, extract the *Memory Recall Error* ($\mathbf{s}_t - \hat{\mathbf{s}}_t$) and the *Place Cell Activations* ($\mathbf{p}_t$). |
| - Feed these VectorHaSH embeddings into an XGBoost Classifier/Regressor. |
| - Target: Next bar log return $r_{t+1}$ or direction $\text{sign}(r_{t+1})$. |
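
The feature/target assembly can be sketched as follows; the XGBoost fit itself is omitted to keep the example dependency-free, and the array names are hypothetical:

```python
import numpy as np

def make_dataset(recall_err, place_act, log_ret):
    """Assemble VectorHaSH embeddings into (features, target) pairs.

    recall_err: (T, D_s) recall errors s_t - s_hat_t
    place_act:  (T, D_p) place-cell activations p_t
    log_ret:    (T,)     log return of bar t

    Feature row t uses only information available at time t; the target is
    the NEXT bar's return r_{t+1}, so the last bar is dropped (no look-ahead).
    """
    X = np.hstack([recall_err, place_act])[:-1]  # features at t
    y = log_ret[1:]                              # target r_{t+1}
    return X, y

# The plan then fits e.g. an XGBoost classifier on sign(y) or a regressor
# on y; that call is omitted here.
```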
|
|
| ### 4. Anchored Walk-Forward Optimization |
To avoid look-ahead bias:
- Train/Test splits expand over time (half-open index ranges):
- Fold 1: Train $[0, T)$, Test $[T, T+H)$.
- Fold 2: Train $[0, T+H)$, Test $[T+H, T+2H)$.
- `flash-kmeans`, Vector-HaSH memory construction, and XGBoost fitting occur **ONLY** on the training slice of each fold and are applied out-of-sample on the test slice.
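
The anchored fold scheme above can be expressed as a small generator, where `first_train` and `horizon` correspond to $T$ and $H$:

```python
def anchored_wfo_folds(n, first_train, horizon):
    """Yield (train_idx, test_idx) half-open index ranges for anchored WFO.

    Training always starts at index 0 (anchored) and grows by `horizon`
    each fold; the test window is the next `horizon` bars.
    """
    end = first_train
    while end < n:
        test_end = min(end + horizon, n)       # last fold may be shorter
        yield range(0, end), range(end, test_end)
        end = test_end
```

Every fold's fitting pipeline (quantizer, memory, XGBoost) should see only `train_idx` data before predicting on `test_idx`.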
|
|
| ### 5. Mono-Script Colab Implementation (`vector_hash_trader.py`) |
- Vectorized with PyTorch (`device='cuda'`), or with CuPy/cuML plus GPU-accelerated XGBoost.
| - Plotting module included: cumulative returns, drawdown, WFO heatmaps, and memory collapse analysis. |
|
|
| ## Verification |
- Assert strictly causal indexing: features at time $t$ may use only data up to and including $t$, and the target $r_{t+1}$ is defined afterwards (no $t \to t+1$ leakage before target definition).
| - Verify standard performance metrics: Sharpe Ratio, Sortino Ratio, Max Drawdown. |
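
A sketch of the metric computations on per-bar log returns; the annualization factor for 3-minute bars is an assumption (24-hour market, 20 bars/hour) and should be adjusted to actual trading hours:

```python
import numpy as np

def perf_metrics(returns, bars_per_year=365 * 24 * 20):
    """Sharpe, Sortino, and max drawdown from per-bar log returns.
    `bars_per_year` is an illustrative assumption for 3-minute bars."""
    mu, sd = returns.mean(), returns.std()
    downside = returns[returns < 0].std() if (returns < 0).any() else np.nan
    ann = np.sqrt(bars_per_year)                # annualization factor
    equity = np.cumsum(returns)                 # cumulative log equity
    drawdown = np.maximum.accumulate(equity) - equity
    return {
        "sharpe": ann * mu / sd if sd > 0 else np.nan,
        "sortino": ann * mu / downside if downside else np.nan,
        "max_drawdown": drawdown.max(),
    }
```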
|
|