# aurekai/fpqx-alignments
Feature-to-proxy quantization (FPQx) alignment repository for Aurekai. Enables zero-shot model-to-model translation and cross-model semantic routing.
## Overview
FPQx alignments establish learned mappings between feature spaces of different models, enabling Aurekai to route semantic queries across heterogeneous model architectures. This repository hosts:
- FPQx Alignment Files: Learned model-to-model feature mappings (`.akfpqx`, `.bffpqx`)
- Alignment Metadata: Performance metrics, training details, and validation results
- Conversion Tools: CLI utilities for translating activations between model spaces
- Benchmarks: Cross-model consistency and downstream task performance
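At its core, applying such an alignment is a projection from the source model's activation space into the target model's. A minimal, dependency-free sketch with a toy matrix (illustrative only; real alignments also carry quantization boundaries and proxy indicators):

```python
import math

def apply_alignment(W, x):
    """Apply a learned projection W (target_dim x source_dim rows)
    to a source-model activation vector x."""
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) for row in W]

def cosine(a, b):
    """Cosine similarity, the semantic-preservation metric reported below."""
    dot = sum(p * q for p, q in zip(a, b))
    return dot / (math.sqrt(sum(p * p for p in a)) * math.sqrt(sum(q * q for q in b)))

# Toy 2x3 projection: maps a 3-dim "source" space into a 2-dim "target" space.
W = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 1.0]]
x = [0.5, 1.0, 2.0]
print(apply_alignment(W, x))  # [0.5, 3.0]
```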
## Quick Start

```shell
# Download the Qwen3 → LLaMA3 alignment
curl -L https://huggingface.co/aurekai/fpqx-alignments/resolve/main/qwen3-to-llama3.akfpqx \
  -o qwen3-to-llama3.akfpqx

# Use with the Aurekai runtime
akai run <recipe> \
  --fpqx-alignment ./qwen3-to-llama3.akfpqx \
  --target-model llama3

# Convert activations between models
akai fpqx:align \
  --source-activation weights.qwen3.bin \
  --alignment qwen3-to-llama3.akfpqx \
  --output weights.llama3.bin
```
## Format Specifications

### Aurekai Format (`.akfpqx`)
Binary FPQx alignment in Aurekai native format:
```
[Header: 16 bytes]
- Magic: "AKFPQX"
- Version: 1
- Alignment stem: "qwen3-to-llama3"
[Source Model Spec: 64 bytes]
- Model name
- Dimension
- Quantization scheme
[Target Model Spec: 64 bytes]
- Model name
- Dimension
- Quantization scheme
[Alignment Matrix: variable]
- Feature projection weights
- Quantization boundaries
- Proxy indicators
[Metadata: variable]
- Training date
- Accuracy metrics
- Hardware specs
[Signature: 32 bytes (SHA256)]
```
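Given the byte budget above, files can be sanity-checked with a few lines of `struct`. The spec does not pin down field widths inside the 16-byte header, so the split below (6-byte magic, 2-byte version, 8 reserved bytes) is purely an assumption for illustration:

```python
import struct

# Assumed layout filling the documented 16-byte header:
# 6-byte magic, unsigned 16-bit version, 8 reserved/padding bytes.
HEADER = struct.Struct("<6sH8x")

def parse_header(blob: bytes):
    """Unpack and validate the leading header of an .akfpqx blob."""
    magic, version = HEADER.unpack(blob[:HEADER.size])
    if magic != b"AKFPQX":
        raise ValueError("not an .akfpqx file")
    return {"magic": magic.decode(), "version": version}

# Round-trip a synthetic header.
blob = HEADER.pack(b"AKFPQX", 1)
print(parse_header(blob))  # {'magic': 'AKFPQX', 'version': 1}
```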
### Legacy Bonfyre Format (`.bffpqx`)
Legacy format kept for backward compatibility with the Bonfyre runtime:
- Same underlying alignment data
- Different metadata layout and serialization
- Auto-converted by Aurekai runtime
## Available Alignments

### Qwen3-8B → LLaMA3-8B

- File: `qwen3-to-llama3.akfpqx` / `qwen3-to-llama3.bffpqx`
- Direction: Qwen3 → LLaMA3 (reversible)
- Accuracy: 94.2% semantic preservation (evaluated on 10K examples)
- Latency: ~1.2ms per sample alignment
- Training: Calibrated on shared instruction tuning corpus
- Size: ~8 MB
Performance Metrics:
- Activation MSE: 0.003
- Cosine similarity (after alignment): 0.96
- Downstream task delta: +0.3% average
- Zero-shot transfer success: 89%
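Activation MSE and cosine similarity are standard definitions; a dependency-free sketch of how such figures are computed over paired activation vectors (toy values, not the repository's evaluation harness):

```python
import math

def activation_mse(a, b):
    """Mean squared error between aligned and reference activations."""
    return sum((p - q) ** 2 for p, q in zip(a, b)) / len(a)

def cosine_similarity(a, b):
    """Cosine similarity between aligned and reference activations."""
    dot = sum(p * q for p, q in zip(a, b))
    na = math.sqrt(sum(p * p for p in a))
    nb = math.sqrt(sum(q * q for q in b))
    return dot / (na * nb)

aligned = [0.9, 2.1, 3.0]   # hypothetical output of fpqx:align
target = [1.0, 2.0, 3.0]    # hypothetical reference activation
print(round(activation_mse(aligned, target), 4))     # 0.0067
print(round(cosine_similarity(aligned, target), 4))  # 0.9993
```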
## Adding New Alignments
To contribute a new alignment:
1. Train the alignment matrix using the Aurekai alignment pipeline:

   ```shell
   akai fpqx:train \
     --source-model qwen3-8b \
     --target-model llama3-8b \
     --calibration-set corpus.jsonl \
     --output alignment.akfpqx
   ```

2. Validate alignment quality:

   ```shell
   akai fpqx:validate \
     --alignment alignment.akfpqx \
     --test-set validation.jsonl
   ```

3. Submit a PR with the alignment file and validation report.
## Integration with Aurekai

### Environment Variables

```shell
export AUREKAI_FPQX_ALIGNMENT=./qwen3-to-llama3.akfpqx
export AUREKAI_TARGET_MODEL=llama3-8b
export AUREKAI_ALIGNMENT_CACHE=/tmp/alignment-cache
```
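A hypothetical sketch of how a client might resolve these variables, treating `/tmp/alignment-cache` as an assumed fallback (the runtime's actual precedence rules are not documented here):

```python
import os

def load_fpqx_config(env=os.environ):
    """Collect alignment settings from the documented environment variables.
    The cache-dir default is an assumption for this sketch."""
    return {
        "alignment": env.get("AUREKAI_FPQX_ALIGNMENT"),
        "target_model": env.get("AUREKAI_TARGET_MODEL"),
        "cache_dir": env.get("AUREKAI_ALIGNMENT_CACHE", "/tmp/alignment-cache"),
    }

cfg = load_fpqx_config({
    "AUREKAI_FPQX_ALIGNMENT": "./qwen3-to-llama3.akfpqx",
    "AUREKAI_TARGET_MODEL": "llama3-8b",
})
print(cfg["cache_dir"])  # /tmp/alignment-cache
```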
### Manifest Registration

`aurekai.manifest.json`:

```json
{
  "fpqx_alignments": [
    {
      "stem": "qwen3-to-llama3",
      "akfpqx": "aurekai/fpqx-alignments/qwen3-to-llama3.akfpqx",
      "bffpqx": "aurekai/fpqx-alignments/qwen3-to-llama3.bffpqx",
      "accuracy": 0.942,
      "bidirectional": true
    }
  ]
}
```
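A minimal sketch of consuming such a manifest entry, assuming a hypothetical `find_alignment` helper that resolves `<source>-to-<target>` stems and honors the `bidirectional` flag:

```python
import json

manifest = json.loads("""
{
  "fpqx_alignments": [
    {"stem": "qwen3-to-llama3",
     "akfpqx": "aurekai/fpqx-alignments/qwen3-to-llama3.akfpqx",
     "bffpqx": "aurekai/fpqx-alignments/qwen3-to-llama3.bffpqx",
     "accuracy": 0.942,
     "bidirectional": true}
  ]
}
""")

def find_alignment(manifest, source, target):
    """Look up an alignment by its "<source>-to-<target>" stem;
    bidirectional entries also match the reverse direction."""
    for entry in manifest["fpqx_alignments"]:
        if entry["stem"] == f"{source}-to-{target}":
            return entry
        if entry.get("bidirectional") and entry["stem"] == f"{target}-to-{source}":
            return entry
    return None

print(find_alignment(manifest, "llama3", "qwen3")["stem"])  # qwen3-to-llama3
```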
### Activation Translation

```shell
# Direct translation of model activations
akai fpqx:align \
  --source-model qwen3-8b \
  --target-model llama3-8b \
  --input-activations source-layer-10.bin \
  --alignment qwen3-to-llama3.akfpqx \
  --output target-layer-10.bin

# Batch alignment
akai fpqx:batch-align \
  --alignment qwen3-to-llama3.akfpqx \
  --input-dir ./qwen3-activations/ \
  --output-dir ./llama3-activations/
```
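The outer loop of a batch step like `fpqx:batch-align` can be sketched as follows; this is an assumption about its behavior (the real tool applies the `.akfpqx` projection per file), shown here with an identity placeholder for the alignment function:

```python
import glob
import os
import tempfile

def batch_align(input_dir, output_dir, align_fn):
    """Run align_fn over every activation dump in input_dir and mirror
    the file names into output_dir. align_fn stands in for the real
    per-file alignment; here it is caller-supplied."""
    os.makedirs(output_dir, exist_ok=True)
    outputs = []
    for path in sorted(glob.glob(os.path.join(input_dir, "*.bin"))):
        out_path = os.path.join(output_dir, os.path.basename(path))
        with open(path, "rb") as src, open(out_path, "wb") as dst:
            dst.write(align_fn(src.read()))
        outputs.append(out_path)
    return outputs

# Demo with an identity "alignment" on a synthetic activation dump.
src_dir, dst_dir = tempfile.mkdtemp(), tempfile.mkdtemp()
with open(os.path.join(src_dir, "layer-10.bin"), "wb") as f:
    f.write(b"\x01\x02")
paths = batch_align(src_dir, dst_dir, align_fn=lambda raw: raw)
print([os.path.basename(p) for p in paths])  # ['layer-10.bin']
```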
### Cross-Model Routing

FPQx alignments enable semantic routing across models:

```javascript
// In an Aurekai operator
const router = new SemanticRouter({
  models: ["qwen3-8b", "llama3-8b"],
  alignments: ["qwen3-to-llama3.akfpqx"]
});

// Route a query to the appropriate model; model translation and
// cache harmonization are handled automatically.
const response = await router.query(semanticQuery);
```
## Validation & Benchmarks
Each alignment includes validation metrics:
- Semantic Preservation: Cosine similarity after alignment
- Task Performance: Downstream accuracy delta
- Zero-shot Transfer: Cross-model capability retention
- Latency: Per-sample alignment time
- Memory: Peak memory during alignment computation
Run benchmarks locally:
```shell
akai fpqx:benchmark \
  --alignment qwen3-to-llama3.akfpqx \
  --benchmark-suite semantic-routing
```
## Tools & Commands

- `akai fpqx:train`: Train a new alignment between models
- `akai fpqx:validate`: Validate alignment quality
- `akai fpqx:align`: Translate activations between models
- `akai fpqx:batch-align`: Batch alignment processing
- `akai fpqx:benchmark`: Run performance benchmarks
- `fpqx_convert.py`: Legacy Bonfyre → Aurekai format converter
## Related Repositories
- Main Aurekai Repo: https://github.com/aurekai/aurekai
- Model Memory: https://huggingface.co/aurekai/model-memory
- SAE Dictionaries: https://huggingface.co/aurekai/sae-dictionaries
- Semantic Cache Bench: https://huggingface.co/aurekai/semantic-cache-bench
## Citation
If you use these FPQx alignments, please cite:
```bibtex
@dataset{aurekai_fpqx_alignments_2026,
  title={Aurekai FPQx Alignment Repository},
  author={Aurekai Community},
  year={2026},
  url={https://huggingface.co/aurekai/fpqx-alignments}
}
```
## License

Licensed under the Aurekai Open Source License. See the main Aurekai repository for full license terms.