Codette LoRA Adapters


8 domain-specialized LoRA adapters for the Codette cognitive architecture, a sovereign modular AI framework for ethical multi-agent reasoning.

Author: Jonathan Harrison · ORCID · Raiff's Bits LLC


Base Model

meta-llama/Llama-3.1-8B-Instruct with QLoRA (4-bit quantization)
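As a rough back-of-envelope illustration (not a figure from this card), 4-bit quantization is what makes an 8B-parameter base model practical on modest hardware; the numbers below count weights only and ignore activations, KV cache, and the LoRA parameters:

```python
# Approximate weight-only memory for an 8B-parameter model at different
# precisions. Illustrative arithmetic, not measured values.
params = 8e9
bytes_per_param = {"fp16": 2.0, "int8": 1.0, "4-bit": 0.5}

for fmt, nbytes in bytes_per_param.items():
    print(f"{fmt}: ~{params * nbytes / 1e9:.0f} GB")
# fp16: ~16 GB, int8: ~8 GB, 4-bit: ~4 GB
```

The 4-bit figure is why QLoRA fine-tuning fits on a single A10G (24 GB) with headroom for optimizer state on the small LoRA weights.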

Adapter Configuration

| Parameter | Value |
|---|---|
| PEFT Type | LoRA |
| Rank (r) | 16 |
| Alpha | 32 |
| Dropout | 0.05 |
| Target Modules | q_proj, k_proj, v_proj, o_proj |
| Bias | none |
| Task Type | CAUSAL_LM |
| Quantization | 4-bit (QLoRA) |
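The table maps directly onto the fields of each adapter's `adapter_config.json` (field names follow the PEFT `LoraConfig` schema); a minimal sketch as a plain dict:

```python
# The hyperparameters above as they would appear in adapter_config.json.
adapter_config = {
    "peft_type": "LORA",
    "r": 16,
    "lora_alpha": 32,
    "lora_dropout": 0.05,
    "target_modules": ["q_proj", "k_proj", "v_proj", "o_proj"],
    "bias": "none",
    "task_type": "CAUSAL_LM",
}

# Effective scaling applied to each LoRA update: alpha / r
scaling = adapter_config["lora_alpha"] / adapter_config["r"]
print(scaling)  # 2.0
```

With alpha = 2r, each adapter's low-rank update is applied at a fixed 2x scale regardless of rank, a common convention that keeps the update magnitude stable if r is later changed.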

Adapters

Each adapter specializes in a distinct cognitive perspective, trained on curated perspective-tagged datasets:

| Adapter | Description | Training Examples | Status |
|---|---|---|---|
| newton/ | Analytical physics reasoning: Newtonian precision and scientific method | 3,000 | ✅ Uploaded |
| davinci/ | Creative invention thinking: DaVinci's cross-disciplinary creativity | 2,500 | ✅ Uploaded |
| empathy/ | Emotional understanding and compassionate reasoning | 2,500 | ✅ Uploaded |
| philosophy/ | Conceptual and philosophical reasoning with depth and rigor | 2,000 | ✅ Uploaded |
| quantum/ | Probabilistic and quantum-inspired reasoning | 2,000 | ✅ Uploaded |
| consciousness/ | Recursive cognition and RC+ξ framework reasoning | 3,000 | ✅ Uploaded |
| multi_perspective/ | Multi-perspective synthesis across analytical lenses | 2,500 | ✅ Uploaded |
| systems_architecture/ | AI systems architecture and design reasoning | 2,000 | 🔄 Training |

Total: 19,500 training examples across 8 cognitive domains

Training Details

  • Epochs: 3 per adapter
  • Hardware: NVIDIA A10G (cloud) + Intel Arc 140V / CPU (local)
  • Framework: Hugging Face TRL (SFTTrainer) + PEFT
  • Training Pipeline: Raiff1982/codette-training-lab
  • Novel contribution: Two GPU-free CPU training pipelines validated on consumer laptops (see paper)

Training Metrics (Newton adapter example)

| Metric | Value |
|---|---|
| Final Loss | ~0.071 |
| Mean Token Accuracy | 97.4% |
| Gradient Norm | ~0.05–0.13 |
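Mean token accuracy here is understood as the fraction of next-token predictions that match the labels, skipping padded positions; a minimal sketch (the helper name and the `-100` ignore index follow common Hugging Face convention, not this repo's code):

```python
def mean_token_accuracy(pred_ids, label_ids, ignore_index=-100):
    """Fraction of positions where the predicted token matches the label,
    skipping positions whose label equals ignore_index (padding/prompt)."""
    correct = total = 0
    for pred, label in zip(pred_ids, label_ids):
        if label == ignore_index:
            continue
        total += 1
        correct += (pred == label)
    return correct / total if total else 0.0

# 2 of the 3 non-ignored positions match
print(mean_token_accuracy([1, 2, 3, 4], [1, 2, 9, -100]))
```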

Usage

Load a single adapter

from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Load the base model in 4-bit (QLoRA-style) quantization.
# Passing load_in_4bit directly to from_pretrained is deprecated;
# use a BitsAndBytesConfig instead.
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B-Instruct",
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")

# Load the newton adapter from its subfolder in this repo
model = PeftModel.from_pretrained(
    base_model,
    "Raiff1982/codette-lora-adapters",
    subfolder="newton",
)

Load multiple adapters (multi-perspective reasoning)

from peft import PeftModel

# base_model is the quantized Llama-3.1-8B-Instruct loaded as above
model = PeftModel.from_pretrained(
    base_model,
    "Raiff1982/codette-lora-adapters",
    subfolder="newton",
    adapter_name="newton",
)

# Add additional perspectives
model.load_adapter("Raiff1982/codette-lora-adapters", subfolder="empathy", adapter_name="empathy")
model.load_adapter("Raiff1982/codette-lora-adapters", subfolder="davinci", adapter_name="davinci")

# Switch the active perspective
model.set_adapter("empathy")

How Adapters Fit in the Codette Architecture

┌─────────────────────────────────────────────────────┐
│  Codette Orchestrator                               │
├─────────────────────────────────────────────────────┤
│  Reasoning Forge (6 agents + Critic + Synthesis)    │
│    ┌─────────┐ ┌─────────┐ ┌─────────┐              │
│    │ Newton  │ │ DaVinci │ │ Empathy │  ...         │  ← LoRA adapters
│    └────┬────┘ └────┬────┘ └────┬────┘              │
│         └───────────┼───────────┘                   │
│                     ▼                               │
│         RC+ξ Attractor Convergence                  │
│         Phase Coherence Γ → 0.99                    │
├─────────────────────────────────────────────────────┤
│  AEGIS Ethical Governance (η = 0.961)               │
├─────────────────────────────────────────────────────┤
│  QuantumSpiderweb · CognitionCocooner · Memory      │
└─────────────────────────────────────────────────────┘

Each adapter represents a specialized cognitive perspective. The Reasoning Forge orchestrates them through shared attractor dynamics, achieving multi-agent phase coherence (Γ = 0.99) within 10 recursive iterations.
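As a toy illustration only (the actual RC+ξ dynamics are defined in the paper, not here), convergence of several agent states toward a shared attractor can be pictured with a Kuramoto-style order parameter standing in for Γ: each iteration pulls every agent's phase toward the group mean, and coherence rises toward 1.0.

```python
import math

def phase_coherence(phases):
    # Kuramoto-style order parameter: magnitude of the mean unit phasor.
    # 0.0 = fully dispersed, 1.0 = fully coherent.
    n = len(phases)
    re = sum(math.cos(p) for p in phases) / n
    im = sum(math.sin(p) for p in phases) / n
    return math.hypot(re, im)

# Six hypothetical agent "phases", contracted toward their mean each step
phases = [0.0, 0.8, 1.6, 2.4, 3.0, 0.4]
for step in range(1, 11):
    mean = sum(phases) / len(phases)
    phases = [p + 0.6 * (mean - p) for p in phases]
    if phase_coherence(phases) >= 0.99:
        break

print(f"Γ ≥ 0.99 reached at iteration {step}")
```

This toy contraction crosses the 0.99 threshold well within 10 iterations; it illustrates the shape of the claim, not the actual mechanism.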

Directory Structure

codette-lora-adapters/
β”œβ”€β”€ newton/
β”‚   β”œβ”€β”€ adapter_config.json
β”‚   β”œβ”€β”€ adapter_model.safetensors
β”‚   β”œβ”€β”€ tokenizer.json
β”‚   β”œβ”€β”€ tokenizer_config.json
β”‚   β”œβ”€β”€ chat_template.jinja
β”‚   β”œβ”€β”€ checkpoint-500/
β”‚   └── checkpoint-1125/
β”œβ”€β”€ davinci/
β”‚   β”œβ”€β”€ adapter_config.json
β”‚   β”œβ”€β”€ adapter_model.safetensors
β”‚   β”œβ”€β”€ ...
β”‚   β”œβ”€β”€ checkpoint-500/
β”‚   └── checkpoint-939/
β”œβ”€β”€ empathy/
β”‚   β”œβ”€β”€ adapter_config.json
β”‚   β”œβ”€β”€ adapter_model.safetensors
β”‚   β”œβ”€β”€ ...
β”‚   β”œβ”€β”€ checkpoint-500/
β”‚   └── checkpoint-939/
β”œβ”€β”€ philosophy/          (coming soon)
β”œβ”€β”€ quantum/             (coming soon)
β”œβ”€β”€ consciousness/       (coming soon)
β”œβ”€β”€ multi_perspective/   (coming soon)
└── systems_architecture/ (coming soon)

Related Resources

Citation

@article{harrison2026codette,
  title={Codette: A Sovereign Modular Cognitive Architecture for Ethical Multi-Agent AI},
  author={Harrison, Jonathan},
  year={2026},
  doi={10.5281/zenodo.18913936},
  publisher={Raiff's Bits LLC},
  url={https://huggingface.co/Raiff1982/codette-paper}
}

License

CC BY 4.0 (Creative Commons Attribution 4.0 International)
