Vedic AI Framework Models

This Hugging Face repository hosts the key assets generated and used within the Vedic AI Sovereign Kernel project: the quantized language model, C++ source code for Vedic arithmetic, a compiled executable, and diagnostic logs, all configured for resource-constrained mobile devices such as the Redmi 14C.

Repository Contents:

  • tinyllama-1.1b-chat-v1.0.Q2_K.gguf: The TinyLlama 1.1B Chat model, quantized for efficient inference on mobile CPUs (like the Redmi 14C), serving as the Small Language Model (SLM) for self-correction loops and core AI logic.

  • vedic_multiplier.cpp: The C++ source code for the vedic_multiply function, implementing the Urdhva-Tiryagbhyam (Vedic Multiplication) algorithm. This is a core component for demonstrating efficient mathematical operations inspired by Vedic Sutra 1.

  • vedic_multiplier: The compiled executable of the vedic_multiplier.cpp program. This binary is used for testing and verifying the Vedic multiplication logic on target hardware.

  • llama_runtime.log: Simulated runtime logs from the llama.cpp framework, used for demonstrating the 'Sutra 14 Diagnosis Engine' and 'Sutra 19 Surgical Patching' within the self-healing loop. These logs contain examples of 'Kapha imbalance' (memory issues) and 'Pitta imbalance' (arithmetic overflows).

Project Overview:

The overarching Vedic AI Sovereign Kernel project aims to integrate ancient Indian knowledge systems (Vedic Sutras) with modern AI architecture to build self-healing, epistemically grounded, and resource-optimized AI. This Hugging Face repository provides the deployable and foundational assets for that vision.
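The diagnosis step of the self-healing loop, as described for llama_runtime.log above, could be sketched as a simple classifier that maps log lines to imbalance categories (Kapha for memory issues, Pitta for arithmetic overflows). The pattern strings below are assumptions for illustration; the project's actual log format and diagnosis engine are not shown in this repository description.

```cpp
#include <optional>
#include <string>

// Hypothetical sketch of the 'Sutra 14 Diagnosis Engine' classification
// step: map a runtime log line to a dosha-style imbalance category.
// Kapha -> memory issues, Pitta -> arithmetic overflows (per the mapping
// described above). The matched substrings are illustrative assumptions.
enum class Imbalance { Kapha, Pitta };

std::optional<Imbalance> diagnose(const std::string& log_line) {
    if (log_line.find("out of memory") != std::string::npos ||
        log_line.find("alloc failed") != std::string::npos)
        return Imbalance::Kapha;   // memory pressure
    if (log_line.find("overflow") != std::string::npos)
        return Imbalance::Pitta;   // arithmetic overflow
    return std::nullopt;           // no imbalance detected
}
```

A full loop would feed each diagnosis into a patching stage (the 'Sutra 19 Surgical Patching' mentioned above); this sketch covers only the classification step.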

Developed as part of the Divine Earthly initiative for sovereign, intelligent systems.
