---
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
library_name: transformers
license: mit
language:
- en
- pt
metrics:
- accuracy
pipeline_tag: text-generation
tags:
- hf-inference
- education
- logic
- math
- low-resource
- transformers
- open-source
- causal-lm
- lxcorp
---

# lambda-1v-1b — Lightweight Math & Logic Reasoning Model

**lambda-1v-1b** is a compact, fine-tuned language model built on top of `TinyLlama-1.1B-Chat-v1.0`, designed for educational reasoning tasks in both Portuguese and English. It focuses on logic, number theory, and mathematics, delivering fast performance with minimal computational requirements.

---

## Model Architecture

- **Base Model**: TinyLlama-1.1B-Chat
- **Fine-Tuning Strategy**: LoRA (applied to `q_proj` and `v_proj`)
- **Quantization**: 4-bit (NF4 via `bnb_config`)
- **Dataset**: [`HuggingFaceH4/MATH`](https://huggingface.co/datasets/HuggingFaceH4/MATH) — subset: `number_theory`
- **Max Tokens per Sample**: 512
- **Batch Size**: 20 per device
- **Epochs**: 3
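
The list above corresponds roughly to the following setup. This is a minimal sketch assuming the standard `transformers`, `peft`, and `bitsandbytes` APIs; the LoRA rank, compute dtype, and `output_dir` are illustrative assumptions, not values taken from the actual training script.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig, get_peft_model

# NF4 quantization config (the `bnb_config` referenced above)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,  # assumed compute dtype
)

# Load the quantized base model
base = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA adapters on the attention projections listed above
lora_config = LoraConfig(
    r=8,  # rank is an assumption; not stated in this card
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)

# Training hyperparameters from the list above
training_args = TrainingArguments(
    output_dir="lambda-1v-1b",  # illustrative
    per_device_train_batch_size=20,
    num_train_epochs=3,
)
```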

---

## Example Usage (Python)

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("lxcorp/lambda-1v-1b")
tokenizer = AutoTokenizer.from_pretrained("lxcorp/lambda-1v-1b")

input_text = "Problema: Prove que 17 é um número primo."
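# (Portuguese for: "Problem: Prove that 17 is a prime number.")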
inputs = tokenizer(input_text, return_tensors="pt")

output = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
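
The base model is a chat model, so the prompt can also be wrapped with the tokenizer's chat template. Whether the fine-tuned checkpoint expects this format is not stated here, so treat the snippet below as an assumption.

```python
# Assumes the checkpoint still ships TinyLlama-Chat's chat template (not confirmed in this card).
messages = [{"role": "user", "content": "Problema: Prove que 17 é um número primo."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```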

---

## About λχ Corp.

λχ Corp. is an indie tech corporation founded by Marius Jabami in Angola, focused on AI-driven educational tools, robotics, and lightweight software solutions. The lambdAI model is the first release in a planned series of educational LLMs optimized for reasoning, logic, and low-resource deployment.

Stay updated on the project at lxcorp.ai and huggingface.co/lxcorp.

---

Developed with care by Marius Jabami — Powered by ambition, faith, and open source.