How to use with llama.cpp
Install from brew
brew install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Rustamshry/BioGenesis-ToT-GGUF:F16
# Run inference directly in the terminal:
llama-cli -hf Rustamshry/BioGenesis-ToT-GGUF:F16
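Once `llama-server` is running, it exposes an OpenAI-compatible API (by default on `127.0.0.1:8080`) that any HTTP client can call. A minimal Python sketch querying the `/v1/chat/completions` endpoint, assuming the default host and port; the helper names here are illustrative, not part of llama.cpp:

```python
import json
import urllib.request

def build_chat_request(question: str, model: str = "BioGenesis-ToT") -> dict:
    """Build an OpenAI-style chat-completion payload for llama-server."""
    return {
        # llama-server serves a single loaded model, so the name is informational
        "model": model,
        "messages": [{"role": "user", "content": question}],
        "temperature": 0.7,
    }

def ask(question: str, base_url: str = "http://127.0.0.1:8080") -> str:
    """POST the payload to the running server and return the assistant's reply."""
    payload = json.dumps(build_chat_request(question)).encode("utf-8")
    req = urllib.request.Request(
        base_url + "/v1/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask("Why does ATP hydrolysis release energy?"))
```

The same payload works against any of the install methods below, since all of them start the identical server binary.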
Install from WinGet (Windows)
winget install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Rustamshry/BioGenesis-ToT-GGUF:F16
# Run inference directly in the terminal:
llama-cli -hf Rustamshry/BioGenesis-ToT-GGUF:F16
Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases
# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf Rustamshry/BioGenesis-ToT-GGUF:F16
# Run inference directly in the terminal:
./llama-cli -hf Rustamshry/BioGenesis-ToT-GGUF:F16
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli
# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf Rustamshry/BioGenesis-ToT-GGUF:F16
# Run inference directly in the terminal:
./build/bin/llama-cli -hf Rustamshry/BioGenesis-ToT-GGUF:F16
Use Docker
docker model run hf.co/Rustamshry/BioGenesis-ToT-GGUF:F16

Model Card for BioGenesis-ToT

[Figure: General Benchmark Comparison Chart]

This repository contains the GGUF version of https://huggingface.co/khazarai/BioGenesis-ToT

BioGenesis-ToT is a fine-tuned version of Qwen3-1.7B, optimized for mechanistic reasoning and explanatory understanding in biology. This model has been trained on the moremilk/ToT-Biology dataset — a reasoning-rich collection of biology questions emphasizing why and how processes occur, rather than simply what happens.

The model demonstrates strong capabilities in:

  • Structured biological explanation generation
  • Logical and causal reasoning
  • Tree-of-Thought (ToT) reasoning in scientific contexts
  • Interdisciplinary biological analysis (e.g., bioengineering, medicine, ecology)
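The capabilities above respond best to prompts phrased as explicit why/how questions that request stepwise reasoning. A hypothetical prompt helper (the wording is an illustration, not an official template shipped with the model):

```python
def format_reasoning_prompt(question: str, domain: str = "biology") -> str:
    """Wrap a question with an instruction asking for explicit, stepwise reasoning."""
    return (
        f"You are a {domain} tutor. Think through the following question "
        f"step by step, explaining the mechanism behind each step, then "
        f"give a concise final answer.\n\nQuestion: {question}"
    )

if __name__ == "__main__":
    # The formatted string can be passed to llama-cli via -p, or as the
    # user message in a chat-completion request to llama-server.
    print(format_reasoning_prompt("How does CRISPR-Cas9 locate its target sequence?"))
```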

Uses

🚀 Intended Use

  • Educational and scientific explanation generation
  • Biological reasoning and tutoring applications
  • Model interpretability research
  • Training datasets for reasoning-focused LLMs

⚠️ Limitations

  • Not a replacement for expert biological judgment
  • May occasionally over-generalize or simplify complex phenomena
  • Limited to reasoning quality within biological contexts (not trained for creative writing or coding)

🧪 Dataset: moremilk/ToT-Biology

The ToT-Biology dataset emphasizes mechanistic understanding and explanatory reasoning within biology. It’s designed to help AI models develop interpretable, step-by-step reasoning abilities for complex biological systems.

It spans a wide range of biological subdomains:

  • Foundational biology: Cell biology, genetics, evolution, and ecology
  • Advanced topics: Systems biology, synthetic biology, computational biophysics
  • Applied domains: Medicine, agriculture, bioengineering, and environmental science

Dataset features include:

  • 🧩 Logical reasoning styles — deductive, inductive, abductive, causal, and analogical
  • 🧠 Problem-solving techniques — decomposition, elimination, systems thinking, trade-off analysis
  • 🔬 Real-world problem contexts — experiment design, pathway mapping, and data interpretation
  • 🌍 Practical relevance — bridging theoretical reasoning and applied biological insight
  • 🎓 Educational focus — for both AI training and human learning in scientific reasoning

🧭 Objective

This fine-tuning project aims to build an interpretable reasoning model capable of:

  • Explaining biological mechanisms clearly and coherently
  • Demonstrating transparent, step-by-step thought processes
  • Applying logical reasoning techniques to biological and interdisciplinary problems
  • Supporting educational and research use cases where reasoning transparency matters

Citation

BibTeX:

@misc{shiriyev2025biogenesistot,
  title        = {BioGenesis-ToT: A Fine-Tuned Model for Explanatory Biological Reasoning},
  author       = {Rustam Shiriyev},
  year         = {2025},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/khazarai/BioGenesis-ToT}},
  note         = {Base model: Qwen3-1.7B; dataset: moremilk/ToT-Biology; license: MIT}
}
Model details: ~2B parameters, qwen3 architecture, 16-bit (F16) GGUF quantization; fine-tuned from Qwen/Qwen3-1.7B.