
🚀 Kamka IT | Open-Source AI & Backend Engineering

Empowering the open-source community with robust ML pipelines, fine-tuned models, and agentic workflows.



🌍 About Us

Based in Tunisia, Kamka IT is a specialized consulting firm operating at the intersection of Advanced Backend Engineering and Artificial Intelligence. We build scalable, self-hosted architectures and intelligent agentic systems.

Beyond our enterprise consulting, we are deeply committed to the open-source ethos. Our Hugging Face organization is dedicated to sharing our internal research, fine-tuned models, and end-to-end pipelines with the global AI community.

🎯 Our Open-Source Mission

At Kamka IT, we believe that the future of AI lies in transparency, accessibility, and collaboration. Our open-source objectives on Hugging Face are:

  1. Developing Specialized Models: Releasing state-of-the-art weights fine-tuned for niche domains (such as Bioinformatics and Software Engineering).
  2. Open Pipelines: Sharing robust, reproducible training and inference pipelines to help developers integrate AI into their own self-hosted infrastructure.
  3. Advancing Agentic Workflows: Contributing models and datasets optimized for agentic frameworks like LangGraph and LiteLLM.

🔬 Featured Open-Source Contributions

🧬 Bioinformatics & Genomics Models

We have invested heavily in the intersection of LLMs and biological data.

  • BioTATA-7B: A specialized 7B-parameter model designed for advanced sequence analysis and biological text generation.
  • shadow-clown-BioMistral-7B-DARE: An experimental merge using the DARE technique to combine robust general reasoning with biomedical knowledge.
  • shadow-clown-BioMistral-7B-SLERP: A SLERP-merged variant of the BioMistral architecture that spherically interpolates weights for stronger downstream performance.
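SLERP (spherical linear interpolation) walks along the arc between two weight vectors rather than the straight line between them, which preserves the norm of the interpolated weights. As a minimal illustrative sketch (toy vectors only; real merge tooling applies this tensor-by-tensor across full checkpoints):

```python
import math

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight vectors at fraction t."""
    n0 = math.sqrt(sum(x * x for x in v0))
    n1 = math.sqrt(sum(x * x for x in v1))
    dot = sum(a * b for a, b in zip(v0, v1)) / (n0 * n1)
    dot = max(-1.0, min(1.0, dot))  # guard acos against rounding error
    theta = math.acos(dot)
    if theta < eps:  # nearly parallel vectors: fall back to linear interpolation
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]

# Midpoint between two orthogonal unit vectors stays on the unit circle:
print(slerp(0.5, [1.0, 0.0], [0.0, 1.0]))  # both components ≈ 0.7071
```

Note that naive linear averaging of the same two vectors would give [0.5, 0.5] (norm ≈ 0.71 instead of 1), which is exactly the norm-shrinking effect SLERP avoids.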

📊 Datasets

High-quality models require high-quality data. We open-source our curation efforts to accelerate research.

  • TATA-NOTATA-FineMistral: A specialized dataset for nucleotide transformer downstream tasks, heavily curated for DNA sequence classification.
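As a toy illustration of the kind of preprocessing a DNA sequence-classification dataset like this supports, here is a minimal integer-encoding sketch. The vocabulary and padding scheme below are hypothetical examples, not the dataset's actual schema; consult the dataset card for the real fields:

```python
# Hypothetical nucleotide-to-id vocabulary; id 4 doubles as pad/unknown.
NUCLEOTIDES = {"A": 0, "C": 1, "G": 2, "T": 3}

def encode_sequence(seq, max_len=16, pad_id=4):
    """Map a nucleotide string to a fixed-length list of integer ids."""
    ids = [NUCLEOTIDES.get(base, pad_id) for base in seq.upper()[:max_len]]
    ids += [pad_id] * (max_len - len(ids))  # right-pad up to max_len
    return ids

print(encode_sequence("ACGTAC", max_len=8))  # -> [0, 1, 2, 3, 0, 1, 4, 4]
```

Fixed-length integer ids like these can then be batched directly into a classifier's embedding layer.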

🛠 Tech Stack & Expertise

Our models and pipelines are built using modern, scalable, and sovereign technologies:

  • AI & LLMs: PyTorch, Transformers, PEFT, TRL, vLLM, Ollama.
  • Agentic Frameworks: LangChain, LangGraph, LiteLLM.
  • Backend & Infrastructure: Node.js, Next.js, Supabase, n8n.
  • Deployment & Self-Hosting: Docker, Coolify, scalable VPS architectures.

💻 Using Our Pipelines

We design our models to integrate easily into your existing workflows. Here is a quick example of loading one of our text-generation models with the transformers library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Kamka-IT/BioTATA-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Analyze the following nucleotide sequence: "
# Move inputs to whichever device the model was placed on (GPU or CPU)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

(Note: For comprehensive pipeline tutorials, check the specific Model Cards!)


🤝 Let's Collaborate!

Whether you are a researcher looking to fine-tune a model, a developer building an agentic system, or a company seeking to deploy sovereign, self-hosted AI architecture, we would love to connect.

  • 🌐 Discover our services: kamka-it.com
  • 💡 Discuss a project: Open a discussion on any of our model repositories.