Turkish-LLM-7B-Instruct

A Turkish-enhanced 7B language model fine-tuned from Mistral-7B-Instruct on curated Turkish instruction data.

Part of the Turkish LLM Family.

Highlights

Quick Start

With Ollama

ollama run hf.co/ogulcanaydogan/Turkish-LLM-7B-Instruct-GGUF:Q4_K_M
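Once the model is pulled, Ollama also exposes it over its local REST API (`POST http://localhost:11434/api/chat`). A minimal sketch of building a request payload for that endpoint — `build_chat_request` is a hypothetical helper, and the model tag is assumed to match the `ollama run` command above:

```python
import json

def build_chat_request(prompt: str) -> dict:
    """Build a non-streaming chat request for Ollama's /api/chat endpoint."""
    return {
        "model": "hf.co/ogulcanaydogan/Turkish-LLM-7B-Instruct-GGUF:Q4_K_M",
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # return one complete JSON response instead of chunks
    }

payload = build_chat_request("Türkiye'nin başkenti neresidir?")
print(json.dumps(payload, ensure_ascii=False, indent=2))
```

Send this payload with any HTTP client (e.g. `requests.post("http://localhost:11434/api/chat", json=payload)`) while `ollama serve` is running.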

With Transformers

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("ogulcanaydogan/Turkish-LLM-7B-Instruct", torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("ogulcanaydogan/Turkish-LLM-7B-Instruct")

messages = [{"role": "user", "content": "Türkiye'nin başkenti neresidir?"}]  # "What is the capital of Turkey?"
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
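Note that `generate()` returns the prompt tokens followed by the completion, so the decode above echoes the question before the answer. To print only the reply, slice off the prompt first. A self-contained sketch of the slicing logic, with the tensors simulated as plain lists; with real outputs the equivalent is `outputs[0][inputs["input_ids"].shape[1]:]`:

```python
# Simulated token ids: generate() echoes the prompt, then appends the reply.
prompt_ids = [1, 2, 3, 4]           # stand-in for inputs["input_ids"][0]
full_output = [1, 2, 3, 4, 9, 8]    # stand-in for outputs[0]

# Keep only the newly generated tokens before decoding.
reply_ids = full_output[len(prompt_ids):]
print(reply_ids)  # → [9, 8]
```

With the real tensors, pass `reply_ids` to `tokenizer.decode(..., skip_special_tokens=True)` to get just the model's answer.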

Turkish LLM Family

Citation

@misc{aydogan2026turkishllm,
  title={Turkish LLM Family: Open-Source Turkish Language Models},
  author={Ogulcan Aydogan},
  year={2026},
  url={https://huggingface.co/collections/ogulcanaydogan/turkish-llm-family-69b303b4ef1c36caffca4e94}
}
