Use with the llama-cpp-python library
# !pip install llama-cpp-python

from llama_cpp import Llama

llm = Llama.from_pretrained(
	repo_id="QuantFactory/UltraMerge-7B-GGUF",
	filename="*Q4_K_M.gguf",  # glob matching one quant file in the repo; adjust to the quant you want
)
output = llm(
	"Once upon a time,",
	max_tokens=512,  # cap on generated tokens
	echo=True        # include the prompt in the returned text
)
print(output)

QuantFactory/UltraMerge-7B-GGUF

This is a quantized version of mlabonne/UltraMerge-7B, created using llama.cpp.

Model Description

This model is an experimental DPO fine-tune of automerger/YamShadow-7B on the following datasets (see the loading sketch after the list):

  • mlabonne/truthy-dpo-v0.1
  • mlabonne/distilabel-intel-orca-dpo-pairs
  • mlabonne/chatml-OpenHermes2.5-dpo-binarized-alpha
  • mlabonne/ultrafeedback-binarized-preferences-cleaned
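
A quick, hedged sketch (not part of the original card) of inspecting one of the preference datasets listed above with the datasets library; the exact column layout is whatever the dataset defines, so check its viewer for the schema.

from datasets import load_dataset

ds = load_dataset("mlabonne/truthy-dpo-v0.1", split="train")
print(ds)      # features and row count
print(ds[0])   # one preference example (typically prompt / chosen / rejected style fields)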

I'm not sure which chat template works best; probably Mistral-Instruct or ChatML (see the sketch below).
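
A minimal sketch (not from the original card) of loading this GGUF with an explicit chat format via llama-cpp-python, since the author suggests Mistral-Instruct or ChatML. The filename glob and context size are assumptions; adjust them to the quant file you actually use.

from llama_cpp import Llama

llm = Llama.from_pretrained(
	repo_id="QuantFactory/UltraMerge-7B-GGUF",
	filename="*Q4_K_M.gguf",   # glob for one quant file; pick the quant you prefer
	chat_format="chatml",      # or "mistral-instruct"; both are built into llama-cpp-python
	n_ctx=2048,                # assumed context window for this sketch
)

response = llm.create_chat_completion(
	messages=[{"role": "user", "content": "Explain DPO fine-tuning in one sentence."}],
	max_tokens=256,
)
print(response["choices"][0]["message"]["content"])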

Format: GGUF
Model size: 7B params
Architecture: llama

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
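
If you prefer to fetch one specific quant file yourself, a hedged sketch using huggingface_hub follows. The exact filename is an assumption (the Q4_K_M naming is typical for QuantFactory repos); check the repository's file list for the file you want.

from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
	repo_id="QuantFactory/UltraMerge-7B-GGUF",
	filename="UltraMerge-7B.Q4_K_M.gguf",  # hypothetical name; use an actual file from the repo
)

llm = Llama(model_path=model_path, n_ctx=2048)
print(llm("Once upon a time,", max_tokens=64)["choices"][0]["text"])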

