Use with the llama-cpp-python library
pip install llama-cpp-python

# Gated model: log in with an HF token that has been granted gated access
hf auth login

from llama_cpp import Llama

llm = Llama.from_pretrained(
	repo_id="dwikitheduck/gen-sql-1-Q4_K_M-GGUF",
	filename="gen-sql-1-q4_k_m.gguf",
)
llm.create_chat_completion(
	messages = [
		# illustrative prompt; the model targets text-to-SQL generation
		{"role": "user", "content": "Write a SQL query that lists the ten most recent orders."}
	]
)
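
create_chat_completion returns an OpenAI-style response dict; a minimal sketch of reading the generated text back out (the prompt is illustrative):

response = llm.create_chat_completion(
	messages = [
		{"role": "user", "content": "Write a SQL query that counts orders per customer."}
	]
)
# the generated SQL is in the first choice's message content
print(response["choices"][0]["message"]["content"])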

Note: this model is gated. You need to agree to share your contact information to access it; the repository is publicly listed, but you must accept its conditions on Hugging Face (while logged in) before you can download its files and content.
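
As an alternative to the `hf auth login` step above, you can authenticate from Python with huggingface_hub; a minimal sketch, assuming a token with gated access is stored in the HF_TOKEN environment variable:

import os
from huggingface_hub import login

# log in with a token that has been granted access to the gated repo
login(token=os.environ["HF_TOKEN"])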

dwikitheduck/gen-sql-1-Q4_K_M-GGUF

This model was converted to GGUF format from dwikitheduck/gen-sql-1 using llama.cpp via "Convert Model to GGUF".

Key Features:

  • Quantized for reduced file size (GGUF format)
  • Optimized for use with llama.cpp
  • Compatible with llama-server for efficient serving

Refer to the original model card for more details on the base model.

Usage with llama.cpp

1. Install llama.cpp:

brew install llama.cpp  # For macOS/Linux

2. Run Inference:

CLI:

llama-cli --hf-repo dwikitheduck/gen-sql-1-Q4_K_M-GGUF --hf-file gen-sql-1-q4_k_m.gguf -p "Your prompt here"

Server:

llama-server --hf-repo dwikitheduck/gen-sql-1-Q4_K_M-GGUF --hf-file gen-sql-1-q4_k_m.gguf -c 2048
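
llama-server exposes an OpenAI-compatible HTTP API. A minimal sketch of querying it from Python, assuming the server started above is listening on its default address (localhost:8080):

import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "user", "content": "Write a SQL query that lists all customers."}
        ],
        "max_tokens": 256,
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])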

For more advanced usage, refer to the llama.cpp repository.

Model details:

  • Format: GGUF
  • Model size: 8B params
  • Architecture: qwen2
  • Quantization: 4-bit (Q4_K_M)
