GGUF

How to use with llama.cpp
Install from brew
brew install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf QuantFactory/Promt-generator-GGUF:
# Run inference directly in the terminal:
llama-cli -hf QuantFactory/Promt-generator-GGUF:
Install from WinGet (Windows)
winget install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf QuantFactory/Promt-generator-GGUF:
# Run inference directly in the terminal:
llama-cli -hf QuantFactory/Promt-generator-GGUF:
Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases
# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf QuantFactory/Promt-generator-GGUF:
# Run inference directly in the terminal:
./llama-cli -hf QuantFactory/Promt-generator-GGUF:
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli
# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf QuantFactory/Promt-generator-GGUF:
# Run inference directly in the terminal:
./build/bin/llama-cli -hf QuantFactory/Promt-generator-GGUF:
Use Docker
docker model run hf.co/QuantFactory/Promt-generator-GGUF:
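Whichever installation route you choose, llama-server exposes an OpenAI-compatible HTTP API. A minimal sketch using only Python's standard library, assuming the server's default address http://localhost:8080 and the /v1/chat/completions route; build_chat_request is an illustrative helper name, not part of any library:

```python
import json
from urllib import request

def build_chat_request(user_text, base_url="http://localhost:8080"):
    """Build an OpenAI-style chat completion request for a local llama-server."""
    payload = {
        "messages": [{"role": "user", "content": user_text}],
        "max_tokens": 64,
    }
    return request.Request(
        base_url + "/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request("Expand 'a red car' into a detailed image prompt")
print(req.full_url)

# Sending the request requires a running llama-server, e.g.:
# with request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```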
QuantFactory/Promt-generator-GGUF

This is a quantized version of UnfilteredAI/Promt-generator, created using llama.cpp.

Original Model Card

Model Card: UnfilteredAI/Promt-generator

Model Overview

UnfilteredAI/Promt-generator is a text-generation model designed specifically for creating prompts for text-to-image models. It is distributed as PyTorch safetensors weights, so it can be easily deployed and scaled for prompt-generation tasks.

Intended Use

This model is primarily intended for:

  • Prompt generation for text-to-image models.
  • Creative AI applications where generating high-quality, diverse image descriptions is critical.
  • Supporting AI artists and developers working on generative art projects.

How to Use

To generate prompts using this model, follow these steps:

  1. Load the model in your PyTorch environment.
  2. Input your desired parameters for the prompt generation task.
  3. The model will return text descriptions based on the input, which can then be used with text-to-image models.

Example Code:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model weights from the Hub
tokenizer = AutoTokenizer.from_pretrained("UnfilteredAI/Promt-generator")
model = AutoModelForCausalLM.from_pretrained("UnfilteredAI/Promt-generator")

# Encode a short seed phrase to expand into a full image prompt
prompt = "a red car"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a continuation; tune max_new_tokens and sampling to taste
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True)
generated_prompt = tokenizer.decode(outputs[0], skip_special_tokens=True)

print(generated_prompt)
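Because this is a causal language model, the decoded output typically begins by echoing the input seed. A small post-processing sketch to keep only the newly generated portion; strip_seed is an illustrative helper name, not part of the model's API:

```python
def strip_seed(generated: str, seed: str) -> str:
    """Remove the echoed seed phrase from the front of the decoded text."""
    text = generated.strip()
    if text.startswith(seed):
        # Drop the seed plus any leading punctuation/whitespace left behind
        text = text[len(seed):].lstrip(" ,.:;-")
    return text

print(strip_seed("a red car, glossy paint, golden hour lighting", "a red car"))
# -> glossy paint, golden hour lighting
```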
Downloads last month: 104
Format: GGUF
Model size: 0.8B params
Architecture: bloom
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit