Instructions for using macedonizer/sr-gpt2 with libraries, notebooks, and local apps.
- Libraries
- Transformers
How to use macedonizer/sr-gpt2 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="macedonizer/sr-gpt2")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("macedonizer/sr-gpt2")
model = AutoModelForCausalLM.from_pretrained("macedonizer/sr-gpt2")
```
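For a quick check, the pipeline can be called directly. A minimal sketch, assuming the `pipe` object created above (the Serbian prompt is only illustrative):

```python
# Illustrative Serbian prompt; any text works.
result = pipe("Једног дана", max_new_tokens=50, do_sample=True, top_k=50)
print(result[0]["generated_text"])
```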
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use macedonizer/sr-gpt2 with vLLM:
Install with pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "macedonizer/sr-gpt2"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "macedonizer/sr-gpt2",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
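Since the server exposes an OpenAI-compatible API, it can also be called from Python. A minimal sketch, assuming the `openai` package is installed and the server above is running on port 8000 (the API key is a placeholder, which vLLM ignores by default):

```python
from openai import OpenAI

# Point the client at the local vLLM server; the key is a dummy value.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="macedonizer/sr-gpt2",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```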
- SGLang
How to use macedonizer/sr-gpt2 with SGLang:
Install with pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "macedonizer/sr-gpt2" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "macedonizer/sr-gpt2",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "macedonizer/sr-gpt2" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "macedonizer/sr-gpt2",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Docker Model Runner
How to use macedonizer/sr-gpt2 with Docker Model Runner:
```shell
docker model run hf.co/macedonizer/sr-gpt2
```
sr-gpt2
You can test the model's full generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large

This is a pretrained model on the Serbian language using a causal language modeling (CLM) objective. It was introduced in this paper and first released at this page.
Model description
sr-gpt2 is a transformers model pretrained on a very large corpus of Serbian data in a self-supervised fashion. This means it was pretrained on raw texts only, with no humans labeling them in any way (which is why it can use lots of publicly available data), using an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences.

Concretely, the inputs are sequences of continuous text of a certain length, and the targets are the same sequences shifted one token (a word or piece of a word) to the right. The model internally uses a masking mechanism to make sure the prediction for token i only uses the inputs from tokens 1 to i, never the future tokens.
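As a minimal sketch of this objective (not the author's training code; it assumes the Hugging Face convention where passing `labels` equal to the `input_ids` makes the model shift them internally), the next-token loss for one example can be computed like this:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("macedonizer/sr-gpt2")
model = AutoModelForCausalLM.from_pretrained("macedonizer/sr-gpt2")

# Illustrative Serbian sentence; any raw text works.
enc = tokenizer("Ја сам био у школи.", return_tensors="pt")

# For causal LM, the labels are the input ids themselves; the model
# shifts them one position to the right internally, so the prediction
# for token i is scored only against token i+1, never a future token.
with torch.no_grad():
    outputs = model(**enc, labels=enc["input_ids"])
print(outputs.loss)  # average next-token cross-entropy
```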
This way, the model learns an inner representation of the Serbian language that can then be used to extract features useful for downstream tasks. The model is, however, best at what it was pretrained for: generating text from a prompt.
How to use
Here is how to use this model to generate text from a prompt in PyTorch:
```python
import random

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("macedonizer/sr-gpt2")
model = AutoModelForCausalLM.from_pretrained("macedonizer/sr-gpt2")

input_text = "Ја сам био "

if len(input_text) == 0:
    # No prompt: seed unconditional generation with a random start token.
    output = model.generate(
        bos_token_id=random.randint(1, 50000),
        do_sample=True,
        top_k=50,
        max_length=1024,
        top_p=0.95,
        num_return_sequences=1,
    )
else:
    # Generate a continuation of the given prompt.
    encoded_input = tokenizer(input_text, return_tensors="pt")
    output = model.generate(
        **encoded_input,
        do_sample=True,
        top_k=50,
        max_length=1024,
        top_p=0.95,
        num_return_sequences=1,
    )

decoded_output = []
for sample in output:
    decoded_output.append(tokenizer.decode(sample, skip_special_tokens=True))
print(decoded_output)
```