How to use from the Transformers library
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="trl-internal-testing/tiny-FalconMambaForCausalLM")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
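
The pipeline returns one result per input; for chat-style input, generated_text holds the whole conversation, with the model's reply as the last message. A minimal sketch of reading that reply (this assumes the tokenizer ships a chat template, which the message-format call above already relies on; the token budget is arbitrary, and the output is not meaningful text, since this is a tiny test model):

# Read the assistant reply from the returned conversation
result = pipe(messages, max_new_tokens=40)
print(result[0]["generated_text"][-1]["content"])
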
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("trl-internal-testing/tiny-FalconMambaForCausalLM")
model = AutoModelForCausalLM.from_pretrained("trl-internal-testing/tiny-FalconMambaForCausalLM")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
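
If you do not need the chat format, you can also tokenize a raw prompt directly. A minimal sketch, with an arbitrary prompt and token budget:

# Plain-text generation, bypassing the chat template
inputs = tokenizer("Who are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))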

Tiny FalconMambaForCausalLM

This is a minimal model built for unit tests in the TRL library.
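
At this size the checkpoint loads almost instantly, which is the point: it stands in for a full LLM in fast CI runs. A minimal sketch of the kind of smoke test this enables (the test name and assertion are hypothetical, not taken from TRL's suite):

# Hypothetical smoke test: generation runs end to end and emits new tokens
from transformers import AutoModelForCausalLM, AutoTokenizer

def test_tiny_falconmamba_generates():
    tokenizer = AutoTokenizer.from_pretrained("trl-internal-testing/tiny-FalconMambaForCausalLM")
    model = AutoModelForCausalLM.from_pretrained("trl-internal-testing/tiny-FalconMambaForCausalLM")
    inputs = tokenizer("hello", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=5)
    assert outputs.shape[-1] > inputs["input_ids"].shape[-1]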

Model size: 525k params, BF16 Safetensors.