## How to use with the Transformers library
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Intel/tiny-random-falcon")
```
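
Once the pipeline is loaded, it can be called directly on a prompt. This is a minimal sketch; the prompt string and `max_new_tokens` value below are illustrative choices, not part of the model card:

```python
# Generate a short completion. Since the weights are random, the output
# is gibberish; this only verifies the pipeline runs end to end.
output = pipe("Hello, world!", max_new_tokens=10)
print(output[0]["generated_text"])
```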
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Intel/tiny-random-falcon")
model = AutoModelForCausalLM.from_pretrained("Intel/tiny-random-falcon")
```
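
With the model and tokenizer loaded directly, a typical tokenize/generate/decode round trip looks like the sketch below; the prompt and `max_new_tokens` value are placeholder choices:

```python
# Tokenize a prompt, run generation, and decode the result.
inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```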

# Model Card for Intel/tiny-random-falcon

This is a tiny random Falcon model derived from tiiuae/falcon-7b-instruct.

Because its weights are random, it is useful for functional testing of optimum-intel, not for quality text generation.
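
As an illustration of such a functional test, the sketch below loads the checkpoint through optimum-intel's OpenVINO wrapper. It assumes the `optimum-intel` package with OpenVINO extras is installed; the prompt and generation settings are placeholders:

```python
# A sketch of a functional test with optimum-intel.
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "Intel/tiny-random-falcon"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# export=True converts the PyTorch checkpoint to OpenVINO IR on the fly.
ov_model = OVModelForCausalLM.from_pretrained(model_id, export=True)

# The output is noise (random weights); the test only checks the code path.
inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = ov_model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```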

- Format: Safetensors
- Model size: 8.55M params
- Tensor type: F32