# LitGPT High-level Python API
This is a work-in-progress draft for a high-level LitGPT Python API.
## Model loading & saving
The `LLM.load` command loads an `llm` object, which contains both the model object (a PyTorch module) and a preprocessor.
```python
from litgpt import LLM

llm = LLM.load(
    model="url | local_path",
    # high-level users only need to care about these:
    memory_reduction="none | medium | strong",
    # advanced options for technical users:
    source="hf | local | other",
    quantize="bnb.nf4",
    precision="bf16-true",
    device="auto | cuda | cpu",
)
```
Here,

- `llm.model` contains the PyTorch module, and
- `llm.preprocessor.tokenizer` contains the tokenizer.
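The `memory_reduction` presets could be understood as shorthand for the advanced `quantize`/`precision` options. The mapping below is a minimal sketch under an assumed set of defaults; these particular values are illustrative and are not specified in this draft:

```python
# Hypothetical mapping from memory_reduction presets to advanced options.
# The exact values are assumptions for illustration, not LitGPT defaults.
PRESETS = {
    "none":   {"quantize": None,      "precision": "32-true"},
    "medium": {"quantize": None,      "precision": "bf16-true"},
    "strong": {"quantize": "bnb.nf4", "precision": "bf16-true"},
}

def resolve_load_options(memory_reduction="none", quantize=None, precision=None):
    """Resolve a preset, letting explicit advanced options override it."""
    preset = PRESETS[memory_reduction]
    return {
        "quantize": quantize if quantize is not None else preset["quantize"],
        "precision": precision if precision is not None else preset["precision"],
    }
```

With this design, a high-level user only picks a preset, while a technical user passing `quantize=` or `precision=` explicitly wins over the preset.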
The `llm.save` command saves the model weights, tokenizer, and configuration information.
```python
llm.save(checkpoint_dir, format="lightning | ollama | hf")
```
## Inference / Chat
```python
response = llm.generate(
    prompt="What do Llamas eat?",
    temperature=0.1,
    top_p=0.8,
    ...
)
```
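For context, `temperature` and `top_p` control how the next token is sampled from the model's output distribution. The sketch below shows the standard technique (temperature scaling followed by nucleus sampling) in plain Python; it is illustrative, not LitGPT's internal implementation:

```python
import math
import random

def sample_next_token(logits, temperature=0.1, top_p=0.8):
    """Temperature scaling followed by nucleus (top-p) sampling.

    logits: dict mapping token -> raw score from the model.
    """
    # Temperature < 1 sharpens the distribution; > 1 flattens it.
    scaled = {tok: score / temperature for tok, score in logits.items()}
    # Softmax over the scaled logits (subtract the max for stability).
    m = max(scaled.values())
    exp = {tok: math.exp(s - m) for tok, s in scaled.items()}
    total = sum(exp.values())
    probs = {tok: e / total for tok, e in exp.items()}
    # Keep the smallest set of tokens whose cumulative probability >= top_p.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = [], 0.0
    for token, p in ranked:
        kept.append((token, p))
        cumulative += p
        if cumulative >= top_p:
            break
    tokens, weights = zip(*kept)
    return random.choices(tokens, weights=weights)[0]
```

With a low temperature such as the 0.1 used above, the highest-scoring token dominates and sampling is nearly greedy.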
## Dataset
The `llm.download_dataset` command fetches a dataset, and the `llm.prepare_dataset` command prepares it for training.
```python
llm.download_dataset(
    URL,
    ...
)

dataset = llm.prepare_dataset(
    path,
    task="pretrain | instruction_finetune",
    test_portion=0.1,
    ...
)
```
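The `test_portion` argument implies a held-out split. A minimal sketch of how a 0.1 portion could partition a dataset is shown below; the shuffling and rounding behavior here is an illustrative assumption, not behavior specified by this draft:

```python
import random

def train_test_split(examples, test_portion=0.1, seed=42):
    """Shuffle, then hold out the last `test_portion` fraction for evaluation."""
    items = list(examples)
    random.Random(seed).shuffle(items)
    n_test = int(len(items) * test_portion)
    # Guard against an empty test set on small datasets with a nonzero portion.
    if test_portion > 0 and n_test == 0:
        n_test = 1
    return items[n_test:], items[:n_test]
```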
## Training
```python
llm.instruction_finetune(
    config=None,
    dataset=dataset,
    max_iter=10,
    method="full | lora | adapter | adapter_v2",
)

llm.pretrain(config=None, dataset=dataset, max_iter=10, ...)
```
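For intuition, `max_iter` presumably bounds the number of optimization steps rather than full passes over the data. A minimal sketch of such a step-bounded loop, cycling through the dataset as needed (an illustrative assumption, not LitGPT's implementation):

```python
def run_training(step_fn, dataset, max_iter=10):
    """Run at most `max_iter` optimization steps, cycling over the dataset.

    step_fn: callable taking a batch and returning its loss.
    """
    losses = []
    iteration = 0
    while iteration < max_iter:
        for batch in dataset:
            if iteration >= max_iter:
                break
            losses.append(step_fn(batch))
            iteration += 1
    return losses
```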
## Serving
```python
llm.serve(port=8000)
```
Then in another Python session:
```python
import requests

response = requests.post(
    "http://127.0.0.1:8000/predict",
    json={"prompt": "Fix typos in the following sentence: Example input"},
)
print(response.json()["output"])
```
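The `/predict` endpoint implies a simple JSON-over-HTTP contract. Below is a minimal stand-in server built on Python's standard library, with a hard-coded echo in place of a real model; the request/response shape shown is an assumption based on the client snippet above, not a specification of `llm.serve`:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class PredictHandler(BaseHTTPRequestHandler):
    """Stand-in for llm.serve: accepts {"prompt": ...}, returns {"output": ...}."""

    def do_POST(self):
        if self.path != "/predict":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        request = json.loads(self.rfile.read(length))
        # A real server would call llm.generate(request["prompt"]) here.
        reply = {"output": f"echo: {request['prompt']}"}
        body = json.dumps(reply).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

# To run the stand-in server on the port used above:
#   HTTPServer(("127.0.0.1", 8000), PredictHandler).serve_forever()
```

Note that serving blocks the current process, which is why the client request is issued from another Python session.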