# LitGPT High-level Python API
This is a work-in-progress draft for a high-level LitGPT Python API.
## Model loading & saving
The `LLM.load` command loads an `llm` object, which contains both the model object (a PyTorch module) and a preprocessor.
```python
from litgpt import LLM
llm = LLM.load(
    model="url | local_path",
    # the high-level user only needs to care about this option:
    memory_reduction="none | medium | strong",
    # advanced options for technical users:
    source="hf | local | other",
    quantize="bnb.nf4",
    precision="bf16-true",
    device="auto | cuda | cpu",
)
```
Here,
- `llm.model` contains the PyTorch module,
- and `llm.preprocessor.tokenizer` contains the tokenizer.
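For example, assuming the draft signature above, loading a model from the Hugging Face Hub with moderate memory savings could look like this (the model identifier is a hypothetical placeholder, not a tested value):
```python
from litgpt import LLM

# Sketch based on the draft signature above; the model identifier
# is a placeholder, not a tested value.
llm = LLM.load(
    model="microsoft/phi-2",
    memory_reduction="medium",
)
```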
The `llm.save` command saves the model weights, tokenizer, and configuration information.
```python
llm.save(checkpoint_dir, format="lightning | ollama | hf")
```
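A save-and-reload round trip under this draft might then read as follows; the `source="local"` value mirrors the option listed for `LLM.load` above, and the checkpoint path is a placeholder:
```python
# Sketch of a round trip, assuming the draft signatures above.
llm.save("checkpoints/my_model", format="lightning")

# Reload the saved checkpoint from the local path.
llm = LLM.load(model="checkpoints/my_model", source="local")
```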
## Inference / Chat
```python
response = llm.generate(
    prompt="What do Llamas eat?",
    temperature=0.1,
    top_p=0.8,
    ...
)
```
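Assuming `generate` returns the decoded completion as a plain string (the draft does not pin this down), a minimal stateless chat loop could be sketched like so:
```python
# Minimal chat sketch; assumes generate() returns a string and that
# each call is stateless (no conversation history is carried over).
while True:
    prompt = input("You: ")
    if not prompt:
        break
    print("LLM:", llm.generate(prompt=prompt, temperature=0.1, top_p=0.8))
```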
## Dataset
The `llm.download_dataset` command downloads a dataset, and the `llm.prepare_dataset` command prepares it for training.
```python
llm.download_dataset(
    URL,
    ...
)
```
```python
dataset = llm.prepare_dataset(
    path,
    task="pretrain | instruction_finetune",
    test_portion=0.1,
    ...
)
```
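Putting the two steps together, a download-then-prepare flow might look like this (the URL and local path are hypothetical placeholders):
```python
# Hypothetical end-to-end dataset flow based on the draft calls above.
llm.download_dataset("https://example.com/my_dataset.json")

# Hold out 10% of the examples for evaluation.
dataset = llm.prepare_dataset(
    "data/my_dataset.json",
    task="instruction_finetune",
    test_portion=0.1,
)
```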
## Training
```python
llm.instruction_finetune(
    config=None,
    dataset=dataset,
    max_iter=10,
    method="full | lora | adapter | adapter_v2",
)
```
```python
llm.pretrain(config=None, dataset=dataset, max_iter=10, ...)
```
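Taken together, an end-to-end fine-tuning run under this draft might be sketched as follows (all paths and identifiers are placeholders):
```python
from litgpt import LLM

# End-to-end sketch combining the draft calls above; paths and
# identifiers are placeholders, not tested values.
llm = LLM.load(model="microsoft/phi-2", memory_reduction="medium")
dataset = llm.prepare_dataset("data/my_dataset.json", task="instruction_finetune")

# LoRA trains small adapter matrices while keeping the base weights frozen.
llm.instruction_finetune(dataset=dataset, max_iter=10, method="lora")
llm.save("checkpoints/finetuned", format="lightning")
```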
## Serving
```python
llm.serve(port=8000)
```
Then in another Python session:
```python
import requests

response = requests.post(
    "http://127.0.0.1:8000/predict",
    json={"prompt": "Fix typos in the following sentence: Example input"},
)
print(response.json()["output"])
```