# Serve and Deploy LLMs
This document shows how you can serve and deploy LitGPT models.
## Serve an LLM with LitServe
This section illustrates how to set up a minimal, highly scalable inference server for a phi-2 LLM using `litgpt serve`.
### Step 1: Start the inference server
```bash
# 1) Download a pretrained model (alternatively, use your own finetuned model)
litgpt download microsoft/phi-2
# 2) Start the server
litgpt serve microsoft/phi-2
```
> [!TIP]
> Use `litgpt serve --help` to display additional options, including the port, devices, LLM temperature setting, and more.
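For example, the server can be started on a different port and with a lower sampling temperature. The snippet below is illustrative; verify the exact flag names in the `litgpt serve --help` output for your version:
```bash
# Serve on port 8001 with a lower sampling temperature
# (flag names as listed in `litgpt serve --help`)
litgpt serve microsoft/phi-2 --port 8001 --temperature 0.2
```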
### Step 2: Query the inference server
You can now send requests to the inference server you started in step 1. For example, in a new Python session, you can query the server as follows:
```python
import requests

response = requests.post(
    "http://127.0.0.1:8000/predict",
    json={"prompt": "Fix typos in the following sentence: Example input"}
)
print(response.json()["output"])
```
Executing the code above prints the following output:
```
Example input.
```
### Optional: Use the streaming mode
The 2-step procedure described above returns the complete response all at once. If you want to stream the response on a token-by-token basis, start the server with the streaming option enabled:
```bash
litgpt serve microsoft/phi-2 --stream true
```
Then, use the following updated code to query the inference server:
```python
import requests, json

response = requests.post(
    "http://127.0.0.1:8000/predict",
    json={"prompt": "Fix typos in the following sentence: Example input"},
    stream=True
)

# Stream the response token by token
for line in response.iter_lines(decode_unicode=True):
    if line:
        print(json.loads(line)["output"], end="")
```
Executing the code above prints the response as it streams in:
```
Sure, here is the corrected sentence:
Example input
```
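To reuse the streamed output after printing it, you can collect the tokens while they arrive. This is a minimal sketch; `stream_completion` is a hypothetical helper, and it assumes the server emits one JSON object per line, as in the example above:
```python
import requests, json

def stream_completion(prompt: str, url: str = "http://127.0.0.1:8000/predict") -> str:
    """Stream a response token by token and return the fully assembled text."""
    chunks = []
    with requests.post(url, json={"prompt": prompt}, stream=True) as response:
        response.raise_for_status()
        # Assumes one JSON object per line, matching the example above
        for line in response.iter_lines(decode_unicode=True):
            if line:
                token = json.loads(line)["output"]
                print(token, end="", flush=True)  # display tokens as they arrive
                chunks.append(token)
    return "".join(chunks)

text = stream_completion("Fix typos in the following sentence: Example input")
```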
## Serve an LLM with OpenAI-compatible API
LitGPT provides OpenAI-compatible endpoints that allow you to use the OpenAI SDK or any OpenAI-compatible client to interact with your models. This is useful for integrating LitGPT into existing applications that use the OpenAI API.
### Step 1: Start the server with OpenAI specification
```bash
# 1) Download a pretrained model (alternatively, use your own finetuned model)
litgpt download HuggingFaceTB/SmolLM2-135M-Instruct
# 2) Start the server with OpenAI-compatible endpoints
litgpt serve HuggingFaceTB/SmolLM2-135M-Instruct --openai_spec true
```
> [!TIP]
> The `--openai_spec true` flag enables OpenAI-compatible endpoints at `/v1/chat/completions` instead of the default `/predict` endpoint.
### Step 2: Query using OpenAI-compatible endpoints
You can now send requests to the OpenAI-compatible endpoint using curl:
```bash
curl -X POST http://127.0.0.1:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "SmolLM2-135M-Instruct",
    "messages": [{"role": "user", "content": "Hello! How are you?"}]
  }'
```
Or use the OpenAI Python SDK:
```python
from openai import OpenAI

# Configure the client to use your local LitGPT server
client = OpenAI(
    base_url="http://127.0.0.1:8000/v1",
    api_key="not-needed"  # LitGPT doesn't require authentication by default
)

response = client.chat.completions.create(
    model="SmolLM2-135M-Instruct",
    messages=[
        {"role": "user", "content": "Hello! How are you?"}
    ]
)

print(response.choices[0].message.content)
```
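The OpenAI SDK also has a standard streaming pattern, sketched below. Note that this assumes the LitGPT OpenAI-compatible endpoint supports streamed responses, which is not confirmed here; check `litgpt serve --help` for your version:
```python
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:8000/v1", api_key="not-needed")

# Standard OpenAI SDK streaming pattern; assumes the server's
# /v1/chat/completions endpoint supports stream=True.
stream = client.chat.completions.create(
    model="SmolLM2-135M-Instruct",
    messages=[{"role": "user", "content": "Hello! How are you?"}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```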
## Serve an LLM UI with Chainlit
If you are interested in developing a simple ChatGPT-like UI prototype, see the Chainlit tutorial in the following Studio:
<a target="_blank" href="https://lightning.ai/lightning-ai/studios/chatgpt-like-llm-uis-via-chainlit">
<img src="https://pl-bolts-doc-images.s3.us-east-2.amazonaws.com/app-2/studio-badge.svg" alt="Open In Studio"/>
</a>