Instructions to use OS-Copilot/OS-Atlas-Base-7B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
  - Transformers
How to use OS-Copilot/OS-Atlas-Base-7B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="OS-Copilot/OS-Atlas-Base-7B")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
pipe(text=messages)
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("OS-Copilot/OS-Atlas-Base-7B")
model = AutoModelForImageTextToText.from_pretrained("OS-Copilot/OS-Atlas-Base-7B")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
  - Google Colab
  - Kaggle
- Local Apps
  - vLLM
How to use OS-Copilot/OS-Atlas-Base-7B with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "OS-Copilot/OS-Atlas-Base-7B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "OS-Copilot/OS-Atlas-Base-7B",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Describe this image in one sentence."
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
            }
          }
        ]
      }
    ]
  }'
```

Use Docker

```shell
docker model run hf.co/OS-Copilot/OS-Atlas-Base-7B
```
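The same chat-completions request can be assembled in Python instead of curl. A minimal sketch that only builds and serializes the JSON payload for the server above (actually sending it with an HTTP client such as `requests.post` is left out so the snippet runs without a live server):

```python
import json

# OpenAI-compatible chat payload, matching the curl call above.
payload = {
    "model": "OS-Copilot/OS-Atlas-Base-7B",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
                    },
                },
            ],
        }
    ],
}

# Serialize for POSTing to http://localhost:8000/v1/chat/completions.
body = json.dumps(payload)
print(body[:60])
```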
  - SGLang
How to use OS-Copilot/OS-Atlas-Base-7B with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "OS-Copilot/OS-Atlas-Base-7B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "OS-Copilot/OS-Atlas-Base-7B",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Describe this image in one sentence."
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
            }
          }
        ]
      }
    ]
  }'
```

Use Docker images

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "OS-Copilot/OS-Atlas-Base-7B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "OS-Copilot/OS-Atlas-Base-7B",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Describe this image in one sentence."
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
            }
          }
        ]
      }
    ]
  }'
```

  - Docker Model Runner
How to use OS-Copilot/OS-Atlas-Base-7B with Docker Model Runner:
```shell
docker model run hf.co/OS-Copilot/OS-Atlas-Base-7B
```
Prompt for point
Is it possible to prompt the model for a point rather than a bbox? I'm asking because the paper states the following:
> In terms of grounding data format, to maintain consistency with the original InternVL training process, we convert all box format data into the form [[x1, y1, x2, y2]], where (x1, y1) and (x2, y2) are the normalized relative coordinates within the range [0, 1000]. Similarly, point data is converted into [[x, y]] format. `<box>`, `</box>`, `<ref>`, and `</ref>` are treated as special tokens.
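Since the coordinates in the quoted format are normalized to [0, 1000], mapping a predicted box back to pixel space is just a scale by the image size. A minimal sketch (the helper name and example values are illustrative, not from the model card):

```python
def denormalize_box(box, width, height):
    """Map a [[x1, y1, x2, y2]] box normalized to [0, 1000]
    onto pixel coordinates for an image of the given size."""
    x1, y1, x2, y2 = box
    return (
        x1 * width / 1000,
        y1 * height / 1000,
        x2 * width / 1000,
        y2 * height / 1000,
    )

# Example: a normalized box on a 1920x1080 screenshot.
print(denormalize_box([110, 394, 136, 456], 1920, 1080))
```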
I tried to modify the prompt from:
f'In this UI screenshot, what is the position of the element corresponding to the command "{utterance}" (with bbox)?'
With the result (skip_special_tokens=True, clean_up_tokenization_spaces=True):
['drive(160,366),(256,442)']
to:
f'In this UI screenshot, what is the position of the element corresponding to the command "{utterance}" (with point)?'
With the result (skip_special_tokens=True, clean_up_tokenization_spaces=True):
['tap drive[[110, 394, 136, 456]]']
I have to admit that, in order to save computational resources, we only included a small amount of point data when training the 7B model. (The 4B model should generate points and bounding boxes normally.) This makes it difficult for the 7B model to follow instructions that ask for a point. If you really want to generate a point, you can try using only the element you want to target as the prompt, without adding any additional information, such as:
prompt = "main page"
However, I'm not sure whether this will always be effective, especially when the command is as indirect as something like "utterance". The best approach might be to finetune the model further with a small amount of point-formatted data.
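As a workaround when the model only returns a box (as in the outputs above), you can parse the bracketed coordinates and take the box center as a point, still in the [0, 1000] space. A minimal sketch; the regex and helper are mine, not part of the model's API:

```python
import re

def box_to_point(text):
    """Extract the first [[x1, y1, x2, y2]] box from model output
    and return its center (x, y), or None if no box is found."""
    match = re.search(r"\[\[(\d+),\s*(\d+),\s*(\d+),\s*(\d+)\]\]", text)
    if match is None:
        return None
    x1, y1, x2, y2 = (int(g) for g in match.groups())
    return ((x1 + x2) // 2, (y1 + y2) // 2)

# Example using the 7B output quoted above.
print(box_to_point("tap drive[[110, 394, 136, 456]]"))  # (123, 425)
```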