📁 Repository-Level Pre-Trained OpenCoder 🧩
A collection of all checkpoints from Table 3 of the paper "On Pretraining for Project-Level Code Completion."
# Serve the model with SGLang via Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "JetBrains-Research/OpenCoder-1.5B-Path-Distance-Py" \
--host 0.0.0.0 \
--port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "JetBrains-Research/OpenCoder-1.5B-Path-Distance-Py",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'

This model is derived from OpenCoder-1.5B-Base by applying additional context-extension fine-tuning. The repository context is composed with the Path Distance .py composer; this composer, along with the others, is described in the paper "On Pretraining for Project-Level Code Completion" (arXiv). Specifically, Section A.1 of the Appendix describes the context composition method, and Table 3 compares it with the other composers from the same collection.
We publish this checkpoint to support the reproducibility and accessibility of our research results.
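To illustrate the general idea (not the paper's exact implementation — Section A.1 of the Appendix is the authoritative description), a path-distance composer might rank repository files by how far their paths sit from the file being completed and place the closest files nearest the completion point. A minimal sketch, where the distance metric is an assumption:

```python
def path_distance(a: str, b: str) -> int:
    """Tree distance between two file paths: steps up from `a` to the
    common ancestor directory plus steps down to `b`.
    Files in the same directory have distance 0."""
    pa, pb = a.split("/"), b.split("/")
    common = 0
    for x, y in zip(pa[:-1], pb[:-1]):  # compare directory components only
        if x != y:
            break
        common += 1
    return (len(pa) - 1 - common) + (len(pb) - 1 - common)

def compose_context(repo_files, target):
    """Order context files so that the ones closest to `target` come last,
    i.e. end up nearest to the completion point in the prompt."""
    others = [f for f in repo_files if f != target]
    return sorted(others, key=lambda f: path_distance(f, target), reverse=True)
```

For example, with `target="src/a/x.py"`, a sibling `src/a/y.py` (distance 0) is placed after a distant `tests/test_x.py` (distance 3).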
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "JetBrains-Research/OpenCoder-1.5B-Path-Distance-Py"
tokenizer_name = "infly/OpenCoder-1.5B-Base"  # tokenizer comes from the base model

# Load the fine-tuned checkpoint in bfloat16 across available devices
model = AutoModelForCausalLM.from_pretrained(model_name,
                                             torch_dtype=torch.bfloat16,
                                             device_map="auto",
                                             trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(tokenizer_name, trust_remote_code=True)

# Generate a completion for a simple prompt
inputs = tokenizer("# write a quick sort algorithm", return_tensors="pt")
outputs = model.generate(**inputs.to(model.device), max_new_tokens=256)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
Base model: infly/OpenCoder-1.5B-Base
Install from pip and serve model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "JetBrains-Research/OpenCoder-1.5B-Path-Distance-Py" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "JetBrains-Research/OpenCoder-1.5B-Path-Distance-Py",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
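The same OpenAI-compatible endpoint can be called from Python instead of curl. A minimal standard-library sketch, assuming the server above is running on localhost:30000 (the request is only sent when `complete()` is called):

```python
import json
import urllib.request

API_URL = "http://localhost:30000/v1/completions"

def build_request(prompt: str, max_tokens: int = 512, temperature: float = 0.5):
    """Build the JSON payload and POST request for the completions endpoint."""
    payload = {
        "model": "JetBrains-Research/OpenCoder-1.5B-Path-Distance-Py",
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def complete(prompt: str) -> str:
    """Send the request and return the first completion's text."""
    with urllib.request.urlopen(build_request(prompt)) as resp:
        return json.load(resp)["choices"][0]["text"]

# Example (requires a running server):
# print(complete("Once upon a time,"))
```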