Instructions for using Salesforce/codegen25-7b-instruct_P with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use Salesforce/codegen25-7b-instruct_P with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

# CodeGen2.5 ships a custom, tiktoken-based tokenizer, so `pip install tiktoken`
# and trust_remote_code=True may be required.
pipe = pipeline("text-generation", model="Salesforce/codegen25-7b-instruct_P", trust_remote_code=True)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen25-7b-instruct_P", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen25-7b-instruct_P")
```
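Once loaded, the pipeline can be called directly for program synthesis. A minimal sketch; the prompt and sampling settings below are illustrative, not from the model card:

```python
# Minimal generation sketch; the prompt and sampling settings are illustrative.
# CodeGen2.5 is tuned for comment-style and partial-code prompts.
result = pipe(
    "# Return the n-th Fibonacci number\ndef fibonacci(n):",
    max_new_tokens=128,
    do_sample=True,
    temperature=0.2,
)
print(result[0]["generated_text"])
```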
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Salesforce/codegen25-7b-instruct_P with vLLM:
Install from pip and serve the model:
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Salesforce/codegen25-7b-instruct_P"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Salesforce/codegen25-7b-instruct_P",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
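Because the endpoint is OpenAI-compatible, it can also be called from Python. A minimal sketch, assuming `pip install openai` and the server running on the default port as above; the prompt is illustrative:

```python
# Minimal sketch: query the vLLM server via its OpenAI-compatible API.
# Assumes `pip install openai` and the server started as shown above.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="Salesforce/codegen25-7b-instruct_P",
    prompt="def fibonacci(n):",
    max_tokens=128,
    temperature=0.5,
)
print(completion.choices[0].text)
```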
Use Docker:

```sh
docker model run hf.co/Salesforce/codegen25-7b-instruct_P
```
- SGLang
How to use Salesforce/codegen25-7b-instruct_P with SGLang:
Install from pip and serve the model:
```sh
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Salesforce/codegen25-7b-instruct_P" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Salesforce/codegen25-7b-instruct_P",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
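The same OpenAI-compatible endpoint can be hit from Python as well. A minimal sketch using `requests` (assumed installed), with an illustrative prompt:

```python
# Minimal sketch: call the SGLang server's OpenAI-compatible completions endpoint.
# Assumes `pip install requests` and the server running on port 30000 as above.
import requests

resp = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "Salesforce/codegen25-7b-instruct_P",
        "prompt": "def quicksort(arr):",
        "max_tokens": 128,
        "temperature": 0.5,
    },
)
print(resp.json()["choices"][0]["text"])
```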
Use Docker images:

```sh
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Salesforce/codegen25-7b-instruct_P" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Salesforce/codegen25-7b-instruct_P",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use Salesforce/codegen25-7b-instruct_P with Docker Model Runner:
```sh
docker model run hf.co/Salesforce/codegen25-7b-instruct_P
```
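Per Docker Model Runner's CLI, `docker model run` can also take a one-shot prompt instead of opening an interactive chat; the prompt below is illustrative:

```sh
# One-shot prompt (illustrative); omitting the prompt starts an interactive chat.
docker model run hf.co/Salesforce/codegen25-7b-instruct_P "Write a Python function that returns the n-th Fibonacci number."
```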
Update README.md
README.md CHANGED:

```diff
@@ -102,6 +102,13 @@ Please refer to the [blog](https://blog.salesforceairesearch.com/codegen25) for
 As an autoregressive language model, CodeGen2.5 is capable of extracting features from given natural language and programming language texts, and calculating the likelihood of them.
 However, the model is intended for and best at **program synthesis**, that is, generating executable code given English prompts, where the prompts should be in the form of a comment string. The model can complete partially-generated code as well.
 
+## Attribution & Other Requirements
+The pretraining dataset of the model was filtered for permissive licenses only.
+Nevertheless, the model can generate source code verbatim from the dataset.
+The code's license might require attribution and/or other specific requirements that must be respected.
+The data provider BigCode provides a [search index](https://huggingface.co/spaces/bigcode/starcoder-search) that lets you search through the pretraining data to identify where generated code came from and apply the proper attribution to your code.
+
+
 ## BibTeX entry and citation info
 
 Please cite CodeGen2 paper:
```