Tags: Text Generation · Transformers · PyTorch · English · gpt_bigcode · langchain · python · yolov8 · vertexai · text-generation-inference
Instructions to use iterateai/Interplay-AppCoder with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use iterateai/Interplay-AppCoder with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="iterateai/Interplay-AppCoder")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("iterateai/Interplay-AppCoder")
model = AutoModelForCausalLM.from_pretrained("iterateai/Interplay-AppCoder")
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use iterateai/Interplay-AppCoder with vLLM:
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "iterateai/Interplay-AppCoder"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "iterateai/Interplay-AppCoder",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker:

```shell
docker model run hf.co/iterateai/Interplay-AppCoder
```
- SGLang
How to use iterateai/Interplay-AppCoder with SGLang:
Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "iterateai/Interplay-AppCoder" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "iterateai/Interplay-AppCoder",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "iterateai/Interplay-AppCoder" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "iterateai/Interplay-AppCoder",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use iterateai/Interplay-AppCoder with Docker Model Runner:
```shell
docker model run hf.co/iterateai/Interplay-AppCoder
```
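Both vLLM and SGLang expose an OpenAI-compatible completions endpoint, so the curl calls above can also be made from Python. A minimal sketch using only the standard library — the port (8000 for vLLM, 30000 for SGLang) and a running server are assumptions, so the network call is left commented out:

```python
import json
import urllib.request

def build_completion_request(model: str, prompt: str,
                             max_tokens: int = 512,
                             temperature: float = 0.5) -> dict:
    """Build an OpenAI-compatible /v1/completions payload."""
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

payload = build_completion_request(
    "iterateai/Interplay-AppCoder",
    "Write a Python function that reverses a string.",
)
body = json.dumps(payload).encode("utf-8")

# Uncomment once a vLLM (:8000) or SGLang (:30000) server is running:
# req = urllib.request.Request(
#     "http://localhost:8000/v1/completions",
#     data=body,
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["text"])
```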
Update README.md
README.md (changed):

```diff
@@ -5,6 +5,7 @@ tags:
 - langchain
 - python
 - yolov8
+- vertexai
 ---
 # Interplay-AppCoder a CodeGeneration LLM
 **Iterate’s new top-performing Interplay-AppCoder LLM scores 2.9 on usefulness and 2.7 on functionality on the ICE Benchmark Test**
@@ -25,7 +26,7 @@ The result is Interplay-AppCoder LLM, a brand new high performing code generatio
 
 
 - **Developed by:** [Iterate.ai]
-- **Language(s) (NLP):** [Python,Langchain,yolov8]
+- **Language(s) (NLP):** [Python,Langchain,yolov8,vertexai]
 - **Finetuned from model :** [Wizardcoder-15B-v1.0]
 
 ### Model Sources [optional]
@@ -129,7 +130,11 @@ You can read more about the ICE methodology in this paper.
 
 
 
 
 
-
-
+## Can you try it?
+Yes, we’ve opened it up. Try out yourself right here:
+* Can you provide a python script that uses the YOLOv8 model from the Ultralytics library to detect people in an image, draw green bounding boxes around them, and then save the image?
+* Write a python code using langchain to do Question and Answering over a blog post.
+* Write a python code using langchain library to retrieve information from SQL database and a vector store
+* How can I set up clients for job service, model service, endpoint service, and prediction service using the Vertex AI client library in Python?
```