Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

semcoder_1030 - bnb 8bits
- Model creator: https://huggingface.co/semcoder/
- Original model: https://huggingface.co/semcoder/semcoder_1030/
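This repository hosts a bitsandbytes 8-bit quantization. As a minimal sketch (assuming `bitsandbytes` and `accelerate` are installed alongside `transformers`), the original checkpoint could be loaded in 8-bit as shown below; the actual load call is commented out because it downloads several GB of weights:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Quantization settings: store linear-layer weights in 8-bit on load.
quant_config = BitsAndBytesConfig(load_in_8bit=True)

# Uncomment to download and load the model (requires a CUDA-capable GPU):
# tokenizer = AutoTokenizer.from_pretrained("semcoder/semcoder_1030")
# model = AutoModelForCausalLM.from_pretrained(
#     "semcoder/semcoder_1030",
#     quantization_config=quant_config,
#     device_map="auto",
# )

print(quant_config.load_in_8bit)
```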
|
|
|
|
|
|
|
|
| Original model description: |
| ---
|
| license: other
|
| library_name: transformers
|
| license_name: deepseek
|
| license_link: https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/LICENSE-MODEL
|
| pipeline_tag: text-generation
|
| ---
|
| # 🤔 SemCoder: Training Code Language Models with Comprehensive Semantics Reasoning
|
|
|
> Refer to our GitHub repo [ARiSE-Lab/SemCoder](https://github.com/ARiSE-Lab/SemCoder/) for a detailed introduction to SemCoder!
|
|
|
| ## Model Details
|
|
|
|
|
Use the code below to get started with the model. Make sure you have installed the [transformers](https://huggingface.co/docs/transformers/index) library.
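The example also imports `torch`, and `device_map="auto"` relies on `accelerate`, so a typical install (exact version pins left to the reader) looks like:

```shell
pip install transformers torch accelerate
```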
|
|
|
```python
from transformers import pipeline
import torch

generator = pipeline(
    model="semcoder/semcoder_1030",
    task="text-generation",
    torch_dtype=torch.float16,
    device_map="auto",
)

# Generate Code
CODEGEN_REQUEST = """You are an exceptionally intelligent coding assistant that consistently delivers accurate and reliable <Code> according to <NL_Description>

<NL_Description>
{desc}

<Code>
"""

desc = """You are tasked with implementing a Python class that simulates a simple version of a "To-Do List" application. The class should have the following functionalities:
1. Add a new task to the to-do list.
2. Mark a task as completed.
3. Display all tasks in the to-do list.
4. Display only the incomplete tasks in the to-do list.
"""

prompt = CODEGEN_REQUEST.format(desc=desc)
result = generator(prompt, max_length=2048, num_return_sequences=1, temperature=0.0)
code = result[0]["generated_text"].split("```python")[1].split("```")[0]
print(code)

# Understand Code with Monologues
FWD_MNL_REQUEST = """Simulate the Execution: You are given a Python function and an assertion containing a function input. Complete the assertion containing the execution output corresponding to the given input in [ANSWER] and [/ANSWER] tags.
{code}
"""

tests = """
todo_list = ToDoList()
todo_list.add_task("Buy groceries")
todo_list.add_task("Complete assignment")
todo_list.mark_completed("Buy groceries")
assert todo_list.tasks == ???
"""
code += tests
prompt = FWD_MNL_REQUEST.format(code=code)
result = generator(prompt, max_length=2048, num_return_sequences=1, temperature=0.0)
print(result[0]["generated_text"])
```
|
|
|
| ## Citation
|
|
|
```bibtex
@article{ding2024semcoder,
  title={SemCoder: Training Code Language Models with Comprehensive Semantics},
  author={Yangruibo Ding and Jinjun Peng and Marcus J. Min and Gail Kaiser and Junfeng Yang and Baishakhi Ray},
  journal={arXiv preprint arXiv:2406.01006},
  year={2024}
}
```
|
|
|
| ## Important Note
|
|
|
SemCoder models are trained on synthetic data generated by OpenAI models. Please pay attention to OpenAI's [terms of use](https://openai.com/policies/terms-of-use) when using the models and the datasets. SemCoder will not compete with OpenAI's commercial products.
|
|
|
|