---
license: mit
tags:
- code
widget:
- text: "print("
  example_title: "Example 1"
- text: "def calculate"
  example_title: "Example 2"
---

# Complexity-1B
# Model Details
Complexity-1B is a fine-tuned version of the GPT-NeoX 1.3B model [@gpt-neox] for code completion. It was fine-tuned on a dataset of Python code from open-source projects on GitHub.

# Intended Uses
This model is intended for code completion in Python: given partially written Python code, it suggests likely continuations.

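The intended use can be illustrated with a short completion sketch. The card gives no canonical snippet, so the Hub model id and the `transformers` pipeline call below are assumptions and are left as comments; only the generic post-processing helper is concrete:

```python
def truncate_to_first_line(completion: str) -> str:
    """Trim a raw generated continuation to its first line -- a simple
    way to stop a suggestion at a natural boundary before showing it."""
    lines = completion.splitlines()
    return lines[0] if lines else ""

# Hypothetical usage (the model id "complexity-1b" is a placeholder and
# `transformers` must be installed; neither is confirmed by this card):
#
#   from transformers import pipeline
#   generator = pipeline("text-generation", model="complexity-1b")
#   prompt = "def calculate"
#   raw = generator(prompt, max_new_tokens=24)[0]["generated_text"]
#   print(prompt + truncate_to_first_line(raw[len(prompt):]))
```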
# Evaluation Data
The model was evaluated on a held-out set of Python code snippets drawn from the same distribution as the training data.

# Metrics
The primary evaluation metric was code-completion accuracy on the held-out set, on which the model achieves 49%.

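The card does not state how completion accuracy is defined. A common choice, shown here purely as an assumption, is exact match between the predicted completion and the reference:

```python
def completion_accuracy(predictions, references):
    """Fraction of predictions that exactly match the reference
    completion (whitespace-stripped). This exact-match definition is an
    assumption -- the card does not specify the precise metric."""
    if not references:
        raise ValueError("empty evaluation set")
    hits = sum(p.strip() == r.strip() for p, r in zip(predictions, references))
    return hits / len(references)

# Toy illustration: one of two predictions matches the reference.
print(completion_accuracy(["(x)", "return x"], ["(x)", "return y"]))  # -> 0.5
```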
# Ethical Considerations
The training data contains code from public GitHub repositories. Users should take care that completions are not used to produce unethical or harmful code, or code used in ways the original developers did not intend.

# Caveats and Recommendations
The model is designed for Python code completion only; performance on other programming languages is unknown. Users should carefully validate any generated code before executing or deploying it.
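One lightweight first step in validating a suggestion is a syntax check before any human review or execution; a minimal sketch using the standard library:

```python
import ast

def is_valid_python(source: str) -> bool:
    """Return True if the snippet parses as Python. This checks syntax
    only -- it says nothing about safety or correctness, so generated
    code still needs human review before it is run or deployed."""
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False

print(is_valid_python("def calculate(x):\n    return x * 2"))  # True
print(is_valid_python("print("))  # False: incomplete call
```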