---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- role-play
- fine-tuned
- qwen2
base_model: Qwen/Qwen2-1.5B
library_name: transformers
---

## Introduction

**Oxy 1 Micro** is a fine-tuned version of the [Qwen2-1.5B](https://huggingface.co/Qwen/Qwen2-1.5B) language model, specialized for **role-play** scenarios. Despite its small size, it delivers strong performance in generating engaging dialogue and interactive storytelling.

Developed by **Oxygen (oxyapi)**, with contributions from **TornadoSoftwares**, Oxy 1 Micro aims to provide an accessible and efficient language model for creative and immersive role-play experiences.

|
## Model Details

- **Model Name**: Oxy 1 Micro
- **Model ID**: [oxyapi/oxy-1-micro](https://huggingface.co/oxyapi/oxy-1-micro)
- **Base Model**: [Qwen/Qwen2-1.5B](https://huggingface.co/Qwen/Qwen2-1.5B)
- **Model Type**: Chat Completions
- **License**: Apache-2.0
- **Language**: English
- **Tokenizer**: [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct)
- **Max Input Tokens**: 32,768
- **Max Output Tokens**: 8,192

|
### Features

- **Fine-tuned for Role-Play**: Specially trained to generate dynamic and contextually rich role-play dialogues.
- **Efficient**: The compact model size allows faster inference with lower computational requirements.
- **Parameter Support**:
  - `temperature`
  - `top_p`
  - `top_k`
  - `frequency_penalty`
  - `presence_penalty`
  - `max_tokens`

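The parameter names above follow OpenAI-style API conventions. When running the model locally with the Transformers library, `temperature`, `top_p`, and `top_k` map directly, `max_tokens` corresponds to `max_new_tokens`, and `frequency_penalty`/`presence_penalty` have no exact built-in counterpart (`repetition_penalty` is the closest native option). A minimal sketch, assuming only that `transformers` is installed; the values shown are illustrative, not recommended defaults:

```python
from transformers import GenerationConfig

# Illustrative sampling settings. repetition_penalty stands in for the
# OpenAI-style frequency/presence penalties, which Transformers does not
# expose under those names.
generation_config = GenerationConfig(
    do_sample=True,
    temperature=0.8,       # softens or sharpens the token distribution
    top_p=0.9,             # nucleus-sampling cutoff
    top_k=40,              # sample only from the 40 most likely tokens
    repetition_penalty=1.1,
    max_new_tokens=256,    # analogous to max_tokens
)
```

The config can then be passed to generation as `model.generate(**inputs, generation_config=generation_config)`.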
|
### Metadata

- **Owned by**: Oxygen (oxyapi)
- **Contributors**: TornadoSoftwares
- **Description**: A Qwen2-1.5B fine-tune for role-play that remains capable despite its small size.

|
## Usage

To use Oxy 1 Micro for text generation in role-play scenarios, load the model with the Hugging Face Transformers library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("oxyapi/oxy-1-micro")
model = AutoModelForCausalLM.from_pretrained("oxyapi/oxy-1-micro")

prompt = "You are a wise old wizard in a mystical land. A traveler approaches you seeking advice."
inputs = tokenizer(prompt, return_tensors="pt")

# max_new_tokens bounds only the generated continuation; max_length would
# also count the prompt tokens toward the limit.
outputs = model.generate(**inputs, max_new_tokens=500)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
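Since the model is listed as a chat-completions model with a Qwen instruct tokenizer, conversations can also be formatted with `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)`. As a sketch of what that produces, assuming the tokenizer follows the standard Qwen ChatML template:

```python
# A role-play conversation in the ChatML layout used by Qwen models.
messages = [
    {"role": "system", "content": "You are a wise old wizard in a mystical land."},
    {"role": "user", "content": "A traveler approaches you seeking advice."},
]

# Each turn is wrapped in <|im_start|>{role} ... <|im_end|> markers.
prompt = "".join(
    f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages
)
# The trailing generation prompt tells the model to continue as the assistant.
prompt += "<|im_start|>assistant\n"
print(prompt)
```

The resulting string can be tokenized and passed to `model.generate` exactly like the plain prompt above.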
| |
|
## Performance

Performance benchmarks for Oxy 1 Micro are not available at this time. Future updates may include detailed evaluations on relevant datasets.

| |
|
## License

This model is licensed under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).

| |
|
## Citation

If you find Oxy 1 Micro useful in your research or applications, please cite it as:

```bibtex
@misc{oxy1micro2024,
  title={Oxy 1 Micro: A Fine-Tuned Qwen2-1.5B Model for Role-Play},
  author={Oxygen (oxyapi)},
  year={2024},
  howpublished={\url{https://huggingface.co/oxyapi/oxy-1-micro}},
}
```