Text Generation
Transformers
causal-lm
linear-attention
rwkv
reka
knowledge-distillation
multilingual
Instructions for using OpenMOSE/HRWKV7-Reka-Flash3-Preview with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use OpenMOSE/HRWKV7-Reka-Flash3-Preview with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

# trust_remote_code=True is required because the checkpoint ships a custom
# RWKV-hybrid architecture (as noted in the model card).
pipe = pipeline(
    "text-generation",
    model="OpenMOSE/HRWKV7-Reka-Flash3-Preview",
    trust_remote_code=True,
)

# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "OpenMOSE/HRWKV7-Reka-Flash3-Preview",
    dtype="auto",
    trust_remote_code=True,
)
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use OpenMOSE/HRWKV7-Reka-Flash3-Preview with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "OpenMOSE/HRWKV7-Reka-Flash3-Preview"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "OpenMOSE/HRWKV7-Reka-Flash3-Preview",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker
```shell
docker model run hf.co/OpenMOSE/HRWKV7-Reka-Flash3-Preview
```
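The curl call above can equally be issued from Python. Below is a minimal sketch using only the standard library, assuming the vLLM server started in the previous step is listening on localhost:8000; the helper names (`build_request`, `complete`) are illustrative, not part of any API:

```python
# Sketch: call vLLM's OpenAI-compatible /v1/completions endpoint with stdlib only.
import json
import urllib.request

def build_request(prompt: str, url: str = "http://localhost:8000/v1/completions"):
    """Build a POST request carrying an OpenAI-style completion payload."""
    payload = {
        "model": "OpenMOSE/HRWKV7-Reka-Flash3-Preview",
        "prompt": prompt,
        "max_tokens": 512,
        "temperature": 0.5,
    }
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def complete(prompt: str) -> str:
    """Send the request and return the generated text (requires a running server)."""
    with urllib.request.urlopen(build_request(prompt)) as resp:
        body = json.load(resp)
    return body["choices"][0]["text"]

req = build_request("Once upon a time,")
print(req.full_url, req.get_method())
```

Calling `complete("Once upon a time,")` performs the same request as the curl example and extracts `choices[0].text` from the JSON response.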
- SGLang
How to use OpenMOSE/HRWKV7-Reka-Flash3-Preview with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "OpenMOSE/HRWKV7-Reka-Flash3-Preview" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "OpenMOSE/HRWKV7-Reka-Flash3-Preview",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "OpenMOSE/HRWKV7-Reka-Flash3-Preview" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "OpenMOSE/HRWKV7-Reka-Flash3-Preview",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use OpenMOSE/HRWKV7-Reka-Flash3-Preview with Docker Model Runner:
```shell
docker model run hf.co/OpenMOSE/HRWKV7-Reka-Flash3-Preview
```
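The vLLM and SGLang servers above also expose an OpenAI-compatible chat endpoint at `/v1/chat/completions`, so the same deployments can be driven with chat-formatted messages instead of raw prompts. A standard-library sketch follows (SGLang's port 30000 shown; switch to 8000 for the vLLM server; the helper names are illustrative):

```python
# Sketch: chat-style request against an OpenAI-compatible server, stdlib only.
import json
import urllib.request

API_URL = "http://localhost:30000/v1/chat/completions"  # SGLang example port; vLLM uses 8000

def build_chat_request(messages, url=API_URL):
    """Build a POST request carrying an OpenAI-style chat payload."""
    payload = {
        "model": "OpenMOSE/HRWKV7-Reka-Flash3-Preview",
        "messages": messages,
        "max_tokens": 512,
        "temperature": 0.5,
    }
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def chat(messages):
    """Send the request and return the assistant's reply (requires a running server)."""
    with urllib.request.urlopen(build_chat_request(messages)) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

req = build_chat_request([{"role": "user", "content": "Tell me a short story."}])
print(req.full_url, req.get_method())
```

With a server running, `chat([{"role": "user", "content": "..."}])` returns the reply text from `choices[0].message.content`.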
Remove library name and Transformers code snippet

README.md CHANGED

````diff
@@ -1,7 +1,6 @@
 ---
 license: apache-2.0
 pipeline_tag: text-generation
-library_name: transformers
 tags:
 - text-generation
 - causal-lm
@@ -98,31 +97,6 @@ Performance evaluation is ongoing. The model shows promising results in:
 - Significantly improved needle-in-haystack task performance compared to pure RWKV architectures
 - Competitive performance on standard language modeling benchmarks
 
-## Usage with Hugging Face Transformers
-
-This model can be loaded and used with the `transformers` library. Ensure you have `transformers` installed: `pip install transformers`.
-When loading, remember to set `trust_remote_code=True` because of the custom architecture.
-
-```python
-from transformers import pipeline, AutoTokenizer
-import torch
-
-model_name = "OpenMOSE/HRWKV7-Reka-Flash3-Preview"  # Replace with the actual model ID if different
-tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
-pipe = pipeline(
-    "text-generation",
-    model_name,
-    tokenizer=tokenizer,
-    torch_dtype=torch.bfloat16,  # or torch.float16 depending on your GPU and model precision
-    device_map="auto",
-    trust_remote_code=True,
-)
-
-text = "The quick brown fox jumps over the lazy "
-result = pipe(text, max_new_tokens=20, do_sample=True, top_p=0.9, temperature=0.7)[0]["generated_text"]
-print(result)
-```
-
 ## Run with RWKV-Infer (as provided by original authors)
 - RWKV-Infer now support hxa079
 ```bash
````