Instructions for using date3k2/mamba-text-classification with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use date3k2/mamba-text-classification with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="date3k2/mamba-text-classification", trust_remote_code=True)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("date3k2/mamba-text-classification", trust_remote_code=True)
model = AutoModelForSequenceClassification.from_pretrained("date3k2/mamba-text-classification", trust_remote_code=True)
```

- Notebooks
- Google Colab
- Kaggle
Update config.json
config.json +3 -0

```diff
@@ -3,6 +3,9 @@
   "architectures": [
     "MambaForSequenceClassification"
   ],
+  "auto_map": {
+    "AutoModelForSequenceClassification": "hf_mamba_classification.MambaForSequenceClassification"
+  },
   "bos_token_id": 0,
   "conv_kernel": 4,
   "d_inner": 1536,
```
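The added `auto_map` entry is what makes `AutoModelForSequenceClassification.from_pretrained(..., trust_remote_code=True)` work: the value is a dotted `module.ClassName` reference, where the module part names a Python file (here `hf_mamba_classification.py`) shipped alongside the config in the model repo. A minimal sketch of how that mapping is read, using only plain JSON parsing on the fragment added by this commit (the actual loading logic lives inside Transformers and does more, e.g. downloading and importing the module):

```python
import json

# The fragment of config.json added by this commit.
config_text = """
{
  "auto_map": {
    "AutoModelForSequenceClassification": "hf_mamba_classification.MambaForSequenceClassification"
  }
}
"""

config = json.loads(config_text)
ref = config["auto_map"]["AutoModelForSequenceClassification"]

# The reference is "<module>.<class>"; with trust_remote_code=True,
# Transformers imports hf_mamba_classification.py from the repo and
# instantiates the named class instead of its built-in model class.
module_name, class_name = ref.rsplit(".", 1)
print(module_name)  # → hf_mamba_classification
print(class_name)   # → MambaForSequenceClassification
```

Without `trust_remote_code=True`, the auto classes refuse to execute repo-provided code and this mapping is ignored, which is why both snippets above pass the flag.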