Instructions for using mozilla-ai/WizardCoder-Python-34B-V1.0-llamafile with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use mozilla-ai/WizardCoder-Python-34B-V1.0-llamafile with Transformers:
```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "mozilla-ai/WizardCoder-Python-34B-V1.0-llamafile", dtype="auto"
)
```
- Notebooks
- Google Colab
- Kaggle
Quantize wizardcoder-python-34b-v1.0 with llamafile-0.7.3 Q2_K
wizardcoder-python-34b-v1.0.Q2_K.llamafile CHANGED

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:b1d2adfbecd62a459d08a0f9878612748336d5f17e53612ff846d10803a159cb
+size 12530029045
```
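The file changed in this commit is not the ~12.5 GB llamafile itself but its Git LFS pointer: a small text stub listing a spec version, a sha256 `oid`, and the byte `size`. As a minimal sketch (the `parse_lfs_pointer` helper is hypothetical, not part of any Git LFS tooling), the pointer format above can be read like this:

```python
# Hypothetical helper: parse a Git LFS pointer file such as the one in the
# diff above. Each line is a space-separated "key value" pair.
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields


# The new pointer contents from the commit above.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:b1d2adfbecd62a459d08a0f9878612748336d5f17e53612ff846d10803a159cb
size 12530029045
"""

info = parse_lfs_pointer(pointer)
print(info["oid"])                       # the sha256 content hash
print(f"{int(info['size']) / 1e9:.2f} GB")  # size of the quantized llamafile
```

This is why cloning the repo without `git lfs pull` yields only a few-line text file in place of the model: the actual Q2_K weights are fetched separately by their `oid`.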