Instructions for using mozilla-ai/WizardCoder-Python-34B-V1.0-llamafile with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use mozilla-ai/WizardCoder-Python-34B-V1.0-llamafile with Transformers:
```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "mozilla-ai/WizardCoder-Python-34B-V1.0-llamafile",
    dtype="auto",
)
```
- Notebooks
- Google Colab
- Kaggle
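Since this repo ships a llamafile (a self-contained executable that bundles the model weights with a llama.cpp runtime), it can also be run directly as a local app, with no Python environment required. A minimal sketch based on the standard llamafile workflow, assuming the quantized file from this repository has already been downloaded:

```shell
# Make the downloaded llamafile executable, then launch it.
# By default llamafile starts a local server with a browser chat UI
# (typically at http://127.0.0.1:8080).
chmod +x wizardcoder-python-34b-v1.0.Q4_K.llamafile
./wizardcoder-python-34b-v1.0.Q4_K.llamafile
```

On Windows the file can be renamed with a `.exe` suffix and run the same way; consult the llamafile documentation for platform-specific details.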
Quantize wizardcoder-python-34b-v1.0 with llamafile-0.7.3 Q4_K
wizardcoder-python-34b-v1.0.Q4_K.llamafile CHANGED
```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:3a669206ae02beaeb153e16a643493ae99803466f560bcef9487f11500e8a087
+size 20244271605
```
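The file changed in this commit is a Git LFS pointer: the 20 GB binary itself is stored out of band, and the repository tracks only a small three-line stub (version, oid, size). A minimal sketch of reading such a pointer, using the values from the commit above; the parser below is illustrative, not part of any library:

```python
# Parse a Git LFS pointer stub into its key/value fields.
# Each line has the form "<key> <value>", e.g. "size 20244271605".
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The pointer contents committed in this repo (from the diff above).
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:3a669206ae02beaeb153e16a643493ae99803466f560bcef9487f11500e8a087
size 20244271605
"""

info = parse_lfs_pointer(pointer)
print(info["oid"])   # sha256 digest of the actual llamafile
print(info["size"])  # size in bytes (~20 GB)
```

The `oid` is the SHA-256 digest of the real file, which is what `git lfs pull` fetches and verifies when you clone the repo.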