Instructions for using mozilla-ai/WizardCoder-Python-34B-V1.0-llamafile with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use mozilla-ai/WizardCoder-Python-34B-V1.0-llamafile with Transformers:
```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "mozilla-ai/WizardCoder-Python-34B-V1.0-llamafile",
    dtype="auto",
)
```
- Notebooks
- Google Colab
- Kaggle
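The Transformers snippet above is the generic auto-generated one; a llamafile is a self-contained executable that, when run locally, serves a llama.cpp-style HTTP API (by default on port 8080). As a minimal sketch, the following builds a `/completion` request body for that server without sending it — the prompt and `n_predict` value are illustrative, and no running server is assumed:

```python
import json


def completion_payload(prompt, n_predict=128):
    """Build a llama.cpp-style /completion request body.

    Only the two most common fields are included here; the server
    accepts many more sampling parameters.
    """
    return {"prompt": prompt, "n_predict": n_predict}


# Example: a code-completion prompt for WizardCoder.
payload = json.dumps(completion_payload("def fibonacci(n):"))
print(payload)
```

In practice you would POST this JSON to `http://localhost:8080/completion` after starting the llamafile.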
Quantize wizardcoder-python-34b-v1.0 with llamafile-0.7.3 Q3_K_L
wizardcoder-python-34b-v1.0.Q3_K_L.llamafile CHANGED

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:dde6a45c3a1280ebe1719e1c7889fa377f2784aa9525af8be172a26cd93209fb
+size 17795912185
```
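The file tracked here is a Git LFS pointer: a small three-line text file (`version`, `oid`, `size`) that stands in for the actual 17.8 GB llamafile. As a sketch of how such a pointer can be read, the following parses the new pointer contents from the diff above (`parse_lfs_pointer` is a hypothetical helper, not part of any library):

```python
def parse_lfs_pointer(text):
    """Parse a Git LFS pointer file into a dict of its key/value pairs.

    Each line has the form "<key> <value>"; the value of "oid" is
    "<hash-algo>:<hex-digest>" and "size" is the byte count of the
    real file stored in LFS.
    """
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields


# The pointer contents after this commit, taken from the diff above.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:dde6a45c3a1280ebe1719e1c7889fa377f2784aa9525af8be172a26cd93209fb
size 17795912185
"""

info = parse_lfs_pointer(pointer)
print(info["oid"])
print(int(info["size"]))  # ~17.8 GB on disk
```

The `size` field is what a client would compare against the downloaded file, and the `sha256` digest in `oid` is what it would verify the content against.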