Instructions for using Ailiance-fr/devstral-python-lora with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- PEFT
How to use Ailiance-fr/devstral-python-lora with PEFT:
The auto-generated PEFT snippet is not available for this adapter (its declared task type is not recognized as a valid PEFT task type).
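A minimal sketch of loading the adapter with PEFT, assuming the adapter is stored in a PEFT-compatible format and that BASE_MODEL_ID (a placeholder, not given on this page) is the base checkpoint it was trained from:

```python
# Sketch only: load the LoRA adapter on top of its base model with PEFT.
# BASE_MODEL_ID is a placeholder -- substitute the actual base checkpoint;
# the adapter must be in PEFT format for PeftModel.from_pretrained to work.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL_ID = "<base-model-id>"  # not provided by this model card

base = AutoModelForCausalLM.from_pretrained(BASE_MODEL_ID)
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL_ID)
model = PeftModel.from_pretrained(base, "Ailiance-fr/devstral-python-lora")

inputs = tokenizer("Once upon a time in", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```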
- MLX
How to use Ailiance-fr/devstral-python-lora with MLX:
```python
# Make sure mlx-lm is installed
# pip install --upgrade mlx-lm
# if on a CUDA device, also pip install mlx[cuda]

# Generate text with mlx-lm
from mlx_lm import load, generate

model, tokenizer = load("Ailiance-fr/devstral-python-lora")
prompt = "Once upon a time in"
text = generate(model, tokenizer, prompt=prompt, verbose=True)
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- LM Studio
- MLX LM
How to use Ailiance-fr/devstral-python-lora with MLX LM:
Generate or start a chat session
```sh
# Install MLX LM
uv tool install mlx-lm

# Generate some text
mlx_lm.generate --model "Ailiance-fr/devstral-python-lora" --prompt "Once upon a time"
```
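For the chat session mentioned above, mlx-lm also ships an interactive chat entry point; a minimal sketch using the same model id:

```sh
# Or start an interactive chat session with the adapter model
mlx_lm.chat --model "Ailiance-fr/devstral-python-lora"
```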
Training configuration file (1,169 Bytes):

```json
{
  "adapter_path": "/Users/clems/KIKI-Mac_tunner/output/eu-kiki/devstral-python",
  "batch_size": 1,
  "clear_cache_threshold": 0,
  "config": "/Users/clems/KIKI-Mac_tunner/output/eu-kiki/devstral-python/train_config.yaml",
  "data": "/Users/clems/KIKI-Mac_tunner/data/micro-kiki/python",
  "fine_tune_type": "lora",
  "grad_accumulation_steps": 4,
  "grad_checkpoint": true,
  "iters": 500,
  "learning_rate": 1e-05,
  "lora_parameters": {
    "alpha": 32,
    "dropout": 0.05,
    "rank": 16,
    "scale": 2.0
  },
  "lr_schedule": null,
  "mask_prompt": false,
  "max_seq_length": 2048,
  "model": "/Users/clems/KIKI-Mac_tunner/models/Devstral-Small-2-24B-Instruct-2512",
  "num_layers": -1,
  "optimizer": "adam",
  "optimizer_config": {
    "adam": {},
    "adamw": {},
    "muon": {},
    "sgd": {},
    "adafactor": {}
  },
  "project_name": null,
  "report_to": null,
  "resume_adapter_file": null,
  "save_every": 100,
  "seed": 42,
  "steps_per_eval": 100,
  "steps_per_report": 5,
  "test": false,
  "test_batches": 500,
  "train": true,
  "val_batches": 10
}
```
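For reference, a comparable LoRA run can be launched through the mlx_lm.lora CLI. The sketch below maps the main values from the configuration above onto CLI flags; the paths are the author's local paths, the adapter output path is a placeholder, and LoRA hyperparameters such as rank, alpha, and dropout are normally set through the YAML config rather than flags:

```sh
# Sketch only: reproduce a comparable LoRA fine-tuning run with mlx_lm.lora.
# Paths come from the JSON config above (local to the author's machine);
# flag names follow current mlx-lm releases and may differ across versions.
mlx_lm.lora \
  --model /Users/clems/KIKI-Mac_tunner/models/Devstral-Small-2-24B-Instruct-2512 \
  --train \
  --data /Users/clems/KIKI-Mac_tunner/data/micro-kiki/python \
  --fine-tune-type lora \
  --batch-size 1 \
  --iters 500 \
  --learning-rate 1e-5 \
  --max-seq-length 2048 \
  --grad-checkpoint \
  --adapter-path ./adapters  # placeholder output directory
```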