This model, DeProgrammer/Jan-v3-4B-base-instruct-MNN, was converted to the MNN format from janhq/Jan-v3-4B-base-instruct using llmexport.py from MNN version 3.4.0 with default settings (4-bit quantization).
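
For reference, the conversion step can typically be reproduced with a command along the lines of the sketch below (wrapped here in Python's `subprocess` for illustration). The flag names `--path`, `--export`, and `--dst_path` are assumptions based on common llmexport.py usage and may differ in the script bundled with MNN 3.4.0; 4-bit quantization is the default, so no quantization flag is passed.

```python
# Hedged sketch of the conversion step; flag names are assumptions and
# should be checked against the llmexport.py shipped with MNN 3.4.0.
import subprocess

subprocess.run(
    [
        "python", "llmexport.py",
        "--path", "janhq/Jan-v3-4B-base-instruct",   # source Hugging Face model
        "--export", "mnn",                            # emit MNN-format weights
        "--dst_path", "Jan-v3-4B-base-instruct-MNN",  # output directory (illustrative name)
    ],
    check=True,
)
```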

Inference can be run with any MNN-based runtime, for example the MNN Chat app on Android.
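
On desktop, one way to try the converted model is through MNN's Python LLM bindings. The sketch below assumes the `MNN` pip package exposes an `MNN.llm` module with `create`/`load`/`response` calls and that the export produced a `config.json` next to the weights; the exact API and file layout may vary between MNN releases.

```python
# Minimal sketch of inference via MNN's Python LLM bindings (pip install MNN).
# Module and method names reflect recent MNN releases and may differ elsewhere.
import MNN.llm as llm

model = llm.create("Jan-v3-4B-base-instruct-MNN/config.json")  # config exported alongside the weights
model.load()                                                   # load weights and build the runtime
print(model.response("Hello, who are you?"))                   # single-turn text generation
```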
