Instructions for using AxiaoDBL/DeepSeek-R1-0528-Qwen3-8B-CodeLx-Reasoning with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use AxiaoDBL/DeepSeek-R1-0528-Qwen3-8B-CodeLx-Reasoning with Transformers:
```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "AxiaoDBL/DeepSeek-R1-0528-Qwen3-8B-CodeLx-Reasoning",
    dtype="auto",
)
```
- Notebooks
- Google Colab
- Kaggle
- Xet hash: `12bc70b55beab97c87b9036a0fedba345b12d09cc39dbf92352a914038c3869b`
- Size of remote file: 11.4 MB
- SHA256: `93d5fd6d2f8cf1172ac86cf982e2b88fa6732366b44dc1a32349379a54a6a044`
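The SHA256 above can be used to verify the integrity of a downloaded file. A minimal sketch using Python's standard `hashlib` (the function name and path are hypothetical, not part of any Hugging Face API):

```python
import hashlib


def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 hex digest of a file, reading it in chunks
    so large model files are never loaded into memory at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

Compare the returned digest against the SHA256 listed above; a mismatch indicates a corrupted or incomplete download.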
Xet efficiently stores large files inside Git by intelligently splitting them into unique chunks, which accelerates uploads and downloads.
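Xet's actual chunking is content-defined and more sophisticated than shown here; purely as an illustration of the chunk-level deduplication idea, the sketch below splits data into fixed-size chunks and stores each unique chunk once, keyed by its SHA-256 digest (all names are hypothetical):

```python
import hashlib


def chunk_and_dedupe(data: bytes, chunk_size: int = 4):
    """Split data into fixed-size chunks; store each unique chunk once.
    Returns a manifest (ordered list of chunk digests) and a chunk store."""
    store: dict = {}     # digest -> chunk bytes, each unique chunk kept once
    manifest: list = []  # ordered digests needed to rebuild the original data
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        key = hashlib.sha256(chunk).hexdigest()
        store.setdefault(key, chunk)
        manifest.append(key)
    return manifest, store


def reassemble(manifest, store) -> bytes:
    """Rebuild the original bytes from the manifest and chunk store."""
    return b"".join(store[k] for k in manifest)
```

Repeated chunks appear once in the store, so only the manifest grows with file size; this is why re-uploading a slightly modified large file can transfer only the changed chunks.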