---
license: apache-2.0
pipeline_tag: image-text-to-text
library_name: transformers
---

# JanusCoderV-7B

[💻Github Repo](https://github.com/InternLM/JanusCoder) • [🤗Model Collections](https://huggingface.co/collections/internlm/januscoder) • [📜Technical Report](https://www.arxiv.org/abs/2510.23538)

## Introduction

We introduce JanusCoder and JanusCoderV, a suite of open-source foundation models designed to establish a unified visual-programmatic interface for code intelligence.
The suite is built on open-source language models (Qwen3-8B and Qwen3-14B) and multimodal models (Qwen2.5-VL and InternVL3.5-8B). It is trained on JANUSCODE-800K, the largest multimodal code corpus to date, generated by an innovative synthesis toolkit and covering everything from standard charts to complex interactive web UIs and code-driven animations.
This training enables the models to handle diverse visual-programmatic tasks uniformly, such as generating code from textual instructions, visual inputs, or a combination of both, rather than relying on specialized models for isolated tasks. JanusCoder excels at flexible content generation (e.g., data visualizations and interactive front-ends) as well as precise, program-driven editing of visual effects and the construction of complex animations.

## Model Downloads

| Model Name | Description | Download |
| --- | --- | --- |
| JanusCoder-8B | 8B text model based on Qwen3-8B. | 🤗 [Model](https://huggingface.co/internlm/JanusCoder-8B) |
| JanusCoder-14B | 14B text model based on Qwen3-14B. | 🤗 [Model](https://huggingface.co/internlm/JanusCoder-14B) |
| 👉 **JanusCoderV-7B** | 7B multimodal model based on Qwen2.5-VL-7B. | 🤗 [Model](https://huggingface.co/internlm/JanusCoderV-7B) |
| JanusCoderV-8B | 8B multimodal model based on InternVL3.5-8B. | 🤗 [Model](https://huggingface.co/internlm/JanusCoderV-8B) |
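
If you prefer to fetch the weights ahead of time rather than at first load, the snippet below is a minimal sketch using `huggingface_hub` (the download path shown is whatever your local cache resolves to, not a fixed location):

```python
# Minimal sketch: pre-download the JanusCoderV-7B weights into the local HF cache.
from huggingface_hub import snapshot_download

local_dir = snapshot_download("internlm/JanusCoderV-7B")
print(f"Model files cached at: {local_dir}")
```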

## Performance

We evaluate JanusCoderV on a range of benchmarks spanning multimodal code intelligence tasks across multiple programming languages:

| Benchmark | JanusCoderV-7B | Qwen2.5-VL-7B-Instruct | InternVL3-8B | InternVL3.5-8B | MiniCPM-V-2.6 | Llama3.2-11B-Vision-Instruct | GPT-4o |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ChartMimic (Customized) | 72.77 | 58.69 | 60.04 | 59.55 | 48.18 | 39.63 | 67.42 |
| DesignBench (Gen) | 73.31 | 72.73 | 69.34 | 71.73 | 66.25 | 62.24 | 76.83 |
| DesignBench (Edit) | 8.79 | 6.85 | 7.76 | 8.63 | 4.56 | 6.61 | 9.23 |
| WebCode2M | 26.21 | 12.83 | 12.40 | 11.95 | 9.73 | 6.57 | 13.00 |
| InteractScience (Func.) | 17.73 | 8.40 | 8.93 | 11.47 | 0.13 | 6.67 | 27.20 |
| InteractScience (Visual) | 27.67 | 19.83 | 53.35 | 24.17 | 7.70 | 13.24 | 46.01 |

## Quick Start

**Transformers**

The following demo code illustrates how to generate text with JanusCoderV-7B.

> Please use transformers >= 4.55.0 to ensure the model works correctly.
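
As a quick sanity check, you can verify the installed version before loading the model. This is a minimal sketch assuming `packaging` is available (it ships as a transformers dependency):

```python
# Sketch: verify the installed transformers version meets the >= 4.55.0 requirement.
from packaging import version
import transformers

assert version.parse(transformers.__version__) >= version.parse("4.55.0"), \
    f"transformers {transformers.__version__} is too old; please upgrade to >= 4.55.0"
```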

```python
import torch
from transformers import AutoProcessor, AutoModelForImageTextToText

model_name = "internlm/JanusCoderV-7B"

# Load the processor and the vision-language model (Qwen2.5-VL architecture).
processor = AutoProcessor.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForImageTextToText.from_pretrained(
    model_name,
    device_map="auto",
    torch_dtype="auto",
    trust_remote_code=True,
)

# A single-turn conversation with one image and one text instruction.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "http://images.cocodataset.org/val2017/000000039769.jpg"},
            {"type": "text", "text": "Please describe the image explicitly."},
        ],
    }
]

# Tokenize the chat, preprocess the image, and move the tensors to the model device.
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device, dtype=torch.bfloat16)

generate_ids = model.generate(**inputs, max_new_tokens=32768)

# Decode only the newly generated tokens, skipping the prompt.
decoded_output = processor.decode(
    generate_ids[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(decoded_output)
```
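
Since JanusCoderV is trained for visual-programmatic tasks rather than plain captioning, you will typically swap the text prompt for a code-oriented instruction. The variant below is an illustrative sketch reusing the `processor` and `model` from above; the image URL and prompt wording are placeholders, not an official recipe:

```python
# Sketch: ask the model to reproduce a chart as code instead of describing it.
# Replace the URL with your own chart image; the prompt wording is illustrative.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/path/to/your_chart.png"},
            {"type": "text", "text": "Reproduce this chart with matplotlib. Return complete, runnable Python code."},
        ],
    }
]

inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt"
).to(model.device, dtype=torch.bfloat16)

generate_ids = model.generate(**inputs, max_new_tokens=4096)
print(processor.decode(generate_ids[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```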

## Citation

🫶 If you are interested in our work or find the repository, checkpoints, benchmarks, or data helpful, please consider citing our papers:

```bibtex
@article{sun2025januscoder,
  title={JanusCoder: Towards a Foundational Visual-Programmatic Interface for Code Intelligence},
  author={Sun, Qiushi and Gong, Jingyang and Liu, Yang and Chen, Qiaosheng and Li, Lei and Chen, Kai and Guo, Qipeng and Kao, Ben and Yuan, Fei},
  journal={arXiv preprint arXiv:2510.23538},
  year={2025}
}

@article{sun2024survey,
  title={A Survey of Neural Code Intelligence: Paradigms, Advances and Beyond},
  author={Sun, Qiushi and Chen, Zhirui and Xu, Fangzhi and Cheng, Kanzhi and Ma, Chang and Yin, Zhangyue and Wang, Jianing and Han, Chengcheng and Zhu, Renyu and Yuan, Shuai and others},
  journal={arXiv preprint arXiv:2403.14734},
  year={2024}
}

@article{chen2025interactscience,
  title={InteractScience: Programmatic and Visually-Grounded Evaluation of Interactive Scientific Demonstration Code Generation},
  author={Chen, Qiaosheng and Liu, Yang and Li, Lei and Chen, Kai and Guo, Qipeng and Cheng, Gong and Yuan, Fei},
  journal={arXiv preprint arXiv:2510.09724},
  year={2025}
}

@article{sun2025codeevo,
  title={CodeEvo: Interaction-Driven Synthesis of Code-centric Data through Hybrid and Iterative Feedback},
  author={Sun, Qiushi and Gong, Jinyang and Li, Lei and Guo, Qipeng and Yuan, Fei},
  journal={arXiv preprint arXiv:2507.22080},
  year={2025}
}
```