prithivMLmods 
posted an update 3 days ago
Introducing QIE-2511-Zoom-Master for highlight-guided area zoom-in, enabling lossless zooming within a drawn square area, and QIE-2511-Object-Remover-v2 for precise object or highlight-guided area cleanup. Both experimental adapters are trained on top of QIE-2511. Find the adapters below.

🕹️ QIE-2511-Zoom-Master: prithivMLmods/QIE-2511-Zoom-Master
🕹️ QIE-2511-Object-Remover-v2: prithivMLmods/QIE-2511-Object-Remover-v2

🤗Demo: prithivMLmods/Qwen-Image-Edit-Object-Manipulator

📂Collection: https://huggingface.co/collections/prithivMLmods/qwen-image-edit-exps

To learn more, visit the app page or the respective model pages.
telcom 
posted an update 10 days ago
MAD-GRPO: https://huggingface.co/blog/telcom/mad-grpo
In R1-Zero-Like Training*, Dr.GRPO addresses GRPO's bias by dropping the per-group std normalization, but that often comes with a hidden side effect: length-weighted updates that can nudge the model toward verbosity.
MAD-GRPO instead normalizes with a robust scale (MAD + epsilon), giving stable per-token updates without the verbosity bias.

*https://huggingface.co/papers/2503.20783
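
A minimal sketch of the idea (the exact centering and scaling choices are defined in the blog post; the function name and example rewards here are illustrative):

```python
import torch

def mad_normalized_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Robust advantage normalization for one rollout group (sketch).

    rewards: shape (num_samples,), one scalar reward per sampled completion.
    Scales by MAD + eps instead of the group standard deviation.
    """
    median = rewards.median()
    mad = (rewards - median).abs().median()  # median absolute deviation
    return (rewards - median) / (mad + eps)  # eps keeps the scale well-defined

# Example: one outlier reward no longer dominates the scale.
rewards = torch.tensor([0.10, 0.15, 0.20, 0.90])
print(mad_normalized_advantages(rewards))
```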

prithivMLmods 
posted an update 15 days ago
The LTX-2 Camera-Control LoRA demo, featuring dolly-in/out and dolly-left/right camera moves, is now available on Hugging Face, paired with ltx-2-19b-distilled-lora for fast inference. It also includes dynamic GPU duration adjustment for long video generations. Find the related Space links below.

🤗 Try it now: prithivMLmods/LTX-2-LoRAs-Camera-Control-Dolly
⭐ GitHub: https://github.com/PRITHIVSAKTHIUR/LTX-2-LoRAs-Camera-Control-Dolly
🕹️Collection: https://huggingface.co/collections/prithivMLmods/image-generation-apps-collection

To learn more, visit the app page or the respective model pages.
prithivMLmods 
posted an update 22 days ago
Dropping Image Edit (Object Manipulator): Add or remove specified objects/designs, with flexible support for both single-image and multi-image modes.

🤗 Demo: prithivMLmods/Qwen-Image-Edit-Object-Manipulator

Qwen-Image-Edit-2511-Object-Remover is an adapter (LoRA) developed for Qwen’s Qwen-Image-Edit-2511 image-to-image model. It is specifically designed for precise object removal from images.

⭐ Model: prithivMLmods/Qwen-Image-Edit-2511-Object-Remover

Qwen-Image-Edit-2511-Object-Adder is an adapter (LoRA) developed for Qwen’s Qwen-Image-Edit-2511 image-to-image model. It is specifically designed for precise object addition to images.

⭐ Model: prithivMLmods/Qwen-Image-Edit-2511-Object-Adder

🕹️ Collection: https://huggingface.co/collections/prithivMLmods/qwen-image-edit-object-manipulator
🕹️ GitHub: https://github.com/PRITHIVSAKTHIUR/Qwen-Image-Edit-Object-Manipulator

To learn more, visit the app page or the respective model pages.
telcom 
posted an update 26 days ago
If you are interested in HUB (https://saemi410.github.io/HUB/), I recommend the fork I created (git@github.com:javadtaghia/HUB.git), which includes a few updates that make running a smoke test smoother. If you want to run UCE (https://unified.baulab.info), please check:
- Model weights for UCE: telcom/uce_NSFW
- Model weights for ESD: telcom/esd_NSFW
- Datasets and other download materials: telcom/HUB_reference_dataset

Please read the notes in the model card.
prithivMLmods 
posted an update 29 days ago
Update: the TRELLIS.2 (Text-to-3D, Image-to-3D) Gradio demo with embedded Rerun and an improved 3D model previewer is now available on Hugging Face. Generate assets and view them in the 3D viewer, powered and streamlined by Microsoft’s TRELLIS.2 and Tongyi-MAI’s Z-Image-Turbo models.

🤗 TRELLIS.2 (Demo): prithivMLmods/TRELLIS.2-Text-to-3D
🕹️ GitHub: https://github.com/PRITHIVSAKTHIUR/TRELLIS.2-Text-to-3D-RERUN
🕹️ Collection: https://huggingface.co/collections/prithivMLmods/multimodal-implementations

To know more about it, visit the app page or the respective model page!
prithivMLmods 
posted an update about 1 month ago
Introducing the Qwen-Image-Edit-2511-LoRAs-Fast demo, featuring image property comparison and contrast, built with Gradio combined with the Rerun SDK. It supports single- and multi-image edits with existing LoRAs that are lazily loaded, as sketched below. (Note: this is still an experimental Space for Qwen-Image-Edit-2511.)

⭐ Space Demo: prithivMLmods/Qwen-Image-Edit-2511-LoRAs-Fast
⭐ GitHub: https://github.com/PRITHIVSAKTHIUR/Qwen-Image-Edit-2511-LoRAs-Fast-Multi-Image-Rerun
⭐ Collection: https://huggingface.co/collections/prithivMLmods/image-generation-apps-collection
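
For context, "lazily loaded" means an adapter is downloaded and attached only the first time a user selects it, then reused. A minimal sketch of that pattern with diffusers (the pipeline id, adapter naming, and caching below are assumptions for illustration, not the Space's actual code):

```python
import torch
from diffusers import DiffusionPipeline

# Assumed base model id; the Space's actual setup may differ.
pipe = DiffusionPipeline.from_pretrained(
    "Qwen/Qwen-Image-Edit-2511", torch_dtype=torch.bfloat16
).to("cuda")

_loaded_adapters = set()

def use_lora(repo_id: str, weight: float = 1.0):
    """Attach a LoRA on first request (the lazy step), then just switch to it."""
    name = repo_id.replace("/", "--")
    if name not in _loaded_adapters:
        pipe.load_lora_weights(repo_id, adapter_name=name)  # download + attach once
        _loaded_adapters.add(name)
    pipe.set_adapters([name], adapter_weights=[weight])     # activate for this edit
```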

To know more about it, visit the app page or the respective model page!
telcom 
posted an update about 1 month ago
NVIDIA’s Groq deal... I think inference efficiency is becoming the main driver of profitability, and NVIDIA’s Groq deal is evidence the market is moving from “who can train the biggest” to “who can serve cheapest and fastest at scale.” That points to a maturing phase of AI: not necessarily the end of a bubble, but definitely a correction in what “wins” long-term.
What do you think?
telcom 
posted an update about 1 month ago
CIFAR-10, your handy image dataset...
CIFAR-10 is a small, standard computer-vision dataset used to quickly test and compare ideas.

- 60,000 color images, each 32×32 pixels, labeled into 10 classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck.
- Label mapping (important):
  - 0: airplane
  - 1: automobile
  - 2: bird
  - 3: cat
  - 4: deer
  - 5: dog
  - 6: frog
  - 7: horse
  - 8: ship
  - 9: truck
- Split: 50,000 train and 10,000 test.
- Why people use it: fast benchmarking for image classifiers (small CNNs, ResNet, ViT), and quick experiments for training pipelines, augmentation, regularization, pruning, distillation, and demos.
- Sizes (downloads): Python version about 163 MB, binary about 162 MB. Hugging Face shows about 144 MB for the dataset files.
- Where to get it: the official CIFAR page (University of Toronto) and the Hugging Face CIFAR-10 dataset page.
uoft-cs/cifar10
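
A minimal loading sketch with the datasets library (column names follow the uoft-cs/cifar10 dataset card):

```python
from datasets import load_dataset

# CIFAR-10 from the Hub: 50,000 train / 10,000 test examples.
ds = load_dataset("uoft-cs/cifar10")

# Recover the label mapping (0 -> airplane, ..., 9 -> truck).
labels = ds["train"].features["label"].names
print(labels)

# Each example is a 32x32 PIL image plus an integer label.
sample = ds["train"][0]
print(sample["img"].size, "->", labels[sample["label"]])
```
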
If you want something more, check the table below:

| Dataset | Resolution | Classes | Best For |
|---|---|---|---|
| ImageNet-1K | 224–256×256 | 1000 | Real-world large-scale classification |
| ImageNet-256 | 256×256 | 1000 | Direct high-res training |
| TinyImageNet | 64×64 | 200 | Mid-range benchmark |
| UC Merced Land Use | 256×256 | ~21 | Higher-resolution small classification |
| MS COCO | >256×256 | ~80 objects | Detection / segmentation |
telcom 
posted an update about 1 month ago
arXiv CS endorsement

It's Javad. My Google Scholar profile:
https://scholar.google.com/citations?user=bja6GwoAAAAJ&hl=en
I would like to share my articles with you on Hugging Face, so I'm asking for an endorsement* in the Computer Science section of arxiv.org.

If you would like to endorse me, please visit the following URL:
https://arxiv.org/auth/endorse?x=NVUAPL
If that URL does not work for you, please visit
http://arxiv.org/auth/endorse.php
and enter the following six-digit alphanumeric string:
Endorsement Code: NVUAPL

Thank you in advance.
Javad Taghia

* Who is qualified to endorse?

To endorse another user to submit to the cs.AI (Artificial Intelligence) subject class, an arXiv submitter must have submitted 3 papers to any of cs.AI, cs.AR, cs.CC, cs.CE, cs.CG, cs.CL, cs.CR, cs.CV, cs.CY, cs.DB, cs.DC, cs.DL, cs.DM, cs.DS, cs.ET, cs.FL, cs.GL, cs.GR, cs.GT, cs.HC, cs.IR, cs.IT, cs.LG, cs.LO, cs.MA, cs.MM, cs.MS, cs.NA, cs.NE, cs.NI, cs.OH, cs.OS, cs.PF, cs.PL, cs.RO, cs.SC, cs.SD, cs.SE, cs.SI or cs.SY earlier than three months ago and less than five years ago.

prithivMLmods 
posted an update about 1 month ago
Introducing demos for new SOTA models from AI2: SAGE-MM (Smart Any-Horizon Agents for Long-Video Reasoning) and Molmo-2, an open vision-language model that supports multi-image (QA and pointing) and video (QA, pointing, and tracking). The respective demo-related collections are listed below. 🎃🔥

✨ SAGE-MM [Video-Reasoning]: prithivMLmods/SAGE-MM-Video-Reasoning
✨ Molmo2 [Demo]: prithivMLmods/Molmo2-HF-Demo

🎃 GitHub[SAGE-MM]: https://github.com/PRITHIVSAKTHIUR/SAGE-MM-Video-Reasoning
🎃 GitHub[Molmo2]: https://github.com/PRITHIVSAKTHIUR/Molmo2-HF-Demo
🎃 Multimodal Implementations: https://huggingface.co/collections/prithivMLmods/multimodal-implementations

To know more about it, visit the app page or the respective model page!
prithivMLmods 
posted an update about 1 month ago
Introducing TRELLIS.2 Text-to-3D. The demo for the TRELLIS.2-4B (Image-to-3D) model is streamlined with the Z-Image-Turbo image-generation model to enable Text-to-3D functionality. No input assets are needed, a small leap forward for ideation. Optionally, it also supports Image-to-3D inference from direct image assets. Find the demo and related collections below... 🤗🔥

✨ TRELLIS.2-Text-to-3D [Demo]: prithivMLmods/TRELLIS.2-Text-to-3D
✨ Multimodal Collection: https://huggingface.co/collections/prithivMLmods/multimodal-implementations
✨ GitHub: https://github.com/PRITHIVSAKTHIUR/TRELLIS.2-Text-to-3D

To know more about it, visit the app page or the respective model page!
prithivMLmods 
posted an update about 1 month ago
Demo for Molmo2 on Hugging Face is live now, including Single/Multi-Image VQA, Visual Pointing/Grounding, Video VQA, and Video Point Tracking. Find the demo and related collections below. 🔥🤗

● Molmo2 HF Demo🖥️: prithivMLmods/Molmo2-HF-Demo
● Model Collection: https://huggingface.co/collections/allenai/molmo2
● Related Multimodal Space Collection: https://huggingface.co/collections/prithivMLmods/multimodal-implementations

To know more about it, visit the app page or the respective model page!
prithivMLmods 
posted an update about 1 month ago
Introducing the Z Image Turbo LoRA DLC App, a gallery space for plug-and-play Z-Image-Turbo LoRAs. It features a curated collection of impressive LoRAs for generating high-quality images. By default, it runs on the base model. Simply choose a LoRA, type your prompt, and generate images. You can find the app and more details below. 🤗🧪

● Space [Demo]: prithivMLmods/Z-Image-Turbo-LoRA-DLC
● Collection: https://huggingface.co/collections/prithivMLmods/image-generation-apps-collection
● Check the list of Z-Image LoRAs: https://huggingface.co/models?other=base_model:adapter:Tongyi-MAI/Z-Image-Turbo
● GitHub: https://github.com/PRITHIVSAKTHIUR/Z-Image-Turbo-LoRA-DLC

Other related image-gen Spaces:

● FLUX-LoRA-DLC2: prithivMLmods/FLUX-LoRA-DLC2
● FLUX-LoRA-DLC: prithivMLmods/FLUX-LoRA-DLC
● Qwen-Image-LoRA-DLC: prithivMLmods/Qwen-Image-LoRA-DLC
● Qwen-Image-Edit-2509-LoRAs-Fast: prithivMLmods/Qwen-Image-Edit-2509-LoRAs-Fast
● Qwen-Image-Edit-2509-LoRAs-Fast-Fusion: prithivMLmods/Qwen-Image-Edit-2509-LoRAs-Fast-Fusion

& more...

To know more about it, visit the app page or the respective model page!
telcom 
posted an update about 2 months ago
Recently I was playing with my model. What is your take on "unlearning"? I need it 😀
telcom/deewaiREALCN: the original is on the main branch, and the trained versions "cp550" and "n_680" are on another branch.
Both were trained on telcom/deewaiREALCN-training.
I got three results with the prompt:
"Athlete portrait, 26-year-old woman, post-training sweat, gym ambient light, chalk dust particles, intense gaze, crisp detail."
Apparently, the model is sensitive to the word "old".
You can see that training on more faces improved over main; however, it is still not ideal...
I am now working on unlearning and would like to hear your opinion.
#unlearning


prithivMLmods 
posted an update about 2 months ago
Introducing the D.Markdown Experimental Models, Proxima and Epsilon OCR models, built on top of Qwen3-VL and Qwen2.5-VL respectively. Proxima is optimized for Markdown generation and is capable of embedding inline programming code snippets and generating rich nodes such as HTML, XML, JSON, and YAML. Epsilon is optimized for reconstructing complex layouts including tables, forms, and mathematical content. 🌌✨

● proxima-ocr-d.markdown-post3.0.l: prithivMLmods/proxima-ocr-d.markdown-post3.0.l
● epsilon-ocr-d.markdown-post3.0.m: prithivMLmods/epsilon-ocr-d.markdown-post3.0.m
● proxima-ocr-d.markdown-post3.0.l-gguf: prithivMLmods/proxima-ocr-d.markdown-post3.0.l-GGUF
● epsilon-ocr-d.markdown-post3.0.m-gguf: prithivMLmods/epsilon-ocr-d.markdown-post3.0.m-GGUF

● Collection: https://huggingface.co/collections/prithivMLmods/dynamic-markdowns
● Multimodal Apps: https://huggingface.co/collections/prithivMLmods/multimodal-implementations

👉 These models are stage progression models, and currently they may contain artifacts.
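
Since Epsilon is stated to be built on Qwen2.5-VL, inference should follow the standard Qwen2.5-VL chat interface. A minimal sketch (prompt text, image path, and generation settings are illustrative; check the model card for the recommended setup, and use the equivalent Qwen3-VL classes for Proxima):

```python
import torch
from PIL import Image
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model_id = "prithivMLmods/epsilon-ocr-d.markdown-post3.0.m"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("page.png")  # illustrative input document image
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Reconstruct this page as Markdown."},
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=1024)
print(processor.batch_decode(out[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True)[0])
```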

To know more about it, visit the app page or the respective model page!
prithivMLmods 
posted an update about 2 months ago
Try the CUA GUI Operator 🖥️ Space, a demo bringing some interesting ultra-compact multimodal Computer Use Agent (CUA) models into a single app, including Fara-7B, UI-TARS-1.5-7B, and the Holo models, for GUI localization tasks.

● CUA-GUI-Operator [Demo]: prithivMLmods/CUA-GUI-Operator
● Collection: https://huggingface.co/collections/prithivMLmods/multimodal-implementations

Other related multimodal Spaces:

● Qwen3-VL: prithivMLmods/Qwen3-VL-HF-Demo
● Multimodal-VLM-v1.0: prithivMLmods/Multimodal-VLM-v1.0
● Vision-to-VibeVoice-en: prithivMLmods/Vision-to-VibeVoice-en

I plan to add Chrome sandboxes to streamline it into a browser-based multimodal CUA tool; this will be added to the same Space soon.

To know more about it, visit the app page or the respective model page!
prithivMLmods 
posted an update about 2 months ago
One speech model with seven voices, streamlined with multimodal capabilities for vision tasks. It performs vision (image + text) to audio inference with Qwen2.5-VL + VibeVoice-Realtime-0.5B. Vision to VibeVoice (EN): the demo is live. 🗣️🔥

🤗 Vision-to-VibeVoice-en [Demo]: prithivMLmods/Vision-to-VibeVoice-en
✨ Collection: https://huggingface.co/collections/prithivMLmods/multimodal-implementations
✨ Speech [VibeVoice-Realtime-0.5B]: microsoft/VibeVoice-Realtime-0.5B
✨ Vision [Qwen2.5-VL]: Qwen/Qwen2.5-VL-7B-Instruct

To know more about it, visit the app page or the respective model page!