[FEEDBACK] Daily Papers
Note that this is not a thread for submitting new papers; it is for feedback on the Daily Papers community update feature.
How to submit a paper to the Daily Papers, like @akhaliq (AK)?
- Submission is available to paper authors.
- Only recent papers (less than 7 days old) can be featured on the Daily Papers.
- Drop the arXiv ID in the form at https://huggingface.co/papers/submit
- Add media (images, videos) to the paper when relevant.
- You can start a discussion to engage with the community.
Please check out the documentation.
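Before filling in the form, the 7-day recency rule above can be checked locally. A minimal sketch, assuming an ISO-formatted publication date; the helper name and the exact cutoff logic are illustrative, not the actual submission backend:

```python
from datetime import datetime, timezone, timedelta

# Hypothetical helper: decide whether a paper is still within the
# Daily Papers recency window (assumed here to be 7 days).
def is_recent(published_iso: str, window_days: int = 7) -> bool:
    """published_iso: publication date in ISO format, e.g. '2024-06-01'."""
    published = datetime.fromisoformat(published_iso).replace(tzinfo=timezone.utc)
    return datetime.now(timezone.utc) - published <= timedelta(days=window_days)
```

For example, a paper published two days ago passes the check, while one published a month ago does not.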
We are excited to share our recent work on MLLM architecture design titled "Ovis: Structural Embedding Alignment for Multimodal Large Language Model".
Paper: https://arxiv.org/abs/2405.20797
Github: https://github.com/AIDC-AI/Ovis
Model: https://huggingface.co/AIDC-AI/Ovis-Clip-Llama3-8B
Data: https://huggingface.co/datasets/AIDC-AI/Ovis-dataset
We are excited to share our work titled "Hierarchical Prompting Taxonomy: A Universal Evaluation Framework for Large Language Models": https://arxiv.org/abs/2406.12644
Hi @meganariley @AdinaY, requesting a manual exception for a Daily Papers submission.
Paper: EMCompress: Video-LLMs with Endomorphic Multimodal Compression
arXiv: https://arxiv.org/abs/2508.21094 (v3, uploaded Apr 24, 2026)
Venue: ACL 2026 Findings (camera-ready)
The system blocks submission because v1 was uploaded in Aug 2025. However, v3 is a completely rewritten paper, not a revision; the only thing shared with v1 is the arXiv ID:
- Everything fully rewritten: v1 was "Temporal Visual Screening for Video Question Answering (TVS)"; v3 is "EMCompress: Video-LLMs with Endomorphic Multimodal Compression (EMC)": different title, different motivation, introduction, framing, and formalization.
- New mathematical formulation: v3 introduces a sufficient-statistic / Information Bottleneck framing (Markov chain A → (V, Q) → (v, q), with the classical sufficiency condition in a VideoQA-natural form), entirely absent in v1/v2.
- New experiments: v3 adds long-video benchmark evaluation (Video-MME, MLVU, LVBench, EgoSchema) and a full cost-efficiency analysis; neither was in v1.
- Benchmark restructured: YouCookII-TVS → EMCompress, repositioned as a standalone dual-task benchmark.
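For readers unfamiliar with the framing mentioned above, the classical sufficiency condition on such a Markov chain is usually written as follows; this is a sketch of the standard information-theoretic form, and the paper's exact formalization may differ:

```latex
% Markov chain: answer A, full inputs (V, Q), compressed inputs (v, q)
A \to (V, Q) \to (v, q)
% Classical sufficiency: the compression loses no information about A
I(A;\, v, q) = I(A;\, V, Q)
```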
All five figures redrawn to reflect the new framework.
Peer-reviewed: this is the ACL 2026 Findings camera-ready, ~7 months after v2.
We kept the arXiv ID for citation continuity with the ACL submission metadata, but the work itself is new. Could you manually enable submission, or alternatively remove and re-index the paper page so we can resubmit during the v3 window? Happy to provide a v1→v3 diff if helpful.
Thanks!