| parent_paper_title | parent_paper_arxiv_id | citation_shorthand | raw_citation_text | cited_paper_title | cited_paper_arxiv_link | cited_paper_abstract | has_metadata | is_arxiv_paper | bib_paper_authors | bib_paper_year | bib_paper_month | bib_paper_url | bib_paper_doi | bib_paper_journal | original_title | search_res_title | search_res_url | search_res_content |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
TaxAgent: How Large Language Model Designs Fiscal Policy | 2506.02838v1 | NBERw21340 | \cite{NBERw21340} | Effective Policy for Reducing Inequality? The Earned Income Tax Credit and the Distribution of Income | null | null | true | false | Hoynes, Hilary W and Patel, Ankur J | 2,015 | July | http://www.nber.org/papers/w21340 | 10.3386/w21340 | null | Effective Policy for Reducing Inequality? The Earned Income Tax Credit and the Distribution of Income | Effective Policy for Reducing Inequality? The Earned Income | https://ideas.repec.org/p/nbr/nberwo/21340.html | Our results show that a policy-induced $1000 increase in the EITC leads to a 7.3 percentage point increase in employment and a 9.4 percentage point reduction |
TaxAgent: How Large Language Model Designs Fiscal Policy | 2506.02838v1 | NBERw21211 | \cite{NBERw21211} | The Earned Income Tax Credit (EITC) | null | null | true | false | Nichols, Austin and Rothstein, Jesse | 2,015 | May | http://www.nber.org/papers/w21211 | 10.3386/w21211 | null | The Earned Income Tax Credit (EITC) | What is the earned income tax credit? - Tax Policy Center | https://taxpolicycenter.org/briefing-book/what-earned-income-tax-credit | The earned income tax credit (EITC) provides substantial support to low- and moderate-income working parents who claim a qualifying child. |
TaxAgent: How Large Language Model Designs Fiscal Policy | 2506.02838v1 | Foo2019ProcessAC | \cite{Foo2019ProcessAC} | Process and Critical Approaches to Solving the Systemic Climate Change Governance Problem | null | null | true | false | Check Woo Foo | 2,019 | null | https://api.semanticscholar.org/CorpusID:235319207 | null | Politics \& Energy eJournal | Process and Critical Approaches to Solving the Systemic Climate Change Governance Problem | Process and Critical Approaches to Solving the Systemic Climate ... | https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3608501 | The most important and urgent task, besides avoiding nuclear war, is abatement of the existential threat of systemic climate change, |
TaxAgent: How Large Language Model Designs Fiscal Policy | 2506.02838v1 | Patjoshi2015DesignAD | \cite{Patjoshi2015DesignAD} | Design and Development of Advanced Control strategies for Power Quality Enhancement at Distribution Level | null | null | true | false | Rajesh Kumar Patjoshi | 2,015 | null | https://api.semanticscholar.org/CorpusID:112918597 | null | null | Design and Development of Advanced Control strategies for Power Quality Enhancement at Distribution Level | (PDF) Advanced Control Strategies for UPQC to Improve ... | https://www.researchgate.net/publication/279289697_Advanced_Control_Strategies_for_UPQC_to_Improve_Power_Quality_of_Power_Distribution_Systems | PDF | On Jul 2, 2014, Quoc Nam Trinh published Advanced Control Strategies for UPQC to Improve Power Quality of Power Distribution Systems |
TaxAgent: How Large Language Model Designs Fiscal Policy | 2506.02838v1 | 10.1257/jep.25.4.165 | \cite{10.1257/jep.25.4.165} | The Case for a Progressive Tax: From Basic Research to Policy Recommendations | null | null | true | false | Diamond, Peter and Saez, Emmanuel | 2,011 | December | https://www.aeaweb.org/articles?id=10.1257/jep.25.4.165 | 10.1257/jep.25.4.165 | Journal of Economic Perspectives | The Case for a Progressive Tax: From Basic Research to Policy Recommendations | The Case for a Progressive Tax | https://economics.mit.edu/sites/default/files/2022-09/jep.25.4.165.pdf | Therefore, optimal income tax theory is fi rst a normative theory that shows how a social welfare objective combines with constraints arising from theory that shows how a social welfare objective combines with constraints arising from limits on resources and behavioral responses to taxation in order to derive specifi c... |
TaxAgent: How Large Language Model Designs Fiscal Policy | 2506.02838v1 | 10.2307/2296779 | \cite{10.2307/2296779} | An Exploration in the Theory of Optimum Income Taxation12 | null | null | true | false | Mirrlees, J. A. | 1,971 | 04 | https://doi.org/10.2307/2296779 | 10.2307/2296779 | The Review of Economic Studies | An Exploration in the Theory of Optimum Income Taxation12 | Exploration in the Theory of Optimum Income Taxation12 | https://academic.oup.com/restud/article-abstract/38/2/175/1527903 | by JA Mirrlees · 1971 · Cited by 7415 — J. A. Mirrlees; An Exploration in the Theory of Optimum Income Taxation12, The Review of Economic Studies, Volume 38, Issue 2, 1 April 1971, Pages 175–208, |
TaxAgent: How Large Language Model Designs Fiscal Policy | 2506.02838v1 | RePEc:aea:aecrev:v:61:y:1971:i:1:p:8-27 | \cite{RePEc:aea:aecrev:v:61:y:1971:i:1:p:8-27} | Optimal Taxation and Public Production: I--Production Efficiency | null | null | true | false | Diamond, Peter and Mirrlees, James | 1,971 | null | https://EconPapers.repec.org/RePEc:aea:aecrev:v:61:y:1971:i:1:p:8-27 | null | American Economic Review | Optimal Taxation and Public Production: I--Production Efficiency | [PDF] Optimal Taxation and Public Production I: Production Efficiency | http://hassler-j.iies.su.se/Courses/DynPubFin/Papers/DiamondMirrlees.pdf | Theories of optimal production in a planned economy have usually assumed that the tax system can allow the govern- ment to achieve any desired redistribution of |
TaxAgent: How Large Language Model Designs Fiscal Policy | 2506.02838v1 | 10.1111/1467-937X.00166 | \cite{10.1111/1467-937X.00166} | Using Elasticities to Derive Optimal Income Tax Rates | null | null | true | false | Saez, Emmanuel | 2,001 | 01 | https://doi.org/10.1111/1467-937X.00166 | 10.1111/1467-937X.00166 | The Review of Economic Studies | Using Elasticities to Derive Optimal Income Tax Rates | Using Elasticities to Derive Optimal Income Tax Rates | https://academic.oup.com/restud/article/68/1/205/1568609 | by E Saez · 2001 · Cited by 1885 — This paper derives optimal income tax formulas using compensated and uncompensated elasticities of earnings with respect to tax rates. |
TaxAgent: How Large Language Model Designs Fiscal Policy | 2506.02838v1 | 10.1257/pol.6.1.230 | \cite{10.1257/pol.6.1.230} | Optimal Taxation of Top Labor Incomes: A Tale of Three Elasticities | null | null | true | false | Piketty, Thomas and Saez, Emmanuel and Stantcheva, Stefanie | 2,014 | February | https://www.aeaweb.org/articles?id=10.1257/pol.6.1.230 | 10.1257/pol.6.1.230 | American Economic Journal: Economic Policy | Optimal Taxation of Top Labor Incomes: A Tale of Three Elasticities | Optimal Taxation of Top Labor Incomes: A Tale of Three Elasticities | https://www.nber.org/papers/w17616 | This paper presents a model of optimal labor income taxation where top incomes respond to marginal tax rates through three channels. |
TaxAgent: How Large Language Model Designs Fiscal Policy | 2506.02838v1 | 10.1257/pol.20180033 | \cite{10.1257/pol.20180033} | Optimal Income Taxation with Unemployment and Wage Responses: A Sufficient Statistics Approach | null | null | true | false | Kroft, Kory and Kucko, Kavan and Lehmann, Etienne and Schmieder, Johannes | 2,020 | February | https://www.aeaweb.org/articles?id=10.1257/pol.20180033 | 10.1257/pol.20180033 | American Economic Journal: Economic Policy | Optimal Income Taxation with Unemployment and Wage Responses: A Sufficient Statistics Approach | Optimal Income Taxation with Unemployment and Wage Responses | https://www.aeaweb.org/articles?id=10.1257/pol.20180033 | We derive a sufficient statistics tax formula in a model that incorporates unemployment and endogenous wages to study the shape of the optimal income tax. Key |
TaxAgent: How Large Language Model Designs Fiscal Policy | 2506.02838v1 | zheng2020aieconomistimprovingequality | \cite{zheng2020aieconomistimprovingequality} | The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies | http://arxiv.org/abs/2004.13332v1 | Tackling real-world socio-economic challenges requires designing and testing economic policies. However, this is hard in practice, due to a lack of appropriate (micro-level) economic data and limited opportunity to experiment. In this work, we train social planners that discover tax policies in dynamic economies that c... | true | true | Stephan Zheng and Alexander Trott and Sunil Srinivasa and Nikhil Naik and Melvin Gruesbeck and David C. Parkes and Richard Socher | 2,020 | null | https://arxiv.org/abs/2004.13332 | null | null | The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies | [PDF] Improving Equality and Productivity with AI-Driven Tax Policies - arXiv | http://arxiv.org/pdf/2004.13332 | The AI Economist uses AI to discover tax policies that improve the trade-off between equality and productivity, achieving a 16% improvement |
TaxAgent: How Large Language Model Designs Fiscal Policy | 2506.02838v1 | NBERc14009 | \cite{NBERc14009} | The Impact of Machine Learning on Economics | null | null | true | false | Susan Athey | 2,018 | January | http://www.nber.org/chapters/c14009 | null | null | The Impact of Machine Learning on Economics | The Impact of Machine Learning on Economics | https://www.gsb.stanford.edu/faculty-research/publications/impact-machine-learning-economics | # The Impact of Machine Learning on Economics This paper provides an assessment of the early contributions of machine learning to economics, as well as predictions about its future contributions. It begins by briefly overviewing some themes from the literature on machine learning, and then draws some contrasts with tra... |
TaxAgent: How Large Language Model Designs Fiscal Policy | 2506.02838v1 | AxtellFarmer2022 | \cite{AxtellFarmer2022} | Agent Based Modeling in Economics and Finance: Past, Present, and Future | null | null | true | false | Axtell, R. and Farmer, J. | 2,022 | null | null | null | Journal of Economic Literature | Agent Based Modeling in Economics and Finance: Past, Present, and Future | [PDF] Agent-Based Modeling in Economics and Finance: Past, Present ... | https://complexityhandbook.uni-hohenheim.de/fileadmin/einrichtungen/complexityhandbook/AXTELL_Robert.pdf | Abstract. Agent-based modeling is a novel computational methodology for representing the behavior of individuals in order to study social phenomena. |
TaxAgent: How Large Language Model Designs Fiscal Policy | 2506.02838v1 | DelliGatti2018 | \cite{DelliGatti2018} | Contents | null | null | true | false | Delli Gatti, Domenico and Fagiolo, Giorgio and Gallegati, Mauro and Richiardi, Matteo and Russo, Alberto | 2,018 | null | null | null | null | Contents | CONTENTS | definition in the Cambridge English Dictionary | https://dictionary.cambridge.org/us/dictionary/english/contents | everything that is contained within something: contents of The contents of his bag spilled all over the floor. He didn't need to open the box because |
TaxAgent: How Large Language Model Designs Fiscal Policy | 2506.02838v1 | shen2025phyxdoesmodelwits | \cite{shen2025phyxdoesmodelwits} | PhyX: Does Your Model Have the "Wits" for Physical Reasoning? | http://arxiv.org/abs/2505.15929v2 | Existing benchmarks fail to capture a crucial aspect of intelligence: physical reasoning, the integrated ability to combine domain knowledge, symbolic reasoning, and understanding of real-world constraints. To address this gap, we introduce PhyX: the first large-scale benchmark designed to assess models capacity for ph... | true | true | Hui Shen and Taiqiang Wu and Qi Han and Yunta Hsieh and Jizhou Wang and Yuyue Zhang and Yuxin Cheng and Zijian Hao and Yuansheng Ni and Xin Wang and Zhongwei Wan and Kai Zhang and Wendong Xu and Jing Xiong and Ping Luo and Wenhu Chen and Chaofan Tao and Zhuoqing Mao and Ngai Wong | 2,025 | null | https://arxiv.org/abs/2505.15929 | null | null | PhyX: Does Your Model Have the "Wits" for Physical Reasoning? | PhyX: Does Your Model Have the "Wits" for Physical Reasoning? | http://arxiv.org/pdf/2505.15929v2 | Existing benchmarks fail to capture a crucial aspect of intelligence: physical reasoning, the integrated ability to combine domain knowledge, symbolic reasoning, and understanding of real-world constraints. To address this gap, we introduce PhyX: the first large-scale benchmark designed to assess models capacity for ph... |
TaxAgent: How Large Language Model Designs Fiscal Policy | 2506.02838v1 | zhao2024competeaiunderstandingcompetitiondynamics | \cite{zhao2024competeaiunderstandingcompetitiondynamics} | CompeteAI: Understanding the Competition Dynamics in Large Language Model-based Agents | http://arxiv.org/abs/2310.17512v2 | Large language models (LLMs) have been widely used as agents to complete different tasks, such as personal assistance or event planning. While most of the work has focused on cooperation and collaboration between agents, little work explores competition, another important mechanism that promotes the development of soci... | true | true | Qinlin Zhao and Jindong Wang and Yixuan Zhang and Yiqiao Jin and Kaijie Zhu and Hao Chen and Xing Xie | 2,024 | null | https://arxiv.org/abs/2310.17512 | null | null | CompeteAI: Understanding the Competition Dynamics in Large Language Model-based Agents | CompeteAI: Understanding the Competition Dynamics in Large ... | https://arxiv.org/abs/2310.17512 | In this paper, we seek to examine the competition dynamics in LLM-based agents. We first propose a general framework for studying the competition between |
TaxAgent: How Large Language Model Designs Fiscal Policy | 2506.02838v1 | nie2024surveylargelanguagemodels | \cite{nie2024surveylargelanguagemodels} | A Survey of Large Language Models for Financial Applications: Progress, Prospects and Challenges | null | null | true | false | Yuqi Nie and Yaxuan Kong and Xiaowen Dong and John M. Mulvey and H. Vincent Poor and Qingsong Wen and Stefan Zohren | 2,024 | null | https://arxiv.org/abs/2406.11903 | null | null | A Survey of Large Language Models for Financial Applications: Progress, Prospects and Challenges | A Survey of Large Language Models for Financial Applications: Progress, Prospects and Challenges | http://arxiv.org/pdf/2406.11903v1 | Recent advances in large language models (LLMs) have unlocked novel opportunities for machine learning applications in the financial domain. These models have demonstrated remarkable capabilities in understanding context, processing vast amounts of data, and generating human-preferred contents. In this survey, we explo... |
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider | 2506.02634v1 | vllm | \cite{vllm} | Efficient Memory Management for Large Language Model Serving with PagedAttention | http://arxiv.org/abs/2309.06180v1 | High throughput serving of large language models (LLMs) requires batching sufficiently many requests at a time. However, existing systems struggle because the key-value cache (KV cache) memory for each request is huge and grows and shrinks dynamically. When managed inefficiently, this memory can be significantly wasted... | true | true | Woosuk Kwon and Zhuohan Li and Siyuan Zhuang and Ying Sheng and Lianmin Zheng and Cody Hao Yu and Joseph Gonzalez and Hao Zhang and Ion Stoica | 2,023 | null | https://doi.org/10.1145/3600006.3613165 | 10.1145/3600006.3613165 | null | Efficient Memory Management for Large Language Model Serving with PagedAttention | Efficient Memory Management for Large Language Model ... | https://arxiv.org/pdf/2309.06180 | Efficient Memory Management for Large Language Model Serving with PagedAttention Woosuk Kwon 1,∗ Zhuohan Li 1,∗ Siyuan Zhuang 1 Ying Sheng 1,2 Lianmin Zheng 1 Cody Hao Yu 3 Joseph E. Gonzalez 1 Hao Zhang 4 Ion Stoica 1 1 UC Berkeley 2Stanford University 3Independent Researcher 4UC San Diego Abstract High throughput ser... |
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider | 2506.02634v1 | chunkattention | \cite{chunkattention} | ChunkAttention: Efficient Self-Attention with Prefix-Aware KV Cache and Two-Phase Partition | http://arxiv.org/abs/2402.15220v4 | Self-attention is an essential component of large language models (LLM) but a significant source of inference latency for long sequences. In multi-tenant LLM serving scenarios, the compute and memory operation cost of self-attention can be optimized by using the probability that multiple LLM requests have shared system... | true | true | Lu Ye and Ze Tao and Yong Huang and Yang Li | 2,024 | null | https://aclanthology.org/2024.acl-long.623 | null | null | ChunkAttention: Efficient Self-Attention with Prefix-Aware KV Cache and Two-Phase Partition | [PDF] Efficient Self-Attention with Prefix-Aware KV Cache and Two-Phase ... | https://aclanthology.org/2024.acl-long.623.pdf | ChunkAttention is a prefix-aware self-attention module that uses a prefix-aware KV cache and two-phase partition to improve memory utilization |
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider | 2506.02634v1 | cachedattention | \cite{cachedattention} | Cost-Efficient Large Language Model Serving for Multi-turn Conversations with CachedAttention | null | null | true | false | Bin Gao and Zhuomin He and Puru Sharma and Qingxuan Kang and Djordje Jevdjic and Junbo Deng and Xingkun Yang and Zhou Yu and Pengfei Zuo | 2,024 | null | https://www.usenix.org/conference/atc24/presentation/gao-bin-cost | null | null | Cost-Efficient Large Language Model Serving for Multi-turn Conversations with CachedAttention | Cost-Efficient Large Language Model Serving for Multi-turn ... - arXiv | https://arxiv.org/abs/2403.19708 | This paper proposes CachedAttention, a new attention mechanism that enables reuse of KV caches across multi-turn conversations, significantly reducing the |
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider | 2506.02634v1 | promptcache | \cite{promptcache} | Prompt Cache: Modular Attention Reuse for Low-Latency Inference | http://arxiv.org/abs/2311.04934v2 | We present Prompt Cache, an approach for accelerating inference for large language models (LLM) by reusing attention states across different LLM prompts. Many input prompts have overlapping text segments, such as system messages, prompt templates, and documents provided for context. Our key insight is that by precomput... | true | true | In Gim and Guojun Chen and Seung{-}Seob Lee and Nikhil Sarda and Anurag Khandelwal and Lin Zhong | 2,024 | null | https://proceedings.mlsys.org/paper_files/paper/2024/hash/a66caa1703fe34705a4368c3014c1966-Abstract-Conference.html | null | null | Prompt Cache: Modular Attention Reuse for Low-Latency Inference | [PDF] Prompt Cache: Modular Attention Reuse for Low-Latency Inference | https://proceedings.mlsys.org/paper_files/paper/2024/file/a66caa1703fe34705a4368c3014c1966-Paper-Conference.pdf | Prompt Cache accelerates LLM inference by reusing attention states of frequently occurring text segments, precomputed and stored in memory. |
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider | 2506.02634v1 | sglang | \cite{sglang} | Efficiently Programming Large Language Models using SGLang | null | null | true | false | Lianmin Zheng and Liangsheng Yin and Zhiqiang Xie and Jeff Huang and Chuyue Sun and Cody Hao Yu and Shiyi Cao and Christos Kozyrakis and Ion Stoica and Joseph E. Gonzalez and ... | 2,023 | null | https://doi.org/10.48550/arXiv.2312.07104 | 10.48550/ARXIV.2312.07104 | CoRR | Efficiently Programming Large Language Models using SGLang | Efficiently Programming Large Language Models using SGLang | https://arxiv.org/html/2312.07104v1 | SGLang simplifies the writing of LLM programs and boosts execution efficiency. Our experiments demonstrate that SGLang can speed up common LLM tasks by up to 5 |
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider | 2506.02634v1 | cacheblend | \cite{cacheblend} | CacheBlend: Fast Large Language Model Serving for RAG with Cached Knowledge Fusion | http://arxiv.org/abs/2405.16444v3 | Large language models (LLMs) often incorporate multiple text chunks in their inputs to provide the necessary contexts. To speed up the prefill of the long LLM inputs, one can pre-compute the KV cache of a text and re-use the KV cache when the context is reused as the prefix of another LLM input. However, the reused tex... | true | true | Jiayi Yao and Hanchen Li and Yuhan Liu and Siddhant Ray and Yihua Cheng and Qizheng Zhang and Kuntai Du and Shan Lu and Junchen Jiang | 2,024 | null | https://doi.org/10.48550/arXiv.2405.16444 | 10.48550/ARXIV.2405.16444 | CoRR | CacheBlend: Fast Large Language Model Serving for RAG with Cached Knowledge Fusion | CacheBlend: Fast Large Language Model Serving for RAG ... - arXiv | https://arxiv.org/abs/2405.16444 | View a PDF of the paper titled CacheBlend: Fast Large Language Model Serving for RAG with Cached Knowledge Fusion, by Jiayi Yao and 8 other authors |
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider | 2506.02634v1 | openaiapi | \cite{openaiapi} | OpenAI developer platform | null | null | true | false | OpenAI | null | null | null | null | null | OpenAI developer platform | Introducing Verdi, an AI dev platform powered by GPT-4o - OpenAI | https://openai.com/index/mercado-libre/ | Verdi, a development platform layer using GPT-4o, GPT-4o mini, and GPT-3.5 Turbo, which is transforming how Mercado Libre handles customer service and other |
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider | 2506.02634v1 | genimiapi | \cite{genimiapi} | Gemini API | null | null | true | false | Google | 2,025 | null | null | null | null | Gemini API | Gemini Developer API \| Gemma open models \| Google AI for ... | https://ai.google.dev/ | Gemini Developer API \| Gemma open models \| Google AI for Developers - Gemini Showcase ### Integrate Google AI models with an API key Build with cutting-edge AI models, like Gemini, Imagen, and Veo, from Google DeepMind Unlock AI capabilities for your apps w... |
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider | 2506.02634v1 | claudeapi | \cite{claudeapi} | Claude API | null | null | true | false | Anthropic | 2,025 | null | null | null | null | Claude API | Anthropic API | https://docs.anthropic.com/en/home | Home - Anthropic Claude Documentation Learn how to get started with the Anthropic API, the Console, and Claude Code. Explore the advanced features and capabilities now available in Claude. ## API reference Integrate and scale using our API and SDKs. ## Anthropic Console Learn about changes and new features in Claude and ... |
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider | 2506.02634v1 | mooncake | \cite{mooncake} | Mooncake Trace | null | null | true | false | null | 2,025 | null | null | null | null | Mooncake Trace | kvcache-ai/Mooncake - GitHub | https://github.com/kvcache-ai/Mooncake | Moonshot AI. Now both the Transfer Engine and Mooncake Store are open-sourced! This repository also hosts its technical report and the open sourced traces. |
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider | 2506.02634v1 | hu2024epic | \cite{hu2024epic} | EPIC: Efficient Position-Independent Context Caching for Serving Large Language Models | null | null | true | false | Junhao Hu and Wenrui Huang and Haoyi Wang and Weidong Wang and Tiancheng Hu and Qin Zhang and Hao Feng and Xusheng Chen and Yizhou Shan and Tao Xie | 2,024 | null | https://arxiv.org/abs/2410.15332 | null | null | EPIC: Efficient Position-Independent Context Caching for Serving Large Language Models | EPIC: Efficient Position-Independent Caching for Serving Large... | https://openreview.net/forum?id=qjd3ZUiHRT&referrer=%5Bthe%20profile%20of%20Yizhou%20Shan%5D(%2Fprofile%3Fid%3D~Yizhou_Shan2) | Summary: This paper introduces PICI, an efficient position-independent context caching system for serving large language models. The system pre-computes the KV |
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider | 2506.02634v1 | streamingllm | \cite{streamingllm} | Efficient Streaming Language Models with Attention Sinks | http://arxiv.org/abs/2309.17453v4 | Deploying Large Language Models (LLMs) in streaming applications such as multi-round dialogue, where long interactions are expected, is urgently needed but poses two major challenges. Firstly, during the decoding stage, caching previous tokens' Key and Value states (KV) consumes extensive memory. Secondly, popular LLMs... | true | true | Guangxuan Xiao and Yuandong Tian and Beidi Chen and Song Han and Mike Lewis | 2,024 | null | https://openreview.net/forum?id=NG7sS51zVF | null | null | Efficient Streaming Language Models with Attention Sinks | Efficient Streaming Language Models with Attention Sinks | http://arxiv.org/pdf/2309.17453v4 | Deploying Large Language Models (LLMs) in streaming applications such as multi-round dialogue, where long interactions are expected, is urgently needed but poses two major challenges. Firstly, during the decoding stage, caching previous tokens' Key and Value states (KV) consumes extensive memory. Secondly, popular LLMs... |
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider | 2506.02634v1 | h2o | \cite{h2o} | {H2O:} Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models | null | null | true | false | Zhenyu Zhang and Ying Sheng and Tianyi Zhou and Tianlong Chen and Lianmin Zheng and Ruisi Cai and Zhao Song and Yuandong Tian and Christopher R{\'{e}} and Clark W. Barrett and ... | 2,023 | null | http://papers.nips.cc/paper_files/paper/2023/hash/6ceefa7b15572587b78ecfcebb2827f8-Abstract-Conference.html | null | null | {H2O:} Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models | Hogwild! Inference: Parallel LLM Generation via Concurrent Attention | https://arxiv.org/html/2504.06261v1 | H2o: Heavy-hitter oracle for efficient generative inference of large language models. Advances in Neural Information Processing Systems, 36 |
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider | 2506.02634v1 | infinigen | \cite{infinigen} | InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management | http://arxiv.org/abs/2406.19707v1 | Transformer-based large language models (LLMs) demonstrate impressive performance across various natural language processing tasks. Serving LLM inference for generating long contents, however, poses a challenge due to the enormous memory footprint of the transient state, known as the key-value (KV) cache, which scales ... | true | true | Wonbeom Lee and Jungi Lee and Junghwan Seo and Jaewoong Sim | 2,024 | null | https://www.usenix.org/conference/osdi24/presentation/lee | null | null | InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management | InfiniGen: Efficient Generative Inference of Large Language Models ... | https://arxiv.org/abs/2406.19707 | In this paper, we present InfiniGen, a novel KV cache management framework tailored for long-text generation, which synergistically works with modern |
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider | 2506.02634v1 | pyramidkv | \cite{pyramidkv} | PyramidKV: Dynamic KV Cache Compression based on Pyramidal Information Funneling | http://arxiv.org/abs/2406.02069v4 | In this study, we investigate whether attention-based information flow inside large language models (LLMs) is aggregated through noticeable patterns for long context processing. Our observations reveal that LLMs aggregate information through Pyramidal Information Funneling where attention is scattering widely in lower ... | true | true | Zefan Cai and Yichi Zhang and Bofei Gao and Yuliang Liu and Tianyu Liu and Keming Lu and Wayne Xiong and Yue Dong and Baobao Chang and Junjie Hu and Wen Xiao | 2,024 | null | https://doi.org/10.48550/arXiv.2406.02069 | 10.48550/ARXIV.2406.02069 | CoRR | PyramidKV: Dynamic KV Cache Compression based on Pyramidal Information Funneling | PyramidKV: Dynamic KV Cache Compression based on Pyramidal... | https://openreview.net/forum?id=jZVNmDiU86 | We developed PyramidKV, a novel and effective KV cache compression method. This approach dynamically adjusts the KV cache size across different layers. |
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider | 2506.02634v1 | KVQuant | \cite{KVQuant} | KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization | http://arxiv.org/abs/2401.18079v6 | LLMs are seeing growing use for applications which require large context windows, and with these large context windows KV cache activations surface as the dominant contributor to memory consumption during inference. Quantization is a promising approach for compressing KV cache activations; however, existing solutions f... | true | true | Coleman Hooper and Sehoon Kim and Hiva Mohammadzadeh and Michael W. Mahoney and Yakun Sophia Shao and Kurt Keutzer and Amir Gholami | 2,024 | null | http://papers.nips.cc/paper_files/paper/2024/hash/028fcbcf85435d39a40c4d61b42c99a4-Abstract-Conference.html | null | null | KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization | KVQuant: Towards 10 Million Context Length LLM Inference with KV ... | https://github.com/SqueezeAILab/KVQuant | GitHub - SqueezeAILab/KVQuant: [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization [Paper]... |
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider | 2506.02634v1 | lruk | \cite{lruk} | The {LRU-K} Page Replacement Algorithm For Database Disk Buffering | null | null | true | false | Elizabeth J. O'Neil and Patrick E. O'Neil and Gerhard Weikum | 1,993 | null | https://doi.org/10.1145/170035.170081 | 10.1145/170035.170081 | null | The {LRU-K} Page Replacement Algorithm For Database Disk Buffering | [PDF] The LRU-K Page Replacement Algorithm For Database Disk Buffering | https://www.cs.cmu.edu/~natassa/courses/15-721/papers/p297-o_neil.pdf | The basic idea of LRU-K is to keep track of the times of the last K references to popular database pages, using this information to statistically estimate |
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider | 2506.02634v1 | slru | \cite{slru} | Caching Strategies to Improve Disk System Performance | null | null | true | false | Ramakrishna Karedla and
J. Spencer Love and
Bradley G. Wherry | 1994 | null | https://doi.org/10.1109/2.268884 | 10.1109/2.268884 | Computer | Caching Strategies to Improve Disk System Performance | Caching strategies to improve disk system performance - IEEE Xplore | http://ieeexplore.ieee.org/document/268884/ | In this article, we examine the use of caching as a means to increase system response time and improve the data throughput of the disk subsystem.
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider | 2506.02634v1 | twoq | \cite{twoq} | 2Q: {A} Low Overhead High Performance Buffer Management Replacement
Algorithm | null | null | true | false | Theodore Johnson and
Dennis E. Shasha | 1994 | null | http://www.vldb.org/conf/1994/P439.PDF | null | null | 2Q: {A} Low Overhead High Performance Buffer Management Replacement
Algorithm | 2Q: A Low Overhead High Performance Buffer Management ... | https://dl.acm.org/doi/10.5555/645920.672996 | 2Q: A Low Overhead High Performance Buffer Management Replacement Algorithm. Authors: Theodore Johnson. |
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider | 2506.02634v1 | eelru | \cite{eelru} | {EELRU:} Simple and Effective Adaptive Page Replacement | null | null | true | false | Yannis Smaragdakis and
Scott F. Kaplan and
Paul R. Wilson | 1999 | null | https://doi.org/10.1145/301453.301486 | 10.1145/301453.301486 | null | {EELRU:} Simple and Effective Adaptive Page Replacement | EELRU: Simple and Effective Adaptive Page Replacement | https://www.researchgate.net/publication/2822757_EELRU_Simple_and_Effective_Adaptive_Page_Replacement | EELRU is a simple adaptive replacement algorithm, which uses only the kind of information needed by LRU---how recently each page has been touched relative to
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider | 2506.02634v1 | lrfu | \cite{lrfu} | {LRFU:} {A} Spectrum of Policies that Subsumes the Least Recently
Used and Least Frequently Used Policies | null | null | true | false | Donghee Lee and
Jongmoo Choi and
Jong{-}Hun Kim and
Sam H. Noh and
Sang Lyul Min and
Yookun Cho and
Chong{-}Sang Kim | 2001 | null | https://doi.org/10.1109/TC.2001.970573 | 10.1109/TC.2001.970573 | {IEEE} Trans. Computers | {LRFU:} {A} Spectrum of Policies that Subsumes the Least Recently
Used and Least Frequently Used Policies | [PDF] LRFU: a spectrum of policies that subsumes the least recently used ... | https://www.openu.ac.il/home/wiseman/2os/lru/lrfu.pdf | Of these, the Least Recently Used (LRU) and the Least Frequently Used (LFU) block replacement policies constitute the two main streams. The LRU policy and its.
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider | 2506.02634v1 | lirs | \cite{lirs} | {LIRS:} an efficient low inter-reference recency set replacement policy
to improve buffer cache performance | null | null | true | false | Song Jiang and
Xiaodong Zhang | 2002 | null | https://doi.org/10.1145/511334.511340 | 10.1145/511334.511340 | null | {LIRS:} an efficient low inter-reference recency set replacement policy
to improve buffer cache performance | LIRS: an efficient low inter-reference recency set replacement policy ... | https://www.researchgate.net/publication/367088056_LIRS_an_efficient_low_inter-reference_recency_set_replacement_policy_to_improve_buffer_cache_performance | Many studies are focused on cache replacement algorithms, such as FIFO, LRU, LFU, and some advanced cache algorithms like ARC [19], LIRS [15] and 2Q [16]. |
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider | 2506.02634v1 | arc | \cite{arc} | {ARC:} {A} Self-Tuning, Low Overhead Replacement Cache | null | null | true | false | Nimrod Megiddo and
Dharmendra S. Modha | 2003 | null | http://www.usenix.org/events/fast03/tech/megiddo.html | null | null | {ARC:} {A} Self-Tuning, Low Overhead Replacement Cache | [PDF] ARC: A Self-Tuning, Low Overhead Replacement Cache | https://www.cs.cmu.edu/~natassa/courses/15-721/papers/arcfast.pdf | We propose a new cache management policy, namely, Adaptive Replacement Cache (ARC), that has several advantages. In response to evolving and changing access
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider | 2506.02634v1 | mq | \cite{mq} | Second-Level Buffer Cache Management | null | null | true | false | Yuanyuan Zhou and
Zhifeng Chen and
Kai Li | 2004 | null | https://doi.org/10.1109/TPDS.2004.13 | 10.1109/TPDS.2004.13 | {IEEE} Trans. Parallel Distributed Syst. | Second-Level Buffer Cache Management | [PDF] Second-Level Buffer Cache Management | https://www.openu.ac.il/home/wiseman/2os/lru/mq.pdf | This is a local cache replacement algorithm because it manages an L2 buffer cache without any information from first-level.
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider | 2506.02634v1 | car | \cite{car} | {CAR:} Clock with Adaptive Replacement | null | null | true | false | Sorav Bansal and
Dharmendra S. Modha | 2004 | null | http://www.usenix.org/events/fast04/tech/bansal.html | null | null | {CAR:} Clock with Adaptive Replacement | CAR: Clock with Adaptive Replacement - Stanford CS Theory | http://theory.stanford.edu/~sbansal/pubs/fast04.pdf | by S Bansal · Cited by 412 — CAR is a new algorithm that improves upon CLOCK by being scan-resistant, self-tuning, and adaptively capturing recency and frequency features.
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider | 2506.02634v1 | clockpro | \cite{clockpro} | CLOCK-Pro: An Effective Improvement of the {CLOCK} Replacement | null | null | true | false | Song Jiang and
Feng Chen and
Xiaodong Zhang | 2005 | null | http://www.usenix.org/events/usenix05/tech/general/jiang.html | null | null | CLOCK-Pro: An Effective Improvement of the {CLOCK} Replacement | CLOCK-Pro: An Effective Improvement of the CLOCK Replacement | https://www.usenix.org/conference/2005-usenix-annual-technical-conference/clock-pro-effective-improvement-clock-replacement | We propose an improved CLOCK replacement policy, called CLOCK-Pro. By additionally keeping track of a limited number of replaced pages, CLOCK-Pro works in a
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider | 2506.02634v1 | DBLP:journals/tos/EinzigerEFM22 | \cite{DBLP:journals/tos/EinzigerEFM22} | Lightweight Robust Size Aware Cache Management | http://arxiv.org/abs/2105.08770v2 | Modern key-value stores, object stores, Internet proxy caches, as well as
Content Delivery Networks (CDN) often manage objects of diverse sizes, e.g.,
blobs, video files of different lengths, images with varying resolution, and
small documents. In such workloads, size-aware cache policies outperform
size-oblivious algo... | true | true | Gil Einziger and
Ohad Eytan and
Roy Friedman and
Benjamin Manes | 2022 | null | https://doi.org/10.1145/3507920 | 10.1145/3507920 | {ACM} Trans. Storage | Lightweight Robust Size Aware Cache Management | Lightweight Robust Size Aware Cache Management | http://arxiv.org/pdf/2105.08770v2 | Modern key-value stores, object stores, Internet proxy caches, as well as
Content Delivery Networks (CDN) often manage objects of diverse sizes, e.g.,
blobs, video files of different lengths, images with varying resolution, and
small documents. In such workloads, size-aware cache policies outperform
size-oblivious algo... |
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider | 2506.02634v1 | lhd | \cite{lhd} | {LHD:} Improving Cache Hit Rate by Maximizing Hit Density | null | null | true | false | Nathan Beckmann and
Haoxian Chen and
Asaf Cidon | 2018 | null | https://www.usenix.org/conference/nsdi18/presentation/beckmann | null | null | {LHD:} Improving Cache Hit Rate by Maximizing Hit Density | LHD: improving cache hit rate by maximizing hit density | https://dl.acm.org/doi/10.5555/3307441.3307475 | We introduce least hit density (LHD), a novel eviction policy for key-value caches. LHD predicts each object's expected hits-per-space-consumed (hit density).
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider | 2506.02634v1 | cacheus | \cite{cacheus} | Learning Cache Replacement with {CACHEUS} | null | null | true | false | Liana V. Rodriguez and
Farzana Beente Yusuf and
Steven Lyons and
Eysler Paz and
Raju Rangaswami and
Jason Liu and
Ming Zhao and
Giri Narasimhan | 2021 | null | https://www.usenix.org/conference/fast21/presentation/rodriguez | null | null | Learning Cache Replacement with {CACHEUS} | Learning Cache Replacement with Cacheus | https://www.usenix.org/system/files/fast21-rodriguez.pdf | by LV Rodriguez · 2021 · Cited by 125 — Furthermore, CACHEUS enables augmenting state-of-the-art algorithms (e.g., LIRS, ARC) by combining it with a complementary cache replacement
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider | 2506.02634v1 | sieve | \cite{sieve} | {SIEVE} is Simpler than {LRU:} an Efficient Turn-Key Eviction Algorithm
for Web Caches | null | null | true | false | Yazhuo Zhang and
Juncheng Yang and
Yao Yue and
Ymir Vigfusson and
K. V. Rashmi | 2024 | null | https://www.usenix.org/conference/nsdi24/presentation/zhang-yazhuo | null | null | {SIEVE} is Simpler than {LRU:} an Efficient Turn-Key Eviction Algorithm
for Web Caches | SIEVE - An Efficient Turn-Key Eviction Algorithm for Web Caches | https://www.classcentral.com/course/youtube-nsdi-24-sieve-is-simpler-than-lru-an-efficient-turn-key-eviction-algorithm-for-web-caches-294624 | Discover how SIEVE outperforms traditional algorithms like LRU in simplicity, efficiency, and scalability for web cache workloads. Learn about the algorithm's |
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider | 2506.02634v1 | cherkasova1998improving | \cite{cherkasova1998improving} | Improving WWW proxies performance with greedy-dual-size-frequency caching policy | null | null | true | false | Cherkasova, Ludmila | 1998 | null | null | null | null | Improving WWW proxies performance with greedy-dual-size-frequency caching policy | Improving WWW proxies performance with Greedy-Dual- ... | https://www.researchgate.net/publication/228542715_Improving_WWW_proxies_performance_with_Greedy-Dual-Size-Frequency_caching_policy | This paper introduces the Greedy-Dual-Size-Frequency caching policy to maximize hit and byte hit rates for WWW proxies. Proposed caching strategy incorporates
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider | 2506.02634v1 | yang2020twemcache | \cite{yang2020twemcache} | A large scale analysis of hundreds of in-memory cache clusters at Twitter | null | null | true | false | Juncheng Yang and Yao Yue and K. V. Rashmi | 2020 | null | https://www.usenix.org/conference/osdi20/presentation/yang | null | null | A large scale analysis of hundreds of in-memory cache clusters at Twitter | [PDF] A large scale analysis of hundreds of in-memory cache clusters at ... | https://www.usenix.org/system/files/osdi20-yang.pdf | This paper is included in the Proceedings of the 14th USENIX Symposium on Operating Systems Design and Implementation November 4–6, 2020 978-1-939133-19-9 Open access to the Proceedings of the 14th USENIX Symposium on Operating Systems Design and Implementation is sponsored by USENIX A large scale analysis of hundreds ...
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider | 2506.02634v1 | berg2020cachelib | \cite{berg2020cachelib} | The {CacheLib} Caching Engine: Design and Experiences at Scale | null | null | true | false | Benjamin Berg and Daniel S. Berger and Sara McAllister and Isaac Grosof and Sathya Gunasekar and Jimmy Lu and Michael Uhlar and Jim Carrig and Nathan Beckmann and Mor Harchol-Balter and Gregory R. Ganger | 2020 | null | https://www.usenix.org/conference/osdi20/presentation/berg | null | null | The {CacheLib} Caching Engine: Design and Experiences at Scale | The CacheLib Caching Engine: Design and Experiences at Scale | https://www.usenix.org/conference/osdi20/presentation/berg | CacheLib is a general-purpose caching engine, designed based on experiences with a range of caching use cases at Facebook, that facilitates the easy
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider | 2506.02634v1 | icebreaker | \cite{icebreaker} | IceBreaker: warming serverless functions better with heterogeneity | null | null | true | false | Rohan Basu Roy and
Tirthak Patel and
Devesh Tiwari | 2022 | null | https://doi.org/10.1145/3503222.3507750 | 10.1145/3503222.3507750 | null | IceBreaker: warming serverless functions better with heterogeneity | [PDF] IceBreaker: Warming Serverless Functions Better with Heterogeneity | http://www1.ece.neu.edu/~ningfang/SimPaper/icebreaker-ASPLOS22.pdf | IceBreaker is a novel function pre-warming and keep-alive scheme for serverless functions that exploit server-heterogeneity to lower the keep-alive cost and
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider | 2506.02634v1 | fasscache | \cite{fasscache} | FaasCache: keeping serverless computing alive with greedy-dual caching | null | null | true | false | Alexander Fuerst and
Prateek Sharma | 2021 | null | https://doi.org/10.1145/3445814.3446757 | 10.1145/3445814.3446757 | null | FaasCache: keeping serverless computing alive with greedy-dual caching | [PDF] FaasCache: Keeping Serverless Computing Alive with Greedy-Dual ... | https://afuerst.github.io/assets/FaasCache.pdf | Keep-alive policies must keep functions alive based on their resource and usage characteristics, which is challenging due to the diversity in FaaS workloads.
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider | 2506.02634v1 | DBLP:conf/osdi/ZhongLCHZL0024 | \cite{DBLP:conf/osdi/ZhongLCHZL0024} | DistServe: Disaggregating Prefill and Decoding for Goodput-optimized
Large Language Model Serving | http://arxiv.org/abs/2401.09670v3 | DistServe improves the performance of large language models (LLMs) serving by
disaggregating the prefill and decoding computation. Existing LLM serving
systems colocate the two phases and batch the computation of prefill and
decoding across all users and requests. We find that this strategy not only
leads to strong pre... | true | true | Yinmin Zhong and
Shengyu Liu and
Junda Chen and
Jianbo Hu and
Yibo Zhu and
Xuanzhe Liu and
Xin Jin and
Hao Zhang | 2024 | null | https://www.usenix.org/conference/osdi24/presentation/zhong-yinmin | null | null | DistServe: Disaggregating Prefill and Decoding for Goodput-optimized
Large Language Model Serving | [PDF] DistServe: Disaggregating Prefill and Decoding for Goodput ... | https://www.usenix.org/system/files/osdi24-zhong-yinmin.pdf | July 10–12, 2024 • Santa Clara, CA, USA 978-1-939133-40-3 Open access to the Proceedings of the 18th USENIX Symposium on Operating Systems Design and Implementation is sponsored by DistServe: Disaggregating Prefill and Decoding for Goodput-optimized Large Language Model Serving Yinmin Zhong and Shengyu Liu, Peking Univ... |
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider | 2506.02634v1 | DBLP:journals/corr/abs-2404-09526 | \cite{DBLP:journals/corr/abs-2404-09526} | LoongServe: Efficiently Serving Long-Context Large Language Models with
Elastic Sequence Parallelism | http://arxiv.org/abs/2404.09526v2 | The context window of large language models (LLMs) is rapidly increasing,
leading to a huge variance in resource usage between different requests as well
as between different phases of the same request. Restricted by static
parallelism strategies, existing LLM serving systems cannot efficiently utilize
the underlying r... | true | true | Bingyang Wu and
Shengyu Liu and
Yinmin Zhong and
Peng Sun and
Xuanzhe Liu and
Xin Jin | 2024 | null | https://doi.org/10.48550/arXiv.2404.09526 | 10.48550/ARXIV.2404.09526 | CoRR | LoongServe: Efficiently Serving Long-Context Large Language Models with
Elastic Sequence Parallelism | LoongServe: Efficiently Serving Long-Context Large Language ... | https://colab.ws/articles/10.1145%2F3694715.3695948 | LoongServe: Efficiently Serving Long-Context Large Language Models with Elastic Sequence Parallelism. Bingyang Wu 1. ,. Shengyu Liu 1. ,. Yinmin |
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider | 2506.02634v1 | DBLP:conf/sosp/KwonLZ0ZY0ZS23 | \cite{DBLP:conf/sosp/KwonLZ0ZY0ZS23} | Efficient Memory Management for Large Language Model Serving with
PagedAttention | http://arxiv.org/abs/2309.06180v1 | High throughput serving of large language models (LLMs) requires batching
sufficiently many requests at a time. However, existing systems struggle
because the key-value cache (KV cache) memory for each request is huge and
grows and shrinks dynamically. When managed inefficiently, this memory can be
significantly wasted... | true | true | Woosuk Kwon and
Zhuohan Li and
Siyuan Zhuang and
Ying Sheng and
Lianmin Zheng and
Cody Hao Yu and
Joseph Gonzalez and
Hao Zhang and
Ion Stoica | 2023 | null | https://doi.org/10.1145/3600006.3613165 | 10.1145/3600006.3613165 | null | Efficient Memory Management for Large Language Model Serving with
PagedAttention | Efficient Memory Management for Large Language Model ... | https://arxiv.org/pdf/2309.06180 | Efficient Memory Management for Large Language Model Serving with PagedAttention Woosuk Kwon 1,∗ Zhuohan Li 1,∗ Siyuan Zhuang 1 Ying Sheng 1,2 Lianmin Zheng 1 Cody Hao Yu 3 Joseph E. Gonzalez 1 Hao Zhang 4 Ion Stoica 1 1 UC Berkeley 2Stanford University 3Independent Researcher 4UC San Diego Abstract High throughput ser... |
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider | 2506.02634v1 | alpaserve | \cite{alpaserve} | AlpaServe: Statistical Multiplexing with Model Parallelism for Deep
Learning Serving | http://arxiv.org/abs/2302.11665v2 | Model parallelism is conventionally viewed as a method to scale a single
large deep learning model beyond the memory limits of a single device. In this
paper, we demonstrate that model parallelism can be additionally used for the
statistical multiplexing of multiple devices when serving multiple models, even
when a sin... | true | true | Zhuohan Li and Lianmin Zheng and Yinmin Zhong and Vincent Liu and Ying Sheng and Xin Jin and Yanping Huang and Zhifeng Chen and Hao Zhang and Joseph E. Gonzalez and Ion Stoica | 2023 | null | https://www.usenix.org/conference/osdi23/presentation/li-zhouhan | null | null | AlpaServe: Statistical Multiplexing with Model Parallelism for Deep
Learning Serving | alpa-projects/mms: AlpaServe - GitHub | https://github.com/alpa-projects/mms | This is the official implementation of our OSDI'23 paper: AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving. To reproduce |
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider | 2506.02634v1 | DBLP:conf/osdi/YuJKKC22 | \cite{DBLP:conf/osdi/YuJKKC22} | Orca: {A} Distributed Serving System for Transformer-Based Generative
Models | null | null | true | false | Gyeong{-}In Yu and
Joo Seong Jeong and
Geon{-}Woo Kim and
Soojeong Kim and
Byung{-}Gon Chun | 2022 | null | https://www.usenix.org/conference/osdi22/presentation/yu | null | null | Orca: {A} Distributed Serving System for Transformer-Based Generative
Models | Orca: A Distributed Serving System for Transformer-Based ... - USENIX | https://www.usenix.org/conference/osdi22/presentation/yu | We have implemented a distributed serving system called ORCA, with additional designs for scalability to models with hundreds of billions of parameters. |
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider | 2506.02634v1 | DBLP:conf/isca/PatelCZSGMB24 | \cite{DBLP:conf/isca/PatelCZSGMB24} | Splitwise: Efficient generative LLM inference using phase splitting | http://arxiv.org/abs/2311.18677v2 | Recent innovations in generative large language models (LLMs) have made their
applications and use-cases ubiquitous. This has led to large-scale deployments
of these models, using complex, expensive, and power-hungry AI accelerators,
most commonly GPUs. These developments make LLM inference efficiency an
important chal... | true | true | Pratyush Patel and
Esha Choukse and
Chaojie Zhang and
Aashaka Shah and
Íñigo Goiri and
Saeed Maleki and
Ricardo Bianchini | 2024 | null | https://doi.org/10.1109/ISCA59077.2024.00019 | 10.1109/ISCA59077.2024.00019 | null | Splitwise: Efficient generative LLM inference using phase splitting | Splitwise: Efficient generative LLM inference using phase splitting | http://arxiv.org/pdf/2311.18677v2 | Recent innovations in generative large language models (LLMs) have made their
applications and use-cases ubiquitous. This has led to large-scale deployments
of these models, using complex, expensive, and power-hungry AI accelerators,
most commonly GPUs. These developments make LLM inference efficiency an
important chal... |
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider | 2506.02634v1 | 298501 | \cite{298501} | {Cost-Efficient} Large Language Model Serving for Multi-turn Conversations with {CachedAttention} | null | null | true | false | Bin Gao and Zhuomin He and Puru Sharma and Qingxuan Kang and Djordje Jevdjic and Junbo Deng and Xingkun Yang and Zhou Yu and Pengfei Zuo | 2024 | null | https://www.usenix.org/conference/atc24/presentation/gao-bin-cost | null | null | {Cost-Efficient} Large Language Model Serving for Multi-turn Conversations with {CachedAttention} | Cost-Efficient Large Language Model Serving for Multi-turn ... - arXiv | https://arxiv.org/abs/2403.19708 | View a PDF of the paper titled Cost-Efficient Large Language Model Serving for Multi-turn Conversations with CachedAttention, by Bin Gao and 8 other authors To address the problem, this paper proposes CachedAttention, a new attention mechanism that enables reuse of KV caches across multi-turn conversations, significant...
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider | 2506.02634v1 | DBLP:journals/corr/abs-2412-17246 | \cite{DBLP:journals/corr/abs-2412-17246} | Fast and Live Model Auto Scaling with {O(1)} Host Caching | null | null | true | false | Dingyan Zhang and
Haotian Wang and
Yang Liu and
Xingda Wei and
Yizhou Shan and
Rong Chen and
Haibo Chen | 2024 | null | https://doi.org/10.48550/arXiv.2412.17246 | 10.48550/ARXIV.2412.17246 | CoRR | Fast and Live Model Auto Scaling with {O(1)} Host Caching | Fast and Live Model Auto Scaling with 𝑂(1) Host Caching | https://arxiv.org/html/2412.17246v1 | Model autoscaling is the key mechanism to achieve serverless model-as-a-service, but it faces a fundamental trade-off between scaling speed and storage/memory
KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache
at a Large Cloud Provider | 2506.02634v1 | shahrad2020serverless | \cite{shahrad2020serverless} | Serverless in the Wild: Characterizing and Optimizing the Serverless
Workload at a Large Cloud Provider | http://arxiv.org/abs/2003.03423v3 | Function as a Service (FaaS) has been gaining popularity as a way to deploy
computations to serverless backends in the cloud. This paradigm shifts the
complexity of allocating and provisioning resources to the cloud provider,
which has to provide the illusion of always-available resources (i.e., fast
function invocatio... | true | true | Mohammad Shahrad and Rodrigo Fonseca and Inigo Goiri and Gohar Chaudhry and Paul Batum and Jason Cooke and Eduardo Laureano and Colby Tresness and Mark Russinovich and Ricardo Bianchini | 2020 | null | https://www.usenix.org/conference/atc20/presentation/shahrad | null | null | Serverless in the Wild: Characterizing and Optimizing the Serverless
Workload at a Large Cloud Provider | Characterizing and Optimizing the Serverless Workload at ... | https://www.usenix.org/system/files/atc20-shahrad.pdf | by M Shahrad · 2020 · Cited by 879 — This paper characterizes Azure Functions' serverless workload, showing most functions are invoked infrequently, and proposes a resource |
Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning
Nonverbal Cues from Video-Grounded Dialogues | 2506.00958v1 | liu2024:visual | \cite{liu2024:visual} | Visual Instruction Tuning | http://arxiv.org/abs/2304.08485v2 | Instruction tuning large language models (LLMs) using machine-generated
instruction-following data has improved zero-shot capabilities on new tasks,
but the idea is less explored in the multimodal field. In this paper, we
present the first attempt to use language-only GPT-4 to generate multimodal
language-image instruc... | true | true | Liu, Haotian and Li, Chunyuan and Wu, Qingyang and Lee, Yong Jae | 2024 | null | null | null | Advances in neural information processing systems | Visual Instruction Tuning | Visual Instruction Tuning | http://arxiv.org/pdf/2304.08485v2 | Instruction tuning large language models (LLMs) using machine-generated
instruction-following data has improved zero-shot capabilities on new tasks,
but the idea is less explored in the multimodal field. In this paper, we
present the first attempt to use language-only GPT-4 to generate multimodal
language-image instruc... |
Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning
Nonverbal Cues from Video-Grounded Dialogues | 2506.00958v1 | bai2023:qwen | \cite{bai2023:qwen} | Qwen-vl: A versatile vision-language model for understanding, localization, text reading, and beyond | null | null | true | false | Bai, Jinze and Bai, Shuai and Yang, Shusheng and Wang, Shijie and Tan, Sinan and Wang, Peng and Lin, Junyang and Zhou, Chang and Zhou, Jingren | 2023 | null | null | null | null | Qwen-vl: A versatile vision-language model for understanding, localization, text reading, and beyond | Qwen-VL: A Versatile Vision-Language Model for Understanding... | https://openreview.net/forum?id=qrGjFJVl3m | Despite the effort in open-sourcing the model and its weights, the reviewers find QWEN-VL lacking in significant research contributions and technical novelty. * _**Open-source:**_ Qwen-VL is an open-sourced large vision-language model that excels in **(i)** achieving leading performance across a wide range of vision-...
Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning
Nonverbal Cues from Video-Grounded Dialogues | 2506.00958v1 | chen2023:sharegpt4v | \cite{chen2023:sharegpt4v} | ShareGPT4V: Improving Large Multi-Modal Models with Better Captions | http://arxiv.org/abs/2311.12793v2 | In the realm of large multi-modal models (LMMs), efficient modality alignment
is crucial yet often constrained by the scarcity of high-quality image-text
data. To address this bottleneck, we introduce the ShareGPT4V dataset, a
pioneering large-scale resource featuring 1.2 million highly descriptive
captions, which surp... | true | true | Chen, Lin and Li, Jisong and Dong, Xiaoyi and Zhang, Pan and He, Conghui and Wang, Jiaqi and Zhao, Feng and Lin, Dahua | 2023 | null | null | null | arXiv preprint arXiv:2311.12793 | ShareGPT4V: Improving Large Multi-Modal Models with Better Captions | Improving Large Multi-Modal Models with Better Captions - arXiv | https://arxiv.org/abs/2311.12793 | Image 4: arxiv logo>cs> arXiv:2311.12793 arXiv:2311.12793 (cs) View a PDF of the paper titled ShareGPT4V: Improving Large Multi-Modal Models with Better Captions, by Lin Chen and 7 other authors View a PDF of the paper titled ShareGPT4V: Improving Large Multi-Modal Models with Better Captions, by Lin Chen and 7 other...
Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning
Nonverbal Cues from Video-Grounded Dialogues | 2506.00958v1 | li2023:videochat | \cite{li2023:videochat} | VideoChat: Chat-Centric Video Understanding | http://arxiv.org/abs/2305.06355v2 | In this paper, we initiate an attempt of developing an end-to-end
chat-centric video understanding system, coined as VideoChat. It integrates
video foundation models and large language models via a learnable neural
interface, excelling in spatiotemporal reasoning, event localization, and
causal relationship inference. ... | true | true | Li, KunChang and He, Yinan and Wang, Yi and Li, Yizhuo and Wang, Wenhai and Luo, Ping and Wang, Yali and Wang, Limin and Qiao, Yu | 2023 | null | null | null | arXiv preprint arXiv:2305.06355 | VideoChat: Chat-Centric Video Understanding | VideoChat : Chat-Centric Video Understanding | https://img.shlab.org.cn/pjlab/files/2023/06/638215855649090000.pdf | by KC Li · 2023 · Cited by 853 — VideoChat is an end-to-end chat-centric video understanding system integrating video and large language models, excelling in spatiotemporal reasoning and
Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning
Nonverbal Cues from Video-Grounded Dialogues | 2506.00958v1 | zhang2023:video | \cite{zhang2023:video} | Video-llama: An instruction-tuned audio-visual language model for video understanding | null | null | true | false | Zhang, Hang and Li, Xin and Bing, Lidong | 2023 | null | null | null | arXiv preprint arXiv:2306.02858 | Video-llama: An instruction-tuned audio-visual language model for video understanding | [EMNLP 2023 Demo] Video-LLaMA: An Instruction-tuned Audio ... | https://github.com/DAMO-NLP-SG/Video-LLaMA | [EMNLP 2023 Demo] Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding # Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding The following checkpoints are the full weights (visual encoder + audio encoder + Q-Formers + language decoder) to launch Video-L...
Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning
Nonverbal Cues from Video-Grounded Dialogues | 2506.00958v1 | lu2024:unified | \cite{lu2024:unified} | Unified-IO 2: Scaling Autoregressive Multimodal Models with Vision,
Language, Audio, and Action | http://arxiv.org/abs/2312.17172v1 | We present Unified-IO 2, the first autoregressive multimodal model that is
capable of understanding and generating image, text, audio, and action. To
unify different modalities, we tokenize inputs and outputs -- images, text,
audio, action, bounding boxes, etc., into a shared semantic space and then
process them with a... | true | true | Lu, Jiasen and Clark, Christopher and Lee, Sangho and Zhang, Zichen and Khosla, Savya and Marten, Ryan and Hoiem, Derek and Kembhavi, Aniruddha | 2024 | null | null | null | null | Unified-IO 2: Scaling Autoregressive Multimodal Models with Vision,
Language, Audio, and Action | Unified-IO 2: Scaling Autoregressive Multimodal Models with ... | https://openaccess.thecvf.com/content/CVPR2024/papers/Lu_Unified-IO_2_Scaling_Autoregressive_Multimodal_Models_with_Vision_Language_Audio_CVPR_2024_paper.pdf | by J Lu · 2024 · Cited by 210 — UNIFIED-IO 2 is a model that understands and generates image, text, audio, and action, using a single encoder-decoder model. |
Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning
Nonverbal Cues from Video-Grounded Dialogues | 2506.00958v1 | achiam2023:gpt | \cite{achiam2023:gpt} | Gpt-4 technical report | null | null | true | false | Achiam, Josh and Adler, Steven and Agarwal, Sandhini and Ahmad, Lama and Akkaya, Ilge and Aleman, Florencia Leoni and Almeida, Diogo and Altenschmidt, Janko and Altman, Sam and Anadkat, Shyamal and others | 2023 | null | null | null | arXiv preprint arXiv:2303.08774 | Gpt-4 technical report | GPT-4 Technical Report | http://arxiv.org/pdf/2303.08774v6 | We report the development of GPT-4, a large-scale, multimodal model which can
accept image and text inputs and produce text outputs. While less capable than
humans in many real-world scenarios, GPT-4 exhibits human-level performance on
various professional and academic benchmarks, including passing a simulated bar
exam... |
Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning
Nonverbal Cues from Video-Grounded Dialogues | 2506.00958v1 | busso2008:iemocap | \cite{busso2008:iemocap} | IEMOCAP: Interactive emotional dyadic motion capture database | null | null | true | false | Busso, Carlos and Bulut, Murtaza and Lee, Chi-Chun and Kazemzadeh, Abe and Mower, Emily and Kim, Samuel and Chang, Jeannette N and Lee, Sungbok and Narayanan, Shrikanth S | 2008 | null | null | null | Language resources and evaluation | IEMOCAP: Interactive emotional dyadic motion capture database | IEMOCAP- Home | https://sail.usc.edu/iemocap/ | The Interactive Emotional Dyadic Motion Capture (IEMOCAP) database is an acted, multimodal and multispeaker database, recently collected at SAIL lab at USC.
Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning
Nonverbal Cues from Video-Grounded Dialogues | 2506.00958v1 | zadeh2018:multimodal | \cite{zadeh2018:multimodal} | Multimodal language analysis in the wild: Cmu-mosei dataset and interpretable dynamic fusion graph | null | null | true | false | Zadeh, AmirAli Bagher and Liang, Paul Pu and Poria, Soujanya and Cambria, Erik and Morency, Louis-Philippe | 2018 | null | null | null | null | Multimodal language analysis in the wild: Cmu-mosei dataset and interpretable dynamic fusion graph | The MOSEI Dataset and Interpretable Dynamic Fusion | https://pliang279.github.io/papers/dap2018_mosei.pdf | by PP Liang · Cited by 30 — In this paper we introduce CMU-Multimodal Opinion Sentiment and Emotion Intensity (CMU-MOSEI), the largest dataset for multimodal sentiment analysis and
Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning
Nonverbal Cues from Video-Grounded Dialogues | 2506.00958v1 | poria2019:meld | \cite{poria2019:meld} | MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in
Conversations | http://arxiv.org/abs/1810.02508v6 | Emotion recognition in conversations is a challenging task that has recently
gained popularity due to its potential applications. Until now, however, a
large-scale multimodal multi-party emotional conversational database containing
more than two speakers per dialogue was missing. Thus, we propose the
Multimodal Emotion... | true | true | Poria, Soujanya and Hazarika, Devamanyu and Majumder, Navonil and Naik, Gautam and Cambria, Erik and Mihalcea, Rada | 2019 | null | null | null | null | MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in
Conversations | MELD: A Multimodal Multi-Party Dataset for Emotion ... | https://github.com/declare-lab/MELD | * /data/MELD/train_sent_emo.csv - contains the utterances in the training set along with Sentiment and Emotion labels. * /data/MELD/dev_sent_emo.csv - contains the utterances in the dev set along with Sentiment and Emotion labels. * /data/MELD/test_sent_emo.csv - contains the utterances in the test set along with... |
Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning
Nonverbal Cues from Video-Grounded Dialogues | 2506.00958v1 | han2023:champagne | \cite{han2023:champagne} | CHAMPAGNE: Learning Real-world Conversation from Large-Scale Web Videos | http://arxiv.org/abs/2303.09713v2 | Visual information is central to conversation: body gestures and physical
behaviour, for example, contribute to meaning that transcends words alone. To
date, however, most neural conversational models are limited to just text. We
introduce CHAMPAGNE, a generative model of conversations that can account for
visual conte... | true | true | Han, Seungju and Hessel, Jack and Dziri, Nouha and Choi, Yejin and Yu, Youngjae | 2023 | null | null | null | null | CHAMPAGNE: Learning Real-world Conversation from Large-Scale Web Videos | [PDF] Learning Real-world Conversation from Large-Scale Web Videos | https://openaccess.thecvf.com/content/ICCV2023/papers/Han_CHAMPAGNE_Learning_Real-world_Conversation_from_Large-Scale_Web_Videos_ICCV_2023_paper.pdf | Figure 1: CHAMPAGNE is a generative model of real-world conversational frames trained on YTD-18M, a dataset of 18M video-based dialogues.
Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning
Nonverbal Cues from Video-Grounded Dialogues | 2506.00958v1 | park2024:let | \cite{park2024:let} | Let's Go Real Talk: Spoken Dialogue Model for Face-to-Face Conversation | http://arxiv.org/abs/2406.07867v2 | In this paper, we introduce a novel Face-to-Face spoken dialogue model. It
processes audio-visual speech from user input and generates audio-visual speech
as the response, marking the initial step towards creating an avatar chatbot
system without relying on intermediate text. To this end, we newly introduce
MultiDialog... | true | true | Park, Se Jin and Kim, Chae Won and Rha, Hyeongseop and Kim, Minsu and Hong, Joanna and Yeo, Jeong Hun and Ro, Yong Man | 2024 | null | null | null | arXiv preprint arXiv:2406.07867 | Let's Go Real Talk: Spoken Dialogue Model for Face-to-Face Conversation | Let's Go Real Talk: Spoken Dialogue Model for Face-to-Face... | https://openreview.net/forum?id=zby4Ade9CCF | In this paper, we introduce a novel Face-to-Face spoken dialogue model. It processes audio-visual speech from user input and generates audio-visual speech as
Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning
Nonverbal Cues from Video-Grounded Dialogues | 2506.00958v1 | shafique2023:nonverbal | \cite{shafique2023:nonverbal} | Nonverbal Communication Cue Recognition: A Pathway to More Accessible Communication | null | null | true | false | Shafique, Zoya and Wang, Haiyan and Tian, Yingli | 2023 | null | null | null | null | Nonverbal Communication Cue Recognition: A Pathway to More Accessible Communication | [PDF] Nonverbal Communication Cue Recognition: A Pathway to More ... | https://openaccess.thecvf.com/content/CVPR2023W/WiCV/papers/Shafique_Nonverbal_Communication_Cue_Recognition_A_Pathway_to_More_Accessible_Communication_CVPRW_2023_paper.pdf | Nonverbal communication cues (NVCs) include body language, facial expressions, and hand gestures, conveying emotions and attitudes.
Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning
Nonverbal Cues from Video-Grounded Dialogues | 2506.00958v1 | zhang2023:learning | \cite{zhang2023:learning} | Learning Emotion Representations from Verbal and Nonverbal Communication | http://arxiv.org/abs/2305.13500v1 | Emotion understanding is an essential but highly challenging component of
artificial general intelligence. The absence of extensively annotated datasets
has significantly impeded advancements in this field. We present EmotionCLIP,
the first pre-training paradigm to extract visual emotion representations from
verbal and... | true | true | Zhang, Sitao and Pan, Yimu and Wang, James Z | 2023 | null | null | null | null | Learning Emotion Representations from Verbal and Nonverbal Communication | Learning Emotion Representations from Verbal and Nonverbal Communication | http://arxiv.org/pdf/2305.13500v1 | Emotion understanding is an essential but highly challenging component of
artificial general intelligence. The absence of extensively annotated datasets
has significantly impeded advancements in this field. We present EmotionCLIP,
the first pre-training paradigm to extract visual emotion representations from
verbal and... |
Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning
Nonverbal Cues from Video-Grounded Dialogues | 2506.00958v1 | cherakara2023:furchat | \cite{cherakara2023:furchat} | FurChat: An Embodied Conversational Agent using LLMs, Combining Open and
Closed-Domain Dialogue with Facial Expressions | http://arxiv.org/abs/2308.15214v2 | We demonstrate an embodied conversational agent that can function as a
receptionist and generate a mixture of open and closed-domain dialogue along
with facial expressions, by using a large language model (LLM) to develop an
engaging conversation. We deployed the system onto a Furhat robot, which is
highly expressive a... | true | true | Cherakara, Neeraj and Varghese, Finny and Shabana, Sheena and Nelson, Nivan and Karukayil, Abhiram and Kulothungan, Rohith and Farhan, Mohammed Afil and Nesset, Birthe and Moujahid, Meriam and Dinkar, Tanvi and others | 2023 | null | null | null | null | FurChat: An Embodied Conversational Agent using LLMs, Combining Open and
Closed-Domain Dialogue with Facial Expressions | [PDF] FurChat: An Embodied Conversational Agent using LLMs ... | https://aclanthology.org/2023.sigdial-1.55.pdf | FurChat is an embodied conversational agent using LLMs, combining open and closed-domain dialogue with facial expressions, and can function as a receptionist. |
Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning
Nonverbal Cues from Video-Grounded Dialogues | 2506.00958v1 | lee2023:developing | \cite{lee2023:developing} | Developing Social Robots with Empathetic Non-Verbal Cues Using Large
Language Models | http://arxiv.org/abs/2308.16529v1 | We propose augmenting the empathetic capacities of social robots by
integrating non-verbal cues. Our primary contribution is the design and
labeling of four types of empathetic non-verbal cues, abbreviated as SAFE:
Speech, Action (gesture), Facial expression, and Emotion, in a social robot.
These cues are generated usi... | true | true | Lee, Yoon Kyung and Jung, Yoonwon and Kang, Gyuyi and Hahn, Sowon | 2023 | null | null | null | arXiv preprint arXiv:2308.16529 | Developing Social Robots with Empathetic Non-Verbal Cues Using Large
Language Models | Developing Social Robots with Empathetic Non-Verbal Cues Using ... | https://www.researchgate.net/publication/373552152_Developing_Social_Robots_with_Empathetic_Non-Verbal_Cues_Using_Large_Language_Models | We developed an LLM-based conversational system for the robot and assessed its alignment with social cues as defined by human counselors. Preliminary results |
Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning
Nonverbal Cues from Video-Grounded Dialogues | 2506.00958v1 | lin2023:one | \cite{lin2023:one} | One-Stage 3D Whole-Body Mesh Recovery with Component Aware Transformer | http://arxiv.org/abs/2303.16160v1 | Whole-body mesh recovery aims to estimate the 3D human body, face, and hands
parameters from a single image. It is challenging to perform this task with a
single network due to resolution issues, i.e., the face and hands are usually
located in extremely small regions. Existing works usually detect hands and
faces, enla... | true | true | Lin, Jing and Zeng, Ailing and Wang, Haoqian and Zhang, Lei and Li, Yu | 2023 | null | null | null | null | One-Stage 3D Whole-Body Mesh Recovery with Component Aware Transformer | IDEA-Research/OSX - GitHub | https://github.com/IDEA-Research/OSX | This repo is official PyTorch implementation of One-Stage 3D Whole-Body Mesh Recovery with Component Aware Transformer (CVPR2023). We propose the first one-
Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning
Nonverbal Cues from Video-Grounded Dialogues | 2506.00958v1 | dwivedi2024:tokenhmr | \cite{dwivedi2024:tokenhmr} | TokenHMR: Advancing Human Mesh Recovery with a Tokenized Pose
Representation | http://arxiv.org/abs/2404.16752v1 | We address the problem of regressing 3D human pose and shape from a single
image, with a focus on 3D accuracy. The current best methods leverage large
datasets of 3D pseudo-ground-truth (p-GT) and 2D keypoints, leading to robust
performance. With such methods, we observe a paradoxical decline in 3D pose
accuracy with i... | true | true | Dwivedi, Sai Kumar and Sun, Yu and Patel, Priyanka and Feng, Yao and Black, Michael J | 2024 | null | null | null | null | TokenHMR: Advancing Human Mesh Recovery with a Tokenized Pose
Representation | TokenHMR: Advancing Human Mesh Recovery with a ... | https://github.com/saidwivedi/TokenHMR | Our method has two stages: Tokenization: The encoder maps continuous poses to discrete pose tokens. TokenHMR: During the training of human pose |
Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning
Nonverbal Cues from Video-Grounded Dialogues | 2506.00958v1 | danvevcek2022emoca | \cite{danvevcek2022emoca} | EMOCA: Emotion Driven Monocular Face Capture and Animation | http://arxiv.org/abs/2204.11312v1 | As 3D facial avatars become more widely used for communication, it is
critical that they faithfully convey emotion. Unfortunately, the best recent
methods that regress parametric 3D face models from monocular images are unable
to capture the full spectrum of facial expression, such as subtle or extreme
emotions. We fin... | true | true | Dan{\v{e}}{\v{c}}ek, Radek and Black, Michael J and Bolkart, Timo | 2022 | null | null | null | null | EMOCA: Emotion Driven Monocular Face Capture and Animation | EMOCA: Emotion Driven Monocular Face Capture and Animation | http://arxiv.org/pdf/2204.11312v1 | As 3D facial avatars become more widely used for communication, it is
critical that they faithfully convey emotion. Unfortunately, the best recent
methods that regress parametric 3D face models from monocular images are unable
to capture the full spectrum of facial expression, such as subtle or extreme
emotions. We fin... |
Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning
Nonverbal Cues from Video-Grounded Dialogues | 2506.00958v1 | yi2023:generating | \cite{yi2023:generating} | Generating Holistic 3D Human Motion from Speech | http://arxiv.org/abs/2212.04420v2 | This work addresses the problem of generating 3D holistic body motions from
human speech. Given a speech recording, we synthesize sequences of 3D body
poses, hand gestures, and facial expressions that are realistic and diverse. To
achieve this, we first build a high-quality dataset of 3D holistic body meshes
with synch... | true | true | Yi, Hongwei and Liang, Hualin and Liu, Yifei and Cao, Qiong and Wen, Yandong and Bolkart, Timo and Tao, Dacheng and Black, Michael J | 2023 | null | null | null | null | Generating Holistic 3D Human Motion from Speech | Generating Holistic 3D Human Motion from Speech | http://arxiv.org/pdf/2212.04420v2 | This work addresses the problem of generating 3D holistic body motions from
human speech. Given a speech recording, we synthesize sequences of 3D body
poses, hand gestures, and facial expressions that are realistic and diverse. To
achieve this, we first build a high-quality dataset of 3D holistic body meshes
with synch... |
Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning
Nonverbal Cues from Video-Grounded Dialogues | 2506.00958v1 | wu2024:motionllm | \cite{wu2024:motionllm} | MotionLLM: Multimodal Motion-Language Learning with Large Language Models | null | null | true | false | Wu, Qi and Zhao, Yubo and Wang, Yifan and Tai, Yu-Wing and Tang, Chi-Keung | 2024 | null | null | null | arXiv preprint arXiv:2405.17013 | MotionLLM: Multimodal Motion-Language Learning with Large Language Models | (PDF) MotionLLM: Multimodal Motion-Language Learning ... | https://www.researchgate.net/publication/380906869_MotionLLM_Multimodal_Motion-Language_Learning_with_Large_Language_Models | MotionGPT-2 accommodates multiple motion-relevant tasks and supporting multimodal control conditions through pre-trained Large Language Models (
Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning
Nonverbal Cues from Video-Grounded Dialogues | 2506.00958v1 | lu2023:humantomato | \cite{lu2023:humantomato} | HumanTOMATO: Text-aligned Whole-body Motion Generation | http://arxiv.org/abs/2310.12978v1 | This work targets a novel text-driven whole-body motion generation task,
which takes a given textual description as input and aims at generating
high-quality, diverse, and coherent facial expressions, hand gestures, and body
motions simultaneously. Previous works on text-driven motion generation tasks
mainly have two l... | true | true | Lu, Shunlin and Chen, Ling-Hao and Zeng, Ailing and Lin, Jing and Zhang, Ruimao and Zhang, Lei and Shum, Heung-Yeung | 2023 | null | null | null | arXiv preprint arXiv:2310.12978 | HumanTOMATO: Text-aligned Whole-body Motion Generation | HumanTOMATO: Text-aligned Whole-body Motion ... | https://lhchen.top/HumanTOMATO/ | The proposed HumanTOMATO model can generate text-aligned whole-body motions with vivid and harmonious face, hand, and body motion.
Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning
Nonverbal Cues from Video-Grounded Dialogues | 2506.00958v1 | ng2023:can | \cite{ng2023:can} | Can Language Models Learn to Listen? | http://arxiv.org/abs/2308.10897v1 | We present a framework for generating appropriate facial responses from a
listener in dyadic social interactions based on the speaker's words. Given an
input transcription of the speaker's words with their timestamps, our approach
autoregressively predicts a response of a listener: a sequence of listener
facial gesture... | true | true | Ng, Evonne and Subramanian, Sanjay and Klein, Dan and Kanazawa, Angjoo and Darrell, Trevor and Ginosar, Shiry | 2023 | null | null | null | null | Can Language Models Learn to Listen? | Can Language Models Learn to Listen? | http://arxiv.org/pdf/2308.10897v1 | We present a framework for generating appropriate facial responses from a
listener in dyadic social interactions based on the speaker's words. Given an
input transcription of the speaker's words with their timestamps, our approach
autoregressively predicts a response of a listener: a sequence of listener
facial gesture... |
Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning
Nonverbal Cues from Video-Grounded Dialogues | 2506.00958v1 | ng2022:learning | \cite{ng2022:learning} | Learning to Listen: Modeling Non-Deterministic Dyadic Facial Motion | http://arxiv.org/abs/2204.08451v1 | We present a framework for modeling interactional communication in dyadic
conversations: given multimodal inputs of a speaker, we autoregressively output
multiple possibilities of corresponding listener motion. We combine the motion
and speech audio of the speaker using a motion-audio cross attention
transformer. Furth... | true | true | Ng, Evonne and Joo, Hanbyul and Hu, Liwen and Li, Hao and Darrell, Trevor and Kanazawa, Angjoo and Ginosar, Shiry | 2022 | null | null | null | null | Learning to Listen: Modeling Non-Deterministic Dyadic Facial Motion | [PDF] Learning To Listen: Modeling Non-Deterministic Dyadic Facial Motion | https://openaccess.thecvf.com/content/CVPR2022/papers/Ng_Learning_To_Listen_Modeling_Non-Deterministic_Dyadic_Facial_Motion_CVPR_2022_paper.pdf | The method synthesizes listener motion from speaker video using a motion-audio transformer and a VQ-VAE, outputting multiple possibilities of listener motion.
Counterfactual Activation Editing for Post-hoc Prosody and
Mispronunciation Correction in TTS Models | 2506.00832v1 | strom2006expressive | \cite{strom2006expressive} | Expressive prosody for unit-selection speech synthesis. | null | null | true | false | Strom, Volker and Clark, Robert AJ and King, Simon | 2006 | null | null | null | null | Expressive prosody for unit-selection speech synthesis. | Expressive Prosody for Unit-selection Speech Synthesis - CSTR | https://www.cstr.ed.ac.uk/downloads/publications/2006/strom06.pdf | by V Strom · Cited by 42 — The Festival unit selection speech synthesis system, Multisyn [1], achieves highly natural synthetic speech by avoiding use of an explicit model of prosody in
Counterfactual Activation Editing for Post-hoc Prosody and
Mispronunciation Correction in TTS Models | 2506.00832v1 | ren2019fastspeech | \cite{ren2019fastspeech} | FastSpeech: Fast, Robust and Controllable Text to Speech | http://arxiv.org/abs/1905.09263v5 | Neural network based end-to-end text to speech (TTS) has significantly
improved the quality of synthesized speech. Prominent methods (e.g., Tacotron
2) usually first generate mel-spectrogram from text, and then synthesize speech
from the mel-spectrogram using vocoder such as WaveNet. Compared with
traditional concatena... | true | true | Ren, Yi and Ruan, Yangjun and Tan, Xu and Qin, Tao and Zhao, Sheng and Zhao, Zhou and Liu, Tie-Yan | 2019 | null | null | null | Advances in neural information processing systems | FastSpeech: Fast, Robust and Controllable Text to Speech | FastSpeech: Fast, Robust and Controllable Text to Speech | http://arxiv.org/pdf/1905.09263v5 | Neural network based end-to-end text to speech (TTS) has significantly
improved the quality of synthesized speech. Prominent methods (e.g., Tacotron
2) usually first generate mel-spectrogram from text, and then synthesize speech
from the mel-spectrogram using vocoder such as WaveNet. Compared with
traditional concatena... |
Counterfactual Activation Editing for Post-hoc Prosody and
Mispronunciation Correction in TTS Models | 2506.00832v1 | ren2020fastspeech | \cite{ren2020fastspeech} | FastSpeech 2: Fast and High-Quality End-to-End Text to Speech | http://arxiv.org/abs/2006.04558v8 | Non-autoregressive text to speech (TTS) models such as FastSpeech can
synthesize speech significantly faster than previous autoregressive models with
comparable quality. The training of FastSpeech model relies on an
autoregressive teacher model for duration prediction (to provide more
information as input) and knowledg... | true | true | Ren, Yi and Hu, Chenxu and Tan, Xu and Qin, Tao and Zhao, Sheng and Zhao, Zhou and Liu, Tie-Yan | 2020 | null | null | null | arXiv preprint arXiv:2006.04558 | FastSpeech 2: Fast and High-Quality End-to-End Text to Speech | FastSpeech 2: Fast and High-Quality End-to-End Text to Speech | https://www.microsoft.com/en-us/research/lab/microsoft-research-asia/articles/fastspeech-2-fast-and-high-quality-end-to-end-text-to-speech/ | FastSpeech 2 outperforms FastSpeech in voice quality and enjoys a much simpler training pipeline (3x training time reduction) while inheriting its advantages.
Counterfactual Activation Editing for Post-hoc Prosody and
Mispronunciation Correction in TTS Models | 2506.00832v1 | mohan2021ctrl | \cite{mohan2021ctrl} | Ctrl-P: Temporal control of prosodic variation for speech synthesis | null | null | true | false | Mohan, Devang S Ram and Hu, Vivian and Teh, Tian Huey and Torresquintero, Alexandra and Wallis, Christopher GR and Staib, Marlene and Foglianti, Lorenzo and Gao, Jiameng and King, Simon | 2021 | null | null | null | arXiv preprint arXiv:2106.08352 | Ctrl-P: Temporal control of prosodic variation for speech synthesis | Ctrl-P: Temporal Control of Prosodic Variation for Speech Synthesis | http://arxiv.org/pdf/2106.08352v1 | Text does not fully specify the spoken form, so text-to-speech models must be
able to learn from speech data that vary in ways not explained by the
corresponding text. One way to reduce the amount of unexplained variation in
training data is to provide acoustic information as an additional learning
signal. When generat... |
Counterfactual Activation Editing for Post-hoc Prosody and
Mispronunciation Correction in TTS Models | 2506.00832v1 | bandekar2023speaking | \cite{bandekar2023speaking} | Speaking rate attention-based duration prediction for speed control TTS | http://arxiv.org/abs/2310.08846v1 | With the advent of high-quality speech synthesis, there is a lot of interest
in controlling various prosodic attributes of speech. Speaking rate is an
essential attribute towards modelling the expressivity of speech. In this work,
we propose a novel approach to control the speaking rate for non-autoregressive
TTS. We a... | true | true | Bandekar, Jesuraj and Udupa, Sathvik and Singh, Abhayjeet and Jayakumar, Anjali and Badiger, Sandhya and Kumar, Saurabh and VH, Pooja and Ghosh, Prasanta Kumar and others | 2023 | null | null | null | arXiv preprint arXiv:2310.08846 | Speaking rate attention-based duration prediction for speed control TTS | Speaking Rate Control of end-to-end TTS Models by Direct ... | https://www.isca-archive.org/interspeech_2022/lenglet22_interspeech.pdf | by M Lenglet · 2022 · Cited by 8 — Evaluation was performed on the control of speaking rate on both attention-based (TC) and duration predictor based (FS) methods. Objective analyses showed
Counterfactual Activation Editing for Post-hoc Prosody and
Mispronunciation Correction in TTS Models | 2506.00832v1 | wang2018style | \cite{wang2018style} | Style Tokens: Unsupervised Style Modeling, Control and Transfer in
End-to-End Speech Synthesis | http://arxiv.org/abs/1803.09017v1 | In this work, we propose "global style tokens" (GSTs), a bank of embeddings
that are jointly trained within Tacotron, a state-of-the-art end-to-end speech
synthesis system. The embeddings are trained with no explicit labels, yet learn
to model a large range of acoustic expressiveness. GSTs lead to a rich set of
signifi... | true | true | Wang, Yuxuan and Stanton, Daisy and Zhang, Yu and Ryan, RJ-Skerry and Battenberg, Eric and Shor, Joel and Xiao, Ying and Jia, Ye and Ren, Fei and Saurous, Rif A | 2018 | null | null | null | null | Style Tokens: Unsupervised Style Modeling, Control and Transfer in
End-to-End Speech Synthesis | Unsupervised Style Modeling, Control and Transfer in End- ... | https://research.google/pubs/style-tokens-unsupervised-style-modeling-control-and-transfer-in-end-to-end-speech-synthesis/ | by Y Wang · Cited by 1080 — In this work, we propose “global style tokens”(GSTs), a bank of embeddings that are jointly trained within Tacotron, a state-of-the-art end-to-end speech |
Counterfactual Activation Editing for Post-hoc Prosody and
Mispronunciation Correction in TTS Models | 2506.00832v1 | skerry2018towards | \cite{skerry2018towards} | Towards End-to-End Prosody Transfer for Expressive Speech Synthesis with
Tacotron | http://arxiv.org/abs/1803.09047v1 | We present an extension to the Tacotron speech synthesis architecture that
learns a latent embedding space of prosody, derived from a reference acoustic
representation containing the desired prosody. We show that conditioning
Tacotron on this learned embedding space results in synthesized audio that
matches the prosody... | true | true | Skerry-Ryan, RJ and Battenberg, Eric and Xiao, Ying and Wang, Yuxuan and Stanton, Daisy and Shor, Joel and Weiss, Ron and Clark, Rob and Saurous, Rif A | 2018 | null | null | null | null | Towards End-to-End Prosody Transfer for Expressive Speech Synthesis with
Tacotron | [PDF] Towards End-to-End Prosody Transfer for Expressive Speech ... | https://proceedings.mlr.press/v80/skerry-ryan18a/skerry-ryan18a.pdf | Abstract. We present an extension to the Tacotron speech synthesis architecture that learns a latent embed- ding space of prosody, derived from a reference. |
Counterfactual Activation Editing for Post-hoc Prosody and
Mispronunciation Correction in TTS Models | 2506.00832v1 | hsu2018hierarchical | \cite{hsu2018hierarchical} | Hierarchical Generative Modeling for Controllable Speech Synthesis | http://arxiv.org/abs/1810.07217v2 | This paper proposes a neural sequence-to-sequence text-to-speech (TTS) model
which can control latent attributes in the generated speech that are rarely
annotated in the training data, such as speaking style, accent, background
noise, and recording conditions. The model is formulated as a conditional
generative model b... | true | true | Hsu, Wei-Ning and Zhang, Yu and Weiss, Ron J and Zen, Heiga and Wu, Yonghui and Wang, Yuxuan and Cao, Yuan and Jia, Ye and Chen, Zhifeng and Shen, Jonathan and others | 2018 | null | null | null | arXiv preprint arXiv:1810.07217 | Hierarchical Generative Modeling for Controllable Speech Synthesis | Hierarchical Generative Modeling for Controllable Speech Synthesis | http://arxiv.org/pdf/1810.07217v2 | This paper proposes a neural sequence-to-sequence text-to-speech (TTS) model
which can control latent attributes in the generated speech that are rarely
annotated in the training data, such as speaking style, accent, background
noise, and recording conditions. The model is formulated as a conditional
generative model b... |
Counterfactual Activation Editing for Post-hoc Prosody and
Mispronunciation Correction in TTS Models | 2506.00832v1 | lenglet2022speaking | \cite{lenglet2022speaking} | Speaking Rate Control of end-to-end TTS Models by Direct Manipulation of the Encoder's Output Embeddings | null | null | true | false | Lenglet, Martin and Perrotin, Olivier and Bailly, G{\'e}rard | 2022 | null | null | null | null | Speaking Rate Control of end-to-end TTS Models by Direct Manipulation of the Encoder's Output Embeddings | Speaking Rate Control of end-to-end TTS Models by ... - ISCA Archive | https://www.isca-archive.org/interspeech_2022/lenglet22_interspeech.html | Experimental results show that the control provided by embeddings reproduces a behaviour closer to natural speech data.
Counterfactual Activation Editing for Post-hoc Prosody and
Mispronunciation Correction in TTS Models | 2506.00832v1 | zhang2020unified | \cite{zhang2020unified} | Unified Mandarin TTS Front-end Based on Distilled BERT Model | http://arxiv.org/abs/2012.15404v1 | The front-end module in a typical Mandarin text-to-speech system (TTS) is
composed of a long pipeline of text processing components, which requires
extensive efforts to build and is prone to large accumulative model size and
cascade errors. In this paper, a pre-trained language model (PLM) based model
is proposed to si... | true | true | Zhang, Yang and Deng, Liqun and Wang, Yasheng | 2020 | null | null | null | arXiv preprint arXiv:2012.15404 | Unified Mandarin TTS Front-end Based on Distilled BERT Model | Unified Mandarin TTS Front-end Based on Distilled BERT Model | https://arxiv.org/abs/2012.15404 | We use a pre-trained Chinese BERT[1] as the text encoder and employ multi-task learning technique to adapt it to the two TTS front-end tasks.
Counterfactual Activation Editing for Post-hoc Prosody and
Mispronunciation Correction in TTS Models | 2506.00832v1 | fong2022speech | \cite{fong2022speech} | Speech Audio Corrector: using speech from non-target speakers for one-off correction of mispronunciations in grapheme-input text-to-speech | null | null | true | false | Fong, Jason and Lyth, Daniel and Henter, Gustav Eje and Tang, Hao and King, Simon | 2022 | null | null | null | null | Speech Audio Corrector: using speech from non-target speakers for one-off correction of mispronunciations in grapheme-input text-to-speech | [PDF] using speech from non-target speakers for one-off correction of ... | https://www.research.ed.ac.uk/files/364801102/Speech_Audio_Corrector_FONG_DOA13062022_VOR.pdf | Missing: 04/08/2025
Dual Debiasing for Noisy In-Context Learning for Text Generation | 2506.00418v1 | yoo2022ground | \cite{yoo2022ground} | Ground-Truth Labels Matter: A Deeper Look into Input-Label
Demonstrations | http://arxiv.org/abs/2205.12685v2 | Despite recent explosion of interests in in-context learning, the underlying
mechanism and the precise impact of the quality of demonstrations remain
elusive. Intuitively, ground-truth labels should have as much impact in
in-context learning (ICL) as supervised learning, but recent work reported that
the input-label co... | true | true | Yoo, Kang Min and Kim, Junyeob and Kim, Hyuhng Joon and Cho, Hyunsoo and Jo, Hwiyeol and Lee, Sang-Woo and Lee, Sang-goo and Kim, Taeuk | 2022 | null | null | null | null | Ground-Truth Labels Matter: A Deeper Look into Input-Label
Demonstrations | Ground-Truth Labels Matter: A Deeper Look into Input- ... | https://aclanthology.org/2022.emnlp-main.155.pdf | by KM Yoo · 2022 · Cited by 100 — We propose two new quantifiable metrics, sensitivity and GLER, to measure the impact of ground-truth label demonstrations on ICL. • We conduct
Dual Debiasing for Noisy In-Context Learning for Text Generation | 2506.00418v1 | o2023contrastive | \cite{o2023contrastive} | Contrastive Decoding Improves Reasoning in Large Language Models | http://arxiv.org/abs/2309.09117v2 | We demonstrate that Contrastive Decoding -- a simple, computationally light, and training-free text generation method proposed by Li et al 2022 -- achieves large out-of-the-box improvements over greedy decoding on a variety of reasoning tasks. Originally shown to improve the perceived quality of long-form text generati... | true | true | O'Brien, Sean and Lewis, Mike | 2,023 | null | null | null | arXiv preprint arXiv:2309.09117 | Contrastive Decoding Improves Reasoning in Large Language Models | Contrastive Decoding Improves Reasoning in Large Language Models | http://arxiv.org/pdf/2309.09117v2 | We demonstrate that Contrastive Decoding -- a simple, computationally light, and training-free text generation method proposed by Li et al 2022 -- achieves large out-of-the-box improvements over greedy decoding on a variety of reasoning tasks. Originally shown to improve the perceived quality of long-form text generati... |
Dual Debiasing for Noisy In-Context Learning for Text Generation | 2506.00418v1 | li2023unified | \cite{li2023unified} | Unified Demonstration Retriever for In-Context Learning | http://arxiv.org/abs/2305.04320v2 | In-context learning is a new learning paradigm where a language model conditions on a few input-output pairs (demonstrations) and a test input, and directly outputs the prediction. It has been shown highly dependent on the provided demonstrations and thus promotes the research of demonstration retrieval: given a test i... | true | true | Li, Xiaonan and Lv, Kai and Yan, Hang and Lin, Tianyang and Zhu, Wei and Ni, Yuan and Xie, Guotong and Wang, Xiaoling and Qiu, Xipeng | 2,023 | null | null | null | null | Unified Demonstration Retriever for In-Context Learning | Unified Demonstration Retriever for In-Context Learning | https://aclanthology.org/2023.acl-long.256/ | In this paper, we propose Unified Demonstration Retriever (UDR), a single model to retrieve demonstrations for a wide range of tasks. |
Dual Debiasing for Noisy In-Context Learning for Text Generation | 2506.00418v1 | liucontext | \cite{liucontext} | In-context Vectors: Making In Context Learning More Effective and Controllable Through Latent Space Steering | http://arxiv.org/abs/2311.06668v3 | Large language models (LLMs) demonstrate emergent in-context learning capabilities, where they adapt to new tasks based on example demonstrations. However, in-context learning has seen limited effectiveness in many settings, is difficult to quantitatively control and takes up context window space. To overcome these lim... | true | true | Liu, Sheng and Ye, Haotian and Xing, Lei and Zou, James Y | null | null | null | null | null | In-context Vectors: Making In Context Learning More Effective and Controllable Through Latent Space Steering | Making In Context Learning More Effective and ... | https://consensus.app/papers/incontext-vectors-making-in-context-learning-more-zou-liu/20a28c8387155fa1ac876aad9841f1ee | Key takeaway: 'In-context vectors (ICV) improve in-context learning effectiveness, controllability, and computational efficiency in large |