arxiv:2601.05794

Simplify-This: A Comparative Analysis of Prompt-Based and Fine-Tuned LLMs

Published on Jan 9

Abstract

AI-generated summary: A comparative analysis of fine-tuning and prompt engineering approaches for text simplification using encoder-decoder large language models shows that fine-tuned models provide better structural simplification, while prompting achieves higher semantic similarity but with a tendency to copy inputs.

Large language models (LLMs) enable strong text generation, but adapting them to a task involves a practical tradeoff between fine-tuning and prompt engineering. We introduce Simplify-This, a comparative study evaluating both paradigms for text simplification with encoder-decoder LLMs across multiple benchmarks, using a range of evaluation metrics. Fine-tuned models consistently deliver stronger structural simplification, whereas prompting often attains higher semantic similarity scores yet tends to copy inputs. A human evaluation favors fine-tuned outputs overall. We release code, a cleaned derivative dataset used in our study, checkpoints of our fine-tuned models, and prompt templates to facilitate reproducibility and future work.
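The contrast the abstract draws is easy to see in code. Below is a minimal sketch of the two paradigms using the Hugging Face transformers and evaluate libraries: zero-shot prompting of an instruction-tuned encoder-decoder versus direct inference with a checkpoint fine-tuned on simplification pairs, both scored with SARI. The model names, the prompt wording, and the choice of SARI are assumptions for illustration; the paper's actual templates, checkpoints, and metric suite are in its released artifacts.

# Minimal sketch of the two paradigms; model names, prompt wording, and the
# SARI metric are illustrative assumptions, not the paper's configuration.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
import evaluate

source = "The committee deliberated at length before ratifying the amendment."
reference = "The committee talked for a long time before approving the change."

def simplify(model_name: str, text: str) -> str:
    """Load an encoder-decoder checkpoint and decode one simplification."""
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
    inputs = tok(text, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=64)
    return tok.decode(out[0], skip_special_tokens=True)

# Paradigm 1: zero-shot prompting. The task instruction lives in the input;
# the weights of the instruction-tuned model are untouched.
prompted = simplify("google/flan-t5-base",
                    f"Simplify the following sentence: {source}")

# Paradigm 2: a checkpoint fine-tuned on simplification pairs maps source to
# simplified text directly, no instruction needed. The repo name below is
# hypothetical, standing in for the paper's released checkpoints.
fine_tuned = simplify("your-org/simplify-this-t5", source)

# SARI compares each output against both the source and a reference, so an
# output that copies the source verbatim scores poorly even if it is
# semantically identical to the input.
sari = evaluate.load("sari")
for name, pred in [("prompted", prompted), ("fine-tuned", fine_tuned)]:
    print(name, sari.compute(sources=[source],
                             predictions=[pred],
                             references=[[reference]]))

In a setup like this, a prompted model that echoes its input can still score high on embedding-based semantic similarity while SARI penalizes the copying, which is one way the gap the abstract describes shows up in automatic metrics.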

