# T5-Base-Summarization
A fine-tuned version of t5-base that condenses research paper sections into concise, plain-language summaries. This is the first stage of a two-stage Research Paper Simplifier pipeline.
## Model Description
This model takes a section of a research paper as input and generates a plain-language summary approximately one tenth the length of the original text. It was fine-tuned with LoRA (PEFT) for parameter-efficient training.
## Pipeline
```
Research Paper ──► [T5-Base-Summarization] ──► Summary ──► [T5-Base-Story-Generation] ──► Story
```
## Training Details
| Parameter | Value |
|---|---|
| Base model | t5-base |
| Task | Summarization |
| Max input length | 1024 tokens |
| Max target length | 128 tokens |
| Learning rate | 3e-5 |
| Batch size | 4 |
| Gradient accumulation steps | 4 |
| Warmup steps | 500 |
| Weight decay | 0.01 |
| Fine-tuning method | LoRA (r=16, alpha=32, target modules: q, v) |
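For reference, the sketch below shows how these hyperparameters map onto a PEFT + Transformers training setup. The `LoraConfig` and trainer arguments mirror the table above; the in-memory dataset and preprocessing are placeholders, not the original training data or script.

```python
# Sketch only: reproduces the hyperparameters above with PEFT + Transformers.
# The dataset below is a placeholder, not the original training data.
from datasets import Dataset
from peft import LoraConfig, TaskType, get_peft_model
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

# LoRA on the query and value projections, as listed in the table.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=16,
    lora_alpha=32,
    target_modules=["q", "v"],  # T5 attention projection module names
)
model = get_peft_model(model, lora_config)

# Tiny in-memory stand-in for the real (section, summary) pairs.
raw = Dataset.from_dict({
    "text": ["Your research paper section here..."],
    "summary": ["A short summary."],
})

def preprocess(batch):
    # Truncation lengths match the table: 1024 input / 128 target tokens.
    model_inputs = tokenizer(batch["text"], max_length=1024, truncation=True)
    labels = tokenizer(text_target=batch["summary"], max_length=128, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

train_dataset = raw.map(preprocess, batched=True, remove_columns=raw.column_names)

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-base-summarization",
    learning_rate=3e-5,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    warmup_steps=500,
    weight_decay=0.01,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```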
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("harsharajkumar273/T5-Base-Summarization")
model = AutoModelForSeq2SeqLM.from_pretrained("harsharajkumar273/T5-Base-Summarization")

text = "Your research paper section here..."

# Ask for a summary of at most ~1/10th of the input's word count,
# matching the prompt format used during fine-tuning.
word_count = len(text.split())
prompt = f"Summarize this part of the research paper to less than {word_count // 10} words:\n{text}"

# Inputs beyond 1024 tokens are truncated, matching the training setup.
inputs = tokenizer(prompt, return_tensors="pt", max_length=1024, truncation=True)
outputs = model.generate(**inputs, max_length=128, num_beams=4)
summary = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(summary)
```
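To run the full pipeline from the diagram above, the resulting summary can be passed to the second-stage model. The snippet below continues from the `summary` variable in the previous example; the second stage's repo id and prompt format are assumptions inferred from the diagram, not confirmed values.

```python
# Sketch of chaining into the second stage. The repo id below is an
# assumption based on the pipeline diagram, not a confirmed value.
story_tokenizer = AutoTokenizer.from_pretrained("harsharajkumar273/T5-Base-Story-Generation")
story_model = AutoModelForSeq2SeqLM.from_pretrained("harsharajkumar273/T5-Base-Story-Generation")

# Assumes the story model accepts the summary as a plain prompt.
story_inputs = story_tokenizer(summary, return_tensors="pt", truncation=True)
story_ids = story_model.generate(**story_inputs, max_length=256, num_beams=4)
print(story_tokenizer.decode(story_ids[0], skip_special_tokens=True))
```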
## Evaluation Metrics
The model was evaluated with ROUGE and BERTScore on a held-out 10% test split.
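A minimal sketch of how such an evaluation can be reproduced with the Hugging Face `evaluate` library; the predictions and references below are placeholders, and the exact evaluation script is not part of this card.

```python
# Sketch only: computes ROUGE and BERTScore with the `evaluate` library.
# `predictions` and `references` are placeholders for the held-out test split.
import evaluate

predictions = ["model-generated summary..."]
references = ["reference summary..."]

rouge = evaluate.load("rouge")
print(rouge.compute(predictions=predictions, references=references))

bertscore = evaluate.load("bertscore")
scores = bertscore.compute(predictions=predictions, references=references, lang="en")
print(sum(scores["f1"]) / len(scores["f1"]))  # mean BERTScore F1
```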
## Related Models

- T5-Base-Story-Generation: the second stage of the Research Paper Simplifier pipeline (see the diagram above), which turns the summaries produced by this model into stories.