---
language: en
license: apache-2.0
base_model: microsoft/prophetnet-large-uncased
tags:
- summarization
- research-paper
- seq2seq
- prophetnet
- lora
- peft
datasets:
- custom
metrics:
- rouge
- bertscore
---

# ProphetNet-Large-Summarization

A fine-tuned version of [microsoft/prophetnet-large-uncased](https://huggingface.co/microsoft/prophetnet-large-uncased) that condenses research-paper sections into concise summaries. This is the first stage of a two-step **Research Paper Simplifier** pipeline.

## Model Description

This model takes a section of a research paper as input and generates a plain-language summary. It was fine-tuned with LoRA (PEFT) under 4-bit quantization for memory-efficient training.

## Pipeline

```
Research Paper ──► [ProphetNet-Large-Summarization] ──► Summary ──► [ProphetNet-Large-Story-Generation] ──► Story
```
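
A minimal sketch of running both stages end to end, assuming the story-generation model loads with the same seq2seq classes (the first-stage prompt format matches the Usage section below; the prompt the second stage expects is an assumption, so check its model card):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

def run_stage(model_id: str, prompt: str, max_length: int = 256) -> str:
    # Both stages are seq2seq checkpoints, so one helper covers both.
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
    inputs = tokenizer(prompt, return_tensors="pt", max_length=2048, truncation=True)
    outputs = model.generate(**inputs, max_length=max_length, num_beams=4)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

section = "Your research paper section here..."

# Stage 1: summarize the section.
summary = run_stage(
    "harsharajkumar273/ProphetNet-Large-Summarization",
    f"Summarize this part of the research paper to less than {len(section.split()) // 10} words:\n{section}",
)

# Stage 2: turn the summary into a story. Passing the raw summary as the
# prompt is an assumption about the story model's expected input.
story = run_stage("harsharajkumar273/ProphetNet-Large-Story-Generation", summary)
print(story)
```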

## Training Details

| Parameter | Value |
|-----------|-------|
| Base model | microsoft/prophetnet-large-uncased |
| Task | Summarization |
| Max input length | 2048 tokens |
| Max target length | 256 tokens |
| Learning rate | 3e-5 |
| Batch size | 2 |
| Gradient accumulation steps | 4 |
| Warmup steps | 1500 |
| Weight decay | 0.01 |
| Fine-tuning method | LoRA (r=16, alpha=64, targets: query_proj, value_proj) |
| Quantization | 4-bit NF4 (bitsandbytes) |
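For reference, a minimal sketch of the fine-tuning setup reconstructed from the table above (dataset loading and the training loop are omitted; `output_dir` is illustrative):

```python
import torch
from transformers import AutoModelForSeq2SeqLM, BitsAndBytesConfig, Seq2SeqTrainingArguments
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit NF4 quantization via bitsandbytes, as listed in the table.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForSeq2SeqLM.from_pretrained(
    "microsoft/prophetnet-large-uncased",
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# LoRA on the attention query/value projections, per the table.
lora_config = LoraConfig(
    r=16,
    lora_alpha=64,
    target_modules=["query_proj", "value_proj"],
    task_type="SEQ_2_SEQ_LM",
)
model = get_peft_model(model, lora_config)

# Optimization hyperparameters from the table.
training_args = Seq2SeqTrainingArguments(
    output_dir="prophetnet-summarization",  # illustrative path
    learning_rate=3e-5,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,
    warmup_steps=1500,
    weight_decay=0.01,
)
```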

## Usage

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("harsharajkumar273/ProphetNet-Large-Summarization")
model = AutoModelForSeq2SeqLM.from_pretrained("harsharajkumar273/ProphetNet-Large-Summarization")

text = "Your research paper section here..."

# Ask for a summary roughly one tenth the length of the input section.
word_count = len(text.split())
prompt = f"Summarize this part of the research paper to less than {word_count // 10} words:\n{text}"

# Inputs beyond 2048 tokens (the training limit) are truncated.
inputs = tokenizer(prompt, return_tensors="pt", max_length=2048, truncation=True)
outputs = model.generate(**inputs, max_length=256, num_beams=4)
summary = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(summary)
```
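
The snippet above assumes merged weights are hosted in this repository. If only the LoRA adapter is stored, it can instead be attached to the base model with PEFT; a sketch under that assumption:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from peft import PeftModel

# Assumption: this repo holds only the LoRA adapter, not merged weights.
base = AutoModelForSeq2SeqLM.from_pretrained("microsoft/prophetnet-large-uncased")
model = PeftModel.from_pretrained(base, "harsharajkumar273/ProphetNet-Large-Summarization")
tokenizer = AutoTokenizer.from_pretrained("microsoft/prophetnet-large-uncased")
```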

## Evaluation Metrics

Evaluated using ROUGE and BERTScore on a held-out 10% test split.
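
A minimal sketch of computing both metrics with the `evaluate` library (the predictions and references below are placeholders):

```python
import evaluate

predictions = ["generated summary ..."]  # model outputs on the test split
references = ["reference summary ..."]   # ground-truth summaries

rouge = evaluate.load("rouge")
bertscore = evaluate.load("bertscore")

print(rouge.compute(predictions=predictions, references=references))
print(bertscore.compute(predictions=predictions, references=references, lang="en"))
```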

## Related Models

- [harsharajkumar273/Bart-Base-Summarization](https://huggingface.co/harsharajkumar273/Bart-Base-Summarization)
- [harsharajkumar273/T5-Base-Summarization](https://huggingface.co/harsharajkumar273/T5-Base-Summarization)
- [harsharajkumar273/ProphetNet-Large-Story-Generation](https://huggingface.co/harsharajkumar273/ProphetNet-Large-Story-Generation) (next stage)