---
license: apache-2.0
language:
- en
pretty_name: 'StackPulse-QA: Instruction-Tuning Q&A Pairs from Stack Overflow'
size_categories:
- 100K<n<1M
task_categories:
- question-answering
- text-generation
- text2text-generation
tags:
- stackoverflow
- instruction-tuning
- qa
- code
- fine-tuning
- alpaca-format
- llm-training
---
# 🧩 StackPulse-QA: Instruction-Tuning Q&A Pairs from Stack Overflow

## Dataset Summary
Instruction-tuning Q&A dataset built from `Omarrran/StackPulse_778K_QnA_Code_dataset` by joining question IDs with the BigQuery table `bigquery-public-data.stackoverflow.posts_answers` on `accepted_answer_id`.

Each sample consists of:

- `input_text_instruct`: the question (title + body) prefixed with an instruction
- `output_text`: the accepted answer from Stack Overflow

The format mirrors the instruction-tuning dataset from DeepLearning.AI's *Finetuning Large Language Models* course, so the data is ready for fine-tuning PaLM, LLaMA, Mistral, Gemma, Phi, and similar models.
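The join described above can be sketched as a pandas merge. This is an illustration only: the real pipeline runs in BigQuery, and the toy rows below are invented.

```python
import pandas as pd

# Toy question metadata; question 3 has no accepted answer, so it is dropped.
questions = pd.DataFrame({
    "question_id": [1, 2, 3],
    "title": ["Reverse a list?", "What is a decorator?", "Fix my regex"],
    "accepted_answer_id": [10, 20, None],
})

# Toy answer rows keyed by answer id; answer 30 was never accepted.
answers = pd.DataFrame({
    "answer_id": [10, 20, 30],
    "output_text": ["<p>Use xs[::-1]</p>", "<p>A callable wrapper.</p>", "<p>...</p>"],
})

# Inner join on accepted_answer_id = answer_id keeps only accepted Q&A pairs.
pairs = questions.merge(
    answers, left_on="accepted_answer_id", right_on="answer_id", how="inner"
)
print(pairs[["question_id", "title", "output_text"]])
```

Only questions 1 and 2 survive the inner join, which is exactly why unanswered or unaccepted questions never appear in the dataset.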
## 📊 Processing Progress

- Runs completed: 4 / 6
- Questions processed: 400,000 / 554,196
- Questions remaining: 154,196
## 📁 Files in This Dataset

### 🏋️ Training Files (80% split)
| File | Format | Description |
|---|---|---|
| data/tune_data_stack_overflow_python_qa_run1-07:19:04:2026.jsonl | JSONL | Training split from run 1 |
| data/tune_data_stack_overflow_python_qa_run2-07:19:04:2026.jsonl | JSONL | Training split from run 2 |
| data/tune_data_stack_overflow_python_qa_run3-07:19:04:2026.jsonl | JSONL | Training split from run 3 |
| data/tune_data_stack_overflow_python_qa_run4-07:19:04:2026.jsonl | JSONL | Training split from run 4 |
| data/tune_data_stack_overflow_python_qa_run5-07:19:04:2026.jsonl | JSONL | Training split from run 5 |
### 🧪 Evaluation Files (20% split)
| File | Format | Description |
|---|---|---|
| data/tune_eval_data_stack_overflow_python_qa_run1-07:19:04:2026.jsonl | JSONL | Eval split from run 1 |
| data/tune_eval_data_stack_overflow_python_qa_run2-07:19:04:2026.jsonl | JSONL | Eval split from run 2 |
| data/tune_eval_data_stack_overflow_python_qa_run3-07:19:04:2026.jsonl | JSONL | Eval split from run 3 |
| data/tune_eval_data_stack_overflow_python_qa_run4-07:19:04:2026.jsonl | JSONL | Eval split from run 4 |
### 📄 Full Metadata CSVs
| File | Format | Description |
|---|---|---|
| data/stackpulse_qa_full_run1-07:19:04:2026.csv | CSV | Full metadata for run 1 |
| data/stackpulse_qa_full_run2-07:19:04:2026.csv | CSV | Full metadata for run 2 |
| data/stackpulse_qa_full_run3-07:19:04:2026.csv | CSV | Full metadata for run 3 |
| data/stackpulse_qa_full_run4-07:19:04:2026.csv | CSV | Full metadata for run 4 |
## 🗂️ Schema

### JSONL Files (training / eval)

Exactly 2 fields per row, ready for instruction fine-tuning:

| Field | Type | Description |
|---|---|---|
| `input_text_instruct` | string | Instruction prefix + question title + question body |
| `output_text` | string | Accepted answer body (HTML format) |
### CSV Files (full metadata)
| Column | Description |
|---|---|
| `question_id` | Stack Overflow question ID |
| `input_text` | title + body (no instruction prefix) |
| `output_text` | accepted answer body |
| `input_text_instruct` | instruction-prefixed input (same as JSONL) |
| `title` | question title only |
| `tags` | pipe-separated tags |
| `q_score` | question upvote score |
| `view_count` | total views |
| `answer_count` | number of answers |
| `accepted_answer_id` | ID of the accepted answer |
| `answer_id` | ID of this answer (equal to `accepted_answer_id`) |
| `a_score` | answer upvote score |
| `is_accepted` | always `True` (only accepted answers are kept) |
| `creation_date` | question creation timestamp |
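Because the CSVs carry score and view metadata, they can also be used to curate a higher-signal subset before training. A minimal sketch, using invented rows that mirror the column names above:

```python
import pandas as pd

# Invented rows mirroring the metadata CSV columns (values are not real data).
meta = pd.DataFrame({
    "question_id": [101, 102, 103],
    "q_score": [42, 1, 15],
    "a_score": [30, 0, 7],
    "view_count": [120_000, 300, 9_000],
})

# Example curation rule: keep pairs where both question and answer were upvoted.
curated = meta[(meta["q_score"] >= 10) & (meta["a_score"] > 0)]
print(curated["question_id"].tolist())
```

The thresholds here are arbitrary; pick cutoffs that suit your quality/size trade-off.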
## 🚀 Quick Start

### Load with pandas

```python
import pandas as pd

# pd.read_json does not expand glob patterns, so pass the full filename
train = pd.read_json(
    "data/tune_data_stack_overflow_python_qa_run1-07:19:04:2026.jsonl", lines=True
)
eval_ = pd.read_json(
    "data/tune_eval_data_stack_overflow_python_qa_run1-07:19:04:2026.jsonl", lines=True
)

print(train.iloc[0]["input_text_instruct"][:300])
print(train.iloc[0]["output_text"][:300])
```
### Load with HuggingFace datasets

```python
from datasets import load_dataset

# load_dataset("json", ...) does expand glob patterns, so all run shards load at once
ds = load_dataset(
    "json",
    data_files={
        "train": "data/tune_data_stack_overflow_python_qa_run*.jsonl",
        "eval": "data/tune_eval_data_stack_overflow_python_qa_run*.jsonl",
    },
)
print(ds)
```
### Use for fine-tuning (Alpaca-style)

```python
def format_prompt(ex):
    # Concatenate the instruction-prefixed question and the accepted answer
    # into a single Alpaca-style training string.
    return {
        "text": f"{ex['input_text_instruct']}\n\n### Response:\n{ex['output_text']}"
    }

train_formatted = ds["train"].map(format_prompt)
```
## 📝 Instruction Template Used

```text
Please answer the following Stackoverflow question on Programming. Answer it like you are a developer answering Stackoverflow questions. Stackoverflow question: {title}{body}
```
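Applying that template in code might look like the sketch below. `build_input` is a hypothetical helper, and the direct concatenation of `{title}` and `{body}` with no separator is an assumption taken literally from the template text.

```python
INSTRUCTION = (
    "Please answer the following Stackoverflow question on Programming. "
    "Answer it like you are a developer answering Stackoverflow questions. "
    "Stackoverflow question: "
)

def build_input(title: str, body: str) -> str:
    # Instruction prefix + title + body, concatenated directly as in the template.
    return f"{INSTRUCTION}{title}{body}"

example = build_input("How to reverse a list?", "<p>I have a list of ints...</p>")
print(example[:80])
```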
## ⚠️ Caveats

- **HTML in answers:** `output_text` contains raw HTML tags (`<p>`, `<pre>`, `<code>`). Strip or keep them depending on your use case.
- **Accepted answers only:** The join filters on `q.accepted_answer_id = a.id`; other community answers are skipped.
- **~60% match rate:** Of every 100K question IDs queried, roughly 60K have accepted answers in BigQuery. The rest are self-answered, deleted, or have no accepted answer.
- **80/20 split:** Each run uses `random_state=42` for reproducible train/eval splits.
- **Mirrors L2_data.ipynb:** The format exactly matches the DeepLearning.AI *Finetuning Large Language Models* course notebook structure.
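If you want plain text rather than raw HTML, the standard library's `html.parser` is enough to strip the tags; a minimal sketch:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects text content while discarding tags such as <p> and <code>."""

    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        self.parts.append(data)

def strip_html(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return "".join(parser.parts)

print(strip_html("<p>Use <code>sorted(xs)</code> instead.</p>"))
# Use sorted(xs) instead.
```

Note that this drops tags only; character references like `&amp;` are unescaped automatically by `HTMLParser` (with `convert_charrefs=True`, the default). For code-heavy answers you may prefer to keep `<pre>`/`<code>` content fenced instead of flattening it.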
## 🔗 Source Dataset

Question IDs and metadata sourced from: `Omarrran/StackPulse_778K_QnA_Code_dataset`

Answers joined from: `bigquery-public-data.stackoverflow.posts_answers` (Google BigQuery Public Dataset)
## 📖 Citation

```bibtex
@dataset{malik2026stackpulseqa,
  author    = {Malik, Omar Haq Nawaz},
  title     = {StackPulse-QA: Instruction-Tuning Q&A Pairs from Stack Overflow},
  year      = {2026},
  publisher = {HuggingFace},
  url       = {https://huggingface.co/datasets/Omarrran/stackpulse_qa_output},
  license   = {Apache-2.0}
}
```
## 👤 Author

**Omar Haq Nawaz Malik** (HuggingFace: Omarrran)
AI Engineer & NLP Researcher | BITS Pilani | Srinagar, Kashmir