nick007x committed
Commit df69cd8 · verified · 1 Parent(s): ba1807a

Delete README.md

Files changed (1)
  1. README.md +0 -272
README.md DELETED
@@ -1,272 +0,0 @@
# Reddit Data Archive & Analysis Pipeline

## 📊 Overview

A comprehensive pipeline for archiving, processing, and analyzing Reddit data from 2005 to 2025. This repository contains tools for downloading Reddit's historical data, converting it to optimized formats, and performing detailed schema analysis and community studies.

## 🗂️ Repository Structure

```bash
├── scripts/                                  # Analysis scripts
├── analysis/                                 # Analysis methods and visualizations
│   ├── original_schema_analysis/             # Schema evolution analysis
│   │   ├── comments/                         # Comment schema analysis results
│   │   ├── figures/                          # Schema analysis visualizations
│   │   └── submissions/                      # Submission schema analysis results
│   │       ├── analysis_report_2005.txt
│   │       ├── analysis_report_2006.txt
│   │       └── ...
│   └── parquet_subreddits_analysis/          # Analysis of Parquet-converted data
│       ├── comments/                         # Comment data analysis
│       ├── figures/                          # Subreddit analysis visualizations
│       └── submissions/                      # Submission data analysis

├── analyzed_subreddits/                      # Focused subreddit case studies
│   ├── comments/                             # Subreddit-specific comment archives
│   │   └── RC_funny.parquet                  # r/funny comments (currently empty)
│   ├── reddit-media/                         # Media organized by subreddit and date
│   │   ├── content-hashed/                   # Deduplicated media (content addressing)
│   │   ├── images/                           # Image media
│   │   │   └── r_funny/                      # Organized by subreddit
│   │   │       └── 2025/01/01/               # Daily structure for temporal analysis
│   │   └── videos/                           # Video media
│   │       └── r_funny/                      # Organized by subreddit
│   │           └── 2025/01/01/               # Daily structure
│   └── submissions/                          # Subreddit-specific submission archives
│       └── RS_funny.parquet                  # r/funny submissions (currently empty)

├── converted_parquet/                        # Optimized Parquet format (year-partitioned)
│   ├── comments/                             # Comments 2005-2025
│   │   └── 2005/ ── 2025/                    # Year partitions for efficient querying
│   └── submissions/                          # Submissions 2005-2025
│       └── 2005/ ── 2025/                    # Year partitions

├── original_dump/                            # Raw downloaded Reddit archives
│   ├── comments/                             # Monthly comment archives (ZST compressed)
│   │   ├── RC_2005-12.zst ── RC_2025-12.zst  # Complete 2005-2025 coverage
│   │   └── schema_analysis/                  # Schema analysis directory
│   └── submissions/                          # Monthly submission archives
│       ├── RS_2005-06.zst ── RS_2025-12.zst  # Complete 2005-2025 coverage
│       └── schema_analysis/                  # Schema evolution analysis reports
│           ├── analysis_report_2005.txt
│           └── ...

├── subreddits_2025-01_*                      # Subreddit metadata (January 2025 snapshot)
│   ├── type_public.jsonl                     # 2.78M public subreddits
│   ├── type_restricted.jsonl                 # 1.92M restricted subreddits
│   ├── type_private.jsonl                    # 182K private subreddits
│   └── type_other.jsonl                      # 100 other/archived subreddits

├── .gitattributes                            # Git LFS configuration for large files
└── README.md                                 # This documentation file
```

## 🏆 **Data Validation Layer**

The **original_schema_analysis** acts as a **data quality validation layer** before any substantive analysis.

### **Phase 1 vs Phase 2 Analysis**

| Original Schema Analysis | Parquet Subreddits Analysis |
|--------------------------|-----------------------------|
| Raw data quality assessment | Processed data insights |
| Longitudinal (2005-2025) | Cross-sectional (subreddit focus) |
| Technical schema evolution | Social/community patterns |
| Data engineering perspective | Social science perspective |
| Low-level field statistics | High-level behavioral patterns |
| "What fields exist?" | "What do people do?" |

## 📈 Dataset Statistics

### Subreddit Ecosystem (January 2025)
- **Total Subreddits:** 21,865,152
- **Public Communities:** 2,776,279 (12.7%)
- **Restricted:** 1,923,526 (8.8%)
- **Private:** 182,045 (0.83%)
- **User Profiles:** 16,982,966 (77.7%)

### Content Scale (January 2025 Example)
- **Monthly Submissions:** ~39.9 million
- **Monthly Comments:** 500+ million (estimated)
- **NSFW Content:** 39.6% of submissions
- **Media Posts:** 34.3% hosted on Reddit media domains

### Largest Communities
1. **r/funny:** 66.3M subscribers (public)
2. **r/announcements:** 305.6M subscribers (private)
3. **r/XboxSeriesX:** 5.3M subscribers (largest restricted)

## 🛠️ Pipeline Stages

### Stage 1: Data Acquisition
- Download monthly Pushshift/Reddit archives
- Compressed ZST format for efficiency
- Complete coverage: 2005-2025

### Stage 2: Schema Analysis
- Field-by-field statistical analysis
- Type distribution tracking
- Null/empty value profiling
- Schema evolution tracking (2005-2018 complete; more years coming soon)

### Stage 3: Format Conversion
- ZST → JSONL decompression
- JSONL → Parquet conversion (see the sketch below)
- Year-based partitioning for query efficiency
- Columnar optimization for analytical queries
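
The conversion step can be sketched roughly as follows. This is a minimal illustration under assumptions, not the repository's actual script: the paths, batch size, and the assumption of a stable schema across batches are hypothetical, and months with drifting schemas would need normalization before writing.

```python
import io
import json

import pyarrow as pa
import pyarrow.parquet as pq
import zstandard as zstd  # pip install zstandard pyarrow


def flush(batch, dst, writer):
    # NOTE: Table.from_pylist infers the schema per batch; this sketch assumes the
    # schema stays stable across batches, which messy months may violate.
    table = pa.Table.from_pylist(batch)
    if writer is None:
        writer = pq.ParquetWriter(dst, table.schema, compression="zstd")
    writer.write_table(table)
    return writer


def zst_jsonl_to_parquet(src, dst, batch_size=100_000):
    """Stream-decode a .zst JSONL archive and rewrite it as zstd-compressed Parquet."""
    writer = None
    with open(src, "rb") as raw:
        # Reddit/Pushshift-style dumps use a long zstd window, hence max_window_size.
        text = io.TextIOWrapper(
            zstd.ZstdDecompressor(max_window_size=2**31).stream_reader(raw),
            encoding="utf-8",
        )
        batch = []
        for line in text:
            batch.append(json.loads(line))
            if len(batch) == batch_size:
                writer = flush(batch, dst, writer)
                batch = []
        if batch:
            writer = flush(batch, dst, writer)
    if writer is not None:
        writer.close()


# Example (paths follow the repository layout above):
# zst_jsonl_to_parquet("original_dump/comments/RC_2016-11.zst",
#                      "converted_parquet/comments/2016/RC_2016-11.parquet")
```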

### Stage 4: Community Analysis
- Subreddit categorization (public/private/restricted/user)
- Subscriber distribution analysis
- Media organization by community
- Case studies of specific subreddits

## 🔬 Analysis Tools

### Schema Analyzer
- Processes JSONL files at 6,000-7,000 lines/second
- Tracks 156 unique fields in submissions
- Monitors type consistency and null rates (see the sketch below)
- Generates comprehensive statistical reports
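
The core of such an analyzer can be approximated in a few lines. This is a hedged sketch rather than the repository's script: `profile_jsonl`, its report format, and the example path are illustrative only.

```python
import json
from collections import Counter, defaultdict


def profile_jsonl(path, max_lines=None):
    """Per-field presence, type distribution, and null/empty rates for one JSONL file."""
    total = 0
    present = Counter()               # field -> number of records containing it
    types = defaultdict(Counter)      # field -> {type name: count}
    empty = Counter()                 # field -> null or empty-string count
    with open(path, encoding="utf-8") as fh:
        for i, line in enumerate(fh):
            if max_lines is not None and i >= max_lines:
                break
            record = json.loads(line)
            total += 1
            for field, value in record.items():
                present[field] += 1
                types[field][type(value).__name__] += 1
                if value is None or value == "":
                    empty[field] += 1
    for field in sorted(present):
        print(f"{field}: present {present[field]}/{total}, "
              f"types {dict(types[field])}, null/empty {empty[field]}")


# profile_jsonl("original_dump/submissions/RS_2016-11.jsonl", max_lines=100_000)
```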

### Subreddit Classifier
- Categorizes 21.8M subreddits by type (sketched below)
- Analyzes subscriber distributions
- Identifies community growth patterns
- Exports categorized datasets
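
A simplified version of the export step might look like this. It assumes each metadata record carries the Reddit API's `subreddit_type` field; the input path, output directory name, and the handling of user-profile subreddits are illustrative and may differ from the repository's actual exporter.

```python
import json
from collections import Counter
from pathlib import Path

# Output files mirror the per-type JSONL files shipped in this repository.
BUCKETS = {
    "public": "type_public.jsonl",
    "restricted": "type_restricted.jsonl",
    "private": "type_private.jsonl",
}


def classify_subreddits(dump_path, out_dir="subreddit_metadata"):
    """Split one large subreddit metadata dump into per-type JSONL files."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    handles = {kind: (out / name).open("w", encoding="utf-8")
               for kind, name in BUCKETS.items()}
    other = (out / "type_other.jsonl").open("w", encoding="utf-8")
    counts = Counter()
    with open(dump_path, encoding="utf-8") as fh:
        for line in fh:
            kind = json.loads(line).get("subreddit_type", "other")
            handles.get(kind, other).write(line)
            counts[kind] += 1
    for handle in [*handles.values(), other]:
        handle.close()
    return counts
```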

### Media Organizer
- Content-addressable storage for deduplication (see the sketch below)
- Daily organization (YYYY/MM/DD)
- Subreddit-based categorization
- Thumbnail generation
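
Content addressing plus the daily tree can be combined roughly as follows. This is a sketch under assumptions: SHA-256 as the content hash, hard links for deduplication, and the `store_media` helper are choices made here, not necessarily those of the actual organizer.

```python
import hashlib
import os
import shutil
from datetime import datetime, timezone
from pathlib import Path


def store_media(src, subreddit, created_utc, kind="images",
                root="analyzed_subreddits/reddit-media"):
    """File the media once under content-hashed/, then link it into the daily tree."""
    src = Path(src)
    digest = hashlib.sha256(src.read_bytes()).hexdigest()
    hashed = Path(root) / "content-hashed" / (digest + src.suffix)
    hashed.parent.mkdir(parents=True, exist_ok=True)
    if not hashed.exists():
        shutil.copy2(src, hashed)                      # single physical copy
    day = datetime.fromtimestamp(created_utc, tz=timezone.utc)
    dated = (Path(root) / kind / f"r_{subreddit}"
             / f"{day:%Y}" / f"{day:%m}" / f"{day:%d}" / hashed.name)
    dated.parent.mkdir(parents=True, exist_ok=True)
    if not dated.exists():
        os.link(hashed, dated)                         # hard link: dedupe, same filesystem
    return dated


# store_media("funny_cat.jpg", "funny", 1735689600)    # lands under images/r_funny/2025/01/01/
```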

## 💾 Data Formats

### Original Data
- **Format:** ZST-compressed JSONL
- **Compression:** Zstandard (high ratio)
- **Structure:** Monthly files (RC/RS_YYYY-MM.zst)

### Processed Data
- **Format:** Apache Parquet
- **Compression:** Zstandard (columnar)
- **Partitioning:** Year-based (2005-2025)
- **Optimization:** Column pruning, predicate pushdown

### Metadata
- **Format:** JSONL
- **Categorization:** Subreddit type classification
- **Timestamps:** Unix epoch seconds (see the example below)
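
For example, a single metadata record can be inspected like this; the filename and the assumption that `display_name`, `subscribers`, and `created_utc` are present follow the Reddit API's subreddit schema and may not hold for every record.

```python
import json
from datetime import datetime, timezone

# Peek at the first record of one metadata file (path assumed from the tree above).
with open("subreddits_2025-01_type_public.jsonl", encoding="utf-8") as fh:
    sub = json.loads(next(fh))

created = datetime.fromtimestamp(sub["created_utc"], tz=timezone.utc)
print(sub.get("display_name"), sub.get("subscribers"), created.isoformat())
```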

## 🔧 Technical Design Decisions

### Compression Strategy: Why ZST → JSONL → Parquet?

This pipeline employs a tiered compression strategy based on access patterns:

#### **Original Archives (ZST Compressed)**
- **Format:** `.zst` (Zstandard) compressed JSONL
- **Why ZST?** 6:1 compression ratio (36GB → 6GB) vs gzip's 4:1
- **Trade-off:** 39.7s decompression time vs 26.8s raw read
- **Decision:** Keep only for archival; decompress once to JSONL

#### **Analytical Storage (Parquet)**
- **Format:** Apache Parquet with zstd compression
- **Why Parquet?** Columnar storage enables:
  - Selective column reads (read 2/17 columns = 15s vs 55s)
  - Built-in compression (36GB → 7GB = 5:1 ratio)
  - Predicate pushdown (skip irrelevant rows); see the query sketch below
- **Benchmark:** Metadata read in 4.9s vs 26.8s JSONL read
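
As a concrete illustration of column pruning and predicate pushdown, a query against one year partition could look like the following. The paths, column names, and the use of pyarrow's dataset API are assumptions for the sketch, not a description of the repository's own query code.

```python
import pyarrow.dataset as ds

# Scan only the 2016 comment partition, read two of the many columns, and let the
# filter be pushed down so row groups without r/funny rows are skipped entirely.
comments_2016 = ds.dataset("converted_parquet/comments/2016", format="parquet")
funny_scores = comments_2016.to_table(
    columns=["author", "score"],
    filter=ds.field("subreddit") == "funny",
)
print(funny_scores.num_rows)
```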
184
- ## ⚡ Performance Benchmarks
185
-
186
- ### File Processing Speeds (36GB Reddit Comments, Nov 2016)
187
-
188
- | Format | Size | Read Time | Compression | Notes |
189
- |--------|------|-----------|-------------|-------|
190
- | **ZST Compressed** | 6.0GB | 39.7s | 6:1 | Requires decompression penalty |
191
- | **JSONL Raw** | 36GB | 26.8s | 1:1 | Fastest for repeated access |
192
- | **Parquet** | 7.0GB | 4.9s* | 5:1 | Metadata only; queries 2-15s |
193
-
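
Numbers like these can be reproduced with a simple timing harness. The paths and column names below are placeholders, and pyarrow's dataset API is one way (not necessarily the one used for the table above) to separate metadata-only reads from column reads.

```python
import time

import pyarrow.dataset as ds


def timed(label, fn):
    start = time.perf_counter()
    fn()
    print(f"{label}: {time.perf_counter() - start:.1f}s")


# One month of decompressed comments vs its Parquet counterpart (illustrative paths).
timed("JSONL full scan", lambda: sum(1 for _ in open("RC_2016-11.jsonl", "rb")))

parquet = ds.dataset("converted_parquet/comments/2016", format="parquet")
timed("Parquet row count (metadata only)", parquet.count_rows)
timed("Parquet, two columns", lambda: parquet.to_table(columns=["author", "score"]))
```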

## 🎯 Research Applications

### Community Studies
- Subreddit lifecycle analysis
- Moderation pattern tracking
- Content policy evolution
- NSFW community dynamics

### Content Analysis
- Media type evolution (2005-2025)
- Post engagement metrics
- Cross-posting behavior
- Temporal posting patterns

### Network Analysis
- Cross-community interactions
- User migration patterns
- Community overlap studies
- Influence network mapping

## 📊 Key Findings (Preliminary)

### Subreddit Distribution
- **Long tail:** 89.4% of subreddits have 0 subscribers
- **Growth pattern:** Most communities start as user profiles
- **Restriction trend:** 8.8% of communities are restricted
- **Private communities:** Mostly large, established groups

### Content Characteristics
- **Text dominance:** 40.7% of posts are text-only
- **NSFW prevalence:** 39.6% of content marked adult
- **Moderation scale:** 32% removed by Reddit, 36% by moderators
- **Media evolution:** Video posts growing (3% of posts in Jan 2025)

## 📄 License & Attribution

### Data Source
- Reddit Historical Data via Pushshift/Reddit API
- Subreddit metadata from the Reddit API
- **Note:** Respect Reddit's terms of service and API limits

### Code License
MIT License - See LICENSE file for details

### Citation
If using this pipeline for research, please cite:
```
Reddit Data Analysis Pipeline. (2025). Comprehensive archive and analysis
tools for Reddit historical data (2005-2025). GitHub Repository.
```

## 🆘 Support & Contributing

### Issue Tracking
- Data quality issues: report them against the relevant schema analysis reports
- Processing errors: check file integrity and formats
- Performance: consider partitioning and compression settings

### Contributing
1. Fork the repository
2. Add tests for new analyzers
3. Document data processing steps
4. Submit a pull request with analysis validation

### Performance Tips
- Use SSD storage for active processing
- Enable memory mapping for large files (see the sketch below)
- Consider Spark/Dask for distributed processing
- Implement incremental updates for new data
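
For the memory-mapping tip, pyarrow exposes this directly; the path below is a placeholder for whichever converted file is being read.

```python
import pyarrow.parquet as pq

# memory_map=True lets the OS page column data in lazily instead of reading eagerly.
table = pq.read_table(
    "converted_parquet/comments/2016/RC_2016-11.parquet",
    columns=["author", "score"],
    memory_map=True,
)
print(table.num_rows)
```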

## 📚 Related Research
- Social network analysis
- Community detection algorithms
- Content moderation studies
- Temporal pattern analysis
- Cross-platform comparative studies

---