tamnd committed
Commit 3016992 · Parent(s): 74cf856

Add 2011-02-15 — 40.9K events, 11 files
README.md CHANGED
@@ -51,60 +51,90 @@ configs:
   data_files: "data/public_events/**/*.parquet"
   - config_name: discussions
   data_files: "data/discussions/**/*.parquet"
 ---

 # OpenGitHub

- Every public GitHub event since 2015, parsed into 16 structured Parquet tables, ready for analysis, training, and research.

- OpenGitHub covers **2011-02-12** to **2011-02-14** (3 days), totaling **108,663 events**. The original 19.7 MB of raw GH Archive NDJSON is compressed into 10.9 MB of Zstd-compressed Parquet, partitioned as `data/TABLE/YYYY/MM/DD.parquet` (one file per event type per day). All 16 event types are fully flattened into typed columns, so there is no JSON parsing needed downstream. The dataset is updated daily and released under the [Open Data Commons Attribution License (ODC-By) v1.0](https://opendatacommons.org/licenses/by/1-0/).

- The underlying data comes from [GH Archive](https://www.gharchive.org/), created by [Ilya Grigorik](https://www.igvita.com/), which has been recording every public GitHub event via the [Events API](https://docs.github.com/en/rest/activity/events) since 2011.

- ## Why this dataset?

- GitHub is the world's largest open-source platform, with over 200 million repositories, millions of developers, and billions of events. [GH Archive](https://www.gharchive.org/) captures every public event as gzipped NDJSON, one file per hour. Querying raw JSON at this scale is painful.

- OpenGitHub fixes that. Every event is parsed, every nested field is flattened into typed Parquet columns, and the output is partitioned by date. You can `SELECT` stars, issues, PRs, and commits directly with DuckDB, Spark, pandas, or HuggingFace Data Studio.

- Every field from the original GitHub API response is preserved. You can reconstruct the original event from these tables.

 ## Events per year

 ```
- 2011 ██████████████████████████████ 108.7K
 ```

 | Year | Days | Events | Avg/Day | Raw Input | Parquet Output | Download | Process | Upload |
 |------|-----:|-------:|--------:|----------:|---------------:|---------:|--------:|-------:|
- | 2011 | 3 | 108,663 | 36,221 | 19.7 MB | 10.9 MB | 27s | 36s | 1m07s |

 ### Pushes per year

 ```
- 2011 ██████████████████████████████ 57.8K
 ```

 ### Issues per year

 ```
- 2011 ██████████████████████████████ 4.1K
 ```

 ### Pull requests per year

 ```
- 2011 ██████████████████████████████ 2.5K
 ```

 ### Stars per year

 ```
- 2011 ██████████████████████████████ 11.9K
 ```
@@ -126,6 +156,11 @@ ds = load_dataset("open-index/open-github", "issues",

 # Load all pull requests into memory
 ds = load_dataset("open-index/open-github", "pull_requests")
 ```

 ### DuckDB
@@ -483,28 +518,25 @@ GitHub Discussions lifecycle: created, answered, category_changed, labeled, and

 | Table | GitHub Event | Events | % | Description |
 |-------|-------------|-------:|---:|-------------|
- | `pushes` | PushEvent | 57,768 | 53.2% | Git pushes with commits |
- | `issues` | IssuesEvent | 4,113 | 3.8% | Issue lifecycle events |
- | `pull_requests` | PullRequestEvent | 2,468 | 2.3% | PR lifecycle events |
- | `stars` | WatchEvent | 11,931 | 11.0% | Repository stars |
- | `forks` | ForkEvent | 2,912 | 2.7% | Repository forks |
- | `creates` | CreateEvent | 21,881 | 20.1% | Branch/tag/repo creation |
- | `deletes` | DeleteEvent | 1,713 | 1.6% | Branch/tag deletion |
- | `commit_comments` | CommitCommentEvent | 1,420 | 1.3% | Comments on commits |
- | `wiki_pages` | GollumEvent | 3,560 | 3.3% | Wiki page edits |
- | `members` | MemberEvent | 808 | 0.7% | Collaborator additions |
- | `public_events` | PublicEvent | 89 | 0.1% | Repo made public |

 ## How it's built

- The pipeline processes each hourly GH Archive file in a single pass:

- 1. **Download** the hourly `.json.gz` file from `data.gharchive.org` (typically 100 to 150 MB compressed per hour)
- 2. **Decompress** and parse each line as a JSON event
- 3. **Route** each event by its `type` field to one of 16 handlers
- 4. **Flatten** nested JSON into typed columns. For example, `issue.user.login` becomes `issue_user_login` and `pull_request.head.repo.name` becomes `pr_head_repo_name`
- 5. **Write** to Parquet with Zstd compression, partitioned as `data/TABLE/YYYY/MM/DD.parquet`
- 6. **Publish** daily to HuggingFace with auto-generated stats

 All scalar fields are fully flattened into typed columns. Variable-length arrays (commits, labels, assets, topics, assignees) are stored as native Parquet LIST columns — no JSON strings. All `*_at` timestamp fields use the Parquet TIMESTAMP type (UTC microsecond precision), so DuckDB, pandas, Spark, and the HuggingFace viewer all read them as native datetimes.
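The flattening rule described above (nested keys joined with underscores) can be sketched as follows. `flatten` is a hypothetical helper for illustration only; the real pipeline also applies renames such as `pull_request` → `pr` that this sketch omits.

```python
def flatten(obj: dict, prefix: str = "") -> dict:
    """Recursively flatten nested dicts into underscore-joined keys,
    e.g. {"issue": {"user": {"login": "x"}}} -> {"issue_user_login": "x"}."""
    out = {}
    for key, value in obj.items():
        name = f"{prefix}_{key}" if prefix else key
        if isinstance(value, dict):
            out.update(flatten(value, name))
        else:
            # Scalars become typed columns; lists stay intact (Parquet LIST columns)
            out[name] = value
    return out

event = {"issue": {"user": {"login": "octocat"}, "number": 7}, "action": "opened"}
print(flatten(event))
# {'issue_user_login': 'octocat', 'issue_number': 7, 'action': 'opened'}
```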
@@ -512,10 +544,10 @@ No events are filtered. Every public event captured by GH Archive appears in the

 ## Known limitations

- - **Full coverage starts 2015-01-01.** Events from 2011-02-12 to 2014-12-31 are included but parsed from the deprecated Timeline API (see above).
 - **Bot activity.** A significant fraction of events (especially pushes and issues) are generated by bots such as Dependabot, Renovate, and CI systems. No bot filtering is applied.
 - **Event lag.** GH Archive captures events with a small delay (roughly minutes). Events during GitHub outages may be missing.
- - **Coverage starts 2015-01-01 for full data.** Events from 2011-02-12 to 2014-12-31 are parsed from the deprecated Timeline API. IssuesEvent and IssueCommentEvent from that period contain only integer IDs (no title, body, or state) because the old API did not include full objects in event payloads.

 ## Personal information
 
   data_files: "data/public_events/**/*.parquet"
   - config_name: discussions
   data_files: "data/discussions/**/*.parquet"
+  - config_name: live
+  data_files: "today/raw/**/*.parquet"
 ---

 # OpenGitHub

+ ## What is it?

+ This dataset contains every public event on GitHub: every push, pull request, issue, star, fork, code review, release, and discussion across all public repositories. GitHub is the world's largest software development platform, home to over 200 million repositories and the daily work of tens of millions of developers, from individual open-source contributors to the engineering teams behind the most widely used software on Earth.

+ The archive currently spans **2011-02-12** to **2011-02-15** (4 days), totaling **149,533 events** across 16 fully structured Parquet tables. New events are fetched directly from the GitHub Events API every few seconds and committed as 5-minute Parquet blocks through an automated live pipeline, so the dataset stays current with GitHub itself.

+ We believe this is the most complete and regularly updated structured mirror of public GitHub activity available on Hugging Face. The original 27.5 MB of raw GH Archive NDJSON has been parsed, flattened, and compressed into 15.0 MB of Zstd-compressed Parquet. Every nested JSON field is expanded into typed columns — no JSON parsing needed downstream. The data is partitioned as `data/TABLE/YYYY/MM/DD.parquet`, making it straightforward to query with DuckDB, load with the `datasets` library, or process with any tool that reads Parquet.

+ The underlying data comes from [GH Archive](https://www.gharchive.org/), created by [Ilya Grigorik](https://www.igvita.com/), which has been recording every public GitHub event via the [Events API](https://docs.github.com/en/rest/activity/events) since 2011. Released under the [Open Data Commons Attribution License (ODC-By) v1.0](https://opendatacommons.org/licenses/by/1-0/).
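Because files follow the `data/TABLE/YYYY/MM/DD.parquet` layout, a single day of any table can be addressed directly. `day_path` below is a hypothetical helper that just illustrates the layout; prefix the result with `hf://datasets/open-index/open-github/` to read remotely.

```python
def day_path(table: str, date: str) -> str:
    """Build the partition path data/TABLE/YYYY/MM/DD.parquet for one day."""
    year, month, day = date.split("-")
    return f"data/{table}/{year}/{month}/{day}.parquet"

# A single day of stars, as stored in this repository
print(day_path("stars", "2011-02-15"))
# data/stars/2011/02/15.parquet
```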
+ ## Live data (today)

+ Events from today are captured in near-real-time from the GitHub Events API and stored as 5-minute blocks in `today/raw/YYYY/MM/DD/HHMM.parquet`. Each block contains a generic event record with the full JSON payload preserved for later processing. Live blocks are committed to this dataset within minutes of the events occurring.

+ ### Live event schema

+ | Column | Type | Description |
+ |---|---|---|
+ | `event_id` | string | Unique GitHub event ID |
+ | `event_type` | string | Event type (PushEvent, IssuesEvent, etc.) |
+ | `created_at` | timestamp | When the event occurred |
+ | `actor_id` | int64 | User ID |
+ | `actor_login` | string | Username |
+ | `repo_id` | int64 | Repository ID |
+ | `repo_name` | string | Full repository name (owner/repo) |
+ | `org_id` | int64 | Organization ID (0 if personal) |
+ | `org_login` | string | Organization login |
+ | `action` | string | Event action (opened, closed, started, etc.) |
+ | `number` | int32 | Issue/PR number |
+ | `payload_json` | string | Full event payload as JSON |

+ ```python
+ # Query today's live events with DuckDB
+ import duckdb
+ duckdb.sql("""
+     SELECT event_type, COUNT(*) AS n
+     FROM read_parquet('hf://datasets/open-index/open-github/today/raw/**/*.parquet')
+     GROUP BY event_type ORDER BY n DESC
+ """).show()
+ ```
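Since live rows keep the raw payload as a JSON string in `payload_json`, fields beyond the typed columns can be recovered with the standard library. The sample row below is illustrative, not real data.

```python
import json

# Illustrative row shaped like the live schema above (hypothetical values)
row = {
    "event_type": "IssuesEvent",
    "repo_name": "octocat/hello-world",
    "payload_json": '{"action": "opened", "issue": {"number": 42, "title": "Found a bug"}}',
}

payload = json.loads(row["payload_json"])
print(payload["issue"]["number"], payload["action"])
# 42 opened
```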
 ## Events per year

 ```
+ 2011 ██████████████████████████████ 149.5K
 ```

 | Year | Days | Events | Avg/Day | Raw Input | Parquet Output | Download | Process | Upload |
 |------|-----:|-------:|--------:|----------:|---------------:|---------:|--------:|-------:|
+ | 2011 | 4 | 149,533 | 37,383 | 27.5 MB | 15.0 MB | 46s | 51s | 1m27s |

 ### Pushes per year

 ```
+ 2011 ██████████████████████████████ 80.4K
 ```

 ### Issues per year

 ```
+ 2011 ██████████████████████████████ 6.0K
 ```

 ### Pull requests per year

 ```
+ 2011 ██████████████████████████████ 3.5K
 ```

 ### Stars per year

 ```
+ 2011 ██████████████████████████████ 16.9K
 ```

 # Load all pull requests into memory
 ds = load_dataset("open-index/open-github", "pull_requests")
+
+ # Stream today's live events
+ ds = load_dataset("open-index/open-github", "live", streaming=True)
+ for row in ds["train"]:
+     print(row["event_type"], row["repo_name"], row["created_at"])
 ```

 ### DuckDB
 

 | Table | GitHub Event | Events | % | Description |
 |-------|-------------|-------:|---:|-------------|
+ | `pushes` | PushEvent | 80,381 | 53.8% | Git pushes with commits |
+ | `issues` | IssuesEvent | 5,991 | 4.0% | Issue lifecycle events |
+ | `pull_requests` | PullRequestEvent | 3,477 | 2.3% | PR lifecycle events |
+ | `stars` | WatchEvent | 16,898 | 11.3% | Repository stars |
+ | `forks` | ForkEvent | 4,090 | 2.7% | Repository forks |
+ | `creates` | CreateEvent | 28,340 | 19.0% | Branch/tag/repo creation |
+ | `deletes` | DeleteEvent | 2,425 | 1.6% | Branch/tag deletion |
+ | `commit_comments` | CommitCommentEvent | 1,938 | 1.3% | Comments on commits |
+ | `wiki_pages` | GollumEvent | 4,710 | 3.1% | Wiki page edits |
+ | `members` | MemberEvent | 1,151 | 0.8% | Collaborator additions |
+ | `public_events` | PublicEvent | 132 | 0.1% | Repo made public |

 ## How it's built

+ The pipeline has two modes that work together:
+
+ **Archive mode** processes historical GH Archive hourly dumps in a single pass per file: download the `.json.gz`, decompress and parse each JSON line, route by event type to one of 16 handlers, flatten nested JSON into typed columns, write to Parquet with Zstd compression, and publish daily to HuggingFace.
+
+ **Live mode** captures events directly from the GitHub Events API in near-real-time. Multiple API tokens poll concurrently with adaptive pagination (up to 300 events per cycle). Events are deduplicated by ID, bucketed into 5-minute blocks by their `created_at` timestamp, and written as Parquet files. Each block is pushed to HuggingFace immediately after writing. On each hour boundary, the corresponding GH Archive file is downloaded and merged into the typed daily tables for complete coverage.

 All scalar fields are fully flattened into typed columns. Variable-length arrays (commits, labels, assets, topics, assignees) are stored as native Parquet LIST columns — no JSON strings. All `*_at` timestamp fields use the Parquet TIMESTAMP type (UTC microsecond precision), so DuckDB, pandas, Spark, and the HuggingFace viewer all read them as native datetimes.
 

 ## Known limitations

+ - **Full coverage starts 2015-01-01.** Events from 2011-02-12 to 2014-12-31 are included but parsed from the deprecated Timeline API format, which has less detail for some event types.
 - **Bot activity.** A significant fraction of events (especially pushes and issues) are generated by bots such as Dependabot, Renovate, and CI systems. No bot filtering is applied.
 - **Event lag.** GH Archive captures events with a small delay (roughly minutes). Events during GitHub outages may be missing.
+ - **Pre-2015 limitations.** IssuesEvent and IssueCommentEvent from 2012-2014 contain only integer IDs (no title, body, or state) because the old API did not include full objects in event payloads.

 ## Personal information
data/commit_comments/2011/02/15.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1b65152134467f7e6e46388ab6a18f513b6036b588ac1af3189f39199c14e068
+ size 29826

data/creates/2011/02/15.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d5054e84971d660c40fb517e4e5e04a139baaa6a47bfc81dfb7bf24b4d56aa68
+ size 144847

data/deletes/2011/02/15.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b09af1092f50a51fc00ddeeaadd41a978cd93c407d306b23c717f6aa7aec0666
+ size 20561

data/forks/2011/02/15.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:895c5f8ddacb2087f234455756577ecc682ce1b00851f7497514be4472d73e1d
+ size 49989

data/issues/2011/02/15.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e45c6c0b03ef3dc9a8f4df0569f66a9560e57b42be838be165e6d2f1a7d0c7ef
+ size 67530

data/members/2011/02/15.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:115e91c520baae87d98276395fbaf437479af0974434e5dcfcf3072112f9cbd7
+ size 15316

data/public_events/2011/02/15.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:65dda3d945db39b2a7c9a0e6cd35100fd4e3a30143240b690d273baff199a71f
+ size 4270

data/pull_requests/2011/02/15.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e8bda2597a7af361ba7f8963a04d0f346c6b83024f220e96da2af7dcecf90315
+ size 67216

data/pushes/2011/02/15.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c0021704e9f5eb379a3873e6bb8dd8ac39689f2e3d6cddd66caa40de52bc8ced
+ size 3713701

data/stars/2011/02/15.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4c25f17ed30c8af1a062519430e0ae05f403562e614863637b0958eb7d530d5d
+ size 148133

data/wiki_pages/2011/02/15.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1d4e3fa5e76d9c70e7de58f1edd9e064dbf47967bcfdb5d2dea9be848b932d2a
+ size 90832
stats.csv CHANGED
@@ -1,4 +1,5 @@
 date,total_events,parse_errors,pushes,issues,issue_comments,pull_requests,pr_reviews,pr_review_comments,stars,forks,creates,deletes,releases,commit_comments,wiki_pages,members,public_events,discussions,bytes_read,dur_seconds,parquet_bytes,dur_download_s,dur_process_s,dur_commit_s
 2011-02-12,30100,0,16694,1161,0,799,0,0,3517,930,4842,466,0,410,1029,228,24,0,5912248,9.9,3288866,5.9,9.9,31.8
 2011-02-13,37082,0,18277,1332,0,667,0,0,3692,822,10106,455,0,451,1035,209,36,0,6432532,12.1,3596553,15.3,12.1,35.5
- 2011-02-14,41481,0,22797,1620,0,1002,0,0,4722,1160,6933,792,0,559,1496,371,29,0,8283118,14.1,4520871,5.8,14.1,0.0
+ 2011-02-14,41481,0,22797,1620,0,1002,0,0,4722,1160,6933,792,0,559,1496,371,29,0,8283118,14.1,4520871,5.8,14.1,20.3
+ 2011-02-15,40870,0,22613,1878,0,1009,0,0,4967,1178,6459,712,0,518,1150,343,43,0,8224882,15.1,4352221,19.4,15.1,0.0