---
license: bsd-3-clause
library_name: braindecode
pipeline_tag: feature-extraction
tags:
- eeg
- biosignal
- pytorch
- neuroscience
- braindecode
- foundation-model
- convolutional
---

# SignalJEPA_PreLocal

Pre-local downstream architecture introduced in signal-JEPA (Guetschel et al., 2024).

> **Architecture-only repository.** This repo documents the
> `braindecode.models.SignalJEPA_PreLocal` class. **No pretrained weights are
> distributed here**: instantiate the model and train it on your own
> data, or fine-tune from a published foundation-model checkpoint
> separately.

## Quick start

```bash
pip install braindecode
```

```python
from braindecode.models import SignalJEPA_PreLocal

model = SignalJEPA_PreLocal(
    n_chans=22,
    sfreq=250,
    input_window_seconds=4.0,
    n_outputs=4,
)
```

The signal-shape arguments above are example values; adjust them
to match your recording.

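As a quick sanity check before instantiating, the three signal-shape arguments determine the number of samples per window. This is plain Python (no braindecode needed), using the example values from the snippet above; it assumes the usual braindecode convention that the window length in samples is the sampling rate times the window duration.

```python
# Example values mirroring the quick-start snippet; swap in your recording's.
sfreq = 250.0               # sampling rate in Hz
input_window_seconds = 4.0  # window length in seconds
n_chans = 22                # number of EEG channels

n_times = int(sfreq * input_window_seconds)  # samples per window
print(n_times)  # 1000

# A batch fed to the model is then expected to have shape
# (batch_size, n_chans, n_times), e.g. (8, 22, 1000).
batch_shape = (8, n_chans, n_times)
print(batch_shape)
```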
## Documentation

- Full API reference (parameters, references, architecture figure):
  <https://braindecode.org/stable/generated/braindecode.models.SignalJEPA_PreLocal.html>
- Interactive browser with live instantiation:
  <https://huggingface.co/spaces/braindecode/model-explorer>
- Source on GitHub: <https://github.com/braindecode/braindecode/blob/master/braindecode/models/signal_jepa.py#L965>

## Architecture description

The block below is the rendered class docstring (parameters,
references, and architecture figure where available).

<div class='bd-doc'><main>
<p>Pre-local downstream architecture introduced in signal-JEPA (Guetschel et al., 2024) [1]_.</p>
<span style="display:inline-block;padding:2px 8px;border-radius:4px;background:#5cb85c;color:white;font-size:11px;font-weight:600;margin-right:4px;">Convolution</span>
<span style="display:inline-block;padding:2px 8px;border-radius:4px;border:1px solid #343a40;color:#343a40;font-size:11px;font-weight:600;margin-right:4px;">Channel</span>
<span style="display:inline-block;padding:2px 8px;border-radius:4px;background:#d9534f;color:white;font-size:11px;font-weight:600;margin-right:4px;">Foundation Model</span>

This architecture is one of the variants of :class:`SignalJEPA`
that can be used for classification purposes.

.. figure:: https://braindecode.org/dev/_static/model/sjepa_pre-local.jpg
   :align: center
   :alt: sJEPA Pre-Local.

.. versionadded:: 0.9

.. rubric:: Pretrained Weights

Only the feature encoder weights are reused from the shared
SSL checkpoints. This model has no channel embedding or transformer,
so ``strict=False`` is required at load time to skip the unused keys.
Either hub variant works; the ``_without-chans`` one is slightly
smaller.

.. important::
   **Pre-trained Weights Available**

   .. code:: python

      from braindecode.models import SignalJEPA_PreLocal

      model = SignalJEPA_PreLocal.from_pretrained(
          "braindecode/signal-jepa_without-chans",
          n_chans=22,
          input_window_seconds=16.0,
          n_outputs=4,
          strict=False,
      )

   To push your own trained model to the Hub:

   .. code:: python

      model.push_to_hub(
          repo_id="username/my-sjepa-model",
          commit_message="Upload trained SignalJEPA model",
      )

   Requires installing ``braindecode[hub]`` for Hub integration.

.. rubric:: Usage

.. code:: python

   from braindecode.models import SignalJEPA_PreLocal

   model = SignalJEPA_PreLocal(
       n_chans=22,
       input_window_seconds=16.0,
       sfreq=128,
       n_outputs=4,  # e.g., 4-class classification
   )

   # Forward: (batch, n_chans, n_times) -> (batch, n_outputs)
   output = model(eeg_data)

.. warning::

   Pre-trained at **128 Hz** on EEG bandpass-filtered between
   **0.5 and 40 Hz** and rescaled by a factor of :math:`10^{6}`
   (volts to microvolts). Apply the same preprocessing to your
   data to match the pre-training distribution.

Parameters
----------
n_spat_filters : int
    Number of spatial filters.

References
----------
.. [1] Guetschel, P., Moreau, T., & Tangermann, M. (2024).
   S-JEPA: towards seamless cross-dataset transfer through dynamic spatial attention.
   In 9th Graz Brain-Computer Interface Conference. https://doi.org/10.3217/978-3-99161-014-4-003

.. rubric:: Hugging Face Hub integration

When the optional ``huggingface_hub`` package is installed, all models
automatically gain the ability to be pushed to and loaded from the
Hugging Face Hub. Install with::

   pip install braindecode[hub]

**Pushing a model to the Hub:**

.. code:: python

   from braindecode.models import SignalJEPA_PreLocal

   # Train your model
   model = SignalJEPA_PreLocal(n_chans=22, n_outputs=4, n_times=1000)
   # ... training code ...

   # Push to the Hub
   model.push_to_hub(
       repo_id="username/my-signaljepa_prelocal-model",
       commit_message="Initial model upload",
   )

**Loading a model from the Hub:**

.. code:: python

   from braindecode.models import SignalJEPA_PreLocal

   # Load the pretrained model
   model = SignalJEPA_PreLocal.from_pretrained("username/my-signaljepa_prelocal-model")

   # Load with a different number of outputs (the head is rebuilt automatically)
   model = SignalJEPA_PreLocal.from_pretrained("username/my-signaljepa_prelocal-model", n_outputs=4)

**Extracting features and replacing the head:**

.. code:: python

   import torch

   x = torch.randn(1, model.n_chans, model.n_times)
   # Extract encoder features (consistent dict across all models)
   out = model(x, return_features=True)
   features = out["features"]

   # Replace the classification head
   model.reset_head(n_outputs=10)

**Saving and restoring full configuration:**

.. code:: python

   import json

   config = model.get_config()  # all __init__ params
   with open("config.json", "w") as f:
       json.dump(config, f)

   model2 = SignalJEPA_PreLocal.from_config(config)  # reconstruct (no weights)

All model parameters (both EEG-specific and model-specific, such as
dropout rates, activation functions, and number of filters) are automatically
saved to the Hub and restored when loading.

See :ref:`load-pretrained-models` for a complete tutorial.</main>
</div>
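The preprocessing warning in the docstring above (128 Hz, 0.5-40 Hz bandpass, volts rescaled to microvolts) can be sketched in a few lines. The snippet below is an illustration, not braindecode's own pipeline: the fourth-order Butterworth filter and zero-phase filtering are my choices, the `match_pretraining` helper is hypothetical, and the input is assumed to already be sampled at 128 Hz (resample first if not).

```python
import numpy as np
from scipy.signal import butter, filtfilt

def match_pretraining(raw_volts: np.ndarray, fs: float = 128.0) -> np.ndarray:
    """Bandpass to 0.5-40 Hz and convert volts -> microvolts.

    ``raw_volts`` has shape (n_chans, n_times) and is assumed to be
    sampled at ``fs`` Hz already.
    """
    # 4th-order Butterworth bandpass (an assumption; the exact filter
    # used during pretraining is not specified in this card).
    b, a = butter(4, [0.5, 40.0], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, raw_volts, axis=-1)  # zero-phase filtering
    return filtered * 1e6  # volts -> microvolts, matching pretraining scale

# Synthetic 16 s, 22-channel recording with ~50 uV amplitudes, in volts.
rng = np.random.default_rng(0)
raw = rng.normal(scale=50e-6, size=(22, 128 * 16))
data_uv = match_pretraining(raw)
print(data_uv.shape)  # (22, 2048)
```

The resulting array is in microvolts and shaped `(n_chans, n_times)`, ready to be windowed and batched for the model.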

## Citation

Please cite both the original paper for this architecture (see the
*References* section above) and braindecode:

```bibtex
@article{aristimunha2025braindecode,
  title = {Braindecode: a deep learning library for raw electrophysiological data},
  author = {Aristimunha, Bruno and others},
  journal = {Zenodo},
  year = {2025},
  doi = {10.5281/zenodo.17699192},
}
```

## License

BSD-3-Clause for the model code (matching braindecode).
Pretraining-derived weights, if you fine-tune from a checkpoint,
inherit the license of that checkpoint and its training corpus.