Commit 54b5c5b (verified) by bruAristimunha · parent d81696d

Add architecture-only model card

Files changed: README.md (+209)
---
license: bsd-3-clause
library_name: braindecode
pipeline_tag: feature-extraction
tags:
- eeg
- biosignal
- pytorch
- neuroscience
- braindecode
---

# SyncNet

Synchronization Network (SyncNet) from Li et al. (2017).

> **Architecture-only repository.** This repo documents the
> `braindecode.models.SyncNet` class. **No pretrained weights are
> distributed here** — instantiate the model and train it on your own
> data, or fine-tune from a published foundation-model checkpoint
> separately.

## Quick start

```bash
pip install braindecode
```

```python
from braindecode.models import SyncNet

model = SyncNet(
    n_chans=22,
    sfreq=250,
    input_window_seconds=4.0,
    n_outputs=4,
)
```

The signal-shape arguments above are example defaults; adjust them
to match your recording.
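
If it helps to sanity-check the window size, the per-window input length in samples follows directly from `sfreq` and `input_window_seconds` (plain arithmetic for illustration, not a braindecode call; values match the example above):

```python
# n_times = sfreq * input_window_seconds gives the samples per window
# that the model will see for each channel.
sfreq = 250                  # sampling rate in Hz (example value)
input_window_seconds = 4.0   # window length in seconds (example value)
n_chans = 22                 # number of EEG channels (example value)

n_times = int(sfreq * input_window_seconds)
print(n_times)               # 1000 samples per window
print((n_chans, n_times))    # per-window input shape: (22, 1000)
```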

## Documentation

- Full API reference (parameters, references, architecture figure):
  <https://braindecode.org/stable/generated/braindecode.models.SyncNet.html>
- Interactive browser with live instantiation:
  <https://huggingface.co/spaces/braindecode/model-explorer>
- Source on GitHub: <https://github.com/braindecode/braindecode/blob/master/braindecode/models/syncnet.py#L14>

## Architecture description

The block below is the rendered class docstring (parameters,
references, architecture figure where available).

<div class='bd-doc'><main>
<p>Synchronization Network (SyncNet) from Li et al. (2017) [Li2017]_.</p>
<span style="display:inline-block;padding:2px 8px;border-radius:4px;background:#E69F00;color:white;font-size:11px;font-weight:600;margin-right:4px;">Interpretability</span>

.. figure:: https://braindecode.org/dev/_static/model/SyncNet.png
   :align: center
   :alt: SyncNet Architecture

SyncNet uses parameterized 1-dimensional convolutional filters inspired by
the Morlet wavelet to extract features from EEG signals. The filters are
generated dynamically from learnable parameters that control their
oscillation and decay characteristics.

The filter for channel ``c`` and filter ``k`` is defined as:

.. math::

   f_c^{(k)}(\tau) = amplitude_c^{(k)} \cos(\omega^{(k)} \tau + \phi_c^{(k)}) \exp(-\beta^{(k)} \tau^2)

where:

- :math:`amplitude_c^{(k)}` is the amplitude parameter (channel-specific).
- :math:`\omega^{(k)}` is the frequency parameter (shared across channels).
- :math:`\phi_c^{(k)}` is the phase shift (channel-specific).
- :math:`\beta^{(k)}` is the decay parameter (shared across channels).
- :math:`\tau` is the time index.
85
+ Parameters
86
+ ----------
87
+ num_filters : int, optional
88
+ Number of filters in the convolutional layer. Default is 1.
89
+ filter_width : int, optional
90
+ Width of the convolutional filters. Default is 40.
91
+ pool_size : int, optional
92
+ Size of the pooling window. Default is 40.
93
+ activation : nn.Module, optional
94
+ Activation function to apply after pooling. Default is ``nn.ReLU``.
95
+ ampli_init_values : tuple of float, optional
96
+ The initialization range for amplitude parameter using uniform
97
+ distribution. Default is (-0.05, 0.05).
98
+ omega_init_values : tuple of float, optional
99
+ The initialization range for omega parameters using uniform
100
+ distribution. Default is (0, 1).
101
+ beta_init_values : tuple of float, optional
102
+ The initialization range for beta (decay) parameters using uniform
103
+ distribution. Default is (0, 0.05).
104
+ phase_init_values : tuple of float, optional
105
+ The initialization mean and standard deviation for phase
106
+ parameters using normal distribution. Default is (0, 0.05).
107
+

Notes
-----
This implementation is not guaranteed to be correct: it has not been
checked by the original authors. The modifications are based on code
derived from [CodeICASSP2025]_.

References
----------
.. [Li2017] Li, Y., Dzirasa, K., Carin, L., & Carlson, D. E. (2017).
   Targeting EEG/LFP synchrony with neural nets. Advances in Neural
   Information Processing Systems, 30.
.. [CodeICASSP2025] Code from Baselines for the EEG-Music Emotion
   Recognition Grand Challenge at ICASSP 2025.
   https://github.com/SalvoCalcagno/eeg-music-challenge-icassp-2025-baselines

.. rubric:: Hugging Face Hub integration

When the optional ``huggingface_hub`` package is installed, all models
automatically gain the ability to be pushed to and loaded from the
Hugging Face Hub. Install with::

    pip install braindecode[hub]

**Pushing a model to the Hub:**

.. code:: python

    from braindecode.models import SyncNet

    # Train your model
    model = SyncNet(n_chans=22, n_outputs=4, n_times=1000)
    # ... training code ...

    # Push to the Hub
    model.push_to_hub(
        repo_id="username/my-syncnet-model",
        commit_message="Initial model upload",
    )

**Loading a model from the Hub:**

.. code:: python

    from braindecode.models import SyncNet

    # Load pretrained model
    model = SyncNet.from_pretrained("username/my-syncnet-model")

    # Load with a different number of outputs (head is rebuilt automatically)
    model = SyncNet.from_pretrained("username/my-syncnet-model", n_outputs=4)

**Extracting features and replacing the head:**

.. code:: python

    import torch

    x = torch.randn(1, model.n_chans, model.n_times)
    # Extract encoder features (consistent dict across all models)
    out = model(x, return_features=True)
    features = out["features"]

    # Replace the classification head
    model.reset_head(n_outputs=10)

**Saving and restoring full configuration:**

.. code:: python

    import json

    config = model.get_config()  # all __init__ params
    with open("config.json", "w") as f:
        json.dump(config, f)

    model2 = SyncNet.from_config(config)  # reconstruct (no weights)

All model parameters (both EEG-specific and model-specific, such as
dropout rates, activation functions, and number of filters) are
automatically saved to the Hub and restored when loading.

See :ref:`load-pretrained-models` for a complete tutorial.</main>
</div>
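
To make the filter parameterization above concrete, here is a small NumPy sketch that evaluates the docstring's formula for a single (channel, filter) pair. The parameter values are arbitrary illustrations chosen within the default initialization ranges, not values taken from the library, and the real model learns them per channel and per filter:

```python
import numpy as np

# Evaluate f_c^{(k)}(tau) = a * cos(omega * tau + phi) * exp(-beta * tau^2)
# over a centered time index, mirroring the formula in the docstring above.
filter_width = 40                                  # default filter width
tau = np.arange(filter_width) - filter_width // 2  # centered time index

amplitude = 0.03   # amplitude_c^{(k)}, channel-specific (illustrative value)
omega = 0.5        # omega^{(k)}, shared oscillation frequency (illustrative)
phi = 0.1          # phi_c^{(k)}, channel-specific phase shift (illustrative)
beta = 0.02        # beta^{(k)}, shared decay (illustrative)

f = amplitude * np.cos(omega * tau + phi) * np.exp(-beta * tau**2)

print(f.shape)                        # (40,)
print(np.abs(f).max() <= amplitude)   # True: the Gaussian envelope only attenuates
```

Since `beta > 0`, the Gaussian term is at most 1, so the filter's magnitude is bounded by the amplitude parameter; larger `beta` shrinks the effective support of the wavelet.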

## Citation

Please cite both the original paper for this architecture (see the
*References* section above) and braindecode:

```bibtex
@article{aristimunha2025braindecode,
  title   = {Braindecode: a deep learning library for raw electrophysiological data},
  author  = {Aristimunha, Bruno and others},
  journal = {Zenodo},
  year    = {2025},
  doi     = {10.5281/zenodo.17699192},
}
```

## License

BSD-3-Clause for the model code (matching braindecode).
Weights derived from pretraining, if you fine-tune from a checkpoint,
inherit the license of that checkpoint and its training corpus.