bruAristimunha committed on
Commit 2fe248f · verified · Parent(s): 2d4ebb0

Add architecture-only model card

Files changed (1):
  1. README.md +220 -0
README.md ADDED
@@ -0,0 +1,220 @@
---
license: bsd-3-clause
library_name: braindecode
pipeline_tag: feature-extraction
tags:
- eeg
- biosignal
- pytorch
- neuroscience
- braindecode
- convolutional
---

# SincShallowNet

Sinc-ShallowNet from Borra et al. (2020).

> **Architecture-only repository.** This repo documents the
> `braindecode.models.SincShallowNet` class. **No pretrained weights are
> distributed here** — instantiate the model and train it on your own
> data, or fine-tune from a published foundation-model checkpoint
> separately.

## Quick start

```bash
pip install braindecode
```

```python
from braindecode.models import SincShallowNet

model = SincShallowNet(
    n_chans=22,
    sfreq=250,
    input_window_seconds=4.0,
    n_outputs=4,
)
```

The signal-shape arguments above are example defaults — adjust them
to match your recording.
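As a quick sanity check on the resulting input shape (using the example values from the snippet above; the 22-channel, 250 Hz recording is illustrative, not a requirement):

```python
# The model consumes crops of n_times = sfreq * input_window_seconds
# samples per channel.
sfreq = 250
input_window_seconds = 4.0
n_chans = 22

n_times = int(sfreq * input_window_seconds)
print((n_chans, n_times))  # (22, 1000)
```

Batches passed to the model would then have shape `(batch_size, 22, 1000)`.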

## Documentation

- Full API reference (parameters, references, architecture figure):
  <https://braindecode.org/stable/generated/braindecode.models.SincShallowNet.html>
- Interactive browser with live instantiation:
  <https://huggingface.co/spaces/braindecode/model-explorer>
- Source on GitHub: <https://github.com/braindecode/braindecode/blob/master/braindecode/models/sinc_shallow.py#L11>

## Architecture description

The block below is the rendered class docstring (parameters,
references, architecture figure where available).

<div class='bd-doc'><main>
<p>Sinc-ShallowNet from Borra et al. (2020) [borra2020]_.</p>
<span style="display:inline-block;padding:2px 8px;border-radius:4px;background:#5cb85c;color:white;font-size:11px;font-weight:600;margin-right:4px;">Convolution</span><span style="display:inline-block;padding:2px 8px;border-radius:4px;background:#E69F00;color:white;font-size:11px;font-weight:600;margin-right:4px;">Interpretability</span>

.. figure:: https://ars.els-cdn.com/content/image/1-s2.0-S0893608020302021-gr2_lrg.jpg
   :align: center
   :alt: SincShallowNet Architecture

The Sinc-ShallowNet architecture has these fundamental blocks:

1. **Block 1: Spectral and Spatial Feature Extraction**

   - *Temporal Sinc-Convolutional Layer*: Uses parametrized sinc functions to learn band-pass filters,
     significantly reducing the number of trainable parameters by only
     learning the lower and upper cutoff frequencies for each filter.
   - *Spatial Depthwise Convolutional Layer*: Applies depthwise convolutions to learn spatial filters for
     each temporal feature map independently, further reducing
     parameters and enhancing interpretability.
   - *Batch Normalization*

2. **Block 2: Temporal Aggregation**

   - *Activation Function*: ELU
   - *Average Pooling Layer*: Aggregation by averaging over the spatial dimension
   - *Dropout Layer*
   - *Flatten Layer*

3. **Block 3: Classification**

   - *Fully Connected Layer*: Maps the feature vector to ``n_outputs``.

**Implementation Notes:**

- The sinc-convolutional layer initializes cutoff frequencies uniformly
  within the desired frequency range and updates them during training while
  ensuring the lower cutoff is less than the upper cutoff.

Parameters
----------
num_time_filters : int
    Number of temporal filters in the SincFilter layer.
time_filter_len : int
    Size of the temporal filters.
depth_multiplier : int
    Depth multiplier for spatial filtering.
activation : nn.Module, optional
    Activation function to use. Default is ``nn.ELU()``.
drop_prob : float, optional
    Dropout probability. Default is 0.5.
first_freq : float, optional
    The starting frequency for the first Sinc filter. Default is 5.0.
min_freq : float, optional
    Minimum frequency allowed for the low frequencies of the filters. Default is 1.0.
freq_stride : float, optional
    Frequency stride for the Sinc filters. Controls the spacing between the filter
    frequencies. Default is 1.0.
padding : str, optional
    Padding mode for convolution, either ``'same'`` or ``'valid'``. Default is ``'same'``.
bandwidth : float, optional
    Initial bandwidth for each Sinc filter. Default is 4.0.
pool_size : int, optional
    Size of the pooling window for the average pooling layer. Default is 55.
pool_stride : int, optional
    Stride of the pooling operation. Default is 12.

Notes
-----
This implementation is based on the implementation from [sincshallowcode]_.

References
----------
.. [borra2020] Borra, D., Fantozzi, S., & Magosso, E. (2020). Interpretable
   and lightweight convolutional neural network for EEG decoding: Application
   to movement execution and imagination. Neural Networks, 129, 55-74.
.. [sincshallowcode] Sinc-ShallowNet re-implementation source code:
   https://github.com/marcellosicbaldi/SincNet-Tensorflow

.. rubric:: Hugging Face Hub integration

When the optional ``huggingface_hub`` package is installed, all models
automatically gain the ability to be pushed to and loaded from the
Hugging Face Hub. Install with::

    pip install braindecode[hub]

**Pushing a model to the Hub:**

.. code:: python

    from braindecode.models import SincShallowNet

    # Train your model
    model = SincShallowNet(n_chans=22, n_outputs=4, n_times=1000)
    # ... training code ...

    # Push to the Hub
    model.push_to_hub(
        repo_id="username/my-sincshallownet-model",
        commit_message="Initial model upload",
    )

**Loading a model from the Hub:**

.. code:: python

    from braindecode.models import SincShallowNet

    # Load pretrained model
    model = SincShallowNet.from_pretrained("username/my-sincshallownet-model")

    # Load with a different number of outputs (head is rebuilt automatically)
    model = SincShallowNet.from_pretrained("username/my-sincshallownet-model", n_outputs=4)

**Extracting features and replacing the head:**

.. code:: python

    import torch

    x = torch.randn(1, model.n_chans, model.n_times)
    # Extract encoder features (consistent dict across all models)
    out = model(x, return_features=True)
    features = out["features"]

    # Replace the classification head
    model.reset_head(n_outputs=10)

**Saving and restoring full configuration:**

.. code:: python

    import json

    config = model.get_config()  # all __init__ params
    with open("config.json", "w") as f:
        json.dump(config, f)

    model2 = SincShallowNet.from_config(config)  # reconstruct (no weights)

All model parameters (both EEG-specific and model-specific such as
dropout rates, activation functions, number of filters) are automatically
saved to the Hub and restored when loading.

See :ref:`load-pretrained-models` for a complete tutorial.</main>
</div>
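To make the sinc-convolutional idea from Block 1 concrete, here is a minimal NumPy sketch of a windowed-sinc band-pass kernel where only the two cutoff frequencies are free parameters. This is an illustrative construction in the spirit of the layer, not braindecode's internal code; `sinc_bandpass_kernel` is a hypothetical helper, and amplitude normalization is omitted since the downstream batch normalization absorbs scale.

```python
import numpy as np

def sinc_bandpass_kernel(low_hz, high_hz, sfreq, filter_len=65):
    """Band-pass FIR kernel built as the difference of two windowed
    sinc low-pass filters. A sinc-convolutional layer learns only
    low_hz and high_hz instead of all filter_len tap weights.
    (Hypothetical sketch, not braindecode's implementation.)"""
    # Time axis in seconds, centered so the kernel is symmetric.
    t = (np.arange(filter_len) - (filter_len - 1) / 2) / sfreq
    # np.sinc is the normalized sinc: sin(pi*x) / (pi*x).
    low_pass_high = 2 * high_hz * np.sinc(2 * high_hz * t)
    low_pass_low = 2 * low_hz * np.sinc(2 * low_hz * t)
    # Subtracting the two low-pass kernels leaves the [low_hz, high_hz]
    # band; a Hamming window tames the truncation ripple.
    return (low_pass_high - low_pass_low) * np.hamming(filter_len)

# An 8-12 Hz (mu-band) filter at a 250 Hz sampling rate.
kernel = sinc_bandpass_kernel(low_hz=8.0, high_hz=12.0, sfreq=250)
```

Convolving each channel with a bank of such kernels produces the band-limited temporal feature maps that the depthwise spatial convolution then combines across channels.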

## Citation

Please cite both the original paper for this architecture (see the
*References* section above) and braindecode:

```bibtex
@article{aristimunha2025braindecode,
  title = {Braindecode: a deep learning library for raw electrophysiological data},
  author = {Aristimunha, Bruno and others},
  journal = {Zenodo},
  year = {2025},
  doi = {10.5281/zenodo.17699192},
}
```

## License

BSD-3-Clause for the model code (matching braindecode).
If you fine-tune from a pretrained checkpoint, the resulting weights
inherit the license of that checkpoint and its training corpus.