---
license: bsd-3-clause
library_name: braindecode
pipeline_tag: feature-extraction
tags:
- eeg
- biosignal
- pytorch
- neuroscience
- braindecode
- convolutional
---

# FBCNet

FBCNet from Mane, R. et al. (2021).

> **Architecture-only repository.** This repo documents the
> `braindecode.models.FBCNet` class. **No pretrained weights are
> distributed here**: instantiate the model and train it on your own
> data, or fine-tune from a published foundation-model checkpoint
> separately.

## Quick start

```bash
pip install braindecode
```

```python
from braindecode.models import FBCNet

model = FBCNet(
    n_chans=22,
    sfreq=250,
    input_window_seconds=4.0,
    n_outputs=4,
)
```

The signal-shape arguments above are example defaults; adjust them
to match your recording.
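
As a quick sanity check (a minimal sketch; 4 s at 250 Hz gives 1000 samples),
you can pass a dummy batch through the model and inspect the output shape:

```python
import torch

# Dummy batch shaped (batch, n_chans, n_times): 4 s at 250 Hz -> 1000 samples
x = torch.randn(8, 22, 1000)
y = model(x)
print(y.shape)  # expected: torch.Size([8, 4]), one score per output class
```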

## Documentation

- Full API reference (parameters, references, architecture figure):
  <https://braindecode.org/stable/generated/braindecode.models.FBCNet.html>
- Interactive browser with live instantiation:
  <https://huggingface.co/spaces/braindecode/model-explorer>
- Source on GitHub: <https://github.com/braindecode/braindecode/blob/master/braindecode/models/fbcnet.py#L31>

## Architecture description

The block below is adapted from the class docstring (parameters,
references, architecture figure).

FBCNet from Mane, R. et al. (2021) [fbcnet2021]_.

*Convolution · Filterbank*

![FBCNet Architecture](https://raw.githubusercontent.com/ravikiran-mane/FBCNet/refs/heads/master/FBCNet-V2.png)

The FBCNet model applies spatial convolution and variance calculation along
the time axis, inspired by the Filter Bank Common Spatial Pattern (FBCSP)
algorithm.
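
To make the temporal aggregation concrete, here is an illustrative sketch of
log-variance pooling, the idea behind the default `LogVarLayer`; it is not the
library's exact implementation, and the shapes assume 9 bands, 32 spatial
filters, and 1000 time samples:

```python
import torch

# Illustrative only: log-variance pooling over stride_factor windows.
feats = torch.randn(8, 9 * 32, 1000)        # (batch, bands * spatial filters, time)
windows = feats.reshape(8, 9 * 32, 4, 250)  # stride_factor = 4 windows of 250 samples
log_var = torch.log(windows.var(dim=-1).clamp(min=1e-6))
print(log_var.shape)                        # torch.Size([8, 288, 4])
```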

Notes
-----
This implementation is not guaranteed to be correct and has not been checked
by the original authors; it has only been reimplemented from the paper
description and source code [fbcnetcode2021]_. The activation function also
differs between the two: the paper uses ELU, but the original code uses SiLU.
We follow the code.
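
If you want the paper-faithful activation instead, pass ELU via the
`activation` parameter documented below (a minimal sketch; the other
arguments are example values):

```python
import torch.nn as nn
from braindecode.models import FBCNet

# Paper-faithful activation: swap the default SiLU for ELU
model = FBCNet(n_chans=22, n_outputs=4, n_times=1000, sfreq=250, activation=nn.ELU)
```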

Parameters
----------
n_bands : int or None or list[tuple[int, int]], default=9
    Number of frequency bands, or an explicit list of (low, high)
    band intervals in Hz.
n_filters_spat : int, default=32
    Number of spatial filters for the first convolution.
n_dim : int, default=3
    Number of dimensions for the temporal reductor.
temporal_layer : str, default='LogVarLayer'
    Type of temporal aggregator layer. Options: 'VarLayer', 'StdLayer',
    'LogVarLayer', 'MeanLayer', 'MaxLayer'.
stride_factor : int, default=4
    Stride factor for reshaping.
activation : nn.Module, default=nn.SiLU
    Activation function class to apply in the Spatial Convolution Block.
cnn_max_norm : float, default=2.0
    Maximum norm for the spatial convolution layer.
linear_max_norm : float, default=0.5
    Maximum norm for the final linear layer.
filter_parameters : dict, default=None
    Dictionary of parameters to use for the FilterBankLayer.
    If None, a default Chebyshev Type II filter with a transition bandwidth of
    2 Hz and a stop-band ripple of 30 dB is used.
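
For illustration, `n_bands` also accepts explicit `(low, high)` intervals and
the temporal aggregator can be swapped; the values below are arbitrary (a
hedged sketch, not a recommended configuration):

```python
from braindecode.models import FBCNet

# Hypothetical configuration: explicit (low, high) band edges in Hz
# and plain variance instead of log-variance aggregation.
model = FBCNet(
    n_chans=22,
    n_outputs=4,
    n_times=1000,
    sfreq=250,
    n_bands=[(4, 8), (8, 13), (13, 20), (20, 30)],
    temporal_layer="VarLayer",
)
```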

References
----------
.. [fbcnet2021] Mane, R., Chew, E., Chua, K., Ang, K. K., Robinson, N.,
   Vinod, A. P., ... & Guan, C. (2021). FBCNet: A multi-view convolutional
   neural network for brain-computer interface. arXiv preprint arXiv:2104.01233.
.. [fbcnetcode2021] Link to the source code:
   https://github.com/ravikiran-mane/FBCNet

### Hugging Face Hub integration

When the optional `huggingface_hub` package is installed, all models
automatically gain the ability to be pushed to and loaded from the
Hugging Face Hub. Install with:

```bash
pip install braindecode[hub]
```

**Pushing a model to the Hub:**

```python
from braindecode.models import FBCNet

# Train your model
model = FBCNet(n_chans=22, n_outputs=4, n_times=1000)
# ... training code ...

# Push to the Hub
model.push_to_hub(
    repo_id="username/my-fbcnet-model",
    commit_message="Initial model upload",
)
```

**Loading a model from the Hub:**

```python
from braindecode.models import FBCNet

# Load pretrained model
model = FBCNet.from_pretrained("username/my-fbcnet-model")

# Load with a different number of outputs (head is rebuilt automatically)
model = FBCNet.from_pretrained("username/my-fbcnet-model", n_outputs=4)
```

**Extracting features and replacing the head:**

```python
import torch

x = torch.randn(1, model.n_chans, model.n_times)
# Extract encoder features (consistent dict across all models)
out = model(x, return_features=True)
features = out["features"]

# Replace the classification head
model.reset_head(n_outputs=10)
```

**Saving and restoring full configuration:**

```python
import json

config = model.get_config()  # all __init__ params
with open("config.json", "w") as f:
    json.dump(config, f)

model2 = FBCNet.from_config(config)  # reconstruct (no weights)
```

All model parameters (both EEG-specific and model-specific such as
dropout rates, activation functions, number of filters) are automatically
saved to the Hub and restored when loading.

See the braindecode documentation on loading pretrained models for a
complete tutorial.

## Citation

Please cite both the original paper for this architecture (see the
*References* section above) and braindecode:

```bibtex
@article{aristimunha2025braindecode,
  title   = {Braindecode: a deep learning library for raw electrophysiological data},
  author  = {Aristimunha, Bruno and others},
  journal = {Zenodo},
  year    = {2025},
  doi     = {10.5281/zenodo.17699192},
}
```

## License

BSD-3-Clause for the model code (matching braindecode).
If you fine-tune from a published checkpoint, the resulting weights
inherit the license of that checkpoint and its training corpus.