bruAristimunha committed (verified) · Commit ea93c17 · Parent: 0efa599

Replace with clean markdown card

Files changed (1): README.md (+26 −127)
README.md CHANGED
@@ -13,13 +13,12 @@ tags:

 # FBCNet

- FBCNet from Mane, R et al (2021) .

- > **Architecture-only repository.** This repo documents the
 > `braindecode.models.FBCNet` class. **No pretrained weights are
- > distributed here** instantiate the model and train it on your own
- > data, or fine-tune from a published foundation-model checkpoint
- > separately.

 ## Quick start

@@ -38,145 +37,45 @@ model = FBCNet(
 )
 ```

- The signal-shape arguments above are example defaults — adjust them
- to match your recording.

 ## Documentation
-
- - Full API reference (parameters, references, architecture figure):
-   <https://braindecode.org/stable/generated/braindecode.models.FBCNet.html>
- - Interactive browser with live instantiation:
   <https://huggingface.co/spaces/braindecode/model-explorer>
 - Source on GitHub: <https://github.com/braindecode/braindecode/blob/master/braindecode/models/fbcnet.py#L31>

- ## Architecture description
-
- The block below is the rendered class docstring (parameters,
- references, architecture figure where available).
-
- <div class='bd-doc'><main>
- <p>FBCNet from Mane, R et al (2021) [fbcnet2021]_.</p>
- <span style="display:inline-block;padding:2px 8px;border-radius:4px;background:#5cb85c;color:white;font-size:11px;font-weight:600;margin-right:4px;">Convolution</span><span style="display:inline-block;padding:2px 8px;border-radius:4px;background:#0072B2;color:white;font-size:11px;font-weight:600;margin-right:4px;">Filterbank</span>
-
- .. figure:: https://raw.githubusercontent.com/ravikiran-mane/FBCNet/refs/heads/master/FBCNet-V2.png
-    :align: center
-    :alt: FBCNet Architecture
-
- The FBCNet model applies spatial convolution and variance calculation along
- the time axis, inspired by the Filter Bank Common Spatial Pattern (FBCSP)
- algorithm.
-
- Notes
- -----
- This implementation is not guaranteed to be correct and has not been checked
- by the original authors; it has only been reimplemented from the paper
- description and source code [fbcnetcode2021]_. There is a difference in the
- activation function; in the paper, the ELU is used as the activation function,
- but in the original code, SiLU is used. We followed the code.
-
- Parameters
- ----------
- n_bands : int or None or list[tuple[int, int]]], default=9
-     Number of frequency bands. Could
- n_filters_spat : int, default=32
-     Number of spatial filters for the first convolution.
- n_dim: int, default=3
-     Number of dimensions for the temporal reductor
- temporal_layer : str, default='LogVarLayer'
-     Type of temporal aggregator layer. Options: 'VarLayer', 'StdLayer',
-     'LogVarLayer', 'MeanLayer', 'MaxLayer'.
- stride_factor : int, default=4
-     Stride factor for reshaping.
- activation : nn.Module, default=nn.SiLU
-     Activation function class to apply in Spatial Convolution Block.
- cnn_max_norm : float, default=2.0
-     Maximum norm for the spatial convolution layer.
- linear_max_norm : float, default=0.5
-     Maximum norm for the final linear layer.
- filter_parameters: dict, default None
-     Dictionary of parameters to use for the FilterBankLayer.
-     If None, a default Chebyshev Type II filter with transition bandwidth of
-     2 Hz and stop-band ripple of 30 dB will be used.
-
- References
- ----------
- .. [fbcnet2021] Mane, R., Chew, E., Chua, K., Ang, K. K., Robinson, N.,
-    Vinod, A. P., ... & Guan, C. (2021). FBCNet: A multi-view convolutional
-    neural network for brain-computer interface. preprint arXiv:2104.01233.
- .. [fbcnetcode2021] Link to source-code:
-    https://github.com/ravikiran-mane/FBCNet
-
- .. rubric:: Hugging Face Hub integration
-
- When the optional ``huggingface_hub`` package is installed, all models
- automatically gain the ability to be pushed to and loaded from the
- Hugging Face Hub. Install with::
-
-     pip install braindecode[hub]
-
- **Pushing a model to the Hub:**
-
- .. code::
-
-     from braindecode.models import FBCNet
-
-     # Train your model
-     model = FBCNet(n_chans=22, n_outputs=4, n_times=1000)
-     # ... training code ...
-
-     # Push to the Hub
-     model.push_to_hub(
-         repo_id="username/my-fbcnet-model",
-         commit_message="Initial model upload",
-     )
-
- **Loading a model from the Hub:**
-
- .. code::
-
-     from braindecode.models import FBCNet
-
-     # Load pretrained model
-     model = FBCNet.from_pretrained("username/my-fbcnet-model")
-
-     # Load with a different number of outputs (head is rebuilt automatically)
-     model = FBCNet.from_pretrained("username/my-fbcnet-model", n_outputs=4)
-
- **Extracting features and replacing the head:**
-
- .. code::
-
-     import torch
-
-     x = torch.randn(1, model.n_chans, model.n_times)
-     # Extract encoder features (consistent dict across all models)
-     out = model(x, return_features=True)
-     features = out["features"]
-
-     # Replace the classification head
-     model.reset_head(n_outputs=10)
-
- **Saving and restoring full configuration:**
-
- .. code::
-
-     import json
-
-     config = model.get_config()  # all __init__ params
-     with open("config.json", "w") as f:
-         json.dump(config, f)
-
-     model2 = FBCNet.from_config(config)  # reconstruct (no weights)
-
- All model parameters (both EEG-specific and model-specific such as
- dropout rates, activation functions, number of filters) are automatically
- saved to the Hub and restored when loading.
-
- See :ref:`load-pretrained-models` for a complete tutorial.</main>
- </div>

 ## Citation

- Please cite both the original paper for this architecture (see the
- *References* section above) and braindecode:

 ```bibtex
 @article{aristimunha2025braindecode,
 

 # FBCNet

+ FBCNet from Mane, R. et al. (2021) [1].

+ > **Architecture-only repository.** Documents the
 > `braindecode.models.FBCNet` class. **No pretrained weights are
+ > distributed here.** Instantiate the model and train it on your own
+ > data.

 ## Quick start

 
 )
 ```

+ The signal-shape arguments above are illustrative defaults; adjust them
+ to match your recording.
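To pick those shape arguments, note that `n_times` is simply the number of samples in one input window. A quick sketch under assumed recording settings (250 Hz sampling, 2 s windows, 32 channels; none of these values come from this card):

```python
# Hypothetical recording settings (assumptions, not from this card)
sfreq_hz = 250      # sampling frequency of the EEG recording
window_s = 2.0      # length of one input window in seconds
n_chans = 32        # number of electrodes

# Samples per window: this is the value to pass as FBCNet's n_times
n_times = int(sfreq_hz * window_s)
print(n_chans, n_times)  # 32 500
```

These values would then be passed as `n_chans` and `n_times` when instantiating the model, alongside `n_outputs` for the number of classes in your task.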
 
 ## Documentation
+ - Full API reference: <https://braindecode.org/stable/generated/braindecode.models.FBCNet.html>
+ - Interactive browser (live instantiation, parameter counts):
   <https://huggingface.co/spaces/braindecode/model-explorer>
 - Source on GitHub: <https://github.com/braindecode/braindecode/blob/master/braindecode/models/fbcnet.py#L31>


+ ## Architecture

+ ![FBCNet architecture](https://raw.githubusercontent.com/ravikiran-mane/FBCNet/refs/heads/master/FBCNet-V2.png)

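The "Filterbank" stage of the architecture splits the raw EEG into several frequency bands before spatial filtering. As an illustration of the `(low, high)` band-tuple form that `n_bands` can take, here is a hypothetical 9-band bank of 4 Hz-wide bands covering 4–40 Hz (a common FBCSP-style choice, not a value taken from this card):

```python
# Hypothetical filter bank: 9 bands of 4 Hz covering 4-40 Hz
# (an assumption mirroring common FBCSP setups, not from this card)
n_bands = 9
band_width_hz = 4
low_hz = 4
bands = [(low_hz + i * band_width_hz, low_hz + (i + 1) * band_width_hz)
         for i in range(n_bands)]
print(bands[0], bands[-1])  # (4, 8) (36, 40)
```

A list like `bands` could be passed as `n_bands` instead of an integer to control the band edges explicitly.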
+ ## Parameters

+ | Parameter | Type | Description |
+ |---|---|---|
+ | `n_bands` | int, None, or list[tuple[int, int]]; default=9 | Number of frequency bands, or an explicit list of `(low, high)` band-edge tuples. |
+ | `n_filters_spat` | int, default=32 | Number of spatial filters for the first convolution. |
+ | `n_dim` | int, default=3 | Number of dimensions for the temporal reductor. |
+ | `temporal_layer` | str, default='LogVarLayer' | Type of temporal aggregator layer. Options: 'VarLayer', 'StdLayer', 'LogVarLayer', 'MeanLayer', 'MaxLayer'. |
+ | `stride_factor` | int, default=4 | Stride factor for reshaping. |
+ | `activation` | nn.Module, default=nn.SiLU | Activation function class applied in the spatial convolution block. |
+ | `cnn_max_norm` | float, default=2.0 | Maximum norm for the spatial convolution layer. |
+ | `linear_max_norm` | float, default=0.5 | Maximum norm for the final linear layer. |
+ | `filter_parameters` | dict, default=None | Parameters for the FilterBankLayer. If None, a default Chebyshev Type II filter with a transition bandwidth of 2 Hz and stop-band ripple of 30 dB is used. |
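`temporal_layer` and `stride_factor` act together: the time axis is reshaped into `stride_factor` segments, and each segment is collapsed by the aggregator (log-variance by default). A simplified pure-Python sketch of that idea (the real layers operate on batched tensors; the `1e-6` epsilon is an assumption for numerical safety):

```python
import math

def log_var(segment):
    """Log of the variance of one temporal segment (LogVarLayer, simplified)."""
    n = len(segment)
    mean = sum(segment) / n
    var = sum((x - mean) ** 2 for x in segment) / n
    return math.log(var + 1e-6)  # epsilon keeps log finite (assumption)

def aggregate(signal, stride_factor=4):
    """Split a 1-D signal into stride_factor segments and log-var each."""
    seg_len = len(signal) // stride_factor
    return [log_var(signal[i * seg_len:(i + 1) * seg_len])
            for i in range(stride_factor)]

features = aggregate([float(t % 5) for t in range(500)])
print(len(features))  # 4
```

Swapping `log_var` for a mean or max over each segment corresponds to the 'MeanLayer' and 'MaxLayer' options listed above.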

+ ## References

+ 1. Mane, R., Chew, E., Chua, K., Ang, K. K., Robinson, N., Vinod, A. P., ... & Guan, C. (2021). FBCNet: A multi-view convolutional neural network for brain-computer interface. arXiv preprint arXiv:2104.01233.
+ 2. Source code: https://github.com/ravikiran-mane/FBCNet

 ## Citation

+ Cite the original architecture paper (see *References* above) and braindecode:

 ```bibtex
 @article{aristimunha2025braindecode,