---
license: bsd-3-clause
library_name: braindecode
pipeline_tag: feature-extraction
tags:
- eeg
- biosignal
- pytorch
- neuroscience
- braindecode
- convolutional
---

# IFNet

IFNetV2 from Wang et al. (2023).

> **Architecture-only repository.** This repo documents the
> `braindecode.models.IFNet` class. **No pretrained weights are
> distributed here**: instantiate the model and train it on your own
> data, or fine-tune from a published foundation-model checkpoint
> separately.

## Quick start

```bash
pip install braindecode
```

```python
from braindecode.models import IFNet

model = IFNet(
    n_chans=22,
    sfreq=250,
    input_window_seconds=4.0,
    n_outputs=4,
)
```

The signal-shape arguments above are example defaults; adjust them
to match your recording.
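
As a quick sanity check, the sketch below runs a random batch through the
model. The input layout and output shape are assumptions based on the
constructor arguments above (a `(batch, n_chans, n_times)` tensor, with
`n_times = sfreq * input_window_seconds` here):

```python
import torch

from braindecode.models import IFNet

# Same example defaults as the quick start: 22 channels, 250 Hz, 4 s windows.
model = IFNet(n_chans=22, sfreq=250, input_window_seconds=4.0, n_outputs=4)

# One random batch of shape (batch, n_chans, n_times); 250 Hz * 4 s = 1000.
x = torch.randn(8, 22, 1000)
with torch.no_grad():
    scores = model(x)
print(scores.shape)  # expected: torch.Size([8, 4]), one score per class
```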

## Documentation

- Full API reference (parameters, references, architecture figure):
  <https://braindecode.org/stable/generated/braindecode.models.IFNet.html>
- Interactive browser with live instantiation:
  <https://huggingface.co/spaces/braindecode/model-explorer>
- Source on GitHub: <https://github.com/braindecode/braindecode/blob/master/braindecode/models/ifnet.py#L31>

## Architecture description

The block below is the rendered class docstring (parameters,
references, architecture figure where available).

IFNetV2 from Wang et al. (2023) [ifnet]_.

**Convolution** · **Filterbank**

.. figure:: https://raw.githubusercontent.com/Jiaheng-Wang/IFNet/main/IFNet.png
   :align: center
   :alt: IFNetV2 Architecture

   Overview of the Interactive Frequency Convolutional Neural Network architecture.

IFNetV2 is designed to effectively capture spectro-spatial-temporal
features for motor imagery decoding from EEG data. The model consists of
three stages: Spectro-Spatial Feature Representation, Cross-Frequency
Interactions, and Classification.

- **Spectro-Spatial Feature Representation**: The raw EEG signals are
  filtered into two characteristic frequency bands: low (4-16 Hz) and
  high (16-40 Hz), covering the most relevant motor imagery bands.
  Spectro-spatial features are then extracted through 1D point-wise
  spatial convolution followed by temporal convolution.

- **Cross-Frequency Interactions**: The extracted spectro-spatial
  features from each frequency band are combined through an element-wise
  summation operation, which enhances feature representation while
  preserving distinct characteristics.

- **Classification**: The aggregated spectro-spatial features are further
  reduced through temporal average pooling and passed through a fully
  connected layer followed by a softmax operation to generate output
  probabilities for each class (see the sketch after this list).
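
The last two stages reduce to a few tensor operations. The sketch below is
our exposition, not the library code; the tensor names, the shapes (which
mirror the documented defaults), and the ``nn.Linear`` head are assumptions:

.. code:: python

    import torch

    # Per-band spectro-spatial features, shape (batch, out_planes, n_times);
    # sizes are illustrative only.
    low_band = torch.randn(8, 64, 1000)   # 4-16 Hz branch
    high_band = torch.randn(8, 64, 1000)  # 16-40 Hz branch

    # Cross-frequency interaction: element-wise summation of band features.
    fused = low_band + high_band

    # Classification: temporal average pooling, then a linear head.
    pooled = fused.mean(dim=-1)    # (batch, out_planes)
    head = torch.nn.Linear(64, 4)
    logits = head(pooled)          # (batch, n_classes)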

Notes
-----
This implementation is not guaranteed to be correct: it has not been
checked by the original authors and was reimplemented only from the paper
description and the released Torch source code [ifnetv2code]_. Version 2
is present only in the repository, and its main difference is one pooling
layer, described in Table VII of the paper:
https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10070810


Parameters
----------
bands : list[tuple[int, int]] or int or None, default=[(4, 16), (16, 40)]
    Frequency bands for filtering.
out_planes : int, default=64
    Number of output feature dimensions.
kernel_sizes : tuple of int, default=(63, 31)
    Kernel sizes for the temporal convolutions.
patch_size : int, default=125
    Size of the patches for temporal segmentation.
drop_prob : float, default=0.5
    Dropout probability.
activation : nn.Module, default=nn.GELU
    Activation function after the InterFrequency layer.
verbose : bool, default=False
    Controls the verbosity of the filter bank layer.
filter_parameters : dict, default={}
    Additional parameters for the filter bank layer.
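
For instance, a hypothetical instantiation that spells out the documented
defaults (values mirror the table above and are illustrative, not tuned
recommendations):

.. code:: python

    from braindecode.models import IFNet

    model = IFNet(
        n_chans=22,    # example montage size
        n_outputs=4,   # example number of classes
        n_times=1000,  # samples per window
        bands=[(4, 16), (16, 40)],
        out_planes=64,
        kernel_sizes=(63, 31),
        patch_size=125,
        drop_prob=0.5,
    )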

References
----------
.. [ifnet] Wang, J., Yao, L., & Wang, Y. (2023). IFNet: An interactive
   frequency convolutional neural network for enhancing motor imagery
   decoding from EEG. IEEE Transactions on Neural Systems and
   Rehabilitation Engineering, 31, 1900-1911.
.. [ifnetv2code] Wang, J., Yao, L., & Wang, Y. (2023). IFNet: An interactive
   frequency convolutional neural network for enhancing motor imagery
   decoding from EEG.
   https://github.com/Jiaheng-Wang/IFNet

.. rubric:: Hugging Face Hub integration

When the optional ``huggingface_hub`` package is installed, all models
automatically gain the ability to be pushed to and loaded from the
Hugging Face Hub. Install with::

    pip install braindecode[hub]

**Pushing a model to the Hub:**

.. code:: python

    from braindecode.models import IFNet

    # Train your model
    model = IFNet(n_chans=22, n_outputs=4, n_times=1000)
    # ... training code ...

    # Push to the Hub
    model.push_to_hub(
        repo_id="username/my-ifnet-model",
        commit_message="Initial model upload",
    )

**Loading a model from the Hub:**

.. code:: python

    from braindecode.models import IFNet

    # Load pretrained model
    model = IFNet.from_pretrained("username/my-ifnet-model")

    # Load with a different number of outputs (head is rebuilt automatically)
    model = IFNet.from_pretrained("username/my-ifnet-model", n_outputs=4)

**Extracting features and replacing the head:**

.. code:: python

    import torch

    x = torch.randn(1, model.n_chans, model.n_times)
    # Extract encoder features (consistent dict across all models)
    out = model(x, return_features=True)
    features = out["features"]

    # Replace the classification head
    model.reset_head(n_outputs=10)
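
A common follow-up is to fine-tune only the freshly reset head. The
freezing pattern below is a generic PyTorch sketch (our assumption, not a
dedicated braindecode API); it relies on ``reset_head`` creating new,
trainable parameters:

.. code:: python

    import torch

    # Freeze everything, then reset the head; the new head's parameters
    # are created with requires_grad=True, so only they get optimized.
    for p in model.parameters():
        p.requires_grad = False
    model.reset_head(n_outputs=10)
    head_params = [p for p in model.parameters() if p.requires_grad]
    optimizer = torch.optim.Adam(head_params, lr=1e-3)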

**Saving and restoring full configuration:**

.. code:: python

    import json

    config = model.get_config()  # all __init__ params
    with open("config.json", "w") as f:
        json.dump(config, f)

    model2 = IFNet.from_config(config)  # reconstruct (no weights)

All model parameters (both EEG-specific and model-specific, such as
dropout rates, activation functions, and number of filters) are
automatically saved to the Hub and restored when loading.

See :ref:`load-pretrained-models` for a complete tutorial.

## Citation

Please cite both the original paper for this architecture (see the
*References* section above) and braindecode:

```bibtex
@article{aristimunha2025braindecode,
  title   = {Braindecode: a deep learning library for raw electrophysiological data},
  author  = {Aristimunha, Bruno and others},
  journal = {Zenodo},
  year    = {2025},
  doi     = {10.5281/zenodo.17699192},
}
```

## License

BSD-3-Clause for the model code (matching braindecode). If you fine-tune
from a pretrained checkpoint, the resulting weights inherit the license
of that checkpoint and its training corpus.