Commit c8a5cf9 (verified) by bruAristimunha · Parent: 42bd68b

Add architecture-only model card

---
license: bsd-3-clause
library_name: braindecode
pipeline_tag: feature-extraction
tags:
- eeg
- biosignal
- pytorch
- neuroscience
- braindecode
- convolutional
---

# EEGInceptionMI

EEG Inception for Motor Imagery, as proposed in Zhang et al. (2021).

> **Architecture-only repository.** This repo documents the
> `braindecode.models.EEGInceptionMI` class. **No pretrained weights are
> distributed here** — instantiate the model and train it on your own
> data, or fine-tune from a published foundation-model checkpoint
> separately.

## Quick start

```bash
pip install braindecode
```

```python
from braindecode.models import EEGInceptionMI

model = EEGInceptionMI(
    n_chans=22,
    sfreq=250,
    input_window_seconds=4.0,
    n_outputs=4,
)
```

The signal-shape arguments above are example defaults — adjust them
to match your recording.
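
Braindecode models consume tensors of shape `(batch, n_chans, n_times)`. As a quick sanity check of the numbers used above, assuming the conventional relation `n_times = sfreq * input_window_seconds` for fixed-length windows:

```python
# Expected input geometry for the quick-start model above.
# Assumption: n_times = sfreq * input_window_seconds (fixed-length windows).
sfreq = 250            # Hz
input_window_seconds = 4.0
n_chans = 22

n_times = int(sfreq * input_window_seconds)
input_shape = (1, n_chans, n_times)  # (batch, channels, time samples)
print(input_shape)  # (1, 22, 1000)
```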

## Documentation

- Full API reference (parameters, references, architecture figure):
  <https://braindecode.org/stable/generated/braindecode.models.EEGInceptionMI.html>
- Interactive browser with live instantiation:
  <https://huggingface.co/spaces/braindecode/model-explorer>
- Source on GitHub: <https://github.com/braindecode/braindecode/blob/master/braindecode/models/eeginception_mi.py#L14>

## Architecture description

The block below is the rendered class docstring (parameters,
references, architecture figure where available).

<div class='bd-doc'><main>
<p>EEG Inception for Motor Imagery, as proposed in Zhang et al. (2021) [1]_</p>
<span style="display:inline-block;padding:2px 8px;border-radius:4px;background:#5cb85c;color:white;font-size:11px;font-weight:600;margin-right:4px;">Convolution</span>

.. figure:: https://content.cld.iop.org/journals/1741-2552/18/4/046014/revision3/jneabed81f1_hr.jpg
   :align: center
   :alt: EEGInceptionMI Architecture

The model is closely based on the original InceptionNet for computer
vision. The main goal is to extract features in parallel at different
scales. The network has two blocks made of 3 inception modules with a skip
connection.

The model is fully described in [1]_.

Notes
-----
This implementation is not guaranteed to be correct and has not been
checked by the original authors; it was reimplemented based on the
paper [1]_.

Parameters
----------
input_window_seconds : float, optional
    Size of the input, in seconds. Set to 4.5 s as in [1]_ for dataset
    BCI IV 2a.
sfreq : float, optional
    EEG sampling frequency in Hz. Defaults to 250 Hz as in [1]_ for dataset
    BCI IV 2a.
n_convs : int, optional
    Number of convolutions per inception module's wide branch. Defaults
    to 5 as in [1]_ for dataset BCI IV 2a.
n_filters : int, optional
    Number of convolutional filters for all layers of this type. Set to 48
    as in [1]_ for dataset BCI IV 2a.
kernel_unit_s : float, optional
    Size in seconds of the basic 1D convolutional kernel used in inception
    modules. Each convolutional layer in such modules has kernels of
    increasing size, odd multiples of this value (e.g. 0.1, 0.3, 0.5, 0.7,
    0.9 here for ``n_convs=5``). Defaults to 0.1 s.
activation : nn.Module, optional
    Activation function. Defaults to ReLU activation.

References
----------
.. [1] Zhang, C., Kim, Y. K., & Eskandarian, A. (2021).
   EEG-inception: an accurate and robust end-to-end neural network
   for EEG-based motor imagery classification.
   Journal of Neural Engineering, 18(4), 046014.

.. rubric:: Hugging Face Hub integration

When the optional ``huggingface_hub`` package is installed, all models
automatically gain the ability to be pushed to and loaded from the
Hugging Face Hub. Install with::

    pip install braindecode[hub]

**Pushing a model to the Hub:**

.. code:: python

    from braindecode.models import EEGInceptionMI

    # Train your model
    model = EEGInceptionMI(n_chans=22, n_outputs=4, n_times=1000)
    # ... training code ...

    # Push to the Hub
    model.push_to_hub(
        repo_id="username/my-eeginceptionmi-model",
        commit_message="Initial model upload",
    )

**Loading a model from the Hub:**

.. code:: python

    from braindecode.models import EEGInceptionMI

    # Load pretrained model
    model = EEGInceptionMI.from_pretrained("username/my-eeginceptionmi-model")

    # Load with a different number of outputs (head is rebuilt automatically)
    model = EEGInceptionMI.from_pretrained("username/my-eeginceptionmi-model", n_outputs=4)

**Extracting features and replacing the head:**

.. code:: python

    import torch

    x = torch.randn(1, model.n_chans, model.n_times)
    # Extract encoder features (consistent dict across all models)
    out = model(x, return_features=True)
    features = out["features"]

    # Replace the classification head
    model.reset_head(n_outputs=10)

**Saving and restoring full configuration:**

.. code:: python

    import json

    config = model.get_config()  # all __init__ params
    with open("config.json", "w") as f:
        json.dump(config, f)

    model2 = EEGInceptionMI.from_config(config)  # reconstruct (no weights)

All model parameters (both EEG-specific and model-specific, such as
dropout rates, activation functions, and number of filters) are
automatically saved to the Hub and restored when loading.

See :ref:`load-pretrained-models` for a complete tutorial.</main>
</div>
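
The `kernel_unit_s` parameter documented above defines a ladder of kernel durations (odd multiples of the unit). A small sketch of that ladder for the documented defaults; the conversion to samples by rounding is an assumption for illustration, not taken from the braindecode source:

```python
# Kernel durations grow as odd multiples of kernel_unit_s:
# 0.1, 0.3, 0.5, 0.7, 0.9 s for n_convs=5, per the parameter docs above.
kernel_unit_s = 0.1
n_convs = 5
sfreq = 250  # Hz (example value)

kernel_lengths_s = [round((2 * i + 1) * kernel_unit_s, 1) for i in range(n_convs)]
# Assumed conversion to samples (round to nearest integer).
kernel_lengths_samples = [int(round(k * sfreq)) for k in kernel_lengths_s]
print(kernel_lengths_s)        # [0.1, 0.3, 0.5, 0.7, 0.9]
print(kernel_lengths_samples)  # [25, 75, 125, 175, 225]
```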

## Citation

Please cite both the original paper for this architecture (see the
*References* section above) and braindecode:

```bibtex
@article{aristimunha2025braindecode,
  title   = {Braindecode: a deep learning library for raw electrophysiological data},
  author  = {Aristimunha, Bruno and others},
  journal = {Zenodo},
  year    = {2025},
  doi     = {10.5281/zenodo.17699192},
}
```

## License

BSD-3-Clause for the model code (matching braindecode).
If you fine-tune from a published checkpoint, the resulting weights
inherit the license of that checkpoint and its training corpus.