---
license: bsd-3-clause
library_name: braindecode
pipeline_tag: feature-extraction
tags:
- eeg
- biosignal
- pytorch
- neuroscience
- braindecode
- convolutional
---

# EEGITNet

EEG-ITNet from Salami et al. (2022).

> **Architecture-only repository.** This repo documents the
> `braindecode.models.EEGITNet` class. **No pretrained weights are
> distributed here** — instantiate the model and train it on your own
> data, or fine-tune from a published foundation-model checkpoint
> separately.

## Quick start

```bash
pip install braindecode
```

```python
from braindecode.models import EEGITNet

model = EEGITNet(
    n_chans=22,
    sfreq=250,
    input_window_seconds=4.0,
    n_outputs=4,
)
```

The signal-shape arguments above are example defaults — adjust them
to match your recording.

## Documentation

- Full API reference (parameters, references, architecture figure):
  <https://braindecode.org/stable/generated/braindecode.models.EEGITNet.html>
- Interactive browser with live instantiation:
  <https://huggingface.co/spaces/braindecode/model-explorer>
- Source on GitHub: <https://github.com/braindecode/braindecode/blob/master/braindecode/models/eegitnet.py#L12>

## Architecture description

The block below is the rendered class docstring (parameters,
references, architecture figure where available).

<div class='bd-doc'><main>
<p>EEG-ITNet from Salami, et al (2022) [Salami2022]_</p>
<span style="display:inline-block;padding:2px 8px;border-radius:4px;background:#5cb85c;color:white;font-size:11px;font-weight:600;margin-right:4px;">Convolution</span><span style="display:inline-block;padding:2px 8px;border-radius:4px;background:#6c757d;color:white;font-size:11px;font-weight:600;margin-right:4px;">Recurrent</span>

.. figure:: https://braindecode.org/dev/_static/model/eegitnet.jpg
   :align: center
   :alt: EEG-ITNet Architecture

   EEG-ITNet: An Explainable Inception Temporal
   Convolutional Network for motor imagery classification from
   Salami et al. 2022.

See [Salami2022]_ for details.

Code adapted from https://github.com/abbassalami/eeg-itnet

Parameters
----------
drop_prob : float, optional
    Dropout probability applied after certain layers to prevent
    overfitting. Default is 0.4.
activation : nn.Module, default=nn.ELU
    Activation function class to apply. Should be a PyTorch activation
    module class like ``nn.ReLU`` or ``nn.ELU``. Default is ``nn.ELU``.
kernel_length : int, optional
    Kernel length for inception branches. Determines the temporal
    receptive field. Default is 16.
pool_kernel : int, optional
    Pooling kernel size for the average pooling layer. Default is 4.
tcn_in_channel : int, optional
    Number of input channels for Temporal Convolutional (TC) blocks.
    Default is 14.
tcn_kernel_size : int, optional
    Kernel size for the TC blocks. Determines the temporal receptive
    field. Default is 4.
tcn_padding : int, optional
    Padding size for the TC blocks to maintain the input dimensions.
    Default is 3.
tcn_dilatation : int, optional
    Dilation rate for the first TC block. Subsequent blocks have
    dilation rates multiplied by powers of 2. Default is 1.

Notes
-----
This implementation is not guaranteed to be correct and has not been
checked by the original authors; it was reimplemented from the paper,
following the authors' published code.

References
----------
.. [Salami2022] A. Salami, J. Andreu-Perez and H. Gillmeister, "EEG-ITNet:
   An Explainable Inception Temporal Convolutional Network for motor
   imagery classification," in IEEE Access,
   doi: 10.1109/ACCESS.2022.3161489.

.. rubric:: Hugging Face Hub integration

When the optional ``huggingface_hub`` package is installed, all models
automatically gain the ability to be pushed to and loaded from the
Hugging Face Hub. Install with::

    pip install braindecode[hub]

**Pushing a model to the Hub:**

.. code:: python

    from braindecode.models import EEGITNet

    # Train your model
    model = EEGITNet(n_chans=22, n_outputs=4, n_times=1000)
    # ... training code ...

    # Push to the Hub
    model.push_to_hub(
        repo_id="username/my-eegitnet-model",
        commit_message="Initial model upload",
    )

**Loading a model from the Hub:**

.. code:: python

    from braindecode.models import EEGITNet

    # Load pretrained model
    model = EEGITNet.from_pretrained("username/my-eegitnet-model")

    # Load with a different number of outputs (head is rebuilt automatically)
    model = EEGITNet.from_pretrained("username/my-eegitnet-model", n_outputs=4)

**Extracting features and replacing the head:**

.. code:: python

    import torch

    x = torch.randn(1, model.n_chans, model.n_times)
    # Extract encoder features (consistent dict across all models)
    out = model(x, return_features=True)
    features = out["features"]

    # Replace the classification head
    model.reset_head(n_outputs=10)

**Saving and restoring full configuration:**

.. code:: python

    import json

    config = model.get_config()  # all __init__ params
    with open("config.json", "w") as f:
        json.dump(config, f)

    model2 = EEGITNet.from_config(config)  # reconstruct (no weights)

All model parameters (both EEG-specific and model-specific, such as
dropout rates, activation functions, and number of filters) are
automatically saved to the Hub and restored when loading.

See :ref:`load-pretrained-models` for a complete tutorial.</main>
</div>

## Citation

Please cite both the original paper for this architecture (see the
*References* section above) and braindecode:

```bibtex
@article{aristimunha2025braindecode,
  title   = {Braindecode: a deep learning library for raw electrophysiological data},
  author  = {Aristimunha, Bruno and others},
  journal = {Zenodo},
  year    = {2025},
  doi     = {10.5281/zenodo.17699192},
}
```

## License

BSD-3-Clause for the model code (matching braindecode).
Pretraining-derived weights, if you fine-tune from a checkpoint,
inherit the license of that checkpoint and its training corpus.