Commit 2438f54 (verified) by bruAristimunha · Parent(s): a7e9e29

Add architecture-only model card
---
license: bsd-3-clause
library_name: braindecode
pipeline_tag: feature-extraction
tags:
- eeg
- biosignal
- pytorch
- neuroscience
- braindecode
- convolutional
---

# EEGTCNet

EEGTCNet model from Ingolfsson et al. (2020).

> **Architecture-only repository.** This repo documents the
> `braindecode.models.EEGTCNet` class. **No pretrained weights are
> distributed here** — instantiate the model and train it on your own
> data, or fine-tune from a published foundation-model checkpoint
> separately.

## Quick start

```bash
pip install braindecode
```

```python
from braindecode.models import EEGTCNet

model = EEGTCNet(
    n_chans=22,
    sfreq=250,
    input_window_seconds=4.0,
    n_outputs=4,
)
```

The signal-shape arguments above are example defaults; adjust them
to match your recording.

## Documentation

- Full API reference (parameters, references, architecture figure):
  <https://braindecode.org/stable/generated/braindecode.models.EEGTCNet.html>
- Interactive browser with live instantiation:
  <https://huggingface.co/spaces/braindecode/model-explorer>
- Source on GitHub: <https://github.com/braindecode/braindecode/blob/master/braindecode/models/eegtcnet.py#L15>

## Architecture description

The block below is the rendered class docstring (parameters,
references, architecture figure where available).

<div class='bd-doc'><main>
<p>EEGTCNet model from Ingolfsson et al. (2020) [ingolfsson2020]_.</p>
<span style="display:inline-block;padding:2px 8px;border-radius:4px;background:#5cb85c;color:white;font-size:11px;font-weight:600;margin-right:4px;">Convolution</span><span style="display:inline-block;padding:2px 8px;border-radius:4px;background:#6c757d;color:white;font-size:11px;font-weight:600;margin-right:4px;">Recurrent</span>

.. figure:: https://braindecode.org/dev/_static/model/eegtcnet.jpg
   :align: center
   :alt: EEGTCNet Architecture

   Combining EEGNet and TCN blocks.

Parameters
----------
activation : nn.Module, optional
    Activation function to use. Default is `nn.ELU()`.
depth_multiplier : int, optional
    Depth multiplier for the depthwise convolution. Default is 2.
filter_1 : int, optional
    Number of temporal filters in the first convolutional layer. Default is 8.
kern_length : int, optional
    Length of the temporal kernel in the first convolutional layer. Default is 64.
dropout : float, optional
    Dropout rate. Default is 0.5.
depth : int, optional
    Number of residual blocks in the TCN. Default is 2.
kernel_size : int, optional
    Size of the temporal convolutional kernel in the TCN. Default is 4.
filters : int, optional
    Number of filters in the TCN convolutional layers. Default is 12.
max_norm_const : float
    Maximum L2-norm constraint imposed on weights of the last
    fully-connected layer. Defaults to 0.25.

References
----------
.. [ingolfsson2020] Ingolfsson, T. M., Hersche, M., Wang, X., Kobayashi, N.,
   Cavigelli, L., & Benini, L. (2020). EEG-TCNet: An accurate temporal
   convolutional network for embedded motor-imagery brain–machine interfaces.
   https://doi.org/10.48550/arXiv.2006.00622

.. rubric:: Hugging Face Hub integration

When the optional ``huggingface_hub`` package is installed, all models
automatically gain the ability to be pushed to and loaded from the
Hugging Face Hub. Install with::

    pip install braindecode[hub]

**Pushing a model to the Hub:**

.. code::

    from braindecode.models import EEGTCNet

    # Train your model
    model = EEGTCNet(n_chans=22, n_outputs=4, n_times=1000)
    # ... training code ...

    # Push to the Hub
    model.push_to_hub(
        repo_id="username/my-eegtcnet-model",
        commit_message="Initial model upload",
    )

**Loading a model from the Hub:**

.. code::

    from braindecode.models import EEGTCNet

    # Load pretrained model
    model = EEGTCNet.from_pretrained("username/my-eegtcnet-model")

    # Load with a different number of outputs (head is rebuilt automatically)
    model = EEGTCNet.from_pretrained("username/my-eegtcnet-model", n_outputs=4)

**Extracting features and replacing the head:**

.. code::

    import torch

    x = torch.randn(1, model.n_chans, model.n_times)
    # Extract encoder features (consistent dict across all models)
    out = model(x, return_features=True)
    features = out["features"]

    # Replace the classification head
    model.reset_head(n_outputs=10)

**Saving and restoring full configuration:**

.. code::

    import json

    config = model.get_config()  # all __init__ params
    with open("config.json", "w") as f:
        json.dump(config, f)

    model2 = EEGTCNet.from_config(config)  # reconstruct (no weights)

All model parameters (both EEG-specific and model-specific, such as
dropout rates, activation functions, and number of filters) are
automatically saved to the Hub and restored when loading.

See :ref:`load-pretrained-models` for a complete tutorial.</main>
</div>
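The `depth` and `kernel_size` parameters above determine the TCN stage's
temporal receptive field. As a back-of-the-envelope sketch, assuming the
standard dilated-TCN layout (two causal convolutions per residual block,
dilation doubling with each block — check the braindecode source for the
exact EEGTCNet wiring):

```python
def tcn_receptive_field(depth: int, kernel_size: int) -> int:
    """Receptive field, in samples at the TCN's input rate, of a dilated
    TCN with two convolutions per residual block and dilation 2**i in
    block i (i = 0 .. depth - 1)."""
    # Each block adds 2 * (kernel_size - 1) * 2**i samples of context;
    # summing the geometric series gives the closed form below.
    return 1 + 2 * (kernel_size - 1) * (2**depth - 1)

# With the documented defaults (depth=2, kernel_size=4):
print(tcn_receptive_field(2, 4))  # -> 19 samples
```

Note that the TCN in EEGTCNet runs on the EEGNet encoder's output, which
is temporally downsampled, so 19 samples at that stage cover a
correspondingly longer stretch of the raw EEG window.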

## Citation

Please cite both the original paper for this architecture (see the
*References* section above) and braindecode:

```bibtex
@article{aristimunha2025braindecode,
  title   = {Braindecode: a deep learning library for raw electrophysiological data},
  author  = {Aristimunha, Bruno and others},
  journal = {Zenodo},
  year    = {2025},
  doi     = {10.5281/zenodo.17699192},
}
```

## License

BSD-3-Clause for the model code (matching braindecode).
Pretraining-derived weights, if you fine-tune from a checkpoint,
inherit the license of that checkpoint and its training corpus.