bruAristimunha committed (verified) · Commit d34d07a · 1 Parent(s): c167449

Replace with clean markdown card

Files changed (1): README.md +29 -131
README.md CHANGED
@@ -13,13 +13,12 @@ tags:
 
 # TIDNet
 
-Thinker Invariance DenseNet model from Kostas et al (2020) .
+Thinker Invariance DenseNet model from Kostas et al (2020) [TIDNet].
 
-> **Architecture-only repository.** This repo documents the
+> **Architecture-only repository.** Documents the
 > `braindecode.models.TIDNet` class. **No pretrained weights are
-> distributed here** instantiate the model and train it on your own
-> data, or fine-tune from a published foundation-model checkpoint
-> separately.
+> distributed here.** Instantiate the model and train it on your own
+> data.
 
 ## Quick start
 
@@ -38,149 +37,48 @@ model = TIDNet(
 )
 ```
 
-The signal-shape arguments above are example defaults — adjust them
-to match your recording.
+The signal-shape arguments above are illustrative defaults — adjust to
+match your recording.
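For orientation, here is the Quick start expanded into a runnable sketch; the shape values (`n_chans=22`, `n_outputs=4`, `n_times=1000`) are the example defaults from the docstring removed below, not required settings:

```python
import torch

from braindecode.models import TIDNet

# Example recording shape: 22 channels, 4 classes, 1000 samples per window.
model = TIDNet(
    n_chans=22,
    n_outputs=4,
    n_times=1000,
)

# Forward pass over a random batch of one window.
x = torch.randn(1, 22, 1000)
logits = model(x)  # shape: (1, 4)
```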
 
 ## Documentation
-
-- Full API reference (parameters, references, architecture figure):
-  <https://braindecode.org/stable/generated/braindecode.models.TIDNet.html>
-- Interactive browser with live instantiation:
+- Full API reference: <https://braindecode.org/stable/generated/braindecode.models.TIDNet.html>
+- Interactive browser (live instantiation, parameter counts):
  <https://huggingface.co/spaces/braindecode/model-explorer>
 - Source on GitHub: <https://github.com/braindecode/braindecode/blob/master/braindecode/models/tidnet.py#L13>
 
-## Architecture description
-
-The block below is the rendered class docstring (parameters,
-references, architecture figure where available).
-
-<div class='bd-doc'><main>
-<p>Thinker Invariance DenseNet model from Kostas et al (2020) [TIDNet]_.</p>
-<span style="display:inline-block;padding:2px 8px;border-radius:4px;background:#5cb85c;color:white;font-size:11px;font-weight:600;margin-right:4px;">Convolution</span>
-
-.. figure:: https://content.cld.iop.org/journals/1741-2552/17/5/056008/revision3/jneabb7a7f1_hr.jpg
-   :align: center
-   :alt: TIDNet Architecture
-
-See [TIDNet]_ for details.
-
-Parameters
-----------
-s_growth : int
-    DenseNet-style growth factor (added filters per DenseFilter)
-t_filters : int
-    Number of temporal filters.
-drop_prob : float
-    Dropout probability
-pooling : int
-    Max temporal pooling (width and stride)
-temp_layers : int
-    Number of temporal layers
-spat_layers : int
-    Number of DenseFilters
-temp_span : float
-    Percentage of n_times that defines the temporal filter length:
-    temp_len = ceil(temp_span * n_times)
-    e.g. a value of 0.05 for temp_span with 1500 n_times will yield a temporal
-    filter of length 75.
-bottleneck : int
-    Bottleneck factor within DenseFilter
-summary : int
-    Output size of AdaptiveAvgPool1D layer. If set to -1, value will be calculated
-    automatically (n_times // pooling).
-in_chans :
-    Alias for n_chans.
-n_classes :
-    Alias for n_outputs.
-input_window_samples :
-    Alias for n_times.
-activation : nn.Module, default=nn.LeakyReLU
-    Activation function class to apply. Should be a PyTorch activation
-    module class like ``nn.ReLU`` or ``nn.ELU``. Default is ``nn.LeakyReLU``.
-
-Notes
------
-Code adapted from: https://github.com/SPOClab-ca/ThinkerInvariance/
-
-References
-----------
-.. [TIDNet] Kostas, D. & Rudzicz, F.
-   Thinker invariance: enabling deep neural networks for BCI across more
-   people.
-   J. Neural Eng. 17, 056008 (2020).
-   doi: 10.1088/1741-2552/abb7a7.
-
-.. rubric:: Hugging Face Hub integration
-
-When the optional ``huggingface_hub`` package is installed, all models
-automatically gain the ability to be pushed to and loaded from the
-Hugging Face Hub. Install with::
-
-    pip install braindecode[hub]
-
-**Pushing a model to the Hub:**
-
-.. code::
-
-    from braindecode.models import TIDNet
-
-    # Train your model
-    model = TIDNet(n_chans=22, n_outputs=4, n_times=1000)
-    # ... training code ...
-
-    # Push to the Hub
-    model.push_to_hub(
-        repo_id="username/my-tidnet-model",
-        commit_message="Initial model upload",
-    )
-
-**Loading a model from the Hub:**
-
-.. code::
-
-    from braindecode.models import TIDNet
-
-    # Load pretrained model
-    model = TIDNet.from_pretrained("username/my-tidnet-model")
-
-    # Load with a different number of outputs (head is rebuilt automatically)
-    model = TIDNet.from_pretrained("username/my-tidnet-model", n_outputs=4)
-
-**Extracting features and replacing the head:**
-
-.. code::
-
-    import torch
-
-    x = torch.randn(1, model.n_chans, model.n_times)
-    # Extract encoder features (consistent dict across all models)
-    out = model(x, return_features=True)
-    features = out["features"]
-
-    # Replace the classification head
-    model.reset_head(n_outputs=10)
-
-**Saving and restoring full configuration:**
-
-.. code::
-
-    import json
-
-    config = model.get_config()  # all __init__ params
-    with open("config.json", "w") as f:
-        json.dump(config, f)
-
-    model2 = TIDNet.from_config(config)  # reconstruct (no weights)
-
-All model parameters (both EEG-specific and model-specific such as
-dropout rates, activation functions, number of filters) are automatically
-saved to the Hub and restored when loading.
-
-See :ref:`load-pretrained-models` for a complete tutorial.</main>
-</div>
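The configuration round-trip in the removed docstring implies that `from_config(get_config())` rebuilds an identical architecture; a small sanity check, assuming only the `get_config`/`from_config` methods shown above (the parameter-count comparison is an illustration, not part of the docstring):

```python
from braindecode.models import TIDNet

model = TIDNet(n_chans=22, n_outputs=4, n_times=1000)

# Round-trip every __init__ parameter; weights are NOT copied.
clone = TIDNet.from_config(model.get_config())


def n_params(m):
    return sum(p.numel() for p in m.parameters())


# Same configuration implies the same architecture, hence the
# same number of trainable parameters.
assert n_params(model) == n_params(clone)
```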
 
+## Architecture
+
+![TIDNet architecture](https://content.cld.iop.org/journals/1741-2552/17/5/056008/revision3/jneabb7a7f1_hr.jpg)
+
+## Parameters
+
+| Parameter | Type | Description |
+|---|---|---|
+| `s_growth` | int | DenseNet-style growth factor (filters added per DenseFilter). |
+| `t_filters` | int | Number of temporal filters. |
+| `drop_prob` | float | Dropout probability. |
+| `pooling` | int | Max temporal pooling (width and stride). |
+| `temp_layers` | int | Number of temporal layers. |
+| `spat_layers` | int | Number of DenseFilters. |
+| `temp_span` | float | Fraction of `n_times` that defines the temporal filter length: `temp_len = ceil(temp_span * n_times)`, e.g. `temp_span=0.05` with `n_times=1500` yields a temporal filter of length 75. |
+| `bottleneck` | int | Bottleneck factor within each DenseFilter. |
+| `summary` | int | Output size of the AdaptiveAvgPool1D layer. If set to -1, computed automatically as `n_times // pooling`. |
+| `in_chans` | — | Alias for `n_chans`. |
+| `n_classes` | — | Alias for `n_outputs`. |
+| `input_window_samples` | — | Alias for `n_times`. |
+| `activation` | nn.Module | Activation function class to apply, e.g. `nn.ReLU` or `nn.ELU`. Default: `nn.LeakyReLU`. |
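The `temp_span` and `summary` rows above both derive sizes from `n_times`; a quick arithmetic sketch reusing the table's own example (the `pooling=15` value is a hypothetical choice, for illustration only):

```python
from math import ceil

n_times = 1500    # samples per window (the table's example)
temp_span = 0.05
pooling = 15      # hypothetical pooling width, not from the table

# Temporal filter length, as defined in the temp_span row.
temp_len = ceil(temp_span * n_times)
assert temp_len == 75

# Automatic output size of AdaptiveAvgPool1D when summary=-1.
summary = n_times // pooling
assert summary == 100
```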
 
+## References
+
+1. Kostas, D. & Rudzicz, F. Thinker invariance: enabling deep neural networks for BCI across more people. J. Neural Eng. 17, 056008 (2020). doi: 10.1088/1741-2552/abb7a7.
 
 ## Citation
 
-Please cite both the original paper for this architecture (see the
-*References* section above) and braindecode:
+Cite the original architecture paper (see *References* above) and braindecode:
 
 ```bibtex
 @article{aristimunha2025braindecode,