---
license: bsd-3-clause
library_name: braindecode
pipeline_tag: feature-extraction
tags:
  - eeg
  - biosignal
  - pytorch
  - neuroscience
  - braindecode
  - convolutional
  - transformer
---

# MSVTNet

MSVTNet model from Liu, K. et al. (2024) [1].

> **Architecture-only repository.** Documents the
> `braindecode.models.MSVTNet` class. **No pretrained weights are
> distributed here.** Instantiate the model and train it on your own
> data.

## Quick start

```bash
pip install braindecode
```

```python
from braindecode.models import MSVTNet

model = MSVTNet(
    n_chans=22,                  # number of EEG channels
    sfreq=250,                   # sampling frequency in Hz
    input_window_seconds=4.0,    # window length in seconds
    n_outputs=4,                 # number of target classes
)
```

The signal-shape arguments above are illustrative defaults — adjust to
match your recording.
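
As a rough sanity check, you can pass a random tensor through the `model`
instantiated above. The `(batch, n_chans, n_times)` input layout and the
`(batch, n_outputs)` output shape follow the usual braindecode convention;
this is an illustrative sketch, not part of the official quick start.

```python
import torch

# Illustrative input: 8 windows, 22 channels, 4 s at 250 Hz -> 1000 samples.
x = torch.randn(8, 22, 1000)

model.eval()
with torch.no_grad():
    scores = model(x)

print(scores.shape)  # expected: torch.Size([8, 4]), one score per class
```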

## Documentation
- Full API reference: <https://braindecode.org/stable/generated/braindecode.models.MSVTNet.html>
- Interactive browser (live instantiation, parameter counts):
  <https://huggingface.co/spaces/braindecode/model-explorer>
- Source on GitHub: <https://github.com/braindecode/braindecode/blob/master/braindecode/models/msvtnet.py#L13>


## Architecture

![MSVTNet architecture](https://raw.githubusercontent.com/SheepTAO/MSVTNet/refs/heads/main/MSVTNet_Arch.png)


## Parameters

| Parameter | Type | Description |
|---|---|---|
| `n_filters_list` | list[int], optional | List of filter numbers for each TSConv block, by default (9, 9, 9, 9). |
| `conv1_kernels_size` | list[int], optional | List of kernel sizes for the first convolution in each TSConv block, by default (15, 31, 63, 125). |
| `conv2_kernel_size` | int, optional | Kernel size for the second convolution in TSConv blocks, by default 15. |
| `depth_multiplier` | int, optional | Depth multiplier for depthwise convolution, by default 2. |
| `pool1_size` | int, optional | Pooling size for the first pooling layer in TSConv blocks, by default 8. |
| `pool2_size` | int, optional | Pooling size for the second pooling layer in TSConv blocks, by default 7. |
| `drop_prob` | float, optional | Dropout probability for convolutional layers, by default 0.3. |
| `num_heads` | int, optional | Number of attention heads in the transformer encoder, by default 8. |
| `ffn_expansion_factor` | float, optional | Ratio to compute feedforward dimension in the transformer, by default 1. |
| `att_drop_prob` | float, optional | Dropout probability for the transformer, by default 0.5. |
| `num_layers` | int, optional | Number of transformer encoder layers, by default 2. |
| `activation` | Type[nn.Module], optional | Activation function class to use, by default nn.ELU. |
| `return_features` | bool, optional | Whether to return predictions from branch classifiers, by default False. |

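For illustration only, a hedged sketch of overriding a few of the keywords
listed above at construction time (the values are arbitrary examples, not
recommended settings):

```python
from torch import nn

from braindecode.models import MSVTNet

model = MSVTNet(
    n_chans=22,
    sfreq=250,
    input_window_seconds=4.0,
    n_outputs=4,
    n_filters_list=(8, 8, 8, 8),  # filters per TSConv branch
    num_heads=4,                  # attention heads in the transformer encoder
    num_layers=3,                 # transformer encoder layers
    drop_prob=0.25,               # dropout in the convolutional blocks
    activation=nn.ELU,            # activation class, not an instance
)
```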

## References

1. Liu, K., et al. (2024). MSVTNet: Multi-Scale Vision Transformer Neural Network for EEG-Based Motor Imagery Decoding. IEEE Journal of Biomedical and Health Informatics.
2. Liu, K., et al. (2024). MSVTNet: Multi-Scale Vision Transformer Neural Network for EEG-Based Motor Imagery Decoding. Source Code: https://github.com/SheepTAO/MSVTNet


## Citation

Cite the original architecture paper (see *References* above) and braindecode:

```bibtex
@article{aristimunha2025braindecode,
  title   = {Braindecode: a deep learning library for raw electrophysiological data},
  author  = {Aristimunha, Bruno and others},
  journal = {Zenodo},
  year    = {2025},
  doi     = {10.5281/zenodo.17699192},
}
```

## License

BSD-3-Clause for the model code (matching braindecode).
If you fine-tune from a pretrained checkpoint, the resulting weights
inherit the license of that checkpoint and its training corpus.