---
license: bsd-3-clause
library_name: braindecode
pipeline_tag: feature-extraction
tags:
  - eeg
  - biosignal
  - pytorch
  - neuroscience
  - braindecode
  - convolutional
---

# ShallowFBCSPNet

Shallow ConvNet model from Schirrmeister et al. (2017) [1].

> **Architecture-only repository.** This card documents the
> `braindecode.models.ShallowFBCSPNet` class. **No pretrained weights are
> distributed here.** Instantiate the model and train it on your own data.

## Quick start

```bash
pip install braindecode
```

```python
from braindecode.models import ShallowFBCSPNet

model = ShallowFBCSPNet(
    n_chans=22,
    sfreq=250,
    input_window_seconds=4.0,
    n_outputs=4,
)
```

The signal-shape arguments above are illustrative defaults; adjust them to
match your recording.
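
As a quick shape check, and one illustrative optimisation step on random
data, something like the following works (continuing from the snippet above;
the tensor sizes simply mirror the arguments used there: 22 channels, 4 s at
250 Hz = 1000 samples, 4 classes):

```python
import torch
from torch import nn, optim

# Dummy batch: 8 windows of 22-channel EEG, 4 s at 250 Hz -> 1000 samples.
x = torch.randn(8, 22, 1000)
y = torch.randint(0, 4, (8,))  # random labels for the 4 output classes

out = model(x)
print(out.shape)  # torch.Size([8, 4]) -- one score per class

# One illustrative training step. Recent braindecode versions return raw
# class scores, so CrossEntropyLoss applies; older versions appended a
# log-softmax, for which NLLLoss would be the matching criterion.
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=1e-3)

optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```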

## Documentation
- Full API reference: <https://braindecode.org/stable/generated/braindecode.models.ShallowFBCSPNet.html>
- Interactive browser (live instantiation, parameter counts):
  <https://huggingface.co/spaces/braindecode/model-explorer>
- Source on GitHub: <https://github.com/braindecode/braindecode/blob/master/braindecode/models/shallow_fbcsp.py#L24>


## Architecture

![ShallowFBCSPNet architecture](https://onlinelibrary.wiley.com/cms/asset/221ea375-6701-40d3-ab3f-e411aad62d9e/hbm23730-fig-0002-m.jpg)
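
The pipeline is deliberately shallow: a temporal convolution, a spatial
convolution across channels, batch normalisation, a squaring non-linearity,
mean pooling, and a log, followed by a final classification convolution. To
inspect the layers braindecode actually builds, a short check (continuing
from the quick-start snippet):

```python
# Print the instantiated layer stack.
print(model)

# Count trainable parameters.
n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"{n_params:,} trainable parameters")
```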


## Parameters

| Parameter | Type | Description |
|---|---|---|
| `n_filters_time` | `int` | Number of temporal filters. |
| `filter_time_length` | `int` | Length of the temporal filter. |
| `n_filters_spat` | `int` | Number of spatial filters. |
| `pool_time_length` | `int` | Length of the temporal pooling window. |
| `pool_time_stride` | `int` | Stride between temporal pooling windows. |
| `final_conv_length` | `int \| str` | Length of the final convolution layer. If set to `"auto"`, the length of the input signal must be specified. |
| `conv_nonlin` | `type[nn.Module] \| Callable` | Non-linear module class used after convolution layers. For backward compatibility, callables are also accepted and wrapped with `braindecode.modules.Expression`. |
| `pool_mode` | `str` | Method used in pooling layers: `"max"` or `"mean"`. |
| `activation_pool_nonlin` | `type[nn.Module]` | Non-linear module class used after pooling layers. |
| `split_first_layer` | `bool` | Split the first layer into temporal and spatial layers (`True`) or use a single temporal layer (`False`). No non-linearity is applied between the split layers. |
| `batch_norm` | `bool` | Whether to use batch normalisation. |
| `batch_norm_alpha` | `float` | Momentum for `BatchNorm2d`. |
| `drop_prob` | `float` | Dropout probability. |

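All of these can be overridden at construction time. The sketch below passes
them explicitly; the values are believed to mirror the library defaults at
the time of writing, but the API reference linked above is authoritative:

```python
from braindecode.models import ShallowFBCSPNet

model = ShallowFBCSPNet(
    n_chans=22,
    n_outputs=4,
    n_times=1000,              # samples per window (4 s at 250 Hz)
    n_filters_time=40,         # temporal filters
    filter_time_length=25,     # temporal kernel length, in samples
    n_filters_spat=40,         # spatial filters
    pool_time_length=75,       # pooling window, in samples
    pool_time_stride=15,       # pooling stride, in samples
    final_conv_length="auto",  # infer from n_times
    pool_mode="mean",
    split_first_layer=True,
    batch_norm=True,
    batch_norm_alpha=0.1,
    drop_prob=0.5,
)
```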

## References

1. Schirrmeister, R. T., Springenberg, J. T., Fiederer, L. D. J., Glasstetter, M., Eggensperger, K., Tangermann, M., Hutter, F., & Ball, T. (2017). Deep learning with convolutional neural networks for EEG decoding and visualization. *Human Brain Mapping*, Aug. 2017. https://doi.org/10.1002/hbm.23730


## Citation

Cite the original architecture paper (see *References* above) and braindecode:

```bibtex
@article{aristimunha2025braindecode,
  title   = {Braindecode: a deep learning library for raw electrophysiological data},
  author  = {Aristimunha, Bruno and others},
  journal = {Zenodo},
  year    = {2025},
  doi     = {10.5281/zenodo.17699192},
}
```

## License

BSD-3-Clause for the model code (matching braindecode). If you fine-tune
from a pretrained checkpoint, the resulting weights inherit the license of
that checkpoint and its training corpus.