---
license: bsd-3-clause
library_name: braindecode
pipeline_tag: feature-extraction
tags:
- eeg
- biosignal
- pytorch
- neuroscience
- braindecode
- convolutional
---
# FBLightConvNet
Filter-bank variant of LightConvNet from Ma, X., et al. (2023) [lightconvnet].
> **Architecture-only repository.** Documents the
> `braindecode.models.FBLightConvNet` class. **No pretrained weights are
> distributed here.** Instantiate the model and train it on your own
> data.
## Quick start
```bash
pip install braindecode
```
```python
from braindecode.models import FBLightConvNet
model = FBLightConvNet(
    n_chans=22,                 # number of EEG channels
    sfreq=250,                  # sampling frequency in Hz
    input_window_seconds=4.0,   # window length in seconds
    n_outputs=4,                # number of classes
)
```
The signal-shape arguments above are illustrative values; adjust them to
match your own recordings.
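As a quick shape check, the sketch below (assuming the usual braindecode input layout of `(batch, n_chans, n_times)`) passes a random batch through the untrained model:

```python
import torch

from braindecode.models import FBLightConvNet

model = FBLightConvNet(
    n_chans=22,
    sfreq=250,
    input_window_seconds=4.0,
    n_outputs=4,
)

# Dummy batch: n_times = sfreq * input_window_seconds = 250 * 4.0 = 1000 samples.
x = torch.randn(8, 22, 1000)

with torch.no_grad():
    logits = model(x)

print(logits.shape)  # expected: torch.Size([8, 4])
```

For training, the model drops into any standard PyTorch loop or into braindecode's skorch-based `EEGClassifier`; see the documentation links below.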
## Documentation
- Full API reference: <https://braindecode.org/stable/generated/braindecode.models.FBLightConvNet.html>
- Interactive browser (live instantiation, parameter counts):
<https://huggingface.co/spaces/braindecode/model-explorer>
- Source on GitHub: <https://github.com/braindecode/braindecode/blob/master/braindecode/models/fblightconvnet.py#L18>
## Architecture
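Judging from the constructor parameters below and the linked source, the model combines a filter-bank front end (`FilterBankLayer`), a depthwise spatial convolution, a temporal reduction/reshaping stage, and multi-head lightweight-convolution attention over temporal segments, followed by a classification head. A minimal sketch for inspecting the instantiated module with plain PyTorch (the printed layer names depend on the installed braindecode version):

```python
from braindecode.models import FBLightConvNet

model = FBLightConvNet(
    n_chans=22,
    sfreq=250,
    input_window_seconds=4.0,
    n_outputs=4,
)

# Print the module tree and count trainable parameters.
print(model)
n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Trainable parameters: {n_params:,}")
```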

## Parameters
| Parameter | Type | Description |
|---|---|---|
| `n_bands` | int or None or list of tuple of int, default=8 | Number of frequency bands or a list of frequency band tuples. If a list of tuples is provided, each tuple defines the lower and upper bounds of a frequency band. |
| `n_filters_spat` | int, default=32 | Number of spatial filters in the depthwise convolutional layer. |
| `n_dim` | int, default=3 | Number of dimensions for the temporal reduction layer. |
| `stride_factor` | int, default=4 | Stride factor used for reshaping the temporal dimension. |
| `activation` | nn.Module, default=nn.ELU | Activation function class to apply after convolutional layers. |
| `verbose` | bool, default=False | If True, enables verbose output during filter creation with MNE. |
| `filter_parameters` | dict, default={} | Additional parameters for the FilterBankLayer. |
| `heads` | int, default=8 | Number of attention heads in the multi-head attention mechanism. |
| `weight_softmax` | bool, default=True | If True, applies softmax to the attention weights. |
| `bias` | bool, default=False | If True, includes a bias term in the convolutional layers. |
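Most arguments can be left at their defaults. As one hypothetical example, the filter bank can be given explicitly as `(low, high)` tuples instead of an integer band count; the band edges below are illustrative values only, not recommended settings:

```python
from braindecode.models import FBLightConvNet

# Hypothetical configuration with explicit frequency bands (in Hz);
# the band edges are illustrative, not tuned values.
model = FBLightConvNet(
    n_chans=22,
    sfreq=250,
    input_window_seconds=4.0,
    n_outputs=4,
    n_bands=[(4, 8), (8, 12), (12, 16), (16, 20),
             (20, 24), (24, 28), (28, 32), (32, 36)],
    heads=8,
    weight_softmax=True,
)
```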
## References
1. Ma, X., Chen, W., Pei, Z., Liu, J., Huang, B., & Chen, J. (2023). A temporal dependency learning CNN with attention mechanism for MI-EEG decoding. IEEE Transactions on Neural Systems and Rehabilitation Engineering.
2. Official implementation: <https://github.com/Ma-Xinzhi/LightConvNet>
## Citation
Cite the original architecture paper (see *References* above) and braindecode:
```bibtex
@article{aristimunha2025braindecode,
  title   = {Braindecode: a deep learning library for raw electrophysiological data},
  author  = {Aristimunha, Bruno and others},
  journal = {Zenodo},
  year    = {2025},
  doi     = {10.5281/zenodo.17699192},
}
```
## License
The model code is BSD-3-Clause, matching braindecode. If you fine-tune from a
pretrained checkpoint, the resulting weights inherit the license of that
checkpoint and its training corpus.