Upload 86 files
This view is limited to 50 files because it contains too many changes. See raw diff
- .gitattributes +3 -0
- BioSPPy/mcp_output/README_MCP.md +56 -0
- BioSPPy/mcp_output/analysis.json +1579 -0
- BioSPPy/mcp_output/diff_report.md +63 -0
- BioSPPy/mcp_output/mcp_plugin/__init__.py +0 -0
- BioSPPy/mcp_output/mcp_plugin/adapter.py +83 -0
- BioSPPy/mcp_output/mcp_plugin/main.py +13 -0
- BioSPPy/mcp_output/mcp_plugin/mcp_service.py +105 -0
- BioSPPy/mcp_output/requirements.txt +13 -0
- BioSPPy/mcp_output/start_mcp.py +30 -0
- BioSPPy/mcp_output/workflow_summary.json +202 -0
- BioSPPy/source/AUTHORS.md +29 -0
- BioSPPy/source/CHANGELOG.md +146 -0
- BioSPPy/source/LICENSE +32 -0
- BioSPPy/source/MANIFEST.in +1 -0
- BioSPPy/source/README.md +93 -0
- BioSPPy/source/__init__.py +4 -0
- BioSPPy/source/biosppy/__init__.py +21 -0
- BioSPPy/source/biosppy/__version__.py +13 -0
- BioSPPy/source/biosppy/biometrics.py +2345 -0
- BioSPPy/source/biosppy/clustering.py +1008 -0
- BioSPPy/source/biosppy/inter_plotting/__init__.py +18 -0
- BioSPPy/source/biosppy/inter_plotting/acc.py +496 -0
- BioSPPy/source/biosppy/inter_plotting/ecg.py +163 -0
- BioSPPy/source/biosppy/metrics.py +171 -0
- BioSPPy/source/biosppy/plotting.py +1741 -0
- BioSPPy/source/biosppy/signals/__init__.py +23 -0
- BioSPPy/source/biosppy/signals/abp.py +240 -0
- BioSPPy/source/biosppy/signals/acc.py +186 -0
- BioSPPy/source/biosppy/signals/bvp.py +107 -0
- BioSPPy/source/biosppy/signals/ecg.py +2045 -0
- BioSPPy/source/biosppy/signals/eda.py +252 -0
- BioSPPy/source/biosppy/signals/eeg.py +475 -0
- BioSPPy/source/biosppy/signals/emg.py +1139 -0
- BioSPPy/source/biosppy/signals/pcg.py +282 -0
- BioSPPy/source/biosppy/signals/ppg.py +568 -0
- BioSPPy/source/biosppy/signals/resp.py +116 -0
- BioSPPy/source/biosppy/signals/tools.py +2191 -0
- BioSPPy/source/biosppy/stats.py +240 -0
- BioSPPy/source/biosppy/storage.py +1043 -0
- BioSPPy/source/biosppy/synthesizers/__init__.py +18 -0
- BioSPPy/source/biosppy/synthesizers/ecg.py +661 -0
- BioSPPy/source/biosppy/timing.py +97 -0
- BioSPPy/source/biosppy/utils.py +439 -0
- BioSPPy/source/docs/Makefile +192 -0
- BioSPPy/source/docs/__init__.py +1 -0
- BioSPPy/source/docs/biosppy.rst +53 -0
- BioSPPy/source/docs/biosppy.signals.rst +56 -0
- BioSPPy/source/docs/conf.py +316 -0
- BioSPPy/source/docs/favicon.ico +0 -0
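Each entry in the listing pairs a path with its added/removed line counts. As a minimal sketch (using only a few entries copied from the list above; the full diff has 86 files), the per-file stats can be tallied with the standard library:

```python
import re

# A few entries copied from the listing above.
listing = """\
- BioSPPy/mcp_output/README_MCP.md +56 -0
- BioSPPy/mcp_output/analysis.json +1579 -0
- BioSPPy/source/biosppy/signals/ecg.py +2045 -0
"""

# Each line has the form "- <path> +<added> -<removed>".
pattern = re.compile(r"^- (?P<path>\S+) \+(?P<added>\d+) -(?P<removed>\d+)$", re.M)

# Map path -> (added, removed), then total the additions.
stats = {m["path"]: (int(m["added"]), int(m["removed"])) for m in pattern.finditer(listing)}
total_added = sum(added for added, _ in stats.values())

print(stats["BioSPPy/mcp_output/README_MCP.md"])  # (56, 0)
print(total_added)  # 3680
```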
.gitattributes
CHANGED
@@ -33,3 +33,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+BioSPPy/source/docs/images/ECG_raw.png filter=lfs diff=lfs merge=lfs -text
+BioSPPy/source/docs/images/ECG_summary.png filter=lfs diff=lfs merge=lfs -text
+BioSPPy/source/docs/logo/logo.png filter=lfs diff=lfs merge=lfs -text
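The hunk above routes the three new PNGs through Git LFS. A small sketch, assuming nothing beyond the standard library, of how such `filter=lfs` lines can be picked out of a `.gitattributes` file (the sample lines are taken from the diff above):

```python
# Minimal sketch: find which patterns a .gitattributes file routes through Git LFS.
gitattributes = """\
*.zip filter=lfs diff=lfs merge=lfs -text
BioSPPy/source/docs/images/ECG_raw.png filter=lfs diff=lfs merge=lfs -text
BioSPPy/source/docs/logo/logo.png filter=lfs diff=lfs merge=lfs -text
"""

# The first whitespace-separated field is the path pattern;
# the rest are the attributes assigned to it.
lfs_patterns = [
    line.split()[0]
    for line in gitattributes.splitlines()
    if "filter=lfs" in line.split()[1:]
]

print(lfs_patterns)
```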
BioSPPy/mcp_output/README_MCP.md
ADDED
@@ -0,0 +1,56 @@
+# BioSPPy: Biosignal Processing in Python
+
+## Project Introduction
+
+BioSPPy is a comprehensive Python library designed for biosignal processing. It provides a suite of tools for analyzing and visualizing various physiological signals such as ECG, EEG, EMG, and more. The library is structured to facilitate easy integration and application of signal processing techniques, making it an essential tool for researchers and developers working in the field of biomedical signal processing.
+
+## Installation Method
+
+To install BioSPPy, ensure you have Python installed on your system. The library requires the following dependencies: `numpy`, `scipy`, and `matplotlib`. Optionally, `pandas` can be used for enhanced data manipulation capabilities.
+
+Install BioSPPy using pip:
+
+```
+pip install biosppy
+```
+
+## Quick Start
+
+Here's a quick example to get you started with BioSPPy:
+
+1. Import the library and load your signal data.
+2. Use the provided functions to process and analyze your signals.
+
+Example:
+
+```
+from biosppy.signals import ecg
+
+# Load your ECG signal data
+signal = ...
+
+# Process the ECG signal
+out = ecg.ecg(signal=signal, sampling_rate=1000, show=True)
+```
+
+## Available Tools and Endpoints List
+
+BioSPPy offers a variety of services for signal processing:
+
+- **ECG Processing**: Functions like `ecg` for ECG signal analysis, including R-peak detection and heart rate computation.
+- **EEG Processing**: Tools for EEG signal analysis, including power and phase-locking features.
+- **EMG Processing**: Functions for EMG signal analysis, including onset detection.
+- **EDA Processing**: Tools for analyzing electrodermal activity signals.
+- **Clustering and Classification**: Services for clustering and classification of biosignals.
+- **Plotting**: Visualization tools for various signal types.
+- **Storage and Serialization**: Functions for storing and loading signal data in different formats.
+
+## Common Issues and Notes
+
+- **Dependencies**: Ensure all required dependencies are installed. Use `pip` to manage and install missing packages.
+- **Environment**: BioSPPy is compatible with most Python environments. Ensure your environment is set up with the necessary libraries.
+- **Performance**: For large datasets, consider optimizing your environment and using efficient data handling techniques.
+
+## Reference Links or Documentation
+
+For more detailed information, visit the [BioSPPy GitHub repository](https://github.com/PIA-Group/BioSPPy) where you can find the full documentation, examples, and additional resources to help you get the most out of BioSPPy.
BioSPPy/mcp_output/analysis.json
ADDED
@@ -0,0 +1,1579 @@
+{
+  "summary": {
+    "repository_url": "https://github.com/PIA-Group/BioSPPy",
+    "summary": "Imported via zip fallback, file count: 47",
+    "file_tree": {
+      "AUTHORS.md": {"size": 658},
+      "CHANGELOG.md": {"size": 3345},
+      "README.md": {"size": 2897},
+      "biosppy/__init__.py": {"size": 511},
+      "biosppy/__version__.py": {"size": 259},
+      "biosppy/biometrics.py": {"size": 63201},
+      "biosppy/clustering.py": {"size": 28595},
+      "biosppy/inter_plotting/__init__.py": {"size": 444},
+      "biosppy/inter_plotting/acc.py": {"size": 17555},
+      "biosppy/inter_plotting/ecg.py": {"size": 4526},
+      "biosppy/metrics.py": {"size": 5362},
+      "biosppy/plotting.py": {"size": 43780},
+      "biosppy/signals/__init__.py": {"size": 612},
+      "biosppy/signals/abp.py": {"size": 6039},
+      "biosppy/signals/acc.py": {"size": 5097},
+      "biosppy/signals/bvp.py": {"size": 2987},
+      "biosppy/signals/ecg.py": {"size": 62088},
+      "biosppy/signals/eda.py": {"size": 6305},
+      "biosppy/signals/eeg.py": {"size": 12123},
+      "biosppy/signals/emg.py": {"size": 41564},
+      "biosppy/signals/pcg.py": {"size": 8301},
+      "biosppy/signals/ppg.py": {"size": 18232},
+      "biosppy/signals/resp.py": {"size": 3197},
+      "biosppy/signals/tools.py": {"size": 56286},
+      "biosppy/stats.py": {"size": 5294},
+      "biosppy/storage.py": {"size": 25139},
+      "biosppy/synthesizers/__init__.py": {"size": 411},
+      "biosppy/synthesizers/ecg.py": {"size": 20014},
+      "biosppy/timing.py": {"size": 1601},
+      "biosppy/utils.py": {"size": 10067},
+      "docs/conf.py": {"size": 10232},
+      "docs/requirements.txt": {"size": 1},
+      "example.py": {"size": 783},
+      "examples/acc.txt": {"size": 49404},
+      "examples/bcg.txt": {"size": 105085},
+      "examples/ecg.txt": {"size": 105085},
+      "examples/eda.txt": {"size": 524313},
+      "examples/eeg_ec.txt": {"size": 418565},
+      "examples/eeg_eo.txt": {"size": 330261},
+      "examples/emg.txt": {"size": 524313},
+      "examples/emg_1.txt": {"size": 319485},
+      "examples/pcg.txt": {"size": 239669},
+      "examples/ppg.txt": {"size": 140085},
+      "examples/resp.txt": {"size": 419644},
+      "requirements.txt": {"size": 135},
+      "setup.cfg": {"size": 60},
+      "setup.py": {"size": 4405}
+    },
+    "processed_by": "zip_fallback",
+    "success": true
+  },
+  "structure": {
+    "packages": ["source.biosppy", "source.biosppy.inter_plotting", "source.biosppy.signals", "source.biosppy.synthesizers"]
+  },
+  "dependencies": {
+    "has_environment_yml": false,
+    "has_requirements_txt": true,
+    "pyproject": false,
+    "setup_cfg": true,
+    "setup_py": true
+  },
+  "entry_points": {"imports": [], "cli": [], "modules": []},
+  "llm_analysis": {
+    "core_modules": [
+      {
+        "package": "setup",
+        "module": "setup",
+        "functions": [],
+        "classes": ["UploadCommand"],
+        "function_signatures": {},
+        "description": "Discovered via AST scan"
+      },
+      {
+        "package": "biosppy",
+        "module": "biometrics",
+        "functions": ["assess_classification", "assess_runs", "combination", "cross_validation", "get_auth_rates", "get_id_rates", "get_subject_results", "majority_rule"],
+        "classes": ["BaseClassifier", "CombinationError", "KNN", "SVM", "SubjectError", "UntrainedError"],
+        "function_signatures": {
+          "get_auth_rates": ["TP", "FP", "TN", "FN", "thresholds"],
+          "get_id_rates": ["H", "M", "R", "N", "thresholds"],
+          "get_subject_results": ["results", "subject", "thresholds", "subjects", "subject_dict", "subject_idx"],
+          "assess_classification": ["results", "thresholds"],
+          "assess_runs": ["results", "subjects"],
+          "combination": ["results", "weights"],
+          "majority_rule": ["labels", "random"],
+          "cross_validation": ["labels", "n_iter", "test_size", "train_size", "random_state"]
+        },
+        "description": "Discovered via AST scan"
+      },
+      {
+        "package": "biosppy",
+        "module": "clustering",
+        "functions": ["centroid_templates", "coassoc_partition", "consensus", "consensus_kmeans", "create_coassoc", "create_ensemble", "dbscan", "hierarchical", "kmeans", "mdist_templates", "outliers_dbscan", "outliers_dmean"],
+        "classes": [],
+        "function_signatures": {
+          "dbscan": ["data", "min_samples", "eps", "metric", "metric_args"],
+          "hierarchical": ["data", "k", "linkage", "metric", "metric_args"],
+          "kmeans": ["data", "k", "init", "max_iter", "n_init", "tol"],
+          "consensus": ["data", "k", "linkage", "fcn", "grid"],
+          "consensus_kmeans": ["data", "k", "linkage", "nensemble", "kmin", "kmax"],
+          "create_ensemble": ["data", "fcn", "grid"],
+          "create_coassoc": ["ensemble", "N"],
+          "coassoc_partition": ["coassoc", "k", "linkage"],
+          "mdist_templates": ["data", "clusters", "ntemplates", "metric", "metric_args"],
+          "centroid_templates": ["data", "clusters", "ntemplates"],
+          "outliers_dbscan": ["data", "min_samples", "eps", "metric", "metric_args"],
+          "outliers_dmean": ["data", "alpha", "beta", "metric", "metric_args", "max_idx"]
+        },
+        "description": "Discovered via AST scan"
+      },
+      {
+        "package": "biosppy",
+        "module": "metrics",
+        "functions": ["cdist", "pcosine", "pdist", "squareform"],
+        "classes": [],
+        "function_signatures": {
+          "pcosine": ["u", "v"],
+          "pdist": ["X", "metric", "p", "w", "V", "VI"],
+          "cdist": ["XA", "XB", "metric", "p", "V", "VI", "w"],
+          "squareform": ["X", "force", "checks"]
+        },
+        "description": "Discovered via AST scan"
+      },
+      {
+        "package": "biosppy",
+        "module": "plotting",
+        "functions": ["plot_abp", "plot_acc", "plot_bcg", "plot_biometrics", "plot_bvp", "plot_clustering", "plot_ecg", "plot_eda", "plot_eeg", "plot_emg", "plot_filter", "plot_pcg", "plot_ppg", "plot_resp", "plot_spectrum"],
+        "classes": [],
+        "function_signatures": {
+          "plot_filter": ["ftype", "band", "order", "frequency", "sampling_rate", "path", "show"],
+          "plot_spectrum": ["signal", "sampling_rate", "path", "show"],
+          "plot_acc": ["ts", "raw", "vm", "sm", "path", "show"],
+          "plot_ppg": ["ts", "raw", "filtered", "onsets", "heart_rate_ts", "heart_rate", "path", "show"],
+          "plot_bvp": ["ts", "raw", "filtered", "onsets", "heart_rate_ts", "heart_rate", "path", "show"],
+          "plot_abp": ["ts", "raw", "filtered", "onsets", "heart_rate_ts", "heart_rate", "path", "show"],
+          "plot_eda": ["ts", "raw", "filtered", "onsets", "peaks", "amplitudes", "path", "show"],
+          "plot_emg": ["ts", "sampling_rate", "raw", "filtered", "onsets", "processed", "path", "show"],
+          "plot_resp": ["ts", "raw", "filtered", "zeros", "resp_rate_ts", "resp_rate", "path", "show"],
+          "plot_eeg": ["ts", "raw", "filtered", "labels", "features_ts", "theta", "alpha_low", "alpha_high", "beta", "gamma", "plf_pairs", "plf", "path", "show"],
+          "plot_ecg": ["ts", "raw", "filtered", "rpeaks", "templates_ts", "templates", "heart_rate_ts", "heart_rate", "path", "show"],
+          "plot_bcg": ["ts", "raw", "filtered", "jpeaks", "templates_ts", "templates", "heart_rate_ts", "heart_rate", "path", "show"],
+          "plot_pcg": ["ts", "raw", "filtered", "peaks", "heart_sounds", "heart_rate_ts", "inst_heart_rate", "path", "show"],
+          "plot_biometrics": ["assessment", "eer_idx", "path", "show"],
+          "plot_clustering": ["data", "clusters", "path", "show"]
+        },
+        "description": "Discovered via AST scan"
+      },
+      {
+        "package": "biosppy",
+        "module": "stats",
+        "functions": ["linear_regression", "paired_test", "pearson_correlation", "unpaired_test"],
+        "classes": [],
+        "function_signatures": {
+          "pearson_correlation": ["x", "y"],
+          "linear_regression": ["x", "y"],
+          "paired_test": ["x", "y"],
+          "unpaired_test": ["x", "y"]
+        },
+        "description": "Discovered via AST scan"
+      },
+      {
+        "package": "biosppy",
+        "module": "storage",
+        "functions": ["alloc_h5", "deserialize", "dumpJSON", "loadJSON", "load_h5", "load_txt", "pack_zip", "serialize", "store_h5", "store_txt", "unpack_zip", "zip_write"],
+        "classes": ["HDF"],
+        "function_signatures": {
+          "serialize": ["data", "path", "compress"],
+          "deserialize": ["path"],
+          "dumpJSON": ["data", "path"],
+          "loadJSON": ["path"],
+          "zip_write": ["fid", "files", "recursive", "root"],
+          "pack_zip": ["files", "path", "recursive", "forceExt"],
+          "unpack_zip": ["zip_path", "path"],
+          "alloc_h5": ["path"],
+          "store_h5": ["path", "label", "data"],
+          "load_h5": ["path", "label"],
+          "store_txt": ["path", "data", "sampling_rate", "resolution", "date", "labels", "precision"],
+          "load_txt": ["path"]
+        },
+        "description": "Discovered via AST scan"
+      },
+      {
+        "package": "biosppy",
+        "module": "timing",
+        "functions": ["clear", "clear_all", "tac", "tic"],
+        "classes": [],
+        "function_signatures": {
+          "tic": ["name"],
+          "tac": ["name"],
+          "clear": ["name"],
+          "clear_all": []
+        },
+        "description": "Discovered via AST scan"
+      },
+      {
+        "package": "biosppy",
+        "module": "utils",
+        "functions": ["fileparts", "fullfile", "highestAveragesAllocator", "normpath", "random_fraction", "remainderAllocator", "walktree"],
+        "classes": ["ReturnTuple"],
+        "function_signatures": {
+          "normpath": ["path"],
+          "fileparts": ["path"],
+          "fullfile": [],
+          "walktree": ["top", "spec"],
+          "remainderAllocator": ["votes", "k", "reverse", "check"],
+          "highestAveragesAllocator": ["votes", "k", "divisor", "check"],
+          "random_fraction": ["indx", "fraction", "sort"]
+        },
+        "description": "Discovered via AST scan"
+      },
+      {
+        "package": "biosppy.inter_plotting",
+        "module": "acc",
+        "functions": ["plot_acc"],
+        "classes": [],
+
"function_signatures": {
|
| 752 |
+
"plot_acc": [
|
| 753 |
+
"ts",
|
| 754 |
+
"raw",
|
| 755 |
+
"vm",
|
| 756 |
+
"sm",
|
| 757 |
+
"spectrum",
|
| 758 |
+
"path"
|
| 759 |
+
]
|
| 760 |
+
},
|
| 761 |
+
"description": "Discovered via AST scan"
|
| 762 |
+
},
|
| 763 |
+
{
|
| 764 |
+
"package": "biosppy.inter_plotting",
|
| 765 |
+
"module": "ecg",
|
| 766 |
+
"functions": [
|
| 767 |
+
"plot_ecg"
|
| 768 |
+
],
|
| 769 |
+
"classes": [],
|
| 770 |
+
"function_signatures": {
|
| 771 |
+
"plot_ecg": [
|
| 772 |
+
"ts",
|
| 773 |
+
"raw",
|
| 774 |
+
"filtered",
|
| 775 |
+
"rpeaks",
|
| 776 |
+
"templates_ts",
|
| 777 |
+
"templates",
|
| 778 |
+
"heart_rate_ts",
|
| 779 |
+
"heart_rate",
|
| 780 |
+
"path",
|
| 781 |
+
"show"
|
| 782 |
+
]
|
| 783 |
+
},
|
| 784 |
+
"description": "Discovered via AST scan"
|
| 785 |
+
},
|
| 786 |
+
{
|
| 787 |
+
"package": "biosppy.signals",
|
| 788 |
+
"module": "abp",
|
| 789 |
+
"functions": [
|
| 790 |
+
"abp",
|
| 791 |
+
"find_onsets_zong2003"
|
| 792 |
+
],
|
| 793 |
+
"classes": [],
|
| 794 |
+
"function_signatures": {
|
| 795 |
+
"abp": [
|
| 796 |
+
"signal",
|
| 797 |
+
"sampling_rate",
|
| 798 |
+
"show"
|
| 799 |
+
],
|
| 800 |
+
"find_onsets_zong2003": [
|
| 801 |
+
"signal",
|
| 802 |
+
"sampling_rate",
|
| 803 |
+
"sm_size",
|
| 804 |
+
"size",
|
| 805 |
+
"alpha",
|
| 806 |
+
"wrange",
|
| 807 |
+
"d1_th",
|
| 808 |
+
"d2_th"
|
| 809 |
+
]
|
| 810 |
+
},
|
| 811 |
+
"description": "Discovered via AST scan"
|
| 812 |
+
},
|
| 813 |
+
{
|
| 814 |
+
"package": "biosppy.signals",
|
| 815 |
+
"module": "acc",
|
| 816 |
+
"functions": [
|
| 817 |
+
"acc",
|
| 818 |
+
"frequency_domain_feature_extractor",
|
| 819 |
+
"time_domain_feature_extractor"
|
| 820 |
+
],
|
| 821 |
+
"classes": [],
|
| 822 |
+
"function_signatures": {
|
| 823 |
+
"acc": [
|
| 824 |
+
"signal",
|
| 825 |
+
"sampling_rate",
|
| 826 |
+
"path",
|
| 827 |
+
"show",
|
| 828 |
+
"interactive"
|
| 829 |
+
],
|
| 830 |
+
"time_domain_feature_extractor": [
|
| 831 |
+
"signal"
|
| 832 |
+
],
|
| 833 |
+
"frequency_domain_feature_extractor": [
|
| 834 |
+
"signal",
|
| 835 |
+
"sampling_rate"
|
| 836 |
+
]
|
| 837 |
+
},
|
| 838 |
+
"description": "Discovered via AST scan"
|
| 839 |
+
},
|
| 840 |
+
{
|
| 841 |
+
"package": "biosppy.signals",
|
| 842 |
+
"module": "bvp",
|
| 843 |
+
"functions": [
|
| 844 |
+
"bvp"
|
| 845 |
+
],
|
| 846 |
+
"classes": [],
|
| 847 |
+
"function_signatures": {
|
| 848 |
+
"bvp": [
|
| 849 |
+
"signal",
|
| 850 |
+
"sampling_rate",
|
| 851 |
+
"path",
|
| 852 |
+
"show"
|
| 853 |
+
]
|
| 854 |
+
},
|
| 855 |
+
"description": "Discovered via AST scan"
|
| 856 |
+
},
|
| 857 |
+
{
|
| 858 |
+
"package": "biosppy.signals",
|
| 859 |
+
"module": "ecg",
|
| 860 |
+
"functions": [
|
| 861 |
+
"ASI_segmenter",
|
| 862 |
+
"ZZ2018",
|
| 863 |
+
"bSQI",
|
| 864 |
+
"christov_segmenter",
|
| 865 |
+
"compare_segmentation",
|
| 866 |
+
"correct_rpeaks",
|
| 867 |
+
"ecg",
|
| 868 |
+
"engzee_segmenter",
|
| 869 |
+
"extract_heartbeats",
|
| 870 |
+
"fSQI",
|
| 871 |
+
"gamboa_segmenter",
|
| 872 |
+
"getPPositions",
|
| 873 |
+
"getQPositions",
|
| 874 |
+
"getSPositions",
|
| 875 |
+
"getTPositions",
|
| 876 |
+
"hamilton_segmenter",
|
| 877 |
+
"kSQI",
|
| 878 |
+
"pSQI",
|
| 879 |
+
"sSQI",
|
| 880 |
+
"ssf_segmenter"
|
| 881 |
+
],
|
| 882 |
+
"classes": [],
|
| 883 |
+
"function_signatures": {
|
| 884 |
+
"ecg": [
|
| 885 |
+
"signal",
|
| 886 |
+
"sampling_rate",
|
| 887 |
+
"path",
|
| 888 |
+
"show",
|
| 889 |
+
"interactive"
|
| 890 |
+
],
|
| 891 |
+
"extract_heartbeats": [
|
| 892 |
+
"signal",
|
| 893 |
+
"rpeaks",
|
| 894 |
+
"sampling_rate",
|
| 895 |
+
"before",
|
| 896 |
+
"after"
|
| 897 |
+
],
|
| 898 |
+
"compare_segmentation": [
|
| 899 |
+
"reference",
|
| 900 |
+
"test",
|
| 901 |
+
"sampling_rate",
|
| 902 |
+
"offset",
|
| 903 |
+
"minRR",
|
| 904 |
+
"tol"
|
| 905 |
+
],
|
| 906 |
+
"correct_rpeaks": [
|
| 907 |
+
"signal",
|
| 908 |
+
"rpeaks",
|
| 909 |
+
"sampling_rate",
|
| 910 |
+
"tol"
|
| 911 |
+
],
|
| 912 |
+
"ssf_segmenter": [
|
| 913 |
+
"signal",
|
| 914 |
+
"sampling_rate",
|
| 915 |
+
"threshold",
|
| 916 |
+
"before",
|
| 917 |
+
"after"
|
| 918 |
+
],
|
| 919 |
+
"christov_segmenter": [
|
| 920 |
+
"signal",
|
| 921 |
+
"sampling_rate"
|
| 922 |
+
],
|
| 923 |
+
"engzee_segmenter": [
|
| 924 |
+
"signal",
|
| 925 |
+
"sampling_rate",
|
| 926 |
+
"threshold"
|
| 927 |
+
],
|
| 928 |
+
"gamboa_segmenter": [
|
| 929 |
+
"signal",
|
| 930 |
+
"sampling_rate",
|
| 931 |
+
"tol"
|
| 932 |
+
],
|
| 933 |
+
"hamilton_segmenter": [
|
| 934 |
+
"signal",
|
| 935 |
+
"sampling_rate"
|
| 936 |
+
],
|
| 937 |
+
"ASI_segmenter": [
|
| 938 |
+
"signal",
|
| 939 |
+
"sampling_rate",
|
| 940 |
+
"Pth"
|
| 941 |
+
],
|
| 942 |
+
"getQPositions": [
|
| 943 |
+
"ecg_proc",
|
| 944 |
+
"show"
|
| 945 |
+
],
|
| 946 |
+
"getSPositions": [
|
| 947 |
+
"ecg_proc",
|
| 948 |
+
"show"
|
| 949 |
+
],
|
| 950 |
+
"getPPositions": [
|
| 951 |
+
"ecg_proc",
|
| 952 |
+
"show"
|
| 953 |
+
],
|
| 954 |
+
"getTPositions": [
|
| 955 |
+
"ecg_proc",
|
| 956 |
+
"show"
|
| 957 |
+
],
|
| 958 |
+
"bSQI": [
|
| 959 |
+
"detector_1",
|
| 960 |
+
"detector_2",
|
| 961 |
+
"fs",
|
| 962 |
+
"mode",
|
| 963 |
+
"search_window"
|
| 964 |
+
],
|
| 965 |
+
"sSQI": [
|
| 966 |
+
"signal"
|
| 967 |
+
],
|
| 968 |
+
"kSQI": [
|
| 969 |
+
"signal",
|
| 970 |
+
"fisher"
|
| 971 |
+
],
|
| 972 |
+
"pSQI": [
|
| 973 |
+
"signal",
|
| 974 |
+
"f_thr"
|
| 975 |
+
],
|
| 976 |
+
"fSQI": [
|
| 977 |
+
"ecg_signal",
|
| 978 |
+
"fs",
|
| 979 |
+
"nseg",
|
| 980 |
+
"num_spectrum",
|
| 981 |
+
"dem_spectrum",
|
| 982 |
+
"mode"
|
| 983 |
+
],
|
| 984 |
+
"ZZ2018": [
|
| 985 |
+
"signal",
|
| 986 |
+
"detector_1",
|
| 987 |
+
"detector_2",
|
| 988 |
+
"fs",
|
| 989 |
+
"search_window",
|
| 990 |
+
"nseg",
|
| 991 |
+
"mode"
|
| 992 |
+
]
|
| 993 |
+
},
|
| 994 |
+
"description": "Discovered via AST scan"
|
| 995 |
+
},
|
| 996 |
+
{
|
| 997 |
+
"package": "biosppy.signals",
|
| 998 |
+
"module": "eda",
|
| 999 |
+
"functions": [
|
| 1000 |
+
"basic_scr",
|
| 1001 |
+
"eda",
|
| 1002 |
+
"kbk_scr"
|
| 1003 |
+
],
|
| 1004 |
+
"classes": [],
|
| 1005 |
+
"function_signatures": {
|
| 1006 |
+
"eda": [
|
| 1007 |
+
"signal",
|
| 1008 |
+
"sampling_rate",
|
| 1009 |
+
"path",
|
| 1010 |
+
"show",
|
| 1011 |
+
"min_amplitude"
|
| 1012 |
+
],
|
| 1013 |
+
"basic_scr": [
|
| 1014 |
+
"signal",
|
| 1015 |
+
"sampling_rate"
|
| 1016 |
+
],
|
| 1017 |
+
"kbk_scr": [
|
| 1018 |
+
"signal",
|
| 1019 |
+
"sampling_rate",
|
| 1020 |
+
"min_amplitude"
|
| 1021 |
+
]
|
| 1022 |
+
},
|
| 1023 |
+
"description": "Discovered via AST scan"
|
| 1024 |
+
},
|
| 1025 |
+
{
|
| 1026 |
+
"package": "biosppy.signals",
|
| 1027 |
+
"module": "eeg",
|
| 1028 |
+
"functions": [
|
| 1029 |
+
"car_reference",
|
| 1030 |
+
"eeg",
|
| 1031 |
+
"get_plf_features",
|
| 1032 |
+
"get_power_features"
|
| 1033 |
+
],
|
| 1034 |
+
"classes": [],
|
| 1035 |
+
"function_signatures": {
|
| 1036 |
+
"eeg": [
|
| 1037 |
+
"signal",
|
| 1038 |
+
"sampling_rate",
|
| 1039 |
+
"labels",
|
| 1040 |
+
"path",
|
| 1041 |
+
"show"
|
| 1042 |
+
],
|
| 1043 |
+
"car_reference": [
|
| 1044 |
+
"signal"
|
| 1045 |
+
],
|
| 1046 |
+
"get_power_features": [
|
| 1047 |
+
"signal",
|
| 1048 |
+
"sampling_rate",
|
| 1049 |
+
"size",
|
| 1050 |
+
"overlap"
|
| 1051 |
+
],
|
| 1052 |
+
"get_plf_features": [
|
| 1053 |
+
"signal",
|
| 1054 |
+
"sampling_rate",
|
| 1055 |
+
"size",
|
| 1056 |
+
"overlap"
|
| 1057 |
+
]
|
| 1058 |
+
},
|
| 1059 |
+
"description": "Discovered via AST scan"
|
| 1060 |
+
},
|
| 1061 |
+
{
|
| 1062 |
+
"package": "biosppy.signals",
|
| 1063 |
+
"module": "emg",
|
| 1064 |
+
"functions": [
|
| 1065 |
+
"abbink_onset_detector",
|
| 1066 |
+
"bonato_onset_detector",
|
| 1067 |
+
"emg",
|
| 1068 |
+
"find_onsets",
|
| 1069 |
+
"hodges_bui_onset_detector",
|
| 1070 |
+
"lidierth_onset_detector",
|
| 1071 |
+
"londral_onset_detector",
|
| 1072 |
+
"silva_onset_detector",
|
| 1073 |
+
"solnik_onset_detector"
|
| 1074 |
+
],
|
| 1075 |
+
"classes": [],
|
| 1076 |
+
"function_signatures": {
|
| 1077 |
+
"emg": [
|
| 1078 |
+
"signal",
|
| 1079 |
+
"sampling_rate",
|
| 1080 |
+
"path",
|
| 1081 |
+
"show"
|
| 1082 |
+
],
|
| 1083 |
+
"find_onsets": [
|
| 1084 |
+
"signal",
|
| 1085 |
+
"sampling_rate",
|
| 1086 |
+
"size",
|
| 1087 |
+
"threshold"
|
| 1088 |
+
],
|
| 1089 |
+
"hodges_bui_onset_detector": [
|
| 1090 |
+
"signal",
|
| 1091 |
+
"rest",
|
| 1092 |
+
"sampling_rate",
|
| 1093 |
+
"size",
|
| 1094 |
+
"threshold"
|
| 1095 |
+
],
|
| 1096 |
+
"bonato_onset_detector": [
|
| 1097 |
+
"signal",
|
| 1098 |
+
"rest",
|
| 1099 |
+
"sampling_rate",
|
| 1100 |
+
"threshold",
|
| 1101 |
+
"active_state_duration",
|
| 1102 |
+
"samples_above_fail",
|
| 1103 |
+
"fail_size"
|
| 1104 |
+
],
|
| 1105 |
+
"lidierth_onset_detector": [
|
| 1106 |
+
"signal",
|
| 1107 |
+
"rest",
|
| 1108 |
+
"sampling_rate",
|
| 1109 |
+
"size",
|
| 1110 |
+
"threshold",
|
| 1111 |
+
"active_state_duration",
|
| 1112 |
+
"fail_size"
|
| 1113 |
+
],
|
| 1114 |
+
"abbink_onset_detector": [
|
| 1115 |
+
"signal",
|
| 1116 |
+
"rest",
|
| 1117 |
+
"sampling_rate",
|
| 1118 |
+
"size",
|
| 1119 |
+
"alarm_size",
|
| 1120 |
+
"threshold",
|
| 1121 |
+
"transition_threshold"
|
| 1122 |
+
],
|
| 1123 |
+
"solnik_onset_detector": [
|
| 1124 |
+
"signal",
|
| 1125 |
+
"rest",
|
| 1126 |
+
"sampling_rate",
|
| 1127 |
+
"threshold",
|
| 1128 |
+
"active_state_duration"
|
| 1129 |
+
],
|
| 1130 |
+
"silva_onset_detector": [
|
| 1131 |
+
"signal",
|
| 1132 |
+
"sampling_rate",
|
| 1133 |
+
"size",
|
| 1134 |
+
"threshold_size",
|
| 1135 |
+
"threshold"
|
| 1136 |
+
],
|
| 1137 |
+
"londral_onset_detector": [
|
| 1138 |
+
"signal",
|
| 1139 |
+
"rest",
|
| 1140 |
+
"sampling_rate",
|
| 1141 |
+
"size",
|
| 1142 |
+
"threshold",
|
| 1143 |
+
"active_state_duration"
|
| 1144 |
+
]
|
| 1145 |
+
},
|
| 1146 |
+
"description": "Discovered via AST scan"
|
| 1147 |
+
},
|
| 1148 |
+
{
|
| 1149 |
+
"package": "biosppy.signals",
|
| 1150 |
+
"module": "pcg",
|
| 1151 |
+
"functions": [
|
| 1152 |
+
"find_peaks",
|
| 1153 |
+
"get_avg_heart_rate",
|
| 1154 |
+
"homomorphic_filter",
|
| 1155 |
+
"identify_heart_sounds",
|
| 1156 |
+
"pcg"
|
| 1157 |
+
],
|
| 1158 |
+
"classes": [],
|
| 1159 |
+
"function_signatures": {
|
| 1160 |
+
"pcg": [
|
| 1161 |
+
"signal",
|
| 1162 |
+
"sampling_rate",
|
| 1163 |
+
"path",
|
| 1164 |
+
"show"
|
| 1165 |
+
],
|
| 1166 |
+
"find_peaks": [
|
| 1167 |
+
"signal",
|
| 1168 |
+
"sampling_rate"
|
| 1169 |
+
],
|
| 1170 |
+
"homomorphic_filter": [
|
| 1171 |
+
"signal",
|
| 1172 |
+
"sampling_rate"
|
| 1173 |
+
],
|
| 1174 |
+
"get_avg_heart_rate": [
|
| 1175 |
+
"envelope",
|
| 1176 |
+
"sampling_rate"
|
| 1177 |
+
],
|
| 1178 |
+
"identify_heart_sounds": [
|
| 1179 |
+
"beats",
|
| 1180 |
+
"sampling_rate"
|
| 1181 |
+
]
|
| 1182 |
+
},
|
| 1183 |
+
"description": "Discovered via AST scan"
|
| 1184 |
+
},
|
| 1185 |
+
{
|
| 1186 |
+
"package": "biosppy.signals",
|
| 1187 |
+
"module": "ppg",
|
| 1188 |
+
"functions": [
|
| 1189 |
+
"find_onsets_elgendi2013",
|
| 1190 |
+
"find_onsets_kavsaoglu2016",
|
| 1191 |
+
"ppg",
|
| 1192 |
+
"ppg_segmentation"
|
| 1193 |
+
],
|
| 1194 |
+
"classes": [],
|
| 1195 |
+
"function_signatures": {
|
| 1196 |
+
"ppg": [
|
| 1197 |
+
"signal",
|
| 1198 |
+
"sampling_rate",
|
| 1199 |
+
"show"
|
| 1200 |
+
],
|
| 1201 |
+
"find_onsets_elgendi2013": [
|
| 1202 |
+
"signal",
|
| 1203 |
+
"sampling_rate",
|
| 1204 |
+
"peakwindow",
|
| 1205 |
+
"beatwindow",
|
| 1206 |
+
"beatoffset",
|
| 1207 |
+
"mindelay"
|
| 1208 |
+
],
|
| 1209 |
+
"find_onsets_kavsaoglu2016": [
|
| 1210 |
+
"signal",
|
| 1211 |
+
"sampling_rate",
|
| 1212 |
+
"alpha",
|
| 1213 |
+
"k",
|
| 1214 |
+
"init_bpm",
|
| 1215 |
+
"min_delay",
|
| 1216 |
+
"max_BPM"
|
| 1217 |
+
],
|
| 1218 |
+
"ppg_segmentation": [
|
| 1219 |
+
"filtered",
|
| 1220 |
+
"sampling_rate",
|
| 1221 |
+
"show",
|
| 1222 |
+
"show_mean",
|
| 1223 |
+
"selection",
|
| 1224 |
+
"peak_threshold"
|
| 1225 |
+
]
|
| 1226 |
+
},
|
| 1227 |
+
"description": "Discovered via AST scan"
|
| 1228 |
+
},
|
| 1229 |
+
{
|
| 1230 |
+
"package": "biosppy.signals",
|
| 1231 |
+
"module": "resp",
|
| 1232 |
+
"functions": [
|
| 1233 |
+
"resp"
|
| 1234 |
+
],
|
| 1235 |
+
"classes": [],
|
| 1236 |
+
"function_signatures": {
|
| 1237 |
+
"resp": [
|
| 1238 |
+
"signal",
|
| 1239 |
+
"sampling_rate",
|
| 1240 |
+
"path",
|
| 1241 |
+
"show"
|
| 1242 |
+
]
|
| 1243 |
+
},
|
| 1244 |
+
"description": "Discovered via AST scan"
|
| 1245 |
+
},
|
| 1246 |
+
{
|
| 1247 |
+
"package": "biosppy.signals",
|
| 1248 |
+
"module": "tools",
|
| 1249 |
+
"functions": [
|
| 1250 |
+
"analytic_signal",
|
| 1251 |
+
"band_power",
|
| 1252 |
+
"distance_profile",
|
| 1253 |
+
"filter_signal",
|
| 1254 |
+
"find_extrema",
|
| 1255 |
+
"find_intersection",
|
| 1256 |
+
"finite_difference",
|
| 1257 |
+
"get_filter",
|
| 1258 |
+
"get_heart_rate",
|
| 1259 |
+
"mean_waves",
|
| 1260 |
+
"median_waves",
|
| 1261 |
+
"normalize",
|
| 1262 |
+
"pearson_correlation",
|
| 1263 |
+
"phase_locking",
|
| 1264 |
+
"power_spectrum",
|
| 1265 |
+
"rms_error",
|
| 1266 |
+
"signal_cross_join",
|
| 1267 |
+
"signal_self_join",
|
| 1268 |
+
"signal_stats",
|
| 1269 |
+
"smoother",
|
| 1270 |
+
"synchronize",
|
| 1271 |
+
"welch_spectrum",
|
| 1272 |
+
"windower",
|
| 1273 |
+
"zero_cross"
|
| 1274 |
+
],
|
| 1275 |
+
"classes": [
|
| 1276 |
+
"OnlineFilter"
|
| 1277 |
+
],
|
| 1278 |
+
"function_signatures": {
|
| 1279 |
+
"get_filter": [
|
| 1280 |
+
"ftype",
|
| 1281 |
+
"band",
|
| 1282 |
+
"order",
|
| 1283 |
+
"frequency",
|
| 1284 |
+
"sampling_rate"
|
| 1285 |
+
],
|
| 1286 |
+
"filter_signal": [
|
| 1287 |
+
"signal",
|
| 1288 |
+
"ftype",
|
| 1289 |
+
"band",
|
| 1290 |
+
"order",
|
| 1291 |
+
"frequency",
|
| 1292 |
+
"sampling_rate"
|
| 1293 |
+
],
|
| 1294 |
+
"smoother": [
|
| 1295 |
+
"signal",
|
| 1296 |
+
"kernel",
|
| 1297 |
+
"size",
|
| 1298 |
+
"mirror"
|
| 1299 |
+
],
|
| 1300 |
+
"analytic_signal": [
|
| 1301 |
+
"signal",
|
| 1302 |
+
"N"
|
| 1303 |
+
],
|
| 1304 |
+
"phase_locking": [
|
| 1305 |
+
"signal1",
|
| 1306 |
+
"signal2",
|
| 1307 |
+
"N"
|
| 1308 |
+
],
|
| 1309 |
+
"power_spectrum": [
|
| 1310 |
+
"signal",
|
| 1311 |
+
"sampling_rate",
|
| 1312 |
+
"pad",
|
| 1313 |
+
"pow2",
|
| 1314 |
+
"decibel"
|
| 1315 |
+
],
|
| 1316 |
+
"welch_spectrum": [
|
| 1317 |
+
"signal",
|
| 1318 |
+
"sampling_rate",
|
| 1319 |
+
"size",
|
| 1320 |
+
"overlap",
|
| 1321 |
+
"window",
|
| 1322 |
+
"window_kwargs",
|
| 1323 |
+
"pad",
|
| 1324 |
+
"decibel"
|
| 1325 |
+
],
|
| 1326 |
+
"band_power": [
|
| 1327 |
+
"freqs",
|
| 1328 |
+
"power",
|
| 1329 |
+
"frequency",
|
| 1330 |
+
"decibel"
|
| 1331 |
+
],
|
| 1332 |
+
"signal_stats": [
|
| 1333 |
+
"signal"
|
| 1334 |
+
],
|
| 1335 |
+
"normalize": [
|
| 1336 |
+
"signal",
|
| 1337 |
+
"ddof"
|
| 1338 |
+
],
|
| 1339 |
+
"zero_cross": [
|
| 1340 |
+
"signal",
|
| 1341 |
+
"detrend"
|
| 1342 |
+
],
|
| 1343 |
+
"find_extrema": [
|
| 1344 |
+
"signal",
|
| 1345 |
+
"mode"
|
| 1346 |
+
],
|
| 1347 |
+
"windower": [
|
| 1348 |
+
"signal",
|
| 1349 |
+
"size",
|
| 1350 |
+
"step",
|
| 1351 |
+
"fcn",
|
| 1352 |
+
"fcn_kwargs",
|
| 1353 |
+
"kernel",
|
| 1354 |
+
"kernel_kwargs"
|
| 1355 |
+
],
|
| 1356 |
+
"synchronize": [
|
| 1357 |
+
"x",
|
| 1358 |
+
"y",
|
| 1359 |
+
"detrend"
|
| 1360 |
+
],
|
| 1361 |
+
"pearson_correlation": [
|
| 1362 |
+
"x",
|
| 1363 |
+
"y"
|
| 1364 |
+
],
|
| 1365 |
+
"rms_error": [
|
| 1366 |
+
"x",
|
| 1367 |
+
"y"
|
| 1368 |
+
],
|
| 1369 |
+
"get_heart_rate": [
|
| 1370 |
+
"beats",
|
| 1371 |
+
"sampling_rate",
|
| 1372 |
+
"smooth",
|
| 1373 |
+
"size"
|
| 1374 |
+
],
|
| 1375 |
+
"find_intersection": [
|
| 1376 |
+
"x1",
|
| 1377 |
+
"y1",
|
| 1378 |
+
"x2",
|
| 1379 |
+
"y2",
|
| 1380 |
+
"alpha",
|
| 1381 |
+
"xtol",
|
| 1382 |
+
"ytol"
|
| 1383 |
+
],
|
| 1384 |
+
"finite_difference": [
|
| 1385 |
+
"signal",
|
| 1386 |
+
"weights"
|
| 1387 |
+
],
|
| 1388 |
+
"distance_profile": [
|
| 1389 |
+
"query",
|
| 1390 |
+
"signal",
|
| 1391 |
+
"metric"
|
| 1392 |
+
],
|
| 1393 |
+
"signal_self_join": [
|
| 1394 |
+
"signal",
|
| 1395 |
+
"size",
|
| 1396 |
+
"index",
|
| 1397 |
+
"limit"
|
| 1398 |
+
],
|
| 1399 |
+
"signal_cross_join": [
|
| 1400 |
+
"signal1",
|
| 1401 |
+
"signal2",
|
| 1402 |
+
"size",
|
| 1403 |
+
"index",
|
| 1404 |
+
"limit"
|
| 1405 |
+
],
|
| 1406 |
+
"mean_waves": [
|
| 1407 |
+
"data",
|
| 1408 |
+
"size",
|
| 1409 |
+
"step"
|
| 1410 |
+
],
|
| 1411 |
+
"median_waves": [
|
| 1412 |
+
"data",
|
| 1413 |
+
"size",
|
| 1414 |
+
"step"
|
| 1415 |
+
]
|
| 1416 |
+
},
|
| 1417 |
+
"description": "Discovered via AST scan"
|
| 1418 |
+
},
|
| 1419 |
+
{
|
| 1420 |
+
"package": "biosppy.synthesizers",
|
| 1421 |
+
"module": "ecg",
|
| 1422 |
+
"functions": [
|
| 1423 |
+
"B",
|
| 1424 |
+
"I",
|
| 1425 |
+
"P",
|
| 1426 |
+
"Pq",
|
| 1427 |
+
"Q1",
|
| 1428 |
+
"Q2",
|
| 1429 |
+
"R",
|
| 1430 |
+
"S",
|
| 1431 |
+
"St",
|
| 1432 |
+
"T",
|
| 1433 |
+
"ecg"
|
| 1434 |
+
],
|
| 1435 |
+
"classes": [],
|
| 1436 |
+
"function_signatures": {
|
| 1437 |
+
"B": [
|
| 1438 |
+
"l",
|
| 1439 |
+
"Kb"
|
| 1440 |
+
],
|
| 1441 |
+
"P": [
|
| 1442 |
+
"i",
|
| 1443 |
+
"Ap",
|
| 1444 |
+
"Kp"
|
| 1445 |
+
],
|
| 1446 |
+
"Pq": [
|
| 1447 |
+
"l",
|
| 1448 |
+
"Kpq"
|
| 1449 |
+
],
|
| 1450 |
+
"Q1": [
|
| 1451 |
+
"i",
|
| 1452 |
+
"Aq",
|
| 1453 |
+
"Kq1"
|
| 1454 |
+
],
|
| 1455 |
+
"Q2": [
|
| 1456 |
+
"i",
|
| 1457 |
+
"Aq",
|
| 1458 |
+
"Kq2"
|
| 1459 |
+
],
|
| 1460 |
+
"R": [
|
| 1461 |
+
"i",
|
| 1462 |
+
"Ar",
|
| 1463 |
+
"Kr"
|
| 1464 |
+
],
|
| 1465 |
+
"S": [
|
| 1466 |
+
"i",
|
| 1467 |
+
"As",
|
| 1468 |
+
"Ks",
|
| 1469 |
+
"Kcs",
|
| 1470 |
+
"k"
|
| 1471 |
+
],
|
| 1472 |
+
"St": [
|
| 1473 |
+
"i",
|
| 1474 |
+
"As",
|
| 1475 |
+
"Ks",
|
| 1476 |
+
"Kcs",
|
| 1477 |
+
"sm",
|
| 1478 |
+
"Kst",
|
| 1479 |
+
"k"
|
| 1480 |
+
],
|
| 1481 |
+
"T": [
|
| 1482 |
+
"i",
|
| 1483 |
+
"As",
|
| 1484 |
+
"Ks",
|
| 1485 |
+
"Kcs",
|
| 1486 |
+
"sm",
|
| 1487 |
+
"Kst",
|
| 1488 |
+
"At",
|
| 1489 |
+
"Kt",
|
| 1490 |
+
"k"
|
| 1491 |
+
],
|
| 1492 |
+
"I": [
|
| 1493 |
+
"i",
|
| 1494 |
+
"As",
|
| 1495 |
+
"Ks",
|
| 1496 |
+
"Kcs",
|
| 1497 |
+
"sm",
|
| 1498 |
+
"Kst",
|
| 1499 |
+
"At",
|
| 1500 |
+
"Kt",
|
| 1501 |
+
"si",
|
| 1502 |
+
"Ki"
|
| 1503 |
+
],
|
| 1504 |
+
"ecg": [
|
| 1505 |
+
"Kb",
|
| 1506 |
+
"Ap",
|
| 1507 |
+
"Kp",
|
| 1508 |
+
"Kpq",
|
| 1509 |
+
"Aq",
|
| 1510 |
+
"Kq1",
|
| 1511 |
+
"Kq2",
|
| 1512 |
+
"Ar",
|
| 1513 |
+
"Kr",
|
| 1514 |
+
"As",
|
| 1515 |
+
"Ks",
|
| 1516 |
+
"Kcs",
|
| 1517 |
+
"sm",
|
| 1518 |
+
"Kst",
|
| 1519 |
+
"At",
|
| 1520 |
+
"Kt",
|
| 1521 |
+
"si",
|
| 1522 |
+
"Ki",
|
| 1523 |
+
"var",
|
| 1524 |
+
"sampling_rate"
|
| 1525 |
+
]
|
| 1526 |
+
},
|
| 1527 |
+
"description": "Discovered via AST scan"
|
| 1528 |
+
},
|
| 1529 |
+
{
|
| 1530 |
+
"package": "docs",
|
| 1531 |
+
"module": "conf",
|
| 1532 |
+
"functions": [],
|
| 1533 |
+
"classes": [
|
| 1534 |
+
"Mock"
|
| 1535 |
+
],
|
| 1536 |
+
"function_signatures": {},
|
| 1537 |
+
"description": "Discovered via AST scan"
|
| 1538 |
+
}
|
| 1539 |
+
],
|
| 1540 |
+
"cli_commands": [],
|
| 1541 |
+
"import_strategy": {
|
| 1542 |
+
"primary": "import",
|
| 1543 |
+
"fallback": "blackbox",
|
| 1544 |
+
"confidence": 0.9
|
| 1545 |
+
},
|
| 1546 |
+
"dependencies": {
|
| 1547 |
+
"required": [
|
| 1548 |
+
"numpy",
|
| 1549 |
+
"scipy",
|
| 1550 |
+
"matplotlib"
|
| 1551 |
+
],
|
| 1552 |
+
"optional": [
|
| 1553 |
+
"pandas"
|
| 1554 |
+
]
|
| 1555 |
+
},
|
| 1556 |
+
"risk_assessment": {
|
| 1557 |
+
"import_feasibility": 0.8,
|
| 1558 |
+
"intrusiveness_risk": "low",
|
| 1559 |
+
"complexity": "medium"
|
| 1560 |
+
}
|
| 1561 |
+
},
|
| 1562 |
+
"deepwiki_analysis": {
|
| 1563 |
+
"repo_url": "https://github.com/PIA-Group/BioSPPy",
|
| 1564 |
+
"repo_name": "BioSPPy",
|
| 1565 |
+
"content": "PIA-Group/BioSPPy\nBiosignal Processing in Python\nIndexing Failed\nThere was a problem indexing this repository. Please try again.\nOnce indexed, you'll have full access to code exploration and search functionality",
|
| 1566 |
+
"model": "gpt-4o-2024-08-06",
|
| 1567 |
+
"source": "selenium",
|
| 1568 |
+
"success": true
|
| 1569 |
+
},
|
| 1570 |
+
"deepwiki_options": {
|
| 1571 |
+
"enabled": true,
|
| 1572 |
+
"model": "gpt-4o-2024-08-06"
|
| 1573 |
+
},
|
| 1574 |
+
"risk": {
|
| 1575 |
+
"import_feasibility": 0.8,
|
| 1576 |
+
"intrusiveness_risk": "low",
|
| 1577 |
+
"complexity": "medium"
|
| 1578 |
+
}
|
| 1579 |
+
}
|
BioSPPy/mcp_output/diff_report.md
ADDED
@@ -0,0 +1,63 @@
# BioSPPy Project Difference Report

**Repository:** BioSPPy
**Project Type:** Python Library
**Report Date:** February 3, 2026
**Time:** 13:23:29

## Project Overview

BioSPPy is a Python library designed to provide basic functionality for biosignal processing. It is widely used in the scientific community for its ease of use and comprehensive set of tools for analyzing physiological signals.

## Difference Analysis

### Summary of Changes
- **New Files Added:** 8
- **Modified Files:** 0
- **Intrusiveness:** None
- **Workflow Status:** Success
- **Test Status:** Failed

### New Files
The addition of 8 new files suggests an expansion in the library's capabilities or the introduction of new features. However, the lack of modifications to existing files indicates that these changes are likely isolated and do not alter the core functionality of the existing library.

### Workflow and Testing
While the workflow status is marked as successful, indicating that the integration and deployment processes were executed without errors, the test status has failed. This discrepancy suggests potential issues with the new additions that need to be addressed.

## Technical Analysis

### New Features
The introduction of new files typically implies the addition of new features or modules. Without modifications to existing files, these features are likely standalone and do not interfere with the current library structure.

### Testing Failures
The failure in testing indicates that the new files may contain bugs or that the new features are not fully compatible with the existing system. It is crucial to identify the specific tests that failed to understand the root cause of these issues.

## Recommendations and Improvements

1. **Detailed Testing:** Conduct a thorough review of the test cases to identify which specific tests failed. This will help in pinpointing the exact issues within the new files.

2. **Code Review:** Perform a comprehensive code review of the new files to ensure they adhere to the project's coding standards and best practices.

3. **Integration Testing:** Ensure that the new features integrate seamlessly with the existing library. This may involve creating new test cases that specifically target the interaction between new and existing components.

4. **Documentation Update:** Update the project documentation to include information about the new features, their usage, and any potential limitations or known issues.

## Deployment Information

Given the successful workflow status, the new files have been integrated into the main branch of the repository. However, due to the failed test status, it is advisable to refrain from deploying these changes to a production environment until the issues are resolved.

## Future Planning

1. **Bug Fixes:** Prioritize resolving the issues identified in the failed tests to ensure the stability and reliability of the library.

2. **Feature Enhancement:** Once the current issues are resolved, consider enhancing the new features based on user feedback and requirements.

3. **Community Engagement:** Engage with the BioSPPy user community to gather insights and suggestions for future improvements and feature requests.

4. **Regular Updates:** Establish a regular update cycle to ensure the library remains up-to-date with the latest advancements in biosignal processing.

## Conclusion

The recent changes to the BioSPPy project introduce new features that expand its capabilities. However, the failed test status highlights the need for further refinement and testing. By addressing these issues and following the recommendations outlined in this report, the project can continue to provide valuable tools for the biosignal processing community.

---
BioSPPy/mcp_output/mcp_plugin/__init__.py
ADDED
File without changes

BioSPPy/mcp_output/mcp_plugin/adapter.py
ADDED
@@ -0,0 +1,83 @@
import os
import sys

# Path settings
source_path = os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))), "source")
sys.path.insert(0, source_path)

# Import statements
try:
    from biosppy import setup
    from docs import conf
except ImportError as e:
    print("Failed to import modules. Ensure the source directory is correctly set up.")
    raise e

# Adapter class definition
class Adapter:
    """
    Adapter class for MCP plugin, utilizing the BioSPPy library.
    Provides methods to interact with identified classes and functions.
    """

    def __init__(self):
        self.mode = "import"

    # -------------------------------------------------------------------------
    # Setup Module Methods
    # -------------------------------------------------------------------------

    def create_upload_command_instance(self):
        """
        Create an instance of the UploadCommand class from the setup module.

        Returns:
            dict: A dictionary containing the status and the instance or error message.
        """
        try:
            instance = setup.UploadCommand()
            return {"status": "success", "instance": instance}
        except Exception as e:
            return {"status": "error", "message": f"Failed to create UploadCommand instance: {str(e)}"}

    # -------------------------------------------------------------------------
    # Docs Module Methods
    # -------------------------------------------------------------------------

    def create_mock_instance(self):
        """
        Create an instance of the Mock class from the conf module.

        Returns:
            dict: A dictionary containing the status and the instance or error message.
        """
        try:
            instance = conf.Mock()
            return {"status": "success", "instance": instance}
        except Exception as e:
            return {"status": "error", "message": f"Failed to create Mock instance: {str(e)}"}

    # -------------------------------------------------------------------------
    # Error Handling and Fallback
    # -------------------------------------------------------------------------

    def handle_import_failure(self):
        """
        Handle import failures gracefully, providing fallback options.

        Returns:
            dict: A dictionary containing the status and guidance message.
        """
        return {
            "status": "error",
            "message": "Import failed. Please ensure all dependencies are installed and the source path is correct."
        }

# Example usage
if __name__ == "__main__":
    adapter = Adapter()
    upload_command_result = adapter.create_upload_command_instance()
    print(upload_command_result)

    mock_instance_result = adapter.create_mock_instance()
    print(mock_instance_result)
BioSPPy/mcp_output/mcp_plugin/main.py
ADDED
@@ -0,0 +1,13 @@
"""
MCP Service Auto-Wrapper - Auto-generated
"""
from mcp_service import create_app

def main():
    """Main entry point"""
    app = create_app()
    return app

if __name__ == "__main__":
    app = main()
    app.run()
BioSPPy/mcp_output/mcp_plugin/mcp_service.py
ADDED
@@ -0,0 +1,105 @@
| 1 |
+
import os
|
| 2 |
+
import sys
|
| 3 |
+
|
| 4 |
+
source_path = os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))), "source")
|
| 5 |
+
if source_path not in sys.path:
|
| 6 |
+
sys.path.insert(0, source_path)
|
| 7 |
+
|
| 8 |
+
from fastmcp import FastMCP
|
| 9 |
+
|
| 10 |
+
from setup import UploadCommand
|
| 11 |
+
from docs.conf import Mock
|
| 12 |
+
|
| 13 |
+
mcp = FastMCP("unknown_service")
|
| 14 |
+
|
| 15 |
+
|
| 16 |
+
@mcp.tool(name="uploadcommand", description="UploadCommand class")
|
| 17 |
+
def uploadcommand(*args, **kwargs):
|
| 18 |
+
"""UploadCommand class"""
|
| 19 |
+
try:
|
| 20 |
+
if UploadCommand is None:
|
| 21 |
+
return {"success": False, "result": None, "error": "Class UploadCommand is not available, path may need adjustment"}
|
| 22 |
+
|
| 23 |
+
# MCP parameter type conversion
|
| 24 |
+
converted_args = []
|
| 25 |
+
converted_kwargs = kwargs.copy()
|
| 26 |
+
|
| 27 |
+
# Handle position argument type conversion
|
| 28 |
+
for arg in args:
|
| 29 |
+
if isinstance(arg, str):
|
| 30 |
+
# Try to convert to numeric type
|
| 31 |
+
try:
|
| 32 |
+
if '.' in arg:
|
| 33 |
+
converted_args.append(float(arg))
|
| 34 |
+
else:
|
| 35 |
+
converted_args.append(int(arg))
|
| 36 |
+
except ValueError:
|
| 37 |
+
converted_args.append(arg)
|
| 38 |
+
else:
|
| 39 |
+
converted_args.append(arg)
|
| 40 |
+
|
| 41 |
+
# Handle keyword argument type conversion
|
| 42 |
+
for key, value in converted_kwargs.items():
|
| 43 |
+
if isinstance(value, str):
|
| 44 |
+
try:
|
| 45 |
+
if '.' in value:
|
| 46 |
+
converted_kwargs[key] = float(value)
|
| 47 |
+
else:
|
| 48 |
+
converted_kwargs[key] = int(value)
|
| 49 |
+
except ValueError:
|
| 50 |
+
pass
|
| 51 |
+
|
| 52 |
+
instance = UploadCommand(*converted_args, **converted_kwargs)
|
| 53 |
+
return {"success": True, "result": str(instance), "error": None}
|
| 54 |
+
except Exception as e:
|
| 55 |
+
return {"success": False, "result": None, "error": str(e)}
|
| 56 |
+
|
| 57 |
+
@mcp.tool(name="mock", description="Mock class")
|
| 58 |
+
def mock(*args, **kwargs):
|
| 59 |
+
"""Mock class"""
|
| 60 |
+
try:
|
| 61 |
+
if Mock is None:
|
| 62 |
+
return {"success": False, "result": None, "error": "Class Mock is not available, path may need adjustment"}
|
| 63 |
+
|
| 64 |
+
# MCP parameter type conversion
|
| 65 |
+
converted_args = []
|
| 66 |
+
converted_kwargs = kwargs.copy()
|
| 67 |
+
|
| 68 |
+
# Handle position argument type conversion
|
| 69 |
+
for arg in args:
|
| 70 |
+
if isinstance(arg, str):
|
| 71 |
+
# Try to convert to numeric type
|
| 72 |
+
try:
|
| 73 |
+
if '.' in arg:
|
| 74 |
+
converted_args.append(float(arg))
|
| 75 |
+
else:
|
| 76 |
+
converted_args.append(int(arg))
|
| 77 |
+
except ValueError:
|
| 78 |
+
converted_args.append(arg)
|
| 79 |
+
else:
|
| 80 |
+
converted_args.append(arg)
|
| 81 |
+
|
| 82 |
+
# Handle keyword argument type conversion
|
| 83 |
+
for key, value in converted_kwargs.items():
|
| 84 |
+
if isinstance(value, str):
|
| 85 |
+
try:
|
| 86 |
+
if '.' in value:
|
| 87 |
+
converted_kwargs[key] = float(value)
|
| 88 |
+
else:
|
| 89 |
+
converted_kwargs[key] = int(value)
|
| 90 |
+
except ValueError:
|
| 91 |
+
pass
|
| 92 |
+
|
| 93 |
+
instance = Mock(*converted_args, **converted_kwargs)
|
| 94 |
+
return {"success": True, "result": str(instance), "error": None}
|
| 95 |
+
except Exception as e:
|
| 96 |
+
return {"success": False, "result": None, "error": str(e)}
|
| 97 |
+
|
| 98 |
+
|
| 99 |
+
|
| 100 |
+
def create_app():
|
| 101 |
+
"""Create and return FastMCP application instance"""
|
| 102 |
+
return mcp
|
| 103 |
+
|
| 104 |
+
if __name__ == "__main__":
|
| 105 |
+
mcp.run(transport="http", host="0.0.0.0", port=8000)
|
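Both generated tools in `mcp_service.py` repeat the same string-to-number coercion for positional and keyword arguments. That shared logic can be factored into a small reusable helper; the sketch below mirrors the generated conversion (the names `coerce_value` and `coerce_arguments` are illustrative, not part of the generated service):

```python
def coerce_value(value):
    """Best-effort conversion of a string to a number.

    Mirrors the generated tools' rule: strings containing '.' are
    tried as float, others as int; unconvertible strings and
    non-string values are returned unchanged.
    """
    if not isinstance(value, str):
        return value
    try:
        return float(value) if '.' in value else int(value)
    except ValueError:
        return value


def coerce_arguments(args, kwargs):
    """Apply coerce_value to positional and keyword arguments."""
    converted_args = [coerce_value(a) for a in args]
    converted_kwargs = {k: coerce_value(v) for k, v in kwargs.items()}
    return converted_args, converted_kwargs
```

Note that this rule treats a string like "1.2.3" as non-numeric (the float conversion raises ValueError), so version-like strings pass through unchanged.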
BioSPPy/mcp_output/requirements.txt
ADDED
@@ -0,0 +1,13 @@
fastmcp
fastapi
uvicorn[standard]
pydantic>=2.0.0
bidict==0.13.1
h5py==2.7.1
matplotlib==2.1.2
numpy==1.22.0
scikit-learn==0.19.1
scipy==1.2.0
shortuuid==0.5.0
six==1.11.0
joblib==0.11
BioSPPy/mcp_output/start_mcp.py
ADDED
@@ -0,0 +1,30 @@
"""
MCP Service Startup Entry
"""
import sys
import os

project_root = os.path.dirname(os.path.abspath(__file__))
mcp_plugin_dir = os.path.join(project_root, "mcp_plugin")
if mcp_plugin_dir not in sys.path:
    sys.path.insert(0, mcp_plugin_dir)

from mcp_service import create_app

def main():
    """Start FastMCP service"""
    app = create_app()
    # Use environment variable to configure port, default 8000
    port = int(os.environ.get("MCP_PORT", "8000"))

    # Choose transport mode based on environment variable
    transport = os.environ.get("MCP_TRANSPORT", "stdio")
    if transport == "http":
        app.run(transport="http", host="0.0.0.0", port=port)
    else:
        # Default to STDIO mode
        app.run()

if __name__ == "__main__":
    main()
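The environment-driven configuration in `start_mcp.py` can be exercised without launching a server. A minimal sketch that reproduces the `MCP_PORT`/`MCP_TRANSPORT` defaults as a testable function (the helper name `resolve_config` is ours, not part of the generated files):

```python
import os

def resolve_config(env=None):
    """Reproduce start_mcp.py's settings: port from MCP_PORT
    (default 8000) and transport from MCP_TRANSPORT (default "stdio").

    Accepts an explicit mapping for testing; falls back to os.environ.
    """
    env = os.environ if env is None else env
    port = int(env.get("MCP_PORT", "8000"))
    transport = env.get("MCP_TRANSPORT", "stdio")
    return {"port": port, "transport": transport}
```

Any value of `MCP_TRANSPORT` other than "http" falls through to STDIO mode, matching the `else` branch in the startup script.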
BioSPPy/mcp_output/workflow_summary.json
ADDED
@@ -0,0 +1,202 @@
{
  "repository": {
    "name": "BioSPPy",
    "url": "https://github.com/PIA-Group/BioSPPy",
    "local_path": "/export/zxcpu1/shiweijie/code/ghh/Code2MCP/workspace/BioSPPy",
    "description": "Python library",
    "features": "Basic functionality",
    "tech_stack": "Python",
    "stars": 0,
    "forks": 0,
    "language": "Python",
    "last_updated": "",
    "complexity": "medium",
    "intrusiveness_risk": "low"
  },
  "execution": {
    "start_time": 1770096083.4795523,
    "end_time": 1770096146.6542323,
    "duration": 63.17468023300171,
    "status": "success",
    "workflow_status": "success",
    "nodes_executed": [
      "download",
      "analysis",
      "env",
      "generate",
      "run",
      "review",
      "finalize"
    ],
    "total_files_processed": 4,
    "environment_type": "unknown",
    "llm_calls": 0,
    "deepwiki_calls": 0
  },
  "tests": {
    "original_project": {
      "passed": false,
      "details": {},
      "test_coverage": "100%",
      "execution_time": 0,
      "test_files": []
    },
    "mcp_plugin": {
      "passed": true,
      "details": {},
      "service_health": "healthy",
      "startup_time": 0,
      "transport_mode": "stdio",
      "fastmcp_version": "unknown",
      "mcp_version": "unknown"
    }
  },
  "analysis": {
    "structure": {
      "packages": [
        "source.biosppy",
        "source.biosppy.inter_plotting",
        "source.biosppy.signals",
        "source.biosppy.synthesizers"
      ]
    },
    "dependencies": {
      "has_environment_yml": false,
      "has_requirements_txt": true,
      "pyproject": false,
      "setup_cfg": true,
      "setup_py": true
    },
    "entry_points": {
      "imports": [],
      "cli": [],
      "modules": []
    },
    "risk_assessment": {
      "import_feasibility": 0.8,
      "intrusiveness_risk": "low",
      "complexity": "medium"
    },
    "deepwiki_analysis": {
      "repo_url": "https://github.com/PIA-Group/BioSPPy",
      "repo_name": "BioSPPy",
      "content": "PIA-Group/BioSPPy\nBiosignal Processing in Python\nIndexing Failed\nThere was a problem indexing this repository. Please try again.\nOnce indexed, you'll have full access to code exploration and search functionality",
      "model": "gpt-4o-2024-08-06",
      "source": "selenium",
      "success": true
    },
    "code_complexity": {
      "cyclomatic_complexity": "medium",
      "cognitive_complexity": "medium",
      "maintainability_index": 75
    },
    "security_analysis": {
      "vulnerabilities_found": 0,
      "security_score": 85,
      "recommendations": []
    }
  },
  "plugin_generation": {
    "files_created": [
      "mcp_output/start_mcp.py",
      "mcp_output/mcp_plugin/__init__.py",
      "mcp_output/mcp_plugin/mcp_service.py",
      "mcp_output/mcp_plugin/adapter.py",
      "mcp_output/mcp_plugin/main.py",
      "mcp_output/requirements.txt",
      "mcp_output/README_MCP.md"
    ],
    "main_entry": "start_mcp.py",
    "requirements": [
      "fastmcp>=0.1.0",
      "pydantic>=2.0.0"
    ],
    "readme_path": "/export/zxcpu1/shiweijie/code/ghh/Code2MCP/workspace/BioSPPy/mcp_output/README_MCP.md",
    "adapter_mode": "import",
    "total_lines_of_code": 0,
    "generated_files_size": 0,
    "tool_endpoints": 0,
    "supported_features": [
      "Basic functionality"
    ],
    "generated_tools": [
      "Basic tools",
      "Health check tools",
      "Version info tools"
    ]
  },
  "code_review": {},
  "errors": [],
  "warnings": [],
  "recommendations": [
    "Improve test coverage by adding unit tests for uncovered modules",
    "optimize large files such as `biosppy/biometrics.py` and `biosppy/signals/ecg.py` for better performance",
    "ensure consistent documentation across all modules",
    "update the `README.md` to include recent changes and usage examples",
    "consider refactoring complex functions for better readability and maintainability",
    "verify and update dependencies in `requirements.txt` to ensure compatibility",
    "implement continuous integration to automate testing and deployment",
    "enhance error handling and logging mechanisms",
    "review and optimize the import strategy to reduce complexity",
    "conduct a code review to identify potential improvements and code smells",
    "improve the indexing process for better code exploration and search functionality",
    "ensure all CLI commands are properly documented and tested",
    "evaluate the risk assessment and address any identified risks",
    "streamline the plugin integration process for better efficiency",
    "enhance the performance metrics tracking to identify bottlenecks and areas for improvement."
  ],
  "performance_metrics": {
    "memory_usage_mb": 0,
    "cpu_usage_percent": 0,
    "response_time_ms": 0,
    "throughput_requests_per_second": 0
  },
  "deployment_info": {
    "supported_platforms": ["Linux", "Windows", "macOS"],
    "python_versions": ["3.8", "3.9", "3.10", "3.11", "3.12"],
    "deployment_methods": ["Docker", "pip", "conda"],
    "monitoring_support": true,
    "logging_configuration": "structured"
  },
  "execution_analysis": {
    "success_factors": [
      "Successful execution of all workflow nodes",
      "Healthy service status of the MCP plugin"
    ],
    "failure_reasons": [],
    "overall_assessment": "good",
    "node_performance": {
      "download_time": "Completed successfully, time not specified",
      "analysis_time": "Completed successfully, time not specified",
      "generation_time": "Completed successfully, time not specified",
      "test_time": "Original project tests failed, MCP plugin tests passed"
    },
    "resource_usage": {
      "memory_efficiency": "Memory usage data not available",
      "cpu_efficiency": "CPU usage data not available",
      "disk_usage": "Disk usage data not available"
    }
  },
  "technical_quality": {
    "code_quality_score": 75,
    "architecture_score": 80,
    "performance_score": 70,
    "maintainability_score": 75,
    "security_score": 85,
    "scalability_score": 70
  }
}
BioSPPy/source/AUTHORS.md
ADDED
@@ -0,0 +1,29 @@
BioSPPy is written and maintained by Carlos Carreiras and
various contributors:

Development Lead
----------------

- Carlos Carreiras (<carlos.carreiras@lx.it.pt>)

Main Contributors
-----------------

- Ana Priscila Alves (<anapriscila.alves@lx.it.pt>)
- André Lourenço (<arlourenco@lx.it.pt>)
- Filipe Canento (<fcanento@lx.it.pt>)
- Hugo Silva (<hugo.silva@lx.it.pt>)

Scientific Supervision
----------------------

- Ana Fred (<afred@lx.it.pt>)

Patches and Suggestions
-----------------------

- Hayden Ball (PR/7)
- Jason Li (PR/13)
- Dominique Makowski (<dom.makowski@gmail.com>) (PR/15, PR/24)
- Margarida Reis (PR/17)
- Michael Gschwandtner (PR/23)
BioSPPy/source/CHANGELOG.md
ADDED
@@ -0,0 +1,146 @@
BioSPPy Changelog
=================

Here you can see the full list of changes between each BioSPPy release.

Version 0.8.0
-------------

Released on December 20th 2021

- Added PCG module to signals.
- Fixed some bugs.

Version 0.7.3
-------------

Released on June 29th 2021

- Removed BCG from master until some issues are fixed.

Version 0.7.2
-------------

Released on May 14th 2021

- Fixed BCG dependencies.

Version 0.7.1
-------------

Released on May 14th 2021

- Included BCG module.

Version 0.7.0
-------------

Released on May 7th 2021

- GitHub and PyPI versions synced.

Version 0.6.1
-------------

Released on August 20th 2018

- Fixed source file encoding

Version 0.6.0
-------------

Released on August 20th 2018

- Added reference for BVP onset detection algorithm (closes #36)
- Updated readme file
- New setup.py style
- Added online filtering class in signals.tools
- Added Pearson correlation and RMSE methods in signals.tools
- Added method to compute Welch's power spectrum in signals.tools
- Don't use detrended derivative in signals.eda.kbk_scr (closes #43)
- Various minor changes

Version 0.5.1
-------------

Released on November 29th 2017

- Fixed bug when correcting r-peaks (closes #35)
- Fixed a bug in the generation of the classifier thresholds
- Added citation information to readme file (closes #34)
- Various minor changes

Version 0.5.0
-------------

Released on August 28th 2017

- Added a simple timing module
- Added methods to help with file manipulations
- Added a logo :camera:
- Added the Matthews Correlation Coefficient as another authentication metric.
- Fixed an issue in the ECG Hamilton algorithm (closes #28)
- Various bug fixes

Version 0.4.0
-------------

Released on May 2nd 2017

- Fixed array indexing with floats (merges #23)
- Allow user to modify SCR rejection threshold (merges #24)
- Fixed the Scikit-Learn cross-validation module deprecation (closes #18)
- Added methods to compute the mean and median of a set of n-dimensional data points
- Added methods to compute the matrix profile
- Added new EMG onset detection algorithms (merges #17)
- Added finite difference method for numerical derivatives
- Fixed inconsistent decibel usage in plotting (closes #16)

Version 0.3.0
-------------

Released on December 30th 2016

- Lazy loading (merges #15)
- Python 3 compatibility (merges #13)
- Fixed usage of clustering linkage parameters
- Fixed a bug when using filtering without the forward-backward technique
- Bug fixes (closes #4, #8)
- Allow BVP parameters as inputs (merges #7)

Version 0.2.2
-------------

Released on April 20th 2016

- Makes use of new bidict API (closes #3)
- Updates package version in the requirements file
- Fixes incorrect EDA filter parameters
- Fixes heart rate smoothing (size parameter)

Version 0.2.1
-------------

Released on January 6th 2016

- Fixes incorrect BVP filter parameters (closes #2)

Version 0.2.0
-------------

Released on October 1st 2015

- Added the biometrics module, including k-NN and SVM classifiers
- Added outlier detection methods to the clustering module
- Added text-based data storage methods to the storage module
- Changed docstring style to napoleon-numpy
- Complete code style formatting
- Initial draft of the tutorial
- Bug fixes

Version 0.1.2
-------------

Released on August 29th 2015

- Alpha release
BioSPPy/source/LICENSE
ADDED
@@ -0,0 +1,32 @@
Copyright (c) 2015 Instituto de Telecomunicações. See AUTHORS for more details.

All rights reserved.

Redistribution and use in source and binary forms of the software as well
as documentation, with or without modification, are permitted provided
that the following conditions are met:

* Redistributions of source code must retain the above copyright
  notice, this list of conditions and the following disclaimer.

* Redistributions in binary form must reproduce the above
  copyright notice, this list of conditions and the following
  disclaimer in the documentation and/or other materials provided
  with the distribution.

* The names of the contributors may not be used to endorse or
  promote products derived from this software without specific
  prior written permission.

THIS SOFTWARE AND DOCUMENTATION IS PROVIDED BY THE COPYRIGHT HOLDERS AND
CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT
NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER
OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE AND DOCUMENTATION, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
DAMAGE.
BioSPPy/source/MANIFEST.in
ADDED
@@ -0,0 +1 @@
include README.md LICENSE AUTHORS.md CHANGES.md
BioSPPy/source/README.md
ADDED
@@ -0,0 +1,93 @@
> This repository is archived. The BioSPPy toolbox is now maintained at [scientisst/BioSPPy](https://github.com/scientisst/BioSPPy).

# BioSPPy - Biosignal Processing in Python

*A toolbox for biosignal processing written in Python.*

[](http://biosppy.readthedocs.org/)

The toolbox bundles together various signal processing and pattern recognition
methods geared towards the analysis of biosignals.

Highlights:

- Support for various biosignals: BVP, ECG, EDA, EEG, EMG, PCG, PPG, Respiration
- Signal analysis primitives: filtering, frequency analysis
- Clustering
- Biometrics

Documentation can be found at: <http://biosppy.readthedocs.org/>

## Installation

Installation can be easily done with `pip`:

```bash
$ pip install biosppy
```

## Simple Example

The code below loads an ECG signal from the `examples` folder, filters it,
performs R-peak detection, and computes the instantaneous heart rate.

```python
from biosppy import storage
from biosppy.signals import ecg

# load raw ECG signal
signal, mdata = storage.load_txt('./examples/ecg.txt')

# process it and plot
out = ecg.ecg(signal=signal, sampling_rate=1000., show=True)
```

This should produce a plot similar to the one below.

[]()

## Dependencies

- bidict
- h5py
- matplotlib
- numpy
- scikit-learn
- scipy
- shortuuid
- six
- joblib

## Citing

Please use the following if you need to cite BioSPPy:

- Carreiras C, Alves AP, Lourenço A, Canento F, Silva H, Fred A, *et al.*
  **BioSPPy - Biosignal Processing in Python**, 2015-,
  https://github.com/PIA-Group/BioSPPy/ [Online; accessed ```<year>-<month>-<day>```].

```latex
@Misc{,
  author = {Carlos Carreiras and Ana Priscila Alves and Andr\'{e} Louren\c{c}o and Filipe Canento and Hugo Silva and Ana Fred and others},
  title = {{BioSPPy}: Biosignal Processing in {Python}},
  year = {2015--},
  url = "https://github.com/PIA-Group/BioSPPy/",
  note = {[Online; accessed <today>]}
}
```

## License

BioSPPy is released under the BSD 3-clause license. See LICENSE for more details.

## Disclaimer

This program is distributed in the hope it will be useful and provided
to you "as is", but WITHOUT ANY WARRANTY, without even the implied
warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. This
program is NOT intended for medical diagnosis. We expressly disclaim any
liability whatsoever for any direct, indirect, consequential, incidental
or special damages, including, without limitation, lost revenues, lost
profits, losses resulting from business interruption or loss of data,
regardless of the form of action or legal theory under which the
liability may be asserted, even if advised of the possibility of such
damages.
BioSPPy/source/__init__.py
ADDED
@@ -0,0 +1,4 @@
# -*- coding: utf-8 -*-
"""
BioSPPy Project Package Initialization File
"""
BioSPPy/source/biosppy/__init__.py
ADDED
@@ -0,0 +1,21 @@
# -*- coding: utf-8 -*-
"""
biosppy
-------

A toolbox for biosignal processing written in Python.

:copyright: (c) 2015-2021 by Instituto de Telecomunicacoes
:license: BSD 3-clause, see LICENSE for more details.
"""

# compat
from __future__ import absolute_import, division, print_function

# get version
from .__version__ import __version__

# allow lazy loading
from .signals import acc, abp, bvp, ppg, pcg, ecg, eda, eeg, emg, resp, tools
from .synthesizers import ecg
from .inter_plotting import ecg, acc
BioSPPy/source/biosppy/__version__.py
ADDED
@@ -0,0 +1,13 @@
# -*- coding: utf-8 -*-
"""
biosppy.version
---------------

Version tracker.

:copyright: (c) 2015-2021 by Instituto de Telecomunicacoes
:license: BSD 3-clause, see LICENSE for more details.
"""

VERSION = (0, 8, 0)
__version__ = ".".join(map(str, VERSION))
BioSPPy/source/biosppy/biometrics.py
ADDED
|
@@ -0,0 +1,2345 @@
# -*- coding: utf-8 -*-
"""
biosppy.biometrics
------------------

This module provides classifier interfaces for identity recognition
(biometrics) applications. The core API methods are:
* enroll: add a new subject;
* dismiss: remove an existing subject;
* identify: determine the identity of collected biometric dataset;
* authenticate: verify the identity of collected biometric dataset.

:copyright: (c) 2015-2018 by Instituto de Telecomunicacoes
:license: BSD 3-clause, see LICENSE for more details.
"""

# Imports
# compat
from __future__ import absolute_import, division, print_function
from six.moves import range
import six

# built-in
import collections

# 3rd party
import numpy as np
import shortuuid
from bidict import bidict
from sklearn import model_selection as skcv
from sklearn import svm as sksvm

# local
from . import metrics, plotting, storage, utils
from .signals import tools


class SubjectError(Exception):
    """Exception raised when the subject is unknown."""

    def __init__(self, subject=None):
        self.subject = subject

    def __str__(self):
        if self.subject is None:
            return str("Subject is not enrolled.")
        else:
            return str("Subject %r is not enrolled." % self.subject)


class UntrainedError(Exception):
    """Exception raised when classifier is not trained."""

    def __str__(self):
        return str("The classifier is not trained.")


class CombinationError(Exception):
    """Exception raised when the combination method fails."""

    def __str__(self):
        return str("Combination of empty array.")


class BaseClassifier(object):
    """Base biometric classifier class.

    This class is a skeleton for actual classifier classes.
    The following methods must be overridden or adapted to build a
    new classifier:

    * __init__
    * _authenticate
    * _get_thresholds
    * _identify
    * _prepare
    * _train
    * _update

    Attributes
    ----------
    EER_IDX : int
        Reference index for the Equal Error Rate.

    """

    EER_IDX = 0

    def __init__(self):
        # generic self things
        self.is_trained = False
        self._subject2label = bidict()
        self._nbSubjects = 0
        self._thresholds = {}
        self._autoThresholds = None

        # init data storage
        self._iofile = {}

        # defer flag
        self._defer_flag = False
        self._reset_defer()

    def _reset_defer(self):
        """Reset defer buffer."""

        self._defer_dict = {'enroll': set(), 'dismiss': set()}

    def _defer(self, label, case):
        """Add deferred task.

        Parameters
        ----------
        label : str
            Internal classifier subject label.
        case : str
            One of 'enroll' or 'dismiss'.

        Notes
        -----
        * An enroll overrides a previous dismiss for the same subject.
        * A dismiss overrides a previous enroll for the same subject.

        """

        if case == 'enroll':
            self._defer_dict['enroll'].add(label)
            if label in self._defer_dict['dismiss']:
                self._defer_dict['dismiss'].remove(label)
        elif case == 'dismiss':
            self._defer_dict['dismiss'].add(label)
            if label in self._defer_dict['enroll']:
                self._defer_dict['enroll'].remove(label)

        self._defer_flag = True

    def _check_state(self):
        """Check and update the train state."""

        if self._nbSubjects > 0:
            self.is_trained = True
        else:
            self.is_trained = False

    def io_load(self, label):
        """Load enrolled subject data.

        Parameters
        ----------
        label : str
            Internal classifier subject label.

        Returns
        -------
        data : array
            Subject data.

        """

        return self._iofile[label]

    def io_save(self, label, data):
        """Save subject data.

        Parameters
        ----------
        label : str
            Internal classifier subject label.
        data : array
            Subject data.

        """

        self._iofile[label] = data

    def io_del(self, label):
        """Delete subject data.

        Parameters
        ----------
        label : str
            Internal classifier subject label.

        """

        del self._iofile[label]

    def save(self, path):
        """Save classifier instance to a file.

        Parameters
        ----------
        path : str
            Destination file path.

        """

        storage.serialize(self, path)

    @classmethod
    def load(cls, path):
        """Load classifier instance from a file.

        Parameters
        ----------
        path : str
            Source file path.

        Returns
        -------
        clf : object
            Loaded classifier instance.

        """

        # load classifier
        clf = storage.deserialize(path)

        # check class type
        if not isinstance(clf, cls):
            raise TypeError("Mismatch between target class and loaded file.")

        return clf

    def check_subject(self, subject):
        """Check if a subject is enrolled.

        Parameters
        ----------
        subject : hashable
            Subject identity.

        Returns
        -------
        check : bool
            If True, the subject is enrolled.

        """

        if self.is_trained:
            return subject in self._subject2label

        return False

    def list_subjects(self):
        """List all the enrolled subjects.

        Returns
        -------
        subjects : list
            Enrolled subjects.

        """

        subjects = list(self._subject2label)

        return subjects

    def enroll(self, data=None, subject=None, deferred=False):
        """Enroll new data for a subject.

        If the subject is already enrolled, new data is combined with
        existing data.

        Parameters
        ----------
        data : array
            Data to enroll.
        subject : hashable
            Subject identity.
        deferred : bool, optional
            If True, computations are delayed until `flush` is called.

        Notes
        -----
        * When using deferred calls, an enroll overrides a previous dismiss
          for the same subject.

        """

        # check inputs
        if data is None:
            raise TypeError("Please specify the data to enroll.")

        if subject is None:
            raise TypeError("Please specify the subject identity.")

        if self.check_subject(subject):
            # load existing
            label = self._subject2label[subject]
            old = self.io_load(label)

            # combine data
            data = self._update(old, data)
        else:
            # create new label
            label = shortuuid.uuid()
            self._subject2label[subject] = label
            self._nbSubjects += 1

        # store data
        self.io_save(label, data)

        if deferred:
            # delay computations
            self._defer(label, 'enroll')
        else:
            self._train([label], None)
            self._check_state()
            self.update_thresholds()

    def dismiss(self, subject=None, deferred=False):
        """Remove a subject.

        Parameters
        ----------
        subject : hashable
            Subject identity.
        deferred : bool, optional
            If True, computations are delayed until `flush` is called.

        Raises
        ------
        SubjectError
            If the subject to remove is not enrolled.

        Notes
        -----
        * When using deferred calls, a dismiss overrides a previous enroll
          for the same subject.

        """

        # check inputs
        if subject is None:
            raise TypeError("Please specify the subject identity.")

        if not self.check_subject(subject):
            raise SubjectError(subject)

        label = self._subject2label[subject]
        del self._subject2label[subject]
        del self._thresholds[label]
        self._nbSubjects -= 1
        self.io_del(label)

        if deferred:
            self._defer(label, 'dismiss')
        else:
            self._train(None, [label])
            self._check_state()
            self.update_thresholds()

    def batch_train(self, data=None):
        """Train the classifier in batch mode.

        Parameters
        ----------
        data : dict
            Dictionary holding training data for each subject; if the object
            for a subject is `None`, performs a `dismiss`.

        """

        # check inputs
        if data is None:
            raise TypeError("Please specify the data to train.")

        for sub, val in six.iteritems(data):
            if val is None:
                try:
                    self.dismiss(sub, deferred=True)
                except SubjectError:
                    continue
            else:
                self.enroll(val, sub, deferred=True)

        self.flush()

    def flush(self):
        """Flush deferred computations."""

        if self._defer_flag:
            self._defer_flag = False

            # train
            enroll = list(self._defer_dict['enroll'])
            dismiss = list(self._defer_dict['dismiss'])
            self._train(enroll, dismiss)

            # update thresholds
            self._check_state()
            self.update_thresholds()

            # reset
            self._reset_defer()

    def update_thresholds(self, fraction=1.):
        """Update subject-specific thresholds based on the enrolled data.

        Parameters
        ----------
        fraction : float, optional
            Fraction of samples to select from training data.

        """

        ths = self.get_thresholds(force=True)

        # gather data to test
        data = {}
        for subject, label in six.iteritems(self._subject2label):
            # select a random fraction of the training data
            aux = self.io_load(label)
            indx = list(range(len(aux)))
            use, _ = utils.random_fraction(indx, fraction, sort=True)

            data[subject] = aux[use]

        # evaluate classifier
        _, res = self.evaluate(data, ths)

        # choose thresholds at EER
        for subject, label in six.iteritems(self._subject2label):
            EER_auth = res['subject'][subject]['authentication']['rates']['EER']
            self.set_auth_thr(label, EER_auth[self.EER_IDX, 0], ready=True)

            EER_id = res['subject'][subject]['identification']['rates']['EER']
            self.set_id_thr(label, EER_id[self.EER_IDX, 0], ready=True)

    def set_auth_thr(self, subject, threshold, ready=False):
        """Set the authentication threshold of a subject.

        Parameters
        ----------
        subject : hashable
            Subject identity.
        threshold : int, float
            Threshold value.
        ready : bool, optional
            If True, `subject` is the internal classifier label.

        """

        if not ready:
            if not self.check_subject(subject):
                raise SubjectError(subject)
            subject = self._subject2label[subject]

        try:
            self._thresholds[subject]['auth'] = threshold
        except KeyError:
            self._thresholds[subject] = {'auth': threshold, 'id': None}

    def get_auth_thr(self, subject, ready=False):
        """Get the authentication threshold of a subject.

        Parameters
        ----------
        subject : hashable
            Subject identity.
        ready : bool, optional
            If True, `subject` is the internal classifier label.

        Returns
        -------
        threshold : int, float
            Threshold value.

        """

        if not ready:
            if not self.check_subject(subject):
                raise SubjectError(subject)
            subject = self._subject2label[subject]

        return self._thresholds[subject].get('auth', None)

    def set_id_thr(self, subject, threshold, ready=False):
        """Set the identification threshold of a subject.

        Parameters
        ----------
        subject : hashable
            Subject identity.
        threshold : int, float
            Threshold value.
        ready : bool, optional
            If True, `subject` is the internal classifier label.

        """

        if not ready:
            if not self.check_subject(subject):
                raise SubjectError(subject)
            subject = self._subject2label[subject]

        try:
            self._thresholds[subject]['id'] = threshold
        except KeyError:
            self._thresholds[subject] = {'auth': None, 'id': threshold}

    def get_id_thr(self, subject, ready=False):
        """Get the identification threshold of a subject.

        Parameters
        ----------
        subject : hashable
            Subject identity.
        ready : bool, optional
            If True, `subject` is the internal classifier label.

        Returns
        -------
        threshold : int, float
            Threshold value.

        """

        if not ready:
            if not self.check_subject(subject):
                raise SubjectError(subject)
            subject = self._subject2label[subject]

        return self._thresholds[subject].get('id', None)

    def get_thresholds(self, force=False):
        """Get an array of reasonable thresholds.

        Parameters
        ----------
        force : bool, optional
            If True, forces generation of thresholds.

        Returns
        -------
        ths : array
            Generated thresholds.

        """

        if force or (self._autoThresholds is None):
            self._autoThresholds = self._get_thresholds()

        return self._autoThresholds

    def authenticate(self, data, subject, threshold=None):
        """Authenticate a set of feature vectors, allegedly belonging to the
        given subject.

        Parameters
        ----------
        data : array
            Input test data.
        subject : hashable
            Subject identity.
        threshold : int, float, optional
            Authentication threshold.

        Returns
        -------
        decision : array
            Authentication decision for each input sample.

        """

        # check train state
        if not self.is_trained:
            raise UntrainedError

        # check subject
        if not self.check_subject(subject):
            raise SubjectError(subject)

        label = self._subject2label[subject]

        # check threshold
        if threshold is None:
            threshold = self.get_auth_thr(label, ready=True)

        # prepare data
        aux = self._prepare(data, targets=label)

        # authenticate
        decision = self._authenticate(aux, label, threshold)

        return decision

    def identify(self, data, threshold=None):
        """Identify a set of feature vectors.

        Parameters
        ----------
        data : array
            Input test data.
        threshold : int, float, optional
            Identification threshold.

        Returns
        -------
        subjects : list
            Identity of each input sample.

        """

        # check train state
        if not self.is_trained:
            raise UntrainedError

        # prepare data
        aux = self._prepare(data)

        # identify
        labels = self._identify(aux, threshold)

        # translate class labels
        subjects = [self._subject2label.inv.get(item, '') for item in labels]

        return subjects

    def evaluate(self, data, thresholds=None, path=None, show=False):
        """Assess the performance of the classifier in both authentication and
        identification scenarios.

        Parameters
        ----------
        data : dict
            Dictionary holding test data for each subject.
        thresholds : array, optional
            Classifier thresholds to use.
        path : str, optional
            If provided, the plot will be saved to the specified file.
        show : bool, optional
            If True, show a summary plot.

        Returns
        -------
        classification : dict
            Classification results.
        assessment : dict
            Biometric statistics.

        """

        # check train state
        if not self.is_trained:
            raise UntrainedError

        # check thresholds
        if thresholds is None:
            thresholds = self.get_thresholds()

        # get subjects
        subjects = [item for item in data if self.check_subject(item)]
        if len(subjects) == 0:
            raise ValueError("No enrolled subjects in test set.")

        results = {
            'subjectList': subjects,
            'subjectDict': self._subject2label,
        }

        for subject in subjects:
            # prepare data
            aux = self._prepare(data[subject])

            # test
            auth_res = []
            id_res = []
            for th in thresholds:
                # authentication
                auth = []
                for subject_tst in subjects:
                    label = self._subject2label[subject_tst]
                    auth.append(self._authenticate(aux, label, th))

                auth_res.append(np.array(auth))

                # identification
                id_res.append(self._identify(aux, th))

            auth_res = np.array(auth_res)
            id_res = np.array(id_res)
            results[subject] = {'authentication': auth_res,
                                'identification': id_res,
                                }

        # assess classification results
        assess, = assess_classification(results, thresholds)

        # output
        args = (results, assess)
        names = ('classification', 'assessment')
        out = utils.ReturnTuple(args, names)

        if show:
            # plot
            plotting.plot_biometrics(assess,
                                     self.EER_IDX,
                                     path=path,
+
show=True)
|
| 702 |
+
|
| 703 |
+
return out
|
| 704 |
+
|
| 705 |
+
@classmethod
|
| 706 |
+
def cross_validation(cls, data, labels, cv, thresholds=None, **kwargs):
|
| 707 |
+
"""Perform Cross Validation (CV) on a data set.
|
| 708 |
+
|
| 709 |
+
Parameters
|
| 710 |
+
----------
|
| 711 |
+
data : array
|
| 712 |
+
An m by n array of m data samples in an n-dimensional space.
|
| 713 |
+
labels : list, array
|
| 714 |
+
A list of m class labels.
|
| 715 |
+
cv : CV iterator
|
| 716 |
+
A `sklearn.model_selection` iterator.
|
| 717 |
+
thresholds : array, optional
|
| 718 |
+
Classifier thresholds to use.
|
| 719 |
+
``**kwargs`` : dict, optional
|
| 720 |
+
Classifier parameters.
|
| 721 |
+
|
| 722 |
+
Returns
|
| 723 |
+
-------
|
| 724 |
+
runs : list
|
| 725 |
+
Evaluation results for each CV run.
|
| 726 |
+
assessment : dict
|
| 727 |
+
Final CV biometric statistics.
|
| 728 |
+
|
| 729 |
+
"""
|
| 730 |
+
|
| 731 |
+
runs = []
|
| 732 |
+
aux = []
|
| 733 |
+
for train, test in cv:
|
| 734 |
+
# train data set
|
| 735 |
+
train_idx = collections.defaultdict(list)
|
| 736 |
+
for item in train:
|
| 737 |
+
lbl = labels[item]
|
| 738 |
+
train_idx[lbl].append(item)
|
| 739 |
+
|
| 740 |
+
train_data = {sub: data[idx] for sub, idx in six.iteritems(train_idx)}
|
| 741 |
+
|
| 742 |
+
# test data set
|
| 743 |
+
test_idx = collections.defaultdict(list)
|
| 744 |
+
for item in test:
|
| 745 |
+
lbl = labels[item]
|
| 746 |
+
test_idx[lbl].append(item)
|
| 747 |
+
|
| 748 |
+
test_data = {sub: data[idx] for sub, idx in six.iteritems(test_idx)}
|
| 749 |
+
|
| 750 |
+
# instantiate classifier
|
| 751 |
+
clf = cls(**kwargs)
|
| 752 |
+
clf.batch_train(train_data)
|
| 753 |
+
res = clf.evaluate(test_data, thresholds=thresholds)
|
| 754 |
+
del clf
|
| 755 |
+
|
| 756 |
+
aux.append(res['assessment'])
|
| 757 |
+
runs.append(res)
|
| 758 |
+
|
| 759 |
+
# assess runs
|
| 760 |
+
if len(runs) > 0:
|
| 761 |
+
subjects = runs[0]['classification']['subjectList']
|
| 762 |
+
assess, = assess_runs(results=aux, subjects=subjects)
|
| 763 |
+
else:
|
| 764 |
+
raise ValueError("CV iterator empty or exhausted.")
|
| 765 |
+
|
| 766 |
+
# output
|
| 767 |
+
args = (runs, assess)
|
| 768 |
+
names = ('runs', 'assessment')
|
| 769 |
+
|
| 770 |
+
return utils.ReturnTuple(args, names)
|
| 771 |
+
|
| 772 |
+
def _authenticate(self, data, label, threshold):
|
| 773 |
+
"""Authenticate a set of feature vectors, allegedly belonging to the
|
| 774 |
+
given subject.
|
| 775 |
+
|
| 776 |
+
Parameters
|
| 777 |
+
----------
|
| 778 |
+
data : array
|
| 779 |
+
Input test data.
|
| 780 |
+
label : str
|
| 781 |
+
Internal classifier subject label.
|
| 782 |
+
threshold : int, float
|
| 783 |
+
Authentication threshold.
|
| 784 |
+
|
| 785 |
+
Returns
|
| 786 |
+
-------
|
| 787 |
+
decision : array
|
| 788 |
+
Authentication decision for each input sample.
|
| 789 |
+
|
| 790 |
+
"""
|
| 791 |
+
|
| 792 |
+
decision = np.zeros(len(data), dtype='bool')
|
| 793 |
+
|
| 794 |
+
return decision
|
| 795 |
+
|
| 796 |
+
def _get_thresholds(self):
|
| 797 |
+
"""Generate an array of reasonable thresholds.
|
| 798 |
+
|
| 799 |
+
Returns
|
| 800 |
+
-------
|
| 801 |
+
ths : array
|
| 802 |
+
Generated thresholds.
|
| 803 |
+
|
| 804 |
+
"""
|
| 805 |
+
|
| 806 |
+
ths = np.array([])
|
| 807 |
+
|
| 808 |
+
return ths
|
| 809 |
+
|
| 810 |
+
def _identify(self, data, threshold=None):
|
| 811 |
+
"""Identify a set of feature vectors.
|
| 812 |
+
|
| 813 |
+
Parameters
|
| 814 |
+
----------
|
| 815 |
+
data : array
|
| 816 |
+
Input test data.
|
| 817 |
+
threshold : int, float
|
| 818 |
+
Identification threshold.
|
| 819 |
+
|
| 820 |
+
Returns
|
| 821 |
+
-------
|
| 822 |
+
labels : list
|
| 823 |
+
Identity (internal label) of each input sample.
|
| 824 |
+
|
| 825 |
+
"""
|
| 826 |
+
|
| 827 |
+
labels = [''] * len(data)
|
| 828 |
+
|
| 829 |
+
return labels
|
| 830 |
+
|
| 831 |
+
def _prepare(self, data, targets=None):
|
| 832 |
+
"""Prepare data to be processed.
|
| 833 |
+
|
| 834 |
+
Parameters
|
| 835 |
+
----------
|
| 836 |
+
data : array
|
| 837 |
+
Data to process.
|
| 838 |
+
targets : list, str, optional
|
| 839 |
+
Target subject labels.
|
| 840 |
+
|
| 841 |
+
Returns
|
| 842 |
+
-------
|
| 843 |
+
out : object
|
| 844 |
+
Processed data.
|
| 845 |
+
|
| 846 |
+
"""
|
| 847 |
+
|
| 848 |
+
# target class labels
|
| 849 |
+
if targets is None:
|
| 850 |
+
targets = list(self._subject2label.values())
|
| 851 |
+
elif isinstance(targets, six.string_types):
|
| 852 |
+
targets = [targets]
|
| 853 |
+
|
| 854 |
+
return data
|
| 855 |
+
|
| 856 |
+
def _train(self, enroll=None, dismiss=None):
|
| 857 |
+
"""Train the classifier.
|
| 858 |
+
|
| 859 |
+
Parameters
|
| 860 |
+
----------
|
| 861 |
+
enroll : list, optional
|
| 862 |
+
Labels of new or updated subjects.
|
| 863 |
+
dismiss : list, optional
|
| 864 |
+
Labels of deleted subjects.
|
| 865 |
+
|
| 866 |
+
"""
|
| 867 |
+
|
| 868 |
+
if enroll is None:
|
| 869 |
+
enroll = []
|
| 870 |
+
if dismiss is None:
|
| 871 |
+
dismiss = []
|
| 872 |
+
|
| 873 |
+
# process dismiss
|
| 874 |
+
for _ in dismiss:
|
| 875 |
+
pass
|
| 876 |
+
|
| 877 |
+
# process enroll
|
| 878 |
+
for _ in enroll:
|
| 879 |
+
pass
|
| 880 |
+
|
| 881 |
+
def _update(self, old, new):
|
| 882 |
+
"""Combine new data with existing templates (for one subject).
|
| 883 |
+
|
| 884 |
+
Parameters
|
| 885 |
+
----------
|
| 886 |
+
old : array
|
| 887 |
+
Existing data.
|
| 888 |
+
new : array
|
| 889 |
+
New data.
|
| 890 |
+
|
| 891 |
+
Returns
|
| 892 |
+
-------
|
| 893 |
+
out : array
|
| 894 |
+
Combined data.
|
| 895 |
+
|
| 896 |
+
"""
|
| 897 |
+
|
| 898 |
+
return new
|
| 899 |
+
|
| 900 |
+
|
| 901 |
+
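The grouping step inside `cross_validation` turns the flat index arrays yielded by a `sklearn.model_selection` iterator into per-subject data dictionaries. That step can be sketched standalone with toy data and made-up labels (a minimal sketch, independent of the classifier classes; `train` stands in for one CV split):

```python
import collections

import numpy as np

# toy data: 6 samples, 2 features, with a subject label per sample
data = np.arange(12).reshape(6, 2)
labels = ['a', 'a', 'b', 'b', 'a', 'b']
train = [0, 2, 4, 5]  # sample indices produced by one CV split

# group the split indices by subject label
train_idx = collections.defaultdict(list)
for item in train:
    train_idx[labels[item]].append(item)

# build the {subject: samples} dictionary consumed by batch_train
train_data = {sub: data[idx] for sub, idx in train_idx.items()}

print(sorted(train_data))     # ['a', 'b']
print(train_data['a'].shape)  # (2, 2)
```

The same grouping is applied to the test indices of each split before calling `evaluate`.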
class KNN(BaseClassifier):
    """K Nearest Neighbors (k-NN) biometric classifier.

    Parameters
    ----------
    k : int, optional
        Number of neighbors.
    metric : str, optional
        Distance metric.
    metric_args : dict, optional
        Additional keyword arguments are passed to the distance function.

    Attributes
    ----------
    EER_IDX : int
        Reference index for the Equal Error Rate.

    """

    EER_IDX = 0

    def __init__(self, k=3, metric='euclidean', metric_args=None):
        # parent __init__
        super(KNN, self).__init__()

        # algorithm self things
        self.k = k
        self.metric = metric
        if metric_args is None:
            metric_args = {}
        self.metric_args = metric_args

        # test metric args
        _ = metrics.pdist(np.zeros((2, 2)), metric, **metric_args)

        # minimum threshold
        self.min_thr = 10 * np.finfo('float').eps

    def _sort(self, dists, train_labels):
        """Sort the computed distances.

        Parameters
        ----------
        dists : array
            Unsorted computed distances.
        train_labels : list
            Unsorted target subject labels.

        Returns
        -------
        dists : array
            Sorted computed distances.
        train_labels : list
            Sorted target subject labels.

        """

        ind = dists.argsort()
        # sneaky trick from http://stackoverflow.com/questions/6155649
        static_inds = np.arange(dists.shape[0]).reshape((dists.shape[0], 1))
        dists = dists[static_inds, ind]
        train_labels = train_labels[static_inds, ind]

        return dists, train_labels

    def _authenticate(self, data, label, threshold):
        """Authenticate a set of feature vectors, allegedly belonging to the
        given subject.

        Parameters
        ----------
        data : array
            Input test data.
        label : str
            Internal classifier subject label.
        threshold : int, float
            Authentication threshold.

        Returns
        -------
        decision : array
            Authentication decision for each input sample.

        """

        # unpack prepared data
        dists = data['dists']
        train_labels = data['train_labels']

        # select based on subject label
        aux = []
        ns = len(dists)
        for i in range(ns):
            aux.append(dists[i, train_labels[i, :] == label])

        dists = np.array(aux)

        # nearest neighbors
        dists = dists[:, :self.k]

        decision = np.zeros(ns, dtype='bool')
        for i in range(ns):
            # compare distances to threshold
            count = np.sum(dists[i, :] <= threshold)

            # decide accept
            if count > (self.k // 2):
                decision[i] = True

        return decision

    def _get_thresholds(self):
        """Generate an array of reasonable thresholds.

        For metrics other than 'cosine' or 'pcosine', which have clear
        limits, generates an array based on the maximum distances between
        enrolled subjects.

        Returns
        -------
        ths : array
            Generated thresholds.

        """

        if self.metric == 'cosine':
            return np.linspace(self.min_thr, 2., 100)
        elif self.metric == 'pcosine':
            return np.linspace(self.min_thr, 1., 100)

        maxD = []
        for _ in range(3):
            for label in list(six.itervalues(self._subject2label)):
                # randomly select samples
                aux = self.io_load(label)
                ind = np.random.randint(0, aux.shape[0], 3)
                obs = aux[ind]

                # compute distances
                dists = self._prepare(obs)['dists']
                maxD.append(np.max(dists))

        # maximum distance
        maxD = 1.5 * np.max(maxD)

        ths = np.linspace(self.min_thr, maxD, 100)

        return ths

    def _identify(self, data, threshold=None):
        """Identify a set of feature vectors.

        Parameters
        ----------
        data : array
            Input test data.
        threshold : int, float
            Identification threshold.

        Returns
        -------
        labels : list
            Identity (internal label) of each input sample.

        """

        if threshold is None:
            thrFcn = lambda label: self.get_id_thr(label, ready=True)
        else:
            thrFcn = lambda label: threshold

        # unpack prepared data
        dists = data['dists']
        train_labels = data['train_labels']

        # nearest neighbors
        dists = dists[:, :self.k]
        train_labels = train_labels[:, :self.k]
        ns = len(dists)

        labels = []
        for i in range(ns):
            lbl, _ = majority_rule(train_labels[i, :], random=True)

            # compare distances to threshold
            count = np.sum(dists[i, :] <= thrFcn(lbl))

            # decide
            if count > (self.k // 2):
                # accept
                labels.append(lbl)
            else:
                # reject
                labels.append('')

        return labels

    def _prepare(self, data, targets=None):
        """Prepare data to be processed.

        Computes the distances of the input data set to the target subjects.

        Parameters
        ----------
        data : array
            Data to process.
        targets : list, str, optional
            Target subject labels.

        Returns
        -------
        out : dict
            Processed data containing the computed distances (`dists`) and the
            target subject labels (`train_labels`).

        """

        # target class labels
        if targets is None:
            targets = list(six.itervalues(self._subject2label))
        elif isinstance(targets, six.string_types):
            targets = [targets]

        dists = []
        train_labels = []

        for label in targets:
            # compute distances
            D = metrics.cdist(data, self.io_load(label),
                              metric=self.metric, **self.metric_args)

            dists.append(D)
            train_labels.append(np.tile(label, D.shape))

        dists = np.concatenate(dists, axis=1)
        train_labels = np.concatenate(train_labels, axis=1)

        # sort
        dists, train_labels = self._sort(dists, train_labels)

        return {'dists': dists, 'train_labels': train_labels}

    def _update(self, old, new):
        """Combine new data with existing templates (for one subject).

        Simply concatenates old data with new data.

        Parameters
        ----------
        old : array
            Existing data.
        new : array
            New data.

        Returns
        -------
        out : array
            Combined data.

        """

        out = np.concatenate([old, new], axis=0)

        return out

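The decision rule in `KNN._authenticate` reduces to a majority vote over the k smallest template distances against the threshold. A minimal standalone sketch of that rule, with plain NumPy and hypothetical distance values (no classifier state involved):

```python
import numpy as np

def knn_accept(dists, k=3, threshold=0.5):
    """Accept a sample if a simple majority of its k smallest template
    distances fall below the threshold (the rule used by KNN._authenticate)."""
    nearest = np.sort(dists)[:k]          # k nearest template distances
    count = np.sum(nearest <= threshold)  # votes for 'accept'
    return count > (k // 2)               # simple majority

# distances from one test sample to a subject's stored templates
print(knn_accept(np.array([0.2, 0.4, 0.9, 1.3]), k=3, threshold=0.5))  # True
print(knn_accept(np.array([0.6, 0.8, 0.9, 1.3]), k=3, threshold=0.5))  # False
```

With `k=3`, two of the three nearest distances must fall under the threshold for the claim to be accepted.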
class SVM(BaseClassifier):
    """Support Vector Machines (SVM) biometric classifier.

    Wraps the 'OneClassSVM' and 'SVC' classes from 'scikit-learn'.

    Parameters
    ----------
    C : float, optional
        Penalty parameter C of the error term.
    kernel : str, optional
        Specifies the kernel type to be used in the algorithm. It must be one
        of 'linear', 'poly', 'rbf', 'sigmoid', 'precomputed' or a callable.
        If none is given, 'rbf' will be used. If a callable is given it is
        used to precompute the kernel matrix.
    degree : int, optional
        Degree of the polynomial kernel function ('poly'). Ignored by all other
        kernels.
    gamma : float, optional
        Kernel coefficient for 'rbf', 'poly' and 'sigmoid'. If gamma is 'auto'
        then 1/n_features will be used instead.
    coef0 : float, optional
        Independent term in kernel function. It is only significant in 'poly'
        and 'sigmoid'.
    shrinking : bool, optional
        Whether to use the shrinking heuristic.
    tol : float, optional
        Tolerance for stopping criterion.
    cache_size : float, optional
        Specify the size of the kernel cache (in MB).
    max_iter : int, optional
        Hard limit on iterations within solver, or -1 for no limit.
    random_state : int, RandomState, optional
        The seed of the pseudo random number generator to use when shuffling
        the data for probability estimation.

    Attributes
    ----------
    EER_IDX : int
        Reference index for the Equal Error Rate.

    """

    EER_IDX = -1

    def __init__(self,
                 C=1.0,
                 kernel='linear',
                 degree=3,
                 gamma='auto',
                 coef0=0.0,
                 shrinking=True,
                 tol=0.001,
                 cache_size=200,
                 max_iter=-1,
                 random_state=None):
        # parent __init__
        super(SVM, self).__init__()

        # algorithm self things
        self._models = {}
        self._clf_kwargs = {
            'C': C,
            'kernel': kernel,
            'degree': degree,
            'gamma': gamma,
            'coef0': coef0,
            'shrinking': shrinking,
            'tol': tol,
            'cache_size': cache_size,
            'max_iter': max_iter,
            'random_state': random_state,
        }

        # minimum threshold
        self.min_thr = 10 * np.finfo('float').eps

    def _get_weights(self, n1, n2):
        """Compute class weights.

        The weights are inversely proportional to the number of samples in
        each class.

        Parameters
        ----------
        n1 : int
            Number of samples in the first class.
        n2 : int
            Number of samples in the second class.

        Returns
        -------
        weights : dict
            Weights for each class.

        """

        w = np.array([1. / n1, 1. / n2])
        w *= 2 / np.sum(w)
        weights = {-1: w[0], 1: w[1]}

        return weights

    def _get_single_clf(self, X, label):
        """Instantiate and train a One Class SVM classifier.

        Parameters
        ----------
        X : array
            Training data.
        label : str
            Class label.

        """

        clf = sksvm.OneClassSVM(kernel='rbf', nu=0.1)
        clf.fit(X)

        # add to models
        self._models[('', label)] = clf

    def _get_kernel_clf(self, X1, X2, n1, n2, label1, label2):
        """Instantiate and train a SVC SVM classifier.

        Parameters
        ----------
        X1 : array
            Training data for the first class.
        X2 : array
            Training data for the second class.
        n1 : int
            Number of samples in the first class.
        n2 : int
            Number of samples in the second class.
        label1 : str
            Label for the first class.
        label2 : str
            Label for the second class.

        """

        # prepare data to train
        X = np.concatenate((X1, X2), axis=0)
        Y = np.ones(n1 + n2)

        pair = self._convert_pair((label1, label2))
        if pair[0] == label1:
            Y[:n1] = -1
        else:
            Y[n1:] = -1

        # class weights
        weights = self._get_weights(n1, n2)

        # instantiate and fit
        clf = sksvm.SVC(class_weight=weights, **self._clf_kwargs)
        clf.fit(X, Y)

        # add to models
        self._models[pair] = clf

    def _del_clf(self, pair):
        """Delete a binary classifier.

        Parameters
        ----------
        pair : list, tuple
            Label pair.

        """

        pair = self._convert_pair(pair)
        m = self._models.pop(pair)
        del m

    def _convert_pair(self, pair):
        """Sort and convert a label pair to the internal representation format.

        Parameters
        ----------
        pair : list, tuple
            Input label pair.

        Returns
        -------
        pair : tuple
            Sorted label pair.

        """

        pair = tuple(sorted(pair))

        return pair

    def _predict(self, pair, X):
        """Get a classifier prediction of the input data, given the label pair.

        Parameters
        ----------
        pair : list, tuple
            Label pair.
        X : array
            Input data to classify.

        Returns
        -------
        prediction : array
            Prediction for each sample in the input data.

        """

        # convert pair
        pair = self._convert_pair(pair)

        # classify
        aux = self._models[pair].predict(X)

        prediction = []
        for item in aux:
            if item < 0:
                prediction.append(pair[0])
            elif item > 0:
                prediction.append(pair[1])
            else:
                prediction.append('')

        prediction = np.array(prediction)

        return prediction

    def _authenticate(self, data, label, threshold):
        """Authenticate a set of feature vectors, allegedly belonging to the
        given subject.

        Parameters
        ----------
        data : array
            Input test data.
        label : str
            Internal classifier subject label.
        threshold : int, float
            Authentication threshold.

        Returns
        -------
        decision : array
            Authentication decision for each input sample.

        """

        # unpack prepared data
        aux = data['predictions']
        ns = aux.shape[1]
        pairs = data['pairs']

        # normalization
        if self._nbSubjects > 1:
            norm = float(self._nbSubjects - 1)
        else:
            norm = 1.0

        # select pairs
        sel = np.nonzero([label in p for p in pairs])[0]
        aux = aux[sel, :]

        decision = []
        for i in range(ns):
            # determine majority
            predMax, count = majority_rule(aux[:, i], random=True)
            rate = float(count) / norm

            if predMax == '':
                decision.append(False)
            else:
                # compare with threshold
                if rate > threshold:
                    decision.append(predMax == label)
                else:
                    decision.append(False)

        decision = np.array(decision)

        return decision

    def _get_thresholds(self):
        """Generate an array of reasonable thresholds.

        The thresholds correspond to the relative number of binary classifiers
        that agree on a class.

        Returns
        -------
        ths : array
            Generated thresholds.

        """

        ths = np.linspace(self.min_thr, 1.0, 100)

        return ths

    def _identify(self, data, threshold=None):
        """Identify a set of feature vectors.

        Parameters
        ----------
        data : array
            Input test data.
        threshold : int, float
            Identification threshold.

        Returns
        -------
        labels : list
            Identity (internal label) of each input sample.

        """

        if threshold is None:
            thrFcn = lambda label: self.get_id_thr(label, ready=True)
        else:
            thrFcn = lambda label: threshold

        # unpack prepared data
        aux = data['predictions']
        ns = aux.shape[1]

        # normalization
        if self._nbSubjects > 1:
            norm = float(self._nbSubjects - 1)
        else:
            norm = 1.0

        labels = []
        for i in range(ns):
            # determine majority
            predMax, count = majority_rule(aux[:, i], random=True)
            rate = float(count) / norm

            if predMax == '':
                labels.append('')
            else:
                # compare with threshold
                if rate > thrFcn(predMax):
                    # accept
                    labels.append(predMax)
                else:
                    # reject
                    labels.append('')

        return labels

    def _prepare(self, data, targets=None):
        """Prepare data to be processed.

        Computes the predictions for each of the targeted classifier pairs.

        Parameters
        ----------
        data : array
            Data to process.
        targets : list, str, optional
            Target subject labels.

        Returns
        -------
        out : dict
            Processed data containing an array with the predictions of each
            input sample (`predictions`) and a list with the target label
            pairs (`pairs`).

        """

        # target class labels
        if self._nbSubjects == 1:
            pairs = list(self._models)
        else:
            if targets is None:
                pairs = list(self._models)
            elif isinstance(targets, six.string_types):
                labels = list(
                    set(self._subject2label.values()) - set([targets]))
                pairs = [[targets, lbl] for lbl in labels]
            else:
                pairs = []
                for t in targets:
                    labels = list(set(self._subject2label.values()) - set([t]))
                    pairs.extend([t, lbl] for lbl in labels)

        # predict
        predictions = np.array([self._predict(p, data) for p in pairs])

        out = {'predictions': predictions, 'pairs': pairs}

        return out

    def _train(self, enroll=None, dismiss=None):
        """Train the classifier.

        Parameters
        ----------
        enroll : list, optional
            Labels of new or updated subjects.
        dismiss : list, optional
            Labels of deleted subjects.

        """

        if enroll is None:
            enroll = []
        if dismiss is None:
            dismiss = []

        # process dismiss
        src_pairs = list(self._models)
        pairs = []
        for t in dismiss:
            pairs.extend([p for p in src_pairs if t in p])

        for p in pairs:
            self._del_clf(p)

        # process enroll
        existing = list(set(self._subject2label.values()) - set(enroll))
        for i, t1 in enumerate(enroll):
            X1 = self.io_load(t1)
            n1 = len(X1)

            # existing subjects
            for t2 in existing:
                X2 = self.io_load(t2)
                n2 = len(X2)
                self._get_kernel_clf(X1, X2, n1, n2, t1, t2)

            # new subjects
            for t2 in enroll[i + 1:]:
                X2 = self.io_load(t2)
                n2 = len(X2)
                self._get_kernel_clf(X1, X2, n1, n2, t1, t2)

        # check singles
        if self._nbSubjects == 1:
            label = list(six.itervalues(self._subject2label))[0]
            X = self.io_load(label)
            self._get_single_clf(X, label)
        elif self._nbSubjects > 1:
            aux = [p for p in self._models if '' in p]
            if len(aux) != 0:
                for p in aux:
                    self._del_clf(p)

    def _update(self, old, new):
        """Combine new data with existing templates (for one subject).

        Simply concatenates old data with new data.

        Parameters
        ----------
        old : array
            Existing data.
        new : array
            New data.

        Returns
        -------
        out : array
            Combined data.

        """

        out = np.concatenate([old, new], axis=0)

        return out

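The headline rates computed by `get_auth_rates` follow directly from the per-threshold confusion counts. A standalone toy sketch of the core definitions (the full function additionally sweeps thresholds, locates EER points, and guards against zero divisions; the counts here are made up):

```python
import numpy as np

# toy confusion counts at three thresholds
TP = np.array([9., 7., 4.])
FP = np.array([3., 2., 1.])
TN = np.array([7., 8., 9.])
FN = np.array([1., 3., 6.])

TAR = TP / (TP + FN)                   # True Accept Rate (sensitivity)
FAR = FP / (FP + TN)                   # False Accept Rate
FRR = FN / (TP + FN)                   # False Reject Rate (1 - TAR)
Acc = (TP + TN) / (TP + TN + FP + FN)  # Accuracy

print(TAR)  # [0.9 0.7 0.4]
print(FAR)  # [0.3 0.2 0.1]
```

Raising the threshold trades false accepts for false rejects; the Equal Error Rate is the operating point where FAR and FRR cross.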
+def get_auth_rates(TP=None, FP=None, TN=None, FN=None, thresholds=None):
+    """Compute authentication rates from the confusion matrix.
+
+    Parameters
+    ----------
+    TP : array
+        True Positive counts for each classifier threshold.
+    FP : array
+        False Positive counts for each classifier threshold.
+    TN : array
+        True Negative counts for each classifier threshold.
+    FN : array
+        False Negative counts for each classifier threshold.
+    thresholds : array
+        Classifier thresholds.
+
+    Returns
+    -------
+    Acc : array
+        Accuracy at each classifier threshold.
+    TAR : array
+        True Accept Rate at each classifier threshold.
+    FAR : array
+        False Accept Rate at each classifier threshold.
+    FRR : array
+        False Reject Rate at each classifier threshold.
+    TRR : array
+        True Reject Rate at each classifier threshold.
+    EER : array
+        Equal Error Rate points, with format (threshold, rate).
+    Err : array
+        Error rate at each classifier threshold.
+    PPV : array
+        Positive Predictive Value at each classifier threshold.
+    FDR : array
+        False Discovery Rate at each classifier threshold.
+    NPV : array
+        Negative Predictive Value at each classifier threshold.
+    FOR : array
+        False Omission Rate at each classifier threshold.
+    MCC : array
+        Matthews Correlation Coefficient at each classifier threshold.
+
+    """
+
+    # check inputs
+    if TP is None:
+        raise TypeError("Please specify the input TP counts.")
+    if FP is None:
+        raise TypeError("Please specify the input FP counts.")
+    if TN is None:
+        raise TypeError("Please specify the input TN counts.")
+    if FN is None:
+        raise TypeError("Please specify the input FN counts.")
+    if thresholds is None:
+        raise TypeError("Please specify the input classifier thresholds.")
+
+    # ensure numpy
+    TP = np.array(TP)
+    FP = np.array(FP)
+    TN = np.array(TN)
+    FN = np.array(FN)
+    thresholds = np.array(thresholds)
+
+    # helper variables
+    A = TP + FP
+    B = TP + FN
+    C = TN + FP
+    D = TN + FN
+    E = A * B * C * D
+    F = A + D
+
+    # avoid divisions by zero
+    A[A == 0] = 1.
+    B[B == 0] = 1.
+    C[C == 0] = 1.
+    D[D == 0] = 1.
+    E[E == 0] = 1.
+    F[F == 0] = 1.
+
+    # rates
+    Acc = (TP + TN) / F  # accuracy
+    Err = (FP + FN) / F  # error rate
+
+    TAR = TP / B  # true accept rate / true positive rate
+    FRR = FN / B  # false reject rate / false negative rate
+
+    TRR = TN / C  # true reject rate / true negative rate
+    FAR = FP / C  # false accept rate / false positive rate
+
+    PPV = TP / A  # positive predictive value
+    FDR = FP / A  # false discovery rate
+
+    NPV = TN / D  # negative predictive value
+    FOR = FN / D  # false omission rate
+
+    MCC = (TP * TN - FP * FN) / np.sqrt(E)  # Matthews correlation coefficient
+
+    # determine EER
+    roots, values = tools.find_intersection(thresholds, FAR, thresholds, FRR)
+    EER = np.vstack((roots, values)).T
+
+    # output
+    args = (Acc, TAR, FAR, FRR, TRR, EER, Err, PPV, FDR, NPV, FOR, MCC)
+    names = ('Acc', 'TAR', 'FAR', 'FRR', 'TRR', 'EER', 'Err', 'PPV', 'FDR',
+             'NPV', 'FOR', 'MCC')
+
+    return utils.ReturnTuple(args, names)
+
+
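The rate formulas in `get_auth_rates` reduce to element-wise NumPy arithmetic on the confusion-matrix counts. A minimal standalone sketch with made-up toy counts (independent of BioSPPy's `tools.find_intersection` and `utils.ReturnTuple` helpers):

```python
import numpy as np

# toy confusion-matrix counts at three hypothetical thresholds
TP = np.array([9., 7., 4.])
FN = np.array([1., 3., 6.])   # positives per threshold = TP + FN = 10
FP = np.array([5., 2., 1.])
TN = np.array([5., 8., 9.])   # negatives per threshold = FP + TN = 10

TAR = TP / (TP + FN)  # true accept rate (sensitivity)
FRR = FN / (TP + FN)  # false reject rate; note TAR + FRR == 1
FAR = FP / (FP + TN)  # false accept rate
Acc = (TP + TN) / (TP + TN + FP + FN)  # accuracy

print(TAR)  # [0.9 0.7 0.4]
print(FAR)  # [0.5 0.2 0.1]
```

Raising the threshold typically trades FAR down against FRR up, which is why the function reports both across the whole threshold grid.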
+def get_id_rates(H=None, M=None, R=None, N=None, thresholds=None):
+    """Compute identification rates from the confusion matrix.
+
+    Parameters
+    ----------
+    H : array
+        Hit counts for each classifier threshold.
+    M : array
+        Miss counts for each classifier threshold.
+    R : array
+        Reject counts for each classifier threshold.
+    N : int
+        Number of test samples.
+    thresholds : array
+        Classifier thresholds.
+
+    Returns
+    -------
+    Acc : array
+        Accuracy at each classifier threshold.
+    Err : array
+        Error rate at each classifier threshold.
+    MR : array
+        Miss Rate at each classifier threshold.
+    RR : array
+        Reject Rate at each classifier threshold.
+    EID : array
+        Error of Identification points, with format (threshold, rate).
+    EER : array
+        Equal Error Rate points, with format (threshold, rate).
+
+    """
+
+    # check inputs
+    if H is None:
+        raise TypeError("Please specify the input H counts.")
+    if M is None:
+        raise TypeError("Please specify the input M counts.")
+    if R is None:
+        raise TypeError("Please specify the input R counts.")
+    if N is None:
+        raise TypeError("Please specify the total number of test samples.")
+    if thresholds is None:
+        raise TypeError("Please specify the input classifier thresholds.")
+
+    # ensure numpy
+    H = np.array(H)
+    M = np.array(M)
+    R = np.array(R)
+    thresholds = np.array(thresholds)
+
+    Acc = H / N
+    Err = 1 - Acc
+    MR = M / N
+    RR = R / N
+
+    # EER
+    roots, values = tools.find_intersection(thresholds, MR, thresholds, RR)
+    EER = np.vstack((roots, values)).T
+
+    # EID
+    y2 = np.min(Err) * np.ones(len(thresholds), dtype='float')
+    roots, values = tools.find_intersection(thresholds, Err, thresholds, y2)
+    EID = np.vstack((roots, values)).T
+
+    # output
+    args = (Acc, Err, MR, RR, EID, EER)
+    names = ('Acc', 'Err', 'MR', 'RR', 'EID', 'EER')
+
+    return utils.ReturnTuple(args, names)
+
+
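The EER reported by `get_id_rates` is the crossing point of two error curves over the threshold grid. A coarse grid-based approximation of that intersection can be sketched with plain NumPy (the curves below are toy values; `tools.find_intersection` additionally interpolates between grid samples, which this sketch does not):

```python
import numpy as np

# hypothetical error curves sampled on a threshold grid (toy values)
thresholds = np.linspace(0.0, 1.0, 5)          # [0, 0.25, 0.5, 0.75, 1]
FAR = np.array([1.0, 0.75, 0.5, 0.25, 0.0])    # falls with threshold
FRR = np.array([0.0, 0.25, 0.5, 0.75, 1.0])    # rises with threshold

# coarse EER estimate: the grid point where the two curves are closest
i = np.argmin(np.abs(FAR - FRR))
eer_threshold = thresholds[i]
eer = 0.5 * (FAR[i] + FRR[i])

print(eer_threshold, eer)  # 0.5 0.5
```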
+def get_subject_results(results=None,
+                        subject=None,
+                        thresholds=None,
+                        subjects=None,
+                        subject_dict=None,
+                        subject_idx=None):
+    """Compute authentication and identification performance metrics for a
+    given subject.
+
+    Parameters
+    ----------
+    results : dict
+        Classification results.
+    subject : hashable
+        True subject label.
+    thresholds : array
+        Classifier thresholds.
+    subjects : list
+        Target subject classes.
+    subject_dict : bidict
+        Subject-label conversion dictionary.
+    subject_idx : list
+        Subject index.
+
+    Returns
+    -------
+    assessment : dict
+        Authentication and identification results.
+
+    """
+
+    # check inputs
+    if results is None:
+        raise TypeError("Please specify the input classification results.")
+    if subject is None:
+        raise TypeError("Please specify the input subject class.")
+    if thresholds is None:
+        raise TypeError("Please specify the input classifier thresholds.")
+    if subjects is None:
+        raise TypeError("Please specify the target subject classes.")
+    if subject_dict is None:
+        raise TypeError("Please specify the subject-label dictionary.")
+    if subject_idx is None:
+        raise TypeError("Please specify the subject index.")
+
+    nth = len(thresholds)
+    auth_res = results['authentication']
+    id_res = results['identification']
+    ns = auth_res.shape[2]
+
+    # sanity checks
+    if auth_res.shape[0] != id_res.shape[0]:
+        raise ValueError("Authentication and identification number of "
+                         "thresholds do not match.")
+    if auth_res.shape[0] != nth:
+        raise ValueError("Number of thresholds in vector does not match "
+                         "biometric results.")
+    if auth_res.shape[2] != id_res.shape[1]:
+        raise ValueError("Authentication and identification number of tests "
+                         "do not match.")
+
+    label = subject_dict[subject]
+
+    # authentication vars
+    TP = np.zeros(nth, dtype='float')
+    FP = np.zeros(nth, dtype='float')
+    TN = np.zeros(nth, dtype='float')
+    FN = np.zeros(nth, dtype='float')
+
+    # identification vars
+    H = np.zeros(nth, dtype='float')
+    M = np.zeros(nth, dtype='float')
+    R = np.zeros(nth, dtype='float')
+    CM = []
+
+    for i in range(nth):  # for each threshold
+        # authentication
+        for k, lbl in enumerate(subject_idx):  # for each subject
+            subject_tst = subjects[k]
+
+            d = auth_res[i, lbl, :]
+            if subject == subject_tst:
+                # true positives
+                aux = np.sum(d)
+                TP[i] += aux
+                # false negatives
+                FN[i] += (ns - aux)
+            else:
+                # false positives
+                aux = np.sum(d)
+                FP[i] += aux
+                # true negatives
+                TN[i] += (ns - aux)
+
+        # identification
+        res = id_res[i, :]
+        hits = res == label
+        nhits = np.sum(hits)
+        rejects = res == ''
+        nrejects = np.sum(rejects)
+        misses = np.logical_not(np.logical_or(hits, rejects))
+        nmisses = ns - (nhits + nrejects)
+        missCounts = {
+            subject_dict.inv[ms]: np.sum(res == ms)
+            for ms in np.unique(res[misses])
+        }
+
+        # appends
+        H[i] = nhits
+        M[i] = nmisses
+        R[i] = nrejects
+        CM.append(missCounts)
+
+    # compute rates
+    auth_rates = get_auth_rates(TP, FP, TN, FN, thresholds).as_dict()
+    id_rates = get_id_rates(H, M, R, ns, thresholds).as_dict()
+
+    output = {
+        'authentication': {
+            'confusionMatrix': {'TP': TP, 'FP': FP, 'TN': TN, 'FN': FN},
+            'rates': auth_rates,
+        },
+        'identification': {
+            'confusionMatrix': {'H': H, 'M': M, 'R': R, 'CM': CM},
+            'rates': id_rates,
+        },
+    }
+
+    return utils.ReturnTuple((output,), ('assessment',))
+
+
+def assess_classification(results=None, thresholds=None):
+    """Assess the performance of a biometric classification test.
+
+    Parameters
+    ----------
+    results : dict
+        Classification results.
+    thresholds : array
+        Classifier thresholds.
+
+    Returns
+    -------
+    assessment : dict
+        Classification assessment.
+
+    """
+
+    # check inputs
+    if results is None:
+        raise TypeError("Please specify the input classification results.")
+    if thresholds is None:
+        raise TypeError("Please specify the input classifier thresholds.")
+
+    # test subjects
+    subjectDict = results['subjectDict']
+    subParent = results['subjectList']
+    subIdx = [subParent.index(item) for item in subParent]
+    subIdx.sort()
+    subjects = [subParent[item] for item in subIdx]
+
+    # output object
+    output = {
+        'global': {
+            'authentication': {
+                'confusionMatrix': {'TP': 0., 'TN': 0., 'FP': 0., 'FN': 0.},
+            },
+            'identification': {
+                'confusionMatrix': {'H': 0., 'M': 0., 'R': 0.},
+            },
+        },
+        'subject': {},
+        'thresholds': thresholds,
+    }
+
+    nth = len(thresholds)
+    C = np.zeros((nth, len(subjects)), dtype='float')
+
+    # update variables
+    auth = output['global']['authentication']['confusionMatrix']
+    authM = ['TP', 'TN', 'FP', 'FN']
+    iden = output['global']['identification']['confusionMatrix']
+    idenM = ['H', 'M', 'R']
+
+    for test_user in subjects:
+        aux, = get_subject_results(results[test_user], test_user, thresholds,
+                                   subjects, subjectDict, subIdx)
+
+        # copy to subject
+        output['subject'][test_user] = aux
+
+        # authentication
+        for m in authM:
+            auth[m] += aux['authentication']['confusionMatrix'][m]
+
+        # identification
+        for m in idenM:
+            iden[m] += aux['identification']['confusionMatrix'][m]
+
+        # subject misses
+        for i, item in enumerate(aux['identification']['confusionMatrix']['CM']):
+            for k, sub in enumerate(subjects):
+                try:
+                    C[i, k] += item[sub]
+                except KeyError:
+                    pass
+
+    # normalize subject misses
+    sC = C.sum(axis=1).reshape((nth, 1))
+    # avoid division by zero
+    sC[sC <= 0] = 1.
+    CR = C / sC
+
+    # update subjects
+    for k, sub in enumerate(subjects):
+        output['subject'][sub]['identification']['confusionMatrix']['C'] = C[:, k]
+        output['subject'][sub]['identification']['rates']['CR'] = CR[:, k]
+
+    # compute global rates
+    aux = get_auth_rates(auth['TP'], auth['FP'], auth['TN'], auth['FN'],
+                         thresholds)
+    output['global']['authentication']['rates'] = aux.as_dict()
+
+    # identification
+    Ns = iden['H'] + iden['M'] + iden['R']
+    aux = get_id_rates(iden['H'], iden['M'], iden['R'], Ns, thresholds)
+    output['global']['identification']['rates'] = aux.as_dict()
+
+    return utils.ReturnTuple((output,), ('assessment',))
+
+
+def assess_runs(results=None, subjects=None):
+    """Assess the performance of multiple biometric classification runs.
+
+    Parameters
+    ----------
+    results : list
+        Classification assessment for each run.
+    subjects : list
+        Common target subject classes.
+
+    Returns
+    -------
+    assessment : dict
+        Global classification assessment.
+
+    """
+
+    # check inputs
+    if results is None:
+        raise TypeError("Please specify the input classification results.")
+    if subjects is None:
+        raise TypeError("Please specify the common subject classes.")
+
+    nb = len(results)
+    if nb == 0:
+        raise ValueError("Please provide at least one classification run.")
+    elif nb == 1:
+        return utils.ReturnTuple((results[0],), ('assessment',))
+
+    # output
+    output = {
+        'global': {
+            'authentication': {
+                'confusionMatrix': {'TP': 0., 'TN': 0., 'FP': 0., 'FN': 0.},
+            },
+            'identification': {
+                'confusionMatrix': {'H': 0., 'M': 0., 'R': 0.},
+            },
+        },
+        'subject': {},
+        'thresholds': None,
+    }
+
+    thresholds = output['thresholds'] = results[0]['thresholds']
+
+    # global helpers
+    auth = output['global']['authentication']['confusionMatrix']
+    iden = output['global']['identification']['confusionMatrix']
+    authM = ['TP', 'TN', 'FP', 'FN']
+    idenM1 = ['H', 'M', 'R', 'C']
+    idenM2 = ['H', 'M', 'R']
+
+    for sub in subjects:
+        # create subject confusion matrix, rates
+        output['subject'][sub] = {
+            'authentication': {
+                'confusionMatrix': {'TP': 0., 'TN': 0., 'FP': 0., 'FN': 0.},
+                'rates': {},
+            },
+            'identification': {
+                'confusionMatrix': {'H': 0., 'M': 0., 'R': 0., 'C': 0.},
+                'rates': {},
+            },
+        }
+
+        # subject helpers
+        authS = output['subject'][sub]['authentication']['confusionMatrix']
+        idenS = output['subject'][sub]['identification']['confusionMatrix']
+
+        # update confusions
+        for run in results:
+            # authentication
+            auth_run = run['subject'][sub]['authentication']['confusionMatrix']
+            for m in authM:
+                auth[m] += auth_run[m]
+                authS[m] += auth_run[m]
+
+            # identification
+            iden_run = run['subject'][sub]['identification']['confusionMatrix']
+            for m in idenM1:
+                idenS[m] += iden_run[m]
+            for m in idenM2:
+                iden[m] += iden_run[m]
+
+        # compute subject mean
+        # authentication
+        for m in authM:
+            authS[m] /= float(nb)
+
+        # identification
+        for m in idenM1:
+            idenS[m] /= float(nb)
+
+        # compute subject rates
+        aux = get_auth_rates(authS['TP'], authS['FP'], authS['TN'],
+                             authS['FN'], thresholds)
+        output['subject'][sub]['authentication']['rates'] = aux.as_dict()
+
+        Ns = idenS['H'] + idenS['M'] + idenS['R']
+        aux = get_id_rates(idenS['H'], idenS['M'], idenS['R'], Ns, thresholds)
+        output['subject'][sub]['identification']['rates'] = aux.as_dict()
+        M = np.array(idenS['M'], copy=True)
+        M[M <= 0] = 1.
+        output['subject'][sub]['identification']['rates']['CR'] = idenS['C'] / M
+
+    # compute global mean
+    # authentication
+    for m in authM:
+        auth[m] /= float(nb)
+
+    # identification
+    for m in idenM2:
+        iden[m] /= float(nb)
+
+    # compute rates
+    aux = get_auth_rates(auth['TP'], auth['FP'], auth['TN'], auth['FN'],
+                         thresholds)
+    output['global']['authentication']['rates'] = aux.as_dict()
+
+    Ns = iden['H'] + iden['M'] + iden['R']
+    aux = get_id_rates(iden['H'], iden['M'], iden['R'], Ns, thresholds)
+    output['global']['identification']['rates'] = aux.as_dict()
+
+    return utils.ReturnTuple((output,), ('assessment',))
+
+
+def combination(results=None, weights=None):
+    """Combine results from multiple classifiers.
+
+    Parameters
+    ----------
+    results : dict
+        Results for each classifier.
+    weights : dict, optional
+        Weight for each classifier.
+
+    Returns
+    -------
+    decision : object
+        Consensus decision.
+    confidence : float
+        Confidence estimate of the decision.
+    counts : array
+        Weight for each possible decision outcome.
+    classes : array
+        List of possible decision outcomes.
+
+    """
+
+    # check inputs
+    if results is None:
+        raise TypeError("Please specify the input classification results.")
+    if weights is None:
+        weights = {}
+
+    # compile results to find all classes
+    vec = list(six.itervalues(results))
+    if len(vec) == 0:
+        raise CombinationError("No keys found.")
+
+    unq = np.unique(np.concatenate(vec))
+
+    nb = len(unq)
+    if nb == 0:
+        # empty array
+        raise CombinationError("No values found.")
+    elif nb == 1:
+        # unanimous result
+        decision = unq[0]
+        confidence = 1.
+        counts = [1.]
+    else:
+        # multi-class
+        counts = np.zeros(nb, dtype='float')
+
+        for n in results:
+            # ensure array
+            res = np.array(results[n])
+            ns = float(len(res))
+
+            # get count for each unique class
+            for i in range(nb):
+                aux = float(np.sum(res == unq[i]))
+                w = weights.get(n, 1.)
+                counts[i] += ((aux / ns) * w)
+
+        # most frequent class
+        predMax = counts.argmax()
+        counts /= counts.sum()
+
+        decision = unq[predMax]
+        confidence = counts[predMax]
+
+    # output
+    args = (decision, confidence, counts, unq)
+    names = ('decision', 'confidence', 'counts', 'classes')
+
+    return utils.ReturnTuple(args, names)
+
+
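The weighted-vote logic of `combination` can be sketched standalone with NumPy: each classifier contributes the fraction of its votes per class, scaled by its weight, and the class with the largest normalized tally wins. The classifier names, votes, and weights below are made up for illustration:

```python
import numpy as np

# hypothetical per-classifier label votes and weights
results = {'clfA': ['u1', 'u1', 'u2'], 'clfB': ['u2', 'u2', 'u2']}
weights = {'clfA': 1.0, 'clfB': 2.0}

# all classes seen by any classifier
classes = np.unique(np.concatenate(list(results.values())))
counts = np.zeros(len(classes))

for name, votes in results.items():
    votes = np.array(votes)
    w = weights.get(name, 1.0)        # default weight is 1
    for i, c in enumerate(classes):
        counts[i] += w * np.mean(votes == c)  # weighted vote fraction

counts /= counts.sum()                # normalize so counts sum to 1
decision = classes[counts.argmax()]
confidence = counts.max()
print(decision, confidence)
```

Here `clfB`'s unanimous vote for `u2`, doubled by its weight, outweighs `clfA`'s majority for `u1`.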
+def majority_rule(labels=None, random=True):
+    """Determine the most frequent class label.
+
+    Parameters
+    ----------
+    labels : array, list
+        List of class labels.
+    random : bool, optional
+        If True, will choose randomly in case of tied classes, otherwise the
+        first element is chosen.
+
+    Returns
+    -------
+    decision : object
+        Consensus decision.
+    count : int
+        Number of elements of the consensus decision.
+
+    """
+
+    # check inputs
+    if labels is None:
+        raise TypeError("Please specify the input list of class labels.")
+
+    if len(labels) == 0:
+        raise CombinationError("Empty list of class labels.")
+
+    # count unique occurrences
+    unq, counts = np.unique(labels, return_counts=True)
+
+    # most frequent class
+    predMax = counts.argmax()
+
+    if random:
+        # check for repeats
+        ind = np.nonzero(counts == counts[predMax])[0]
+        length = len(ind)
+
+        if length > 1:
+            predMax = ind[np.random.randint(0, length)]
+
+    decision = unq[predMax]
+    cnt = counts[predMax]
+
+    out = utils.ReturnTuple((decision, cnt), ('decision', 'count'))
+
+    return out
+
+
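The counting logic of `majority_rule` boils down to `np.unique(..., return_counts=True)` plus an optional random tie-break. A minimal standalone sketch (the label list is made up; with no tie the result is deterministic):

```python
import numpy as np

labels = ['a', 'b', 'a', 'c', 'a']

# count occurrences of each unique label and take the most frequent
unq, counts = np.unique(labels, return_counts=True)
idx = counts.argmax()

# ties: collect all indices sharing the maximal count, pick one at random
tied = np.nonzero(counts == counts[idx])[0]
if len(tied) > 1:
    idx = np.random.choice(tied)

decision, count = unq[idx], counts[idx]
print(decision, count)  # a 3
```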
+def cross_validation(labels,
+                     n_iter=10,
+                     test_size=0.1,
+                     train_size=None,
+                     random_state=None):
+    """Return a Cross Validation (CV) iterator.
+
+    Wraps the StratifiedShuffleSplit iterator from sklearn.model_selection.
+    This iterator returns stratified randomized folds, which preserve the
+    percentage of samples for each class.
+
+    Parameters
+    ----------
+    labels : list, array
+        List of class labels for each data sample.
+    n_iter : int, optional
+        Number of splitting iterations.
+    test_size : float, int, optional
+        If float, represents the proportion of the dataset to include in the
+        test split; if int, represents the absolute number of test samples.
+    train_size : float, int, optional
+        If float, represents the proportion of the dataset to include in the
+        train split; if int, represents the absolute number of train samples.
+    random_state : int, RandomState, optional
+        The seed of the pseudo random number generator to use when shuffling
+        the data.
+
+    Returns
+    -------
+    cv : CV iterator
+        Cross Validation iterator.
+
+    """
+
+    cv = skcv.StratifiedShuffleSplit(
+        n_splits=n_iter,
+        test_size=test_size,
+        train_size=train_size,
+        random_state=random_state,
+    ).split(np.zeros(len(labels)), labels)
+
+    return utils.ReturnTuple((cv,), ('cv',))
BioSPPy/source/biosppy/clustering.py
ADDED
@@ -0,0 +1,1008 @@
# -*- coding: utf-8 -*-
"""
biosppy.clustering
------------------

This module provides various unsupervised machine learning (clustering)
algorithms.

:copyright: (c) 2015-2018 by Instituto de Telecomunicacoes
:license: BSD 3-clause, see LICENSE for more details.
"""

# Imports
# compat
from __future__ import absolute_import, division, print_function
from six.moves import map, range, zip
import six

# 3rd party
import numpy as np
import scipy.cluster.hierarchy as sch
import scipy.cluster.vq as scv
import scipy.sparse as sp
import sklearn.cluster as skc
from sklearn.model_selection import ParameterGrid

# local
from . import metrics, utils

def dbscan(data=None,
           min_samples=5,
           eps=0.5,
           metric='euclidean',
           metric_args=None):
    """Perform clustering using the DBSCAN algorithm [EKSX96]_.

    The algorithm works by grouping data points that are closely packed
    together (with many nearby neighbors), marking as outliers points that lie
    in low-density regions.

    Parameters
    ----------
    data : array
        An m by n array of m data samples in an n-dimensional space.
    min_samples : int, optional
        Minimum number of samples in a cluster.
    eps : float, optional
        Maximum distance between two samples in the same cluster.
    metric : str, optional
        Distance metric (see scipy.spatial.distance).
    metric_args : dict, optional
        Additional keyword arguments to pass to the distance function.

    Returns
    -------
    clusters : dict
        Dictionary with the sample indices (rows from 'data') for each found
        cluster; outliers have key -1; clusters are assigned integer keys
        starting at 0.

    References
    ----------
    .. [EKSX96] M. Ester, H. P. Kriegel, J. Sander, and X. Xu,
       "A Density-Based Algorithm for Discovering Clusters in Large Spatial
       Databases with Noise", Proceedings of the 2nd International
       Conf. on Knowledge Discovery and Data Mining, pp. 226-231, 1996.

    """

    # check inputs
    if data is None:
        raise TypeError("Please specify input data.")

    if metric_args is None:
        metric_args = {}

    # compute distances
    D = metrics.pdist(data, metric=metric, **metric_args)
    D = metrics.squareform(D)

    # fit
    db = skc.DBSCAN(eps=eps, min_samples=min_samples, metric='precomputed')
    labels = db.fit_predict(D)

    # get cluster indices
    clusters = _extract_clusters(labels)

    return utils.ReturnTuple((clusters,), ('clusters',))

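A standalone sketch of the precomputed-distance DBSCAN pattern used above, on synthetic data; scipy and scikit-learn are called directly in place of the module's `metrics` wrapper and `_extract_clusters` (an assumption for self-containment, not the module's API):

```python
import numpy as np
import scipy.spatial.distance as ssd
import sklearn.cluster as skc

rng = np.random.RandomState(42)
# two dense blobs plus one far-away point (synthetic, for illustration)
data = np.vstack([rng.normal(0, 0.1, (20, 2)),
                  rng.normal(5, 0.1, (20, 2)),
                  [[50.0, 50.0]]])

# square distance matrix, then DBSCAN with metric='precomputed'
D = ssd.squareform(ssd.pdist(data, metric='euclidean'))
labels = skc.DBSCAN(eps=0.5, min_samples=5,
                    metric='precomputed').fit_predict(D)

# group sample indices per label, mirroring _extract_clusters
clusters = {int(u): np.nonzero(labels == u)[0] for u in np.unique(labels)}
print(sorted(clusters))  # [-1, 0, 1]
```

The isolated point receives the noise label -1, while each blob becomes one cluster.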
def hierarchical(data=None,
                 k=0,
                 linkage='average',
                 metric='euclidean',
                 metric_args=None):
    """Perform clustering using hierarchical agglomerative algorithms.

    Parameters
    ----------
    data : array
        An m by n array of m data samples in an n-dimensional space.
    k : int, optional
        Number of clusters to extract; if 0 uses the life-time criterion.
    linkage : str, optional
        Linkage criterion; one of 'average', 'centroid', 'complete', 'median',
        'single', 'ward', or 'weighted'.
    metric : str, optional
        Distance metric (see 'biosppy.metrics').
    metric_args : dict, optional
        Additional keyword arguments to pass to the distance function.

    Returns
    -------
    clusters : dict
        Dictionary with the sample indices (rows from 'data') for each found
        cluster; outliers have key -1; clusters are assigned integer keys
        starting at 0.

    Raises
    ------
    TypeError
        If 'metric' is not a string.
    ValueError
        When the 'linkage' is unknown.
    ValueError
        When 'metric' is not 'euclidean' when using 'centroid', 'median',
        or 'ward' linkage.
    ValueError
        When 'k' is larger than the number of data samples.

    """

    # check inputs
    if data is None:
        raise TypeError("Please specify input data.")

    if linkage not in ['average', 'centroid', 'complete', 'median', 'single',
                       'ward', 'weighted']:
        raise ValueError("Unknown linkage criterion '%r'." % linkage)

    if not isinstance(metric, six.string_types):
        raise TypeError("Please specify the distance metric as a string.")

    N = len(data)
    if k > N:
        raise ValueError("Number of clusters 'k' is higher than the number"
                         " of input samples.")

    if metric_args is None:
        metric_args = {}

    if linkage in ['centroid', 'median', 'ward']:
        if metric != 'euclidean':
            # ValueError, as documented in the Raises section
            raise ValueError("Linkage '{}' requires the distance metric to be"
                             " 'euclidean'.".format(linkage))
        Z = sch.linkage(data, method=linkage)
    else:
        # compute distances
        D = metrics.pdist(data, metric=metric, **metric_args)

        # build linkage
        Z = sch.linkage(D, method=linkage)

    if k < 0:
        k = 0

    # extract clusters
    if k == 0:
        # life-time
        labels = _life_time(Z, N)
    else:
        labels = sch.fcluster(Z, k, 'maxclust')

    # get cluster indices
    clusters = _extract_clusters(labels)

    return utils.ReturnTuple((clusters,), ('clusters',))

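A standalone sketch of the distance / linkage / fcluster workflow used above, calling scipy directly (`biosppy.metrics.pdist` wraps `scipy.spatial.distance.pdist`) and cutting at a fixed k = 2 instead of the life-time criterion:

```python
import numpy as np
import scipy.cluster.hierarchy as sch
import scipy.spatial.distance as ssd

rng = np.random.RandomState(0)
# two tight, well-separated groups of 10 points each (synthetic)
data = np.vstack([rng.normal(0, 0.05, (10, 2)),
                  rng.normal(3, 0.05, (10, 2))])

D = ssd.pdist(data, metric='euclidean')   # condensed distance matrix
Z = sch.linkage(D, method='average')      # linkage matrix, shape (m - 1, 4)
labels = sch.fcluster(Z, 2, 'maxclust')   # cut the dendrogram into 2 clusters

# re-key clusters as consecutive integers starting at 0
clusters = {i: np.nonzero(labels == u)[0]
            for i, u in enumerate(np.unique(labels))}
print(len(clusters))  # 2
```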
def kmeans(data=None,
           k=None,
           init='random',
           max_iter=300,
           n_init=10,
           tol=0.0001):
    """Perform clustering using the k-means algorithm.

    Parameters
    ----------
    data : array
        An m by n array of m data samples in an n-dimensional space.
    k : int
        Number of clusters to extract.
    init : str, array, optional
        If string, one of 'random' or 'k-means++'; if array, it should be of
        shape (n_clusters, n_features), specifying the initial centers.
    max_iter : int, optional
        Maximum number of iterations.
    n_init : int, optional
        Number of initializations.
    tol : float, optional
        Relative tolerance to declare convergence.

    Returns
    -------
    clusters : dict
        Dictionary with the sample indices (rows from 'data') for each found
        cluster; outliers have key -1; clusters are assigned integer keys
        starting at 0.

    """

    # check inputs
    if data is None:
        raise TypeError("Please specify input data.")

    if k is None:
        raise TypeError("Please specify the number 'k' of clusters.")

    clf = skc.KMeans(n_clusters=k,
                     init=init,
                     max_iter=max_iter,
                     n_init=n_init,
                     tol=tol)
    labels = clf.fit_predict(data)

    # get cluster indices
    clusters = _extract_clusters(labels)

    return utils.ReturnTuple((clusters,), ('clusters',))

def consensus(data=None, k=0, linkage='average', fcn=None, grid=None):
    """Perform clustering based on an ensemble of partitions.

    Parameters
    ----------
    data : array
        An m by n array of m data samples in an n-dimensional space.
    k : int, optional
        Number of clusters to extract; if 0 uses the life-time criterion.
    linkage : str, optional
        Linkage criterion for final partition extraction; one of 'average',
        'centroid', 'complete', 'median', 'single', 'ward', or 'weighted'.
    fcn : function
        A clustering function.
    grid : dict, list, optional
        A (list of) dictionary with parameters for each run of the clustering
        method (see sklearn.model_selection.ParameterGrid).

    Returns
    -------
    clusters : dict
        Dictionary with the sample indices (rows from 'data') for each found
        cluster; outliers have key -1; clusters are assigned integer keys
        starting at 0.

    """

    # check inputs
    if data is None:
        raise TypeError("Please specify input data.")

    if fcn is None:
        raise TypeError("Please specify the clustering function.")

    if grid is None:
        grid = {}

    # create ensemble
    ensemble, = create_ensemble(data=data, fcn=fcn, grid=grid)

    # generate coassoc
    coassoc, = create_coassoc(ensemble=ensemble, N=len(data))

    # extract partition
    clusters, = coassoc_partition(coassoc=coassoc, k=k, linkage=linkage)

    return utils.ReturnTuple((clusters,), ('clusters',))

def consensus_kmeans(data=None,
                     k=0,
                     linkage='average',
                     nensemble=100,
                     kmin=None,
                     kmax=None):
    """Perform clustering based on an ensemble of k-means partitions.

    Parameters
    ----------
    data : array
        An m by n array of m data samples in an n-dimensional space.
    k : int, optional
        Number of clusters to extract; if 0 uses the life-time criterion.
    linkage : str, optional
        Linkage criterion for final partition extraction; one of 'average',
        'centroid', 'complete', 'median', 'single', 'ward', or 'weighted'.
    nensemble : int, optional
        Number of partitions in the ensemble.
    kmin : int, optional
        Minimum k for the k-means partitions; defaults to :math:`\\sqrt{m}/2`.
    kmax : int, optional
        Maximum k for the k-means partitions; defaults to :math:`\\sqrt{m}`.

    Returns
    -------
    clusters : dict
        Dictionary with the sample indices (rows from 'data') for each found
        cluster; outliers have key -1; clusters are assigned integer keys
        starting at 0.

    """

    # check inputs
    if data is None:
        raise TypeError("Please specify input data.")

    N = len(data)

    if kmin is None:
        kmin = int(round(np.sqrt(N) / 2.))

    if kmax is None:
        kmax = int(round(np.sqrt(N)))

    # initialization grid
    # note: np.random.random_integers is deprecated in NumPy; randint with
    # high=kmax + 1 draws from the same inclusive [kmin, kmax] range
    grid = {
        'k': np.random.randint(low=kmin, high=kmax + 1, size=nensemble)
    }

    # run consensus
    clusters, = consensus(data=data,
                          k=k,
                          linkage=linkage,
                          fcn=kmeans,
                          grid=grid)

    return utils.ReturnTuple((clusters,), ('clusters',))

def create_ensemble(data=None, fcn=None, grid=None):
    """Create an ensemble of partitions of the data using the given
    clustering method.

    Parameters
    ----------
    data : array
        An m by n array of m data samples in an n-dimensional space.
    fcn : function
        A clustering function.
    grid : dict, list, optional
        A (list of) dictionary with parameters for each run of the clustering
        method (see sklearn.model_selection.ParameterGrid).

    Returns
    -------
    ensemble : list
        Obtained ensemble partitions.

    """

    # check inputs
    if data is None:
        raise TypeError("Please specify input data.")

    if fcn is None:
        raise TypeError("Please specify the clustering function.")

    if grid is None:
        grid = {}

    # grid iterator
    grid = ParameterGrid(grid)

    # run clustering
    ensemble = []
    for params in grid:
        ensemble.append(fcn(data, **params)['clusters'])

    return utils.ReturnTuple((ensemble,), ('ensemble',))

def create_coassoc(ensemble=None, N=None):
    """Create the co-association matrix from a clustering ensemble.

    Parameters
    ----------
    ensemble : list
        Clustering ensemble partitions.
    N : int
        Number of data samples.

    Returns
    -------
    coassoc : array
        Co-association matrix.

    """

    # check inputs
    if ensemble is None:
        raise TypeError("Please specify the clustering ensemble.")

    if N is None:
        raise TypeError(
            "Please specify the number of samples in the original data set.")

    nparts = len(ensemble)
    assoc = 0
    for part in ensemble:
        nsamples = np.array([len(part[key]) for key in part])
        dim = np.sum(nsamples * (nsamples - 1)) // 2

        # integer index arrays for the sparse constructor
        I = np.zeros(dim, dtype='int')
        J = np.zeros(dim, dtype='int')
        X = np.ones(dim)
        ntriplets = 0

        for v in six.itervalues(part):
            nb = len(v)
            if nb > 0:
                for h in range(nb):
                    for f in range(h + 1, nb):
                        I[ntriplets] = v[h]
                        J[ntriplets] = v[f]
                        ntriplets += 1

        assoc_aux = sp.csc_matrix((X, (I, J)), shape=(N, N))
        assoc += assoc_aux

    a = assoc + assoc.T
    a.setdiag(nparts * np.ones(N))
    coassoc = a.todense()

    return utils.ReturnTuple((coassoc,), ('coassoc',))

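The co-association (evidence accumulation) step above counts, for every pair of samples, how many ensemble partitions placed them in the same cluster. A standalone sketch with N = 4 samples and two hand-written partitions, using a dense array instead of the sparse accumulation for clarity:

```python
import numpy as np

N = 4
ensemble = [
    {0: np.array([0, 1]), 1: np.array([2, 3])},   # partition 1
    {0: np.array([0, 1, 2]), 1: np.array([3])},   # partition 2
]

coassoc = np.zeros((N, N))
for part in ensemble:
    for members in part.values():
        # every within-cluster pair gets one vote
        for h in range(len(members)):
            for f in range(h + 1, len(members)):
                coassoc[members[h], members[f]] += 1
coassoc += coassoc.T                    # symmetrize
np.fill_diagonal(coassoc, len(ensemble))

print(coassoc[0, 1], coassoc[2, 3], coassoc[0, 3])  # 2.0 1.0 0.0
```

Samples 0 and 1 co-occur in both partitions (vote 2), 2 and 3 in one (vote 1), 0 and 3 in none.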
def coassoc_partition(coassoc=None, k=0, linkage='average'):
    """Extract the consensus partition from a co-association matrix using
    hierarchical agglomerative methods.

    Parameters
    ----------
    coassoc : array
        Co-association matrix.
    k : int, optional
        Number of clusters to extract; if 0 uses the life-time criterion.
    linkage : str, optional
        Linkage criterion for final partition extraction; one of 'average',
        'complete', 'single', or 'weighted'.

    Returns
    -------
    clusters : dict
        Dictionary with the sample indices (rows from 'data') for each found
        cluster; outliers have key -1; clusters are assigned integer keys
        starting at 0.

    """

    # check inputs
    if coassoc is None:
        raise TypeError("Please specify the input co-association matrix.")

    if linkage not in ['average', 'complete', 'single', 'weighted']:
        raise ValueError("Unknown linkage criterion '%r'." % linkage)

    N = len(coassoc)
    if k > N:
        raise ValueError("Number of clusters 'k' is higher than the number"
                         " of input samples.")

    if k < 0:
        k = 0

    # convert coassoc to condensed format, dissimilarity
    mx = np.max(coassoc)
    D = metrics.squareform(mx - coassoc)

    # build linkage
    Z = sch.linkage(D, method=linkage)

    # extract clusters
    if k == 0:
        # life-time
        labels = _life_time(Z, N)
    else:
        labels = sch.fcluster(Z, k, 'maxclust')

    # get cluster indices
    clusters = _extract_clusters(labels)

    return utils.ReturnTuple((clusters,), ('clusters',))

def mdist_templates(data=None,
                    clusters=None,
                    ntemplates=1,
                    metric='euclidean',
                    metric_args=None):
    """Template selection based on the MDIST method [UlRJ04]_.

    Extends the original method with the option of also providing a data
    clustering, in which case the MDIST criterion is applied for
    each cluster [LCSF14]_.

    Parameters
    ----------
    data : array
        An m by n array of m data samples in an n-dimensional space.
    clusters : dict, optional
        Dictionary with the sample indices (rows from `data`) for each cluster.
    ntemplates : int, optional
        Number of templates to extract.
    metric : str, optional
        Distance metric (see scipy.spatial.distance).
    metric_args : dict, optional
        Additional keyword arguments to pass to the distance function.

    Returns
    -------
    templates : array
        Selected templates from the input data.

    References
    ----------
    .. [UlRJ04] U. Uludag, A. Ross, A. Jain, "Biometric template selection
       and update: a case study in fingerprints",
       Pattern Recognition 37, 2004
    .. [LCSF14] A. Lourenco, C. Carreiras, H. Silva, A. Fred,
       "ECG biometrics: A template selection approach", 2014 IEEE
       International Symposium on Medical Measurements and
       Applications (MeMeA), 2014

    """

    # check inputs
    if data is None:
        raise TypeError("Please specify input data.")

    if clusters is None:
        clusters = {0: np.arange(len(data), dtype='int')}

    # cluster labels
    ks = list(clusters)

    # remove the outliers' cluster, if present (integer key -1)
    if -1 in ks:
        ks.remove(-1)

    cardinals = [len(clusters[k]) for k in ks]

    # check number of templates
    if np.isscalar(ntemplates):
        if ntemplates < 1:
            raise ValueError("The number of templates has to be at least 1.")
        # allocate templates per cluster
        ntemplatesPerCluster = utils.highestAveragesAllocator(cardinals,
                                                              ntemplates,
                                                              divisor='dHondt',
                                                              check=True)
    else:
        # ntemplates as a list is unofficially supported because
        # we have to account for cluster label order
        if np.sum(ntemplates) < 1:
            raise ValueError(
                "The total number of templates has to be at least 1.")
        # just copy
        ntemplatesPerCluster = ntemplates

    templates = []

    for i, k in enumerate(ks):
        c = np.array(clusters[k])
        length = cardinals[i]
        nt = ntemplatesPerCluster[i]

        if nt == 0:
            continue

        if length == 0:
            continue
        elif length == 1:
            templates.append(data[c][0])
        elif length == 2:
            if nt == 1:
                # choose randomly
                r = int(round(np.random.rand()))
                templates.append(data[c][r])
            else:
                for j in range(length):
                    templates.append(data[c][j])
        else:
            # compute mean distances
            indices, _ = _mean_distance(data[c],
                                        metric=metric,
                                        metric_args=metric_args)

            # select templates
            sel = indices[:nt]
            for item in sel:
                templates.append(data[c][item])

    templates = np.array(templates)

    return utils.ReturnTuple((templates,), ('templates',))

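The MDIST selection above ranks the samples in a cluster by their mean distance to all the others and keeps the most central ones. A standalone numeric sketch on four synthetic 2-D points, calling scipy directly in place of the module's `_mean_distance` helper:

```python
import numpy as np
import scipy.spatial.distance as ssd

# three nearby points plus one far-away point (synthetic)
data = np.array([[0.0, 0.0], [0.1, 0.0], [0.05, 0.05], [5.0, 5.0]])

D = ssd.squareform(ssd.pdist(data, metric='euclidean'))
mdist = D.mean(axis=0)        # mean distance of each sample to the others
order = np.argsort(mdist)     # most "central" samples first

templates = data[order[:2]]   # keep the 2 best templates
print(order[-1])  # 3 -> the far-away sample ranks last
```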
def centroid_templates(data=None, clusters=None, ntemplates=1):
    """Template selection based on cluster centroids.

    Parameters
    ----------
    data : array
        An m by n array of m data samples in an n-dimensional space.
    clusters : dict
        Dictionary with the sample indices (rows from 'data') for each cluster.
    ntemplates : int, optional
        Number of templates to extract; if more than 1, k-means is used to
        obtain more templates.

    Returns
    -------
    templates : array
        Selected templates from the input data.

    """

    # check inputs
    if data is None:
        raise TypeError("Please specify input data.")

    if clusters is None:
        raise TypeError("Please specify a data clustering.")

    # cluster labels
    ks = list(clusters)

    # remove the outliers' cluster, if present (integer key -1)
    if -1 in ks:
        ks.remove(-1)

    cardinals = [len(clusters[k]) for k in ks]

    # check number of templates
    if np.isscalar(ntemplates):
        if ntemplates < 1:
            raise ValueError("The number of templates has to be at least 1.")
        # allocate templates per cluster
        ntemplatesPerCluster = utils.highestAveragesAllocator(cardinals,
                                                              ntemplates,
                                                              divisor='dHondt',
                                                              check=True)
    else:
        # ntemplates as a list is unofficially supported because
        # we have to account for cluster label order
        if np.sum(ntemplates) < 1:
            raise ValueError(
                "The total number of templates has to be at least 1.")
        # just copy
        ntemplatesPerCluster = ntemplates

    # select templates
    templates = []
    for i, k in enumerate(ks):
        c = np.array(clusters[k])
        length = cardinals[i]
        nt = ntemplatesPerCluster[i]

        # ignore cases
        if nt == 0 or length == 0:
            continue

        if nt == 1:
            # cluster centroid
            templates.append(np.mean(data[c], axis=0))
        elif nt == length:
            # centroids are the samples
            templates.extend(data[c])
        else:
            # divide space using k-means
            nb = min([nt, length])
            centroidsKmeans, _ = scv.kmeans2(data[c],
                                             k=nb,
                                             iter=50,
                                             minit='points')
            for item in centroidsKmeans:
                templates.append(item)

    templates = np.array(templates)

    return utils.ReturnTuple((templates,), ('templates',))

def outliers_dbscan(data=None,
                    min_samples=5,
                    eps=0.5,
                    metric='euclidean',
                    metric_args=None):
    """Perform outlier removal using the DBSCAN algorithm.

    Parameters
    ----------
    data : array
        An m by n array of m data samples in an n-dimensional space.
    min_samples : int, optional
        Minimum number of samples in a cluster.
    eps : float, optional
        Maximum distance between two samples in the same cluster.
    metric : str, optional
        Distance metric (see scipy.spatial.distance).
    metric_args : dict, optional
        Additional keyword arguments to pass to the distance function.

    Returns
    -------
    clusters : dict
        Dictionary with the sample indices (rows from 'data') for the
        outliers (key -1) and the normal (key 0) groups.
    templates : dict
        Elements from 'data' for the outliers (key -1) and the
        normal (key 0) groups.

    """

    # perform clustering
    clusters, = dbscan(data=data,
                       min_samples=min_samples,
                       eps=eps,
                       metric=metric,
                       metric_args=metric_args)

    # merge clusters
    clusters = _merge_clusters(clusters)

    # separate templates
    templates = {-1: data[clusters[-1]], 0: data[clusters[0]]}

    # output
    args = (clusters, templates)
    names = ('clusters', 'templates')

    return utils.ReturnTuple(args, names)

def outliers_dmean(data=None,
                   alpha=0.5,
                   beta=1.5,
                   metric='euclidean',
                   metric_args=None,
                   max_idx=None):
    """Perform outlier removal using the DMEAN algorithm [LCSF13]_.

    A sample is considered valid if it cumulatively verifies:
        * distance to average template smaller than a (data derived)
          threshold 'T';
        * sample minimum greater than a (data derived) threshold 'N';
        * sample maximum smaller than a (data derived) threshold 'M';
        * position of the sample maximum is the same as the
          given index [optional].

    For a set :math:`\\{X_1, ..., X_n\\}` of :math:`n` samples:

    .. math::

        \\widetilde{X} = \\frac{1}{n} \\sum_{i=1}^{n}{X_i}

        d_i = dist(X_i, \\widetilde{X})

        D_m = \\frac{1}{n} \\sum_{i=1}^{n}{d_i}

        D_s = \\sqrt{\\frac{1}{n - 1} \\sum_{i=1}^{n}{(d_i - D_m)^2}}

        T = D_m + \\alpha * D_s

        M = \\beta * median(\\{\\max{X_i}, i=1, ..., n \\})

        N = \\beta * median(\\{\\min{X_i}, i=1, ..., n \\})

    Parameters
    ----------
    data : array
        An m by n array of m data samples in an n-dimensional space.
    alpha : float, optional
        Parameter for the distance threshold.
    beta : float, optional
        Parameter for the maximum and minimum thresholds.
    metric : str, optional
        Distance metric (see scipy.spatial.distance).
    metric_args : dict, optional
        Additional keyword arguments to pass to the distance function.
    max_idx : int, optional
        Index of the expected maximum.

    Returns
    -------
    clusters : dict
        Dictionary with the sample indices (rows from 'data') for the
        outliers (key -1) and the normal (key 0) groups.
    templates : dict
        Elements from 'data' for the outliers (key -1) and the
        normal (key 0) groups.

    References
    ----------
    .. [LCSF13] A. Lourenco, H. Silva, C. Carreiras, A. Fred, "Outlier
       Detection in Non-intrusive ECG Biometric System", Image Analysis
       and Recognition, vol. 7950, pp. 43-52, 2013

    """

    # check inputs
    if data is None:
        raise TypeError("Please specify input data.")

    if metric_args is None:
        metric_args = {}

    # distance to mean wave
    mean_wave = np.mean(data, axis=0, keepdims=True)
    dists = metrics.cdist(data, mean_wave, metric=metric, **metric_args)
    dists = dists.flatten()

    # distance threshold
    th = np.mean(dists) + alpha * np.std(dists, ddof=1)

    # medians of the per-sample maxima and minima
    M = np.median(np.max(data, 1)) * beta
    m = np.median(np.min(data, 1)) * beta

    # search for outliers
    outliers = []
    for i, item in enumerate(data):
        idx = np.argmax(item)
        if (max_idx is not None) and (idx != max_idx):
            outliers.append(i)
        elif item[idx] > M:
            outliers.append(i)
        elif np.min(item) < m:
            outliers.append(i)
        elif dists[i] > th:
            outliers.append(i)

    outliers = np.unique(outliers)
    normal = np.setdiff1d(list(range(len(data))), outliers, assume_unique=True)

    # output
    clusters = {-1: outliers, 0: normal}

    templates = {-1: data[outliers], 0: data[normal]}

    args = (clusters, templates)
    names = ('clusters', 'templates')

    return utils.ReturnTuple(args, names)

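The DMEAN thresholds can be checked on a deterministic toy set: 29 identical well-behaved templates plus one grossly deviating one. This standalone sketch uses scipy's `cdist` in place of `biosppy.metrics.cdist` (an assumption for self-containment):

```python
import numpy as np
import scipy.spatial.distance as ssd

wave = np.linspace(-1.0, 1.0, 50)
data = np.vstack([np.tile(wave, (29, 1)),     # normal templates
                  5.0 * np.ones((1, 50))])    # deviating template (row 29)

mean_wave = data.mean(axis=0, keepdims=True)
dists = ssd.cdist(data, mean_wave, metric='euclidean').flatten()

alpha, beta = 0.5, 1.5
T = dists.mean() + alpha * dists.std(ddof=1)  # distance threshold
M = beta * np.median(data.max(axis=1))        # threshold on per-sample maxima
N = beta * np.median(data.min(axis=1))        # threshold on per-sample minima

outliers = [i for i in range(len(data))
            if data[i].max() > M or data[i].min() < N or dists[i] > T]
print(outliers)  # [29]
```

The constant row's maximum (5.0) exceeds M = 1.5, so only it is flagged; the normal rows pass all three checks.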
def _life_time(Z, N):
    """Life-Time criterion for automatic selection of the number of clusters.

    Parameters
    ----------
    Z : array
        The hierarchical clustering encoded as a linkage matrix.
    N : int
        Number of data samples.

    Returns
    -------
    labels : array
        Cluster labels.

    """

    if N < 3:
        return np.arange(N, dtype='int')

    # find maximum
    df = np.diff(Z[:, 2])
    idx = np.argmax(df)
    mx = df[idx]
    th = Z[idx, 2]

    idxs = Z[np.nonzero(Z[:, 2] > th)[0], 2]
    cont = len(idxs) + 1

    # find minimum
    mi = np.min(df[np.nonzero(df != 0)])

    if mi != mx:
        if mx < 2 * mi:
            cont = 1

    if cont > 1:
        labels = sch.fcluster(Z, cont, 'maxclust')
    else:
        labels = np.arange(N, dtype='int')

    return labels

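The life-time criterion above cuts the dendrogram where consecutive merge distances jump the most. A standalone sketch with two well-separated 1-D groups, using scipy directly:

```python
import numpy as np
import scipy.cluster.hierarchy as sch
import scipy.spatial.distance as ssd

data = np.array([[0.0], [0.1], [0.2],      # tight group A
                 [10.0], [10.1], [10.2]])  # tight group B

Z = sch.linkage(ssd.pdist(data), method='average')

gaps = np.diff(Z[:, 2])       # "life-times" between successive merges
idx = np.argmax(gaps)         # biggest jump -> natural cut point
n_clusters = len(Z) - idx     # merges above the cut leave this many clusters

labels = sch.fcluster(Z, n_clusters, 'maxclust')
print(n_clusters, len(set(labels)))  # 2 2
```

The within-group merges all happen at small distances; the single large jump to the final merge yields a two-cluster cut.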
def _extract_clusters(labels):
    """Extract cluster indices from an array of cluster labels.

    Parameters
    ----------
    labels : array
        Input cluster labels.

    Returns
    -------
    clusters : dict
        Dictionary with the sample indices for each found cluster; outliers
        have key -1; clusters are assigned integer keys starting at 0.

    """

    # ensure numpy
    labels = np.array(labels)

    # unique labels and sort
    unq = np.unique(labels).tolist()

    clusters = {}

    # outliers
    if -1 in unq:
        clusters[-1] = np.nonzero(labels == -1)[0]
        unq.remove(-1)
    elif '-1' in unq:
        clusters[-1] = np.nonzero(labels == '-1')[0]
        unq.remove('-1')

    for i, u in enumerate(unq):
        clusters[i] = np.nonzero(labels == u)[0]

    return clusters

| 942 |
+
def _mean_distance(data, metric='euclidean', metric_args=None):
|
| 943 |
+
"""Compute the sorted mean distance between the input samples.
|
| 944 |
+
|
| 945 |
+
Parameters
|
| 946 |
+
----------
|
| 947 |
+
data : array
|
| 948 |
+
An m by n array of m data samples in an n-dimensional space.
|
| 949 |
+
metric : str, optional
|
| 950 |
+
Distance metric (see scipy.spatial.distance).
|
| 951 |
+
metric_args : dict, optional
|
| 952 |
+
Additional keyword arguments to pass to the distance function.
|
| 953 |
+
|
| 954 |
+
Returns
|
| 955 |
+
-------
|
| 956 |
+
indices : array
|
| 957 |
+
Indices that sort the computed mean distances.
|
| 958 |
+
mdist : array
|
| 959 |
+
Mean distance characterizing each data sample.
|
| 960 |
+
|
| 961 |
+
"""
|
| 962 |
+
|
| 963 |
+
if metric_args is None:
|
| 964 |
+
metric_args = {}
|
| 965 |
+
|
| 966 |
+
# compute distances
|
| 967 |
+
D = metrics.pdist(data, metric=metric, **metric_args)
|
| 968 |
+
D = metrics.squareform(D)
|
| 969 |
+
|
| 970 |
+
# compute mean
|
| 971 |
+
mdist = np.mean(D, axis=0)
|
| 972 |
+
|
| 973 |
+
# sort
|
| 974 |
+
indices = np.argsort(mdist)
|
| 975 |
+
|
| 976 |
+
return indices, mdist
|
| 977 |
+
|
| 978 |
+
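A standalone sketch of what `_mean_distance` computes, using scipy directly instead of biosppy's internal `metrics` wrapper (the sample data is hypothetical, for illustration only):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

# three 2-D samples; the middle one is closest to the others on average
data = np.array([[0.0, 0.0], [1.0, 0.0], [4.0, 0.0]])

# full pairwise distance matrix
D = squareform(pdist(data, metric='euclidean'))

# mean distance of each sample to all samples (self-distance is 0)
mdist = np.mean(D, axis=0)

# samples ordered from most central to least central
indices = np.argsort(mdist)
print(indices)  # [1 0 2]
```

Sorting by mean distance is what lets callers pick the most "central" samples of a cluster first.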


def _merge_clusters(clusters):
    """Merge non-outlier clusters in a partition.

    Parameters
    ----------
    clusters : dict
        Dictionary with the sample indices for each found cluster;
        outliers have key -1.

    Returns
    -------
    res : dict
        Merged clusters.

    """

    keys = list(clusters)

    # outliers
    if -1 in keys:
        keys.remove(-1)
        res = {-1: clusters[-1]}
    else:
        res = {-1: np.array([], dtype='int')}

    # normal clusters
    aux = np.concatenate([clusters[k] for k in keys])
    res[0] = np.unique(aux).astype('int')

    return res
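The helpers above are internal to `biosppy.clustering`; a minimal standalone mirror of what `_extract_clusters` computes (numpy only, integer labels, so it runs without biosppy) is:

```python
import numpy as np

def extract_clusters(labels):
    # map each unique label to the indices that carry it;
    # label -1 marks outliers, other clusters get keys 0, 1, ...
    labels = np.array(labels)
    unq = np.unique(labels).tolist()
    clusters = {}
    if -1 in unq:
        clusters[-1] = np.nonzero(labels == -1)[0]
        unq.remove(-1)
    for i, u in enumerate(unq):
        clusters[i] = np.nonzero(labels == u)[0]
    return clusters

clusters = extract_clusters([0, 2, 2, -1, 0])
print(sorted(clusters))  # [-1, 0, 1]
```

Note that the original label values (here 0 and 2) are discarded: non-outlier clusters are always renumbered from 0.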
BioSPPy/source/biosppy/inter_plotting/__init__.py
ADDED
@@ -0,0 +1,18 @@
# -*- coding: utf-8 -*-
"""
biosppy.inter_plotting
----------------------

This package provides methods to interactively display plots for the
following physiological signals (biosignals):
    * Electrocardiogram (ECG)

:copyright: (c) 2015-2021 by Instituto de Telecomunicacoes
:license: BSD 3-clause, see LICENSE for more details.
"""

# compat
from __future__ import absolute_import, division, print_function

# allow lazy loading
from . import ecg, acc
BioSPPy/source/biosppy/inter_plotting/acc.py
ADDED
@@ -0,0 +1,496 @@
# -*- coding: utf-8 -*-
"""
biosppy.inter_plotting.acc
--------------------------

This module provides an interactive display option for the ACC plot.

:copyright: (c) 2015-2018 by Instituto de Telecomunicacoes
:license: BSD 3-clause, see LICENSE for more details.

"""

# Imports
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg, NavigationToolbar2Tk
from matplotlib.backend_bases import key_press_handler
import matplotlib.pyplot as plt
import numpy as np
from tkinter import *
import tkinter.font as tkFont
import sys
import os

# Globals
from biosppy import utils

MAJOR_LW = 2.5
MINOR_LW = 1.5
MAX_ROWS = 10


def plot_acc(ts=None, raw=None, vm=None, sm=None, spectrum=None, path=None):
    """Create a summary plot from the output of signals.acc.acc.

    Parameters
    ----------
    ts : array
        Signal time axis reference (seconds).
    raw : array
        Raw ACC signal.
    vm : array
        Vector Magnitude feature of the signal.
    sm : array
        Signal Magnitude feature of the signal.
    spectrum : dict
        Frequency spectrum of each axis ('freq' and 'abs_amp' entries).
    path : str, optional
        If provided, the plot will be saved to the specified file.

    """

    raw_t = np.transpose(raw)
    acc_x, acc_y, acc_z = raw_t[0], raw_t[1], raw_t[2]

    root = Tk()
    root.resizable(False, False)  # default
    fig, axs_1 = plt.subplots(3, 1)
    axs_1[0].plot(ts, acc_x, linewidth=MINOR_LW, label="Raw acc along X", color="C0")
    axs_1[1].plot(ts, acc_y, linewidth=MINOR_LW, label="Raw acc along Y", color="C1")
    axs_1[2].plot(ts, acc_z, linewidth=MINOR_LW, label="Raw acc along Z", color="C2")

    axs_1[0].set_ylabel("Amplitude ($m/s^2$)")
    axs_1[1].set_ylabel("Amplitude ($m/s^2$)")
    axs_1[2].set_ylabel("Amplitude ($m/s^2$)")
    axs_1[2].set_xlabel("Time (s)")

    fig.suptitle("Acceleration signals")

    global share_axes_check_box

    class feature_figure:
        def __init__(self, reset=False):
            self.figure, self.axes = plt.subplots(1, 1)
            if not reset:
                self.avail_plots = []

        def on_xlims_change(self, event_ax):
            print("updated xlims: ", event_ax.get_xlim())

        def set_labels(self, x_label, y_label):
            self.axes.set_xlabel(x_label)
            self.axes.set_ylabel(y_label)

        def hide_content(self):
            # setting every plot element to white
            self.axes.spines["bottom"].set_color("white")
            self.axes.spines["top"].set_color("white")
            self.axes.spines["right"].set_color("white")
            self.axes.spines["left"].set_color("white")
            self.axes.tick_params(axis="x", colors="white")
            self.axes.tick_params(axis="y", colors="white")

        def show_content(self):
            # setting every plot element to black
            self.axes.spines["bottom"].set_color("black")
            self.axes.spines["top"].set_color("black")
            self.axes.spines["right"].set_color("black")
            self.axes.spines["left"].set_color("black")
            self.axes.tick_params(axis="x", colors="black")
            self.axes.tick_params(axis="y", colors="black")
            self.axes.legend()

        def draw_in_canvas(self, root: Tk, grid_params=None):
            if grid_params is None:
                grid_params = {
                    "row": 2,
                    "column": 1,
                    "columnspan": 4,
                    "sticky": "w",
                    "padx": 10,
                }
            # add an empty canvas for plotting
            self.canvas = FigureCanvasTkAgg(self.figure, master=root)
            self.canvas.get_tk_widget().grid(**grid_params)
            self.canvas.draw()
            # self.axes.callbacks.connect('xlim_changed', on_xlims_change)

        def dump_canvas(self, root):
            self.axes.clear()
            self.figure.clear()
            self.figure.canvas.draw_idle()
            self.canvas = FigureCanvasTkAgg(self.figure, master=root)
            self.canvas.get_tk_widget().destroy()

        def add_toolbar(self, root: Tk, grid_params=None):
            if grid_params is None:
                grid_params = {"row": 5, "column": 1, "columnspan": 2, "sticky": "w"}
            toolbarFramefeat = Frame(master=root)
            toolbarFramefeat.grid(**grid_params)

            toolbarfeat = NavigationToolbar2Tk(self.canvas, toolbarFramefeat)
            toolbarfeat.update()

        def add_plot(self, feature_name: str, xdata, ydata, linewidth, label, color):
            feature_exists = False
            for features in self.avail_plots:
                if features["feature_name"] == feature_name:
                    feature_exists = True
                    break
            if not feature_exists:
                self._add_plot(feature_name, xdata, ydata, linewidth, label, color)

        def _add_plot(self, feature_name: str, xdata, ydata, linewidth, label, color):

            plot_params = {
                "feature_name": feature_name,
                "x": xdata,
                "y": ydata,
                "linewidth": linewidth,
                "label": label,
                "color": color,
            }

            plot_data = dict(plot_params)
            self.avail_plots.append(plot_data)
            del plot_params["feature_name"]
            del plot_params["x"]
            del plot_params["y"]
            self.axes.plot(xdata, ydata, **plot_params)
            self.show_content()

        def remove_plot(self, feature_name: str):
            if self.avail_plots:
                removed_index = [
                    i
                    for i, x in enumerate(self.avail_plots)
                    if x["feature_name"] == feature_name
                ]
                if len(removed_index) == 1:
                    self._remove_plot(removed_index)

        def _remove_plot(self, removed_index):
            del self.avail_plots[removed_index[0]]
            self.__init__(reset=True)

            for params in self.avail_plots:
                temp_params = dict(params)
                del temp_params["feature_name"]
                del temp_params["x"]
                del temp_params["y"]
                self.axes.plot(params["x"], params["y"], **temp_params)

            if self.avail_plots == []:
                self.hide_content()
            else:
                self.show_content()

        def get_axes(self):
            return self.axes

        def get_figure(self):
            return self.figure

    # save to file
    if path is not None:
        path = utils.normpath(path)
        root_, ext = os.path.splitext(path)
        ext = ext.lower()
        if ext not in [".png", ".jpg"]:
            path = root_ + ".png"

        fig.savefig(path, dpi=200, bbox_inches="tight")

    # window title
    root.wm_title("BioSPPy: acceleration signal")

    root.columnconfigure(0, weight=4)
    root.columnconfigure(1, weight=1)
    root.columnconfigure(2, weight=1)
    root.columnconfigure(3, weight=1)
    root.columnconfigure(4, weight=1)

    helv = tkFont.Font(family="Helvetica", size=20)

    # checkbox
    show_features_var = IntVar()
    share_axes_var = IntVar()

    def show_features():
        global feat_fig
        global toolbarfeat
        global share_axes_check_box
        if show_features_var.get() == 0:
            drop_features2.get_menu().config(state="disabled")
            domain_feat_btn.config(state="disabled")

            # remove canvas for plotting
            feat_fig.dump_canvas(root)

        if show_features_var.get() == 1:
            # enable option menu for feature selection
            drop_features2.get_menu().config(state="normal")
            domain_feat_btn.config(state="normal")
            # canvas_features.get_tk_widget().grid(row=2, column=1, columnspan=1, sticky='w', padx=10)

            # add an empty canvas for plotting
            feat_fig = feature_figure()
            share_axes_check_box = Checkbutton(
                root,
                text="Share axes",
                variable=share_axes_var,
                onvalue=1,
                offvalue=0,
                command=lambda feat_fig=feat_fig: share_axes(feat_fig.get_axes()),
            )
            share_axes_check_box.config(font=helv)
            share_axes_check_box.grid(row=4, column=1, sticky=W)

            feat_fig.hide_content()
            feat_fig.draw_in_canvas(root)

    def share_axes(ax2):
        if share_axes_var.get() == 1:
            axs_1[0].get_shared_x_axes().join(axs_1[0], ax2)
            axs_1[1].get_shared_x_axes().join(axs_1[1], ax2)
            axs_1[2].get_shared_x_axes().join(axs_1[2], ax2)

        else:
            for ax in axs_1:
                ax.get_shared_x_axes().remove(ax2)
                ax2.get_shared_x_axes().remove(ax)
                ax.autoscale()
        canvas_raw.draw()

    check1 = Checkbutton(
        root,
        text="Show features",
        variable=show_features_var,
        onvalue=1,
        offvalue=0,
        command=show_features,
    )
    check1.config(font=helv)
    check1.grid(row=0, column=0, sticky=W)

    # FEATURES to be chosen
    clicked_features = StringVar()
    clicked_features.set("features")

    def domain_func():
        global share_axes_check_box

        if feat_domain_var.get() == 1:
            domain_feat_btn["text"] = "Domain: frequency"
            feat_domain_var.set(0)
            feat_fig.remove_plot("VM")
            feat_fig.remove_plot("SM")
            feat_fig.draw_in_canvas(root)
            drop_features2.reset()
            drop_features2.reset_fields(["Spectra"])
            share_axes_check_box.config(state="disabled")
            share_axes_var.set(0)

        else:
            domain_feat_btn["text"] = "Domain: time"
            feat_domain_var.set(1)
            feat_fig.remove_plot("SPECTRA X")
            feat_fig.remove_plot("SPECTRA Y")
            feat_fig.remove_plot("SPECTRA Z")
            feat_fig.draw_in_canvas(root)
            drop_features2.reset()
            drop_features2.reset_fields(["VM", "SM"])
            share_axes_check_box.config(state="normal")

    feat_domain_var = IntVar()
    feat_domain_var.set(1)

    class feat_menu:
        def __init__(self, fieldnames: list, entry_name="Select Features", font=helv):
            self.feat_menu = Menubutton(root, text=entry_name, relief="raised")
            self.feat_menu.grid(row=0, column=2, sticky=W)
            self.feat_menu.menu = Menu(self.feat_menu, tearoff=0)
            self.feat_menu["menu"] = self.feat_menu.menu
            self.feat_menu["font"] = font
            self.font = font
            self.feat_activation = {}

            # setting up disabled fields
            for field in fieldnames:
                self.feat_activation[field] = False

            for field in fieldnames:
                self.feat_menu.menu.add_command(
                    label=field,
                    font=helv,
                    command=lambda field=field: self.update_field(field),
                    foreground="gray",
                )
            self.fieldnames = fieldnames

            self.feat_menu.update()

        def reset(self, entry_name="Select Features"):
            self.feat_menu = Menubutton(root, text=entry_name, relief="raised")
            self.feat_menu.grid(row=0, column=2, sticky=W)
            self.feat_menu.menu = Menu(self.feat_menu, tearoff=0)
            self.feat_menu["menu"] = self.feat_menu.menu
            self.feat_menu["font"] = self.font
            self.feat_menu.update()

        def update_field(self, field):

            self.feat_activation[field] = not self.feat_activation[field]
            self.feat_menu.configure(text=field)  # Set menu text to the selected event

            self.reset()

            for field_ in self.fieldnames:
                if self.feat_activation[field_]:
                    self.feat_menu.menu.add_command(
                        label=field_,
                        font=helv,
                        command=lambda field=field_: self.update_field(field),
                    )
                else:
                    self.feat_menu.menu.add_command(
                        label=field_,
                        font=helv,
                        command=lambda field=field_: self.update_field(field),
                        foreground="gray",
                    )

            if field == "SM":
                if not self.feat_activation[field]:
                    feat_fig.remove_plot("SM")
                    if any(self.feat_activation.values()):
                        feat_fig.set_labels("Time (s)", "Amplitude ($m/s^2$)")

                else:
                    feat_fig.add_plot(
                        "SM",
                        ts,
                        sm,
                        linewidth=MINOR_LW,
                        label="Signal Magnitude feature",
                        color="C4",
                    )
                    feat_fig.set_labels("Time (s)", "Amplitude ($m/s^2$)")

                feat_fig.draw_in_canvas(root)
                feat_fig.add_toolbar(root)

            elif field == "VM":
                if not self.feat_activation[field]:
                    feat_fig.remove_plot("VM")
                    if any(self.feat_activation.values()):
                        feat_fig.set_labels("Time (s)", "Amplitude ($m/s^2$)")
                else:
                    feat_fig.add_plot(
                        "VM",
                        ts,
                        vm,
                        linewidth=MINOR_LW,
                        label="Vector Magnitude feature",
                        color="C3",
                    )
                    feat_fig.set_labels("Time (s)", "Amplitude ($m/s^2$)")

                feat_fig.draw_in_canvas(root)
                feat_fig.add_toolbar(root)

            elif field == "Spectra":
                if not self.feat_activation[field]:
                    feat_fig.remove_plot("SPECTRA X")
                    feat_fig.remove_plot("SPECTRA Y")
                    feat_fig.remove_plot("SPECTRA Z")

                else:

                    feat_fig.add_plot(
                        "SPECTRA X",
                        spectrum["freq"]["x"],
                        spectrum["abs_amp"]["x"],
                        linewidth=MINOR_LW,
                        label="Spectrum along X",
                        color="C0",
                    )
                    feat_fig.draw_in_canvas(root)

                    feat_fig.add_plot(
                        "SPECTRA Y",
                        spectrum["freq"]["y"],
                        spectrum["abs_amp"]["y"],
                        linewidth=MINOR_LW,
                        label="Spectrum along Y",
                        color="C1",
                    )
                    feat_fig.draw_in_canvas(root)

                    feat_fig.add_plot(
                        "SPECTRA Z",
                        spectrum["freq"]["z"],
                        spectrum["abs_amp"]["z"],
                        linewidth=MINOR_LW,
                        label="Spectrum along Z",
                        color="C2",
                    )
                    feat_fig.set_labels(
                        "Frequency ($Hz$)", "Normalized Amplitude [a.u.]"
                    )

                feat_fig.draw_in_canvas(root)
                feat_fig.add_toolbar(root)

            self.feat_menu.config(state="normal")
            self.feat_menu.update()

        def reset_fields(self, fieldnames):
            self.feat_activation = {}

            # setting up disabled fields
            for field in fieldnames:
                self.feat_activation[field] = False

            for field in fieldnames:
                self.feat_menu.menu.add_command(
                    label=field,
                    font=helv,
                    command=lambda field=field: self.update_field(field),
                    foreground="gray",
                )

            self.fieldnames = fieldnames

            self.feat_menu.update()

        def get_menu(self):
            return self.feat_menu

    domain_feat_btn = Button(root, text="Domain: time", command=domain_func)
    domain_feat_btn.config(font=helv, state="disabled")
    domain_feat_btn.grid(row=0, column=1, sticky=W, padx=10)

    temp_features = ["VM", "SM"]

    drop_features2 = feat_menu(temp_features, entry_name="Select Features", font=helv)
    drop_features2.get_menu().config(state="disabled")
    drop_features2.get_menu().update()

    canvas_raw = FigureCanvasTkAgg(fig, master=root)
    canvas_raw.get_tk_widget().grid(row=2, column=0, columnspan=1, sticky="w", padx=10)
    canvas_raw.draw()

    toolbarFrame = Frame(master=root)
    toolbarFrame.grid(row=5, column=0, columnspan=1, sticky=W)
    toolbar = NavigationToolbar2Tk(canvas_raw, toolbarFrame)
    toolbar.update()

    # Add key functionality
    def on_key(event):
        # print('You pressed {}'.format(event.key))
        key_press_handler(event, canvas_raw, toolbar)

    canvas_raw.mpl_connect("key_press_event", on_key)

    # tkinter main loop
    mainloop()
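The `vm` input plotted above is produced upstream by `signals.acc.acc`; as a point of reference, a vector-magnitude feature of this shape can be sketched as follows (my own illustration with made-up samples, not the library's exact code):

```python
import numpy as np

# hypothetical 3-axis ACC samples (rows = time, columns = x, y, z)
raw = np.array([[1.0, 2.0, 2.0],
                [0.0, 3.0, 4.0]])

# vector magnitude per sample: sqrt(x^2 + y^2 + z^2)
vm = np.sqrt(np.sum(raw ** 2, axis=1))
print(vm)  # [3. 5.]
```

This collapses the three axes into one orientation-independent time series, which is why the plot above can show it on a single pair of axes.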
BioSPPy/source/biosppy/inter_plotting/ecg.py
ADDED
@@ -0,0 +1,163 @@
# -*- coding: utf-8 -*-
"""
biosppy.inter_plotting.ecg
--------------------------

This module provides an interactive display option for the ECG plot.

:copyright: (c) 2015-2018 by Instituto de Telecomunicacoes
:license: BSD 3-clause, see LICENSE for more details.

"""

# Imports
from matplotlib import gridspec
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg, NavigationToolbar2Tk

# from matplotlib.backends.backend_wx import *
import matplotlib.pyplot as plt
import numpy as np
from tkinter import *
import os
from biosppy import utils

MAJOR_LW = 2.5
MINOR_LW = 1.5
MAX_ROWS = 10


def plot_ecg(
    ts=None,
    raw=None,
    filtered=None,
    rpeaks=None,
    templates_ts=None,
    templates=None,
    heart_rate_ts=None,
    heart_rate=None,
    path=None,
    show=True,
):
    """Create a summary plot from the output of signals.ecg.ecg.

    Parameters
    ----------
    ts : array
        Signal time axis reference (seconds).
    raw : array
        Raw ECG signal.
    filtered : array
        Filtered ECG signal.
    rpeaks : array
        R-peak location indices.
    templates_ts : array
        Templates time axis reference (seconds).
    templates : array
        Extracted heartbeat templates.
    heart_rate_ts : array
        Heart rate time axis reference (seconds).
    heart_rate : array
        Instantaneous heart rate (bpm).
    path : str, optional
        If provided, the plot will be saved to the specified file.
    show : bool, optional
        If True, show the plot immediately.

    """

    # creating a root widget
    root_tk = Tk()
    root_tk.resizable(False, False)  # default

    fig_raw, axs_raw = plt.subplots(3, 1, sharex=True)
    fig_raw.suptitle("ECG Summary")

    # raw signal plot (1)
    axs_raw[0].plot(ts, raw, linewidth=MAJOR_LW, label="Raw", color="C0")
    axs_raw[0].set_ylabel("Amplitude")
    axs_raw[0].legend()
    axs_raw[0].grid()

    # filtered signal with R-Peaks (2)
    axs_raw[1].plot(ts, filtered, linewidth=MAJOR_LW, label="Filtered", color="C0")

    ymin = np.min(filtered)
    ymax = np.max(filtered)
    alpha = 0.1 * (ymax - ymin)
    ymax += alpha
    ymin -= alpha

    # adding the R-Peaks
    axs_raw[1].vlines(
        ts[rpeaks], ymin, ymax, color="m", linewidth=MINOR_LW, label="R-peaks"
    )

    axs_raw[1].set_ylabel("Amplitude")
    axs_raw[1].legend(loc="upper right")
    axs_raw[1].grid()

    # heart rate (3)
    axs_raw[2].plot(heart_rate_ts, heart_rate, linewidth=MAJOR_LW, label="Heart Rate")
    axs_raw[2].set_xlabel("Time (s)")
    axs_raw[2].set_ylabel("Heart Rate (bpm)")
    axs_raw[2].legend()
    axs_raw[2].grid()

    canvas_raw = FigureCanvasTkAgg(fig_raw, master=root_tk)
    canvas_raw.get_tk_widget().grid(
        row=0, column=0, columnspan=1, rowspan=6, sticky="w"
    )
    canvas_raw.draw()

    toolbarFrame = Frame(master=root_tk)
    toolbarFrame.grid(row=6, column=0, columnspan=1, sticky=W)
    toolbar = NavigationToolbar2Tk(canvas_raw, toolbarFrame)
    toolbar.update()

    fig = fig_raw

    fig_2 = plt.Figure()
    gs = gridspec.GridSpec(6, 1)

    axs_2 = fig_2.add_subplot(gs[:, 0])

    axs_2.plot(templates_ts, templates.T, "m", linewidth=MINOR_LW, alpha=0.7)
    axs_2.set_xlabel("Time (s)")
    axs_2.set_ylabel("Amplitude")
    axs_2.set_title("Templates")
    axs_2.grid()

    grid_params = {"row": 0, "column": 1, "columnspan": 2, "rowspan": 6, "sticky": "w"}
    canvas_2 = FigureCanvasTkAgg(fig_2, master=root_tk)
    canvas_2.get_tk_widget().grid(**grid_params)
    canvas_2.draw()

    toolbarFrame_2 = Frame(master=root_tk)
    toolbarFrame_2.grid(row=6, column=1, columnspan=1, sticky=W)
    toolbar_2 = NavigationToolbar2Tk(canvas_2, toolbarFrame_2)
    toolbar_2.update()

    if show:
        # window title
        root_tk.wm_title("BioSPPy: ECG signal")

        # save to file
        if path is not None:
            path = utils.normpath(path)
            root, ext = os.path.splitext(path)
            ext = ext.lower()
            if ext not in [".png", ".jpg"]:
                path_block_1 = "{}-summary{}".format(root, ".png")
                path_block_2 = "{}-templates{}".format(root, ".png")
            else:
                path_block_1 = "{}-summary{}".format(root, ext)
                path_block_2 = "{}-templates{}".format(root, ext)

            fig.savefig(path_block_1, dpi=200, bbox_inches="tight")
            fig_2.savefig(path_block_2, dpi=200, bbox_inches="tight")

        mainloop()

    else:
        # close
        plt.close(fig)
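The `heart_rate` array plotted in panel (3) comes from `signals.ecg.ecg`; the relationship between R-peaks and instantaneous heart rate can be sketched as follows (hypothetical sampling rate and peak indices, for illustration only):

```python
import numpy as np

fs = 1000.0                             # hypothetical sampling rate (Hz)
rpeaks = np.array([0, 800, 1600, 2500])  # hypothetical R-peak indices

# instantaneous heart rate (bpm) from successive R-R intervals
rr = np.diff(rpeaks) / fs  # R-R intervals in seconds
heart_rate = 60.0 / rr
```

Each R-R interval yields one heart-rate sample, so `heart_rate` has one fewer element than `rpeaks`, which is why it needs its own time axis (`heart_rate_ts`) in the plot.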
BioSPPy/source/biosppy/metrics.py
ADDED
@@ -0,0 +1,171 @@
# -*- coding: utf-8 -*-
"""
biosppy.metrics
---------------

This module provides pairwise distance computation methods.

:copyright: (c) 2015-2018 by Instituto de Telecomunicacoes
:license: BSD 3-clause, see LICENSE for more details.
"""

# Imports
# compat
from __future__ import absolute_import, division, print_function
import six

# 3rd party
import numpy as np
import scipy.spatial.distance as ssd
from scipy import linalg


def pcosine(u, v):
    """Computes the Cosine distance (positive space) between 1-D arrays.

    The Cosine distance (positive space) between `u` and `v` is defined as

    .. math::

        d(u, v) = 1 - abs \\left( \\frac{u \\cdot v}{||u||_2 ||v||_2} \\right)

    where :math:`u \\cdot v` is the dot product of :math:`u` and :math:`v`.

    Parameters
    ----------
    u : array
        Input array.
    v : array
        Input array.

    Returns
    -------
    cosine : float
        Cosine distance between `u` and `v`.

    """

    # validate vectors like scipy does
    u = ssd._validate_vector(u)
    v = ssd._validate_vector(v)

    dist = 1. - np.abs(np.dot(u, v) / (linalg.norm(u) * linalg.norm(v)))

    return dist

+
|
| 57 |
+
def pdist(X, metric='euclidean', p=2, w=None, V=None, VI=None):
|
| 58 |
+
"""Pairwise distances between observations in n-dimensional space.
|
| 59 |
+
|
| 60 |
+
Wraps scipy.spatial.distance.pdist.
|
| 61 |
+
|
| 62 |
+
Parameters
|
| 63 |
+
----------
|
| 64 |
+
X : array
|
| 65 |
+
An m by n array of m original observations in an n-dimensional space.
|
| 66 |
+
metric : str, function, optional
|
| 67 |
+
The distance metric to use; the distance can be 'braycurtis',
|
| 68 |
+
'canberra', 'chebyshev', 'cityblock', 'correlation', 'cosine', 'dice',
|
| 69 |
+
'euclidean', 'hamming', 'jaccard', 'kulsinski', 'mahalanobis',
|
| 70 |
+
'matching', 'minkowski', 'pcosine', 'rogerstanimoto', 'russellrao',
|
| 71 |
+
'seuclidean', 'sokalmichener', 'sokalsneath', 'sqeuclidean', 'yule'.
|
| 72 |
+
p : float, optional
|
| 73 |
+
The p-norm to apply (for Minkowski, weighted and unweighted).
|
| 74 |
+
w : array, optional
|
| 75 |
+
The weight vector (for weighted Minkowski).
|
| 76 |
+
V : array, optional
|
| 77 |
+
The variance vector (for standardized Euclidean).
|
| 78 |
+
VI : array, optional
|
| 79 |
+
The inverse of the covariance matrix (for Mahalanobis).
|
| 80 |
+
|
| 81 |
+
Returns
|
| 82 |
+
-------
|
| 83 |
+
Y : array
|
| 84 |
+
Returns a condensed distance matrix Y. For each :math:`i` and
|
| 85 |
+
:math:`j` (where :math:`i<j<n`), the metric ``dist(u=X[i], v=X[j])``
|
| 86 |
+
is computed and stored in entry ``ij``.
|
| 87 |
+
|
| 88 |
+
"""
|
| 89 |
+
|
| 90 |
+
if isinstance(metric, six.string_types):
|
| 91 |
+
if metric == 'pcosine':
|
| 92 |
+
metric = pcosine
|
| 93 |
+
|
| 94 |
+
return ssd.pdist(X, metric, p, w, V, VI)
|
| 95 |
+
|
| 96 |
+
|
| 97 |
+
def cdist(XA, XB, metric='euclidean', p=2, V=None, VI=None, w=None):
|
| 98 |
+
"""Computes distance between each pair of the two collections of inputs.
|
| 99 |
+
|
| 100 |
+
Wraps scipy.spatial.distance.cdist.
|
| 101 |
+
|
| 102 |
+
Parameters
|
| 103 |
+
----------
|
| 104 |
+
XA : array
|
| 105 |
+
An :math:`m_A` by :math:`n` array of :math:`m_A` original observations
|
| 106 |
+
in an :math:`n`-dimensional space.
|
| 107 |
+
XB : array
|
| 108 |
+
An :math:`m_B` by :math:`n` array of :math:`m_B` original observations
|
| 109 |
+
in an :math:`n`-dimensional space.
|
| 110 |
+
metric : str, function, optional
|
| 111 |
+
The distance metric to use; the distance can be 'braycurtis',
|
| 112 |
+
'canberra', 'chebyshev', 'cityblock', 'correlation', 'cosine', 'dice',
|
| 113 |
+
'euclidean', 'hamming', 'jaccard', 'kulsinski', 'mahalanobis',
|
| 114 |
+
'matching', 'minkowski', 'pcosine', 'rogerstanimoto', 'russellrao',
|
| 115 |
+
'seuclidean', 'sokalmichener', 'sokalsneath', 'sqeuclidean', 'yule'.
|
| 116 |
+
p : float, optional
|
| 117 |
+
The p-norm to apply (for Minkowski, weighted and unweighted).
|
| 118 |
+
w : array, optional
|
| 119 |
+
The weight vector (for weighted Minkowski).
|
| 120 |
+
V : array, optional
|
| 121 |
+
The variance vector (for standardized Euclidean).
|
| 122 |
+
VI : array, optional
|
| 123 |
+
The inverse of the covariance matrix (for Mahalanobis).
|
| 124 |
+
|
| 125 |
+
Returns
|
| 126 |
+
-------
|
| 127 |
+
Y : array
|
| 128 |
+
An :math:`m_A` by :math:`m_B` distance matrix is returned. For each
|
| 129 |
+
:math:`i` and :math:`j`, the metric ``dist(u=XA[i], v=XB[j])``
|
| 130 |
+
is computed and stored in the :math:`ij` th entry.
|
| 131 |
+
|
| 132 |
+
"""
|
| 133 |
+
|
| 134 |
+
if isinstance(metric, six.string_types):
|
| 135 |
+
if metric == 'pcosine':
|
| 136 |
+
metric = pcosine
|
| 137 |
+
|
| 138 |
+
return ssd.cdist(XA, XB, metric, p, V, VI, w)
|
| 139 |
+
|
| 140 |
+
|
| 141 |
+
def squareform(X, force="no", checks=True):
|
| 142 |
+
"""Converts a vector-form distance vector to a square-form distance matrix,
|
| 143 |
+
and vice-versa.
|
| 144 |
+
|
| 145 |
+
Wraps scipy.spatial.distance.squareform.
|
| 146 |
+
|
| 147 |
+
Parameters
|
| 148 |
+
----------
|
| 149 |
+
X : array
|
| 150 |
+
Either a condensed or redundant distance matrix.
|
| 151 |
+
force : str, optional
|
| 152 |
+
As with MATLAB(TM), if force is equal to 'tovector' or 'tomatrix', the
|
| 153 |
+
input will be treated as a distance matrix or distance vector
|
| 154 |
+
respectively.
|
| 155 |
+
checks : bool, optional
|
| 156 |
+
If `checks` is set to False, no checks will be made for matrix
|
| 157 |
+
symmetry nor zero diagonals. This is useful if it is known that
|
| 158 |
+
``X - X.T1`` is small and ``diag(X)`` is close to zero. These values
|
| 159 |
+
are ignored any way so they do not disrupt the squareform
|
| 160 |
+
transformation.
|
| 161 |
+
|
| 162 |
+
Returns
|
| 163 |
+
-------
|
| 164 |
+
Y : array
|
| 165 |
+
If a condensed distance matrix is passed, a redundant one is returned,
|
| 166 |
+
or if a redundant one is passed, a condensed distance matrix is
|
| 167 |
+
returned.
|
| 168 |
+
|
| 169 |
+
"""
|
| 170 |
+
|
| 171 |
+
return ssd.squareform(X, force, checks)
|
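The custom `pcosine` metric plugs into SciPy's `pdist` as a callable, which is exactly how the wrapper above dispatches it. A minimal usage sketch (standalone, re-defining the metric so it runs without BioSPPy installed); note that in positive space, antiparallel vectors get distance 0, not 2:

```python
import numpy as np
import scipy.spatial.distance as ssd
from scipy import linalg


def pcosine(u, v):
    # cosine distance in positive space: |cos| folds antiparallel onto parallel
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    return 1. - np.abs(np.dot(u, v) / (linalg.norm(u) * linalg.norm(v)))


X = np.array([[1., 0.],    # along +x
              [-2., 0.],   # along -x (antiparallel to row 0)
              [0., 3.]])   # along +y (orthogonal to both)

# condensed pairwise distances with the callable metric
Y = ssd.pdist(X, metric=pcosine)
print(np.round(Y, 6).tolist())  # [0.0, 1.0, 1.0]

# square-form view of the same distances
D = ssd.squareform(Y)
print(D.shape)  # (3, 3)
```

This is why `pcosine` is useful for comparing signal templates that may be inverted: a flipped template is still distance 0 from the original.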
BioSPPy/source/biosppy/plotting.py
ADDED
|
@@ -0,0 +1,1741 @@
# -*- coding: utf-8 -*-
"""
biosppy.plotting
----------------

This module provides utilities to plot data.

:copyright: (c) 2015-2018 by Instituto de Telecomunicacoes
:license: BSD 3-clause, see LICENSE for more details.
"""

# Imports
# compat
from __future__ import absolute_import, division, print_function
from six.moves import range, zip
import six

# built-in
import os

# 3rd party
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
import numpy as np

# local
from . import utils
from biosppy.signals import tools as st

# Globals
MAJOR_LW = 2.5
MINOR_LW = 1.5
MAX_ROWS = 10


def _plot_filter(b, a, sampling_rate=1000., nfreqs=4096, ax=None):
    """Compute and plot the frequency response of a digital filter.

    Parameters
    ----------
    b : array
        Numerator coefficients.
    a : array
        Denominator coefficients.
    sampling_rate : int, float, optional
        Sampling frequency (Hz).
    nfreqs : int, optional
        Number of frequency points to compute.
    ax : axis, optional
        Plot Axis to use.

    Returns
    -------
    fig : Figure
        Figure object.

    """

    # compute frequency response
    freqs, resp = st._filter_resp(b, a,
                                  sampling_rate=sampling_rate,
                                  nfreqs=nfreqs)

    # plot
    if ax is None:
        fig = plt.figure()
        ax = fig.add_subplot(111)
    else:
        fig = ax.figure

    # amplitude
    pwr = 20. * np.log10(np.abs(resp))
    ax.semilogx(freqs, pwr, 'b', linewidth=MAJOR_LW)
    ax.set_ylabel('Amplitude (dB)', color='b')
    ax.set_xlabel('Frequency (Hz)')

    # phase
    angles = np.unwrap(np.angle(resp))
    ax2 = ax.twinx()
    ax2.semilogx(freqs, angles, 'g', linewidth=MAJOR_LW)
    ax2.set_ylabel('Angle (radians)', color='g')

    ax.grid()

    return fig


def plot_filter(ftype='FIR',
                band='lowpass',
                order=None,
                frequency=None,
                sampling_rate=1000.,
                path=None,
                show=True, **kwargs):
    """Plot the frequency response of the filter specified with the given
    parameters.

    Parameters
    ----------
    ftype : str
        Filter type:
            * Finite Impulse Response filter ('FIR');
            * Butterworth filter ('butter');
            * Chebyshev filters ('cheby1', 'cheby2');
            * Elliptic filter ('ellip');
            * Bessel filter ('bessel').
    band : str
        Band type:
            * Low-pass filter ('lowpass');
            * High-pass filter ('highpass');
            * Band-pass filter ('bandpass');
            * Band-stop filter ('bandstop').
    order : int
        Order of the filter.
    frequency : int, float, list, array
        Cutoff frequencies; format depends on type of band:
            * 'lowpass' or 'highpass': single frequency;
            * 'bandpass' or 'bandstop': pair of frequencies.
    sampling_rate : int, float, optional
        Sampling frequency (Hz).
    path : str, optional
        If provided, the plot will be saved to the specified file.
    show : bool, optional
        If True, show the plot immediately.
    ``**kwargs`` : dict, optional
        Additional keyword arguments are passed to the underlying
        scipy.signal function.

    """

    # get filter
    b, a = st.get_filter(ftype=ftype,
                         band=band,
                         order=order,
                         frequency=frequency,
                         sampling_rate=sampling_rate, **kwargs)

    # plot
    fig = _plot_filter(b, a, sampling_rate)

    # make layout tight
    fig.tight_layout()

    # save to file
    if path is not None:
        path = utils.normpath(path)
        root, ext = os.path.splitext(path)
        ext = ext.lower()
        if ext not in ['.png', '.jpg']:
            path = root + '.png'

        fig.savefig(path, dpi=200, bbox_inches='tight')

    # show
    if show:
        plt.show()
    else:
        # close
        plt.close(fig)

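`_plot_filter` delegates the response computation to `st._filter_resp`, which is not shown in this hunk. The quantities it plots — magnitude in dB and unwrapped phase over a frequency axis in Hz — can be sketched with `scipy.signal.freqz` (an assumption about what `_filter_resp` computes, not the library's exact code):

```python
import numpy as np
from scipy import signal

sampling_rate = 1000.

# a 5-tap moving-average FIR: b holds the taps, a = [1] (no feedback)
b = np.ones(5) / 5.
a = np.array([1.])

# complex frequency response on 4096 points, then map rad/sample -> Hz
w, resp = signal.freqz(b, a, worN=4096)
freqs = w * sampling_rate / (2 * np.pi)

# the two curves _plot_filter draws: amplitude in dB, unwrapped phase
pwr = 20. * np.log10(np.abs(resp))
angles = np.unwrap(np.angle(resp))

# a moving average has unity gain at DC, so pwr[0] is ~0 dB
print(abs(pwr[0]) < 1e-6)  # True
```

The dB conversion `20 * log10(|H|)` and `np.unwrap` on the phase are exactly the steps the function applies before plotting.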
def plot_spectrum(signal=None, sampling_rate=1000., path=None, show=True):
    """Plot the power spectrum of a signal (one-sided).

    Parameters
    ----------
    signal : array
        Input signal.
    sampling_rate : int, float, optional
        Sampling frequency (Hz).
    path : str, optional
        If provided, the plot will be saved to the specified file.
    show : bool, optional
        If True, show the plot immediately.

    """

    freqs, power = st.power_spectrum(signal, sampling_rate,
                                     pad=0,
                                     pow2=False,
                                     decibel=True)

    fig = plt.figure()
    ax = fig.add_subplot(111)

    ax.plot(freqs, power, linewidth=MAJOR_LW)
    ax.set_xlabel('Frequency (Hz)')
    ax.set_ylabel('Power (dB)')
    ax.grid()

    # make layout tight
    fig.tight_layout()

    # save to file
    if path is not None:
        path = utils.normpath(path)
        root, ext = os.path.splitext(path)
        ext = ext.lower()
        if ext not in ['.png', '.jpg']:
            path = root + '.png'

        fig.savefig(path, dpi=200, bbox_inches='tight')

    # show
    if show:
        plt.show()
    else:
        # close
        plt.close(fig)

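`plot_spectrum` relies on `st.power_spectrum` for the one-sided, decibel-scaled spectrum it draws. That computation can be approximated with NumPy's real FFT (a sketch of the idea, not the library routine — `power_spectrum` also supports padding and power-of-two sizing, skipped here):

```python
import numpy as np

sampling_rate = 1000.
t = np.arange(0, 1., 1. / sampling_rate)
x = np.sin(2 * np.pi * 50. * t)  # 50 Hz tone, exactly 50 periods

# one-sided spectrum: rfft keeps only non-negative frequencies
spec = np.fft.rfft(x)
freqs = np.fft.rfftfreq(len(x), d=1. / sampling_rate)

# decibel scale; the small offset avoids log10(0) at empty bins
power = 20. * np.log10(np.abs(spec) + 1e-12)

print(freqs[np.argmax(power)])  # 50.0
```

With a 1 s window the bin spacing is 1 Hz, so the tone lands exactly on the 50 Hz bin and the peak is leakage-free.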
def plot_acc(ts=None,
             raw=None,
             vm=None,
             sm=None,
             path=None,
             show=False):
    """Create a summary plot from the output of signals.acc.acc.

    Parameters
    ----------
    ts : array
        Signal time axis reference (seconds).
    raw : array
        Raw ACC signal.
    vm : array
        Vector Magnitude feature of the signal.
    sm : array
        Signal Magnitude feature of the signal.
    path : str, optional
        If provided, the plot will be saved to the specified file.
    show : bool, optional
        If True, show the plot immediately.

    """

    raw_t = np.transpose(raw)
    acc_x, acc_y, acc_z = raw_t[0], raw_t[1], raw_t[2]

    fig = plt.figure()
    fig.suptitle('ACC Summary')
    gs = gridspec.GridSpec(6, 2)

    # raw signal (acc_x)
    ax1 = fig.add_subplot(gs[:2, 0])

    ax1.plot(ts, acc_x, linewidth=MINOR_LW, label='Raw acc along X', color='C0')

    ax1.set_ylabel('Amplitude ($m/s^2$)')
    ax1.legend()
    ax1.grid()

    # raw signal (acc_y)
    ax2 = fig.add_subplot(gs[2:4, 0], sharex=ax1)

    ax2.plot(ts, acc_y, linewidth=MINOR_LW, label='Raw acc along Y', color='C1')

    ax2.set_ylabel('Amplitude ($m/s^2$)')
    ax2.legend()
    ax2.grid()

    # raw signal (acc_z)
    ax3 = fig.add_subplot(gs[4:, 0], sharex=ax1)

    ax3.plot(ts, acc_z, linewidth=MINOR_LW, label='Raw acc along Z', color='C2')

    ax3.set_ylabel('Amplitude ($m/s^2$)')
    ax3.set_xlabel('Time (s)')
    ax3.legend()
    ax3.grid()

    # vector magnitude
    ax4 = fig.add_subplot(gs[:3, 1], sharex=ax1)

    ax4.plot(ts, vm, linewidth=MINOR_LW, label='Vector Magnitude feature', color='C3')

    ax4.set_ylabel('Amplitude ($m/s^2$)')
    ax4.legend()
    ax4.grid()

    # signal magnitude
    ax5 = fig.add_subplot(gs[3:, 1], sharex=ax1)

    ax5.plot(ts, sm, linewidth=MINOR_LW, label='Signal Magnitude feature', color='C4')

    ax5.set_ylabel('Amplitude ($m/s^2$)')
    ax5.set_xlabel('Time (s)')
    ax5.legend()
    ax5.grid()

    # make layout tight
    gs.tight_layout(fig)

    # save to file
    if path is not None:
        path = utils.normpath(path)
        root, ext = os.path.splitext(path)
        ext = ext.lower()
        if ext not in ['.png', '.jpg']:
            path = root + '.png'

        fig.savefig(path, dpi=200, bbox_inches='tight')

    # show
    if show:
        plt.show()
    else:
        # close
        plt.close(fig)

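The `vm` and `sm` arrays that `plot_acc` draws are computed upstream by `signals.acc.acc`. A sketch of the commonly used definitions — per-sample Euclidean norm for Vector Magnitude and per-sample mean of absolute axis values for Signal Magnitude (an assumption here; check `signals.acc.acc` for the exact formulas the library uses):

```python
import numpy as np

# raw: (n_samples, 3) tri-axial accelerometer data, same layout plot_acc expects
raw = np.array([[3., 4., 0.],
                [0., 0., 9.81]])

acc_x, acc_y, acc_z = raw.T  # the same transpose plot_acc performs

# Vector Magnitude: Euclidean norm of the three axes at each sample
vm = np.sqrt(acc_x ** 2 + acc_y ** 2 + acc_z ** 2)

# Signal Magnitude: mean absolute value across axes at each sample
sm = (np.abs(acc_x) + np.abs(acc_y) + np.abs(acc_z)) / 3.

print(np.allclose(vm, [5., 9.81]))  # True
```

Both features share the time axis `ts` of the raw signal, which is why all five subplots above can use `sharex=ax1`.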
def plot_ppg(ts=None,
             raw=None,
             filtered=None,
             onsets=None,
             heart_rate_ts=None,
             heart_rate=None,
             path=None,
             show=False):
    """Create a summary plot from the output of signals.ppg.ppg.

    Parameters
    ----------
    ts : array
        Signal time axis reference (seconds).
    raw : array
        Raw PPG signal.
    filtered : array
        Filtered PPG signal.
    onsets : array
        Indices of PPG pulse onsets.
    heart_rate_ts : array
        Heart rate time axis reference (seconds).
    heart_rate : array
        Instantaneous heart rate (bpm).
    path : str, optional
        If provided, the plot will be saved to the specified file.
    show : bool, optional
        If True, show the plot immediately.

    """

    fig = plt.figure()
    fig.suptitle('PPG Summary')

    # raw signal
    ax1 = fig.add_subplot(311)

    ax1.plot(ts, raw, linewidth=MAJOR_LW, label='Raw')

    ax1.set_ylabel('Amplitude')
    ax1.legend()
    ax1.grid()

    # filtered signal with onsets
    ax2 = fig.add_subplot(312, sharex=ax1)

    ymin = np.min(filtered)
    ymax = np.max(filtered)
    alpha = 0.1 * (ymax - ymin)
    ymax += alpha
    ymin -= alpha

    ax2.plot(ts, filtered, linewidth=MAJOR_LW, label='Filtered')
    ax2.vlines(ts[onsets], ymin, ymax,
               color='m',
               linewidth=MINOR_LW,
               label='Onsets')

    ax2.set_ylabel('Amplitude')
    ax2.legend()
    ax2.grid()

    # heart rate
    ax3 = fig.add_subplot(313, sharex=ax1)

    ax3.plot(heart_rate_ts, heart_rate, linewidth=MAJOR_LW, label='Heart Rate')

    ax3.set_xlabel('Time (s)')
    ax3.set_ylabel('Heart Rate (bpm)')
    ax3.legend()
    ax3.grid()

    # make layout tight
    fig.tight_layout()

    # save to file
    if path is not None:
        path = utils.normpath(path)
        root, ext = os.path.splitext(path)
        ext = ext.lower()
        if ext not in ['.png', '.jpg']:
            path = root + '.png'

        fig.savefig(path, dpi=200, bbox_inches='tight')

    # show
    if show:
        plt.show()
    else:
        # close
        plt.close(fig)

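`plot_ppg` (and `plot_bvp`/`plot_abp` below) take `heart_rate_ts` and `heart_rate` already derived from the detected pulse onsets. The usual conversion from onset sample indices to instantaneous heart rate — 60 divided by each inter-beat interval in seconds — can be sketched as follows (the onset values are hypothetical; the library computes these inside its signal-processing routines):

```python
import numpy as np

sampling_rate = 1000.

# hypothetical pulse-onset sample indices: ~1 beat/s, last interval slower
onsets = np.array([0, 1000, 2000, 3050])

# inter-beat intervals in seconds, then instantaneous heart rate in bpm
ibi = np.diff(onsets) / sampling_rate
heart_rate = 60. / ibi

# one HR value per interval, timestamped at the interval's closing onset
heart_rate_ts = onsets[1:] / sampling_rate

print(np.round(heart_rate, 2).tolist())  # [60.0, 60.0, 57.14]
```

This also explains why `heart_rate_ts` has one fewer element than `onsets`: each rate needs a completed interval, so the third subplot shares the x-axis with the signal panels but starts at the second beat.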
| 405 |
+
def plot_bvp(ts=None,
|
| 406 |
+
raw=None,
|
| 407 |
+
filtered=None,
|
| 408 |
+
onsets=None,
|
| 409 |
+
heart_rate_ts=None,
|
| 410 |
+
heart_rate=None,
|
| 411 |
+
path=None,
|
| 412 |
+
show=False):
|
| 413 |
+
"""Create a summary plot from the output of signals.bvp.bvp.
|
| 414 |
+
|
| 415 |
+
Parameters
|
| 416 |
+
----------
|
| 417 |
+
ts : array
|
| 418 |
+
Signal time axis reference (seconds).
|
| 419 |
+
raw : array
|
| 420 |
+
Raw BVP signal.
|
| 421 |
+
filtered : array
|
| 422 |
+
Filtered BVP signal.
|
| 423 |
+
onsets : array
|
| 424 |
+
Indices of BVP pulse onsets.
|
| 425 |
+
heart_rate_ts : array
|
| 426 |
+
Heart rate time axis reference (seconds).
|
| 427 |
+
heart_rate : array
|
| 428 |
+
        Instantaneous heart rate (bpm).
    path : str, optional
        If provided, the plot will be saved to the specified file.
    show : bool, optional
        If True, show the plot immediately.

    """

    fig = plt.figure()
    fig.suptitle('BVP Summary')

    # raw signal
    ax1 = fig.add_subplot(311)

    ax1.plot(ts, raw, linewidth=MAJOR_LW, label='Raw')

    ax1.set_ylabel('Amplitude')
    ax1.legend()
    ax1.grid()

    # filtered signal with onsets
    ax2 = fig.add_subplot(312, sharex=ax1)

    ymin = np.min(filtered)
    ymax = np.max(filtered)
    alpha = 0.1 * (ymax - ymin)
    ymax += alpha
    ymin -= alpha

    ax2.plot(ts, filtered, linewidth=MAJOR_LW, label='Filtered')
    ax2.vlines(ts[onsets], ymin, ymax,
               color='m',
               linewidth=MINOR_LW,
               label='Onsets')

    ax2.set_ylabel('Amplitude')
    ax2.legend()
    ax2.grid()

    # heart rate
    ax3 = fig.add_subplot(313, sharex=ax1)

    ax3.plot(heart_rate_ts, heart_rate, linewidth=MAJOR_LW, label='Heart Rate')

    ax3.set_xlabel('Time (s)')
    ax3.set_ylabel('Heart Rate (bpm)')
    ax3.legend()
    ax3.grid()

    # make layout tight
    fig.tight_layout()

    # save to file
    if path is not None:
        path = utils.normpath(path)
        root, ext = os.path.splitext(path)
        ext = ext.lower()
        # splitext keeps the leading dot, so compare against dotted extensions
        if ext not in ['.png', '.jpg']:
            path = root + '.png'

        fig.savefig(path, dpi=200, bbox_inches='tight')

    # show
    if show:
        plt.show()
    else:
        # close
        plt.close(fig)


def plot_abp(ts=None,
             raw=None,
             filtered=None,
             onsets=None,
             heart_rate_ts=None,
             heart_rate=None,
             path=None,
             show=False):
    """Create a summary plot from the output of signals.abp.abp.

    Parameters
    ----------
    ts : array
        Signal time axis reference (seconds).
    raw : array
        Raw ABP signal.
    filtered : array
        Filtered ABP signal.
    onsets : array
        Indices of ABP pulse onsets.
    heart_rate_ts : array
        Heart rate time axis reference (seconds).
    heart_rate : array
        Instantaneous heart rate (bpm).
    path : str, optional
        If provided, the plot will be saved to the specified file.
    show : bool, optional
        If True, show the plot immediately.

    """

    fig = plt.figure()
    fig.suptitle('ABP Summary')

    # raw signal
    ax1 = fig.add_subplot(311)

    ax1.plot(ts, raw, linewidth=MAJOR_LW, label='Raw')

    ax1.set_ylabel('Amplitude')
    ax1.legend()
    ax1.grid()

    # filtered signal with onsets
    ax2 = fig.add_subplot(312, sharex=ax1)

    ymin = np.min(filtered)
    ymax = np.max(filtered)
    alpha = 0.1 * (ymax - ymin)
    ymax += alpha
    ymin -= alpha

    ax2.plot(ts, filtered, linewidth=MAJOR_LW, label='Filtered')
    ax2.vlines(ts[onsets], ymin, ymax,
               color='m',
               linewidth=MINOR_LW,
               label='Onsets')

    ax2.set_ylabel('Amplitude')
    ax2.legend()
    ax2.grid()

    # heart rate
    ax3 = fig.add_subplot(313, sharex=ax1)

    ax3.plot(heart_rate_ts, heart_rate, linewidth=MAJOR_LW, label='Heart Rate')

    ax3.set_xlabel('Time (s)')
    ax3.set_ylabel('Heart Rate (bpm)')
    ax3.legend()
    ax3.grid()

    # make layout tight
    fig.tight_layout()

    # save to file
    if path is not None:
        path = utils.normpath(path)
        root, ext = os.path.splitext(path)
        ext = ext.lower()
        # splitext keeps the leading dot, so compare against dotted extensions
        if ext not in ['.png', '.jpg']:
            path = root + '.png'

        fig.savefig(path, dpi=200, bbox_inches='tight')

    # show
    if show:
        plt.show()
    else:
        # close
        plt.close(fig)


def plot_eda(ts=None,
             raw=None,
             filtered=None,
             onsets=None,
             peaks=None,
             amplitudes=None,
             path=None,
             show=False):
    """Create a summary plot from the output of signals.eda.eda.

    Parameters
    ----------
    ts : array
        Signal time axis reference (seconds).
    raw : array
        Raw EDA signal.
    filtered : array
        Filtered EDA signal.
    onsets : array
        Indices of SCR pulse onsets.
    peaks : array
        Indices of the SCR peaks.
    amplitudes : array
        SCR pulse amplitudes.
    path : str, optional
        If provided, the plot will be saved to the specified file.
    show : bool, optional
        If True, show the plot immediately.

    """

    fig = plt.figure()
    fig.suptitle('EDA Summary')

    # raw signal
    ax1 = fig.add_subplot(311)

    ax1.plot(ts, raw, linewidth=MAJOR_LW, label='Raw')

    ax1.set_ylabel('Amplitude')
    ax1.legend()
    ax1.grid()

    # filtered signal with onsets, peaks
    ax2 = fig.add_subplot(312, sharex=ax1)

    ymin = np.min(filtered)
    ymax = np.max(filtered)
    alpha = 0.1 * (ymax - ymin)
    ymax += alpha
    ymin -= alpha

    ax2.plot(ts, filtered, linewidth=MAJOR_LW, label='Filtered')
    ax2.vlines(ts[onsets], ymin, ymax,
               color='m',
               linewidth=MINOR_LW,
               label='Onsets')
    ax2.vlines(ts[peaks], ymin, ymax,
               color='g',
               linewidth=MINOR_LW,
               label='Peaks')

    ax2.set_ylabel('Amplitude')
    ax2.legend()
    ax2.grid()

    # amplitudes
    ax3 = fig.add_subplot(313, sharex=ax1)

    ax3.plot(ts[onsets], amplitudes, linewidth=MAJOR_LW, label='Amplitudes')

    ax3.set_xlabel('Time (s)')
    ax3.set_ylabel('Amplitude')
    ax3.legend()
    ax3.grid()

    # make layout tight
    fig.tight_layout()

    # save to file
    if path is not None:
        path = utils.normpath(path)
        root, ext = os.path.splitext(path)
        ext = ext.lower()
        # splitext keeps the leading dot, so compare against dotted extensions
        if ext not in ['.png', '.jpg']:
            path = root + '.png'

        fig.savefig(path, dpi=200, bbox_inches='tight')

    # show
    if show:
        plt.show()
    else:
        # close
        plt.close(fig)


def plot_emg(ts=None,
             sampling_rate=None,
             raw=None,
             filtered=None,
             onsets=None,
             processed=None,
             path=None,
             show=False):
    """Create a summary plot from the output of signals.emg.emg.

    Parameters
    ----------
    ts : array
        Signal time axis reference (seconds).
    sampling_rate : int, float
        Sampling frequency (Hz).
    raw : array
        Raw EMG signal.
    filtered : array
        Filtered EMG signal.
    onsets : array
        Indices of EMG pulse onsets.
    processed : array, optional
        Processed EMG signal according to the chosen onset detector.
    path : str, optional
        If provided, the plot will be saved to the specified file.
    show : bool, optional
        If True, show the plot immediately.

    """

    fig = plt.figure()
    fig.suptitle('EMG Summary')

    if processed is not None:
        ax1 = fig.add_subplot(311)
        ax2 = fig.add_subplot(312, sharex=ax1)
        ax3 = fig.add_subplot(313)

        # processed signal
        L = len(processed)
        T = (L - 1) / sampling_rate
        ts_processed = np.linspace(0, T, L, endpoint=True)
        ax3.plot(ts_processed, processed,
                 linewidth=MAJOR_LW,
                 label='Processed')
        ax3.set_xlabel('Time (s)')
        ax3.set_ylabel('Amplitude')
        ax3.legend()
        ax3.grid()
    else:
        ax1 = fig.add_subplot(211)
        ax2 = fig.add_subplot(212, sharex=ax1)

    # raw signal
    ax1.plot(ts, raw, linewidth=MAJOR_LW, label='Raw')

    ax1.set_ylabel('Amplitude')
    ax1.legend()
    ax1.grid()

    # filtered signal with onsets
    ymin = np.min(filtered)
    ymax = np.max(filtered)
    alpha = 0.1 * (ymax - ymin)
    ymax += alpha
    ymin -= alpha

    ax2.plot(ts, filtered, linewidth=MAJOR_LW, label='Filtered')
    ax2.vlines(ts[onsets], ymin, ymax,
               color='m',
               linewidth=MINOR_LW,
               label='Onsets')

    ax2.set_xlabel('Time (s)')
    ax2.set_ylabel('Amplitude')
    ax2.legend()
    ax2.grid()

    # make layout tight
    fig.tight_layout()

    # save to file
    if path is not None:
        path = utils.normpath(path)
        root, ext = os.path.splitext(path)
        ext = ext.lower()
        # splitext keeps the leading dot, so compare against dotted extensions
        if ext not in ['.png', '.jpg']:
            path = root + '.png'

        fig.savefig(path, dpi=200, bbox_inches='tight')

    # show
    if show:
        plt.show()
    else:
        # close
        plt.close(fig)


def plot_resp(ts=None,
              raw=None,
              filtered=None,
              zeros=None,
              resp_rate_ts=None,
              resp_rate=None,
              path=None,
              show=False):
    """Create a summary plot from the output of signals.resp.resp.

    Parameters
    ----------
    ts : array
        Signal time axis reference (seconds).
    raw : array
        Raw Resp signal.
    filtered : array
        Filtered Resp signal.
    zeros : array
        Indices of respiration zero crossings.
    resp_rate_ts : array
        Respiration rate time axis reference (seconds).
    resp_rate : array
        Instantaneous respiration rate (Hz).
    path : str, optional
        If provided, the plot will be saved to the specified file.
    show : bool, optional
        If True, show the plot immediately.

    """

    fig = plt.figure()
    fig.suptitle('Respiration Summary')

    # raw signal
    ax1 = fig.add_subplot(311)

    ax1.plot(ts, raw, linewidth=MAJOR_LW, label='Raw')

    ax1.set_ylabel('Amplitude')
    ax1.legend()
    ax1.grid()

    # filtered signal with zeros
    ax2 = fig.add_subplot(312, sharex=ax1)

    ymin = np.min(filtered)
    ymax = np.max(filtered)
    alpha = 0.1 * (ymax - ymin)
    ymax += alpha
    ymin -= alpha

    ax2.plot(ts, filtered, linewidth=MAJOR_LW, label='Filtered')
    ax2.vlines(ts[zeros], ymin, ymax,
               color='m',
               linewidth=MINOR_LW,
               label='Zero crossings')

    ax2.set_ylabel('Amplitude')
    ax2.legend()
    ax2.grid()

    # respiration rate
    ax3 = fig.add_subplot(313, sharex=ax1)

    ax3.plot(resp_rate_ts, resp_rate,
             linewidth=MAJOR_LW,
             label='Respiration Rate')

    ax3.set_xlabel('Time (s)')
    ax3.set_ylabel('Respiration Rate (Hz)')
    ax3.legend()
    ax3.grid()

    # make layout tight
    fig.tight_layout()

    # save to file
    if path is not None:
        path = utils.normpath(path)
        root, ext = os.path.splitext(path)
        ext = ext.lower()
        # splitext keeps the leading dot, so compare against dotted extensions
        if ext not in ['.png', '.jpg']:
            path = root + '.png'

        fig.savefig(path, dpi=200, bbox_inches='tight')

    # show
    if show:
        plt.show()
    else:
        # close
        plt.close(fig)


def plot_eeg(ts=None,
             raw=None,
             filtered=None,
             labels=None,
             features_ts=None,
             theta=None,
             alpha_low=None,
             alpha_high=None,
             beta=None,
             gamma=None,
             plf_pairs=None,
             plf=None,
             path=None,
             show=False):
    """Create a summary plot from the output of signals.eeg.eeg.

    Parameters
    ----------
    ts : array
        Signal time axis reference (seconds).
    raw : array
        Raw EEG signal.
    filtered : array
        Filtered EEG signal.
    labels : list
        Channel labels.
    features_ts : array
        Features time axis reference (seconds).
    theta : array
        Average power in the 4 to 8 Hz frequency band; each column is one
        EEG channel.
    alpha_low : array
        Average power in the 8 to 10 Hz frequency band; each column is one
        EEG channel.
    alpha_high : array
        Average power in the 10 to 13 Hz frequency band; each column is one
        EEG channel.
    beta : array
        Average power in the 13 to 25 Hz frequency band; each column is one
        EEG channel.
    gamma : array
        Average power in the 25 to 40 Hz frequency band; each column is one
        EEG channel.
    plf_pairs : list
        PLF pair indices.
    plf : array
        PLF matrix; each column is a channel pair.
    path : str, optional
        If provided, the plot will be saved to the specified file.
    show : bool, optional
        If True, show the plot immediately.

    """

    nrows = MAX_ROWS
    alpha = 2.

    # get number of channels
    nch = raw.shape[1]

    figs = []

    # raw
    fig = _plot_multichannel(ts=ts,
                             signal=raw,
                             labels=labels,
                             nrows=nrows,
                             alpha=alpha,
                             title='EEG Summary - Raw',
                             xlabel='Time (s)',
                             ylabel='Amplitude')
    figs.append(('_Raw', fig))

    # filtered
    fig = _plot_multichannel(ts=ts,
                             signal=filtered,
                             labels=labels,
                             nrows=nrows,
                             alpha=alpha,
                             title='EEG Summary - Filtered',
                             xlabel='Time (s)',
                             ylabel='Amplitude')
    figs.append(('_Filtered', fig))

    # band-power
    names = ('Theta Band', 'Lower Alpha Band', 'Higher Alpha Band',
             'Beta Band', 'Gamma Band')
    args = (theta, alpha_low, alpha_high, beta, gamma)
    for n, a in zip(names, args):
        fig = _plot_multichannel(ts=features_ts,
                                 signal=a,
                                 labels=labels,
                                 nrows=nrows,
                                 alpha=alpha,
                                 title='EEG Summary - %s' % n,
                                 xlabel='Time (s)',
                                 ylabel='Power')
        figs.append(('_' + n.replace(' ', '_'), fig))

    # only plot/compute PLF if there is more than one channel
    if nch > 1:
        plf_labels = ['%s vs %s' % (labels[p[0]], labels[p[1]])
                      for p in plf_pairs]
        fig = _plot_multichannel(ts=features_ts,
                                 signal=plf,
                                 labels=plf_labels,
                                 nrows=nrows,
                                 alpha=alpha,
                                 title='EEG Summary - Phase-Locking Factor',
                                 xlabel='Time (s)',
                                 ylabel='PLF')
        figs.append(('_PLF', fig))

    # save to file
    if path is not None:
        path = utils.normpath(path)
        root, ext = os.path.splitext(path)
        ext = ext.lower()
        # splitext keeps the leading dot, so compare against dotted extensions
        if ext not in ['.png', '.jpg']:
            ext = '.png'

        for n, fig in figs:
            path = root + n + ext
            fig.savefig(path, dpi=200, bbox_inches='tight')

    # show
    if show:
        plt.show()
    else:
        # close
        for _, fig in figs:
            plt.close(fig)


def _yscaling(signal=None, alpha=1.5):
    """Get y axis limits for a signal with scaling.

    Parameters
    ----------
    signal : array
        Input signal.
    alpha : float, optional
        Scaling factor.

    Returns
    -------
    ymin : float
        Minimum y value.
    ymax : float
        Maximum y value.

    """

    mi = np.min(signal)
    m = np.mean(signal)
    mx = np.max(signal)

    if mi == mx:
        ymin = m - 1
        ymax = m + 1
    else:
        ymin = m - alpha * (m - mi)
        ymax = m + alpha * (mx - m)

    return ymin, ymax


def _plot_multichannel(ts=None,
                       signal=None,
                       labels=None,
                       nrows=10,
                       alpha=2.,
                       title=None,
                       xlabel=None,
                       ylabel=None):
    """Plot a multi-channel signal.

    Parameters
    ----------
    ts : array
        Signal time axis reference (seconds).
    signal : array
        Multi-channel signal; each column is one channel.
    labels : list, optional
        Channel labels.
    nrows : int, optional
        Maximum number of rows to use.
    alpha : float, optional
        Scaling factor for y axis.
    title : str, optional
        Plot title.
    xlabel : str, optional
        Label for x axis.
    ylabel : str, optional
        Label for y axis.

    Returns
    -------
    fig : Figure
        Figure object.

    """

    # ensure numpy
    signal = np.array(signal)
    nch = signal.shape[1]

    # check labels
    if labels is None:
        labels = ['Ch. %d' % i for i in range(nch)]

    if nch < nrows:
        nrows = nch

    ncols = int(np.ceil(nch / float(nrows)))

    fig = plt.figure()

    # title
    if title is not None:
        fig.suptitle(title)

    gs = gridspec.GridSpec(nrows, ncols, hspace=0, wspace=0.2)

    # reference axes
    ax0 = fig.add_subplot(gs[0, 0])
    ax0.plot(ts, signal[:, 0], linewidth=MAJOR_LW, label=labels[0])
    ymin, ymax = _yscaling(signal[:, 0], alpha=alpha)
    ax0.set_ylim(ymin, ymax)
    ax0.legend()
    ax0.grid()
    axs = {(0, 0): ax0}

    for i in range(1, nch - 1):
        a = i % nrows
        b = int(np.floor(i / float(nrows)))
        ax = fig.add_subplot(gs[a, b], sharex=ax0)
        axs[(a, b)] = ax

        ax.plot(ts, signal[:, i], linewidth=MAJOR_LW, label=labels[i])
        ymin, ymax = _yscaling(signal[:, i], alpha=alpha)
        ax.set_ylim(ymin, ymax)
        ax.legend()
        ax.grid()

    # last plot
    i = nch - 1
    a = i % nrows
    b = int(np.floor(i / float(nrows)))
    ax = fig.add_subplot(gs[a, b], sharex=ax0)
    axs[(a, b)] = ax

    ax.plot(ts, signal[:, -1], linewidth=MAJOR_LW, label=labels[-1])
    ymin, ymax = _yscaling(signal[:, -1], alpha=alpha)
    ax.set_ylim(ymin, ymax)
    ax.legend()
    ax.grid()

    if xlabel is not None:
        ax.set_xlabel(xlabel)

        # add x label to the bottom axis of each remaining column
        for b in range(0, ncols - 1):
            a = nrows - 1
            ax = axs[(a, b)]
            ax.set_xlabel(xlabel)

    if ylabel is not None:
        # middle left
        a = nrows // 2
        ax = axs[(a, 0)]
        ax.set_ylabel(ylabel)

    # make layout tight
    gs.tight_layout(fig)

    return fig


def plot_ecg(ts=None,
             raw=None,
             filtered=None,
             rpeaks=None,
             templates_ts=None,
             templates=None,
             heart_rate_ts=None,
             heart_rate=None,
             path=None,
             show=False):
    """Create a summary plot from the output of signals.ecg.ecg.

    Parameters
    ----------
    ts : array
        Signal time axis reference (seconds).
    raw : array
        Raw ECG signal.
    filtered : array
        Filtered ECG signal.
    rpeaks : array
        R-peak location indices.
    templates_ts : array
        Templates time axis reference (seconds).
    templates : array
        Extracted heartbeat templates.
    heart_rate_ts : array
        Heart rate time axis reference (seconds).
    heart_rate : array
        Instantaneous heart rate (bpm).
    path : str, optional
        If provided, the plot will be saved to the specified file.
    show : bool, optional
        If True, show the plot immediately.

    """

    fig = plt.figure()
    fig.suptitle('ECG Summary')
    gs = gridspec.GridSpec(6, 2)

    # raw signal
    ax1 = fig.add_subplot(gs[:2, 0])

    ax1.plot(ts, raw, linewidth=MAJOR_LW, label='Raw')

    ax1.set_ylabel('Amplitude')
    ax1.legend()
    ax1.grid()

    # filtered signal with R-peaks
    ax2 = fig.add_subplot(gs[2:4, 0], sharex=ax1)

    ymin = np.min(filtered)
    ymax = np.max(filtered)
    alpha = 0.1 * (ymax - ymin)
    ymax += alpha
    ymin -= alpha

    ax2.plot(ts, filtered, linewidth=MAJOR_LW, label='Filtered')
    ax2.vlines(ts[rpeaks], ymin, ymax,
               color='m',
               linewidth=MINOR_LW,
               label='R-peaks')

    ax2.set_ylabel('Amplitude')
    ax2.legend()
    ax2.grid()

    # heart rate
    ax3 = fig.add_subplot(gs[4:, 0], sharex=ax1)

    ax3.plot(heart_rate_ts, heart_rate, linewidth=MAJOR_LW, label='Heart Rate')

    ax3.set_xlabel('Time (s)')
    ax3.set_ylabel('Heart Rate (bpm)')
    ax3.legend()
    ax3.grid()

    # templates
    ax4 = fig.add_subplot(gs[1:5, 1])

    ax4.plot(templates_ts, templates.T, 'm', linewidth=MINOR_LW, alpha=0.7)

    ax4.set_xlabel('Time (s)')
    ax4.set_ylabel('Amplitude')
    ax4.set_title('Templates')
    ax4.grid()

    # make layout tight
    gs.tight_layout(fig)

    # save to file
    if path is not None:
        path = utils.normpath(path)
        root, ext = os.path.splitext(path)
        ext = ext.lower()
        # splitext keeps the leading dot, so compare against dotted extensions
        if ext not in ['.png', '.jpg']:
            path = root + '.png'

        fig.savefig(path, dpi=200, bbox_inches='tight')

    # show
    if show:
        plt.show()
    else:
        # close
        plt.close(fig)


def plot_bcg(ts=None,
             raw=None,
             filtered=None,
             jpeaks=None,
             templates_ts=None,
             templates=None,
             heart_rate_ts=None,
             heart_rate=None,
             path=None,
             show=False):
    """Create a summary plot from the output of signals.bcg.bcg.

    Parameters
    ----------
    ts : array
        Signal time axis reference (seconds).
    raw : array
        Raw BCG signal.
    filtered : array
        Filtered BCG signal.
    jpeaks : array
        J-peak location indices.
    templates_ts : array
        Templates time axis reference (seconds).
    templates : array
        Extracted heartbeat templates.
    heart_rate_ts : array
        Heart rate time axis reference (seconds).
    heart_rate : array
        Instantaneous heart rate (bpm).
    path : str, optional
        If provided, the plot will be saved to the specified file.
    show : bool, optional
        If True, show the plot immediately.

    """

    fig = plt.figure()
    fig.suptitle('BCG Summary')
    gs = gridspec.GridSpec(6, 2)

    # raw signal
    ax1 = fig.add_subplot(gs[:2, 0])

    ax1.plot(ts, raw, linewidth=MAJOR_LW, label='Raw')

    ax1.set_ylabel('Amplitude')
    ax1.legend()
    ax1.grid()

    # filtered signal with J-peaks
    ax2 = fig.add_subplot(gs[2:4, 0], sharex=ax1)

    ymin = np.min(filtered)
    ymax = np.max(filtered)
    alpha = 0.1 * (ymax - ymin)
    ymax += alpha
    ymin -= alpha

    ax2.plot(ts, filtered, linewidth=MAJOR_LW, label='Filtered')
    ax2.vlines(ts[jpeaks], ymin, ymax,
               color='m',
               linewidth=MINOR_LW,
               label='J-peaks')

    ax2.set_ylabel('Amplitude')
    ax2.legend()
    ax2.grid()

    # heart rate
    ax3 = fig.add_subplot(gs[4:, 0], sharex=ax1)

    ax3.plot(heart_rate_ts, heart_rate, linewidth=MAJOR_LW, label='Heart Rate')

    ax3.set_xlabel('Time (s)')
    ax3.set_ylabel('Heart Rate (bpm)')
    ax3.legend()
    ax3.grid()

    # templates
    ax4 = fig.add_subplot(gs[1:5, 1])

    ax4.plot(templates_ts, templates.T, 'm', linewidth=MINOR_LW, alpha=0.7)

    ax4.set_xlabel('Time (s)')
    ax4.set_ylabel('Amplitude')
    ax4.set_title('Templates')
    ax4.grid()

    # make layout tight
    gs.tight_layout(fig)

    # save to file
    if path is not None:
        path = utils.normpath(path)
        root, ext = os.path.splitext(path)
        ext = ext.lower()
        # splitext keeps the leading dot, so compare against dotted extensions
        if ext not in ['.png', '.jpg']:
            path = root + '.png'

        fig.savefig(path, dpi=200, bbox_inches='tight')

    # show
    if show:
        plt.show()
    else:
        # close
        plt.close(fig)


def plot_pcg(ts=None,
             raw=None,
             filtered=None,
             peaks=None,
             heart_sounds=None,
             heart_rate_ts=None,
             inst_heart_rate=None,
             path=None,
             show=False):
    """Create a summary plot from the output of signals.pcg.pcg.

    Parameters
    ----------
    ts : array
        Signal time axis reference (seconds).
    raw : array
        Raw PCG signal.
    filtered : array
        Filtered PCG signal.
    peaks : array
        Peak location indices.
    heart_sounds : array
        Classification of peaks as S1 or S2.
    heart_rate_ts : array
        Heart rate time axis reference (seconds).
    inst_heart_rate : array
        Instantaneous heart rate (bpm).
    path : str, optional
        If provided, the plot will be saved to the specified file.
    show : bool, optional
        If True, show the plot immediately.

    """

    fig = plt.figure()
    fig.suptitle('PCG Summary')
    gs = gridspec.GridSpec(6, 2)

    # raw signal
    ax1 = fig.add_subplot(gs[:2, 0])

    ax1.plot(ts, raw, linewidth=MAJOR_LW, label='Raw')

    ax1.set_ylabel('Amplitude')
    ax1.legend()
    ax1.grid()

    # filtered signal with peaks
    ax2 = fig.add_subplot(gs[2:4, 0], sharex=ax1)

    ymin = np.min(filtered)
    ymax = np.max(filtered)
    alpha = 0.1 * (ymax - ymin)
    ymax += alpha
    ymin -= alpha

    ax2.plot(ts, filtered, linewidth=MAJOR_LW, label='Filtered')
    ax2.vlines(ts[peaks], ymin, ymax,
               color='m',
               linewidth=MINOR_LW,
               label='Peaks')

    ax2.set_ylabel('Amplitude')
    ax2.legend()
    ax2.grid()

    # heart rate
    ax3 = fig.add_subplot(gs[4:, 0], sharex=ax1)

    ax3.plot(heart_rate_ts, inst_heart_rate,
             linewidth=MAJOR_LW,
             label='Heart Rate')

    ax3.set_xlabel('Time (s)')
    ax3.set_ylabel('Heart Rate (bpm)')
    ax3.legend()
    ax3.grid()

    # heart sounds
    ax4 = fig.add_subplot(gs[1:5, 1])

    ax4.plot(ts, filtered, linewidth=MAJOR_LW, label='PCG heart sounds')
    for i in range(len(peaks)):
        # annotate each peak with its heart sound class (S1 or S2)
        text = 'S' + str(int(heart_sounds[i]))
        ax4.annotate(text, (ts[peaks[i]], ymax - alpha),
                     ha='center', va='center', size=13)

    ax4.set_xlabel('Time (s)')
    ax4.set_ylabel('Amplitude')
    ax4.set_title('Heart sounds')
    ax4.grid()

    # make layout tight
    gs.tight_layout(fig)

    # save to file
    if path is not None:
        path = utils.normpath(path)
        root, ext = os.path.splitext(path)
        ext = ext.lower()
        # splitext keeps the leading dot, so compare against dotted extensions
        if ext not in ['.png', '.jpg']:
            path = root + '.png'

        fig.savefig(path, dpi=200, bbox_inches='tight')

    # show
    if show:
        plt.show()
    else:
        # close
        plt.close(fig)


def _plot_rates(thresholds, rates, variables,
                lw=1,
                colors=None,
                alpha=1,
                eer_idx=None,
                labels=False,
                ax=None):
    """Plot biometric rates.

    Parameters
    ----------
    thresholds : array
        Classifier thresholds.
    rates : dict
        Dictionary of rates.
    variables : list
        Keys from 'rates' to plot.
    lw : int, float, optional
        Plot linewidth.
    colors : list, optional
        Plot line color for each variable.
    alpha : float, optional
        Plot line alpha value.
    eer_idx : int, optional
        Classifier reference index for the Equal Error Rate.
    labels : bool, optional
        If True, will show plot labels.
    ax : axis, optional
        Plot Axis to use.

    Returns
    -------
    fig : Figure
        Figure object.

    """

    if ax is None:
        fig = plt.figure()
        ax = fig.add_subplot(111)
    else:
        fig = ax.figure

    if colors is None:
        x = np.linspace(0., 1., len(variables))
        colors = plt.get_cmap('rainbow')(x)

    # pass colors via the keyword argument; a positional third argument is
    # interpreted by matplotlib as a format string
    if labels:
        for i, v in enumerate(variables):
            ax.plot(thresholds, rates[v],
                    color=colors[i],
                    lw=lw,
                    alpha=alpha,
                    label=v)
    else:
        for i, v in enumerate(variables):
            ax.plot(thresholds, rates[v], color=colors[i], lw=lw, alpha=alpha)

    if eer_idx is not None:
        x, y = rates['EER'][eer_idx]
        ax.vlines(x, 0, 1, 'r', lw=lw)
        ax.set_title('EER = %0.2f %%' % (100. * y))

    return fig


def plot_biometrics(assessment=None, eer_idx=None, path=None, show=False):
|
| 1554 |
+
"""Create a summary plot of a biometrics test run.
|
| 1555 |
+
|
| 1556 |
+
Parameters
|
| 1557 |
+
----------
|
| 1558 |
+
assessment : dict
|
| 1559 |
+
Classification assessment results.
|
| 1560 |
+
eer_idx : int, optional
|
| 1561 |
+
Classifier reference index for the Equal Error Rate.
|
| 1562 |
+
path : str, optional
|
| 1563 |
+
If provided, the plot will be saved to the specified file.
|
| 1564 |
+
show : bool, optional
|
| 1565 |
+
If True, show the plot immediately.
|
| 1566 |
+
|
| 1567 |
+
"""
|
| 1568 |
+
|
| 1569 |
+
fig = plt.figure()
|
| 1570 |
+
fig.suptitle('Biometrics Summary')
|
| 1571 |
+
|
| 1572 |
+
c_sub = ['#008bff', '#8dd000']
|
| 1573 |
+
c_global = ['#0037ff', 'g']
|
| 1574 |
+
|
| 1575 |
+
ths = assessment['thresholds']
|
| 1576 |
+
|
| 1577 |
+
auth_ax = fig.add_subplot(121)
|
| 1578 |
+
id_ax = fig.add_subplot(122)
|
| 1579 |
+
|
| 1580 |
+
# subject results
|
| 1581 |
+
for sub in six.iterkeys(assessment['subject']):
|
| 1582 |
+
auth_rates = assessment['subject'][sub]['authentication']['rates']
|
| 1583 |
+
_ = _plot_rates(ths, auth_rates, ['FAR', 'FRR'],
|
| 1584 |
+
lw=MINOR_LW,
|
| 1585 |
+
colors=c_sub,
|
| 1586 |
+
alpha=0.4,
|
| 1587 |
+
eer_idx=None,
|
| 1588 |
+
labels=False,
|
| 1589 |
+
ax=auth_ax)
|
| 1590 |
+
|
| 1591 |
+
id_rates = assessment['subject'][sub]['identification']['rates']
|
| 1592 |
+
_ = _plot_rates(ths, id_rates, ['MR', 'RR'],
|
| 1593 |
+
lw=MINOR_LW,
|
| 1594 |
+
colors=c_sub,
|
| 1595 |
+
alpha=0.4,
|
| 1596 |
+
eer_idx=None,
|
| 1597 |
+
labels=False,
|
| 1598 |
+
ax=id_ax)
|
| 1599 |
+
|
| 1600 |
+
# global results
|
| 1601 |
+
auth_rates = assessment['global']['authentication']['rates']
|
| 1602 |
+
_ = _plot_rates(ths, auth_rates, ['FAR', 'FRR'],
|
| 1603 |
+
lw=MAJOR_LW,
|
| 1604 |
+
colors=c_global,
|
| 1605 |
+
alpha=1,
|
| 1606 |
+
eer_idx=eer_idx,
|
| 1607 |
+
labels=True,
|
| 1608 |
+
ax=auth_ax)
|
| 1609 |
+
|
| 1610 |
+
id_rates = assessment['global']['identification']['rates']
|
| 1611 |
+
_ = _plot_rates(ths, id_rates, ['MR', 'RR'],
|
| 1612 |
+
lw=MAJOR_LW,
|
| 1613 |
+
colors=c_global,
|
| 1614 |
+
alpha=1,
|
| 1615 |
+
eer_idx=eer_idx,
|
| 1616 |
+
labels=True,
|
| 1617 |
+
ax=id_ax)
|
| 1618 |
+
|
| 1619 |
+
# set labels and grids
|
| 1620 |
+
auth_ax.set_xlabel('Threshold')
|
| 1621 |
+
auth_ax.set_ylabel('Authentication')
|
| 1622 |
+
auth_ax.grid()
|
| 1623 |
+
auth_ax.legend()
|
| 1624 |
+
|
| 1625 |
+
id_ax.set_xlabel('Threshold')
|
| 1626 |
+
id_ax.set_ylabel('Identification')
|
| 1627 |
+
id_ax.grid()
|
| 1628 |
+
id_ax.legend()
|
| 1629 |
+
|
| 1630 |
+
# make layout tight
|
| 1631 |
+
fig.tight_layout()
|
| 1632 |
+
|
| 1633 |
+
# save to file
|
| 1634 |
+
if path is not None:
|
| 1635 |
+
path = utils.normpath(path)
|
| 1636 |
+
root, ext = os.path.splitext(path)
|
| 1637 |
+
ext = ext.lower()
|
| 1638 |
+
if ext not in ['png', 'jpg']:
|
| 1639 |
+
path = root + '.png'
|
| 1640 |
+
|
| 1641 |
+
fig.savefig(path, dpi=200, bbox_inches='tight')
|
| 1642 |
+
|
| 1643 |
+
# show
|
| 1644 |
+
if show:
|
| 1645 |
+
plt.show()
|
| 1646 |
+
else:
|
| 1647 |
+
# close
|
| 1648 |
+
plt.close(fig)
|
| 1649 |
+
|
| 1650 |
+
|
| 1651 |
+
def plot_clustering(data=None, clusters=None, path=None, show=False):
|
| 1652 |
+
"""Create a summary plot of a data clustering.
|
| 1653 |
+
|
| 1654 |
+
Parameters
|
| 1655 |
+
----------
|
| 1656 |
+
data : array
|
| 1657 |
+
An m by n array of m data samples in an n-dimensional space.
|
| 1658 |
+
clusters : dict
|
| 1659 |
+
Dictionary with the sample indices (rows from `data`) for each cluster.
|
| 1660 |
+
path : str, optional
|
| 1661 |
+
If provided, the plot will be saved to the specified file.
|
| 1662 |
+
show : bool, optional
|
| 1663 |
+
If True, show the plot immediately.
|
| 1664 |
+
|
| 1665 |
+
"""
|
| 1666 |
+
|
| 1667 |
+
fig = plt.figure()
|
| 1668 |
+
fig.suptitle('Clustering Summary')
|
| 1669 |
+
|
| 1670 |
+
ymin, ymax = _yscaling(data, alpha=1.2)
|
| 1671 |
+
|
| 1672 |
+
# determine number of clusters
|
| 1673 |
+
keys = list(clusters)
|
| 1674 |
+
nc = len(keys)
|
| 1675 |
+
|
| 1676 |
+
if nc <= 4:
|
| 1677 |
+
nrows = 2
|
| 1678 |
+
ncols = 4
|
| 1679 |
+
else:
|
| 1680 |
+
area = nc + 4
|
| 1681 |
+
|
| 1682 |
+
# try to fit to a square
|
| 1683 |
+
nrows = int(np.ceil(np.sqrt(area)))
|
| 1684 |
+
|
| 1685 |
+
if nrows > MAX_ROWS:
|
| 1686 |
+
# prefer to increase number of columns
|
| 1687 |
+
nrows = MAX_ROWS
|
| 1688 |
+
|
| 1689 |
+
ncols = int(np.ceil(area / float(nrows)))
|
| 1690 |
+
|
| 1691 |
+
# plot grid
|
| 1692 |
+
gs = gridspec.GridSpec(nrows, ncols, hspace=0.2, wspace=0.2)
|
| 1693 |
+
|
| 1694 |
+
# global axes
|
| 1695 |
+
ax_global = fig.add_subplot(gs[:2, :2])
|
| 1696 |
+
|
| 1697 |
+
# cluster axes
|
| 1698 |
+
c_grid = np.ones((nrows, ncols), dtype='bool')
|
| 1699 |
+
c_grid[:2, :2] = False
|
| 1700 |
+
c_rows, c_cols = np.nonzero(c_grid)
|
| 1701 |
+
|
| 1702 |
+
# generate color map
|
| 1703 |
+
x = np.linspace(0., 1., nc)
|
| 1704 |
+
cmap = plt.get_cmap('rainbow')
|
| 1705 |
+
|
| 1706 |
+
for i, k in enumerate(keys):
|
| 1707 |
+
aux = data[clusters[k]]
|
| 1708 |
+
color = cmap(x[i])
|
| 1709 |
+
label = 'Cluster %s' % k
|
| 1710 |
+
ax = fig.add_subplot(gs[c_rows[i], c_cols[i]], sharex=ax_global)
|
| 1711 |
+
ax.set_ylim([ymin, ymax])
|
| 1712 |
+
ax.set_title(label)
|
| 1713 |
+
ax.grid()
|
| 1714 |
+
|
| 1715 |
+
if len(aux) > 0:
|
| 1716 |
+
ax_global.plot(aux.T, color=color, lw=MINOR_LW, alpha=0.7)
|
| 1717 |
+
ax.plot(aux.T, color=color, lw=MAJOR_LW)
|
| 1718 |
+
|
| 1719 |
+
ax_global.set_title('All Clusters')
|
| 1720 |
+
ax_global.set_ylim([ymin, ymax])
|
| 1721 |
+
ax_global.grid()
|
| 1722 |
+
|
| 1723 |
+
# make layout tight
|
| 1724 |
+
gs.tight_layout(fig)
|
| 1725 |
+
|
| 1726 |
+
# save to file
|
| 1727 |
+
if path is not None:
|
| 1728 |
+
path = utils.normpath(path)
|
| 1729 |
+
root, ext = os.path.splitext(path)
|
| 1730 |
+
ext = ext.lower()
|
| 1731 |
+
if ext not in ['png', 'jpg']:
|
| 1732 |
+
path = root + '.png'
|
| 1733 |
+
|
| 1734 |
+
fig.savefig(path, dpi=200, bbox_inches='tight')
|
| 1735 |
+
|
| 1736 |
+
# show
|
| 1737 |
+
if show:
|
| 1738 |
+
plt.show()
|
| 1739 |
+
else:
|
| 1740 |
+
# close
|
| 1741 |
+
plt.close(fig)
|
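`_plot_rates` above marks the Equal Error Rate with a vertical line when `eer_idx` is given. As a minimal, library-independent sketch (not BioSPPy code; the error-rate curves here are synthetic), the EER is the operating point where the false-acceptance and false-rejection curves cross:

```python
import numpy as np

# Hypothetical monotone error-rate curves over candidate thresholds.
thresholds = np.linspace(0.0, 1.0, 101)
far = 1.0 - thresholds  # false acceptances fall as the threshold rises
frr = thresholds        # false rejections rise with the threshold

# EER: the threshold index where |FAR - FRR| is smallest.
eer_idx = int(np.argmin(np.abs(far - frr)))
eer = (far[eer_idx] + frr[eer_idx]) / 2.0  # -> 0.5 at threshold 0.5
```

In the real `plot_biometrics` flow, `rates['EER'][eer_idx]` already stores such a (threshold, rate) pair computed by the biometrics module.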
BioSPPy/source/biosppy/signals/__init__.py
ADDED
@@ -0,0 +1,23 @@
# -*- coding: utf-8 -*-
"""
biosppy.signals
---------------

This package provides methods to process common
physiological signals (biosignals):
    * Photoplethysmogram (PPG)
    * Electrocardiogram (ECG)
    * Electrodermal Activity (EDA)
    * Electroencephalogram (EEG)
    * Electromyogram (EMG)
    * Respiration (Resp)

:copyright: (c) 2015-2021 by Instituto de Telecomunicacoes
:license: BSD 3-clause, see LICENSE for more details.
"""

# compat
from __future__ import absolute_import, division, print_function

# allow lazy loading
from . import acc, abp, bvp, pcg, ppg, ecg, eda, eeg, emg, resp, tools
BioSPPy/source/biosppy/signals/abp.py
ADDED
@@ -0,0 +1,240 @@
# -*- coding: utf-8 -*-
"""
biosppy.signals.abp
-------------------

This module provides methods to process Arterial Blood Pressure (ABP) signals.

:copyright: (c) 2015-2018 by Instituto de Telecomunicacoes
:license: BSD 3-clause, see LICENSE for more details.
"""

# Imports
# compat
from __future__ import absolute_import, division, print_function
from six.moves import range

# 3rd party
import numpy as np

# local
from . import tools as st
from .. import plotting, utils


def abp(signal=None, sampling_rate=1000.0, show=True):
    """Process a raw ABP signal and extract relevant signal features using
    default parameters.

    Parameters
    ----------
    signal : array
        Raw ABP signal.
    sampling_rate : int, float, optional
        Sampling frequency (Hz).
    show : bool, optional
        If True, show a summary plot.

    Returns
    -------
    ts : array
        Signal time axis reference (seconds).
    filtered : array
        Filtered ABP signal.
    onsets : array
        Indices of ABP pulse onsets.
    heart_rate_ts : array
        Heart rate time axis reference (seconds).
    heart_rate : array
        Instantaneous heart rate (bpm).

    """

    # check inputs
    if signal is None:
        raise TypeError("Please specify an input signal.")

    # ensure numpy
    signal = np.array(signal)

    sampling_rate = float(sampling_rate)

    # filter signal
    filtered, _, _ = st.filter_signal(
        signal=signal,
        ftype="butter",
        band="bandpass",
        order=4,
        frequency=[1, 8],
        sampling_rate=sampling_rate,
    )

    # find onsets
    (onsets,) = find_onsets_zong2003(signal=filtered, sampling_rate=sampling_rate)

    # compute heart rate
    hr_idx, hr = st.get_heart_rate(
        beats=onsets, sampling_rate=sampling_rate, smooth=True, size=3
    )

    # get time vectors
    length = len(signal)
    T = (length - 1) / sampling_rate
    ts = np.linspace(0, T, length, endpoint=False)
    ts_hr = ts[hr_idx]

    # plot
    if show:
        plotting.plot_abp(
            ts=ts,
            raw=signal,
            filtered=filtered,
            onsets=onsets,
            heart_rate_ts=ts_hr,
            heart_rate=hr,
            path=None,
            show=True,
        )

    # output
    args = (ts, filtered, onsets, ts_hr, hr)
    names = ("ts", "filtered", "onsets", "heart_rate_ts", "heart_rate")

    return utils.ReturnTuple(args, names)


def find_onsets_zong2003(
    signal=None,
    sampling_rate=1000.0,
    sm_size=None,
    size=None,
    alpha=2.0,
    wrange=None,
    d1_th=0,
    d2_th=None,
):
    """Determine onsets of ABP pulses.

    Skips corrupted signal parts.
    Based on the approach by Zong *et al.* [Zong03]_.

    Parameters
    ----------
    signal : array
        Input filtered ABP signal.
    sampling_rate : int, float, optional
        Sampling frequency (Hz).
    sm_size : int, optional
        Size of smoother kernel (seconds).
        Defaults to 0.25.
    size : int, optional
        Window to search for maxima (seconds).
        Defaults to 5.
    alpha : float, optional
        Normalization parameter.
        Defaults to 2.0.
    wrange : int, optional
        The window in which to search for a peak (seconds).
        Defaults to 0.1.
    d1_th : int, optional
        Smallest allowed difference between maxima and minima.
        Defaults to 0.
    d2_th : int, optional
        Smallest allowed time between maxima and minima (seconds).
        Defaults to 0.15.

    Returns
    -------
    onsets : array
        Indices of ABP pulse onsets.

    References
    ----------
    .. [Zong03] W Zong, T Heldt, GB Moody and RG Mark, "An Open-source
       Algorithm to Detect Onset of Arterial Blood Pressure Pulses",
       IEEE Comp. in Cardiology, vol. 30, pp. 259-262, 2003

    """

    # check inputs
    if signal is None:
        raise TypeError("Please specify an input signal.")

    # parameters
    sm_size = 0.25 if not sm_size else sm_size
    sm_size = int(sm_size * sampling_rate)
    size = 5 if not size else size
    size = int(size * sampling_rate)
    wrange = 0.1 if not wrange else wrange
    wrange = int(wrange * sampling_rate)
    d2_th = 0.15 if not d2_th else d2_th
    d2_th = int(d2_th * sampling_rate)

    length = len(signal)

    # slope sum function
    dy = np.diff(signal)
    dy[dy < 0] = 0

    ssf, _ = st.smoother(signal=dy, kernel="boxcar", size=sm_size, mirror=True)

    # main loop
    start = 0
    stop = size
    if stop > length:
        stop = length

    idx = []

    while True:
        sq = np.copy(signal[start:stop])
        sq -= sq.mean()
        # sq = sq[1:]
        ss = 25 * ssf[start:stop]
        sss = 100 * np.diff(ss)
        sss[sss < 0] = 0
        sss = sss - alpha * np.mean(sss)

        # find maxima
        pk, pv = st.find_extrema(signal=sss, mode="max")
        pk = pk[np.nonzero(pv > 0)]
        pk += wrange
        dpidx = pk

        # analyze between maxima of 2nd derivative of ss
        detected = False
        for i in range(1, len(dpidx) + 1):
            try:
                v, u = dpidx[i - 1], dpidx[i]
            except IndexError:
                v, u = dpidx[-1], -1

            s = sq[v:u]
            Mk, Mv = st.find_extrema(signal=s, mode="max")
            mk, mv = st.find_extrema(signal=s, mode="min")

            try:
                M = Mk[np.argmax(Mv)]
                m = mk[np.argmax(mv)]
            except ValueError:
                continue

            if (s[M] - s[m] > d1_th) and (m - M > d2_th):
                idx += [v + start]
                detected = True

        # next round continues from previous detected beat
        if detected:
            start = idx[-1] + wrange
        else:
            start += size

        # stop condition
        if start > length:
            break

        # update stop
        stop += size
        if stop > length:
            stop = length

    idx = np.array(idx, dtype="int")

    return utils.ReturnTuple((idx,), ("onsets",))
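The Zong 2003 detector above is built on a slope sum function (SSF): negative slopes are zeroed and the remaining positive slopes are averaged over a short boxcar window, which sharpens pulse upstrokes. A minimal numpy-only sketch of that pre-processing step (a simplification for illustration, not the full detector, and not the `st.smoother` implementation):

```python
import numpy as np

def slope_sum_function(signal, win):
    """Rectified first difference, averaged over a boxcar of `win` samples."""
    dy = np.diff(signal)
    dy[dy < 0] = 0  # keep only rising slopes
    kernel = np.ones(win) / win
    return np.convolve(dy, kernel, mode="same")

# Toy pulse: sharp rise over 10 samples, slow decay over 40.
x = np.concatenate([np.linspace(0, 1, 10), np.linspace(1, 0, 40)])
ssf = slope_sum_function(x, win=5)
# The SSF is large near the rising edge and exactly zero on the decay.
```

The real code then thresholds a derivative of this SSF (the `sss` array) and inspects each inter-peak segment for a sufficiently large, sufficiently slow max-min swing before accepting an onset.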
BioSPPy/source/biosppy/signals/acc.py
ADDED
@@ -0,0 +1,186 @@
# -*- coding: utf-8 -*-
"""
biosppy.signals.acc
-------------------

This module provides methods to process Acceleration (ACC) signals.
Implemented code assumes ACC acquisition from a 3 orthogonal axis reference system.

:copyright: (c) 2015-2018 by Instituto de Telecomunicacoes
:license: BSD 3-clause, see LICENSE for more details.

Authors
-------
Afonso Ferreira
Diogo Vieira

"""

# Imports
from __future__ import absolute_import, division, print_function
from six.moves import range

# 3rd party
import numpy as np

# local
from .. import plotting, utils
from biosppy.inter_plotting import acc as inter_plotting


def acc(signal=None, sampling_rate=100.0, path=None, show=True, interactive=True):
    """Process a raw ACC signal and extract relevant signal features using
    default parameters.

    Parameters
    ----------
    signal : array
        Raw ACC signal.
    sampling_rate : int, float, optional
        Sampling frequency (Hz).
    path : str, optional
        If provided, the plot will be saved to the specified file.
    show : bool, optional
        If True, show a summary plot.
    interactive : bool, optional
        If True, shows an interactive plot.

    Returns
    -------
    ts : array
        Signal time axis reference (seconds).
    signal : array
        Raw (unfiltered) ACC signal.
    vm : array
        Vector Magnitude feature of the signal.
    sm : array
        Signal Magnitude feature of the signal.
    freq_features : dict
        Positive frequency domains (Hz) of the signal.
    amp_features : dict
        Normalized absolute amplitudes of the signal.

    """

    # check inputs
    if signal is None:
        raise TypeError("Please specify an input signal.")

    # ensure numpy
    signal = np.array(signal)

    sampling_rate = float(sampling_rate)

    # extract features
    vm_features, sm_features = time_domain_feature_extractor(signal=signal)
    freq_features, abs_amp_features = frequency_domain_feature_extractor(
        signal=signal, sampling_rate=sampling_rate
    )

    # get time vectors
    length = len(signal)
    T = (length - 1) / sampling_rate
    ts = np.linspace(0, T, length, endpoint=True)

    # plot
    if show:
        if interactive:
            inter_plotting.plot_acc(
                ts=ts,
                raw=signal,
                vm=vm_features,
                sm=sm_features,
                spectrum={"freq": freq_features, "abs_amp": abs_amp_features},
                path=path,
            )
        else:
            plotting.plot_acc(
                ts=ts,
                raw=signal,
                vm=vm_features,
                sm=sm_features,
                path=path,
                show=True,
            )

    # output
    args = (ts, signal, vm_features, sm_features, freq_features, abs_amp_features)
    names = ("ts", "signal", "vm", "sm", "freq", "abs_amp")

    return utils.ReturnTuple(args, names)


def time_domain_feature_extractor(signal=None):
    """Extract the Vector Magnitude and Signal Magnitude features from an
    input ACC signal.

    Parameters
    ----------
    signal : array
        Input ACC signal.

    Returns
    -------
    vm_features : array
        Extracted Vector Magnitude (VM) feature.
    sm_features : array
        Extracted Signal Magnitude (SM) feature.

    """

    # check inputs
    if signal is None:
        raise TypeError("Please specify an input signal.")

    # get acceleration features
    vm_features = np.zeros(signal.shape[0])
    sm_features = np.zeros(signal.shape[0])

    for i in range(signal.shape[0]):
        vm_features[i] = np.linalg.norm(
            np.array([signal[i][0], signal[i][1], signal[i][2]])
        )
        sm_features[i] = (abs(signal[i][0]) + abs(signal[i][1]) + abs(signal[i][2])) / 3

    return utils.ReturnTuple((vm_features, sm_features), ("vm", "sm"))


def frequency_domain_feature_extractor(signal=None, sampling_rate=100.0):
    """Extract the FFT from each ACC sub-signal (x, y, z).

    Parameters
    ----------
    signal : array
        Input ACC signal.
    sampling_rate : int, float, optional
        Sampling frequency (Hz).

    Returns
    -------
    freq_features : dict
        Dictionary of positive frequencies (Hz) for all sub-signals.
    amp_features : dict
        Dictionary of normalized absolute amplitudes for all sub-signals.

    """

    # check inputs
    if signal is None:
        raise TypeError("Please specify an input signal.")

    freq_features = {}
    amp_features = {}

    # get normalized FFT for each sub-signal
    for ind, axis in zip(range(signal.shape[1]), ["x", "y", "z"]):
        sub_signal = signal[:, ind]

        n = len(sub_signal)
        k = np.arange(n)
        T = n / sampling_rate
        frq = k / T
        freq_features[axis] = frq[range(n // 2)]

        amp = np.fft.fft(sub_signal) / n
        amp_features[axis] = abs(amp[range(n // 2)])

    return utils.ReturnTuple((freq_features, amp_features), ("freq", "abs_amp"))
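The per-sample loop in `time_domain_feature_extractor` computes, for each tri-axial sample, the Euclidean norm (VM) and the mean absolute amplitude (SM). Both have direct vectorized equivalents; a small standalone sketch (toy data, not BioSPPy code) that is useful for cross-checking the loop:

```python
import numpy as np

# Toy tri-axial ACC samples: one row per sample, columns x, y, z.
sig = np.array([[3.0, 4.0, 0.0],
                [1.0, 2.0, 2.0]])

vm = np.linalg.norm(sig, axis=1)    # Euclidean norm per sample -> [5., 3.]
sm = np.abs(sig).sum(axis=1) / 3.0  # mean absolute amplitude per sample
```

For long recordings, the vectorized form avoids the Python-level loop entirely while producing the same values.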
BioSPPy/source/biosppy/signals/bvp.py
ADDED
@@ -0,0 +1,107 @@
# -*- coding: utf-8 -*-
"""
biosppy.signals.bvp
-------------------

This module provides methods to process Blood Volume Pulse (BVP) signals.

-------- DEPRECATED --------
PLEASE USE THE PPG MODULE INSTEAD.
This module is kept for backwards compatibility.
----------------------------

:copyright: (c) 2015-2018 by Instituto de Telecomunicacoes
:license: BSD 3-clause, see LICENSE for more details.
"""

# Imports
# compat
from __future__ import absolute_import, division, print_function
from six.moves import range

# 3rd party
import numpy as np

# local
from . import tools as st
from . import ppg
from .. import plotting, utils


def bvp(signal=None, sampling_rate=1000., path=None, show=True):
    """Process a raw BVP signal and extract relevant signal features using
    default parameters.

    Parameters
    ----------
    signal : array
        Raw BVP signal.
    sampling_rate : int, float, optional
        Sampling frequency (Hz).
    path : str, optional
        If provided, the plot will be saved to the specified file.
    show : bool, optional
        If True, show a summary plot.

    Returns
    -------
    ts : array
        Signal time axis reference (seconds).
    filtered : array
        Filtered BVP signal.
    onsets : array
        Indices of BVP pulse onsets.
    heart_rate_ts : array
        Heart rate time axis reference (seconds).
    heart_rate : array
        Instantaneous heart rate (bpm).

    """

    # check inputs
    if signal is None:
        raise TypeError("Please specify an input signal.")

    # ensure numpy
    signal = np.array(signal)

    sampling_rate = float(sampling_rate)

    # filter signal
    filtered, _, _ = st.filter_signal(signal=signal,
                                      ftype='butter',
                                      band='bandpass',
                                      order=4,
                                      frequency=[1, 8],
                                      sampling_rate=sampling_rate)

    # find onsets
    onsets, _ = ppg.find_onsets_elgendi2013(signal=filtered,
                                            sampling_rate=sampling_rate)

    # compute heart rate
    hr_idx, hr = st.get_heart_rate(beats=onsets,
                                   sampling_rate=sampling_rate,
                                   smooth=True,
                                   size=3)

    # get time vectors
    length = len(signal)
    T = (length - 1) / sampling_rate
    ts = np.linspace(0, T, length, endpoint=False)
    ts_hr = ts[hr_idx]

    # plot
    if show:
        plotting.plot_bvp(ts=ts,
                          raw=signal,
                          filtered=filtered,
                          onsets=onsets,
                          heart_rate_ts=ts_hr,
                          heart_rate=hr,
                          path=path,
                          show=True)

    # output
    args = (ts, filtered, onsets, ts_hr, hr)
    names = ('ts', 'filtered', 'onsets', 'heart_rate_ts', 'heart_rate')

    return utils.ReturnTuple(args, names)
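Both `bvp` and `abp` derive heart rate from consecutive pulse onsets through `tools.get_heart_rate`; conceptually, each inter-beat interval maps to a beats-per-minute value. A standalone sketch of that core idea (synthetic onsets; the BioSPPy implementation additionally smooths the resulting series):

```python
import numpy as np

sampling_rate = 1000.0                   # Hz
onsets = np.array([0, 800, 1600, 2400])  # pulse onsets, 0.8 s apart

ibi = np.diff(onsets) / sampling_rate    # inter-beat intervals (seconds)
hr = 60.0 / ibi                          # instantaneous heart rate -> 75 bpm
```

With `smooth=True` and `size=3`, `get_heart_rate` would additionally apply a small moving average over this instantaneous series, which is why its output can differ slightly from the raw `60 / ibi` values on noisy data.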
BioSPPy/source/biosppy/signals/ecg.py
ADDED
@@ -0,0 +1,2045 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
# -*- coding: utf-8 -*-
"""
biosppy.signals.ecg
-------------------

This module provides methods to process Electrocardiographic (ECG) signals.
Implemented code assumes a single-channel Lead I like ECG signal.

:copyright: (c) 2015-2018 by Instituto de Telecomunicacoes
:license: BSD 3-clause, see LICENSE for more details.

"""

# Imports
# compat
from __future__ import absolute_import, division, print_function
from six.moves import range, zip

# 3rd party
import math
import numpy as np
import scipy.signal as ss
import matplotlib.pyplot as plt
from scipy import stats, integrate

# local
from . import tools as st
from .. import plotting, utils
from biosppy.inter_plotting import ecg as inter_plotting
from scipy.signal import argrelextrema


def ecg(signal=None, sampling_rate=1000.0, path=None, show=True, interactive=True):
    """Process a raw ECG signal and extract relevant signal features using
    default parameters.

    Parameters
    ----------
    signal : array
        Raw ECG signal.
    sampling_rate : int, float, optional
        Sampling frequency (Hz).
    path : str, optional
        If provided, the plot will be saved to the specified file.
    show : bool, optional
        If True, show a summary plot.
    interactive : bool, optional
        If True, shows an interactive plot.

    Returns
    -------
    ts : array
        Signal time axis reference (seconds).
    filtered : array
        Filtered ECG signal.
    rpeaks : array
        R-peak location indices.
    templates_ts : array
        Templates time axis reference (seconds).
    templates : array
        Extracted heartbeat templates.
    heart_rate_ts : array
        Heart rate time axis reference (seconds).
    heart_rate : array
        Instantaneous heart rate (bpm).

    """

    # check inputs
    if signal is None:
        raise TypeError("Please specify an input signal.")

    # ensure numpy
    signal = np.array(signal)

    sampling_rate = float(sampling_rate)

    # filter signal
    order = int(0.3 * sampling_rate)
    filtered, _, _ = st.filter_signal(
        signal=signal,
        ftype="FIR",
        band="bandpass",
        order=order,
        frequency=[3, 45],
        sampling_rate=sampling_rate,
    )

    # segment
    (rpeaks,) = hamilton_segmenter(signal=filtered, sampling_rate=sampling_rate)

    # correct R-peak locations
    (rpeaks,) = correct_rpeaks(
        signal=filtered, rpeaks=rpeaks, sampling_rate=sampling_rate, tol=0.05
    )

    # extract templates
    templates, rpeaks = extract_heartbeats(
        signal=filtered,
        rpeaks=rpeaks,
        sampling_rate=sampling_rate,
        before=0.2,
        after=0.4,
    )

    # compute heart rate
    hr_idx, hr = st.get_heart_rate(
        beats=rpeaks, sampling_rate=sampling_rate, smooth=True, size=3
    )

    # get time vectors
    length = len(signal)
    T = (length - 1) / sampling_rate
    ts = np.linspace(0, T, length, endpoint=True)
    ts_hr = ts[hr_idx]
    ts_tmpl = np.linspace(-0.2, 0.4, templates.shape[1], endpoint=False)

    # plot
    if show:
        if interactive:
            inter_plotting.plot_ecg(
                ts=ts,
                raw=signal,
                filtered=filtered,
                rpeaks=rpeaks,
                templates_ts=ts_tmpl,
                templates=templates,
                heart_rate_ts=ts_hr,
                heart_rate=hr,
                path=path,
                show=True,
            )

        else:
            plotting.plot_ecg(
                ts=ts,
                raw=signal,
                filtered=filtered,
                rpeaks=rpeaks,
                templates_ts=ts_tmpl,
                templates=templates,
                heart_rate_ts=ts_hr,
                heart_rate=hr,
                path=path,
                show=True,
            )

    # output
    args = (ts, filtered, rpeaks, ts_tmpl, templates, ts_hr, hr)
    names = (
        "ts",
        "filtered",
        "rpeaks",
        "templates_ts",
        "templates",
        "heart_rate_ts",
        "heart_rate",
    )

    return utils.ReturnTuple(args, names)

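For reference, `st.get_heart_rate` above derives the smoothed instantaneous heart rate from the R-peak train. The underlying RR-to-bpm conversion can be sketched in plain Python (a simplified, unsmoothed illustration; `instantaneous_hr` is a hypothetical name, not a BioSPPy function):

```python
def instantaneous_hr(rpeaks, fs):
    """Convert successive R-peak sample indices to beats per minute.

    Each RR interval of (b - a) samples lasts (b - a) / fs seconds,
    i.e. 60 * fs / (b - a) beats per minute.
    """
    return [60.0 * fs / (b - a) for a, b in zip(rpeaks, rpeaks[1:])]

# 800 samples between beats at 1000 Hz -> 0.8 s RR -> 75 bpm
print(instantaneous_hr([0, 800, 1600, 2100], fs=1000.0))  # [75.0, 75.0, 120.0]
```

The library additionally applies a moving-average smoother (`smooth=True, size=3`) before returning the rate.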
def _extract_heartbeats(signal=None, rpeaks=None, before=200, after=400):
    """Extract heartbeat templates from an ECG signal, given a list of
    R-peak locations.

    Parameters
    ----------
    signal : array
        Input ECG signal.
    rpeaks : array
        R-peak location indices.
    before : int, optional
        Number of samples to include before the R peak.
    after : int, optional
        Number of samples to include after the R peak.

    Returns
    -------
    templates : array
        Extracted heartbeat templates.
    rpeaks : array
        Corresponding R-peak location indices of the extracted heartbeat
        templates.

    """

    R = np.sort(rpeaks)
    length = len(signal)
    templates = []
    newR = []

    for r in R:
        a = r - before
        if a < 0:
            continue
        b = r + after
        if b > length:
            break
        templates.append(signal[a:b])
        newR.append(r)

    templates = np.array(templates)
    newR = np.array(newR, dtype="int")

    return templates, newR

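The `_extract_heartbeats` helper above is a plain windowing loop: beats whose window would start before the signal are skipped, and extraction stops once a window would run past the end. A minimal pure-Python sketch of the same boundary logic (hypothetical `extract_windows` name; the real function returns NumPy arrays):

```python
def extract_windows(signal, rpeaks, before, after):
    """Collect fixed-size windows around each peak, skipping peaks
    whose window would fall outside the signal."""
    templates, kept = [], []
    for r in sorted(rpeaks):
        a, b = r - before, r + after
        if a < 0:
            continue          # window starts before sample 0
        if b > len(signal):
            break             # window runs past the end; peaks are sorted
        templates.append(signal[a:b])
        kept.append(r)
    return templates, kept

sig = list(range(20))
tpl, kept = extract_windows(sig, [1, 5, 18], before=2, after=3)
print(kept)  # [5] -- peaks 1 and 18 are dropped (windows out of bounds)
```

Because all kept windows share the same length, the library can stack them into a 2-D `templates` array.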
def extract_heartbeats(
    signal=None, rpeaks=None, sampling_rate=1000.0, before=0.2, after=0.4
):
    """Extract heartbeat templates from an ECG signal, given a list of
    R-peak locations.

    Parameters
    ----------
    signal : array
        Input ECG signal.
    rpeaks : array
        R-peak location indices.
    sampling_rate : int, float, optional
        Sampling frequency (Hz).
    before : float, optional
        Window size to include before the R peak (seconds).
    after : float, optional
        Window size to include after the R peak (seconds).

    Returns
    -------
    templates : array
        Extracted heartbeat templates.
    rpeaks : array
        Corresponding R-peak location indices of the extracted heartbeat
        templates.

    """

    # check inputs
    if signal is None:
        raise TypeError("Please specify an input signal.")

    if rpeaks is None:
        raise TypeError("Please specify the input R-peak locations.")

    if before < 0:
        raise ValueError("Please specify a non-negative 'before' value.")
    if after < 0:
        raise ValueError("Please specify a non-negative 'after' value.")

    # convert delimiters to samples
    before = int(before * sampling_rate)
    after = int(after * sampling_rate)

    # get heartbeats
    templates, newR = _extract_heartbeats(
        signal=signal, rpeaks=rpeaks, before=before, after=after
    )

    return utils.ReturnTuple((templates, newR), ("templates", "rpeaks"))

def compare_segmentation(
    reference=None, test=None, sampling_rate=1000.0, offset=0, minRR=None, tol=0.05
):
    """Compare the segmentation performance of a list of R-peak positions
    against a reference list.

    Parameters
    ----------
    reference : array
        Reference R-peak location indices.
    test : array
        Test R-peak location indices.
    sampling_rate : int, float, optional
        Sampling frequency (Hz).
    offset : int, optional
        Constant a priori offset (number of samples) between reference and
        test R-peak locations.
    minRR : float, optional
        Minimum admissible RR interval (seconds).
    tol : float, optional
        Tolerance between corresponding reference and test R-peak
        locations (seconds).

    Returns
    -------
    TP : int
        Number of true positive R-peaks.
    FP : int
        Number of false positive R-peaks.
    performance : float
        Test performance; TP / len(reference).
    acc : float
        Accuracy rate; TP / (TP + FP).
    err : float
        Error rate; FP / (TP + FP).
    match : list
        Indices of the elements of 'test' that match to an R-peak
        from 'reference'.
    deviation : array
        Absolute errors of the matched R-peaks (seconds).
    mean_deviation : float
        Mean error (seconds).
    std_deviation : float
        Standard deviation of error (seconds).
    mean_ref_ibi : float
        Mean of the reference interbeat intervals (seconds).
    std_ref_ibi : float
        Standard deviation of the reference interbeat intervals (seconds).
    mean_test_ibi : float
        Mean of the test interbeat intervals (seconds).
    std_test_ibi : float
        Standard deviation of the test interbeat intervals (seconds).

    """

    # check inputs
    if reference is None:
        raise TypeError("Please specify an input reference list of R-peak locations.")

    if test is None:
        raise TypeError("Please specify an input test list of R-peak locations.")

    if minRR is None:
        minRR = np.inf

    sampling_rate = float(sampling_rate)

    # ensure numpy
    reference = np.array(reference)
    test = np.array(test)

    # convert to samples
    minRR = minRR * sampling_rate
    tol = tol * sampling_rate

    TP = 0
    FP = 0

    matchIdx = []
    dev = []

    for i, r in enumerate(test):
        # deviation to closest R in reference
        ref = reference[np.argmin(np.abs(reference - (r + offset)))]
        error = np.abs(ref - (r + offset))

        if error < tol:
            TP += 1
            matchIdx.append(i)
            dev.append(error)
        else:
            if len(matchIdx) > 0:
                bdf = r - test[matchIdx[-1]]
                if bdf < minRR:
                    # false positive, but removable with RR interval check
                    pass
                else:
                    FP += 1
            else:
                FP += 1

    # convert deviations to time
    dev = np.array(dev, dtype="float")
    dev /= sampling_rate
    nd = len(dev)
    if nd == 0:
        mdev = np.nan
        sdev = np.nan
    elif nd == 1:
        mdev = np.mean(dev)
        sdev = 0.0
    else:
        mdev = np.mean(dev)
        sdev = np.std(dev, ddof=1)

    # interbeat interval
    th1 = 1.5  # 40 bpm
    th2 = 0.3  # 200 bpm

    rIBI = np.diff(reference)
    rIBI = np.array(rIBI, dtype="float")
    rIBI /= sampling_rate

    good = np.nonzero((rIBI < th1) & (rIBI > th2))[0]
    rIBI = rIBI[good]

    nr = len(rIBI)
    if nr == 0:
        rIBIm = np.nan
        rIBIs = np.nan
    elif nr == 1:
        rIBIm = np.mean(rIBI)
        rIBIs = 0.0
    else:
        rIBIm = np.mean(rIBI)
        rIBIs = np.std(rIBI, ddof=1)

    tIBI = np.diff(test[matchIdx])
    tIBI = np.array(tIBI, dtype="float")
    tIBI /= sampling_rate

    good = np.nonzero((tIBI < th1) & (tIBI > th2))[0]
    tIBI = tIBI[good]

    nt = len(tIBI)
    if nt == 0:
        tIBIm = np.nan
        tIBIs = np.nan
    elif nt == 1:
        tIBIm = np.mean(tIBI)
        tIBIs = 0.0
    else:
        tIBIm = np.mean(tIBI)
        tIBIs = np.std(tIBI, ddof=1)

    # output
    perf = float(TP) / len(reference)
    acc = float(TP) / (TP + FP)
    err = float(FP) / (TP + FP)

    args = (
        TP,
        FP,
        perf,
        acc,
        err,
        matchIdx,
        dev,
        mdev,
        sdev,
        rIBIm,
        rIBIs,
        tIBIm,
        tIBIs,
    )
    names = (
        "TP",
        "FP",
        "performance",
        "acc",
        "err",
        "match",
        "deviation",
        "mean_deviation",
        "std_deviation",
        "mean_ref_ibi",
        "std_ref_ibi",
        "mean_test_ibi",
        "std_test_ibi",
    )

    return utils.ReturnTuple(args, names)

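The TP/FP bookkeeping above pairs each test peak with its nearest reference peak and counts it as a true positive when the deviation is under the tolerance. A simplified stdlib sketch of that core (hypothetical `match_peaks` name; it omits the `offset` and minimum-RR handling of the full function):

```python
def match_peaks(reference, test, tol):
    """Count test peaks that land within `tol` samples of their
    nearest reference peak; return (TP, FP, matched test indices)."""
    tp, fp, matched = 0, 0, []
    for i, r in enumerate(test):
        nearest = min(reference, key=lambda x: abs(x - r))
        if abs(nearest - r) < tol:
            tp += 1
            matched.append(i)
        else:
            fp += 1
    return tp, fp, matched

# third test peak (410) is 90 samples from its nearest reference -> FP
print(match_peaks([100, 300, 500], [102, 297, 410], tol=10))  # (2, 1, [0, 1])
```

The library's version then converts the matched deviations to seconds and derives performance, accuracy, and error rates from these counts.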
def correct_rpeaks(signal=None, rpeaks=None, sampling_rate=1000.0, tol=0.05):
    """Correct R-peak locations to the maximum within a tolerance.

    Parameters
    ----------
    signal : array
        ECG signal.
    rpeaks : array
        R-peak location indices.
    sampling_rate : int, float, optional
        Sampling frequency (Hz).
    tol : int, float, optional
        Correction tolerance (seconds).

    Returns
    -------
    rpeaks : array
        Corrected R-peak location indices.

    Notes
    -----
    * The tolerance is defined as the time interval :math:`[R-tol, R+tol[`.

    """

    # check inputs
    if signal is None:
        raise TypeError("Please specify an input signal.")

    if rpeaks is None:
        raise TypeError("Please specify the input R-peaks.")

    tol = int(tol * sampling_rate)
    length = len(signal)

    newR = []
    for r in rpeaks:
        a = r - tol
        if a < 0:
            continue
        b = r + tol
        if b > length:
            break
        newR.append(a + np.argmax(signal[a:b]))

    newR = sorted(list(set(newR)))
    newR = np.array(newR, dtype="int")

    return utils.ReturnTuple((newR,), ("rpeaks",))

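`correct_rpeaks` snaps each candidate to the signal maximum inside `[r - tol, r + tol[`, then de-duplicates peaks that collapse onto the same sample. A stdlib sketch of the same idea (hypothetical `snap_to_max` name; the real code works on NumPy arrays and returns a `ReturnTuple`):

```python
def snap_to_max(signal, rpeaks, tol):
    """Move each peak to the index of the maximum within [r-tol, r+tol),
    dropping windows that fall outside the signal and merging duplicates."""
    out = set()
    for r in rpeaks:
        a, b = r - tol, r + tol
        if a < 0:
            continue
        if b > len(signal):
            break
        out.add(max(range(a, b), key=lambda i: signal[i]))
    return sorted(out)

sig = [0, 1, 0, 5, 0, 2, 9, 2, 0, 1]
print(snap_to_max(sig, [2, 5], tol=2))  # [3, 6] -- both peaks moved to local maxima
```

Using a set mirrors the library's `sorted(list(set(newR)))` step: two candidates within one QRS complex collapse to a single corrected peak.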
def ssf_segmenter(
    signal=None, sampling_rate=1000.0, threshold=20, before=0.03, after=0.01
):
    """ECG R-peak segmentation based on the Slope Sum Function (SSF).

    Parameters
    ----------
    signal : array
        Input filtered ECG signal.
    sampling_rate : int, float, optional
        Sampling frequency (Hz).
    threshold : float, optional
        SSF threshold.
    before : float, optional
        Search window size before R-peak candidate (seconds).
    after : float, optional
        Search window size after R-peak candidate (seconds).

    Returns
    -------
    rpeaks : array
        R-peak location indices.

    """

    # check inputs
    if signal is None:
        raise TypeError("Please specify an input signal.")

    # convert to samples
    winB = int(before * sampling_rate)
    winA = int(after * sampling_rate)

    Rset = set()
    length = len(signal)

    # diff
    dx = np.diff(signal)
    dx[dx >= 0] = 0
    dx = dx**2

    # detection
    (idx,) = np.nonzero(dx > threshold)
    idx0 = np.hstack(([0], idx))
    didx = np.diff(idx0)

    # search
    sidx = idx[didx > 1]
    for item in sidx:
        a = item - winB
        if a < 0:
            a = 0
        b = item + winA
        if b > length:
            continue

        r = np.argmax(signal[a:b]) + a
        Rset.add(r)

    # output
    rpeaks = list(Rset)
    rpeaks.sort()
    rpeaks = np.array(rpeaks, dtype="int")

    return utils.ReturnTuple((rpeaks,), ("rpeaks",))

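The SSF detector's nonlinearity keeps only the squared negative first differences before thresholding, as in the `dx[dx >= 0] = 0; dx = dx**2` lines above. A minimal pure-Python sketch of that step (hypothetical `slope_sum` name; the library computes it vectorized with `np.diff`):

```python
def slope_sum(signal):
    """Squared negative first differences -- the SSF nonlinearity.

    Upward or flat steps contribute 0; steep downward steps are
    emphasized quadratically, which is what the threshold acts on.
    """
    out = []
    for x0, x1 in zip(signal, signal[1:]):
        d = x1 - x0
        out.append(d * d if d < 0 else 0)
    return out

print(slope_sum([0, 2, 1, 4, 0]))  # [0, 1, 0, 16]
```

Candidate onsets are then the samples where this sequence exceeds the threshold, and the actual R-peak is located by an `argmax` search in a small window around each onset.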
def christov_segmenter(signal=None, sampling_rate=1000.0):
    """ECG R-peak segmentation algorithm.

    Follows the approach by Christov [Chri04]_.

    Parameters
    ----------
    signal : array
        Input filtered ECG signal.
    sampling_rate : int, float, optional
        Sampling frequency (Hz).

    Returns
    -------
    rpeaks : array
        R-peak location indices.

    References
    ----------
    .. [Chri04] Ivaylo I. Christov, "Real time electrocardiogram QRS
       detection using combined adaptive threshold", BioMedical Engineering
       OnLine 2004, vol. 3:28, 2004

    """

    # check inputs
    if signal is None:
        raise TypeError("Please specify an input signal.")

    length = len(signal)

    # algorithm parameters
    v100ms = int(0.1 * sampling_rate)
    v50ms = int(0.050 * sampling_rate)
    v300ms = int(0.300 * sampling_rate)
    v350ms = int(0.350 * sampling_rate)
    v200ms = int(0.2 * sampling_rate)
    v1200ms = int(1.2 * sampling_rate)
    M_th = 0.4  # paper is 0.6

    # Pre-processing
    # 1. Moving averaging filter for power-line interference suppression:
    # averages samples in one period of the powerline
    # interference frequency with a first zero at this frequency.
    b = np.ones(int(0.02 * sampling_rate)) / 50.0
    a = [1]
    X = ss.filtfilt(b, a, signal)
    # 2. Moving averaging of samples in 28 ms interval for electromyogram
    # noise suppression a filter with first zero at about 35 Hz.
    b = np.ones(int(sampling_rate / 35.0)) / 35.0
    X = ss.filtfilt(b, a, X)
    X, _, _ = st.filter_signal(
        signal=X,
        ftype="butter",
        band="lowpass",
        order=7,
        frequency=40.0,
        sampling_rate=sampling_rate,
    )
    X, _, _ = st.filter_signal(
        signal=X,
        ftype="butter",
        band="highpass",
        order=7,
        frequency=9.0,
        sampling_rate=sampling_rate,
    )

    k, Y, L = 1, [], len(X)
    for n in range(k + 1, L - k):
        Y.append(X[n] ** 2 - X[n - k] * X[n + k])
    Y = np.array(Y)
    Y[Y < 0] = 0

    # Complex lead
    # Y = abs(scipy.diff(X)) # 1-lead
    # 3. Moving averaging of a complex lead (the synthesis is
    # explained in the next section) in 40 ms intervals a filter
    # with first zero at about 25 Hz. It is suppressing the noise
    # magnified by the differentiation procedure used in the
    # process of the complex lead synthesis.
    b = np.ones(int(sampling_rate / 25.0)) / 25.0
    Y = ss.lfilter(b, a, Y)

    # Init
    MM = M_th * np.max(Y[: int(5 * sampling_rate)]) * np.ones(5)
    MMidx = 0
    M = np.mean(MM)
    slope = np.linspace(1.0, 0.6, int(sampling_rate))
    Rdec = 0
    R = 0
    RR = np.zeros(5)
    RRidx = 0
    Rm = 0
    QRS = []
    Rpeak = []
    current_sample = 0
    skip = False
    F = np.mean(Y[:v350ms])

    # Go through each sample
    while current_sample < len(Y):
        if QRS:
            # No detection is allowed 200 ms after the current one. In
            # the interval QRS to QRS+200ms a new value of M5 is calculated: newM5 = 0.6*max(Yi)
            if current_sample <= QRS[-1] + v200ms:
                Mnew = M_th * max(Y[QRS[-1] : QRS[-1] + v200ms])
                # The estimated newM5 value can become quite high, if
                # steep slope premature ventricular contraction or artifact
                # appeared, and for that reason it is limited to newM5 = 1.1*M5 if newM5 > 1.5*M5.
                # The MM buffer is refreshed excluding the oldest component, and including M5 = newM5.
                Mnew = Mnew if Mnew <= 1.5 * MM[MMidx - 1] else 1.1 * MM[MMidx - 1]
                MM[MMidx] = Mnew
                MMidx = np.mod(MMidx + 1, 5)
                # M is calculated as an average value of MM.
                Mtemp = np.mean(MM)
                M = Mtemp
                skip = True
            # M is decreased in an interval 200 to 1200 ms following
            # the last QRS detection at a low slope, reaching 60 % of its
            # refreshed value at 1200 ms.
            elif (
                current_sample >= QRS[-1] + v200ms
                and current_sample < QRS[-1] + v1200ms
            ):
                M = Mtemp * slope[current_sample - QRS[-1] - v200ms]
            # After 1200 ms M remains unchanged.
            # R = 0 V in the interval from the last detected QRS to 2/3 of the expected Rm.
            if current_sample >= QRS[-1] and current_sample < QRS[-1] + (2 / 3.0) * Rm:
                R = 0
            # In the interval QRS + Rm * 2/3 to QRS + Rm, R decreases
            # 1.4 times slower than the decrease of the previously discussed
            # steep slope threshold (M in the 200 to 1200 ms interval).
            elif (
                current_sample >= QRS[-1] + (2 / 3.0) * Rm
                and current_sample < QRS[-1] + Rm
            ):
                R += Rdec
            # After QRS + Rm the decrease of R is stopped
        # MFR = M + F + R
        MFR = M + F + R
        # QRS or beat complex is detected if Yi >= MFR
        if not skip and Y[current_sample] >= MFR:
            QRS += [current_sample]
            Rpeak += [QRS[-1] + np.argmax(Y[QRS[-1] : QRS[-1] + v300ms])]
            if len(QRS) >= 2:
                # A buffer with the 5 last RR intervals is updated at any new QRS detection.
                RR[RRidx] = QRS[-1] - QRS[-2]
                RRidx = np.mod(RRidx + 1, 5)
        skip = False
        # With every signal sample, F is updated adding the maximum
        # of Y in the latest 50 ms of the 350 ms interval and
+
# subtracting maxY in the earliest 50 ms of the interval.
|
| 733 |
+
if current_sample >= v350ms:
|
| 734 |
+
Y_latest50 = Y[current_sample - v50ms : current_sample]
|
| 735 |
+
Y_earliest50 = Y[current_sample - v350ms : current_sample - v300ms]
|
| 736 |
+
F += (max(Y_latest50) - max(Y_earliest50)) / 1000.0
|
| 737 |
+
# Rm is the mean value of the buffer RR.
|
| 738 |
+
Rm = np.mean(RR)
|
| 739 |
+
current_sample += 1
|
| 740 |
+
|
| 741 |
+
rpeaks = []
|
| 742 |
+
for i in Rpeak:
|
| 743 |
+
a, b = i - v100ms, i + v100ms
|
| 744 |
+
if a < 0:
|
| 745 |
+
a = 0
|
| 746 |
+
if b > length:
|
| 747 |
+
b = length
|
| 748 |
+
rpeaks.append(np.argmax(signal[a:b]) + a)
|
| 749 |
+
|
| 750 |
+
rpeaks = sorted(list(set(rpeaks)))
|
| 751 |
+
rpeaks = np.array(rpeaks, dtype="int")
|
| 752 |
+
|
| 753 |
+
return utils.ReturnTuple((rpeaks,), ("rpeaks",))
|
| 754 |
+
|
| 755 |
+
|
| 756 |
+
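The detection statistic assembled above — the squared k-lag transform Y[n] = X[n]**2 - X[n-k]*X[n+k], half-wave rectified and smoothed by a moving average with its first zero near 25 Hz — can be exercised on a toy input. A minimal self-contained sketch (`complex_lead` is an illustrative name, not a library function):

```python
import numpy as np
import scipy.signal as ss


def complex_lead(X, sampling_rate=1000.0, k=1):
    # Squared k-lag transform: emphasises sharp, high-amplitude deflections.
    L = len(X)
    Y = np.array([X[n] ** 2 - X[n - k] * X[n + k] for n in range(k + 1, L - k)])
    Y[Y < 0] = 0  # half-wave rectification, as in the source above
    # Moving average with its first zero near 25 Hz.
    b = np.ones(int(sampling_rate / 25.0)) / 25.0
    return ss.lfilter(b, [1], Y)


# A lone unit spike at sample 100 dominates the transformed trace;
# the transform drops k + 1 leading samples, so the peak lands at index 98.
x = np.zeros(200)
x[100] = 1.0
y = complex_lead(x)
print(int(np.argmax(y)))  # 98
```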
def engzee_segmenter(signal=None, sampling_rate=1000.0, threshold=0.48):
    """ECG R-peak segmentation algorithm.

    Follows the approach by Engelse and Zeelenberg [EnZe79]_ with the
    modifications by Lourenco *et al.* [LSLL12]_.

    Parameters
    ----------
    signal : array
        Input filtered ECG signal.
    sampling_rate : int, float, optional
        Sampling frequency (Hz).
    threshold : float, optional
        Detection threshold.

    Returns
    -------
    rpeaks : array
        R-peak location indices.

    References
    ----------
    .. [EnZe79] W. Engelse and C. Zeelenberg, "A single scan algorithm for
       QRS detection and feature extraction", IEEE Comp. in Cardiology,
       vol. 6, pp. 37-42, 1979
    .. [LSLL12] A. Lourenco, H. Silva, P. Leite, R. Lourenco and A. Fred,
       "Real Time Electrocardiogram Segmentation for Finger Based ECG
       Biometrics", BIOSIGNALS 2012, pp. 49-54, 2012

    """

    # check inputs
    if signal is None:
        raise TypeError("Please specify an input signal.")

    # algorithm parameters
    changeM = int(0.75 * sampling_rate)
    Miterate = int(1.75 * sampling_rate)
    v250ms = int(0.25 * sampling_rate)
    v1200ms = int(1.2 * sampling_rate)
    v1500ms = int(1.5 * sampling_rate)
    v180ms = int(0.18 * sampling_rate)
    p10ms = int(np.ceil(0.01 * sampling_rate))
    p20ms = int(np.ceil(0.02 * sampling_rate))
    err_kill = int(0.01 * sampling_rate)
    inc = 1
    mmth = threshold
    mmp = 0.2

    # Differentiator (1)
    y1 = [signal[i] - signal[i - 4] for i in range(4, len(signal))]

    # Low pass filter (2)
    c = [1, 4, 6, 4, 1, -1, -4, -6, -4, -1]
    y2 = np.array([np.dot(c, y1[n - 9 : n + 1]) for n in range(9, len(y1))])
    y2_len = len(y2)

    # vars
    MM = mmth * max(y2[:Miterate]) * np.ones(3)
    MMidx = 0
    Th = np.mean(MM)
    NN = mmp * min(y2[:Miterate]) * np.ones(2)
    NNidx = 0
    ThNew = np.mean(NN)
    update = False
    nthfpluss = []
    rpeaks = []

    # Find nthf+ point
    while True:
        # If a previous intersection was found, continue the analysis from there
        if update:
            if inc * changeM + Miterate < y2_len:
                a = (inc - 1) * changeM
                b = inc * changeM + Miterate
                Mnew = mmth * max(y2[a:b])
                Nnew = mmp * min(y2[a:b])
            elif y2_len - (inc - 1) * changeM > v1500ms:
                a = (inc - 1) * changeM
                Mnew = mmth * max(y2[a:])
                Nnew = mmp * min(y2[a:])
            if len(y2) - inc * changeM > Miterate:
                MM[MMidx] = Mnew if Mnew <= 1.5 * MM[MMidx - 1] else 1.1 * MM[MMidx - 1]
                NN[NNidx] = (
                    Nnew
                    if abs(Nnew) <= 1.5 * abs(NN[NNidx - 1])
                    else 1.1 * NN[NNidx - 1]
                )
            MMidx = np.mod(MMidx + 1, len(MM))
            NNidx = np.mod(NNidx + 1, len(NN))
            Th = np.mean(MM)
            ThNew = np.mean(NN)
            inc += 1
            update = False
        if nthfpluss:
            lastp = nthfpluss[-1] + 1
            if lastp < (inc - 1) * changeM:
                lastp = (inc - 1) * changeM
            y22 = y2[lastp : inc * changeM + err_kill]
            # find intersection with Th
            try:
                nthfplus = np.intersect1d(
                    np.nonzero(y22 > Th)[0], np.nonzero(y22 < Th)[0] - 1
                )[0]
            except IndexError:
                if inc * changeM > len(y2):
                    break
                else:
                    update = True
                    continue
            # adjust index
            nthfplus += int(lastp)
            # if a previous R peak was found:
            if rpeaks:
                # check if intersection is within the 200-1200 ms interval. Modification: 300 ms -> 200 bpm
                if nthfplus - rpeaks[-1] > v250ms and nthfplus - rpeaks[-1] < v1200ms:
                    pass
                # if new intersection is within the <200ms interval, skip it. Modification: 300 ms -> 200 bpm
                elif nthfplus - rpeaks[-1] < v250ms:
                    nthfpluss += [nthfplus]
                    continue
        # no previous intersection, find the first one
        else:
            try:
                aux = np.nonzero(
                    y2[(inc - 1) * changeM : inc * changeM + err_kill] > Th
                )[0]
                bux = (
                    np.nonzero(y2[(inc - 1) * changeM : inc * changeM + err_kill] < Th)[
                        0
                    ]
                    - 1
                )
                nthfplus = int((inc - 1) * changeM) + np.intersect1d(aux, bux)[0]
            except IndexError:
                if inc * changeM > len(y2):
                    break
                else:
                    update = True
                    continue
        nthfpluss += [nthfplus]
        # Define the 180 ms search region
        windowW = np.arange(nthfplus, nthfplus + v180ms)
        # Check if the condition y2[n] < Th holds for a specified
        # number of consecutive points (experimentally we found this number to be at least 10 points)
        i, f = windowW[0], windowW[-1] if windowW[-1] < len(y2) else -1
        hold_points = np.diff(np.nonzero(y2[i:f] < ThNew)[0])
        cont = 0
        for hp in hold_points:
            if hp == 1:
                cont += 1
                if cont == p10ms - 1:  # -1 is because diff eats a sample
                    max_shift = p20ms  # looks for X's max a bit to the right
                    if nthfpluss[-1] > max_shift:
                        rpeaks += [np.argmax(signal[i - max_shift : f]) + i - max_shift]
                    else:
                        rpeaks += [np.argmax(signal[i:f]) + i]
                    break
            else:
                cont = 0

    rpeaks = sorted(list(set(rpeaks)))
    rpeaks = np.array(rpeaks, dtype="int")

    return utils.ReturnTuple((rpeaks,), ("rpeaks",))

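The two FIR stages at the top of `engzee_segmenter` (the 4-sample differentiator and the 10-tap low-pass with coefficients `c`) can be run standalone to see how much signal each stage consumes. A minimal sketch; `engzee_front_end` is an illustrative name, not part of the library:

```python
import numpy as np


def engzee_front_end(signal):
    # Differentiator (1): y1[n] = x[n] - x[n - 4]
    y1 = [signal[i] - signal[i - 4] for i in range(4, len(signal))]
    # Low-pass filter (2): 10-tap FIR, same coefficients as the source above.
    c = [1, 4, 6, 4, 1, -1, -4, -6, -4, -1]
    return np.array([np.dot(c, y1[n - 9 : n + 1]) for n in range(9, len(y1))])


# Each stage shortens the signal: 4 samples, then 9 more.
x = np.sin(2 * np.pi * 5 * np.arange(1000) / 1000.0)  # 5 Hz tone at 1 kHz
y2 = engzee_front_end(x)
print(len(x) - len(y2))  # 13
```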
def gamboa_segmenter(signal=None, sampling_rate=1000.0, tol=0.002):
    """ECG R-peak segmentation algorithm.

    Follows the approach by Gamboa.

    Parameters
    ----------
    signal : array
        Input filtered ECG signal.
    sampling_rate : int, float, optional
        Sampling frequency (Hz).
    tol : float, optional
        Tolerance parameter.

    Returns
    -------
    rpeaks : array
        R-peak location indices.

    """

    # check inputs
    if signal is None:
        raise TypeError("Please specify an input signal.")

    # convert to samples
    v_100ms = int(0.1 * sampling_rate)
    v_300ms = int(0.3 * sampling_rate)
    hist, edges = np.histogram(signal, 100, density=True)

    TH = 0.01
    F = np.cumsum(hist)

    v0 = edges[np.nonzero(F > TH)[0][0]]
    v1 = edges[np.nonzero(F < (1 - TH))[0][-1]]

    nrm = max([abs(v0), abs(v1)])
    norm_signal = signal / float(nrm)

    d2 = np.diff(norm_signal, 2)

    b = np.nonzero((np.diff(np.sign(np.diff(-d2)))) == -2)[0] + 2
    b = np.intersect1d(b, np.nonzero(-d2 > tol)[0])

    if len(b) < 3:
        rpeaks = []
    else:
        b = b.astype("float")
        rpeaks = []
        previous = b[0]
        for i in b[1:]:
            if i - previous > v_300ms:
                previous = i
                rpeaks.append(np.argmax(signal[int(i) : int(i + v_100ms)]) + i)

    rpeaks = sorted(list(set(rpeaks)))
    rpeaks = np.array(rpeaks, dtype="int")

    return utils.ReturnTuple((rpeaks,), ("rpeaks",))

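The histogram step above normalises the signal by near-extreme quantiles of its amplitude distribution rather than by the raw min/max, which makes the scale robust to isolated outliers. A standalone sketch of the same idea (`gamboa_normalize` is an illustrative name, not a library function):

```python
import numpy as np


def gamboa_normalize(signal, TH=0.01):
    # Empirical amplitude distribution, mirroring the source above.
    hist, edges = np.histogram(signal, 100, density=True)
    F = np.cumsum(hist)
    v0 = edges[np.nonzero(F > TH)[0][0]]         # low-amplitude cut
    v1 = edges[np.nonzero(F < (1 - TH))[0][-1]]  # high-amplitude cut
    return signal / float(max(abs(v0), abs(v1)))


# A ramp already spanning [-1, 1] is returned unchanged.
x = np.linspace(-1.0, 1.0, 1000)
y = gamboa_normalize(x)
print(bool(np.allclose(y, x)))  # True
```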
def hamilton_segmenter(signal=None, sampling_rate=1000.0):
    """ECG R-peak segmentation algorithm.

    Follows the approach by Hamilton [Hami02]_.

    Parameters
    ----------
    signal : array
        Input filtered ECG signal.
    sampling_rate : int, float, optional
        Sampling frequency (Hz).

    Returns
    -------
    rpeaks : array
        R-peak location indices.

    References
    ----------
    .. [Hami02] P.S. Hamilton, "Open Source ECG Analysis Software
       Documentation", E.P.Limited, 2002

    """

    # check inputs
    if signal is None:
        raise TypeError("Please specify an input signal.")

    sampling_rate = float(sampling_rate)
    length = len(signal)
    dur = length / sampling_rate

    # algorithm parameters
    v1s = int(1.0 * sampling_rate)
    v100ms = int(0.1 * sampling_rate)
    TH_elapsed = np.ceil(0.36 * sampling_rate)
    sm_size = int(0.08 * sampling_rate)
    init_ecg = 8  # seconds for initialization
    if dur < init_ecg:
        init_ecg = int(dur)

    # filtering
    filtered, _, _ = st.filter_signal(
        signal=signal,
        ftype="butter",
        band="lowpass",
        order=4,
        frequency=25.0,
        sampling_rate=sampling_rate,
    )
    filtered, _, _ = st.filter_signal(
        signal=filtered,
        ftype="butter",
        band="highpass",
        order=4,
        frequency=3.0,
        sampling_rate=sampling_rate,
    )

    # diff
    dx = np.abs(np.diff(filtered, 1) * sampling_rate)

    # smoothing
    dx, _ = st.smoother(signal=dx, kernel="hamming", size=sm_size, mirror=True)

    # buffers
    qrspeakbuffer = np.zeros(init_ecg)
    noisepeakbuffer = np.zeros(init_ecg)
    peak_idx_test = np.zeros(init_ecg)
    noise_idx = np.zeros(init_ecg)
    rrinterval = sampling_rate * np.ones(init_ecg)

    a, b = 0, v1s
    all_peaks, _ = st.find_extrema(signal=dx, mode="max")
    for i in range(init_ecg):
        peaks, values = st.find_extrema(signal=dx[a:b], mode="max")
        try:
            ind = np.argmax(values)
        except ValueError:
            pass
        else:
            # peak amplitude
            qrspeakbuffer[i] = values[ind]
            # peak location
            peak_idx_test[i] = peaks[ind] + a

        a += v1s
        b += v1s

    # thresholds
    ANP = np.median(noisepeakbuffer)
    AQRSP = np.median(qrspeakbuffer)
    TH = 0.475
    DT = ANP + TH * (AQRSP - ANP)
    DT_vec = []
    indexqrs = 0
    indexnoise = 0
    indexrr = 0
    npeaks = 0
    offset = 0

    beats = []

    # detection rules
    # 1 - ignore all peaks that precede or follow larger peaks by less than 200ms
    lim = int(np.ceil(0.2 * sampling_rate))
    diff_nr = int(np.ceil(0.045 * sampling_rate))
    bpsi, bpe = offset, 0

    for f in all_peaks:
        DT_vec += [DT]
        # 1 - Checking if f-peak is larger than any peak following or preceding it by less than 200 ms
        peak_cond = np.array(
            (all_peaks > f - lim) * (all_peaks < f + lim) * (all_peaks != f)
        )
        peaks_within = all_peaks[peak_cond]
        if peaks_within.any() and (max(dx[peaks_within]) > dx[f]):
            continue

        # 4 - If the peak is larger than the detection threshold call it a QRS complex, otherwise call it noise
        if dx[f] > DT:
            # 2 - look for both positive and negative slopes in raw signal
            if f < diff_nr:
                diff_now = np.diff(signal[0 : f + diff_nr])
            elif f + diff_nr >= len(signal):
                diff_now = np.diff(signal[f - diff_nr : len(dx)])
            else:
                diff_now = np.diff(signal[f - diff_nr : f + diff_nr])
            diff_signer = diff_now[diff_now > 0]
            if len(diff_signer) == 0 or len(diff_signer) == len(diff_now):
                continue
            # RR INTERVALS
            if npeaks > 0:
                # 3 - in here we check point 3 of the Hamilton paper
                # that is, we check whether our current peak is a valid R-peak.
                prev_rpeak = beats[npeaks - 1]

                elapsed = f - prev_rpeak
                # if the previous peak was within the 360 ms interval
                if elapsed < TH_elapsed:
                    # check current and previous slopes
                    if prev_rpeak < diff_nr:
                        diff_prev = np.diff(signal[0 : prev_rpeak + diff_nr])
                    elif prev_rpeak + diff_nr >= len(signal):
                        diff_prev = np.diff(signal[prev_rpeak - diff_nr : len(dx)])
                    else:
                        diff_prev = np.diff(
                            signal[prev_rpeak - diff_nr : prev_rpeak + diff_nr]
                        )

                    slope_now = max(diff_now)
                    slope_prev = max(diff_prev)

                    if slope_now < 0.5 * slope_prev:
                        # if current slope is smaller than half the previous one, then it is a T-wave
                        continue
                if dx[f] < 3.0 * np.median(qrspeakbuffer):  # avoid spurious noise peaks
                    beats += [int(f) + bpsi]
                else:
                    continue

                if bpe == 0:
                    rrinterval[indexrr] = beats[npeaks] - beats[npeaks - 1]
                    indexrr += 1
                    if indexrr == init_ecg:
                        indexrr = 0
                else:
                    if beats[npeaks] > beats[bpe - 1] + v100ms:
                        rrinterval[indexrr] = beats[npeaks] - beats[npeaks - 1]
                        indexrr += 1
                        if indexrr == init_ecg:
                            indexrr = 0

            elif dx[f] < 3.0 * np.median(qrspeakbuffer):
                beats += [int(f) + bpsi]
            else:
                continue

            npeaks += 1
            qrspeakbuffer[indexqrs] = dx[f]
            peak_idx_test[indexqrs] = f
            indexqrs += 1
            if indexqrs == init_ecg:
                indexqrs = 0
        if dx[f] <= DT:
            # 4 - not valid
            # 5 - If no QRS has been detected within 1.5 R-to-R intervals,
            # there was a peak that was larger than half the detection threshold,
            # and the peak followed the preceding detection by at least 360 ms,
            # classify that peak as a QRS complex
            tf = f + bpsi
            # RR interval median
            RRM = np.median(rrinterval)  # initial values are good?

            if len(beats) >= 2:
                elapsed = tf - beats[npeaks - 1]

                if elapsed >= 1.5 * RRM and elapsed > TH_elapsed:
                    if dx[f] > 0.5 * DT:
                        beats += [int(f) + offset]
                        # RR INTERVALS
                        if npeaks > 0:
                            rrinterval[indexrr] = beats[npeaks] - beats[npeaks - 1]
                            indexrr += 1
                            if indexrr == init_ecg:
                                indexrr = 0
                        npeaks += 1
                        qrspeakbuffer[indexqrs] = dx[f]
                        peak_idx_test[indexqrs] = f
                        indexqrs += 1
                        if indexqrs == init_ecg:
                            indexqrs = 0
                else:
                    noisepeakbuffer[indexnoise] = dx[f]
                    noise_idx[indexnoise] = f
                    indexnoise += 1
                    if indexnoise == init_ecg:
                        indexnoise = 0
            else:
                noisepeakbuffer[indexnoise] = dx[f]
                noise_idx[indexnoise] = f
                indexnoise += 1
                if indexnoise == init_ecg:
                    indexnoise = 0

        # Update Detection Threshold
        ANP = np.median(noisepeakbuffer)
        AQRSP = np.median(qrspeakbuffer)
        DT = ANP + 0.475 * (AQRSP - ANP)

    beats = np.array(beats)

    r_beats = []
    thres_ch = 0.85
    adjacency = 0.05 * sampling_rate
    for i in beats:
        error = [False, False]
        if i - lim < 0:
            window = signal[0 : i + lim]
            add = 0
        elif i + lim >= length:
            window = signal[i - lim : length]
            add = i - lim
        else:
            window = signal[i - lim : i + lim]
            add = i - lim
        # meanval = np.mean(window)
        w_peaks, _ = st.find_extrema(signal=window, mode="max")
        w_negpeaks, _ = st.find_extrema(signal=window, mode="min")
        zerdiffs = np.where(np.diff(window) == 0)[0]
        w_peaks = np.concatenate((w_peaks, zerdiffs))
        w_negpeaks = np.concatenate((w_negpeaks, zerdiffs))

        pospeaks = sorted(zip(window[w_peaks], w_peaks), reverse=True)
        negpeaks = sorted(zip(window[w_negpeaks], w_negpeaks))

        try:
            twopeaks = [pospeaks[0]]
        except IndexError:
            twopeaks = []
        try:
            twonegpeaks = [negpeaks[0]]
        except IndexError:
            twonegpeaks = []

        # getting positive peaks
        for i in range(len(pospeaks) - 1):
            if abs(pospeaks[0][1] - pospeaks[i + 1][1]) > adjacency:
                twopeaks.append(pospeaks[i + 1])
                break
        try:
            posdiv = abs(twopeaks[0][0] - twopeaks[1][0])
        except IndexError:
            error[0] = True

        # getting negative peaks
        for i in range(len(negpeaks) - 1):
            if abs(negpeaks[0][1] - negpeaks[i + 1][1]) > adjacency:
                twonegpeaks.append(negpeaks[i + 1])
                break
        try:
            negdiv = abs(twonegpeaks[0][0] - twonegpeaks[1][0])
        except IndexError:
            error[1] = True

        # choosing type of R-peak
        n_errors = sum(error)
        try:
            if not n_errors:
                if posdiv > thres_ch * negdiv:
                    # pos noerr
                    r_beats.append(twopeaks[0][1] + add)
                else:
                    # neg noerr
                    r_beats.append(twonegpeaks[0][1] + add)
            elif n_errors == 2:
                if abs(twopeaks[0][1]) > abs(twonegpeaks[0][1]):
                    # pos allerr
                    r_beats.append(twopeaks[0][1] + add)
                else:
                    # neg allerr
                    r_beats.append(twonegpeaks[0][1] + add)
            elif error[0]:
                # pos poserr
                r_beats.append(twopeaks[0][1] + add)
            else:
                # neg negerr
                r_beats.append(twonegpeaks[0][1] + add)
        except IndexError:
            continue

    rpeaks = sorted(list(set(r_beats)))
    rpeaks = np.array(rpeaks, dtype="int")

    return utils.ReturnTuple((rpeaks,), ("rpeaks",))

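The adaptive detection threshold used above is the noise-peak median plus a fixed fraction (0.475) of the gap up to the QRS-peak median; as the two buffers are refreshed, the threshold tracks signal quality. A standalone sketch (`update_threshold` is an illustrative name, not a library function):

```python
import numpy as np


def update_threshold(qrspeakbuffer, noisepeakbuffer, TH=0.475):
    ANP = np.median(noisepeakbuffer)  # median noise peak
    AQRSP = np.median(qrspeakbuffer)  # median QRS peak
    return ANP + TH * (AQRSP - ANP)


# QRS peaks near 1.0 and noise peaks near 0.1 put the threshold in between:
# 0.1 + 0.475 * (1.0 - 0.1) = 0.5275
DT = update_threshold([1.0, 0.9, 1.1, 1.0], [0.10, 0.12, 0.08, 0.10])
print(round(DT, 4))  # 0.5275
```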
def ASI_segmenter(signal=None, sampling_rate=1000.0, Pth=5.0):
    """ECG R-peak segmentation algorithm.

    Parameters
    ----------
    signal : array
        Input ECG signal.
    sampling_rate : int, float, optional
        Sampling frequency (Hz).
    Pth : int, float, optional
        Free parameter used in the exponential decay.

    Returns
    -------
    rpeaks : array
        R-peak location indices.

    References
    ----------
    Modification by Tiago Rodrigues, based on:
    [R. Gutiérrez-Rivas 2015] Novel Real-Time Low-Complexity QRS Complex Detector
    Based on Adaptive Thresholding. Vol. 15, no. 10, pp. 6036-6043, 2015.
    [D. Sadhukhan] R-Peak Detection Algorithm for ECG using Double Difference
    and RR Interval Processing. Procedia Technology, vol. 4, pp. 873-877, 2012.

    """

    N = round(3 * sampling_rate / 128)
    Nd = N - 1
    Rmin = 0.26

    rpeaks = []
    i = 1
    tf = len(signal)
    Ramptotal = 0

    # Double derivative squared
    diff_ecg = [signal[i] - signal[i - Nd] for i in range(Nd, len(signal))]
    ddiff_ecg = [diff_ecg[i] - diff_ecg[i - 1] for i in range(1, len(diff_ecg))]
    squar = np.square(ddiff_ecg)

    # Integrate moving window
    b = np.array(np.ones(N))
    a = [1]
    processed_ecg = ss.lfilter(b, a, squar)

    # R-peak finder FSM
    while i < tf - sampling_rate:  # ignore last second of recording

        # State 1: looking for maximum
        tf1 = round(i + Rmin * sampling_rate)
        Rpeakamp = 0
        while i < tf1:
            # R-peak amplitude and position
            if processed_ecg[i] > Rpeakamp:
                Rpeakamp = processed_ecg[i]
                rpeakpos = i + 1
            i += 1

        Ramptotal = (19 / 20) * Ramptotal + (1 / 20) * Rpeakamp
        rpeaks.append(rpeakpos)

        # State 2: waiting state
        d = tf1 - rpeakpos
        tf2 = i + round(0.2 * 250 - d)
        while i <= tf2:
            i += 1

        # State 3: decreasing threshold
        Thr = Ramptotal
        while processed_ecg[i] < Thr:
            Thr = Thr * math.exp(-Pth / sampling_rate)
            i += 1

    return utils.ReturnTuple((rpeaks,), ("rpeaks",))

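State 3 above decays the threshold geometrically, multiplying by exp(-Pth/sampling_rate) once per sample until the integrated signal crosses it, so a larger Pth gives a faster decay. A standalone sketch of just that loop (`decay_steps` is an illustrative name, not a library function):

```python
import math


def decay_steps(Thr, level, Pth=5.0, sampling_rate=1000.0):
    # Count samples until the decaying threshold falls to the signal level.
    n = 0
    while level < Thr:
        Thr = Thr * math.exp(-Pth / sampling_rate)
        n += 1
    return n


# From 1.0 down to 0.5 takes ceil(1000 * ln(2) / 5) = 139 samples;
# doubling Pth roughly halves that.
print(decay_steps(1.0, 0.5))            # 139
print(decay_steps(1.0, 0.5, Pth=10.0))  # 70
```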
def getQPositions(ecg_proc=None, show=False):
    """Different ECG waves (Q, R, S, ...) are not present, or are not as clear
    to identify, in all ECG leads (I, II, III, V1, V2, V3, ...).
    For the Q wave we suggest using leads I or aVL; avoid II, III, V1, V2, V3,
    V4, aVR and aVF.

    Parameters
    ----------
    ecg_proc : object
        Object returned by the function ecg.
    show : bool, optional
        If True, show a plot of the Q positions on every signal sample/template.

    Returns
    -------
    Q_positions : array
        Array with all Q positions on the signal.
    Q_start_positions : array
        Array with all Q start positions on the signal.

    """

    template_r_position = 100  # the R peak on the template is always at index 100
    Q_positions = []
    Q_start_positions = []

    for n, each in enumerate(ecg_proc["templates"]):
        # Get Q position
        template_left = each[0 : template_r_position + 1]
        mininums_from_template_left = argrelextrema(template_left, np.less)
        Q_position = ecg_proc["rpeaks"][n] - (
            template_r_position - mininums_from_template_left[0][-1]
        )
        Q_positions.append(Q_position)

        # Get Q start position
        template_Q_left = each[0 : mininums_from_template_left[0][-1] + 1]
        maximum_from_template_Q_left = argrelextrema(template_Q_left, np.greater)
        Q_start_position = (
            ecg_proc["rpeaks"][n]
            - template_r_position
            + maximum_from_template_Q_left[0][-1]
        )
        Q_start_positions.append(Q_start_position)

        if show:
            plt.plot(each)
            plt.axvline(x=template_r_position, color="r", label="R peak")
            plt.axvline(
                x=mininums_from_template_left[0][-1], color="yellow", label="Q Position"
            )
            plt.axvline(
                x=maximum_from_template_Q_left[0][-1],
                color="green",
                label="Q Start Position",
            )
            plt.legend()
            plt.show()
    return Q_positions, Q_start_positions

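`getQPositions` locates Q as the last local minimum to the left of the template's R peak. The same `argrelextrema` pattern can be tried on a toy template (the values below are made up for illustration):

```python
import numpy as np
from scipy.signal import argrelextrema

# Toy template: a Q-like dip at index 6, then an R-like peak at index 10.
template = np.array(
    [0.0, 0.1, 0.2, 0.1, 0.0, -0.1, -0.3, -0.1, 0.4, 0.8, 1.0, 0.6, 0.2]
)
template_r_position = 10
template_left = template[: template_r_position + 1]
minimums = argrelextrema(template_left, np.less)
q_position = minimums[0][-1]  # last local minimum left of R
print(q_position)  # 6
```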
def getSPositions(ecg_proc=None, show=False):
    """Different ECG waves (Q, R, S, ...) are not present, or are not as clear
    to identify, in all ECG leads (I, II, III, V1, V2, V3, ...).
    For the S wave we suggest using leads V1, V2 or V3; avoid I, V5, V6, aVR
    and aVL.

    Parameters
    ----------
    ecg_proc : object
        Object returned by the function ecg.
    show : bool, optional
        If True, show a plot of the S positions on every signal sample/template.

    Returns
    -------
    S_positions : array
        Array with all S positions on the signal.
    S_end_positions : array
        Array with all S end positions on the signal.

    """

    template_r_position = 100  # the R peak on the template is always at index 100
    S_positions = []
    S_end_positions = []
    template_size = len(ecg_proc["templates"][0])

    for n, each in enumerate(ecg_proc["templates"]):
        # Get S position
        template_right = each[template_r_position : template_size + 1]
        mininums_from_template_right = argrelextrema(template_right, np.less)
        S_position = ecg_proc["rpeaks"][n] + mininums_from_template_right[0][0]
        S_positions.append(S_position)

        # Get S end position
        maximums_from_template_right = argrelextrema(template_right, np.greater)
        S_end_position = ecg_proc["rpeaks"][n] + maximums_from_template_right[0][0]
        S_end_positions.append(S_end_position)

        if show:
            plt.plot(each)
            plt.axvline(x=template_r_position, color="r", label="R peak")
            plt.axvline(
                x=template_r_position + mininums_from_template_right[0][0],
                color="yellow",
                label="S Position",
            )
            plt.axvline(
                x=template_r_position + maximums_from_template_right[0][0],
                color="green",
                label="S end Position",
            )
            plt.legend()
            plt.show()

    return S_positions, S_end_positions

def getPPositions(ecg_proc=None, show=False):
    """Compute P-wave positions.

    The different ECG waves (Q, R, S, ...) are not present, or are not easy
    to identify, in every ECG lead (I, II, III, V1, V2, V3, ...).
    For the P wave we suggest using leads II, V1 and aVF; avoid I, III, V1,
    V2, V3, V4, V5 and aVL.

    Parameters
    ----------
    ecg_proc : object
        Object returned by the `ecg` function.
    show : bool, optional
        If True, show a plot of the P positions on every signal template.

    Returns
    -------
    P_positions : array
        Array with all P positions on the signal.
    P_start_positions : array
        Array with all P start positions on the signal.
    P_end_positions : array
        Array with all P end positions on the signal.
    """

    template_r_position = 100  # the R peak on the template is always at index 100
    template_p_position_max = (
        80  # the P wave always occurs within the first 80 indexes of the template
    )
    P_positions = []
    P_start_positions = []
    P_end_positions = []

    for n, each in enumerate(ecg_proc["templates"]):
        # Get P position
        template_left = each[0 : template_p_position_max + 1]
        max_from_template_left = np.argmax(template_left)
        # print("P Position=" + str(max_from_template_left))
        P_position = (
            ecg_proc["rpeaks"][n] - template_r_position + max_from_template_left
        )
        P_positions.append(P_position)

        # Get P start position
        template_P_left = each[0 : max_from_template_left + 1]
        mininums_from_template_left = argrelextrema(template_P_left, np.less)
        # print("P start position=" + str(mininums_from_template_left[0][-1]))
        P_start_position = (
            ecg_proc["rpeaks"][n]
            - template_r_position
            + mininums_from_template_left[0][-1]
        )
        P_start_positions.append(P_start_position)

        # Get P end position
        template_P_right = each[max_from_template_left : template_p_position_max + 1]
        mininums_from_template_right = argrelextrema(template_P_right, np.less)
        # print("P end position=" + str(mininums_from_template_right[0][0] + max_from_template_left))
        P_end_position = (
            ecg_proc["rpeaks"][n]
            - template_r_position
            + max_from_template_left
            + mininums_from_template_right[0][0]
        )
        P_end_positions.append(P_end_position)

        if show:
            plt.plot(each)
            plt.axvline(x=template_r_position, color="r", label="R peak")
            plt.axvline(x=max_from_template_left, color="yellow", label="P Position")
            plt.axvline(
                x=mininums_from_template_left[0][-1], color="green", label="P start"
            )
            plt.axvline(
                x=(max_from_template_left + mininums_from_template_right[0][0]),
                color="green",
                label="P end",
            )
            plt.legend()
            plt.show()

    return P_positions, P_start_positions, P_end_positions
def getTPositions(ecg_proc=None, show=False):
    """Compute T-wave positions.

    The different ECG waves (Q, R, S, ...) are not present, or are not easy
    to identify, in every ECG lead (I, II, III, V1, V2, V3, ...).
    For the T wave we suggest using leads V4 and V5 (II and V3 give good
    results, but with lower accuracy); avoid I, V1, V2, aVR and aVL.

    Parameters
    ----------
    ecg_proc : object
        Object returned by the `ecg` function.
    show : bool, optional
        If True, show a plot of the T positions on every signal template.

    Returns
    -------
    T_positions : array
        Array with all T positions on the signal.
    T_start_positions : array
        Array with all T start positions on the signal.
    T_end_positions : array
        Array with all T end positions on the signal.
    """

    template_r_position = 100  # the R peak on the template is always at index 100
    template_T_position_min = (
        170  # the T wave always occurs after index 170 of the template
    )
    T_positions = []
    T_start_positions = []
    T_end_positions = []

    for n, each in enumerate(ecg_proc["templates"]):
        # Get T position
        template_right = each[template_T_position_min:]
        max_from_template_right = np.argmax(template_right)
        # print("T Position=" + str(template_T_position_min + max_from_template_right))
        T_position = (
            ecg_proc["rpeaks"][n]
            - template_r_position
            + template_T_position_min
            + max_from_template_right
        )
        T_positions.append(T_position)

        # Get T start position
        template_T_left = each[
            template_r_position : template_T_position_min + max_from_template_right
        ]
        min_from_template_T_left = argrelextrema(template_T_left, np.less)
        # print("T start position=" + str(template_r_position + min_from_template_T_left[0][-1]))
        T_start_position = ecg_proc["rpeaks"][n] + min_from_template_T_left[0][-1]
        T_start_positions.append(T_start_position)

        # Get T end position
        template_T_right = each[template_T_position_min + max_from_template_right :]
        mininums_from_template_T_right = argrelextrema(template_T_right, np.less)
        # print("T end position=" + str(template_T_position_min + max_from_template_right + mininums_from_template_T_right[0][0]))
        T_end_position = (
            ecg_proc["rpeaks"][n]
            - template_r_position
            + template_T_position_min
            + max_from_template_right
            + mininums_from_template_T_right[0][0]
        )
        T_end_positions.append(T_end_position)

        if show:
            plt.plot(each)
            plt.axvline(x=template_r_position, color="r", label="R peak")
            plt.axvline(
                x=template_T_position_min + max_from_template_right,
                color="yellow",
                label="T Position",
            )
            plt.axvline(
                x=template_r_position + min_from_template_T_left[0][-1],
                color="green",
                label="T start",
            )
            plt.axvline(
                x=(
                    template_T_position_min
                    + max_from_template_right
                    + mininums_from_template_T_right[0][0]
                ),
                color="green",
                label="T end",
            )
            plt.legend()
            plt.show()

    return T_positions, T_start_positions, T_end_positions
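The index arithmetic shared by these wave-delineation functions is worth making explicit: each heartbeat template places the R peak at index 100, so an event found at template index k of beat n maps back to the original signal at rpeaks[n] - 100 + k. A minimal stdlib sketch (the R-peak and P-peak indexes below are made-up illustration values):

```python
# The R peak sits at template index 100, so an event at template index k of
# beat n maps to rpeaks[n] - template_r_position + k in the original signal.
rpeaks = [450, 1450, 2450]     # hypothetical R-peak sample indexes
template_r_position = 100
p_template_index = 62          # hypothetical P-peak index inside the template
n = 1
p_abs = rpeaks[n] - template_r_position + p_template_index
print(p_abs)  # 1412
```

The same mapping, with the extra `template_T_position_min` offset, is what converts the T-wave indexes found in the sliced template back to signal coordinates.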
def bSQI(detector_1, detector_2, fs=1000.0, mode="simple", search_window=150):
    """Compare the output of two R-peak detectors.

    Parameters
    ----------
    detector_1 : array
        Output of the first detector.
    detector_2 : array
        Output of the second detector.
    fs : int, optional
        Sampling rate, in Hz.
    mode : str, optional
        If 'simple', return the percentage of beats detected by both.
        If 'matching', return the peak matching degree.
        If 'n_double', return the number of matches divided by the sum of
        all detections minus the matches.
    search_window : int, optional
        Search window around each peak, in ms.

    Returns
    -------
    bSQI : float
        Agreement between the two detectors.

    """

    if detector_1 is None or detector_2 is None:
        raise TypeError("Input Error, check detector outputs")

    search_window = int(search_window / 1000 * fs)

    # count the beats from detector_1 that have a match in detector_2
    both = 0
    for i in detector_1:
        for j in range(max([0, i - search_window]), i + search_window):
            if j in detector_2:
                both += 1
                break

    if mode == "simple":
        return (both / len(detector_1)) * 100
    elif mode == "matching":
        return (2 * both) / (len(detector_1) + len(detector_2))
    elif mode == "n_double":
        return both / (len(detector_1) + len(detector_2) - both)
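As a sanity check, the matching loop above can be exercised on two synthetic detector outputs. This is a small stdlib re-implementation of the 'simple' mode for illustration, not the library function itself:

```python
def bsqi_simple(d1, d2, fs=1000.0, search_window=150):
    # percentage of d1 beats with a d2 beat inside the +/- search window
    w = int(search_window / 1000 * fs)
    both = sum(any(abs(j - i) < w for j in d2) for i in d1)
    return (both / len(d1)) * 100

beats_a = [100, 600, 1100, 1600]
beats_b = [105, 595, 1400, 1605]  # third beat is off by 300 samples
print(bsqi_simple(beats_a, beats_b))  # 75.0
```

Three of the four beats in `beats_a` find a partner within 150 samples, so the score is 75%.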
def sSQI(signal):
    """Return the skewness of the signal.

    Parameters
    ----------
    signal : array
        ECG signal.

    Returns
    -------
    skewness : float
        Skewness value.

    """

    if signal is None:
        raise TypeError("Please specify an input signal")

    return stats.skew(signal)
def kSQI(signal, fisher=True):
    """Return the kurtosis of the signal.

    Parameters
    ----------
    signal : array
        ECG signal.
    fisher : bool, optional
        If True, Fisher's definition is used (normal ==> 0.0). If False,
        Pearson's definition is used (normal ==> 3.0).

    Returns
    -------
    kurtosis : float
        Kurtosis value.

    """

    if signal is None:
        raise TypeError("Please specify an input signal")

    return stats.kurtosis(signal, fisher=fisher)
def pSQI(signal, f_thr=0.01):
    """Return the flatline percentage of the signal.

    Parameters
    ----------
    signal : array
        ECG signal.
    f_thr : float, optional
        Flatline threshold, in mV / sample.

    Returns
    -------
    flatline_percentage : float
        Percentage of the signal where the absolute value of the derivative
        is lower than the threshold.

    """

    if signal is None:
        raise TypeError("Please specify an input signal")

    diff = np.diff(signal)
    length = len(diff)

    flatline = np.where(abs(diff) < f_thr)[0]

    return (len(flatline) / length) * 100
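The flatline computation reduces to counting small consecutive-sample differences; a stdlib sketch of the same idea:

```python
def flatline_percentage(signal, f_thr=0.01):
    # fraction of first differences whose magnitude is below the threshold
    diffs = [b - a for a, b in zip(signal, signal[1:])]
    flat = sum(1 for d in diffs if abs(d) < f_thr)
    return (flat / len(diffs)) * 100

sig = [0.0, 0.0, 0.0, 0.5, 1.0, 1.0]  # flat, rising, flat
print(flatline_percentage(sig))  # 60.0
```

Three of the five sample-to-sample differences are (near) zero, so 60% of the toy signal counts as flatline.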
def fSQI(
    ecg_signal,
    fs=1000.0,
    nseg=1024,
    num_spectrum=[5, 20],
    dem_spectrum=None,
    mode="simple",
):
    """Return the ratio between two frequency power bands.

    Parameters
    ----------
    ecg_signal : array
        ECG signal.
    fs : float, optional
        ECG sampling frequency, in Hz.
    nseg : int, optional
        Frequency axis resolution.
    num_spectrum : array, optional
        Frequency band for the ratio's numerator, in Hz.
    dem_spectrum : array, optional
        Frequency band for the ratio's denominator, in Hz. If None, the
        whole spectrum is used.
    mode : str, optional
        If 'simple', return the ratio; if 'bas', return 1 - ratio.

    Returns
    -------
    ratio : float
        Ratio between the two power bands.

    """

    def power_in_range(f_range, f, Pxx_den):
        _indexes = np.where((f >= f_range[0]) & (f <= f_range[1]))[0]
        _power = integrate.trapz(Pxx_den[_indexes], f[_indexes])
        return _power

    if ecg_signal is None:
        raise TypeError("Please specify an input signal")

    f, Pxx_den = ss.welch(ecg_signal, fs, nperseg=nseg)
    num_power = power_in_range(num_spectrum, f, Pxx_den)

    if dem_spectrum is None:
        dem_power = power_in_range([0, float(fs / 2.0)], f, Pxx_den)
    else:
        dem_power = power_in_range(dem_spectrum, f, Pxx_den)

    if mode == "simple":
        return num_power / dem_power
    elif mode == "bas":
        return 1 - num_power / dem_power
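The band-power ratio can be reproduced with Welch's method directly; this is a hedged sketch (assuming numpy and scipy are available, and using `np.trapz` instead of `scipy.integrate.trapz`), fed with a pure 5 Hz tone whose power should fall almost entirely inside a band around 5 Hz:

```python
import numpy as np
from scipy import signal as ss

def band_ratio(x, fs, num_band, den_band=None, nseg=1024):
    # Welch PSD, then integrate the numerator band over the denominator band
    f, pxx = ss.welch(x, fs, nperseg=nseg)
    def band_power(band):
        m = (f >= band[0]) & (f <= band[1])
        return np.trapz(pxx[m], f[m])
    den = band_power(den_band) if den_band is not None else band_power((0, fs / 2))
    return band_power(num_band) / den

fs = 1000.0
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 5 * t)   # pure 5 Hz tone
r = band_ratio(x, fs, (3, 7))   # most power should fall in the 3-7 Hz band
```

Note that with `nperseg=1024` the frequency grid is fs/1024 ≈ 0.98 Hz wide, so very narrow bands enclose few bins and the trapezoidal integral becomes coarse; the same caveat applies to `power_in_range` above.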
def ZZ2018(
    signal, detector_1, detector_2, fs=1000, search_window=100, nseg=1024, mode="simple"
):
    """Signal quality estimator. Designed for signals with a length of 10 seconds.
    Follows the approach by Zhao *et al.* [Zhao18]_.

    Parameters
    ----------
    signal : array
        Input ECG signal in mV.
    detector_1 : array
        Output of the first R-peak detector.
    detector_2 : array
        Output of the second R-peak detector.
    fs : int, float, optional
        Sampling frequency (Hz).
    search_window : int, optional
        Search window around each peak, in ms.
    nseg : int, optional
        Frequency axis resolution.
    mode : str, optional
        If 'simple', use a simple heuristic. If 'fuzzy', employ a fuzzy
        classifier.

    Returns
    -------
    noise : str
        Quality classification.

    References
    ----------
    .. [Zhao18] Zhao, Z., & Zhang, Y. (2018).
       SQI quality evaluation mechanism of single-lead ECG signal based on
       simple heuristic fusion and fuzzy comprehensive evaluation.
       Frontiers in Physiology, 9, 727.
    """

    if len(detector_1) == 0 or len(detector_2) == 0:
        return "Unacceptable"

    ## compute indexes
    qsqi = bSQI(
        detector_1, detector_2, fs=fs, mode="matching", search_window=search_window
    )
    psqi = fSQI(signal, fs=fs, nseg=nseg, num_spectrum=[5, 15], dem_spectrum=[5, 40])
    ksqi = kSQI(signal)
    bassqi = fSQI(
        signal, fs=fs, nseg=nseg, num_spectrum=[0, 1], dem_spectrum=[0, 40], mode="bas"
    )

    if mode == "simple":
        ## First stage rules (0 = unqualified, 1 = suspicious, 2 = optimal)
        ## qSQI rules
        if qsqi > 0.90:
            qsqi_class = 2
        elif qsqi < 0.60:
            qsqi_class = 0
        else:
            qsqi_class = 1

        ## pSQI rules
        ## Get the maximum bpm
        if len(detector_1) > 1:
            RR_max = 60000.0 / (1000.0 / fs * np.min(np.diff(detector_1)))
        else:
            RR_max = 1

        if RR_max < 130:
            l1, l2, l3 = 0.5, 0.8, 0.4
        else:
            l1, l2, l3 = 0.4, 0.7, 0.3

        if psqi > l1 and psqi < l2:
            pSQI_class = 2
        elif psqi > l3 and psqi < l1:
            pSQI_class = 1
        else:
            pSQI_class = 0

        ## kSQI rules
        if ksqi > 5:
            kSQI_class = 2
        else:
            kSQI_class = 0

        ## basSQI rules
        if bassqi >= 0.95:
            basSQI_class = 2
        elif bassqi < 0.9:
            basSQI_class = 0
        else:
            basSQI_class = 1

        class_matrix = np.array([qsqi_class, pSQI_class, kSQI_class, basSQI_class])
        n_optimal = len(np.where(class_matrix == 2)[0])
        n_suspics = len(np.where(class_matrix == 1)[0])
        n_unqualy = len(np.where(class_matrix == 0)[0])
        if (
            n_unqualy >= 3
            or (n_unqualy == 2 and n_suspics >= 1)
            or (n_unqualy == 1 and n_suspics == 3)
        ):
            return "Unacceptable"
        elif n_optimal >= 3 and n_unqualy == 0:
            return "Excellent"
        else:
            return "Barely acceptable"

    elif mode == "fuzzy":
        # Transform qSQI range from [0, 1] to [0, 100]
        qsqi = qsqi * 100.0
        # UqH (Excellent)
        if qsqi <= 80:
            UqH = 0
        elif qsqi >= 90:
            UqH = qsqi / 100.0
        else:
            UqH = 1.0 / (1 + (1 / np.power(0.3 * (qsqi - 80), 2)))

        # UqI (Barely acceptable)
        UqI = 1.0 / (1 + np.power((qsqi - 75) / 7.5, 2))

        # UqJ (Unacceptable)
        if qsqi <= 55:
            UqJ = 1
        else:
            UqJ = 1.0 / (1 + np.power((qsqi - 55) / 5.0, 2))

        # Get R1
        R1 = np.array([UqH, UqI, UqJ])

        # pSQI
        # UpH
        if psqi <= 0.25:
            UpH = 0
        elif psqi >= 0.35:
            UpH = 1
        else:
            UpH = 0.1 * (psqi - 0.25)

        # UpI
        if psqi < 0.18:
            UpI = 0
        elif psqi >= 0.32:
            UpI = 0
        elif psqi >= 0.18 and psqi < 0.22:
            UpI = 25 * (psqi - 0.18)
        elif psqi >= 0.22 and psqi < 0.28:
            UpI = 1
        else:
            UpI = 25 * (0.32 - psqi)

        # UpJ
        if psqi < 0.15:
            UpJ = 1
        elif psqi > 0.25:
            UpJ = 0
        else:
            UpJ = 0.1 * (0.25 - psqi)

        # Get R2
        R2 = np.array([UpH, UpI, UpJ])

        # kSQI
        # Get R3
        if ksqi > 5:
            R3 = np.array([1, 0, 0])
        else:
            R3 = np.array([0, 0, 1])

        # basSQI
        # UbH
        if bassqi <= 90:
            UbH = 0
        elif bassqi >= 95:
            UbH = bassqi / 100.0
        else:
            UbH = 1.0 / (1 + (1 / np.power(0.8718 * (bassqi - 90), 2)))

        # UbI
        if bassqi <= 85:
            UbI = 1
        else:
            UbI = 1.0 / (1 + np.power((bassqi - 85) / 5.0, 2))

        # UbJ
        UbJ = 1.0 / (1 + np.power((bassqi - 95) / 2.5, 2))

        # R4
        R4 = np.array([UbH, UbI, UbJ])

        # evaluation matrix R
        R = np.vstack([R1, R2, R3, R4])

        # weight vector W
        W = np.array([0.4, 0.4, 0.1, 0.1])

        S = np.array(
            [np.sum((R[:, 0] * W)), np.sum((R[:, 1] * W)), np.sum((R[:, 2] * W))]
        )

        # classify
        V = np.sum(np.power(S, 2) * [1, 2, 3]) / np.sum(np.power(S, 2))

        if V < 1.5:
            return "Excellent"
        elif V >= 2.40:
            return "Unacceptable"
        else:
            return "Barely acceptable"
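The final fuzzy step is an ordinary weighted aggregation over membership degrees. With hypothetical membership rows (the R values below are made up for illustration, not taken from real data) it looks like this:

```python
import numpy as np

# Hypothetical membership rows (Excellent, Barely acceptable, Unacceptable)
# for the four indices qSQI, pSQI, kSQI and basSQI.
R = np.array([
    [0.9, 0.2, 0.0],
    [0.8, 0.3, 0.0],
    [1.0, 0.0, 0.0],
    [0.7, 0.2, 0.1],
])
W = np.array([0.4, 0.4, 0.1, 0.1])  # index weights, as in the fuzzy mode

S = R.T @ W  # weighted membership per quality class
V = np.sum(S**2 * np.array([1, 2, 3])) / np.sum(S**2)
label = (
    "Excellent" if V < 1.5
    else ("Unacceptable" if V >= 2.40 else "Barely acceptable")
)
print(label)  # Excellent
```

Squaring S before taking the class-weighted mean sharpens the decision toward the dominant class, which is why these strongly "Excellent" memberships yield V well below the 1.5 cutoff.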
BioSPPy/source/biosppy/signals/eda.py
ADDED
@@ -0,0 +1,252 @@
# -*- coding: utf-8 -*-
"""
biosppy.signals.eda
-------------------

This module provides methods to process Electrodermal Activity (EDA)
signals, also known as Galvanic Skin Response (GSR).

:copyright: (c) 2015-2018 by Instituto de Telecomunicacoes
:license: BSD 3-clause, see LICENSE for more details.
"""

# Imports
# compat
from __future__ import absolute_import, division, print_function
from six.moves import range

# 3rd party
import numpy as np

# local
from . import tools as st
from .. import plotting, utils


def eda(signal=None, sampling_rate=1000.0, path=None, show=True, min_amplitude=0.1):
    """Process a raw EDA signal and extract relevant signal features using
    default parameters.

    Parameters
    ----------
    signal : array
        Raw EDA signal.
    sampling_rate : int, float, optional
        Sampling frequency (Hz).
    path : str, optional
        If provided, the plot will be saved to the specified file.
    show : bool, optional
        If True, show a summary plot.
    min_amplitude : float, optional
        Minimum threshold by which to exclude SCRs.

    Returns
    -------
    ts : array
        Signal time axis reference (seconds).
    filtered : array
        Filtered EDA signal.
    onsets : array
        Indices of SCR pulse onsets.
    peaks : array
        Indices of the SCR peaks.
    amplitudes : array
        SCR pulse amplitudes.

    """

    # check inputs
    if signal is None:
        raise TypeError("Please specify an input signal.")

    # ensure numpy
    signal = np.array(signal)

    sampling_rate = float(sampling_rate)

    # filter signal
    aux, _, _ = st.filter_signal(
        signal=signal,
        ftype="butter",
        band="lowpass",
        order=4,
        frequency=5,
        sampling_rate=sampling_rate,
    )

    # smooth
    sm_size = int(0.75 * sampling_rate)
    filtered, _ = st.smoother(signal=aux, kernel="boxzen", size=sm_size, mirror=True)

    # get SCR info
    onsets, peaks, amps = kbk_scr(
        signal=filtered, sampling_rate=sampling_rate, min_amplitude=min_amplitude
    )

    # get time vectors
    length = len(signal)
    T = (length - 1) / sampling_rate
    ts = np.linspace(0, T, length, endpoint=True)

    # plot
    if show:
        plotting.plot_eda(
            ts=ts,
            raw=signal,
            filtered=filtered,
            onsets=onsets,
            peaks=peaks,
            amplitudes=amps,
            path=path,
            show=True,
        )

    # output
    args = (ts, filtered, onsets, peaks, amps)
    names = ("ts", "filtered", "onsets", "peaks", "amplitudes")

    return utils.ReturnTuple(args, names)


def basic_scr(signal=None, sampling_rate=1000.0):
    """Basic method to extract Skin Conductivity Responses (SCR) from an
    EDA signal.

    Follows the approach in [Gamb08]_.

    Parameters
    ----------
    signal : array
        Input filtered EDA signal.
    sampling_rate : int, float, optional
        Sampling frequency (Hz).

    Returns
    -------
    onsets : array
        Indices of the SCR onsets.
    peaks : array
        Indices of the SCR peaks.
    amplitudes : array
        SCR pulse amplitudes.

    References
    ----------
    .. [Gamb08] Hugo Gamboa, "Multi-modal Behavioral Biometrics Based on HCI
       and Electrophysiology", PhD thesis, Instituto Superior Técnico, 2008

    """

    # check inputs
    if signal is None:
        raise TypeError("Please specify an input signal.")

    # find extrema
    pi, _ = st.find_extrema(signal=signal, mode="max")
    ni, _ = st.find_extrema(signal=signal, mode="min")

    # sanity check
    if len(pi) == 0 or len(ni) == 0:
        raise ValueError("Could not find SCR pulses.")

    # pair vectors
    if ni[0] > pi[0]:
        ni = ni[1:]
    if pi[-1] < ni[-1]:
        pi = pi[:-1]
    if len(pi) > len(ni):
        pi = pi[:-1]

    li = min(len(pi), len(ni))
    i1 = pi[:li]
    i3 = ni[:li]

    # indices
    i0 = np.array((i1 + i3) / 2.0, dtype=int)
    if i0[0] < 0:
        i0[0] = 0

    # amplitude
    a = signal[i0] - signal[i3]

    # output
    args = (i3, i0, a)
    names = ("onsets", "peaks", "amplitudes")

    return utils.ReturnTuple(args, names)


def kbk_scr(signal=None, sampling_rate=1000.0, min_amplitude=0.1):
    """KBK method to extract Skin Conductivity Responses (SCR) from an
    EDA signal.

    Follows the approach by Kim *et al.* [KiBK04]_.

    Parameters
    ----------
    signal : array
        Input filtered EDA signal.
    sampling_rate : int, float, optional
        Sampling frequency (Hz).
    min_amplitude : float, optional
        Minimum threshold by which to exclude SCRs.

    Returns
    -------
    onsets : array
        Indices of the SCR onsets.
    peaks : array
        Indices of the SCR peaks.
    amplitudes : array
        SCR pulse amplitudes.

    References
    ----------
    .. [KiBK04] K.H. Kim, S.W. Bang, and S.R. Kim, "Emotion recognition
       system using short-term monitoring of physiological signals",
       Med. Biol. Eng. Comput., vol. 42, pp. 419-427, 2004

    """

    # check inputs
    if signal is None:
        raise TypeError("Please specify an input signal.")

    # differentiation
    df = np.diff(signal)

    # smooth
    size = int(1.0 * sampling_rate)
    df, _ = st.smoother(signal=df, kernel="bartlett", size=size, mirror=True)

    # zero crosses
    (zeros,) = st.zero_cross(signal=df, detrend=False)
    if np.all(df[: zeros[0]] > 0):
        zeros = zeros[1:]
    if np.all(df[zeros[-1] :] > 0):
        zeros = zeros[:-1]

    scrs, amps, ZC, pks = [], [], [], []
    for i in range(0, len(zeros) - 1, 2):
        scrs += [df[zeros[i] : zeros[i + 1]]]
        ZC += [zeros[i]]
        ZC += [zeros[i + 1]]
        pks += [zeros[i] + np.argmax(df[zeros[i] : zeros[i + 1]])]
        amps += [signal[pks[-1]] - signal[ZC[-2]]]

    # exclude SCRs with small amplitude
    thr = min_amplitude * np.max(amps)
    idx = np.where(amps > thr)

    scrs = np.array(scrs, dtype=object)[idx]
    amps = np.array(amps)[idx]
    ZC = np.array(ZC)[np.array(idx) * 2]
    pks = np.array(pks, dtype=int)[idx]

    onsets = ZC[0].astype(int)

    # output
    args = (onsets, pks, amps)
    names = ("onsets", "peaks", "amplitudes")

    return utils.ReturnTuple(args, names)
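Note that the `min_amplitude` filter at the end of `kbk_scr` is relative to the largest detected SCR, not an absolute value; a small numpy illustration with made-up amplitudes:

```python
import numpy as np

amps = np.array([0.05, 0.8, 0.3, 1.0, 0.02])  # hypothetical SCR amplitudes
min_amplitude = 0.1
thr = min_amplitude * np.max(amps)  # threshold = 10% of the largest SCR
keep = np.where(amps > thr)[0]
print(keep.tolist())  # [1, 2, 3]
```

Only events whose amplitude exceeds 10% of the largest SCR survive, so the default setting scales with each recording rather than requiring calibrated units.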
BioSPPy/source/biosppy/signals/eeg.py
ADDED
@@ -0,0 +1,475 @@
# -*- coding: utf-8 -*-
"""
biosppy.signals.eeg
-------------------

This module provides methods to process Electroencephalographic (EEG)
signals.

:copyright: (c) 2015-2018 by Instituto de Telecomunicacoes
:license: BSD 3-clause, see LICENSE for more details.
"""

# Imports
# compat
from __future__ import absolute_import, division, print_function
from six.moves import range

# 3rd party
import numpy as np

# local
from . import tools as st
from .. import plotting, utils

def eeg(signal=None, sampling_rate=1000.0, labels=None, path=None, show=True):
    """Process raw EEG signals and extract relevant signal features using
    default parameters.

    Parameters
    ----------
    signal : array
        Raw EEG signal matrix; each column is one EEG channel.
    sampling_rate : int, float, optional
        Sampling frequency (Hz).
    labels : list, optional
        Channel labels.
    path : str, optional
        If provided, the plot will be saved to the specified file.
    show : bool, optional
        If True, show a summary plot.

    Returns
    -------
    ts : array
        Signal time axis reference (seconds).
    filtered : array
        Filtered EEG signal.
    features_ts : array
        Features time axis reference (seconds).
    theta : array
        Average power in the 4 to 8 Hz frequency band; each column is one
        EEG channel.
    alpha_low : array
        Average power in the 8 to 10 Hz frequency band; each column is one
        EEG channel.
    alpha_high : array
        Average power in the 10 to 13 Hz frequency band; each column is one
        EEG channel.
    beta : array
        Average power in the 13 to 25 Hz frequency band; each column is one
        EEG channel.
    gamma : array
        Average power in the 25 to 40 Hz frequency band; each column is one
        EEG channel.
    plf_pairs : list
        PLF pair indices.
    plf : array
        PLF matrix; each column is a channel pair.

    """

    # check inputs
    if signal is None:
        raise TypeError("Please specify an input signal.")

    # ensure numpy
    signal = np.array(signal)
    signal = np.reshape(signal, (signal.shape[0], -1))

    sampling_rate = float(sampling_rate)
    nch = signal.shape[1]

    if labels is None:
        labels = ["Ch. %d" % i for i in range(nch)]
    else:
        if len(labels) != nch:
            raise ValueError(
                "Number of channels mismatch between signal matrix and labels."
            )

    # high pass filter
    b, a = st.get_filter(
        ftype="butter",
        band="highpass",
        order=8,
        frequency=4,
        sampling_rate=sampling_rate,
    )

    aux, _ = st._filter_signal(b, a, signal=signal, check_phase=True, axis=0)

    # low pass filter
    b, a = st.get_filter(
        ftype="butter",
        band="lowpass",
        order=16,
        frequency=40,
        sampling_rate=sampling_rate,
    )

    filtered, _ = st._filter_signal(b, a, signal=aux, check_phase=True, axis=0)

    # band power features
    out = get_power_features(
        signal=filtered, sampling_rate=sampling_rate, size=0.25, overlap=0.5
    )
    ts_feat = out["ts"]
    theta = out["theta"]
    alpha_low = out["alpha_low"]
    alpha_high = out["alpha_high"]
    beta = out["beta"]
    gamma = out["gamma"]

    # PLF requires at least two channels; initialise the PLF variables so
    # that plot_eeg still receives them when the input is single-channel
    plf_pairs = []
    plf = []
    if nch > 1:
        # PLF features
        _, plf_pairs, plf = get_plf_features(
            signal=filtered, sampling_rate=sampling_rate, size=0.25, overlap=0.5
        )

    # get time vectors
    length = len(signal)
    T = (length - 1) / sampling_rate
    ts = np.linspace(0, T, length, endpoint=True)

    # plot
    if show:
        plotting.plot_eeg(
            ts=ts,
            raw=signal,
            filtered=filtered,
            labels=labels,
            features_ts=ts_feat,
            theta=theta,
            alpha_low=alpha_low,
            alpha_high=alpha_high,
            beta=beta,
            gamma=gamma,
            plf_pairs=plf_pairs,
            plf=plf,
            path=path,
            show=True,
        )

    # output
    args = (
        ts,
        filtered,
        ts_feat,
        theta,
        alpha_low,
        alpha_high,
        beta,
        gamma,
        plf_pairs,
        plf,
    )
    names = (
        "ts",
        "filtered",
        "features_ts",
        "theta",
        "alpha_low",
        "alpha_high",
        "beta",
        "gamma",
        "plf_pairs",
        "plf",
    )

    return utils.ReturnTuple(args, names)

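The band-limiting stage above (8th-order Butterworth highpass at 4 Hz followed by a 16th-order lowpass at 40 Hz) can be sketched independently of biosppy's `tools` module. The following is a simplified stand-in using `scipy.signal`; the use of second-order sections with `sosfiltfilt` is an assumption of this sketch (chosen because high-order Butterworth designs are numerically fragile in `(b, a)` form), not the library's exact implementation:

```python
import numpy as np
from scipy import signal as ss


def band_limit_eeg(x, fs=1000.0):
    """Zero-phase band-limit a (samples, channels) array to roughly 4-40 Hz.

    Sketch of the filtering stage of eeg(); SOS form keeps the
    high-order Butterworth filters numerically stable.
    """
    sos_hp = ss.butter(8, 4.0, btype="highpass", fs=fs, output="sos")
    sos_lp = ss.butter(16, 40.0, btype="lowpass", fs=fs, output="sos")
    aux = ss.sosfiltfilt(sos_hp, x, axis=0)
    return ss.sosfiltfilt(sos_lp, aux, axis=0)


# a 1 Hz drift rides on a 10 Hz "alpha-like" component
fs = 1000.0
t = np.arange(0, 4.0, 1.0 / fs)
x = (np.sin(2 * np.pi * 1.0 * t) + np.sin(2 * np.pi * 10.0 * t))[:, None]
y = band_limit_eeg(x, fs=fs)
```

Away from the edges, the 1 Hz drift is strongly attenuated while the 10 Hz component passes essentially unchanged.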
def car_reference(signal=None):
    """Change signal reference to the Common Average Reference (CAR).

    Parameters
    ----------
    signal : array
        Input EEG signal matrix; each column is one EEG channel.

    Returns
    -------
    signal : array
        Re-referenced EEG signal matrix; each column is one EEG channel.

    """

    # check inputs
    if signal is None:
        raise TypeError("Please specify an input signal.")

    length, nch = signal.shape
    avg = np.mean(signal, axis=1)

    out = signal - np.tile(avg.reshape((length, 1)), nch)

    return utils.ReturnTuple((out,), ("signal",))

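The `np.tile` construction in `car_reference` subtracts, at every time sample, the mean across channels; with numpy broadcasting the same re-referencing can be written more compactly. A minimal numpy-only sketch (the `keepdims` form is an equivalent alternative formulation, not the function's own code):

```python
import numpy as np


def car(signal):
    """Common Average Reference: subtract the per-sample mean
    across channels (columns) from every channel."""
    return signal - signal.mean(axis=1, keepdims=True)


rng = np.random.default_rng(0)
x = rng.normal(size=(500, 4))  # (samples, channels)
y = car(x)
```

After CAR, the mean across channels is zero at every sample, and the result matches the `np.tile` formulation exactly.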
def get_power_features(signal=None, sampling_rate=1000.0, size=0.25, overlap=0.5):
    """Extract band power features from EEG signals.

    Computes the average signal power, with overlapping windows, in typical
    EEG frequency bands:
    * Theta: from 4 to 8 Hz,
    * Lower Alpha: from 8 to 10 Hz,
    * Higher Alpha: from 10 to 13 Hz,
    * Beta: from 13 to 25 Hz,
    * Gamma: from 25 to 40 Hz.

    Parameters
    ----------
    signal : array
        Filtered EEG signal matrix; each column is one EEG channel.
    sampling_rate : int, float, optional
        Sampling frequency (Hz).
    size : float, optional
        Window size (seconds).
    overlap : float, optional
        Window overlap (0 to 1).

    Returns
    -------
    ts : array
        Features time axis reference (seconds).
    theta : array
        Average power in the 4 to 8 Hz frequency band; each column is one
        EEG channel.
    alpha_low : array
        Average power in the 8 to 10 Hz frequency band; each column is one
        EEG channel.
    alpha_high : array
        Average power in the 10 to 13 Hz frequency band; each column is one
        EEG channel.
    beta : array
        Average power in the 13 to 25 Hz frequency band; each column is one
        EEG channel.
    gamma : array
        Average power in the 25 to 40 Hz frequency band; each column is one
        EEG channel.

    """

    # check inputs
    if signal is None:
        raise TypeError("Please specify an input signal.")

    # ensure numpy
    signal = np.array(signal)
    nch = signal.shape[1]

    sampling_rate = float(sampling_rate)

    # convert sizes to samples
    size = int(size * sampling_rate)
    step = size - int(overlap * size)

    # padding
    min_pad = 1024
    pad = None
    if size < min_pad:
        pad = min_pad - size

    # frequency bands
    bands = [[4, 8], [8, 10], [10, 13], [13, 25], [25, 40]]
    nb = len(bands)

    # windower
    fcn_kwargs = {"sampling_rate": sampling_rate, "bands": bands, "pad": pad}
    index, values = st.windower(
        signal=signal,
        size=size,
        step=step,
        kernel="hann",
        fcn=_power_features,
        fcn_kwargs=fcn_kwargs,
    )

    # median filter
    md_size = int(0.625 * sampling_rate / float(step))
    if md_size % 2 == 0:
        # must be odd
        md_size += 1

    for i in range(nb):
        for j in range(nch):
            values[:, i, j], _ = st.smoother(
                signal=values[:, i, j], kernel="median", size=md_size
            )

    # extract individual bands
    theta = values[:, 0, :]
    alpha_low = values[:, 1, :]
    alpha_high = values[:, 2, :]
    beta = values[:, 3, :]
    gamma = values[:, 4, :]

    # convert indices to seconds
    ts = index.astype("float") / sampling_rate

    # output
    args = (ts, theta, alpha_low, alpha_high, beta, gamma)
    names = ("ts", "theta", "alpha_low", "alpha_high", "beta", "gamma")

    return utils.ReturnTuple(args, names)

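Per window, the pipeline reduces each channel to one number per band: the average spectral power over that band's frequency range. A numpy-only sketch of that reduction for a single 1-D window, using a plain rFFT periodogram in place of `st.power_spectrum` (an assumption of this sketch; the 11 Hz test tone is made up for illustration):

```python
import numpy as np

BANDS = [[4, 8], [8, 10], [10, 13], [13, 25], [25, 40]]


def band_powers(window, fs=1000.0, bands=BANDS):
    """Average spectral power of a 1-D window in each frequency band."""
    n = len(window)
    power = np.abs(np.fft.rfft(window)) ** 2 / n  # simple periodogram
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return np.array(
        [power[(freqs >= lo) & (freqs <= hi)].mean() for lo, hi in bands]
    )


fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
tone = np.sin(2 * np.pi * 11.0 * t)  # 11 Hz falls in the 10-13 Hz band
p = band_powers(tone, fs=fs)
```

For an 11 Hz tone, the higher-alpha entry (index 2) dominates the five-element output.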
def get_plf_features(signal=None, sampling_rate=1000.0, size=0.25, overlap=0.5):
    """Extract Phase-Locking Factor (PLF) features from EEG signals between
    all channel pairs.

    Parameters
    ----------
    signal : array
        Filtered EEG signal matrix; each column is one EEG channel.
    sampling_rate : int, float, optional
        Sampling frequency (Hz).
    size : float, optional
        Window size (seconds).
    overlap : float, optional
        Window overlap (0 to 1).

    Returns
    -------
    ts : array
        Features time axis reference (seconds).
    plf_pairs : list
        PLF pair indices.
    plf : array
        PLF matrix; each column is a channel pair.

    """

    # check inputs
    if signal is None:
        raise TypeError("Please specify an input signal.")

    # ensure numpy
    signal = np.array(signal)
    nch = signal.shape[1]

    sampling_rate = float(sampling_rate)

    # convert sizes to samples
    size = int(size * sampling_rate)
    step = size - int(overlap * size)

    # padding
    min_pad = 1024
    N = None
    if size < min_pad:
        N = min_pad

    # PLF pairs
    pairs = [(i, j) for i in range(nch) for j in range(i + 1, nch)]
    nb = len(pairs)

    # windower
    fcn_kwargs = {"pairs": pairs, "N": N}
    index, values = st.windower(
        signal=signal,
        size=size,
        step=step,
        kernel="hann",
        fcn=_plf_features,
        fcn_kwargs=fcn_kwargs,
    )

    # median filter
    md_size = int(0.625 * sampling_rate / float(step))
    if md_size % 2 == 0:
        # must be odd
        md_size += 1

    for i in range(nb):
        values[:, i], _ = st.smoother(
            signal=values[:, i], kernel="median", size=md_size
        )

    # convert indices to seconds
    ts = index.astype("float") / sampling_rate

    # output
    args = (ts, pairs, values)
    names = ("ts", "plf_pairs", "plf")

    return utils.ReturnTuple(args, names)

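The quantity behind `st.phase_locking` is the phase-locking factor |mean(exp(j·Δφ))| over the window, where Δφ is the instantaneous phase difference between the two channels obtained from their analytic signals. A scipy/numpy sketch of that idea; the exact normalisation and the handling of the `N` Fourier-components argument in `st.phase_locking` may differ from this simplified version:

```python
import numpy as np
from scipy.signal import hilbert


def plf(s1, s2):
    """Phase-Locking Factor in [0, 1]; 1 means a constant phase lag."""
    dphi = np.angle(hilbert(s1)) - np.angle(hilbert(s2))
    return np.abs(np.mean(np.exp(1j * dphi)))


fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
a = np.sin(2 * np.pi * 10 * t)
b = np.sin(2 * np.pi * 10 * t + 0.7)  # same frequency, fixed phase lag
rng = np.random.default_rng(1)
noise = rng.normal(size=t.size)
locked = plf(a, b)      # near 1: constant phase difference
unlocked = plf(a, noise)  # low: phase difference wanders
```

Two sinusoids with a fixed lag score near 1; a sinusoid against unrelated noise scores much lower.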
def _power_features(signal=None, sampling_rate=1000.0, bands=None, pad=0):
    """Helper function to compute band power features for each window.

    Parameters
    ----------
    signal : array
        Filtered EEG signal matrix; each column is one EEG channel.
    sampling_rate : int, float, optional
        Sampling frequency (Hz).
    bands : list
        List of frequency pairs defining the bands.
    pad : int, optional
        Padding for the Fourier Transform (number of zeros added).

    Returns
    -------
    out : array
        Average power for each band and EEG channel; shape is
        (bands, channels).

    """

    nch = signal.shape[1]

    out = np.zeros((len(bands), nch), dtype="float")
    for i in range(nch):
        # compute power spectrum
        freqs, power = st.power_spectrum(
            signal=signal[:, i],
            sampling_rate=sampling_rate,
            pad=pad,
            pow2=False,
            decibel=False,
        )

        # compute average band power
        for j, b in enumerate(bands):
            (avg,) = st.band_power(freqs=freqs, power=power, frequency=b, decibel=False)
            out[j, i] = avg

    return out

def _plf_features(signal=None, pairs=None, N=None):
    """Helper function to compute PLF features for each window.

    Parameters
    ----------
    signal : array
        Filtered EEG signal matrix; each column is one EEG channel.
    pairs : iterable
        List of signal channel pairs.
    N : int, optional
        Number of Fourier components.

    Returns
    -------
    out : array
        PLF for each channel pair.

    """

    out = np.zeros(len(pairs), dtype="float")
    for i, p in enumerate(pairs):
        # compute PLF
        s1 = signal[:, p[0]]
        s2 = signal[:, p[1]]
        (out[i],) = st.phase_locking(signal1=s1, signal2=s2, N=N)

    return out
BioSPPy/source/biosppy/signals/emg.py
ADDED
|
@@ -0,0 +1,1139 @@
# -*- coding: utf-8 -*-
"""
biosppy.signals.emg
-------------------

This module provides methods to process Electromyographic (EMG) signals.

:copyright: (c) 2015-2018 by Instituto de Telecomunicacoes
:license: BSD 3-clause, see LICENSE for more details.
"""

# Imports
# compat
from __future__ import absolute_import, division, print_function

# 3rd party
import numpy as np

# local
from . import tools as st
from .. import plotting, utils

def emg(signal=None, sampling_rate=1000., path=None, show=True):
    """Process a raw EMG signal and extract relevant signal features using
    default parameters.

    Parameters
    ----------
    signal : array
        Raw EMG signal.
    sampling_rate : int, float, optional
        Sampling frequency (Hz).
    path : str, optional
        If provided, the plot will be saved to the specified file.
    show : bool, optional
        If True, show a summary plot.

    Returns
    -------
    ts : array
        Signal time axis reference (seconds).
    filtered : array
        Filtered EMG signal.
    onsets : array
        Indices of EMG pulse onsets.

    """

    # check inputs
    if signal is None:
        raise TypeError("Please specify an input signal.")

    # ensure numpy
    signal = np.array(signal)

    sampling_rate = float(sampling_rate)

    # filter signal
    filtered, _, _ = st.filter_signal(signal=signal,
                                      ftype='butter',
                                      band='highpass',
                                      order=4,
                                      frequency=100,
                                      sampling_rate=sampling_rate)

    # find onsets
    onsets, = find_onsets(signal=filtered, sampling_rate=sampling_rate)

    # get time vectors
    length = len(signal)
    T = (length - 1) / sampling_rate
    ts = np.linspace(0, T, length, endpoint=True)

    # plot
    if show:
        plotting.plot_emg(ts=ts,
                          sampling_rate=sampling_rate,
                          raw=signal,
                          filtered=filtered,
                          processed=None,
                          onsets=onsets,
                          path=path,
                          show=True)

    # output
    args = (ts, filtered, onsets)
    names = ('ts', 'filtered', 'onsets')

    return utils.ReturnTuple(args, names)

def find_onsets(signal=None, sampling_rate=1000., size=0.05, threshold=None):
    """Determine onsets of EMG pulses.

    Skips corrupted signal parts.

    Parameters
    ----------
    signal : array
        Input filtered EMG signal.
    sampling_rate : int, float, optional
        Sampling frequency (Hz).
    size : float, optional
        Detection window size (seconds).
    threshold : float, optional
        Detection threshold.

    Returns
    -------
    onsets : array
        Indices of EMG pulse onsets.

    """

    # check inputs
    if signal is None:
        raise TypeError("Please specify an input signal.")

    # full-wave rectification
    fwlo = np.abs(signal)

    # smooth
    size = int(sampling_rate * size)
    mvgav, _ = st.smoother(signal=fwlo,
                           kernel='boxzen',
                           size=size,
                           mirror=True)

    # threshold
    if threshold is None:
        aux = np.abs(mvgav)
        threshold = 1.2 * np.mean(aux) + 2.0 * np.std(aux, ddof=1)

    # find onsets
    length = len(signal)
    start = np.nonzero(mvgav > threshold)[0]
    stop = np.nonzero(mvgav <= threshold)[0]

    onsets = np.union1d(np.intersect1d(start - 1, stop),
                        np.intersect1d(start + 1, stop))

    if np.any(onsets):
        if onsets[-1] >= length:
            onsets[-1] = length - 1

    return utils.ReturnTuple((onsets,), ('onsets',))

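The detection logic of `find_onsets` boils down to: rectify, smooth into an amplitude envelope, derive a data-driven threshold of 1.2·mean + 2·std of the envelope, and report the samples where the envelope crosses that level. A numpy-only sketch with a plain moving average standing in for the 'boxzen' smoother (an assumption of this sketch); the burst placement and amplitudes are made up for illustration:

```python
import numpy as np


def envelope_onsets(x, win=25):
    """Rectify, smooth, auto-threshold, and return crossing indices."""
    env = np.convolve(np.abs(x), np.ones(win) / win, mode="same")
    thr = 1.2 * env.mean() + 2.0 * env.std(ddof=1)
    above = env > thr
    # indices where the envelope state flips (onset or offset)
    crossings = np.nonzero(np.diff(above.astype(int)) != 0)[0] + 1
    return crossings, thr


rng = np.random.default_rng(2)
fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)            # 2000 samples
x = 0.01 * rng.normal(size=t.size)         # baseline noise
x[1000:1100] += np.sin(2 * np.pi * 50 * t[1000:1100])  # short 50 Hz burst
crossings, thr = envelope_onsets(x)
```

The crossings bracket the burst; note that because the threshold mixes mean and standard deviation of the whole envelope, very long bursts push the threshold up and can suppress detection, which is why short activation windows suit this rule best.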
def hodges_bui_onset_detector(signal=None, rest=None, sampling_rate=1000.,
                              size=None, threshold=None):
    """Determine onsets of EMG pulses.

    Follows the approach by Hodges and Bui [HoBu96]_.

    Parameters
    ----------
    signal : array
        Input filtered EMG signal.
    rest : array, list, dict
        One of the following 3 options:
        * N-dimensional array with filtered samples corresponding to a
        rest period;
        * 2D array or list with the beginning and end indices of a segment of
        the signal corresponding to a rest period;
        * Dictionary with {'mean': mean value, 'std_dev': standard deviation}.
    sampling_rate : int, float, optional
        Sampling frequency (Hz).
    size : int
        Detection window size (samples).
    threshold : int, float
        Detection threshold.

    Returns
    -------
    onsets : array
        Indices of EMG pulse onsets.
    processed : array
        Processed EMG signal.

    References
    ----------
    .. [HoBu96] Hodges PW, Bui BH, "A comparison of computer-based methods for
       the determination of onset of muscle contraction using
       electromyography", Electroencephalography and Clinical Neurophysiology
       - Electromyography and Motor Control, vol. 101:6, pp. 511-519, 1996

    """

    # check inputs
    if signal is None:
        raise TypeError("Please specify an input signal.")

    if rest is None:
        raise TypeError("Please specify rest parameters.")

    if size is None:
        raise TypeError("Please specify the detection window size.")

    if threshold is None:
        raise TypeError("Please specify the detection threshold.")

    # gather statistics on rest signal
    if isinstance(rest, np.ndarray) or isinstance(rest, list):
        # if the input parameter is a numpy array or a list
        if len(rest) >= 2:
            # first ensure numpy
            rest = np.array(rest)
            if len(rest) == 2:
                # the rest signal is a segment of the signal
                rest_signal = signal[rest[0]:rest[1]]
            else:
                # the rest signal is provided as is
                rest_signal = rest
            rest_zero_mean = rest_signal - np.mean(rest_signal)
            statistics = st.signal_stats(signal=rest_zero_mean)
            mean_rest = statistics['mean']
            std_dev_rest = statistics['std_dev']
        else:
            raise TypeError("Please specify the rest analysis.")
    elif isinstance(rest, dict):
        # if the input is a dictionary
        mean_rest = rest['mean']
        std_dev_rest = rest['std_dev']
    else:
        raise TypeError("Please specify the rest analysis.")

    # subtract baseline offset
    signal_zero_mean = signal - np.mean(signal)

    # full-wave rectification
    fwlo = np.abs(signal_zero_mean)

    # moving average
    mvgav = np.convolve(fwlo, np.ones((size,)) / size, mode='valid')

    # calculate the test function
    tf = (1 / std_dev_rest) * (mvgav - mean_rest)

    # find onsets
    length = len(signal)
    start = np.nonzero(tf >= threshold)[0]
    stop = np.nonzero(tf < threshold)[0]

    onsets = np.union1d(np.intersect1d(start - 1, stop),
                        np.intersect1d(start + 1, stop))

    # adjust indices because of moving average
    onsets += int(size / 2)

    if np.any(onsets):
        if onsets[-1] >= length:
            onsets[-1] = length - 1

    return utils.ReturnTuple((onsets, tf), ('onsets', 'processed'))

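The core of the Hodges–Bui detector is the test function tf = (mvgav − μ_rest) / σ_rest: the smoothed, rectified signal expressed in units of the rest-period standard deviation, so that `threshold` is chosen on a z-score-like scale rather than in raw signal units. A numpy-only sketch of that computation; the rest statistics, window size, and synthetic burst are illustrative assumptions:

```python
import numpy as np


def hodges_bui_tf(x, rest, size=25):
    """Test function: smoothed rectified signal, standardised by
    the mean and standard deviation of a zero-meaned rest segment."""
    rest0 = rest - rest.mean()
    mu, sd = rest0.mean(), rest0.std(ddof=1)  # mu is ~0 by construction
    fwlo = np.abs(x - x.mean())               # rectify after baseline removal
    mvgav = np.convolve(fwlo, np.ones(size) / size, mode="valid")
    return (mvgav - mu) / sd


rng = np.random.default_rng(3)
rest = 0.1 * rng.normal(size=500)           # quiet baseline recording
x = 0.1 * rng.normal(size=2000)
x[800:1200] = rng.normal(size=400)          # active burst, 10x the rest std
tf = hodges_bui_tf(x, rest)
```

During rest, tf hovers near the rectified-noise level; during activation it jumps by roughly the amplitude ratio, so a threshold of a few standard deviations separates the two states cleanly.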
| 258 |
+
def bonato_onset_detector(signal=None, rest=None, sampling_rate=1000.,
                          threshold=None, active_state_duration=None,
                          samples_above_fail=None, fail_size=None):
    """Determine onsets of EMG pulses.

    Follows the approach by Bonato et al. [Bo98]_.

    Parameters
    ----------
    signal : array
        Input filtered EMG signal.
    rest : array, list, dict
        One of the following 3 options:
        * N-dimensional array with filtered samples corresponding to a
          rest period;
        * 2D array or list with the beginning and end indices of a segment
          of the signal corresponding to a rest period;
        * Dictionary with {'var': variance}.
    sampling_rate : int, float, optional
        Sampling frequency (Hz).
    threshold : int, float
        Detection threshold.
    active_state_duration : int
        Minimum duration of the active state.
    samples_above_fail : int
        Number of samples above the threshold level in a group of successive
        samples.
    fail_size : int
        Number of successive samples.

    Returns
    -------
    onsets : array
        Indices of EMG pulse onsets.
    processed : array
        Processed EMG signal.

    References
    ----------
    .. [Bo98] Bonato P, D'Alessio T, Knaflitz M, "A statistical method for the
       measurement of muscle activation intervals from surface myoelectric
       signal during gait", IEEE Transactions on Biomedical Engineering,
       vol. 45:3, pp. 287–299, 1998

    """

    # check inputs
    if signal is None:
        raise TypeError("Please specify an input signal.")

    if rest is None:
        raise TypeError("Please specify rest parameters.")

    if threshold is None:
        raise TypeError("Please specify the detection threshold.")

    if active_state_duration is None:
        raise TypeError("Please specify the minimum duration of the "
                        "active state.")

    if samples_above_fail is None:
        raise TypeError("Please specify the number of samples above the "
                        "threshold level in a group of successive samples.")

    if fail_size is None:
        raise TypeError("Please specify the number of successive samples.")

    # gather statistics on rest signal
    if isinstance(rest, np.ndarray) or isinstance(rest, list):
        # if the input parameter is a numpy array or a list
        if len(rest) >= 2:
            # first ensure numpy
            rest = np.array(rest)
            if len(rest) == 2:
                # the rest signal is a segment of the signal
                rest_signal = signal[rest[0]:rest[1]]
            else:
                # the rest signal is provided as is
                rest_signal = rest
            rest_zero_mean = rest_signal - np.mean(rest_signal)
            statistics = st.signal_stats(signal=rest_zero_mean)
            var_rest = statistics['var']
        else:
            raise TypeError("Please specify the rest analysis.")
    elif isinstance(rest, dict):
        # if the input is a dictionary
        var_rest = rest['var']
    else:
        raise TypeError("Please specify the rest analysis.")

    # subtract baseline offset
    signal_zero_mean = signal - np.mean(signal)

    tf_list = []
    onset_time_list = []
    offset_time_list = []
    alarm_time = 0
    state_duration = 0
    j = 0
    n = 0
    onset = False
    alarm = False
    for k in range(1, len(signal_zero_mean), 2):  # odd values only
        # calculate the test function
        tf = (1 / var_rest) * (signal_zero_mean[k-1]**2 + signal_zero_mean[k]**2)
        tf_list.append(tf)
        if onset is True:
            if alarm is False:
                if tf < threshold:
                    alarm_time = k // 2
                    alarm = True
            else:  # now we have to check for the remaining rule to be met - duration of the inactive state
                if tf < threshold:
                    state_duration += 1
                    if j > 0:  # there were one (or more) samples above the threshold level but now one is below it
                        # the test function may go above the threshold, but each time not longer than j samples
                        n += 1
                        if n == samples_above_fail:
                            n = 0
                            j = 0
                    if state_duration == active_state_duration:
                        offset_time_list.append(alarm_time)
                        onset = False
                        alarm = False
                        n = 0
                        j = 0
                        state_duration = 0
                else:  # sample rises above the threshold level
                    j += 1
                    if j > fail_size:
                        # the inactive state is above the threshold for longer than the predefined number of samples
                        alarm = False
                        n = 0
                        j = 0
                        state_duration = 0
        else:  # we only look for another onset if a previous offset was detected
            if alarm is False:  # if the alarm time has not yet been identified
                if tf >= threshold:  # alarm time
                    alarm_time = k // 2
                    alarm = True
            else:  # now we have to check for the remaining rule to be met - duration of the active state
                if tf >= threshold:
                    state_duration += 1
                    if j > 0:  # there were one (or more) samples below the threshold level but now one is above it;
                        # a total of n samples must be above it
                        n += 1
                        if n == samples_above_fail:
                            n = 0
                            j = 0
                    if state_duration == active_state_duration:
                        onset_time_list.append(alarm_time)
                        onset = True
                        alarm = False
                        n = 0
                        j = 0
                        state_duration = 0
                else:  # sample falls below the threshold level
                    j += 1
                    if j > fail_size:
                        # the active state has fallen below the threshold for longer than the predefined number of samples
                        alarm = False
                        n = 0
                        j = 0
                        state_duration = 0

    onsets = np.union1d(onset_time_list,
                        offset_time_list)

    # adjust indices because of odd numbers
    onsets *= 2

    return utils.ReturnTuple((onsets, tf_list), ('onsets', 'processed'))

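Bonato's test function normalizes the summed energy of successive odd-indexed sample pairs by the rest-period variance. A minimal pure-Python sketch of just that statistic (toy values; `bonato_tf` is a hypothetical helper, not the BioSPPy API):

```python
import statistics

def bonato_tf(signal, rest):
    """Bonato-style test function over successive odd-indexed sample pairs."""
    var_rest = statistics.pvariance(rest)  # population variance of the rest segment
    return [(signal[k - 1] ** 2 + signal[k] ** 2) / var_rest
            for k in range(1, len(signal), 2)]

rest = [0.0, 1.0, -1.0, 0.5, -0.5]          # toy zero-mean rest segment
signal = [0.5, -0.5, 3.0, -3.0, 0.25, 0.25]  # toy zero-mean EMG samples
tf = bonato_tf(signal, rest)  # [1.0, 36.0, 0.25]
```

Pairs drawn from an active burst (here the middle pair) score far above the rest-level variance, which is what the threshold test in the loop exploits.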
def lidierth_onset_detector(signal=None, rest=None, sampling_rate=1000.,
                            size=None, threshold=None,
                            active_state_duration=None, fail_size=None):
    """Determine onsets of EMG pulses.

    Follows the approach by Lidierth [Li86]_.

    Parameters
    ----------
    signal : array
        Input filtered EMG signal.
    rest : array, list, dict
        One of the following 3 options:
        * N-dimensional array with filtered samples corresponding to a
          rest period;
        * 2D array or list with the beginning and end indices of a segment
          of the signal corresponding to a rest period;
        * Dictionary with {'mean': mean value, 'std_dev': standard deviation}.
    sampling_rate : int, float, optional
        Sampling frequency (Hz).
    size : int
        Detection window size (seconds).
    threshold : int, float
        Detection threshold.
    active_state_duration : int
        Minimum duration of the active state.
    fail_size : int
        Number of successive samples.

    Returns
    -------
    onsets : array
        Indices of EMG pulse onsets.
    processed : array
        Processed EMG signal.

    References
    ----------
    .. [Li86] Lidierth M, "A computer based method for automated measurement
       of the periods of muscular activity from an EMG and its application to
       locomotor EMGs", Electroenceph Clin Neurophysiol, vol. 64:4,
       pp. 378–380, 1986

    """

    # check inputs
    if signal is None:
        raise TypeError("Please specify an input signal.")

    if rest is None:
        raise TypeError("Please specify rest parameters.")

    if size is None:
        raise TypeError("Please specify the detection window size.")

    if threshold is None:
        raise TypeError("Please specify the detection threshold.")

    if active_state_duration is None:
        raise TypeError("Please specify the minimum duration of the "
                        "active state.")

    if fail_size is None:
        raise TypeError("Please specify the number of successive samples.")

    # gather statistics on rest signal
    if isinstance(rest, np.ndarray) or isinstance(rest, list):
        # if the input parameter is a numpy array or a list
        if len(rest) >= 2:
            # first ensure numpy
            rest = np.array(rest)
            if len(rest) == 2:
                # the rest signal is a segment of the signal
                rest_signal = signal[rest[0]:rest[1]]
            else:
                # the rest signal is provided as is
                rest_signal = rest
            rest_zero_mean = rest_signal - np.mean(rest_signal)
            statistics = st.signal_stats(signal=rest_zero_mean)
            mean_rest = statistics['mean']
            std_dev_rest = statistics['std_dev']
        else:
            raise TypeError("Please specify the rest analysis.")
    elif isinstance(rest, dict):
        # if the input is a dictionary
        mean_rest = rest['mean']
        std_dev_rest = rest['std_dev']
    else:
        raise TypeError("Please specify the rest analysis.")

    # subtract baseline offset
    signal_zero_mean = signal - np.mean(signal)

    # full-wave rectification
    fwlo = np.abs(signal_zero_mean)

    # moving average
    mvgav = np.convolve(fwlo, np.ones((size,)) / size, mode='valid')

    # calculate the test function
    tf = (1 / std_dev_rest) * (mvgav - mean_rest)

    onset_time_list = []
    offset_time_list = []
    alarm_time = 0
    state_duration = 0
    j = 0
    onset = False
    alarm = False
    for k in range(0, len(tf)):
        if onset is True:
            # an onset was previously detected and we are looking for the offset time, applying the same criteria
            if alarm is False:  # if the alarm time has not yet been identified
                if tf[k] < threshold:  # alarm time
                    alarm_time = k
                    alarm = True
            else:  # now we have to check for the remaining rule to be met - duration of the inactive state
                if tf[k] < threshold:
                    state_duration += 1
                    if j > 0:  # there were one (or more) samples above the threshold level but now one is below it
                        # the test function may go above the threshold, but each time not longer than j samples
                        j = 0
                    if state_duration == active_state_duration:
                        offset_time_list.append(alarm_time)
                        onset = False
                        alarm = False
                        j = 0
                        state_duration = 0
                else:  # sample rises above the threshold level
                    j += 1
                    if j > fail_size:
                        # the inactive state is above the threshold for longer than the predefined number of samples
                        alarm = False
                        j = 0
                        state_duration = 0
        else:  # we only look for another onset if a previous offset was detected
            if alarm is False:  # if the alarm time has not yet been identified
                if tf[k] >= threshold:  # alarm time
                    alarm_time = k
                    alarm = True
            else:  # now we have to check for the remaining rule to be met - duration of the active state
                if tf[k] >= threshold:
                    state_duration += 1
                    if j > 0:  # there were one (or more) samples below the threshold level but now one is above it
                        # the test function may repeatedly fall below the threshold, but each time not longer than j samples
                        j = 0
                    if state_duration == active_state_duration:
                        onset_time_list.append(alarm_time)
                        onset = True
                        alarm = False
                        j = 0
                        state_duration = 0
                else:  # sample falls below the threshold level
                    j += 1
                    if j > fail_size:
                        # the active state has fallen below the threshold for longer than the predefined number of samples
                        alarm = False
                        j = 0
                        state_duration = 0

    onsets = np.union1d(onset_time_list,
                        offset_time_list)

    # adjust indices because of moving average
    onsets += int(size / 2)

    return utils.ReturnTuple((onsets, tf), ('onsets', 'processed'))

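The preprocessing pipeline above (full-wave rectification, flat moving average, standardization against rest statistics) can be sketched without NumPy. This is a toy illustration with hypothetical helper names, not the BioSPPy API:

```python
def moving_average(x, size):
    """'valid'-mode moving average, like np.convolve with a flat kernel."""
    return [sum(x[i:i + size]) / size for i in range(len(x) - size + 1)]

def lidierth_tf(signal, mean_rest, std_dev_rest, size):
    """Rectify, smooth, then standardize against rest-period statistics."""
    fwlo = [abs(s) for s in signal]       # full-wave rectification
    mvgav = moving_average(fwlo, size)    # smoothing window
    return [(m - mean_rest) / std_dev_rest for m in mvgav]

tf = lidierth_tf([1.0, -1.0, 2.0, -2.0], mean_rest=0.0,
                 std_dev_rest=0.5, size=2)  # [2.0, 3.0, 4.0]
```

Because the 'valid' moving average shortens the series by `size - 1` samples, detected indices must be shifted by about `size / 2` afterwards, which is what the `onsets += int(size / 2)` adjustment does.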
def abbink_onset_detector(signal=None, rest=None, sampling_rate=1000.,
                          size=None, alarm_size=None, threshold=None,
                          transition_threshold=None):
    """Determine onsets of EMG pulses.

    Follows the approach by Abbink et al. [Abb98]_.

    Parameters
    ----------
    signal : array
        Input filtered EMG signal.
    rest : array, list, dict
        One of the following 3 options:
        * N-dimensional array with filtered samples corresponding to a
          rest period;
        * 2D array or list with the beginning and end indices of a segment
          of the signal corresponding to a rest period;
        * Dictionary with {'mean': mean value, 'std_dev': standard deviation}.
    sampling_rate : int, float, optional
        Sampling frequency (Hz).
    size : int
        Detection window size (seconds).
    alarm_size : int
        Number of amplitudes searched in the calculation of the transition
        index.
    threshold : int, float
        Detection threshold.
    transition_threshold : int, float
        Threshold used in the calculation of the transition index.

    Returns
    -------
    onsets : array
        Indices of EMG pulse onsets.
    processed : array
        Processed EMG signal.

    References
    ----------
    .. [Abb98] Abbink JH, van der Bilt A, van der Glas HW, "Detection of onset
       and termination of muscle activity in surface electromyograms",
       Journal of Oral Rehabilitation, vol. 25, pp. 365–369, 1998

    """

    # check inputs
    if signal is None:
        raise TypeError("Please specify an input signal.")

    if rest is None:
        raise TypeError("Please specify rest parameters.")

    if size is None:
        raise TypeError("Please specify the detection window size.")

    if alarm_size is None:
        raise TypeError("Please specify the number of amplitudes searched in "
                        "the calculation of the transition index.")

    if threshold is None:
        raise TypeError("Please specify the detection threshold.")

    if transition_threshold is None:
        raise TypeError("Please specify the second threshold.")

    # gather statistics on rest signal
    if isinstance(rest, np.ndarray) or isinstance(rest, list):
        # if the input parameter is a numpy array or a list
        if len(rest) >= 2:
            # first ensure numpy
            rest = np.array(rest)
            if len(rest) == 2:
                # the rest signal is a segment of the signal
                rest_signal = signal[rest[0]:rest[1]]
            else:
                # the rest signal is provided as is
                rest_signal = rest
            rest_zero_mean = rest_signal - np.mean(rest_signal)
            statistics = st.signal_stats(signal=rest_zero_mean)
            mean_rest = statistics['mean']
            std_dev_rest = statistics['std_dev']
        else:
            raise TypeError("Please specify the rest analysis.")
    elif isinstance(rest, dict):
        # if the input is a dictionary
        mean_rest = rest['mean']
        std_dev_rest = rest['std_dev']
    else:
        raise TypeError("Please specify the rest analysis.")

    # subtract baseline offset
    signal_zero_mean = signal - np.mean(signal)

    # full-wave rectification
    fwlo = np.abs(signal_zero_mean)

    # moving average
    mvgav = np.convolve(fwlo, np.ones((size,)) / size, mode='valid')

    # calculate the test function
    tf = (1 / std_dev_rest) * (mvgav - mean_rest)

    # additional filter
    filtered_tf, _, _ = st.filter_signal(signal=tf,
                                         ftype='butter',
                                         band='lowpass',
                                         order=10,
                                         frequency=30,
                                         sampling_rate=sampling_rate)
    # convert from numpy array to list to use list comprehensions
    filtered_tf = filtered_tf.tolist()

    onset_time_list = []
    offset_time_list = []
    alarm_time = 0
    onset = False
    alarm = False
    for k in range(0, len(tf)):
        if onset is True:
            # an onset was previously detected and we are looking for the offset time, applying the same criteria
            if alarm is False:
                if filtered_tf[k] < threshold:
                    # the first index of the sliding window is used as an estimate for the onset time (simple post-processor)
                    alarm_time = k
                    alarm = True
            else:
                # if alarm_time > alarm_window_size and len(emg_conditioned_list) == (alarm_time + alarm_window_size + 1):
                if alarm_time > alarm_size and k == (alarm_time + alarm_size + 1):
                    transition_indices = []
                    for j in range(alarm_size, alarm_time):
                        low_list = [filtered_tf[j-alarm_size+a] for a in range(1, alarm_size+1)]
                        low = sum(i < transition_threshold for i in low_list)
                        high_list = [filtered_tf[j+b] for b in range(1, alarm_size+1)]
                        high = sum(i > transition_threshold for i in high_list)
                        transition_indices.append(low + high)
                    offset_time_list = np.where(transition_indices == np.amin(transition_indices))[0].tolist()
                    onset = False
                    alarm = False
        else:  # we only look for another onset if a previous offset was detected
            if alarm is False:
                if filtered_tf[k] >= threshold:
                    # the first index of the sliding window is used as an estimate for the onset time (simple post-processor)
                    alarm_time = k
                    alarm = True
            else:
                # if alarm_time > alarm_window_size and len(emg_conditioned_list) == (alarm_time + alarm_window_size + 1):
                if alarm_time > alarm_size and k == (alarm_time + alarm_size + 1):
                    transition_indices = []
                    for j in range(alarm_size, alarm_time):
                        low_list = [filtered_tf[j-alarm_size+a] for a in range(1, alarm_size+1)]
                        low = sum(i < transition_threshold for i in low_list)
                        high_list = [filtered_tf[j+b] for b in range(1, alarm_size+1)]
                        high = sum(i > transition_threshold for i in high_list)
                        transition_indices.append(low + high)
                    onset_time_list = np.where(transition_indices == np.amax(transition_indices))[0].tolist()
                    onset = True
                    alarm = False

    onsets = np.union1d(onset_time_list,
                        offset_time_list)

    # adjust indices because of moving average
    onsets += int(size / 2)

    return utils.ReturnTuple((onsets, filtered_tf), ('onsets', 'processed'))

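Abbink's transition index scores each candidate index by counting how many of the preceding samples sit below the transition threshold and how many of the following samples sit above it; a clean rest-to-activity transition maximizes that count. A minimal pure-Python sketch of the per-index score (toy values; `transition_index` is a hypothetical helper, not the BioSPPy API):

```python
def transition_index(tf, j, half_window, thr):
    """Abbink-style transition count around index j."""
    low = sum(tf[j - half_window + a] < thr for a in range(1, half_window + 1))
    high = sum(tf[j + b] > thr for b in range(1, half_window + 1))
    return low + high

tf = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]  # toy test function with a step at index 3
scores = [transition_index(tf, j, 2, 0.5) for j in (2, 3)]  # [4, 3]
```

Index 2 scores the maximum possible (2 low + 2 high), so it is selected as the onset estimate; for offsets the same score is minimized instead.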
def solnik_onset_detector(signal=None, rest=None, sampling_rate=1000.,
                          threshold=None, active_state_duration=None):
    """Determine onsets of EMG pulses.

    Follows the approach by Solnik et al. [Sol10]_.

    Parameters
    ----------
    signal : array
        Input filtered EMG signal.
    rest : array, list, dict
        One of the following 3 options:
        * N-dimensional array with filtered samples corresponding to a
          rest period;
        * 2D array or list with the beginning and end indices of a segment
          of the signal corresponding to a rest period;
        * Dictionary with {'mean': mean value, 'std_dev': standard deviation}.
    sampling_rate : int, float, optional
        Sampling frequency (Hz).
    threshold : int, float
        Scale factor for calculating the detection threshold.
    active_state_duration : int
        Minimum duration of the active state.

    Returns
    -------
    onsets : array
        Indices of EMG pulse onsets.
    processed : array
        Processed EMG signal.

    References
    ----------
    .. [Sol10] Solnik S, Rider P, Steinweg K, DeVita P, Hortobágyi T,
       "Teager-Kaiser energy operator signal conditioning improves EMG onset
       detection", European Journal of Applied Physiology, vol. 110:3,
       pp. 489–498, 2010

    """

    # check inputs
    if signal is None:
        raise TypeError("Please specify an input signal.")

    if rest is None:
        raise TypeError("Please specify rest parameters.")

    if threshold is None:
        raise TypeError("Please specify the scale factor for calculating the "
                        "detection threshold.")

    if active_state_duration is None:
        raise TypeError("Please specify the minimum duration of the "
                        "active state.")

    # gather statistics on rest signal
    if isinstance(rest, np.ndarray) or isinstance(rest, list):
        # if the input parameter is a numpy array or a list
        if len(rest) >= 2:
            # first ensure numpy
            rest = np.array(rest)
            if len(rest) == 2:
                # the rest signal is a segment of the signal
                rest_signal = signal[rest[0]:rest[1]]
            else:
                # the rest signal is provided as is
                rest_signal = rest
            rest_zero_mean = rest_signal - np.mean(rest_signal)
            statistics = st.signal_stats(signal=rest_zero_mean)
            mean_rest = statistics['mean']
            std_dev_rest = statistics['std_dev']
        else:
            raise TypeError("Please specify the rest analysis.")
    elif isinstance(rest, dict):
        # if the input is a dictionary
        mean_rest = rest['mean']
        std_dev_rest = rest['std_dev']
    else:
        raise TypeError("Please specify the rest analysis.")

    # subtract baseline offset
    signal_zero_mean = signal - np.mean(signal)

    # calculate threshold
    threshold = mean_rest + threshold * std_dev_rest

    tf_list = []
    onset_time_list = []
    offset_time_list = []
    alarm_time = 0
    state_duration = 0
    onset = False
    alarm = False
    for k in range(1, len(signal_zero_mean)-1):
        # calculate the test function
        # Teager-Kaiser energy operator
        tf = signal_zero_mean[k]**2 - signal_zero_mean[k+1] * signal_zero_mean[k-1]
        # full-wave rectification
        tf = np.abs(tf)
        tf_list.append(tf)
        if onset is True:
            # an onset was previously detected and we are looking for the offset time, applying the same criteria
            if alarm is False:  # if the alarm time has not yet been identified
                if tf < threshold:  # alarm time
                    alarm_time = k
                    alarm = True
            else:  # now we have to check for the remaining rule to be met - duration of the inactive state
                if tf < threshold:
                    state_duration += 1
                    if state_duration == active_state_duration:
                        offset_time_list.append(alarm_time)
                        onset = False
                        alarm = False
                        state_duration = 0
        else:  # we only look for another onset if a previous offset was detected
            if alarm is False:  # if the alarm time has not yet been identified
                if tf >= threshold:  # alarm time
                    alarm_time = k
                    alarm = True
            else:  # now we have to check for the remaining rule to be met - duration of the active state
                if tf >= threshold:
                    state_duration += 1
                    if state_duration == active_state_duration:
                        onset_time_list.append(alarm_time)
                        onset = True
                        alarm = False
                        state_duration = 0

    onsets = np.union1d(onset_time_list,
                        offset_time_list)

    return utils.ReturnTuple((onsets, tf_list), ('onsets', 'processed'))

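The core of the Solnik detector is the Teager-Kaiser energy operator, psi[k] = x[k]^2 - x[k+1]*x[k-1], followed by full-wave rectification. A minimal pure-Python sketch (toy values; `tkeo` is a hypothetical helper, not the BioSPPy API):

```python
def tkeo(x):
    """Rectified Teager-Kaiser energy operator of a sample sequence."""
    return [abs(x[k] ** 2 - x[k + 1] * x[k - 1])
            for k in range(1, len(x) - 1)]

# a unit-amplitude oscillation yields a constant energy estimate
energy = tkeo([0, 1, 0, -1, 0])  # [1, 1, 1]
```

Because the operator tracks instantaneous amplitude and frequency jointly, EMG bursts stand out more sharply against baseline noise than with amplitude alone, which is the conditioning benefit the reference reports.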
def silva_onset_detector(signal=None, sampling_rate=1000.,
                         size=None, threshold_size=None, threshold=None):
    """Determine onsets of EMG pulses.

    Follows the approach by Silva et al. [Sil12]_.

    Parameters
    ----------
    signal : array
        Input filtered EMG signal.
    sampling_rate : int, float, optional
        Sampling frequency (Hz).
    size : int
        Detection window size (seconds).
    threshold_size : int
        Window size for calculation of the adaptive threshold; must be bigger
        than the detection window size.
    threshold : int, float
        Fixed threshold for the double criteria.

    Returns
    -------
    onsets : array
        Indices of EMG pulse onsets.
    processed : array
        Processed EMG signal.

    References
    ----------
    .. [Sil12] Silva H, Scherer R, Sousa J, Londral A, "Towards improving the
       usability of electromyographic interfaces", Journal of Oral
       Rehabilitation, pp. 1–2, 2012

    """

    # check inputs
    if signal is None:
        raise TypeError("Please specify an input signal.")

    if size is None:
        raise TypeError("Please specify the detection window size.")

    if threshold_size is None:
        raise TypeError("Please specify the window size for calculation of "
                        "the adaptive threshold.")

    if threshold_size <= size:
        raise TypeError("The window size for calculation of the adaptive "
                        "threshold must be bigger than the detection "
                        "window size.")

    if threshold is None:
        raise TypeError("Please specify the fixed threshold for the "
                        "double criteria.")

    # subtract baseline offset
    signal_zero_mean = signal - np.mean(signal)

    # full-wave rectification
    fwlo = np.abs(signal_zero_mean)

    # moving average for calculating the test function
    tf_mvgav = np.convolve(fwlo, np.ones((size,)) / size, mode='valid')

    # moving average for calculating the adaptive threshold
    threshold_mvgav = np.convolve(fwlo, np.ones((threshold_size,)) / threshold_size, mode='valid')

    onset_time_list = []
    offset_time_list = []
    onset = False
    for k in range(0, len(threshold_mvgav)):
        if onset is True:
            # an onset was previously detected and we are looking for the offset time, applying the same criteria
            if tf_mvgav[k] < threshold_mvgav[k] and tf_mvgav[k] < threshold:
                offset_time_list.append(k)
                onset = False  # the offset has been detected, and we can look for another activation
        else:  # we only look for another onset if a previous offset was detected
            if tf_mvgav[k] >= threshold_mvgav[k] and tf_mvgav[k] >= threshold:
                # the first index of the sliding window is used as an estimate for the onset time (simple post-processor)
                onset_time_list.append(k)
                onset = True

    onsets = np.union1d(onset_time_list,
                        offset_time_list)

    # adjust indices because of moving average
    onsets += int(size / 2)

    return utils.ReturnTuple((onsets, tf_mvgav), ('onsets', 'processed'))

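Silva's double criterion declares a sample active only when the short-window average exceeds both a longer-window adaptive threshold and a fixed threshold. A minimal pure-Python sketch of that test (toy values; helper names are hypothetical, not the BioSPPy API):

```python
def moving_average(x, size):
    """'valid'-mode moving average with a flat kernel."""
    return [sum(x[i:i + size]) / size for i in range(len(x) - size + 1)]

def double_threshold(signal, size, threshold_size, fixed):
    """Short-window average vs. adaptive (long-window) AND fixed thresholds."""
    fwlo = [abs(s) for s in signal]                  # full-wave rectification
    tf = moving_average(fwlo, size)                  # test function
    adaptive = moving_average(fwlo, threshold_size)  # adaptive threshold
    return [tf[k] >= adaptive[k] and tf[k] >= fixed
            for k in range(len(adaptive))]

active = double_threshold([0, 0, 0, 0, 4, 4, 4, 4],
                          size=2, threshold_size=4, fixed=1.0)
# [False, False, False, False, True]
```

The short window reacts quickly to a burst while the long window lags, so their crossing marks the activation; the fixed threshold suppresses spurious crossings at very low amplitudes.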
def londral_onset_detector(signal=None, rest=None, sampling_rate=1000.,
|
| 994 |
+
size=None, threshold=None,
|
| 995 |
+
active_state_duration=None):
|
| 996 |
+
"""Determine onsets of EMG pulses.
|
| 997 |
+
|
| 998 |
+
Follows the approach by Londral et al. [Lon13]_.
|
| 999 |
+
|
| 1000 |
+
Parameters
|
| 1001 |
+
----------
|
| 1002 |
+
signal : array
|
| 1003 |
+
Input filtered EMG signal.
|
| 1004 |
+
rest : array, list, dict
|
| 1005 |
+
One of the following 3 options:
|
| 1006 |
+
* N-dimensional array with filtered samples corresponding to a
|
| 1007 |
+
rest period;
|
| 1008 |
+
* 2D array or list with the beginning and end indices of a segment of
|
| 1009 |
+
the signal corresponding to a rest period;
|
| 1010 |
+
* Dictionary with {'mean': mean value, 'std_dev': standard variation}.
|
| 1011 |
+
sampling_rate : int, float, optional
|
| 1012 |
+
Sampling frequency (Hz).
|
| 1013 |
+
size : int
|
| 1014 |
+
Detection window size (seconds).
|
| 1015 |
+
threshold : int, float
|
| 1016 |
+
Scale factor for calculating the detection threshold.
|
| 1017 |
+
active_state_duration: int
|
| 1018 |
+
Minimum duration of the active state.
|
| 1019 |
+
|
| 1020 |
+
Returns
|
| 1021 |
+
-------
|
| 1022 |
+
onsets : array
|
| 1023 |
+
Indices of EMG pulse onsets.
|
| 1024 |
+
processed : array
|
| 1025 |
+
Processed EMG signal.
|
| 1026 |
+
|
| 1027 |
+
References
|
| 1028 |
+
----------
|
| 1029 |
+
.. [Lon13] Londral A, Silva H, Nunes N, Carvalho M, Azevedo L, "A wireless
|
| 1030 |
+
user-computer interface to explore various sources of biosignals and
|
| 1031 |
+
visual biofeedback for severe motor impairment",
|
| 1032 |
+
Journal of Accessibility and Design for All, vol. 3:2, pp. 118–134, 2013
|
| 1033 |
+
|
| 1034 |
+
"""
|
| 1035 |
+
|
| 1036 |
+
# check inputs
|
| 1037 |
+
if signal is None:
|
| 1038 |
+
raise TypeError("Please specify an input signal.")
|
| 1039 |
+
|
| 1040 |
+
if rest is None:
|
| 1041 |
+
raise TypeError("Please specidy rest parameters.")
|
| 1042 |
+
|
| 1043 |
+
if size is None:
|
| 1044 |
+
raise TypeError("Please specify the detection window size.")
|
| 1045 |
+
|
| 1046 |
+
if threshold is None:
|
| 1047 |
+
raise TypeError("Please specify the scale factor for calculating the "
|
| 1048 |
+
"detection threshold.")
|
| 1049 |
+
|
| 1050 |
+
if active_state_duration is None:
|
| 1051 |
+
raise TypeError("Please specify the mininum duration of the "
|
| 1052 |
+
"active state.")
|
| 1053 |
+
|
| 1054 |
+
# gather statistics on rest signal
|
| 1055 |
+
if isinstance(rest, np.ndarray) or isinstance(rest, list):
|
| 1056 |
+
# if the input parameter is a numpy array or a list
|
| 1057 |
+
if len(rest) >= 2:
|
| 1058 |
+
# first ensure numpy
|
| 1059 |
+
rest = np.array(rest)
|
| 1060 |
+
if len(rest) == 2:
|
| 1061 |
+
# the rest signal is a segment of the signal
|
| 1062 |
+
rest_signal = signal[rest[0]:rest[1]]
|
| 1063 |
+
else:
|
| 1064 |
+
# the rest signal is provided as is
|
| 1065 |
+
rest_signal = rest
|
| 1066 |
+
rest_zero_mean = rest_signal - np.mean(rest_signal)
|
| 1067 |
+
statistics = st.signal_stats(signal=rest_zero_mean)
|
| 1068 |
+
mean_rest = statistics['mean']
|
| 1069 |
+
std_dev_rest = statistics['std_dev']
|
| 1070 |
+
else:
|
| 1071 |
+
raise TypeError("The rest signal must contain at least two samples.")
|
| 1072 |
+
elif isinstance(rest, dict):
|
| 1073 |
+
# if the input is a dictionary
|
| 1074 |
+
mean_rest = rest['mean']
|
| 1075 |
+
std_dev_rest = rest['std_dev']
|
| 1076 |
+
else:
|
| 1077 |
+
raise TypeError("Rest must be a signal array, a [start, stop] segment, "
                "or a dict with 'mean' and 'std_dev' keys.")
|
| 1078 |
+
|
| 1079 |
+
# subtract baseline offset
|
| 1080 |
+
signal_zero_mean = signal - np.mean(signal)
|
| 1081 |
+
|
| 1082 |
+
# calculate threshold
|
| 1083 |
+
threshold = mean_rest + threshold * std_dev_rest
|
| 1084 |
+
|
| 1085 |
+
# helper function for calculating the test function for each window
|
| 1086 |
+
def _londral_test_function(signal=None):
|
| 1087 |
+
tf = (1 / size) * (sum(j ** 2 for j in signal) - (1 / size) * (sum(signal) ** 2))
|
| 1088 |
+
return tf
|
| 1089 |
+
|
| 1090 |
+
# calculate the test function
|
| 1091 |
+
_, tf = st.windower(
|
| 1092 |
+
signal=signal_zero_mean,
|
| 1093 |
+
size=size, step=1,
|
| 1094 |
+
fcn=_londral_test_function,
|
| 1095 |
+
kernel='rectangular',
|
| 1096 |
+
)
|
| 1097 |
+
|
| 1098 |
+
onset_time_list = []
|
| 1099 |
+
offset_time_list = []
|
| 1100 |
+
alarm_time = 0
|
| 1101 |
+
state_duration = 0
|
| 1102 |
+
onset = False
|
| 1103 |
+
alarm = False
|
| 1104 |
+
for k in range(0, len(tf)):
|
| 1105 |
+
if onset is True:
|
| 1106 |
+
# an onset was previously detected and we are looking for the offset time, applying the same criteria
|
| 1107 |
+
if alarm is False: # if the alarm time has not yet been identified
|
| 1108 |
+
if tf[k] < threshold: # alarm time
|
| 1109 |
+
alarm_time = k
|
| 1110 |
+
alarm = True
|
| 1111 |
+
else: # now we have to check for the remaining rule to be met - duration of the inactive state
|
| 1112 |
+
if tf[k] < threshold:
|
| 1113 |
+
state_duration += 1
|
| 1114 |
+
if state_duration == active_state_duration:
|
| 1115 |
+
offset_time_list.append(alarm_time)
|
| 1116 |
+
onset = False
|
| 1117 |
+
alarm = False
|
| 1118 |
+
state_duration = 0
|
| 1119 |
+
else: # we only look for another onset if a previous offset was detected
|
| 1120 |
+
if alarm is False: # if the alarm time has not yet been identified
|
| 1121 |
+
if tf[k] >= threshold: # alarm time
|
| 1122 |
+
alarm_time = k
|
| 1123 |
+
alarm = True
|
| 1124 |
+
else: # now we have to check for the remaining rule to be met - duration of the active state
|
| 1125 |
+
if tf[k] >= threshold:
|
| 1126 |
+
state_duration += 1
|
| 1127 |
+
if state_duration == active_state_duration:
|
| 1128 |
+
onset_time_list.append(alarm_time)
|
| 1129 |
+
onset = True
|
| 1130 |
+
alarm = False
|
| 1131 |
+
state_duration = 0
|
| 1132 |
+
|
| 1133 |
+
onsets = np.union1d(onset_time_list,
|
| 1134 |
+
offset_time_list)
|
| 1135 |
+
|
| 1136 |
+
# adjust indices because of moving average
|
| 1137 |
+
onsets += int(size / 2)
|
| 1138 |
+
|
| 1139 |
+
return utils.ReturnTuple((onsets, tf), ('onsets', 'processed'))
|
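The test function computed through `st.windower` above is simply the variance of each length-`size` window of the zero-mean signal, evaluated with a step of one sample. A minimal self-contained sketch of that computation (function name hypothetical, no BioSPPy dependency):

```python
import numpy as np

def sliding_variance(signal, size):
    # Londral-style test function: tf = (1/W) * (sum(x^2) - (1/W) * sum(x)^2)
    # for every length-W window with step 1 -- i.e. the per-window variance.
    signal = np.asarray(signal, dtype=float)
    out = np.empty(len(signal) - size + 1)
    for k in range(len(out)):
        w = signal[k:k + size]
        out[k] = (np.sum(w ** 2) - np.sum(w) ** 2 / size) / size
    return out

# A quiet segment has near-zero variance; an alternating burst does not.
quiet = np.zeros(50)
burst = np.tile([1.0, -1.0], 25)
tf = sliding_variance(np.concatenate([quiet, burst]), size=10)
```

Thresholding `tf` against `mean_rest + threshold * std_dev_rest` and requiring the state to persist for `active_state_duration` samples reproduces the onset/offset logic above.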
BioSPPy/source/biosppy/signals/pcg.py
ADDED
|
@@ -0,0 +1,282 @@
|
| 1 |
+
# -*- coding: utf-8 -*-
|
| 2 |
+
"""
|
| 3 |
+
biosppy.signals.pcg
|
| 4 |
+
-------------------
|
| 5 |
+
|
| 6 |
+
This module provides methods to process Phonocardiography (PCG) signals.
|
| 7 |
+
|
| 8 |
+
:copyright: (c) 2015-2018 by Instituto de Telecomunicacoes
|
| 9 |
+
:license: BSD 3-clause, see LICENSE for more details.
|
| 10 |
+
"""
|
| 11 |
+
|
| 12 |
+
# Imports
|
| 13 |
+
# compat
|
| 14 |
+
from __future__ import absolute_import, division, print_function
|
| 15 |
+
|
| 16 |
+
# 3rd party
|
| 17 |
+
import numpy as np
|
| 18 |
+
import scipy.signal as ss
|
| 19 |
+
|
| 20 |
+
# local
|
| 21 |
+
from . import tools as st
|
| 22 |
+
from .. import plotting, utils
|
| 23 |
+
|
| 24 |
+
|
| 25 |
+
def pcg(signal=None, sampling_rate=1000., path=None, show=True):
|
| 26 |
+
"""Process a raw PCG signal and extract relevant signal features using
default parameters.
|
| 27 |
+
|
| 28 |
+
Parameters
|
| 29 |
+
----------
|
| 30 |
+
signal : array
|
| 31 |
+
Raw PCG signal.
|
| 32 |
+
sampling_rate : int, float, optional
|
| 33 |
+
Sampling frequency (Hz).
|
| 34 |
+
path : str, optional
|
| 35 |
+
If provided, the plot will be saved to the specified file.
|
| 36 |
+
show : bool, optional
|
| 37 |
+
If True, show a summary plot.
|
| 38 |
+
|
| 39 |
+
Returns
|
| 40 |
+
-------
|
| 41 |
+
ts : array
|
| 42 |
+
Signal time axis reference (seconds).
|
| 43 |
+
filtered : array
|
| 44 |
+
Filtered PCG signal.
|
| 45 |
+
peaks : array
|
| 46 |
+
Peak location indices.
|
| 47 |
+
heart_sounds : array
|
| 48 |
+
Classification of peaks as S1 or S2.
|
| 49 |
+
heart_rate : float
|
| 50 |
+
Average heart rate (bpm).
|
| 51 |
+
systolic_time_interval : float
|
| 52 |
+
Average systolic time interval (seconds).
|
| 53 |
+
heart_rate_ts : array
|
| 54 |
+
Heart rate time axis reference (seconds).
|
| 55 |
+
inst_heart_rate : array
|
| 56 |
+
Instantaneous heart rate (bpm).
|
| 57 |
+
|
| 58 |
+
"""
|
| 59 |
+
|
| 60 |
+
# check inputs
|
| 61 |
+
if signal is None:
|
| 62 |
+
raise TypeError("Please specify an input signal.")
|
| 63 |
+
|
| 64 |
+
# ensure numpy
|
| 65 |
+
signal = np.array(signal)
|
| 66 |
+
|
| 67 |
+
sampling_rate = float(sampling_rate)
|
| 68 |
+
|
| 69 |
+
# Filter Design
|
| 70 |
+
order = 2
|
| 71 |
+
passBand = np.array([25, 400])
|
| 72 |
+
|
| 73 |
+
# Band-Pass filtering of the PCG:
|
| 74 |
+
filtered, fs, params = st.filter_signal(signal, 'butter', 'bandpass', order, passBand, sampling_rate)
|
| 75 |
+
|
| 76 |
+
# find peaks
|
| 77 |
+
peaks,envelope = find_peaks(signal=filtered, sampling_rate=sampling_rate)
|
| 78 |
+
|
| 79 |
+
# classify heart sounds
|
| 80 |
+
hs, = identify_heart_sounds(beats=peaks, sampling_rate=sampling_rate)
|
| 81 |
+
s1_peaks = peaks[np.where(hs==1)[0]]
|
| 82 |
+
|
| 83 |
+
# get heart rate
|
| 84 |
+
heartRate,systolicTimeInterval = get_avg_heart_rate(envelope,sampling_rate)
|
| 85 |
+
|
| 86 |
+
# get instantaneous heart rate
|
| 87 |
+
hr_idx,hr = st.get_heart_rate(s1_peaks, sampling_rate)
|
| 88 |
+
|
| 89 |
+
# get time vectors
|
| 90 |
+
length = len(signal)
|
| 91 |
+
T = (length - 1) / sampling_rate
|
| 92 |
+
ts = np.linspace(0, T, length, endpoint=True)
|
| 93 |
+
ts_hr = ts[hr_idx]
|
| 94 |
+
|
| 95 |
+
# plot
|
| 96 |
+
if show:
|
| 97 |
+
plotting.plot_pcg(ts=ts,
|
| 98 |
+
raw=signal,
|
| 99 |
+
filtered=filtered,
|
| 100 |
+
peaks=peaks,
|
| 101 |
+
heart_sounds=hs,
|
| 102 |
+
heart_rate_ts=ts_hr,
|
| 103 |
+
inst_heart_rate=hr,
|
| 104 |
+
path=path,
|
| 105 |
+
show=True)
|
| 106 |
+
|
| 107 |
+
|
| 108 |
+
# output
|
| 109 |
+
args = (ts, filtered, peaks, hs, heartRate, systolicTimeInterval, ts_hr, hr)
|
| 110 |
+
names = ('ts', 'filtered', 'peaks', 'heart_sounds',
|
| 111 |
+
'heart_rate', 'systolic_time_interval','heart_rate_ts','inst_heart_rate')
|
| 112 |
+
|
| 113 |
+
return utils.ReturnTuple(args, names)
|
| 114 |
+
|
| 115 |
+
def find_peaks(signal=None,sampling_rate=1000.):
|
| 116 |
+
|
| 117 |
+
"""Finds the peaks of the heart sounds from the homomorphic envelope
|
| 118 |
+
|
| 119 |
+
Parameters
|
| 120 |
+
----------
|
| 121 |
+
signal : array
|
| 122 |
+
Input filtered PCG signal.
|
| 123 |
+
sampling_rate : int, float, optional
|
| 124 |
+
Sampling frequency (Hz).
|
| 125 |
+
|
| 126 |
+
Returns
|
| 127 |
+
-------
|
| 128 |
+
peaks : array
|
| 129 |
+
Peak location indices.
|
| 130 |
+
envelope : array
|
| 131 |
+
Homomorphic envelope (normalized).
|
| 132 |
+
|
| 133 |
+
"""
|
| 134 |
+
|
| 135 |
+
# Compute homomorphic envelope
|
| 136 |
+
envelope, = homomorphic_filter(signal,sampling_rate)
|
| 137 |
+
envelope, = st.normalize(envelope)
|
| 138 |
+
|
| 139 |
+
# Find the prominent peaks of the envelope
|
| 140 |
+
peaksIndices, _ = ss.find_peaks(envelope, height=0.2 * np.amax(envelope), distance=0.10 * sampling_rate, prominence=0.25)
|
| 141 |
+
|
| 142 |
+
peaks = np.array(peaksIndices, dtype='int')
|
| 143 |
+
|
| 144 |
+
return utils.ReturnTuple((peaks,envelope),
|
| 145 |
+
('peaks','homomorphic_envelope'))
|
| 146 |
+
|
| 147 |
+
def homomorphic_filter(signal=None, sampling_rate=1000.):
|
| 148 |
+
|
| 149 |
+
"""Finds the homomorphic envelope of a signal
|
| 150 |
+
|
| 151 |
+
Follows the approach described by Schmidt et al. [Schimdt10]_.
|
| 152 |
+
|
| 153 |
+
Parameters
|
| 154 |
+
----------
|
| 155 |
+
signal : array
|
| 156 |
+
Input filtered PCG signal.
|
| 157 |
+
sampling_rate : int, float, optional
|
| 158 |
+
Sampling frequency (Hz).
|
| 159 |
+
|
| 160 |
+
Returns
|
| 161 |
+
-------
|
| 162 |
+
envelope : array
|
| 163 |
+
Homomorphic envelope (non-normalized).
|
| 164 |
+
|
| 165 |
+
References
|
| 166 |
+
----------
|
| 167 |
+
.. [Schimdt10] S. E. Schmidt et al., "Segmentation of heart sound recordings by a
|
| 168 |
+
duration-dependent hidden Markov model", Physiol. Meas., 2010
|
| 169 |
+
|
| 170 |
+
"""
|
| 171 |
+
|
| 172 |
+
# check inputs
|
| 173 |
+
if signal is None:
|
| 174 |
+
raise TypeError("Please specify an input signal.")
|
| 175 |
+
|
| 176 |
+
sampling_rate = float(sampling_rate)
|
| 177 |
+
|
| 178 |
+
# LP-filter Design (to reject the oscillating component of the signal):
|
| 179 |
+
order = 1
fc = 8
|
| 180 |
+
sos = ss.butter(order, fc, btype='low', analog=False, output='sos', fs=sampling_rate)
|
| 181 |
+
envelope = np.exp(ss.sosfiltfilt(sos, np.log(np.abs(signal))))
|
| 182 |
+
|
| 183 |
+
return utils.ReturnTuple((envelope,),
|
| 184 |
+
('homomorphic_envelope',))
|
| 185 |
+
|
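The same exp(low-pass(log|·|)) construction can be exercised as a standalone sketch; the small epsilon guarding log(0) and the test signal are additions for illustration (the library version assumes the input never touches zero exactly):

```python
import numpy as np
import scipy.signal as ss

def homomorphic_envelope(signal, sampling_rate=1000.0, fc=8.0, order=1):
    # Low-pass filtering in the log domain keeps the slowly varying
    # amplitude and rejects the fast oscillation of the heart sounds.
    sos = ss.butter(order, fc, btype='low', output='sos', fs=sampling_rate)
    log_mag = np.log(np.abs(signal) + 1e-12)  # epsilon avoids log(0)
    return np.exp(ss.sosfiltfilt(sos, log_mag))

fs = 1000.0
t = np.arange(0, 2, 1 / fs)
amplitude = 1.0 + 0.5 * np.sin(2 * np.pi * 1.0 * t)  # slow modulation
carrier = np.cos(2 * np.pi * 50.5 * t)               # fast oscillation
env = homomorphic_envelope(amplitude * carrier, fs)
```

The recovered `env` rises and falls with the slow amplitude modulation while the 50.5 Hz oscillation is rejected.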
| 186 |
+
def get_avg_heart_rate(envelope=None, sampling_rate=1000.):
|
| 187 |
+
|
| 188 |
+
"""Compute average heart rate from the signal's homomorphic envelope.
|
| 189 |
+
|
| 190 |
+
Follows the approach described by Schmidt et al. [Schimdt10]_, with
|
| 191 |
+
code adapted from David Springer [Springer16]_.
|
| 192 |
+
|
| 193 |
+
Parameters
|
| 194 |
+
----------
|
| 195 |
+
envelope : array
|
| 196 |
+
Signal's homomorphic envelope
|
| 197 |
+
sampling_rate : int, float, optional
|
| 198 |
+
Sampling frequency (Hz).
|
| 199 |
+
|
| 200 |
+
Returns
|
| 201 |
+
-------
|
| 202 |
+
heart_rate : float
|
| 203 |
+
Average heart rate (bpm).
|
| 204 |
+
systolic_time_interval : float
|
| 205 |
+
Average systolic time interval (seconds).
|
| 206 |
+
|
| 207 |
+
Notes
|
| 208 |
+
-----
|
| 209 |
+
* Assumes normal human heart rate to be between 40 and 200 bpm.
|
| 210 |
+
* Assumes the normal human systolic time interval to be between 0.2 seconds and half a heartbeat.
|
| 211 |
+
|
| 212 |
+
References
|
| 213 |
+
----------
|
| 214 |
+
.. [Schimdt10] S. E. Schmidt et al., "Segmentation of heart sound recordings by a
|
| 215 |
+
duration-dependent hidden Markov model", Physiol. Meas., 2010
|
| 216 |
+
.. [Springer16] D.Springer, "Heart sound segmentation code based on duration-dependant
|
| 217 |
+
HMM", 2016. Available at: https://github.com/davidspringer/Springer-Segmentation-Code
|
| 218 |
+
|
| 219 |
+
"""
|
| 220 |
+
|
| 221 |
+
# check inputs
|
| 222 |
+
if envelope is None:
|
| 223 |
+
raise TypeError("Please specify the signal's homomorphic envelope.")
|
| 224 |
+
|
| 225 |
+
autocorrelation = np.correlate(envelope, envelope, mode='full')
|
| 226 |
+
autocorrelation = autocorrelation[autocorrelation.size // 2:]
|
| 227 |
+
|
| 228 |
+
min_index = int(0.3*sampling_rate)
|
| 229 |
+
max_index = int(1.5*sampling_rate)
|
| 230 |
+
|
| 231 |
+
index = np.argmax(autocorrelation[min_index-1:max_index-1])
|
| 232 |
+
true_index = index+min_index-1
|
| 233 |
+
heartRate = 60/(true_index/sampling_rate)
|
| 234 |
+
|
| 235 |
+
max_sys_duration = int(np.round(((60/heartRate)*sampling_rate)/2))
|
| 236 |
+
min_sys_duration = int(np.round(0.2*sampling_rate))
|
| 237 |
+
|
| 238 |
+
pos = np.argmax(autocorrelation[min_sys_duration-1:max_sys_duration-1])
|
| 239 |
+
systolicTimeInterval = (min_sys_duration+pos)/sampling_rate
|
| 240 |
+
|
| 241 |
+
|
| 242 |
+
return utils.ReturnTuple((heartRate,systolicTimeInterval),
|
| 243 |
+
('heart_rate','systolic_time_interval'))
|
| 244 |
+
|
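The lag search above can be sketched end to end on a synthetic periodic envelope (signal and names are illustrative; the sketch uses plain `lo_i:hi_i` slices rather than the `-1` offsets above):

```python
import numpy as np

def avg_hr_from_autocorr(envelope, sampling_rate=1000.0, lo=0.3, hi=1.5):
    # The dominant autocorrelation lag between 0.3 s (200 bpm) and 1.5 s
    # (40 bpm) is taken as the average beat-to-beat period.
    envelope = np.asarray(envelope, dtype=float)
    ac = np.correlate(envelope, envelope, mode='full')[envelope.size - 1:]
    lo_i, hi_i = int(lo * sampling_rate), int(hi * sampling_rate)
    lag = lo_i + np.argmax(ac[lo_i:hi_i])
    return 60.0 * sampling_rate / lag

fs = 100.0
t = np.arange(0, 10, 1 / fs)
env = 1.0 + np.cos(2 * np.pi * 1.25 * t)  # periodic envelope at 75 bpm
hr = avg_hr_from_autocorr(env, sampling_rate=fs)
```

The estimate lands near 75 bpm; the linear decay of the finite-length autocorrelation can shift the detected lag by a sample or so.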
| 245 |
+
def identify_heart_sounds(beats=None, sampling_rate=1000.):
|
| 246 |
+
|
| 247 |
+
"""Classify heart sound peaks as S1 or S2
|
| 248 |
+
|
| 249 |
+
Parameters
|
| 250 |
+
----------
|
| 251 |
+
beats : array
|
| 252 |
+
Peaks of heart sounds
|
| 253 |
+
sampling_rate : int, float, optional
|
| 254 |
+
Sampling frequency (Hz).
|
| 255 |
+
|
| 256 |
+
Returns
|
| 257 |
+
-------
|
| 258 |
+
classification : array
|
| 259 |
+
Classification of heart sound peaks. 1 is S1, 2 is S2
|
| 260 |
+
|
| 261 |
+
"""
|
| 262 |
+
|
| 263 |
+
one_peak_ahead = np.roll(beats, -1)
|
| 264 |
+
|
| 265 |
+
SS_intervals = (one_peak_ahead[0:-1] - beats[0:-1]) / sampling_rate
|
| 266 |
+
|
| 267 |
+
# Initialize the vector to store the classification of the peaks:
|
| 268 |
+
classification = np.zeros(len(beats))
|
| 269 |
+
|
| 270 |
+
# Classify the peaks.
|
| 271 |
+
# Terrible algorithm, but good enough for now
|
| 272 |
+
for i in range(1,len(beats)-1):
|
| 273 |
+
if SS_intervals[i-1] > SS_intervals[i]:
|
| 274 |
+
classification[i] = 0
|
| 275 |
+
else:
|
| 276 |
+
classification[i] = 1
|
| 277 |
+
classification[0] = int(not(classification[1]))
|
| 278 |
+
classification[-1] = int(not(classification[-2]))
|
| 279 |
+
|
| 280 |
+
classification += 1
|
| 281 |
+
|
| 282 |
+
return utils.ReturnTuple((classification,), ('heart_sounds',))
|
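The interval-alternation rule above can be restated as a standalone sketch (names hypothetical): the interval following an S1 (systole) is shorter than the one following an S2 (diastole), so each interior peak is labelled by comparing its two neighbouring intervals, and the end peaks inherit the opposite label of their neighbours.

```python
import numpy as np

def label_s1_s2(peaks, sampling_rate=1000.0):
    # Returns 1 for S1 and 2 for S2, following the alternation rule above.
    peaks = np.asarray(peaks)
    intervals = np.diff(peaks) / sampling_rate
    labels = np.zeros(len(peaks), dtype=int)
    for i in range(1, len(peaks) - 1):
        # preceded by the long diastolic interval -> S1 (label 0 before the +1)
        labels[i] = 0 if intervals[i - 1] > intervals[i] else 1
    labels[0] = 1 - labels[1]
    labels[-1] = 1 - labels[-2]
    return labels + 1

# 60 bpm with a 0.3 s systole: S1 and S2 alternate as expected
labels = label_s1_s2([0, 300, 1000, 1300, 2000, 2300])
```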
BioSPPy/source/biosppy/signals/ppg.py
ADDED
|
@@ -0,0 +1,568 @@
|
| 1 |
+
# -*- coding: utf-8 -*-
|
| 2 |
+
"""
|
| 3 |
+
biosppy.signals.ppg
|
| 4 |
+
-------------------
|
| 5 |
+
|
| 6 |
+
This module provides methods to process Photoplethysmogram (PPG) signals.
|
| 7 |
+
|
| 8 |
+
:copyright: (c) 2015-2018 by Instituto de Telecomunicacoes
|
| 9 |
+
:license: BSD 3-clause, see LICENSE for more details.
|
| 10 |
+
"""
|
| 11 |
+
|
| 12 |
+
# Imports
|
| 13 |
+
# compat
|
| 14 |
+
from __future__ import absolute_import, division, print_function
|
| 15 |
+
from six.moves import range
|
| 16 |
+
|
| 17 |
+
# 3rd party
|
| 18 |
+
import numpy as np
|
| 19 |
+
import scipy.signal as ss
|
| 20 |
+
import matplotlib.pyplot as plt
|
| 21 |
+
from scipy.stats import gaussian_kde
|
| 22 |
+
|
| 23 |
+
# local
|
| 24 |
+
from . import tools as st
|
| 25 |
+
from .. import plotting, utils
|
| 26 |
+
|
| 27 |
+
|
| 28 |
+
def ppg(signal=None, sampling_rate=1000., show=True):
|
| 29 |
+
"""Process a raw PPG signal and extract relevant signal features using
|
| 30 |
+
default parameters.
|
| 31 |
+
|
| 32 |
+
Parameters
|
| 33 |
+
----------
|
| 34 |
+
signal : array
|
| 35 |
+
Raw PPG signal.
|
| 36 |
+
sampling_rate : int, float, optional
|
| 37 |
+
Sampling frequency (Hz).
|
| 38 |
+
show : bool, optional
|
| 39 |
+
If True, show a summary plot.
|
| 40 |
+
|
| 41 |
+
Returns
|
| 42 |
+
-------
|
| 43 |
+
ts : array
|
| 44 |
+
Signal time axis reference (seconds).
|
| 45 |
+
filtered : array
|
| 46 |
+
Filtered PPG signal.
|
| 47 |
+
onsets : array
|
| 48 |
+
Indices of PPG pulse onsets.
|
| 49 |
+
heart_rate_ts : array
|
| 50 |
+
Heart rate time axis reference (seconds).
|
| 51 |
+
heart_rate : array
|
| 52 |
+
Instantaneous heart rate (bpm).
|
| 53 |
+
|
| 54 |
+
"""
|
| 55 |
+
|
| 56 |
+
# check inputs
|
| 57 |
+
if signal is None:
|
| 58 |
+
raise TypeError("Please specify an input signal.")
|
| 59 |
+
|
| 60 |
+
# ensure numpy
|
| 61 |
+
signal = np.array(signal)
|
| 62 |
+
|
| 63 |
+
sampling_rate = float(sampling_rate)
|
| 64 |
+
|
| 65 |
+
# filter signal
|
| 66 |
+
filtered, _, _ = st.filter_signal(signal=signal,
|
| 67 |
+
ftype='butter',
|
| 68 |
+
band='bandpass',
|
| 69 |
+
order=4,
|
| 70 |
+
frequency=[1, 8],
|
| 71 |
+
sampling_rate=sampling_rate)
|
| 72 |
+
|
| 73 |
+
# find onsets
|
| 74 |
+
onsets, _ = find_onsets_elgendi2013(signal=filtered, sampling_rate=sampling_rate)
|
| 75 |
+
|
| 76 |
+
# compute heart rate
|
| 77 |
+
hr_idx, hr = st.get_heart_rate(beats=onsets,
|
| 78 |
+
sampling_rate=sampling_rate,
|
| 79 |
+
smooth=True,
|
| 80 |
+
size=3)
|
| 81 |
+
|
| 82 |
+
# get time vectors
|
| 83 |
+
length = len(signal)
|
| 84 |
+
T = (length - 1) / sampling_rate
|
| 85 |
+
ts = np.linspace(0, T, length, endpoint=True)
|
| 86 |
+
ts_hr = ts[hr_idx]
|
| 87 |
+
|
| 88 |
+
# plot
|
| 89 |
+
if show:
|
| 90 |
+
plotting.plot_ppg(ts=ts,
|
| 91 |
+
raw=signal,
|
| 92 |
+
filtered=filtered,
|
| 93 |
+
onsets=onsets,
|
| 94 |
+
heart_rate_ts=ts_hr,
|
| 95 |
+
heart_rate=hr,
|
| 96 |
+
path=None,
|
| 97 |
+
show=True)
|
| 98 |
+
|
| 99 |
+
# output
|
| 100 |
+
args = (ts, filtered, onsets, ts_hr, hr)
|
| 101 |
+
names = ('ts', 'filtered', 'onsets', 'heart_rate_ts', 'heart_rate')
|
| 102 |
+
|
| 103 |
+
return utils.ReturnTuple(args, names)
|
| 104 |
+
|
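For reference, the core of the heart-rate computation delegated to `st.get_heart_rate` is the beat-to-beat conversion below; the library adds physiological-range filtering and optional smoothing on top (sketch independent of BioSPPy):

```python
import numpy as np

def instantaneous_hr(onsets, sampling_rate=1000.0):
    # 60 / inter-beat interval, one value per consecutive pair of onsets.
    onsets = np.asarray(onsets)
    ibi = np.diff(onsets) / sampling_rate  # seconds per beat
    return 60.0 / ibi                      # beats per minute

# onsets 0.8 s apart -> a steady 75 bpm
hr = instantaneous_hr([0, 800, 1600, 2400], sampling_rate=1000.0)
```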
| 105 |
+
def find_onsets_elgendi2013(signal=None, sampling_rate=1000., peakwindow=0.111, beatwindow=0.667, beatoffset=0.02, mindelay=0.3):
|
| 106 |
+
"""
|
| 107 |
+
Determines onsets of PPG pulses.
|
| 108 |
+
|
| 109 |
+
Parameters
|
| 110 |
+
----------
|
| 111 |
+
signal : array
|
| 112 |
+
Input filtered PPG signal.
|
| 113 |
+
sampling_rate : int, float, optional
|
| 114 |
+
Sampling frequency (Hz).
|
| 115 |
+
peakwindow : float
|
| 116 |
+
Parameter W1 in the referenced article
|
| 117 |
+
Optimized at 0.111
|
| 118 |
+
beatwindow : float
|
| 119 |
+
Parameter W2 in the referenced article
|
| 120 |
+
Optimized at 0.667
|
| 121 |
+
beatoffset : float
|
| 122 |
+
Parameter beta in the referenced article
|
| 123 |
+
Optimized at 0.02
|
| 124 |
+
mindelay : float
|
| 125 |
+
Minimum delay between peaks.
|
| 126 |
+
Avoids false positives
|
| 127 |
+
|
| 128 |
+
Returns
|
| 129 |
+
-------
|
| 130 |
+
onsets : array
|
| 131 |
+
Indices of PPG pulse onsets.
|
| 132 |
+
params : dict
|
| 133 |
+
Input parameters of the function
|
| 134 |
+
|
| 135 |
+
|
| 136 |
+
References
|
| 137 |
+
----------
|
| 138 |
+
- Elgendi M, Norton I, Brearley M, Abbott D, Schuurmans D (2013) Systolic Peak Detection in
|
| 139 |
+
Acceleration Photoplethysmograms Measured from Emergency Responders in Tropical Conditions.
|
| 140 |
+
PLoS ONE 8(10): e76585. doi:10.1371/journal.pone.0076585.
|
| 141 |
+
|
| 142 |
+
Notes
|
| 143 |
+
-----
|
| 144 |
+
Optimal ranges for signal filtering (from Elgendi et al. 2013):
|
| 145 |
+
"Optimization of the beat detector’s spectral window for the lower frequency resulted in a
|
| 146 |
+
value within 0.5–1 Hz with the higher frequency within 7–15 Hz"
|
| 147 |
+
|
| 148 |
+
The numbers in curly brackets {...} throughout the code refer to the line numbers of
|
| 149 |
+
"Table 2 Algorithm IV: DETECTOR (PPG signal, F1, F2, W1, W2, b)" from Elgendi et al. 2013,
|
| 150 |
+
to ease comparison with the published algorithm.
|
| 151 |
+
|
| 152 |
+
"""
|
| 153 |
+
|
| 154 |
+
# check inputs
|
| 155 |
+
if signal is None:
|
| 156 |
+
raise TypeError("Please specify an input signal.")
|
| 157 |
+
|
| 158 |
+
# Create copy of signal (not to modify the original object)
|
| 159 |
+
signal_copy = np.copy(signal)
|
| 160 |
+
|
| 161 |
+
# Truncate to zero and square
|
| 162 |
+
# {3, 4}
|
| 163 |
+
signal_copy[signal_copy < 0] = 0
|
| 164 |
+
squared_signal = signal_copy ** 2
|
| 165 |
+
|
| 166 |
+
# Calculate peak detection threshold
|
| 167 |
+
# {5}
|
| 168 |
+
ma_peak_kernel = int(np.rint(peakwindow * sampling_rate))
|
| 169 |
+
ma_peak, _ = st.smoother(squared_signal, kernel="boxcar", size=ma_peak_kernel)
|
| 170 |
+
|
| 171 |
+
# {6}
|
| 172 |
+
ma_beat_kernel = int(np.rint(beatwindow * sampling_rate))
|
| 173 |
+
ma_beat, _ = st.smoother(squared_signal, kernel="boxcar", size=ma_beat_kernel)
|
| 174 |
+
|
| 175 |
+
# Calculate threshold value
|
| 176 |
+
# {7, 8, 9}
|
| 177 |
+
thr1 = ma_beat + beatoffset * np.mean(squared_signal)
|
| 178 |
+
|
| 179 |
+
# Identify start and end of PPG waves.
|
| 180 |
+
# {10-16}
|
| 181 |
+
waves = ma_peak > thr1
|
| 182 |
+
beg_waves = np.where(np.logical_and(np.logical_not(waves[0:-1]), waves[1:]))[0]
|
| 183 |
+
end_waves = np.where(np.logical_and(waves[0:-1], np.logical_not(waves[1:])))[0]
|
| 184 |
+
# Throw out wave-ends that precede first wave-start.
|
| 185 |
+
end_waves = end_waves[end_waves > beg_waves[0]]
|
| 186 |
+
|
| 187 |
+
# Identify systolic peaks within waves (ignore waves that are too short).
|
| 188 |
+
num_waves = min(beg_waves.size, end_waves.size)
|
| 189 |
+
# {18}
|
| 190 |
+
min_len = int(np.rint(peakwindow * sampling_rate))
|
| 191 |
+
min_delay = int(np.rint(mindelay * sampling_rate))
|
| 192 |
+
onsets = [0]
|
| 193 |
+
|
| 194 |
+
# {19}
|
| 195 |
+
for i in range(num_waves):
|
| 196 |
+
|
| 197 |
+
beg = beg_waves[i]
|
| 198 |
+
end = end_waves[i]
|
| 199 |
+
len_wave = end - beg
|
| 200 |
+
|
| 201 |
+
# {20, 22, 23}
|
| 202 |
+
if len_wave < min_len:
|
| 203 |
+
continue
|
| 204 |
+
|
| 205 |
+
# Find local maxima and their prominence within wave span.
|
| 206 |
+
# {21}
|
| 207 |
+
data = signal_copy[beg:end]
|
| 208 |
+
locmax, props = ss.find_peaks(data, prominence=(None, None))
|
| 209 |
+
|
| 210 |
+
# If at least one peak was found
|
| 211 |
+
if locmax.size > 0:
|
| 212 |
+
# Identify most prominent local maximum.
|
| 213 |
+
peak = beg + locmax[np.argmax(props["prominences"])]
|
| 214 |
+
# Enforce minimum delay between onsets.
|
| 215 |
+
if peak - onsets[-1] > min_delay:
|
| 216 |
+
onsets.append(peak)
|
| 217 |
+
|
| 218 |
+
onsets.pop(0)
|
| 219 |
+
onsets = np.array(onsets, dtype='int')
|
| 220 |
+
|
| 221 |
+
# output
|
| 222 |
+
params = {'signal': signal, 'sampling_rate': sampling_rate, 'peakwindow': peakwindow, 'beatwindow': beatwindow, 'beatoffset': beatoffset, 'mindelay': mindelay}
|
| 223 |
+
|
| 224 |
+
args = (onsets, params)
|
| 225 |
+
names = ('onsets', 'params')
|
| 226 |
+
|
| 227 |
+
return utils.ReturnTuple(args, names)
|
| 228 |
+
|
| 229 |
+
|
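Stripped of the per-wave peak search and the minimum-length and minimum-delay rules, the two-moving-average gating at the heart of the detector can be sketched as follows (`np.convolve` stands in for `st.smoother`; the pulse-train test signal is illustrative):

```python
import numpy as np

def wave_boundaries(x, fs, w1=0.111, w2=0.667, beta=0.02):
    # Clip, square, then compare a short moving average (peak scale, W1)
    # against a longer one (beat scale, W2) plus a small offset.
    y = np.clip(x, 0, None) ** 2
    k1 = max(int(round(w1 * fs)), 1)
    k2 = max(int(round(w2 * fs)), 1)
    ma_peak = np.convolve(y, np.ones(k1) / k1, mode='same')
    ma_beat = np.convolve(y, np.ones(k2) / k2, mode='same')
    waves = ma_peak > ma_beat + beta * y.mean()
    beg = np.where(~waves[:-1] & waves[1:])[0]
    end = np.where(waves[:-1] & ~waves[1:])[0]
    return beg, end

fs = 100.0
t = np.arange(0, 3, 1 / fs)
u = t % 1.0
pulses = np.where(np.abs(u - 0.5) < 0.1, np.sin(np.pi * (u - 0.4) / 0.2), 0.0)
beg, end = wave_boundaries(pulses, fs)
```

Each of the three synthetic pulses yields one begin/end pair; the full function then searches each `[beg, end)` span for its most prominent local maximum.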
| 230 |
+
def find_onsets_kavsaoglu2016(
|
| 231 |
+
signal=None,
|
| 232 |
+
sampling_rate=1000.0,
|
| 233 |
+
alpha=0.2,
|
| 234 |
+
k=4,
|
| 235 |
+
init_bpm=90,
|
| 236 |
+
min_delay=0.6,
|
| 237 |
+
max_BPM=150,
|
| 238 |
+
):
|
| 239 |
+
"""
|
| 240 |
+
Determines onsets of PPG pulses.
|
| 241 |
+
|
| 242 |
+
Parameters
|
| 243 |
+
----------
|
| 244 |
+
signal : array
|
| 245 |
+
Input filtered PPG signal.
|
| 246 |
+
sampling_rate : int, float, optional
|
| 247 |
+
Sampling frequency (Hz).
|
| 248 |
+
alpha : float, optional
|
| 249 |
+
Low-pass filter factor.
|
| 250 |
+
Avoids abrupt changes of BPM.
|
| 251 |
+
k : int, float, optional
|
| 252 |
+
Number of segments by pulse.
|
| 253 |
+
Width of each segment = (pulse period at the current BPM) / k
|
| 254 |
+
init_bpm : int, float, optional
|
| 255 |
+
Initial BPM.
|
| 256 |
+
Higher value results in a smaller segment width.
|
| 257 |
+
min_delay : float
|
| 258 |
+
Minimum delay between peaks, as a fraction of the current pulse period.
|
| 259 |
+
Avoids false positives
|
| 260 |
+
max_BPM : int, float, optional
|
| 261 |
+
Maximum BPM.
|
| 262 |
+
Maximum value accepted as valid BPM.
|
| 263 |
+
|
| 264 |
+
Returns
|
| 265 |
+
-------
|
| 266 |
+
onsets : array
|
| 267 |
+
Indices of PPG pulse onsets.
|
| 268 |
+
window_marks : array
|
| 269 |
+
Indices of segments window boundaries.
|
| 270 |
+
params : dict
|
| 271 |
+
Input parameters of the function
|
| 272 |
+
|
| 273 |
+
|
| 274 |
+
References
|
| 275 |
+
----------
|
| 276 |
+
- Kavsaoğlu, Ahmet & Polat, Kemal & Bozkurt, Mehmet. (2016). An innovative peak detection algorithm for
|
| 277 |
+
photoplethysmography signals: An adaptive segmentation method. TURKISH JOURNAL OF ELECTRICAL ENGINEERING
|
| 278 |
+
& COMPUTER SCIENCES. 24. 1782-1796. 10.3906/elk-1310-177.
|
| 279 |
+
|
| 280 |
+
Notes
|
| 281 |
+
-----
|
| 282 |
+
This algorithm is an adaptation of the one described in Kavsaoğlu et al. (2016).
|
| 283 |
+
This version takes into account a minimum delay between peaks and builds upon the adaptive segmentation
|
| 284 |
+
by using a low-pass filter for BPM changes. This way, even if the algorithm wrongly detects a peak, the
|
| 285 |
+
BPM value will stay relatively constant so the next pulse can be correctly segmented.
|
| 286 |
+
|
| 287 |
+
"""
|
| 288 |
+
|
| 289 |
+
# check inputs
|
| 290 |
+
if signal is None:
|
| 291 |
+
raise TypeError("Please specify an input signal.")
|
| 292 |
+
|
| 293 |
+
if alpha <= 0 or alpha > 1:
|
| 294 |
+
raise TypeError("The value of alpha must be in the range: ]0, 1].")
|
| 295 |
+
|
| 296 |
+
if k <= 0:
|
| 297 |
+
raise TypeError("The number of divisions by pulse should be greater than 0.")
|
| 298 |
+
|
| 299 |
+
if init_bpm <= 0:
|
| 300 |
+
raise TypeError("Provide a valid BPM value for initial estimation.")
|
| 301 |
+
|
| 302 |
+
if min_delay < 0 or min_delay > 1:
|
| 303 |
+
raise TypeError(
|
| 304 |
+
"The minimum delay percentage between peaks must be between 0 and 1"
|
| 305 |
+
)
|
| 306 |
+
|
| 307 |
+
if max_BPM >= 248:
|
| 308 |
+
raise TypeError("Please specify a physiologically plausible maximum BPM (below 248).")
|
| 309 |
+
|
| 310 |
+
# current bpm
|
| 311 |
+
bpm = init_bpm
|
| 312 |
+
|
| 313 |
+
# current segment window width
|
| 314 |
+
window = int(sampling_rate * (60 / bpm) / k)
|
| 315 |
+
|
| 316 |
+
# onsets array
|
| 317 |
+
onsets = []
|
| 318 |
+
|
| 319 |
+
# window marks array - stores the boundaries of each segment
|
| 320 |
+
window_marks = []
|
| 321 |
+
|
| 322 |
+
# buffer for peak indices
|
| 323 |
+
idx_buffer = [-1, -1, -1]
|
| 324 |
+
|
| 325 |
+
# buffer to store the previous 3 values for onset detection
|
| 326 |
+
min_buffer = [0, 0, 0]
|
| 327 |
+
|
| 328 |
+
# signal pointer
|
| 329 |
+
i = 0
|
| 330 |
+
while i + window < len(signal):
|
| 331 |
+
# remove oldest values
|
| 332 |
+
idx_buffer.pop(0)
|
| 333 |
+
min_buffer.pop(0)
|
| 334 |
+
|
| 335 |
+
# add the index of the minimum value of the current segment to buffer
|
| 336 |
+
idx_buffer.append(int(i + np.argmin(signal[i : i + window])))
|
| 337 |
+
|
| 338 |
+
# add the minimum value of the current segment to buffer
|
| 339 |
+
min_buffer.append(signal[idx_buffer[-1]])
|
| 340 |
+
|
| 341 |
+
if (
|
| 342 |
+
# the buffer has to be filled with valid values
|
| 343 |
+
idx_buffer[0] > -1
|
| 344 |
+
# the center value of the buffer must be smaller than its neighbours
|
| 345 |
+
and (min_buffer[1] < min_buffer[0] and min_buffer[1] <= min_buffer[2])
|
| 346 |
+
# if an onset was previously detected, guarantee that the new onset respects the minimum delay, minimum BPM and maximum BPM
|
| 347 |
+
and (
|
| 348 |
+
len(onsets) == 0
|
| 349 |
+
or (
|
| 350 |
+
(idx_buffer[1] - onsets[-1]) / sampling_rate >= min_delay * 60 / bpm
|
| 351 |
+
and (idx_buffer[1] - onsets[-1]) / sampling_rate > 60 / max_BPM
|
| 352 |
+
)
|
| 353 |
+
)
|
| 354 |
+
):
|
| 355 |
+
# store the onset
|
| 356 |
+
onsets.append(idx_buffer[1])
|
| 357 |
+
|
| 358 |
+
# if more than one onset was detected, update the bpm and the segment width
|
| 359 |
+
if len(onsets) > 1:
|
| 360 |
+
# calculate new bpm from the latest two onsets
|
| 361 |
+
new_bpm = int(60 * sampling_rate / (onsets[-1] - onsets[-2]))
|
| 362 |
+
|
| 363 |
+
# update the bpm value
|
| 364 |
+
bpm = alpha * new_bpm + (1 - alpha) * bpm
|
| 365 |
+
|
| 366 |
+
# update the segment window width
|
| 367 |
+
window = int(sampling_rate * (60 / bpm) / k)
|
| 368 |
+
|
| 369 |
+
# update the signal pointer
|
| 370 |
+
i += window
|
| 371 |
+
|
| 372 |
+
# store window segment boundaries index
|
| 373 |
+
window_marks.append(i)
|
| 374 |
+
|
| 375 |
+
onsets = np.array(onsets, dtype="int")
|
| 376 |
+
window_marks = np.array(window_marks, dtype="int")
|
| 377 |
+
|
| 378 |
+
# output
|
| 379 |
+
params = {
|
| 380 |
+
"signal": signal,
|
| 381 |
+
"sampling_rate": sampling_rate,
|
| 382 |
+
"alpha": alpha,
|
| 383 |
+
"k": k,
|
| 384 |
+
"init_bpm": init_bpm,
|
| 385 |
+
"min_delay": min_delay,
|
| 386 |
+
"max_bpm": max_BPM,
|
| 387 |
+
}
|
| 388 |
+
|
| 389 |
+
args = (onsets, window_marks, params)
|
| 390 |
+
names = ("onsets", "window_marks", "params")
|
| 391 |
+
|
| 392 |
+
return utils.ReturnTuple(args, names)
|
| 393 |
+
|
| 394 |
+
|
| 395 |
+
def ppg_segmentation(filtered,
|
| 396 |
+
sampling_rate=1000.,
|
| 397 |
+
show=False,
|
| 398 |
+
show_mean=False,
|
| 399 |
+
selection=False,
|
| 400 |
+
peak_threshold=None):
|
| 401 |
+
""""Segments a filtered PPG signal. Segmentation filtering is achieved by
|
| 402 |
+
taking into account segments selected by peak height and pulse morphology.
|
| 403 |
+
|
| 404 |
+
Parameters
|
| 405 |
+
----------
|
| 406 |
+
filtered : array
|
| 407 |
+
Filtered PPG signal.
|
| 408 |
+
sampling_rate : int, float, optional
|
| 409 |
+
Sampling frequency (Hz).
|
| 410 |
+
show : bool, optional
|
| 411 |
+
If True, show a plot with segments. Segments are clipped.
|
| 412 |
+
show_mean : bool, optional
|
| 413 |
+
If True, shows the mean pulse on top of segments.
|
| 414 |
+
selection : bool, optional
|
| 415 |
+
If True, performs selection with peak height and pulse morphology.
|
| 416 |
+
peak_threshold : int, float, optional
|
| 417 |
+
If `selection` is True, selects peaks with height greater than defined
|
| 418 |
+
threshold.
|
| 419 |
+
|
| 420 |
+
Returns
|
| 421 |
+
-------
|
| 422 |
+
segments : array
|
| 423 |
+
Start and end indices for each detected pulse segment.
|
| 424 |
+
selected_segments : array
|
| 425 |
+
Start and end indices for each selected pulse segment.
|
| 426 |
+
mean_pulse_ts : array
|
| 427 |
+
Mean pulse time axis reference (seconds).
|
| 428 |
+
mean_pulse : array
|
| 429 |
+
Mean wave of clipped PPG pulses.
|
| 430 |
+
onsets : array
|
| 431 |
+
Indices of PPG pulse onsets. Onsets are found based on minima.
|
| 432 |
+
peaks : array
|
| 433 |
+
Indices of PPG pulse peaks. 'Elgendi2013' algorithm is used.
|
| 434 |
+
|
| 435 |
+
"""
|
| 436 |
+
|
| 437 |
+
# check inputs
|
| 438 |
+
if filtered is None:
|
| 439 |
+
raise TypeError("Please specify an input signal.")
|
| 440 |
+
|
| 441 |
+
# ensure numpy
|
| 442 |
+
filtered = np.array(filtered)
|
| 443 |
+
|
| 444 |
+
sampling_rate = float(sampling_rate)
|
| 445 |
+
|
| 446 |
+
# find peaks (last peak is rejected)
|
| 447 |
+
peaks, _ = find_onsets_elgendi2013(filtered)
|
| 448 |
+
nb_segments = len(peaks) - 1
|
| 449 |
+
|
| 450 |
+
# find minima
|
| 451 |
+
minima = (np.diff(np.sign(np.diff(filtered))) > 0).nonzero()[0]
|
| 452 |
+
|
| 453 |
+
# find onsets
|
| 454 |
+
onsets = []
|
| 455 |
+
|
| 456 |
+
for i in peaks:
|
| 457 |
+
# find the lower closest number to the peak
|
| 458 |
+
onsets.append(minima[minima < i].max())
|
| 459 |
+
|
| 460 |
+
onsets = np.array(onsets, dtype='int')
|
| 461 |
+
|
| 462 |
+
if len(peaks) == 0 or len(onsets) == 0:
|
| 463 |
+
raise TypeError("No peaks or onsets detected.")
|
| 464 |
+
|
| 465 |
+
# define peak threshold with peak density function (for segment selection),
|
| 466 |
+
# where the maximum value is choosen.
|
| 467 |
+
if peak_threshold == None and selection:
|
| 468 |
+
density = gaussian_kde(filtered[peaks])
|
| 469 |
+
xs = np.linspace(0, max(filtered[peaks]), 1000)
|
| 470 |
+
density.covariance_factor = lambda : .25
|
| 471 |
+
density._compute_covariance()
|
| 472 |
+
peak_threshold = xs[np.argmax(density(xs))]
|
| 473 |
+
|
| 474 |
+
# segments array with start and end indexes, and segment selection
|
| 475 |
+
segments = np.zeros((nb_segments, 2), dtype='int')
|
| 476 |
+
segments_sel = []
|
| 477 |
+
|
| 478 |
+
for i in range(nb_segments):
|
| 479 |
+
|
| 480 |
+
# assign start and end of each segment
|
| 481 |
+
segments[i, 0] = onsets[i]
|
| 482 |
+
segments[i, 1] = onsets[i + 1]
|
| 483 |
+
|
| 484 |
+
# search segments with at least 4 max+min (standard waveform) and
|
| 485 |
+
# peak height greater than threshold for pulse selection
|
| 486 |
+
if selection:
|
| 487 |
+
seg = filtered[segments[i, 0] : segments[i, 1]]
|
| 488 |
+
if max(seg) > peak_threshold:
|
| 489 |
+
if len(np.where(np.diff(np.sign(np.diff(seg))))[0]) > 3:
|
| 490 |
+
segments_sel.append(i)
|
| 491 |
+
|
| 492 |
+
if len(segments_sel) == 0 :
|
| 493 |
+
print('Warning: Suitable waves not found. [-0.1, 0.4]s cut from peak is made.')
|
| 494 |
+
|
| 495 |
+
# find earliest onset-peak duration (ensure minimal shift of 0.1s)
|
| 496 |
+
shifts = peaks - onsets
|
| 497 |
+
|
| 498 |
+
cut1 = 0.1*sampling_rate
|
| 499 |
+
if len(segments_sel) > 0 and selection:
|
| 500 |
+
shifts_sel = np.take(shifts, segments_sel)
|
| 501 |
+
shifts_sel = shifts_sel[shifts_sel > 0.1*sampling_rate]
|
| 502 |
+
cut1 = min(shifts_sel)
|
| 503 |
+
|
| 504 |
+
# find shortest peak-end duration (ensure minimal duration of 0.4s)
|
| 505 |
+
cut2 = 0.4*sampling_rate
|
| 506 |
+
ep_d = segments[:, 1] - peaks[0 : len(segments)]
|
| 507 |
+
if len(segments_sel) > 0 and selection:
|
| 508 |
+
ep_d_sel = np.take(ep_d, segments_sel)
|
| 509 |
+
ep_d_sel = ep_d_sel[ep_d_sel > 0.4*sampling_rate]
|
| 510 |
+
cut2 = min(ep_d_sel)
|
| 511 |
+
|
| 512 |
+
# clipping segments
|
| 513 |
+
c_segments = np.zeros((nb_segments, 2), dtype=int)
|
| 514 |
+
for i in range(nb_segments):
|
| 515 |
+
c_segments[i, 0] = peaks[i] - cut1
|
| 516 |
+
c_segments[i, 1] = peaks[i] + cut2
|
| 517 |
+
|
| 518 |
+
cut_length = c_segments[0, 1] - c_segments[0, 0]
|
| 519 |
+
|
| 520 |
+
|
| 521 |
+
# time axis
|
| 522 |
+
mean_pulse_ts = np.arange(0, cut_length/sampling_rate, 1./sampling_rate)
|
| 523 |
+
|
| 524 |
+
# plot
|
| 525 |
+
if show:
|
| 526 |
+
# figure layout
|
| 527 |
+
fig, ax = plt.subplots()
|
| 528 |
+
fig.suptitle('PPG Segments', fontweight='bold', x=0.51)
|
| 529 |
+
ax.set_xlabel('Time (s)')
|
| 530 |
+
ax.set_ylabel('Amplitude (a.u.)')
|
| 531 |
+
|
| 532 |
+
# sum of segments to plot mean wave pulse
|
| 533 |
+
sum_segments = np.zeros(cut_length)
|
| 534 |
+
|
| 535 |
+
# transparency factor to plot segments (alpha)
|
| 536 |
+
b = 1 - np.log(1. - 0.01)
|
| 537 |
+
alpha = np.exp(-nb_segments + b) + 0.01
|
| 538 |
+
if alpha > 1:
|
| 539 |
+
alpha = 1
|
| 540 |
+
|
| 541 |
+
# plot segments
|
| 542 |
+
if selection:
|
| 543 |
+
for i in segments_sel:
|
| 544 |
+
wave = filtered[c_segments[i, 0] : c_segments[i, 1]]
|
| 545 |
+
if show:
|
| 546 |
+
ax.plot(mean_pulse_ts, wave, color='tab:blue', alpha=alpha)
|
| 547 |
+
sum_segments = sum_segments + wave
|
| 548 |
+
ax.set_title(f'[selection only, {len(segments_sel)} segment(s)]')
|
| 549 |
+
|
| 550 |
+
else:
|
| 551 |
+
for i in range(nb_segments):
|
| 552 |
+
wave = filtered[c_segments[i, 0] : c_segments[i, 1]]
|
| 553 |
+
if show:
|
| 554 |
+
ax.plot(mean_pulse_ts, wave, color='tab:blue', alpha=alpha)
|
| 555 |
+
ax.set_title(f'[{nb_segments} segment(s)]')
|
| 556 |
+
sum_segments = sum_segments + wave
|
| 557 |
+
|
| 558 |
+
# plot mean pulse
|
| 559 |
+
mean_pulse = sum_segments/len(segments_sel)
|
| 560 |
+
if show and show_mean:
|
| 561 |
+
ax.plot(mean_pulse_ts, mean_pulse, color='tab:orange', label='Mean wave')
|
| 562 |
+
ax.legend()
|
| 563 |
+
|
| 564 |
+
# output
|
| 565 |
+
args = (segments, segments[segments_sel], mean_pulse_ts, mean_pulse, onsets, peaks)
|
| 566 |
+
names = ('segments', 'selected_segments', 'mean_pulse_ts', 'mean_pulse', 'onsets', 'peaks')
|
| 567 |
+
|
| 568 |
+
return utils.ReturnTuple(args, names)
|
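The adaptive-window onset search above keeps a 3-element buffer of segment minima and accepts the middle one as an onset only when the buffer is full, the middle value is a local minimum among the three, and any previous onset is far enough away. A minimal pure-Python sketch of that acceptance rule (function and parameter names are illustrative, not from BioSPPy):

```python
def accept_onset(min_buffer, idx_buffer, onsets, sampling_rate, bpm,
                 min_delay, max_bpm):
    """Return True when the middle buffered minimum qualifies as an onset.

    Mirrors the conditions used in the segment loop: the index buffer must
    be filled with valid values, the middle minimum must be smaller than
    its neighbours, and a previously detected onset must respect the
    minimum delay and the maximum BPM spacing.
    """
    if idx_buffer[0] <= -1:
        return False  # buffer not yet filled with valid indices
    if not (min_buffer[1] < min_buffer[0] and min_buffer[1] <= min_buffer[2]):
        return False  # middle value is not a local minimum
    if not onsets:
        return True  # first onset: no spacing constraint applies
    dt = (idx_buffer[1] - onsets[-1]) / sampling_rate
    return dt >= min_delay * 60 / bpm and dt > 60 / max_bpm
```

For example, with a middle minimum of 0.1 between 0.5 and 0.3 and no prior onset, the candidate is accepted; a candidate only 50 ms after the last onset at 60 BPM with `min_delay=0.6` is rejected.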
BioSPPy/source/biosppy/signals/resp.py
ADDED
@@ -0,0 +1,116 @@
# -*- coding: utf-8 -*-
"""
biosppy.signals.resp
--------------------

This module provides methods to process Respiration (Resp) signals.

:copyright: (c) 2015-2018 by Instituto de Telecomunicacoes
:license: BSD 3-clause, see LICENSE for more details.
"""

# Imports
# compat
from __future__ import absolute_import, division, print_function

# 3rd party
import numpy as np

# local
from . import tools as st
from .. import plotting, utils


def resp(signal=None, sampling_rate=1000., path=None, show=True):
    """Process a raw Respiration signal and extract relevant signal features
    using default parameters.

    Parameters
    ----------
    signal : array
        Raw Respiration signal.
    sampling_rate : int, float, optional
        Sampling frequency (Hz).
    path : str, optional
        If provided, the plot will be saved to the specified file.
    show : bool, optional
        If True, show a summary plot.

    Returns
    -------
    ts : array
        Signal time axis reference (seconds).
    filtered : array
        Filtered Respiration signal.
    zeros : array
        Indices of Respiration zero crossings.
    resp_rate_ts : array
        Respiration rate time axis reference (seconds).
    resp_rate : array
        Instantaneous respiration rate (Hz).

    """

    # check inputs
    if signal is None:
        raise TypeError("Please specify an input signal.")

    # ensure numpy
    signal = np.array(signal)

    sampling_rate = float(sampling_rate)

    # filter signal
    filtered, _, _ = st.filter_signal(signal=signal,
                                      ftype='butter',
                                      band='bandpass',
                                      order=2,
                                      frequency=[0.1, 0.35],
                                      sampling_rate=sampling_rate)

    # compute zero crossings
    zeros, = st.zero_cross(signal=filtered, detrend=True)
    beats = zeros[::2]

    if len(beats) < 2:
        rate_idx = []
        rate = []
    else:
        # compute respiration rate
        rate_idx = beats[1:]
        rate = sampling_rate * (1. / np.diff(beats))

        # physiological limits
        indx = np.nonzero(rate <= 0.35)
        rate_idx = rate_idx[indx]
        rate = rate[indx]

        # smooth with moving average
        size = 3
        rate, _ = st.smoother(signal=rate,
                              kernel='boxcar',
                              size=size,
                              mirror=True)

    # get time vectors
    length = len(signal)
    T = (length - 1) / sampling_rate
    ts = np.linspace(0, T, length, endpoint=True)
    ts_rate = ts[rate_idx]

    # plot
    if show:
        plotting.plot_resp(ts=ts,
                           raw=signal,
                           filtered=filtered,
                           zeros=zeros,
                           resp_rate_ts=ts_rate,
                           resp_rate=rate,
                           path=path,
                           show=True)

    # output
    args = (ts, filtered, zeros, ts_rate, rate)
    names = ('ts', 'filtered', 'zeros', 'resp_rate_ts', 'resp_rate')

    return utils.ReturnTuple(args, names)
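The rate computation in `resp` takes every second zero crossing as a breath boundary and converts the spacing between consecutive boundaries into an instantaneous rate in Hz, discarding values above the physiological limit of 0.35 Hz. A small standalone sketch of that step (pure Python, no BioSPPy imports; names are illustrative):

```python
def rate_from_crossings(zeros, sampling_rate, max_rate=0.35):
    """Instantaneous respiration rate (Hz) from zero-crossing indices.

    Every second crossing marks the start of a breath cycle; the rate is
    the sampling rate divided by the spacing between consecutive starts,
    discarding values above the physiological limit.
    """
    beats = zeros[::2]  # keep crossings of one polarity only
    if len(beats) < 2:
        return [], []
    rate_idx, rate = [], []
    for prev, curr in zip(beats[:-1], beats[1:]):
        r = sampling_rate / (curr - prev)
        if r <= max_rate:  # physiological limit
            rate_idx.append(curr)
            rate.append(r)
    return rate_idx, rate
```

At 100 Hz, crossings every 200 samples alternate polarity, so breath starts are 400 samples apart and the rate is 0.25 Hz per cycle.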
BioSPPy/source/biosppy/signals/tools.py
ADDED
@@ -0,0 +1,2191 @@
# -*- coding: utf-8 -*-
"""
biosppy.signals.tools
---------------------

This module provides various signal analysis methods in the time and
frequency domains.

:copyright: (c) 2015-2018 by Instituto de Telecomunicacoes
:license: BSD 3-clause, see LICENSE for more details.
"""

# Imports
# compat
from __future__ import absolute_import, division, print_function
from six.moves import range
import six

# 3rd party
import sys
import numpy as np
import scipy.signal as ss
from scipy import interpolate, optimize
from scipy.stats import stats

# local
from .. import utils


def _norm_freq(frequency=None, sampling_rate=1000.0):
    """Normalize frequency to the Nyquist frequency (Fs/2).

    Parameters
    ----------
    frequency : int, float, list, array
        Frequencies to normalize.
    sampling_rate : int, float, optional
        Sampling frequency (Hz).

    Returns
    -------
    wn : float, array
        Normalized frequencies.

    """

    # check inputs
    if frequency is None:
        raise TypeError("Please specify a frequency to normalize.")

    # convert inputs to correct representation
    try:
        frequency = float(frequency)
    except TypeError:
        # maybe frequency is a list or array
        frequency = np.array(frequency, dtype="float")

    Fs = float(sampling_rate)

    wn = 2.0 * frequency / Fs

    return wn
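`_norm_freq` maps a frequency in Hz onto the [0, 1] range expected by SciPy's filter-design functions, where 1.0 corresponds to the Nyquist frequency Fs/2. A dependency-free sketch of the same mapping (a standalone replica for illustration, not the module's function):

```python
def norm_freq(frequency, sampling_rate=1000.0):
    """Normalize a frequency (Hz), or a sequence of them, to Nyquist.

    A value of 1.0 corresponds to sampling_rate / 2, matching the
    convention of SciPy design functions such as scipy.signal.butter.
    """
    try:
        return 2.0 * float(frequency) / sampling_rate
    except TypeError:
        # a sequence of band edges, e.g. [0.1, 0.35] for a bandpass
        return [2.0 * float(f) / sampling_rate for f in frequency]
```

So a 250 Hz cutoff at a 1000 Hz sampling rate becomes 0.5, i.e. half of Nyquist.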
def _filter_init(b, a, alpha=1.0):
    """Get an initial filter state that corresponds to the steady-state
    of the step response.

    Parameters
    ----------
    b : array
        Numerator coefficients.
    a : array
        Denominator coefficients.
    alpha : float, optional
        Scaling factor.

    Returns
    -------
    zi : array
        Initial filter state.

    """

    zi = alpha * ss.lfilter_zi(b, a)

    return zi


def _filter_signal(b, a, signal, zi=None, check_phase=True, **kwargs):
    """Filter a signal with given coefficients.

    Parameters
    ----------
    b : array
        Numerator coefficients.
    a : array
        Denominator coefficients.
    signal : array
        Signal to filter.
    zi : array, optional
        Initial filter state.
    check_phase : bool, optional
        If True, use the forward-backward technique.
    ``**kwargs`` : dict, optional
        Additional keyword arguments are passed to the underlying filtering
        function.

    Returns
    -------
    filtered : array
        Filtered signal.
    zf : array
        Final filter state.

    Notes
    -----
    * If check_phase is True, zi cannot be set.

    """

    # check inputs
    if check_phase and zi is not None:
        raise ValueError(
            "Incompatible arguments: initial filter state cannot be set when "
            "check_phase is True."
        )

    if zi is None:
        zf = None
        if check_phase:
            filtered = ss.filtfilt(b, a, signal, **kwargs)
        else:
            filtered = ss.lfilter(b, a, signal, **kwargs)
    else:
        filtered, zf = ss.lfilter(b, a, signal, zi=zi, **kwargs)

    return filtered, zf
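`_filter_signal` dispatches between zero-phase forward-backward filtering (`scipy.signal.filtfilt`) and causal, optionally stateful filtering (`scipy.signal.lfilter`). The phase difference can be seen with a plain moving average in pure Python — a minimal sketch (illustrative helpers, not BioSPPy code):

```python
def causal_ma(signal, size=3):
    """Causal moving average: the output at n uses samples n-size+1 .. n,
    so the filtered waveform lags behind the input."""
    out = []
    for n in range(len(signal)):
        lo = max(0, n - size + 1)
        window = signal[lo : n + 1]
        out.append(sum(window) / len(window))
    return out


def zero_phase_ma(signal, size=3):
    """Forward-backward moving average: filter, reverse, filter, reverse.

    Running the filter in both directions cancels the phase delay, which
    is what check_phase=True achieves via scipy.signal.filtfilt.
    """
    fwd = causal_ma(signal, size)
    bwd = causal_ma(fwd[::-1], size)
    return bwd[::-1]
```

Filtering an impulse at index 5 shows the effect: the causal pass shifts the smoothed energy to the right (centroid 6), while the forward-backward pass leaves it centered at 5.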
| 141 |
+
def _filter_resp(b, a, sampling_rate=1000.0, nfreqs=4096):
|
| 142 |
+
"""Compute the filter frequency response.
|
| 143 |
+
|
| 144 |
+
Parameters
|
| 145 |
+
----------
|
| 146 |
+
b : array
|
| 147 |
+
Numerator coefficients.
|
| 148 |
+
a : array
|
| 149 |
+
Denominator coefficients.
|
| 150 |
+
sampling_rate : int, float, optional
|
| 151 |
+
Sampling frequency (Hz).
|
| 152 |
+
nfreqs : int, optional
|
| 153 |
+
Number of frequency points to compute.
|
| 154 |
+
|
| 155 |
+
Returns
|
| 156 |
+
-------
|
| 157 |
+
freqs : array
|
| 158 |
+
Array of frequencies (Hz) at which the response was computed.
|
| 159 |
+
resp : array
|
| 160 |
+
Frequency response.
|
| 161 |
+
|
| 162 |
+
"""
|
| 163 |
+
|
| 164 |
+
w, resp = ss.freqz(b, a, nfreqs)
|
| 165 |
+
|
| 166 |
+
# convert frequencies
|
| 167 |
+
freqs = w * sampling_rate / (2.0 * np.pi)
|
| 168 |
+
|
| 169 |
+
return freqs, resp
|
| 170 |
+
|
| 171 |
+
|
def _get_window(kernel, size, **kwargs):
    """Return a window with the specified parameters.

    Parameters
    ----------
    kernel : str
        Type of window to create.
    size : int
        Size of the window.
    ``**kwargs`` : dict, optional
        Additional keyword arguments are passed to the underlying
        scipy.signal.windows function.

    Returns
    -------
    window : array
        Created window.

    """

    # mimics scipy.signal.get_window
    if kernel in ["blackman", "black", "blk"]:
        winfunc = ss.blackman
    elif kernel in ["triangle", "triang", "tri"]:
        winfunc = ss.triang
    elif kernel in ["hamming", "hamm", "ham"]:
        winfunc = ss.hamming
    elif kernel in ["bartlett", "bart", "brt"]:
        winfunc = ss.bartlett
    elif kernel in ["hanning", "hann", "han"]:
        winfunc = ss.hann
    elif kernel in ["blackmanharris", "blackharr", "bkh"]:
        winfunc = ss.blackmanharris
    elif kernel in ["parzen", "parz", "par"]:
        winfunc = ss.parzen
    elif kernel in ["bohman", "bman", "bmn"]:
        winfunc = ss.bohman
    elif kernel in ["nuttall", "nutl", "nut"]:
        winfunc = ss.nuttall
    elif kernel in ["barthann", "brthan", "bth"]:
        winfunc = ss.barthann
    elif kernel in ["flattop", "flat", "flt"]:
        winfunc = ss.flattop
    elif kernel in ["kaiser", "ksr"]:
        winfunc = ss.kaiser
    elif kernel in ["gaussian", "gauss", "gss"]:
        winfunc = ss.gaussian
    elif kernel in [
        "general gaussian",
        "general_gaussian",
        "general gauss",
        "general_gauss",
        "ggs",
    ]:
        winfunc = ss.general_gaussian
    elif kernel in ["boxcar", "box", "ones", "rect", "rectangular"]:
        winfunc = ss.boxcar
    elif kernel in ["slepian", "slep", "optimal", "dpss", "dss"]:
        winfunc = ss.slepian
    elif kernel in ["cosine", "halfcosine"]:
        winfunc = ss.cosine
    elif kernel in ["chebwin", "cheb"]:
        winfunc = ss.chebwin
    else:
        raise ValueError("Unknown window type.")

    try:
        window = winfunc(size, **kwargs)
    except TypeError as e:
        raise TypeError("Invalid window arguments: %s." % e)

    return window


def get_filter(
    ftype="FIR",
    band="lowpass",
    order=None,
    frequency=None,
    sampling_rate=1000.0,
    **kwargs
):
    """Compute digital (FIR or IIR) filter coefficients with the given
    parameters.

    Parameters
    ----------
    ftype : str
        Filter type:
            * Finite Impulse Response filter ('FIR');
            * Butterworth filter ('butter');
            * Chebyshev filters ('cheby1', 'cheby2');
            * Elliptic filter ('ellip');
            * Bessel filter ('bessel').
    band : str
        Band type:
            * Low-pass filter ('lowpass');
            * High-pass filter ('highpass');
            * Band-pass filter ('bandpass');
            * Band-stop filter ('bandstop').
    order : int
        Order of the filter.
    frequency : int, float, list, array
        Cutoff frequencies; format depends on type of band:
            * 'lowpass' or 'highpass': single frequency;
            * 'bandpass' or 'bandstop': pair of frequencies.
    sampling_rate : int, float, optional
        Sampling frequency (Hz).
    ``**kwargs`` : dict, optional
        Additional keyword arguments are passed to the underlying
        scipy.signal function.

    Returns
    -------
    b : array
        Numerator coefficients.
    a : array
        Denominator coefficients.

    See Also:
        scipy.signal

    """

    # check inputs
    if order is None:
        raise TypeError("Please specify the filter order.")
    if frequency is None:
        raise TypeError("Please specify the cutoff frequency.")
    if band not in ["lowpass", "highpass", "bandpass", "bandstop"]:
        raise ValueError(
            "Unknown band type %r; choose 'lowpass', 'highpass', "
            "'bandpass', or 'bandstop'." % band
        )

    # convert frequencies
    frequency = _norm_freq(frequency, sampling_rate)

    # get coeffs
    b, a = [], []
    if ftype == "FIR":
        # FIR filter
        if order % 2 == 0:
            order += 1
        a = np.array([1])
        if band in ["lowpass", "bandstop"]:
            b = ss.firwin(numtaps=order, cutoff=frequency, pass_zero=True, **kwargs)
        elif band in ["highpass", "bandpass"]:
            b = ss.firwin(numtaps=order, cutoff=frequency, pass_zero=False, **kwargs)
    elif ftype == "butter":
        # Butterworth filter
        b, a = ss.butter(
            N=order, Wn=frequency, btype=band, analog=False, output="ba", **kwargs
        )
    elif ftype == "cheby1":
        # Chebyshev type I filter
        b, a = ss.cheby1(
            N=order, Wn=frequency, btype=band, analog=False, output="ba", **kwargs
        )
    elif ftype == "cheby2":
        # Chebyshev type II filter
        b, a = ss.cheby2(
            N=order, Wn=frequency, btype=band, analog=False, output="ba", **kwargs
        )
    elif ftype == "ellip":
        # Elliptic filter
        b, a = ss.ellip(
            N=order, Wn=frequency, btype=band, analog=False, output="ba", **kwargs
        )
    elif ftype == "bessel":
        # Bessel filter
        b, a = ss.bessel(
            N=order, Wn=frequency, btype=band, analog=False, output="ba", **kwargs
        )

    return utils.ReturnTuple((b, a), ("b", "a"))


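As a standalone illustration of the IIR branch above (using scipy directly, outside this module), the sketch below designs a 4th-order Butterworth low-pass filter with a 40 Hz cutoff at a 1000 Hz sampling rate, normalizing the cutoff by the Nyquist frequency as `_norm_freq` would:

```python
import numpy as np
from scipy import signal as ss

# Design a 4th-order Butterworth low-pass filter with a 40 Hz cutoff,
# mirroring the get_filter(ftype="butter", band="lowpass", ...) branch.
sampling_rate = 1000.0
wn = 40.0 / (sampling_rate / 2.0)  # cutoff normalized by the Nyquist frequency
b, a = ss.butter(N=4, Wn=wn, btype="lowpass", analog=False, output="ba")

# A sanity check: the filter is stable if all poles lie inside the unit circle.
poles = np.roots(a)
stable = bool(np.all(np.abs(poles) < 1.0))
```

An order-4 IIR design yields 5 numerator and 5 denominator coefficients.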
def filter_signal(
    signal=None,
    ftype="FIR",
    band="lowpass",
    order=None,
    frequency=None,
    sampling_rate=1000.0,
    **kwargs
):
    """Filter a signal according to the given parameters.

    Parameters
    ----------
    signal : array
        Signal to filter.
    ftype : str
        Filter type:
            * Finite Impulse Response filter ('FIR');
            * Butterworth filter ('butter');
            * Chebyshev filters ('cheby1', 'cheby2');
            * Elliptic filter ('ellip');
            * Bessel filter ('bessel').
    band : str
        Band type:
            * Low-pass filter ('lowpass');
            * High-pass filter ('highpass');
            * Band-pass filter ('bandpass');
            * Band-stop filter ('bandstop').
    order : int
        Order of the filter.
    frequency : int, float, list, array
        Cutoff frequencies; format depends on type of band:
            * 'lowpass' or 'highpass': single frequency;
            * 'bandpass' or 'bandstop': pair of frequencies.
    sampling_rate : int, float, optional
        Sampling frequency (Hz).
    ``**kwargs`` : dict, optional
        Additional keyword arguments are passed to the underlying
        scipy.signal function.

    Returns
    -------
    signal : array
        Filtered signal.
    sampling_rate : float
        Sampling frequency (Hz).
    params : dict
        Filter parameters.

    Notes
    -----
    * Uses a forward-backward filter implementation. Therefore, the combined
      filter has linear phase.

    """

    # check inputs
    if signal is None:
        raise TypeError("Please specify a signal to filter.")

    # get filter
    b, a = get_filter(
        ftype=ftype,
        order=order,
        frequency=frequency,
        sampling_rate=sampling_rate,
        band=band,
        **kwargs
    )

    # filter
    filtered, _ = _filter_signal(b, a, signal, check_phase=True)

    # output
    params = {
        "ftype": ftype,
        "order": order,
        "frequency": frequency,
        "band": band,
    }
    params.update(kwargs)

    args = (filtered, sampling_rate, params)
    names = ("signal", "sampling_rate", "params")

    return utils.ReturnTuple(args, names)


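The forward-backward (zero-phase) behavior mentioned in the Notes above can be sketched with scipy's `filtfilt` directly: low-pass filtering a 5 Hz sine corrupted by 120 Hz noise should bring the result much closer to the clean component, without phase delay.

```python
import numpy as np
from scipy import signal as ss

# Forward-backward filtering: filtfilt runs the filter in both directions,
# which cancels the phase response (the property the Notes refer to).
fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
clean = np.sin(2 * np.pi * 5 * t)                  # 5 Hz component to keep
noisy = clean + 0.5 * np.sin(2 * np.pi * 120 * t)  # 120 Hz noise to remove

b, a = ss.butter(4, 40.0 / (fs / 2.0), btype="lowpass")
filtered = ss.filtfilt(b, a, noisy)

# Mean squared error against the clean signal, before and after filtering.
err_noisy = np.mean((noisy - clean) ** 2)
err_filt = np.mean((filtered - clean) ** 2)
```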
class OnlineFilter(object):
    """Online filtering.

    Parameters
    ----------
    b : array
        Numerator coefficients.
    a : array
        Denominator coefficients.

    """

    def __init__(self, b=None, a=None):
        # check inputs
        if b is None:
            raise TypeError("Please specify the numerator coefficients.")

        if a is None:
            raise TypeError("Please specify the denominator coefficients.")

        # self things
        self.b = b
        self.a = a

        # reset
        self.reset()

    def reset(self):
        """Reset the filter state."""

        self.zi = None

    def filter(self, signal=None):
        """Filter a signal segment.

        Parameters
        ----------
        signal : array
            Signal segment to filter.

        Returns
        -------
        filtered : array
            Filtered signal segment.

        """

        # check input
        if signal is None:
            raise TypeError("Please specify the input signal.")

        if self.zi is None:
            self.zi = signal[0] * ss.lfilter_zi(self.b, self.a)

        filtered, self.zi = ss.lfilter(self.b, self.a, signal, zi=self.zi)

        return utils.ReturnTuple((filtered,), ("filtered",))


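The state-carrying pattern used by `OnlineFilter.filter` can be shown standalone: feeding a signal through `ss.lfilter` in chunks, threading the final conditions `zi` of each call into the next, reproduces the single-call result exactly.

```python
import numpy as np
from scipy import signal as ss

# Block-wise filtering with carried state, as in OnlineFilter.filter:
# chunked processing matches one batch lfilter call bit-for-bit.
b, a = ss.butter(2, 0.2)
signal = np.random.RandomState(0).randn(400)

zi = signal[0] * ss.lfilter_zi(b, a)  # initial state, as in the class above
chunks = []
for start in range(0, len(signal), 100):
    seg = signal[start:start + 100]
    out, zi = ss.lfilter(b, a, seg, zi=zi)  # zi carries state across chunks
    chunks.append(out)
streamed = np.concatenate(chunks)

batch, _ = ss.lfilter(b, a, signal, zi=signal[0] * ss.lfilter_zi(b, a))
```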
def smoother(signal=None, kernel="boxzen", size=10, mirror=True, **kwargs):
    """Smooth a signal using an N-point moving average [MAvg]_ filter.

    This implementation uses the convolution of a filter kernel with the input
    signal to compute the smoothed signal [Smit97]_.

    Available kernels: median, boxzen, boxcar, triang, blackman, hamming, hann,
    bartlett, flattop, parzen, bohman, blackmanharris, nuttall, barthann,
    kaiser (needs beta), gaussian (needs std), general_gaussian (needs power,
    width), slepian (needs width), chebwin (needs attenuation).

    Parameters
    ----------
    signal : array
        Signal to smooth.
    kernel : str, array, optional
        Type of kernel to use; if array, use directly as the kernel.
    size : int, optional
        Size of the kernel; ignored if kernel is an array.
    mirror : bool, optional
        If True, signal edges are extended to avoid boundary effects.
    ``**kwargs`` : dict, optional
        Additional keyword arguments are passed to the underlying
        scipy.signal.windows function.

    Returns
    -------
    signal : array
        Smoothed signal.
    params : dict
        Smoother parameters.

    Notes
    -----
    * When the kernel is 'median', mirror is ignored.

    References
    ----------
    .. [MAvg] Wikipedia, "Moving Average",
       http://en.wikipedia.org/wiki/Moving_average
    .. [Smit97] S. W. Smith, "Moving Average Filters - Implementation by
       Convolution", http://www.dspguide.com/ch15/1.htm, 1997

    """

    # check inputs
    if signal is None:
        raise TypeError("Please specify a signal to smooth.")

    length = len(signal)

    if isinstance(kernel, six.string_types):
        # check length
        if size > length:
            size = length - 1

        if size < 1:
            size = 1

        if kernel == "boxzen":
            # hybrid method
            # 1st pass - boxcar kernel
            aux, _ = smoother(signal, kernel="boxcar", size=size, mirror=mirror)

            # 2nd pass - parzen kernel
            smoothed, _ = smoother(aux, kernel="parzen", size=size, mirror=mirror)

            params = {"kernel": kernel, "size": size, "mirror": mirror}

            args = (smoothed, params)
            names = ("signal", "params")

            return utils.ReturnTuple(args, names)

        elif kernel == "median":
            # median filter
            if size % 2 == 0:
                raise ValueError("When the kernel is 'median', size must be odd.")

            smoothed = ss.medfilt(signal, kernel_size=size)

            params = {"kernel": kernel, "size": size, "mirror": mirror}

            args = (smoothed, params)
            names = ("signal", "params")

            return utils.ReturnTuple(args, names)

        else:
            win = _get_window(kernel, size, **kwargs)

    elif isinstance(kernel, np.ndarray):
        win = kernel
        size = len(win)

        # check length
        if size > length:
            raise ValueError("Kernel size is bigger than signal length.")

        if size < 1:
            raise ValueError("Kernel size is smaller than 1.")

    else:
        raise TypeError("Unknown kernel type.")

    # convolve
    w = win / win.sum()
    if mirror:
        aux = np.concatenate(
            (signal[0] * np.ones(size), signal, signal[-1] * np.ones(size))
        )
        smoothed = np.convolve(w, aux, mode="same")
        smoothed = smoothed[size:-size]
    else:
        smoothed = np.convolve(w, signal, mode="same")

    # output
    params = {"kernel": kernel, "size": size, "mirror": mirror}
    params.update(kwargs)

    args = (smoothed, params)
    names = ("signal", "params")

    return utils.ReturnTuple(args, names)


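The convolution core of `smoother` can be reproduced standalone with numpy: normalize a boxcar kernel, extend the edges with the boundary values (the `mirror=True` path), convolve, and trim the padding.

```python
import numpy as np

# The core of smoother(): convolve a normalized boxcar kernel with the
# signal, after extending both edges by the boundary values.
size = 5
signal = np.array([0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0])

w = np.ones(size) / size  # normalized kernel (moving average)
aux = np.concatenate((signal[0] * np.ones(size), signal, signal[-1] * np.ones(size)))
smoothed = np.convolve(w, aux, mode="same")[size:-size]  # trim the padding
```

Averaging values in [0, 1] keeps the output in [0, 1] and preserves the signal length.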
def analytic_signal(signal=None, N=None):
    """Compute the analytic signal, using the Hilbert Transform.

    Parameters
    ----------
    signal : array
        Input signal.
    N : int, optional
        Number of Fourier components; default is `len(signal)`.

    Returns
    -------
    amplitude : array
        Amplitude envelope of the analytic signal.
    phase : array
        Instantaneous phase component of the analytic signal.

    """

    # check inputs
    if signal is None:
        raise TypeError("Please specify an input signal.")

    # hilbert transform
    asig = ss.hilbert(signal, N=N)

    # amplitude envelope
    amp = np.absolute(asig)

    # instantaneous phase
    phase = np.angle(asig)

    return utils.ReturnTuple((amp, phase), ("amplitude", "phase"))


def phase_locking(signal1=None, signal2=None, N=None):
    """Compute the Phase-Locking Factor (PLF) between two signals.

    Parameters
    ----------
    signal1 : array
        First input signal.
    signal2 : array
        Second input signal.
    N : int, optional
        Number of Fourier components.

    Returns
    -------
    plf : float
        The PLF between the two signals.

    """

    # check inputs
    if signal1 is None:
        raise TypeError("Please specify the first input signal.")

    if signal2 is None:
        raise TypeError("Please specify the second input signal.")

    if len(signal1) != len(signal2):
        raise ValueError("The input signals must have the same length.")

    # compute analytic signal
    asig1 = ss.hilbert(signal1, N=N)
    phase1 = np.angle(asig1)

    asig2 = ss.hilbert(signal2, N=N)
    phase2 = np.angle(asig2)

    # compute PLF
    plf = np.absolute(np.mean(np.exp(1j * (phase1 - phase2))))

    return utils.ReturnTuple((plf,), ("plf",))


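The PLF computation above can be checked by hand: for two sines of the same frequency with a constant phase offset, the instantaneous phase difference is (near-)constant, so |mean(exp(j·(φ1 − φ2)))| should be close to 1 (exactly 1 would mean perfect locking; Hilbert edge effects pull it slightly below).

```python
import numpy as np
from scipy import signal as ss

# Phase-Locking Factor by hand: same frequency, constant phase shift.
t = np.linspace(0, 1, 1000, endpoint=False)
s1 = np.sin(2 * np.pi * 10 * t)
s2 = np.sin(2 * np.pi * 10 * t + 0.5)  # shifted by a constant 0.5 rad

phase1 = np.angle(ss.hilbert(s1))
phase2 = np.angle(ss.hilbert(s2))
plf = np.abs(np.mean(np.exp(1j * (phase1 - phase2))))
```

By construction the PLF is the magnitude of a mean of unit vectors, so it can never exceed 1.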
def power_spectrum(
    signal=None, sampling_rate=1000.0, pad=None, pow2=False, decibel=True
):
    """Compute the power spectrum of a signal (one-sided).

    Parameters
    ----------
    signal : array
        Input signal.
    sampling_rate : int, float, optional
        Sampling frequency (Hz).
    pad : int, optional
        Padding for the Fourier Transform (number of zeros added);
        defaults to no padding.
    pow2 : bool, optional
        If True, rounds the number of points `N = len(signal) + pad` to the
        nearest power of 2 greater than N.
    decibel : bool, optional
        If True, returns the power in decibels.

    Returns
    -------
    freqs : array
        Array of frequencies (Hz) at which the power was computed.
    power : array
        Power spectrum.

    """

    # check inputs
    if signal is None:
        raise TypeError("Please specify an input signal.")

    npoints = len(signal)

    if pad is not None:
        if pad >= 0:
            npoints += pad
        else:
            raise ValueError("Padding must be a positive integer.")

    # power of 2
    if pow2:
        npoints = int(2 ** np.ceil(np.log2(npoints)))

    Nyq = float(sampling_rate) / 2
    hpoints = npoints // 2

    freqs = np.linspace(0, Nyq, hpoints)
    power = np.abs(np.fft.fft(signal, npoints)) / npoints

    # one-sided
    power = power[:hpoints]
    power[1:] *= 2
    power = np.power(power, 2)

    if decibel:
        power = 10.0 * np.log10(power)

    return utils.ReturnTuple((freqs, power), ("freqs", "power"))


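A standalone numpy sketch of the same one-sided construction: FFT magnitude normalized by the number of points, first half kept, non-DC bins doubled, then squared. For a pure 50 Hz sine the spectral peak should land at (approximately) 50 Hz.

```python
import numpy as np

# One-sided power spectrum of a 50 Hz sine, following the steps above.
fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 50 * t)

npoints = len(x)
hpoints = npoints // 2
freqs = np.linspace(0, fs / 2, hpoints)
power = np.abs(np.fft.fft(x, npoints)) / npoints

power = power[:hpoints]  # keep the one-sided half
power[1:] *= 2           # compensate energy of discarded negative frequencies
power = power ** 2

peak_freq = freqs[np.argmax(power)]
```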
def welch_spectrum(
    signal=None,
    sampling_rate=1000.0,
    size=None,
    overlap=None,
    window="hanning",
    window_kwargs=None,
    pad=None,
    decibel=True,
):
    """Compute the power spectrum of a signal using Welch's method (one-sided).

    Parameters
    ----------
    signal : array
        Input signal.
    sampling_rate : int, float, optional
        Sampling frequency (Hz).
    size : int, optional
        Number of points in each Welch segment;
        defaults to the equivalent of 1 second;
        ignored when 'window' is an array.
    overlap : int, optional
        Number of points to overlap between segments; defaults to `size / 2`.
    window : str, array, optional
        Type of window to use.
    window_kwargs : dict, optional
        Additional keyword arguments to pass on window creation; ignored if
        'window' is an array.
    pad : int, optional
        Padding for the Fourier Transform (number of zeros added);
        defaults to no padding.
    decibel : bool, optional
        If True, returns the power in decibels.

    Returns
    -------
    freqs : array
        Array of frequencies (Hz) at which the power was computed.
    power : array
        Power spectrum.

    Notes
    -----
    * Detrends each Welch segment by removing the mean.

    """

    # check inputs
    if signal is None:
        raise TypeError("Please specify an input signal.")

    length = len(signal)
    sampling_rate = float(sampling_rate)

    if size is None:
        size = int(sampling_rate)

    if window_kwargs is None:
        window_kwargs = {}

    if isinstance(window, six.string_types):
        win = _get_window(window, size, **window_kwargs)
    elif isinstance(window, np.ndarray):
        win = window
        size = len(win)

    if size > length:
        raise ValueError("Segment size must be smaller than signal length.")

    if overlap is None:
        overlap = size // 2
    elif overlap > size:
        raise ValueError("Overlap must be smaller than segment size.")

    nfft = size
    if pad is not None:
        if pad >= 0:
            nfft += pad
        else:
            raise ValueError("Padding must be a positive integer.")

    freqs, power = ss.welch(
        signal,
        fs=sampling_rate,
        window=win,
        nperseg=size,
        noverlap=overlap,
        nfft=nfft,
        detrend="constant",
        return_onesided=True,
        scaling="spectrum",
    )

    # compensate one-sided
    power *= 2

    if decibel:
        power = 10.0 * np.log10(power)

    return utils.ReturnTuple((freqs, power), ("freqs", "power"))


def band_power(freqs=None, power=None, frequency=None, decibel=True):
    """Compute the average power in a frequency band.

    Parameters
    ----------
    freqs : array
        Array of frequencies (Hz) at which the power was computed.
    power : array
        Input power spectrum.
    frequency : list, array
        Pair of frequencies defining the band.
    decibel : bool, optional
        If True, input power is in decibels.

    Returns
    -------
    avg_power : float
        The average power in the band.

    """

    # check inputs
    if freqs is None:
        raise TypeError("Please specify the 'freqs' array.")

    if power is None:
        raise TypeError("Please specify the input power spectrum.")

    if len(freqs) != len(power):
        raise ValueError(
            "The input 'freqs' and 'power' arrays must have the same length."
        )

    if frequency is None:
        raise TypeError("Please specify the band frequencies.")

    try:
        f1, f2 = frequency
    except ValueError:
        raise ValueError("Input 'frequency' must be a pair of frequencies.")

    # make frequencies sane
    if f1 > f2:
        f1, f2 = f2, f1

    if f1 < freqs[0]:
        f1 = freqs[0]
    if f2 > freqs[-1]:
        f2 = freqs[-1]

    # average
    sel = np.nonzero(np.logical_and(f1 <= freqs, freqs <= f2))[0]

    if decibel:
        aux = 10 ** (power / 10.0)
        avg = np.mean(aux[sel])
        avg = 10.0 * np.log10(avg)
    else:
        avg = np.mean(power[sel])

    return utils.ReturnTuple((avg,), ("avg_power",))


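The band selection and averaging above (linear-scale branch; the decibel branch averages after converting back to linear) can be sketched standalone with a toy spectrum whose power is elevated inside a known band:

```python
import numpy as np

# Average power inside a band, as in the linear branch of band_power().
freqs = np.linspace(0, 500, 251)  # 2 Hz resolution
power = np.ones_like(freqs)
power[(freqs >= 100) & (freqs <= 200)] = 4.0  # band with elevated power

# select the bins inside the band and average them
sel = np.nonzero(np.logical_and(100 <= freqs, freqs <= 200))[0]
avg = np.mean(power[sel])
```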
def signal_stats(signal=None):
    """Compute various metrics describing the signal.

    Parameters
    ----------
    signal : array
        Input signal.

    Returns
    -------
    mean : float
        Mean of the signal.
    median : float
        Median of the signal.
    min : float
        Minimum signal value.
    max : float
        Maximum signal value.
    max_amp : float
        Maximum absolute signal amplitude, in relation to the mean.
    var : float
        Signal variance (unbiased).
    std_dev : float
        Standard signal deviation (unbiased).
    abs_dev : float
        Mean absolute signal deviation around the median.
    kurtosis : float
        Signal kurtosis (unbiased).
    skew : float
        Signal skewness (unbiased).

    """

    # check inputs
    if signal is None:
        raise TypeError("Please specify an input signal.")

    # ensure numpy
    signal = np.array(signal)

    # mean
    mean = np.mean(signal)

    # median
    median = np.median(signal)

    # min
    minVal = np.min(signal)

    # max
    maxVal = np.max(signal)

    # maximum amplitude
    maxAmp = np.abs(signal - mean).max()

    # variance
    sigma2 = signal.var(ddof=1)

    # standard deviation
    sigma = signal.std(ddof=1)

    # absolute deviation
    ad = np.mean(np.abs(signal - median))

    # kurtosis
    kurt = stats.kurtosis(signal, bias=False)

    # skewness
    skew = stats.skew(signal, bias=False)

    # output
    args = (mean, median, minVal, maxVal, maxAmp, sigma2, sigma, ad, kurt, skew)
    names = (
        "mean",
        "median",
        "min",
        "max",
        "max_amp",
        "var",
        "std_dev",
        "abs_dev",
        "kurtosis",
        "skewness",
    )

    return utils.ReturnTuple(args, names)


def normalize(signal=None, ddof=1):
    """Normalize a signal to zero mean and unitary standard deviation.

    Parameters
    ----------
    signal : array
        Input signal.
    ddof : int, optional
        Delta degrees of freedom for standard deviation computation;
        the divisor is `N - ddof`, where `N` is the number of elements;
        default is one.

    Returns
    -------
    signal : array
        Normalized signal.

    """

    # check inputs
    if signal is None:
        raise TypeError("Please specify an input signal.")

    # ensure numpy
    signal = np.array(signal)

    normalized = signal - signal.mean()
    normalized /= normalized.std(ddof=ddof)

    return utils.ReturnTuple((normalized,), ("signal",))


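The normalization above is a plain z-score with an unbiased standard deviation (ddof=1); a standalone numpy check confirms the result has zero mean and unit deviation:

```python
import numpy as np

# Zero-mean, unit-standard-deviation normalization, as in normalize().
signal = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
normalized = signal - signal.mean()      # remove the mean
normalized /= normalized.std(ddof=1)     # divide by the unbiased std
```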
def zero_cross(signal=None, detrend=False):
    """Locate the indices where the signal crosses zero.

    Parameters
    ----------
    signal : array
        Input signal.
    detrend : bool, optional
        If True, remove signal mean before computation.

    Returns
    -------
    zeros : array
        Indices of zero crossings.

    Notes
    -----
    * When the signal crosses zero between samples, the first index
      is returned.

    """

    # check inputs
    if signal is None:
        raise TypeError("Please specify an input signal.")

    if detrend:
        signal = signal - np.mean(signal)

    # zeros
    df = np.diff(np.sign(signal))
    zeros = np.nonzero(np.abs(df) > 0)[0]

    return utils.ReturnTuple((zeros,), ("zeros",))


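The sign-change detection above can be traced on a small example; note how, per the Notes, a crossing between samples reports the index before the crossing:

```python
import numpy as np

# Zero crossings via sign changes, as in zero_cross().
signal = np.array([1.0, 0.5, -0.5, -1.0, -0.2, 0.3, 1.0])
df = np.diff(np.sign(signal))        # nonzero wherever the sign flips
zeros = np.nonzero(np.abs(df) > 0)[0]
# crossings happen between indices 1-2 and 4-5, so the first indices are kept
```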
def find_extrema(signal=None, mode="both"):
    """Locate local extrema points in a signal.

    Based on Fermat's Theorem [Ferm]_.

    Parameters
    ----------
    signal : array
        Input signal.
    mode : str, optional
        Whether to find maxima ('max'), minima ('min'), or both ('both').

    Returns
    -------
    extrema : array
        Indices of the extrema points.
    values : array
        Signal values at the extrema points.

    References
    ----------
    .. [Ferm] Wikipedia, "Fermat's theorem (stationary points)",
       https://en.wikipedia.org/wiki/Fermat%27s_theorem_(stationary_points)

    """

    # check inputs
    if signal is None:
        raise TypeError("Please specify an input signal.")

    if mode not in ["max", "min", "both"]:
        raise ValueError("Unknown mode %r." % mode)

    aux = np.diff(np.sign(np.diff(signal)))

    if mode == "both":
        aux = np.abs(aux)
        extrema = np.nonzero(aux > 0)[0] + 1
    elif mode == "max":
        extrema = np.nonzero(aux < 0)[0] + 1
    elif mode == "min":
        extrema = np.nonzero(aux > 0)[0] + 1

    values = signal[extrema]

    return utils.ReturnTuple((extrema, values), ("extrema", "values"))


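The double-difference trick above (the sign of the first difference changes at an extremum) can be verified on a toy signal with two peaks and one valley:

```python
import numpy as np

# Local extrema via the sign of the first difference, as in find_extrema().
signal = np.array([0.0, 1.0, 0.0, -1.0, 0.0, 2.0, 0.0])
aux = np.diff(np.sign(np.diff(signal)))

maxima = np.nonzero(aux < 0)[0] + 1  # slope goes + -> -
minima = np.nonzero(aux > 0)[0] + 1  # slope goes - -> +
```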
def windower(
    signal=None,
    size=None,
    step=None,
    fcn=None,
    fcn_kwargs=None,
    kernel="boxcar",
    kernel_kwargs=None,
):
    """Apply a function to a signal in sequential windows, with optional overlap.

    Available window kernels: boxcar, triang, blackman, hamming, hann,
    bartlett, flattop, parzen, bohman, blackmanharris, nuttall, barthann,
    kaiser (needs beta), gaussian (needs std), general_gaussian (needs power,
    width), slepian (needs width), chebwin (needs attenuation).

    Parameters
    ----------
    signal : array
        Input signal.
    size : int
        Size of the signal window.
    step : int, optional
        Size of window shift; if None, there is no overlap.
    fcn : callable
        Function to apply to each window.
    fcn_kwargs : dict, optional
        Additional keyword arguments to pass to 'fcn'.
    kernel : str, array, optional
        Type of kernel to use; if array, use directly as the kernel.
    kernel_kwargs : dict, optional
        Additional keyword arguments to pass on window creation; ignored if
        'kernel' is an array.

    Returns
    -------
    index : array
        Indices characterizing window locations (start of the window).
    values : array
        Concatenated output of calling 'fcn' on each window.

    """

    # check inputs
    if signal is None:
        raise TypeError("Please specify an input signal.")

    if fcn is None:
        raise TypeError("Please specify a function to apply to each window.")

    if fcn_kwargs is None:
        fcn_kwargs = {}

    if kernel_kwargs is None:
        kernel_kwargs = {}

    length = len(signal)

    if isinstance(kernel, six.string_types):
        # check size
        if size > length:
            raise ValueError("Window size must be smaller than signal length.")

        win = _get_window(kernel, size, **kernel_kwargs)
    elif isinstance(kernel, np.ndarray):
        win = kernel
        size = len(win)

        # check size
        if size > length:
            raise ValueError("Window size must be smaller than signal length.")

    if step is None:
        step = size

    if step <= 0:
        raise ValueError("Step size must be at least 1.")

    # number of windows
    nb = 1 + (length - size) // step

    # check signal dimensionality
    if np.ndim(signal) == 2:
        # time along 1st dim, tile window
        nch = np.shape(signal)[1]
        win = np.tile(np.reshape(win, (size, 1)), nch)

    index = []
    values = []
    for i in range(nb):
        start = i * step
        stop = start + size
        index.append(start)

        aux = signal[start:stop] * win
# apply function
|
| 1230 |
+
out = fcn(aux, **fcn_kwargs)
|
| 1231 |
+
values.append(out)
|
| 1232 |
+
|
| 1233 |
+
# transform to numpy
|
| 1234 |
+
index = np.array(index, dtype="int")
|
| 1235 |
+
values = np.array(values)
|
| 1236 |
+
|
| 1237 |
+
return utils.ReturnTuple((index, values), ("index", "values"))
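The core of `windower` is the sliding-window loop above. A minimal standalone sketch of that loop, using a plain boxcar (all-ones) kernel and numpy only (`windowed_apply` is an illustrative name, not part of biosppy):

```python
import numpy as np

def windowed_apply(signal, size, step, fcn):
    """Apply `fcn` to sequential windows of `signal` of length `size`,
    advancing by `step` samples (step < size gives overlapping windows)."""
    nb = 1 + (len(signal) - size) // step  # number of full windows
    index = np.arange(nb) * step           # window start indices
    values = np.array([fcn(signal[i:i + size]) for i in index])
    return index, values

sig = np.arange(10, dtype=float)
index, values = windowed_apply(sig, size=4, step=2, fcn=np.mean)
# index → [0, 2, 4, 6]; values → window means [1.5, 3.5, 5.5, 7.5]
```

With `step == size` the windows tile the signal without overlap, which is the default behavior of `windower` when `step` is None.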


def synchronize(x=None, y=None, detrend=True):
    """Align two signals based on cross-correlation.

    Parameters
    ----------
    x : array
        First input signal.
    y : array
        Second input signal.
    detrend : bool, optional
        If True, remove signal means before computation.

    Returns
    -------
    delay : int
        Delay (number of samples) of 'x' in relation to 'y';
        if 'delay' < 0, 'x' is ahead in relation to 'y';
        if 'delay' > 0, 'x' is delayed in relation to 'y'.
    corr : float
        Value of maximum correlation.
    synch_x : array
        Biggest possible portion of 'x' in synchronization.
    synch_y : array
        Biggest possible portion of 'y' in synchronization.

    """

    # check inputs
    if x is None:
        raise TypeError("Please specify the first input signal.")

    if y is None:
        raise TypeError("Please specify the second input signal.")

    n1 = len(x)
    n2 = len(y)

    if detrend:
        x = x - np.mean(x)
        y = y - np.mean(y)

    # correlate
    corr = np.correlate(x, y, mode="full")
    d = np.arange(-n2 + 1, n1, dtype="int")
    ind = np.argmax(corr)

    delay = d[ind]
    maxCorr = corr[ind]

    # get synchronization overlap
    if delay < 0:
        c = min([n1, len(y[-delay:])])
        synch_x = x[:c]
        synch_y = y[-delay : -delay + c]
    elif delay > 0:
        c = min([n2, len(x[delay:])])
        synch_x = x[delay : delay + c]
        synch_y = y[:c]
    else:
        c = min([n1, n2])
        synch_x = x[:c]
        synch_y = y[:c]

    # output
    args = (delay, maxCorr, synch_x, synch_y)
    names = ("delay", "corr", "synch_x", "synch_y")

    return utils.ReturnTuple(args, names)
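The delay estimate is simply the lag of the full cross-correlation peak. A minimal sketch of that step alone (`estimate_delay` is an illustrative name; `synchronize` additionally returns the overlapping portions):

```python
import numpy as np

def estimate_delay(x, y):
    """Lag of x relative to y: negative => x leads, positive => x lags."""
    x = x - np.mean(x)
    y = y - np.mean(y)
    corr = np.correlate(x, y, mode="full")
    lags = np.arange(-len(y) + 1, len(x))  # lags matching 'full' output
    return lags[np.argmax(corr)]

t = np.arange(100)
base = np.sin(2 * np.pi * t / 20.0)
delayed = np.roll(base, 5)  # copy of base shifted right by 5 samples
# estimate_delay(delayed, base) recovers the +5 sample delay
```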


def pearson_correlation(x=None, y=None):
    """Compute the Pearson Correlation Coefficient between two signals.

    The coefficient is given by:

    .. math::

        r_{xy} = \\frac{E[(X - \\mu_X) (Y - \\mu_Y)]}{\\sigma_X \\sigma_Y}

    Parameters
    ----------
    x : array
        First input signal.
    y : array
        Second input signal.

    Returns
    -------
    rxy : float
        Pearson correlation coefficient, ranging between -1 and +1.

    Raises
    ------
    ValueError
        If the input signals do not have the same length.

    """

    print(
        "tools.pearson_correlation is deprecated, use stats.pearson_correlation instead",
        file=sys.stderr,
    )

    # check inputs
    if x is None:
        raise TypeError("Please specify the first input signal.")

    if y is None:
        raise TypeError("Please specify the second input signal.")

    # ensure numpy
    x = np.array(x)
    y = np.array(y)

    n = len(x)

    if n != len(y):
        raise ValueError("Input signals must have the same length.")

    mx = np.mean(x)
    my = np.mean(y)

    Sxy = np.sum(x * y) - n * mx * my
    Sxx = np.sum(np.power(x, 2)) - n * mx**2
    Syy = np.sum(np.power(y, 2)) - n * my**2

    rxy = Sxy / (np.sqrt(Sxx) * np.sqrt(Syy))

    # avoid propagation of numerical errors
    if rxy > 1.0:
        rxy = 1.0
    elif rxy < -1.0:
        rxy = -1.0

    return utils.ReturnTuple((rxy,), ("rxy",))
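The docstring formula is computed above via the expanded sums Sxy, Sxx, Syy. A minimal sketch of the same computation (illustrative `pearson` helper, numpy only):

```python
import numpy as np

def pearson(x, y):
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    mx, my = x.mean(), y.mean()
    Sxy = np.sum(x * y) - n * mx * my    # sum of cross-deviations
    Sxx = np.sum(x ** 2) - n * mx ** 2   # sum of squared x-deviations
    Syy = np.sum(y ** 2) - n * my ** 2   # sum of squared y-deviations
    # clip guards against tiny numerical excursions outside [-1, 1]
    return float(np.clip(Sxy / np.sqrt(Sxx * Syy), -1.0, 1.0))

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 6.0, 8.0])
r = pearson(x, y)  # y = 2x, so r → 1.0
```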


def rms_error(x=None, y=None):
    """Compute the Root-Mean-Square Error between two signals.

    The error is given by:

    .. math::

        rmse = \\sqrt{E[(X - Y)^2]}

    Parameters
    ----------
    x : array
        First input signal.
    y : array
        Second input signal.

    Returns
    -------
    rmse : float
        Root-mean-square error.

    Raises
    ------
    ValueError
        If the input signals do not have the same length.

    """

    # check inputs
    if x is None:
        raise TypeError("Please specify the first input signal.")

    if y is None:
        raise TypeError("Please specify the second input signal.")

    # ensure numpy
    x = np.array(x)
    y = np.array(y)

    n = len(x)

    if n != len(y):
        raise ValueError("Input signals must have the same length.")

    rmse = np.sqrt(np.mean(np.power(x - y, 2)))

    return utils.ReturnTuple((rmse,), ("rmse",))
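A minimal sketch of the RMSE computation above on a concrete pair (illustrative `rmse` helper):

```python
import numpy as np

def rmse(x, y):
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float(np.sqrt(np.mean((x - y) ** 2)))

# element-wise errors are 0, 0, 2 => mean square 4/3, rmse = sqrt(4/3)
err = rmse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0])
```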


def get_heart_rate(beats=None, sampling_rate=1000.0, smooth=False, size=3):
    """Compute instantaneous heart rate from an array of beat indices.

    Parameters
    ----------
    beats : array
        Beat location indices.
    sampling_rate : int, float, optional
        Sampling frequency (Hz).
    smooth : bool, optional
        If True, perform smoothing on the resulting heart rate.
    size : int, optional
        Size of smoothing window; ignored if `smooth` is False.

    Returns
    -------
    index : array
        Heart rate location indices.
    heart_rate : array
        Instantaneous heart rate (bpm).

    Notes
    -----
    * Assumes normal human heart rate to be between 40 and 200 bpm.

    """

    # check inputs
    if beats is None:
        raise TypeError("Please specify the input beat indices.")

    if len(beats) < 2:
        raise ValueError("Not enough beats to compute heart rate.")

    # compute heart rate
    ts = beats[1:]
    hr = sampling_rate * (60.0 / np.diff(beats))

    # physiological limits
    indx = np.nonzero(np.logical_and(hr >= 40, hr <= 200))
    ts = ts[indx]
    hr = hr[indx]

    # smooth with moving average
    if smooth and (len(hr) > 1):
        hr, _ = smoother(signal=hr, kernel="boxcar", size=size, mirror=True)

    return utils.ReturnTuple((ts, hr), ("index", "heart_rate"))
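The conversion above turns inter-beat intervals (in samples) into beats per minute and gates them to the 40-200 bpm range. A minimal sketch without the optional smoothing (illustrative `heart_rate` helper):

```python
import numpy as np

def heart_rate(beats, sampling_rate=1000.0):
    beats = np.asarray(beats)
    hr = sampling_rate * 60.0 / np.diff(beats)  # bpm per interval
    ts = beats[1:]                              # hr located at second beat
    keep = (hr >= 40) & (hr <= 200)             # physiological limits
    return ts[keep], hr[keep]

# beats 800 samples apart at 1000 Hz => 75 bpm; the 5000-sample gap
# (12 bpm) is rejected by the physiological gate
beats = np.array([0, 800, 1600, 2400, 7400])
ts, hr = heart_rate(beats)
```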


def _pdiff(x, p1, p2):
    """Compute the squared difference between two interpolators, given the
    x-coordinates.

    Parameters
    ----------
    x : array
        Array of x-coordinates.
    p1 : object
        First interpolator.
    p2 : object
        Second interpolator.

    Returns
    -------
    diff : array
        Squared differences.

    """

    diff = (p1(x) - p2(x)) ** 2

    return diff


def find_intersection(
    x1=None, y1=None, x2=None, y2=None, alpha=1.5, xtol=1e-6, ytol=1e-6
):
    """Find the intersection points between two lines using piecewise
    polynomial interpolation.

    Parameters
    ----------
    x1 : array
        Array of x-coordinates of the first line.
    y1 : array
        Array of y-coordinates of the first line.
    x2 : array
        Array of x-coordinates of the second line.
    y2 : array
        Array of y-coordinates of the second line.
    alpha : float, optional
        Resolution factor for the x-axis; fraction of total number of
        x-coordinates.
    xtol : float, optional
        Tolerance for the x-axis.
    ytol : float, optional
        Tolerance for the y-axis.

    Returns
    -------
    roots : array
        Array of x-coordinates of found intersection points.
    values : array
        Array of y-coordinates of found intersection points.

    Notes
    -----
    * If no intersection is found, returns the closest point.

    """

    # check inputs
    if x1 is None:
        raise TypeError("Please specify the x-coordinates of the first line.")
    if y1 is None:
        raise TypeError("Please specify the y-coordinates of the first line.")
    if x2 is None:
        raise TypeError("Please specify the x-coordinates of the second line.")
    if y2 is None:
        raise TypeError("Please specify the y-coordinates of the second line.")

    # ensure numpy
    x1 = np.array(x1)
    y1 = np.array(y1)
    x2 = np.array(x2)
    y2 = np.array(y2)

    if x1.shape != y1.shape:
        raise ValueError(
            "Input coordinates for the first line must have the same shape."
        )
    if x2.shape != y2.shape:
        raise ValueError(
            "Input coordinates for the second line must have the same shape."
        )

    # interpolate
    p1 = interpolate.BPoly.from_derivatives(x1, y1[:, np.newaxis])
    p2 = interpolate.BPoly.from_derivatives(x2, y2[:, np.newaxis])

    # combine x intervals
    x = np.r_[x1, x2]
    x_min = x.min()
    x_max = x.max()
    npoints = int(len(np.unique(x)) * alpha)
    x = np.linspace(x_min, x_max, npoints)

    # initial estimates
    pd = p1(x) - p2(x)
    (zerocs,) = zero_cross(pd)

    pd_abs = np.abs(pd)
    zeros = np.nonzero(pd_abs < ytol)[0]

    ind = np.unique(np.concatenate((zerocs, zeros)))
    xi = x[ind]

    # search for solutions
    roots = set()
    for v in xi:
        root, _, ier, _ = optimize.fsolve(
            _pdiff,
            v,
            args=(p1, p2),
            full_output=True,
            xtol=xtol,
        )
        if ier == 1 and x_min <= root <= x_max:
            roots.add(root[0])

    if len(roots) == 0:
        # no solution was found => give the best from the initial estimates
        aux = np.abs(pd)
        bux = aux.min() * np.ones(npoints, dtype="float")
        roots, _ = find_intersection(x, aux, x, bux, alpha=1.0, xtol=xtol, ytol=ytol)

    # compute values
    roots = list(roots)
    roots.sort()
    roots = np.array(roots)
    values = np.mean(np.vstack((p1(roots), p2(roots))), axis=0)

    return utils.ReturnTuple((roots, values), ("roots", "values"))


def finite_difference(signal=None, weights=None):
    """Apply the Finite Difference method to compute derivatives.

    Parameters
    ----------
    signal : array
        Signal to differentiate.
    weights : list, array
        Finite difference weight coefficients.

    Returns
    -------
    index : array
        Indices from `signal` for which the derivative was computed.
    derivative : array
        Computed derivative.

    Notes
    -----
    * The method assumes central-difference weights.
    * The method accounts for the delay introduced by the algorithm.

    Raises
    ------
    ValueError
        If the number of weights is not odd.

    """

    # check inputs
    if signal is None:
        raise TypeError("Please specify a signal to differentiate.")

    if weights is None:
        raise TypeError("Please specify the weight coefficients.")

    N = len(weights)
    if N % 2 == 0:
        raise ValueError("Number of weights must be odd.")

    # diff
    weights = weights[::-1]
    derivative = ss.lfilter(weights, [1], signal)

    # trim delay
    D = N - 1
    D2 = D // 2

    index = np.arange(D2, len(signal) - D2, dtype="int")
    derivative = derivative[D:]

    return utils.ReturnTuple((index, derivative), ("index", "derivative"))
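A minimal sketch of the scheme above with the standard central-difference weights [-1/2, 0, 1/2] for offsets [-1, 0, +1]; the (N-1)//2-sample filter delay is absorbed by a valid-mode convolution so the derivative aligns with the original sample indices (`central_diff` is an illustrative name, using numpy's convolve rather than scipy's lfilter):

```python
import numpy as np

def central_diff(signal, weights=(-0.5, 0.0, 0.5)):
    w = np.asarray(weights, dtype=float)
    N = len(w)
    assert N % 2 == 1, "number of weights must be odd"
    # valid-mode convolution with reversed weights computes
    # sum_m w[m] * signal[i + m], i.e. the centered derivative at i + (N-1)//2
    d = np.convolve(signal, w[::-1], mode="valid")
    D2 = (N - 1) // 2
    index = np.arange(D2, len(signal) - D2)
    return index, d

x = np.arange(10, dtype=float) ** 2  # f(n) = n^2
index, d = central_diff(x)           # exact: central diff of n^2 is 2n
```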


def _init_dist_profile(m, n, signal):
    """Compute initial time series signal statistics for distance profile.

    Implements the algorithm described in [Mueen2014]_, using the notation
    from [Yeh2016_a]_.

    Parameters
    ----------
    m : int
        Sub-sequence length.
    n : int
        Target signal length.
    signal : array
        Target signal.

    Returns
    -------
    X : array
        Fourier Transform (FFT) of the signal.
    sigma : array
        Moving standard deviation in windows of length `m`.

    References
    ----------
    .. [Mueen2014] Abdullah Mueen, Hossein Hamooni, Trilce Estrada: "Time
       Series Join on Subsequence Correlation", ICDM 2014: 450-459
    .. [Yeh2016_a] Chin-Chia Michael Yeh, Yan Zhu, Liudmila Ulanova,
       Nurjahan Begum, Yifei Ding, Hoang Anh Dau, Diego Furtado Silva,
       Abdullah Mueen, Eamonn Keogh, "Matrix Profile I: All Pairs Similarity
       Joins for Time Series: A Unifying View that Includes Motifs, Discords
       and Shapelets", IEEE ICDM 2016

    """

    # compute signal stats
    csumx = np.zeros(n + 1, dtype="float")
    csumx[1:] = np.cumsum(signal)
    sumx = csumx[m:] - csumx[:-m]

    csumx2 = np.zeros(n + 1, dtype="float")
    csumx2[1:] = np.cumsum(np.power(signal, 2))
    sumx2 = csumx2[m:] - csumx2[:-m]

    meanx = sumx / m
    sigmax2 = (sumx2 / m) - np.power(meanx, 2)
    sigma = np.sqrt(sigmax2)

    # append zeros
    x = np.concatenate((signal, np.zeros(n, dtype="float")))

    # compute FFT
    X = np.fft.fft(x)

    return X, sigma


def _ditance_profile(m, n, query, X, sigma):
    """Compute the distance profile of a query sequence against a signal.

    Implements the algorithm described in [Mueen2014]_, using the notation
    from [Yeh2016]_.

    Parameters
    ----------
    m : int
        Query sub-sequence length.
    n : int
        Target time series length.
    query : array
        Query sub-sequence.
    X : array
        Target time series Fourier Transform (FFT).
    sigma : array
        Moving standard deviation in windows of length `m`.

    Returns
    -------
    dist : array
        Distance profile (squared).

    Notes
    -----
    * Computes distances on z-normalized data.

    References
    ----------
    .. [Mueen2014] Abdullah Mueen, Hossein Hamooni, Trilce Estrada: "Time
       Series Join on Subsequence Correlation", ICDM 2014: 450-459
    .. [Yeh2016] Chin-Chia Michael Yeh, Yan Zhu, Liudmila Ulanova,
       Nurjahan Begum, Yifei Ding, Hoang Anh Dau, Diego Furtado Silva,
       Abdullah Mueen, Eamonn Keogh, "Matrix Profile I: All Pairs Similarity
       Joins for Time Series: A Unifying View that Includes Motifs, Discords
       and Shapelets", IEEE ICDM 2016

    """

    # normalize query
    q = (query - np.mean(query)) / np.std(query)

    # reverse query and append zeros
    y = np.concatenate((q[::-1], np.zeros(2 * n - m, dtype="float")))

    # compute dot products fast
    Y = np.fft.fft(y)
    Z = X * Y
    z = np.fft.ifft(Z)
    z = z[m - 1 : n]

    # compute distances (z-normalized squared euclidean distance)
    dist = 2 * m * (1 - z / (m * sigma))

    return dist


def distance_profile(query=None, signal=None, metric="euclidean"):
    """Compute the distance profile of a query sequence against a signal.

    Implements the algorithm described in [Mueen2014]_.

    Parameters
    ----------
    query : array
        Input query signal sequence.
    signal : array
        Input target time series signal.
    metric : str, optional
        The distance metric to use; one of 'euclidean' or 'pearson'; default
        is 'euclidean'.

    Returns
    -------
    dist : array
        Distance of the query sequence to every sub-sequence in the signal.

    Notes
    -----
    * Computes distances on z-normalized data.

    References
    ----------
    .. [Mueen2014] Abdullah Mueen, Hossein Hamooni, Trilce Estrada: "Time
       Series Join on Subsequence Correlation", ICDM 2014: 450-459

    """

    # check inputs
    if query is None:
        raise TypeError("Please specify the input query sequence.")

    if signal is None:
        raise TypeError("Please specify the input time series signal.")

    if metric not in ["euclidean", "pearson"]:
        raise ValueError("Unknown distance metric.")

    # ensure numpy
    query = np.array(query)
    signal = np.array(signal)

    m = len(query)
    n = len(signal)
    if m > n / 2:
        raise ValueError("Time series signal is too short relative to query length.")

    # get initial signal stats
    X, sigma = _init_dist_profile(m, n, signal)

    # compute distance profile
    dist = _ditance_profile(m, n, query, X, sigma)

    if metric == "pearson":
        dist = 1 - np.abs(dist) / (2 * m)
    elif metric == "euclidean":
        dist = np.abs(np.sqrt(dist))

    return utils.ReturnTuple((dist,), ("dist",))
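What the FFT-based routine above computes is the z-normalized euclidean distance from the query to every sub-sequence of the signal; the FFT trick in `_init_dist_profile` merely obtains the same values in O(n log n). A brute-force sketch for comparison (illustrative names, numpy only):

```python
import numpy as np

def znorm(v):
    return (v - np.mean(v)) / np.std(v)

def distance_profile_naive(query, signal):
    """Z-normalized euclidean distance of `query` to each sub-sequence."""
    m, n = len(query), len(signal)
    q = znorm(query)
    return np.array(
        [np.linalg.norm(q - znorm(signal[i:i + m])) for i in range(n - m + 1)]
    )

signal = np.array([0.0, 1.0, 2.0, 1.0, 0.0, 1.0, 2.0, 1.0])
query = np.array([0.0, 1.0, 2.0])
dist = distance_profile_naive(query, signal)
# exact matches of the query shape at offsets 0 and 4 give distance 0
```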


def signal_self_join(signal=None, size=None, index=None, limit=None):
    """Compute the matrix profile for a self-similarity join of a time series.

    Implements the algorithm described in [Yeh2016_b]_.

    Parameters
    ----------
    signal : array
        Input target time series signal.
    size : int
        Size of the query sub-sequences.
    index : list, array, optional
        Starting indices for query sub-sequences; the default is to search all
        sub-sequences.
    limit : int, optional
        Upper limit for the number of query sub-sequences; the default is to
        search all sub-sequences.

    Returns
    -------
    matrix_index : array
        Matrix profile index.
    matrix_profile : array
        Computed matrix profile (distances).

    Notes
    -----
    * Computes euclidean distances on z-normalized data.

    References
    ----------
    .. [Yeh2016_b] Chin-Chia Michael Yeh, Yan Zhu, Liudmila Ulanova,
       Nurjahan Begum, Yifei Ding, Hoang Anh Dau, Diego Furtado Silva,
       Abdullah Mueen, Eamonn Keogh, "Matrix Profile I: All Pairs Similarity
       Joins for Time Series: A Unifying View that Includes Motifs, Discords
       and Shapelets", IEEE ICDM 2016

    """

    # check inputs
    if signal is None:
        raise TypeError("Please specify the input time series signal.")

    if size is None:
        raise TypeError("Please specify the sub-sequence size.")

    # ensure numpy
    signal = np.array(signal)

    n = len(signal)
    if size > n / 2:
        raise ValueError(
            "Time series signal is too short relative to desired"
            " sub-sequence length."
        )

    if size < 4:
        raise ValueError("Sub-sequence length must be at least 4.")

    # matrix profile length
    nb = n - size + 1

    # get search index
    if index is None:
        index = np.random.permutation(np.arange(nb, dtype="int"))
    else:
        index = np.array(index)
        if not np.all(index < nb):
            raise ValueError("Provided `index` exceeds allowable sub-sequences.")

    # limit search
    if limit is not None:
        if limit < 1:
            raise ValueError("Search limit must be at least 1.")

        index = index[:limit]

    # exclusion zone (to avoid query self-matches)
    ezone = int(round(size / 4))

    # initialization
    matrix_profile = np.inf * np.ones(nb, dtype="float")
    matrix_index = np.zeros(nb, dtype="int")

    X, sigma = _init_dist_profile(size, n, signal)

    # compute matrix profile
    for idx in index:
        # compute distance profile
        query = signal[idx : idx + size]
        dist = _ditance_profile(size, n, query, X, sigma)
        dist = np.abs(np.sqrt(dist))  # to have euclidean distance

        # apply exclusion zone
        a = max([0, idx - ezone])
        b = min([nb, idx + ezone + 1])
        dist[a:b] = np.inf

        # find nearest neighbors
        pos = dist < matrix_profile
        matrix_profile[pos] = dist[pos]
        matrix_index[pos] = idx

        # account for exclusion zone
        neighbor = np.argmin(dist)
        matrix_profile[idx] = dist[neighbor]
        matrix_index[idx] = neighbor

    # output
    args = (matrix_index, matrix_profile)
    names = ("matrix_index", "matrix_profile")

    return utils.ReturnTuple(args, names)


def signal_cross_join(signal1=None, signal2=None, size=None, index=None, limit=None):
    """Compute the matrix profile for a similarity join of two time series.

    Computes the nearest sub-sequence in `signal2` for each sub-sequence in
    `signal1`. Implements the algorithm described in [Yeh2016_c]_.

    Parameters
    ----------
    signal1 : array
        First input time series signal.
    signal2 : array
        Second input time series signal.
    size : int
        Size of the query sub-sequences.
    index : list, array, optional
        Starting indices for query sub-sequences; the default is to search all
        sub-sequences.
    limit : int, optional
        Upper limit for the number of query sub-sequences; the default is to
        search all sub-sequences.

    Returns
    -------
    matrix_index : array
        Matrix profile index.
    matrix_profile : array
        Computed matrix profile (distances).

    Notes
    -----
    * Computes euclidean distances on z-normalized data.

    References
    ----------
    .. [Yeh2016_c] Chin-Chia Michael Yeh, Yan Zhu, Liudmila Ulanova,
       Nurjahan Begum, Yifei Ding, Hoang Anh Dau, Diego Furtado Silva,
       Abdullah Mueen, Eamonn Keogh, "Matrix Profile I: All Pairs Similarity
       Joins for Time Series: A Unifying View that Includes Motifs, Discords
       and Shapelets", IEEE ICDM 2016

    """

    # check inputs
    if signal1 is None:
        raise TypeError("Please specify the first input time series signal.")

    if signal2 is None:
        raise TypeError("Please specify the second input time series signal.")

    if size is None:
        raise TypeError("Please specify the sub-sequence size.")

    # ensure numpy
    signal1 = np.array(signal1)
    signal2 = np.array(signal2)

    n1 = len(signal1)
    if size > n1 / 2:
        raise ValueError(
            "First time series signal is too short relative to"
            " desired sub-sequence length."
        )

    n2 = len(signal2)
    if size > n2 / 2:
        raise ValueError(
            "Second time series signal is too short relative to"
            " desired sub-sequence length."
        )

    if size < 4:
        raise ValueError("Sub-sequence length must be at least 4.")

    # matrix profile length
    nb1 = n1 - size + 1
    nb2 = n2 - size + 1

    # get search index
    if index is None:
        index = np.random.permutation(np.arange(nb2, dtype="int"))
    else:
        index = np.array(index)
        if not np.all(index < nb2):
            raise ValueError(
                "Provided `index` exceeds allowable `signal2` sub-sequences."
            )

    # limit search
    if limit is not None:
        if limit < 1:
            raise ValueError("Search limit must be at least 1.")

        index = index[:limit]

    # initialization
    matrix_profile = np.inf * np.ones(nb1, dtype="float")
    matrix_index = np.zeros(nb1, dtype="int")

    X, sigma = _init_dist_profile(size, n1, signal1)

    # compute matrix profile
    for idx in index:
        # compute distance profile
        query = signal2[idx : idx + size]
        dist = _ditance_profile(size, n1, query, X, sigma)
        dist = np.abs(np.sqrt(dist))  # to have euclidean distance

        # find nearest neighbor
        pos = dist <= matrix_profile
        matrix_profile[pos] = dist[pos]
        matrix_index[pos] = idx

    # output
    args = (matrix_index, matrix_profile)
    names = ("matrix_index", "matrix_profile")

    return utils.ReturnTuple(args, names)


def mean_waves(data=None, size=None, step=None):
    """Extract mean samples from a data set.

    Parameters
    ----------
    data : array
        An m by n array of m data samples in an n-dimensional space.
    size : int
        Number of samples to use for each mean sample.
    step : int, optional
        Number of samples to jump, controlling overlap; default is equal to
        `size` (no overlap).

    Returns
    -------
    waves : array
        A k by n array of mean samples.

    Notes
    -----
    * Discards trailing samples if they are not enough to satisfy the `size`
      parameter.

    Raises
    ------
    ValueError
        If `step` is an invalid value.
    ValueError
        If there are not enough samples for the given `size`.

    """

    # check inputs
    if data is None:
        raise TypeError("Please specify an input data set.")

    if size is None:
        raise TypeError("Please specify the number of samples for the mean.")

    if step is None:
        step = size

    if step <= 0:
        raise ValueError("The step must be a positive integer.")

    # number of waves
    L = len(data) - size
    nb = 1 + L // step
    if nb <= 0:
        raise ValueError("Not enough samples for the given `size`.")

    # compute
    waves = [np.mean(data[i : i + size], axis=0) for i in range(0, L + 1, step)]
    waves = np.array(waves)

    return utils.ReturnTuple((waves,), ("waves",))
|
| 2134 |
+
|
| 2135 |
+
|
| 2136 |
+
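The windowing arithmetic above (the `L` bound and the `step` loop) can be sketched standalone with NumPy; the data values here are made up for illustration:

```python
import numpy as np

# Standalone sketch of the mean_waves windowing: average non-overlapping
# windows of `size` rows from an m-by-n data set.
data = np.arange(12, dtype=float).reshape(6, 2)  # m=6 samples, n=2 dims
size = 2
step = 2  # no overlap

L = len(data) - size
waves = np.array([np.mean(data[i:i + size], axis=0)
                  for i in range(0, L + 1, step)])
print(waves.shape)  # (3, 2)
```

With `step` smaller than `size`, consecutive windows overlap and `k` grows accordingly.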
def median_waves(data=None, size=None, step=None):
    """Extract median samples from a data set.

    Parameters
    ----------
    data : array
        An m by n array of m data samples in an n-dimensional space.
    size : int
        Number of samples to use for each median sample.
    step : int, optional
        Number of samples to jump, controlling overlap; default is equal to
        `size` (no overlap).

    Returns
    -------
    waves : array
        A k by n array of median samples.

    Notes
    -----
    * Discards trailing samples if there are not enough to satisfy the
      `size` parameter.

    Raises
    ------
    ValueError
        If `step` is an invalid value.
    ValueError
        If there are not enough samples for the given `size`.

    """

    # check inputs
    if data is None:
        raise TypeError("Please specify an input data set.")

    if size is None:
        raise TypeError("Please specify the number of samples for the median.")

    if step is None:
        step = size

    if step <= 0:
        raise ValueError("The step must be a positive integer.")

    # number of waves
    L = len(data) - size
    nb = 1 + L // step
    if nb <= 0:
        raise ValueError("Not enough samples for the given `size`.")

    # compute
    waves = [np.median(data[i : i + size], axis=0) for i in range(0, L + 1, step)]
    waves = np.array(waves)

    return utils.ReturnTuple((waves,), ("waves",))
BioSPPy/source/biosppy/stats.py
ADDED
@@ -0,0 +1,240 @@
# -*- coding: utf-8 -*-
"""
biosppy.stats
-------------

This module provides statistical functions and related tools.

:copyright: (c) 2015-2021 by Instituto de Telecomunicacoes
:license: BSD 3-clause, see LICENSE for more details.
"""

# Imports
# compat
from __future__ import absolute_import, division, print_function
import six

# local
from . import utils

# 3rd party
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import pearsonr, ttest_rel, ttest_ind

def pearson_correlation(x=None, y=None):
    """Compute the Pearson Correlation Coefficient between two signals.

    The coefficient is given by:

    .. math::

        r_{xy} = \\frac{E[(X - \\mu_X) (Y - \\mu_Y)]}{\\sigma_X \\sigma_Y}

    Parameters
    ----------
    x : array
        First input signal.
    y : array
        Second input signal.

    Returns
    -------
    r : float
        Pearson correlation coefficient, ranging between -1 and +1.
    pvalue : float
        Two-tailed p-value. The p-value roughly indicates the probability of
        an uncorrelated system producing datasets that have a Pearson
        correlation at least as extreme as the one computed from these
        datasets.

    Raises
    ------
    ValueError
        If the input signals do not have the same length.

    """

    # check inputs
    if x is None:
        raise TypeError("Please specify the first input signal.")

    if y is None:
        raise TypeError("Please specify the second input signal.")

    # ensure numpy
    x = np.array(x)
    y = np.array(y)

    n = len(x)

    if n != len(y):
        raise ValueError("Input signals must have the same length.")

    r, pvalue = pearsonr(x, y)

    return r, pvalue

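The function is a thin wrapper over `scipy.stats.pearsonr`; a minimal sanity check on perfectly linearly related signals (illustrative values):

```python
import numpy as np
from scipy.stats import pearsonr

# Two perfectly linearly related signals have r == 1 (up to float error).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = 2.0 * x + 1.0

r, pvalue = pearsonr(x, y)
print(round(r, 6))  # 1.0
```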
def linear_regression(x=None, y=None):
    """Plot the linear regression between two signals and get the equation
    coefficients.

    The linear regression uses the least squares method.

    Parameters
    ----------
    x : array
        First input signal.
    y : array
        Second input signal.

    Returns
    -------
    coeffs : array
        Linear regression coefficients: [m, b].

    Raises
    ------
    ValueError
        If the input signals do not have the same length.

    """

    # check inputs
    if x is None:
        raise TypeError("Please specify the first input signal.")

    if y is None:
        raise TypeError("Please specify the second input signal.")

    # ensure numpy
    x = np.array(x)
    y = np.array(y)

    n = len(x)

    if n != len(y):
        raise ValueError("Input signals must have the same length.")

    coeffs = np.polyfit(x, y, 1)
    f = np.poly1d(coeffs)

    x_min = x.min()
    x_max = x.max()

    y_min = f(x_min)
    y_max = f(x_max)

    plt.scatter(x, y)
    plt.plot(
        [x_min, x_max],
        [y_min, y_max],
        c="orange",
        label="y={:.3f}x+{:.3f}".format(coeffs[0], coeffs[1]),
    )
    plt.title("Linear Regression")
    plt.xlabel("x")
    plt.ylabel("y")
    plt.legend()

    return coeffs

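The coefficient computation behind the plot can be sketched on its own: a degree-1 `np.polyfit` returns `[slope, intercept]` by least squares (the data here is made up):

```python
import numpy as np

# Fit y = m*x + b by least squares on exactly linear data.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 3.0 * x + 2.0

m, b = np.polyfit(x, y, 1)
print(round(m, 3), round(b, 3))  # 3.0 2.0
```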
def paired_test(x=None, y=None):
    """Perform the Student's paired t-test on the arrays x and y.

    This is a two-sided test for the null hypothesis that two related
    or repeated samples have identical average (expected) values.

    Parameters
    ----------
    x : array
        First input signal.
    y : array
        Second input signal.

    Returns
    -------
    statistic : float
        t-statistic. The t-statistic is used in a t-test to determine
        whether to support or reject the null hypothesis.
    pvalue : float
        Two-sided p-value.

    Raises
    ------
    ValueError
        If the input signals do not have the same length.

    """

    # check inputs
    if x is None:
        raise TypeError("Please specify the first input signal.")

    if y is None:
        raise TypeError("Please specify the second input signal.")

    # ensure numpy
    x = np.array(x)
    y = np.array(y)

    n = len(x)

    if n != len(y):
        raise ValueError("Input signals must have the same length.")

    statistic, pvalue = ttest_rel(x, y)

    return statistic, pvalue

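A short usage sketch of the underlying `scipy.stats.ttest_rel` call, with made-up before/after measurements on the same subjects; a consistent positive shift between conditions yields a negative statistic and a small p-value:

```python
import numpy as np
from scipy.stats import ttest_rel

# Paired samples: the same five subjects measured in two conditions.
before = np.array([10.0, 12.0, 9.0, 11.0, 13.0])
after = before + np.array([1.1, 0.9, 1.2, 0.8, 1.0])

statistic, pvalue = ttest_rel(before, after)
print(statistic < 0, pvalue < 0.05)  # True True
```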
def unpaired_test(x=None, y=None):
    """Perform the Student's unpaired t-test on the arrays x and y.

    This is a two-sided test for the null hypothesis that two independent
    samples have identical average (expected) values. This test assumes
    that the populations have identical variances by default.

    Parameters
    ----------
    x : array
        First input signal.
    y : array
        Second input signal.

    Returns
    -------
    statistic : float
        t-statistic. The t-statistic is used in a t-test to determine
        whether to support or reject the null hypothesis.
    pvalue : float
        Two-sided p-value.

    Raises
    ------
    ValueError
        If the input signals do not have the same length.

    """

    # check inputs
    if x is None:
        raise TypeError("Please specify the first input signal.")

    if y is None:
        raise TypeError("Please specify the second input signal.")

    # ensure numpy
    x = np.array(x)
    y = np.array(y)

    n = len(x)

    if n != len(y):
        raise ValueError("Input signals must have the same length.")

    statistic, pvalue = ttest_ind(x, y)

    return statistic, pvalue
BioSPPy/source/biosppy/storage.py
ADDED
@@ -0,0 +1,1043 @@
# -*- coding: utf-8 -*-
"""
biosppy.storage
---------------

This module provides several data storage methods.

:copyright: (c) 2015-2018 by Instituto de Telecomunicacoes
:license: BSD 3-clause, see LICENSE for more details.
"""

# Imports
# compat
from __future__ import absolute_import, division, print_function
from six.moves import range
import six

# built-in
import datetime
import json
import os
import zipfile

# 3rd party
import h5py
import numpy as np
import shortuuid
import joblib

# local
from . import utils

def serialize(data, path, compress=3):
    """Serialize data and save to a file using joblib.

    Parameters
    ----------
    data : object
        Object to serialize.
    path : str
        Destination path.
    compress : int, optional
        Compression level; from 0 to 9 (highest compression).

    """

    # normalize path
    path = utils.normpath(path)

    joblib.dump(data, path, compress=compress)

def deserialize(path):
    """Deserialize data from a file using joblib.

    Parameters
    ----------
    path : str
        Source path.

    Returns
    -------
    data : object
        Deserialized object.

    """

    # normalize path
    path = utils.normpath(path)

    return joblib.load(path)

def dumpJSON(data, path):
    """Save JSON data to a file.

    Parameters
    ----------
    data : dict
        The JSON data to dump.
    path : str
        Destination path.

    """

    # normalize path
    path = utils.normpath(path)

    with open(path, 'w') as fid:
        json.dump(data, fid)

def loadJSON(path):
    """Load JSON data from a file.

    Parameters
    ----------
    path : str
        Source path.

    Returns
    -------
    data : dict
        The loaded JSON data.

    """

    # normalize path
    path = utils.normpath(path)

    with open(path, 'r') as fid:
        return json.load(fid)

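The dumpJSON/loadJSON pair is a round trip through the stdlib `json` module; a standalone sketch (the metadata values are illustrative, and the `utils.normpath` step is omitted):

```python
import json
import os
import tempfile

# Write a metadata dict to disk and read it back unchanged.
data = {"sampling_rate": 1000.0, "labels": ["ECG", "EDA"]}
path = os.path.join(tempfile.mkdtemp(), "meta.json")

with open(path, 'w') as fid:
    json.dump(data, fid)

with open(path, 'r') as fid:
    loaded = json.load(fid)

print(loaded == data)  # True
```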
def zip_write(fid, files, recursive=True, root=None):
    """Write files to a zip archive.

    Parameters
    ----------
    fid : file-like object
        The zip file to write into.
    files : iterable
        List of files or directories to pack.
    recursive : bool, optional
        If True, sub-directories and sub-folders are also written to the
        archive.
    root : str, optional
        Relative folder path.

    Notes
    -----
    * Ignores non-existent files and directories.

    """

    if root is None:
        root = ''

    for item in files:
        fpath = utils.normpath(item)

        if not os.path.exists(fpath):
            continue

        # relative archive name
        arcname = os.path.join(root, os.path.split(fpath)[1])

        # write
        fid.write(fpath, arcname)

        # recur
        if recursive and os.path.isdir(fpath):
            rfiles = [os.path.join(fpath, subitem)
                      for subitem in os.listdir(fpath)]
            zip_write(fid, rfiles, recursive=recursive, root=arcname)

def pack_zip(files, path, recursive=True, forceExt=True):
    """Pack files into a zip archive.

    Parameters
    ----------
    files : iterable
        List of files or directories to pack.
    path : str
        Destination path.
    recursive : bool, optional
        If True, sub-directories and sub-folders are also written to the
        archive.
    forceExt : bool, optional
        Append default extension.

    Returns
    -------
    zip_path : str
        Full path to the created zip archive.

    """

    # normalize destination path
    zip_path = utils.normpath(path)

    if forceExt:
        zip_path += '.zip'

    # allowZip64 is True to allow files > 2 GB
    with zipfile.ZipFile(zip_path, 'w', allowZip64=True) as fid:
        zip_write(fid, files, recursive=recursive)

    return zip_path

def unpack_zip(zip_path, path):
    """Unpack a zip archive.

    Parameters
    ----------
    zip_path : str
        Path to the zip archive.
    path : str
        Destination path (directory).

    """

    # allowZip64 is True to allow files > 2 GB
    with zipfile.ZipFile(zip_path, 'r', allowZip64=True) as fid:
        fid.extractall(path)

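A standalone round trip of the pack/unpack pattern with the stdlib `zipfile` module; `allowZip64=True` mirrors the archive settings used above, and the file name and contents are made up:

```python
import os
import tempfile
import zipfile

# Create a small file, pack it, then extract it into a fresh directory.
src = tempfile.mkdtemp()
fname = os.path.join(src, "signal.txt")
with open(fname, 'w') as fid:
    fid.write("1\t2\t3\n")

zip_path = os.path.join(src, "archive.zip")
with zipfile.ZipFile(zip_path, 'w', allowZip64=True) as fid:
    fid.write(fname, os.path.basename(fname))

dst = tempfile.mkdtemp()
with zipfile.ZipFile(zip_path, 'r', allowZip64=True) as fid:
    fid.extractall(dst)

print(os.path.exists(os.path.join(dst, "signal.txt")))  # True
```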
def alloc_h5(path):
    """Prepare an HDF5 file.

    Parameters
    ----------
    path : str
        Path to file.

    """

    # normalize path
    path = utils.normpath(path)

    # 'a' mode: read/write, create if needed (h5py requires an explicit mode)
    with h5py.File(path, 'a'):
        pass

def store_h5(path, label, data):
    """Store data to an HDF5 file.

    Parameters
    ----------
    path : str
        Path to file.
    label : hashable
        Data label.
    data : array
        Data to store.

    """

    # normalize path
    path = utils.normpath(path)

    # 'a' mode: read/write, create if needed (h5py requires an explicit mode)
    with h5py.File(path, 'a') as fid:
        label = str(label)

        try:
            fid.create_dataset(label, data=data)
        except (RuntimeError, ValueError):
            # existing label, replace
            del fid[label]
            fid.create_dataset(label, data=data)

def load_h5(path, label):
    """Load data from an HDF5 file.

    Parameters
    ----------
    path : str
        Path to file.
    label : hashable
        Data label.

    Returns
    -------
    data : array
        Loaded data.

    """

    # normalize path
    path = utils.normpath(path)

    # read-only mode is enough for loading
    with h5py.File(path, 'r') as fid:
        label = str(label)

        try:
            return fid[label][...]
        except KeyError:
            return None

def store_txt(path, data, sampling_rate=1000., resolution=None, date=None,
              labels=None, precision=6):
    """Store data to a simple text file.

    Parameters
    ----------
    path : str
        Path to file.
    data : array
        Data to store (up to 2 dimensions).
    sampling_rate : int, float, optional
        Sampling frequency (Hz).
    resolution : int, optional
        Sampling resolution.
    date : datetime, str, optional
        Datetime object, or an ISO 8601 formatted date-time string.
    labels : list, optional
        Labels for each column of `data`.
    precision : int, optional
        Precision for string conversion.

    Raises
    ------
    ValueError
        If the number of data dimensions is greater than 2.
    ValueError
        If the number of labels is inconsistent with the data.

    """

    # ensure numpy
    data = np.array(data)

    # check dimension
    if data.ndim > 2:
        raise ValueError("Number of data dimensions cannot be greater than 2.")

    # build header
    header = "Simple Text Format\n"
    header += "Sampling Rate (Hz):= %0.2f\n" % sampling_rate
    if resolution is not None:
        header += "Resolution:= %d\n" % resolution
    if date is not None:
        if isinstance(date, six.string_types):
            header += "Date:= %s\n" % date
        elif isinstance(date, datetime.datetime):
            header += "Date:= %s\n" % date.isoformat()
    else:
        ct = datetime.datetime.utcnow().isoformat()
        header += "Date:= %s\n" % ct

    # data type
    header += "Data Type:= %s\n" % data.dtype

    # labels
    if data.ndim == 1:
        ncols = 1
    elif data.ndim == 2:
        ncols = data.shape[1]

    if labels is None:
        labels = ['%d' % i for i in range(ncols)]
    elif len(labels) != ncols:
        raise ValueError("Inconsistent number of labels.")

    header += "Labels:= %s" % '\t'.join(labels)

    # normalize path
    path = utils.normpath(path)

    # data format
    p = '%d' % precision
    if np.issubdtype(data.dtype, np.integer):
        fmt = '%d'
    elif np.issubdtype(data.dtype, np.floating):  # np.float is deprecated
        fmt = '%%.%sf' % p
    elif np.issubdtype(data.dtype, np.bool_):
        fmt = '%d'
    else:
        fmt = '%%.%se' % p

    # store
    np.savetxt(path, data, header=header, fmt=fmt, delimiter='\t')

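A minimal sketch of the resulting file layout: `np.savetxt` prefixes each header line with `'# '` and writes tab-separated rows below it (the header fields here are a shortened, illustrative subset):

```python
import io

import numpy as np

# Write a 2x2 float array with a multi-line header into a string buffer.
data = np.array([[1.0, 2.0], [3.0, 4.0]])
header = "Simple Text Format\nSampling Rate (Hz):= 1000.00\nLabels:= 0\t1"

buf = io.StringIO()
np.savetxt(buf, data, header=header, fmt='%.6f', delimiter='\t')
text = buf.getvalue()
print(text.splitlines()[0])  # # Simple Text Format
```

This commented header is exactly what `load_txt` parses back out via the `':= '` separator.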
def load_txt(path):
    """Load data from a text file.

    Parameters
    ----------
    path : str
        Path to file.

    Returns
    -------
    data : array
        Loaded data.
    mdata : dict
        Metadata.

    """

    # normalize path
    path = utils.normpath(path)

    with open(path, 'rb') as fid:
        lines = fid.readlines()

    # extract header
    mdata_tmp = {}
    fields = ['Sampling Rate', 'Resolution', 'Date', 'Data Type', 'Labels']
    values = []
    for item in lines:
        if b'#' in item:
            item = item.decode('utf-8')
            # parse comment
            for f in fields:
                if f in item:
                    mdata_tmp[f] = item.split(':= ')[1].strip()
                    fields.remove(f)
                    break
        else:
            values.append(item)

    # convert mdata
    mdata = {}
    df = '%Y-%m-%dT%H:%M:%S.%f'
    try:
        mdata['sampling_rate'] = float(mdata_tmp['Sampling Rate'])
    except KeyError:
        pass
    try:
        mdata['resolution'] = int(mdata_tmp['Resolution'])
    except KeyError:
        pass
    try:
        dtype = mdata_tmp['Data Type']
    except KeyError:
        dtype = None
    try:
        d = datetime.datetime.strptime(mdata_tmp['Date'], df)
        mdata['date'] = d
    except (KeyError, ValueError):
        pass
    try:
        labels = mdata_tmp['Labels'].split('\t')
        mdata['labels'] = labels
    except KeyError:
        pass

    # load array
    data = np.genfromtxt(values, dtype=dtype, delimiter=b'\t')

    return data, mdata

class HDF(object):
    """Wrapper class to operate on BioSPPy HDF5 files.

    Parameters
    ----------
    path : str
        Path to the HDF5 file.
    mode : str, optional
        File mode; one of:

        * 'a': read/write, creates file if it does not exist;
        * 'r+': read/write, file must exist;
        * 'r': read only, file must exist;
        * 'w': create file, truncate if it already exists;
        * 'w-': create file, fails if it already exists.

    """

    def __init__(self, path=None, mode='a'):
        # normalize path
        path = utils.normpath(path)

        # open file
        self._file = h5py.File(path, mode)

        # check BioSPPy structures
        try:
            self._signals = self._file['signals']
        except KeyError:
            if mode == 'r':
                raise IOError(
                    "Unable to create 'signals' group with current file mode.")
            self._signals = self._file.create_group('signals')

        try:
            self._events = self._file['events']
        except KeyError:
            if mode == 'r':
                raise IOError(
                    "Unable to create 'events' group with current file mode.")
            self._events = self._file.create_group('events')

    def __enter__(self):
        """Method for with statement."""

        return self

    def __exit__(self, exc_type, exc_value, traceback):
        """Method for with statement."""

        self.close()

    def _join_group(self, *args):
        """Join group elements.

        Parameters
        ----------
        ``*args`` : list
            Group elements to join.

        Returns
        -------
        weg : str
            Joined group path.

        """

        bits = []
        for item in args:
            bits.extend(item.split('/'))

        # filter out blanks, slashes, white space
        out = []
        for item in bits:
            item = item.strip()
            if item == '':
                continue
            elif item == '/':
                continue
            out.append(item)

        weg = '/' + '/'.join(out)

        return weg

    def add_header(self, header=None):
        """Add header metadata.

        Parameters
        ----------
        header : dict
            Header metadata.

        """

        # check inputs
        if header is None:
            raise TypeError("Please specify the header information.")

        self._file.attrs['json'] = json.dumps(header)

    def get_header(self):
        """Retrieve header metadata.

        Returns
        -------
        header : dict
            Header metadata.

        """

        try:
            header = json.loads(self._file.attrs['json'])
        except KeyError:
            header = {}

        return utils.ReturnTuple((header,), ('header',))

    def add_signal(self,
                   signal=None,
                   mdata=None,
                   group='',
                   name=None,
                   compress=False):
        """Add a signal to the file.

        Parameters
        ----------
        signal : array
            Signal to add.
        mdata : dict, optional
            Signal metadata.
        group : str, optional
            Destination signal group.
        name : str, optional
            Name of the dataset to create.
        compress : bool, optional
            If True, the signal will be compressed with gzip.

        Returns
        -------
        group : str
            Destination group.
        name : str
            Name of the created signal dataset.

        """

        # check inputs
        if signal is None:
            raise TypeError("Please specify an input signal.")

        if mdata is None:
            mdata = {}

        if name is None:
            name = shortuuid.uuid()

        # navigate to group
        weg = self._join_group(self._signals.name, group)
        try:
            node = self._file[weg]
        except KeyError:
            # create group
            node = self._file.create_group(weg)

        # create dataset
        if compress:
            dset = node.create_dataset(name, data=signal, compression='gzip')
        else:
            dset = node.create_dataset(name, data=signal)

        # add metadata
        dset.attrs['json'] = json.dumps(mdata)

        # output
        grp = weg.replace('/signals', '')

        return utils.ReturnTuple((grp, name), ('group', 'name'))

    def _get_signal(self, group='', name=None):
        """Retrieve a signal dataset from the file.

        Parameters
        ----------
        group : str, optional
            Signal group.
        name : str
            Name of the signal dataset.

        Returns
        -------
        dataset : h5py.Dataset
            HDF5 dataset.

        """

        # check inputs
        if name is None:
            raise TypeError(
                "Please specify the name of the signal to retrieve.")

        # navigate to group
        weg = self._join_group(self._signals.name, group)
        try:
            node = self._file[weg]
        except KeyError:
            raise KeyError("Inexistent signal group.")

        # get data
        try:
            dset = node[name]
        except KeyError:
            raise KeyError("Inexistent signal dataset.")

        return dset

    def get_signal(self, group='', name=None):
        """Retrieve a signal from the file.

        Parameters
        ----------
        group : str, optional
            Signal group.
        name : str
            Name of the signal dataset.

        Returns
        -------
        signal : array
            Retrieved signal.
        mdata : dict
            Signal metadata.

        Notes
        -----
        * Loads the entire signal data into memory.

        """

        dset = self._get_signal(group=group, name=name)
        signal = dset[...]

        try:
            mdata = json.loads(dset.attrs['json'])
        except KeyError:
            mdata = {}

        return utils.ReturnTuple((signal, mdata), ('signal', 'mdata'))

    def del_signal(self, group='', name=None):
        """Delete a signal from the file.

        Parameters
        ----------
        group : str, optional
            Signal group.
        name : str
            Name of the dataset.

        """

        dset = self._get_signal(group=group, name=name)

        try:
            del self._file[dset.name]
        except IOError:
            raise IOError("Unable to delete object with current file mode.")

    def del_signal_group(self, group=''):
        """Delete all signals in a file group.

        Parameters
        ----------
        group : str, optional
            Signal group.

        """

        # navigate to group
        weg = self._join_group(self._signals.name, group)
        try:
            node = self._file[weg]
        except KeyError:
            raise KeyError("Inexistent signal group.")

        if node.name == '/signals':
            # delete all elements
            for _, item in six.iteritems(node):
                try:
                    del self._file[item.name]
                except IOError:
                    raise IOError(
                        "Unable to delete object with current file mode.")
        else:
            # delete single node
            try:
                del self._file[node.name]
            except IOError:
                raise IOError(
                    "Unable to delete object with current file mode.")

    def list_signals(self, group='', recursive=False):
        """List signals in the file.

        Parameters
        ----------
        group : str, optional
            Signal group.
        recursive : bool, optional
            If True, also lists signals in sub-groups.

        Returns
        -------
        signals : list
            List of (group, name) tuples of the found signals.

        """

        # navigate to group
        weg = self._join_group(self._signals.name, group)
        try:
            node = self._file[weg]
        except KeyError:
            raise KeyError("Inexistent signal group.")

        out = []
        for name, item in six.iteritems(node):
            if isinstance(item, h5py.Dataset):
                out.append((group, name))
            elif recursive and isinstance(item, h5py.Group):
                aux = self._join_group(group, name)
                out.extend(self.list_signals(group=aux,
                                             recursive=True)['signals'])

        return utils.ReturnTuple((out,), ('signals',))

    def add_event(self,
                  ts=None,
                  values=None,
                  mdata=None,
                  group='',
                  name=None,
                  compress=False):
        """Add an event to the file.

        Parameters
        ----------
        ts : array
            Array of time stamps.
        values : array, optional
            Array with data for each time stamp.
        mdata : dict, optional
            Event metadata.
        group : str, optional
            Destination event group.
        name : str, optional
            Name of the dataset to create.
        compress : bool, optional
            If True, the data will be compressed with gzip.

        Returns
        -------
        group : str
            Destination group.
        name : str
            Name of the created event dataset.

        """

        # check inputs
        if ts is None:
            raise TypeError("Please specify an input array of time stamps.")

        if values is None:
            values = []

        if mdata is None:
            mdata = {}

        if name is None:
            name = shortuuid.uuid()

        # navigate to group
        weg = self._join_group(self._events.name, group)
        try:
            node = self._file[weg]
        except KeyError:
            # create group
            node = self._file.create_group(weg)

        # create new event group
        evt_node = node.create_group(name)

        # create datasets
        if compress:
            _ = evt_node.create_dataset('ts', data=ts, compression='gzip')
            _ = evt_node.create_dataset('values',
                                        data=values,
                                        compression='gzip')
        else:
            _ = evt_node.create_dataset('ts', data=ts)
            _ = evt_node.create_dataset('values', data=values)

        # add metadata
        evt_node.attrs['json'] = json.dumps(mdata)

        # output
        grp = weg.replace('/events', '')

        return utils.ReturnTuple((grp, name), ('group', 'name'))

    def _get_event(self, group='', name=None):
        """Retrieve event datasets from the file.

        Parameters
        ----------
        group : str, optional
            Event group.
        name : str
            Name of the event dataset.

        Returns
        -------
        event : h5py.Group
            HDF5 event group.
        ts : h5py.Dataset
            HDF5 time stamps dataset.
        values : h5py.Dataset
            HDF5 values dataset.

        """

        # check inputs
        if name is None:
            raise TypeError(
                "Please specify the name of the event to retrieve.")

        # navigate to group
        weg = self._join_group(self._events.name, group)
        try:
            node = self._file[weg]
        except KeyError:
            raise KeyError("Inexistent event group.")

        # event group
        try:
            evt_group = node[name]
        except KeyError:
            raise KeyError("Inexistent event dataset.")

        # get data
        try:
            ts = evt_group['ts']
        except KeyError:
            raise KeyError("Could not find expected time stamps structure.")

        try:
            values = evt_group['values']
        except KeyError:
            raise KeyError("Could not find expected values structure.")

        return evt_group, ts, values

    def get_event(self, group='', name=None):
        """Retrieve an event from the file.

        Parameters
        ----------
        group : str, optional
            Event group.
        name : str
            Name of the event dataset.

        Returns
        -------
        ts : array
            Array of time stamps.
        values : array
            Array with data for each time stamp.
        mdata : dict
            Event metadata.

        Notes
        -----
        * Loads the entire event data into memory.

        """

        evt_group, dset_ts, dset_values = self._get_event(group=group,
                                                          name=name)
        ts = dset_ts[...]
        values = dset_values[...]

        try:
            mdata = json.loads(evt_group.attrs['json'])
        except KeyError:
            mdata = {}

        return utils.ReturnTuple((ts, values, mdata),
                                 ('ts', 'values', 'mdata'))

    def del_event(self, group='', name=None):
        """Delete an event from the file.

        Parameters
        ----------
        group : str, optional
            Event group.
        name : str
            Name of the event dataset.

        """

        evt_group, _, _ = self._get_event(group=group, name=name)

        try:
            del self._file[evt_group.name]
        except IOError:
            raise IOError("Unable to delete object with current file mode.")

    def del_event_group(self, group=''):
        """Delete all events in a file group.

        Parameters
        ----------
        group : str, optional
            Event group.

        """

        # navigate to group
        weg = self._join_group(self._events.name, group)
        try:
            node = self._file[weg]
        except KeyError:
            raise KeyError("Inexistent event group.")

        if node.name == '/events':
            # delete all elements
            for _, item in six.iteritems(node):
                try:
                    del self._file[item.name]
                except IOError:
                    raise IOError(
                        "Unable to delete object with current file mode.")
        else:
            # delete single node
            try:
                del self._file[node.name]
            except IOError:
                raise IOError(
                    "Unable to delete object with current file mode.")

    def list_events(self, group='', recursive=False):
        """List events in the file.

        Parameters
        ----------
        group : str, optional
            Event group.
        recursive : bool, optional
            If True, also lists events in sub-groups.

        Returns
        -------
        events : list
            List of (group, name) tuples of the found events.

        """

        # navigate to group
        weg = self._join_group(self._events.name, group)
        try:
            node = self._file[weg]
        except KeyError:
            raise KeyError("Inexistent event group.")

        out = []
        for name, item in six.iteritems(node):
            if isinstance(item, h5py.Group):
                try:
                    _ = item.attrs['json']
                except KeyError:
                    # normal group
                    if recursive:
                        aux = self._join_group(group, name)
                        out.extend(self.list_events(group=aux,
                                                    recursive=True)['events'])
                else:
                    # event group
                    out.append((group, name))

        return utils.ReturnTuple((out,), ('events',))

    def close(self):
        """Close file descriptor."""

        # flush buffers
        self._file.flush()

        # close
        self._file.close()
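The `_join_group` helper normalizes arbitrary group fragments into one absolute HDF5 path. The same filtering logic can be sketched as a plain standalone function (written outside the class purely for illustration):

```python
def join_group(*parts):
    """Join path fragments into one absolute group path, dropping
    blanks, bare slashes, and surrounding whitespace (as _join_group does)."""
    bits = []
    for item in parts:
        bits.extend(item.split('/'))
    out = [b.strip() for b in bits if b.strip() not in ('', '/')]
    return '/' + '/'.join(out)
```

For example, `join_group('/signals', 'ecg/raw')` yields `'/signals/ecg/raw'`, and empty fragments collapse to the root path `'/'`.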
BioSPPy/source/biosppy/synthesizers/__init__.py
ADDED
@@ -0,0 +1,18 @@

# -*- coding: utf-8 -*-
"""
biosppy.synthesizers
--------------------

This package provides methods to synthesize common
physiological signals (biosignals):
    * Electrocardiogram (ECG)

:copyright: (c) 2015-2021 by Instituto de Telecomunicacoes
:license: BSD 3-clause, see LICENSE for more details.
"""

# compat
from __future__ import absolute_import, division, print_function

# allow lazy loading
from . import ecg
BioSPPy/source/biosppy/synthesizers/ecg.py
ADDED
@@ -0,0 +1,661 @@

# -*- coding: utf-8 -*-
"""
biosppy.synthesizers.ecg
------------------------

This module provides methods to synthesize Electrocardiographic (ECG) signals.

:copyright: (c) 2015-2021 by Instituto de Telecomunicacoes
:license: BSD 3-clause, see LICENSE for more details.

"""

# Imports
from math import pi

# 3rd party
import numpy as np
import biosppy.signals
import warnings
import matplotlib.pyplot as plt

# local
from biosppy.signals import tools as st
from .. import plotting, utils


def B(l, Kb):
    """Generates the amplitude values of the first isoelectric line (B segment) of the ECG signal.

    Follows the approach by Dolinský, Andráš, Michaeli and Grimaldi [Model03].

    If a parameter is outside its admissible range, an exception is raised.

    Parameters
    ----------
    l : float
        Inverse of the sampling rate.
    Kb : int
        B segment width (milliseconds).

    Returns
    -------
    B_segment : array
        B segment amplitude values (millivolts).

    References
    ----------
    .. [Model03] Pavol DOLINSKÝ, Imrich ANDRÁŠ, Linus MICHAELI, Domenico GRIMALDI,
       "MODEL FOR GENERATING SIMPLE SYNTHETIC ECG SIGNALS",
       Acta Electrotechnica et Informatica, Vol. 18, No. 3, 2018, 3–8

    """
    if Kb > 130:
        raise Exception("Warning! Kb is out of boundaries.")
    else:
        a = np.zeros(Kb * l)
        B_segment = a.tolist()
        return B_segment


def P(i, Ap, Kp):
    """Generates the amplitude values of the P wave in the ECG signal.

    Follows the approach by Dolinský, Andráš, Michaeli and Grimaldi [Model03].

    If a parameter is outside its admissible range, an exception is raised.

    Parameters
    ----------
    i : int
        Sampling rate.
    Ap : float
        P wave amplitude (millivolts).
    Kp : int
        P wave width (milliseconds).

    Returns
    -------
    P_wave : array
        P wave amplitude values (millivolts).

    References
    ----------
    .. [Model03] Pavol DOLINSKÝ, Imrich ANDRÁŠ, Linus MICHAELI, Domenico GRIMALDI,
       "MODEL FOR GENERATING SIMPLE SYNTHETIC ECG SIGNALS",
       Acta Electrotechnica et Informatica, Vol. 18, No. 3, 2018, 3–8

    """
    if Ap < -0.2 or Ap > 0.5:
        raise Exception("Warning! Ap is out of boundaries.")
    elif Kp < 10 or Kp > 100:
        raise Exception("Warning! Kp is out of boundaries.")
    else:
        k = np.arange(0, Kp, i)
        a = -(Ap / 2.0) * np.cos((2 * np.pi * k + 15) / Kp) + Ap / 2.0
        P_wave = a.tolist()
        return P_wave

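As a standalone illustration of the raised-cosine P-wave expression above, here is a pure-Python sketch; the `step` argument and the default values are illustrative assumptions, not part of the module's API:

```python
from math import cos, pi

def p_wave(step=1.0, Ap=0.2, Kp=80):
    # Evaluate -(Ap/2)*cos((2*pi*k + 15)/Kp) + Ap/2 for k in [0, Kp),
    # mirroring the expression used in P() above with a plain loop.
    out = []
    k = 0.0
    while k < Kp:
        out.append(-(Ap / 2.0) * cos((2 * pi * k + 15) / Kp) + Ap / 2.0)
        k += step
    return out
```

Because the raised cosine is shifted and scaled by `Ap/2`, every sample lies between 0 and `Ap` millivolts.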
def Pq(l, Kpq):
    """Generates the amplitude values of the PQ segment in the ECG signal.

    Follows the approach by Dolinský, Andráš, Michaeli and Grimaldi [Model03].

    If a parameter is outside its admissible range, an exception is raised.

    Parameters
    ----------
    l : float
        Inverse of the sampling rate.
    Kpq : int
        PQ segment width (milliseconds).

    Returns
    -------
    PQ_segment : array
        PQ segment amplitude values (millivolts).

    References
    ----------
    .. [Model03] Pavol DOLINSKÝ, Imrich ANDRÁŠ, Linus MICHAELI, Domenico GRIMALDI,
       "MODEL FOR GENERATING SIMPLE SYNTHETIC ECG SIGNALS",
       Acta Electrotechnica et Informatica, Vol. 18, No. 3, 2018, 3–8

    """
    if Kpq < 0 or Kpq > 60:
        raise Exception("Warning! Kpq is out of boundaries.")
    else:
        a = np.zeros(Kpq * l)
        PQ_segment = a.tolist()
        return PQ_segment


def Q1(i, Aq, Kq1):
    """Generates the amplitude values of the first 5/6 of the Q wave in the ECG signal.

    Follows the approach by Dolinský, Andráš, Michaeli and Grimaldi [Model03].

    If a parameter is outside its admissible range, an exception is raised.

    Parameters
    ----------
    i : int
        Sampling rate.
    Aq : float
        Q wave amplitude (millivolts).
    Kq1 : int
        First 5/6 of the Q wave width (milliseconds).

    Returns
    -------
    Q1_wave : array
        First 5/6 of the Q wave amplitude values (millivolts).

    References
    ----------
    .. [Model03] Pavol DOLINSKÝ, Imrich ANDRÁŠ, Linus MICHAELI, Domenico GRIMALDI,
       "MODEL FOR GENERATING SIMPLE SYNTHETIC ECG SIGNALS",
       Acta Electrotechnica et Informatica, Vol. 18, No. 3, 2018, 3–8

    """
    if Aq < 0 or Aq > 0.5:
        raise Exception("Warning! Aq is out of boundaries.")
    elif Kq1 < 0 or Kq1 > 70:
        raise Exception("Warning! Kq1 is out of boundaries.")
    else:
        k = np.arange(0, Kq1, i)
        a = -Aq * (k / Kq1)
        Q1_wave = a.tolist()
        return Q1_wave


def Q2(i, Aq, Kq2):
    """Generates the amplitude values of the last 1/6 of the Q wave in the ECG signal.

    Follows the approach by Dolinský, Andráš, Michaeli and Grimaldi [Model03].

    If a parameter is outside its admissible range, an exception is raised.

    Parameters
    ----------
    i : int
        Sampling rate.
    Aq : float
        Q wave amplitude (millivolts).
    Kq2 : int
        Last 1/6 of the Q wave width (milliseconds).

    Returns
    -------
    Q2_wave : array
        Last 1/6 of the Q wave amplitude values (millivolts).

    References
    ----------
    .. [Model03] Pavol DOLINSKÝ, Imrich ANDRÁŠ, Linus MICHAELI, Domenico GRIMALDI,
       "MODEL FOR GENERATING SIMPLE SYNTHETIC ECG SIGNALS",
       Acta Electrotechnica et Informatica, Vol. 18, No. 3, 2018, 3–8

    """
    if Aq < 0 or Aq > 0.5:
        raise Exception("Warning! Aq is out of boundaries.")
    elif Kq2 < 0 or Kq2 > 50:
        raise Exception("Warning! Kq2 is out of boundaries.")
    else:
        k = np.arange(0, Kq2, i)
        a = Aq * (k / Kq2) - Aq
        Q2_wave = a.tolist()
        return Q2_wave


def R(i, Ar, Kr):
    """Generates the amplitude values of the R wave in the ECG signal.

    Follows the approach by Dolinský, Andráš, Michaeli and Grimaldi [Model03].

    If a parameter is outside its admissible range, an exception is raised.

    Parameters
    ----------
    i : int
        Sampling rate.
    Ar : float
        R wave amplitude (millivolts).
    Kr : int
        R wave width (milliseconds).

    Returns
    -------
    R_wave : array
        R wave amplitude values (millivolts).

    References
    ----------
    .. [Model03] Pavol DOLINSKÝ, Imrich ANDRÁŠ, Linus MICHAELI, Domenico GRIMALDI,
       "MODEL FOR GENERATING SIMPLE SYNTHETIC ECG SIGNALS",
       Acta Electrotechnica et Informatica, Vol. 18, No. 3, 2018, 3–8

    """
    if Ar < 0.5 or Ar > 2:
        raise Exception("Warning! Ar is out of boundaries.")
    elif Kr < 10 or Kr > 150:
        raise Exception("Warning! Kr is out of boundaries.")
    else:
        k = np.arange(0, Kr, i)
        a = Ar * np.sin((np.pi * k) / Kr)
        R_wave = a.tolist()
        return R_wave

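The R wave above is a half-sine of width `Kr` and peak `Ar`. A quick pure-Python check of that shape (one sample per millisecond; the defaults here are illustrative assumptions, not the module's API):

```python
from math import sin, pi

def r_wave(Ar=1.0, Kr=100):
    # Ar * sin(pi*k/Kr) for k in [0, Kr): rises from 0 to Ar and back to ~0
    return [Ar * sin(pi * k / Kr) for k in range(Kr)]
```

With these defaults the peak value `Ar` occurs exactly at the midpoint `k = Kr/2`, and no sample is negative.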
def S(i, As, Ks, Kcs, k=0):
    """Generates the amplitude values of the S wave in the ECG signal.

    Follows the approach by Dolinský, Andráš, Michaeli and Grimaldi [Model03].

    If the given parameters are outside their expected ranges, an error is
    raised.

    Parameters
    ----------
    i : float
        Time step between samples (milliseconds).
    As : float
        S wave amplitude (millivolts).
    Ks : int
        S wave width (milliseconds).
    Kcs : int
        Parameter which allows slight adjustment of the S wave shape by
        cutting away a portion at the end.
    k : int, optional
        If non-zero, the S wave expression is evaluated only at this value.

    Returns
    -------
    S : array
        If k = 0, S wave amplitude values (millivolts).
    S : float
        If k != 0, value obtained by evaluating the S wave expression at the
        given k.

    References
    ----------
    .. [Model03] Pavol DOLINSKÝ, Imrich ANDRÁŠ, Linus MICHAELI, Domenico GRIMALDI,
       "MODEL FOR GENERATING SIMPLE SYNTHETIC ECG SIGNALS",
       Acta Electrotechnica et Informatica, Vol. 18, No. 3, 2018, 3–8

    """
    if As < 0 or As > 1:
        raise ValueError("As is out of boundaries.")
    elif Ks < 10 or Ks > 200:
        raise ValueError("Ks is out of boundaries.")
    elif Kcs < -5 or Kcs > 150:
        raise ValueError("Kcs is out of boundaries.")
    else:
        if k == 0:
            k = np.arange(0, Ks - Kcs, i)
            a = (
                -As * i * k * (19.78 * np.pi) / Ks
                * np.exp(-2 * (((6 * np.pi) / Ks) * i * k) ** 2)
            )
            S = a.tolist()
        else:
            S = (
                -As * i * k * (19.78 * np.pi) / Ks
                * np.exp(-2 * (((6 * np.pi) / Ks) * i * k) ** 2)
            )
    return S


def St(i, As, Ks, Kcs, sm, Kst, k=0):
    """Generates the amplitude values of the ST segment in the ECG signal.

    Follows the approach by Dolinský, Andráš, Michaeli and Grimaldi [Model03].

    If the given parameters are outside their expected ranges, an error is
    raised.

    Parameters
    ----------
    i : float
        Time step between samples (milliseconds).
    As : float
        S wave amplitude (millivolts).
    Ks : int
        S wave width (milliseconds).
    Kcs : int
        Parameter which allows slight adjustment of the S wave shape by
        cutting away a portion at the end.
    sm : int
        Slope parameter in the ST segment.
    Kst : int
        ST segment width (milliseconds).
    k : int, optional
        If non-zero, the ST segment expression is evaluated only at this value.

    Returns
    -------
    ST : array
        If k = 0, ST segment amplitude values (millivolts).
    ST : float
        If k != 0, value obtained by evaluating the ST segment expression at
        the given k.

    References
    ----------
    .. [Model03] Pavol DOLINSKÝ, Imrich ANDRÁŠ, Linus MICHAELI, Domenico GRIMALDI,
       "MODEL FOR GENERATING SIMPLE SYNTHETIC ECG SIGNALS",
       Acta Electrotechnica et Informatica, Vol. 18, No. 3, 2018, 3–8

    """
    if sm < 1 or sm > 150:
        raise ValueError("sm is out of boundaries.")
    elif Kst < 0 or Kst > 110:
        raise ValueError("Kst is out of boundaries.")
    else:
        if k == 0:
            k = np.arange(0, Kst, i)
            a = -S(i, As, Ks, Kcs, Ks - Kcs) * (k / sm) + S(i, As, Ks, Kcs, Ks - Kcs)
            ST = a.tolist()
        else:
            ST = -S(i, As, Ks, Kcs, Ks - Kcs) * (k / sm) + S(i, As, Ks, Kcs, Ks - Kcs)
    return ST


def T(i, As, Ks, Kcs, sm, Kst, At, Kt, k=0):
    """Generates the amplitude values of the T wave in the ECG signal.

    Follows the approach by Dolinský, Andráš, Michaeli and Grimaldi [Model03].

    If the given parameters are outside their expected ranges, an error is
    raised.

    Parameters
    ----------
    i : float
        Time step between samples (milliseconds).
    As : float
        S wave amplitude (millivolts).
    Ks : int
        S wave width (milliseconds).
    Kcs : int
        Parameter which allows slight adjustment of the S wave shape by
        cutting away a portion at the end.
    sm : int
        Slope parameter in the ST segment.
    Kst : int
        ST segment width (milliseconds).
    At : float
        1/2 of the T wave amplitude (millivolts).
    Kt : int
        T wave width (milliseconds).
    k : int, optional
        If non-zero, the T wave expression is evaluated only at this value.

    Returns
    -------
    T : array
        If k = 0, T wave amplitude values (millivolts).
    T : float
        If k != 0, value obtained by evaluating the T wave expression at the
        given k.

    References
    ----------
    .. [Model03] Pavol DOLINSKÝ, Imrich ANDRÁŠ, Linus MICHAELI, Domenico GRIMALDI,
       "MODEL FOR GENERATING SIMPLE SYNTHETIC ECG SIGNALS",
       Acta Electrotechnica et Informatica, Vol. 18, No. 3, 2018, 3–8

    """
    if At < -0.5 or At > 1:
        raise ValueError("At is out of boundaries.")
    elif Kt < 50 or Kt > 300:
        raise ValueError("Kt is out of boundaries.")
    else:
        if k == 0:
            k = np.arange(0, Kt, i)
            a = (
                -At * np.cos((1.48 * np.pi * k + 15) / Kt)
                + At
                + St(i, As, Ks, Kcs, sm, Kst, Kst)
            )
            T = a.tolist()
        else:
            T = (
                -At * np.cos((1.48 * np.pi * k + 15) / Kt)
                + At
                + St(i, As, Ks, Kcs, sm, Kst, Kst)
            )
    return T


def I(i, As, Ks, Kcs, sm, Kst, At, Kt, si, Ki):
    """Generates the amplitude values of the final isoelectric segment
    (I segment) in the ECG signal.

    Follows the approach by Dolinský, Andráš, Michaeli and Grimaldi [Model03].

    If the given parameters are outside their expected ranges, an error is
    raised.

    Parameters
    ----------
    i : float
        Time step between samples (milliseconds).
    As : float
        S wave amplitude (millivolts).
    Ks : int
        S wave width (milliseconds).
    Kcs : int
        Parameter which allows slight adjustment of the S wave shape by
        cutting away a portion at the end.
    sm : int
        Slope parameter in the ST segment.
    Kst : int
        ST segment width (milliseconds).
    At : float
        1/2 of the T wave amplitude (millivolts).
    Kt : int
        T wave width (milliseconds).
    si : int
        Parameter for setting the transition slope between T wave and
        isoelectric line.
    Ki : int
        I segment width (milliseconds).

    Returns
    -------
    I_segment : array
        I segment amplitude values (millivolts).

    References
    ----------
    .. [Model03] Pavol DOLINSKÝ, Imrich ANDRÁŠ, Linus MICHAELI, Domenico GRIMALDI,
       "MODEL FOR GENERATING SIMPLE SYNTHETIC ECG SIGNALS",
       Acta Electrotechnica et Informatica, Vol. 18, No. 3, 2018, 3–8

    """
    if si < 0 or si > 50:
        raise ValueError("si is out of boundaries.")
    else:
        k = np.arange(0, Ki, i)
        a = T(i, As, Ks, Kcs, sm, Kst, At, Kt, Kt) * (si / (k + 10))
        I_segment = a.tolist()
    return I_segment


def ecg(
    Kb=130,
    Ap=0.2,
    Kp=100,
    Kpq=40,
    Aq=0.1,
    Kq1=25,
    Kq2=5,
    Ar=0.7,
    Kr=40,
    As=0.2,
    Ks=30,
    Kcs=5,
    sm=96,
    Kst=100,
    At=0.15,
    Kt=220,
    si=2,
    Ki=200,
    var=0.01,
    sampling_rate=10000,
):  # normal values by default
    """Concatenates the segments and waves to make an ECG signal. The default
    values are physiological.

    Follows the approach by Dolinský, Andráš, Michaeli and Grimaldi [Model03].

    If the given parameters aren't within physiological values (limits based
    on the website [ECGwaves]), a warning is raised.

    Parameters
    ----------
    Kb : int, optional
        B segment width (milliseconds).
    Ap : float, optional
        P wave amplitude (millivolts).
    Kp : int, optional
        P wave width (milliseconds).
    Kpq : int, optional
        PQ segment width (milliseconds).
    Aq : float, optional
        Q wave amplitude (millivolts).
    Kq1 : int, optional
        First 5/6 of the Q wave width (milliseconds).
    Kq2 : int, optional
        Last 1/6 of the Q wave width (milliseconds).
    Ar : float, optional
        R wave amplitude (millivolts).
    Kr : int, optional
        R wave width (milliseconds).
    As : float, optional
        S wave amplitude (millivolts).
    Ks : int, optional
        S wave width (milliseconds).
    Kcs : int, optional
        Parameter which allows slight adjustment of the S wave shape by
        cutting away a portion at the end.
    sm : int, optional
        Slope parameter in the ST segment.
    Kst : int, optional
        ST segment width (milliseconds).
    At : float, optional
        1/2 of the T wave amplitude (millivolts).
    Kt : int, optional
        T wave width (milliseconds).
    si : int, optional
        Parameter for setting the transition slope between T wave and
        isoelectric line.
    Ki : int, optional
        I segment width (milliseconds).
    var : float, optional
        Value between 0.0 and 1.0 that adds variability to the obtained
        signal, by changing each parameter following a normal distribution
        with mean value `parameter_value` and std `var * parameter_value`.
    sampling_rate : int, optional
        Sampling frequency (Hz).

    Returns
    -------
    ecg : array
        Amplitude values of the ECG wave.
    t : array
        Time values according to the provided sampling rate.
    params : dict
        Input parameters of the function.

    Example
    -------
    sampling_rate = 10000
    beats = 3
    noise_amplitude = 0.05

    ECGtotal = np.array([])
    for i in range(beats):
        ECGwave, _, _ = ecg(sampling_rate=sampling_rate, var=0.1)
        ECGtotal = np.concatenate((ECGtotal, ECGwave))
    t = np.arange(0, len(ECGtotal)) / sampling_rate

    # add powerline noise (50 Hz)
    noise = noise_amplitude * np.sin(2 * np.pi * 50 * t)
    ECGtotal += noise

    plt.plot(t, ECGtotal)
    plt.xlabel("Time (s)")
    plt.ylabel("Amplitude (mV)")
    plt.grid()
    plt.title("ECG")

    plt.show()

    References
    ----------
    .. [Model03] Pavol DOLINSKÝ, Imrich ANDRÁŠ, Linus MICHAELI, Domenico GRIMALDI,
       "MODEL FOR GENERATING SIMPLE SYNTHETIC ECG SIGNALS",
       Acta Electrotechnica et Informatica, Vol. 18, No. 3, 2018, 3–8
    .. [ECGwaves] https://ecgwaves.com/

    """
    if Kp > 120 and Ap >= 0.25:
        warnings.warn("P wave isn't within physiological values.")

    if Kq1 + Kq2 > 30 or Aq > 0.25 * Ar:
        warnings.warn("Q wave isn't within physiological values.")

    if 120 > Kp + Kpq or Kp + Kpq > 220:
        warnings.warn("PR interval isn't within physiological limits.")

    if Kq1 + Kq2 + Kr + Ks - Kcs > 120:
        warnings.warn("QRS complex duration isn't within physiological limits.")

    if Kq1 + Kq2 + Kr + Ks - Kcs + Kst + Kt > 450:
        warnings.warn("QT segment duration isn't within physiological limits for men.")

    if Kq1 + Kq2 + Kr + Ks - Kcs + Kst + Kt > 470:
        warnings.warn(
            "QT segment duration isn't within physiological limits for women."
        )

    if var < 0 or var > 1:
        raise ValueError("Variability value should be between 0.0 and 1.0.")

    if var > 0:
        # change each parameter according to the provided variability
        nd = lambda x: np.random.normal(x, x * var)
        Kb = round(np.clip(nd(Kb), 0, 130))
        Ap = np.clip(nd(Ap), -0.2, 0.5)
        Kp = np.clip(nd(Kp), 10, 100)
        Kpq = round(np.clip(nd(Kpq), 0, 60))
        Aq = np.clip(nd(Aq), 0, 0.5)
        Kq1 = round(np.clip(nd(Kq1), 0, 70))
        Kq2 = round(np.clip(nd(Kq2), 0, 50))
        Ar = np.clip(nd(Ar), 0.5, 2)
        Kr = round(np.clip(nd(Kr), 10, 150))
        As = np.clip(nd(As), 0, 1)
        Ks = round(np.clip(nd(Ks), 10, 200))
        Kcs = round(np.clip(nd(Kcs), -5, 150))
        sm = round(np.clip(nd(sm), 1, 150))
        Kst = round(np.clip(nd(Kst), 0, 110))
        At = np.clip(nd(At), -0.5, 1)
        Kt = round(np.clip(nd(Kt), 50, 300))
        si = round(np.clip(nd(si), 0, 50))

    # variable i is the time between samples (in milliseconds)
    i = 1000 / sampling_rate
    l = int(1 / i)

    B_to_S = (
        B(l, Kb)
        + P(i, Ap, Kp)
        + Pq(l, Kpq)
        + Q1(i, Aq, Kq1)
        + Q2(i, Aq, Kq2)
        + R(i, Ar, Kr)
        + S(i, As, Ks, Kcs)
    )
    St_to_I = (
        St(i, As, Ks, Kcs, sm, Kst)
        + T(i, As, Ks, Kcs, sm, Kst, At, Kt)
        + I(i, As, Ks, Kcs, sm, Kst, At, Kt, si, Ki)
    )

    # the two halves of the signal are smoothed with different window sizes
    ECG1_filtered, n1 = st.smoother(B_to_S, size=50)
    ECG2_filtered, n2 = st.smoother(St_to_I, size=500)

    # the signal is concatenated
    ECGwave = np.concatenate((ECG1_filtered, ECG2_filtered))

    # time array
    t = np.arange(0, len(ECGwave)) / sampling_rate

    # output; report the parameters actually used (after variability)
    params = {
        "Kb": Kb,
        "Ap": Ap,
        "Kp": Kp,
        "Kpq": Kpq,
        "Aq": Aq,
        "Kq1": Kq1,
        "Kq2": Kq2,
        "Ar": Ar,
        "Kr": Kr,
        "As": As,
        "Ks": Ks,
        "Kcs": Kcs,
        "sm": sm,
        "Kst": Kst,
        "At": At,
        "Kt": Kt,
        "si": si,
        "Ki": Ki,
        "var": var,
        "sampling_rate": sampling_rate,
    }

    args = (ECGwave, t, params)
    names = ("ecg", "t", "params")

    return utils.ReturnTuple(args, names)
BioSPPy/source/biosppy/timing.py
ADDED
|
@@ -0,0 +1,97 @@
# -*- coding: utf-8 -*-
"""
biosppy.timing
--------------

This module provides simple methods to measure computation times.

:copyright: (c) 2015-2018 by Instituto de Telecomunicacoes
:license: BSD 3-clause, see LICENSE for more details.
"""

# Imports
# compat
from __future__ import absolute_import, division, print_function
# from six.moves import map, range, zip
# import six

# built-in
import time

# 3rd party

# local

# Globals
CLOCKS = dict()
DFC = '__default_clock__'


def tic(name=None):
    """Start the clock.

    Parameters
    ----------
    name : str, optional
        Name of the clock; if None, uses the default name.

    """

    if name is None:
        name = DFC

    CLOCKS[name] = time.time()


def tac(name=None):
    """Stop the clock.

    Parameters
    ----------
    name : str, optional
        Name of the clock; if None, uses the default name.

    Returns
    -------
    delta : float
        Elapsed time, in seconds.

    Raises
    ------
    KeyError
        If the name of the clock is unknown.

    """

    toc = time.time()

    if name is None:
        name = DFC

    try:
        delta = toc - CLOCKS[name]
    except KeyError:
        raise KeyError('Unknown clock.')

    return delta


def clear(name=None):
    """Clear the clock.

    Parameters
    ----------
    name : str, optional
        Name of the clock; if None, uses the default name.

    """

    if name is None:
        name = DFC

    CLOCKS.pop(name)


def clear_all():
    """Clear all clocks."""

    CLOCKS.clear()
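The tic/tac pattern above can be exercised as follows; the snippet inlines the same few lines as the module (same names and logic) so it runs standalone, without importing biosppy:

```python
import time

# condensed mirror of biosppy.timing: a module-level dict of named clocks
CLOCKS = dict()
DFC = '__default_clock__'


def tic(name=None):
    # start (or restart) the named clock
    if name is None:
        name = DFC
    CLOCKS[name] = time.time()


def tac(name=None):
    # elapsed seconds since the matching tic(); KeyError if never started
    if name is None:
        name = DFC
    try:
        return time.time() - CLOCKS[name]
    except KeyError:
        raise KeyError('Unknown clock.')


tic('work')
time.sleep(0.05)
delta = tac('work')
assert delta >= 0.04  # at least the slept duration (modulo timer resolution)
```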
BioSPPy/source/biosppy/utils.py
ADDED
|
@@ -0,0 +1,439 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# -*- coding: utf-8 -*-
|
| 2 |
+
"""
|
| 3 |
+
biosppy.utils
|
| 4 |
+
-------------
|
| 5 |
+
|
| 6 |
+
This module provides several frequently used functions and hacks.
|
| 7 |
+
|
| 8 |
+
:copyright: (c) 2015-2018 by Instituto de Telecomunicacoes
|
| 9 |
+
:license: BSD 3-clause, see LICENSE for more details.
|
| 10 |
+
"""
|
| 11 |
+
|
| 12 |
+
# Imports
|
| 13 |
+
# compat
|
| 14 |
+
from __future__ import absolute_import, division, print_function
|
| 15 |
+
from six.moves import map, range, zip
|
| 16 |
+
import six
|
| 17 |
+
|
| 18 |
+
# built-in
|
| 19 |
+
import collections
|
| 20 |
+
import copy
|
| 21 |
+
import keyword
|
| 22 |
+
import os
|
| 23 |
+
import re
|
| 24 |
+
|
| 25 |
+
# 3rd party
|
| 26 |
+
import numpy as np
|
| 27 |
+
|
| 28 |
+
|
| 29 |
+
def normpath(path):
|
| 30 |
+
"""Normalize a path.
|
| 31 |
+
|
| 32 |
+
Parameters
|
| 33 |
+
----------
|
| 34 |
+
path : str
|
| 35 |
+
The path to normalize.
|
| 36 |
+
|
| 37 |
+
Returns
|
| 38 |
+
-------
|
| 39 |
+
npath : str
|
| 40 |
+
The normalized path.
|
| 41 |
+
|
| 42 |
+
"""
|
| 43 |
+
|
| 44 |
+
if "~" in path:
|
| 45 |
+
out = os.path.abspath(os.path.expanduser(path))
|
| 46 |
+
else:
|
| 47 |
+
out = os.path.abspath(path)
|
| 48 |
+
|
| 49 |
+
return out
|
| 50 |
+
|
| 51 |
+
|
| 52 |
+
def fileparts(path):
|
| 53 |
+
"""split a file path into its directory, name, and extension.
|
| 54 |
+
|
| 55 |
+
Parameters
|
| 56 |
+
----------
|
| 57 |
+
path : str
|
| 58 |
+
Input file path.
|
| 59 |
+
|
| 60 |
+
Returns
|
| 61 |
+
-------
|
| 62 |
+
dirname : str
|
| 63 |
+
File directory.
|
| 64 |
+
fname : str
|
| 65 |
+
File name.
|
| 66 |
+
ext : str
|
| 67 |
+
File extension.
|
| 68 |
+
|
| 69 |
+
Notes
|
| 70 |
+
-----
|
| 71 |
+
* Removes the dot ('.') from the extension.
|
| 72 |
+
|
| 73 |
+
"""
|
| 74 |
+
|
| 75 |
+
dirname, fname = os.path.split(path)
|
| 76 |
+
fname, ext = os.path.splitext(fname)
|
| 77 |
+
ext = ext.replace(".", "")
|
| 78 |
+
|
| 79 |
+
return dirname, fname, ext
|
| 80 |
+
|
| 81 |
+
|
| 82 |
+
def fullfile(*args):
|
| 83 |
+
"""Join one or more file path components, assuming the last is
|
| 84 |
+
the extension.
|
| 85 |
+
|
| 86 |
+
Parameters
|
| 87 |
+
----------
|
| 88 |
+
``*args`` : list, optional
|
| 89 |
+
Components to concatenate.
|
| 90 |
+
|
| 91 |
+
Returns
|
| 92 |
+
-------
|
| 93 |
+
fpath : str
|
| 94 |
+
The concatenated file path.
|
| 95 |
+
|
| 96 |
+
"""
|
| 97 |
+
|
| 98 |
+
nb = len(args)
|
| 99 |
+
if nb == 0:
|
| 100 |
+
return ""
|
| 101 |
+
elif nb == 1:
|
| 102 |
+
return args[0]
|
| 103 |
+
elif nb == 2:
|
| 104 |
+
return os.path.join(*args)
|
| 105 |
+
|
| 106 |
+
fpath = os.path.join(*args[:-1]) + "." + args[-1]
|
| 107 |
+
|
| 108 |
+
return fpath
|
| 109 |
+
|
| 110 |
+
|
| 111 |
+
def walktree(top=None, spec=None):
|
| 112 |
+
"""Iterator to recursively descend a directory and return all files
|
| 113 |
+
matching the spec.
|
| 114 |
+
|
| 115 |
+
Parameters
|
| 116 |
+
----------
|
| 117 |
+
top : str, optional
|
| 118 |
+
Starting directory; if None, defaults to the current working directoty.
|
| 119 |
+
spec : str, optional
|
| 120 |
+
Regular expression to match the desired files;
|
| 121 |
+
if None, matches all files; typical patterns:
|
| 122 |
+
* `r'\.txt$'` - matches files with '.txt' extension;
|
| 123 |
+
* `r'^File_'` - matches files starting with 'File\_'
|
| 124 |
+
* `r'^File_.+\.txt$'` - matches files starting with 'File\_' and ending with the '.txt' extension.
|
| 125 |
+
|
| 126 |
+
Yields
|
| 127 |
+
------
|
| 128 |
+
fpath : str
|
| 129 |
+
Absolute file path.
|
| 130 |
+
|
| 131 |
+
Notes
|
| 132 |
+
-----
|
| 133 |
+
* Partial matches are also selected.
|
| 134 |
+
|
| 135 |
+
See Also
|
| 136 |
+
--------
|
| 137 |
+
* https://docs.python.org/3/library/re.html
|
| 138 |
+
* https://regex101.com/
|
| 139 |
+
|
| 140 |
+
"""
|
| 141 |
+
|
| 142 |
+
if top is None:
|
| 143 |
+
top = os.getcwd()
|
| 144 |
+
|
| 145 |
+
if spec is None:
|
| 146 |
+
spec = r".*?"
|
| 147 |
+
|
| 148 |
+
prog = re.compile(spec)
|
| 149 |
+
|
| 150 |
+
for root, _, files in os.walk(top):
|
| 151 |
+
for name in files:
|
| 152 |
+
if prog.search(name):
|
| 153 |
+
fname = os.path.join(root, name)
|
| 154 |
+
yield fname
|
| 155 |
+
|
| 156 |
+
|
| 157 |
+
def remainderAllocator(votes, k, reverse=True, check=False):
|
| 158 |
+
"""Allocate k seats proportionally using the Remainder Method.
|
| 159 |
+
|
| 160 |
+
Also known as Hare-Niemeyer Method. Uses the Hare quota.
|
| 161 |
+
|
| 162 |
+
Parameters
|
| 163 |
+
----------
|
| 164 |
+
votes : list
|
| 165 |
+
Number of votes for each class/party/cardinal.
|
| 166 |
+
k : int
|
| 167 |
+
Total number o seats to allocate.
|
| 168 |
+
reverse : bool, optional
|
| 169 |
+
If True, allocates remaining seats largest quota first.
|
| 170 |
+
check : bool, optional
|
| 171 |
+
If True, limits the number of seats to the total number of votes.
|
| 172 |
+
|
| 173 |
+
Returns
|
| 174 |
+
-------
|
| 175 |
+
seats : list
|
| 176 |
+
Number of seats for each class/party/cardinal.
|
| 177 |
+
|
| 178 |
+
"""
|
| 179 |
+
|
| 180 |
+
# check total number of votes
|
| 181 |
+
tot = np.sum(votes)
|
| 182 |
+
if check and k > tot:
|
| 183 |
+
k = tot
|
| 184 |
+
|
| 185 |
+
# frequencies
|
| 186 |
+
length = len(votes)
|
| 187 |
+
freqs = np.array(votes, dtype="float") / tot
|
| 188 |
+
|
| 189 |
+
# assign items
|
| 190 |
+
aux = k * freqs
|
| 191 |
+
seats = aux.astype("int")
|
| 192 |
+
|
| 193 |
+
# leftovers
|
| 194 |
+
nb = k - seats.sum()
|
| 195 |
+
if nb > 0:
|
| 196 |
+
if reverse:
|
| 197 |
+
ind = np.argsort(aux - seats)[::-1]
|
| 198 |
+
else:
|
| 199 |
+
ind = np.argsort(aux - seats)
|
| 200 |
+
|
| 201 |
+
for i in range(nb):
|
| 202 |
+
seats[ind[i % length]] += 1
|
| 203 |
+
|
| 204 |
+
return seats.tolist()
|
| 205 |
+
|
| 206 |
+
|
| 207 |
+
def highestAveragesAllocator(votes, k, divisor="dHondt", check=False):
|
| 208 |
+
"""Allocate k seats proportionally using the Highest Averages Method.
|
| 209 |
+
|
| 210 |
+
Parameters
|
| 211 |
+
----------
|
| 212 |
+
votes : list
|
| 213 |
+
Number of votes for each class/party/cardinal.
|
| 214 |
+
k : int
|
| 215 |
+
Total number o seats to allocate.
|
| 216 |
+
divisor : str, optional
|
| 217 |
+
Divisor method; one of 'dHondt', 'Huntington-Hill', 'Sainte-Lague',
|
| 218 |
+
'Imperiali', or 'Danish'.
|
| 219 |
+
check : bool, optional
|
| 220 |
+
If True, limits the number of seats to the total number of votes.
|
| 221 |
+
|
| 222 |
+
Returns
|
| 223 |
+
-------
|
| 224 |
+
seats : list
|
| 225 |
+
Number of seats for each class/party/cardinal.
|
| 226 |
+
|
| 227 |
+
"""
|
| 228 |
+
|
| 229 |
+
# check total number of cardinals
|
| 230 |
+
tot = np.sum(votes)
|
| 231 |
+
if check and k > tot:
|
| 232 |
+
k = tot
|
| 233 |
+
|
| 234 |
+
# select divisor
|
| 235 |
+
if divisor == "dHondt":
|
| 236 |
+
fcn = lambda i: float(i)
|
| 237 |
+
elif divisor == "Huntington-Hill":
|
| 238 |
+
fcn = lambda i: np.sqrt(i * (i + 1.0))
|
| 239 |
+
elif divisor == "Sainte-Lague":
|
| 240 |
+
fcn = lambda i: i - 0.5
|
| 241 |
+
elif divisor == "Imperiali":
|
| 242 |
+
fcn = lambda i: float(i + 1)
|
| 243 |
+
elif divisor == "Danish":
|
| 244 |
+
fcn = lambda i: 3.0 * (i - 1.0) + 1.0
|
| 245 |
+
else:
|
| 246 |
+
raise ValueError("Unknown divisor method.")
|
| 247 |
+
|
| 248 |
+
# compute coefficients
|
| 249 |
+
tab = []
|
| 250 |
+
length = len(votes)
|
| 251 |
+
D = [fcn(i) for i in range(1, k + 1)]
|
| 252 |
+
for i in range(length):
|
| 253 |
+
for j in range(k):
|
| 254 |
+
tab.append((i, votes[i] / D[j]))
|
| 255 |
+
|
| 256 |
+
# sort
|
| 257 |
+
tab.sort(key=lambda item: item[1], reverse=True)
|
| 258 |
+
tab = tab[:k]
|
| 259 |
+
tab = np.array([item[0] for item in tab], dtype="int")
|
| 260 |
+
|
| 261 |
+
seats = np.zeros(length, dtype="int")
|
| 262 |
+
for i in range(length):
|
| 263 |
+
seats[i] = np.sum(tab == i)
|
| 264 |
+
|
| 265 |
+
return seats.tolist()
|
| 266 |
+
|
| 267 |
+
|
| 268 |
+
def random_fraction(indx, fraction, sort=True):
|
| 269 |
+
"""Select a random fraction of an input list of elements.
|
| 270 |
+
|
| 271 |
+
Parameters
|
| 272 |
+
----------
|
| 273 |
+
indx : list, array
|
| 274 |
+
Elements to partition.
|
| 275 |
+
fraction : int, float
|
| 276 |
+
Fraction to select.
|
| 277 |
+
sort : bool, optional
|
| 278 |
+
If True, output lists will be sorted.
|
| 279 |
+
|
| 280 |
+
Returns
|
| 281 |
+
-------
|
| 282 |
+
use : list, array
|
| 283 |
+
Selected elements.
|
| 284 |
+
unuse : list, array
|
| 285 |
+
Remaining elements.
|
| 286 |
+
|
| 287 |
+
"""
|
| 288 |
+
|
| 289 |
+
# number of elements to use
|
| 290 |
+
fraction = float(fraction)
|
| 291 |
+
nb = int(fraction * len(indx))
|
| 292 |
+
|
| 293 |
+
# copy because shuffle works in place
|
| 294 |
+
aux = copy.deepcopy(indx)
|
| 295 |
+
|
| 296 |
+
# shuffle
|
| 297 |
+
np.random.shuffle(aux)
|
| 298 |
+
|
| 299 |
+
# select
|
| 300 |
+
use = aux[:nb]
|
| 301 |
+
unuse = aux[nb:]
|
| 302 |
+
|
| 303 |
+
# sort
|
| 304 |
+
if sort:
|
| 305 |
+
use.sort()
|
| 306 |
+
unuse.sort()
|
| 307 |
+
|
| 308 |
+
return use, unuse
|
| 309 |
+
|
| 310 |
+
|
| 311 |
+
class ReturnTuple(tuple):
|
| 312 |
+
"""A named tuple to use as a hybrid tuple-dict return object.
|
| 313 |
+
|
| 314 |
+
Parameters
|
| 315 |
+
----------
|
| 316 |
+
values : iterable
|
| 317 |
+
Return values.
|
| 318 |
+
names : iterable, optional
|
| 319 |
+
Names for return values.
|
| 320 |
+
|
| 321 |
+
Raises
|
| 322 |
+
------
|
| 323 |
+
ValueError
|
| 324 |
+
If the number of values differs from the number of names.
|
| 325 |
+
ValueError
|
| 326 |
+
If any of the items in names:
|
| 327 |
+
* contain non-alphanumeric characters;
|
| 328 |
+
* are Python keywords;
|
| 329 |
+
* start with a number;
|
| 330 |
+
* are duplicates.
|
| 331 |
+
|
| 332 |
+
"""
|
| 333 |
+
|
+    def __new__(cls, values, names=None):
+
+        return tuple.__new__(cls, tuple(values))
+
+    def __init__(self, values, names=None):
+
+        nargs = len(values)
+
+        if names is None:
+            # create names
+            names = ["_%d" % i for i in range(nargs)]
+        else:
+            # check length
+            if len(names) != nargs:
+                raise ValueError("Number of names and values mismatch.")
+
+            # convert to str
+            names = list(map(str, names))
+
+            # check for keywords, alphanumeric, digits, repeats
+            seen = set()
+            for name in names:
+                if not all(c.isalnum() or (c == "_") for c in name):
+                    raise ValueError(
+                        "Names can only contain alphanumeric \
+                        characters and underscores: %r."
+                        % name
+                    )
+
+                if keyword.iskeyword(name):
+                    raise ValueError("Names cannot be a keyword: %r." % name)
+
+                if name[0].isdigit():
+                    raise ValueError("Names cannot start with a number: %r." % name)
+
+                if name in seen:
+                    raise ValueError("Encountered duplicate name: %r." % name)
+
+                seen.add(name)
+
+        self._names = names
+
+    def as_dict(self):
+        """Convert to an ordered dictionary.
+
+        Returns
+        -------
+        out : OrderedDict
+            An OrderedDict representing the return values.
+
+        """
+
+        return collections.OrderedDict(zip(self._names, self))
+
+    __dict__ = property(as_dict)
+
+    def __getitem__(self, key):
+        """Get item as an index or keyword.
+
+        Returns
+        -------
+        out : object
+            The object corresponding to the key, if it exists.
+
+        Raises
+        ------
+        KeyError
+            If the key is a string and it does not exist in the mapping.
+        IndexError
+            If the key is an int and it is out of range.
+
+        """
+
+        if isinstance(key, six.string_types):
+            if key not in self._names:
+                raise KeyError("Unknown key: %r." % key)
+
+            key = self._names.index(key)
+
+        return super(ReturnTuple, self).__getitem__(key)
+
+    def __repr__(self):
+        """Return representation string."""
+
+        tpl = "%s=%r"
+
+        rp = ", ".join(tpl % item for item in zip(self._names, self))
+
+        return "ReturnTuple(%s)" % rp
+
+    def __getnewargs__(self):
+        """Return self as a plain tuple; used for copy and pickle."""
+
+        return tuple(self)
+
+    def keys(self):
+        """Return the value names.
+
+        Returns
+        -------
+        out : list
+            The keys in the mapping.
+
+        """
+
+        return list(self._names)
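For readers skimming the diff, the `ReturnTuple` behaviour added above can be exercised with a short, self-contained sketch. This is a simplified re-implementation for illustration only: the real class also rejects non-alphanumeric and duplicate names, exposes `__dict__` via a property, and uses `six.string_types` for Python 2 compatibility. The example values (`heart_rate`, `quality`) are made up.

```python
import collections
import keyword


class ReturnTuple(tuple):
    """Simplified sketch of biosppy.utils.ReturnTuple: a tuple whose
    items can also be looked up by name, mirroring the diff above."""

    def __new__(cls, values, names=None):
        return tuple.__new__(cls, tuple(values))

    def __init__(self, values, names=None):
        nargs = len(values)
        if names is None:
            # auto-generate positional names: _0, _1, ...
            names = ["_%d" % i for i in range(nargs)]
        elif len(names) != nargs:
            raise ValueError("Number of names and values mismatch.")
        names = list(map(str, names))
        for name in names:
            if keyword.iskeyword(name):
                raise ValueError("Names cannot be a keyword: %r." % name)
        self._names = names

    def as_dict(self):
        # pair each name with the tuple's own items, preserving order
        return collections.OrderedDict(zip(self._names, self))

    def keys(self):
        return list(self._names)

    def __getitem__(self, key):
        if isinstance(key, str):  # the real class checks six.string_types
            if key not in self._names:
                raise KeyError("Unknown key: %r." % key)
            key = self._names.index(key)
        return super(ReturnTuple, self).__getitem__(key)


out = ReturnTuple([75.0, 0.9], names=["heart_rate", "quality"])
print(out["heart_rate"])   # keyword access
print(out[1])              # positional access still works
print(list(out.as_dict().items()))
```

This dual access pattern is why biosppy functions can return one object that unpacks like a tuple yet documents its fields by name.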
BioSPPy/source/docs/Makefile
ADDED
@@ -0,0 +1,192 @@
+# Makefile for Sphinx documentation
+#
+
+# You can set these variables from the command line.
+SPHINXOPTS    =
+SPHINXBUILD   = sphinx-build
+PAPER         =
+BUILDDIR      = _build
+
+# User-friendly check for sphinx-build
+ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
+$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/)
+endif
+
+# Internal variables.
+PAPEROPT_a4     = -D latex_paper_size=a4
+PAPEROPT_letter = -D latex_paper_size=letter
+ALLSPHINXOPTS   = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
+# the i18n builder cannot share the environment and doctrees with the others
+I18NSPHINXOPTS  = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
+
+.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest coverage gettext
+
+help:
+	@echo "Please use \`make <target>' where <target> is one of"
+	@echo "  html       to make standalone HTML files"
+	@echo "  dirhtml    to make HTML files named index.html in directories"
+	@echo "  singlehtml to make a single large HTML file"
+	@echo "  pickle     to make pickle files"
+	@echo "  json       to make JSON files"
+	@echo "  htmlhelp   to make HTML files and a HTML help project"
+	@echo "  qthelp     to make HTML files and a qthelp project"
+	@echo "  applehelp  to make an Apple Help Book"
+	@echo "  devhelp    to make HTML files and a Devhelp project"
+	@echo "  epub       to make an epub"
+	@echo "  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
+	@echo "  latexpdf   to make LaTeX files and run them through pdflatex"
+	@echo "  latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
+	@echo "  text       to make text files"
+	@echo "  man        to make manual pages"
+	@echo "  texinfo    to make Texinfo files"
+	@echo "  info       to make Texinfo files and run them through makeinfo"
+	@echo "  gettext    to make PO message catalogs"
+	@echo "  changes    to make an overview of all changed/added/deprecated items"
+	@echo "  xml        to make Docutils-native XML files"
+	@echo "  pseudoxml  to make pseudoxml-XML files for display purposes"
+	@echo "  linkcheck  to check all external links for integrity"
+	@echo "  doctest    to run all doctests embedded in the documentation (if enabled)"
+	@echo "  coverage   to run coverage check of the documentation (if enabled)"
+
+clean:
+	rm -rf $(BUILDDIR)/*
+
+html:
+	$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
+	@echo
+	@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
+
+dirhtml:
+	$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
+	@echo
+	@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
+
+singlehtml:
+	$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
+	@echo
+	@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
+
+pickle:
+	$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
+	@echo
+	@echo "Build finished; now you can process the pickle files."
+
+json:
+	$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
+	@echo
+	@echo "Build finished; now you can process the JSON files."
+
+htmlhelp:
+	$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
+	@echo
+	@echo "Build finished; now you can run HTML Help Workshop with the" \
+	      ".hhp project file in $(BUILDDIR)/htmlhelp."
+
+qthelp:
+	$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
+	@echo
+	@echo "Build finished; now you can run "qcollectiongenerator" with the" \
+	      ".qhcp project file in $(BUILDDIR)/qthelp, like this:"
+	@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/BioSPPy.qhcp"
+	@echo "To view the help file:"
+	@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/BioSPPy.qhc"
+
+applehelp:
+	$(SPHINXBUILD) -b applehelp $(ALLSPHINXOPTS) $(BUILDDIR)/applehelp
+	@echo
+	@echo "Build finished. The help book is in $(BUILDDIR)/applehelp."
+	@echo "N.B. You won't be able to view it unless you put it in" \
+	      "~/Library/Documentation/Help or install it in your application" \
+	      "bundle."
+
+devhelp:
+	$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
+	@echo
+	@echo "Build finished."
+	@echo "To view the help file:"
+	@echo "# mkdir -p $$HOME/.local/share/devhelp/BioSPPy"
+	@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/BioSPPy"
+	@echo "# devhelp"
+
+epub:
+	$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
+	@echo
+	@echo "Build finished. The epub file is in $(BUILDDIR)/epub."
+
+latex:
+	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
+	@echo
+	@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
+	@echo "Run \`make' in that directory to run these through (pdf)latex" \
+	      "(use \`make latexpdf' here to do that automatically)."
+
+latexpdf:
+	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
+	@echo "Running LaTeX files through pdflatex..."
+	$(MAKE) -C $(BUILDDIR)/latex all-pdf
+	@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
+
+latexpdfja:
+	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
+	@echo "Running LaTeX files through platex and dvipdfmx..."
+	$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
+	@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
+
+text:
+	$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
+	@echo
+	@echo "Build finished. The text files are in $(BUILDDIR)/text."
+
+man:
+	$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
+	@echo
+	@echo "Build finished. The manual pages are in $(BUILDDIR)/man."
+
+texinfo:
+	$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
+	@echo
+	@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
+	@echo "Run \`make' in that directory to run these through makeinfo" \
+	      "(use \`make info' here to do that automatically)."
+
+info:
+	$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
+	@echo "Running Texinfo files through makeinfo..."
+	make -C $(BUILDDIR)/texinfo info
+	@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
+
+gettext:
+	$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
+	@echo
+	@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
+
+changes:
+	$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
+	@echo
+	@echo "The overview file is in $(BUILDDIR)/changes."
+
+linkcheck:
+	$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
+	@echo
+	@echo "Link check complete; look for any errors in the above output " \
+	      "or in $(BUILDDIR)/linkcheck/output.txt."
+
+doctest:
+	$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
+	@echo "Testing of doctests in the sources finished, look at the " \
+	      "results in $(BUILDDIR)/doctest/output.txt."
+
+coverage:
+	$(SPHINXBUILD) -b coverage $(ALLSPHINXOPTS) $(BUILDDIR)/coverage
+	@echo "Testing of coverage in the sources finished, look at the " \
+	      "results in $(BUILDDIR)/coverage/python.txt."
+
+xml:
+	$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
+	@echo
+	@echo "Build finished. The XML files are in $(BUILDDIR)/xml."
+
+pseudoxml:
+	$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
+	@echo
+	@echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."
BioSPPy/source/docs/__init__.py
ADDED
@@ -0,0 +1 @@
+# -*- coding: utf-8 -*-
BioSPPy/source/docs/biosppy.rst
ADDED
@@ -0,0 +1,53 @@
+API Reference
+=============
+
+This part of the documentation details the complete ``BioSPPy`` API.
+
+Packages
+--------
+
+.. toctree::
+    :maxdepth: 1
+
+    biosppy.signals
+
+Modules
+-------
+
+.. contents::
+    :local:
+
+.. automodule:: biosppy.biometrics
+    :members:
+    :undoc-members:
+    :show-inheritance:
+
+.. automodule:: biosppy.clustering
+    :members:
+    :undoc-members:
+    :show-inheritance:
+
+.. automodule:: biosppy.metrics
+    :members:
+    :undoc-members:
+    :show-inheritance:
+
+.. automodule:: biosppy.plotting
+    :members:
+    :undoc-members:
+    :show-inheritance:
+
+.. automodule:: biosppy.storage
+    :members:
+    :undoc-members:
+    :show-inheritance:
+
+.. automodule:: biosppy.timing
+    :members:
+    :undoc-members:
+    :show-inheritance:
+
+.. automodule:: biosppy.utils
+    :members:
+    :undoc-members:
+    :show-inheritance:
BioSPPy/source/docs/biosppy.signals.rst
ADDED
@@ -0,0 +1,56 @@
+biosppy.signals
+===============
+
+This sub-package provides methods to process common physiological signals
+(biosignals).
+
+Modules
+-------
+
+.. contents::
+    :local:
+
+.. automodule:: biosppy.signals.abp
+    :members:
+    :undoc-members:
+    :show-inheritance:
+
+.. automodule:: biosppy.signals.bvp
+    :members:
+    :undoc-members:
+    :show-inheritance:
+
+.. automodule:: biosppy.signals.ppg
+    :members:
+    :undoc-members:
+    :show-inheritance:
+
+.. automodule:: biosppy.signals.ecg
+    :members:
+    :undoc-members:
+    :show-inheritance:
+
+.. automodule:: biosppy.signals.eda
+    :members:
+    :undoc-members:
+    :show-inheritance:
+
+.. automodule:: biosppy.signals.eeg
+    :members:
+    :undoc-members:
+    :show-inheritance:
+
+.. automodule:: biosppy.signals.emg
+    :members:
+    :undoc-members:
+    :show-inheritance:
+
+.. automodule:: biosppy.signals.resp
+    :members:
+    :undoc-members:
+    :show-inheritance:
+
+.. automodule:: biosppy.signals.tools
+    :members:
+    :undoc-members:
+    :show-inheritance:
BioSPPy/source/docs/conf.py
ADDED
@@ -0,0 +1,316 @@
+# -*- coding: utf-8 -*-
+#
+# BioSPPy documentation build configuration file, created by
+# sphinx-quickstart on Tue Aug 18 11:33:55 2015.
+#
+# This file is execfile()d with the current directory set to its
+# containing dir.
+#
+# Note that not all possible configuration values are present in this
+# autogenerated file.
+#
+# All configuration values have a default; values that are commented out
+# serve to show the default.
+
+import sys
+# import os
+# import shlex
+
+# To be able to import to ReadTheDocs
+from mock import Mock as MagicMock
+
+
+class Mock(MagicMock):
+    @classmethod
+    def __getattr__(cls, name):
+        return Mock()
+
+
+MOCK_MODULES = ['numpy', 'scipy', 'matplotlib', 'matplotlib.pyplot',
+                'scipy.signal', 'scipy.interpolate', 'scipy.optimize',
+                'scipy.stats', 'scipy.cluster', 'scipy.cluster.hierarchy',
+                'scipy.cluster.vq', 'scipy.sparse', 'scipy.spatial',
+                'scipy.spatial.distance', 'sklearn', 'sklearn.cluster',
+                'sklearn.model_selection', 'sklearn.externals',
+                'matplotlib.gridspec', 'h5py', 'shortuuid', 'bidict', 'svm',
+                'sksvm']
+
+sys.modules.update((mod_name, Mock()) for mod_name in MOCK_MODULES)
+
+# If extensions (or modules to document with autodoc) are in another directory,
+# add these directories to sys.path here. If the directory is relative to the
+# documentation root, use os.path.abspath to make it absolute, like shown here.
+#sys.path.insert(0, os.path.abspath('.'))
+
+# -- General configuration ------------------------------------------------
+
+# If your documentation needs a minimal Sphinx version, state it here.
+#needs_sphinx = '1.0'
+
+# Add any Sphinx extension module names here, as strings. They can be
+# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
+# ones.
+extensions = [
+    'sphinx.ext.autodoc',
+    'sphinx.ext.coverage',
+    'sphinx.ext.viewcode',
+    'sphinx.ext.napoleon',
+    'sphinx.ext.imgmath',
+]
+
+# Napoleon settings
+napoleon_use_rtype = False
+
+# Add any paths that contain templates here, relative to this directory.
+templates_path = ['_templates']
+
+# The suffix(es) of source filenames.
+# You can specify multiple suffix as a list of string:
+# source_suffix = ['.rst', '.md']
+source_suffix = '.rst'
+
+# The encoding of source files.
+#source_encoding = 'utf-8-sig'
+
+# The master toctree document.
+master_doc = 'index'
+
+# General information about the project.
+project = 'BioSPPy'
+copyright = '2015-2018, Instituto de Telecomunicacoes'
+author = 'Instituto de Telecomunicacoes'
+
+# The version info for the project you're documenting, acts as replacement for
+# |version| and |release|, also used in various other places throughout the
+# built documents.
+#
+# The short X.Y version.
+version = '0.6.1'
+# The full version, including alpha/beta/rc tags.
+release = version
+
+# The language for content autogenerated by Sphinx. Refer to documentation
+# for a list of supported languages.
+#
+# This is also used if you do content translation via gettext catalogs.
+# Usually you set "language" from the command line for these cases.
+language = None
+
+# There are two options for replacing |today|: either, you set today to some
+# non-false value, then it is used:
+#today = ''
+# Else, today_fmt is used as the format for a strftime call.
+#today_fmt = '%B %d, %Y'
+
+# List of patterns, relative to source directory, that match files and
+# directories to ignore when looking for source files.
+exclude_patterns = ['_build']
+
+# The reST default role (used for this markup: `text`) to use for all
+# documents.
+#default_role = None
+
+# If true, '()' will be appended to :func: etc. cross-reference text.
+#add_function_parentheses = True
+
+# If true, the current module name will be prepended to all description
+# unit titles (such as .. function::).
+#add_module_names = True
+
+# If true, sectionauthor and moduleauthor directives will be shown in the
+# output. They are ignored by default.
+#show_authors = False
+
+# The name of the Pygments (syntax highlighting) style to use.
+pygments_style = 'sphinx'
+
+# A list of ignored prefixes for module index sorting.
+#modindex_common_prefix = []
+
+# If true, keep warnings as "system message" paragraphs in the built documents.
+#keep_warnings = False
+
+# If true, `todo` and `todoList` produce output, else they produce nothing.
+todo_include_todos = False
+
+
+# -- Options for HTML output ----------------------------------------------
+
+# The theme to use for HTML and HTML Help pages.  See the documentation for
+# a list of builtin themes.
+html_theme = 'sphinx_rtd_theme'
+
+# Theme options are theme-specific and customize the look and feel of a theme
+# further.  For a list of options available for each theme, see the
+# documentation.
+html_theme_options = {
+    'logo_only': True,
+}
+
+# Add any paths that contain custom themes here, relative to this directory.
+#html_theme_path = []
+
+# The name for this set of Sphinx documents.  If None, it defaults to
+# "<project> v<release> documentation".
+#html_title = None
+
+# A shorter title for the navigation bar.  Default is the same as html_title.
+#html_short_title = None
+
+# The name of an image file (relative to this directory) to place at the top
+# of the sidebar.
+html_logo = "logo/logo_inverted_no_tag.png"
+
+# The name of an image file (within the static path) to use as favicon of the
+# docs.  This file should be a Windows icon file (.ico) being 16x16 or 32x32
+# pixels large.
+html_favicon = "favicon.ico"
+
+# Add any paths that contain custom static files (such as style sheets) here,
+# relative to this directory. They are copied after the builtin static files,
+# so a file named "default.css" will overwrite the builtin "default.css".
+# html_static_path = ['_static']
+
+# Add any extra paths that contain custom files (such as robots.txt or
+# .htaccess) here, relative to this directory. These files are copied
+# directly to the root of the documentation.
+#html_extra_path = []
+
+# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
+# using the given strftime format.
+#html_last_updated_fmt = '%b %d, %Y'
+
+# If true, SmartyPants will be used to convert quotes and dashes to
+# typographically correct entities.
+#html_use_smartypants = True
+
+# Custom sidebar templates, maps document names to template names.
+#html_sidebars = {}
+
+# Additional templates that should be rendered to pages, maps page names to
+# template names.
+#html_additional_pages = {}
+
+# If false, no module index is generated.
+#html_domain_indices = True
+
+# If false, no index is generated.
+#html_use_index = True
+
+# If true, the index is split into individual pages for each letter.
+#html_split_index = False
+
+# If true, links to the reST sources are added to the pages.
+html_show_sourcelink = False
+
+# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
+#html_show_sphinx = True
+
+# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
+#html_show_copyright = True
+
+# If true, an OpenSearch description file will be output, and all pages will
+# contain a <link> tag referring to it.  The value of this option must be the
+# base URL from which the finished HTML is served.
+#html_use_opensearch = ''
+
+# This is the file name suffix for HTML files (e.g. ".xhtml").
+#html_file_suffix = None
+
+# Language to be used for generating the HTML full-text search index.
+# Sphinx supports the following languages:
+#   'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'
+#   'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr'
+#html_search_language = 'en'
+
+# A dictionary with options for the search language support, empty by default.
+# Now only 'ja' uses this config value
+#html_search_options = {'type': 'default'}
+
+# The name of a javascript file (relative to the configuration directory) that
+# implements a search results scorer. If empty, the default will be used.
+#html_search_scorer = 'scorer.js'
+
+# Output file base name for HTML help builder.
+htmlhelp_basename = 'BioSPPydoc'
+
+# -- Options for LaTeX output ---------------------------------------------
+
+latex_elements = {
+    # The paper size ('letterpaper' or 'a4paper').
+    #'papersize': 'letterpaper',
+
+    # The font size ('10pt', '11pt' or '12pt').
+    #'pointsize': '10pt',
+
+    # Additional stuff for the LaTeX preamble.
+    #'preamble': '',
+
+    # Latex figure (float) alignment
+    #'figure_align': 'htbp',
+}
+
+# Grouping the document tree into LaTeX files. List of tuples
+# (source start file, target name, title,
+#  author, documentclass [howto, manual, or own class]).
+latex_documents = [
+    (master_doc, 'BioSPPy.tex', 'BioSPPy Documentation',
+     'Instituto de Telecomunicacoes', 'manual'),
+]
+
+# The name of an image file (relative to this directory) to place at the top of
+# the title page.
+#latex_logo = None
+
+# For "manual" documents, if this is true, then toplevel headings are parts,
+# not chapters.
+#latex_use_parts = False
+
+# If true, show page references after internal links.
+#latex_show_pagerefs = False
+
+# If true, show URL addresses after external links.
+#latex_show_urls = False
+
+# Documents to append as an appendix to all manuals.
+#latex_appendices = []
+
+# If false, no module index is generated.
+#latex_domain_indices = True
+
+
+# -- Options for manual page output ---------------------------------------
+
+# One entry per manual page. List of tuples
+# (source start file, name, description, authors, manual section).
+man_pages = [
+    (master_doc, 'biosppy', 'BioSPPy Documentation',
+     [author], 1)
+]
+
+# If true, show URL addresses after external links.
+#man_show_urls = False
+
+
+# -- Options for Texinfo output -------------------------------------------
+
+# Grouping the document tree into Texinfo files. List of tuples
+# (source start file, target name, title, author,
+#  dir menu entry, description, category)
+texinfo_documents = [
+    (master_doc, 'BioSPPy', 'BioSPPy Documentation',
+     author, 'BioSPPy', 'Biosignal Processing in Python.',
+     'Miscellaneous'),
+]
+
+# Documents to append as an appendix to all manuals.
+#texinfo_appendices = []
+
+# If false, no module index is generated.
+#texinfo_domain_indices = True
+
+# How to display URL addresses: 'footnote', 'no', or 'inline'.
+#texinfo_show_urls = 'footnote'
+
+# If true, do not generate a @detailmenu in the "Top" node's menu.
+#texinfo_no_detailmenu = False
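The `MOCK_MODULES` block in conf.py above stubs out heavy scientific dependencies so that Sphinx autodoc can import `biosppy` on ReadTheDocs without installing them: any module name placed in `sys.modules` is returned by `import` as-is. A minimal sketch of the same trick using the standard library's `unittest.mock` (conf.py instead subclasses `Mock` from the external `mock` package; the module name `fake_heavy_dep` below is made up for illustration):

```python
import sys
from unittest.mock import MagicMock

# Hypothetical heavy dependencies we do not want installed just to build docs.
MOCK_MODULES = ["fake_heavy_dep", "fake_heavy_dep.signal"]

# import checks sys.modules first, so these stubs short-circuit the real import.
sys.modules.update((mod_name, MagicMock()) for mod_name in MOCK_MODULES)

# The imports now succeed; autodoc only needs them not to raise ImportError.
import fake_heavy_dep
from fake_heavy_dep.signal import butter  # arbitrary attribute access also works

print(type(butter).__name__)
```

The design trade-off: autodoc can render docstrings and signatures, but any code that actually *calls* into the mocked modules at import time (e.g. computing a module-level constant with numpy) would silently get mock objects back, which is why the technique is limited to documentation builds.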
BioSPPy/source/docs/favicon.ico
ADDED