Dataset columns: signature (string, lengths 8 to 3.44k), body (string, lengths 0 to 1.41M), docstring (string, lengths 1 to 122k), id (string, lengths 5 to 17).
def _getCharacterMapping(self):
layer = self.defaultLayer
return layer.getCharacterMapping()
This is the environment implementation of :meth:`BaseFont.getCharacterMapping`. Subclasses may override this method.
f10854:c0:m87
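For orientation, a hedged usage sketch of the public API this implements: in fontParts, `BaseFont.getCharacterMapping` returns a dict mapping unicode values to lists of glyph names, and the implementation above simply delegates to the default layer. The font path below is a hypothetical placeholder.

    from fontParts.world import OpenFont
    font = OpenFont("MyFont.ufo")  # hypothetical path
    cmap = font.getCharacterMapping()
    # cmap looks like {65: ["A"], 97: ["a"], ...}
    print(cmap.get(65))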
def _get_selectedLayers(self):
return self._getSelectedSubObjects(self.layers)
Subclasses may override this method.
f10854:c0:m89
def _set_selectedLayers(self, value):
return self._setSelectedSubObjects(self.layers, value)
Subclasses may override this method.
f10854:c0:m91
def _get_selectedLayerNames(self):
selected = [layer.name for layer in self.selectedLayers]
return selected
Subclasses may override this method.
f10854:c0:m93
def _set_selectedLayerNames(self, value):
select = [self.layers(name) for name in value]
self.selectedLayers = select
Subclasses may override this method.
f10854:c0:m95
def _get_selectedGuidelines(self):
return self._getSelectedSubObjects(self.guidelines)
Subclasses may override this method.
f10854:c0:m97
def _set_selectedGuidelines(self, value):
return self._setSelectedSubObjects(self.guidelines, value)
Subclasses may override this method.
f10854:c0:m99
def _get_x(self):
self.raiseNotImplementedError()
This is the environment implementation of :attr:`BaseGuideline.x`. This must return an :ref:`type-int-float`. Subclasses must override this method.
f10855:c0:m8
def _set_x(self, value):
self.raiseNotImplementedError()
This is the environment implementation of :attr:`BaseGuideline.x`. **value** will be an :ref:`type-int-float`. Subclasses must override this method.
f10855:c0:m9
def _get_y(self):
self.raiseNotImplementedError()
This is the environment implementation of :attr:`BaseGuideline.y`. This must return an :ref:`type-int-float`. Subclasses must override this method.
f10855:c0:m12
def _set_y(self, value):
self.raiseNotImplementedError()
This is the environment implementation of :attr:`BaseGuideline.y`. **value** will be an :ref:`type-int-float`. Subclasses must override this method.
f10855:c0:m13
def _get_angle(self):
self.raiseNotImplementedError()
This is the environment implementation of :attr:`BaseGuideline.angle`. This must return an :ref:`type-angle`. Subclasses must override this method.
f10855:c0:m16
def _set_angle(self, value):
self.raiseNotImplementedError()
This is the environment implementation of :attr:`BaseGuideline.angle`. **value** will be an :ref:`type-angle`. Subclasses must override this method.
f10855:c0:m17
def _get_index(self):
glyph = self.glyph
if glyph is not None:
    parent = glyph
else:
    parent = self.font
if parent is None:
    return None
return parent.guidelines.index(self)
Get the guideline's index. This must return an ``int``. Subclasses may override this method.
f10855:c0:m19
def _get_name(self):
self.raiseNotImplementedError()
This is the environment implementation of :attr:`BaseGuideline.name`. This must return a :ref:`type-string` or ``None``. The returned value will be normalized with :func:`normalizers.normalizeGuidelineName`. Subclasses must override this method.
f10855:c0:m22
def _set_name(self, value):
self.raiseNotImplementedError()
This is the environment implementation of :attr:`BaseGuideline.name`. **value** will be a :ref:`type-string` or ``None``. It will have been normalized with :func:`normalizers.normalizeGuidelineName`. Subclasses must override this method.
f10855:c0:m23
def _get_color(self):
self.raiseNotImplementedError()
This is the environment implementation of :attr:`BaseGuideline.color`. This must return a :ref:`type-color` or ``None``. The returned value will be normalized with :func:`normalizers.normalizeColor`. Subclasses must override this method.
f10855:c0:m26
def _set_color(self, value):
self.raiseNotImplementedError()
This is the environment implementation of :attr:`BaseGuideline.color`. **value** will be a :ref:`type-color` or ``None``. It will have been normalized with :func:`normalizers.normalizeColor`. Subclasses must override this method.
f10855:c0:m27
def _transformBy(self, matrix, **kwargs):
t = transform.Transform(*matrix)
x, y = t.transformPoint((self.x, self.y))
self.x = x
self.y = y
angle = math.radians(-self.angle)
dx = math.cos(angle)
dy = math.sin(angle)
tdx, tdy = t.transformPoint((dx, dy))
ta = math.atan2(tdy - t[5], tdx - t[4])
self.angle = -math.degrees(ta)
This is the environment implementation of :meth:`BaseGuideline.transformBy`. **matrix** will be a :ref:`type-transformation` that has been normalized with :func:`normalizers.normalizeTransformationMatrix`. Subclasses may override this method.
f10855:c0:m28
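The angle handling above can be read as a small derivation: the guideline's direction is turned into a unit vector, pushed through the affine transform, and the translation part (the 6-tuple components ``t[4]`` and ``t[5]``, i.e. dx and dy) is subtracted so only the linear part acts on the direction. A sketch, with $\theta$ the guideline angle and $T(p) = Mp + t$:

    d = (\cos(-\theta), \sin(-\theta))
    d' = T(d) - t = M d
    \theta' = -\operatorname{atan2}(d'_y, d'_x)

The sign flips match the code's negation of the angle before and after the transform.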
def isCompatible(self, other):
return super(BaseGuideline, self).isCompatible(other, BaseGuideline)
Evaluate interpolation compatibility with **other**. ::

    >>> compatible, report = self.isCompatible(otherGuideline)
    >>> compatible
    True
    >>> report
    [Warning] Guideline: "xheight" + "cap_height"
    [Warning] Guideline: "xheight" has name xheight | "cap_height" has name cap_height

This will return a ``bool`` indicating if the guideline is compatible for interpolation with **other** and a :ref:`type-string` of compatibility notes.
f10855:c0:m29
def _isCompatible(self, other, reporter):
guideline1 = self
guideline2 = other
if guideline1.name != guideline2.name:
    reporter.nameDifference = True
    reporter.warning = True
This is the environment implementation of :meth:`BaseGuideline.isCompatible`. Subclasses may override this method.
f10855:c0:m30
def round(self):
self._round()
Round the guideline's coordinate.

    >>> guideline.round()

This applies to the following:

* x
* y

It does not apply to

* angle
f10855:c0:m31
def _round(self, **kwargs):
self.x = normalizers.normalizeRounding(self.x)
self.y = normalizers.normalizeRounding(self.y)
This is the environment implementation of :meth:`BaseGuideline.round`. Subclasses may override this method.
f10855:c0:m32
def __eq__(self, other):
if isinstance(other, self.__class__):
    return self.points == other.points
return NotImplemented
The :meth:`BaseObject.__eq__` method can't be used here because the :class:`BaseContour` implementation constructs segment objects without assigning an underlying ``naked`` object. Therefore, comparisons will always fail. This method overrides the base method and compares the :class:`BasePoint` objects contained by the segment. Subclasses may override this method.
f10856:c0:m7
def _get_index(self):
contour = self.contour
value = contour.segments.index(self)
return value
Subclasses may override this method.
f10856:c0:m9
def _get_type(self):
value = self.onCurve.type
return value
Subclasses may override this method.
f10856:c0:m12
def _set_type(self, newType):
oldType = self.type
if oldType == newType:
    return
contour = self.contour
if contour is None:
    raise FontPartsError("<STR_LIT>")
if newType in ("<STR_LIT>", "<STR_LIT>") and oldType in ("<STR_LIT>", "<STR_LIT>"):
    pass
elif newType not in ("<STR_LIT>", "<STR_LIT>"):
    offCurves = self.offCurve
    for point in offCurves:
        contour.removePoint(point)
else:
    segments = contour.segments
    i = segments.index(self)
    prev = segments[i - 1].onCurve
    on = self.onCurve
    x = on.x
    y = on.y
    points = contour.points
    i = points.index(on)
    contour.insertPoint(i, (x, y), "<STR_LIT>")
    off2 = contour.points[i]
    contour.insertPoint(i, (prev.x, prev.y), "<STR_LIT>")
    off1 = contour.points[i]
    del self._points
    self._setPoints((off1, off2, on))
self.onCurve.type = newType
Subclasses may override this method.
f10856:c0:m13
def _get_smooth(self):
return self.onCurve.smooth
Subclasses may override this method.
f10856:c0:m16
def _set_smooth(self, value):
self.onCurve.smooth = value
Subclasses may override this method.
f10856:c0:m17
def _getItem(self, index):
return self.points[index]
Subclasses may override this method.
f10856:c0:m19
def _iterPoints(self, **kwargs):
points = self.points
count = len(points)
index = 0
while count:
    yield points[index]
    count -= 1
    index += 1
Subclasses may override this method.
f10856:c0:m21
def _len(self, **kwargs):
return len(self.points)
Subclasses may override this method.
f10856:c0:m23
def _get_points(self):
if not hasattr(self, "<STR_LIT>"):
    return tuple()
return tuple(self._points)
Subclasses may override this method.
f10856:c0:m25
def _get_onCurve(self):
return self.points[-1]
Subclasses may override this method.
f10856:c0:m27
def _get_base_offCurve(self):
return self._get_offCurve()
Subclasses may override this method.
f10856:c0:m28
def _get_offCurve(self):
return self.points[:-1]
Subclasses may override this method.
f10856:c0:m29
def _transformBy(self, matrix, **kwargs):
for point in self.points:
    point.transformBy(matrix)
Subclasses may override this method.
f10856:c0:m30
def isCompatible(self, other):
return super(BaseSegment, self).isCompatible(other, BaseSegment)
Evaluate interpolation compatibility with **other**. ::

    >>> compatible, report = self.isCompatible(otherSegment)
    >>> compatible
    False
    >>> report
    [Fatal] Segment: [0] + [0]
    [Fatal] Segment: [0] is line | [0] is move
    [Fatal] Segment: [1] + [1]
    [Fatal] Segment: [1] is line | [1] is qcurve

This will return a ``bool`` indicating if the segment is compatible for interpolation with **other** and a :ref:`type-string` of compatibility notes.
f10856:c0:m31
def _isCompatible(self, other, reporter):
segment1 = self
segment2 = other
if segment1.type != segment2.type:
    if set((segment1.type, segment2.type)) != set(("<STR_LIT>", "<STR_LIT>")):
        reporter.typeDifference = True
        reporter.fatal = True
This is the environment implementation of :meth:`BaseSegment.isCompatible`. Subclasses may override this method.
f10856:c0:m32
def round(self):
for point in self.points:
    point.round()
Round coordinates in all points.
f10856:c0:m33
def relativeBCPIn(anchor, BCPIn):
return (BCPIn[0] - anchor[0], BCPIn[1] - anchor[1])
Convert an absolute incoming BCP value to a relative value.
f10857:m0
def absoluteBCPIn(anchor, BCPIn):
return (BCPIn[0] + anchor[0], BCPIn[1] + anchor[1])
Convert a relative incoming BCP value to an absolute value.
f10857:m1
def relativeBCPOut(anchor, BCPOut):
return (BCPOut[0] - anchor[0], BCPOut[1] - anchor[1])
Convert an absolute outgoing BCP value to a relative value.
f10857:m2
def absoluteBCPOut(anchor, BCPOut):
return (BCPOut[0] + anchor[0], BCPOut[1] + anchor[1])
Convert a relative outgoing BCP value to an absolute value.
f10857:m3
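A minimal sketch of how the four helpers above relate; the coordinates are arbitrary:

    anchor = (100, 100)
    bcp_absolute = (130, 160)
    bcp_relative = relativeBCPIn(anchor, bcp_absolute)   # (30, 60)
    assert absoluteBCPIn(anchor, bcp_relative) == bcp_absolute
    # relativeBCPOut/absoluteBCPOut form the same inverse pair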
def _get_identifier(self):
return self._point.identifier
Subclasses may override this method.
f10857:c0:m3
def _getIdentifier(self):
return self._point.getIdentifier()
Subclasses may override this method.
f10857:c0:m4
def _get_anchor(self):
point = self._point
return (point.x, point.y)
Subclasses may override this method.
f10857:c0:m14
def _set_anchor(self, value):
pX, pY = self.anchor
x, y = value
dX = x - pX
dY = y - pY
self.moveBy((dX, dY))
Subclasses may override this method.
f10857:c0:m15
def _get_bcpIn(self):
segment = self._segment
offCurves = segment.offCurve
if offCurves:
    bcp = offCurves[-1]
    x, y = relativeBCPIn(self.anchor, (bcp.x, bcp.y))
else:
    x = y = 0
return (x, y)
Subclasses may override this method.
f10857:c0:m18
def _set_bcpIn(self, value):
x, y = absoluteBCPIn(self.anchor, value)
segment = self._segment
if segment.type == "<STR_LIT>" and value != (0, 0):
    raise FontPartsError(("<STR_LIT>"
                          "<STR_LIT>"))
else:
    offCurves = segment.offCurve
    if offCurves:
        if value == (0, 0) and self.bcpOut == (0, 0):
            segment.type = "<STR_LIT>"
            segment.smooth = False
        else:
            offCurves[-1].x = x
            offCurves[-1].y = y
    elif value != (0, 0):
        segment.type = "<STR_LIT>"
        offCurves = segment.offCurve
        offCurves[-1].x = x
        offCurves[-1].y = y
Subclasses may override this method.
f10857:c0:m19
def _get_bcpOut(self):
nextSegment = self._nextSegment
offCurves = nextSegment.offCurve
if offCurves:
    bcp = offCurves[0]
    x, y = relativeBCPOut(self.anchor, (bcp.x, bcp.y))
else:
    x = y = 0
return (x, y)
Subclasses may override this method.
f10857:c0:m22
def _set_bcpOut(self, value):
x, y = absoluteBCPOut(self.anchor, value)
segment = self._segment
nextSegment = self._nextSegment
if nextSegment.type == "<STR_LIT>" and value != (0, 0):
    raise FontPartsError(("<STR_LIT>"
                          "<STR_LIT>"))
else:
    offCurves = nextSegment.offCurve
    if offCurves:
        if value == (0, 0) and self.bcpIn == (0, 0):
            segment.type = "<STR_LIT>"
            segment.smooth = False
        else:
            offCurves[0].x = x
            offCurves[0].y = y
    elif value != (0, 0):
        nextSegment.type = "<STR_LIT>"
        offCurves = nextSegment.offCurve
        offCurves[0].x = x
        offCurves[0].y = y
Subclasses may override this method.
f10857:c0:m23
def _get_type(self):
point = self._point
typ = point.type
bType = None
if point.smooth:
    if typ == "<STR_LIT>":
        bType = "<STR_LIT>"
    elif typ == "<STR_LIT>":
        nextSegment = self._nextSegment
        if nextSegment is not None and nextSegment.type == "<STR_LIT>":
            bType = "<STR_LIT>"
        else:
            bType = "<STR_LIT>"
elif typ in ("<STR_LIT>", "<STR_LIT>", "<STR_LIT>"):
    bType = "<STR_LIT>"
if bType is None:
    raise FontPartsError("<STR_LIT>" % typ)
return bType
Subclasses may override this method.
f10857:c0:m26
def _set_type(self, value):
point = self._point
if value == "<STR_LIT>" and point.type == "<STR_LIT>":
    segment = self._segment
    segment.type = "<STR_LIT>"
    segment.smooth = True
elif value == "<STR_LIT>" and point.type == "<STR_LIT>":
    point.smooth = False
Subclasses may override this method.
f10857:c0:m27
def _get_index(self):
contour = self.contour
value = contour.bPoints.index(self)
return value
Subclasses may override this method.
f10857:c0:m29
def _transformBy(self, matrix, **kwargs):
anchor = self.anchor
bcpIn = absoluteBCPIn(anchor, self.bcpIn)
bcpOut = absoluteBCPOut(anchor, self.bcpOut)
points = [bcpIn, anchor, bcpOut]
t = transform.Transform(*matrix)
bcpIn, anchor, bcpOut = t.transformPoints(points)
x, y = anchor
self._point.x = x
self._point.y = y
self.bcpIn = relativeBCPIn(anchor, bcpIn)
self.bcpOut = relativeBCPOut(anchor, bcpOut)
Subclasses may override this method.
f10857:c0:m30
def round(self):
x, y = self.anchor
self.anchor = (normalizers.normalizeRounding(x),
               normalizers.normalizeRounding(y))
x, y = self.bcpIn
self.bcpIn = (normalizers.normalizeRounding(x),
              normalizers.normalizeRounding(y))
x, y = self.bcpOut
self.bcpOut = (normalizers.normalizeRounding(x),
               normalizers.normalizeRounding(y))
Round coordinates.
f10857:c0:m31
def _get_x(self):
self.raiseNotImplementedError()
This is the environment implementation of :attr:`BaseAnchor.x`. This must return an :ref:`type-int-float`. Subclasses must override this method.
f10859:c0:m7
def _set_x(self, value):
self.raiseNotImplementedError()
This is the environment implementation of :attr:`BaseAnchor.x`. **value** will be an :ref:`type-int-float`. Subclasses must override this method.
f10859:c0:m8
def _get_y(self):
self.raiseNotImplementedError()
This is the environment implementation of :attr:`BaseAnchor.y`. This must return an :ref:`type-int-float`. Subclasses must override this method.
f10859:c0:m11
def _set_y(self, value):
self.raiseNotImplementedError()
This is the environment implementation of :attr:`BaseAnchor.y`. **value** will be an :ref:`type-int-float`. Subclasses must override this method.
f10859:c0:m12
def _get_index(self):
glyph = self.glyph
if glyph is None:
    return None
return glyph.anchors.index(self)
Get the anchor's index. This must return an ``int``. Subclasses may override this method.
f10859:c0:m14
def _get_name(self):
self.raiseNotImplementedError()
This is the environment implementation of :attr:`BaseAnchor.name`. This must return a :ref:`type-string` or ``None``. The returned value will be normalized with :func:`normalizers.normalizeAnchorName`. Subclasses must override this method.
f10859:c0:m17
def _set_name(self, value):
self.raiseNotImplementedError()
This is the environment implementation of :attr:`BaseAnchor.name`. **value** will be a :ref:`type-string` or ``None``. It will have been normalized with :func:`normalizers.normalizeAnchorName`. Subclasses must override this method.
f10859:c0:m18
def _get_color(self):
self.raiseNotImplementedError()
This is the environment implementation of :attr:`BaseAnchor.color`. This must return a :ref:`type-color` or ``None``. The returned value will be normalized with :func:`normalizers.normalizeColor`. Subclasses must override this method.
f10859:c0:m21
def _set_color(self, value):
self.raiseNotImplementedError()
This is the environment implementation of :attr:`BaseAnchor.color`. **value** will be a :ref:`type-color` or ``None``. It will have been normalized with :func:`normalizers.normalizeColor`. Subclasses must override this method.
f10859:c0:m22
def _transformBy(self, matrix, **kwargs):
t = transform.Transform(*matrix)
x, y = t.transformPoint((self.x, self.y))
self.x = x
self.y = y
This is the environment implementation of :meth:`BaseAnchor.transformBy`. **matrix** will be a :ref:`type-transformation` that has been normalized with :func:`normalizers.normalizeTransformationMatrix`. Subclasses may override this method.
f10859:c0:m23
def isCompatible(self, other):
return super(BaseAnchor, self).isCompatible(other, BaseAnchor)
Evaluate interpolation compatibility with **other**. ::

    >>> compatible, report = self.isCompatible(otherAnchor)
    >>> compatible
    True
    >>> report
    [Warning] Anchor: "left" + "right"
    [Warning] Anchor: "left" has name left | "right" has name right

This will return a ``bool`` indicating if the anchor is compatible for interpolation with **other** and a :ref:`type-string` of compatibility notes.
f10859:c0:m24
def _isCompatible(self, other, reporter):
anchor1 = self
anchor2 = other
if anchor1.name != anchor2.name:
    reporter.nameDifference = True
    reporter.warning = True
This is the environment implementation of :meth:`BaseAnchor.isCompatible`. Subclasses may override this method.
f10859:c0:m25
def round(self):
self._round()
Round the anchor's coordinate.

    >>> anchor.round()

This applies to the following:

* x
* y
f10859:c0:m26
def _round(self):
self.x = normalizers.normalizeRounding(self.x)
self.y = normalizers.normalizeRounding(self.y)
This is the environment implementation of :meth:`BaseAnchor.round`. Subclasses may override this method.
f10859:c0:m27
def _get_text(self):
self.raiseNotImplementedError()
This is the environment implementation of :attr:`BaseFeatures.text`. This must return a :ref:`type-string`. Subclasses must override this method.
f10860:c0:m5
def _set_text(self, value):
self.raiseNotImplementedError()
This is the environment implementation of :attr:`BaseFeatures.text`. **value** will be a :ref:`type-string`. Subclasses must override this method.
f10860:c0:m6
def AskString(message, value='<STR_LIT>', title='<STR_LIT>'):
return dispatcher["<STR_LIT>"](message=message, value=value, title=title)
A dialog asking for a string. A `message` is required; optionally a `value` and `title` can be provided. ::

    from fontParts.ui import AskString
    print(AskString("who are you?"))
f10863:m0
def AskYesNoCancel(message, title='<STR_LIT>', default=0, informativeText="<STR_LIT>"):
return dispatcher["<STR_LIT>"](message=message, title=title,
                               default=default, informativeText=informativeText)
A dialog asking yes, no or cancel. A `message` is required; optionally a `title`, `default` and `informativeText` can be provided. The `default` option indicates which button is the default button. ::

    from fontParts.ui import AskYesNoCancel
    print(AskYesNoCancel("who are you?"))
f10863:m1
def FindGlyph(aFont, message="<STR_LIT>", title='<STR_LIT>'):
return dispatcher["<STR_LIT>"](aFont=aFont, message=message, title=title)
A dialog to search for a glyph in a provided font. Optionally a `message` and `title` can be provided. ::

    from fontParts.ui import FindGlyph
    from fontParts.world import CurrentFont
    glyph = FindGlyph(CurrentFont())
    print(glyph)
f10863:m2
def GetFile(message=None, title=None, directory=None, fileName=None,
            allowsMultipleSelection=False, fileTypes=None):
return dispatcher["<STR_LIT>"](message=message, title=title, directory=directory,
                               fileName=fileName,
                               allowsMultipleSelection=allowsMultipleSelection,
                               fileTypes=fileTypes)
A get-file dialog. Optionally a `message`, `title`, `directory`, `fileName`, `allowsMultipleSelection` and `fileTypes` can be provided. ::

    from fontParts.ui import GetFile
    print(GetFile())
f10863:m3
def GetFileOrFolder(message=None, title=None, directory=None, fileName=None,
                    allowsMultipleSelection=False, fileTypes=None):
return dispatcher["<STR_LIT>"](message=message, title=title,
                               directory=directory, fileName=fileName,
                               allowsMultipleSelection=allowsMultipleSelection,
                               fileTypes=fileTypes)
A get-file-or-folder dialog. Optionally a `message`, `title`, `directory`, `fileName`, `allowsMultipleSelection` and `fileTypes` can be provided. ::

    from fontParts.ui import GetFileOrFolder
    print(GetFileOrFolder())
f10863:m4
def Message(message, title='<STR_LIT>', informativeText="<STR_LIT>"):
return dispatcher["<STR_LIT>"](message=message, title=title,
                               informativeText=informativeText)
A message dialog. A `message` is required; optionally a `title` and `informativeText` can be provided. ::

    from fontParts.ui import Message
    print(Message("This is a message"))
f10863:m5
def PutFile(message=None, fileName=None):
return dispatcher["<STR_LIT>"](message=message, fileName=fileName)
A put-file dialog. Optionally a `message` and `fileName` can be provided. ::

    from fontParts.ui import PutFile
    print(PutFile())
f10863:m6
def SearchList(items, message="<STR_LIT>", title='<STR_LIT>'):
return dispatcher["<STR_LIT>"](items=items, message=message, title=title)
A dialog to search a given list. Optionally a `message` and `title` can be provided. ::

    from fontParts.ui import SearchList
    result = SearchList(["a", "b", "c"])
    print(result)
f10863:m7
def SelectFont(message="<STR_LIT>", title='<STR_LIT>', allFonts=None):
return dispatcher["<STR_LIT>"](message=message, title=title, allFonts=allFonts)
Select a font from all open fonts. Optionally a `message`, `title` and `allFonts` can be provided. If `allFonts` is `None`, all open fonts will be listed. ::

    from fontParts.ui import SelectFont
    font = SelectFont()
    print(font)
f10863:m8
def SelectGlyph(aFont, message="<STR_LIT>", title='<STR_LIT>'):
return dispatcher["<STR_LIT>"](aFont=aFont, message=message, title=title)
Select a glyph from a given font. Optionally a `message` and `title` can be provided. ::

    from fontParts.ui import SelectGlyph
    from fontParts.world import CurrentFont
    font = CurrentFont()
    glyph = SelectGlyph(font)
    print(glyph)
f10863:m9
def ProgressBar(title="<STR_LIT>", ticks=None, label="<STR_LIT>"):
return dispatcher["<STR_LIT>"](title=title, ticks=ticks, label=label)
A progress bar dialog. Optionally a `title`, `ticks` and `label` can be provided. ::

    from fontParts.ui import ProgressBar
    bar = ProgressBar()
    # do something
    bar.close()
f10863:m10
def create_epochs(data, events_onsets, sampling_rate=1000, duration=1, onset=0, index=None):
if isinstance(duration, list) or isinstance(duration, np.ndarray):
    duration = np.array(duration)
else:
    duration = np.array([duration] * len(events_onsets))
if isinstance(onset, list) or isinstance(onset, np.ndarray):
    onset = np.array(onset)
else:
    onset = np.array([onset] * len(events_onsets))
if isinstance(data, list) or isinstance(data, np.ndarray) or isinstance(data, pd.Series):
    data = pd.DataFrame({"<STR_LIT>": list(data)})
duration_in_s = duration.copy()
onset_in_s = onset.copy()
duration = duration * sampling_rate
onset = onset * sampling_rate
if index is None:
    index = list(range(len(events_onsets)))
else:
    if len(list(set(index))) != len(index):
        print("<STR_LIT>")
        index = list(range(len(events_onsets)))
    else:
        index = list(index)
epochs = {}
for event, event_onset in enumerate(events_onsets):
    epoch_onset = int(event_onset + onset[event])
    epoch_end = int(event_onset + duration[event] + 1)
    epoch = data[epoch_onset:epoch_end].copy()
    epoch.index = np.linspace(start=onset_in_s[event], stop=duration_in_s[event], num=len(epoch), endpoint=True)
    relative_time = np.linspace(start=onset[event], stop=duration[event], num=len(epoch), endpoint=True).astype(int).tolist()
    absolute_time = np.linspace(start=epoch_onset, stop=epoch_end, num=len(epoch), endpoint=True).astype(int).tolist()
    epoch["<STR_LIT>"] = relative_time
    epoch["<STR_LIT>"] = absolute_time
    epochs[index[event]] = epoch
return epochs
Epoching a dataframe.

Parameters
----------
data : pandas.DataFrame
    Data*time.
events_onsets : list
    A list of event onset indices.
sampling_rate : int
    Sampling rate (samples/second).
duration : int or list
    Duration(s) of each epoch (in seconds).
onset : int
    Epoch onset(s) relative to events_onsets (in seconds).
index : list
    Event names, in order, that will be used as index. Must contain unique names. If not provided, event numbers are used instead.

Returns
-------
epochs : dict
    Dict containing all epochs.

Example
-------
>>> import neurokit as nk
>>> epochs = nk.create_epochs(data, events_onsets)

Notes
-----
*Authors*
- Dominique Makowski (https://github.com/DominiqueMakowski)

*Dependencies*
- numpy
f10891:m0
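A minimal usage sketch under stated assumptions: the data are synthetic, the column name "signal" is arbitrary, and create_epochs is assumed importable from neurokit as in the docstring.

    import numpy as np
    import pandas as pd
    import neurokit as nk

    data = pd.DataFrame({"signal": np.random.randn(10000)})  # 10 s at 1000 Hz
    events_onsets = [1000, 4000, 7000]                       # sample indices
    epochs = nk.create_epochs(data, events_onsets,
                              sampling_rate=1000, duration=1, onset=-0.1)
    # epochs[0] is a DataFrame covering -0.1 s to 1 s around the first event,
    # indexed in seconds relative to the event onset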
def interpolate(values, value_times, sampling_rate=1000):
initial_index = value_times[0]
value_times = np.array(value_times) - initial_index
spline = scipy.interpolate.splrep(x=value_times, y=values, k=3, s=0)
x = np.arange(0, value_times[-1], 1)
signal = scipy.interpolate.splev(x=x, tck=spline, der=0)
signal = pd.Series(signal)
signal.index = np.array(np.arange(initial_index, initial_index + len(signal), 1))
return signal
3rd order spline interpolation.

Parameters
----------
values : dataframe
    Values.
value_times : list
    Time indices of values.
sampling_rate : int
    Sampling rate (samples/second).

Returns
-------
signal : pd.Series
    An array containing the values indexed by time.

Example
-------
>>> import neurokit as nk
>>> signal = interpolate([800, 900, 700, 500], [1000, 2000, 3000, 4000], sampling_rate=1000)
>>> pd.Series(signal).plot()

Notes
-----
*Authors*
- `Dominique Makowski <https://dominiquemakowski.github.io/>`_

*Dependencies*
- scipy
- pandas
f10892:m0
def find_peaks(signal):
derivative = np.gradient(signal, 2)
peaks = np.where(np.diff(np.sign(derivative)))[0]
return peaks
Locate peaks based on the derivative.

Parameters
----------
signal : list or array
    Signal.

Returns
-------
peaks : array
    An array containing the peak indices.

Example
-------
>>> signal = np.sin(np.arange(0, np.pi*10, 0.05))
>>> peaks = nk.find_peaks(signal)
>>> nk.plot_events_in_signal(signal, peaks)

Notes
-----
*Authors*
- `Dominique Makowski <https://dominiquemakowski.github.io/>`_

*Dependencies*
- numpy
f10892:m1
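The mechanism above in isolation: every sign change of the (smoothed) derivative is reported, so the returned indices include minima as well as maxima.

    import numpy as np

    signal = np.sin(np.arange(0, np.pi * 10, 0.05))
    derivative = np.gradient(signal, 2)
    extrema = np.where(np.diff(np.sign(derivative)))[0]  # maxima and minima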
def complexity(signal, sampling_rate=1000, shannon=True, sampen=True, multiscale=True, spectral=True, svd=True, correlation=True, higushi=True, petrosian=True, fisher=True, hurst=True, dfa=True, lyap_r=False, lyap_e=False, emb_dim=2, tolerance="default", k_max=8, bands=None, tau=1):
if tolerance == "default":
    tolerance = <NUM_LIT> * np.std(signal)
complexity = {}

# --------------------------------------------------------------------------
if shannon is True:
    try:
        complexity["<STR_LIT>"] = complexity_entropy_shannon(signal)
    except:
        print("<STR_LIT>")
        complexity["<STR_LIT>"] = np.nan
if sampen is True:
    try:
        complexity["<STR_LIT>"] = nolds.sampen(signal, emb_dim, tolerance, dist="<STR_LIT>", debug_plot=False, plot_file=None)
    except:
        print("<STR_LIT>")
        complexity["<STR_LIT>"] = np.nan
if multiscale is True:
    try:
        complexity["<STR_LIT>"] = complexity_entropy_multiscale(signal, emb_dim, tolerance)
    except:
        print("<STR_LIT>")
        complexity["<STR_LIT>"] = np.nan
if spectral is True:
    try:
        complexity["<STR_LIT>"] = complexity_entropy_spectral(signal, sampling_rate=sampling_rate, bands=bands)
    except:
        print("<STR_LIT>")
        complexity["<STR_LIT>"] = np.nan
if svd is True:
    try:
        complexity["<STR_LIT>"] = complexity_entropy_svd(signal, tau=tau, emb_dim=emb_dim)
    except:
        print("<STR_LIT>")
        complexity["<STR_LIT>"] = np.nan

# --------------------------------------------------------------------------
if correlation is True:
    try:
        complexity["<STR_LIT>"] = nolds.corr_dim(signal, emb_dim, rvals=None, fit="<STR_LIT>", debug_plot=False, plot_file=None)
    except:
        print("<STR_LIT>")
        complexity["<STR_LIT>"] = np.nan
if higushi is True:
    try:
        complexity["<STR_LIT>"] = complexity_fd_higushi(signal, k_max)
    except:
        print("<STR_LIT>")
        complexity["<STR_LIT>"] = np.nan
if petrosian is True:
    try:
        complexity["<STR_LIT>"] = complexity_fd_petrosian(signal)
    except:
        print("<STR_LIT>")
        complexity["<STR_LIT>"] = np.nan

# --------------------------------------------------------------------------
if fisher is True:
    try:
        complexity["<STR_LIT>"] = complexity_fisher_info(signal, tau=tau, emb_dim=emb_dim)
    except:
        print("<STR_LIT>")
        complexity["<STR_LIT>"] = np.nan
if hurst is True:
    try:
        complexity["<STR_LIT>"] = nolds.hurst_rs(signal, nvals=None, fit="<STR_LIT>", debug_plot=False, plot_file=None)
    except:
        print("<STR_LIT>")
        complexity["<STR_LIT>"] = np.nan
if dfa is True:
    try:
        complexity["<STR_LIT>"] = nolds.dfa(signal, nvals=None, overlap=True, order=1, fit_trend="<STR_LIT>", fit_exp="<STR_LIT>", debug_plot=False, plot_file=None)
    except:
        print("<STR_LIT>")
        complexity["<STR_LIT>"] = np.nan
if lyap_r is True:
    try:
        complexity["<STR_LIT>"] = nolds.lyap_r(signal, emb_dim=10, lag=None, min_tsep=None, tau=tau, min_vectors=20, trajectory_len=20, fit="<STR_LIT>", debug_plot=False, plot_file=None)
    except:
        print("<STR_LIT>")
        complexity["<STR_LIT>"] = np.nan
if lyap_e is True:
    try:
        result = nolds.lyap_e(signal, emb_dim=10, matrix_dim=4, min_nb=None, min_tsep=0, tau=tau, debug_plot=False, plot_file=None)
        for i, value in enumerate(result):
            complexity["<STR_LIT>" + str(i)] = value
    except:
        print("<STR_LIT>")
        complexity["<STR_LIT>"] = np.nan
return complexity
Computes several chaos/complexity indices of a signal (including entropy, fractal dimensions, Hurst and Lyapunov exponents, etc.).

Parameters
----------
signal : list or array
    List or array of values.
sampling_rate : int
    Sampling rate (samples/second).
shannon : bool
    Computes Shannon entropy.
sampen : bool
    Computes approximate sample entropy (sampen) using Chebychev and Euclidean distances.
multiscale : bool
    Computes multiscale entropy (MSE). Note that it uses the 'euclidean' distance.
spectral : bool
    Computes Spectral Entropy.
svd : bool
    Computes the Singular Value Decomposition (SVD) entropy.
correlation : bool
    Computes the fractal (correlation) dimension.
higushi : bool
    Computes the Higuchi fractal dimension.
petrosian : bool
    Computes the Petrosian fractal dimension.
fisher : bool
    Computes the Fisher Information.
hurst : bool
    Computes the Hurst exponent.
dfa : bool
    Computes DFA.
lyap_r : bool
    Computes positive Lyapunov exponents (Rosenstein et al. (1993) method).
lyap_e : bool
    Computes positive Lyapunov exponents (Eckmann et al. (1986) method).
emb_dim : int
    The embedding dimension (*m*, the length of vectors to compare). Used in sampen, fisher, svd and fractal_dim.
tolerance : float
    Distance *r* threshold for two template vectors to be considered equal. Default is 0.2*std(signal). Used in sampen and fractal_dim.
k_max : int
    The maximal value of k used for the Higuchi fractal dimension. The point at which the FD plateaus is considered a saturation point and that kmax value should be selected (Gómez, 2009). Some studies use a value of 8 or 16 for ECG signals and others 48 for MEG.
bands : int
    Used for spectral density. A list of numbers delimiting the bins of the frequency bands. If None the entropy is computed over the whole range of the DFT (from 0 to `f_s/2`).
tau : int
    The delay. Used for fisher, svd, lyap_e and lyap_r.

Returns
-------
complexity : dict
    Dict containing values for each index.

Example
-------
>>> import neurokit as nk
>>> import numpy as np
>>>
>>> signal = np.sin(np.log(np.random.sample(666)))
>>> complexity = nk.complexity(signal)

Notes
-----
*Details*

- **Entropy**: Entropy is a measure of unpredictability of the state, or equivalently, of its average information content.
  - *Shannon entropy*: Shannon entropy was introduced by Claude E. Shannon in his 1948 paper "A Mathematical Theory of Communication". Shannon entropy provides an absolute limit on the best possible average length of lossless encoding or compression of an information source.
  - *Sample entropy (sampen)*: Measures the complexity of a time series, based on approximate entropy. The sample entropy of a time series is defined as the negative natural logarithm of the conditional probability that two sequences similar for emb_dim points remain similar at the next point, excluding self-matches. A lower value for the sample entropy therefore corresponds to a higher probability, indicating more self-similarity.
  - *Multiscale entropy*: Multiscale entropy (MSE) analysis is a method of measuring the complexity of finite length time series.
  - *SVD Entropy*: Indicator of how many vectors are needed for an adequate explanation of the data set. Measures feature-richness in the sense that the higher the entropy of the set of SVD weights, the more orthogonal vectors are required to adequately explain it.
- **Fractal dimension**: The term *fractal* was first introduced by Mandelbrot in 1983. A fractal is a set of points that, when looked at smaller scales, resembles the whole set. The concept of fractal dimension (FD) originates from fractal geometry. In traditional geometry, the topological or Euclidean dimension of an object is known as the number of directions each differential of the object occupies in space. This definition of dimension works well for geometrical objects whose level of detail, complexity or *space-filling* is the same. However, when considering two fractals of the same topological dimension, their level of *space-filling* is different, and that information is not given by the topological dimension. The FD emerges to provide a measure of how much space an object occupies between Euclidean dimensions. The FD of a waveform represents a powerful tool for transient detection. This feature has been used in the analysis of ECG and EEG to identify and distinguish specific states of physiologic function. Many algorithms are available to determine the FD of the waveform (Acharya, 2005).
  - *Correlation*: A measure of the fractal (or correlation) dimension of a time series which is also related to complexity. The correlation dimension is a characteristic measure that can be used to describe the geometry of chaotic attractors. It is defined using the correlation sum C(r), which is the fraction of pairs of points X_i in the phase space whose distance is smaller than r.
  - *Higuchi*: Higuchi proposed in 1988 an efficient algorithm for measuring the FD of discrete time sequences. As the reconstruction of the attractor phase space is not necessary, this algorithm is simpler and faster than D2 and other classical measures derived from chaos theory. FD can be used to quantify the complexity and self-similarity of a signal. HFD has already been used to analyse the complexity of brain recordings and other biological signals.
  - *Petrosian Fractal Dimension*: Provides a fast computation of the FD of a signal by translating the series into a binary sequence.
- **Other**:
  - *Fisher Information*: A way of measuring the amount of information that an observable random variable X carries about an unknown parameter θ of a distribution that models X. Formally, it is the variance of the score, or the expected value of the observed information.
  - *Hurst*: The Hurst exponent is a measure of the "long-term memory" of a time series. It can be used to determine whether the time series is more, less, or equally likely to increase if it has increased in previous steps. This property makes the Hurst exponent especially interesting for the analysis of stock data.
  - *DFA*: DFA measures the Hurst parameter H, which is very similar to the Hurst exponent. The main difference is that DFA can be used for non-stationary processes (whose mean and/or variance change over time).
  - *Lyap*: Positive Lyapunov exponents indicate chaos and unpredictability. Provides the algorithm of Rosenstein et al. (1993) to estimate the largest Lyapunov exponent and the algorithm of Eckmann et al. (1986) to estimate the whole spectrum of Lyapunov exponents.

*Authors*
- Dominique Makowski (https://github.com/DominiqueMakowski)
- Christopher Schölzel (https://github.com/CSchoel)
- tjugo (https://github.com/nikdon)
- Quentin Geissmann (https://github.com/qgeissmann)

*Dependencies*
- nolds
- numpy

*See Also*
- nolds package: https://github.com/CSchoel/nolds
- pyEntropy package: https://github.com/nikdon/pyEntropy
- pyrem package: https://github.com/gilestrolab/pyrem

References
----------
- Accardo, A., Affinito, M., Carrozzi, M., & Bouquet, F. (1997). Use of the fractal dimension for the analysis of electroencephalographic time series. Biological cybernetics, 77(5), 339-350.
- Pierzchalski, M. Application of Higuchi Fractal Dimension in Analysis of Heart Rate Variability with Artificial and Natural Noise. Recent Advances in Systems Science.
- Acharya, R., Bhat, P. S., Kannathal, N., Rao, A., & Lim, C. M. (2005). Analysis of cardiac health using fractal dimension and wavelet transformation. ITBM-RBM, 26(2), 133-139.
- Richman, J. S., & Moorman, J. R. (2000). Physiological time-series analysis using approximate entropy and sample entropy. American Journal of Physiology-Heart and Circulatory Physiology, 278(6), H2039-H2049.
- Costa, M., Goldberger, A. L., & Peng, C. K. (2005). Multiscale entropy analysis of biological signals. Physical review E, 71(2), 021906.
f10894:m0
def complexity_entropy_shannon(signal):
if not isinstance(signal, str):
    signal = list(signal)
signal = np.array(signal)

# Create a frequency data
data_set = list(set(signal))
freq_list = []
for entry in data_set:
    counter = 0.
    for i in signal:
        if i == entry:
            counter += 1
    freq_list.append(float(counter) / len(signal))

# Shannon entropy (closing lines restored from the standard formula)
shannon_entropy = 0.0
for freq in freq_list:
    shannon_entropy += freq * np.log2(freq)
shannon_entropy = -shannon_entropy
return shannon_entropy
Computes the Shannon entropy. Copied from the `pyEntropy <https://github.com/nikdon/pyEntropy>`_ repo by tjugo.

Parameters
----------
signal : list or array
    List or array of values.

Returns
-------
shannon_entropy : float
    The Shannon Entropy as a float value.

Example
-------
>>> import neurokit as nk
>>>
>>> signal = np.sin(np.log(np.random.sample(666)))
>>> shannon_entropy = nk.complexity_entropy_shannon(signal)

Notes
-----
*Details*
- **Shannon entropy**: Entropy is a measure of unpredictability of the state, or equivalently, of its average information content.

*Authors*
- tjugo (https://github.com/nikdon)

*Dependencies*
- numpy

*See Also*
- pyEntropy package: https://github.com/nikdon/pyEntropy

References
----------
- None
f10894:m1
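The frequency loop above implements the standard Shannon entropy over the empirical distribution of distinct values:

    H(X) = -\sum_i p_i \log_2 p_i

where $p_i$ is the relative frequency collected in ``freq_list``.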
def complexity_entropy_multiscale(signal, max_scale_factor=20, m=2, r="default"):
if r == "default":
    r = <NUM_LIT> * np.std(signal)
n = len(signal)
per_scale_entropy_values = np.zeros(max_scale_factor)
for i in range(max_scale_factor):
    b = int(np.fix(n / (i + 1)))
    temp_ts = [0] * int(b)
    for j in range(b):
        num = sum(signal[j * (i + 1): (j + 1) * (i + 1)])
        den = i + 1
        temp_ts[j] = float(num) / float(den)
    se = nolds.sampen(temp_ts, m, r, nolds.measures.rowwise_chebyshev, debug_plot=False, plot_file=None)
    if np.isinf(se):
        print("<STR_LIT>" + str(i) + "<STR_LIT>" + str(i) + ".")
        max_scale_factor = i
        break
    else:
        per_scale_entropy_values[i] = se
all_entropy_values = per_scale_entropy_values[0:max_scale_factor]
parameters = {"<STR_LIT>": max_scale_factor,
              "r": r,
              "m": m}
mse = {"<STR_LIT>": parameters,
       "<STR_LIT>": all_entropy_values,
       "<STR_LIT>": np.trapz(all_entropy_values),
       "<STR_LIT>": np.sum(all_entropy_values)}
return mse
Computes the Multiscale Entropy. Uses sample entropy with the 'chebychev' distance.

Parameters
----------
signal : list or array
    List or array of values.
max_scale_factor : int
    Max scale factor (*tau*). The max length of coarse-grained time series analyzed. Will analyze scales for all integers from 1:max_scale_factor. See Costa (2005).
m : int
    The embedding dimension (*m*, the length of vectors to compare).
r : float
    Similarity factor *r*. Distance threshold for two template vectors to be considered equal. Default is 0.15*std(signal).

Returns
-------
mse : dict
    A dict containing "MSE_Parameters" (a dict with the actual max_scale_factor, m and r), "MSE_Values" (an array with the sample entropy for each scale_factor up to the max_scale_factor), "MSE_AUC" (a float: the area under the MSE_Values curve; a point-estimate of MSE) and "MSE_Sum" (a float: the sum of the MSE_Values curve; another point-estimate of MSE; Norris, 2008).

Example
-------
>>> import neurokit as nk
>>>
>>> signal = np.sin(np.log(np.random.sample(666)))
>>> mse = nk.complexity_entropy_multiscale(signal)
>>> mse_values = mse["MSE_Values"]

Notes
-----
*Details*
- **Multiscale entropy**: Entropy is a measure of unpredictability of the state, or equivalently, of its average information content. Multiscale entropy (MSE) analysis is a method of measuring the complexity of coarse-grained versions of the original data, where coarse-graining is done at all scale factors from 1:max_scale_factor.

*Authors*
- tjugo (https://github.com/nikdon)
- Dominique Makowski (https://github.com/DominiqueMakowski)
- Anthony Gatti (https://github.com/gattia)

*Dependencies*
- numpy
- nolds

*See Also*
- pyEntropy package: https://github.com/nikdon/pyEntropy

References
----------
- Richman, J. S., & Moorman, J. R. (2000). Physiological time-series analysis using approximate entropy and sample entropy. American Journal of Physiology-Heart and Circulatory Physiology, 278(6), H2039-H2049.
- Costa, M., Goldberger, A. L., & Peng, C. K. (2005). Multiscale entropy analysis of biological signals. Physical review E, 71(2), 021906.
- Gow, B. J., Peng, C. K., Wayne, P. M., & Ahn, A. C. (2015). Multiscale entropy analysis of center-of-pressure dynamics in human postural control: methodological considerations. Entropy, 17(12), 7926-7947.
- Norris, P. R., Anderson, S. M., Jenkins, J. M., Williams, A. E., & Morris Jr, J. A. (2008). Heart rate multiscale entropy at three hours predicts hospital mortality in 3,154 trauma patients. Shock, 30(1), 17-22.
f10894:m2
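The inner loop above is the usual MSE coarse-graining: for a scale factor $\tau$ (``i + 1`` in the code), the series is averaged over non-overlapping windows,

    y_j^{(\tau)} = \frac{1}{\tau} \sum_{i=(j-1)\tau+1}^{j\tau} x_i, \qquad 1 \le j \le \lfloor N/\tau \rfloor,

and the sample entropy of each coarse-grained series $y^{(\tau)}$ is recorded per scale.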
def complexity_fd_higushi(signal, k_max):
signal = np.array(signal)
L = []
x = []
N = signal.size
km_idxs = np.triu_indices(k_max - 1)
km_idxs = k_max - np.flipud(np.column_stack(km_idxs)) - 1
km_idxs[:, 1] -= 1
for k in range(1, k_max):
    Lk = 0
    for m in range(0, k):
        idxs = np.arange(1, int(np.floor((N - m) / k)))
        Lmk = np.sum(np.abs(signal[m + idxs * k] - signal[m + k * (idxs - 1)]))
        Lmk = (Lmk * (N - 1) / (((N - m) / k) * k)) / k
        Lk += Lmk
    if Lk != 0:
        L.append(np.log(Lk / (m + 1)))
        x.append([np.log(1.0 / k), 1])
(p, r1, r2, s) = np.linalg.lstsq(x, L)
fd_higushi = p[0]
return fd_higushi
Computes the Higuchi Fractal Dimension of a signal. Based on the `pyrem <https://github.com/gilestrolab/pyrem>`_ repo by Quentin Geissmann.

Parameters
----------
signal : list or array
    List or array of values.
k_max : int
    The maximal value of k. The point at which the FD plateaus is considered a saturation point and that kmax value should be selected (Gómez, 2009). Some studies use a value of 8 or 16 for ECG signals and others 48 for MEG.

Returns
-------
fd_higushi : float
    The Higuchi Fractal Dimension as a float value.

Example
-------
>>> import neurokit as nk
>>>
>>> signal = np.sin(np.log(np.random.sample(666)))
>>> fd_higushi = nk.complexity_fd_higushi(signal, 8)

Notes
-----
*Details*
- **Higuchi Fractal Dimension**: Higuchi proposed in 1988 an efficient algorithm for measuring the FD of discrete time sequences. As the reconstruction of the attractor phase space is not necessary, this algorithm is simpler and faster than D2 and other classical measures derived from chaos theory. FD can be used to quantify the complexity and self-similarity of a signal. HFD has already been used to analyse the complexity of brain recordings and other biological signals.

*Authors*
- Quentin Geissmann (https://github.com/qgeissmann)

*Dependencies*
- numpy

*See Also*
- pyrem package: https://github.com/gilestrolab/pyrem

References
----------
- Accardo, A., Affinito, M., Carrozzi, M., & Bouquet, F. (1997). Use of the fractal dimension for the analysis of electroencephalographic time series. Biological cybernetics, 77(5), 339-350.
- Gómez, C., Mediavilla, Á., Hornero, R., Abásolo, D., & Fernández, A. (2009). Use of the Higuchi's fractal dimension for the analysis of MEG recordings from Alzheimer's disease patients. Medical engineering & physics, 31(3), 306-313.
f10894:m3
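For reference, the standard Higuchi curve length that the double loop approximates (up to the indexing details of the implementation) is

    L_m(k) = \frac{N-1}{\lfloor (N-m)/k \rfloor \, k^2} \sum_{i=1}^{\lfloor (N-m)/k \rfloor} \left| x_{m+ik} - x_{m+(i-1)k} \right|,

and the fractal dimension is the slope of the least-squares fit of $\log L(k)$ against $\log(1/k)$, which is what the ``np.linalg.lstsq`` call computes.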
def complexity_entropy_spectral(signal, sampling_rate, bands=None):
psd = np.abs(np.fft.rfft(signal)) ** 2
psd /= np.sum(psd)
if bands is None:
    power_per_band = psd[psd > 0]
else:
    freqs = np.fft.rfftfreq(signal.size, 1 / float(sampling_rate))
    bands = np.asarray(bands)
    freq_limits_low = np.concatenate([[0.0], bands])
    freq_limits_up = np.concatenate([bands, [np.Inf]])
    power_per_band = [np.sum(psd[np.bitwise_and(freqs >= low, freqs < up)])
                      for low, up in zip(freq_limits_low, freq_limits_up)]
    power_per_band = np.array(power_per_band)[np.array(power_per_band) > 0]
spectral = -np.sum(power_per_band * np.log2(power_per_band))
return spectral
Computes the Spectral Entropy of a signal. Based on the `pyrem <https://github.com/gilestrolab/pyrem>`_ repo by Quentin Geissmann. The power spectrum is computed through the FFT, then normalised and treated as a probability density function.

Parameters
----------
signal : list or array
    List or array of values.
sampling_rate : int
    Sampling rate (samples/second).
bands : list or array
    A list of numbers delimiting the bins of the frequency bands. If None the entropy is computed over the whole range of the DFT (from 0 to `f_s/2`).

Returns
-------
spectral_entropy : float
    The spectral entropy as a float value.

Example
-------
>>> import neurokit as nk
>>>
>>> signal = np.sin(np.log(np.random.sample(666)))
>>> spectral_entropy = nk.complexity_entropy_spectral(signal, 1000)

Notes
-----
*Details*
- **Spectral Entropy**: Entropy for different frequency bands.

*Authors*
- Quentin Geissmann (https://github.com/qgeissmann)

*Dependencies*
- numpy

*See Also*
- pyrem package: https://github.com/gilestrolab/pyrem
f10894:m4
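The computation above normalizes the FFT power spectrum into a probability distribution $P_f$ (per bin, or per band when ``bands`` is given) and applies Shannon entropy to it:

    H_s = -\sum_f P_f \log_2 P_f, \qquad \sum_f P_f = 1.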
def complexity_entropy_svd(signal, tau=1, emb_dim=2):
mat = _embed_seq(signal, tau, emb_dim)
W = np.linalg.svd(mat, compute_uv=False)
W /= sum(W)
entropy_svd = -1 * sum(W * np.log2(W))
return entropy_svd
Computes the Singular Value Decomposition (SVD) entropy of a signal. Based on the `pyrem <https://github.com/gilestrolab/pyrem>`_ repo by Quentin Geissmann.

Parameters
----------
signal : list or array
    List or array of values.
tau : int
    The delay.
emb_dim : int
    The embedding dimension (*m*, the length of vectors to compare).

Returns
-------
entropy_svd : float
    The SVD entropy as a float value.

Example
-------
>>> import neurokit as nk
>>>
>>> signal = np.sin(np.log(np.random.sample(666)))
>>> entropy_svd = nk.complexity_entropy_svd(signal, 1, 2)

Notes
-----
*Details*
- **SVD Entropy**: Indicator of how many vectors are needed for an adequate explanation of the data set. Measures feature-richness in the sense that the higher the entropy of the set of SVD weights, the more orthogonal vectors are required to adequately explain it.

*Authors*
- Quentin Geissmann (https://github.com/qgeissmann)

*Dependencies*
- numpy

*See Also*
- pyrem package: https://github.com/gilestrolab/pyrem
f10894:m6
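In formula form: with $\bar\sigma_1, \dots, \bar\sigma_M$ the singular values of the delay-embedding matrix normalized to sum to 1 (``W`` in the code),

    H_{\mathrm{SVD}} = -\sum_{i=1}^{M} \bar\sigma_i \log_2 \bar\sigma_i.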
def complexity_fd_petrosian(signal):
diff = np.diff(signal)
prod = diff[1:-1] * diff[0:-2]
N_delta = np.sum(prod < 0)
n = len(signal)
fd_petrosian = np.log(n) / (np.log(n) + np.log(n / (n + <NUM_LIT> * N_delta)))
return fd_petrosian
Computes the Petrosian Fractal Dimension of a signal. Based on the `pyrem <https://github.com/gilestrolab/pyrem>`_ repo by Quentin Geissmann.

Parameters
----------
signal : list or array
    List or array of values.

Returns
-------
fd_petrosian : float
    The Petrosian FD as a float value.

Example
-------
>>> import neurokit as nk
>>>
>>> signal = np.sin(np.log(np.random.sample(666)))
>>> fd_petrosian = nk.complexity_fd_petrosian(signal)

Notes
-----
*Details*
- **Petrosian Fractal Dimension**: Provides a fast computation of the FD of a signal by translating the series into a binary sequence.

*Authors*
- Quentin Geissmann (https://github.com/qgeissmann)

*Dependencies*
- numpy

*See Also*
- pyrem package: https://github.com/gilestrolab/pyrem
f10894:m7
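The standard Petrosian formula, which the return line implements with $N_\Delta$ the number of sign changes in the first difference, is

    \mathrm{PFD} = \frac{\log_{10} n}{\log_{10} n + \log_{10}\!\left(\frac{n}{n + 0.4\, N_\Delta}\right)}.

The constant elided as ``<NUM_LIT>`` above is 0.4 in the standard formulation (an assumption about the elided literal), and the base of the logarithm cancels in the ratio, so the code's natural log is equivalent.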
def complexity_fisher_info(signal, tau=1, emb_dim=2):
mat = _embed_seq(signal, tau, emb_dim)
W = np.linalg.svd(mat, compute_uv=False)
W /= sum(W)
FI_v = (W[1:] - W[:-1]) ** 2 / W[:-1]
fisher_info = np.sum(FI_v)
return fisher_info
Computes the Fisher information of a signal. Based on the `pyrem <https://github.com/gilestrolab/pyrem>`_ repo by Quentin Geissmann.

Parameters
----------
signal : list or array
    List or array of values.
tau : int
    The delay.
emb_dim : int
    The embedding dimension (*m*, the length of vectors to compare).

Returns
-------
fisher_info : float
    The Fisher information as a float value.

Example
-------
>>> import neurokit as nk
>>>
>>> signal = np.sin(np.log(np.random.sample(666)))
>>> fisher_info = nk.complexity_fisher_info(signal, 1, 2)

Notes
-----
*Details*
- **Fisher Information**: A way of measuring the amount of information that an observable random variable X carries about an unknown parameter θ of a distribution that models X. Formally, it is the variance of the score, or the expected value of the observed information.

*Authors*
- Quentin Geissmann (https://github.com/qgeissmann)

*Dependencies*
- numpy

*See Also*
- pyrem package: https://github.com/gilestrolab/pyrem
f10894:m8
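In formula form: with $\bar\sigma_i$ the normalized singular values as in the SVD entropy above, the code sums

    \mathrm{FI} = \sum_{i=1}^{M-1} \frac{(\bar\sigma_{i+1} - \bar\sigma_i)^2}{\bar\sigma_i},

which is exactly the ``FI_v`` expression.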
def binarize_signal(signal, treshold="<STR_LIT>", cut="<STR_LIT>"):
if treshold == "<STR_LIT>":
    treshold = (np.max(np.array(signal)) - np.min(np.array(signal))) / 2
signal = list(signal)
binary_signal = []
for i in range(len(signal)):
    if cut == "<STR_LIT>":
        if signal[i] > treshold:
            binary_signal.append(1)
        else:
            binary_signal.append(0)
    else:
        if signal[i] < treshold:
            binary_signal.append(1)
        else:
            binary_signal.append(0)
return binary_signal
Binarize a channel based on a continuous signal.

Parameters
----------
signal : array or list
    The signal channel.
treshold : float
    The threshold value by which to select the events. If "auto", takes the value between the max and the min.
cut : str
    "higher" or "lower", defines the events as above or under the threshold. For photosensors, a white screen usually corresponds to higher values. Therefore, if your events were signalled by a black colour, the event values would be the lower ones, and you should set the cut to "lower".

Returns
-------
binary_signal : list
    The binarized signal.

Example
-------
>>> import neurokit as nk
>>> binary_signal = nk.binarize_signal(signal, treshold=4)

Authors
-------
- `Dominique Makowski <https://dominiquemakowski.github.io/>`_

Dependencies
------------
None
f10895:m0
def localize_events(events_channel, treshold="<STR_LIT>", cut="<STR_LIT>", time_index=None):
events_channel = binarize_signal(events_channel, treshold=treshold, cut=cut)
events = {"<STR_LIT>": [], "<STR_LIT>": []}
if time_index is not None:
    events["<STR_LIT>"] = []
index = 0
for key, g in groupby(events_channel):
    duration = len(list(g))
    if key == 1:
        events["<STR_LIT>"].append(index)
        events["<STR_LIT>"].append(duration)
        if time_index is not None:
            events["<STR_LIT>"].append(time_index[index])
    index += duration
return events
Find the onsets of all events based on a continuous signal.

Parameters
----------
events_channel : array or list
    The trigger channel.
treshold : float
    The threshold value by which to select the events. If "auto", takes the value between the max and the min.
cut : str
    "higher" or "lower", defines the events as above or under the threshold. For photosensors, a white screen usually corresponds to higher values. Therefore, if your events were signalled by a black colour, the event values would be the lower ones, and you should set the cut to "lower".
time_index : array or list
    Add a corresponding datetime index; will return an additional array with the onsets as datetimes.

Returns
-------
events : dict
    Dict containing the onsets, the durations and the time index if provided.

Example
-------
>>> import neurokit as nk
>>> events = nk.localize_events(events_channel)

Authors
-------
- `Dominique Makowski <https://dominiquemakowski.github.io/>`_

Dependencies
------------
None
f10895:m1
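The core of localize_events is a run-length encoding with itertools.groupby: consecutive identical samples collapse into (value, run) pairs, so each run of 1s is one event whose onset is the running sample index. A standalone illustration of that idiom:

from itertools import groupby

binary = [0, 0, 1, 1, 1, 0, 1, 0]
onsets, durations = [], []
index = 0
for value, run in groupby(binary):
    length = len(list(run))  # the run is an iterator, so count it once
    if value == 1:
        onsets.append(index)
        durations.append(length)
    index += length
print(onsets, durations)  # [2, 6] [3, 1]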
def find_events(events_channel, treshold="<STR_LIT>", cut="<STR_LIT>", time_index=None, number="<STR_LIT:all>", after=<NUM_LIT:0>, before=None, min_duration=<NUM_LIT:1>):
events = localize_events(events_channel, treshold=treshold, cut=cut, time_index=time_index)<EOL>if len(events["<STR_LIT>"]) == <NUM_LIT:0>:<EOL><INDENT>print("<STR_LIT>")<EOL>return()<EOL><DEDENT>toremove = []<EOL>for event in range(len(events["<STR_LIT>"])):<EOL><INDENT>if events["<STR_LIT>"][event] < min_duration:<EOL><INDENT>toremove.append(False)<EOL><DEDENT>else:<EOL><INDENT>toremove.append(True)<EOL><DEDENT><DEDENT>events["<STR_LIT>"] = np.array(events["<STR_LIT>"])[np.array(toremove)]<EOL>events["<STR_LIT>"] = np.array(events["<STR_LIT>"])[np.array(toremove)]<EOL>if time_index is not None:<EOL><INDENT>events["<STR_LIT>"] = np.array(events["<STR_LIT>"])[np.array(toremove)]<EOL><DEDENT>if isinstance(number, int):<EOL><INDENT>after_times = []<EOL>after_onsets = []<EOL>after_length = []<EOL>before_times = []<EOL>before_onsets = []<EOL>before_length = []<EOL>if after is not None:<EOL><INDENT>if events["<STR_LIT>"] == []:<EOL><INDENT>events["<STR_LIT>"] = np.array(events["<STR_LIT>"])<EOL><DEDENT>else:<EOL><INDENT>events["<STR_LIT>"] = np.array(events["<STR_LIT>"])<EOL><DEDENT>after_onsets = list(np.array(events["<STR_LIT>"])[events["<STR_LIT>"]>after])[:number]<EOL>after_times = list(np.array(events["<STR_LIT>"])[events["<STR_LIT>"]>after])[:number]<EOL>after_length = list(np.array(events["<STR_LIT>"])[events["<STR_LIT>"]>after])[:number]<EOL><DEDENT>if before is not None:<EOL><INDENT>if events["<STR_LIT>"] == []:<EOL><INDENT>events["<STR_LIT>"] = np.array(events["<STR_LIT>"])<EOL><DEDENT>else:<EOL><INDENT>events["<STR_LIT>"] = np.array(events["<STR_LIT>"])<EOL><DEDENT>before_onsets = list(np.array(events["<STR_LIT>"])[events["<STR_LIT>"]<before])[:number]<EOL>before_times = list(np.array(events["<STR_LIT>"])[events["<STR_LIT>"]<before])[:number]<EOL>before_length = list(np.array(events["<STR_LIT>"])[events["<STR_LIT>"]<before])[:number]<EOL><DEDENT>events["<STR_LIT>"] = before_onsets + after_onsets<EOL>events["<STR_LIT>"] = before_times + after_times<EOL>events["<STR_LIT>"] = before_length + after_length<EOL><DEDENT>return(events)<EOL>
Find and select events based on a continuous signal. Parameters ---------- events_channel : array or list The trigger channel. treshold : float The threshold value by which to select the events. If "auto", takes the midpoint between the max and the min. cut : str "higher" or "lower", defines the events as above or under the threshold. For photosensors, a white screen usually corresponds to higher values. Therefore, if your events were signalled by a black colour, the events' values would be the lower ones, and you should set the cut to "lower". time_index : array or list Adds a corresponding datetime index; an additional array with the onsets as datetimes will be returned. number : str or int How many events to select ("all" or an integer). after : int If number is different from "all", the time after which events should be selected. before : int If number is different from "all", the time before which events should be selected. min_duration : int The minimum duration of an event (in timepoints). Returns ---------- events : dict Dict containing events onsets and durations. Example ---------- >>> import neurokit as nk >>> events = nk.find_events(events_channel) Notes ---------- *Authors* - `Dominique Makowski <https://dominiquemakowski.github.io/>`_ *Dependencies* - numpy
f10895:m2
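A hypothetical end-to-end usage sketch: the trigger channel below is synthetic, made up for illustration, and only the min_duration filtering is taken from the function above.

import numpy as np
import neurokit as nk  # assuming the package layout used in the examples above

# Synthetic trigger channel: three square pulses of different lengths.
events_channel = np.zeros(1000)
events_channel[100:150] = 5   # 50-sample event
events_channel[400:402] = 5   # 2-sample blip, dropped by min_duration below
events_channel[700:800] = 5   # 100-sample event

# Keep only events lasting at least 10 samples; the default "auto"
# threshold is the midpoint between the channel's extremes.
events = nk.find_events(events_channel, min_duration=10)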
def plot_events_in_signal(signal, events_onsets, color="<STR_LIT>", marker=None):
df = pd.DataFrame(signal)<EOL>ax = df.plot()<EOL>def plotOnSignal(x, color, marker=None):<EOL><INDENT>if marker is None:<EOL><INDENT>plt.axvline(x=x, color=color)<EOL><DEDENT>else:<EOL><INDENT>plt.plot(x, signal[x], marker, color=color)<EOL><DEDENT><DEDENT>events_onsets = np.array(events_onsets)<EOL>try:<EOL><INDENT>len(events_onsets[<NUM_LIT:0>])<EOL>for index, dim in enumerate(events_onsets):<EOL><INDENT>for event in dim:<EOL><INDENT>plotOnSignal(x=event,<EOL>color=color[index] if isinstance(color, list) else color,<EOL>marker=marker[index] if isinstance(marker, list) else marker)<EOL><DEDENT><DEDENT><DEDENT>except TypeError:<EOL><INDENT>for event in events_onsets:<EOL><INDENT>plotOnSignal(x=event,<EOL>color=color[<NUM_LIT:0>] if isinstance(color, list) else color,<EOL>marker=marker[<NUM_LIT:0>] if isinstance(marker, list) else marker)<EOL><DEDENT><DEDENT>return ax<EOL>
Plot events in a signal. Parameters ---------- signal : array or DataFrame Signal array (can be a dataframe with many signals). events_onsets : list or ndarray Events location. color : str or list Marker color(s). marker : marker or list of markers Marker type(s) (for possible marker values, see: https://matplotlib.org/api/markers_api.html). Example ---------- >>> import neurokit as nk >>> bio = nk.bio_process(ecg=signal, sampling_rate=1000) >>> events_onsets = bio["ECG"]["R_Peaks"] >>> plot_events_in_signal(bio["df"]["ECG_Filtered"], events_onsets) >>> plot_events_in_signal(bio["df"]["ECG_Filtered"], events_onsets, color="red", marker="o") >>> plot_events_in_signal(bio["df"]["ECG_Filtered"], [bio["ECG"]["P_Waves"], bio["ECG"]["R_Peaks"]], color=["blue", "red"], marker=["d","o"]) Notes ---------- *Authors* - `Dominique Makowski <https://dominiquemakowski.github.io/>`_ - `Renatosc <https://github.com/renatosc/>`_ *Dependencies* - matplotlib - numpy - pandas
f10895:m3
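plot_events_in_signal accepts either a flat list of onsets or a list of lists (one sub-list per event type). The dispatch relies on a try/except idiom: calling len() on the first element succeeds for nested input and raises TypeError for scalars. A standalone sketch of that check (the function name is illustrative):

import numpy as np

def is_nested(onsets):
    # len() of a scalar raises TypeError; len() of a sub-array succeeds.
    try:
        len(np.array(onsets)[0])
        return True
    except TypeError:
        return False

print(is_nested([10, 42, 99]))        # False -> single event type
print(is_nested([[10, 42], [5, 7]]))  # True  -> one sub-list per event type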
def eda_process(eda, sampling_rate=<NUM_LIT:1000>, alpha=<NUM_LIT>, gamma=<NUM_LIT>, filter_type="<STR_LIT>", scr_method="<STR_LIT>", scr_treshold=<NUM_LIT:0.1>):
<EOL>eda = np.array(eda)<EOL>eda_df = pd.DataFrame({"<STR_LIT>": eda})<EOL>if filter_type is not None:<EOL><INDENT>filtered, _, _ = biosppy.tools.filter_signal(signal=eda,<EOL>ftype=filter_type,<EOL>band='<STR_LIT>',<EOL>order=<NUM_LIT:4>,<EOL>frequency=<NUM_LIT:5>,<EOL>sampling_rate=sampling_rate)<EOL>filtered, _ = biosppy.tools.smoother(signal=filtered,<EOL>kernel='<STR_LIT>',<EOL>size=int(<NUM_LIT> * sampling_rate),<EOL>mirror=True)<EOL>eda_df["<STR_LIT>"] = filtered<EOL><DEDENT>try:<EOL><INDENT>tonic, phasic = cvxEDA(eda, sampling_rate=sampling_rate, alpha=alpha, gamma=gamma)<EOL>eda_df["<STR_LIT>"] = phasic<EOL>eda_df["<STR_LIT>"] = tonic<EOL>signal = phasic<EOL><DEDENT>except Exception:<EOL><INDENT>print("<STR_LIT>")<EOL>signal = eda<EOL><DEDENT>if scr_method == "<STR_LIT>":<EOL><INDENT>onsets, peaks, amplitudes = biosppy.eda.kbk_scr(signal=signal, sampling_rate=sampling_rate, min_amplitude=scr_treshold)<EOL>recoveries = np.array([np.nan]*len(onsets))<EOL><DEDENT>elif scr_method == "<STR_LIT>":<EOL><INDENT>onsets, peaks, amplitudes = biosppy.eda.basic_scr(signal=signal, sampling_rate=sampling_rate)<EOL>recoveries = np.array([np.nan]*len(onsets))<EOL><DEDENT>else:<EOL><INDENT>onsets, peaks, amplitudes, recoveries = eda_scr(signal, sampling_rate=sampling_rate, treshold=scr_treshold, method="<STR_LIT>")<EOL><DEDENT>scr_onsets = np.array([np.nan]*len(signal))<EOL>if len(onsets) > <NUM_LIT:0>:<EOL><INDENT>scr_onsets[onsets] = <NUM_LIT:1><EOL><DEDENT>eda_df["<STR_LIT>"] = scr_onsets<EOL>scr_recoveries = np.array([np.nan]*len(signal))<EOL>if len(recoveries) > <NUM_LIT:0>:<EOL><INDENT>scr_recoveries[recoveries[pd.notnull(recoveries)].astype(int)] = <NUM_LIT:1><EOL><DEDENT>eda_df["<STR_LIT>"] = scr_recoveries<EOL>scr_peaks = np.array([np.nan]*len(eda))<EOL>peak_index = <NUM_LIT:0><EOL>for index in range(len(scr_peaks)):<EOL><INDENT>try:<EOL><INDENT>if index == peaks[peak_index]:<EOL><INDENT>scr_peaks[index] = amplitudes[peak_index]<EOL>peak_index += <NUM_LIT:1><EOL><DEDENT><DEDENT>except IndexError:<EOL><INDENT>pass<EOL><DEDENT><DEDENT>eda_df["<STR_LIT>"] = scr_peaks<EOL>processed_eda = {"<STR_LIT>": eda_df,<EOL>"<STR_LIT>": {<EOL>"<STR_LIT>": onsets,<EOL>"<STR_LIT>": peaks,<EOL>"<STR_LIT>": recoveries,<EOL>"<STR_LIT>": amplitudes}}<EOL>return(processed_eda)<EOL>
Automated processing of EDA signal using convex optimization (cvxEDA; Greco et al., 2016). Parameters ---------- eda : list or array EDA signal array. sampling_rate : int Sampling rate (samples/second). alpha : float cvxEDA penalization for the sparse SMNA driver. gamma : float cvxEDA penalization for the tonic spline coefficients. filter_type : str or None Can be a Butterworth filter ("butter"), a Finite Impulse Response filter ("FIR"), Chebyshev filters ("cheby1" and "cheby2"), an Elliptic filter ("ellip") or a Bessel filter ("bessel"). Set to None to skip filtering. scr_method : str SCR extraction algorithm. "makowski" (default), "kim" (biosppy's default; see Kim et al., 2004) or "gamboa" (Gamboa, 2008). scr_treshold : float SCR minimum threshold (in terms of the signal's standard deviation). Returns ---------- processed_eda : dict Dict containing processed EDA features: the raw EDA signal, the filtered signal, the tonic and phasic components (if the cvxEDA decomposition succeeded), and the SCR onsets, peak indexes, recoveries and amplitudes. This function is mainly a wrapper for the biosppy.eda.eda() and cvxEDA() functions. Credits go to their authors. Example ---------- >>> import neurokit as nk >>> >>> processed_eda = nk.eda_process(eda_signal) Notes ---------- *Details* - **cvxEDA**: Based on a model which describes EDA as the sum of three terms: the phasic component, the tonic component, and an additive white Gaussian noise term incorporating model prediction errors as well as measurement errors and artifacts. This model is physiologically inspired and fully explains EDA through a rigorous methodology based on Bayesian statistics, mathematical convex optimization and sparsity. *Authors* - `Dominique Makowski <https://dominiquemakowski.github.io/>`_ *Dependencies* - biosppy - numpy - pandas - cvxopt *See Also* - BioSPPy: https://github.com/PIA-Group/BioSPPy - cvxEDA: https://github.com/lciti/cvxEDA References ----------- - Greco, A., Valenza, G., & Scilingo, E. P. (2016). Evaluation of CDA and CvxEDA Models. In Advances in Electrodermal Activity Processing with Applications for Mental Health (pp. 35-43). Springer International Publishing. - Greco, A., Valenza, G., Lanata, A., Scilingo, E. P., & Citi, L. (2016). cvxEDA: A convex optimization approach to electrodermal activity processing. IEEE Transactions on Biomedical Engineering, 63(4), 797-804. - Kim, K. H., Bang, S. W., & Kim, S. R. (2004). Emotion recognition system using short-term monitoring of physiological signals. Medical and biological engineering and computing, 42(3), 419-427. - Gamboa, H. (2008). Multi-Modal Behavioral Biometrics Based on HCI and Electrophysiology (Doctoral dissertation, PhD thesis, Universidade Técnica de Lisboa, Instituto Superior Técnico).
f10896:m0
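A hypothetical usage sketch for eda_process: the synthetic signal below is made up for illustration, and the "df" key on the returned dict is assumed by analogy with the bio_process example shown earlier.

import numpy as np
import neurokit as nk  # assuming the package layout used in the examples above

# Synthetic EDA trace: slow tonic drift plus a low-frequency oscillation.
sampling_rate = 1000
t = np.arange(0, 60, 1 / sampling_rate)
eda_signal = 5 + 0.01 * t + 0.3 * np.sin(2 * np.pi * 0.05 * t)

processed = nk.eda_process(eda_signal, sampling_rate=sampling_rate)
df = processed["df"]  # raw, filtered, and tonic/phasic components as columns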