def text(self):
    """Get the entire text content as str"""
    divisions = list(self.divisions)
    if len(divisions) == 0:
        return ''
    elif len(divisions) == 1:
        return divisions[0].text.strip()
    else:
        return super().text
def _calculate(self, startingPercentage, endPercentage, startDate, endDate):
    """This is the error calculation function that gets called by
    :py:meth:`BaseErrorMeasure.get_error`.

    Both parameters will be correct at this time.

    :param float startingPercentage: Defines the start of the interval.
        This has to be a value in [0.0, 100.0]. It represents the value,
        where the error calculation should be started. 25.0 for example
        means that the first 25% of all calculated errors will be ignored.
    :param float endPercentage: Defines the end of the interval. This has
        to be a value in [0.0, 100.0]. It represents the value, after
        which all error values will be ignored. 90.0 for example means
        that the last 10% of all local errors will be ignored.
    :param float startDate: Epoch representing the start date used for
        error calculation.
    :param float endDate: Epoch representing the end date used in the
        error calculation.

    :return: Returns a float representing the error.
    :rtype: float
    """
    # get the defined subset of error values
    errorValues = self._get_error_values(startingPercentage, endPercentage,
                                         startDate, endDate)
    # use a list comprehension rather than filter() so len() also works
    # on the result under Python 3
    errorValues = [item for item in errorValues if item is not None]
    return sorted(errorValues)[len(errorValues) // 2]
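As a standalone sketch of the median step above (assuming `_get_error_values` yields a list that may contain `None` entries; the helper name here is illustrative), the calculation reduces to:

```python
def median_error(error_values):
    # Drop missing values, then take the upper-median element,
    # matching sorted(...)[len(...) // 2] in _calculate above.
    valid = [e for e in error_values if e is not None]
    return sorted(valid)[len(valid) // 2]

print(median_error([3.0, None, 1.0, 2.0]))  # 2.0
```

Note that for an even number of values this picks the upper of the two middle elements rather than averaging them, exactly as the original expression does.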
def frequency(self, data_frame):
    """
    This method returns the number of taps divided by the test duration.

    :param data_frame: the data frame
    :type data_frame: pandas.DataFrame
    :return frequency: frequency
    :rtype frequency: float
    :return duration: test duration (seconds)
    :rtype duration: float
    """
    freq = sum(data_frame.action_type == 1) / data_frame.td[-1]
    duration = math.ceil(data_frame.td[-1])
    return freq, duration
def moving_frequency(self, data_frame):
    """
    This method returns moving frequency.

    :param data_frame: the data frame
    :type data_frame: pandas.DataFrame
    :return diff_mov_freq: frequency
    :rtype diff_mov_freq: float
    """
    f = []
    for i in range(0, (data_frame.td[-1].astype('int') - self.window)):
        f.append(sum(data_frame.action_type[(data_frame.td >= i) &
                                            (data_frame.td < (i + self.window))] == 1) /
                 float(self.window))
    diff_mov_freq = (np.array(f[1:-1]) - np.array(f[0:-2])) / np.array(f[0:-2])
    duration = math.ceil(data_frame.td[-1])
    return diff_mov_freq, duration
def continuous_frequency(self, data_frame):
    """
    This method returns continuous frequency.

    :param data_frame: the data frame
    :type data_frame: pandas.DataFrame
    :return cont_freq: frequency
    :rtype cont_freq: float
    """
    tap_timestamps = data_frame.td[data_frame.action_type == 1]
    cont_freq = 1.0 / (np.array(tap_timestamps[1:-1]) - np.array(tap_timestamps[0:-2]))
    duration = math.ceil(data_frame.td[-1])
    return cont_freq, duration
def mean_moving_time(self, data_frame):
    """
    This method calculates the mean time (ms) that the hand was moving
    from one target to the next.

    :param data_frame: the data frame
    :type data_frame: pandas.DataFrame
    :return mmt: the mean moving time in ms
    :rtype mmt: float
    """
    diff = data_frame.td[1:-1].values - data_frame.td[0:-2].values
    mmt = np.mean(diff[np.arange(1, len(diff), 2)]) * 1000.0
    duration = math.ceil(data_frame.td[-1])
    return mmt, duration
def incoordination_score(self, data_frame):
    """
    This method calculates the variance of the time interval in msec
    between taps.

    :param data_frame: the data frame
    :type data_frame: pandas.DataFrame
    :return inc_s: incoordination score
    :rtype inc_s: float
    """
    diff = data_frame.td[1:-1].values - data_frame.td[0:-2].values
    inc_s = np.var(diff[np.arange(1, len(diff), 2)], dtype=np.float64) * 1000.0
    duration = math.ceil(data_frame.td[-1])
    return inc_s, duration
def mean_alnt_target_distance(self, data_frame):
    """
    This method calculates the distance (number of pixels) between
    alternate tapping.

    :param data_frame: the data frame
    :type data_frame: pandas.DataFrame
    :return matd: the mean alternate target distance in pixels
    :rtype matd: float
    """
    dist = np.sqrt((data_frame.x[1:-1].values - data_frame.x[0:-2].values) ** 2 +
                   (data_frame.y[1:-1].values - data_frame.y[0:-2].values) ** 2)
    matd = np.mean(dist[np.arange(1, len(dist), 2)])
    duration = math.ceil(data_frame.td[-1])
    return matd, duration
def kinesia_scores(self, data_frame):
    """
    This method calculates the number of key taps.

    :param data_frame: the data frame
    :type data_frame: pandas.DataFrame
    :return ks: key taps
    :rtype ks: float
    :return duration: test duration (seconds)
    :rtype duration: float
    """
    # tap_timestamps = data_frame.td[data_frame.action_type == 1]
    # grouped = tap_timestamps.groupby(pd.TimeGrouper('30u'))
    # return np.mean(grouped.size().values)
    ks = sum(data_frame.action_type == 1)
    duration = math.ceil(data_frame.td[-1])
    return ks, duration
def akinesia_times(self, data_frame):
    """
    This method calculates akinesia times, the mean dwell time on each
    key in milliseconds.

    :param data_frame: the data frame
    :type data_frame: pandas.DataFrame
    :return at: akinesia times
    :rtype at: float
    :return duration: test duration (seconds)
    :rtype duration: float
    """
    raise_timestamps = data_frame.td[data_frame.action_type == 1]
    down_timestamps = data_frame.td[data_frame.action_type == 0]
    if len(raise_timestamps) == len(down_timestamps):
        at = np.mean(down_timestamps.values - raise_timestamps.values)
    elif len(raise_timestamps) > len(down_timestamps):
        at = np.mean(down_timestamps.values -
                     raise_timestamps.values[:-(len(raise_timestamps) - len(down_timestamps))])
    else:
        at = np.mean(down_timestamps.values[:-(len(down_timestamps) - len(raise_timestamps))] -
                     raise_timestamps.values)
    duration = math.ceil(data_frame.td[-1])
    return np.abs(at), duration
def dysmetria_score(self, data_frame):
    """
    This method calculates the accuracy of target taps in pixels.

    :param data_frame: the data frame
    :type data_frame: pandas.DataFrame
    :return ds: dysmetria score in pixels
    :rtype ds: float
    """
    tap_data = data_frame[data_frame.action_type == 0]
    ds = np.mean(np.sqrt((tap_data.x - tap_data.x_target) ** 2 +
                         (tap_data.y - tap_data.y_target) ** 2))
    duration = math.ceil(data_frame.td[-1])
    return ds, duration
def extract_features(self, data_frame, pre=''):
    """
    This method extracts all the features available to the Finger Tapping
    Processor class.

    :param data_frame: the data frame
    :type data_frame: pandas.DataFrame
    :return: a dict mapping feature names ('frequency', 'mean_moving_time',
        'incoordination_score', 'mean_alnt_target_distance',
        'kinesia_scores', 'akinesia_times', 'dysmetria_score') to values
    :rtype: dict
    """
    try:
        return {pre + 'frequency': self.frequency(data_frame)[0],
                pre + 'mean_moving_time': self.mean_moving_time(data_frame)[0],
                pre + 'incoordination_score': self.incoordination_score(data_frame)[0],
                pre + 'mean_alnt_target_distance': self.mean_alnt_target_distance(data_frame)[0],
                pre + 'kinesia_scores': self.kinesia_scores(data_frame)[0],
                pre + 'akinesia_times': self.akinesia_times(data_frame)[0],
                pre + 'dysmetria_score': self.dysmetria_score(data_frame)[0]}
    except Exception:
        logging.error("Error on FingerTappingProcessor process, extract features: %s",
                      sys.exc_info()[0])
def __train(self, n_neighbors=3):
    """
    Train the classifier implementing the `k-nearest neighbors vote
    <http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html>`_

    :param n_neighbors: the number of neighbors
    :type n_neighbors: int
    """
    try:
        for obs in self.observations:
            features, ids = self.__get_features_for_observation(
                observation=obs, skip_id=3497, last_column_is_id=True)
            normalised_data = whiten(features)
            x = pd.DataFrame(normalised_data)
            y = self.labels[obs].values
            knn = KNeighborsClassifier(n_neighbors=n_neighbors, weights='distance')
            knn.fit(x, y)
            if not self.knns:
                self.knns = [[obs, knn]]
            else:
                self.knns.append([obs, knn])
    except IOError as e:
        ierr = "({}): {}".format(e.errno, e.strerror)
        logging.error("Error training Clinical UPDRS, file not found, I/O error %s", ierr)
    except ValueError as verr:
        logging.error("Error training Clinical UPDRS ValueError ->%s", verr.message)
    except Exception:
        logging.error("Unexpected error on training Clinical UPDRS init: %s",
                      sys.exc_info()[0])
def __get_features_for_observation(self, data_frame=None, observation='LA-LL',
                                   skip_id=None, last_column_is_id=False):
    """
    Extract the features for a given observation from a data frame.

    :param data_frame: data frame to get features from
    :type data_frame: pandas.DataFrame
    :param observation: observation name
    :type observation: string
    :param skip_id: skip any test with a given id (optional)
    :type skip_id: int
    :param last_column_is_id: skip the last column of the data frame
        (useful when id is last column - optional)
    :type last_column_is_id: bool
    :return features: the features
    :rtype features: np.array
    """
    try:
        features = np.array([])
        if data_frame is None:
            data_frame = self.data_frame
        for index, row in data_frame.iterrows():
            if not skip_id == row['id']:
                features_row = np.nan_to_num(row[row.keys().str.contains(observation)].values)
                features_row = np.append(features_row, row['id'])
                features = np.vstack([features, features_row]) if features.size else features_row
        # not the same when getting a single point
        if last_column_is_id:
            if np.ndim(features) > 1:
                to_return = features[:, :-1]
            else:
                to_return = features[:-1]
        else:
            to_return = features
        return to_return, data_frame['id'].values
    except Exception:
        logging.error("observation not found in data frame")
def predict(self, measurement, output_format='array'):
    """
    Method to predict the class labels for the provided data.

    :param measurement: the point to classify
    :type measurement: pandas.DataFrame
    :param output_format: the format to return the scores ('array' or 'str')
    :type output_format: string
    :return prediction: the prediction for a given test/point
    :rtype prediction: np.array
    """
    scores = np.array([])
    for obs in self.observations:
        knn = self.__get_knn_by_observation(obs)
        p, ids = self.__get_features_for_observation(
            data_frame=measurement, observation=obs,
            skip_id=3497, last_column_is_id=True)
        score = knn.predict(pd.DataFrame(p).T)
        scores = np.append(scores, score, axis=0)
    if output_format == 'array':
        return scores.astype(int)
    else:
        return np.array_str(scores.astype(int))
def _namify_arguments(mapping):
    """
    Ensure that a mapping of names to parameters has the parameters set
    to the correct name.
    """
    result = []
    for name, parameter in mapping.iteritems():
        parameter.name = name
        result.append(parameter)
    return result
def _merge_associative_list(alist, path, value):
    """
    Merge a value into an associative list at the given path, maintaining
    insertion order. Examples will explain it::

        >>> alist = []
        >>> _merge_associative_list(alist, ["foo", "bar"], "barvalue")
        >>> _merge_associative_list(alist, ["foo", "baz"], "bazvalue")
        >>> alist == [("foo", [("bar", "barvalue"), ("baz", "bazvalue")])]

    @param alist: An associative list of names to values.
    @param path: A path through sub-alists which we ultimately want to
        point to C{value}.
    @param value: The value to set.
    @return: None. This operation mutates the associative list in place.
    """
    for key in path[:-1]:
        for item in alist:
            if item[0] == key:
                alist = item[1]
                break
        else:
            subalist = []
            alist.append((key, subalist))
            alist = subalist
    alist.append((path[-1], value))
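A runnable Python 3 sketch of the same merge logic, using the `for`/`else` idiom as above (the standalone function name is illustrative):

```python
def merge_associative_list(alist, path, value):
    # Walk down the path, descending into an existing (key, sublist)
    # pair when present or appending a new one otherwise, then attach
    # the final (name, value) pair at the leaf level.
    for key in path[:-1]:
        for item in alist:
            if item[0] == key:
                alist = item[1]
                break
        else:  # no break: key not present at this level
            subalist = []
            alist.append((key, subalist))
            alist = subalist
    alist.append((path[-1], value))

alist = []
merge_associative_list(alist, ["foo", "bar"], "barvalue")
merge_associative_list(alist, ["foo", "baz"], "bazvalue")
print(alist)  # [('foo', [('bar', 'barvalue'), ('baz', 'bazvalue')])]
```

Because matching keys are found by linear scan and new keys are appended, insertion order is preserved at every level, which is what the doctest in the docstring asserts.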
def coerce(self, value):
    """Coerce a single value according to this parameter's settings.

    @param value: A L{str}, or L{None}. If L{None} is passed - meaning no
        value is available at all, not even the empty string - and this
        parameter is optional, L{self.default} will be returned.
    """
    if value is None:
        if self.optional:
            return self.default
        else:
            value = ""
    if value == "":
        if not self.allow_none:
            raise MissingParameterError(self.name, kind=self.kind)
        return self.default
    try:
        self._check_range(value)
        parsed = self.parse(value)
        if self.validator and not self.validator(parsed):
            raise ValueError(value)
        return parsed
    except ValueError:
        try:
            value = value.decode("utf-8")
            message = "Invalid %s value %s" % (self.kind, value)
        except UnicodeDecodeError:
            message = "Invalid %s value" % self.kind
        raise InvalidParameterValueError(message)
def _check_range(self, value):
    """Check that the given C{value} is in the expected range."""
    if self.min is None and self.max is None:
        return
    measure = self.measure(value)
    prefix = "Value (%s) for parameter %s is invalid. %s"
    if self.min is not None and measure < self.min:
        message = prefix % (value, self.name,
                            self.lower_than_min_template % self.min)
        raise InvalidParameterValueError(message)
    if self.max is not None and measure > self.max:
        message = prefix % (value, self.name,
                            self.greater_than_max_template % self.max)
        raise InvalidParameterValueError(message)
def parse(self, value):
    """
    Convert a dictionary of {relative index: value} to a list of parsed
    C{value}s.
    """
    indices = []
    if not isinstance(value, dict):
        # We interpret non-dict inputs as a list of one element, for
        # compatibility with certain EC2 APIs.
        return [self.item.coerce(value)]
    for index in value.keys():
        try:
            indices.append(int(index))
        except ValueError:
            raise UnknownParameterError(index)
    result = [None] * len(value)
    for index_index, index in enumerate(sorted(indices)):
        v = value[str(index)]
        if index < 0:
            raise UnknownParameterError(index)
        result[index_index] = self.item.coerce(v)
    return result
def format(self, value):
    """
    Convert a list like::

        ["a", "b", "c"]

    to::

        {"1": "a", "2": "b", "3": "c"}

    C{value} may also be an L{Arguments} instance, mapping indices to
    values. Who knows why.
    """
    if isinstance(value, Arguments):
        return dict((str(i), self.item.format(v)) for i, v in value)
    return dict((str(i + 1), self.item.format(v))
                for i, v in enumerate(value))
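Stripped of coercion and error handling, the index-dict/list round trip that `parse` and `format` implement can be sketched like this (simplified: real indices need not be contiguous, and `parse` also rejects negative or non-integer keys):

```python
def parse_indexed(value):
    # {"1": "a", "3": "c"} -> ["a", "c"], ordered by numeric index.
    # Non-dict input is treated as a single-element list, as above.
    if not isinstance(value, dict):
        return [value]
    return [value[str(i)] for i in sorted(int(k) for k in value)]

def format_indexed(values):
    # ["a", "b"] -> {"1": "a", "2": "b"}; note the 1-based indices.
    return {str(i + 1): v for i, v in enumerate(values)}

print(parse_indexed({"1": "a", "3": "c"}))  # ['a', 'c']
print(format_indexed(["a", "b"]))           # {'1': 'a', '2': 'b'}
```

Sorting by the *numeric* value of the key is the important detail: string sorting would put `"10"` before `"2"`.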
def parse(self, value):
    """
    Convert a dictionary of raw values to a dictionary of processed values.
    """
    result = {}
    rest = {}
    for k, v in value.iteritems():
        if k in self.fields:
            if (isinstance(v, dict)
                    and not self.fields[k].supports_multiple):
                if len(v) == 1:
                    # We support "foo.1" as "foo" as long as there is only
                    # one "foo.#" parameter provided.... -_-
                    v = v.values()[0]
                else:
                    raise InvalidParameterCombinationError(k)
            result[k] = self.fields[k].coerce(v)
        else:
            rest[k] = v
    for k, v in self.fields.iteritems():
        if k not in result:
            result[k] = v.coerce(None)
    if rest:
        raise UnknownParametersError(result, rest)
    return result
def format(self, value):
    """
    Convert a dictionary of processed values to a dictionary of raw values.
    """
    if not isinstance(value, Arguments):
        value = value.iteritems()
    return dict((k, self.fields[k].format(v)) for k, v in value)
def _wrap(self, value):
    """Wrap the given L{value} with L{Arguments} as necessary.

    @param value: A L{dict}, containing L{dict}s and/or leaf values,
        nested arbitrarily deep.
    """
    if isinstance(value, dict):
        if any(isinstance(name, int) for name in value.keys()):
            if not all(isinstance(name, int) for name in value.keys()):
                raise RuntimeError("Integer and non-integer keys: %r"
                                   % value.keys())
            items = sorted(value.iteritems(), key=itemgetter(0))
            return [self._wrap(val) for _, val in items]
        else:
            return Arguments(value)
    elif isinstance(value, list):
        return [self._wrap(x) for x in value]
    else:
        return value
def extract(self, params):
    """Extract parameters from a raw C{dict} according to this schema.

    @param params: The raw parameters to parse.
    @return: A tuple of an L{Arguments} object holding the extracted
        arguments and any unparsed arguments.
    """
    structure = Structure(fields=dict([(p.name, p) for p in self._parameters]))
    try:
        tree = structure.coerce(self._convert_flat_to_nest(params))
        rest = {}
    except UnknownParametersError as error:
        tree = error.result
        rest = self._convert_nest_to_flat(error.unknown)
    return Arguments(tree), rest
def bundle(self, *arguments, **extra):
    """Bundle the given arguments in a C{dict} with EC2-style format.

    @param arguments: L{Arguments} instances to bundle. Keys in later
        objects will override those in earlier objects.
    @param extra: Any number of additional parameters. These will
        override similarly named arguments in L{arguments}.
    """
    params = {}
    for argument in arguments:
        params.update(argument)
    params.update(extra)
    result = {}
    for name, value in params.iteritems():
        if value is None:
            continue
        segments = name.split('.')
        first = segments[0]
        parameter = self.get_parameter(first)
        if parameter is None:
            raise RuntimeError("Parameter '%s' not in schema" % name)
        result[name] = parameter.format(value)
    return self._convert_nest_to_flat(result)
def get_parameter(self, name):
    """
    Get the parameter on this schema with the given C{name}.
    """
    for parameter in self._parameters:
        if parameter.name == name:
            return parameter
def _convert_flat_to_nest(self, params):
    """
    Convert a structure in the form of::

        {'foo.1.bar': 'value',
         'foo.2.baz': 'value'}

    to::

        {'foo': {'1': {'bar': 'value'},
                 '2': {'baz': 'value'}}}

    This is intended for use both during parsing of HTTP arguments like
    'foo.1.bar=value' and when dealing with schema declarations that look
    like 'foo.n.bar'.

    This is the inverse of L{_convert_nest_to_flat}.
    """
    result = {}
    for k, v in params.iteritems():
        last = result
        segments = k.split('.')
        for index, item in enumerate(segments):
            if index == len(segments) - 1:
                newd = v
            else:
                newd = {}
            if not isinstance(last, dict):
                raise InconsistentParameterError(k)
            if type(last.get(item)) is dict and type(newd) is not dict:
                raise InconsistentParameterError(k)
            last = last.setdefault(item, newd)
    return result
def _convert_nest_to_flat(self, params, _result=None, _prefix=None):
    """
    Convert a data structure that looks like::

        {"foo": {"bar": "baz", "shimmy": "sham"}}

    to::

        {"foo.bar": "baz", "foo.shimmy": "sham"}

    This is the inverse of L{_convert_flat_to_nest}.
    """
    if _result is None:
        _result = {}
    for k, v in params.iteritems():
        if _prefix is None:
            path = k
        else:
            path = _prefix + '.' + k
        if isinstance(v, dict):
            self._convert_nest_to_flat(v, _result=_result, _prefix=path)
        else:
            _result[path] = v
    return _result
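A minimal Python 3 sketch of the flat/nested round trip performed by these two helpers, without the consistency checks that the real `_convert_flat_to_nest` enforces (the standalone function names are illustrative):

```python
def flat_to_nest(params):
    # {'foo.1.bar': 'v'} -> {'foo': {'1': {'bar': 'v'}}}
    result = {}
    for key, value in params.items():
        node = result
        segments = key.split('.')
        for segment in segments[:-1]:
            node = node.setdefault(segment, {})
        node[segments[-1]] = value
    return result

def nest_to_flat(params, prefix=''):
    # Inverse: collapse nested dicts back to dotted keys.
    result = {}
    for key, value in params.items():
        path = key if not prefix else prefix + '.' + key
        if isinstance(value, dict):
            result.update(nest_to_flat(value, path))
        else:
            result[path] = value
    return result

flat = {'foo.1.bar': 'value', 'foo.2.baz': 'value'}
print(flat_to_nest(flat))
# {'foo': {'1': {'bar': 'value'}, '2': {'baz': 'value'}}}
```

The `setdefault` walk builds each intermediate dict exactly once, so two dotted keys sharing a prefix land in the same subtree, which is what makes `nest_to_flat(flat_to_nest(x)) == x` hold for well-formed input.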
def extend(self, *schema_items, **kwargs):
    """
    Add any number of schema items to a new schema.

    Takes the same arguments as the constructor, and returns a new
    L{Schema} instance. If parameters, result, or errors is specified,
    they will be merged with the existing parameters, result, or errors.
    """
    new_kwargs = {
        'name': self.name,
        'doc': self.doc,
        'parameters': self._parameters[:],
        'result': self.result.copy() if self.result else {},
        'errors': self.errors.copy() if self.errors else set()}
    if 'parameters' in kwargs:
        new_params = kwargs.pop('parameters')
        new_kwargs['parameters'].extend(new_params)
    new_kwargs['result'].update(kwargs.pop('result', {}))
    new_kwargs['errors'].update(kwargs.pop('errors', set()))
    new_kwargs.update(kwargs)
    if schema_items:
        parameters = self._convert_old_schema(schema_items)
        new_kwargs['parameters'].extend(parameters)
    return Schema(**new_kwargs)
def _convert_old_schema(self, parameters):
    """
    Convert an ugly old schema, using dotted names, to the hot new schema,
    using List and Structure.

    The old schema assumes that every other dot implies an array. So a
    list of two parameters,

        [Integer("foo.bar.baz.quux"), Integer("foo.bar.shimmy")]

    becomes::

        [List(
            "foo",
            item=Structure(
                fields={"baz": List(item=Integer()),
                        "shimmy": Integer()}))]

    By design, the old schema syntax ignored the names "bar" and "quux".
    """
    # 'merged' here is an associative list that maps parameter names to
    # Parameter instances, OR sub-associative lists which represent nested
    # lists and structures.
    # e.g.,
    #   [Integer("foo")]
    # becomes
    #   [("foo", Integer("foo"))]
    # and
    #   [Integer("foo.bar")]
    # (which represents a list of integers called "foo" with a meaningless
    # index name of "bar") becomes
    #   [("foo", [("bar", Integer("foo.bar"))])].
    merged = []
    for parameter in parameters:
        segments = parameter.name.split('.')
        _merge_associative_list(merged, segments, parameter)
    result = [self._inner_convert_old_schema(node, 1) for node in merged]
    return result
def _inner_convert_old_schema(self, node, depth):
    """
    Internal recursion helper for L{_convert_old_schema}.

    @param node: A node in the associative list tree as described in
        _convert_old_schema. A two tuple of (name, parameter).
    @param depth: The depth that the node is at. This is important to know
        if we're currently processing a list or a structure. ("foo.N" is a
        list called "foo", "foo.N.fieldname" describes a field in a list
        of structs).
    """
    name, parameter_description = node
    if not isinstance(parameter_description, list):
        # This is a leaf, i.e., an actual L{Parameter} instance.
        return parameter_description
    if depth % 2 == 0:
        # We're processing a structure.
        fields = {}
        for node in parameter_description:
            fields[node[0]] = self._inner_convert_old_schema(node, depth + 1)
        return Structure(name, fields=fields)
    else:
        # We're processing a list.
        if not isinstance(parameter_description, list):
            raise TypeError("node %r must be an associative list"
                            % (parameter_description,))
        if not len(parameter_description) == 1:
            raise ValueError("Multiple different index names specified: %r"
                             % ([item[0] for item in parameter_description],))
        subnode = parameter_description[0]
        item = self._inner_convert_old_schema(subnode, depth + 1)
        return List(name=name, item=item, optional=item.optional)
def wait_for_processes(processes, size, progress_queue, watcher, item):
    """
    Watch progress queue for errors or progress; clean up processes on
    error or success.

    :param processes: [Process]: processes we are waiting on to finish
        downloading a file
    :param size: int: how many values we expect to be processed by processes
    :param progress_queue: ProgressQueue: queue which will receive tuples of
        progress or error
    :param watcher: ProgressPrinter: object we notify of our progress
    :param item: object: RemoteFile/LocalFile we are transferring
    """
    while size > 0:
        progress_type, value = progress_queue.get()
        if progress_type == ProgressQueue.PROCESSED:
            chunk_size = value
            watcher.transferring_item(item, increment_amt=chunk_size)
            size -= chunk_size
        elif progress_type == ProgressQueue.START_WAITING:
            watcher.start_waiting()
        elif progress_type == ProgressQueue.DONE_WAITING:
            watcher.done_waiting()
        else:
            error_message = value
            for process in processes:
                process.terminate()
            raise ValueError(error_message)
    for process in processes:
        process.join()
def verify_file_private(filename):
    """
    Raise ValueError if the file permissions allow group/other access.

    On Windows this never raises, due to the implementation of stat.
    """
    if platform.system().upper() != 'WINDOWS':
        filename = os.path.expanduser(filename)
        if os.path.exists(filename):
            file_stat = os.stat(filename)
            if mode_allows_group_or_other(file_stat.st_mode):
                raise ValueError(CONFIG_FILE_PERMISSIONS_ERROR)
def transferring_item(self, item, increment_amt=1):
    """
    Update progress that item is about to be transferred.

    :param item: LocalFile, LocalFolder, or LocalContent(project) that is
        about to be sent
    :param increment_amt: int amount to increase our count (how much
        progress we have made)
    """
    self.cnt += increment_amt
    percent_done = int(float(self.cnt) / float(self.total) * 100.0)
    if KindType.is_project(item):
        details = 'project'
    else:
        details = os.path.basename(item.path)
    self.progress_bar.update(percent_done, '{} {}'.format(self.msg_verb, details))
    self.progress_bar.show()
def finished(self): """ Must be called to print final progress label. """ self.progress_bar.set_state(ProgressBar.STATE_DONE) self.progress_bar.show()
Must be called to print final progress label.
entailment
def start_waiting(self): """ Show waiting progress bar until done_waiting is called. Only has an effect if we are in waiting state. """ if not self.waiting: self.waiting = True wait_msg = "Waiting for project to become ready for {}".format(self.msg_verb) self.progress_bar.show_waiting(wait_msg)
Show waiting progress bar until done_waiting is called. Only has an effect if we are in waiting state.
entailment
def done_waiting(self): """ Show running progress bar (only has an effect if we are in waiting state). """ if self.waiting: self.waiting = False self.progress_bar.show_running()
Show running progress bar (only has an effect if we are in waiting state).
entailment
def show_waiting(self, wait_msg): """ Show waiting progress bar until done_waiting is called. Only has an effect if we are in waiting state. :param wait_msg: str: message describing what we are waiting for """ self.wait_msg = wait_msg self.set_state(ProgressBar.STATE_WAITING) self.show()
Show waiting progress bar until done_waiting is called. Only has an effect if we are in waiting state. :param wait_msg: str: message describing what we are waiting for
entailment
def _visit_content(item, parent, visitor): """ Recursively visit nodes in the project tree. :param item: LocalContent/LocalFolder/LocalFile we are traversing down from :param parent: LocalContent/LocalFolder parent or None :param visitor: object visiting the tree """ if KindType.is_project(item): visitor.visit_project(item) elif KindType.is_folder(item): visitor.visit_folder(item, parent) else: visitor.visit_file(item, parent) if not KindType.is_file(item): for child in item.children: ProjectWalker._visit_content(child, item, visitor)
Recursively visit nodes in the project tree. :param item: LocalContent/LocalFolder/LocalFile we are traversing down from :param parent: LocalContent/LocalFolder parent or None :param visitor: object visiting the tree
entailment
def s3_url_context(service_endpoint, bucket=None, object_name=None): """ Create a URL based on the given service endpoint and suitable for the given bucket or object. @param service_endpoint: The service endpoint on which to base the resulting URL. @type service_endpoint: L{AWSServiceEndpoint} @param bucket: If given, the name of a bucket to reference. @type bucket: L{unicode} @param object_name: If given, the name of an object or object subresource to reference. @type object_name: L{unicode} """ # Define our own query parser which can handle the consequences of # `?acl` and such (subresources). At its best, parse_qsl doesn't # let us differentiate between these and empty values (such as # `?acl=`). def p(s): results = [] args = s.split(u"&") for a in args: pieces = a.split(u"=") if len(pieces) == 1: results.append((unquote(pieces[0]),)) elif len(pieces) == 2: results.append(tuple(map(unquote, pieces))) else: raise Exception("oh no") return results query = [] path = [] if bucket is None: path.append(u"") else: if isinstance(bucket, bytes): bucket = bucket.decode("utf-8") path.append(bucket) if object_name is None: path.append(u"") else: if isinstance(object_name, bytes): object_name = object_name.decode("utf-8") if u"?" in object_name: object_name, query = object_name.split(u"?", 1) query = p(query) object_name_components = object_name.split(u"/") if object_name_components[0] == u"": object_name_components.pop(0) if object_name_components: path.extend(object_name_components) else: path.append(u"") return _S3URLContext( scheme=service_endpoint.scheme.decode("utf-8"), host=service_endpoint.get_host().decode("utf-8"), port=service_endpoint.port, path=path, query=query, )
Create a URL based on the given service endpoint and suitable for the given bucket or object. @param service_endpoint: The service endpoint on which to base the resulting URL. @type service_endpoint: L{AWSServiceEndpoint} @param bucket: If given, the name of a bucket to reference. @type bucket: L{unicode} @param object_name: If given, the name of an object or object subresource to reference. @type object_name: L{unicode}
entailment
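The point of the hand-rolled parser `p` is that `parse_qsl` cannot distinguish a bare subresource (`?acl`) from an empty value (`?acl=`): the bare form must become a 1-tuple. A Python 3 rendering of the same idea (the function name is illustrative):

```python
from urllib.parse import unquote

def parse_s3_query(s):
    """Split a query string, keeping value-less subresources like `acl`
    distinct from empty values like `acl=` (which parse_qsl conflates)."""
    results = []
    for arg in s.split("&"):
        pieces = arg.split("=")
        if len(pieces) == 1:
            results.append((unquote(pieces[0]),))        # bare subresource
        elif len(pieces) == 2:
            results.append(tuple(map(unquote, pieces)))  # key/value pair
        else:
            raise ValueError("malformed query component: %r" % arg)
    return results

print(parse_s3_query("acl"))   # → [('acl',)]
print(parse_s3_query("acl="))  # → [('acl', '')]
```

Downstream code can then re-emit `('acl',)` without a trailing `=`, which matters because SigV4 signs the exact canonical query string.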
def list_buckets(self): """ List all buckets. Returns a list of all the buckets owned by the authenticated sender of the request. """ details = self._details( method=b"GET", url_context=self._url_context(), ) query = self._query_factory(details) d = self._submit(query) d.addCallback(self._parse_list_buckets) return d
List all buckets. Returns a list of all the buckets owned by the authenticated sender of the request.
entailment
def _parse_list_buckets(self, (response, xml_bytes)): """ Parse XML bucket list response. """ root = XML(xml_bytes) buckets = [] for bucket_data in root.find("Buckets"): name = bucket_data.findtext("Name") date_text = bucket_data.findtext("CreationDate") date_time = parseTime(date_text) bucket = Bucket(name, date_time) buckets.append(bucket) return buckets
Parse XML bucket list response.
entailment
def create_bucket(self, bucket): """ Create a new bucket. """ details = self._details( method=b"PUT", url_context=self._url_context(bucket=bucket), ) query = self._query_factory(details) return self._submit(query)
Create a new bucket.
entailment
def delete_bucket(self, bucket): """ Delete a bucket. The bucket must be empty before it can be deleted. """ details = self._details( method=b"DELETE", url_context=self._url_context(bucket=bucket), ) query = self._query_factory(details) return self._submit(query)
Delete a bucket. The bucket must be empty before it can be deleted.
entailment
def get_bucket(self, bucket, marker=None, max_keys=None, prefix=None): """ Get a list of all the objects in a bucket. @param bucket: The name of the bucket from which to retrieve objects. @type bucket: L{unicode} @param marker: If given, indicate a position in the overall results where the results of this call should begin. The first result is the first object that sorts greater than this marker. @type marker: L{bytes} or L{NoneType} @param max_keys: If given, the maximum number of objects to return. @type max_keys: L{int} or L{NoneType} @param prefix: If given, indicate that only objects with keys beginning with this value should be returned. @type prefix: L{bytes} or L{NoneType} @return: A L{Deferred} that fires with a L{BucketListing} describing the result. @see: U{http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGET.html} """ args = [] if marker is not None: args.append(("marker", marker)) if max_keys is not None: args.append(("max-keys", "%d" % (max_keys,))) if prefix is not None: args.append(("prefix", prefix)) if args: object_name = "?" + urlencode(args) else: object_name = None details = self._details( method=b"GET", url_context=self._url_context(bucket=bucket, object_name=object_name), ) d = self._submit(self._query_factory(details)) d.addCallback(self._parse_get_bucket) return d
Get a list of all the objects in a bucket. @param bucket: The name of the bucket from which to retrieve objects. @type bucket: L{unicode} @param marker: If given, indicate a position in the overall results where the results of this call should begin. The first result is the first object that sorts greater than this marker. @type marker: L{bytes} or L{NoneType} @param max_keys: If given, the maximum number of objects to return. @type max_keys: L{int} or L{NoneType} @param prefix: If given, indicate that only objects with keys beginning with this value should be returned. @type prefix: L{bytes} or L{NoneType} @return: A L{Deferred} that fires with a L{BucketListing} describing the result. @see: U{http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGET.html}
entailment
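The query-string assembly in `get_bucket` can be exercised on its own: only supplied parameters appear, in a fixed order, and no arguments means no query at all. A sketch using Python 3's `urllib.parse` (the helper name is hypothetical):

```python
from urllib.parse import urlencode

def bucket_listing_query(marker=None, max_keys=None, prefix=None):
    """Assemble the GET-bucket query string the way get_bucket does."""
    args = []
    if marker is not None:
        args.append(("marker", marker))
    if max_keys is not None:
        args.append(("max-keys", "%d" % (max_keys,)))
    if prefix is not None:
        args.append(("prefix", prefix))
    # urlencode percent-escapes values, e.g. "/" becomes %2F
    return "?" + urlencode(args) if args else None

print(bucket_listing_query(max_keys=2, prefix="logs/"))  # → ?max-keys=2&prefix=logs%2F
```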
def get_bucket_location(self, bucket): """ Get the location (region) of a bucket. @param bucket: The name of the bucket. @return: A C{Deferred} that will fire with the bucket's region. """ details = self._details( method=b"GET", url_context=self._url_context(bucket=bucket, object_name="?location"), ) d = self._submit(self._query_factory(details)) d.addCallback(self._parse_bucket_location) return d
Get the location (region) of a bucket. @param bucket: The name of the bucket. @return: A C{Deferred} that will fire with the bucket's region.
entailment
def get_bucket_lifecycle(self, bucket): """ Get the lifecycle configuration of a bucket. @param bucket: The name of the bucket. @return: A C{Deferred} that will fire with the bucket's lifecycle configuration. """ details = self._details( method=b"GET", url_context=self._url_context(bucket=bucket, object_name="?lifecycle"), ) d = self._submit(self._query_factory(details)) d.addCallback(self._parse_lifecycle_config) return d
Get the lifecycle configuration of a bucket. @param bucket: The name of the bucket. @return: A C{Deferred} that will fire with the bucket's lifecycle configuration.
entailment
def _parse_lifecycle_config(self, (response, xml_bytes)): """Parse a C{LifecycleConfiguration} XML document.""" root = XML(xml_bytes) rules = [] for content_data in root.findall("Rule"): id = content_data.findtext("ID") prefix = content_data.findtext("Prefix") status = content_data.findtext("Status") expiration = int(content_data.findtext("Expiration/Days")) rules.append( LifecycleConfigurationRule(id, prefix, status, expiration)) return LifecycleConfiguration(rules)
Parse a C{LifecycleConfiguration} XML document.
entailment
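The same parse can be tried against a hand-written document. Note that real S3 responses carry an XML namespace, which this sample, like the parser above, omits; plain tuples stand in for `LifecycleConfigurationRule`:

```python
from xml.etree.ElementTree import XML

LIFECYCLE_XML = b"""<LifecycleConfiguration>
  <Rule>
    <ID>expire-logs</ID>
    <Prefix>logs/</Prefix>
    <Status>Enabled</Status>
    <Expiration><Days>30</Days></Expiration>
  </Rule>
</LifecycleConfiguration>"""

def parse_lifecycle(xml_bytes):
    """Extract (id, prefix, status, days) per rule; findtext with a
    path like 'Expiration/Days' descends one level, as used above."""
    root = XML(xml_bytes)
    return [
        (rule.findtext("ID"),
         rule.findtext("Prefix"),
         rule.findtext("Status"),
         int(rule.findtext("Expiration/Days")))
        for rule in root.findall("Rule")
    ]

print(parse_lifecycle(LIFECYCLE_XML))
```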
def get_bucket_website_config(self, bucket): """ Get the website configuration of a bucket. @param bucket: The name of the bucket. @return: A C{Deferred} that will fire with the bucket's website configuration. """ details = self._details( method=b"GET", url_context=self._url_context(bucket=bucket, object_name='?website'), ) d = self._submit(self._query_factory(details)) d.addCallback(self._parse_website_config) return d
Get the website configuration of a bucket. @param bucket: The name of the bucket. @return: A C{Deferred} that will fire with the bucket's website configuration.
entailment
def _parse_website_config(self, (response, xml_bytes)): """Parse a C{WebsiteConfiguration} XML document.""" root = XML(xml_bytes) index_suffix = root.findtext("IndexDocument/Suffix") error_key = root.findtext("ErrorDocument/Key") return WebsiteConfiguration(index_suffix, error_key)
Parse a C{WebsiteConfiguration} XML document.
entailment
def get_bucket_notification_config(self, bucket): """ Get the notification configuration of a bucket. @param bucket: The name of the bucket. @return: A C{Deferred} that will request the bucket's notification configuration. """ details = self._details( method=b"GET", url_context=self._url_context(bucket=bucket, object_name="?notification"), ) d = self._submit(self._query_factory(details)) d.addCallback(self._parse_notification_config) return d
Get the notification configuration of a bucket. @param bucket: The name of the bucket. @return: A C{Deferred} that will request the bucket's notification configuration.
entailment
def _parse_notification_config(self, (response, xml_bytes)): """Parse a C{NotificationConfiguration} XML document.""" root = XML(xml_bytes) topic = root.findtext("TopicConfiguration/Topic") event = root.findtext("TopicConfiguration/Event") return NotificationConfiguration(topic, event)
Parse a C{NotificationConfiguration} XML document.
entailment
def get_bucket_versioning_config(self, bucket): """ Get the versioning configuration of a bucket. @param bucket: The name of the bucket. @return: A C{Deferred} that will request the bucket's versioning configuration. """ details = self._details( method=b"GET", url_context=self._url_context(bucket=bucket, object_name="?versioning"), ) d = self._submit(self._query_factory(details)) d.addCallback(self._parse_versioning_config) return d
Get the versioning configuration of a bucket. @param bucket: The name of the bucket. @return: A C{Deferred} that will request the bucket's versioning configuration.
entailment
def _parse_versioning_config(self, (response, xml_bytes)): """Parse a C{VersioningConfiguration} XML document.""" root = XML(xml_bytes) mfa_delete = root.findtext("MfaDelete") status = root.findtext("Status") return VersioningConfiguration(mfa_delete=mfa_delete, status=status)
Parse a C{VersioningConfiguration} XML document.
entailment
def get_bucket_acl(self, bucket): """ Get the access control policy for a bucket. """ details = self._details( method=b"GET", url_context=self._url_context(bucket=bucket, object_name="?acl"), ) d = self._submit(self._query_factory(details)) d.addCallback(self._parse_acl) return d
Get the access control policy for a bucket.
entailment
def put_object(self, bucket, object_name, data=None, content_type=None, metadata={}, amz_headers={}, body_producer=None): """ Put an object in a bucket. An existing object with the same name will be replaced. @param bucket: The name of the bucket. @param object_name: The name of the object. @type object_name: L{unicode} @param data: The data to write. @param content_type: The type of data being written. @param metadata: A C{dict} used to build C{x-amz-meta-*} headers. @param amz_headers: A C{dict} used to build C{x-amz-*} headers. @return: A C{Deferred} that will fire with the result of request. """ details = self._details( method=b"PUT", url_context=self._url_context(bucket=bucket, object_name=object_name), headers=self._headers(content_type), metadata=metadata, amz_headers=amz_headers, body=data, body_producer=body_producer, ) d = self._submit(self._query_factory(details)) d.addCallback(itemgetter(1)) return d
Put an object in a bucket. An existing object with the same name will be replaced. @param bucket: The name of the bucket. @param object_name: The name of the object. @type object_name: L{unicode} @param data: The data to write. @param content_type: The type of data being written. @param metadata: A C{dict} used to build C{x-amz-meta-*} headers. @param amz_headers: A C{dict} used to build C{x-amz-*} headers. @return: A C{Deferred} that will fire with the result of request.
entailment
def copy_object(self, source_bucket, source_object_name, dest_bucket=None, dest_object_name=None, metadata={}, amz_headers={}): """ Copy an object stored in S3 from a source bucket to a destination bucket. @param source_bucket: The S3 bucket to copy the object from. @param source_object_name: The name of the object to copy. @param dest_bucket: Optionally, the S3 bucket to copy the object to. Defaults to C{source_bucket}. @param dest_object_name: Optionally, the name of the new object. Defaults to C{source_object_name}. @param metadata: A C{dict} used to build C{x-amz-meta-*} headers. @param amz_headers: A C{dict} used to build C{x-amz-*} headers. @return: A C{Deferred} that will fire with the result of request. """ dest_bucket = dest_bucket or source_bucket dest_object_name = dest_object_name or source_object_name amz_headers["copy-source"] = "/%s/%s" % (source_bucket, source_object_name) details = self._details( method=b"PUT", url_context=self._url_context( bucket=dest_bucket, object_name=dest_object_name, ), metadata=metadata, amz_headers=amz_headers, ) d = self._submit(self._query_factory(details)) return d
Copy an object stored in S3 from a source bucket to a destination bucket. @param source_bucket: The S3 bucket to copy the object from. @param source_object_name: The name of the object to copy. @param dest_bucket: Optionally, the S3 bucket to copy the object to. Defaults to C{source_bucket}. @param dest_object_name: Optionally, the name of the new object. Defaults to C{source_object_name}. @param metadata: A C{dict} used to build C{x-amz-meta-*} headers. @param amz_headers: A C{dict} used to build C{x-amz-*} headers. @return: A C{Deferred} that will fire with the result of request.
entailment
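The header construction behind `copy_object` is worth seeing end to end: `copy-source` is stored as a plain amz header and only later gains the `x-amz-` prefix, alongside the `x-amz-meta-*` metadata headers. A self-contained sketch (the helper name is hypothetical):

```python
def copy_headers(source_bucket, source_object_name, metadata=None, amz_headers=None):
    """Build the x-amz-* headers for a server-side copy, the way
    copy_object does before handing them to the query machinery."""
    amz = dict(amz_headers or {})
    amz["copy-source"] = "/%s/%s" % (source_bucket, source_object_name)
    headers = {}
    for key, value in (metadata or {}).items():
        headers["x-amz-meta-" + key] = value   # user metadata
    for key, value in amz.items():
        headers["x-amz-" + key] = value        # amz control headers
    return headers

print(copy_headers("mybucket", "a.txt", metadata={"owner": "me"}))
```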
def get_object(self, bucket, object_name): """ Get an object from a bucket. """ details = self._details( method=b"GET", url_context=self._url_context(bucket=bucket, object_name=object_name), ) d = self._submit(self._query_factory(details)) d.addCallback(itemgetter(1)) return d
Get an object from a bucket.
entailment
def head_object(self, bucket, object_name): """ Retrieve object metadata only. """ details = self._details( method=b"HEAD", url_context=self._url_context(bucket=bucket, object_name=object_name), ) d = self._submit(self._query_factory(details)) d.addCallback(lambda (response, body): _to_dict(response.responseHeaders)) return d
Retrieve object metadata only.
entailment
def delete_object(self, bucket, object_name): """ Delete an object from a bucket. Once deleted, there is no method to restore or undelete an object. """ details = self._details( method=b"DELETE", url_context=self._url_context(bucket=bucket, object_name=object_name), ) d = self._submit(self._query_factory(details)) return d
Delete an object from a bucket. Once deleted, there is no method to restore or undelete an object.
entailment
def put_object_acl(self, bucket, object_name, access_control_policy): """ Set access control policy on an object. """ data = access_control_policy.to_xml() details = self._details( method=b"PUT", url_context=self._url_context( bucket=bucket, object_name='%s?acl' % (object_name,), ), body=data, ) query = self._query_factory(details) d = self._submit(query) d.addCallback(self._parse_acl) return d
Set access control policy on an object.
entailment
def put_request_payment(self, bucket, payer): """ Set request payment configuration on bucket to payer. @param bucket: The name of the bucket. @param payer: The name of the payer. @return: A C{Deferred} that will fire with the result of the request. """ data = RequestPayment(payer).to_xml() details = self._details( method=b"PUT", url_context=self._url_context(bucket=bucket, object_name="?requestPayment"), body=data, ) d = self._submit(self._query_factory(details)) return d
Set request payment configuration on bucket to payer. @param bucket: The name of the bucket. @param payer: The name of the payer. @return: A C{Deferred} that will fire with the result of the request.
entailment
def get_request_payment(self, bucket): """ Get the request payment configuration on a bucket. @param bucket: The name of the bucket. @return: A C{Deferred} that will fire with the name of the payer. """ details = self._details( method=b"GET", url_context=self._url_context(bucket=bucket, object_name="?requestPayment"), ) d = self._submit(self._query_factory(details)) d.addCallback(self._parse_get_request_payment) return d
Get the request payment configuration on a bucket. @param bucket: The name of the bucket. @return: A C{Deferred} that will fire with the name of the payer.
entailment
def init_multipart_upload(self, bucket, object_name, content_type=None, amz_headers={}, metadata={}): """ Initiate a multipart upload to a bucket. @param bucket: The name of the bucket @param object_name: The object name @param content_type: The Content-Type for the object @param metadata: C{dict} containing additional metadata @param amz_headers: A C{dict} used to build C{x-amz-*} headers. @return: C{str} upload_id """ objectname_plus = '%s?uploads' % object_name details = self._details( method=b"POST", url_context=self._url_context(bucket=bucket, object_name=objectname_plus), headers=self._headers(content_type), metadata=metadata, amz_headers=amz_headers, ) d = self._submit(self._query_factory(details)) d.addCallback( lambda (response, body): MultipartInitiationResponse.from_xml(body) ) return d
Initiate a multipart upload to a bucket. @param bucket: The name of the bucket @param object_name: The object name @param content_type: The Content-Type for the object @param metadata: C{dict} containing additional metadata @param amz_headers: A C{dict} used to build C{x-amz-*} headers. @return: C{str} upload_id
entailment
def upload_part(self, bucket, object_name, upload_id, part_number, data=None, content_type=None, metadata={}, body_producer=None): """ Upload a part of data corresponding to a multipart upload. @param bucket: The bucket name @param object_name: The object name @param upload_id: The multipart upload id @param part_number: The part number @param data: Data (optional, requires body_producer if not specified) @param content_type: The Content-Type @param metadata: Additional metadata @param body_producer: an C{IBodyProducer} (optional, requires data if not specified) @return: the C{Deferred} from underlying query.submit() call """ parms = 'partNumber=%s&uploadId=%s' % (str(part_number), upload_id) objectname_plus = '%s?%s' % (object_name, parms) details = self._details( method=b"PUT", url_context=self._url_context(bucket=bucket, object_name=objectname_plus), headers=self._headers(content_type), metadata=metadata, body=data, ) d = self._submit(self._query_factory(details)) d.addCallback(lambda (response, data): _to_dict(response.responseHeaders)) return d
Upload a part of data corresponding to a multipart upload. @param bucket: The bucket name @param object_name: The object name @param upload_id: The multipart upload id @param part_number: The part number @param data: Data (optional, requires body_producer if not specified) @param content_type: The Content-Type @param metadata: Additional metadata @param body_producer: an C{IBodyProducer} (optional, requires data if not specified) @return: the C{Deferred} from underlying query.submit() call
entailment
def complete_multipart_upload(self, bucket, object_name, upload_id, parts_list, content_type=None, metadata={}): """ Complete a multipart upload. N.B. This can possibly be a slow operation. @param bucket: The bucket name @param object_name: The object name @param upload_id: The multipart upload id @param parts_list: A list of all the parts (2-tuples of part sequence number and etag) @param content_type: The Content-Type of the object @param metadata: C{dict} containing additional metadata @return: a C{Deferred} that fires after request is complete """ data = self._build_complete_multipart_upload_xml(parts_list) objectname_plus = '%s?uploadId=%s' % (object_name, upload_id) details = self._details( method=b"POST", url_context=self._url_context(bucket=bucket, object_name=objectname_plus), headers=self._headers(content_type), metadata=metadata, body=data, ) d = self._submit(self._query_factory(details)) # TODO - handle error responses d.addCallback( lambda (response, body): MultipartCompletionResponse.from_xml(body) ) return d
Complete a multipart upload. N.B. This can possibly be a slow operation. @param bucket: The bucket name @param object_name: The object name @param upload_id: The multipart upload id @param parts_list: A list of all the parts (2-tuples of part sequence number and etag) @param content_type: The Content-Type of the object @param metadata: C{dict} containing additional metadata @return: a C{Deferred} that fires after request is complete
entailment
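`_build_complete_multipart_upload_xml` is not shown here; a plausible sketch of the document it must produce (S3 expects the parts listed in ascending part-number order, so the 2-tuples are sorted first):

```python
def build_complete_multipart_xml(parts_list):
    """Render [(part_number, etag), ...] as a CompleteMultipartUpload
    body -- a sketch of the private helper, not its actual code."""
    lines = ["<CompleteMultipartUpload>"]
    for part_number, etag in sorted(parts_list):
        lines.append("  <Part>")
        lines.append("    <PartNumber>%d</PartNumber>" % part_number)
        lines.append("    <ETag>%s</ETag>" % etag)
        lines.append("  </Part>")
    lines.append("</CompleteMultipartUpload>")
    return "\n".join(lines)

# Parts may arrive out of order; the sort puts part 1 before part 2.
print(build_complete_multipart_xml([(2, '"b"'), (1, '"a"')]))
```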
def set_content_type(self): """ Set the content type based on the file extension used in the object name. """ if self.object_name and not self.content_type: # XXX nothing is currently done with the encoding... we may # need to in the future self.content_type, encoding = mimetypes.guess_type( self.object_name, strict=False)
Set the content type based on the file extension used in the object name.
entailment
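The guess in `set_content_type` is a thin wrapper over `mimetypes.guess_type` with `strict=False`; the encoding half of the returned tuple is discarded, as the comment notes. Extracted as a standalone function (hypothetical name):

```python
import mimetypes

def guess_content_type(object_name):
    """Guess a Content-Type from the object name's extension; the
    encoding component of guess_type's result is ignored here."""
    content_type, encoding = mimetypes.guess_type(object_name, strict=False)
    return content_type

print(guess_content_type("report.json"))  # → application/json
print(guess_content_type("photo.jpg"))    # → image/jpeg
```

When no extension matches, the function returns `None` and the query simply sends no `Content-Type` header.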
def get_headers(self, instant): """ Build the list of headers needed in order to perform S3 operations. """ headers = {'x-amz-date': _auth_v4.makeAMZDate(instant)} if self.body_producer is None: data = self.data if data is None: data = b"" headers["x-amz-content-sha256"] = hashlib.sha256(data).hexdigest() else: data = None headers["x-amz-content-sha256"] = b"UNSIGNED-PAYLOAD" for key, value in self.metadata.iteritems(): headers["x-amz-meta-" + key] = value for key, value in self.amz_headers.iteritems(): headers["x-amz-" + key] = value # Before we check if the content type is set, let's see if we can set # it by guessing the mimetype. self.set_content_type() if self.content_type is not None: headers["Content-Type"] = self.content_type if self.creds is not None: headers["Authorization"] = self.sign( headers, data, s3_url_context(self.endpoint, self.bucket, self.object_name), instant, method=self.action) return headers
Build the list of headers needed in order to perform S3 operations.
entailment
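The payload-hash branch is the observable heart of `get_headers`: an in-memory body is hashed into `x-amz-content-sha256`, and a missing body hashes as the empty string, yielding SigV4's well-known empty-payload digest. A small sketch of that branch:

```python
import hashlib

def payload_hash(data):
    """The x-amz-content-sha256 value for an in-memory body; an absent
    body (data is None) is treated as the empty byte string, exactly
    as get_headers does above."""
    return hashlib.sha256(data or b"").hexdigest()

# The canonical SigV4 empty-payload digest:
print(payload_hash(None))
# → e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
```

Streaming bodies (a `body_producer`) cannot be hashed up front, which is why that branch sends `UNSIGNED-PAYLOAD` instead.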
def sign(self, headers, data, url_context, instant, method, region=REGION_US_EAST_1): """Sign this query using its built in credentials.""" headers["host"] = url_context.get_encoded_host() if data is None: request = _auth_v4._CanonicalRequest.from_request_components( method=method, url=url_context.get_encoded_path(), headers=headers, headers_to_sign=('host', 'x-amz-date'), payload_hash=None, ) else: request = _auth_v4._CanonicalRequest.from_request_components_and_payload( method=method, url=url_context.get_encoded_path(), headers=headers, headers_to_sign=('host', 'x-amz-date'), payload=data, ) return _auth_v4._make_authorization_header( region=region, service="s3", canonical_request=request, credentials=self.creds, instant=instant)
Sign this query using its built in credentials.
entailment
def submit(self, url_context=None, utcnow=datetime.datetime.utcnow): """Submit this query. @return: A deferred from get_page """ if not url_context: url_context = s3_url_context( self.endpoint, self.bucket, self.object_name) d = self.get_page( url_context.get_encoded_url(), method=self.action, postdata=self.data or b"", headers=self.get_headers(utcnow()), ) return d.addErrback(s3_error_wrapper)
Submit this query. @return: A deferred from get_page
entailment
def attributes(self): """Contain attributes applicable to this element""" if 'id' in self.node.attrib: yield PlaceholderAttribute('id', self.node.attrib['id']) if 'tei-tag' in self.node.attrib: yield PlaceholderAttribute('tei-tag', self.node.attrib['tei-tag']) for attributes in self.node.iterchildren('attributes'): for attribute in self.__iter_attributes__(attributes): yield attribute
Contain attributes applicable to this element
entailment
def divisions(self): """ Recursively get all the text divisions directly part of this element. If an element contains parts or text without a tag, those will be returned in order and wrapped in a TextDivision. """ from .placeholder_division import PlaceholderDivision placeholder = None for item in self.__parts_and_divisions: if item.tag == 'part': if not placeholder: placeholder = PlaceholderDivision() placeholder.parts.append(item) else: if placeholder: yield placeholder placeholder = None yield item if placeholder: yield placeholder
Recursively get all the text divisions directly part of this element. If an element contains parts or text without a tag, those will be returned in order and wrapped in a TextDivision.
entailment
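The placeholder logic in `divisions` is a generic run-grouping pattern: consecutive loose parts are collected into one wrapper, divisions pass through unchanged, and a trailing run is flushed at the end. Stripped of the lxml types (plain lists stand in for `PlaceholderDivision`):

```python
def group_runs(items, is_part):
    """Group each maximal run of 'part' items into one list, yielding
    other items untouched -- the shape of divisions() above."""
    placeholder = None
    for item in items:
        if is_part(item):
            if placeholder is None:
                placeholder = []
            placeholder.append(item)
        else:
            if placeholder is not None:
                yield placeholder      # flush the run before the division
                placeholder = None
            yield item
    if placeholder is not None:
        yield placeholder              # flush a trailing run

items = ["p1", "p2", ("div", 1), "p3"]
print(list(group_runs(items, lambda i: isinstance(i, str))))
# → [['p1', 'p2'], ('div', 1), ['p3']]
```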
def all_parts(self): """ Recursively get the parts flattened and in document order constituting the entire text e.g. if something has emphasis, a footnote or is marked as foreign. Text without a container element will be returned in order and wrapped with a TextPart. """ for item in self.__parts_and_divisions: if item.tag == 'part' and item.is_placeholder: # A real part will always return a placeholder containing # its content. Placeholder parts cannot have children. yield item else: for part in item.all_parts: yield part
Recursively get the parts flattened and in document order constituting the entire text e.g. if something has emphasis, a footnote or is marked as foreign. Text without a container element will be returned in order and wrapped with a TextPart.
entailment
def parts(self): """ Get the parts directly below this element. """ for item in self.__parts_and_divisions: if item.tag == 'part': yield item else: # Divisions shouldn't be beneath a part, but here's a fallback # in case this does happen for part in item.parts: yield part
Get the parts directly below this element.
entailment
def __parts_and_divisions(self): """ The parts and divisions directly part of this element. """ from .division import Division from .part import Part from .placeholder_part import PlaceholderPart text = self.node.text if text: stripped_text = text.replace('\n', '') if stripped_text.strip(): yield PlaceholderPart(stripped_text) for item in self.node: if item.tag == 'part': yield Part(item) elif item.tag == 'div': yield Division(item) if item.tail: stripped_tail = item.tail.replace('\n', '') if stripped_tail.strip(): yield PlaceholderPart(stripped_tail)
The parts and divisions directly part of this element.
entailment
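`__parts_and_divisions` relies on the text/tail model shared by lxml and the stdlib: loose text before the first child lives in the parent's `.text`, and loose text after each child lives in that child's `.tail`. Those are exactly the two spots the method wraps in `PlaceholderPart` objects. The stdlib `ElementTree` demonstrates the same layout:

```python
from xml.etree.ElementTree import fromstring

# "intro" precedes the first child; "outro" follows </part>.
root = fromstring("<div>intro<part>emphasised</part>outro</div>")
part = root[0]

print(root.text)   # loose text before the first child → intro
print(part.text)   # content of the <part> element → emphasised
print(part.tail)   # loose text after </part> → outro
```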
def tostring(self, inject): """ Convert an element to a single string and allow the passed inject method to place content before any element. """ return inject(self, '\n'.join(f'{division.tostring(inject)}' for division in self.divisions))
Convert an element to a single string and allow the passed inject method to place content before any element.
entailment
def RCL(input_shape, rec_conv_layers, dense_layers, output_layer=[1, 'sigmoid'], padding='same', optimizer='adam', loss='binary_crossentropy'): """Summary Args: input_shape (tuple): The shape of the input layer. output_nodes (int): Number of nodes in the output layer. It depends on the loss function used. rec_conv_layers (list): RCL descriptor [ [ [(filter, kernel), (pool_size, stride), leak, drop], [(filter, kernel), (pool_size, stride), leak, drop], [(filter, kernel), (pool_size, stride), leak, drop, timesteps], ], ... [ [],[],[] ] ] dense_layers (TYPE): Dense layer descriptor [[fully_connected, leak, drop], ... []] padding (str, optional): Type of padding for conv and pooling layers optimizer (str or object optional): Keras optimizer as string or keras optimizer Returns: model: The compiled Keras model, ready for training. """ inputs = Input(shape=input_shape) for i, c in enumerate(rec_conv_layers): conv = Conv1D(c[0][0][0], c[0][0][1], padding=padding)(inputs) batch = BatchNormalization()(conv) act = LeakyReLU(alpha=c[0][2])(batch) pool = MaxPooling1D(pool_size=c[0][1][0], strides=c[0][1][1], padding=padding)(act) d1 = Dropout(c[0][3])(pool) inner = time_steps( input=d1, conv_layer=c[1], time_conv_layer=c[2], padding=padding) drop = Flatten()(inner) for i, d in enumerate(dense_layers): dense = Dense(d[0], activation='relu')(drop) bn = BatchNormalization()(dense) act = LeakyReLU(alpha=d[1])(bn) drop = Dropout(d[2])(act) output = Dense(output_layer[0], activation=output_layer[1])(drop) model = Model(inputs=inputs, outputs=output) model.compile(loss=loss, optimizer=optimizer, metrics=['accuracy']) return model
Summary Args: input_shape (tuple): The shape of the input layer. output_nodes (int): Number of nodes in the output layer. It depends on the loss function used. rec_conv_layers (list): RCL descriptor [ [ [(filter, kernel), (pool_size, stride), leak, drop], [(filter, kernel), (pool_size, stride), leak, drop], [(filter, kernel), (pool_size, stride), leak, drop, timesteps], ], ... [ [],[],[] ] ] dense_layers (TYPE): Dense layer descriptor [[fully_connected, leak, drop], ... []] padding (str, optional): Type of padding for conv and pooling layers optimizer (str or object optional): Keras optimizer as string or keras optimizer Returns: model: The compiled Keras model, ready for training.
entailment
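Every pooling stage in these models uses `padding='same'`, so the output sequence length depends only on the stride: out = ceil(in / stride). A dependency-free check of that arithmetic (this is the Keras rule for `padding='same'`, computed by hand here rather than with Keras itself):

```python
import math

def pooled_length(length, stride):
    """Sequence length after a Conv1D/MaxPooling1D stage with
    padding='same': ceil(length / stride), independent of the
    pool_size or kernel size."""
    return math.ceil(length / stride)

# Two pooling stages with stride 2 shrink a 100-step input to 25 steps,
# because 'same' padding covers any remainder.
print(pooled_length(pooled_length(100, 2), 2))  # → 25
```

This is handy for sizing the descriptors above: after all pooling stages, the flattened width is the remaining length times the last filter count.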
def VOICE(input_shape, conv_layers, dense_layers, output_layer=[1, 'sigmoid'], padding='same', optimizer='adam', loss='binary_crossentropy'): """Conv1D CNN used primarily for voice data. Args: input_shape (tuple): The shape of the input layer targets (int): Number of targets conv_layers (list): Conv layer descriptor [[(filter, kernel), (pool_size, stride), leak, drop], ... []] dense_layers (TYPE): Dense layer descriptor [[fully_connected, leak, drop]] padding (str, optional): Type of padding for conv and pooling layers optimizer (str or object optional): Keras optimizer as string or keras optimizer Returns: TYPE: model, build_arguments """ inputs = Input(shape=input_shape) for i, c in enumerate(conv_layers): if i == 0: conv = Conv1D(c[0][0], c[0][1], padding=padding)(inputs) else: conv = Conv1D(c[0][0], c[0][1], padding=padding)(drop) bn = BatchNormalization()(conv) act = LeakyReLU(alpha=c[2])(bn) pool = MaxPooling1D(pool_size=c[1][0], strides=c[1][1], padding=padding)(act) drop = Dropout(c[3])(pool) drop = Flatten()(drop) for i, d in enumerate(dense_layers): dense = Dense(d[0], activation='relu')(drop) bn = BatchNormalization()(dense) act = LeakyReLU(alpha=d[1])(bn) drop = Dropout(d[2])(act) output = Dense(output_layer[0], activation=output_layer[1])(drop) model = Model(inputs=inputs, outputs=output) model.compile(loss=loss, optimizer=optimizer, metrics=['accuracy']) return model
Conv1D CNN used primarily for voice data. Args: input_shape (tuple): The shape of the input layer conv_layers (list): Conv layer descriptor [[(filter, kernel), (pool_size, stride), leak, drop], ... []] dense_layers (list): Dense layer descriptor [[fully_connected, leak, drop], ... []] output_layer (list, optional): [nodes, activation] of the output layer padding (str, optional): Type of padding for conv and pooling layers optimizer (str or object, optional): Keras optimizer as string or keras optimizer loss (str, optional): Keras loss function Returns: model: The compiled Keras model, ready for training.
entailment
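The conv_layers descriptor above can be sanity-checked without building a model. The sketch below uses a hypothetical helper, `flattened_size` (not part of the library), under the stated assumptions: with 'same' padding a Conv1D keeps the sequence length, and pooling with stride s yields ceil(length / s).

```python
import math

def flattened_size(input_length, conv_layers):
    """Estimate the Flatten() output size for a conv_layers descriptor,
    assuming 'same' padding: Conv1D keeps the length, pooling with
    stride s yields ceil(length / s)."""
    length = input_length
    channels = None
    for (filters, _kernel), (_pool, stride), _leak, _drop in conv_layers:
        length = math.ceil(length / stride)
        channels = filters
    return length * channels

# two conv blocks: length 100 -> 50 -> 25, with 32 channels after the last block
layers = [[(16, 3), (2, 2), 0.1, 0.2],
          [(32, 3), (2, 2), 0.1, 0.2]]
print(flattened_size(100, layers))  # 25 * 32 = 800
```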
def DNN(input_shape, dense_layers, output_layer=[1, 'sigmoid'], optimizer='adam', loss='binary_crossentropy'): """Fully connected feed-forward network. Args: input_shape (list): The shape of the input layer dense_layers (list): Dense layer descriptor [fully_connected, ... ] output_layer (list, optional): [nodes, activation] of the output layer optimizer (str or object, optional): Keras optimizer as string or keras optimizer loss (str, optional): Keras loss function Returns: model: The compiled Keras model, ready for training. """ inputs = Input(shape=input_shape) dense = inputs for i, d in enumerate(dense_layers): dense = Dense(d, activation='relu')(dense) dense = BatchNormalization()(dense) dense = Dropout(0.3)(dense) output = Dense(output_layer[0], activation=output_layer[1])(dense) model = Model(inputs=inputs, outputs=output) model.compile(loss=loss, optimizer=optimizer, metrics=['accuracy']) return model
Fully connected feed-forward network. Args: input_shape (list): The shape of the input layer dense_layers (list): Dense layer descriptor [fully_connected, ... ] output_layer (list, optional): [nodes, activation] of the output layer optimizer (str or object, optional): Keras optimizer as string or keras optimizer loss (str, optional): Keras loss function Returns: model: The compiled Keras model, ready for training.
entailment
def initialize(self, originalTimeSeries, calculatedTimeSeries): """Initializes the ErrorMeasure. During initialization, all :py:meth:`BaseErrorMeasure.local_errors` are calculated. :param TimeSeries originalTimeSeries: TimeSeries containing the original data. :param TimeSeries calculatedTimeSeries: TimeSeries containing calculated data. Calculated data is smoothed or forecasted data. :return: Return :py:const:`True` if the error could be calculated, :py:const:`False` otherwise based on the minimalErrorCalculationPercentage. :rtype: boolean :raise: Raises a :py:exc:`StandardError` if the error measure is initialized multiple times. """ # ErrorMeasure was already initialized. if 0 < len(self._errorValues): raise StandardError("An ErrorMeasure can only be initialized once.") # sort the TimeSeries to reduce the required comparison operations originalTimeSeries.sort_timeseries() calculatedTimeSeries.sort_timeseries() # Performance optimization append = self._errorValues.append appendDate = self._errorDates.append local_error = self.local_error minCalcIdx = 0 # calculate all valid local errors for orgPair in originalTimeSeries: for calcIdx in xrange(minCalcIdx, len(calculatedTimeSeries)): calcPair = calculatedTimeSeries[calcIdx] # Skip values that can not be compared if calcPair[0] != orgPair[0]: continue append(local_error(orgPair[1:], calcPair[1:])) appendDate(orgPair[0]) # return False, if the error cannot be calculated calculatedErrors = len(filter(lambda item: item is not None, self._errorValues)) minCalculatedErrors = self._minimalErrorCalculationPercentage * len(originalTimeSeries) if calculatedErrors < minCalculatedErrors: self._errorValues = [] self._errorDates = [] return False return True
Initializes the ErrorMeasure. During initialization, all :py:meth:`BaseErrorMeasure.local_errors` are calculated. :param TimeSeries originalTimeSeries: TimeSeries containing the original data. :param TimeSeries calculatedTimeSeries: TimeSeries containing calculated data. Calculated data is smoothed or forecasted data. :return: Return :py:const:`True` if the error could be calculated, :py:const:`False` otherwise based on the minimalErrorCalculationPercentage. :rtype: boolean :raise: Raises a :py:exc:`StandardError` if the error measure is initialized multiple times.
entailment
def _get_error_values(self, startingPercentage, endPercentage, startDate, endDate): """Gets the defined subset of self._errorValues. Both parameters will be correct at this time. :param float startingPercentage: Defines the start of the interval. This has to be a value in [0.0, 100.0]. It represents the value, where the error calculation should be started. 25.0 for example means that the first 25% of all calculated errors will be ignored. :param float endPercentage: Defines the end of the interval. This has to be a value in [0.0, 100.0]. It represents the value, after which all error values will be ignored. 90.0 for example means that the last 10% of all local errors will be ignored. :param float startDate: Epoch representing the start date used for error calculation. :param float endDate: Epoch representing the end date used in the error calculation. :return: Returns a list with the defined error values. :rtype: list :raise: Raises a ValueError if startDate or endDate do not represent correct boundaries for error calculation. """ if startDate is not None: possibleDates = filter(lambda date: date >= startDate, self._errorDates) if 0 == len(possibleDates): raise ValueError("%s does not represent a valid startDate." % startDate) startIdx = self._errorDates.index(min(possibleDates)) else: startIdx = int((startingPercentage * len(self._errorValues)) / 100.0) if endDate is not None: possibleDates = filter(lambda date: date <= endDate, self._errorDates) if 0 == len(possibleDates): raise ValueError("%s does not represent a valid endDate." % endDate) endIdx = self._errorDates.index(max(possibleDates)) + 1 else: endIdx = int((endPercentage * len(self._errorValues)) / 100.0) return self._errorValues[startIdx:endIdx]
Gets the defined subset of self._errorValues. Both parameters will be correct at this time. :param float startingPercentage: Defines the start of the interval. This has to be a value in [0.0, 100.0]. It represents the value, where the error calculation should be started. 25.0 for example means that the first 25% of all calculated errors will be ignored. :param float endPercentage: Defines the end of the interval. This has to be a value in [0.0, 100.0]. It represents the value, after which all error values will be ignored. 90.0 for example means that the last 10% of all local errors will be ignored. :param float startDate: Epoch representing the start date used for error calculation. :param float endDate: Epoch representing the end date used in the error calculation. :return: Returns a list with the defined error values. :rtype: list :raise: Raises a ValueError if startDate or endDate do not represent correct boundaries for error calculation.
entailment
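The percentage branch of the method above reduces to a simple index calculation. A minimal sketch (a hypothetical `percentage_slice` helper, mirroring the two `int(... / 100.0)` expressions) shows how the percentages map to slice indices:

```python
def percentage_slice(values, starting_percentage, end_percentage):
    """Mirror of the percentage branch in _get_error_values: both
    percentages are mapped to integer indices before slicing."""
    start_idx = int((starting_percentage * len(values)) / 100.0)
    end_idx = int((end_percentage * len(values)) / 100.0)
    return values[start_idx:end_idx]

errors = list(range(10))
# ignore the first 25% and the last 10% of the ten local errors
print(percentage_slice(errors, 25.0, 90.0))  # [2, 3, 4, 5, 6, 7, 8]
```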
def get_error(self, startingPercentage=0.0, endPercentage=100.0, startDate=None, endDate=None): """Calculates the error for the given interval (startingPercentage, endPercentage) between the TimeSeries given during :py:meth:`BaseErrorMeasure.initialize`. :param float startingPercentage: Defines the start of the interval. This has to be a value in [0.0, 100.0]. It represents the value, where the error calculation should be started. 25.0 for example means that the first 25% of all calculated errors will be ignored. :param float endPercentage: Defines the end of the interval. This has to be a value in [0.0, 100.0]. It represents the value, after which all error values will be ignored. 90.0 for example means that the last 10% of all local errors will be ignored. :param float startDate: Epoch representing the start date used for error calculation. :param float endDate: Epoch representing the end date used in the error calculation. :return: Returns a float representing the error. :rtype: float :raise: Raises a :py:exc:`ValueError` in one of the following cases: - startingPercentage not in [0.0, 100.0] - endPercentage not in [0.0, 100.0] - endPercentage < startingPercentage :raise: Raises a :py:exc:`StandardError` if :py:meth:`BaseErrorMeasure.initialize` was not successful before. """ # not initialized: if len(self._errorValues) == 0: raise StandardError("The last call of initialize(...) was not successful.") # check for wrong parameters if not (0.0 <= startingPercentage <= 100.0): raise ValueError("startingPercentage has to be in [0.0, 100.0].") if not (0.0 <= endPercentage <= 100.0): raise ValueError("endPercentage has to be in [0.0, 100.0].") if endPercentage < startingPercentage: raise ValueError("endPercentage has to be greater than or equal to startingPercentage.") return self._calculate(startingPercentage, endPercentage, startDate, endDate)
Calculates the error for the given interval (startingPercentage, endPercentage) between the TimeSeries given during :py:meth:`BaseErrorMeasure.initialize`. :param float startingPercentage: Defines the start of the interval. This has to be a value in [0.0, 100.0]. It represents the value, where the error calculation should be started. 25.0 for example means that the first 25% of all calculated errors will be ignored. :param float endPercentage: Defines the end of the interval. This has to be a value in [0.0, 100.0]. It represents the value, after which all error values will be ignored. 90.0 for example means that the last 10% of all local errors will be ignored. :param float startDate: Epoch representing the start date used for error calculation. :param float endDate: Epoch representing the end date used in the error calculation. :return: Returns a float representing the error. :rtype: float :raise: Raises a :py:exc:`ValueError` in one of the following cases: - startingPercentage not in [0.0, 100.0] - endPercentage not in [0.0, 100.0] - endPercentage < startingPercentage :raise: Raises a :py:exc:`StandardError` if :py:meth:`BaseErrorMeasure.initialize` was not successful before.
entailment
def confidence_interval(self, confidenceLevel): """Calculates for which value confidenceLevel% of the errors are closer to 0. :param float confidenceLevel: percentage of the errors that should be smaller than the returned value for overestimations and larger than the returned value for underestimations. confidenceLevel has to be in [0.0, 1.0] :return: return a tuple containing the underestimation and overestimation for the given confidenceLevel :rtype: tuple :warning: Index is still not calculated correctly """ if not (confidenceLevel >= 0 and confidenceLevel <= 1): raise ValueError("Parameter confidenceLevel has to be in [0.0, 1.0]") underestimations = [] overestimations = [] for error in self._errorValues: if error is None: # ignore local errors that could not be calculated continue # zero errors belong in both lists if error >= 0: overestimations.append(error) if error <= 0: underestimations.append(error) # sort and cut off at confidence level overestimations.sort() underestimations.sort(reverse=True) overIdx = int(len(overestimations) * confidenceLevel) - 1 underIdx = int(len(underestimations) * confidenceLevel) - 1 overestimation = 0.0 underestimation = 0.0 if overIdx >= 0: overestimation = overestimations[overIdx] if underIdx >= 0: underestimation = underestimations[underIdx] return underestimation, overestimation
Calculates for which value confidenceLevel% of the errors are closer to 0. :param float confidenceLevel: percentage of the errors that should be smaller than the returned value for overestimations and larger than the returned value for underestimations. confidenceLevel has to be in [0.0, 1.0] :return: return a tuple containing the underestimation and overestimation for the given confidenceLevel :rtype: tuple :warning: Index is still not calculated correctly
entailment
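The split-and-index logic above can be illustrated standalone. The sketch below (a hypothetical `confidence_bounds` function mirroring the method) shows that zero errors land in both lists and how the confidence level picks an index:

```python
def confidence_bounds(errors, confidence_level):
    """Mirror of confidence_interval: split errors by sign, sort each
    side toward zero, and cut off at the confidence level."""
    # zero errors land in both lists, as in the method above
    over = sorted(e for e in errors if e is not None and e >= 0)
    under = sorted((e for e in errors if e is not None and e <= 0), reverse=True)
    over_idx = int(len(over) * confidence_level) - 1
    under_idx = int(len(under) * confidence_level) - 1
    overestimation = over[over_idx] if over_idx >= 0 else 0.0
    underestimation = under[under_idx] if under_idx >= 0 else 0.0
    return underestimation, overestimation

# at confidence 1.0 the bounds are the extreme errors on each side
print(confidence_bounds([-3, -1, 0, 2, 4], 1.0))  # (-3, 4)
```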
def load_cloudupdrs_data(filename, convert_times=1000000000.0): """ This method loads data in the cloudupdrs format Usually the data will be saved in a csv file and it should look like this: .. code-block:: text timestamp_0, x_0, y_0, z_0 timestamp_1, x_1, y_1, z_1 timestamp_2, x_2, y_2, z_2 . . . timestamp_n, x_n, y_n, z_n where x, y, z are the components of the acceleration :param filename: The path to load data from :type filename: string :param convert_times: Convert times. The default is from nanoseconds to seconds. :type convert_times: float """ try: data_m = np.genfromtxt(filename, delimiter=',', invalid_raise=False) date_times = pd.to_datetime((data_m[:, 0] - data_m[0, 0])) time_difference = (data_m[:, 0] - data_m[0, 0]) / convert_times magnitude_sum_acceleration = \ np.sqrt(data_m[:, 1] ** 2 + data_m[:, 2] ** 2 + data_m[:, 3] ** 2) data = {'td': time_difference, 'x': data_m[:, 1], 'y': data_m[:, 2], 'z': data_m[:, 3], 'mag_sum_acc': magnitude_sum_acceleration} data_frame = pd.DataFrame(data, index=date_times, columns=['td', 'x', 'y', 'z', 'mag_sum_acc']) return data_frame except IOError as e: ierr = "({}): {}".format(e.errno, e.strerror) logging.error("load data, file not found, I/O error %s", ierr) except ValueError as verr: logging.error("load data ValueError ->%s", verr) except: logging.error("Unexpected error on load data method: %s", sys.exc_info()[0])
This method loads data in the cloudupdrs format Usually the data will be saved in a csv file and it should look like this: .. code-block:: text timestamp_0, x_0, y_0, z_0 timestamp_1, x_1, y_1, z_1 timestamp_2, x_2, y_2, z_2 . . . timestamp_n, x_n, y_n, z_n where x, y, z are the components of the acceleration :param filename: The path to load data from :type filename: string :param convert_times: Convert times. The default is from nanoseconds to seconds. :type convert_times: float
entailment
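The magnitude column built by the loader is just the Euclidean norm of the three acceleration components. A minimal NumPy sketch:

```python
import numpy as np

# one sample per row: [timestamp, x, y, z]
data_m = np.array([[0.0, 1.0, 2.0, 2.0],
                   [1.0, 3.0, 0.0, 4.0]])
# same expression as in the loader above
mag = np.sqrt(data_m[:, 1] ** 2 + data_m[:, 2] ** 2 + data_m[:, 3] ** 2)
print(mag)  # [3. 5.]
```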
def load_segmented_data(filename): """ Helper function to load segmented gait time series data. :param filename: The full path of the file that contains our data. This should be a comma-separated values (csv) file. :type filename: str :return: The gait time series segmented data, with x, y, z, mag_acc_sum and segmented columns. :rtype: pandas.DataFrame """ data = pd.read_csv(filename, index_col=0) data.index = data.index.astype(np.datetime64) return data
Helper function to load segmented gait time series data. :param filename: The full path of the file that contains our data. This should be a comma-separated values (csv) file. :type filename: str :return: The gait time series segmented data, with x, y, z, mag_acc_sum and segmented columns. :rtype: pandas.DataFrame
entailment
def load_mpower_data(filename, convert_times=1000000000.0): """ This method loads data in the `mpower <https://www.synapse.org/#!Synapse:syn4993293/wiki/247859>`_ format The format is like: .. code-block:: json [ { "timestamp":19298.67999479167, "x": ... , "y": ..., "z": ..., }, {...}, {...} ] :param filename: The path to load data from :type filename: string :param convert_times: Convert times. The default is from nanoseconds to seconds. :type convert_times: float """ raw_data = pd.read_json(filename) date_times = pd.to_datetime(raw_data.timestamp * convert_times - raw_data.timestamp[0] * convert_times) time_difference = (raw_data.timestamp - raw_data.timestamp[0]) time_difference = time_difference.values magnitude_sum_acceleration = \ np.sqrt(raw_data.x.values ** 2 + raw_data.y.values ** 2 + raw_data.z.values ** 2) data = {'td': time_difference, 'x': raw_data.x.values, 'y': raw_data.y.values, 'z': raw_data.z.values, 'mag_sum_acc': magnitude_sum_acceleration} data_frame = pd.DataFrame(data, index=date_times, columns=['td', 'x', 'y', 'z', 'mag_sum_acc']) return data_frame
This method loads data in the `mpower <https://www.synapse.org/#!Synapse:syn4993293/wiki/247859>`_ format The format is like: .. code-block:: json [ { "timestamp":19298.67999479167, "x": ... , "y": ..., "z": ..., }, {...}, {...} ] :param filename: The path to load data from :type filename: string :param convert_times: Convert times. The default is from nanoseconds to seconds. :type convert_times: float
entailment
def load_finger_tapping_cloudupdrs_data(filename, convert_times=1000.0): """ This method loads data in the cloudupdrs format for the finger tapping processor Usually the data will be saved in a csv file and it should look like this: .. code-block:: text timestamp_0, . , action_type_0, x_0, y_0, . , . , x_target_0, y_target_0 timestamp_1, . , action_type_1, x_1, y_1, . , . , x_target_1, y_target_1 timestamp_2, . , action_type_2, x_2, y_2, . , . , x_target_2, y_target_2 . . . timestamp_n, . , action_type_n, x_n, y_n, . , . , x_target_n, y_target_n where data_frame.x, data_frame.y: components of tapping position. data_frame.x_target, data_frame.y_target their target. :param filename: The path to load data from :type filename: string :param convert_times: Convert times. The default is from milliseconds to seconds. :type convert_times: float """ data_m = np.genfromtxt(filename, delimiter=',', invalid_raise=False, skip_footer=1) date_times = pd.to_datetime((data_m[:, 0] - data_m[0, 0])) time_difference = (data_m[:, 0] - data_m[0, 0]) / convert_times data = {'td': time_difference, 'action_type': data_m[:, 2], 'x': data_m[:, 3], 'y': data_m[:, 4], 'x_target': data_m[:, 7], 'y_target': data_m[:, 8]} data_frame = pd.DataFrame(data, index=date_times, columns=['td', 'action_type', 'x', 'y', 'x_target', 'y_target']) return data_frame
This method loads data in the cloudupdrs format for the finger tapping processor Usually the data will be saved in a csv file and it should look like this: .. code-block:: text timestamp_0, . , action_type_0, x_0, y_0, . , . , x_target_0, y_target_0 timestamp_1, . , action_type_1, x_1, y_1, . , . , x_target_1, y_target_1 timestamp_2, . , action_type_2, x_2, y_2, . , . , x_target_2, y_target_2 . . . timestamp_n, . , action_type_n, x_n, y_n, . , . , x_target_n, y_target_n where data_frame.x, data_frame.y: components of tapping position. data_frame.x_target, data_frame.y_target their target. :param filename: The path to load data from :type filename: string :param convert_times: Convert times. The default is from milliseconds to seconds. :type convert_times: float
entailment
def load_finger_tapping_mpower_data(filename, button_left_rect, button_right_rect, convert_times=1000.0): """ This method loads data in the `mpower <https://www.synapse.org/#!Synapse:syn4993293/wiki/247859>`_ format """ raw_data = pd.read_json(filename) date_times = pd.to_datetime(raw_data.TapTimeStamp * convert_times - raw_data.TapTimeStamp[0] * convert_times) time_difference = (raw_data.TapTimeStamp - raw_data.TapTimeStamp[0]) time_difference = time_difference.values x = [] y = [] x_target = [] y_target = [] x_left, y_left, width_left, height_left = re.findall(r'-?\d+\.?\d*', button_left_rect) x_right, y_right, width_right, height_right = re.findall(r'-?\d+\.?\d*', button_right_rect) x_left_target = float(x_left) + ( float(width_left) / 2.0 ) y_left_target = float(y_left) + ( float(height_left) / 2.0 ) x_right_target = float(x_right) + ( float(width_right) / 2.0 ) y_right_target = float(y_right) + ( float(height_right) / 2.0 ) for row_index, row in raw_data.iterrows(): x_coord, y_coord = re.findall(r'-?\d+\.?\d*', row.TapCoordinate) x.append(float(x_coord)) y.append(float(y_coord)) if row.TappedButtonId == 'TappedButtonLeft': x_target.append(x_left_target) y_target.append(y_left_target) else: x_target.append(x_right_target) y_target.append(y_right_target) data = {'td': time_difference, 'action_type': 1.0, 'x': x, 'y': y, 'x_target': x_target, 'y_target': y_target} data_frame = pd.DataFrame(data, index=date_times, columns=['td', 'action_type', 'x', 'y', 'x_target', 'y_target']) data_frame.index.name = 'timestamp' return data_frame
This method loads data in the `mpower <https://www.synapse.org/#!Synapse:syn4993293/wiki/247859>`_ format
entailment
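The target coordinates are recovered by pulling the numbers out of the button rectangle string with the same regex the loader uses, then taking the rectangle's center. The sketch below assumes a "{{x, y}, {w, h}}" layout for the rect string (the concrete mpower format may differ):

```python
import re

button_rect = "{{10, 20}, {100, 50}}"  # assumed "{{x, y}, {w, h}}" layout
# same pattern as in the loader above
x, y, w, h = (float(v) for v in re.findall(r'-?\d+\.?\d*', button_rect))
x_target = x + w / 2.0  # center of the button rectangle
y_target = y + h / 2.0
print(x_target, y_target)  # 60.0 45.0
```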
def load_data(filename, format_file='cloudupdrs', button_left_rect=None, button_right_rect=None): """ This is a general load data method where the format of data to load can be passed as a parameter, :param filename: The path to load data from :type filename: str :param format_file: format of the file. Default is CloudUPDRS. Set to mpower for mpower data. :type format_file: str :param button_left_rect: mpower param :type button_left_rect: str :param button_right_rect: mpower param :type button_right_rect: str """ if format_file == 'mpower': return load_mpower_data(filename) elif format_file == 'segmented': return load_segmented_data(filename) elif format_file == 'accapp': return load_accapp_data(filename) elif format_file == 'physics': return load_physics_data(filename) elif format_file == 'freeze': return load_freeze_data(filename) elif format_file == 'huga': return load_huga_data(filename) elif format_file == 'ft_cloudupdrs': return load_finger_tapping_cloudupdrs_data(filename) elif format_file == 'ft_mpower': if button_left_rect is not None and button_right_rect is not None: return load_finger_tapping_mpower_data(filename, button_left_rect, button_right_rect) else: return load_cloudupdrs_data(filename)
This is a general load data method where the format of data to load can be passed as a parameter, :param filename: The path to load data from :type filename: str :param format_file: format of the file. Default is CloudUPDRS. Set to mpower for mpower data. :type format_file: str :param button_left_rect: mpower param :type button_left_rect: str :param button_right_rect: mpower param :type button_right_rect: str
entailment
def numerical_integration(signal, sampling_frequency): """ Numerically integrate a signal with its sampling frequency. :param signal: A 1-dimensional array or list (the signal). :type signal: array :param sampling_frequency: The sampling frequency for the signal. :type sampling_frequency: float :return: The integrated signal. :rtype: numpy.ndarray """ # trapezoidal rule with dt = 1 / sampling_frequency integrate = (sum(signal[1:]) + sum(signal[:-1])) / (2.0 * sampling_frequency) return np.array(integrate)
Numerically integrate a signal with its sampling frequency. :param signal: A 1-dimensional array or list (the signal). :type signal: array :param sampling_frequency: The sampling frequency for the signal. :type sampling_frequency: float :return: The integrated signal. :rtype: numpy.ndarray
entailment
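With dt = 1/fs, the sum-based expression above is the trapezoidal rule: the interior samples are counted twice and the endpoints once. A quick numeric check:

```python
import numpy as np

signal = np.array([0.0, 1.0, 2.0, 3.0])
fs = 1.0
# trapezoidal rule with dt = 1 / fs: (sum of right endpoints + sum of
# left endpoints) / (2 * fs)
integral = (np.sum(signal[1:]) + np.sum(signal[:-1])) / (2.0 * fs)
print(integral)  # 4.5, the exact trapezoid area under 0,1,2,3 at unit spacing
```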
def autocorrelation(signal): """ The `correlation <https://en.wikipedia.org/wiki/Autocorrelation#Estimation>`_ of a signal with a delayed copy of itself. :param signal: A 1-dimensional array or list (the signal). :type signal: array :return: The autocorrelated signal. :rtype: numpy.ndarray """ signal = np.array(signal, dtype=float) # copy as float so the in-place centering below also works for integer input n = len(signal) variance = signal.var() signal -= signal.mean() r = np.correlate(signal, signal, mode='full')[-n:] result = r / (variance * (np.arange(n, 0, -1))) return np.array(result)
The `correlation <https://en.wikipedia.org/wiki/Autocorrelation#Estimation>`_ of a signal with a delayed copy of itself. :param signal: A 1-dimensional array or list (the signal). :type signal: array :return: The autocorrelated signal. :rtype: numpy.ndarray
entailment
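One useful property of this scaling is that the lag-0 coefficient always comes out as 1 (the sum of squared centered samples divided by n times the variance). A minimal check with a short float signal:

```python
import numpy as np

signal = np.array([1.0, 2.0, 3.0, 4.0])
n = len(signal)
variance = signal.var()
centered = signal - signal.mean()
# last n entries of the full correlation are lags 0 .. n-1
r = np.correlate(centered, centered, mode='full')[-n:]
result = r / (variance * np.arange(n, 0, -1))
print(result[0])  # 1.0: lag-0 autocorrelation is always 1 under this scaling
```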
def peakdet(signal, delta, x=None): """ Find the local maxima and minima (peaks) in a 1-dimensional signal. Converted from MATLAB script <http://billauer.co.il/peakdet.html> :param signal: A 1-dimensional array or list (the signal). :type signal: array :param delta: The peak threshold. A point is considered a maximum peak if it has the maximal value, and was preceded (to the left) by a value lower by delta. :type delta: float :param x: Indices in local maxima and minima are replaced with the corresponding values in x (None default). :type x: array :return maxtab: The highest peaks. :rtype maxtab: numpy.ndarray :return mintab: The lowest peaks. :rtype mintab: numpy.ndarray """ maxtab = [] mintab = [] if x is None: x = np.arange(len(signal)) v = np.asarray(signal) if len(v) != len(x): sys.exit('Input vectors v and x must have same length') if not np.isscalar(delta): sys.exit('Input argument delta must be a scalar') if delta <= 0: sys.exit('Input argument delta must be positive') mn, mx = np.inf, -np.inf mnpos, mxpos = np.nan, np.nan lookformax = True for i in np.arange(len(v)): this = v[i] if this > mx: mx = this mxpos = x[i] if this < mn: mn = this mnpos = x[i] if lookformax: if this < mx - delta: maxtab.append((mxpos, mx)) mn = this mnpos = x[i] lookformax = False else: if this > mn + delta: mintab.append((mnpos, mn)) mx = this mxpos = x[i] lookformax = True return np.array(maxtab), np.array(mintab)
Find the local maxima and minima (peaks) in a 1-dimensional signal. Converted from MATLAB script <http://billauer.co.il/peakdet.html> :param signal: A 1-dimensional array or list (the signal). :type signal: array :param delta: The peak threshold. A point is considered a maximum peak if it has the maximal value, and was preceded (to the left) by a value lower by delta. :type delta: float :param x: Indices in local maxima and minima are replaced with the corresponding values in x (None default). :type x: array :return maxtab: The highest peaks. :rtype maxtab: numpy.ndarray :return mintab: The lowest peaks. :rtype mintab: numpy.ndarray
entailment
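The alternating max/min search can be condensed into a short sketch (a reimplementation for illustration, not the library function) and traced on a tiny signal:

```python
import numpy as np

def peakdet_sketch(v, delta):
    """Condensed version of the alternating max/min search: track the
    running extremum, and emit a peak once the signal has retreated
    from it by more than delta."""
    maxtab, mintab = [], []
    mn, mx = np.inf, -np.inf
    mnpos = mxpos = None
    lookformax = True
    for i, this in enumerate(v):
        if this > mx:
            mx, mxpos = this, i
        if this < mn:
            mn, mnpos = this, i
        if lookformax and this < mx - delta:
            maxtab.append((mxpos, mx))
            mn, mnpos = this, i
            lookformax = False
        elif not lookformax and this > mn + delta:
            mintab.append((mnpos, mn))
            mx, mxpos = this, i
            lookformax = True
    return maxtab, mintab

maxtab, mintab = peakdet_sketch([0, 1, 0, 2, 0], 0.5)
print(maxtab)  # [(1, 1), (3, 2)]
print(mintab)  # [(2, 0)]
```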
def compute_interpeak(data, sample_rate): """ Compute number of samples between signal peaks using the real part of FFT. :param data: 1-dimensional time series data. :type data: array :param sample_rate: Sample rate of accelerometer reading (Hz) :type sample_rate: float :return interpeak: Number of samples between peaks :rtype interpeak: int :Examples: >>> import numpy as np >>> from mhealthx.signals import compute_interpeak >>> data = np.random.random(10000) >>> sample_rate = 100 >>> interpeak = compute_interpeak(data, sample_rate) """ # Real part of FFT: freqs = fftfreq(data.size, d=1.0/sample_rate) f_signal = rfft(data) # Maximum non-zero frequency: imax_freq = np.argsort(f_signal)[-2] freq = np.abs(freqs[imax_freq]) # Inter-peak samples: interpeak = int(np.round(sample_rate / freq)) return interpeak
Compute number of samples between signal peaks using the real part of FFT. :param data: 1-dimensional time series data. :type data: array :param sample_rate: Sample rate of accelerometer reading (Hz) :type sample_rate: float :return interpeak: Number of samples between peaks :rtype interpeak: int :Examples: >>> import numpy as np >>> from mhealthx.signals import compute_interpeak >>> data = np.random.random(10000) >>> sample_rate = 100 >>> interpeak = compute_interpeak(data, sample_rate)
entailment
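The same dominant-frequency idea can be sketched with NumPy's FFT helpers (using an argmax over the magnitude spectrum rather than the argsort heuristic above): a 2 Hz tone sampled at 100 Hz should give 50 samples between peaks.

```python
import numpy as np

sample_rate = 100.0
t = np.arange(0, 10, 1.0 / sample_rate)
data = np.sin(2.0 * np.pi * 2.0 * t)  # a pure 2 Hz tone

freqs = np.fft.rfftfreq(data.size, d=1.0 / sample_rate)
spectrum = np.abs(np.fft.rfft(data))
freq = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
interpeak = int(round(sample_rate / freq))
print(interpeak)  # 50 samples between peaks at 2 Hz and 100 Hz sampling
```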
def butter_lowpass_filter(data, sample_rate, cutoff=10, order=4, plot=False): """ `Low-pass filter <http://stackoverflow.com/questions/25191620/ creating-lowpass-filter-in-scipy-understanding-methods-and-units>`_ data by the [order]th order zero lag Butterworth filter whose cut frequency is set to [cutoff] Hz. :param data: time-series data, :type data: numpy array of floats :param sample_rate: data sample rate :type sample_rate: integer :param cutoff: filter cutoff; must be below the Nyquist frequency (sample_rate / 2) :type cutoff: float :param order: order :type order: integer :return y: low-pass-filtered data :rtype y: numpy array of floats :Examples: >>> import numpy as np >>> from mhealthx.signals import butter_lowpass_filter >>> data = np.random.random(100) >>> sample_rate = 100 >>> cutoff = 5 >>> order = 4 >>> y = butter_lowpass_filter(data, sample_rate, cutoff, order) """ nyquist = 0.5 * sample_rate normal_cutoff = cutoff / nyquist b, a = butter(order, normal_cutoff, btype='low', analog=False) if plot: w, h = freqz(b, a, worN=8000) plt.subplot(2, 1, 1) plt.plot(0.5*sample_rate*w/np.pi, np.abs(h), 'b') plt.plot(cutoff, 0.5*np.sqrt(2), 'ko') plt.axvline(cutoff, color='k') plt.xlim(0, 0.5*sample_rate) plt.title("Lowpass Filter Frequency Response") plt.xlabel('Frequency [Hz]') plt.grid() plt.show() y = lfilter(b, a, data) return y
`Low-pass filter <http://stackoverflow.com/questions/25191620/ creating-lowpass-filter-in-scipy-understanding-methods-and-units>`_ data by the [order]th order zero lag Butterworth filter whose cut frequency is set to [cutoff] Hz. :param data: time-series data, :type data: numpy array of floats :param sample_rate: data sample rate :type sample_rate: integer :param cutoff: filter cutoff; must be below the Nyquist frequency (sample_rate / 2) :type cutoff: float :param order: order :type order: integer :return y: low-pass-filtered data :rtype y: numpy array of floats :Examples: >>> import numpy as np >>> from mhealthx.signals import butter_lowpass_filter >>> data = np.random.random(100) >>> sample_rate = 100 >>> cutoff = 5 >>> order = 4 >>> y = butter_lowpass_filter(data, sample_rate, cutoff, order)
entailment
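scipy.signal.butter requires the normalized cutoff Wn to lie strictly inside (0, 1), so the cutoff frequency must stay below the Nyquist frequency. The normalization itself is plain arithmetic:

```python
sample_rate = 50
cutoff = 5.0
nyquist = 0.5 * sample_rate
# butter() needs 0 < normal_cutoff < 1, i.e. cutoff < sample_rate / 2
normal_cutoff = cutoff / nyquist
print(normal_cutoff)  # 0.2
```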
def crossings_nonzero_pos2neg(data): """ Find `indices of zero crossings from positive to negative values <http://stackoverflow.com/questions/3843017/efficiently-detect-sign-changes-in-python>`_. :param data: numpy array of floats :type data: numpy array of floats :return crossings: crossing indices to data :rtype crossings: numpy array of integers :Examples: >>> import numpy as np >>> from mhealthx.signals import crossings_nonzero_pos2neg >>> data = np.random.random(100) >>> crossings = crossings_nonzero_pos2neg(data) """ import numpy as np if isinstance(data, np.ndarray): pass elif isinstance(data, list): data = np.asarray(data) else: raise IOError('data should be a numpy array') pos = data > 0 crossings = (pos[:-1] & ~pos[1:]).nonzero()[0] return crossings
Find `indices of zero crossings from positive to negative values <http://stackoverflow.com/questions/3843017/efficiently-detect-sign-changes-in-python>`_. :param data: numpy array of floats :type data: numpy array of floats :return crossings: crossing indices to data :rtype crossings: numpy array of integers :Examples: >>> import numpy as np >>> from mhealthx.signals import crossings_nonzero_pos2neg >>> data = np.random.random(100) >>> crossings = crossings_nonzero_pos2neg(data)
entailment
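The boolean-mask trick is compact enough to trace by hand: `pos[:-1] & ~pos[1:]` is true exactly where a positive sample is followed by a non-positive one.

```python
import numpy as np

data = np.array([1.0, -1.0, 2.0, 3.0, -2.0])
pos = data > 0                               # [T, F, T, T, F]
crossings = (pos[:-1] & ~pos[1:]).nonzero()[0]
print(crossings)  # [0 3]: the signal drops from positive to non-positive
                  # right after indices 0 and 3
```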
def autocorrelate(data, unbias=2, normalize=2): """ Compute the autocorrelation coefficients for time series data. Here we use scipy.signal.correlate, but the results are the same as in Yang, et al., 2012 for unbias=1: "The autocorrelation coefficient refers to the correlation of a time series with its own past or future values. iGAIT uses unbiased autocorrelation coefficients of acceleration data to scale the regularity and symmetry of gait. The autocorrelation coefficients are divided by :math:`fc(0)`, so that the autocorrelation coefficient is equal to :math:`1` when :math:`t=0`: .. math:: NFC(t) = \\frac{fc(t)}{fc(0)} Here :math:`NFC(t)` is the normalised autocorrelation coefficient, and :math:`fc(t)` are autocorrelation coefficients." :param data: time series data :type data: numpy array :param unbias: autocorrelation, divide by range (1) or by weighted range (2) :type unbias: integer or None :param normalize: divide by 1st coefficient (1) or by maximum abs. value (2) :type normalize: integer or None :return coefficients: autocorrelation coefficients [normalized, unbiased] :rtype coefficients: numpy array :return N: number of coefficients :rtype N: integer :Examples: >>> import numpy as np >>> from mhealthx.signals import autocorrelate >>> data = np.random.random(100) >>> unbias = 2 >>> normalize = 2 >>> coefficients, N = autocorrelate(data, unbias, normalize) """ # Autocorrelation: coefficients = correlate(data, data, 'full') size = coefficients.size // 2 coefficients = coefficients[size:] N = coefficients.size # Unbiased: if unbias: if unbias == 1: coefficients /= (N - np.arange(N)) elif unbias == 2: coefficient_ratio = coefficients[0]/coefficients[-1] coefficients /= np.linspace(coefficient_ratio, 1, N) else: raise IOError("unbias should be set to 1, 2, or None") # Normalize: if normalize: if normalize == 1: coefficients /= np.abs(coefficients[0]) elif normalize == 2: coefficients /= np.max(np.abs(coefficients)) else: raise IOError("normalize should be set to 1, 2, or None") return coefficients, N
Compute the autocorrelation coefficients for time series data. Here we use scipy.signal.correlate, but the results are the same as in Yang, et al., 2012 for unbias=1: "The autocorrelation coefficient refers to the correlation of a time series with its own past or future values. iGAIT uses unbiased autocorrelation coefficients of acceleration data to scale the regularity and symmetry of gait. The autocorrelation coefficients are divided by :math:`fc(0)`, so that the autocorrelation coefficient is equal to :math:`1` when :math:`t=0`: .. math:: NFC(t) = \\frac{fc(t)}{fc(0)} Here :math:`NFC(t)` is the normalised autocorrelation coefficient, and :math:`fc(t)` are autocorrelation coefficients." :param data: time series data :type data: numpy array :param unbias: autocorrelation, divide by range (1) or by weighted range (2) :type unbias: integer or None :param normalize: divide by 1st coefficient (1) or by maximum abs. value (2) :type normalize: integer or None :return coefficients: autocorrelation coefficients [normalized, unbiased] :rtype coefficients: numpy array :return N: number of coefficients :rtype N: integer :Examples: >>> import numpy as np >>> from mhealthx.signals import autocorrelate >>> data = np.random.random(100) >>> unbias = 2 >>> normalize = 2 >>> coefficients, N = autocorrelate(data, unbias, normalize)
entailment
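The normalization step described above (dividing the coefficients by :math:`fc(0)` so that :math:`NFC(0) = 1`) can be sketched with numpy alone. This is a minimal illustration, not the library's `autocorrelate`; the function name `normalized_autocorrelation` is hypothetical, and it corresponds to the `normalize=1` path without the unbiasing step.

```python
import numpy as np

def normalized_autocorrelation(data):
    """Autocorrelation coefficients divided by fc(0), so NFC(0) == 1.
    (Hypothetical helper; mirrors the normalize=1 path described above.)"""
    data = np.asarray(data, dtype=float)
    n = data.size
    # 'full' correlation has 2n-1 lags; index n-1 is lag 0, so keep lags 0..n-1:
    fc = np.correlate(data, data, mode='full')[n - 1:]
    return fc / fc[0]

x = np.array([1.0, 2.0, 3.0, 4.0])
nfc = normalized_autocorrelation(x)
# nfc[0] is exactly 1; later lags decay as overlap shrinks.
```

Here fc(0) = 1 + 4 + 9 + 16 = 30 and fc(1) = 1·2 + 2·3 + 3·4 = 20, so the second coefficient is 20/30.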
def get_signal_peaks_and_prominences(data):
    """
    Get the signal peaks and peak prominences.

    :param data array: One-dimensional array.
    :return peaks array: The peaks of our signal.
    :return prominences array: The prominences of the peaks.
    """
    peaks, _ = sig.find_peaks(data)
    prominences = sig.peak_prominences(data, peaks)[0]
    return peaks, prominences
Get the signal peaks and peak prominences. :param data array: One-dimensional array. :return peaks array: The peaks of our signal. :return prominences array: The prominences of the peaks.
entailment
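`scipy.signal.find_peaks` returns the indices of local maxima. As a dependency-free illustration of what those indices look like, here is a numpy-only stand-in that keeps strict local maxima; the name `simple_peaks` is hypothetical and it does not reproduce scipy's plateau handling or prominence computation.

```python
import numpy as np

def simple_peaks(data):
    """Indices of strict local maxima (a numpy-only stand-in for
    scipy.signal.find_peaks on a 1-D array; hypothetical helper)."""
    data = np.asarray(data, dtype=float)
    inner = data[1:-1]
    # An interior sample is a peak if it exceeds both neighbours:
    mask = (inner > data[:-2]) & (inner > data[2:])
    return np.where(mask)[0] + 1  # shift back to original indexing

signal = np.array([0.0, 2.0, 1.0, 3.0, 0.5, 1.0, 0.0])
peaks = simple_peaks(signal)  # indices 1, 3, 5
```

The prominences scipy then computes measure how far each peak rises above the surrounding baseline, which is what makes them useful for ranking peaks.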
def smoothing_window(data, window=[1, 1, 1]):
    """
    This is a smoothing functionality so we can fix misclassifications.

    It will run a sliding window of form [border, smoothing, border] on the
    signal and if the border elements are the same it will change the smooth
    elements to match the border. An example would be for a window of
    [2, 1, 2] we have the following elements [1, 1, 0, 1, 1], this will
    transform it into [1, 1, 1, 1, 1]. So if the border elements match it
    will transform the middle (smoothing) into the same as the border.

    :param data array: One-dimensional array.
    :param window array: Used to define the [border, smoothing, border]
        regions.
    :return data array: The smoothed version of the original data.
    """
    # +1 so the final window position (including one exactly the size of
    # the data) is visited; without it the documented example is never reached.
    for i in range(len(data) - sum(window) + 1):
        start_window_from = i
        start_window_to = i + window[0]
        end_window_from = start_window_to + window[1]
        end_window_to = end_window_from + window[2]
        if np.all(data[start_window_from:start_window_to] ==
                  data[end_window_from:end_window_to]):
            data[start_window_from:end_window_to] = data[start_window_from]
    return data
This is a smoothing functionality so we can fix misclassifications. It will
run a sliding window of form [border, smoothing, border] on the signal, and
if the border elements are the same it will change the smooth elements to
match the border. For example, with a window of [2, 1, 2] the elements
[1, 1, 0, 1, 1] are transformed into [1, 1, 1, 1, 1]. So if the border
elements match, the middle (smoothing) region is set to the same value as
the border.

:param data array: One-dimensional array.
:param window array: Used to define the [border, smoothing, border] regions.
:return data array: The smoothed version of the original data.
entailment
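The documented [2, 1, 2] example can be checked directly. This sketch re-implements the same sliding-window logic in compact form (with the loop bound `len(data) - sum(window) + 1`, so the final window position is visited and the example is actually reached); it assumes the input is a numpy array of equal-length borders, as in the example.

```python
import numpy as np

def smoothing_window(data, window=[1, 1, 1]):
    """Set the middle region to the border value when both borders agree.
    Compact restatement of the sliding-window smoothing described above."""
    for i in range(len(data) - sum(window) + 1):
        a = i                       # start of left border
        b = a + window[0]           # start of smoothing region
        c = b + window[1]           # start of right border
        d = c + window[2]           # end of right border
        if np.all(data[a:b] == data[c:d]):
            data[a:d] = data[a]
    return data

labels = np.array([1, 1, 0, 1, 1])
smoothed = smoothing_window(labels, window=[2, 1, 2])  # -> all ones
```

Note that the two border regions must be the same length for the elementwise comparison to broadcast, and the input is modified in place.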
def plot_segmentation(data, peaks, segment_indexes, figsize=(10, 5)):
    """
    Will plot the data and segmentation based on the peaks and segment
    indexes.

    :param 1d-array data: The original axis of the data that was segmented
        into sections.
    :param 1d-array peaks: Peaks of the data.
    :param 1d-array segment_indexes: These are the different classes,
        corresponding to each peak.

    Will not return anything; instead it will plot the data and peaks with
    different colors for each class.
    """
    fig, ax = plt.subplots(figsize=figsize)
    plt.plot(data)
    for segment in np.unique(segment_indexes):
        idx = np.where(segment_indexes == segment)[0]
        plt.plot(peaks[idx], data[peaks][idx], 'o')
    plt.show()
Will plot the data and segmentation based on the peaks and segment indexes.

:param 1d-array data: The original axis of the data that was segmented into
    sections.
:param 1d-array peaks: Peaks of the data.
:param 1d-array segment_indexes: These are the different classes,
    corresponding to each peak.

Will not return anything; instead it will plot the data and peaks with
different colors for each class.
entailment
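The part of the plotting loop that drives the per-class colors is the fancy indexing `peaks[np.where(segment_indexes == segment)[0]]`. That grouping can be shown without matplotlib; the toy arrays below are made up for illustration.

```python
import numpy as np

# Hypothetical data: three detected peaks, two of class 0 and one of class 1.
data = np.array([0.0, 2.0, 1.0, 3.0, 0.5, 4.0, 0.0])
peaks = np.array([1, 3, 5])
segment_indexes = np.array([0, 0, 1])

# Group peak positions and peak heights by class, exactly as the plot loop does:
groups = {}
for segment in np.unique(segment_indexes):
    idx = np.where(segment_indexes == segment)[0]
    groups[int(segment)] = (peaks[idx], data[peaks][idx])
```

Each `groups[segment]` pair is what gets passed to one `plt.plot(..., 'o')` call, so each class ends up as its own colored marker series.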