def freeze_of_gait(self, x):
    """
    This method assesses freeze of gait following :cite:`g-BachlinPRMHGT10`.

    :param x: The time series to assess freeze of gait on. This could be x, y, z or mag_sum_acc.
    :type x: pandas.Series
    :return freeze_time: The times at which freeze of gait events occur. [measured in time (h:m:s)]
    :rtype freeze_time: numpy.ndarray
    :return freeze_index: Freeze Index is defined as the power in the "freeze" band [3-8 Hz] divided by the power in the "locomotor" band [0.5-3 Hz] :cite:`g-BachlinPRMHGT10`. [measured in Hz]
    :rtype freeze_index: numpy.ndarray
    :return locomotor_freeze_index: Locomotor freeze index is the power in the "freeze" band [3-8 Hz] added to the power in the "locomotor" band [0.5-3 Hz]. [measured in Hz]
    :rtype locomotor_freeze_index: numpy.ndarray
    """
    data = self.resample_signal(x).values
    f_res = self.sampling_frequency / self.window
    f_nr_LBs = int(self.loco_band[0] / f_res)
    f_nr_LBe = int(self.loco_band[1] / f_res)
    f_nr_FBs = int(self.freeze_band[0] / f_res)
    f_nr_FBe = int(self.freeze_band[1] / f_res)

    jPos = self.window + 1
    time = []
    sumLocoFreeze = []
    freezeIndex = []

    while jPos < len(data):
        jStart = jPos - self.window
        time.append(jPos)

        # De-mean the current window and compute its power spectrum.
        y = data[int(jStart):int(jPos)]
        y = y - np.mean(y)
        Y = np.fft.fft(y, int(self.window))
        Pyy = abs(Y * Y) / self.window

        # Integrate the power in the locomotor and freeze bands.
        areaLocoBand = numerical_integration(Pyy[f_nr_LBs - 1:f_nr_LBe], self.sampling_frequency)
        areaFreezeBand = numerical_integration(Pyy[f_nr_FBs - 1:f_nr_FBe], self.sampling_frequency)

        sumLocoFreeze.append(areaFreezeBand + areaLocoBand)
        freezeIndex.append(areaFreezeBand / areaLocoBand)

        jPos = jPos + self.step_size

    freeze_time = np.asarray(time, dtype=np.int32)
    freeze_index = np.asarray(freezeIndex, dtype=np.float32)
    locomotor_freeze_index = np.asarray(sumLocoFreeze, dtype=np.float32)

    return freeze_time, freeze_index, locomotor_freeze_index
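The core of the computation above — power in a 3-8 Hz "freeze" band divided by power in a 0.5-3 Hz "locomotor" band — can be sketched standalone. This is a minimal numpy-only sketch; the `freeze_index` helper and the synthetic signals are illustrative, not part of the source.

```python
import numpy as np

def freeze_index(window, fs, loco_band=(0.5, 3.0), freeze_band=(3.0, 8.0)):
    # Hypothetical helper: power spectrum of one de-meaned window,
    # then the ratio of band powers (freeze band / locomotor band).
    y = window - np.mean(window)
    Pyy = np.abs(np.fft.fft(y)) ** 2 / len(y)
    freqs = np.fft.fftfreq(len(y), d=1.0 / fs)
    loco = Pyy[(freqs >= loco_band[0]) & (freqs < loco_band[1])].sum()
    freeze = Pyy[(freqs >= freeze_band[0]) & (freqs < freeze_band[1])].sum()
    return freeze / loco

fs = 100.0
t = np.arange(0, 10, 1.0 / fs)
fi_walk = freeze_index(np.sin(2 * np.pi * 2.0 * t), fs)    # energy at 2 Hz (locomotor band)
fi_freeze = freeze_index(np.sin(2 * np.pi * 6.0 * t), fs)  # energy at 6 Hz (freeze band)
```

A signal concentrated in the locomotor band yields an index well below one; a signal concentrated in the freeze band yields an index well above one, which is what the method's thresholding relies on.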
def frequency_of_peaks(self, x, start_offset=100, end_offset=100):
    """
    This method assesses the frequency of the peaks on any given 1-dimensional time series.

    :param x: The time series to assess the frequency of peaks on. This could be x, y, z or mag_sum_acc.
    :type x: pandas.Series
    :param start_offset: Signal to leave out (of calculations) from the beginning of the time series (100 default).
    :type start_offset: int
    :param end_offset: Signal to leave out (of calculations) from the end of the time series (100 default).
    :type end_offset: int
    :return frequency_of_peaks: The frequency of peaks on the provided time series [measured in Hz].
    :rtype frequency_of_peaks: float
    """
    peaks_data = x[start_offset:-end_offset].values
    maxtab, mintab = peakdet(peaks_data, self.delta)

    # Mean interval between successive peak indices, converted from
    # samples to seconds; its inverse is the peak frequency in Hz.
    mean_interval = np.mean(np.diff(maxtab[:, 0].astype(int))) / self.sampling_frequency
    frequency_of_peaks = 1.0 / mean_interval

    return frequency_of_peaks
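The underlying idea — invert the mean interval between successive peaks — can be checked on a synthetic signal. The three-point local-maximum test below is a hypothetical stand-in for `peakdet`, whose implementation is not shown here.

```python
import numpy as np

fs = 50.0
t = np.arange(0, 4, 1.0 / fs)
signal = np.sin(2 * np.pi * 1.5 * t)  # a 1.5 Hz oscillation

# Local maxima: interior points larger than both neighbours
# (a simple stand-in for a proper peak detector).
peaks = np.where((signal[1:-1] > signal[:-2]) & (signal[1:-1] > signal[2:]))[0] + 1

mean_interval = np.mean(np.diff(peaks)) / fs  # seconds between successive peaks
peak_frequency = 1.0 / mean_interval
```

The recovered `peak_frequency` lands close to the true 1.5 Hz, up to the discretisation of peak positions onto the sample grid.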
def speed_of_gait(self, x, wavelet_type='db3', wavelet_level=6):
    """
    This method assesses the speed of gait following :cite:`g-MartinSB11`. It extracts the gait speed from the energies of the approximation coefficients of wavelet functions. Preferably you should use the magnitude of x, y and z (mag_acc_sum) here as the time series.

    :param x: The time series to assess the speed of gait on. This could be x, y, z or mag_sum_acc.
    :type x: pandas.Series
    :param wavelet_type: The type of wavelet to use. See https://pywavelets.readthedocs.io/en/latest/ref/wavelets.html for a full list ('db3' default).
    :type wavelet_type: str
    :param wavelet_level: The level of the wavelet decomposition (6 default).
    :type wavelet_level: int
    :return: The speed of gait [measured in meters/second].
    :rtype: float
    """
    coeffs = wavedec(x.values, wavelet=wavelet_type, level=wavelet_level)
    energy = [sum(coeffs[wavelet_level - i] ** 2) / len(coeffs[wavelet_level - i])
              for i in range(wavelet_level)]

    WEd1 = energy[0] / (5 * np.sqrt(2))
    WEd2 = energy[1] / (4 * np.sqrt(2))
    WEd3 = energy[2] / (3 * np.sqrt(2))
    WEd4 = energy[3] / (2 * np.sqrt(2))
    WEd5 = energy[4] / np.sqrt(2)
    WEd6 = energy[5] / np.sqrt(2)

    gait_speed = 0.5 * np.sqrt(WEd1 + (WEd2 / 2) + (WEd3 / 3) + (WEd4 / 4) + (WEd5 / 5) + (WEd6 / 6))

    return gait_speed
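The energy weighting at the end of the method can be sketched without pywt; the coefficient arrays below are random stand-ins for the `wavedec` output, so only the formula, not the values, is meaningful.

```python
import numpy as np

# Random stand-ins for the six coefficient arrays pywt.wavedec would
# return (this is only a shape/formula sketch, not real wavelet output).
rs = np.random.RandomState(0)
coeffs = [rs.randn(2 ** k) for k in range(6, 0, -1)]

# Mean energy per level: sum of squared coefficients over length.
energy = [np.sum(c ** 2) / len(c) for c in coeffs]

# Level-dependent scaling as in the source (divisors 5, 4, 3, 2, 1, 1),
# then the weighted square-root combination.
divisors = [5, 4, 3, 2, 1, 1]
WEd = [e / (d * np.sqrt(2)) for e, d in zip(energy, divisors)]
gait_speed = 0.5 * np.sqrt(sum(w / (i + 1) for i, w in enumerate(WEd)))
```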
def walk_regularity_symmetry(self, data_frame):
    """
    This method extracts the step and stride regularity, and also the walk symmetry.

    :param data_frame: The data frame. It should have x, y, and z columns.
    :type data_frame: pandas.DataFrame
    :return step_regularity: Regularity of steps on [x, y, z] coordinates, defined as the consistency of the step-to-step pattern.
    :rtype step_regularity: numpy.ndarray
    :return stride_regularity: Regularity of stride on [x, y, z] coordinates, defined as the consistency of the stride-to-stride pattern.
    :rtype stride_regularity: numpy.ndarray
    :return walk_symmetry: Symmetry of walk on [x, y, z] coordinates, defined as the difference between step and stride regularity.
    :rtype walk_symmetry: numpy.ndarray
    """
    def _symmetry(v):
        # The second and third autocorrelation peaks correspond to
        # one step and one stride respectively.
        maxtab, _ = peakdet(v, self.delta)
        return maxtab[1][1], maxtab[2][1]

    step_regularity_x, stride_regularity_x = _symmetry(autocorrelation(data_frame.x))
    step_regularity_y, stride_regularity_y = _symmetry(autocorrelation(data_frame.y))
    step_regularity_z, stride_regularity_z = _symmetry(autocorrelation(data_frame.z))

    symmetry_x = step_regularity_x - stride_regularity_x
    symmetry_y = step_regularity_y - stride_regularity_y
    symmetry_z = step_regularity_z - stride_regularity_z

    step_regularity = np.array([step_regularity_x, step_regularity_y, step_regularity_z])
    stride_regularity = np.array([stride_regularity_x, stride_regularity_y, stride_regularity_z])
    walk_symmetry = np.array([symmetry_x, symmetry_y, symmetry_z])

    return step_regularity, stride_regularity, walk_symmetry
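The regularity measures come from the autocorrelation: its value at the step lag reflects step-to-step consistency and its value at the stride lag reflects stride-to-stride consistency. A numpy-only sketch on a synthetic gait-like signal follows; the `autocorr` helper and the 0.5 s / 1 s lags are assumptions for illustration.

```python
import numpy as np

def autocorr(x):
    # Normalized autocorrelation: lag-0 coefficient equals 1.
    x = x - x.mean()
    c = np.correlate(x, x, mode="full")[len(x) - 1:]
    return c / c[0]

fs = 100.0
t = np.arange(0, 10, 1.0 / fs)
# Simulated gait: a 1 Hz stride component plus a 2 Hz step harmonic.
signal = np.sin(2 * np.pi * 2.0 * t) + 0.5 * np.sin(2 * np.pi * 1.0 * t)

ac = autocorr(signal)
step_regularity = ac[int(fs * 0.5)]    # lag of one step (0.5 s)
stride_regularity = ac[int(fs * 1.0)]  # lag of one stride (1.0 s)
symmetry = abs(stride_regularity - step_regularity)
```

For an asymmetric signal like this one, the stride-lag correlation exceeds the step-lag correlation, so `symmetry` is noticeably above zero.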
def walk_direction_preheel(self, data_frame):
    """
    Estimate local walk (not cardinal) direction with pre-heel strike phase.

    Inspired by Nirupam Roy's B.E. thesis: "WalkCompass: Finding Walking Direction Leveraging Smartphone's Inertial Sensors".

    :param data_frame: The data frame. It should have x, y, and z columns.
    :type data_frame: pandas.DataFrame
    :return: Unit vector of local walk (not cardinal) direction.
    :rtype: numpy.ndarray
    """
    # Sum of absolute values across accelerometer axes:
    data = data_frame.x.abs() + data_frame.y.abs() + data_frame.z.abs()

    # Find maximum peaks of smoothed data:
    _, ipeaks_smooth = self.heel_strikes(data)
    data = data.values

    # Compute number of samples between peaks using the real part of the FFT:
    interpeak = compute_interpeak(data, self.sampling_frequency)
    decel = int(np.round(self.stride_fraction * interpeak))

    # Find maximum peaks close to maximum peaks of smoothed data:
    ipeaks = []
    for ipeak_smooth in ipeaks_smooth:
        ipeak = np.argmax(data[ipeak_smooth - decel:ipeak_smooth + decel])
        ipeak += ipeak_smooth - decel
        ipeaks.append(ipeak)

    # Compute the average vector for each deceleration phase:
    vectors = []
    for ipeak in ipeaks:
        decel_vectors = np.asarray([[data_frame.x[i], data_frame.y[i], data_frame.z[i]]
                                    for i in range(ipeak - decel, ipeak)])
        vectors.append(np.mean(decel_vectors, axis=0))

    # Compute the average deceleration vector and take the opposite direction:
    direction = -1 * np.mean(vectors, axis=0)

    # Return the unit vector in this direction:
    direction /= np.sqrt(direction.dot(direction))

    return direction
def heel_strikes(self, x):
    """
    Estimate heel strike times between sign changes in accelerometer data.

    :param x: The time series to estimate heel strikes on. This could be x, y, z or mag_sum_acc.
    :type x: pandas.Series
    :return strikes: Heel strike timings measured in seconds.
    :rtype strikes: numpy.ndarray
    :return strikes_idx: Heel strike timing indices of the time series.
    :rtype strikes_idx: numpy.ndarray
    """
    # Demean data (copy first so the caller's series is not mutated):
    data = x.values.copy()
    data -= data.mean()

    # Low-pass filter the AP accelerometer data with a 4th-order zero-lag
    # Butterworth filter whose cut-off frequency is set to 5 Hz:
    filtered = butter_lowpass_filter(data, self.sampling_frequency, self.cutoff_frequency, self.filter_order)

    # Find transitional positions where the AP accelerometer changes from
    # positive to negative:
    transitions = crossings_nonzero_pos2neg(filtered)

    # Find the peaks of AP acceleration preceding the transitional positions,
    # and greater than the product of a threshold and the maximum value of
    # the AP acceleration:
    strike_indices_smooth = []
    filter_threshold = np.abs(self.delta * np.max(filtered))
    for i in range(1, np.size(transitions)):
        segment = range(transitions[i - 1], transitions[i])
        imax = np.argmax(filtered[segment])
        if filtered[segment[imax]] > filter_threshold:
            strike_indices_smooth.append(segment[imax])

    # Compute number of samples between peaks using the real part of the FFT:
    interpeak = compute_interpeak(data, self.sampling_frequency)
    decel = int(interpeak / 2)

    # Find maximum peaks close to maximum peaks of smoothed data:
    strikes_idx = []
    for ismooth in strike_indices_smooth:
        istrike = np.argmax(data[ismooth - decel:ismooth + decel])
        istrike = istrike + ismooth - decel
        strikes_idx.append(istrike)

    # Convert strike indices to seconds, relative to the first strike:
    strikes = np.asarray(strikes_idx, dtype=float)
    strikes -= strikes[0]
    strikes /= self.sampling_frequency

    return strikes, np.array(strikes_idx)
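The transition detection relies on positive-to-negative zero crossings. A minimal sketch of that step on a synthetic signal (the `crossings_pos2neg` helper here is a hypothetical stand-in for `crossings_nonzero_pos2neg`, whose implementation is not shown):

```python
import numpy as np

def crossings_pos2neg(x):
    # Indices i where x[i] > 0 and x[i+1] <= 0, i.e. downward zero crossings.
    return np.where((x[:-1] > 0) & (x[1:] <= 0))[0]

# Three full cycles of a sine with a small phase offset so no sample
# lands exactly on zero: exactly three downward crossings expected.
sig = np.sin(2 * np.pi * np.linspace(0, 3, 300, endpoint=False) + 0.1)
n_crossings = len(crossings_pos2neg(sig))
```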
def gait_regularity_symmetry(self, x, average_step_duration='autodetect', average_stride_duration='autodetect', unbias=1, normalize=2):
    """
    Compute step and stride regularity and symmetry from accelerometer data, with the help of steps and strides.

    :param x: The time series to assess gait regularity on. This could be x, y, z or mag_sum_acc.
    :type x: pandas.Series
    :param average_step_duration: Average duration of each step, using the same time unit as the time series. If this is set to 'autodetect' it will be inferred from the time series.
    :type average_step_duration: float
    :param average_stride_duration: Average duration of each stride, using the same time unit as the time series. If this is set to 'autodetect' it will be inferred from the time series.
    :type average_stride_duration: float
    :param unbias: Unbiased autocorrelation: divide by range (unbias=1) or by weighted range (unbias=2).
    :type unbias: int
    :param normalize: Normalize: divide by 1st coefficient (normalize=1) or by maximum absolute value (normalize=2).
    :type normalize: int
    :return step_regularity: Step regularity measure along axis.
    :rtype step_regularity: float
    :return stride_regularity: Stride regularity measure along axis.
    :rtype stride_regularity: float
    :return symmetry: Symmetry measure along axis.
    :rtype symmetry: float
    """
    if (average_step_duration == 'autodetect') or (average_stride_duration == 'autodetect'):
        # Infer the average step and stride durations from the heel
        # strikes, then recurse with the concrete values.
        strikes, _ = self.heel_strikes(x)

        step_durations = []
        for i in range(1, np.size(strikes)):
            step_durations.append(strikes[i] - strikes[i - 1])
        average_step_duration = np.mean(step_durations)

        # Alternating heel strikes belong to alternating feet, so every
        # other strike delimits a stride.
        strides1 = strikes[0::2]
        strides2 = strikes[1::2]

        stride_durations1 = []
        for i in range(1, np.size(strides1)):
            stride_durations1.append(strides1[i] - strides1[i - 1])
        stride_durations2 = []
        for i in range(1, np.size(strides2)):
            stride_durations2.append(strides2[i] - strides2[i - 1])

        average_stride_duration = np.mean((np.mean(stride_durations1), np.mean(stride_durations2)))

        return self.gait_regularity_symmetry(x, average_step_duration, average_stride_duration, unbias, normalize)
    else:
        coefficients, _ = autocorrelate(x, unbias=unbias, normalize=normalize)

        step_period = int(np.round(1 / average_step_duration))
        stride_period = int(np.round(1 / average_stride_duration))

        step_regularity = coefficients[step_period]
        stride_regularity = coefficients[stride_period]
        symmetry = np.abs(stride_regularity - step_regularity)

        return step_regularity, stride_regularity, symmetry
def gait(self, x):
    """
    Extract gait features from estimated heel strikes and accelerometer data.

    :param x: The time series to extract gait features from. This could be x, y, z or mag_sum_acc.
    :type x: pandas.Series
    :return number_of_steps: Estimated number of steps based on heel strikes [number of steps].
    :rtype number_of_steps: int
    :return velocity: Velocity (if distance is provided) [meters/second].
    :rtype velocity: float
    :return avg_step_length: Average step length (if distance is provided) [meters].
    :rtype avg_step_length: float
    :return avg_stride_length: Average stride length (if distance is provided) [meters].
    :rtype avg_stride_length: float
    :return cadence: Number of steps divided by duration [steps/second].
    :rtype cadence: float
    :return step_durations: Step durations [seconds].
    :rtype step_durations: numpy.ndarray
    :return avg_step_duration: Average step duration [seconds].
    :rtype avg_step_duration: float
    :return sd_step_durations: Standard deviation of step durations [seconds].
    :rtype sd_step_durations: float
    :return strides: Stride timings for each side [seconds].
    :rtype strides: numpy.ndarray
    :return avg_number_of_strides: Estimated number of strides based on alternating heel strikes [number of strides].
    :rtype avg_number_of_strides: float
    :return stride_durations: Estimated stride durations [seconds].
    :rtype stride_durations: numpy.ndarray
    :return avg_stride_duration: Average stride duration [seconds].
    :rtype avg_stride_duration: float
    :return sd_stride_durations: Standard deviation of stride durations [seconds].
    :rtype sd_stride_durations: float
    :return step_regularity: Measure of step regularity along axis [percentage consistency of the step-to-step pattern].
    :rtype step_regularity: float
    :return stride_regularity: Measure of stride regularity along axis [percentage consistency of the stride-to-stride pattern].
    :rtype stride_regularity: float
    :return symmetry: Measure of gait symmetry along axis [difference between step and stride regularity].
    :rtype symmetry: float
    """
    strikes, _ = self.heel_strikes(x)

    step_durations = []
    for i in range(1, np.size(strikes)):
        step_durations.append(strikes[i] - strikes[i - 1])

    avg_step_duration = np.mean(step_durations)
    sd_step_durations = np.std(step_durations)

    number_of_steps = np.size(strikes)

    # Alternating heel strikes belong to alternating feet, so every other
    # strike delimits a stride.
    strides1 = strikes[0::2]
    strides2 = strikes[1::2]

    stride_durations1 = []
    for i in range(1, np.size(strides1)):
        stride_durations1.append(strides1[i] - strides1[i - 1])
    stride_durations2 = []
    for i in range(1, np.size(strides2)):
        stride_durations2.append(strides2[i] - strides2[i - 1])

    strides = [strides1, strides2]
    stride_durations = [stride_durations1, stride_durations2]

    avg_number_of_strides = np.mean([np.size(strides1), np.size(strides2)])
    avg_stride_duration = np.mean((np.mean(stride_durations1), np.mean(stride_durations2)))
    sd_stride_durations = np.mean((np.std(stride_durations1), np.std(stride_durations2)))

    step_regularity, stride_regularity, symmetry = self.gait_regularity_symmetry(
        x, average_step_duration=avg_step_duration, average_stride_duration=avg_stride_duration)

    cadence = None
    if self.duration:
        cadence = number_of_steps / self.duration

    velocity = None
    avg_step_length = None
    avg_stride_length = None
    if self.distance:
        velocity = self.distance / self.duration
        # Length in meters is distance per step / per stride.
        avg_step_length = self.distance / number_of_steps
        avg_stride_length = self.distance / avg_number_of_strides

    return [number_of_steps, cadence, velocity, avg_step_length, avg_stride_length,
            step_durations, avg_step_duration, sd_step_durations, strides,
            stride_durations, avg_number_of_strides, avg_stride_duration,
            sd_stride_durations, step_regularity, stride_regularity, symmetry]
def separate_into_sections(self, data_frame, labels_col='anno', labels_to_keep=[1, 2], min_labels_in_sequence=100):
    """
    Helper function to separate a time series into multiple sections based on a labeled column.

    :param data_frame: The data frame. It should have x, y, and z columns.
    :type data_frame: pandas.DataFrame
    :param labels_col: The column which has the labels we would like to separate the data_frame on ('anno' default).
    :type labels_col: str
    :param labels_to_keep: The unique label ids of the labels which we would like to keep, out of all the labels in the labels_col ([1, 2] default).
    :type labels_to_keep: list
    :param min_labels_in_sequence: The minimum number of samples which can make up a section (100 default).
    :type min_labels_in_sequence: int
    :return: A list of DataFrames, segmented accordingly.
    :rtype: list
    """
    sections = [[]]
    mask = data_frame[labels_col].apply(lambda x: x in labels_to_keep)

    for i, m in enumerate(mask):
        if m:
            sections[-1].append(i)
        if not m and len(sections[-1]) > min_labels_in_sequence:
            sections.append([])

    # Drop the trailing section if it is too short to qualify.
    if len(sections[-1]) <= min_labels_in_sequence:
        sections.pop()

    sections = [self.rebuild_indexes(data_frame.iloc[s]) for s in sections]

    return sections
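The same run-splitting can be expressed with pandas shift/cumsum grouping. This sketch keeps contiguous runs of wanted labels but omits the minimum-length filter the method applies; the toy frame and labels are illustrative.

```python
import pandas as pd

df = pd.DataFrame({
    "x": range(10),
    "anno": [0, 1, 1, 1, 0, 0, 2, 2, 2, 0],
})

mask = df["anno"].isin([1, 2])
# Each change in the mask starts a new run id; group kept rows by run.
runs = (mask != mask.shift()).cumsum()
sections = [g.reset_index(drop=True) for _, g in df[mask].groupby(runs[mask])]
```

Here the two kept runs (labels 1 and 2) come out as two separate frames with rebuilt indexes.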
def bellman_segmentation(self, x, states):
    """
    Divide a univariate time series into `states` contiguous segments, using the Bellman k-segmentation algorithm on the peak prominences of the data.

    :param x: The time series to segment. This could be x, y, z or mag_sum_acc.
    :type x: pandas.Series
    :param states: Number of contiguous segments.
    :type states: int
    :return peaks: The peaks in the time series.
    :rtype peaks: list
    :return prominences: The peak prominences.
    :rtype prominences: list
    :return bellman_idx: The indices of the segments.
    :rtype bellman_idx: list
    """
    peaks, prominences = get_signal_peaks_and_prominences(x)
    bellman_idx = BellmanKSegment(prominences, states)

    return peaks, prominences, bellman_idx
def sklearn_segmentation(self, x, cluster_fn):
    """
    Divide a univariate time series into contiguous segments, using scikit-learn clustering algorithms on the peak prominences of the data.

    :param x: The time series to segment. This could be x, y, z or mag_sum_acc.
    :type x: pandas.Series
    :param cluster_fn: Any unsupervised learning algorithm from the sklearn library. It needs to have the `fit_predict` method.
    :type cluster_fn: sklearn estimator
    :return peaks: The peaks in the time series.
    :rtype peaks: list
    :return prominences: The peak prominences.
    :rtype prominences: list
    :return sklearn_idx: The indices of the segments.
    :rtype sklearn_idx: list
    """
    peaks, prominences = get_signal_peaks_and_prominences(x)

    # sklearn expects a 2-D feature array, hence the reshape to (-1, 1).
    sklearn_idx = cluster_fn.fit_predict(prominences.reshape(-1, 1))

    return peaks, prominences, sklearn_idx
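A usage sketch with scikit-learn's KMeans standing in for `cluster_fn`; the prominence values are made up, and the point of interest is the reshape to the 2-D feature array that `fit_predict` expects.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical peak prominences: three small peaks and three large ones.
prominences = np.array([0.1, 0.12, 0.09, 2.0, 2.1, 1.9])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    prominences.reshape(-1, 1))  # (-1, 1): one feature per sample
```

With well-separated prominences the small peaks and large peaks land in two different clusters, which is the segmentation signal the method returns.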
def add_manual_segmentation_to_data_frame(self, data_frame, segmentation_dictionary):
    """
    Utility method to store a manual segmentation of a gait time series.

    :param data_frame: The data frame. It should have x, y, and z columns.
    :type data_frame: pandas.DataFrame
    :param segmentation_dictionary: A dictionary of the form {'signal_type': [(from, to), (from, to)], ..., 'signal_type': [(from, to), (from, to)]}. The from and to values can either be of type numpy.datetime64 or int, depending on how you are segmenting the time series.
    :type segmentation_dictionary: dict
    :return: The data_frame with a new column named 'segmentation'.
    :rtype: pandas.DataFrame
    """
    # TODO: add some checks to see if the dictionary is in the right format.
    data_frame['segmentation'] = 'unknown'

    for k, v in segmentation_dictionary.items():
        for start, end in v:
            if type(start) != np.datetime64:
                # Integer positions: clamp to the valid row range, then
                # map to index values.
                start = max(start, 0)
                end = min(end, len(data_frame) - 1)
                start = data_frame.index.values[start]
                end = data_frame.index.values[end]
            data_frame.loc[start:end, 'segmentation'] = k

    return data_frame
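For the integer-position case, the labeling boils down to an inclusive `.loc` slice per span. A small sketch with a hypothetical dictionary and a single-column frame:

```python
import pandas as pd

df = pd.DataFrame({"mag_sum_acc": range(10)})
df["segmentation"] = "unknown"

# Hypothetical manual segmentation: row positions (from, to), inclusive.
seg = {"walking": [(2, 5)], "standing": [(7, 9)]}

for label, spans in seg.items():
    for start, end in spans:
        # .loc slicing is inclusive of both endpoints.
        df.loc[df.index[start]:df.index[end], "segmentation"] = label
```

Rows outside every span keep the 'unknown' placeholder, so unlabeled stretches remain visible downstream.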
def plot_segmentation_dictionary(self, x, segmentation_dictionary, figsize=(10, 5)):
    """
    Utility method used to visualize how the segmentation dictionary interacts with the time series.

    :param x: The time series to plot the segmentation on top of.
    :type x: pandas.Series
    :param segmentation_dictionary: A dictionary of the form {'signal_type': [(from, to), (from, to)], ..., 'signal_type': [(from, to), (from, to)]}.
    :type segmentation_dictionary: dict
    :param figsize: The size of the figure where we will plot the segmentation on top of the provided time series ((10, 5) default).
    :type figsize: tuple
    """
    data = x
    fig, ax = plt.subplots()
    fig.set_size_inches(figsize[0], figsize[1])

    # Fixed palette; repeats after eight labels.
    colors = 'bgrcmykw' * 3

    data.plot(ax=ax)
    for i, (k, v) in enumerate(segmentation_dictionary.items()):
        for start, end in v:
            if type(start) != np.datetime64:
                start = data.index.values[start]
                end = data.index.values[end]
            plt.axvspan(start, end, color=colors[i], alpha=0.5)

    legend = [mpatches.Patch(color=colors[i], label="{}".format(k))
              for i, k in enumerate(segmentation_dictionary.keys())]
    plt.legend(handles=legend)
    plt.show()
def plot_segmentation_data_frame(self, segmented_data_frame, axis='mag_sum_acc', figsize=(10, 5)):
    """
    Utility method used to visualize how a segmented data frame interacts with the time series.

    :param segmented_data_frame: The segmented data frame. It should have x, y, z and segmentation columns.
    :type segmented_data_frame: pandas.DataFrame
    :param axis: The axis which we want to plot. We can choose from x, y, z and mag_sum_acc ('mag_sum_acc' default).
    :type axis: str
    :param figsize: The size of the figure where we will plot the segmentation on top of the provided time series ((10, 5) default).
    :type figsize: tuple
    """
    # Fixed palette; repeats after eight labels.
    colors = 'bgrcmykw' * 3
    keys = np.unique(segmented_data_frame['segmentation'])

    fig, ax = plt.subplots()
    fig.set_size_inches(figsize[0], figsize[1])

    segmented_data_frame[axis].plot(ax=ax)
    for i, k in enumerate(keys):
        patch = segmented_data_frame['segmentation'].loc[segmented_data_frame['segmentation'] == k].index
        for p in patch:
            ax.axvline(p, color=colors[i], alpha=0.1)

    legend = [mpatches.Patch(color=colors[i], label="{}".format(k)) for i, k in enumerate(keys)]
    plt.legend(handles=legend)
    plt.show()
def error_wrapper(error, errorClass): """ We want to see all error messages from cloud services. Amazon's EC2 says that their errors are accompanied either by a 400-series or 500-series HTTP response code. As such, the first thing we want to do is check to see if the error is in that range. If it is, we then need to see if the error message is an EC2 one. In the event that an error is not a Twisted web error nor an EC2 one, the original exception is raised. """ http_status = 0 if error.check(TwistedWebError): xml_payload = error.value.response if error.value.status: http_status = int(error.value.status) else: error.raiseException() if http_status >= 400: if not xml_payload: error.raiseException() try: fallback_error = errorClass( xml_payload, error.value.status, str(error.value), error.value.response) except (ParseError, AWSResponseParseError): error_message = http.RESPONSES.get(http_status) fallback_error = TwistedWebError( http_status, error_message, error.value.response) raise fallback_error elif 200 <= http_status < 300: return str(error.value) else: error.raiseException()
We want to see all error messages from cloud services. Amazon's EC2 says that their errors are accompanied either by a 400-series or 500-series HTTP response code. As such, the first thing we want to do is check to see if the error is in that range. If it is, we then need to see if the error message is an EC2 one. In the event that an error is not a Twisted web error nor an EC2 one, the original exception is raised.
entailment
def _get_joined_path(ctx): """ @type ctx: L{_URLContext} @param ctx: A URL context. @return: The path component, un-urlencoded, but joined by slashes. @rtype: L{bytes} """ return b'/' + b'/'.join(seg.encode('utf-8') for seg in ctx.path)
@type ctx: L{_URLContext} @param ctx: A URL context. @return: The path component, un-urlencoded, but joined by slashes. @rtype: L{bytes}
entailment
def get_page(self, url, *args, **kwds): """ Define our own get_page method so that we can easily override the factory when we need to. This was copied from the following: * twisted.web.client.getPage * twisted.web.client._makeGetterFactory """ contextFactory = None scheme, host, port, path = parse(url) data = kwds.get('postdata', None) self._method = method = kwds.get('method', 'GET') self.request_headers = self._headers(kwds.get('headers', {})) if (self.body_producer is None) and (data is not None): self.body_producer = FileBodyProducer(StringIO(data)) if self.endpoint.ssl_hostname_verification: contextFactory = None else: contextFactory = WebClientContextFactory() agent = _get_agent(scheme, host, self.reactor, contextFactory) if scheme == "https": self.client.url = url d = agent.request(method, url, self.request_headers, self.body_producer) d.addCallback(self._handle_response) return d
Define our own get_page method so that we can easily override the factory when we need to. This was copied from the following: * twisted.web.client.getPage * twisted.web.client._makeGetterFactory
entailment
def _headers(self, headers_dict): """ Convert dictionary of headers into twisted.web.client.Headers object. """ return Headers(dict((k,[v]) for (k,v) in headers_dict.items()))
Convert dictionary of headers into twisted.web.client.Headers object.
entailment
def _unpack_headers(self, headers): """ Unpack twisted.web.client.Headers object to dict. This is to provide backwards compatibility. """ return dict((k,v[0]) for (k,v) in headers.getAllRawHeaders())
Unpack twisted.web.client.Headers object to dict. This is to provide backwards compatibility.
entailment
def get_request_headers(self, *args, **kwds): """ A convenience method for obtaining the headers that were sent to the S3 server. The AWS S3 API depends upon setting headers. This method is provided as a convenience for debugging issues with the S3 communications. """ if self.request_headers: return self._unpack_headers(self.request_headers)
A convenience method for obtaining the headers that were sent to the S3 server. The AWS S3 API depends upon setting headers. This method is provided as a convenience for debugging issues with the S3 communications.
entailment
def _handle_response(self, response): """ Handle the HTTP response by memoing the headers and then delivering bytes. """ self.client.status = response.code self.response_headers = response.headers # XXX This workaround (which needs to be improved at that) for possible # bug in Twisted with new client: # http://twistedmatrix.com/trac/ticket/5476 if self._method.upper() == 'HEAD' or response.code == NO_CONTENT: return succeed('') receiver = self.receiver_factory() receiver.finished = d = Deferred() receiver.content_length = response.length response.deliverBody(receiver) if response.code >= 400: d.addCallback(self._fail_response, response) return d
Handle the HTTP response by memoing the headers and then delivering bytes.
entailment
def get_response_headers(self, *args, **kwargs): """ A convenience method for obtaining the headers that were sent from the S3 server. The AWS S3 API depends upon setting headers. This method is used by the head_object API call for getting a S3 object's metadata. """ if self.response_headers: return self._unpack_headers(self.response_headers)
A convenience method for obtaining the headers that were sent from the S3 server. The AWS S3 API depends upon setting headers. This method is used by the head_object API call for getting a S3 object's metadata.
entailment
def create_jwt_token(secret, client_id): """ Create JWT token for GOV.UK Notify Tokens have standard header: { "typ": "JWT", "alg": "HS256" } Claims consist of: iss: identifier for the client iat: issued at in epoch seconds (UTC) :param secret: Application signing secret :param client_id: Identifier for the client :return: JWT token for this request """ assert secret, "Missing secret key" assert client_id, "Missing client id" headers = { "typ": __type__, "alg": __algorithm__ } claims = { 'iss': client_id, 'iat': epoch_seconds() } return jwt.encode(payload=claims, key=secret, headers=headers).decode()
Create JWT token for GOV.UK Notify Tokens have standard header: { "typ": "JWT", "alg": "HS256" } Claims consist of: iss: identifier for the client iat: issued at in epoch seconds (UTC) :param secret: Application signing secret :param client_id: Identifier for the client :return: JWT token for this request
entailment
def get_token_issuer(token): """ Issuer of a token is the identifier used to recover the secret. We need to extract this from the token to ensure we can proceed to the signature validation stage. This does not check the validity of the token. :param token: signed JWT token :return issuer: iss field of the JWT token :raises TokenIssuerError: if iss field not present :raises TokenDecodeError: if token does not conform to JWT spec """ try: unverified = decode_token(token) if 'iss' not in unverified: raise TokenIssuerError return unverified.get('iss') except jwt.DecodeError: raise TokenDecodeError
Issuer of a token is the identifier used to recover the secret. We need to extract this from the token to ensure we can proceed to the signature validation stage. This does not check the validity of the token. :param token: signed JWT token :return issuer: iss field of the JWT token :raises TokenIssuerError: if iss field not present :raises TokenDecodeError: if token does not conform to JWT spec
entailment
def decode_jwt_token(token, secret): """ Validates and decodes the JWT token Token checked for - signature of JWT token - token issued date is valid :param token: jwt token :param secret: client specific secret :return boolean: True if the token is valid; an exception is raised otherwise :raises TokenIssuerError: if iss field not present :raises TokenIssuedAtError: if iat field not present :raises TokenDecodeError: if signature validation fails """ try: # check signature of the token decoded_token = jwt.decode( token, key=secret.encode(), verify=True, algorithms=[__algorithm__], leeway=__bound__ ) # token has all the required fields if 'iss' not in decoded_token: raise TokenIssuerError if 'iat' not in decoded_token: raise TokenIssuedAtError # check iat time is within bounds now = epoch_seconds() iat = int(decoded_token['iat']) if now > (iat + __bound__): raise TokenExpiredError("Token has expired", decoded_token) if iat > (now + __bound__): raise TokenExpiredError("Token can not be in the future", decoded_token) return True except jwt.InvalidIssuedAtError: raise TokenExpiredError("Token has invalid iat field", decode_token(token)) except jwt.DecodeError: raise TokenDecodeError
Validates and decodes the JWT token Token checked for - signature of JWT token - token issued date is valid :param token: jwt token :param secret: client specific secret :return boolean: True if the token is valid; an exception is raised otherwise :raises TokenIssuerError: if iss field not present :raises TokenIssuedAtError: if iat field not present :raises TokenDecodeError: if signature validation fails
entailment
def parse(stream, with_text=False): # type: (Iterator[str], bool) -> Iterator[Union[Tuple[str, LexicalUnit], LexicalUnit]] """Generates lexical units from a character stream. Args: stream (Iterator[str]): A character stream containing lexical units, superblanks and other text. with_text (Optional[bool]): A boolean defining whether to output preceding text with each lexical unit. Yields: :class:`LexicalUnit`: The next lexical unit found in the character stream. (if `with_text` is False) \n *(str, LexicalUnit)* - The next lexical unit found in the character stream and the text that separated it from the prior unit in a tuple. (if with_text is True) """ buffer = '' text_buffer = '' in_lexical_unit = False in_superblank = False for char in stream: if in_superblank: if char == ']': in_superblank = False text_buffer += char elif char == '\\': text_buffer += char text_buffer += next(stream) else: text_buffer += char elif in_lexical_unit: if char == '$': if with_text: yield (text_buffer, LexicalUnit(buffer)) else: yield LexicalUnit(buffer) buffer = '' text_buffer = '' in_lexical_unit = False elif char == '\\': buffer += char buffer += next(stream) else: buffer += char else: if char == '[': in_superblank = True text_buffer += char elif char == '^': in_lexical_unit = True elif char == '\\': text_buffer += char text_buffer += next(stream) else: text_buffer += char
Generates lexical units from a character stream. Args: stream (Iterator[str]): A character stream containing lexical units, superblanks and other text. with_text (Optional[bool]): A boolean defining whether to output preceding text with each lexical unit. Yields: :class:`LexicalUnit`: The next lexical unit found in the character stream. (if `with_text` is False) \n *(str, LexicalUnit)* - The next lexical unit found in the character stream and the text that separated it from the prior unit in a tuple. (if with_text is True)
entailment
def to_gnuplot_datafile(self, datafilepath): """Dumps the TimeSeries into a gnuplot compatible data file. :param string datafilepath: Path used to create the file. If that file already exists, it will be overwritten! :return: Returns :py:const:`True` if the data could be written, :py:const:`False` otherwise. :rtype: boolean """ try: datafile = file(datafilepath, "wb") except Exception: return False if self._timestampFormat is None: self._timestampFormat = _STR_EPOCHS datafile.write("# time_as_<%s> value\n" % self._timestampFormat) convert = TimeSeries.convert_epoch_to_timestamp for datapoint in self._timeseriesData: timestamp, value = datapoint if self._timestampFormat is not None: timestamp = convert(timestamp, self._timestampFormat) datafile.write("%s %s\n" % (timestamp, value)) datafile.close() return True
Dumps the TimeSeries into a gnuplot compatible data file. :param string datafilepath: Path used to create the file. If that file already exists, it will be overwritten! :return: Returns :py:const:`True` if the data could be written, :py:const:`False` otherwise. :rtype: boolean
entailment
def from_twodim_list(cls, datalist, tsformat=None): """Creates a new TimeSeries instance from the data stored inside a two dimensional list. :param list datalist: List containing multiple iterables with at least two values. The first item will always be used as timestamp in the predefined format, the second represents the value. All other items in those sublists will be ignored. :param string tsformat: Format of the given timestamp. This is used to convert the timestamp into UNIX epochs, if necessary. For valid examples take a look into the :py:func:`time.strptime` documentation. :return: Returns a TimeSeries instance containing the data from datalist. :rtype: TimeSeries """ # create and fill the given TimeSeries ts = TimeSeries() ts.set_timeformat(tsformat) for entry in datalist: ts.add_entry(*entry[:2]) # set the normalization level ts._normalized = ts.is_normalized() ts.sort_timeseries() return ts
Creates a new TimeSeries instance from the data stored inside a two dimensional list. :param list datalist: List containing multiple iterables with at least two values. The first item will always be used as timestamp in the predefined format, the second represents the value. All other items in those sublists will be ignored. :param string tsformat: Format of the given timestamp. This is used to convert the timestamp into UNIX epochs, if necessary. For valid examples take a look into the :py:func:`time.strptime` documentation. :return: Returns a TimeSeries instance containing the data from datalist. :rtype: TimeSeries
entailment
def initialize_from_sql_cursor(self, sqlcursor): """Initializes the TimeSeries's data from the given SQL cursor. You need to set the time stamp format using :py:meth:`TimeSeries.set_timeformat`. :param SQLCursor sqlcursor: Cursor that holds the SQL result for any given "SELECT timestamp, value, ... FROM ..." SQL query. Only the first two attributes of the SQL result will be used. :return: Returns the number of entries added to the TimeSeries. :rtype: integer """ # initialize the result tuples = 0 # add the SQL result to the time series data = sqlcursor.fetchmany() while 0 < len(data): for entry in data: self.add_entry(str(entry[0]), entry[1]) tuples += 1 data = sqlcursor.fetchmany() # set the normalization level self._normalized = self._check_normalization() # return the number of tuples added to the timeseries. return tuples
Initializes the TimeSeries's data from the given SQL cursor. You need to set the time stamp format using :py:meth:`TimeSeries.set_timeformat`. :param SQLCursor sqlcursor: Cursor that holds the SQL result for any given "SELECT timestamp, value, ... FROM ..." SQL query. Only the first two attributes of the SQL result will be used. :return: Returns the number of entries added to the TimeSeries. :rtype: integer
entailment
def convert_timestamp_to_epoch(cls, timestamp, tsformat): """Converts the given timestamp into a float representing UNIX-epochs. :param string timestamp: Timestamp in the defined format. :param string tsformat: Format of the given timestamp. This is used to convert the timestamp into UNIX epochs. For valid examples take a look into the :py:func:`time.strptime` documentation. :return: Returns a float, representing the UNIX-epochs for the given timestamp. :rtype: float """ return time.mktime(time.strptime(timestamp, tsformat))
Converts the given timestamp into a float representing UNIX-epochs. :param string timestamp: Timestamp in the defined format. :param string tsformat: Format of the given timestamp. This is used to convert the timestamp into UNIX epochs. For valid examples take a look into the :py:func:`time.strptime` documentation. :return: Returns a float, representing the UNIX-epochs for the given timestamp. :rtype: float
entailment
def convert_epoch_to_timestamp(cls, timestamp, tsformat): """Converts the given float representing UNIX-epochs into an actual timestamp. :param float timestamp: Timestamp as UNIX-epochs. :param string tsformat: Format of the given timestamp. This is used to convert the timestamp from UNIX epochs. For valid examples take a look into the :py:func:`time.strptime` documentation. :return: Returns the timestamp as defined in format. :rtype: string """ return time.strftime(tsformat, time.gmtime(timestamp))
Converts the given float representing UNIX-epochs into an actual timestamp. :param float timestamp: Timestamp as UNIX-epochs. :param string tsformat: Format of the given timestamp. This is used to convert the timestamp from UNIX epochs. For valid examples take a look into the :py:func:`time.strptime` documentation. :return: Returns the timestamp as defined in format. :rtype: string
entailment
def add_entry(self, timestamp, data): """Adds a new data entry to the TimeSeries. :param timestamp: Time stamp of the data. This has either to be a float representing the UNIX epochs or a string containing a timestamp in the given format. :param numeric data: Actual data value. """ self._normalized = self._predefinedNormalized self._sorted = self._predefinedSorted tsformat = self._timestampFormat if tsformat is not None: timestamp = TimeSeries.convert_timestamp_to_epoch(timestamp, tsformat) self._timeseriesData.append([float(timestamp), float(data)])
Adds a new data entry to the TimeSeries. :param timestamp: Time stamp of the data. This has either to be a float representing the UNIX epochs or a string containing a timestamp in the given format. :param numeric data: Actual data value.
entailment
def sort_timeseries(self, ascending=True): """Sorts the data points within the TimeSeries according to their occurrence inline. :param boolean ascending: Determines if the TimeSeries will be ordered ascending or descending. If this is set to descending once, the ordered parameter defined in :py:meth:`TimeSeries.__init__` will be set to False FOREVER. :return: Returns :py:obj:`self` for convenience. :rtype: TimeSeries """ # the time series is sorted by default if ascending and self._sorted: return self sortorder = 1 if not ascending: sortorder = -1 self._predefinedSorted = False self._timeseriesData.sort(key=lambda i: sortorder * i[0]) self._sorted = ascending return self
Sorts the data points within the TimeSeries according to their occurrence inline. :param boolean ascending: Determines if the TimeSeries will be ordered ascending or descending. If this is set to descending once, the ordered parameter defined in :py:meth:`TimeSeries.__init__` will be set to False FOREVER. :return: Returns :py:obj:`self` for convenience. :rtype: TimeSeries
entailment
def sorted_timeseries(self, ascending=True): """Returns a sorted copy of the TimeSeries, preserving the original one. As an assumption this new TimeSeries is not ordered anymore if a new value is added. :param boolean ascending: Determines if the TimeSeries will be ordered ascending or descending. :return: Returns a new TimeSeries instance sorted in the requested order. :rtype: TimeSeries """ sortorder = 1 if not ascending: sortorder = -1 data = sorted(self._timeseriesData, key=lambda i: sortorder * i[0]) newTS = TimeSeries(self._normalized) for entry in data: newTS.add_entry(*entry) newTS._sorted = ascending return newTS
Returns a sorted copy of the TimeSeries, preserving the original one. As an assumption this new TimeSeries is not ordered anymore if a new value is added. :param boolean ascending: Determines if the TimeSeries will be ordered ascending or descending. :return: Returns a new TimeSeries instance sorted in the requested order. :rtype: TimeSeries
entailment
def normalize(self, normalizationLevel="minute", fusionMethod="mean", interpolationMethod="linear"): """Normalizes the TimeSeries data points. If this function is called, the TimeSeries gets ordered ascending automatically. The new timestamps will represent the center of each time bucket. Within a normalized TimeSeries, the temporal distance between two consecutive data points is constant. :param string normalizationLevel: Level of normalization that has to be applied. The available normalization levels are defined in :py:data:`timeseries.NormalizationLevels`. :param string fusionMethod: Normalization method that has to be used if multiple data entries exist within the same normalization bucket. The available methods are defined in :py:data:`timeseries.FusionMethods`. :param string interpolationMethod: Interpolation method that is used if a data entry at a specific time is missing. The available interpolation methods are defined in :py:data:`timeseries.InterpolationMethods`. :raise: Raises a :py:exc:`ValueError` if a normalizationLevel, fusionMethod or interpolationMethod have an unknown value. """ # do not normalize the TimeSeries if it is already normalized, either by # definition or a prior call of normalize(*) if self._normalizationLevel == normalizationLevel: if self._normalized: # pragma: no cover return # check if all parameters are defined correctly if normalizationLevel not in NormalizationLevels: raise ValueError("Normalization level %s is unknown." % normalizationLevel) if fusionMethod not in FusionMethods: raise ValueError("Fusion method %s is unknown." % fusionMethod) if interpolationMethod not in InterpolationMethods: raise ValueError("Interpolation method %s is unknown." % interpolationMethod) # (nearly) empty TimeSeries instances do not require normalization if len(self) < 2: self._normalized = True return # get the defined methods and parameter self._normalizationLevel = normalizationLevel normalizationLevel = NormalizationLevels[normalizationLevel] fusionMethod = FusionMethods[fusionMethod] interpolationMethod = InterpolationMethods[interpolationMethod] # sort the TimeSeries self.sort_timeseries() # prepare the required buckets start = self._timeseriesData[0][0] end = self._timeseriesData[-1][0] span = end - start bucketcnt = int(span / normalizationLevel) + 1 buckethalfwidth = normalizationLevel / 2.0 bucketstart = start + buckethalfwidth buckets = [[bucketstart + idx * normalizationLevel] for idx in xrange(bucketcnt)] # Step One: Populate buckets # Initialize the timeseries data iterators tsdStartIdx = 0 tsdEndIdx = 0 tsdlength = len(self) for idx in xrange(bucketcnt): # get the bucket to avoid multiple calls of buckets.__getitem__() bucket = buckets[idx] # get the range for the given bucket bucketend = bucket[0] + buckethalfwidth while tsdEndIdx < tsdlength and self._timeseriesData[tsdEndIdx][0] < bucketend: tsdEndIdx += 1 # continue, if no valid data entries exist if tsdStartIdx == tsdEndIdx: continue # use the given fusion method to calculate the fusioned value values = [i[1] for i in self._timeseriesData[tsdStartIdx:tsdEndIdx]] bucket.append(fusionMethod(values)) # set the new timeseries data index tsdStartIdx = tsdEndIdx # Step Two: Fill missing buckets missingCount = 0 lastIdx = 0 for idx in xrange(bucketcnt): # bucket is empty if 1 == len(buckets[idx]): missingCount += 1 continue # This is the first bucket. The first bucket is not empty by definition! if idx == 0: lastIdx = idx continue # update the lastIdx, if none was missing if 0 == missingCount: lastIdx = idx continue # calculate and fill in missing values missingValues = interpolationMethod(buckets[lastIdx][1], buckets[idx][1], missingCount) for idx2 in xrange(1, missingCount + 1): buckets[lastIdx + idx2].append(missingValues[idx2 - 1]) lastIdx = idx missingCount = 0 self._timeseriesData = buckets # at the end set self._normalized to True self._normalized = True
Normalizes the TimeSeries data points. If this function is called, the TimeSeries gets ordered ascending automatically. The new timestamps will represent the center of each time bucket. Within a normalized TimeSeries, the temporal distance between two consecutive data points is constant. :param string normalizationLevel: Level of normalization that has to be applied. The available normalization levels are defined in :py:data:`timeseries.NormalizationLevels`. :param string fusionMethod: Normalization method that has to be used if multiple data entries exist within the same normalization bucket. The available methods are defined in :py:data:`timeseries.FusionMethods`. :param string interpolationMethod: Interpolation method that is used if a data entry at a specific time is missing. The available interpolation methods are defined in :py:data:`timeseries.InterpolationMethods`. :raise: Raises a :py:exc:`ValueError` if a normalizationLevel, fusionMethod or interpolationMethod hanve an unknown value.
entailment
def _check_normalization(self): """Checks, if the TimeSeries is normalized. :return: Returns :py:const:`True` if all data entries of the TimeSeries have an equal temporal distance, :py:const:`False` otherwise. """ lastDistance = None distance = None for idx in xrange(len(self) - 1): distance = self[idx+1][0] - self[idx][0] # first run if lastDistance is None: lastDistance = distance continue if lastDistance != distance: return False lastDistance = distance return True
Checks, if the TimeSeries is normalized. :return: Returns :py:const:`True` if all data entries of the TimeSeries have an equal temporal distance, :py:const:`False` otherwise.
entailment
def apply(self, method): """Applies the given ForecastingAlgorithm or SmoothingMethod from the :py:mod:`pycast.methods` module to the TimeSeries. :param BaseMethod method: Method that should be used with the TimeSeries. For more information about the methods take a look into their corresponding documentation. :raise: Raises a StandardError when the TimeSeries was not normalized and the method requires a normalized TimeSeries """ # check, if the methods requirements are fulfilled if method.has_to_be_normalized() and not self._normalized: raise StandardError("method requires a normalized TimeSeries instance.") if method.has_to_be_sorted(): self.sort_timeseries() return method.execute(self)
Applies the given ForecastingAlgorithm or SmoothingMethod from the :py:mod:`pycast.methods` module to the TimeSeries. :param BaseMethod method: Method that should be used with the TimeSeries. For more information about the methods take a look into their corresponding documentation. :raise: Raises a StandardError when the TimeSeries was not normalized and the method requires a normalized TimeSeries
entailment
def sample(self, percentage): """Samples without replacement from the TimeSeries. Returns the sample and the remaining timeseries. The original timeseries is not changed. :param float percentage: How many percent of the original timeseries should be in the sample :return: A tuple containing (sample, rest) as two TimeSeries. :rtype: tuple(TimeSeries,TimeSeries) :raise: Raises a ValueError if percentage is not in (0.0, 1.0). """ if not (0.0 < percentage < 1.0): raise ValueError("Parameter percentage has to be in (0.0, 1.0).") cls = self.__class__ value_count = int(len(self) * percentage) values = random.sample(self, value_count) sample = cls.from_twodim_list(values) rest_values = self._timeseriesData[:] for value in values: rest_values.remove(value) rest = cls.from_twodim_list(rest_values) return sample, rest
Samples without replacement from the TimeSeries. Returns the sample and the remaining timeseries. The original timeseries is not changed. :param float percentage: How many percent of the original timeseries should be in the sample :return: A tuple containing (sample, rest) as two TimeSeries. :rtype: tuple(TimeSeries,TimeSeries) :raise: Raises a ValueError if percentage is not in (0.0, 1.0).
entailment
def add_entry(self, timestamp, data): """Adds a new data entry to the TimeSeries. :param timestamp: Time stamp of the data. This has either to be a float representing the UNIX epochs or a string containing a timestamp in the given format. :param list data: A list containing the actual dimension values. :raise: Raises a :py:exc:`ValueError` if data does not contain as many dimensions as defined in __init__. """ if not isinstance(data, list): data = [data] if len(data) != self._dimensionCount: raise ValueError("data does contain %s instead of %s dimensions.\n %s" % (len(data), self._dimensionCount, data)) self._normalized = self._predefinedNormalized self._sorted = self._predefinedSorted tsformat = self._timestampFormat if tsformat is not None: timestamp = TimeSeries.convert_timestamp_to_epoch(timestamp, tsformat) self._timeseriesData.append([float(timestamp)] + [float(dimensionValue) for dimensionValue in data])
Adds a new data entry to the TimeSeries. :param timestamp: Time stamp of the data. This has either to be a float representing the UNIX epochs or a string containing a timestamp in the given format. :param list data: A list containing the actual dimension values. :raise: Raises a :py:exc:`ValueError` if data does not contain as many dimensions as defined in __init__.
entailment
def to_twodim_list(self): """Serializes the MultiDimensionalTimeSeries data into a two dimensional list of [timestamp, [values]] pairs. :return: Returns a two dimensional list containing [timestamp, [values]] pairs. :rtype: list """ if self._timestampFormat is None: return self._timeseriesData datalist = [] append = datalist.append convert = TimeSeries.convert_epoch_to_timestamp for entry in self._timeseriesData: append([convert(entry[0], self._timestampFormat), entry[1:]]) return datalist
Serializes the MultiDimensionalTimeSeries data into a two dimensional list of [timestamp, [values]] pairs. :return: Returns a two dimensional list containing [timestamp, [values]] pairs. :rtype: list
entailment
def from_twodim_list(cls, datalist, tsformat=None, dimensions=1): """Creates a new MultiDimensionalTimeSeries instance from the data stored inside a two dimensional list. :param list datalist: List containing multiple iterables with at least two values. The first item will always be used as timestamp in the predefined format, the second is a list, containing the dimension values. :param string tsformat: Format of the given timestamp. This is used to convert the timestamp into UNIX epochs, if necessary. For valid examples take a look into the :py:func:`time.strptime` documentation. :param integer dimensions: Number of dimensions the MultiDimensionalTimeSeries contains. :return: Returns a MultiDimensionalTimeSeries instance containing the data from datalist. :rtype: MultiDimensionalTimeSeries """ # create and fill the given TimeSeries ts = MultiDimensionalTimeSeries(dimensions=dimensions) ts.set_timeformat(tsformat) for entry in datalist: ts.add_entry(entry[0], entry[1]) # set the normalization level ts._normalized = ts.is_normalized() ts.sort_timeseries() return ts
Creates a new MultiDimensionalTimeSeries instance from the data stored inside a two dimensional list. :param list datalist: List containing multiple iterables with at least two values. The first item will always be used as timestamp in the predefined format, the second is a list, containing the dimension values. :param string tsformat: Format of the given timestamp. This is used to convert the timestamp into UNIX epochs, if necessary. For valid examples take a look into the :py:func:`time.strptime` documentation. :param integer dimensions: Number of dimensions the MultiDimensionalTimeSeries contains. :return: Returns a MultiDimensionalTimeSeries instance containing the data from datalist. :rtype: MultiDimensionalTimeSeries
entailment
def to_gnuplot_datafile(self, datafilepath): """Dumps the TimeSeries into a gnuplot compatible data file. :param string datafilepath: Path used to create the file. If that file already exists, it will be overwritten! :return: Returns :py:const:`True` if the data could be written, :py:const:`False` otherwise. :rtype: boolean """ try: datafile = file(datafilepath, "wb") except Exception: return False if self._timestampFormat is None: self._timestampFormat = _STR_EPOCHS datafile.write("# time_as_<%s> value...\n" % self._timestampFormat) convert = TimeSeries.convert_epoch_to_timestamp for datapoint in self._timeseriesData: timestamp = datapoint[0] values = datapoint[1:] if self._timestampFormat is not None: timestamp = convert(timestamp, self._timestampFormat) datafile.write("%s %s\n" % (timestamp, " ".join([str(entry) for entry in values]))) datafile.close() return True
Dumps the TimeSeries into a gnuplot compatible data file. :param string datafilepath: Path used to create the file. If that file already exists, it will be overwritten! :return: Returns :py:const:`True` if the data could be written, :py:const:`False` otherwise. :rtype: boolean
entailment
def include(self, filename, is_file): """ Determines if a file should be included in a project for uploading. If file_exclude_regex is empty it will include everything. :param filename: str: filename to match; it should not include the directory :param is_file: bool: is this a file; if not, this will always return True :return: boolean: True if we should include the file. """ if self.exclude_regex and is_file: if self.exclude_regex.match(filename): return False return True else: return True
Determines if a file should be included in a project for uploading. If file_exclude_regex is empty it will include everything. :param filename: str: filename to match; it should not include the directory :param is_file: bool: is this a file; if not, this will always return True :return: boolean: True if we should include the file.
entailment
def add_filename_pattern(self, dir_name, pattern): """ Adds a Unix shell-style wildcard pattern underneath the specified directory :param dir_name: str: directory that contains the pattern :param pattern: str: Unix shell-style wildcard pattern """ full_pattern = '{}{}{}'.format(dir_name, os.sep, pattern) filename_regex = fnmatch.translate(full_pattern) self.regex_list.append(re.compile(filename_regex))
Adds a Unix shell-style wildcard pattern underneath the specified directory :param dir_name: str: directory that contains the pattern :param pattern: str: Unix shell-style wildcard pattern
entailment
def include(self, path): """ Returns False if any pattern matches the path :param path: str: filename path to test :return: boolean: True if we should include this path """ for regex_item in self.regex_list: if regex_item.match(path): return False return True
Returns False if any pattern matches the path :param path: str: filename path to test :return: boolean: True if we should include this path
entailment
def load_directory(self, top_path, followlinks): """ Traverse top_path directory and save patterns in any .ddsignore files found. :param top_path: str: directory name we should traverse looking for ignore files :param followlinks: boolean: should we traverse symbolic links """ for dir_name, child_dirs, child_files in os.walk(top_path, followlinks=followlinks): for child_filename in child_files: if child_filename == DDS_IGNORE_FILENAME: pattern_lines = self._read_non_empty_lines(dir_name, child_filename) self.add_patterns(dir_name, pattern_lines)
Traverse top_path directory and save patterns in any .ddsignore files found. :param top_path: str: directory name we should traverse looking for ignore files :param followlinks: boolean: should we traverse symbolic links
entailment
def add_patterns(self, dir_name, pattern_lines): """ Add patterns that should apply below dir_name :param dir_name: str: directory that contained the patterns :param pattern_lines: [str]: array of patterns """ for pattern_line in pattern_lines: self.pattern_list.add_filename_pattern(dir_name, pattern_line)
Add patterns that should apply below dir_name :param dir_name: str: directory that contained the patterns :param pattern_lines: [str]: array of patterns
entailment
def include(self, path, is_file): """ Returns False if any pattern matches the path :param path: str: filename path to test :param is_file: bool: is this path a file (non-files are always included by the filename filter) :return: boolean: True if we should include this path """ return self.pattern_list.include(path) and self.file_filter.include(os.path.basename(path), is_file)
Returns False if any pattern matches the path :param path: str: filename path to test :param is_file: bool: is this path a file (non-files are always included by the filename filter) :return: boolean: True if we should include this path
entailment
def create_project(self, name, description): """ Create a project with the specified name and description :param name: str: unique name for this project :param description: str: long description of this project :return: str: name of the project """ self._cache_project_list_once() if name in [project.name for project in self.projects]: raise DuplicateNameError("There is already a project named {}".format(name)) self.client.create_project(name, description) self.clear_project_cache() return name
Create a project with the specified name and description :param name: str: unique name for this project :param description: str: long description of this project :return: str: name of the project
entailment
def delete_project(self, project_name): """ Delete a project with the specified name. Raises ItemNotFound if no such project exists :param project_name: str: name of the project to delete :return: """ project = self._get_project_for_name(project_name) project.delete() self.clear_project_cache()
Delete a project with the specified name. Raises ItemNotFound if no such project exists :param project_name: str: name of the project to delete :return:
entailment
def list_files(self, project_name): """ Return a list of file paths that make up project_name :param project_name: str: specifies the name of the project to list contents of :return: [str]: returns a list of remote paths for all files part of the specified project """ project = self._get_project_for_name(project_name) file_path_dict = self._get_file_path_dict_for_project(project) return list(file_path_dict)
Return a list of file paths that make up project_name :param project_name: str: specifies the name of the project to list contents of :return: [str]: returns a list of remote paths for all files part of the specified project
entailment
def download_file(self, project_name, remote_path, local_path=None): """ Download a file from a project When local_path is None the file will be downloaded to the base filename :param project_name: str: name of the project to download a file from :param remote_path: str: remote path specifying which file to download :param local_path: str: optional argument to customize where the file will be downloaded to """ project = self._get_project_for_name(project_name) file = project.get_child_for_path(remote_path) file.download_to_path(local_path)
Download a file from a project When local_path is None the file will be downloaded to the base filename :param project_name: str: name of the project to download a file from :param remote_path: str: remote path specifying which file to download :param local_path: str: optional argument to customize where the file will be downloaded to
entailment
def upload_file(self, project_name, local_path, remote_path=None): """ Upload a file into project creating a new version if it already exists. Will also create project and parent folders if they do not exist. :param project_name: str: name of the project to upload a file to :param local_path: str: path of the local file to upload :param remote_path: str: remote path specifying file to upload to (defaults to local_path basename) """ project = self._get_or_create_project(project_name) file_upload = FileUpload(project, remote_path, local_path) file_upload.run()
Upload a file into project creating a new version if it already exists. Will also create project and parent folders if they do not exist. :param project_name: str: name of the project to upload a file to :param local_path: str: path of the local file to upload :param remote_path: str: remote path specifying file to upload to (defaults to local_path basename)
entailment
def delete_file(self, project_name, remote_path): """ Delete a file or folder from a project :param project_name: str: name of the project containing a file we will delete :param remote_path: str: remote path specifying file to delete """ project = self._get_or_create_project(project_name) remote_file = project.get_child_for_path(remote_path) remote_file.delete()
Delete a file or folder from a project :param project_name: str: name of the project containing a file we will delete :param remote_path: str: remote path specifying file to delete
entailment
def _determineLength(self, fObj): """ Determine how many bytes can be read out of C{fObj} (assuming it is not modified from this point on). If the determination cannot be made, return C{UNKNOWN_LENGTH}. """ try: seek = fObj.seek tell = fObj.tell except AttributeError: return UNKNOWN_LENGTH originalPosition = tell() seek(0, self._SEEK_END) end = tell() seek(originalPosition, self._SEEK_SET) return end - originalPosition
Determine how many bytes can be read out of C{fObj} (assuming it is not modified from this point on). If the determination cannot be made, return C{UNKNOWN_LENGTH}.
entailment
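The seek/tell length probe in `_determineLength` above can be exercised with a minimal standalone sketch. Here `None` stands in for Twisted's `UNKNOWN_LENGTH` sentinel, an assumption made only so the example has no Twisted dependency:

```python
import io
import os

def determine_length(f):
    # Probe how many bytes remain readable, restoring the position afterwards.
    try:
        seek, tell = f.seek, f.tell
    except AttributeError:
        return None  # stands in for UNKNOWN_LENGTH
    original_position = tell()
    seek(0, os.SEEK_END)
    end = tell()
    seek(original_position, os.SEEK_SET)
    return end - original_position

buf = io.BytesIO(b"hello world")
buf.read(6)  # advance past "hello "
print(determine_length(buf))  # 5 bytes remain; the read position is unchanged
```

Because the original position is restored, a subsequent read still picks up exactly where the producer left off.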
def startProducing(self, consumer): """ Start a cooperative task which will read bytes from the input file and write them to C{consumer}. Return a L{Deferred} which fires after all bytes have been written. @param consumer: Any L{IConsumer} provider """ self._task = self._cooperate(self._writeloop(consumer)) d = self._task.whenDone() def maybeStopped(reason): # IBodyProducer.startProducing's Deferred isn't supposed to fire if # stopProducing is called. reason.trap(task.TaskStopped) return defer.Deferred() d.addCallbacks(lambda ignored: None, maybeStopped) return d
Start a cooperative task which will read bytes from the input file and write them to C{consumer}. Return a L{Deferred} which fires after all bytes have been written. @param consumer: Any L{IConsumer} provider
entailment
def _writeloop(self, consumer): """ Return an iterator which reads one chunk of bytes from the input file and writes them to the consumer for each time it is iterated. """ while True: bytes = self._inputFile.read(self._readSize) if not bytes: self._inputFile.close() break consumer.write(bytes) yield None
Return an iterator which reads one chunk of bytes from the input file and writes them to the consumer for each time it is iterated.
entailment
def _count_differences(self): """ Count how many things we will be sending. :return: LocalOnlyCounter contains counts for various items """ different_items = LocalOnlyCounter(self.config.upload_bytes_per_chunk) different_items.walk_project(self.local_project) return different_items
Count how many things we will be sending. :return: LocalOnlyCounter contains counts for various items
entailment
def run(self): """ Upload different items within local_project to remote store showing a progress bar. """ progress_printer = ProgressPrinter(self.different_items.total_items(), msg_verb='sending') upload_settings = UploadSettings(self.config, self.remote_store.data_service, progress_printer, self.project_name_or_id, self.file_upload_post_processor) project_uploader = ProjectUploader(upload_settings) project_uploader.run(self.local_project) progress_printer.finished()
Upload different items within local_project to remote store showing a progress bar.
entailment
def dry_run_report(self): """ Returns text displaying the items that need to be uploaded or a message saying there are no files/folders to upload. :return: str: report text """ project_uploader = ProjectUploadDryRun() project_uploader.run(self.local_project) items = project_uploader.upload_items if not items: return "\n\nNo changes found. Nothing needs to be uploaded.\n\n" else: result = "\n\nFiles/Folders that need to be uploaded:\n" for item in items: result += "{}\n".format(item) result += "\n" return result
Returns text displaying the items that need to be uploaded or a message saying there are no files/folders to upload. :return: str: report text
entailment
def get_upload_report(self): """ Generate an upload report for the remote project and return its content as a string. """ project = self.remote_store.fetch_remote_project(self.project_name_or_id, must_exist=True, include_children=False) report = UploadReport(project.name) report.walk_project(self.local_project) return report.get_content()
Generate an upload report for the remote project and return its content as a string.
entailment
def get_url_msg(self): """ Return a message containing the URL to view the project via the dds portal. """ msg = 'URL to view project' project_id = self.local_project.remote_id url = '{}: https://{}/#/project/{}'.format(msg, self.config.get_portal_url_base(), project_id) return url
Return a message containing the URL to view the project via the dds portal.
entailment
def visit_file(self, item, parent): """ Increments counter if item needs to be sent. :param item: LocalFile :param parent: LocalFolder/LocalProject """ if item.need_to_send: self.files += 1 self.chunks += item.count_chunks(self.bytes_per_chunk)
Increments counter if item needs to be sent. :param item: LocalFile :param parent: LocalFolder/LocalProject
entailment
def result_str(self): """ Return a string representing the totals contained herein. :return: str counts/types string """ return '{}, {}, {}'.format(LocalOnlyCounter.plural_fmt('project', self.projects), LocalOnlyCounter.plural_fmt('folder', self.folders), LocalOnlyCounter.plural_fmt('file', self.files))
Return a string representing the totals contained herein. :return: str counts/types string
entailment
def plural_fmt(name, cnt): """ pluralize name if necessary and combine with cnt :param name: str name of the item type :param cnt: int number items of this type :return: str name and cnt joined """ if cnt == 1: return '{} {}'.format(cnt, name) else: return '{} {}s'.format(cnt, name)
pluralize name if necessary and combine with cnt :param name: str name of the item type :param cnt: int number items of this type :return: str name and cnt joined
entailment
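The `plural_fmt` helper above only appends a plain "s", which is sufficient for the item names it is called with ("project", "folder", "file"). A standalone sketch of the same logic:

```python
def plural_fmt(name, cnt):
    # Append a plain 's' unless the count is exactly 1.
    if cnt == 1:
        return '{} {}'.format(cnt, name)
    return '{} {}s'.format(cnt, name)

print(plural_fmt('file', 1))     # 1 file
print(plural_fmt('folder', 3))   # 3 folders
print(plural_fmt('project', 0))  # 0 projects
```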
def visit_folder(self, item, parent): """ Add folder to the report if it was sent. :param item: LocalFolder folder to possibly add :param parent: LocalFolder/LocalContent not used here """ if item.sent_to_remote: self._add_report_item(item.path, item.remote_id)
Add folder to the report if it was sent. :param item: LocalFolder folder to possibly add :param parent: LocalFolder/LocalContent not used here
entailment
def visit_file(self, item, parent): """ Add file to the report if it was sent. :param item: LocalFile file to possibly add. :param parent: LocalFolder/LocalContent not used here """ if item.sent_to_remote: self._add_report_item(item.path, item.remote_id, item.size, item.get_hash_value())
Add file to the report if it was sent. :param item: LocalFile file to possibly add. :param parent: LocalFolder/LocalContent not used here
entailment
def str_with_sizes(self, max_name, max_remote_id, max_size): """ Create string for report based on internal properties using sizes to line up columns. :param max_name: int width of the name column :param max_remote_id: int width of the remote_id column :return: str info from this report item """ name_str = self.name.ljust(max_name) remote_id_str = self.remote_id.ljust(max_remote_id) size_str = self.size.ljust(max_size) return u'{} {} {} {}'.format(name_str, remote_id_str, size_str, self.file_hash)
Create string for report based on internal properties using sizes to line up columns. :param max_name: int width of the name column :param max_remote_id: int width of the remote_id column :return: str info from this report item
entailment
def upload_async(data_service_auth_data, config, upload_id, filename, index, num_chunks_to_send, progress_queue): """ Method run in another process called from ParallelChunkProcessor.make_and_start_process. :param data_service_auth_data: tuple of auth data for rebuilding DataServiceAuth :param config: dds.Config configuration settings to use during upload :param upload_id: uuid unique id of the 'upload' we are uploading chunks into :param filename: str path to file whose contents we will be uploading :param index: int offset into filename where we will start sending bytes from (must multiply by upload_bytes_per_chunk) :param num_chunks_to_send: int number of chunks of config.upload_bytes_per_chunk size to send. :param progress_queue: ProgressQueue queue to send notifications of progress or errors """ auth = DataServiceAuth(config) auth.set_auth_data(data_service_auth_data) data_service = DataServiceApi(auth, config.url) sender = ChunkSender(data_service, upload_id, filename, config.upload_bytes_per_chunk, index, num_chunks_to_send, progress_queue) try: sender.send() except: error_msg = "".join(traceback.format_exception(*sys.exc_info())) progress_queue.error(error_msg)
Method run in another process called from ParallelChunkProcessor.make_and_start_process. :param data_service_auth_data: tuple of auth data for rebuilding DataServiceAuth :param config: dds.Config configuration settings to use during upload :param upload_id: uuid unique id of the 'upload' we are uploading chunks into :param filename: str path to file whose contents we will be uploading :param index: int offset into filename where we will start sending bytes from (must multiply by upload_bytes_per_chunk) :param num_chunks_to_send: int number of chunks of config.upload_bytes_per_chunk size to send. :param progress_queue: ProgressQueue queue to send notifications of progress or errors
entailment
def upload(self, project_id, parent_kind, parent_id): """ Upload file contents to project within a specified parent. :param project_id: str project uuid :param parent_kind: str type of parent ('dds-project' or 'dds-folder') :param parent_id: str uuid of parent :return: str uuid of the newly uploaded file """ path_data = self.local_file.get_path_data() hash_data = path_data.get_hash() self.upload_id = self.upload_operations.create_upload(project_id, path_data, hash_data, storage_provider_id=self.config.storage_provider_id) ParallelChunkProcessor(self).run() parent_data = ParentData(parent_kind, parent_id) remote_file_data = self.upload_operations.finish_upload(self.upload_id, hash_data, parent_data, self.local_file.remote_id) if self.file_upload_post_processor: self.file_upload_post_processor.run(self.data_service, remote_file_data) return remote_file_data['id']
Upload file contents to project within a specified parent. :param project_id: str project uuid :param parent_kind: str type of parent ('dds-project' or 'dds-folder') :param parent_id: str uuid of parent :return: str uuid of the newly uploaded file
entailment
def _create_upload(self, project_id, path_data, hash_data, remote_filename=None, storage_provider_id=None, chunked=True): """ Create upload for uploading multiple chunks or the non-chunked variety (includes upload url). :param project_id: str: uuid of the project :param path_data: PathData: holds file system data about the file we are uploading :param hash_data: HashData: contains hash alg and value for the file we are uploading :param remote_filename: str: name to use for our remote file (defaults to path_data basename otherwise) :param storage_provider_id: str: optional storage provider id :param chunked: bool: should we create a chunked upload :return: str: uuid for the upload """ if not remote_filename: remote_filename = path_data.name() mime_type = path_data.mime_type() size = path_data.size() def func(): return self.data_service.create_upload(project_id, remote_filename, mime_type, size, hash_data.value, hash_data.alg, storage_provider_id=storage_provider_id, chunked=chunked) resp = retry_until_resource_is_consistent(func, self.waiting_monitor) return resp.json()
Create upload for uploading multiple chunks or the non-chunked variety (includes upload url). :param project_id: str: uuid of the project :param path_data: PathData: holds file system data about the file we are uploading :param hash_data: HashData: contains hash alg and value for the file we are uploading :param remote_filename: str: name to use for our remote file (defaults to path_data basename otherwise) :param storage_provider_id: str: optional storage provider id :param chunked: bool: should we create a chunked upload :return: str: uuid for the upload
entailment
def create_upload(self, project_id, path_data, hash_data, remote_filename=None, storage_provider_id=None): """ Create a chunked upload id to pass to create_file_chunk_url to create upload urls. :param project_id: str: uuid of the project :param path_data: PathData: holds file system data about the file we are uploading :param hash_data: HashData: contains hash alg and value for the file we are uploading :param remote_filename: str: name to use for our remote file (defaults to path_data basename otherwise) :param storage_provider_id: str: optional storage provider id :return: str: uuid for the upload """ upload_response = self._create_upload(project_id, path_data, hash_data, remote_filename=remote_filename, storage_provider_id=storage_provider_id, chunked=True) return upload_response['id']
Create a chunked upload id to pass to create_file_chunk_url to create upload urls. :param project_id: str: uuid of the project :param path_data: PathData: holds file system data about the file we are uploading :param hash_data: HashData: contains hash alg and value for the file we are uploading :param remote_filename: str: name to use for our remote file (defaults to path_data basename otherwise) :param storage_provider_id: str: optional storage provider id :return: str: uuid for the upload
entailment
def create_upload_and_chunk_url(self, project_id, path_data, hash_data, remote_filename=None, storage_provider_id=None): """ Create a non-chunked upload that returns upload id and upload url. This type of upload doesn't allow additional upload urls. For single chunk files this method is more efficient than create_upload/create_file_chunk_url. :param project_id: str: uuid of the project :param path_data: PathData: holds file system data about the file we are uploading :param hash_data: HashData: contains hash alg and value for the file we are uploading :param remote_filename: str: name to use for our remote file (defaults to path_data basename otherwise) :param storage_provider_id: str: optional storage provider id :return: str, dict: uuid for the upload, upload chunk url dict """ upload_response = self._create_upload(project_id, path_data, hash_data, remote_filename=remote_filename, storage_provider_id=storage_provider_id, chunked=False) return upload_response['id'], upload_response['signed_url']
Create a non-chunked upload that returns upload id and upload url. This type of upload doesn't allow additional upload urls. For single chunk files this method is more efficient than create_upload/create_file_chunk_url. :param project_id: str: uuid of the project :param path_data: PathData: holds file system data about the file we are uploading :param hash_data: HashData: contains hash alg and value for the file we are uploading :param remote_filename: str: name to use for our remote file (defaults to path_data basename otherwise) :param storage_provider_id: str: optional storage provider id :return: str, dict: uuid for the upload, upload chunk url dict
entailment
def create_file_chunk_url(self, upload_id, chunk_num, chunk): """ Create a url for uploading a particular chunk to the datastore. :param upload_id: str: uuid of the upload this chunk is for :param chunk_num: int: where in the file does this chunk go (0-based index) :param chunk: bytes: data we are going to upload :return: """ chunk_len = len(chunk) hash_data = HashData.create_from_chunk(chunk) one_based_index = chunk_num + 1 def func(): return self.data_service.create_upload_url(upload_id, one_based_index, chunk_len, hash_data.value, hash_data.alg) resp = retry_until_resource_is_consistent(func, self.waiting_monitor) return resp.json()
Create a url for uploading a particular chunk to the datastore. :param upload_id: str: uuid of the upload this chunk is for :param chunk_num: int: where in the file does this chunk go (0-based index) :param chunk: bytes: data we are going to upload :return:
entailment
def send_file_external(self, url_json, chunk): """ Send chunk to external store specified in url_json. Raises ValueError on upload failure. :param url_json: dict contains where/how to upload chunk :param chunk: data to be uploaded """ http_verb = url_json['http_verb'] host = url_json['host'] url = url_json['url'] http_headers = url_json['http_headers'] resp = self._send_file_external_with_retry(http_verb, host, url, http_headers, chunk) if resp.status_code != 200 and resp.status_code != 201: raise ValueError("Failed to send file to external store. Error: " + str(resp.status_code) + " " + host + url)
Send chunk to external store specified in url_json. Raises ValueError on upload failure. :param url_json: dict contains where/how to upload chunk :param chunk: data to be uploaded
entailment
def _send_file_external_with_retry(self, http_verb, host, url, http_headers, chunk): """ Send chunk to host, url using http_verb. If http_verb is PUT and a connection error occurs retry a few times. Pauses between retries. Raises if unsuccessful. """ count = 0 retry_times = 1 if http_verb == 'PUT': retry_times = SEND_EXTERNAL_PUT_RETRY_TIMES while True: try: return self.data_service.send_external(http_verb, host, url, http_headers, chunk) except requests.exceptions.ConnectionError: count += 1 if count < retry_times: if count == 1: # Only show a warning the first time we fail to send a chunk self._show_retry_warning(host) time.sleep(SEND_EXTERNAL_RETRY_SECONDS) self.data_service.recreate_requests_session() else: raise
Send chunk to host, url using http_verb. If http_verb is PUT and a connection error occurs retry a few times. Pauses between retries. Raises if unsuccessful.
entailment
def _show_retry_warning(host): """ Displays a message on stderr that we lost connection to a host and will retry. :param host: str: name of the host we are trying to communicate with """ sys.stderr.write("\nConnection to {} failed. Retrying.\n".format(host)) sys.stderr.flush()
Displays a message on stderr that we lost connection to a host and will retry. :param host: str: name of the host we are trying to communicate with
entailment
def finish_upload(self, upload_id, hash_data, parent_data, remote_file_id): """ Complete the upload and create or update the file. :param upload_id: str: uuid of the upload we are completing :param hash_data: HashData: hash info about the file :param parent_data: ParentData: info about the parent of this file :param remote_file_id: str: uuid of this file if it already exists or None if it is a new file :return: dict: DukeDS details about this file """ self.data_service.complete_upload(upload_id, hash_data.value, hash_data.alg) if remote_file_id: result = self.data_service.update_file(remote_file_id, upload_id) return result.json() else: result = self.data_service.create_file(parent_data.kind, parent_data.id, upload_id) return result.json()
Complete the upload and create or update the file. :param upload_id: str: uuid of the upload we are completing :param hash_data: HashData: hash info about the file :param parent_data: ParentData: info about the parent of this file :param remote_file_id: str: uuid of this file if it already exists or None if it is a new file :return: dict: DukeDS details about this file
entailment
def run(self): """ Sends contents of a local file to a remote data service. """ processes = [] progress_queue = ProgressQueue(Queue()) num_chunks = ParallelChunkProcessor.determine_num_chunks(self.config.upload_bytes_per_chunk, self.local_file.size) work_parcels = ParallelChunkProcessor.make_work_parcels(self.config.upload_workers, num_chunks) for (index, num_items) in work_parcels: processes.append(self.make_and_start_process(index, num_items, progress_queue)) wait_for_processes(processes, num_chunks, progress_queue, self.watcher, self.local_file)
Sends contents of a local file to a remote data service.
entailment
def determine_num_chunks(chunk_size, file_size): """ Figure out how many pieces we are sending the file in. NOTE: duke-data-service requires an empty chunk to be uploaded for empty files. """ if file_size == 0: return 1 return int(math.ceil(float(file_size) / float(chunk_size)))
Figure out how many pieces we are sending the file in. NOTE: duke-data-service requires an empty chunk to be uploaded for empty files.
entailment
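The chunk-count rule in `determine_num_chunks` above (ceiling division, with a forced single empty chunk for empty files) can be checked in isolation:

```python
import math

def determine_num_chunks(chunk_size, file_size):
    # duke-data-service requires one (empty) chunk even for empty files.
    if file_size == 0:
        return 1
    return int(math.ceil(float(file_size) / float(chunk_size)))

print(determine_num_chunks(100, 0))    # 1 (empty file still sends one chunk)
print(determine_num_chunks(100, 100))  # 1 (exact fit)
print(determine_num_chunks(100, 250))  # 3 (two full chunks plus a partial one)
```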
def make_work_parcels(upload_workers, num_chunks): """ Make groups so we can split up num_chunks into similar sizes. Rounds up trying to keep work evenly split so sometimes it will not use all workers. For very small numbers it can result in (upload_workers-1) total workers. For example if there are too few items to distribute. :param upload_workers: int target number of workers :param num_chunks: int number of total items we need to send :return [(index, num_items)] - an array of tuples where each array element will be processed in a separate process. """ chunks_per_worker = int(math.ceil(float(num_chunks) / float(upload_workers))) return ParallelChunkProcessor.divide_work(range(num_chunks), chunks_per_worker)
Make groups so we can split up num_chunks into similar sizes. Rounds up trying to keep work evenly split so sometimes it will not use all workers. For very small numbers it can result in (upload_workers-1) total workers. For example if there are too few items to distribute. :param upload_workers: int target number of workers :param num_chunks: int number of total items we need to send :return [(index, num_items)] - an array of tuples where each array element will be processed in a separate process.
entailment
def divide_work(list_of_indexes, batch_size): """ Given a sequential list of indexes split them into num_parts. :param list_of_indexes: [int] list of indexes to be divided up :param batch_size: number of items to put in batch(not exact obviously) :return: [(int,int)] list of (index, num_items) to be processed """ grouped_indexes = [list_of_indexes[i:i + batch_size] for i in range(0, len(list_of_indexes), batch_size)] return [(batch[0], len(batch)) for batch in grouped_indexes]
Given a sequential list of indexes split them into num_parts. :param list_of_indexes: [int] list of indexes to be divided up :param batch_size: number of items to put in batch(not exact obviously) :return: [(int,int)] list of (index, num_items) to be processed
entailment
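The batching in `divide_work` above reduces each batch to its starting index and length, which is all the worker processes need. A standalone sketch of the same slicing:

```python
def divide_work(list_of_indexes, batch_size):
    # Slice the index sequence into batches, then record each batch's
    # starting index and length.
    grouped_indexes = [list_of_indexes[i:i + batch_size]
                       for i in range(0, len(list_of_indexes), batch_size)]
    return [(batch[0], len(batch)) for batch in grouped_indexes]

# 7 chunks split into batches of 3: the last worker gets the remainder.
print(divide_work(list(range(7)), 3))  # [(0, 3), (3, 3), (6, 1)]
```

Note the caller passes a `range` object directly; this still works because slicing a `range` yields another `range`, and indexing it yields the starting value.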
def make_and_start_process(self, index, num_items, progress_queue): """ Create and start a process to upload num_items chunks from our file starting at index. :param index: int offset into file (must be multiplied by upload_bytes_per_chunk to get actual location) :param num_items: int number of chunks to send :param progress_queue: ProgressQueue queue to send notifications of progress or errors """ process = Process(target=upload_async, args=(self.data_service.auth.get_auth_data(), self.config, self.upload_id, self.local_file.path, index, num_items, progress_queue)) process.start() return process
Create and start a process to upload num_items chunks from our file starting at index. :param index: int offset into file (must be multiplied by upload_bytes_per_chunk to get actual location) :param num_items: int number of chunks to send :param progress_queue: ProgressQueue queue to send notifications of progress or errors
entailment
def send(self): """ For each chunk we need to send, create upload url and send bytes. Raises exception on error. """ sent_chunks = 0 chunk_num = self.index with open(self.filename, 'rb') as infile: infile.seek(self.index * self.chunk_size) while sent_chunks != self.num_chunks_to_send: chunk = infile.read(self.chunk_size) self._send_chunk(chunk, chunk_num) self.progress_queue.processed(1) chunk_num += 1 sent_chunks += 1
For each chunk we need to send, create upload url and send bytes. Raises exception on error.
entailment
def _send_chunk(self, chunk, chunk_num): """ Send a single chunk to the remote service. :param chunk: bytes data we are uploading :param chunk_num: int number associated with this chunk """ url_info = self.upload_operations.create_file_chunk_url(self.upload_id, chunk_num, chunk) self.upload_operations.send_file_external(url_info, chunk)
Send a single chunk to the remote service. :param chunk: bytes data we are uploading :param chunk_num: int number associated with this chunk
entailment
def _in_valid_interval(self, parameter, value): """Returns if the parameter is within its valid interval. :param string parameter: Name of the parameter that has to be checked. :param numeric value: Value of the parameter. :return: Returns :py:const:`True` if the value for the given parameter is valid, :py:const:`False` otherwise. :rtype: boolean """ # return True if no interval is defined for the parameter if parameter not in self._parameterIntervals: return True interval = self._parameterIntervals[parameter] if interval[2] and interval[3]: return interval[0] <= value <= interval[1] if not interval[2] and interval[3]: return interval[0] < value <= interval[1] if interval[2] and not interval[3]: return interval[0] <= value < interval[1] # remaining case: both boundaries are open return interval[0] < value < interval[1]
Returns if the parameter is within its valid interval. :param string parameter: Name of the parameter that has to be checked. :param numeric value: Value of the parameter. :return: Returns :py:const:`True` if the value for the given parameter is valid, :py:const:`False` otherwise. :rtype: boolean
entailment
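The boundary-inclusivity branches in _in_valid_interval above reduce to two conditional comparisons. A minimal sketch, assuming the interval tuple layout (lower, upper, lower_closed, upper_closed) implied by the original code:

```python
def in_interval(value, lower, upper, lower_closed, upper_closed):
    # Each bound is checked with <= when closed, with < when open.
    lower_ok = lower <= value if lower_closed else lower < value
    upper_ok = value <= upper if upper_closed else value < upper
    return lower_ok and upper_ok
```

This collapses the four if-branches of the original into one pair of checks while preserving the same open/closed semantics.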
def _get_value_error_message_for_invalid_prarameter(self, parameter, value): """Returns the ValueError message for the given parameter. :param string parameter: Name of the parameter the message has to be created for. :param numeric value: Value outside the parameter's interval. :return: Returns a string containing the message. :rtype: string """ # return None if no interval is defined for the parameter if parameter not in self._parameterIntervals: return interval = self._parameterIntervals[parameter] return "%s has to be in %s%s, %s%s. Current value is %s." % ( parameter, BaseMethod._interval_definitions[interval[2]][0], interval[0], interval[1], BaseMethod._interval_definitions[interval[3]][1], value )
Returns the ValueError message for the given parameter. :param string parameter: Name of the parameter the message has to be created for. :param numeric value: Value outside the parameter's interval. :return: Returns a string containing the message. :rtype: string
entailment
def set_parameter(self, name, value): """Sets a parameter for the BaseMethod. :param string name: Name of the parameter that has to be checked. :param numeric value: Value of the parameter. """ if not self._in_valid_interval(name, value): raise ValueError(self._get_value_error_message_for_invalid_prarameter(name, value)) #if name in self._parameters: # print "Parameter %s already existed. Its old value will be replaced with %s" % (name, value) self._parameters[name] = value
Sets a parameter for the BaseMethod. :param string name: Name of the parameter that has to be checked. :param numeric value: Value of the parameter.
entailment
def can_be_executed(self): """Returns if the method can already be executed. :return: Returns :py:const:`True` if all required parameters were already set, False otherwise. :rtype: boolean """ missingParams = [rp for rp in self._requiredParameters if rp not in self._parameters] return len(missingParams) == 0
Returns if the method can already be executed. :return: Returns :py:const:`True` if all required parameters were already set, False otherwise. :rtype: boolean
entailment
def set_parameter(self, name, value): """Sets a parameter for the BaseForecastingMethod. :param string name: Name of the parameter. :param numeric value: Value of the parameter. """ # set the forecast until variable to None if necessary if name == "valuesToForecast": self._forecastUntil = None # continue with the parent's implementation return super(BaseForecastingMethod, self).set_parameter(name, value)
Sets a parameter for the BaseForecastingMethod. :param string name: Name of the parameter. :param numeric value: Value of the parameter.
entailment
def forecast_until(self, timestamp, tsformat=None): """Sets the forecasting goal (timestamp wise). This function enables the automatic determination of valuesToForecast. :param timestamp: timestamp containing the end date of the forecast. :param string tsformat: Format of the timestamp. This is used to convert the timestamp from UNIX epochs, if necessary. For valid examples take a look into the :py:func:`time.strptime` documentation. """ if tsformat is not None: timestamp = TimeSeries.convert_timestamp_to_epoch(timestamp, tsformat) self._forecastUntil = timestamp
Sets the forecasting goal (timestamp wise). This function enables the automatic determination of valuesToForecast. :param timestamp: timestamp containing the end date of the forecast. :param string tsformat: Format of the timestamp. This is used to convert the timestamp from UNIX epochs, if necessary. For valid examples take a look into the :py:func:`time.strptime` documentation.
entailment
def _calculate_values_to_forecast(self, timeSeries): """Calculates the number of values, that need to be forecasted to match the goal set in forecast_until. This sets the parameter "valuesToForecast" and should be called at the beginning of the :py:meth:`BaseMethod.execute` implementation. :param TimeSeries timeSeries: Should be a sorted and normalized TimeSeries instance. :raise: Raises a :py:exc:`ValueError` if the TimeSeries is either not normalized or sorted. """ # do not set anything, if it is not required if self._forecastUntil is None: return # check the TimeSeries for correctness if not timeSeries.is_sorted(): raise ValueError("timeSeries has to be sorted.") if not timeSeries.is_normalized(): raise ValueError("timeSeries has to be normalized.") timediff = timeSeries[-1][0] - timeSeries[-2][0] forecastSpan = self._forecastUntil - timeSeries[-1][0] self.set_parameter("valuesToForecast", int(forecastSpan / timediff) + 1)
Calculates the number of values, that need to be forecasted to match the goal set in forecast_until. This sets the parameter "valuesToForecast" and should be called at the beginning of the :py:meth:`BaseMethod.execute` implementation. :param TimeSeries timeSeries: Should be a sorted and normalized TimeSeries instance. :raise: Raises a :py:exc:`ValueError` if the TimeSeries is either not normalized or sorted.
entailment
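The arithmetic in _calculate_values_to_forecast can be shown in isolation. A sketch assuming a plain list of equidistant epoch timestamps rather than a TimeSeries instance:

```python
def values_to_forecast(timestamps, forecast_until):
    # Step size between the last two (normalized) entries.
    timediff = timestamps[-1] - timestamps[-2]
    # Remaining span to cover, converted to a number of steps;
    # the +1 rounds up so the goal timestamp is always reached.
    forecast_span = forecast_until - timestamps[-1]
    return int(forecast_span / timediff) + 1
```

For a series with step 10 ending at epoch 20 and a goal of epoch 50, this yields 4 values to forecast (30, 40, 50, plus the rounding step).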
def execute(self, timeSeries): """Creates a new TimeSeries containing the SMA values for the predefined windowsize. :param TimeSeries timeSeries: The TimeSeries used to calculate the simple moving average values. :return: TimeSeries object containing the simple moving average. :rtype: TimeSeries :raise: Raises a :py:exc:`ValueError` if the defined windowsize is larger than the number of elements in timeSeries. :note: This implementation aims to support independent for loop execution. """ windowsize = self._parameters["windowsize"] if len(timeSeries) < windowsize: raise ValueError("windowsize is larger than the number of elements in timeSeries.") tsLength = len(timeSeries) nbrOfLoopRuns = tsLength - windowsize + 1 res = TimeSeries() for idx in xrange(nbrOfLoopRuns): end = idx + windowsize data = timeSeries[idx:end] timestamp = data[windowsize//2][0] value = sum([i[1] for i in data])/windowsize res.add_entry(timestamp, value) res.sort_timeseries() return res
Creates a new TimeSeries containing the SMA values for the predefined windowsize. :param TimeSeries timeSeries: The TimeSeries used to calculate the simple moving average values. :return: TimeSeries object containing the simple moving average. :rtype: TimeSeries :raise: Raises a :py:exc:`ValueError` if the defined windowsize is larger than the number of elements in timeSeries. :note: This implementation aims to support independent for loop execution.
entailment
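The centered windowing used by execute above can be sketched without the TimeSeries class, assuming the series is a sorted list of (timestamp, value) pairs:

```python
def simple_moving_average(series, windowsize):
    if len(series) < windowsize:
        raise ValueError("windowsize is larger than the number of elements.")
    result = []
    for idx in range(len(series) - windowsize + 1):
        window = series[idx:idx + windowsize]
        # The timestamp is taken from the middle of the window (centered SMA),
        # matching data[windowsize//2][0] in the original.
        timestamp = window[windowsize // 2][0]
        value = sum(v for _, v in window) / windowsize
        result.append((timestamp, value))
    return result
```

For four points with windowsize 3, two averaged entries are produced, anchored at the middle timestamps of their windows.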
def add(self, child, min_occurs=1): """Add a child node. @param child: The schema for the child node. @param min_occurs: The minimum number of times the child node must occur; must be 0 or 1 (defaults to 1). """ if min_occurs not in (0, 1): raise RuntimeError("Unexpected min bound for node schema") self.children[child.tag] = child self.children_min_occurs[child.tag] = min_occurs return child
Add a child node. @param child: The schema for the child node. @param min_occurs: The minimum number of times the child node must occur, if C{None} is given the default is 1.
entailment
def _create_child(self, tag): """Create a new child element with the given tag.""" return etree.SubElement(self._root, self._get_namespace_tag(tag))
Create a new child element with the given tag.
entailment
def _find_child(self, tag): """Find the child C{etree.Element} with the matching C{tag}. @raises L{WSDLParseError}: If more than one such elements are found. """ tag = self._get_namespace_tag(tag) children = self._root.findall(tag) if len(children) > 1: raise WSDLParseError("Duplicate tag '%s'" % tag) if len(children) == 0: return None return children[0]
Find the child C{etree.Element} with the matching C{tag}. @raises L{WSDLParseError}: If more than one such elements are found.
entailment
def _check_value(self, tag, value): """Ensure that the element matching C{tag} can have the given C{value}. @param tag: The tag to consider. @param value: The value to check. @return: The unchanged L{value}, if valid. @raises L{WSDLParseError}: If the value is invalid. """ if value is None and self._schema.children_min_occurs[tag] > 0: raise WSDLParseError("Missing tag '%s'" % tag) return value
Ensure that the element matching C{tag} can have the given C{value}. @param tag: The tag to consider. @param value: The value to check @return: The unchanged L{value}, if valid. @raises L{WSDLParseError}: If the value is invalid.
entailment
def _get_tag(self, name): """Get the L{NodeItem} attribute name for the given C{tag}.""" if name.endswith("_"): if name[:-1] in self._schema.reserved: return name[:-1] return name
Get the L{NodeItem} attribute name for the given C{tag}.
entailment
def _get_namespace_tag(self, tag): """Return the given C{tag} with the namespace prefix added, if any.""" if self._namespace is not None: tag = "{%s}%s" % (self._namespace, tag) return tag
Return the given C{tag} with the namespace prefix added, if any.
entailment
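The namespace prefixing performed by _get_namespace_tag matches the qualified-tag form that xml.etree expects. A sketch with a made-up namespace URI:

```python
import xml.etree.ElementTree as etree

def namespace_tag(tag, namespace):
    # etree's "universal name" form: {namespace-uri}localname
    if namespace is not None:
        tag = "{%s}%s" % (namespace, tag)
    return tag

# Usage against a namespaced document (the namespace URI is hypothetical).
NS = "http://example.com/ns"
root = etree.fromstring(
    '<root xmlns="http://example.com/ns"><child>x</child></root>')
found = root.findall(namespace_tag("child", NS))
```

Without the prefix, findall("child") would match nothing in a default-namespaced document, which is why the qualified form is built first.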
def _get_schema(self, tag): """Return the child schema for the given C{tag}. @raises L{WSDLParseError}: If the tag doesn't belong to the schema. """ schema = self._schema.children.get(tag) if not schema: raise WSDLParseError("Unknown tag '%s'" % tag) return schema
Return the child schema for the given C{tag}. @raises L{WSDLParseError}: If the tag doesn't belong to the schema.
entailment