sentence1 (string, lengths 52 to 3.87M)
sentence2 (string, lengths 1 to 47.2k)
label (stringclasses, 1 value)
def separate_walks_turns(data, window=[1, 1, 1]): """ Will separate peaks into the clusters by following the trend in the clusters array. This is useful because scipy's k-means clustering will give us a continuous clusters array. :param clusters array: A continuous array representing different classes. :param peaks array: The peaks that we want to separate into the classes from the clusters. :return walks arrays: An array of arrays that will have all the peaks corresponding to every individual walk. :return turns arrays: Array of arrays which has all the indices of the peaks that correspond to turning. """ clusters, peaks, promi = cluster_walk_turn(data, window=window) group_one = [] group_two = [] start = 0 for i in range(1, len(clusters)): if clusters[i-1] != clusters[i]: assert np.all(clusters[start: i] == clusters[start]), 'Some values are mixed up, please check!' add = group_one if clusters[start] == 0 else group_two add.append(peaks[start: i]) start = i # hacky fix for the last part of the signal ... # I need to change this ... if i == len(clusters)-1: if not peaks[start] in add[-1]: add = group_one if clusters[start] == 0 else group_two add.append(peaks[start: ]) maxes_one = [np.max(data[c]) for c in group_one] maxes_two = [np.max(data[c]) for c in group_two] walks, turns = group_two, group_one if np.max(maxes_one) > np.max(maxes_two): walks, turns = group_one, group_two # let's drop any turns at the end of the signal # if len(turns[-1]) > len(walks[-1]): # turns.pop() return walks, turns
Will separate peaks into the clusters by following the trend in the clusters array. This is useful because scipy's k-means clustering will give us a continuous clusters array. :param clusters array: A continuous array representing different classes. :param peaks array: The peaks that we want to separate into the classes from the clusters. :return walks arrays: An array of arrays that will have all the peaks corresponding to every individual walk. :return turns arrays: Array of arrays which has all the indices of the peaks that correspond to turning.
entailment
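The core of `separate_walks_turns` is splitting a peaks array into runs that share the same cluster label. A minimal, standalone sketch of just that boundary detection (the function name and the max-based walk/turn assignment are omitted; this is a hypothetical simplification, not the library's code):

```python
import numpy as np

def split_peaks_by_cluster(clusters, peaks):
    """Split `peaks` into runs of identical labels in `clusters` (labels 0/1)."""
    groups = {0: [], 1: []}
    start = 0
    for i in range(1, len(clusters) + 1):
        # a run ends at a label change or at the end of the array
        if i == len(clusters) or clusters[i] != clusters[start]:
            groups[clusters[start]].append(peaks[start:i])
            start = i
    return groups[0], groups[1]

clusters = np.array([0, 0, 1, 1, 1, 0])
peaks = np.array([3, 10, 25, 40, 52, 70])
zeros, ones = split_peaks_by_cluster(clusters, peaks)
# zeros -> [array([3, 10]), array([70])], ones -> [array([25, 40, 52])]
```

Handling the last run inside the loop bound (`len(clusters) + 1`) avoids the "hacky fix" the original comments mention.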
def centroid_sort(centroids): """ Sort centroids. This is required so that the same cluster centroid is always the 0th one. It should also be the \ most negative. Order defined by the Euclidean distance between the centroid and an arbitrary "small" point \ [-100, -100] (in each dimension) to account for possible negatives. Cluster 0 is the closest to that point, etc. 0. Set up >>> from numpy.testing import assert_array_equal 1. Single centroids just return themselves. >>> centroid_sort(array([[1.1, 2.2]])) array([[ 1.1, 2.2]]) >>> centroid_sort(array([[1.1, 2.2, 3.3]])) array([[ 1.1, 2.2, 3.3]]) 2. Positive 2d centroids are ordered. >>> centroids = array([ ... [5.34443858, 0.63266844], # 3 ... [2.69156877, 0.76448578], # 1 ... [4.74784197, 1.0815235 ], # 2 ... [1.02330015, 0.16788118], # 0 ... ]) >>> expected_sorted_centroids = array([ ... [1.02330015, 0.16788118], # 0 ... [2.69156877, 0.76448578], # 1 ... [4.74784197, 1.0815235 ], # 2 ... [5.34443858, 0.63266844], # 3 ... ]) >>> result = centroid_sort(centroids) >>> assert_array_equal(result, expected_sorted_centroids) 3. 3d centroids spanning the origin are ordered. >>> centroids = array([ ... [ 3, 3, 4 ], # 3 ... [ 1.5, 2, 3 ], # 2 ... [-1, -1, -1 ], # 0 ... [ 0, 1, 0.5], # 1 ... ]) >>> expected_sorted_centroids = array([ ... [-1, -1, -1 ], # 0 ... [ 0, 1, 0.5], # 1 ... [ 1.5, 2, 3 ], # 2 ... [ 3, 3, 4 ], # 3 ... ]) >>> result = centroid_sort(centroids) >>> assert_array_equal(result, expected_sorted_centroids) :param centroids: array centroids :type centroids: numpy array :return centroids: array centroids :rtype centroids: numpy array """ dimensions = len(centroids[0]) negative_base_point = array(dimensions*[-100]) decorated = [ (euclidean(centroid, negative_base_point), centroid) for centroid in centroids ] decorated.sort() return array([centroid for dist, centroid in decorated])
Sort centroids. This is required so that the same cluster centroid is always the 0th one. It should also be the \ most negative. Order defined by the Euclidean distance between the centroid and an arbitrary "small" point \ [-100, -100] (in each dimension) to account for possible negatives. Cluster 0 is the closest to that point, etc. 0. Set up >>> from numpy.testing import assert_array_equal 1. Single centroids just return themselves. >>> centroid_sort(array([[1.1, 2.2]])) array([[ 1.1, 2.2]]) >>> centroid_sort(array([[1.1, 2.2, 3.3]])) array([[ 1.1, 2.2, 3.3]]) 2. Positive 2d centroids are ordered. >>> centroids = array([ ... [5.34443858, 0.63266844], # 3 ... [2.69156877, 0.76448578], # 1 ... [4.74784197, 1.0815235 ], # 2 ... [1.02330015, 0.16788118], # 0 ... ]) >>> expected_sorted_centroids = array([ ... [1.02330015, 0.16788118], # 0 ... [2.69156877, 0.76448578], # 1 ... [4.74784197, 1.0815235 ], # 2 ... [5.34443858, 0.63266844], # 3 ... ]) >>> result = centroid_sort(centroids) >>> assert_array_equal(result, expected_sorted_centroids) 3. 3d centroids spanning the origin are ordered. >>> centroids = array([ ... [ 3, 3, 4 ], # 3 ... [ 1.5, 2, 3 ], # 2 ... [-1, -1, -1 ], # 0 ... [ 0, 1, 0.5], # 1 ... ]) >>> expected_sorted_centroids = array([ ... [-1, -1, -1 ], # 0 ... [ 0, 1, 0.5], # 1 ... [ 1.5, 2, 3 ], # 2 ... [ 3, 3, 4 ], # 3 ... ]) >>> result = centroid_sort(centroids) >>> assert_array_equal(result, expected_sorted_centroids) :param centroids: array centroids :type centroids: numpy array :return centroids: array centroids :rtype centroids: numpy array
entailment
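The sort key in `centroid_sort` is the Euclidean distance to a point at -100 in every dimension. A self-contained sketch of the same idea, assuming only numpy (`np.linalg.norm` stands in for scipy's `euclidean`; the function name is hypothetical):

```python
import numpy as np

def centroid_sort_sketch(centroids):
    # Distance from a point at -100 in every dimension; sorting by it
    # makes the "most negative" centroid come first (cluster 0).
    base = np.full(centroids.shape[1], -100.0)
    dists = np.linalg.norm(centroids - base, axis=1)
    return centroids[np.argsort(dists)]

centroids = np.array([
    [5.3, 0.6],
    [1.0, 0.2],
    [2.7, 0.8],
])
ordered = centroid_sort_sketch(centroids)
# ordered rows: [1.0, 0.2], [2.7, 0.8], [5.3, 0.6]
```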
def non_zero_index(arr): """ Raises: ValueError: If no non-zero rows can be found. 0. Empty array raises. >>> arr = array([]) >>> non_zero_index(arr) 1. Array with zero values raises. >>> arr = array([ ... [0, 0], ... [0, 0], ... [0, 0, 0], ... ]) >>> non_zero_index(arr) 2. Array with a non-zero value will have that index returned. >>> arr = array([ ... [0, 0], ... [0, 0, 0], ... [1, 0, 0], # Still has zeros ... [1, 1, 0], ... [0, 1, 1], ... [-1, 0, 0], ... [-1, 2, 3], # First non-zero array ... [1, 2, 3], ... ]) >>> non_zero_index(arr) 6 :param arr: array :type arr: numpy array :return index: Index of first non-zero entry in an array. :rtype index: int """ for index, row in enumerate(arr): if non_zero_row(row): return index raise ValueError('No non-zero values')
Raises: ValueError: If no non-zero rows can be found. 0. Empty array raises. >>> arr = array([]) >>> non_zero_index(arr) 1. Array with zero values raises. >>> arr = array([ ... [0, 0], ... [0, 0], ... [0, 0, 0], ... ]) >>> non_zero_index(arr) 2. Array with a non-zero value will have that index returned. >>> arr = array([ ... [0, 0], ... [0, 0, 0], ... [1, 0, 0], # Still has zeros ... [1, 1, 0], ... [0, 1, 1], ... [-1, 0, 0], ... [-1, 2, 3], # First non-zero array ... [1, 2, 3], ... ]) >>> non_zero_index(arr) 6 :param arr: array :type arr: numpy array :return index: Index of first non-zero entry in an array. :rtype index: int
entailment
def non_zero_row(arr): """ 0. Empty row returns False. >>> arr = array([]) >>> non_zero_row(arr) False 1. Row with a zero returns False. >>> arr = array([1, 4, 3, 0, 5, -1, -2]) >>> non_zero_row(arr) False 2. Row with no zeros returns True. >>> arr = array([-1, -0.1, 0.001, 2]) >>> non_zero_row(arr) True :param arr: array :type arr: numpy array :return empty: If row is completely free of zeros :rtype empty: bool """ if len(arr) == 0: return False for item in arr: if item == 0: return False return True
0. Empty row returns False. >>> arr = array([]) >>> non_zero_row(arr) False 1. Row with a zero returns False. >>> arr = array([1, 4, 3, 0, 5, -1, -2]) >>> non_zero_row(arr) False 2. Row with no zeros returns True. >>> arr = array([-1, -0.1, 0.001, 2]) >>> non_zero_row(arr) True :param arr: array :type arr: numpy array :return empty: If row is completely free of zeros :rtype empty: bool
entailment
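The `non_zero_row`/`non_zero_index` pair loops element by element; the same checks can be expressed with numpy. A sketch under the assumption that rows are 1-D numeric arrays (function names with the `_np` suffix are hypothetical):

```python
import numpy as np

def non_zero_row_np(row):
    """True only if the row is non-empty and contains no zeros."""
    row = np.asarray(row)
    return row.size > 0 and bool(np.all(row != 0))

def non_zero_index_np(rows):
    """Index of the first row free of zeros, else ValueError."""
    for i, row in enumerate(rows):
        if non_zero_row_np(row):
            return i
    raise ValueError('No non-zero values')

rows = [np.array([0, 0]), np.array([-1, 0, 0]), np.array([-1, 2, 3])]
# non_zero_index_np(rows) -> 2
```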
def window_features(idx, window_size=100, overlap=10): """ Generate indexes for a sliding window with overlap :param array idx: The indexes that need to be windowed. :param int window_size: The size of the window. :param int overlap: How much should each window overlap. :return array view: The indexes for the windows with overlap. """ overlap = window_size - overlap sh = (idx.size - window_size + 1, window_size) st = idx.strides * 2 view = np.lib.stride_tricks.as_strided(idx, strides=st, shape=sh)[0::overlap] return view
Generate indexes for a sliding window with overlap :param array idx: The indexes that need to be windowed. :param int window_size: The size of the window. :param int overlap: How much should each window overlap. :return array view: The indexes for the windows with overlap.
entailment
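`window_features` builds every length-`window_size` window via `as_strided` and then keeps every `(window_size - overlap)`-th one. A self-contained demonstration with small numbers so the stride arithmetic is visible:

```python
import numpy as np

def window_features(idx, window_size=100, overlap=10):
    step = window_size - overlap  # distance between consecutive window starts
    shape = (idx.size - window_size + 1, window_size)
    strides = idx.strides * 2  # same stride for rows and columns: a sliding view
    all_windows = np.lib.stride_tricks.as_strided(idx, shape=shape, strides=strides)
    return all_windows[::step]

idx = np.arange(10)
view = window_features(idx, window_size=4, overlap=2)
# view -> [[0,1,2,3], [2,3,4,5], [4,5,6,7], [6,7,8,9]]
```

Note the view shares memory with `idx`; modern numpy also offers `sliding_window_view`, which avoids raw stride manipulation.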
def get_pypi_version(): """ Returns the version info from PyPI for this app. """ try: response = requests.get(PYPI_URL, timeout=HALF_SECOND_TIMEOUT) response.raise_for_status() data = response.json() version_str = data["info"]["version"] return _parse_version_str(version_str) except requests.exceptions.ConnectionError: raise VersionException(UNABLE_TO_ACCESS_PYPI + " Failed to connect.") except requests.exceptions.Timeout: raise VersionException(UNABLE_TO_ACCESS_PYPI + " Timeout")
Returns the version info from PyPI for this app.
entailment
def get_project_by_id(self, project_id): """ Get details about project with the specified uuid :param project_id: str: uuid of the project to fetch :return: Project """ return self._create_item_response( self.data_service.get_project_by_id(project_id), Project)
Get details about project with the specified uuid :param project_id: str: uuid of the project to fetch :return: Project
entailment
def create_project(self, name, description): """ Create a new project with the specified name and description :param name: str: name of the project to create :param description: str: description of the project to create :return: Project """ return self._create_item_response( self.data_service.create_project(name, description), Project)
Create a new project with the specified name and description :param name: str: name of the project to create :param description: str: description of the project to create :return: Project
entailment
def create_folder(self, folder_name, parent_kind_str, parent_uuid): """ Create a folder under a particular parent :param folder_name: str: name of the folder to create :param parent_kind_str: str: kind of the parent of this folder :param parent_uuid: str: uuid of the parent of this folder (project or another folder) :return: Folder: folder metadata """ return self._create_item_response( self.data_service.create_folder(folder_name, parent_kind_str, parent_uuid), Folder )
Create a folder under a particular parent :param folder_name: str: name of the folder to create :param parent_kind_str: str: kind of the parent of this folder :param parent_uuid: str: uuid of the parent of this folder (project or another folder) :return: Folder: folder metadata
entailment
def get_project_children(self, project_id, name_contains=None): """ Get direct files and folders of a project. :param project_id: str: uuid of the project to list contents :param name_contains: str: filter children based on a pattern :return: [File|Folder]: list of Files/Folders contained by the project """ return self._create_array_response( self.data_service.get_project_children( project_id, name_contains ), DDSConnection._folder_or_file_constructor )
Get direct files and folders of a project. :param project_id: str: uuid of the project to list contents :param name_contains: str: filter children based on a pattern :return: [File|Folder]: list of Files/Folders contained by the project
entailment
def get_folder_children(self, folder_id, name_contains=None): """ Get direct files and folders of a folder. :param folder_id: str: uuid of the folder :param name_contains: str: filter children based on a pattern :return: File|Folder """ return self._create_array_response( self.data_service.get_folder_children( folder_id, name_contains ), DDSConnection._folder_or_file_constructor )
Get direct files and folders of a folder. :param folder_id: str: uuid of the folder :param name_contains: str: filter children based on a pattern :return: File|Folder
entailment
def get_file_download(self, file_id): """ Get a file download object that contains temporary url settings needed to download the contents of a file. :param file_id: str: uuid of the file :return: FileDownload """ return self._create_item_response( self.data_service.get_file_url(file_id), FileDownload )
Get a file download object that contains temporary url settings needed to download the contents of a file. :param file_id: str: uuid of the file :return: FileDownload
entailment
def upload_file(self, local_path, project_id, parent_data, existing_file_id=None, remote_filename=None): """ Upload a file under a specific location in DDSConnection, possibly replacing an existing file. :param local_path: str: path to a local file to upload :param project_id: str: uuid of the project to add this file to :param parent_data: ParentData: info about the parent of this file :param existing_file_id: str: uuid of file to create a new version of (or None to create a new file) :param remote_filename: str: name to use for our remote file (defaults to the basename of local_path) :return: File """ path_data = PathData(local_path) hash_data = path_data.get_hash() file_upload_operations = FileUploadOperations(self.data_service, None) upload_id = file_upload_operations.create_upload(project_id, path_data, hash_data, remote_filename=remote_filename, storage_provider=self.config.storage_provider_id) context = UploadContext(self.config, self.data_service, upload_id, path_data) ParallelChunkProcessor(context).run() remote_file_data = file_upload_operations.finish_upload(upload_id, hash_data, parent_data, existing_file_id) return File(self, remote_file_data)
Upload a file under a specific location in DDSConnection, possibly replacing an existing file. :param local_path: str: path to a local file to upload :param project_id: str: uuid of the project to add this file to :param parent_data: ParentData: info about the parent of this file :param existing_file_id: str: uuid of file to create a new version of (or None to create a new file) :param remote_filename: str: name to use for our remote file (defaults to the basename of local_path) :return: File
entailment
def _folder_or_file_constructor(dds_connection, data_dict): """ Create a File or Folder based on the kind value in data_dict :param dds_connection: DDSConnection :param data_dict: dict: payload received from DDSConnection API :return: File|Folder """ kind = data_dict['kind'] if kind == KindType.folder_str: return Folder(dds_connection, data_dict) elif kind == KindType.file_str: return File(dds_connection, data_dict)
Create a File or Folder based on the kind value in data_dict :param dds_connection: DDSConnection :param data_dict: dict: payload received from DDSConnection API :return: File|Folder
entailment
def get_folder_by_id(self, folder_id): """ Get folder details for a folder id. :param folder_id: str: uuid of the folder :return: Folder """ return self._create_item_response( self.data_service.get_folder(folder_id), Folder )
Get folder details for a folder id. :param folder_id: str: uuid of the folder :return: Folder
entailment
def get_file_by_id(self, file_id): """ Get file details for a file id. :param file_id: str: uuid of the file :return: File """ return self._create_item_response( self.data_service.get_file(file_id), File )
Get file details for a file id. :param file_id: str: uuid of the file :return: File
entailment
def save_to_path(self, file_path, chunk_size=DOWNLOAD_FILE_CHUNK_SIZE): """ Save the contents of the remote file to a local path. :param file_path: str: file path :param chunk_size: chunk size used to write local file """ response = self._get_download_response() with open(file_path, 'wb') as f: for chunk in response.iter_content(chunk_size=chunk_size): if chunk: # filter out keep-alive new chunks f.write(chunk)
Save the contents of the remote file to a local path. :param file_path: str: file path :param chunk_size: chunk size used to write local file
entailment
def get_child(self): """ Find file or folder at the remote_path :return: File|Folder """ path_parts = self.remote_path.split(os.sep) return self._get_child_recurse(path_parts, self.node)
Find file or folder at the remote_path :return: File|Folder
entailment
def execute_task_async(task_func, task_id, context): """ Global function run for Task. multiprocessing requires a top level function. :param task_func: function: function to run (must be pickle-able) :param task_id: int: unique id of this task :param context: object: single argument to task_func (must be pickle-able) :return: (task_id, object): return passed in task id and result object """ try: result = task_func(context) return task_id, result except: # Put all exception text into an exception and raise that so main process will print this out raise Exception("".join(traceback.format_exception(*sys.exc_info())))
Global function run for Task. multiprocessing requires a top level function. :param task_func: function: function to run (must be pickle-able) :param task_id: int: unique id of this task :param context: object: single argument to task_func (must be pickle-able) :return: (task_id, object): return passed in task id and result object
entailment
def add(self, task): """ Add this task to the lookup based on its wait_for_task_id property. :param task: Task: task to add to the list """ wait_id = task.wait_for_task_id task_list = self.wait_id_to_task.get(wait_id, []) task_list.append(task) self.wait_id_to_task[wait_id] = task_list
Add this task to the lookup based on its wait_for_task_id property. :param task: Task: task to add to the list
entailment
def add(self, parent_task_id, command): """ Create a task for the command that will wait for parent_task_id before starting. :param parent_task_id: int: id of task to wait for or None if it can start immediately :param command: TaskCommand: contains data function to run :return: int: task id we created for this command """ task_id = self._claim_next_id() self.waiting_task_list.add(Task(task_id, parent_task_id, command)) return task_id
Create a task for the command that will wait for parent_task_id before starting. :param parent_task_id: int: id of task to wait for or None if it can start immediately :param command: TaskCommand: contains data function to run :return: int: task id we created for this command
entailment
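The waiting-task lookup above is a dict keyed by the id a task is waiting on; tasks keyed under `None` can start immediately. A minimal, hypothetical sketch of that pattern (the real class stores `Task` objects and lives alongside the runner; here plain ints stand in):

```python
class WaitingTaskList(object):
    """Group task ids by the parent id they wait on."""

    def __init__(self):
        self.wait_id_to_task = {}

    def add(self, task_id, wait_for_task_id):
        # setdefault collapses the get/append/store dance into one call
        self.wait_id_to_task.setdefault(wait_for_task_id, []).append(task_id)

    def get_next_tasks(self, finished_task_id):
        # Return (and forget) everything that was blocked on this id.
        return self.wait_id_to_task.pop(finished_task_id, [])

tasks = WaitingTaskList()
tasks.add(1, None)   # task 1 has no parent, runnable immediately
tasks.add(2, 1)      # tasks 2 and 3 wait for task 1
tasks.add(3, 1)
```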
def run(self): """ Runs all tasks in this runner on the executor. Blocks until all tasks have been completed. :return: """ for task in self.get_next_tasks(None): self.executor.add_task(task, None) while not self.executor.is_done(): done_task_and_result = self.executor.wait_for_tasks() for task, task_result in done_task_and_result: self._add_sub_tasks_to_executor(task, task_result)
Runs all tasks in this runner on the executor. Blocks until all tasks have been completed. :return:
entailment
def _add_sub_tasks_to_executor(self, parent_task, parent_task_result): """ Add all subtasks for parent_task to the executor. :param parent_task: Task: task that has just finished :param parent_task_result: object: result of task that is finished """ for sub_task in self.waiting_task_list.get_next_tasks(parent_task.id): self.executor.add_task(sub_task, parent_task_result)
Add all subtasks for parent_task to the executor. :param parent_task: Task: task that has just finished :param parent_task_result: object: result of task that is finished
entailment
def add_task(self, task, parent_task_result): """ Add a task to run with the specified result from this tasks parent(can be None) :param task: Task: task that should be run :param parent_task_result: object: value to be passed to task for setup """ self.tasks.append((task, parent_task_result)) self.task_id_to_task[task.id] = task
Add a task to run with the specified result from this tasks parent(can be None) :param task: Task: task that should be run :param parent_task_result: object: value to be passed to task for setup
entailment
def wait_for_tasks(self): """ Wait for one or more tasks to finish or return empty list if we are done. Starts new tasks if we have fewer than tasks_at_once currently running. :return: [(Task,object)]: list of (task,result) for finished tasks """ finished_tasks_and_results = [] while len(finished_tasks_and_results) == 0: if self.is_done(): break self.start_tasks() self.process_all_messages_in_queue() finished_tasks_and_results = self.get_finished_results() return finished_tasks_and_results
Wait for one or more tasks to finish or return empty list if we are done. Starts new tasks if we have fewer than tasks_at_once currently running. :return: [(Task,object)]: list of (task,result) for finished tasks
entailment
def start_tasks(self): """ Start however many tasks we can based on our limits and what we have left to finish. """ while self.tasks_at_once > len(self.pending_results) and self._has_more_tasks(): task, parent_result = self.tasks.popleft() self.execute_task(task, parent_result)
Start however many tasks we can based on our limits and what we have left to finish.
entailment
def execute_task(self, task, parent_result): """ Run a single task in another process saving the result to our list of pending results. :param task: Task: function and data we can run in another process :param parent_result: object: result from our parent task """ task.before_run(parent_result) context = task.create_context(self.message_queue) pending_result = self.pool.apply_async(execute_task_async, (task.func, task.id, context)) self.pending_results.append(pending_result)
Run a single task in another process saving the result to our list of pending results. :param task: Task: function and data we can run in another process :param parent_result: object: result from our parent task
entailment
def process_single_message_from_queue(self): """ Tries to read a single message from the queue and let the associated task process it. :return: bool: True if we processed a message, otherwise False """ try: message = self.message_queue.get_nowait() task_id, data = message task = self.task_id_to_task[task_id] task.on_message(data) return True except queue.Empty: return False
Tries to read a single message from the queue and let the associated task process it. :return: bool: True if we processed a message, otherwise False
entailment
def get_finished_results(self): """ Go through pending results and retrieve the results if they are done. Then start child tasks for the task that finished. """ task_and_results = [] for pending_result in self.pending_results: if pending_result.ready(): ret = pending_result.get() task_id, result = ret task = self.task_id_to_task[task_id] # process any pending messages for this task (will also process other tasks messages) self.process_all_messages_in_queue() task.after_run(result) task_and_results.append((task, result)) self.pending_results.remove(pending_result) return task_and_results
Go through pending results and retrieve the results if they are done. Then start child tasks for the task that finished.
entailment
def to_xml(body_element): """ Serialize a L{twisted.web.template.Tag} to a UTF-8 encoded XML document with an XML doctype header. """ doctype = b"""<?xml version="1.0" encoding="UTF-8"?>\n""" if body_element is None: return succeed(b"") d = flattenString(None, body_element) d.addCallback(lambda flattened: doctype + flattened) return d
Serialize a L{twisted.web.template.Tag} to a UTF-8 encoded XML document with an XML doctype header.
entailment
def get_route53_client(agent, region, cooperator=None): """ Get a non-registration Route53 client. """ if cooperator is None: cooperator = task return region.get_client( _Route53Client, agent=agent, creds=region.creds, region=REGION_US_EAST_1, endpoint=AWSServiceEndpoint(_OTHER_ENDPOINT), cooperator=cooperator, )
Get a non-registration Route53 client.
entailment
def _route53_op(body=None, **kw): """ Construct an L{_Operation} representing a I{Route53} service API call. """ op = _Operation(service=b"route53", **kw) if body is None: return succeed(op) d = to_xml(body) d.addCallback(lambda body: attr.assoc(op, body=body)) return d
Construct an L{_Operation} representing a I{Route53} service API call.
entailment
def hostedzone_from_element(zone): """ Construct a L{HostedZone} instance from a I{HostedZone} XML element. """ return HostedZone( name=maybe_bytes_to_unicode(zone.find("Name").text).encode("ascii").decode("idna"), identifier=maybe_bytes_to_unicode(zone.find("Id").text).replace(u"/hostedzone/", u""), rrset_count=int(zone.find("ResourceRecordSetCount").text), reference=maybe_bytes_to_unicode(zone.find("CallerReference").text), )
Construct a L{HostedZone} instance from a I{HostedZone} XML element.
entailment
def to_element(change): """ @param change: An L{txaws.route53.interface.IRRSetChange} provider. @return: The L{twisted.web.template} element which describes this change. """ return tags.Change( tags.Action( change.action, ), tags.ResourceRecordSet( tags.Name( unicode(change.rrset.label), ), tags.Type( change.rrset.type, ), tags.TTL( u"{}".format(change.rrset.ttl), ), tags.ResourceRecords(list( tags.ResourceRecord(tags.Value(rr.to_text())) for rr in sorted(change.rrset.records) )) ), )
@param change: An L{txaws.route53.interface.IRRSetChange} provider. @return: The L{twisted.web.template} element which describes this change.
entailment
def add(self, method_class, action, version=None): """Add a method class to the registry. @param method_class: The method class to add @param action: The action that the method class can handle @param version: The version that the method class can handle """ by_version = self._by_action.setdefault(action, {}) if version in by_version: raise RuntimeError("A method was already registered for action" " %s in version %s" % (action, version)) by_version[version] = method_class
Add a method class to the registry. @param method_class: The method class to add @param action: The action that the method class can handle @param version: The version that the method class can handle
entailment
def check(self, action, version=None): """Check if the given action is supported in the given version. @raises APIError: If there's no method class registered for handling the given action or version. """ if action not in self._by_action: raise APIError(400, "InvalidAction", "The action %s is not valid " "for this web service." % action) by_version = self._by_action[action] if None not in by_version: # There's no catch-all method, let's try the version-specific one if version not in by_version: raise APIError(400, "InvalidVersion", "Invalid API version.")
Check if the given action is supported in the given version. @raises APIError: If there's no method class registered for handling the given action or version.
entailment
def get(self, action, version=None): """Get the method class handling the given action and version.""" by_version = self._by_action[action] if version in by_version: return by_version[version] else: return by_version[None]
Get the method class handling the given action and version.
entailment
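Together, `add`, `check`, and `get` form a two-level action-to-version lookup with a `None` entry acting as a catch-all. A stripped-down, self-contained sketch of that registry (class and error handling simplified; not the library's actual implementation):

```python
class MethodRegistry(object):
    """Minimal action/version registry with a None catch-all."""

    def __init__(self):
        self._by_action = {}

    def add(self, method_class, action, version=None):
        by_version = self._by_action.setdefault(action, {})
        if version in by_version:
            raise RuntimeError("already registered for %s/%s" % (action, version))
        by_version[version] = method_class

    def get(self, action, version=None):
        by_version = self._by_action[action]
        # Exact version wins; otherwise fall back to the catch-all (None).
        return by_version.get(version, by_version.get(None))

class DescribeAny(object): pass
class DescribeV1(object): pass

reg = MethodRegistry()
reg.add(DescribeAny, "Describe")                 # catch-all registration
reg.add(DescribeV1, "Describe", "2010-01-01")    # version-specific
```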
def scan(self, module, onerror=None, ignore=None): """Scan the given module object for L{Method}s and register them.""" from venusian import Scanner scanner = Scanner(registry=self) kwargs = {"onerror": onerror, "categories": ["method"]} if ignore is not None: # Only pass it if specified, for backward compatibility kwargs["ignore"] = ignore scanner.scan(module, **kwargs)
Scan the given module object for L{Method}s and register them.
entailment
def calculate_parameters(self, independentTs, dependentTs): """Calculate and return the parameters for the regression line describing the relationship between the input variables. :param Timeseries independentTs: The Timeseries used for the independent variable (x-axis). The Timeseries must have at least 2 datapoints with different dates and values :param Timeseries dependentTs: The Timeseries used as the dependent variable (y-axis). The Timeseries must have at least 2 datapoints, whose dates match with independentTs :return: A tuple containing the y-axis intercept and the slope used to execute the regression :rtype: tuple :raise: Raises an :py:exc:`ValueError` if - independentTs and dependentTs do not have at least two matching dates - independentTs has only one distinct value - The dates in one or both Timeseries are not distinct. """ listX, listY = self.match_time_series(independentTs, dependentTs) if len(listX) == 0 or len(listY) == 0: raise ValueError("Lists need to share at least one date and cannot be empty") if len(listX) != len(listY): raise ValueError("Each Timeseries needs to have distinct dates") xValues = map(lambda item: item[1], listX) yValues = map(lambda item: item[1], listY) xMean = FusionMethods["mean"](xValues) yMean = FusionMethods["mean"](yValues) xDeviation = map(lambda item: (item - xMean), xValues) yDeviation = map(lambda item: (item - yMean), yValues) try: parameter1 = sum(x * y for x, y in zip(xDeviation, yDeviation)) / sum(x * x for x in xDeviation) except ZeroDivisionError: # an error occurs if xDeviation is always 0, which means that all x values are the same raise ValueError("Not enough distinct x values") parameter0 = yMean - (parameter1 * xMean) return (parameter0, parameter1)
Calculate and return the parameters for the regression line describing the relationship between the input variables. :param Timeseries independentTs: The Timeseries used for the independent variable (x-axis). The Timeseries must have at least 2 datapoints with different dates and values :param Timeseries dependentTs: The Timeseries used as the dependent variable (y-axis). The Timeseries must have at least 2 datapoints, whose dates match with independentTs :return: A tuple containing the y-axis intercept and the slope used to execute the regression :rtype: tuple :raise: Raises an :py:exc:`ValueError` if - independentTs and dependentTs do not have at least two matching dates - independentTs has only one distinct value - The dates in one or both Timeseries are not distinct.
entailment
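The intercept/slope computation is ordinary least squares for a single variable. A hypothetical standalone sketch of the same math with numpy, free of the Timeseries plumbing:

```python
import numpy as np

def regression_parameters(x, y):
    """Return (intercept, slope) for the least-squares line through (x, y)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    x_dev = x - x.mean()
    denom = np.sum(x_dev ** 2)
    if denom == 0:
        # all x values identical: the slope is undefined
        raise ValueError("Not enough distinct x values")
    # slope = sum((x - mean_x)(y - mean_y)) / sum((x - mean_x)^2)
    slope = np.sum(x_dev * (y - y.mean())) / denom
    intercept = y.mean() - slope * x.mean()
    return intercept, slope

# y = 2x + 1 exactly, so the fit recovers intercept 1 and slope 2
intercept, slope = regression_parameters([1, 2, 3, 4], [3, 5, 7, 9])
```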
def calculate_parameters_with_confidence(self, independentTs, dependentTs, confidenceLevel, samplePercentage=.1): """Same functionality as calculate_parameters, but additionally calculates the confidence interval for a given confidenceLevel. This is done based on a sample of the dependentTs training data that is validated against the prediction. The signed error of the predictions and the sample is then used to calculate the bounds of the interval. further reading: http://en.wikipedia.org/wiki/Confidence_interval :param Timeseries independentTs: The Timeseries used for the independent variable (x-axis). The Timeseries must have at least 2 datapoints with different dates and values :param Timeseries dependentTs: The Timeseries used as the dependent variable (y-axis). The Timeseries must have at least 2 datapoints, whose dates match with independentTs :param float confidenceLevel: The percentage of entries in the sample that should have a prediction error closer or equal to 0 than the bounds of the confidence interval. :param float samplePercentage: How much of the dependentTs should be used for sampling :return: A tuple containing the y-axis intercept and the slope used to execute the regression and the (underestimation, overestimation) for the given confidenceLevel :rtype: tuple :raise: Raises an :py:exc:`ValueError` if - independentTs and dependentTs do not have at least two matching dates - independentTs has only one distinct value - The dates in one or both Timeseries are not distinct.
""" #First split the time series into sample and training data sampleY, trainingY = dependentTs.sample(samplePercentage) sampleX_list = self.match_time_series(sampleY, independentTs)[1] trainingX_list = self.match_time_series(trainingY, independentTs)[1] sampleX = TimeSeries.from_twodim_list(sampleX_list) trainingX = TimeSeries.from_twodim_list(trainingX_list) #Then calculate parameters based on the training data n, m = self.calculate_parameters(trainingX, trainingY) #predict prediction = self.predict(sampleX, n, m) #calculate the signed error at each location, note that MSD(x,y) != MSD(y,x) msd = MSD() msd.initialize(prediction, sampleY) return (n, m, msd.confidence_interval(confidenceLevel))
Same functionality as calculate_parameters, except that additionally the confidence interval for a given confidenceLevel is calculated. This is done based on a sample of the dependentTs training data that is validated against the prediction. The signed error of the predictions and the sample is then used to calculate the bounds of the interval. further reading: http://en.wikipedia.org/wiki/Confidence_interval :param Timeseries independentTs: The Timeseries used for the independent variable (x-axis). The Timeseries must have at least 2 datapoints with different dates and values :param Timeseries dependentTs: The Timeseries used as the dependent variable (y-axis). The Timeseries must have at least 2 datapoints, whose dates match with independentTs :param float confidenceLevel: The percentage of entries in the sample that should have a prediction error closer to or equal to 0 than the bounds of the confidence interval. :param float samplePercentage: How much of the dependentTs should be used for sampling :return: A tuple containing the y-axis intercept and the slope used to execute the regression and the (underestimation, overestimation) for the given confidenceLevel :rtype: tuple :raise: Raises a :py:exc:`ValueError` if - independentTs and dependentTs do not have at least two matching dates - independentTs has only one distinct value - The dates in one or both Timeseries are not distinct.
entailment
def predict(self, timeseriesX, n, m): """ Calculates the dependent timeseries Y for the given parameters and independent timeseries. (y=m*x + n) :param TimeSeries timeseriesX: the independent Timeseries. :param float n: The intercept with the y-axis that has been calculated during regression :param float m: The slope of the function that has been calculated during regression :return TimeSeries timeseries_y: the predicted values for the dependent TimeSeries. Its length and first dimension will equal those of timeseriesX. """ new_entries = [] for entry in timeseriesX: predicted_value = m * entry[1] + n new_entries.append([entry[0], predicted_value]) return TimeSeries.from_twodim_list(new_entries)
Calculates the dependent timeseries Y for the given parameters and independent timeseries. (y=m*x + n) :param TimeSeries timeseriesX: the independent Timeseries. :param float n: The intercept with the y-axis that has been calculated during regression :param float m: The slope of the function that has been calculated during regression :return TimeSeries timeseries_y: the predicted values for the dependent TimeSeries. Its length and first dimension will equal those of timeseriesX.
entailment
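The prediction step above is a direct application of y = m*x + n to every datapoint; a minimal sketch, assuming plain lists of (date, value) pairs in place of the TimeSeries class (`predict_linear` is a hypothetical stand-in):

```python
def predict_linear(series_x, n, m):
    """Apply y = m*x + n to every (date, value) pair; dates are preserved."""
    return [[date, m * value + n] for date, value in series_x]

# Hypothetical data: apply the line y = 2x + 1 to three points
print(predict_linear([[0, 1.0], [1, 2.0], [2, 3.0]], n=1.0, m=2.0))
# [[0, 3.0], [1, 5.0], [2, 7.0]]
```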
def match_time_series(self, timeseries1, timeseries2): """Return two lists of the two input time series with matching dates :param TimeSeries timeseries1: The first timeseries :param TimeSeries timeseries2: The second timeseries :return: Two two dimensional lists containing the matched values, :rtype: two List """ time1 = map(lambda item: item[0], timeseries1.to_twodim_list()) time2 = map(lambda item: item[0], timeseries2.to_twodim_list()) matches = filter(lambda x: (x in time1), time2) listX = filter(lambda x: (x[0] in matches), timeseries1.to_twodim_list()) listY = filter(lambda x: (x[0] in matches), timeseries2.to_twodim_list()) return listX, listY
Return two lists of the two input time series with matching dates :param TimeSeries timeseries1: The first timeseries :param TimeSeries timeseries2: The second timeseries :return: Two two dimensional lists containing the matched values, :rtype: two List
entailment
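The date-matching logic above can be sketched with set intersection, which avoids the quadratic `x in list` scans (and sidesteps the fact that `map`/`filter` return one-shot iterators in Python 3); `match_by_date` is a hypothetical helper operating on two-dimensional lists:

```python
def match_by_date(list1, list2):
    """Keep only entries whose dates occur in both two-dimensional lists."""
    common = {item[0] for item in list1} & {item[0] for item in list2}
    return ([item for item in list1 if item[0] in common],
            [item for item in list2 if item[0] in common])

a = [[1, 10], [2, 20], [3, 30]]
b = [[2, 200], [3, 300], [4, 400]]
print(match_by_date(a, b))
# ([[2, 20], [3, 30]], [[2, 200], [3, 300]])
```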
def lstsq(cls, a, b): """Return the least-squares solution to a linear matrix equation. :param Matrix a: Design matrix with the values of the independent variables. :param Matrix b: Matrix with the "dependent variable" values. b can only have one column. :raise: Raises an :py:exc:`ValueError`, if - the number of rows of a and b does not match. - b has more than one column. :note: The algorithm solves the following equations. beta = a^+ b. """ # Check if the size of the input matrices matches if a.get_height() != b.get_height(): raise ValueError("Size of input matrices does not match") if b.get_width() != 1: raise ValueError("Matrix with dependent variable has more than 1 column") aPseudo = a.pseudoinverse() # The following code could be used if c is regular. # aTrans = a.transform() # c = aTrans * a # invers() raises an ValueError, if c is not invertible # cInvers = c.invers() # beta = cInvers * aTrans * b beta = aPseudo * b return beta
Return the least-squares solution to a linear matrix equation. :param Matrix a: Design matrix with the values of the independent variables. :param Matrix b: Matrix with the "dependent variable" values. b can only have one column. :raise: Raises an :py:exc:`ValueError`, if - the number of rows of a and b does not match. - b has more than one column. :note: The algorithm solves the following equations. beta = a^+ b.
entailment
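For the common two-parameter case (fitting y = m*x + n), the pseudoinverse solution beta = a^+ b reduces to the closed-form normal equations; a dependency-free sketch, with `fit_line` a hypothetical helper rather than the Matrix-based implementation above:

```python
def fit_line(xs, ys):
    """Least-squares fit of y = m*x + n via the closed-form normal equations."""
    k = len(xs)
    mean_x = sum(xs) / float(k)
    mean_y = sum(ys) / float(k)
    sxx = sum((x - mean_x) ** 2 for x in xs)
    if sxx == 0:
        raise ValueError("x values must not all be identical")
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    m = sxy / sxx            # slope
    n = mean_y - m * mean_x  # y-axis intercept
    return n, m

print(fit_line([0, 1, 2, 3], [1, 3, 5, 7]))  # (1.0, 2.0)
```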
def _calculate(self, startingPercentage, endPercentage, startDate, endDate): """This is the error calculation function that gets called by :py:meth:`BaseErrorMeasure.get_error`. Both parameters will be correct at this time. :param float startingPercentage: Defines the start of the interval. This has to be a value in [0.0, 100.0]. It represents the value, where the error calculation should be started. 25.0 for example means that the first 25% of all calculated errors will be ignored. :param float endPercentage: Defines the end of the interval. This has to be a value in [0.0, 100.0]. It represents the value, after which all error values will be ignored. 90.0 for example means that the last 10% of all local errors will be ignored. :param float startDate: Epoch representing the start date used for error calculation. :param float endDate: Epoch representing the end date used in the error calculation. :return: Returns a float representing the error. :rtype: float """ # get the defined subset of error values errorValues = self._get_error_values(startingPercentage, endPercentage, startDate, endDate) errorValues = filter(lambda item: item is not None, errorValues) return float(sum(errorValues)) / float(len(errorValues))
This is the error calculation function that gets called by :py:meth:`BaseErrorMeasure.get_error`. Both parameters will be correct at this time. :param float startingPercentage: Defines the start of the interval. This has to be a value in [0.0, 100.0]. It represents the value, where the error calculation should be started. 25.0 for example means that the first 25% of all calculated errors will be ignored. :param float endPercentage: Defines the end of the interval. This has to be a value in [0.0, 100.0]. It represents the value, after which all error values will be ignored. 90.0 for example means that the last 10% of all local errors will be ignored. :param float startDate: Epoch representing the start date used for error calculation. :param float endDate: Epoch representing the end date used in the error calculation. :return: Returns a float representing the error. :rtype: float
entailment
def local_error(self, originalValue, calculatedValue): """Calculates the error between the two given values. :param list originalValue: List containing the values of the original data. :param list calculatedValue: List containing the values of the calculated TimeSeries that corresponds to originalValue. :return: Returns the error measure of the two given values. :rtype: numeric """ originalValue = originalValue[0] calculatedValue = calculatedValue[0] if 0 == originalValue: return None return (math.fabs((calculatedValue - originalValue)/float(originalValue))) * 100.0
Calculates the error between the two given values. :param list originalValue: List containing the values of the original data. :param list calculatedValue: List containing the values of the calculated TimeSeries that corresponds to originalValue. :return: Returns the error measure of the two given values. :rtype: numeric
entailment
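The local error above is the absolute percentage error (the building block of MAPE), undefined when the original value is zero; a standalone sketch with a hypothetical function name:

```python
def mape_local_error(original, calculated):
    """Absolute percentage error; None when the original value is zero."""
    if original == 0:
        return None
    return abs((calculated - original) / float(original)) * 100.0

print(mape_local_error(100.0, 110.0))  # 10.0
print(mape_local_error(0.0, 5.0))      # None
```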
def _get_historic_means(self, timeSeries): """Calculates the mean value for the history of the MeanAbsoluteScaledError. :param TimeSeries timeSeries: Original TimeSeries used to calculate the mean historic values. :return: Returns a list containing the historic means. :rtype: list """ # calculate the history values historyLength = self._historyLength historicMeans = [] append = historicMeans.append # not most optimized loop in case of calculation operations for startIdx in xrange(len(timeSeries) - historyLength - 1): value = 0 for idx in xrange(startIdx, startIdx + historyLength): value += abs(timeSeries[idx+1][1] - timeSeries[idx][1]) append(value / float(historyLength)) return historicMeans
Calculates the mean value for the history of the MeanAbsoluteScaledError. :param TimeSeries timeSeries: Original TimeSeries used to calculate the mean historic values. :return: Returns a list containing the historic means. :rtype: list
entailment
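The nested loop above computes, for each sliding window, the mean absolute one-step change of the series; restated on a plain list of values (dropping the `[idx][1]` indexing into TimeSeries pairs), with `historic_means` a hypothetical stand-in:

```python
def historic_means(values, history_length):
    """Mean absolute one-step change over each sliding window of the series."""
    means = []
    for start in range(len(values) - history_length - 1):
        total = sum(abs(values[i + 1] - values[i])
                    for i in range(start, start + history_length))
        means.append(total / float(history_length))
    return means

# First window: (|2-1| + |4-2|) / 2 = 1.5; second: (|4-2| + |7-4|) / 2 = 2.5
print(historic_means([1, 2, 4, 7, 11], history_length=2))  # [1.5, 2.5]
```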
def initialize(self, originalTimeSeries, calculatedTimeSeries): """Initializes the ErrorMeasure. During initialization, all :py:meth:`BaseErrorMeasure.local_error()` are calculated. :param TimeSeries originalTimeSeries: TimeSeries containing the original data. :param TimeSeries calculatedTimeSeries: TimeSeries containing calculated data. Calculated data is smoothed or forecasted data. :return: Return :py:const:`True` if the error could be calculated, :py:const:`False` otherwise based on the minimalErrorCalculationPercentage. :rtype: boolean :raise: Raises a :py:exc:`StandardError` if the error measure is initialized multiple times. """ # ErrorMeasure was already initialized. if 0 < len(self._errorValues): raise StandardError("An ErrorMeasure can only be initialized once.") # calculating the number of datapoints used within the history if isinstance(self._historyLength, float): self._historyLength = int((self._historyLength * len(originalTimeSeries)) / 100.0) # sort the TimeSeries to reduce the required comparison operations originalTimeSeries.sort_timeseries() calculatedTimeSeries.sort_timeseries() self._historicMeans = self._get_historic_means(originalTimeSeries) # Performance optimization append = self._errorValues.append appendDates = self._errorDates.append local_error = self.local_error minCalcIdx = self._historyLength + 1 # calculate all valid local errors for orgPair in originalTimeSeries[minCalcIdx:]: for calcIdx in xrange(minCalcIdx, len(calculatedTimeSeries)): calcPair = calculatedTimeSeries[calcIdx] # Skip values that can not be compared if calcPair[0] != orgPair[0]: continue append(local_error(orgPair[1:], calcPair[1:])) appendDates(orgPair[0]) # return False, if the error cannot be calculated if len(filter(lambda item: item is not None, self._errorValues)) < self._minimalErrorCalculationPercentage * len(originalTimeSeries): self._errorValues = [] self._errorDates = [] self._historicMeans = [] return False return True
Initializes the ErrorMeasure. During initialization, all :py:meth:`BaseErrorMeasure.local_error()` are calculated. :param TimeSeries originalTimeSeries: TimeSeries containing the original data. :param TimeSeries calculatedTimeSeries: TimeSeries containing calculated data. Calculated data is smoothed or forecasted data. :return: Return :py:const:`True` if the error could be calculated, :py:const:`False` otherwise based on the minimalErrorCalculationPercentage. :rtype: boolean :raise: Raises a :py:exc:`StandardError` if the error measure is initialized multiple times.
entailment
def _calculate(self, startingPercentage, endPercentage, startDate, endDate): """This is the error calculation function that gets called by :py:meth:`BaseErrorMeasure.get_error`. Both parameters will be correct at this time. :param float startingPercentage: Defines the start of the interval. This has to be a value in [0.0, 100.0]. It represents the value, where the error calculation should be started. 25.0 for example means that the first 25% of all calculated errors will be ignored. :param float endPercentage: Defines the end of the interval. This has to be a value in [0.0, 100.0]. It represents the value, after which all error values will be ignored. 90.0 for example means that the last 10% of all local errors will be ignored. :param float startDate: Epoch representing the start date used for error calculation. :param float endDate: Epoch representing the end date used in the error calculation. :return: Returns a float representing the error. :rtype: float """ # get the defined subset of error values errorValues = self._get_error_values(startingPercentage, endPercentage, startDate, endDate) # get the historic mean if startDate is not None: possibleDates = filter(lambda date: date >= startDate, self._errorDates) # This piece of code is not required, because _get_error_values already ensured that the startDate # was correct. Otherwise it would have thrown an exception. #if 0 == len(possibleDates): # raise ValueError("%s does not represent a valid startDate." % startDate) meanIdx = self._errorDates.index(min(possibleDates)) else: meanIdx = int((startingPercentage * len(self._errorValues)) / 100.0) mad = sum(errorValues) / float(len(errorValues)) historicMean = self._historicMeans[meanIdx] return mad / historicMean
This is the error calculation function that gets called by :py:meth:`BaseErrorMeasure.get_error`. Both parameters will be correct at this time. :param float startingPercentage: Defines the start of the interval. This has to be a value in [0.0, 100.0]. It represents the value, where the error calculation should be started. 25.0 for example means that the first 25% of all calculated errors will be ignored. :param float endPercentage: Defines the end of the interval. This has to be a value in [0.0, 100.0]. It represents the value, after which all error values will be ignored. 90.0 for example means that the last 10% of all local errors will be ignored. :param float startDate: Epoch representing the start date used for error calculation. :param float endDate: Epoch representing the end date used in the error calculation. :return: Returns a float representing the error. :rtype: float
entailment
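The final step above is just a ratio: the mean absolute deviation of the selected error values, scaled by the historic mean at the matching index; a minimal sketch with plain lists (`mase` is a hypothetical helper):

```python
def mase(error_values, historic_mean):
    """Mean absolute deviation of the errors, scaled by the historic mean."""
    mad = sum(error_values) / float(len(error_values))
    return mad / historic_mean

# mad = (1 + 2 + 3) / 3 = 2.0; scaled by a historic mean of 2.0 -> 1.0
print(mase([1.0, 2.0, 3.0], historic_mean=2.0))  # 1.0
```

A value below 1.0 means the forecast beats the naive one-step forecast the historic mean represents.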
def __train(self, n_clusters=4): """ Calculate cluster's centroids and standard deviations. If there are at least the number of threshold rows \ then: * Observations will be normalised. * Standard deviations will be returned. * Clusters will be returned. * Centroids are ordered based on their distance from an arbitrary -100, -100 point. If there are not enough Observations, then centroids and standard deviations will be set to the empty list. General strategy: Use numpy.array for calculations. Keep everything in float. Convert arrays back to lists \ at the end. :param n_clusters: the number of clusters :type n_clusters: int """ try: for obs in self.observations: features, ids = self.__get_features_for_observation(observation=obs, last_column_is_id=True) # the last column is the observation id normalised_data = whiten(features) # skip any rows that contain just zero values... they create nans first_safe_row = pdkit.utils.non_zero_index(normalised_data) observation_ids = features.tolist() sd = features[first_safe_row] / normalised_data[first_safe_row] # Calculate centroids and sort result centroids_array, _ = kmeans(normalised_data, n_clusters) sorted_centroids = pdkit.utils.centroid_sort(centroids_array) if not self.clusters: self.clusters = [[obs, sd.tolist(), sorted_centroids.tolist()]] else: self.clusters.append([obs, sd.tolist(), sorted_centroids.tolist()]) except IOError as e: ierr = "({}): {}".format(e.errno, e.strerror) logging.error("Error training UPDRS, file not found, I/O error %s", ierr) except ValueError as verr: logging.error("Error training UPDRS ValueError ->%s", verr.message) except: logging.error("Unexpected error on training UPDRS init: %s", sys.exc_info()[0])
Calculate cluster's centroids and standard deviations. If there are at least the number of threshold rows \ then: * Observations will be normalised. * Standard deviations will be returned. * Clusters will be returned. * Centroids are ordered based on their distance from an arbitrary -100, -100 point. If there are not enough Observations, then centroids and standard deviations will be set to the empty list. General strategy: Use numpy.array for calculations. Keep everything in float. Convert arrays back to lists \ at the end. :param n_clusters: the number of clusters :type n_clusters: int
entailment
def get_single_score(self, point, centroids=None, sd=None): """ Get a single score: a wrapper around the result of classifying a Point against a group of centroids. \ Attributes: observation_score (dict): Original received point and normalised point. :Example: >>> { "original": [0.40369016, 0.65217912], "normalised": [1.65915104, 3.03896181]} nearest_cluster (int): Index of the nearest cluster. If distances match, then the lowest-numbered cluster \ wins. distances (list (float)): List of distances from the Point to each cluster centroid. E.g: >>> [2.38086238, 0.12382605, 2.0362993, 1.43195021] centroids (list (list (float))): A list of the current centroids when queried. E.g: >>> [ [0.23944831, 1.12769265], [1.75621978, 3.11584191], [2.65884563, 1.26494783], \ [0.39421099, 2.36783733] ] :param point: the point to classify :type point: pandas.DataFrame :param centroids: the centroids :type centroids: np.array :param sd: the standard deviation :type sd: np.array :return score: the score for a given observation :rtype score: int """ normalised_point = array(point) / array(sd) observation_score = { 'original': point, 'normalised': normalised_point.tolist(), } distances = [ euclidean(normalised_point, centroid) for centroid in centroids ] return int(distances.index(min(distances)))
Get a single score: a wrapper around the result of classifying a Point against a group of centroids. \ Attributes: observation_score (dict): Original received point and normalised point. :Example: >>> { "original": [0.40369016, 0.65217912], "normalised": [1.65915104, 3.03896181]} nearest_cluster (int): Index of the nearest cluster. If distances match, then the lowest-numbered cluster \ wins. distances (list (float)): List of distances from the Point to each cluster centroid. E.g: >>> [2.38086238, 0.12382605, 2.0362993, 1.43195021] centroids (list (list (float))): A list of the current centroids when queried. E.g: >>> [ [0.23944831, 1.12769265], [1.75621978, 3.11584191], [2.65884563, 1.26494783], \ [0.39421099, 2.36783733] ] :param point: the point to classify :type point: pandas.DataFrame :param centroids: the centroids :type centroids: np.array :param sd: the standard deviation :type sd: np.array :return score: the score for a given observation :rtype score: int
entailment
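The classification step above is nearest-centroid assignment after normalising by the per-feature standard deviations; a dependency-free sketch using `math.dist` (Python 3.8+) in place of scipy's `euclidean`, with hypothetical data:

```python
from math import dist  # Euclidean distance between two points

def single_score(point, centroids, sd):
    """Normalise the point by the standard deviations, then return the index
    of the nearest centroid (ties go to the lowest-numbered cluster, since
    list.index returns the first match)."""
    normalised = [p / s for p, s in zip(point, sd)]
    distances = [dist(normalised, centroid) for centroid in centroids]
    return distances.index(min(distances))

centroids = [[0.0, 0.0], [1.0, 1.0], [3.0, 3.0]]
print(single_score([2.0, 2.0], centroids, sd=[2.0, 2.0]))  # 1
```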
def write_model(self, filename='scores', filepath='', output_format='csv'): """ This method calculates the scores for the data frame received and writes them to a file. If the output format is other than 'csv' it will print the scores. :param filename: the name to give to the file :type filename: string :param filepath: the path to save the file :type filepath: string :param output_format: the format of the file to write ('csv') :type output_format: string """ scores_array = np.array([]) for obs in self.observations: c, sd = self.__get_centroids_sd(obs) points, ids = self.__get_features_for_observation(observation=obs, last_column_is_id=True) b = np.array([]) for p in points: b = np.append(b, [self.get_single_score(p, centroids=c, sd=sd)]) scores_array = np.vstack([scores_array, b]) if scores_array.size else b scores_array = np.concatenate((ids[:, np.newaxis], scores_array.transpose()), axis=1) header = 'id,'+','.join(self.observations) try: if output_format == 'csv': filename = join(filepath, filename) + '.' + output_format np.savetxt(filename, scores_array, delimiter=",", fmt='%i', header=header, comments='') else: print(scores_array) except: logging.error("Unexpected error on writing output")
This method calculates the scores for the data frame received and writes them to a file. If the output format is other than 'csv' it will print the scores. :param filename: the name to give to the file :type filename: string :param filepath: the path to save the file :type filepath: string :param output_format: the format of the file to write ('csv') :type output_format: string
entailment
def score(self, measurement, output_format='array'): """ Method to score/classify a measurement against the trained knn clusters :param measurement: the point to classify :type measurement: pandas.DataFrame :param output_format: the format to return the scores ('array' or 'str') :type output_format: string :return scores: the scores for a given test/point :rtype scores: np.array """ scores = np.array([]) for obs in self.observations: c, sd = self.__get_centroids_sd(obs) p, ids = self.__get_features_for_observation(data_frame = measurement, observation=obs, last_column_is_id=True) scores = np.append(scores, [self.get_single_score(p, centroids=c, sd=sd)], axis=0) if output_format == 'array': return scores.astype(int) else: return np.array_str(scores.astype(int))
Method to score/classify a measurement against the trained knn clusters :param measurement: the point to classify :type measurement: pandas.DataFrame :param output_format: the format to return the scores ('array' or 'str') :type output_format: string :return scores: the scores for a given test/point :rtype scores: np.array
entailment
def download_file_part_run(download_context): """ Function run by the download command to download a part of a file. Runs in a background process. :param download_context: DownloadContext: contains data service setup and the file part to download. """ destination_dir, file_url_data_dict, seek_amt, bytes_to_read = download_context.params project_file = ProjectFile(file_url_data_dict) local_path = project_file.get_local_path(destination_dir) retry_chunk_downloader = RetryChunkDownloader(project_file, local_path, seek_amt, bytes_to_read, download_context) retry_chunk_downloader.run() return 'ok'
Function run by the download command to download a part of a file. Runs in a background process. :param download_context: DownloadContext: contains data service setup and the file part to download.
entailment
def run(self): """ Download the contents of the specified project name or id to dest_directory. """ files_to_download = self.get_files_to_download() total_files_size = self.get_total_files_size(files_to_download) if self.file_download_pre_processor: self.run_preprocessor(files_to_download) self.try_create_dir(self.dest_directory) watcher = ProgressPrinter(total_files_size, msg_verb='downloading') self.download_files(files_to_download, watcher) watcher.finished() warnings = self.check_warnings() if warnings: watcher.show_warning(warnings)
Download the contents of the specified project name or id to dest_directory.
entailment
def run_preprocessor(self, files_to_download): """ Run file_download_pre_processor for each file we are about to download. :param files_to_download: [ProjectFile]: files that will be downloaded """ for project_file in files_to_download: self.file_download_pre_processor.run(self.remote_store.data_service, project_file)
Run file_download_pre_processor for each file we are about to download. :param files_to_download: [ProjectFile]: files that will be downloaded
entailment
def try_create_dir(self, path): """ Try to create a directory if it doesn't exist and raise an error if there is a non-directory with the same name. :param path: str path to the directory """ if not os.path.exists(path): os.mkdir(path) elif not os.path.isdir(path): raise ValueError("Unable to create directory:" + path + " because a file already exists with the same name.")
Try to create a directory if it doesn't exist and raise an error if there is a non-directory with the same name. :param path: str path to the directory
entailment
def _get_parent_remote_paths(self): """ Get list of remote folders based on the list of all file urls :return: set([str]): set of remote folders (that contain files) """ parent_paths = set([item.get_remote_parent_path() for item in self.file_urls]) if '' in parent_paths: parent_paths.remove('') return parent_paths
Get list of remote folders based on the list of all file urls :return: set([str]): set of remote folders (that contain files)
entailment
def make_local_directories(self): """ Create directories necessary to download the files into dest_directory """ for remote_path in self._get_parent_remote_paths(): local_path = os.path.join(self.dest_directory, remote_path) self._assure_dir_exists(local_path)
Create directories necessary to download the files into dest_directory
entailment
def make_big_empty_files(self): """ Write out a empty file so the workers can seek to where they should write and write their data. """ for file_url in self.file_urls: local_path = file_url.get_local_path(self.dest_directory) with open(local_path, "wb") as outfile: if file_url.size > 0: outfile.seek(int(file_url.size) - 1) outfile.write(b'\0')
Write out a empty file so the workers can seek to where they should write and write their data.
entailment
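The seek-and-write trick above (seek to the last byte and write a single null byte, so workers can later write into their own regions of the file) can be sketched standalone; `make_empty_file` is a hypothetical helper:

```python
import os
import tempfile

def make_empty_file(path, size):
    """Pre-allocate a file of the given size by seeking to the last byte and
    writing a single null byte; the skipped region reads back as zeros."""
    with open(path, "wb") as outfile:
        if size > 0:
            outfile.seek(size - 1)
            outfile.write(b"\0")

with tempfile.TemporaryDirectory() as tmp:
    target = os.path.join(tmp, "placeholder.bin")
    make_empty_file(target, 1024)
    print(os.path.getsize(target))  # 1024
```

On filesystems with sparse-file support the unwritten region need not occupy disk space at all.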
def make_ranges(self, file_url): """ Divides file_url size into an array of ranges to be downloaded by workers. :param: file_url: ProjectFileUrl: file url to download :return: [(int,int)]: array of (start, end) tuples """ size = file_url.size bytes_per_chunk = self.determine_bytes_per_chunk(size) start = 0 ranges = [] while size > 0: amount = bytes_per_chunk if amount > size: amount = size ranges.append((start, start + amount - 1)) start += amount size -= amount return ranges
Divides file_url size into an array of ranges to be downloaded by workers. :param: file_url: ProjectFileUrl: file url to download :return: [(int,int)]: array of (start, end) tuples
entailment
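The range-splitting loop above produces inclusive (start, end) byte ranges suitable for HTTP Range headers; a standalone sketch taking the chunk size as a parameter:

```python
def make_ranges(size, bytes_per_chunk):
    """Split `size` bytes into inclusive (start, end) byte ranges."""
    ranges = []
    start = 0
    while size > 0:
        amount = min(bytes_per_chunk, size)
        ranges.append((start, start + amount - 1))
        start += amount
        size -= amount
    return ranges

# 10 bytes in chunks of 4: two full chunks and one 2-byte remainder
print(make_ranges(10, 4))  # [(0, 3), (4, 7), (8, 9)]
```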
def determine_bytes_per_chunk(self, size): """ Calculate the size of chunk a worker should download. The last worker may download less than this depending on file size. :return: int: byte size for a worker """ workers = self.settings.config.download_workers if not workers or workers == 'None': workers = 1 bytes_per_chunk = int(math.ceil(size / float(workers))) if bytes_per_chunk < self.bytes_per_chunk: bytes_per_chunk = self.bytes_per_chunk return bytes_per_chunk
Calculate the size of chunk a worker should download. The last worker may download less than this depending on file size. :return: int: byte size for a worker
entailment
def split_file_urls_by_size(self, size): """ Return a tuple that contains a list of large files and a list of small files based on the size parameter :param size: int: size (in bytes) that determines if a file is large or small :return: ([ProjectFileUrl],[ProjectFileUrl]): (large file urls, small file urls) """ large_items = [] small_items = [] for file_url in self.file_urls: if file_url.size >= size: large_items.append(file_url) else: small_items.append(file_url) return large_items, small_items
Return a tuple that contains a list of large files and a list of small files based on the size parameter :param size: int: size (in bytes) that determines if a file is large or small :return: ([ProjectFileUrl],[ProjectFileUrl]): (large file urls, small file urls)
entailment
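The partition above is a simple threshold split; sketched on bare sizes instead of ProjectFileUrl objects (`split_by_size` is a hypothetical helper):

```python
def split_by_size(file_sizes, threshold):
    """Partition sizes into (large, small) relative to the threshold; files
    exactly at the threshold count as large, matching the `>=` test above."""
    large = [s for s in file_sizes if s >= threshold]
    small = [s for s in file_sizes if s < threshold]
    return large, small

print(split_by_size([10, 500, 499, 1200], threshold=500))
# ([500, 1200], [10, 499])
```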
def check_downloaded_files_sizes(self): """ Make sure the files sizes are correct. Since we manually create the files this will only catch overruns. Raises ValueError if there is a problematic file. """ for file_url in self.file_urls: local_path = file_url.get_local_path(self.dest_directory) self.check_file_size(file_url.size, local_path)
Make sure the files sizes are correct. Since we manually create the files this will only catch overruns. Raises ValueError if there is a problematic file.
entailment
def check_file_size(file_size, path): """ Raise an error if we didn't get all of the file. :param file_size: int: size of this file :param path: str path where we downloaded the file to """ stat_info = os.stat(path) if stat_info.st_size != file_size: format_str = "Error occurred downloading {}. Got a file size {}. Expected file size:{}" msg = format_str.format(path, stat_info.st_size, file_size) raise ValueError(msg)
Raise an error if we didn't get all of the file. :param file_size: int: size of this file :param path: str path where we downloaded the file to
entailment
def create_context(self, message_queue, task_id): """ Create data needed by download_file_part_run (DukeDS connection info). :param message_queue: Queue: queue background process can send messages to us on :param task_id: int: id of this command's task so message will be routed correctly """ params = (self.settings.dest_directory, self.file_url.json_data, self.seek_amt, self.bytes_to_read) return DownloadContext(self.settings, params, message_queue, task_id)
Create data needed by download_file_part_run (DukeDS connection info). :param message_queue: Queue: queue background process can send messages to us on :param task_id: int: id of this command's task so message will be routed correctly
entailment
def get_url_and_headers_for_range(self, file_download): """ Return url and headers to use for downloading part of a file, adding range headers. :param file_download: FileDownload: contains data about file we will download :return: str, dict: url to download and headers to use """ headers = self.get_range_headers() if file_download.http_headers: headers.update(file_download.http_headers) separator = "" if not file_download.url.startswith("/"): separator = "/" url = '{}{}{}'.format(file_download.host, separator, file_download.url) return url, headers
Return url and headers to use for downloading part of a file, adding range headers. :param file_download: FileDownload: contains data about file we will download :return: str, dict: url to download and headers to use
entailment
def download_chunk(self, url, headers): """ Download part of a file and write to our file :param url: str: URL to download this file :param headers: dict: headers used to download this file chunk """ response = requests.get(url, headers=headers, stream=True) if response.status_code == SWIFT_EXPIRED_STATUS_CODE \ or response.status_code == S3_EXPIRED_STATUS_CODE: raise DownloadInconsistentError(response.text) response.raise_for_status() self.actual_bytes_read = 0 self._write_response_to_file(response) self._verify_download_complete()
Download part of a file and write to our file :param url: str: URL to download this file :param headers: dict: headers used to download this file chunk
entailment
def _write_response_to_file(self, response): """ Write response to the appropriate section of the file at self.local_path. :param response: requests.Response: response containing stream-able data """ with open(self.local_path, 'r+b') as outfile: # open file for read/write (no truncate) outfile.seek(self.seek_amt) for chunk in response.iter_content(chunk_size=self.bytes_per_chunk): if chunk: # filter out keep-alive chunks outfile.write(chunk) self._on_bytes_read(len(chunk))
Write response to the appropriate section of the file at self.local_path. :param response: requests.Response: response containing stream-able data
entailment
def _on_bytes_read(self, num_bytes_read): """ Record our progress so we can validate that we receive all the data :param num_bytes_read: int: number of bytes we received as part of one chunk """ self.actual_bytes_read += num_bytes_read if self.actual_bytes_read > self.bytes_to_read: raise TooLargeChunkDownloadError(self.actual_bytes_read, self.bytes_to_read, self.local_path) self.download_context.send_processed_message(num_bytes_read)
Record our progress so we can validate that we receive all the data :param num_bytes_read: int: number of bytes we received as part of one chunk
entailment
def _verify_download_complete(self): """ Make sure we received all the data """ if self.actual_bytes_read > self.bytes_to_read: raise TooLargeChunkDownloadError(self.actual_bytes_read, self.bytes_to_read, self.local_path) elif self.actual_bytes_read < self.bytes_to_read: raise PartialChunkDownloadError(self.actual_bytes_read, self.bytes_to_read, self.local_path)
Make sure we received all the data
entailment
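The two-sided byte-count check above can be sketched as a plain function; `verify_chunk` is a hypothetical stand-in that returns labels instead of raising `TooLargeChunkDownloadError` / `PartialChunkDownloadError`:

```python
def verify_chunk(actual_bytes_read, bytes_to_read):
    """Classify how the received byte count compares to the expected one:
    too many bytes, too few, or exactly right."""
    if actual_bytes_read > bytes_to_read:
        return 'too_large'
    if actual_bytes_read < bytes_to_read:
        return 'partial'
    return 'ok'

results = [verify_chunk(5, 5), verify_chunk(7, 5), verify_chunk(3, 5)]
```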
def optimize(self, timeSeries, forecastingMethods=None, startingPercentage=0.0, endPercentage=100.0): """Runs the optimization of the given TimeSeries. :param TimeSeries timeSeries: TimeSeries instance that requires an optimized forecast. :param list forecastingMethods: List of forecastingMethods that will be used for optimization. :param float startingPercentage: Defines the start of the interval. This has to be a value in [0.0, 100.0]. It represents the value, where the error calculation should be started. 25.0 for example means that the first 25% of all calculated errors will be ignored. :param float endPercentage: Defines the end of the interval. This has to be a value in [0.0, 100.0]. It represents the value, after which all error values will be ignored. 90.0 for example means that the last 10% of all local errors will be ignored. :return: Returns the optimized forecasting method, the corresponding error measure and the forecasting method's parameters. :rtype: [BaseForecastingMethod, BaseErrorMeasure, Dictionary] :raise: Raises a :py:exc:`ValueError` if forecastingMethods is empty. """ if forecastingMethods is None or len(forecastingMethods) == 0: raise ValueError("forecastingMethods cannot be empty.") self._startingPercentage = startingPercentage self._endPercentage = endPercentage results = [] for forecastingMethod in forecastingMethods: results.append([forecastingMethod] + self.optimize_forecasting_method(timeSeries, forecastingMethod)) # get the forecasting method with the smallest error bestForecastingMethod = min(results, key=lambda item: item[1].get_error(self._startingPercentage, self._endPercentage)) for parameter in bestForecastingMethod[2]: bestForecastingMethod[0].set_parameter(parameter, bestForecastingMethod[2][parameter]) return bestForecastingMethod
Runs the optimization of the given TimeSeries. :param TimeSeries timeSeries: TimeSeries instance that requires an optimized forecast. :param list forecastingMethods: List of forecastingMethods that will be used for optimization. :param float startingPercentage: Defines the start of the interval. This has to be a value in [0.0, 100.0]. It represents the value, where the error calculation should be started. 25.0 for example means that the first 25% of all calculated errors will be ignored. :param float endPercentage: Defines the end of the interval. This has to be a value in [0.0, 100.0]. It represents the value, after which all error values will be ignored. 90.0 for example means that the last 10% of all local errors will be ignored. :return: Returns the optimized forecasting method, the corresponding error measure and the forecasting method's parameters. :rtype: [BaseForecastingMethod, BaseErrorMeasure, Dictionary] :raise: Raises a :py:exc:`ValueError` if forecastingMethods is empty.
entailment
def _generate_next_parameter_value(self, parameter, forecastingMethod): """Generator for a specific parameter of the given forecasting method. :param string parameter: Name of the parameter the generator is used for. :param BaseForecastingMethod forecastingMethod: Instance of a ForecastingMethod. :return: Creates a generator used to iterate over possible parameters. :rtype: generator """ interval = forecastingMethod.get_interval(parameter) precision = 10**self._precison startValue = interval[0] endValue = interval[1] if not interval[2]: startValue += precision if interval[3]: endValue += precision while startValue < endValue: # fix the parameter precision parameterValue = startValue yield parameterValue startValue += precision
Generator for a specific parameter of the given forecasting method. :param string parameter: Name of the parameter the generator is used for. :param BaseForecastingMethod forecastingMethod: Instance of a ForecastingMethod. :return: Creates a generator used to iterate over possible parameters. :rtype: generator
entailment
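The open/closed interval handling can be illustrated with a self-contained generator; `step_values` is a hypothetical sketch of the same endpoint adjustments (shift the start up when the lower bound is open, extend the stop when the upper bound is closed):

```python
def step_values(start, end, lower_closed, upper_closed, precision):
    """Yield values from start to end in `precision` steps, honouring
    open/closed interval endpoints."""
    value = start if lower_closed else start + precision
    stop = end + precision if upper_closed else end
    while value < stop:
        yield round(value, 10)  # keep accumulated float drift out of the output
        value += precision

# Enumerate the half-open interval (0.0, 0.5] in steps of 0.1.
vals = list(step_values(0.0, 0.5, False, True, 0.1))
```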
def optimize_forecasting_method(self, timeSeries, forecastingMethod): """Optimizes the parameters for the given timeSeries and forecastingMethod. :param TimeSeries timeSeries: TimeSeries instance, containing the original data. :param BaseForecastingMethod forecastingMethod: ForecastingMethod that is used to optimize the parameters. :return: Returns a tuple containing only the smallest BaseErrorMeasure instance as defined in :py:meth:`BaseOptimizationMethod.__init__` and the forecastingMethods parameter. :rtype: tuple """ tuneableParameters = forecastingMethod.get_optimizable_parameters() remainingParameters = [] for tuneableParameter in tuneableParameters: remainingParameters.append([tuneableParameter, [item for item in self._generate_next_parameter_value(tuneableParameter, forecastingMethod)]]) # Collect the forecasting results forecastingResults = self.optimization_loop(timeSeries, forecastingMethod, remainingParameters) # Debugging GridSearchTest.inner_optimization_result_test #print "" #print "GridSearch" #print "Instance / SMAPE / Alpha" #for item in forecastingResults: # print "%s / %s / %s" % ( # str(item[0])[-12:-1], # str(item[0].get_error(self._startingPercentage, self._endPercentage))[:8], # item[1]["smoothingFactor"] #) #print "" # Collect the parameters that resulted in the smallest error bestForecastingResult = min(forecastingResults, key=lambda item: item[0].get_error(self._startingPercentage, self._endPercentage)) # return the determined parameters return bestForecastingResult
Optimizes the parameters for the given timeSeries and forecastingMethod. :param TimeSeries timeSeries: TimeSeries instance, containing the original data. :param BaseForecastingMethod forecastingMethod: ForecastingMethod that is used to optimize the parameters. :return: Returns a tuple containing only the smallest BaseErrorMeasure instance as defined in :py:meth:`BaseOptimizationMethod.__init__` and the forecastingMethods parameter. :rtype: tuple
entailment
def optimization_loop(self, timeSeries, forecastingMethod, remainingParameters, currentParameterValues=None): """The optimization loop. This function is called recursively, until all parameter values were evaluated. :param TimeSeries timeSeries: TimeSeries instance that requires an optimized forecast. :param BaseForecastingMethod forecastingMethod: ForecastingMethod that is used to optimize the parameters. :param list remainingParameters: List containing all parameters with their corresponding values that still need to be evaluated. When this list is empty, the most inner optimization loop is reached. :param dictionary currentParameterValues: The currently evaluated forecast parameter combination. :return: Returns a list containing a BaseErrorMeasure instance as defined in :py:meth:`BaseOptimizationMethod.__init__` and the forecastingMethods parameter. :rtype: list """ if currentParameterValues is None: currentParameterValues = {} # The most inner loop is reached if 0 == len(remainingParameters): # set the forecasting parameters for parameter in currentParameterValues: forecastingMethod.set_parameter(parameter, currentParameterValues[parameter]) # calculate the forecast forecast = timeSeries.apply(forecastingMethod) # create and initialize the ErrorMeasure error = self._errorClass(**self._errorMeasureKWArgs) # when the error could not be calculated, return an empty result if not error.initialize(timeSeries, forecast): return [] # Debugging GridSearchTest.inner_optimization_result_test #print "Instance / SMAPE / Alpha: %s / %s / %s" % ( # str(error)[-12:-1], # str(error.get_error(self._startingPercentage, self._endPercentage))[:8], # currentParameterValues["smoothingFactor"] #) # return the result return [[error, dict(currentParameterValues)]] # If this is not the most inner loop then extract an additional parameter localParameter = remainingParameters[-1] localParameterName = localParameter[0] localParameterValues = localParameter[1] # initialize the results results = [] # check the next level for each existing parameter for value in localParameterValues: currentParameterValues[localParameterName] = value results += self.optimization_loop(timeSeries, forecastingMethod, remainingParameters[:-1], currentParameterValues) return results
The optimization loop. This function is called recursively, until all parameter values were evaluated. :param TimeSeries timeSeries: TimeSeries instance that requires an optimized forecast. :param BaseForecastingMethod forecastingMethod: ForecastingMethod that is used to optimize the parameters. :param list remainingParameters: List containing all parameters with their corresponding values that still need to be evaluated. When this list is empty, the most inner optimization loop is reached. :param dictionary currentParameterValues: The currently evaluated forecast parameter combination. :return: Returns a list containing a BaseErrorMeasure instance as defined in :py:meth:`BaseOptimizationMethod.__init__` and the forecastingMethods parameter. :rtype: list
entailment
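The recursive descent over the remaining parameters amounts to enumerating a cartesian product of parameter values; `enumerate_combinations` is a hypothetical simplified sketch that collects parameter dicts instead of fitted error measures:

```python
def enumerate_combinations(remaining, current=None):
    """Recursively walk every combination of parameter values.
    `remaining` is a list of (name, values) pairs; each level peels off
    the last pair and recurses with `remaining[:-1]`, so the caller's
    list is never mutated."""
    if current is None:
        current = {}
    if not remaining:  # innermost level: snapshot the current combination
        return [dict(current)]
    name, values = remaining[-1]
    results = []
    for value in values:
        current[name] = value
        results += enumerate_combinations(remaining[:-1], current)
    return results

# A 2 x 2 parameter grid yields 4 combinations.
combos = enumerate_combinations([("alpha", [0.1, 0.2]), ("beta", [1, 2])])
```

Passing `remaining[:-1]` into the recursion (rather than reassigning the list inside the loop) is what keeps every value of the outer parameter paired with every value of the inner ones.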
def energy_data(): """ Connects to the database and loads Readings for device 8. """ cur = db.cursor().execute("""SELECT timestamp, current FROM Readings""") original = TimeSeries() original.initialize_from_sql_cursor(cur) original.normalize("day", fusionMethod = "sum") return itty.Response(json.dumps(original, cls=PycastEncoder), content_type='application/json')
Connects to the database and loads Readings for device 8.
entailment
def optimize(request): """ Performs Holt Winters Parameter Optimization on the given post data. Expects the following values set in the post of the request: seasonLength - integer valuesToForecast - integer data - two dimensional array of [timestamp, value] """ #Parse arguments seasonLength = int(request.POST.get('seasonLength', 6)) valuesToForecast = int(request.POST.get('valuesToForecast', 0)) data = json.loads(request.POST.get('data', [])) original = TimeSeries.from_twodim_list(data) original.normalize("day") #due to bug in TimeSeries.apply original.set_timeformat("%d.%m") #optimize smoothing hwm = HoltWintersMethod(seasonLength = seasonLength, valuesToForecast = valuesToForecast) gridSearch = GridSearch(SMAPE) optimal_forecasting, error, optimal_params = gridSearch.optimize(original, [hwm]) #perform smoothing smoothed = optimal_forecasting.execute(original) smoothed.set_timeformat("%d.%m") result = { 'params': optimal_params, 'original': original, 'smoothed': smoothed, 'error': round(error.get_error(), 3) } return itty.Response(json.dumps(result, cls=PycastEncoder), content_type='application/json')
Performs Holt Winters Parameter Optimization on the given post data. Expects the following values set in the post of the request: seasonLength - integer valuesToForecast - integer data - two dimensional array of [timestamp, value]
entailment
def holtWinters(request): """ Performs Holt Winters Smoothing on the given post data. Expects the following values set in the post of the request: smoothingFactor - float trendSmoothingFactor - float seasonSmoothingFactor - float seasonLength - integer valuesToForecast - integer data - two dimensional array of [timestamp, value] """ #Parse arguments smoothingFactor = float(request.POST.get('smoothingFactor', 0.2)) trendSmoothingFactor = float(request.POST.get('trendSmoothingFactor', 0.3)) seasonSmoothingFactor = float(request.POST.get('seasonSmoothingFactor', 0.4)) seasonLength = int(request.POST.get('seasonLength', 6)) valuesToForecast = int(request.POST.get('valuesToForecast', 0)) data = json.loads(request.POST.get('data', [])) #perform smoothing hwm = HoltWintersMethod(smoothingFactor = smoothingFactor, trendSmoothingFactor = trendSmoothingFactor, seasonSmoothingFactor = seasonSmoothingFactor, seasonLength = seasonLength, valuesToForecast = valuesToForecast) original = TimeSeries.from_twodim_list(data) original.set_timeformat("%d.%m") smoothed = hwm.execute(original) smoothed.set_timeformat("%d.%m") error = SMAPE() error.initialize(original, smoothed) #process the result result = { 'original': original, 'smoothed': smoothed, 'error': round(error.get_error(), 3) } return itty.Response(json.dumps(result, cls=PycastEncoder), content_type='application/json')
Performs Holt Winters Smoothing on the given post data. Expects the following values set in the post of the request: smoothingFactor - float trendSmoothingFactor - float seasonSmoothingFactor - float seasonLength - integer valuesToForecast - integer data - two dimensional array of [timestamp, value]
entailment
def parse(self, schema, strict=True): """Update C{args} and C{rest}, parsing the raw request arguments. @param schema: The L{Schema} the parameters must be extracted with. @param strict: If C{True} an error is raised if parameters not included in the schema are found, otherwise the extra parameters will be saved in the C{rest} attribute. """ self.args, self.rest = schema.extract(self._raw_params) if strict and self.rest: raise APIError(400, "UnknownParameter", "The parameter %s is not " "recognized" % self.rest.keys()[0])
Update C{args} and C{rest}, parsing the raw request arguments. @param schema: The L{Schema} the parameters must be extracted with. @param strict: If C{True} an error is raised if parameters not included in the schema are found, otherwise the extra parameters will be saved in the C{rest} attribute.
entailment
def _name_to_child_map(children): """ Create a map of name to child based on a list. :param children [LocalFolder/LocalFile] list of children: :return: map child.name -> child """ name_to_child = {} for child in children: name_to_child[child.name] = child return name_to_child
Create a map of name to child based on a list. :param children [LocalFolder/LocalFile] list of children: :return: map child.name -> child
entailment
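The same name index can be built with a dict comprehension; `Node` and `name_to_child_map` here are hypothetical stand-ins for the LocalFolder/LocalFile classes:

```python
class Node:
    """Minimal stand-in for a LocalFile/LocalFolder with a name."""
    def __init__(self, name):
        self.name = name

def name_to_child_map(children):
    """Index children by name; a later duplicate name wins, as in the loop above."""
    return {child.name: child for child in children}

mapping = name_to_child_map([Node("a.txt"), Node("docs")])
```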
def _update_remote_children(remote_parent, children): """ Update remote_ids based on parent, matching up the names of children. :param remote_parent: RemoteProject/RemoteFolder who has children :param children: [LocalFolder,LocalFile] children to set remote_ids based on remote children """ name_to_child = _name_to_child_map(children) for remote_child in remote_parent.children: local_child = name_to_child.get(remote_child.name) if local_child: local_child.update_remote_ids(remote_child)
Update remote_ids based on parent, matching up the names of children. :param remote_parent: RemoteProject/RemoteFolder who has children :param children: [LocalFolder,LocalFile] children to set remote_ids based on remote children
entailment
def _build_project_tree(path, followsymlinks, file_filter): """ Build a tree of LocalFolder with children or just a LocalFile based on a path. :param path: str path to a directory to walk :param followsymlinks: bool should we follow symlinks when walking :param file_filter: FileFilter: include method returns True if we should include a file/folder :return: the top node of the tree LocalFile or LocalFolder """ result = None if os.path.isfile(path): result = LocalFile(path) else: result = _build_folder_tree(os.path.abspath(path), followsymlinks, file_filter) return result
Build a tree of LocalFolder with children or just a LocalFile based on a path. :param path: str path to a directory to walk :param followsymlinks: bool should we follow symlinks when walking :param file_filter: FileFilter: include method returns True if we should include a file/folder :return: the top node of the tree LocalFile or LocalFolder
entailment
def _build_folder_tree(top_abspath, followsymlinks, file_filter): """ Build a tree of LocalFolder with children based on a path. :param top_abspath: str path to a directory to walk :param followsymlinks: bool should we follow symlinks when walking :param file_filter: FileFilter: include method returns True if we should include a file/folder :return: the top node of the tree LocalFolder """ path_to_content = {} child_to_parent = {} ignore_file_patterns = IgnoreFilePatterns(file_filter) ignore_file_patterns.load_directory(top_abspath, followsymlinks) for dir_name, child_dirs, child_files in os.walk(top_abspath, followlinks=followsymlinks): abspath = os.path.abspath(dir_name) folder = LocalFolder(abspath) path_to_content[abspath] = folder # If we have a parent add us to it. parent_path = child_to_parent.get(abspath) if parent_path: path_to_content[parent_path].add_child(folder) remove_child_dirs = [] for child_dir in child_dirs: # Record dir_name as the parent of child_dir so we can call add_child when get to it. abs_child_path = os.path.abspath(os.path.join(dir_name, child_dir)) if ignore_file_patterns.include(abs_child_path, is_file=False): child_to_parent[abs_child_path] = abspath else: remove_child_dirs.append(child_dir) for remove_child_dir in remove_child_dirs: child_dirs.remove(remove_child_dir) for child_filename in child_files: abs_child_filename = os.path.join(dir_name, child_filename) if ignore_file_patterns.include(abs_child_filename, is_file=True): folder.add_child(LocalFile(abs_child_filename)) return path_to_content.get(top_abspath)
Build a tree of LocalFolder with children based on a path. :param top_abspath: str path to a directory to walk :param followsymlinks: bool should we follow symlinks when walking :param file_filter: FileFilter: include method returns True if we should include a file/folder :return: the top node of the tree LocalFolder
entailment
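Pruning `child_dirs` in place is what stops `os.walk` from ever descending into ignored directories; a minimal sketch with a hypothetical `walk_with_pruning` helper:

```python
import os
import tempfile

def walk_with_pruning(top, skip_names):
    """Walk `top`, removing unwanted directory names from child_dirs in
    place so os.walk never descends into them."""
    visited = []
    for dir_name, child_dirs, child_files in os.walk(top):
        # Slice assignment mutates the list os.walk will recurse over.
        child_dirs[:] = [d for d in child_dirs if d not in skip_names]
        visited.append(os.path.basename(dir_name) or dir_name)
    return visited

# Usage: a tree with one kept and one ignored subdirectory.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "keep"))
os.makedirs(os.path.join(root, ".git"))
seen = walk_with_pruning(root, {".git"})
```

Because the walk is pruned up front, no ignore check is needed for files nested under an ignored directory.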
def add_path(self, path): """ Add the path and any children files/folders to the list of content. :param path: str path to add """ abspath = os.path.abspath(path) self.children.append(_build_project_tree(abspath, self.followsymlinks, self.file_filter))
Add the path and any children files/folders to the list of content. :param path: str path to add
entailment
def update_remote_ids(self, remote_project): """ Compare against remote_project, saving off the uuids of matching content. :param remote_project: RemoteProject project to compare against """ if remote_project: self.remote_id = remote_project.id _update_remote_children(remote_project, self.children)
Compare against remote_project, saving off the uuids of matching content. :param remote_project: RemoteProject project to compare against
entailment
def update_remote_ids(self, remote_folder): """ Set remote id based on remote_folder and check children against this folder's children. :param remote_folder: RemoteFolder to compare against """ self.remote_id = remote_folder.id _update_remote_children(remote_folder, self.children)
Set remote id based on remote_folder and check children against this folder's children. :param remote_folder: RemoteFolder to compare against
entailment
def update_remote_ids(self, remote_file): """ Based on a remote file try to assign a remote_id and compare hash info. :param remote_file: RemoteFile remote data pull remote_id from """ self.remote_id = remote_file.id hash_data = self.path_data.get_hash() if hash_data.matches(remote_file.hash_alg, remote_file.file_hash): self.need_to_send = False
Based on a remote file try to assign a remote_id and compare hash info. :param remote_file: RemoteFile remote data pull remote_id from
entailment
def count_chunks(self, bytes_per_chunk): """ Based on the size of the file determine how many chunks we will need to upload. For empty files 1 chunk is returned (DukeDS requires an empty chunk for empty files). :param bytes_per_chunk: int: number of bytes per chunk to split the file into :return: int: number of chunks that will need to be sent """ chunks = math.ceil(float(self.size) / float(bytes_per_chunk)) return max(chunks, 1)
Based on the size of the file determine how many chunks we will need to upload. For empty files 1 chunk is returned (DukeDS requires an empty chunk for empty files). :param bytes_per_chunk: int: number of bytes per chunk to split the file into :return: int: number of chunks that will need to be sent
entailment
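The chunk-count arithmetic, including the empty-file case, can be checked directly; this stand-alone `count_chunks` mirrors the method above:

```python
import math

def count_chunks(size, bytes_per_chunk):
    """Number of chunks needed to upload `size` bytes; empty files still
    need one (empty) chunk, hence the max(..., 1)."""
    return max(math.ceil(float(size) / float(bytes_per_chunk)), 1)

# 0 bytes -> 1 empty chunk; 100 bytes fit in one chunk; 101 need two.
counts = [count_chunks(0, 100), count_chunks(100, 100), count_chunks(101, 100)]
```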
def matches(self, hash_alg, hash_value): """ Does our algorithm and hash value match the specified arguments. :param hash_alg: str: hash algorithm :param hash_value: str: hash value :return: boolean """ return self.alg == hash_alg and self.value == hash_value
Does our algorithm and hash value match the specified arguments. :param hash_alg: str: hash algorithm :param hash_value: str: hash value :return: boolean
entailment
def mime_type(self): """ Guess the mimetype of a file or 'application/octet-stream' if unable to guess. :return: str: mimetype """ mime_type, encoding = mimetypes.guess_type(self.path) if not mime_type: mime_type = 'application/octet-stream' return mime_type
Guess the mimetype of a file or 'application/octet-stream' if unable to guess. :return: str: mimetype
entailment
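The mimetype fallback can be sketched as a free function; `guess_mime_type` is a hypothetical equivalent of the property above:

```python
import mimetypes

def guess_mime_type(path):
    """Guess a file's mimetype from its extension, falling back to the
    generic binary type when the extension is unknown."""
    mime_type, _encoding = mimetypes.guess_type(path)
    return mime_type or 'application/octet-stream'

kinds = [guess_mime_type('report.txt'), guess_mime_type('data.unknownext')]
```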
def read_whole_file(self): """ Slurp the whole file into memory. Should only be used with relatively small files. :return: str: file contents """ chunk = None with open(self.path, 'rb') as infile: chunk = infile.read() return chunk
Slurp the whole file into memory. Should only be used with relatively small files. :return: str: file contents
entailment
def add_file(self, filename, block_size=4096): """ Add an entire file to this hash. :param filename: str filename of the file to hash :param block_size: int size of chunks when reading the file """ with open(filename, "rb") as f: for chunk in iter(lambda: f.read(block_size), b""): self.hash.update(chunk)
Add an entire file to this hash. :param filename: str filename of the file to hash :param block_size: int size of chunks when reading the file
entailment
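The block-wise `iter(..., b"")` read pattern keeps memory flat no matter how large the file is; a hypothetical stand-alone `hash_file`, using MD5 purely for illustration:

```python
import hashlib
import os
import tempfile

def hash_file(filename, block_size=4096):
    """Hash a file in fixed-size blocks so large files never have to fit
    in memory; iter() with a sentinel stops at the empty read at EOF."""
    digest = hashlib.md5()
    with open(filename, 'rb') as f:
        for chunk in iter(lambda: f.read(block_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Usage: hashing chunk-by-chunk gives the same digest as hashing at once.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, 'wb') as f:
    f.write(b'hello world')
file_hash = hash_file(path)
os.remove(path)
```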
def get(cls, parent=None, id=None, data=None): """Inherit info from parent and return new object""" # TODO - allow fetching of parent based on child? if parent is not None: route = copy(parent.route) else: route = {} if id is not None and cls.ID_NAME is not None: route[cls.ID_NAME] = id obj = cls(key=parent.key, route=route, config=parent.config) if data: # This is used in "get all" queries obj.data = data else: obj.fetch() return obj
Inherit info from parent and return new object
entailment
def get_all(cls, parent=None, **params): """Perform a read request against the resource""" if parent is not None: route = copy(parent.route) else: route = {} if cls.ID_NAME is not None: # Empty string triggers "get all resources" route[cls.ID_NAME] = "" base_obj = cls(key=parent.key, route=route, config=parent.config) start = datetime.now() r = requests.get( base_obj._url(), auth=(base_obj.key, ""), params=params) cls._delay_for_ratelimits(start) if r.status_code not in cls.TRUTHY_CODES: return base_obj._handle_request_exception(r) response = r.json() objects_data = response.get(base_obj.ENVELOPE or base_obj, []) return_objects = [] for data in objects_data: # Note that this approach does not get meta data return_objects.append( cls.get( parent=parent, id=data.get(cls.ID_NAME, data.get("id")), data=data)) return return_objects
Perform a read request against the resource
entailment
def _url(self): """Get the URL for the resource""" if self.ID_NAME not in self.route.keys() and "id" in self.data.keys(): self.route[self.ID_NAME] = self.data["id"] return self.config.BASE + self.PATH.format(**self.route)
Get the URL for the resource
entailment
def _handle_request_exception(request): """Raise the proper exception based on the response""" try: data = request.json() except Exception: data = {} code = request.status_code if code == requests.codes.bad: raise BadRequestException(response=data) if code == requests.codes.unauthorized: raise UnauthorizedException(response=data) if code == requests.codes.not_found: raise NotFoundException(response=data) # Generic error fallback request.raise_for_status()
Raise the proper exception based on the response
entailment
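The status-code dispatch can be sketched with the plain integers behind `requests.codes.bad` (400), `requests.codes.unauthorized` (401) and `requests.codes.not_found` (404); the exception classes here are hypothetical local stand-ins:

```python
class BadRequestException(Exception): pass
class UnauthorizedException(Exception): pass
class NotFoundException(Exception): pass

def exception_for_status(code):
    """Map an HTTP status code to its exception class, or None for codes
    that should fall through to the generic raise_for_status() path."""
    mapping = {
        400: BadRequestException,
        401: UnauthorizedException,
        404: NotFoundException,
    }
    return mapping.get(code)

exc = exception_for_status(404)
```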
def fetch(self): """Perform a read request against the resource""" start = datetime.now() r = requests.get(self._url(), auth=(self.key, "")) self._delay_for_ratelimits(start) if r.status_code not in self.TRUTHY_CODES: return self._handle_request_exception(r) response = r.json() if self.ENVELOPE: self.data = response.get(self.ENVELOPE, {}) else: self.data = response # Move to separate function so it can be overridden self._process_meta(response)
Perform a read request against the resource
entailment
def _process_meta(self, response): """Process additional data sent in response""" for key in self.META_ENVELOPES: self.meta[key] = response.get(key)
Process additional data sent in response
entailment
def delete(self): """Delete the object""" start = datetime.now() r = requests.delete(self._url(), auth=(self.key, "")) self._delay_for_ratelimits(start) if r.status_code not in self.TRUTHY_CODES: return self._handle_request_exception(r)
Delete the object
entailment
def patch(self, **kwargs): """Change attributes of the item""" start = datetime.now() r = requests.patch(self._url(), auth=(self.key, ""), data=kwargs) self._delay_for_ratelimits(start) if r.status_code not in self.TRUTHY_CODES: return self._handle_request_exception(r) # Refetch for safety. We could modify based on response, # but I'm afraid of some edge cases and marshal functions. self.fetch()
Change attributes of the item
entailment
def create(cls, parent=None, **kwargs): """Create an object and return it""" if parent is None: raise Exception("Parent class is required") route = copy(parent.route) if cls.ID_NAME is not None: route[cls.ID_NAME] = "" obj = cls(key=parent.key, route=route, config=parent.config) start = datetime.now() response = requests.post(obj._url(), auth=(obj.key, ""), data=kwargs) cls._delay_for_ratelimits(start) if response.status_code not in cls.TRUTHY_CODES: return cls._handle_request_exception(response) # No envelope on post requests data = response.json() obj.route[obj.ID_NAME] = data.get("id", data.get(obj.ID_NAME)) obj.data = data return obj
Create an object and return it
entailment