def split_kv_pairs(lines, comment_char="#", filter_string=None, split_on="=",
                   use_partition=False, ordered=False):
    """Split lines of a list into key/value pairs.

    Use this function to filter and split all lines of a list of strings
    into a dictionary. Named arguments may be used to control how the line
    is split, how lines are filtered and the type of output returned. See
    parameters for more information. When splitting key/value, the first
    occurrence of the split character is used; other occurrences of the
    split char in the line will be ignored. :func:`get_active_lines` is
    called to strip comments and blank lines from the data.

    Parameters:
        lines (list of str): List of the strings to be split.
        comment_char (str): Char that when present in the line indicates
            all following chars are part of a comment. If this is present,
            all comments and all blank lines are removed from the list
            before further processing. The default comment char is the `#`
            character.
        filter_string (str): If the filter string is present, then only
            lines containing the filter will be processed; other lines will
            be ignored.
        split_on (str): Character to use when splitting a line. Only one
            split is performed, at the first occurrence of `split_on`. The
            default string is `=`.
        use_partition (bool): If this parameter is `True` then the python
            `partition` function will be used to split the line. If `False`
            then the python `split` function will be used. The difference
            is that when `False`, if the split character is not present in
            the line then the line is ignored, and when `True` the line
            will be parsed regardless. Set `use_partition` to `True` if you
            have valid lines that do not contain the `split_on` character.
            Set `use_partition` to `False` if you want to ignore lines that
            do not contain the `split_on` character. The default value is
            `False`.
        ordered (bool): If this parameter is `True` then the resulting
            dictionary will be in the same order as in the original file; a
            python `OrderedDict` type is used. If this parameter is `False`
            then the resulting dictionary is in no particular order; a base
            python `dict` type is used. The default is `False`.

    Returns:
        dict: Return value is a dictionary of the key/value pairs. If
        parameter `ordered` is `True` then an OrderedDict is returned,
        otherwise a dict is returned.

    Examples:
        >>> from .. import split_kv_pairs
        >>> for line in lines:
        ...     print(line)
        # Comment line
        # Blank lines will also be removed
        keyword1 = value1   # Inline comments
        keyword2 = value2a=True, value2b=100M
        keyword3    # Key with no separator
        >>> split_kv_pairs(lines)
        {'keyword2': 'value2a=True, value2b=100M', 'keyword1': 'value1'}
        >>> split_kv_pairs(lines, comment_char='#')
        {'keyword2': 'value2a=True, value2b=100M', 'keyword1': 'value1'}
        >>> split_kv_pairs(lines, filter_string='keyword2')
        {'keyword2': 'value2a=True, value2b=100M'}
        >>> split_kv_pairs(lines, use_partition=True)
        {'keyword3': '', 'keyword2': 'value2a=True, value2b=100M', 'keyword1': 'value1'}
        >>> split_kv_pairs(lines, use_partition=True, ordered=True)
        OrderedDict([('keyword1', 'value1'), ('keyword2', 'value2a=True, value2b=100M'), ('keyword3', '')])
    """
    _lines = lines if comment_char is None else get_active_lines(lines, comment_char=comment_char)
    _lines = _lines if filter_string is None else [l for l in _lines if filter_string in l]
    kv_pairs = OrderedDict() if ordered else {}
    for line in _lines:
        if not use_partition:
            if split_on in line:
                k, v = line.split(split_on, 1)
                kv_pairs[k.strip()] = v.strip()
        else:
            k, _, v = line.partition(split_on)
            kv_pairs[k.strip()] = v.strip()
    return kv_pairs
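The split-versus-partition behaviour of split_kv_pairs can be demonstrated with a minimal, self-contained sketch (the comment and filter handling are omitted; only the core splitting logic is reproduced):

```python
from collections import OrderedDict

def split_kv_pairs_sketch(lines, split_on="=", use_partition=False, ordered=False):
    """Minimal re-implementation of the split/partition core logic."""
    kv_pairs = OrderedDict() if ordered else {}
    for line in lines:
        if use_partition:
            # partition always yields a (key, sep, value) triple, so a line
            # without the separator still produces a key with an empty value
            k, _, v = line.partition(split_on)
            kv_pairs[k.strip()] = v.strip()
        elif split_on in line:
            # split(split_on, 1) keeps later separators inside the value;
            # lines lacking the separator are silently dropped
            k, v = line.split(split_on, 1)
            kv_pairs[k.strip()] = v.strip()
    return kv_pairs

lines = ["keyword1 = value1", "keyword2 = value2a=True", "keyword3"]
print(split_kv_pairs_sketch(lines))                      # keyword3 dropped
print(split_kv_pairs_sketch(lines, use_partition=True))  # keyword3 -> ''
```

This reproduces the documented behaviour: with `use_partition=False` a separator-less line vanishes, while with `use_partition=True` it maps to an empty string.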
def find_source_files(input_path, excludes):
    """
    Get a list of filenames for all Java source files within the given
    directory.
    """
    java_files = []
    input_path = os.path.normpath(os.path.abspath(input_path))
    for dirpath, dirnames, filenames in os.walk(input_path):
        if is_excluded(dirpath, excludes):
            del dirnames[:]
            continue
        for filename in filenames:
            if filename.endswith(".java"):
                java_files.append(os.path.join(dirpath, filename))
    return java_files
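The key trick in find_source_files is clearing `dirnames` in place, which tells os.walk not to descend into an excluded subtree at all. A runnable sketch, with a simple name-based exclusion standing in for the original's is_excluded() helper (which is not shown in the snippet):

```python
import os
import tempfile

def find_files_pruned(root, suffix, excluded_dirnames=()):
    """Collect files ending in `suffix`, pruning excluded directory subtrees.

    `excluded_dirnames` is a hypothetical stand-in for is_excluded().
    """
    found = []
    for dirpath, dirnames, filenames in os.walk(root):
        if os.path.basename(dirpath) in excluded_dirnames:
            # Clearing dirnames in place stops os.walk from descending
            # into this directory's children at all.
            del dirnames[:]
            continue
        for filename in filenames:
            if filename.endswith(suffix):
                found.append(os.path.join(dirpath, filename))
    return sorted(found)

# Build a tiny tree: src/A.java should be found, build/gen/B.java pruned.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "src"))
os.makedirs(os.path.join(root, "build", "gen"))
open(os.path.join(root, "src", "A.java"), "w").close()
open(os.path.join(root, "build", "gen", "B.java"), "w").close()
print(find_files_pruned(root, ".java", excluded_dirnames=("build",)))
```

Note that a plain `continue` without `del dirnames[:]` would still skip files directly inside the excluded directory but would keep walking its children.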
def _get_stddevs(self, coeffs, stddev_types):
    """
    Equation (11) on p. 207 for total standard error at a given site:

    ``σ{ln(ε_site)} = sqrt(σ{ln(ε_br)}**2 + σ{ln(δ_site)}**2)``
    """
    for stddev_type in stddev_types:
        assert stddev_type in self.DEFINED_FOR_STANDARD_DEVIATION_TYPES
    return np.sqrt(coeffs['sigma_bedrock']**2 + coeffs['sigma_site']**2)
def generateImplicitParameters(cls, obj):
    """
    Create PRODID, VERSION, and VTIMEZONEs if needed.

    VTIMEZONEs will need to exist whenever TZID parameters exist or when
    datetimes with tzinfo exist.
    """
    for comp in obj.components():
        if comp.behavior is not None:
            comp.behavior.generateImplicitParameters(comp)
    if not hasattr(obj, 'prodid'):
        obj.add(ContentLine('PRODID', [], PRODID))
    if not hasattr(obj, 'version'):
        obj.add(ContentLine('VERSION', [], cls.versionString))
    tzidsUsed = {}

    def findTzids(obj, table):
        if isinstance(obj, ContentLine) and (obj.behavior is None
                                             or not obj.behavior.forceUTC):
            if getattr(obj, 'tzid_param', None):
                table[obj.tzid_param] = 1
            else:
                if type(obj.value) == list:
                    for item in obj.value:
                        # check each datetime in the list, not obj.value itself
                        tzinfo = getattr(item, 'tzinfo', None)
                        tzid = TimezoneComponent.registerTzinfo(tzinfo)
                        if tzid:
                            table[tzid] = 1
                else:
                    tzinfo = getattr(obj.value, 'tzinfo', None)
                    tzid = TimezoneComponent.registerTzinfo(tzinfo)
                    if tzid:
                        table[tzid] = 1
        for child in obj.getChildren():
            if obj.name != 'VTIMEZONE':
                findTzids(child, table)

    findTzids(obj, tzidsUsed)
    oldtzids = [toUnicode(x.tzid.value)
                for x in getattr(obj, 'vtimezone_list', [])]
    for tzid in tzidsUsed.keys():
        tzid = toUnicode(tzid)
        if tzid != u'UTC' and tzid not in oldtzids:
            obj.add(TimezoneComponent(tzinfo=getTzid(tzid)))
def list_voices(
    self,
    language_code=None,
    retry=google.api_core.gapic_v1.method.DEFAULT,
    timeout=google.api_core.gapic_v1.method.DEFAULT,
    metadata=None,
):
    """
    Returns a list of ``Voice`` supported for synthesis.

    Example:
        >>> from google.cloud import texttospeech_v1beta1
        >>>
        >>> client = texttospeech_v1beta1.TextToSpeechClient()
        >>>
        >>> response = client.list_voices()

    Args:
        language_code (str): Optional (but recommended)
            `BCP-47 <https://www.rfc-editor.org/rfc/bcp/bcp47.txt>`__
            language tag. If specified, the ListVoices call will only
            return voices that can be used to synthesize this
            language\_code. E.g. when specifying "en-NZ", you will get
            supported "en-*" voices; when specifying "no", you will get
            supported "no-*" (Norwegian) and "nb-*" (Norwegian Bokmal)
            voices; specifying "zh" will also get supported "cmn-*"
            voices; specifying "zh-hk" will also get supported "yue-\*"
            voices.
        retry (Optional[google.api_core.retry.Retry]): A retry object used
            to retry requests. If ``None`` is specified, requests will not
            be retried.
        timeout (Optional[float]): The amount of time, in seconds, to wait
            for the request to complete. Note that if ``retry`` is
            specified, the timeout applies to each individual attempt.
        metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
            that is provided to the method.

    Returns:
        A :class:`~google.cloud.texttospeech_v1beta1.types.ListVoicesResponse`
        instance.

    Raises:
        google.api_core.exceptions.GoogleAPICallError: If the request
            failed for any reason.
        google.api_core.exceptions.RetryError: If the request failed due
            to a retryable error and retry attempts failed.
        ValueError: If the parameters are invalid.
    """
    # Wrap the transport method to add retry and timeout logic.
    if "list_voices" not in self._inner_api_calls:
        self._inner_api_calls[
            "list_voices"
        ] = google.api_core.gapic_v1.method.wrap_method(
            self.transport.list_voices,
            default_retry=self._method_configs["ListVoices"].retry,
            default_timeout=self._method_configs["ListVoices"].timeout,
            client_info=self._client_info,
        )
    request = cloud_tts_pb2.ListVoicesRequest(language_code=language_code)
    return self._inner_api_calls["list_voices"](
        request, retry=retry, timeout=timeout, metadata=metadata
    )
def power(maf=0.5, beta=0.1, N=100, cutoff=5e-8):
    """
    estimate power for a given allele frequency, effect size beta and
    sample size N

    Assumption:
        z-score = beta_ML distributed as p(0) = N(0, 1.0/(maf*(1-maf)*N))
        under the null hypothesis; the actual beta_ML is distributed as
        p(alt) = N(beta, 1.0/(maf*(1-maf)*N))

    Arguments:
        maf:    minor allele frequency of the SNP
        beta:   effect size of the SNP
        N:      sample size (number of individuals)

    Returns:
        power:  probability to detect a SNP in that study with the given
                parameters
    """
    # std(snp) = sqrt(2.0*maf*(1-maf))
    # power = \int
    # beta_ML = (snp^T*snp)^{-1}*snp^T*Y = cov(snp,Y)/var(snp)
    # E[beta_ML] = (snp^T*snp)^{-1}*snp^T*E[Y]
    #            = (snp^T*snp)^{-1}*snp^T*snp * beta = beta
    # Var[beta_ML] = (snp^T*snp)^{-1}*(snp^T*snp)*(snp^T*snp)^{-1}
    #              = (snp^T*snp)^{-1} = 1/N * var(snp) = 1/N * maf*(1-maf)
    assert 0.0 <= maf <= 0.5, "maf needs to be between 0.0 and 0.5, got %f" % maf
    if beta < 0.0:
        beta = -beta
    std_beta = 1.0 / np.sqrt(N * (2.0 * maf * (1.0 - maf)))
    non_centrality = beta
    beta_samples = np.random.normal(loc=non_centrality, scale=std_beta)  # unused
    n_grid = 100000
    beta_in = np.arange(0.5 / (n_grid + 1.0), (n_grid - 0.5) / (n_grid + 1.0),
                        1.0 / (n_grid + 1.0))
    beta_theoretical = (st.norm.isf(beta_in) * std_beta) + non_centrality
    pvals = st.chi2.sf((beta_theoretical / std_beta) *
                       (beta_theoretical / std_beta), 1.0)
    power = (pvals < cutoff).mean()
    return power, pvals
def message(self, to, subject, text):
    """Alias for :meth:`compose`."""
    return self.compose(to, subject, text)
def as_pseudo(cls, obj):
    """
    Convert obj into a pseudo. Accepts:

        * Pseudo object.
        * string defining a valid path.
    """
    return obj if isinstance(obj, cls) else cls.from_file(obj)
def change_site(self, old_site_name, new_site_name, new_location_name=None,
                new_er_data=None, new_pmag_data=None, replace_data=False):
    """
    Find actual data objects for site and location.
    Then call the Site class change method to update site name and data.
    """
    site = self.find_by_name(old_site_name, self.sites)
    if not site:
        print('-W- {} is not a currently existing site, so it cannot be updated.'.format(old_site_name))
        return False
    if new_location_name:
        if site.location:
            old_location = self.find_by_name(site.location.name, self.locations)
            if old_location:
                old_location.sites.remove(site)
        new_location = self.find_by_name(new_location_name, self.locations)
        if not new_location:
            print("""-W- {} is not a currently existing location.
Adding location with name: {}""".format(new_location_name, new_location_name))
            new_location = self.add_location(new_location_name)
        new_location.sites.append(site)
    else:
        new_location = None
    ## check all declinations/azimuths/longitudes in range 0=>360.
    #for key, value in new_er_data.items():
    #    new_er_data[key] = pmag.adjust_to_360(value, key)
    site.change_site(new_site_name, new_location, new_er_data, new_pmag_data,
                     replace_data)
    return site
def confirm_answer(self, answer, message=None):
    """
    Prompts the user to confirm a question with a yes/no prompt.

    If no message is specified, the default message is:
    "You entered {}. Is this correct?"

    :param answer: the answer to confirm.
    :param message: a message to display rather than the default message.
    :return: True if the user confirmed Yes, or False if user specified No.
    """
    if message is None:
        message = "\nYou entered {0}. Is this correct?".format(answer)
    return self.prompt_for_yes_or_no(message)
def get_tectonic_regionalisation(self, regionalisation, region_type=None):
    '''
    Defines the tectonic region and updates the shear modulus, magnitude
    scaling relation and displacement to length ratio using the regional
    values, if not previously defined for the fault

    :param regionalisation:
        Instance of the :class:
        openquake.hmtk.faults.tectonic_regionalisaion.TectonicRegionalisation

    :param str region_type:
        Name of the region type - if not in regionalisation an error will
        be raised
    '''
    if region_type:
        self.trt = region_type
    if self.trt not in regionalisation.key_list:
        raise ValueError('Tectonic region classification missing or '
                         'not defined in regionalisation')
    for iloc, key_val in enumerate(regionalisation.key_list):
        if self.trt in key_val:
            self.regionalisation = regionalisation.regionalisation[iloc]
            # Update undefined shear modulus from tectonic regionalisation
            if not self.shear_modulus:
                self.shear_modulus = self.regionalisation.shear_modulus
            # Update undefined scaling relation from tectonic
            # regionalisation
            if not self.msr:
                self.msr = self.regionalisation.scaling_rel
            # Update undefined displacement to length ratio from tectonic
            # regionalisation
            if not self.disp_length_ratio:
                self.disp_length_ratio = \
                    self.regionalisation.disp_length_ratio
            break
    return
def download_to_bytes(url, chunk_size=1024 * 1024 * 10, loadbar_length=10):
    """Download a url to bytes.

    If chunk_size is not None, prints a simple loading bar
    [=*loadbar_length] to show progress (in console and notebook).

    :param url: str or url
    :param chunk_size: None or int in bytes
    :param loadbar_length: int length of load bar
    :return: (bytes, encoding)
    """
    stream = chunk_size is not None
    print("Downloading {0:s}: ".format(url), end="")
    response = requests.get(url, stream=stream)
    # raise error if download was unsuccessful
    response.raise_for_status()
    encoding = response.encoding
    total_length = response.headers.get('content-length')
    if total_length is not None:
        total_length = float(total_length)
        if stream:
            print("{0:.2f}Mb/{1:} ".format(total_length / (1024 * 1024),
                                           loadbar_length), end="")
        else:
            print("{0:.2f}Mb ".format(total_length / (1024 * 1024)), end="")
    if stream:
        print("[", end="")
        chunks = []
        loaded = 0
        loaded_size = 0
        for chunk in response.iter_content(chunk_size=chunk_size):
            if chunk:  # filter out keep-alive new chunks
                # print our progress bar
                if total_length is not None:
                    while loaded < loadbar_length * loaded_size / total_length:
                        print("=", end='')
                        loaded += 1
                loaded_size += chunk_size
                chunks.append(chunk)
        if total_length is None:
            print("=" * loadbar_length, end='')
        else:
            while loaded < loadbar_length:
                print("=", end='')
                loaded += 1
        content = b"".join(chunks)
        print("] ", end="")
    else:
        content = response.content
    print("Finished")
    response.close()
    return content, encoding
def split_traversal(traversal, edges, edges_hash=None):
    """
    Given a traversal as a list of nodes, split the traversal if a
    sequential index pair is not in the given edges.

    Parameters
    --------------
    edges : (n, 2) int
      Graph edge indexes
    traversal : (m,) int
      Traversal through edges
    edges_hash : (n,)
      Edges sorted on axis=1 and passed to grouping.hashable_rows

    Returns
    ---------------
    split : sequence of (p,) int
    """
    traversal = np.asanyarray(traversal, dtype=np.int64)
    # hash edge rows for contains checks
    if edges_hash is None:
        edges_hash = grouping.hashable_rows(np.sort(edges, axis=1))
    # turn the (n,) traversal into (n-1, 2) edges
    trav_edge = np.column_stack((traversal[:-1], traversal[1:]))
    # hash each edge so we can compare to edge set
    trav_hash = grouping.hashable_rows(np.sort(trav_edge, axis=1))
    # check if each edge is contained in edge set
    contained = np.in1d(trav_hash, edges_hash)
    # exit early if every edge of traversal exists
    if contained.all():
        # just reshape one traversal
        split = [traversal]
    else:
        # find contiguous groups of contained edges
        blocks = grouping.blocks(contained, min_len=1, only_nonzero=True)
        # turn edges back in to sequence of traversals
        split = [np.append(trav_edge[b][:, 0], trav_edge[b[-1]][1])
                 for b in blocks]
    # close traversals if necessary
    for i, t in enumerate(split):
        # make sure elements of sequence are numpy arrays
        split[i] = np.asanyarray(split[i], dtype=np.int64)
        # don't close if its a single edge
        if len(t) <= 2:
            continue
        # make sure it's not already closed
        edge = np.sort([t[0], t[-1]])
        if edge.ptp() == 0:
            continue
        close = grouping.hashable_rows(edge.reshape((1, 2)))[0]
        # if we need the edge add it
        if close in edges_hash:
            split[i] = np.append(t, t[0]).astype(np.int64)
    result = np.array(split)
    return result
def _print_task_data(self, task):
    """Pretty-prints task data.

    Args:
        task: Task dict generated by Turbinia.
    """
    print(' {0:s} ({1:s})'.format(task['name'], task['id']))
    paths = task.get('saved_paths', [])
    if not paths:
        return
    for path in paths:
        if path.endswith('worker-log.txt'):
            continue
        if path.endswith('{0:s}.log'.format(task.get('id'))):
            continue
        if path.startswith('/'):
            continue
        print(' ' + path)
def make_stream_features(self, stream, features):
    """Add resource binding feature to the <features/> element of the stream.

    [receiving entity only]

    :returns: updated <features/> element.
    """
    self.stream = stream
    if stream.peer_authenticated and not stream.peer.resource:
        ElementTree.SubElement(features, FEATURE_BIND)
def _process_data(self, sd, ase, offsets, data):
    # type: (SyncCopy, blobxfer.models.synccopy.Descriptor,
    #        blobxfer.models.azure.StorageEntity,
    #        blobxfer.models.synccopy.Offsets, bytes) -> None
    """Process downloaded data for upload

    :param SyncCopy self: this
    :param blobxfer.models.synccopy.Descriptor sd: synccopy descriptor
    :param blobxfer.models.azure.StorageEntity ase: storage entity
    :param blobxfer.models.synccopy.Offsets offsets: offsets
    :param bytes data: data to process
    """
    # issue put data
    self._put_data(sd, ase, offsets, data)
    # accounting
    with self._transfer_lock:
        self._synccopy_bytes_sofar += offsets.num_bytes
    # complete offset upload and save resume state
    sd.complete_offset_upload(offsets.chunk_num)
def truncate(self, table):
    """Send DDL to truncate the specified `table`

    :Parameters:
      - `table`: an instance of a
        :py:class:`mysql2pgsql.lib.mysql_reader.MysqlReader.Table` object
        that represents the table to read/write.

    Returns None
    """
    truncate_sql, serial_key_sql = super(PostgresDbWriter, self).truncate(table)
    self.execute(truncate_sql)
    if serial_key_sql:
        self.execute(serial_key_sql)
def _PSat_h(h):
    """Define the saturated line, P=f(h) for region 3

    Parameters
    ----------
    h : float
        Specific enthalpy, [kJ/kg]

    Returns
    -------
    P : float
        Pressure, [MPa]

    Notes
    ------
    Raise :class:`NotImplementedError` if input isn't in limit:

        * h'(623.15K) ≤ h ≤ h''(623.15K)

    References
    ----------
    IAPWS, Revised Supplementary Release on Backward Equations for the
    Functions T(p,h), v(p,h) and T(p,s), v(p,s) for Region 3 of the IAPWS
    Industrial Formulation 1997 for the Thermodynamic Properties of Water
    and Steam, http://www.iapws.org/relguide/Supp-Tv%28ph,ps%293-2014.pdf,
    Eq 10

    Examples
    --------
    >>> _PSat_h(1700)
    17.24175718
    >>> _PSat_h(2400)
    20.18090839
    """
    # Check input parameters
    hmin_Ps3 = _Region1(623.15, Ps_623)["h"]
    hmax_Ps3 = _Region2(623.15, Ps_623)["h"]
    if h < hmin_Ps3 or h > hmax_Ps3:
        raise NotImplementedError("Incoming out of bound")

    nu = h/2600
    I = [0, 1, 1, 1, 1, 5, 7, 8, 14, 20, 22, 24, 28, 36]
    J = [0, 1, 3, 4, 36, 3, 0, 24, 16, 16, 3, 18, 8, 24]
    n = [0.600073641753024, -0.936203654849857e1, 0.246590798594147e2,
         -0.107014222858224e3, -0.915821315805768e14, -0.862332011700662e4,
         -0.235837344740032e2, 0.252304969384128e18, -0.389718771997719e19,
         -0.333775713645296e23, 0.356499469636328e11, -0.148547544720641e27,
         0.330611514838798e19, 0.813641294467829e38]

    suma = 0
    for i, j, ni in zip(I, J, n):
        suma += ni * (nu-1.02)**i * (nu-0.608)**j
    return 22*suma
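The backward equation above is just a 14-term double polynomial in the reduced enthalpy. Evaluating it standalone (omitting the enthalpy-range validation done via _Region1/_Region2 in the full function) reproduces the documented reference values:

```python
def psat_from_h(h):
    """Evaluate IAPWS Eq. 10, P = f(h) on the region-3 saturated line.

    Standalone sketch: the h-range check against h'(623.15 K) and
    h''(623.15 K) is omitted, so the caller must stay in bounds.
    """
    nu = h / 2600  # reduced enthalpy
    I = [0, 1, 1, 1, 1, 5, 7, 8, 14, 20, 22, 24, 28, 36]
    J = [0, 1, 3, 4, 36, 3, 0, 24, 16, 16, 3, 18, 8, 24]
    n = [0.600073641753024, -0.936203654849857e1, 0.246590798594147e2,
         -0.107014222858224e3, -0.915821315805768e14, -0.862332011700662e4,
         -0.235837344740032e2, 0.252304969384128e18, -0.389718771997719e19,
         -0.333775713645296e23, 0.356499469636328e11, -0.148547544720641e27,
         0.330611514838798e19, 0.813641294467829e38]
    suma = sum(ni * (nu - 1.02)**i * (nu - 0.608)**j
               for i, j, ni in zip(I, J, n))
    return 22 * suma  # pressure in MPa

print(round(psat_from_h(1700), 8))  # reference value from the docstring: 17.24175718
print(round(psat_from_h(2400), 8))  # reference value from the docstring: 20.18090839
```

The coefficients are copied verbatim from the function above; no numpy is needed since the sum is a scalar expression.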
def points_from_sql(self, db_name):
    """
    Retrieve point list from SQL database
    """
    points = self._read_from_sql("SELECT * FROM history;", db_name)
    return list(points.columns.values)[1:]
def set_pixel_size(self, deltaPix):
    """
    update pixel size

    :param deltaPix: new pixel size
    :return: None
    """
    self._pixel_size = deltaPix
    if self.psf_type == 'GAUSSIAN':
        try:
            del self._kernel_point_source
        except AttributeError:
            pass
def emit(self, record):
    """
    Insert log messages into the list_view.
    """
    msg = record.getMessage()
    list_store = self.list_view.get_model()
    Gdk.threads_enter()
    if msg:
        # Underline URLs in the record message
        msg = replace_markup_chars(record.getMessage())
        record.msg = URL_FINDER.sub(r'<u>\1</u>', msg)
        self.parent.debug_logs['logs'].append(record)
        # During execution, if the level is higher than DEBUG
        # then the GUI shows the message.
        event_type = getattr(record, 'event_type', '')
        if event_type:
            if event_type == 'dep_installation_start':
                switch_cursor(Gdk.CursorType.WATCH, self.parent.run_window)
                list_store.append([format_entry(record)])
            if event_type == 'dep_installation_end':
                switch_cursor(Gdk.CursorType.ARROW, self.parent.run_window)
        if not self.parent.debugging:
            # We will show only INFO messages and messages which have no
            # dep_ event_type
            if int(record.levelno) > 10:
                if event_type == "dep_check" or event_type == "dep_found":
                    list_store.append([format_entry(record)])
                elif not event_type.startswith("dep_"):
                    list_store.append([format_entry(record, colorize=True)])
        if self.parent.debugging:
            if event_type != "cmd_retcode":
                list_store.append([format_entry(record, show_level=True,
                                                colorize=True)])
    Gdk.threads_leave()
def data_to_binary(self):
    """
    :return: bytes
    """
    if self.channel == 0x01:
        tmp = 0x03
    else:
        tmp = 0x0C
    return bytes([COMMAND_CODE, tmp]) + struct.pack('>L', self.delay_time)[-3:]
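The `struct.pack('>L', ...)[-3:]` idiom above packs the delay as four big-endian bytes and keeps only the low-order three, i.e. a 24-bit big-endian field. A self-contained sketch (COMMAND_CODE here is a hypothetical placeholder; the real constant is defined elsewhere in the module):

```python
import struct

COMMAND_CODE = 0x0F  # hypothetical value, standing in for the real constant

def delay_to_bytes(channel, delay_time):
    """Encode a channel flag byte plus a 24-bit big-endian delay field.

    struct.pack('>L', n) yields 4 big-endian bytes; slicing [-3:] keeps the
    low 3 bytes, effectively transmitting delay_time modulo 2**24.
    """
    tmp = 0x03 if channel == 0x01 else 0x0C
    return bytes([COMMAND_CODE, tmp]) + struct.pack('>L', delay_time)[-3:]

print(delay_to_bytes(0x01, 0x123456).hex())  # 0f03123456
```

Note that values of delay_time at or above 2**24 are silently truncated by the slice, which may or may not be intended by the protocol.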
def update_scope(self, patch_document, scope_id):
    """UpdateScope.

    [Preview API]

    :param :class:`<[JsonPatchOperation]> <azure.devops.v5_0.identity.models.[JsonPatchOperation]>` patch_document:
    :param str scope_id:
    """
    route_values = {}
    if scope_id is not None:
        route_values['scopeId'] = self._serialize.url('scope_id', scope_id, 'str')
    content = self._serialize.body(patch_document, '[JsonPatchOperation]')
    self._send(http_method='PATCH',
               location_id='4e11e2bf-1e79-4eb5-8f34-a6337bd0de38',
               version='5.0-preview.2',
               route_values=route_values,
               content=content,
               media_type='application/json-patch+json')
def is_legal_sequence(self, packet: DataPacket) -> bool:
    """
    Check if the sequence number of the DataPacket is legal.
    For more information see page 17 of
    http://tsp.esta.org/tsp/documents/docs/E1-31-2016.pdf.

    :param packet: the packet to check
    :return: True if the sequence is legal; False if the sequence number is bad
    """
    # If the sequence of the packet is only slightly smaller than the last
    # received sequence, reject it. Therefore calculate the difference
    # between the two values:
    try:
        # try, because self.lastSequence might not have been initialized
        diff = packet.sequence - self.lastSequence[packet.universe]
        # if diff is in ]-20, 0], return False for a bad packet sequence
        if 0 >= diff > -20:
            return False
    except (KeyError, AttributeError):
        pass
    # the sequence is good: refresh the stored value and return True
    self.lastSequence[packet.universe] = packet.sequence
    return True
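The rule above rejects a packet only when its sequence number is 0 to 19 below the last one seen for that universe; anything further behind is treated as a wrap-around or restart and accepted. A standalone sketch of the same check (state passed in as a dict instead of living on `self`):

```python
def check_sequence(last_sequence, universe, new_sequence):
    """E1.31-style sequence check: reject packets whose sequence is 0..19
    lower than the previous one for this universe; accept everything else,
    including the first packet and large backward jumps (wrap-around)."""
    if universe in last_sequence:
        diff = new_sequence - last_sequence[universe]
        if 0 >= diff > -20:
            return False  # duplicate or slightly stale packet
    last_sequence[universe] = new_sequence  # refresh only on accept
    return True

seen = {}
print(check_sequence(seen, 1, 100))  # True  (first packet always accepted)
print(check_sequence(seen, 1, 101))  # True  (monotonic increase)
print(check_sequence(seen, 1, 95))   # False (only slightly behind)
print(check_sequence(seen, 1, 2))    # True  (far behind: wrap-around)
```

One subtlety preserved here: the stored value is only refreshed on accepted packets, so a burst of stale packets cannot drag the reference sequence backwards.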
def firmware_checksum(self, firmware_checksum):
    """
    Sets the firmware_checksum of this DeviceDataPostRequest.
    The SHA256 checksum of the current firmware image.

    :param firmware_checksum: The firmware_checksum of this DeviceDataPostRequest.
    :type: str
    """
    if firmware_checksum is not None and len(firmware_checksum) > 64:
        raise ValueError("Invalid value for `firmware_checksum`, "
                         "length must be less than or equal to `64`")
    self._firmware_checksum = firmware_checksum
def handle_event(self, event):
    """Handle incoming packet from rflink gateway."""
    if event.get('command'):
        if event['command'] == 'on':
            cmd = 'off'
        else:
            cmd = 'on'
        task = self.send_command_ack(event['id'], cmd)
        self.loop.create_task(task)
Handle incoming packet from rflink gateway.
def label_from_func(self, func:Callable, label_cls:Callable=None, **kwargs)->'LabelList': "Apply `func` to every input to get its label." return self._label_from_list([func(o) for o in self.items], label_cls=label_cls, **kwargs)
Apply `func` to every input to get its label.
def get_select_sql(self, columns, order=None, limit=0, skip=0): """ Build a SELECT query based on the current state of the builder. :param columns: SQL fragment describing which columns to select i.e. 'e.obstoryID, s.statusID' :param order: Optional ordering constraint, i.e. 'e.eventTime DESC' :param limit: Optional, used to build the 'LIMIT n' clause. If not specified no limit is imposed. :param skip: Optional, used to build the 'OFFSET n' clause. If not specified results are returned from the first item available. Note that this parameter must be combined with 'order', otherwise there's no ordering imposed on the results and subsequent queries may return overlapping data randomly. It's unlikely that this will actually happen as almost all databases do in fact create an internal ordering, but there's no guarantee of this (and some operations such as indexing will definitely break this property unless explicitly set). :returns: A SQL SELECT query, which will make use of self.sql_args when executed. """ sql = 'SELECT ' sql += '{0} FROM {1} '.format(columns, self.tables) if len(self.where_clauses) > 0: sql += ' WHERE ' sql += ' AND '.join(self.where_clauses) if order is not None: sql += ' ORDER BY {0}'.format(order) if limit > 0: sql += ' LIMIT {0} '.format(limit) if skip > 0: sql += ' OFFSET {0} '.format(skip) return sql
Build a SELECT query based on the current state of the builder. :param columns: SQL fragment describing which columns to select i.e. 'e.obstoryID, s.statusID' :param order: Optional ordering constraint, i.e. 'e.eventTime DESC' :param limit: Optional, used to build the 'LIMIT n' clause. If not specified no limit is imposed. :param skip: Optional, used to build the 'OFFSET n' clause. If not specified results are returned from the first item available. Note that this parameter must be combined with 'order', otherwise there's no ordering imposed on the results and subsequent queries may return overlapping data randomly. It's unlikely that this will actually happen as almost all databases do in fact create an internal ordering, but there's no guarantee of this (and some operations such as indexing will definitely break this property unless explicitly set). :returns: A SQL SELECT query, which will make use of self.sql_args when executed.
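The clause ordering described above (WHERE, then ORDER BY, LIMIT, OFFSET) can be sketched as a free function; the names `build_select` and `where` are hypothetical stand-ins for the builder's internal state:

```python
def build_select(columns, tables, where=(), order=None, limit=0, skip=0):
    # clauses are appended in the order SQL requires: WHERE, ORDER BY, LIMIT, OFFSET
    sql = 'SELECT {0} FROM {1}'.format(columns, tables)
    if where:
        sql += ' WHERE ' + ' AND '.join(where)
    if order is not None:
        sql += ' ORDER BY {0}'.format(order)
    if limit > 0:
        sql += ' LIMIT {0}'.format(limit)
    if skip > 0:
        sql += ' OFFSET {0}'.format(skip)
    return sql
```

As in the original, OFFSET without ORDER BY is syntactically valid but gives no reproducible ordering.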
def is_valid(self): # type: () -> bool """ Checks if the component is valid :return: Always True if it doesn't raise an exception :raises AssertionError: Invalid properties """ assert self._bundle_context assert self._container_props is not None assert self._get_distribution_provider() assert self.get_config_name() assert self.get_namespace() return True
Checks if the component is valid :return: Always True if it doesn't raise an exception :raises AssertionError: Invalid properties
def get_argument_topology(self): """ Helper function to get topology argument. Raises exception if argument is missing. Returns the topology argument. """ try: topology = self.get_argument(constants.PARAM_TOPOLOGY) return topology except tornado.web.MissingArgumentError as e: raise Exception(e.log_message)
Helper function to get topology argument. Raises exception if argument is missing. Returns the topology argument.
def num_batches(n, batch_size): """Compute the number of mini-batches required to cover a data set of size `n` using batches of size `batch_size`. Parameters ---------- n: int the number of samples in the data set batch_size: int the mini-batch size Returns ------- int: the number of batches required """ b = n // batch_size if n % batch_size > 0: b += 1 return b
Compute the number of mini-batches required to cover a data set of size `n` using batches of size `batch_size`. Parameters ---------- n: int the number of samples in the data set batch_size: int the mini-batch size Returns ------- int: the number of batches required
def check_need_install(): """Check if installed package are exactly the same to this one. By checking md5 value of all files. """ need_install_flag = False for root, _, basename_list in os.walk(SRC): if os.path.basename(root) != "__pycache__": for basename in basename_list: src = os.path.join(root, basename) dst = os.path.join(root.replace(SRC, DST), basename) if os.path.exists(dst): if md5_of_file(src) != md5_of_file(dst): return True else: return True return need_install_flag
Check if installed package are exactly the same to this one. By checking md5 value of all files.
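`check_need_install` relies on an `md5_of_file` helper that isn't shown; a minimal sketch using `hashlib`, reading in chunks so large files are never loaded whole:

```python
import hashlib

def md5_of_file(path, chunk_size=1 << 16):
    """Return the hex MD5 digest of a file, streamed chunk by chunk."""
    digest = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            digest.update(chunk)
    return digest.hexdigest()
```

Two files compare equal exactly when their digests match, which is what the install check above exploits.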
def typechecked_module(md, force_recursive = False): """Works like typechecked, but is only applicable to modules (by explicit call). md must be a module or a module name contained in sys.modules. """ if not pytypes.checking_enabled: return md if isinstance(md, str): if md in sys.modules: md = sys.modules[md] if md is None: return md elif md in _pending_modules: # if import is pending, we just store this call for later _pending_modules[md].append(lambda t: typechecked_module(t, True)) return md assert(ismodule(md)) if md.__name__ in _pending_modules: # if import is pending, we just store this call for later _pending_modules[md.__name__].append(lambda t: typechecked_module(t, True)) # we already process the module now as far as possible for its internal use # todo: Issue warning here that not the whole module might be covered yet if md.__name__ in _fully_typechecked_modules and \ _fully_typechecked_modules[md.__name__] == len(md.__dict__): return md # To play it safe we avoid to modify the dict while iterating over it, # so we previously cache keys. # For this we don't use keys() because of Python 3. # Todo: Better use inspect.getmembers here keys = [key for key in md.__dict__] for key in keys: memb = md.__dict__[key] if force_recursive or not is_no_type_check(memb) and hasattr(memb, '__module__'): if _check_as_func(memb) and memb.__module__ == md.__name__ and \ has_type_hints(memb): setattr(md, key, typechecked_func(memb, force_recursive)) elif isclass(memb) and memb.__module__ == md.__name__: typechecked_class(memb, force_recursive, force_recursive) if not md.__name__ in _pending_modules: _fully_typechecked_modules[md.__name__] = len(md.__dict__) return md
Works like typechecked, but is only applicable to modules (by explicit call). md must be a module or a module name contained in sys.modules.
def safety_set_allowed_area_encode(self, target_system, target_component, frame, p1x, p1y, p1z, p2x, p2y, p2z): ''' Set a safety zone (volume), which is defined by two corners of a cube. This message can be used to tell the MAV which setpoints/MISSIONs to accept and which to reject. Safety areas are often enforced by national or competition regulations. target_system : System ID (uint8_t) target_component : Component ID (uint8_t) frame : Coordinate frame, as defined by MAV_FRAME enum in mavlink_types.h. Can be either global, GPS, right-handed with Z axis up or local, right handed, Z axis down. (uint8_t) p1x : x position 1 / Latitude 1 (float) p1y : y position 1 / Longitude 1 (float) p1z : z position 1 / Altitude 1 (float) p2x : x position 2 / Latitude 2 (float) p2y : y position 2 / Longitude 2 (float) p2z : z position 2 / Altitude 2 (float) ''' return MAVLink_safety_set_allowed_area_message(target_system, target_component, frame, p1x, p1y, p1z, p2x, p2y, p2z)
Set a safety zone (volume), which is defined by two corners of a cube. This message can be used to tell the MAV which setpoints/MISSIONs to accept and which to reject. Safety areas are often enforced by national or competition regulations. target_system : System ID (uint8_t) target_component : Component ID (uint8_t) frame : Coordinate frame, as defined by MAV_FRAME enum in mavlink_types.h. Can be either global, GPS, right-handed with Z axis up or local, right handed, Z axis down. (uint8_t) p1x : x position 1 / Latitude 1 (float) p1y : y position 1 / Longitude 1 (float) p1z : z position 1 / Altitude 1 (float) p2x : x position 2 / Latitude 2 (float) p2y : y position 2 / Longitude 2 (float) p2z : z position 2 / Altitude 2 (float)
def isNull(self): """ Returns whether this option set is empty, i.e. has not been modified. :return <bool> """ check = self.raw_values.copy() scope = check.pop('scope', {}) return len(check) == 0 and len(scope) == 0
Returns whether this option set is empty, i.e. has not been modified. :return <bool>
def patch(self, patch: int) -> None: """ :param patch: Patch version number. Must be a non-negative integer. """ self.filter_negatives(patch) self._patch = patch
:param patch: Patch version number. Must be a non-negative integer.
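The setter above pairs with a `@property` getter; a minimal sketch of the same validation pattern, with `filter_negatives` inlined as a plain check and the class name `Version` assumed for illustration:

```python
class Version:
    def __init__(self, patch=0):
        self.patch = patch  # routed through the setter below

    @property
    def patch(self):
        return self._patch

    @patch.setter
    def patch(self, value):
        # reject negatives up front so _patch is always valid
        if value < 0:
            raise ValueError('patch must be a non-negative integer')
        self._patch = value
```

Assigning an invalid value raises before `_patch` changes, so a failed assignment leaves the object in its previous state.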
def _CreateMethod(self, method_name): """Create a method wrapping an invocation to the SOAP service. Args: method_name: A string identifying the name of the SOAP method to call. Returns: A callable that can be used to make the desired SOAP request. """ soap_service_method = getattr(self.suds_client.service, method_name) def MakeSoapRequest(*args): """Perform a SOAP call.""" AddToUtilityRegistry('suds') self.SetHeaders( self._header_handler.GetSOAPHeaders(self.CreateSoapElementForType), self._header_handler.GetHTTPHeaders()) try: return soap_service_method( *[_PackForSuds(arg, self.suds_client.factory, self._packer) for arg in args]) except suds.WebFault as e: if _logger.isEnabledFor(logging.WARNING): _logger.warning('Response summary - %s', _ExtractResponseSummaryFields(e.document)) _logger.debug('SOAP response:\n%s', e.document.str()) if not hasattr(e.fault, 'detail'): exc = (googleads.errors. GoogleAdsServerFault(e.document, message=e.fault.faultstring)) raise exc # Done this way for 2to3 # Before re-throwing the WebFault exception, an error object needs to be # wrapped in a list for safe iteration. fault = e.fault.detail.ApiExceptionFault if not hasattr(fault, 'errors') or fault.errors is None: exc = (googleads.errors. GoogleAdsServerFault(e.document, message=e.fault.faultstring)) raise exc # Done this way for 2to3 obj = fault.errors if not isinstance(obj, list): fault.errors = [obj] exc = googleads.errors.GoogleAdsServerFault(e.document, fault.errors, message=e.fault.faultstring) raise exc # Done this way for 2to3 return MakeSoapRequest
Create a method wrapping an invocation to the SOAP service. Args: method_name: A string identifying the name of the SOAP method to call. Returns: A callable that can be used to make the desired SOAP request.
def is_fundamental(type_): """Returns True if the type represents a C++ fundamental type.""" return does_match_definition( type_, cpptypes.fundamental_t, (cpptypes.const_t, cpptypes.volatile_t)) \ or does_match_definition( type_, cpptypes.fundamental_t, (cpptypes.volatile_t, cpptypes.const_t))
Returns True if the type represents a C++ fundamental type.
def get_densities(self, spin=None): """ Returns the density of states for a particular spin. Args: spin: Spin Returns: Returns the density of states for a particular spin. If Spin is None, the sum of all spins is returned. """ if self.densities is None: result = None elif spin is None: if Spin.down in self.densities: result = self.densities[Spin.up] + self.densities[Spin.down] else: result = self.densities[Spin.up] else: result = self.densities[spin] return result
Returns the density of states for a particular spin. Args: spin: Spin Returns: Returns the density of states for a particular spin. If Spin is None, the sum of all spins is returned.
def render_field(self, field, render_kw): """ Returns the rendered field after adding auto-attributes. Calls the field's widget with the following kwargs: 1. the *render_kw* set on the field are used as the base 2. and are updated with the *render_kw* arguments from the render call 3. this is used as an argument for a call to `get_html5_kwargs` 4. the return value of the call is used as the final *render_kw* """ field_kw = getattr(field, 'render_kw', None) if field_kw is not None: render_kw = dict(field_kw, **render_kw) render_kw = get_html5_kwargs(field, render_kw) return field.widget(field, **render_kw)
Returns the rendered field after adding auto-attributes. Calls the field's widget with the following kwargs: 1. the *render_kw* set on the field are used as the base 2. and are updated with the *render_kw* arguments from the render call 3. this is used as an argument for a call to `get_html5_kwargs` 4. the return value of the call is used as the final *render_kw*
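The key step is `dict(field_kw, **render_kw)`: a copy of the field-level defaults, with the call-time arguments winning on conflicts and the original dict left untouched:

```python
field_kw = {'class': 'form-control', 'placeholder': 'name'}
render_kw = {'placeholder': 'full name'}
# render-call value overrides the field default; field_kw itself is not mutated
merged = dict(field_kw, **render_kw)
```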
def closeEvent(self, event): """ Things to be done when the GUI closes, like saving the settings. """ self.save_config(self.gui_settings['gui_settings']) self.script_thread.quit() self.read_probes.quit() event.accept() print('\n\n======================================================') print('================= Closing B26 Python LAB =============') print('======================================================\n\n')
Things to be done when the GUI closes, like saving the settings.
def register_json_encoder(self, encoder_type: type, encoder: JSONEncoder): """ Register the given JSON encoder for use with the given object type. :param encoder_type: the type of object to encode :param encoder: the JSON encoder :return: this builder """ self._json_encoders[encoder_type] = encoder return self
Register the given JSON encoder for use with the given object type. :param encoder_type: the type of object to encode :param encoder: the JSON encoder :return: this builder
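Returning `self` from the registration method is what enables fluent chaining; a minimal sketch, with `SerializerBuilder` as a hypothetical stand-in for the builder class and only the registry part shown:

```python
class SerializerBuilder:
    """Hypothetical builder: maps object types to JSON encoders."""
    def __init__(self):
        self._json_encoders = {}

    def register_json_encoder(self, encoder_type, encoder):
        self._json_encoders[encoder_type] = encoder
        return self  # returning self allows chained registration

    def build(self):
        # hand back a copy so later registrations don't mutate built results
        return dict(self._json_encoders)
```

Usage then reads as one chained expression: `SerializerBuilder().register_json_encoder(set, list).register_json_encoder(complex, str).build()`.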
def session(self): """Return an instance of Requests Session configured for the ThreatConnect API.""" if self._session is None: from .tcex_session import TcExSession self._session = TcExSession(self) return self._session
Return an instance of Requests Session configured for the ThreatConnect API.
def get_similar_users(self, users=None, k=10): """Get the k most similar users for each entry in `users`. Each type of recommender has its own model for the similarity between users. For example, the factorization_recommender will return the nearest users based on the cosine similarity between latent user factors. (This method is not currently available for item_similarity models.) Parameters ---------- users : SArray or list; optional An :class:`~turicreate.SArray` or list of user ids for which to get similar users. If 'None', then return the `k` most similar users for all users in the training set. k : int, optional The number of neighbors to return for each user. Returns ------- out : SFrame A SFrame with the top ranked similar users for each user. The columns `user`, 'similar', 'score' and 'rank', where `user` matches the user column name specified at training time. The 'rank' is between 1 and `k` and 'score' gives the similarity score of that user. The value of the score depends on the method used for computing user similarities. Examples -------- >>> sf = turicreate.SFrame({'user_id': ["0", "0", "0", "1", "1", "2", "2", "2"], 'item_id': ["a", "b", "c", "a", "b", "b", "c", "d"]}) >>> m = turicreate.factorization_recommender.create(sf) >>> nn = m.get_similar_users() """ if users is None: get_all_users = True users = _SArray() else: get_all_users = False if isinstance(users, list): users = _SArray(users) def check_type(arg, arg_name, required_type, allowed_types): if not isinstance(arg, required_type): raise TypeError("Parameter " + arg_name + " must be of type(s) " + (", ".join(allowed_types) ) + "; Type '" + str(type(arg)) + "' not recognized.") check_type(users, "users", _SArray, ["SArray", "list"]) check_type(k, "k", int, ["int"]) opt = {'model': self.__proxy__, 'users': users, 'get_all_users' : get_all_users, 'k': k} response = self.__proxy__.get_similar_users(users, k, get_all_users) return response
Get the k most similar users for each entry in `users`. Each type of recommender has its own model for the similarity between users. For example, the factorization_recommender will return the nearest users based on the cosine similarity between latent user factors. (This method is not currently available for item_similarity models.) Parameters ---------- users : SArray or list; optional An :class:`~turicreate.SArray` or list of user ids for which to get similar users. If 'None', then return the `k` most similar users for all users in the training set. k : int, optional The number of neighbors to return for each user. Returns ------- out : SFrame An SFrame with the top-ranked similar users for each user. The columns are `user`, 'similar', 'score' and 'rank', where `user` matches the user column name specified at training time. The 'rank' is between 1 and `k` and 'score' gives the similarity score of that user. The value of the score depends on the method used for computing user similarities. Examples -------- >>> sf = turicreate.SFrame({'user_id': ["0", "0", "0", "1", "1", "2", "2", "2"], 'item_id': ["a", "b", "c", "a", "b", "b", "c", "d"]}) >>> m = turicreate.factorization_recommender.create(sf) >>> nn = m.get_similar_users()
async def self_check(self): """ Checks that the platforms configuration is all right. """ platforms = set() for platform in get_platform_settings(): try: name = platform['class'] cls: Type[Platform] = import_class(name) except KeyError: yield HealthCheckFail( '00004', 'Missing platform `class` name in configuration.' ) except (AttributeError, ImportError, ValueError): yield HealthCheckFail( '00003', f'Platform "{name}" cannot be imported.' ) else: if cls in platforms: yield HealthCheckFail( '00002', f'Platform "{name}" is imported more than once.' ) platforms.add(cls) # noinspection PyTypeChecker async for check in cls.self_check(): yield check
Checks that the platforms configuration is all right.
def visit_FunctionDef(self, node, **kwargs): """ Handles function definitions within code. Process a function's docstring, keeping well aware of the function's context and whether or not it's part of an interface definition. """ if self.options.debug: stderr.write("# Function {0.name}{1}".format(node, linesep)) # Push either 'interface' or 'class' onto our containing nodes # hierarchy so we can keep track of context. This will let us tell # if a function is nested within another function or even if a class # is nested within a function. containingNodes = kwargs.get('containingNodes') or [] containingNodes.append((node.name, 'function')) if self.options.topLevelNamespace: fullPathNamespace = self._getFullPathName(containingNodes) contextTag = '.'.join(pathTuple[0] for pathTuple in fullPathNamespace) modifiedContextTag = self._processMembers(node, contextTag) tail = '@namespace {0}'.format(modifiedContextTag) else: tail = self._processMembers(node, '') if get_docstring(node): self._processDocstring(node, tail, containingNodes=containingNodes) # Visit any contained nodes. self.generic_visit(node, containingNodes=containingNodes) # Remove the item we pushed onto the containing nodes hierarchy. containingNodes.pop()
Handles function definitions within code. Process a function's docstring, keeping well aware of the function's context and whether or not it's part of an interface definition.
def calc_integral_merger_rate(self): ''' calculates the integral int_0^t (k(t')-1)/2Tc(t') dt' and stores it as self.integral_merger_rate. Differences of this quantity evaluated at different time points are the cost of a branch. ''' # integrate the piecewise constant branch count function. tvals = np.unique(self.nbranches.x[1:-1]) rate = self.branch_merger_rate(tvals) avg_rate = 0.5*(rate[1:] + rate[:-1]) cost = np.concatenate(([0],np.cumsum(np.diff(tvals)*avg_rate))) # make interpolation objects for the branch count and its integral # the latter is scaled by 0.5/Tc # need to add extra point at very large time before present to # prevent 'out of interpolation range' errors self.integral_merger_rate = interp1d(np.concatenate(([-ttconf.BIG_NUMBER], tvals,[ttconf.BIG_NUMBER])), np.concatenate(([cost[0]], cost,[cost[-1]])), kind='linear')
calculates the integral int_0^t (k(t')-1)/2Tc(t') dt' and stores it as self.integral_merger_rate. Differences of this quantity evaluated at different time points are the cost of a branch.
def truncate(self, index, chain=-1): """Tell the traces to truncate themselves at the given index.""" chain = range(self.chains)[chain] for name in self.trace_names[chain]: self._traces[name].truncate(index, chain)
Tell the traces to truncate themselves at the given index.
def add_log_hooks_to_pytorch_module(self, module, name=None, prefix='', log_parameters=True, log_gradients=True, log_freq=0): """ This instuments hooks into the pytorch module log_parameters - log parameters after a forward pass log_gradients - log gradients after a backward pass log_freq - log gradients/parameters every N batches """ if name is not None: prefix = prefix + name if log_parameters: def parameter_log_hook(module, input_, output, log_track): if not log_track_update(log_track): return for name, parameter in module.named_parameters(): # for pytorch 0.3 Variables if isinstance(parameter, torch.autograd.Variable): data = parameter.data else: data = parameter self.log_tensor_stats( data.cpu(), 'parameters/' + prefix + name) log_track_params = log_track_init(log_freq) module.register_forward_hook( lambda mod, inp, outp: parameter_log_hook(mod, inp, outp, log_track_params)) if log_gradients: for name, parameter in module.named_parameters(): if parameter.requires_grad: log_track_grad = log_track_init(log_freq) self._hook_variable_gradient_stats( parameter, 'gradients/' + prefix + name, log_track_grad)
This instruments hooks into the PyTorch module log_parameters - log parameters after a forward pass log_gradients - log gradients after a backward pass log_freq - log gradients/parameters every N batches
def forward_char_extend_selection(self, e): u"""Move forward a character.""" self.l_buffer.forward_char_extend_selection(self.argument_reset) self.finalize()
Move forward a character.
def autoregister(self, cls): """ Autoregister a class that is encountered for the first time. :param cls: The class that should be registered. """ params = self.get_meta_attributes(cls) return self.register(cls, params)
Autoregister a class that is encountered for the first time. :param cls: The class that should be registered.
def parse_ethnicity(parts): """ Parse the ethnicity from the Backpage ad. Returns the higher level ethnicities associated with an ethnicity. For example, if "russian" is found in the ad, this function will return ["russian", "eastern_european", "white_non_hispanic"]. This allows for us to look at ethnicities numerically and uniformly. Note: The code for this function is pretty old and messy, but still works well enough for our purposes. parts -> """ eastern_european = ['russian', 'ukrainian', 'moldova', 'bulgarian', 'slovakian', 'hungarian', 'romanian', 'polish', 'latvian', 'lithuanian', 'estonia', 'czech', 'croatian', 'bosnian', 'montenegro', 'macedonian', 'albanian', 'slovenian', 'serbian', 'kosovo', 'armenian', 'siberian', 'belarusian'] western_european = ['british', 'german', 'france', 'greek', 'italian', 'belgian', 'netherlands', 'swiss', 'irish', 'danish', 'sweden', 'finnish', 'norwegian', 'portugese', 'austrian', 'sanmarino', 'turkish', 'liechtenstein', 'australian', 'newzealand', 'andorra', 'luxembourg', 'israeli', 'jewish'] caribbean = ['bahamian', 'haitian', 'dominican', 'puertorican', 'jamaican', 'cuban', 'caymanislands', 'trinidad', 'caribbean', 'guadeloupe', 'martinique', 'barbados', 'saintlucia', 'stlucia', 'curacao', 'aruban', 'saintvincent', 'stvincent', 'creole', 'grenadines', 'grenada', 'barbuda', 'saintkitts', 'saintmartin', 'anguilla', 'virginislands', 'montserrat', 'saintbarthelemy'] south_central_american = ['guatemalan', 'belizean', 'honduras', 'nicaraguan', 'elsalvador', 'panamanian', 'costarican', 'colombian', 'columbian', 'venezuelan', 'ecuadorian', 'peruvian', 'bolivian', 'chilean', 'argentine', 'uruguayan', 'paraguayan', 'brazilian', 'guyana', 'suriname'] mexican = ['mexican'] spanish = ['spanish'] east_asian = ['thai', 'vietnamese', 'cambodian', 'malaysian', 'filipino', 'singaporean', 'indonesian', 'japanese', 'chinese', 'taiwanese', 'northkorean', 'southkorean', 'korean'] korean = ['northkorean', 'southkorean', 'korean'] south_asian = ['nepalese', 'bangladeshi', 'bhutanese', 'indian'] hawaiian_pacific_islanders = ['hawaiian', 'guamanian', 'newguinea', 'fiji', 'marianaislands', 'solomonislands', 'micronesia', 'tuvalu', 'samoan', 'vanuata', 'polynesia', 'cookislands', 'pitcaimislands', 'marshallese'] middle_eastern = ['iraqi', 'iranian', 'pakistani', 'afghan', 'kazakhstan', 'uzbekistan', 'tajikistan', 'turkmenistan', 'azerbaijan', 'kyrgyzstan', 'syrian', 'lebanese', 'jordanian', 'saudiarabian', 'unitedarabemirates', 'bahrain', 'kuwait', 'persian', 'kurdish', 'middleeastern'] north_african = ['egyptian', 'libyan', 'algerian', 'tunisian', 'moroccan', 'westernsaharan', 'mauritanian', 'senegal', 'djibouti'] # Broad, high level ethnicity classes white_non_hispanic = eastern_european + western_european hispanic_latino = caribbean + south_central_american + mexican + spanish # Get tribe names!!! american_indian = ['nativeamerican', 'canadian', 'alaskan', 'apache', 'aztec', 'cherokee', 'chinook', 'comanche', 'eskimo', 'incan', 'iroquois', 'kickapoo', 'mayan', 'mohave', 'mojave', 'navaho', 'navajo', 'seminole'] asian = east_asian + south_asian + hawaiian_pacific_islanders midEast_nAfrica = middle_eastern + north_african african_american = ['black', 'african american'] ss_african = ['gambia', 'bissau', 'guinea', 'sierraleone', 'liberian', 'ghana', 'malian', 'burkinafaso', 'beninese', 'nigerian', 'sudanese', 'eritrea', 'ethiopian', 'cameroon', 'centralafricanrepublic', 'somalian', 'gabon', 'congo', 'ugandan', 'kenyan', 'tanzanian', 'rwandan', 'burundi', 'angola', 'zambian', 'mozambique', 'malawi', 'zimbabwe', 'namibia', 'botswana', 'lesotho', 'southafrican', 'swaziland', 'madagascar', 'comoros', 'mauritius', 'saintdenis', 'seychelles', 'saotome'] # Add various identifying values to Category lists white_non_hispanic.append('european') white_non_hispanic.append('white') hispanic_latino.extend(['hispanic', 'latina']) asian.extend(['asian', 'oriental']) midEast_nAfrica.extend(['arabian', 'muslim']) # "from ____" to handle false positives as names from_names = ['malaysia'] # One massive ethnicities list ethnicities = white_non_hispanic + hispanic_latino + american_indian + asian + midEast_nAfrica + african_american + ss_african num = 0 found = [] # Check each part of the body clean_parts = [] for p in parts: part = parser_helpers.clean_part_ethn(p) clean_parts.append(part) # handle "from _____" ethnicities to avoid false positives in names for name in from_names: if re.compile(r'from +' + re.escape(name)).search(part): found.append(name) if any(eth in part for eth in ethnicities): # At least one ethnicity was found for ethn in ethnicities: if ethn in part: index=part.index(ethn) if (' no ' in part and part.index(' no ')+4==index) or ('no ' in part and part.index('no')==0 and part.index('no ') + 3==index) or ('.no ' in part and part.index('.no ') + 4==index): pass else: # Found the current ethnicity if ethn in ['black', 'african american']: ethn = "african_american" if ethn == 'white': ethn = 'white_non_hispanic' if ethn not in found: # Add to Found list, check for subsets found.append(ethn) if ethn in eastern_european: found.append("eastern_european") if ethn in western_european: found.append("western_european") if ethn in caribbean: found.append("caribbean") if ethn in south_central_american: found.append("south_central_american") if ethn in east_asian: found.append("east_asian") if ethn in south_asian: found.append("south_asian_indian") if ethn in hawaiian_pacific_islanders: found.append("hawaiian_pacific_islanders") if ethn in middle_eastern: found.append("middle_eastern") if ethn in north_african: found.append("north_african") # Check the most general ethnicity categories if ethn in white_non_hispanic and "white_non_hispanic" not in found: found.append("white_non_hispanic") num += 1 if ethn in hispanic_latino and "hispanic_latino" not in found: found.append("hispanic_latino") num += 1 if ethn in american_indian and "american_indian" not in found: found.append("american_indian") num += 1 if ethn in asian and "asian" not in found: if ethn != "asian": found.append("asian") num += 1 if ethn in midEast_nAfrica and "midEast_nAfrican" not in found: found.append("midEast_nAfrican") num += 1 if ethn in ss_african and "subsaharan_african" not in found: found.append("subsaharan_african") num += 1 if ethn == "african_american": num += 1 # Remove ethnicity from all parts output_parts = [] for p in clean_parts: part = p if any(eth in part for eth in found): # Ethnicity(s) found in this part for eth in found: if eth in part: # Remove ethnicity part = re.sub(eth, "", part) # Add part to output if len(part) > 2: output_parts.append(part) # Check if there was more than one general ethnicity. If so, the ad is multi-racial. if num > 1: found.append("multiracial") found = list(set(found)) return (found, output_parts)
Parse the ethnicity from the Backpage ad. Returns the higher level ethnicities associated with an ethnicity. For example, if "russian" is found in the ad, this function will return ["russian", "eastern_european", "white_non_hispanic"]. This allows for us to look at ethnicities numerically and uniformly. Note: The code for this function is pretty old and messy, but still works well enough for our purposes. parts ->
def startup(api=None): """Runs the provided function on startup, passing in an instance of the api""" def startup_wrapper(startup_function): apply_to_api = hug.API(api) if api else hug.api.from_object(startup_function) apply_to_api.add_startup_handler(startup_function) return startup_function return startup_wrapper
Runs the provided function on startup, passing in an instance of the api
def apply_default_prefetch(input_source_or_dataflow, trainer): """ Apply a set of default rules to make a fast :class:`InputSource`. Args: input_source_or_dataflow(InputSource | DataFlow): trainer (Trainer): Returns: InputSource """ if not isinstance(input_source_or_dataflow, InputSource): # to mimic same behavior of the old trainer interface if type(trainer) == SimpleTrainer: input = FeedInput(input_source_or_dataflow) else: logger.info("Automatically applying QueueInput on the DataFlow.") input = QueueInput(input_source_or_dataflow) else: input = input_source_or_dataflow if hasattr(trainer, 'devices'): towers = trainer.devices if len(towers) > 1: # seem to only improve on >1 GPUs assert not isinstance(trainer, SimpleTrainer) if isinstance(input, FeedfreeInput) and \ not isinstance(input, (StagingInput, DummyConstantInput)): logger.info("Automatically applying StagingInput on the DataFlow.") input = StagingInput(input) return input
Apply a set of default rules to make a fast :class:`InputSource`. Args: input_source_or_dataflow(InputSource | DataFlow): trainer (Trainer): Returns: InputSource
def format_vertices_section(self): """format vertices section. assign_vertexid() should be called before this method, because self.valid_vertices should be available and each member of self.valid_vertices should have a valid index. """ buf = io.StringIO() buf.write('vertices\n') buf.write('(\n') for v in self.valid_vertices: buf.write(' ' + v.format() + '\n') buf.write(');') return buf.getvalue()
format vertices section. assign_vertexid() should be called before this method, because self.valid_vertices should be available and each member of self.valid_vertices should have a valid index.
def shutdown(self): 'Close all peer connections and stop listening for new ones' log.info("shutting down") for peer in self._dispatcher.peers.values(): peer.go_down(reconnect=False) if self._listener_coro: backend.schedule_exception( errors._BailOutOfListener(), self._listener_coro) if self._udp_listener_coro: backend.schedule_exception( errors._BailOutOfListener(), self._udp_listener_coro)
Close all peer connections and stop listening for new ones
def _outlier_rejection(self, params, model, signal, ii): """ Helper function to reject outliers DRY! """ # Z score across repetitions: z_score = (params - np.mean(params, 0))/np.std(params, 0) # Silence warnings: with warnings.catch_warnings(): warnings.simplefilter("ignore") outlier_idx = np.where(np.abs(z_score)>3.0)[0] nan_idx = np.where(np.isnan(params))[0] outlier_idx = np.unique(np.hstack([nan_idx, outlier_idx])) ii[outlier_idx] = 0 model[outlier_idx] = np.nan signal[outlier_idx] = np.nan params[outlier_idx] = np.nan return model, signal, params, ii
Helper function to reject outliers DRY!
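The rejection rule in `_outlier_rejection` can be sketched standalone; the 3-standard-deviation threshold and the NaN handling mirror the code above, while the data values here are made up for illustration:

```python
import numpy as np

# Repetitions of a parameter estimate: twenty values near 1.0,
# one gross outlier, and one NaN (e.g. a failed fit).
params = np.concatenate([np.full(20, 1.0), [50.0], [np.nan]])

# Z-score against the finite values; |z| > 3 or NaN marks a reject.
z = (params - np.nanmean(params)) / np.nanstd(params)
reject = np.isnan(params) | (np.abs(z) > 3.0)
```

The rejected positions would then be zeroed in the index array and set to NaN in the model, signal, and parameter arrays, as the helper does.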
def attr(self, kw=None, _attributes=None, **attrs): """Add a general or graph/node/edge attribute statement. Args: kw: Attributes target (``None`` or ``'graph'``, ``'node'``, ``'edge'``). attrs: Attributes to be set (must be strings, may be empty). See the :ref:`usage examples in the User Guide <attributes>`. """ if kw is not None and kw.lower() not in ('graph', 'node', 'edge'): raise ValueError('attr statement must target graph, node, or edge: ' '%r' % kw) if attrs or _attributes: if kw is None: a_list = self._a_list(None, attrs, _attributes) line = self._attr_plain % a_list else: attr_list = self._attr_list(None, attrs, _attributes) line = self._attr % (kw, attr_list) self.body.append(line)
Add a general or graph/node/edge attribute statement. Args: kw: Attributes target (``None`` or ``'graph'``, ``'node'``, ``'edge'``). attrs: Attributes to be set (must be strings, may be empty). See the :ref:`usage examples in the User Guide <attributes>`.
def enable_pointer_type(self): """ If a type is a pointer, a platform-independent POINTER_T type needs to be in the generated code. """ # 2015-01 reactivating header templates #log.warning('enable_pointer_type deprecated - replaced by generate_headers') # return # FIXME ignore self.enable_pointer_type = lambda: True import pkgutil headers = pkgutil.get_data('ctypeslib', 'data/pointer_type.tpl').decode() import ctypes from clang.cindex import TypeKind # assuming a LONG also has the same sizeof than a pointer. word_size = self.parser.get_ctypes_size(TypeKind.POINTER) // 8 word_type = self.parser.get_ctypes_name(TypeKind.ULONG) # pylint: disable=protected-access word_char = getattr(ctypes, word_type)._type_ # replacing template values headers = headers.replace('__POINTER_SIZE__', str(word_size)) headers = headers.replace('__REPLACEMENT_TYPE__', word_type) headers = headers.replace('__REPLACEMENT_TYPE_CHAR__', word_char) print(headers, file=self.imports) return
If a type is a pointer, a platform-independent POINTER_T type needs to be in the generated code.
def _build_ocsp_response(self, ocsp_request: OCSPRequest) -> OCSPResponse: """ Create and return an OCSP response from an OCSP request. """ # Get the certificate serial tbs_request = ocsp_request['tbs_request'] request_list = tbs_request['request_list'] if len(request_list) != 1: logger.warning('Received OCSP request with multiple sub requests') raise NotImplementedError('Combined requests not yet supported') single_request = request_list[0] # TODO: Support more than one request req_cert = single_request['req_cert'] serial = req_cert['serial_number'].native # Check certificate status try: certificate_status, revocation_date = self._validate(serial) except Exception as e: logger.exception('Could not determine certificate status: %s', e) return self._fail(ResponseStatus.internal_error) # Retrieve certificate try: subject_cert_contents = self._cert_retrieve(serial) except Exception as e: logger.exception('Could not retrieve certificate with serial %s: %s', serial, e) return self._fail(ResponseStatus.internal_error) # Parse certificate try: subject_cert = asymmetric.load_certificate(subject_cert_contents.encode('utf8')) except Exception as e: logger.exception('Returned certificate with serial %s is invalid: %s', serial, e) return self._fail(ResponseStatus.internal_error) # Build the response builder = OCSPResponseBuilder(**{ 'response_status': ResponseStatus.successful.value, 'certificate': subject_cert, 'certificate_status': certificate_status.value, 'revocation_date': revocation_date, }) # Parse extensions for extension in tbs_request['request_extensions']: extn_id = extension['extn_id'].native critical = extension['critical'].native value = extension['extn_value'].parsed # This variable tracks whether any unknown extensions were encountered unknown = False # Handle nonce extension if extn_id == 'nonce': builder.nonce = value.native # That's all we know else: unknown = True # If an unknown critical extension is encountered (which should not # usually happen, according to
RFC 6960 4.1.2), we should throw our # hands up in despair and run. if unknown is True and critical is True: logger.warning('Could not parse unknown critical extension: %r', dict(extension.native)) return self._fail(ResponseStatus.internal_error) # If it's an unknown non-critical extension, we can safely ignore it. elif unknown is True: logger.info('Ignored unknown non-critical extension: %r', dict(extension.native)) # Set certificate issuer builder.certificate_issuer = self._issuer_cert # Set next update date builder.next_update = datetime.now(timezone.utc) + timedelta(days=self._next_update_days) return builder.build(self._responder_key, self._responder_cert)
Create and return an OCSP response from an OCSP request.
def runcode(self, code): """Execute a code object. When an exception occurs, self.showtraceback() is called to display a traceback. All exceptions are caught except SystemExit, which is reraised. A note about KeyboardInterrupt: this exception may occur elsewhere in this code, and may not always be caught. The caller should be prepared to deal with it. """ try: Exec(code, self.frame.f_globals, self.frame.f_locals) pydevd_save_locals.save_locals(self.frame) except SystemExit: raise except: # In case sys.excepthook called, use original excepthook #PyDev-877: Debug console freezes with Python 3.5+ # (showtraceback does it on python 3.5 onwards) sys.excepthook = sys.__excepthook__ try: self.showtraceback() finally: sys.__excepthook__ = sys.excepthook
Execute a code object. When an exception occurs, self.showtraceback() is called to display a traceback. All exceptions are caught except SystemExit, which is reraised. A note about KeyboardInterrupt: this exception may occur elsewhere in this code, and may not always be caught. The caller should be prepared to deal with it.
def get_folder_children(self, folder_id, name_contains=None): """ Get direct files and folders of a folder. :param folder_id: str: uuid of the folder :param name_contains: str: filter children based on a pattern :return: File|Folder """ return self._create_array_response( self.data_service.get_folder_children( folder_id, name_contains ), DDSConnection._folder_or_file_constructor )
Get direct files and folders of a folder. :param folder_id: str: uuid of the folder :param name_contains: str: filter children based on a pattern :return: File|Folder
def write_single_response(self, response_obj): """ Writes a json rpc response ``{"result": result, "error": error, "id": id}``. If the ``id`` is ``None``, the response will not contain an ``id`` field. The response is sent to the client as an ``application/json`` response. Only one call per response is allowed :param response_obj: A Json rpc response object :return: """ if not isinstance(response_obj, JsonRpcResponse): raise ValueError( "Expected JsonRpcResponse, but got {} instead".format(type(response_obj).__name__)) if not self.response_is_sent: self.set_status(200) self.set_header("Content-Type", "application/json") self.finish(response_obj.to_string()) self.response_is_sent = True
Writes a json rpc response ``{"result": result, "error": error, "id": id}``. If the ``id`` is ``None``, the response will not contain an ``id`` field. The response is sent to the client as an ``application/json`` response. Only one call per response is allowed :param response_obj: A Json rpc response object :return:
def dedupFasta(reads): """ Remove sequence duplicates (based on sequence) from FASTA. @param reads: a C{dark.reads.Reads} instance. @return: a generator of C{dark.reads.Read} instances with no duplicates. """ seen = set() add = seen.add for read in reads: hash_ = md5(read.sequence.encode('UTF-8')).digest() if hash_ not in seen: add(hash_) yield read
Remove sequence duplicates (based on sequence) from FASTA. @param reads: a C{dark.reads.Reads} instance. @return: a generator of C{dark.reads.Read} instances with no duplicates.
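The hash-based deduplication above generalizes beyond `dark.reads` objects; a minimal sketch over plain strings (the `dedup_by_hash` name is ours, not from the library):

```python
from hashlib import md5

def dedup_by_hash(sequences):
    # Yield each sequence the first time its MD5 digest is seen;
    # storing the 16-byte digest instead of the sequence bounds memory.
    seen = set()
    for seq in sequences:
        digest = md5(seq.encode('UTF-8')).digest()
        if digest not in seen:
            seen.add(digest)
            yield seq

unique = list(dedup_by_hash(['ACGT', 'ACGT', 'TTGA', 'ACGT']))
```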
def do_list_queue(self, line): """list_queue <peer> """ def f(p, args): o = p.get() if o.resources.queue: for q in o.resources.queue: print('%s %s' % (q.resource_id, q.port)) self._request(line, f)
list_queue <peer>
def main(): '''i am winston wolfe, i solve problems''' arguments = docopt(__doc__, version=__version__) if arguments['on']: print 'Mr. Wolfe is at your service' print 'If any of your programs run into an error' print 'use wolfe $l' print 'To undo the changes made by mr wolfe in your bashrc, do wolfe off' on() elif arguments['off']: off() print 'Mr. Wolfe says goodbye!' elif arguments['QUERY']: last(arguments['QUERY'], arguments['-g'] or arguments['--google']) else: print __doc__
i am winston wolfe, i solve problems
def get_requirements(filename='requirements.txt'): """ Get the contents of a file listing the requirements. :param filename: path to a requirements file :type filename: str :returns: the list of requirements :rtype: list """ with open(filename) as f: return [ line.rstrip().split('#')[0] for line in f.readlines() if not line.startswith('#') ]
Get the contents of a file listing the requirements. :param filename: path to a requirements file :type filename: str :returns: the list of requirements :rtype: list
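The same filtering logic can be exercised on an in-memory list; note that because `rstrip()` runs before the `#` split, whitespace preceding an inline comment survives in the result:

```python
# Hypothetical requirements-file content, line by line.
lines = ['requests>=2.0  # http client\n', '# a full-line comment\n', 'numpy\n']

# Mirror of the list comprehension in get_requirements().
requirements = [line.rstrip().split('#')[0] for line in lines
                if not line.startswith('#')]
```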
def restore_from_disk(self, clean_old_snapshot=False): """Restore the state of the BF using previous snapshots. :clean_old_snapshot: Delete the old snapshot on the disk (period < current - expiration) """ base_filename = "%s/%s_%s_*.dat" % (self.snapshot_path, self.name, self.expiration) availables_snapshots = glob.glob(base_filename) last_period = self.current_period - dt.timedelta(days=self.expiration-1) for filename in availables_snapshots: snapshot_period = dt.datetime.strptime(filename.split('_')[-1].strip('.dat'), "%Y-%m-%d") if snapshot_period < last_period and not clean_old_snapshot: continue else: self._union_bf_from_file(filename) if snapshot_period == self.current_period: self._union_bf_from_file(filename, current=True) if snapshot_period < last_period and clean_old_snapshot: os.remove(filename) self.ready = True
Restore the state of the BF using previous snapshots. :clean_old_snapshot: Delete the old snapshot on the disk (period < current - expiration)
def get_project(self, project_short_name): """Return project object.""" project = pbclient.find_project(short_name=project_short_name, all=self.all) if (len(project) == 1): return project[0] else: raise ProjectNotFound(project_short_name)
Return project object.
def write_byte(self, addr, val): """Write a single byte to the specified device.""" assert self._device is not None, 'Bus must be opened before operations are made against it!' self._select_device(addr) data = bytearray(1) data[0] = val & 0xFF self._device.write(data)
Write a single byte to the specified device.
def pull(self): """Print out summary information about each packet from the input_stream""" # For each packet in the pcap process the contents for item in self.input_stream: # Print out the timestamp in UTC print('%s -' % item['timestamp'], end='') # Transport info if item['transport']: print(item['transport']['type'], end='') # Print out the Packet info packet_type = item['packet']['type'] print(packet_type, end='') packet = item['packet'] if packet_type in ['IP', 'IP6']: # Is there domain info? if 'src_domain' in packet: print('%s(%s) --> %s(%s)' % (net_utils.inet_to_str(packet['src']), packet['src_domain'], net_utils.inet_to_str(packet['dst']), packet['dst_domain']), end='') else: print('%s --> %s' % (net_utils.inet_to_str(packet['src']), net_utils.inet_to_str(packet['dst'])), end='') else: print(str(packet)) # Only include application if we have it if item['application']: print('Application: %s' % item['application']['type'], end='') print(str(item['application']), end='') # Just for newline print()
Print out summary information about each packet from the input_stream
def list_all(prefix=None, app=None, owner=None, description_contains=None, name_not_contains=None, profile="splunk"): ''' Get all splunk search details. Produces results that can be used to create an sls file. if app or owner are specified, results will be limited to matching saved searches. if description_contains is specified, results will be limited to those where "description_contains in description" is true if name_not_contains is specified, results will be limited to those where "name_not_contains not in name" is true. If prefix parameter is given, alarm names in the output will be prepended with the prefix; alarms that have the prefix will be skipped. This can be used to convert existing alarms to be managed by salt, as follows: CLI example: 1. Make a "backup" of all existing searches $ salt-call splunk_search.list_all --out=txt | sed "s/local: //" > legacy_searches.sls 2. Get all searches with new prefixed names $ salt-call splunk_search.list_all "prefix=**MANAGED BY SALT** " --out=txt | sed "s/local: //" > managed_searches.sls 3. Insert the managed searches into splunk $ salt-call state.sls managed_searches.sls 4. Manually verify that the new searches look right 5. Delete the original searches $ sed s/present/absent/ legacy_searches.sls > remove_legacy_searches.sls $ salt-call state.sls remove_legacy_searches.sls 6. Get all searches again, verify no changes $ salt-call splunk_search.list_all --out=txt | sed "s/local: //" > final_searches.sls $ diff final_searches.sls managed_searches.sls ''' client = _get_splunk(profile) # splunklib doesn't provide the default settings for saved searches. # so, in order to get the defaults, we create a search with no # configuration, get that search, and then delete it. 
We use its contents # as the default settings name = "splunk_search.list_all get defaults" try: client.saved_searches.delete(name) except Exception: pass search = client.saved_searches.create(name, search="nothing") defaults = dict(search.content) client.saved_searches.delete(name) # stuff that splunk returns but that you should not attempt to set. # cf http://dev.splunk.com/view/python-sdk/SP-CAAAEK2 readonly_keys = ("triggered_alert_count", "action.email", "action.populate_lookup", "action.rss", "action.script", "action.summary_index", "qualifiedSearch", "next_scheduled_time") results = OrderedDict() # sort the splunk searches by name, so we get consistent output searches = sorted([(s.name, s) for s in client.saved_searches]) for name, search in searches: if app and search.access.app != app: continue if owner and search.access.owner != owner: continue if name_not_contains and name_not_contains in name: continue if prefix: if name.startswith(prefix): continue name = prefix + name # put name in the OrderedDict first d = [{"name": name}] # add the rest of the splunk settings, ignoring any defaults description = '' for (k, v) in sorted(search.content.items()): if k in readonly_keys: continue if k.startswith("display."): continue if not v: continue if k in defaults and defaults[k] == v: continue d.append({k: v}) if k == 'description': description = v if description_contains and description_contains not in description: continue results["manage splunk search " + name] = {"splunk_search.present": d} return salt.utils.yaml.safe_dump(results, default_flow_style=False, width=120)
Get all splunk search details. Produces results that can be used to create an sls file. if app or owner are specified, results will be limited to matching saved searches. if description_contains is specified, results will be limited to those where "description_contains in description" is true if name_not_contains is specified, results will be limited to those where "name_not_contains not in name" is true. If prefix parameter is given, alarm names in the output will be prepended with the prefix; alarms that have the prefix will be skipped. This can be used to convert existing alarms to be managed by salt, as follows: CLI example: 1. Make a "backup" of all existing searches $ salt-call splunk_search.list_all --out=txt | sed "s/local: //" > legacy_searches.sls 2. Get all searches with new prefixed names $ salt-call splunk_search.list_all "prefix=**MANAGED BY SALT** " --out=txt | sed "s/local: //" > managed_searches.sls 3. Insert the managed searches into splunk $ salt-call state.sls managed_searches.sls 4. Manually verify that the new searches look right 5. Delete the original searches $ sed s/present/absent/ legacy_searches.sls > remove_legacy_searches.sls $ salt-call state.sls remove_legacy_searches.sls 6. Get all searches again, verify no changes $ salt-call splunk_search.list_all --out=txt | sed "s/local: //" > final_searches.sls $ diff final_searches.sls managed_searches.sls
def perfect_platonic_per_pixel(N, R, scale=11, pos=None, zscale=1.0, returnpix=None): """ Create a perfect platonic sphere of a given radius R by supersampling by a factor scale on a grid of size N. Scale must be odd. We are able to perfectly position these particles up to 1/scale. Therefore, let's only allow those types of shifts for now, but return the actual position used for the placement. """ # enforce odd scale size if scale % 2 != 1: scale += 1 if pos is None: # place the default position in the center of the grid pos = np.array([(N-1)/2.0]*3) # limit positions to those that are exact on the size 1./scale # positions have the form (d = divisions): # p = N + m/d s = 1.0/scale f = zscale**2 i = pos.astype('int') p = i + s*((pos - i)/s).astype('int') pos = p + 1e-10 # unfortunately needed to break ties # make the output arrays image = np.zeros((N,)*3) x,y,z = np.meshgrid(*(xrange(N),)*3, indexing='ij') # for each real pixel in the image, integrate a bunch of superres pixels for x0,y0,z0 in zip(x.flatten(),y.flatten(),z.flatten()): # short-circuit things that are just too far away! ddd = np.sqrt(f*(x0-pos[0])**2 + (y0-pos[1])**2 + (z0-pos[2])**2) if ddd > R + 4: image[x0,y0,z0] = 0.0 continue # otherwise, build the local mesh and count the volume xp,yp,zp = np.meshgrid( *(np.linspace(i-0.5+s/2, i+0.5-s/2, scale, endpoint=True) for i in (x0,y0,z0)), indexing='ij' ) ddd = np.sqrt(f*(xp-pos[0])**2 + (yp-pos[1])**2 + (zp-pos[2])**2) if returnpix is not None and returnpix == [x0,y0,z0]: outpix = 1.0 * (ddd < R) vol = (1.0*(ddd < R) + 0.0*(ddd == R)).sum() image[x0,y0,z0] = vol / float(scale**3) #vol_true = 4./3*np.pi*R**3 #vol_real = image.sum() #print vol_true, vol_real, (vol_true - vol_real)/vol_true if returnpix: return image, pos, outpix return image, pos
Create a perfect platonic sphere of a given radius R by supersampling by a factor scale on a grid of size N. Scale must be odd. We are able to perfectly position these particles up to 1/scale. Therefore, let's only allow those types of shifts for now, but return the actual position used for the placement.
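The sub-pixel snapping step (positions quantized to multiples of 1/scale relative to the containing pixel) can be isolated as follows; the numbers here are illustrative:

```python
import numpy as np

# Quantize a continuous position to the nearest lower multiple of
# 1/scale within its pixel, as perfect_platonic_per_pixel does.
scale = 11
s = 1.0 / scale
pos = np.array([7.3, 7.3, 7.3])
i = pos.astype('int')
snapped = i + s * ((pos - i) / s).astype('int')
```

This is why the function returns the position actually used: the caller's requested position is only honored up to a resolution of 1/scale.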
def _set_drop_precedence_force(self, v, load=False): """ Setter method for drop_precedence_force, mapped from YANG variable /ipv6_acl/ipv6/access_list/extended/seq/drop_precedence_force (ip-access-list:drop-prec-uint) If this variable is read-only (config: false) in the source YANG file, then _set_drop_precedence_force is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_drop_precedence_force() directly. """ if hasattr(v, "_utype"): v = v._utype(v) try: t = YANGDynClass(v,base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'0..2']}), is_leaf=True, yang_name="drop-precedence-force", rest_name="drop-precedence-force", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Force drop precedence', u'cli-optional-in-sequence': None, u'cli-suppress-no': None}}, namespace='urn:brocade.com:mgmt:brocade-ipv6-access-list', defining_module='brocade-ipv6-access-list', yang_type='ip-access-list:drop-prec-uint', is_config=True) except (TypeError, ValueError): raise ValueError({ 'error-string': """drop_precedence_force must be of a type compatible with ip-access-list:drop-prec-uint""", 'defined-type': "ip-access-list:drop-prec-uint", 'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'0..2']}), is_leaf=True, yang_name="drop-precedence-force", rest_name="drop-precedence-force", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Force drop precedence', u'cli-optional-in-sequence': None, u'cli-suppress-no': None}}, namespace='urn:brocade.com:mgmt:brocade-ipv6-access-list', 
defining_module='brocade-ipv6-access-list', yang_type='ip-access-list:drop-prec-uint', is_config=True)""", }) self.__drop_precedence_force = t if hasattr(self, '_set'): self._set()
Setter method for drop_precedence_force, mapped from YANG variable /ipv6_acl/ipv6/access_list/extended/seq/drop_precedence_force (ip-access-list:drop-prec-uint) If this variable is read-only (config: false) in the source YANG file, then _set_drop_precedence_force is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_drop_precedence_force() directly.
def get_3_tuple(self,obj,default=None): """Return 3-tuple from number -> (obj,default[1],default[2]) 0-sequence|None -> default 1-sequence -> (obj[0],default[1],default[2]) 2-sequence -> (obj[0],obj[1],default[2]) (3 or more)-sequence -> (obj[0],obj[1],obj[2]) """ if not (default is not None \ and type(default) is tuple \ and len(default)==3): raise ValueError('argument default must be 3-tuple|None but got %s'%(default)) if is_sequence(obj): n = len(obj) if n>3: log.warning('expected 3-sequence but got %s-%s'%(n,type(obj))) if n>=3: return tuple(obj) log.warning('filling with default value (%s) to obtain size=3'%(default[0])) if default is not None: if n==0: return default elif n==1: return (obj[0],default[1],default[2]) elif n==2: return (obj[0],obj[1],default[2]) elif is_number(obj) and default is not None: log.warning('filling with default value (%s) to obtain size=3'%(default[0])) return (obj,default[1],default[2]) elif obj is None and default is not None: log.warning('filling with default value (%s) to obtain size=3'%(default[0])) return default raise ValueError('failed to construct 3-tuple from %s-%s'%(obj,type(obj)))
Return 3-tuple from number -> (obj,default[1],default[2]) 0-sequence|None -> default 1-sequence -> (obj[0],default[1],default[2]) 2-sequence -> (obj[0],obj[1],default[2]) (3 or more)-sequence -> (obj[0],obj[1],obj[2])
def motto(self): """Get the user's self-introduction. For historical reasons this property is still called motto. :return: the user's self-introduction :rtype: str """ if self.url is None: return '' else: if self.soup is not None: bar = self.soup.find( 'div', class_='title-section') if len(bar.contents) < 4: return '' else: return bar.contents[3].text else: assert self.card is not None motto = self.card.find('div', class_='tagline') return motto.text if motto is not None else ''
Get the user's self-introduction. For historical reasons this property is still called motto. :return: the user's self-introduction :rtype: str
def length_between(min_len, max_len, open_left=False, # type: bool open_right=False # type: bool ): """ 'Is length between' validation_function generator. Returns a validation_function to check that `min_len <= len(x) <= max_len (default)`. `open_right` and `open_left` flags allow to transform each side into strict mode. For example setting `open_left=True` will enforce `min_len < len(x) <= max_len`. :param min_len: minimum length for x :param max_len: maximum length for x :param open_left: Boolean flag to turn the left inequality to strict mode :param open_right: Boolean flag to turn the right inequality to strict mode :return: """ if open_left and open_right: def length_between_(x): if (min_len < len(x)) and (len(x) < max_len): return True else: # raise Failure('length between: {} < len(x) < {} does not hold for x={}'.format(min_len, max_len, # x)) raise LengthNotInRange(wrong_value=x, min_length=min_len, left_strict=True, max_length=max_len, right_strict=True) elif open_left: def length_between_(x): if (min_len < len(x)) and (len(x) <= max_len): return True else: # raise Failure('length between: {} < len(x) <= {} does not hold for x={}'.format(min_len, max_len, # x)) raise LengthNotInRange(wrong_value=x, min_length=min_len, left_strict=True, max_length=max_len, right_strict=False) elif open_right: def length_between_(x): if (min_len <= len(x)) and (len(x) < max_len): return True else: # raise Failure('length between: {} <= len(x) < {} does not hold for x={}'.format(min_len, max_len, # x)) raise LengthNotInRange(wrong_value=x, min_length=min_len, left_strict=False, max_length=max_len, right_strict=True) else: def length_between_(x): if (min_len <= len(x)) and (len(x) <= max_len): return True else: # raise Failure('length between: {} <= len(x) <= {} does not hold for x={}'.format(min_len, # max_len, x)) raise LengthNotInRange(wrong_value=x, min_length=min_len, left_strict=False, max_length=max_len, right_strict=False) length_between_.__name__ = 
'length_between_{}_and_{}'.format(min_len, max_len) return length_between_
'Is length between' validation_function generator. Returns a validation_function to check that `min_len <= len(x) <= max_len (default)`. `open_right` and `open_left` flags allow to transform each side into strict mode. For example setting `open_left=True` will enforce `min_len < len(x) <= max_len`. :param min_len: minimum length for x :param max_len: maximum length for x :param open_left: Boolean flag to turn the left inequality to strict mode :param open_right: Boolean flag to turn the right inequality to strict mode :return:
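A self-contained sketch of the same generator pattern, with the library's `LengthNotInRange` exception replaced by a plain `ValueError` so it runs standalone (`make_length_check` is our name, not the library's):

```python
def make_length_check(min_len, max_len, open_left=False, open_right=False):
    # Build a validator for min_len <= len(x) <= max_len, where the
    # open_* flags turn the corresponding bound into a strict one.
    def check(x):
        n = len(x)
        lo_ok = (min_len < n) if open_left else (min_len <= n)
        hi_ok = (n < max_len) if open_right else (n <= max_len)
        if lo_ok and hi_ok:
            return True
        raise ValueError('len(%r)=%d not in required range' % (x, n))
    check.__name__ = 'length_between_%s_and_%s' % (min_len, max_len)
    return check

validator = make_length_check(2, 4, open_right=True)
ok = validator('abc')
```

The original uses four separate inner functions instead of runtime flag checks, which avoids re-testing the flags on every call at the cost of some repetition.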
def get_single_series(self, id): """Fetches a single comic series by id. get /v1/public/series/{seriesId} :param id: ID of Series :type params: int :returns: SeriesDataWrapper >>> m = Marvel(public_key, private_key) >>> response = m.get_single_series(12429) >>> print response.data.result.title 5 Ronin (2010) """ url = "%s/%s" % (Series.resource_url(), id) response = json.loads(self._call(url).text) return SeriesDataWrapper(self, response)
Fetches a single comic series by id. get /v1/public/series/{seriesId} :param id: ID of Series :type params: int :returns: SeriesDataWrapper >>> m = Marvel(public_key, private_key) >>> response = m.get_single_series(12429) >>> print response.data.result.title 5 Ronin (2010)
def transform(x): """ Transform from Timedelta to numerical format """ # microseconds try: x = np.array([_x.total_seconds()*10**6 for _x in x]) except TypeError: x = x.total_seconds()*10**6 return x
Transform from Timedelta to numerical format
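The duck-typed branch above (try to iterate, fall back on TypeError for a scalar) can be checked directly; `to_microseconds` is our name for the sketch:

```python
import numpy as np
from datetime import timedelta

def to_microseconds(x):
    # A sequence of timedeltas becomes an ndarray of microseconds;
    # a single timedelta raises TypeError on iteration and is
    # converted directly, mirroring transform() above.
    try:
        return np.array([td.total_seconds() * 10**6 for td in x])
    except TypeError:
        return x.total_seconds() * 10**6

scalar = to_microseconds(timedelta(seconds=2))
vector = to_microseconds([timedelta(seconds=1), timedelta(milliseconds=5)])
```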
def terminate(self, devices): """Terminate one or more running or stopped instances. """ for device in devices: self.logger.info('Terminating: %s', device.id) try: device.delete() except packet.baseapi.Error: raise PacketManagerException('Unable to terminate instance "{}"'.format(device.id))
Terminate one or more running or stopped instances.
def find_point_in_section_list(point, section_list): """Returns the start of the section the given point belongs to. The given list is assumed to contain start points of consecutive sections, except for the final point, assumed to be the end point of the last section. For example, the list [5, 8, 30, 31] is interpreted as the following list of sections: [5-8), [8-30), [30-31], so the points -32, 4.5, 32 and 100 all match no section, while 5 and 7.5 match [5-8) and so for them the function returns 5, and 30, 30.7 and 31 all match [30-31]. Parameters --------- point : float The point for which to match a section. section_list : sortedcontainers.SortedList A list of start points of consecutive sections. Returns ------- float The start of the section the given point belongs to. None if no match was found. Example ------- >>> from sortedcontainers import SortedList >>> seclist = SortedList([5, 8, 30, 31]) >>> find_point_in_section_list(4, seclist) >>> find_point_in_section_list(5, seclist) 5 >>> find_point_in_section_list(27, seclist) 8 >>> find_point_in_section_list(31, seclist) 30 """ if point < section_list[0] or point > section_list[-1]: return None if point in section_list: if point == section_list[-1]: return section_list[-2] ind = section_list.bisect(point)-1 if ind == 0: return section_list[0] return section_list[ind] try: ind = section_list.bisect(point) return section_list[ind-1] except IndexError: return None
Returns the start of the section the given point belongs to. The given list is assumed to contain start points of consecutive sections, except for the final point, assumed to be the end point of the last section. For example, the list [5, 8, 30, 31] is interpreted as the following list of sections: [5-8), [8-30), [30-31], so the points -32, 4.5, 32 and 100 all match no section, while 5 and 7.5 match [5-8) and so for them the function returns 5, and 30, 30.7 and 31 all match [30-31]. Parameters --------- point : float The point for which to match a section. section_list : sortedcontainers.SortedList A list of start points of consecutive sections. Returns ------- float The start of the section the given point belongs to. None if no match was found. Example ------- >>> from sortedcontainers import SortedList >>> seclist = SortedList([5, 8, 30, 31]) >>> find_point_in_section_list(4, seclist) >>> find_point_in_section_list(5, seclist) 5 >>> find_point_in_section_list(27, seclist) 8 >>> find_point_in_section_list(31, seclist) 30
def handler(self): """gets the security handler for the class""" if hasNTLM: if self._handler is None: passman = request.HTTPPasswordMgrWithDefaultRealm() passman.add_password(None, self._parsed_org_url, self._login_username, self._password) self._handler = HTTPNtlmAuthHandler.HTTPNtlmAuthHandler(passman) return self._handler else: raise Exception("Missing Ntlm python package.")
gets the security handler for the class
def addMethod(self, m): """ Adds a L{Method} to the interface """ if m.nargs == -1: m.nargs = len([a for a in marshal.genCompleteTypes(m.sigIn)]) m.nret = len([a for a in marshal.genCompleteTypes(m.sigOut)]) self.methods[m.name] = m self._xml = None
Adds a L{Method} to the interface
def txtopn(fname): """ Internal undocumented command for opening a new text file for subsequent write access. https://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/ftncls_c.html#Files https://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/ftncls_c.html#Examples :param fname: name of the new text file to be opened. :type fname: str :return: FORTRAN logical unit of opened file :rtype: int """ fnameP = stypes.stringToCharP(fname) unit_out = ctypes.c_int() fname_len = ctypes.c_int(len(fname)) libspice.txtopn_(fnameP, ctypes.byref(unit_out), fname_len) return unit_out.value
Internal undocumented command for opening a new text file for subsequent write access. https://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/ftncls_c.html#Files https://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/ftncls_c.html#Examples :param fname: name of the new text file to be opened. :type fname: str :return: FORTRAN logical unit of opened file :rtype: int
def gha(julian_day, f): """ returns greenwich hour angle """ rad = old_div(np.pi, 180.) d = julian_day - 2451545.0 + f L = 280.460 + 0.9856474 * d g = 357.528 + 0.9856003 * d L = L % 360. g = g % 360. # ecliptic longitude lamb = L + 1.915 * np.sin(g * rad) + .02 * np.sin(2 * g * rad) # obliquity of ecliptic epsilon = 23.439 - 0.0000004 * d # right ascension (in same quadrant as lambda) t = (np.tan(old_div((epsilon * rad), 2)))**2 r = old_div(1, rad) rl = lamb * rad alpha = lamb - r * t * np.sin(2 * rl) + \ (old_div(r, 2)) * t * t * np.sin(4 * rl) # alpha=mod(alpha,360.0) # declination delta = np.sin(epsilon * rad) * np.sin(lamb * rad) delta = old_div(np.arcsin(delta), rad) # equation of time eqt = (L - alpha) # utm = f * 24 * 60 H = old_div(utm, 4) + eqt + 180 H = H % 360.0 return H, delta
Return the Greenwich hour angle and solar declination (both in degrees)
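The flattened `old_div`/numpy arithmetic above is a low-precision solar-position formula. A self-contained sketch of the same computation using only the stdlib `math` module (the function and parameter names here are illustrative, not the original API):

```python
import math

def greenwich_hour_angle(julian_day, day_fraction):
    """Low-precision solar Greenwich hour angle and declination (degrees).

    `day_fraction` is the fraction of the day (the `f` parameter above).
    """
    rad = math.pi / 180.0
    d = julian_day - 2451545.0 + day_fraction
    L = (280.460 + 0.9856474 * d) % 360.0          # mean longitude
    g = (357.528 + 0.9856003 * d) % 360.0          # mean anomaly
    # ecliptic longitude
    lamb = L + 1.915 * math.sin(g * rad) + 0.02 * math.sin(2 * g * rad)
    epsilon = 23.439 - 0.0000004 * d               # obliquity of ecliptic
    # right ascension (in same quadrant as lambda)
    t = math.tan(epsilon * rad / 2.0) ** 2
    r = 1.0 / rad
    rl = lamb * rad
    alpha = lamb - r * t * math.sin(2 * rl) + (r / 2.0) * t * t * math.sin(4 * rl)
    # declination
    delta = math.degrees(math.asin(math.sin(epsilon * rad) * math.sin(lamb * rad)))
    eqt = L - alpha                                # equation of time, in degrees
    H = (day_fraction * 24 * 60 / 4.0 + eqt + 180.0) % 360.0
    return H, delta
```

The hour angle is wrapped into [0, 360) and the declination stays within the obliquity of the ecliptic (about ±23.44 degrees).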
def radius_server_host_timeout(self, **kwargs):
    """Auto Generated Code. Builds the radius-server host timeout
    configuration element and passes it to the callback.
    """
    config = ET.Element("config")
    radius_server = ET.SubElement(config, "radius-server", xmlns="urn:brocade.com:mgmt:brocade-aaa")
    host = ET.SubElement(radius_server, "host")
    hostname_key = ET.SubElement(host, "hostname")
    hostname_key.text = kwargs.pop('hostname')
    timeout = ET.SubElement(host, "timeout")
    timeout.text = kwargs.pop('timeout')

    callback = kwargs.pop('callback', self._callback)
    return callback(config)
Auto Generated Code. Builds the radius-server host timeout configuration element and passes it to the callback.
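The auto-generated method just assembles a namespaced XML payload before handing it to the callback. A minimal standalone sketch of the same element tree using stdlib `ElementTree` (the function name is hypothetical; only the namespace URI and tag names are taken from the original):

```python
import xml.etree.ElementTree as ET

def build_radius_host_timeout(hostname, timeout):
    """Build the <config> payload for a RADIUS server host timeout."""
    config = ET.Element("config")
    radius_server = ET.SubElement(
        config, "radius-server", xmlns="urn:brocade.com:mgmt:brocade-aaa")
    host = ET.SubElement(radius_server, "host")
    ET.SubElement(host, "hostname").text = hostname
    ET.SubElement(host, "timeout").text = timeout
    # Serialize to a unicode string for inspection or transport.
    return ET.tostring(config, encoding="unicode")
```

Note that `SubElement` accepts extra keyword arguments as attributes, which is how the `xmlns` attribute ends up on `radius-server`.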
def transmit_content_metadata(self, user): """ Transmit content metadata to integrated channel. """ exporter = self.get_content_metadata_exporter(user) transmitter = self.get_content_metadata_transmitter() transmitter.transmit(exporter.export())
Transmit content metadata to integrated channel.
def create_process(daemon, name, callback, *callbackParams):
    """Create a process

    :param daemon: True to have the child terminate when the main process exits, False to make the main process wait for the child to finish
    :param name: process name
    :param callback: callback function
    :param callbackParams: arguments for the callback function
    :return: a Process object
    """
    bp = Process(daemon=daemon, name=name, target=callback, args=callbackParams)
    return bp
Create a process

:param daemon: True to have the child terminate when the main process exits, False to make the main process wait for the child to finish
:param name: process name
:param callback: callback function
:param callbackParams: arguments for the callback function
:return: a Process object
def requestAvatarId(self, credentials): """ Return the ID associated with these credentials. @param credentials: something which implements one of the interfaces in self.credentialInterfaces. @return: a Deferred which will fire a string which identifies an avatar, an empty tuple to specify an authenticated anonymous user (provided as checkers.ANONYMOUS) or fire a Failure(UnauthorizedLogin). @see: L{twisted.cred.credentials} """ username, domain = credentials.username.split("@") key = self.users.key(domain, username) if key is None: return defer.fail(UnauthorizedLogin()) def _cbPasswordChecked(passwordIsCorrect): if passwordIsCorrect: return username + '@' + domain else: raise UnauthorizedLogin() return defer.maybeDeferred(credentials.checkPassword, key).addCallback(_cbPasswordChecked)
Return the ID associated with these credentials. @param credentials: something which implements one of the interfaces in self.credentialInterfaces. @return: a Deferred which will fire a string which identifies an avatar, an empty tuple to specify an authenticated anonymous user (provided as checkers.ANONYMOUS) or fire a Failure(UnauthorizedLogin). @see: L{twisted.cred.credentials}
def normalize(self, expr, operation):
    """
    Return a normalized expression transformed to its normal form in the
    given AND or OR operation.

    The new expression arguments will satisfy these conditions:
    - operation(*args) == expr (here mathematical equality is meant)
    - the operation does not occur in any of its args.
    - NOT appears only in literals (aka. negation normal form).

    The operation must be an AND or OR operation or a subclass.
    """
    # ensure that the operation is not NOT
    assert operation in (self.AND, self.OR,)
    # Move NOT inwards.
    expr = expr.literalize()
    # Simplify first otherwise _rdistributive() may take forever.
    expr = expr.simplify()
    operation_example = operation(self.TRUE, self.FALSE)
    expr = self._rdistributive(expr, operation_example)
    # Canonicalize
    expr = expr.simplify()
    return expr
Return a normalized expression transformed to its normal form in the
given AND or OR operation.

The new expression arguments will satisfy these conditions:
- operation(*args) == expr (here mathematical equality is meant)
- the operation does not occur in any of its args.
- NOT appears only in literals (aka. negation normal form).

The operation must be an AND or OR operation or a subclass.
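The literalize step the method relies on — pushing NOT inwards via De Morgan's laws until negation only wraps variables — can be sketched on a toy tuple representation. This mimics the idea only; the names and representation are illustrative, not boolean.py's actual classes:

```python
def literalize(expr):
    """Push NOT inwards so negation only wraps variables (NNF).

    Expressions are nested tuples like ('AND', a, b), ('OR', a, b),
    ('NOT', a), with plain strings as variables.
    """
    if isinstance(expr, str):
        return expr
    op = expr[0]
    if op == 'NOT':
        inner = expr[1]
        if isinstance(inner, str):
            return expr                       # already a literal
        if inner[0] == 'NOT':
            return literalize(inner[1])       # double negation elimination
        # De Morgan: NOT(AND(...)) -> OR(NOT ...), NOT(OR(...)) -> AND(NOT ...)
        dual = 'OR' if inner[0] == 'AND' else 'AND'
        return (dual,) + tuple(literalize(('NOT', a)) for a in inner[1:])
    # AND/OR: recurse into the arguments.
    return (op,) + tuple(literalize(a) for a in expr[1:])
```

After this step, `_rdistributive` only has to apply the distributive law to reach conjunctive or disjunctive normal form.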
def restart(): """Restarts scapy""" if not conf.interactive or not os.path.isfile(sys.argv[0]): raise OSError("Scapy was not started from console") if WINDOWS: try: res_code = subprocess.call([sys.executable] + sys.argv) except KeyboardInterrupt: res_code = 1 finally: os._exit(res_code) os.execv(sys.executable, [sys.executable] + sys.argv)
Restarts scapy
def apply_mesh_programs(self, mesh_programs=None):
    """Applies mesh programs to meshes"""
    if not mesh_programs:
        mesh_programs = [ColorProgram(), TextureProgram(), FallbackProgram()]

    for mesh in self.meshes:
        for mp in mesh_programs:
            instance = mp.apply(mesh)
            if instance is not None:
                if isinstance(instance, MeshProgram):
                    mesh.mesh_program = mp
                    break
                else:
                    raise ValueError("apply() must return a MeshProgram instance, not {}".format(type(instance)))

        if not mesh.mesh_program:
            print("WARNING: No mesh program applied to '{}'".format(mesh.name))
Applies mesh programs to meshes
def request_halt(self, req, msg): """Halt the device server. Returns ------- success : {'ok', 'fail'} Whether scheduling the halt succeeded. Examples -------- :: ?halt !halt ok """ f = Future() @gen.coroutine def _halt(): req.reply("ok") yield gen.moment self.stop(timeout=None) raise AsyncReply self.ioloop.add_callback(lambda: chain_future(_halt(), f)) return f
Halt the device server. Returns ------- success : {'ok', 'fail'} Whether scheduling the halt succeeded. Examples -------- :: ?halt !halt ok
def has_key(tup, key): """has(tuple, string) -> bool Return whether a given tuple has a key and the key is bound. """ if isinstance(tup, framework.TupleLike): return tup.is_bound(key) if isinstance(tup, dict): return key in tup if isinstance(tup, list): if not isinstance(key, int): raise ValueError('Key must be integer when checking list index') return key < len(tup) raise ValueError('Not a tuple-like object: %r' % tup)
has(tuple, string) -> bool Return whether a given tuple has a key and the key is bound.
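Without the `framework.TupleLike` dependency, the dict and list branches behave as below. A trimmed, stdlib-only sketch for illustration:

```python
def has_key(tup, key):
    """Return whether `tup` has `key` bound (dict/list branches only)."""
    if isinstance(tup, dict):
        return key in tup
    if isinstance(tup, list):
        if not isinstance(key, int):
            raise ValueError('Key must be integer when checking list index')
        return key < len(tup)
    raise ValueError('Not a tuple-like object: %r' % tup)
```

Note that the list branch only checks the upper bound, so any negative index counts as present — a quirk inherited from the original.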
def main(): """Main function called upon script execution.""" # First get the wireless interface index. pack = struct.pack('16sI', OPTIONS['<interface>'].encode('ascii'), 0) sk = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) try: info = struct.unpack('16sI', fcntl.ioctl(sk.fileno(), 0x8933, pack)) except OSError: return error('Wireless interface {0} does not exist.'.format(OPTIONS['<interface>'])) finally: sk.close() if_index = int(info[1]) # Next open a socket to the kernel and bind to it. Same one used for sending and receiving. sk = nl_socket_alloc() # Creates an `nl_sock` instance. ok(0, genl_connect, sk) # Create file descriptor and bind socket. _LOGGER.debug('Finding the nl80211 driver ID...') driver_id = ok(0, genl_ctrl_resolve, sk, b'nl80211') _LOGGER.debug('Finding the nl80211 scanning group ID...') mcid = ok(0, genl_ctrl_resolve_grp, sk, b'nl80211', b'scan') # Scan for access points 1 or more (if requested) times. if not OPTIONS['--no-sudo']: print('Scanning for access points, may take about 8 seconds...') else: print("Attempting to read results of previous scan.") results = dict() for i in range(2, -1, -1): # Three tries on errors. if not OPTIONS['--no-sudo']: ret = ok(i, do_scan_trigger, sk, if_index, driver_id, mcid) if ret < 0: _LOGGER.warning('do_scan_trigger() returned %d, retrying in 5 seconds.', ret) time.sleep(5) continue ret = ok(i, do_scan_results, sk, if_index, driver_id, results) if ret < 0: _LOGGER.warning('do_scan_results() returned %d, retrying in 5 seconds.', ret) time.sleep(5) continue break if not results: print('No access points detected.') return # Print results. print('Found {0} access points:'.format(len(results))) print_table(results.values())
Main function called upon script execution.
def set_rotation(self, r=0, redraw=True): """ Sets the LED matrix rotation for viewing, adjust if the Pi is upside down or sideways. 0 is with the Pi HDMI port facing downwards """ if r in self._pix_map.keys(): if redraw: pixel_list = self.get_pixels() self._rotation = r if redraw: self.set_pixels(pixel_list) else: raise ValueError('Rotation must be 0, 90, 180 or 270 degrees')
Sets the LED matrix rotation for viewing, adjust if the Pi is upside down or sideways. 0 is with the Pi HDMI port facing downwards
def _process_ping(self):
    """
    The server will be periodically sending a PING, and if the
    client does not reply a PONG back a number of times, it will
    close the connection sending an `-ERR 'Stale Connection'` error.
    """
    yield self.send_command(PONG_PROTO)
    if self._flush_queue.empty():
        yield self._flush_pending()
The server will be periodically sending a PING, and if the client does not reply a PONG back a number of times, it will close the connection sending an `-ERR 'Stale Connection'` error.
async def check_ping_timeout(self): """Make sure the client is still sending pings. This helps detect disconnections for long-polling clients. """ if self.closed: raise exceptions.SocketIsClosedError() if time.time() - self.last_ping > self.server.ping_interval + 5: self.server.logger.info('%s: Client is gone, closing socket', self.sid) # Passing abort=False here will cause close() to write a # CLOSE packet. This has the effect of updating half-open sockets # to their correct state of disconnected await self.close(wait=False, abort=False) return False return True
Make sure the client is still sending pings. This helps detect disconnections for long-polling clients.
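The timeout test itself is a single clock comparison. A standalone sketch (the names are illustrative; the default 5-second grace mirrors the hard-coded `+ 5` above):

```python
import time

def ping_timed_out(last_ping, ping_interval, grace=5):
    # True when the most recent ping is older than interval + grace seconds.
    return time.time() - last_ping > ping_interval + grace
```

When this returns True, the server treats the client as gone and closes the socket; passing `abort=False` on close still writes a CLOSE packet so half-open sockets learn they are disconnected.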
def _get_symbolic_function_initial_state(self, function_addr, fastpath_mode_state=None):
    """
    Symbolically execute the first basic block of the specified function and return it.
    We prepare the state using the already existing state in fastpath mode (if available).

    :param function_addr: The function address
    :return: A symbolic state if succeeded, None otherwise
    """
    if function_addr is None:
        return None

    if function_addr in self._symbolic_function_initial_state:
        return self._symbolic_function_initial_state[function_addr]

    if fastpath_mode_state is not None:
        fastpath_state = fastpath_mode_state
    else:
        if function_addr in self._function_input_states:
            fastpath_state = self._function_input_states[function_addr]
        else:
            raise AngrCFGError('The impossible happened. Please report to Fish.')

    symbolic_initial_state = self.project.factory.entry_state(mode='symbolic')
    if fastpath_state is not None:
        symbolic_initial_state = self.project.simos.prepare_call_state(fastpath_state,
                                                                       initial_state=symbolic_initial_state)

    # Find number of instructions of start block
    func = self.project.kb.functions.get(function_addr)
    start_block = func._get_block(function_addr)
    num_instr = start_block.instructions - 1

    symbolic_initial_state.ip = function_addr
    path = self.project.factory.path(symbolic_initial_state)
    try:
        sim_successors = self.project.factory.successors(path.state, num_inst=num_instr)
    except (SimError, AngrError):
        return None

    # We execute all but the last instruction in this basic block, so we have a cleaner
    # state
    # Start execution!
    exits = sim_successors.flat_successors + sim_successors.unsat_successors
    if exits:
        final_st = None
        for ex in exits:
            if ex.satisfiable():
                final_st = ex
                break
    else:
        final_st = None

    self._symbolic_function_initial_state[function_addr] = final_st

    return final_st
Symbolically execute the first basic block of the specified function and return it. We prepare the state using the already existing state in fastpath mode (if available). :param function_addr: The function address :return: A symbolic state if succeeded, None otherwise
def GetNextWrittenEventSource(self): """Retrieves the next event source that was written after open. Returns: EventSource: event source or None if there are no newly written ones. Raises: IOError: when the storage writer is closed. OSError: when the storage writer is closed. """ if not self._is_open: raise IOError('Unable to read from closed storage writer.') if self._written_event_source_index >= len(self._event_sources): return None event_source = self._event_sources[self._written_event_source_index] self._written_event_source_index += 1 return event_source
Retrieves the next event source that was written after open. Returns: EventSource: event source or None if there are no newly written ones. Raises: IOError: when the storage writer is closed. OSError: when the storage writer is closed.
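The "written after open" bookkeeping above is just a monotonically advancing index over an append-only list. A minimal sketch of the same cursor pattern (the class and method names are illustrative, not plaso's API):

```python
class WrittenItemCursor:
    """Cursor that yields items in write order, then None when exhausted."""

    def __init__(self):
        self._items = []
        self._written_index = 0
        self._is_open = True

    def add(self, item):
        # Append-only: previously returned items are never revisited.
        self._items.append(item)

    def get_next_written(self):
        if not self._is_open:
            raise IOError('Unable to read from closed storage writer.')
        if self._written_index >= len(self._items):
            return None
        item = self._items[self._written_index]
        self._written_index += 1
        return item
```

Because the index persists between calls, items added after the cursor is exhausted are still picked up by later calls.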