Columns: code (string, 20–4.93k chars) · docstring (string, 33–1.27k chars) · source (string, 3 classes)
def pb_for_delete(document_path, option):
    write_pb = write_pb2.Write(delete=document_path)
    if option is not None:
        option.modify_write(write_pb)
    return write_pb
Make a ``Write`` protobuf for ``delete()`` methods. Args: document_path (str): A fully-qualified document path. option (optional[~.firestore_v1beta1.client.WriteOption]): A write option to make assertions / preconditions on the server state of the document before applying changes. Returns: google.cloud.firestore_v1beta1.types.Write: A ``Write`` protobuf instance for the ``delete()``.
juraj-google-style
def _ParseNoHeaderSingleLine(self, parser_mediator, structure):
    if not self._last_event_data:
        logger.debug('SkyDrive, found isolated line with no previous events')
        return
    event_data = SkyDriveOldLogEventData()
    event_data.offset = self._last_event_data.offset
    event_data.text = structure.text
    event = time_events.DateTimeValuesEvent(
        self._last_date_time, definitions.TIME_DESCRIPTION_ADDED)
    parser_mediator.ProduceEventWithEventData(event, event_data)
    self._last_date_time = None
    self._last_event_data = None
Parse an isolated line with no header and store appropriate attributes. Args: parser_mediator (ParserMediator): mediates interactions between parsers and other components, such as storage and dfvfs. structure (pyparsing.ParseResults): structure of tokens derived from a line of a text file.
codesearchnet
def __init__(self, page_iterator):
    self._page_iterator = page_iterator
    self._current = None
    self._index = -1
Constructor. Args: page_iterator (PageIterator): the base iterator of getting pages.
juraj-google-style
def ask_question(self, field_name, pattern=NAME_PATTERN, is_required=False,
                 password=False):
    input_value = ''
    question = 'Insert the field using the pattern below:\n{}\n{}: '.format(
        pattern[0], field_name)
    while not input_value:
        input_value = getpass(question) if password else input(question)
        if not (input_value or is_required):
            break
        if password:
            confirm_password = getpass('Confirm your password: ')
            if confirm_password != input_value:
                print('Password does not match')
                input_value = ''
        if not self.valid_attribute(input_value, pattern[1]):
            error_message = 'The content must fit the pattern: {}\n'
            print(error_message.format(pattern[0]))
            input_value = ''
    return input_value
Ask a question and get the input value. This method validates the input value. Args: field_name (string): Field name used to ask for the input value. pattern (tuple): Pattern used to validate the input value. is_required (bool): Whether the input value is required. password (bool): Whether to read the input as a masked password. Returns: input_value (string): Validated input value.
codesearchnet
def new_message_from_message_type(message_type):
    message_type = str(message_type)
    if message_type not in MESSAGE_TYPES:
        raise ValueError('"{}" is not known.'.format(message_type))
    message_class = MESSAGE_TYPES.get(message_type)
    message_instance = message_class()
    return message_instance
Given an OpenFlow Message Type, return an empty message of that type. Args: message_type (:class:`~pyof.v0x01.common.header.Type`): python-openflow message type. Returns: Empty OpenFlow message of the requested message type. Raises: ValueError: Unknown message_type.
codesearchnet
def _add_results(self, results, trial_id):
    for result in results:
        self.logger.debug('Appending result: %s' % result)
        result['trial_id'] = trial_id
        result_record = ResultRecord.from_json(result)
        result_record.save()
Add a list of results into db. Args: results (list): A list of json results. trial_id (str): Id of the trial.
codesearchnet
def on_predict_end(self, logs=None):
    pass
Called at the end of prediction. Subclasses should override for any actions to run. Args: logs: Dict. Currently no data is passed to this argument for this method but that may change in the future.
github-repos
def pdf_to_text(pdf_filepath='', **kwargs):
    result = []
    try:
        if not os.path.exists(pdf_filepath):
            raise ValueError('No valid pdf filepath introduced..')
        kwargs['outfp'] = kwargs.get('outfp', StringIO())
        kwargs['laparams'] = kwargs.get('laparams', pdfminer.layout.LAParams())
        kwargs['imagewriter'] = kwargs.get('imagewriter', None)
        kwargs['output_type'] = kwargs.get('output_type', 'text')
        kwargs['codec'] = kwargs.get('codec', 'utf-8')
        kwargs['disable_caching'] = kwargs.get('disable_caching', False)
        with open(pdf_filepath, 'rb') as f_pdf:
            pdfminer.high_level.extract_text_to_fp(f_pdf, **kwargs)
            result = kwargs.get('outfp').getvalue()
    except Exception:
        logger.error('fail pdf to text parsing')
    return result
Parse a pdf into text using the pdfminer lib. Args (forwarded to pdfminer as keyword arguments): no_laparams=False, all_texts=None, detect_vertical=None, word_margin=None, char_margin=None, line_margin=None, boxes_flow=None, codec='utf-8', strip_control=False, maxpages=0, page_numbers=None, password="", scale=1.0, rotation=0, layoutmode='normal', debug=False, disable_caching=False.
codesearchnet
def path_set_md5(url):
    scheme, netloc, path, query_string, fragment = urlsplit(url)
    path += '.md5'
    return urlunsplit((scheme, netloc, path, query_string, fragment))
Given a file URL, return the URL of its md5 file. Args: url: a given URL. Returns: URL of the md5 file.
juraj-google-style
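Because `path_set_md5` only rewrites the path component, query strings and fragments survive the rewrite. A minimal self-contained sketch of the function above with a usage check (the original's `urlsplit`/`urlunsplit` come from the stdlib):

```python
from urllib.parse import urlsplit, urlunsplit

def path_set_md5(url):
    # Append ".md5" to the path component, leaving query and fragment intact.
    scheme, netloc, path, query_string, fragment = urlsplit(url)
    path += '.md5'
    return urlunsplit((scheme, netloc, path, query_string, fragment))

# The query string survives the rewrite:
print(path_set_md5('https://example.com/data/file.csv?version=2'))
# → https://example.com/data/file.csv.md5?version=2
```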
def singularize(plural):
    if plural in UNCOUNTABLES:
        return plural
    for i in IRREGULAR:
        if i[1] == plural:
            return i[0]
    for i in SINGULARIZE_PATTERNS:
        if re.search(i[0], plural):
            return re.sub(i[0], i[1], plural)
    return plural
Convert plural word to its singular form. Args: plural: A word in its plural form. Returns: The word in its singular form.
codesearchnet
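The function above depends on module-level rule tables. A runnable sketch with hypothetical stand-ins for `UNCOUNTABLES`, `IRREGULAR` and `SINGULARIZE_PATTERNS` (the real tables are much larger); rules are tried in order: uncountables first, then irregular pairs, then regex patterns:

```python
import re

# Hypothetical rule tables standing in for the module-level constants.
UNCOUNTABLES = {'sheep', 'fish'}
IRREGULAR = [('person', 'people'), ('child', 'children')]
SINGULARIZE_PATTERNS = [(r'ies$', 'y'), (r's$', '')]

def singularize(plural):
    if plural in UNCOUNTABLES:
        return plural
    for singular, irregular_plural in IRREGULAR:
        if irregular_plural == plural:
            return singular
    for pattern, replacement in SINGULARIZE_PATTERNS:
        if re.search(pattern, plural):
            return re.sub(pattern, replacement, plural)
    return plural

print(singularize('berries'))  # → berry
print(singularize('people'))   # → person
```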
def print_results(results):
    if not isinstance(results, list):
        results = [results]
    for r in results:
        try:
            r.log()
        except AttributeError:
            raise ValueError('Argument to print_results() must be a list of '
                             'FileValidationResults or ObjectValidationResults.')
Print `results` (the results of validation) to stdout. Args: results: A list of FileValidationResults or ObjectValidationResults instances.
juraj-google-style
def maybe_merge_call(fn, strategy, *args, **kwargs):
    if strategy_supports_no_merge_call():
        return fn(strategy, *args, **kwargs)
    else:
        return distribute_lib.get_replica_context().merge_call(
            fn, args=args, kwargs=kwargs)
Maybe invoke `fn` via `merge_call` which may or may not be fulfilled. The caller of this utility function requests to invoke `fn` via `merge_call` at `tf.distribute.Strategy`'s best efforts. It is `tf.distribute`'s internal whether the request is honored, depending on the `Strategy`. See `tf.distribute.ReplicaContext.merge_call()` for more information. This is an interim API which is subject to removal and does not guarantee backward-compatibility. Args: fn: the function to be invoked. strategy: the `tf.distribute.Strategy` to call `fn` with. *args: the positional arguments to be passed in to `fn`. **kwargs: the keyword arguments to be passed in to `fn`. Returns: The return value of the `fn` call.
github-repos
def update(self, value):
    with tf.name_scope(self._name + '/update'):
        if value.shape.ndims == self._mean.shape.ndims:
            value = value[None, ...]
        count = tf.shape(value)[0]
        with tf.control_dependencies([self._count.assign_add(count)]):
            step = tf.cast(self._count, tf.float32)
            mean_delta = tf.reduce_sum(value - self._mean[None, ...], 0)
            new_mean = self._mean + mean_delta / step
            new_mean = tf.cond(
                self._count > 1, lambda: new_mean, lambda: value[0])
            var_delta = (
                value - self._mean[None, ...]) * (value - new_mean[None, ...])
            new_var_sum = self._var_sum + tf.reduce_sum(var_delta, 0)
            with tf.control_dependencies([new_mean, new_var_sum]):
                update = (self._mean.assign(new_mean),
                          self._var_sum.assign(new_var_sum))
                with tf.control_dependencies(update):
                    if value.shape.ndims == 1:
                        value = tf.reduce_mean(value)
                    return self._summary('value', tf.reduce_mean(value))
Update the mean and variance estimates. Args: value: Batch or single value tensor. Returns: Summary tensor.
juraj-google-style
def add_state(self, name: str, state: State, initial: bool = False):
    if not issubclass(state.__class__, State):
        raise AttributeError("state must be subclass of spade.behaviour.State")
    self._states[name] = state
    if initial:
        self.current_state = name
Adds a new state to the FSM. Args: name (str): the name of the state, which is used as its identifier. state (spade.behaviour.State): the state class. initial (bool, optional): whether the state is the initial state or not. (Only one initial state is allowed) (Default value = False)
juraj-google-style
def tv_credits(self, **kwargs):
    path = self._get_id_path('tv_credits')
    response = self._GET(path, kwargs)
    self._set_attrs_to_values(response)
    return response
Get the TV credits for a specific person id. Args: language: (optional) ISO 639-1 code. append_to_response: (optional) Comma separated, any person method. Returns: A dict representation of the JSON returned from the API.
juraj-google-style
def connect(self, chip_name, speed='auto', verbose=False):
    if verbose:
        self.exec_command('EnableRemarks = 1')
    self.exec_command('Device = %s' % chip_name)
    if speed == 'auto':
        self.set_speed(auto=True)
    elif speed == 'adaptive':
        self.set_speed(adaptive=True)
    else:
        self.set_speed(speed)
    result = self._dll.JLINKARM_Connect()
    if result < 0:
        raise errors.JLinkException(result)
    try:
        self.halted()
    except errors.JLinkException:
        pass
    for index in range(self.num_supported_devices()):
        device = self.supported_device(index)
        if device.name.lower() == chip_name.lower():
            self._device = device
            break
    else:
        raise errors.JLinkException('Unsupported device was connected to.')
    return None
Connects the J-Link to its target. Args: self (JLink): the ``JLink`` instance chip_name (str): target chip name speed (int): connection speed, one of ``{5-12000, 'auto', 'adaptive'}`` verbose (bool): boolean indicating if connection should be verbose in logging Returns: ``None`` Raises: JLinkException: if connection fails to establish. TypeError: if given speed is invalid
codesearchnet
def _location_infos_equal(left, right):
    if not isinstance(left, LocationInfo) or not isinstance(right, LocationInfo):
        raise AssertionError(
            u'Unsupported LocationInfo comparison between types {} and {} '
            u'with values {}, {}'.format(type(left), type(right), left, right))
    optional_scopes_depth_equal = (
        left.optional_scopes_depth == right.optional_scopes_depth)
    parent_query_paths_equal = (
        (left.parent_location is None and right.parent_location is None) or
        (left.parent_location.query_path == right.parent_location.query_path))
    recursive_scopes_depths_equal = (
        left.recursive_scopes_depth == right.recursive_scopes_depth)
    types_equal = left.type == right.type
    return all([
        optional_scopes_depth_equal,
        parent_query_paths_equal,
        recursive_scopes_depths_equal,
        types_equal,
    ])
Return True if LocationInfo objects are equivalent for the SQL backend, False otherwise. LocationInfo objects are considered equal for the SQL backend iff the optional scopes depth, recursive scopes depth, types and parent query paths are equal. Args: left: LocationInfo, left location info object to compare. right: LocationInfo, right location info object to compare. Returns: bool, True if LocationInfo objects equivalent, False otherwise.
juraj-google-style
def task_pivot(self, task_resource):
    resource = self.copy()
    resource._request_uri = '{}/{}'.format(
        task_resource.request_uri, resource._request_uri)
    return resource
Pivot point on Tasks for this resource.

This method will return all *resources* (group, indicators, victims, etc)
for this resource that are associated with the provided task id.

**Example Endpoints URI's**

+--------------+-------------------------------------------------------------+
| HTTP Method  | API Endpoint URI's                                          |
+==============+=============================================================+
| GET          | /v2/tasks/{resourceId}/groups/{resourceType}                |
+--------------+-------------------------------------------------------------+
| GET          | /v2/tasks/{resourceId}/groups/{resourceType}/{uniqueId}     |
+--------------+-------------------------------------------------------------+
| GET          | /v2/tasks/{resourceId}/indicators/{resourceType}            |
+--------------+-------------------------------------------------------------+
| GET          | /v2/tasks/{resourceId}/indicators/{resourceType}/{uniqueId} |
+--------------+-------------------------------------------------------------+

Args:
    task_resource: The task resource pivot (provides the task id).
codesearchnet
def forbidden(cls, errors=None):
    if cls.expose_status:
        cls.response.content_type = 'application/json'
        cls.response._status_line = '403 Forbidden'
    return cls(403, errors=errors).to_json
Shortcut API for HTTP 403 `Forbidden` response. Args: errors (list): Response key/value data. Returns: WSResponse Instance.
codesearchnet
def heartbeat(queue_name, task_id, owner, message, index):
    task = _get_task_with_policy(queue_name, task_id, owner)
    if task.heartbeat_number > index:
        return False
    task.heartbeat = message
    task.heartbeat_number = index
    now = datetime.datetime.utcnow()
    timeout_delta = task.eta - task.last_lease
    task.eta = now + timeout_delta
    task.last_lease = now
    db.session.add(task)
    signals.task_updated.send(app, task=task)
    return True
Sets the heartbeat status of the task and extends its lease. The task's lease is extended by the same amount as its last lease to ensure that any operations following the heartbeat will still hold the lock for the original lock period. Args: queue_name: Name of the queue the work item is on. task_id: ID of the task that is finished. owner: Who or what has the current lease on the task. message: Message to report as the task's current status. index: Number of this message in the sequence of messages from the current task owner, starting at zero. This lets the API receive heartbeats out of order, yet ensure that the most recent message is actually saved to the database. This requires the owner issuing heartbeat messages to issue heartbeat indexes sequentially. Returns: True if the heartbeat message was set, False if it is lower than the current heartbeat index. Raises: TaskDoesNotExistError if the task does not exist. LeaseExpiredError if the lease is no longer active. NotOwnerError if the specified owner no longer owns the task.
codesearchnet
def url_is_project(url, default='not_a_func'):
    try:
        u = resolve(url)
        if u and u.func != default:
            return True
    except Resolver404:
        static_url = settings.STATIC_URL
        static_url_wd = static_url.lstrip('/')
        if url.startswith(static_url):
            url = url[len(static_url):]
        elif url.startswith(static_url_wd):
            url = url[len(static_url_wd):]
        else:
            return False
        if finders.find(url):
            return True
    return False
Check if URL is part of the current project's URLs. Args: url (str): URL to check. default (callable): used to filter out some URLs attached to a placeholder function. Returns: bool: True if the URL belongs to the project, False otherwise.
codesearchnet
def _update_bird_conf_file(self, operation):
    conf_updated = False
    prefixes = []
    ip_version = operation.ip_version
    config_file = self.bird_configuration[ip_version]['config_file']
    variable_name = self.bird_configuration[ip_version]['variable_name']
    changes_counter = self.bird_configuration[ip_version]['changes_counter']
    dummy_ip_prefix = self.bird_configuration[ip_version]['dummy_ip_prefix']
    try:
        prefixes = get_ip_prefixes_from_bird(config_file)
    except OSError as error:
        self.log.error('failed to open Bird configuration %s, this is a '
                       'FATAL error, thus exiting main program', error)
        sys.exit(1)
    if not prefixes:
        self.log.error('found empty bird configuration %s, this is a FATAL '
                       'error, thus exiting main program', config_file)
        sys.exit(1)
    if dummy_ip_prefix not in prefixes:
        self.log.warning("dummy IP prefix %s wasn't found in bird "
                         "configuration, adding it. This shouldn't have "
                         "happened!", dummy_ip_prefix)
        prefixes.insert(0, dummy_ip_prefix)
        conf_updated = True
    ip_prefixes_without_check = set(prefixes).difference(
        self.ip_prefixes[ip_version])
    if ip_prefixes_without_check:
        self.log.warning("found %s IP prefixes in Bird configuration but we "
                         "aren't configured to run health checks on them. "
                         "Either someone modified the configuration manually "
                         "or something went horrible wrong. We remove them "
                         "from Bird configuration",
                         ','.join(ip_prefixes_without_check))
        prefixes[:] = (ip for ip in prefixes
                       if ip not in ip_prefixes_without_check)
        conf_updated = True
    if operation.update(prefixes):
        conf_updated = True
    if not conf_updated:
        self.log.info('no updates for bird configuration')
        return conf_updated
    if self.bird_configuration[ip_version]['keep_changes']:
        archive_bird_conf(config_file, changes_counter)
    tempname = write_temp_bird_conf(dummy_ip_prefix, config_file,
                                    variable_name, prefixes)
    try:
        os.rename(tempname, config_file)
    except OSError as error:
        self.log.critical('failed to create Bird configuration %s, this is '
                          'a FATAL error, thus exiting main program', error)
        sys.exit(1)
    else:
        self.log.info('Bird configuration for IPv%s is updated', ip_version)
    if len(prefixes) == 1:
        self.log.warning("Bird configuration doesn't have IP prefixes for "
                         "any of the services we monitor! It means local "
                         "node doesn't receive any traffic")
    return conf_updated
Update BIRD configuration. It adds to or removes IP prefixes from BIRD configuration. It also updates the generation time stamp in the configuration file. Main program will exit if the configuration file can't be read/written. Arguments: operation (obj): Either an AddOperation or DeleteOperation object. Returns: True if BIRD configuration was updated, otherwise False.
codesearchnet
def to_json_file(self, json_file_path: Union[str, os.PathLike]):
    with open(json_file_path, 'w', encoding='utf-8') as writer:
        config_dict = self.to_dict()
        json_string = json.dumps(config_dict, indent=2, sort_keys=True) + '\n'
        writer.write(json_string)
Save this instance to a JSON file. Args: json_file_path (`str` or `os.PathLike`): Path to the JSON file in which this configuration instance's parameters will be saved.
github-repos
def drop_incomplete_days(dataframe, shift=0):
    dropped = 0
    if shift > 23 or shift < 0:
        print('Invalid shift parameter setting! Using defaults.')
        shift = 0
    first = shift
    last = first - 1
    if last < 0:
        last += 24
    try:
        n = len(dataframe.index)
    except Exception:
        print('Error: Invalid dataframe.')
        return dataframe
    delete = list()
    # Drop leading hours of the first incomplete day.
    for i in range(0, n):
        if dataframe.index.hour[i] == first and dataframe.index.minute[i] == 0:
            break
        else:
            delete.append(i)
            dropped += 1
    # Drop tailing hours of the last incomplete day.
    for i in range(n - 1, 0, -1):
        if dataframe.index.hour[i] == last and dataframe.index.minute[i] == 0:
            break
        else:
            delete.append(i)
            dropped += 1
    return dataframe.drop(dataframe.index[delete])
Truncate a given dataframe to full days only. This function truncates a given pandas dataframe (time series) to full days only, thus dropping leading and tailing hours of incomplete days. Please note that this methodology only applies to hourly time series. Args: dataframe: A pandas dataframe object with index defined as datetime. shift (unsigned int, opt): First hour of daily recordings. For daily recordings of precipitation gages, 8 would be the first hour of the subsequent day of recordings since daily totals are usually recorded at 7. Omit this parameter if you intend recordings to pertain to 0-23h. Returns: A dataframe with full days only.
codesearchnet
def _reverse_transform_column(self, table, metadata, table_name):
    column_name = metadata['name']
    if column_name not in table:
        return
    null_name = '?' + column_name
    content = pd.DataFrame(columns=[column_name], index=table.index)
    transformer = self.transformers[(table_name, column_name)]
    content[column_name] = transformer.reverse_transform(
        table[column_name].to_frame())
    if self.missing and null_name in table[column_name]:
        content[null_name] = table.pop(null_name)
        null_transformer = transformers.NullTransformer(metadata)
        content[column_name] = null_transformer.reverse_transform(content)
    return content
Reverses the transformation on a column from a table using the given parameters. Args: table (pandas.DataFrame): Dataframe containing the column to transform. metadata (dict): Metadata for the given column. table_name (str): Name of the table in the original dataset. Returns: pandas.DataFrame: Dataframe containing the transformed column. If self.missing=True, it will contain a second column containing 0 and 1 marking if that value was originally null or not. It will return None in the case the column is not in the table.
codesearchnet
def write_merged_bioassembly(inpath, outdir, outname, force_rerun=False):
    outpath = op.join(outdir, outname + '.pdb')
    if ssbio.utils.force_rerun(flag=force_rerun, outfile=outpath):
        s = StructProp('Model merging', structure_path=inpath, file_type='pdb')
        ss = s.parse_structure()
        merge_all_models_into_first_model(ss.structure)
        outpath = ss.write_pdb(custom_name=outname, out_dir=outdir,
                               force_rerun=force_rerun)
    return outpath
Utility to take as input a bioassembly file and merge all its models into multiple chains in a single model. Args: inpath (str): Path to input PDB file with multiple models that represent an oligomeric form of a structure. outdir (str): Path to output directory. outname (str): New filename of structure file. force_rerun (bool): If a new PDB should be written if the file exists. Returns: str: Path to newly written PDB file.
juraj-google-style
def _merge_type(t0: '_instance_base.SimpleValue',
                t1: '_instance_base.SimpleValue',
                name: str,
                cls: 'class_mixin.Class') -> '_instance_base.SimpleValue':
    if t0 is None or isinstance(t0, _abstract.Unsolvable):
        return t1
    if t1 is None or isinstance(t1, _abstract.Unsolvable):
        return t0
    if t0 in t1.mro:
        return t1
    if t1 in t0.mro:
        return t0
    raise GenericTypeError(cls, f'Conflicting value for TypeVar {name}')
Merge two types. Rules: Type `Any` can match any type, we will return the other type if one of them is `Any`. Return the sub-class if the types have inheritance relationship. Args: t0: The first type. t1: The second type. name: Type parameter name. cls: The class_mixin.Class on which any error should be reported. Returns: A type. Raises: GenericTypeError: if the types don't match.
github-repos
def from_string(cls, table_id, default_project=None):
    from google.cloud.bigquery.dataset import DatasetReference
    (output_project_id, output_dataset_id, output_table_id) = (
        _helpers._parse_3_part_id(
            table_id, default_project=default_project,
            property_name='table_id'))
    return cls(
        DatasetReference(output_project_id, output_dataset_id),
        output_table_id)
Construct a table reference from table ID string. Args: table_id (str): A table ID in standard SQL format. If ``default_project`` is not specified, this must include a project ID, dataset ID, and table ID, each separated by ``.``. default_project (str): Optional. The project ID to use when ``table_id`` does not include a project ID. Returns: TableReference: Table reference parsed from ``table_id``. Examples: >>> TableReference.from_string('my-project.mydataset.mytable') TableRef...(DatasetRef...('my-project', 'mydataset'), 'mytable') Raises: ValueError: If ``table_id`` is not a fully-qualified table ID in standard SQL format.
codesearchnet
def expire_key(self, key):
    value = self.base_dict[key]
    del self[key]
    if self.callback is not None:
        self.callback(key, value, *self.callback_args, **self.callback_kwargs)
Expire the key, delete the value, and call the callback function if one is specified. Args: key: The ``TimedDict`` key
juraj-google-style
def from_file(cls, filename):
    yaml = YAML(typ="safe")
    with open(filename, "r") as f:
        d = yaml.load(f)
    return cls.from_dict(d)
Constructor that reads in a file in YAML format. Args: filename (str): Filename.
juraj-google-style
def windows_from_blocksize(self, blocksize_xy=512):
    meta = self._get_template_for_given_resolution(self.dst_res, 'meta')
    width = meta['width']
    height = meta['height']
    blocksize_wins = windows_from_blocksize(blocksize_xy, width, height)
    self.windows = np.array([win[1] for win in blocksize_wins])
    self.windows_row = np.array([win[0][0] for win in blocksize_wins])
    self.windows_col = np.array([win[0][1] for win in blocksize_wins])
    return self
Create rasterio.windows.Window instances with given size which fully cover the raster. Arguments: blocksize_xy {int or list of two int} -- Size of the window. If one integer is given it defines the width and height of the window. If a list of two integers is given the first defines the width and the second the height. Returns: self -- The attributes ``windows``, ``windows_row`` and ``windows_col`` are updated.
codesearchnet
def _wrap(text, columns=80):
    out = []
    for cnt, char in enumerate(text):
        out.append(char)
        if (cnt + 1) % columns == 0:
            out.append("\n")
    return "".join(out)
Own "dumb" reimplementation of textwrap.wrap(). This is because calling .wrap() on bigger strings can take a LOT of processor power. And I mean like 8 seconds of 3GHz CPU just to wrap 20kB of text without spaces. Args: text (str): Text to wrap. columns (int): Wrap after `columns` characters. Returns: str: Wrapped text.
juraj-google-style
def signCertAs(self, cert, signas):
    cakey = self.getCaKey(signas)
    if cakey is None:
        raise s_exc.NoCertKey('Missing .key for %s' % signas)
    cacert = self.getCaCert(signas)
    if cacert is None:
        raise s_exc.NoCertKey('Missing .crt for %s' % signas)
    cert.set_issuer(cacert.get_subject())
    cert.sign(cakey, self.signing_digest)
Signs a certificate with a CA keypair. Args: cert (OpenSSL.crypto.X509): The certificate to sign. signas (str): The CA keypair name to sign the new keypair with. Examples: Sign a certificate with the CA "myca": cdir.signCertAs(mycert, 'myca') Returns: None
codesearchnet
def push(stack, x, op_id):
    if isinstance(x, numpy.ndarray):
        x = x.copy()
    elif isinstance(x, list):
        x = x[:]
    if __debug__:
        stack.append((x, op_id))
    else:
        stack.append(x)
Push a value onto the stack (i.e. record it on the tape). Args: stack: The stack object, which must support appending values. x: The value to append. If it is a mutable object like an array or list, it will be copied before being added onto the stack. op_id: A unique variable that is also passed into the corresponding pop. Allows optimization passes to track pairs of pushes and pops.
juraj-google-style
def __call__(self, *binary_args):
    if self.num_processors is None:
        return self.snr_function(0, binary_args, self.wavegen,
                                 self.signal_type, self.noise_interpolants,
                                 self.prefactor, self.verbose)
    other_args = (self.wavegen, self.signal_type, self.noise_interpolants,
                  self.prefactor, self.verbose)
    self.prep_parallel(binary_args, other_args)
    return self.run_parallel(self.snr_function)
Input binary parameters and calculate the SNR Binary parameters are read in and adjusted based on shapes. They are then fed into ``run`` for calculation of the snr. Args: *args: Arguments for binary parameters (see `:meth:gwsnrcalc.utils.pyphenomd.__call__`) Returns: (dict): Dictionary with the SNR output from the calculation.
juraj-google-style
def __init__(self, encrypted_root_plist=None, password=None, parent=None,
             recovery_password=None, **kwargs):
    if not parent:
        raise ValueError('Missing parent value.')
    super(FVDEPathSpec, self).__init__(parent=parent, **kwargs)
    self.encrypted_root_plist = encrypted_root_plist
    self.password = password
    self.recovery_password = recovery_password
Initializes a path specification. Note that the FVDE path specification must have a parent. Args: encrypted_root_plist (Optional[str]): path to the EncryptedRoot.plist.wipekey file. password (Optional[str]): password. parent (Optional[PathSpec]): parent path specification. recovery_password (Optional[str]): recovery password. Raises: ValueError: when parent is not set.
juraj-google-style
def process(self, msg: str, kwargs: _KWARGS_TYPE) -> _PROCESS_RETURN_TYPE:
    new_msg = f'{self.extra[PrefixLoggerAdapter.EXTRA_KEY_LOG_PREFIX]} {msg}'
    return (new_msg, kwargs)
Processes the logging call to insert contextual information. Args: msg: The logging message. kwargs: Keyword arguments passed in to a logging call. Returns: The message and kwargs modified.
github-repos
def f(self, y, t):
    coupling = self.coupling_function[0]
    res = np.empty_like(self.y0)
    for j, m in enumerate(self.submodels):
        slicej = slice(self._si[j], self._si[j+1])
        target_y = y[slicej]
        res[slicej] = m.f(target_y, t)
        sources = np.nonzero(self.network[:, j])[0]
        for i in sources:
            weight = self.network[i, j]
            source_y = y[slice(self._si[i], self._si[i+1])]
            res[slicej] += coupling(source_y, target_y, weight)
    return res
Deterministic term f of the complete network system dy = f(y, t)dt + G(y, t).dot(dW) (or for an ODE network system without noise, dy/dt = f(y, t)) Args: y (array of shape (d,)): where d is the dimension of the overall state space of the complete network system. Returns: f (array of shape (d,)): Defines the deterministic term of the complete network system
juraj-google-style
class Flatten(keras_layers.Flatten, base.Layer): pass
Flattens an input tensor while preserving the batch axis (axis 0). Args: data_format: A string, one of `channels_last` (default) or `channels_first`. The ordering of the dimensions in the inputs. `channels_last` corresponds to inputs with shape `(batch, ..., channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, ...)`. Examples: ``` x = tf.compat.v1.placeholder(shape=(None, 4, 4), dtype='float32') y = Flatten()(x) # now `y` has shape `(None, 16)` x = tf.compat.v1.placeholder(shape=(None, 3, None), dtype='float32') y = Flatten()(x) # now `y` has shape `(None, None)` ```
github-repos
def marcxml2record(marcxml):
    marcjson = create_record(marcxml, keep_singletons=False)
    collections = _get_collections(marcjson)
    if 'conferences' in collections:
        return conferences.do(marcjson)
    elif 'data' in collections:
        return data.do(marcjson)
    elif 'experiment' in collections:
        return experiments.do(marcjson)
    elif 'hepnames' in collections:
        return hepnames.do(marcjson)
    elif 'institution' in collections:
        return institutions.do(marcjson)
    elif 'job' in collections or 'jobhidden' in collections:
        return jobs.do(marcjson)
    elif 'journals' in collections or 'journalsnew' in collections:
        return journals.do(marcjson)
    return hep.do(marcjson)
Convert a MARCXML string to a JSON record. Tries to guess which set of rules to use by inspecting the contents of the ``980__a`` MARC field, but falls back to HEP in case nothing matches, because records belonging to special collections logically belong to the Literature collection but don't have ``980__a:HEP``. Args: marcxml(str): a string containing MARCXML. Returns: dict: a JSON record converted from the string.
codesearchnet
def is_testcase_path(path):
    if not isinstance(path, (str, list)):
        return False
    if isinstance(path, list):
        for p in path:
            if not is_testcase_path(p):
                return False
    if isinstance(path, str):
        if not os.path.exists(path):
            return False
    return True
Check whether path is a valid testcase file path or a list of such paths. Args: path (str/list): file path or file path list. Returns: bool: True if path is a valid file path or path list, otherwise False.
juraj-google-style
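The recursion above accepts a single existing path or a list whose members all exist. A self-contained check, using `tempfile` to create a path that is guaranteed to exist:

```python
import os
import tempfile

def is_testcase_path(path):
    # Non-str/list inputs are rejected outright.
    if not isinstance(path, (str, list)):
        return False
    if isinstance(path, list):
        # Every member of a list must itself be a valid testcase path.
        for p in path:
            if not is_testcase_path(p):
                return False
    if isinstance(path, str):
        if not os.path.exists(path):
            return False
    return True

with tempfile.NamedTemporaryFile() as f:
    print(is_testcase_path(f.name))            # an existing file passes
    print(is_testcase_path([f.name, f.name]))  # so does a list of them
print(is_testcase_path('/no/such/path'))       # missing path fails
print(is_testcase_path(42))                    # wrong type fails
```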
def set_window_position(self, x, y, window_handle='current'):
    self._execute(Command.SET_WINDOW_POSITION, {
        'x': int(x),
        'y': int(y),
        'window_handle': window_handle})
Sets the x,y position of the current window. Support: Web(WebView) Args: x(int): the x-coordinate in pixels. y(int): the y-coordinate in pixels. window_handle(str): Identifier of window_handle, default to 'current'. Returns: WebDriver Object.
juraj-google-style
def l1_normalize(x, dim, epsilon=1e-12, name=None):
    with tf.name_scope(name, 'l1_normalize', [x]) as scope:
        x = tf.convert_to_tensor(x, name='x')
        x = tf.verify_tensor_all_finite(x, 'Error at input %s' % scope)
        x_norm = tf.maximum(
            tf.reduce_sum(tf.abs(x), [dim], keep_dims=True), epsilon)
        return tf.div(x, x_norm, name=scope)
l1 normalizes x. Args: x: The tensor to normalize. dim: The dimension to normalize along. epsilon: Lower bound on the norm, used to avoid exploding gradients as the norm approaches 0. name: Optional name for this op. Returns: x normalized along dim.
codesearchnet
def __stripValue(self, value):
    # guard against empty/one-character strings before indexing
    if isinstance(value, str) and len(value) > 1:
        if (value[0] == '"' and value[-1] == '"') or (value[0] == '[' and value[-1] == ']'):
            return value[1:-1]
    return value
strip the special characters in the value Args: value: value string Returns: value string without special characters
codesearchnet
def validate_request_signature(body: str, headers: MutableMapping, signing_secret: str) -> None: request_timestamp = int(headers['X-Slack-Request-Timestamp']) if ((int(time.time()) - request_timestamp) > (60 * 5)): raise exceptions.InvalidTimestamp(timestamp=request_timestamp) slack_signature = headers['X-Slack-Signature'] calculated_signature = ('v0=' + hmac.new(signing_secret.encode('utf-8'), f"v0:{headers['X-Slack-Request-Timestamp']}:{body}".encode('utf-8'), digestmod=hashlib.sha256).hexdigest()) if (not hmac.compare_digest(slack_signature, calculated_signature)): raise exceptions.InvalidSlackSignature(slack_signature, calculated_signature)
Validate incoming request signature using the application signing secret.

Contrary to the ``team_id`` and ``verification_token`` checks, this method is
not called by ``slack-sansio`` when creating objects from incoming HTTP
requests, because the body needs to be provided as text and not decoded as
json beforehand.

Args:
    body: Raw request body
    headers: Request headers
    signing_secret: Application signing_secret

Raises:
    :class:`slack.exceptions.InvalidSlackSignature`: when provided and
        calculated signature do not match
    :class:`slack.exceptions.InvalidTimestamp`: when incoming request
        timestamp is more than 5 minutes old
codesearchnet
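A minimal, self-contained sketch of the ``v0`` signing scheme the check above verifies (the secret, timestamp, and body below are made-up values, not real Slack credentials):

```python
import hashlib
import hmac

def sign_v0(signing_secret: str, timestamp: str, body: str) -> str:
    # Slack's v0 scheme: HMAC-SHA256 over "v0:<timestamp>:<body>",
    # keyed with the app's signing secret, prefixed with "v0=".
    basestring = f"v0:{timestamp}:{body}".encode("utf-8")
    digest = hmac.new(signing_secret.encode("utf-8"),
                      basestring, hashlib.sha256).hexdigest()
    return "v0=" + digest

signature = sign_v0("made-up-secret", "1531420618", "token=abc&team_id=T1")
```

Verification then recomputes the signature server-side from the raw body and compares it with ``hmac.compare_digest`` to avoid timing attacks.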
def transformer_prepare_encoder(inputs, target_space, hparams, features=None): ishape_static = inputs.shape.as_list() encoder_input = inputs if (features and ('inputs_segmentation' in features)): inputs_segmentation = features['inputs_segmentation'] inputs_position = features['inputs_position'] targets_segmentation = features['targets_segmentation'] if (hasattr(hparams, 'unidirectional_encoder') and hparams.unidirectional_encoder): tf.logging.info('Using unidirectional encoder') encoder_self_attention_bias = common_attention.attention_bias_lower_triangle(common_layers.shape_list(inputs)[1]) else: encoder_self_attention_bias = common_attention.attention_bias_same_segment(inputs_segmentation, inputs_segmentation) encoder_decoder_attention_bias = common_attention.attention_bias_same_segment(targets_segmentation, inputs_segmentation) else: encoder_padding = common_attention.embedding_to_padding(encoder_input) ignore_padding = common_attention.attention_bias_ignore_padding(encoder_padding) if (hasattr(hparams, 'unidirectional_encoder') and hparams.unidirectional_encoder): tf.logging.info('Using unidirectional encoder') encoder_self_attention_bias = common_attention.attention_bias_lower_triangle(common_layers.shape_list(inputs)[1]) else: encoder_self_attention_bias = ignore_padding encoder_decoder_attention_bias = ignore_padding inputs_position = None if hparams.proximity_bias: encoder_self_attention_bias += common_attention.attention_bias_proximal(common_layers.shape_list(inputs)[1]) if ((target_space is not None) and hparams.get('use_target_space_embedding', True)): emb_target_space = common_layers.embedding(target_space, 32, ishape_static[(- 1)], name='target_space_embedding', dtype=hparams.get('activation_dtype', 'float32')) emb_target_space = tf.reshape(emb_target_space, [1, 1, (- 1)]) encoder_input += emb_target_space if (hparams.pos == 'timing'): if (inputs_position is not None): encoder_input = common_attention.add_timing_signal_1d_given_position(encoder_input, 
inputs_position) else: encoder_input = common_attention.add_timing_signal_1d(encoder_input) elif (hparams.pos == 'emb'): encoder_input = common_attention.add_positional_embedding(encoder_input, hparams.max_length, 'inputs_positional_embedding', inputs_position) encoder_self_attention_bias = common_layers.cast_like(encoder_self_attention_bias, encoder_input) encoder_decoder_attention_bias = common_layers.cast_like(encoder_decoder_attention_bias, encoder_input) return (encoder_input, encoder_self_attention_bias, encoder_decoder_attention_bias)
Prepare one shard of the model for the encoder. Args: inputs: a Tensor. target_space: a Tensor. hparams: run hyperparameters features: optionally pass the entire features dictionary as well. This is needed now for "packed" datasets. Returns: encoder_input: a Tensor, bottom of encoder stack encoder_self_attention_bias: a bias tensor for use in encoder self-attention encoder_decoder_attention_bias: a bias tensor for use in encoder-decoder attention
codesearchnet
def _get_input_target_path(self, local_file_path): path, filename = os.path.split(local_file_path) if '*' in filename: return path + '/' else: return local_file_path
Returns a directory or file path to be the target for "gsutil cp". If the filename contains a wildcard, then the target path must be a directory in order to ensure consistency whether the source pattern contains one or multiple files. Args: local_file_path: A full path terminating in a file or a file wildcard. Returns: The path to use as the "gsutil cp" target.
juraj-google-style
def get_int(self, min_int=_MIN_INT, max_int=_MAX_INT): return self.fdp.ConsumeIntInRange(min_int, max_int)
Consume a signed integer with given constraints. Args: min_int: Minimum allowed integer. max_int: Maximum allowed integer. Returns: Consumed integer based on input bytes and constraints.
github-repos
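``ConsumeIntInRange`` comes from a fuzzed-data provider; a rough pure-Python analogue of mapping fuzzer bytes into a bounded range might look like this (the helper name and the modulo folding are invented for illustration, real providers are more careful about bias):

```python
def consume_int_in_range(data: bytes, min_int: int, max_int: int) -> int:
    # Interpret the first few input bytes as an unsigned integer and
    # fold it into [min_int, max_int] with a modulo.
    if min_int > max_int:
        raise ValueError("min_int must not exceed max_int")
    span = max_int - min_int + 1
    value = int.from_bytes(data[:8], "big")  # b"" yields 0
    return min_int + value % span
```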
def convert_to_tensor(value, dtype=None, dtype_hint=None): if dtype is None and isinstance(value, int) and (value >= 2 ** 63): dtype = dtypes.uint64 elif dtype is None and dtype_hint is None and isinstance(value, float): dtype = np_dtypes.default_float_type() return tensor_conversion.convert_to_tensor_v2_with_dispatch(value, dtype=dtype, dtype_hint=dtype_hint)
Wrapper over `tf.convert_to_tensor`. Args: value: value to convert dtype: (optional) the type we would like it to be converted to. dtype_hint: (optional) soft preference for the type we would like it to be converted to. `tf.convert_to_tensor` will attempt to convert value to this type first, but will not fail if conversion is not possible falling back to inferring the type instead. Returns: Value converted to tf.Tensor.
github-repos
def add_channel(channel: EFBChannel): global master, slaves if isinstance(channel, EFBChannel): if channel.channel_type == ChannelType.Slave: slaves[channel.channel_id] = channel else: master = channel else: raise TypeError("Channel instance is expected")
Register the channel with the coordinator. Args: channel (EFBChannel): Channel to register
juraj-google-style
def regular_polygon_area(number_of_sides, length_of_sides): return (0.25 * number_of_sides * length_of_sides ** 2) / math.tan( math.pi / number_of_sides )
Calculates the area of a regular polygon (with sides of equal length). Args: number_of_sides: Integer, the number of sides of the polygon length_of_sides: Integer or floating point number, the length of the sides Returns: The area of a regular polygon as an integer or floating point number Requires: The math module
juraj-google-style
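As a quick sanity check, the formula reduces to familiar results: a square (4 sides of length 2) has area 4, and a regular hexagon of side 1 has area 3√3/2 ≈ 2.598:

```python
import math

def regular_polygon_area(number_of_sides, length_of_sides):
    # Area of a regular n-gon: (1/4) * n * s^2 / tan(pi / n)
    return (0.25 * number_of_sides * length_of_sides ** 2) / math.tan(
        math.pi / number_of_sides
    )

square = regular_polygon_area(4, 2)   # ~4.0
hexagon = regular_polygon_area(6, 1)  # ~2.598
```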
def __init__(self, force=False): self.colorize = force or sys.stdout.isatty() or os.environ.get('JPY_PARENT_PID', None)
Initialize the class. Args: force (bool): If True, render colorizes output no matter where the output is (default: False).
juraj-google-style
def snapshot(self, filename="tmp.png"): if not filename: filename = "tmp.png" if self.handle: try: screenshot(filename, self.handle) except win32gui.error: self.handle = None screenshot(filename) else: screenshot(filename) img = aircv.imread(filename) os.remove(filename) return img
Take a screenshot and save it to a file (`tmp.png` by default)

Args:
    filename: name of the file where the screenshot is stored

Returns:
    the screenshot image (the temporary file is removed after it is read)
juraj-google-style
def format_var_name(variable, var_list): z_index = None if (variable in var_list): var_name = variable elif (variable.ljust(6, '_') in var_list): var_name = variable.ljust(6, '_') elif any([(variable in v_sub.split('_')) for v_sub in var_list]): var_name = var_list[[(variable in v_sub.split('_')) for v_sub in var_list].index(True)] z_index = var_name.split('_').index(variable) else: raise KeyError('{0} not found in {1}'.format(variable, var_list)) return (var_name, z_index)
Searches var list for variable name, checks other variable name format options. Args: variable (str): Variable being loaded var_list (list): List of variables in file. Returns: Name of variable in file containing relevant data, and index of variable z-level if multiple variables contained in same array in file.
codesearchnet
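The lookup order above (exact match, underscore-padded match, then membership in an underscore-joined compound name) can be sketched standalone; the sample variable names below are invented:

```python
def format_var_name(variable, var_list):
    # Try exact name, then the name right-padded with "_" to 6 chars,
    # then a component of a compound "A_B" name (recording its index).
    z_index = None
    if variable in var_list:
        var_name = variable
    elif variable.ljust(6, "_") in var_list:
        var_name = variable.ljust(6, "_")
    elif any(variable in v.split("_") for v in var_list):
        var_name = next(v for v in var_list if variable in v.split("_"))
        z_index = var_name.split("_").index(variable)
    else:
        raise KeyError("{0} not found in {1}".format(variable, var_list))
    return var_name, z_index

names = ["PRES__", "TMP2_RH2"]
```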
def is_compatible_with(self, other): other = as_dtype(other) return self._type_enum in (other.as_datatype_enum, other.base_dtype.as_datatype_enum)
Returns True if the `other` DType will be converted to this DType (TF1). Programs written for TensorFlow 2.x do not need this function. Instead, they can do equality comparison on `DType` objects directly: `tf.as_dtype(this) == tf.as_dtype(other)`. This function exists only for compatibility with TensorFlow 1.x, where it additionally allows conversion from a reference type (used by `tf.compat.v1.Variable`) to its base type. Args: other: A `DType` (or object that may be converted to a `DType`). Returns: True if a Tensor of the `other` `DType` will be implicitly converted to this `DType`.
github-repos
def is_original_format(tweet): if ('created_at' in tweet): original_format = True elif ('postedTime' in tweet): original_format = False else: raise NotATweetError("This dict has neither 'created_at' or 'postedTime' as keys") return original_format
Simple checker to flag the format of a tweet.

Args:
    tweet (Tweet): tweet in question

Returns:
    Bool

Example:
    >>> import tweet_parser.tweet_checking as tc
    >>> tweet = {"created_at": 124125125125,
    ...          "text": "just setting up my twttr",
    ...          "nested_field": {"nested_1": "field", "nested_2": "field2"}}
    >>> tc.is_original_format(tweet)
    True
codesearchnet
def cumulative_gain_curve(y_true, y_score, pos_label=None): (y_true, y_score) = (np.asarray(y_true), np.asarray(y_score)) classes = np.unique(y_true) if ((pos_label is None) and (not (np.array_equal(classes, [0, 1]) or np.array_equal(classes, [(- 1), 1]) or np.array_equal(classes, [0]) or np.array_equal(classes, [(- 1)]) or np.array_equal(classes, [1])))): raise ValueError('Data is not binary and pos_label is not specified') elif (pos_label is None): pos_label = 1.0 y_true = (y_true == pos_label) sorted_indices = np.argsort(y_score)[::(- 1)] y_true = y_true[sorted_indices] gains = np.cumsum(y_true) percentages = np.arange(start=1, stop=(len(y_true) + 1)) gains = (gains / float(np.sum(y_true))) percentages = (percentages / float(len(y_true))) gains = np.insert(gains, 0, [0]) percentages = np.insert(percentages, 0, [0]) return (percentages, gains)
This function generates the points necessary to plot the Cumulative Gain Note: This implementation is restricted to the binary classification task. Args: y_true (array-like, shape (n_samples)): True labels of the data. y_score (array-like, shape (n_samples)): Target scores, can either be probability estimates of the positive class, confidence values, or non-thresholded measure of decisions (as returned by decision_function on some classifiers). pos_label (int or str, default=None): Label considered as positive and others are considered negative Returns: percentages (numpy.ndarray): An array containing the X-axis values for plotting the Cumulative Gains chart. gains (numpy.ndarray): An array containing the Y-axis values for one curve of the Cumulative Gains chart. Raises: ValueError: If `y_true` is not composed of 2 classes. The Cumulative Gain Chart is only relevant in binary classification.
codesearchnet
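Stripped of the NumPy plumbing, the gains computation is just a sort by descending score followed by a cumulative sum over the true labels; a pure-Python sketch (function name is illustrative):

```python
def cumulative_gain_points(y_true, y_score):
    # Rank labels by descending score, then accumulate positives.
    order = sorted(range(len(y_score)), key=lambda i: y_score[i], reverse=True)
    ranked = [y_true[i] for i in order]
    total_pos = sum(ranked)
    n = len(ranked)
    percentages, gains = [0.0], [0.0]  # both curves start at the origin
    running = 0
    for k, label in enumerate(ranked, start=1):
        running += label
        gains.append(running / total_pos)
        percentages.append(k / n)
    return percentages, gains

pct, gains = cumulative_gain_points([1, 0, 1, 0], [0.9, 0.8, 0.7, 0.1])
```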
async def change_url(self, url: str, description: str=None): (await self._change(url=url, description=description))
Change the url of that attachment

|methcoro|

Args:
    url: the new url for the attachment
    description: *optional* new description for the attachment

Raises:
    ValueError: url must not be None
    APIException
codesearchnet
def endpoints(self): if (not self.__endpoints): self.__endpoints = Endpoints(self.__connection) return self.__endpoints
Gets the Endpoints API client. Returns: Endpoints:
codesearchnet
def _resource_context(fn): return os.path.join( os.path.dirname(__file__), DES_DIR, fn )
Compose path to the ``resources`` directory for given `fn`. Args: fn (str): Filename of file in ``resources`` directory. Returns: str: Absolute path to the file in resources directory.
juraj-google-style
def FlatMapTuple(fn, *args, **kwargs): if not callable(fn): raise TypeError('FlatMapTuple can be used only with callable objects. Received %r instead.' % fn) label = 'FlatMapTuple(%s)' % ptransform.label_from_callable(fn) arg_names, defaults = get_function_args_defaults(fn) num_defaults = len(defaults) if num_defaults < len(args) + len(kwargs): raise TypeError('Side inputs must have defaults for FlatMapTuple.') if defaults or args or kwargs: wrapper = lambda x, *args, **kwargs: fn(*tuple(x) + args, **kwargs) else: wrapper = lambda x: fn(*tuple(x)) type_hints = get_type_hints(fn).with_defaults(typehints.decorators.IOTypeHints.from_callable(fn)) if type_hints.input_types is not None: pass output_hint = type_hints.simple_output_type(label) if output_hint: wrapper = with_output_types(_strip_output_annotations(output_hint))(wrapper) modified_arg_names = ['tuple_element'] + arg_names[-num_defaults:] modified_argspec = (modified_arg_names, defaults) pardo = ParDo(CallableWrapperDoFn(wrapper, fullargspec=modified_argspec), *args, **kwargs) pardo.label = label return pardo
:func:`FlatMapTuple` is like :func:`FlatMap` but expects tuple inputs and flattens them into multiple input arguments. In other words beam.FlatMap(lambda start_end: range(start_end[0], start_end[1])) is equivalent to beam.FlatMapTuple(lambda start, end: range(start, end)) This can be useful when processing a PCollection of tuples (e.g. key-value pairs). Args: fn (callable): a callable object. *args: positional arguments passed to the transform callable. **kwargs: keyword arguments passed to the transform callable. Returns: ~apache_beam.pvalue.PCollection: A :class:`~apache_beam.pvalue.PCollection` containing the :func:`FlatMapTuple` outputs. Raises: TypeError: If the **fn** passed as argument is not a callable. Typical error is to pass a :class:`DoFn` instance which is supported only for :class:`ParDo`.
github-repos
def bin_hash160Bytes(bts): intermed = hashlib.sha256(bts).digest() return hashlib.new('ripemd160', intermed).digest()
Get a hash of the provided message using the ripemd160 algorithm. Args: bts (str): message to hash. Returns: bytes: hash.
juraj-google-style
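The pattern here is a two-stage digest (SHA-256 then RIPEMD-160, often called HASH160). Since ``ripemd160`` is missing from some OpenSSL 3 builds, the sketch below substitutes a second SHA-256 stage just to show the composition; it is not a drop-in replacement for the function above:

```python
import hashlib

def double_digest(data: bytes) -> bytes:
    # First stage: SHA-256 of the message.
    intermed = hashlib.sha256(data).digest()
    # Second stage: hash of the hash (SHA-256 here stands in for RIPEMD-160).
    return hashlib.sha256(intermed).digest()

digest = double_digest(b"message to hash")
```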
def __call__(self, decision_points: List[pg.geno.DecisionPoint], global_state: Optional[pg.geno.AttributeDict]=None, step: int=0) -> List[pg.geno.DecisionPoint]: return self._call(decision_points, global_state=global_state, step=step)
Filtering decision points based on global state and current step. Args: decision_points: A list of decision points as candidates for filtering. global_state: An optional keyword argument as the global state. step: An optional keyword argument as current step of evolution. Returns: A list of decision points that should be kept.
github-repos
def save_cache(cache): with open(settings.DUP_FILTER_FILE, "w") as f: f.write( json.dumps(list(cache)) )
Save cache to disk.

Args:
    cache (set): Set with cached data.
juraj-google-style
def port(self, container, private_port): res = self._get(self._url('/containers/{0}/json', container)) self._raise_for_status(res) json_ = res.json() private_port = str(private_port) h_ports = None port_settings = json_.get('NetworkSettings', {}).get('Ports') if (port_settings is None): return None if ('/' in private_port): return port_settings.get(private_port) for protocol in ['tcp', 'udp', 'sctp']: h_ports = port_settings.get(((private_port + '/') + protocol)) if h_ports: break return h_ports
Lookup the public-facing port that is NAT-ed to ``private_port``. Identical to the ``docker port`` command. Args: container (str): The container to look up private_port (int): The private port to inspect Returns: (list of dict): The mapping for the host ports Raises: :py:class:`docker.errors.APIError` If the server returns an error. Example: .. code-block:: bash $ docker run -d -p 80:80 ubuntu:14.04 /bin/sleep 30 7174d6347063a83f412fad6124c99cffd25ffe1a0807eb4b7f9cec76ac8cb43b .. code-block:: python >>> cli.port('7174d6347063', 80) [{'HostIp': '0.0.0.0', 'HostPort': '80'}]
codesearchnet
def passive_mode(self) -> Tuple[(str, int)]: (yield from self._control_stream.write_command(Command('PASV'))) reply = (yield from self._control_stream.read_reply()) self.raise_if_not_match('Passive mode', ReplyCodes.entering_passive_mode, reply) try: return wpull.protocol.ftp.util.parse_address(reply.text) except ValueError as error: raise ProtocolError(str(error)) from error
Enable passive mode. Returns: The address (IP address, port) of the passive port. Coroutine.
codesearchnet
def resolve_widget(self, field): if hasattr(field, 'field'): widget = field.field.widget else: widget = field.widget return widget
Given a Field or BoundField, return widget instance. Todo: Raise an exception if given field object does not have a widget. Arguments: field (Field or BoundField): A field instance. Returns: django.forms.widgets.Widget: Retrieved widget from given field.
codesearchnet
def _ParseValueData(self, knowledge_base, value_data): if not isinstance(value_data, py2to3.UNICODE_TYPE): raise errors.PreProcessFail( 'Unsupported Windows Registry value type: {0:s} for ' 'artifact: {1:s}.'.format( type(value_data), self.ARTIFACT_DEFINITION_NAME)) lookup_key = value_data.replace(' ', '') time_zone = time_zones.TIME_ZONES.get(lookup_key, value_data) if time_zone: try: knowledge_base.SetTimeZone(time_zone) except ValueError: time_zone = value_data logger.warning('Unable to map: "{0:s}" to time zone'.format( value_data))
Parses Windows Registry value data for a preprocessing attribute. Args: knowledge_base (KnowledgeBase): to fill with preprocessing information. value_data (object): Windows Registry value data. Raises: errors.PreProcessFail: if the preprocessing fails.
juraj-google-style
def __init__(self, scores=None, classes=None): if scores is not None and (not (isinstance(scores, tensor.Tensor) and scores.dtype.is_floating)): raise ValueError('Classification scores must be a float32 Tensor; got {}'.format(scores)) if classes is not None and (not (isinstance(classes, tensor.Tensor) and dtypes.as_dtype(classes.dtype) == dtypes.string)): raise ValueError('Classification classes must be a string Tensor; got {}'.format(classes)) if scores is None and classes is None: raise ValueError('At least one of scores and classes must be set.') self._scores = scores self._classes = classes
Constructor for `ClassificationOutput`. Args: scores: A float `Tensor` giving scores (sometimes but not always interpretable as probabilities) for each class. May be `None`, but only if `classes` is set. Interpretation varies-- see class doc. classes: A string `Tensor` giving predicted class labels. May be `None`, but only if `scores` is set. Interpretation varies-- see class doc. Raises: ValueError: if neither classes nor scores is set, or one of them is not a `Tensor` with the correct dtype.
github-repos
def install_exception_handler(handler): if (not isinstance(handler, ExceptionHandler)): raise TypeError(('handler of type %s does not inherit from ExceptionHandler' % type(handler))) EXCEPTION_HANDLERS.append(handler)
Installs an exception handler. Args: handler: ExceptionHandler, the exception handler to install. Raises: TypeError: Raised when the handler was not of the correct type. All installed exception handlers will be called if main() exits via an abnormal exception, i.e. not one of SystemExit, KeyboardInterrupt, FlagsError or UsageError.
codesearchnet
def delete(self, location):
    bucket = self.info['bucket']
    prefix = self.info['prefix']
    self.logger.debug('Connecting to S3')
    s3conn = self.client
    if location[0] == '/':
        location = location[1:]
    if location[-1] == '/':
        location = location[:-1]  # strip only the trailing slash
    self.logger.debug('Deleting contents')
    for s3key in s3conn.list_objects(Bucket=bucket, Prefix=(prefix+'/'+location))['Contents']:
        s3conn.delete_object(Bucket=bucket, Key=s3key['Key'])
    self.logger.debug('Done!')
Delete content in bucket/prefix/location. Location can be a directory or a file (e.g., my_dir or my_dir/my_image.tif) If location is a directory, all files in the directory are deleted. If it is a file, then that file is deleted. Args: location (str): S3 location within prefix. Can be a directory or a file (e.g., my_dir or my_dir/my_image.tif).
juraj-google-style
def SampleTaskStatus(self, task, status): if self._tasks_profiler: self._tasks_profiler.Sample(task, status)
Takes a sample of the status of the task for profiling. Args: task (Task): a task. status (str): status.
juraj-google-style
def Histograms(self, run, tag): accumulator = self.GetAccumulator(run) return accumulator.Histograms(tag)
Retrieve the histogram events associated with a run and tag. Args: run: A string name of the run for which values are retrieved. tag: A string name of the tag for which values are retrieved. Raises: KeyError: If the run is not found, or the tag is not available for the given run. Returns: An array of `event_accumulator.HistogramEvents`.
codesearchnet
def first(series, order_by=None): if order_by is not None: series = order_series_by(series, order_by) first_s = series.iloc[0] return first_s
Returns the first value of a series. Args: series (pandas.Series): column to summarize. Kwargs: order_by: a pandas.Series or list of series (can be symbolic) to order the input series by before summarization.
juraj-google-style
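The same ordered-first idea in plain Python (no pandas), for intuition; the function name is illustrative:

```python
def first_value(values, order_by=None):
    # Reorder values by the order_by keys, then take the first element.
    if order_by is not None:
        values = [v for _, v in sorted(zip(order_by, values))]
    return values[0]
```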
def set_pyftpsync_logger(logger=True): global _logger prev_logger = _logger if logger is True: logging.basicConfig(level=logging.INFO) _logger = logging.getLogger("pyftpsync") _logger.setLevel(logging.DEBUG) else: _logger = logger return prev_logger
Define target for common output. Args: logger (bool | None | logging.Logger): Pass None to use `print()` to stdout instead of logging. Pass True to create a simple standard logger.
juraj-google-style
def GetCompressedStreamTypeIndicators(cls, path_spec, resolver_context=None): if (cls._compressed_stream_remainder_list is None or cls._compressed_stream_store is None): specification_store, remainder_list = cls._GetSpecificationStore( definitions.FORMAT_CATEGORY_COMPRESSED_STREAM) cls._compressed_stream_remainder_list = remainder_list cls._compressed_stream_store = specification_store if cls._compressed_stream_scanner is None: cls._compressed_stream_scanner = cls._GetSignatureScanner( cls._compressed_stream_store) return cls._GetTypeIndicators( cls._compressed_stream_scanner, cls._compressed_stream_store, cls._compressed_stream_remainder_list, path_spec, resolver_context=resolver_context)
Determines if a file contains a supported compressed stream types. Args: path_spec (PathSpec): path specification. resolver_context (Optional[Context]): resolver context, where None represents the built-in context which is not multi process safe. Returns: list[str]: supported format type indicators.
juraj-google-style
def _md5_file(fn, block_size=1048576):
    h = hashlib.md5()
    # open in binary mode: update() requires bytes
    with open(fn, "rb") as fp:
        d = fp.read(block_size)
        while d:
            h.update(d)
            d = fp.read(block_size)
    return h.hexdigest()
Builds the MD5 of a file block by block Args: fn: File path block_size: Size of the blocks to consider (default 1048576) Returns: File MD5
codesearchnet
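The same block-by-block pattern generalizes to any ``hashlib`` digest; a sketch using SHA-256 over a temporary file (the file must be opened in binary mode, since ``update()`` takes bytes):

```python
import hashlib
import os
import tempfile

def hash_file(path, block_size=1048576, algo="sha256"):
    # Feed the digest one block at a time; bounded memory for any file size.
    h = hashlib.new(algo)
    with open(path, "rb") as fp:
        block = fp.read(block_size)
        while block:
            h.update(block)
            block = fp.read(block_size)
    return h.hexdigest()

with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"x" * 3_000_000)  # spans several 1 MiB blocks
    path = tmp.name

digest = hash_file(path)
os.remove(path)
```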
def last_checkpoints(self): return list((self._CheckpointFilename(p) for p in self._last_checkpoints))
List of not-yet-deleted checkpoint filenames. You can pass any of the returned values to `restore()`. Returns: A list of checkpoint filenames, sorted from oldest to newest.
github-repos
def __init__(self, columns: list[str], split_string_by_delimiter: Optional[str]=None, *, ngram_range: tuple[int, int]=(1, 1), ngrams_separator: Optional[str]=None, name: Optional[str]=None): super().__init__(columns) self.ngram_range = ngram_range self.ngrams_separator = ngrams_separator self.name = name self.split_string_by_delimiter = split_string_by_delimiter if ngram_range != (1, 1) and (not ngrams_separator): raise ValueError('ngrams_separator must be specified when ngram_range is not (1, 1)')
An n-gram is a contiguous sequence of n items from a given sample of text or speech. This operation applies an n-gram transformation to specified columns of incoming data, splitting the input data into a set of consecutive n-grams. Args: columns: A list of column names to apply the transformation on. split_string_by_delimiter: (Optional) A string that specifies the delimiter to split the input strings before computing ngrams. ngram_range: A tuple of integers(inclusive) specifying the range of n-gram sizes. ngrams_separator: A string that will be inserted between each ngram. name: A name for the operation (optional).
github-repos
def allsplit(self, x, mesh_axis, split_axis, which=None):
    if which is None:
        which = self.laid_out_pcoord(mesh_axis)
    num_splits = self.shape[mesh_axis].size

    def my_fn(x, which):
        # Keep the `which`-th of `num_splits` equal pieces along split_axis.
        slice_begin = [((dimsize
                       for (i, dimsize) in enumerate(x.shape.as_list())]
        slice_size = [(dimsize
                      for (i, dimsize) in enumerate(x.shape.as_list())]
        return tf.slice(x, slice_begin, slice_size)

    return self.slicewise(my_fn, x, which)
Inverse of allconcat - split each slice and keep only one piece of it.

The number of ways to split is the number of processors in the group.
The part that is kept corresponds to the processor's index in the group.

Args:
    x: LaidOutTensor.
    mesh_axis: int, the mesh axis along which to split.
    split_axis: int, the Tensor axis along which to split.
    which: an optional LaidOutTensor of integer scalars. Selects the slice to
        keep, instead of the coordinate.

Returns:
    LaidOutTensor.
codesearchnet
def emit_tree_format(tree, verbose=False): if verbose: print(('Converting: ' + repr(tree))) ret_str = __recursive_formatter(tree) return ret_str
Returns a tree representation of a parse tree. Arguments: tree: the parse tree whose tree representation is to be generated verbose (bool): if True prints the parse tree to be formatted Returns: str: tree-like representation of the parse tree
codesearchnet
def _validate(self): errors = [] for k in self._defaults.keys(): try: validator = self._defaults[k]['validator'] if (validator is not None): self[k] = validator(self[k]) except ValueError as e: errors.append('\t{}: {}'.format(k, six.text_type(e))) if errors: raise ValueError('Invalid configuration values were set: \n{}'.format('\n'.join(errors)))
Run the validators found in self._defaults on all the corresponding values. Raises: ValueError: If the configuration contains an invalid configuration value.
codesearchnet
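The validator loop boils down to: run each default's validator over the current value, collect ``ValueError`` messages, and raise once at the end. A standalone sketch (the ``port`` validator and the dict layout are made-up examples):

```python
def validate_config(config, defaults):
    # Apply each validator in place, accumulating errors so the caller
    # sees every invalid value at once instead of failing on the first.
    errors = []
    for key, spec in defaults.items():
        validator = spec.get("validator")
        if validator is None:
            continue
        try:
            config[key] = validator(config[key])
        except ValueError as exc:
            errors.append("\t{}: {}".format(key, exc))
    if errors:
        raise ValueError(
            "Invalid configuration values were set: \n{}".format("\n".join(errors)))
    return config

def port_validator(value):
    port = int(value)
    if not 0 < port < 65536:
        raise ValueError("port out of range: {}".format(port))
    return port

defaults = {"port": {"validator": port_validator}, "name": {"validator": None}}
```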
def _load_chunk(dat_path, cat_path, info_path): dat_array = read_binary_matrix(dat_path) dat_array = np.expand_dims(dat_array, -1) cat_array = read_binary_matrix(cat_path) info_array = read_binary_matrix(info_path) info_array = np.copy(info_array) info_array[:, 2] = info_array[:, 2] / 2 return dat_array, cat_array, info_array
Loads a data chunk as specified by the paths. Args: dat_path: Path to dat file of the chunk. cat_path: Path to cat file of the chunk. info_path: Path to info file of the chunk. Returns: Tuple with the dat, cat, info_arrays.
juraj-google-style
def _CheckAttribute(self, attribute, value): if (not isinstance(attribute, Attribute)): raise AttributeError(('Attribute %s must be of type aff4.Attribute()' % attribute)) if (not isinstance(value, attribute.attribute_type)): raise ValueError(('Value for attribute %s must be of type %s()' % (attribute, attribute.attribute_type.__name__)))
Check that the value is of the expected type. Args: attribute: An instance of Attribute(). value: An instance of RDFValue. Raises: ValueError: when the value is not of the expected type. AttributeError: When the attribute is not of type Attribute().
codesearchnet
def ExtractEvents(self, parser_mediator, registry_key, **kwargs): names_key = registry_key.GetSubkeyByName('Names') if not names_key: parser_mediator.ProduceExtractionWarning('missing subkey: Names.') return last_written_time_per_username = { registry_value.name: registry_value.last_written_time for registry_value in names_key.GetSubkeys()} for subkey in registry_key.GetSubkeys(): if subkey.name == 'Names': continue try: f_value = self._ParseFValue(subkey) except errors.ParseError as exception: parser_mediator.ProduceExtractionWarning( 'unable to parse F value with error: {0!s}'.format(exception)) continue registry_value = subkey.GetValueByName('V') if not registry_value: parser_mediator.ProduceExtractionWarning( 'missing Registry value: "V" in subkey: {0:s}.'.format( subkey.name)) continue v_value_map = self._GetDataTypeMap('v_value') try: v_value = self._ReadStructureFromByteStream( registry_value.data, 0, v_value_map) except (ValueError, errors.ParseError) as exception: parser_mediator.ProduceExtractionWarning( 'unable to parse V value with error: {0!s}'.format(exception)) continue username = self._ParseVValueString( parser_mediator, registry_value.data, v_value[1]) fullname = self._ParseVValueString( parser_mediator, registry_value.data, v_value[2]) comments = self._ParseVValueString( parser_mediator, registry_value.data, v_value[3]) last_written_time = last_written_time_per_username.get(username, None) if last_written_time: values_dict = { 'account_rid': f_value.rid, 'login_count': f_value.number_of_logons} if username: values_dict['username'] = username if fullname: values_dict['full_name'] = fullname if comments: values_dict['comments'] = comments event_data = windows_events.WindowsRegistryEventData() event_data.key_path = registry_key.path event_data.regvalue = values_dict event_data.source_append = self._SOURCE_APPEND event = time_events.DateTimeValuesEvent( last_written_time, definitions.TIME_DESCRIPTION_WRITTEN) 
parser_mediator.ProduceEventWithEventData(event, event_data) event_data = SAMUsersWindowsRegistryEventData() event_data.account_rid = f_value.rid event_data.comments = comments event_data.fullname = fullname event_data.key_path = registry_key.path event_data.login_count = f_value.number_of_logons event_data.username = username if f_value.last_login_time != 0: date_time = dfdatetime_filetime.Filetime( timestamp=f_value.last_login_time) event = time_events.DateTimeValuesEvent( date_time, definitions.TIME_DESCRIPTION_LAST_LOGIN) parser_mediator.ProduceEventWithEventData(event, event_data) if f_value.last_password_set_time != 0: date_time = dfdatetime_filetime.Filetime( timestamp=f_value.last_password_set_time) event = time_events.DateTimeValuesEvent( date_time, definitions.TIME_DESCRIPTION_LAST_PASSWORD_RESET) parser_mediator.ProduceEventWithEventData(event, event_data)
Extracts events from a Windows Registry key. Args: parser_mediator (ParserMediator): mediates interactions between parsers and other components, such as storage and dfvfs. registry_key (dfwinreg.WinRegistryKey): Windows Registry key.
juraj-google-style
def get_factors_iterative2(n):
    ans, stack, x = [], [], 2
    while True:
        if x > n
            if not stack:
                return ans
            ans.append(stack + [n])
            x = stack.pop()
            n *= x
            x += 1
        elif n % x == 0:
            stack.append(x)
            n
        else:
            x += 1
Iterative version of the factorization above, using an explicit stack
instead of recursion.

Arguments:
    n {[int]} -- number to factor

Returns:
    [list of lists] -- [all factors of n]
juraj-google-style
def serialize_to_transport(self, doc_format="xml", *args, **kwargs): return super(ResourceMap, self).serialize( format=doc_format, encoding="utf-8", *args, **kwargs )
Serialize ResourceMap to UTF-8 encoded XML document. Args: doc_format: str One of: ``xml``, ``n3``, ``turtle``, ``nt``, ``pretty-xml``, ``trix``, ``trig`` and ``nquads``. args and kwargs: Optional arguments forwarded to rdflib.ConjunctiveGraph.serialize(). Returns: bytes: UTF-8 encoded XML doc. Note: Only the default, "xml", is automatically indexed by DataONE.
juraj-google-style
async def find_person(self, query): url = self.url_builder( 'search/person', dict(), url_params=OrderedDict([ ('query', query), ('include_adult', False) ]), ) data = await self.get_data(url) if data is None: return return [ Person.from_json(item, self.config['data'].get('images')) for item in data.get('results', []) ]
Retrieve person data by search query. Arguments: query (:py:class:`str`): Query to search for. Returns: :py:class:`list`: Possible matches.
juraj-google-style
def Unlock(fd, path): try: fcntl.flock(fd, (fcntl.LOCK_UN | fcntl.LOCK_NB)) except IOError as e: if (e.errno == errno.EWOULDBLOCK): raise IOError(('Exception unlocking %s. Locked by another process.' % path)) else: raise IOError(('Exception unlocking %s. %s.' % (path, str(e))))
Release the lock on the file. Args: fd: int, the file descriptor of the file to unlock. path: string, the name of the file to lock. Raises: IOError, raised from flock while attempting to release a file lock.
codesearchnet
def merge_wells(self, right, keys=None):
    wells = []
    for w in self:
        rw = right.get_well(w.uwi)
        if rw is not None:
            if keys is None:
                keys = list(rw.data.keys())
            for k in keys:
                try:
                    w.data[k] = rw.data[k]
                except KeyError:
                    # Key missing from the right-hand well; skip it.
                    pass
        wells.append(w)
    return Project(wells)
Returns a new Project object containing the wells from self, where
curves from the matching wells in ``right`` have been added. Wells
are matched by UWI; only wells present in self are considered.

Args:
    right (Project): the project whose well curves will be merged in.
    keys (list): the data keys to merge; defaults to all keys in the
        matching well.

Returns:
    project
juraj-google-style
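The UWI-matched merge above boils down to a left-biased join on a key; a generic sketch with plain dicts (the field names and sample rows here are invented for illustration):

```python
def merge_by_key(left, right, key, fields=None):
    # left, right: lists of dicts. Rows from `right` matched on `key`
    # contribute the requested fields (default: all) to the left rows;
    # unmatched right rows are dropped, mirroring merge_wells.
    index = {row[key]: row for row in right}
    merged = []
    for row in left:
        match = index.get(row[key])
        out = dict(row)
        if match is not None:
            for k in (fields if fields is not None else match):
                if k != key and k in match:
                    out[k] = match[k]
        merged.append(out)
    return merged

wells = merge_by_key(
    [{'uwi': 'A1', 'name': 'Alpha'}],
    [{'uwi': 'A1', 'gr': [1, 2, 3]}, {'uwi': 'B2', 'gr': []}],
    key='uwi',
)
```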
def __type_to_tag(self, type_: Type) -> str: if (type_ in scalar_type_to_tag): return scalar_type_to_tag[type_] if is_generic_list(type_): return 'tag:yaml.org,2002:seq' if is_generic_dict(type_): return 'tag:yaml.org,2002:map' if (type_ in self._registered_classes.values()): return '!{}'.format(type_.__name__) raise RuntimeError('Unknown type {} in type_to_tag, please report a YAtiML bug.'.format(type_))
Convert a type to the corresponding YAML tag. Args: type_: The type to convert Returns: A string containing the YAML tag.
codesearchnet
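The tag lookup above can be mimicked outside YAtiML with a small standalone table; the `scalar_type_to_tag` contents, the `__origin__`-based generic checks, and the registered-class tuple below are stand-ins for the library's internals, not its actual API:

```python
from typing import Dict, List, Type

# Stand-in for YAtiML's scalar_type_to_tag table.
scalar_type_to_tag: Dict[Type, str] = {
    bool: 'tag:yaml.org,2002:bool',
    int: 'tag:yaml.org,2002:int',
    float: 'tag:yaml.org,2002:float',
    str: 'tag:yaml.org,2002:str',
}

def type_to_tag(type_, registered_classes=()):
    # Scalars map directly; generic List/Dict map to seq/map; registered
    # user classes get a local '!ClassName' tag.
    if type_ in scalar_type_to_tag:
        return scalar_type_to_tag[type_]
    if getattr(type_, '__origin__', None) is list:
        return 'tag:yaml.org,2002:seq'
    if getattr(type_, '__origin__', None) is dict:
        return 'tag:yaml.org,2002:map'
    if type_ in registered_classes:
        return '!{}'.format(type_.__name__)
    raise RuntimeError('Unknown type {}'.format(type_))

class Config:
    pass

tag_int = type_to_tag(int)
tag_seq = type_to_tag(List[int])
tag_cls = type_to_tag(Config, registered_classes=(Config,))
```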
def matmul(self, input_tensor: core.Tensor) -> Mapping[str, core.Tensor]: out = math_ops.matmul(input_tensor, self.filters, name='sample/matmul') if self.has_reshape(): input_shape = input_tensor.shape if len(input_shape) == 3: reshape_shape = (input_shape[0], -1, self.bias_size) else: reshape_shape = (-1, self.bias_size) out = array_ops.reshape(out, reshape_shape) if self.has_bias(): if self.use_biasadd: out = nn_ops.bias_add(out, self.bias) else: out = math_ops.add_v2(out, self.bias) if self.activation_fn is not None: out = self.activation_fn(out) return {'output': out}
Performs a matrix multiplication.

Depending on self.has_bias and self.activation_fn, it may add a bias term or
go through the activation function.

Args:
  input_tensor: Input tensor to matmul with the filter.

Returns:
  A map of: output key -> output result.
github-repos
def creating_schema_and_index(self, models, func): waiting_models = [] self.base_thread.do_with_submit(func, models, waiting_models, threads=self.threads) if waiting_models: print('WAITING MODELS ARE CHECKING...') self.creating_schema_and_index(waiting_models, func)
Executes the given function with the given models, re-running it for
any models that end up waiting until none remain.

Args:
    models: models to process
    func: function to execute for each model
codesearchnet
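The recurse-until-no-waiting-models pattern above can be sketched generically; `build` and the dependency table below are hypothetical stand-ins for the schema/index function, and unlike the original this sketch adds a guard against a pass that makes no progress:

```python
def process_until_done(items, process):
    # Each pass re-runs `process` on the items that had to wait; a pass
    # that leaves every item waiting indicates a dependency cycle.
    passes = 0
    while items:
        passes += 1
        waiting = [item for item in items if process(item)]
        if len(waiting) == len(items):
            raise RuntimeError('no progress made; circular dependency?')
        items = waiting
    return passes

done = set()
DEPS = {'b': 'a', 'c': 'b'}  # hypothetical model dependencies

def build(model):
    # Returns True if the model must wait for a later pass.
    dep = DEPS.get(model)
    if dep is not None and dep not in done:
        return True
    done.add(model)
    return False

passes_needed = process_until_done(['c', 'b', 'a'], build)
```

With the worst-case ordering above, each pass resolves one model, so three passes are needed.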
def get_country_info_from_iso3(cls, iso3, use_live=True, exception=None): countriesdata = cls.countriesdata(use_live=use_live) country = countriesdata['countries'].get(iso3.upper()) if country is not None: return country if exception is not None: raise exception return None
Get country information from ISO3 code.

Args:
    iso3 (str): ISO3 code for which to get country information
    use_live (bool): Try to use the latest data from the web rather than the file in the package. Defaults to True.
    exception (Optional[ExceptionUpperBound]): An exception to raise if country not found. Defaults to None.

Returns:
    Optional[Dict[str]]: country information
juraj-google-style
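The lookup-with-optional-exception pattern above generalises to any table; a minimal standalone sketch (the data dict is invented for illustration):

```python
def lookup(table, key, exception=None):
    # Case-normalise the key; return the hit, raise the caller-supplied
    # exception on a miss, or fall back to None.
    value = table.get(key.upper())
    if value is not None:
        return value
    if exception is not None:
        raise exception
    return None

countries = {'AFG': {'name': 'Afghanistan'}, 'ALB': {'name': 'Albania'}}
found = lookup(countries, 'afg')
missing = lookup(countries, 'xyz')
```

Passing `exception=LookupError('not found')` turns the silent `None` into a raised error at the call site.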
def image_needs_building(image): d = docker_client() try: d.images.get(image) except docker.errors.ImageNotFound: pass else: return False return image_needs_pushing(image)
Return whether an image needs building.

Checks if the image exists (ignores commit range),
either locally or on the registry.

Args:

image (str): the `repository:tag` image to be built.

Returns:

True: if image needs to be built
False: if not (image already exists)
juraj-google-style
def md2tvd(self, kind='linear'): if self.position is None: return lambda x: x return interp1d(self.md, self.tvd, kind=kind, assume_sorted=True, fill_value="extrapolate", bounds_error=False)
Provides an transformation and interpolation function that converts MD to TVD. Args: kind (str): The kind of interpolation to do, e.g. 'linear', 'cubic', 'nearest'. Returns: function.
juraj-google-style
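The MD-to-TVD mapping above relies on `scipy.interpolate.interp1d`; the linear case with end-segment extrapolation can be sketched with the standard library alone (the survey points below are made up, and this is not part of welly's API):

```python
from bisect import bisect_left

def make_md2tvd(md, tvd):
    # Returns a function that linearly interpolates TVD from MD,
    # extrapolating from the end segments outside the survey range,
    # like interp1d(..., fill_value='extrapolate').
    def f(x):
        i = bisect_left(md, x)
        if i == 0:
            i = 1
        elif i == len(md):
            i = len(md) - 1
        x0, x1 = md[i - 1], md[i]
        y0, y1 = tvd[i - 1], tvd[i]
        return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return f

md2tvd = make_md2tvd([0.0, 100.0, 200.0], [0.0, 95.0, 180.0])
```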
def provider(func=None, *, singleton=False, injector=None): def decorator(func): wrapped = _wrap_provider_func(func, {'singleton': singleton}) if injector: injector.register_provider(wrapped) return wrapped if func: return decorator(func) return decorator
Decorator to mark a function as a provider. Args: singleton (bool): The returned value should be a singleton or shared instance. If False (the default) the provider function will be invoked again for every time it's needed for injection. injector (Injector): If provided, the function is immediately registered as a provider with the injector instance. Example: @diay.provider(singleton=True) def myfunc() -> MyClass: return MyClass(args)
codesearchnet
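The `func=None` plus keyword-only-arguments trick above is what lets one decorator work both bare and with arguments; a minimal standalone version of the pattern (`tag` and `label` are invented names, not part of diay):

```python
import functools

def tag(func=None, *, label='default'):
    # Usable as @tag or @tag(label='x'): when applied bare, `func` is
    # the decorated function; when called with arguments, it is None
    # and we return the real decorator instead.
    def decorator(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            return (label, f(*args, **kwargs))
        return wrapper
    if func:
        return decorator(func)
    return decorator

@tag
def plain():
    return 1

@tag(label='special')
def fancy():
    return 2
```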
class XLMSQuADHead(nn.Module):

    def __init__(self, config: XLMConfig):
        super().__init__()
        self.start_n_top = config.start_n_top
        self.end_n_top = config.end_n_top
        self.start_logits = XLMPoolerStartLogits(config)
        self.end_logits = XLMPoolerEndLogits(config)
        self.answer_class = XLMPoolerAnswerClass(config)

    @auto_docstring
    def forward(self, hidden_states: torch.FloatTensor, start_positions: Optional[torch.LongTensor]=None, end_positions: Optional[torch.LongTensor]=None, cls_index: Optional[torch.LongTensor]=None, is_impossible: Optional[torch.LongTensor]=None, p_mask: Optional[torch.FloatTensor]=None, return_dict: bool=False) -> Union[XLMSquadHeadOutput, Tuple[torch.FloatTensor]]:
        start_logits = self.start_logits(hidden_states, p_mask=p_mask)
        if start_positions is not None and end_positions is not None:
            for x in (start_positions, end_positions, cls_index, is_impossible):
                if x is not None and x.dim() > 1:
                    x.squeeze_(-1)
            end_logits = self.end_logits(hidden_states, start_positions=start_positions, p_mask=p_mask)
            loss_fct = CrossEntropyLoss()
            start_loss = loss_fct(start_logits, start_positions)
            end_loss = loss_fct(end_logits, end_positions)
            total_loss = (start_loss + end_loss) / 2
            if cls_index is not None and is_impossible is not None:
                cls_logits = self.answer_class(hidden_states, start_positions=start_positions, cls_index=cls_index)
                loss_fct_cls = nn.BCEWithLogitsLoss()
                cls_loss = loss_fct_cls(cls_logits, is_impossible)
                total_loss += cls_loss * 0.5
            return XLMSquadHeadOutput(loss=total_loss) if return_dict else (total_loss,)
        else:
            bsz, slen, hsz = hidden_states.size()
            start_log_probs = nn.functional.softmax(start_logits, dim=-1)
            start_top_log_probs, start_top_index = torch.topk(start_log_probs, self.start_n_top, dim=-1)
            start_top_index_exp = start_top_index.unsqueeze(-1).expand(-1, -1, hsz)
            start_states = torch.gather(hidden_states, -2, start_top_index_exp)
            start_states = start_states.unsqueeze(1).expand(-1, slen, -1, -1)
            hidden_states_expanded = hidden_states.unsqueeze(2).expand_as(start_states)
            p_mask = p_mask.unsqueeze(-1) if p_mask is not None else None
            end_logits = self.end_logits(hidden_states_expanded, start_states=start_states, p_mask=p_mask)
            end_log_probs = nn.functional.softmax(end_logits, dim=1)
            end_top_log_probs, end_top_index = torch.topk(end_log_probs, self.end_n_top, dim=1)
            end_top_log_probs = end_top_log_probs.view(-1, self.start_n_top * self.end_n_top)
            end_top_index = end_top_index.view(-1, self.start_n_top * self.end_n_top)
            start_states = torch.einsum('blh,bl->bh', hidden_states, start_log_probs)
            cls_logits = self.answer_class(hidden_states, start_states=start_states, cls_index=cls_index)
            if not return_dict:
                return (start_top_log_probs, start_top_index, end_top_log_probs, end_top_index, cls_logits)
            else:
                return XLMSquadHeadOutput(start_top_log_probs=start_top_log_probs, start_top_index=start_top_index, end_top_log_probs=end_top_log_probs, end_top_index=end_top_index, cls_logits=cls_logits)
A SQuAD head inspired by XLNet. Args: config ([`XLMConfig`]): The config used by the model, will be used to grab the `hidden_size` of the model and the `layer_norm_eps` to use.
github-repos
def create_local_copy(self, effects=None, store=None): effects = self._build_effects(effects) store = store or '' data = { 'source': self.cdn_path(effects) } if store: data['store'] = store return rest_request('POST', 'files/', data=data)
Creates a Local File Copy on Uploadcare Storage.

Args:
    - effects:
        Adds CDN image effects. If ``self.default_effects`` property
        is set, effects will be combined with default effects.
    - store:
        If ``store`` option is set to False the copy of your file
        will be deleted within a 24-hour period after the upload.
        Works only if `autostore` is enabled in the project.
juraj-google-style
def resize_video(in_file, out_file, size=None, ratio=None, keep_ar=False, log_level='info', print_cmd=False, **kwargs): if ((size is None) and (ratio is None)): raise ValueError('expected size or ratio must be specified') elif ((size is not None) and (ratio is not None)): raise ValueError('size and ratio cannot be specified at the same time') options = {'log_level': log_level} if size: if (not keep_ar): options['vf'] = 'scale={}:{}'.format(size[0], size[1]) else: options['vf'] = 'scale=w={}:h={}:force_original_aspect_ratio=decrease'.format(size[0], size[1]) else: if (not isinstance(ratio, tuple)): ratio = (ratio, ratio) options['vf'] = 'scale="trunc(iw*{}):trunc(ih*{})"'.format(ratio[0], ratio[1]) convert_video(in_file, out_file, print_cmd, **options)
Resize a video. Args: in_file (str): Input video filename. out_file (str): Output video filename. size (tuple): Expected size (w, h), eg, (320, 240) or (320, -1). ratio (tuple or float): Expected resize ratio, (2, 0.5) means (w*2, h*0.5). keep_ar (bool): Whether to keep original aspect ratio. log_level (str): Logging level of ffmpeg. print_cmd (bool): Whether to print the final ffmpeg command.
codesearchnet
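The scale-filter strings assembled above can be factored into a tiny helper; a sketch (not part of mmcv's API) that mirrors the three branches, including the mutual-exclusion check on `size` and `ratio`:

```python
def build_scale_filter(size=None, ratio=None, keep_ar=False):
    # Exactly one of size/ratio must be given, as in resize_video.
    if (size is None) == (ratio is None):
        raise ValueError('specify exactly one of size or ratio')
    if size is not None:
        if not keep_ar:
            return 'scale={}:{}'.format(size[0], size[1])
        return ('scale=w={}:h={}:force_original_aspect_ratio=decrease'
                .format(size[0], size[1]))
    if not isinstance(ratio, tuple):
        ratio = (ratio, ratio)
    return 'scale="trunc(iw*{}):trunc(ih*{})"'.format(ratio[0], ratio[1])

vf_fixed = build_scale_filter(size=(320, 240))
vf_ratio = build_scale_filter(ratio=0.5)
```

The `trunc(...)` expressions keep the computed dimensions integral, which ffmpeg requires for most pixel formats.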