def load_b26_file(file_name): assert os.path.exists(file_name) with open(file_name, 'r') as infile: data = yaml.safe_load(infile) return data
Loads a .b26 file into a dictionary. Args: file_name: path to the .b26 file to load. Returns: dictionary with keys instrument, scripts, probes
juraj-google-style
def append_from_list(self, content, fill_title=False): row_index = 0 for row in content: tr = TableRow() column_index = 0 for item in row: if row_index == 0 and fill_title: ti = TableTitle(item) else: ...
Appends rows created from the data contained in the provided list of tuples of strings. The first tuple of the list can be set as table title. Args: content (list): list of tuples of strings. Each tuple is a row. fill_title (bool): if true, the first tuple in the list will be set as title.
juraj-google-style
def struct_member_error(err, sid, name, offset, size): (exception, msg) = STRUCT_ERROR_MAP[err] struct_name = idc.GetStrucName(sid) return exception('AddStructMember(struct="{}", member="{}", offset={}, size={}) failed: {}'.format(struct_name, name, offset, size, msg))
Create and format a struct member exception. Args: err: The error value returned from struct member creation sid: The struct id name: The member name offset: Member offset size: Member size Returns: A ``SarkErrorAddStructMemeberFailed`` derivative exception, with an informative message.
codesearchnet
def validate_language_key(obj, key): backend = bigchaindb.config['database']['backend'] if backend == 'localmongodb': data = obj.get(key, {}) if isinstance(data, dict): validate_all_values_for_key_in_obj(data, 'language', validate_language) elif isinstance(data, list): ...
Validate all nested "language" keys in `obj`. Args: obj (dict): dictionary whose "language" keys are to be validated. Returns: None: validation successful Raises: ValidationError: raised if a language is not valid.
juraj-google-style
def split_input(cls, mapper_spec, _reader=blobstore.BlobReader): params = _get_params(mapper_spec) blob_key = params[cls.BLOB_KEY_PARAM] zip_input = zipfile.ZipFile(_reader(blob_key)) zfiles = zip_input.infolist() total_size = sum((x.file_size for x in zfiles)) num_shards = min(mapper_spec.shard...
Returns a list of input shard states for the input spec. Args: mapper_spec: The MapperSpec for this InputReader. Must contain 'blob_key' parameter with one blob key. _reader: a callable that returns a file-like object for reading blobs. Used for dependency injection. Returns: A list of InputReaders spanning files wit...
codesearchnet
def to_cache_timer(datetime_func): if datetime_func is None: datetime_func = datetime.utcnow def _timer(): return (datetime_func() - datetime(1970, 1, 1)).total_seconds() return _timer
Converts a datetime_func to a timestamp_func. Args: datetime_func (callable[[], datetime]): a func that returns the current time Returns: time_func (callable[[], timestamp]): a func that returns the timestamp from the epoch
juraj-google-style
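The `to_cache_timer` row above is self-contained, so a minimal sketch can show how the returned timer converts a clock into epoch seconds; the fixed-clock lambda below is an illustrative assumption, not part of the original.

```python
from datetime import datetime

def to_cache_timer(datetime_func):
    # Convert a zero-argument datetime factory into an epoch-seconds timer.
    if datetime_func is None:
        datetime_func = datetime.utcnow
    def _timer():
        return (datetime_func() - datetime(1970, 1, 1)).total_seconds()
    return _timer

# A fixed clock makes the conversion easy to check:
timer = to_cache_timer(lambda: datetime(1970, 1, 2))
print(timer())  # one day after the epoch = 86400.0 seconds
```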
def c_overturned(step): rbot, rtop = misc.get_rbounds(step) cinit, rad = init_c_overturn(step) radf = (rtop**3 + rbot**3 - rad**3)**(1 / 3) return cinit, radf
Theoretical overturned concentration. This computes the resulting composition profile if fractional crystallization of a SMO is assumed and then a purely radial overturn happens. Args: step (:class:`~stagpy.stagyydata._Step`): a step of a StagyyData instance. Returns: tuple of :class:`numpy.array`: the composition and...
juraj-google-style
def _add_result(self, dict_entry, entry, dt, start_time): time_entry = {} time_entry['dt'] = dt time_entry['start_time'] = start_time dict_entry[entry] = time_entry
Adds a result to the dictionary. Args: dict_entry: main dict to add the entry to entry: slot for this entry (likely an integer) dt: the timing for the entry start_time: when the entry started, as a Unix time float
codesearchnet
def _VerifyHandValues(self, tensor_in_sizes, filter_in_sizes, stride, padding, expected): total_size_1 = 1 total_size_2 = 1 for s in tensor_in_sizes: total_size_1 *= s for s in filter_in_sizes: total_size_2 *= s x1 = np.array([f * 1.0 for f in range(1, total_size_1 + 1)], dtype=np.fl...
Verifies the output values of the depthwise convolution function. Args: tensor_in_sizes: Input tensor dimensions in [batch, input_rows, input_cols, input_depth]. filter_in_sizes: Filter tensor dimensions in [filter_rows, filter_cols, input_depth, depth_multiplier]. stride: Stride. padding: Padding type. expected: An a...
github-repos
def get_all_instances(include_fastboot=False): if include_fastboot: serial_list = (list_adb_devices() + list_fastboot_devices()) return get_instances(serial_list) return get_instances(list_adb_devices())
Create AndroidDevice instances for all attached android devices. Args: include_fastboot: Whether to include devices in bootloader mode or not. Returns: A list of AndroidDevice objects each representing an android device attached to the computer.
codesearchnet
def replace_in_list(stringlist: Iterable[str], replacedict: Dict[str, str]) -> List[str]: newlist = [] for fromstring in stringlist: newlist.append(multiple_replace(fromstring, replacedict)) return newlist
Returns a list produced by applying :func:`multiple_replace` to every string in ``stringlist``. Args: stringlist: list of source strings replacedict: dictionary mapping "original" to "replacement" strings Returns: list of final strings
juraj-google-style
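Because `replace_in_list` delegates to a `multiple_replace` helper that is not shown in this row, the sketch below supplies a simplified stand-in (the real helper likely uses a combined regex, so this is an assumption for illustration only).

```python
from typing import Dict, Iterable, List

def multiple_replace(text: str, replacedict: Dict[str, str]) -> str:
    # Simplified stand-in for the library's multiple_replace:
    # apply each substitution in turn.
    for old, new in replacedict.items():
        text = text.replace(old, new)
    return text

def replace_in_list(stringlist: Iterable[str],
                    replacedict: Dict[str, str]) -> List[str]:
    # Apply multiple_replace to every string in the input.
    return [multiple_replace(s, replacedict) for s in stringlist]

print(replace_in_list(["cat hat", "hat trick"], {"hat": "cap"}))
# ['cat cap', 'cap trick']
```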
def _assert_rank_condition(x, rank, static_condition, dynamic_condition, data, summarize): assert_type(rank, dtypes.int32) rank_static = tensor_util.constant_value(rank) if rank_static is not None: if rank_static.ndim != 0: raise ValueError('Rank must be a scalar.') x_rank_static...
Assert `x` has a rank that satisfies a given condition. Args: x: Numeric `Tensor`. rank: Scalar `Tensor`. static_condition: A python function that takes `[actual_rank, given_rank]` and returns `True` if the condition is satisfied, `False` otherwise. dynamic_condition: An `op` that takes [actual_rank, given_rank] ...
github-repos
def __init__(self, client, conv_states, user_list, sync_timestamp): self._client = client self._conv_dict = {} self._sync_timestamp = sync_timestamp self._user_list = user_list for conv_state in conv_states: self._add_conversation(conv_state...
:class:`.Event` fired when an event occurs in any conversation. Args: conv_event: :class:`ConversationEvent` that occurred.
juraj-google-style
def confirm(question): if FORCE_YES: return True while True: answer = input(question + ' <Yes|No>').lower() if answer == 'yes' or answer == 'y': confirmed = True break if answer == 'no' or answer == 'n': confirmed = False bre...
Ask the user whether they really want something to happen. Args: question(str): What can happen Returns: (boolean): Confirmed or not
juraj-google-style
def get_domain_workgroup(): with salt.utils.winapi.Com(): conn = wmi.WMI() for computer in conn.Win32_ComputerSystem(): if computer.PartOfDomain: return {'Domain': computer.Domain} else: return {'Workgroup': computer.Workgroup}
Get the domain or workgroup the computer belongs to. .. versionadded:: 2015.5.7 .. versionadded:: 2015.8.2 Returns: str: The name of the domain or workgroup CLI Example: .. code-block:: bash salt 'minion-id' system.get_domain_workgroup
codesearchnet
def bilinearly_sampled_image(texture, uv): h, w = tf.unstack(tf.shape(texture)[:2]) u, v = tf.split(uv, 2, axis=-1) v = 1.0 - v u, v = u * tf.to_float(w) - 0.5, v * tf.to_float(h) - 0.5 u0, u1 = tf.floor(u), tf.ceil(u) v0, v1 = tf.floor(v), tf.ceil(v) uf, vf = u - u0, v - v0 u0, u...
Build bilinear texture sampling graph. Coordinate transformation rules match OpenGL GL_REPEAT wrapping and GL_LINEAR interpolation modes. Args: texture: [tex_h, tex_w, channel_n] tensor. uv: [frame_h, frame_h, 2] tensor with per-pixel UV coordinates in range [0..1] Returns: [frame_h, frame_h, channel_n] tensor with ...
juraj-google-style
def RunScripts(self, script_dict): metadata_types = ['%s-script-url', '%s-script'] metadata_keys = [(key % self.script_type) for key in metadata_types] metadata_keys = [key for key in metadata_keys if script_dict.get(key)] if (not metadata_keys): self.logger.info('No %s scripts found in metadata...
Run the metadata scripts; execute a URL script first if one is provided. Args: script_dict: a dictionary mapping metadata keys to script files.
codesearchnet
def author_id_normalize_and_schema(uid, schema=None): def _get_uid_normalized_in_schema(_uid, _schema): (regex, template) = _RE_AUTHORS_UID[_schema] match = regex.match(_uid) if match: return template.format(match.group('uid')) if (idutils.is_orcid(uid) and (schema in (None,...
Detect and normalize an author UID schema. Args: uid (string): a UID string schema (string): try to resolve to schema Returns: Tuple[string, string]: a tuple (uid, schema) where: - uid: the UID normalized to comply with the id.json schema - schema: a schema of the UID or *None* if not recognised Raises: UnknownUIDSch...
codesearchnet
def load_configuration(yaml: yaml.ruamel.yaml.YAML, filename: str) -> DictLike: with open(filename, 'r') as f: config = yaml.load(f) return config
Load an analysis configuration from a file. Args: yaml: YAML object to use in loading the configuration. filename: Filename of the YAML configuration file. Returns: dict-like object containing the loaded configuration
codesearchnet
def is_within_strict_int_range(lower_bound: int, upper_bound: int) -> RuleChecker[Numeric]: def _checker(value: Numeric) -> RuleOutput: if lower_bound < value < upper_bound: return None else: return 'Value is not within the strict range.' return _checker
Checks if the provided numeric value IS strictly bounded by integers i.e. (lower_bound, upper_bound) with both bounds exclusive. Args: * lower_bound: lowest integer value (exclusive) * upper_bound: highest integer value (exclusive) Returns: * None: if lower_bound < value < upper_bound * Error message, otherwise
github-repos
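The `is_within_strict_int_range` row is complete, so a runnable sketch with the type aliases filled in (the alias definitions below are assumptions, since the originals are not shown in this row) demonstrates the rule-checker pattern: `None` means the rule passed, a string is the error message.

```python
from typing import Callable, Optional, Union

# Assumed aliases; the originals are defined elsewhere in the source repo.
Numeric = Union[int, float]
RuleOutput = Optional[str]
RuleChecker = Callable[[Numeric], RuleOutput]

def is_within_strict_int_range(lower_bound: int, upper_bound: int) -> RuleChecker:
    def _checker(value: Numeric) -> RuleOutput:
        if lower_bound < value < upper_bound:
            return None  # None signals the rule passed
        return 'Value is not within the strict range.'
    return _checker

check = is_within_strict_int_range(0, 10)
print(check(5))   # None: passes
print(check(10))  # upper bound is exclusive, so this returns the error message
```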
def get_client_kwargs(self, path): container, obj = self.split_locator(path) kwargs = dict(container=container) if obj: kwargs['obj'] = obj return kwargs
Get base keyword arguments for client for a specific path. Args: path (str): Absolute path or URL. Returns: dict: client args
juraj-google-style
def _ip_unnumbered_type(self, **kwargs): method_name = 'interface_%s_ip_ip_config_unnumbered_ip_donor_'\ 'interface_type' % kwargs['int_type'] ip_unnumbered_type = getattr(self._interface, method_name) config = ip_unnumbered_type(**kwargs) if kwargs['delete']: ...
Return the `ip unnumbered` donor type XML. You should not use this method. You probably want `Interface.ip_unnumbered`. Args: int_type (str): Type of interface. (gigabitethernet, tengigabitethernet etc). delete (bool): Remove the configuration if ``True``. ip_donor_interface_type (str): The donor interface type (loop...
juraj-google-style
def is_native_ion_gate(gate: ops.Gate) -> bool: return isinstance(gate, (ops.XXPowGate, ops.MeasurementGate, ops.XPowGate, ops.YPowGate, ops.ZPowGate))
Check if a gate is a native ion gate. Args: gate: Input gate. Returns: True if the gate is native to the ion, False otherwise.
codesearchnet
def log_estimator_evaluation_result(self, eval_results): if (not isinstance(eval_results, dict)): tf.logging.warning('eval_results should be a dictionary for logging. Got %s', type(eval_results)) return global_step = eval_results[tf.GraphKeys.GLOBAL_STEP] for key in sorted(eval_results): 
Log the evaluation result for an estimator. The evaluation result is a dictionary that contains metrics defined in model_fn. It also contains an entry for global_step, which holds the value of the global step when evaluation was performed. Args: eval_results: dict, the result of evaluate() from an estimator.
codesearchnet
def dawsn(x, name=None): with ops.name_scope(name, 'dawsn', [x]): return gen_special_math_ops.dawsn(x)
Computes Dawson's integral of `x` element-wise. Dawson's integral is defined as `exp(-x**2)` times the integral of `exp(t**2)` from `0` to `x`, with the domain of definition all real numbers. Dawson's function is odd. >>> tf.math.special.dawsn([-1., -0.5, 0.5, 1.]).numpy() array([-0.5380795, -0.4244364, 0.4244364, 0...
github-repos
def tag_file(filename, artist, title, year=None, genre=None, artwork_url=None, album=None, track_number=None, url=None): try: audio = EasyMP3(filename) audio.tags = None audio["artist"] = artist audio["title"] = title if year: audio["date"] = str(year) ...
Attempt to put ID3 tags on a file. Args: artist (str): title (str): year (int): genre (str): artwork_url (str): album (str): track_number (str): filename (str): url (str):
juraj-google-style
def compose(self, *args, **kwargs): linebreak = kwargs.pop("linebreak", "\n") if len(args) > 0: self.args = args self._update(**kwargs) fkwargs = {} modtmpl = [] for line in self: cline = copy(li...
Generate a file from the current template and given arguments. Warning: Make certain to check the formatted editor for correctness! Args: args: Positional arguments to update the template kwargs: Keyword arguments to update the template Returns: editor: An editor containing the formatted template.
juraj-google-style
def _AlignUncompressedDataOffset(self, uncompressed_data_offset): if self._zip_ext_file: self._zip_ext_file.close() self._zip_ext_file = None try: self._zip_ext_file = self._zip_file.open(self._zip_info, 'r') except zipfile.BadZipfile as exception: raise IO...
Aligns the compressed file with the uncompressed data offset. Args: uncompressed_data_offset (int): uncompressed data offset. Raises: IOError: if the ZIP file could not be opened. OSError: if the ZIP file could not be opened.
juraj-google-style
def _ParseLogline(self, parser_mediator, structure): month, day_of_month, year, hours, minutes, seconds, milliseconds = ( structure.date_time) time_elements_tuple = ( year, month, day_of_month, hours, minutes, seconds, milliseconds) try: date_time = dfdatetime_time_elements...
Parse a logline and store appropriate attributes. Args: parser_mediator (ParserMediator): mediates interactions between parsers and other components, such as storage and dfvfs. structure (pyparsing.ParseResults): structure of tokens derived from a line of a text file.
juraj-google-style
def _decode_linear_biases(linear_string, nodelist): linear_bytes = base64.b64decode(linear_string) return dict(zip(nodelist, struct.unpack(('<' + ('d' * (len(linear_bytes)
Inverse of _serialize_linear_biases. Args: linear_string (str): base 64 encoded string of little endian 8 byte floats, one for each of the nodes in nodelist. nodelist (list): list of the form [node1, node2, ...]. Returns: dict: linear biases in a dict. Examples: >>> _decode_linear_biases('AAAAAAAA8L8AAAAAAADwPwAAAAA...
codesearchnet
def connect(self, address) -> bytes: stdout = self._exec_adb_cmd('connect', address, shell=False, timeout=None, stderr=None) if PATTERN_ADB_CONNECT_SUCCESS.match(stdout.decode('utf-8')) is None: raise AdbError(cmd=f'connect {address}', stdout=stdout, stderr='', ret_code=0) return stdout
Executes the `adb connect` command with proper status checking. Args: address: string, the address of the Android instance to connect to. Returns: The stdout content. Raises: AdbError: if the connection failed.
github-repos
def getGridByCard(self, gssha_card_name): with tmp_chdir(self.project_directory): if (gssha_card_name not in (self.INPUT_MAPS + self.WMS_DATASETS)): raise ValueError('Card {0} not found in valid grid cards ...'.format(gssha_card_name)) gssha_grid_card = self.getCard(gssha_card_name) ...
Returns a GDALGrid object of the GSSHA grid. Parameters: gssha_card_name(str): Name of GSSHA project card for grid. Returns: GDALGrid
codesearchnet
def parse_from_xml(root): if root.tag != 'ubcpi': raise UpdateFromXmlError(_('Every peer instruction tool must contain an "ubcpi" element.')) display_name_el = root.find('display_name') if display_name_el is None: raise UpdateFromXmlError(_('Every peer instruction tool must conta...
Update the UBCPI XBlock's content from an XML definition. We need to be strict about the XML we accept, to avoid setting the XBlock to an invalid state (which will then be persisted). Args: root (lxml.etree.Element): The XML definition of the XBlock's content. Returns: A dictionary of all of the XBlock's content. R...
juraj-google-style
def learn(self, features, labels): labels = np.ravel(labels) self.__learn_labels(labels) if len(labels) == 0: return labels = self.labels.transform(labels) if self.feature_length > 0 and hasattr(self.clf, 'partial_fit'): self.clf = s...
Fits the classifier. If its state is empty, the classifier is fitted; if not, the classifier is partially fitted. See sklearn's SGDClassifier fit and partial_fit methods. Args: features (:obj:`list` of :obj:`list` of :obj:`float`) labels (:obj:`list` of :obj:`str`): Labels for each set of features. New features are le...
juraj-google-style
def get_type_key(self, seen: set['BaseValue'] | None=None): return self.get_default_type_key()
Build a key from the information used to perform type matching. Get a hashable object containing this value's type information. Type keys are only compared amongst themselves, so we don't care what the internals look like, only that values with different types *always* have different type keys and values with the same...
github-repos
def get_stored_metadata(self, temp_ver): with open(self._prefixed('%s.metadata' % temp_ver.name)) as f: return json.load(f)
Retrieves the metadata for the given template version from the store Args: temp_ver (TemplateVersion): template version to retrieve the metadata for Returns: dict: the metadata of the given template version
juraj-google-style
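The `get_stored_metadata` row implies a store that keeps each template version's metadata in a `<name>.metadata` JSON file next to the templates. The sketch below is a minimal stand-in: the class name, the string-name parameter (the original takes a TemplateVersion object with a `.name` attribute), and the file names are all illustrative assumptions.

```python
import json
import os
import tempfile

class MetadataStore:
    # Hypothetical minimal store: metadata lives beside the templates as
    # '<name>.metadata' JSON files.
    def __init__(self, root):
        self._root = root

    def _prefixed(self, name):
        return os.path.join(self._root, name)

    def get_stored_metadata(self, temp_ver_name):
        # The real method receives a TemplateVersion and uses temp_ver.name.
        with open(self._prefixed('%s.metadata' % temp_ver_name)) as f:
            return json.load(f)

with tempfile.TemporaryDirectory() as root:
    with open(os.path.join(root, 'fedora:v1.metadata'), 'w') as f:
        json.dump({'arch': 'x86_64'}, f)
    print(MetadataStore(root).get_stored_metadata('fedora:v1'))
    # {'arch': 'x86_64'}
```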
def loss(logits, labels): labels = tf.cast(labels, tf.int64) cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits, name='cross_entropy_per_example') cross_entropy_mean = tf.reduce_mean(cross_entropy, name='cross_entropy') tf.add_to_collection('losses', cross_entrop...
Add L2Loss to all the trainable variables. Add summary for "Loss" and "Loss/avg". Args: logits: Logits from inference(). labels: Labels from distorted_inputs or inputs(). 1-D tensor of shape [batch_size] Returns: Loss tensor of type float.
codesearchnet
def DeregisterDefinition(self, artifact_definition): artifact_definition_name = artifact_definition.name.lower() if artifact_definition_name not in self._artifact_definitions: raise KeyError( 'Artifact definition not set for name: {0:s}.'.format( artifact_definition.name)) ...
Deregisters an artifact definition. Artifact definitions are identified based on their lower case name. Args: artifact_definition (ArtifactDefinition): an artifact definition. Raises: KeyError: if an artifact definition is not set for the corresponding name.
juraj-google-style
def cost(self, logits, target): logits = tf.reshape(logits, [(self._num_steps * self._batch_size), (- 1)]) target = tf.reshape(target, [(self._num_steps * self._batch_size), (- 1)]) xent = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=target) loss = tf.reduce_sum(xent) return (loss /...
Returns cost. Args: logits: model output. target: target. Returns: Cross-entropy loss for a sequence of logits. The loss will be averaged across time steps if time_average_cost was enabled at construction time.
codesearchnet
def trace(self, predicate): self._handler = predicate if self.threading_support is None or self.threading_support: self._threading_previous = getattr(threading, '_trace_hook', None) threading.settrace(self) self._previous = sys.gettrace() sys.settrace(sel...
Starts tracing with the given callable. Args: predicate (callable that accepts a single :obj:`hunter.Event` argument): Returns: self
juraj-google-style
def get_dataclass(self, json_dataclass: type[T]) -> T: if not mime_types.is_dataclass(self.mimetype): raise ValueError('Part is not a dataclass.') try: return json_dataclass.from_json(self.text) except AttributeError as e: raise ValueError(f'{json_dataclass.__name__} is not a valid j...
Returns representation of the Part as a given dataclass. Args: json_dataclass: A dataclass that can be converted to/from JSON. Returns: The dataclass representation of the Part.
github-repos
def ProcessFile(filename, vlevel, extra_check_functions=None): _SetVerboseLevel(vlevel) _BackupFilters() if (not ProcessConfigOverrides(filename)): _RestoreFilters() return lf_lines = [] crlf_lines = [] try: if (filename == '-'): lines = codecs.StreamReaderWri...
Does google-lint on a single file. Args: filename: The name of the file to parse. vlevel: The level of errors to report. Every error of confidence >= verbose_level will be reported. 0 is a good default. extra_check_functions: An array of additional check functions that will be run on each source line. Each functio...
codesearchnet
def visit_indexer(self, indexer: _evaluation.IndexerNode) -> _sql_data_types.Select: collection_result = self.visit(indexer.collection) index_result = self.visit(indexer.index) indexed_collection = f'SELECT ROW_NUMBER() OVER() AS row_,\n{collection_result.sql_alias}\nFROM {collection_result.to_subquery()}' ...
Translates a FHIRPath indexer expression to Standard SQL. Args: indexer: The `_Indexer` Expression node. Returns: A compiled Standard SQL expression.
github-repos
def solve(ast, builtins_pytd, protocols_pytd): builtins_pytd = transforms.RemoveMutableParameters(builtins_pytd) builtins_pytd = visitors.LookupClasses(builtins_pytd) protocols_pytd = visitors.LookupClasses(protocols_pytd) ast = visitors.LookupClasses(ast, builtins_pytd) return (TypeSolver(ast, buil...
Solve the unknowns in a pytd AST using the standard Python builtins. Args: ast: A pytd.TypeDeclUnit, containing classes named ~unknownXX. builtins_pytd: A pytd for builtins. protocols_pytd: A pytd for protocols. Returns: A tuple of (1) a dictionary (str->str) mapping unknown class names to known class names and (2) a...
github-repos
def generate_sentence(self, chain): def weighted_choice(choices): total_weight = sum((weight for (val, weight) in choices)) rand = random.uniform(0, total_weight) upto = 0 for (val, weight) in choices: if ((upto + weight) >= rand): return val ...
!DEMO! Demo function that shows how to generate a simple sentence starting with an uppercase letter, without a length limit. Args: chain: MarkovChain that will be used to generate the sentence
codesearchnet
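The inner `weighted_choice` helper in the `generate_sentence` row is complete on its own and illustrates roulette-wheel sampling: draw a uniform number in `[0, total_weight)` and walk the cumulative weights until it is covered. A self-contained sketch:

```python
import random

def weighted_choice(choices):
    # choices: list of (value, weight) pairs; return a value with
    # probability proportional to its weight (roulette-wheel selection).
    total_weight = sum(weight for val, weight in choices)
    rand = random.uniform(0, total_weight)
    upto = 0
    for val, weight in choices:
        if upto + weight >= rand:
            return val
        upto += weight

choices = [('the', 5), ('a', 3), ('every', 1)]
picks = [weighted_choice(choices) for _ in range(100)]
print(set(picks) <= {'the', 'a', 'every'})  # True: only listed values appear
```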
def process_event(self, event_name: str, data: dict): if (isinstance(self.opt.get("learning_rate", None), float) and isinstance(self.opt.get("learning_rate_decay", None), float)): pass else: if event_name == 'after_train_log': if (self.get...
Process event after epoch. Args: event_name: whether the event is sent after an epoch or a batch. Set of values: ``"after_epoch", "after_batch"`` data: event data (dictionary) Returns: None
juraj-google-style
def from_filenames(poscar_filenames, transformations=None, extend_collection=False): tstructs = [] for filename in poscar_filenames: with open(filename, 'r') as f: tstructs.append(TransformedStructure.from_poscar_string(f.read(), [])) return StandardTransmuter(tstructs, transformations, ...
Convenience constructor that generates a POSCAR transmuter from a list of POSCAR filenames. Args: poscar_filenames: List of POSCAR filenames transformations: New transformations to be applied to all structures. extend_collection: Same meaning as in __init__.
codesearchnet
def export(self, top=True): out = [] if top: out.append(self._internal_name) out.append(self._to_str(self.number_of_records_per_hour)) out.append(self._to_str(self.data_period_name_or_description)) out.append(self._to_str(self.data_period_start_day_of_week)) ...
Exports object to its string representation. Args: top (bool): if True appends `internal_name` before values. All non-list objects should be exported with top=True; all list objects that are embedded as fields in list objects should be exported with `top`=False Returns: str: The objects string representatio...
juraj-google-style
def _consume_line(line_info, state): _update_section_state(line_info, state) if state.section.title is None: if state.summary.permitted: if line_info.remaining: state.summary.lines.append(line_info.remaining) elif state.summary.lines: state.summary...
Consumes one line of text, updating the state accordingly. When _consume_line is called, part of the line may already have been processed for header information. Args: line_info: Information about the current and next line of the docstring. state: The state of the docstring parser.
github-repos
def output_waiting(self): buf = array.array('I', [0]) try: fcntl.ioctl(self._fd, termios.TIOCOUTQ, buf, True) except OSError as e: raise SerialError(e.errno, ('Querying output waiting: ' + e.strerror)) return buf[0]
Query the number of bytes waiting to be written to the serial port. Returns: int: number of bytes waiting to be written. Raises: SerialError: if an I/O or OS error occurs.
codesearchnet
async def get_headline(self, name): resp = await self.send_command(OPERATIONS.CMD_QUERY_HEADLINE, {'name': name}, MESSAGES.QueryHeadlineResponse, timeout=5.0) if resp is not None: resp = states.ServiceMessage.FromDictionary(resp) ret...
Get stored messages for a service. Args: name (string): The name of the service to get messages from. Returns: ServiceMessage: the headline or None if no headline has been set
juraj-google-style
def update_metadata(self, resource, keys_vals): self.metadata_service.set_auth(self._token_metadata) self.metadata_service.update(resource, keys_vals)
Updates key-value pairs with the given resource. Will attempt to update all key-value pairs even if some fail. Keys must already exist. Args: resource (intern.resource.boss.BossResource) keys_vals (dictionary): Collection of key-value pairs to update on the given resource. Raises: HTTPErrorList on failure.
juraj-google-style
def __init__(self, storage_writer, knowledge_base, data_location=None): super(AnalysisMediator, self).__init__() self._abort = False self._data_location = data_location self._event_filter_expression = None self._knowledge_base = knowledge_base self._mount_path = None self._storage_write...
Initializes an analysis plugin mediator. Args: storage_writer (StorageWriter): storage writer. knowledge_base (KnowledgeBase): contains information from the source data needed for analysis. data_location (Optional[str]): location of data files used during analysis.
juraj-google-style
def make_tensor_model_fn(model_fn: str) -> TensorInferenceFn: def attr_fn(batch: Sequence[torch.Tensor], model: torch.nn.Module, device: str, inference_args: Optional[dict[str, Any]]=None, model_id: Optional[str]=None) -> Iterable[PredictionResult]: with torch.no_grad(): batched_tensors = torch...
Produces a TensorInferenceFn that uses a method of the model other than the forward() method. Args: model_fn: A string name of the method to be used. This is accessed through getattr(model, model_fn)
github-repos
def install_exception_handler(handler): if not isinstance(handler, ExceptionHandler): raise TypeError('handler of type %s does not inherit from ExceptionHandler' % type(handler)) EXCEPTION_HANDLERS.append(handler)
Installs an exception handler. Args: handler: ExceptionHandler, the exception handler to install. Raises: TypeError: Raised when the handler was not of the correct type. All installed exception handlers will be called if main() exits via an abnormal exception, i.e. not one of SystemExit, KeyboardInterrupt, FlagsErro...
juraj-google-style
def _get_context_id(self, context): if context in self._context_to_id: return self._context_to_id[context] graph_is_new = False with self._context_lock: if context not in self._context_to_id: graph_is_new = True context_id = _get_id() self._context_to_id[c...
Get a unique ID for an op-construction context (e.g., a graph). If the graph has been encountered before, reuse the same unique ID. When encountering a new context (graph), this method writes a DebugEvent proto with the debugged_graph field to the proper DebugEvent file. Args: context: A context to get the unique ID...
github-repos
def _process_using_meta_feature_generator(self, X, meta_feature_generator): all_learner_meta_features = [] for idx, base_learner in enumerate(self.base_learners): single_learner_meta_features = getattr(base_learner, self.meta_featu...
Process using secondary learner meta-feature generator Since secondary learner meta-feature generator can be anything e.g. predict, predict_proba, this internal method gives the ability to use any string. Just make sure secondary learner has the method. Args: X (array-like): Features array meta_feature_generator (st...
juraj-google-style
def retry_target(target, predicate, sleep_generator, deadline, on_error=None): if (deadline is not None): deadline_datetime = (datetime_helpers.utcnow() + datetime.timedelta(seconds=deadline)) else: deadline_datetime = None last_exc = None for sleep in sleep_generator: try: ...
Call a function and retry if it fails. This is the lowest-level retry helper. Generally, you'll use the higher-level retry helper :class:`Retry`. Args: target(Callable): The function to call and retry. This must be a nullary function - apply arguments with `functools.partial`. predicate (Callable[Exception]): A calla...
codesearchnet
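The `retry_target` row and its docstring describe the lowest-level retry loop: try the target on each sleep yielded by a backoff generator, re-raise non-retryable errors, and give up at the deadline. The sketch below is an assumed reconstruction of that loop (the real helper's deadline handling and error types differ in detail), with a trivially flaky target for demonstration.

```python
import time

def retry_target(target, predicate, sleep_generator, deadline, on_error=None):
    # Minimal sketch: one attempt per sleep value from the generator.
    deadline_time = None if deadline is None else time.monotonic() + deadline
    last_exc = None
    for sleep in sleep_generator:
        try:
            return target()
        except Exception as exc:
            if not predicate(exc):
                raise  # non-retryable: propagate immediately
            last_exc = exc
            if on_error is not None:
                on_error(exc)
        if deadline_time is not None and time.monotonic() + sleep > deadline_time:
            raise RuntimeError('Deadline exceeded') from last_exc
        time.sleep(sleep)
    raise ValueError('Sleep generator stopped yielding sleeps') from last_exc

# A target that fails twice, then succeeds.
attempts = {'n': 0}
def flaky():
    attempts['n'] += 1
    if attempts['n'] < 3:
        raise ValueError('transient')
    return 'ok'

print(retry_target(flaky, lambda e: isinstance(e, ValueError),
                   iter([0.0, 0.0, 0.0]), deadline=None))  # 'ok'
```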
def __init__(self, cluster_resolver=None, communication_options=None, *, mesh=None): self._validate_init_args(mesh, cluster_resolver) if not mesh: if not cluster_resolver: cluster_resolver = tfconfig_cluster_resolver.TFConfigClusterResolver() dtensor_env_var = _parse_dtensor_env_var_...
Creates the strategy. Args: cluster_resolver: optional `tf.distribute.cluster_resolver.ClusterResolver`. In case neither `mesh` nor `cluster_resolver` are provided, `tf.distribute.cluster_resolver.TFConfigClusterResolver` is used. communication_options: currently ignored. mesh: optional DTensor global mesh for the comp...
github-repos
def _ragged_getitem(rt_input, key_list): if not key_list: return rt_input row_key = key_list[0] inner_keys = key_list[1:] if row_key is Ellipsis: expanded_key_list = _expand_ellipsis(key_list, rt_input.shape.ndims) return _ragged_getitem(rt_input, expanded_key_list) if row_ke...
Helper for indexing and slicing ragged tensors with __getitem__(). Extracts the specified piece of the `rt_input`. See `RaggedTensor.__getitem__` for examples and restrictions. Args: rt_input: The `RaggedTensor` from which a piece should be returned. key_list: The list of keys specifying which piece to return. Each ...
github-repos
def parsed_forensic_reports_to_csv(reports): fields = ["feedback_type", "user_agent", "version", "original_envelope_id", "original_mail_from", "original_rcpt_to", "arrival_date", "arrival_date_utc", "subject", "message_id", "authentication_results", "dkim_domain", "sou...
Converts one or more parsed forensic reports to flat CSV format, including headers Args: reports: A parsed forensic report or list of parsed forensic reports Returns: str: Parsed forensic report data in flat CSV format, including headers
juraj-google-style
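The `parsed_forensic_reports_to_csv` row lists the start of a long field set and accepts either one report or a list. The sketch below shows the flattening pattern with `csv.DictWriter`; the shortened `FIELDS` list is an assumption for brevity (the real function defines many more columns).

```python
import csv
import io

# Hypothetical minimal field set; the real function lists many more columns.
FIELDS = ['feedback_type', 'user_agent', 'version', 'subject', 'message_id']

def parsed_forensic_reports_to_csv(reports):
    # Accept a single report dict or a list of them, as the docstring allows.
    if isinstance(reports, dict):
        reports = [reports]
    out = io.StringIO(newline='')
    writer = csv.DictWriter(out, fieldnames=FIELDS, extrasaction='ignore')
    writer.writeheader()
    for report in reports:
        writer.writerow({k: report.get(k, '') for k in FIELDS})
    return out.getvalue()

csv_text = parsed_forensic_reports_to_csv({'feedback_type': 'abuse',
                                           'subject': 'test'})
print(csv_text.splitlines()[0])  # header row with all field names
```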
def GetRegistryFileMapping(self, registry_file): if not registry_file: return '' candidate_mappings = [] for mapping in self._REGISTRY_FILE_MAPPINGS_NT: if not mapping.unique_key_paths: continue match = True for key_path in mapping.unique_key_paths: regi...
Determines the Registry file mapping based on the content of the file. Args: registry_file (WinRegistryFile): Windows Registry file. Returns: str: key path prefix or an empty string. Raises: RuntimeError: if there are multiple matching mappings and the correct mapping cannot be resolved.
juraj-google-style
def move(self, to_project_id, **kwargs): path = '%s/%s/move' % (self.manager.path, self.get_id()) data = {'to_project_id': to_project_id} server_data = self.manager.gitlab.http_post(path, post_data=data, **kwargs) self._update_...
Move the issue to another project. Args: to_project_id(int): ID of the target project **kwargs: Extra options to send to the server (e.g. sudo) Raises: GitlabAuthenticationError: If authentication is not correct GitlabUpdateError: If the issue could not be moved
juraj-google-style
def hr_dp004(self, value=None): if (value is not None): try: value = float(value) except ValueError: raise ValueError('value {} need to be of type float for field `hr_dp004`'.format(value)) self._hr_dp004 = value
Corresponds to IDD Field `hr_dp004` humidity ratio corresponding to Dew-point temperature corresponding to 0.4% annual cumulative frequency of occurrence Args: value (float): value for IDD Field `hr_dp004` if `value` is None it will not be checked against the specification and is assumed to be a missing value Raises:...
codesearchnet
def _compute_theoretical_jacobian(x, x_shape, x_data, dy, dy_shape, dx, extra_feed_dict): if x.dtype.is_complex: x_shape = tuple(x_shape) + (2,) dy_factor = 2 if dy.dtype.is_complex else 1 x_size = _product(x_shape) x_val_size = _product(x_shape[1:]) dy_size = _product(dy_shape) * dy_factor ...
Computes the theoretical Jacobian for dy/dx. Computes the theoretical Jacobian using the ops generated by compute_gradient(). Args: x: the tensor "x". x_shape: the dimensions of x as a tuple or an array of ints. x_data: a numpy array as the input data for x dy: the tensor "dy". dy_shape: the dimensions of dy as a tu...
github-repos
def lowpass_filter(data: FLOATS_TYPE, sampling_freq_hz: float, cutoff_freq_hz: float, numtaps: int) -> FLOATS_TYPE: coeffs = firwin(numtaps=numtaps, cutoff=normalized_frequency(cutoff_freq_hz, sampling_freq_hz), pass_zero=True) filtered_data = lfilter(b=coeffs, a=1.0, x=data) return filtered_data
Apply a low-pass filter to the data. Args: data: time series of the data sampling_freq_hz: sampling frequency :math:`f_s`, in Hz (or other consistent units) cutoff_freq_hz: filter cutoff frequency in Hz (or other consistent units) numtaps: number of filter taps Returns: filtered data Note: number of filter taps = fi...
codesearchnet
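The FIR filtering above depends on SciPy's `firwin`/`lfilter`; the same idea can be illustrated without SciPy by a simple moving average, which is itself a (crude) low-pass FIR filter. This is a sketch, not the firwin-designed filter from the snippet:

```python
def moving_average_lowpass(data, numtaps):
    # Uniform taps 1/numtaps give a crude low-pass response; firwin in the
    # snippet above designs better taps for a specific cutoff frequency.
    taps = [1.0 / numtaps] * numtaps
    out = []
    for i in range(len(data)):
        # Causal convolution: y[i] = sum(taps[k] * x[i - k]), zero-padded at the start.
        acc = 0.0
        for k, t in enumerate(taps):
            if i - k >= 0:
                acc += t * data[i - k]
        out.append(acc)
    return out
```

Note the output has the same length as the input, matching `lfilter`'s behavior.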
def __add__(self, other): if not all(np.equal(self.x, other.x)): raise ValueError("X axis values are not compatible!") return self.__class__(self.x, self.y + other.y, *self._args, **self._kwargs)
Add two Spectrum objects together. Checks that x scales are the same. Otherwise, a ValueError is thrown. Args: other: Another Spectrum object Returns: Sum of the two Spectrum objects
juraj-google-style
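The x-axis compatibility check in `__add__` can be sketched with a minimal stand-in class (`MiniSpectrum` is hypothetical, not part of the original library):

```python
import numpy as np

class MiniSpectrum:
    def __init__(self, x, y):
        self.x = np.asarray(x)
        self.y = np.asarray(y)

    def __add__(self, other):
        # Refuse to add spectra sampled on different x grids.
        if not np.array_equal(self.x, other.x):
            raise ValueError("X axis values are not compatible!")
        return MiniSpectrum(self.x, self.y + other.y)
```

Adding two spectra on the same grid sums the y values elementwise; mismatched grids raise immediately rather than silently misaligning data.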
def set_lock_state(self, code, device_label, state): response = None try: response = requests.put(urls.set_lockstate(self._giid, device_label, state), headers={'Accept': 'application/json, text/javascript, */*; q=0.01', 'Content-Type': 'application/json', 'Cookie': 'vid={}'.format(self._vid)}, data=json...
Lock or unlock Args: code (str): Lock code device_label (str): device label of lock state (str): 'lock' or 'unlock'
codesearchnet
def _ConcatGradHelper(op: ops.Operation, grad, start_value_index, end_value_index, dim_index): def _CreateDenseMaskAndBegin(sizes, concat_dim): shape_of_shape = array_ops.shape(sizes[0]) mask = array_ops.concat([array_ops.zeros(array_ops.expand_dims(concat_dim, 0), dtype=dtypes.int32), [1]...
Gradient for concat op. Args: op: An operation. grad: `Tensor` or `IndexedSlices` representing the gradients with respect to each output of the op. start_value_index: An integer index of the first value in the op.inputs. end_value_index: An integer index of the last value in the op.inputs. dim_index: An integer index ...
github-repos
def delete_keyvault(access_token, subscription_id, rgname, vault_name): endpoint = ''.join([get_rm_endpoint(), '/subscriptions/', subscription_id, '/resourcegroups/', rgname, '/providers/Microsoft.KeyVault/vaults/', vault_name, '?api-version=', KEYVAULT_API]) return do_delete(endpoint, access_token)
Deletes a key vault in the named resource group. Args: access_token (str): A valid Azure authentication token. subscription_id (str): Azure subscription id. rgname (str): Azure resource group name. vault_name (str): Name of the key vault to delete. Returns: HTTP response. 200 OK.
codesearchnet
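The endpoint string assembled above follows the ARM URL pattern; a minimal sketch of that assembly (the api-version value below is only illustrative):

```python
def build_keyvault_url(base, subscription_id, rgname, vault_name, api_version):
    # Mirrors the ''.join([...]) construction in the snippet above.
    return ''.join([base, '/subscriptions/', subscription_id,
                    '/resourcegroups/', rgname,
                    '/providers/Microsoft.KeyVault/vaults/', vault_name,
                    '?api-version=', api_version])
```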
def get_user_stats(self, users, lang=None, concepts=None, since=None, recalculate=True): only_one_user = False if (not isinstance(users, list)): users = [users] only_one_user = True if recalculate: if (lang is None): raise ValueError('Recalculation without lang is not sup...
Finds all UserStats of given concepts and users. Recompute UserStats if necessary Args: users (Optional[list of users] or [user]): list of primary keys of user or users Defaults to None meaning all users. lang (string): use only concepts witch the lang. Defaults to None meaning all languages. concepts (Optional[list o...
codesearchnet
def info(self, channel_id): resource = 'v1/channel.info?channel_id={}'.format(channel_id) resp = self._rtm_client.get(resource) if resp.is_fail(): raise RTMServiceError('Failed to get channel information', resp) return resp.data['result']
Gets channel information by channel id. Args: channel_id(int): the id of the channel Returns: Channel Throws: RTMServiceError when the request fails
codesearchnet
def get_sample_window(self, type_tag, size): md5_list = self.data_store.get_sample_window(type_tag, size) return self.store_sample_set(md5_list)
Get a sample from the DataStore. Args: type_tag: the type of samples ('pcap','exe','pdf') size: the size of the window in MegaBytes (10 = 10MB) Returns: A sample_set handle which represents the newest samples within the size window
juraj-google-style
def diff_parameters(old_params, new_params): [changes, diff] = diff_dictionaries(old_params, new_params) if (changes == 0): return [] return diff
Compares the old vs. new parameters and returns a "diff". If there are no changes, we return an empty list. Args: old_params(dict): old parameters new_params(dict): new parameters Returns: list: A list of differences
codesearchnet
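The `diff_dictionaries` helper is not shown in the snippet, so this is an assumed shape returning a change count and a list of (key, old, new) tuples; the wrapper then matches the logic above:

```python
def diff_dictionaries(old, new):
    # Compare every key present in either dict; a missing key reads as None.
    diff = []
    for key in sorted(set(old) | set(new)):
        old_val = old.get(key)
        new_val = new.get(key)
        if old_val != new_val:
            diff.append((key, old_val, new_val))
    return len(diff), diff

def diff_parameters(old_params, new_params):
    changes, diff = diff_dictionaries(old_params, new_params)
    if changes == 0:
        return []
    return diff
```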
async def update_example_status(example: Example, client: GRPCClient): datasets: List[api_pb2.Dataset] = [] for emulator in example.tag.emulators: dataset: Dataset = example.tag.datasets[emulator.topic.source_dataset] datasets.append(api_pb2.Dataset(type=api_pb2.EmulatorType.Value(f'EMULATOR_TYP...
Receive status for examples and update example.status and pipeline_id Use client to send requests to the backend: 1. Start code processing. 2. Ping the backend while status is STATUS_VALIDATING/ STATUS_PREPARING/STATUS_COMPILING/STATUS_EXECUTING Update example.status with resulting status. Args: example: beam example...
github-repos
def _CallAndUpdateTrace(component, args, component_trace, treatment='class', target=None): if not target: target = component filename, lineno = inspectutils.GetFileAndLine(component) metadata = decorators.GetMetadata(component) fn = component.__call__ if treatment == 'callable' else component ...
Call the component by consuming args from args, and update the FireTrace. The component could be a class, a routine, or a callable object. This function calls the component and adds the appropriate action to component_trace. Args: component: The component to call args: Args for calling the component component_trace: ...
github-repos
def ParseConversationRow(self, parser_mediator, query, row, **unused_kwargs): query_hash = hash(query) event_data = TangoAndroidConversationEventData() event_data.conversation_identifier = self._GetRowValue(query_hash, row, 'conv_id') date_time = dfdatetime_semantic_time.NotSet() event = time_events...
Parses a conversation row from the database. Args: parser_mediator (ParserMediator): mediates interactions between parsers and other components, such as storage and dfvfs. query (str): query that created the row. row (sqlite3.Row): row resulting from query.
codesearchnet
def _match_protocol_attribute(self, left, other_type, attribute, subst, view): left_attribute, left_is_bound = self._get_attribute_for_protocol_matching(left.cls, attribute, instance=left, unbind=True) if left_attribute is None: if attribute == '__iter__': left_attribute = self.ctx.convert.c...
Checks whether left and other_type are compatible in the given attribute. Args: left: An instance of a type. other_type: A protocol. attribute: An attribute name. subst: The current type parameter assignment. view: The current mapping of Variable to Value. Returns: A new type parameter assignment if the matching succ...
github-repos
@tf.custom_gradient
def _prevent_2nd_derivative(x):
    def grad(dy):
        return array_ops.prevent_gradient(
            dy, message="Second derivative is not implemented.")
    return tf.identity(x), grad
Disables computation of the second derivatives for a tensor. NB: you need to apply a non-identity function to the output tensor for the exception to be raised. Arguments: x: A tensor. Returns: A tensor with the same value and the same derivative as x, but that raises LookupError when trying to compute the second der...
juraj-google-style
def update_in_hdx(self, update_resources=True, update_resources_by_name=True, remove_additional_resources=False, create_default_views=True, hxl_update=True): loaded = False if ('id' in self.data): self._check_existing_object('dataset', 'id') if self._dataset_load_from_hdx(self.data['id']): ...
Check if dataset exists in HDX and if so, update it Args: update_resources (bool): Whether to update resources. Defaults to True. update_resources_by_name (bool): Compare resource names rather than position in list. Defaults to True. remove_additional_resources (bool): Remove additional resources found in dataset. Def...
codesearchnet
def extract(self, text: str) -> List[Extraction]: doc = self._parser(text) extractions = list() for sent in doc.sents: this_extraction = Extraction(value=sent.text, extractor_name=self.name, start_token=sent[0], end_token=sent[(- 1)], start_char=sent.text[0], end_char=sent.text[(- 1)]) extra...
Splits text by sentences. Args: text (str): Input text to be extracted. Returns: List[Extraction]: the list of extractions or an empty list if there are no matches.
codesearchnet
def number_to_day(self, day_number): return [calendar.day_name[6], calendar.day_name[0], calendar.day_name[1], calendar.day_name[2], calendar.day_name[3], calendar.day_name[4], calendar.day_name[5]][day_number]
Returns localized day name by its CRON number Args: day_number: Number of a day Returns: Day corresponding to day_number Raises: IndexError: When day_number is not found
codesearchnet
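The lookup above rotates Python's Monday-first `calendar.day_name` so that 0 maps to Sunday, matching CRON numbering; a compact equivalent of the same table:

```python
import calendar

def number_to_day(day_number):
    # CRON: 0 = Sunday ... 6 = Saturday; calendar.day_name: 0 = Monday ... 6 = Sunday.
    order = [6, 0, 1, 2, 3, 4, 5]
    return calendar.day_name[order[day_number]]  # IndexError for day_number > 6
```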
def length(text, maxval=None, encoding=None): maxval = maxval or 4351 try: assert not isinstance(text, six.binary_type) except AssertionError: raise TypeError('helpers.length requires a unicode argument') return sum(2 if ord(x) > maxval else 1 for x in unicodedata.normalize('NFC', t...
Count the length of a str the way Twitter does, double-counting "wide" characters (e.g. ideographs, emoji) Args: text (str): Text to count. Must be a unicode string in Python 2 maxval (int): The maximum encoding that will be counted as 1 character. Defaults to 4351 (ჿ GEORGIAN LETTER LABIAL SIGN, U+10FF) Returns: int
juraj-google-style
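The double-counting rule can be exercised directly: after NFC normalization, any code point above `maxval` counts as 2. A sketch of the same logic without the `six` dependency:

```python
import unicodedata

def tweet_length(text, maxval=4351):
    # NFC-normalize first so combining sequences collapse to single code points.
    return sum(2 if ord(ch) > maxval else 1
               for ch in unicodedata.normalize('NFC', text))
```

For example, a CJK ideograph counts as 2, while `e` followed by a combining acute accent composes to `é` (U+00E9) and counts as 1.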
def titles(self, unique=False): if unique: return tools.uniqued(title for _, title in self.iterfiles()) return [title for _, title in self.iterfiles()]
Return a list of all available spreadsheet titles. Args: unique (bool): drop duplicates Returns: list: list of title/name strings
juraj-google-style
def _closeElements(childs, HTMLElement): out = [] for e in childs: if not e.isTag(): out.append(e) continue if not e.isNonPairTag() and not e.isEndTag() and not e.isComment() \ and e.endtag is None: e.childs = _closeElements(e.childs, HT...
Create `endtags` for elements which look like openers, but don't have a proper :attr:`HTMLElement.endtag`. Args: childs (list): List of child elements (:class:`HTMLElement` objects) - typically from the :attr:`HTMLElement.childs` property. Returns: list: List of closed elements.
juraj-google-style
def transform_regex_replace(source, pattern, rewrite, name=None): with ops.name_scope(name, "TransformRegexReplace", [source]): source = convert_to_tensor_or_sparse_tensor(source, dtype=tf.string) if isinstance(source, tf.SparseTensor): result = tf.SparseTensor( ind...
Replace all substrings in `source` matching a pattern with the corresponding rewrite string. Args: source: `Tensor` or `SparseTensor` of any shape, source strings for replacing. pattern: List of RE2 patterns to search for in source. rewrite: List of strings to replace with. Should have the same length as `pattern`. name: A name for...
juraj-google-style
def channels_unarchive(self, *, channel: str, **kwargs) -> SlackResponse: self._validate_xoxp_token() kwargs.update({"channel": channel}) return self.api_call("channels.unarchive", json=kwargs)
Unarchives a channel. Args: channel (str): The channel id. e.g. 'C1234567890'
juraj-google-style
def flush(self, hard=False): if not self.servers: return if hard: self.client.flush_all() self.reset_stats() else: from uuid import uuid4 tag = uuid4().hex if self.debug: tag = "flushed" + tag ...
Drop existing entries from the cache. Args: hard (bool): If True, all current entries are flushed from the server(s), which affects all users. If False, only the local process is affected.
juraj-google-style
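The soft flush works by rotating a version tag that is mixed into every key, so this process's old entries become unreachable without affecting other users of the shared server. A dict-backed sketch of that idea (`LocalCache` is hypothetical, not the original class):

```python
from uuid import uuid4

class LocalCache:
    def __init__(self):
        self._store = {}
        self._tag = uuid4().hex  # namespace for this process's keys

    def _key(self, key):
        return "%s:%s" % (self._tag, key)

    def set(self, key, value):
        self._store[self._key(key)] = value

    def get(self, key):
        return self._store.get(self._key(key))

    def flush(self):
        # Soft flush: rotate the tag; existing entries are simply orphaned.
        self._tag = uuid4().hex
```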
def _CanPlaceOnSingleLine(line): token_types = [x.type for x in line.tokens] if style.Get('SPLIT_ARGUMENTS_WHEN_COMMA_TERMINATED') and any((token_types[token_index - 1] == token.COMMA for token_index, token_type in enumerate(token_types[1:], start=1) if token_type == token.RPAR)): return False if st...
Determine if the logical line can go on a single line. Arguments: line: (logical_line.LogicalLine) The line currently being formatted. Returns: True if the line can or should be added to a single line. False otherwise.
github-repos
def nearest_neighbors(self, word, top_k=10): point = self[word] diff = self.vectors - point distances = np.linalg.norm(diff, axis=1) top_ids = distances.argsort()[1:top_k+1] return [self.vocabulary.id_word[i] for i in top_ids]
Return the nearest k words to the given `word`. Args: word (string): single word. top_k (integer): decides how many neighbors to report. Returns: A list of words sorted by the distances. The closest is the first. Note: L2 metric is used to calculate distances.
juraj-google-style
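The neighbor lookup above sorts L2 distances and skips index 0 of the sorted order, which is the query word itself (distance zero). A self-contained numpy sketch working on row indices instead of words:

```python
import numpy as np

def nearest_ids(vectors, query_id, top_k=2):
    # Distances from the query row to every row, including itself.
    diff = vectors - vectors[query_id]
    distances = np.linalg.norm(diff, axis=1)
    # argsort()[0] is the query itself (distance 0), so skip it.
    return distances.argsort()[1:top_k + 1].tolist()
```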
def _fulfillment_from_details(data, _depth=0): if (_depth == 100): raise ThresholdTooDeep() if (data['type'] == 'ed25519-sha-256'): public_key = base58.b58decode(data['public_key']) return Ed25519Sha256(public_key=public_key) if (data['type'] == 'threshold-sha-256'): threshol...
Load a fulfillment for a signing spec dictionary Args: data: tx.output[].condition.details dictionary
codesearchnet
def splay_health(health_target): HealthCheck = collections.namedtuple('HealthCheck', ['path', 'port', 'proto', 'target']) proto, health_port_path = health_target.split(':') port, *health_path = health_port_path.split('/') if proto == 'TCP': path = '' elif not health_path: path...
Set Health Check path, port, and protocol. Args: health_target (str): The health target. ie ``HTTP:80`` Returns: HealthCheck: A **collections.namedtuple** class with *path*, *port*, *proto*, and *target* attributes.
juraj-google-style
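The parsing in `splay_health` splits a `PROTO:PORT/path` target; a runnable sketch of the same splitting rules. The snippet is truncated, so the default path for HTTP targets without one is an assumption here:

```python
import collections

HealthCheck = collections.namedtuple('HealthCheck', ['path', 'port', 'proto', 'target'])

def splay_health(health_target):
    proto, health_port_path = health_target.split(':')
    port, *health_path = health_port_path.split('/')
    if proto == 'TCP':
        path = ''  # TCP checks have no path component
    elif not health_path:
        path = '/healthcheck'  # assumed default; the original default is not shown
    else:
        path = '/' + '/'.join(health_path)
    target = '{0}:{1}{2}'.format(proto, port, path)
    return HealthCheck(path, port, proto, target)
```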
def ParseSMS(self, parser_mediator, query, row, **unused_kwargs): query_hash = hash(query) phone_number = self._GetRowValue(query_hash, row, 'dstnum_sms') if phone_number: phone_number = phone_number.replace(' ', '') event_data = SkypeSMSEventData() event_data.number = phone_number event...
Parses an SMS. Args: parser_mediator (ParserMediator): mediates interactions between parsers and other components, such as storage and dfvfs. query (str): query that created the row. row (sqlite3.Row): row resulting from query.
codesearchnet
def add_child_url(self, url: str, inline: bool=False, link_type: Optional[LinkType]=None, post_data: Optional[str]=None, level: Optional[int]=None, replace: bool=False): url_properties = URLProperties() url_properties.level = ((self.url_record.level + 1) if (level is None) else level) url_properties.inline_...
Add links scraped from the document with automatic values. Args: url: A full URL. (It can't be a relative path.) inline: Whether the URL is an embedded object. link_type: Expected link type. post_data: URL encoded form data. The request will be made using POST. (Don't use this to upload files.) level: The child depth ...
codesearchnet
def from_files(cls, secrets=None, storage=None, scopes=None, no_webserver=False): creds = oauth2.get_credentials(scopes, secrets, storage, no_webserver) return cls(creds)
Return a spreadsheet collection, making OAuth 2.0 credentials. Args: secrets (str): location of secrets file (default: ``%r``) storage (str): location of storage file (default: ``%r``) scopes: scope URL(s) or ``'read'`` or ``'write'`` (default: ``%r``) no_webserver (bool): URL/code prompt instead of webbrowser auth Re...
juraj-google-style
def first_return_times(dts, c=None, d=0.0): if (c is None): c = dts.mean() vmrt = distob.vectorize(analyses1.first_return_times) all_intervals = vmrt(dts, c, d) if hasattr(type(all_intervals), '__array_interface__'): return np.ravel(all_intervals) else: return np.hstack([dist...
For an ensemble of time series, return the set of all time intervals between successive returns to value c for all instances in the ensemble. If c is not given, the default is the mean across all times and across all time series in the ensemble. Args: dts (DistTimeseries) c (float): Optional target value (default is ...
codesearchnet
def load_terms(fo: IO, metadata: dict, forceupdate: bool): version = metadata["metadata"]["version"] with timy.Timer("Load Terms") as timer: es = bel.db.elasticsearch.get_client() es_version = version.replace("T", "").replace("-", "").replace(":", "") index_prefix = f"terms_...
Load terms into Elasticsearch and ArangoDB Forceupdate will create a new index in Elasticsearch regardless of whether an index with the resource version already exists. Args: fo: file obj - terminology file metadata: dict containing the metadata for terminology forceupdate: force full update - e.g. don't leave Elasti...
juraj-google-style
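The Elasticsearch index name is derived by stripping the separators from an ISO-8601 version string; a sketch of that normalization (the `terms_` prefix shape is assumed from the truncated f-string in the snippet):

```python
def es_index_name(version, prefix="terms"):
    # "2020-01-01T12:00:00" -> "20200101120000"
    es_version = version.replace("T", "").replace("-", "").replace(":", "")
    return "%s_%s" % (prefix, es_version)
```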
def plot(data, output_dir_path='.', width=10, height=8): if not isinstance(data, pd.DataFrame): data = pd.DataFrame(data) plot_accuracy(data, output_dir_path=output_dir_path, width=width, height=height) plot_loss(data, output_dir_path, width=width, height=height)
Create two plots: 1) loss 2) accuracy. Args: data: Pandas dataframe in the expected format (with accuracy and loss columns).
juraj-google-style
def from_str(format: str, output_path: Optional[str], input_path: Optional[str], column: Optional[str], overwrite=False) -> 'PipelineDataFormat': if format == 'json': return JsonPipelineDataFormat(output_path, input_path, column, overwrite=overwrite) elif format == 'csv': return CsvPipelineDataF...
Creates an instance of the right subclass of [`~pipelines.PipelineDataFormat`] depending on `format`. Args: format (`str`): The format of the desired pipeline. Acceptable values are `"json"`, `"csv"` or `"pipe"`. output_path (`str`, *optional*): Where to save the outgoing data. input_path (`str`, *optional*): Where to...
github-repos
def _ExtractInterfaceMetadata(self, metadata): interfaces = [] for network_interface in metadata: mac_address = network_interface.get('mac') interface = self.network_utils.GetNetworkInterface(mac_address) ip_addresses = [] if interface: ip_addresses.extend(network_interface....
Extracts network interface metadata. Args: metadata: dict, the metadata response with the new network interfaces. Returns: list, a list of NetworkInterface objects.
juraj-google-style