Columns: code (string, lengths 20–4.93k) · docstring (string, lengths 33–1.27k) · source (string, 3 classes)
def to_affine(self):
    X, Y, Z = self.x, self.y, self.inverse(self.z)
    return (X * Z ** 2) % P, (Y * Z ** 3) % P
Converts this point to an affine representation. Returns: AffinePoint: The affine representation.
codesearchnet
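The conversion above can be checked with a standalone sketch. Everything here is illustrative: the toy prime `P`, the Fermat-based `inverse` helper, and the free-function form are assumptions; real curve code uses the curve's field prime and its own modular-inverse routine.

```python
P = 97  # toy field prime (assumption; real curves use much larger primes)

def inverse(z, p=P):
    # Modular inverse via Fermat's little theorem (valid since p is prime).
    return pow(z, p - 2, p)

def to_affine(x, y, z, p=P):
    """Convert Jacobian coordinates (X, Y, Z) to affine (X/Z^2, Y/Z^3) mod p."""
    zinv = inverse(z, p)
    return (x * zinv ** 2) % p, (y * zinv ** 3) % p
```

Converting the affine point (5, 7) to Jacobian form with Z = 3 and back recovers the original point.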
def interpolate_radius(r1, r2, fraction):
    def f(a, b, c):
        """Returns the length of the interpolated radius calculated
        using similar triangles.
        """
        return a + c * (b - a)

    return f(r2, r1, 1.0 - fraction) if r1 > r2 else f(r1, r2, fraction)
Calculate the radius that corresponds to a point P that lies at a fraction of the length of a cut cone P1P2 where P1, P2 are the centers of the circles that bound the shape with radii r1 and r2 respectively. Args: r1: float Radius of the first node of the segment. r2: float Radius of the second node of the segment. fraction: float The fraction at which the interpolated radius is calculated. Returns: float The interpolated radius. Note: The fraction is assumed from point P1, not from point P2.
codesearchnet
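The interpolation logic above is pure arithmetic, so it can be exercised standalone. This is a verbatim sketch of the function for illustration, with no external dependencies.

```python
def interpolate_radius(r1, r2, fraction):
    def f(a, b, c):
        # Length of the interpolated radius via similar triangles:
        # linear interpolation from a toward b by fraction c.
        return a + c * (b - a)

    # The branch keeps the fraction measured from P1 regardless of
    # which radius is larger.
    return f(r2, r1, 1.0 - fraction) if r1 > r2 else f(r1, r2, fraction)
```

At fraction 0.25 from a node of radius 3 toward a node of radius 1, the interpolated radius is 3 − 0.25 × 2 = 2.5, matching the note that the fraction is measured from P1.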
def jaccard(self, other):
    if other.seed != self.seed:
        raise ValueError("Cannot compute Jaccard given MinHash with "
                         "different seeds")
    if len(self) != len(other):
        raise ValueError("Cannot compute Jaccard given MinHash with "
                         "different numbers of permutation functions")
    return np.float(np.count_nonzero(self.hashvalues == other.hashvalues)) / \
        np.float(len(self))
Estimate the `Jaccard similarity`_ (resemblance) between the sets represented by this MinHash and the other. Args: other (datasketch.MinHash): The other MinHash. Returns: float: The Jaccard similarity, which is between 0.0 and 1.0.
juraj-google-style
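The estimate above reduces to counting positions where the two MinHash signatures agree. A pure-Python sketch (the function name and list-based signatures are illustrative, not the datasketch API):

```python
def jaccard_estimate(hashvalues_a, hashvalues_b):
    """Estimate Jaccard similarity as the fraction of matching signature slots."""
    if len(hashvalues_a) != len(hashvalues_b):
        raise ValueError('Signatures must use the same number of permutations')
    matches = sum(1 for a, b in zip(hashvalues_a, hashvalues_b) if a == b)
    return matches / len(hashvalues_a)
```

With signatures agreeing in 3 of 4 slots, the estimate is 0.75.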
def load_map_coordinates(map_file):
    if map_file[-4:] == ".pkl":
        map_data = pickle.load(open(map_file))
        lon = map_data['lon']
        lat = map_data['lat']
    else:
        map_data = Dataset(map_file)
        if "lon" in map_data.variables.keys():
            lon = map_data.variables['lon'][:]
            lat = map_data.variables['lat'][:]
        else:
            lon = map_data.variables["XLONG"][0]
            lat = map_data.variables["XLAT"][0]
    return lon, lat
Loads map coordinates from netCDF or pickle file created by util.makeMapGrids. Args: map_file: Filename for the file containing coordinate information. Returns: Latitude and longitude grids as numpy arrays.
juraj-google-style
def create_worker(self, func, interval, *args, **kwargs):
    thread = StoppableWorkerThread(func, interval, args, kwargs)
    self._workers.append(thread)
    if self._started:
        thread.start()
Spawn a worker thread running func. The worker will automatically be started when start() is called and terminated when stop() is called on this object. This must be called only from the main thread, not from a worker thread. create_worker must not be called after stop() has been called. If it is called before start() is called, the thread is started when start() is called, otherwise it is started immediately. Args: func (callable): Either a function that will be called in a loop with a sleep of interval seconds with *args and **kwargs or a generator function that will be called once and expected to yield periodically so that the worker can check if it should be killed. interval (float): The time interval between invocations of func. This should not be 0 so that the thread doesn't peg the CPU and should be short enough so that the worker checks if it should be killed in a timely fashion. *args: Arguments that are passed to func as positional args **kwargs: Arguments that are passed to func as keyword args
codesearchnet
def _all_correct_list(array):
    if type(array) not in _ITERABLE_TYPES:
        return False
    for item in array:
        if type(item) not in _ITERABLE_TYPES:
            return False
        if len(item) != 2:
            return False
    return True
Make sure that all items in `array` have the correct type and size. Args: array (list): Array of python types. Returns: True/False
codesearchnet
def write_file(self, filename, distance=6, velocity=8, charge=3):
    with open(filename, 'w') as f:
        f.write(self.get_string(distance=distance, velocity=velocity,
                                charge=charge))
Writes LammpsData to file. Args: filename (str): Filename. distance (int): No. of significant figures to output for box settings (bounds and tilt) and atomic coordinates. Default to 6. velocity (int): No. of significant figures to output for velocities. Default to 8. charge (int): No. of significant figures to output for charges. Default to 3.
codesearchnet
def _gather_field_values(item, *, fields=None, field_map=FIELD_MAP,
                         normalize_values=False,
                         normalize_func=normalize_value):
    it = get_item_tags(item)
    if fields is None:
        fields = list(it.keys())
    normalize = normalize_func if normalize_values else (lambda x: str(x))
    field_values = []
    for field in fields:
        field_values.append(normalize(list_to_single_value(
            get_field(it, field, field_map=field_map))))
    return tuple(field_values)
Create a tuple of normalized metadata field values. Parameters: item (~collections.abc.Mapping, str, os.PathLike): Item dict or filepath. fields (list): A list of fields used to compare item dicts. field_map (~collections.abc.Mapping): A mapping of field name aliases. Default: :data:`~google_music_utils.constants.FIELD_MAP` normalize_values (bool): Normalize metadata values to remove common differences between sources. Default: ``False`` normalize_func (function): Function to apply to metadata values if ``normalize_values`` is ``True``. Default: :func:`~google_music_utils.utils.normalize_value` Returns: tuple: Values from the given metadata fields.
codesearchnet
def extract_variable_info(
    kwargs: Any,
) -> Tuple[str, Tuple[int, ...], dtypes.DType, Callable[[], Any],
           Optional[int]]:

    def get_restore_uid(initial_value: Callable[..., Any]) -> int | None:
        return getattr(initial_value, 'restore_uid', None)

    if isinstance(kwargs['initial_value'], functools.partial) and (
            'shape' in kwargs['initial_value'].keywords
            or kwargs['initial_value'].args):
        if 'shape' in kwargs['initial_value'].keywords:
            shape = kwargs['initial_value'].keywords['shape']
        else:
            shape = kwargs['initial_value'].args[0]
        return (kwargs['name'], shape,
                kwargs['initial_value'].keywords.get('dtype', kwargs['dtype']),
                kwargs['initial_value'].func,
                get_restore_uid(kwargs['initial_value'].func))
    elif ('shape' not in kwargs or kwargs['shape'] is None
          or not callable(kwargs['initial_value'])):
        raise ValueError(
            'Unable to extract initializer function and shape from {}. '
            'Please either pass a function that expects a shape and dtype '
            'as the initial value for your variable or a functools.partial '
            'object with the shape and dtype kwargs set. This is needed so '
            'that we can initialize the shards of the ShardedVariable '
            'locally.'.format(kwargs['initial_value']))
    else:
        return (kwargs['name'], kwargs['shape'], kwargs['dtype'],
                kwargs['initial_value'],
                get_restore_uid(kwargs['initial_value']))
Extracts the variable creation attributes from the kwargs. Args: kwargs: a dict of keyword arguments that were passed to a variable creator scope. Returns: A tuple of variable name, shape, dtype, initialization function, restore_uid.
github-repos
def pan_and_scan_batched(self, images: 'torch.Tensor',
                         pan_and_scan_min_crop_size: int,
                         pan_and_scan_max_num_crops: int,
                         pan_and_scan_min_ratio_to_activate: float):
    height, width = images.shape[-2:]
    if width >= height:
        if width / height < pan_and_scan_min_ratio_to_activate:
            return []
        num_crops_w = int(math.floor(width / height + 0.5))
        num_crops_w = min(int(math.floor(width / pan_and_scan_min_crop_size)),
                          num_crops_w)
        num_crops_w = max(2, num_crops_w)
        num_crops_w = min(pan_and_scan_max_num_crops, num_crops_w)
        num_crops_h = 1
    else:
        if height / width < pan_and_scan_min_ratio_to_activate:
            return []
        num_crops_h = int(math.floor(height / width + 0.5))
        num_crops_h = min(int(math.floor(height / pan_and_scan_min_crop_size)),
                          num_crops_h)
        num_crops_h = max(2, num_crops_h)
        num_crops_h = min(pan_and_scan_max_num_crops, num_crops_h)
        num_crops_w = 1
    crop_size_w = int(math.ceil(width / num_crops_w))
    crop_size_h = int(math.ceil(height / num_crops_h))
    if min(crop_size_w, crop_size_h) < pan_and_scan_min_crop_size:
        return []
    crop_positions_w = [crop_size_w * i for i in range(num_crops_w)]
    crop_positions_h = [crop_size_h * i for i in range(num_crops_h)]
    return [images[..., pos_h:pos_h + crop_size_h, pos_w:pos_w + crop_size_w]
            for pos_h, pos_w in itertools.product(crop_positions_h,
                                                  crop_positions_w)]
Pan and Scan an image, by cropping into smaller images when the aspect ratio exceeds minimum allowed ratio. Args: images (`torch.Tensor`): Batched images to crop. pan_and_scan_min_crop_size (`int`, *optional*): Minimum size of each crop in pan and scan. pan_and_scan_max_num_crops (`int`, *optional*): Maximum number of crops per image in pan and scan. pan_and_scan_min_ratio_to_activate (`float`, *optional*): Minimum aspect ratio to activate pan and scan.
github-repos
def encode_tf(self, s):
    ids = subword_text_encoder_ops.subword_text_encoder_encode(
        s, self._filepath)
    return ids[:-1]
Encode a tf.Scalar string to a tf.Tensor. This will be necessary for on-the-fly tokenization. Args: s: a tf.Scalar with dtype tf.string Returns: a 1d tf.Tensor with dtype tf.int32
juraj-google-style
def GetUnscannedSubNode(self):
    if not self.sub_nodes and not self.scanned:
        return self
    for sub_node in self.sub_nodes:
        result = sub_node.GetUnscannedSubNode()
        if result:
            return result
    return None
Retrieves the first unscanned sub node. Returns: SourceScanNode: sub scan node or None if not available.
codesearchnet
def _read_at(self, d, interpolation='linear', index=False, return_basis=False):
    method = {'linear': utils.linear, 'none': None}
    i, d = utils.find_previous(self.basis, d, index=True, return_distance=True)
    if index:
        return i
    else:
        return method[interpolation](self[i], self[i + 1], d)
Private function. Implements read_at() for a single depth. Args: d (float) interpolation (str) index (bool) return_basis (bool) Returns: float
juraj-google-style
def __copy_extracted(self, path, destination):
    unpacked_dir = self.filename + '.unpacked'
    if not os.path.isdir(unpacked_dir):
        LOGGER.warn('Failed to copy extracted file %s, no extracted dir', path)
        return
    source_path = os.path.join(unpacked_dir, path)
    if not os.path.exists(source_path):
        LOGGER.warn('Failed to copy extracted file %s, does not exist', path)
        return
    destination_path = os.path.join(destination, path)
    shutil.copyfile(source_path, destination_path)
Copies a file that was already extracted to the destination directory. Args: path (str): Relative path (to the root of the archive) of the file to copy. destination (str): Directory to extract the archive to.
codesearchnet
def not_function(
    function: _evaluation.NotFunction,
    operand_result: Optional[_sql_data_types.Select],
    params_result: Collection[_sql_data_types.StandardSqlExpression],
) -> _sql_data_types.Select:
    del function, params_result
    if operand_result is None:
        raise ValueError('not() cannot be called without an operand.')
    return dataclasses.replace(
        operand_result,
        select_part=_sql_data_types.FunctionCall(
            'NOT', (operand_result.select_part,),
            _sql_alias='not_', _sql_data_type=_sql_data_types.Boolean))
Generates Spark SQL representing the FHIRPath not() function. Returns `TRUE` if the input collection evaluates to `FALSE`. The operand is expected to be a table subquery of cardinality 1, whose value is a `BOOL` type. By default, `_NotFunction` will return `FALSE` if given no operand. Args: function: The FHIRPath AST `NotFunction` node. operand_result: The expression which is being evaluated. params_result: The parameters passed in to the function. Returns: A compiled Spark SQL expression. Raises: ValueError: When the function is called without an operand.
github-repos
def umount(self, forced=True):
    if self.is_mounted():
        if is_osx():
            cmd = ["/usr/sbin/diskutil", "unmount",
                   self.connection["mount_point"]]
            if forced:
                cmd.insert(2, "force")
            subprocess.check_call(cmd)
        else:
            cmd = ["umount", self.connection["mount_point"]]
            if forced:
                cmd.insert(1, "-f")
            subprocess.check_call(cmd)
Try to unmount our mount point. Defaults to using forced method. If OS is Linux, it will not delete the mount point. Args: forced: Bool whether to force the unmount. Default is True.
juraj-google-style
def __fa_process_container(self, container, find, start, end, avoid,
                           initial_state, execution_state, trace_current,
                           trace_final):
    ip = start
    while ip:
        try:
            instr = container.fetch(ip)
        except ReilContainerInvalidAddressError:
            # Log format specifiers were truncated in the source; '{:#x}'
            # is a reconstruction.
            logger.debug('Exception @ {:#x}'.format(ip))
            raise ReilContainerInvalidAddressError

        try:
            next_addr = container.get_next_address(ip)
        except Exception:
            logger.debug('Exception @ {:#x}'.format(ip))
            raise ReilContainerInvalidAddressError

        next_ip = self.__process_instr(instr, avoid, next_addr, initial_state,
                                       execution_state, trace_current)

        if find and next_ip and next_ip == find:
            logger.debug('[+] Find address found!')
            trace_final.append(list(trace_current))
            next_ip = None

        if end and next_ip and next_ip == end:
            logger.debug('[+] End address found!')
            next_ip = None

        ip = next_ip if next_ip else None

        while not ip:
            if not execution_state.empty():
                ip, trace_current, registers, memory = execution_state.get()
                if split_address(ip)[1] == 0:
                    logger.debug('[+] Popping execution state @ {:#x}'.format(ip))
                else:
                    logger.debug('[+] Popping execution state @ {:#x}'.format(ip))
                self.__cpu.registers = registers
                self.__cpu.memory = memory
                logger.debug('[+] Next address: {:#x}'.format(ip))
            else:
                logger.debug('[+] No more paths to explore! Exiting...')
                break

            if find and ip == find:
                logger.debug('[+] Find address found!')
                trace_final.append(list(trace_current))
                ip = None

            if end and ip == end:
                logger.debug('[+] End address found!')
                ip = None
Process a REIL container. Args: avoid (list): List of addresses to avoid while executing the code. container (ReilContainer): REIL container to execute. end (int): End address. execution_state (Queue): Queue of execution states. find (int): Address to find. initial_state (State): Initial state. start (int): Start address. trace_current: trace_final:
codesearchnet
def get_staking_cutoff(self, round_num=0, tournament=1):
    query = 
    arguments = {'number': round_num, 'tournament': tournament}
    result = self.raw_query(query, arguments)
    result = result['data']['rounds'][0]['selection']
    key = 'bCutoff' if round_num >= 154 or round_num == 0 else 'pCutoff'
    return utils.parse_float_string(result[key])
Compute staking cutoff for the given round and tournament. Args: round_num (int, optional): The round you are interested in, defaults to current round. tournament (int, optional): ID of the tournament, defaults to 1 Returns: decimal.Decimal: cutoff probability Raises: ValueError: in case of missing prize pool information
juraj-google-style
def __eq__(self, other) -> bool:
    if self.timeslots == other.timeslots:
        return True
    return False
Two time-slot collections are the same if they have the same time-slots. Args: other (TimeslotCollection): other TimeslotCollection
juraj-google-style
def CheckApproversForLabel(self, token, client_urn, requester, approvers,
                           label):
    auth = self.reader.GetAuthorizationForSubject(label)
    if not auth:
        return True
    if auth.requester_must_be_authorized:
        if not self.CheckPermissions(requester, label):
            raise access_control.UnauthorizedAccess(
                'User %s not in %s or groups:%s for %s' %
                (requester, auth.users, auth.groups, label),
                subject=client_urn,
                requested_access=token.requested_access)
    approved_count = 0
    for approver in approvers:
        if self.CheckPermissions(approver, label) and approver != requester:
            approved_count += 1
    if approved_count < auth.num_approvers_required:
        raise access_control.UnauthorizedAccess(
            'Found %s approvers for %s, needed %s' %
            (approved_count, label, auth.num_approvers_required),
            subject=client_urn,
            requested_access=token.requested_access)
    return True
Checks if requester and approvers have approval privileges for labels. Checks against list of approvers for each label defined in approvers.yaml to determine if the list of approvers is sufficient. Args: token: user token client_urn: ClientURN object of the client requester: username string of person requesting approval. approvers: list of username strings that have approved this client. label: label strings to check approval privs for. Returns: True if access is allowed, raises otherwise.
codesearchnet
def get_domain_template(distro, libvirt_ver, **kwargs):
    env = Environment(
        loader=PackageLoader('lago', 'providers/libvirt/templates'),
        trim_blocks=True,
        lstrip_blocks=True)
    template_name = 'dom_template-{0}.xml.j2'.format(distro)
    try:
        template = env.get_template(template_name)
    except TemplateNotFound:
        LOGGER.debug('could not find template %s using default', template_name)
        template = env.get_template('dom_template-base.xml.j2')
    return template.render(libvirt_ver=libvirt_ver, **kwargs)
Get a rendered Jinja2 domain template Args: distro(str): domain distro libvirt_ver(int): libvirt version kwargs(dict): args for template render Returns: str: rendered template
codesearchnet
def _export_mode(mode, has_saved_vars, builder, model, custom_objects,
                 checkpoint_path, input_signature):
    compile_clone = mode != mode_keys.ModeKeys.PREDICT
    if compile_clone and not model.optimizer:
        raise ValueError(
            'Model does not have an optimizer. Cannot export mode %s' % mode)

    model_graph = ops.get_default_graph()
    with ops.Graph().as_default() as g, backend.learning_phase_scope(
            mode == mode_keys.ModeKeys.TRAIN):
        if input_signature is None:
            input_tensors = None
        else:
            input_tensors = nest.map_structure(create_placeholder,
                                               input_signature)
        clone = models_lib.clone_and_build_model(
            model, input_tensors=input_tensors,
            custom_objects=custom_objects, compile_clone=compile_clone)
        if compile_clone:
            g.add_to_collection(ops.GraphKeys.GLOBAL_STEP,
                                clone.optimizer.iterations)

        train_op = None
        if mode == mode_keys.ModeKeys.TRAIN:
            clone._make_train_function()
            train_op = clone.train_function.updates_op
        elif mode == mode_keys.ModeKeys.TEST:
            clone._make_test_function()
        else:
            clone._make_predict_function()
            g.get_collection_ref(ops.GraphKeys.UPDATE_OPS).extend(
                clone.state_updates)

        with session.Session().as_default():
            clone_var_list = _get_var_list(clone)
            if has_saved_vars:
                status = clone.load_weights(checkpoint_path)
                status.assert_existing_objects_matched()
            else:
                _assert_same_non_optimizer_objects(model, model_graph,
                                                   clone, g)
                clone.load_weights(checkpoint_path)
                clone.save_weights(checkpoint_path, save_format='tf',
                                   overwrite=True)
                builder._has_saved_variables = True

            builder.add_meta_graph(
                model_utils.EXPORT_TAG_MAP[mode],
                signature_def_map=_create_signature_def_map(clone, mode),
                saver=saver_lib.Saver(clone_var_list, allow_empty=True),
                init_op=variables.local_variables_initializer(),
                train_op=train_op)
    return None
Exports a model, and optionally saves new vars from the clone model. Args: mode: A `KerasModeKeys` string. has_saved_vars: A `boolean` indicating whether the SavedModel has already exported variables. builder: A `SavedModelBuilder` object. model: A `tf.keras.Model` object. custom_objects: A dictionary mapping string names to custom classes or functions. checkpoint_path: String path to checkpoint. input_signature: Nested TensorSpec containing the expected inputs. Can be `None`, in which case the signature will be inferred from the model. Raises: ValueError: If the train/eval mode is being exported, but the model does not have an optimizer.
github-repos
def circuit_to_latex_using_qcircuit(
        circuit: circuits.Circuit,
        qubit_order: ops.QubitOrderOrList = ops.QubitOrder.DEFAULT) -> str:
    diagram = circuit.to_text_diagram_drawer(
        qubit_namer=qcircuit_qubit_namer,
        qubit_order=qubit_order,
        get_circuit_diagram_info=get_qcircuit_diagram_info)
    return _render(diagram)
Returns a QCircuit-based latex diagram of the given circuit. Args: circuit: The circuit to represent in latex. qubit_order: Determines the order of qubit wires in the diagram. Returns: Latex code for the diagram.
juraj-google-style
def _bash_comp_command(self, cmd, add_help=True):
    out = ['-h', '--help'] if add_help else []
    cmd_dict = self._opt_cmds[cmd] if cmd else self._opt_bare
    for opt, sct in cmd_dict:
        out.extend(_names(self._conf[sct], opt))
    return out
Build a list of all options for a given command. Args: cmd (str): command name, set to None or '' for bare command. add_help (bool): add an help option. Returns: list of str: list of CLI options strings.
juraj-google-style
def downloadMARCXML(doc_id, library, base="nkc"):
    downer = Downloader()
    data = downer.download(
        ALEPH_URL + Template(DOC_URL_TEMPLATE).substitute(
            DOC_ID=doc_id,
            LIBRARY=library
        )
    )
    dom = dhtmlparser.parseString(data)

    error = dom.find("login")
    if error:
        error_msg = error[0].find("error")
        if error_msg:
            raise LibraryNotFoundException(
                "Can't download document doc_id: '" + str(doc_id) + "' " +
                "(probably bad library: '" + library + "')!\nMessage: " +
                "\n".join(map(lambda x: x.getContent(), error_msg))
            )

    error = dom.find("ill-get-doc")
    if error:
        error_msg = error[0].find("error")
        if error_msg:
            raise DocumentNotFoundException(
                "\n".join(map(lambda x: x.getContent(), error_msg))
            )

    return data
Download MARC XML document with given `doc_id` from given `library`. Args: doc_id (DocumentID): You will get this from :func:`getDocumentIDs`. library (str): "``NKC01``" in our case, but don't worry, :func:`getDocumentIDs` adds library specification into :class:`DocumentID` named tuple. Returns: str: MARC XML unicode string. Raises: LibraryNotFoundException DocumentNotFoundException
juraj-google-style
def __call__(self, inputs: List[Any],
             global_state: Optional[pg.geno.AttributeDict] = None,
             step: int = 0) -> List[Any]:
    if self.input_element_type is not None:
        elem_type = self.input_element_type
        for i, elem in enumerate(inputs):
            if not isinstance(elem, elem_type):
                raise TypeError(
                    f'The input is expected to be a list of {elem_type!r} '
                    f'but {elem!r} is encountered at position {i}.')
    if global_state is None:
        global_state = pg.geno.AttributeDict()
    self._on_input(inputs)
    outputs = self._operate(inputs, global_state=global_state, step=step)
    if self.output_element_type is not None:
        elem_type = self.output_element_type
        for i, elem in enumerate(outputs):
            if not isinstance(elem, elem_type):
                raise TypeError(
                    f'The output is expected to be a list of {elem_type!r} '
                    f'but {elem!r} is encountered at position {i}.')
    return outputs
Transform a list of input values to a list of output values. Args: inputs: A list of values as inputs. global_state: An `AttributeDict` object (dictionary that provides attribute access) as the global state container, which is readable/writable during the operation. step: Number of examples historically proposed, which can be used for determining a cross over schedule. Returns: A list of values as output of current operation.
github-repos
def _send_message(self, method, endpoint, params=None, data=None):
    url = self.url + endpoint
    r = self.session.request(method, url, params=params, data=data,
                             auth=self.auth, timeout=30)
    return r.json()
Send API request. Args: method (str): HTTP method (get, post, delete, etc.) endpoint (str): Endpoint (to be added to base URL) params (Optional[dict]): HTTP request parameters data (Optional[str]): JSON-encoded string payload for POST Returns: dict/list: JSON response
juraj-google-style
def _process_returns_section(func_documentation, sig, config_class,
                             indent_level):
    return_docstring = ''
    if func_documentation is not None and (
            match_start := re.search('(?m)^([ \\t]*)(?=Return)',
                                     func_documentation)) is not None:
        match_end = re.search('(?m)^([ \\t]*)(?=Example)', func_documentation)
        if match_end:
            return_docstring = func_documentation[
                match_start.start():match_end.start()]
            func_documentation = func_documentation[match_end.start():]
        else:
            return_docstring = func_documentation[match_start.start():]
            func_documentation = ''
        return_docstring = set_min_indent(return_docstring, indent_level + 4)
    elif (sig.return_annotation is not None
          and sig.return_annotation != inspect._empty):
        add_intro, return_annotation = contains_type(sig.return_annotation,
                                                     ModelOutput)
        return_docstring = _prepare_output_docstrings(
            return_annotation, config_class, add_intro=add_intro)
        return_docstring = return_docstring.replace('typing.', '')
        return_docstring = set_min_indent(return_docstring, indent_level + 4)
    return (return_docstring, func_documentation)
Process the returns section of the docstring. Args: func_documentation (`str`): Existing function documentation (manually specified in the docstring) sig (`inspect.Signature`): Function signature config_class (`str`): Config class for the model indent_level (`int`): Indentation level
github-repos
def _trace_variant_creation(self):
    variant = self._variant_tensor
    if not isinstance(variant, ops.EagerTensor):
        raise NotImplementedError(
            'Constructing a tf.function that reproduces a given dataset is '
            'only supported for datasets created eagerly. Please file a '
            'feature request if this is important to you.')
    with context.eager_mode(), ops.device('CPU'):
        graph_def = graph_pb2.GraphDef().FromString(
            self._as_serialized_graph(
                external_state_policy=options_lib.ExternalStatePolicy.FAIL
            ).numpy())
    output_node_names = []
    for node in graph_def.node:
        if node.op == '_Retval':
            output_node_names = node.input
    if len(output_node_names) != 1:
        raise AssertionError(
            f'Dataset graph is expected to only have one return value but '
            f'found {len(output_node_names)} return values: '
            f'{output_node_names}.')
    output_node_name = output_node_names[0]
    file_path_nodes = {}
    if ops.get_default_graph().building_function:
        asset_tracker = self._maybe_track_assets(graph_def)
        for key in asset_tracker:
            assets_list = [
                array_ops.expand_dims(asset.asset_path, axis=0)
                for asset in asset_tracker[key]
            ]
            file_path_nodes[key] = array_ops.concat(assets_list, axis=0)
    variant_function = wrap_function.function_from_graph_def(
        graph_def, inputs=[], outputs=output_node_name + ':0',
        captures=file_path_nodes)
    for used_function in self._functions():
        used_function.function.add_to_graph(variant_function.graph)
    return variant_function
Traces a function which outputs a variant `tf.Tensor` for this dataset. Note that creating this function involves evaluating an op, and is currently only supported when executing eagerly. Returns: A zero-argument `ConcreteFunction` which outputs a variant `tf.Tensor`.
github-repos
def cmd_path(self, cmd):
    for binscript in self.bin.files:
        if binscript.path.endswith('/{0}'.format(cmd)):
            return binscript.path
    raise ValueError('The command {0} was not found.'.format(cmd))
Get the path of a command in the virtualenv if it exists. Args: cmd (str): The command to look for. Returns: str: The full path to the command. Raises: ValueError: If the command is not present.
juraj-google-style
def reminders_add(self, *, text: str, time: str, **kwargs) -> SlackResponse:
    self._validate_xoxp_token()
    kwargs.update({'text': text, 'time': time})
    return self.api_call('reminders.add', json=kwargs)
Creates a reminder. Args: text (str): The content of the reminder. e.g. 'eat a banana' time (str): When this reminder should happen: the Unix timestamp (up to five years from now e.g. '1602288000'), the number of seconds until the reminder (if within 24 hours), or a natural language description (Ex. 'in 15 minutes' or 'every Thursday')
codesearchnet
def parse_hunks(diff: str) -> list[Hunk]:
    diff_pattern = ('diff --git a/.* b/(.*)\\n(?:\\w+ file mode \\d+\\n)?'
                    'index .*\\n--- .*\\n\\+\\+\\+ .*\\n')
    hunk_header_pattern = '@@ -\\d+,\\d+ \\+(\\d+),(\\d+) @@.*\\n'
    raw_per_file_hunks = re.split(diff_pattern, diff)[1:]
    parsed_hunks = []
    for file, raw_hunks in batch(raw_per_file_hunks, 2):
        # flags must be passed by keyword: the third positional argument of
        # re.split is maxsplit, so passing re.MULTILINE there silently capped
        # the number of splits instead of setting the flag.
        hunks = re.split(hunk_header_pattern, raw_hunks,
                         flags=re.MULTILINE)[1:]
        for start, length, body in batch(hunks, 3):
            lines = body.split('\n')
            lines = lines if lines[-1] else lines[:-1]
            parsed_hunks.append(Hunk(file, int(start), int(length), lines))
    return parsed_hunks
Parses a diff into hunks. Arguments: diff: The raw output of git diff. Returns: A list of Hunks.
github-repos
def has_all_nonzero_neurite_radii(neuron, threshold=0.0):
    bad_ids = []
    seen_ids = set()
    for s in _nf.iter_sections(neuron):
        for i, p in enumerate(s.points):
            info = (s.id, i)
            if p[COLS.R] <= threshold and info not in seen_ids:
                seen_ids.add(info)
                bad_ids.append(info)
    return CheckResult(len(bad_ids) == 0, bad_ids)
Check presence of neurite points with radius not above threshold Arguments: neuron(Neuron): The neuron object to test threshold: value above which a radius is considered to be non-zero Returns: CheckResult with result including list of (section ID, point ID) pairs of zero-radius points
juraj-google-style
def _solve(self, sense=None):
    while len(self._remove_constr) > 0:
        self._remove_constr.pop().delete()
    try:
        return self._prob.solve(sense=sense)
    except lp.SolverError as e:
        raise_from(MOMAError(text_type(e)), e)
    finally:
        self._remove_constr = []
Remove old constraints and then solve the current problem. Args: sense: Minimize or maximize the objective. (:class:`.lp.ObjectiveSense`) Returns: The Result object for the solved LP problem
codesearchnet
def remove_server_data(server_id):
    logger.debug("Removing server from serverdata")
    data = datatools.get_data()
    if server_id in data["discord"]["servers"]:
        data["discord"]["servers"].pop(server_id)
    datatools.write_data(data)
Remove a server from the server data Args: server_id (int): The server to remove from the server data
juraj-google-style
def _PrintPreprocessingInformation(self, storage_reader, session_number=None):
    knowledge_base_object = knowledge_base.KnowledgeBase()
    storage_reader.ReadPreprocessingInformation(knowledge_base_object)

    system_configuration = (
        knowledge_base_object.GetSystemConfigurationArtifact(
            session_identifier=session_number))
    if not system_configuration:
        return

    title = 'System configuration'
    table_view = views.ViewsFactory.GetTableView(
        self._views_format_type, title=title)

    hostname = 'N/A'
    if system_configuration.hostname:
        hostname = system_configuration.hostname.name

    operating_system = system_configuration.operating_system or 'N/A'
    operating_system_product = (
        system_configuration.operating_system_product or 'N/A')
    operating_system_version = (
        system_configuration.operating_system_version or 'N/A')
    code_page = system_configuration.code_page or 'N/A'
    keyboard_layout = system_configuration.keyboard_layout or 'N/A'
    time_zone = system_configuration.time_zone or 'N/A'

    table_view.AddRow(['Hostname', hostname])
    table_view.AddRow(['Operating system', operating_system])
    table_view.AddRow(['Operating system product', operating_system_product])
    table_view.AddRow(['Operating system version', operating_system_version])
    table_view.AddRow(['Code page', code_page])
    table_view.AddRow(['Keyboard layout', keyboard_layout])
    table_view.AddRow(['Time zone', time_zone])
    table_view.Write(self._output_writer)

    title = 'User accounts'
    table_view = views.ViewsFactory.GetTableView(
        self._views_format_type,
        column_names=['Username', 'User directory'], title=title)
    for user_account in system_configuration.user_accounts:
        table_view.AddRow([
            user_account.username, user_account.user_directory])
    table_view.Write(self._output_writer)
Prints the details of the preprocessing information. Args: storage_reader (StorageReader): storage reader. session_number (Optional[int]): session number.
juraj-google-style
def __init__(self, k_ranges, query_spec, key_range_iter_cls):
    self._key_ranges = k_ranges
    self._query_spec = query_spec
    self._key_range_iter_cls = key_range_iter_cls
    self._current_iter = None
    self._current_key_range = None
Init. Args: k_ranges: a key_ranges._KeyRanges object. query_spec: a model.query_spec object that defines how to retrieve entities from datastore. key_range_iter_cls: the class that iterates over a single key range. The value yielded by this class is yielded.
juraj-google-style
def get_numeric_sort_key_fn(numeric_values):
    value_types = _get_all_types(numeric_values)
    if len(value_types) != 1:
        raise ValueError(f'No common value type in {numeric_values}')
    value_type = next(iter(value_types))
    if value_type == NUMBER_TYPE:
        return _get_value_as_primitive_value
    valid_indexes = set(range(_DATE_TUPLE_SIZE))
    for numeric_value in numeric_values:
        value = _get_value_as_primitive_value(numeric_value)
        assert isinstance(value, tuple)
        for tuple_index, inner_value in enumerate(value):
            if inner_value is None:
                valid_indexes.discard(tuple_index)
    if not valid_indexes:
        raise ValueError(f'No common value in {numeric_values}')

    def _sort_key_fn(numeric_value):
        value = _get_value_as_primitive_value(numeric_value)
        return tuple(value[index] for index in valid_indexes)

    return _sort_key_fn
Creates a function that can be used as a sort key or to compare the values. Maps to primitive types and finds the biggest common subset. Consider the values "05/05/2010" and "August 2007". With the corresponding primitive values (2010.,5.,5.) and (2007.,8., None). These values can be compared by year and date so we map to the sequence (2010., 5.), (2007., 8.). If we added a third value "2006" with primitive value (2006., None, None), we could only compare by the year so we would map to (2010.,), (2007.,) and (2006.,). Args: numeric_values: Values to compare Returns: A function that can be used as a sort key function (mapping numeric values to a comparable tuple) Raises: ValueError if values don't have a common type or are not comparable.
github-repos
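The common-subset logic above can be sketched with plain tuples standing in for the primitive values (a hypothetical simplification; `None` marks a missing date component):

```python
def make_sort_key(values):
    # Keep only the tuple positions that are populated in every value.
    valid = set(range(len(values[0])))
    for value in values:
        for i, component in enumerate(value):
            if component is None:
                valid.discard(i)
    if not valid:
        raise ValueError("no common component")
    return lambda value: tuple(value[i] for i in sorted(valid))

# "05/05/2010", "August 2007", "2006" as (year, month, day) tuples:
dates = [(2010.0, 5.0, 5.0), (2007.0, 8.0, None), (2006.0, None, None)]
key = make_sort_key(dates)
# Only the year survives in all three values, so sorting compares years.
print(sorted(dates, key=key))
```

With the third value removed, both year and month would survive and the key would map to two-component tuples instead.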
def setup_logging(verbosity, formats=None): if (formats is None): formats = {} log_level = logging.INFO log_format = formats.get('info', INFO_FORMAT) if sys.stdout.isatty(): log_format = formats.get('color', COLOR_FORMAT) if (verbosity > 0): log_level = logging.DEBUG log_format = formats.get('debug', DEBUG_FORMAT) if (verbosity < 2): logging.getLogger('botocore').setLevel(logging.CRITICAL) hdlr = logging.StreamHandler() hdlr.setFormatter(ColorFormatter(log_format, ISO_8601)) logging.root.addHandler(hdlr) logging.root.setLevel(log_level)
Configure a proper logger based on verbosity and optional log formats. Args: verbosity (int): 0, 1, 2 formats (dict): Optional, looks for `info`, `color`, and `debug` keys which may override the associated default log formats.
codesearchnet
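The verbosity-to-configuration mapping in `setup_logging` can be isolated as a small pure function (the format constants here are hypothetical stand-ins for the module's `INFO_FORMAT`/`DEBUG_FORMAT`):

```python
import logging

INFO_FORMAT = "%(levelname)s:%(name)s:%(message)s"
DEBUG_FORMAT = "%(asctime)s %(levelname)s [%(name)s] %(message)s"

def pick_level_and_format(verbosity):
    # Mirrors the selection logic: verbosity 0 -> INFO, >0 -> DEBUG,
    # and botocore is silenced below verbosity 2.
    level = logging.DEBUG if verbosity > 0 else logging.INFO
    fmt = DEBUG_FORMAT if verbosity > 0 else INFO_FORMAT
    silence_botocore = verbosity < 2
    return level, fmt, silence_botocore

print(pick_level_and_format(0))
print(pick_level_and_format(2))
```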
def __spawn_new_request(self): first_in_line = self.queue.get_first(QueueItem.STATUS_QUEUED) if (first_in_line is None): return False while self.routing.is_treshold_reached(first_in_line.request): self.queue.move(first_in_line, QueueItem.STATUS_CANCELLED) first_in_line = self.queue.get_first(QueueItem.STATUS_QUEUED) if (first_in_line is None): return False self.__request_start(first_in_line) return True
Spawn the first queued request if there is one available. Returns: bool: True if a new request was spawned, false otherwise.
codesearchnet
def get(self, dash_id): data = json.loads(r_db.hmget(config.DASH_CONTENT_KEY, dash_id)[0]) return build_response(dict(data=data, code=200))
Read dashboard content.

Args:
    dash_id: dashboard id.

Returns:
    A dict containing the content of that dashboard, not including
    the meta info.
codesearchnet
def __init__(self, credentials): if not has_httplib2: raise ImportError("No module named httplib2") super(GAPDecoratorAuthMethod, self).__init__() self._http = None self._credentials = credentials self._action_token = None
Initialize auth method with existing credentials. Args: credentials: OAuth2 credentials obtained via GAP OAuth2 library.
juraj-google-style
def __init__(self, label, ast_node, *, line_number=None, path): self.label = label self.ast_node = ast_node if line_number: self.line_number = line_number elif ast_node: self.line_number = ast_node.lineno else: self.line_number = None self.path = path self.ingoing = list() self.outgoing = list()
Create a Node that can be used in a CFG.

Args:
    label(str): The label of the node, describing its expression.
    ast_node(Optional[ast.AST]): The AST node this CFG node represents,
        used to derive the line number when one is not given explicitly.
    line_number(Optional[int]): The line of the expression of the Node.
    path(str): Path of the file in which the expression resides.
juraj-google-style
def pprint_cell(self, row, col):
    ndims = self.ndims
    if col >= self.cols:
        raise Exception("Maximum column index is %d" % (self.cols - 1))
    elif row >= self.rows:
        raise Exception("Maximum row index is %d" % (self.rows - 1))
    elif row == 0:
        if col >= ndims:
            if self.vdims:
                return self.vdims[col - ndims].pprint_label
            else:
                return ''
        return self.kdims[col].pprint_label
    else:
        dim = self.get_dimension(col)
        return dim.pprint_value(self.iloc[row-1, col])
Formatted contents of table cell. Args: row (int): Integer index of table row col (int): Integer index of table column Returns: Formatted table cell contents
juraj-google-style
def ProcessStorage(self): self._CheckStorageFile(self._storage_file_path) self._status_view.SetMode(self._status_view_mode) self._status_view.SetStorageFileInformation(self._storage_file_path) status_update_callback = self._status_view.GetAnalysisStatusUpdateCallback() session = engine.BaseEngine.CreateSession(command_line_arguments=self._command_line_arguments, preferred_encoding=self.preferred_encoding) storage_reader = storage_factory.StorageFactory.CreateStorageReaderForFile(self._storage_file_path) if (not storage_reader): logger.error('Format of storage file: {0:s} not supported'.format(self._storage_file_path)) return self._number_of_analysis_reports = storage_reader.GetNumberOfAnalysisReports() storage_reader.Close() configuration = configurations.ProcessingConfiguration() configuration.data_location = self._data_location configuration.profiling.directory = self._profiling_directory configuration.profiling.sample_rate = self._profiling_sample_rate configuration.profiling.profilers = self._profilers analysis_counter = None if self._analysis_plugins: storage_writer = storage_factory.StorageFactory.CreateStorageWriterForFile(session, self._storage_file_path) analysis_engine = psort.PsortMultiProcessEngine(use_zeromq=self._use_zeromq) analysis_engine.AnalyzeEvents(self._knowledge_base, storage_writer, self._data_location, self._analysis_plugins, configuration, event_filter=self._event_filter, event_filter_expression=self._event_filter_expression, status_update_callback=status_update_callback, worker_memory_limit=self._worker_memory_limit) analysis_counter = collections.Counter() for (item, value) in iter(session.analysis_reports_counter.items()): analysis_counter[item] = value if (self._output_format != 'null'): storage_reader = storage_factory.StorageFactory.CreateStorageReaderForFile(self._storage_file_path) analysis_engine = psort.PsortMultiProcessEngine(use_zeromq=self._use_zeromq) analysis_engine.ExportEvents(self._knowledge_base, storage_reader, 
self._output_module, configuration, deduplicate_events=self._deduplicate_events, event_filter=self._event_filter, status_update_callback=status_update_callback, time_slice=self._time_slice, use_time_slicer=self._use_time_slicer) if self._quiet_mode: return self._output_writer.Write('Processing completed.\n') if analysis_counter: table_view = views.ViewsFactory.GetTableView(self._views_format_type, title='Analysis reports generated') for (element, count) in analysis_counter.most_common(): if (element != 'total'): table_view.AddRow([element, count]) table_view.AddRow(['Total', analysis_counter['total']]) table_view.Write(self._output_writer) storage_reader = storage_factory.StorageFactory.CreateStorageReaderForFile(self._storage_file_path) self._PrintAnalysisReportsDetails(storage_reader)
Processes a plaso storage file. Raises: BadConfigOption: when a configuration parameter fails validation. RuntimeError: if a non-recoverable situation is encountered.
codesearchnet
def remove_delegate(self, callback): if (callback not in self._delegate_methods): return self._delegate_methods.remove(callback)
Unregisters a registered delegate function or a method.

Args:
    callback(function): previously registered function or method to remove
codesearchnet
def convert_inner_node_data(nested, wrap=False): def _is_serialized_node_data(nested): if isinstance(nested, list) and len(nested) in [3, 4] and isinstance(nested[0], str): return True return False def _is_atomic_nested(nested): if isinstance(nested, ListWrapper): return True if _is_serialized_node_data(nested): return True return not nest.is_nested(nested) def _convert_object_or_list(nested): if wrap: if isinstance(nested, ListWrapper): return nested if _is_serialized_node_data(nested): return ListWrapper(nested) return nested else: if isinstance(nested, ListWrapper): return nested.as_list() return nested return map_structure_with_atomic(_is_atomic_nested, _convert_object_or_list, nested)
Either wraps or unwraps innermost node data lists in `ListWrapper` objects. Args: nested: A nested data structure. wrap: If `True`, wrap innermost lists in `ListWrapper` objects. If `False`, unwraps `ListWrapper` objects into lists. Returns: Structure of same type as nested, with lists wrapped/unwrapped.
github-repos
def _remove_outliers_from_hist(hist: Hist, outliers_start_index: int, outliers_removal_axis: OutliersRemovalAxis) -> None: if outliers_start_index > 0: x = ctypes.c_int(0) y = ctypes.c_int(0) z = ctypes.c_int(0) outliers_removal_axis_values: Dict[OutliersRemovalAxis, ctypes.c_int] = { projectors.TH1AxisType.x_axis: x, projectors.TH1AxisType.y_axis: y, projectors.TH1AxisType.z_axis: z, } for index in range(0, hist.GetNcells()): hist.GetBinXYZ(index, x, y, z) if hist.GetBinContent(index) < hist.GetBinError(index): logger.warning(f"Bin content < error. Name: {hist.GetName()}, Bin content: {hist.GetBinContent(index)}, Bin error: {hist.GetBinError(index)}, index: {index}, ({x.value}, {y.value})") if outliers_removal_axis_values[outliers_removal_axis].value >= outliers_start_index: hist.SetBinContent(index, 0) hist.SetBinError(index, 0) else: logger.info(f"Hist {hist.GetName()} did not have any outliers to cut")
Remove outliers from a given histogram.

Args:
    hist: Histogram to check for outliers.
    outliers_start_index: Index in the truth axis where outliers begin.
    outliers_removal_axis: Axis along which outliers removal will be
        performed. Usually the particle level axis.
Returns:
    None. The histogram is modified in place.
juraj-google-style
def train_validation_split(arrays, validation_split): def _can_split(t): tensor_types = _get_tensor_types() return isinstance(t, tensor_types) or t is None flat_arrays = nest.flatten(arrays) unsplitable = [type(t) for t in flat_arrays if not _can_split(t)] if unsplitable: raise ValueError('`validation_split` is only supported for Tensors or NumPy arrays, found following types in the input: {}'.format(unsplitable)) if all((t is None for t in flat_arrays)): return (arrays, arrays) first_non_none = None for t in flat_arrays: if t is not None: first_non_none = t break batch_dim = int(first_non_none.shape[0]) split_at = int(math.floor(batch_dim * (1.0 - validation_split))) if split_at == 0 or split_at == batch_dim: raise ValueError('Training data contains {batch_dim} samples, which is not sufficient to split it into a validation and training set as specified by `validation_split={validation_split}`. Either provide more data, or a different value for the `validation_split` argument.'.format(batch_dim=batch_dim, validation_split=validation_split)) def _split(t, start, end): if t is None: return t return t[start:end] train_arrays = nest.map_structure(functools.partial(_split, start=0, end=split_at), arrays) val_arrays = nest.map_structure(functools.partial(_split, start=split_at, end=batch_dim), arrays) return (train_arrays, val_arrays)
Split arrays into train and validation subsets in deterministic order. The last part of data will become validation data. Args: arrays: Tensors to split. Allowed inputs are arbitrarily nested structures of Tensors and NumPy arrays. validation_split: Float between 0 and 1. The proportion of the dataset to include in the validation split. The rest of the dataset will be included in the training split. Returns: `(train_arrays, validation_arrays)`
github-repos
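The split-point arithmetic above can be sketched without TensorFlow (a minimal analogue working on index lists instead of tensors):

```python
import math

def split_indices(n_samples, validation_split):
    # Same computation as train_validation_split: the last
    # `validation_split` fraction of samples becomes validation data.
    split_at = int(math.floor(n_samples * (1.0 - validation_split)))
    if split_at == 0 or split_at == n_samples:
        raise ValueError("validation_split leaves one side empty")
    return list(range(split_at)), list(range(split_at, n_samples))

train, val = split_indices(10, 0.2)
print(len(train), len(val))
```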
def merged(cls, *flatterms: 'FlatTerm') -> 'FlatTerm': return cls(cls._combined_wildcards_iter(sum(flatterms, cls.empty())))
Concatenate the given flatterms to a single flatterm. Args: *flatterms: The flatterms which are concatenated. Returns: The concatenated flatterms.
juraj-google-style
def _ParseIntegerValue(self, byte_stream, file_offset): data_type_map = self._GetDataTypeMap('int32be') try: return self._ReadStructureFromByteStream(byte_stream, file_offset, data_type_map) except (ValueError, errors.ParseError) as exception: raise errors.ParseError('Unable to parse integer value with error: {0!s}'.format(exception))
Parses an integer value. Args: byte_stream (bytes): byte stream. file_offset (int): offset of the attribute data relative to the start of the file-like object. Returns: int: integer value. Raises: ParseError: when the integer value cannot be parsed.
codesearchnet
def get(cls, keyval, key='id', user_id=None): if keyval is None: return None if (key in cls.__table__.columns and cls.__table__.columns[key].primary_key): return cls.query.get(keyval) else: result = cls.query.filter( getattr(cls, key) == keyval) return result.first()
Fetches a single instance which has value `keyval` for the attribute `key`. Args: keyval: The value of the attribute. key (str, optional): The attribute to search by. By default, it is 'id'. Returns: A model instance if found. Else None. Examples: >>> User.get(35) user35@i.com >>> User.get('user35@i.com', key='email') user35@i.com
juraj-google-style
def write_table(self, table, rows, append=False, gzip=False): _write_table(self.root, table, rows, self.table_relations(table), append=append, gzip=gzip, encoding=self.encoding)
Encode and write out *table* to the profile directory. Args: table: The name of the table to write rows: The rows to write to the table append: If `True`, append the encoded rows to any existing data. gzip: If `True`, compress the resulting table with `gzip`. The table's filename will have `.gz` appended.
juraj-google-style
def get_children_graph(self, item_ids=None, language=None, forbidden_item_ids=None): if forbidden_item_ids is None: forbidden_item_ids = set() def _children(item_ids): if item_ids is None: items = Item.objects.filter(active=True).prefetch_related('children') else: item_ids = [ii for iis in item_ids.values() for ii in iis] items = Item.objects.filter(id__in=item_ids, active=True).prefetch_related('children') return { item.id: sorted([ _item.id for _item in item.children.all() if _item.active and _item.id not in forbidden_item_ids ]) for item in items if item.id not in forbidden_item_ids } if item_ids is None: return self._reachable_graph(None, _children, language=language) else: graph = self.get_children_graph(None, language, forbidden_item_ids=forbidden_item_ids) return self._subset_graph(graph, set(item_ids) - set(forbidden_item_ids))
Get a subgraph of items reachable from the given set of items through the 'child' relation. Args: item_ids (list): items which are taken as roots for the reachability language (str): if specified, filter out items which are not available in the given language Returns: dict: item id -> list of items (child items), root items are referenced by None key
juraj-google-style
def update_data(self, index, data): datapack = self.built_embed.to_dict()['fields'][index] self.built_embed.set_field_at(index, name=datapack['name'], value=data, inline=datapack['inline'])
Updates a particular datapack's data Args: index (int): The index of the datapack data (str): The new value to set for this datapack
codesearchnet
def __init__(self, input_filename="lammps.in", bin="lammps"):
    self.lammps_bin = bin.split()
    if not which(self.lammps_bin[-1]):
        raise RuntimeError(
            "LammpsRunner requires the executable {} to be in the path. "
            "Please download and install LAMMPS from http: "
            "Don't forget to add the binary to your path".format(
                self.lammps_bin[-1]))
    self.input_filename = input_filename
LAMMPS wrapper Args: input_filename (string): input file name bin (string): command to run, excluding the input file name
juraj-google-style
def __init__(self, input_reader=None, output_writer=None): super(PsortTool, self).__init__( input_reader=input_reader, output_writer=output_writer) self._analysis_manager = analysis_manager.AnalysisPluginManager self._analysis_plugins = None self._analysis_plugins_output_format = None self._command_line_arguments = None self._deduplicate_events = True self._event_filter_expression = None self._event_filter = None self._knowledge_base = knowledge_base.KnowledgeBase() self._number_of_analysis_reports = 0 self._preferred_language = 'en-US' self._process_memory_limit = None self._status_view_mode = status_view.StatusView.MODE_WINDOW self._status_view = status_view.StatusView(self._output_writer, self.NAME) self._stdout_output_writer = isinstance( self._output_writer, tools.StdoutOutputWriter) self._storage_file_path = None self._temporary_directory = None self._time_slice = None self._use_time_slicer = False self._use_zeromq = True self._worker_memory_limit = None self.list_analysis_plugins = False self.list_language_identifiers = False self.list_output_modules = False self.list_profilers = False
Initializes the CLI tool object. Args: input_reader (Optional[InputReader]): input reader, where None indicates that the stdin input reader should be used. output_writer (Optional[OutputWriter]): output writer, where None indicates that the stdout output writer should be used.
juraj-google-style
def create_failover_dns(self, primary_region='us-east-1'): dns_record = self.generated.dns()['global'] zone_ids = get_dns_zone_ids(env=self.env, facing=self.elb_subnet) elb_dns_aws = find_elb(name=self.app_name, env=self.env, region=self.region) elb_dns_zone_id = find_elb_dns_zone_id(name=self.app_name, env=self.env, region=self.region) if primary_region in elb_dns_aws: failover_state = 'PRIMARY' else: failover_state = 'SECONDARY' self.log.info("%s set as %s record", elb_dns_aws, failover_state) self.log.info('Updating Application Failover URL: %s', dns_record) dns_kwargs = { 'dns_name': dns_record, 'elb_dns_zone_id': elb_dns_zone_id, 'elb_aws_dns': elb_dns_aws, 'dns_ttl': self.dns_ttl, 'failover_state': failover_state, } for zone_id in zone_ids: self.log.debug('zone_id: %s', zone_id) update_failover_dns_record(self.env, zone_id, **dns_kwargs) return dns_record
Create dns entries in route53 for multiregion failover setups. Args: primary_region (str): primary AWS region for failover Returns: Auto-generated DNS name.
juraj-google-style
def _repeated_field_to_json(field, row_value): item_field = copy.deepcopy(field) item_field._mode = "NULLABLE" values = [] for item in row_value: values.append(_field_to_json(item_field, item)) return values
Convert a repeated/array field to its JSON representation. Args: field ( \ :class:`~google.cloud.bigquery.schema.SchemaField`, \ ): The SchemaField to use for type conversion and field name. The field mode must equal ``REPEATED``. row_value (Sequence[any]): A sequence of values to convert to JSON-serializable values. Returns: List[any]: A list of JSON-serializable objects.
juraj-google-style
def __contains__(self, id): if not isinstance(id, int): raise TypeError(id) return id in self._map
Return if the spreadsheet has a worksheet with the given id. Args: id (int): numeric id of the worksheet Returns: bool: ``True`` if such a worksheet is present else ``False`` Raises: TypeError: if ``id`` is not an ``int``
juraj-google-style
def reaction_charge(reaction, compound_charge): charge_sum = 0.0 for (compound, value) in reaction.compounds: charge = compound_charge.get(compound.name, float('nan')) charge_sum += (charge * float(value)) return charge_sum
Calculate the overall charge for the specified reaction. Args: reaction: :class:`psamm.reaction.Reaction`. compound_charge: a map from each compound to charge values.
codesearchnet
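A minimal sketch of the charge-balance computation, using plain tuples in place of `psamm.reaction.Reaction` compounds (signed stoichiometry: negative for reactants, positive for products):

```python
import math

def reaction_charge(compounds, compound_charge):
    # Sum stoichiometry * charge; unknown compounds contribute NaN,
    # which propagates through the sum, as in the original.
    charge_sum = 0.0
    for name, value in compounds:
        charge_sum += compound_charge.get(name, float("nan")) * float(value)
    return charge_sum

# Hypothetical balanced reaction: 2 A(-1) -> B(-2) has net charge zero.
print(reaction_charge([("A", -2), ("B", 1)], {"A": -1.0, "B": -2.0}))
```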
def heightmap_normalize( hm: np.ndarray, mi: float = 0.0, ma: float = 1.0 ) -> None: lib.TCOD_heightmap_normalize(_heightmap_cdata(hm), mi, ma)
Normalize heightmap values between ``mi`` and ``ma``.

Args:
    hm (numpy.ndarray): A numpy.ndarray heightmap, normalized in place.
    mi (float): The lowest value after normalization.
    ma (float): The highest value after normalization.
juraj-google-style
def empty(self) -> 'Builder': return self._to_builder(_evaluation.EmptyFunction(self.node.context, self.node, []))
The FHIRPath empty() function. Returns: An expression that evaluates to True if the parent evaluates to empty.
github-repos
def ones_matrix_band_part(rows, cols, num_lower, num_upper, out_shape=None): if all([isinstance(el, int) for el in [rows, cols, num_lower, num_upper]]): if num_lower < 0: num_lower = rows - 1 if num_upper < 0: num_upper = cols - 1 lower_mask = np.tri(cols, rows, num_lower).T upper_mask = np.tri(rows, cols, num_upper) band = np.ones((rows, cols)) * lower_mask * upper_mask if out_shape: band = band.reshape(out_shape) band = tf.constant(band, tf.float32) else: band = tf.matrix_band_part( tf.ones([rows, cols]), tf.cast(num_lower, tf.int64), tf.cast(num_upper, tf.int64)) if out_shape: band = tf.reshape(band, out_shape) return band
Matrix band part of ones. Args: rows: int determining number of rows in output cols: int num_lower: int, maximum distance backward. Negative values indicate unlimited. num_upper: int, maximum distance forward. Negative values indicate unlimited. out_shape: shape to reshape output by. Returns: Tensor of size rows * cols reshaped into shape out_shape.
juraj-google-style
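The band mask built from the two `np.tri` calls reduces to a simple per-element condition; a pure-Python sketch of the same semantics (no NumPy or TensorFlow):

```python
def band_part_ones(rows, cols, num_lower, num_upper):
    # Keep ones where -num_lower <= col - row <= num_upper;
    # negative bounds mean "unlimited", as in the helper above.
    if num_lower < 0:
        num_lower = rows - 1
    if num_upper < 0:
        num_upper = cols - 1
    return [[1.0 if -num_lower <= j - i <= num_upper else 0.0
             for j in range(cols)] for i in range(rows)]

# One diagonal below, none above: a lower bidiagonal band.
print(band_part_ones(3, 3, 1, 0))
```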
def get_permissions(self, grp_name, resource): self.project_service.set_auth(self._token_project) return self.project_service.get_permissions(grp_name, resource)
Get permissions associated the group has with the given resource. Args: grp_name (string): Name of group. resource (intern.resource.boss.Resource): Identifies which data model object to operate on. Returns: (list): List of permissions. Raises: requests.HTTPError on failure.
codesearchnet
def _ParseDistributedTrackingIdentifier(self, parser_mediator, uuid_object, origin): if (uuid_object.version == 1): event_data = windows_events.WindowsDistributedLinkTrackingEventData(uuid_object, origin) date_time = dfdatetime_uuid_time.UUIDTime(timestamp=uuid_object.time) event = time_events.DateTimeValuesEvent(date_time, definitions.TIME_DESCRIPTION_CREATION) parser_mediator.ProduceEventWithEventData(event, event_data) return '{{{0!s}}}'.format(uuid_object)
Extracts data from a Distributed Tracking identifier. Args: parser_mediator (ParserMediator): mediates interactions between parsers and other components, such as storage and dfvfs. uuid_object (uuid.UUID): UUID of the Distributed Tracking identifier. origin (str): origin of the event (event source). Returns: str: UUID string of the Distributed Tracking identifier.
codesearchnet
def convolve(image, pixel_filter, channels=3, name=None): with tf.name_scope(name, 'convolve'): tf.compat.v1.assert_type(image, tf.float32) channel_filter = tf.eye(channels) filter_ = (tf.expand_dims(tf.expand_dims(pixel_filter, (- 1)), (- 1)) * tf.expand_dims(tf.expand_dims(channel_filter, 0), 0)) result_batch = tf.nn.conv2d(tf.stack([image]), filter=filter_, strides=[1, 1, 1, 1], padding='SAME') return result_batch[0]
Perform a 2D pixel convolution on the given image. Arguments: image: A 3D `float32` `Tensor` of shape `[height, width, channels]`, where `channels` is the third argument to this function and the first two dimensions are arbitrary. pixel_filter: A 2D `Tensor`, representing pixel weightings for the kernel. This will be used to create a 4D kernel---the extra two dimensions are for channels (see `tf.nn.conv2d` documentation), and the kernel will be constructed so that the channels are independent: each channel only observes the data from neighboring pixels of the same channel. channels: An integer representing the number of channels in the image (e.g., 3 for RGB). Returns: A 3D `float32` `Tensor` of the same shape as the input.
codesearchnet
def get_request(profile, resource): url = get_url(profile, resource) headers = get_headers(profile) response = requests.get(url, headers=headers) return response.json()
Do a GET request to Github's API. Args: profile A profile generated from ``simplygithub.authentication.profile``. Such profiles tell this module (i) the ``repo`` to connect to, and (ii) the ``token`` to connect with. resource The part of a Github API URL that comes after ``.../:repo/git``. For instance, for ``.../:repo/git/commits``, it's ``/commits``. Returns: The body of the response, converted from JSON into a Python dict.
codesearchnet
def random_get_int_mean( rnd: Optional[tcod.random.Random], mi: int, ma: int, mean: int ) -> int: return int( lib.TCOD_random_get_int_mean( rnd.random_c if rnd else ffi.NULL, mi, ma, mean ) )
Return a random weighted integer in the range: ``mi`` <= n <= ``ma``.

The result is affected by calls to :any:`random_set_distribution`.

Args:
    rnd (Optional[Random]): A Random instance, or None to use the default.
    mi (int): The lower bound of the random range, inclusive.
    ma (int): The upper bound of the random range, inclusive.
    mean (int): The mean return value.

Returns:
    int: A random weighted integer in the range ``mi`` <= n <= ``ma``.
juraj-google-style
def regex(self, regex = None): if regex is None: return self._regex if self._type != 'string': sys.stderr.write('can not set __regex__ for %s' % self._type) return if not isinstance(regex, (basestring, _REGEX_TYPE)): raise ValueError('__regex__') self._regex = regex
Regex Sets or gets the regular expression used to validate the Node Arguments: regex {str} -- A standard regular expression string Raises: ValueError Returns: None | str
juraj-google-style
def can_transition(self, status_from: str, status_to: str) -> bool: if (not self.STATUSES.can_transition(status_from=status_from, status_to=status_to)): _logger.info('`%s` tried to transition from status `%s` to non permitted status `%s`', str(self), status_from, status_to) return False return True
Check whether a transition from ``status_from`` to ``status_to`` is permitted.

Returns:
    boolean: True if the transition is allowed, False otherwise.
codesearchnet
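A minimal sketch of the guard above, with a hypothetical transition table standing in for `self.STATUSES` (the real class presumably also logs the rejected transition):

```python
ALLOWED_TRANSITIONS = {
    ("new", "running"),
    ("running", "done"),
    ("running", "failed"),
}

def can_transition(status_from, status_to):
    # Return False for any pair not in the permitted set.
    if (status_from, status_to) not in ALLOWED_TRANSITIONS:
        return False
    return True

print(can_transition("new", "running"))
print(can_transition("done", "new"))
```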
def make_hello_bot_agent() -> DefaultAgent: skill_hello = PatternMatchingSkill(['Hello world'], patterns=['hi', 'hello', 'good day']) skill_bye = PatternMatchingSkill(['Goodbye world', 'See you around'], patterns=['bye', 'chao', 'see you']) skill_fallback = PatternMatchingSkill(["I don't understand, sorry", 'I can say "Hello world"']) agent = DefaultAgent([skill_hello, skill_bye, skill_fallback], skills_processor=HighestConfidenceSelector()) return agent
Builds agent based on PatternMatchingSkill and HighestConfidenceSelector. This is agent building tutorial. You can use this .py file to check how hello-bot agent works. Returns: agent: Agent capable of handling several simple greetings.
codesearchnet
def get_sample(self, md5): if (len(md5) < 32): md5 = self.get_full_md5(md5, self.sample_collection) sample_info = self.database[self.sample_collection].find_one({'md5': md5}) if (not sample_info): return None try: grid_fs_id = sample_info['__grid_fs'] sample_info = self.clean_for_serialization(sample_info) sample_info.update({'raw_bytes': self.gridfs_handle.get(grid_fs_id).read()}) return sample_info except gridfs.errors.CorruptGridFile: self.database[self.sample_collection].update({'md5': md5}, {'md5': None}) return None
Get the sample from the data store. This method first fetches the data from datastore, then cleans it for serialization and then updates it with 'raw_bytes' item. Args: md5: The md5 digest of the sample to be fetched from datastore. Returns: The sample dictionary or None
codesearchnet
def read_vocab_file(file_path): with file_io.FileIO(file_path, 'r') as f: vocab_pd = pd.read_csv(f, header=None, names=['vocab', 'count'], dtype=str, na_filter=False) vocab = vocab_pd['vocab'].tolist() ex_count = vocab_pd['count'].astype(int).tolist() return (vocab, ex_count)
Reads a vocab file to memory.

Args:
    file_path: Each line of the vocab is in the form "token,example_count"

Returns:
    Two lists, one for the vocab, and one for just the example counts.
codesearchnet
def save_output(results, output_directory='output'): aggregate_reports = results['aggregate_reports'] forensic_reports = results['forensic_reports'] if os.path.exists(output_directory): if (not os.path.isdir(output_directory)): raise ValueError('{0} is not a directory'.format(output_directory)) else: os.makedirs(output_directory) with open('{0}'.format(os.path.join(output_directory, 'aggregate.json')), 'w', newline='\n', encoding='utf-8') as agg_json: agg_json.write(json.dumps(aggregate_reports, ensure_ascii=False, indent=2)) with open('{0}'.format(os.path.join(output_directory, 'aggregate.csv')), 'w', newline='\n', encoding='utf-8') as agg_csv: csv = parsed_aggregate_reports_to_csv(aggregate_reports) agg_csv.write(csv) with open('{0}'.format(os.path.join(output_directory, 'forensic.json')), 'w', newline='\n', encoding='utf-8') as for_json: for_json.write(json.dumps(forensic_reports, ensure_ascii=False, indent=2)) with open('{0}'.format(os.path.join(output_directory, 'forensic.csv')), 'w', newline='\n', encoding='utf-8') as for_csv: csv = parsed_forensic_reports_to_csv(forensic_reports) for_csv.write(csv) samples_directory = os.path.join(output_directory, 'samples') if (not os.path.exists(samples_directory)): os.makedirs(samples_directory) sample_filenames = [] for forensic_report in forensic_reports: sample = forensic_report['sample'] message_count = 0 parsed_sample = forensic_report['parsed_sample'] subject = parsed_sample['filename_safe_subject'] filename = subject while (filename in sample_filenames): message_count += 1 filename = '{0} ({1})'.format(subject, message_count) sample_filenames.append(filename) filename = '{0}.eml'.format(filename) path = os.path.join(samples_directory, filename) with open(path, 'w', newline='\n', encoding='utf-8') as sample_file: sample_file.write(sample)
Save report data in the given directory

Args:
    results (OrderedDict): Parsing results
    output_directory: The path to the directory to save in
codesearchnet
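The sample-naming loop near the end of `save_output` can be isolated as a small helper (a sketch of the same " (n)" suffixing logic):

```python
def unique_filename(subject, taken):
    # Append " (n)" until the name is free, then record and return it
    # with the .eml extension, as in the forensic-sample loop above.
    count = 0
    filename = subject
    while filename in taken:
        count += 1
        filename = "{0} ({1})".format(subject, count)
    taken.append(filename)
    return "{0}.eml".format(filename)

names = []
print(unique_filename("report", names))
print(unique_filename("report", names))
print(unique_filename("report", names))
```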
def validate_restore_function(trackable, registered_name): try: _saver_registry.name_lookup(registered_name) except LookupError: raise ValueError(f"Error when restoring object {trackable} from checkpoint. This object was saved using a registered saver named '{registered_name}', but this saver cannot be found in the current context.") if not _saver_registry.get_predicate(registered_name)(trackable): raise ValueError(f"Object {trackable} was saved with the registered saver named '{registered_name}'. However, this saver cannot be used to restore the object because the predicate does not pass.")
Validates whether the trackable can be restored with the saver. When using a checkpoint saved with a registered saver, that same saver must also be also registered when loading. The name of that saver is saved to the checkpoint and set in the `registered_name` arg. Args: trackable: A `Trackable` object. registered_name: String name of the expected registered saver. This argument should be set using the name saved in a checkpoint. Raises: ValueError if the saver could not be found, or if the predicate associated with the saver does not pass.
github-repos
def check_whitelist(host, whitelist): if (':' not in host): host = (host + ':80') if (host in whitelist): return True return any((match_host(host, pattern) for pattern in whitelist))
Check a given request host against a whitelist. Args: host (str) : A host string to compare against a whitelist. If the host does not specify a port, then ``":80"`` is implicitly assumed. whitelist (seq[str]) : A list of host patterns to match against Returns: ``True``, if ``host`` matches any pattern in ``whitelist``, otherwise ``False``
codesearchnet
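A runnable sketch of the whitelist check, with `fnmatch` standing in for the real `match_host` (an assumption; the actual pattern syntax may differ):

```python
from fnmatch import fnmatch

def match_host(host, pattern):
    # Hypothetical matcher: glob-style patterns such as "*.example.com:80".
    return fnmatch(host, pattern)

def check_whitelist(host, whitelist):
    if ":" not in host:
        host = host + ":80"  # port 80 is implicitly assumed
    if host in whitelist:
        return True
    return any(match_host(host, pattern) for pattern in whitelist)

print(check_whitelist("example.com", ["example.com:80"]))
print(check_whitelist("evil.com:80", ["*.example.com:80"]))
```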
def CompleteBreakpoint(self, breakpoint_id): with self._lock: self._completed.add(breakpoint_id) if (breakpoint_id in self._active): self._active.pop(breakpoint_id).Clear()
Marks the specified breakpoint as completed.

Appends the ID to set of completed breakpoints and clears it.

Args:
    breakpoint_id: breakpoint ID to complete.
codesearchnet
def merge(self, ts):
    if ts.shape[1:] != self.shape[1:]:
        raise ValueError('Timeseries to merge must have compatible shapes')
    indices = np.hstack((self.tspan, ts.tspan)).argsort()
    return np.vstack((self, ts))[indices]
Merge another timeseries with this one Arguments: ts (Timeseries): The two timeseries being merged must have the same shape except for axis 0. Returns: Resulting merged timeseries which can have duplicate time points.
juraj-google-style
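The argsort-based merge idea can be sketched without NumPy: sort the concatenated time axis and reorder the stacked values with the resulting indices.

```python
def merge_by_time(t1, v1, t2, v2):
    # Concatenate both time axes, compute the sort order, and apply
    # that order to the concatenated values -- the argsort trick above.
    times = t1 + t2
    values = v1 + v2
    order = sorted(range(len(times)), key=lambda i: times[i])
    return [times[i] for i in order], [values[i] for i in order]

t, v = merge_by_time([0.0, 2.0], ["a", "c"], [1.0, 3.0], ["b", "d"])
print(t, v)
```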
def reset_logformat_timestamped(logger: logging.Logger, extraname: str = "", level: int = logging.INFO) -> None: namebit = extraname + ":" if extraname else "" fmt = ("%(asctime)s.%(msecs)03d:%(levelname)s:%(name)s:" + namebit + "%(message)s") reset_logformat(logger, fmt=fmt) logger.setLevel(level)
Apply a simple time-stamped log format to an existing logger, and set its log level (``logging.INFO`` by default). Args: logger: logger to modify extraname: additional name to append to the logger's name level: log level to set
juraj-google-style
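A standalone sketch of the timestamped format string above (without the `extraname` insert), applied to a handler whose output can be inspected; the `datefmt` here is an illustrative choice, not taken from the source:

```python
# Show the timestamped format on an isolated handler so the rendered
# log line can be inspected directly.
import io
import logging

fmt = "%(asctime)s.%(msecs)03d:%(levelname)s:%(name)s:%(message)s"
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter(fmt, datefmt="%Y-%m-%d %H:%M:%S"))

logger = logging.getLogger("demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("hello")

line = stream.getvalue().strip()
```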
def parse_meta(meta): resources = {} for name in meta: if name.startswith("$"): continue resources[name] = resource = {} for action in meta[name]: if action.startswith("$"): continue url, httpmethod = res_to_url(name, action) resource[action] = { "url": url, "method": httpmethod } url_prefix = meta.get("$url_prefix", "").rstrip("/") return url_prefix, meta["$auth"]["header"].lower(), resources
Parse metadata of API Args: meta: metadata of API Returns: tuple(url_prefix, auth_header, resources)
juraj-google-style
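A runnable sketch of `parse_meta`; since `res_to_url` is not shown in this entry, a trivial stand-in mapping is assumed here ("get" maps to GET on `/<name>`, anything else to POST on `/<name>/<action>`):

```python
# Usage sketch of the metadata-parsing pattern above.
# `res_to_url` is an assumed stand-in, not the real helper.
def res_to_url(name, action):
    if action == 'get':
        return '/' + name, 'GET'
    return '/{}/{}'.format(name, action), 'POST'

def parse_meta(meta):
    resources = {}
    for name in meta:
        if name.startswith('$'):
            continue
        resources[name] = resource = {}
        for action in meta[name]:
            if action.startswith('$'):
                continue
            url, httpmethod = res_to_url(name, action)
            resource[action] = {'url': url, 'method': httpmethod}
    url_prefix = meta.get('$url_prefix', '').rstrip('/')
    return url_prefix, meta['$auth']['header'].lower(), resources

meta = {
    '$url_prefix': '/api/',
    '$auth': {'header': 'Authorization'},
    'user': {'get': {}, 'create': {}, '$desc': 'users'},
}
prefix, auth_header, resources = parse_meta(meta)
```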
def _ConvertRowToUnicode(self, parser_mediator, row):
    for key, value in iter(row.items()):
        if isinstance(value, py2to3.UNICODE_TYPE):
            continue
        try:
            row[key] = value.decode(self._encoding)
        except UnicodeDecodeError:
            replaced_value = value.decode(self._encoding, errors='replace')
            parser_mediator.ProduceExtractionWarning(
                'error decoding DSV value: {0:s} as {1:s}, characters have been '
                'replaced in {2:s}'.format(key, self._encoding, replaced_value))
            row[key] = replaced_value
    return row
Converts all strings in a DSV row dict to Unicode. Args: parser_mediator (ParserMediator): mediates interactions between parsers and other components, such as storage and dfvfs. row (dict[str, bytes]): a row from a DSV file, where the dictionary key contains the column name and the value a binary string. Returns: dict[str, str]: a row from the DSV file, where the dictionary key contains the column name and the value a Unicode string.
codesearchnet
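The decode-with-fallback pattern used by `_ConvertRowToUnicode` can be shown standalone with plain `bytes.decode`: try a strict decode first, and on failure decode again with `errors='replace'` so undecodable bytes become U+FFFD replacement characters.

```python
# Strict decode first, then a lossy fallback that substitutes
# replacement characters for undecodable bytes.
def to_unicode(value, encoding):
    if isinstance(value, str):
        return value
    try:
        return value.decode(encoding)
    except UnicodeDecodeError:
        return value.decode(encoding, errors='replace')

row = {'ok': b'hello', 'bad': b'caf\xe9'}  # 0xe9 is invalid UTF-8 here
decoded = {key: to_unicode(val, 'utf-8') for key, val in row.items()}
```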
def from_surface(renderer, surface):
    texture = object.__new__(Texture)
    texture._ptr = check_ptr_err(
        lib.SDL_CreateTextureFromSurface(renderer._ptr, surface._ptr))
    return texture
Create a texture from an existing surface. Args: renderer (Renderer): The rendering context in which to create the texture. surface (Surface): The surface containing pixel data used to fill the texture. Returns: Texture: A texture containing the pixels from surface. Raises: SDLError: If an error is encountered.
juraj-google-style
def parsemeta(metadataloc): if os.path.isdir(metadataloc): metalist = glob.glob(os.path.join(metadataloc, METAPATTERN)) if not metalist: raise MTLParseError( "No files matching metadata file pattern in directory %s." % metadataloc) elif len(metalist) > 0: metadatafn = metalist[0] filehandle = open(metadatafn, 'r') if len(metalist) > 1: logging.warning( "More than one file in directory match metadata " + "file pattern. Using %s." % metadatafn) elif os.path.isfile(metadataloc): metadatafn = metadataloc filehandle = open(metadatafn, 'r') logging.info("Using file %s." % metadatafn) elif 'L1_METADATA_FILE' in metadataloc: filehandle = StringIO(metadataloc) else: raise MTLParseError( "File location %s is unavailable " % metadataloc + "or doesn't contain a suitable metadata file.") status = 0 metadata = {} grouppath = [] dictpath = [metadata] for line in filehandle: if status == 4: logging.warning( "Metadata file %s appears to " % metadatafn + "have extra lines after the end of the metadata. " + "This is probably, but not necessarily, harmless.") status = _checkstatus(status, line) grouppath, dictpath = _transstat(status, grouppath, dictpath, line) return metadata
Parses the metadata from a Landsat image bundle. Arguments: metadataloc: a filename or a directory. Returns metadata dictionary
juraj-google-style
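A minimal sketch of the `GROUP`/`END_GROUP` nesting that Landsat MTL metadata uses; the real parser above also tracks a parse status and handles more value types, which are omitted here:

```python
# Toy MTL-style parser: GROUP pushes a nested dict, END_GROUP pops,
# everything else is a key = value assignment.
def parse_mtl(lines):
    metadata = {}
    stack = [metadata]
    for line in lines:
        line = line.strip()
        if not line or line == 'END':
            continue
        key, _, value = line.partition('=')
        key, value = key.strip(), value.strip().strip('"')
        if key == 'GROUP':
            group = {}
            stack[-1][value] = group
            stack.append(group)
        elif key == 'END_GROUP':
            stack.pop()
        else:
            stack[-1][key] = value
    return metadata

text = [
    'GROUP = L1_METADATA_FILE',
    '  SPACECRAFT_ID = "LANDSAT_8"',
    'END_GROUP = L1_METADATA_FILE',
    'END',
]
meta = parse_mtl(text)
```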
def _all_reduce(self, reduce_op, value, replica_id, options): raise NotImplementedError('_all_reduce must be implemented in descendants.')
All-reduce the `value` across all replicas so that all get the result. `value` can be a nested structure of tensors or `IndexedSlices`. The implementation should generally batch the all-reduces when possible. `options` can be set to hint the batching behavior. This API must be called in a replica context. Args: reduce_op: A `tf.distribute.ReduceOp` value specifying how values should be combined. value: Value to be reduced. A tensor or a nested structure of tensors or `IndexedSlices`. replica_id: An integer indicating the id of the replica where this all_reduce is called under. This is the local replica id that ranges from 0 to len(local_devices) - 1. options: A `tf.distribute.experimental.CommunicationOptions`. Returns: A tensor/IndexedSlices or a nested structure of tensors/IndexedSlices with the reduced values. The structure is the same as `value`.
github-repos
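A toy, in-process sketch of the all-reduce contract: every replica contributes a value and every replica receives the same combined result. Real implementations batch and overlap communication, which this sketch ignores.

```python
# Fold all per-replica values with the reduce op, then hand every
# replica its own copy of the combined result.
from functools import reduce
import operator

def all_reduce(reduce_op, per_replica_values):
    combined = reduce(reduce_op, per_replica_values)
    return [combined for _ in per_replica_values]

results = all_reduce(operator.add, [1, 2, 3, 4])
```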
def get_container_setting(name, container, settings): ret = dict() ps_cmd = list() ps_cmd_validate = list() container_path = 'IIS:\\{0}\\{1}'.format(container, name) if (not settings): log.warning('No settings provided') return ret ps_cmd.append('$Settings = @{};') for setting in settings: ps_cmd_validate.extend(['Get-ItemProperty', '-Path', "'{0}'".format(container_path), '-Name', "'{0}'".format(setting), '-ErrorAction', 'Stop', '|', 'Out-Null;']) ps_cmd.append("$Property = Get-ItemProperty -Path '{0}'".format(container_path)) ps_cmd.append("-Name '{0}' -ErrorAction Stop;".format(setting)) ps_cmd.append('if (([String]::IsNullOrEmpty($Property) -eq $False) -and') ps_cmd.append("($Property.GetType()).Name -eq 'ConfigurationAttribute') {") ps_cmd.append('$Property = $Property | Select-Object') ps_cmd.append('-ExpandProperty Value };') ps_cmd.append("$Settings['{0}'] = [String] $Property;".format(setting)) ps_cmd.append('$Property = $Null;') cmd_ret = _srvmgr(cmd=ps_cmd_validate, return_json=True) if (cmd_ret['retcode'] != 0): message = 'One or more invalid property names were specified for the provided container.' raise SaltInvocationError(message) ps_cmd.append('$Settings') cmd_ret = _srvmgr(cmd=ps_cmd, return_json=True) try: items = salt.utils.json.loads(cmd_ret['stdout'], strict=False) if isinstance(items, list): ret.update(items[0]) else: ret.update(items) except ValueError: raise CommandExecutionError('Unable to parse return data as Json.') return ret
Get the value of the setting for the IIS container. .. versionadded:: 2016.11.0 Args: name (str): The name of the IIS container. container (str): The type of IIS container. The container types are: AppPools, Sites, SslBindings settings (list): A list of the setting names to look up. Returns: dict: A dictionary of the provided settings and their values. CLI Example: .. code-block:: bash salt '*' win_iis.get_container_setting name='MyTestPool' container='AppPools' settings="['processModel.identityType']"
codesearchnet
def __getattr__(self, name: str): if name.startswith('__'): raise AttributeError(name) attr = getattr(self._builder, name) if isinstance(attr, expressions.Builder) and self._sealed: raise self._fhir_path_sealed_error(name) return ColumnExpressionBuilder._wrap_any(self, attr)
Redirects to the expressions.Builder when the attribute is not found here. Note that in Python, '__getattribute__' always gets called first (the highest priority). Thus attributes which have already been defined in this class won't be redirected to the expressions.Builder. Args: name: The attribute name as a string. Returns: The attribute retrieved from expressions.Builder, wrapped with _wrap_any. Raises: AttributeError: if the FHIR path in this class is already sealed, or if getting the attribute from self._builder fails.
github-repos
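The delegation pattern can be sketched with a generic wrapper; this simplification raises `AttributeError` for the sealed case rather than the FHIR-specific error used above, and the class names are illustrative:

```python
# Unknown attributes fall through to a wrapped object; dunder lookups
# are refused; a "sealed" wrapper rejects further delegation.
class Wrapper:
    def __init__(self, inner, sealed=False):
        self._inner = inner
        self._sealed = sealed

    def __getattr__(self, name):
        # Only called when normal lookup fails, so _inner/_sealed
        # (set in __init__) never reach this path.
        if name.startswith('__'):
            raise AttributeError(name)
        if self._sealed:
            raise AttributeError('wrapper is sealed: ' + name)
        return getattr(self._inner, name)

w = Wrapper('hello')
upper = w.upper()  # delegated to str.upper
```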
def visit_statements(self, nodes): for node in nodes: if isinstance(node, gast.AST): self.to_prepend.append(deque()) self.to_append.append(deque()) node = self.visit(node) self.visit_statements(self.to_prepend.pop()) if isinstance(node, gast.AST): self.to_insert[-1].append(node) elif node: self.to_insert[-1].extend(node) self.visit_statements(self.to_append.pop()) else: self.to_insert[-1].append(node) return self.to_insert[-1]
Visit a series of nodes in a node body. This function is factored out so that it can be called recursively on statements that are appended or prepended. This allows e.g. a nested expression to prepend a statement, and that statement can prepend a statement again, etc. Args: nodes: A list of statements. Returns: A list of transformed statements.
juraj-google-style
def wait_while_reachable(self, servers, timeout=60):
    t_start = time.time()
    while True:
        try:
            for server in servers:
                server_info = self.connection(
                    hostname=server, timeout=5).admin.command('ismaster')
                logger.debug("server_info: {server_info}".format(server_info=server_info))
                if int(server_info['ok']) != 1:
                    raise pymongo.errors.OperationFailure(
                        "{server} is not reachable".format(server=server))
            return True
        except (KeyError, AttributeError, pymongo.errors.AutoReconnect,
                pymongo.errors.OperationFailure):
            if time.time() - t_start > timeout:
                return False
            time.sleep(0.1)
Wait until all servers are reachable. Args: servers - list of servers
juraj-google-style
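The poll-until-ready loop generalizes beyond MongoDB; a sketch with an injectable check function (all names here are illustrative):

```python
# Repeatedly run a check, return True as soon as it passes, and give
# up with False once the timeout elapses.
import time

def wait_until(check, timeout=1.0, interval=0.01):
    t_start = time.time()
    while True:
        if check():
            return True
        if time.time() - t_start > timeout:
            return False
        time.sleep(interval)

attempts = []
def flaky():
    attempts.append(1)
    return len(attempts) >= 3  # succeeds on the third poll

ok = wait_until(flaky, timeout=1.0)
```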
def write_vasp_input(self, vasp_input_set=MPRelaxSet, output_dir=".", create_directory=True, **kwargs): vasp_input_set(self.final_structure, **kwargs).write_input( output_dir, make_dir_if_not_present=create_directory) with open(os.path.join(output_dir, "transformations.json"), "w") as fp: json.dump(self.as_dict(), fp)
Writes VASP input to an output_dir. Args: vasp_input_set: pymatgen.io.vaspio_set.VaspInputSet like object that creates vasp input files from structures output_dir: Directory to output files create_directory: Create the directory if not present. Defaults to True. \\*\\*kwargs: All keyword args supported by the VASP input set.
juraj-google-style
def shape(input, name=None, out_type=None): if out_type is None: if flags.config().tf_shape_default_int64.value(): out_type = dtypes.int64 else: out_type = dtypes.int32 return shape_internal(input, name, optimize=True, out_type=out_type)
Returns the shape of a tensor. This operation returns a 1-D integer tensor representing the shape of `input`. For example: ```python t = tf.constant([[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]) tf.shape(t) # [2, 2, 3] ``` Args: input: A `Tensor` or `SparseTensor`. name: A name for the operation (optional). out_type: (Optional) The specified output type of the operation (`int32` or `int64`). Defaults to `tf.int32`. Returns: A `Tensor` of type `out_type`.
github-repos
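What `tf.shape` computes for the docstring's example can be mimicked in pure Python for a rectangular nested list (this walks first elements only, so ragged input is not detected):

```python
# Record the length at each nesting level by following first elements.
def shape(nested):
    dims = []
    while isinstance(nested, list):
        dims.append(len(nested))
        nested = nested[0]
    return dims

t = [[[1, 1, 1], [2, 2, 2]],
     [[3, 3, 3], [4, 4, 4]]]
dims = shape(t)
```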
def covariance_to_correlations(covariance):
    diagonal_ind = np.arange(covariance.shape[1])
    diagonal_els = covariance[:, diagonal_ind, diagonal_ind]
    result = covariance / np.sqrt(diagonal_els[:, :, None] * diagonal_els[:, None, :])
    result[np.isinf(result)] = 0
    return np.clip(np.nan_to_num(result), -1, 1)
Transform a covariance matrix into a correlations matrix. This can be seen as dividing a covariance matrix by the outer product of the diagonal. As post processing we replace the infinities and the NaNs with zeros and clip the result to [-1, 1]. Args: covariance (ndarray): a matrix of shape (n, p, p) with for n problems the covariance matrix of shape (p, p). Returns: ndarray: the correlations matrix
codesearchnet
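The same transform for a single `(p, p)` matrix without NumPy, following r_ij = cov_ij / sqrt(cov_ii * cov_jj) — i.e. the division by the outer product of the diagonal described above. The infinity/NaN handling and clipping are omitted in this sketch.

```python
import math

def covariance_to_correlations(cov):
    n = len(cov)
    diag = [cov[i][i] for i in range(n)]
    return [[cov[i][j] / math.sqrt(diag[i] * diag[j]) for j in range(n)]
            for i in range(n)]

cov = [[4.0, 2.0],
       [2.0, 9.0]]
corr = covariance_to_correlations(cov)
```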
def delete_unspent_outputs(self, *unspent_outputs):
    if unspent_outputs:
        return backend.query.delete_unspent_outputs(
            self.connection, *unspent_outputs)
Deletes the given ``unspent_outputs`` (utxos). Args: *unspent_outputs (:obj:`tuple` of :obj:`dict`): Variable length tuple or list of unspent outputs.
juraj-google-style
def init_app(self, app):
    app.url_rule_class = partial(NavigationRule, copilot=self)
    app.context_processor(self.inject_context)
Register the extension with the application. Args: app (flask.Flask): The application to register with.
juraj-google-style
def _get_ssm_parameter(self, p):
    try:
        response = self._ssm.get_parameter(Name=p, WithDecryption=True)
        return response.get('Parameter', {}).get('Value', None)
    except Exception as ruh_roh:
        logging.error(ruh_roh, exc_info=False)
        return None
Get parameters from Simple Systems Manager Args: p - a parameter name Returns: a value, decrypted if needed, if successful or None if things go sideways.
juraj-google-style
def GrabObject(self, identifier):
    if identifier not in self._values:
        raise KeyError('Missing cached object for identifier: {0:s}'.format(identifier))
    cache_value = self._values[identifier]
    if not cache_value:
        raise RuntimeError('Missing cache value for identifier: {0:s}'.format(identifier))
    cache_value.IncrementReferenceCount()
Grabs a cached object based on the identifier. This method increments the cache value reference count. Args: identifier (str): VFS object identifier. Raises: KeyError: if the VFS object is not found in the cache. RuntimeError: if the cache value is missing.
codesearchnet
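A sketch of the reference-counted cache pattern behind `GrabObject`; the `put` helper and any eviction policy are assumptions for illustration, not part of the source:

```python
# Grabbing an object bumps its reference count; a matching release
# helper would decrement it and allow eviction at zero (not shown).
class CacheValue:
    def __init__(self, obj):
        self.obj = obj
        self.refs = 0

class ObjectCache:
    def __init__(self):
        self._values = {}

    def put(self, identifier, obj):
        self._values[identifier] = CacheValue(obj)

    def grab(self, identifier):
        if identifier not in self._values:
            raise KeyError('Missing cached object for identifier: {0}'.format(identifier))
        value = self._values[identifier]
        value.refs += 1
        return value.obj

cache = ObjectCache()
cache.put('vfs:/x', object())
obj = cache.grab('vfs:/x')
```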
def extract_paths_dead(self, paths, ignore_nopath):
    if not self._has_guestfs:
        raise LagoException(
            'guestfs module not available, cannot extract files with libguestfs')
    LOGGER.debug('%s: attempting to extract files with libguestfs', self.vm.name())
    guestfs_tools.extract_paths(
        disk_path=self.vm.spec['disks'][0]['path'],
        disk_root=self.vm.spec['disks'][0]['metadata'].get('root-partition', 'root'),
        paths=paths,
        ignore_nopath=ignore_nopath)
Extract the given paths from the domain using guestfs. Using guestfs can have side effects and should be used as a second option, mainly when SSH is not available. Args: paths(list of str): paths to extract ignore_nopath(boolean): if True, non-existing paths will be ignored. Returns: None Raises: :exc:`~lago.utils.LagoException`: if :mod:`guestfs` is not importable. :exc:`~lago.plugins.vm.ExtractPathNoPathError`: if a non-existing path was found on the VM, and `ignore_nopath` is False. :exc:`~lago.plugins.vm.ExtractPathError`: on failure extracting the files.
codesearchnet
def _get_filters(nodes, context):
    filters = []
    for node in nodes:
        for filter_block in sql_context_helpers.get_filters(node, context):
            filter_sql_expression = _transform_filter_to_sql(filter_block, node, context)
            filters.append(filter_sql_expression)
    return filters
Get filters to apply to a list of SqlNodes. Args: nodes: List[SqlNode], the SqlNodes to get filters for. context: CompilationContext, global compilation state and metadata. Returns: List[Expression], list of SQLAlchemy expressions.
codesearchnet
def generate_func_call(name, args=None, kwargs=None):
    all_args = []
    if args:
        all_args.extend(args)
    if kwargs:
        all_args.extend('{}={}'.format(k, v) for (k, v) in kwargs if v is not None)
    return '{}({})'.format(name, ', '.join(all_args))
Generates code to call a function. Args: name (str): The function name. args (list[str]): Each positional argument. kwargs (list[tuple]): Each tuple is (arg: str, value: str). If value is None, then the keyword argument is omitted. Otherwise, if the value is not a string, then str() is called on it. Returns: str: Code to call a function.
codesearchnet
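A usage sketch of `generate_func_call`, re-including the definition so the example is self-contained; note how a `None`-valued keyword argument is dropped:

```python
# Build a source-code string for a function call from positional and
# keyword argument pieces.
def generate_func_call(name, args=None, kwargs=None):
    all_args = []
    if args:
        all_args.extend(args)
    if kwargs:
        all_args.extend('{}={}'.format(k, v)
                        for k, v in kwargs if v is not None)
    return '{}({})'.format(name, ', '.join(all_args))

call = generate_func_call('request', ['url'], [('timeout', '30'), ('retries', None)])
```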
def call(self, y_true, y_pred): raise NotImplementedError('Must be implemented in subclasses.')
Invokes the `Loss` instance. Args: y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`, except sparse loss functions such as sparse categorical crossentropy where shape = `[batch_size, d0, .. dN-1]` y_pred: The predicted values. shape = `[batch_size, d0, .. dN]` Returns: Loss values with the shape `[batch_size, d0, .. dN-1]`.
github-repos