Columns: code (string, lengths 20 to 4.93k), docstring (string, lengths 33 to 1.27k), source (string, 3 classes)
def item_at(self, row, column): return self.children[str(row)].children[str(column)]
Returns the TableItem instance at row, column coordinates Args: row (int): zero based index column (int): zero based index
juraj-google-style
def _get_remote(self, config, name): from dvc.remote import Remote remote = config.get(name) if (not remote): return None settings = self.repo.config.get_remote_settings(remote) return Remote(self.repo, settings)
The config file is stored in a way that allows you to have a cache for each remote. This is needed when specifying external outputs (as they require you to have an external cache location). Imagine a config file like the following: [remote "dvc-storage"] url = ssh://localhost/tmp ask_password = true [cache] ssh = dvc-storage This method resolves the name under the cache section into the correct Remote instance. Args: config (dict): The cache section of the config file name (str): Name of the section we are interested in retrieving Returns: remote (dvc.Remote): Remote instance that the section refers to. None when there's no remote with that name. Example: >>> _get_remote(config={'ssh': 'dvc-storage'}, name='ssh')
codesearchnet
def ReadFileObject(self, artifacts_reader, file_object): for artifact_definition in artifacts_reader.ReadFileObject(file_object): self.RegisterDefinition(artifact_definition)
Reads artifact definitions into the registry from a file-like object. Args: artifacts_reader (ArtifactsReader): an artifacts reader. file_object (file): file-like object to read from.
juraj-google-style
def exceptions_raised(self): return self._exceptions_raised
Exceptions raised but not handled by the `QueueRunner` threads. Exceptions raised in queue runner threads are handled in one of two ways depending on whether or not a `Coordinator` was passed to `create_threads()`: * With a `Coordinator`, exceptions are reported to the coordinator and forgotten by the `QueueRunner`. * Without a `Coordinator`, exceptions are captured by the `QueueRunner` and made available in this `exceptions_raised` property. Returns: A list of Python `Exception` objects. The list is empty if no exception was captured. (No exceptions are captured when using a Coordinator.)
github-repos
def add_link_to_self(self, source, weight): if (not isinstance(source, list)): source = [source] for source_node in source: source_node.add_link(self, weight=weight)
Create and add a ``Link`` from a source node to ``self``. Args: source (Node): The node that will own the new ``Link`` pointing to ``self`` weight (int or float): The weight of the newly created ``Link`` Returns: None Example: >>> node_1 = Node('One') >>> node_2 = Node('Two') >>> node_1.add_link_to_self(node_2, 5) >>> new_link = node_2.link_list[0] >>> print('{} {}'.format(new_link.target.value, new_link.weight)) One 5 >>> print(new_link) node.Link instance pointing to node with value "One" with weight 5
codesearchnet
def get(identifier): if identifier is None: return None elif isinstance(identifier, dict): obj = deserialize(identifier) elif isinstance(identifier, str): config = {'class_name': identifier, 'config': {}} obj = deserialize(config) else: obj = identifier if isinstance(obj, Optimizer): return obj raise ValueError(f'Could not interpret optimizer identifier: {identifier}')
Retrieves a Keras Optimizer instance. Args: identifier: Optimizer identifier, one of: - String: name of an optimizer - Dictionary: configuration dictionary. - Keras Optimizer instance (it will be returned unchanged). Returns: A Keras Optimizer instance.
github-repos
def global_step(device=''): global_step_ref = tf.get_collection(tf.GraphKeys.GLOBAL_STEP) if global_step_ref: return global_step_ref[0] else: collections = [VARIABLES_TO_RESTORE, tf.GraphKeys.GLOBAL_VARIABLES, tf.GraphKeys.GLOBAL_STEP] with tf.device(variable_device(device, 'global_step')): return tf.get_variable('global_step', shape=[], dtype=tf.int64, initializer=tf.zeros_initializer(), trainable=False, collections=collections)
Returns the global step variable. Args: device: Optional device to place the variable. It can be a string or a function that is called to get the device for the variable. Returns: the tensor representing the global step variable.
codesearchnet
def add_all_database_reactions(model, compartments): added = set() for rxnid in model.database.reactions: reaction = model.database.get_reaction(rxnid) if all(((compound.compartment in compartments) for (compound, _) in reaction.compounds)): if (not model.has_reaction(rxnid)): added.add(rxnid) model.add_reaction(rxnid) return added
Add all reactions from database that occur in given compartments. Args: model: :class:`psamm.metabolicmodel.MetabolicModel`. compartments: Iterable of compartment IDs in which the reactions must occur. Returns: Set of IDs of the reactions that were added.
codesearchnet
def _var_key(var): if hasattr(var, '_distributed_container'): var = var._distributed_container() if var._in_graph_mode: return var._shared_name return var._unique_id
Key for representing a primary variable, for looking up slots. In graph mode the name is derived from the var shared name. In eager mode the name is derived from the var unique id. If distribution strategy exists, get the primary variable first. Args: var: the variable. Returns: the unique name of the variable.
github-repos
def __init__(self, task_type=None, task_id=None, rpc_layer=None, environment=None): self._task_type = task_type self._task_id = task_id self._rpc_layer = rpc_layer self._environment = environment
Creates a new TFConfigClusterResolver. Args: task_type: (String, optional) Overrides the task type specified in the TF_CONFIG environment variable. task_id: (Integer, optional) Overrides the task index specified in the TF_CONFIG environment variable. rpc_layer: (String, optional) Overrides the rpc layer TensorFlow uses. environment: (String, optional) Overrides the environment TensorFlow operates in.
github-repos
def parametrize_xnp(*, with_none: bool=False, restrict: Optional[Iterable[str]]=None, skip: Optional[Iterable[str]]=None) -> Callable[[_FnT], _FnT]: name_to_modules = {'np': lambda: np, 'jnp': lambda: lazy.jnp, 'tnp': lambda: lazy.tnp, 'torch': lambda: lazy.torch} keep = _normalize_set(restrict, default=name_to_modules, valid=name_to_modules) skip = _normalize_set(skip, default=[], valid=name_to_modules) name_to_modules = {k: v() for k, v in name_to_modules.items() if k not in skip and k in keep} if with_none: name_to_modules['no_np'] = None return pytest.mark.parametrize('xnp', list(name_to_modules.values()), ids=list(name_to_modules.keys()))
Parametrize over the numpy modules. Args: with_none: If `True`, also yield `None` among the values (to test `list`) restrict: If given, only test the given module (e.g. `restrict=['jnp']`) skip: If given, skip the given module from test (e.g. `skip=['torch']`) Returns: The fixture to apply to the `def test_xyz()` function
github-repos
def __init__(self, kind, required=False, default_factory=None, can_be_none=False): if required and default_factory is not None: raise ValueError("No default_factory value when option is required.") self.kind = kind self.required = required self.default_factory = default_factory self.can_be_none = can_be_none
Init. Args: kind: type of the option. required: whether user is required to supply a value. default_factory: a factory, when called, returns the default value. can_be_none: whether value can be None. Raises: ValueError: if arguments aren't compatible.
juraj-google-style
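The constraint above (a required option may not also carry a default factory) can be exercised with a standalone sketch; the class name `Option` is taken from the docstring, the rest mirrors the snippet:

```python
class Option:
    """Minimal sketch of the option descriptor described above."""

    def __init__(self, kind, required=False, default_factory=None, can_be_none=False):
        # A required option must be supplied by the user, so a default
        # value would never be used; reject the contradictory combination.
        if required and default_factory is not None:
            raise ValueError("No default_factory value when option is required.")
        self.kind = kind
        self.required = required
        self.default_factory = default_factory
        self.can_be_none = can_be_none
```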
def __init__(self, existing_stack: Optional[list[TraceableObject[T]]]=None): self._stack: list[TraceableObject[T]] = existing_stack[:] if existing_stack else []
Constructor. Args: existing_stack: [TraceableObject, ...] If provided, this object will set its new stack to a SHALLOW COPY of existing_stack.
github-repos
def __init__(self, max_simultaneous_downloads=50, checksumer=None): self._executor = concurrent.futures.ThreadPoolExecutor( max_workers=max_simultaneous_downloads) self._checksumer = checksumer or hashlib.sha256 self._pbar_url = None self._pbar_dl_size = None
Init _Downloader instance. Args: max_simultaneous_downloads: `int`, max number of simultaneous downloads. checksumer: `hashlib.HASH`. Defaults to `hashlib.sha256`.
juraj-google-style
def init_benchmarks(n_values=None): if (n_values is None): n_values = (0, 5, 50, 250, 1000, 5000, 10000) string_tables = {n: gen_string_table(n) for n in n_values} regexs = gen_regex_table() data = [] for n in n_values: for id in xrange(len(regexs)): regex = regexs[id] string = string_tables[n][id] data.append((regex, string)) return data
Initialize the strings we'll run the regexes against. The strings used in the benchmark are prefixed and suffixed by strings that are repeated n times. The sequence n_values contains the values for n. If n_values is None the values of n from the original benchmark are used. The generated list of strings is cached in the string_tables variable, which is indexed by n. Returns: A list of (regex, string) pairs to benchmark.
codesearchnet
def diagonalize_real_symmetric_matrix( matrix: np.ndarray, *, rtol: float = 1e-5, atol: float = 1e-8) -> np.ndarray: if np.any(np.imag(matrix) != 0) or not predicates.is_hermitian(matrix): raise ValueError('Input must be real and symmetric.') _, result = np.linalg.eigh(matrix) return result
Returns an orthogonal matrix that diagonalizes the given matrix. Args: matrix: A real symmetric matrix to diagonalize. rtol: Relative numerical error tolerance. atol: Absolute numerical error tolerance. Returns: An orthogonal matrix P such that P.T @ matrix @ P is diagonal. Raises: ValueError: Matrix isn't real symmetric.
juraj-google-style
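The property in the Returns clause is easy to check numerically; here is a sketch using only NumPy (the `predicates.is_hermitian` helper is replaced by a symmetry check):

```python
import numpy as np

def diagonalize_real_symmetric(matrix):
    # A real symmetric matrix is Hermitian, so np.linalg.eigh applies;
    # its eigenvector matrix is orthogonal and diagonalizes the input.
    if np.any(np.imag(matrix) != 0) or not np.allclose(matrix, matrix.T):
        raise ValueError('Input must be real and symmetric.')
    _, result = np.linalg.eigh(matrix)
    return result

m = np.array([[2.0, 1.0], [1.0, 2.0]])
p = diagonalize_real_symmetric(m)
d = p.T @ m @ p  # should be diagonal, eigenvalues of m on the diagonal
```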
def _split_input_from_namespace(cls, app, namespace, entity_kind, shard_count): raw_entity_kind = cls._get_raw_entity_kind(entity_kind) if (shard_count == 1): return [key_range.KeyRange(namespace=namespace, _app=app)] ds_query = datastore.Query(kind=raw_entity_kind, namespace=namespace, _app=app, keys_only=True) ds_query.Order('__scatter__') random_keys = ds_query.Get((shard_count * cls._OVERSAMPLING_FACTOR)) if (not random_keys): return ([key_range.KeyRange(namespace=namespace, _app=app)] + ([None] * (shard_count - 1))) random_keys.sort() if (len(random_keys) >= shard_count): random_keys = cls._choose_split_points(random_keys, shard_count) key_ranges = [] key_ranges.append(key_range.KeyRange(key_start=None, key_end=random_keys[0], direction=key_range.KeyRange.ASC, include_start=False, include_end=False, namespace=namespace, _app=app)) for i in range(0, (len(random_keys) - 1)): key_ranges.append(key_range.KeyRange(key_start=random_keys[i], key_end=random_keys[(i + 1)], direction=key_range.KeyRange.ASC, include_start=True, include_end=False, namespace=namespace, _app=app)) key_ranges.append(key_range.KeyRange(key_start=random_keys[(- 1)], key_end=None, direction=key_range.KeyRange.ASC, include_start=True, include_end=False, namespace=namespace, _app=app)) if (len(key_ranges) < shard_count): key_ranges += ([None] * (shard_count - len(key_ranges))) return key_ranges
Helper for _split_input_from_params. If there are not enough Entities to make all of the given shards, the returned list of KeyRanges will include Nones. The returned list will contain KeyRanges ordered lexicographically with any Nones appearing at the end. Args: app: the app. namespace: the namespace. entity_kind: entity kind as string. shard_count: the number of shards. Returns: KeyRange objects.
codesearchnet
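The snippet above oversamples scatter keys and then reduces them via `_choose_split_points`; that helper's exact behavior isn't shown, but a plausible standalone sketch picks evenly spaced keys from the sorted sample:

```python
def choose_split_points(sorted_keys, shard_count):
    """Pick shard_count - 1 evenly spaced split keys from an oversampled list."""
    assert len(sorted_keys) >= shard_count
    # Evenly spaced indices into the sample approximate equal-population
    # key ranges, since the scatter keys are roughly uniform over entities.
    index_stride = len(sorted_keys) / float(shard_count)
    return [sorted_keys[int(round(index_stride * i))]
            for i in range(1, shard_count)]
```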
def create_run_group(prj): from benchbuild.utils import schema as s session = s.Session() experiment = prj.experiment group = s.RunGroup(id=prj.run_uuid, experiment=experiment.id) session.add(group) session.commit() return (group, session)
Create a new 'run_group' in the database. This creates a new transaction in the database and creates a new run_group within this transaction. Afterwards we return both the transaction as well as the run_group itself. The user is responsible for committing it when the time comes. Args: prj - The project for which we open the run_group. Returns: A tuple (group, session) containing both the newly created run_group and the transaction object.
codesearchnet
def sort_sites_by_integrated_chg(self, r=0.4): if self.extrema_type is None: self.get_local_extrema() int_den = [] for isite in self.extrema_coords: mask = self._dist_mat(isite) < r vol_sphere = self.chgcar.structure.volume * (mask.sum()/self.chgcar.ngridpts) chg_in_sphere = np.sum(self.chgcar.data['total'] * mask) / mask.size / vol_sphere int_den.append(chg_in_sphere) self._extrema_df['avg_charge_den'] = int_den self._extrema_df.sort_values(by=['avg_charge_den'], inplace=True) self._extrema_df.reset_index(drop=True, inplace=True)
Get the average charge density around each local minima in the charge density and store the result in _extrema_df Args: r (float): radius of sphere around each site to evaluate the average
juraj-google-style
def find_rt_jar(javahome=None): if (not javahome): if ('JAVA_HOME' in os.environ): javahome = os.environ['JAVA_HOME'] elif (sys.platform == 'darwin'): javahome = _find_osx_javahome() else: javahome = _get_javahome_from_java(_find_java_binary()) rtpath = os.path.join(javahome, 'jre', 'lib', 'rt.jar') if (not os.path.isfile(rtpath)): msg = 'Could not find rt.jar: {} is not a file'.format(rtpath) raise ExtensionError(msg) return rtpath
Find the path to the Java standard library jar. The jar is expected to exist at the path 'jre/lib/rt.jar' inside a standard Java installation directory. The directory is found using the following procedure: 1. If the javahome argument is provided, use the value as the directory. 2. If the JAVA_HOME environment variable is set, use the value as the directory. 3. Find the location of the ``java`` binary in the current PATH and compute the installation directory from this location. Args: javahome: A path to a Java installation directory (optional).
codesearchnet
def check_task(taskid, timeout=DEFAULT_TASK_TIMEOUT, wait=2): max_attempts = int((timeout / wait)) try: return retry_call(partial(_check_task, taskid), max_attempts=max_attempts, wait=wait, exceptions=(AssertionError, ValueError)) except ValueError: raise SpinnakerTaskInconclusiveError('Task failed to complete in {0} seconds: {1}'.format(timeout, taskid))
Wrap check_task. Args: taskid (str): Existing Spinnaker Task ID. timeout (int, optional): Consider Task failed after given seconds. wait (int, optional): Seconds to pause between polling attempts. Returns: str: Task status. Raises: AssertionError: API did not respond with a 200 status code. :obj:`foremast.exceptions.SpinnakerTaskInconclusiveError`: Task did not reach a terminal state before the given time out.
codesearchnet
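`retry_call` above comes from an external helper library; a minimal stand-in showing the polling pattern (retry on the listed exceptions, give up after `max_attempts`) might look like this:

```python
import time

def retry_call(func, max_attempts, wait=0, exceptions=(Exception,)):
    """Call func until it succeeds or max_attempts is exhausted."""
    for attempt in range(max_attempts):
        try:
            return func()
        except exceptions:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the last error
            time.sleep(wait)

attempts = []

def flaky_task_check():
    # Simulates a Spinnaker task that completes on the third poll.
    attempts.append(1)
    if len(attempts) < 3:
        raise AssertionError('task not complete yet')
    return 'SUCCEEDED'

status = retry_call(flaky_task_check, max_attempts=5, exceptions=(AssertionError,))
```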
def extract_tree_without(self, labels, suppress_unifurcations=True): return self.extract_tree(labels, True, suppress_unifurcations)
Extract a copy of this ``Tree`` without the leaves labeled by the strings in ``labels`` Args: ``labels`` (``set``): Set of leaf labels to exclude ``suppress_unifurcations`` (``bool``): ``True`` to suppress unifurcations, otherwise ``False`` Returns: ``Tree``: Copy of this ``Tree``, excluding the leaves labeled by the strings in ``labels``
codesearchnet
def path_fraction_point(points, fraction): (seg_id, offset) = path_fraction_id_offset(points, fraction, relative_offset=True) return linear_interpolate(points[seg_id], points[(seg_id + 1)], offset)
Computes the point which corresponds to the fraction of the path length along the piecewise linear curve which is constructed from the set of points. Args: points: an iterable of indexable objects with indices 0, 1, 2 corresponding to 3D cartesian coordinates fraction: path length fraction (0 <= fraction <= 1) Returns: The 3D coordinates of the aforementioned point
codesearchnet
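A self-contained sketch of the interpolation described above (not the library's `path_fraction_id_offset`/`linear_interpolate` helpers): walk the segments until the target arc length falls inside one, then interpolate linearly.

```python
def path_fraction_point(points, fraction):
    """Point at `fraction` of total length along a piecewise-linear 3D path."""
    assert 0.0 <= fraction <= 1.0
    seg_lengths = [
        sum((b[i] - a[i]) ** 2 for i in range(3)) ** 0.5
        for a, b in zip(points, points[1:])
    ]
    target = fraction * sum(seg_lengths)
    for (a, b), length in zip(zip(points, points[1:]), seg_lengths):
        # Stop in this segment if the remaining distance fits, or if it is
        # the last segment (guards against float drift at fraction == 1).
        if target <= length or b is points[-1]:
            t = target / length if length else 0.0
            return tuple(a[i] + t * (b[i] - a[i]) for i in range(3))
        target -= length
```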
def escape_yaml(raw_str: str) -> str: escape_list = [char for char in raw_str if (char in ['!', '{', '['])] if (len(escape_list) == 0): return raw_str str_quotes = '"' i_str_quotes = "'" if ((str_quotes in raw_str) and (str_quotes not in raw_str[1:(- 1)])): return raw_str if (str_quotes in raw_str[1:(- 1)]): raw_str = ((i_str_quotes + raw_str) + i_str_quotes) else: raw_str = ((str_quotes + raw_str) + str_quotes) return raw_str
Shell-Escape a yaml input string. Args: raw_str: The unescaped string.
codesearchnet
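The quoting rule above is easier to follow as a runnable copy of the same logic:

```python
def escape_yaml(raw_str: str) -> str:
    """Quote a string only if it contains YAML-significant characters."""
    if not any(char in raw_str for char in ('!', '{', '[')):
        return raw_str
    # Already wrapped in double quotes (and none inside): leave it alone.
    if '"' in raw_str and '"' not in raw_str[1:-1]:
        return raw_str
    # Use single quotes when the body itself contains double quotes.
    quote = "'" if '"' in raw_str[1:-1] else '"'
    return quote + raw_str + quote
```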
def calculate_parent_python_path(test_filepath): split_path = test_filepath.rsplit(FLAGS.bazel_repo_root, 1) if len(split_path) < 2: raise ValueError(f'Filepath "{test_filepath}" does not contain repo root "{FLAGS.bazel_repo_root}"') path = FLAGS.bazel_repo_root + split_path[1] path = path.rsplit('/', 1)[0] return path.replace('/', '.')
Returns the absolute import path for the containing directory. Args: test_filepath: The filepath which Bazel invoked (ex: /filesystem/path/tensorflow/tensorflow/python/tpu/tpu_test) Returns: Absolute import path of parent (ex: tensorflow.python.tpu). Raises: ValueError: if bazel_repo_root does not appear within test_filepath.
github-repos
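With the `FLAGS.bazel_repo_root` flag replaced by an explicit `repo_root` parameter (an adaptation for illustration, not the original signature), the path arithmetic can be exercised directly:

```python
def calculate_parent_python_path(test_filepath, repo_root='tensorflow'):
    """Absolute import path of the directory containing test_filepath."""
    split_path = test_filepath.rsplit(repo_root, 1)
    if len(split_path) < 2:
        raise ValueError(
            f'Filepath "{test_filepath}" does not contain repo root "{repo_root}"')
    path = repo_root + split_path[1]  # drop everything before the repo root
    path = path.rsplit('/', 1)[0]     # drop the test file itself
    return path.replace('/', '.')     # filesystem path -> import path
```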
def get_by_hostname(self, hostname): resources = self._client.get_all() resources_filtered = [x for x in resources if (x['hostname'] == hostname)] if resources_filtered: return resources_filtered[0] else: return None
Retrieve a storage system by its hostname. Works only in API500 onwards. Args: hostname: Storage system hostname. Returns: dict
codesearchnet
def pop_all(self, event_name): if not self.started: raise IllegalStateError(("Dispatcher needs to be started before " "popping.")) results = [] try: self.lock.acquire() while True: e = self.event_dict[event_name].get(block=False) results.append(e) except (queue.Empty, KeyError): return results finally: self.lock.release()
Return and remove all stored events of a specified name. Pops all events from their queue. May miss the latest ones. If no event is available, return immediately. Args: event_name: Name of the events to be popped. Returns: List of the desired events. Raises: IllegalStateError: Raised if pop is called before the dispatcher starts polling.
juraj-google-style
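The drain-until-empty pattern above, stripped of the dispatcher state and reduced to a standalone function over a `queue.Queue` and a lock:

```python
import queue
import threading

def pop_all(event_queue, lock):
    """Drain every event currently in the queue, returning them in order."""
    results = []
    with lock:
        try:
            while True:
                results.append(event_queue.get(block=False))
        except queue.Empty:
            # Queue exhausted: everything available has been popped.
            return results

events = queue.Queue()
for name in ('a', 'b', 'c'):
    events.put(name)
drained = pop_all(events, threading.Lock())
```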
def get_metric_fns(metric_names, labels, outputs): metric_fns = {} for metric_name in metric_names: metric_fn_name = metric_name.split("/")[-1] if hasattr(metrics, metric_fn_name): metric_fn = getattr(metrics, metric_fn_name) metric_fns[metric_name] = metric_fn(labels, outputs) else: raise ValueError("Metric {} is not implemented".format(metric_fn_name)) return metric_fns
Generate a dictionary of metric name to metric function. Args: metric_names: list of strings in the format "prefix/metric_function_name". metric_function_name should refer to a function name in metrics.py. The prefix will be included in the key in the returned dict. labels: a tensor where batch is the first dimension. outputs: a tensor of model predictions, same dimensionality as labels. Returns: metric_fns: dict of metric functions keyed by their name.
juraj-google-style
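The `getattr` dispatch above resolves the suffix of each "prefix/name" string against a `metrics` module; here is a sketch with a stand-in namespace (the real functions live in metrics.py):

```python
import types

# Stand-in for the metrics module; 'accuracy' is a hypothetical entry.
metrics = types.SimpleNamespace(
    accuracy=lambda labels, outputs: sum(
        l == o for l, o in zip(labels, outputs)) / len(labels),
)

def get_metric_fns(metric_names, labels, outputs):
    """Resolve 'prefix/metric_name' strings against the metrics namespace."""
    metric_fns = {}
    for metric_name in metric_names:
        metric_fn_name = metric_name.split('/')[-1]
        if hasattr(metrics, metric_fn_name):
            metric_fn = getattr(metrics, metric_fn_name)
            metric_fns[metric_name] = metric_fn(labels, outputs)
        else:
            raise ValueError('Metric {} is not implemented'.format(metric_fn_name))
    return metric_fns

result = get_metric_fns(['eval/accuracy'], [1, 0, 1, 1], [1, 0, 0, 1])
```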
def register_app(self, app): app.route(self.uri, methods=self.methods)(self.callable_obj) return self
Register the route object to a `bottle.Bottle` app instance. Args: app (bottle.Bottle): the application instance to register the route on. Returns: Route instance (for chaining purposes)
codesearchnet
def fetch(clobber=False): dest_dir = fname_pattern = os.path.join(data_dir(), 'chen2014') url = 'http: dat_fname = os.path.join(dest_dir, 'chen2014.dat') h5_fname = os.path.join(dest_dir, 'chen2014.h5') md5 = 'f8a2bc46d411c57ca4c76dc344e291f1' if not clobber: h5_size = 52768768 h5_dsets = { 'dists': (30,), 'pix_lb': (557398, 2), 'A_r': (557398, 30), 'A_r_err': (557398, 30) } if fetch_utils.h5_file_exists(h5_fname, h5_size, dsets=h5_dsets): print('File appears to exist already. Call `fetch(clobber=True)` ' 'to force overwriting of existing file.') return print('Downloading {}'.format(url)) fetch_utils.download_and_verify(url, md5, fname=dat_fname) print('Repacking files...') ascii2h5(dat_fname, h5_fname) print('Removing original file...') os.remove(dat_fname)
Downloads the Chen et al. (2014) dust map. Args: clobber (Optional[:obj:`bool`]): If ``True``, any existing file will be overwritten, even if it appears to match. If ``False`` (the default), :obj:`fetch()` will attempt to determine if the dataset already exists. This determination is not 100% robust against data corruption.
juraj-google-style
async def process_check_ins(self): params = {'include_participants': 1, 'include_matches': (1 if AUTO_GET_MATCHES else 0)} res = (await self.connection('POST', 'tournaments/{}/process_check_ins'.format(self._id), **params)) self._refresh_from_json(res)
Finalize the check-in phase. |methcoro| Warning: |unstable| Note: |from_api| This should be invoked after a tournament's check-in window closes before the tournament is started. 1. Marks participants who have not checked in as inactive. 2. Moves inactive participants to bottom seeds (ordered by original seed). 3. Transitions the tournament state from 'checking_in' to 'checked_in' NOTE: Checked in participants on the waiting list will be promoted if slots become available. Raises: APIException
codesearchnet
def get(cls, resource_type): if isinstance(resource_type, str): obj = getattr(db, cls.__name__).find_one((cls.resource_type == resource_type)) elif isinstance(resource_type, int): obj = getattr(db, cls.__name__).find_one((cls.resource_type_id == resource_type)) elif isinstance(resource_type, cls): return resource_type else: obj = None if (not obj): obj = cls() obj.resource_type = resource_type db.session.add(obj) db.session.commit() db.session.refresh(obj) return obj
Returns the ResourceType object for `resource_type`. If no existing object was found, a new type will be created in the database and returned Args: resource_type (str): Resource type name Returns: :obj:`ResourceType`
codesearchnet
def write_new_config(self, updates): with open(self._new_config, 'w') as config_file: for update in updates: line = '{0}=={1}  # {2}\n'.format(update.name, update.new_version, update.current_version) config_file.write(line)
Given a list of updates, write the updates out to the provided configuration file. Args: updates (list): List of Update objects.
juraj-google-style
def post_op(self, id: str, path_data: Union[dict, None], post_data: Any) -> dict: path = self._get_path_for_op_id(id) return self.post_path(path, path_data, post_data)
Modifies the ESI by looking up an operation id. Args: id: the ESI operation id to look up path_data: data to format the path with (can be None) post_data: data to send to ESI Returns: ESI data
juraj-google-style
def notify_rollover(self, stream): self.offset -= 1 if (not self.matches(stream)): return if (self._count == 0): raise InternalError('BufferedStreamWalker out of sync with storage engine, count was wrong.') self._count -= 1
Notify that a reading in the given stream was overwritten. Args: stream (DataStream): The stream that had overwritten data.
codesearchnet
def _list(self, dir_or_prefix): try: for path, (size, updated) in s3io.S3IO(options=self._options).list_files(dir_or_prefix, with_metadata=True): yield FileMetadata(path, size, updated) except Exception as e: raise BeamIOError('List operation failed', {dir_or_prefix: e})
List files in a location. Listing is non-recursive, for filesystems that support directories. Args: dir_or_prefix: (string) A directory or location prefix (for filesystems that don't have directories). Returns: Generator of ``FileMetadata`` objects. Raises: ``BeamIOError``: if listing fails, but not if no files were found.
github-repos
def add_config_paths(**kwargs): for k, path in kwargs.items(): if not os.path.exists(path): raise ValueError( 'Configuration file "{}" does not exist'.format(k)) if k in cf.get_option('config_paths'): raise ValueError('Configuration {!r} already exists'.format(k)) kwargs.update(**cf.get_option('config_paths')) cf.set_option('config_paths', kwargs)
Add to the pool of available configuration files for BIDSLayout. Args: kwargs: dictionary specifying where to find additional config files. Keys are names, values are paths to the corresponding .json file. Example: >>> add_config_paths(my_config='/path/to/config') >>> layout = BIDSLayout('/path/to/bids', config=['bids', 'my_config'])
juraj-google-style
def __learn_labels(self, labels): if (self.feature_length > 0): result = list(self.labels.classes_) else: result = [] for label in labels: result.append(label) self.labels.fit(result)
Learns new labels, this method is intended for internal use Args: labels (:obj:`list` of :obj:`str`): Labels to learn
codesearchnet
def main(conf_file, overwrite, logger): uid = pwd.getpwnam(get_username()).pw_uid logger.info('Stopping the daemon.') sh.service(get_service_name(), 'stop') logger.info('Creating config file.') create_config(cnf_file=conf_file, uid=uid, overwrite=overwrite) logger.info('Creating log file.') create_log(log_file=REQUIRED_SETTINGS['LogFile'], uid=uid) logger.info('Starting the daemon..') sh.service(get_service_name(), 'start')
Create configuration and log file. Restart the daemon when configuration is done. Args: conf_file (str): Path to the configuration file. overwrite (bool): Overwrite the configuration file with `clean` config?
codesearchnet
def do_patch(endpoint, body, access_token): headers = {'content-type': 'application/json', 'Authorization': ('Bearer ' + access_token)} headers['User-Agent'] = get_user_agent() return requests.patch(endpoint, data=body, headers=headers)
Do an HTTP PATCH request and return JSON. Args: endpoint (str): Azure Resource Manager management endpoint. body (str): JSON body of information to patch. access_token (str): A valid Azure authentication token. Returns: HTTP response. JSON body.
codesearchnet
def sg_summary_loss(tensor, prefix='losses', name=None): prefix = ('' if (prefix is None) else (prefix + '/')) name = ((prefix + _pretty_name(tensor)) if (name is None) else (prefix + name)) _scalar(name, tf.reduce_mean(tensor)) _histogram((name + '-h'), tensor)
r"""Register `tensor` to summary report as `loss` Args: tensor: A `Tensor` to log as loss prefix: A `string`. A prefix to display in the tensor board web UI. name: A `string`. A name to display in the tensor board web UI. Returns: None
codesearchnet
def _add_arg_python(self, key, value=None, mask=False): self._data[key] = value if not value: pass elif value is True: self._args.append('--{}'.format(key)) self._args_quoted.append('--{}'.format(key)) self._args_masked.append('--{}'.format(key)) else: self._args.append('--{}={}'.format(key, value)) if mask: value = 'x' * len(str(value)) else: value = self.quote(value) self._args_quoted.append('--{}={}'.format(key, value)) self._args_masked.append('--{}={}'.format(key, value))
Add CLI Arg formatted specifically for Python. Args: key (string): The CLI Args key (e.g., --name). value (string): The CLI Args value (e.g., bob). mask (boolean, default:False): Indicates whether the value should be masked in logged output.
juraj-google-style
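The three parallel lists above (raw, quoted, masked) reduce to one decision per argument; a simplified single-arg helper showing the same branching (the name `format_cli_arg` is illustrative, not the library's):

```python
def format_cli_arg(key, value=None, mask=False):
    """Return (arg, masked_arg) for one --key[=value] CLI argument."""
    if not value:
        return None, None             # falsy values emit nothing
    if value is True:
        flag = '--{}'.format(key)     # boolean flags carry no value
        return flag, flag
    arg = '--{}={}'.format(key, value)
    shown = 'x' * len(str(value)) if mask else value
    return arg, '--{}={}'.format(key, shown)
```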
def is_admin(name): groups = get_user_groups(name, True) for group in groups: if group in ('S-1-5-32-544', 'S-1-5-18'): return True return False
Check whether the passed user is a member of the Administrators group. Args: name (str): The name to check Returns: bool: True if the user is a member of the Administrators group, False otherwise
juraj-google-style
def rgb_to_hex(cls, color): return '#{0:02x}{1:02x}{2:02x}'.format(cls._bound_color_value(color[0]), cls._bound_color_value(color[1]), cls._bound_color_value(color[2])).upper()
Convert an ``(r, g, b)`` color tuple to a hexadecimal string. Alphabetical characters in the output will be capitalized. Args: color (tuple): An rgb color tuple of form: (int, int, int) Returns: string Example: >>> SoftColor.rgb_to_hex((0, 0, 0)) '#000000' >>> SoftColor.rgb_to_hex((255, 255, 255)) '#FFFFFF'
juraj-google-style
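The conversion can be verified end to end with a standalone version; `_bound_color_value` is assumed here to clamp each channel into 0-255:

```python
def bound(value):
    # Clamp a channel value into the valid 0-255 range.
    return max(0, min(255, int(value)))

def rgb_to_hex(color):
    """'#RRGGBB' string (uppercase) for an (r, g, b) tuple."""
    return '#{0:02x}{1:02x}{2:02x}'.format(
        bound(color[0]), bound(color[1]), bound(color[2])).upper()
```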
def Lookup(self, name): if name == '@': return self.stack[-1].context parts = name.split('.') value = self._LookUpStack(parts[0]) for part in parts[1:]: try: value = value[part] except (KeyError, TypeError): return self._Undefined(part) return value
Get the value associated with a name in the current context. The current context could be a dictionary in a list, or a dictionary outside a list. Args: name: name to lookup, e.g. 'foo' or 'foo.bar.baz' Returns: The value, or self.undefined_str Raises: UndefinedVariable if self.undefined_str is not set
juraj-google-style
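The dotted-name resolution at the heart of `Lookup`, with the stack and `@` machinery dropped and the scope passed in explicitly:

```python
def lookup(name, scope, undefined='(undefined)'):
    """Resolve a dotted name like 'foo.bar.baz' against a nested dict."""
    parts = name.split('.')
    value = scope.get(parts[0], undefined)
    for part in parts[1:]:
        try:
            value = value[part]
        except (KeyError, TypeError):
            return undefined  # missing key, or tried to index a non-dict
    return value
```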
def set_session(session): global _SESSION _SESSION.session = session
Sets the global TensorFlow session. Args: session: A TF Session.
github-repos
def recover_cfg_all(self, entries, symbols=None, callback=None, arch_mode=None): if arch_mode is None: arch_mode = self.binary.architecture_mode self._load(arch_mode=arch_mode) symbols = {} if not symbols else symbols cfgs = [] addrs_processed = set() calls = entries while len(calls) > 0: start, calls = calls[0], calls[1:] cfg, calls_tmp = self._recover_cfg(start=start, symbols=symbols, callback=callback) addrs_processed.add(start) cfgs.append(cfg) for addr in sorted(calls_tmp): if addr not in addrs_processed and addr not in calls: calls.append(addr) return cfgs
Recover CFGs for all functions from a list of entry points and/or a symbol table. Args: entries (list): A list of function addresses at which to start the CFG recovery process. symbols (dict): Symbol table. callback (function): A callback function which is called after each successfully recovered CFG. arch_mode (int): Architecture mode. Returns: list: A list of recovered CFGs.
juraj-google-style
def _ParseFiletime(self, byte_stream): filetime_map = self._GetDataTypeMap('filetime') try: filetime = self._ReadStructureFromByteStream( byte_stream, 0, filetime_map) except (ValueError, errors.ParseError) as exception: raise errors.ParseError( 'Unable to parse FILETIME value with error: {0!s}'.format( exception)) if filetime == 0: return None try: return dfdatetime_filetime.Filetime(timestamp=filetime) except ValueError: raise errors.ParseError( 'Invalid FILETIME value: 0x{0:08x}'.format(filetime))
Parses a FILETIME date and time value from a byte stream. Args: byte_stream (bytes): byte stream. Returns: dfdatetime.Filetime: FILETIME date and time value or None if no value is set. Raises: ParseError: if the FILETIME could not be parsed.
juraj-google-style
def duplicate_doc_file(doc_file: Union[str, os.PathLike], old_model_patterns: ModelPatterns, new_model_patterns: ModelPatterns, dest_file: Optional[Union[str, os.PathLike]]=None, frameworks: Optional[List[str]]=None): with open(doc_file, 'r', encoding='utf-8') as f: content = f.read() content = re.sub('<!--\\s*Copyright (\\d+)\\s', f'<!--Copyright {CURRENT_YEAR} ', content) if frameworks is None: frameworks = get_default_frameworks() if dest_file is None: dest_file = Path(doc_file).parent / f'{new_model_patterns.model_type}.md' lines = content.split('\n') blocks = [] current_block = [] for line in lines: if line.startswith('#'): blocks.append('\n'.join(current_block)) current_block = [line] else: current_block.append(line) blocks.append('\n'.join(current_block)) new_blocks = [] in_classes = False for block in blocks: if not block.startswith('#'): new_blocks.append(block) elif re.search('^#\\s+\\S+', block) is not None: new_blocks.append(f'# {new_model_patterns.model_name}\n') elif not in_classes and old_model_patterns.config_class in block.split('\n')[0]: in_classes = True new_blocks.append(DOC_OVERVIEW_TEMPLATE.format(model_name=new_model_patterns.model_name)) new_block, _ = replace_model_patterns(block, old_model_patterns, new_model_patterns) new_blocks.append(new_block) elif in_classes: in_classes = True block_title = block.split('\n')[0] block_class = re.search('^#+\\s+(\\S.*)$', block_title).groups()[0] new_block, _ = replace_model_patterns(block, old_model_patterns, new_model_patterns) if 'Tokenizer' in block_class: if old_model_patterns.tokenizer_class != new_model_patterns.tokenizer_class: new_blocks.append(new_block) elif 'ImageProcessor' in block_class: if old_model_patterns.image_processor_class != new_model_patterns.image_processor_class: new_blocks.append(new_block) elif 'ImageProcessorFast' in block_class: if old_model_patterns.image_processor_fast_class != new_model_patterns.image_processor_fast_class: new_blocks.append(new_block) elif 'FeatureExtractor' in block_class: if old_model_patterns.feature_extractor_class != new_model_patterns.feature_extractor_class: new_blocks.append(new_block) elif 'Processor' in block_class: if old_model_patterns.processor_class != new_model_patterns.processor_class: new_blocks.append(new_block) elif block_class.startswith('Flax'): if 'flax' in frameworks: new_blocks.append(new_block) elif block_class.startswith('TF'): if 'tf' in frameworks: new_blocks.append(new_block) elif len(block_class.split(' ')) == 1: if 'pt' in frameworks: new_blocks.append(new_block) else: new_blocks.append(new_block) with open(dest_file, 'w', encoding='utf-8') as f: f.write('\n'.join(new_blocks))
Duplicate a documentation file and adapt it for a new model. Args: doc_file (`str` or `os.PathLike`): Path to the doc file to duplicate. old_model_patterns (`ModelPatterns`): The patterns for the old model. new_model_patterns (`ModelPatterns`): The patterns for the new model. dest_file (`str` or `os.PathLike`, *optional*): Path to the new doc file. Will default to a file named `{new_model_patterns.model_type}.md` in the same folder as `doc_file`. frameworks (`List[str]`, *optional*): If passed, will only keep the model classes corresponding to this list of frameworks in the new doc file.
github-repos
def __init__(self, channel): self.ListLogMetrics = channel.unary_unary( "/google.logging.v2.MetricsServiceV2/ListLogMetrics", request_serializer=google_dot_cloud_dot_logging__v2_dot_proto_dot_logging__metrics__pb2.ListLogMetricsRequest.SerializeToString, response_deserializer=google_dot_cloud_dot_logging__v2_dot_proto_dot_logging__metrics__pb2.ListLogMetricsResponse.FromString, ) self.GetLogMetric = channel.unary_unary( "/google.logging.v2.MetricsServiceV2/GetLogMetric", request_serializer=google_dot_cloud_dot_logging__v2_dot_proto_dot_logging__metrics__pb2.GetLogMetricRequest.SerializeToString, response_deserializer=google_dot_cloud_dot_logging__v2_dot_proto_dot_logging__metrics__pb2.LogMetric.FromString, ) self.CreateLogMetric = channel.unary_unary( "/google.logging.v2.MetricsServiceV2/CreateLogMetric", request_serializer=google_dot_cloud_dot_logging__v2_dot_proto_dot_logging__metrics__pb2.CreateLogMetricRequest.SerializeToString, response_deserializer=google_dot_cloud_dot_logging__v2_dot_proto_dot_logging__metrics__pb2.LogMetric.FromString, ) self.UpdateLogMetric = channel.unary_unary( "/google.logging.v2.MetricsServiceV2/UpdateLogMetric", request_serializer=google_dot_cloud_dot_logging__v2_dot_proto_dot_logging__metrics__pb2.UpdateLogMetricRequest.SerializeToString, response_deserializer=google_dot_cloud_dot_logging__v2_dot_proto_dot_logging__metrics__pb2.LogMetric.FromString, ) self.DeleteLogMetric = channel.unary_unary( "/google.logging.v2.MetricsServiceV2/DeleteLogMetric", request_serializer=google_dot_cloud_dot_logging__v2_dot_proto_dot_logging__metrics__pb2.DeleteLogMetricRequest.SerializeToString, response_deserializer=google_dot_protobuf_dot_empty__pb2.Empty.FromString, )
Constructor. Args: channel: A grpc.Channel.
juraj-google-style
def from_dict(self, dictionary): for remote_name, remote_value in dictionary.items(): local_name = next((name for name, attribute in self._attributes.items() if attribute.remote_name == remote_name), None) if local_name: setattr(self, local_name, remote_value) else: pass
Sets all the exposed ReST attributes from the given dictionary Args: dictionary (dict): dictionary containing the raw object attributes and their values. Example: >>> info = {"name": "my group", "private": False} >>> group = NUGroup() >>> group.from_dict(info) >>> print "name: %s - private: %s" % (group.name, group.private) "name: my group - private: False"
juraj-google-style
def _convert_op_hints_to_stubs_helper(graph_def, write_callback=lambda sess, graph_def: None): hints = _find_all_hints_in_nodes(graph_def.node) hints_q = [] for hint in hints.values(): hints_q.append((hint.level, hint.uuid)) hints_q.sort(key=lambda tup: tup[0]) for i in range(len(hints_q) - 1, -1, -1): level, hint_uuid = hints_q[i] curr_graph_def = graph_def del graph_def for i in range(len(hints_q) - 1, -1, -1): level, hint_uuid = hints_q[i] if level >= 2: children_hints, curr_graph_def, function_def_nodes = _find_children_hints(hints[hint_uuid], curr_graph_def) assert len(children_hints) > 0 children_inputs_mappings = hints[hint_uuid].children_inputs_mappings for j, child_hint in enumerate(children_hints): if j == 0: for mapping in children_inputs_mappings['parent_first_child_input']: parent_input_index = _get_correct_mapping(mapping['parent_ophint_input_index'], hints[hint_uuid].inputs) child_input_index = _get_correct_mapping(mapping['first_child_ophint_input_index'], child_hint.inputs) child_hint.inputs[child_input_index] = hints[hint_uuid].inputs[parent_input_index] else: for mapping in children_inputs_mappings['internal_children_input_output']: input_index = _get_correct_mapping(mapping['child_input_index'], child_hint.inputs) output_index = _get_correct_mapping(mapping['child_output_index'], children_hints[j - 1].outputs) child_hint.inputs[input_index] = children_hints[j - 1].outputs[output_index] if j == len(children_hints) - 1: for mapping in children_inputs_mappings['parent_last_child_output']: parent_output_index = _get_correct_mapping(mapping['parent_output_index'], hints[hint_uuid].outputs) child_output_index = _get_correct_mapping(mapping['child_output_index'], child_hint.outputs) child_hint.outputs[child_output_index] = hints[hint_uuid].outputs[parent_output_index] for j, child_hint in enumerate(children_hints): curr_graph_def = _convert_single_op_hint_to_stub(child_hint, curr_graph_def, function_def_nodes, j == len(children_hints) - 1) else: 
curr_graph_def = _convert_single_op_hint_to_stub(hints[hint_uuid], curr_graph_def) write_callback(curr_graph_def, 'initial') curr_graph_def = _remove_redundant_stack_unstack(curr_graph_def) return curr_graph_def
Converts a graph_def to a new graph_def where all op hints are stubbed. Args: graph_def: A graph def that we should convert. write_callback: A function pointer that can be used to write intermediate steps of graph transformation (optional). Returns: A new stubbed graph_def.
github-repos
def _awaitReset(self, utcTimeStamp, verbose=True): resetTime = pytz.utc.localize(datetime.utcfromtimestamp(utcTimeStamp)) _vPrint(verbose, "--- Current Timestamp") _vPrint(verbose, " %s" % (time.strftime('%c'))) now = pytz.utc.localize(datetime.utcnow()) waitTime = round((resetTime - now).total_seconds()) + 1 _vPrint(verbose, "--- Current UTC Timestamp") _vPrint(verbose, " %s" % (now.strftime('%c'))) _vPrint(verbose, "--- GITHUB NEEDS A BREAK Until UTC Timestamp") _vPrint(verbose, " %s" % (resetTime.strftime('%c'))) self._countdown(waitTime, printString="--- Waiting %*d seconds...", verbose=verbose) _vPrint(verbose, "--- READY!")
Wait until the given UTC timestamp. Args: utcTimeStamp (int): A UTC format timestamp. verbose (Optional[bool]): If False, all extra printouts will be suppressed. Defaults to True.
juraj-google-style
def setAvatar(self, image): self.conn('PUT', '{0}/users/{1}/profile/avatar'.format(SkypeConnection.API_USER, self.userId), auth=SkypeConnection.Auth.SkypeToken, data=image.read())
Update the profile picture for the current user. Args: image (file): a file-like object to read the image from
codesearchnet
def from_lasio(cls, l, remap=None, funcs=None, data=True, req=None, alias=None, fname=None): curve_params = {} for (field, (sect, code)) in LAS_FIELDS['data'].items(): curve_params[field] = utils.lasio_get(l, sect, code, remap=remap, funcs=funcs) if req: reqs = utils.flatten_list([v for (k, v) in alias.items() if (k in req)]) if (l.depth_m[0] < l.depth_m[1]): curve_params['depth'] = l.depth_m else: curve_params['depth'] = np.flipud(l.depth_m) depth_curves = ['DEPT', 'TIME'] if (data and req): curves = {c.mnemonic: Curve.from_lasio_curve(c, **curve_params) for c in l.curves if ((c.mnemonic[:4] not in depth_curves) and (c.mnemonic in reqs))} elif (data and (not req)): curves = {c.mnemonic: Curve.from_lasio_curve(c, **curve_params) for c in l.curves if (c.mnemonic[:4] not in depth_curves)} elif ((not data) and req): curves = {c.mnemonic: True for c in l.curves if ((c.mnemonic[:4] not in depth_curves) and (c.mnemonic in reqs))} else: curves = {c.mnemonic: True for c in l.curves if (c.mnemonic[:4] not in depth_curves)} if req: aliases = utils.flatten_list([c.get_alias(alias) for (m, c) in curves.items()]) if (len(set(aliases)) < len(req)): return cls(params={}) params = {'las': l, 'header': Header.from_lasio(l, remap=remap, funcs=funcs), 'location': Location.from_lasio(l, remap=remap, funcs=funcs), 'data': curves, 'fname': fname} for (field, (sect, code)) in LAS_FIELDS['well'].items(): params[field] = utils.lasio_get(l, sect, code, remap=remap, funcs=funcs) return cls(params)
Constructor. If you already have the lasio object, then this makes a well object from it. Args: l (lasio object): a lasio object. remap (dict): Optional. A dict of 'old': 'new' LAS field names. funcs (dict): Optional. A dict of 'las field': function() for implementing a transform before loading. Can be a lambda. data (bool): Whether to load curves or not. req (list): An alias list, giving all required curves. If not all of the aliases are present, the well is empty. alias (dict): Optional. A dict mapping mnemonics to lists of aliases. fname (str): Optional. The source filename, if known. Returns: well. The well object.
codesearchnet
def fetch(self, payment_id, data={}, **kwargs): return super(Payment, self).fetch(payment_id, data, **kwargs)
Fetch Payment for given Id Args: payment_id : Id for which payment object has to be retrieved Returns: Payment dict for given payment Id
codesearchnet
def get_states(self, n): return self.states[len(self.new_states):len(self.new_states) + n]
Get the next n recurrent states. Called by layers in "incremental" mode. Args: n: an integer Returns: a list of n Tensors
juraj-google-style
def update_q(self, state_key, action_key, reward_value, next_max_q): q = self.extract_q_df(state_key, action_key) new_q = (q + (self.alpha_value * ((reward_value + (self.gamma_value * next_max_q)) - q))) self.save_q_df(state_key, action_key, new_q)
Update Q-Value. Args: state_key: The key of state. action_key: The key of action. reward_value: R-Value(Reward). next_max_q: Maximum Q-Value.
codesearchnet
def _parse_peer_link(self, config): match = re.search('peer-link (\\S+)', config) value = (match.group(1) if match else None) return dict(peer_link=value)
Scans the config block and parses the peer-link value Args: config (str): The config block to scan Returns: dict: A dict object that is intended to be merged into the resource dict
codesearchnet
def create_page(cls, webdriver=None, **kwargs): if (not webdriver): webdriver = WTF_WEBDRIVER_MANAGER.get_driver() return PageFactory.create_page(cls, webdriver=webdriver, **kwargs)
Class method shortcut to call PageFactory on itself. Use it to instantiate this PageObject using a webdriver. Args: webdriver (Webdriver): Instance of Selenium Webdriver. Returns: PageObject Raises: InvalidPageError
codesearchnet
def upload(self, resource_id, data): self.body = data self.content_type = 'application/octet-stream' self.resource_id(str(resource_id)) self._request_uri = '{}/upload'.format(self._request_uri)
Update the request URI to upload a document to this resource. Args: resource_id (integer): The group id. data (any): The raw data to upload.
juraj-google-style
def import_class(classpath): (modname, classname) = classpath.rsplit('.', 1) module = importlib.import_module(modname) klass = getattr(module, classname) return klass
Import the class referred to by the fully qualified class path. Args: classpath: A full "foo.bar.MyClass" path to a class definition. Returns: The class referred to by the classpath. Raises: ImportError: If an error occurs while importing the module. AttributeError: If the class does not exist in the imported module.
codesearchnet
def add_period_and_roll(self, date_tensor, period_tensor, roll_convention=constants.BusinessDayConvention.NONE): return self.roll_to_business_day(date_tensor + period_tensor, roll_convention)
Adds given periods to given dates and rolls to business days. The original dates are not rolled prior to addition. Args: date_tensor: DateTensor of dates to add to. period_tensor: PeriodTensor broadcastable to `date_tensor`. roll_convention: BusinessDayConvention. Determines how to roll a date that falls on a holiday. Returns: The resulting DateTensor.
github-repos
def post_info(self, name, message): self.post_command(OPERATIONS.CMD_POST_MESSAGE, _create_message(name, states.INFO_LEVEL, message))
Asynchronously post a user facing info message about a service. Args: name (string): The name of the service message (string): The user facing info message that will be stored for the service and can be queried later.
codesearchnet
def dict_to_xml(spec, full_document=False): middle = xmltodict.unparse(spec, full_document=full_document, pretty=True) return lxml.etree.fromstring(middle)
Convert dict to XML Args: spec(dict): dict to convert full_document(bool): whether to add XML headers Returns: lxml.etree.Element: XML tree
juraj-google-style
def _assert_float_dtype(dtype): dtype = dtypes.as_dtype(dtype) if not dtype.is_floating: raise ValueError('Expected floating point type, got %s.' % dtype) return dtype
Validate and return floating point type based on `dtype`. `dtype` must be a floating point type. Args: dtype: The data type to validate. Returns: Validated type. Raises: ValueError: if `dtype` is not a floating point type.
github-repos
def join(self, *data: Iterable[MaybeBytes]) -> bytes: return self.how.join([bytes(item) for item in chain(*data)])
Iterable join on a delimiter. Args: data: Iterable of items to join. Examples: :: BytesFormat(b' ').join([b'one', b'two', b'three'])
juraj-google-style
def __init__(self, command, short_help, params: List[ParameterDesc] = None): self.command = command self.short_help = short_help self.params = params if params else []
Command descriptor Args: command: 1 word command identifier short_help: short description of the purpose of the command params: list of parameter descriptions belonging to the command
juraj-google-style
def GetBatchJob(client, batch_job_id): batch_job_service = client.GetService('BatchJobService', 'v201809') selector = {'fields': ['Id', 'Status', 'DownloadUrl'], 'predicates': [{'field': 'Id', 'operator': 'EQUALS', 'values': [batch_job_id]}]} return batch_job_service.get(selector)['entries'][0]
Retrieves the BatchJob with the given id. Args: client: an instantiated AdWordsClient used to retrieve the BatchJob. batch_job_id: a long identifying the BatchJob to be retrieved. Returns: The BatchJob associated with the given id.
codesearchnet
def __init__(self, datastore_client, storage_client, round_name): self._datastore_client = datastore_client self._storage_client = storage_client self._round_name = round_name self._data = {}
Initializes ClassificationBatches. Args: datastore_client: instance of CompetitionDatastoreClient storage_client: instance of CompetitionStorageClient round_name: name of the round
juraj-google-style
def add_completions(replace_list: list, belstr: str, replace_span: Span, completion_text: str) -> List[Mapping[(str, Any)]]: completions = [] for r in replace_list: if (len(belstr) > 0): belstr_end = (len(belstr) - 1) else: belstr_end = 0 log.debug(f"Replace list {r} Replace_span {replace_span} BELstr: {belstr} Len: {belstr_end} Test1 {(r['type'] == 'Function')} Test2 {((replace_span[1] + 1) == len(belstr))}") if ((r['type'] == 'Function') and (replace_span[0] > 0) and (belstr[(replace_span[0] - 1)] == ',')): log.debug('prior char is a comma') replacement = (((belstr[0:replace_span[0]] + ' ') + f"{r['replacement']}()") + belstr[(replace_span[1] + 1):]) cursor_loc = len(((belstr[0:replace_span[0]] + ' ') + f"{r['replacement']}()")) elif ((replace_span[0] > 0) and (belstr[(replace_span[0] - 1)] == ',')): log.debug('prior char is a comma') replacement = (((belstr[0:replace_span[0]] + ' ') + r['replacement']) + belstr[(replace_span[1] + 1):]) cursor_loc = len(((belstr[0:replace_span[0]] + ' ') + r['replacement'])) elif ((r['type'] == 'Function') and (replace_span[1] >= belstr_end)): replacement = (belstr[0:replace_span[0]] + f"{r['replacement']}()") cursor_loc = (len(replacement) - 1) log.debug(f'Replacement: {replacement}') else: replacement = ((belstr[0:replace_span[0]] + r['replacement']) + belstr[(replace_span[1] + 1):]) cursor_loc = len((belstr[0:replace_span[0]] + r['replacement'])) completions.append({'replacement': replacement, 'cursor_loc': cursor_loc, 'highlight': r['highlight'], 'label': r['label']}) return completions
Create completions to return given replacement list Args: replace_list: list of completion replacement values belstr: BEL String replace_span: start, stop of belstr to replace completion_text: text to use for completion - used for creating highlight Returns: [{ "replacement": replacement, "cursor_loc": cursor_loc, "highlight": highlight, "label": label, }]
codesearchnet
def sg_input(shape=None, dtype=sg_floatx, name=None): if (shape is None): return tf.placeholder(dtype, shape=None, name=name) else: if (not isinstance(shape, (list, tuple))): shape = [shape] return tf.placeholder(dtype, shape=([None] + list(shape)), name=name)
r"""Creates a placeholder. Args: shape: A tuple/list of integers. If an integer is given, it will be turned into a list. dtype: A data type. Default is float32. name: A name for the placeholder. Returns: A wrapped placeholder `Tensor`.
codesearchnet
def make_mapper(features): if not features: features = Feature(input=[], transformer=NullTransformer()) if not iterable(features): features = (features, ) return DataFrameMapper( [t.as_input_transformer_tuple() for t in features], input_df=True)
Make a DataFrameMapper from a feature or list of features Args: features (Union[Feature, List[Feature]]): feature or list of features Returns: DataFrameMapper: mapper made from features
juraj-google-style
def get_text_features(self, input_ids: TFModelInputType | None=None, attention_mask: np.ndarray | tf.Tensor | None=None, position_ids: np.ndarray | tf.Tensor | None=None, output_attentions: Optional[bool]=None, output_hidden_states: Optional[bool]=None, return_dict: Optional[bool]=None, training: bool=False) -> tf.Tensor: text_features = self.clip.get_text_features(input_ids=input_ids, attention_mask=attention_mask, position_ids=position_ids, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict) return text_features
Returns: text_features (`tf.Tensor` of shape `(batch_size, output_dim`): The text embeddings obtained by applying the projection layer to the pooled output of [`TFCLIPTextModel`]. Examples: ```python >>> from transformers import AutoTokenizer, TFCLIPModel >>> model = TFCLIPModel.from_pretrained("openai/clip-vit-base-patch32") >>> tokenizer = AutoTokenizer.from_pretrained("openai/clip-vit-base-patch32") >>> inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="tf") >>> text_features = model.get_text_features(**inputs) ```
github-repos
def get_anchor_labels(anchors, gt_boxes, crowd_boxes): def filter_box_label(labels, value, max_num): curr_inds = np.where(labels == value)[0] if len(curr_inds) > max_num: disable_inds = np.random.choice( curr_inds, size=(len(curr_inds) - max_num), replace=False) labels[disable_inds] = -1 curr_inds = np.where(labels == value)[0] return curr_inds NA, NB = len(anchors), len(gt_boxes) assert NB > 0 box_ious = np_iou(anchors, gt_boxes) ious_argmax_per_anchor = box_ious.argmax(axis=1) ious_max_per_anchor = box_ious.max(axis=1) ious_max_per_gt = np.amax(box_ious, axis=0, keepdims=True) anchors_with_max_iou_per_gt = np.where(box_ious == ious_max_per_gt)[0] anchor_labels = -np.ones((NA,), dtype='int32') anchor_labels[anchors_with_max_iou_per_gt] = 1 anchor_labels[ious_max_per_anchor >= cfg.RPN.POSITIVE_ANCHOR_THRESH] = 1 anchor_labels[ious_max_per_anchor < cfg.RPN.NEGATIVE_ANCHOR_THRESH] = 0 if crowd_boxes.size > 0: cand_inds = np.where(anchor_labels >= 0)[0] cand_anchors = anchors[cand_inds] ioas = np_ioa(crowd_boxes, cand_anchors) overlap_with_crowd = cand_inds[ioas.max(axis=0) > cfg.RPN.CROWD_OVERLAP_THRESH] anchor_labels[overlap_with_crowd] = -1 target_num_fg = int(cfg.RPN.BATCH_PER_IM * cfg.RPN.FG_RATIO) fg_inds = filter_box_label(anchor_labels, 1, target_num_fg) old_num_bg = np.sum(anchor_labels == 0) if old_num_bg == 0: raise MalformedData("No valid background for RPN!") target_num_bg = cfg.RPN.BATCH_PER_IM - len(fg_inds) filter_box_label(anchor_labels, 0, target_num_bg) anchor_boxes = np.zeros((NA, 4), dtype='float32') fg_boxes = gt_boxes[ious_argmax_per_anchor[fg_inds], :] anchor_boxes[fg_inds, :] = fg_boxes return anchor_labels, anchor_boxes
Label each anchor as fg/bg/ignore. Args: anchors: Ax4 float gt_boxes: Bx4 float, non-crowd crowd_boxes: Cx4 float Returns: anchor_labels: (A,) int. Each element is {-1, 0, 1} anchor_boxes: Ax4. Contains the target gt_box for each anchor when the anchor is fg.
juraj-google-style
def from_entity(entity, self_user_id): user_id = UserID(chat_id=entity.id.chat_id, gaia_id=entity.id.gaia_id) return User(user_id, entity.properties.display_name, entity.properties.first_name, entity.properties.photo_url, entity.properties.email, (self_user_id == user_id) or (self_user_id is None))
Construct user from ``Entity`` message. Args: entity: ``Entity`` message. self_user_id (~hangups.user.UserID or None): The ID of the current user. If ``None``, assume ``entity`` is the current user. Returns: :class:`~hangups.user.User` object.
juraj-google-style
def expect_equal(first, second, msg=None, extras=None): try: asserts.assert_equal(first, second, msg, extras) except signals.TestSignal as e: logging.exception('Expected %s equals to %s, but they are not.', first, second) recorder.add_error(e)
Expects the equality of objects, otherwise fail the test. If the expectation is not met, the test is marked as fail after its execution finishes. Error message is "first != second" by default. Additional explanation can be supplied in the message. Args: first: The first object to compare. second: The second object to compare. msg: A string that adds additional info about the failure. extras: An optional field for extra information to be included in test result.
github-repos
def cleandata(inputlist): output = [] for e in inputlist: new = [] for f in e: if f == "--": new.append(None) else: new.append(float(f)) output.append(new) return output
Helper function for parse.getdata. Remove empty variables, convert strings to float Args: inputlist: list List of variables Returns: output: Cleaned list
juraj-google-style
def is_coord_subset_pbc(subset, superset, atol=1e-08, mask=None): c1 = np.array(subset, dtype=np.float64) c2 = np.array(superset, dtype=np.float64) if (mask is not None): m = np.array(mask, dtype=np.int) else: m = np.zeros((len(subset), len(superset)), dtype=np.int) atol = (np.zeros(3, dtype=np.float64) + atol) return cuc.is_coord_subset_pbc(c1, c2, atol, m)
Tests if all fractional coords in subset are contained in superset. Args: subset, superset: List of fractional coords atol (float or size 3 array): Tolerance for matching mask (boolean array): Mask of matches that are not allowed. i.e. if mask[1,2] == True, then subset[1] cannot be matched to superset[2] Returns: True if all of subset is in superset.
codesearchnet
def distance_matrix(self, leaf_labels=False): M = dict(); leaf_dists = dict() for node in self.traverse_postorder(): if node.is_leaf(): leaf_dists[node] = [[node,0]] else: for c in node.children: if c.edge_length is not None: for i in range(len(leaf_dists[c])): leaf_dists[c][i][1] += c.edge_length for c1 in range(0,len(node.children)-1): leaves_c1 = leaf_dists[node.children[c1]] for c2 in range(c1+1,len(node.children)): leaves_c2 = leaf_dists[node.children[c2]] for i in range(len(leaves_c1)): for j in range(len(leaves_c2)): u,ud = leaves_c1[i]; v,vd = leaves_c2[j]; d = ud+vd if leaf_labels: u_key = u.label; v_key = v.label else: u_key = u; v_key = v if u_key not in M: M[u_key] = dict() M[u_key][v_key] = d if v_key not in M: M[v_key] = dict() M[v_key][u_key] = d leaf_dists[node] = leaf_dists[node.children[0]]; del leaf_dists[node.children[0]] for i in range(1,len(node.children)): leaf_dists[node] += leaf_dists[node.children[i]]; del leaf_dists[node.children[i]] return M
Return a distance matrix (2D dictionary) of the leaves of this ``Tree`` Args: ``leaf_labels`` (``bool``): ``True`` to have keys be labels of leaf ``Node`` objects, otherwise ``False`` to have keys be ``Node`` objects Returns: ``dict``: Distance matrix (2D dictionary) of the leaves of this ``Tree``, where keys are labels of leaves; ``M[u][v]`` = distance from ``u`` to ``v``
juraj-google-style
def as_fn(self, *binding_order): if len(binding_order) != len(self.unbound_vars): raise ValueError('All vars must be specified.') for arg in binding_order: if arg not in self.unbound_vars: raise ValueError('Unknown binding: %s' % arg) def func(*args, **kwargs): if len(binding_order) != len(args): raise ValueError('Missing values, expects: %s' % binding_order) values = dict(zip(binding_order, args)) values.update(kwargs) return self.construct(**values) func.__doc__ = _gen_ipython_string(func, binding_order, [], func.__doc__) return func
Creates a function by binding the arguments in the given order. Args: *binding_order: The unbound variables. This must include all values. Returns: A function that takes the arguments of binding_order. Raises: ValueError: If the bindings are missing values or include unknown values.
juraj-google-style
def get_dispatcher_event(self, name): e = self.__property_events.get(name) if (e is None): e = self.__events[name] return e
Retrieves an Event object by name Args: name (str): The name of the :class:`Event` or :class:`~pydispatch.properties.Property` object to retrieve Returns: The :class:`Event` instance for the event or property definition .. versionadded:: 0.1.0
codesearchnet
def get_pkg_names(pkgs): result = set() with open(join("mapping"), "r") as f: data = dict(x.strip().split(":") for x in f) for pkg in pkgs: result.add(data.get(pkg, pkg)) return sorted(result, key=lambda s: s.lower())
Get PyPI package names from a list of imports. Args: pkgs (List[str]): List of import names. Returns: List[str]: The corresponding PyPI package names.
juraj-google-style
def get_hash_of_dirs(directory): import hashlib sha = hashlib.sha512() if not os.path.exists(directory): return -1 for root, _, files in os.walk(directory): for name in files: filepath = local.path(root) / name if filepath.exists(): with open(filepath, 'rb') as next_file: for line in next_file: sha.update(line) return sha.hexdigest()
Recursively hash the contents of the given directory. Args: directory (str): The root directory we want to hash. Returns: A hash of all the contents in the directory.
juraj-google-style
def _read_opm(string): maneuvers = [] data = {} comments = {} for i, line in enumerate(string.splitlines()): if not line: continue if line.startswith("COMMENT"): comments[i] = line.split("COMMENT")[-1].strip() continue key, _, value = line.partition("=") key = key.strip() value = value.strip() if key.startswith('MAN_'): if key == "MAN_EPOCH_IGNITION": maneuvers.append({}) man_idx = len(maneuvers) - 1 if i - 1 in comments: maneuvers[man_idx]["comment"] = comments[i - 1] maneuvers[man_idx][key] = value else: data[key] = value try: name = data['OBJECT_NAME'] cospar_id = data['OBJECT_ID'] scale = data['TIME_SYSTEM'] frame = data['REF_FRAME'] date = Date.strptime(data['EPOCH'], "%Y-%m-%dT%H:%M:%S.%f", scale=scale) vx = _float(data['X_DOT']) vy = _float(data['Y_DOT']) vz = _float(data['Z_DOT']) x = _float(data['X']) y = _float(data['Y']) z = _float(data['Z']) except KeyError as e: raise ValueError('Missing mandatory parameter') orb = Orbit(date, [x, y, z, vx, vy, vz], 'cartesian', frame, None) orb.name = name orb.cospar_id = cospar_id for raw_man in maneuvers: man = {} man['date'] = Date.strptime(raw_man['MAN_EPOCH_IGNITION'], "%Y-%m-%dT%H:%M:%S.%f", scale=scale) man['duration'] = timedelta(seconds=_float(raw_man['MAN_DURATION'])) man['frame'] = raw_man['MAN_REF_FRAME'] if raw_man['MAN_REF_FRAME'] != frame else None man['delta_mass'] = raw_man['MAN_DELTA_MASS'] man['comment'] = raw_man.get('comment') for i in range(1, 4): man.setdefault('dv', []).append(_float(raw_man['MAN_DV_{}'.format(i)])) if man['duration'].total_seconds() == 0: orb.maneuvers.append(Maneuver(man['date'], man['dv'], frame=man['frame'], comment=man['comment'])) if 'CX_X' in data: frame = data.get('COV_REF_FRAME', orb.cov.PARENT_FRAME) if frame in ('RSW', 'RTN'): frame = "QSW" values = [ [data['CX_X'], data['CY_X'], data['CZ_X'], data['CX_DOT_X'], data['CY_DOT_X'], data['CZ_DOT_X']], [data['CY_X'], data['CY_Y'], data['CZ_Y'], data['CX_DOT_Y'], data['CY_DOT_Y'], data['CZ_DOT_Y']], [data['CZ_X'], 
data['CZ_Y'], data['CZ_Z'], data['CX_DOT_Z'], data['CY_DOT_Z'], data['CZ_DOT_Z']], [data['CX_DOT_X'], data['CX_DOT_Y'], data['CX_DOT_Z'], data['CX_DOT_X_DOT'], data['CY_DOT_X_DOT'], data['CZ_DOT_X_DOT']], [data['CY_DOT_X'], data['CY_DOT_Y'], data['CY_DOT_Z'], data['CY_DOT_X_DOT'], data['CY_DOT_Y_DOT'], data['CZ_DOT_Y_DOT']], [data['CZ_DOT_X'], data['CZ_DOT_Y'], data['CZ_DOT_Z'], data['CZ_DOT_X_DOT'], data['CZ_DOT_Y_DOT'], data['CZ_DOT_Z_DOT']] ] orb.cov = np.array(values).astype(np.float) * 1e6 orb.cov._frame = frame return orb
Read an OPM string Args: string (str): Text containing the OPM Returns: Orbit: The parsed orbit
juraj-google-style
def normalize_date(tmy_date, year): month = tmy_date.month day = tmy_date.day - 1 hour = tmy_date.hour if month == 1 and day == 0 and hour == 0: year = year + 1 return datetime.datetime(year, month, 1) + \ datetime.timedelta(days=day, hours=hour, minutes=0)
Change TMY3 date to an arbitrary year. Args: tmy_date (datetime): date to mangle. year (int): desired year. Returns: (datetime): the mangled date
juraj-google-style
def describe_enum_value(enum_value): enum_value_descriptor = EnumValueDescriptor() enum_value_descriptor.name = six.text_type(enum_value.name) enum_value_descriptor.number = enum_value.number return enum_value_descriptor
Build descriptor for Enum instance. Args: enum_value: Enum value to provide descriptor for. Returns: Initialized EnumValueDescriptor instance describing the Enum instance.
codesearchnet
def var(series): if np.issubdtype(series.dtype, np.number): return series.var() else: return np.nan
Returns the variance of values in a series. Args: series (pandas.Series): column to summarize.
codesearchnet
def bool(name, execute_bool=True, default=None): def wrapped(func): @functools.wraps(func) def _decorator(*args, **kwargs): if (core.isset(name) and (core.bool(name) == execute_bool)): return func(*args, **kwargs) elif ((default is not None) and (default == execute_bool)): return func(*args, **kwargs) return _decorator return wrapped
Only execute the function if the boolean variable is set. Args: name: The name of the environment variable execute_bool: The boolean value to execute the function on default: The default value if the environment variable is not set (respects `execute_bool`) Returns: The function return value or `None` if the function was skipped.
codesearchnet
def get_suffixes(): names = [] if at_least_libvips(8, 8): array = vips_lib.vips_foreign_get_suffixes() i = 0 while (array[i] != ffi.NULL): name = _to_string(array[i]) if (name not in names): names.append(name) glib_lib.g_free(array[i]) i += 1 glib_lib.g_free(array) return names
Get a list of all the filename suffixes supported by libvips. Returns: [string]
codesearchnet
def maybe_center_plot(result): begin = re.search('(% .* matplotlib2tikz v.*)', result) if begin: result = (('\\begin{center}\n' + result[begin.end():]) + '\n\\end{center}') return result
Embeds a possible tikz image inside a center environment. Searches for the matplotlib2tikz comment line to detect tikz images. Args: result: The code execution result Returns: The input result if no tikzpicture was found, otherwise a centered version.
codesearchnet
def create_socket(self): socket_path = os.path.join(self.config_dir, 'pueue.sock') try: if os.path.exists(socket_path): os.remove(socket_path) self.socket = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) self.socket.bind(socket_path) self.socket.setblocking(0) self.socket.listen(0) os.chmod(socket_path, stat.S_IRWXU) except Exception: self.logger.exception("Daemon couldn't create socket. Aborting") sys.exit(1) return self.socket
Create a socket for the daemon, using ``self.config_dir`` to determine the socket path. Returns: socket.socket: The daemon socket. Clients connect to this socket.
juraj-google-style
async def basic_consume(self, queue_name='', consumer_tag='', no_local=False, no_ack=False, exclusive=False, no_wait=False, arguments=None, wait_message=True, timeout=0): consumer_tag = (consumer_tag or ('ctag%i.%s' % (self.channel_id, uuid.uuid4().hex))) if (arguments is None): arguments = {} frame = amqp_frame.AmqpRequest(self.protocol._stream_writer, amqp_constants.TYPE_METHOD, self.channel_id) frame.declare_method(amqp_constants.CLASS_BASIC, amqp_constants.BASIC_CONSUME) request = amqp_frame.AmqpEncoder() request.write_short(0) request.write_shortstr(queue_name) request.write_shortstr(consumer_tag) request.write_bits(no_local, no_ack, exclusive, no_wait) request.write_table(arguments) self.consumer_queues[consumer_tag] = asyncio.Queue(self.max_queue_size) self.last_consumer_tag = consumer_tag consumer = self.CONSUMER_CLASS(self, self.consumer_queues[consumer_tag], consumer_tag, nowait=(not wait_message), timeout=timeout) (await self._write_frame_awaiting_response('basic_consume', frame, request, no_wait)) if (not no_wait): self._ctag_events[consumer_tag].set() return consumer
Starts the consumption of messages from a queue. The registered callback will be called each time a message is received. Args: queue_name: str, the queue to receive messages from consumer_tag: str, optional consumer tag no_local: bool, if set the server will not send messages to the connection that published them. no_ack: bool, if set the server does not expect acknowledgements for messages exclusive: bool, request exclusive consumer access, meaning only this consumer can access the queue no_wait: bool, if set, the server will not respond to the method arguments: dict, AMQP arguments to be passed to the server wait_message: Indicates if the consumer should wait for new messages in the queue or simply return None if the queue is empty. timeout: A timeout for waiting for messages. ``wait_message`` has precedence over ``timeout``.
codesearchnet
def _embedding_dim(vocab_size):
    # Use %r so the message is safe even when vocab_size is None.
    if not vocab_size or vocab_size <= 0:
        raise ValueError('Invalid vocab_size %r.' % vocab_size)
    return int(round(6.0 * math.sqrt(math.sqrt(vocab_size))))
Calculate a reasonable embedding size for a vocabulary. Rule of thumb is 6 * 4th root of vocab_size. Args: vocab_size: Size of the input vocabulary. Returns: The embedding size to use. Raises: ValueError: if `vocab_size` is invalid.
juraj-google-style
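The rule of thumb above (6 times the fourth root of the vocabulary size) is easy to check numerically; this sketch re-implements it under an assumed name:

```python
import math


def embedding_dim(vocab_size):
    """Rule of thumb: 6 * vocab_size ** 0.25, rounded to an int."""
    if not vocab_size or vocab_size <= 0:
        raise ValueError('Invalid vocab_size %r.' % vocab_size)
    return int(round(6.0 * math.sqrt(math.sqrt(vocab_size))))
```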
def learn_transportation_mode(track, clf):
    # Accumulate features and labels across all segments, then learn once.
    features = []
    labels = []
    for segment in track.segments:
        tmodes = segment.transportation_modes
        points = segment.points
        for tmode in tmodes:
            points_part = points[tmode['from']:tmode['to']]
            if len(points_part) > 0:
                features.append(extract_features_2(points_part))
                labels.append(tmode['label'])
    clf.learn(features, labels)
Inserts transportation modes of a track into a classifier Args: track (:obj:`Track`) clf (:obj:`Classifier`)
juraj-google-style
def _on_resource_closure_failure(self, e): logging.info('[Worker %d] Clearing tagged queue after resource closure failure.', self.worker_index) with self._resource_tracking_lock: self._is_dead_with_error = e self._cluster.closure_queue.clear_tag_unlocked(self.worker_index) self._set_resources_aborted(e)
Clear tagged queue to ensure resource closures are rebuilt. Args: e: The exception arisen from the resource closure.
github-repos
def _update_dicts(name_scope, model_layer, input_to_in_layer, model_name_to_output, prev_node_name):
    layer_config = model_layer.get('config')
    if not layer_config.get('layers'):
        raise ValueError('layer is not a model.')
    node_name = _scoped_name(name_scope, layer_config.get('name'))
    input_layers = layer_config.get('input_layers')
    output_layers = layer_config.get('output_layers')
    inbound_nodes = model_layer.get('inbound_nodes')
    is_functional_model = bool(input_layers and output_layers)
    is_parent_functional_model = bool(inbound_nodes)
    if is_parent_functional_model and is_functional_model:
        for input_layer, inbound_node in zip(input_layers, inbound_nodes):
            input_layer_name = _scoped_name(node_name, input_layer)
            inbound_node_name = _scoped_name(name_scope, inbound_node[0])
            input_to_in_layer[input_layer_name] = inbound_node_name
    elif is_parent_functional_model and not is_functional_model:
        prev_node_name = _scoped_name(name_scope, inbound_nodes[0][0][0])
    elif not is_parent_functional_model and prev_node_name and is_functional_model:
        assert len(input_layers) == 1, (
            'Cannot have multi-input Functional model when parent model '
            'is not Functional. Number of input layers: %d' % len(input_layers))
        input_layer = input_layers[0]
        input_layer_name = _scoped_name(node_name, input_layer)
        input_to_in_layer[input_layer_name] = prev_node_name
    if is_functional_model and output_layers:
        layers = _norm_to_list_of_layers(output_layers)
        layer_names = [_scoped_name(node_name, layer[0]) for layer in layers]
        model_name_to_output[node_name] = layer_names
    else:
        last_layer = layer_config.get('layers')[-1]
        last_layer_name = last_layer.get('config').get('name')
        output_node = _scoped_name(node_name, last_layer_name)
        model_name_to_output[node_name] = [output_node]
    return (input_to_in_layer, model_name_to_output, prev_node_name)
Updates input_to_in_layer, model_name_to_output, and prev_node_name based on the model_layer. Args: name_scope: a string representing a scope name, similar to that of tf.name_scope. model_layer: a dict representing a Keras model configuration. input_to_in_layer: a dict mapping Keras.layers.Input to inbound layer. model_name_to_output: a dict mapping Keras Model name to output layer of the model. prev_node_name: a string representing a previous, in sequential model layout, node name. Returns: A tuple of (input_to_in_layer, model_name_to_output, prev_node_name). input_to_in_layer: a dict mapping Keras.layers.Input to inbound layer. model_name_to_output: a dict mapping Keras Model name to output layer of the model. prev_node_name: a string representing a previous, in sequential model layout, node name.
codesearchnet
def flatten_(structure):
    if isinstance(structure, dict):
        if structure:
            # Order values by sorted key and discard the keys.
            # (zip(...)[1] is Python 2 only; zip returns an iterator in Python 3.)
            structure = tuple(value for _, value in sorted(structure.items()))
        else:
            structure = ()
    if isinstance(structure, (tuple, list)):
        result = []
        for element in structure:
            result += flatten_(element)
        return tuple(result)
    return (structure,)
Combine all leaves of a nested structure into a tuple. The nested structure can consist of any combination of tuples, lists, and dicts. Dictionary keys will be discarded but values will be ordered by the sorting of the keys. Args: structure: Nested structure. Returns: Flat tuple.
codesearchnet
def authenticate(self): endpoint = '/authenticate' payload = {'agent': {'name': 'Minecraft', 'version': self.ygg_version}, 'username': self.username, 'password': self.password, 'clientToken': self.client_token} rep = self._ygg_req(endpoint, payload) if ((not rep) or ('error' in rep)): return False self.access_token = rep['accessToken'] self.client_token = rep['clientToken'] self.available_profiles = rep['availableProfiles'] self.selected_profile = rep['selectedProfile'] return True
Generate an access token using a username and password. Any existing client token is invalidated if not provided. Returns: bool: True if authentication succeeded and the tokens were stored, False if the request failed or returned an error.
codesearchnet
def _MarkReachedOps(from_ops, reached_ops, func_graphs): queue = collections.deque() queue.extend(from_ops) while queue: op = queue.popleft() if op not in reached_ops: reached_ops.add(op) for output in op.outputs: if backprop_util.IsTrainable(output): queue.extend(_Consumers(output, func_graphs))
Mark all ops reached from "from_ops". Args: from_ops: list of Operations. reached_ops: set of Operations. func_graphs: list of FuncGraphs. This method will traverse through these functions if they capture from_ops or any reachable ops.
github-repos
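The traversal above is a plain breadth-first marking over the op graph; a generic standalone sketch, with the graph represented as an assumed successor mapping instead of TensorFlow ops:

```python
from collections import deque


def mark_reached(from_nodes, successors):
    """Return the set of nodes reachable from `from_nodes`.

    `successors` maps a node to the nodes reachable from it in one step;
    nodes with no outgoing edges may be omitted from the mapping.
    """
    reached = set()
    queue = deque(from_nodes)
    while queue:
        node = queue.popleft()
        if node not in reached:
            reached.add(node)
            # The membership check above makes this safe on cyclic graphs.
            queue.extend(successors.get(node, ()))
    return reached
```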