Columns: code (string, lengths 20 to 4.93k), docstring (string, lengths 33 to 1.27k), source (string, 3 classes)
def __init__(self, func, argnames, func_name=None, grad_func=None,
             python_grad_func=None, out_names=None, **kwargs):
    self._func = func
    self._argnames = argnames
    self._func_name = func_name
    assert grad_func is None or isinstance(grad_func, _OverloadedFunction)
    self._grad_func = grad_func
    self._python_grad_func = python_grad_func
    self._out_names = out_names
    self._extra_kwargs = kwargs
    self._overload = {}

Creates _DefinedFunction.

Args:
    func: A python callable which constructs a tf function body.
    argnames: A list of strings for function argument names.
    func_name: The function name. Defaults to None, in which case it is
        derived from 'func'.
    grad_func: This function's gradient function, if not None. Defaults
        to None.
    python_grad_func: A python callable implementing the gradient of the
        function python-side.
    out_names: A list of strings for the function return value names.
    **kwargs: The keyword arguments. **kwargs is passed to every call
        site of this function.

Raises:
    ValueError: The function definition is invalid.
github-repos
def dstack(tup):
    arrays = list(tup)
    for i in range(len(arrays)):
        # Use ==, not `is`, for integer comparison.
        if arrays[i].ndim == 1:
            arrays[i] = arrays[i][np.newaxis, :]
        if arrays[i].ndim == 2:
            arrays[i] = arrays[i][:, :, np.newaxis]
    return concatenate(arrays, axis=2)

Stack arrays in sequence depth wise (along the third dimension), handling
``RemoteArray`` and ``DistArray`` without moving data.

Args:
    tup (sequence of array_like)

Returns:
    res: `ndarray`, if inputs were all local;
        `RemoteArray`, if inputs were all on the same remote engine;
        `DistArray`, if inputs were already scattered on different engines.
juraj-google-style
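As an aside, the shape-promotion logic in the dstack above mirrors NumPy's own np.dstack, which lifts a 1-D input to shape (1, N, 1) before concatenating along axis 2. A minimal sketch of the equivalent behavior with plain NumPy:

```python
import numpy as np

# Two 1-D arrays of length 3; np.dstack promotes each to (1, 3, 1)
# and concatenates along axis 2, yielding shape (1, 3, 2).
a = np.array([1, 2, 3])
b = np.array([4, 5, 6])

stacked = np.dstack([a, b])
print(stacked.shape)  # (1, 3, 2)
```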
def close(self, virtual_account_id, data=None, **kwargs):
    # Avoid a mutable default argument, since `data` is mutated below.
    if data is None:
        data = {}
    url = "{}/{}".format(self.base_url, virtual_account_id)
    data['status'] = 'closed'
    return self.patch_url(url, data, **kwargs)

Close the Virtual Account with the given id.

Args:
    virtual_account_id: Id of the Virtual Account to be closed.
juraj-google-style
def query(self, query, additional_locals=None, safe_mode=False):
    logger.debug('Attempting to execute database query: %s', query)
    if safe_mode and not isinstance(query, dict):
        raise SafetyViolationError(context=self.error_context)
    if isinstance(query, dict):
        logger.debug('Executing query in safe mode (MLAlchemy)')
        return mlalchemy.parse_query(query).to_sqlalchemy(
            self.session, self.tables).all()
    else:
        logger.debug('Executing unsafe query (Python exec())')
        if additional_locals is not None:
            for k, v in iteritems(additional_locals):
                locals()[k] = v
        exec(compile('result = %s' % query.strip(), '<string>', 'exec'),
             globals(), locals())
        return locals()['result']

Executes the given SQLAlchemy query string.

Args:
    query: The SQLAlchemy ORM query (or Python code) to be executed.
    additional_locals: Any additional local variables to inject into the
        execution context when executing the query.
    safe_mode: Boolean value indicating whether or not to execute queries
        in safe mode only. If True, this only allows MLAlchemy-style
        queries. If False, this allows both exec() and MLAlchemy-style
        queries. Default: False.

Returns:
    The result of executing the query.
codesearchnet
def order_by(self, key_selector=identity):
    if self.closed():
        raise ValueError('Attempt to call order_by() on a closed Queryable.')
    if not is_callable(key_selector):
        raise TypeError('order_by() parameter key_selector={key_selector} '
                        'is not callable'.format(key_selector=repr(key_selector)))
    return self._create_ordered(iter(self), -1, key_selector)

Sorts by a key in ascending order.

Introduces a primary sorting order to the sequence. Additional sort
criteria should be specified by subsequent calls to then_by() and
then_by_descending(). Calling order_by() or order_by_descending() on the
results of a call to order_by() will introduce a new primary ordering
which will override any already established ordering.

This method performs a stable sort. The order of two elements with the
same key will be preserved.

Note: This method uses deferred execution.

Args:
    key_selector: A unary function which extracts a key from each
        element using which the result will be ordered.

Returns:
    An OrderedQueryable over the sorted elements.

Raises:
    ValueError: If the Queryable is closed.
    TypeError: If the key_selector is not callable.
codesearchnet
def get_metrics_namespace(self) -> str:
    return 'BeamML_TF_Numpy'

Returns:
    A namespace for metrics collected by the RunInference transform.
github-repos
def defaultStorable(self, python_type=None, storable_type=None, version=None,
                    **kwargs):
    if python_type is None:
        python_type = lookup_type(storable_type)
    if self.verbose:
        print('generating storable instance for type: {}'.format(python_type))
    self.storables.registerStorable(
        default_storable(python_type, version=version,
                         storable_type=storable_type),
        **kwargs)
    return self.byPythonType(python_type, True).asVersion(version)

Generate a default storable instance.

Arguments:
    python_type (type): Python type of the object.
    storable_type (str): storable type name.
    version (tuple): version number of the storable handler.

Returns:
    StorableHandler: storable instance.

Extra keyword arguments are passed to :meth:`registerStorable`.
codesearchnet
def copy_pkg(self, filename, id_=-1):
    for repo in self._children:
        repo.copy_pkg(filename, id_)

Copy a pkg, dmg, or zip to all repositories.

Args:
    filename: String path to the local file to copy.
    id_: Integer ID you wish to associate the package with, for a JDS or
        CDP only. Default is -1, which is used for creating a new
        package object in the database.
juraj-google-style
def en004(self, value=None):
    if value is not None:
        try:
            value = float(value)
        except ValueError:
            raise ValueError('value {} need to be of type float '
                             'for field `en004`'.format(value))
    self._en004 = value

Corresponds to IDD Field `en004`: enthalpy corresponding to 0.4% annual
cumulative frequency of occurrence (mean coincident dry-bulb temperature).

Args:
    value (float): value for IDD Field `en004`. Unit: kJ/kg.
        If `value` is None it will not be checked against the
        specification and is assumed to be a missing value.

Raises:
    ValueError: if `value` is not a valid value.
juraj-google-style
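The setter above follows a common coerce-then-validate pattern: None passes through untouched, anything else must convert to float. A standalone sketch of that pattern (the name coerce_float is hypothetical, not part of the original API):

```python
def coerce_float(value, field='en004'):
    # Mirror the setter: None is treated as a missing value,
    # everything else must be convertible to float.
    if value is None:
        return None
    try:
        return float(value)
    except ValueError:
        raise ValueError('value {} need to be of type float '
                         'for field `{}`'.format(value, field))

print(coerce_float('3.5'))  # 3.5
print(coerce_float(None))   # None
```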
def sendMessage(self, exchange, routing_key, message, properties=None,
                UUID=None):
    if properties is None:
        properties = pika.BasicProperties(content_type=self.content_type,
                                          delivery_mode=1, headers={})
    if UUID is not None:
        if properties.headers is None:
            properties.headers = {}
        properties.headers['UUID'] = UUID
    self.channel.basic_publish(exchange=exchange, routing_key=routing_key,
                               properties=properties, body=message)

With this function, you can send a message to `exchange`.

Args:
    exchange (str): name of the exchange the message should be delivered to
    routing_key (str): which routing key to use in headers of the message
    message (str): body of the message
    properties (dict, optional): properties of the message - if not used,
        or set to ``None``, ``self.content_type`` and ``delivery_mode=1``
        (transient) are used
    UUID (str, optional): UUID of the message. If set, it is included
        in the ``properties`` of the message.
codesearchnet
def where_function(function: _evaluation.WhereFunction,
                   operand_result: Optional[_sql_data_types.Select],
                   params_result: Collection[_sql_data_types.Select]
                   ) -> _sql_data_types.Select:
    del function
    if not operand_result:
        return _sql_data_types.Select(
            select_part=_sql_data_types.RawExpression(
                'NULL', _sql_alias='where_clause_',
                _sql_data_type=_sql_data_types.Undefined),
            from_part=None,
            sql_dialect=_sql_data_types.SqlDialect.SPARK)
    criteria = list(params_result)[0]
    where_part = (f'{operand_result.where_part} AND {criteria.as_operand()}'
                  if operand_result.where_part else criteria.as_operand())
    if operand_result.from_part:
        from_part = operand_result.from_part
    elif isinstance(operand_result.sql_data_type, _sql_data_types.Struct):
        from_part = (f'(SELECT {operand_result.select_part}.*) '
                     f'AS {operand_result.sql_alias}')
    return _sql_data_types.Select(
        select_part=operand_result.select_part,
        from_part=from_part,
        where_part=where_part,
        sql_dialect=_sql_data_types.SqlDialect.SPARK)

Returns a collection of all the items that match the criteria expression.

This function takes one param (`criteria`) in addition to the operand. If
the operand is not provided, the function returns the empty set, which
here translates to NULL. Returns an error in the event that the
`criteria` param is not provided or its data type is not bool.

Args:
    function: The FHIRPath AST `WhereFunction` node.
    operand_result: The expression which is being evaluated.
    params_result: The parameter passed in to the function.

Returns:
    A compiled Spark SQL expression.
github-repos
def install_napp(cls, mgr):
    try:
        LOG.info(' Searching local NApp...')
        mgr.install_local()
        LOG.info(' Found and installed.')
    except FileNotFoundError:
        LOG.info(' Not found. Downloading from NApps Server...')
        try:
            mgr.install_remote()
            LOG.info(' Downloaded and installed.')
            return
        except HTTPError as exception:
            if exception.code == 404:
                LOG.error(' NApp not found.')
            else:
                LOG.error(' NApps Server error: %s', exception)
        except URLError as exception:
            LOG.error(' NApps Server error: %s', str(exception.reason))
        raise KytosException('NApp not found.')

Install a NApp.

Raises:
    KytosException: If a NApp hasn't been found.
codesearchnet
def read_meta_graph_file(filename):
    meta_graph_def = meta_graph_pb2.MetaGraphDef()
    if not file_io.file_exists(filename):
        raise IOError(f'File does not exist. Received: (unknown).')
    with file_io.FileIO(filename, 'rb') as f:
        file_content = f.read()
    # First try to parse the file as a binary protobuf.
    try:
        meta_graph_def.ParseFromString(file_content)
        if sys.byteorder == 'big':
            bst.swap_tensor_content_in_graph_function(meta_graph_def,
                                                      'little', 'big')
        return meta_graph_def
    except Exception:
        pass
    # Fall back to parsing the file as a text-format protobuf.
    try:
        text_format.Merge(file_content.decode('utf-8'), meta_graph_def)
        if sys.byteorder == 'big':
            bst.swap_tensor_content_in_graph_function(meta_graph_def,
                                                      'little', 'big')
    except text_format.ParseError as e:
        raise IOError(f'Cannot parse file (unknown): {str(e)}.')
    return meta_graph_def

Reads a file containing `MetaGraphDef` and returns the protocol buffer.

Args:
    filename: `meta_graph_def` filename including the path.

Returns:
    A `MetaGraphDef` protocol buffer.

Raises:
    IOError: If the file doesn't exist, or cannot be successfully parsed.
github-repos
def __init__(self, form, features):
    self.form = form
    self.features = features

Construct a Segment object.

Args:
    form (string): the segment as ipa
    features (list): the segment as feature_names
juraj-google-style
def __getitem__(self, id):
    if id == slice(None, None):
        return list(self)
    response = backend.spreadsheet(self._sheets, id)
    result = models.SpreadSheet._from_response(response, self._sheets)
    result._api = self
    return result

Fetch and return the spreadsheet with the given id.

Args:
    id (str): unique alphanumeric id of the spreadsheet

Returns:
    SpreadSheet: new SpreadSheet instance

Raises:
    KeyError: if no spreadsheet with the given ``id`` is found
juraj-google-style
def GetMessages(self, formatter_mediator, event):
    if self.DATA_TYPE != event.data_type:
        raise errors.WrongFormatter('Unsupported data type: {0:s}.'.format(
            event.data_type))
    event_values = event.CopyToDict()
    return self._ConditionalFormatMessages(event_values)

Determines the formatted message strings for an event object.

Args:
    formatter_mediator (FormatterMediator): mediates the interactions
        between formatters and other components, such as storage and
        Windows EventLog resources.
    event (EventObject): event.

Returns:
    tuple(str, str): formatted message string and short message string.

Raises:
    WrongFormatter: if the event object cannot be formatted by the
        formatter.
juraj-google-style
def write_model(model_object, output_tflite_file):
    if sys.byteorder == 'big':
        model_object = copy.deepcopy(model_object)
        byte_swap_tflite_model_obj(model_object, 'big', 'little')
    model_bytearray = convert_object_to_bytearray(model_object)
    with gfile.GFile(output_tflite_file, 'wb') as output_file_handle:
        output_file_handle.write(model_bytearray)

Writes the tflite model, a python object, into the output file.

NOTE: This API only works for TFLite generated with
_experimental_use_buffer_offset=false.

Args:
    model_object: A tflite model as a python object.
    output_tflite_file: Full path name to the output tflite file.

Raises:
    IOError: If output_tflite_file path is invalid or cannot be opened.
github-repos
def _pre_run(self):
    stage_name = STAGE_NAME_PRE_RUN
    record = records.TestResultRecord(stage_name, self.TAG)
    record.test_begin()
    self.current_test_info = runtime_test_info.RuntimeTestInfo(
        stage_name, self.log_path, record)
    try:
        with self._log_test_stage(stage_name):
            self.pre_run()
        return True
    except Exception as e:
        logging.exception('%s failed for %s.', stage_name, self.TAG)
        record.test_error(e)
        self.results.add_class_error(record)
        self.summary_writer.dump(record.to_dict(),
                                 records.TestSummaryEntryType.RECORD)
        return False

Proxy function to guarantee the base implementation of `pre_run` is
called.

Returns:
    True if setup is successful, False otherwise.
github-repos
def approve(self, sha=None, **kwargs):
    path = '%s/%s/approve' % (self.manager.path, self.get_id())
    data = {}
    if sha:
        data['sha'] = sha
    server_data = self.manager.gitlab.http_post(path, post_data=data, **kwargs)
    self._update_attrs(server_data)

Approve the merge request.

Args:
    sha (str): Head SHA of MR
    **kwargs: Extra options to send to the server (e.g. sudo)

Raises:
    GitlabAuthenticationError: If authentication is not correct
    GitlabMRApprovalError: If the approval failed
juraj-google-style
def get_text_features(self, input_ids: tf.Tensor | None = None,
                      attention_mask: tf.Tensor | None = None,
                      position_ids: tf.Tensor | None = None,
                      return_dict: Optional[bool] = None) -> tf.Tensor:
    return_dict = return_dict if return_dict is not None else self.config.use_return_dict
    text_outputs = self.blip.text_model(input_ids=input_ids,
                                        attention_mask=attention_mask,
                                        position_ids=position_ids,
                                        return_dict=return_dict)
    pooled_output = text_outputs[1]
    text_features = self.blip.text_projection(pooled_output)
    return text_features

Returns:
    text_features (`tf.Tensor` of shape `(batch_size, output_dim)`): The
    text embeddings obtained by applying the projection layer to the
    pooled output of [`TFBlipTextModel`].

Examples:

```python
>>> from transformers import AutoProcessor, TFBlipModel

>>> model = TFBlipModel.from_pretrained("Salesforce/blip-image-captioning-base")
>>> processor = AutoProcessor.from_pretrained("Salesforce/blip-image-captioning-base")

>>> inputs = processor(text=["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="tf")
>>> text_features = model.get_text_features(**inputs)
```
github-repos
def _Read(self, input_file, schema, raw_binary=False):
    raw_binary = ['--raw-binary'] if raw_binary else []
    with TemporaryDirectoryResource() as tempdir:
        basename = os.path.basename(input_file)
        basename_no_extension, extension = os.path.splitext(basename)
        if extension in ['.bin', '.tflite']:
            returncode = subprocess.call(
                [self._flatc_path, '-t', '--strict-json', '--defaults-json'] +
                raw_binary + ['-o', tempdir, schema, '--', input_file])
            if returncode != 0:
                raise RuntimeError('flatc failed to convert from binary to json.')
            json_file = os.path.join(tempdir, basename_no_extension + '.json')
            if not os.path.exists(json_file):
                raise RuntimeError('Could not find %r' % json_file)
        elif extension == '.json':
            json_file = input_file
        else:
            raise ValueError('Invalid extension on input file %r' % input_file)
        with open(json_file) as f:
            return json.load(f)

Read a tflite model assuming the given flatbuffer schema.

If `input_file` is in bin, then we must use flatc to convert the schema
from binary to json.

Args:
    input_file: a binary (flatbuffer) or json file to read from.
        Extension must be `.tflite`, `.bin`, or `.json` for FlatBuffer
        Binary or FlatBuffer JSON.
    schema: which schema to use for reading.
    raw_binary: whether to assume raw_binary (versions previous to v3)
        that lacked file_identifier require this.

Raises:
    RuntimeError: 1. When flatc cannot be invoked.
                  2. When the json file does not exist.
    ValueError: When the extension is not json or bin.

Returns:
    A dictionary representing the read tflite model.
github-repos
def register_converter(src_type: Union[Type[Any], Tuple[Type[Any], ...]],
                       dest_type: Union[Type[Any], Tuple[Type[Any], ...]],
                       convert_fn: Callable[[Any], Any]) -> None:
    _TYPE_CONVERTER_REGISTRY.register(src_type, dest_type, convert_fn)

Register converter from source type to destination type.

Examples::

    # Add converter from int to float.
    pg.typing.register_converter(int, float, float)

    assert pg.typing.Float().apply(1) == 1.0

    # Add converter from a dict to class A.
    def from_dict(d):
        return A(**d)

    assert isinstance(pg.typing.Object(A).apply({'x': 1, 'y': 2}), A)

Args:
    src_type: Source value type.
    dest_type: Target value type.
    convert_fn: Function that performs the conversion, in signature
        (src_type) -> dest_type.
github-repos
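The registry behind register_converter can be pictured as a mapping from (src, dest) type pairs to conversion functions. A toy sketch of that pattern (names hypothetical, not PyGlove's internals):

```python
_REGISTRY = {}

def register_converter(src_type, dest_type, convert_fn):
    # Key the registry on the (source, destination) type pair.
    _REGISTRY[(src_type, dest_type)] = convert_fn

def convert(value, dest_type):
    fn = _REGISTRY.get((type(value), dest_type))
    if fn is None:
        raise TypeError('no converter from {} to {}'.format(
            type(value).__name__, dest_type.__name__))
    return fn(value)

# Add converter from int to float, as in the docstring example.
register_converter(int, float, float)
print(convert(1, float))  # 1.0
```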
def _RemoveForwardedIps(self, forwarded_ips, interface):
    for address in forwarded_ips:
        self.ip_forwarding_utils.RemoveForwardedIp(address, interface)

Remove the forwarded IP addresses from the network interface.

Args:
    forwarded_ips: list, the forwarded IP address strings to delete.
    interface: string, the output device to use.
juraj-google-style
def assets(self, asset_type=None):
    if not self.can_update():
        self._tcex.handle_error(910, [self.type])
    if not asset_type:
        return self.tc_requests.adversary_assets(
            self.api_type, self.api_sub_type, self.unique_id)
    if asset_type == 'PHONE':
        return self.tc_requests.adversary_phone_assets(
            self.api_type, self.api_sub_type, self.unique_id)
    if asset_type == 'HANDLER':
        return self.tc_requests.adversary_handle_assets(
            self.api_type, self.api_sub_type, self.unique_id)
    if asset_type == 'URL':
        return self.tc_requests.adversary_url_assets(
            self.api_type, self.api_sub_type, self.unique_id)
    self._tcex.handle_error(
        925, ['asset_type', 'assets', 'asset_type', 'asset_type', asset_type])
    return None

Retrieves all of the assets of a given asset_type.

Args:
    asset_type: (str) Either None, PHONE, HANDLER, or URL

Returns:
codesearchnet
def inference(images):
    with tf.variable_scope('conv1') as scope:
        kernel = _variable_with_weight_decay('weights', shape=[5, 5, 3, 64],
                                             stddev=5e-2, wd=0.0)
        conv = tf.nn.conv2d(images, kernel, [1, 1, 1, 1], padding='SAME')
        biases = _variable_on_cpu('biases', [64], tf.constant_initializer(0.0))
        pre_activation = tf.nn.bias_add(conv, biases)
        conv1 = tf.nn.relu(pre_activation, name=scope.name)
        _activation_summary(conv1)
    pool1 = tf.nn.max_pool(conv1, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1],
                           padding='SAME', name='pool1')
    norm1 = tf.nn.lrn(pool1, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75,
                      name='norm1')
    with tf.variable_scope('conv2') as scope:
        kernel = _variable_with_weight_decay('weights', shape=[5, 5, 64, 64],
                                             stddev=5e-2, wd=0.0)
        conv = tf.nn.conv2d(norm1, kernel, [1, 1, 1, 1], padding='SAME')
        biases = _variable_on_cpu('biases', [64], tf.constant_initializer(0.1))
        pre_activation = tf.nn.bias_add(conv, biases)
        conv2 = tf.nn.relu(pre_activation, name=scope.name)
        _activation_summary(conv2)
    norm2 = tf.nn.lrn(conv2, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75,
                      name='norm2')
    pool2 = tf.nn.max_pool(norm2, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1],
                           padding='SAME', name='pool2')
    with tf.variable_scope('local3') as scope:
        reshape = tf.reshape(pool2, [FLAGS.batch_size, -1])
        dim = reshape.get_shape()[1].value
        weights = _variable_with_weight_decay('weights', shape=[dim, 384],
                                              stddev=0.04, wd=0.004)
        biases = _variable_on_cpu('biases', [384], tf.constant_initializer(0.1))
        local3 = tf.nn.relu(tf.matmul(reshape, weights) + biases,
                            name=scope.name)
        _activation_summary(local3)
    with tf.variable_scope('local4') as scope:
        weights = _variable_with_weight_decay('weights', shape=[384, 192],
                                              stddev=0.04, wd=0.004)
        biases = _variable_on_cpu('biases', [192], tf.constant_initializer(0.1))
        local4 = tf.nn.relu(tf.matmul(local3, weights) + biases,
                            name=scope.name)
        _activation_summary(local4)
    with tf.variable_scope('softmax_linear') as scope:
        weights = _variable_with_weight_decay('weights', [192, NUM_CLASSES],
                                              stddev=1 / 192.0, wd=0.0)
        biases = _variable_on_cpu('biases', [NUM_CLASSES],
                                  tf.constant_initializer(0.0))
        softmax_linear = tf.add(tf.matmul(local4, weights), biases,
                                name=scope.name)
        _activation_summary(softmax_linear)
    return softmax_linear

Build the CIFAR-10 model.

Args:
    images: Images returned from distorted_inputs() or inputs().

Returns:
    Logits.
juraj-google-style
def __init__(self, parent, module_name, module_ui):
    super(ModuleUIBaseFrame, self).__init__(parent, padding=8)
    self.columnconfigure(0, weight=1)
    self.rowconfigure(1, weight=1)

    if module_ui is not None:
        module_ui.ModuleUIFrame(self).grid(row=0, column=0, sticky="W E N S")
    else:
        logger.debug("No _ui.py found for '{}'".format(module_name))

    help_frame = ttk.LabelFrame(self, padding=8, text="Help")
    help_frame.grid(row=1, column=0, sticky="W E N S")
    help_frame.columnconfigure(0, weight=1)
    help_frame.rowconfigure(0, weight=1)

    _dir = os.path.realpath(
        os.path.join(os.getcwd(), os.path.dirname(__file__)))
    help_path = "{}/modules/{}/{}".format(_dir, module_name, "_help.json")
    if os.path.isfile(help_path):
        helptools.add_help_text(help_frame, help_path)
    else:
        tk.Label(help_frame,
                 text="No _help.json file found for '{}'".format(module_name)
                 ).grid(row=0, column=0, sticky="W E N S")

Create a new base for a module UI.

Args:
    parent: A tk or ttk object
    module_name (str): The name of the module
    module_ui: The _ui.py file to add for the module
juraj-google-style
def parse_latitude(latitude, hemisphere):
    latitude = int(latitude[:2]) + float(latitude[2:]) / 60
    if hemisphere == 'S':
        latitude = -latitude
    elif not hemisphere == 'N':
        raise ValueError('Incorrect North/South value %r' % hemisphere)
    return latitude

Parse a NMEA-formatted latitude pair.

Args:
    latitude (str): Latitude in DDMM.MMMM
    hemisphere (str): North or South

Returns:
    float: Decimal representation of latitude
juraj-google-style
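The DDMM.MMMM arithmetic above (degrees from the first two characters, minutes divided by 60 from the rest) can be checked in isolation; this sketch reuses the same function body:

```python
def parse_latitude(latitude, hemisphere):
    # DDMM.MMMM: first two characters are degrees,
    # the remainder is minutes, converted to fractional degrees.
    latitude = int(latitude[:2]) + float(latitude[2:]) / 60
    if hemisphere == 'S':
        latitude = -latitude
    elif not hemisphere == 'N':
        raise ValueError('Incorrect North/South value %r' % hemisphere)
    return latitude

# 49 degrees, 16.45 minutes north.
print(parse_latitude('4916.45', 'N'))
```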
def container(self, container_name):
    original_container = self._container
    with ops.init_scope():
        original_init_container = ops.get_default_graph()._container
    try:
        self._container = container_name
        with ops.init_scope():
            ops.get_default_graph()._container = container_name
        yield self._container
    finally:
        self._container = original_container
        with ops.init_scope():
            ops.get_default_graph()._container = original_init_container

Returns a context manager that specifies the resource container to use.

Overridden from `tf.Graph` to update both the init_scope container and
the present inner container. This is necessary to make sure setting
containers applies correctly both to created variables and to stateful
ops.

Args:
    container_name: container name string.

Returns:
    A context manager for defining resource containers for stateful ops,
    yields the container name.
github-repos
def get_atten(self):
    return self.attenuation_device.get_atten(self.idx)

Gets the current attenuation setting of the Attenuator.

Returns:
    A float that is the current attenuation value. Unit is dB.
github-repos
def setup_keyword(dist, _, value):
    if value is not True:
        return
    dist.entry_points = _ensure_entry_points_is_dict(dist.entry_points)
    for command, subcommands in six.iteritems(_get_commands(dist)):
        entry_point = '{command} = rcli.dispatcher:main'.format(
            command=command)
        entry_points = dist.entry_points.setdefault('console_scripts', [])
        if entry_point not in entry_points:
            entry_points.append(entry_point)
        dist.entry_points.setdefault('rcli', []).extend(subcommands)

Add autodetected commands as entry points.

Args:
    dist: The distutils Distribution object for the project being
        installed.
    _: The keyword used in the setup function. Unused.
    value: The value set to the keyword in the setup function. If the
        value is not True, this function will do nothing.
juraj-google-style
def convert_to_tensor_or_indexed_slices(value, dtype=None, name=None):
    return internal_convert_to_tensor_or_indexed_slices(
        value=value, dtype=dtype, name=name, as_ref=False)

Converts the given object to a `Tensor` or an `IndexedSlices`.

If `value` is an `IndexedSlices` or `SparseTensor` it is returned
unmodified. Otherwise, it is converted to a `Tensor` using
`convert_to_tensor()`.

Args:
    value: An `IndexedSlices`, `SparseTensor`, or an object that can be
        consumed by `convert_to_tensor()`.
    dtype: (Optional.) The required `DType` of the returned `Tensor` or
        `IndexedSlices`.
    name: (Optional.) A name to use if a new `Tensor` is created.

Returns:
    A `Tensor`, `IndexedSlices`, or `SparseTensor` based on `value`.

Raises:
    ValueError: If `dtype` does not match the element type of `value`.
github-repos
def modify_module(channel, module_name, module_state):
    gui = ui_embed.UI(
        channel,
        "{} updated".format(module_name),
        "{} is now {}".format(module_name,
                              "activated" if module_state else "deactivated"),
        modulename=modulename
    )
    return gui

Creates an embed UI containing the module modified message.

Args:
    channel (discord.Channel): The Discord channel to bind the embed to
    module_name (str): The name of the module that was updated
    module_state (bool): The current state of the module

Returns:
    embed: The created embed
juraj-google-style
def _SetBlankLinesBetweenCommentAndClassFunc(self, node):
    index = 0
    while pytree_utils.IsCommentStatement(node.children[index]):
        self.Visit(node.children[index].children[0])
        if not self.last_was_decorator:
            _SetNumNewlines(node.children[index].children[0], _ONE_BLANK_LINE)
        index += 1
    if index and node.children[index].lineno - 1 == node.children[index - 1].children[0].lineno:
        _SetNumNewlines(node.children[index], _NO_BLANK_LINES)
    else:
        if self.last_comment_lineno + 1 == node.children[index].lineno:
            num_newlines = _NO_BLANK_LINES
        else:
            num_newlines = self._GetNumNewlines(node)
        _SetNumNewlines(node.children[index], num_newlines)
    return index

Set the number of blanks between a comment and class or func definition.

Class and function definitions have leading comments as children of the
classdef and funcdef nodes.

Arguments:
    node: (pytree.Node) The classdef or funcdef node.

Returns:
    The index of the first child past the comment nodes.
github-repos
def add_help_text(parent, filepath, prefix="!"):
    import tkinter as tk
    import tkinter.ttk as ttk

    help_contents = get_help_data(filepath)

    text = tk.Text(parent, wrap='word', font=("Helvetica", 10))
    text.grid(row=0, column=0, sticky="W E N S")
    text.tag_config("heading", font=("Helvetica", 14))
    text.tag_config("command", font=("Courier", 10))
    text.tag_config("param", font=("Courier", 10))
    text.tag_config("description")

    scrollbar = ttk.Scrollbar(parent, orient="vertical", command=text.yview)
    scrollbar.grid(column=1, row=0, sticky="N S")
    text['yscrollcommand'] = scrollbar.set

    for d in help_contents:
        text.insert('end', d, "heading")
        text.insert('end', '\n')
        if "commands" in d.lower():
            for c in help_contents[d]:
                if "name" not in c:
                    continue
                command = prefix + c["name"]
                text.insert('end', command, ("command", "description"))
                if "params" in c:
                    for param in c["params"]:
                        text.insert('end', " [{}]".format(param),
                                    ("param", "description"))
                text.insert('end', ": ")
                if "description" in c:
                    text.insert('end', c["description"], "description")
                text.insert('end', '\n')
            text.insert('end', '\n')
        else:
            text.insert('end', help_contents[d], "description")
            text.insert('end', '\n\n')

    text.config(state=tk.DISABLED)

Load help text from a file and add it to the parent.

Args:
    parent: A tk or ttk object
    filepath (str): The file to load help text from
    prefix (str): The prefix to use for commands
juraj-google-style
def get_structure_from_mp(formula):
    m = MPRester()
    entries = m.get_entries(formula, inc_structure='final')
    if len(entries) == 0:
        raise ValueError('No structure with formula %s in Materials Project!'
                         % formula)
    elif len(entries) > 1:
        warnings.warn('%d structures with formula %s found in Materials '
                      'Project. The lowest energy structure will be returned.'
                      % (len(entries), formula))
    return min(entries, key=lambda e: e.energy_per_atom).structure

Convenience method to get a crystal from the Materials Project database
via the API. Requires PMG_MAPI_KEY to be set.

Args:
    formula (str): A formula

Returns:
    (Structure) The lowest energy structure in Materials Project with
    that formula.
codesearchnet
def save_binary(self, data: Union[dict, List[dict]]) -> str:
    path, _ = os.path.splitext(self.output_path)
    binary_path = os.path.extsep.join((path, 'pickle'))
    with open(binary_path, 'wb+') as f_output:
        pickle.dump(data, f_output)
    return binary_path

Save the provided data object as pickle-formatted binary data on disk.

Args:
    data (`dict` or list of `dict`): The data to store.

Returns:
    `str`: Path where the data has been saved.
github-repos
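save_binary is essentially an extension swap plus a pickle round-trip. A minimal standalone sketch of the same steps, using a temporary directory (the file names here are illustrative):

```python
import os
import pickle
import tempfile

data = [{'label': 'cat', 'score': 0.9}]

with tempfile.TemporaryDirectory() as tmp:
    # Swap the extension the way save_binary does: drop the old
    # extension, then join 'pickle' with os.path.extsep.
    path, _ = os.path.splitext(os.path.join(tmp, 'out.json'))
    binary_path = os.path.extsep.join((path, 'pickle'))
    with open(binary_path, 'wb+') as f_output:
        pickle.dump(data, f_output)
    with open(binary_path, 'rb') as f_input:
        restored = pickle.load(f_input)

print(restored == data)  # True
```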
def write_wav(path, samples, sr=16000):
    max_value = np.abs(np.iinfo(np.int16).min)
    data = (samples * max_value).astype(np.int16)
    scipy.io.wavfile.write(path, sr, data)

Write the given samples to a wav file. The samples are expected to be
floating point numbers in the range of -1.0 to 1.0.

Args:
    path (str): The path to write the wav to.
    samples (np.array): A float array.
    sr (int): The sampling rate.
juraj-google-style
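The int16 scaling step in write_wav can be exercised without touching the filesystem. Note, hedged as an observation only: the scale factor is 32768 (the magnitude of the int16 minimum), so a sample of exactly 1.0 would scale to 32768, one past the int16 maximum of 32767; this sketch sticks to in-range samples:

```python
import numpy as np

samples = np.array([0.5, -0.5, 0.0])

# Scale floats in [-1.0, 1.0] by |int16 min| = 32768, then cast.
max_value = np.abs(np.iinfo(np.int16).min)
data = (samples * max_value).astype(np.int16)
print(data.tolist())  # [16384, -16384, 0]
```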
def SelectArtifacts(cls, os_name=None, cpe=None, labels=None,
                    restrict_checks=None):
    results = set()
    for condition in cls.Conditions(None, os_name, cpe, labels):
        trigger = condition[1:]
        for chk in itervalues(cls.checks):
            if restrict_checks and chk.check_id not in restrict_checks:
                continue
            results.update(chk.triggers.Artifacts(*trigger))
    return results

Takes targeting info, identifies artifacts to fetch.

Args:
    os_name: 0+ OS names.
    cpe: 0+ CPE identifiers.
    labels: 0+ GRR labels.
    restrict_checks: A list of check ids whose artifacts should be
        fetched.

Returns:
    the artifacts that should be collected.
codesearchnet
def __getitem__(self, thing: Any) -> np.ndarray:
    if type(thing) is str:
        return self.__getattr__(thing)
    else:
        lm = LayerManager(None)
        for key, layer in self.items():
            lm[key] = loompy.MemoryLoomLayer(key, layer[thing])
        return lm

Access a layer by name, or slice through all the layers.

Args:
    thing: if string, return the specified layer ("" is the default
        layer); if slice 2-tuple, return a new LayerManager with all
        layers sliced.
juraj-google-style
def _create_variables_and_slots(
        self) -> Dict[str, Dict[str, tf_variables.Variable]]:
    self._track_restore_info_for_cpu()
    variables = {}
    stacked_variables = self._create_variables_from_stacked_tables()
    for table in self._table_config:
        if table.name in stacked_variables:
            variables[table.name] = {
                'parameters': stacked_variables[table.name]}
        else:
            variables[table.name] = self._create_variables(table,
                                                           trainable=True)
    return variables

Create variables for TPU embeddings.

Returns:
    A dict of dicts. The outer dict is keyed by the table names and the
    inner dicts are keyed by 'parameters' and the slot variable names.
github-repos
def DeregisterHelper(cls, helper_class):
    helper_name = helper_class.NAME.lower()
    if helper_name not in cls._helper_classes:
        raise KeyError('Helper class not set for name: {0:s}.'.format(
            helper_class.NAME))
    del cls._helper_classes[helper_name]

Deregisters a helper class.

The helper classes are identified based on their lower case name.

Args:
    helper_class (type): class object of the argument helper.

Raises:
    KeyError: if helper class is not set for the corresponding name.
juraj-google-style
def _GetFileSystemCacheIdentifier(self, path_spec):
    string_parts = []
    string_parts.append(getattr(path_spec.parent, 'comparable', ''))
    string_parts.append('type: {0:s}'.format(path_spec.type_indicator))
    return ''.join(string_parts)

Determines the file system cache identifier for the path specification.

Args:
    path_spec (PathSpec): path specification.

Returns:
    str: identifier of the VFS object.
juraj-google-style
def penalize_boundary_complexity(shp, w=20, mask=None, C=0.5):
    def inner(T):
        arr = T("input")
        if mask is None:
            mask_ = np.ones(shp)
            mask_[:, w:-w, w:-w] = 0
        else:
            mask_ = mask
        blur = _tf_blur(arr, w=5)
        diffs = (blur - arr) ** 2
        diffs += 0.8 * (arr - C) ** 2
        return -tf.reduce_sum(diffs * mask_)
    return inner

Encourage the boundaries of an image to have less variation and to be of
color C.

Args:
    shp: shape of T("input"), because this may not be known.
    w: width of boundary to penalize. Ignored if mask is set.
    mask: mask describing what area should be penalized.

Returns:
    Objective.
juraj-google-style
def modify_prefix(channel, new_prefix):
    gui = ui_embed.UI(
        channel,
        'Prefix updated',
        'Modis prefix is now `{}`'.format(new_prefix),
        modulename=modulename)
    return gui
Creates an embed UI containing the prefix modified message Args: channel (discord.Channel): The Discord channel to bind the embed to new_prefix (str): The value of the new prefix Returns: embed: The created embed
codesearchnet
def load_json(task: Task, file: str) -> Result:
    kwargs: Dict[str, Type[MutableMapping[str, Any]]] = {}
    with open(file, 'r') as f:
        data = json.loads(f.read(), **kwargs)
    return Result(host=task.host, result=data)
Loads a json file.

Arguments:
    file: path to the file containing the json file to load

Examples:
    Simple example with ``ordered_dict``::

        > nr.run(task=load_json, file="mydata.json")

Returns:
    Result object with the following attributes set:
      * result (``dict``): dictionary with the contents of the file
codesearchnet
def FindStartOfExpressionInLine(line, endpos, stack):
    i = endpos
    while i >= 0:
        char = line[i]
        if char in ')]}':
            stack.append(char)
        elif char == '>':
            if (i > 0 and
                    (line[i - 1] == '-' or
                     Match(r'\s>=\s', line[i - 1:]) or
                     Search(r'\boperator\s*$', line[0:i]))):
                i -= 1
            else:
                stack.append('>')
        elif char == '<':
            if i > 0 and line[i - 1] == '<':
                i -= 1
            else:
                if stack and stack[-1] == '>':
                    stack.pop()
                    if not stack:
                        return (i, None)
        elif char in '([{':
            while stack and stack[-1] == '>':
                stack.pop()
            if not stack:
                return (-1, None)
            if ((char == '(' and stack[-1] == ')') or
                    (char == '[' and stack[-1] == ']') or
                    (char == '{' and stack[-1] == '}')):
                stack.pop()
                if not stack:
                    return (i, None)
            else:
                return (-1, None)
        elif char == ';':
            while stack and stack[-1] == '>':
                stack.pop()
            if not stack:
                return (-1, None)
        i -= 1
    return (-1, stack)
Find position at the matching start of current expression. This is almost the reverse of FindEndOfExpressionInLine, but note that the input position and returned position differs by 1. Args: line: a CleansedLines line. endpos: start searching at this position. stack: nesting stack at endpos. Returns: On finding matching start: (index at matching start, None) On finding an unclosed expression: (-1, None) Otherwise: (-1, new stack at beginning of this line)
juraj-google-style
def __init__(self, app=None, env=None, region='us-east-1', prop_path=None):
    self.app_name = app
    self.env = env
    self.region = region
    self.properties = get_properties(prop_path, env=self.env, region=self.region)
    self.datapipeline_data = self.properties['datapipeline']
    generated = get_details(app=self.app_name)
    self.group = generated.data['project']
    session = boto3.Session(profile_name=self.env, region_name=self.region)
    self.client = session.client('datapipeline')
    self.pipeline_id = None
AWS Data Pipeline object. Args: app (str): Application name env (str): Environment/Account region (str): AWS Region prop_path (str): Path of environment property file
juraj-google-style
def get_image_tokens(self, pixel_values: torch.FloatTensor):
    batch_size = pixel_values.shape[0]
    _, _, image_toks = self.vqmodel.encode(pixel_values)
    bpe_toks = self.vocabulary_mapping.convert_img2bpe(image_toks)
    bpe_toks = bpe_toks.view(batch_size, -1)
    return bpe_toks
Tokenizes images into discrete tokens with VQGAN module. Converts obtained image tokens into BPE tokens and wraps with "boi" and "eoi" special tokens.

Args:
    pixel_values (`torch.FloatTensor` of shape `(batch_size, num_channels, image_size, image_size)`):
        The tensors corresponding to the input images.
github-repos
def FindFirst(cls, setting_matcher, device_matcher=None, **kwargs):
    try:
        return next(cls.FindDevices(
            setting_matcher, device_matcher=device_matcher, **kwargs))
    except StopIteration:
        raise usb_exceptions.DeviceNotFoundError(
            'No device available, or it is in the wrong configuration.')
Find and return the first matching device. Args: setting_matcher: See cls.FindDevices. device_matcher: See cls.FindDevices. **kwargs: See cls.FindDevices. Returns: An instance of UsbHandle. Raises: DeviceNotFoundError: Raised if the device is not available.
juraj-google-style
def set_snippet_client_verbose_logging(self, verbose):
    self._ad.log.info('Set verbose logging to %s.', verbose)
    self.verbose_logging = verbose
Switches verbose logging. True for logging full RPC response. By default it will only write max_rpc_return_value_length for Rpc return strings. If you need to see full message returned from Rpc, please turn on verbose logging. max_rpc_return_value_length will set to 1024 by default, the length contains full Rpc response in Json format, included 1st element "id". Args: verbose: bool. If True, turns on verbose logging, if False turns off
github-repos
def interface_required(interface):
    def _interface_required(func):
        @functools.wraps(func)
        def wrapper(self, *args, **kwargs):
            if self.tif != interface:
                raise errors.JLinkException('Unsupported for current interface.')
            return func(self, *args, **kwargs)
        return wrapper
    return _interface_required
Decorator to specify that a particular interface type is required for the given method to be used. Args: interface (int): attribute of ``JLinkInterfaces`` Returns: A decorator function.
juraj-google-style
def set_attr_text(self, attr_key, attr_val, el_idx=0):
    self.get_element_by_attr_key(attr_key, el_idx).attrib[attr_key] = attr_val
Set the value of the selected attribute of the selected element. Args: attr_key : str Name of attribute for which to search attr_val : str Text to set for the attribute. el_idx : int Index of element to use in the event that there are multiple sibling elements with the same name.
juraj-google-style
def data_groups(self, groups, entity_count):
    data = []
    for xid in groups.keys():
        assoc_group_data = self.data_group_association(xid)
        data += assoc_group_data
        entity_count += len(assoc_group_data)
        if entity_count >= self._batch_max_chunk:
            break
    return data, entity_count
Process Group data.

Args:
    groups (list): The list of groups to process.
    entity_count (int): Running count of entities processed so far.

Returns:
    tuple: A list of groups including associations, and the updated
        entity count.
codesearchnet
def _publish_to_subscribers(event: Event):
    subscribers = get_subscribers(event.object_type)
    for sub in subscribers:
        DB.prepend_to_list(_keys.published(event.object_type, sub),
                           event.id, pipeline=True)
        event_dict = deepcopy(event.config)
        event_dict.pop('id')
        DB.set_hash_value(_keys.data(event.object_type, sub), event.id,
                          str(event_dict), pipeline=True)
    DB.publish(event.object_type, event.id, pipeline=True)
Publish and event to all subscribers. - Adds the event id to the published event list for all subscribers. - Adds the event data to the published event data for all subscribers. - Publishes the event id notification to all subscribers. Args: event (Event): Event object to publish.
juraj-google-style
def _FormatDateTime(self, event):
    try:
        datetime_object = datetime.datetime(
            1970, 1, 1, 0, 0, 0, 0, tzinfo=pytz.UTC)
        datetime_object += datetime.timedelta(microseconds=event.timestamp)
        datetime_object = datetime_object.astimezone(self._output_mediator.timezone)
        return datetime_object.replace(tzinfo=None)
    except (OverflowError, ValueError) as exception:
        self._ReportEventError(event, (
            'unable to copy timestamp: {0!s} to a human readable date and '
            'time with error: {1!s}. Defaulting to: "ERROR"').format(
                event.timestamp, exception))
        return 'ERROR'
Formats the date to a datetime object without timezone information. Note: timezone information must be removed due to lack of support by xlsxwriter and Excel. Args: event (EventObject): event. Returns: datetime.datetime|str: date and time value or a string containing "ERROR" on OverflowError.
codesearchnet
def issuperset(self, other):
    other = self._cast_to_frameset(other)
    if other is NotImplemented:
        return NotImplemented
    return self.items >= other.items
Check if the contents of `self` is a superset of the contents of `other`.

Args:
    other (:class:`FrameSet`):

Returns:
    bool:
    :class:`NotImplemented`: if `other` fails to convert to a :class:`FrameSet`
codesearchnet
def select(self, field_paths):
    query = query_mod.Query(self)
    return query.select(field_paths)
Create a "select" query with this collection as parent. See :meth:`~.firestore_v1beta1.query.Query.select` for more information on this method. Args: field_paths (Iterable[str, ...]): An iterable of field paths (``.``-delimited list of field names) to use as a projection of document fields in the query results. Returns: ~.firestore_v1beta1.query.Query: A "projected" query.
codesearchnet
def get_candidates(self, input_ids: torch.LongTensor) -> Tuple[torch.LongTensor, Optional[torch.FloatTensor]]:
    input_ids = input_ids.to(self.assistant_model.device)
    min_new_tokens, max_new_tokens = self._calculate_new_tokens(input_ids)
    if max_new_tokens == 0:
        return (input_ids, None)
    self._update_past_and_masks(input_ids)
    generation_args = self._prepare_generation_args(input_ids, min_new_tokens, max_new_tokens)
    candidate_ids, candidate_logits = self._generate_candidates(generation_args)
    return (candidate_ids, candidate_logits)
Fetches the candidates to be tried for the current input. Args: input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`): Indices of input sequence tokens in the vocabulary. [What are input IDs?](../glossary#input-ids) Return: `torch.LongTensor` of shape `(batch_size, candidate_length)` containing the candidate sequences to be assessed by the model and a `torch.FloatTensor` of shape `(batch_size, candidate_length, vocabulary_size)` containing the logits associated to each candidate.
github-repos
def _conv_general_param_type_converter(window_strides, lhs_dilation, rhs_dilation, dim):
    def _as_list_of_size(item, size):
        if item is None:
            return None
        return [item] * size if isinstance(item, int) else list(item)
    return (_as_list_of_size(window_strides, dim),
            _as_list_of_size(lhs_dilation, dim),
            _as_list_of_size(rhs_dilation, dim))
Convert strides, lhs_dilation, rhs_dilation to match TF convention. For example, in the 3D case, if lhs_dilation = 2, then convert it to [2, 2, 2] if lhs_dilation = (2, 2, 2), convert it also to [2, 2, 2] Args: window_strides: window_strides to be converted lhs_dilation: lhs_dilation to be converted rhs_dilation: rhs_dilation to be converted dim: dim to be converted Returns: The updated window_strides, lhs_dilation and rhs_dilation
github-repos
def call_later(self, delay, callback):
    if hasattr(self._connection.ioloop, 'call_later'):
        self._connection.ioloop.call_later(delay, callback)
    else:
        self._connection.ioloop.add_timeout(delay, callback)
Schedule a one-shot timeout given delay seconds. This method is only useful for compatibility with older versions of pika. Args: delay (float): Non-negative number of seconds from now until expiration callback (method): The callback method, having the signature `callback()`
codesearchnet
def add_one(self, url: str, url_properties: Optional[URLProperties] = None,
            url_data: Optional[URLData] = None):
    self.add_many([AddURLInfo(url, url_properties, url_data)])
Add a single URL to the table. Args: url: The URL to be added url_properties: Additional values to be saved url_data: Additional data to be saved
codesearchnet
def __init__(self, session, max_retries, retry_backoff_base):
    self.on_connect = event.Event('Channel.on_connect')
    self.on_reconnect = event.Event('Channel.on_reconnect')
    self.on_disconnect = event.Event('Channel.on_disconnect')
    self.on_receive_array = event.Event('Channel.on_receive_array')
    self._max_retries = max_retries
    self._retry_backoff_base = retry_backoff_base
    self._is_connected = False
    self._on_connect_called = False
    self._chunk_parser = None
    self._session = session
    self._sid_param = None
    self._gsessionid_param = None
Create a new channel. Args: session (http_utils.Session): Request session. max_retries (int): Number of retries for long-polling request. retry_backoff_base (int): The base term for the long-polling exponential backoff.
juraj-google-style
def save_counter(self):
    self._maybe_create_save_counter()
    return self._save_counter
An integer variable which starts at zero and is incremented on save. Used to number checkpoints. Returns: The save counter variable.
github-repos
def deterministic_shuffle(list_, seed=0, rng=None):
    rng = ensure_rng(seed if rng is None else rng)
    rng.shuffle(list_)
    return list_
r""" Args: list_ (list): seed (int): Returns: list: list_ CommandLine: python -m utool.util_numpy --test-deterministic_shuffle Example: >>> # ENABLE_DOCTEST >>> from utool.util_numpy import * # NOQA >>> list_ = [1, 2, 3, 4, 5, 6] >>> seed = 1 >>> list_ = deterministic_shuffle(list_, seed) >>> result = str(list_) >>> print(result) [3, 2, 5, 1, 4, 6]
juraj-google-style
def truepath_relative(path, otherpath=None):
    if otherpath is None:
        otherpath = os.getcwd()
    otherpath = truepath(otherpath)
    path_ = normpath(relpath(path, otherpath))
    return path_
Normalizes `path` and returns it relative to `otherpath`.

Args:
    path (str): path to file or directory
    otherpath (None): (default = None)

Returns:
    str: path_

CommandLine:
    python -m utool.util_path --exec-truepath_relative --show

Example:
    >>> # ENABLE_DOCTEST
    >>> from utool.util_path import *  # NOQA
    >>> import utool as ut
    >>> path = 'C:/foobar/foobiz'
    >>> otherpath = 'C:/foobar'
    >>> path_ = truepath_relative(path, otherpath)
    >>> result = ('path_ = %s' % (ut.repr2(path_),))
    >>> print(result)
    path_ = 'foobiz'
codesearchnet
def remove(self, node, dirty=True):
    if node.id in self._children:
        self._children[node.id].parent = None
        del self._children[node.id]
    if dirty:
        self.touch()
Remove the given child node. Args: node (gkeepapi.Node): Node to remove. dirty (bool): Whether this node should be marked dirty.
codesearchnet
def _get_child_class(self, path):
    if self._child_entity is None:
        return BIDSNode
    for i, child_ent in enumerate(listify(self._child_entity)):
        template = self.available_entities[child_ent].directory
        if template is None:
            return BIDSNode
        template = self.root_path + template
        to_rep = re.findall(r'\{(.*?)\}', template)
        for ent in to_rep:
            patt = self.available_entities[ent].pattern
            template = template.replace('{%s}' % ent, patt)
        template += r'[^\%s]*$' % os.path.sep
        if re.match(template, path):
            return listify(self._child_class)[i]
    return BIDSNode
Return the appropriate child class given a subdirectory path. Args: path (str): The path to the subdirectory. Returns: An uninstantiated BIDSNode or one of its subclasses.
juraj-google-style
def line_starts_subpgm(line: str) -> Tuple[bool, Optional[str]]:
    match = RE_SUB_START.match(line)
    if match is not None:
        f_name = match.group(1)
        return (True, f_name)
    match = RE_FN_START.match(line)
    if match is not None:
        f_name = match.group(1)
        return (True, f_name)
    return (False, None)
Indicates whether a line in the program is the first line of a subprogram definition. Args: line Returns: (True, f_name) if line begins a definition for subprogram f_name; (False, None) if line does not begin a subprogram definition.
juraj-google-style
def hpo_terms(self, query=None, hpo_term=None, text=None, limit=None):
    query_dict = {}
    search_term = None
    if query:
        query_dict = {'$or': [
            {'hpo_id': {'$regex': query, '$options': 'i'}},
            {'description': {'$regex': query, '$options': 'i'}},
        ]}
        search_term = query
    elif text:
        new_string = ''
        for i, word in enumerate(text.split(' ')):
            if i == 0:
                new_string += word
            else:
                new_string += ' \"{0}\"'.format(word)
        LOG.info("Search HPO terms with %s", new_string)
        query_dict['$text'] = {'$search': new_string}
        search_term = text
    elif hpo_term:
        query_dict['hpo_id'] = hpo_term
        search_term = hpo_term
    limit = limit or int(10e10)
    res = self.hpo_term_collection.find(query_dict).limit(limit).sort('hpo_number', ASCENDING)
    LOG.info("Found {0} terms with search word {1}".format(res.count(), search_term))
    return res
Return all HPO terms If a query is sent hpo_terms will try to match with regex on term or description. Args: query(str): Part of a hpoterm or description hpo_term(str): Search for a specific hpo term limit(int): the number of desired results Returns: result(pymongo.Cursor): A cursor with hpo terms
juraj-google-style
def release_client(self, client):
    if isinstance(client, Client):
        if not self._is_expired_client(client):
            LOG.debug('Client is not expired. Adding back to pool')
            self.__pool.append(client)
        elif client.is_connected():
            LOG.debug('Client is expired and connected. Disconnecting')
            client.disconnect()
    if self.__sem is not None:
        self.__sem.release()
Releases a client object to the pool. Args: client: Client object.
juraj-google-style
def dismiss_confirm(self, text=None, wait=None):
    with self.driver.dismiss_modal('confirm', text=text, wait=wait):
        yield
Execute the wrapped code, dismissing a confirm. Args: text (str | RegexObject, optional): Text to match against the text in the modal. wait (int | float, optional): Maximum time to wait for the modal to appear after executing the wrapped code. Raises: ModalNotFound: If a modal dialog hasn't been found.
codesearchnet
def xslt_transformation(xml, template):
    transformer = ET.XSLT(
        _read_template(template)
    )
    newdom = transformer(
        _read_marcxml(xml)
    )
    return ET.tostring(newdom, pretty_print=True, encoding="utf-8")
Transform `xml` using XSLT `template`. Args: xml (str): Filename or XML string. Don't use ``\\n`` in case of filename. template (str): Filename or XML string. Don't use ``\\n`` in case of filename. Returns: str: Transformed `xml` as string.
juraj-google-style
def extract_response(self, extractors):
    if not extractors:
        return {}
    logger.log_debug("start to extract from response object.")
    extracted_variables_mapping = OrderedDict()
    extract_binds_order_dict = utils.ensure_mapping_format(extractors)
    for key, field in extract_binds_order_dict.items():
        extracted_variables_mapping[key] = self.extract_field(field)
    return extracted_variables_mapping
extract value from requests.Response and store in OrderedDict. Args: extractors (list): [ {"resp_status_code": "status_code"}, {"resp_headers_content_type": "headers.content-type"}, {"resp_content": "content"}, {"resp_content_person_first_name": "content.person.name.first_name"} ] Returns: OrderDict: variable binds ordered dict
juraj-google-style
def merge_all_models_into_first_model(biop_structure):
    from string import ascii_uppercase
    idx = 1
    first_model = biop_structure[0]
    for m in biop_structure.get_models():
        if first_model.id == m.id:
            continue
        for c in m.get_chains():
            c.id = ascii_uppercase[idx]
            first_model.add(c)
            idx += 1
Merge all existing models into a Structure's first_model attribute. This directly modifies the Biopython Structure object. Chains IDs will start from A and increment for each new chain (model that is converted). Args: biop_structure (Structure): Structure with multiple models that should be merged
juraj-google-style
def __init__(self, **kwargs):
    super(functionTagProcessor, self).__init__(**kwargs)
    self.include_function_signatures = kwargs.get(
        'include_function_signatures', False)
Initializer. Args: **include_function_signatures: bool. See get_name() for more info.
juraj-google-style
def get_many(self, query: Mapping[str, Any], context: PipelineContext = None, streaming: bool = False) -> Iterable[T]:
    result = self._source.get_many(self._source_type, deepcopy(query), context)
    LOGGER.info('Got results "{result}" from query "{query}" of source "{source}"'.format(result=result, query=query, source=self._source))
    if not streaming:
        LOGGER.info('Non-streaming get_many request. Ensuring results "{result}" are a Iterable'.format(result=result))
        result = list(result)
        LOGGER.info('Sending results "{result}" to sinks before converting'.format(result=result))
        for sink in self._before_transform:
            sink.put_many(result, context)
        LOGGER.info('Converting results "{result}" to request type'.format(result=result))
        result = [self._transform(data=item, context=context) for item in result]
        LOGGER.info('Sending results "{result}" to sinks after converting'.format(result=result))
        for sink in self._after_transform:
            sink.put_many(result, context)
        return result
    else:
        LOGGER.info('Streaming get_many request. Returning result generator for results "{result}"'.format(result=result))
        return self._get_many_generator(result)
Gets a query from the data source, where the query contains multiple elements to be extracted. 1) Extracts the query from the data source. 2) Inserts the result into any data sinks. 3) Transforms the results into the requested type if it wasn't already. 4) Inserts the transformed result into any data sinks. Args: query: The query being requested. context: The context for the extraction (mutable). streaming: Specifies whether the results should be returned as a generator (default False). Returns: The requested objects or a generator of the objects if streaming is True.
codesearchnet
def __init__(self, channel):
    self.ListInstanceConfigs = channel.unary_unary(
        "/google.spanner.admin.instance.v1.InstanceAdmin/ListInstanceConfigs",
        request_serializer=google_dot_cloud_dot_spanner_dot_admin_dot_instance__v1_dot_proto_dot_spanner__instance__admin__pb2.ListInstanceConfigsRequest.SerializeToString,
        response_deserializer=google_dot_cloud_dot_spanner_dot_admin_dot_instance__v1_dot_proto_dot_spanner__instance__admin__pb2.ListInstanceConfigsResponse.FromString,
    )
    self.GetInstanceConfig = channel.unary_unary(
        "/google.spanner.admin.instance.v1.InstanceAdmin/GetInstanceConfig",
        request_serializer=google_dot_cloud_dot_spanner_dot_admin_dot_instance__v1_dot_proto_dot_spanner__instance__admin__pb2.GetInstanceConfigRequest.SerializeToString,
        response_deserializer=google_dot_cloud_dot_spanner_dot_admin_dot_instance__v1_dot_proto_dot_spanner__instance__admin__pb2.InstanceConfig.FromString,
    )
    self.ListInstances = channel.unary_unary(
        "/google.spanner.admin.instance.v1.InstanceAdmin/ListInstances",
        request_serializer=google_dot_cloud_dot_spanner_dot_admin_dot_instance__v1_dot_proto_dot_spanner__instance__admin__pb2.ListInstancesRequest.SerializeToString,
        response_deserializer=google_dot_cloud_dot_spanner_dot_admin_dot_instance__v1_dot_proto_dot_spanner__instance__admin__pb2.ListInstancesResponse.FromString,
    )
    self.GetInstance = channel.unary_unary(
        "/google.spanner.admin.instance.v1.InstanceAdmin/GetInstance",
        request_serializer=google_dot_cloud_dot_spanner_dot_admin_dot_instance__v1_dot_proto_dot_spanner__instance__admin__pb2.GetInstanceRequest.SerializeToString,
        response_deserializer=google_dot_cloud_dot_spanner_dot_admin_dot_instance__v1_dot_proto_dot_spanner__instance__admin__pb2.Instance.FromString,
    )
    self.CreateInstance = channel.unary_unary(
        "/google.spanner.admin.instance.v1.InstanceAdmin/CreateInstance",
        request_serializer=google_dot_cloud_dot_spanner_dot_admin_dot_instance__v1_dot_proto_dot_spanner__instance__admin__pb2.CreateInstanceRequest.SerializeToString,
        response_deserializer=google_dot_longrunning_dot_operations__pb2.Operation.FromString,
    )
    self.UpdateInstance = channel.unary_unary(
        "/google.spanner.admin.instance.v1.InstanceAdmin/UpdateInstance",
        request_serializer=google_dot_cloud_dot_spanner_dot_admin_dot_instance__v1_dot_proto_dot_spanner__instance__admin__pb2.UpdateInstanceRequest.SerializeToString,
        response_deserializer=google_dot_longrunning_dot_operations__pb2.Operation.FromString,
    )
    self.DeleteInstance = channel.unary_unary(
        "/google.spanner.admin.instance.v1.InstanceAdmin/DeleteInstance",
        request_serializer=google_dot_cloud_dot_spanner_dot_admin_dot_instance__v1_dot_proto_dot_spanner__instance__admin__pb2.DeleteInstanceRequest.SerializeToString,
        response_deserializer=google_dot_protobuf_dot_empty__pb2.Empty.FromString,
    )
    self.SetIamPolicy = channel.unary_unary(
        "/google.spanner.admin.instance.v1.InstanceAdmin/SetIamPolicy",
        request_serializer=google_dot_iam_dot_v1_dot_iam__policy__pb2.SetIamPolicyRequest.SerializeToString,
        response_deserializer=google_dot_iam_dot_v1_dot_policy__pb2.Policy.FromString,
    )
    self.GetIamPolicy = channel.unary_unary(
        "/google.spanner.admin.instance.v1.InstanceAdmin/GetIamPolicy",
        request_serializer=google_dot_iam_dot_v1_dot_iam__policy__pb2.GetIamPolicyRequest.SerializeToString,
        response_deserializer=google_dot_iam_dot_v1_dot_policy__pb2.Policy.FromString,
    )
    self.TestIamPermissions = channel.unary_unary(
        "/google.spanner.admin.instance.v1.InstanceAdmin/TestIamPermissions",
        request_serializer=google_dot_iam_dot_v1_dot_iam__policy__pb2.TestIamPermissionsRequest.SerializeToString,
        response_deserializer=google_dot_iam_dot_v1_dot_iam__policy__pb2.TestIamPermissionsResponse.FromString,
    )
Constructor. Args: channel: A grpc.Channel.
juraj-google-style
def remove_from_list(self, key: str, value, count: int = 0,
                     pipeline: bool = False):
    if pipeline:
        if redis.__version__ == '2.10.6':
            self._pipeline.lrem(name=key, value=value, num=count)
        else:
            self._pipeline.lrem(key, count, value)
    else:
        if self._db.exists(key):
            if redis.__version__ == '2.10.6':
                self._db.lrem(name=key, value=value, num=count)
            else:
                self._db.lrem(key, count, value)
Remove specified value(s) from the list stored at key. Args: key (str): Key where the list is stored. value: value to remove count (int): Number of entries to remove, default 0 == all pipeline(bool): If True, start a transaction block. Default False.
juraj-google-style
def delete_project(self, resource):
    self.project_service.set_auth(self._token_project)
    self.project_service.delete(resource)
Deletes the entity described by the given resource. Args: resource (intern.resource.boss.BossResource) Raises: requests.HTTPError on a failure.
juraj-google-style
def mock(self, slot, rpc_id, value):
    address = slot.address
    if address not in self.mock_rpcs:
        self.mock_rpcs[address] = {}
    self.mock_rpcs[address][rpc_id] = value
Store a mock return value for an RPC Args: slot (SlotIdentifier): The slot we are mocking rpc_id (int): The rpc we are mocking value (int): The value that should be returned when the RPC is called.
codesearchnet
def extract_signature(func, ignore_first=False):
    sig_params = get_signature_params(func)
    if ignore_first:
        if len(sig_params) == 0:
            raise Exception("Methods must take a 'self' argument, but the "
                            "method '{}' does not have one.".format(func.__name__))
        sig_params = sig_params[1:]
    arg_names = []
    arg_defaults = []
    arg_is_positionals = []
    keyword_names = set()
    for arg_name, parameter in sig_params:
        arg_names.append(arg_name)
        arg_defaults.append(parameter.default)
        arg_is_positionals.append(parameter.kind == parameter.VAR_POSITIONAL)
        if parameter.kind == Parameter.POSITIONAL_OR_KEYWORD:
            keyword_names.add(arg_name)
    return FunctionSignature(arg_names, arg_defaults, arg_is_positionals,
                             keyword_names, func.__name__)
Extract the function signature from the function. Args: func: The function whose signature should be extracted. ignore_first: True if the first argument should be ignored. This should be used when func is a method of a class. Returns: A function signature object, which includes the names of the keyword arguments as well as their default values.
codesearchnet
def _supervised_signature_def(method_name, inputs, loss=None, predictions=None, metrics=None):
    if inputs is None or not inputs:
        raise ValueError(f'{method_name} `inputs` cannot be None or empty.')
    signature_inputs = {key: utils.build_tensor_info(tensor) for key, tensor in inputs.items()}
    signature_outputs = {}
    for output_set in (loss, predictions, metrics):
        if output_set is not None:
            sig_out = {key: utils.build_tensor_info(tensor) for key, tensor in output_set.items()}
            signature_outputs.update(sig_out)
    signature_def = build_signature_def(signature_inputs, signature_outputs, method_name)
    return signature_def
Creates a signature for training and eval data. This function produces signatures that describe the inputs and outputs of a supervised process, such as training or evaluation, that results in loss, metrics, and the like. Note that this function only requires inputs to be not None. Args: method_name: Method name of the SignatureDef as a string. inputs: dict of string to `Tensor`. loss: dict of string to `Tensor` representing computed loss. predictions: dict of string to `Tensor` representing the output predictions. metrics: dict of string to `Tensor` representing metric ops. Returns: A train- or eval-flavored signature_def. Raises: ValueError: If inputs or outputs is `None`.
github-repos
def __init__(self, endpoint, project, token, api_base="api/v1",
             is_skipped_an_issue=True, verify_ssl=True):
    super(ReportPortalService, self).__init__()
    self.endpoint = endpoint
    self.api_base = api_base
    self.project = project
    self.token = token
    self.is_skipped_an_issue = is_skipped_an_issue
    self.base_url = uri_join(self.endpoint, self.api_base, self.project)
    self.session = requests.Session()
    self.session.headers["Authorization"] = "bearer {0}".format(self.token)
    self.stack = [None]
    self.launch_id = None
    self.verify_ssl = verify_ssl
Init the service class. Args: endpoint: endpoint of report portal service. project: project name to use for launch names. token: authorization token. api_base: defaults to api/v1, can be changed to other version. is_skipped_an_issue: option to mark skipped tests as not 'To Investigate' items on Server side. verify_ssl: option to not verify ssl certificates
juraj-google-style
def record_markdown(text, cellid):
    from acorn.logging.database import record
    from time import time
    ekey = 'nb-{}'.format(cellid)
    global _cellid_map
    if cellid not in _cellid_map:
        from acorn.logging.database import active_db
        from difflib import SequenceMatcher
        from acorn.logging.diff import cascade
        taskdb = active_db()
        if ekey not in taskdb.entities:
            possible = [k for k in taskdb.entities if k[0:3] == 'nb-']
            maxkey, maxvalue = None, 0.0
            for pkey in possible:
                sequence = [e['c'] for e in taskdb.entities[pkey]]
                state = ''.join(cascade(sequence))
                matcher = SequenceMatcher(a=state, b=text)
                ratio = matcher.quick_ratio()
                if ratio > maxvalue and ratio > 0.5:
                    maxkey, maxvalue = pkey, ratio
            if maxkey is not None:
                ekey = maxkey
        _cellid_map[cellid] = ekey
    ekey = _cellid_map[cellid]
    entry = {'m': 'md', 'a': None, 's': time(), 'r': None, 'c': text}
    record(ekey, entry, diff=True)
Records the specified markdown text to the acorn database. Args: text (str): the *raw* markdown text entered into the cell in the ipython notebook.
codesearchnet
def validate_env(app):
    if not hasattr(app.env, 'javalink_config_cache'):
        app.env.javalink_config_cache = {}
    for conf_attr, (_, _, env_attr) in ref.CONFIG_VALUES.iteritems():
        if not env_attr:
            continue
        value = getattr(app.config, conf_attr)
        cached = app.env.javalink_config_cache.get(conf_attr, value)
        app.env.javalink_config_cache[conf_attr] = value
        if value != cached:
            app.verbose('[javalink] config.%s has changed, clearing related env', conf_attr)
            delattr(app.env, env_attr)
Purge expired values from the environment. When certain configuration values change, related values in the environment must be cleared. While Sphinx can rebuild documents on configuration changes, it does not notify extensions when this happens. Instead, cache relevant values in the environment in order to detect when they change. Args: app: The Sphinx application.
juraj-google-style
def convert(self):
    return super(TFLiteConverter, self).convert()
Converts a TensorFlow GraphDef based on instance variables. Returns: The converted data in serialized format, either a TFLite Flatbuffer or a Graphviz graph depending on value in `output_format`. Raises: ValueError: Input shape is not specified. None value for dimension in input_tensor.
github-repos
def touch(self, key, expire=0, noreply=None):
    if noreply is None:
        noreply = self.default_noreply
    key = self.check_key(key)
    cmd = b'touch ' + key + b' ' + six.text_type(expire).encode('ascii')
    if noreply:
        cmd += b' noreply'
    cmd += b'\r\n'
    results = self._misc_cmd([cmd], b'touch', noreply)
    if noreply:
        return True
    return results[0] == b'TOUCHED'
The memcached "touch" command. Args: key: str, see class docs for details. expire: optional int, number of seconds until the item is expired from the cache, or zero for no expiry (the default). noreply: optional bool, True to not wait for the reply (defaults to self.default_noreply). Returns: True if the expiration time was updated, False if the key wasn't found.
juraj-google-style
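The wire format assembled by `touch` above can be shown as a standalone sketch — the real method also validates the key and sends the bytes over a socket; `build_touch_cmd` is a hypothetical helper for illustration only.

```python
def build_touch_cmd(key, expire=0, noreply=False):
    """Assemble the memcached text-protocol "touch" command line."""
    cmd = b'touch ' + key + b' ' + str(expire).encode('ascii')
    if noreply:
        cmd += b' noreply'
    # Memcached text-protocol commands are terminated by CRLF.
    return cmd + b'\r\n'
```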
def _is_quantized_function(self, func: function_pb2.FunctionDef) -> bool: return func.signature.name.startswith('quantized_')
Determine whether a FunctionDef is quantized. Args: func: A FunctionDef object. Returns: True iff `func` is quantized.
github-repos
def FoldByteStream(self, mapped_value, context=None, **unused_kwargs): elements_data_size = self._CalculateElementsDataSize(context) if elements_data_size is not None: if elements_data_size != len(mapped_value): raise errors.FoldingError( 'Mismatch between elements data size and mapped value size') elif not self._HasElementsTerminator(): raise errors.FoldingError('Unable to determine elements data size') else: elements_terminator = self._data_type_definition.elements_terminator elements_terminator_size = len(elements_terminator) if mapped_value[-elements_terminator_size:] != elements_terminator: mapped_value = b''.join([mapped_value, elements_terminator]) return mapped_value
Folds the data type into a byte stream. Args: mapped_value (object): mapped value. context (Optional[DataTypeMapContext]): data type map context. Returns: bytes: byte stream. Raises: FoldingError: if the data type definition cannot be folded into the byte stream.
juraj-google-style
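The terminator handling in `FoldByteStream` above, isolated as a standalone sketch: when the element count is unknown, the mapped value must end with the elements terminator, which is appended only if it is missing (`ensure_terminator` is a hypothetical helper, not part of the original API).

```python
def ensure_terminator(data, terminator):
    """Append the elements terminator to data if it is not already present."""
    if data[-len(terminator):] != terminator:
        data = b''.join([data, terminator])
    return data
```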
def _GetStringValue(self, data_dict, name, default_value=None): values = data_dict.get(name, None) if (not values): return default_value for (index, value) in enumerate(values): if (',' in value): values[index] = '"{0:s}"'.format(value) return ', '.join(values)
Retrieves a specific string value from the data dict. Args: data_dict (dict[str, list[str]]): values per name. name (str): name of the value to retrieve. default_value (Optional[object]): value to return if the name has no value set in data_dict. Returns: str: value represented as a string.
codesearchnet
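The quoting logic in `_GetStringValue` above as a standalone function: values containing a comma are wrapped in double quotes before joining, so the resulting comma-separated string stays parseable. This sketch is pure (it does not mutate the input list, unlike the original).

```python
def join_string_values(values, default_value=None):
    """Join values with ', ', quoting any value that itself contains a comma."""
    if not values:
        return default_value
    quoted = ['"{0:s}"'.format(v) if ',' in v else v for v in values]
    return ', '.join(quoted)
```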
def _insert_back_keep_dims(x, axis): for i in sorted(axis): x = tf.expand_dims(x, axis=i) return x
Insert the dims in `axis` back as singletons after being removed. Args: x: `Tensor`. axis: Python list of integers. Returns: `Tensor` with same values as `x`, but additional singleton dimensions.
juraj-google-style
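What `_insert_back_keep_dims` above does to a tensor's shape, sketched without TensorFlow: expanding dims at each sorted axis re-inserts singleton dimensions where a reduction removed them. `shape_after_insert` is a hypothetical helper that operates on shape tuples only.

```python
def shape_after_insert(shape, axis):
    """Return the shape after inserting a size-1 dim at each axis (sorted)."""
    shape = list(shape)
    # Sorting matters: inserting at lower axes first keeps later
    # indices valid, mirroring tf.expand_dims applied in sorted order.
    for i in sorted(axis):
        shape.insert(i, 1)
    return tuple(shape)
```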
def read_uint16(self, little_endian=True): if little_endian: endian = '<' else: endian = '>' return self.unpack(('%sH' % endian), 2)
Read 2 bytes as an unsigned integer value from the stream. Args: little_endian (bool): specify the endianness. (Default) Little endian. Returns: int: the unpacked unsigned 16-bit integer value.
codesearchnet
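The unpack step inside `read_uint16` above, shown directly with the standard-library `struct` module (the real method first reads 2 bytes from its underlying stream; `parse_uint16` is a hypothetical standalone equivalent).

```python
import struct

def parse_uint16(data, little_endian=True):
    """Decode 2 bytes as an unsigned 16-bit integer."""
    # '<' selects little-endian, '>' big-endian; 'H' is unsigned short.
    endian = '<' if little_endian else '>'
    return struct.unpack('%sH' % endian, data)[0]
```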
def start_time_distance(item_a, item_b, max_value): start_time_diff = np.abs((item_a.times[0] - item_b.times[0])) return (np.minimum(start_time_diff, max_value) / float(max_value))
Absolute difference between the starting times of each item. Args: item_a: STObject from the first set in TrackMatcher item_b: STObject from the second set in TrackMatcher max_value: Maximum distance value used as scaling value and upper constraint. Returns: Distance value between 0 and 1.
codesearchnet
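`start_time_distance` above, restated without NumPy for scalar start times: the absolute difference is clipped at `max_value` and scaled into [0, 1]. The function name here is a hypothetical stand-in.

```python
def scaled_time_distance(start_a, start_b, max_value):
    """Clip the absolute start-time difference at max_value, scale to [0, 1]."""
    diff = abs(start_a - start_b)
    return min(diff, max_value) / float(max_value)
```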
def __parse(self, function_meta): self._func = get_mapping_function(function_meta['func_name'], self.functions_mapping) self.func_name = self._func.__name__ self._args = prepare_lazy_data(function_meta.get('args', []), self.functions_mapping, self.check_variables_set) self._kwargs = prepare_lazy_data(function_meta.get('kwargs', {}), self.functions_mapping, self.check_variables_set) if (self.func_name == 'load_csv_file'): if ((len(self._args) != 1) or self._kwargs): raise exceptions.ParamsError('P() should only pass in one argument!') self._args = [self._args[0]] elif (self.func_name == 'get_os_environ'): if ((len(self._args) != 1) or self._kwargs): raise exceptions.ParamsError('ENV() should only pass in one argument!') self._args = [self._args[0]]
Initialize func as a lazy function instance. Args: function_meta (dict): function meta including name, args and kwargs
codesearchnet
def ParseCall(self, parser_mediator, query, row, **unused_kwargs): query_hash = hash(query) guid = self._GetRowValue(query_hash, row, 'guid') is_incoming = self._GetRowValue(query_hash, row, 'is_incoming') videostatus = self._GetRowValue(query_hash, row, 'videostatus') try: aux = guid if aux: aux_list = aux.split('-') src_aux = aux_list[0] dst_aux = aux_list[1] else: src_aux = 'Unknown [no GUID]' dst_aux = 'Unknown [no GUID]' except IndexError: src_aux = 'Unknown [{0:s}]'.format(guid) dst_aux = 'Unknown [{0:s}]'.format(guid) if is_incoming == '0': user_start_call = True source = src_aux ip_address = self._GetRowValue(query_hash, row, 'ip_address') if ip_address: destination = '{0:s} <{1:s}>'.format(dst_aux, ip_address) else: destination = dst_aux else: user_start_call = False source = src_aux destination = dst_aux call_identifier = self._GetRowValue(query_hash, row, 'id') event_data = SkypeCallEventData() event_data.dst_call = destination event_data.offset = call_identifier event_data.query = query event_data.src_call = source event_data.user_start_call = user_start_call event_data.video_conference = videostatus == '3' timestamp = self._GetRowValue(query_hash, row, 'try_call') event_data.call_type = 'WAITING' date_time = dfdatetime_posix_time.PosixTime(timestamp=timestamp) event = time_events.DateTimeValuesEvent(date_time, 'Call from Skype') parser_mediator.ProduceEventWithEventData(event, event_data) try: timestamp = self._GetRowValue(query_hash, row, 'accept_call') timestamp = int(timestamp) except (ValueError, TypeError): timestamp = None if timestamp: event_data.call_type = 'ACCEPTED' date_time = dfdatetime_posix_time.PosixTime(timestamp=timestamp) event = time_events.DateTimeValuesEvent(date_time, 'Call from Skype') parser_mediator.ProduceEventWithEventData(event, event_data) try: call_duration = self._GetRowValue(query_hash, row, 'call_duration') call_duration = int(call_duration) except (ValueError, TypeError): parser_mediator.ProduceExtractionWarning( 
'unable to determine when call: {0:s} was finished.'.format( call_identifier)) call_duration = None if call_duration: timestamp += call_duration event_data.call_type = 'FINISHED' date_time = dfdatetime_posix_time.PosixTime(timestamp=timestamp) event = time_events.DateTimeValuesEvent(date_time, 'Call from Skype') parser_mediator.ProduceEventWithEventData(event, event_data)
Parses a call. Args: parser_mediator (ParserMediator): mediates interactions between parsers and other components, such as storage and dfvfs. query (str): query that created the row. row (sqlite3.Row): row resulting from query.
juraj-google-style
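The GUID handling at the top of `ParseCall` above, as a standalone sketch: a Skype call GUID of the form `source-destination-...` is split into caller and callee, with fallbacks when it is absent or malformed (`split_call_guid` is a hypothetical helper).

```python
def split_call_guid(guid):
    """Split a Skype call GUID into (source, destination) identifiers."""
    try:
        if guid:
            parts = guid.split('-')
            return parts[0], parts[1]
        return 'Unknown [no GUID]', 'Unknown [no GUID]'
    except IndexError:
        # GUID present but missing the expected '-' separators.
        unknown = 'Unknown [{0:s}]'.format(guid)
        return unknown, unknown
```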
def order_by(self, *args): if self._solr_locked: raise Exception("Query already executed, no changes can be made." "%s %s" % (self._solr_query, self._solr_params) ) for arg in args: if arg.startswith('-'): self._solr_params['sort'][arg[1:]] = 'desc' else: self._solr_params['sort'][arg] = 'asc'
Applies query ordering. New parameters are appended to the current ones, overwriting any existing ones. Args: *args: Field names to order by. Defaults to ascending; prepend a hyphen (-) for descending ordering.
juraj-google-style
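How the sort arguments above end up in the Solr params dict, sketched as a standalone function (hypothetical; the real method also refuses changes once the query has been executed).

```python
def build_sort_params(*args):
    """Map order-by field names to Solr sort directions."""
    sort = {}
    for arg in args:
        # A leading hyphen requests descending order.
        if arg.startswith('-'):
            sort[arg[1:]] = 'desc'
        else:
            sort[arg] = 'asc'
    return sort
```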
def write_label_list(path, label_list): entries = [] for label in label_list: entries.append([label.start, label.end, label.value]) textfile.write_separated_lines(path, entries, separator='\t')
Writes the given `label_list` to an audacity label file. Args: path (str): Path to write the file to. label_list (audiomate.annotations.LabelList): Label list
juraj-google-style
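A standalone sketch of the Audacity label format written above: one tab-separated `start<TAB>end<TAB>value` line per label. `write_separated_lines` belongs to audiomate; `write_audacity_labels` below is a hypothetical stand-in that mimics its tab-separated output.

```python
def write_audacity_labels(path, labels):
    """Write (start, end, value) triples as tab-separated lines."""
    with open(path, 'w') as f:
        for start, end, value in labels:
            f.write('{}\t{}\t{}\n'.format(start, end, value))
```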
def get_info( self, userSpecifier, **kwargs ): request = Request( 'GET', '/v3/users/{userSpecifier}' ) request.set_path_param( 'userSpecifier', userSpecifier ) response = self.ctx.request(request) if response.content_type is None: return response if not response.content_type.startswith("application/json"): return response jbody = json.loads(response.raw_body) parsed_body = {} if str(response.status) == "200": if jbody.get('userInfo') is not None: parsed_body['userInfo'] = \ self.ctx.user.UserInfo.from_dict( jbody['userInfo'], self.ctx ) elif str(response.status) == "401": if jbody.get('errorCode') is not None: parsed_body['errorCode'] = \ jbody.get('errorCode') if jbody.get('errorMessage') is not None: parsed_body['errorMessage'] = \ jbody.get('errorMessage') elif str(response.status) == "403": if jbody.get('errorCode') is not None: parsed_body['errorCode'] = \ jbody.get('errorCode') if jbody.get('errorMessage') is not None: parsed_body['errorMessage'] = \ jbody.get('errorMessage') elif str(response.status) == "405": if jbody.get('errorCode') is not None: parsed_body['errorCode'] = \ jbody.get('errorCode') if jbody.get('errorMessage') is not None: parsed_body['errorMessage'] = \ jbody.get('errorMessage') else: parsed_body = jbody response.body = parsed_body return response
Fetch the user information for the specified user. This endpoint is intended to be used by the user themself to obtain their own information. Args: userSpecifier: The User Specifier Returns: v20.response.Response containing the results from submitting the request
juraj-google-style
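The per-status parsing in `get_info` above follows one pattern: on 200 extract `userInfo`, on the error statuses extract `errorCode`/`errorMessage`, otherwise pass the body through. A simplified standalone sketch (it drops the `from_dict` deserialization step the real method performs):

```python
def parse_response_body(status, jbody):
    """Pick the relevant fields out of a JSON body by HTTP status."""
    parsed = {}
    if status == 200:
        if jbody.get('userInfo') is not None:
            parsed['userInfo'] = jbody['userInfo']
    elif status in (401, 403, 405):
        for key in ('errorCode', 'errorMessage'):
            if jbody.get(key) is not None:
                parsed[key] = jbody[key]
    else:
        # Unrecognized status: return the body unchanged.
        parsed = jbody
    return parsed
```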
def _GetPathSegmentIndexForOccurrenceWeights( self, occurrence_weights, value_weights): largest_weight = occurrence_weights.GetLargestWeight() if largest_weight > 0: occurrence_weight_indexes = occurrence_weights.GetIndexesForWeight( largest_weight) number_of_occurrence_indexes = len(occurrence_weight_indexes) else: number_of_occurrence_indexes = 0 path_segment_index = None if number_of_occurrence_indexes == 0: path_segment_index = self._GetPathSegmentIndexForValueWeights( value_weights) elif number_of_occurrence_indexes == 1: path_segment_index = occurrence_weight_indexes[0] else: largest_weight = 0 for occurrence_index in occurrence_weight_indexes: value_weight = value_weights.GetWeightForIndex(occurrence_index) if not path_segment_index or largest_weight < value_weight: largest_weight = value_weight path_segment_index = occurrence_index return path_segment_index
Retrieves the index of the path segment based on occurrence weights. Args: occurrence_weights: the occurrence weights object (instance of _PathSegmentWeights). value_weights: the value weights object (instance of _PathSegmentWeights). Returns: An integer containing the path segment index.
juraj-google-style
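The tie-breaking rule in `_GetPathSegmentIndexForOccurrenceWeights` above, sketched standalone: among indexes tied for the largest occurrence weight, pick the one with the largest value weight. Here the weights are plain dicts instead of weight objects, and `pick_index` is a hypothetical helper; note the sketch tests `is None` rather than the original's truthiness check, which would misfire on index 0.

```python
def pick_index(tied_indexes, value_weights):
    """Return the tied index with the largest value weight."""
    best_index = None
    best_weight = 0
    for index in tied_indexes:
        weight = value_weights[index]
        if best_index is None or best_weight < weight:
            best_weight = weight
            best_index = index
    return best_index
```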
def get_nested_streams(dmap): return list({s for dmap in get_nested_dmaps(dmap) for s in dmap.streams})
Recurses the supplied DynamicMap to find all streams Args: dmap: DynamicMap to recurse to look for streams Returns: List of streams that were found
juraj-google-style
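The deduplication in `get_nested_streams` above is a set comprehension over every nested map's streams; the same pattern, standalone, with plain lists standing in for DynamicMaps:

```python
def unique_streams(maps):
    """Collect the distinct items across all nested collections."""
    # The set comprehension deduplicates; list() fixes a concrete order-free
    # container for the caller.
    return list({s for m in maps for s in m})
```

Because the intermediate container is a set, the result's order is unspecified.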