Dataset columns: code (string, lengths 20 to 4.93k), docstring (string, lengths 33 to 1.27k), source (string, 3 classes).
def start_test(self, pipeline): global _TEST_MODE, _TEST_ROOT_PIPELINE_KEY self.start(pipeline, return_task=True) _TEST_MODE = True _TEST_ROOT_PIPELINE_KEY = pipeline._pipeline_key try: self.evaluate_test(pipeline, root=True) finally: _TEST_MODE = False
Starts a pipeline in the test mode. Args: pipeline: The Pipeline instance to test.
juraj-google-style
def createAndStartSwarm(client, clientInfo='', clientKey='', params='', minimumWorkers=None, maximumWorkers=None, alreadyRunning=False): if (minimumWorkers is None): minimumWorkers = Configuration.getInt('nupic.hypersearch.minWorkersPerSwarm') if (maximumWorkers is None): maximumWorkers = Config...
Create and start a swarm job. Args: client - A string identifying the calling client. There is a small limit for the length of the value. See ClientJobsDAO.CLIENT_MAX_LEN. clientInfo - JSON encoded dict of client specific information. clientKey - Foreign key. Limited in length, see ClientJobsDAO._initTables. params - ...
codesearchnet
def wrap_rich_text_lines(inp, cols): new_line_indices = [] if not isinstance(inp, RichTextLines): raise ValueError('Invalid type of input screen_output') if not isinstance(cols, int): raise ValueError('Invalid type of input cols') out = RichTextLines([]) row_counter = 0 for i, li...
Wrap RichTextLines according to maximum number of columns. Produces a new RichTextLines object with the text lines, font_attr_segs and annotations properly wrapped. This ought to be used sparingly, as in most cases, command handlers producing RichTextLines outputs should know the screen/panel width via the screen_info...
github-repos
def obtain_all_bond_lengths(sp1, sp2, default_bl=None): if isinstance(sp1, Element): sp1 = sp1.symbol if isinstance(sp2, Element): sp2 = sp2.symbol syms = tuple(sorted([sp1, sp2])) if syms in bond_lengths: return bond_lengths[syms].copy() elif default_bl is not None: ...
Obtain bond lengths for all bond orders from the bond length database. Args: sp1 (Specie): First specie. sp2 (Specie): Second specie. default_bl: If a particular type of bond does not exist, use this bond length as a default value (bond order = 1). If None, a ValueError will be thrown. Returns: A dict mapping bond order to...
juraj-google-style
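A self-contained sketch of the lookup pattern above, with a toy table standing in for pymatgen's bundled bond-length database; the numeric values are illustrative and the `default_bl` branch is reconstructed from the truncated snippet:

```python
# Toy stand-in for pymatgen's bond_lengths database; values are illustrative.
bond_lengths = {("C", "O"): {1: 1.43, 2: 1.23, 3: 1.13}}

def obtain_all_bond_lengths(sp1, sp2, default_bl=None):
    syms = tuple(sorted([sp1, sp2]))  # lookups are order-insensitive
    if syms in bond_lengths:
        return bond_lengths[syms].copy()
    if default_bl is not None:
        return {1: default_bl}
    raise ValueError("No bond data for elements {} - {}".format(*syms))

print(obtain_all_bond_lengths("O", "C"))                   # {1: 1.43, 2: 1.23, 3: 1.13}
print(obtain_all_bond_lengths("C", "Si", default_bl=1.8))  # {1: 1.8}
```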
def _compute_full_path(self, fn_parent_ref, fn_parent_seq): names = [] root_id = 5 (index, seq) = (fn_parent_ref, fn_parent_seq) is_orphan = False while (index != root_id): try: parent_entry = self[index] if (seq != parent_entry.header.seq_number): is_...
Based on the parent reference and sequence, computes the full path. The majority of files in a filesystem have a very small number of parent directories. By definition, a filesystem is expected to have far fewer directories than files. As such we use a function with the minimal amount of arguments to f...
codesearchnet
def apply_and_name(self, aggregator): reduced_df = self._apply(aggregator) if (len(self.names) != len(reduced_df.columns)): raise IndexError('ColumnFunction creates more columns than it has names for.') reduced_df.columns = self.names return reduced_df
Fetches the row-aggregated input columns for this ColumnFunction. Args: aggregator (Aggregator) Returns: pd.DataFrame: The dataframe has columns with names self.names that were created by this ColumnFunction, and is indexed by the index that was passed to aggregator.aggregate(index).
codesearchnet
def delete_qubits(self, indices): if not isinstance(indices, list): indices = [indices] self._z = np.delete(self._z, indices) self._x = np.delete(self._x, indices) return self
Delete the Paulis at the given indices. Args: indices (list[int]): the indices of the to-be-deleted Paulis Returns: Pauli: self
juraj-google-style
def find(self, **kwargs): if len(kwargs) != 1: raise ValueError("One and only one keyword argument accepted") key = list(kwargs.keys())[0] value = list(kwargs.values())[0] ret = None for row in self.values(): if row[key] == value: ...
Finds the row matching a specific field value. Args: **kwargs: (**only one argument accepted**) fieldname=value, e.g., formula="OH" Returns: list element or None
juraj-google-style
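Since the snippet is cut off, here is a minimal runnable sketch of the same search pattern on a hypothetical dict-backed table; the original presumably returns the matching row via its `ret` variable:

```python
# Hypothetical dict-backed table, just to show the call shape of find().
class Table(dict):
    def find(self, **kwargs):
        if len(kwargs) != 1:
            raise ValueError("One and only one keyword argument accepted")
        (key, value), = kwargs.items()
        for row in self.values():
            if row[key] == value:
                return row  # first match wins
        return None

t = Table(r1={"formula": "OH", "mass": 17.01}, r2={"formula": "H2O", "mass": 18.02})
print(t.find(formula="OH"))   # {'formula': 'OH', 'mass': 17.01}
print(t.find(formula="CO2"))  # None
```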
def _load_from_cache_if_available(self, key): if key in self._cache: entity = self._cache[key] if entity is None or entity._key == key: raise tasklets.Return(entity)
Returns a cached Model instance given the entity key if available. Args: key: Key instance. Returns: A Model instance if the key exists in the cache.
juraj-google-style
def browse(self, max_lines=None, headers=None): if self.path.startswith('gs://'): lines = CsvFile._read_gcs_lines(self.path, max_lines) else: lines = CsvFile._read_local_lines(self.path, max_lines) if len(lines) == 0: return pd.DataFrame(columns=headers) columns_size = len(next(csv.rea...
Try reading specified number of lines from the CSV object. Args: max_lines: max number of lines to read. If None, the whole file is read headers: a list of strings as column names. If None, it will use "col0, col1..." Returns: A pandas DataFrame with the schema inferred from the data. Raises: Exception if the csv objec...
juraj-google-style
def get_diff_coeff(hvec, n=1): hvec = np.array(hvec, dtype=float) acc = len(hvec) exp = np.column_stack([np.arange(acc)]*acc) a = np.vstack([hvec] * acc) ** exp b = np.zeros(acc) b[n] = factorial(n) return np.linalg.solve(a, b)
Helper function to find the difference coefficients of a derivative on an arbitrary mesh. Args: hvec (1D array-like): sampling stencil n (int): degree of derivative to find
juraj-google-style
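The function is fully visible, so its output can be checked directly; on the uniform stencil [-1, 0, 1] it recovers the familiar central-difference weights (expanded here into multi-line form for readability):

```python
import numpy as np
from math import factorial

def get_diff_coeff(hvec, n=1):
    # Solves sum_j c_j * h_j**i == n! * delta(i, n) for the weights c_j.
    hvec = np.array(hvec, dtype=float)
    acc = len(hvec)
    exp = np.column_stack([np.arange(acc)] * acc)  # exp[i, j] == i
    a = np.vstack([hvec] * acc) ** exp             # a[i, j] == h_j**i
    b = np.zeros(acc)
    b[n] = factorial(n)
    return np.linalg.solve(a, b)

print(get_diff_coeff([-1, 0, 1], n=1))  # [-0.5  0.   0.5] -- central first derivative
print(get_diff_coeff([-1, 0, 1], n=2))  # [ 1. -2.  1.] -- central second derivative
```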
def _GetDataTypeMap(self, name): data_type_map = self._data_type_maps.get(name, None) if not data_type_map: data_type_map = self._fabric.CreateDataTypeMap(name) self._data_type_maps[name] = data_type_map return data_type_map
Retrieves a data type map defined by the definition file. The data type maps are cached for reuse. Args: name (str): name of the data type as defined by the definition file. Returns: dtfabric.DataTypeMap: data type map which contains a data type definition, such as a structure, that can be mapped onto binary data.
juraj-google-style
def prefix(self: EventSetOrNode, prefix: str) -> EventSetOrNode: from temporian.core.operators.prefix import prefix as _prefix return _prefix(self, prefix=prefix)
Adds a prefix to the names of the features in an [`EventSet`][temporian.EventSet]. Usage example: ```python >>> a = tp.event_set( ... timestamps=[0, 1], ... features={"f1": [0, 2], "f2": [5, 6]} ... ) >>> b = a * 5 >>> # Prefix before glue to avoid duplicated names >>> c = tp.glue(a.prefix("original_"), b.prefi...
github-repos
def check_errors(self, is_global=False): errors = self.global_errors if is_global else self.errors if errors: print('dfTimewolf encountered one or more errors:') for error, critical in errors: print('{0:s} {1:s}'.format('CRITICAL: ' if critical else '', error)) if critical: ...
Checks for errors and exits if any of them are critical. Args: is_global: If True, check the global_errors attribute. If False, check the errors attribute.
juraj-google-style
def get_config(model_type: str, feature: str) -> OnnxConfig: return FeaturesManager._SUPPORTED_MODEL_TYPE[model_type][feature]
Gets the OnnxConfig for a model_type and feature combination. Args: model_type (`str`): The model type to retrieve the config for. feature (`str`): The feature to retrieve the config for. Returns: `OnnxConfig`: config for the combination
github-repos
def move_all_files_from_subfolders_to_top(folder_path, delete_subfolders=False, copy=False): for item in os.listdir(folder_path): sub_path = os.path.join(folder_path, item) if os.path.isdir(sub_path): for sub_item in os.listdir(sub_path): src = os.path.join(sub_pat...
Move all files/folders from all subfolders of `folder_path` up into `folder_path`. Args: folder_path (str): Path of the folder. delete_subfolders (bool): If True the subfolders are deleted after all items are moved out of them. copy (bool): If True copies the files instead of moving. (default False)
juraj-google-style
def __init__(self, zone, environment): self._zone = zone self._environment = environment self._gcs_dag_location = None
Initializes an instance of a Composer object. Args: zone: Zone in which Composer environment has been created. environment: Name of the Composer environment.
juraj-google-style
def to_proto(self, export_scope=None): if export_scope is None: return self.saver_def if not (self.saver_def.filename_tensor_name.startswith(export_scope) and self.saver_def.save_tensor_name.startswith(export_scope) and self.saver_def.restore_op_name.startswith(export_scope)): return None sa...
Converts this `Saver` to a `SaverDef` protocol buffer. Args: export_scope: Optional `string`. Name scope to remove. Returns: A `SaverDef` protocol buffer.
github-repos
def add_keyword(self, keyword, schema=None, source=None): keyword_dict = self._sourced_dict(source, value=keyword) if (schema is not None): keyword_dict['schema'] = schema self._append_to('keywords', keyword_dict)
Add a keyword. Args: keyword(str): keyword to add. schema(str): schema to which the keyword belongs. source(str): source for the keyword.
codesearchnet
def create_group(self, name): self.project_service.set_auth(self._token_project) return self.project_service.create_group(name)
Create a new group. Args: name (string): Name of the group to create. Returns: (bool): True on success. Raises: requests.HTTPError on failure.
juraj-google-style
def is_collection(return_type: FhirPathDataType) -> bool: return return_type and return_type.cardinality == Cardinality.COLLECTION
Indicates if the return type represents a collection. Args: return_type: The data type to describe. Returns: True if `return_type` represents an element with cardinality greater than one. False otherwise.
github-repos
def timestamp_ids(self, time_precision=0.02): return self.convert_tokens_to_ids(['<|%.2f|>' % (i * time_precision) for i in range(1500 + 1)])
Compute the timestamp token ids for a given precision and save to least-recently used (LRU) cache. Args: time_precision (`float`, *optional*, defaults to 0.02): The time ratio to convert from token to time.
github-repos
def console_map_string_to_font(s: str, fontCharX: int, fontCharY: int) -> None: lib.TCOD_console_map_string_to_font_utf(_unicode(s), fontCharX, fontCharY)
Remap a string of codes to a contiguous set of tiles. Args: s (AnyStr): A string of character codes to map to new values. The null character `'\\x00'` will prematurely end this function. fontCharX (int): The starting X tile coordinate on the loaded tileset. 0 is the leftmost tile. fontCharY (int): The starting Y tile ...
juraj-google-style
def get(self, *, txid, headers=None): block_list = self.transport.forward_request( method='GET', path=self.path, params={'transaction_id': txid}, headers=headers, ) return block_list[0] if len(block_list) else None
Get the block that contains the given transaction id (``txid``), else return ``None``. Args: txid (str): Transaction id. headers (dict): Optional headers to pass to the request. Returns: The block containing the transaction, or ``None`` if no block contains it.
juraj-google-style
def run_inference(self, batch: Sequence[numpy.ndarray], model: BaseEstimator, inference_args: Optional[dict[str, Any]]=None) -> Iterable[PredictionResult]: predictions = self._model_inference_fn(model, batch, inference_args) return utils._convert_to_result(batch, predictions, model_id=self._model_uri)
Runs inferences on a batch of numpy arrays. Args: batch: A sequence of examples as numpy arrays. They should be single examples. model: A numpy model or pipeline. Must implement predict(X). Where the parameter X is a numpy array. inference_args: Any additional arguments for an inference. Returns: An Iterable of type ...
github-repos
def Get(self, request, global_params=None): config = self.GetMethodConfig('Get') return self._RunMethod(config, request, global_params=global_params)
Returns information about a specific job. Job information is available for a six month period after creation. Requires that you're the person who ran the job, or have the Is Owner project role. Args: request: (BigqueryJobsGetRequest) input message global_params: (StandardQueryParameters, default: None) global argument...
github-repos
def contact(self, id): try: json = self.skype.conn("POST", "{0}/users/batch/profiles".format(SkypeConnection.API_USER), json={"usernames": [id]}, auth=SkypeConnection.Auth.SkypeToken).json() contact = SkypeContact.fromRaw(self.skype, json[0]) ...
Retrieve all details for a specific contact, including fields such as birthday and mood. Args: id (str): user identifier to lookup Returns: SkypeContact: resulting contact object
juraj-google-style
def get_stored_variation(self, experiment, user_profile): user_id = user_profile.user_id variation_id = user_profile.get_variation_for_experiment(experiment.id) if variation_id: variation = self.config.get_variation_from_id(experiment.key, variation_id) if variation: self.logger....
Determine if the user has a stored variation available for the given experiment and return that. Args: experiment: Object representing the experiment for which user is to be bucketed. user_profile: UserProfile object representing the user's profile. Returns: Variation if available. None otherwise.
codesearchnet
def plot_waves(self, ax=None, fontsize=12, **kwargs): (ax, fig, plt) = get_ax_fig_plt(ax) ax.grid(True) ax.set_xlabel('r [Bohr]') ax.set_ylabel('$r\\phi,\\, r\\tilde\\phi\\, [Bohr]^{-\\frac{1}{2}}$') for (state, rfunc) in self.pseudo_partial_waves.items(): ax.plot(rfunc.mesh, (rfunc.mesh * r...
Plot the AE and the pseudo partial waves. Args: ax: matplotlib :class:`Axes` or None if a new figure should be created. fontsize: fontsize for legends and titles Returns: `matplotlib` figure
codesearchnet
def get_all_dataset_names(configuration=None, **kwargs): dataset = Dataset(configuration=configuration) dataset['id'] = 'all dataset names' return dataset._write_to_hdx('list', kwargs, 'id')
Get all dataset names in HDX Args: configuration (Optional[Configuration]): HDX configuration. Defaults to global configuration. **kwargs: See below limit (int): Number of rows to return. Defaults to all dataset names. offset (int): Offset in the complete result for where the set of returned dataset names should begin...
juraj-google-style
def get(self, group=None, backend=None): from .options import Store, Options keywords = {} groups = (Options._option_groups if (group is None) else [group]) backend = (backend if backend else Store.current_backend) for group in groups: optsobj = Store.lookup_options(backend, self._obj, group...
Returns the corresponding Options object. Args: group: The options group. Flattens across groups if None. backend: Current backend if None otherwise chosen backend. Returns: Options object associated with the object containing the applied option keywords.
codesearchnet
def line_id(self, lat): if self.grid == 'WAC': line = np.rint(1.0 + self.LINE_PROJECTION_OFFSET - self.A_AXIS_RADIUS * np.pi * lat / (self.MAP_SCALE * 1e-3 * 180)) else: line = np.rint(float(self.LINE_PROJECTION_OFFSET) - float(self.MAP_RESOLUT...
Return the corresponding line. Args: lat (int): latitude in degrees Returns: Corresponding line
juraj-google-style
def pose_inv(pose): pose_inv = np.zeros((4, 4)) pose_inv[:3, :3] = pose[:3, :3].T pose_inv[:3, 3] = -pose_inv[:3, :3].dot(pose[:3, 3]) pose_inv[3, 3] = 1.0 return pose_inv
Computes the inverse of a homogeneous matrix corresponding to the pose of some frame B in frame A. The inverse is the pose of frame A in frame B. Args: pose: numpy array of shape (4,4) for the pose to invert Returns: numpy array of shape (4,4) for the inverse pose
codesearchnet
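Since the inverse of a rigid transform [R | t] is [R.T | -R.T t], the function can be sanity-checked by composing a pose with its inverse:

```python
import numpy as np

def pose_inv(pose):
    # [R | t]^-1 = [R.T | -R.T @ t] for a homogeneous rigid transform.
    inv = np.zeros((4, 4))
    inv[:3, :3] = pose[:3, :3].T
    inv[:3, 3] = -inv[:3, :3].dot(pose[:3, 3])
    inv[3, 3] = 1.0
    return inv

# A 90-degree rotation about z combined with a translation.
pose = np.array([[0.0, -1.0, 0.0, 1.0],
                 [1.0,  0.0, 0.0, 2.0],
                 [0.0,  0.0, 1.0, 3.0],
                 [0.0,  0.0, 0.0, 1.0]])
assert np.allclose(pose_inv(pose) @ pose, np.eye(4))
```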
def create_position_ids_from_inputs_embeds(self, inputs_embeds): input_shape = inputs_embeds.size()[:-1] sequence_length = input_shape[1] position_ids = torch.arange(self.padding_idx + 1, sequence_length + self.padding_idx + 1, dtype=torch.long, device=inputs_embeds.device) return position_ids.unsqueeze...
We are provided embeddings directly. We cannot infer which are padded so just generate sequential position ids. Args: inputs_embeds: torch.Tensor Returns: torch.Tensor
github-repos
def CopyFromStringTuple(self, time_elements_tuple): if len(time_elements_tuple) < 7: raise ValueError(( 'Invalid time elements tuple at least 7 elements required, ' 'got: {0:d}').format(len(time_elements_tuple))) super(TimeElementsWithFractionOfSecond, self).CopyFromStringTuple( ...
Copies time elements from string-based time elements tuple. Args: time_elements_tuple (Optional[tuple[str, str, str, str, str, str, str]]): time elements, contains year, month, day of month, hours, minutes, seconds and fraction of seconds. Raises: ValueError: if the time elements tuple is invalid.
juraj-google-style
def process_sequence(sequence, rules=None, skip_non_vietnamese=True): result = '' raw = result result_parts = [] if (rules is None): rules = get_telex_definition() accepted_chars = _accepted_chars(rules) for key in sequence: if (key not in accepted_chars): result_part...
Convert a key sequence into a Vietnamese string with diacritical marks. Args: rules (optional): see docstring for process_key(). skip_non_vietnamese (optional): see docstring for process_key(). It even supports continuous key sequences connected by separators, e.g. process_sequence('con meof.ddieen') should work.
codesearchnet
def add_mixin(self, mixin): raw = mixin.tokens[0][0].raw() if raw in self._mixins: self._mixins[raw].append(mixin) else: self._mixins[raw] = [mixin]
Add mixin to scope Args: mixin (Mixin): Mixin object
juraj-google-style
def describe(self, **kwargs): description = {'label': self.label, 'details': inspect.cleandoc(self.details), 'type': ('list of {}'.format(self.type) if self.many else self.type), 'spec': self.spec, 'read_only': self.read_only, 'write_only': self.write_only, 'allow_null': self.allow_null} description.update(kwar...
Describe this field instance for purpose of self-documentation. Args: kwargs (dict): dictionary of additional description items for extending default description Returns: dict: dictionary of description items Suggested way for overriding description fields or extending it with additional items is calling super class...
codesearchnet
def create_streaming_endpoint(access_token, name, description='New Streaming Endpoint', scale_units='1'): path = '/StreamingEndpoints' endpoint = ''.join([ams_rest_endpoint, path]) body = (((((('{ \t\t"Id":null, \t\t"Name":"' + name) + '", \t\t"Description":"') + description) + '", \t\t"Created":"0001-01-01...
Create Media Service Streaming Endpoint. Args: access_token (str): A valid Azure authentication token. name (str): A Media Service Streaming Endpoint Name. description (str): A Media Service Streaming Endpoint Description. scale_units (str): A Media Service Scale Units Number. Returns: HTTP response. JSON body.
codesearchnet
def NewRow(self, value=''): newrow = self.row_class() newrow.row = (self.size + 1) newrow.table = self headers = self._Header() for header in headers: newrow[header] = value return newrow
Fetches a new, empty row, with headers populated. Args: value: Initial value to set each row entry to. Returns: A Row() object.
codesearchnet
def recipe_video(config, auth_read, sheet, tab, project, dataset, table): sheets(config, {'__comment__': 'Copy the tamplate sheet to the users sheet. If it already exists, nothing happens.', 'auth': auth_read, 'template': {'sheet': 'https: video(config, {'__comment__': 'Read video effects and values from sheet...
Add images, text, and audio to videos. Args: auth_read (authentication) - Credentials used for reading data. sheet (string) - Name or URL of sheet. tab (string) - Name of sheet tab. project (string) - Google Cloud Project Identifier. dataset (string) - Name of dataset. table (string) - Name of table.
github-repos
def encrypt(self, mesg): seqn = next(self._tx_sn) rv = self._tx_tinh.enc(s_msgpack.en((seqn, mesg))) return rv
Wrap a message with a sequence number and encrypt it. Args: mesg: The message to encrypt. Returns: bytes: The encrypted message.
codesearchnet
def unpack(self, buff=None, offset=0): band_type = UBInt16(enum_ref=MeterBandType) band_type.unpack(buff, offset) self.__class__ = MeterBandType(band_type.value).find_class() length = UBInt16() length.unpack(buff, offset=offset+2) super().unpack(buff[:offset+le...
Unpack *buff* into this object. This method will convert binary data into a readable value according to the attribute format. Args: buff (bytes): Binary buffer. offset (int): Where to begin unpacking. Raises: :exc:`~.exceptions.UnpackException`: If unpack fails.
juraj-google-style
def list2str(self, l: List, joiner: str) -> str: result = str() for item in l: if isinstance(item, list): result = ((result + self.list2str(item, joiner)) + joiner) elif isinstance(item, dict): result = ((result + self.dict2str(item, joiner)) + joiner) elif item: ...
Convert a list to a str as input for the tokenizer. Args: l (list): the list to convert joiner (str): join the elements using this string to separate them. Returns: the value of the list as a string
codesearchnet
def ndtri(p, name="ndtri"): with tf.name_scope(name): p = tf.convert_to_tensor(value=p, name="p") if dtype_util.as_numpy_dtype(p.dtype) not in [np.float32, np.float64]: raise TypeError( "p.dtype=%s is not handled, see docstring for supported types." % p.dtype) return _ndtri(p...
The inverse of the CDF of the Normal distribution function. Returns x such that the area under the pdf from minus infinity to x is equal to p. A piece-wise rational approximation is done for the function. This is a port of the implementation in netlib. Args: p: `Tensor` of type `float32`, `float64`. name: Python str...
juraj-google-style
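The docstring notes this is a port of the netlib implementation; the same approximation is exposed as scipy.special.ndtri (a Cephes port), which makes a handy reference for spot-checking values:

```python
from scipy.special import ndtri  # Cephes port of the same netlib routine

print(ndtri(0.5))    # 0.0 -- the median of the standard normal
print(ndtri(0.975))  # ~1.959964 -- the familiar two-sided 95% z value
```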
def __update_cleanup_paths(new_path): cleanup_dirs = settings.CFG['cleanup_paths'].value cleanup_dirs = set(cleanup_dirs) cleanup_dirs.add(new_path) cleanup_dirs = list(cleanup_dirs) settings.CFG['cleanup_paths'] = cleanup_dirs
Add the new path to the list of paths to clean up afterwards. Args: new_path: Path to the directory that need to be cleaned up.
codesearchnet
def pretty_print_fhir_to_json_string_for_analytics(fhir_proto: message.Message, *, indent_size: int=2) -> str: printer = _json_printer.JsonPrinter.pretty_printer_for_analytics(_PRIMITIVE_HANDLER, indent_size=indent_size) return printer.print(fhir_proto)
Returns an Analytic FHIR JSON representation with spaces and newlines. Args: fhir_proto: The proto to serialize into a "pretty" JSON string. indent_size: An integer denoting the size of space indentation for lexical scoping. Defaults to 2. Returns: An Analytic FHIR JSON representation with spaces and newlines.
github-repos
def split_sequence_columns_v2(feature_columns): sequence_columns = [] non_sequence_columns = [] for column in feature_columns: if not isinstance(column, (_TPUEmbeddingColumnV2, _TPUSharedEmbeddingColumnV2)): raise TypeError(f'column must be a _TPUEmbeddingColumnV2 or _TPUSharedEmbeddingC...
Split a list of _TPUEmbeddingColumn into sequence and non-sequence columns. For use in a TPUEstimator model_fn function. E.g. def model_fn(features): sequence_columns, feature_columns = ( tf.tpu.feature_column.split_sequence_columns(feature_columns)) input = tf.feature_column.input_layer( features=features, feature_c...
github-repos
async def reclaim_task(context, task): while True: log.debug(('waiting %s seconds before reclaiming...' % context.config['reclaim_interval'])) (await asyncio.sleep(context.config['reclaim_interval'])) if (task != context.task): return log.debug('Reclaiming task...') ...
Try to reclaim a task from the queue. This is a keepalive / heartbeat. Without it the job will expire and potentially be re-queued. Since this is run async from the task, the task may complete before we run, in which case we'll get a 409 the next time we reclaim. Args: context (scriptworker.context.Context): the sc...
codesearchnet
def __init__(self, cell): self._cell = cell
Creates a new StringGaugeCell. Args: cell: A C pointer to a TFE_MonitoringStringGaugeCell.
github-repos
def checkUser(self, user): return not self.conn("POST", "{0}/GetCredentialType.srf".format(SkypeConnection.API_MSACC), json={"username": user}).json().get("IfExistsResult")
Query a username or email address to see if a corresponding Microsoft account exists. Args: user (str): username or email address of an account Returns: bool: whether the account exists
juraj-google-style
def head(self, n=10): r = self.__repr__().split('\n') print('\n'.join(r[:n]), end=' ')
Display the top of the file. Args: n (int): Number of lines to display
codesearchnet
def mesh_split(tensor, device_mesh, tensor_split_dims_mapping, use_sharding_op=False, manual_mesh_dims=None, unspecified_dims=None): sharding = mesh_split_sharding(device_mesh, tensor_split_dims_mapping, manual_mesh_dims) return sharding.apply_to_tensor(tensor, use_sharding_op=use_sharding_op, unspecified_dims=...
Returns a tensor that is split along multiple dimensions in a device mesh. Args: tensor: A tf.Tensor to split. device_mesh: An np.ndarray describing the topology of the device mesh and each element is the ID of the device in the topology. tensor_split_dims_mapping: A list of integers that map each tensor axis to the d...
github-repos
def categorical(logits, num_samples, dtype=None, seed=None, name=None): with ops.name_scope(name, 'categorical', [logits]): return multinomial_categorical_impl(logits, num_samples, dtype, seed)
Draws samples from a categorical distribution. Example: ```python # samples has shape [1, 5], where each value is either 0 or 1 with equal # probability. samples = tf.random.categorical(tf.math.log([[0.5, 0.5]]), 5) ``` Args: logits: 2-D Tensor with shape `[batch_size, num_classes]`. Each slice `[i, :]` represents ...
github-repos
def resize(self, image: np.ndarray, size: Dict[str, int], size_divisor: int=0, resample: PILImageResampling=PILImageResampling.BILINEAR, data_format=None, input_data_format: Optional[Union[str, ChannelDimension]]=None, **kwargs) -> np.ndarray: max_size = kwargs.pop('max_size', None) size = get_size_dict(size, m...
Resize the image to the given size. Size can be min_size (scalar) or `(height, width)` tuple. If size is an int, smaller edge of the image will be matched to this number. Args: image (`np.ndarray`): Image to resize. size (`Dict[str, int]`): The size of the output image. size_divisor (`int`, *optional*, defaults to 0):...
github-repos
def AddVSSProcessingOptions(self, argument_group): argument_group.add_argument( '--no_vss', '--no-vss', dest='no_vss', action='store_true', default=False, help=( 'Do not scan for Volume Shadow Snapshots (VSS). This means that ' 'Volume Shadow Snapshots (VSS) are not proc...
Adds the VSS processing options to the argument group. Args: argument_group (argparse._ArgumentGroup): argparse argument group.
juraj-google-style
def imrescale(img, scale, return_scale=False, interpolation='bilinear'): (h, w) = img.shape[:2] if isinstance(scale, (float, int)): if (scale <= 0): raise ValueError('Invalid scale {}, must be positive.'.format(scale)) scale_factor = scale elif isinstance(scale, tuple): m...
Resize image while keeping the aspect ratio. Args: img (ndarray): The input image. scale (float or tuple[int]): The scaling factor or maximum size. If it is a float number, then the image will be rescaled by this factor, else if it is a tuple of 2 integers, then the image will be rescaled as large as possible within t...
codesearchnet
def setData(self, index, value, role=DTYPE_CHANGE_ROLE): if ((role != DTYPE_CHANGE_ROLE) or (not index.isValid())): return False if (not self.editable()): return False self.layoutAboutToBeChanged.emit() dtype = SupportedDtypes.dtype(value) currentDtype = np.dtype(index.data(role=DTYP...
Updates the datatype of a column. The model must be initialized with a dataframe already, since valid indexes are necessary. The `value` is a translated description of the data type. The translations can be found at `qtpandas.translation.DTypeTranslator`. If a datatype cannot be converted, e.g. datetime to integer, a `...
codesearchnet
def drug_matches_criteria(drug: Drug, **criteria: Dict[str, bool]) -> bool: for attribute, value in criteria.items(): if getattr(drug, attribute) != value: return False return True
Determines whether a drug, passed as an instance of :class:`.Drug`, matches the specified criteria. Args: drug: a :class:`.Drug` instance criteria: ``name=value`` pairs to match against the attributes of the :class:`Drug` class. For example, you can include keyword arguments like ``antidepressant=True``.
juraj-google-style
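The loop simply checks that every keyword pair matches an attribute; a self-contained sketch with a hypothetical minimal Drug class:

```python
from dataclasses import dataclass

@dataclass
class Drug:  # hypothetical stand-in for the real Drug class
    name: str
    antidepressant: bool = False
    ssri: bool = False

def drug_matches_criteria(drug, **criteria):
    for attribute, value in criteria.items():
        if getattr(drug, attribute) != value:
            return False
    return True

fluoxetine = Drug("fluoxetine", antidepressant=True, ssri=True)
print(drug_matches_criteria(fluoxetine, antidepressant=True))              # True
print(drug_matches_criteria(fluoxetine, antidepressant=True, ssri=False))  # False
```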
def verify_cot_signatures(chain): for link in chain.links: unsigned_path = link.get_artifact_full_path('public/chain-of-trust.json') ed25519_signature_path = link.get_artifact_full_path('public/chain-of-trust.json.sig') verify_link_ed25519_cot_signature(chain, link, unsigned_path, ed255...
Verify the signatures of the chain of trust artifacts populated in ``download_cot``. Populate each link.cot with the chain of trust json body. Args: chain (ChainOfTrust): the chain of trust to add to. Raises: CoTError: on failure.
juraj-google-style
def dvds_upcoming(self, **kwargs): path = self._get_path('dvds_upcoming') response = self._GET(path, kwargs) self._set_attrs_to_values(response) return response
Gets the upcoming movies from the API. Args: page_limit (optional): number of movies to show per page, default=16 page (optional): results page number, default=1 country (optional): localized data for selected country, default="us" Returns: A dict representation of the JSON returned from the API.
juraj-google-style
def post_attention(self, token, x): with tf.control_dependencies([ self.previous_segment.assign(token[0]), self.previous_vals.assign(token[1]), self.previous_bias.assign(token[2]), ]): return tf.identity(x)
Called after self-attention. The memory can be updated here. Args: token: Data returned by pre_attention, which can be used to carry over state related to the current memory operation. x: a Tensor of data after self-attention and feed-forward Returns: a (possibly modified) version of the input x
juraj-google-style
def swo_set_host_buffer_size(self, buf_size): buf = ctypes.c_uint32(buf_size) res = self._dll.JLINKARM_SWO_Control(enums.JLinkSWOCommands.SET_BUFFERSIZE_HOST, ctypes.byref(buf)) if res < 0: raise errors.JLinkException(res) ...
Sets the size of the buffer used by the host to collect SWO data. Args: self (JLink): the ``JLink`` instance buf_size (int): the new size of the host buffer Returns: ``None`` Raises: JLinkException: on error
juraj-google-style
def register(self, command: str, handler: Any): if (not command.startswith('/')): command = f'/{command}' LOG.info('Registering %s to %s', command, handler) self._routes[command].append(handler)
Register a new handler for a specific slash command Args: command: Slash command handler: Callback
codesearchnet
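Both 'deploy' and '/deploy' normalize to the same route key; a runnable sketch with a hypothetical host class holding the `_routes` mapping:

```python
import logging
from collections import defaultdict

LOG = logging.getLogger(__name__)

class CommandRouter:  # hypothetical host class for the method above
    def __init__(self):
        self._routes = defaultdict(list)

    def register(self, command, handler):
        if not command.startswith('/'):
            command = f'/{command}'
        LOG.info('Registering %s to %s', command, handler)
        self._routes[command].append(handler)

router = CommandRouter()
router.register('deploy', lambda payload: print('deploying', payload))
router.register('/deploy', lambda payload: print('second handler'))
print(list(router._routes))  # ['/deploy'] -- both spellings share one key
```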
def _ProcessMetadataFile(self, mediator, file_entry): self.processing_status = definitions.STATUS_INDICATOR_EXTRACTING self._event_extractor.ParseFileEntryMetadata(mediator, file_entry) for data_stream in file_entry.data_streams: if self._abort: break self.last_activity_timestamp =...
Processes a metadata file. Args: mediator (ParserMediator): mediates the interactions between parsers and other components, such as storage and abort signals. file_entry (dfvfs.FileEntry): file entry of the metadata file.
juraj-google-style
def render_root_node_with_subs(root_node, subs): def rec(node, acc): if isinstance(node, e_nodes.EndOfStreamNode): pass elif isinstance(node, e_nodes.OpenStartElementNode): acc.append("<") acc.append(node.tag_name()) for child in node.children()...
Render the given root node using the given substitutions into XML. Args: root_node (e_nodes.RootNode): the node to render. subs (list[str]): the substitutions that may be included in the XML. Returns: str: the rendered XML document.
juraj-google-style
def abs(cls, x: 'TensorFluent') -> 'TensorFluent': return cls._unary_op(x, tf.abs, tf.float32)
Returns a TensorFluent for the abs function. Args: x: The input fluent. Returns: A TensorFluent wrapping the abs function.
codesearchnet
def APFSUnlockVolume(fsapfs_volume, path_spec, key_chain): is_locked = fsapfs_volume.is_locked() if is_locked: password = key_chain.GetCredential(path_spec, 'password') if password: fsapfs_volume.set_password(password) recovery_password = key_chain.GetCredential(path_spec, 'recovery_password')...
Unlocks an APFS volume using the path specification. Args: fsapfs_volume (pyapfs.volume): APFS volume. path_spec (PathSpec): path specification. key_chain (KeyChain): key chain. Returns: bool: True if the volume is unlocked, False otherwise.
juraj-google-style
def download(self, streamed=False, action=None, chunk_size=1024, **kwargs): path = ('/projects/%s/export/download' % self.project_id) result = self.manager.gitlab.http_get(path, streamed=streamed, raw=True, **kwargs) return utils.response_content(result, streamed, action, chunk_size)
Download the archive of a project export. Args: streamed (bool): If True the data will be processed by chunks of `chunk_size` and each chunk is passed to `action` for treatment action (callable): Callable responsible for dealing with each chunk of data chunk_size (int): Size of each chunk **kwargs: Extra options to send to t...
codesearchnet
def committed(self, partition): assert (self.config['api_version'] >= (0, 8, 1)), 'Requires >= Kafka 0.8.1' assert (self.config['group_id'] is not None), 'Requires group_id' if (not isinstance(partition, TopicPartition)): raise TypeError('partition must be a TopicPartition namedtuple') if self._...
Get the last committed offset for the given partition. This offset will be used as the position for the consumer in the event of a failure. This call may block to do a remote call if the partition in question isn't assigned to this consumer or if the consumer hasn't yet initialized its cache of committed offsets. Ar...
codesearchnet
def consume_socket_output(frames, demux=False): if (demux is False): return six.binary_type().join(frames) out = [None, None] for frame in frames: assert (frame != (None, None)) if (frame[0] is not None): if (out[0] is None): out[0] = frame[0] ...
Iterate through frames read from the socket and return the result. Args: demux (bool): If False, stdout and stderr are multiplexed, and the result is the concatenation of all the frames. If True, the streams are demultiplexed, and the result is a 2-tuple where each item is the concatenation of frames belonging to the...
codesearchnet
def hide_tool(self, context_name, tool_name): data = self._context(context_name) hidden_tools = data["hidden_tools"] if tool_name not in hidden_tools: self._validate_tool(context_name, tool_name) hidden_tools.add(tool_name) self._flush_tools()
Hide a tool so that it is not exposed in the suite. Args: context_name (str): Context containing the tool. tool_name (str): Name of tool to hide.
juraj-google-style
def peek_record(self, model_class, record_id): if self._cache: return self._cache.get_record(model_class.__name__, record_id) else: return None
Return an instance of the model_class from the cache if it is present. Args: model_class (:class:`cinder_data.model.CinderModel`): A subclass of :class:`cinder_data.model.CinderModel` of your chosen model. record_id (int): The id of the record requested. Returns: :class:`cinder_data.model.CinderModel`: An instance of...
juraj-google-style
def get_help_commands(server_prefix): datapacks = [] _dir = os.path.realpath(os.path.join(os.getcwd(), os.path.dirname(__file__))) for module_name in os.listdir('{}/../'.format(_dir)): if ((not module_name.startswith('_')) and (not module_name.startswith('!'))): help_command = '`{}help {...
Get the help commands for all modules Args: server_prefix: The server command prefix Returns: datapacks (list): A list of datapacks for the help commands for all the modules
codesearchnet
def _GetNextLogCountPerToken(token): global _log_counter_per_token _log_counter_per_token[token] = 1 + _log_counter_per_token.get(token, -1) return _log_counter_per_token[token]
Wrapper for _log_counter_per_token. Args: token: The token for which to look up the count. Returns: The number of times this function has been called with *token* as an argument (starting at 0)
juraj-google-style
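The helper runs standalone: each token keeps an independent zero-based counter, which is the building block that log-every-n style throttling relies on:

```python
_log_counter_per_token = {}

def _GetNextLogCountPerToken(token):
    global _log_counter_per_token
    _log_counter_per_token[token] = 1 + _log_counter_per_token.get(token, -1)
    return _log_counter_per_token[token]

print([_GetNextLogCountPerToken("a") for _ in range(3)])  # [0, 1, 2]
print(_GetNextLogCountPerToken("b"))                      # 0 -- independent counter
```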
def _prepare_grid(self, times, grid_step): grid = tf.range(0.0, times[-1], grid_step, dtype=self._dtype) all_times = tf.concat([grid, times], axis=0) mask = tf.concat([tf.zeros_like(grid, dtype=tf.bool), tf.ones_like(times, dtype=tf.bool)], axis=0) perm = tf.argsort(all_times, stable=True) all_times...
Prepares grid of times for path generation. Args: times: Rank 1 `Tensor` of increasing positive real values. The times at which the path points are to be evaluated. grid_step: Rank 0 real `Tensor`. Maximal distance between points in resulting grid. Returns: Tuple `(all_times, mask)`. `all_times` is 1-D real `Tensor`...
github-repos
def update_score_summary(sender, **kwargs): score = kwargs['instance'] try: score_summary = ScoreSummary.objects.get(student_item=score.student_item) score_summary.latest = score if score.reset: score_summary.highest = score elif (score.to_float() > score_summary.high...
Listen for new Scores and update the relevant ScoreSummary. Args: sender: not used Kwargs: instance (Score): The score model whose save triggered this receiver.
codesearchnet
def set_size(self, width, height): if (width is not None): try: width = to_pix(int(width)) except ValueError: pass self.style['width'] = width if (height is not None): try: height = to_pix(int(height)) except ValueError: pas...
Set the widget size. Args: width (int or str): An optional width for the widget (e.g. width=10 or width='10px' or width='10%'). height (int or str): An optional height for the widget (e.g. height=10 or height='10px' or height='10%').
codesearchnet
def __init__(self, params=None, connection_string=None): if params is None and connection_string is None: raise RuntimeError("Please provide either 'params' or 'connection_string'") if params is not None and connection_string is not None: raise RuntimeError("Please pro...
Instantiate a client object A client can be configured either from a parameters dictionary ``params`` or directly from an :mod:`sqlalchemy` connection string ``connection_string``. Exactly one of the two must be provided. Args: params (dict): database configuration, as defined in :mod:`ozelot.config` connection_strin...
juraj-google-style
def create(cls, tx_signers, recipients, metadata=None, asset=None): (inputs, outputs) = cls.validate_create(tx_signers, recipients, asset, metadata) return cls(cls.CREATE, {'data': asset}, inputs, outputs, metadata)
A simple way to generate a `CREATE` transaction. Note: This method currently supports the following Cryptoconditions use cases: - Ed25519 - ThresholdSha256 Additionally, it provides support for the following BigchainDB use cases: - Multiple inputs and outputs. Args: tx_signers (:obj:`list` of :obj:`str`): A list of ...
codesearchnet
def list_tags(): codes = _AutoCodes() grouped = set([(k, '/{0}'.format(k), codes[k], codes['/{0}'.format(k)]) for k in codes if (not k.startswith('/'))]) found = [c for r in grouped for c in r[:2]] missing = set([(('', r[0], None, r[1]) if r[0].startswith('/') else (r[0], '', r[1], None)) for r in _Auto...
Lists the available tags. Returns: Tuple of tuples. Child tuples are four items: ('opening tag', 'closing tag', main ansi value, closing ansi value).
codesearchnet
def join_pretty_tensors(tensors, output, join_function=None, name='join'): if (not tensors): raise ValueError('pretty_tensors must be a non-empty sequence.') with output.g.name_scope(name): if (join_function is None): last_dim = (len(tensors[0].shape) - 1) return output.w...
Joins the list of pretty_tensors and sets head of output_pretty_tensor. Args: tensors: A sequence of Layers or SequentialLayerBuilders to join. output: A pretty_tensor to set the head with the result. join_function: A function to join the tensors, defaults to concat on the last dimension. name: A name that is used for...
codesearchnet
def setup(docker_mount=None, force=False): if not is_ubuntu() and not is_boot2docker(): raise Exception('Head In The Clouds Docker is only supported on Ubuntu') if os.path.exists('dot_dockercfg') and not fabric.contrib.files.exists('~/.dockercfg'): put('dot_dockercfg', '~/.dockercfg'...
Prepare a vanilla server by installing docker, curl, and sshpass. If a file called ``dot_dockercfg`` exists in the current working directory, it is uploaded as ``~/.dockercfg``. Args: * docker_mount=None: Partition that will be mounted as /var/lib/docker
juraj-google-style
def label_durations(self, label_list_ids=None): duration = collections.defaultdict(int) for utterance in self.utterances.values(): for label_value, utt_count in utterance.label_total_duration(label_list_ids=label_list_ids).items(): duration[label_value] += utt_count...
Return a dictionary containing the total duration for which every label-value in this corpus occurs. Args: label_list_ids (list): If not None, only labels from label-lists with an id contained in this list are considered. Returns: dict: A dictionary containing the total duration with the label-value as key.
juraj-google-style
def ReadClientFullInfo(self, client_id): result = self.MultiReadClientFullInfo([client_id]) try: return result[client_id] except KeyError: raise UnknownClientError(client_id)
Reads full client information for a single client. Args: client_id: A GRR client id string, e.g. "C.ea3b2b71840d6fa7". Returns: A `ClientFullInfo` instance for given client. Raises: UnknownClientError: if no client with such id was found.
juraj-google-style
def convert_x_www_form_urlencoded_to_dict(post_data): if isinstance(post_data, str): converted_dict = {} for k_v in post_data.split('&'): try: (key, value) = k_v.split('=') except ValueError: raise Exception('Invalid x_www_form_urlencoded data ...
Convert x-www-form-urlencoded data to a dict. Args: post_data (str): e.g. "a=1&b=2" Returns: dict: {"a": "1", "b": "2"} (values remain strings)
codesearchnet
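A runnable reconstruction of the visible logic; the behavior for non-string input is cut off in the row above, so that branch is an assumption:

```python
def convert_x_www_form_urlencoded_to_dict(post_data):
    if not isinstance(post_data, str):  # assumed: pass non-strings through
        return post_data
    converted = {}
    for k_v in post_data.split('&'):
        try:
            key, value = k_v.split('=')
        except ValueError:
            raise Exception('Invalid x_www_form_urlencoded data: %s' % post_data)
        converted[key] = value  # values stay strings
    return converted

print(convert_x_www_form_urlencoded_to_dict("a=1&b=2"))  # {'a': '1', 'b': '2'}
```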
def _read_config(filename): parser = configparser.RawConfigParser() if (filename and (not parser.read(filename))): sys.stderr.write(("Unable to open configuration file %s. Use --config='' to disable this warning.\n" % filename)) config = {} for (section, defaults) in BASE_CONFIG.items(): ...
Read configuration from the given file. Parsing is performed through the configparser library. Returns: dict: a flattened dict of (option_name, value), using defaults.
codesearchnet
def remove_site(self): params = dict(oxd_id=self.oxd_id) logger.debug('Sending command `remove_site` with params %s', params) response = self.msgr.request('remove_site', **params) logger.debug('Received response: %s', response) if (response['status'] == 'error'): raise OxdServerError(respons...
Cleans up the data for the site. Returns: oxd_id if the process was completed without error Raises: OxdServerError if there was an issue with the operation
codesearchnet
def plot_brillouin_zone_from_kpath(kpath, ax=None, **kwargs): lines = [[kpath.kpath['kpoints'][k] for k in p] for p in kpath.kpath['path']] return plot_brillouin_zone(bz_lattice=kpath.prim_rec, lines=lines, ax=ax, labels=kpath.kpath['kpoints'], **kwargs)
Gives the plot (as a matplotlib object) of the symmetry line path in the Brillouin Zone. Args: kpath (HighSymmKpath): a HighSymmKPath object ax: matplotlib :class:`Axes` or None if a new figure should be created. **kwargs: provided by add_fig_kwargs decorator Returns: matplotlib figure
juraj-google-style
def get_model(servoid): data = [] data.append(9) data.append(servoid) data.append(EEP_READ_REQ) data.append(MODEL_NO1_EEP) data.append(BYTE1) send_data(data) rxdata = [] try: rxdata = SERPORT.read(12) return (ord(rxdata[9]) & 255) except: raise HerkulexErr...
Get the servo model. This function gets the model of the Herkulex servo, given its id. Args: servoid (int): the id of the servo Returns: int: an integer corresponding to the model number 0x06 for DRS-602 0x04 for DRS-402 0x02 for DRS-202
codesearchnet
def Delete(self, request, global_params=None): config = self.GetMethodConfig('Delete') return self._RunMethod(config, request, global_params=global_params)
Deletes a `WorkerPool`. Args: request: (CloudbuildProjectsLocationsWorkerPoolsDeleteRequest) input message global_params: (StandardQueryParameters, default: None) global arguments Returns: (Operation) The response message.
github-repos
def delete_vmss(access_token, subscription_id, resource_group, vmss_name): endpoint = ''.join([get_rm_endpoint(), '/subscriptions/', subscription_id, '/resourceGroups/', resource_group, '/providers/Microsoft.Compute/virtualMachineScaleSets/', vmss_name, '?api-version=', COMP_API]) return do_delete(endpoint, acc...
Delete a virtual machine scale set. Args: access_token (str): A valid Azure authentication token. subscription_id (str): Azure subscription id. resource_group (str): Azure resource group name. vmss_name (str): Name of the virtual machine scale set. Returns: HTTP response.
codesearchnet
def contains(self, value, equality_comparer=operator.eq): if self.closed(): raise ValueError('Attempt to call contains() on a closed Queryable.') if (not is_callable(equality_comparer)): raise TypeError('contains() parameter equality_comparer={0} is not callable'.format(repr(equality_comparer)))...
Determines whether the sequence contains a particular value. Execution is immediate. Depending on the type of the sequence, all or none of the sequence may be consumed by this operation. Note: This method uses immediate execution. Args: value: The value to test for membership of the sequence Returns: True if value ...
codesearchnet
def sg_summary_gradient(tensor, gradient, prefix=None, name=None): prefix = ('' if (prefix is None) else (prefix + '/')) name = ((prefix + _pretty_name(tensor)) if (name is None) else (prefix + name)) _scalar((name + '/grad'), tf.reduce_mean(tf.abs(gradient))) _histogram((name + '/grad-h'), tf.abs(gradi...
Register `tensor` to summary report as `gradient` Args: tensor: A `Tensor` to log as gradient gradient: A 0-D `Tensor`. A gradient to log prefix: A `string`. A prefix to display in the tensor board web UI. name: A `string`. A name to display in the tensor board web UI. Returns: None
codesearchnet
def multi_label_train_test_split(y, test_size=0.2): if test_size <= 0 or test_size >= 1: raise ValueError("`test_size` should be between 0 and 1") frac = Fraction(test_size).limit_denominator() test_folds, total_folds = frac.numerator, frac.denominator logger.warn('Inferring test_size...
Creates a test split with roughly the same multi-label distribution as in `y`. Args: y: The multi-label outputs. test_size: The test size, a float in (0, 1). Returns: The train and test indices.
juraj-google-style
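The Fraction trick converts `test_size` into fold counts: the denominator gives the total number of folds and the numerator how many of them become the test split:

```python
from fractions import Fraction

# How the snippet above infers fold counts from test_size.
for test_size in (0.2, 0.25, 0.4):
    frac = Fraction(test_size).limit_denominator()
    test_folds, total_folds = frac.numerator, frac.denominator
    print(f"{test_size} -> {test_folds} of {total_folds} folds held out")
# 0.2  -> 1 of 5 folds held out
# 0.25 -> 1 of 4 folds held out
# 0.4  -> 2 of 5 folds held out
```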
def set_icon_file(self, filename, rel='icon'): (mimetype, encoding) = mimetypes.guess_type(filename) self.add_child('favicon', ('<link rel="%s" href="%s" type="%s" />' % (rel, filename, mimetype)))
Defines an icon for the App. Args: filename (str): the resource file name (e.g. "/res:myicon.png") rel (str): leave it unchanged (standard "icon")
codesearchnet
def get_policy(observations, hparams, action_space): if (not isinstance(action_space, gym.spaces.Discrete)): raise ValueError('Expecting discrete action space.') obs_shape = common_layers.shape_list(observations) (frame_height, frame_width) = obs_shape[2:4] if (hparams.policy_problem_name == 'du...
Get a policy network. Args: observations: observations hparams: parameters action_space: action space Returns: Tuple (action logits, value).
codesearchnet
def __init__(self, api_login, api_key): self.login = api_login self.key = api_key self.api_url = self.api_base_url.format(api_version=self.api_version)
Initializes OpenLoad instance with given parameters and formats api base url. Args: api_login (str): API Login found in openload.co api_key (str): API Key found in openload.co Returns: None
juraj-google-style
def match_any(patterns, name): if (not patterns): return True return any((match(pattern, name) for pattern in patterns))
Test if a name matches any of a list of patterns. Will return `True` if ``patterns`` is an empty list. Arguments: patterns (list): A list of wildcard pattern, e.g ``["*.py", "*.pyc"]`` name (str): A filename. Returns: bool: `True` if the name matches at least one of the patterns.
codesearchnet
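The underlying `match` helper is not shown in the row above; assuming fnmatch-style wildcard semantics, a self-contained sketch:

```python
from fnmatch import fnmatch  # assumption: match() behaves like fnmatch

def match_any(patterns, name):
    if not patterns:
        return True  # an empty pattern list matches everything
    return any(fnmatch(name, pattern) for pattern in patterns)

print(match_any(["*.py", "*.pyc"], "main.py"))    # True
print(match_any(["*.py", "*.pyc"], "README.md"))  # False
print(match_any([], "anything.at.all"))           # True
```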
def cas(self, key, value, cas, expire=0, noreply=False): return self._store_cmd(b'cas', {key: value}, expire, noreply, cas)[key]
The memcached "cas" command. Args: key: str, see class docs for details. value: str, see class docs for details. cas: int or str that only contains the characters '0'-'9'. expire: optional int, number of seconds until the item is expired from the cache, or zero for no expiry (the default). noreply: optional bool, Fals...
codesearchnet