def future_value(present_value, annual_rate, periods_per_year, years): rate_per_period = (annual_rate / float(periods_per_year)) periods = (periods_per_year * years) return (present_value * ((1 + rate_per_period) ** periods))
Calculates the future value of money invested at an annual interest rate, x times per year, for a given number of years. Args: present_value: int or float, the current value of the money (principal). annual_rate: float, 0 to 1 (e.g., .5 = 50%), the interest rate paid out. periods_per_year: int, the number of times money is invested per year. years: int, the number of years invested. Returns: Float, the future value of the money invested with compound interest.
codesearchnet
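The compound-interest row above is self-contained; a re-indented sketch with a usage check (the expected value follows directly from FV = PV * (1 + r/n) ** (n*t)):

```python
def future_value(present_value, annual_rate, periods_per_year, years):
    # FV = PV * (1 + r/n) ** (n * t)
    rate_per_period = annual_rate / float(periods_per_year)
    periods = periods_per_year * years
    return present_value * (1 + rate_per_period) ** periods

# $1,000 at a 5% annual rate, compounded monthly, for 10 years
print(round(future_value(1000, 0.05, 12, 10), 2))  # → 1647.01
```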
def isdisjoint(self, other): if isinstance(other, (_sequence_types + (BaseMultiset,))): pass elif (not isinstance(other, Container)): other = self._as_multiset(other) return all(((element not in other) for element in self._elements.keys()))
r"""Return True if the set has no elements in common with other. Sets are disjoint iff their intersection is the empty set. >>> ms = Multiset('aab') >>> ms.isdisjoint('bc') False >>> ms.isdisjoint(Multiset('ccd')) True Args: other: The other set to check disjointedness. Can also be an :class:`~typing.Iterable`\[~T] or :class:`~typing.Mapping`\[~T, :class:`int`] which are then converted to :class:`Multiset`\[~T].
codesearchnet
def build_filter_stack(stack, options): if options.get('keyword_case'): stack.preprocess.append(filters.KeywordCaseFilter(options['keyword_case'])) if options.get('identifier_case'): stack.preprocess.append(filters.IdentifierCaseFilter(options['identifier_case'])) if options.get('truncate_strings'): stack.preprocess.append(filters.TruncateStringFilter(width=options['truncate_strings'], char=options['truncate_char'])) if options.get('use_space_around_operators', False): stack.enable_grouping() stack.stmtprocess.append(filters.SpacesAroundOperatorsFilter()) if options.get('strip_comments'): stack.enable_grouping() stack.stmtprocess.append(filters.StripCommentsFilter()) if (options.get('strip_whitespace') or options.get('reindent')): stack.enable_grouping() stack.stmtprocess.append(filters.StripWhitespaceFilter()) if options.get('reindent'): stack.enable_grouping() stack.stmtprocess.append(filters.ReindentFilter(char=options['indent_char'], width=options['indent_width'], indent_after_first=options['indent_after_first'], indent_columns=options['indent_columns'], wrap_after=options['wrap_after'], comma_first=options['comma_first'])) if options.get('reindent_aligned', False): stack.enable_grouping() stack.stmtprocess.append(filters.AlignedIndentFilter(char=options['indent_char'])) if options.get('right_margin'): stack.enable_grouping() stack.stmtprocess.append(filters.RightMarginFilter(width=options['right_margin'])) if options.get('output_format'): frmt = options['output_format'] if (frmt.lower() == 'php'): fltr = filters.OutputPHPFilter() elif (frmt.lower() == 'python'): fltr = filters.OutputPythonFilter() else: fltr = None if (fltr is not None): stack.postprocess.append(fltr) return stack
Set up and return a filter stack. Args: stack: :class:`~sqlparse.filters.FilterStack` instance options: Dictionary with options validated by validate_options.
codesearchnet
def after_run(self, run_context, run_values): pass
Called after each call to run(). The `run_values` argument contains results of requested ops/tensors by `before_run()`. The `run_context` argument is the same one sent to the `before_run` call. `run_context.request_stop()` can be called to stop the iteration. If `session.run()` raises any exceptions then `after_run()` is not called. Args: run_context: A `SessionRunContext` object. run_values: A `SessionRunValues` object.
github-repos
def stream_matching(self, address, name): matching = [x for x in self.entries if (x.valid and x.target.matches(address, name))] rpc_list = [] for var in matching: rpc_list.extend(var.generate_rpcs(address)) return rpc_list
Return the RPCs needed to stream matching config variables to the given tile. This function will return a list of tuples suitable for passing to EmulatedDevice.deferred_rpc. Args: address (int): The address of the tile that we wish to stream to name (str or bytes): The 6 character name of the target tile. Returns: list of tuple: The list of RPCs to send to stream these variables to a tile.
codesearchnet
def _GetHashes(self, target_queue, max_hashes): hashes = [] for _ in range(0, max_hashes): try: item = target_queue.get_nowait() except Queue.Empty: continue hashes.append(item) return hashes
Retrieves a list of items from a queue. Args: target_queue (Queue.queue): queue to retrieve hashes from. max_hashes (int): maximum number of items to retrieve from the target_queue. Returns: list[object]: list of at most max_hashes elements from the target_queue. The list may have no elements if the target_queue is empty.
codesearchnet
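The drain loop above can be sketched against the standard-library queue. Note that the original's `continue` on `Queue.Empty` keeps spinning through the remaining iterations; the `break` used here returns as soon as the queue runs dry, which is a judgment call and not the original behavior:

```python
import queue

def drain(q, max_items):
    # Pop up to max_items without blocking; stop early once the queue is empty.
    items = []
    for _ in range(max_items):
        try:
            items.append(q.get_nowait())
        except queue.Empty:
            break
    return items

q = queue.Queue()
for n in (1, 2, 3):
    q.put(n)
print(drain(q, 5))  # → [1, 2, 3]
```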
def decorate(fn): if not isfunction(fn): raise TypeError('paco: fn must be a callable object') @functools.wraps(fn) def decorator(*args, **kw): for arg in args: if iscoro_or_corofunc(arg): return fn(*args, **kw) if len(args) and args[0] is None: raise TypeError('paco: first argument cannot be empty') def wrapper(coro, *_args, **_kw): if not iscoro_or_corofunc(coro): raise TypeError('paco: first argument must be a ' 'coroutine or coroutine function') _args = ((coro,) + (args + _args)) kw.update(_kw) return fn(*_args, **kw) return wrapper return decorator
Generic decorator for coroutines helper functions allowing multiple variadic initialization arguments. This function is intended to be used internally. Arguments: fn (function): target function to decorate. Raises: TypeError: if function or coroutine function is not provided. Returns: function: decorated function.
juraj-google-style
def _single_request(self, method, *args, **kwargs): _method = self._service for item in method.split('.'): if method.endswith(item): _method = getattr(_method, item)(*args, **kwargs) else: _method = getattr(_method, item)() _method.uri = _method.uri.replace('$ENDPOINT', self._endpoint) try: return _method.execute(http=self._http) except googleapiclient.errors.HttpError as exc: response = json.loads(exc.content.decode('utf-8'))['error'] raise APIError(code=response['code'], message=response['message'], http_error=exc)
Make a single request to the fleet API endpoint Args: method (str): A dot delimited string indicating the method to call. Example: 'Machines.List' *args: Passed directly to the method being called. **kwargs: Passed directly to the method being called. Returns: dict: The response from the method called. Raises: fleet.v1.errors.APIError: Fleet returned a response code >= 400
juraj-google-style
def tas53(msg): d = hex2bin(data(msg)) if (d[33] == '0'): return None tas = (bin2int(d[34:46]) * 0.5) return round(tas, 1)
Aircraft true airspeed, BDS 5,3 message Args: msg (String): 28 bytes hexadecimal message Returns: float: true airspeed in knots
codesearchnet
def bridge_to_parent(br): cmd = 'ovs-vsctl br-to-parent {0}'.format(br) result = __salt__['cmd.run_all'](cmd) if result['retcode'] != 0: return False return result['stdout']
Returns the parent bridge of a bridge. Args: br: A string - bridge name Returns: Name of the parent bridge. This is the same as the bridge name if the bridge is not a fake bridge. If the bridge does not exist, False is returned. CLI Example: .. code-block:: bash salt '*' openvswitch.bridge_to_parent br0
juraj-google-style
def nr_profiles(arr, genomes): gs_collapse = [] genome_idx_dict = {} indices = [] patt_dict = {} for i, g in enumerate(genomes): p = arr[i, :].tostring() if p in patt_dict: parent = patt_dict[p] idx = genome_idx_dict[parent] gs_collapse[idx].append(g) else: indices.append(i) patt_dict[p] = g genome_idx_dict[g] = len(gs_collapse) gs_collapse.append([g]) return arr[indices, :], gs_collapse
Get a condensed cgMLST profile matrix for the specified Genomes_, where condensed means redundant cgMLST profiles are only represented once in the matrix. Args: arr (numpy.array): cgMLST profile matrix with one row per genome. genomes (list): List of Genome_ names corresponding to the rows of ``arr``. Returns: (numpy.array, list): tuple of condensed cgMLST profile matrix and list of grouped Genomes_
juraj-google-style
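The same row-collapse can be checked on a toy profile matrix; `tobytes` stands in for the deprecated `tostring` used in the row above:

```python
import numpy as np

def nr_profiles(arr, genomes):
    # Keep the first occurrence of each distinct row; group the names of
    # genomes that share an identical profile.
    gs_collapse = []
    genome_idx = {}
    indices = []
    patt = {}
    for i, g in enumerate(genomes):
        p = arr[i, :].tobytes()
        if p in patt:
            gs_collapse[genome_idx[patt[p]]].append(g)
        else:
            indices.append(i)
            patt[p] = g
            genome_idx[g] = len(gs_collapse)
            gs_collapse.append([g])
    return arr[indices, :], gs_collapse

a = np.array([[1, 2], [1, 2], [3, 4]])
mat, groups = nr_profiles(a, ['g1', 'g2', 'g3'])
print(groups)  # → [['g1', 'g2'], ['g3']]
```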
def zip(self, destination: typing.Union[str, Path] = None, encode: bool = True) -> str: if encode: self._encode() if destination is None: destination_path = self.miz_path.parent.joinpath(f'{self.miz_path.stem}_EMIZ.miz') else: destination_path = elib.path.ensure_file(destination, must_exist=False) LOGGER.debug('zipping mission to: %s', destination_path) destination_path.write_bytes(dummy_miz) with ZipFile(str(destination_path), mode='w', compression=8) as zip_file: for root, _, items in os.walk(self.temp_dir.absolute()): for item in items: item_abs_path = Path(root, item).absolute() item_rel_path = Path(item_abs_path).relative_to(self.temp_dir) zip_file.write(item_abs_path, arcname=item_rel_path) return str(destination_path)
Write mission, dictionary etc. to a MIZ file Args: destination: target MIZ file (if None, defaults to source MIZ + "_EMIZ") Returns: destination file
juraj-google-style
def read_field_h5(xdmf_file, fieldname, snapshot, header=None): if header is None: header, xdmf_root = read_geom_h5(xdmf_file, snapshot) else: xdmf_root = xmlET.parse(str(xdmf_file)).getroot() npc = header['nts'] flds = np.zeros(_flds_shape(fieldname, header)) data_found = False for elt_subdomain in xdmf_root[0][0][snapshot].findall('Grid'): ibk = int(elt_subdomain.get('Name').startswith('meshYang')) for data_attr in elt_subdomain.findall('Attribute'): if data_attr.get('Name') != fieldname: continue icore, fld = _get_field(xdmf_file, data_attr.find('DataItem')) fld = fld.T shp = fld.shape if shp[-1] == 1 and header['nts'][0] == 1: fld = fld.reshape((shp[0], 1, shp[1], shp[2])) if header['rcmb'] < 0: fld = fld[(2, 0, 1), ...] elif shp[-1] == 1: fld = fld.reshape((shp[0], shp[1], 1, shp[2])) if header['rcmb'] < 0: fld = fld[(0, 2, 1), ...] elif header['nts'][1] == 1: fld = fld.reshape((1, shp[0], 1, shp[1])) ifs = [icore[i] * npc[i] for i in range(3)] if header['zp']: fld = fld[:, :, :, :-1] flds[:, ifs[0]:ifs[0] + npc[0] + header['xp'], ifs[1]:ifs[1] + npc[1] + header['yp'], ifs[2]:ifs[2] + npc[2], ibk] = fld data_found = True flds = _post_read_flds(flds, header) return (header, flds) if data_found else None
Extract field data from hdf5 files. Args: xdmf_file (:class:`pathlib.Path`): path of the xdmf file. fieldname (str): name of field to extract. snapshot (int): snapshot number. header (dict): geometry information. Returns: (dict, numpy.array): geometry information and field data. None is returned if data is unavailable.
juraj-google-style
def exact_match(self, descriptor): return self._exact_match_field(self._group, descriptor.get_group()) \ and self._exact_match_field(self._type, descriptor.get_type()) \ and self._exact_match_field(self._kind, descriptor.get_kind()) \ and self._exact_match_field(self._name, descriptor.get_name()) \ and self._exact_match_field(self._version, descriptor.get_version())
Matches this descriptor to another descriptor exactly. Args: descriptor: another descriptor to match this one. Returns: True if descriptors match or False otherwise.
juraj-google-style
def _normalize(self, text: str) -> str: accepted = [chr(i) for i in range(ord('a'), ord('z') + 1)] + [chr(i) for i in range(ord('A'), ord('Z') + 1)] + [chr(i) for i in range(ord('0'), ord('9') + 1)] + ['.'] accepted = frozenset(accepted) pattern = re.compile('_+') text = ''.join([c if c in accepted else '_' for c in text.lower()]) text = pattern.sub('_', text).strip('_') return text
Normalizes the input text. This process is for the genres and the artist Args: text (`str`): Artist or Genre string to normalize
github-repos
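A standalone restatement of the normalizer above with a usage check; note the A-Z entries in the original accepted set are effectively dead weight, since the text is lowercased before the membership test:

```python
import re

def normalize(text):
    # Lowercase, replace anything outside [a-z0-9.] with '_',
    # collapse runs of '_' and strip them from both ends.
    accepted = frozenset('abcdefghijklmnopqrstuvwxyz0123456789.')
    text = ''.join(c if c in accepted else '_' for c in text.lower())
    return re.sub('_+', '_', text).strip('_')

print(normalize('  The Beatles!!'))  # → the_beatles
```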
def build_dot_value(key, value): if (key.count('.') == 0): return (key, value) final_value = value reverse_split = key.split('.')[::(- 1)] end = (len(reverse_split) - 1) for (idx, k) in enumerate(reverse_split): if (idx == end): return (k, final_value) final_value = {k: final_value}
Build new dictionaries based off of the dot notation key. For example, if a key were 'x.y.z' and the value was 'foo', we would expect a return value of: ('x', {'y': {'z': 'foo'}}) Args: key (str): The key to build a dictionary off of. value: The value associated with the dot notation key. Returns: tuple: A 2-tuple where the first element is the key of the outermost scope (e.g. left-most in the dot notation key) and the value is the constructed value for that key (e.g. a dictionary)
codesearchnet
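An equivalent loop-based sketch of the dot-notation builder above, exercised with the example from the docstring:

```python
def build_dot_value(key, value):
    # 'x.y.z', 'foo' -> ('x', {'y': {'z': 'foo'}})
    if '.' not in key:
        return key, value
    parts = key.split('.')
    final = value
    # Wrap the value from the innermost key outward.
    for k in reversed(parts[1:]):
        final = {k: final}
    return parts[0], final

print(build_dot_value('x.y.z', 'foo'))  # → ('x', {'y': {'z': 'foo'}})
```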
def stop(self, timeout=None): assert self.state == STARTED, "Process not started" self.state = STOPPING self._run_hook(ProcessStopHook, timeout=timeout) for s in self._spawned: if not s.ready(): self.log.debug( "Waiting for %s *%s **%s", s._function, s._args, s._kwargs) s.wait(timeout=timeout) self._spawned = [] self._controllers = OrderedDict() self._unpublished = set() self.state = STOPPED self.log.debug("Done process.stop()")
Stop the process and wait for it to finish Args: timeout (float): Maximum amount of time to wait for each spawned object. None means forever
juraj-google-style
def _resize_images(self, x, height_factor, width_factor, data_format, interpolation='nearest'): if data_format not in {'channels_last', 'channels_first'}: raise ValueError(f'Invalid `data_format` argument: {data_format}') if data_format == 'channels_first': x = ops.transpose(x, [0, 2, 3, 1]) if interpolation == 'nearest': x = ops.repeat(x, height_factor, axis=1) x = ops.repeat(x, width_factor, axis=2) else: shape = ops.shape(x) new_shape = (shape[1] * height_factor, shape[2] * width_factor) x = ops.image.resize(x, new_shape, interpolation=interpolation) if data_format == 'channels_first': x = ops.transpose(x, [0, 3, 1, 2]) return x
Resizes the images contained in a 4D tensor. Args: x: Tensor or variable to resize. height_factor: Positive integer. width_factor: Positive integer. data_format: One of `"channels_first"`, `"channels_last"`. interpolation: A string, one of `"bicubic"`, `"bilinear"`, `"lanczos3"`, `"lanczos5"`, or `"nearest"`. Returns: A tensor.
github-repos
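The `nearest` branch above is plain element repetition along the two spatial axes; a NumPy sketch of the channels-last case (not the Keras ops API):

```python
import numpy as np

def upsample_nearest(x, height_factor, width_factor):
    # x: (batch, height, width, channels); repeat rows, then columns.
    x = np.repeat(x, height_factor, axis=1)
    x = np.repeat(x, width_factor, axis=2)
    return x

x = np.arange(4).reshape(1, 2, 2, 1)
print(upsample_nearest(x, 2, 2).shape)  # → (1, 4, 4, 1)
```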
def includes(self, lo_freq: float) -> bool: if (self._lb <= lo_freq <= self._ub): return True return False
Whether `lo_freq` is within the `LoRange`. Args: lo_freq: LO frequency to be checked Returns: bool: True if lo_freq is included in this range, otherwise False
codesearchnet
def knots_from_marginal(marginal, nr_knots, spline_order): cumsum = np.cumsum(marginal) cumsum = (cumsum / cumsum.max()) borders = np.linspace(0, 1, nr_knots) knot_placement = (([0] + np.unique([np.where((cumsum >= b))[0][0] for b in borders[1:(- 1)]]).tolist()) + [(len(marginal) - 1)]) knots = augknt(knot_placement, spline_order) return knots
Determines knot placement based on a marginal distribution. It places knots such that each knot covers the same amount of probability mass. Two of the knots are reserved for the borders which are treated separately. For example, a uniform distribution with 5 knots will cause the knots to be equally spaced with 25% of the probability mass between each two knots. Input: marginal: Array Estimate of the marginal distribution used to estimate knot placement. nr_knots: int Number of knots to be placed. spline_order: int Order of the splines Returns: knots: Array Sequence of knot positions
codesearchnet
def append_filter(self, structure_filter): hdict = structure_filter.as_dict() hdict["input_structure"] = self.final_structure.as_dict() self.history.append(hdict)
Adds a filter. Args: structure_filter (StructureFilter): A filter implementing the AbstractStructureFilter API. Tells transmuter what structures to retain.
juraj-google-style
def unzip(self, overwrite: bool=False): if (self.zip_content and (not overwrite)): raise FileExistsError(str(self.temp_dir)) LOGGER.debug('unzipping miz to temp dir') try: with ZipFile(str(self.miz_path)) as zip_file: LOGGER.debug('reading infolist') self.zip_content = [f.filename for f in zip_file.infolist()] self._extract_files_from_zip(zip_file) except BadZipFile: raise BadZipFile(str(self.miz_path)) except: LOGGER.exception('error while unzipping miz file: %s', self.miz_path) raise LOGGER.debug('checking miz content') for miz_item in ['mission', 'options', 'warehouses', 'l10n/DEFAULT/dictionary', 'l10n/DEFAULT/mapResource']: if (not Path(self.temp_dir.joinpath(miz_item)).exists()): LOGGER.error('missing file in miz: %s', miz_item) raise FileNotFoundError(miz_item) self._check_extracted_content() LOGGER.debug('all files have been found, miz successfully unzipped')
Flattens a MIZ file into the temp dir Args: overwrite: allow overwriting existing files
codesearchnet
async def runCmdLine(self, line): opts = self.getCmdOpts(line) return (await self.runCmdOpts(opts))
Run a line of command input for this command. Args: line (str): Line to execute Examples: Run the foo command with some arguments: await foo.runCmdLine('foo --opt baz woot.com')
codesearchnet
def save_json(obj, filename, **kwargs): with open(filename, 'w', encoding='utf-8') as f: json.dump(obj, f, **kwargs)
Save an object as a JSON file. Args: obj: The object to save. Must be JSON-serializable. filename: Path to the output file. **kwargs: Additional arguments to `json.dump`.
codesearchnet
def codemirror_settings_update(configs, parameters, on=None, names=None): output = copy.deepcopy(configs) if names: output = {k: output[k] for k in names} if (not on): on = output.keys() for k in on: output[k].update(parameters) return output
Return a new dictionary of configs updated with given parameters. You may use ``on`` and ``names`` arguments to select configs or filter out some configs from the returned dict. Arguments: configs (dict): Dictionary of configurations to update. parameters (dict): Dictionary of parameters to apply on selected configurations. Keyword Arguments: on (list): List of configuration names to select for update. If empty, all given configurations will be updated. names (list): List of configuration names to keep. If not empty, only those configurations will be in the returned dict. Else every config from the original dict will be present. Returns: dict: Dict of configurations with updated parameters.
codesearchnet
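A usage check of the function above; the deep copy means the input dict is left untouched:

```python
import copy

def codemirror_settings_update(configs, parameters, on=None, names=None):
    # Deep-copy so the caller's configs are never mutated.
    output = copy.deepcopy(configs)
    if names:
        output = {k: output[k] for k in names}
    if not on:
        on = output.keys()
    for k in on:
        output[k].update(parameters)
    return output

configs = {'a': {'mode': 'css'}, 'b': {'mode': 'js'}}
print(codemirror_settings_update(configs, {'theme': 'dark'}, on=['a']))
# → {'a': {'mode': 'css', 'theme': 'dark'}, 'b': {'mode': 'js'}}
```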
def _GetSerializedAttributeContainerByIndex(self, container_type, index): container_list = self._GetSerializedAttributeContainerList(container_type) return container_list.GetAttributeContainerByIndex(index)
Retrieves a specific serialized attribute container. Args: container_type (str): attribute container type. index (int): attribute container index. Returns: bytes: serialized attribute container data or None if not available.
juraj-google-style
def _cast_value(self, value): if self.convert_datetimes: try: date_time = datetime.datetime.fromtimestamp(float(value)) if (datetime.datetime(1970, 1, 1) > date_time): raise ValueError else: return date_time except ValueError: pass tests = (int, float, str) for test in tests: try: return test(value) except ValueError: continue return value
Internal method that makes sure every value in the dictionary is properly cast into the correct type, instead of just treating everything like a string from the csv file. Args: value : The value to be cast Returns: The cast value.
codesearchnet
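The int/float/str cascade above (with the datetime branch omitted) can be exercised standalone — try each constructor in order and keep the first that succeeds:

```python
def cast_value(value):
    # Try progressively looser types; str() always succeeds for these inputs.
    for typ in (int, float, str):
        try:
            return typ(value)
        except ValueError:
            continue
    return value

print(cast_value('3'), cast_value('3.5'), cast_value('abc'))  # → 3 3.5 abc
```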
def autodecode(b): import warnings import chardet try: return b.decode() except UnicodeError: result = chardet.detect(b) if result['confidence'] < 0.95: warnings.warn('autodecode failed with utf-8; guessing %s' % result['encoding']) return b.decode(result['encoding'])
Try to decode ``bytes`` to text - try default encoding first, otherwise try to autodetect Args: b (bytes): byte string Returns: str: decoded text string
juraj-google-style
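The fallback idea without the chardet dependency — latin-1 here is an illustrative stand-in for the detected encoding, not what chardet would report:

```python
import warnings

def autodecode(b, fallback='latin-1'):
    # Try the default (UTF-8) codec first; fall back on failure.
    try:
        return b.decode()
    except UnicodeError:
        warnings.warn('utf-8 failed; falling back to %s' % fallback)
        return b.decode(fallback)

print(autodecode(b'caf\xc3\xa9'))  # → café
```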
def is_sparse(x): return isinstance(x, (SparseTensor, SparseTensorValue))
Check whether `x` is sparse. Check whether an object is a `tf.sparse.SparseTensor` or `tf.compat.v1.SparseTensorValue`. Args: x: A python object to check. Returns: `True` iff `x` is a `tf.sparse.SparseTensor` or `tf.compat.v1.SparseTensorValue`.
github-repos
def collection(self, collection_id): child_path = (self._path + (collection_id,)) return self._client.collection(*child_path)
Create a sub-collection underneath the current document. Args: collection_id (str): The sub-collection identifier (sometimes referred to as the "kind"). Returns: ~.firestore_v1beta1.collection.CollectionReference: The child collection.
codesearchnet
def average_pooling1d(inputs, pool_size, strides, padding='valid', data_format='channels_last', name=None): warnings.warn('`tf.layers.average_pooling1d` is deprecated and will be removed in a future version. Please use `tf.keras.layers.AveragePooling1D` instead.') layer = AveragePooling1D(pool_size=pool_size, strides=strides, padding=padding, data_format=data_format, name=name) return layer.apply(inputs)
Average Pooling layer for 1D inputs. Args: inputs: The tensor over which to pool. Must have rank 3. pool_size: An integer or tuple/list of a single integer, representing the size of the pooling window. strides: An integer or tuple/list of a single integer, specifying the strides of the pooling operation. padding: A string. The padding method, either 'valid' or 'same'. Case-insensitive. data_format: A string, one of `channels_last` (default) or `channels_first`. The ordering of the dimensions in the inputs. `channels_last` corresponds to inputs with shape `(batch, length, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, length)`. name: A string, the name of the layer. Returns: The output tensor, of rank 3. Raises: ValueError: if eager execution is enabled.
github-repos
def plot_entropy(self, tmin, tmax, ntemp, ylim=None, **kwargs): temperatures = np.linspace(tmin, tmax, ntemp) if self.structure: ylabel = r"$S$ (J/K/mol)" else: ylabel = r"$S$ (J/K/mol-c)" fig = self._plot_thermo(self.dos.entropy, temperatures, ylabel=ylabel, ylim=ylim, **kwargs) return fig
Plots the vibrational entropy in a temperature range. Args: tmin: minimum temperature tmax: maximum temperature ntemp: number of steps ylim: tuple specifying the y-axis limits. kwargs: kwargs passed to the matplotlib function 'plot'. Returns: matplotlib figure
juraj-google-style
def AddCredentialOptions(self, argument_group): argument_group.add_argument('--credential', action='append', default=[], type=str, dest='credentials', metavar='TYPE:DATA', help='Define a credential that can be used to unlock encrypted volumes e.g. BitLocker. The credential is defined as type:data e.g. "password:BDE-test". Supported credential types are: {0:s}. Binary key data is expected to be passed in BASE-16 encoding (hexadecimal). WARNING credentials passed via command line arguments can end up in logs, so use this option with care.'.format(', '.join(self._SUPPORTED_CREDENTIAL_TYPES)))
Adds the credential options to the argument group. The credential options are used to unlock encrypted volumes. Args: argument_group (argparse._ArgumentGroup): argparse argument group.
codesearchnet
def state_invariant_scope(self, state: Sequence[tf.Tensor]): scope = {} scope.update(self.non_fluents_scope()) scope.update(self.state_scope(state)) return scope
Returns the state invariant fluent scope for the current `state`. Args: state (Sequence[tf.Tensor]): The current state fluents. Returns: A mapping from fluent names to :obj:`rddl2tf.fluent.TensorFluent`.
juraj-google-style
def replace_batch_norm(model): for name, module in model.named_children(): if isinstance(module, nn.BatchNorm2d): new_module = TestDetrFrozenBatchNorm2d(module.num_features) if not module.weight.device == torch.device('meta'): new_module.weight.data.copy_(module.weight) new_module.bias.data.copy_(module.bias) new_module.running_mean.data.copy_(module.running_mean) new_module.running_var.data.copy_(module.running_var) model._modules[name] = new_module if len(list(module.children())) > 0: replace_batch_norm(module)
Recursively replace all `torch.nn.BatchNorm2d` with `TestDetrFrozenBatchNorm2d`. Args: model (torch.nn.Module): input model
github-repos
def cumsum(x, dim, exclusive=False): with tf.variable_scope('cumsum'): new_name = 'tmp_dim_cumsum' new_dim = Dimension(new_name, dim.size) new_shape = x.shape.rename_dimension(dim.name, new_name) comparator = (less if exclusive else less_equal) m = cast(comparator(mtf_range(x.mesh, dim, dtype=tf.float32), mtf_range(x.mesh, new_dim, dtype=tf.float32)), x.dtype) ret = einsum([x, m], output_shape=new_shape) return reshape(ret, x.shape)
Cumulative sum. Args: x: a Tensor dim: a Dimension exclusive: a boolean Returns: a Tensor with the same shape as x.
codesearchnet
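The mask-and-contract trick above generalizes beyond Mesh TensorFlow; the same construction in NumPy, contracting the input against a triangular comparison mask with einsum:

```python
import numpy as np

def cumsum_mask(x, exclusive=False):
    # m[j, i] is 1 where j < i (exclusive) or j <= i (inclusive);
    # contracting over j yields the running sum at each position i.
    n = len(x)
    j = np.arange(n)[:, None]  # source index
    i = np.arange(n)[None, :]  # output index
    m = (j < i) if exclusive else (j <= i)
    return np.einsum('j,ji->i', x, m.astype(x.dtype))

x = np.array([1.0, 2.0, 3.0])
print(cumsum_mask(x))                  # → [1. 3. 6.]
print(cumsum_mask(x, exclusive=True))  # → [0. 1. 3.]
```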
def from_composition_and_entries(comp, entries_in_chemsys, working_ion_symbol="Li"): pd = PhaseDiagram(entries_in_chemsys) return ConversionElectrode.from_composition_and_pd(comp, pd, working_ion_symbol)
Convenience constructor to make a ConversionElectrode from a composition and all entries in a chemical system. Args: comp: Starting composition for ConversionElectrode, e.g., Composition("FeF3") entries_in_chemsys: Sequence containing all entries in a chemical system. E.g., all Li-Fe-F containing entries. working_ion_symbol: Element symbol of working ion. Defaults to Li.
juraj-google-style
def __init__( self, identifier=None, location=None, parent=None, **kwargs): if (not identifier and not location) or not parent: raise ValueError('Missing identifier and location, or parent value.') super(APFSPathSpec, self).__init__(parent=parent, **kwargs) self.identifier = identifier self.location = location
Initializes a path specification. Note that an APFS path specification must have a parent. Args: identifier (Optional[int]): identifier. location (Optional[str]): location. parent (Optional[PathSpec]): parent path specification. Raises: ValueError: when parent or both identifier and location are not set.
juraj-google-style
def set(self, response: 'requests.Response') -> None: self.data[response.url] = SavedEndpoint(response.json(), self._get_expiration(response.headers))
Adds a response to the cache. Args: response: response from ESI Returns: None
codesearchnet
def add_features_to_nglview(view, seqprop, structprop, chain_id, use_representatives=False): if not structprop.chains.has_id(chain_id): structprop.parse_structure() if not structprop.chains.has_id(chain_id): raise ValueError('Chain {} not present in structure {}'.format(chain_id, structprop.id)) if not seqprop.features: log.warning('{}: no stored features'.format(seqprop.id)) for f in seqprop.features: if f.type.lower() == 'disulfide bond': disulfide = map_seqprop_resnums_to_structprop_resnums(resnums=[f.location.start + 1, f.location.end], seqprop=seqprop, structprop=structprop, chain_id=chain_id, use_representatives=use_representatives) to_view = [str(x)+'.CA' for x in list(disulfide.values())] view.add_distance(atom_pair=[to_view], color='black') log.info('Disulfide bridge at residues {} & {}'.format(f.location.start + 1, f.location.end)) if f.type.lower() == 'dna-binding region' or f.type.lower() == 'nucleotide phosphate-binding region': impres = map_seqprop_resnums_to_structprop_resnums(resnums=[f.location.start + 1, f.location.end], seqprop=seqprop, structprop=structprop, chain_id=chain_id, use_representatives=use_representatives) if f.location.start + 1 in impres and f.location.end in impres: mapped_start = impres[f.location.start + 1] mapped_end = impres[f.location.end] view.add_ball_and_stick(selection=':{} and ( {}-{} )'.format(chain_id, mapped_start, mapped_end), color='black') log.info('{} at sequence region {}-{}, structure residues {}-{}'.format(f.type, f.location.start, f.location.end, mapped_start, mapped_end)) if f.location.end - 1 == f.location.start: if f.type.lower() == 'sequence variant' or f.type.lower() == 'mutagenesis site': continue impres = map_seqprop_resnums_to_structprop_resnums(resnums=f.location.end, seqprop=seqprop, structprop=structprop, chain_id=chain_id, use_representatives=use_representatives) if f.location.end in impres: impres_mapped = impres[f.location.end] view.add_ball_and_stick(selection=str(impres_mapped), color='black') view.add_label(selection=':{} and {}'.format(chain_id, impres_mapped), label_type='res', color='black') log.info('{} at sequence residue {}, structure residue {}'.format(f.type, f.location.end, impres_mapped))
Add select features from the selected SeqProp object to an NGLWidget view object. Currently parsing for: * Single residue features (ie. metal binding sites) * Disulfide bonds Args: view (NGLWidget): NGLWidget view object seqprop (SeqProp): SeqProp object structprop (StructProp): StructProp object chain_id (str): ID of the structure's chain to get annotation from
juraj-google-style
def record2marcxml(record): schema_name = _get_schema_name(record) if (schema_name == 'hep'): marcjson = hep2marc.do(record) elif (schema_name == 'authors'): marcjson = hepnames2marc.do(record) else: raise NotImplementedError(u'JSON -> MARC rules missing for "{}"'.format(schema_name)) record = RECORD() for (key, values) in sorted(iteritems(marcjson)): (tag, ind1, ind2) = _parse_key(key) if _is_controlfield(tag, ind1, ind2): value = force_single_element(values) if (not isinstance(value, text_type)): value = text_type(value) record.append(CONTROLFIELD(_strip_invalid_chars_for_xml(value), {'tag': tag})) else: for value in force_list(values): datafield = DATAFIELD({'tag': tag, 'ind1': ind1, 'ind2': ind2}) for (code, els) in sorted(iteritems(value)): for el in force_list(els): if (not isinstance(el, text_type)): el = text_type(el) datafield.append(SUBFIELD(_strip_invalid_chars_for_xml(el), {'code': code})) record.append(datafield) return tostring(record, encoding='utf8', pretty_print=True)
Convert a JSON record to a MARCXML string. Deduces which set of rules to use by parsing the ``$schema`` key, as it unequivocally determines which kind of record we have. Args: record(dict): a JSON record. Returns: str: a MARCXML string converted from the record.
codesearchnet
def show_ordered_code(code, extra_col=None): if not extra_col: extra_col = {} _setup_tabulate() block_lines = [] op_lines = [] boundaries = [] start = 0 for block in code.order: end = start ids = lambda xs: [x.id for x in xs] block_lines.append(f'block: {block.id} -> {ids(block.outgoing)} <- {ids(block.incoming)}') for op in block: end += 1 op_lines.append([op.index, op.__class__.__name__, getattr(op, 'argval', ''), op.target and op.target.index, op.block_target and op.block_target.index, '✓' if op.push_exc_block else '', '✓' if op.pop_exc_block else '', op.next and op.next.index, op.line, extra_col.get(op.index)]) boundaries.append((start, end)) start = end headers = ['ix', 'op', 'arg', 'tgt', 'btgt', '>exc', '<exc', 'next', 'line', 'extra'] block_table = tabulate.tabulate(op_lines, headers, tablefmt='presto') block_table = block_table.split('\n') tab = [[block_table[0]]] block_table = block_table[2:] for blk, (start, end) in zip(block_lines, boundaries): tab.append([blk]) tab.append(['\n'.join(block_table[start:end])]) print(tabulate.tabulate(tab, tablefmt='fancy_grid'))
Print out the block structure of an OrderedCode object as a table. Args: code: A blocks.OrderedCode object extra_col: A map from opcode_index to a single additional cell to display
github-repos
def replace_case(self, case_obj): LOG.info("Saving case %s", case_obj['_id']) case_obj['updated_at'] = datetime.datetime.now() updated_case = self.case_collection.find_one_and_replace( {'_id': case_obj['_id']}, case_obj, return_document=pymongo.ReturnDocument.AFTER ) return updated_case
Replace an existing case with a new one. Keeps the object id. Args: case_obj(dict) Returns: updated_case(dict)
juraj-google-style
def poweroff_vmss(access_token, subscription_id, resource_group, vmss_name): endpoint = ''.join([get_rm_endpoint(), '/subscriptions/', subscription_id, '/resourceGroups/', resource_group, '/providers/Microsoft.Compute/virtualMachineScaleSets/', vmss_name, '/powerOff?api-version=', COMP_API]) body = '{"instanceIds" : ["*"]}' return do_post(endpoint, body, access_token)
Power off all the VMs in a virtual machine scale set. Args: access_token (str): A valid Azure authentication token. subscription_id (str): Azure subscription id. resource_group (str): Azure resource group name. vmss_name (str): Name of the virtual machine scale set. Returns: HTTP response.
juraj-google-style
def users(self): if (not self.__users): self.__users = Users(self.__connection) return self.__users
Gets the Users API client. Returns: Users:
codesearchnet
def stop_site(name): ps_cmd = ['Stop-WebSite', r"'{0}'".format(name)] cmd_ret = _srvmgr(ps_cmd) return cmd_ret['retcode'] == 0
Stop a Web Site in IIS. .. versionadded:: 2017.7.0 Args: name (str): The name of the website to stop. Returns: bool: True if successful, otherwise False CLI Example: .. code-block:: bash salt '*' win_iis.stop_site name='My Test Site'
juraj-google-style
def logsumexp(x, axis=None, keepdims=False): return math_ops.reduce_logsumexp(x, axis, keepdims)
Computes log(sum(exp(elements across dimensions of a tensor))). This function is more numerically stable than log(sum(exp(x))). It avoids overflows caused by taking the exp of large inputs and underflows caused by taking the log of small inputs. Args: x: A tensor or variable. axis: An integer, the axis to reduce over. keepdims: A boolean, whether to keep the dimensions or not. If `keepdims` is `False`, the rank of the tensor is reduced by 1. If `keepdims` is `True`, the reduced dimension is retained with length 1. Returns: The reduced tensor.
github-repos
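The stability trick behind `reduce_logsumexp` can be sketched in plain Python: subtract the running maximum before exponentiating so `exp()` cannot overflow, then add it back after the log. This is an illustrative sketch, not the TensorFlow implementation:

```python
import math

def logsumexp_stable(xs):
    # Shift by the max before exponentiating; add it back after the log.
    # log(sum(exp(x))) == m + log(sum(exp(x - m))) for any m, here m = max(xs).
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

# A naive log(sum(exp(x))) overflows for large inputs; the shifted form does not.
big = [1000.0, 1000.0]
print(logsumexp_stable(big))  # ~ 1000 + log(2)
```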
def ws025(self, value=None): if (value is not None): try: value = float(value) except ValueError: raise ValueError('value {} need to be of type float for field `ws025`'.format(value)) self._ws025 = value
Corresponds to IDD Field `ws025` Wind speed corresponding to 2.5% annual cumulative frequency of occurrence Args: value (float): value for IDD Field `ws025` Unit: m/s if `value` is None it will not be checked against the specification and is assumed to be a missing value Raises: ValueError: if `value` is not a valid value
codesearchnet
def state_invariant_scope(self, state: Sequence[tf.Tensor]): scope = {} scope.update(self.non_fluents_scope()) scope.update(self.state_scope(state)) return scope
Returns the state invariant fluent scope for the current `state`. Args: state (Sequence[tf.Tensor]): The current state fluents. Returns: A mapping from fluent names to :obj:`rddl2tf.fluent.TensorFluent`.
codesearchnet
def _CreateRoutePatternsFolder(self, parent, route, style_id=None, visible=True): pattern_id_to_trips = route.GetPatternIdTripDict() if (not pattern_id_to_trips): return None pattern_trips = pattern_id_to_trips.values() pattern_trips.sort((lambda a, b: cmp(len(b), len(a)))) folder = self._CreateFolder(parent, 'Patterns', visible) for (n, trips) in enumerate(pattern_trips): trip_ids = [trip.trip_id for trip in trips] name = ('Pattern %d (trips: %d)' % ((n + 1), len(trips))) description = ('Trips using this pattern (%d in total): %s' % (len(trips), ', '.join(trip_ids))) placemark = self._CreatePlacemark(folder, name, style_id, visible, description) coordinates = [(stop.stop_lon, stop.stop_lat) for stop in trips[0].GetPattern()] self._CreateLineString(placemark, coordinates) return folder
Create a KML Folder containing placemarks for each pattern in the route. A pattern is a sequence of stops used by one of the trips in the route. If there are no patterns for the route then no folder is created and None is returned. Args: parent: The parent ElementTree.Element instance. route: The transitfeed.Route instance. style_id: The id of a style to use if not None. visible: Whether the folder is initially visible or not. Returns: The Folder ElementTree.Element instance or None if there are no patterns.
codesearchnet
def init_from_acceptor(self, acceptor): self.states = copy.deepcopy(acceptor.states) self.alphabet = copy.deepcopy(acceptor.alphabet) self.osyms = copy.deepcopy(acceptor.osyms) self.isyms = copy.deepcopy(acceptor.isyms)
Initialize this automaton from an existing acceptor by deep-copying its states, alphabet and input/output symbol tables. Args: acceptor: The acceptor to copy from. Returns: None
juraj-google-style
def color_scale_HSV(c: Color, scoef: float, vcoef: float) -> None: color_p = ffi.new('TCOD_color_t*') (color_p.r, color_p.g, color_p.b) = (c.r, c.g, c.b) lib.TCOD_color_scale_HSV(color_p, scoef, vcoef) c[:] = (color_p.r, color_p.g, color_p.b)
Scale a color's saturation and value. Does not return a new Color. ``c`` is modified inplace. Args: c (Union[Color, List[int]]): A Color instance, or an [r, g, b] list. scoef (float): Saturation multiplier, from 0 to 1. Use 1 to keep current saturation. vcoef (float): Value multiplier, from 0 to 1. Use 1 to keep current value.
codesearchnet
def sigmoid_cross_entropy_with_logits(logits, targets): if logits.shape != targets.shape: raise ValueError( "logits shape must equal targets shape" "logits=%s targets=%s" % (logits.to_string, targets.to_string)) x = logits z = targets return mtf.relu(x) - x * z + mtf.log(1 + mtf.exp(-mtf.abs(x)))
Sigmoid cross-entropy loss. Args: logits: a mtf.Tensor targets: a mtf.Tensor with the same shape as logits Returns: a mtf.Tensor whose shape is equal to logits.shape Raises: ValueError: if the shapes do not match.
juraj-google-style
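The expression returned above is the standard numerically stable form of sigmoid cross-entropy. A plain-Python check (illustrative only, no mtf involved) that it agrees with the naive definition wherever both are finite:

```python
import math

def stable_bce(x, z):
    # max(x, 0) - x*z + log1p(exp(-|x|)): never exponentiates a large positive value.
    return max(x, 0.0) - x * z + math.log1p(math.exp(-abs(x)))

def naive_bce(x, z):
    # Direct -[z*log(s) + (1-z)*log(1-s)] with s = sigmoid(x); overflows for large |x|.
    s = 1.0 / (1.0 + math.exp(-x))
    return -(z * math.log(s) + (1.0 - z) * math.log(1.0 - s))

for x, z in [(0.3, 1.0), (-2.0, 0.0), (5.0, 1.0)]:
    assert abs(stable_bce(x, z) - naive_bce(x, z)) < 1e-9

# The stable form still works where the naive one would divide log(0):
print(stable_bce(1000.0, 0.0))  # 1000.0
```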
def ParseRecord(self, parser_mediator, key, structure): if key not in ('header', 'logline'): raise errors.ParseError( 'Unable to parse record, unknown structure: {0:s}'.format(key)) if key == 'logline': self._ParseLine(parser_mediator, structure) elif key == 'header': self._ParseHeader(parser_mediator, structure)
Parse each record structure and return an EventObject if applicable. Args: parser_mediator (ParserMediator): mediates interactions between parsers and other components, such as storage and dfvfs. key (str): identifier of the structure of tokens. structure (pyparsing.ParseResults): structure of tokens derived from a line of a text file. Raises: ParseError: when the structure type is unknown.
juraj-google-style
def binfiles_set(self, isnap): possible_files = set((self.filename(fstem, isnap, force_legacy=True) for fstem in phyvars.FIELD_FILES)) return (possible_files & self.files)
Set of existing binary files at a given snap. Args: isnap (int): snapshot index. Returns: set of pathlib.Path: the set of output files available for this snapshot number.
codesearchnet
def vibrational_internal_energy(self, temperature, volume): y = self.debye_temperature(volume) / temperature return self.kb * self.natoms * temperature * (9./8. * y + 3*self.debye_integral(y))
Vibrational internal energy, U_vib(V, T). Eq(4) in doi.org/10.1016/j.comphy.2003.12.001 Args: temperature (float): temperature in K volume (float): in Ang^3 Returns: float: vibrational internal energy in eV
juraj-google-style
def default_get_arg_names_from_class_name(class_name): parts = [] rest = class_name if rest.startswith('_'): rest = rest[1:] while True: m = re.match('([A-Z][a-z]+)(.*)', rest) if (m is None): break parts.append(m.group(1)) rest = m.group(2) if (not parts): return [] return ['_'.join((part.lower() for part in parts))]
Converts normal class names into normal arg names. Normal class names are assumed to be CamelCase with an optional leading underscore. Normal arg names are assumed to be lower_with_underscores. Args: class_name: a class name, e.g., "FooBar" or "_FooBar" Returns: all likely corresponding arg names, e.g., ["foo_bar"]
codesearchnet
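Since `default_get_arg_names_from_class_name` is a pure function, its behavior is easy to exercise; the same logic is reproduced below so the snippet runs standalone:

```python
import re

def camel_to_arg_names(class_name):
    # Same CamelCase -> lower_with_underscores walk as the entry above:
    # strip one optional leading underscore, then peel [A-Z][a-z]+ chunks.
    parts = []
    rest = class_name[1:] if class_name.startswith('_') else class_name
    while True:
        m = re.match(r'([A-Z][a-z]+)(.*)', rest)
        if m is None:
            break
        parts.append(m.group(1))
        rest = m.group(2)
    return ['_'.join(p.lower() for p in parts)] if parts else []

print(camel_to_arg_names('_FooBar'))  # ['foo_bar']
print(camel_to_arg_names('foo'))      # [] -- not a "normal" class name
```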
def _extract_match(self, candidate, offset): if (_SLASH_SEPARATED_DATES.search(candidate)): return None if _TIME_STAMPS.search(candidate): following_text = self.text[offset + len(candidate):] if _TIME_STAMPS_SUFFIX.match(following_text): return None match = self._parse_and_verify(candidate, offset) if match is not None: return match return self._extract_inner_match(candidate, offset)
Attempts to extract a match from a candidate string. Arguments: candidate -- The candidate text that might contain a phone number. offset -- The offset of candidate within self.text Returns the match found, None if none can be found
juraj-google-style
def create_border(self, border_style_type): if (border_style_type == MenuBorderStyleType.ASCII_BORDER): return self.create_ascii_border() elif (border_style_type == MenuBorderStyleType.LIGHT_BORDER): return self.create_light_border() elif (border_style_type == MenuBorderStyleType.HEAVY_BORDER): return self.create_heavy_border() elif (border_style_type == MenuBorderStyleType.DOUBLE_LINE_BORDER): return self.create_doubleline_border() elif (border_style_type == MenuBorderStyleType.HEAVY_OUTER_LIGHT_INNER_BORDER): return self.create_heavy_outer_light_inner_border() elif (border_style_type == MenuBorderStyleType.DOUBLE_LINE_OUTER_LIGHT_INNER_BORDER): return self.create_doubleline_outer_light_inner_border() else: self.logger.info('Unrecognized border style type: {}. Defaulting to ASCII.'.format(border_style_type)) return self.create_ascii_border()
Create a new MenuBorderStyle instance based on the given border style type. Args: border_style_type (int): an integer value from :obj:`MenuBorderStyleType`. Returns: :obj:`MenuBorderStyle`: a new MenuBorderStyle instance of the specified style.
codesearchnet
def from_tuplelist(tuple_list): out = Layout() for physical, virtual in enumerate(tuple_list): if virtual is None: continue elif Layout.is_virtual(virtual): if virtual in out._v2p: raise LayoutError('Duplicate values not permitted; Layout is bijective.') out[virtual] = physical else: raise LayoutError("The list should contain elements of the form" " (Register, integer) or None") return out
Populates a Layout from a list containing virtual qubits---(QuantumRegister, int) tuples---, or None. Args: tuple_list (list): e.g.: [qr[0], None, qr[2], qr[3]] Returns: Layout: the corresponding Layout object Raises: LayoutError: If the elements are not (Register, integer) or None
juraj-google-style
def sym_hash(x: Any) -> int: if isinstance(x, Symbolic): return x.sym_hash() if inspect.isfunction(x): return hash(x.__code__.co_code) if inspect.ismethod(x): return hash((sym_hash(x.__self__), x.__code__.co_code)) return hash(x)
Returns hash of value. Use symbolic hashing function if possible. Example:: @pg.symbolize class A: def __init__(self, x): self.x = x assert hash(A(1)) != hash(A(1)) assert pg.hash(A(1)) == pg.hash(A(1)) assert pg.hash(pg.Dict(x=[A(1)])) == pg.hash(pg.Dict(x=[A(1)])) Args: x: Value for computing hash. Returns: The hash value for `x`.
github-repos
def JoinTypes(types): queue = collections.deque(types) seen = set() new_types = [] while queue: t = queue.popleft() if isinstance(t, pytd.UnionType): queue.extendleft(reversed(t.type_list)) elif isinstance(t, pytd.NothingType): pass elif t not in seen: new_types.append(t) seen.add(t) if len(new_types) == 1: return new_types.pop() elif any((isinstance(t, pytd.AnythingType) for t in new_types)): nonetype = pytd.NamedType('builtins.NoneType') unresolved_nonetype = pytd.NamedType('NoneType') if any((t in (nonetype, unresolved_nonetype) for t in new_types)): return pytd.UnionType((pytd.AnythingType(), nonetype)) return pytd.AnythingType() elif new_types: return pytd.UnionType(tuple(new_types)) else: return pytd.NothingType()
Combine a list of types into a union type, if needed. Leaves singular return values alone, or wraps a UnionType around them if there are multiple ones, or if there are no elements in the list (or only NothingType) return NothingType. Arguments: types: A list of types. This list might contain other UnionTypes. If so, they are flattened. Returns: A type that represents the union of the types passed in. Order is preserved.
github-repos
def unparse_headers(hdrs): return (''.join([unparse_header(n, v) for (n, v) in hdrs.items()]) + '\r\n')
Serialize a dictionary of headers to a string. Args: hdrs: A dictionary of headers. Returns: The headers as a string that can be used in an NNTP POST.
codesearchnet
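`unparse_header` is not shown in this entry; assuming it renders a single `Name: value\r\n` line (a hypothetical helper matching the usual NNTP/RFC-5322 shape), the composition would look like:

```python
def unparse_header(name, value):
    # Assumed helper (not shown in the entry above): one header line.
    return '{0}: {1}\r\n'.format(name, value)

def unparse_headers(hdrs):
    # Join all header lines, then terminate the header block with a blank line.
    return ''.join(unparse_header(n, v) for n, v in hdrs.items()) + '\r\n'

msg = unparse_headers({'Subject': 'hello', 'From': 'a@example.com'})
print(repr(msg))  # 'Subject: hello\r\nFrom: a@example.com\r\n\r\n'
```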
class BayesianWatermarkDetectorModelOutput(ModelOutput): loss: Optional[torch.FloatTensor] = None posterior_probabilities: Optional[torch.FloatTensor] = None
Base class for outputs of models predicting if the text is watermarked. Args: loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided): Language modeling loss. posterior_probabilities (`torch.FloatTensor` of shape `(1,)`): Posterior probability that the text is watermarked.
github-repos
def draw_rects(self, *rects): rect_array = ffi.new('SDL_Rect[]', len(rects)) for (i, r) in enumerate(rects): rect_array[i] = r._ptr[0] check_int_err(lib.SDL_RenderDrawRects(self._ptr, rect_array, len(rects)))
Draw some number of rectangles on the current rendering target. Args: *rects (Rect): The destination rectangles. Raises: SDLError: If an error is encountered.
codesearchnet
def load_structure_path(self, structure_path, file_type): if not file_type: raise ValueError('File type must be specified') self.file_type = file_type self.structure_dir = op.dirname(structure_path) self.structure_file = op.basename(structure_path)
Load a structure file and provide pointers to its location Args: structure_path (str): Path to structure file file_type (str): Type of structure file
juraj-google-style
def managed(name, table, data, record=None):
    ret = {'name': name, 'changes': {}, 'result': False, 'comment': ''}
    if record is None:
        record = name
    current_data = {column: __salt__['openvswitch.db_get'](table, record, column) for column in data}
    comment_changes = 'Columns have been updated.'
    comment_no_changes = 'All columns are already up to date.'
    comment_error = 'Error while updating column {0}: {1}'
    if __opts__['test']:
        for column in data:
            if data[column] != current_data[column]:
                ret['changes'][column] = {'old': current_data[column], 'new': data[column]}
        if ret['changes']:
            ret['result'] = None
            ret['comment'] = comment_changes
        else:
            ret['result'] = True
            ret['comment'] = comment_no_changes
        return ret
    for column in data:
        if data[column] != current_data[column]:
            result = __salt__['openvswitch.db_set'](table, record, column, data[column])
            if result is not None:
                ret['comment'] = comment_error.format(column, result)
                ret['result'] = False
                return ret
            ret['changes'][column] = {'old': current_data[column], 'new': data[column]}
    ret['result'] = True
    ret['comment'] = comment_changes if ret['changes'] else comment_no_changes
    return ret
Ensures that the specified columns of the named record have the specified values. Args: name: The name of the record. table: The name of the table to which the record belongs. data: Dictionary containing a mapping from column names to the desired values. Columns that exist, but are not specified in this dictionary are not touched. record: The name of the record (optional). Replaces name if specified.
codesearchnet
def __init__(self, token=None, auth_test=False, verify=True, lazy=False): try: self.token = token if token else os.environ['SLACK_TOKEN'] except KeyError: raise ValueError('If not providing a token, must set SLACK_TOKEN envvar') if auth_test: response = self.auth_test() if not response['ok']: raise ValueError('Authentication Failed with response: {}'.format(response)) self.verify = verify self._channels = [] self._users = [] if not lazy: _ = self.channels _ = self.users
Instantiation an instance of the Slack API Args: token: {str} (required) API token, read from SLACK_TOKEN env var auth_test: {bool} verify this token verify: {bool} verify all API calls return with a True 'ok' lazy: {bool} Don't populate properties until called
juraj-google-style
def _read_mode_tsopt(self, size, kind): temp = struct.unpack('>II', self._read_fileng(size)) data = dict(kind=kind, length=size, val=temp[0], ecr=temp[1]) return data
Read Timestamps option. Positional arguments: * size - int, length of option * kind - int, 8 (Timestamps) Returns: * dict -- extracted Timestamps (TS) option Structure of TCP TSopt [RFC 7323]: +-------+-------+---------------------+---------------------+ |Kind=8 | 10 | TS Value (TSval) |TS Echo Reply (TSecr)| +-------+-------+---------------------+---------------------+ 1 1 4 4 Octets Bits Name Description 0 0 tcp.ts.kind Kind (8) 1 8 tcp.ts.length Length (10) 2 16 tcp.ts.val Timestamp Value 6 48 tcp.ts.ecr Timestamps Echo Reply
codesearchnet
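The two big-endian uint32s that `_read_mode_tsopt` unpacks (TSval, then TSecr) can be demonstrated with `struct` alone; this standalone sketch builds an option payload and parses it back the same way:

```python
import struct

# The TSopt payload after kind/length is two big-endian uint32s:
# TS Value then TS Echo Reply, exactly what struct.unpack('>II', ...) pulls apart.
payload = struct.pack('>II', 123456, 654321)
tsval, tsecr = struct.unpack('>II', payload)
option = dict(kind=8, length=10, val=tsval, ecr=tsecr)
print(option)  # {'kind': 8, 'length': 10, 'val': 123456, 'ecr': 654321}
```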
def with_row_splits_dtype(self, dtype): dtype = dtypes.as_dtype(dtype) if dtype not in (dtypes.int32, dtypes.int64): raise ValueError(f'Argument `row_splits` dtype must be int32 or int64. Received {dtype}.') if self._row_partition.dtype == dtype: return self current_values = self._values if isinstance(current_values, RaggedTensor): return RaggedTensor(values=current_values.with_row_splits_dtype(dtype), row_partition=self._row_partition.with_dtype(dtype), internal=True) else: return RaggedTensor(values=current_values, row_partition=self._row_partition.with_dtype(dtype), internal=True)
Returns a copy of this RaggedTensor with the given `row_splits` dtype. For RaggedTensors with multiple ragged dimensions, the `row_splits` for all nested `RaggedTensor` objects are cast to the given dtype. Args: dtype: The dtype for `row_splits`. One of `tf.int32` or `tf.int64`. Returns: A copy of this RaggedTensor, with the `row_splits` cast to the given type.
github-repos
async def getTempCoreCmdr(mods=None, outp=None): acm = genTempCoreProxy(mods) prox = (await acm.__aenter__()) cmdrcore = (await CmdrCore.anit(prox, outp=outp)) cmdrcore.acm = acm return cmdrcore
Get a CmdrCore instance which is backed by a temporary Cortex. Args: mods (list): A list of additional CoreModules to load in the Cortex. outp: A output helper. Will be used for the Cmdr instance. Notes: The CmdrCore returned by this should be fini()'d to tear down the temporary Cortex. Returns: CmdrCore: A CmdrCore instance.
codesearchnet
def GetLinkedFileEntry(self): link = self._GetLink() if (not link): return None path_spec = os_path_spec.OSPathSpec(location=link) return OSFileEntry(self._resolver_context, self._file_system, path_spec)
Retrieves the linked file entry, for example for a symbolic link. Returns: OSFileEntry: linked file entry or None if not available.
codesearchnet
def get_channel(self, channel_name, project_name, dataset_name): return self.resources.get_channel(channel_name, project_name, dataset_name)
Gets info about a channel given its name, name of its project , and name of its dataset. Arguments: channel_name (str): Channel name project_name (str): Project name dataset_name (str): Dataset name Returns: dict: Channel info
juraj-google-style
def coldestmonth(self, value=None): if value is not None: try: value = int(value) except ValueError: raise ValueError('value {} need to be of type int ' 'for field `coldestmonth`'.format(value)) if value < 1: raise ValueError('value need to be greater or equal 1 ' 'for field `coldestmonth`') if value > 12: raise ValueError('value need to be smaller 12 ' 'for field `coldestmonth`') self._coldestmonth = value
Corresponds to IDD Field `coldestmonth` Args: value (int): value for IDD Field `coldestmonth` value >= 1 value <= 12 if `value` is None it will not be checked against the specification and is assumed to be a missing value Raises: ValueError: if `value` is not a valid value
juraj-google-style
def savefits(cube, fitsname, **kwargs):
    dropdeg = kwargs.pop('dropdeg', False)
    ndim = len(cube.dims)
    FITSINFO = get_data('decode', 'data/fitsinfo.yaml')
    hdrdata = yaml.load(FITSINFO, dc.utils.OrderedLoader)
    if ndim == 2:
        header = fits.Header(hdrdata['dcube_2d'])
        data = cube.values.T
    elif ndim == 3:
        if dropdeg:
            header = fits.Header(hdrdata['dcube_2d'])
            data = cube.values[:, :, 0].T
        else:
            header = fits.Header(hdrdata['dcube_3d'])
            kidfq = cube.kidfq.values
            freqrange = ~np.isnan(kidfq)
            orderedfq = np.argsort(kidfq[freqrange])
            newcube = cube[:, :, orderedfq]
            data = newcube.values.T
    else:
        raise TypeError(ndim)
    if cube.coordsys == 'AZEL':
        header.update({'CTYPE1': 'dAZ', 'CTYPE2': 'dEL'})
    elif cube.coordsys == 'RADEC':
        header.update({'OBSRA': float(cube.xref), 'OBSDEC': float(cube.yref)})
    else:
        pass
    header.update({'CRVAL1': float(cube.x[0]),
                   'CDELT1': float(cube.x[1] - cube.x[0]),
                   'CRVAL2': float(cube.y[0]),
                   'CDELT2': float(cube.y[1] - cube.y[0]),
                   'DATE': datetime.now(timezone('UTC')).isoformat()})
    if (ndim == 3) and (not dropdeg):
        header.update({'CRVAL3': float(newcube.kidfq[0]),
                       'CDELT3': float(newcube.kidfq[1] - newcube.kidfq[0])})
    fitsname = str(Path(fitsname).expanduser())
    fits.writeto(fitsname, data, header, **kwargs)
    logger.info('{} has been created.'.format(fitsname))
Save a cube to a 3D-cube FITS file. Args: cube (xarray.DataArray): Cube to be saved. fitsname (str): Name of output FITS file. kwargs (optional): Other arguments common with astropy.io.fits.writeto().
codesearchnet
def log_mel_spectrogram(data, audio_sample_rate=8000, log_offset=0.0, window_length_secs=0.025, hop_length_secs=0.01, **kwargs): window_length_samples = int(round((audio_sample_rate * window_length_secs))) hop_length_samples = int(round((audio_sample_rate * hop_length_secs))) fft_length = (2 ** int(np.ceil((np.log(window_length_samples) / np.log(2.0))))) spectrogram = stft_magnitude(data, fft_length=fft_length, hop_length=hop_length_samples, window_length=window_length_samples) mel_spectrogram = np.dot(spectrogram, spectrogram_to_mel_matrix(num_spectrogram_bins=spectrogram.shape[1], audio_sample_rate=audio_sample_rate, **kwargs)) return np.log((mel_spectrogram + log_offset))
Convert waveform to a log magnitude mel-frequency spectrogram. Args: data: 1D np.array of waveform data. audio_sample_rate: The sampling rate of data. log_offset: Add this to values when taking log to avoid -Infs. window_length_secs: Duration of each window to analyze. hop_length_secs: Advance between successive analysis windows. **kwargs: Additional arguments to pass to spectrogram_to_mel_matrix. Returns: 2D np.array of (num_frames, num_mel_bins) consisting of log mel filterbank magnitudes for successive frames.
codesearchnet
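The window/hop/FFT sizing arithmetic at the top of `log_mel_spectrogram` can be checked with the standard library alone (same derivation, defaults taken from the signature above):

```python
import math

def stft_sizes(sample_rate=8000, window_secs=0.025, hop_secs=0.01):
    # Samples per analysis window and per hop, then the next
    # power of two >= window length for the FFT size.
    window = int(round(sample_rate * window_secs))
    hop = int(round(sample_rate * hop_secs))
    fft = 2 ** int(math.ceil(math.log(window, 2)))
    return window, hop, fft

print(stft_sizes())  # (200, 80, 256)
```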
def replace_batch_norm(model): for name, module in model.named_children(): if isinstance(module, nn.BatchNorm2d): new_module = DeformableDetrFrozenBatchNorm2d(module.num_features) if not module.weight.device == torch.device('meta'): new_module.weight.data.copy_(module.weight) new_module.bias.data.copy_(module.bias) new_module.running_mean.data.copy_(module.running_mean) new_module.running_var.data.copy_(module.running_var) model._modules[name] = new_module if len(list(module.children())) > 0: replace_batch_norm(module)
Recursively replace all `torch.nn.BatchNorm2d` with `DeformableDetrFrozenBatchNorm2d`. Args: model (torch.nn.Module): input model
github-repos
def downsample_residual(x, output_channels, dim='2d', stride=1, scope='h'):
    with tf.variable_scope(scope):
        if stride > 1:
            avg_pool = CONFIG[dim]['avg_pool']
            x = avg_pool(x, pool_size=(stride, stride), strides=(stride, stride), padding='VALID')
        input_channels = tf.shape(x)[3]
        diff = output_channels - input_channels
        # Zero-pad the channel dimension symmetrically up to output_channels.
        x = tf.pad(x, [[0, 0], [0, 0], [0, 0], [diff
        return x
Downsamples 'x' by `stride` using average pooling. Args: x: input tensor of size [N, H, W, C] output_channels: Desired number of output channels. dim: '2d' if 2-dimensional, '3d' if 3-dimensional. stride: What stride to use. Usually 1 or 2. scope: Optional variable scope. Returns: A downsampled tensor of size [N, H/2, W/2, output_channels] if stride is 2, else returns a tensor of size [N, H, W, output_channels] if stride is 1.
codesearchnet
def writegroup(self, auth, entries, defer=False): return self._call('writegroup', auth, [entries], defer)
Writes the given values for the respective resources in the list, all writes have same timestamp. Args: auth: cik for authentication. entries: List of key, value lists. eg. [[key, value], [k,v],,,]
juraj-google-style
def dump_property(self, name): if not hasattr(self, name): raise ArgumentError("Unknown property %s" % name) value = getattr(self, name) if name in self._complex_properties: value = self._complex_properties[name][0](value) return value
Serialize a property of this class by name. Args: name (str): The name of the property to dump. Returns: object: The serialized value of the property.
juraj-google-style
def _tf_extension_type_fields(cls): if '_tf_extension_type_cached_fields' in cls.__dict__: return cls._tf_extension_type_cached_fields try: type_hints = typing_extensions.get_type_hints(cls, include_extras=False) ok_to_cache = True except (NameError, AttributeError): type_hints = {} for base in reversed(cls.__mro__): type_hints.update(base.__dict__.get('__annotations__', {})) ok_to_cache = False fields = [] for name, value_type in type_hints.items(): default = getattr(cls, name, extension_type_field.ExtensionTypeField.NO_DEFAULT) fields.append(extension_type_field.ExtensionTypeField(name, value_type, default)) fields = tuple(fields) if ok_to_cache: cls._tf_extension_type_cached_fields = fields return fields
An ordered list describing the fields of this ExtensionType. Returns: A list of `ExtensionTypeField` objects. Forward references are resolved if possible, or left unresolved otherwise.
github-repos
def trainable(self, value): value = bool(value) self._trainable = value for v in self._trainable_variables: v.trainable = value for layer in self._layers: layer.trainable = value
Sets trainable attribute for the layer and its sublayers. When this value is changed during training (e.g. with a `Callback`) you need to call the parent `Model.make_train_function` with `force=True` in order to recompile the training graph. Args: value: Boolean with the desired state for the layer's trainable attribute.
github-repos
def GetCacheValueByObject(self, vfs_object): for identifier, cache_value in iter(self._values.items()): if not cache_value: raise RuntimeError('Missing cache value.') if cache_value.vfs_object == vfs_object: return identifier, cache_value return None, None
Retrieves the cache value for the cached object. Args: vfs_object (object): VFS object that was cached. Returns: tuple[str, ObjectsCacheValue]: identifier and cache value object or (None, None) if not cached. Raises: RuntimeError: if the cache value is missing.
juraj-google-style
def setProvisioningUrl(self, strURL='grl.com'): print '%s call setProvisioningUrl' % self.port self.provisioningUrl = strURL if self.deviceRole == Thread_Device_Role.Commissioner: cmd = WPANCTL_CMD + 'setprop Commissioner:ProvisioningUrl %s' %(strURL) print cmd return self.__sendCommand(cmd)[0] != "Fail" return True
set provisioning Url Args: strURL: Provisioning Url string Returns: True: successful to set provisioning Url False: fail to set provisioning Url
juraj-google-style
def _results_tc_args(self): results = [] if os.access(self.default_args.tc_out_path, os.W_OK): result_file = '{}/results.tc'.format(self.default_args.tc_out_path) else: result_file = 'results.tc' if os.path.isfile(result_file): with open(result_file, 'r') as rh: results = rh.read().strip().split('\n') os.remove(result_file) for line in results: if ((not line) or (' = ' not in line)): continue (key, value) = line.split(' = ') if (value == 'true'): value = True elif (value == 'false'): value = False elif (not value): value = None setattr(self._default_args, key, value)
Read data from results_tc file from previous run of app. This method is only required when not running within the TcEX platform and is only intended for testing apps locally. The values read from results_tc are set as attributes on the default args namespace; nothing is returned.
codesearchnet
def find_many(self, url, type, resource): return [type(item) for item in RestClient.get(url)[resource]]
Get a list of resources Args: url (string): URL to invoke type (class): Class type resource (string): The REST Resource Returns: list of object: List of resource instances
codesearchnet
def scoped_format(txt, **objects): pretty = objects.pop('pretty', RecursiveAttribute.format_pretty) expand = objects.pop('expand', RecursiveAttribute.format_expand) attr = RecursiveAttribute(objects, read_only=True) formatter = scoped_formatter(**objects) return formatter.format(txt, pretty=pretty, expand=expand)
Format a string with respect to a set of objects' attributes. Example: >>> class Foo(object): >>> def __init__(self): >>> self.name = "Dave" >>> print scoped_format("hello {foo.name}", foo=Foo()) hello Dave Args: objects (dict): Dict of objects to format with. If a value is a dict, its values, and any further nested dicts, will also format with dot notation. pretty (bool): See `ObjectStringFormatter`. expand (bool): See `ObjectStringFormatter`.
codesearchnet
def get_graphql_schema_from_schema_graph(schema_graph, class_to_field_type_overrides, hidden_classes):
    _validate_overriden_fields_are_not_defined_in_superclasses(class_to_field_type_overrides, schema_graph)
    inherited_field_type_overrides = _get_inherited_field_types(class_to_field_type_overrides, schema_graph)
    if not schema_graph.get_element_by_class_name(ORIENTDB_BASE_VERTEX_CLASS_NAME).properties:
        hidden_classes.add(ORIENTDB_BASE_VERTEX_CLASS_NAME)
    graphql_types = OrderedDict()
    type_equivalence_hints = OrderedDict()
    for vertex_cls_name in sorted(schema_graph.vertex_class_names):
        vertex_cls = schema_graph.get_element_by_class_name(vertex_cls_name)
        if vertex_cls_name in hidden_classes:
            continue
        inherited_field_type_overrides.setdefault(vertex_cls_name, dict())
        field_type_overrides = inherited_field_type_overrides[vertex_cls_name]
        field_specification_lambda = _create_field_specification(schema_graph, graphql_types, field_type_overrides, hidden_classes, vertex_cls_name)
        current_graphql_type = None
        if vertex_cls.abstract:
            current_graphql_type = GraphQLInterfaceType(vertex_cls_name, fields=field_specification_lambda)
        else:
            interface_specification_lambda = _create_interface_specification(schema_graph, graphql_types, hidden_classes, vertex_cls_name)
            current_graphql_type = GraphQLObjectType(vertex_cls_name, field_specification_lambda, interfaces=interface_specification_lambda, is_type_of=(lambda: None))
        graphql_types[vertex_cls_name] = current_graphql_type
    for vertex_cls_name in sorted(schema_graph.vertex_class_names):
        vertex_cls = schema_graph.get_element_by_class_name(vertex_cls_name)
        if vertex_cls_name in hidden_classes:
            continue
        vertex_cls_subclasses = schema_graph.get_subclass_set(vertex_cls_name)
        if (not vertex_cls.abstract) and (len(vertex_cls_subclasses) > 1):
            union_type_name = _get_union_type_name(vertex_cls_subclasses)
            type_specification_lambda = _create_union_types_specification(schema_graph, graphql_types, hidden_classes, vertex_cls_name)
            union_type = GraphQLUnionType(union_type_name, types=type_specification_lambda)
            graphql_types[union_type_name] = union_type
            type_equivalence_hints[graphql_types[vertex_cls_name]] = union_type
    for non_graph_cls_name in sorted(schema_graph.non_graph_class_names):
        if non_graph_cls_name in hidden_classes:
            continue
        if not schema_graph.get_element_by_class_name(non_graph_cls_name).abstract:
            continue
        cls_subclasses = schema_graph.get_subclass_set(non_graph_cls_name)
        if len(cls_subclasses) > 1:
            all_non_abstract_subclasses_are_vertices = True
            for subclass_name in cls_subclasses:
                subclass = schema_graph.get_element_by_class_name(subclass_name)
                if subclass_name != non_graph_cls_name:
                    if (not subclass.abstract) and (not subclass.is_vertex):
                        all_non_abstract_subclasses_are_vertices = False
                        break
            if all_non_abstract_subclasses_are_vertices:
                inherited_field_type_overrides.setdefault(non_graph_cls_name, dict())
                field_type_overrides = inherited_field_type_overrides[non_graph_cls_name]
                field_specification_lambda = _create_field_specification(schema_graph, graphql_types, field_type_overrides, hidden_classes, non_graph_cls_name)
                graphql_type = GraphQLInterfaceType(non_graph_cls_name, fields=field_specification_lambda)
                graphql_types[non_graph_cls_name] = graphql_type
    if not graphql_types:
        raise EmptySchemaError(u'After evaluating all subclasses of V, we were not able to find visible schema data to import into the GraphQL schema object')
    RootSchemaQuery = GraphQLObjectType('RootSchemaQuery', OrderedDict([(name, GraphQLField(value)) for (name, value) in sorted(six.iteritems(graphql_types), key=(lambda x: x[0])) if not isinstance(value, GraphQLUnionType)]))
    schema = GraphQLSchema(RootSchemaQuery, directives=DIRECTIVES)
    return (schema, _get_referenced_type_equivalences(graphql_types, type_equivalence_hints))
Return a GraphQL schema object corresponding to the schema of the given schema graph. Args: schema_graph: SchemaGraph class_to_field_type_overrides: dict, class name -> {field name -> field type}, (string -> {string -> GraphQLType}). Used to override the type of a field in the class where it's first defined and all the class's subclasses. hidden_classes: set of strings, classes to not include in the GraphQL schema. Returns: tuple of (GraphQL schema object, GraphQL type equivalence hints dict). The tuple is of type (GraphQLSchema, {GraphQLObjectType -> GraphQLUnionType}).
codesearchnet
def maybe_saved_model_directory(export_dir): txt_path = file_io.join(export_dir, constants.SAVED_MODEL_FILENAME_PBTXT) pb_path = file_io.join(export_dir, constants.SAVED_MODEL_FILENAME_PB) cpb_path = file_io.join(export_dir, constants.SAVED_MODEL_FILENAME_CPB) return file_io.file_exists(txt_path) or file_io.file_exists(pb_path) or file_io.file_exists(cpb_path)
Checks whether the provided export directory could contain a SavedModel. Note that the method does not load any data by itself. If the method returns `false`, the export directory definitely does not contain a SavedModel. If the method returns `true`, the export directory may contain a SavedModel but provides no guarantee that it can be loaded. Args: export_dir: Absolute string path to possible export location. For example, '/my/foo/model'. Returns: True if the export directory contains SavedModel files, False otherwise.
github-repos
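The check above reduces to "does any known marker file exist in the directory". A minimal stand-in sketch using only the standard library (the filenames are assumptions standing in for TensorFlow's `constants` module, and `os.path` replaces `file_io` for local paths):

```python
import os

# Hypothetical marker filenames; TensorFlow keeps these in its constants module.
SAVED_MODEL_FILENAMES = ("saved_model.pbtxt", "saved_model.pb", "saved_model.cpb")

def maybe_saved_model_directory(export_dir):
    # True if any of the known SavedModel marker files is present.
    return any(
        os.path.isfile(os.path.join(export_dir, name))
        for name in SAVED_MODEL_FILENAMES
    )
```

As in the original, a `True` result is only a hint that the directory may contain a loadable SavedModel.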
def get_stable_entries(self, charge_to_discharge=True):
    list_copy = list(self._stable_entries)
    # list.reverse() reverses in place and returns None, so reverse the
    # copy first and then return it, rather than returning the call result.
    if not charge_to_discharge:
        list_copy.reverse()
    return list_copy
Get the stable entries. Args: charge_to_discharge: order from most charge to most discharged state? Default to True. Returns: A list of stable entries in the electrode, ordered by amount of the working ion.
juraj-google-style
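The ordering logic can be isolated into a small free function; this sketch shows the safe pattern (`reversed` produces a new sequence, unlike the in-place, `None`-returning `list.reverse`):

```python
def ordered_entries(entries, charge_to_discharge=True):
    # Copy so the caller's list is never mutated; build the reversed copy
    # explicitly instead of returning list.reverse()'s None result.
    list_copy = list(entries)
    return list_copy if charge_to_discharge else list(reversed(list_copy))
```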
def _process_inputs(self, input_reader, shard_state, tstate, ctx): processing_limit = self._processing_limit(tstate.mapreduce_spec) if (processing_limit == 0): return finished_shard = True iterator = iter(input_reader) while True: try: entity = iterator.next() except StopIteration: break if isinstance(entity, db.Model): shard_state.last_work_item = repr(entity.key()) elif isinstance(entity, ndb.Model): shard_state.last_work_item = repr(entity.key) else: shard_state.last_work_item = repr(entity)[:100] processing_limit -= 1 if (not self._process_datum(entity, input_reader, ctx, tstate)): finished_shard = False break elif (processing_limit == 0): finished_shard = False break self.slice_context.incr(context.COUNTER_MAPPER_WALLTIME_MS, int(((self._time() - self._start_time) * 1000))) return finished_shard
Read inputs, process them, and write out outputs. This is the core logic of MapReduce. It reads inputs from input reader, invokes user specified mapper function, and writes output with output writer. It also updates shard_state accordingly. e.g. if shard processing is done, set shard_state.active to False. If errors.FailJobError is caught, it will fail this MR job. All other exceptions will be logged and raised to taskqueue for retry until the number of retries exceeds a limit. Args: input_reader: input reader. shard_state: shard state. tstate: transient shard state. ctx: mapreduce context. Returns: Whether this shard has finished processing all its input split.
codesearchnet
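Stripped of the App Engine plumbing, the core of the shard loop is "process items until a budget runs out, and report whether the input was exhausted". A minimal sketch of that control flow (names are illustrative, not from the original):

```python
def process_with_budget(items, process, budget):
    """Process items until the budget runs out or the input is exhausted.

    Returns True only when every item was consumed -- the "finished" flag
    the shard logic uses to decide whether it must reschedule itself.
    """
    iterator = iter(items)
    for item in iterator:
        if budget == 0:
            # Ran out of budget with work remaining: not finished.
            return False
        process(item)
        budget -= 1
    return True
```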
def detailed_log_handler(self, handler): if (not self.opened()): handler = (handler or util.noop) self._detailed_log_handler = enums.JLinkFunctions.LOG_PROTOTYPE(handler) self._dll.JLINKARM_EnableLogCom(self._detailed_log_handler)
Setter for the detailed log handler function. Args: self (JLink): the ``JLink`` instance Returns: ``None``
codesearchnet
def price(self, valuation_date, market, model=None, pricing_context=None, name=None): name = name or self._name + '_price' with tf.name_scope(name): valuation_date = dates.convert_to_date_tensor(valuation_date) pay_cf = self._pay_leg.price(valuation_date, market, model, pricing_context) receive_cf = self._receive_leg.price(valuation_date, market, model, pricing_context) return receive_cf - pay_cf
Returns the present value of the instrument on the valuation date. Args: valuation_date: A scalar `DateTensor` specifying the date on which valuation is being desired. market: A namedtuple of type `InterestRateMarket` which contains the necessary information for pricing the interest rate swap. model: Reserved for future use. pricing_context: Additional context relevant for pricing. name: Python str. The name to give to the ops created by this function. Default value: `None` which maps to 'price'. Returns: A Rank 1 `Tensor` of real type containing the modeled price of each IRS contract based on the input market data.
github-repos
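The pricing logic nets two legs: swap value = PV(receive leg) - PV(pay leg). A plain-Python sketch of that netting with deterministic cashflows and discount factors (a toy stand-in, not the library's leg-pricing machinery):

```python
def leg_pv(cashflows, discount_factors):
    # Present value of one leg: sum of discounted cashflows.
    return sum(c * d for c, d in zip(cashflows, discount_factors))

def swap_price(pay_cashflows, receive_cashflows, discount_factors):
    # Mirrors the pay/receive netting in `price` above.
    return (leg_pv(receive_cashflows, discount_factors)
            - leg_pv(pay_cashflows, discount_factors))
```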
def load_dataset(data_path: str, findex: typing.Dict[str, int]) -> Dataset: Y = array.array('i') X_rows = array.array('I') X_cols = array.array('I') with open(data_path) as f: i = 0 for row in f: cols = row.strip().split('\t') if len(cols) < 2: continue Y.append(int(cols[0])) hit_indices = [findex[feat] for feat in cols[1:] if feat in findex] X_rows.extend((i for _ in range(len(hit_indices)))) X_cols.extend(hit_indices) i += 1 return Dataset(jnp.asarray(X_rows), jnp.asarray(X_cols), jnp.asarray(Y))
Loads a dataset from the given encoded data file. Args: data_path (str): A file path for the encoded data file. findex (Dict[str, int]): A dictionary that maps a feature to its index. Returns: A dataset
github-repos
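The loader builds a sparse one-hot encoding: for each kept row it records `(row, feature_index)` pairs plus a label. A pure-Python sketch of that encoding step, without the file I/O or JAX arrays (unknown features are skipped, as in the original):

```python
def encode_rows(rows, findex):
    """Build sparse (row, col) index lists and labels.

    rows: iterable of (label, feature_list) pairs.
    findex: dict mapping feature name -> column index.
    """
    labels, x_rows, x_cols = [], [], []
    for i, (label, feats) in enumerate(rows):
        labels.append(label)
        for feat in feats:
            if feat in findex:  # features missing from the index are dropped
                x_rows.append(i)
                x_cols.append(findex[feat])
    return x_rows, x_cols, labels
```

The original additionally skips malformed lines with fewer than two tab-separated columns before this step.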
def to_dict(mapreduce_yaml): all_configs = [] for config in mapreduce_yaml.mapreduce: out = {'name': config.name, 'mapper_input_reader': config.mapper.input_reader, 'mapper_handler': config.mapper.handler} if config.mapper.params_validator: out['mapper_params_validator'] = config.mapper.params_validator if config.mapper.params: param_defaults = {} for param in config.mapper.params: param_defaults[param.name] = (param.default or param.value) out['mapper_params'] = param_defaults if config.params: param_defaults = {} for param in config.params: param_defaults[param.name] = (param.default or param.value) out['params'] = param_defaults if config.mapper.output_writer: out['mapper_output_writer'] = config.mapper.output_writer all_configs.append(out) return all_configs
Converts a MapReduceYaml file into a JSON-encodable dictionary. For use in user-visible UI and internal methods for interfacing with user code (like param validation). Args: mapreduce_yaml: The Python representation of the mapreduce.yaml document. Returns: A list of configuration dictionaries.
codesearchnet
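The repeated `param.default or param.value` fallback can be captured in one helper; a small sketch with a stand-in `Param` record (the real objects come from the mapreduce.yaml parser):

```python
from collections import namedtuple

# Hypothetical stand-in for the parsed yaml param objects.
Param = namedtuple("Param", "name default value")

def param_defaults(params):
    # Prefer the declared default, falling back to the literal value,
    # matching the `param.default or param.value` pattern above.
    return {p.name: (p.default or p.value) for p in params}
```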
def _get_entry_energy(pd, composition): candidate = [i.energy_per_atom for i in pd.qhull_entries if i.composition.fractional_composition == composition.fractional_composition] if not candidate: warnings.warn("The reactant " + composition.reduced_formula + " has no matching entry with negative formation" " energy, instead convex hull energy for this" " composition will be used for reaction energy " "calculation. ") return pd.get_hull_energy(composition) else: min_entry_energy = min(candidate) return min_entry_energy * composition.num_atoms
Finds the lowest entry energy for entries matching the composition. Entries with non-negative formation energies are excluded. If no entry is found, use the convex hull energy for the composition. Args: pd (PhaseDiagram): PhaseDiagram object. composition (Composition): Composition object that the target entry should match. Returns: The lowest entry energy among entries matching the composition.
juraj-google-style
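The selection logic is "take the minimum over matching candidates, or fall back when there are none". A sketch with the fallback abstracted into a callable (standing in for `pd.get_hull_energy`):

```python
def lowest_energy(candidates, fallback):
    """Return the lowest candidate energy, or fallback() when none match.

    candidates: list of per-atom energies for matching entries.
    fallback: zero-argument callable, e.g. the convex hull energy lookup.
    """
    if not candidates:
        return fallback()
    return min(candidates)
```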
def json(self, branch='master', filename=''):
    file_contents = self.get(branch=branch, filename=filename)
    try:
        json_dict = json.loads(file_contents)
    except ValueError as error:
        msg = ('"{filename}" appears to be invalid json. '
               'Please validate it. '
               'JSON decoder error:\n'
               '{error}').format(filename=filename, error=error)
        raise SystemExit(msg)
    LOG.debug('JSON object:\n%s', json_dict)
    return json_dict
Retrieve _filename_ from GitLab. Args: branch (str): Git Branch to find file. filename (str): Name of file to retrieve. Returns: dict: Decoded JSON. Raises: SystemExit: Invalid JSON provided.
juraj-google-style
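The decode-or-exit pattern stands alone without the GitLab client; a minimal sketch (only `json.loads` and `SystemExit` from the original are assumed):

```python
import json

def parse_json(text, filename="<string>"):
    """Decode JSON text, exiting with a readable message on failure."""
    try:
        return json.loads(text)
    except ValueError as error:  # json.JSONDecodeError subclasses ValueError
        raise SystemExit(
            '"{filename}" appears to be invalid json.\n'
            'JSON decoder error:\n{error}'.format(filename=filename, error=error))
```

Note that `SystemExit` derives from `BaseException`, so a bare `except Exception` in a caller will not catch it.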
def validate_metadata(train_config): if (len(train_config['csv_header']) != len(train_config['csv_defaults'])): raise ValueError('Unequal number of columns in input features file and schema file.') sorted_columns = sorted((train_config['csv_header'] + [train_config['target_column']])) sorted_columns2 = sorted((((train_config['categorical_columns'] + train_config['numerical_columns']) + [train_config['key_column']]) + [train_config['target_column']])) if (sorted_columns2 != sorted_columns): raise ValueError('Each csv header must be a numerical/categorical type, a key, or a target.')
Perform some checks that the training config is correct. Args: train_config: train config as produced by merge_metadata() Raises: ValueError: if columns look wrong.
codesearchnet
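The second check asserts that the header columns plus the target are exactly the union of categorical, numerical, key, and target columns. A standalone sketch of that partition check (argument names are illustrative):

```python
def check_column_partition(header, target, categorical, numerical, key):
    """Every header column must be categorical, numerical, the key, or the target."""
    expected = sorted(header + [target])
    actual = sorted(categorical + numerical + [key] + [target])
    if expected != actual:
        raise ValueError(
            'Each csv header must be a numerical/categorical type, a key, '
            'or a target.')
```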
def delete_object(self, ref, delete_arguments=None): opts = self._get_request_options() if (not isinstance(delete_arguments, dict)): delete_arguments = {} url = self._construct_url(ref, query_params=delete_arguments) self._log_request('delete', url, opts) r = self.session.delete(url, **opts) self._validate_authorized(r) if (r.status_code != requests.codes.ok): self._check_service_availability('delete', r, ref) raise ib_ex.InfobloxCannotDeleteObject(response=jsonutils.loads(r.content), ref=ref, content=r.content, code=r.status_code) return self._parse_reply(r)
Remove an Infoblox object Args: ref (str): Object reference delete_arguments (dict): Extra delete arguments Returns: The object reference of the removed object Raises: InfobloxException
codesearchnet
def enforce_periodic_boundary_conditions( self ): for s in self.sites: for i in range(3): if s.r[i] < 0.0: s.r[i] += self.cell_lengths[i] if s.r[i] > self.cell_lengths[i]: s.r[i] -= self.cell_lengths[i]
Ensure that all lattice sites are within the central periodic image of the simulation cell. Sites that are outside the central simulation cell are mapped back into this cell. Args: None Returns: None
juraj-google-style
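For an orthorhombic cell the same wrapping can be written with a modulo, which also covers sites more than one cell length outside the box (the single-shift loop above only corrects excursions of less than one cell length):

```python
def wrap_position(r, cell_lengths):
    # Map a Cartesian position component-wise into [0, L) for each axis.
    # Python's % already returns a non-negative result for positive L,
    # so negative coordinates wrap correctly.
    return [x % length for x, length in zip(r, cell_lengths)]
```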