code: string (20 – 4.93k chars)
docstring: string (33 – 1.27k chars)
source: string (3 classes)
def __call__(self, shape, dtype=None, **kwargs):
    _validate_kwargs(self.__class__.__name__, kwargs)
    dtype = _get_dtype(dtype)
    if not dtype.is_numpy_compatible or dtype == dtypes.string:
        raise ValueError('Expected numeric or boolean dtype, got %s.' % dtype)
    if _PARTITION_SHAPE in kwargs:
        shape = kwargs[_PARTITION_SHAPE]
    return array_ops.zeros(shape, dtype)
Returns a tensor object initialized as specified by the initializer. Args: shape: Shape of the tensor. dtype: Optional dtype of the tensor. Only numeric or boolean dtypes are supported. If not specified, `tf.keras.backend.floatx()` is used, which defaults to `float32` unless you configured it otherwise (via `tf.keras.backend.set_floatx(float_dtype)`). **kwargs: Additional keyword arguments.
github-repos
def decorate_event_js(js_code):
    def add_annotation(method):
        setattr(method, "__is_event", True)
        setattr(method, "_js_code", js_code)
        return method
    return add_annotation
Set up a method as an event, also attaching the JavaScript code to generate. Args: js_code (str): JavaScript code to generate the event client-side. js_code is added to the widget html as widget.attributes['onclick'] = js_code%{'emitter_identifier':widget.identifier, 'event_name':'onclick'}
juraj-google-style
def build_kalman_cov_step(get_transition_matrix_for_timestep,
                          get_transition_noise_for_timestep,
                          get_observation_matrix_for_timestep,
                          get_observation_noise_for_timestep):
    def cov_step(previous_covs, t):
        """Single step of prior covariance recursion."""
        (previous_latent_cov, _) = previous_covs
        latent_cov = _propagate_cov(
            previous_latent_cov,
            get_transition_matrix_for_timestep(t - 1),
            get_transition_noise_for_timestep(t - 1))
        observation_cov = _propagate_cov(
            latent_cov,
            get_observation_matrix_for_timestep(t),
            get_observation_noise_for_timestep(t))
        return (latent_cov, observation_cov)
    return cov_step
Build a callable for one step of Kalman covariance recursion. Args: get_transition_matrix_for_timestep: callable taking a timestep as an integer `Tensor` argument, and returning a `LinearOperator` of shape `[latent_size, latent_size]`. get_transition_noise_for_timestep: callable taking a timestep as an integer `Tensor` argument, and returning a `MultivariateNormalLinearOperator` of event shape `[latent_size]`. get_observation_matrix_for_timestep: callable taking a timestep as an integer `Tensor` argument, and returning a `LinearOperator` of shape `[observation_size, latent_size]`. get_observation_noise_for_timestep: callable taking a timestep as an integer `Tensor` argument, and returning a `MultivariateNormalLinearOperator` of event shape `[observation_size]`. Returns: cov_step: a callable that computes latent state and observation covariance at time `t`, given latent covariance at time `t-1`.
codesearchnet
def get_exitstatus(self):
    logger.debug('Exit status is {0}'.format(self._spawn.exitstatus))
    return self._spawn.exitstatus
Get the exit status of the program execution. Returns: int: Exit status as reported by the operating system, or None if it is not available.
codesearchnet
def check_address(address):
    if isinstance(address, tuple):
        check_host(address[0])
        check_port(address[1])
    elif isinstance(address, string_types):
        if os.name != 'posix':
            raise ValueError('Platform does not support UNIX domain sockets')
        if not (os.path.exists(address) or
                os.access(os.path.dirname(address), os.W_OK)):
            raise ValueError('ADDRESS not a valid socket domain socket ({0})'.format(address))
    else:
        raise ValueError('ADDRESS is not a tuple, string, or character buffer ({0})'.format(
            type(address).__name__))
Check if the format of the address is correct Arguments: address (tuple): (``str``, ``int``) representing an IP address and port, respectively .. note:: alternatively a local ``address`` can be a ``str`` when working with UNIX domain sockets, if supported by the platform Raises: ValueError: raised when address has an incorrect format Example: >>> check_address(('127.0.0.1', 22))
codesearchnet
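The branching in `check_address` above can be sketched as a standalone helper. The name `classify_address` and the explicit port-range check are illustrative additions, not part of the original API:

```python
import os

def classify_address(address):
    """Classify an address as 'tcp' or 'unix' (illustrative sketch)."""
    if isinstance(address, tuple) and len(address) == 2:
        host, port = address
        if not (isinstance(port, int) and 0 <= port <= 65535):
            raise ValueError('invalid port: {0!r}'.format(port))
        return 'tcp'
    if isinstance(address, str):
        # UNIX domain sockets are only available on POSIX platforms.
        if os.name != 'posix':
            raise ValueError('Platform does not support UNIX domain sockets')
        return 'unix'
    raise ValueError('address must be a (host, port) tuple or a path string')
```

For example, `classify_address(('127.0.0.1', 22))` accepts the same tuple form shown in the docstring's example.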
def register_obj_processors(self, obj_processors):
    self.obj_processors = obj_processors
    self.type_convertors.update(obj_processors)
Object processors are callables that will be called after each successful model object construction. Each callable receives the model object as its parameter. Registering new object processors replaces any previously registered ones. Args: obj_processors(dict): A dictionary where key=class name, value=callable
codesearchnet
def pearson_correlation_coefficient(predictions, labels, weights_fn=None):
    del weights_fn  # unused
    _, pearson = tf.contrib.metrics.streaming_pearson_correlation(predictions, labels)
    return (pearson, tf.constant(1.0))
Calculate the Pearson correlation coefficient. Args: predictions: The raw predictions. labels: The actual labels. weights_fn: Weighting function. Returns: The Pearson correlation coefficient.
codesearchnet
def minimum(inputs, **kwargs): return Minimum(**kwargs)(inputs)
Functional interface to the `Minimum` layer. Args: inputs: A list of input tensors (at least 2). **kwargs: Standard layer keyword arguments. Returns: A tensor, the element-wise minimum of the inputs.
github-repos
def set(self, name, value):
    curr = self.values
    parts = name.split('.')
    for i, part in enumerate(parts[:-1]):
        try:
            curr = curr.setdefault(part, {})
        except AttributeError:
            raise InvalidPath('.'.join(parts[:i + 1]))
    try:
        curr[parts[-1]] = value
    except TypeError:
        raise InvalidPath('.'.join(parts[:-1]))
Set context value. Args: name (str): The name of the context value to change. value (Any): The new value for the selected context value
codesearchnet
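The dotted-path descent used by `set` above can be shown in isolation; `set_path` is a hypothetical standalone name, stripped of the class's error wrapping:

```python
def set_path(values, name, value):
    """Set a value in a nested dict via a dotted path (illustrative sketch)."""
    curr = values
    parts = name.split('.')
    for part in parts[:-1]:
        # Descend one level, creating intermediate dicts as needed.
        curr = curr.setdefault(part, {})
    curr[parts[-1]] = value

cfg = {}
set_path(cfg, 'db.host', 'localhost')
```

After the call, `cfg` is `{'db': {'host': 'localhost'}}`.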
def _ragged_stack_concat_helper(rt_inputs, axis, stack_values): if not rt_inputs: raise ValueError('rt_inputs may not be empty.') rt_inputs = [ragged_tensor.convert_to_tensor_or_ragged_tensor(rt_input, name='rt_input') for rt_input in rt_inputs] row_splits_dtype, rt_inputs = ragged_tensor.match_row_splits_dtypes(*rt_inputs, return_dtype=True) rt_inputs = list(rt_inputs) if len(rt_inputs) == 1 and (not stack_values): return rt_inputs[0] ndims = None for rt in rt_inputs: if ndims is None: ndims = rt.shape.ndims else: rt.shape.assert_has_rank(ndims) out_ndims = ndims if ndims is None or not stack_values else ndims + 1 axis = array_ops.get_positive_axis(axis, out_ndims) if stack_values and ndims == 1 and (axis == 0): return ragged_tensor.RaggedTensor.from_row_lengths(values=array_ops.concat(rt_inputs, axis=0), row_lengths=array_ops.concat([array_ops.shape(r) for r in rt_inputs], axis=0)) if all((not ragged_tensor.is_ragged(rt) for rt in rt_inputs)): if ndims is not None and (axis == out_ndims - 1 or axis == ndims - 1): if stack_values: return array_ops_stack.stack(rt_inputs, axis) else: return array_ops.concat(rt_inputs, axis) for i in range(len(rt_inputs)): if not ragged_tensor.is_ragged(rt_inputs[i]): rt_inputs[i] = ragged_tensor.RaggedTensor.from_tensor(rt_inputs[i], ragged_rank=1, row_splits_dtype=row_splits_dtype) ragged_rank = max(max((rt.ragged_rank for rt in rt_inputs)), 1) rt_inputs = [_increase_ragged_rank_to(rt, ragged_rank, row_splits_dtype) for rt in rt_inputs] if axis == 0: return _ragged_stack_concat_axis_0(rt_inputs, stack_values) elif axis == 1: return _ragged_stack_concat_axis_1(rt_inputs, stack_values) else: values = [rt.values for rt in rt_inputs] splits = [[rt_input.row_splits] for rt_input in rt_inputs] with ops.control_dependencies(ragged_util.assert_splits_match(splits)): return ragged_tensor.RaggedTensor.from_row_splits(_ragged_stack_concat_helper(values, axis - 1, stack_values), splits[0][0], validate=False)
Helper function to concatenate or stack ragged tensors. Args: rt_inputs: A list of RaggedTensors or Tensors to combine. axis: The axis along which to concatenate or stack. stack_values: A boolean -- if true, then stack values; otherwise, concatenate them. Returns: A RaggedTensor. Raises: ValueError: If rt_inputs is empty, or if axis is out of range.
github-repos
def check_error(response, expect_status=200):
    json = None
    try:
        json = response.json()
    except Exception:
        pass
    if (response.status_code != expect_status
            or response.status_code == 400
            or (json and 'error' in json)):  # guard: json may be None
        if json and 'error' in json:
            error = json['error']
            raise YoukuError(error['code'], error['type'],
                             error['description'], response.status_code)
        else:
            error = parse_qs(response.text)
            raise YoukuError(error.get('code', [None])[0],
                             error.get('type', [None])[0],
                             error.get('description', [None])[0],
                             response.status_code)
Youku errors should be returned in JSON form, like: HTTP 400 { "error":{ "code":120010223, "type":"UploadsException", "description":"Expired upload token" } } But the error may also be in the response URL params or the response body. Content-Type may be application/json or text/plain, so don't rely on it. Args: expect_status: normally 200 or 201
juraj-google-style
def query_blockchain_events(web3: Web3, contract_manager: ContractManager,
                            contract_address: Address, contract_name: str,
                            topics: List, from_block: BlockNumber,
                            to_block: BlockNumber) -> List[Dict]:
    filter_params = {
        'fromBlock': from_block,
        'toBlock': to_block,
        'address': to_checksum_address(contract_address),
        'topics': topics,
    }
    events = web3.eth.getLogs(filter_params)
    contract_abi = contract_manager.get_contract_abi(contract_name)
    return [decode_event(abi=contract_abi, log_=raw_event) for raw_event in events]
Returns events emitted by a contract for a given event name, within a certain range. Args: web3: A Web3 instance contract_manager: A contract manager contract_address: The address of the contract to be filtered, can be `None` contract_name: The name of the contract topics: The topics to filter for from_block: The block at which to start searching for events to_block: The block at which to stop searching for events Returns: All matching events
codesearchnet
def variables(self): return self.weights
Returns the list of all layer variables/weights. Alias of `self.weights`. Returns: A list of variables.
github-repos
def ParseActivityLogUncompressedRow(
    self, parser_mediator, query, row, **unused_kwargs):
  query_hash = hash(query)
  event_data = ChromeExtensionActivityEventData()
  event_data.action_type = self._GetRowValue(query_hash, row, 'action_type')
  event_data.activity_id = self._GetRowValue(query_hash, row, 'activity_id')
  event_data.api_name = self._GetRowValue(query_hash, row, 'api_name')
  event_data.arg_url = self._GetRowValue(query_hash, row, 'arg_url')
  event_data.args = self._GetRowValue(query_hash, row, 'args')
  event_data.extension_id = self._GetRowValue(query_hash, row, 'extension_id')
  event_data.other = self._GetRowValue(query_hash, row, 'other')
  event_data.page_title = self._GetRowValue(query_hash, row, 'page_title')
  event_data.page_url = self._GetRowValue(query_hash, row, 'page_url')
  event_data.query = query
  timestamp = self._GetRowValue(query_hash, row, 'time')
  date_time = dfdatetime_webkit_time.WebKitTime(timestamp=timestamp)
  event = time_events.DateTimeValuesEvent(
      date_time, definitions.TIME_DESCRIPTION_UNKNOWN)
  parser_mediator.ProduceEventWithEventData(event, event_data)
Parses an activity log row. Args: parser_mediator (ParserMediator): mediates interactions between parsers and other components, such as storage and dfvfs. query (str): query that created the row. row (sqlite3.Row): row.
juraj-google-style
def approve(self, sha=None, **kwargs):
    path = '%s/%s/approve' % (self.manager.path, self.get_id())
    data = {}
    if sha:
        data['sha'] = sha
    server_data = self.manager.gitlab.http_post(path, post_data=data, **kwargs)
    self._update_attrs(server_data)
Approve the merge request. Args: sha (str): Head SHA of MR **kwargs: Extra options to send to the server (e.g. sudo) Raises: GitlabAuthenticationError: If authentication is not correct GitlabMRApprovalError: If the approval failed
codesearchnet
def load_embeddings(lang="en", task="embeddings", type="cw", normalize=False):
    src_dir = "_".join((type, task)) if type else task
    p = locate_resource(src_dir, lang)
    e = Embedding.load(p)
    if type == "cw":
        e.apply_expansion(CaseExpander)
        e.apply_expansion(DigitExpander)
    if type == "sgns":
        e.apply_expansion(CaseExpander)
    if type == "ue":
        e.apply_expansion(CaseExpander)
    if normalize:
        e.normalize_words(inplace=True)
    return e
Return a word embeddings object for `lang` and of type `type`. Args: lang (string): language code. task (string): parameters that define the task. type (string): skipgram, cw, cbow ... normalize (boolean): returns normalized word embedding vectors.
juraj-google-style
def _save_to_database(url, property_name, data):
    data = json.dumps([
        (d.to_dict() if hasattr(d, 'to_dict') else d)
        for d in data
    ])
    logger.debug('_save_to_database() data: %s' % repr(data))
    requests.post(
        _WEB_URL + _REQUEST_DB_SAVE,
        timeout=REQUEST_TIMEOUT,
        allow_redirects=True,
        verify=False,
        data={'url': url, 'value': data, 'property_name': property_name})
    logger.info('`%s` for `%s` sent to REST DB.' % (property_name, url))
Store `data` under `property_name` in the `url` key in REST API DB. Args: url (obj): URL of the resource to which `property_name` will be stored. property_name (str): Name of the property under which the `data` will be stored. data (obj): Any object.
codesearchnet
def bulk_load_docs(es, docs):
    chunk_size = 200
    try:
        results = elasticsearch.helpers.bulk(es, docs, chunk_size=chunk_size)
        log.debug(f'Elasticsearch documents loaded: {results[0]}')
        if len(results[1]) > 0:
            log.error('Bulk load errors {}'.format(results))
    except elasticsearch.ElasticsearchException as e:
        log.error('Indexing error: {}\n'.format(e))
Bulk load docs Args: es: elasticsearch handle docs: Iterator of doc objects - includes index_name
codesearchnet
def get_relative_embeddings_left_right(max_relative_position, length, depth,
                                       num_heads,
                                       heads_share_relative_embedding, name):
    initializer_stddev = depth ** -0.5
    max_relative_position_unmasked = 2 * max_relative_position - 1
    if heads_share_relative_embedding:
        embedding_shape = (max_relative_position_unmasked, depth)
    else:
        embedding_shape = (num_heads, max_relative_position_unmasked, depth)
    relative_embeddings = tf.get_variable(
        name=name, shape=embedding_shape,
        initializer=tf.random_normal_initializer(stddev=initializer_stddev))
    pad_length = tf.maximum(length - max_relative_position, 0)
    slice_start_position = tf.maximum(max_relative_position - length, 0)
    if heads_share_relative_embedding:
        padded_relative_embeddings = tf.pad(
            relative_embeddings, [[pad_length, pad_length], [0, 0]])
        used_relative_embeddings = tf.slice(
            padded_relative_embeddings,
            [slice_start_position, 0], [2 * length - 1, -1])
    else:
        padded_relative_embeddings = tf.pad(
            relative_embeddings, [[0, 0], [pad_length, pad_length], [0, 0]])
        used_relative_embeddings = tf.slice(
            padded_relative_embeddings,
            [0, slice_start_position, 0], [-1, 2 * length - 1, -1])
    return used_relative_embeddings
Instantiate or retrieve relative embeddings, sliced according to length. Use for unmasked case where the relative attention looks both left and right. Args: max_relative_position: an Integer for the number of entries in the relative embedding, which corresponds to the max relative distance that is considered. length: an Integer, specifies the length of the input sequence for which this relative embedding is retrieved for. depth: an Integer, specifies the depth for relative embeddings. num_heads: an Integer, specifies the number of heads. heads_share_relative_embedding: a Boolean specifying if the relative embedding is shared across heads. name: a string giving the name of the embedding variables. Returns: a Tensor with shape [length, depth]
codesearchnet
def configure_parser(self, parser):
    parser.add_argument("--log_path", default="", help="The log file path")
    parser.add_argument("--verbose", help="Increase logging verbosity",
                        action="store_true")
Adds the necessary supported arguments to the argument parser. Args: parser (argparse.ArgumentParser): The parser to add arguments to.
juraj-google-style
class AriaProjector(nn.Module):
    def __init__(self, config: AriaConfig):
        super().__init__()
        self.patch_to_query_dict = config.projector_patch_to_query_dict
        self.in_features = config.vision_config.hidden_size
        self.num_heads = config.vision_config.num_attention_heads
        self.kv_dim = config.vision_config.hidden_size
        self.hidden_features = config.text_config.hidden_size
        self.output_dim = config.text_config.hidden_size
        self.query = nn.Parameter(
            torch.zeros(config.max_value_projector_patch_to_query_dict, self.in_features))
        self.cross_attn = AriaCrossAttention(config)
        self.layer_norm = nn.LayerNorm(self.in_features)
        self.feed_forward = AriaProjectorMLP(
            self.in_features, self.hidden_features, self.output_dim)

    def forward(self, key_value_states: torch.Tensor,
                attn_mask: Optional[torch.Tensor] = None):
        batch_size, num_patches = key_value_states.shape[0], key_value_states.shape[1]
        if num_patches not in self.patch_to_query_dict.keys():
            raise KeyError(
                f'Number of patches {num_patches} not found in patch_to_query_dict '
                f'amongst possible values {self.patch_to_query_dict.keys()}.')
        query_num = self.patch_to_query_dict[num_patches]
        queries = self.query[:query_num].unsqueeze(0).repeat(batch_size, 1, 1)
        if attn_mask is not None:
            attn_mask = attn_mask.repeat_interleave(self.num_heads, 0)
            attn_mask = attn_mask.unsqueeze(1).expand(-1, queries.size(1), -1)
        attention_out = self.cross_attn(key_value_states, queries, attn_mask=attn_mask)
        out = self.feed_forward(self.layer_norm(attention_out))
        return out
Aria Projector module. This module projects vision features into the language model's embedding space, enabling interaction between vision and language components. Args: config (`AriaConfig`): Configuration object for the model.
github-repos
def _get_ssm_parameter(self, p):
    try:
        response = self._ssm.get_parameter(Name=p, WithDecryption=True)
        return response.get('Parameter', {}).get('Value', None)
    except Exception as ruh_roh:
        logging.error(ruh_roh, exc_info=False)
        return None
Get parameters from Simple Systems Manager Args: p - a parameter name Returns: a value, decrypted if needed, if successful or None if things go sideways.
codesearchnet
def limits(self, clip_negative=True):
    if self.as_numpy_dtype in dtype_range:
        min, max = dtype_range[self.as_numpy_dtype]
    else:
        raise ValueError(str(self) + ' does not have defined limits.')
    if clip_negative:
        min = 0
    return (min, max)
Return intensity limits, i.e. (min, max) tuple, of the dtype. Args: clip_negative (bool, optional): If True, clip the negative range (i.e. return 0 for min intensity) even if the image dtype allows negative values. Returns: (min, max) (tuple): Lower and upper intensity limits.
github-repos
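The `limits` method above depends on a `dtype_range` lookup table; a self-contained sketch of the same clip-negative logic, with a hypothetical minimal table (`DTYPE_RANGE` and its entries are illustrative, not the library's actual values for float types):

```python
# Hypothetical minimal table mapping type names to (min, max) intensity limits.
DTYPE_RANGE = {
    'uint8': (0, 255),
    'int8': (-128, 127),
    'float32': (-1.0, 1.0),
}

def limits(dtype_name, clip_negative=True):
    """Return (min, max) intensity limits, optionally clipping min to 0."""
    if dtype_name not in DTYPE_RANGE:
        raise ValueError(dtype_name + ' does not have defined limits.')
    lo, hi = DTYPE_RANGE[dtype_name]
    if clip_negative:
        lo = 0
    return (lo, hi)
```

With clipping enabled (the default), a signed type like `'int8'` reports `(0, 127)` instead of `(-128, 127)`.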
def _check_error(self, response, json_response=None):
    if response.status_code >= 400:
        json_response = json_response or self._get_json_response(response)
        err_cls = self._check_http_error_code(response.status_code)
        try:
            raise err_cls("%s error: %s" % (response.status_code,
                                            json_response["error"]["error_msg"]),
                          response.status_code)
        except TypeError:
            raise err_cls("%s error: %s" % (response.status_code,
                                            json_response["error_description"]),
                          response.status_code)
    return True
Check the response for an HTTP error code and raise an exception if one is found. Args: response (object): Object returned by requests' `get` and `post` methods json_response (dict): JSON response, if applicable Raises: HTTPError: If the status code of response is either 4xx or 5xx Returns: True if status code is not an error code
juraj-google-style
def initialize_block(self, block_header):
    state_view = BlockWrapper.state_view_for_block(
        self._block_cache.block_store.chain_head,
        self._state_view_factory)
    settings_view = SettingsView(state_view)
    self._min_wait_time = settings_view.get_setting(
        "sawtooth.consensus.min_wait_time", self._min_wait_time, int)
    self._max_wait_time = settings_view.get_setting(
        "sawtooth.consensus.max_wait_time", self._max_wait_time, int)
    self._valid_block_publishers = settings_view.get_setting(
        "sawtooth.consensus.valid_block_publishers",
        self._valid_block_publishers, list)
    block_header.consensus = b"Devmode"
    self._start_time = time.time()
    self._wait_time = random.uniform(self._min_wait_time, self._max_wait_time)
    return True
Do initialization necessary for the consensus to claim a block; this may include initiating voting activities, starting proof-of-work hash generation, or creating a PoET wait timer. Args: block_header (BlockHeader): the BlockHeader to initialize. Returns: True
juraj-google-style
def cli_cmd_to_string(args):
    if isinstance(args, basestring):
        return args
    return ' '.join([pipes.quote(arg) for arg in args])
Converts a cmd arg list to string. Args: args: list of strings, the arguments of a command. Returns: String representation of the command.
juraj-google-style
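The `pipes` module used above is deprecated in Python 3 (and removed in 3.13); `shlex.quote` is the documented replacement. A Python 3 sketch of the same helper:

```python
import shlex

def cli_cmd_to_string(args):
    """Join an argv list into a shell-safe string (Python 3, via shlex.quote)."""
    if isinstance(args, str):  # basestring does not exist in Python 3
        return args
    return ' '.join(shlex.quote(arg) for arg in args)
```

Arguments containing spaces come back quoted, e.g. `['echo', 'hello world']` becomes `echo 'hello world'`.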
def assertStartsWith(self, actual, expected_start, msg=None):
    if not actual.startswith(expected_start):
        fail_msg = '%r does not start with %r' % (actual, expected_start)
        fail_msg += ' : %r' % msg if msg else ''
        self.fail(fail_msg)
Assert that actual.startswith(expected_start) is True. Args: actual: str expected_start: str msg: Optional message to report on failure.
github-repos
def get_test_input_for_op(val, dtype): python_inferred_types = {(dtypes.int32, True): 1, (dtypes.float32, True): 1.0, (dtypes.complex128, True): 1j} dtype, weak = dtype inputs = [] if weak: inputs.append(convert_to_input_type(val, 'WeakTensor', dtype)) if dtype in python_inferred_types: val_in_dtype = val * python_inferred_types[dtype] inputs.append(val_in_dtype) inputs.append(convert_to_input_type(val_in_dtype, 'Tensor', None)) else: inputs.append(convert_to_input_type(val, 'Tensor', dtype)) inputs.append(convert_to_input_type(val, 'NumPy', dtype)) return inputs
Returns a list containing all the possible inputs with a given dtype. Args: val: value to convert to test input. dtype: a tuple of format (tf.Dtype, bool) where the bool value represents whether the dtype is "weak" or not. Returns: A list of all possible inputs given a value and a dtype.
github-repos
def service_status(self, short_name):
    if short_name not in self.services:
        raise ArgumentError('Unknown service name', short_name=short_name)
    info = {}
    service = self.services[short_name]['state']
    info['heartbeat_age'] = monotonic() - service.last_heartbeat
    info['numeric_status'] = service.state
    info['string_status'] = service.string_state
    return info
Get the current status of a service. Returns information about the service such as the time since the last heartbeat, any status messages that have been posted about the service and whether the heartbeat should be considered out of the ordinary. Args: short_name (string): The short name of the service to query Returns: dict: A dictionary with the status of the service
codesearchnet
def reformat_to_pretty_xml(doc_xml):
    assert isinstance(doc_xml, str)
    dom_obj = xml.dom.minidom.parseString(doc_xml)
    pretty_xml = dom_obj.toprettyxml(indent='  ')
    return re.sub('^\\s*$\\n', '', pretty_xml, flags=re.MULTILINE)
Pretty print XML doc. Args: doc_xml : str Well formed XML doc Returns: str: Pretty printed XML doc
codesearchnet
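A minimal standalone version of the pretty-printing above (the name `pretty_xml` is illustrative); the blank-line stripping is there because `toprettyxml` tends to leave whitespace-only lines around text nodes:

```python
import re
import xml.dom.minidom

def pretty_xml(doc_xml):
    """Re-indent a well-formed XML string and drop whitespace-only lines."""
    dom = xml.dom.minidom.parseString(doc_xml)
    out = dom.toprettyxml(indent='  ')
    # Remove lines that contain only whitespace.
    return re.sub(r'^\s*$\n', '', out, flags=re.MULTILINE)
```

For instance, `pretty_xml('<a><b>x</b></a>')` produces an indented document with `<b>x</b>` on its own line and no blank lines.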
def is_same_file(path1, path2): return ( path1 and path2 and os.path.isfile(path1) and os.path.isfile(path2) and os.path.samefile(path1, path2))
Return True if path1 is the same file as path2. The reason for this dance is that samefile throws if either file doesn't exist. Args: path1: str or path-like. path2: str or path-like. Returns: bool. True if the same file, False if not.
juraj-google-style
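Because `is_same_file` guards with `isfile` before calling `samefile`, a missing path yields `False` rather than the `OSError` that `os.path.samefile` raises on its own. A quick check, redefining the function locally for illustration:

```python
import os
import tempfile

def is_same_file(path1, path2):
    # Guarding with isfile means a missing path yields False instead of
    # the OSError that os.path.samefile would raise.
    return (
        path1 and path2
        and os.path.isfile(path1)
        and os.path.isfile(path2)
        and os.path.samefile(path1, path2))

fd, path = tempfile.mkstemp()
os.close(fd)
try:
    same = is_same_file(path, path)                   # same path -> True
    missing = is_same_file(path, path + '.missing')   # nonexistent -> False
finally:
    os.remove(path)
```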
class CLIPSegDecoderOutput(ModelOutput):
    logits: Optional[torch.FloatTensor] = None
    hidden_states: Optional[Tuple[torch.FloatTensor]] = None
    attentions: Optional[Tuple[torch.FloatTensor]] = None
Args: logits (`torch.FloatTensor` of shape `(batch_size, height, width)`): Classification scores for each pixel. hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
github-repos
def poly_energy(sample_like, poly):
    msg = ('poly_energy is deprecated and will be removed in dimod 0.9.0. '
           'In the future, use BinaryPolynomial.energy')
    warnings.warn(msg, DeprecationWarning)
    return BinaryPolynomial(poly, 'SPIN').energy(sample_like)
Calculates energy of a sample from a higher order polynomial. Args: sample (samples_like): A raw sample. `samples_like` is an extension of NumPy's array_like structure. See :func:`.as_samples`. poly (dict): Polynomial as a dict of form {term: bias, ...}, where `term` is a tuple of variables and `bias` the associated bias. Returns: float: The energy of the sample.
codesearchnet
def get_model_from_feature(feature: str, model: str,
                           framework: Optional[str] = None,
                           cache_dir: Optional[str] = None
                           ) -> Union['PreTrainedModel', 'TFPreTrainedModel']:
    framework = FeaturesManager.determine_framework(model, framework)
    model_class = FeaturesManager.get_model_class_for_feature(feature, framework)
    try:
        model = model_class.from_pretrained(model, cache_dir=cache_dir)
    except OSError:
        if framework == 'pt':
            logger.info('Loading TensorFlow model in PyTorch before exporting to ONNX.')
            model = model_class.from_pretrained(model, from_tf=True, cache_dir=cache_dir)
        else:
            logger.info('Loading PyTorch model in TensorFlow before exporting to ONNX.')
            model = model_class.from_pretrained(model, from_pt=True, cache_dir=cache_dir)
    return model
Attempts to retrieve a model from a model's name and the feature to be enabled. Args: feature (`str`): The feature required. model (`str`): The name of the model to export. framework (`str`, *optional*, defaults to `None`): The framework to use for the export. See `FeaturesManager.determine_framework` for the priority should none be provided. Returns: The instance of the model.
github-repos
def fix_addresses(start=None, end=None):
    if start in (None, idaapi.BADADDR):
        start = idaapi.cvar.inf.minEA
    if end in (None, idaapi.BADADDR):
        end = idaapi.cvar.inf.maxEA
    return (start, end)
Set missing addresses to start and end of IDB. Take a start and end addresses. If an address is None or `BADADDR`, return start or end addresses of the IDB instead. Args: start: Start EA. Use `None` to get IDB start. end: End EA. Use `None` to get IDB end. Returns: (start, end)
codesearchnet
def all_near_zero_mod(a: Union[float, complex, Iterable[float], np.ndarray],
                      period: float, *, atol: float = 1e-8) -> bool:
    b = (np.asarray(a) + period / 2) % period - period / 2
    return np.all(np.less_equal(np.abs(b), atol))
Checks if the tensor's elements are all near multiples of the period. Args: a: Tensor of elements that could all be near multiples of the period. period: The period, e.g. 2 pi when working in radians. atol: Absolute tolerance.
juraj-google-style
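The modular-folding trick above (shift by half a period, take the remainder, shift back) maps every value to its signed distance from the nearest multiple of the period. A pure-Python re-creation of the same idea (`all_near_zero_mod` here is a sketch, not the original NumPy implementation):

```python
import math

def all_near_zero_mod(values, period, atol=1e-8):
    """True if every value is within atol of an integer multiple of period."""
    def dist_to_multiple(x):
        # Fold x into [-period/2, period/2) around the nearest multiple.
        r = (x + period / 2) % period - period / 2
        return abs(r)
    return all(dist_to_multiple(v) <= atol for v in values)
```

For example, multiples of 2π pass while π (half a period away) does not.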
def align_segmentation(beat_times, song):
    try:
        segment_times, segment_labels = msaf.io.read_references(song)
    except Exception:
        return (None, None, None)
    segment_times = np.asarray(segment_times)
    segment_intervals = msaf.utils.times_to_intervals(segment_times)
    # list() is needed under Python 3, where zip returns an iterator.
    beat_intervals = np.asarray(list(zip(beat_times[:-1], beat_times[1:])))
    beat_segment_ids = librosa.util.match_intervals(beat_intervals, segment_intervals)
    segment_beats = []
    segment_times_out = []
    segment_labels_out = []
    for i in range(segment_times.shape[0]):
        hits = np.argwhere(beat_segment_ids == i)
        if len(hits) > 0 and i < len(segment_intervals) and i < len(segment_labels):
            segment_beats.extend(hits[0])
            segment_times_out.append(segment_intervals[i, :])
            segment_labels_out.append(segment_labels[i])
    segment_beats = list(segment_beats)
    segment_times_out = segment_times
    return (segment_beats, segment_times_out, segment_labels_out)
Load a ground-truth segmentation, and align times to the nearest detected beats. Arguments: beat_times -- array song -- path to the audio file Returns: segment_beats -- array beat-aligned segment boundaries segment_times -- array true segment times segment_labels -- array list of segment labels
codesearchnet
def cctop_check_status(jobid): status = 'http: status_text = requests.post(status) return status_text.text
Check the status of a CCTOP job ID. Args: jobid (str): Job ID obtained when job was submitted Returns: str: 'Finished' if the job is finished and results ready to be downloaded, 'Running' if still in progress, 'Invalid' for any errors.
juraj-google-style
def _PrintStorageInformationAsJSON(self, storage_reader):
    serializer = json_serializer.JSONAttributeContainerSerializer
    storage_counters = self._CalculateStorageCounters(storage_reader)
    storage_counters_json = json.dumps(storage_counters)
    self._output_writer.Write('{')
    self._output_writer.Write('"storage_counters": {0:s}'.format(storage_counters_json))
    self._output_writer.Write(',\n')
    self._output_writer.Write(' "sessions": {')
    for index, session in enumerate(storage_reader.GetSessions()):
        json_string = serializer.WriteSerialized(session)
        if index != 0:
            self._output_writer.Write(',\n')
        self._output_writer.Write('"session_{0:s}": {1:s} '.format(
            session.identifier, json_string))
    self._output_writer.Write('}}')
Writes a summary of sessions as machine-readable JSON. Args: storage_reader (StorageReader): storage reader.
codesearchnet
def list_cert_bindings(site):
    ret = dict()
    sites = list_sites()
    if site not in sites:
        log.warning('Site not found: %s', site)
        return ret
    for binding in sites[site]['bindings']:
        if sites[site]['bindings'][binding]['certificatehash']:
            ret[binding] = sites[site]['bindings'][binding]
    if not ret:
        log.warning('No certificate bindings found for site: %s', site)
    return ret
List certificate bindings for an IIS site. .. versionadded:: 2016.11.0 Args: site (str): The IIS site name. Returns: dict: A dictionary of the binding names and properties. CLI Example: .. code-block:: bash salt '*' win_iis.list_cert_bindings site
codesearchnet
class RuntimeMetric(Metric):
    def __init__(self, runtime_list, metric_id):
        value = self._prepare_runtime_metrics(runtime_list)
        submit_timestamp = time.time()
        label = runtime_list[0].key.metric.namespace + '_' + RUNTIME_METRIC
        super().__init__(submit_timestamp, metric_id, value, None, label)

    def _prepare_runtime_metrics(self, distributions):
        min_values = []
        max_values = []
        for dist in distributions:
            min_values.append(dist.result.min)
            max_values.append(dist.result.max)
        min_value = min(min_values)
        max_value = max(max_values)
        runtime_in_s = float(max_value - min_value)
        return runtime_in_s
The Distribution Metric in ready-to-publish format. Args: runtime_list: list of distributions metrics from MetricResult with runtime name metric_id(uuid): unique id to identify test run
github-repos
def update_(self, sct_dict, conf_arg=True):
    for opt, val in sct_dict.items():
        if opt not in self.def_:
            continue
        if not conf_arg or self.def_[opt].conf_arg:
            self[opt] = val
Update values of configuration section with dict. Args: sct_dict (dict): dict indexed with option names. Undefined options are discarded. conf_arg (bool): if True, only options that can be set in a config file are updated.
juraj-google-style
def remove(path, **kwargs):
    kwargs = salt.utils.args.clean_kwargs(**kwargs)
    rehash_ = kwargs.pop('rehash', True)
    if kwargs:
        salt.utils.args.invalid_kwargs(kwargs)
    path = _normalize_dir(path)
    path_str = salt.utils.stringutils.to_str(path)
    system_path = get_path()
    local_path = [salt.utils.stringutils.to_str(x)
                  for x in os.environ['PATH'].split(PATHSEP)]

    def _check_path(dirs, path):
        '''
        Check the dir list for the specified path, and make changes to the
        list if needed. Return True if changes were made to the list,
        otherwise return False.
        '''
        dirs_lc = [x.lower() for x in dirs]
        path_lc = path.lower()
        new_dirs = []
        for index, dirname in enumerate(dirs_lc):
            if path_lc != dirname:
                new_dirs.append(dirs[index])
        if len(new_dirs) != len(dirs):
            dirs[:] = new_dirs[:]
            return True
        return False

    if _check_path(local_path, path_str):
        _update_local_path(local_path)
    if not _check_path(system_path, path):
        return True
    result = __utils__['reg.set_value'](
        HIVE, KEY, VNAME,
        ';'.join(salt.utils.data.decode(system_path)), VTYPE)
    if result and rehash_:
        return rehash()
    return result
r''' Remove the directory from the SYSTEM path Returns: boolean True if successful, False if unsuccessful rehash : True If the registry was updated, and this value is set to ``True``, sends a WM_SETTINGCHANGE broadcast to refresh the environment variables. Set this to ``False`` to skip this broadcast. CLI Example: .. code-block:: bash # Will remove C:\Python27 from the path salt '*' win_path.remove 'c:\\python27'
codesearchnet
def register_name(self, register_index): result = self._dll.JLINKARM_GetRegisterName(register_index) return ctypes.cast(result, ctypes.c_char_p).value.decode()
Retrieves and returns the name of an ARM CPU register.

Args:
  self (JLink): the ``JLink`` instance
  register_index (int): index of the register whose name to retrieve

Returns:
  Name of the register.
juraj-google-style
def adjust_contrast(img, contrast_factor): if (not _is_pil_image(img)): raise TypeError('img should be PIL Image. Got {}'.format(type(img))) enhancer = ImageEnhance.Contrast(img) img = enhancer.enhance(contrast_factor) return img
Adjust contrast of an Image.

Args:
    img (PIL Image): PIL Image to be adjusted.
    contrast_factor (float): How much to adjust the contrast. Can be any
        non-negative number. 0 gives a solid gray image, 1 gives the
        original image, while 2 increases the contrast by a factor of 2.

Returns:
    PIL Image: Contrast adjusted image.
codesearchnet
def process_runway_configs(runway_dir=''): LOG.info('Processing application.json files from local directory "%s".', runway_dir) file_lookup = FileLookup(runway_dir=runway_dir) app_configs = process_configs(file_lookup, 'application-master-{env}.json', 'pipeline.json') return app_configs
Read the _application.json_ files. Args: runway_dir (str): Name of runway directory with app.json files. Returns: collections.defaultdict: Configurations stored for each environment found.
juraj-google-style
def CleanseRawStrings(raw_lines):
    delimiter = None
    lines_without_raw_strings = []
    for line in raw_lines:
        if delimiter:
            end = line.find(delimiter)
            if end >= 0:
                leading_space = Match(r'^(\s*)\S', line)
                line = leading_space.group(1) + '""' + line[end + len(delimiter):]
                delimiter = None
            else:
                line = '""'
        while delimiter is None:
            matched = Match(r'^(.*?)\b(?:R|u8R|uR|UR|LR)"([^\s\\()]*)\((.*)$', line)
            if (matched and
                    not Match(r'^([^\'"]|\'(\\.|[^\'])*\'|"(\\.|[^"])*")*
                              matched.group(1))):
                delimiter = ')' + matched.group(2) + '"'
                end = matched.group(3).find(delimiter)
                if end >= 0:
                    line = (matched.group(1) + '""' +
                            matched.group(3)[end + len(delimiter):])
                    delimiter = None
                else:
                    line = matched.group(1) + '""'
            else:
                break
        lines_without_raw_strings.append(line)
    return lines_without_raw_strings
Removes C++11 raw strings from lines. Before: static const char kData[] = R"( multi-line string )"; After: static const char kData[] = "" (replaced by blank line) ""; Args: raw_lines: list of raw lines. Returns: list of lines with C++11 raw strings replaced by empty strings.
juraj-google-style
def add_sam2rnf_parser(subparsers, subcommand, help, description, simulator_name=None): parser_sam2rnf = subparsers.add_parser(subcommand, help=help, description=description) parser_sam2rnf.set_defaults(func=sam2rnf) parser_sam2rnf.add_argument( '-s', '--sam', type=str, metavar='file', dest='sam_fn', required=True, help='Input SAM/BAM with true (expected) alignments of the reads (- for standard input).' ) _add_shared_params(parser_sam2rnf, unmapped_switcher=True) parser_sam2rnf.add_argument( '-n', '--simulator-name', type=str, metavar='str', dest='simulator_name', default=simulator_name, help='Name of the simulator (for RNF).' if simulator_name is not None else argparse.SUPPRESS, )
Add another parser for a SAM2RNF-like command.

Args:
    subparsers (subparsers): Collection of argparse subparsers to which the new parser is added.
    subcommand (str): Name of the new subcommand.
    help (str): Help message for the subcommand.
    description (str): Description of the subcommand.
    simulator_name (str): Name of the simulator used in comments.
juraj-google-style
def assert_has_rank(self, rank): if self.rank not in (None, rank): raise ValueError('Shape %s must have rank %d' % (self, rank))
Raises an exception if `self` is not compatible with the given `rank`. Args: rank: An integer. Raises: ValueError: If `self` does not represent a shape with the given `rank`.
github-repos
def compile(cfg_path, out_path, executable=None, env=None, log=None): try: check_call([(executable or 'ace'), '-g', cfg_path, '-G', out_path], stdout=log, stderr=log, close_fds=True, env=(env or os.environ)) except (CalledProcessError, OSError): logging.error('Failed to compile grammar with ACE. See {}'.format((log.name if (log is not None) else '<stderr>'))) raise
Use ACE to compile a grammar. Args: cfg_path (str): the path to the ACE config file out_path (str): the path where the compiled grammar will be written executable (str, optional): the path to the ACE binary; if `None`, the `ace` command will be used env (dict, optional): environment variables to pass to the ACE subprocess log (file, optional): if given, the file, opened for writing, or stream to write ACE's stdout and stderr compile messages
codesearchnet
async def run(*cmd): stdout = await checked_run(*cmd) log_path = os.path.join(FLAGS.base_dir, get_cmd_name(cmd) + '.log') with gfile.Open(log_path, 'a') as f: f.write(expand_cmd_str(cmd)) f.write('\n') f.write(stdout) f.write('\n') return stdout.split('\n')
Run the given subprocess command in a coroutine. Args: *cmd: the command to run and its arguments. Returns: The output that the command wrote to stdout as a list of strings, one line per element (stderr output is piped to stdout). Raises: RuntimeError: if the command returns a non-zero result.
juraj-google-style
def create_datastore_write_config(mapreduce_spec): force_writes = parse_bool(mapreduce_spec.params.get("force_writes", "false")) if force_writes: return datastore_rpc.Configuration(force_writes=force_writes) else: return datastore_rpc.Configuration()
Creates datastore config to use in write operations. Args: mapreduce_spec: current mapreduce specification as MapreduceSpec. Returns: an instance of datastore_rpc.Configuration to use for all write operations in the mapreduce.
juraj-google-style
def pairwise_alignment_stats(reference_seq_aln, other_seq_aln): if (len(reference_seq_aln) != len(other_seq_aln)): raise ValueError('Sequence lengths not equal - was an alignment run?') reference_seq_aln = ssbio.protein.sequence.utils.cast_to_str(reference_seq_aln) other_seq_aln = ssbio.protein.sequence.utils.cast_to_str(other_seq_aln) infodict = {} stats_percent_ident = get_percent_identity(a_aln_seq=reference_seq_aln, b_aln_seq=other_seq_aln) infodict['percent_identity'] = stats_percent_ident aln_df = get_alignment_df(a_aln_seq=reference_seq_aln, b_aln_seq=other_seq_aln) infodict['deletions'] = get_deletions(aln_df) infodict['insertions'] = get_insertions(aln_df) infodict['mutations'] = get_mutations(aln_df) infodict['unresolved'] = get_unresolved(aln_df) return infodict
Get a report of a pairwise alignment. Args: reference_seq_aln (str, Seq, SeqRecord): Reference sequence, alignment form other_seq_aln (str, Seq, SeqRecord): Other sequence, alignment form Returns: dict: Dictionary of information on mutations, insertions, sequence identity, etc.
codesearchnet
def _grappler_enabled_session_config(): rewriter_config = rewriter_config_pb2.RewriterConfig(disable_model_pruning=False, arithmetic_optimization=rewriter_config_pb2.RewriterConfig.ON) graph_options = config_pb2.GraphOptions(rewrite_options=rewriter_config) return config_pb2.ConfigProto(graph_options=graph_options)
Constructs a Session config proto that explicitly enables Grappler. Returns: A config proto that obtains extra safety for the unit tests in this file by ensuring that the relevant Grappler rewrites are always enabled.
github-repos
def check_runtime_errors(cmd_derived_from_alias, pos_args_table): for placeholder, value in pos_args_table.items(): exec('{} = "{}"'.format(placeholder, value)) expressions = get_placeholders(cmd_derived_from_alias) for expression in expressions: try: exec(expression) except Exception as exception: error_msg = PLACEHOLDER_EVAL_ERROR.format(expression, exception) raise CLIError(error_msg)
Validate placeholders and their expressions in cmd_derived_from_alias to make sure that there is no runtime error (such as index out of range). Args: cmd_derived_from_alias: The command derived from the alias (include any positional argument placehodlers) pos_args_table: The positional argument table.
juraj-google-style
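The same validation idea can be sketched without the CLI machinery: evaluate each placeholder expression against the positional-argument table and surface runtime errors such as an out-of-range index. The function name and the `arg_0` naming convention below are illustrative, not part of the alias extension.

```python
def check_placeholder_expressions(expressions, pos_args_table):
    """Evaluate placeholder expressions against positional args (sketch).

    Raises ValueError naming the failing expression, mirroring the
    error-reporting flow of check_runtime_errors above.
    """
    for expression in expressions:
        try:
            # The positional args serve as the local namespace, e.g.
            # {'arg_0': 'web-prod'} lets "arg_0.split('-')[0]" evaluate.
            eval(expression, {}, dict(pos_args_table))
        except Exception as exception:
            raise ValueError('"{}" failed: {}'.format(expression, exception))
```

Using `eval` rather than `exec` for the check avoids mutating any namespace while still catching the same class of runtime failures.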
def cardinality(gym_space): if (gym_space.dtype == np.float32) or (gym_space.dtype == np.float64): tf.logging.error("Returning None for a float gym space's cardinality: ", gym_space) return None if isinstance(gym_space, Discrete): return gym_space.n if isinstance(gym_space, Box): return np.prod(gym_space.high - gym_space.low + 1) raise NotImplementedError
Number of elements that can be represented by the space. Makes the most sense for Discrete or Box type with integral dtype, ex: number of actions in an action space. Args: gym_space: The gym space. Returns: np.int64 number of observations that can be represented by this space, or returns None when this doesn't make sense, i.e. float boxes etc. Raises: NotImplementedError when a space's cardinality makes sense but we haven't implemented it.
juraj-google-style
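The Box branch above counts integer lattice points; a standalone sketch of that arithmetic (the `box_cardinality` name is made up for illustration):

```python
import numpy as np

def box_cardinality(low, high):
    """Number of integer points in a Box with inclusive integral bounds."""
    low = np.asarray(low)
    high = np.asarray(high)
    # Each dimension contributes (high - low + 1) choices; multiply them.
    return int(np.prod(high - low + 1))
```

For a 2-D box with bounds [0, 2] and [0, 3] this gives 3 * 4 = 12 representable observations, matching `np.prod(gym_space.high - gym_space.low + 1)` in the function above.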
def _wait_time(self, shard_state, secs, now=datetime.datetime.now): assert (shard_state.slice_start_time is not None) delta = (now() - shard_state.slice_start_time) duration = datetime.timedelta(seconds=secs) if (delta < duration): return util.total_seconds((duration - delta)) else: return 0
Time to wait until slice_start_time is secs ago from now.

Args:
    shard_state: shard state.
    secs: duration in seconds.
    now: a func that gets now.

Returns:
    0 if no wait. A positive int in seconds otherwise. Always rounded up.
codesearchnet
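The remaining-wait computation can be exercised standalone. This sketch (hypothetical `remaining_wait` name) pins `now` for a deterministic result, just as the method above accepts a `now` callable for testability:

```python
import datetime

def remaining_wait(slice_start_time, secs, now=datetime.datetime.now):
    """Seconds to wait until slice_start_time is `secs` ago from `now`."""
    delta = now() - slice_start_time
    duration = datetime.timedelta(seconds=secs)
    if delta < duration:
        return (duration - delta).total_seconds()
    return 0
```

If the slice started 3 seconds ago and the required spacing is 10 seconds, the caller should wait 7 more seconds; once the spacing has elapsed, the wait is 0.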
def concept_distance(c1, c2): cause_purview = tuple(set((c1.cause.purview + c2.cause.purview))) effect_purview = tuple(set((c1.effect.purview + c2.effect.purview))) return (repertoire_distance(c1.expand_cause_repertoire(cause_purview), c2.expand_cause_repertoire(cause_purview)) + repertoire_distance(c1.expand_effect_repertoire(effect_purview), c2.expand_effect_repertoire(effect_purview)))
Return the distance between two concepts in concept space. Args: c1 (Concept): The first concept. c2 (Concept): The second concept. Returns: float: The distance between the two concepts in concept space.
codesearchnet
def get_street_from_xy(self, **kwargs): params = { 'coordinateX': kwargs.get('longitude'), 'coordinateY': kwargs.get('latitude'), 'Radius': kwargs.get('radius'), 'cultureInfo': util.language_code(kwargs.get('lang')) } result = self.make_request('geo', 'get_street_from_xy', **params) if not util.check_result(result, 'site'): return False, 'UNKNOWN ERROR' values = util.response_list(result, 'site') return True, [emtype.Street(**a) for a in values]
Obtain a list of streets around the specified point. Args: latitude (double): Latitude in decimal degrees. longitude (double): Longitude in decimal degrees. radius (int): Radius (in meters) of the search. lang (str): Language code (*es* or *en*). Returns: Status boolean and parsed response (list[Street]), or message string in case of error.
juraj-google-style
def GetNodeAnnotation(node, annotation, default=None): return getattr(node, _NODE_ANNOTATION_PREFIX + annotation, default)
Get annotation value from a node. Arguments: node: the node. annotation: annotation name - a string. default: the default value to return if there's no annotation. Returns: Value of the annotation in the given node. If the node doesn't have this particular annotation name yet, returns default.
github-repos
def file_path(path=None, payload=None, objectInput=None): f = (path if path else write_payload(payload, objectInput)) if (not os.path.exists(f)): msg = 'File {!r} does not exist'.format(f) log.exception(msg) raise TikaAppFilePathError(msg) return f
Given a file path, payload or file object, it writes the file to disk and
returns the temp path.

Args:
    path (string): path of real file
    payload(string): payload in base64 of file
    objectInput (object): file object/standard input to analyze

Returns:
    Path of file
codesearchnet
def ParseMessage(descriptor, byte_str): result_class = MakeClass(descriptor) new_msg = result_class() new_msg.ParseFromString(byte_str) return new_msg
Generate a new Message instance from this Descriptor and a byte string. Args: descriptor: Protobuf Descriptor object byte_str: Serialized protocol buffer byte string Returns: Newly created protobuf Message object.
codesearchnet
def wait_for_elements(self, using, value, timeout=10000, interval=1000, asserter=is_displayed): if (not callable(asserter)): raise TypeError('Asserter must be callable.') @retry(retry_on_exception=(lambda ex: isinstance(ex, WebDriverException)), stop_max_delay=timeout, wait_fixed=interval) def _wait_for_elements(ctx, using, value): els = ctx.elements(using, value) if (not len(els)): raise WebDriverException('no such element') else: el = els[0] asserter(el) return els return _wait_for_elements(self, using, value)
Wait for elements until they satisfy the given condition.

Support:
    Android iOS Web(WebView)

Args:
    using(str): The element location strategy.
    value(str): The value of the location strategy.
    timeout(int): How long we should be retrying stuff.
    interval(int): How long between retries.
    asserter(callable): The asserter func to determine the result.

Returns:
    Return the list of Element if any of them satisfy the condition.

Raises:
    WebDriverException.
codesearchnet
def from_parent(parent_key, i): if not isinstance(parent_key, HDPrivateKey): raise TypeError("parent_key must be an HDPrivateKey object.") hmac_key = parent_key.chain_code if i & 0x80000000: hmac_data = b'\x00' + bytes(parent_key._key) + i.to_bytes(length=4, byteorder='big') else: hmac_data = parent_key.public_key.compressed_bytes + i.to_bytes(length=4, byteorder='big') I = hmac.new(hmac_key, hmac_data, hashlib.sha512).digest() Il, Ir = I[:32], I[32:] parse_Il = int.from_bytes(Il, 'big') if parse_Il >= bitcoin_curve.n: return None child_key = (parse_Il + parent_key._key.key) % bitcoin_curve.n if child_key == 0: return None child_depth = parent_key.depth + 1 return HDPrivateKey(key=child_key, chain_code=Ir, index=i, depth=child_depth, parent_fingerprint=parent_key.fingerprint)
Derives a child private key from a parent private key. It is not
possible to derive a child private key from a public parent key.

Args:
    parent_key (HDPrivateKey): The parent private key to derive from.
    i (int): Index of the child key. Indexes >= 0x80000000 derive
        hardened children.

Returns:
    HDPrivateKey: The derived child private key, or None if the
        derived key is invalid.
juraj-google-style
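The derivation arithmetic above can be sketched without a key class: HMAC-SHA512 keyed by the chain code over either `0x00 || parent_key || i` (hardened, i >= 0x80000000) or `parent_pubkey || i` (normal), followed by a modular addition. The function and constant below are illustrative; `N` assumes the secp256k1 group order behind `bitcoin_curve`:

```python
import hashlib
import hmac

# secp256k1 group order (assumption: the curve used by bitcoin_curve above).
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def derive_child_scalar(parent_scalar, chain_code, i, parent_pub_compressed):
    """Return (child_scalar, child_chain_code), or None if the draw is invalid."""
    if i & 0x80000000:
        # Hardened child: commit to the private scalar itself.
        data = b'\x00' + parent_scalar.to_bytes(32, 'big') + i.to_bytes(4, 'big')
    else:
        # Normal child: commit only to the compressed public key.
        data = parent_pub_compressed + i.to_bytes(4, 'big')
    I = hmac.new(chain_code, data, hashlib.sha512).digest()
    Il, Ir = int.from_bytes(I[:32], 'big'), I[32:]
    if Il >= N:
        return None  # astronomically rare; the caller skips this index
    child = (Il + parent_scalar) % N
    if child == 0:
        return None
    return child, Ir
```

The two `None` branches mirror the snippet above: a left half at or beyond the group order, or a zero child key, makes the index unusable.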
def __init__(self, resume_delay=0): self.resume_delay = resume_delay
Initializes a ProcessContinuation object. Args: resume_delay: indicates the minimum time, in seconds, that should elapse before re-invoking process() method for resuming the invocation of the current element.
github-repos
def __mul__( self, scaling ): new_calculation = Calculation( title=self.title, energy=self.energy*scaling, stoichiometry=self.scale_stoichiometry( scaling ) ) return new_calculation
"Multiply" this Calculation by a scaling factor. Returns a new Calculation with the same title, but scaled energy and stoichiometry. Args: scaling (float): The scaling factor. Returns: (vasppy.Calculation): The scaled Calculation.
juraj-google-style
def __div__(self, other): if isinstance(other, LazyOpResult): other = other.expr return NumpyArrayWeld( numpy_weld_impl.div( self.expr, other, self.weld_type ), self.weld_type )
Element-wise division of this array by ``other``.

Args:
    other (LazyOpResult or Weld expression): The divisor; a
        LazyOpResult is unwrapped to its underlying expression.

Returns:
    NumpyArrayWeld: Lazy result holding the division expression.
juraj-google-style
def hash64(data: Any, seed: int = 0) -> int: c_data = to_str(data) if mmh3: c_signed_low, _ = mmh3.hash64(data, seed=seed, x64arch=IS_64_BIT) return c_signed_low py_data = to_bytes(c_data) py_signed_low, _ = pymmh3_hash64(py_data, seed=seed) return py_signed_low
Non-cryptographic, deterministic, fast hash. Args: data: data to hash seed: seed Returns: signed 64-bit integer
juraj-google-style
def prose_wc(args): if args.file is None: return 1 if args.split_hyphens: INTERSTITIAL_PUNCTUATION.append(re.compile(r'-')) content = args.file.read().decode('utf-8') filename = args.file.name body = strip_frontmatter(content) parsed = markdown_to_text(body) result = wc(filename, body, parsed=parsed, is_jekyll=(body != content)) if (args.update and filename != '_stdin_' and result['counts']['type'] == 'jekyll'): update_file(filename, result, content, args.indent) else: _mockable_print({ 'yaml': yaml.safe_dump(result, default_flow_style=False, indent=args.indent), 'json': json.dumps(result, indent=args.indent), 'default': default_dump(result), }[args.format]) return 0
Processes data provided to print a count object, or update a file. Args: args: an ArgumentParser object returned by setup()
juraj-google-style
def send_log_messages(self, messages: List[LogMessage]) -> None: for message in messages: self.send_log_message(message)
Prints multiple log messages to be captured by cloud logging. Args: * messages: list of LogMessage dictionaries Returns: * None
github-repos
def conv2d_bn(x, filters, num_row, num_col, padding='same', strides=(1, 1), name=None): if name is not None: bn_name = name + '_bn' conv_name = name + '_conv' else: bn_name = None conv_name = None if backend.image_data_format() == 'channels_first': bn_axis = 1 else: bn_axis = 3 x = layers.Conv2D(filters, (num_row, num_col), strides=strides, padding=padding, use_bias=False, name=conv_name)(x) x = layers.BatchNormalization(axis=bn_axis, scale=False, name=bn_name)(x) x = layers.Activation('relu', name=name)(x) return x
Utility function to apply conv + BN. Args: x: input tensor. filters: filters in `Conv2D`. num_row: height of the convolution kernel. num_col: width of the convolution kernel. padding: padding mode in `Conv2D`. strides: strides in `Conv2D`. name: name of the ops; will become `name + '_conv'` for the convolution and `name + '_bn'` for the batch norm layer. Returns: Output tensor after applying `Conv2D` and `BatchNormalization`.
github-repos
def add_update_resource_views(self, resource_views): if (not isinstance(resource_views, list)): raise HDXError('ResourceViews should be a list!') for resource_view in resource_views: self.add_update_resource_view(resource_view)
Add new or update existing resource views in resource with new metadata. Args: resource_views (List[Union[ResourceView,Dict]]): A list of resource views metadata from ResourceView objects or dictionaries Returns: None
codesearchnet
def call(self, input_ids: TFModelInputType | None=None, attention_mask: np.ndarray | tf.Tensor | None=None, head_mask: np.ndarray | tf.Tensor | None=None, inputs_embeds: np.ndarray | tf.Tensor | None=None, output_attentions: Optional[bool]=None, output_hidden_states: Optional[bool]=None, return_dict: Optional[bool]=None, training: Optional[bool]=False) -> Union[Tuple, TFBaseModelOutput]: encoder_outputs = self.encoder(input_ids, attention_mask=attention_mask, encoder_hidden_states=None, encoder_attention_mask=None, inputs_embeds=inputs_embeds, head_mask=head_mask, past_key_values=None, use_cache=False, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, training=training) if not return_dict: return encoder_outputs return TFBaseModelOutput(last_hidden_state=encoder_outputs.last_hidden_state, hidden_states=encoder_outputs.hidden_states, attentions=encoder_outputs.attentions)
Returns: Examples: ```python >>> from transformers import AutoTokenizer, TFT5EncoderModel >>> tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-small") >>> model = TFT5EncoderModel.from_pretrained("google-t5/t5-small") >>> input_ids = tokenizer( ... "Studies have been shown that owning a dog is good for you", return_tensors="tf" ... ).input_ids # Batch size 1 >>> outputs = model(input_ids) ```
github-repos
def _CaptureExpression(self, frame, expression): (rc, value) = _EvaluateExpression(frame, expression) if (not rc): return {'name': expression, 'status': value} return self.CaptureNamedVariable(expression, value, 0, self.expression_capture_limits)
Evaluates the expression and captures it into a Variable object.

Args:
    frame: evaluation context.
    expression: watched expression to compile and evaluate.

Returns:
    Variable object (which will have error status if the expression fails
    to evaluate).
codesearchnet
def getModName(self): ret = self.mod_name if (ret is None): ret = self.__class__.__name__ return ret.lower()
Return the lowercased name of this module.

Notes:
    This pulls the ``mod_name`` attribute on the class. This allows an
    implementer to set an arbitrary name for the module. If this
    attribute is not set, it defaults to ``self.__class__.__name__``.

Returns:
    (str): The module name.
codesearchnet
def en(item): if (pakr is None): return msgpack.packb(item, use_bin_type=True, unicode_errors='surrogatepass') try: return pakr.pack(item) except Exception: pakr.reset() raise
Use msgpack to serialize a compatible python object. Args: item (obj): The object to serialize Notes: String objects are encoded using utf8 encoding. In order to handle potentially malformed input, ``unicode_errors='surrogatepass'`` is set to allow encoding bad input strings. Returns: bytes: The serialized bytes in msgpack format.
codesearchnet
def get_excitation_spectrum(self, width=0.1, npoints=2000): roots = self.parse_tddft() data = roots['singlet'] en = np.array([d['energy'] for d in data]) osc = np.array([d['osc_strength'] for d in data]) epad = (20.0 * width) emin = (en[0] - epad) emax = (en[(- 1)] + epad) de = ((emax - emin) / npoints) if (width < (2 * de)): width = (2 * de) energies = [(emin + (ie * de)) for ie in range(npoints)] cutoff = (20.0 * width) gamma = (0.5 * width) gamma_sqrd = (gamma * gamma) de = ((energies[(- 1)] - energies[0]) / (len(energies) - 1)) prefac = ((gamma / np.pi) * de) x = [] y = [] for energy in energies: xx0 = (energy - en) stot = (osc / ((xx0 * xx0) + gamma_sqrd)) t = np.sum(stot[(np.abs(xx0) <= cutoff)]) x.append(energy) y.append((t * prefac)) return ExcitationSpectrum(x, y)
Generate an excitation spectra from the singlet roots of TDDFT calculations. Args: width (float): Width for Gaussian smearing. npoints (int): Number of energy points. More points => smoother curve. Returns: (ExcitationSpectrum) which can be plotted using pymatgen.vis.plotters.SpectrumPlotter.
codesearchnet
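The broadening loop above is a Lorentzian convolution of stick transitions. A compact numpy sketch of the same idea (vectorized over the energy grid, returning a spectral density rather than the per-bin-scaled values above; names are illustrative):

```python
import numpy as np

def lorentzian_broaden(energies, strengths, width=0.1, npoints=2000):
    """Broaden stick transitions (energy, strength) into a smooth curve."""
    en = np.asarray(energies, dtype=float)
    osc = np.asarray(strengths, dtype=float)
    pad = 20.0 * width
    x = np.linspace(en.min() - pad, en.max() + pad, npoints)
    gamma = 0.5 * width
    # Sum one normalized Lorentzian per transition, weighted by its strength.
    y = np.zeros_like(x)
    for e0, f in zip(en, osc):
        y += f * (gamma / np.pi) / ((x - e0) ** 2 + gamma ** 2)
    return x, y
```

Each normalized Lorentzian integrates to 1, so a line of oscillator strength f contributes area f to the curve (minus the tails cut off by the 20-width padding).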
def find_record(self, model_class, record_id, reload=False): cached_model = self.peek_record(model_class, record_id) if cached_model is not None and reload is False: return cached_model else: return self._get_record(model_class, record_id)
Return an instance of model_class from the API or the local cache.

Args:
    model_class (:class:`cinder_data.model.CinderModel`): A subclass of
        :class:`cinder_data.model.CinderModel` of your chosen model.
    record_id (int): The id of the record requested.
    reload (bool, optional): Don't return the cached version if reload==True.

Returns:
    :class:`cinder_data.model.CinderModel`: An instance of model_class or None.
juraj-google-style
def get_entity(self, etype, entity_id): r = fapi.get_entity(self.namespace, self.name, etype, entity_id, self.api_url) fapi._check_response_code(r, 200) dresp = r.json() return Entity(etype, entity_id, dresp['attributes'])
Return entity in this workspace. Args: etype (str): Entity type entity_id (str): Entity name/unique id
juraj-google-style
def dumps(ms, single=False, version=_default_version, properties=True, pretty_print=False, color=False, **kwargs): if ((not pretty_print) and kwargs.get('indent')): pretty_print = True if single: ms = [ms] return serialize(ms, version=version, properties=properties, pretty_print=pretty_print, color=color)
Serialize an Xmrs object to a SimpleMRS representation Args: ms: an iterator of Xmrs objects to serialize (unless the *single* option is `True`) single: if `True`, treat *ms* as a single Xmrs object instead of as an iterator properties: if `False`, suppress variable properties pretty_print: if `True`, add newlines and indentation color: if `True`, colorize the output with ANSI color codes Returns: a SimpleMrs string representation of a corpus of Xmrs
codesearchnet
def parse_gene(gene_info): gene = {} identifier = None hgnc_id = None try: if ('hgnc_id' in gene_info): hgnc_id = int(gene_info['hgnc_id']) elif ('hgnc_idnumber' in gene_info): hgnc_id = int(gene_info['hgnc_idnumber']) elif ('hgncid' in gene_info): hgnc_id = int(gene_info['hgncid']) except ValueError as e: raise SyntaxError('Invalid hgnc id: {0}'.format(hgnc_id)) gene['hgnc_id'] = hgnc_id identifier = hgnc_id hgnc_symbol = None if ('hgnc_symbol' in gene_info): hgnc_symbol = gene_info['hgnc_symbol'] elif ('hgncsymbol' in gene_info): hgnc_symbol = gene_info['hgncsymbol'] elif ('symbol' in gene_info): hgnc_symbol = gene_info['symbol'] gene['hgnc_symbol'] = hgnc_symbol if (not identifier): if hgnc_symbol: identifier = hgnc_symbol else: raise SyntaxError('No gene identifier could be found') gene['identifier'] = identifier transcripts = '' if ('disease_associated_transcripts' in gene_info): transcripts = gene_info['disease_associated_transcripts'] elif ('disease_associated_transcript' in gene_info): transcripts = gene_info['disease_associated_transcript'] elif ('transcripts' in gene_info): transcripts = gene_info['transcripts'] gene['transcripts'] = [transcript.strip() for transcript in transcripts.split(',') if transcript] models = '' if ('genetic_disease_models' in gene_info): models = gene_info['genetic_disease_models'] elif ('genetic_disease_model' in gene_info): models = gene_info['genetic_disease_model'] elif ('inheritance_models' in gene_info): models = gene_info['inheritance_models'] elif ('genetic_inheritance_models' in gene_info): models = gene_info['genetic_inheritance_models'] gene['inheritance_models'] = [model.strip() for model in models.split(',') if (model.strip() in VALID_MODELS)] gene['mosaicism'] = (True if gene_info.get('mosaicism') else False) gene['reduced_penetrance'] = (True if gene_info.get('reduced_penetrance') else False) gene['database_entry_version'] = gene_info.get('database_entry_version') return gene
Parse a gene line with information from a panel file Args: gene_info(dict): dictionary with gene info Returns: gene(dict): A dictionary with the gene information { 'hgnc_id': int, 'hgnc_symbol': str, 'disease_associated_transcripts': list(str), 'inheritance_models': list(str), 'mosaicism': bool, 'reduced_penetrance': bool, 'database_entry_version': str, }
codesearchnet
def extract_subject_from_dn(cert_obj): return ','.join(('{}={}'.format(OID_TO_SHORT_NAME_DICT.get(v.oid.dotted_string, v.oid.dotted_string), rdn_escape(v.value)) for v in reversed(list(cert_obj.subject))))
Serialize a DN to a DataONE subject string. Args: cert_obj: cryptography.Certificate Returns: str: Primary subject extracted from the certificate DN. The certificate DN (DistinguishedName) is a sequence of RDNs (RelativeDistinguishedName). Each RDN is a set of AVAs (AttributeValueAssertion / AttributeTypeAndValue). A DataONE subject is a plain string. As there is no single standard specifying how to create a string representation of a DN, DataONE selected one of the most common ways, which yield strings such as: CN=Some Name A123,O=Some Organization,C=US,DC=Some Domain,DC=org In particular, the sequence of RDNs is reversed. Attribute values are escaped, attribute type and value pairs are separated by "=", and AVAs are joined together with ",". If an RDN contains an unknown OID, the OID is serialized as a dotted string. As all the information in the DN is preserved, it is not possible to create the same subject with two different DNs, and the DN can be recreated from the subject.
codesearchnet
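The serialization rules the docstring describes (reverse the RDN sequence, escape values, join with "=" and ",") can be sketched on pre-extracted attribute pairs, without a `cryptography.Certificate`. The function name and the minimal escape rule are illustrative:

```python
def subject_from_rdns(rdn_pairs):
    """Join (short_name, value) AVA pairs into a DataONE-style subject.

    Takes pairs in certificate order and reverses them, as the
    serialization above does.
    """
    def escape(value):
        # Minimal sketch of value escaping: protect the separators.
        return value.replace('\\', '\\\\').replace(',', '\\,').replace('=', '\\=')
    return ','.join('{}={}'.format(k, escape(v)) for k, v in reversed(rdn_pairs))
```

Feeding in the docstring's example DN components in certificate order reproduces the quoted subject string.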
def _verify_signature(message, signature, certs): for pem in certs: verifier = Verifier.from_string(pem, is_x509_cert=True) if verifier.verify(message, signature): return raise AppIdentityError('Invalid token signature')
Verifies signed content using a list of certificates. Args: message: string or bytes, The message to verify. signature: string or bytes, The signature on the message. certs: iterable, certificates in PEM format. Raises: AppIdentityError: If none of the certificates can verify the message against the signature.
juraj-google-style
def select_files(self, what="o"): choices = collections.OrderedDict([ ("i", self.input_file), ("o", self.output_file), ("f", self.files_file), ("j", self.job_file), ("l", self.log_file), ("e", self.stderr_file), ("q", self.qout_file), ]) if what == "all": return [getattr(v, "path") for v in choices.values()] selected = [] for c in what: try: selected.append(getattr(choices[c], "path")) except KeyError: logger.warning("Wrong keyword %s" % c) return selected
Helper function used to select the files of a task. Args: what: string with the list of characters selecting the file type Possible choices: i ==> input_file, o ==> output_file, f ==> files_file, j ==> job_file, l ==> log_file, e ==> stderr_file, q ==> qout_file, all ==> all files.
juraj-google-style
def debye_temperature(self, structure): v0 = (structure.volume * 1e-30 / structure.num_sites) vl, vt = self.long_v(structure), self.trans_v(structure) vm = 3**(1./3.) * (1 / vl**3 + 2 / vt**3)**(-1./3.) td = 1.05457e-34 / 1.38065e-23 * vm * (6 * np.pi**2 / v0) ** (1./3.) return td
Estimates the debye temperature from longitudinal and transverse sound velocities Args: structure: pymatgen structure object Returns: debye temperature (in SI units)
juraj-google-style
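Numerically, the same estimate with plain constants (a sketch: `HBAR` and `KB` are the SI values the method hard-codes, and `volume_per_atom` replaces the structure object):

```python
import numpy as np

HBAR = 1.05457e-34  # reduced Planck constant, J*s
KB = 1.38065e-23    # Boltzmann constant, J/K

def debye_temperature(v_long, v_trans, volume_per_atom):
    """Debye temperature (K) from sound velocities (m/s) and volume (m^3)."""
    # Mean sound velocity averaged over one longitudinal and two
    # transverse acoustic branches.
    v_mean = 3 ** (1. / 3.) * (1 / v_long ** 3 + 2 / v_trans ** 3) ** (-1. / 3.)
    return HBAR / KB * v_mean * (6 * np.pi ** 2 / volume_per_atom) ** (1. / 3.)
```

When the two velocities are equal, `v_mean` reduces to that common velocity, and the Debye temperature scales linearly with it.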
def add_properties(entity_proto, property_dict, exclude_from_indexes=None): for (name, value) in property_dict.iteritems(): set_property(entity_proto.properties, name, value, exclude_from_indexes)
Add values to the given datastore.Entity proto message. Args: entity_proto: datastore.Entity proto message. property_dict: a dictionary from property name to either a python object or datastore.Value. exclude_from_indexes: if the value should be exclude from indexes. None leaves indexing as is (defaults to False if value is not a Value message). Usage: >>> add_properties(proto, {'foo': u'a', 'bar': [1, 2]}) Raises: TypeError: if a given property value type is not supported.
codesearchnet
def convert_predictions_to_image_summaries(hook_args): decode_hparams = hook_args.decode_hparams if not decode_hparams.display_decoded_images: return [] predictions = hook_args.predictions[0] all_summaries = [] rand_predictions = np.random.choice(predictions, size=10) for ind, prediction in enumerate(rand_predictions): output_summary = image_to_tf_summary_value( prediction["outputs"], tag="%d_output" % ind) input_summary = image_to_tf_summary_value( prediction["inputs"], tag="%d_input" % ind) all_summaries.append(input_summary) all_summaries.append(output_summary) return all_summaries
Optionally converts images from hooks_args to image summaries.

Args:
    hook_args: DecodeHookArgs namedtuple

Returns:
    summaries: list of tf.Summary values if
        hook_args.decode_hparams.display_decoded_images is True, else an
        empty list.
juraj-google-style
def as_dataframe(self, max_rows=None): max_rows = len(self._timeseries_list) if max_rows is None else max_rows headers = [{ 'resource': ts.resource._asdict(), 'metric': ts.metric._asdict()} for ts in self._timeseries_list[:max_rows]] if not headers: return pandas.DataFrame() dataframe = pandas.io.json.json_normalize(headers) dataframe.columns = pandas.MultiIndex.from_tuples( [(col, '') if col == 'resource.type' else col.rsplit('.', 1) for col in dataframe.columns]) resource_keys = google.cloud.monitoring._dataframe._sorted_resource_labels( dataframe['resource.labels'].columns) sorted_columns = [('resource.type', '')] sorted_columns += [('resource.labels', key) for key in resource_keys] sorted_columns += sorted(col for col in dataframe.columns if col[0] == 'metric.labels') dataframe = dataframe[sorted_columns] dataframe = dataframe.sort_values(sorted_columns) dataframe = dataframe.reset_index(drop=True).fillna('') return dataframe
Creates a pandas dataframe from the query metadata. Args: max_rows: The maximum number of timeseries metadata to return. If None, return all. Returns: A pandas dataframe containing the resource type, resource labels and metric labels. Each row in this dataframe corresponds to the metadata from one time series.
juraj-google-style
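The `as_dataframe` cell above turns flat, dot-separated column names into pandas MultiIndex tuples. The splitting rule itself is pure Python and can be sketched in isolation (the column names here are illustrative, not taken from a real query):

```python
def split_columns(columns):
    # Mirror of the tuple-building step in as_dataframe: 'resource.type'
    # stays a top-level column with an empty second level, everything else
    # is split at the LAST dot into a (group, label) pair suitable for
    # pandas.MultiIndex.from_tuples.
    return [(col, '') if col == 'resource.type' else tuple(col.rsplit('.', 1))
            for col in columns]

cols = ['resource.type', 'resource.labels.project_id', 'metric.labels.response_code']
print(split_columns(cols))
# → [('resource.type', ''), ('resource.labels', 'project_id'), ('metric.labels', 'response_code')]
```

Using `rsplit('.', 1)` rather than `split('.')` is what keeps labels like `resource.labels.project_id` as a two-level pair instead of three fragments.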
def get_base_most_function(function): for contract in (function.contract.inheritance + [function.contract]): for f in contract.functions_not_inherited: if (f.full_name == function.full_name): return f raise Exception('Could not resolve the base-most function for the provided function.')
Obtains the base function definition for the provided function. This could be used to obtain the original definition of a function, if the provided function is an override. Args: function: The (possibly overriding) function to resolve. Returns: (function): Returns the base-most function of the provided function (the original definition).
codesearchnet
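A minimal sketch of the lookup above, with hypothetical stand-in `Func`/`Contract` classes (the real objects come from a Solidity analysis framework). It assumes, as the original code does, that `inheritance` lists base contracts before the contract itself, so the first match found is the base-most definition:

```python
class Func:
    def __init__(self, full_name):
        self.full_name = full_name

class Contract:
    def __init__(self, functions_not_inherited, inheritance=()):
        self.functions_not_inherited = functions_not_inherited
        self.inheritance = list(inheritance)

def base_most(function_full_name, contract):
    # Scan base contracts first, then the contract itself; the first
    # non-inherited definition with a matching signature is the original.
    for c in contract.inheritance + [contract]:
        for f in c.functions_not_inherited:
            if f.full_name == function_full_name:
                return f
    raise Exception('Could not resolve the base-most function.')
```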
def getprop(self, prop_name, timeout=DEFAULT_GETPROP_TIMEOUT_SEC): return self.shell(['getprop', prop_name], timeout=timeout).decode('utf-8').strip()
Get a property of the device. This is a convenience wrapper for `adb shell getprop xxx`. Args: prop_name: A string that is the name of the property to get. timeout: float, the number of seconds to wait before timing out. If not specified, the DEFAULT_GETPROP_TIMEOUT_SEC is used. Returns: A string that is the value of the property, or an empty string if the property doesn't exist (the shell call itself still succeeds).
github-repos
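The parsing step in `getprop` is just decode-and-strip on the raw bytes returned by the shell; a nonexistent property produces empty output, which is why the result is an empty string rather than `None`:

```python
def parse_getprop_output(raw_output):
    # adb shell output arrives as bytes with a trailing newline; a missing
    # property yields only the newline, so the parsed value is '' (not None).
    return raw_output.decode('utf-8').strip()

print(parse_getprop_output(b'sdk_gphone_x86\n'))   # sdk_gphone_x86
print(repr(parse_getprop_output(b'\n')))           # ''
```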
def scale(self, replicas): if 'Global' in self.attrs['Spec']['Mode']: raise InvalidArgument('Cannot scale a global container') service_mode = ServiceMode('replicated', replicas) return self.client.api.update_service(self.id, self.version, mode=service_mode, fetch_current_spec=True)
Scale service container. Args: replicas (int): The number of containers that should be running. Returns: bool: ``True`` if successful.
juraj-google-style
def connect(signal, receiver): __check_receiver(receiver) if __is_bound_method(receiver): ref = WeakMethod else: ref = weakref.ref with __lock: __purge() __receivers[signal].append(ref(receiver))
Register `receiver` method/function as a receiver for the `signal`. When the signal is emitted, this receiver will be invoked along with all other receivers connected to that signal. Args: signal: A signal identifier (e.g., a signal name) receiver: A callable object to connect to the signal.
juraj-google-style
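A self-contained sketch of the same idea: receivers are stored as weak references so that connecting does not keep them alive. This simplified version only handles plain functions; the original additionally uses `WeakMethod` because a plain `weakref.ref` to a bound method dies immediately:

```python
import weakref
from collections import defaultdict

_receivers = defaultdict(list)

def connect(signal, receiver):
    # Store only a weak reference; the registry never extends the
    # receiver's lifetime.
    _receivers[signal].append(weakref.ref(receiver))

def emit(signal, *args):
    # Invoke every still-alive receiver for this signal, skipping
    # references whose targets have been garbage-collected.
    results = []
    for ref in _receivers[signal]:
        func = ref()
        if func is not None:
            results.append(func(*args))
    return results
```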
def convert_instancenorm(params, w_name, scope_name, inputs, layers, weights, names): print('Converting instancenorm ...') if names == 'short': tf_name = 'IN' + random_string(6) elif names == 'keep': tf_name = w_name else: tf_name = w_name + str(random.random()) assert(len(inputs) == 3) bias_name = '{0}.bias'.format(w_name) weights_name = '{0}.weight'.format(w_name) if inputs[-2] + '_np' in layers: gamma = layers[inputs[-2] + '_np'] else: gamma = weights[weights_name].numpy() if inputs[-1] + '_np' in layers: beta = layers[inputs[-1] + '_np'] else: beta = weights[bias_name].numpy() def target_layer(x, epsilon=params['epsilon'], gamma=gamma, beta=beta): layer = tf.contrib.layers.instance_norm( x, param_initializers={'beta': tf.constant_initializer(beta), 'gamma': tf.constant_initializer(gamma)}, epsilon=epsilon, data_format='NCHW', trainable=False ) return layer lambda_layer = keras.layers.Lambda(target_layer, name=tf_name) layers[scope_name] = lambda_layer(layers[inputs[0]])
Convert instance normalization layer. Args: params: dictionary with layer parameters w_name: name prefix in state_dict scope_name: pytorch scope name inputs: pytorch node inputs layers: dictionary with keras tensors weights: pytorch state_dict names: use short names for keras layers
juraj-google-style
def make_multi_qq_plots(arrays, key_text): import numpy as np import omega as om p = om.RectPlot() p.addXY([0, 1.0], [0, 1.0], '1:1') for index, array in enumerate(arrays): kev, obs, mdl = array c_obs = np.cumsum(obs) c_mdl = np.cumsum(mdl) mx = 0.5 * (c_obs[-1] + c_mdl[-1]) c_obs /= mx c_mdl /= mx p.addXY(c_mdl, c_obs, '%s #%d' % (key_text, index)) locs = np.array([0, 0.05, 0.08, 0.11, 0.17, 0.3, 0.4, 0.7, 1]) * (kev.size - 2) c0 = 1.05 c1 = 1.1 for loc in locs: i0 = int(np.floor(loc)) frac = loc - i0 kevval = (1 - frac) * kev[i0] + frac * kev[i0 + 1] mdlval = (1 - frac) * c_mdl[i0] + frac * c_mdl[i0 + 1] obsval = (1 - frac) * c_obs[i0] + frac * c_obs[i0 + 1] p.addXY([mdlval, mdlval], [c0, c1], '%.2f keV' % kevval, dsn=2) p.addXY([c0, c1], [obsval, obsval], None, dsn=2) p.setLabels('Cumulative rescaled model', 'Cumulative rescaled data') p.defaultKeyOverlay.vAlign = 0.3 return p
Make a quantile-quantile plot comparing multiple sets of events and models. *arrays* Iterable of ``(energies, observed_counts, model_counts)`` array triples, one per data set. *key_text* Text describing the quantile-quantile comparison quantity; will be shown on the plot legend. Returns: An :class:`omega.RectPlot` instance. *TODO*: nothing about this is Sherpa-specific. Same goes for some of the plotting routines in :mod:`pwkit.environments.casa.data`; might be reasonable to add a submodule for generic X-ray-y plotting routines. *TODO*: Some gross code duplication here.
codesearchnet
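The core of the Q-Q comparison above is the joint rescaling: both cumulative histograms are divided by the mean of their totals, so a perfect model traces the 1:1 line. A pure-Python version of that step (without the plotting) looks like:

```python
def cumulative_rescaled(obs, mdl):
    # Cumulative sums of the observed and model counts, jointly rescaled
    # so the mean of their two totals maps to 1.0 -- these are the
    # quantities plotted against each other in the Q-Q panel.
    c_obs, c_mdl = [], []
    t_o = t_m = 0.0
    for o, m in zip(obs, mdl):
        t_o += o
        t_m += m
        c_obs.append(t_o)
        c_mdl.append(t_m)
    mx = 0.5 * (c_obs[-1] + c_mdl[-1])
    return [v / mx for v in c_obs], [v / mx for v in c_mdl]

co, cm = cumulative_rescaled([1, 1, 2], [2, 1, 1])
print(co, cm)  # → [0.25, 0.5, 1.0] [0.5, 0.75, 1.0]
```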
def get_job(self, id): return self._get_element_by_id(self.jobs, 'jobs', Job, str(id))
Retrieves a job matching the given `id` Args: id (str): Job `id` to match. Returns: Job: Job matching the given `id` Raises: ValueError: No resource matches given `id` or multiple resources matching given `id`
codesearchnet
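`get_job` delegates to a private `_get_element_by_id` helper whose contract the docstring describes: exactly one match, otherwise `ValueError`. A hypothetical standalone version of that helper (the real one operates on REST resources, not dicts) could look like:

```python
def get_element_by_id(elements, resource_name, id):
    # Exactly one element must match the given id; zero or multiple
    # matches raise ValueError, per the docstring of get_job.
    matches = [e for e in elements if e['id'] == id]
    if len(matches) != 1:
        raise ValueError('Expected one %s resource with id %r, found %d'
                         % (resource_name, id, len(matches)))
    return matches[0]
```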
def send(self, value): if not self.block and self._stdin is not None: self.writer.write("{}\n".format(value)) return self else: raise TypeError(NON_BLOCKING_ERROR_MESSAGE)
Send text to stdin. Can only be used on non blocking commands Args: value (str): the text to write on stdin Raises: TypeError: If command is blocking Returns: ShellCommand: return this ShellCommand instance for chaining
juraj-google-style
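The `send` cell shows two patterns worth noting: a guard that makes stdin writes legal only on non-blocking commands, and returning `self` to allow chaining. A hypothetical minimal model (the `writer` here is just a list standing in for a stdin stream):

```python
NON_BLOCKING_ERROR_MESSAGE = 'send() is only available on non-blocking commands'

class FakeShellCommand:
    # Hypothetical stand-in for ShellCommand, modeling only the
    # blocking check and the return-self chaining behavior.
    def __init__(self, block, writer):
        self.block = block
        self.writer = writer

    def send(self, value):
        if not self.block:
            self.writer.append('%s\n' % value)
            return self  # enables cmd.send('a').send('b') chaining
        raise TypeError(NON_BLOCKING_ERROR_MESSAGE)
```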
def PluginRunToTagToContent(self, plugin_name): mapping = {} for run in self.Runs(): try: tag_to_content = self.GetAccumulator(run).PluginTagToContent( plugin_name) except KeyError: continue mapping[run] = tag_to_content return mapping
Returns a 2-layer dictionary of the form {run: {tag: content}}. The `content` referred above is the content field of the PluginData proto for the specified plugin within a Summary.Value proto. Args: plugin_name: The name of the plugin for which to fetch content. Returns: A dictionary of the form {run: {tag: content}}.
juraj-google-style
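The shape of the result above is `{run: {tag: content}}`, with runs that never logged data for the plugin silently omitted via the `except KeyError: continue`. The same skip-on-missing pattern, sketched over plain dicts:

```python
def plugin_run_to_tag_to_content(accumulators, plugin_name):
    # accumulators maps run -> {plugin_name: {tag: content}}; runs that
    # never saw this plugin are skipped rather than mapped to {}.
    mapping = {}
    for run, plugins in accumulators.items():
        try:
            mapping[run] = plugins[plugin_name]
        except KeyError:
            continue
    return mapping
```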
def infer_transportation_modes(self, dt_threshold=10): self.segments = [segment.infer_transportation_mode(dt_threshold=dt_threshold) for segment in self.segments] return self
In-place transportation mode inference over all segments. Args: dt_threshold (int): Time-difference threshold, forwarded to each segment's `infer_transportation_mode`. Returns: This track
codesearchnet
def show_stories(self, raw=False, limit=None): show_stories = self._get_stories('showstories', limit) if raw: show_stories = [story.raw for story in show_stories] return show_stories
Returns list of item ids of latest Show HN stories. Args: raw (bool): Flag to indicate whether to transform all objects into raw json. limit (int): specifies the number of stories to be returned. Returns: `list` object containing ids of Show HN stories.
juraj-google-style
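The `show_stories` cell combines two post-processing steps: cap the result count, then optionally unwrap each story object to its raw JSON payload. A sketch of that handling over plain dicts (the real code reads `.raw` off story objects):

```python
def format_stories(stories, raw=False, limit=None):
    # Slice first so the raw-unwrapping only touches the stories that
    # will actually be returned; limit=None means "all stories".
    stories = stories if limit is None else stories[:limit]
    if raw:
        return [s['raw'] for s in stories]
    return stories
```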
def __init__(self, text, stopwords=None): self.text = text self.load_stopwords(stopwords) self.tokenize()
Store the raw text, tokenize. Args: text (str): The raw text string. stopwords (str): A custom stopwords list path.
juraj-google-style