Columns: code (string, lengths 20-4.93k), docstring (string, lengths 33-1.27k), source (string, 3 classes).
def get_room_by_name(self, name):
    rooms = self.get_rooms()
    for room in (rooms or []):
        if room['name'] == name:
            return self.get_room(room['id'])
    raise RoomNotFoundException('Room %s not found' % name)
Get a room by name. Returns: :class:`Room`. Raises: RoomNotFoundException
codesearchnet
def compatible_firmware_version(self):
    identifier = self.firmware_version.split('compiled')[0]
    buf_size = self.MAX_BUF_SIZE
    buf = (ctypes.c_char * buf_size)()
    res = self._dll.JLINKARM_GetEmbeddedFWString(identifier.encode(), buf, buf_size)
    if res < 0:
        raise errors.JLinkException(res)
    return ctypes.string_at(buf).decode()
Returns the DLL's compatible J-Link firmware version. Args: self (JLink): the ``JLink`` instance Returns: The firmware version of the J-Link that the DLL is compatible with. Raises: JLinkException: on error.
codesearchnet
def forward(self, hidden_states: torch.Tensor, attention_mask: torch.Tensor,
            output_attentions: Optional[bool] = False) -> Tuple[torch.FloatTensor]:
    residual = hidden_states
    hidden_states = self.layer_norm1(hidden_states)
    hidden_states, attn_weights = self.self_attn(
        hidden_states=hidden_states,
        head_mask=attention_mask,
        output_attentions=output_attentions)
    hidden_states = hidden_states + residual
    residual = hidden_states
    hidden_states = self.layer_norm2(hidden_states)
    hidden_states = self.mlp(hidden_states)
    hidden_states = hidden_states + residual
    outputs = (hidden_states,)
    if output_attentions:
        outputs += (attn_weights,)
    return outputs
Args: hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)` attention_mask (`torch.FloatTensor`): attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. `(config.encoder_attention_heads,)`. output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
github-repos
def _export_to_saved_model_graph(self, object_map, tensor_map, options, **kwargs):
    _, _, _ = (object_map, tensor_map, options)
    del kwargs
    return []
Creates a copy of this object's tensors onto SavedModel graph. Needs to be overridden if the class contains tensors that must be saved into the graph. This method should update the `object_map` and `tensor_map` dictionaries. This method is called on all nodes in the Trackable Graph (generated by `_trackable_children`). The nodes are traversed in the order defined by `_deserialization_dependencies`. All usages of _map_resources should be migrated to this method. Args: object_map: A dictionary that maps original Trackables to the copied Trackables. This only needs to be updated if the object is a tf.function, or if the copied tensors are necessary for checkpointing this object. tensor_map: Dictionary mapping original tensors to copied tensors. options: A `tf.saved_model.SaveOptions` object. **kwargs: Additional kwargs that may be added at a later time. Returns: Flat list of original tensors that have been copied.
github-repos
def __init__(self, counters=None, distributions=None, gauges=None,
             string_sets=None, bounded_tries=None):
    self.counters = counters or {}
    self.distributions = distributions or {}
    self.gauges = gauges or {}
    self.string_sets = string_sets or {}
    self.bounded_tries = bounded_tries or {}
Create a MetricUpdates object. Args: counters: Dictionary of MetricKey:MetricUpdate updates. distributions: Dictionary of MetricKey:MetricUpdate objects. gauges: Dictionary of MetricKey:MetricUpdate objects. string_sets: Dictionary of MetricKey:MetricUpdate objects. bounded_tries: Dictionary of MetricKey:MetricUpdate objects.
github-repos
def update(self, **kwargs):
    for key, value in kwargs.items():
        if hasattr(self, key):
            setattr(self, key, value)
Update the configuration attributes with new values. Args: **kwargs: Keyword arguments representing configuration attributes and their new values.
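A minimal standalone sketch of this kwargs-driven update pattern; the `Config` class and its attributes here are illustrative, not part of the original code:

```python
class Config:
    """Toy configuration object with a couple of example attributes."""

    def __init__(self):
        self.verbose = False
        self.retries = 3

    def update(self, **kwargs):
        # Only overwrite attributes that already exist; unknown keys
        # are silently ignored, matching the method above.
        for key, value in kwargs.items():
            if hasattr(self, key):
                setattr(self, key, value)


cfg = Config()
cfg.update(retries=5, unknown_key='ignored')
```

After the call, `cfg.retries` is updated while `unknown_key` was never added, so typos in keyword names fail silently rather than polluting the object.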
github-repos
def StringEscape(self, string, match, **unused_kwargs):
    if match.group(1) in '\\\'"rnbt\\.ws':
        self.string += codecs.decode(string, 'unicode_escape')
    else:
        raise errors.ParseError('Invalid escape character {0:s}.'.format(string))
Escape backslashes found inside a string quote. Backslashes followed by anything other than [\'"rnbt.ws] will raise an Error. Args: string: The string that matched. match: the match object (instance of re.MatchObject). Where match.group(1) contains the escaped code. Raises: ParseError: When the escaped string is not one of [\'"rnbt.ws].
juraj-google-style
def get(self, key, mem_map=True):
    self.raise_error_if_not_open()
    if key in self._file:
        data = self._file[key]
        sampling_rate = data.attrs[SAMPLING_RATE_ATTR]
        if not mem_map:
            data = data[()]
        data = np.float32(data) / MAX_INT16_VALUE
        return data, sampling_rate
Return the samples for the given key and the sampling-rate. Args: key (str): The key to read the data from. mem_map (bool): If ``True`` returns the data as memory-mapped array, otherwise a copy is returned. Note: The container has to be opened in advance. Returns: tuple: A tuple containing the samples as numpy array with ``np.float32`` [-1.0,1.0] and the sampling-rate.
codesearchnet
def expected_exercise_fn(design, continuation_value, exercise_value):
    batch_design = tf.broadcast_to(
        tf.expand_dims(design, -1),
        design.shape + [continuation_value.shape[-1]])
    mask = tf.cast(exercise_value > 0, design.dtype)
    masked = tf.transpose(batch_design * mask, perm=(2, 1, 0))
    lhs = tf.matmul(masked, masked, transpose_a=True)
    lhs_pinv = tf.linalg.pinv(lhs)
    rhs = tf.matmul(masked,
                    tf.expand_dims(tf.transpose(continuation_value), -1),
                    transpose_a=True)
    beta = tf.linalg.matmul(lhs_pinv, rhs)
    continuation = tf.matmul(tf.transpose(batch_design, perm=(2, 1, 0)), beta)
    return tf.maximum(tf.transpose(tf.squeeze(continuation, -1)), 0.0)
Returns the expected continuation value for each path. Args: design: A real `Tensor` of shape `[basis_size, num_samples]`. continuation_value: A `Tensor` of shape `[num_samples, payoff_dim]` and of the same dtype as `design`. The optimal value of the option conditional on not exercising now or earlier, taking future information into account. exercise_value: A `Tensor` of the same shape and dtype as `continuation_value`. Value of the option if exercised immediately at the current time. Returns: A `Tensor` of the same shape and dtype as `continuation_value` whose `(n, v)`-th entry represents the expected continuation value of sample path `n` under the `v`-th payoff scheme.
github-repos
def match_partial_against_complete(self, matcher, solver, partial, complete):
    assert is_partial(partial)
    assert is_complete(complete)
    subst = {p.type_param: pytd.AnythingType() for p in complete.template}
    formula = matcher.match_Class_against_Class(partial, complete, subst)
    if formula is booleq.FALSE:
        raise FlawedQuery(f'{partial.name} can never be {complete.name}')
    solver.always_true(formula)
Match a partial class (call record) against a complete class. Args: matcher: An instance of pytd.type_match.TypeMatch. solver: An instance of pytd.booleq.Solver. partial: The partial class to match. The class name needs to be prefixed with "~" - the rest of the name is typically the same as complete.name. complete: A complete class to match against. (E.g. a built-in or a user defined class) Returns: An instance of pytd.booleq.BooleanTerm. Raises: FlawedQuery: If this call record is incompatible with the builtin.
github-repos
def _init_init_op(self, init_op=USE_DEFAULT, init_feed_dict=None):
    if init_op is Supervisor.USE_DEFAULT:
        init_op = self._get_first_op_from_collection(ops.GraphKeys.INIT_OP)
        if init_op is None:
            init_op = variables.global_variables_initializer()
            ops.add_to_collection(ops.GraphKeys.INIT_OP, init_op)
    self._init_op = init_op
    self._init_feed_dict = init_feed_dict
Initializes init_op. Args: init_op: `Operation` to initialize the variables. If set to USE_DEFAULT, create an op that initializes all variables and tables. init_feed_dict: A dictionary that maps `Tensor` objects to feed values. This feed dictionary will be used when `init_op` is evaluated.
github-repos
def VerifyStructure(self, parser_mediator, lines):
    try:
        structure = self._SDF_HEADER.parseString(lines)
    except pyparsing.ParseException:
        logger.debug('Not a SkyDrive log file')
        return False
    try:
        dfdatetime_time_elements.TimeElementsInMilliseconds(
            time_elements_tuple=structure.header_date_time)
    except ValueError:
        logger.debug(
            'Not a SkyDrive log file, invalid date and time: {0!s}'.format(
                structure.header_date_time))
        return False
    return True
Verify that this file is a SkyDrive log file. Args: parser_mediator (ParserMediator): mediates interactions between parsers and other components, such as storage and dfvfs. lines (str): one or more lines from the text file. Returns: bool: True if this is the correct parser, False otherwise.
juraj-google-style
def convert_sbml_model(model):
    biomass_reactions = set()
    for reaction in model.reactions:
        if reaction.id not in model.limits:
            lower, upper = parse_flux_bounds(reaction)
            if lower is not None or upper is not None:
                model.limits[reaction.id] = (reaction.id, lower, upper)
        objective = parse_objective_coefficient(reaction)
        if objective is not None and objective != 0:
            biomass_reactions.add(reaction.id)
    if len(biomass_reactions) == 1:
        model.biomass_reaction = next(iter(biomass_reactions))
    convert_model_entries(model)
    if model.extracellular_compartment is None:
        extracellular = detect_extracellular_compartment(model)
        model.extracellular_compartment = extracellular
    convert_exchange_to_compounds(model)
Convert raw SBML model to extended model. Args: model: :class:`NativeModel` obtained from :class:`SBMLReader`.
codesearchnet
def ifilterfalse_items(item_iter, flag_iter):
    false_items = (item for item, flag in zip(item_iter, flag_iter) if not flag)
    return false_items
ifilterfalse_items Args: item_iter (list): flag_iter (list): of bools Example: >>> # ENABLE_DOCTEST >>> from utool.util_iter import * # NOQA >>> item_iter = [1, 2, 3, 4, 5] >>> flag_iter = [False, True, True, False, True] >>> false_items = ifilterfalse_items(item_iter, flag_iter) >>> result = list(false_items) >>> print(result) [1, 4]
juraj-google-style
def GetGtfsClassByFileName(self, filename):
    if filename not in self._file_mapping:
        return None
    mapping = self._file_mapping[filename]
    class_list = mapping['classes']
    if len(class_list) > 1:
        raise problems.NonStandardMapping(filename)
    else:
        return self._class_mapping[class_list[0]]
Returns the transitfeed class corresponding to a GTFS file. Args: filename: The filename whose class is to be returned Raises: NonStandardMapping if the specified filename has more than one corresponding class
codesearchnet
def to_dict(self) -> Dict[str, Any]:
    output = copy.deepcopy(self.__dict__)
    output['feature_extractor_type'] = self.__class__.__name__
    if 'mel_filters' in output:
        del output['mel_filters']
    if 'mel_filters_slaney' in output:
        del output['mel_filters_slaney']
    return output
Serializes this instance to a Python dictionary. Returns: `Dict[str, Any]`: Dictionary of all the attributes that make up this configuration instance, except for the mel filter banks, which do not need to be saved or printed as they are too long.
github-repos
def _parse_name(self, config):
    value = NAME_RE.search(config).group('value')
    return dict(name=value)
_parse_name scans the provided configuration block and extracts the vlan name. The config block is expected to always return the vlan name. The return dict is intended to be merged into the response dict. Args: config (str): The vlan configuration block from the nodes running configuration Returns: dict: resource dict attribute
juraj-google-style
def executions(self, digest=False, begin=None, end=None):
    digests = self._execution_digests
    if begin is not None or end is not None:
        begin = begin or 0
        end = end or len(digests)
        digests = digests[begin:end]
    if digest:
        return digests
    else:
        return [self.read_execution(digest) for digest in digests]
Get `Execution`s or `ExecutionDigest`s this reader has read so far. Args: digest: Whether the results are returned in a digest form, i.e., `ExecutionDigest` format, instead of the more detailed `Execution` format. begin: Optional beginning index for the requested execution data objects or their digests. Python-style negative indices are supported. end: Optional ending index for the requested execution data objects or their digests. Python-style negative indices are supported. Returns: If `digest`: a `list` of `ExecutionDigest` objects. Else: a `list` of `Execution` objects.
github-repos
def children_after_parents(self, piper1, piper2):
    if piper1 in self[piper2].deep_nodes():
        return 1
    elif piper2 in self[piper1].deep_nodes():
        return -1
    else:
        return 0
Custom compare function. Returns ``1`` if the first ``Piper`` instance is upstream of the second ``Piper`` instance, ``-1`` if the first ``Piper`` is downstream of the second ``Piper`` and ``0`` if the two ``Pipers`` are independent. Arguments: - piper1(``Piper``) ``Piper`` instance. - piper2(``Piper``) ``Piper`` instance.
juraj-google-style
def _convert_dataset_to_list(dataset, dataset_type_spec,
                             data_size_warning_flag=True,
                             ensure_shape_similarity=True):
    dataset_iterator = _get_data_iterator_from_dataset(dataset, dataset_type_spec)
    dataset_as_list = []
    start_time = time.time()
    for sample in _get_next_sample(dataset_iterator, ensure_shape_similarity,
                                   data_size_warning_flag, start_time):
        dataset_as_list.append(sample)
    return dataset_as_list
Convert `dataset` object to a list of samples. Args: dataset: A `tf.data.Dataset`, a `torch.utils.data.Dataset` object, or a list/tuple of arrays. dataset_type_spec: the type of the dataset. data_size_warning_flag: If set to `True`, a warning will be issued if the dataset takes longer than 10 seconds to iterate. Defaults to `True`. ensure_shape_similarity: If set to `True`, the shape of the first sample will be used to validate the shape of rest of the samples. Defaults to `True`. Returns: List: A list of samples.
github-repos
def commit(self, synchronized_processing_time):
    assert not self._committed
    self._committed = True
    self._elements = tuple(self._elements)
    self._synchronized_processing_time = synchronized_processing_time
Commits this bundle. Uncommitted bundle will become committed (immutable) after this call. Args: synchronized_processing_time: the synchronized processing time at which this bundle was committed
github-repos
def save_dataset(self, out_file_name):
    for time_tag, script in self.data_sets.items():
        script.save(os.path.join(out_file_name, '{:s}.b26s'.format(time_tag)))
saves current dataset to out_file_name Args: out_file_name: name of file
juraj-google-style
def is_valid(self, field_name, value) -> (bool, object):
    if self.has_field(field_name):
        if self.fields_dict[field_name] == FieldType.KG_ID:
            return True, value
        if self.fields_dict[field_name] == FieldType.NUMBER:
            if isinstance(value, numbers.Number):
                return True, value
            else:
                converted_number = self.parse_number(value)
                return (False, value) if not converted_number else (True, value)
        if self.fields_dict[field_name] == FieldType.STRING:
            if isinstance(value, str):
                return True, value.strip()
            else:
                return True, str(value).strip()
        if self.fields_dict[field_name] == FieldType.DATE:
            valid, d = self.is_date(value)
            if valid:
                return True, d.isoformat()
            else:
                return False, value
        if self.fields_dict[field_name] == FieldType.LOCATION:
            valid, l = self.is_location(value)
            if valid:
                return True, l
            else:
                return False, value
    else:
        print('{} not found in KG Schema'.format(field_name))
        return False, value
Return true if the value type matches or can be coerced to the defined type in schema, otherwise false. If field not defined, return none Args: field_name: str value: Returns: bool, value, where the value may have been coerced to the required type.
codesearchnet
def __init__(self, data_type_definition):
    super(UUIDMap, self).__init__(data_type_definition)
    self._byte_order = data_type_definition.byte_order
Initializes an UUID (or GUID) data type map. Args: data_type_definition (DataTypeDefinition): data type definition.
juraj-google-style
def _lookup_namespace(self, symbol, namespace):
    for namespace_part in symbol.parts:
        namespace = namespace.get(namespace_part)
        if namespace is None:
            break
        if not isinstance(namespace, dict):
            return namespace
    raise Error('%s not found' % symbol.name)
Helper for lookup_symbol that only looks up variables in a namespace. Args: symbol: Symbol namespace: pointer into self.namespaces
codesearchnet
def chunk_sequence(sequence, chunk_length=200, padding_value=0):
    if 'length' in sequence:
        length = sequence.pop('length')
    else:
        length = tf.shape(tools.nested.flatten(sequence)[0])[0]
    # Reconstructed from the padding arithmetic below (the original line
    # was truncated in the source): the number of chunks needed to cover
    # `length` elements.
    num_chunks = (length - 1) // chunk_length + 1
    padding_length = chunk_length * num_chunks - length
    padded = tools.nested.map(
        lambda tensor: tf.concat(
            [tensor, 0 * tensor[:padding_length] + padding_value], 0),
        sequence)
    chunks = tools.nested.map(
        lambda tensor: tf.reshape(
            tensor, [num_chunks, chunk_length] + tensor.shape[1:].as_list()),
        padded)
    chunks['length'] = tf.concat(
        [chunk_length * tf.ones((num_chunks - 1,), dtype=tf.int32),
         [chunk_length - padding_length]], 0)
    return chunks
Split a nested dict of sequence tensors into a batch of chunks. This function does not expect a batch of sequences, but a single sequence. A `length` key is added if it did not exist already. Args: sequence: Nested dict of tensors with time dimension. chunk_length: Size of chunks the sequence will be split into. padding_value: Value used for padding the last chunk after the sequence. Returns: Nested dict of sequence tensors with chunk dimension.
codesearchnet
def get_or_create_direct_channel(cls, initiator_key, receiver_key):
    existing = cls.objects.OR().filter(
        code_name='%s_%s' % (initiator_key, receiver_key)).filter(
        code_name='%s_%s' % (receiver_key, initiator_key))
    receiver_name = UserModel.objects.get(receiver_key).full_name
    if existing:
        channel = existing[0]
    else:
        channel_name = '%s_%s' % (initiator_key, receiver_key)
        channel = cls(is_direct=True, code_name=channel_name,
                      typ=10).blocking_save()
    with BlockSave(Subscriber):
        Subscriber.objects.get_or_create(channel=channel,
                                         user_id=initiator_key,
                                         name=receiver_name)
        Subscriber.objects.get_or_create(
            channel=channel, user_id=receiver_key,
            name=UserModel.objects.get(initiator_key).full_name)
    return channel, receiver_name
Creates a direct messaging channel between two users. Args: initiator: User who wants to make first contact. receiver: User, other party. Returns: (Channel, receiver_name)
juraj-google-style
def ParseMessage(descriptor, byte_str):
    result_class = MakeClass(descriptor)
    new_msg = result_class()
    new_msg.ParseFromString(byte_str)
    return new_msg
Generate a new Message instance from this Descriptor and a byte string. Args: descriptor: Protobuf Descriptor object byte_str: Serialized protocol buffer byte string Returns: Newly created protobuf Message object.
juraj-google-style
def match(self, request):
    errors = []

    def match(matcher):
        try:
            return matcher.match(request)
        except Exception as err:
            err = '{}: {}'.format(type(matcher).__name__, err)
            errors.append(err)
            return False

    return all([match(matcher) for matcher in self]), errors
Match the given HTTP request instance against the registered matcher functions in the current engine. Arguments: request (pook.Request): outgoing request to match. Returns: tuple(bool, list[Exception]): ``True`` if all matcher tests passes, otherwise ``False``. Also returns an optional list of error exceptions.
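A standalone sketch of the matcher-aggregation pattern this method uses; the names (`match_all`, plain callables instead of pook matcher objects) are illustrative, not the pook API:

```python
def match_all(matchers, request):
    """Run every matcher against `request`; collect exceptions as errors."""
    errors = []

    def run(matcher):
        try:
            return matcher(request)
        except Exception as err:
            errors.append('{}: {}'.format(type(err).__name__, err))
            return False

    # A list comprehension (not a generator) so every matcher runs and
    # every error is collected, even after an early failure.
    return all([run(m) for m in matchers]), errors


ok, errs = match_all([lambda r: r == 'GET', lambda r: 1 / 0], 'GET')
```

Here the second matcher raises, so `ok` is `False` and `errs` holds one formatted error string; the first matcher's success is still evaluated.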
juraj-google-style
def _process_image_files_batch(coder, thread_index, ranges, name, filenames,
                               texts, labels, num_shards):
    num_threads = len(ranges)
    assert not num_shards % num_threads
    num_shards_per_batch = int(num_shards / num_threads)
    shard_ranges = np.linspace(ranges[thread_index][0], ranges[thread_index][1],
                               num_shards_per_batch + 1).astype(int)
    num_files_in_thread = ranges[thread_index][1] - ranges[thread_index][0]
    counter = 0
    for s in range(num_shards_per_batch):
        shard = thread_index * num_shards_per_batch + s
        output_filename = '%s-%.5d-of-%.5d' % (name, shard, num_shards)
        output_file = os.path.join(FLAGS.output_directory, output_filename)
        writer = tf.python_io.TFRecordWriter(output_file)
        shard_counter = 0
        files_in_shard = np.arange(shard_ranges[s], shard_ranges[s + 1], dtype=int)
        for i in files_in_shard:
            filename = filenames[i]
            label = labels[i]
            text = texts[i]
            image_buffer, height, width = _process_image(filename, coder)
            example = _convert_to_example(filename, image_buffer, label, text,
                                          height, width)
            writer.write(example.SerializeToString())
            shard_counter += 1
            counter += 1
            if not counter % 1000:
                print('%s [thread %d]: Processed %d of %d images in thread batch.' %
                      (datetime.now(), thread_index, counter, num_files_in_thread))
                sys.stdout.flush()
        writer.close()
        print('%s [thread %d]: Wrote %d images to %s' %
              (datetime.now(), thread_index, shard_counter, output_file))
        sys.stdout.flush()
        shard_counter = 0
    print('%s [thread %d]: Wrote %d images to %d shards.' %
          (datetime.now(), thread_index, counter, num_files_in_thread))
    sys.stdout.flush()
Processes and saves list of images as TFRecord in 1 thread. Args: coder: instance of ImageCoder to provide TensorFlow image coding utils. thread_index: integer, unique batch to run index is within [0, len(ranges)). ranges: list of pairs of integers specifying ranges of each batches to analyze in parallel. name: string, unique identifier specifying the data set filenames: list of strings; each string is a path to an image file texts: list of strings; each string is human readable, e.g. 'dog' labels: list of integer; each integer identifies the ground truth num_shards: integer number of shards for this data set.
codesearchnet
def _rewrite_output_as_tensor(body_grad_graph, grad_output_slices):
    with body_grad_graph.as_default():
        new_output = tensor_conversion.convert_to_tensor_v2(grad_output_slices)
    idx = _get_tensor_index_in_iterable(body_grad_graph.structured_outputs,
                                        grad_output_slices)
    body_grad_graph.structured_outputs[idx] = new_output
    body_grad_graph.outputs = func_graph.flatten(
        body_grad_graph.structured_outputs)
Rewrites grad_output_slices to be a Tensor output. Args: body_grad_graph: _WhileBodyGradFuncGraph. grad_output_slices: IndexedSlices output of body_grad_graph.
github-repos
def _path_to_str(self, path):
    inp = ''
    for arc in path:
        i = self.isyms.find(arc.ilabel)
        if i != fst.EPSILON:
            inp += i
    return inp
Convert a path to the string representing the path Args: path (tuple): A tuple of arcs Returns: inp (str): The path concatenated as a string
juraj-google-style
def _get_table(name):
    item = google.datalab.utils.commands.get_notebook_item(name)
    if isinstance(item, bigquery.Table):
        return item
    try:
        return _existing_table_cache[name]
    except KeyError:
        table = bigquery.Table(name)
        if table.exists():
            _existing_table_cache[name] = table
            return table
    return None
Given a variable or table name, get a Table if it exists. Args: name: the name of the Table or a variable referencing the Table. Returns: The Table, if found.
codesearchnet
def _SetSshHostKeys(self, host_key_types=None):
    section = 'Instance'
    instance_id = self._GetInstanceId()
    if instance_id != self.instance_config.GetOptionString(
            section, 'instance_id'):
        self.logger.info('Generating SSH host keys for instance %s.', instance_id)
        file_regex = re.compile(r'ssh_host_(?P<type>[a-z0-9]*)_key\Z')
        key_dir = '/etc/ssh'
        key_files = [f for f in os.listdir(key_dir) if file_regex.match(f)]
        key_types = host_key_types.split(',') if host_key_types else []
        key_types_files = ['ssh_host_%s_key' % key_type for key_type in key_types]
        for key_file in set(key_files) | set(key_types_files):
            key_type = file_regex.match(key_file).group('type')
            key_dest = os.path.join(key_dir, key_file)
            self._GenerateSshKey(key_type, key_dest)
        self._StartSshd()
        self.instance_config.SetOption(section, 'instance_id', str(instance_id))
Regenerates SSH host keys when the VM is restarted with a new IP address. Booting a VM from an image with a known SSH key allows a number of attacks. This function will regenerating the host key whenever the IP address changes. This applies the first time the instance is booted, and each time the disk is used to boot a new instance. Args: host_key_types: string, a comma separated list of host key types.
juraj-google-style
def on_message(self, message):
    if 'content' in message['d']:
        metadata = self._parse_metadata(message)
        message = Message(text=message['d']['content'],
                          metadata=metadata).__dict__
        logger.debug(message)
        self.baseplate.tell(message)
Runs on a create_message event from websocket connection Args: message (dict): Full message from Discord websocket connection
juraj-google-style
def append_paulis(self, paulis=None, pauli_labels=None):
    return self.insert_paulis(None, paulis=paulis, pauli_labels=pauli_labels)
Append pauli at the end. Args: paulis (Pauli): the to-be-inserted or appended pauli pauli_labels (list[str]): the to-be-inserted or appended pauli label Returns: Pauli: self
codesearchnet
def __recognize_union(self, node: yaml.Node, expected_type: Type) -> RecResult:
    logger.debug('Recognizing as a union')
    recognized_types = []
    message = ''
    union_types = generic_type_args(expected_type)
    logger.debug('Union types {}'.format(union_types))
    for possible_type in union_types:
        recognized_type, msg = self.recognize(node, possible_type)
        if len(recognized_type) == 0:
            message += msg
        recognized_types.extend(recognized_type)
    recognized_types = list(set(recognized_types))
    if bool in recognized_types and bool_union_fix in recognized_types:
        recognized_types.remove(bool_union_fix)
    if len(recognized_types) == 0:
        return recognized_types, message
    elif len(recognized_types) > 1:
        message = ('{}{}Could not determine which of the following types'
                   ' this is: {}').format(node.start_mark, os.linesep,
                                          recognized_types)
        return recognized_types, message
    return recognized_types, ''
Recognize a node that we expect to be one of a union of types. Args: node: The node to recognize. expected_type: Union[...something...] Returns: The specific type that was recognized, multiple, or none.
juraj-google-style
def _validate_chain_strength(sampler, chain_strength):
    properties = sampler.properties
    if 'extended_j_range' in properties:
        max_chain_strength = -min(properties['extended_j_range'])
    elif 'j_range' in properties:
        max_chain_strength = -min(properties['j_range'])
    else:
        raise ValueError(
            "input sampler should have 'j_range' and/or 'extended_j_range' property.")
    if chain_strength is None:
        chain_strength = max_chain_strength
    elif chain_strength > max_chain_strength:
        raise ValueError('Provided chain strength exceeds the allowed range.')
    return chain_strength
Validate the provided chain strength, checking J-ranges of the sampler's children. Args: chain_strength (float) The provided chain strength. Use None to use J-range. Returns (float): A valid chain strength, either provided or based on available J-range. Positive finite float.
codesearchnet
def _relative_position_to_absolute_position_masked(x):
    batch, heads, length, _ = common_layers.shape_list(x)
    x = tf.pad(x, [[0, 0], [0, 0], [0, 0], [1, 0]])
    x = tf.reshape(x, [batch, heads, 1 + length, length])
    x = tf.slice(x, [0, 0, 1, 0], [-1, -1, -1, -1])
    return x
Helper to dot_product_self_attention_relative_v2. Rearrange an attention logits or weights Tensor. The dimensions of the input represent: [batch, heads, query_position, memory_position - query_position + length - 1] The dimensions of the output represent: [batch, heads, query_position, memory_position] Only works with masked_attention. Undefined behavior for regions of the input where memory_position > query_position. Args: x: a Tensor with shape [batch, heads, length, length] Returns: a Tensor with shape [batch, heads, length, length]
codesearchnet
def phone_text_subs():
    Small = {
        'zero': 0, 'zer0': 0, 'one': 1, 'two': 2, 'three': 3, 'four': 4,
        'fuor': 4, 'five': 5, 'fith': 5, 'six': 6, 'seven': 7, 'sven': 7,
        'eight': 8, 'nine': 9, 'ten': 10, 'eleven': 11, 'twelve': 12,
        'thirteen': 13, 'fourteen': 14, 'fifteen': 15, 'sixteen': 16,
        'seventeen': 17, 'eighteen': 18, 'nineteen': 19, 'twenty': 20,
        'thirty': 30, 'forty': 40, 'fifty': 50, 'sixty': 60, 'seventy': 70,
        'eighty': 80, 'ninety': 90, 'oh': 0}
    Magnitude = {'thousand': 0, 'million': 0}
    Others = {'!': 1, 'o': 0, 'l': 1, 'i': 1}
    output = {}
    output['Small'] = Small
    output['Magnitude'] = Magnitude
    output['Others'] = Others
    return output
Gets a dictionary of dictionaries that each contain alphabetic number manifestations mapped to their actual Number value. Returns: dictionary of dictionaries containing Strings mapped to Numbers
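A hedged usage sketch of the `Small` table above: mapping spelled-out digits in a phone-number string to numerals. The `words_to_digits` helper and the reduced digit table are illustrative, not part of the original module:

```python
# Subset of the `Small` table above covering single digits.
small = {'zero': 0, 'oh': 0, 'one': 1, 'two': 2, 'three': 3, 'four': 4,
         'five': 5, 'six': 6, 'seven': 7, 'eight': 8, 'nine': 9}


def words_to_digits(words):
    # Replace recognized digit words with their numeral; pass
    # unrecognized tokens through unchanged.
    return ''.join(str(small[w]) if w in small else w
                   for w in words.lower().split())


digits = words_to_digits('five five five oh one two three')
```

For the input above, `digits` comes out as `'5550123'`, which is the kind of normalization these substitution tables support.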
codesearchnet
def get_internal_modules(key='exa'):
    key += '.'
    return [v for k, v in sys.modules.items() if k.startswith(key)]
Get a list of modules belonging to the given package. Args: key (str): Package or library name (e.g. "exa")
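A self-contained usage sketch, reproducing the function and demonstrating it with the stdlib `json` package in place of "exa" (the choice of package is illustrative):

```python
import sys


def get_internal_modules(key='exa'):
    key += '.'
    return [v for k, v in sys.modules.items() if k.startswith(key)]


# Importing json also loads its submodules (json.decoder, json.encoder,
# ...), all of which land in sys.modules under names like "json.decoder".
import json
import json.decoder

mods = get_internal_modules('json')
```

Note that the trailing `'.'` means the package itself (`json`) is excluded; only names strictly inside the package match.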
codesearchnet
def mach60(msg):
    d = hex2bin(data(msg))
    if d[23] == '0':
        return None
    mach = bin2int(d[24:34]) * 2.048 / 512.0
    return round(mach, 3)
Aircraft MACH number Args: msg (String): 28 bytes hexadecimal message (BDS60) string Returns: float: MACH number
codesearchnet
def get_v2_names(symbol: Any) -> Sequence[str]:
    names_v2 = []
    tensorflow_api_attr = API_ATTRS[TENSORFLOW_API_NAME].names
    keras_api_attr = API_ATTRS[KERAS_API_NAME].names
    if not hasattr(symbol, '__dict__'):
        return names_v2
    if tensorflow_api_attr in symbol.__dict__:
        names_v2.extend(getattr(symbol, tensorflow_api_attr))
    if keras_api_attr in symbol.__dict__:
        names_v2.extend(getattr(symbol, keras_api_attr))
    return names_v2
Get a list of TF 2.0 names for this symbol. Args: symbol: symbol to get API names for. Returns: List of all API names for this symbol.
github-repos
def get_reference_points(spatial_shapes, valid_ratios, device):
    reference_points_list = []
    for level, (height, width) in enumerate(spatial_shapes):
        ref_y, ref_x = meshgrid(
            torch.linspace(0.5, height - 0.5, height,
                           dtype=valid_ratios.dtype, device=device),
            torch.linspace(0.5, width - 0.5, width,
                           dtype=valid_ratios.dtype, device=device),
            indexing='ij')
        ref_y = ref_y.reshape(-1)[None] / (valid_ratios[:, None, level, 1] * height)
        ref_x = ref_x.reshape(-1)[None] / (valid_ratios[:, None, level, 0] * width)
        ref = torch.stack((ref_x, ref_y), -1)
        reference_points_list.append(ref)
    reference_points = torch.cat(reference_points_list, 1)
    reference_points = reference_points[:, :, None] * valid_ratios[:, None]
    return reference_points
Get reference points for each feature map. Used in decoder. Args: spatial_shapes (`torch.LongTensor` of shape `(num_feature_levels, 2)`): Spatial shapes of each feature map. valid_ratios (`torch.FloatTensor` of shape `(batch_size, num_feature_levels, 2)`): Valid ratios of each feature map. device (`torch.device`): Device on which to create the tensors. Returns: `torch.FloatTensor` of shape `(batch_size, num_queries, num_feature_levels, 2)`
github-repos
async def invoke(self, context):
    try:
        tasks = await self._run_cancellable(claim_work(context))
        if not tasks or not tasks.get('tasks', []):
            await self._run_cancellable(
                asyncio.sleep(context.config['poll_interval']))
            return None
        status = None
        for task_defn in tasks.get('tasks', []):
            prepare_to_run_task(context, task_defn)
            reclaim_fut = context.event_loop.create_task(
                reclaim_task(context, context.task))
            try:
                status = await do_run_task(context, self._run_cancellable,
                                           self._to_cancellable_process)
                artifacts_paths = filepaths_in_dir(context.config['artifact_dir'])
            except WorkerShutdownDuringTask:
                shutdown_artifact_paths = [
                    os.path.join('public', 'logs', log_file)
                    for log_file in ['chain_of_trust.log', 'live_backing.log']]
                artifacts_paths = [
                    path for path in shutdown_artifact_paths
                    if os.path.isfile(
                        os.path.join(context.config['artifact_dir'], path))]
                status = STATUSES['worker-shutdown']
            status = worst_level(status, await do_upload(context, artifacts_paths))
            await complete_task(context, status)
            reclaim_fut.cancel()
            cleanup(context)
        return status
    except asyncio.CancelledError:
        return None
Claims and processes Taskcluster work. Args: context (scriptworker.context.Context): context of worker Returns: status code of build
juraj-google-style
def _sendline(self, line): self.lines = [] try: self._read() except socket.error: logging.debug('Nothing cleared') logger.debug('sending [%s]', line) self._write(line + '\r\n') time.sleep(0.5)
Send exactly one line to the device Args: line (str): data to send to the device
juraj-google-style
def forward(ctx, scores: torch.Tensor, multiplier: torch.Tensor, selected_experts: torch.Tensor, masked_gates: torch.Tensor, mask_for_one: torch.Tensor): ctx.save_for_backward(multiplier, selected_experts, masked_gates) return multiplier * mask_for_one
Forward pass for the custom autograd function. Args: ctx: Context object to save information for backward computation. scores (torch.Tensor): Input scores tensor. multiplier (torch.Tensor): Multiplier tensor. selected_experts (torch.Tensor): Tensor of selected experts. masked_gates (torch.Tensor): Masked gates tensor. mask_for_one (torch.Tensor): Mask for one tensor. Returns: torch.Tensor: Result of the forward pass.
github-repos
def center_crop(self, image: np.ndarray, crop_size: Dict[str, int], data_format: Optional[Union[str, ChannelDimension]]=None, input_data_format: Optional[Union[str, ChannelDimension]]=None, **kwargs) -> np.ndarray: crop_size = get_size_dict(crop_size, default_to_square=True) if 'height' not in crop_size or 'width' not in crop_size: raise ValueError('crop_size dictionary must contain height and width keys') return center_crop(image, (crop_size['height'], crop_size['width']), data_format=data_format, input_data_format=input_data_format, **kwargs)
Center crop an image to a certain size. Args: image (`np.ndarray`): Image to center crop. crop_size (`Dict[str, int]`): The size to center crop the image to. Must contain height and width keys. data_format (`str` or `ChannelDimension`, *optional*): The channel dimension format for the output image. If unset, the channel dimension format of the input image is used. input_data_format (`str` or `ChannelDimension`, *optional*): The channel dimension format of the input image. If not provided, it will be inferred.
github-repos
def populate_request_data(self, request_args): request_args['auth'] = HTTPBasicAuth( self._username, self._password) return request_args
Add the authentication info to the supplied dictionary. We use the `requests.HTTPBasicAuth` class as the `auth` param. Args: `request_args`: The arguments that will be passed to the request. Returns: The updated arguments for the request.
juraj-google-style
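What `requests.HTTPBasicAuth` ultimately attaches to the request is a standard RFC 7617 header; a stdlib-only sketch (the helper name is hypothetical):

```python
import base64


def basic_auth_header(username, password):
    # "Basic " + base64("user:pass") -- the Authorization header that
    # HTTPBasicAuth produces under the hood, per RFC 7617.
    token = base64.b64encode(
        "{}:{}".format(username, password).encode("utf-8")).decode("ascii")
    return {"Authorization": "Basic " + token}
```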
def from_object(cls, o, base_uri): if isinstance(o, list): if len(o) == 1: return cls.from_object(o[0], base_uri) return [cls.from_object(x, base_uri) for x in o] return cls(o, base_uri)
Returns a new ``Link`` based on a JSON object or array. Arguments: - ``o``: a dictionary holding the deserialized JSON for the new ``Link``, or a ``list`` of such documents. - ``base_uri``: optional URL used as the basis when expanding relative URLs in the link.
juraj-google-style
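The collapsing behavior of `from_object` can be exercised with a minimal stand-in class (hypothetical, not the real HAL `Link`):

```python
class Link:
    # Hypothetical stand-in for the HAL-style Link class above.
    def __init__(self, o, base_uri):
        self.o, self.base_uri = o, base_uri

    @classmethod
    def from_object(cls, o, base_uri):
        # A one-element list collapses to a single Link; longer lists map
        # element-wise; a plain dict wraps directly.
        if isinstance(o, list):
            if len(o) == 1:
                return cls.from_object(o[0], base_uri)
            return [cls.from_object(x, base_uri) for x in o]
        return cls(o, base_uri)
```

Note the asymmetry: a caller cannot distinguish "a list containing one link" from "one link" after deserialization, which is exactly the HAL convention.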
def __init__(self, html_template_path, export_report_path): if not _file_io.file_exists(html_template_path): raise IOError("File '{0}' does not exist.".format(html_template_path)) with _file_io.FileIO(html_template_path, 'r') as f: self.html_template = f.read() _file_io.recursive_create_dir(os.path.dirname(export_report_path)) self.export_report_path = export_report_path
Reads the HTML template content. Args: html_template_path: A string, path to the template HTML file. export_report_path: A string, path to the generated HTML report. This path should point to a '.html' file with date and time in its name. e.g. 2019-01-01-10:05.toco_report.html. Raises: IOError: File doesn't exist.
github-repos
def credits(self, **kwargs): path = self._get_series_id_season_number_path('credits') response = self._GET(path, kwargs) self._set_attrs_to_values(response) return response
Get the cast & crew credits for a TV season by season number. Returns: A dict representation of the JSON returned from the API.
codesearchnet
def CoinFromRef(coin_ref, tx_output, state=CoinState.Unconfirmed, transaction=None): coin = Coin(coin_reference=coin_ref, tx_output=tx_output, state=state) coin._transaction = transaction return coin
Get a Coin object using a CoinReference. Args: coin_ref (neo.Core.CoinReference): an object representing a single UTXO / transaction input. tx_output (neo.Core.Transaction.TransactionOutput): an object representing a transaction output. state (neo.Core.State.CoinState): the state of the coin. transaction: the transaction the coin belongs to, if any. Returns: Coin: the newly created Coin instance.
codesearchnet
def success(channel, stats, name, platform, dp): datapacks = [('Platform', platform, False)] for stat in stats: if (stat[0] in ('Duel 1v1', 'Doubles 2v2', 'Solo Standard 3v3', 'Standard 3v3')): stat_name = (('__' + stat[0]) + '__') stat_value = (('**' + stat[1]) + '**') else: stat_name = stat[0] stat_value = stat[1] if stat[2]: stat_value += ((' *(Top ' + stat[2]) + '%)*') datapacks.append((stat_name, stat_value, True)) gui = ui_embed.UI(channel, 'Rocket League Stats: {}'.format(name), '*Stats obtained from [Rocket League Tracker Network](https: return gui
Creates an embed UI containing the Rocket League stats Args: channel (discord.Channel): The Discord channel to bind the embed to stats (tuple): Tuples of (field, value, percentile) name (str): The name of the player platform (str): The platform to search on, can be 'steam', 'ps', or 'xbox' dp (str): URL to the player's dp Returns: (discord.Embed): The created embed
codesearchnet
def get_version(self, timestamp): if timestamp != 0 and timestamp != self.current_timestamp: assert timestamp > self.current_timestamp self.current_version = self.current_version + 1 self.current_timestamp = timestamp return self.current_version
Updates version if necessary and returns the version number. Args: timestamp: (int) unix timestamp when the cache is updated. This value is zero if the cache has been evicted or doesn't exist.
github-repos
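The version-bump rule above (only a new, strictly newer, non-zero timestamp advances the version) can be seen in a small self-contained holder (the class name and initial values are assumptions):

```python
class VersionedCache:
    # Hypothetical holder mirroring the update rule in get_version above.
    def __init__(self):
        self.current_version = 0
        self.current_timestamp = 0

    def get_version(self, timestamp):
        # Timestamp 0 means "evicted/missing" and leaves the version
        # alone; a repeated timestamp is also a no-op.
        if timestamp != 0 and timestamp != self.current_timestamp:
            assert timestamp > self.current_timestamp
            self.current_version += 1
            self.current_timestamp = timestamp
        return self.current_version
```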
def _AddSerializedEvent(self, event): identifier = identifiers.SQLTableIdentifier(self._CONTAINER_TYPE_EVENT, (self._serialized_event_heap.number_of_events + 1)) event.SetIdentifier(identifier) serialized_data = self._SerializeAttributeContainer(event) self._serialized_event_heap.PushEvent(event.timestamp, serialized_data) if (self._serialized_event_heap.data_size > self._maximum_buffer_size): self._WriteSerializedAttributeContainerList(self._CONTAINER_TYPE_EVENT)
Adds a serialized event. Args: event (EventObject): event. Raises: IOError: if the event cannot be serialized. OSError: if the event cannot be serialized.
codesearchnet
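The pattern in `_AddSerializedEvent` — buffer events in a timestamp-ordered heap and flush once a byte budget is exceeded — can be sketched without the plaso storage machinery (all names here are hypothetical):

```python
import heapq


class EventBuffer:
    # Hypothetical analogue of the serialized-event heap with a
    # flush-when-over-budget policy, as in _AddSerializedEvent above.
    def __init__(self, max_bytes):
        self._heap, self._size, self._max = [], 0, max_bytes
        self.flushes = 0

    def push(self, timestamp, payload):
        heapq.heappush(self._heap, (timestamp, payload))
        self._size += len(payload)
        if self._size > self._max:
            self.flush()

    def flush(self):
        # Drain in timestamp order, as a storage writer would.
        while self._heap:
            heapq.heappop(self._heap)
        self._size = 0
        self.flushes += 1
```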
def has_overlap(self, interval: 'Interval') -> bool: if ((self.begin < interval.end) and (interval.begin < self.end)): return True return False
Check if self has overlap with `interval`. Args: interval: interval to be examined Returns: bool: True if self has overlap with `interval` otherwise False
codesearchnet
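The overlap test above is the classic half-open-interval check; a minimal self-contained sketch (the `Interval` class here is a hypothetical stand-in):

```python
class Interval:
    # Hypothetical minimal interval with half-open semantics [begin, end).
    def __init__(self, begin, end):
        self.begin, self.end = begin, end

    def has_overlap(self, other):
        # Same test as above: strict inequalities, so intervals that
        # merely touch at an endpoint do not count as overlapping.
        return self.begin < other.end and other.begin < self.end
```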
def ragged_rank(self): return self._ragged_rank
The number of times the RaggedTensor's flat_values is partitioned. Defaults to `shape.ndims - 1`. Examples: >>> values = tf.ragged.constant([[1, 2, 3], [4], [5, 6], [7, 8, 9, 10]]) >>> tf.type_spec_from_value(values).ragged_rank 1 >>> rt1 = tf.RaggedTensor.from_uniform_row_length(values, 2) >>> tf.type_spec_from_value(rt1).ragged_rank 2 Returns: A Python `int` indicating the number of times the underlying `flat_values` Tensor has been partitioned to add a new dimension. I.e., `tf.rank(rt) = tf.rank(rt.flat_values) + rt.ragged_rank`.
github-repos
def __init__(self, performed_action, run_metadata=None, client_graph_def=None, tf_error=None): _check_type(performed_action, str) self.performed_action = performed_action if run_metadata is not None: _check_type(run_metadata, config_pb2.RunMetadata) self.run_metadata = run_metadata self.client_graph_def = client_graph_def self.tf_error = tf_error
Constructor for `OnRunEndRequest`. Args: performed_action: (`OnRunStartAction`) Actually-performed action by the debug-wrapper session. run_metadata: run_metadata output from the run() call (if any). client_graph_def: (GraphDef) GraphDef from the client side, i.e., from the python front end of TensorFlow. Can be obtained with session.graph.as_graph_def(). tf_error: (errors.OpError subtypes) TensorFlow OpError that occurred during the run (if any).
github-repos
def get_groups(self, **kwargs): params = {'cultureInfo': util.language_code(kwargs.get('lang'))} result = self.make_request('geo', 'get_groups', **params) if (not util.check_result(result)): return (False, result.get('resultDescription', 'UNKNOWN ERROR')) values = util.response_list(result, 'resultValues') return (True, [emtype.GeoGroupItem(**a) for a in values])
Obtain geo groups and their details. Args: lang (str): Language code (*es* or *en*). Returns: Status boolean and parsed response (list[GeoGroupItem]), or message string in case of error.
codesearchnet
def Selector(fields): check_user_facing_fields_dict(fields, 'Selector') class _Selector(_ConfigSelector): def __init__(self): key = 'Selector.' + str(DictCounter.get_next_count()) super(_Selector, self).__init__( key=key, name=None, fields=fields, type_attributes=ConfigTypeAttributes(is_builtin=True), ) return _Selector
Selectors are used when you want to be able to present several different options to the user but force them to select one. For example, it would not make much sense to allow them to say that a single input should be sourced from a csv and a parquet file: They must choose. Note that in other type systems this might be called an "input union." Args: fields (Dict[str, Field]):
juraj-google-style
def sanity_check_states(states_spec): states = copy.deepcopy(states_spec) is_unique = ('shape' in states) if is_unique: states = dict(state=states) for (name, state) in states.items(): if isinstance(state['shape'], int): state['shape'] = (state['shape'],) if ('type' not in state): state['type'] = 'float' return (states, is_unique)
Sanity checks a states dict, used to define the state space for an MDP. Throws an error or warns if mismatches are found. Args: states_spec (Union[None,dict]): The spec-dict to check (or None). Returns: Tuple of 1) the state space desc and 2) whether there is only one component in the state space.
codesearchnet
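The normalization performed by `sanity_check_states` is easy to exercise directly; here the function is copied as given (reflowed for readability) together with a usage example:

```python
import copy


def sanity_check_states(states_spec):
    # Copied from the snippet above: a bare {'shape': ...} dict becomes
    # {'state': {...}}, int shapes become 1-tuples, and a missing 'type'
    # defaults to 'float'.
    states = copy.deepcopy(states_spec)
    is_unique = 'shape' in states
    if is_unique:
        states = dict(state=states)
    for name, state in states.items():
        if isinstance(state['shape'], int):
            state['shape'] = (state['shape'],)
        if 'type' not in state:
            state['type'] = 'float'
    return states, is_unique
```

A single-component spec like `{'shape': 3}` comes back as `({'state': {'shape': (3,), 'type': 'float'}}, True)`.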
def _set_current(self, new_current): new_cur_full_path = self.join(new_current) if (not os.path.exists(new_cur_full_path)): raise PrefixNotFound(('Prefix "%s" does not exist in workdir %s' % (new_current, self.path))) if os.path.lexists(self.join('current')): os.unlink(self.join('current')) os.symlink(new_current, self.join('current')) self.current = new_current
Change the current default prefix, for internal usage Args: new_current(str): Name of the new current prefix, it must already exist Returns: None Raises: PrefixNotFound: if the given prefix name does not exist in the workdir
codesearchnet
def RotateServerKey(cn=u'grr', keylength=4096): ca_certificate = config.CONFIG['CA.certificate'] ca_private_key = config.CONFIG['PrivateKeys.ca_key'] if ((not ca_certificate) or (not ca_private_key)): raise ValueError('No existing CA certificate found.') existing_cert = config.CONFIG['Frontend.certificate'] serial_number = (existing_cert.GetSerialNumber() + 1) EPrint(("Generating new server key (%d bits, cn '%s', serial number %d)." % (keylength, cn, serial_number))) server_private_key = rdf_crypto.RSAPrivateKey.GenerateKey(bits=keylength) server_cert = key_utils.MakeCASignedCert(str(cn), server_private_key, ca_certificate, ca_private_key, serial_number=serial_number) EPrint('Updating configuration.') config.CONFIG.Set('Frontend.certificate', server_cert.AsPEM()) config.CONFIG.Set('PrivateKeys.server_key', server_private_key.AsPEM()) config.CONFIG.Write() EPrint('Server key rotated, please restart the GRR Frontends.')
This function creates and installs a new server key. Note that - Clients might experience intermittent connection problems after the server keys rotated. - It's not possible to go back to an earlier key. Clients that see a new certificate will remember the cert's serial number and refuse to accept any certificate with a smaller serial number from that point on. Args: cn: The common name for the server to use. keylength: Length in bits for the new server key. Raises: ValueError: There is no CA cert in the config. Probably the server still needs to be initialized.
codesearchnet
def _HasSelf(self, sig): return sig.params and sig.params[0].name == 'self'
True if a signature has a self parameter. This only checks for the name, since the type can be too many different things (type of the method, type of the base class, object, unknown etc.) and doesn't carry over to the simplified version, anyway. Arguments: sig: Function signature (instance of pytd.Signature) Returns: True if the signature has "self".
github-repos
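Since `_HasSelf` looks only at the first parameter's name, it can be demonstrated without pytd using throwaway namedtuples (hypothetical stand-ins for `pytd.Signature`):

```python
from collections import namedtuple

# Hypothetical stand-ins for the pytd signature/parameter objects.
Param = namedtuple("Param", "name")
Sig = namedtuple("Sig", "params")


def has_self(sig):
    # Same name-only check as _HasSelf above: inspect the first
    # parameter's name and ignore its type entirely.
    return bool(sig.params) and sig.params[0].name == "self"
```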
def add_method(self, m, **kwargs): if isinstance(m, types.FunctionType): self[('function', id(m))] = m else: (f, obj) = get_method_vars(m) wrkey = (f, id(obj)) self[wrkey] = obj
Add an instance method or function Args: m: The instance method or function to store
codesearchnet
def set_number_of_shards(self, number_of_shards): if self._frozen: if self._number_of_shards != number_of_shards: raise ValueError(f"Can't set sharding policy to use {number_of_shards} shards since it has been frozen to use {self._number_of_shards}") elif number_of_shards > 0: self._number_of_shards = number_of_shards else: raise ValueError(f"Can't set sharding policy to use {number_of_shards} shards; value must be > 0")
Sets the number of shards for the current policy. If the policy has been frozen then number_of_shards must match the existing setting. Args: number_of_shards: The number of shards to use in the policy. Raises: ValueError: If the policy has been frozen and number_of_shards differs from the frozen value; or number_of_shards <= 0.
github-repos
def delete_variant(self, variant): mongo_variant = self.get_variant(variant) if mongo_variant: if mongo_variant['observations'] == 1: LOG.debug("Removing variant {0}".format( mongo_variant.get('_id') )) message = self.db.variant.delete_one({'_id': variant['_id']}) else: LOG.debug("Decreasing observations for {0}".format( mongo_variant.get('_id') )) message = self.db.variant.update_one({ '_id': mongo_variant['_id'] },{ '$inc': { 'observations': -1, 'homozygote': - (variant.get('homozygote', 0)), 'hemizygote': - (variant.get('hemizygote', 0)), }, '$pull': { 'families': variant.get('case_id') } }, upsert=False) return
Delete an observation in the database. This decreases the 'observations' counter by one; if 'observations' == 1 the variant is removed entirely. If the variant was homozygote, 'homozygote' is decreased by one as well. The family is also removed from the 'families' array. Args: variant (dict): A variant dictionary
juraj-google-style
def _map_across_full_axis_select_indices( self, axis, func, indices, keep_remaining=False ): return self.data.apply_func_to_select_indices_along_full_axis( axis, func, indices, keep_remaining )
Maps function to select indices along full axis. Args: axis: 0 for columns and 1 for rows. func: Callable mapping function over the BlockPartitions. indices: indices along axis to map over. keep_remaining: True to keep indices where the function was not applied. Returns: BaseFrameManager containing the result of mapping func over axis on indices.
juraj-google-style
def _FormatAttrToken(self, token_data): return { 'mode': token_data.file_mode, 'uid': token_data.user_identifier, 'gid': token_data.group_identifier, 'system_id': token_data.file_system_identifier, 'node_id': token_data.file_identifier, 'device': token_data.device}
Formats an attribute token as a dictionary of values. Args: token_data (bsm_token_data_attr32|bsm_token_data_attr64): AUT_ATTR32 or AUT_ATTR64 token data. Returns: dict[str, str]: token values.
juraj-google-style
def _to_sparse_input_and_drop_ignore_values(input_tensor, ignore_value=None): input_tensor = sparse_tensor_lib.convert_to_tensor_or_sparse_tensor(input_tensor) if isinstance(input_tensor, sparse_tensor_lib.SparseTensor): return input_tensor with ops.name_scope(None, 'to_sparse_input', (input_tensor, ignore_value)): if ignore_value is None: if input_tensor.dtype == dtypes.string: ignore_value = '' elif input_tensor.dtype.is_integer: ignore_value = -1 else: ignore_value = input_tensor.dtype.as_numpy_dtype() ignore_value = math_ops.cast(ignore_value, input_tensor.dtype, name='ignore_value') indices = array_ops.where_v2(math_ops.not_equal(input_tensor, ignore_value), name='indices') return sparse_tensor_lib.SparseTensor(indices=indices, values=array_ops.gather_nd(input_tensor, indices, name='values'), dense_shape=array_ops.shape(input_tensor, out_type=dtypes.int64, name='dense_shape'))
Converts a `Tensor` to a `SparseTensor`, dropping ignore_value cells. If `input_tensor` is already a `SparseTensor`, just return it. Args: input_tensor: A string or integer `Tensor`. ignore_value: Entries in `dense_tensor` equal to this value will be absent from the resulting `SparseTensor`. If `None`, default value of `dense_tensor`'s dtype will be used ('' for `str`, -1 for `int`). Returns: A `SparseTensor` with the same shape as `input_tensor`. Raises: ValueError: when `input_tensor`'s rank is `None`.
github-repos
def find_untranscribed_wavs(wav_path: Path, transcription_path: Path, label_type: str) -> List[str]: audio_files = wav_path.glob("**/*.wav") transcription_files = transcription_path.glob("**/*.{}".format(label_type)) transcription_file_prefixes = [t_file.stem for t_file in transcription_files] untranscribed_prefixes = [] for a_file in audio_files: if a_file.stem not in transcription_file_prefixes: untranscribed_prefixes.append(a_file.stem) return untranscribed_prefixes
Find the prefixes for all the wav files that do not have an associated transcription. Args: wav_path: Path to search for wav files in transcription_path: Path to search for transcriptions in label_type: The type of labels for transcriptions, e.g. "phonemes" or "phonemes_and_tones" Returns: A list of all untranscribed prefixes
juraj-google-style
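The core of `find_untranscribed_wavs` is a set difference over file stems; a compact stdlib sketch (non-recursive, with hypothetical names, using a set for the membership test instead of a list):

```python
import pathlib


def untranscribed_stems(wav_dir, txt_dir, ext):
    # Stems of .wav files with no matching transcription file of the
    # given extension (e.g. "phonemes").
    done = {t.stem for t in pathlib.Path(txt_dir).glob("*.{}".format(ext))}
    return sorted(a.stem for a in pathlib.Path(wav_dir).glob("*.wav")
                  if a.stem not in done)
```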
def AddNewSignature(self, pattern, offset=None): self.signatures.append(Signature(pattern, offset=offset))
Adds a signature. Args: pattern (bytes): pattern of the signature. offset (int): offset of the signature. None is used to indicate the signature has no offset. A positive offset is relative from the start of the data a negative offset is relative from the end of the data.
codesearchnet
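The offset semantics described in the docstring (None = anywhere, positive = from the start, negative = from the end) amount to the following check — a sketch, not the actual scanner's matching code:

```python
def signature_matches(data, pattern, offset=None):
    # offset None: the pattern may appear anywhere; offset >= 0: relative
    # to the start of the data; offset < 0: relative to the end.
    if offset is None:
        return pattern in data
    if offset < 0:
        offset += len(data)
    return data[offset:offset + len(pattern)] == pattern
```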
def round(cls, x: 'TensorFluent') -> 'TensorFluent': return cls._unary_op(x, tf.round, tf.float32)
Returns a TensorFluent for the round function. Args: x: The input fluent. Returns: A TensorFluent wrapping the round function.
codesearchnet
def header(self, sheet, name): header = sheet.row(0) for i, column in enumerate(self.headers[name]): header.write(i, column)
Write sheet header. Args: sheet: (xlwt.Worksheet.Worksheet) instance of xlwt sheet. name: (unicode) name of sheet.
juraj-google-style
def write_signatures(self, signatures): self.fileobj.seek(self.signature_offset) sig_entries = [dict(algorithm_id=id_, size=len(sig), signature=sig) for (id_, sig) in signatures] sigs = sigs_header.build(dict(filesize=self.filesize, count=len(signatures), sigs=sig_entries)) self.fileobj.write(sigs) signatures_len = len(sigs) self.additional_offset = (self.signature_offset + signatures_len) if (not (self.additional_offset == self.fileobj.tell())): raise IOError('ended up at unexpected offset')
Write signature data to the MAR file. Args: signatures (list): list of signature tuples of the form (algorithm_id, signature_data)
codesearchnet
def is_null_merge(self): return not bool(self._spec.to_string())
Indicate whether the wrapped spec is empty. In the degenerate case where self._spec is an empty specification, a caller may wish to skip a merge step entirely. (However this class does not have enough information to make that determination.) Returns: A boolean indicating whether a device merge will be trivial.
github-repos
def read_tracers_h5(xdmf_file, infoname, snapshot, position): xdmf_root = xmlET.parse(str(xdmf_file)).getroot() tra = {} tra[infoname] = [{}, {}] if position: for axis in 'xyz': tra[axis] = [{}, {}] for elt_subdomain in xdmf_root[0][0][snapshot].findall('Grid'): ibk = int(elt_subdomain.get('Name').startswith('meshYang')) if position: for data_attr in elt_subdomain.findall('Geometry'): for data_item, axis in zip(data_attr.findall('DataItem'), 'xyz'): icore, data = _get_field(xdmf_file, data_item) tra[axis][ibk][icore] = data for data_attr in elt_subdomain.findall('Attribute'): if data_attr.get('Name') != infoname: continue icore, data = _get_field(xdmf_file, data_attr.find('DataItem')) tra[infoname][ibk][icore] = data for info in tra: tra[info] = [trab for trab in tra[info] if trab] for iblk, trab in enumerate(tra[info]): tra[info][iblk] = np.concatenate([trab[icore] for icore in range(len(trab))]) return tra
Extract tracers data from hdf5 files. Args: xdmf_file (:class:`pathlib.Path`): path of the xdmf file. infoname (str): name of information to extract. snapshot (int): snapshot number. position (bool): whether to extract position of tracers. Returns: dict of list of numpy.array: Tracers data organized by attribute and block.
juraj-google-style
def copyglob(src: str, dest: str, allow_nothing: bool=False, allow_nonfiles: bool=False) -> None: something = False for filename in glob.glob(src): if (allow_nonfiles or os.path.isfile(filename)): shutil.copy(filename, dest) something = True if (something or allow_nothing): return raise ValueError('No files found matching: {}'.format(src))
Copies files whose filenames match the glob "src" into the directory "dest". Raises an error if no files are copied, unless allow_nothing is True. Args: src: source glob (e.g. ``/somewhere/*.txt``) dest: destination directory allow_nothing: don't raise an exception if no files are found allow_nonfiles: copy things that are not files too (as judged by :func:`os.path.isfile`). Raises: ValueError: if no files are found and ``allow_nothing`` is not set
codesearchnet
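The same glob-then-copy contract can be written as a small standalone helper (names are hypothetical; it returns the copy count rather than None for easier checking):

```python
import glob
import os
import shutil


def copy_matching(src_glob, dest_dir, allow_nothing=False):
    # Same contract as copyglob above: copy every file matched by the
    # glob, and raise if nothing was copied unless allow_nothing is set.
    copied = 0
    for name in glob.glob(src_glob):
        if os.path.isfile(name):
            shutil.copy(name, dest_dir)
            copied += 1
    if copied == 0 and not allow_nothing:
        raise ValueError("No files found matching: {}".format(src_glob))
    return copied
```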
async def _call_rpc(self, header): (length, _, cmd, feature, address) = struct.unpack('<BBBBB', bytes(header)) rpc_id = ((feature << 8) | cmd) payload = self.rpc_payload[:length] self._logger.debug('Calling RPC %d:%04X with %s', address, rpc_id, binascii.hexlify(payload)) exception = None response = None try: response = (await self.send_rpc(self.CLIENT_ID, str(self.device.iotile_id), address, rpc_id, bytes(payload), timeout=30.0)) except VALID_RPC_EXCEPTIONS as err: exception = err except Exception as err: self._logger.exception('Error calling RPC %d:%04X', address, rpc_id) exception = err (status, response) = pack_rpc_response(response, exception) resp_header = struct.pack('<BBBB', status, 0, 0, len(response)) (await self._send_notification(self.ReceiveHeaderHandle, resp_header)) if (len(response) > 0): (await self._send_notification(self.ReceivePayloadHandle, response))
Call an RPC given a header and possibly a previously sent payload Args: header (bytearray): The RPC header we should call
codesearchnet
def create(filename: str, layers: Union[(np.ndarray, Dict[(str, np.ndarray)], loompy.LayerManager)], row_attrs: Union[(loompy.AttributeManager, Dict[(str, np.ndarray)])], col_attrs: Union[(loompy.AttributeManager, Dict[(str, np.ndarray)])], *, file_attrs: Dict[(str, str)]=None) -> None: if isinstance(row_attrs, loompy.AttributeManager): row_attrs = {k: v[:] for (k, v) in row_attrs.items()} if isinstance(col_attrs, loompy.AttributeManager): col_attrs = {k: v[:] for (k, v) in col_attrs.items()} if (isinstance(layers, np.ndarray) or scipy.sparse.issparse(layers)): layers = {'': layers} elif isinstance(layers, loompy.LayerManager): layers = {k: v[(:, :)] for (k, v) in layers.items()} if ('' not in layers): raise ValueError('Data for default layer must be provided') shape = layers[''].shape if ((shape[0] == 0) or (shape[1] == 0)): raise ValueError('Main matrix cannot be empty') for (name, layer) in layers.items(): if (layer.shape != shape): raise ValueError(f"Layer '{name}' is not the same shape as the main matrix") for (name, ra) in row_attrs.items(): if (ra.shape[0] != shape[0]): raise ValueError(f"Row attribute '{name}' is not the same length ({ra.shape[0]}) as number of rows in main matrix ({shape[0]})") for (name, ca) in col_attrs.items(): if (ca.shape[0] != shape[1]): raise ValueError(f"Column attribute '{name}' is not the same length ({ca.shape[0]}) as number of columns in main matrix ({shape[1]})") try: with new(filename, file_attrs=file_attrs) as ds: for (key, vals) in layers.items(): ds.layer[key] = vals for (key, vals) in row_attrs.items(): ds.ra[key] = vals for (key, vals) in col_attrs.items(): ds.ca[key] = vals except ValueError as ve: if os.path.exists(filename): os.remove(filename) raise ve
Create a new Loom file from the given data. Args: filename (str): The filename (typically using a ``.loom`` file extension) layers: One of the following: * Two-dimensional (N-by-M) numpy ndarray of float values * Sparse matrix (e.g. :class:`scipy.sparse.csr_matrix`) * Dictionary of named layers, each an N-by-M ndarray or sparse matrix * A :class:`.LayerManager`, with each layer an N-by-M ndarray row_attrs (dict): Row attributes, where keys are attribute names and values are numpy arrays (float or string) of length N col_attrs (dict): Column attributes, where keys are attribute names and values are numpy arrays (float or string) of length M file_attrs (dict): Global attributes, where keys are attribute names and values are strings Returns: Nothing Remarks: If the file exists, it will be overwritten.
codesearchnet
def Tensors(self, run, tag): accumulator = self.GetAccumulator(run) return accumulator.Tensors(tag)
Retrieve the tensor events associated with a run and tag. Args: run: A string name of the run for which values are retrieved. tag: A string name of the tag for which values are retrieved. Raises: KeyError: If the run is not found, or the tag is not available for the given run. Returns: An array of `event_accumulator.TensorEvent`s.
codesearchnet
def create_from_json(cls, json_data): prop = Property() address_info = json_data["address_info"] prop.address = address_info["address"] prop.block_id = address_info["block_id"] prop.zipcode = address_info["zipcode"] prop.zipcode_plus4 = address_info["zipcode_plus4"] prop.address_full = address_info["address_full"] prop.city = address_info["city"] prop.county_fips = address_info["county_fips"] prop.geo_precision = address_info["geo_precision"] prop.lat = address_info["lat"] prop.lng = address_info["lng"] prop.slug = address_info["slug"] prop.state = address_info["state"] prop.unit = address_info["unit"] prop.meta = None if "meta" in json_data: prop.meta = json_data["meta"] prop.component_results = _create_component_results(json_data, "address_info") return prop
Deserialize property json data into a Property object Args: json_data (dict): The json data for this property Returns: Property object
juraj-google-style
def AddNEP5Token(self, token): if token.ScriptHash.ToBytes() in self._tokens.keys(): logger.error("Token already in wallet") return self._tokens[token.ScriptHash.ToBytes()] = token
Add a NEP-5 compliant token to the wallet. Args: token (NEP5Token): an instance of type neo.Wallets.NEP5Token. Note: Prints a warning to the console if the token already exists in the wallet.
juraj-google-style
def __init__(self, storage_writer): super(StorageMergeReader, self).__init__() self._storage_writer = storage_writer
Initializes a storage merge reader. Args: storage_writer (StorageWriter): storage writer.
juraj-google-style
def publish(self, object_id: str, event_type: str, event_data: dict=None): object_key = SchedulingObject.get_key(self.type, object_id) publish(event_type=event_type, event_data=event_data, object_type=self.type, object_id=object_id, object_key=object_key, origin=None)
Publish a scheduling object event. Args: object_id (str): ID of the scheduling object event_type (str): Type of event. event_data (dict, optional): Event data.
codesearchnet
def indent_css(f, output): line_count = get_line_count(f) f = open(f, 'r') output = open(output, 'w') for line in range(line_count): string = f.readline().rstrip() if len(string) > 0: if string[-1] == ";": output.write("    " + string + "\n") else: output.write(string + "\n") output.close() f.close()
Indents CSS that has not been indented and saves it to a new file. A new file is created if the output destination does not already exist. Args: f: string, path to file. output: string, path/name of the output file (e.g. /directory/output.css). Returns: None.
juraj-google-style
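The indentation rule above — declarations (lines ending in `;`) get indented, selectors and braces stay flush — can be captured as a pure string transform, shown here as a sketch assuming a four-space indent:

```python
def indent_declarations(css_lines):
    # Lines ending in ';' (declarations) are indented four spaces;
    # selectors and braces are left flush, as in indent_css above.
    out = []
    for line in css_lines:
        line = line.rstrip()
        if line and line.endswith(";"):
            out.append("    " + line)
        else:
            out.append(line)
    return out
```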
def list_channels(self, collection_name, experiment_name): dont_care = 'image' chan = ChannelResource( name='', collection_name=collection_name, experiment_name=experiment_name, type=dont_care) return self._list_resource(chan)
List all channels belonging to the named experiment that is part of the named collection. Args: collection_name (string): Name of the parent collection. experiment_name (string): Name of the parent experiment. Returns: (list) Raises: requests.HTTPError on failure.
juraj-google-style
class TFCvtStage(keras.layers.Layer): def __init__(self, config: CvtConfig, stage: int, **kwargs): super().__init__(**kwargs) self.config = config self.stage = stage if self.config.cls_token[self.stage]: self.cls_token = self.add_weight(shape=(1, 1, self.config.embed_dim[-1]), initializer=get_initializer(self.config.initializer_range), trainable=True, name='cvt.encoder.stages.2.cls_token') self.embedding = TFCvtEmbeddings(self.config, patch_size=config.patch_sizes[self.stage], num_channels=config.num_channels if self.stage == 0 else config.embed_dim[self.stage - 1], stride=config.patch_stride[self.stage], embed_dim=config.embed_dim[self.stage], padding=config.patch_padding[self.stage], dropout_rate=config.drop_rate[self.stage], name='embedding') drop_path_rates = tf.linspace(0.0, config.drop_path_rate[self.stage], config.depth[stage]) drop_path_rates = [x.numpy().item() for x in drop_path_rates] self.layers = [TFCvtLayer(config, num_heads=config.num_heads[self.stage], embed_dim=config.embed_dim[self.stage], kernel_size=config.kernel_qkv[self.stage], stride_q=config.stride_q[self.stage], stride_kv=config.stride_kv[self.stage], padding_q=config.padding_q[self.stage], padding_kv=config.padding_kv[self.stage], qkv_projection_method=config.qkv_projection_method[self.stage], qkv_bias=config.qkv_bias[self.stage], attention_drop_rate=config.attention_drop_rate[self.stage], drop_rate=config.drop_rate[self.stage], mlp_ratio=config.mlp_ratio[self.stage], drop_path_rate=drop_path_rates[self.stage], with_cls_token=config.cls_token[self.stage], name=f'layers.{j}') for j in range(config.depth[self.stage])] def call(self, hidden_state: tf.Tensor, training: bool=False): cls_token = None hidden_state = self.embedding(hidden_state, training) batch_size, height, width, num_channels = shape_list(hidden_state) hidden_size = height * width hidden_state = tf.reshape(hidden_state, shape=(batch_size, hidden_size, num_channels)) if self.config.cls_token[self.stage]: cls_token = tf.repeat(self.cls_token, repeats=batch_size, axis=0) hidden_state = tf.concat((cls_token, hidden_state), axis=1) for layer in self.layers: layer_outputs = layer(hidden_state, height, width, training=training) hidden_state = layer_outputs if self.config.cls_token[self.stage]: cls_token, hidden_state = tf.split(hidden_state, [1, height * width], 1) hidden_state = tf.reshape(hidden_state, shape=(batch_size, height, width, num_channels)) return (hidden_state, cls_token) def build(self, input_shape=None): if self.built: return self.built = True if getattr(self, 'embedding', None) is not None: with tf.name_scope(self.embedding.name): self.embedding.build(None) if getattr(self, 'layers', None) is not None: for layer in self.layers: with tf.name_scope(layer.name): layer.build(None)
Cvt stage (encoder block). Each stage has 2 parts : - (1) A Convolutional Token Embedding layer - (2) A Convolutional Transformer Block (layer). The classification token is added only in the last stage. Args: config ([`CvtConfig`]): Model configuration class. stage (`int`): Stage number.
github-repos
def by_geopoint(self, lat, long): (header, content) = self._http_request(self.BASE_URL, lat=lat, long=long) return json.loads(content)
Perform a Yelp Neighborhood API Search based on a geopoint. Args: lat - geopoint latitude long - geopoint longitude
codesearchnet
def init(self, force_deploy=False, client=None): _force_deploy = self.provider_conf.force_deploy self.provider_conf.force_deploy = (_force_deploy or force_deploy) self._provider_conf = self.provider_conf.to_dict() r = api.Resources(self._provider_conf, client=client) r.launch() roles = r.get_roles() networks = r.get_networks() return (_to_enos_roles(roles), _to_enos_networks(networks))
Reserve and deploy the nodes according to the resources section. In comparison to the vagrant provider, networks must be characterized as in the networks key. Args: force_deploy (bool): True iff the environment must be redeployed Raises: MissingNetworkError: If one network is missing in comparison to what is claimed. NotEnoughNodesError: If the `min` constraints can't be met.
codesearchnet
def delete(filething): dsf_file = DSFFile(filething.fileobj) if dsf_file.dsd_chunk.offset_metdata_chunk != 0: id3_location = dsf_file.dsd_chunk.offset_metdata_chunk dsf_file.dsd_chunk.offset_metdata_chunk = 0 dsf_file.dsd_chunk.write() filething.fileobj.seek(id3_location) filething.fileobj.truncate()
Remove tags from a file. Args: filething (filething) Raises: mutagen.MutagenError
juraj-google-style
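The `delete` function above zeroes the stored metadata offset in the DSD chunk and then truncates the file where the metadata began. A minimal, self-contained sketch of that same two-step pattern on a made-up layout (an 8-byte little-endian offset field at the start of the file — not the real DSF header format):

```python
import io
import struct

def strip_trailing_metadata(fileobj, offset_field_pos=0):
    # Hypothetical layout: an 8-byte little-endian offset at
    # `offset_field_pos` points at trailing metadata (0 means "none").
    fileobj.seek(offset_field_pos)
    (meta_offset,) = struct.unpack('<Q', fileobj.read(8))
    if meta_offset != 0:
        # Zero the pointer so readers no longer look for metadata...
        fileobj.seek(offset_field_pos)
        fileobj.write(struct.pack('<Q', 0))
        # ...then drop the metadata bytes themselves.
        fileobj.seek(meta_offset)
        fileobj.truncate()

# 8-byte header + 8 bytes of "audio" + trailing tag bytes at offset 16.
buf = io.BytesIO(struct.pack('<Q', 16) + b'AUDIO!!!' + b'ID3TAG')
strip_trailing_metadata(buf)
```

As in the original, a zero offset is treated as "no metadata present" and the file is left untouched.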
@contextmanager def timeout(seconds=0, minutes=0, hours=0): limit = seconds + 60 * minutes + 3600 * hours def handler(signum, frame): raise TimeoutError('timed out after {} seconds'.format(limit)) try: signal.signal(signal.SIGALRM, handler) signal.setitimer(signal.ITIMER_REAL, limit) yield finally: signal.alarm(0)
Add a signal-based timeout to any block of code. If multiple time units are specified, they will be added together to determine time limit. Usage: with timeout(seconds=5): my_slow_function(...) Args: - seconds: The time limit, in seconds. - minutes: The time limit, in minutes. - hours: The time limit, in hours.
juraj-google-style
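A self-contained, runnable sketch of the signal-based timeout above (Unix-only, since `SIGALRM` is not available on Windows; the function body is the same generator wrapped with `contextlib.contextmanager` so it works in a `with` statement):

```python
import signal
import time
from contextlib import contextmanager

@contextmanager
def timeout(seconds=0, minutes=0, hours=0):
    # The time units are simply added together into one limit in seconds.
    limit = seconds + 60 * minutes + 3600 * hours

    def handler(signum, frame):
        raise TimeoutError('timed out after {} seconds'.format(limit))

    try:
        # Install the handler and arm a real-time interval timer.
        signal.signal(signal.SIGALRM, handler)
        signal.setitimer(signal.ITIMER_REAL, limit)
        yield
    finally:
        # Always disarm the alarm, even if the block raised.
        signal.alarm(0)

with timeout(seconds=1):
    time.sleep(0.1)  # completes well before the alarm fires
```

Note that `SIGALRM` is delivered to the main thread only, so this pattern cannot time out code running in worker threads.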
def _static_check(self): my_dtype = self.dtype if self._uniform_row_length is not None: if self._uniform_row_length.dtype != my_dtype: raise ValueError('_uniform_row_length.dtype=' + str(self._uniform_row_length.dtype) + ', not ' + str(my_dtype)) if self._row_lengths is not None and self._row_lengths.dtype != my_dtype: raise ValueError('_row_lengths.dtype=' + str(self._row_lengths.dtype) + ', not ' + str(my_dtype)) if self._value_rowids is not None and self._value_rowids.dtype != my_dtype: raise ValueError('_value_rowids.dtype=' + str(self._value_rowids.dtype) + ', not ' + str(my_dtype)) if self._nrows is not None and self._nrows.dtype != my_dtype: raise ValueError('_nrows.dtype=' + str(self._nrows.dtype) + ', not ' + str(my_dtype))
Checks if the object is internally consistent. Raises: ValueError if inconsistent.
github-repos
def compile_function(node, globals_=None): if (not isinstance(node, gast.AST)): if (not isinstance(node, six.string_types)): raise TypeError node = gast.parse(node) if isinstance(node, gast.Module): for succ in node.body: if isinstance(succ, gast.FunctionDef): name = succ.name break else: raise ValueError('no function found') elif isinstance(node, gast.FunctionDef): name = node.name else: raise TypeError module = compile_file(node, globals_) return getattr(module, name)
Convert an AST or string into a function with inspectable source. This function uses `compile_file` internally, but instead of returning the entire module it will return the function only. Args: node: A `FunctionDef` node or a `Module` node which contains at least one `FunctionDef` node. If a module contains multiple functions, a handle to the first one will be returned. globals_: See `compile_file` Returns: A handle to the compiled function. Raises: TypeError: If the input is not a string or AST. ValueError: If no function can be found.
codesearchnet
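`compile_function` relies on `gast` and a separate `compile_file` helper that is not shown. The same idea can be sketched with only the standard library's `ast` module (`compile_first_function` is an invented name for illustration, not the library's API):

```python
import ast

def compile_first_function(source, globals_=None):
    # Parse the source into a module AST and locate the first function.
    module_node = ast.parse(source)
    for node in module_node.body:
        if isinstance(node, ast.FunctionDef):
            name = node.name
            break
    else:
        raise ValueError('no function found')
    # Compile and execute the whole module in a fresh namespace,
    # then hand back just the function we found.
    namespace = dict(globals_ or {})
    exec(compile(module_node, filename='<ast>', mode='exec'), namespace)
    return namespace[name]

square = compile_first_function('def square(x):\n    return x * x')
```

As in the original, a module with several functions yields a handle to the first one defined.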
def get_effect(self, label: str) -> Effect: return self._get_resource(label, self._effects, "effect")
Get an effect instance by label Args: label (str): The label for the effect instance Returns: Effect class instance
juraj-google-style
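`get_effect` defers to a private `_get_resource` lookup whose body is not shown. A hypothetical sketch of that label-to-instance pattern (`EffectRegistry` and `ResourceNotFoundError` are invented names, not the library's API):

```python
class ResourceNotFoundError(KeyError):
    """Raised when no resource is registered under a label (hypothetical)."""

class EffectRegistry:
    def __init__(self):
        self._effects = {}

    def register(self, label, effect):
        self._effects[label] = effect

    def get_effect(self, label):
        # Mirror of _get_resource: look up by label, fail loudly if absent.
        try:
            return self._effects[label]
        except KeyError:
            raise ResourceNotFoundError(
                'no effect registered under label {!r}'.format(label))

registry = EffectRegistry()
registry.register('blur', object())
```

Raising a dedicated error type (rather than letting a bare `KeyError` escape) makes a missing label distinguishable from other dictionary failures in caller code.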
def _batch_prepare_for_model(self, batch_text_or_text_pairs, is_pair: Optional[bool]=None, xpaths: Optional[List[List[int]]]=None, node_labels: Optional[List[List[int]]]=None, add_special_tokens: bool=True, padding_strategy: PaddingStrategy=PaddingStrategy.DO_NOT_PAD, truncation_strategy: TruncationStrategy=TruncationStrategy.DO_NOT_TRUNCATE, max_length: Optional[int]=None, stride: int=0, pad_to_multiple_of: Optional[int]=None, padding_side: Optional[str]=None, return_tensors: Optional[str]=None, return_token_type_ids: Optional[bool]=None, return_attention_mask: Optional[bool]=None, return_overflowing_tokens: bool=False, return_special_tokens_mask: bool=False, return_length: bool=False, verbose: bool=True) -> BatchEncoding: batch_outputs = {} for idx, example in enumerate(zip(batch_text_or_text_pairs, xpaths)): batch_text_or_text_pair, xpaths_example = example outputs = self.prepare_for_model(batch_text_or_text_pair[0] if is_pair else batch_text_or_text_pair, batch_text_or_text_pair[1] if is_pair else None, xpaths_example, node_labels=node_labels[idx] if node_labels is not None else None, add_special_tokens=add_special_tokens, padding=PaddingStrategy.DO_NOT_PAD.value, truncation=truncation_strategy.value, max_length=max_length, stride=stride, pad_to_multiple_of=None, padding_side=None, return_attention_mask=False, return_token_type_ids=return_token_type_ids, return_overflowing_tokens=return_overflowing_tokens, return_special_tokens_mask=return_special_tokens_mask, return_length=return_length, return_tensors=None, prepend_batch_axis=False, verbose=verbose) for key, value in outputs.items(): if key not in batch_outputs: batch_outputs[key] = [] batch_outputs[key].append(value) batch_outputs = self.pad(batch_outputs, padding=padding_strategy.value, max_length=max_length, pad_to_multiple_of=pad_to_multiple_of, padding_side=padding_side, return_attention_mask=return_attention_mask) batch_outputs = BatchEncoding(batch_outputs, tensor_type=return_tensors) return batch_outputs
Prepares a sequence of input id, or a pair of sequences of inputs ids so that it can be used by the model. It adds special tokens, truncates sequences if overflowing while taking into account the special tokens and manages a moving window (with user defined stride) for overflowing tokens. Args: batch_ids_pairs: list of tokenized input ids or input ids pairs
github-repos
async def get_pushlog_info(decision_link): source_env_prefix = decision_link.context.config['source_env_prefix'] repo = get_repo(decision_link.task, source_env_prefix) rev = get_revision(decision_link.task, source_env_prefix) context = decision_link.context pushlog_url = context.config['pushlog_url'].format(repo=repo, revision=rev) log.info('Pushlog url {}'.format(pushlog_url)) file_path = os.path.join(context.config['work_dir'], '{}_push_log.json'.format(decision_link.name)) pushlog_info = (await load_json_or_yaml_from_url(context, pushlog_url, file_path, overwrite=False)) if (len(pushlog_info['pushes']) != 1): log.warning('Pushlog error: expected a single push at {} but got {}!'.format(pushlog_url, pushlog_info['pushes'])) return pushlog_info
Get pushlog info for a decision LinkOfTrust. Args: decision_link (LinkOfTrust): the decision link to get pushlog info about. Returns: dict: pushlog info.
codesearchnet
def wb020(self, value=None): if (value is not None): try: value = float(value) except ValueError: raise ValueError('value {} need to be of type float for field `wb020`'.format(value)) self._wb020 = value
Corresponds to IDD Field `wb020` Wet-bulb temperature corresponding to 02.0% annual cumulative frequency of occurrence Args: value (float): value for IDD Field `wb020` Unit: C if `value` is None it will not be checked against the specification and is assumed to be a missing value Raises: ValueError: if `value` is not a valid value
codesearchnet
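The `wb020` setter above follows a common validated-field pattern: coerce the value to `float`, raise on failure, and let `None` pass through untouched as "missing value". A standalone sketch of that pattern as a property (`DesignCondition` is an invented class name):

```python
class DesignCondition:
    """Sketch of the float-validated field pattern used by `wb020` above."""

    def __init__(self):
        self._wb020 = None

    @property
    def wb020(self):
        return self._wb020

    @wb020.setter
    def wb020(self, value):
        # None means "missing value" and skips validation entirely.
        if value is not None:
            try:
                value = float(value)
            except ValueError:
                raise ValueError(
                    'value {} need to be of type float'
                    ' for field `wb020`'.format(value))
        self._wb020 = value

cond = DesignCondition()
cond.wb020 = '21.5'  # strings that parse as floats are accepted and coerced
```

Because the setter calls `float()`, numeric strings are silently coerced; only values that cannot be parsed at all raise `ValueError`.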
def prune_graph(graph_str, package_name): g = read_dot(graph_str) nodes = set() for (node, attrs) in g.node_attr.iteritems(): attr = [x for x in attrs if (x[0] == 'label')] if attr: label = attr[0][1] try: req_str = _request_from_label(label) request = PackageRequest(req_str) except PackageRequestError: continue if (request.name == package_name): nodes.add(node) if (not nodes): raise ValueError(('The package %r does not appear in the graph.' % package_name)) g_rev = g.reverse() accessible_nodes = set() access = accessibility(g_rev) for node in nodes: nodes_ = access.get(node, []) accessible_nodes |= set(nodes_) inaccessible_nodes = (set(g.nodes()) - accessible_nodes) for node in inaccessible_nodes: g.del_node(node) return write_dot(g)
Prune a package graph so it only contains nodes accessible from the given package. Args: graph_str (str): Dot-language graph string. package_name (str): Name of package of interest. Returns: Pruned graph, as a string.
codesearchnet
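`prune_graph` depends on pygraph's `accessibility` helper and dot-language I/O, but the underlying idea — keep only nodes from which some target node is reachable — can be sketched with plain dicts and a breadth-first walk over the reversed graph:

```python
from collections import deque

def prune_to_reachable(edges, targets):
    """Keep only nodes from which some target node is reachable.

    `edges` maps node -> list of successor nodes; `targets` is the set of
    nodes of interest (the analogue of the package's nodes above).
    """
    # Build the reverse graph, then walk backwards from the targets:
    # a node can reach a target iff the target reaches it in reverse.
    reverse = {node: [] for node in edges}
    for node, succs in edges.items():
        for succ in succs:
            reverse.setdefault(succ, []).append(node)
    accessible = set()
    queue = deque(targets)
    while queue:
        node = queue.popleft()
        if node in accessible:
            continue
        accessible.add(node)
        queue.extend(reverse.get(node, []))
    # Drop every node (and edge endpoint) that cannot reach a target.
    return {node: [s for s in succs if s in accessible]
            for node, succs in edges.items() if node in accessible}

graph = {'a': ['b'], 'b': ['c'], 'c': [], 'x': ['y'], 'y': []}
pruned = prune_to_reachable(graph, {'c'})
```

Here `x` and `y` are dropped because no path from them reaches `c`, matching the original's removal of "inaccessible" nodes.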