async def find_movie(self, query):
    params = OrderedDict([('query', query), ('include_adult', False)])
    url = self.url_builder('search/movie', {}, params)
    data = await self.get_data(url)
    if data is None:
        return
    return [Movie.from_json(item, self.config['data'].get('images'))
            for item in data.get('results', [])]
Retrieve movie data by search query. Arguments: query (:py:class:`str`): Query to search for. Returns: :py:class:`list`: Possible matches.
codesearchnet
def _AnalyzeDataStream(self, mediator, file_entry, data_stream_name):
    display_name = mediator.GetDisplayName()
    logger.debug('[AnalyzeDataStream] analyzing file: {0:s}'.format(display_name))
    if self._processing_profiler:
        self._processing_profiler.StartTiming('analyzing')
    try:
        file_object = file_entry.GetFileObject(data_stream_name=data_stream_name)
        if not file_object:
            raise RuntimeError(
                'Unable to retrieve file-like object for file entry: {0:s}.'.format(
                    display_name))
        try:
            self._AnalyzeFileObject(mediator, file_object)
        finally:
            file_object.close()
    finally:
        if self._processing_profiler:
            self._processing_profiler.StopTiming('analyzing')
    logger.debug('[AnalyzeDataStream] completed analyzing file: {0:s}'.format(
        display_name))
Analyzes the contents of a specific data stream of a file entry. The results of the analyzers are set in the parser mediator as attributes that are added to produced event objects. Note that some file systems allow directories to have data streams, e.g. NTFS. Args: mediator (ParserMediator): mediates the interactions between parsers and other components, such as storage and abort signals. file_entry (dfvfs.FileEntry): file entry whose data stream is to be analyzed. data_stream_name (str): name of the data stream. Raises: RuntimeError: if the file-like object cannot be retrieved from the file entry.
codesearchnet
class AnyVote(LabelAggregation):
    def __init__(self, **kwargs):
        def inner(predictions: Iterable[int]) -> int:
            if any(p == self._outlier_label for p in predictions):
                return self._outlier_label
            return self._normal_label
        super().__init__(agg_func=inner, **kwargs)
Aggregates anomaly labels using an "any vote" (OR) scheme. This `AggregationFn` implements an "any vote" strategy. It aggregates anomaly labels such that the result is considered an outlier if at least one of the input `AnomalyPrediction` objects is labeled as an outlier. Example: If input labels are [normal, normal, outlier], and outlier_label=1, then the aggregated label will be outlier (1). If input labels are [normal, normal, normal], and outlier_label=1, then the aggregated label will be normal (0). Args: normal_label (int): The integer label for normal predictions. Defaults to 0. outlier_label (int): The integer label for outlier predictions. Defaults to 1. **kwargs: Additional keyword arguments to pass to the base `LabelAggregation` class.
github-repos
def all(self):
    return [email for email, action in self._collaborators.items()
            if action in [RoleValue.Owner, RoleValue.User, ShareRequestValue.Add]]
Get all collaborators. Returns: List[str]: Collaborators.
codesearchnet
def retrieve_reviewers(self, product):
    if not isinstance(product, self._product_cls):
        raise TypeError("Type of given product isn't acceptable:", product,
                        ', expected:', self._product_cls)
    return list(self.graph.predecessors(product))
Retrieve reviewers who reviewed a given product. Args: product: A product specifying reviewers. Returns: A list of reviewers who review the product. Raises: TypeError: when given product isn't instance of specified product class when this graph is constructed.
codesearchnet
def ParseSearchRow(self, parser_mediator, query, row, **unused_kwargs):
    query_hash = hash(query)
    event_data = TwitterAndroidSearchEventData()
    event_data.query = query
    event_data.name = self._GetRowValue(query_hash, row, 'name')
    event_data.search_query = self._GetRowValue(query_hash, row, 'query')
    timestamp = self._GetRowValue(query_hash, row, 'time')
    if timestamp:
        date_time = dfdatetime_java_time.JavaTime(timestamp=timestamp)
        event = time_events.DateTimeValuesEvent(
            date_time, definitions.TIME_DESCRIPTION_CREATION)
        parser_mediator.ProduceEventWithEventData(event, event_data)
Parses a search row from the database. Args: parser_mediator (ParserMediator): mediates interactions between parsers and other components, such as storage and dfvfs. query (str): query that created the row. row (sqlite3.Row): row resulting from query.
codesearchnet
def sia_bipartitions(nodes, node_labels=None):
    if config.CUT_ONE_APPROXIMATION:
        bipartitions = directed_bipartition_of_one(nodes)
    else:
        bipartitions = directed_bipartition(nodes, nontrivial=True)
    return [Cut(bipartition[0], bipartition[1], node_labels)
            for bipartition in bipartitions]
Return all |big_phi| cuts for the given nodes. This value changes based on :const:`config.CUT_ONE_APPROXIMATION`. Args: nodes (tuple[int]): The node indices to partition. Returns: list[Cut]: All unidirectional partitions.
juraj-google-style
def is_seq_of(seq, expected_type, seq_type=None):
    if seq_type is None:
        exp_seq_type = collections_abc.Sequence
    else:
        assert isinstance(seq_type, type)
        exp_seq_type = seq_type
    if not isinstance(seq, exp_seq_type):
        return False
    for item in seq:
        if not isinstance(item, expected_type):
            return False
    return True
Check whether it is a sequence of some type. Args: seq (Sequence): The sequence to be checked. expected_type (type): Expected type of sequence items. seq_type (type, optional): Expected sequence type. Returns: bool: Whether the sequence is valid.
juraj-google-style
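A quick self-contained check of the `is_seq_of` semantics above. This sketch restates the function using the public `collections.abc` module (the original's `collections_abc` is a project-internal import alias):

```python
from collections.abc import Sequence

def is_seq_of(seq, expected_type, seq_type=None):
    # Default to any abstract Sequence; otherwise require the exact container type.
    exp_seq_type = Sequence if seq_type is None else seq_type
    if not isinstance(seq, exp_seq_type):
        return False
    return all(isinstance(item, expected_type) for item in seq)

print(is_seq_of([1, 2, 3], int))              # True
print(is_seq_of((1, 'a'), int))               # False
print(is_seq_of((1, 2), int, seq_type=list))  # False: tuple is not a list
```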
def create(self, title, teamId=None, **request_parameters):
    check_type(title, basestring)
    check_type(teamId, basestring)
    post_data = dict_from_items_with_values(request_parameters,
                                            title=title, teamId=teamId)
    json_data = self._session.post(API_ENDPOINT, json=post_data)
    return self._object_factory(OBJECT_TYPE, json_data)
Create a room. The authenticated user is automatically added as a member of the room. Args: title(basestring): A user-friendly name for the room. teamId(basestring): The team ID with which this room is associated. **request_parameters: Additional request parameters (provides support for parameters that may be added in the future). Returns: Room: A Room with the details of the created room. Raises: TypeError: If the parameter types are incorrect. ApiError: If the Webex Teams cloud returns an error.
codesearchnet
def create(self, resource):
    uri = self.URI + self.RESOURCES_PATH
    return self._client.create(resource=resource, uri=uri)
Set all the labels for a resource. Args: resource: The object containing the resource URI and a list of labels Returns: dict: Resource Labels
codesearchnet
def unlock(self, password: str):
    if self.locked:
        self._privkey = decode_keyfile_json(self.keystore, password.encode('UTF-8'))
        self.locked = False
        self._fill_address()
Unlock the account with a password. If the account is already unlocked, nothing happens, even if the password is wrong. Raises: ValueError: (originating in ethereum.keys) if the password is wrong (and the account is locked)
codesearchnet
def set_buffer_options(self, options, bufnr=None):
    buf = self._vim.buffers[bufnr] if bufnr else self._vim.current.buffer
    filetype = options.pop('filetype', None)
    if filetype:
        self.set_filetype(filetype)
    for opt, value in options.items():
        buf.options[opt] = value
Set buffer-local options for a buffer, defaulting to current. Args: options (dict): Options to set, with keys being Vim option names. For Boolean options, use a :class:`bool` value as expected, e.g. ``{'buflisted': False}`` for ``setlocal nobuflisted``. bufnr (Optional[int]): A Vim buffer number, as you might get from VimL ``bufnr('%')`` or Python ``vim.current.buffer.number``. If ``None``, options are set on the current buffer.
codesearchnet
def to(self, *args, **kwargs) -> 'BatchFeature':
    requires_backends(self, ['torch'])
    import torch
    new_data = {}
    device = kwargs.get('device')
    if device is None and len(args) > 0:
        arg = args[0]
        if is_torch_dtype(arg):
            pass
        elif isinstance(arg, str) or is_torch_device(arg) or isinstance(arg, int):
            device = arg
        else:
            raise ValueError(
                f'Attempting to cast a BatchFeature to type {str(arg)}. '
                'This is not supported.')

    def _to(elem):
        if torch.is_floating_point(elem):
            return elem.to(*args, **kwargs)
        if device is not None:
            return elem.to(device=device)
        return elem

    for k, v in self.items():
        if isinstance(v, list) and isinstance(v[0], list):
            new_v = []
            for elems in v:
                new_v.append([_to(elem) for elem in elems])
            new_data[k] = new_v
        elif isinstance(v, list):
            new_data[k] = [_to(elem) for elem in v]
        else:
            new_data[k] = _to(v)
    self.data = new_data
    return self
Send all values to device by calling `v.to(*args, **kwargs)` (PyTorch only). This should support casting in different `dtypes` and sending the `BatchFeature` to a different `device`. Args: args (`Tuple`): Will be passed to the `to(...)` function of the tensors. kwargs (`Dict`, *optional*): Will be passed to the `to(...)` function of the tensors. Returns: [`BatchFeature`]: The same instance after modification.
github-repos
def _parse_stop_words_file(self, path):
    language = None
    loaded = False
    if os.path.isfile(path):
        self._logger.debug('Loading stop words in %s', path)
        language = path.split('-')[-1]
        if language not in self.__stop_words:
            self.__stop_words[language] = set()
        with codecs.open(path, 'r', 'UTF-8') as file:
            loaded = True
            for word in file:
                self.__stop_words[language].add(word.strip())
    return loaded
Load stop words from the given path. Parse the stop words file, saving each word found in it in a set for the language of the file. This language is obtained from the file name. If the file doesn't exist, the method will have no effect. Args: path: Path to the stop words file. Returns: A boolean indicating whether the file was loaded.
juraj-google-style
def GetCPIOArchiveFileEntryByPathSpec(self, path_spec):
    location = getattr(path_spec, 'location', None)
    if location is None:
        raise errors.PathSpecError('Path specification missing location.')
    if not location.startswith(self.LOCATION_ROOT):
        raise errors.PathSpecError('Invalid location in path specification.')
    if len(location) == 1:
        return None
    return self._cpio_archive_file.GetFileEntryByPath(location[1:])
Retrieves the CPIO archive file entry for a path specification. Args: path_spec (PathSpec): a path specification. Returns: CPIOArchiveFileEntry: CPIO archive file entry or None if not available. Raises: PathSpecError: if the path specification is incorrect.
codesearchnet
def __verify_ready(self, creating=False):
    if len(self._value_ranges) == 0:
        self._logger.log('crit', 'Attribute value_ranges must have at least one value')
        raise RuntimeWarning('Attribute value_ranges must have at least one value')
    if len(self._employers) == 0 and creating is False:
        self._logger.log('crit', 'Need to create employers')
        raise RuntimeWarning('Need to create employers')
Some cleanup, ensures that everything is set up properly to avoid random errors during execution Args: creating (bool): True if currently creating employer bees, False for checking all other operations
codesearchnet
def Add(self, entry):
    if not isinstance(entry, SshkeyMapEntry):
        raise TypeError
    return super(SshkeyMap, self).Add(entry)
Add a new object, verify it is a SshkeyMapEntry instance. Args: entry: A SshkeyMapEntry instance. Returns: True if added successfully, False otherwise. Raises: TypeError: The argument is of the wrong type.
github-repos
def Execute(self, message):
    self.message = message
    if message:
        self.require_fastpoll = message.require_fastpoll
    args = None
    try:
        if self.message.args_rdf_name:
            if not self.in_rdfvalue:
                raise RuntimeError('Did not expect arguments, got %s.' %
                                   self.message.args_rdf_name)
            if self.in_rdfvalue.__name__ != self.message.args_rdf_name:
                raise RuntimeError('Unexpected arg type %s != %s.' %
                                   (self.message.args_rdf_name,
                                    self.in_rdfvalue.__name__))
            args = self.message.payload
        if (self._authentication_required and
                self.message.auth_state !=
                rdf_flows.GrrMessage.AuthorizationState.AUTHENTICATED):
            raise RuntimeError('Message for %s was not Authenticated.' %
                               self.message.name)
        self.cpu_start = self.proc.cpu_times()
        self.cpu_limit = self.message.cpu_limit
        if getattr(flags.FLAGS, 'debug_client_actions', False):
            pdb.set_trace()
        try:
            self.Run(args)
        finally:
            used = self.proc.cpu_times()
            self.cpu_used = (used.user - self.cpu_start.user,
                             used.system - self.cpu_start.system)
    except NetworkBytesExceededError as e:
        self.SetStatus(rdf_flows.GrrStatus.ReturnedStatus.NETWORK_LIMIT_EXCEEDED,
                       '%r: %s' % (e, e), traceback.format_exc())
    except Exception as e:
        self.SetStatus(rdf_flows.GrrStatus.ReturnedStatus.GENERIC_ERROR,
                       '%r: %s' % (e, e), traceback.format_exc())
        if flags.FLAGS.pdb_post_mortem:
            self.DisableNanny()
            pdb.post_mortem()
    if self.status.status != rdf_flows.GrrStatus.ReturnedStatus.OK:
        logging.info('Job Error (%s): %s', self.__class__.__name__,
                     self.status.error_message)
        if self.status.backtrace:
            logging.debug(self.status.backtrace)
    if self.cpu_used:
        self.status.cpu_time_used.user_cpu_time = self.cpu_used[0]
        self.status.cpu_time_used.system_cpu_time = self.cpu_used[1]
    self.SendReply(self.status, message_type=rdf_flows.GrrMessage.Type.STATUS)
    self._RunGC()
This function parses the RDFValue from the server. The Run method will be called with the specified RDFValue. Args: message: The GrrMessage that we are called to process. Returns: Upon return a callback will be called on the server to register the end of the function and pass back exceptions. Raises: RuntimeError: The arguments from the server do not match the expected rdf type.
codesearchnet
def ms_bot_framework(self) -> list:
    ms_bf_controls = [control.ms_bot_framework() for control in self.controls]
    return ms_bf_controls
Returns list of MS Bot Framework compatible states of the RichMessage instance nested controls. Returns: ms_bf_controls: MS Bot Framework representation of RichMessage instance nested controls.
codesearchnet
def __init__(self, scope, parent):
    CodeControlFlow.__init__(self, scope, parent, "switch")
    self.cases = []
    self.default_case = None
Constructor for switches. Args: scope (CodeEntity): The program scope where this object belongs. parent (CodeEntity): This object's parent in the program tree.
juraj-google-style
def compute_output(self, o, output_shape=None):
    if self.combine_dims:
        o = mtf.transpose(o, (o.shape - self.o_dims) + self.o_dims)
        o = mtf.replace_dimensions(o, self.o_dims, self.wo.shape.dims[0])
        reduced_dims = [self.wo.shape.dims[0]]
    else:
        reduced_dims = self.o_dims
    return mtf.einsum([o, self.wo], output_shape=output_shape,
                      reduced_dims=reduced_dims)
Compute output of multihead attention. Args: o: a Tensor with dimensions query_heads_dims + {value_dim} + other_dims output_shape: an optional Shape Returns: a Tensor with shape: {output_dim} + other_dims
codesearchnet
def write8(self, offset, value):
    if not isinstance(offset, (int, long)):
        raise TypeError("Invalid offset type, should be integer.")
    if not isinstance(value, (int, long)):
        raise TypeError("Invalid value type, should be integer.")
    if value < 0 or value > 0xff:
        raise ValueError("Value out of bounds.")
    offset = self._adjust_offset(offset)
    self._validate_offset(offset, 1)
    self.mapping[offset:offset + 1] = struct.pack("B", value)
Write 8-bits to the specified `offset` in bytes, relative to the base physical address of the MMIO region. Args: offset (int, long): offset from base physical address, in bytes. value (int, long): 8-bit value to write. Raises: TypeError: if `offset` or `value` type are invalid. ValueError: if `offset` or `value` are out of bounds.
juraj-google-style
def dict_filter_nones(dict_):
    dict2_ = {key: val for key, val in six.iteritems(dict_) if val is not None}
    return dict2_
r"""
Removes None values

Args:
    dict_ (dict): a dictionary

Returns:
    dict:

CommandLine:
    python -m utool.util_dict --exec-dict_filter_nones

Example:
    >>> # DISABLE_DOCTEST
    >>> # UNSTABLE_DOCTEST
    >>> # fails on python 3 because of dict None order
    >>> from utool.util_dict import *  # NOQA
    >>> import utool as ut
    >>> dict_ = {1: None, 2: 'blue', 3: 'four', None: 'fun'}
    >>> dict2_ = dict_filter_nones(dict_)
    >>> result = ut.repr4(dict2_, nl=False)
    >>> print(result)
    {None: 'fun', 2: 'blue', 3: 'four'}
codesearchnet
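The `six.iteritems` call above is a Python 2/3 compatibility shim; on Python 3 alone the same filter can be sketched directly. Note that only `None` is dropped, so falsy values such as `0` and `''` survive:

```python
def dict_filter_nones(dict_):
    # Keep entries whose value is not None; falsy values like 0 and '' survive.
    return {key: val for key, val in dict_.items() if val is not None}

print(dict_filter_nones({1: None, 2: 'blue', None: 'fun'}))  # {2: 'blue', None: 'fun'}
print(dict_filter_nones({'a': 0, 'b': None}))                # {'a': 0}
```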
def locked_put(self, credentials):
    filters = {self.key_name: self.key_value}
    query = self.session.query(self.model_class).filter_by(**filters)
    entity = query.first()
    if not entity:
        entity = self.model_class(**filters)
    setattr(entity, self.property_name, credentials)
    self.session.add(entity)
Write a credentials to the SQLAlchemy datastore. Args: credentials: :class:`oauth2client.Credentials`
juraj-google-style
def get_service_list(self) -> list:
    services = []
    if not self._manager:
        raise RuntimeError('Only the Swarm manager node can retrieve all the services.')
    service_list = self._client.services.list()
    for s_list in service_list:
        services.append(s_list.short_id)
    return services
Get a list of docker services. Only the manager nodes can retrieve all the services Returns: list, all the ids of the services in swarm
codesearchnet
def __init__(self, sess_creator):
    self._sess_creator = sess_creator
    _WrappedSession.__init__(self, self._create_session())
Create a new `_RecoverableSession`. The value returned by calling `sess_creator.create_session()` will be the session wrapped by this recoverable session. Args: sess_creator: A 'SessionCreator' to be wrapped by recoverable.
github-repos
def scalars_impl(self, run, tag_regex_string):
    if not tag_regex_string:
        return {
            _REGEX_VALID_PROPERTY: False,
            _TAG_TO_EVENTS_PROPERTY: {},
        }
    try:
        regex = re.compile(tag_regex_string)
    except re.error:
        return {
            _REGEX_VALID_PROPERTY: False,
            _TAG_TO_EVENTS_PROPERTY: {},
        }
    run_to_data = self._multiplexer.PluginRunToTagToContent(
        scalars_metadata.PLUGIN_NAME)
    tag_to_data = None
    try:
        tag_to_data = run_to_data[run]
    except KeyError:
        payload = {}
    if tag_to_data:
        scalars_plugin_instance = self._get_scalars_plugin()
        if not scalars_plugin_instance:
            raise ValueError('Failed to respond to request for /scalars. '
                             'The scalars plugin is oddly not registered.')
        form = scalars_plugin.OutputFormat.JSON
        payload = {
            tag: scalars_plugin_instance.scalars_impl(tag, run, None, form)[0]
            for tag in tag_to_data.keys()
            if regex.match(tag)
        }
    return {
        _REGEX_VALID_PROPERTY: True,
        _TAG_TO_EVENTS_PROPERTY: payload,
    }
Given a tag regex and single run, return ScalarEvents. Args: run: A run string. tag_regex_string: A regular expression that captures portions of tags. Raises: ValueError: if the scalars plugin is not registered. Returns: A dictionary that is the JSON-able response.
juraj-google-style
def testHeatEquationWithVariousSchemes(self, one_step_fn, time_step):
    def final_cond_fn(x):
        return math.e * math.sin(x)

    def expected_result_fn(x):
        return tf.sin(x)

    @dirichlet
    def lower_boundary_fn(t, x):
        del x
        return -tf.math.exp(t)

    @dirichlet
    def upper_boundary_fn(t, x):
        del x
        return tf.math.exp(t)

    grid = grids.uniform_grid(minimums=[-10.5 * math.pi],
                              maximums=[10.5 * math.pi],
                              sizes=[1000],
                              dtype=np.float32)
    self._testHeatEquation(grid=grid,
                           final_t=1,
                           time_step=time_step,
                           final_cond_fn=final_cond_fn,
                           expected_result_fn=expected_result_fn,
                           one_step_fn=one_step_fn,
                           lower_boundary_fn=lower_boundary_fn,
                           upper_boundary_fn=upper_boundary_fn,
                           error_tolerance=0.001)
Test solving heat equation with various time marching schemes. Tests solving heat equation with the boundary conditions `u(x, t=1) = e * sin(x)`, `u(-2 pi n - pi / 2, t) = -e^t`, and `u(2 pi n + pi / 2, t) = -e^t` with some integer `n` for `u(x, t=0)`. The exact solution is `u(x, t=0) = sin(x)`. All time marching schemes should yield reasonable results given small enough time steps. First-order accurate schemes (explicit, implicit, weighted with theta != 0.5) require smaller time step than second-order accurate ones (Crank-Nicolson, Extrapolation). Args: one_step_fn: one_step_fn representing a time marching scheme to use. time_step: time step for given scheme.
github-repos
def build_tensor_serving_input_receiver_fn(shape, dtype=tf.float32, batch_size=1):
    def serving_input_receiver_fn():
        features = tf.placeholder(dtype=dtype, shape=[batch_size] + shape,
                                  name='input_tensor')
        return tf.estimator.export.TensorServingInputReceiver(
            features=features, receiver_tensors=features)
    return serving_input_receiver_fn
Returns a input_receiver_fn that can be used during serving. This expects examples to come through as float tensors, and simply wraps them as TensorServingInputReceivers. Arguably, this should live in tf.estimator.export. Testing here first. Args: shape: list representing target size of a single example. dtype: the expected datatype for the input example batch_size: number of input tensors that will be passed for prediction Returns: A function that itself returns a TensorServingInputReceiver.
codesearchnet
def _run_calibration(
    saved_model_path: str,
    signature_keys: Sequence[str],
    tags: Collection[str],
    force_graph_mode_calibration: bool,
    representative_dataset_file_map: Mapping[
        str, quantization_options_pb2.RepresentativeDatasetFile],
) -> bool:
    repr_dataset_map = rd.TfRecordRepresentativeDatasetLoader(
        representative_dataset_file_map).load()
    _run_graph_for_calibration(saved_model_path, signature_keys, tags,
                               repr_dataset_map, force_graph_mode_calibration)
    return True
Runs calibration and adds calibration statistics to exported model. Args: saved_model_path: Path to the SavedModel to run calibration. signature_keys: List of signature keys corresponding to SignatureDefs to run calibration on. tags: A set of tags that identify the MetaGraphDef. force_graph_mode_calibration: If True, runs the calibration in graph mode. representative_dataset_file_map: Signature key -> `RepresentativeDatasetFile` mapping for running the calibration step. Each dataset file stores the representative dataset for the function matching the signature key. Returns: `True` upon successfully running calibration.
github-repos
def _least_upper_bound(*nodes):
    N = set(nodes)
    UB = LATTICE_UPPER_BOUNDS
    try:
        bounds = [UB[n] for n in N]
    except KeyError:
        dtype = next(n for n in N if n not in UB)
        raise ValueError(
            f'dtype={dtype!r} is not a valid dtype for Keras type promotion.')
    CUB = set.intersection(*bounds)
    LUB = CUB & N or {c for c in CUB if CUB.issubset(UB[c])}
    if len(LUB) == 1:
        return LUB.pop()
    elif len(LUB) == 0:
        msg = (f'Input dtypes {tuple(str(n) for n in nodes)} have no available '
               'implicit dtype promotion path. Try explicitly casting inputs to '
               'the desired output type.')
        raise ValueError(msg)
    else:
        raise ValueError(
            f'Internal Type Promotion error: {nodes} do not have a unique least '
            f'upper bound on the specified lattice; options are {LUB}. This is '
            "an unexpected error in Keras's internal logic; please report it to "
            'the maintainers.')
Compute the least upper bound of a set of nodes. Args: nodes: sequence of entries from dtypes + weak_types Returns: The type representing the least upper bound of the input nodes on the promotion lattice.
github-repos
async def call(self, methname, *args, **kwargs):
    todo = (methname, args, kwargs)
    return await self.task(todo)
Call a remote method by name. Args: methname (str): The name of the remote method. *args: Arguments to the method call. **kwargs: Keyword arguments to the method call. Most use cases will likely use the proxy methods directly: The following two are effectively the same: valu = proxy.getFooBar(x, y) valu = proxy.call('getFooBar', x, y)
juraj-google-style
def estimate_mutual_information(x, y):
    xy = np.concatenate((x, y), axis=1)
    epsilon = _calculate_epsilon(xy)
    h_x = estimate_entropy(x, epsilon)
    h_y = estimate_entropy(y, epsilon)
    h_xy = estimate_entropy(xy, epsilon)
    return max(0, (h_x + h_y) - h_xy)
Estimate the mutual information of two datasets. Mutual information is a measure of dependence between two datasets and is calculated as: $I(x;y) = H(x) + H(y) - H(x,y)$ Where H(x) is the Shannon entropy of x. For continuous datasets, adapts the Kraskov Estimator [1] for mutual information. Args: x (array-like): An array with shape (n_samples, n_features_x) y (array-like): An array with shape (n_samples, n_features_y) Returns: float: A floating point number representing the mutual information of x and y. This calculation is *exact* for entirely discrete datasets and *approximate* if there are continuous columns present. References: .. [1] A. Kraskov, H. Stogbauer and P. Grassberger, "Estimating mutual information". Phys. Rev. E 69, 2004.
codesearchnet
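The identity I(x;y) = H(x) + H(y) - H(x,y) from the docstring above can be checked on purely discrete data, where the calculation is exact. This sketch uses plain Shannon entropy over lists of samples rather than the original's Kraskov-style estimator, so the helper names (`entropy`, `mutual_information`) and list-based inputs are illustrative only:

```python
import math
from collections import Counter

def entropy(samples):
    # Shannon entropy (natural log) of the empirical distribution of `samples`.
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

def mutual_information(x, y):
    # I(x; y) = H(x) + H(y) - H(x, y), clamped at zero like the original.
    return max(0.0, entropy(x) + entropy(y) - entropy(list(zip(x, y))))

# Perfectly dependent variables: I(x;y) = H(x) = ln 2.
print(mutual_information([0, 0, 1, 1], [0, 0, 1, 1]))  # 0.6931...
# Independent variables: I(x;y) = 0.
print(mutual_information([0, 0, 1, 1], [0, 1, 0, 1]))  # 0.0
```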
def disable_switchport(self, inter_type, inter):
    config = ET.Element('config')
    interface = ET.SubElement(config, 'interface',
                              xmlns='urn:brocade.com:mgmt:brocade-interface')
    int_type = ET.SubElement(interface, inter_type)
    name = ET.SubElement(int_type, 'name')
    name.text = inter
    ET.SubElement(int_type, 'switchport-basic', operation='delete')
    try:
        self._callback(config)
        return True
    except Exception as error:
        logging.error(error)
        return False
Change an interface's operation to L3. Args: inter_type: The type of interface you want to configure. Ex. tengigabitethernet, gigabitethernet, fortygigabitethernet. inter: The ID for the interface you want to configure. Ex. 1/0/1 Returns: True if command completes successfully or False if not. Raises: None
codesearchnet
def recv(self, socket_, encoding=None):
    unpacker = msgpack.Unpacker(encoding=encoding)
    response = socket_.recv(8)
    if response == b"":
        raise TensorForceError("No data received by socket.recv in call to method "
                               "`recv` (listener possibly closed)!")
    orig_len = int(response)
    received_len = 0
    while True:
        data = socket_.recv(min(orig_len - received_len, self.max_msg_len))
        if not data:
            raise TensorForceError(
                "No data of len {} received by socket.recv in call to method "
                "`recv`!".format(orig_len - received_len))
        data_len = len(data)
        received_len += data_len
        unpacker.feed(data)
        if received_len == orig_len:
            break
    for message in unpacker:
        sts = message.get("status", message.get(b"status"))
        if sts:
            if sts == "ok" or sts == b"ok":
                return message
            else:
                raise TensorForceError("RemoteEnvironment server error: {}".format(
                    message.get("message", "not specified")))
        else:
            raise TensorForceError("Message without field 'status' received!")
    raise TensorForceError(
        "No message encoded in data stream (data stream had len={})".format(orig_len))
Receives a message as msgpack-numpy encoded byte-string from the given socket object. Blocks until something was received. Args: socket_: The python socket object to use. encoding (str): The encoding to use for unpacking messages from the socket. Returns: The decoded (as dict) message received.
juraj-google-style
def safe_join(directory: FilePath, *paths: FilePath) -> Path:
    try:
        safe_path = file_path_to_path(directory).resolve(strict=True)
        full_path = file_path_to_path(directory, *paths).resolve(strict=True)
    except FileNotFoundError:
        raise NotFound()
    try:
        full_path.relative_to(safe_path)
    except ValueError:
        raise NotFound()
    return full_path
Safely join the paths to the known directory to return a full path. Raises: NotFound: if the full path does not share a commonprefix with the directory.
codesearchnet
def list_projects(self, dataset_name):
    url = (self.url() + "/nd/resource/dataset/{}".format(dataset_name)
           + "/project/")
    req = self.remote_utils.get_url(url)
    if req.status_code != 200:  # was `is not 200`: identity test, not equality
        raise RemoteDataNotFoundError('Could not find {}'.format(req.text))
    else:
        return req.json()
Lists a set of projects related to a dataset. Arguments: dataset_name (str): Dataset name to search projects for Returns: dict: Projects found based on dataset query
juraj-google-style
def slope(self, other):
    X1, Y1, X2, Y2 = self.X, self.Y, other.X, other.Y
    Y3 = Y1 - Y2
    X3 = X1 - X2
    return (Y3 * self.inverse(X3)) % self.P
Determines the slope between this point and another point. Args: other (AffinePoint): The second point. Returns: int: Slope between self and other.
codesearchnet
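The slope computed above is the usual secant slope (y1 - y2) / (x1 - x2), taken in modular arithmetic over the curve's prime field. A standalone sketch, with `pow(x, -1, prime)` (Python 3.8+) standing in for the class's `inverse` helper and tuple arguments replacing `AffinePoint` objects:

```python
def modular_slope(p1, p2, prime):
    # Slope of the line through two distinct affine points, mod a prime.
    (x1, y1), (x2, y2) = p1, p2
    # pow(x, -1, prime) is the modular multiplicative inverse of x mod prime.
    return ((y1 - y2) * pow(x1 - x2, -1, prime)) % prime

print(modular_slope((5, 1), (6, 3), 17))  # 2
```

Swapping the two points negates both numerator and denominator, so the slope is unchanged, just as with real-valued lines.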
def MergeDataSets(self):
    rules = set()
    for schedule, merge_map, zone_map in (
            [self.feed_merger.a_schedule, self.feed_merger.a_merge_map,
             self.feed_merger.a_zone_map],
            [self.feed_merger.b_schedule, self.feed_merger.b_merge_map,
             self.feed_merger.b_zone_map]):
        for fare in schedule.GetFareAttributeList():
            for fare_rule in fare.GetFareRuleList():
                fare_id = merge_map[
                    schedule.GetFareAttribute(fare_rule.fare_id)].fare_id
                route_id = (fare_rule.route_id and
                            merge_map[schedule.GetRoute(fare_rule.route_id)].route_id)
                origin_id = fare_rule.origin_id and zone_map[fare_rule.origin_id]
                destination_id = (fare_rule.destination_id and
                                  zone_map[fare_rule.destination_id])
                contains_id = fare_rule.contains_id and zone_map[fare_rule.contains_id]
                rules.add((fare_id, route_id, origin_id, destination_id, contains_id))
    for fare_rule_tuple in rules:
        migrated_fare_rule = transitfeed.FareRule(*fare_rule_tuple)
        self.feed_merger.merged_schedule.AddFareRuleObject(migrated_fare_rule)
    if rules:
        self.feed_merger.problem_reporter.FareRulesBroken(self)
    print('Fare Rules: union has %d fare rules' % len(rules))
    return True
Merge the fare rule datasets. The fare rules are first migrated. Merging is done by removing any duplicate rules. Returns: True since fare rules can always be merged.
codesearchnet
def __init__(self, weight_shape: Sequence[int], bias_size: Optional[int] = None,
             activation_fn: Optional[ops.Operation] = None,
             use_biasadd: bool = True) -> None:
    self.bias_size = bias_size
    self.activation_fn = activation_fn
    self.use_biasadd = use_biasadd
    self.filters = np.random.uniform(low=-1.0, high=1.0, size=weight_shape)
    if bias_size is not None:
        self.bias = np.random.uniform(low=-1.0, high=1.0, size=bias_size)
Initializes a MatmulModel. Args: weight_shape: Shape of the weight tensor. bias_size: If None, do not use bias. Else, use given size as bias. activation_fn: The activation function to be used. No activation function if None. use_biasadd: If True, use BiasAdd for adding bias, else use AddV2.
github-repos
def send_request(self, request, correlation_id=None):
    log.debug('Sending request %s', request)
    if correlation_id is None:
        correlation_id = self._next_correlation_id()
    header = RequestHeader(request, correlation_id=correlation_id,
                           client_id=self._client_id)
    message = b''.join([header.encode(), request.encode()])
    size = Int32.encode(len(message))
    data = size + message
    self.bytes_to_send.append(data)
    if request.expect_response():
        ifr = (correlation_id, request)
        self.in_flight_requests.append(ifr)
    return correlation_id
Encode and queue a kafka api request for sending. Arguments: request (object): An un-encoded kafka request. correlation_id (int, optional): Optionally specify an ID to correlate requests with responses. If not provided, an ID will be generated automatically. Returns: correlation_id
juraj-google-style
def transformer_text_encoder(inputs, target_space, hparams, name=None):
    with tf.variable_scope(name, default_name='transformer_text_encoder'):
        inputs = common_layers.flatten4d3d(inputs)
        [encoder_input, encoder_self_attention_bias, ed] = (
            transformer_layers.transformer_prepare_encoder(
                inputs, target_space=target_space, hparams=hparams))
        encoder_input = tf.nn.dropout(encoder_input, 1.0 - hparams.dropout)
        encoder_output = transformer_layers.transformer_encoder(
            encoder_input, encoder_self_attention_bias, hparams)
        return encoder_output, ed
Transformer text encoder over inputs with unmasked full attention. Args: inputs: Tensor of shape [batch, length, 1, hparams.hidden_size]. target_space: int. Used for encoding inputs under a target space id. hparams: HParams. name: string, variable scope. Returns: encoder_output: Tensor of shape [batch, length, hparams.hidden_size]. ed: Tensor of shape [batch, 1, 1, length]. Encoder-decoder attention bias for any padded tokens.
codesearchnet
def resample(self, seed=None):
    if seed is not None:
        gen = torch.manual_seed(seed)
    else:
        gen = torch.default_generator
    if self.replacement:
        self.perm = torch.LongTensor(len(self)).random_(
            len(self.dataset), generator=gen)
    else:
        self.perm = torch.randperm(
            len(self.dataset), generator=gen).narrow(0, 0, len(self))
Resample the dataset. Args: seed (int, optional): Seed for resampling. By default no seed is used.
codesearchnet
def UpdateMapping(self, filename, mapping_update): if (filename not in self._file_mapping): raise problems.NonexistentMapping(filename) mapping = self._file_mapping[filename] mapping.update(mapping_update)
Updates an entry in the list of known filenames.

An entry is identified by its filename.

Args:
  filename: The filename whose mapping is to be updated
  mapping_update: A dictionary containing the fields to update and their new
                  values.

Raises:
  NonexistentMapping: if the filename does not exist in the mapping
codesearchnet
def pop_stack(stack, op_id): if __debug__: pushed_stack, pushed_op_id = stack.pop() assert pushed_op_id == op_id, 'Wanted %s, got %s' % (op_id, pushed_op_id) else: pushed_stack = stack.pop() return pushed_stack
Proxy of pop, where we know we're popping a stack off of a stack. We know that we don't need to differentiate through this. See pop() for more. Args: stack: The stack to pop from. op_id: A unique variable that is also passed into the matching push. Allows optimization passes to track pairs of pushes and pops. Returns: The last value.
juraj-google-style
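A standalone copy of `pop_stack` showing the push/pop pairing it expects; the `'op_7'` op id and the `tape` list are made up for the demonstration:

```python
def pop_stack(stack, op_id):
    # Same logic as the snippet above: in debug builds, the op_id recorded
    # by the matching push must agree with the op_id passed to this pop.
    if __debug__:
        pushed_stack, pushed_op_id = stack.pop()
        assert pushed_op_id == op_id, 'Wanted %s, got %s' % (op_id, pushed_op_id)
    else:
        pushed_stack = stack.pop()
    return pushed_stack

tape = []
tape.append(([1, 2, 3], 'op_7'))  # a matching push records (stack, op_id)
print(pop_stack(tape, 'op_7'))    # [1, 2, 3]
```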
def update_datastore(self, schema=None, primary_key=None, path=None): self.create_datastore(schema, primary_key, 2, path=path)
For tabular data, update a resource in the HDX datastore which enables data preview in HDX. If no schema is provided all fields are assumed to be text. If path is not supplied, the file is first downloaded from HDX. Args: schema (List[Dict]): List of fields and types of form {'id': 'FIELD', 'type': 'TYPE'}. Defaults to None. primary_key (Optional[str]): Primary key of schema. Defaults to None. path (Optional[str]): Local path to file that was uploaded. Defaults to None. Returns: None
codesearchnet
def sub_map(self, counters_map): for counter_name in counters_map.counters: self.increment(counter_name, -counters_map.counters[counter_name])
Subtracts all counters from the map.

For each counter in the passed map, subtracts its value from the counter
in this map.

Args:
  counters_map: CounterMap instance to subtract.
juraj-google-style
def _DecodeURL(self, url): if (not url): return '' decoded_url = urlparse.unquote(url) if isinstance(decoded_url, py2to3.BYTES_TYPE): try: decoded_url = decoded_url.decode('utf-8') except UnicodeDecodeError as exception: decoded_url = decoded_url.decode('utf-8', errors='replace') logger.warning('Unable to decode URL: {0:s} with error: {1!s}'.format(url, exception)) return decoded_url
Decodes the URL, replaces %XX to their corresponding characters. Args: url (str): encoded URL. Returns: str: decoded URL.
codesearchnet
def children_rest_names(self): names = [] for fetcher in self.fetchers: names.append(fetcher.__class__.managed_object_rest_name()) return names
Gets the list of all possible children ReST names. Returns: list: list containing all possible rest names as string Example: >>> entity = NUEntity() >>> entity.children_rest_names ["foo", "bar"]
codesearchnet
def diff(self, sym: Symbol, n: int=1, expand_simplify: bool=True): if (not isinstance(sym, sympy.Basic)): raise TypeError(('%s needs to be a Sympy symbol' % sym)) if sym.free_symbols.issubset(self.free_symbols): deriv = QuantumDerivative.create(self, derivs={sym: n}, vals=None) if ((not deriv.is_zero) and expand_simplify): deriv = deriv.expand().simplify_scalar() return deriv else: return self.__class__._zero
Differentiate by scalar parameter `sym`. Args: sym: What to differentiate by. n: How often to differentiate expand_simplify: Whether to simplify the result. Returns: The n-th derivative.
codesearchnet
def anonymous_login(services): if isinstance(services, str): services = [services] clients = {} for serv in services: try: clients[serv] = KNOWN_CLIENTS[serv](http_timeout=STD_TIMEOUT) except KeyError: print("Error: No known client for '{}' service.".format(serv)) except Exception: print("Error: Unable to create client for '{}' service.\n" "Anonymous access may not be allowed.".format(serv)) return clients
Initialize services without authenticating to Globus Auth. Note: Clients may have reduced functionality without authentication. Arguments: services (str or list of str): The services to initialize clients for. Returns: dict: The clients requested, indexed by service name.
juraj-google-style
def __best_intent(self, parse_result, context=[]): best_intent = None best_tags = None context_as_entities = [{'entities': [c]} for c in context] for intent in self.intent_parsers: i, tags = intent.validate_with_tags(parse_result.get('tags') + context_as_entities, parse_result.get('confidence')) if not best_intent or (i and i.get('confidence') > best_intent.get('confidence')): best_intent = i best_tags = tags return best_intent, best_tags
Decide the best intent Args: parse_result(list): results used to match the best intent. context(list): ? Returns: best_intent, best_tags: best_intent : The best intent for given results best_tags : The Tags for result
juraj-google-style
def get_rollout_from_id(self, rollout_id): layer = self.rollout_id_map.get(rollout_id) if layer: return layer self.logger.error('Rollout with ID "%s" is not in datafile.' % rollout_id) return None
Get rollout for the provided ID. Args: rollout_id: ID of the rollout to be fetched. Returns: Rollout corresponding to the provided ID.
juraj-google-style
def _get_function_id(self): if self.is_for_driver_task: return ray.FunctionID.nil() function_id_hash = hashlib.sha1() function_id_hash.update(self.module_name.encode('ascii')) function_id_hash.update(self.function_name.encode('ascii')) function_id_hash.update(self.class_name.encode('ascii')) function_id_hash.update(self._function_source_hash) function_id = function_id_hash.digest() return ray.FunctionID(function_id)
Calculate the function id of current function descriptor. This function id is calculated from all the fields of function descriptor. Returns: ray.ObjectID to represent the function descriptor.
codesearchnet
def get_de_novos_in_transcript(transcript, de_novos): in_transcript = [] for de_novo in de_novos: site = transcript.get_coding_distance(de_novo) cds_length = transcript.get_coding_distance(transcript.get_cds_end()) within_cds = site['pos'] >= 0 and site['pos'] < cds_length['pos'] if within_cds and (transcript.in_coding_region(de_novo) or abs(site['offset']) < 9): in_transcript.append(de_novo) return in_transcript
get the de novos within the coding sequence of a transcript Args: transcript: Transcript object, which defines the transcript coordinates de_novos: list of chromosome sequence positions for de novo events Returns: list of de novo positions found within the transcript
juraj-google-style
def predict(data, training_dir=None, model_name=None, model_version=None, cloud=False): if cloud: if ((not model_version) or (not model_name)): raise ValueError('model_version or model_name is not set') if training_dir: raise ValueError('training_dir not needed when cloud is True') with warnings.catch_warnings(): warnings.simplefilter('ignore') return cloud_predict(model_name, model_version, data) else: if (not training_dir): raise ValueError('training_dir is not set') if (model_version or model_name): raise ValueError('model_name and model_version not needed when cloud is False.') with warnings.catch_warnings(): warnings.simplefilter('ignore') return local_predict(training_dir, data)
Runs prediction locally or on the cloud.

Args:
  data: List of csv strings or a Pandas DataFrame that match the model schema.
  training_dir: local path to the trained output folder.
  model_name: deployed model name
  model_version: deployed model version
  cloud: bool. If False, does local prediction and data and training_dir
      must be set. If True, does cloud prediction and data, model_name,
      and model_version must be set.

For cloud prediction, the model must be created. This can be done by running
two gcloud commands::

  1) gcloud beta ml models create NAME
  2) gcloud beta ml versions create VERSION --model NAME --origin gs://BUCKET/training_dir/model

or these datalab commands:

  1) import google.datalab as datalab
     model = datalab.ml.ModelVersions(MODEL_NAME)
     model.deploy(version_name=VERSION, path='gs://BUCKET/training_dir/model')

Note that the model must be on GCS.

Returns:
  Pandas DataFrame.
codesearchnet
def json_to_data(fn=None, return_json=True): def json_to_data_decorator(fn): @handle_type_error @wraps(fn) def get_data_wrapper(*args, **kwargs): kwargs["data"] = decode_json_body() if not return_json: return fn(*args, **kwargs) return encode_json_body( fn(*args, **kwargs) ) return get_data_wrapper if fn: return json_to_data_decorator(fn) return json_to_data_decorator
Decode JSON from the request and add it as ``data`` parameter for wrapped function. Args: return_json (bool, default True): Should the decorator automatically convert returned value to JSON?
juraj-google-style
def __init__(self, decode_module, encode_module, methodName='runTest'): super(DescriptorSourceTestBase, self).__init__(methodName) self._decode_module = decode_module self._encode_module = encode_module
DescriptorSourceTestBase initializer. Args: decode_module: a module containing the `decode_proto_op` method encode_module: a module containing the `encode_proto_op` method methodName: the name of the test method (same as for test.TestCase)
github-repos
def load(archive_file: fhir_package.PackageSource) -> fhir_package.FhirPackage[structure_definition_pb2.StructureDefinition, search_parameter_pb2.SearchParameter, code_system_pb2.CodeSystem, value_set_pb2.ValueSet]: return fhir_package.FhirPackage.load(archive_file, _PRIMITIVE_HANDLER, structure_definition_pb2.StructureDefinition, search_parameter_pb2.SearchParameter, code_system_pb2.CodeSystem, value_set_pb2.ValueSet)
Instantiates and returns a new `FhirPackage` for FHIR R4. Args: archive_file: The zip or tar file path or a function returning a file-like containing resources represented by this collection. Returns: An instance of `FhirPackage`. Raises: ValueError: In the event that the file or contents are invalid.
github-repos
def penalize_boundary_complexity(shp, w=20, mask=None, C=0.5):

    def inner(T):
        arr = T('input')
        if (mask is None):
            mask_ = np.ones(shp)
            mask_[:, w:(- w), w:(- w)] = 0
        else:
            mask_ = mask
        blur = _tf_blur(arr, w=5)
        diffs = ((blur - arr) ** 2)
        diffs += (0.8 * ((arr - C) ** 2))
        return (- tf.reduce_sum((diffs * mask_)))
    return inner
Encourage the boundaries of an image to have less variation and to be
close to color C.

Args:
  shp: shape of T("input") because this may not be known.
  w: width of boundary to penalize. Ignored if mask is set.
  mask: mask describing what area should be penalized.

Returns:
  Objective.
codesearchnet
def get_i_name(self, num, is_oai=None): if num not in (1, 2): raise ValueError("`num` parameter have to be 1 or 2!") if is_oai is None: is_oai = self.oai_marc i_name = "ind" if not is_oai else "i" return i_name + str(num)
This method is used mainly internally, but it can be handy if you work
with a raw MARC XML object and are not using getters.

Args:
    num (int): Which indicator you need (1/2).
    is_oai (bool/None): If None, :attr:`.oai_marc` is used.

Returns:
    str: current name of ``i1``/``ind1`` parameter based on \
         :attr:`oai_marc` property.
juraj-google-style
def _CreateComplexTypeFromData(self, elem_type, type_is_override, data, set_type_attrs): elem_arguments = dict(elem_type.elements) instantiated_arguments = {k: self._PackArgumentsHelper(elem_arguments[k], v, set_type_attrs) for (k, v) in data if (k != 'xsi_type')} if set_type_attrs: found_type_attr = next((e_name for (e_name, _) in elem_type.elements if e_name.endswith('.Type')), None) if (found_type_attr and type_is_override): instantiated_arguments[found_type_attr] = elem_type.qname.localname return elem_type(**instantiated_arguments)
Initialize a SOAP element with specific data.

Args:
  elem_type: The type of the element to create.
  type_is_override: A boolean specifying if the type is being overridden.
  data: The data to hydrate the type with.
  set_type_attrs: A boolean indicating whether or not attributes that end
    in .Type should be set. This is only necessary for batch job service.

Returns:
  A fully initialized SOAP element.
codesearchnet
def get_all_for_resource(identifier, configuration=None): resourceview = ResourceView(configuration=configuration) success, result = resourceview._read_from_hdx('resource view', identifier, 'id', ResourceView.actions()['list']) resourceviews = list() if success: for resourceviewdict in result: resourceview = ResourceView(resourceviewdict, configuration=configuration) resourceviews.append(resourceview) return resourceviews
Read all resource views for a resource given by identifier from HDX and returns list of ResourceView objects Args: identifier (str): Identifier of resource configuration (Optional[Configuration]): HDX configuration. Defaults to global configuration. Returns: List[ResourceView]: List of ResourceView objects
juraj-google-style
def WriteScanNode(self, scan_context, scan_node, indentation=''): if (not scan_node): return values = [] part_index = getattr(scan_node.path_spec, 'part_index', None) if (part_index is not None): values.append('{0:d}'.format(part_index)) store_index = getattr(scan_node.path_spec, 'store_index', None) if (store_index is not None): values.append('{0:d}'.format(store_index)) start_offset = getattr(scan_node.path_spec, 'start_offset', None) if (start_offset is not None): values.append('start offset: {0:d} (0x{0:08x})'.format(start_offset)) location = getattr(scan_node.path_spec, 'location', None) if (location is not None): values.append('location: {0:s}'.format(location)) values = ', '.join(values) flags = '' if (scan_node in scan_context.locked_scan_nodes): flags = ' [LOCKED]' print('{0:s}{1:s}: {2:s}{3:s}'.format(indentation, scan_node.path_spec.type_indicator, values, flags)) indentation = ' {0:s}'.format(indentation) for sub_scan_node in scan_node.sub_nodes: self.WriteScanNode(scan_context, sub_scan_node, indentation=indentation)
Writes the source scanner node to stdout. Args: scan_context (SourceScannerContext): the source scanner context. scan_node (SourceScanNode): the scan node. indentation (Optional[str]): indentation.
codesearchnet
def compile_action_preconditions_checking(self, state: Sequence[tf.Tensor], action: Sequence[tf.Tensor]) -> tf.Tensor: with self.graph.as_default(): with tf.name_scope('action_preconditions_checking'): preconds = self.compile_action_preconditions(state, action) all_preconds = tf.stack([p.tensor for p in preconds], axis=1) checking = tf.reduce_all(all_preconds, axis=1) return checking
Combines the action preconditions into an applicability checking op.

Args:
    state (Sequence[tf.Tensor]): The current state fluents.
    action (Sequence[tf.Tensor]): The action fluents.

Returns:
    A boolean tensor for checking if `action` is applicable in `state`.
codesearchnet
def get_connection(self, name): name = 'connection:{}'.format(name) if (not self.has_section(name)): return None return dict(self.items(name))
Returns the properties for a connection name

This method will return the settings for the configuration specified
by name. Note that the name argument should only be the name. For
instance, given the following eapi.conf file

.. code-block:: ini

    [connection:veos01]
    transport: http

The name to use to retrieve the configuration would be veos01

    >>> pyeapi.client.config.get_connection('veos01')

Args:
    name (str): The name of the connection to return

Returns:
    A Python dictionary object of key/value pairs that represent
    the node configuration. If the name provided in the argument
    is not found, then None is returned.
codesearchnet
def transform(self, obj, user_context): if inspect.isfunction(obj) or inspect.ismethod(obj): return self.transform_function(obj, user_context) raise NotImplementedError('Non-function: {}'.format(type(obj)))
Transforms a Python object. Users typically call this method. Args: obj: A Python object, function, type, etc. user_context: An opaque object (may be None) that is forwarded to transform_ast, through the ctx.user attribute. Returns: The result of calling transform_function. Raises: NotImplementedError: if the type of obj is not handled.
github-repos
def __init__(self, model_name: str, *, title: Optional[str]=None, task_type: str=DEFAULT_TASK_TYPE, project: Optional[str]=None, location: Optional[str]=None, credentials: Optional[Credentials]=None, **kwargs): if not vertexai: raise ImportError('vertexai is required to use VertexAITextEmbeddings. Please install it with `pip install google-cloud-aiplatform`') super().__init__(type_adapter=create_rag_adapter(), **kwargs) self.model_name = model_name self.title = title self.task_type = task_type self.project = project self.location = location self.credentials = credentials
Utilizes Vertex AI text embeddings for semantic search and RAG pipelines. Args: model_name: Name of the Vertex AI text embedding model title: Optional title for the text content task_type: Task type for embeddings (default: RETRIEVAL_DOCUMENT) project: GCP project ID location: GCP location credentials: Optional GCP credentials **kwargs: Additional arguments passed to EmbeddingsManager including ModelHandler inference_args.
github-repos
def get_dssp_annotations(self, outdir, force_rerun=False):
    if self.structure:
        parsed = self.structure
    else:
        parsed = self.parse_structure()
    if (not parsed):
        log.error('{}: unable to open structure to run DSSP'.format(self.id))
        return
    log.debug('{}: running DSSP'.format(self.id))
    dssp_results = ssbio.protein.structure.properties.dssp.get_dssp_df(model=parsed.first_model, pdb_file=self.structure_path, outdir=outdir, force_rerun=force_rerun)
    if dssp_results.empty:
        log.error('{}: unable to run DSSP'.format(self.id))
        return
    chains = dssp_results.chain.unique()
    dssp_summary = ssbio.protein.structure.properties.dssp.secondary_structure_summary(dssp_results)
    for chain in chains:
        ss = dssp_results[(dssp_results.chain == chain)].ss.tolist()
        exposure_rsa = dssp_results[(dssp_results.chain == chain)].exposure_rsa.tolist()
        exposure_asa = dssp_results[(dssp_results.chain == chain)].exposure_asa.tolist()
        phi = dssp_results[(dssp_results.chain == chain)].phi.tolist()
        psi = dssp_results[(dssp_results.chain == chain)].psi.tolist()
        chain_prop = self.chains.get_by_id(chain)
        chain_seq = chain_prop.seq_record
        ss = ssbio.protein.structure.properties.residues.match_structure_sequence(orig_seq=chain_seq, new_seq=ss, fill_with='-')
        exposure_rsa = ssbio.protein.structure.properties.residues.match_structure_sequence(orig_seq=chain_seq, new_seq=exposure_rsa, fill_with=float('Inf'))
        exposure_asa = ssbio.protein.structure.properties.residues.match_structure_sequence(orig_seq=chain_seq, new_seq=exposure_asa, fill_with=float('Inf'))
        phi = ssbio.protein.structure.properties.residues.match_structure_sequence(orig_seq=chain_seq, new_seq=phi, fill_with=float('Inf'))
        psi = ssbio.protein.structure.properties.residues.match_structure_sequence(orig_seq=chain_seq, new_seq=psi, fill_with=float('Inf'))
        chain_prop.seq_record.annotations.update(dssp_summary[chain])
        chain_prop.seq_record.letter_annotations['SS-dssp'] = ss
        chain_prop.seq_record.letter_annotations['RSA-dssp'] = exposure_rsa
        chain_prop.seq_record.letter_annotations['ASA-dssp'] = exposure_asa
        chain_prop.seq_record.letter_annotations['PHI-dssp'] = phi
        chain_prop.seq_record.letter_annotations['PSI-dssp'] = psi
        log.debug('{}: stored DSSP annotations in chain seq_record letter_annotations'.format(chain))
Run DSSP on this structure and store the DSSP annotations in the corresponding ChainProp SeqRecords Calculations are stored in the ChainProp's ``letter_annotations`` at the following keys: * ``SS-dssp`` * ``RSA-dssp`` * ``ASA-dssp`` * ``PHI-dssp`` * ``PSI-dssp`` Args: outdir (str): Path to where DSSP dataframe will be stored. force_rerun (bool): If DSSP results should be recalculated TODO: * Also parse global properties, like total accessible surface area. Don't think Biopython parses those?
codesearchnet
def running(processid): try: os.kill(processid, 0) except OverflowError as exc: print("checking validity of pid ({p}) failed with: {e}" .format(p=processid, e=exc)) sys.exit(1) except OSError: return False else: return True
Check the validity of a process ID. Arguments: processid (int): Process ID number. Returns: True if process ID is found otherwise False.
juraj-google-style
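A standalone copy of the `running` helper above. It relies on POSIX semantics of `kill()` with signal 0 (existence/permission probe without delivering a signal); on Windows `os.kill` behaves differently, and a `PermissionError` for another user's live process is still treated as "not running" here:

```python
import os
import sys

def running(processid):
    # Copy of the helper above. Signal 0 performs kill()'s existence and
    # permission checks without actually sending a signal (POSIX-only).
    try:
        os.kill(processid, 0)
    except OverflowError as exc:
        print("checking validity of pid ({p}) failed with: {e}"
              .format(p=processid, e=exc))
        sys.exit(1)
    except OSError:
        return False
    else:
        return True

print(running(os.getpid()))  # True: the current process exists by definition
```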
def wrap_query_in_nested_if_field_is_nested(query, field, nested_fields): for element in nested_fields: match_pattern = '^{}.'.format(element) if re.match(match_pattern, field): return generate_nested_query(element, query) return query
Helper for wrapping a query into a nested query if the fields within the
query are nested

Args:
    query : The query to be wrapped.
    field : The field that is being queried.
    nested_fields : List of fields which are nested.

Returns:
    (dict): The nested query
codesearchnet
def write(self, ostream, kmip_version=enums.KMIPVersion.KMIP_1_0): binary = '{0:b}'.format(abs(self.value)) binary = (('0' * (64 - (len(binary) % 64))) + binary) if (self.value < 0): binary = binary.replace('1', 'i') binary = binary.replace('0', '1') binary = binary.replace('i', '0') pivot = binary.rfind('0') binary = ((binary[0:pivot] + '1') + ('0' * len(binary[(pivot + 1):]))) hexadecimal = b'' for i in range(0, len(binary), 8): byte = binary[i:(i + 8)] byte = int(byte, 2) hexadecimal += struct.pack('!B', byte) self.length = len(hexadecimal) super(BigInteger, self).write(ostream, kmip_version=kmip_version) ostream.write(hexadecimal)
Write the encoding of the BigInteger to the output stream. Args: ostream (Stream): A buffer to contain the encoded bytes of a BigInteger object. Usually a BytearrayStream object. Required. kmip_version (KMIPVersion): An enumeration defining the KMIP version with which the object will be encoded. Optional, defaults to KMIP 1.0.
codesearchnet
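The value bytes produced by `BigInteger.write` above are a big-endian two's-complement encoding padded to a multiple of 64 bits. A simplified sketch of that idea using `int.to_bytes` (note: exact boundary cases such as `-2**63` may be padded differently by the manual bit-twiddling in the snippet than by this minimal-length approach):

```python
def encode_big_integer(value: int) -> bytes:
    # Big-endian two's complement, padded to a multiple of 8 bytes (64 bits).
    nbytes = 8
    while True:
        try:
            return value.to_bytes(nbytes, 'big', signed=True)
        except OverflowError:
            nbytes += 8  # value needs another 64-bit block

print(encode_big_integer(1))   # b'\x00\x00\x00\x00\x00\x00\x00\x01'
print(encode_big_integer(-1))  # b'\xff\xff\xff\xff\xff\xff\xff\xff'
```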
def verify(self, flag_values): param = self._get_input_to_checker_function(flag_values) if (not self.checker(param)): raise _exceptions.ValidationError(self.message)
Verifies that constraint is satisfied. flags library calls this method to verify Validator's constraint. Args: flag_values: flags.FlagValues, the FlagValues instance to get flags from. Raises: Error: Raised if constraint is not satisfied.
codesearchnet
def rollback(self, label=None, plane='sdr'): begin = time.time() rb_label = self._chain.target_device.rollback(label=label, plane=plane) elapsed = time.time() - begin if label: self.emit_message("Configuration rollback last {:.0f}s. Label: {}".format(elapsed, rb_label), log_level=logging.INFO) else: self.emit_message("Configuration failed.", log_level=logging.WARNING) return rb_label
Rollback the configuration. This method rolls back the configuration on the device. Args: label (text): The configuration label ID plane: (text): sdr or admin Returns: A string with commit label or None
juraj-google-style
def ParseRecord(self, parser_mediator, key, structure): if (key != 'logline'): logger.warning('Unable to parse record, unknown structure: {0:s}'.format(key)) return try: timestamp = int(structure.timestamp) except ValueError: logger.debug('Invalid timestamp string {0:s}, skipping record'.format(structure.timestamp)) return try: (nickname, text) = self._StripThenGetNicknameAndText(structure.text) except pyparsing.ParseException: logger.debug('Error parsing entry at offset {0:d}'.format(self._offset)) return event_data = XChatScrollbackEventData() event_data.nickname = nickname event_data.offset = self._offset event_data.text = text date_time = dfdatetime_posix_time.PosixTime(timestamp=timestamp) event = time_events.DateTimeValuesEvent(date_time, definitions.TIME_DESCRIPTION_ADDED) parser_mediator.ProduceEventWithEventData(event, event_data)
Parses a log record structure. Args: parser_mediator (ParserMediator): mediates interactions between parsers and other components, such as storage and dfvfs. key (str): name of the parsed structure. structure (pyparsing.ParseResults): structure parsed from the log file.
codesearchnet
def __call__(self, inputs, state, scope=None, *args, **kwargs): return base_layer.Layer.__call__(self, inputs, state, *args, scope=scope, **kwargs)
Run this RNN cell on inputs, starting from the given state. Args: inputs: `2-D` tensor with shape `[batch_size, input_size]`. state: if `self.state_size` is an integer, this should be a `2-D Tensor` with shape `[batch_size, self.state_size]`. Otherwise, if `self.state_size` is a tuple of integers, this should be a tuple with shapes `[batch_size, s] for s in self.state_size`. scope: optional cell scope. *args: Additional positional arguments. **kwargs: Additional keyword arguments. Returns: A pair containing: - Output: A `2-D` tensor with shape `[batch_size, self.output_size]`. - New state: Either a single `2-D` tensor, or a tuple of tensors matching the arity and shapes of `state`.
github-repos
def split_pair(pair_string, separator, nullable_idx=1): pair = pair_string.split(separator, 1) if (len(pair) == 1): if (nullable_idx == 0): return [None, pair[0]] elif (nullable_idx == 1): return [pair[0], None] else: raise IndexError('nullable_idx should be either 0 or 1.') else: return pair
Split a string into a pair, which can have one empty value. Args: pair_string: The string to be split. separator: The separator to be used for splitting. nullable_idx: The location to be set to null if the separator is not in the input string. Should be either 0 or 1. Returns: A list containing the pair. Raises: IndexError: If nullable_idx is not 0 or 1.
codesearchnet
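A standalone copy of `split_pair` with a few illustrative calls (note `split(separator, 1)` splits only on the first occurrence):

```python
def split_pair(pair_string, separator, nullable_idx=1):
    # Standalone copy of the helper above.
    pair = pair_string.split(separator, 1)
    if len(pair) == 1:
        if nullable_idx == 0:
            return [None, pair[0]]
        elif nullable_idx == 1:
            return [pair[0], None]
        else:
            raise IndexError('nullable_idx should be either 0 or 1.')
    return pair

print(split_pair('key=value', '='))  # ['key', 'value']
print(split_pair('key', '='))        # ['key', None]
print(split_pair('key', '=', 0))     # [None, 'key']
```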
def switch_to_frame(self, frame): if isinstance(frame, Element): self.driver.switch_to_frame(frame) self._scopes.append("frame") elif frame == "parent": if self._scopes[-1] != "frame": raise ScopeError("`switch_to_frame(\"parent\")` cannot be called " "from inside a descendant frame's `scope` context.") self._scopes.pop() self.driver.switch_to_frame("parent") elif frame == "top": if "frame" in self._scopes: idx = self._scopes.index("frame") if any([scope not in ["frame", None] for scope in self._scopes[idx:]]): raise ScopeError("`switch_to_frame(\"top\")` cannot be called " "from inside a descendant frame's `scope` context.") self._scopes = self._scopes[:idx] self.driver.switch_to_frame("top") else: raise ValueError( "You must provide a frame element, \"parent\", or \"top\" " "when calling switch_to_frame")
Switch to the given frame. If you use this method you are responsible for making sure you switch back to the parent frame when done in the frame changed to. :meth:`frame` is preferred over this method and should be used when possible. May not be supported by all drivers. Args: frame (Element | str): The iframe/frame element to switch to.
juraj-google-style
async def _handle_conversation_delta(self, conversation): conv_id = conversation.conversation_id.id conv = self._conv_dict.get(conv_id, None) if conv is None: await self._get_or_fetch_conversation(conv_id) else: conv.update_conversation(conversation)
Receive Conversation delta and create or update the conversation. Args: conversation: hangouts_pb2.Conversation instance Raises: NetworkError: A request to fetch the complete conversation failed.
juraj-google-style
def FileEntryExistsByPathSpec(self, path_spec): try: file_object = resolver.Resolver.OpenFileObject( path_spec, resolver_context=self._resolver_context) except (IOError, ValueError, errors.AccessError, errors.PathSpecError): return False file_object.close() return True
Determines if a file entry for a path specification exists. Args: path_spec (PathSpec): path specification. Returns: bool: True if the file entry exists.
juraj-google-style
def update(self, *args): for token in tuple(*args): self.add(token)
Updates the Trie with new tokens provided as arguments. Args: *args: Variable number of words to be added to the Trie.
github-repos
def send_message(self, room_id, text_content, msgtype="m.text", timestamp=None): return self.send_message_event( room_id, "m.room.message", self.get_text_body(text_content, msgtype), timestamp=timestamp )
Perform PUT /rooms/$room_id/send/m.room.message Args: room_id (str): The room ID to send the event in. text_content (str): The m.text body to send. timestamp (int): Set origin_server_ts (For application services only)
juraj-google-style
def _AnsiCmd(command_list): if (not isinstance(command_list, list)): raise ValueError(('Invalid list: %s' % command_list)) for sgr in command_list: if (sgr.lower() not in SGR): raise ValueError(('Invalid or unsupported SGR name: %s' % sgr)) command_str = [str(SGR[x.lower()]) for x in command_list] return ('\x1b[%sm' % ';'.join(command_str))
Takes a list of SGR values and formats them as an ANSI escape sequence. Args: command_list: List of strings, each string represents an SGR value. e.g. 'fg_blue', 'bg_yellow' Returns: The ANSI escape sequence. Raises: ValueError: if a member of command_list does not map to a valid SGR value.
codesearchnet
def generate_dequeue_op(self, tpu_device=0): self.freeze() if self._generated_dequeue_op and (not ops.inside_function()): raise ValueError("Can't generate two dequeue Ops from the same queue") self._generated_dequeue_op = True full_name = '%s/dequeue' % self._name sharded_shapes = [policy.get_unpartitioned_shape(policy.get_sharded_shape(shape)) for shape, policy in zip(self._tuple_shapes, self._sharding_policies)] if tpu_device is not None: with ops.device(tpu_name_util.core(tpu_device)): dequeue_op = tpu_ops.infeed_dequeue_tuple(dtypes=self._tuple_types, shapes=sharded_shapes, name=full_name) else: dequeue_op = tpu_ops.infeed_dequeue_tuple(dtypes=self._tuple_types, shapes=sharded_shapes, name=full_name) if self._number_of_partitions <= 1: return dequeue_op partitions = [policy.get_unpartitioned_shape([1] * shape.ndims).as_list() for shape, policy in zip(self._tuple_shapes, self._sharding_policies)] return tag_sharding_attribute_for_dequeued_tensors(dequeue_op, partitions)
Generates the device-side Op to dequeue a tuple from the queue. Implicitly freezes the queue configuration if it is not already frozen, which will raise errors if the shapes and types have not been fully specified. Args: tpu_device: The TPU device ordinal where the infeed instruction should be placed. If None, no explicit placement will be performed, and it is up to the user to call this API from within a proper TPU device scope. The XLA code will fail if the TPU dequeue instruction is not bound to any device. Returns: A list of Outputs corresponding to a shard of infeed dequeued into XLA, suitable for use within a replicated block. Raises: ValueError: if the types or shapes of the tuple elements have not been set; or if a dequeue op has already been generated.
github-repos
def extract_archive(archive_path, dest):
    if (not os.path.isdir(dest)):
        os.makedirs(dest)
    try:
        tmpfolder = None
        if ((not tf.gfile.Exists(archive_path)) or tf.gfile.IsDirectory(archive_path)):
            raise ValueError(('archive path %s is not a file' % archive_path))
        if archive_path.startswith('gs://'):
            tmpfolder = tempfile.mkdtemp()
            cmd_args = ['gsutil', 'cp', archive_path, tmpfolder]
            _shell_process.run_and_monitor(cmd_args, os.getpid())
            archive_path = os.path.join(tmpfolder, os.path.basename(archive_path))
        if archive_path.lower().endswith('.tar.gz'):
            flags = '-xzf'
        elif archive_path.lower().endswith('.tar'):
            flags = '-xf'
        else:
            raise ValueError('Only tar.gz or tar files are supported.')
        cmd_args = ['tar', flags, archive_path, '-C', dest]
        _shell_process.run_and_monitor(cmd_args, os.getpid())
    finally:
        if tmpfolder:
            shutil.rmtree(tmpfolder)
Extract a local or GCS archive file to a folder. Args: archive_path: local or gcs path to a *.tar.gz or *.tar file dest: local folder the archive will be extracted to
codesearchnet
def _get_api_call(self, function_name, *args): api_call = (dedent('\n var done = arguments[0];\n KindleAPI.%(api_call)s(%(args)s).always(function(a) {\n done(a);\n });\n ') % {'api_call': function_name, 'args': ', '.join(args)}) script = '\n'.join((api.API_SCRIPT, api_call)) try: return self._browser.execute_async_script(script) except TimeoutException: raise APIError
Runs an api call with javascript-formatted arguments. Args: function_name: The name of the KindleAPI call to run. *args: Javascript-formatted arguments to pass to the API call. Returns: The result of the API call. Raises: APIError: If the API call fails or times out.
codesearchnet
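The string assembly inside `_get_api_call` can be exercised without a browser. A hedged stand-in (the helper name is mine; only the template logic is taken from the method above):

```python
from textwrap import dedent


def build_api_call(function_name, *args):
    # Wrap a KindleAPI call so Selenium's execute_async_script callback
    # (`arguments[0]`) receives the result via jQuery-style .always().
    return dedent('''\
        var done = arguments[0];
        KindleAPI.%(api_call)s(%(args)s).always(function(a) {
            done(a);
        });
    ''') % {'api_call': function_name, 'args': ', '.join(args)}
```

Note that the `*args` are expected to already be JavaScript-formatted literals (e.g. a quoted string), since they are spliced directly into the script.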
def _build(self): flat_initial_state = nest.flatten(self._initial_state) if (self._mask is not None): flat_mask = nest.flatten(self._mask) flat_learnable_state = [_single_learnable_state(state, state_id=i, learnable=mask) for (i, (state, mask)) in enumerate(zip(flat_initial_state, flat_mask))] else: flat_learnable_state = [_single_learnable_state(state, state_id=i) for (i, state) in enumerate(flat_initial_state)] return nest.pack_sequence_as(structure=self._initial_state, flat_sequence=flat_learnable_state)
Connects the module to the graph. Returns: The learnable state, which has the same type, structure and shape as the `initial_state` passed to the constructor.
codesearchnet
def setup_remoteckan(self, remoteckan=None, **kwargs): if remoteckan is None: self._remoteckan = self.create_remoteckan(self.get_hdx_site_url(), full_agent=self.get_user_agent(), **kwargs) else: self._remoteckan = remoteckan
Set up remote CKAN from provided CKAN or by creating from configuration Args: remoteckan (Optional[ckanapi.RemoteCKAN]): CKAN instance. Defaults to setting one up from configuration. Returns: None
juraj-google-style
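`setup_remoteckan` follows an inject-or-construct pattern that is handy for testing: accept a ready-made client, or build one from configuration. A minimal sketch with a tuple standing in for the constructed `ckanapi.RemoteCKAN` (that substitution is an assumption for illustration):

```python
class RemoteHolder:
    # Sketch of the inject-or-construct pattern used by setup_remoteckan.
    def __init__(self, site_url, user_agent):
        self._site_url = site_url
        self._user_agent = user_agent
        self._remote = None

    def setup_remote(self, remote=None, **kwargs):
        if remote is None:
            # Build a client from configuration (tuple as a stand-in).
            self._remote = ('client', self._site_url, self._user_agent, kwargs)
        else:
            # Use the injected client, e.g. a mock in unit tests.
            self._remote = remote
```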
def set_hostname(hostname=None):
    if not hostname:
        raise salt.exceptions.CommandExecutionError("Hostname option must be provided.")

    dn = "sys/rack-unit-1/mgmt/if-1"
    inconfig = """<mgmtIf dn="sys/rack-unit-1/mgmt/if-1" hostname="{0}" ></mgmtIf>""".format(hostname)

    ret = __proxy__['cimc.set_config_modify'](dn, inconfig, False)

    try:
        return ret['outConfig']['mgmtIf'][0]['status'] == 'modified'
    except Exception:
        return False
Sets the hostname on the server. .. versionadded:: 2019.2.0 Args: hostname(str): The new hostname to set. CLI Example: .. code-block:: bash salt '*' cimc.set_hostname foobar
juraj-google-style
def power(self, n): if not isinstance(n, (int, np.integer)): raise QiskitError("Can only power with integer powers.") if self._input_dim != self._output_dim: raise QiskitError("Can only power with input_dim = output_dim.") return SuperOp( np.linalg.matrix_power(self._data, n), self.input_dims(), self.output_dims())
Return the compose of a QuantumChannel with itself n times. Args: n (int): compute the matrix power of the superoperator matrix. Returns: SuperOp: the n-times composition channel as a SuperOp object. Raises: QiskitError: if the input and output dimensions of the QuantumChannel are not equal, or the power is not an integer.
juraj-google-style
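The core of `SuperOp.power` is `np.linalg.matrix_power`: composing the channel's superoperator matrix with itself n times. A pure-Python sketch of that idea for non-negative integer powers (NumPy's version additionally handles negative powers via matrix inversion, which this sketch omits):

```python
def mat_mul(a, b):
    # Square-matrix product over nested lists.
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]


def mat_power(m, n):
    # Repeated composition, as SuperOp.power does via matrix_power.
    if not isinstance(n, int):
        raise ValueError('Can only power with integer powers.')
    size = len(m)
    result = [[1 if i == j else 0 for j in range(size)] for i in range(size)]
    for _ in range(n):
        result = mat_mul(result, m)
    return result
```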
def _generate_security_groups(config_key):
    raw_default_groups = validate_key_values(CONFIG, 'base', config_key, default='')
    default_groups = _convert_string_to_native(raw_default_groups)
    LOG.debug('Default security group for %s is %s', config_key, default_groups)

    entries = {env: [] for env in ENVS}

    if isinstance(default_groups, list):
        groups = _remove_empty_entries(default_groups)
        for env in entries:
            entries[env] = groups
    elif isinstance(default_groups, dict):
        entries.update(default_groups)

    LOG.debug('Generated security group: %s', entries)
    return entries
Read config file and generate security group dict by environment. Args: config_key (str): Configuration file key Returns: dict: of environments in {'env1': ['group1', 'group2']} format
juraj-google-style
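The fan-out behavior is the interesting part of `_generate_security_groups`: a list applies to every environment, while a dict overrides per environment. A self-contained sketch with the config loading stripped out (`ENVS` and the empty-entry filter are simplified assumptions):

```python
ENVS = ('dev', 'stage', 'prod')  # assumed environment names


def generate_security_groups(default_groups):
    # List -> same groups for every environment; dict -> per-env override.
    entries = {env: [] for env in ENVS}
    if isinstance(default_groups, list):
        groups = [g for g in default_groups if g]  # drop empty entries
        for env in entries:
            entries[env] = groups
    elif isinstance(default_groups, dict):
        entries.update(default_groups)
    return entries
```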
def export(self, top=True): out = [] if top: out.append(self._internal_name) out.append(self._to_str(self.ground_temperature_depth)) out.append(self._to_str(self.depth_soil_conductivity)) out.append(self._to_str(self.depth_soil_density)) out.append(self._to_str(self.depth_soil_specific_heat)) out.append(self._to_str(self.depth_january_average_ground_temperature)) out.append( self._to_str( self.depth_february_average_ground_temperature)) out.append(self._to_str(self.depth_march_average_ground_temperature)) out.append(self._to_str(self.depth_april_average_ground_temperature)) out.append(self._to_str(self.depth_may_average_ground_temperature)) out.append(self._to_str(self.depth_june_average_ground_temperature)) out.append(self._to_str(self.depth_july_average_ground_temperature)) out.append(self._to_str(self.depth_august_average_ground_temperature)) out.append( self._to_str( self.depth_september_average_ground_temperature)) out.append(self._to_str(self.depth_october_average_ground_temperature)) out.append( self._to_str( self.depth_november_average_ground_temperature)) out.append( self._to_str( self.depth_december_average_ground_temperature)) return ",".join(out)
Exports object to its string representation.

Args:
    top (bool): if True appends `internal_name` before values.
        All non-list objects should be exported with `top`=True;
        all list objects that are embedded as fields in list
        objects should be exported with `top`=False.

Returns:
    str: The object's string representation.
juraj-google-style
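The long `export` method above reduces to one generic pattern: optionally prefix the record's internal name, stringify each field, and comma-join. A sketch (the function name is mine, and `_to_str` is assumed to render `None` as an empty string):

```python
def export_fields(internal_name, values, top=True):
    # Optionally prefix the internal name, stringify fields, join with commas.
    out = []
    if top:
        out.append(internal_name)
    out.extend('' if v is None else str(v) for v in values)
    return ','.join(out)
```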
def add(self, watch_key, tensor_value): if watch_key not in self._tensor_data: self._tensor_data[watch_key] = _WatchStore( watch_key, mem_bytes_limit=self._watch_mem_bytes_limit) self._tensor_data[watch_key].add(tensor_value)
Add a tensor value. Args: watch_key: A string representing the debugger tensor watch, e.g., 'Dense_1/BiasAdd:0:DebugIdentity'. tensor_value: The value of the tensor as a numpy.ndarray.
juraj-google-style
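The `add` method is a lazy-initialization pattern over a keyed store. A minimal sketch with a plain list replacing `_WatchStore` (that substitution is an assumption; the real store enforces a memory-bytes limit):

```python
class TensorStore:
    # Lazily create a per-watch-key store, then append values to it.
    def __init__(self):
        self._tensor_data = {}

    def add(self, watch_key, tensor_value):
        if watch_key not in self._tensor_data:
            self._tensor_data[watch_key] = []
        self._tensor_data[watch_key].append(tensor_value)
```

`dict.setdefault(watch_key, []).append(tensor_value)` would express the same thing in one line, though the explicit form above matches the original, where store construction takes extra arguments.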
def genCaCert(self, name, signas=None, outp=None, save=True): pkey, cert = self._genBasePkeyCert(name) ext0 = crypto.X509Extension(b'basicConstraints', False, b'CA:TRUE') cert.add_extensions([ext0]) if signas is not None: self.signCertAs(cert, signas) else: self.selfSignCert(cert, pkey) if save: keypath = self._savePkeyTo(pkey, 'cas', '%s.key' % name) if outp is not None: outp.printf('key saved: %s' % (keypath,)) crtpath = self._saveCertTo(cert, 'cas', '%s.crt' % name) if outp is not None: outp.printf('cert saved: %s' % (crtpath,)) return pkey, cert
Generates a CA keypair. Args: name (str): The name of the CA keypair. signas (str): The CA keypair to sign the new CA with. outp (synapse.lib.output.Output): The output buffer. Examples: Make a CA named "myca": mycakey, mycacert = cdir.genCaCert('myca') Returns: ((OpenSSL.crypto.PKey, OpenSSL.crypto.X509)): Tuple containing the private key and certificate objects.
juraj-google-style
def get_dependency_definitions(self, url: str) -> List[_StructDefT]: dependencies: Dict[str, _StructDefT] = {} urls_to_load: List[str] = [url] while urls_to_load: url_to_load = urls_to_load.pop() base_definition = self.get_structure_definition(url_to_load) for elem in base_definition.snapshot.element: for elem_type in elem.type: type_name = elem_type.code.value if _fhir_path_data_types.primitive_type_from_type_code(type_name) is None and type_name not in dependencies: child_struct = self.get_structure_definition(type_name) dependencies[type_name] = child_struct urls_to_load.append(child_struct.url.value) return list(dependencies.values())
Returns all dependencies for the structure identified by the given URL. Args: url: The URL identifying the FHIR StructureDefinition to load dependencies for. Returns: The structure definitions depended on by the above URL. Raises: UnableToLoadResourceError if the resource cannot be loaded.
github-repos
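`get_dependency_definitions` is a worklist traversal that deduplicates as it goes. A simplified, self-contained sketch where a `dict` mapping name to referenced type codes stands in for the structure-definition resolver, and type names double as URLs (both assumptions for illustration):

```python
def collect_dependencies(url, definitions, primitives):
    # Worklist walk: pop a definition, enqueue each non-primitive,
    # not-yet-seen type it references.
    dependencies = {}
    urls_to_load = [url]
    while urls_to_load:
        current = urls_to_load.pop()
        for type_name in definitions[current]:
            if type_name not in primitives and type_name not in dependencies:
                dependencies[type_name] = True
                urls_to_load.append(type_name)
    return sorted(dependencies)
```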
def load_text_file(self, filename, encoding="utf-8", tokenizer=None): with load_file(filename, encoding=encoding) as data: self.load_text(data, tokenizer)
Load in a text file from which to generate a word frequency list Args: filename (str): The filepath to the text file to be loaded encoding (str): The encoding of the text file tokenizer (function): The function to use to tokenize a string
juraj-google-style
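`load_text_file` delegates the real work to `load_text`, which tokenizes the text and counts word frequencies. A hedged sketch of that step, with an assumed default tokenizer (a simple lowercase word regex, not necessarily the library's):

```python
import re
from collections import Counter


def word_frequencies(text, tokenizer=None):
    # Tokenize (defaulting to a simple word regex) and count occurrences.
    tokenize = tokenizer or (lambda s: re.findall(r"[a-z']+", s.lower()))
    return Counter(tokenize(text))
```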
def EnforceLimits(self, client_id, user, flow_name, flow_args, token=None): if ((not self.dup_interval) and (not self.daily_req_limit)): return now = rdfvalue.RDFDatetime.Now() yesterday = (now - rdfvalue.Duration('1d')) dup_boundary = (now - self.dup_interval) min_create_time = min(yesterday, dup_boundary) flow_count = 0 flows = self._LoadFlows(client_id, min_create_time, token=token) if (flow_args is None): flow_args = flow.EmptyFlowArgs() for flow_obj in flows: if ((flow_obj.create_time > dup_boundary) and (flow_obj.flow_class_name == flow_name) and (flow_obj.args == flow_args)): raise DuplicateFlowError(('Identical %s already run on %s at %s' % (flow_name, client_id, flow_obj.create_time)), flow_id=flow_obj.flow_id) if ((flow_obj.creator == user) and (flow_obj.create_time > yesterday)): flow_count += 1 if (self.daily_req_limit and (flow_count >= self.daily_req_limit)): raise DailyFlowRequestLimitExceededError(('%s flows run since %s, limit: %s' % (flow_count, yesterday, self.daily_req_limit)))
Enforce DailyFlowRequestLimit and FlowDuplicateInterval. Look at the flows that have run on this client recently and check we aren't exceeding our limits. Raises if limits will be exceeded by running the specified flow. Args: client_id: client URN user: username string flow_name: flow name string flow_args: flow args rdfvalue for the flow being launched token: acl token Raises: DailyFlowRequestLimitExceededError: if the user has already run API.DailyFlowRequestLimit on this client in the previous 24h. DuplicateFlowError: an identical flow was run on this machine by a user within the API.FlowDuplicateInterval
codesearchnet
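The two checks in `EnforceLimits` (duplicate-flow rejection and the daily per-user quota) can be condensed into one pass over recent flows. A sketch with plain numbers standing in for `RDFDatetime` and tuples for flow objects (both assumptions; `RuntimeError` stands in for the specific GRR error classes):

```python
def check_limits(now, flows, user, flow_name, flow_args,
                 dup_interval, daily_req_limit):
    # flows: iterable of (create_time, creator, name, args) tuples.
    yesterday = now - 86400
    dup_boundary = now - dup_interval
    flow_count = 0
    for create_time, creator, name, args in flows:
        # Identical flow within the duplicate interval -> reject.
        if create_time > dup_boundary and name == flow_name and args == flow_args:
            raise RuntimeError('duplicate flow')
        # Count this user's flows from the last 24h toward the quota.
        if creator == user and create_time > yesterday:
            flow_count += 1
    if daily_req_limit and flow_count >= daily_req_limit:
        raise RuntimeError('daily limit exceeded')
```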
def extra(name: str, desc: str) -> Callable:
    def attr_dec(f):
        f.extra_fn = True
        f.name = name
        f.desc = desc
        return f
    return attr_dec
Decorator for slave channel's "additional features" interface. Args: name (str): A human readable name for the function. desc (str): A short description and usage of it. Use ``{function_name}`` in place of the function name in the description. Returns: The decorated method.
codesearchnet
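Usage of the decorator above, with type hints dropped so the sketch is fully self-contained (the decorated function and its strings are made up for illustration):

```python
def extra(name, desc):
    # Attach metadata attributes the framework can later discover.
    def attr_dec(f):
        f.extra_fn = True
        f.name = name
        f.desc = desc
        return f
    return attr_dec


@extra('Refresh', 'Reload the channel list.')
def refresh():
    return 'refreshed'
```

The decorated function still behaves normally; the framework merely inspects the `extra_fn`, `name`, and `desc` attributes to list a channel's additional features.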
def index_buffer(self, buffer, index_element_size=4):
    if type(buffer) not in (moderngl.Buffer, numpy.ndarray, bytes):
        raise VAOError("buffer parameter must be a moderngl.Buffer, numpy.ndarray or bytes instance")

    if isinstance(buffer, numpy.ndarray):
        buffer = self.ctx.buffer(buffer.tobytes())

    if isinstance(buffer, bytes):
        buffer = self.ctx.buffer(data=buffer)

    self._index_buffer = buffer
    self._index_element_size = index_element_size
Set the index buffer for this VAO Args: buffer: ``moderngl.Buffer``, ``numpy.array`` or ``bytes`` Keyword Args: index_element_size (int): Byte size of each element. 1, 2 or 4
juraj-google-style
def get_message(self, message_id): for message in self.messages: if (message.id == message_id): return message raise ArgumentError('Message ID not found', message_id=message_id)
Get a message by its persistent id.

Args:
    message_id (int): The id of the message that we're looking for.

Raises:
    ArgumentError: If no message with the given id exists.
codesearchnet
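`get_message` does a linear scan over `self.messages`. When lookups dominate, a dict index built once makes them O(1); a sketch of that design alternative (not what the original does), with dicts standing in for message objects and `KeyError` for `ArgumentError`:

```python
class MessageStore:
    # Index messages by id at construction time for O(1) lookup.
    def __init__(self, messages):
        self._by_id = {m['id']: m for m in messages}

    def get_message(self, message_id):
        try:
            return self._by_id[message_id]
        except KeyError:
            raise KeyError('Message ID not found: %r' % message_id)
```

The trade-off is that the index must be kept in sync if messages are added or removed after construction, which the linear scan avoids.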