Columns: code (string, lengths 20–4.93k) · docstring (string, lengths 33–1.27k) · source (string, 3 classes)
def _AddAttributeContainer(self, container_type, attribute_container):
    container_list = self._GetSerializedAttributeContainerList(container_type)
    identifier = identifiers.SQLTableIdentifier(
        container_type, container_list.next_sequence_number + 1)
    attribute_container.SetIdentifier(identifier)
    serialized_data = self._SerializeAttributeContainer(attribute_container)
    container_list.PushAttributeContainer(serialized_data)
    if container_list.data_size > self._maximum_buffer_size:
        self._WriteSerializedAttributeContainerList(container_type)
Adds an attribute container. Args: container_type (str): attribute container type. attribute_container (AttributeContainer): attribute container. Raises: IOError: if the attribute container cannot be serialized. OSError: if the attribute container cannot be serialized.
codesearchnet
def loads(text):
    if text.startswith("CCSDS_OEM_VERS"):
        func = _read_oem
    elif text.startswith("CCSDS_OPM_VERS"):
        func = _read_opm
    else:
        raise ValueError("Unknown CCSDS type")
    return func(text)
Read CCSDS data from a string and return the corresponding beyond class: Orbit (or list of Orbit) if it's an OPM, Ephem if it's an OEM. Args: text (str): Return: Orbit or Ephem Raise: ValueError: when the text is not a recognizable CCSDS format
juraj-google-style
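The `loads` function above dispatches on the document's first header keyword. A minimal standalone sketch of the same prefix-dispatch pattern (the two lambdas are stand-ins for the library's real `_read_oem`/`_read_opm` parsers, which are not shown here):

```python
def loads(text):
    """Dispatch a CCSDS document to a parser based on its header keyword."""
    dispatch = {
        "CCSDS_OEM_VERS": lambda t: ("OEM", t),  # stand-in for _read_oem
        "CCSDS_OPM_VERS": lambda t: ("OPM", t),  # stand-in for _read_opm
    }
    for prefix, func in dispatch.items():
        if text.startswith(prefix):
            return func(text)
    raise ValueError("Unknown CCSDS type")
```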
def round_f1_macro(y_true, y_predicted):
    try:
        predictions = [np.round(x) for x in y_predicted]
    except TypeError:
        predictions = y_predicted
    return f1_score(np.array(y_true), np.array(predictions), average="macro")
Calculates F1 macro measure. Args: y_true: list of true values y_predicted: list of predicted values Returns: F1 score
juraj-google-style
def set_extana_callback(self, callback, data=None):
    self.extana_callback = callback
    self.extana_callback_data = data
Register a callback for incoming data packets from the SK8-ExtAna board. This method allows you to pass in a callable which will be called on receipt of each packet sent from the SK8-ExtAna board. Set to `None` to disable it again. Args: callback: a callable with the following signature: (ana1, ana2, temp, seq, timestamp, data) where: ana1, ana2 = current values of the two analogue inputs temp = temperature sensor reading seq = packet sequence number (int, 0-255) timestamp = value of time.time() when packet received data = value of `data` parameter passed to this method data: an optional arbitrary object that will be passed as a parameter to the callback
codesearchnet
def list_keyvaults(access_token, subscription_id, rgname):
    endpoint = ''.join([
        get_rm_endpoint(),
        '/subscriptions/', subscription_id,
        '/resourcegroups/', rgname,
        '/providers/Microsoft.KeyVault/vaults',
        '?api-version=', KEYVAULT_API])
    return do_get_next(endpoint, access_token)
Lists key vaults in the named resource group. Args: access_token (str): A valid Azure authentication token. subscription_id (str): Azure subscription id. rgname (str): Azure resource group name. Returns: HTTP response. 200 OK.
juraj-google-style
def candidates(self, word):
    if self.known([word]):
        return {word}
    res = [x for x in self.edit_distance_1(word)]
    tmp = self.known(res)
    if tmp:
        return tmp
    if self._distance == 2:
        tmp = self.known([x for x in self.__edit_distance_alt(res)])
        if tmp:
            return tmp
    return {word}
Generate possible spelling corrections for the provided word up to an edit distance of two, if and only when needed Args: word (str): The word for which to calculate candidate spellings Returns: set: The set of words that are possible candidates
codesearchnet
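The `candidates` method above widens the search only when needed, relying on an `edit_distance_1` generator that is not shown. A self-contained sketch of the classic Norvig-style implementation that call site suggests (the function name and alphabet are assumptions):

```python
def edit_distance_1(word, alphabet="abcdefghijklmnopqrstuvwxyz"):
    """All strings one edit (delete, transpose, replace, insert) away from word."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [L + R[1:] for L, R in splits if R]
    transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
    replaces = [L + c + R[1:] for L, R in splits if R for c in alphabet]
    inserts = [L + c + R for L, R in splits for c in alphabet]
    return set(deletes + transposes + replaces + inserts)
```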
def en004(self, value=None):
    if value is not None:
        try:
            value = float(value)
        except ValueError:
            raise ValueError(
                'value {} need to be of type float for field `en004`'.format(value))
    self._en004 = value
Corresponds to IDD Field `en004` mean coincident dry-bulb temperature to Enthalpy corresponding to 0.4% annual cumulative frequency of occurrence Args: value (float): value for IDD Field `en004` Unit: kJ/kg if `value` is None it will not be checked against the specification and is assumed to be a missing value Raises: ValueError: if `value` is not a valid value
codesearchnet
def compute_discriminator_reward(self, true_posterior_arr, generated_posterior_arr):
    grad_arr = np.log(true_posterior_arr + 1e-08) + np.log(1 - generated_posterior_arr + 1e-08)
    return grad_arr
Compute discriminator's reward. Args: true_posterior_arr: `np.ndarray` of `true` posterior inferred by the discriminator. generated_posterior_arr: `np.ndarray` of `fake` posterior inferred by the discriminator. Returns: `np.ndarray` of Gradients.
juraj-google-style
def _randomize_direction(base_heading, sigma) -> int:
    val = MissionWeather._gauss(base_heading, sigma)
    val = MissionWeather._normalize_direction(val)
    return val
Creates a variation in direction Args: base_heading: base direction sigma: sigma value for gaussian variation Returns: random direction
juraj-google-style
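`_normalize_direction` is referenced above but not shown. A plausible minimal implementation, assumed from the name, wraps any heading into the [0, 360) compass range:

```python
def normalize_direction(heading: float) -> int:
    """Wrap an arbitrary heading (possibly negative or > 360) into [0, 360)."""
    return int(round(heading)) % 360
```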
def print_treemap(self, format=None, output=sys.stdout, **kwargs):
    treemap = self.as_treemap()
    treemap.print(format=format, output=output, **kwargs)
Print the matrix for self's nodes. Args: format (str): output format (csv, json or text). output (file): file descriptor on which to write.
juraj-google-style
def add_ordinary_node(self, ast_node):
    node = self._add_new_node(ast_node)
    self.leaves = {node}
    return node
Grows the graph by adding an ordinary CFG node. Ordinary nodes are followed by the next node, in lexical order, that is, they become the new leaf set. Args: ast_node: ast.AST Returns: Node
github-repos
def send_rpc(self, address, rpc_id, call_payload, timeout=3.0):
    if not self.connected:
        raise HardwareError('Cannot send an RPC if we are not in a connected state')
    if timeout is None:
        timeout = 3.0
    status = -1
    payload = b''
    recording = None
    if self.connection_interrupted:
        self._try_reconnect()
    if self._record is not None:
        recording = _RecordedRPC(self.connection_string, address, rpc_id, call_payload)
        recording.start()
    try:
        payload = self._loop.run_coroutine(
            self.adapter.send_rpc(0, address, rpc_id, call_payload, timeout))
        status, payload = pack_rpc_response(payload, None)
    except VALID_RPC_EXCEPTIONS as exc:
        status, payload = pack_rpc_response(payload, exc)
    if self._record is not None:
        recording.finish(status, payload)
        self._recording.append(recording)
    if self.connection_interrupted:
        self._try_reconnect()
    return unpack_rpc_response(status, payload, rpc_id, address)
Send an rpc to our connected device. The device must already be connected and the rpc interface open. This method will synchronously send an RPC and wait for the response. Any RPC errors will be raised as exceptions and if there were no errors, the RPC's response payload will be returned as a binary bytearray. See :meth:`AbstractDeviceAdapter.send_rpc` for documentation of the possible exceptions that can be raised here. Args: address (int): The tile address containing the RPC rpc_id (int): The ID of the RPC that we wish to call. call_payload (bytes): The payload containing encoded arguments for the RPC. timeout (float): The maximum number of seconds to wait for the RPC to finish. Defaults to 3s. Returns: bytearray: The RPC's response payload.
codesearchnet
def get_exception_handlers(
    node: astroid.node_classes.NodeNG, exception=Exception
) -> List[astroid.ExceptHandler]:
    context = find_try_except_wrapper_node(node)
    if isinstance(context, astroid.TryExcept):
        return [
            handler for handler in context.handlers
            if error_of_type(handler, exception)
        ]
    return None
Return the collections of handlers handling the exception in arguments. Args: node (astroid.NodeNG): A node that is potentially wrapped in a try except. exception (builtin.Exception or str): exception or name of the exception. Returns: list: the collection of handlers that are handling the exception or None.
juraj-google-style
def __add__(self, other):
    sum_ct = ContingencyTable(*(self.table + other.table).tolist())
    return sum_ct
Add two contingency tables together and return a combined one. Args: other: Another contingency table Returns: Sum of contingency tables
juraj-google-style
def verify(self, obj):
    if len(self._options) == 0:
        raise ValidationError(
            "No options",
            reason='no options given in options verifier, matching not possible',
            object=obj)
    exceptions = {}
    for i, option in enumerate(self._options):
        try:
            obj = option.verify(obj)
            return obj
        except ValidationError as exc:
            exceptions['option_%d' % (i + 1)] = exc.params['reason']
    raise ValidationError(
        "Object did not match any of a set of options",
        reason="object did not match any given option (first failure = '%s')"
               % exceptions['option_1'],
        **exceptions)
Verify that the object conforms to this verifier's schema Args: obj (object): A python object to verify Raises: ValidationError: If there is a problem verifying the dictionary, a ValidationError is thrown with at least the reason key set indicating the reason for the lack of validation.
juraj-google-style
def module_entry(yfile):
    ytxt = yfile.read()
    mp = ModuleParser(ytxt)
    mst = mp.statement()
    submod = mst.keyword == "submodule"
    import_only = True
    rev = ""
    features = []
    includes = []
    rec = {}
    for sst in mst.substatements:
        if not rev and sst.keyword == "revision":
            rev = sst.argument
        elif import_only and sst.keyword in data_kws:
            import_only = False
        elif sst.keyword == "feature":
            features.append(sst.argument)
        elif submod:
            continue
        elif sst.keyword == "namespace":
            rec["namespace"] = sst.argument
        elif sst.keyword == "include":
            rd = sst.find1("revision-date")
            includes.append((sst.argument, rd.argument if rd else None))
    rec["import-only"] = import_only
    rec["features"] = features
    if submod:
        rec["revision"] = rev
        submodmap[mst.argument] = rec
    else:
        rec["includes"] = includes
        modmap[(mst.argument, rev)] = rec
Add entry for one file containing YANG module text. Args: yfile (file): File containing a YANG module or submodule.
juraj-google-style
def git_branch_delete(branch_name):
    if branch_name not in git.protected_branches():
        log.info("Deleting branch <33>{}", branch_name)
        shell.run('git branch -d {}'.format(branch_name))
Delete the given branch. Args: branch_name (str): Name of the branch to delete.
juraj-google-style
def _without_tensor_names(self) -> 'TypeSpec':
    def rename(value):
        if isinstance(value, TypeSpec):
            return value._without_tensor_names()
        return value
    return self._deserialize(nest.map_structure(rename, self._serialize()))
Returns a TypeSpec compatible with `self`, with tensor names removed. Returns: A `TypeSpec` that is compatible with `self`, where the name of any `TensorSpec` is set to `None`.
github-repos
def RegisterParser(cls, parser_class):
    parser_name = parser_class.NAME.lower()
    if parser_name in cls._parser_classes:
        raise KeyError('Parser class already set for name: {0:s}.'.format(
            parser_class.NAME))
    cls._parser_classes[parser_name] = parser_class
Registers a parser class. The parser classes are identified based on their lower case name. Args: parser_class (type): parser class (subclass of BaseParser). Raises: KeyError: if parser class is already set for the corresponding name.
juraj-google-style
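The `RegisterParser` classmethod above is an instance of the class-level registry pattern: a shared dict keyed by a normalized name, with duplicate registration rejected. A self-contained sketch (class and attribute names here are illustrative, not the library's):

```python
class ParserRegistry:
    """Class-level registry of parser classes, keyed by lower-cased NAME."""
    _parser_classes = {}

    @classmethod
    def register(cls, parser_class):
        name = parser_class.NAME.lower()
        if name in cls._parser_classes:
            raise KeyError(
                'Parser class already set for name: {0:s}.'.format(parser_class.NAME))
        cls._parser_classes[name] = parser_class
```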
def __init__(self, sess, hooks):
    _WrappedSession.__init__(self, sess)
    self._hooks = hooks
    self._should_stop = False
Initializes a _HookedSession object. Args: sess: A `tf.compat.v1.Session` or a `_WrappedSession` object. hooks: An iterable of `SessionRunHook' objects.
github-repos
def _compute_hparam_info_from_values(self, name, values):
    result = api_pb2.HParamInfo(name=name, type=api_pb2.DATA_TYPE_UNSET)
    distinct_values = set(
        _protobuf_value_to_string(v) for v in values if _protobuf_value_type(v))
    for v in values:
        v_type = _protobuf_value_type(v)
        if not v_type:
            continue
        if result.type == api_pb2.DATA_TYPE_UNSET:
            result.type = v_type
        elif result.type != v_type:
            result.type = api_pb2.DATA_TYPE_STRING
        if result.type == api_pb2.DATA_TYPE_STRING:
            break
    if result.type == api_pb2.DATA_TYPE_UNSET:
        return None
    if (result.type == api_pb2.DATA_TYPE_STRING
            and len(distinct_values) <= self._max_domain_discrete_len):
        result.domain_discrete.extend(distinct_values)
    return result
Builds an HParamInfo message from the hparam name and list of values. Args: name: string. The hparam name. values: list of google.protobuf.Value messages. The list of values for the hparam. Returns: An api_pb2.HParamInfo message.
codesearchnet
def configure_ospf(self, cmd):
    config = self.get()
    cmds = ['router ospf {}'.format(config['ospf_process_id'])]
    cmds.extend(make_iterable(cmd))
    return super(Ospf, self).configure(cmds)
Allows for a list of OSPF subcommands to be configured. Args: cmd: (list or str): Subcommand to be entered Returns: bool: True if all the commands completed successfully
juraj-google-style
def publish(self, channel, message, pipeline=False):
    if pipeline:
        self._pipeline.publish(channel, message)
    else:
        self._db.publish(channel, message)
Post a message to a given channel. Args: channel (str): Channel where the message will be published message (str): Message to publish pipeline (bool): True, start a transaction block. Default false.
codesearchnet
def WaitForReport(self, report_job):
    service = self._GetReportService()
    report_job_id = service.runReportJob(report_job)['id']
    if self._version > 'v201502':
        status = service.getReportJobStatus(report_job_id)
    else:
        status = service.getReportJob(report_job_id)['reportJobStatus']
    while status != 'COMPLETED' and status != 'FAILED':
        _data_downloader_logger.debug('Report job status: %s', status)
        time.sleep(30)
        if self._version > 'v201502':
            status = service.getReportJobStatus(report_job_id)
        else:
            status = service.getReportJob(report_job_id)['reportJobStatus']
    if status == 'FAILED':
        raise googleads.errors.AdManagerReportError(report_job_id)
    else:
        _data_downloader_logger.debug('Report has completed successfully')
        return report_job_id
Runs a report, then waits (blocks) for the report to finish generating. Args: report_job: The report job to wait for. This may be a dictionary or an instance of the SOAP ReportJob class. Returns: The completed report job's ID as a string. Raises: An AdManagerReportError if the report job fails to complete.
juraj-google-style
def SkipAhead(self, file_object, number_of_characters):
    lines_size = len(self.lines)
    while number_of_characters >= lines_size:
        number_of_characters -= lines_size
        self.lines = ''
        self.ReadLines(file_object)
        lines_size = len(self.lines)
        if lines_size == 0:
            return
    self.lines = self.lines[number_of_characters:]
Skips ahead a number of characters. Args: file_object (dfvfs.FileIO): file-like object. number_of_characters (int): number of characters.
codesearchnet
def sample_frames(self, video: 'torch.Tensor', metadata: Union[VideoMetadata, dict],
                  num_frames: Optional[int] = None, fps: Optional[int] = None,
                  skip_secs: Optional[int] = 1):
    num_frames = num_frames if num_frames is not None else self.num_frames
    fps = fps if fps is not None else self.fps
    total_num_frames = video.shape[0]
    estimated_frames = int(round(fps * metadata['duration']))
    desired_frames = min(estimated_frames, num_frames)
    if desired_frames < 1:
        desired_frames = 1
    start_idx = 0
    end_idx = total_num_frames - 1
    if skip_secs > 0 and metadata['duration'] - 2 * skip_secs > num_frames * fps:
        start_idx = int(skip_secs * metadata['fps'])
        end_idx = int(total_num_frames - skip_secs * metadata['fps'])
    start_idx = max(0, start_idx)
    end_idx = min(end_idx, total_num_frames - 1)
    if start_idx >= end_idx:
        start_idx, end_idx = (0, total_num_frames - 1)
    indices = np.linspace(start_idx, end_idx, desired_frames, dtype=int)
    indices = np.unique(indices)
    video = video[indices].contiguous()
    timestamps = []
    for idx in indices:
        sec = idx / metadata['fps']
        mm = int(sec // 60)
        ss = int(sec % 60)
        timestamps.append([mm, ss])
    return (video, timestamps, int(metadata['duration']))
Video sampling function which: - Uses `num_frames` (if provided) or calculates it from `fps` and metadata. - Applies a basic center-skip if fewer frames than available, otherwise optionally skips `skip_secs` from both the start and end. - Uniformly samples the desired number of frames between the start and end indices. Args: video (`torch.Tensor`): Video that need to be sampled. metadata (`VideoMetadata`): Metadata of the video containing information about total duration, fps and total number of frames. num_frames (`int`, *optional*): Maximum number of frames to sample. Defaults to `self.num_frames`. fps (`int`, *optional*): Target frames to sample per second. Defaults to `self.fps`. skip_secs (`float`, *optional*, defaults to `1`): Number of seconds to skip from the start and end if the video is long enough. Returns: torch.Tensor: Sampled video frames.
github-repos
def parse(self, text):
    tokens = self.lex(text)
    parser = Parser(tokens)
    return parser.parse()
Parse the given text. Args: text (str): the text to lex Returns: object: a node representing the current rule.
juraj-google-style
def safe_datetime_cast(self, col):
    casted_dates = pd.to_datetime(col[self.col_name], format=self.date_format, errors='coerce')
    if len(casted_dates[casted_dates.isnull()]):
        slice_ = casted_dates.isnull() & ~col[self.col_name].isnull()
        col[slice_][self.col_name].apply(self.strptime_format)
    return casted_dates
Parses string values into datetime. Args: col(pandas.DataFrame): Data to transform. Returns: pandas.Series
codesearchnet
def create(self, uri=None, graph=None, data=None):
    if uri is not None:
        existing_entity = self.__dedup__(rdflib.URIRef(uri), graph)
        if existing_entity is not None:
            return
    else:
        default_request = urllib.request.Request(
            '/'.join([self.base_url, 'rest']), method='POST')
        uri = urllib.request.urlopen(default_request).read().decode()
    if graph is not None:
        new_graph = copy_graph(rdflib.URIRef(uri), graph)
        create_response = self.connect(
            uri, data=new_graph.serialize(format='turtle'), method='PUT')
        raw_response = create_response.read()
    return uri
Method takes an optional URI and graph, first checking if the URL is already present in Fedora, if not, creates a Fedora Object with the graph as properties. If URI is None, uses Fedora 4 default PID minter to create the object's URI. Args: uri(string): String of URI, default is None graph(rdflib.Graph): RDF Graph of subject, default is None data(object): Binary datastream that will be saved as fcr:content Returns: URI(string): New Fedora URI or None if uri already exists
codesearchnet
def inner_shape(self): return self._inner_shape
The inner dimension sizes for this shape. Returns: A 1-D integer `Tensor`.
github-repos
def convert_new_publication_info_to_old(publication_infos):
    def _needs_a_hidden_pubnote(journal_title, journal_volume):
        return (journal_title in _JOURNALS_THAT_NEED_A_HIDDEN_PUBNOTE
                and journal_volume in _JOURNALS_THAT_NEED_A_HIDDEN_PUBNOTE[journal_title])

    result = []
    for publication_info in publication_infos:
        _publication_info = copy.deepcopy(publication_info)
        journal_title = _publication_info.get('journal_title')
        try:
            journal_title = _JOURNALS_RENAMED_NEW_TO_OLD[journal_title]
            _publication_info['journal_title'] = journal_title
            result.append(_publication_info)
            continue
        except KeyError:
            pass
        journal_volume = _publication_info.get('journal_volume')
        year = _publication_info.get('year')
        if (journal_title in _JOURNALS_WITH_YEAR_ADDED_TO_VOLUME and year
                and journal_volume and len(journal_volume) == 2):
            two_digit_year = str(year)[2:]
            _publication_info['journal_volume'] = ''.join([two_digit_year, journal_volume])
            result.append(_publication_info)
            continue
        if journal_title and journal_volume:
            match = _RE_TITLE_ENDS_WITH_A_LETTER.match(journal_title)
            if match and _needs_a_hidden_pubnote(journal_title, journal_volume):
                _publication_info['journal_title'] = match.group('title')
                _publication_info['journal_volume'] = journal_volume + match.group('letter')
                result.append(_publication_info)
                _publication_info = copy.deepcopy(publication_info)
                _publication_info['hidden'] = True
                _publication_info['journal_title'] = match.group('title')
                _publication_info['journal_volume'] = match.group('letter') + journal_volume
            elif match and journal_title not in _JOURNALS_ALREADY_ENDING_WITH_A_LETTER:
                _publication_info['journal_title'] = match.group('title')
                _publication_info['journal_volume'] = match.group('letter') + journal_volume
        result.append(_publication_info)
    return result
Convert back a ``publication_info`` value from the new format to the old. Does the inverse transformation of :func:`convert_old_publication_info_to_new`, to be used whenever we are sending back records from Labs to Legacy. Args: publication_infos: a ``publication_info`` in the new format. Returns: list(dict): a ``publication_info`` in the old format.
codesearchnet
def new_netting_channel(self, partner: Address, settle_timeout: int,
                        given_block_identifier: BlockSpecification) -> ChannelID:
    checking_block = self.client.get_checking_block()
    self._new_channel_preconditions(
        partner=partner, settle_timeout=settle_timeout,
        block_identifier=given_block_identifier)
    log_details = {'peer1': pex(self.node_address), 'peer2': pex(partner)}
    gas_limit = self.proxy.estimate_gas(
        checking_block, 'openChannel',
        participant1=self.node_address, participant2=partner,
        settle_timeout=settle_timeout)
    if not gas_limit:
        self.proxy.jsonrpc_client.check_for_insufficient_eth(
            transaction_name='openChannel', transaction_executed=False,
            required_gas=GAS_REQUIRED_FOR_OPEN_CHANNEL,
            block_identifier=checking_block)
        self._new_channel_postconditions(partner=partner, block=checking_block)
        log.critical('new_netting_channel call will fail', **log_details)
        raise RaidenUnrecoverableError('Creating a new channel will fail')
    log.debug('new_netting_channel called', **log_details)
    if gas_limit and partner not in self.open_channel_transactions:
        new_open_channel_transaction = AsyncResult()
        self.open_channel_transactions[partner] = new_open_channel_transaction
        gas_limit = safe_gas_limit(gas_limit, GAS_REQUIRED_FOR_OPEN_CHANNEL)
        try:
            transaction_hash = self.proxy.transact(
                'openChannel', gas_limit,
                participant1=self.node_address, participant2=partner,
                settle_timeout=settle_timeout)
            self.client.poll(transaction_hash)
            receipt_or_none = check_transaction_threw(self.client, transaction_hash)
            if receipt_or_none:
                self._new_channel_postconditions(
                    partner=partner, block=receipt_or_none['blockNumber'])
                log.critical('new_netting_channel failed', **log_details)
                raise RaidenUnrecoverableError('creating new channel failed')
        except Exception as e:
            log.critical('new_netting_channel failed', **log_details)
            new_open_channel_transaction.set_exception(e)
            raise
        else:
            new_open_channel_transaction.set(transaction_hash)
        finally:
            self.open_channel_transactions.pop(partner, None)
    else:
        self.open_channel_transactions[partner].get()
    channel_identifier: ChannelID = self._detail_channel(
        participant1=self.node_address, participant2=partner,
        block_identifier='latest').channel_identifier
    log_details['channel_identifier'] = str(channel_identifier)
    log.info('new_netting_channel successful', **log_details)
    return channel_identifier
Creates a new channel in the TokenNetwork contract. Args: partner: The peer to open the channel with. settle_timeout: The settle timeout to use for this channel. given_block_identifier: The block identifier of the state change that prompted this proxy action Returns: The ChannelID of the new netting channel.
codesearchnet
def set_many(self, values, expire=0, noreply=None):
    if noreply is None:
        noreply = self.default_noreply
    result = self._store_cmd(b'set', values, expire, noreply)
    return [k for k, v in six.iteritems(result) if not v]
A convenience function for setting multiple values. Args: values: dict(str, str), a dict of keys and values, see class docs for details. expire: optional int, number of seconds until the item is expired from the cache, or zero for no expiry (the default). noreply: optional bool, True to not wait for the reply (defaults to self.default_noreply). Returns: Returns a list of keys that failed to be inserted. If noreply is True, always returns empty list.
codesearchnet
def GetFeeds(client):
    feed_service = client.GetService('FeedService', 'v201809')
    feeds = []
    more_pages = True
    selector = {
        'fields': ['Id', 'Name', 'Attributes'],
        'predicates': [
            {'field': 'Origin', 'operator': 'EQUALS', 'values': ['USER']},
            {'field': 'FeedStatus', 'operator': 'EQUALS', 'values': ['ENABLED']}
        ],
        'paging': {'startIndex': 0, 'numberResults': PAGE_SIZE}
    }
    while more_pages:
        page = feed_service.get(selector)
        if 'entries' in page:
            feeds.extend(page['entries'])
        selector['paging']['startIndex'] += PAGE_SIZE
        more_pages = selector['paging']['startIndex'] < int(page['totalNumEntries'])
    return feeds
Returns a list of all enabled Feeds. Args: client: an AdWordsClient instance. Returns: A list containing all enabled Feeds.
juraj-google-style
def __init__(self, min_value: int = 0, max_value: Optional[int] = None):
    super().__init__()
    self._min_value = min_value
    self._max_value = max_value
Constructor. Args: min_value: Min value that is acceptable for the list index. max_value: Max value that is acceptable for the list index. If None, there is no upper bound for list index.
github-repos
def op(self): return self._op
The operation that failed, if known. *N.B.* If the failed op was synthesized at runtime, e.g. a `Send` or `Recv` op, there will be no corresponding `tf.Operation` object. In that case, this will return `None`, and you should instead use the `tf.errors.OpError.node_def` to discover information about the op. Returns: The `Operation` that failed, or None.
github-repos
def thread(self, value: str):
    if value is not None and not isinstance(value, str):
        raise TypeError("'thread' MUST be a string")
    self._thread = value
Set thread id of the message Args: value (str): the thread id
juraj-google-style
def sort_servers_closest(servers: Sequence[str]) -> Sequence[Tuple[str, float]]:
    if not {urlparse(url).scheme for url in servers}.issubset({'http', 'https'}):
        raise TransportError('Invalid server urls')
    get_rtt_jobs = set(
        gevent.spawn(lambda url: (url, get_http_rtt(url)), server_url)
        for server_url in servers)
    gevent.joinall(get_rtt_jobs, raise_error=False)
    sorted_servers: List[Tuple[str, float]] = sorted(
        (job.value for job in get_rtt_jobs if job.value[1] is not None),
        key=itemgetter(1))
    log.debug('Matrix homeserver RTT times', rtt_times=sorted_servers)
    return sorted_servers
Sorts a list of servers by http round-trip time Params: servers: sequence of http server urls Returns: sequence of pairs of url,rtt in seconds, sorted by rtt, excluding failed servers (possibly empty)
codesearchnet
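Setting aside the gevent probing, the core of `sort_servers_closest` is filtering out failed probes and sorting by measured round-trip time. That step can be sketched standalone (pure stdlib; the input is assumed to be `(url, rtt_or_None)` pairs as produced above):

```python
from operator import itemgetter

def sort_by_rtt(results):
    """Keep (url, rtt) pairs whose probe succeeded, sorted fastest first."""
    return sorted((r for r in results if r[1] is not None), key=itemgetter(1))
```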
def readData(self, fileName):
    lock_and_call(
        lambda: self._impl.readData(fileName),
        self._lock
    )
    self._errorhandler_wrapper.check()
Interprets the specified file as an AMPL data file. As a side effect, it invalidates all entities (as the passed file can contain any arbitrary command); the lists of entities will be re-populated lazily (at first access). After reading the file, the interpreter is put back to "model" mode. Args: fileName: Full path to the file. Raises: RuntimeError: in case the file does not exist.
juraj-google-style
def build_phenotype(phenotype_id, adapter):
    phenotype_obj = {}
    phenotype = adapter.hpo_term(phenotype_id)
    if phenotype:
        phenotype_obj['phenotype_id'] = phenotype['hpo_id']
        phenotype_obj['feature'] = phenotype['description']
    return phenotype_obj
Build a small phenotype object Build a dictionary with phenotype_id and description Args: phenotype_id (str): The phenotype id adapter (scout.adapter.MongoAdapter) Returns: phenotype_obj (dict): dict( phenotype_id = str, feature = str, # description of phenotype )
codesearchnet
def is_insert_grad_of_statement(node):
    tangent_calls = [
        anno.getanno(item.context_expr, 'func', None) is utils.insert_grad_of
        for item in node.items]
    if all(tangent_calls):
        return True
    elif any(tangent_calls):
        raise ValueError
    else:
        return False
Check whether a context manager calls `insert_grad_of`. Args: node: The context manager node. Returns: Whether or not this node contains `insert_grad_of` calls. Raises: ValueError: If the `insert_grad_of` calls are mixed with other calls.
codesearchnet
def get_id(page):
    start_pos = page.find("<id>")
    end_pos = page.find("</id>")
    assert start_pos != -1
    assert end_pos != -1
    start_pos += len("<id>")
    return int(page[start_pos:end_pos])
Extract the id from a page. Args: page: a string Returns: an integer
juraj-google-style
def _FormatPackedIPv6Address(self, packed_ip_address):
    octet_pairs = zip(packed_ip_address[0::2], packed_ip_address[1::2])
    octet_pairs = [(octet1 << 8) | octet2 for octet1, octet2 in octet_pairs]
    return ':'.join([
        '{0:04x}'.format(octet_pair) for octet_pair in octet_pairs])
Formats a packed IPv6 address as a human readable string. Args: packed_ip_address (list[int]): packed IPv6 address. Returns: str: human readable IPv6 address.
codesearchnet
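The zip-of-slices trick used by `_FormatPackedIPv6Address` (pairing even and odd bytes, then combining each pair into a 16-bit group) can be exercised as a standalone function:

```python
def format_packed_ipv6(packed):
    """Render a 16-byte packed IPv6 address as eight 4-hex-digit groups."""
    pairs = zip(packed[0::2], packed[1::2])  # (high byte, low byte) per group
    return ":".join("{0:04x}".format((hi << 8) | lo) for hi, lo in pairs)
```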
def encode_categorical_inputs(inputs, output_mode, depth, dtype, sparse=False,
                              count_weights=None, backend_module=None):
    backend_module = backend_module or backend
    if output_mode == 'int':
        return backend_module.cast(inputs, dtype=dtype)
    rank_of_inputs = len(backend_module.shape(inputs))
    if rank_of_inputs == 0:
        inputs = backend_module.numpy.expand_dims(inputs, -1)
        rank_of_inputs = 1
    if (backend_module.__name__.endswith('tensorflow') and rank_of_inputs <= 2
            and output_mode in ('multi_hot', 'count')):
        try:
            return tf_utils.tf_encode_categorical_inputs(
                inputs, output_mode, depth, dtype=dtype, sparse=sparse,
                count_weights=count_weights)
        except ValueError:
            pass
    if output_mode == 'multi_hot':
        return backend_module.nn.multi_hot(inputs, depth, dtype=dtype, sparse=sparse)
    elif output_mode == 'one_hot':
        input_shape = backend_module.core.shape(inputs)
        if input_shape is not None and len(input_shape) > 1 and input_shape[-1] == 1:
            newshape = tuple(input_shape[:-1])
            inputs = backend_module.numpy.reshape(inputs, newshape)
        return backend_module.nn.one_hot(inputs, depth, dtype=dtype, sparse=sparse)
    elif output_mode == 'count':
        reduction_axis = 1 if len(inputs.shape) > 1 else 0
        if count_weights is not None:
            dtype = count_weights.dtype
        one_hot_encoding = backend_module.nn.one_hot(inputs, depth, dtype=dtype, sparse=sparse)
        if count_weights is not None:
            count_weights = backend_module.numpy.expand_dims(count_weights, -1)
            one_hot_encoding = one_hot_encoding * count_weights
        outputs = backend_module.numpy.sum(one_hot_encoding, axis=reduction_axis)
        return outputs
Encodes categorical inputs according to output_mode. Args: inputs: the inputs to encode. output_mode: one of `"int"`, `"one_hot"`, `"multi_hot"`, or `"count"`. depth: number of classes, this will be the last dimension of the output. dtype: the dtype of the output, unless `count_weights` is not `None`. sparse: whether the output should be sparse for backends supporting it. count_weights: weights to apply if `output_mode` is `"count"`. backend_module: the backend to use instead of the current one. Returns: the encoded inputs.
github-repos
def check(self, version):
    for disjunct in self._disjuncts:
        if self._check_insersection(version, disjunct):
            return True
    return False
Check that a version is inside this SemanticVersionRange Args: version (SemanticVersion): The version to check Returns: bool: True if the version is included in the range, False if not
codesearchnet
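Checking membership in a union of ranges, as `check` does over its disjuncts, reduces to `any()` over the parts. A minimal sketch with plain numeric half-open intervals standing in for `SemanticVersion` ranges:

```python
def in_any_range(version, disjuncts):
    """True if `version` falls inside at least one (low, high) half-open interval."""
    return any(low <= version < high for low, high in disjuncts)
```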
def set_fore(self, x: int, y: int, r: int, g: int, b: int, char: str) -> None:
    i = self.width * y + x
    self.fore_r[i] = r
    self.fore_g[i] = g
    self.fore_b[i] = b
    self.char[i] = ord(char)
Set the character and foreground color of one cell. Args: x (int): X position to change. y (int): Y position to change. r (int): Red foreground color, from 0 to 255. g (int): Green foreground color, from 0 to 255. b (int): Blue foreground color, from 0 to 255. char (AnyStr): A single character str or bytes object.
juraj-google-style
def verify(self, verify_key):
    if not self.mardata.signatures or not self.mardata.signatures.sigs:
        return False
    hashers = []
    for sig in self.mardata.signatures.sigs:
        hashers.append((sig.algorithm_id, sig.signature, make_hasher(sig.algorithm_id)))
    assert len(hashers) == len(self.mardata.signatures.sigs)
    for block in get_signature_data(self.fileobj, self.mardata.signatures.filesize):
        for _, _, h in hashers:
            h.update(block)
    for algo_id, sig, h in hashers:
        if not verify_signature(verify_key, sig, h.finalize(), h.algorithm.name):
            return False
    return True
Verify that this MAR file has a valid signature. Args: verify_key (str): PEM formatted public key Returns: True if the MAR file's signature matches its contents False otherwise; this includes cases where there is no signature.
codesearchnet
def __init__(self, dataset, coordinator): if isinstance(dataset, input_lib.DistributedDataset): original_dataset = dataset._original_dataset serialized = serialize_dataset_to_graph(original_dataset) def dataset_fn(): deserialized = deserialize_dataset_from_graph(serialized, original_dataset.element_spec) dataset.build(dataset_to_replace=deserialized) return dataset elif isinstance(dataset, input_lib.DistributedDatasetsFromFunction): def dataset_fn(): dataset.build() return dataset elif isinstance(dataset, dataset_ops.Dataset): serialized = serialize_dataset_to_graph(dataset) def dataset_fn(): return deserialize_dataset_from_graph(serialized, dataset.element_spec) else: raise ValueError('Unexpected dataset type!') super(PerWorkerDatasetFromDataset, self).__init__(dataset_fn, coordinator)
Makes an iterable from datasets created by the given dataset. It creates a dataset_fn which deserializes a dataset from a graph under the hood. Args: dataset: A tf.data.Dataset, a DistributedDataset or a DistributedDatasetsFromFunction coordinator: a `ClusterCoordinator` object, used to create dataset resources.
github-repos
def store_inputs(self, line_num, source, source_raw=None): self.old.store_inputs(line_num, source, source_raw) self.decorator.pre_run_cell(line_num, source)
Store source and raw input in history and create input cache variables ``_i*``. Args: line_num (int): The prompt number of this input. source (str): Python input. source_raw (str): If given, this is the raw input without any IPython transformations applied to it. If not given, ``source`` is used.
juraj-google-style
def from_iterables(ig_info: fhir_package.IgInfo, structure_definitions: Iterable[structure_definition_pb2.StructureDefinition], search_parameters: Iterable[search_parameter_pb2.SearchParameter], code_systems: Iterable[code_system_pb2.CodeSystem], value_sets: Iterable[value_set_pb2.ValueSet], resource_time_zone: str='Z') -> fhir_package.FhirPackage[structure_definition_pb2.StructureDefinition, search_parameter_pb2.SearchParameter, code_system_pb2.CodeSystem, value_set_pb2.ValueSet]: return fhir_package.FhirPackage(ig_info=ig_info, structure_definitions=fhir_package.ResourceCollection.from_iterable(structure_definitions, structure_definition_pb2.StructureDefinition, _PRIMITIVE_HANDLER, resource_time_zone), search_parameters=fhir_package.ResourceCollection.from_iterable(search_parameters, search_parameter_pb2.SearchParameter, _PRIMITIVE_HANDLER, resource_time_zone), code_systems=fhir_package.ResourceCollection.from_iterable(code_systems, code_system_pb2.CodeSystem, _PRIMITIVE_HANDLER, resource_time_zone), value_sets=fhir_package.ResourceCollection.from_iterable(value_sets, value_set_pb2.ValueSet, _PRIMITIVE_HANDLER, resource_time_zone))
Builds a FHIR R4 `FhirPackage` containing the given resources. Args: ig_info: The metadata to associate with the `FhirPackage`. structure_definitions: The structure definitions to include in the `FhirPackage`. search_parameters: The search parameters to include in the `FhirPackage`. code_systems: The code systems to include in the `FhirPackage`. value_sets: The value sets to include in the `FhirPackage`. resource_time_zone: If additional JSON resources are added to the `FhirPackage`, the time zone code to parse resource dates into when adding those JSON resources. Returns: A `FhirPackage` instance with the requested resources.
github-repos
def authenticate(json_path=None): msg = ('budou.authentication() is deprecated. ' 'Please use budou.get_parser() to obtain a parser instead.') warnings.warn(msg, DeprecationWarning) parser = get_parser('nlapi', credentials_path=json_path) return parser
Gets a Natural Language API parser by authenticating the API. **This method is deprecated.** Please use :obj:`budou.get_parser` to obtain a parser instead. Args: json_path (:obj:`str`, optional): The file path to the service account's credentials. Returns: Parser. (:obj:`budou.parser.NLAPIParser`)
juraj-google-style
def _split_heads(self, fused_qkv: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]: if self.new_decoder_architecture: batch, seq_len, _ = fused_qkv.shape qkv = fused_qkv.view(batch, seq_len, -1, self.num_heads // self.num_kv_heads + 2, self.head_dim) query = qkv[:, :, :, :-2] key = qkv[:, :, :, [-2]] value = qkv[:, :, :, [-1]] key = torch.broadcast_to(key, query.shape) value = torch.broadcast_to(value, query.shape) query, key, value = [x.flatten(2, 3) for x in (query, key, value)] return (query, key, value) elif not self.multi_query: batch_size, seq_length, three_times_hidden_size = fused_qkv.shape fused_qkv = fused_qkv.view(batch_size, seq_length, self.num_heads, 3, self.head_dim) return (fused_qkv[..., 0, :], fused_qkv[..., 1, :], fused_qkv[..., 2, :]) else: batch_size, seq_length, three_times_hidden_size = fused_qkv.shape fused_qkv = fused_qkv.view(batch_size, seq_length, self.num_heads + 2, self.head_dim) return (fused_qkv[..., :-2, :], fused_qkv[..., [-2], :], fused_qkv[..., [-1], :])
Split the last dimension into (num_heads, head_dim), results share same memory storage as `fused_qkv` Args: fused_qkv (`torch.tensor`): [batch_size, seq_length, num_heads * 3 * head_dim] Returns: query: [batch_size, seq_length, num_heads, head_dim] key: [batch_size, seq_length, num_heads, head_dim] value: [batch_size, seq_length, num_heads, head_dim]
github-repos
def get_beta(self, kl_loss=0.0): if self.hparams.latent_loss_multiplier_dynamic: beta = tf.Variable(self.hparams.latent_loss_multiplier, trainable=False, dtype=tf.float32) alpha = self.hparams.latent_loss_multiplier_alpha epsilon = self.hparams.latent_loss_multiplier_epsilon shadow_beta = (beta + (alpha * (kl_loss - epsilon))) shadow_beta = tf.maximum(shadow_beta, 0.0) shadow_beta = tf.minimum(shadow_beta, 1.0) update_op = tf.assign(beta, shadow_beta) else: beta = common_video.beta_schedule(schedule=self.hparams.latent_loss_multiplier_schedule, global_step=self.get_iteration_num(), final_beta=self.hparams.latent_loss_multiplier, decay_start=(self.hparams.num_iterations_1st_stage + self.hparams.num_iterations_2nd_stage), decay_end=self.hparams.anneal_end) update_op = tf.identity(beta) with tf.control_dependencies([update_op]): tf.summary.scalar('beta', beta) return beta
Get the KL multiplier, either dynamically or schedule based. If hparams.latent_loss_multiplier_dynamic is set to true, then beta is adjusted to keep KL under hparams.latent_loss_multiplier_epsilon. In order to do so, beta is updated at each iteration by taking steps of size hparams.latent_loss_multiplier_alpha. The same formulation can be derived by solving the Lagrangian with KL < epsilon as a constraint. Args: kl_loss: KL loss. Only used for dynamic adjustment. Returns: beta: the final value of beta.
codesearchnet
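As a side note, the dynamic branch above implements the clipped update beta ← clip(beta + alpha * (kl − epsilon), 0, 1). A minimal standalone sketch of that rule (illustrative only, no TensorFlow):

```python
def update_beta(beta, kl_loss, alpha, epsilon):
    # Step beta up when KL exceeds epsilon, down otherwise,
    # then clamp to [0, 1] as the code above does.
    beta = beta + alpha * (kl_loss - epsilon)
    return min(max(beta, 0.0), 1.0)

# KL above epsilon pushes beta up; far above, it saturates at 1.
beta = update_beta(0.5, kl_loss=2.0, alpha=0.1, epsilon=1.0)
```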
def preprocess_for_train(image, image_size=224, normalize=True): if normalize: image = tf.to_float(image) / 255.0 image = _random_crop(image, image_size) if normalize: image = _normalize(image) image = _flip(image) image = tf.reshape(image, [image_size, image_size, 3]) return image
Preprocesses the given image for training. Args: image: `Tensor` representing an image of arbitrary size. image_size: int, how large the output image should be. normalize: bool, if True the image is normalized. Returns: A preprocessed image `Tensor`.
juraj-google-style
def _normalize_pattern(pattern): if pattern.startswith('regex:'): pattern_type = 'regex' pattern = pattern[len('regex:'):] elif pattern.startswith('wildcard:'): pattern_type = 'wildcard' pattern = pattern[len('wildcard:'):] elif pattern.startswith('literal:'): pattern_type = 'literal' pattern = pattern[len('literal:'):] elif RegexRoute.like(pattern): pattern_type = 'regex' elif WildcardRoute.like(pattern): pattern_type = 'wildcard' else: pattern_type = 'literal' return (pattern_type, pattern)
Return a normalized form of the pattern. Normalize the pattern by removing the pattern type prefix if it exists in the pattern. Then return the pattern type and the pattern as a tuple of two strings. Arguments: pattern (str): Route pattern to match request paths Returns: tuple: Tuple of pattern type (str) and pattern (str)
codesearchnet
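For illustration, the prefix-stripping logic above can be sketched standalone; the syntax checks here are simplified stand-ins for `RegexRoute.like` and `WildcardRoute.like`:

```python
def normalize_pattern(pattern):
    # Strip an explicit "type:" prefix when present; otherwise guess
    # the type from the pattern's syntax (simplified heuristics).
    for kind in ('regex', 'wildcard', 'literal'):
        prefix = kind + ':'
        if pattern.startswith(prefix):
            return kind, pattern[len(prefix):]
    if any(ch in pattern for ch in '^$\\'):
        return 'regex', pattern
    if '<' in pattern and '>' in pattern:
        return 'wildcard', pattern
    return 'literal', pattern
```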
def run_missing_simulations(self, param_list, runs=None): if isinstance(param_list, dict): param_list = list_param_combinations(param_list) self.run_simulations(self.get_missing_simulations(param_list, runs))
Run the simulations from the parameter list that are not yet available in the database. This function also makes sure that we have at least ``runs`` replications for each parameter combination. Additionally, param_list can either be a list containing the desired parameter combinations or a dictionary containing multiple values for each parameter, to be expanded into a list. Args: param_list (list, dict): either a list of parameter combinations or a dictionary to be expanded into a list through the list_param_combinations function. runs (int): the number of runs to perform for each parameter combination. This parameter is only allowed if the param_list specification doesn't feature an 'RngRun' key already.
codesearchnet
def db_dp010(self, value=None): if value is not None: try: value = float(value) except ValueError: raise ValueError('value {} need to be of type float ' 'for field `db_dp010`'.format(value)) self._db_dp010 = value
Corresponds to IDD Field `db_dp010` mean coincident dry-bulb temperature to Dew-point temperature corresponding to 1.0% annual cumulative frequency of occurrence Args: value (float): value for IDD Field `db_dp010` Unit: C if `value` is None it will not be checked against the specification and is assumed to be a missing value Raises: ValueError: if `value` is not a valid value
juraj-google-style
def _SerializeRequest(self, request): parsed = urllib_parse.urlsplit(request.url) request_line = urllib_parse.urlunsplit( ('', '', parsed.path, parsed.query, '')) if not isinstance(request_line, six.text_type): request_line = request_line.decode('utf-8') status_line = u' '.join(( request.http_method, request_line, u'HTTP/1.1\n' )) major, minor = request.headers.get( 'content-type', 'application/json').split('/') msg = mime_nonmultipart.MIMENonMultipart(major, minor) for key, value in request.headers.items(): if key == 'content-type': continue msg[key] = value msg['Host'] = parsed.netloc msg.set_unixfrom(None) if request.body is not None: msg.set_payload(request.body) str_io = six.StringIO() gen = generator.Generator(str_io, maxheaderlen=0) gen.flatten(msg, unixfrom=False) body = str_io.getvalue() return status_line + body
Convert a http_wrapper.Request object into a string. Args: request: A http_wrapper.Request to serialize. Returns: The request as a string in application/http format.
juraj-google-style
def validate_detector(self, detector): resp = self._post(self._u(self._DETECTOR_ENDPOINT_SUFFIX, 'validate'), data=detector) resp.raise_for_status()
Validate a detector. Validates the given detector; throws a 400 Bad Request HTTP error if the detector is invalid; otherwise doesn't return or throw anything. Args: detector (object): the detector model object. Will be serialized as JSON.
codesearchnet
def _write_class_markdown_to_file(self, f, name, cls): methods = dict(self.get_class_members(name, cls)) num_methods = len(methods) try: self._write_docstring_markdown_to_file(f, ' except ValueError as e: raise ValueError((str(e) + (' in class `%s`' % cls.__name__))) any_method_called_out = (len(methods) != num_methods) if any_method_called_out: other_methods = {n: m for (n, m) in methods.items() if (n in cls.__dict__)} if other_methods: print('\n else: other_methods = methods for name in sorted(other_methods): self._write_member_markdown_to_file(f, '
Write the class doc to `f`. Args: f: File to write to. name: name to use. cls: class object.
codesearchnet
def indent(lines, amount=2, char=' '): lines = str(lines) padding = amount * char return padding + ('\n' + padding).join(lines.split('\n'))
Indent a string. Prepends whitespace to every line in the passed string. (Lines are separated by newline characters.) Args: lines (str): The string to indent. Keyword Args: amount (int): The number of columns to indent by. char (str): The character to use as the indentation. Returns: str: The indented string. Example: >>> print(indent('line1\nline2', char='*')) **line1 **line2
juraj-google-style
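A quick usage sketch of the record above, reproduced standalone so it runs without the surrounding module:

```python
def indent(lines, amount=2, char=' '):
    # Prefix every newline-separated line with `amount` copies of `char`.
    padding = amount * char
    return padding + ('\n' + padding).join(str(lines).split('\n'))

print(indent('line1\nline2', char='*'))
# **line1
# **line2
```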
def get_gradients(self, loss, params): grads = backend.gradients(loss, params) if any((g is None for g in grads)): raise ValueError('An operation has `None` for gradient. Please make sure that all of your ops have a gradient defined (i.e. are differentiable). Common ops without gradient: backend.argmax, backend.round, backend.eval.') if hasattr(self, 'clipnorm'): grads = [clip_ops.clip_by_norm(g, self.clipnorm) for g in grads] if hasattr(self, 'clipvalue'): grads = [clip_ops.clip_by_value(g, -self.clipvalue, self.clipvalue) for g in grads] return grads
Returns gradients of `loss` with respect to `params`. Args: loss: Loss tensor. params: List of variables. Returns: List of gradient tensors. Raises: ValueError: In case any gradient cannot be computed (e.g. if gradient function not implemented).
github-repos
def train(self, X): _trainer = bob.learn.linear.CGLogRegTrainer(**{'lambda': self.regularizer}) if (len(X) == 2): return _trainer.train(add_bias(X[0]), add_bias(X[1])) else: machines = [] for k in range(len(X)): NC_range = (list(range(0, k)) + list(range((k + 1), len(X)))) machines.append(_trainer.train(add_bias(numpy.vstack(X[NC_range])), add_bias(X[k]))) return MultiClassMachine(machines)
Trains multiple logistic regression classifiers to handle the multiclass problem posed by ``X`` Args: X (numpy.ndarray): The input data matrix. This must be a numpy.ndarray with 3 dimensions or an iterable containing 2 numpy.ndarrays with 2 dimensions each. Each corresponds to the data for one of the input classes, every row corresponds to one example of the data set, every column, one different feature. Returns: Machine: A trained multiclass machine.
codesearchnet
def __init__(self, sparse, map_op, rank): self._sparse = sparse self._map_op = map_op self._rank = tensor_shape.as_dimension(rank)
Create the metadata. Args: sparse: Python boolean. map_op: The `Operation` that created the `SparseTensorsMap` in question. This Op contains information about the underlying Map object and the dtype of the original data. rank: The statically known rank of the `SparseTensor`.
github-repos
def is_interactive_logging_enabled(): return global_state.get_global_attribute('interactive_logging', True)
Check if interactive logging is enabled. To switch between writing logs to stdout and `absl.logging`, you may use `keras.config.enable_interactive_logging()` and `keras.config.disable_interactive_logging()`. Returns: Boolean, `True` if interactive logging is enabled, and `False` otherwise.
github-repos
def get_initialization_function(self, *args, **kwargs): with self._lock: if self._variable_creation_config is not None: raise RuntimeError('get_initialization_function cannot be called after the function has been used') initializers = [] self._initialize(args, kwargs, add_initializers_to=initializers) def initialize_variables(): for v, init in initializers: v.assign(lift_to_graph.lift_to_graph([init], ops.get_default_graph())[init], read_value=False) options = tracing_compilation.TracingOptions(initialize_variables, 'initialize_variables') return tracing_compilation.trace_function(tracing_options=options)
Returns a `ConcreteFunction` which initializes this function's variables. Requires that this function hasn't been accessed yet through either calling it or calling get_concrete_function. Fails if we cannot build an initializer function which does not depend on the concrete values of the inputs to this function. Note that running this function will overwrite any values currently assigned to variables, for example restores from a checkpoint. Args: *args: arguments to the underlying python callable. **kwargs: keyword arguments to the python callable. Returns: A `ConcreteFunction` object which initializes the variables of this function. Raises: RuntimeError: if called after the variables have been initialized.
github-repos
def _replace_row_partitions(value, new_partitions): if isinstance(value, tensor.Tensor) or not new_partitions: return value elif isinstance(value, ragged_tensor.RaggedTensor): return ragged_tensor.RaggedTensor._from_row_partition(values=_replace_row_partitions(value.values, new_partitions[1:]), row_partition=new_partitions[0]) else: assert isinstance(value, StructuredTensor) new_fields = dict(((k, _replace_row_partitions(v, new_partitions)) for k, v in value._fields.items())) return StructuredTensor._old_init(fields=new_fields, shape=value.shape, nrows=value.nrows(), row_partitions=tuple(new_partitions) + tuple(value.row_partitions[len(new_partitions):]))
Updates `value` to use `new_partitions` as its (outer) row partitions. This is used to ensure that all fields in a `StructuredTensor` use identical `RowPartition` objects for the shared dimensions. In particular, `StructuredTensor.from_fields` first merges all of the row partitions from any fields, and then replaces the outer row partitions of all fields with the merged row partitions (using this function). Args: value: A `Tensor`, `RaggedTensor`, or `StructuredTensor`. new_partitions: A list of row-partitions that should be used by `value`. Must be equivalent to `value`'s current row partitions. Returns: A value that is equivalent to `value`, where outer row partitions have been replaced by `new_partitions`.
github-repos
def parsed_aggregate_reports_to_csv(reports): def to_str(obj): return str(obj).lower() fields = ["xml_schema", "org_name", "org_email", "org_extra_contact_info", "report_id", "begin_date", "end_date", "errors", "domain", "adkim", "aspf", "p", "sp", "pct", "fo", "source_ip_address", "source_country", "source_reverse_dns", "source_base_domain", "count", "disposition", "dkim_alignment", "spf_alignment", "policy_override_reasons", "policy_override_comments", "envelope_from", "header_from", "envelope_to", "dkim_domains", "dkim_selectors", "dkim_results", "spf_domains", "spf_scopes", "spf_results"] csv_file_object = StringIO(newline="\n") writer = DictWriter(csv_file_object, fields) writer.writeheader() if type(reports) == OrderedDict: reports = [reports] for report in reports: xml_schema = report["xml_schema"] org_name = report["report_metadata"]["org_name"] org_email = report["report_metadata"]["org_email"] org_extra_contact = report["report_metadata"]["org_extra_contact_info"] report_id = report["report_metadata"]["report_id"] begin_date = report["report_metadata"]["begin_date"] end_date = report["report_metadata"]["end_date"] errors = "|".join(report["report_metadata"]["errors"]) domain = report["policy_published"]["domain"] adkim = report["policy_published"]["adkim"] aspf = report["policy_published"]["aspf"] p = report["policy_published"]["p"] sp = report["policy_published"]["sp"] pct = report["policy_published"]["pct"] fo = report["policy_published"]["fo"] report_dict = dict(xml_schema=xml_schema, org_name=org_name, org_email=org_email, org_extra_contact_info=org_extra_contact, report_id=report_id, begin_date=begin_date, end_date=end_date, errors=errors, domain=domain, adkim=adkim, aspf=aspf, p=p, sp=sp, pct=pct, fo=fo) for record in report["records"]: row = report_dict row["source_ip_address"] = record["source"]["ip_address"] row["source_country"] = record["source"]["country"] row["source_reverse_dns"] = record["source"]["reverse_dns"] row["source_base_domain"] = 
record["source"]["base_domain"] row["count"] = record["count"] row["disposition"] = record["policy_evaluated"]["disposition"] row["spf_alignment"] = record["policy_evaluated"]["spf"] row["dkim_alignment"] = record["policy_evaluated"]["dkim"] policy_override_reasons = list(map( lambda r: r["type"], record["policy_evaluated"] ["policy_override_reasons"])) policy_override_comments = list(map( lambda r: r["comment"] or "none", record["policy_evaluated"] ["policy_override_reasons"])) row["policy_override_reasons"] = ",".join( policy_override_reasons) row["policy_override_comments"] = "|".join( policy_override_comments) row["envelope_from"] = record["identifiers"]["envelope_from"] row["header_from"] = record["identifiers"]["header_from"] envelope_to = record["identifiers"]["envelope_to"] row["envelope_to"] = envelope_to dkim_domains = [] dkim_selectors = [] dkim_results = [] for dkim_result in record["auth_results"]["dkim"]: dkim_domains.append(dkim_result["domain"]) if "selector" in dkim_result: dkim_selectors.append(dkim_result["selector"]) dkim_results.append(dkim_result["result"]) row["dkim_domains"] = ",".join(map(to_str, dkim_domains)) row["dkim_selectors"] = ",".join(map(to_str, dkim_selectors)) row["dkim_results"] = ",".join(map(to_str, dkim_results)) spf_domains = [] spf_scopes = [] spf_results = [] for spf_result in record["auth_results"]["spf"]: spf_domains.append(spf_result["domain"]) spf_scopes.append(spf_result["scope"]) spf_results.append(spf_result["result"]) row["spf_domains"] = ",".join(map(to_str, spf_domains)) row["spf_scopes"] = ",".join(map(to_str, spf_scopes)) row["spf_results"] = ",".join(map(to_str, dkim_results)) writer.writerow(row) csv_file_object.flush() return csv_file_object.getvalue()
Converts one or more parsed aggregate reports to flat CSV format, including headers Args: reports: A parsed aggregate report or list of parsed aggregate reports Returns: str: Parsed aggregate report data in flat CSV format, including headers
juraj-google-style
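Setting the DMARC specifics aside, the CSV mechanics in the record above reduce to: write one header, then one dict per record. A minimal standalone sketch (illustrative field names, not the report schema):

```python
import csv
import io

def rows_to_csv(rows, fields):
    # Emit a header once, then each dict row under the same field order.
    buf = io.StringIO(newline='\n')
    writer = csv.DictWriter(buf, fields)
    writer.writeheader()
    for row in rows:
        writer.writerow(row)
    return buf.getvalue()

text = rows_to_csv([{'domain': 'example.com', 'count': 3}],
                   ['domain', 'count'])
```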
def _PageThroughPqlSet(self, pql_query, output_function, values): if isinstance(values, dict): values = PQLHelper.GetQueryValuesFromDict(values, self._version) pql_service = self._GetPqlService() current_offset = 0 while True: query_w_limit_offset = '%s LIMIT %d OFFSET %d' % (pql_query, SUGGESTED_PAGE_LIMIT, current_offset) response = pql_service.select({'query': query_w_limit_offset, 'values': values}) if 'rows' in response: if current_offset == 0: header = response['columnTypes'] output_function([label['labelName'] for label in header]) entities = response['rows'] result_set_size = len(entities) for entity in entities: output_function([self._ConvertValueForCsv(value) for value in entity['values']]) current_offset += result_set_size if result_set_size != SUGGESTED_PAGE_LIMIT: break else: break
Pages through a pql_query and performs an action (output_function). Args: pql_query: str a statement filter to apply (the query should not include the limit or the offset) output_function: the function to call to output the results (csv or in memory) values: A dict of python objects or a list of raw SOAP values to bind to the pql_query.
juraj-google-style
def validate_test_result(result): buckets = [(result.passed, records.TestResultEnums.TEST_RESULT_PASS), (result.failed, records.TestResultEnums.TEST_RESULT_FAIL), (result.error, records.TestResultEnums.TEST_RESULT_ERROR), (result.skipped, records.TestResultEnums.TEST_RESULT_SKIP)] for bucket_list, expected_enum in buckets: for record in bucket_list: if record.result != expected_enum: raise AssertionError('Expected result %s, got %s.' % (expected_enum, record.result))
Validate basic properties of a test result. The records in each bucket of the test result should have the corresponding result enum. Args: result: The `records.TestResult` object to validate.
github-repos
def And(exprs): return simplify_exprs(exprs, _And, FALSE, TRUE)
Create a conjunction or its simplified equivalent. This will ensure that, when an _And is returned, none of its immediate subterms is TRUE, FALSE, or another conjunction. Args: exprs: An iterable. The subterms. Returns: A BooleanTerm.
github-repos
async def _populate_fields(self, example: Example, client: GRPCClient): if example.tag.never_run: logging.info('populating example fields from provided files %s', example.filepath) self._populate_from_repo(example) else: await self._populate_from_runner(example, client)
Populate fields of the example reading them from the backend or from the repository. Args: example: beam example that should be verified client: gRPC client used to run the example on the backend
github-repos
def fts_count(self, fts, inv): return len(list(filter(lambda s: self.fts_match(fts, s), inv)))
Return the count of segments in an inventory matching a given feature mask. Args: fts (set): feature mask given as a set of (value, feature) tuples inv (set): inventory of segments (as Unicode IPA strings) Returns: int: number of segments in `inv` that match feature mask `fts`
juraj-google-style
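As an aside, the mask test underlying `fts_match` is plain set inclusion. A standalone sketch, where the inventory maps symbols to pre-computed feature sets (a simplification of the original, which derives features from IPA strings):

```python
def fts_match(mask, segment_features):
    # A segment matches if every (value, feature) pair in the mask
    # appears in the segment's feature set.
    return mask <= segment_features

def fts_count(mask, inventory):
    # Count inventory segments whose features satisfy the mask.
    return sum(1 for seg in inventory.values() if fts_match(mask, seg))

inventory = {
    'p': {('-', 'voi'), ('-', 'cont')},
    'b': {('+', 'voi'), ('-', 'cont')},
    's': {('-', 'voi'), ('+', 'cont')},
}
print(fts_count({('-', 'voi')}, inventory))  # 2
```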
async def has_commit_landed_on_repository(self, context, revision): if not _is_git_full_hash(revision): revision = self.get_tag_hash(tag_name=revision) repo = self._github_repository.html_url url = '/'.join([repo.rstrip('/'), 'branch_commits', revision]) html_data = await retry_request(context, url) html_text = html_data.strip() return html_text != ''
Tell if a commit was landed on the repository or if it just comes from a pull request. Args: context (scriptworker.context.Context): the scriptworker context. revision (str): the commit hash or the tag name. Returns: bool: True if the commit is present in one of the branches of the main repository
juraj-google-style
def add_checkpoint_values_check(object_graph_proto): parents = {} checkpointed_trackables = set() for node_id, object_proto in enumerate(object_graph_proto.nodes): if object_proto.attributes or object_proto.slot_variables or object_proto.HasField('registered_saver'): checkpointed_trackables.add(node_id) for child_proto in object_proto.children: child = child_proto.node_id if child not in parents: parents[child] = set() parents[child].add(node_id) to_visit = set() to_visit.update(checkpointed_trackables) while to_visit: trackable = to_visit.pop() if trackable not in parents: continue current_parents = parents.pop(trackable) checkpointed_trackables.update(current_parents) for parent in current_parents: if parent in parents: to_visit.add(parent) for node_id, object_proto in enumerate(object_graph_proto.nodes): object_proto.has_checkpoint_values.value = bool(node_id in checkpointed_trackables)
Determines which objects have checkpoint values and save this to the proto. Args: object_graph_proto: A `TrackableObjectGraph` proto.
github-repos
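The propagation step above is a generic graph pass: mark every ancestor of a directly checkpointed node. A proto-free sketch of the same idea (illustrative names):

```python
def mark_checkpointed(children, directly_checkpointed):
    # children: node -> list of child nodes (a DAG).
    # Invert the edges, then walk upward from the checkpointed set,
    # mirroring the to_visit loop in the function above.
    parents = {}
    for node, kids in children.items():
        for kid in kids:
            parents.setdefault(kid, set()).add(node)
    marked = set(directly_checkpointed)
    to_visit = set(directly_checkpointed)
    while to_visit:
        node = to_visit.pop()
        for parent in parents.get(node, ()):
            if parent not in marked:
                marked.add(parent)
                to_visit.add(parent)
    return marked

children = {'root': ['a', 'b'], 'a': ['c'], 'b': [], 'c': []}
print(mark_checkpointed(children, {'c'}))  # {'c', 'a', 'root'}
```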
def ignore_path(path): ignore = False for name in ['.tox', 'dist', 'build', 'node_modules', 'htmlcov']: if (path.find(name) >= 0): ignore = True break return ignore
Verify whether to ignore a path. Args: path (str): path to check. Returns: bool: True when to ignore given path.
codesearchnet
def is_struct(declaration): if not is_class(declaration): return False decl = class_traits.get_declaration(declaration) return decl.class_type == class_declaration.CLASS_TYPES.STRUCT
Returns True if declaration represents a C++ struct Args: declaration (declaration_t): the declaration to be checked. Returns: bool: True if declaration represents a C++ struct
juraj-google-style
def dimension_name(dimension): if isinstance(dimension, Dimension): return dimension.name elif isinstance(dimension, basestring): return dimension elif isinstance(dimension, tuple): return dimension[0] elif isinstance(dimension, dict): return dimension['name'] elif (dimension is None): return None else: raise ValueError(('%s type could not be interpreted as Dimension. Dimensions must be declared as a string, tuple, dictionary or Dimension type.' % type(dimension).__name__))
Return the Dimension.name for a dimension-like object. Args: dimension: Dimension or dimension string, tuple or dict Returns: The name of the Dimension or what would be the name if the input as converted to a Dimension.
codesearchnet
def observe(self, success, failure): if (isinstance(success, int) is False): if (isinstance(success, float) is False): raise TypeError() if (isinstance(failure, int) is False): if (isinstance(failure, float) is False): raise TypeError() if (success <= 0): raise ValueError() if (failure <= 0): raise ValueError() self.__success += success self.__failure += failure
Observation data. Args: success: The number of success. failure: The number of failure.
codesearchnet
def Read(self, file_object): file_object.seek(self.last_read, os.SEEK_SET) read_data = file_object.read(self._MAXIMUM_READ_SIZE) self.last_read = file_object.get_offset() compressed_data = b''.join([self._compressed_data, read_data]) decompressed, extra_compressed = self._decompressor.Decompress( compressed_data) self._compressed_data = extra_compressed self.uncompressed_offset += len(decompressed) return decompressed
Reads the next uncompressed data from the gzip stream. Args: file_object (FileIO): file object that contains the compressed stream. Returns: bytes: next uncompressed data from the compressed stream.
juraj-google-style
def imread(path, grayscale=False, size=None, interpolate='bilinear', channel_first=False, as_uint16=False, num_channels=(- 1)): _imread_before(grayscale, num_channels) f = (path if hasattr(path, 'read') else open(path, 'rb')) r = png.Reader(file=f) (width, height, pixels, metadata) = r.asDirect() bit_depth = metadata.get('bitdepth') if (bit_depth not in [8, 16]): raise ValueError('The bit-depth of the image you want to read is unsupported ({}bit).Currently, pypng backend`s imread supports only [8, 16] bit-depth.the path for this image is {}'.format(bit_depth, path)) img = read_result_to_ndarray(pixels, width, height, metadata, grayscale, as_uint16, num_channels) return _imread_after(img, size, interpolate, channel_first, imresize)
Read image by pypng module. Args: path (str or 'file object'): File path or object to read. grayscale (bool): If True, the image is read as a grayscale image. size (tuple of int): (width, height). If None, output img shape depends on the files to read. channel_first (bool): This argument specifies the shape of img is whether (height, width, channel) or (channel, height, width). Default value is False, which means the img shape is (height, width, channel). interpolate (str): must be one of ["nearest", "box", "bilinear", "hamming", "bicubic", "lanczos"]. as_uint16 (bool): If True, this function reads image as uint16. num_channels (int): channel size of output array. Default is -1 which preserves raw image shape. Returns: numpy.ndarray
codesearchnet
def init_grad(obj, allow_lazy_initializer=False): if obj is None: return 0.0 initializer, supports_lazy_initializer = grad_initializers[type(obj)] if supports_lazy_initializer: if isinstance(obj, ZeroGradient): if allow_lazy_initializer: return ZeroGradient(obj.like) else: return obj.instantiate() else: if allow_lazy_initializer: return ZeroGradient(obj) else: assert not isinstance(obj, ZeroGradient) return initializer(obj)
Initialize the gradient for an object. Args: obj: The object to initialize the gradient for, can be either a number, array, tuple, list, or dictionary. allow_lazy_initializer: Whether to allow using the ZeroGradient wrapper, for efficiency. Returns: An object of the same type, shape, etc. but with all numeric values set to zero. If the type is unknown, a zero is returned.
juraj-google-style
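As an aside, the per-type dispatch above amounts to a recursive zeros-like over nested Python structures (ignoring the lazy `ZeroGradient` wrapper). A minimal sketch:

```python
def zeros_like(obj):
    # Recursively build a structure of the same shape with all
    # numeric leaves set to zero, as the initializers above do.
    if obj is None:
        return 0.0
    if isinstance(obj, (int, float)):
        return 0.0
    if isinstance(obj, (list, tuple)):
        return type(obj)(zeros_like(x) for x in obj)
    if isinstance(obj, dict):
        return {k: zeros_like(v) for k, v in obj.items()}
    raise TypeError('unsupported type: %r' % type(obj))

print(zeros_like({'w': [1.0, 2.0], 'b': 3}))  # {'w': [0.0, 0.0], 'b': 0.0}
```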
def load_partition_data(self, index): info = self.partitions[index] data = PartitionData(info) for utt_id in info.utt_ids: utt_data = [c._file[utt_id][:] for c in self.containers] data.utt_data.append(utt_data) return data
Load and return the partition with the given index. Args: index (int): The index of partition, that refers to the index in ``self.partitions``. Returns: PartitionData: A PartitionData object containing the data for the partition with the given index.
codesearchnet
def set_conf_str(conf, optstrs): falsy = ['0', 'no', 'n', 'off', 'false', 'f'] bool_actions = ['store_true', 'store_false', internal.Switch] for optstr in optstrs: opt, val = optstr.split('=', 1) sec, opt = opt.split('.', 1) if sec not in conf: raise error.SectionError(sec) if opt not in conf[sec]: raise error.OptionError(opt) meta = conf[sec].def_[opt] if meta.default is None: if 'type' in meta.cmd_kwargs: cast = meta.cmd_kwargs['type'] else: act = meta.cmd_kwargs.get('action') cast = bool if act in bool_actions else str else: cast = type(meta.default) if cast is bool and val.lower() in falsy: val = '' conf[sec][opt] = cast(val)
Set options from a list of section.option=value string. Args: conf (:class:`~loam.manager.ConfigurationManager`): the conf to update. optstrs (list of str): the list of 'section.option=value' formatted string.
juraj-google-style
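The core of `set_conf_str` is splitting each `section.option=value` string and casting the value to the type of the option's default, with a special case so falsy strings don't cast to `True`. A standalone sketch of that logic (`parse_optstr` and the plain-dict `defaults` are simplified stand-ins for the conf object and its metadata):

```python
FALSY = ['0', 'no', 'n', 'off', 'false', 'f']

def parse_optstr(optstr, defaults):
    """Parse 'section.option=value', casting value to the default's type."""
    opt, val = optstr.split('=', 1)
    sec, opt = opt.split('.', 1)
    cast = type(defaults[sec][opt])
    # bool('false') would be True, so falsy strings are mapped to '' first.
    if cast is bool and val.lower() in FALSY:
        val = ''
    return sec, opt, cast(val)

defaults = {'core': {'verbose': False, 'jobs': 1}}
print(parse_optstr('core.verbose=no', defaults))   # ('core', 'verbose', False)
print(parse_optstr('core.jobs=4', defaults))       # ('core', 'jobs', 4)
```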
def _RawGlobPathSpecWithAlphabeticalSchema(file_system, parent_path_spec, segment_format, location, segment_length, upper_case=False): segment_number = 0 segment_files = [] while True: segment_index = segment_number segment_letters = [] while (len(segment_letters) < segment_length): (segment_index, remainder) = divmod(segment_index, 26) if upper_case: segment_letters.append(chr((ord('A') + remainder))) else: segment_letters.append(chr((ord('a') + remainder))) segment_letters = ''.join(segment_letters[::(- 1)]) segment_location = segment_format.format(location, segment_letters) kwargs = path_spec_factory.Factory.GetProperties(parent_path_spec) kwargs['location'] = segment_location if (parent_path_spec.parent is not None): kwargs['parent'] = parent_path_spec.parent segment_path_spec = path_spec_factory.Factory.NewPathSpec(parent_path_spec.type_indicator, **kwargs) if (not file_system.FileEntryExistsByPathSpec(segment_path_spec)): break segment_files.append(segment_path_spec) segment_number += 1 return segment_files
Globs for path specifications according to an alphabetical naming schema. Args: file_system (FileSystem): file system. parent_path_spec (PathSpec): parent path specification. segment_format (str): naming schema of the segment file location. location (str): the base segment file location string. segment_length (int): length (number of characters) of the segment indicator. upper_case (Optional[bool]): True if the segment name is in upper case. Returns: list[PathSpec]: path specifications that match the glob.
codesearchnet
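The inner loop above converts a segment number into a fixed-length letter sequence in base 26, with 'a' as the zero digit. That computation, isolated as a standalone sketch (`segment_letters` is a hypothetical helper, not part of the library):

```python
def segment_letters(segment_number, segment_length, upper_case=False):
    """Map a segment number to its letter sequence, e.g. 0 -> 'aa', 1 -> 'ab'."""
    index = segment_number
    letters = []
    while len(letters) < segment_length:
        index, remainder = divmod(index, 26)
        base = ord('A') if upper_case else ord('a')
        letters.append(chr(base + remainder))
    # Digits are produced least-significant first, so reverse them.
    return ''.join(letters[::-1])

print(segment_letters(0, 2))                    # 'aa'
print(segment_letters(1, 2))                    # 'ab'
print(segment_letters(26, 2))                   # 'ba'
print(segment_letters(0, 2, upper_case=True))   # 'AA'
```

This matches segment naming schemes such as `image.aa`, `image.ab`, ... used by split raw images.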
def Open(self, hostname, port): server_url = 'http://{0:s}:{1:d}'.format(hostname, port) try: self._xmlrpc_proxy = xmlrpclib.ServerProxy(server_url, allow_none=True) except SocketServer.socket.error as exception: logger.warning('Unable to connect to RPC server on {0:s}:{1:d} with error: {2!s}'.format(hostname, port, exception)) return False return True
Opens an RPC communication channel to the server. Args: hostname (str): hostname or IP address to connect to for requests. port (int): port to connect to for requests. Returns: bool: True if the communication channel was established.
codesearchnet
def extract(self, html_text: str, strategy: Strategy=Strategy.ALL_TEXT) -> List[Extraction]: if html_text: if (strategy == Strategy.ALL_TEXT): soup = BeautifulSoup(html_text, 'html.parser') texts = soup.findAll(text=True) visible_texts = filter(self._tag_visible, texts) all_text = u' '.join((t.strip() for t in visible_texts)) return [Extraction(all_text, self.name)] else: relax = (strategy == Strategy.MAIN_CONTENT_RELAXED) readable = Document(html_text, recallPriority=relax).summary(html_partial=False) clean_text = BeautifulSoup(readable.encode('utf-8'), 'lxml').strings readability_text = ' '.join(clean_text) return [Extraction(readability_text, self.name)] else: return []
Extracts text from an HTML page using a variety of strategies Args: html_text (str): html page in string strategy (enum[Strategy.ALL_TEXT, Strategy.MAIN_CONTENT_RELAXED, Strategy.MAIN_CONTENT_STRICT]): one of Strategy.ALL_TEXT, Strategy.MAIN_CONTENT_STRICT and Strategy.MAIN_CONTENT_RELAXED Returns: List[Extraction]: typically a singleton list with the extracted text
codesearchnet
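The `Strategy.ALL_TEXT` path above boils down to collecting text nodes while skipping non-visible containers such as `<script>` and `<style>`. A stdlib-only sketch of that filtering, substituting `html.parser` for the BeautifulSoup dependency (`VisibleTextParser` is a hypothetical class, not part of the library):

```python
from html.parser import HTMLParser

class VisibleTextParser(HTMLParser):
    """Collect text that would be visible on the rendered page."""

    HIDDEN = {'script', 'style', 'head', 'title', 'meta'}

    def __init__(self):
        super().__init__()
        self.texts = []
        self._skip_depth = 0  # > 0 while inside a hidden container

    def handle_starttag(self, tag, attrs):
        if tag in self.HIDDEN:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.HIDDEN and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.texts.append(data.strip())

page = ('<html><head><title>t</title></head>'
        '<body><script>x=1</script><p>Hello</p> <p>world</p></body></html>')
parser = VisibleTextParser()
parser.feed(page)
print(' '.join(parser.texts))  # 'Hello world'
```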
def copy_docstring(source_class): def decorator(method): if method.__doc__: raise ValueError('Method already has a docstring.') source_method = getattr(source_class, method.__name__) method.__doc__ = source_method.__doc__ return method return decorator
Decorator that copies a method's docstring from another class. Args: source_class (type): The class that has the documented method. Returns: Callable: A decorator that will copy the docstring of the same named method in the source class to the decorated method.
juraj-google-style
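A usage sketch for the decorator: an implementation class inherits the docstring of a documented base class method. The definition is repeated below so the example is self-contained; the `Base`/`Impl` classes are hypothetical.

```python
def copy_docstring(source_class):
    def decorator(method):
        if method.__doc__:
            raise ValueError('Method already has a docstring.')
        source_method = getattr(source_class, method.__name__)
        method.__doc__ = source_method.__doc__
        return method
    return decorator

class Base:
    def run(self):
        """Run the task and return its result."""

class Impl:
    @copy_docstring(Base)
    def run(self):  # intentionally undocumented; docstring is copied from Base
        return 42

print(Impl.run.__doc__)  # 'Run the task and return its result.'
```

Raising when the decorated method already has a docstring prevents silently discarding hand-written documentation.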
def save_source(driver, name): source = driver.page_source file_name = os.path.join(os.environ.get('SAVED_SOURCE_DIR', ''), '{name}.html'.format(name=name)) try: with open(file_name, 'wb') as output_file: output_file.write(source.encode('utf-8')) except Exception: msg = u"Could not save the browser page source to {}.".format(file_name) LOGGER.warning(msg)
Save the rendered HTML of the browser. The location of the source can be configured by the environment variable `SAVED_SOURCE_DIR`. If not set, this defaults to the current working directory. Args: driver (selenium.webdriver): The Selenium-controlled browser. name (str): A name to use in the output file name. Note that ".html" is appended automatically. Returns: None
juraj-google-style
def steps(self, goal): path = self.path(goal) for i in range((len(path) - 1)): (yield (path[i], path[(i + 1)]))
Iterate over the individual relations leading to the targeted node. Args: goal (str): Name of the targeted node Yields: tuple of Node: consecutive pairs of nodes along the path
codesearchnet
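The generator body above is a pairwise walk over the path: each consecutive pair of nodes becomes one relation. Isolated as a sketch (`pairwise_steps` is a hypothetical standalone version operating on a plain list):

```python
def pairwise_steps(path):
    """Yield each consecutive (node, next_node) pair along a path."""
    for i in range(len(path) - 1):
        yield (path[i], path[i + 1])

print(list(pairwise_steps(['a', 'b', 'c', 'd'])))
# [('a', 'b'), ('b', 'c'), ('c', 'd')]
```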
def pseudo_with_symbol(self, symbol, allow_multi=False): pseudos = self.select_symbols(symbol, ret_list=True) if not pseudos or (len(pseudos) > 1 and not allow_multi): raise ValueError("Found %d occurrences of symbol %s" % (len(pseudos), symbol)) if not allow_multi: return pseudos[0] else: return pseudos
Return the pseudo with the given chemical symbol. Args: symbol: String with the chemical symbol of the element allow_multi: By default, the method raises ValueError if multiple occurrences are found. Use allow_multi to prevent this. Raises: ValueError if symbol is not found or multiple occurrences are present and not allow_multi
juraj-google-style
def eval_single(self, key, data, data_store): if (key in self): value = self[key] if ((value is not None) and callable(value)): return value(data, data_store) else: return value else: raise AttributeError()
Evaluate the value of a single parameter taking into account callables. Native types are not touched and simply returned, while callable methods are executed and their return value is returned. Args: key (str): The name of the parameter that should be evaluated. data (MultiTaskData): The data object that has been passed from the predecessor task. data_store (DataStore): The persistent data store object that allows the task to store data for access across the current workflow run. Returns: The evaluated value of the parameter. Raises: AttributeError: If the key does not exist.
codesearchnet
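The callable-or-value dispatch in `eval_single` can be sketched without the workflow machinery. `eval_param` and the dict-based stand-ins below are hypothetical simplifications:

```python
def eval_param(params, key, data, data_store):
    """Return params[key], calling it with (data, data_store) if callable."""
    if key not in params:
        raise AttributeError(key)
    value = params[key]
    if value is not None and callable(value):
        return value(data, data_store)
    return value

params = {'threshold': 0.5,
          'label': lambda data, store: data['name'].upper()}
data = {'name': 'run1'}
print(eval_param(params, 'threshold', data, None))  # 0.5
print(eval_param(params, 'label', data, None))      # 'RUN1'
```

This lets a task declare static parameters and computed parameters with a single interface: callers never need to know which kind they received.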
def ion_or_solid_comp_object(formula): m = re.search(r"\[([^\[\]]+)\]|\(aq\)", formula) if m: comp_obj = Ion.from_formula(formula) elif re.search(r"\(s\)", formula): comp_obj = Composition(formula[:-3]) else: comp_obj = Composition(formula) return comp_obj
Returns either an ion object or composition object given a formula. Args: formula: String formula. Eg. of ion: NaOH(aq), Na[+]; Eg. of solid: Fe2O3(s), Fe(s), Na2O Returns: Composition/Ion object
juraj-google-style
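The regex dispatch above can be exercised without pymatgen. `classify_formula` below is a hypothetical helper that returns only which branch would be taken:

```python
import re

def classify_formula(formula):
    """Classify a formula string the same way ion_or_solid_comp_object branches."""
    # Bracketed charge (e.g. Na[+]) or an explicit (aq) suffix marks an ion.
    if re.search(r"\[([^\[\]]+)\]|\(aq\)", formula):
        return 'ion'
    # An (s) suffix marks a solid; the suffix itself is stripped before parsing.
    if re.search(r"\(s\)", formula):
        return 'solid'
    return 'composition'

print(classify_formula('NaOH(aq)'))  # 'ion'
print(classify_formula('Na[+]'))     # 'ion'
print(classify_formula('Fe2O3(s)'))  # 'solid'
print(classify_formula('Na2O'))      # 'composition'
```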
def _check_params(window_length, dtype): if not dtype.is_floating: raise ValueError('dtype must be a floating point type. Found %s' % dtype) window_length = ops.convert_to_tensor(window_length, dtype=dtypes.int32) window_length.shape.assert_has_rank(0) return window_length
Check window_length and dtype params. Args: window_length: A scalar value or `Tensor`. dtype: The data type to produce. Must be a floating point type. Returns: window_length converted to a tensor of type int32. Raises: ValueError: If `dtype` is not a floating point type or window_length is not a scalar.
github-repos
def html_job_status(job_name, job_type, refresh_interval, html_on_running, html_on_success): _HTML_TEMPLATE = div_id = _html.Html.next_id() return IPython.core.display.HTML(_HTML_TEMPLATE % (div_id, div_id, job_name, job_type, refresh_interval, html_on_running, html_on_success))
Create an HTML representation of the status of a job (long running operation). Args: job_name: the full name of the job. job_type: type of job. Can be 'local' or 'cloud'. refresh_interval: how often should the client refresh status. html_on_running: additional html that the job view needs to include on job running. html_on_success: additional html that the job view needs to include on job success. Returns: An IPython HTML object wrapping the rendered job status view.
juraj-google-style
def _time_step(time, output_ta_t, state): if in_graph_mode: input_t = tuple((ta.read(time) for ta in input_ta)) for input_, shape in zip(input_t, inputs_got_shape): input_.set_shape(shape[1:]) else: input_t = tuple((ta[time.numpy()] for ta in input_ta)) input_t = nest.pack_sequence_as(structure=inputs, flat_sequence=input_t) call_cell = lambda: cell(input_t, state) if sequence_length is not None: output, new_state = _rnn_step(time=time, sequence_length=sequence_length, min_sequence_length=min_sequence_length, max_sequence_length=max_sequence_length, zero_output=zero_output, state=state, call_cell=call_cell, state_size=state_size, skip_conditionals=True) else: output, new_state = call_cell() output = nest.flatten(output) if in_graph_mode: output_ta_t = tuple((ta.write(time, out) for ta, out in zip(output_ta_t, output))) else: for ta, out in zip(output_ta_t, output): ta[time.numpy()] = out return (time + 1, output_ta_t, new_state)
Take a time step of the dynamic RNN. Args: time: int32 scalar Tensor. output_ta_t: List of `TensorArray`s that represent the output. state: nested tuple of vector tensors that represent the state. Returns: The tuple (time + 1, output_ta_t with updated flow, new_state).
github-repos
def patch_request(self, id_or_uri, body, timeout=-1, custom_headers=None): uri = self.build_uri(id_or_uri) logger.debug('Patch resource (uri = %s, data = %s)' % (uri, body)) custom_headers_copy = custom_headers.copy() if custom_headers else {} if self._connection._apiVersion >= 300 and 'Content-Type' not in custom_headers_copy: custom_headers_copy['Content-Type'] = 'application/json-patch+json' task, entity = self._connection.patch(uri, body, custom_headers=custom_headers_copy) if not task: return entity return self._task_monitor.wait_for_task(task, timeout)
Uses the PATCH to update a resource. Only one operation can be performed in each PATCH call. Args: id_or_uri: Can be either the resource ID or the resource URI. body: Patch request body timeout: Timeout in seconds. Wait for task completion by default. The timeout does not abort the operation in OneView; it just stops waiting for its completion. Returns: Updated resource.
juraj-google-style
def annotate_source(dump, source_file_path, do_dumped_tensors=False, file_stack_top=False, min_line=None, max_line=None): py_graph = dump.python_graph if not py_graph: raise ValueError('Cannot perform source annotation due to a lack of set Python graph in the dump object') source_file_path = _norm_abs_path(source_file_path) line_to_op_names = {} for op in py_graph.get_operations(): for file_path, line_number, _, _ in reversed(dump.node_traceback(op.name)): if min_line is not None and line_number < min_line or (max_line is not None and line_number >= max_line): continue if _norm_abs_path(file_path) != source_file_path: continue if do_dumped_tensors: watch_keys = dump.debug_watch_keys(op.name) items_to_append = list(set(map(_convert_watch_key_to_tensor_name, watch_keys))) else: items_to_append = [op.name] if line_number in line_to_op_names: line_to_op_names[line_number].extend(items_to_append) else: line_to_op_names[line_number] = items_to_append if file_stack_top: break return line_to_op_names
Annotate a Python source file with a list of ops created at each line. (The annotation doesn't change the source file itself.) Args: dump: (`DebugDumpDir`) A `DebugDumpDir` object of which the Python graph has been loaded. source_file_path: (`str`) Path to the source file being annotated. do_dumped_tensors: (`str`) Whether dumped Tensors, instead of ops are to be used to annotate the source file. file_stack_top: (`bool`) Whether only the top stack trace in the specified source file is to be annotated. min_line: (`None` or `int`) The 1-based line to start annotate the source file from (inclusive). max_line: (`None` or `int`) The 1-based line number to end the annotation at (exclusive). Returns: A `dict` mapping 1-based line number to a list of op name(s) created at that line, or tensor names if `do_dumped_tensors` is True. Raises: ValueError: If the dump object does not have a Python graph set.
github-repos
def __item_descriptor(self, config): descriptor = {'kind': 'discovery#directoryItem'} description = config.get('description') root_url = config.get('root') name = config.get('name') version = config.get('api_version') relative_path = '/apis/{0}/{1}/rest'.format(name, version) if description: descriptor['description'] = description descriptor['name'] = name descriptor['version'] = version descriptor['discoveryLink'] = '.{0}'.format(relative_path) root_url_port = urlparse.urlparse(root_url).port original_path = self.__request.reconstruct_full_url(port_override=root_url_port) descriptor['discoveryRestUrl'] = '{0}/{1}/{2}/rest'.format(original_path, name, version) if (name and version): descriptor['id'] = '{0}:{1}'.format(name, version) return descriptor
Builds an item descriptor for a service configuration. Args: config: A dictionary containing the service configuration to describe. Returns: A dictionary that describes the service configuration.
codesearchnet
def match_pattern(expr_or_pattern: object, expr: object) -> MatchDict: try: return expr_or_pattern.match(expr) except AttributeError: if (expr_or_pattern == expr): return MatchDict() else: res = MatchDict() res.success = False res.reason = ("Expressions '%s' and '%s' are not the same" % (repr(expr_or_pattern), repr(expr))) return res
Recursively match `expr` with the given `expr_or_pattern` Args: expr_or_pattern: either a direct expression (equal to `expr` for a successful match), or an instance of :class:`Pattern`. expr: the expression to be matched
codesearchnet
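The duck-typed dispatch here — try `.match()`, fall back to an equality check — can be sketched with a simplified `MatchDict`. Both the class and the `AnyInt` pattern below are hypothetical stand-ins for the library's types:

```python
class MatchDict(dict):
    """Simplified match result: a dict with a success flag and failure reason."""

    def __init__(self):
        super().__init__()
        self.success = True
        self.reason = ''

def match_pattern(expr_or_pattern, expr):
    try:
        # Pattern objects implement .match(); plain expressions do not.
        return expr_or_pattern.match(expr)
    except AttributeError:
        res = MatchDict()
        if expr_or_pattern != expr:
            res.success = False
            res.reason = ("Expressions %r and %r are not the same"
                          % (expr_or_pattern, expr))
        return res

class AnyInt:
    """A toy pattern: matches any integer expression."""
    def match(self, expr):
        res = MatchDict()
        res.success = isinstance(expr, int)
        return res

print(match_pattern(42, 42).success)       # True
print(match_pattern(42, 43).success)       # False
print(match_pattern(AnyInt(), 5).success)  # True
```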