Columns: code (string, lengths 20 to 4.93k), docstring (string, lengths 33 to 1.27k), source (string, 3 classes)
def parse(self, filepath, content):
    try:
        parsed = yaml.load(content)
    except yaml.YAMLError as exc:
        msg = 'No YAML object could be decoded from file: {}\n{}'
        raise SettingsBackendError(msg.format(filepath, exc))
    return parsed
Parse opened settings content using YAML parser. Args: filepath (str): Settings object, depends from backend content (str): Settings content from opened file, depends from backend. Raises: boussole.exceptions.SettingsBackendError: If parser can not decode a valid YAML object. Returns: dict: Dictionnary containing pa...
codesearchnet
def find_centroid_alleles(alleles, bp=28, t=0.025): centroid_alleles = set() len_allele = group_alleles_by_size(alleles) for (length, seqs) in len_allele.items(): if (len(seqs) == 1): centroid_alleles.add(seqs[0]) continue seq_arr = seq_int_arr(seqs) starts_en...
Reduce list of alleles to set of centroid alleles based on size grouping, ends matching and hierarchical clustering Workflow for finding centroid alleles: - grouping by size (e.g. 100bp, 101bp, 103bp, etc) - then grouped by `bp` nucleotides at ends matching - size and ends grouped alleles hierarchically clustered (Ha...
codesearchnet
def _get_reference_classnames(self, classname, namespace, resultclass_name, role): self._validate_namespace(namespace) result_classes = self._classnamedict(resultclass_name, namespace) rtn_classnames_set = set() role = (role.lower() if role else role) for cl in self._get_association_classes(namespac...
Get list of classnames that are references for which this classname is a target filtered by the result_class and role parameters if they are none. This is a common method used by all of the other reference and associator methods to create a list of reference classnames Returns: list of classnames that satisfy the crit...
codesearchnet
def __init__(self, error_name, error_id, error_msg, token_value):
    self.error_name = error_name
    self.error_id = error_id
    self.error_msg = error_msg
    self._token_value = token_value
Create a LexerError that matches |token_value|. Args: error_name: A short, human readable name for the error, using lowercase-with-dashes-format. error_id: An integer to identify a specific error: 100s: Lexer errors. 200s: Low level parsing errors. 300s: High level parsing errors. error_msg: A message to display with ...
github-repos
def nuc_p(msg): tc = typecode(msg) if typecode(msg) < 5 or typecode(msg) > 22: raise RuntimeError( "%s: Not a surface position message (5<TC<8), \ airborne position message (8<TC<19), \ or airborne position with GNSS height (20<TC<22)" % msg ) try: ...
Calculate NUCp, Navigation Uncertainty Category - Position (ADS-B version 1) Args: msg (string): 28 bytes hexadecimal message string, Returns: int: Horizontal Protection Limit int: 95% Containment Radius - Horizontal (meters) int: 95% Containment Radius - Vertical (meters)
juraj-google-style
def _update_in_hdx(self, object_type, id_field_name, file_to_upload=None, **kwargs):
    self._check_load_existing_object(object_type, id_field_name)
    self._merge_hdx_update(object_type, id_field_name, file_to_upload, **kwargs)
Helper method to check if HDX object exists in HDX and if so, update it Args: object_type (str): Description of HDX object type (for messages) id_field_name (str): Name of field containing HDX object identifier file_to_upload (Optional[str]): File to upload to HDX **kwargs: See below operation (string): Operation to p...
codesearchnet
def build_inputs_with_special_tokens(self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None) -> List[int]:
    if token_ids_1 is None:
        return self.prefix_tokens + token_ids_0 + self.suffix_tokens
    return self.prefix_tokens + token_ids_0 + token_ids_1 + self.suffix_tokens
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. The special tokens depend on calling set_lang. An NLLB sequence has the following format, where `X` represents the sequence: - `input_ids` (for encoder) `X [eos, src_lang_code]` - `de...
github-repos
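The prefix/suffix concatenation above can be exercised with a minimal stand-in class; the `prefix_tokens`, `suffix_tokens`, and the concrete token ids below are illustrative assumptions, not the real NLLB tokenizer values.

```python
from typing import List, Optional

class SpecialTokensBuilder:
    # Hypothetical minimal stand-in for the tokenizer method above.
    def __init__(self, prefix_tokens: List[int], suffix_tokens: List[int]):
        self.prefix_tokens = prefix_tokens
        self.suffix_tokens = suffix_tokens

    def build_inputs_with_special_tokens(self, token_ids_0: List[int],
                                         token_ids_1: Optional[List[int]] = None) -> List[int]:
        # Single sequence: prefix + X + suffix; pair: prefix + A + B + suffix.
        if token_ids_1 is None:
            return self.prefix_tokens + token_ids_0 + self.suffix_tokens
        return self.prefix_tokens + token_ids_0 + token_ids_1 + self.suffix_tokens

# Suffix plays the role of [eos, src_lang_code] from the docstring (ids made up).
builder = SpecialTokensBuilder(prefix_tokens=[], suffix_tokens=[2, 999])
print(builder.build_inputs_with_special_tokens([10, 11]))  # → [10, 11, 2, 999]
```

Note that a sequence pair shares one suffix: both sequences are concatenated before the special tokens are appended.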
def print_stack_events(self): first_token = '7be7981bd6287dd8112305e8f3822a6f' keep_going = True next_token = first_token current_request_token = None rows = [] try: while (keep_going and next_token): if (next_token == first_token): response = self._cf_client....
List events from the given stack Args: None Returns: None
codesearchnet
def ssh(container, cmd='', user='root', password='root'): ip = get_ip(container) ssh_cmd = 'sshpass -p \'%s\' ssh -A -t -o StrictHostKeyChecking=no \'%s\'@%s' % (password, user, ip) local('ssh -A -t -o StrictHostKeyChecking=no -i "%s" %s@%s %s %s' % ( env.key_filename, env.user, env.host, ssh_c...
SSH into a running container, using the host as a jump host. This requires the container to have a running sshd process. Args: * container: Container name or ID * cmd='': Command to run in the container * user='root': SSH username * password='root': SSH password
juraj-google-style
def targets(self): return self._targets
Return the unique names of ops to run. Returns: A list of strings.
github-repos
def switch_to_window(page_class, webdriver): window_list = list(webdriver.window_handles) original_window = webdriver.current_window_handle for window_handle in window_list: webdriver.switch_to_window(window_handle) try: return PageFactory.create_...
Utility method for switching between windows. It will search through currently open windows, then switch to the window matching the provided PageObject class. Args: page_class (PageObject): Page class to search for/instantiate. webdriver (WebDriver): Selenium webdriver. Usage:: WebUtils.switch_to_window(DetailsPopU...
juraj-google-style
def set(self, key, value, **kwargs):
    path = '%s/%s' % (self.path, key.replace('/', '%2F'))
    data = {'value': value}
    server_data = self.gitlab.http_put(path, post_data=data, **kwargs)
    return self._obj_cls(self, server_data)
Create or update the object. Args: key (str): The key of the object to create/update value (str): The value to set for the object **kwargs: Extra options to send to the server (e.g. sudo) Raises: GitlabAuthenticationError: If authentication is not correct GitlabSetError: If an error occurred Returns: obj: The created...
juraj-google-style
def write_fasta_file(self, outfile, force_rerun=False):
    if ssbio.utils.force_rerun(flag=force_rerun, outfile=outfile):
        SeqIO.write(self, outfile, "fasta")
    self.sequence_path = outfile
Write a FASTA file for the protein sequence, ``seq`` will now load directly from this file. Args: outfile (str): Path to new FASTA file to be written to force_rerun (bool): If an existing file should be overwritten
juraj-google-style
def client(self): if self.proxy: proxyhandler = urllib.ProxyHandler({'http': self.proxy}) opener = urllib.build_opener(proxyhandler) urllib.install_opener(opener) transport = ProxyTransport() if (not hasattr(self, '_client')): transport = None if self.pypi: ...
XMLRPC client for PyPI. Always returns the same instance. If the package is provided as a path to compressed source file, PyPI will not be used and the client will not be instantiated. Returns: XMLRPC client for PyPI or None.
codesearchnet
def psq2(d1, d2):
    d1, d2 = flatten(d1), flatten(d2)

    def f(p):
        return sum((p ** 2) * np.nan_to_num(np.log(p * len(p))))

    return abs(f(d1) - f(d2))
Compute the PSQ2 measure. Args: d1 (np.ndarray): The first distribution. d2 (np.ndarray): The second distribution.
juraj-google-style
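A self-contained sketch of the PSQ2 measure above; `flatten` is not shown in the snippet, so it is assumed here to be a plain ravel of the distribution arrays.

```python
import numpy as np

def psq2(d1, d2):
    # Assumption: flatten == ravel to a 1-D float array.
    d1 = np.asarray(d1, dtype=float).ravel()
    d2 = np.asarray(d2, dtype=float).ravel()

    def f(p):
        # log(0) -> -inf is silenced and clamped by nan_to_num, then zeroed by p**2 == 0.
        with np.errstate(divide='ignore'):
            return np.sum((p ** 2) * np.nan_to_num(np.log(p * len(p))))

    return abs(f(d1) - f(d2))

# A uniform distribution contributes 0, so this reduces to |f([1, 0])| = log 2.
print(psq2([1.0, 0.0], [0.5, 0.5]))
```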
def myRank(grade, badFormat, year, length):
    return sorted(everyonesAverage(year, badFormat, length), reverse=True).index(grade) + 1
rank of candidateNumber in year Arguments: grade {int} -- a weighted average for a specific candidate number and year badFormat {dict} -- candNumber : [results for candidate] year {int} -- year you are in length {int} -- length of each row in badFormat divided by 2 Returns: int -- rank of candidateNumber in year
codesearchnet
def conversations_replies(self, *, channel: str, ts: str, **kwargs) -> SlackResponse:
    kwargs.update({"channel": channel, "ts": ts})
    return self.api_call("conversations.replies", http_verb="GET", params=kwargs)
Retrieve a thread of messages posted to a conversation Args: channel (str): Conversation ID to fetch thread from. e.g. 'C1234567890' ts (str): Unique identifier of a thread's parent message. e.g. '1234567890.123456'
juraj-google-style
def __init__(self, scope, parent, name, result, args=None, paren=False):
    CodeExpression.__init__(self, scope, parent, name, result, paren)
    self.arguments = args or ()
Constructor for operators. Args: scope (CodeEntity): The program scope where this object belongs. parent (CodeEntity): This object's parent in the program tree. name (str): The name of the operator in the program. result (str): The return type of the operator in the program. Kwargs: args (tuple): Initial tuple of arg...
juraj-google-style
def NewFromJSON(data): if data.get('shakes', None): shakes = [Shake.NewFromJSON(shk) for shk in data.get('shakes')] else: shakes = None return User( id=data.get('id', None), name=data.get('name', None), profile_image_url=data....
Create a new User instance from a JSON dict. Args: data (dict): JSON dictionary representing a user. Returns: A User instance.
juraj-google-style
def get_backend(self, name=None, **kwargs): backends = self.backends(name, **kwargs) if (len(backends) > 1): raise QiskitBackendNotFoundError('More than one backend matches the criteria') elif (not backends): raise QiskitBackendNotFoundError('No backend matches the criteria') return back...
Return a single backend matching the specified filtering. Args: name (str): name of the backend. **kwargs (dict): dict used for filtering. Returns: BaseBackend: a backend matching the filtering. Raises: QiskitBackendNotFoundError: if no backend could be found or more than one backend matches.
codesearchnet
def decompress_dir(path):
    for parent, subdirs, files in os.walk(path):
        for f in files:
            decompress_file(os.path.join(parent, f))
Recursively decompresses all files in a directory. Args: path (str): Path to parent directory.
juraj-google-style
def validate_all_values_for_key(obj, key, validation_fun):
    for vkey, value in obj.items():
        if vkey == key:
            validation_fun(value)
        elif isinstance(value, dict):
            validate_all_values_for_key(value, key, validation_fun)
Validate value for all (nested) occurrence of `key` in `obj` using `validation_fun`. Args: obj (dict): dictionary object. key (str): key whose value is to be validated. validation_fun (function): function used to validate the value of `key`. Raises: ValidationError: `validation_fun` will raise this error on failure
juraj-google-style
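The recursion above is easy to check end to end: collecting every visited value with a list `append` as the (hypothetical) validation function shows that both top-level and nested occurrences of the key are reached.

```python
def validate_all_values_for_key(obj, key, validation_fun):
    # Apply validation_fun to the value of every (nested) occurrence of `key`.
    for vkey, value in obj.items():
        if vkey == key:
            validation_fun(value)
        elif isinstance(value, dict):
            validate_all_values_for_key(value, key, validation_fun)

seen = []
doc = {'amount': 3, 'meta': {'amount': -1, 'note': 'x'}}
validate_all_values_for_key(doc, 'amount', seen.append)
print(sorted(seen))  # → [-1, 3]
```

A real `validation_fun` would raise (e.g. a ValidationError) instead of recording values.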
def be2le_state_by_state(tpm):
    le = np.empty(tpm.shape)
    N = tpm.shape[0]
    n = int(log2(N))
    for i in range(N):
        le[i, :] = tpm[be2le(i, n), :]
    return le
Convert a state-by-state TPM from big-endian to little-endian or vice versa. Args: tpm (np.ndarray): A state-by-state TPM. Returns: np.ndarray: The state-by-state TPM in the other indexing format. Example: >>> tpm = np.arange(16).reshape([4, 4]) >>> be2le_state_by_state(tpm) array([[ 0., 1., 2., 3.], [ 8., 9., 1...
codesearchnet
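The helper `be2le` is not shown in the snippet; the docstring's example is consistent with it being an n-bit reversal of the row index, which the following self-contained sketch assumes.

```python
import numpy as np
from math import log2

def be2le(i, n):
    # Assumption: reverse the order of the n bits of index i.
    return int(format(i, '0{}b'.format(n))[::-1], 2)

def be2le_state_by_state(tpm):
    # Permute rows so the state indexing flips endianness.
    le = np.empty(tpm.shape)
    N = tpm.shape[0]
    n = int(log2(N))
    for i in range(N):
        le[i, :] = tpm[be2le(i, n), :]
    return le

# Matches the docstring example: row 1 of the result is row be2le(1, 2) == 2.
print(be2le_state_by_state(np.arange(16).reshape(4, 4))[1])  # → [ 8.  9. 10. 11.]
```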
def generate_orders(events, sell_delay=5, sep=','): sell_delay = (float(unicode(sell_delay)) or 1) for (i, (t, row)) in enumerate(events.iterrows()): for (sym, event) in row.to_dict().iteritems(): if (event and (not np.isnan(event))): if (event > 0): sell_...
Generate CSV orders based on events indicated in a DataFrame Arguments: events (pandas.DataFrame): Table of NaNs or 1's, one column for each symbol. 1 indicates a BUY event. -1 a SELL event. nan or 0 is a nonevent. sell_delay (float): Number of days to wait before selling back the shares bought sep (str or None): if s...
codesearchnet
def __query_node(self, ip, host): host = util.shorten_host_name(host, self.config.host_domains) (node, node_updated) = self.__get_known_node(ip, host) if (node == None): node = natlas_node() node.name = host node.ip = [ip] state = NODE_NEW else: if (node.snmpobj.s...
Query this node. Return node details and if we already knew about it or if this is a new node. Don't save the node to the known list, just return info about it. Args: ip: IP Address of the node. host: Hostname of this node (if known from CDP/LLDP) Returns: natlas_node: Node of th...
codesearchnet
def replace_batch_norm(model): for name, module in model.named_children(): if isinstance(module, nn.BatchNorm2d): new_module = DetaFrozenBatchNorm2d(module.num_features) if not module.weight.device == torch.device('meta'): new_module.weight.data.copy_(module.weight) ...
Recursively replace all `torch.nn.BatchNorm2d` with `DetaFrozenBatchNorm2d`. Args: model (torch.nn.Module): input model
github-repos
def locate_module(module_id: str, module_type: str=None): entry_point = None if module_type: entry_point = ('ehforwarderbot.%s' % module_type) module_id = module_id.split(' if entry_point: for i in pkg_resources.iter_entry_points(entry_point): if (i.name == module_id): ...
Locate module by module ID Args: module_id: Module ID module_type: Type of module, one of ``'master'``, ``'slave'`` and ``'middleware'``
codesearchnet
def change_t(self, t):
    t = super().change_t(t)
    self.__now_cycles += 1
    if self.__now_cycles % self.__reannealing_per == 0:
        t = t * self.__thermostat
        if t < self.__t_min:
            t = self.__t_default
    return t
Change temperature. Override. Args: t: Now temperature. Returns: Next temperature.
juraj-google-style
def set_parameter_vector(self, vector, include_frozen=False):
    v = self.parameter_vector
    if include_frozen:
        v[:] = vector
    else:
        v[self.unfrozen_mask] = vector
    self.parameter_vector = v
    self.dirty = True
Set the parameter values to the given vector Args: vector (array[vector_size] or array[full_size]): The target parameter vector. This must be in the same order as ``parameter_names`` and it should only include frozen parameters if ``include_frozen`` is ``True``. include_frozen (Optional[bool]): Should the frozen param...
juraj-google-style
def _pearson_correlation(self, imgs_to_decode):
    x, y = imgs_to_decode.astype(float), self.feature_images.astype(float)
    return self._xy_corr(x, y)
Decode images using Pearson's r. Computes the correlation between each input image and each feature image across voxels. Args: imgs_to_decode: An ndarray of images to decode, with voxels in rows and images in columns. Returns: An n_features x n_images 2D array, with each cell representing the pearson correlation bet...
codesearchnet
def process(self, tensor):
    for processor in self.preprocessors:
        tensor = processor.process(tensor=tensor)
    return tensor
Process state. Args: tensor: tensor to process Returns: processed state
juraj-google-style
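The chain above threads a tensor through each preprocessor in order. A minimal sketch with hypothetical `Scale` processors (not part of the original library) shows the composition:

```python
class Scale:
    # Hypothetical processor: multiplies its input by a constant factor.
    def __init__(self, factor):
        self.factor = factor

    def process(self, tensor):
        return tensor * self.factor

class Pipeline:
    # Minimal stand-in for the object holding `preprocessors` above.
    def __init__(self, preprocessors):
        self.preprocessors = preprocessors

    def process(self, tensor):
        # Each processor's output feeds the next one.
        for processor in self.preprocessors:
            tensor = processor.process(tensor=tensor)
        return tensor

print(Pipeline([Scale(2), Scale(3)]).process(5))  # → 30
```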
def substitute(self, var_map, cont=False, tag=None): return self.apply(substitute, var_map=var_map, cont=cont, tag=tag)
Substitute sub-expressions both on the lhs and rhs Args: var_map (dict): Dictionary with entries of the form ``{expr: substitution}``
juraj-google-style
def to_sql(self, view: views.View, limit: Optional[int]=None) -> str: sql_generator = self._build_sql_generator(view) sql_statement = sql_generator.build_sql_statement() view_table_name = f'{self._value_set_codes_table.project}.{self._value_set_codes_table.dataset_id}.{self._value_set_codes_table.table_id}'...
Returns the SQL used to run the given view in BigQuery. Args: view: the view used to generate the SQL. limit: optional limit to attach to the generated SQL. Returns: The SQL used to run the given view.
github-repos
def etm_supported(self): res = self._dll.JLINKARM_ETM_IsPresent() if (res == 1): return True info = ctypes.c_uint32(0) index = enums.JLinkROMTable.ETM res = self._dll.JLINKARM_GetDebugInfo(index, ctypes.byref(info)) if (res...
Returns if the CPU core supports ETM. Args: self (JLink): the ``JLink`` instance. Returns: ``True`` if the CPU has the ETM unit, otherwise ``False``.
juraj-google-style
def pred(scores: jax.Array, rows: jax.Array, cols: jax.Array, N: int) -> jax.Array:
    r: jax.Array = 2 * jax.ops.segment_sum(scores.take(cols), rows, N) - scores.sum()
    return r > 0
Predicts the target output from the learned scores and input entries. Args: scores (jax.Array): Contribution scores of features. rows (jax.Array): Row indices of True values in the input. cols (jax.Array): Column indices of True values in the input. N (int): The number of input entries. Returns: res (jax.Array): A pr...
github-repos
def convert_source_tokens_to_target_tokens(self, input_ids, source_tokenizer, destination_tokenizer): text = source_tokenizer.batch_decode(input_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True) dest_ids = destination_tokenizer(text, add_special_tokens=True, return_tensors='pt')['input_ids'] ...
Convert token IDs from one tokenizer to another. Args: input_ids: The input token IDs. source_tokenizer: The source tokenizer. destination_tokenizer: The destination tokenizer. Returns: The converted token IDs.
github-repos
def _CreatePerformanceTarget(client, campaign_group_id): cgpt_service = client.GetService('CampaignGroupPerformanceTargetService', version='v201809') operations = [{'operator': 'ADD', 'operand': {'campaignGroupId': campaign_group_id, 'performanceTarget': {'efficiencyTargetType': 'CPC_LESS_THAN_OR_EQUAL_TO', 'ef...
Creates a performance target for the campaign group. Args: client: an AdWordsClient instance. campaign_group_id: an integer ID for the campaign group.
codesearchnet
def create_proxy_api_files(output_files, proxy_module_root, output_dir): for file_path in output_files: module = get_module(os.path.dirname(file_path), output_dir) if not os.path.isdir(os.path.dirname(file_path)): os.makedirs(os.path.dirname(file_path)) contents = f'from {proxy_m...
Creates __init__.py files in proxy format for the Python API. Args: output_files: List of __init__.py file paths to create. proxy_module_root: Module root for proxy-import format. If specified, proxy files with content like `from proxy_module_root.proxy_module import *` will be created to enable import resolution unde...
github-repos
def __init__(self, to_track: Dict):
    self.to_track = to_track
    self._seen: Set[str] = set()
This class "tracks" a python dictionary by keeping track of which item is accessed. Args: to_track (Dict): The dictionary we wish to track
github-repos
def major_complex(network, state): log.info('Calculating major complex...') result = complexes(network, state) if result: result = max(result) else: empty_subsystem = Subsystem(network, state, ()) result = _null_sia(empty_subsystem) log.info('Finished calculating major comple...
Return the major complex of the network. Args: network (Network): The |Network| of interest. state (tuple[int]): The state of the network (a binary tuple). Returns: SystemIrreducibilityAnalysis: The |SIA| for the |Subsystem| with maximal |big_phi|.
codesearchnet
def show_available_noise_curves(return_curves=True, print_curves=False): if ((return_curves is False) and (print_curves is False)): raise ValueError(('Both return curves and print_curves are False.' + ' You will not see the options')) cfd = os.path.dirname(os.path.abspath(__file__)) curves = [curve....
List available sensitivity curves This function lists the available sensitivity curve strings in noise_curves folder. Args: return_curves (bool, optional): If True, return a list of curve options. print_curves (bool, optional): If True, print each curve option. Returns: (optional list of str): List of curve options....
codesearchnet
def run_processor( processorClass, ocrd_tool=None, mets_url=None, resolver=None, workspace=None, page_id=None, log_level=None, input_file_grp=None, output_file_grp=None, parameter=None, working_dir=None, ): workspace = _ge...
Create a workspace for mets_url and run processor through it Args: parameter (string): URL to the parameter
juraj-google-style
def set_status(self, trial, status):
    trial.status = status
    if status in [Trial.TERMINATED, Trial.ERROR]:
        self.try_checkpoint_metadata(trial)
Sets status and checkpoints metadata if needed. Only checkpoints metadata if trial status is a terminal condition. PENDING, PAUSED, and RUNNING switches have checkpoints taken care of in the TrialRunner. Args: trial (Trial): Trial to checkpoint. status (Trial.status): Status to set trial to.
juraj-google-style
def unnest_collection(collection, df_list): for item in collection['link']['item']: if item['class'] == 'dataset': df_list.append(Dataset.read(item['href']).write('dataframe')) elif item['class'] == 'collection': nested_collection = request(item['href']) unne...
Unnest collection structure extracting all its datasets and converting them to Pandas Dataframes. Args: collection (OrderedDict): data in JSON-stat format, previously deserialized to a python object by json.load() or json.loads(), df_list (list): list variable which will contain the converted datasets. Return...
juraj-google-style
def retransmit(self, data): if data["method"] == "REGISTER": if not self.registered and self.register_retries < self.max_retries: logger.debug("<%s> Timeout exceeded. " % str(self.cuuid) + \ "Retransmitting REGISTER request.")...
Processes messages that have been delivered from the transport protocol. Args: data (dict): A dictionary containing the packet data to resend. Returns: None Examples: >>> data {'method': 'REGISTER', 'address': ('192.168.0.20', 40080)}
juraj-google-style
class DPTFeatureFusionLayer(nn.Module): def __init__(self, config, align_corners=True): super().__init__() self.align_corners = align_corners self.projection = nn.Conv2d(config.fusion_hidden_size, config.fusion_hidden_size, kernel_size=1, bias=True) self.residual_layer1 = DPTPreActR...
Feature fusion layer, merges feature maps from different stages. Args: config (`[DPTConfig]`): Model configuration class defining the model architecture. align_corners (`bool`, *optional*, defaults to `True`): The align_corner setting for bilinear upsample.
github-repos
def proto_refactor_files(dest_dir, namespace, namespace_path): for dn, dns, fns in os.walk(dest_dir): for fn in fns: fn = os.path.join(dn, fn) if fnmatch.fnmatch(fn, '*.proto'): data = proto_refactor(fn, namespace, namespace_path) with open(fn, 'w...
This method runs the refactoring on all the Protobuf files in the Dropsonde repo. Args: dest_dir (str): directory where the Protobuf files lives. namespace (str): the desired package name (i.e. "dropsonde.py2") namespace_path (str): the desired path corresponding to the package name (i.e. "dropsonde/py2")
juraj-google-style
def _create(self, monomer, mon_vector): while self.length != (self.n_units-1): if self.linear_chain: move_direction = np.array(mon_vector) / np.linalg.norm(mon_vector) else: move_direction = self._next_move_direction() self._add_monome...
create the polymer from the monomer Args: monomer (Molecule) mon_vector (numpy.array): molecule vector that starts from the start atom index to the end atom index
juraj-google-style
def save_as(self, filename: str) -> None: lib.TCOD_image_save(self.image_c, filename.encode('utf-8'))
Save the Image to a 32-bit .bmp or .png file. Args: filename (Text): File path to save this Image.
codesearchnet
def start(self):
    if not self.started:
        self.started = True
        self.executor = ThreadPoolExecutor(max_workers=32)
        self.poller = self.executor.submit(self.poll_events)
    else:
        raise IllegalStateError('Dispatcher is already started.')
Starts the event dispatcher. Initiates executor and start polling events. Raises: IllegalStateError: Can't start a dispatcher again when it's already running.
codesearchnet
def report_fhir_path_warning(self, element_path: str, fhir_path_constraint: str, msg: str) -> None:
Reports a FHIRPath constraint warning during validation and/or encoding. Args: element_path: The path to the field that the constraint is defined on. fhir_path_constraint: The FHIRPath constraint expression. msg: The warning message produced.
github-repos
def color_gen_map(colors: Iterable[Tuple[(int, int, int)]], indexes: Iterable[int]) -> List[Color]: ccolors = ffi.new('TCOD_color_t[]', colors) cindexes = ffi.new('int[]', indexes) cres = ffi.new('TCOD_color_t[]', (max(indexes) + 1)) lib.TCOD_color_gen_map(cres, len(ccolors), ccolors, cindexes) retu...
Return a smoothly defined scale of colors. If ``indexes`` is [0, 3, 9] for example, the first color from ``colors`` will be returned at 0, the 2nd will be at 3, and the 3rd will be at 9. All in-betweens will be filled with a gradient. Args: colors (Iterable[Union[Tuple[int, int, int], Sequence[int]]]): Array of color...
codesearchnet
def from_node(cls, node): if (not isinstance(node, aioxmpp.stanza.Message)): raise AttributeError('node must be a aioxmpp.stanza.Message instance') msg = cls() msg._to = node.to msg._sender = node.from_ if (None in node.body): msg.body = node.body[None] else: for key in n...
Creates a new spade.message.Message from an aixoxmpp.stanza.Message Args: node (aioxmpp.stanza.Message): an aioxmpp Message Returns: spade.message.Message: a new spade Message
codesearchnet
def _render_fluent_timestep(self, fluent_type: str, fluents: Sequence[Tuple[str, np.array]], fluent_variables: Sequence[Tuple[str, List[str]]]) -> None: for fluent_pair, variable_list in zip(fluents, fluent_variables): name, fluent = fluent_pair ...
Prints `fluents` of given `fluent_type` as list of instantiated variables with corresponding values. Args: fluent_type (str): Fluent type. fluents (Sequence[Tuple[str, np.array]]): List of pairs (fluent_name, fluent_values). fluent_variables (Sequence[Tuple[str, List[str]]]): List of pairs (fluent_name, args).
juraj-google-style
def console_print_ex(con: tcod.console.Console, x: int, y: int, flag: int, alignment: int, fmt: str) -> None:
    lib.TCOD_console_printf_ex(_console(con), x, y, flag, alignment, _fmt(fmt))
Print a string on a console using a blend mode and alignment mode. Args: con (Console): Any Console instance. x (int): Character x position from the left. y (int): Character y position from the top. .. deprecated:: 8.5 Use :any:`Console.print_` instead.
juraj-google-style
def bulkWrite(self, endpoint, buffer, timeout=100):
    return self.dev.write(endpoint, buffer, timeout)
r"""Perform a bulk write request to the endpoint specified. Arguments: endpoint: endpoint number. buffer: sequence data buffer to write. This parameter can be any sequence type. timeout: operation timeout in milliseconds. (default: 100) Returns the number of bytes written.
juraj-google-style
def _recursive_remove_blank_dirs(self, path): path = os.path.abspath(path) if path == self.path or len(path) <= len(self.path): return if not os.path.exists(path): return self._recursive_remove_blank_dirs( os.path.dirname(path)...
Make sure, that blank directories are removed from the storage. Args: path (str): Path which you suspect that is blank.
juraj-google-style
def triangle_area(point1, point2, point3):
    a = point_distance(point1, point2)
    b = point_distance(point1, point3)
    c = point_distance(point2, point3)
    s = (a + b + c) / 2.0
    return math.sqrt(s * (s - a) * (s - b) * (s - c))
Uses Heron's formula to find the area of a triangle based on the coordinates of three points. Args: point1: list or tuple, the x y coordinate of point one. point2: list or tuple, the x y coordinate of point two. point3: list or tuple, the x y coordinate of point three. Returns: The area of a triangle as a floating ...
juraj-google-style
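Heron's formula above needs only pairwise distances; assuming `point_distance` is the Euclidean distance between 2-D points, a self-contained version can be checked against a 3-4-5 right triangle.

```python
import math

def point_distance(p1, p2):
    # Assumption: plain Euclidean distance between 2-D points.
    return math.hypot(p1[0] - p2[0], p1[1] - p2[1])

def triangle_area(point1, point2, point3):
    # Heron's formula: area = sqrt(s(s-a)(s-b)(s-c)), s = semi-perimeter.
    a = point_distance(point1, point2)
    b = point_distance(point1, point3)
    c = point_distance(point2, point3)
    s = (a + b + c) / 2.0
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

# 3-4-5 right triangle: area = (4 * 3) / 2
print(triangle_area((0, 0), (4, 0), (0, 3)))  # → 6.0
```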
def grad_dot(dy, x1, x2): if len(numpy.shape(x1)) == 1: dy = numpy.atleast_2d(dy) elif len(numpy.shape(x2)) == 1: dy = numpy.transpose(numpy.atleast_2d(dy)) x2 = numpy.transpose(numpy.atleast_2d(x2)) x2_t = numpy.transpose(numpy.atleast_2d( numpy.sum(x2, axis=tuple(numpy.arange(numpy.ndim(x2)...
Gradient of NumPy dot product w.r.t. to the left hand side. Args: dy: The gradient with respect to the output. x1: The left hand side of the `numpy.dot` function. x2: The right hand side Returns: The gradient with respect to `x1` i.e. `x2.dot(dy.T)` with all the broadcasting involved.
juraj-google-style
def _publish_internal(self, push_messages): import requests response = requests.post(((self.host + self.api_url) + '/push/send'), data=json.dumps([pm.get_payload() for pm in push_messages]), headers={'accept': 'application/json', 'accept-encoding': 'gzip, deflate', 'content-type': 'application/json'}) try: ...
Send push notifications The server will validate any type of syntax errors and the client will raise the proper exceptions for the user to handle. Each notification is of the form: { 'to': 'ExponentPushToken[xxx]', 'body': 'This text gets display in the notification', 'badge': 1, 'data': {'any': 'json object'}, } Ar...
codesearchnet
def pubsub_pop_message(self, deadline=None): if not self.subscribed: excep = ClientError("you must subscribe before using " "pubsub_pop_message") raise tornado.gen.Return(excep) reply = None try: reply = self._reply_lis...
Pops a message for a subscribed client. Args: deadline (int): max number of seconds to wait (None => no timeout) Returns: Future with the popped message as result (or None if timeout or ConnectionError object in case of connection errors or ClientError object if you are not subscribed)
juraj-google-style
def _prepare_for_training(self, job_name=None): if (job_name is not None): self._current_job_name = job_name else: if self.base_job_name: base_name = self.base_job_name elif isinstance(self, sagemaker.algorithm.AlgorithmEstimator): base_name = self.algorithm_arn.s...
Set any values in the estimator that need to be set before training. Args: * job_name (str): Name of the training job to be created. If not specified, one is generated, using the base name given to the constructor if applicable.
codesearchnet
def DeserializeExclusiveData(self, reader):
    self.Type = TransactionType.StateTransaction
    self.Descriptors = reader.ReadSerializableArray('neo.Core.State.StateDescriptor.StateDescriptor')
Deserialize full object. Args: reader (neo.IO.BinaryReader): Raises: Exception: If the transaction type is incorrect or if there are no claims.
juraj-google-style
def __call__(self, raw_speech: Union[np.ndarray, List[float], List[np.ndarray], List[List[float]]], padding: Union[bool, str, PaddingStrategy]=False, max_length: Optional[int]=None, pad_to_multiple_of: Optional[int]=None, padding_side: Optional[str]=None, return_tensors: Optional[Union[str, TensorType]]=None, verbose: ...
Main method to tokenize and prepare for the model one or several sequence(s) or one or several pair(s) of sequences. Args: raw_speech (`np.ndarray`, `List[float]`, `List[np.ndarray]`, `List[List[float]]`): The sequence or batch of sequences to be padded. Each sequence can be a numpy array, a list of float values, a li...
github-repos
def run(self, args): jlink = self.create_jlink(args) if args.list: print('Built-in Licenses: %s' % ', '.join(jlink.licenses.split(','))) print('Custom Licenses: %s' % ', '.join(jlink.custom_licenses.split(','))) elif args.add is not None: if jlink.add...
Runs the license command. Args: self (LicenseCommand): the ``LicenseCommand`` instance args (Namespace): the arguments passed on the command-line Returns: ``None``
juraj-google-style
def use_gradient(grad_f): grad_f_name = register_to_random_name(grad_f) def function_wrapper(f): def inner(*inputs): state = {'out_value': None} out = f(*inputs) def store_out(out_value): 'Store the value of out to a python variable.' ...
Decorator for easily setting custom gradients for TensorFlow functions. * DO NOT use this function if you need to serialize your graph. * This function will cause the decorated function to run slower. Example: def _foo_grad(op, grad): ... @use_gradient(_foo_grad) def foo(x1, x2, x3): ... Args: grad_f: function to ...
codesearchnet
def MergeBaseClass(cls, base): bases = tuple((b for b in cls.bases if b != base)) bases += tuple((b for b in base.bases if b not in bases)) method_names = [m.name for m in cls.methods] methods = cls.methods + tuple((m for m in base.methods if m.name not in method_names)) constant_names = [c.name for...
Merge a base class into a subclass. Arguments: cls: The subclass to merge values into. pytd.Class. base: The superclass whose values will be merged. pytd.Class. Returns: a pytd.Class of the two merged classes.
github-repos
def check_mailfy(self, query, kwargs={}): import re import requests s = requests.Session() r1 = s.get("https: csrf_token = re.findall("csrf_token", r1.text)[0] r2 = s.post( 'https: data={"email": query}, he...
Verifying a mailfy query in this platform. This might be redefined in any class inheriting from Platform. The only condition is that any of this should return a dictionary as defined. Args: ----- query: The element to be searched. kwargs: Dictionary with extra parameters. Just in case. Return: ------- Returns the co...
juraj-google-style
def get_frame(self, index=None, onset=None): if onset: index = int(onset * self.fps) return super(VideoStim, self).get_frame(index)
Overrides the default behavior by giving access to the onset argument. Args: index (int): Positional index of the desired frame. onset (float): Onset (in seconds) of the desired frame.
juraj-google-style
def is_process_running(process_name): is_running = False if os.path.isfile('/usr/bin/pgrep'): dev_null = open(os.devnull, 'wb') returncode = subprocess.call(['/usr/bin/pgrep', process_name], stdout=dev_null) is_running = bool(returncode == ...
Check if a process with the given name is running. Args: process_name (str): Process name, e.g. "Sublime Text" Returns: (bool): True if the process is running
juraj-google-style
def get_variant_genotypes(self, variant): if not self.has_index: raise NotImplementedError("Not implemented when IMPUTE2 file is " "not indexed (see genipe)") try: impute2_chrom = CHROM_STR_TO_INT[variant.chrom.name] ...
Get the genotypes from a well-formed variant instance. Args: variant (Variant): A Variant instance. Returns: A list of Genotypes instances containing a pointer to the variant as well as a vector of encoded genotypes.
juraj-google-style
def post_warning(self, name, message): self.post_command(OPERATIONS.CMD_POST_MESSAGE, _create_message(name, states.WARNING_LEVEL, message))
Asynchronously post a user facing warning message about a service. Args: name (string): The name of the service message (string): The user facing warning message that will be stored for the service and can be queried later.
codesearchnet
def list(self, container_or_share_name, container=None, account=None): key = self.storage_client.storage_accounts.list_keys(self.resource_group_name, account).keys[0].value if container: bs = BlockBlobService(account_name=account, account_key=key) container_list = [] ...
List the blobs/files inside a container/share_name. Args: container_or_share_name(str): Name of the container/share_name where we want to list the blobs/files. container(bool): flag to know if you are listing files or blobs. account(str): The name of the storage account.
juraj-google-style
def step1_get_authorize_url(self, redirect_uri=None, state=None): if (redirect_uri is not None): logger.warning('The redirect_uri parameter for OAuth2WebServerFlow.step1_get_authorize_url is deprecated. Please move to passing the redirect_uri in via the constructor.') self.redirect_uri = redirect_ur...
Returns a URI to redirect to the provider. Args: redirect_uri: string, Either the string 'urn:ietf:wg:oauth:2.0:oob' for a non-web-based application, or a URI that handles the callback from the authorization server. This parameter is deprecated, please move to passing the redirect_uri in via the constructor. state: st...
codesearchnet
def setup_service(api_name, api_version, credentials=None): if not credentials: credentials = oauth2client.client.GoogleCredentials.get_application_default( ) return apiclient.discovery.build( api_name, api_version, credentials=credentials)
Configures genomics API client. Args: api_name: Name of the Google API (for example: "genomics") api_version: Version of the API (for example: "v2alpha1") credentials: Credentials to be used for the gcloud API calls. Returns: A configured Google Genomics API client with appropriate credentials.
juraj-google-style
def flatten(self, max_value: int) -> FrozenSet[int]: return frozenset(self.iter(max_value))
Return a set of all values contained in the sequence set. Args: max_value: The maximum value, in place of any ``*``.
juraj-google-style
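The `flatten` method above materialises every value a sequence set covers, with `*` standing in for `max_value`. A minimal standalone sketch of that idea, assuming a hypothetical `'1:3,5,*'` string format (the real class presumably stores a parsed representation):

```python
def iter_sequence_set(spec, max_value):
    """Yield values for a sequence-set string like '1:3,5,*' (hypothetical format).

    '*' stands for max_value; 'a:b' is an inclusive range.
    """
    for part in spec.split(","):
        if ":" in part:
            lo, hi = part.split(":")
            lo = max_value if lo == "*" else int(lo)
            hi = max_value if hi == "*" else int(hi)
            if lo > hi:
                lo, hi = hi, lo  # normalise reversed ranges
            yield from range(lo, hi + 1)
        else:
            yield max_value if part == "*" else int(part)

def flatten(spec, max_value):
    # Mirror the snippet above: collect the iterated values into a frozenset.
    return frozenset(iter_sequence_set(spec, max_value))
```

`flatten("1:3,5,*", 10)` yields the frozen set {1, 2, 3, 5, 10}.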
def recoverURL(self, url): self.setUserAgent() if "https: self.setProxy(protocol = "https") else: self.setProxy(protocol = "http") if ".onion" in url: try: pass except: ...
Public method to recover a resource. Args: ----- url: The URL to be collected. Returns: -------- Returns a resource that has to be read, for instance, with html = self.br.read()
juraj-google-style
def print_type(self, t, literal=False) -> str:
Returns a string of the type of t. For example, if t is `0`, then this method returns "int" with literal=False or `Literal[0]` with literal=True. Args: t: An abstract value. literal: Whether to print literals literally.
github-repos
def render_table(data, headers=None): builder = HtmlBuilder() builder._render_objects(data, headers, datatype='dict') return builder._to_html()
Return a dictionary list formatted as an HTML table. Args: data: a list of dictionaries, one per row. headers: the keys in the dictionary to use as table columns, in order.
juraj-google-style
def run_benchmarks(benchmark_suite, verbose=True): def run(benchmark: BenchmarkFactoryFn, size: int): benchmark_instance_callable = benchmark(size) start = time.time() _ = benchmark_instance_callable() return time.time() - start cost_series = collections.defaultdict(list) si...
Runs benchmarks, and collects execution times. A simple instrumentation to run a callable several times, collect and print its execution times. Args: benchmark_suite: A list of BenchmarkConfig. verbose: bool, whether to print benchmark results to stdout. Returns: A dictionary of the form string -> list of floats. Ke...
github-repos
def erfinv(x): if any_symbolic_tensors((x,)): return Erfinv().symbolic_call(x) x = backend.convert_to_tensor(x) return backend.math.erfinv(x)
Computes the inverse error function of `x`, element-wise. Args: x: Input tensor. Returns: A tensor with the same dtype as `x`. Example: >>> x = np.array([-0.5, -0.2, -0.1, 0.0, 0.3]) >>> keras.ops.erfinv(x) array([-0.47694, -0.17914, -0.08886, 0. , 0.27246], dtype=float32)
github-repos
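The backend call hides the actual computation. A pure-Python sketch of an inverse error function, using the identity erfinv(x) = Φ⁻¹((x + 1) / 2) / √2 with the standard-normal quantile from the stdlib (this is an assumption-free mathematical identity, but it is not how the Keras backend implements it):

```python
import math
from statistics import NormalDist

def erfinv_approx(x: float) -> float:
    """Inverse error function via the standard normal quantile.

    Uses erfinv(x) = inv_cdf((x + 1) / 2) / sqrt(2), where inv_cdf is the
    inverse CDF of the standard normal distribution.
    """
    return NormalDist().inv_cdf((x + 1.0) / 2.0) / math.sqrt(2.0)
```

Round-tripping through `math.erf` recovers the input to high precision.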
def closest_point(a, b, p): ap = [(p[0] - a[0]), (p[1] - a[1])] ab = [(b[0] - a[0]), (b[1] - a[1])] mag = float(((ab[0] ** 2) + (ab[1] ** 2))) proj = dot(ap, ab) if (mag == 0): dist = 0 else: dist = (proj / mag) if (dist < 0): return [a[0], a[1]] elif (dist > 1): ...
Finds the closest point on a line segment Args: a ([float, float]): x and y coordinates. Line start b ([float, float]): x and y coordinates. Line end p ([float, float]): x and y coordinates. Point to find in the segment Returns: (float, float): x and y coordinates of the closest point
codesearchnet
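The snippet above is truncated before the final interpolation. A complete, runnable sketch of the same projection-and-clamp approach (the `dot` helper is assumed from elsewhere in the original module):

```python
def dot(u, v):
    # 2D dot product, assumed to match the helper used by the original code.
    return u[0] * v[0] + u[1] * v[1]

def closest_point(a, b, p):
    """Closest point to p on segment a-b: project p onto the line, clamp to [0, 1]."""
    ap = [p[0] - a[0], p[1] - a[1]]
    ab = [b[0] - a[0], b[1] - a[1]]
    mag = float(ab[0] ** 2 + ab[1] ** 2)
    t = 0.0 if mag == 0 else dot(ap, ab) / mag
    if t < 0:
        return [a[0], a[1]]       # projection falls before the segment start
    if t > 1:
        return [b[0], b[1]]       # projection falls past the segment end
    return [a[0] + ab[0] * t, a[1] + ab[1] * t]
```

For a point above the middle of a horizontal segment the result is its foot on the segment; points beyond either end clamp to the endpoints.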
def getfileversion(self): (status, major_v, minor_v, release, info) = _C.Hgetfileversion(self._id) _checkErr('getfileversion', status, 'cannot get file version') return (major_v, minor_v, release, info)
Get file version info. Args: no argument Returns: 4-element tuple with the following components: -major version number (int) -minor version number (int) -complete library version number (int) -additional information (string) C library equivalent : Hgetlibversion
codesearchnet
def __init__(self, *args, **kwargs): super(ClaimTransaction, self).__init__(*args, **kwargs) self.Type = TransactionType.ClaimTransaction
Create an instance. Args: *args: **kwargs:
juraj-google-style
def add_ensembl_info(genes, ensembl_lines): LOG.info("Adding ensembl coordinates") if isinstance(ensembl_lines, DataFrame): ensembl_genes = parse_ensembl_gene_request(ensembl_lines) else: ensembl_genes = parse_ensembl_genes(ensembl_lines) for ensembl_gene in ensembl_genes...
Add the coordinates from Ensembl Args: genes(dict): Dictionary with all genes ensembl_lines(iterable): Iterable with raw ensembl info
juraj-google-style
def AddArguments(cls, argument_group): argument_group.add_argument( '--server', dest='server', type=str, action='store', default=cls._DEFAULT_SERVER, metavar='HOSTNAME', help='The hostname or server IP address of the server.') argument_group.add_argument( '--port', dest='por...
Adds command line arguments the helper supports to an argument group. This function takes an argument parser or an argument group object and adds to it all the command line arguments this helper supports. Args: argument_group (argparse._ArgumentGroup|argparse.ArgumentParser): argparse group.
juraj-google-style
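The helper pattern above registers options on an argparse group. A minimal self-contained sketch of the same `--server`/`--port` registration (the defaults here are placeholders, not the helper's real `_DEFAULT_SERVER`/`_DEFAULT_PORT` values):

```python
import argparse

parser = argparse.ArgumentParser()
group = parser.add_argument_group("server")  # stands in for the helper's group
group.add_argument(
    "--server", dest="server", type=str, action="store",
    default="127.0.0.1", metavar="HOSTNAME",
    help="The hostname or server IP address of the server.")
group.add_argument(
    "--port", dest="port", type=int, action="store",
    default=80, metavar="PORT",
    help="The port number of the server.")

# Parse an explicit argv list rather than sys.argv for demonstration.
args = parser.parse_args(["--server", "example.org", "--port", "8080"])
print(args.server, args.port)
```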
def _clean_url(url): if url == 'default': url = DEFAULT_SERVER_HTTP_URL if url.startswith("ws"): raise ValueError("url should be the http or https URL for the server, not the websocket URL") return url.rstrip("/")
Produce a canonical Bokeh server URL. Args: url (str) A URL to clean, or "default". If "default" then the ``DEFAULT_SERVER_HTTP_URL`` will be returned. Returns: str
juraj-google-style
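The cleaning logic is small enough to sketch standalone; the default URL below is a stand-in constant, not necessarily Bokeh's actual value:

```python
DEFAULT_SERVER_HTTP_URL = "http://localhost:5006"  # assumed default for this sketch

def clean_url(url):
    """Canonicalise a server URL: resolve "default", reject ws://, strip trailing /."""
    if url == "default":
        url = DEFAULT_SERVER_HTTP_URL
    if url.startswith("ws"):
        raise ValueError(
            "url should be the http or https URL for the server, not the websocket URL")
    return url.rstrip("/")
```

Note that the websocket check runs after the default substitution, so a `ws://` default would also be rejected.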
async def skip(self, query="1"): if not self.state == 'ready': logger.debug("Trying to skip from wrong state '{}'".format(self.state)) return if query == "": query = "1" elif query == "all": query = str(len(self.queue) + 1) try:...
The skip command Args: query (str): The number of items to skip
juraj-google-style
def rotate(self, vecs): assert vecs.dtype == np.float32 assert vecs.ndim in [1, 2] if vecs.ndim == 2: return vecs @ self.R elif vecs.ndim == 1: return (vecs.reshape(1, -1) @ self.R).reshape(-1)
Rotate input vector(s) by the rotation matrix. Args: vecs (np.ndarray): Input vector(s) with dtype=np.float32. The shape can be a single vector (D, ) or several vectors (N, D) Returns: np.ndarray: Rotated vectors with the same shape and dtype as the input vecs.
juraj-google-style
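`vecs @ self.R` treats each vector as a row and multiplies it by the rotation matrix on the right. A dependency-free 2D sketch of that row-vector convention (the original works on arbitrary-dimension NumPy arrays):

```python
import math

def rotation_matrix(theta):
    # Standard 2D rotation matrix.
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def rotate(vec, R):
    # Row-vector convention, matching `vecs @ self.R` in the snippet above.
    return [vec[0] * R[0][0] + vec[1] * R[1][0],
            vec[0] * R[0][1] + vec[1] * R[1][1]]

R90 = rotation_matrix(math.pi / 2)
```

With this convention, multiplying a row vector by the counter-clockwise matrix `R` rotates it clockwise; `R @ vec` (column convention) would rotate counter-clockwise.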
def commit(self, sourcedir, targetdir, abs_config, abs_sourcedir, abs_targetdir): config_path, config_filename = os.path.split(abs_config) if not os.path.exists(config_path): os.makedirs(config_path) if not os.path.exists(abs_sourcedir): os.makedi...
Commit project structure and configuration file Args: sourcedir (string): Source directory path. targetdir (string): Compiled files target directory path. abs_config (string): Configuration file absolute path. abs_sourcedir (string): ``sourcedir`` expanded as absolute path. abs_targetdir (string): ``targetdir`` expand...
juraj-google-style
def abspath(self, path): if not path.startswith(os.path.sep) or path.startswith('~'): path = os.path.expanduser(os.path.join(self.base_path, path)) return path
Transform the path to an absolute path Args: path (string): The path to transform to an absolute path Returns: string: The absolute path to the file
juraj-google-style
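The idea above is to anchor relative paths at a configured base. A deterministic sketch using `posixpath` (so the behaviour does not depend on the host OS); unlike the original it skips `expanduser`, since home-directory expansion is environment dependent:

```python
import posixpath

def make_abspath(base_path, path):
    """Join a relative path onto base_path; leave absolute paths untouched (sketch)."""
    if not path.startswith(posixpath.sep):
        path = posixpath.normpath(posixpath.join(base_path, path))
    return path
```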
def LockedWrite(self, cache_data): if isinstance(cache_data, six.text_type): cache_data = cache_data.encode(encoding=self._encoding) with self._thread_lock: if not self._EnsureFileExists(): return False with self._process_lock_getter() as acq...
Acquire an interprocess lock and write a string. This method safely acquires the locks then writes a string to the cache file. If the string is written successfully the function will return True, if the write fails for any reason it will return False. Args: cache_data: string or bytes to write. Returns: bool: succes...
juraj-google-style
def __clean__(struct: Union[dict, list]) -> Union[dict, list]: if isinstance(struct, dict): for key, value in struct.items(): if isinstance(value, bytes): struct[key] = base64.standard_b64encode(value).decode('ascii') elif isinstance(value, date): stru...
Helper to recursively clean up JSON data for API call. Converts bytes -> base64. Converts date -> str (yyyy-mm-dd). TODO: Add conversions for datetime, time -> str. Args: struct: The kwargs being cleaned up. Returns: struct: The kwargs with replacements.
github-repos
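A self-contained sketch of the same recursive cleanup; unlike the original, which mutates `struct` in place, this version returns a new structure (a deliberate simplification):

```python
import base64
from datetime import date

def clean(struct):
    """Recursively convert bytes -> base64 str and date -> ISO yyyy-mm-dd str."""
    if isinstance(struct, dict):
        return {k: clean(v) for k, v in struct.items()}
    if isinstance(struct, list):
        return [clean(v) for v in struct]
    if isinstance(struct, bytes):
        return base64.standard_b64encode(struct).decode("ascii")
    if isinstance(struct, date):
        return struct.isoformat()
    return struct  # ints, floats, strings, None pass through unchanged
```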
def get_public_tokens(self): r = self.remote_utils.get_url((self.url() + 'public_tokens/')) return r.json()
Get a list of public tokens available on this server. Arguments: None Returns: str[]: list of public tokens
codesearchnet
def setOutputHandler(self, outputhandler): class OutputHandlerInternal(amplpython.OutputHandler): def output(self, kind, msg): outputhandler.output(kind, msg) self._outputhandler = outputhandler self._outputhandler_internal = OutputHandlerInternal() ...
Sets a new output handler. Args: outputhandler: The function handling the AMPL output derived from interpreting user commands.
juraj-google-style
def add_scalar_value(self, value_buf): self.__container_node.add_child(_Node(value_buf)) self.current_container_length += len(value_buf)
Add a node to the tree containing a scalar value. Args: value_buf (bytearray): bytearray containing the scalar value.
codesearchnet
def _move_bee(self, bee, new_values): score = np.nan_to_num(new_values[0]) if (bee.score > score): bee.failed_trials += 1 else: bee.values = new_values[1] bee.score = score bee.error = new_values[2] bee.failed_trials = 0 self._logger.log('debug', 'Bee assigned...
Moves a bee to a new position if new fitness score is better than the bee's current fitness score Args: bee (EmployerBee): bee to move new_values (tuple): (new score, new values, new fitness function return value)
codesearchnet
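The move rule above is the greedy step of the artificial bee colony algorithm: keep the better score, otherwise count a failed trial (used later to abandon stale food sources). A minimal sketch with a stand-in `Bee` class (the real `EmployerBee` presumably carries more state):

```python
import math

class Bee:
    def __init__(self, values, score):
        self.values = values
        self.score = score
        self.failed_trials = 0

def move_bee(bee, new_score, new_values):
    """Greedy move: adopt the new position only if its score is not worse."""
    new_score = 0.0 if math.isnan(new_score) else new_score  # mirror np.nan_to_num
    if bee.score > new_score:
        bee.failed_trials += 1
    else:
        bee.values = new_values
        bee.score = new_score
        bee.failed_trials = 0
```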
def merge(profile, branch, merge_into): data = merges.merge(profile, branch, merge_into) return data
Merge a branch into another branch. Args: profile A profile generated from ``simplygithub.authentication.profile``. Such profiles tell this module (i) the ``repo`` to connect to, and (ii) the ``token`` to connect with. branch The name of the branch to merge. merge_into The name of the branch you want to merge into....
codesearchnet
def set_brightness(self, brightness): if not 25 <= brightness <= 255: raise ValueError("The brightness needs to be between 25 and 255.") payload = self.generate_payload(SET, {self.DPS_INDEX_BRIGHTNESS: brightness}) data = self._send_receive(payload) return data
Set the brightness value of an rgb bulb. Args: brightness(int): Value for the brightness (25-255).
juraj-google-style
def _accept(random_sample: float, cost_diff: float, temp: float) -> Tuple[bool, float]: exponent = -cost_diff / temp if exponent >= 0.0: return True, 1.0 else: probability = math.exp(exponent) return probability > random_sample, probability
Calculates probability and draws if solution should be accepted. Based on exp(-Delta*E/T) formula. Args: random_sample: Uniformly distributed random number in the range [0, 1). cost_diff: Cost difference between new and previous solutions. temp: Current temperature. Returns: Tuple of boolean and float, with boolean ...
juraj-google-style
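This is the Metropolis acceptance rule: improvements (`cost_diff <= 0`) are always taken, worsenings are taken with probability exp(-ΔE/T). A sketch of how it plugs into a simulated-annealing loop (the `anneal` driver and its signature are illustrative, not part of the original module):

```python
import math
import random

def accept(random_sample, cost_diff, temp):
    # Metropolis rule: always accept improvements, else with prob exp(-diff/temp).
    exponent = -cost_diff / temp
    if exponent >= 0.0:
        return True, 1.0
    probability = math.exp(exponent)
    return probability > random_sample, probability

def anneal(cost, start, neighbor, temps, rng):
    """Minimal annealing loop driven by the acceptance rule above (sketch)."""
    current, current_cost = start, cost(start)
    for temp in temps:
        candidate = neighbor(current, rng)
        candidate_cost = cost(candidate)
        ok, _ = accept(rng.random(), candidate_cost - current_cost, temp)
        if ok:
            current, current_cost = candidate, candidate_cost
    return current, current_cost
```

Passing a pre-drawn uniform sample into `accept`, rather than drawing inside it, keeps the function deterministic and easy to test.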