code — string, lengths 20 to 4.93k
docstring — string, lengths 33 to 1.27k
source — string, 3 classes
def of_cte(cls, header: Optional[ContentTransferEncodingHeader]) \ -> 'MessageDecoder': if header is None: return _NoopDecoder() hdr_str = str(header).lower() custom = cls.registry.get(hdr_str) if custom is not None: return custom elif...
Return a decoder from the CTE header value. There is built-in support for ``7bit``, ``8bit``, ``quoted-printable``, and ``base64`` CTE header values. Decoders can be added or overridden with the :attr:`.registry` dictionary. Args: header: The CTE header value.
juraj-google-style
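The base64 and quoted-printable branches are truncated in the snippet above; this is a minimal runnable sketch of just the registry lookup, with an assumed `NoopDecoder` standing in for the built-in pass-through decoders:

```python
class MessageDecoder:
    """Base decoder. `registry` maps lowercased CTE header values to decoders."""
    registry = {}

    @classmethod
    def of_cte(cls, header):
        # No header means no transfer encoding was applied.
        if header is None:
            return NoopDecoder()
        hdr_str = str(header).lower()
        # Custom registrations take precedence over the built-ins.
        custom = cls.registry.get(hdr_str)
        if custom is not None:
            return custom
        if hdr_str in ('7bit', '8bit'):
            return NoopDecoder()
        raise NotImplementedError(hdr_str)


class NoopDecoder(MessageDecoder):
    """Pass-through decoder for 7bit/8bit content."""
    def decode(self, data: bytes) -> bytes:
        return data
```

Registering an entry in `registry` overrides the lookup for that header value, mirroring the docstring's "decoders can be added or overridden" behavior.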
def learn(self, iter_n=500, k_step=10): generative_model, discriminative_model = self.__GAN.train( self.__true_sampler, self.__generative_model, self.__discriminative_model, iter_n=iter_n, k_step=k_step ) self.__generative_mode...
Learning. Args: iter_n: The number of training iterations. k_step: The number of discriminator learning steps per iteration.
juraj-google-style
def save_users(users, path=settings.LOGIN_FILE): with open(path, "w") as fh: for username, data in users.items(): pass_line = username + ":" + ":".join([ data["pass_hash"], data["uid"], data["gid"], data["full_name"], ...
Save dictionary with user data to passwd file (default :attr:`ftp.settings.LOGIN_FILE`). Args: users (dict): dictionary with user data. For details look at dict returned from :func:`load_users`. path (str, default settings.LOGIN_FILE): path of the file, where the data will be stored (default :attr:`ftp.settings.LOGIN_...
juraj-google-style
def memoizedmethod(method): method_name = method.__name__ @wraps(method) def patched(self, *args, **kwargs): try: return self._cache[method_name] except KeyError: result = self._cache[method_name] = method( self, *args...
Decorator that caches the method result. Args: method (function): Method to memoize. Returns: function: Memoized method. Notes: The target method's class needs a "_cache" attribute (dict). This is the case for "ObjectIOBase" and all its subclasses.
juraj-google-style
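The caching pattern above is self-contained enough to sketch end to end; the `Resource` class here is illustrative, standing in for `ObjectIOBase` and its subclasses:

```python
from functools import wraps


def memoizedmethod(method):
    """Cache the method's result in the instance's `_cache` dict."""
    method_name = method.__name__

    @wraps(method)
    def patched(self, *args, **kwargs):
        try:
            return self._cache[method_name]
        except KeyError:
            # First call: compute, store, and return.
            result = self._cache[method_name] = method(self, *args, **kwargs)
            return result
    return patched


class Resource:
    def __init__(self):
        self._cache = {}  # required by the decorator
        self.calls = 0

    @memoizedmethod
    def size(self):
        self.calls += 1
        return 42
```

Note the cache key is the method name only, so this pattern assumes the method's result does not depend on its arguments.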
def __init__(self, num_workers, *unused_args, **unused_kwargs): super().__init__(*unused_args, **unused_kwargs) self._num_workers = num_workers self._successful_ops = util.MovingSum(window_ms=1000, bucket_ms=1000) self._first_instant = datetime.datetime.now() self._throttled_secs = Metrics.counter(R...
Initializes a ramp-up throttler transform. Args: num_workers: A hint for the expected number of workers, used to derive the local rate limit.
github-repos
def __call__(self, fn): def output(app, *args, **kwargs): data = fn(app, *args, **kwargs) attr = getattr(app, self.attribute) if isinstance(data, list) and isinstance(attr, list): getattr(app, self.attribute).extend(data) eli...
Implement __call__ function for decorator. Args: fn (function): The decorated function. Returns: function: The custom decorator function.
juraj-google-style
def get_file(profile, branch, file_path): branch_sha = get_branch_sha(profile, branch) tree = get_files_in_branch(profile, branch_sha) match = None for item in tree: if (item.get('path') == file_path): match = item break file_sha = match.get('sha') blob = blobs.ge...
Get a file from a branch. Args: profile A profile generated from ``simplygithub.authentication.profile``. Such profiles tell this module (i) the ``repo`` to connect to, and (ii) the ``token`` to connect with. branch The name of a branch. file_path The path of the file to fetch. Returns: The (UTF-8 encoded) content...
codesearchnet
def refs(self, type='all', **kwargs): path = '%s/%s/refs' % (self.manager.path, self.get_id()) data = {'type': type} return self.manager.gitlab.http_get(path, query_data=data, **kwargs)
List the references the commit is pushed to. Args: type (str): The scope of references ('branch', 'tag' or 'all') **kwargs: Extra options to send to the server (e.g. sudo) Raises: GitlabAuthenticationError: If authentication is not correct GitlabGetError: If the references could not be retrieved Returns: list: The r...
juraj-google-style
def sin(duration: int, amp: complex, freq: float=None, phase: float=0, name: str=None) -> SamplePulse: if (freq is None): freq = (1 / duration) return _sampled_sin_pulse(duration, amp, freq, phase=phase, name=name)
Generates a sine wave `SamplePulse`. Args: duration: Duration of pulse. Must be greater than zero. amp: Pulse amplitude. freq: Pulse frequency, in units of 1/dt. If `None`, defaults to a single cycle. phase: Pulse phase. name: Name of pulse.
codesearchnet
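The sampling helper `_sampled_sin_pulse` is not shown above; a plain-Python sketch of what such sampling plausibly does, including the single-cycle default when `freq` is `None`:

```python
import math


def sampled_sin_pulse(duration, amp, freq=None, phase=0.0):
    """Sample amp * sin(2*pi*freq*t + phase) at t = 0 .. duration-1."""
    if freq is None:
        # Default: exactly one cycle over the pulse duration.
        freq = 1 / duration
    return [amp * math.sin(2 * math.pi * freq * t + phase)
            for t in range(duration)]
```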
def solve2x2(lhs, rhs): if (np.abs(lhs[(1, 0)]) > np.abs(lhs[(0, 0)])): ratio = (lhs[(0, 0)] / lhs[(1, 0)]) denominator = (lhs[(0, 1)] - (ratio * lhs[(1, 1)])) if (denominator == 0.0): return (True, None, None) y_val = ((rhs[0] - (ratio * rhs[1])) / denominator) x...
Solve a square 2 x 2 system via LU factorization. This is meant to be a stand-in for LAPACK's ``dgesv``, which just wraps two calls to ``dgetrf`` and ``dgetrs``. We wrap for two reasons: * We seek to avoid exceptions as part of the control flow (which is what :func`numpy.linalg.solve` does). * We seek to avoid excess...
codesearchnet
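The snippet above truncates after the first pivoting branch; a complete pure-Python stand-in with both branches, returning a `(singular, x, y)` triple instead of raising, as the docstring motivates:

```python
def solve2x2(lhs, rhs):
    """Solve a 2x2 system with partial pivoting.

    Returns (singular, x, y); (True, None, None) when the matrix is singular.
    `lhs` is a 2x2 nested list, `rhs` a length-2 list.
    """
    # Pivot on the row with the larger leading coefficient.
    if abs(lhs[1][0]) > abs(lhs[0][0]):
        ratio = lhs[0][0] / lhs[1][0]
        denominator = lhs[0][1] - ratio * lhs[1][1]
        if denominator == 0.0:
            return True, None, None
        y = (rhs[0] - ratio * rhs[1]) / denominator
        x = (rhs[1] - lhs[1][1] * y) / lhs[1][0]
        return False, x, y
    if lhs[0][0] == 0.0:
        return True, None, None
    ratio = lhs[1][0] / lhs[0][0]
    denominator = lhs[1][1] - ratio * lhs[0][1]
    if denominator == 0.0:
        return True, None, None
    y = (rhs[1] - ratio * rhs[0]) / denominator
    x = (rhs[0] - lhs[0][1] * y) / lhs[0][0]
    return False, x, y
```

This mirrors the visible branch (eliminate using the larger pivot, back-substitute for x) under the assumption that the truncated `else` branch is its symmetric counterpart.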
def create_application_configuration(self, name, properties, description=None): if not hasattr(self, 'applicationConfigurations'): raise NotImplementedError() cv = ApplicationConfiguration._props(name, properties, description) res = self.rest_client.session.post(self.appli...
Create an application configuration. Args: name (str): Name of the application configuration. properties (dict): Property name-value pairs for the configuration. description (str, optional): Description of the configuration. .. versionadded:: 1.12
juraj-google-style
def export_analytics_data_to_csv(data, output_folder, result_info_key, identifier_keys): workbook = create_excel_workbook(data, result_info_key, identifier_keys) suffix = '.csv' if (not os.path.exists(output_folder)): os.makedirs(output_folder) for worksheet in workbook.worksheets: file_...
Creates CSV files containing data returned by the Analytics API. Creates one file per requested endpoint and saves it into the specified output_folder Args: data: Analytics API data as a list of dicts output_folder: Path to a folder to save the CSV files into
codesearchnet
def get_tensor_num_entries(self, tensor_name, partial_layout=None, mesh_dimension_to_size=None): shape = self.get_tensor_shape(tensor_name) num_entries = 1 for dim in shape.dims: num_entries = (num_entries * dim.value) if (not partial_layout): return num_entries for mtf_dimension_nam...
The number of entries in a tensor. If partial_layout is specified, then mesh_dimension_to_size must also be. In this case, the number of entries on a single device is returned. Args: tensor_name: a string, name of a tensor in the graph. partial_layout: an optional {string: string}, from MTF dimension name to mesh dim...
codesearchnet
def validate_read(self, address): if (not any((address.startswith(ns) for ns in self._read_list))): raise AuthorizationException(address=address)
Raises an exception if the address is not allowed to be read in this context, based on txn inputs. Args: address (str): An address to be validated. Returns: None Raises: AuthorizationException
codesearchnet
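The prefix check above is complete; a self-contained sketch with a minimal `Context` class (the class name and constructor are illustrative, not from the original):

```python
class AuthorizationException(Exception):
    def __init__(self, address):
        super().__init__('Not authorized to read address {}'.format(address))
        self.address = address


class Context:
    def __init__(self, read_list):
        # Allowed address prefixes, derived from the transaction's inputs.
        self._read_list = read_list

    def validate_read(self, address):
        """Raise unless `address` starts with an allowed prefix."""
        if not any(address.startswith(ns) for ns in self._read_list):
            raise AuthorizationException(address=address)
```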
def _ReadMemberFooter(self, file_object): file_offset = file_object.get_offset() member_footer = self._ReadStructure( file_object, file_offset, self._MEMBER_FOOTER_SIZE, self._MEMBER_FOOTER, 'member footer') self.uncompressed_data_size = member_footer.uncompressed_data_size
Reads a member footer. Args: file_object (FileIO): file-like object to read from. Raises: FileFormatError: if the member footer cannot be read.
juraj-google-style
def retrieve(self, block_height, headers=None): path = (self.path + block_height) return self.transport.forward_request(method='GET', path=path, headers=headers)
Retrieves the block with the given ``block_height``. Args: block_height (str): height of the block to retrieve. headers (dict): Optional headers to pass to the request. Returns: dict: The block with the given ``block_height``.
codesearchnet
def GetArtifactPathDependencies(rdf_artifact): deps = set() for source in rdf_artifact.sources: for (arg, value) in iteritems(source.attributes): paths = [] if (arg in ['path', 'query']): paths.append(value) if (arg == 'key_value_pairs'): ...
Return a set of knowledgebase path dependencies. Args: rdf_artifact: RDF artifact object. Returns: A set of strings for the required kb objects e.g. ["users.appdata", "systemroot"]
codesearchnet
def _coords2idx(self, coords): x = self._coords2vec(coords) idx = self._kd.query(x, p=self._metric_p, distance_upper_bound=self._max_pix_scale) return idx[1]
Converts from sky coordinates to pixel indices. Args: coords (:obj:`astropy.coordinates.SkyCoord`): Sky coordinates. Returns: Pixel indices of the coordinates, with the same shape as the input coordinates. Pixels which are outside the map are given an index equal to the number of pixels in the map.
juraj-google-style
def indentjoin(strlist, indent='\n ', suffix=''): indent_ = indent strlist = list(strlist) if (len(strlist) == 0): return '' return (indent_ + indent_.join([(six.text_type(str_) + suffix) for str_ in strlist]))
r""" Convenience indentjoin similar to '\n '.join(strlist), but the indent is also prefixed Args: strlist (iterable): indent (str): suffix (str): Returns: str: joined list
codesearchnet
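A self-contained version of the helper, dropping the `six` dependency since `str` handles unicode on Python 3 (the default indent width is an assumption; the original's is lost in extraction):

```python
def indentjoin(strlist, indent='\n    ', suffix=''):
    """Like indent.join(strlist), but the indent is also prefixed."""
    strlist = list(strlist)
    if not strlist:
        return ''
    # Prefix the first element with the indent too.
    return indent + indent.join(str(s) + suffix for s in strlist)
```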
def init(self, address, hard_reset=False): self.address = address if hard_reset: pass for i in range(Dongle.PORT_RETRIES): try: logger.debug('Setting up BGAPI, attempt {}/{}'.format(i ...
Open the serial connection to a dongle at the supplied address. Args: address (str): the serial port address of the BLED112 dongle, e.g. 'COM5' hard_reset (bool): not currently used Returns: True if a connection with the dongle was established, False otherwise.
juraj-google-style
def __call__(self, fn): def loop(app, *args, **kwargs): r = [] arg_data = app.tcex.playbook.read(getattr(app.args, self.arg)) arg_type = app.tcex.playbook.variable_type(getattr(app.args, self.arg)) if not isinstance(arg_data, l...
Implement __call__ function for decorator. Args: fn (function): The decorated function. Returns: function: The custom decorator function.
juraj-google-style
def create(self, batch_outs): raise NotImplementedError('Must be implemented in subclasses.')
Creates the initial results from the first batch outputs. Args: batch_outs: A list of batch-level outputs.
github-repos
def fill_slot(self, filler_pipeline_key, slot, value): if not isinstance(filler_pipeline_key, db.Key): filler_pipeline_key = db.Key(filler_pipeline_key) if _TEST_MODE: slot._set_value_test(filler_pipeline_key, value) else: encoded_value = json.dumps(value, ...
Fills a slot, enqueueing a task to trigger pending barriers. Args: filler_pipeline_key: db.Key or stringified key of the _PipelineRecord that filled this slot. slot: The Slot instance to fill. value: The serializable value to assign. Raises: UnexpectedPipelineError if the _SlotRecord for the 'slot' could not be found...
juraj-google-style
def search(self, *arg, **kw): output = {'cant_results': 0, 'matched_terms': defaultdict(set), 'results': {}, 'runtime': 0} indexes = self.indexes() models = kw.get('models', list(self._entities.values())) if (sys.version_info[0] < 3): models = [(self._entities.get(model, None) if (isinstance(mod...
A full search function. This allows you to search expressions using the following arguments. Arg: query (str): The search string expression. Optional Args: - include_entity (bool): include in each result the entity values associated with the stored fields. - add_wildcards (bool): set it if you want to consider matches t...
codesearchnet
def Run(self, force=False): if not self.locked: raise aff4.LockError("CronJob must be locked for Run() to be called.") self.KillOldFlows() current_flow_urn = self.Get(self.Schema.CURRENT_FLOW_URN) if current_flow_urn: current_flow = aff4.FACTORY.Open(current_flow_urn, token=self....
Do the actual work of the Cron. Will first check if DueToRun is True. CronJob object must be locked (i.e. opened via OpenWithLock) for Run() to be called. Args: force: If True, the job will run no matter what (i.e. even if DueToRun() returns False). Raises: LockError: if the object is not locked.
juraj-google-style
def parse(self, key, value): if (value is not None): try: return self._parser(value) except Exception: raise ParsingError('Error parsing {}'.format(key)) elif (self._default is not SENTINAL): return self._default else: raise KeyError(key)
Parse the environment value for a given key against the schema. Args: key: The name of the environment variable. value: The value to be parsed.
codesearchnet
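The `parse` method above is complete apart from its surrounding class; a runnable sketch with an assumed `EnvVar` wrapper (class name and constructor are illustrative) showing the three outcomes: parsed value, default, and error:

```python
_SENTINEL = object()


class ParsingError(Exception):
    pass


class EnvVar:
    """One environment variable with a parser and an optional default."""

    def __init__(self, parser, default=_SENTINEL):
        self._parser = parser
        self._default = default

    def parse(self, key, value):
        if value is not None:
            try:
                return self._parser(value)
            except Exception:
                raise ParsingError('Error parsing {}'.format(key))
        # Unset variable: fall back to the default, else signal a missing key.
        if self._default is not _SENTINEL:
            return self._default
        raise KeyError(key)
```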
def score_intersect(self, term1, term2, **kwargs): t1_kde = self.kde(term1, **kwargs) t2_kde = self.kde(term2, **kwargs) overlap = np.minimum(t1_kde, t2_kde) return np.trapz(overlap)
Compute the geometric area of the overlap between the kernel density estimates of two terms. Args: term1 (str) term2 (str) Returns: float
juraj-google-style
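The overlap computation (pointwise minimum, then trapezoid integration) can be shown without numpy; this pure-Python stand-in for `np.minimum`/`np.trapz` assumes the two densities are sampled on the same unit-spaced grid:

```python
def score_intersect(density1, density2, dx=1.0):
    """Area under the pointwise minimum of two sampled densities,
    integrated with the trapezoid rule."""
    overlap = [min(a, b) for a, b in zip(density1, density2)]
    return sum((overlap[i] + overlap[i + 1]) / 2.0 * dx
               for i in range(len(overlap) - 1))
```

Identical densities give their full area; disjoint densities give zero, which is what makes this usable as an overlap score.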
def loopUntil(self, condition=None, timeout: float=0) -> Iterator[object]: endTime = (time.time() + timeout) while True: test = (condition and condition()) if test: (yield test) return elif (timeout and (time.time() > endTime)): (yield False) ...
Iterate until condition is met, with optional timeout in seconds. The yielded value is that of the condition or False when timed out. Args: condition: Predicate function that is tested after every network update. timeout: Maximum time in seconds to wait. If 0 then no timeout is used.
codesearchnet
def merge(self, decision_point: pg.geno.DecisionPoint, parent_decisions: List[Union[int, List[int], float, None]], global_state: pg.geno.AttributeDict, step: int) -> Union[int, List[int], float]:
Implementation of point-wise decision making. Args: decision_point: Decision point for recombination. parent_decisions: A list of parent's decisions. Each item should be an int as an active single-choice decision, a list of int as active multi- choice decisions, a float as an active float decision, or None for inactiv...
github-repos
def ctc_unique_labels(labels, name=None): with ops.name_scope(name, 'ctc_unique_labels', [labels]): labels = ops.convert_to_tensor(labels, name='labels') def _unique(x): u = array_ops.unique(x) y = array_ops.pad(u.y, [[0, _get_dim(u.idx, 0) - _get_dim(u.y, 0)]]) ...
Get unique labels and indices for batched labels for `tf.nn.ctc_loss`. For use with the `tf.nn.ctc_loss` optional argument `unique`: this op can be used to preprocess labels in the input pipeline for better speed/memory use when computing the ctc loss on TPU. Example: ctc_unique_labels([[3, 4, 4, 3]]) -> unique labels padded w...
github-repos
def port_add(br, port, may_exist=False, internal=False): param_may_exist = _param_may_exist(may_exist) cmd = 'ovs-vsctl {2}add-port {0} {1}'.format(br, port, param_may_exist) if internal: cmd += ' -- set interface {0} type=internal'.format(port) result = __salt__['cmd.run_all'](cmd) retcode ...
Creates a new port named port on bridge br. Returns: True on success, else False. Args: br: A string - bridge name port: A string - port name may_exist: Bool, if False - attempting to create a port that exists returns False. internal: A boolean to create an internal interface if one does not exist. .. versionadded:: 20...
codesearchnet
def to_batched_tensor_list(element_spec, element): return _to_tensor_list_helper(lambda state, spec, component: state + spec._to_batched_tensor_list(component), element_spec, element)
Returns a tensor list representation of the element. Args: element_spec: A nested structure of `tf.TypeSpec` objects representing the element type specification. element: The element to convert to tensor list representation. Returns: A tensor list representation of `element`. Raises: ValueError: If `element_spec` and...
github-repos
def detect_shadowing_definitions(self, contract): result = [] for function in (contract.functions + contract.modifiers): if (function.contract != contract): continue for variable in function.variables: overshadowed = [] for scope_contract in ([contract] + cont...
Detects functions, modifiers, events, state variables, and local variables whose definitions shadow a definition from an inherited contract. Any such definitions are returned in a list. Returns: list of tuple: (type, contract name, definition)
codesearchnet
def _validate_state_root(self, state_root): if self._state_root_regex.fullmatch(state_root) is None: LOGGER.debug('Invalid state root: %s', state_root) raise _ResponseFailed(self._status.INVALID_ROOT)
Validates a state root, raising a ResponseFailed error if invalid. Args: state_root (str): The state_root to validate Raises: ResponseFailed: The state_root was invalid, and a status of INVALID_ROOT will be sent with the response.
juraj-google-style
def _build_graph(self): q = data_flow_ops.FIFOQueue(1, 'float') init = q.enqueue(1.0) x = q.dequeue() q_inc = q.enqueue(x + 1) return (init, q_inc)
Builds a graph that enqueues and dequeues a single float. Returns: A tuple with the graph init tensor and graph output tensor.
github-repos
def frag2text(endpoint, stype, selector, clean=False, raw=False, verbose=False): try: return main(endpoint, stype, selector, clean, raw, verbose) except Exception as err: return err
Returns Markdown text of the selected fragment. Args: endpoint: URL, file, or HTML string stype: { 'css' | 'xpath' } selector: CSS selector or XPath expression Returns: Markdown text Options: clean: cleans fragment (lxml.html.clean defaults) raw: returns raw HTML fragment verbose: show http status, encoding, headers
juraj-google-style
def parse_content_type(headers: MutableMapping) -> Tuple[(Optional[str], str)]: content_type = headers.get('content-type') if (not content_type): return (None, 'utf-8') else: (type_, parameters) = cgi.parse_header(content_type) encoding = parameters.get('charset', 'utf-8') re...
Find content-type and encoding of the response Args: headers: Response headers Returns: :py:class:`tuple` (content-type, encoding)
codesearchnet
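The snippet above relies on `cgi.parse_header`, but the `cgi` module was removed in Python 3.13 (PEP 594). A sketch of the same logic using `email.message.Message`, which parses the same header syntax:

```python
from email.message import Message
from typing import Optional, Tuple


def parse_content_type(headers: dict) -> Tuple[Optional[str], str]:
    """Find the content-type and encoding from response headers."""
    content_type = headers.get('content-type')
    if not content_type:
        # No header at all: assume utf-8, report no content type.
        return None, 'utf-8'
    # Message parses "type/subtype; charset=..." the same way cgi did.
    msg = Message()
    msg['content-type'] = content_type
    return msg.get_content_type(), msg.get_param('charset', 'utf-8')
```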
def from_function(cls, function): module_name = function.__module__ function_name = function.__name__ class_name = '' function_source_hasher = hashlib.sha1() try: source = inspect.getsource(function) if (sys.version_info[0] >= 3): source = source.encode() function...
Create a FunctionDescriptor from a function instance. This function is used to create the function descriptor from a python function. If the function is a class method, it should not be passed to this function. Args: cls: Current class, which is a required argument for a classmethod. function: the python function used to cr...
codesearchnet
def put(self, type: Type[T], item: T) -> None: LOGGER.info("Getting SinkHandlers for \"{type}\"".format(type=type.__name__)) try: handlers = self._put_types[type] except KeyError: try: LOGGER.info("Building new SinkHandlers for \"{type}\"".format(...
Puts an object into the data pipeline. The object may be transformed into a new type for insertion if necessary. Args: type: The type under which sink handlers are looked up. item: The object to be inserted into the data pipeline.
juraj-google-style
def _get(self, url, params=None): if (not params): params = {} params.update({'login': self.login, 'key': self.key}) response_json = requests.get((self.api_url + url), params).json() return self._process_response(response_json)
Used by every other method, it makes a GET request with the given params. Args: url (str): relative path of a specific service (account_info, ...). params (:obj:`dict`, optional): contains parameters to be sent in the GET request. Returns: dict: results of the response of the GET request.
codesearchnet
def __init__(self, top_probs=5): self.top_probs = top_probs self._sess = None self._tf_input_var = None self._tf_predict_var = None self._model_name = None self._latest_ckpt_name = None self._latest_ckpt_time = None
Create a new instance of this model. `BaseModel` is an interface and should only be instantiated via a subclass. Args: top_probs (int): Number of classes to display per result. For instance, VGG16 has 1000 classes, we don't want to display a visualization for every single possibility. Defaults to 5.
juraj-google-style
def invoice_access(request, access_code): invoices = commerce.Invoice.objects.filter(user__attendee__access_code=access_code).order_by('-issue_time') if (not invoices): raise Http404() unpaid = invoices.filter(status=commerce.Invoice.STATUS_UNPAID) paid = invoices.filter(status=commerce.Invoice....
Redirects to an invoice for the attendee that matches the given access code, if any. If the attendee has multiple invoices, we use the following tie-break: - If there's an unpaid invoice, show that, otherwise - If there's a paid invoice, show the most recent one, otherwise - Show the most recent invoice of all Argume...
codesearchnet
def _req(self, req): logger.debug('DUT> %s', req) (self._log and self.pause()) times = 3 res = None while times: times = (times - 1) try: self._sendline(req) self._expect(req) line = None res = [] while True: ...
Send a command and wait for the response. The command will be repeated at most 3 times in case of data loss on the serial port. Args: req (str): Command to send; please do not include a newline at the end. Returns: [str]: The output lines
codesearchnet
def _get_snpeff_transcript(self, transcript_info): transcript = Transcript( hgnc_symbol = transcript_info.get('Gene_Name'), transcript_id = transcript_info.get('Feature'), ensembl_id = transcript_info.get('Gene_ID'), biotype = transcript_i...
Create a transcript based on the snpeff annotation Args: transcript_info (dict): A dict with snpeff info Returns: transcript (puzzle.models.Transcript): A Transcript
juraj-google-style
def from_index_amount(cls, matrixpos, amt): f = np.identity(3) f[matrixpos] += amt return cls(f)
Factory method for constructing a Deformation object from a matrix position and amount Args: matrixpos (tuple): tuple corresponding to the matrix position where the perturbation is added amt (float): amount to add to the identity matrix at position matrixpos
codesearchnet
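The construction itself is small enough to show without pymatgen or numpy; a pure-Python sketch of the identity-plus-perturbation matrix the factory builds:

```python
def identity_with_perturbation(matrixpos, amt):
    """Build a 3x3 identity matrix with `amt` added at `matrixpos`.

    `matrixpos` is a (row, col) tuple, e.g. (0, 1).
    """
    f = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
    i, j = matrixpos
    f[i][j] += amt
    return f
```

The real `from_index_amount` then wraps this matrix in a `Deformation` instance; here we just return the matrix.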
def add_criterion(self, name, priority, and_or, search_type, value): criterion = SearchCriteria(name, priority, and_or, search_type, value) self.criteria.append(criterion)
Add a search criteria object to a smart group. Args: name: String, criteria type name (e.g. "Application Title"). priority: Int or str, the numeric priority of the criterion. and_or: Str, either "and" or "or". search_type: String, criteria search type (e.g. "is", "is not", "member of", etc.). Construct a SmartGroup with the criter...
codesearchnet
def _get_error_generator(type, obj, schema_dir=None, version=DEFAULT_VER, default='core'): if (schema_dir is None): schema_dir = os.path.abspath((((os.path.dirname(__file__) + '/schemas-') + version) + '/')) try: schema_path = find_schema(schema_dir, type) schema = load_schema(schema_pat...
Get a generator for validating against the schema for the given object type. Args: type (str): The object type to find the schema for. obj: The object to be validated. schema_dir (str): The path in which to search for schemas. version (str): The version of the STIX specification to validate against. Only used to find ...
codesearchnet
def distance2bbox(points, distance: torch.Tensor, reg_scale: float) -> torch.Tensor: reg_scale = abs(reg_scale) top_left_x = points[..., 0] - (0.5 * reg_scale + distance[..., 0]) * (points[..., 2] / reg_scale) top_left_y = points[..., 1] - (0.5 * reg_scale + distance[..., 1]) * (points[..., 3] / reg_scale) ...
Decodes edge-distances into bounding box coordinates. Args: points (`torch.Tensor`): (batch_size, num_boxes, 4) or (num_boxes, 4) format, representing [x_center, y_center, width, height] distance (`torch.Tensor`): (batch_size, num_boxes, 4) or (num_boxes, 4), representing distances from the point to the left, top, rig...
github-repos
def has_platform(self, platform): if platform and not isinstance(platform, dict): parts = platform.split('/') if len(parts) > 3 or len(parts) < 1: raise InvalidArgument( '"{0}" is not a valid platform descriptor'.format(platform) ...
Check whether the given platform identifier is available for this digest. Args: platform (str or dict): A string using the ``os[/arch[/variant]]`` format, or a platform dictionary. Returns: (bool): ``True`` if the platform is recognized as available, ``False`` otherwise. Raises: :py:class:`docker.errors.InvalidArgum...
juraj-google-style
def get_template_object(template_file=''): jinja_template_paths_obj = [] if TEMPLATES_PATH: external_templates = pathlib.Path(TEMPLATES_PATH).expanduser().resolve() assert os.path.isdir(external_templates), 'External template path "{0}" not found'.format(external_templates) jinja_t...
Retrieve template. Args: template_file (str): Name of template file. Returns: jinja2.Template: Template ready to render. Raises: AssertionError: Configured path for templates does not exist. :obj:`foremast.exceptions.ForemastTemplateNotFound`: Requested template is not available.
juraj-google-style
def load(path: str) -> Callable[..., Dict[str, EventSetNode]]: g = _load_graph(path) inputs = g.named_inputs assert inputs is not None input_names = list(inputs.keys()) @compile def fn(*args: EventSetNode, **kwargs: EventSetNode) -> Dict[str, EventSetNode]: kwargs = _kwargs_from_args_an...
Loads a compiled Temporian function from a file. The loaded function receives the same positional and keyword arguments and applies the same operator graph to its inputs as when it was saved. Args: path: The path to load the function from. Returns: The loaded function.
github-repos
def encipher_shift(plaintext, plain_vocab, shift): ciphertext = [] cipher = ShiftEncryptionLayer(plain_vocab, shift) for _, sentence in enumerate(plaintext): cipher_sentence = [] for _, character in enumerate(sentence): encrypted_char = cipher.encrypt_character(character) cipher_sentence.a...
Encrypt plain text with a single shift layer. Args: plaintext (list of list of Strings): a list of plain text to encrypt. plain_vocab (list of Integer): unique vocabularies being used. shift (Integer): number of positions to shift; shifts to the right if shift is positive. Returns: ciphertext (list of Strings): encrypted plain text...
juraj-google-style
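The `ShiftEncryptionLayer` internals are not shown above; this sketch implements the underlying Caesar shift directly on token ids, assuming tokens are indices into `plain_vocab` (an assumption, since the per-character encryption is truncated):

```python
def encipher_shift(plaintext, plain_vocab, shift):
    """Encrypt each token id by shifting it within the vocabulary."""
    vocab_size = len(plain_vocab)
    ciphertext = []
    for sentence in plaintext:
        # A positive shift moves each token to the right, wrapping around.
        cipher_sentence = [(token + shift) % vocab_size for token in sentence]
        ciphertext.append(cipher_sentence)
    return ciphertext
```

Decryption is the same operation with `-shift`, which is what makes the shift layer invertible.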
def clustering_factory(clf): required_methods = ['fit', 'fit_predict'] for method in required_methods: if not hasattr(clf, method): raise TypeError('"{}" is not in clf. Did you ' 'pass a clusterer instance?'.format(method)) additional_methods = { ...
Embeds scikit-plot plotting methods in an sklearn clusterer instance. Args: clf: Scikit-learn clusterer instance Returns: The same scikit-learn clusterer instance passed in **clf** with embedded scikit-plot instance methods. Raises: TypeError: If **clf** does not contain the instance methods necessary for scikit-pl...
juraj-google-style
def http_download(url, target_path): r = requests.get(url, stream=True) with open(target_path, 'wb') as f: for chunk in r.iter_content(chunk_size=1024): if chunk: f.write(chunk) return target_path
Download a file to a local path. Args: - url(string): url request path - target_path(string): download destination
juraj-google-style
def convert_sbml_model(model): biomass_reactions = set() for reaction in model.reactions: if reaction.id not in model.limits: lower, upper = parse_flux_bounds(reaction) if lower is not None or upper is not None: model.limits[reaction.id] = reaction.i...
Convert raw SBML model to extended model. Args: model: :class:`NativeModel` obtained from :class:`SBMLReader`.
juraj-google-style
def _VerifyValues(self, pool_func, input_sizes, ksize, strides, padding, expected): for data_format in GetTestConfigs(): self._VerifyOneTest(pool_func, input_sizes, ksize, strides, padding, data_format, expected)
Verifies the output values of the pooling function. Args: pool_func: Function to be called, co.MaxPool, co.AvgPool, or the Lua version. input_sizes: Input tensor dimensions. ksize: The kernel size dimensions strides: The stride dimensions padding: Padding type. expected: An array containing the expected operation outp...
github-repos
def is_monotonic(neurite, tol): for node in neurite.iter_sections(): sec = node.points for point_id in range((len(sec) - 1)): if (sec[(point_id + 1)][COLS.R] > (sec[point_id][COLS.R] + tol)): return False if ((node.parent is not None) and (sec[0][COLS.R] > (node.p...
Check if a neurite tree is monotonic, i.e. each child has a diameter smaller than or equal to its parent's Args: neurite(Neurite): neurite to operate on tol(float): tolerance Returns: True if the neurite is monotonic
codesearchnet
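The real check walks `Neurite` sections with column indexing; a simplified, self-contained sketch of the same invariant on plain radius lists (the `(parent_last_radius, radii)` input shape is an illustrative stand-in for the section tree):

```python
def is_monotonic(radii_per_section, tol):
    """Check that radii never increase by more than `tol`.

    `radii_per_section` is a list of (parent_last_radius, radii) pairs,
    where parent_last_radius is None for the root section.
    """
    for parent_last_radius, radii in radii_per_section:
        # Within a section, each point's radius may exceed the previous
        # point's by at most `tol`.
        for a, b in zip(radii, radii[1:]):
            if b > a + tol:
                return False
        # Across the section boundary, the first point is checked against
        # the parent's last point.
        if parent_last_radius is not None and radii[0] > parent_last_radius + tol:
            return False
    return True
```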
def ParseCall(self, parser_mediator, query, row, **unused_kwargs): query_hash = hash(query) guid = self._GetRowValue(query_hash, row, 'guid') is_incoming = self._GetRowValue(query_hash, row, 'is_incoming') videostatus = self._GetRowValue(query_hash, row, 'videostatus') try: aux = guid ...
Parses a call. Args: parser_mediator (ParserMediator): mediates interactions between parsers and other components, such as storage and dfvfs. query (str): query that created the row. row (sqlite3.Row): row resulting from query.
codesearchnet
def wait_for( self, timeout=10000, interval=1000, asserter=lambda x: x): if not callable(asserter): raise TypeError('Asserter must be callable.') @retry( retry_on_exception=lambda ex: isinstance(ex, WebDriverException), stop_max_delay=timeout,...
Wait until the driver satisfies the given condition. Support: Android iOS Web(WebView) Args: timeout(int): How long to keep retrying, in milliseconds. interval(int): How long between retries, in milliseconds. asserter(callable): The asserter func to determine the result. Returns: The driver. Raises: WebDriverException.
juraj-google-style
def to(self, fmt=None, filename=None): from pymatgen.io.xyz import XYZ from pymatgen.io.gaussian import GaussianInput from pymatgen.io.babel import BabelMolAdaptor fmt = ('' if (fmt is None) else fmt.lower()) fname = os.path.basename((filename or '')) if ((fmt == 'xyz') or fnmatch(fname.lower(),...
Outputs the molecule to a file or string. Args: fmt (str): Format to output to. Defaults to JSON unless filename is provided. If fmt is specified, it overrides whatever the filename is. Options include "xyz", "gjf", "g03", "json". If you have OpenBabel installed, any of the formats supported by OpenBabel. Non-case sen...
codesearchnet
def usufyToCsvExport(d, fPath): from pyexcel_io import get_data try: oldData = {'OSRFramework': get_data(fPath)} except Exception: oldData = {'OSRFramework': []} tabularData = _generateTabularData(d, oldData) from pyexcel_io import save_data save_data(fPath, tabularData['OSRFramework'])
Workaround to export to a CSV file. Args: ----- d: Data to export. fPath: File path for the output file.
codesearchnet
def GetParserAndPluginNames(cls, parser_filter_expression=None): parser_and_plugin_names = [] for parser_name, parser_class in cls.GetParsers( parser_filter_expression=parser_filter_expression): parser_and_plugin_names.append(parser_name) if parser_class.SupportsPlugins(): for ...
Retrieves the parser and parser plugin names. Args: parser_filter_expression (Optional[str]): parser filter expression, where None represents all parsers and plugins. Returns: list[str]: parser and parser plugin names.
juraj-google-style
def get(self, statediag, dfaaccepted): newstatediag = {} newstate = PDAState() newstate.id = 'AI,I' newstate.type = 1 newstate.sym = '@wrapping' transitions = {} transitions[(0, 0)] = [0] newstate.trans = transitions i = 0 news...
Remove all the POP (type 2) transitions to state 0 (non-DFA-accepted) for symbol @closing. Generate the accepted transitions: replace DFA-accepted states with a push-pop symbol and two extra states. Args: statediag (list): The states of the PDA dfaaccepted (list): The list of DFA accepted states Returns: list...
juraj-google-style
def _ParseInternetPasswordRecord(self, parser_mediator, record): key = record.get('_key_', None) if not key or not key.startswith(b'ssgp'): raise errors.ParseError(( 'Unsupported Internet password record key value does not start ' 'with: "ssgp".')) protocol_string = codecs.de...
Extracts the information from an Internet password record. Args: parser_mediator (ParserMediator): mediates interactions between parsers and other components, such as storage and dfvfs. record (dict[str, object]): database record. Raises: ParseError: if Internet password record cannot be parsed.
juraj-google-style
def custom_getter(self, activation_dtype=tf.bfloat16): def getter_fn(getter, *args, **kwargs): requested_dtype = kwargs["dtype"] if requested_dtype in (tf.bfloat16, tf.float32): kwargs["dtype"] = tf.bfloat16 kwargs["initializer"] = _EncodingInitializer( kwargs["initializ...
A custom getter that uses the encoding for bfloat16 and float32 vars. When a bfloat16 or float32 variable is requested, an encoded float16 variable is created, which is then decoded and cast to a bfloat16 activation. Args: activation_dtype: a dtype to which to convert the decoded value. Returns: a function.
juraj-google-style
def is_initialised( self ): if not self.lattice: raise AttributeError('Running a simulation needs the lattice to be initialised') if not self.atoms: raise AttributeError('Running a simulation needs the atoms to be initialised') if not self.number_of_jumps and not...
Check whether the simulation has been initialised. Args: None Raises: AttributeError: if the lattice, the atoms, or the number of jumps have not been initialised. Returns: None
juraj-google-style
def _collect_unused(self, start: GridQubit, used: Set[GridQubit]) -> Set[GridQubit]: def collect(n: GridQubit, visited: Set[GridQubit]): visited.add(n) for m in self._c_adj[n]: if ((m not in used) and (m not in visited)): collect(m, visited) visited = set() colle...
Lists all the qubits that are reachable from given qubit. Args: start: The first qubit for which connectivity should be calculated. Might be a member of used set. used: Already used qubits, which cannot be used during the collection. Returns: Set of qubits that are reachable from starting qubit without traversing any...
codesearchnet
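The `_collect_unused` routine above is a depth-first traversal over the adjacency map that skips already-used qubits (while the start qubit itself is allowed to be in the used set). A minimal stand-alone sketch of that reachability logic, with plain integers standing in for qubits:

```python
def reachable(start, adjacency, used):
    """Collect nodes reachable from `start` without entering `used` nodes.

    `start` is always included, even if it is a member of `used`,
    matching the behaviour documented for _collect_unused.
    """
    visited = set()
    stack = [start]
    while stack:
        n = stack.pop()
        if n in visited:
            continue
        visited.add(n)
        for m in adjacency.get(n, ()):
            if m not in used and m not in visited:
                stack.append(m)
    return visited
```

An iterative stack replaces the recursive `collect` helper, which avoids recursion-depth limits on large graphs; the set semantics are the same.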
def _ReadPaddingDataTypeDefinition(self, definitions_registry, definition_values, definition_name, is_member=False): if (not is_member): error_message = 'data type only supported as member' raise errors.DefinitionReaderError(definition_name, error_message) definition_object = self._ReadDataTypeD...
Reads a padding data type definition. Args: definitions_registry (DataTypeDefinitionsRegistry): data type definitions registry. definition_values (dict[str, object]): definition values. definition_name (str): name of the definition. is_member (Optional[bool]): True if the data type definition is a member data type def...
codesearchnet
def _virtual_molecule(self, mol, ilabels, eq_atoms): vmol = ob.OBMol() non_unique_atoms = set([a for g in eq_atoms for a in g]) all_atoms = set(range(1, len(ilabels) + 1)) unique_atom_labels = sorted(all_atoms - non_unique_atoms) for i in unique_atom_labels: ...
Create a virtual molecule from the unique atoms and the centroids of the equivalent atoms. Args: mol: The molecule. OpenBabel OBMol object ilabels: inchi label map eq_atoms: equivalent atom labels Return: The virtual molecul...
juraj-google-style
def expand(self, pcoll: beam.PCollection[Union[beam.Row, NamedTuple]]) -> beam.PCollection[common_types.InstanceDictType]: return pcoll | beam.Map(lambda x: x._asdict())
Converts each NamedTuple or Row element of the input PCollection to its dictionary representation. Args: pcoll: A PCollection of NamedTuples or Rows. Returns: A PCollection of dictionaries.
github-repos
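The Beam transform above relies entirely on `NamedTuple._asdict`; outside a pipeline the same element-wise conversion is just a comprehension. A minimal sketch (the `Row` fields are illustrative, not from the original):

```python
from typing import List, NamedTuple


class Row(NamedTuple):
    name: str
    score: int


def to_instance_dicts(rows: List[Row]) -> List[dict]:
    # The same conversion the Beam transform applies to each element.
    return [r._asdict() for r in rows]
```

`_asdict` preserves field order and returns a plain dict (an OrderedDict on older Pythons, which compares equal to a dict either way).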
def get_encoder_from_vocab(vocab_filepath): if not tf.gfile.Exists(vocab_filepath): raise ValueError("Vocab file does not exist: {}.".format(vocab_filepath)) tf.logging.info("Found vocab file: %s", vocab_filepath) encoder = text_encoder.SubwordTextEncoder(vocab_filepath) return encoder
Get encoder from vocab file. If vocab is not found in output dir, it will be copied there by copy_vocab_to_output_dir to clarify the vocab used to generate the data. Args: vocab_filepath: path to vocab, either local or cns Returns: A SubwordTextEncoder vocabulary object.
juraj-google-style
def sunset(self, date=None, zenith=None): return (segment.sunset(date, zenith) for segment in self)
Calculate sunset times for locations. Args: date (datetime.date): Calculate rise or set for given date zenith (str): Calculate sunset events, or start of twilight times Returns: generator of datetime.datetime: The sunset time for each point in each segment
codesearchnet
def directional_poisson_ratio(self, n, m, tol=1e-8): n, m = get_uvec(n), get_uvec(m) if not np.abs(np.dot(n, m)) < tol: raise ValueError("n and m must be orthogonal") v = self.compliance_tensor.einsum_sequence([n]*2 + [m]*2) v *= -1 / self.compliance_tensor.einsum_se...
Calculates the Poisson ratio for a specific direction relative to a second, orthogonal direction Args: n (3-d vector): principal direction m (3-d vector): secondary direction orthogonal to n tol (float): tolerance for testing of orthogonality
juraj-google-style
def service_messages(self, short_name): if (short_name not in self.services): raise ArgumentError('Unknown service name', short_name=short_name) return list(self.services[short_name]['state'].messages)
Get the messages stored for a service. Args: short_name (string): The short name of the service to get messages for Returns: list(ServiceMessage): A list of the ServiceMessages stored for this service
codesearchnet
def collapse_addresses(addresses): i = 0 addrs = [] ips = [] nets = [] for ip in addresses: if isinstance(ip, _BaseAddress): if ips and ips[-1]._version != ip._version: raise TypeError("%s and %s are not of the same version" % ( ...
Collapse a list of IP objects. Example: collapse_addresses([IPv4Network('192.0.2.0/25'), IPv4Network('192.0.2.128/25')]) -> [IPv4Network('192.0.2.0/24')] Args: addresses: An iterator of IPv4Network or IPv6Network objects. Returns: An iterator of the collapsed IPv(4|6)Network objects. Raises: TypeError: If passed a ...
juraj-google-style
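The `collapse_addresses` function above mirrors the standard library's `ipaddress.collapse_addresses`, which implements the same merge (including the same-version `TypeError`). Quick usage of the stdlib version, reproducing the docstring's example:

```python
import ipaddress

# Two adjacent /25 networks collapse into a single /24.
nets = [ipaddress.ip_network('192.0.2.0/25'),
        ipaddress.ip_network('192.0.2.128/25')]
collapsed = list(ipaddress.collapse_addresses(nets))
```

Like the function above, the stdlib version returns an iterator, so it is wrapped in `list` for inspection.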
def is_flat(neurite, tol, method='tolerance'): ext = principal_direction_extent(neurite.points[:, COLS.XYZ]) assert (method in ('tolerance', 'ratio')), "Method must be one of 'tolerance', 'ratio'" if (method == 'ratio'): sorted_ext = np.sort(ext) return ((sorted_ext[0] / sorted_ext[1]) < f...
Check if neurite is flat using the given method Args: neurite(Neurite): neurite to operate on tol(float): tolerance method(string): the method of flatness estimation: 'tolerance' returns true if any extent of the tree is smaller than the given tolerance 'ratio' returns true if the ratio of the smallest directions is s...
codesearchnet
def save(self, filename): with open(filename, 'w') as outfile: json.dump(self.to_json(), outfile)
Writes the JSON representation of this graph to the provided filename, such that the graph can be easily reconstructed using Graph(spec=filename). Args: filename (str): Path at which to write out the json file.
juraj-google-style
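The `save` method above only serializes `self.to_json()` with `json.dump`, so reconstruction via `Graph(spec=filename)` amounts to a JSON round trip. A minimal stand-alone sketch of that round trip (the dict-shaped graph and the `load_graph` helper are illustrative, not part of the original API):

```python
import json
import os
import tempfile


def save_graph(graph_dict, filename):
    # Write the JSON representation so the graph can be reloaded later.
    with open(filename, 'w') as outfile:
        json.dump(graph_dict, outfile)


def load_graph(filename):
    with open(filename) as infile:
        return json.load(infile)


fd, path = tempfile.mkstemp(suffix='.json')
os.close(fd)
save_graph({'nodes': [1, 2], 'edges': [[1, 2]]}, path)
restored = load_graph(path)
os.remove(path)
```

Anything `to_json` emits must be JSON-serializable (lists, dicts, strings, numbers) for this round trip to be lossless.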
def rename_libtensorflow(srcs_dir: str, version: str): major_version = version.split('.')[0] if is_macos(): shutil.move(os.path.join(srcs_dir, 'libtensorflow_cc.{}.dylib'.format(version)), os.path.join(srcs_dir, 'libtensorflow_cc.{}.dylib'.format(major_version))) shutil.move(os.path.join(srcs_di...
Update libtensorflow_cc file name. Bazel sets full TF version in name but libtensorflow_cc must contain only major. Update accordingly to the platform: e.g. libtensorflow_cc.so.2.15.0 -> libtensorflow_cc.2 Args: srcs_dir: target directory with files. version: Major version to be set.
github-repos
def url(request, json_list, nested, url_name='show_{}', ignore_get=None): if (not ignore_get): ignore_get = [] if isinstance(url_name, str): url_string = str(url_name) url_name = (lambda x: url_string.format(x)) urls = cache.get('proso_urls') if (urls is None): urls = {} ...
Enrich the given list of objects, so they have URL. Args: request (django.http.request.HttpRequest): request which is currently processed json_list (list): list of dicts (JSON objects to be enriched) url_name (str|fun): pattern to create a url name taking object_type ignore_get (list): list of GET parameters which are...
codesearchnet
def getKeywordsForText(self, retina_name, body, ): resourcePath = '/text/keywords' method = 'POST' queryParams = {} headerParams = {'Accept': 'Application/json', 'Content-Type': 'application/json'} postData = None queryParams['retina_name'] = retina_name ...
Get a list of keywords from the text Args: retina_name, str: The retina name (required) body, str: The text to be evaluated (required) Returns: Array[str]
juraj-google-style
def _handle_uniqueness(self): def _getattr(u): try: return self._field_values[u] except KeyError: return getattr(self, u) if self._uniques: for u in self._uniques: val = _getattr(u) changed_fields = self.changed_fields(from_db=True) ...
Checks marked as unique and unique_together fields of the Model at each creation and update, and if it violates the uniqueness raises IntegrityError. First, looks at the fields which marked as "unique". If Model's unique fields did not change, it means that there is still a record at db with same unique field values. ...
codesearchnet
def from_json_file(cls, json_file: Union[str, os.PathLike]) -> 'PretrainedConfig': config_dict = cls._dict_from_json_file(json_file) return cls(**config_dict)
Instantiates a [`PretrainedConfig`] from the path to a JSON file of parameters. Args: json_file (`str` or `os.PathLike`): Path to the JSON file containing the parameters. Returns: [`PretrainedConfig`]: The configuration object instantiated from that JSON file.
github-repos
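The `from_json_file` classmethod above reads a JSON dict and splats it into the constructor. A minimal sketch of the same pattern with a hypothetical stand-in class (not the real `PretrainedConfig`, which does considerably more validation):

```python
import json
import os
import tempfile


class SimpleConfig:
    # Hypothetical stand-in: stores every JSON key as an attribute.
    def __init__(self, **kwargs):
        for key, value in kwargs.items():
            setattr(self, key, value)

    @classmethod
    def from_json_file(cls, json_file):
        with open(json_file) as reader:
            return cls(**json.load(reader))


fd, path = tempfile.mkstemp(suffix='.json')
with os.fdopen(fd, 'w') as fh:
    json.dump({'hidden_size': 8, 'num_layers': 2}, fh)
config = SimpleConfig.from_json_file(path)
os.remove(path)
```

Because the JSON keys become constructor kwargs, the file's keys must match parameter names the constructor accepts.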
def __set_mutation_type(self, hgvs_string): self.__set_lost_stop_status(hgvs_string) self.__set_lost_start_status(hgvs_string) self.__set_missense_status(hgvs_string) self.__set_indel_status() self.__set_frame_shift_status() self.__set_premature_stop_codon_...
Interpret the mutation type (missense, etc.) and set appropriate flags. Args: hgvs_string (str): hgvs syntax with "p." removed
juraj-google-style
def _lob_end_handler_factory(ion_type, action, validate=(lambda c, ctx, action_res: None)): assert ((ion_type is IonType.BLOB) or (ion_type is IonType.CLOB)) @coroutine def lob_end_handler(c, ctx): val = ctx.value prev = c action_res = None if ((c != _CLOSE_BRACE) and (c not...
Generates handlers for the end of blob or clob values. Args: ion_type (IonType): The type of this lob (either blob or clob). action (callable): Called for each non-whitespace, non-closing brace character encountered before the end of the lob. Accepts the current character's ordinal, the current context, the previous c...
codesearchnet
def VerifyStructure(self, parser_mediator, line): structure = self.LOG_LINE try: parsed_structure = structure.parseString(line) except pyparsing.ParseException: logger.debug('Not a XChat scrollback log file') return False try: int(parsed_structure.timestamp, 10) except...
Verify that this file is an XChat scrollback log file. Args: parser_mediator (ParserMediator): mediates interactions between parsers and other components, such as storage and dfvfs. line (str): line from a text file. Returns: bool: True if the line was successfully parsed.
juraj-google-style
def _get_mpr_view(self, connection, table): logger.debug( 'Looking for view of the table.\n table: {}'.format(table.vid)) view = self.get_view_name(table) view_exists = self._relation_exists(connection, view) if view_exists: logger.debug( ...
Finds and returns view name in the sqlite db represented by given connection. Args: connection: connection to sqlite db where to look for partition table. table (orm.Table): Raises: MissingViewError: if database does not have partition table. Returns: str: database table storing partition data.
juraj-google-style
def __save__(script_name, benchbuild, experiment, projects): from jinja2 import Environment, PackageLoader logs_dir = os.path.dirname(CFG['slurm']['logs'].value) node_command = str(benchbuild["-E", experiment.name, "$_project"]) env = Environment( trim_blocks=True, lstrip_blocks=Tr...
Dump a bash script that can be given to SLURM. Args: script_name (str): name of the bash script. benchbuild: the benchbuild command used to run the experiment. experiment: the experiment to schedule. projects: the projects to run the experiment on.
juraj-google-style
def plot_histograms(self, freq=None, title=None, figsize=(10, 10), **kwargs): if title is None: title = self._get_default_plot_title( freq, 'Return Histogram Matrix') plt.figure() ser = self._get_series(freq).to_returns().dropna() ...
Wrapper around pandas' hist. Args: * freq (str): Data frequency used for display purposes. Refer to pandas docs for valid freq strings. * figsize ((x,y)): figure size * title (str): Title if default not appropriate * kwargs: passed to pandas' hist method
juraj-google-style
def get_failed_enrollment_message(cls, users, enrolled_in): failed_emails = [user.email for user in users] return ( 'error', _( 'The following learners could not be enrolled in {enrolled_in}: {user_list}' ).format( enrolled_in=...
Create message for the users who were not able to be enrolled in a course or program. Args: users: An iterable of users who were not successfully enrolled enrolled_in (str): A string identifier for the course or program with which enrollment was attempted Returns: tuple: A 2-tuple containing a message type and messag...
juraj-google-style
def stop(pid): if psutil.pid_exists(pid): try: p = psutil.Process(pid) p.kill() except Exception: pass
Shut down a specific process. Args: pid: the pid of the process to shutdown.
juraj-google-style
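The `stop` function above uses the third-party `psutil` to kill by pid while swallowing errors from already-dead processes. A stdlib-only sketch of the same idea, operating on a `subprocess.Popen` handle instead of a raw pid (an assumption that sidesteps the pid-reuse races `psutil.pid_exists` guards against):

```python
import subprocess
import sys


def stop(proc):
    # Terminate only if still running; ignore errors from a process
    # that exited between the check and the signal.
    if proc.poll() is None:
        try:
            proc.terminate()
            proc.wait(timeout=5)
        except Exception:
            pass


# Spawn a long-running child, then shut it down.
child = subprocess.Popen(
    [sys.executable, '-c', 'import time; time.sleep(60)'])
stop(child)
```

`proc.wait` reaps the child so no zombie is left behind; with a raw pid and `os.kill` you would need `os.waitpid` for the same effect.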
def __init__(self, selenium): self.selenium = selenium self.window_manager = WindowManager(selenium) self.browser = self.window_manager.windows[0]
Create FoxPuppet object. Args: selenium: (:py:class:`~selenium.webdriver.remote.webdriver.WebDriver`): Firefox WebDriver object.
juraj-google-style
def OpenFileObject(cls, path_spec_object, resolver_context=None): if (not isinstance(path_spec_object, path_spec.PathSpec)): raise TypeError('Unsupported path specification type.') if (resolver_context is None): resolver_context = cls._resolver_context if (path_spec_object.type_indicator == ...
Opens a file-like object defined by path specification. Args: path_spec_object (PathSpec): path specification. resolver_context (Optional[Context]): resolver context, where None represents the built in context which is not multi process safe. Returns: FileIO: file-like object or None if the path specification could n...
codesearchnet
def sqrt(cls, x: 'TensorFluent') -> 'TensorFluent': return cls._unary_op(x, tf.sqrt, tf.float32)
Returns a TensorFluent for the sqrt function. Args: x: The input fluent. Returns: A TensorFluent wrapping the sqrt function.
codesearchnet
def __init__(self, decode_module, encode_module, methodName='runTest'): super(EncodeProtoOpTestBase, self).__init__(methodName) self._decode_module = decode_module self._encode_module = encode_module
EncodeProtoOpTestBase initializer. Args: decode_module: a module containing the `decode_proto_op` method encode_module: a module containing the `encode_proto_op` method methodName: the name of the test method (same as for test.TestCase)
github-repos
def sync_firmware(self): serial_no = self.serial_number if self.firmware_newer(): try: self.invalidate_firmware() self.update_firmware() except erro...
Syncs the emulator's firmware version and the DLL's firmware. This method is useful for ensuring that the firmware running on the J-Link matches the firmware supported by the DLL. Args: self (JLink): the ``JLink`` instance Returns: ``None``
juraj-google-style
def strace_clear_all(self): data = 0 res = self._dll.JLINK_STRACE_Control(enums.JLinkStraceCommand.TRACE_EVENT_CLR_ALL, data) if (res < 0): raise errors.JLinkException('Failed to clear all STRACE events.') return None
Clears all STRACE events. Args: self (JLink): the ``JLink`` instance. Returns: ``None`` Raises: JLinkException: on error.
codesearchnet
def parse_args(argv): parser = make_parser() args = parser.parse_args(argv) t = args.tool_args kythe_args = kythe.Args(corpus=t.kythe_corpus, root=t.kythe_root, path=t.kythe_path, skip_stdlib=t.skip_stdlib) return (args.all_args, kythe_args, args.pytype_opts)
Parse command line args. Arguments: argv: Raw command line args, typically sys.argv[1:] Returns: A tuple of ( parsed_args: argparse.Namespace, kythe_args: kythe.Args, pytype_options: pytype.config.Options)
github-repos
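The `parse_args` above builds one `argparse` parser and then fans the parsed namespace out into separate arg bundles. A reduced sketch of that pattern (the flag names are illustrative stand-ins, and a plain dict replaces `kythe.Args`):

```python
import argparse


def make_parser():
    # Hypothetical reduced parser mirroring the fan-out pattern above.
    parser = argparse.ArgumentParser()
    parser.add_argument('--kythe_corpus', default='')
    parser.add_argument('--skip_stdlib', action='store_true')
    parser.add_argument('inputs', nargs='*')
    return parser


def parse_args(argv):
    args = make_parser().parse_args(argv)
    # Split the flat namespace into tool-specific bundles.
    kythe_args = {'corpus': args.kythe_corpus,
                  'skip_stdlib': args.skip_stdlib}
    return args, kythe_args


args, kythe_args = parse_args(['--kythe_corpus', 'c', 'a.py'])
```

Passing `argv` explicitly (typically `sys.argv[1:]`) keeps the function testable without touching the real command line.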
def predict(self, x_test): if self.model: lengths = map(len, x_test) x_test = self.p.transform(x_test) y_pred = self.model.predict(x_test) y_pred = self.p.inverse_transform(y_pred, lengths) return y_pred else: raise OSErro...
Returns the prediction of the model on the given test data. Args: x_test : array-like, shape = (n_samples, sent_length) Test samples. Returns: y_pred : array-like, shape = (n_samples, sent_length) Prediction labels for x.
juraj-google-style
def is_running(process): if os.name == 'nt': process_list = get_cmd_out(['tasklist', '/v']) return process in process_list else: process_list = get_cmd_out('ps axw | awk \'{print $5}\'') for i in process_list.split('\n'): if i != 'COMMAND' and not i.startswith('['): if i == process: ...
Check if process is running. Check if the given process name is running or not. Note: On a Linux system, kernel threads (like ``kthreadd`` etc.) are excluded. Args: process (str): The name of the process. Returns: bool: Is the process running?
juraj-google-style
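On the POSIX branch above, the filter must exclude both the `COMMAND` header row and kernel threads like `[kthreadd]` (the original `or` condition let kernel threads through). A pure-Python sketch of the corrected filtering, taking the `ps` command column as a string so it is testable without shelling out:

```python
def is_running(process, ps_output):
    """Check whether `process` appears in a captured `ps` command column.

    ps_output: newline-separated command names, e.g. the output of
    `ps axw | awk '{print $5}'`, including the COMMAND header row.
    """
    for name in ps_output.split('\n'):
        # Skip the header row and kernel threads such as [kthreadd].
        if name != 'COMMAND' and not name.startswith('['):
            if name == process:
                return True
    return False
```

Separating the parsing from the subprocess call keeps the kernel-thread exclusion easy to verify in isolation.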
def mark_all_as_done(self, **kwargs): result = self.gitlab.http_post('/todos/mark_as_done', **kwargs) try: return int(result) except ValueError: return 0
Mark all the todos as done. Args: **kwargs: Extra options to send to the server (e.g. sudo) Raises: GitlabAuthenticationError: If authentication is not correct GitlabTodoError: If the server failed to perform the request Returns: int: The number of todos marked done
codesearchnet