def read_partition(self, i):
    self._load_metadata()
    if i < 0 or i >= self.npartitions:
        raise IndexError('%d is out of range' % i)
    return self._get_partition(i)
Return a part of the data corresponding to the i-th partition. By default, assumes i should be an integer between zero and npartitions; override for more complex indexing schemes.
def backward(self, speed=1):
    self.left_motor.backward(speed)
    self.right_motor.backward(speed)
Drive the robot backward by running both motors backward. :param float speed: Speed at which to drive the motors, as a value between 0 (stopped) and 1 (full speed). The default is 1.
def stop(self, *args, **kwargs):
    if self.status in (Status.stopping, Status.stopped):
        logger.debug("{} is already {}".format(self, self.status.name))
    else:
        self.status = Status.stopping
        self.onStopping(*args, **kwargs)
        self.status = Status.stopped
Set the status to Status.stopping and also call `onStopping` with the provided args and kwargs.
def url(self):
    if self.is_public:
        return '{0}/{1}/{2}'.format(
            self.bucket._boto_s3.meta.client.meta.endpoint_url,
            self.bucket.name,
            self.name
        )
    else:
        raise ValueError('{0!r} does not have the public-read ACL set. ' ...
Returns the public URL for the given key.
def _full_like_variable(other, fill_value, dtype: Union[str, np.dtype, None] = None):
    from .variable import Variable
    if isinstance(other.data, dask_array_type):
        import dask.array
        if dtype is None:
            dtype = other.dtype
        data = dask.array.full(other.shape, ...
Inner function of full_like, where other must be a variable.
def loadBatch(self, records):
    try:
        curr_batch = records[:self.batchSize()]
        next_batch = records[self.batchSize():]
        curr_records = list(curr_batch)
        if self._preloadColumns:
            for record in curr_records:
                record.recordValues(s...
Loads the records for this instance in a batched mode.
def get_graph_data(self, graph, benchmark):
    if benchmark.get('params'):
        param_iter = enumerate(zip(itertools.product(*benchmark['params']),
                                   graph.get_steps()))
    else:
        param_iter = [(None, (None, graph.get_steps()))]
    for j, (param, ...
Iterator over graph data sets.

Yields
------
param_idx
    Flat index to parameter permutations for parameterized benchmarks.
    None if benchmark is not parameterized.
entry_name
    Name for the data set. If benchmark is non-parameterized, this is the ...
def registration_id_chunks(self, registration_ids):
    try:
        xrange
    except NameError:
        xrange = range
    for i in xrange(0, len(registration_ids), self.FCM_MAX_RECIPIENTS):
        yield registration_ids[i:i + self.FCM_MAX_RECIPIENTS]
Splits registration ids into several lists of at most 1000 registration ids each.

Args:
    registration_ids (list): FCM device registration IDs

Yields:
    list: sublists of registration ids
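The chunking pattern above can be sketched as a standalone generator; the limit of 1000 is taken from the docstring (FCM_MAX_RECIPIENTS is an assumption here, not the library's constant):

```python
FCM_MAX_RECIPIENTS = 1000  # assumed limit, per the docstring

def registration_id_chunks(registration_ids, size=FCM_MAX_RECIPIENTS):
    """Yield successive sublists of at most `size` registration ids."""
    for i in range(0, len(registration_ids), size):
        yield registration_ids[i:i + size]

# e.g. 5 ids in chunks of 2:
print(list(registration_id_chunks([1, 2, 3, 4, 5], size=2)))  # → [[1, 2], [3, 4], [5]]
```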
def to_dict(obj, **kwargs):
    if is_model(obj.__class__):
        return related_obj_to_dict(obj, **kwargs)
    else:
        return obj
Convert an object into a dictionary. Uses singledispatch to allow for clean extensions for custom class types.

Reference: https://pypi.python.org/pypi/singledispatch

:param obj: object instance
:param kwargs: keyword arguments such as suppress_private_attr, suppress_empty_values, dict...
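A minimal sketch of the singledispatch extension mechanism the docstring refers to; the `Point` class and its handler are hypothetical, purely for illustration:

```python
from functools import singledispatch

@singledispatch
def to_dict(obj, **kwargs):
    # Fallback: plain values pass through unchanged.
    return obj

class Point:  # hypothetical custom type
    def __init__(self, x, y):
        self.x, self.y = x, y

@to_dict.register(Point)
def _(obj, **kwargs):
    # Registered handler for the custom type.
    return {'x': obj.x, 'y': obj.y}

print(to_dict(5))            # → 5
print(to_dict(Point(1, 2)))  # → {'x': 1, 'y': 2}
```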
def merge_asof(left, right, on=None, left_on=None, right_on=None,
               left_index=False, right_index=False, by=None,
               left_by=None, right_by=None, suffixes=('_x', '_y'),
               tolerance=None, allow_exact_matches=True, direction...
Perform an asof merge. This is similar to a left-join except that we match on nearest key rather than equal keys. Both DataFrames must be sorted by the key.

For each row in the left DataFrame:

- A "backward" search selects the last row in the right DataFrame whose 'on' key is less than or e...
def python_sidebar_navigation(python_input):
    def get_text_fragments():
        tokens = []
        tokens.extend([
            ('class:sidebar', ' '),
            ('class:sidebar.key', '[Arrows]'),
            ('class:sidebar', ' '),
            ('class:sidebar.description', 'Navigate'),
            ('class:side...
Create the `Layout` showing the navigation information for the sidebar.
def anchor_stream(self, stream_id, converter="rtc"):
    if isinstance(converter, str):
        converter = self._known_converters.get(converter)
        if converter is None:
            raise ArgumentError("Unknown anchor converter string: %s" % converter, known_con...
Mark a stream as containing anchor points.
def to_embedded(pool_id=None, is_thin_enabled=None, is_deduplication_enabled=None,
                is_compression_enabled=None, is_backup_only=None, size=None,
                tiering_policy=None, request_id=None, src_id=None, name=None,
                default_sp=None, replication_resou...
Constructs an embedded object of `UnityResourceConfig`.

:param pool_id: storage pool of the resource.
:param is_thin_enabled: is thin type or not.
:param is_deduplication_enabled: is deduplication enabled or not.
:param is_compression_enabled: is in-line compression (ILC) enabled or ...
def initialise_loggers(names, log_level=_builtin_logging.WARNING, handler_class=SplitStreamHandler):
    frmttr = get_formatter()
    for name in names:
        logr = _builtin_logging.getLogger(name)
        handler = handler_class()
        handler.setFormatter(frmttr)
        logr.addHandler(handler)
        logr.se...
Initialises specified loggers to generate output at the specified logging level. If the specified named loggers do not exist, they are created.

:type names: :obj:`list` of :obj:`str`
:param names: List of logger names.
:type log_level: :obj:`int`
:param log_level: Log level for messages, typica...
def _get_callback_context(env):
    if env.model is not None and env.cvfolds is None:
        context = 'train'
    elif env.model is None and env.cvfolds is not None:
        context = 'cv'
    return context
Return whether the current callback context is 'cv' or 'train'.
def date_range(self):
    try:
        days = int(self.days)
    except ValueError:
        exit_after_echo(QUERY_DAYS_INVALID)
    if days < 1:
        exit_after_echo(QUERY_DAYS_INVALID)
    start = datetime.today()
    end = start + timedelta(days=days)
    return (
        da...
Generate date range according to the `days` user input.
def stop_workflow(config, *, names=None):
    jobs = list_jobs(config, filter_by_type=JobType.Workflow)
    if names is not None:
        filtered_jobs = []
        for job in jobs:
            if (job.id in names) or (job.name in names) or (job.workflow_id in names):
                filtered_jobs.append(job)
    else:...
Stop one or more workflows.

Args:
    config (Config): Reference to the configuration object from which the
        settings for the workflow are retrieved.
    names (list): List of workflow names, workflow ids or workflow job ids
        for the workflows that should be stopped. If all workflows ...
def get(key, profile=None):
    conn = salt.utils.memcached.get_conn(profile)
    return salt.utils.memcached.get(conn, key)
Get a value from memcached.
def classify_coincident(st_vals, coincident):
    if not coincident:
        return None
    if st_vals[0, 0] >= st_vals[0, 1] or st_vals[1, 0] >= st_vals[1, 1]:
        return UNUSED_T
    else:
        return CLASSIFICATION_T.COINCIDENT
r"""Determine if coincident parameters are "unused".

.. note::

   This is a helper for :func:`surface_intersections`.

In the case that ``coincident`` is :data:`True`, then we'll have two sets of parameters :math:`(s_1, t_1)` and :math:`(s_2, t_2)`. If one of :math:`s1 < s2` or :math:`t1 < t2` is...
def reset(self):
    if self.resync_period > 0 and (self.resets + 1) % self.resync_period == 0:
        self._exit_resync()
    while not self.done:
        self.done = self._quit_episode()
        if not self.done:
            time.sleep(0.1)
    return self._start_up()
Gym API reset.
def write(name, keyword, domain, citation, author, description, species, version,
          contact, licenses, values, functions, output, value_prefix):
    write_namespace(
        name, keyword, domain, author, citation, values,
        namespace_description=description,
        namespace_species=species,
        nam...
Build a namespace from items.
def get_logfile_path(working_dir):
    logfile_filename = virtualchain_hooks.get_virtual_chain_name() + ".log"
    return os.path.join(working_dir, logfile_filename)
Get the logfile path for our service endpoint.
def is_monotonic(full_list):
    prev_elements = {full_list[0]}
    prev_item = full_list[0]
    for item in full_list:
        if item != prev_item:
            if item in prev_elements:
                return False
            prev_item = item
            prev_elements.add(item)
    return True
Determine whether elements in a list are monotonic, i.e. unique elements are clustered together. E.g. [5,5,3,4] is, [5,3,5] is not.
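The clustering behaviour described can be checked with a self-contained copy of the function:

```python
def is_monotonic(full_list):
    """True when each distinct value appears in one contiguous run."""
    prev_elements = {full_list[0]}
    prev_item = full_list[0]
    for item in full_list:
        if item != prev_item:
            if item in prev_elements:
                return False  # value re-appeared after a different value
            prev_item = item
            prev_elements.add(item)
    return True

print(is_monotonic([5, 5, 3, 4]))  # → True
print(is_monotonic([5, 3, 5]))     # → False
```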
def reload(self):
    other = type(self).get(self.name, service=self.service)
    self.request_count = other.request_count
Reload self from self.service.
def get_current_shutit_pexpect_session(self, note=None):
    self.handle_note(note)
    res = self.current_shutit_pexpect_session
    self.handle_note_after(note)
    return res
Returns the currently-set default pexpect child. @return: default shutit pexpect child object
def get_genus_type_metadata(self):
    metadata = dict(self._mdata['genus_type'])
    metadata.update({'existing_string_values': self._my_map['genusTypeId']})
    return Metadata(**metadata)
Gets the metadata for a genus type.

return: (osid.Metadata) - metadata for the genus
*compliance: mandatory -- This method must be implemented.*
def delete_user_template(self, uid=0, temp_id=0, user_id=''):
    if self.tcp and user_id:
        command = 134
        command_string = pack('<24sB', str(user_id), temp_id)
        cmd_response = self.__send_command(command, command_string)
        if cmd_response.get('status'):
            re...
Delete a specific template.

:param uid: user ID that is generated from the device
:param user_id: your own user ID
:return: bool
def convert_response(allocate_quota_response, project_id):
    if not allocate_quota_response or not allocate_quota_response.allocateErrors:
        return _IS_OK
    theError = allocate_quota_response.allocateErrors[0]
    error_tuple = _QUOTA_ERROR_CONVERSION.get(theError.code, _IS_UNKNOWN)
    if error_tuple[1].find...
Computes an http status code and message from an `AllocateQuotaResponse`.

The return value is a tuple (code, message) where
    code: is the http status code
    message: is the message to return

Args:
    allocate_quota_response (:class:`endpoints_management.gen.servicecontrol_v1_messages.AllocateQuotaResponse`): ...
def config_changed(inherit_napalm_device=None, **kwargs):
    is_config_changed = False
    reason = ''
    try_compare = compare_config(inherit_napalm_device=napalm_device)
    if try_compare.get('result'):
        if try_compare.get('out'):
            is_config_changed = True
        else:
            reason = 'Conf...
Will prompt if the configuration has been changed.

:return: A tuple with a boolean that specifies if the config was changed on the device, and a string that provides more details of the reason why the configuration was not changed.

CLI Example:

.. code-block:: bash

    salt '*' net.config_chang...
def get_archive_type(path):
    if not is_directory_archive(path):
        raise TypeError('Unable to determine the type of archive at path: %s' % path)
    try:
        ini_path = '/'.join([_convert_slashes(path), 'dir_archive.ini'])
        parser = _ConfigParser.SafeConfigParser()
        parser.read(ini_path)
        ...
Returns the contents type for the provided archive path.

Parameters
----------
path : string
    Directory to evaluate.

Returns
-------
Returns a string of: sframe, sgraph; raises TypeError for anything else
def quantile_turnover(quantile_factor, quantile, period=1):
    quant_names = quantile_factor[quantile_factor == quantile]
    quant_name_sets = quant_names.groupby(level=['date']).apply(
        lambda x: set(x.index.get_level_values('asset')))
    if isinstance(period, int):
        name_shifted = quant_name_sets.shi...
Computes the proportion of names in a factor quantile that were not in that quantile in the previous period.

Parameters
----------
quantile_factor : pd.Series
    DataFrame with date, asset and factor quantile.
quantile : int
    Quantile on which to perform turnover analysis.
period: s...
def HasStorage(self):
    from neo.Core.State.ContractState import ContractPropertyState
    return self.ContractProperties & ContractPropertyState.HasStorage > 0
Flag indicating if storage is available.

Returns:
    bool: True if available. False otherwise.
def _set_rock_ridge(self, rr):
    if not self.rock_ridge:
        self.rock_ridge = rr
    else:
        for ver in ['1.09', '1.10', '1.12']:
            if self.rock_ridge == ver:
                if rr and rr != ver:
                    raise pycdlibexception.PyCdlibInvalidISO('Inconsisten...
An internal method to set the Rock Ridge version of the ISO given the Rock Ridge version of the previous entry.

Parameters:
    rr - The version of rr from the last directory record.
Returns:
    Nothing.
def read_file(path):
    if os.path.isabs(path):
        with wrap_file_exceptions():
            with open(path, 'rb') as stream:
                return stream.read()
    with wrap_file_exceptions():
        stream = ca_storage.open(path)
        try:
            return stream.read()
        finally:
            stream.close()
Read the file from the given path. If ``path`` is an absolute path, reads a file from the local filesystem. For relative paths, read the file using the storage backend configured using :ref:`CA_FILE_STORAGE <settings-ca-file-storage>`.
def getAWSAccountID():
    link = "http://169.254.169.254/latest/dynamic/instance-identity/document"
    try:
        conn = urllib2.urlopen(url=link, timeout=5)
    except urllib2.URLError:
        return '0'
    jsonData = json.loads(conn.read())
    return jsonData['accountId']
Return an instance's AWS account number, or '0' when not running in EC2.
def __clear_covers(self):
    for i in range(self.n):
        self.row_covered[i] = False
        self.col_covered[i] = False
Clear all covered matrix cells.
def ngettext(self, singular, plural, num, domain=None, **variables):
    variables.setdefault('num', num)
    t = self.get_translations(domain)
    return t.ungettext(singular, plural, num) % variables
Translate a string with the current locale. The `num` parameter is used to dispatch between singular and various plural forms of the message.
def headloss_rect(FlowRate, Width, DistCenter, Length, KMinor, Nu, PipeRough, openchannel):
    return (headloss_exp_rect(FlowRate, Width, DistCenter, KMinor).magnitude
            + headloss_fric_rect(FlowRate, Width, DistCenter, Length, Nu, PipeRough, openchannel...
Return the total head loss in a rectangular channel. Total head loss is a combination of the major and minor losses. This equation applies to both laminar and turbulent flows.
def _parse_error_message(self, message):
    msg = message['error']['message']
    code = message['error']['code']
    err = None
    out = None
    if 'data' in message['error']:
        err = ' '.join(message['error']['data'][-1]['errors'])
        out = message['error']['data']
    re...
Parses the eAPI failure response message.

This method accepts an eAPI failure message and parses the necessary parts in order to generate a CommandError.

Args:
    message (str): The error message to parse

Returns:
    tuple: A tuple that consists of the following: ...
def on_api_error(self, error_status=None, message=None, event_origin=None):
    if message.meta["error"].find("Widget instance not found on server") > -1:
        error_status["retry"] = True
        self.comm.attach_remote(
            self.id,
            self.remote_name,
            remote_n...
API error handling
def requires_refcount(cls, func):
    @functools.wraps(func)
    def requires_active_handle(*args, **kwargs):
        if cls.refcount() == 0:
            raise NoHandleException()
        return func(*args, **kwargs)
    return requires_active_handle
The ``requires_refcount`` decorator adds a check prior to calling ``func`` to verify that there is an active handle. If there is no such handle, a ``NoHandleException`` exception is thrown.
def process_params(mod_id, params, type_params):
    res = {}
    for param_name, param_info in type_params.items():
        val = params.get(param_name, param_info.get("default", None))
        if val is not None:
            param_res = dict(param_info)
            param_res["value"] = val
            res[param_name]...
Takes as input a dictionary of parameters defined on a module and the information about the required parameters defined on the corresponding module type. Validates that all required parameters were supplied and fills any missing parameters with their default values from the module type. Returns a nest...
def knx_to_time(knxdata):
    if len(knxdata) != 3:
        raise KNXException("Can only convert a 3 Byte object to time")
    dow = knxdata[0] >> 5
    res = time(knxdata[0] & 0x1f, knxdata[1], knxdata[2])
    return [res, dow]
Converts a KNX time to a tuple of a time object and the day of week
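The 3-byte layout (day of week in the top three bits of the first byte, hour in the low five) can be exercised with a self-contained copy; `KNXException` is replaced with `ValueError` here purely for illustration:

```python
from datetime import time

def knx_to_time(knxdata):
    """Convert a 3-byte KNX time to [time, day_of_week]."""
    if len(knxdata) != 3:
        raise ValueError("Can only convert a 3 Byte object to time")
    dow = knxdata[0] >> 5              # day of week: bits 5-7 of byte 0
    res = time(knxdata[0] & 0x1f,      # hour: bits 0-4 of byte 0
               knxdata[1], knxdata[2])  # minute, second
    return [res, dow]

# Monday (dow=1), 10:30:15
print(knx_to_time([(1 << 5) | 10, 30, 15]))  # → [datetime.time(10, 30, 15), 1]
```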
def construct_rest_of_worlds_mapping(self, excluded, fp=None):
    metadata = {
        'filename': 'faces.gpkg',
        'field': 'id',
        'sha256': sha256(self.faces_fp)
    }
    data = []
    for key, locations in excluded.items():
        for location in locations:
            ...
Construct topo mapping file for ``excluded``.

``excluded`` must be a **dictionary** of ``{"rest-of-world label": ["names", "of", "excluded", "locations"]}``.

Topo mapping has the data format:

.. code-block:: python

    {
        'data': [
            ['location label', ...
def load(fp, encode_nominal=False, return_type=DENSE):
    decoder = ArffDecoder()
    return decoder.decode(fp, encode_nominal=encode_nominal, return_type=return_type)
Load a file-like object containing the ARFF document and convert it into a Python object.

:param fp: a file-like object.
:param encode_nominal: boolean, if True perform a label encoding while reading the .arff file.
:param return_type: determines the data structure used to store the dat...
def normalize(self, text, cleaned=False, **kwargs):
    if not cleaned:
        text = self.clean(text, **kwargs)
    return ensure_list(text)
Create a representation ideal for comparisons, but not to be shown to the user.
def translate(root_list, use_bag_semantics=False):
    translator = (Translator() if use_bag_semantics else SetTranslator())
    return [translator.translate(root).to_sql() for root in root_list]
Translate a list of relational algebra trees into SQL statements.

:param root_list: a list of tree roots
:param use_bag_semantics: flag for using relational algebra bag semantics
:return: a list of SQL statements
def toString(self):
    slist = self.toList()
    string = angle.slistStr(slist)
    return string if slist[0] == '-' else string[1:]
Returns time as string.
def login(self, username='0000', userid=0, password=None):
    if password and len(password) > 20:
        self.logger.error('password longer than 20 characters received')
        raise Exception('password longer than 20 characters, login failed')
    self.send(C1218LogonRequest(username, userid))
    data = self.recv()
    if data != b...
Log into the connected device.

:param str username: the username to log in with (len(username) <= 10)
:param int userid: the userid to log in with (0x0000 <= userid <= 0xffff)
:param str password: password to log in with (len(password) <= 20)
:rtype: bool
def update_from_dict(self, dct):
    if not dct:
        return
    all_props = self.__class__.CONFIG_PROPERTIES
    for key, value in six.iteritems(dct):
        attr_config = all_props.get(key)
        if attr_config:
            setattr(self, key, value)
        else:
            ...
Updates this configuration object from a dictionary. See :meth:`ConfigurationObject.update` for details.

:param dct: Values to update the ConfigurationObject with.
:type dct: dict
def node_transmissions(node_id):
    exp = Experiment(session)
    direction = request_parameter(parameter="direction", default="incoming")
    status = request_parameter(parameter="status", default="all")
    for x in [direction, status]:
        if type(x) == Response:
            return x
    node = models.Node.quer...
Get all the transmissions of a node. The node id must be specified in the url. You can also pass direction (to/from/all) or status (all/pending/received) as arguments.
def _parse_common_paths_file(project_path):
    common_paths_file = os.path.join(project_path, 'common_paths.xml')
    tree = etree.parse(common_paths_file)
    paths = {}
    path_vars = ['basedata', 'scheme', 'style', 'style', 'customization', 'markable']
    for path_var in p...
Parses a common_paths.xml file and returns a dictionary of paths, a dictionary of annotation level descriptions and the filename of the style file.

Parameters
----------
project_path : str
    path to the root directory of the MMAX project

Returns
------...
def acquire_auth_token_ticket(self, headers=None):
    logging.debug('[CAS] Acquiring Auth token ticket')
    url = self._get_auth_token_tickets_url()
    text = self._perform_post(url, headers=headers)
    auth_token_ticket = json.loads(text)['ticket']
    logging.debug('[CAS] Acquire Auth token ti...
Acquire an auth token from the CAS server.
def start_centroid_distance(item_a, item_b, max_value):
    start_a = item_a.center_of_mass(item_a.times[0])
    start_b = item_b.center_of_mass(item_b.times[0])
    start_distance = np.sqrt((start_a[0] - start_b[0]) ** 2 + (start_a[1] - start_b[1]) ** 2)
    return np.minimum(start_distance, max_value) / float(max_val...
Distance between the centroids of the first step in each object.

Args:
    item_a: STObject from the first set in TrackMatcher
    item_b: STObject from the second set in TrackMatcher
    max_value: Maximum distance value used as scaling value and upper constraint.

Returns:
    Distance value ...
def invocation():
    cmdargs = [sys.executable] + sys.argv[:]
    invocation = " ".join(shlex.quote(s) for s in cmdargs)
    return invocation
Reconstructs the invocation for this Python program.
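The same reconstruction, shown standalone with an injectable argv for testing; shlex.quote is what keeps arguments containing spaces shell-safe:

```python
import shlex
import sys

def invocation(argv=None):
    """Reconstruct a shell-safe command line for this Python program."""
    cmdargs = [sys.executable] + (argv if argv is not None else sys.argv[:])
    return " ".join(shlex.quote(s) for s in cmdargs)

# Arguments containing spaces come back quoted:
print(invocation(["script.py", "two words"]))
```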
def get_logs(self):
    folder = os.path.dirname(self.pcfg['log_file'])
    for path, dirs, files in os.walk(folder):
        for file in files:
            if os.path.splitext(file)[-1] == '.log':
                yield os.path.join(path, file)
Returns logs from disk; requires the .log extension.
def __query_options(self):
    options = 0
    if self.__tailable:
        options |= _QUERY_OPTIONS["tailable_cursor"]
    if self.__slave_okay or self.__pool._slave_okay:
        options |= _QUERY_OPTIONS["slave_okay"]
    if not self.__timeout:
        options |= _QUERY_OPTIONS["no_timeou...
Get the query options string to use for this query.
def project_with_metadata(self, term_doc_mat, x_dim=0, y_dim=1):
    return self._project_category_corpus(
        self._get_category_metadata_corpus_and_replace_terms(term_doc_mat),
        x_dim, y_dim)
Returns a projection of the term-document matrix.

:param term_doc_mat: a TermDocMatrix
:return: CategoryProjection
def run_subprocess(command, return_code=False, **kwargs):
    use_kwargs = dict(stderr=subprocess.PIPE, stdout=subprocess.PIPE)
    use_kwargs.update(kwargs)
    p = subprocess.Popen(command, **use_kwargs)
    output = p.communicate()
    output = ['' if s is None else s for s in output]
    output = [s.decode('utf-8')...
Run command using subprocess.Popen.

Run command and wait for command to complete. If the return code was zero then return, otherwise raise CalledProcessError. By default, this will also add stdout=subprocess.PIPE and stderr=subprocess.PIPE to the call to Popen to suppress printing to the terminal.

Parameters
-...
def init_db():
    import reana_db.models
    if not database_exists(engine.url):
        create_database(engine.url)
    Base.metadata.create_all(bind=engine)
Initialize the DB.
def _reset(cls):
    if os.getpid() != cls._cls_pid:
        cls._cls_pid = os.getpid()
        cls._cls_instances_by_target.clear()
        cls._cls_thread_by_target.clear()
If we have forked since the watch dictionaries were initialized, everything they hold is garbage, so clear them.
def update_cache(from_currency, to_currency):
    if check_update(from_currency, to_currency) is True:
        ccache[from_currency][to_currency]['value'] = convert_using_api(from_currency, to_currency)
        ccache[from_currency][to_currency]['last_update'] = time.time()
        cache.write(ccache)
Update the from_currency/to_currency pair in the cache, by requesting API info, if the last update for that pair was over 30 minutes ago.
def createPopulationFile(inputFiles, labels, outputFileName):
    outputFile = None
    try:
        outputFile = open(outputFileName, 'w')
    except IOError:
        msg = "%(outputFileName)s: can't write file"
        raise ProgramError(msg)
    for i in xrange(len(inputFiles)):
        fileName = inputFiles[i]
        ...
Creates a population file.

:param inputFiles: the list of input files.
:param labels: the list of labels (corresponding to the input files).
:param outputFileName: the name of the output file.
:type inputFiles: list
:type labels: list
:type outputFileName: str

The ``inputFiles`` is in rea...
def run_timed(self, **kwargs):
    for key in kwargs:
        setattr(self, key, kwargs[key])
    self.command = self.COMMAND_RUN_TIMED
Run the motor for the amount of time specified in `time_sp` and then stop the motor using the action specified by `stop_action`.
def check_ellipsis(text):
    err = "typography.symbols.ellipsis"
    msg = u"'...' is an approximation, use the ellipsis symbol '…'."
    regex = r"\.\.\."
    return existence_check(text, [regex], err, msg, max_errors=3,
                           require_padding=False, offset=0)
Use an ellipsis instead of three dots.
def cli(out_fmt, input, output):
    _input = StringIO()
    for l in input:
        try:
            _input.write(str(l))
        except TypeError:
            _input.write(bytes(l, 'utf-8'))
    _input = seria.load(_input)
    _out = _input.dump(out_fmt)
    output.write(_out)
Converts text.
def _node_name(self, concept):
    if (self.grounding_threshold is not None
            and concept.db_refs[self.grounding_ontology]
            and (concept.db_refs[self.grounding_ontology][0][1] > self.grounding_threshold)):
        entry = concept.db_refs[self.grounding_onto...
Return a standardized name for a node given a Concept.
def extract_archive(filepath):
    if os.path.isdir(filepath):
        path = os.path.abspath(filepath)
        print("Archive already extracted. Viewing from {}...".format(path))
        return path
    elif not zipfile.is_zipfile(filepath):
        raise TypeError("{} is not a zipfile".format(filepath))
    archive_s...
Returns the path of the archive.

:param str filepath: Path to file to extract or read
:return: path of the archive
:rtype: str
def description(self):
    for e in self:
        if isinstance(e, Description):
            return e.value
    raise NoSuchAnnotation
Obtain the description associated with the element.

Raises:
    :class:`NoSuchAnnotation` if there is no associated description.
def send_audio_packet(self, data, *, encode=True):
    self.checked_add('sequence', 1, 65535)
    if encode:
        encoded_data = self.encoder.encode(data, self.encoder.SAMPLES_PER_FRAME)
    else:
        encoded_data = data
    packet = self._get_voice_packet(encoded_data)
    try:
        ...
Sends an audio packet composed of the data.

You must be connected to play audio.

Parameters
----------
data: bytes
    The :term:`py:bytes-like object` denoting PCM or Opus voice data.
encode: bool
    Indicates if ``data`` should be encoded into Opus. ...
def infer_location(self, location_query, max_distance, google_key,
                   foursquare_client_id, foursquare_client_secret, limit):
    self.location_from = infer_location(
        self.points[0],
        location_query,
        ...
In-place location inferring. See the infer_location function.

Args:

Returns:
    :obj:`Segment`: self
def drop_primary_key(self, table):
    if self.get_primary_key(table):
        self.execute('ALTER TABLE {0} DROP PRIMARY KEY'.format(wrap(table)))
Drop a Primary Key constraint from a specific table.
def exit_if_missing_graphviz(self):
    (out, err) = utils.capture_shell("which dot")
    if "dot" not in out:
        ui.error(c.MESSAGES["dot_missing"])
Detect the presence of the dot utility to make a png graph.
def execute_command(self, args, parent_environ=None, **subprocess_kwargs):
    if parent_environ in (None, os.environ):
        target_environ = {}
    else:
        target_environ = parent_environ.copy()
    interpreter = Python(target_environ=target_environ)
    executor = self._create_executo...
Run a command within a resolved context.

This applies the context to a python environ dict, then runs a subprocess in that namespace. This is not a fully configured subshell - shell-specific commands such as aliases will not be applied. To execute a command within a subshell instead, us...
def listen(self, port: int, address: str = "") -> None:
    sockets = bind_sockets(port, address=address)
    self.add_sockets(sockets)
Starts accepting connections on the given port. This method may be called more than once to listen on multiple ports. `listen` takes effect immediately; it is not necessary to call `TCPServer.start` afterwards. It is, however, necessary to start the `.IOLoop`.
def Containers(vent=True, running=True, exclude_labels=None):
    containers = []
    try:
        d_client = docker.from_env()
        if vent:
            c = d_client.containers.list(all=not running, filters={'label': 'vent'})
        else:
            c = d_client.containers...
Get containers that are created; by default, limit to vent containers that are running.
def _read_proto_resolve(self, length, ptype):
    if ptype == '0800':
        return ipaddress.ip_address(self._read_fileng(4))
    elif ptype == '86dd':
        return ipaddress.ip_address(self._read_fileng(16))
    else:
        return self._read_fileng(length)
Resolve IP address according to protocol.

Positional arguments:
    * length -- int, protocol address length
    * ptype -- int, protocol type

Returns:
    * str -- IP address
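The resolution above relies on `ipaddress.ip_address` accepting packed bytes: 4 bytes yield an `IPv4Address` (the '0800' EtherType branch), 16 bytes an `IPv6Address` (the '86dd' branch):

```python
import ipaddress

# 4 packed bytes → IPv4, as read for protocol type '0800'
print(ipaddress.ip_address(b'\x7f\x00\x00\x01'))      # → 127.0.0.1

# 16 packed bytes → IPv6, as read for protocol type '86dd'
print(ipaddress.ip_address(b'\x00' * 15 + b'\x01'))   # → ::1
```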
def download_setuptools(version=DEFAULT_VERSION, download_base=DEFAULT_URL,
                        to_dir=os.curdir, delay=15):
    to_dir = os.path.abspath(to_dir)
    try:
        from urllib.request import urlopen
    except ImportError:
        from urllib2 import urlopen
    tgz_name = "distribute-%s.tar.gz" % ve...
Download distribute from a specified location and return its filename.

`version` should be a valid distribute version number that is available as an egg for download under the `download_base` URL (which should end with a '/'). `to_dir` is the directory where the egg will be downloaded. `delay` is the nu...
def get(self, key):
    if key not in self._keystore:
        return None
    rec = self._keystore[key]
    if rec.is_expired:
        self.delete(key)
        return None
    return rec.value
Retrieves a previously stored key from the storage.

:return: value stored in the storage
def _create_alignment_button(self):
    iconnames = ["AlignTop", "AlignCenter", "AlignBottom"]
    bmplist = [icons[iconname] for iconname in iconnames]
    self.alignment_tb = _widgets.BitmapToggleButton(self, bmplist)
    self.alignment_tb.SetToolTipString(_(u"Alignment"))
    self.Bind(wx.EVT_BUT...
Creates vertical alignment button
def delete(self, domain, type_name, search_command):
    return self._request(domain, type_name, search_command, 'DELETE', None)
Delete an entry in the ThreatConnect Data Store.

Args:
    domain (string): One of 'local', 'organization', or 'system'.
    type_name (string): This is a free form index type name. The ThreatConnect API will use this resource verbatim.
    search_command (string): Search comman...
def create_repo(url, vcs, **kwargs):
    if vcs == 'git':
        return GitRepo(url, **kwargs)
    elif vcs == 'hg':
        return MercurialRepo(url, **kwargs)
    elif vcs == 'svn':
        return SubversionRepo(url, **kwargs)
    else:
        raise InvalidVCS('VCS %s is not a valid VCS' % vcs)
r"""Return an object representation of a VCS repository.

:returns: instance of a repository object
:rtype: :class:`libvcs.svn.SubversionRepo`, :class:`libvcs.git.GitRepo` or :class:`libvcs.hg.MercurialRepo`.

Usage Example::

    >>> from libvcs.shortcuts import create_repo
    >>> r = crea...
def getIndex(reference):
    if reference:
        reffas = reference
    else:
        parent_directory = path.dirname(path.abspath(path.dirname(__file__)))
        reffas = path.join(parent_directory, "reference/DNA_CS.fasta")
    if not path.isfile(reffas):
        logging.error("Could not find reference fasta for l...
Find the reference folder using the location of the script file. Create the index, and test if successful.
def convert_notebook(self, name):
    exporter = nbconvert.exporters.python.PythonExporter()
    relative_path = self.convert_path(name)
    file_path = self.get_path("%s.ipynb" % relative_path)
    code = exporter.from_filename(file_path)[0]
    self.write_code(name, code)
    self.clean_code(nam...
Converts a notebook into a python file.
def handle_input(self, event):
    self.update_timeval()
    self.events = []
    code = self._get_event_key_code(event)
    if code in self.codes:
        new_code = self.codes[code]
    else:
        new_code = 0
    event_type = self._get_event_type(event)
    value = self._get_ke...
Process the keyboard input.
def to_html(self, show_mean=None, sortable=None, colorize=True, *args, **kwargs):
    if show_mean is None:
        show_mean = self.show_mean
    if sortable is None:
        sortable = self.sortable
    df = self.copy()
    if show_mean:
        df.insert(0, 'Mean', None)
        ...
Extend Pandas' built-in `to_html` method for rendering a DataFrame, and use it to render a ScoreMatrix.
def three_digit(number):
    number = str(number)
    if len(number) == 1:
        return u'00%s' % number
    elif len(number) == 2:
        return u'0%s' % number
    else:
        return number
Add 0s to inputs whose length is less than 3.

:param number: The number to convert
:type number: int
:returns: String

:example:
    >>> three_digit(1)
    '001'
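The same padding is what `str.zfill(3)` provides in one call; a self-contained copy, shown for comparison (illustrative only):

```python
def three_digit(number):
    """Left-pad a number to at least three digits."""
    number = str(number)
    if len(number) == 1:
        return u'00%s' % number
    elif len(number) == 2:
        return u'0%s' % number
    else:
        return number

print(three_digit(1))    # → 001
print(three_digit(42))   # → 042
print(three_digit(1234)) # → 1234

# Equivalent via the stdlib for non-negative inputs:
assert three_digit(7) == str(7).zfill(3)
```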
def disconnect(self):
    all_conns = chain(
        self._available_connections.values(),
        self._in_use_connections.values(),
    )
    for node_connections in all_conns:
        for connection in node_connections:
            connection.disconnect()
Nothing that requires any overwrite.
def nonKeyVisibleCols(self): 'All columns which are not keysList of unhidden non-key columns.' return [c for c in self.columns if not c.hidden and c not in self.keyCols]
List of unhidden non-key columns.
def calculate_start_time(df):
    if "time" in df:
        df["time_arr"] = pd.Series(df["time"], dtype='datetime64[s]')
    elif "timestamp" in df:
        df["time_arr"] = pd.Series(df["timestamp"], dtype="datetime64[ns]")
    else:
        return df
    if "dataset" in df:
        for dset in df["dataset"].unique():...
Calculate the start_time per read. Time data is either a "time" (in seconds, derived from summary files) or a "timestamp" (in UTC, derived from fastq_rich format) and has to be converted appropriately into a datetime format time_arr. For both, the time_zero is the minimal value of the time_arr, whi...
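The core of the conversion is normalizing each read's time to a common zero point: convert the raw values to datetimes, take the minimum as time zero, and subtract. A stdlib-only sketch of that idea (the sample values are hypothetical; the real function operates on pandas Series):

```python
from datetime import datetime, timezone

# Hypothetical raw input: epoch seconds such as a summary file might carry.
raw = [120, 0, 60]

# Convert to timezone-aware datetimes, as the "timestamp" branch would.
times = [datetime.fromtimestamp(s, tz=timezone.utc) for s in raw]

# time_zero is the minimal value of the time array ...
time_zero = min(times)

# ... and each start time is expressed relative to it.
start_times = [(t - time_zero).total_seconds() for t in times]
print(start_times)  # [120.0, 0.0, 60.0]
```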
def rename_variables(expression: Expression, renaming: Dict[str, str]) -> Expression:
    if isinstance(expression, Operation):
        if hasattr(expression, 'variable_name'):
            variable_name = renaming.get(expression.variable_name, expression.variable_name)
            return create_operation_expression(
                ...
Rename the variables in the expression according to the given dictionary.

Args:
    expression:
        The expression in which the variables are renamed.
    renaming:
        The renaming dictionary. Maps old variable names to new ones.
        Variable names not occurring in the dictionary ar...
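The same recursive-rename idea can be shown on a toy expression tree. This is not the library's `Expression`/`Operation` API; it is a self-contained sketch where a variable is a plain string and an operation is a tuple of operator and operands, illustrating how names absent from the dictionary fall through unchanged:

```python
from typing import Dict, Union

# Toy AST: a variable is a str, an operation is (operator, *operands).
Expr = Union[str, tuple]

def rename_variables(expression: Expr, renaming: Dict[str, str]) -> Expr:
    if isinstance(expression, str):
        # Names missing from the dictionary are left as-is.
        return renaming.get(expression, expression)
    op, *operands = expression
    # Rebuild the operation with recursively renamed operands.
    return (op, *(rename_variables(sub, renaming) for sub in operands))

expr = ('+', 'x', ('*', 'y', 'x'))
print(rename_variables(expr, {'x': 'a'}))  # ('+', 'a', ('*', 'y', 'a'))
```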
def merge(args):
    p = OptionParser(merge.__doc__)
    p.set_outdir(outdir="outdir")
    opts, args = p.parse_args(args)
    if len(args) < 1:
        sys.exit(not p.print_help())
    folders = args
    outdir = opts.outdir
    mkdir(outdir)
    files = flatten(glob("{0}/*.*.fastq".format(x)) for x in folders)
    fi...
%prog merge folder1 ...

Consolidate split contents in the folders. The folders can be generated by
the split() process and several samples may be in separate fastq files.
This program merges them.
def _stop_ubridge(self):
    if self._ubridge_hypervisor and self._ubridge_hypervisor.is_running():
        log.info("Stopping uBridge hypervisor {}:{}".format(self._ubridge_hypervisor.host,
                                                          self._ubridge_hypervisor.port))
        yield from self._ubridge_hypervisor.stop()
    self._ubridge_hypervisor =...
Stops uBridge.
def is_deb_package_installed(pkg):
    with settings(hide('warnings', 'running', 'stdout', 'stderr'),
                  warn_only=True, capture=True):
        result = sudo('dpkg-query -l "%s" | grep -q ^.i' % pkg)
        return not bool(result.return_code)
Checks whether a particular deb package is installed.
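The `grep -q ^.i` in the shell pipeline matches `dpkg-query -l` status lines whose second character is `i`, i.e. packages in the installed state (e.g. `ii`). A small pure-Python sketch of that filter, applied to sample output strings (the function name and samples are hypothetical):

```python
import re

def looks_installed(dpkg_l_output: str) -> bool:
    # `dpkg-query -l` prints status lines like "ii  curl ..."; the shell
    # pipeline above greps for ^.i, i.e. an 'i' (installed) in column two.
    return any(re.match(r'^.i', line) for line in dpkg_l_output.splitlines())

print(looks_installed("ii  curl  7.68.0  amd64  command line tool\n"))      # True
print(looks_installed("un  foo  <none>  <none>  (no description)\n"))       # False
```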
def numbering_part(self):
    try:
        return self.part_related_by(RT.NUMBERING)
    except KeyError:
        numbering_part = NumberingPart.new()
        self.relate_to(numbering_part, RT.NUMBERING)
        return numbering_part
A |NumberingPart| object providing access to the numbering definitions for this document. Creates an empty numbering part if one is not present.
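The try/except around `part_related_by` is a lazy get-or-create pattern: return the existing part, or build and register one on the first `KeyError`. A self-contained sketch with a hypothetical `PartStore` standing in for the document's related-parts lookup:

```python
class PartStore:
    """Minimal stand-in for the document's related-parts lookup."""
    def __init__(self):
        self._parts = {}

    def part_related_by(self, reltype):
        return self._parts[reltype]       # raises KeyError when absent

    def relate_to(self, part, reltype):
        self._parts[reltype] = part

def numbering_part(store, reltype="numbering"):
    # Same pattern: return the existing part, or lazily create,
    # register, and return a fresh one on first access.
    try:
        return store.part_related_by(reltype)
    except KeyError:
        part = {"kind": "numbering"}      # stand-in for NumberingPart.new()
        store.relate_to(part, reltype)
        return part

store = PartStore()
first = numbering_part(store)
second = numbering_part(store)
print(first is second)  # True: created once, then reused
```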
def solidangle_errorprop(twotheta, dtwotheta, sampletodetectordistance,
                         dsampletodetectordistance, pixelsize=None):
    SAC = solidangle(twotheta, sampletodetectordistance, pixelsize)
    if pixelsize is None:
        pixelsize = 1
    return (SAC, (sampletodetectordistance * (4 * dsampletodetectordistance ...
Solid-angle correction for two-dimensional SAS images with error propagation

Inputs:
    twotheta: matrix of two-theta values
    dtwotheta: matrix of absolute error of two-theta values
    sampletodetectordistance: sample-to-detector distance
    dsampletodetectordistance: absolute error of sample...
async def process_name(message: types.Message, state: FSMContext):
    async with state.proxy() as data:
        data['name'] = message.text
    await Form.next()
    await message.reply("How old are you?")
Process user name
def _update(self, data):
    self.bullet = data['bullet']
    self.level = data['level']
    self.text = WikiText(data['text_raw'], data['text_rendered'])
Update the line using the blob of json-parsed data directly from the API.
def load_probe(name):
    if op.exists(name):
        path = name
    else:
        curdir = op.realpath(op.dirname(__file__))
        path = op.join(curdir, 'probes/{}.prb'.format(name))
    if not op.exists(path):
        raise IOError("The probe `{}` cannot be found.".format(name))
    return MEA(probe=_read_python(...
Load one of the built-in probes.
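The lookup order — treat `name` as a path if it exists, otherwise fall back to a bundled `probes/<name>.prb` file — is a common resolution pattern. A self-contained sketch using a temporary directory in place of the package's `probes/` folder (the helper name is hypothetical):

```python
import os.path as op
import tempfile

def resolve_probe_path(name, builtin_dir):
    # If `name` is an existing file, use it directly; otherwise fall back
    # to `<builtin_dir>/<name>.prb`, mirroring the lookup above.
    if op.exists(name):
        return name
    path = op.join(builtin_dir, '{}.prb'.format(name))
    if not op.exists(path):
        raise IOError("The probe `{}` cannot be found.".format(name))
    return path

with tempfile.TemporaryDirectory() as d:
    builtin = op.join(d, 'demo.prb')
    open(builtin, 'w').close()
    print(resolve_probe_path('demo', d) == builtin)   # True: by name
    print(resolve_probe_path(builtin, d) == builtin)  # True: by path
```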
def release(self, force=False):
    D = self.__class__
    collection = self.get_collection()
    identity = self.Lock()
    query = D.id == self
    if not force:
        query &= D.lock.instance == identity.instance
    previous = collection.find_one_and_update(query, {'$unset': {~D.lock: True}}, {~D.lock: True})
    if previous is Non...
Release an exclusive lock on this integration task. Unless forcing, if we are not the current owners of the lock a Locked exception will be raised.