sentence1: string (lengths 52 to 3.87M)
sentence2: string (lengths 1 to 47.2k)
label: string (1 class)
def set_cookie(self, name, value, secret=None, **options): ''' Create a new cookie or replace an old one. If the `secret` parameter is set, create a `Signed Cookie` (described below). :param name: the name of the cookie. :param value: the value of the cookie. :param secret: a signature key required for signed cookies. Additionally, this method accepts all RFC 2109 attributes that are supported by :class:`cookie.Morsel`, including: :param max_age: maximum age in seconds. (default: None) :param expires: a datetime object or UNIX timestamp. (default: None) :param domain: the domain that is allowed to read the cookie. (default: current domain) :param path: limits the cookie to a given path (default: current path) :param secure: limit the cookie to HTTPS connections (default: off). :param httponly: prevents client-side JavaScript from reading this cookie (default: off, requires Python 2.6 or newer). If neither `expires` nor `max_age` is set (default), the cookie will expire at the end of the browser session (as soon as the browser window is closed). Signed cookies may store any pickle-able object and are cryptographically signed to prevent manipulation. Keep in mind that cookies are limited to 4kb in most browsers. Warning: Signed cookies are not encrypted (the client can still see the content) and not copy-protected (the client can restore an old cookie). The main intention is to make pickling and unpickling safe, not to store secret information at client side.
''' if not self._cookies: self._cookies = SimpleCookie() if secret: value = touni(cookie_encode((name, value), secret)) elif not isinstance(value, basestring): raise TypeError('Secret key missing for non-string Cookie.') if len(value) > 4096: raise ValueError('Cookie value too long.') self._cookies[name] = value for key, value in options.items(): if key == 'max_age': if isinstance(value, timedelta): value = value.seconds + value.days * 24 * 3600 if key == 'expires': if isinstance(value, (datedate, datetime)): value = value.timetuple() elif isinstance(value, (int, float)): value = time.gmtime(value) value = time.strftime("%a, %d %b %Y %H:%M:%S GMT", value) self._cookies[name][key.replace('_', '-')] = value
Create a new cookie or replace an old one. If the `secret` parameter is set, create a `Signed Cookie` (described below). :param name: the name of the cookie. :param value: the value of the cookie. :param secret: a signature key required for signed cookies. Additionally, this method accepts all RFC 2109 attributes that are supported by :class:`cookie.Morsel`, including: :param max_age: maximum age in seconds. (default: None) :param expires: a datetime object or UNIX timestamp. (default: None) :param domain: the domain that is allowed to read the cookie. (default: current domain) :param path: limits the cookie to a given path (default: current path) :param secure: limit the cookie to HTTPS connections (default: off). :param httponly: prevents client-side JavaScript from reading this cookie (default: off, requires Python 2.6 or newer). If neither `expires` nor `max_age` is set (default), the cookie will expire at the end of the browser session (as soon as the browser window is closed). Signed cookies may store any pickle-able object and are cryptographically signed to prevent manipulation. Keep in mind that cookies are limited to 4kb in most browsers. Warning: Signed cookies are not encrypted (the client can still see the content) and not copy-protected (the client can restore an old cookie). The main intention is to make pickling and unpickling safe, not to store secret information at client side.
entailment
def decode(self, encoding=None): ''' Returns a copy with all keys and values de- or recoded to match :attr:`input_encoding`. Some libraries (e.g. WTForms) want a unicode dictionary. ''' copy = FormsDict() enc = copy.input_encoding = encoding or self.input_encoding copy.recode_unicode = False for key, value in self.allitems(): copy.append(self._fix(key, enc), self._fix(value, enc)) return copy
Returns a copy with all keys and values de- or recoded to match :attr:`input_encoding`. Some libraries (e.g. WTForms) want a unicode dictionary.
entailment
def getunicode(self, name, default=None, encoding=None): ''' Return the value as a unicode string, or the default. ''' try: return self._fix(self[name], encoding) except (UnicodeError, KeyError): return default
Return the value as a unicode string, or the default.
entailment
def load_config(self, filename): ''' Load values from an *.ini style config file. If the config file contains sections, their names are used as namespaces for the values within. The two special sections ``DEFAULT`` and ``bottle`` refer to the root namespace (no prefix). ''' conf = ConfigParser() conf.read(filename) for section in conf.sections(): for key, value in conf.items(section): if section not in ('DEFAULT', 'bottle'): key = section + '.' + key self[key] = value return self
Load values from an *.ini style config file. If the config file contains sections, their names are used as namespaces for the values within. The two special sections ``DEFAULT`` and ``bottle`` refer to the root namespace (no prefix).
entailment
def load_dict(self, source, namespace='', make_namespaces=False): ''' Import values from a dictionary structure. Nesting can be used to represent namespaces. >>> ConfigDict().load_dict({'name': {'space': {'key': 'value'}}}) {'name.space.key': 'value'} ''' stack = [(namespace, source)] while stack: prefix, source = stack.pop() if not isinstance(source, dict): raise TypeError('Source is not a dict (%r)' % type(source)) for key, value in source.items(): if not isinstance(key, str): raise TypeError('Key is not a string (%r)' % type(key)) full_key = prefix + '.' + key if prefix else key if isinstance(value, dict): stack.append((full_key, value)) if make_namespaces: self[full_key] = self.Namespace(self, full_key) else: self[full_key] = value return self
Import values from a dictionary structure. Nesting can be used to represent namespaces. >>> ConfigDict().load_dict({'name': {'space': {'key': 'value'}}}) {'name.space.key': 'value'}
entailment
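The dotted-key flattening that `load_dict` performs can be sketched standalone, without the surrounding `ConfigDict` machinery (the name `flatten_dict` is mine, for illustration):

```python
def flatten_dict(source, prefix=''):
    """Flatten a nested dict into {'a.b.c': value} form, as load_dict does."""
    result = {}
    stack = [(prefix, source)]
    while stack:
        pfx, src = stack.pop()
        for key, value in src.items():
            full_key = pfx + '.' + key if pfx else key
            if isinstance(value, dict):
                # Defer nested dicts; their keys get the current prefix
                stack.append((full_key, value))
            else:
                result[full_key] = value
    return result

print(flatten_dict({'name': {'space': {'key': 'value'}}}))
# -> {'name.space.key': 'value'}
```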
def meta_get(self, key, metafield, default=None): ''' Return the value of a meta field for a key. ''' return self._meta.get(key, {}).get(metafield, default)
Return the value of a meta field for a key.
entailment
def meta_set(self, key, metafield, value): ''' Set the meta field for a key to a new value. This triggers the on-change handler for existing keys. ''' self._meta.setdefault(key, {})[metafield] = value if key in self: self[key] = self[key]
Set the meta field for a key to a new value. This triggers the on-change handler for existing keys.
entailment
def lookup(self, name): ''' Search for a resource and return an absolute file path, or `None`. The :attr:`path` list is searched in order. The first match is returned. Symlinks are followed. The result is cached to speed up future lookups. ''' if name not in self.cache or DEBUG: for path in self.path: fpath = os.path.join(path, name) if os.path.isfile(fpath): if self.cachemode in ('all', 'found'): self.cache[name] = fpath return fpath if self.cachemode == 'all': self.cache[name] = None return self.cache[name]
Search for a resource and return an absolute file path, or `None`. The :attr:`path` list is searched in order. The first match is returned. Symlinks are followed. The result is cached to speed up future lookups.
entailment
def open(self, name, mode='r', *args, **kwargs): ''' Find a resource and return a file object, or raise IOError. ''' fname = self.lookup(name) if not fname: raise IOError("Resource %r not found." % name) return self.opener(fname, mode=mode, *args, **kwargs)
Find a resource and return a file object, or raise IOError.
entailment
def save(self, destination, overwrite=False, chunk_size=2**16): ''' Save file to disk or copy its content to an open file(-like) object. If *destination* is a directory, :attr:`filename` is added to the path. Existing files are not overwritten by default (IOError). :param destination: File path, directory or file(-like) object. :param overwrite: If True, replace existing files. (default: False) :param chunk_size: Bytes to read at a time. (default: 64kb) ''' if isinstance(destination, basestring): # Except file-likes here if os.path.isdir(destination): destination = os.path.join(destination, self.filename) if not overwrite and os.path.exists(destination): raise IOError('File exists.') with open(destination, 'wb') as fp: self._copy_file(fp, chunk_size) else: self._copy_file(destination, chunk_size)
Save file to disk or copy its content to an open file(-like) object. If *destination* is a directory, :attr:`filename` is added to the path. Existing files are not overwritten by default (IOError). :param destination: File path, directory or file(-like) object. :param overwrite: If True, replace existing files. (default: False) :param chunk_size: Bytes to read at a time. (default: 64kb)
entailment
def render(self, *args, **kwargs): """ Render the template using keyword arguments as local variables. """ env = {}; stdout = [] for dictarg in args: env.update(dictarg) env.update(kwargs) self.execute(stdout, env) return ''.join(stdout)
Render the template using keyword arguments as local variables.
entailment
def convert_values(args_list): """convert_value in bulk. :param args_list: list of value, source, target currency pairs :return: map of converted values """ rate_map = get_rates(map(itemgetter(1, 2), args_list)) value_map = {} for value, source, target in args_list: args = (value, source, target) if source == target: value_map[args] = value else: value_map[args] = value * rate_map[(source, target)] return value_map
convert_value in bulk. :param args_list: list of value, source, target currency pairs :return: map of converted values
entailment
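A minimal sketch of the bulk-conversion logic above, with a hypothetical in-memory `rate_map` passed in to stand in for the external `get_rates` lookup:

```python
def convert_values(args_list, rate_map):
    """Convert (value, source, target) triples in bulk using a rate lookup."""
    value_map = {}
    for value, source, target in args_list:
        args = (value, source, target)
        if source == target:
            # No lookup needed when the currencies match
            value_map[args] = value
        else:
            value_map[args] = value * rate_map[(source, target)]
    return value_map

# Hypothetical fixed rate, for illustration only
rates = {('USD', 'EUR'): 0.9}
result = convert_values([(100, 'USD', 'EUR'), (50, 'USD', 'USD')], rates)
```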
def convert_value(value, source_currency, target_currency): """Convert a value from one currency to another using exchange rates. :param value: the price value :type value: decimal :param source_currency: source ISO-4217 currency code :type source_currency: str :param target_currency: target ISO-4217 currency code :type target_currency: str :returns: the converted value :rtype: decimal """ # If the source and target currencies are the same, # return the given value as is if source_currency == target_currency: return value rate = get_rate(source_currency, target_currency) return value * rate
Convert a value from one currency to another using exchange rates. :param value: the price value :type value: decimal :param source_currency: source ISO-4217 currency code :type source_currency: str :param target_currency: target ISO-4217 currency code :type target_currency: str :returns: the converted value :rtype: decimal
entailment
def convert(price, currency): """Shorthand function that converts a price object from its source currency to the target currency. :param price: the price instance :type price: ``Price`` :param currency: target ISO-4217 currency code :type currency: str :returns: converted price instance :rtype: ``Price`` """ # convert_value returns the value as is when the source and # target currencies match value = convert_value(price.value, price.currency, currency) return Price(value, currency)
Shorthand function that converts a price object from its source currency to the target currency. :param price: the price instance :type price: ``Price`` :param currency: target ISO-4217 currency code :type currency: str :returns: converted price instance :rtype: ``Price``
entailment
def import_class(class_path): """imports and returns given class string. :param class_path: Class path as string :type class_path: str :returns: Class that has given path :rtype: class :Example: >>> import_class('collections.OrderedDict').__name__ 'OrderedDict' """ try: from django.utils.importlib import import_module module_name = '.'.join(class_path.split(".")[:-1]) mod = import_module(module_name) return getattr(mod, class_path.split(".")[-1]) except Exception as detail: raise ImportError(detail)
imports and returns given class string. :param class_path: Class path as string :type class_path: str :returns: Class that has given path :rtype: class :Example: >>> import_class('collections.OrderedDict').__name__ 'OrderedDict'
entailment
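The original relies on `django.utils.importlib`, which was removed in later Django versions; the same behaviour can be sketched with the standard library alone:

```python
from importlib import import_module

def import_class(class_path):
    """Import and return the class named by a dotted path string."""
    # Split 'pkg.module.Class' into module path and attribute name
    module_name, _, class_name = class_path.rpartition('.')
    try:
        module = import_module(module_name)
        return getattr(module, class_name)
    except (ImportError, AttributeError) as detail:
        raise ImportError(detail)

print(import_class('collections.OrderedDict').__name__)  # -> OrderedDict
```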
def insert_many(objects, using="default"): """Insert list of Django objects in one SQL query. Objects must be of the same Django model. Note that save is not called and signals on the model are not raised. Mostly from: http://people.iola.dk/olau/python/bulkops.py """ if not objects: return import django.db.models from django.db import connections from django.db import transaction con = connections[using] model = objects[0].__class__ fields = [f for f in model._meta.fields if not isinstance(f, django.db.models.AutoField)] parameters = [] for o in objects: params = tuple(f.get_db_prep_save(f.pre_save(o, True), connection=con) for f in fields) parameters.append(params) table = model._meta.db_table column_names = ",".join(con.ops.quote_name(f.column) for f in fields) placeholders = ",".join(("%s",) * len(fields)) con.cursor().executemany("insert into %s (%s) values (%s)" % (table, column_names, placeholders), parameters) transaction.commit_unless_managed(using=using)
Insert list of Django objects in one SQL query. Objects must be of the same Django model. Note that save is not called and signals on the model are not raised. Mostly from: http://people.iola.dk/olau/python/bulkops.py
entailment
def update_many(objects, fields=[], using="default"): """Update list of Django objects in one SQL query, optionally only overwrite the given fields (as names, e.g. fields=["foo"]). Objects must be of the same Django model. Note that save is not called and signals on the model are not raised. Mostly from: http://people.iola.dk/olau/python/bulkops.py """ if not objects: return import django.db.models from django.db import connections from django.db import transaction con = connections[using] names = fields meta = objects[0]._meta fields = [f for f in meta.fields if not isinstance(f, django.db.models.AutoField) and (not names or f.name in names)] if not fields: raise ValueError("No fields to update, field names are %s." % names) fields_with_pk = fields + [meta.pk] parameters = [] for o in objects: parameters.append(tuple(f.get_db_prep_save(f.pre_save(o, True), connection=con) for f in fields_with_pk)) table = meta.db_table assignments = ",".join(("%s=%%s" % con.ops.quote_name(f.column)) for f in fields) con.cursor().executemany("update %s set %s where %s=%%s" % (table, assignments, con.ops.quote_name(meta.pk.column)), parameters) transaction.commit_unless_managed(using=using)
Update list of Django objects in one SQL query, optionally only overwrite the given fields (as names, e.g. fields=["foo"]). Objects must be of the same Django model. Note that save is not called and signals on the model are not raised. Mostly from: http://people.iola.dk/olau/python/bulkops.py
entailment
def memoize(ttl=None): """ Cache the result of the function call with the given args until the ttl (a datetime.timedelta) expires. """ def decorator(obj): cache = obj.cache = {} @functools.wraps(obj) def memoizer(*args, **kwargs): now = datetime.now() key = str(args) + str(kwargs) if key not in cache: cache[key] = (obj(*args, **kwargs), now) value, last_update = cache[key] if ttl and (now - last_update) > ttl: cache[key] = (obj(*args, **kwargs), now) return cache[key][0] return memoizer return decorator
Cache the result of the function call with the given args until the ttl (a datetime.timedelta) expires.
entailment
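A self-contained sketch of the decorator above in use, showing that the second call with the same argument is served from the cache:

```python
import functools
from datetime import datetime, timedelta

def memoize(ttl=None):
    """Cache results per argument signature; recompute once ttl has elapsed."""
    def decorator(obj):
        cache = obj.cache = {}
        @functools.wraps(obj)
        def memoizer(*args, **kwargs):
            now = datetime.now()
            key = str(args) + str(kwargs)
            if key not in cache:
                cache[key] = (obj(*args, **kwargs), now)
            value, last_update = cache[key]
            if ttl and (now - last_update) > ttl:
                # Entry is stale: recompute and refresh the timestamp
                cache[key] = (obj(*args, **kwargs), now)
            return cache[key][0]
        return memoizer
    return decorator

calls = []

@memoize(ttl=timedelta(minutes=5))
def square(n):
    calls.append(n)
    return n * n

square(4)
square(4)
print(len(calls))  # -> 1: the second call never reached the function body
```

Note that the cache is unbounded and keyed on the string form of the arguments, so unhashable or oddly-printing arguments can collide; the original shares this trait.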
def round_to_quarter(start_time, end_time): """ Return the end of the interval between the `start_time` and `end_time` :class:`datetime.time` objects, with the duration rounded up to the next quarter hour. """ # We don't care about the date (only about the time) but Python # can substract only datetime objects, not time ones today = datetime.date.today() start_date = datetime.datetime.combine(today, start_time) end_date = datetime.datetime.combine(today, end_time) difference_minutes = (end_date - start_date).seconds / 60 remainder = difference_minutes % 15 # Round up difference_minutes += 15 - remainder if remainder > 0 else 0 return ( start_date + datetime.timedelta(minutes=difference_minutes) ).time()
Return the end of the interval between the `start_time` and `end_time` :class:`datetime.time` objects, with the duration rounded up to the next quarter hour.
entailment
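A standalone sketch of the rounding above, usable without the rest of the module:

```python
import datetime

def round_to_quarter(start_time, end_time):
    """Round end_time up so the interval from start_time is a multiple of 15 min."""
    # time objects cannot be subtracted directly, so anchor both on today's date
    today = datetime.date.today()
    start_date = datetime.datetime.combine(today, start_time)
    end_date = datetime.datetime.combine(today, end_time)
    difference_minutes = (end_date - start_date).seconds // 60
    remainder = difference_minutes % 15
    if remainder > 0:
        difference_minutes += 15 - remainder  # round up to the next quarter
    return (start_date + datetime.timedelta(minutes=difference_minutes)).time()

print(round_to_quarter(datetime.time(9, 0), datetime.time(9, 50)))  # -> 10:00:00
```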
def _timesheets_callback(self, callback): """ Call a method on all the timesheets, aggregate the return values in a list and return it. """ def call(*args, **kwargs): return_values = [] for timesheet in self: attr = getattr(timesheet, callback) if callable(attr): result = attr(*args, **kwargs) else: result = attr return_values.append(result) return return_values return call
Call a method on all the timesheets, aggregate the return values in a list and return it.
entailment
def load(cls, file_pattern, nb_previous_files=1, parser=None): """ Load a collection of timesheets from the given `file_pattern`. `file_pattern` is a path to a timesheet file that will be expanded with :func:`datetime.date.strftime` and the current date. `nb_previous_files` is the number of other timesheets to load; depending on `file_pattern` this will result in either the timesheet from the previous month or from the previous year being loaded. If `parser` is not set, a default :class:`taxi.timesheet.parser.TimesheetParser` will be used. """ if not parser: parser = TimesheetParser() timesheet_files = cls.get_files(file_pattern, nb_previous_files) timesheet_collection = cls() for file_path in timesheet_files: try: timesheet = Timesheet.load( file_path, parser=parser, initial=lambda: timesheet_collection.get_new_timesheets_contents() ) except ParseError as e: e.file = file_path raise timesheet_collection.timesheets.append(timesheet) # Fix `add_date_to_bottom` attribute of timesheet entries based on # previous timesheets. When a new timesheet is started it won't have # any direction defined, so we take the one from the previous # timesheet, if any if parser.add_date_to_bottom is None: previous_timesheet = None for timesheet in timesheet_collection.timesheets: if previous_timesheet: previous_timesheet_top_down = previous_timesheet.entries.is_top_down() if previous_timesheet_top_down is not None: timesheet.entries.parser.add_date_to_bottom = previous_timesheet_top_down previous_timesheet = timesheet return timesheet_collection
Load a collection of timesheets from the given `file_pattern`. `file_pattern` is a path to a timesheet file that will be expanded with :func:`datetime.date.strftime` and the current date. `nb_previous_files` is the number of other timesheets to load; depending on `file_pattern` this will result in either the timesheet from the previous month or from the previous year being loaded. If `parser` is not set, a default :class:`taxi.timesheet.parser.TimesheetParser` will be used.
entailment
def get_files(cls, file_pattern, nb_previous_files, from_date=None): """ Return an :class:`~taxi.utils.structures.OrderedSet` of file paths expanded from `filename`, with a maximum of `nb_previous_files`. See :func:`taxi.utils.file.expand_date` for more information about filename expansion. If `from_date` is set, it will be used as a starting date instead of the current date. """ date_units = ['m', 'Y'] smallest_unit = None if not from_date: from_date = datetime.date.today() for date_unit in date_units: if ('%' + date_unit) in file_pattern: smallest_unit = date_unit break if smallest_unit is None: return OrderedSet([file_pattern]) files = OrderedSet() for i in range(nb_previous_files, -1, -1): if smallest_unit == 'm': file_date = months_ago(from_date, i) elif smallest_unit == 'Y': file_date = from_date.replace(day=1, year=from_date.year - i) files.add(file_utils.expand_date(file_pattern, file_date)) return files
Return an :class:`~taxi.utils.structures.OrderedSet` of file paths expanded from `filename`, with a maximum of `nb_previous_files`. See :func:`taxi.utils.file.expand_date` for more information about filename expansion. If `from_date` is set, it will be used as a starting date instead of the current date.
entailment
def get_new_timesheets_contents(self): """ Return the initial text to be inserted in new timesheets. """ popular_aliases = self.get_popular_aliases() template = ['# Recently used aliases:'] if popular_aliases: contents = '\n'.join(template + ['# ' + entry for entry, usage in popular_aliases]) else: contents = '' return contents
Return the initial text to be inserted in new timesheets.
entailment
def entries(self): """ Return the entries (as a {date: entries} dict) of all timesheets in the collection. """ entries_list = self._timesheets_callback('entries')() return reduce(lambda x, y: x + y, entries_list)
Return the entries (as a {date: entries} dict) of all timesheets in the collection.
entailment
def get_popular_aliases(self, *args, **kwargs): """ Return the aggregated results of :meth:`Timesheet.get_popular_aliases`. """ aliases_count_total = defaultdict(int) aliases_counts = self._timesheets_callback('get_popular_aliases')(*args, **kwargs) for aliases_count in aliases_counts: for alias, count in aliases_count: aliases_count_total[alias] += count sorted_aliases_count_total = sorted(aliases_count_total.items(), key=lambda item: item[1], reverse=True) return sorted_aliases_count_total
Return the aggregated results of :meth:`Timesheet.get_popular_aliases`.
entailment
def show(ctx, search): """ Resolves any object passed to it (aliases, projects, etc). This will resolve the following: \b - Aliases (my_alias) - Mappings (123/456) - Project ids (123) """ matches = {'aliases': [], 'mappings': [], 'projects': []} projects_db = ctx.obj['projects_db'] matches = get_alias_matches(search, matches) matches = get_mapping_matches(search, matches, projects_db) matches = get_project_matches(search, matches, projects_db) ctx.obj['view'].show_command_results(search, matches, projects_db)
Resolves any object passed to it (aliases, projects, etc). This will resolve the following: \b - Aliases (my_alias) - Mappings (123/456) - Project ids (123)
entailment
def status(ctx, date, f, pushed): """ Shows the summary of what's going to be committed to the server. """ try: timesheet_collection = get_timesheet_collection_for_context(ctx, f) except ParseError as e: ctx.obj['view'].err(e) else: ctx.obj['view'].show_status( timesheet_collection.entries.filter( date, regroup=ctx.obj['settings']['regroup_entries'], pushed=False if not pushed else None ) )
Shows the summary of what's going to be committed to the server.
entailment
def edit(ctx, file_to_edit, previous_file): """ Opens your timesheet file in your favourite editor. The PREVIOUS_FILE argument can be used to specify which nth previous file to edit. A value of 1 will edit the previous file, 2 will edit the second-previous file, etc. """ timesheet_collection = None autofill = not bool(file_to_edit) and previous_file == 0 if not file_to_edit: file_to_edit = ctx.obj['settings'].get_entries_file_path(False) # If the file was not specified and if it's the current file, autofill it if autofill: try: timesheet_collection = get_timesheet_collection_for_context( ctx, file_to_edit ) except ParseError: pass else: t = timesheet_collection.latest() if ctx.obj['settings']['auto_add'] != Settings.AUTO_ADD_OPTIONS['NO']: auto_fill_days = ctx.obj['settings']['auto_fill_days'] if auto_fill_days: t.prefill(auto_fill_days, limit=None) t.save() # Get the path to the file we should open in the editor timesheet_files = list(reversed(TimesheetCollection.get_files(file_to_edit, previous_file))) if previous_file >= len(timesheet_files): ctx.fail("Couldn't find the requested previous file for `%s`." % file_to_edit) expanded_file_to_edit = list(timesheet_files)[previous_file] editor = ctx.obj['settings']['editor'] edit_kwargs = { 'filename': expanded_file_to_edit, 'extension': '.tks' } if editor: edit_kwargs['editor'] = editor click.edit(**edit_kwargs) try: # Show the status only for the given file if it was specified with the # --file option, or for the files specified in the settings otherwise timesheet_collection = get_timesheet_collection_for_context( ctx, file_to_edit ) except ParseError as e: ctx.obj['view'].err(e) else: ctx.obj['view'].show_status( timesheet_collection.entries.filter(regroup=True, pushed=False) )
Opens your timesheet file in your favourite editor. The PREVIOUS_FILE argument can be used to specify which nth previous file to edit. A value of 1 will edit the previous file, 2 will edit the second-previous file, etc.
entailment
def get_reversed_aliases(self): """ Return the reversed aliases dict. Instead of being in the form {'alias': mapping}, the dict is in the form {mapping: 'alias'}. """ return dict((v, k) for k, v in six.iteritems(self.aliases))
Return the reversed aliases dict. Instead of being in the form {'alias': mapping}, the dict is in the form {mapping: 'alias'}.
entailment
def filter_from_mapping(self, mapping, backend=None): """ Return mappings that either exactly correspond to the given `mapping` tuple, or, if the second item of `mapping` is `None`, include mappings that only match the first item of `mapping` (useful to show all mappings for a given project). """ def mapping_filter(key_item): key, item = key_item return ( (mapping is None or item.mapping == mapping or (mapping[1] is None and item.mapping is not None and item.mapping[0] == mapping[0])) and (backend is None or item.backend == backend) ) items = [item for item in six.iteritems(self) if mapping_filter(item)] aliases = collections.OrderedDict( sorted(items, key=lambda alias: alias[1].mapping if alias[1] is not None else (0, 0)) ) return aliases
Return mappings that either exactly correspond to the given `mapping` tuple, or, if the second item of `mapping` is `None`, include mappings that only match the first item of `mapping` (useful to show all mappings for a given project).
entailment
def filter_from_alias(self, alias, backend=None): """ Return aliases that start with the given `alias`, optionally filtered by backend. """ def alias_filter(key_item): key, item = key_item return ((alias is None or alias in key) and (backend is None or item.backend == backend)) items = six.moves.filter(alias_filter, six.iteritems(self)) aliases = collections.OrderedDict(sorted(items, key=lambda a: a[0].lower())) return aliases
Return aliases that start with the given `alias`, optionally filtered by backend.
entailment
def clean_aliases(ctx, force_yes): """ Removes aliases from your config file that point to inactive projects. """ inactive_aliases = [] for (alias, mapping) in six.iteritems(aliases_database): # Ignore local aliases if mapping.mapping is None: continue project = ctx.obj['projects_db'].get(mapping.mapping[0], mapping.backend) if (project is None or not project.is_active() or (mapping.mapping[1] is not None and project.get_activity(mapping.mapping[1]) is None)): inactive_aliases.append(((alias, mapping), project)) if not inactive_aliases: ctx.obj['view'].msg("No inactive aliases found.") return if not force_yes: confirm = ctx.obj['view'].clean_inactive_aliases(inactive_aliases) if force_yes or confirm: ctx.obj['settings'].remove_aliases( [item[0] for item in inactive_aliases] ) ctx.obj['settings'].write_config() ctx.obj['view'].msg("%d inactive aliases have been successfully" " cleaned." % len(inactive_aliases))
Removes aliases from your config file that point to inactive projects.
entailment
def get_plugin_info(plugin): """ Fetch information about the given package on PyPI and return it as a dict. If the package cannot be found on PyPI, :exc:`NameError` will be raised. """ url = 'https://pypi.python.org/pypi/{}/json'.format(plugin) try: resp = request.urlopen(url) except HTTPError as e: if e.code == 404: raise NameError("Plugin {} could not be found.".format(plugin)) else: raise ValueError( "Checking plugin status on {} returned HTTP code {}".format( url, e.code ) ) try: json_resp = json.loads(resp.read().decode()) # Catch ValueError instead of JSONDecodeError which is only available in # Python 3.5+ except ValueError: raise ValueError( "Could not decode JSON info for plugin at {}".format(url) ) return json_resp['info']
Fetch information about the given package on PyPI and return it as a dict. If the package cannot be found on PyPI, :exc:`NameError` will be raised.
entailment
def get_installed_plugins(): """ Return a dict {plugin_name: version} of installed plugins. """ return { # Strip the first five characters from the plugin name since all # plugins are expected to start with `taxi-` backend.dist.project_name[5:]: backend.dist.version for backend in backends_registry._entry_points.values() if backend.dist.project_name != 'taxi' }
Return a dict {plugin_name: version} of installed plugins.
entailment
def list_(ctx): """ Lists installed plugins. """ plugins = plugins_registry.get_plugins() click.echo("\n".join( ["%s (%s)" % p for p in plugins.items()] ))
Lists installed plugins.
entailment
def install(ctx, plugin): """ Install the given plugin. """ ensure_inside_venv(ctx) plugin_name = get_plugin_name(plugin) try: info = get_plugin_info(plugin_name) except NameError: echo_error("Plugin {} could not be found.".format(plugin)) sys.exit(1) except ValueError as e: echo_error("Unable to retrieve plugin info. " "Error was:\n\n {}".format(e)) sys.exit(1) try: installed_version = pkg_resources.get_distribution(plugin_name).version except pkg_resources.DistributionNotFound: installed_version = None if installed_version is not None and info['version'] == installed_version: click.echo("You already have the latest version of {} ({}).".format( plugin, info['version'] )) return pinned_plugin = '{0}=={1}'.format(plugin_name, info['version']) try: run_command([sys.executable, '-m', 'pip', 'install', pinned_plugin]) except subprocess.CalledProcessError as e: echo_error("Error when trying to install plugin {}. " "Error was:\n\n {}".format(plugin, e)) sys.exit(1) echo_success("Plugin {} {} installed successfully.".format( plugin, info['version'] ))
Install the given plugin.
entailment
def uninstall(ctx, plugin): """ Uninstall the given plugin. """ ensure_inside_venv(ctx) if plugin not in get_installed_plugins(): echo_error("Plugin {} does not seem to be installed.".format(plugin)) sys.exit(1) plugin_name = get_plugin_name(plugin) try: run_command([sys.executable, '-m', 'pip', 'uninstall', '-y', plugin_name]) except subprocess.CalledProcessError as e: echo_error( "Error when trying to uninstall plugin {}. Error message " "was:\n\n{}".format(plugin, e.output.decode()) ) sys.exit(1) else: echo_success("Plugin {} uninstalled successfully.".format(plugin))
Uninstall the given plugin.
entailment
def hours(self): """ Return the number of hours this entry has lasted. If the duration is a tuple with a start and an end time, the difference between the two times will be calculated. If the duration is a number, it will be returned as-is. """ if not isinstance(self.duration, tuple): return self.duration if self.duration[1] is None: return 0 time_start = self.get_start_time() # This can happen if the previous entry has a non-tuple duration # and the current entry has a tuple duration without a start time if time_start is None: return 0 now = datetime.datetime.now() time_start = now.replace( hour=time_start.hour, minute=time_start.minute, second=0 ) time_end = now.replace( hour=self.duration[1].hour, minute=self.duration[1].minute, second=0 ) total_time = time_end - time_start total_hours = total_time.seconds / 3600.0 return total_hours
Return the number of hours this entry has lasted. If the duration is a tuple with a start and an end time, the difference between the two times will be calculated. If the duration is a number, it will be returned as-is.
entailment
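The core time arithmetic in `hours` (anchoring two `datetime.time` values on today's date so they can be subtracted) can be sketched in isolation:

```python
import datetime

def duration_hours(start, end):
    """Hours between two datetime.time values on the same day."""
    # time objects cannot be subtracted directly; combine with a date first
    today = datetime.date.today()
    delta = (datetime.datetime.combine(today, end)
             - datetime.datetime.combine(today, start))
    return delta.seconds / 3600.0

print(duration_hours(datetime.time(9, 0), datetime.time(10, 30)))  # -> 1.5
```

As in the original, an `end` earlier than `start` is not handled specially: the negative timedelta's `.seconds` wraps around the day.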
def get_start_time(self): """ Return the start time of the entry as a :class:`datetime.time` object. If the start time is `None`, the end time of the previous entry will be returned instead. If the current entry doesn't have a duration in the form of a tuple, if there's no previous entry or if the previous entry has no end time, the value `None` will be returned. """ if not isinstance(self.duration, tuple): return None if self.duration[0] is not None: return self.duration[0] else: if (self.previous_entry and isinstance(self.previous_entry.duration, tuple) and self.previous_entry.duration[1] is not None): return self.previous_entry.duration[1] return None
Return the start time of the entry as a :class:`datetime.time` object. If the start time is `None`, the end time of the previous entry will be returned instead. If the current entry doesn't have a duration in the form of a tuple, if there's no previous entry or if the previous entry has no end time, the value `None` will be returned.
entailment
def hash(self): """ Return a value that's used to uniquely identify an entry in a date so we can regroup all entries that share the same hash. """ return u''.join([ self.alias, self.description, str(self.ignored), str(self.flags), ])
Return a value that's used to uniquely identify an entry in a date so we can regroup all entries that share the same hash.
entailment
def add_flag(self, flag): """ Add flag to the flags and memorize this attribute has changed so we can regenerate it when outputting text. """ super(Entry, self).add_flag(flag) self._changed_attrs.add('flags')
Add flag to the flags and memorize this attribute has changed so we can regenerate it when outputting text.
entailment
def remove_flag(self, flag): """ Remove flag to the flags and memorize this attribute has changed so we can regenerate it when outputting text. """ super(Entry, self).remove_flag(flag) self._changed_attrs.add('flags')
Remove flag to the flags and memorize this attribute has changed so we can regenerate it when outputting text.
entailment
def add_entry(self, date, entry): """ Add the given entry to the textual representation. """ in_date = False insert_at = 0 for (lineno, line) in enumerate(self.lines): # Search for the date of the entry if isinstance(line, DateLine) and line.date == date: in_date = True # Insert here if there is no existing Entry for this date insert_at = lineno continue if in_date: if isinstance(line, Entry): insert_at = lineno elif isinstance(line, DateLine): break self.lines.insert(insert_at + 1, entry) # If there's no other Entry in the current date, add a blank line # between the date and the entry if not isinstance(self.lines[insert_at], Entry): self.lines.insert(insert_at + 1, TextLine(''))
Add the given entry to the textual representation.
entailment
def delete_entries(self, entries): """ Remove the given entries from the textual representation. """ self.lines = trim([ line for line in self.lines if not isinstance(line, Entry) or line not in entries ])
Remove the given entries from the textual representation.
entailment
def delete_date(self, date): """ Remove the date line from the textual representation. This doesn't remove any entry line. """ self.lines = [ line for line in self.lines if not isinstance(line, DateLine) or line.date != date ] self.lines = trim(self.lines)
Remove the date line from the textual representation. This doesn't remove any entry line.
entailment
def add_date(self, date): """ Add the given date to the textual representation. """ self.lines = self.parser.add_date(date, self.lines)
Add the given date to the textual representation.
entailment
def init_from_str(self, entries): """ Initialize the structured and textual data based on a string representing the entries. For detailed information about the format of this string, refer to the :func:`~taxi.timesheet.parser.parse_text` function. """ self.lines = self.parser.parse_text(entries) for line in self.lines: if isinstance(line, DateLine): current_date = line.date self[current_date] = self.default_factory(self, line.date) elif isinstance(line, Entry): if len(self[current_date]) > 0: line.previous_entry = self[current_date][-1] self[current_date][-1].next_entry = line self[current_date].append(line)
Initialize the structured and textual data based on a string representing the entries. For detailed information about the format of this string, refer to the :func:`~taxi.timesheet.parser.parse_text` function.
entailment
def filter(self, date=None, regroup=False, ignored=None, pushed=None, unmapped=None, current_workday=None): """ Return the entries as a dict of {:class:`datetime.date`: :class:`~taxi.timesheet.lines.Entry`} items. `date` can either be a single :class:`datetime.date` object to filter only entries from the given date, or a tuple of :class:`datetime.date` objects representing `(from, to)`. The `ignored`, `pushed` and `unmapped` keyword arguments filter entries on the corresponding attributes; `current_workday` restricts entries to the current or previous working day. If `regroup` is set to True, similar entries (ie. having the same :meth:`~taxi.timesheet.lines.Entry.hash`) will be regrouped into a single :class:`~taxi.timesheet.entry.AggregatedTimesheetEntry`. """ def entry_filter(entry_date, entry): if ignored is not None and entry.ignored != ignored: return False if pushed is not None and entry.pushed != pushed: return False if unmapped is not None and entry.mapped == unmapped: return False if current_workday is not None: today = datetime.date.today() yesterday = date_utils.get_previous_working_day(today) is_current_workday = entry_date in (today, yesterday) and entry_date.strftime('%w') not in ('6', '0') if current_workday != is_current_workday: return False return True # Date can either be a single date (only 1 day) or a tuple for a # date range if date is not None and not isinstance(date, tuple): date = (date, date) filtered_entries = collections.defaultdict(list) for (entries_date, entries) in six.iteritems(self): if (date is not None and ( (date[0] is not None and entries_date < date[0]) or (date[1] is not None and entries_date > date[1]))): continue entries_for_date = [] if regroup: # This is a mapping between entries hashes and their # position in the entries_for_date list aggregated_entries = {} id = 0 for entry in entries: if not entry_filter(entries_date, entry): continue # Common case: the entry is not yet referenced in the # aggregated_entries dict if entry.hash not in aggregated_entries: # In that case, put it normally in the entries_for_date # list. It will get replaced by an AggregatedEntry # later if necessary entries_for_date.append(entry) aggregated_entries[entry.hash] = id id += 1 else: # Get the first occurrence of the entry in the # entries_for_date list existing_entry = entries_for_date[ aggregated_entries[entry.hash] ] # The entry could already have been replaced by an # AggregatedEntry if there are more than two occurrences if isinstance(existing_entry, Entry): # Create the AggregatedEntry, put the first # occurrence of Entry in it and the current one aggregated_entry = AggregatedTimesheetEntry() aggregated_entry.entries.append(existing_entry) aggregated_entry.entries.append(entry) entries_for_date[ aggregated_entries[entry.hash] ] = aggregated_entry else: # The entry we found is already an # AggregatedEntry, let's just append the # current entry to it aggregated_entry = existing_entry aggregated_entry.entries.append(entry) else: entries_for_date = [ entry for entry in entries if entry_filter(entries_date, entry) ] if entries_for_date: filtered_entries[entries_date].extend(entries_for_date) return filtered_entries
Return the entries as a dict of {:class:`datetime.date`: :class:`~taxi.timesheet.lines.Entry`} items. `date` can either be a single :class:`datetime.date` object to filter only entries from the given date, or a tuple of :class:`datetime.date` objects representing `(from, to)`. The `ignored`, `pushed` and `unmapped` keyword arguments filter entries on the corresponding attributes; `current_workday` restricts entries to the current or previous working day. If `regroup` is set to True, similar entries (ie. having the same :meth:`~taxi.timesheet.lines.Entry.hash`) will be regrouped into a single :class:`~taxi.timesheet.entry.AggregatedTimesheetEntry`.
entailment
def append(self, x): """ Append the given element to the list and synchronize the textual representation. """ super(EntriesList, self).append(x) if self.entries_collection is not None: self.entries_collection.add_entry(self.date, x)
Append the given element to the list and synchronize the textual representation.
entailment
def projects_database_update_success(self, aliases_after_update, projects_db): """ Display the results of the projects/aliases database update. We need the projects db to extract the name of the projects / activities. """ def show_aliases(aliases): """ Display the given list of aliases in the following form: my_alias project name / activity name The aliases parameter is just a list of aliases, the mapping is extracted from the aliases_after_update parameter, and the project/activity names are looked up in the projects db. """ for alias in aliases: mapping = aliases_after_update[alias] (project, activity) = projects_db.mapping_to_project(mapping) self.msg("%s\n\t%s / %s" % ( alias, project.name if project else "?", activity.name if activity else "?" )) self.msg("Projects database updated successfully.") deleted_aliases = (set(aliases_database.aliases.keys()) - set(aliases_after_update.keys())) added_aliases = (set(aliases_after_update.keys()) - set(aliases_database.aliases.keys())) modified_aliases = set() for alias, mapping in six.iteritems(aliases_after_update): if (alias in aliases_database and aliases_database[alias][:2] != mapping[:2]): modified_aliases.add(alias) if added_aliases: self.msg("\nThe following shared aliases have been added:\n") show_aliases(added_aliases) if deleted_aliases: self.msg("\nThe following shared aliases have been removed:\n") for alias in deleted_aliases: self.msg(alias) if modified_aliases: self.msg("\nThe following shared aliases have been updated:\n") show_aliases(modified_aliases)
Display the results of the projects/aliases database update. We need the projects db to extract the name of the projects / activities.
entailment
def is_top_down(lines): """ Return `True` if dates in the given lines go in an ascending order, or `False` if they go in a descending order. If no order can be determined, return `None`. The given `lines` must be a list of lines, ie. :class:`~taxi.timesheet.lines.TextLine`, :class:`taxi.timesheet.lines.Entry` or :class:`~taxi.timesheet.lines.DateLine`. """ date_lines = [ line for line in lines if hasattr(line, 'is_date_line') and line.is_date_line ] if len(date_lines) < 2 or date_lines[0].date == date_lines[1].date: return None else: return date_lines[1].date > date_lines[0].date
Return `True` if dates in the given lines go in an ascending order, or `False` if they go in a descending order. If no order can be determined, return `None`. The given `lines` must be a list of lines, ie. :class:`~taxi.timesheet.lines.TextLine`, :class:`taxi.timesheet.lines.Entry` or :class:`~taxi.timesheet.lines.DateLine`.
entailment
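The ordering check only ever looks at the first two date lines. A minimal sketch, using `SimpleNamespace` as a stand-in for the real `DateLine` objects (only the two attributes `is_top_down` inspects are modelled):

```python
import datetime
from types import SimpleNamespace

def make_date_line(d):
    # Minimal stand-in exposing the two attributes is_top_down inspects.
    return SimpleNamespace(is_date_line=True, date=d)

def is_top_down(lines):
    date_lines = [line for line in lines
                  if getattr(line, 'is_date_line', False)]
    if len(date_lines) < 2 or date_lines[0].date == date_lines[1].date:
        return None
    return date_lines[1].date > date_lines[0].date

ascending = [make_date_line(datetime.date(2017, 1, 1)),
             make_date_line(datetime.date(2017, 1, 2))]
```

With fewer than two distinct dates the order is genuinely ambiguous, hence the three-valued return (`True`/`False`/`None`).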
def trim(lines): """ Remove lines at the start and at the end of the given `lines` that are :class:`~taxi.timesheet.lines.TextLine` instances and don't have any text. """ trim_top = None trim_bottom = None _lines = lines[:] for (lineno, line) in enumerate(_lines): if hasattr(line, 'is_text_line') and line.is_text_line and not line.text.strip(): trim_top = lineno else: break for (lineno, line) in enumerate(reversed(_lines)): if hasattr(line, 'is_text_line') and line.is_text_line and not line.text.strip(): trim_bottom = lineno else: break if trim_top is not None: _lines = _lines[trim_top + 1:] if trim_bottom is not None: trim_bottom = len(_lines) - trim_bottom - 1 _lines = _lines[:trim_bottom] return _lines
Remove lines at the start and at the end of the given `lines` that are :class:`~taxi.timesheet.lines.TextLine` instances and don't have any text.
entailment
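The same trimming behaviour can be shown on plain strings instead of line objects; `trim_blank_edges` is a simplified sketch (not the library function) that drops blank edges while keeping interior blanks intact:

```python
def trim_blank_edges(lines):
    """Drop leading and trailing blank strings, keeping interior blanks."""
    start, end = 0, len(lines)
    while start < end and not lines[start].strip():
        start += 1
    while end > start and not lines[end - 1].strip():
        end -= 1
    return lines[start:end]
```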
def commit(ctx, f, force_yes, date): """ Commits your work to the server. The [date] option can be used either as a single date (eg. 20.01.2014), as a range (eg. 20.01.2014-22.01.2014), or as a range with one of the dates omitted (eg. -22.01.2014). """ timesheet_collection = get_timesheet_collection_for_context(ctx, f) if not date and not force_yes: non_workday_entries = timesheet_collection.entries.filter(ignored=False, pushed=False, current_workday=False) if non_workday_entries: if not ctx.obj['view'].confirm_commit_entries(non_workday_entries): ctx.obj['view'].msg("Ok then.") return ctx.obj['view'].pushing_entries() backends_entries = defaultdict(list) try: # Push entries for timesheet in timesheet_collection.timesheets: entries_to_push = get_entries_to_push( timesheet, date, ctx.obj['settings']['regroup_entries'] ) for (entries_date, entries) in entries_to_push.items(): for entry in entries: backend_name = aliases_database[entry.alias].backend backend = plugins_registry.get_backend(backend_name) backends_entries[backend].append(entry) try: additional_info = backend.push_entry(entries_date, entry) except KeyboardInterrupt: entry.push_error = ("Interrupted, check status in" " backend") raise except Exception as e: additional_info = None entry.push_error = six.text_type(e) else: entry.push_error = None finally: ctx.obj['view'].pushed_entry(entry, additional_info) # Call post_push_entries on backends backends_post_push(backends_entries) except KeyboardInterrupt: pass finally: comment_timesheets_entries(timesheet_collection, date) ignored_entries = timesheet_collection.entries.filter(date=date, ignored=True, unmapped=False, pushed=False) ignored_entries_list = [] for (entries_date, entries) in six.iteritems(ignored_entries): ignored_entries_list.extend(entries) all_entries = itertools.chain(*backends_entries.values()) ctx.obj['view'].pushed_entries_summary(list(all_entries), ignored_entries_list)
Commits your work to the server. The [date] option can be used either as a single date (eg. 20.01.2014), as a range (eg. 20.01.2014-22.01.2014), or as a range with one of the dates omitted (eg. -22.01.2014).
entailment
async def _request(self, path): """Make the actual request and return the parsed response.""" url = '{}{}'.format(self.base_url, path) data = None try: async with self.websession.get(url, auth=self._auth, timeout=self._timeout) as response: if response.status == 200: if response.headers['content-type'] == 'application/json': data = await response.json() else: data = await response.text() except (asyncio.TimeoutError, aiohttp.ClientError) as error: _LOGGER.error('Failed to communicate with IP Webcam: %s', error) self._available = False return self._available = True if isinstance(data, str): return data.find("Ok") != -1 return data
Make the actual request and return the parsed response.
entailment
async def update(self): """Fetch the latest data from IP Webcam.""" status_data = await self._request('/status.json?show_avail=1') if status_data: self.status_data = status_data sensor_data = await self._request('/sensors.json') if sensor_data: self.sensor_data = sensor_data
Fetch the latest data from IP Webcam.
entailment
def current_settings(self): """Return a dict with all current configuration values.""" settings = {} if not self.status_data: return settings for (key, val) in self.status_data.get('curvals', {}).items(): try: val = float(val) except ValueError: val = val if val in ('on', 'off'): val = (val == 'on') settings[key] = val return settings
Return a dict with all current configuration values.
entailment
def available_settings(self): """Return dict of lists with all available config settings.""" available = {} if not self.status_data: return available for (key, val) in self.status_data.get('avail', {}).items(): available[key] = [] for subval in val: try: subval = float(subval) except ValueError: subval = subval if subval in ('on', 'off'): subval = (subval == 'on') available[key].append(subval) return available
Return dict of lists with all available config settings.
entailment
def export_sensor(self, sensor): """Return (value, unit) from a sensor node.""" value = None unit = None try: container = self.sensor_data.get(sensor) unit = container.get('unit') data_point = container.get('data', [[0, [0.0]]]) if data_point and data_point[0]: value = data_point[0][-1][0] except (ValueError, KeyError, AttributeError): pass return (value, unit)
Return (value, unit) from a sensor node.
entailment
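The nested indexing `data_point[0][-1][0]` is easier to follow against a concrete payload. The sample structure below is inferred from the code (first data point, last element, first reading), not taken from the IP Webcam documentation:

```python
def export_sensor(sensor_data, sensor):
    # Same traversal as the client method: the last element of the first
    # data point is a list whose first item is the reading.
    value = None
    unit = None
    try:
        container = sensor_data.get(sensor)
        unit = container.get('unit')
        data_point = container.get('data', [[0, [0.0]]])
        if data_point and data_point[0]:
            value = data_point[0][-1][0]
    except (ValueError, KeyError, AttributeError):
        pass
    return (value, unit)

# Hypothetical sensors.json fragment: timestamp paired with readings.
sample = {'light': {'unit': 'lx', 'data': [[1514764800000, [170.0]]]}}
```

A missing sensor raises `AttributeError` on `container.get`, which the broad `except` converts into `(None, None)`.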
def change_setting(self, key, val): """Change a setting. Return a coroutine. """ if isinstance(val, bool): payload = 'on' if val else 'off' else: payload = val return self._request('/settings/{}?set={}'.format(key, payload))
Change a setting. Return a coroutine.
entailment
def record(self, record=True, tag=None): """Enable/disable recording. Return a coroutine. """ path = '/startvideo?force=1' if record else '/stopvideo?force=1' if record and tag is not None: path = '/startvideo?force=1&tag={}'.format(URL(tag).raw_path) return self._request(path)
Enable/disable recording. Return a coroutine.
entailment
def set_orientation(self, orientation='landscape'): """Set the video orientation. Return a coroutine. """ if orientation not in ALLOWED_ORIENTATIONS: _LOGGER.debug('%s is not a valid orientation', orientation) return False return self.change_setting('orientation', orientation)
Set the video orientation. Return a coroutine.
entailment
def set_scenemode(self, scenemode='auto'): """Set the video scene mode. Return a coroutine. """ if scenemode not in self.available_settings['scenemode']: _LOGGER.debug('%s is not a valid scenemode', scenemode) return False return self.change_setting('scenemode', scenemode)
Set the video scene mode. Return a coroutine.
entailment
def get_timesheet_collection_for_context(ctx, entries_file=None): """ Return a :class:`~taxi.timesheet.TimesheetCollection` object with the current timesheet(s). Since this depends on the settings (to get the entries files path, the number of previous files, etc) this uses the settings object from the current command context. If `entries_file` is set, this forces the path of the file to be used. """ if not entries_file: entries_file = ctx.obj['settings'].get_entries_file_path(False) parser = TimesheetParser( date_format=ctx.obj['settings']['date_format'], add_date_to_bottom=ctx.obj['settings'].get_add_to_bottom(), flags_repr=ctx.obj['settings'].get_flags(), ) return TimesheetCollection.load(entries_file, ctx.obj['settings']['nb_previous_files'], parser)
Return a :class:`~taxi.timesheet.TimesheetCollection` object with the current timesheet(s). Since this depends on the settings (to get the entries files path, the number of previous files, etc) this uses the settings object from the current command context. If `entries_file` is set, this forces the path of the file to be used.
entailment
def create_config_file(filename): """ Create main configuration file if it doesn't exist. """ import textwrap from six.moves.urllib import parse if not os.path.exists(filename): old_default_config_file = os.path.join(os.path.dirname(filename), '.tksrc') if os.path.exists(old_default_config_file): upgrade = click.confirm("\n".join(textwrap.wrap( "It looks like you recently updated Taxi. Some " "configuration changes are required. You can either let " "me upgrade your configuration file or do it " "manually.")) + "\n\nProceed with automatic configuration " "file upgrade?", default=True ) if upgrade: settings = Settings(old_default_config_file) settings.convert_to_4() with open(filename, 'w') as config_file: settings.config.write(config_file) os.remove(old_default_config_file) return else: print("Ok then.") sys.exit(0) welcome_msg = "Welcome to Taxi!" click.secho(welcome_msg, fg='green', bold=True) click.secho('=' * len(welcome_msg) + '\n', fg='green', bold=True) click.echo(click.wrap_text( "It looks like this is the first time you run Taxi. You will need " "a configuration file ({}) in order to proceed. Please answer a " "few questions to create your configuration file.".format( filename ) ) + '\n') config = pkg_resources.resource_string('taxi', 'etc/taxirc.sample').decode('utf-8') context = {} available_backends = plugins_registry.get_available_backends() context['backend'] = click.prompt( "Backend you want to use (choices are %s)" % ', '.join(available_backends), type=click.Choice(available_backends) ) context['username'] = click.prompt("Username or token") context['password'] = parse.quote( click.prompt("Password (leave empty if you're using" " a token)", hide_input=True, default=''), safe='' ) # Password can be empty in case of token auth so the ':' separator # is not included in the template config, so we add it if the user # has set a password if context['password']: context['password'] = ':' + context['password'] context['hostname'] = click.prompt( "Hostname of the backend (eg. timesheets.example.com)", type=Hostname() ) editor = Editor().get_editor() context['editor'] = click.prompt( "Editor command to edit your timesheets", default=editor ) templated_config = config.format(**context) directory = os.path.dirname(filename) if not os.path.exists(directory): os.makedirs(directory) with open(filename, 'w') as f: f.write(templated_config) else: settings = Settings(filename) conversions = settings.needed_conversions if conversions: for conversion in conversions: conversion() settings.write_config()
Create main configuration file if it doesn't exist.
entailment
def date_options(func): """ Decorator to add support for `--today/--not-today`, `--since` and `--until` options to the given command. The calculated date is then passed as a parameter named `date`. """ @click.option( '--until', type=Date(), help="Only show entries until the given date." ) @click.option( '--since', type=Date(), help="Only show entries starting at the given date.", ) @click.option( '--today/--not-today', default=None, help="Only include today's entries (same as --since=today --until=today)" " or ignore today's entries (same as --until=yesterday)" ) @functools.wraps(func) def wrapper(*args, **kwargs): since, until, today = kwargs.pop('since'), kwargs.pop('until'), kwargs.pop('today') if today is not None: if today: date = datetime.date.today() else: date = (None, datetime.date.today() - datetime.timedelta(days=1)) elif since is not None or until is not None: date = (since, until) else: date = None kwargs['date'] = date return func(*args, **kwargs) return wrapper
Decorator to add support for `--today/--not-today`, `--since` and `--until` options to the given command. The calculated date is then passed as a parameter named `date`.
entailment
def months_ago(date, nb_months=1): """ Return the given `date` with `nb_months` subtracted from it. """ nb_years = nb_months // 12 nb_months = nb_months % 12 month_diff = date.month - nb_months if month_diff > 0: new_month = month_diff else: new_month = 12 + month_diff nb_years += 1 return date.replace(day=1, month=new_month, year=date.year - nb_years)
Return the given `date` with `nb_months` subtracted from it.
entailment
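The wrap-around arithmetic is easiest to verify on concrete dates; the body below is copied from the source (note the day is always pinned to 1):

```python
import datetime

def months_ago(date, nb_months=1):
    nb_years = nb_months // 12
    nb_months = nb_months % 12
    month_diff = date.month - nb_months
    if month_diff > 0:
        new_month = month_diff
    else:
        # Wrapped past January: borrow a year.
        new_month = 12 + month_diff
        nb_years += 1
    return date.replace(day=1, month=new_month, year=date.year - nb_years)
```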
def time_ago_to_date(value): """ Parse a date and return it as ``datetime.date`` objects. Examples of valid dates: * Relative: 2 days ago, today, yesterday, 1 week ago * Absolute: 25.10.2017 """ today = datetime.date.today() if value == 'today': return today elif value == 'yesterday': return today - datetime.timedelta(days=1) time_ago = re.match(r'(\d+) (days?|weeks?|months?|years?) ago', value) if time_ago: duration, unit = int(time_ago.group(1)), time_ago.group(2) if 'day' in unit: return today - datetime.timedelta(days=duration) elif 'week' in unit: return today - datetime.timedelta(weeks=duration) elif 'month' in unit: return months_ago(today, duration) elif 'year' in unit: return today.replace(year=today.year - duration) return datetime.datetime.strptime(value, '%d.%m.%Y').date()
Parse a date and return it as ``datetime.date`` objects. Examples of valid dates: * Relative: 2 days ago, today, yesterday, 1 week ago * Absolute: 25.10.2017
entailment
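The relative forms map directly onto `timedelta` arithmetic. A reduced sketch handling only days and weeks (months and years omitted for brevity), with `today` made injectable for testing where the original always calls `date.today()`:

```python
import datetime
import re

def relative_to_date(value, today=None):
    """Parse 'today', 'yesterday', 'N day(s)/week(s) ago' or 'dd.mm.yyyy'."""
    today = today or datetime.date.today()
    if value == 'today':
        return today
    if value == 'yesterday':
        return today - datetime.timedelta(days=1)
    time_ago = re.match(r'(\d+) (days?|weeks?) ago', value)
    if time_ago:
        duration, unit = int(time_ago.group(1)), time_ago.group(2)
        if 'day' in unit:
            return today - datetime.timedelta(days=duration)
        return today - datetime.timedelta(weeks=duration)
    # Fall back to an absolute date.
    return datetime.datetime.strptime(value, '%d.%m.%Y').date()
```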
def list_(ctx, search_string, reverse, backend, used, inactive): """ List configured aliases. Aliases in red belong to inactive projects and trying to push entries to these aliases will probably result in an error. """ if not reverse: list_aliases(ctx, search_string, backend, used, inactive=inactive) else: show_mapping(ctx, search_string, backend)
List configured aliases. Aliases in red belong to inactive projects and trying to push entries to these aliases will probably result in an error.
entailment
def add(ctx, alias, mapping, backend): """ Add a new alias to your configuration file. """ if not backend: backends_list = ctx.obj['settings'].get_backends() if len(backends_list) > 1: raise click.UsageError( "You're using more than 1 backend. Please set the backend to " "add the alias to with the --backend option (choices are %s)" % ", ".join(dict(backends_list).keys()) ) add_mapping(ctx, alias, mapping, backend)
Add a new alias to your configuration file.
entailment
def convert_to_4(self): """ Convert a pre-4.0 configuration file to a 4.0 configuration file. """ from six.moves.urllib import parse if not self.config.has_section('backends'): self.config.add_section('backends') site = parse.urlparse(self.get('site', default_value='')) backend_uri = 'zebra://{username}:{password}@{hostname}'.format( username=self.get('username', default_value=''), password=parse.quote(self.get('password', default_value=''), safe=''), hostname=site.hostname ) self.config.set('backends', 'default', backend_uri) self.config.remove_option('default', 'username') self.config.remove_option('default', 'password') self.config.remove_option('default', 'site') if not self.config.has_section('default_aliases'): self.config.add_section('default_aliases') if not self.config.has_section('default_shared_aliases'): self.config.add_section('default_shared_aliases') if self.config.has_section('wrmap'): for alias, mapping in self.config.items('wrmap'): self.config.set('default_aliases', alias, mapping) self.config.remove_section('wrmap') if self.config.has_section('shared_wrmap'): for alias, mapping in self.config.items('shared_wrmap'): self.config.set('default_shared_aliases', alias, mapping) self.config.remove_section('shared_wrmap')
Convert a pre-4.0 configuration file to a 4.0 configuration file.
entailment
def autofill(ctx, f): """ Fills your timesheet up to today, for the defined auto_fill_days. """ auto_fill_days = ctx.obj['settings']['auto_fill_days'] if not auto_fill_days: ctx.obj['view'].err("The parameter `auto_fill_days` must be set " "to use this command.") return today = datetime.date.today() last_day = calendar.monthrange(today.year, today.month) last_date = datetime.date(today.year, today.month, last_day[1]) timesheet_collection = get_timesheet_collection_for_context( ctx, f ) t = timesheet_collection.latest() t.prefill(auto_fill_days, last_date) t.save() ctx.obj['view'].msg("Your entries file has been filled.")
Fills your timesheet up to today, for the defined auto_fill_days.
entailment
def list_(ctx, search, backend): """ Searches for a project by its name. The letter in the first column indicates the status of the project: [N]ot started, [A]ctive, [F]inished, [C]ancelled. """ projects = ctx.obj['projects_db'].search(search, backend=backend) projects = sorted(projects, key=lambda project: project.name.lower()) ctx.obj['view'].search_results(projects)
Searches for a project by its name. The letter in the first column indicates the status of the project: [N]ot started, [A]ctive, [F]inished, [C]ancelled.
entailment
def alias(ctx, search, backend): """ Searches for the given project and interactively add an alias for it. """ projects = ctx.obj['projects_db'].search(search, active_only=True) projects = sorted(projects, key=lambda project: project.name) if len(projects) == 0: ctx.obj['view'].msg( "No active project matches your search string '%s'." % ''.join(search) ) return ctx.obj['view'].projects_list(projects, True) try: number = ctx.obj['view'].select_project(projects) except CancelException: return project = projects[number] ctx.obj['view'].project_with_activities(project, numbered_activities=True) try: number = ctx.obj['view'].select_activity(project.activities) except CancelException: return retry = True while retry: try: alias = ctx.obj['view'].select_alias() except CancelException: return if alias in aliases_database: mapping = aliases_database[alias] overwrite = ctx.obj['view'].overwrite_alias(alias, mapping) if not overwrite: return elif overwrite: retry = False # User chose "retry" else: retry = True else: retry = False activity = project.activities[number] mapping = Mapping(mapping=(project.id, activity.id), backend=project.backend) ctx.obj['settings'].add_alias(alias, mapping) ctx.obj['settings'].write_config() ctx.obj['view'].alias_added(alias, (project.id, activity.id))
Searches for the given project and interactively add an alias for it.
entailment
def show(ctx, project_id, backend): """ Shows the details of the given project id. """ try: project = ctx.obj['projects_db'].get(project_id, backend) except IOError: raise Exception("Error: the projects database file doesn't exist. " "Please run `taxi update` to create it") if project is None: ctx.obj['view'].err( "Could not find project `%s`" % (project_id) ) else: ctx.obj['view'].project_with_activities(project)
Shows the details of the given project id.
entailment
def stop(ctx, description, f): """ Use it when you stop working on the current task. You can add a description to what you've done. """ description = ' '.join(description) try: timesheet_collection = get_timesheet_collection_for_context(ctx, f) current_timesheet = timesheet_collection.latest() current_timesheet.continue_entry( datetime.date.today(), datetime.datetime.now().time(), description ) except ParseError as e: ctx.obj['view'].err(e) except NoActivityInProgressError as e: ctx.obj['view'].err(e) except StopInThePastError as e: ctx.obj['view'].err(e) else: current_timesheet.save()
Use it when you stop working on the current task. You can add a description to what you've done.
entailment
def start(ctx, alias, description, f): """ Use it when you start working on the given activity. This will add the activity and the current time to your entries file. When you're finished, use the stop command. """ today = datetime.date.today() try: timesheet_collection = get_timesheet_collection_for_context(ctx, f) except ParseError as e: ctx.obj['view'].err(e) return t = timesheet_collection.latest() # If there's a previous entry on the same date, check if we can use its # end time as a start time for the newly started entry today_entries = t.entries.filter(date=today) if(today in today_entries and today_entries[today] and isinstance(today_entries[today][-1].duration, tuple) and today_entries[today][-1].duration[1] is not None): new_entry_start_time = today_entries[today][-1].duration[1] else: new_entry_start_time = datetime.datetime.now() description = ' '.join(description) if description else '?' duration = (new_entry_start_time, None) e = Entry(alias, duration, description) t.entries[today].append(e) t.save()
Use it when you start working on the given activity. This will add the activity and the current time to your entries file. When you're finished, use the stop command.
entailment
def create_time_from_text(text): """ Parse a time in the form ``hh:mm`` or ``hhmm`` (or even ``hmm``) and return a :class:`datetime.time` object. If no valid time can be extracted from the given string, :exc:`ValueError` will be raised. """ text = text.replace(':', '') if not re.match(r'^\d{3,}$', text): raise ValueError("Time must be numeric") minutes = int(text[-2:]) hours = int(text[0:2] if len(text) > 3 else text[0]) return datetime.time(hours, minutes)
Parse a time in the form ``hh:mm`` or ``hhmm`` (or even ``hmm``) and return a :class:`datetime.time` object. If no valid time can be extracted from the given string, :exc:`ValueError` will be raised.
entailment
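The parser accepts ``hh:mm``, ``hhmm`` and ``hmm``; a few concrete inputs make the slicing clear. The body is copied from the source (regex written as a raw string):

```python
import datetime
import re

def create_time_from_text(text):
    text = text.replace(':', '')
    if not re.match(r'^\d{3,}$', text):
        raise ValueError("Time must be numeric")
    minutes = int(text[-2:])
    # 4+ digits: two-digit hour; 3 digits: single-digit hour.
    hours = int(text[0:2] if len(text) > 3 else text[0])
    return datetime.time(hours, minutes)
```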
def duration_to_text(self, duration): """ Return the textual representation of the given `duration`. The duration can either be a tuple of :class:`datetime.time` objects, or a simple number. The returned text will be either a hhmm-hhmm string (if the given `duration` is a tuple) or a number. """ if isinstance(duration, tuple): start = (duration[0].strftime(self.ENTRY_DURATION_FORMAT) if duration[0] is not None else '') end = (duration[1].strftime(self.ENTRY_DURATION_FORMAT) if duration[1] is not None else '?') duration = '%s-%s' % (start, end) else: duration = six.text_type(duration) return duration
Return the textual representation of the given `duration`. The duration can either be a tuple of :class:`datetime.time` objects, or a simple number. The returned text will be either a hhmm-hhmm string (if the given `duration` is a tuple) or a number.
entailment
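The duration rendering described above (tuple of times → `hhmm-hhmm`, number → plain string) can be sketched as a free function. The `fmt` default mirrors what `ENTRY_DURATION_FORMAT` presumably is; that is an assumption:

```python
import datetime

def duration_to_text(duration, fmt='%H%M'):
    """Render a duration as 'hhmm-hhmm' (tuple) or a plain number.

    Sketch of the behaviour described above; `fmt` stands in for the
    parser's ENTRY_DURATION_FORMAT, which is assumed here.
    """
    if isinstance(duration, tuple):
        # A missing start renders as '', a missing (open) end as '?'
        start = duration[0].strftime(fmt) if duration[0] is not None else ''
        end = duration[1].strftime(fmt) if duration[1] is not None else '?'
        return '%s-%s' % (start, end)
    return str(duration)
```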
def to_text(self, line): """ Return the textual representation of the given `line`. """ return getattr(self, self.ENTRY_TRANSFORMERS[line.__class__])(line)
Return the textual representation of the given `line`.
entailment
def date_line_to_text(self, date_line): """ Return the textual representation of the given :class:`~taxi.timesheet.lines.DateLine` instance. The date format is set by the `date_format` parameter given when instantiating the parser instance. """ # Changing the date in a dateline is not supported yet, but if it gets implemented someday this will need to be # changed if date_line._text is not None: return date_line._text else: return date_utils.unicode_strftime(date_line.date, self.date_format)
Return the textual representation of the given :class:`~taxi.timesheet.lines.DateLine` instance. The date format is set by the `date_format` parameter given when instantiating the parser instance.
entailment
def entry_line_to_text(self, entry): """ Return the textual representation of an :class:`~taxi.timesheet.lines.Entry` instance. This method is a bit convoluted since we don't want to completely mess up the original formatting of the entry. """ line = [] # The entry is new, it didn't come from an existing line, so let's just return a simple text representation of # it if not entry._text: flags_text = self.flags_to_text(entry.flags) duration_text = self.duration_to_text(entry.duration) return ''.join( (flags_text, ' ' if flags_text else '', entry.alias, ' ', duration_text, ' ', entry.description) ) for i, text in enumerate(entry._text): # If this field is mapped to an attribute, check if it has changed # and, if so, regenerate its text. The only fields that are not # mapped to attributes are spacing fields if i in self.ENTRY_ATTRS_POSITION: if self.ENTRY_ATTRS_POSITION[i] in entry._changed_attrs: attr_name = self.ENTRY_ATTRS_POSITION[i] attr_value = getattr(entry, self.ENTRY_ATTRS_POSITION[i]) # Some attributes need to be transformed to their textual representation, such as flags or duration if attr_name in self.ENTRY_ATTRS_TRANSFORMERS: attr_value = getattr(self, self.ENTRY_ATTRS_TRANSFORMERS[attr_name])(attr_value) else: attr_value = text line.append(attr_value) else: # If the length of the field has changed, do whatever we can to keep the current formatting (ie. number # of whitespaces) if len(line[i-1]) != len(entry._text[i-1]): text = ' ' * max(1, (len(text) - (len(line[i-1]) - len(entry._text[i-1])))) line.append(text) return ''.join(line).strip()
Return the textual representation of an :class:`~taxi.timesheet.lines.Entry` instance. This method is a bit convoluted since we don't want to completely mess up the original formatting of the entry.
entailment
def create_entry_line_from_text(self, text): """ Try to parse the given text line and extract an entry. Return an :class:`~taxi.timesheet.lines.Entry` object if parsing is successful, otherwise raise :exc:`~taxi.exceptions.ParseError`. """ split_line = re.match(self.entry_line_regexp, text) if not split_line: raise ParseError("Line must have an alias, a duration and a description") alias = split_line.group('alias') start_time = end_time = None if split_line.group('start_time') is not None: if split_line.group('start_time'): try: start_time = create_time_from_text(split_line.group('start_time')) except ValueError: raise ParseError("Start time is not a valid time, it must be in format hh:mm or hhmm") else: start_time = None if split_line.group('end_time') is not None: if split_line.group('end_time') == '?': end_time = None else: try: end_time = create_time_from_text(split_line.group('end_time')) except ValueError: raise ParseError("End time is not a valid time, it must be in format hh:mm or hhmm") if split_line.group('duration') is not None: duration = float(split_line.group('duration')) elif start_time or end_time: duration = (start_time, end_time) else: duration = (None, None) description = split_line.group('description') # Parse and set line flags if split_line.group('flags'): try: flags = self.extract_flags_from_text(split_line.group('flags')) # extract_flags_from_text will raise `KeyError` if one of the flags is not recognized. This should never # happen though as the list of accepted flags is bundled in self.entry_line_regexp except KeyError as e: raise ParseError(*e.args) else: flags = set() # Backwards compatibility with previous notation that allowed to end the alias with a `?` to ignore it if alias.endswith('?'): flags.add(Entry.FLAG_IGNORED) alias = alias[:-1] if description == '?': flags.add(Entry.FLAG_IGNORED) line = ( split_line.group('flags') or '', split_line.group('spacing1') or '', split_line.group('alias'), split_line.group('spacing2'), split_line.group('time'), split_line.group('spacing3'), split_line.group('description'), ) entry_line = Entry(alias, duration, description, flags=flags, text=line) return entry_line
Try to parse the given text line and extract an entry. Return an :class:`~taxi.timesheet.lines.Entry` object if parsing is successful, otherwise raise :exc:`~taxi.exceptions.ParseError`.
entailment
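The entry-line format described above (alias, then a duration that is either a decimal number or a `start-end` time range, then a description) can be approximated with a much simpler pattern than the real `entry_line_regexp`, which also captures flags and spacing. This sketch is an assumption about the shape of the line, not the actual regexp:

```python
import re

# Simplified stand-in for the real entry_line_regexp: alias, duration
# (a decimal number, or 'hhmm-hhmm' where the end may be '?'), description.
ENTRY_RE = re.compile(
    r'^(?P<alias>\S+)\s+'
    r'(?P<duration>\d+(?:\.\d+)?|\d{3,4}-(?:\d{3,4}|\?))\s+'
    r'(?P<description>.+)$'
)

def parse_entry(line):
    """Return the named groups of a matching entry line, or raise ValueError."""
    m = ENTRY_RE.match(line)
    if not m:
        raise ValueError("Line must have an alias, a duration and a description")
    return m.groupdict()
```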
def create_date_from_text(self, text): """ Parse a text in the form dd/mm/yyyy, dd/mm/yy or yyyy/mm/dd and return a corresponding :class:`datetime.date` object. If no date can be extracted from the given text, a :exc:`ValueError` will be raised. """ # Try to match dd/mm/yyyy format date_matches = re.match(self.DATE_LINE_REGEXP, text) # If no match, try with yyyy/mm/dd format if date_matches is None: date_matches = re.match(self.US_DATE_LINE_REGEXP, text) if date_matches is None: raise ValueError("No date could be extracted from the given value") # yyyy/mm/dd if len(date_matches.group(1)) == 4: return datetime.date(int(date_matches.group(1)), int(date_matches.group(2)), int(date_matches.group(3))) # dd/mm/yy if len(date_matches.group(3)) == 2: current_year = datetime.date.today().year current_millennium = current_year - (current_year % 1000) year = current_millennium + int(date_matches.group(3)) # dd/mm/yyyy else: year = int(date_matches.group(3)) return datetime.date(year, int(date_matches.group(2)), int(date_matches.group(1)))
Parse a text in the form dd/mm/yyyy, dd/mm/yy or yyyy/mm/dd and return a corresponding :class:`datetime.date` object. If no date can be extracted from the given text, a :exc:`ValueError` will be raised.
entailment
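The date-parsing logic above, including the two-digit-year expansion into the current millennium, can be sketched as follows. The regexps are assumptions standing in for `DATE_LINE_REGEXP` and `US_DATE_LINE_REGEXP`:

```python
import datetime
import re

# Stand-ins for the parser's DATE_LINE_REGEXP / US_DATE_LINE_REGEXP
DATE_RE = re.compile(r'^(\d{1,2})\D(\d{1,2})\D(\d{4}|\d{2})$')
US_DATE_RE = re.compile(r'^(\d{4})\D(\d{1,2})\D(\d{1,2})$')

def parse_date(text):
    """Parse dd/mm/yyyy, dd/mm/yy or yyyy/mm/dd -- a sketch of the logic above."""
    m = DATE_RE.match(text)
    if m is None:
        m = US_DATE_RE.match(text)
    if m is None:
        raise ValueError("No date could be extracted from the given value")
    # yyyy/mm/dd: the first group holds the 4-digit year
    if len(m.group(1)) == 4:
        return datetime.date(int(m.group(1)), int(m.group(2)), int(m.group(3)))
    if len(m.group(3)) == 2:
        # dd/mm/yy: expand the 2-digit year into the current millennium
        current_year = datetime.date.today().year
        year = current_year - (current_year % 1000) + int(m.group(3))
    else:
        year = int(m.group(3))
    return datetime.date(year, int(m.group(2)), int(m.group(1)))
```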
def extract_flags_from_text(self, text): """ Extract the flags from the given text and return a :class:`set` of flag values. See :class:`~taxi.timesheet.lines.Entry` for a list of existing flags. """ flags = set() reversed_flags_repr = {v: k for k, v in self.flags_repr.items()} for flag_repr in text: if flag_repr not in reversed_flags_repr: raise KeyError("Flag '%s' is not recognized" % flag_repr) else: flags.add(reversed_flags_repr[flag_repr]) return flags
Extract the flags from the given text and return a :class:`set` of flag values. See :class:`~taxi.timesheet.lines.Entry` for a list of existing flags.
entailment
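The flag extraction above inverts a value→character mapping and walks the text one character at a time. A standalone sketch (the flag names used in the test are illustrative, not Taxi's real flag set):

```python
def extract_flags(text, flags_repr):
    """Map each flag character back to its flag value.

    Sketch of the behaviour described above; `flags_repr` maps flag
    values to their single-character representations.
    """
    reversed_repr = {v: k for k, v in flags_repr.items()}
    flags = set()
    for char in text:
        if char not in reversed_repr:
            raise KeyError("Flag '%s' is not recognized" % char)
        flags.add(reversed_repr[char])
    return flags
```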
def parse_text(self, text): """ Parse the given text and return a list of :class:`~taxi.timesheet.lines.DateLine`, :class:`~taxi.timesheet.lines.Entry`, and :class:`~taxi.timesheet.lines.TextLine` objects. If there's an error during parsing, a :exc:`taxi.exceptions.ParseError` will be raised. """ text = text.strip() lines = text.splitlines() parsed_lines = [] encountered_date = False for (lineno, line) in enumerate(lines, 1): try: parsed_line = self.parse_line(line) if isinstance(parsed_line, DateLine): encountered_date = True elif isinstance(parsed_line, Entry) and not encountered_date: raise ParseError("Entries must be defined inside a date section") except ParseError as e: # Update exception with some more information e.line_number = lineno e.line = line raise else: parsed_lines.append(parsed_line) return parsed_lines
Parse the given text and return a list of :class:`~taxi.timesheet.lines.DateLine`, :class:`~taxi.timesheet.lines.Entry`, and :class:`~taxi.timesheet.lines.TextLine` objects. If there's an error during parsing, a :exc:`taxi.exceptions.ParseError` will be raised.
entailment
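A useful detail in `parse_text` is how it annotates a `ParseError` with the 1-based line number and the offending line before re-raising. That flow can be sketched independently of the real line classes:

```python
class ParseError(Exception):
    """Minimal stand-in for taxi.exceptions.ParseError."""
    def __init__(self, message):
        super().__init__(message)
        self.line_number = None
        self.line = None

def parse_text(text, parse_line):
    """Parse `text` line by line, annotating any ParseError with position info.

    Sketch of the flow described above; the date-section check from the
    real method is omitted for brevity.
    """
    parsed = []
    for lineno, line in enumerate(text.strip().splitlines(), 1):
        try:
            parsed.append(parse_line(line))
        except ParseError as e:
            # Attach context before propagating, exactly as described above
            e.line_number = lineno
            e.line = line
            raise
    return parsed
```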
def parse_line(self, text): """ Parse the given `text` and return either a :class:`~taxi.timesheet.lines.DateLine`, an :class:`~taxi.timesheet.lines.Entry`, or a :class:`~taxi.timesheet.lines.TextLine`, or raise :exc:`taxi.exceptions.ParseError` if the line can't be parsed. See the transformation methods :meth:`create_date_from_text` and :meth:`create_entry_line_from_text` for details about the format the line is expected to have. """ text = text.strip().replace('\t', ' ' * 4) # The logic is: if the line starts with a #, consider it's a comment (TextLine), otherwise try to parse it as a # date and if this fails, try to parse it as an entry. If this fails too, the line is not valid if len(text) == 0 or text.startswith('#'): parsed_line = TextLine(text) else: try: date = self.create_date_from_text(text) except ValueError: parsed_line = self.create_entry_line_from_text(text) else: parsed_line = DateLine(date, text) return parsed_line
Parse the given `text` and return either a :class:`~taxi.timesheet.lines.DateLine`, an :class:`~taxi.timesheet.lines.Entry`, or a :class:`~taxi.timesheet.lines.TextLine`, or raise :exc:`taxi.exceptions.ParseError` if the line can't be parsed. See the transformation methods :meth:`create_date_from_text` and :meth:`create_entry_line_from_text` for details about the format the line is expected to have.
entailment
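The comment / date / entry dispatch in `parse_line` can be sketched with the two parsers passed in as callables; the real method returns `TextLine`, `DateLine`, or `Entry` objects rather than tuples, so this is a simplification:

```python
import datetime

def classify_line(text, parse_date, parse_entry):
    """Dispatch a raw line to comment / date / entry.

    Simplified sketch of the logic described above; `parse_date` must
    raise ValueError when the line is not a date.
    """
    text = text.strip().replace('\t', ' ' * 4)
    # Empty lines and lines starting with '#' are comments (TextLine)
    if not text or text.startswith('#'):
        return ('comment', text)
    # Try the date first; fall back to an entry if that fails
    try:
        return ('date', parse_date(text))
    except ValueError:
        return ('entry', parse_entry(text))
```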
def add_date(self, date, lines): """ Return the given `lines` with the `date` added in the right place (ie. to the beginning or to the end of the given lines, depending on the `add_date_to_bottom` property). """ _lines = lines[:] _lines = trim(_lines) if self.add_date_to_bottom is None: add_date_to_bottom = is_top_down(lines) else: add_date_to_bottom = self.add_date_to_bottom if add_date_to_bottom: _lines.append(TextLine('')) _lines.append(DateLine(date)) else: _lines.insert(0, TextLine('')) _lines.insert(0, DateLine(date)) return trim(_lines)
Return the given `lines` with the `date` added in the right place (ie. to the beginning or to the end of the given lines, depending on the `add_date_to_bottom` property).
entailment
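The insertion step of `add_date` (a blank separator line plus the date header, at the top or the bottom) can be sketched with plain strings; the real method works on `TextLine`/`DateLine` objects and trims the result:

```python
def add_date(lines, date_line, to_bottom):
    """Insert a date header at the top or bottom with a separating blank line.

    Sketch of the behaviour described above; trimming and the automatic
    top-down detection are omitted.
    """
    result = list(lines)
    if to_bottom:
        result.append('')
        result.append(date_line)
    else:
        result.insert(0, '')
        result.insert(0, date_line)
    return result
```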
def update(ctx): """ Synchronizes your project database with the server and updates the shared aliases. """ ctx.obj['view'].updating_projects_database() projects = [] for backend_name, backend_uri in ctx.obj['settings'].get_backends(): backend = plugins_registry.get_backend(backend_name) backend_projects = backend.get_projects() for project in backend_projects: project.backend = backend_name projects += backend_projects ctx.obj['projects_db'].update(projects) # Put the shared aliases in the config file shared_aliases = {} backends_to_clear = set() for project in projects: for alias, activity_id in six.iteritems(project.aliases): mapping = Mapping(mapping=(project.id, activity_id), backend=project.backend) shared_aliases[alias] = mapping backends_to_clear.add(project.backend) for backend in backends_to_clear: ctx.obj['settings'].clear_shared_aliases(backend) # The user can have local aliases with additional information (eg. role definition). If these aliases also exist on # the remote, then they probably need to be cleared out locally to make sure they don't unintentionally use an # alias with a wrong role current_aliases = ctx.obj['settings'].get_aliases() removed_aliases = [ (alias, mapping) for alias, mapping in current_aliases.items() if (alias in shared_aliases and shared_aliases[alias].backend == mapping.backend and mapping.mapping[:2] != shared_aliases[alias].mapping[:2]) ] if removed_aliases: ctx.obj['settings'].remove_aliases(removed_aliases) for alias, mapping in shared_aliases.items(): ctx.obj['settings'].add_shared_alias(alias, mapping) aliases_after_update = ctx.obj['settings'].get_aliases() ctx.obj['settings'].write_config() ctx.obj['view'].projects_database_update_success( aliases_after_update, ctx.obj['projects_db'] )
Synchronizes your project database with the server and updates the shared aliases.
entailment
def attributes(self): """ A dictionary mapping names of attributes to BiomartAttribute instances. This causes overwriting errors if there are different pages which use the same attribute names, but is kept for backward compatibility. """ if not self._attribute_pages: self.fetch_attributes() result = {} for page in self._attribute_pages.values(): result.update(page.attributes) return result
A dictionary mapping names of attributes to BiomartAttribute instances. This causes overwriting errors if there are different pages which use the same attribute names, but is kept for backward compatibility.
entailment
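The merge-with-overwrite behaviour noted above is just successive `dict.update` calls, so later pages win on name clashes. A standalone sketch with plain dicts in place of `BiomartAttribute` pages:

```python
def merged_attributes(attribute_pages):
    """Merge per-page attribute dicts into one flat dict.

    As noted above, a later page silently overwrites an attribute with
    the same name from an earlier page.
    """
    result = {}
    for page in attribute_pages.values():
        result.update(page)
    return result
```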
def get_backends_by_class(self, backend_class): """ Return a list of backends that are instances of the given `backend_class`. """ return [backend for backend in self._backends_registry.values() if isinstance(backend, backend_class)]
Return a list of backends that are instances of the given `backend_class`.
entailment
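The class-based filter above is a one-line `isinstance` comprehension over the registry's values. A sketch with illustrative class names (not Taxi's real backend classes):

```python
class Backend:
    """Illustrative base class standing in for a real backend type."""

class DummyBackend(Backend):
    """Illustrative concrete backend."""

def backends_of_class(registry, backend_class):
    """Return registered backends that are instances of `backend_class`.

    Sketch of the behaviour described above; `registry` maps backend
    names to backend instances.
    """
    return [b for b in registry.values() if isinstance(b, backend_class)]
```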
def populate_backends(self, backends, context): """ Iterate over the given backends list and instantiate every backend found. Can raise :exc:`BackendNotFoundError` if a backend could not be found in the registered entry points. The `backends` parameter should be a dict with backend names as keys and URIs as values. """ for name, uri in backends.items(): self._backends_registry[name] = self._load_backend(uri, context)
Iterate over the given backends list and instantiate every backend found. Can raise :exc:`BackendNotFoundError` if a backend could not be found in the registered entry points. The `backends` parameter should be a dict with backend names as keys and URIs as values.
entailment
def _load_backend(self, backend_uri, context): """ Return the instantiated backend object identified by the given `backend_uri`. The entry point that is used to create the backend object is determined by the protocol part of the given URI. """ parsed = parse.urlparse(backend_uri) options = dict(parse.parse_qsl(parsed.query)) try: backend = self._entry_points[self.BACKENDS_ENTRY_POINT][parsed.scheme].load() except KeyError: raise BackendNotFoundError( "The requested backend `%s` could not be found in the " "registered entry points. Perhaps you forgot to install the " "corresponding backend package?" % parsed.scheme ) password = (parse.unquote(parsed.password) if parsed.password else parsed.password) return backend( username=parsed.username, password=password, hostname=parsed.hostname, port=parsed.port, path=parsed.path, options=options, context=context, )
Return the instantiated backend object identified by the given `backend_uri`. The entry point that is used to create the backend object is determined by the protocol part of the given URI.
entailment
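The URI-splitting step of `_load_backend` can be isolated: the scheme selects the entry point, the query string becomes the options dict, and the password is percent-decoded. A sketch that returns the parsed fields instead of instantiating a backend:

```python
from urllib import parse

def parse_backend_uri(uri):
    """Split a backend URI into the fields passed to the backend constructor.

    Sketch of the parsing step described above; entry-point lookup and
    backend instantiation are omitted.
    """
    parsed = parse.urlparse(uri)
    return {
        'scheme': parsed.scheme,  # selects the registered entry point
        'username': parsed.username,
        # Passwords may contain percent-encoded characters
        'password': parse.unquote(parsed.password) if parsed.password else parsed.password,
        'hostname': parsed.hostname,
        'port': parsed.port,
        'path': parsed.path,
        'options': dict(parse.parse_qsl(parsed.query)),
    }
```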
def register_commands(self): """ Load entry points for custom commands. """ for command in self._entry_points[self.COMMANDS_ENTRY_POINT].values(): command.load()
Load entry points for custom commands.
entailment
def _add_or_remove_flag(self, flag, add): """ Add the given `flag` if `add` is True, remove it otherwise. """ meth = self.add_flag if add else self.remove_flag meth(flag)
Add the given `flag` if `add` is True, remove it otherwise.
entailment
def get_application_configuration(name): """Get a named application configuration. An application configuration is a named set of securely stored properties where each key and its value in the property set is a string. An application configuration object is used to store information that IBM Streams applications require, such as: * Database connection data * Credentials that your applications need to use to access external systems * Other data, such as the port numbers or URLs of external systems Arguments: name(str): Name of the application configuration. Returns: dict: Dictionary containing the property names and values for the application configuration. Raises: ValueError: Application configuration does not exist. """ _check() rc = _ec.get_application_configuration(name) if rc is False: raise ValueError("Application configuration {0} not found.".format(name)) return rc
Get a named application configuration. An application configuration is a named set of securely stored properties where each key and its value in the property set is a string. An application configuration object is used to store information that IBM Streams applications require, such as: * Database connection data * Credentials that your applications need to use to access external systems * Other data, such as the port numbers or URLs of external systems Arguments: name(str): Name of the application configuration. Returns: dict: Dictionary containing the property names and values for the application configuration. Raises: ValueError: Application configuration does not exist.
entailment
def _submit(primitive, port_index, tuple_): """Internal method to submit a tuple""" args = (_get_opc(primitive), port_index, tuple_) _ec._submit(args)
Internal method to submit a tuple
entailment
def value(self, value): """ Set the current value of the metric. """ args = (self.__ptr, int(value)) _ec.metric_set(args)
Set the current value of the metric.
entailment
def resource_tags(self): """Resource tags for this processing logic. Tags are a mechanism for differentiating and identifying resources that have different physical characteristics or logical uses. For example a resource (host) that has external connectivity for public data sources may be tagged `ingest`. Processing logic can be associated with one or more tags to require running on suitably tagged resources. For example adding tags `ingest` and `db` requires that the processing element containing the callable that created the stream runs on a host tagged with both `ingest` and `db`. A :py:class:`~streamsx.topology.topology.Stream` that was not created directly with a Python callable cannot have tags associated with it. For example a stream that is a :py:meth:`~streamsx.topology.topology.Stream.union` of multiple streams cannot be tagged. In this case this method returns an empty `frozenset` which cannot be modified. See https://www.ibm.com/support/knowledgecenter/en/SSCRJU_4.2.1/com.ibm.streams.admin.doc/doc/tags.html for more details of tags within IBM Streams. Returns: set: Set of resource tags, initially empty. .. warning:: If no resources exist with the required tags then job submission will fail. .. versionadded:: 1.7 .. versionadded:: 1.9 Support for :py:class:`Sink` and :py:class:`~streamsx.spl.op.Invoke`. """ try: plc = self._op()._placement if not 'resourceTags' in plc: plc['resourceTags'] = set() return plc['resourceTags'] except TypeError: return frozenset()
Resource tags for this processing logic. Tags are a mechanism for differentiating and identifying resources that have different physical characteristics or logical uses. For example a resource (host) that has external connectivity for public data sources may be tagged `ingest`. Processing logic can be associated with one or more tags to require running on suitably tagged resources. For example adding tags `ingest` and `db` requires that the processing element containing the callable that created the stream runs on a host tagged with both `ingest` and `db`. A :py:class:`~streamsx.topology.topology.Stream` that was not created directly with a Python callable cannot have tags associated with it. For example a stream that is a :py:meth:`~streamsx.topology.topology.Stream.union` of multiple streams cannot be tagged. In this case this method returns an empty `frozenset` which cannot be modified. See https://www.ibm.com/support/knowledgecenter/en/SSCRJU_4.2.1/com.ibm.streams.admin.doc/doc/tags.html for more details of tags within IBM Streams. Returns: set: Set of resource tags, initially empty. .. warning:: If no resources exist with the required tags then job submission will fail. .. versionadded:: 1.7 .. versionadded:: 1.9 Support for :py:class:`Sink` and :py:class:`~streamsx.spl.op.Invoke`.
entailment
def submit(ctxtype, graph, config=None, username=None, password=None): """ Submits a `Topology` (application) using the specified context type. Used to submit an application for compilation into a Streams application and execution within an Streaming Analytics service or IBM Streams instance. `ctxtype` defines how the application will be submitted, see :py:class:`ContextTypes`. The parameters `username` and `password` are only required when submitting to an IBM Streams instance and it is required to access the Streams REST API from the code performing the submit. Accessing data from views created by :py:meth:`~streamsx.topology.topology.Stream.view` requires access to the Streams REST API. Args: ctxtype(str): Type of context the application will be submitted to. A value from :py:class:`ContextTypes`. graph(Topology): The application topology to be submitted. config(dict): Configuration for the submission. username(str): Username for the Streams REST api. password(str): Password for `username`. Returns: SubmissionResult: Result of the submission. For details of what is contained see the :py:class:`ContextTypes` constant passed as `ctxtype`. """ streamsx._streams._version._mismatch_check(__name__) graph = graph.graph if not graph.operators: raise ValueError("Topology {0} does not contain any streams.".format(graph.topology.name)) context_submitter = _SubmitContextFactory(graph, config, username, password).get_submit_context(ctxtype) sr = SubmissionResult(context_submitter.submit()) sr._submitter = context_submitter return sr
Submits a `Topology` (application) using the specified context type. Used to submit an application for compilation into a Streams application and execution within an Streaming Analytics service or IBM Streams instance. `ctxtype` defines how the application will be submitted, see :py:class:`ContextTypes`. The parameters `username` and `password` are only required when submitting to an IBM Streams instance and it is required to access the Streams REST API from the code performing the submit. Accessing data from views created by :py:meth:`~streamsx.topology.topology.Stream.view` requires access to the Streams REST API. Args: ctxtype(str): Type of context the application will be submitted to. A value from :py:class:`ContextTypes`. graph(Topology): The application topology to be submitted. config(dict): Configuration for the submission. username(str): Username for the Streams REST api. password(str): Password for `username`. Returns: SubmissionResult: Result of the submission. For details of what is contained see the :py:class:`ContextTypes` constant passed as `ctxtype`.
entailment
def _vcap_from_service_definition(service_def): """Turn a service definition into a vcap services containing a single service. """ if 'credentials' in service_def: credentials = service_def['credentials'] else: credentials = service_def service = {} service['credentials'] = credentials service['name'] = _name_from_service_definition(service_def) vcap = {'streaming-analytics': [service]} return vcap
Turn a service definition into a vcap services containing a single service.
entailment
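The VCAP wrapping above builds a one-service `streaming-analytics` structure, using the definition itself as credentials when no `credentials` key is present. A sketch that takes the service name as a parameter (the real code derives it from the definition):

```python
def vcap_from_service_definition(service_def, name):
    """Wrap a single service definition into a minimal VCAP_SERVICES dict.

    Sketch of the transformation described above; `name` is passed in
    explicitly here, whereas the real code computes it from service_def.
    """
    # Fall back to the whole definition when there is no 'credentials' key
    credentials = service_def.get('credentials', service_def)
    return {'streaming-analytics': [{'credentials': credentials, 'name': name}]}
```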