def metric(self, name, filter_=None, description=""):
    """Creates a metric bound to the current client.

    :type name: str
    :param name: the name of the metric to be constructed.

    :type filter_: str
    :param filter_: the advanced logs filter expression defining the
                    entries tracked by the metric. If not passed, the
                    instance should already exist, to be refreshed via
                    :meth:`Metric.reload`.

    :type description: str
    :param description: the description of the metric to be constructed.
                        If not passed, the instance should already exist,
                        to be refreshed via :meth:`Metric.reload`.

    :rtype: :class:`google.cloud.logging.metric.Metric`
    :returns: Metric created with the current client.
    """
    return Metric(name, filter_, client=self, description=description)
def connect_edges(graph):
    """
    Given a Graph element containing abstract edges compute edge
    segments directly connecting the source and target nodes. This
    operation just uses internal HoloViews operations and will be a
    lot slower than the pandas equivalent.
    """
    paths = []
    for start, end in graph.array(graph.kdims):
        start_ds = graph.nodes[:, :, start]
        end_ds = graph.nodes[:, :, end]
        if not len(start_ds) or not len(end_ds):
            raise ValueError('Could not find node positions for all edges')
        start = start_ds.array(start_ds.kdims[:2])
        end = end_ds.array(end_ds.kdims[:2])
        paths.append(np.array([start[0], end[0]]))
    return paths
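The same edge-connection idea can be sketched without HoloViews. A minimal version, assuming node positions live in a plain dict (the names here are illustrative, not HoloViews API):

```python
# Minimal sketch of connect_edges without HoloViews: each edge becomes a
# two-point segment [source_xy, target_xy]. `positions` is a hypothetical
# {node_id: (x, y)} mapping.
def connect_edges_simple(edges, positions):
    paths = []
    for start, end in edges:
        if start not in positions or end not in positions:
            raise ValueError('Could not find node positions for all edges')
        paths.append([positions[start], positions[end]])
    return paths

segments = connect_edges_simple(
    [(0, 1), (1, 2)],
    {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (1.0, 1.0)})
```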
def recognize_using_websocket(self,
                              audio,
                              content_type,
                              recognize_callback,
                              model=None,
                              language_customization_id=None,
                              acoustic_customization_id=None,
                              customization_weight=None,
                              base_model_version=None,
                              inactivity_timeout=None,
                              interim_results=None,
                              keywords=None,
                              keywords_threshold=None,
                              max_alternatives=None,
                              word_alternatives_threshold=None,
                              word_confidence=None,
                              timestamps=None,
                              profanity_filter=None,
                              smart_formatting=None,
                              speaker_labels=None,
                              http_proxy_host=None,
                              http_proxy_port=None,
                              customization_id=None,
                              grammar_name=None,
                              redaction=None,
                              **kwargs):
    """
    Sends audio for speech recognition using web sockets.

    :param AudioSource audio: The audio to transcribe in the format specified by the `Content-Type` header.
    :param str content_type: The type of the input: audio/basic, audio/flac, audio/l16, audio/mp3, audio/mpeg, audio/mulaw, audio/ogg, audio/ogg;codecs=opus, audio/ogg;codecs=vorbis, audio/wav, audio/webm, audio/webm;codecs=opus, or audio/webm;codecs=vorbis.
    :param RecognizeCallback recognize_callback: The callback method for the websocket.
    :param str model: The identifier of the model that is to be used for the recognition request or, for the **Create a session** method, with the new session.
    :param str language_customization_id: The customization ID (GUID) of a custom language model that is to be used with the recognition request. The base model of the specified custom language model must match the model specified with the `model` parameter. You must make the request with service credentials created for the instance of the service that owns the custom model. By default, no custom language model is used. See [Custom models](https://console.bluemix.net/docs/services/speech-to-text/input.html#custom). **Note:** Use this parameter instead of the deprecated `customization_id` parameter.
    :param str acoustic_customization_id: The customization ID (GUID) of a custom acoustic model that is to be used with the recognition request or, for the **Create a session** method, with the new session. The base model of the specified custom acoustic model must match the model specified with the `model` parameter. You must make the request with service credentials created for the instance of the service that owns the custom model. By default, no custom acoustic model is used.
    :param float customization_weight: If you specify the customization ID (GUID) of a custom language model with the recognition request or, for sessions, with the **Create a session** method, the customization weight tells the service how much weight to give to words from the custom language model compared to those from the base model for the current request. Specify a value between 0.0 and 1.0. Unless a different customization weight was specified for the custom model when it was trained, the default value is 0.3. A customization weight that you specify overrides a weight that was specified when the custom model was trained. The default value yields the best performance in general. Assign a higher value if your audio makes frequent use of OOV words from the custom model. Use caution when setting the weight: a higher value can improve the accuracy of phrases from the custom model's domain, but it can negatively affect performance on non-domain phrases.
    :param str base_model_version: The version of the specified base model that is to be used with recognition request or, for the **Create a session** method, with the new session. Multiple versions of a base model can exist when a model is updated for internal improvements. The parameter is intended primarily for use with custom models that have been upgraded for a new base model. The default value depends on whether the parameter is used with or without a custom model. For more information, see [Base model version](https://console.bluemix.net/docs/services/speech-to-text/input.html#version).
    :param int inactivity_timeout: The time in seconds after which, if only silence (no speech) is detected in submitted audio, the connection is closed with a 400 error. Useful for stopping audio submission from a live microphone when a user simply walks away. Use `-1` for infinity.
    :param list[str] keywords: An array of keyword strings to spot in the audio. Each keyword string can include one or more tokens. Keywords are spotted only in the final hypothesis, not in interim results. If you specify any keywords, you must also specify a keywords threshold. You can spot a maximum of 1000 keywords. Omit the parameter or specify an empty array if you do not need to spot keywords.
    :param float keywords_threshold: A confidence value that is the lower bound for spotting a keyword. A word is considered to match a keyword if its confidence is greater than or equal to the threshold. Specify a probability between 0 and 1 inclusive. No keyword spotting is performed if you omit the parameter. If you specify a threshold, you must also specify one or more keywords.
    :param int max_alternatives: The maximum number of alternative transcripts to be returned. By default, a single transcription is returned.
    :param float word_alternatives_threshold: A confidence value that is the lower bound for identifying a hypothesis as a possible word alternative (also known as "Confusion Networks"). An alternative word is considered if its confidence is greater than or equal to the threshold. Specify a probability between 0 and 1 inclusive. No alternative words are computed if you omit the parameter.
    :param bool word_confidence: If `true`, a confidence measure in the range of 0 to 1 is returned for each word. By default, no word confidence measures are returned.
    :param bool timestamps: If `true`, time alignment is returned for each word. By default, no timestamps are returned.
    :param bool profanity_filter: If `true` (the default), filters profanity from all output except for keyword results by replacing inappropriate words with a series of asterisks. Set the parameter to `false` to return results with no censoring. Applies to US English transcription only.
    :param bool smart_formatting: If `true`, converts dates, times, series of digits and numbers, phone numbers, currency values, and internet addresses into more readable, conventional representations in the final transcript of a recognition request. For US English, also converts certain keyword strings to punctuation symbols. By default, no smart formatting is performed. Applies to US English and Spanish transcription only.
    :param bool speaker_labels: If `true`, the response includes labels that identify which words were spoken by which participants in a multi-person exchange. By default, no speaker labels are returned. Setting `speaker_labels` to `true` forces the `timestamps` parameter to be `true`, regardless of whether you specify `false` for the parameter. To determine whether a language model supports speaker labels, use the **Get models** method and check that the attribute `speaker_labels` is set to `true`. You can also refer to [Speaker labels](https://console.bluemix.net/docs/services/speech-to-text/output.html#speaker_labels).
    :param str http_proxy_host: http proxy host name.
    :param str http_proxy_port: http proxy port. If not set, set to 80.
    :param str customization_id: **Deprecated.** Use the `language_customization_id` parameter to specify the customization ID (GUID) of a custom language model that is to be used with the recognition request. Do not specify both parameters with a request.
    :param str grammar_name: The name of a grammar that is to be used with the recognition request. If you specify a grammar, you must also use the `language_customization_id` parameter to specify the name of the custom language model for which the grammar is defined. The service recognizes only strings that are recognized by the specified grammar; it does not recognize other custom words from the model's words resource. See [Grammars](https://cloud.ibm.com/docs/services/speech-to-text/output.html).
    :param bool redaction: If `true`, the service redacts, or masks, numeric data from final transcripts. The feature redacts any number that has three or more consecutive digits by replacing each digit with an `X` character. It is intended to redact sensitive numeric data, such as credit card numbers. By default, the service performs no redaction. When you enable redaction, the service automatically enables smart formatting, regardless of whether you explicitly disable that feature. To ensure maximum security, the service also disables keyword spotting (ignores the `keywords` and `keywords_threshold` parameters) and returns only a single final transcript (forces the `max_alternatives` parameter to be `1`). **Note:** Applies to US English, Japanese, and Korean transcription only. See [Numeric redaction](https://cloud.ibm.com/docs/services/speech-to-text/output.html#redaction).
    :param dict headers: A `dict` containing the request headers
    :return: A `dict` containing the `SpeechRecognitionResults` response.
    :rtype: dict
    """
    if audio is None:
        raise ValueError('audio must be provided')
    if not isinstance(audio, AudioSource):
        raise Exception('audio is not of type AudioSource. '
                        'Import the class from ibm_watson.websocket')
    if content_type is None:
        raise ValueError('content_type must be provided')
    if recognize_callback is None:
        raise ValueError('recognize_callback must be provided')
    if not isinstance(recognize_callback, RecognizeCallback):
        raise Exception('Callback is not a derived class of RecognizeCallback')

    headers = {}
    if self.default_headers is not None:
        headers = self.default_headers.copy()
    if 'headers' in kwargs:
        headers.update(kwargs.get('headers'))

    if self.token_manager:
        access_token = self.token_manager.get_token()
        headers['Authorization'] = '{0} {1}'.format(BEARER, access_token)
    else:
        authstring = "{0}:{1}".format(self.username, self.password)
        base64_authorization = base64.b64encode(
            authstring.encode('utf-8')).decode('utf-8')
        headers['Authorization'] = 'Basic {0}'.format(base64_authorization)

    url = self.url.replace('https:', 'wss:')
    params = {
        'model': model,
        'customization_id': customization_id,
        'acoustic_customization_id': acoustic_customization_id,
        'customization_weight': customization_weight,
        'base_model_version': base_model_version,
        'language_customization_id': language_customization_id
    }
    params = dict([(k, v) for k, v in params.items() if v is not None])
    url += '/v1/recognize?{0}'.format(urlencode(params))

    options = {
        'content_type': content_type,
        'inactivity_timeout': inactivity_timeout,
        'interim_results': interim_results,
        'keywords': keywords,
        'keywords_threshold': keywords_threshold,
        'max_alternatives': max_alternatives,
        'word_alternatives_threshold': word_alternatives_threshold,
        'word_confidence': word_confidence,
        'timestamps': timestamps,
        'profanity_filter': profanity_filter,
        'smart_formatting': smart_formatting,
        'speaker_labels': speaker_labels,
        'grammar_name': grammar_name,
        'redaction': redaction
    }
    options = dict([(k, v) for k, v in options.items() if v is not None])

    RecognizeListener(audio, options, recognize_callback, url, headers,
                      http_proxy_host, http_proxy_port, self.verify)
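The query-string assembly in `recognize_using_websocket` relies on dropping unset parameters before URL-encoding. A minimal standalone sketch of that pattern (`build_recognize_url` and its sample values are illustrative, not Watson SDK API):

```python
from urllib.parse import urlencode

def build_recognize_url(base_url, **params):
    # Drop parameters left as None, then URL-encode the rest,
    # mirroring how the websocket URL is assembled above.
    params = {k: v for k, v in params.items() if v is not None}
    return (base_url.replace('https:', 'wss:')
            + '/v1/recognize?' + urlencode(sorted(params.items())))

url = build_recognize_url('https://example.com',
                          model='en-US_BroadbandModel',
                          customization_id=None)
# url == 'wss://example.com/v1/recognize?model=en-US_BroadbandModel'
```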
def delete(self, *objects, **kwargs):
    '''
    This method offers the ability to delete multiple entities in a
    single round trip to Redis (assuming your models are all stored
    on the same server). You can call::

        session.delete(obj)
        session.delete(obj1, obj2, ...)
        session.delete([obj1, obj2, ...])

    The keyword argument ``force=True`` can be provided, which can
    force the deletion of an entity again, even if we believe it to
    already be deleted. If ``force=True``, we won't re-call the
    object's ``_before_delete()`` method, but we will re-call
    ``_after_delete()``.

    .. note:: Objects are automatically dropped from the session
        after delete for the sake of cache coherency.
    '''
    force = kwargs.get('force')
    from .model import Model, SKIP_ON_DELETE
    flat = []
    items = deque()
    items.extend(objects)
    types = set()
    # flatten what was passed in, more or less arbitrarily deep
    while items:
        o = items.popleft()
        if isinstance(o, (list, tuple)):
            items.extendleft(reversed(o))
        elif isinstance(o, Model):
            if force or not o._deleted:
                flat.append(o)
                types.add(type(o))
    # make sure we can bulk delete everything we've been requested to
    from .columns import MODELS_REFERENCED
    for t in types:
        if not t._no_fk or t._namespace in MODELS_REFERENCED:
            raise ORMError(
                "Can't bulk delete entities of models with foreign key relationships")
    c2p = {}
    for o in flat:
        # prepare delete
        if not o._deleted:
            o._before_delete()
        # make sure we've got connections
        c = o._connection
        if c not in c2p:
            c2p[c] = c.pipeline()
        # use our existing delete, and pass through a pipeline
        o.delete(_conn=c2p[c],
                 skip_on_delete_i_really_mean_it=SKIP_ON_DELETE)
    # actually delete the data in Redis
    for p in c2p.values():
        p.execute()
    # remove the objects from the session
    forget = self.forget
    for o in flat:
        if o._deleted == 1:
            o._after_delete()
            o._deleted = 2
        forget(o)
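The arbitrary-depth flattening at the top of ``delete`` can be sketched on its own. A minimal version using the same ``deque`` trick, with ``extendleft(reversed(...))`` preserving left-to-right order:

```python
from collections import deque

def flatten(objects):
    # Iterative flattening of arbitrarily nested lists/tuples;
    # extendleft(reversed(...)) keeps the original element order.
    flat = []
    items = deque(objects)
    while items:
        o = items.popleft()
        if isinstance(o, (list, tuple)):
            items.extendleft(reversed(o))
        else:
            flat.append(o)
    return flat

flatten([1, [2, (3, [4])], 5])  # [1, 2, 3, 4, 5]
```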
def zrange(key, start, stop, host=None, port=None, db=None, password=None):
    '''
    Get a range of values from a sorted set in Redis by index

    CLI Example:

    .. code-block:: bash

        salt '*' redis.zrange foo_sorted 0 10
    '''
    server = _connect(host, port, db, password)
    return server.zrange(key, start, stop)
def write_table(table, target, tablename=None, ilwdchar_compat=None,
                **kwargs):
    """Write a `~astropy.table.Table` to file in LIGO_LW XML format

    This method will attempt to write in the new `ligo.lw` format
    (if ``ilwdchar_compat`` is ``None`` or ``False``), and will fall
    back to the older `glue.ligolw` format if that fails (if
    ``ilwdchar_compat`` is ``None`` or ``True``).
    """
    if tablename is None:  # try and get tablename from metadata
        tablename = table.meta.get('tablename', None)
    if tablename is None:  # panic
        raise ValueError("please pass ``tablename=`` to specify the target "
                         "LIGO_LW Table Name")
    try:
        llwtable = table_to_ligolw(
            table,
            tablename,
            ilwdchar_compat=ilwdchar_compat or False,
        )
    except LigolwElementError as exc:
        if ilwdchar_compat is not None:
            raise
        try:
            llwtable = table_to_ligolw(table, tablename,
                                       ilwdchar_compat=True)
        except Exception:
            raise exc
    return write_ligolw_tables(target, [llwtable], **kwargs)
def load_build_config(self, config=None):
    '''load a google compute config, meaning that we have the following cases:

       1. the user has not provided a config file directly, we look in env.
       2. the environment is not set, so we use a reasonable default
       3. if the final string is not found as a file, we look for it in library
       4. we load the library name, or the user file, else error

       Parameters
       ==========
       config: the config file the user has provided, or the library URI
    '''
    # If the config is already a dictionary, it's loaded
    if isinstance(config, dict):
        bot.debug('Config is already loaded.')
        return config

    # if the config is not defined, look in environment, then choose a default
    if config is None:
        config = self._get_and_update_setting(
            'SREGISTRY_COMPUTE_CONFIG',
            'google/compute/ubuntu/securebuild-2.4.3')

    # If the config is a file, we read it
    elif os.path.exists(config):
        return read_json(config)

    # otherwise, try to look it up in library
    configs = self._load_templates(config)
    if configs is None:
        bot.error('%s is not a valid config.' % config)
        sys.exit(1)
    bot.info('Found config %s in library!' % config)
    config = configs[0]
    return config
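The env-then-default lookup (cases 1 and 2 above) is a common pattern. A minimal standalone sketch, assuming nothing about sregistry's internal `_get_and_update_setting`:

```python
import os

def get_setting(key, default):
    # Case 1: prefer an explicitly set environment variable;
    # Case 2: otherwise fall back to a reasonable default.
    return os.environ.get(key) or default

os.environ.pop('SREGISTRY_COMPUTE_CONFIG', None)
config = get_setting('SREGISTRY_COMPUTE_CONFIG',
                     'google/compute/ubuntu/securebuild-2.4.3')
# config == 'google/compute/ubuntu/securebuild-2.4.3'
```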
def hpx_to_coords(h, shape):
    """Generate an N x D list of pixel center coordinates where N is
    the number of pixels and D is the dimensionality of the map."""
    x, z = hpx_to_axes(h, shape)
    x = np.sqrt(x[0:-1] * x[1:])
    z = z[:-1] + 0.5
    x = np.ravel(np.ones(shape) * x[:, np.newaxis])
    z = np.ravel(np.ones(shape) * z[np.newaxis, :])
    return np.vstack((x, z))
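The `np.sqrt(x[0:-1] * x[1:])` line computes geometric-mean bin centers from bin edges (the midpoint in log space, appropriate for logarithmically spaced axes), while `z[:-1] + 0.5` takes arithmetic midpoints of unit-width bins. A dependency-free sketch of both center conventions:

```python
import math

def geometric_centers(edges):
    # Center of each bin as the geometric mean of its two edges,
    # i.e. the midpoint in log space.
    return [math.sqrt(a * b) for a, b in zip(edges[:-1], edges[1:])]

def midpoint_centers(edges):
    # Center of each bin as the arithmetic midpoint.
    return [(a + b) / 2 for a, b in zip(edges[:-1], edges[1:])]

geometric_centers([1.0, 10.0, 100.0])
midpoint_centers([0.0, 1.0, 2.0])  # [0.5, 1.5]
```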
def weld_align(df_index_arrays, df_index_weld_types,
               series_index_arrays, series_index_weld_types,
               series_data, series_weld_type):
    """Returns the data from the Series aligned to the DataFrame index.

    Parameters
    ----------
    df_index_arrays : list of (numpy.ndarray or WeldObject)
        The index columns as a list.
    df_index_weld_types : list of WeldType
    series_index_arrays : numpy.ndarray or WeldObject
        The index of the Series.
    series_index_weld_types : list of WeldType
    series_data : numpy.ndarray or WeldObject
        The data of the Series.
    series_weld_type : WeldType

    Returns
    -------
    WeldObject
        Representation of this computation.

    """
    weld_obj_index_df = weld_arrays_to_vec_of_struct(df_index_arrays,
                                                     df_index_weld_types)
    weld_obj_series_dict = weld_data_to_dict(series_index_arrays,
                                             series_index_weld_types,
                                             series_data,
                                             series_weld_type)

    weld_obj = create_empty_weld_object()
    df_index_obj_id = get_weld_obj_id(weld_obj, weld_obj_index_df)
    series_dict_obj_id = get_weld_obj_id(weld_obj, weld_obj_series_dict)

    index_type = struct_of('{e}', df_index_weld_types)
    missing_literal = default_missing_data_literal(series_weld_type)
    if series_weld_type == WeldVec(WeldChar()):
        missing_literal = get_weld_obj_id(weld_obj, missing_literal)

    weld_template = """result(
    for({df_index},
        appender[{data_type}],
        |b: appender[{data_type}], i: i64, e: {index_type}|
            if(keyexists({series_dict}, e),
                merge(b, lookup({series_dict}, e)),
                merge(b, {missing})
            )
    )
)"""

    weld_obj.weld_code = weld_template.format(series_dict=series_dict_obj_id,
                                              df_index=df_index_obj_id,
                                              index_type=index_type,
                                              data_type=series_weld_type,
                                              missing=missing_literal)

    return weld_obj
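Semantically, the Weld template above performs one dict lookup per DataFrame index entry, merging a missing-data literal when the key is absent. A pure-Python sketch of the same alignment (names illustrative):

```python
def align_series_to_index(df_index, series_index, series_data, missing=None):
    # Build the Series lookup dict once, then align: for each DataFrame
    # index key, take the Series value if present, else the missing literal.
    lookup = dict(zip(series_index, series_data))
    return [lookup.get(key, missing) for key in df_index]

align_series_to_index(['a', 'b', 'c'], ['c', 'a'], [30, 10])
# [10, None, 30]
```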
def can(ability, add_headers=None):
    """Test whether an ability is allowed."""
    client = ClientMixin(api_key=None)
    try:
        client.request('GET', endpoint='abilities/%s' % ability,
                       add_headers=add_headers)
        return True
    except Exception:
        pass
    return False
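`can` is an instance of the probe-and-catch pattern: attempt the call and translate any failure into `False`. A standalone sketch (the probe callables here are illustrative):

```python
def is_allowed(probe):
    # Run the probe; success means the ability is allowed,
    # any exception means it is not.
    try:
        probe()
        return True
    except Exception:
        return False

def failing_probe():
    raise RuntimeError('forbidden')

is_allowed(lambda: None)   # True
is_allowed(failing_probe)  # False
```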
def sendMessage(self, exchange, routing_key, message, properties=None,
                UUID=None):
    """
    With this function, you can send a message to `exchange`.

    Args:
        exchange (str): name of the exchange you want the message to be
            delivered to
        routing_key (str): which routing key to use in headers of message
        message (str): body of message
        properties (dict, optional): properties of message - if not used,
            or set to ``None``, ``self.content_type`` and
            ``delivery_mode=2`` (persistent) is used
        UUID (str, optional): UUID of the message. If set, it is included
            into ``properties`` of the message.
    """
    if properties is None:
        properties = pika.BasicProperties(
            content_type=self.content_type,
            delivery_mode=2,  # persistent, as documented above
            headers={}
        )
    if UUID is not None:
        if properties.headers is None:
            properties.headers = {}
        properties.headers["UUID"] = UUID

    self.channel.basic_publish(
        exchange=exchange,
        routing_key=routing_key,
        properties=properties,
        body=message
    )
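The defaulting logic (build default properties only when none are supplied, then merge the UUID into the headers) can be sketched with plain dicts, with no pika dependency; the field names are illustrative:

```python
def build_properties(properties=None, uuid=None,
                     content_type='application/json'):
    # Default: persistent delivery (delivery_mode=2) with empty headers.
    if properties is None:
        properties = {'content_type': content_type,
                      'delivery_mode': 2,
                      'headers': {}}
    # Merge the UUID into headers, creating them if absent.
    if uuid is not None:
        if properties.get('headers') is None:
            properties['headers'] = {}
        properties['headers']['UUID'] = uuid
    return properties

props = build_properties(uuid='abc-123')
# props['headers'] == {'UUID': 'abc-123'}
```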
def update(context, id, etag, name, password, email, fullname, team_id,
           active):
    """update(context, id, etag, name, password, email, fullname, team_id,
              active)

    Update a user.

    >>> dcictl user-update [OPTIONS]

    :param string id: ID of the user to update [required]
    :param string etag: Entity tag of the user resource [required]
    :param string name: Name of the user
    :param string password: Password of the user
    :param string email: Email of the user
    :param string fullname: Full name of the user
    :param string team_id: Team of the user
    :param boolean active: Set the user in the active state
    """
    result = user.update(context, id=id, etag=etag, name=name,
                         password=password, team_id=team_id,
                         state=utils.active_string(active), email=email,
                         fullname=fullname)
    utils.format_output(result, context.format)
update(context, id, etag, name, password, email, fullname, team_id, active) Update a user. >>> dcictl user-update [OPTIONS] :param string id: ID of the user to update [required] :param string etag: Entity tag of the user resource [required] :param string name: Name of the user :param string password: Password of the user :param string email: Email of the user :param string fullname: Full name of the user :param string team_id: ID of the team the user belongs to :param boolean active: Set the user in the active state
def get_vars_in_expression(source):
    '''Get list of variable names in a python expression.'''

    import compiler  # NOTE: Python 2 only; on Python 3 use the `ast` module
    from compiler.ast import Node

    ##
    # @brief Internal recursive function.
    # @param node An AST parse Node.
    # @param var_list Input list of variables.
    # @return An updated list of variables.
    def get_vars_body(node, var_list=None):
        # Avoid the mutable default argument so repeated calls start clean.
        if var_list is None:
            var_list = []
        if isinstance(node, Node):
            if node.__class__.__name__ == 'Name':
                for child in node.getChildren():
                    if child not in var_list:
                        var_list.append(child)
            # Recurse into every child node.
            for child in node.getChildren():
                if isinstance(child, Node):
                    var_list = get_vars_body(child, var_list)
        return var_list

    return get_vars_body(compiler.parse(source))
Get list of variable names in a python expression.
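The `compiler` module used above exists only in Python 2. A rough modern equivalent (a sketch, not part of the original library) can be built on the stdlib `ast` module:

```python
import ast

def get_vars_in_expression(source):
    """Return the sorted list of variable names in a Python expression.

    Python 3 sketch of the same idea: ast.Name nodes cover plain
    identifiers, and the set comprehension collapses duplicates.
    """
    tree = ast.parse(source)
    return sorted({node.id for node in ast.walk(tree)
                   if isinstance(node, ast.Name)})
```

Unlike the Python 2 version, this returns the names sorted and de-duplicated.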
def get_time(self): """ :return: Steam aligned timestamp :rtype: int """ if (self.steam_time_offset is None or (self.align_time_every and (time() - self._offset_last_check) > self.align_time_every) ): self.steam_time_offset = get_time_offset() if self.steam_time_offset is not None: self._offset_last_check = time() return int(time() + (self.steam_time_offset or 0))
:return: Steam aligned timestamp :rtype: int
def uncomment(comment): """ Converts the comment node received to a non-commented element, in place, and will return the new node. This may fail, primarily due to special characters within the comment that the xml parser is unable to handle. If it fails, this method will log an error and return None """ parent = comment.parentNode h = html.parser.HTMLParser() data = h.unescape(comment.data) try: node = minidom.parseString(data).firstChild except xml.parsers.expat.ExpatError: # Could not parse! log.error('Could not uncomment node due to parsing error!') return None else: parent.replaceChild(node, comment) return node
Converts the comment node received to a non-commented element, in place, and will return the new node. This may fail, primarily due to special characters within the comment that the xml parser is unable to handle. If it fails, this method will log an error and return None
def save_default_values(self): """Save InaSAFE default values.""" for parameter_container in self.default_value_parameter_containers: parameters = parameter_container.get_parameters() for parameter in parameters: set_inasafe_default_value_qsetting( self.settings, GLOBAL, parameter.guid, parameter.value )
Save InaSAFE default values.
def transacted(func): """ Return a callable which will invoke C{func} in a transaction using the C{store} attribute of the first parameter passed to it. Typically this is used to create Item methods which are automatically run in a transaction. The attributes of the returned callable will resemble those of C{func} as closely as L{twisted.python.util.mergeFunctionMetadata} can make them. """ def transactionified(item, *a, **kw): return item.store.transact(func, item, *a, **kw) return mergeFunctionMetadata(func, transactionified)
Return a callable which will invoke C{func} in a transaction using the C{store} attribute of the first parameter passed to it. Typically this is used to create Item methods which are automatically run in a transaction. The attributes of the returned callable will resemble those of C{func} as closely as L{twisted.python.util.mergeFunctionMetadata} can make them.
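A minimal standalone sketch of the same decorator pattern, using `functools.wraps` in place of Twisted's `mergeFunctionMetadata` (an assumption for the sake of a self-contained example, not the original API):

```python
import functools

def transacted(func):
    """Wrap `func` so it runs via the item's store.transact method."""
    @functools.wraps(func)  # preserve __name__, __doc__, etc. of func
    def transactionified(item, *a, **kw):
        return item.store.transact(func, item, *a, **kw)
    return transactionified

class FakeStore:
    """Stand-in store whose transact just calls the function directly."""
    def transact(self, func, *a, **kw):
        return func(*a, **kw)

class Item:
    store = FakeStore()

    @transacted
    def double(self, x):
        return 2 * x
```

With a real store, `transact` would open and commit a transaction around the call.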
def save(self, *args, **kwargs): """creates the slug, queues up for indexing and saves the instance :param args: inline arguments (optional) :param kwargs: keyword arguments :return: `bulbs.content.Content` """ if not self.slug: self.slug = slugify(self.build_slug())[:self._meta.get_field("slug").max_length] if not self.is_indexed: if kwargs is None: kwargs = {} kwargs["index"] = False content = super(Content, self).save(*args, **kwargs) index_content_contributions.delay(self.id) index_content_report_content_proxy.delay(self.id) post_to_instant_articles_api.delay(self.id) return content
creates the slug, queues up for indexing and saves the instance :param args: inline arguments (optional) :param kwargs: keyword arguments :return: `bulbs.content.Content`
def remove(self, id): """Remove a object by id Args: id (int): Object's id should be deleted Returns: len(int): affected rows """ before_len = len(self.model.db) self.model.db = [t for t in self.model.db if t["id"] != id] if not self._batch.enable.is_set(): self.model.save_db() return before_len - len(self.model.db)
Remove a object by id Args: id (int): Object's id should be deleted Returns: len(int): affected rows
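The filter-and-diff removal pattern above can be shown as a standalone sketch (the model and batch plumbing are omitted; only the counting idea is kept):

```python
def remove_by_id(db, id):
    """Remove every record whose "id" matches and return the number
    of rows removed, mirroring the approach above."""
    before = len(db)
    # Slice-assign so the change is visible to callers holding `db`.
    db[:] = [t for t in db if t["id"] != id]
    return before - len(db)
```

Counting before and after the rebuild gives the affected-rows figure without a second pass.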
async def get_data(self):
    """Retrieve the data."""
    try:
        with async_timeout.timeout(5, loop=self._loop):
            response = await self._session.get(
                '{}/{}/'.format(self.url, self.sensor_id))

        _LOGGER.debug(
            "Response from luftdaten.info: %s", response.status)
        self.data = await response.json()
        _LOGGER.debug(self.data)
    except (asyncio.TimeoutError, aiohttp.ClientError):
        _LOGGER.error("Can not load data from luftdaten.info")
        raise exceptions.LuftdatenConnectionError()

    if not self.data:
        self.values = self.meta = None
        return

    try:
        sensor_data = sorted(
            self.data,
            key=lambda timestamp: timestamp['timestamp'],
            reverse=True)[0]
        _LOGGER.debug(sensor_data)

        for entry in sensor_data['sensordatavalues']:
            for measurement in self.values.keys():
                if measurement == entry['value_type']:
                    self.values[measurement] = float(entry['value'])

        self.meta['sensor_id'] = self.sensor_id
        self.meta['longitude'] = float(
            sensor_data['location']['longitude'])
        self.meta['latitude'] = float(sensor_data['location']['latitude'])
    except (TypeError, IndexError):
        raise exceptions.LuftdatenError()
Retrieve the data.
def _serve_forever_wrapper(self, _srv, poll_interval=0.1): """ Wrapper for the server created for a SSH forward """ self.logger.info('Opening tunnel: {0} <> {1}'.format( address_to_str(_srv.local_address), address_to_str(_srv.remote_address)) ) _srv.serve_forever(poll_interval) # blocks until finished self.logger.info('Tunnel: {0} <> {1} released'.format( address_to_str(_srv.local_address), address_to_str(_srv.remote_address)) )
Wrapper for the server created for a SSH forward
def bifurcation_partitions(neurites, neurite_type=NeuriteType.all): '''Partition at bifurcation points of a collection of neurites''' return map(_bifurcationfunc.bifurcation_partition, iter_sections(neurites, iterator_type=Tree.ibifurcation_point, neurite_filter=is_type(neurite_type)))
Partition at bifurcation points of a collection of neurites
def encode_multiple_layers(out, features_by_layer, zoom): """ features_by_layer should be a dict: layer_name -> feature tuples """ precision = precision_for_zoom(zoom) geojson = {} for layer_name, features in features_by_layer.items(): fs = create_layer_feature_collection(features, precision) geojson[layer_name] = fs json.dump(geojson, out)
features_by_layer should be a dict: layer_name -> feature tuples
def create_el(name, text=None, attrib=None): """Create element with given attributes and set element.text property to given text value (if text is not None) :param name: element name :type name: str :param text: text node value :type text: str :param attrib: attributes :type attrib: dict :returns: xml element :rtype: Element """ if attrib is None: attrib = {} el = ET.Element(name, attrib) if text is not None: el.text = text return el
Create element with given attributes and set element.text property to given text value (if text is not None) :param name: element name :type name: str :param text: text node value :type text: str :param attrib: attributes :type attrib: dict :returns: xml element :rtype: Element
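A quick usage sketch of the helper above with the stdlib ElementTree (the helper body is repeated so the snippet runs on its own):

```python
import xml.etree.ElementTree as ET

def create_el(name, text=None, attrib=None):
    # Same helper as above, reproduced for self-containment.
    if attrib is None:
        attrib = {}
    el = ET.Element(name, attrib)
    if text is not None:
        el.text = text
    return el

# Build a small element and serialize it to a string.
el = create_el("title", text="Hello", attrib={"lang": "en"})
xml_str = ET.tostring(el, encoding="unicode")
```

Defaulting `attrib` to `None` rather than `{}` avoids the shared mutable default pitfall.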
def _parse_log_statement(options): ''' Parses a log path. ''' for i in options: if _is_reference(i): _add_reference(i, _current_statement) elif _is_junction(i): _add_junction(i) elif _is_inline_definition(i): _add_inline_definition(i, _current_statement)
Parses a log path.
def join_left(self, right_table=None, fields=None, condition=None, join_type='LEFT JOIN',
              schema=None, left_table=None, extract_fields=True,
              prefix_fields=False, field_prefix=None, allow_duplicates=False):
    """
    Wrapper for ``self.join`` with a default join of 'LEFT JOIN'

    :type right_table: str or dict or :class:`Table <querybuilder.tables.Table>`
    :param right_table: The table being joined with. This can be a string of the table
        name, a dict of {'alias': table}, or a ``Table`` instance

    :type fields: str or tuple or list or :class:`Field <querybuilder.fields.Field>`
    :param fields: The fields to select from ``right_table``. Defaults to `None`. This can be
        a single field, a tuple of fields, or a list of fields. Each field can be a string
        or ``Field`` instance

    :type condition: str
    :param condition: The join condition specifying the fields being joined. If the two tables
        being joined are instances of ``ModelTable`` then the condition should be created
        automatically.

    :type join_type: str
    :param join_type: The type of join (JOIN, LEFT JOIN, INNER JOIN, etc). Defaults to 'LEFT JOIN'

    :type schema: str
    :param schema: This is not implemented, but it will be a string of the db schema name

    :type left_table: str or dict or :class:`Table <querybuilder.tables.Table>`
    :param left_table: The left table being joined with. This can be a string of the table
        name, a dict of {'alias': table}, or a ``Table`` instance. Defaults to the first
        table in the query.

    :type extract_fields: bool
    :param extract_fields: If True and joining with a ``ModelTable``, then '*' fields will
        be converted to individual fields for each column in the table. Defaults to True.

    :type prefix_fields: bool
    :param prefix_fields: If True, then the joined table will have each of its field names
        prefixed with the field_prefix. If no field_prefix is specified, a name will be
        generated based on the join field name. This is usually used with nesting results
        in order to create models in python or javascript. Defaults to False.

    :type field_prefix: str
    :param field_prefix: The field prefix to be used in front of each field name if
        prefix_fields is set to True. If no field_prefix is set, one will be automatically
        created based on the join field name.

    :return: self
    :rtype: :class:`Query <querybuilder.query.Query>`
    """
    return self.join(
        right_table=right_table,
        fields=fields,
        condition=condition,
        join_type=join_type,
        schema=schema,
        left_table=left_table,
        extract_fields=extract_fields,
        prefix_fields=prefix_fields,
        field_prefix=field_prefix,
        allow_duplicates=allow_duplicates
    )
Wrapper for ``self.join`` with a default join of 'LEFT JOIN' :type right_table: str or dict or :class:`Table <querybuilder.tables.Table>` :param right_table: The table being joined with. This can be a string of the table name, a dict of {'alias': table}, or a ``Table`` instance :type fields: str or tuple or list or :class:`Field <querybuilder.fields.Field>` :param fields: The fields to select from ``right_table``. Defaults to `None`. This can be a single field, a tuple of fields, or a list of fields. Each field can be a string or ``Field`` instance :type condition: str :param condition: The join condition specifying the fields being joined. If the two tables being joined are instances of ``ModelTable`` then the condition should be created automatically. :type join_type: str :param join_type: The type of join (JOIN, LEFT JOIN, INNER JOIN, etc). Defaults to 'LEFT JOIN' :type schema: str :param schema: This is not implemented, but it will be a string of the db schema name :type left_table: str or dict or :class:`Table <querybuilder.tables.Table>` :param left_table: The left table being joined with. This can be a string of the table name, a dict of {'alias': table}, or a ``Table`` instance. Defaults to the first table in the query. :type extract_fields: bool :param extract_fields: If True and joining with a ``ModelTable``, then '*' fields will be converted to individual fields for each column in the table. Defaults to True. :type prefix_fields: bool :param prefix_fields: If True, then the joined table will have each of its field names prefixed with the field_prefix. If no field_prefix is specified, a name will be generated based on the join field name. This is usually used with nesting results in order to create models in python or javascript. Defaults to False. :type field_prefix: str :param field_prefix: The field prefix to be used in front of each field name if prefix_fields is set to True. If no field_prefix is set, one will be automatically created based on the join field name. :return: self :rtype: :class:`Query <querybuilder.query.Query>`
def create(self, cid, configData):
    """ Create a new named (cid) configuration from a parameter dictionary (configData). """
    configArgs = {'configId': cid, 'params': configData, 'force': True}
    cid = self.server.call('post', "/config/create", configArgs,
                           forceText=True, headers=TextAcceptHeader)
    new_config = Config(cid, self.server)
    return new_config
Create a new named (cid) configuration from a parameter dictionary (configData).
def configure(self, cnf=None, **kw): """ Configure resources of the widget. To get the list of options for this widget, call the method :meth:`~Table.keys`. See :meth:`~Table.__init__` for a description of the widget specific option. """ if cnf == 'drag_cols': return 'drag_cols', self._drag_cols elif cnf == 'drag_rows': return 'drag_rows', self._drag_rows elif cnf == 'sortable': return 'sortable', self._sortable if isinstance(cnf, dict): kwargs = cnf.copy() kwargs.update(kw) # keyword arguments override cnf content cnf = {} # everything is in kwargs so no need of cnf cnf2 = {} # to configure the preview else: kwargs = kw cnf2 = cnf sortable = bool(kwargs.pop("sortable", self._sortable)) if sortable != self._sortable: self._config_sortable(sortable) drag_cols = bool(kwargs.pop("drag_cols", self._drag_cols)) if drag_cols != self._drag_cols: self._config_drag_cols(drag_cols) self._drag_rows = bool(kwargs.pop("drag_rows", self._drag_rows)) if 'columns' in kwargs: # update column type dict for col in list(self._column_types.keys()): if col not in kwargs['columns']: del self._column_types[col] for col in kwargs['columns']: if col not in self._column_types: self._column_types[col] = str # Remove some keywords from the preview configuration dict kw2 = kwargs.copy() kw2.pop('displaycolumns', None) kw2.pop('xscrollcommand', None) kw2.pop('yscrollcommand', None) self._visual_drag.configure(cnf2, **kw2) if len(kwargs) != 0: return ttk.Treeview.configure(self, cnf, **kwargs)
Configure resources of the widget. To get the list of options for this widget, call the method :meth:`~Table.keys`. See :meth:`~Table.__init__` for a description of the widget specific option.
def apply_slippage_penalty(returns, txn_daily, simulate_starting_capital,
                           backtest_starting_capital, impact=0.1):
    """
    Applies quadratic volume-share slippage model to daily returns based
    on the proportion of the observed historical daily bar dollar volume
    consumed by the strategy's trades. Scales the size of trades based
    on the ratio of the starting capital we wish to test to the starting
    capital of the passed backtest data.

    Parameters
    ----------
    returns : pd.Series
        Time series of daily returns.
    txn_daily : pd.Series
        Daily transaction totals, closing price, and daily volume for
        each traded name. See price_volume_daily_txns for more details.
    simulate_starting_capital : integer
        Capital at which we want to test.
    backtest_starting_capital : integer
        Capital base at which the backtest was originally run.
    impact : float
        Scales the size of the slippage penalty. See the Zipline
        volume-share slippage model.

    Returns
    -------
    adj_returns : pd.Series
        Slippage penalty adjusted daily returns.
    """
    mult = simulate_starting_capital / backtest_starting_capital
    simulate_traded_shares = abs(mult * txn_daily.amount)
    simulate_traded_dollars = txn_daily.price * simulate_traded_shares
    simulate_pct_volume_used = simulate_traded_shares / txn_daily.volume

    penalties = simulate_pct_volume_used**2 \
        * impact * simulate_traded_dollars

    daily_penalty = penalties.resample('D').sum()
    daily_penalty = daily_penalty.reindex(returns.index).fillna(0)

    # Since we are scaling the numerator of the penalties linearly
    # by capital base, it makes the most sense to scale the denominator
    # similarly. In other words, since we aren't applying compounding to
    # simulate_traded_shares, we shouldn't apply compounding to pv.
    portfolio_value = ep.cum_returns(
        returns, starting_value=backtest_starting_capital) * mult

    adj_returns = returns - (daily_penalty / portfolio_value)

    return adj_returns
Applies quadratic volume-share slippage model to daily returns based on the proportion of the observed historical daily bar dollar volume consumed by the strategy's trades. Scales the size of trades based on the ratio of the starting capital we wish to test to the starting capital of the passed backtest data. Parameters ---------- returns : pd.Series Time series of daily returns. txn_daily : pd.Series Daily transaction totals, closing price, and daily volume for each traded name. See price_volume_daily_txns for more details. simulate_starting_capital : integer Capital at which we want to test. backtest_starting_capital : integer Capital base at which the backtest was originally run. impact : float Scales the size of the slippage penalty. See the Zipline volume-share slippage model. Returns ------- adj_returns : pd.Series Slippage penalty adjusted daily returns.
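The per-bar penalty formula at the heart of the function above can be illustrated in pure Python (the real function operates on pandas Series; this single-bar sketch is an illustration only):

```python
def slippage_penalty(traded_shares, price, bar_volume, impact=0.1):
    """One-bar version of the quadratic volume-share penalty:
    penalty = (shares / volume)**2 * impact * traded_dollars."""
    pct_volume_used = traded_shares / bar_volume
    traded_dollars = traded_shares * price
    return pct_volume_used ** 2 * impact * traded_dollars
```

Trading 10% of a bar's volume therefore costs `impact * 1%` of the traded dollars, so the penalty grows quadratically as trades consume more of the bar.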
def _get_sorter(subpath='', **defaults): """Return function to generate specific subreddit Submission listings.""" @restrict_access(scope='read') def _sorted(self, *args, **kwargs): """Return a get_content generator for some RedditContentObject type. The additional parameters are passed directly into :meth:`.get_content`. Note: the `url` parameter cannot be altered. """ if not kwargs.get('params'): kwargs['params'] = {} for key, value in six.iteritems(defaults): kwargs['params'].setdefault(key, value) url = urljoin(self._url, subpath) # pylint: disable=W0212 return self.reddit_session.get_content(url, *args, **kwargs) return _sorted
Return function to generate specific subreddit Submission listings.
def add_reporting_args(parser): """Add reporting arguments to an argument parser. Parameters ---------- parser: `argparse.ArgumentParser` Returns ------- `argparse.ArgumentGroup` The argument group created. """ g = parser.add_argument_group('Reporting options') g.add_argument('-l', '--log-file', default=None, type = str_type, metavar = file_mv, help = textwrap.dedent("""\ Path of log file (if specified, report to stdout AND file.""")) g.add_argument('-q', '--quiet', action='store_true', help = 'Only output errors and warnings.') g.add_argument('-v', '--verbose', action='store_true', help = 'Enable verbose output. Ignored if --quiet is specified.') return parser
Add reporting arguments to an argument parser. Parameters ---------- parser: `argparse.ArgumentParser` Returns ------- `argparse.ArgumentGroup` The argument group created.
def list(self, teamId=None, rType=None, maxResults=C.MAX_RESULT_DEFAULT, limit=C.ALL): """ rType can be DIRECT or GROUP """ queryParams = {'teamId': teamId, 'type': rType, 'max': maxResults} queryParams = self.clean_query_Dict(queryParams) ret = self.send_request(C.GET, self.end, data=queryParams, limit=limit) return [Room(self.token, roomData) for roomData in ret['items']]
rType can be DIRECT or GROUP
async def cancel_scheduled_messages(self, *sequence_numbers):
    """Cancel one or more messages that have previously been scheduled and are still pending.

    :param sequence_numbers: The sequence numbers of the scheduled messages.
    :type sequence_numbers: int

    Example:
        .. literalinclude:: ../examples/async_examples/test_examples_async.py
            :start-after: [START cancel_schedule_messages]
            :end-before: [END cancel_schedule_messages]
            :language: python
            :dedent: 4
            :caption: Cancel scheduled messages.

    """
    if not self.running:
        await self.open()
    numbers = [types.AMQPLong(s) for s in sequence_numbers]
    request_body = {'sequence-numbers': types.AMQPArray(numbers)}
    return await self._mgmt_request_response(
        REQUEST_RESPONSE_CANCEL_SCHEDULED_MESSAGE_OPERATION,
        request_body,
        mgmt_handlers.default)
Cancel one or more messages that have previously been scheduled and are still pending. :param sequence_numbers: The sequence numbers of the scheduled messages. :type sequence_numbers: int Example: .. literalinclude:: ../examples/async_examples/test_examples_async.py :start-after: [START cancel_schedule_messages] :end-before: [END cancel_schedule_messages] :language: python :dedent: 4 :caption: Cancel scheduled messages.
def patch(self, deviceId): """ Updates the device with the given data. Supports a json payload like { fs: newFs samplesPerBatch: samplesPerBatch gyroEnabled: true gyroSensitivity: 500 accelerometerEnabled: true accelerometerSensitivity: 2 } A heartbeat is sent on completion of the request to ensure the analyser gets a rapid update. :return: the device and 200 if the update was ok, 400 if not. """ try: device = self.recordingDevices.get(deviceId) if device.status == RecordingDeviceStatus.INITIALISED: errors = self._handlePatch(device) if len(errors) == 0: return device, 200 else: return device, 500 else: return device, 400 finally: logger.info("Sending adhoc heartbeat on device state update") self.heartbeater.sendHeartbeat()
Updates the device with the given data. Supports a json payload like { fs: newFs samplesPerBatch: samplesPerBatch gyroEnabled: true gyroSensitivity: 500 accelerometerEnabled: true accelerometerSensitivity: 2 } A heartbeat is sent on completion of the request to ensure the analyser gets a rapid update. :return: the device and 200 if the update was ok, 400 if not.
def zrevrank(self, name, value):
    """
    Returns the ranking in reverse order for the member

    :param name: str     the name of the redis key
    :param value: str    the member whose rank should be returned
    """
    with self.pipe as pipe:
        return pipe.zrevrank(self.redis_key(name),
                             self.valueparse.encode(value))
Returns the ranking in reverse order for the member :param name: str the name of the redis key :param value: str the member whose rank should be returned
def dump_to_store(self, store, **kwargs): """Store dataset contents to a backends.*DataStore object.""" from ..backends.api import dump_to_store # TODO: rename and/or cleanup this method to make it more consistent # with to_netcdf() return dump_to_store(self, store, **kwargs)
Store dataset contents to a backends.*DataStore object.
def add_resource(self, format, resource, locale, domain=None): """ Adds a resource @type format: str @param format: Name of the loader (@see add_loader) @type resource: str @param resource: The resource name @type locale: str @type domain: str @raises: ValueError If the locale contains invalid characters @return: """ if domain is None: domain = 'messages' self._assert_valid_locale(locale) self.resources[locale].append([format, resource, domain]) if locale in self.fallback_locales: self.catalogues = {} else: self.catalogues.pop(locale, None)
Adds a resource @type format: str @param format: Name of the loader (@see add_loader) @type resource: str @param resource: The resource name @type locale: str @type domain: str @raises: ValueError If the locale contains invalid characters @return:
def _kbstr_to_cimval(key, val): """ Convert a keybinding value string as found in a WBEM URI into a CIM object or CIM data type, and return it. """ if val[0] == '"' and val[-1] == '"': # A double quoted key value. This could be any of these CIM types: # * string (see stringValue in DSP00004) # * datetime (see datetimeValue in DSP0207) # * reference (see referenceValue in DSP0207) # Note: The actual definition of referenceValue is missing in # DSP0207, see issue #929. Pywbem implements: # referenceValue = WBEM-URI-UntypedInstancePath. # Note: The definition of stringValue in DSP0004 allows multiple # quoted parts (as in MOF), see issue #931. Pywbem implements only # a single quoted part. # We use slicing instead of strip() for removing the surrounding # double quotes, because there could be an escaped double quote # before the terminating double quote. cimval = val[1:-1] # Unescape the backslash-escaped string value cimval = re.sub(r'\\(.)', r'\1', cimval) # Try all possibilities. Note that this means that string-typed # properties that happen to contain a datetime value will be # converted to datetime, and string-typed properties that happen to # contain a reference value will be converted to a reference. # This is a general limitation of untyped WBEM URIs as defined in # DSP0207 and cannot be solved by using a different parsing logic. try: cimval = CIMInstanceName.from_wbem_uri(cimval) except ValueError: try: cimval = CIMDateTime(cimval) except ValueError: cimval = _ensure_unicode(cimval) return cimval if val[0] == "'" and val[-1] == "'": # A single quoted key value. This must be CIM type: # * char16 (see charValue in DSP00004) # Note: The definition of charValue in DSP0004 allows for integer # numbers in addition to single quoted strings, see issue #932. # Pywbem implements only single quoted strings. 
cimval = val[1:-1] cimval = re.sub(r'\\(.)', r'\1', cimval) cimval = _ensure_unicode(cimval) if len(cimval) != 1: raise ValueError( _format("WBEM URI has a char16 keybinding with an " "incorrect length: {0!A}={1!A}", key, val)) return cimval if val.lower() in ('true', 'false'): # The key value must be CIM type: # * boolean (see booleanValue in DSP00004) cimval = val.lower() == 'true' return cimval # Try CIM types uint<NN> or sint<NN> (see integerValue in DSP00004). # * For integer keybindings in an untyped WBEM URI, it is # not possible to detect the exact CIM data type. Therefore, pywbem # stores the value as a Python int type (or long in Python 2, # if needed). cimval = _integerValue_to_int(val) if cimval is not None: return cimval # Try CIM types real32/64 (see realValue in DSP00004). # * For real/float keybindings in an untyped WBEM URI, it is not # possible to detect the exact CIM data type. Therefore, pywbem # stores the value as a Python float type. cimval = _realValue_to_float(val) if cimval is not None: return cimval # Try datetime types. # At this point, all CIM types have been processed, except: # * datetime, without quotes (see datetimeValue in DSP0207) # DSP0207 requires double quotes around datetime strings, but because # earlier versions of pywbem supported them without double quotes, # pywbem continues to support that, but issues a warning. try: cimval = CIMDateTime(val) except ValueError: raise ValueError( _format("WBEM URI has invalid value format in a keybinding: " "{0!A}={1!A}", key, val)) warnings.warn( _format("Tolerating datetime value without surrounding double " "quotes in WBEM URI keybinding: {0!A}={1!A}", key, val), UserWarning) return cimval
Convert a keybinding value string as found in a WBEM URI into a CIM object or CIM data type, and return it.
def tokhex(length=10, urlsafe=False):
    """ Return a random token string: hexadecimal by default, or URL-safe
    Base64 when `urlsafe` is True """
    if urlsafe is True:
        return secrets.token_urlsafe(length)
    return secrets.token_hex(length)
Return a random token string: hexadecimal by default, or URL-safe Base64 when `urlsafe` is True
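One subtlety worth noting: in `secrets`, `length` counts random bytes, not output characters, so the two branches yield strings of different lengths. A self-contained reproduction:

```python
import secrets

def tokhex(length=10, urlsafe=False):
    # Reproduction of the helper above. token_hex(n) yields 2*n hex
    # characters, while token_urlsafe(n) yields roughly 1.3*n Base64
    # characters, since both interpret `length` as a byte count.
    if urlsafe:
        return secrets.token_urlsafe(length)
    return secrets.token_hex(length)
```

Callers who need an exact character count should size `length` accordingly.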
def absent(name, auth=None, **kwargs): ''' Ensure a security group rule does not exist name name or id of the security group rule to delete rule_id uuid of the rule to delete project_id id of project to delete rule from ''' rule_id = kwargs['rule_id'] ret = {'name': rule_id, 'changes': {}, 'result': True, 'comment': ''} __salt__['neutronng.setup_clouds'](auth) secgroup = __salt__['neutronng.security_group_get']( name=name, filters={'tenant_id': kwargs['project_id']} ) # no need to delete a rule if the security group doesn't exist if secgroup is None: ret['comment'] = "security group does not exist" return ret # This should probably be done with compare on fields instead of # rule_id in the future rule_exists = None for rule in secgroup['security_group_rules']: if _rule_compare(rule, {"id": rule_id}) is True: rule_exists = True if rule_exists: if __opts__['test']: ret['result'] = None ret['changes'] = {'id': kwargs['rule_id']} ret['comment'] = 'Security group rule will be deleted.' return ret __salt__['neutronng.security_group_rule_delete'](rule_id=rule_id) ret['changes']['id'] = rule_id ret['comment'] = 'Deleted security group rule' return ret
Ensure a security group rule does not exist name name or id of the security group rule to delete rule_id uuid of the rule to delete project_id id of project to delete rule from
def to_signed_str(self, private, public, passphrase=None): ''' Returns a signed version of the invoice. @param private:file Private key file-like object @param public:file Public key file-like object @param passphrase:str Private key passphrase if any. @return: str ''' from pyxmli import xmldsig try: from Crypto.PublicKey import RSA except ImportError: raise ImportError('PyCrypto 2.5 or more recent module is ' \ 'required to enable XMLi signing.\n' \ 'Please visit: http://pycrypto.sourceforge.net/') if not isinstance(private, RSA._RSAobj): private = RSA.importKey(private.read(), passphrase=passphrase) if not isinstance(public, RSA._RSAobj): public = RSA.importKey(public.read()) return to_unicode(xmldsig.sign(to_unicode(self.to_string()), private, public))
Returns a signed version of the invoice. @param private:file Private key file-like object @param public:file Public key file-like object @param passphrase:str Private key passphrase if any. @return: str
def brkl2d(arr,interval): ''' arr = ["color1","r1","g1","b1","a1","color2","r2","g2","b2","a2"] >>> brkl2d(arr,5) [{'color1': ['r1', 'g1', 'b1', 'a1']}, {'color2': ['r2', 'g2', 'b2', 'a2']}] ''' lngth = arr.__len__() brkseqs = elel.init_range(0,lngth,interval) l = elel.broken_seqs(arr,brkseqs) d = elel.mapv(l,lambda ele:{ele[0]:ele[1:]}) return(d)
arr = ["color1","r1","g1","b1","a1","color2","r2","g2","b2","a2"] >>> brkl2d(arr,5) [{'color1': ['r1', 'g1', 'b1', 'a1']}, {'color2': ['r2', 'g2', 'b2', 'a2']}]
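The same chunking can be written with plain slicing, without the `elel` helpers (a standalone sketch of the behavior shown in the doctest above):

```python
def brkl2d(arr, interval):
    """Split arr into runs of `interval` items and key each run's
    tail by its first item."""
    chunks = [arr[i:i + interval] for i in range(0, len(arr), interval)]
    return [{chunk[0]: chunk[1:]} for chunk in chunks]
```

Each chunk's first element becomes the dict key and the remaining elements become its value list, matching the doctest.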
def preferred_ip(vm_, ips): ''' Return the preferred Internet protocol. Either 'ipv4' (default) or 'ipv6'. ''' proto = config.get_cloud_config_value( 'protocol', vm_, __opts__, default='ipv4', search_global=False ) family = socket.AF_INET if proto == 'ipv6': family = socket.AF_INET6 for ip in ips: try: socket.inet_pton(family, ip) return ip except Exception: continue return False
Return the preferred Internet protocol. Either 'ipv4' (default) or 'ipv6'.
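The `socket.inet_pton` validation at the core of `preferred_ip` can be shown standalone (the helper name here is illustrative, not from the original module):

```python
import socket

def is_valid_ip(ip, proto="ipv4"):
    """Check an address string with socket.inet_pton, as preferred_ip
    does above: the chosen family decides which format is accepted."""
    family = socket.AF_INET6 if proto == "ipv6" else socket.AF_INET
    try:
        socket.inet_pton(family, ip)
        return True
    except (OSError, ValueError):
        # inet_pton raises on addresses that don't match the family.
        return False
```

Note the asymmetry: an IPv6 address fails the IPv4 check and vice versa, which is exactly what lets `preferred_ip` pick the first address of the configured protocol.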
def translate_update(blob): "converts JSON parse output to self-aware objects" # note below: v will be int or null return {translate_key(k):parse_serialdiff(v) for k,v in blob.items()}
converts JSON parse output to self-aware objects
def _path(self, key): """ Get the full path for the given cache key. :param key: The cache key :type key: str :rtype: str """ hash_type, parts_count = self._HASHES[self._hash_type] h = hash_type(encode(key)).hexdigest() parts = [h[i:i+2] for i in range(0, len(h), 2)][:parts_count] return os.path.join(self._directory, os.path.sep.join(parts), h)
Get the full path for the given cache key. :param key: The cache key :type key: str :rtype: str
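The digest-sharding scheme above can be sketched standalone; sha256 and a two-level shard depth are fixed here as assumptions, since the original selects both from `self._HASHES`, which is not shown:

```python
import hashlib
import os

def cache_path(directory, key, parts_count=2):
    """Build a sharded cache path: the first `parts_count` two-character
    pieces of the key's digest become nested subdirectories."""
    h = hashlib.sha256(key.encode("utf-8")).hexdigest()
    parts = [h[i:i + 2] for i in range(0, len(h), 2)][:parts_count]
    return os.path.join(directory, os.path.sep.join(parts), h)
```

Sharding by digest prefix keeps any single directory from accumulating every cache file, which matters on filesystems that slow down with very large directories.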
def update(self): """ Updates information about the NepDate """ functions.check_valid_bs_range(self) # Here's a trick to find the gregorian date: # We find the number of days from earliest nepali date to the current # day. We then add the number of days to the earliest english date self.en_date = values.START_EN_DATE + \ ( self - NepDate( values.START_NP_YEAR, 1, 1 ) ) return self
Updates information about the NepDate
def send(self, msg): """ Send a message to the node on which this replica resides. :param msg: the message to send """ logger.debug("{}'s elector sending {}".format(self.name, msg)) self.outBox.append(msg)
Send a message to the node on which this replica resides. :param msg: the message to send
def _generate_event_resources(self, lambda_function, execution_role, event_resources,
                              lambda_alias=None):
    """Generates and returns the resources associated with this function's events.

    :param model.lambda_.LambdaFunction lambda_function: generated Lambda function
    :param iam.IAMRole execution_role: generated Lambda execution role
    :param event_resources: All the event sources associated with this Lambda function
    :param model.lambda_.LambdaAlias lambda_alias: Optional Lambda Alias resource if we want to connect the
        event sources to this alias

    :returns: a list containing the function's event resources
    :rtype: list
    """
    resources = []
    if self.Events:
        for logical_id, event_dict in self.Events.items():
            try:
                eventsource = self.event_resolver.resolve_resource_type(event_dict).from_dict(
                    lambda_function.logical_id + logical_id, event_dict, logical_id)
            except TypeError as e:
                raise InvalidEventException(logical_id, "{}".format(e))

            kwargs = {
                # When Alias is provided, connect all event sources to the alias and *not* the function
                'function': lambda_alias or lambda_function,
                'role': execution_role,
            }

            for name, resource in event_resources[logical_id].items():
                kwargs[name] = resource

            resources += eventsource.to_cloudformation(**kwargs)

    return resources
Generates and returns the resources associated with this function's events.

:param model.lambda_.LambdaFunction lambda_function: generated Lambda function
:param iam.IAMRole execution_role: generated Lambda execution role
:param implicit_api: Global Implicit API resource where the implicit APIs get attached to, if necessary
:param implicit_api_stage: Global implicit API stage resource where implicit APIs get attached to, if necessary
:param event_resources: All the event sources associated with this Lambda function
:param model.lambda_.LambdaAlias lambda_alias: Optional Lambda Alias resource if we want to connect the
    event sources to this alias

:returns: a list containing the function's event resources
:rtype: list
def render_embed_js(self, js_embed: Iterable[bytes]) -> bytes:
    """Default method used to render the final embedded js for the
    rendered webpage.

    Override this method in a sub-classed controller to change the output.
    """
    return (
        b'<script type="text/javascript">\n//<![CDATA[\n'
        + b"\n".join(js_embed)
        + b"\n//]]>\n</script>"
    )
Default method used to render the final embedded js for the rendered webpage. Override this method in a sub-classed controller to change the output.
def get_configured_consensus_module(block_id, state_view):
    """Returns the consensus_module based on the consensus module set by
    the "sawtooth_settings" transaction family.

    Args:
        block_id (str): the block id associated with the current state_view
        state_view (:obj:`StateView`): the current state view to use for
            setting values
    Raises:
        UnknownConsensusModuleError: Thrown when an invalid consensus
            module has been configured.
    """
    settings_view = SettingsView(state_view)

    default_consensus = \
        'genesis' if block_id == NULL_BLOCK_IDENTIFIER else 'devmode'
    consensus_module_name = settings_view.get_setting(
        'sawtooth.consensus.algorithm', default_value=default_consensus)
    return ConsensusFactory.get_consensus_module(
        consensus_module_name)
Returns the consensus_module based on the consensus module set by the
"sawtooth_settings" transaction family.

Args:
    block_id (str): the block id associated with the current state_view
    state_view (:obj:`StateView`): the current state view to use for
        setting values
Raises:
    UnknownConsensusModuleError: Thrown when an invalid consensus
        module has been configured.
def unregister_callback(self, type_, from_, *, wildcard_resource=True):
    """
    Unregister a callback function.

    :param type_: Stanza type to listen for, or :data:`None` for a
        wildcard match.
    :param from_: Sender to listen for, or :data:`None` for a full
        wildcard match.
    :type from_: :class:`aioxmpp.JID` or :data:`None`
    :param wildcard_resource: Whether to wildcard the resourcepart of the
        JID.
    :type wildcard_resource: :class:`bool`

    The callback must be disconnected with the same arguments as were used
    to connect it.
    """
    if from_ is None or not from_.is_bare:
        wildcard_resource = False

    self._map.pop((type_, from_, wildcard_resource))
Unregister a callback function.

:param type_: Stanza type to listen for, or :data:`None` for a
    wildcard match.
:param from_: Sender to listen for, or :data:`None` for a full
    wildcard match.
:type from_: :class:`aioxmpp.JID` or :data:`None`
:param wildcard_resource: Whether to wildcard the resourcepart of the
    JID.
:type wildcard_resource: :class:`bool`

The callback must be disconnected with the same arguments as were used
to connect it.
def encode_kv_node(keypath, child_node_hash):
    """
    Serializes a key/value node
    """
    if keypath is None or keypath == b'':
        raise ValidationError("Key path can not be empty")
    validate_is_bytes(keypath)
    validate_is_bytes(child_node_hash)
    validate_length(child_node_hash, 32)
    return KV_TYPE_PREFIX + encode_from_bin_keypath(keypath) + child_node_hash
Serializes a key/value node
def system_monitor_SFM_threshold_marginal_threshold(self, **kwargs):
    """Auto Generated Code
    """
    config = ET.Element("config")
    system_monitor = ET.SubElement(config, "system-monitor",
                                   xmlns="urn:brocade.com:mgmt:brocade-system-monitor")
    SFM = ET.SubElement(system_monitor, "SFM")
    threshold = ET.SubElement(SFM, "threshold")
    marginal_threshold = ET.SubElement(threshold, "marginal-threshold")
    marginal_threshold.text = kwargs.pop('marginal_threshold')

    callback = kwargs.pop('callback', self._callback)
    return callback(config)
Auto Generated Code
def read_cs_raw_symmetrized_tensors(self):
    """
    Parse the matrix form of the NMR tensor before it is corrected into the table.

    Returns:
        unsymmetrized tensors list in the order of atoms.
    """
    header_pattern = r"\s+-{50,}\s+" \
                     r"\s+Absolute Chemical Shift tensors\s+" \
                     r"\s+-{50,}$"
    first_part_pattern = r"\s+UNSYMMETRIZED TENSORS\s+$"
    row_pattern = r"\s+".join([r"([-]?\d+\.\d+)"] * 3)
    unsym_footer_pattern = r"^\s+SYMMETRIZED TENSORS\s+$"

    with zopen(self.filename, 'rt') as f:
        text = f.read()
    unsym_table_pattern_text = header_pattern + first_part_pattern + \
        r"(?P<table_body>.+)" + unsym_footer_pattern
    table_pattern = re.compile(unsym_table_pattern_text,
                               re.MULTILINE | re.DOTALL)
    rp = re.compile(row_pattern)
    m = table_pattern.search(text)
    if m:
        table_text = m.group("table_body")
        micro_header_pattern = r"ion\s+\d+"
        micro_table_pattern_text = micro_header_pattern + \
            r"\s*^(?P<table_body>(?:\s*" + row_pattern + r")+)\s+"
        micro_table_pattern = re.compile(micro_table_pattern_text,
                                         re.MULTILINE | re.DOTALL)
        unsym_tensors = []
        for mt in micro_table_pattern.finditer(table_text):
            table_body_text = mt.group("table_body")
            tensor_matrix = []
            for line in table_body_text.rstrip().split("\n"):
                ml = rp.search(line)
                processed_line = [float(v) for v in ml.groups()]
                tensor_matrix.append(processed_line)
            unsym_tensors.append(tensor_matrix)
        self.data["unsym_cs_tensor"] = unsym_tensors
    else:
        raise ValueError("NMR UNSYMMETRIZED TENSORS is not found")
Parse the matrix form of the NMR tensor before it is corrected into the table.

Returns:
    unsymmetrized tensors list in the order of atoms.
def start(self):
    """Starts watching the path and running the test jobs."""
    assert not self.watching

    def selector(evt):
        if evt.is_directory:
            return False
        path = evt.path
        if path in self._last_fnames:  # Detected a "killing cycle"
            return False
        for pattern in self.skip_pattern.split(";"):
            if fnmatch(path, pattern.strip()):
                return False
        return True

    def watchdog_handler(evt):
        wx.CallAfter(self._watchdog_handler, evt)

    # Force a first event
    self._watching = True
    self._last_fnames = []
    self._evts = [None]
    self._run_subprocess()

    # Starts the watchdog observer
    from .watcher import watcher
    self._watcher = watcher(path=self.directory, selector=selector,
                            handler=watchdog_handler)
    self._watcher.__enter__()
Starts watching the path and running the test jobs.
def f16(op):
    """ Returns a floating point operand converted to 32 bits unsigned int.
    Negative numbers are returned in 2 complement.

    The result is returned in a tuple (DE, HL) => High16 (Int part),
    Low16 (Decimal part)
    """
    op = float(op)
    negative = op < 0
    if negative:
        op = -op

    DE = int(op)
    HL = int((op - DE) * 2 ** 16) & 0xFFFF
    DE &= 0xFFFF

    if negative:  # Do C2
        DE ^= 0xFFFF
        HL ^= 0xFFFF
        DEHL = ((DE << 16) | HL) + 1
        HL = DEHL & 0xFFFF
        DE = (DEHL >> 16) & 0xFFFF

    return (DE, HL)
Returns a floating point operand converted to 32 bits unsigned int. Negative numbers are returned in 2 complement. The result is returned in a tuple (DE, HL) => High16 (Int part), Low16 (Decimal part)
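Since `f16` uses only builtins, its two's-complement behavior is easy to check standalone. The function body below is copied verbatim from the entry above so the snippet runs on its own:

```python
def f16(op):
    # Verbatim copy of the converter above, for a standalone check.
    op = float(op)
    negative = op < 0
    if negative:
        op = -op
    DE = int(op)                           # integer part -> high 16 bits
    HL = int((op - DE) * 2 ** 16) & 0xFFFF  # fractional part -> low 16 bits
    DE &= 0xFFFF
    if negative:  # 32-bit two's complement: invert and add one
        DE ^= 0xFFFF
        HL ^= 0xFFFF
        DEHL = ((DE << 16) | HL) + 1
        HL = DEHL & 0xFFFF
        DE = (DEHL >> 16) & 0xFFFF
    return (DE, HL)

# 1.5 -> integer part 1, fraction 0.5 == 0x8000 / 0x10000
print(f16(1.5))   # (1, 32768)
# -1.0 -> 0xFFFF0000 in 32-bit two's complement
print(f16(-1.0))  # (65535, 0)
```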
def SetAttributes(self, urn, attributes, to_delete, add_child_index=True,
                  mutation_pool=None):
    """Sets the attributes in the data store."""
    attributes[AFF4Object.SchemaCls.LAST] = [
        rdfvalue.RDFDatetime.Now().SerializeToDataStore()
    ]
    to_delete.add(AFF4Object.SchemaCls.LAST)
    if mutation_pool:
        pool = mutation_pool
    else:
        pool = data_store.DB.GetMutationPool()

    pool.MultiSet(urn, attributes, replace=False, to_delete=to_delete)
    if add_child_index:
        self._UpdateChildIndex(urn, pool)
    if mutation_pool is None:
        pool.Flush()
Sets the attributes in the data store.
def _untracked_custom_unit_found(name, root=None):
    '''
    If the passed service name is not available, but a unit file exists in
    /etc/systemd/system, return True. Otherwise, return False.
    '''
    system = _root('/etc/systemd/system', root)
    unit_path = os.path.join(system, _canonical_unit_name(name))
    return os.access(unit_path, os.R_OK) and not _check_available(name)
If the passed service name is not available, but a unit file exists in /etc/systemd/system, return True. Otherwise, return False.
def make_witness_input(outpoint, sequence):
    '''
    Outpoint, int -> TxIn
    '''
    if 'decred' in riemann.get_current_network_name():
        return tx.DecredTxIn(
            outpoint=outpoint,
            sequence=utils.i2le_padded(sequence, 4))
    return tx.TxIn(outpoint=outpoint,
                   stack_script=b'',
                   redeem_script=b'',
                   sequence=utils.i2le_padded(sequence, 4))
Outpoint, int -> TxIn
def _make_context_immutable(context):
    """Best effort attempt at turning a properly formatted context
    (either a string, dict, or array of strings and dicts) into an
    immutable data structure.

    If we get an array, make it immutable by creating a tuple; if we get
    a dict, copy it into a MappingProxyType. Otherwise, return as-is.
    """
    def make_immutable(val):
        if isinstance(val, Mapping):
            return MappingProxyType(val)
        else:
            return val

    if not isinstance(context, (str, Mapping)):
        try:
            return tuple([make_immutable(val) for val in context])
        except TypeError:
            pass

    return make_immutable(context)
Best effort attempt at turning a properly formatted context (either a string, dict, or array of strings and dicts) into an immutable data structure. If we get an array, make it immutable by creating a tuple; if we get a dict, copy it into a MappingProxyType. Otherwise, return as-is.
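A standalone sketch of the same best-effort freezing; `Mapping` and `MappingProxyType` are assumed to be the standard-library names the entry imports:

```python
from collections.abc import Mapping
from types import MappingProxyType

def make_context_immutable(context):
    # Standalone version of the helper above.
    def make_immutable(val):
        # Dicts become read-only views; everything else passes through.
        return MappingProxyType(val) if isinstance(val, Mapping) else val

    if not isinstance(context, (str, Mapping)):
        try:
            # An array context becomes a tuple of (possibly frozen) items.
            return tuple([make_immutable(val) for val in context])
        except TypeError:
            pass
    return make_immutable(context)

ctx = make_context_immutable([{"@vocab": "http://example.org/"}, "http://schema.org/"])
# ctx is a tuple; ctx[0] is a read-only view, so item assignment raises TypeError
```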
def to_coords(self):
    """
    Returns the X and Y coordinates for this EC point, as native Python
    integers

    :return:
        A 2-element tuple containing integers (X, Y)
    """
    data = self.native
    first_byte = data[0:1]

    # Uncompressed
    if first_byte == b'\x04':
        remaining = data[1:]
        field_len = len(remaining) // 2
        x = int_from_bytes(remaining[0:field_len])
        y = int_from_bytes(remaining[field_len:])
        return (x, y)

    if first_byte not in set([b'\x02', b'\x03']):
        raise ValueError(unwrap(
            '''
            Invalid EC public key - first byte is incorrect
            '''
        ))

    raise ValueError(unwrap(
        '''
        Compressed representations of EC public keys are not supported due
        to patent US6252960
        '''
    ))
Returns the X and Y coordinates for this EC point, as native Python
integers

:return:
    A 2-element tuple containing integers (X, Y)
def xpath(C, node, path, namespaces=None, extensions=None, smart_strings=True, **args):
    """shortcut to Element.xpath()"""
    return node.xpath(
        path,
        namespaces=namespaces or C.NS,
        extensions=extensions,
        smart_strings=smart_strings,
        **args
    )
shortcut to Element.xpath()
def get_repr(self, obj, referent=None):
    """Return an HTML tree block describing the given object."""
    objtype = type(obj)
    typename = str(objtype.__module__) + "." + objtype.__name__
    prettytype = typename.replace("__builtin__.", "")

    name = getattr(obj, "__name__", "")
    if name:
        prettytype = "%s %r" % (prettytype, name)

    key = ""
    if referent:
        key = self.get_refkey(obj, referent)

    url = reverse('dowser_trace_object', args=(typename, id(obj)))
    return ('<a class="objectid" href="%s">%s</a> '
            '<span class="typename">%s</span>%s<br />'
            '<span class="repr">%s</span>'
            % (url, id(obj), prettytype, key, get_repr(obj, 100)))
Return an HTML tree block describing the given object.
def one_hot_encoding(labels, num_classes, scope=None):
    """Transform numeric labels into onehot_labels.

    Args:
        labels: [batch_size] target labels.
        num_classes: total number of classes.
        scope: Optional scope for name_scope.
    Returns:
        one hot encoding of the labels.
    """
    with tf.name_scope(scope, 'OneHotEncoding', [labels]):
        batch_size = labels.get_shape()[0]
        indices = tf.expand_dims(tf.range(0, batch_size), 1)
        labels = tf.cast(tf.expand_dims(labels, 1), indices.dtype)
        concated = tf.concat(axis=1, values=[indices, labels])
        onehot_labels = tf.sparse_to_dense(
            concated, tf.stack([batch_size, num_classes]), 1.0, 0.0)
        onehot_labels.set_shape([batch_size, num_classes])
        return onehot_labels
Transform numeric labels into onehot_labels.

Args:
    labels: [batch_size] target labels.
    num_classes: total number of classes.
    scope: Optional scope for name_scope.
Returns:
    one hot encoding of the labels.
def compare_table_cols(a, b):
    """
    Return False if the two tables a and b have the same columns (ignoring
    order) according to LIGO LW name conventions, return True otherwise.
    """
    return cmp(
        sorted((col.Name, col.Type) for col in a.getElementsByTagName(ligolw.Column.tagName)),
        sorted((col.Name, col.Type) for col in b.getElementsByTagName(ligolw.Column.tagName)))
Return False if the two tables a and b have the same columns (ignoring order) according to LIGO LW name conventions, return True otherwise.
def MergeMessage(
        self, source, destination,
        replace_message_field=False, replace_repeated_field=False):
    """Merges fields specified in FieldMask from source to destination.

    Args:
        source: Source message.
        destination: The destination message to be merged into.
        replace_message_field: Replace message field if True. Merge message
            field if False.
        replace_repeated_field: Replace repeated field if True. Append
            elements of repeated field if False.
    """
    tree = _FieldMaskTree(self)
    tree.MergeMessage(
        source, destination, replace_message_field, replace_repeated_field)
Merges fields specified in FieldMask from source to destination.

Args:
    source: Source message.
    destination: The destination message to be merged into.
    replace_message_field: Replace message field if True. Merge message
        field if False.
    replace_repeated_field: Replace repeated field if True. Append
        elements of repeated field if False.
def print_evaluation(period=1, show_stdv=True):
    """Create a callback that prints the evaluation results.

    Parameters
    ----------
    period : int, optional (default=1)
        The period to print the evaluation results.
    show_stdv : bool, optional (default=True)
        Whether to show stdv (if provided).

    Returns
    -------
    callback : function
        The callback that prints the evaluation results every ``period`` iteration(s).
    """
    def _callback(env):
        if period > 0 and env.evaluation_result_list and (env.iteration + 1) % period == 0:
            result = '\t'.join([_format_eval_result(x, show_stdv)
                                for x in env.evaluation_result_list])
            print('[%d]\t%s' % (env.iteration + 1, result))
    _callback.order = 10
    return _callback
Create a callback that prints the evaluation results.

Parameters
----------
period : int, optional (default=1)
    The period to print the evaluation results.
show_stdv : bool, optional (default=True)
    Whether to show stdv (if provided).

Returns
-------
callback : function
    The callback that prints the evaluation results every ``period`` iteration(s).
def random_draw(self, size=None):
    """Draw random samples of the hyperparameters.

    The outputs of the two priors are stacked vertically.

    Parameters
    ----------
    size : None, int or array-like, optional
        The number/shape of samples to draw. If None, only one sample is
        returned. Default is None.
    """
    draw_1 = self.p1.random_draw(size=size)
    draw_2 = self.p2.random_draw(size=size)

    if draw_1.ndim == 1:
        return scipy.hstack((draw_1, draw_2))
    else:
        return scipy.vstack((draw_1, draw_2))
Draw random samples of the hyperparameters.

The outputs of the two priors are stacked vertically.

Parameters
----------
size : None, int or array-like, optional
    The number/shape of samples to draw. If None, only one sample is
    returned. Default is None.
def clipping_params(ts, capacity=100):
    """Start and end index that clips the price/value of a time series the most

    Assumes that the integrated maximum includes the peak (instantaneous maximum).

    Arguments:
        ts (TimeSeries): Time series to attempt to clip to as low a max value as possible
        capacity (float): Total "funds" or "energy" available for clipping (integrated
            area under time series)

    Returns:
        2-tuple: Timestamp of the start and end of the period of the maximum clipped
            integrated increase
    """
    ts_sorted = ts.order(ascending=False)
    i, t0, t1, integral = 1, None, None, 0
    while integral <= capacity and i + 1 < len(ts):
        i += 1
        t0_within_capacity = t0
        t1_within_capacity = t1
        t0 = min(ts_sorted.index[:i])
        t1 = max(ts_sorted.index[:i])
        integral = integrated_change(ts[t0:t1])
        print(i, t0, ts[t0], t1, ts[t1], integral)
    if t0_within_capacity and t1_within_capacity:
        return t0_within_capacity, t1_within_capacity
Start and end index that clips the price/value of a time series the most

Assumes that the integrated maximum includes the peak (instantaneous maximum).

Arguments:
    ts (TimeSeries): Time series to attempt to clip to as low a max value as possible
    capacity (float): Total "funds" or "energy" available for clipping (integrated
        area under time series)

Returns:
    2-tuple: Timestamp of the start and end of the period of the maximum clipped
        integrated increase
def _to_span(x, idx=0):
    """Convert a Candidate, Mention, or Span to a span."""
    if isinstance(x, Candidate):
        return x[idx].context
    elif isinstance(x, Mention):
        return x.context
    elif isinstance(x, TemporarySpanMention):
        return x
    else:
        raise ValueError(f"{type(x)} is an invalid argument type")
Convert a Candidate, Mention, or Span to a span.
def parse_host(entity, default_port=DEFAULT_PORT):
    """Validates a host string

    Returns a 2-tuple of host followed by port where port is default_port if
    it wasn't specified in the string.

    :Parameters:
        - `entity`: A host or host:port string where host could be a
          hostname or IP address.
        - `default_port`: The port number to use when one wasn't
          specified in entity.
    """
    host = entity
    port = default_port
    if entity[0] == '[':
        host, port = parse_ipv6_literal_host(entity, default_port)
    elif entity.endswith(".sock"):
        return entity, default_port
    elif entity.find(':') != -1:
        if entity.count(':') > 1:
            raise ValueError("Reserved characters such as ':' must be "
                             "escaped according RFC 2396. An IPv6 "
                             "address literal must be enclosed in '[' "
                             "and ']' according to RFC 2732.")
        host, port = host.split(':', 1)
    if isinstance(port, string_type):
        if not port.isdigit() or int(port) > 65535 or int(port) <= 0:
            raise ValueError("Port must be an integer between 0 and 65535: %s" % (port,))
        port = int(port)

    # Normalize hostname to lowercase, since DNS is case-insensitive:
    # http://tools.ietf.org/html/rfc4343
    # This prevents useless rediscovery if "foo.com" is in the seed list but
    # "FOO.com" is in the ismaster response.
    return host.lower(), port
Validates a host string

Returns a 2-tuple of host followed by port where port is default_port if
it wasn't specified in the string.

:Parameters:
    - `entity`: A host or host:port string where host could be a
      hostname or IP address.
    - `default_port`: The port number to use when one wasn't
      specified in entity.
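The splitting logic above can be sketched standalone. This is a simplified version: the IPv6 `[...]` branch (which relies on `parse_ipv6_literal_host`) and the `string_type` compatibility alias are omitted, and MongoDB's conventional 27017 is assumed for `DEFAULT_PORT`:

```python
def parse_host_simple(entity, default_port=27017):
    # Simplified sketch of the host:port split above; the IPv6 literal
    # branch is omitted, and 27017 is an assumed default port.
    host, port = entity, default_port
    if entity.endswith(".sock"):
        # Unix domain socket paths pass through untouched.
        return entity, default_port
    if ':' in entity:
        if entity.count(':') > 1:
            raise ValueError("IPv6 literals must be enclosed in '[' and ']'")
        host, port = entity.split(':', 1)
    if isinstance(port, str):
        if not port.isdigit() or not 0 < int(port) <= 65535:
            raise ValueError("Port must be an integer between 0 and 65535: %s" % port)
        port = int(port)
    # DNS is case-insensitive, so normalize the hostname to lowercase.
    return host.lower(), port

print(parse_host_simple("Example.COM:8080"))  # ('example.com', 8080)
print(parse_host_simple("localhost"))         # ('localhost', 27017)
```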
def _fake_openqueryinstances(self, namespace, **params):  # pylint: disable=invalid-name
    """
    Implements WBEM server responder for
    :meth:`~pywbem.WBEMConnection.OpenQueryInstances`
    with data from the instance repository.
    """
    self._validate_namespace(namespace)
    self._validate_open_params(**params)

    # pylint: disable=assignment-from-no-return
    # TODO: Not implemented
    result = self._fake_execquery(namespace, **params)
    objects = [] if result is None else [x[2] for x in result[0][2]]

    return self._open_response(objects, namespace,
                               'PullInstancesWithPath', **params)
Implements WBEM server responder for :meth:`~pywbem.WBEMConnection.OpenQueryInstances` with data from the instance repository.
def init(app_id, app_key=None, master_key=None, hook_key=None):
    """Initialize the LeanCloud AppId / AppKey / MasterKey

    :type app_id: string_types
    :param app_id: the application's Application ID
    :type app_key: None or string_types
    :param app_key: the application's Application Key
    :type master_key: None or string_types
    :param master_key: the application's Master Key
    :param hook_key: the application's Hook Key
    :type hook_key: None or string_types
    """
    if (not app_key) and (not master_key):
        raise RuntimeError('app_key or master_key must be specified')
    global APP_ID, APP_KEY, MASTER_KEY, HOOK_KEY
    APP_ID = app_id
    APP_KEY = app_key
    MASTER_KEY = master_key
    if hook_key:
        HOOK_KEY = hook_key
    else:
        HOOK_KEY = os.environ.get('LEANCLOUD_APP_HOOK_KEY')
ๅˆๅง‹ๅŒ– LeanCloud ็š„ AppId / AppKey / MasterKey :type app_id: string_types :param app_id: ๅบ”็”จ็š„ Application ID :type app_key: None or string_types :param app_key: ๅบ”็”จ็š„ Application Key :type master_key: None or string_types :param master_key: ๅบ”็”จ็š„ Master Key :param hook_key: application's hook key :type hook_key: None or string_type
def get_tower_results(iterator, optimizer, dropout_rates):
    r'''
    With this preliminary step out of the way, we can for each GPU introduce a
    tower for whose batch we calculate and return the optimization gradients
    and the average loss across towers.
    '''
    # To calculate the mean of the losses
    tower_avg_losses = []

    # Tower gradients to return
    tower_gradients = []

    with tf.variable_scope(tf.get_variable_scope()):
        # Loop over available_devices
        for i in range(len(Config.available_devices)):
            # Execute operations of tower i on device i
            device = Config.available_devices[i]
            with tf.device(device):
                # Create a scope for all operations of tower i
                with tf.name_scope('tower_%d' % i):
                    # Calculate the avg_loss and mean_edit_distance and retrieve the decoded
                    # batch along with the original batch's labels (Y) of this tower
                    avg_loss = calculate_mean_edit_distance_and_loss(iterator, dropout_rates, reuse=i > 0)

                    # Allow for variables to be re-used by the next tower
                    tf.get_variable_scope().reuse_variables()

                    # Retain tower's avg losses
                    tower_avg_losses.append(avg_loss)

                    # Compute gradients for model parameters using tower's mini-batch
                    gradients = optimizer.compute_gradients(avg_loss)

                    # Retain tower's gradients
                    tower_gradients.append(gradients)

    avg_loss_across_towers = tf.reduce_mean(tower_avg_losses, 0)

    tf.summary.scalar(name='step_loss', tensor=avg_loss_across_towers,
                      collections=['step_summaries'])

    # Return gradients and the average loss
    return tower_gradients, avg_loss_across_towers
With this preliminary step out of the way, we can for each GPU introduce a tower for whose batch we calculate and return the optimization gradients and the average loss across towers.
def ct_bytes_compare(a, b):
    """
    Constant-time string compare.
    http://codahale.com/a-lesson-in-timing-attacks/
    """
    if not isinstance(a, bytes):
        a = a.encode('utf8')
    if not isinstance(b, bytes):
        b = b.encode('utf8')

    if len(a) != len(b):
        return False

    result = 0
    for x, y in zip(a, b):
        result |= x ^ y
    return (result == 0)
Constant-time string compare. http://codahale.com/a-lesson-in-timing-attacks/
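A standalone check of the XOR-accumulator comparison. Note that coercing a non-bytes (i.e. `str`) argument to bytes on Python 3 requires `.encode('utf8')`, since `str` has no `.decode`:

```python
def ct_bytes_compare(a, b):
    # Standalone copy; str inputs are encoded (Python 3 str has no .decode).
    if not isinstance(a, bytes):
        a = a.encode('utf8')
    if not isinstance(b, bytes):
        b = b.encode('utf8')
    if len(a) != len(b):
        return False
    # OR the XOR of every byte pair: the result is 0 only if all pairs match,
    # and the loop always runs to the end, independent of where a mismatch is.
    result = 0
    for x, y in zip(a, b):
        result |= x ^ y
    return result == 0

print(ct_bytes_compare(b"secret", "secret"))   # True
print(ct_bytes_compare(b"secret", b"secreT"))  # False
```

(On modern Python, the standard library's `hmac.compare_digest` serves the same purpose.)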
def getBranch(self, name, **context):
    """Return a branch of this tree where the 'name' OID may reside"""
    for keyLen in self._vars.getKeysLens():
        subName = name[:keyLen]
        if subName in self._vars:
            return self._vars[subName]

    raise error.NoSuchObjectError(name=name, idx=context.get('idx'))
Return a branch of this tree where the 'name' OID may reside
def extract_named_group(text, named_group, matchers, return_presence=False):
    '''
    Return ``named_group`` match from ``text`` reached by using a matcher
    from ``matchers``.

    It also supports matching without a ``named_group`` in a matcher,
    which sets ``presence=True``. ``presence`` is only returned if
    ``return_presence=True``.
    '''
    presence = False
    for matcher in matchers:
        if isinstance(matcher, str):
            v = re.search(matcher, text, flags=re.DOTALL)
            if v:
                dict_result = v.groupdict()
                try:
                    return dict_result[named_group]
                except KeyError:
                    if dict_result:
                        # Some other named group matched, discard
                        continue
                    else:
                        # A matcher without a named group matched, but we
                        # can't return yet: a later matcher could still
                        # match with a named group
                        presence = True
        elif callable(matcher):
            v = matcher(text)
            if v:
                return v

    if return_presence and presence:
        return 'presence'
    return None
Return ``named_group`` match from ``text`` reached by using a matcher from ``matchers``. It also supports matching without a ``named_group`` in a matcher, which sets ``presence=True``. ``presence`` is only returned if ``return_presence=True``.
def entry_set(self, predicate=None):
    """
    Returns a list clone of the mappings contained in this map.

    **Warning: The list is NOT backed by the map, so changes to the map are NOT reflected
    in the list, and vice-versa.**

    :param predicate: (Predicate), predicate for the map to filter entries (optional).
    :return: (Sequence), the list of key-value tuples in the map.

    .. seealso:: :class:`~hazelcast.serialization.predicate.Predicate` for more info
        about predicates.
    """
    if predicate:
        predicate_data = self._to_data(predicate)
        return self._encode_invoke(map_entries_with_predicate_codec, predicate=predicate_data)
    else:
        return self._encode_invoke(map_entry_set_codec)
Returns a list clone of the mappings contained in this map.

**Warning: The list is NOT backed by the map, so changes to the map are NOT reflected
in the list, and vice-versa.**

:param predicate: (Predicate), predicate for the map to filter entries (optional).
:return: (Sequence), the list of key-value tuples in the map.

.. seealso:: :class:`~hazelcast.serialization.predicate.Predicate` for more info
    about predicates.
def vprjp(vin, plane):
    """
    Project a vector onto a specified plane, orthogonally.

    http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/vprjp_c.html

    :param vin: The projected vector.
    :type vin: 3-Element Array of floats
    :param plane: Plane containing vin.
    :type plane: spiceypy.utils.support_types.Plane
    :return: Vector resulting from projection.
    :rtype: 3-Element Array of floats
    """
    vin = stypes.toDoubleVector(vin)
    vout = stypes.emptyDoubleVector(3)
    libspice.vprjp_c(vin, ctypes.byref(plane), vout)
    return stypes.cVectorToPython(vout)
Project a vector onto a specified plane, orthogonally.

http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/vprjp_c.html

:param vin: The projected vector.
:type vin: 3-Element Array of floats
:param plane: Plane containing vin.
:type plane: spiceypy.utils.support_types.Plane
:return: Vector resulting from projection.
:rtype: 3-Element Array of floats
def _drawBackground(self, scene, painter, rect):
    """
    Draws the background for a particular scene within the charts.

    :param      scene   | <XChartScene>
                painter | <QPainter>
                rect    | <QRectF>
    """
    rect = scene.sceneRect()
    if scene == self.uiChartVIEW.scene():
        self.renderer().drawGrid(painter, rect, self.showGrid(),
                                 self.showColumns(), self.showRows())
    elif scene == self.uiXAxisVIEW.scene():
        self.renderer().drawAxis(painter, rect, self.horizontalAxis())
    elif scene == self.uiYAxisVIEW.scene():
        self.renderer().drawAxis(painter, rect, self.verticalAxis())
Draws the background for a particular scene within the charts.

:param      scene   | <XChartScene>
            painter | <QPainter>
            rect    | <QRectF>
def get_resource(self):
    """Gets the ``Resource`` for this authorization.

    return: (osid.resource.Resource) - the ``Resource``
    raise:  IllegalState - ``has_resource()`` is ``false``
    raise:  OperationFailed - unable to complete request
    *compliance: mandatory -- This method must be implemented.*

    """
    # Implemented from template for osid.resource.Resource.get_avatar_template
    if not bool(self._my_map['resourceId']):
        raise errors.IllegalState('this Authorization has no resource')
    mgr = self._get_provider_manager('RESOURCE')
    if not mgr.supports_resource_lookup():
        raise errors.OperationFailed('Resource does not support Resource lookup')
    lookup_session = mgr.get_resource_lookup_session(proxy=getattr(self, "_proxy", None))
    lookup_session.use_federated_bin_view()
    osid_object = lookup_session.get_resource(self.get_resource_id())
    return osid_object
Gets the ``Resource`` for this authorization.

return: (osid.resource.Resource) - the ``Resource``
raise:  IllegalState - ``has_resource()`` is ``false``
raise:  OperationFailed - unable to complete request
*compliance: mandatory -- This method must be implemented.*
def score_hist(df, columns=None, groupby=None, threshold=0.7, stacked=True, bins=20,
               percent=True, alpha=0.33, show=True, block=False, save=False):
    """Plot multiple histograms on one plot, typically of "score" values between 0 and 1

    Typically the groupby or columns of the dataframe are the classification categories (0, .5, 1)
    And the values are scores between 0 and 1.
    """
    df = df if columns is None else df[([] if groupby is None else [groupby]) + list(columns)].copy()
    if groupby is not None or threshold is not None:
        df = groups_from_scores(df, groupby=groupby, threshold=threshold)
    percent = 100. if percent else 1.
    if isinstance(df, pd.core.groupby.DataFrameGroupBy):
        df = df_from_groups(df, columns=columns) * percent
    columns = df.columns if columns is None else columns
    if bins is None:
        bins = 20
    if isinstance(bins, int):
        bins = np.linspace(np.min(df.min()), np.max(df.max()), bins)
    log.debug('bins: {}'.format(bins))
    figs = []
    df.plot(kind='hist', alpha=alpha, stacked=stacked, bins=bins)
    # for col in df.columns:
    #     series = df[col] * percent
    #     log.debug('{}'.format(series))
    #     figs.append(plt.hist(series, bins=bins, alpha=alpha,
    #                          weights=percent * np.ones_like(series) / len(series.dropna()),
    #                          label=stringify(col)))
    plt.legend()
    plt.xlabel('Score (%)')
    plt.ylabel('Percent')
    plt.title('{} Scores for {}'.format(np.sum(df.count()), columns))
    plt.draw()
    if save or not show:
        fig = plt.gcf()
        today = datetime.datetime.today()
        fig.savefig(os.path.join(
            IMAGES_PATH,
            'score_hist_{:04d}-{:02d}-{:02d}_{:02d}{:02d}.jpg'.format(*today.timetuple())))
    if show:
        plt.show(block=block)
    return figs
Plot multiple histograms on one plot, typically of "score" values between 0 and 1 Typically the groupby or columns of the dataframe are the classification categories (0, .5, 1) And the values are scores between 0 and 1.
def NonNegIntStringToInt(int_string, problems=None):
    """Convert a non-negative integer string to an int or raise an exception"""
    # Will raise TypeError unless a string
    match = re.match(r"^(?:0|[1-9]\d*)$", int_string)
    # Will raise ValueError if the string can't be parsed
    parsed_value = int(int_string)

    if parsed_value < 0:
        raise ValueError()
    elif not match and problems is not None:
        # Does not match the regex, but it's an int according to Python
        problems.InvalidNonNegativeIntegerValue(int_string)

    return parsed_value
Convert a non-negative integer string to an int or raise an exception
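The interesting cases are strings that Python's `int()` accepts but the canonical-form regex rejects, such as `"007"`. A rough standalone sketch of the same check, raising instead of routing those through a `problems` reporter:

```python
import re

def non_neg_int_string_to_int(int_string):
    # Standalone sketch of the validation above; non-canonical values
    # raise here instead of being reported via a problems object.
    match = re.match(r"^(?:0|[1-9]\d*)$", int_string)
    parsed_value = int(int_string)  # raises ValueError if unparseable
    if parsed_value < 0:
        raise ValueError(int_string)
    if not match:
        # e.g. "007": an int according to Python, but not canonical
        raise ValueError("non-canonical non-negative integer: %s" % int_string)
    return parsed_value

print(non_neg_int_string_to_int("42"))  # 42
print(non_neg_int_string_to_int("0"))   # 0
```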
def get_status(address=None):
    """
    Check if the DbServer is up.

    :param address: pair (hostname, port)
    :returns: 'running' or 'not-running'
    """
    address = address or (config.dbserver.host, DBSERVER_PORT)
    return 'running' if socket_ready(address) else 'not-running'
Check if the DbServer is up. :param address: pair (hostname, port) :returns: 'running' or 'not-running'
def delete_network_postcommit(self, context):
    """Delete the network from CVX"""
    network = context.current
    log_context("delete_network_postcommit: network", network)

    segments = context.network_segments
    tenant_id = network['project_id']
    self.delete_segments(segments)
    self.delete_network(network)
    self.delete_tenant_if_removed(tenant_id)
Delete the network from CVX
def likelihood_weighting(X, e, bn, N):
    """Estimate the probability distribution of variable X given
    evidence e in BayesNet bn.  [Fig. 14.15]
    >>> seed(1017)
    >>> likelihood_weighting('Burglary', dict(JohnCalls=T, MaryCalls=T),
    ...   burglary, 10000).show_approx()
    'False: 0.702, True: 0.298'
    """
    W = dict((x, 0) for x in bn.variable_values(X))
    for j in xrange(N):
        sample, weight = weighted_sample(bn, e)  # boldface x, w in Fig. 14.15
        W[sample[X]] += weight
    return ProbDist(X, W)
Estimate the probability distribution of variable X given
evidence e in BayesNet bn.  [Fig. 14.15]
>>> seed(1017)
>>> likelihood_weighting('Burglary', dict(JohnCalls=T, MaryCalls=T),
...   burglary, 10000).show_approx()
'False: 0.702, True: 0.298'
def setRoute(self, vehID, edgeList):
    """
    setRoute(string, list) -> None

    changes the vehicle route to given edges list.
    The first edge in the list has to be the one that the vehicle is at at the moment.

    example usage:
    setRoute('1', ['1', '2', '4', '6', '7'])

    this changes route for vehicle id 1 to edges 1-2-4-6-7
    """
    if isinstance(edgeList, str):
        edgeList = [edgeList]
    self._connection._beginMessage(tc.CMD_SET_VEHICLE_VARIABLE, tc.VAR_ROUTE, vehID,
                                   1 + 4 + sum(map(len, edgeList)) + 4 * len(edgeList))
    self._connection._packStringList(edgeList)
    self._connection._sendExact()
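The message length passed to `_beginMessage` encodes a string-list payload; a small sketch of how that size breaks down (this interpretation of the byte layout is inferred from the expression itself, not taken from the TraCI specification):

```python
def string_list_payload_size(edge_list):
    # 1 byte value-type marker + 4-byte element count, then for each
    # edge ID a 4-byte length prefix plus the ID's characters.
    return 1 + 4 + sum(len(e) for e in edge_list) + 4 * len(edge_list)
```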
def is_ascii_obfuscation(vm):
    """
    Tests if any class inside a DalvikVMObject uses ASCII obfuscation
    (e.g. UTF-8 chars in class names)

    :param vm: `DalvikVMObject`
    :return: True if ASCII obfuscation is detected, otherwise False
    """
    for classe in vm.get_classes():
        if is_ascii_problem(classe.get_name()):
            return True
        for method in classe.get_methods():
            if is_ascii_problem(method.get_name()):
                return True
    return False
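The `is_ascii_problem` check is not shown; a hypothetical stand-in that flags identifiers containing characters outside printable ASCII (the real androguard helper may use different criteria):

```python
def is_ascii_problem(name):
    # Flag any identifier containing a character outside the
    # printable ASCII range (32..126). Illustrative only.
    return any(not (32 <= ord(c) <= 126) for c in name)
```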
def notifications_mark_read(input_params={}, always_retry=True, **kwargs):
    """
    Invokes the /notifications/markRead API method.
    """
    return DXHTTPRequest('/notifications/markRead', input_params,
                         always_retry=always_retry, **kwargs)
def append(self, position, array):
    """Append an array to the end of the map. The position must be
    greater than any position already in the map."""
    if not Gauged.map_append(self.ptr, position, array.ptr):
        raise MemoryError
def bind(self, data_shapes, label_shapes=None, for_training=True,
         inputs_need_grad=False, force_rebind=False, shared_module=None,
         grad_req='write'):
    """Binds the symbols to construct executors. This is necessary
    before one can perform computation with the module.

    Parameters
    ----------
    data_shapes : list of (str, tuple) or DataDesc objects
        Typically is ``data_iter.provide_data``. Can also be a list of
        (data name, data shape).
    label_shapes : list of (str, tuple) or DataDesc objects
        Typically is ``data_iter.provide_label``. Can also be a list of
        (label name, label shape).
    for_training : bool
        Default is ``True``. Whether the executors should be bound for
        training.
    inputs_need_grad : bool
        Default is ``False``. Whether the gradients to the input data
        need to be computed. Typically this is not needed, but it might
        be when implementing composition of modules.
    force_rebind : bool
        Default is ``False``. This function does nothing if the
        executors are already bound, but with this set to ``True`` the
        executors will be forced to rebind.
    shared_module : Module
        Default is ``None``. This is used in bucketing. When not
        ``None``, the shared module essentially corresponds to a
        different bucket -- a module with a different symbol but with
        the same sets of parameters (e.g. unrolled RNNs with different
        lengths).
    grad_req : str, list of str, dict of str to str
        Requirement for gradient accumulation. Can be 'write', 'add',
        or 'null' (defaults to 'write'). Can be specified globally
        (str) or for each argument (list, dict).

    Examples
    --------
    >>> # An example of binding symbols.
    >>> mod.bind(data_shapes=[('data', (1, 10, 10))])
    >>> # Assume train_iter is already created.
    >>> mod.bind(data_shapes=train_iter.provide_data,
    ...          label_shapes=train_iter.provide_label)
    """
    raise NotImplementedError()
def ostree_path(self):
    """Path to the ostree repository holding the content."""
    if self._ostree_path is None:
        self._ostree_path = os.path.join(self.tmpdir, "ostree-repo")
        subprocess.check_call(["ostree", "init", "--mode", "bare-user-only",
                               "--repo", self._ostree_path])
    return self._ostree_path
def setup(self, app):
    """Set up the plugin from an application."""
    super().setup(app)
    if isinstance(self.cfg.template_folders, str):
        self.cfg.template_folders = [self.cfg.template_folders]
    else:
        self.cfg.template_folders = list(self.cfg.template_folders)
    self.ctx_provider(lambda: {'app': self.app})
    self.env = Environment(debug=app.cfg.DEBUG, **self.cfg)
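The `template_folders` branch normalizes a config value that may arrive as a bare string or as any iterable of paths. That pattern in isolation (the helper name here is invented):

```python
def normalize_folders(value):
    # A bare string becomes a one-element list; any other iterable
    # is copied into a fresh list.
    if isinstance(value, str):
        return [value]
    return list(value)
```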
def ConsultarDomicilios(self, nro_doc, tipo_doc=80, cat_iva=None):
    "Look up the addresses; return the count and set the list"
    self.cursor.execute("SELECT direccion FROM domicilio WHERE "
                        " tipo_doc=? AND nro_doc=? ORDER BY id ",
                        [tipo_doc, nro_doc])
    filas = self.cursor.fetchall()
    self.domicilios = [fila['direccion'] for fila in filas]
    return len(filas)
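The query relies on a cursor whose rows support name-based access (`fila['direccion']`). A self-contained sketch using `sqlite3` with `row_factory = sqlite3.Row` (the table schema and sample data are invented for illustration):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.row_factory = sqlite3.Row  # rows become addressable by column name
con.execute("CREATE TABLE domicilio (id INTEGER PRIMARY KEY, "
            "tipo_doc INTEGER, nro_doc INTEGER, direccion TEXT)")
con.execute("INSERT INTO domicilio (tipo_doc, nro_doc, direccion) "
            "VALUES (80, 20111111112, 'Av. Siempre Viva 742')")

cur = con.execute("SELECT direccion FROM domicilio WHERE "
                  "tipo_doc=? AND nro_doc=? ORDER BY id",
                  [80, 20111111112])
filas = cur.fetchall()
domicilios = [fila['direccion'] for fila in filas]
```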
def get_taf_alt_ice_turb(wxdata: [str]) -> ([str], str, [str], [str]):  # type: ignore
    """
    Returns the report list and the removed elements:
    altimeter string, icing list, turbulence list
    """
    altimeter = ''
    icing, turbulence = [], []
    for i, item in reversed(list(enumerate(wxdata))):
        if len(item) > 6 and item.startswith('QNH') and item[3:7].isdigit():
            altimeter = wxdata.pop(i)[3:7]
        elif item.isdigit():
            if item[0] == '6':
                icing.append(wxdata.pop(i))
            elif item[0] == '5':
                turbulence.append(wxdata.pop(i))
    return wxdata, altimeter, icing, turbulence
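Since the parser touches only list and string operations, it can be exercised standalone. A runnable sketch with sample tokens (the input tokens below are illustrative, not taken from a real TAF):

```python
def get_taf_alt_ice_turb(wxdata):
    # Same logic as above, without the type annotations.
    altimeter = ''
    icing, turbulence = [], []
    # Iterate in reverse so pop(i) never shifts an index we still need.
    for i, item in reversed(list(enumerate(wxdata))):
        if len(item) > 6 and item.startswith('QNH') and item[3:7].isdigit():
            altimeter = wxdata.pop(i)[3:7]
        elif item.isdigit():
            if item[0] == '6':        # icing codes start with 6
                icing.append(wxdata.pop(i))
            elif item[0] == '5':      # turbulence codes start with 5
                turbulence.append(wxdata.pop(i))
    return wxdata, altimeter, icing, turbulence

wx = ['TX20/1920Z', 'QNH2979INS', '620304', '530002']
remaining, alt, ice, turb = get_taf_alt_ice_turb(wx)
```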
def as_report_request(self, rules, timer=datetime.utcnow):
    """Makes a `ServicecontrolServicesReportRequest` from this instance

    Args:
      rules (:class:`ReportingRules`): determines what labels, metrics
        and logs to include in the report request.
      timer: a function that determines the current time

    Return:
      a ``ServicecontrolServicesReportRequest`` generated from this
      instance governed by the provided ``rules``

    Raises:
      ValueError: if the fields in this instance cannot be used to
        create a valid ``ServicecontrolServicesReportRequest``
    """
    if not self.service_name:
        raise ValueError(u'the service name must be set')
    op = super(Info, self).as_operation(timer=timer)

    # Populate metrics and labels if they can be associated with a
    # method/operation
    if op.operationId and op.operationName:
        labels = {}
        for known_label in rules.labels:
            known_label.do_labels_update(self, labels)

        # Forcibly add system label reporting here, as the base service
        # config does not specify it as a label.
        labels[_KNOWN_LABELS.SCC_PLATFORM.label_name] = (
            self.platform.friendly_string())
        labels[_KNOWN_LABELS.SCC_SERVICE_AGENT.label_name] = SERVICE_AGENT
        labels[_KNOWN_LABELS.SCC_USER_AGENT.label_name] = USER_AGENT

        if labels:
            op.labels = encoding.PyValueToMessage(
                sc_messages.Operation.LabelsValue, labels)
        for known_metric in rules.metrics:
            known_metric.do_operation_update(self, op)

    # Populate the log entries
    now = timer()
    op.logEntries = [self._as_log_entry(l, now) for l in rules.logs]

    return sc_messages.ServicecontrolServicesReportRequest(
        serviceName=self.service_name,
        reportRequest=sc_messages.ReportRequest(operations=[op]))
def overlay(self, feature, color='Blue', opacity=0.6):
    """
    Overlays ``feature`` on the map. Returns a new Map.

    Args:
        ``feature``: a ``Table`` of map features, a list of map
            features, a Map, a Region, or a circle marker map table.
            The features will be overlaid on the Map with the
            specified ``color``.

        ``color`` (``str``): Color of feature. Defaults to 'Blue'

        ``opacity`` (``float``): Opacity of overlain feature.
            Defaults to 0.6.

    Returns:
        A new ``Map`` with the overlain ``feature``.
    """
    result = self.copy()
    if type(feature) == Table:
        # if a table of features, e.g. Table.from_records(taz_map.features)
        if 'feature' in feature:
            feature = feature['feature']
        # if a marker table, e.g. a table with columns:
        # latitudes, longitudes, popup, color, radius
        else:
            feature = Circle.map_table(feature)
    if type(feature) in [list, np.ndarray]:
        for f in feature:
            f._attrs['fill_color'] = color
            f._attrs['fill_opacity'] = opacity
            f.draw_on(result._folium_map)
    elif type(feature) == Map:
        for i in range(len(feature._features)):
            f = feature._features[i]
            f._attrs['fill_color'] = color
            f._attrs['fill_opacity'] = opacity
            f.draw_on(result._folium_map)
    elif type(feature) == Region:
        feature._attrs['fill_color'] = color
        feature._attrs['fill_opacity'] = opacity
        feature.draw_on(result._folium_map)
    return result
def grav_pot(self, x, y, rho0, gamma, center_x=0, center_y=0):
    """
    Gravitational potential (modulo 4 pi G and rho0 in appropriate units).

    :param x: x-coordinate
    :param y: y-coordinate
    :param rho0: density normalization
    :param gamma: power-law slope
    :param center_x: profile center, x
    :param center_y: profile center, y
    :return: gravitational potential
    """
    x_ = x - center_x
    y_ = y - center_y
    r = np.sqrt(x_**2 + y_**2)
    mass_3d = self.mass_3d(r, rho0, gamma)
    pot = mass_3d/r
    return pot
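A quick numeric sanity check of pot = M(<r)/r, assuming a power-law density rho(r) = rho0 * r**(-gamma) with enclosed mass M(r) = 4*pi*rho0*r**(3-gamma)/(3-gamma). This `mass_3d` form is an assumption for illustration, not necessarily the normalization used by the actual `self.mass_3d`:

```python
import math

def mass_3d(r, rho0, gamma):
    # Assumed enclosed mass for rho(r) = rho0 * r**(-gamma), gamma < 3.
    return 4 * math.pi * rho0 * r ** (3 - gamma) / (3 - gamma)

def grav_pot(r, rho0, gamma):
    # Potential modulo 4*pi*G, as in the method above: M(<r) / r.
    return mass_3d(r, rho0, gamma) / r

# For gamma = 2 (isothermal slope), M(r) grows linearly with r,
# so the potential M(r)/r is independent of radius.
```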
def get_active_forms_state(self):
    """Extract ActiveForm INDRA Statements."""
    for term in self._isolated_terms:
        act = term.find('features/active')
        if act is None:
            continue
        if act.text == 'TRUE':
            is_active = True
        elif act.text == 'FALSE':
            is_active = False
        else:
            logger.warning('Unhandled term activity feature %s' % act.text)
            continue
        agent = self._get_agent_by_id(term.attrib['id'], None)
        # Skip aggregates for now
        if not isinstance(agent, Agent):
            continue
        # If the Agent state is at the base state then this is not an
        # ActiveForm statement
        if _is_base_agent_state(agent):
            continue
        # Remove the activity flag since it's irrelevant here
        agent.activity = None
        text_term = term.find('text')
        if text_term is not None:
            ev_text = text_term.text
        else:
            ev_text = None
        ev = Evidence(source_api='trips', text=ev_text, pmid=self.doc_id)
        st = ActiveForm(agent, 'activity', is_active, evidence=[ev])
        self.statements.append(st)
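The nested `find('features/active')` lookup works on any ElementTree element. A self-contained sketch on a toy TERM fragment (the tag layout here is illustrative and may not match real TRIPS EKB output exactly):

```python
import xml.etree.ElementTree as ET

fragment = """
<TERM id="V123">
  <features><active>TRUE</active></features>
  <text>MAPK1 is active</text>
</TERM>
"""
term = ET.fromstring(fragment)
act = term.find('features/active')   # nested path lookup, as above
is_active = act is not None and act.text == 'TRUE'
ev_text = term.findtext('text')      # evidence sentence, if present
```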
def _calculatePredictedCells(self, activeBasalSegments, activeApicalSegments):
    """
    Calculate the predicted cells, given the set of active segments.

    An active basal segment is enough to predict a cell. An active
    apical segment is *not* enough to predict a cell.

    When a cell has both types of segments active, other cells in its
    minicolumn must also have both types of segments to be considered
    predictive.

    @param activeBasalSegments (numpy array)
    @param activeApicalSegments (numpy array)

    @return (numpy array)
    """
    cellsForBasalSegments = self.basalConnections.mapSegmentsToCells(
        activeBasalSegments)
    cellsForApicalSegments = self.apicalConnections.mapSegmentsToCells(
        activeApicalSegments)

    fullyDepolarizedCells = np.intersect1d(cellsForBasalSegments,
                                           cellsForApicalSegments)
    partlyDepolarizedCells = np.setdiff1d(cellsForBasalSegments,
                                          fullyDepolarizedCells)

    # Floor division maps each cell index to its minicolumn index.
    inhibitedMask = np.in1d(partlyDepolarizedCells // self.cellsPerColumn,
                            fullyDepolarizedCells // self.cellsPerColumn)
    predictedCells = np.append(fullyDepolarizedCells,
                               partlyDepolarizedCells[~inhibitedMask])

    if not self.useApicalTiebreak:
        predictedCells = cellsForBasalSegments

    return predictedCells
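The tiebreak rule can be re-expressed with plain Python sets, which makes the inhibition step easier to follow (the cell indices and `cells_per_column = 4` below are made up for illustration):

```python
cells_per_column = 4

basal = {0, 1, 5, 9}    # cells with an active basal segment
apical = {1, 5, 9}      # cells with an active apical segment

fully = basal & apical  # both segment types active
partly = basal - fully  # basal-only cells
winning_columns = {c // cells_per_column for c in fully}

# A basal-only cell is inhibited when its minicolumn already
# contains a fully depolarized cell.
predicted = fully | {c for c in partly
                     if c // cells_per_column not in winning_columns}
```

Here cell 0 shares minicolumn 0 with the fully depolarized cell 1, so it is inhibited and only cells 1, 5, and 9 remain predicted.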