code: string, lengths 20 to 4.93k
docstring: string, lengths 33 to 1.27k
source: string, 3 classes
def is_special_unitary(matrix: np.ndarray, *, rtol: float = 1e-5, atol: float = 1e-8) -> bool:
    return (is_unitary(matrix, rtol=rtol, atol=atol) and
            (matrix.shape[0] == 0 or
             np.allclose(np.linalg.det(matrix), 1, rtol=rtol, atol=atol)))
Determines if a matrix is approximately unitary with unit determinant. A matrix is special-unitary if it is square and its adjoint is its inverse and its determinant is one. Args: matrix: The matrix to check. rtol: The per-matrix-entry relative tolerance on equality. atol: The per-matrix-entry absolute tolerance on equality. Returns: Whether the matrix is unitary with unit determinant within the given tolerance.
juraj-google-style
def DeregisterDefinition(self, data_type_definition):
    name = data_type_definition.name.lower()
    if name not in self._definitions:
        raise KeyError('Definition not set for name: {0:s}.'.format(
            data_type_definition.name))
    del self._definitions[name]
Deregisters a data type definition. The data type definitions are identified based on their lower case name. Args: data_type_definition (DataTypeDefinition): data type definition. Raises: KeyError: if a data type definition is not set for the corresponding name.
juraj-google-style
def enable_quantized_dtypes_training(fn: _F) -> _F:
    def wrapper(*args, **kwargs):
        if flags.config().enable_quantized_dtypes_training.value():
            return fn(*args, **kwargs)
        flags.config().enable_quantized_dtypes_training.reset(True)
        try:
            return fn(*args, **kwargs)
        finally:
            flags.config().enable_quantized_dtypes_training.reset(False)
    return wrapper
Decorator for enabling quantized_dtypes_training on a test. This function returns a decorator intended to be applied to test methods in a `tf.test.TestCase` class. Doing so will set quantized_dtypes_training, reset the context, execute the test, then reset the context to the state it was in prior to this test. Example: class MyTest(test.TestCase): @enable_quantized_dtypes_training def testFoo(self): ... Args: fn: the function to be wrapped. Returns: The wrapped function.
github-repos
def Deserialize(self, reader):
    self.DeserializeUnsigned(reader)
    self.scripts = reader.ReadSerializableArray()
    self.OnDeserialized()
Deserialize full object. Args: reader (neo.IO.BinaryReader):
juraj-google-style
def _send_request(self, xml_request):
    if self._scheme == 'http':
        return self._send_http_request(xml_request)
    else:
        return self._send_socket_request(xml_request)
Send the prepared XML request block to the CPS using the correct protocol. Args: xml_request -- A fully formed xml request string for the CPS. Returns: The raw xml response string. Raises: ConnectionError -- Can't establish a connection with the server.
codesearchnet
def parse_cron_line(self, line):
    stripped = line.strip()
    if stripped and not stripped.startswith('#'):
        rexres = self.rex.search(stripped)
        if rexres:
            return ' '.join(rexres.group(1).split())
    return None
Parses crontab line and returns only starting time string Args: line: crontab line Returns: Time part of cron line
juraj-google-style
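A standalone sketch of the comment-skipping and time-field extraction described by the entry above. The regex here is a hypothetical stand-in for the entry's unspecified `self.rex`; it simply captures the five leading crontab time fields.

```python
import re

# Hypothetical stand-in for the entry's `self.rex`: capture the five
# leading time/date fields of a crontab line.
CRON_TIME_RE = re.compile(r"^((?:\S+\s+){4}\S+)\s+\S")

def parse_cron_line(line):
    """Return the time part of a crontab line, or None for blanks/comments."""
    stripped = line.strip()
    if stripped and not stripped.startswith('#'):
        rexres = CRON_TIME_RE.search(stripped)
        if rexres:
            # Collapse runs of whitespace between the five fields.
            return ' '.join(rexres.group(1).split())
    return None
```

Comment lines and blank lines fall through to `None`, matching the entry's behavior.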
def getSet(self, name):
    return lock_and_call(
        lambda: Set(self._impl.getSet(name)),
        self._lock
    )
Get the set with the corresponding name. Args: name: Name of the set to be found. Raises: TypeError: if the specified set does not exist.
juraj-google-style
def get_properties(properties_file='raw.properties.json', env=None, region=None):
    with open(properties_file, 'rt') as file_handle:
        properties = json.load(file_handle)
    env_properties = properties.get(env, properties)
    contents = env_properties.get(region, env_properties)
    LOG.debug('Found properties for %s:\n%s', env, contents)
    return contents
Get contents of _properties_file_ for the _env_. Args: properties_file (str): File name of `create-configs` JSON output. env (str): Environment to read optionally. region (str): Region to get specific configs for. Returns: dict: JSON loaded Application properties for _env_. None: Given _env_ was not found in `create-configs` JSON output.
codesearchnet
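The fallback chain in the entry above relies on `dict.get` returning the enclosing mapping when a key is missing. A self-contained sketch, using an in-memory JSON string instead of the properties file:

```python
import json

def get_properties(raw_json, env=None, region=None):
    """Sketch of the env/region fallback: a missing key falls back to the
    enclosing mapping instead of raising."""
    properties = json.loads(raw_json)
    env_properties = properties.get(env, properties)   # whole file if env absent
    return env_properties.get(region, env_properties)  # env block if region absent

raw = json.dumps({'dev': {'us-east-1': {'app': 'a'}, 'app': 'b'}})
```

Asking for a missing region returns the whole environment block, and asking for a missing environment returns the whole file, so callers always get the most specific configuration available.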
def _strict_object_meta_fset(_, private_attr, type_):
    def _fset(self, value):
        rtype = type_
        if isinstance(type_, TypeVar):
            type_map = dict(
                zip(self.__parameters__, self.__orig_class__.__args__)
            )
            rtype = type_map[type_]
        if not is_instance(value, rtype):
            raise TypeError(
                "Cannot assign type of {} to attribute of type {}.".format(
                    _get_type_name(type(value)), _get_type_name(rtype)
                )
            )
        vars(self)[private_attr] = value
    return _fset
Create a property setter method for the attribute. Args: _: The name of the attribute to set. Unused. private_attr: The name of the attribute that will store any data related to the attribute. type_: The annotated type defining what values can be stored in the attribute. Returns: A method that takes self and a value and stores that value on self in the private attribute iff the value is an instance of type_.
juraj-google-style
def _rewrite_grad_indexed_slices_output(old_output_slices, new_input_slices):
    def rewrite(old_output, new_input):
        assert old_output.type == 'Identity'
        concat_op = old_output.inputs[0].op
        assert concat_op.type == 'ConcatV2'
        old_concat_args = concat_op.inputs[:-1]
        return array_ops.concat([new_input] + old_concat_args[1:], 0)

    values = rewrite(old_output_slices.values.op, new_input_slices.values)
    indices = rewrite(old_output_slices.indices.op, new_input_slices.indices)
    return indexed_slices.IndexedSlices(values=values, indices=indices,
                                        dense_shape=new_input_slices.dense_shape)
Creates a new version of old_output_slices with new_input_slices as input. This method assumes that old_output_slices.{values,indices} are produced by concatenating the incoming gradient Tensor input with the IndexedSlices produced by the gradient computation of the while body. See backprop.aggregate_indexed_slices_gradients for where these concats are constructed. We build new concats that use new_input_slices instead of the original Tensor input. Args: old_output_slices: original IndexedSlices output of while gradient. new_input_slices: new IndexedSlices to use as input to while gradient. Returns: A new IndexedSlices to replace old_output_slices.
github-repos
def split_result_of_axis_func_pandas(axis, num_splits, result, length_list=None):
    if num_splits == 1:
        return result
    if length_list is not None:
        length_list.insert(0, 0)
        sums = np.cumsum(length_list)
        if axis == 0:
            return [result.iloc[sums[i]:sums[i + 1]] for i in range(len(sums) - 1)]
        else:
            return [result.iloc[:, sums[i]:sums[i + 1]] for i in range(len(sums) - 1)]
    chunksize = compute_chunksize(result, num_splits, axis=axis)
    if axis == 0:
        return [result.iloc[chunksize * i:chunksize * (i + 1)]
                for i in range(num_splits)]
    else:
        return [result.iloc[:, chunksize * i:chunksize * (i + 1)]
                for i in range(num_splits)]
Split the Pandas result evenly based on the provided number of splits. Args: axis: The axis to split across. num_splits: The number of even splits to create. result: The result of the computation. This should be a Pandas DataFrame. length_list: The list of lengths to split this DataFrame into. This is used to return the DataFrame to its original partitioning schema. Returns: A list of Pandas DataFrames.
juraj-google-style
def parse_config(args=sys.argv):
    parser = argparse.ArgumentParser(description='Read in the config file')
    parser.add_argument('config_file', help='Configuration file.',
                        metavar='FILE', type=extant_file)
    return parser.parse_args(args[1:])
Parse the args using the config_file pattern Args: args: sys.argv Returns: The populated namespace object from parser.parse_args(). Raises: TBD
codesearchnet
def copy_submission_locally(self, cloud_path):
    local_path = os.path.join(self.download_dir, os.path.basename(cloud_path))
    cmd = ['gsutil', 'cp', cloud_path, local_path]
    if subprocess.call(cmd) != 0:
        logging.error('Can\'t copy submission locally')
        return None
    return local_path
Copies submission from Google Cloud Storage to local directory. Args: cloud_path: path of the submission in Google Cloud Storage Returns: name of the local file where submission is copied to
juraj-google-style
def _dispatch_function(self, event, listener, *args, **kwargs):
    try:
        return listener(*args, **kwargs)
    except Exception as exc:
        if event == self.LISTENER_ERROR_EVENT:
            raise
        return self.emit(self.LISTENER_ERROR_EVENT, event, listener, exc)
Execute a sync function. Args: event (str): The name of the event that triggered this call. listener (def): The def that needs to be executed. *args: Any number of positional arguments. **kwargs: Any number of keyword arguments. The values of *args and **kwargs are passed, unaltered, to the def when executing. If there is an exception executing the def, such as the wrong number of arguments, the emitter's error event is triggered. If the triggering event _is_ the emitter's error event then the exception is reraised. The reraised exception may show in debug mode for the event loop but is otherwise silently dropped.
codesearchnet
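The re-raise guard above prevents an error handler that itself throws from recursing forever. A minimal sketch with a hypothetical emitter class (the real emitter is not shown in the entry; `emit` here just records error events):

```python
class MiniEmitter:
    """Minimal sketch of the error-dispatch rule described above."""
    LISTENER_ERROR_EVENT = 'error'

    def __init__(self):
        self.errors = []

    def emit(self, event, *args):
        # Stand-in for the real emit(): record the error event.
        self.errors.append((event, args))

    def _dispatch_function(self, event, listener, *args, **kwargs):
        try:
            return listener(*args, **kwargs)
        except Exception as exc:
            if event == self.LISTENER_ERROR_EVENT:
                raise  # avoid error -> error recursion
            return self.emit(self.LISTENER_ERROR_EVENT, event, listener, exc)

emitter = MiniEmitter()
# A failing listener on a normal event is converted to an error event.
emitter._dispatch_function('data', lambda: 1 / 0)
# A failing listener on the error event itself is re-raised.
try:
    emitter._dispatch_function('error', lambda: 1 / 0)
    reraised = False
except ZeroDivisionError:
    reraised = True
```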
def iterator_arange(variables: VarType, parent: str) -> Iterable[VarMatrix]:
    assert parent is not None
    if isinstance(variables, (int, float)):
        yield [{parent: i} for i in np.arange(variables)]
    elif isinstance(variables, dict):
        if variables.get("stop"):
            yield [{parent: i} for i in np.arange(**variables)]
        else:
            raise ValueError("Stop is a required keyword for the arange iterator.")
    else:
        raise ValueError(
            f"The arange keyword only takes a dict as arguments, got {variables} of type {type(variables)}"
        )
Create a list of values using the :func:`numpy.arange` function. Args: variables: The input variables for the creation of the range parent: The variable for which the values are being generated. Returns: A list of dictionaries mapping the parent to each value.
juraj-google-style
def __init__(self, role, train_instance_count, train_instance_type,
             data_location=None, **kwargs):
    super(AmazonAlgorithmEstimatorBase, self).__init__(role, train_instance_count,
                                                       train_instance_type, **kwargs)
    data_location = data_location or "s3://{}/sagemaker-record-sets/".format(
        self.sagemaker_session.default_bucket())
    self.data_location = data_location
Initialize an AmazonAlgorithmEstimatorBase. Args: data_location (str or None): The s3 prefix to upload RecordSet objects to, expressed as an S3 url. For example "s3://example-bucket/some-key-prefix/". Objects will be saved in a unique sub-directory of the specified location. If None, a default data location will be used.
juraj-google-style
def load_notebook(resources=None, verbose=False, hide_banner=False, load_timeout=5000):
    global _NOTEBOOK_LOADED

    from .. import __version__
    from ..core.templates import NOTEBOOK_LOAD
    from ..util.serialization import make_id
    from ..resources import CDN
    from ..util.compiler import bundle_all_models

    if resources is None:
        resources = CDN

    if not hide_banner:
        if resources.mode == 'inline':
            js_info = 'inline'
            css_info = 'inline'
        else:
            js_info = resources.js_files[0] if len(resources.js_files) == 1 else resources.js_files
            css_info = resources.css_files[0] if len(resources.css_files) == 1 else resources.css_files

        warnings = ['Warning: ' + msg['text'] for msg in resources.messages if msg['type'] == 'warn']
        if _NOTEBOOK_LOADED and verbose:
            warnings.append('Warning: BokehJS previously loaded')

        element_id = make_id()
        html = NOTEBOOK_LOAD.render(element_id=element_id, verbose=verbose,
                                    js_info=js_info, css_info=css_info,
                                    bokeh_version=__version__, warnings=warnings)
    else:
        element_id = None

    _NOTEBOOK_LOADED = resources

    custom_models_js = bundle_all_models() or ''

    nb_js = _loading_js(resources, element_id, custom_models_js, load_timeout, register_mime=True)
    jl_js = _loading_js(resources, element_id, custom_models_js, load_timeout, register_mime=False)

    if not hide_banner:
        publish_display_data({'text/html': html})
    publish_display_data({JS_MIME_TYPE: nb_js, LOAD_MIME_TYPE: jl_js})
Prepare the IPython notebook for displaying Bokeh plots. Args: resources (Resource, optional) : how and where to load BokehJS from (default: CDN) verbose (bool, optional) : whether to report detailed settings (default: False) hide_banner (bool, optional): whether to hide the Bokeh banner (default: False) load_timeout (int, optional) : Timeout in milliseconds when plots assume load timed out (default: 5000) .. warning:: Clearing the output cell containing the published BokehJS resources HTML code may cause Bokeh CSS styling to be removed. Returns: None
codesearchnet
def add_node(self, transformer, name=None, children=None, parent=None,
             parameters={}, return_node=False):
    node = Node(transformer, name, **parameters)
    self.nodes[node.id] = node
    if parent is None:
        self.roots.append(node)
    else:
        parent = self.nodes[parent.id]
        parent.add_child(node)
    if children is not None:
        self.add_nodes(children, parent=node)
    if return_node:
        return node
Adds a node to the current graph. Args: transformer (str, Transformer): The pliers Transformer to use at the to-be-added node. Either a case-insensitive string giving the name of a Transformer class, or an initialized Transformer instance. name (str): Optional name to give this Node. children (list): Optional list of child nodes (i.e., nodes to pass the to-be-added node's Transformer output to). parent (Node): Optional node from which the to-be-added Node receives its input. parameters (dict): Optional keyword arguments to pass onto the Transformer initialized at this Node if a string is passed to the 'transformer' argument. Ignored if an already-initialized Transformer is passed. return_node (bool): If True, returns the initialized Node instance. Returns: The initialized Node instance if return_node is True, None otherwise.
codesearchnet
def sync_trial_info(self, job_path, expr_dir_name):
    expr_name = expr_dir_name[-8:]
    expr_path = os.path.join(job_path, expr_dir_name)
    if expr_name not in self._monitored_trials:
        self._create_trial_info(expr_path)
        self._monitored_trials.add(expr_name)
    else:
        self._update_trial_info(expr_path)
Load information of the trial from the given experiment directory. Create or update the trial information, together with the trial meta file. Args: job_path(str) expr_dir_name(str)
juraj-google-style
def _escape_token(token, alphabet):
    token = token.replace(u"\\", u"\\\\").replace(u"_", u"\\u")
    ret = [c if c in alphabet and c != u"\n" else r"\%d;" % ord(c)
           for c in token]
    return u"".join(ret) + "_"
r"""Replace characters that aren't in the alphabet and append "_" to token. Apply three transformations to the token: 1. Replace underline character "_" with "\u", and backslash "\" with "\\". 2. Replace characters outside of the alphabet with "\###;", where ### is the character's Unicode code point. 3. Appends "_" to mark the end of a token. Args: token: unicode string to be escaped alphabet: list of all known characters Returns: escaped string
juraj-google-style
def get(self, name):
    if name.startswith('#'):
        return self.tags.get(name[1:])
    return self.props.get(name)
Return a secondary property value from the Node. Args: name (str): The name of a secondary property. Returns: (obj): The secondary property value or None.
codesearchnet
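The entry above dispatches on a `'#'` prefix: tag lookups strip the prefix and read from `tags`, everything else reads from `props`. A sketch with a hypothetical minimal node class (the real Node carries much more state):

```python
class MiniNode:
    """Sketch of the '#'-prefix dispatch described above."""
    def __init__(self, props, tags):
        self.props = props
        self.tags = tags

    def get(self, name):
        if name.startswith('#'):
            return self.tags.get(name[1:])
        return self.props.get(name)

node = MiniNode(props={'name': 'host1'}, tags={'prod': {}})
```

Missing names in either mapping resolve to `None` via `dict.get`.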
def swd_read8(self, offset):
    value = self._dll.JLINK_SWD_GetU8(offset)
    return ctypes.c_uint8(value).value
Gets a unit of ``8`` bits from the input buffer. Args: self (JLink): the ``JLink`` instance offset (int): the offset (in bits) from which to start reading Returns: The integer read from the input buffer.
codesearchnet
def list(self, path, timeout=None):
    transport = DentFilesyncTransport(self.stream)
    transport.write_data('LIST', path, timeout)
    return (DeviceFileStat(dent_msg.name, dent_msg.mode,
                           dent_msg.size, dent_msg.time)
            for dent_msg in transport.read_until_done('DENT', timeout))
List directory contents on the device. Args: path: List the contents of this directory. timeout: Timeout to use for this operation. Returns: Generator yielding DeviceFileStat tuples representing the contents of the requested path.
juraj-google-style
def calc_inv_vol_weights(returns):
    vol = np.divide(1.0, np.std(returns, ddof=1))
    vol[np.isinf(vol)] = np.NaN
    volsum = vol.sum()
    return np.divide(vol, volsum)
Calculates weights proportional to inverse volatility of each column. Returns weights that are inversely proportional to the column's volatility resulting in a set of portfolio weights where each position has the same level of volatility. Note, that assets with returns all equal to NaN or 0 are excluded from the portfolio (their weight is set to NaN). Returns: Series {col_name: weight}
codesearchnet
def take_samples(self, sample_hz, sample_num, sample_offset=0, live=False):
    sys.stdout.flush()
    voltage = self.mon.GetVoltage()
    self.log.info('Taking samples at %dhz for %ds, voltage %.2fv.',
                  sample_hz, sample_num / sample_hz, voltage)
    sample_num += sample_offset
    self.mon.StopDataCollection()
    status = self.mon.GetStatus()
    native_hz = status['sampleRate'] * 1000
    self.mon.StartDataCollection()
    emitted = offset = 0
    collected = []
    history_deque = collections.deque()
    current_values = []
    timestamps = []
    try:
        last_flush = time.time()
        while emitted < sample_num or sample_num == -1:
            need = int((native_hz - offset + sample_hz - 1) / sample_hz)
            if need > len(collected):
                samples = self.mon.CollectData()
                if not samples:
                    break
                collected.extend(samples)
            else:
                offset += need * sample_hz
                while offset >= native_hz:
                    this_sample = sum(collected[:need]) / need
                    this_time = int(time.time())
                    timestamps.append(this_time)
                    if live:
                        self.log.info('%s %s', this_time, this_sample)
                    current_values.append(this_sample)
                    sys.stdout.flush()
                    offset -= native_hz
                    emitted += 1
                collected = collected[need:]
                now = time.time()
                if now - last_flush >= 0.99:
                    sys.stdout.flush()
                    last_flush = now
    except Exception as e:
        pass
    self.mon.StopDataCollection()
    try:
        return MonsoonData(current_values, timestamps, sample_hz, voltage,
                           offset=sample_offset)
    except:
        return None
Take samples of the current value supplied by monsoon. This is the actual measurement for power consumption. This function blocks until the number of samples requested has been fulfilled. Args: sample_hz: Number of points to take for every second. sample_num: Number of samples to take. sample_offset: The number of initial data points to discard in MonsoonData calculations. sample_num is extended by sample_offset to compensate. live: Print each sample in console as measurement goes on. Returns: A MonsoonData object representing the data obtained in this sampling. None if sampling is unsuccessful.
codesearchnet
def groups_setPurpose(self, *, channel: str, purpose: str, **kwargs) -> SlackResponse:
    kwargs.update({"channel": channel, "purpose": purpose})
    return self.api_call("groups.setPurpose", json=kwargs)
Sets the purpose for a private channel. Args: channel (str): The channel id. e.g. 'G1234567890' purpose (str): The new purpose for the channel. e.g. 'My Purpose'
juraj-google-style
def batch_workflow_cancel(self, batch_workflow_id):
    self.logger.debug('Cancel batch workflow: ' + batch_workflow_id)
    url = '%(base_url)s/batch_workflows/%(batch_id)s/cancel' % {
        'base_url': self.base_url,
        'batch_id': batch_workflow_id
    }
    r = self.gbdx_connection.post(url)
    return r.json()
Cancels GBDX batch workflow. Args: batch workflow_id (str): Batch workflow id. Returns: Batch Workflow status (str).
juraj-google-style
def _MaybePurgeOrphanedData(self, event):
    if not self.purge_orphaned_data:
        return
    if self.file_version and self.file_version >= 2:
        self._CheckForRestartAndMaybePurge(event)
    else:
        self._CheckForOutOfOrderStepAndMaybePurge(event)
    if event.HasField('summary'):
        self.most_recent_step = event.step
        self.most_recent_wall_time = event.wall_time
Maybe purge orphaned data due to a TensorFlow crash. When TensorFlow crashes at step T+O and restarts at step T, any events written after step T are now "orphaned" and will be at best misleading if they are included in TensorBoard. This logic attempts to determine if there is orphaned data, and purge it if it is found. Args: event: The event to use as a reference, to determine if a purge is needed.
juraj-google-style
def make_fn(shared_variable_store, device_id):
    variable_scope_access_index = {}
    assert isinstance(device_id, int)

    def create_new_variable(next_creator, **kwargs):
        canonical_name = _canonicalize_variable_name(kwargs.get('name'))
        v = next_creator(**kwargs)
        if canonical_name not in shared_variable_store:
            shared_variable_store[canonical_name] = []
        shared_variable_store[canonical_name].append(v)
        return v

    def reuse_variable(next_creator, **kwargs):
        del next_creator
        name = kwargs.get('name')
        canonical_name = _canonicalize_variable_name(name)
        try:
            variable_index = variable_scope_access_index.get(canonical_name, 0)
            v = shared_variable_store[canonical_name][variable_index]
            variable_scope_access_index[canonical_name] = variable_index + 1
            return v
        except (KeyError, IndexError):
            raise RuntimeError(
                'Tried to create variable {} with mismatching name on device {}'.format(
                    name, device_id))

    if device_id == 0:
        return create_new_variable
    else:
        return reuse_variable
Construct the variable creator function for device `device_id`. Constructs custom variable creator functions for the given device. On first device (device_id == 0), it creates the variable using the `next_creator`, and stores it in the provided `shared_variable_store`. On all other devices (device_id > 0), it tries to re-use the variable already created with the same name. If no such variable exists, it throws an error. Additionally, we de-uniquify variable names before checking for matches. This helps re-use variables which are intended to be the same but have different names due to variable uniquification happening upstream. Since this might mean we may have multiple variables with the same canonical name, we store them in a list per canonical name and return them in the same order as well. Args: shared_variable_store: A dictionary that we will use to store variables created on the first device, and re-used by creators for other devices. device_id: Integer index of the device whose creator should be constructed. Returns: An appropriate creator function based on device_id.
github-repos
def trace(state: State, fn: TransitionOperator, num_steps: IntTensor,
          trace_fn: Callable[[State, TensorNest], TensorNest]) -> Tuple[State, TensorNest]:
    def fn_wrapper(args, _):
        return tf.nest.map_structure(tf.convert_to_tensor, call_fn(fn, args[0]))

    def trace_fn_wrapper(args):
        return tf.nest.map_structure(tf.convert_to_tensor, call_fn(trace_fn, args))

    state = call_fn(fn, state)
    first_trace = trace_fn_wrapper(state)
    state, full_trace = mcmc_util.trace_scan(fn_wrapper, state,
                                             tf.ones(num_steps - 1),
                                             trace_fn=trace_fn_wrapper)
    prepend = lambda x, y: tf.concat(
        [tf.convert_to_tensor(value=x)[tf.newaxis], y], 0)
    return state, tf.nest.map_structure(prepend, first_trace, full_trace)
`TransitionOperator` that runs `fn` repeatedly and traces its outputs. Args: state: A nest of `Tensor`s or None. fn: A `TransitionOperator`. num_steps: Number of steps to run the function for. Must be greater than 1. trace_fn: Callable that takes the unpacked outputs of `fn` and returns a nest of `Tensor`s. These will be stacked and returned. Returns: state: The final state returned by `fn`. traces: Stacked outputs of `trace_fn`.
codesearchnet
def parse_variant_id(chrom, pos, ref, alt, variant_type):
    return generate_md5_key([chrom, pos, ref, alt, variant_type])
Parse the variant id for a variant variant_id is used to identify variants within a certain type of analysis. It is not human readable since it is a md5 key. Args: chrom(str) pos(str) ref(str) alt(str) variant_type(str): 'clinical' or 'research' Returns: variant_id(str): The variant id converted to md5 string
juraj-google-style
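The docstring above notes that the variant id is an md5 key rather than a human-readable string. A sketch with a hypothetical `generate_md5_key` helper (the real helper and its join delimiter are not shown; the underscore join here is an assumption):

```python
import hashlib

def generate_md5_key(list_of_str):
    """Hypothetical stand-in for the helper used above: hash the joined
    fields into a stable 32-character hex digest."""
    return hashlib.md5('_'.join(list_of_str).encode()).hexdigest()

def parse_variant_id(chrom, pos, ref, alt, variant_type):
    return generate_md5_key([chrom, pos, ref, alt, variant_type])

vid = parse_variant_id('1', '880086', 'T', 'C', 'clinical')
```

The same variant hashed under `'clinical'` and `'research'` yields different ids, which is the point of including `variant_type` in the key.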
def detailed_log_handler(self, handler):
    if not self.opened():
        handler = handler or util.noop
        self._detailed_log_handler = enums.JLinkFunctions.LOG_PROTOTYPE(handler)
        self._dll.JLINKARM_EnableLogCom(self._detailed_log_handler)
Setter for the detailed log handler function. Args: self (JLink): the ``JLink`` instance handler (function): the detailed log handler function to use. Returns: ``None``
juraj-google-style
def process_alias_create_namespace(namespace):
    namespace = filter_alias_create_namespace(namespace)
    _validate_alias_name(namespace.alias_name)
    _validate_alias_command(namespace.alias_command)
    _validate_alias_command_level(namespace.alias_name, namespace.alias_command)
    _validate_pos_args_syntax(namespace.alias_name, namespace.alias_command)
Validate input arguments when the user invokes 'az alias create'. Args: namespace: argparse namespace object.
juraj-google-style
def __init__(self, hashes):
    self.Root = MerkleTree.__Build([MerkleTreeNode(hash) for hash in hashes])
    depth = 1
    i = self.Root
    while i.LeftChild is not None:
        depth = depth + 1
        i = i.LeftChild
    self.Depth = depth
Create an instance. Args: hashes (list): each hash is of bytearray type.
juraj-google-style
def generate_rpcs(self, address):
    rpc_list = []
    for offset in range(2, len(self.data), 16):
        rpc = (address, rpcs.SET_CONFIG_VARIABLE, self.var_id, offset - 2,
               self.data[offset:offset + 16])
        rpc_list.append(rpc)
    return rpc_list
Generate the RPCs needed to stream this config variable to a tile. Args: address (int): The address of the tile that we should stream to. Returns: list of tuples: A list of argument tuples for each RPC. These tuples can be passed to EmulatedDevice.rpc to actually make the RPCs.
codesearchnet
def get_managed_ports(self, id_or_uri, port_id_or_uri=''):
    if port_id_or_uri:
        uri = self._client.build_uri(port_id_or_uri)
        if "/managedPorts" not in uri:
            uri = self._client.build_uri(id_or_uri) + "/managedPorts" + "/" + port_id_or_uri
    else:
        uri = self._client.build_uri(id_or_uri) + "/managedPorts"
    return self._client.get_collection(uri)
Gets all ports or a specific managed target port for the specified storage system. Args: id_or_uri: Can be either the storage system id or the storage system uri. port_id_or_uri: Can be either the port id or the port uri. Returns: dict: Managed ports.
juraj-google-style
def containsParamSubset(self, params):
    for key in params.keys():
        if key not in self.params:
            return False
        if params[key] != self.params[key]:
            return False
    return True
Test whether this element contains at least all `params`, or more. Args: params (dict/SpecialDict): Subset of parameters. Returns: bool: True if all `params` are contained in this element.
juraj-google-style
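The subset test above accepts when every queried key is present with an equal value and extra keys on the element are allowed. A sketch with a hypothetical minimal holder class:

```python
class Element:
    """Minimal holder demonstrating the parameter-subset test above."""
    def __init__(self, params):
        self.params = params

    def containsParamSubset(self, params):
        # Every key must exist with an equal value; extra keys are allowed.
        for key in params:
            if key not in self.params or params[key] != self.params[key]:
                return False
        return True

elem = Element({'a': 1, 'b': 2, 'c': 3})
```

For hashable values this is equivalent to the one-liner `params.items() <= self.params.items()`.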
def run(argv=None, save_main_session=True, test_pipeline=None) -> PipelineResult:
    known_args, pipeline_args = parse_known_args(argv)
    pipeline_options = PipelineOptions(pipeline_args)
    pipeline_options.view_as(SetupOptions).save_main_session = save_main_session
    requirements_dir = os.path.dirname(os.path.realpath(__file__))
    pipeline_options.view_as(SetupOptions).requirements_file = (
        f'{requirements_dir}/sklearn_examples_requirements.txt')
    model_loader = KeyedModelHandler(
        SklearnModelHandlerNumpy(
            model_file_type=ModelFileType.PICKLE,
            model_uri=known_args.model_path,
            large_model=known_args.large_model))
    pipeline = test_pipeline
    if not test_pipeline:
        pipeline = beam.Pipeline(options=pipeline_options)
    label_pixel_tuple = (
        pipeline
        | 'ReadFromInput' >> beam.io.ReadFromText(known_args.input)
        | 'PreProcessInputs' >> beam.Map(process_input))
    predictions = (
        label_pixel_tuple
        | 'RunInference' >> RunInference(model_loader)
        | 'PostProcessOutputs' >> beam.ParDo(PostProcessor()))
    _ = predictions | 'WriteOutput' >> beam.io.WriteToText(
        known_args.output, shard_name_template='', append_trailing_newlines=True)
    result = pipeline.run()
    result.wait_until_finish()
    return result
Args: argv: Command line arguments defined for this example. save_main_session: Used for internal testing. test_pipeline: Used for internal testing.
github-repos
def aggregate_grads(all_grads, colocation=False, devices=None, average=True):
    assert not (devices is not None and colocation)
    if devices is not None:
        assert isinstance(devices, list), devices
    nr_tower = len(all_grads)
    if nr_tower == 1:
        return all_grads[0]

    def aggregate(grads):
        if average:
            return tf.multiply(tf.add_n(grads), 1.0 / nr_tower)
        else:
            return tf.add_n(grads)

    ret = []
    for idx, grad_and_vars in enumerate(zip(*all_grads)):
        v = grad_and_vars[0][1]
        grads = [g for g, _ in grad_and_vars]
        if colocation:
            with tf.device(v.device):
                grad = aggregate(grads)
        elif devices is None:
            grad = aggregate(grads)
        else:
            dev = devices[idx % len(devices)]
            with tf.device(dev):
                grad = aggregate(grads)
        ret.append((grad, v))
    return ret
Average the gradients. Args: all_grads (K x N x 2): A list of K lists. Each of the list is a list of N (grad, var) tuples. The variables have to be the same across the K lists. colocation (bool): colocate gradient averaging on the device of the variable. devices (list[str]): assign the averaging to these device in round-robin. Cannot be used together with ``colocation``. average (bool): do average or sum Returns: (N x 2): A list of N (grad, var) tuples, where grad is averaged or summed over K.
codesearchnet
def bloom_gelu_back(g: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
    x = x[0]
    tanh_out = torch.tanh(0.79788456 * x * (1 + 0.044715 * x * x))
    ff = 0.5 * x * ((1 - tanh_out * tanh_out) * (0.79788456 + 0.1070322243 * x * x)) + 0.5 * (1 + tanh_out)
    return ff * g
Gradient of the tanh approximation of gelu. The gradient of the actual gelu is: 0.5 * (1. + torch.erf(x * 0.70710678)) + 0.3989423 * x * torch.exp(-0.5 * x * x) Args: g (`torch.tensor`): gradient output tensor x (`torch.tensor`): input tensor
github-repos
def list(self, **kwargs):
    resp = self.client.api.secrets(**kwargs)
    return [self.prepare_model(obj) for obj in resp]
List secrets. Similar to the ``docker secret ls`` command. Args: filters (dict): Server-side list filtering options. Returns: (list of :py:class:`Secret`): The secrets. Raises: :py:class:`docker.errors.APIError` If the server returns an error.
codesearchnet
def _add_qasm_reset(self, qubit):
    outcome, probability = self._get_measure_outcome(qubit)
    if outcome == '0':
        update = [[1 / np.sqrt(probability), 0], [0, 0]]
        self._add_unitary_single(update, qubit)
    else:
        update = [[0, 1 / np.sqrt(probability)], [0, 0]]
        self._add_unitary_single(update, qubit)
Apply a reset instruction to a qubit. Args: qubit (int): the qubit being reset This is done by simulating a measurement outcome and projecting onto the outcome state while renormalizing.
codesearchnet
def _convert_args(handler, args):
    args = list(args)
    params = inspect.signature(handler).parameters
    for i, (arg, name) in enumerate(zip(args, params)):
        default = params[name].default
        annotation = params[name].annotation
        if annotation != inspect.Parameter.empty:
            if isinstance(annotation, type) and annotation != str:
                args[i] = annotation(arg)
        elif default != inspect.Parameter.empty:
            if default is not None and not isinstance(default, str):
                args[i] = type(default)(arg)
    return args
Convert a list of command arguments to types specified by the handler. Args: handler: a command handler function. args: the list of string arguments to pass to handler. Returns: A new list containing `args` that have been converted to the expected type for `handler`. For each function parameter of `handler` that has either an explicit type annotation or a non-None default value, the corresponding element in `args` is converted to that type.
juraj-google-style
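The conversion rule above can be exercised end to end: annotated parameters are coerced through their annotation, unannotated parameters with a non-string, non-None default are coerced through the default's type, and string parameters pass through untouched. A self-contained sketch:

```python
import inspect

def convert_args(handler, args):
    """Sketch of annotation/default-driven coercion of string arguments."""
    args = list(args)
    params = inspect.signature(handler).parameters
    for i, (arg, name) in enumerate(zip(args, params)):
        default = params[name].default
        annotation = params[name].annotation
        if annotation != inspect.Parameter.empty:
            # Explicit annotation wins, unless it is str (no conversion needed).
            if isinstance(annotation, type) and annotation != str:
                args[i] = annotation(arg)
        elif default != inspect.Parameter.empty:
            # Fall back to the default value's type.
            if default is not None and not isinstance(default, str):
                args[i] = type(default)(arg)
    return args

# Hypothetical handler: `count` is coerced via its annotation,
# `scale` via its default's type, `label` stays a string.
def handler(count: int, scale=1.5, label='x'):
    return count, scale, label

converted = convert_args(handler, ['3', '2.5', 'y'])
```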
def update(self, friendly_name=None, description=None):
    self._get_info()
    if self._info:
        if friendly_name:
            self._info['friendlyName'] = friendly_name
        if description:
            self._info['description'] = description
        try:
            self._api.datasets_update(self._name_parts, self._info)
        except Exception as e:
            raise e
        finally:
            self._info = None
Selectively updates Dataset information. Args: friendly_name: if not None, the new friendly name. description: if not None, the new description.
codesearchnet
def get_element_from_tensor_info(tensor_info, graph=None, import_scope=None):
    graph = graph or ops.get_default_graph()
    return graph.as_graph_element(
        ops.prepend_name_scope(tensor_info.name, import_scope=import_scope))
Returns the element in the graph described by a TensorInfo proto. Args: tensor_info: A TensorInfo proto describing an Op or Tensor by name. graph: The tf.Graph in which tensors are looked up. If None, the current default graph is used. import_scope: If not None, names in `tensor_info` are prefixed with this string before lookup. Returns: Op or tensor in `graph` described by `tensor_info`. Raises: KeyError: If `tensor_info` does not correspond to an op or tensor in `graph`
github-repos
def __init__(self, fetches, feed_dict, run_options, run_metadata, run_call_count, is_callable_runner=False): self.fetches = fetches self.feed_dict = feed_dict self.run_options = run_options self.run_metadata = run_metadata self.run_call_count = run_call_count self.is_callable_runner = is_callable_runner
Constructor of `OnRunStartRequest`. Args: fetches: Fetch targets of the run() call. feed_dict: The feed dictionary to the run() call. run_options: RunOptions input to the run() call. run_metadata: RunMetadata input to the run() call. The above four arguments are identical to the input arguments to the run() method of a non-wrapped TensorFlow session. run_call_count: 1-based count of how many run calls (including this one) has been invoked. is_callable_runner: (bool) whether a runner returned by Session.make_callable is being run.
github-repos
def create_table_from(self, name, src): query = self.execute("SELECT sql FROM sqlite_master WHERE " "type='table' and name=?", (src,)) try: cmd = query.fetchone()[0] except TypeError: raise sql.OperationalError("Cannot copy non-existent table '{0}'" .format(src)) new_cmd = re.sub(r"(CREATE TABLE) \w+", "\\1 " + name, cmd, flags=re.IGNORECASE) self.execute(new_cmd)
Create a new table with same schema as the source. If the named table already exists, nothing happens. Arguments: name (str): The name of the table to create. src (str): The name of the source table to duplicate. Raises: sql.OperationalError: If source table does not exist.
juraj-google-style
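The schema-copy trick in `create_table_from` can be exercised directly against an in-memory SQLite database. Note that `re.IGNORECASE` must be passed as `flags=`: the fourth positional argument of `re.sub` is `count`, so passing the flag positionally silently misbehaves. Table names here are illustrative.

```python
import re
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE src (id INTEGER PRIMARY KEY, name TEXT)")

# Fetch the original CREATE statement from sqlite_master and rewrite
# the table name to produce a duplicate schema.
cmd = conn.execute(
    "SELECT sql FROM sqlite_master WHERE type='table' AND name=?", ("src",)
).fetchone()[0]
new_cmd = re.sub(r"(CREATE TABLE) \w+", r"\1 copy", cmd, flags=re.IGNORECASE)
conn.execute(new_cmd)

tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
print(sorted(tables))  # ['copy', 'src']
```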
def count(self, event_str, inc_int=1): self._event_dict.setdefault(event_str, 0) self._event_dict[event_str] += inc_int
Count an event. Args: event_str: The name of an event to count. Used as a key in the event dict. The same name will also be used in the summary. inc_int: int Optional argument to increase the count for the event by more than 1.
codesearchnet
def abs_url(self, url): parsed_url = urllib.parse.urlparse(url) if ((not parsed_url.scheme) and (not parsed_url.netloc)): return urllib.parse.urljoin(str(self.base_url), str(url)) else: return url
Given a relative or absolute URL, return an absolute URL. Args: url(basestring): A relative or absolute URL. Returns: str: An absolute URL.
codesearchnet
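The `abs_url` logic reduces to a scheme/netloc check plus `urllib.parse.urljoin`; a standalone function-level sketch (the base URL here is hypothetical):

```python
import urllib.parse

def abs_url(base_url, url):
    # Join against the base only when the URL is relative (no scheme
    # and no network location); absolute URLs pass through unchanged.
    parsed = urllib.parse.urlparse(url)
    if not parsed.scheme and not parsed.netloc:
        return urllib.parse.urljoin(str(base_url), str(url))
    return url

print(abs_url("https://api.example.com/v1/", "rooms"))
# https://api.example.com/v1/rooms
print(abs_url("https://api.example.com/v1/", "https://other.test/x"))
# https://other.test/x
```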
def un(byts): return msgpack.loads(byts, use_list=False, raw=False, unicode_errors='surrogatepass')
Use msgpack to de-serialize a python object. Args: byts (bytes): The bytes to de-serialize Notes: String objects are decoded using utf8 encoding. In order to handle potentially malformed input, ``unicode_errors='surrogatepass'`` is set to allow decoding bad input strings. Returns: obj: The de-serialized object
juraj-google-style
def CreateBiddingStrategy(client): bidding_strategy_service = client.GetService('BiddingStrategyService', version='v201809') shared_bidding_strategy = {'name': ('Maximize Clicks %s' % uuid.uuid4()), 'biddingScheme': {'xsi_type': 'TargetSpendBiddingScheme', 'bidCeiling': {'microAmount': '2000000'}}} operation = {'operator': 'ADD', 'operand': shared_bidding_strategy} response = bidding_strategy_service.mutate([operation]) new_bidding_strategy = response['value'][0] print(('Shared bidding strategy with name "%s" and ID "%s" of type "%s" was created.' % (new_bidding_strategy['name'], new_bidding_strategy['id'], new_bidding_strategy['biddingScheme']['BiddingScheme.Type']))) return new_bidding_strategy
Creates a bidding strategy object. Args: client: AdWordsClient the client to run the example with. Returns: dict An object representing a bidding strategy.
codesearchnet
def random_new_from_seed(seed: Hashable, algo: int=RNG_CMWC) -> tcod.random.Random: return tcod.random.Random(algo, seed)
Return a new Random instance using the given ``seed`` and ``algo``. Args: seed (Hashable): The RNG seed. Should be a 32-bit integer, but any hashable object is accepted. algo (int): The random number algorithm to use. Returns: Random: A new Random instance using the given algorithm.
codesearchnet
def matches(self, desc): desc_value_type = (desc.valueType or ValueType.STRING) return ((self.label_name == desc.key) and (self.value_type == desc_value_type))
Determines if a given label descriptor matches this enum instance. Args: desc (:class:`endpoints_management.gen.servicemanagement_v1_messages.LabelDescriptor`): the instance to test Returns: `True` if desc is supported, otherwise `False`
codesearchnet
def depth_january_average_ground_temperature(self, value=None): if (value is not None): try: value = float(value) except ValueError: raise ValueError('value {} need to be of type float for field `depth_january_average_ground_temperature`'.format(value)) self._depth_january_average_ground_temperature = value
Corresponds to IDD Field `depth_january_average_ground_temperature` Args: value (float): value for IDD Field `depth_january_average_ground_temperature` Unit: C if `value` is None it will not be checked against the specification and is assumed to be a missing value Raises: ValueError: if `value` is not a valid value
codesearchnet
def __init__(self, app=None, env=None, region=None, prop_path=None): self.app_name = app self.env = env self.region = region self.prop_path = prop_path self.properties = get_properties(properties_file=prop_path, env=env, region=region)
Lambda event object. Args: app (str): Application name env (str): Environment/Account region (str): AWS Region prop_path (str): Path of environment property file
juraj-google-style
def get_pipeline(self, name): check.str_param(name, 'name') if (name in self._pipeline_cache): return self._pipeline_cache[name] try: pipeline = self.pipeline_dict[name]() except KeyError: raise DagsterInvariantViolationError('Could not find pipeline "{name}". Found: {pipeline_names}.'.format(name=name, pipeline_names=', '.join(['"{pipeline_name}"'.format(pipeline_name=pipeline_name) for pipeline_name in self.pipeline_dict.keys()]))) check.invariant((pipeline.name == name), 'Name does not match. Name in dict {name}. Name in pipeline {pipeline.name}'.format(name=name, pipeline=pipeline)) self._pipeline_cache[name] = check.inst(pipeline, PipelineDefinition, 'Function passed into pipeline_dict with key {key} must return a PipelineDefinition'.format(key=name)) return pipeline
Get a pipeline by name. Only constructs that pipeline and caches it. Args: name (str): Name of the pipeline to retrieve Returns: PipelineDefinition: Instance of PipelineDefinition with that name.
codesearchnet
def parse_module_content(content: str) -> List[str]: objects = [] current_object = [] lines = content.split('\n') end_markers = [')', ']', '}', '"""'] for line in lines: is_valid_object = len(current_object) > 0 if is_valid_object and len(current_object) == 1: is_valid_object = not current_object[0].startswith('# Copied from') if not is_empty_line(line) and find_indent(line) == 0 and is_valid_object: if line in end_markers: current_object.append(line) objects.append('\n'.join(current_object)) current_object = [] else: objects.append('\n'.join(current_object)) current_object = [line] else: current_object.append(line) if len(current_object) > 0: objects.append('\n'.join(current_object)) return objects
Parse the content of a module in the list of objects it defines. Args: content (`str`): The content to parse Returns: `List[str]`: The list of objects defined in the module.
github-repos
def lint(filename, lines, config): (_, ext) = os.path.splitext(filename) if (ext in config): output = collections.defaultdict(list) for linter in config[ext]: linter_output = linter(filename, lines) for (category, values) in linter_output[filename].items(): output[category].extend(values) if ('comments' in output): output['comments'] = sorted(output['comments'], key=(lambda x: (x.get('line', (- 1)), x.get('column', (- 1))))) return {filename: dict(output)} else: return {filename: {'skipped': [('no linter is defined or enabled for files with extension "%s"' % ext)]}}
Lints a file. Args: filename: string: filename to lint. lines: list[int]|None: list of lines that we want to capture. If None, then all lines will be captured. config: dict[string: linter]: mapping from extension to a linter function. Returns: dict: if there were errors running the command then the field 'error' will have the reasons in a list. if the lint process was skipped, then a field 'skipped' will be set with the reasons. Otherwise, the field 'comments' will have the messages.
codesearchnet
def alexa(self) -> list: alexa_controls = [control.alexa() for control in self.controls] return alexa_controls
Returns list of Amazon Alexa compatible states of the RichMessage instance nested controls. Returns: alexa_controls: Amazon Alexa representation of RichMessage instance nested controls.
codesearchnet
def apply_modifications(model, custom_objects=None): model_path = os.path.join(tempfile.gettempdir(), next(tempfile._get_candidate_names()) + '.h5') try: model.save(model_path) return load_model(model_path, custom_objects=custom_objects) finally: os.remove(model_path)
Applies modifications to the model layers to create a new Graph. For example, simply changing `model.layers[idx].activation = new activation` does not change the graph. The entire graph needs to be updated with modified inbound and outbound tensors because of change in layer building function. Args: model: The `keras.models.Model` instance. custom_objects: Optional dictionary mapping names to custom classes or functions, passed through to `load_model` when the saved model is reloaded. Returns: The modified model with changes applied. Does not mutate the original `model`.
juraj-google-style
def _encode_gif(images, fps): writer = WholeVideoWriter(fps) writer.write_multi(images) return writer.finish()
Encodes numpy images into gif string. Args: images: A 4-D `uint8` `np.array` (or a list of 3-D images) of shape `[time, height, width, channels]` where `channels` is 1 or 3. fps: frames per second of the animation Returns: The encoded gif string. Raises: IOError: If the ffmpeg command returns an error.
codesearchnet
def kill(self, signal=None): return self.client.api.kill(self.id, signal=signal)
Kill or send a signal to the container. Args: signal (str or int): The signal to send. Defaults to ``SIGKILL`` Raises: :py:class:`docker.errors.APIError` If the server returns an error.
codesearchnet
def parse_auth(cls, entries, raise_on_error=False): conf = {} for registry, entry in six.iteritems(entries): if not isinstance(entry, dict): log.debug( 'Config entry for key {0} is not auth config'.format( registry ) ) if raise_on_error: raise errors.InvalidConfigFile( 'Invalid configuration for registry {0}'.format( registry ) ) return {} if 'identitytoken' in entry: log.debug( 'Found an IdentityToken entry for registry {0}'.format( registry ) ) conf[registry] = { 'IdentityToken': entry['identitytoken'] } continue if 'auth' not in entry: log.debug( 'Auth data for {0} is absent. Client might be using a ' 'credentials store instead.'.format(registry) ) conf[registry] = {} continue username, password = decode_auth(entry['auth']) log.debug( 'Found entry (registry={0}, username={1})' .format(repr(registry), repr(username)) ) conf[registry] = { 'username': username, 'password': password, 'email': entry.get('email'), 'serveraddress': registry, } return conf
Parses authentication entries Args: entries: Dict of authentication entries. raise_on_error: If set to true, an invalid format will raise InvalidConfigFile Returns: Authentication registry.
juraj-google-style
def _sample_proposals(self, matched_idxs: torch.Tensor, matched_labels: torch.Tensor, gt_classes: torch.Tensor): has_gt = gt_classes.numel() > 0 if has_gt: gt_classes = gt_classes[matched_idxs] gt_classes[matched_labels == 0] = self.bg_label gt_classes[matched_labels == -1] = -1 else: gt_classes = torch.zeros_like(matched_idxs) + self.bg_label sampled_fg_idxs, sampled_bg_idxs = subsample_labels(gt_classes, self.batch_size_per_image, self.positive_fraction, self.bg_label) sampled_idxs = torch.cat([sampled_fg_idxs, sampled_bg_idxs], dim=0) return (sampled_idxs, gt_classes[sampled_idxs])
Based on the matching between N proposals and M groundtruth, sample the proposals and set their classification labels. Args: matched_idxs (Tensor): a vector of length N, each is the best-matched gt index in [0, M) for each proposal. matched_labels (Tensor): a vector of length N, the matcher's label (one of cfg.MODEL.ROI_HEADS.IOU_LABELS) for each proposal. gt_classes (Tensor): a vector of length M. Returns: Tensor: a vector of indices of sampled proposals. Each is in [0, N). Tensor: a vector of the same length, the classification label for each sampled proposal. Each sample is labeled as either a category in [0, num_classes) or the background (num_classes).
github-repos
def _GetCurrentControlSet(self, key_path_suffix): select_key_path = 'HKEY_LOCAL_MACHINE\\System\\Select' select_key = self.GetKeyByPath(select_key_path) if (not select_key): return None control_set = None for value_name in ('Current', 'Default', 'LastKnownGood'): value = select_key.GetValueByName(value_name) if ((not value) or (not value.DataIsInteger())): continue control_set = value.GetDataAsObject() if ((control_set > 0) and (control_set <= 999)): break if ((not control_set) or (control_set <= 0) or (control_set > 999)): return None control_set_path = 'HKEY_LOCAL_MACHINE\\System\\ControlSet{0:03d}'.format(control_set) key_path = ''.join([control_set_path, key_path_suffix]) return self.GetKeyByPath(key_path)
Virtual key callback to determine the current control set. Args: key_path_suffix (str): current control set Windows Registry key path suffix with leading path separator. Returns: WinRegistryKey: the current control set Windows Registry key or None if not available.
codesearchnet
def _wrap_response(self, status=None, **kwargs): kwargs['status'] = (status if (status is not None) else self._status.OK) return kwargs
Convenience method to wrap a status with any keyword args. Args: status (enum): enum response status, defaults to OK Returns: dict: includes a 'status' attribute and any keyword arguments
codesearchnet
def get_oxi_state_decorated_structure(self, structure): s = structure.copy() if s.is_ordered: valences = self.get_valences(s) s.add_oxidation_state_by_site(valences) else: valences = self.get_valences(s) s = add_oxidation_state_by_site_fraction(s, valences) return s
Get an oxidation state decorated structure. This currently works for ordered structures only. Args: structure: Structure to analyze Returns: A modified structure that is oxidation state decorated. Raises: ValueError if the valences cannot be determined.
juraj-google-style
def _update_example(self, request): if (request.method != 'POST'): return http_util.Respond(request, {'error': 'invalid non-POST request'}, 'application/json', code=405) example_json = request.form['example'] index = int(request.form['index']) if (index >= len(self.examples)): return http_util.Respond(request, {'error': 'invalid index provided'}, 'application/json', code=400) new_example = self.example_class() json_format.Parse(example_json, new_example) self.examples[index] = new_example self.updated_example_indices.add(index) self.generate_sprite([ex.SerializeToString() for ex in self.examples]) return http_util.Respond(request, {}, 'application/json')
Updates the specified example. Args: request: A request that should contain 'index' and 'example'. Returns: An empty response.
codesearchnet
def register_to_random_name(grad_f): grad_f_name = ((grad_f.__name__ + '_') + str(uuid.uuid4())) tf.RegisterGradient(grad_f_name)(grad_f) return grad_f_name
Register a gradient function to a random string. In order to use a custom gradient in TensorFlow, it must be registered to a string. This is both a hassle, and -- because only one function can ever be registered to a string -- annoying to iterate on in an interactive environment. This function registers a function to a unique random string of the form: {FUNCTION_NAME}_{RANDOM_SALT} And then returns the random string. This is a helper in creating more convenient gradient overrides. Args: grad_f: gradient function to register. Should map (op, grad) -> grad(s) Returns: String that gradient function was registered to.
codesearchnet
def _viscounts2radiance(counts, slope, offset): rad = counts * slope + offset return rad.clip(min=0)
Convert VIS counts to radiance References: [VIS] Args: counts: Raw detector counts slope: Slope [W m-2 um-1 sr-1] offset: Offset [W m-2 um-1 sr-1] Returns: Radiance [W m-2 um-1 sr-1]
juraj-google-style
def _ReadFileHeader(self, file_object): data_type_map = self._GetDataTypeMap('keychain_file_header') file_header, _ = self._ReadStructureFromFileObject( file_object, 0, data_type_map) if file_header.signature != self._FILE_SIGNATURE: raise errors.ParseError('Unsupported file signature.') if (file_header.major_format_version != self._MAJOR_VERSION or file_header.minor_format_version != self._MINOR_VERSION): raise errors.ParseError('Unsupported format version: {0:d}.{1:d}.'.format( file_header.major_format_version, file_header.minor_format_version)) return file_header
Reads the file header. Args: file_object (file): file-like object. Returns: keychain_file_header: file header. Raises: ParseError: if the file header cannot be read.
juraj-google-style
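`_ReadFileHeader` follows a common binary-parsing pattern: read a fixed-layout header, check a magic signature, then validate versions. The sketch below uses the stdlib `struct` module with an invented 8-byte layout (4-byte signature plus two big-endian uint16 version fields); it is not the actual keychain format.

```python
import io
import struct

SIGNATURE = b"kych"  # hypothetical magic bytes, for illustration only

def read_file_header(file_object):
    # Layout: 4-byte signature, then major/minor versions as
    # big-endian unsigned 16-bit integers (invented for this sketch).
    data = file_object.read(8)
    signature, major, minor = struct.unpack(">4sHH", data)
    if signature != SIGNATURE:
        raise ValueError("Unsupported file signature.")
    return major, minor

buf = io.BytesIO(struct.pack(">4sHH", SIGNATURE, 1, 0))
print(read_file_header(buf))  # (1, 0)
```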
def to_dict(self): msg_dict = {} msg_dict['level'] = self.level msg_dict['message'] = self.message msg_dict['now_time'] = monotonic() msg_dict['created_time'] = self.created msg_dict['id'] = self.id msg_dict['count'] = self.count return msg_dict
Create a dictionary with the information in this message. Returns: dict: The dictionary with information
codesearchnet
def kill_task(self, task_type, task_id): assert self._mpr if not self._start_events[task_type][task_id].is_set() or self._finish_events[task_type][task_id].is_set(): raise ValueError("The task %s:%d doesn't exist." % (task_type, task_id)) self._finish_events[task_type][task_id].set() self._mpr._processes[task_type, task_id].join()
Kill a server given task_type and task_id. Args: task_type: the type of the task such as "worker". task_id: the id the task such as 1.
github-repos
def table(self, ref): try: obj_number = ObjectNumber.parse(ref) ds_obj_number = obj_number.as_dataset dataset = self._db.dataset(ds_obj_number) table = dataset.table(ref) except NotObjectNumberError: q = self.database.session.query(Table).filter((Table.name == str(ref))).order_by(Table.vid.desc()) table = q.first() if (not table): raise NotFoundError("No table for ref: '{}'".format(ref)) return table
Finds table by ref and returns it. Args: ref (str): id, vid (versioned id) or name of the table Raises: NotFoundError: if table with given ref not found. Returns: orm.Table
codesearchnet
def reset(self, indices=None): if self._store_rollouts and self.current_epoch is None: raise ValueError( "No current epoch. start_new_epoch() should first be called." ) if indices is None: indices = np.arange(self.batch_size) new_obs = self._reset(indices) if self._should_preprocess_on_reset: new_obs = self._preprocess_observations(new_obs) if self._store_rollouts: encoded_obs = self._encode_observations(new_obs) for (index, ob) in zip(indices, encoded_obs): frame = self._current_batch_frames[index] if frame is not None: rollout = self._current_batch_rollouts[index] rollout.append(frame._replace(action=0)) self._current_epoch_rollouts.append(rollout) self._current_batch_rollouts[index] = [] self._current_batch_frames[index] = Frame( observation=ob, reward=0, unclipped_reward=0, done=False, action=None ) return new_obs
Resets environments at given indices. Does any preprocessing and adds rollouts to history. Args: indices: Indices of environments to reset. Returns: Batch of initial observations of reset environments. Raises: ValueError: when there's no current epoch.
juraj-google-style
def close(self): if (not self._is_open): return else: try: self.proxy.close() self._is_open = False except Exception as e: self.logger.error('could not close client connection: %s', e) raise
Close the client connection. Raises: Exception: if an error occurs while trying to close the connection
codesearchnet
def to_str(self, *, content_only: bool=False, **kwargs) -> str: if content_only: return self.content return '\n'.join([v for v in ['<html>', self.head_section, self.body_section, '</html>'] if v])
Returns the HTML str. Args: content_only: If True, only the content will be returned. **kwargs: Additional keyword arguments passed from the user that will be ignored. Returns: The generated HTML str.
github-repos
def create_wf_instances(self, roles=None): if roles: wf_instances = [ WFInstance( wf=self.wf, current_actor=role, task=self, name=self.wf.name ) for role in roles ] else: wf_instances = [ WFInstance( wf=self.wf, task=self, name=self.wf.name ) ] if self.task_type in ["C", "D"]: return [wfi.save() for wfi in wf_instances] else: wf_obj_instances = [] for wfi in wf_instances: role = wfi.current_actor if self.task_type == "A" else None keys = self.get_object_keys(role) wf_obj_instances.extend( [WFInstance( wf=self.wf, current_actor=role, task=self, name=self.wf.name, wf_object=key, wf_object_type=self.object_type ).save() for key in keys] ) return wf_obj_instances
Creates wf instances. Args: roles (list): role list Returns: (list): wf instances
juraj-google-style
def __add__(self, period_tensor): period_type = period_tensor.period_type() if period_type == constants.PeriodType.DAY: ordinals = self._ordinals + period_tensor.quantity() return from_ordinals(ordinals) if period_type == constants.PeriodType.WEEK: return self + periods.PeriodTensor(period_tensor.quantity() * 7, constants.PeriodType.DAY) def adjust_day(year, month, day): return tf.math.minimum(day, _num_days_in_month(month, year)) if period_type == constants.PeriodType.MONTH: m = self._months - 1 + period_tensor.quantity() y = self._years + m m = m % 12 + 1 d = adjust_day(y, m, self._days) return from_year_month_day(y, m, d, validate=False) if period_type == constants.PeriodType.YEAR: y = self._years + period_tensor.quantity() m = tf.broadcast_to(self._months, tf.shape(y)) d = adjust_day(y, m, self._days) return from_year_month_day(y, m, d, validate=False) raise ValueError('Unrecognized period type: {}'.format(period_type))
Adds a tensor of periods. When adding months or years, the resulting day of the month is decreased to the largest valid value if necessary. E.g. 31.03.2020 + 1 month = 30.04.2020, 29.02.2020 + 1 year = 28.02.2021. Args: period_tensor: A `PeriodTensor` object broadcastable to the shape of "self". Returns: The new instance of DateTensor. #### Example ```python dates = tff.datetime.dates_from_tuples([(2020, 2, 25), (2020, 3, 31)]) new_dates = dates + tff.datetime.month() # DateTensor([(2020, 3, 25), (2020, 4, 30)]) new_dates = dates + tff.datetime.month([1, 2]) # DateTensor([(2020, 3, 25), (2020, 5, 31)]) ```
github-repos
def is_attribute_supported(self, attribute): if (attribute not in self._attribute_rule_sets.keys()): return False rule_set = self._attribute_rule_sets.get(attribute) if (self._version >= rule_set.version_added): return True else: return False
Check if the attribute is supported by the current KMIP version. Args: attribute (string): The name of the attribute (e.g., 'Cryptographic Algorithm'). Required. Returns: bool: True if the attribute is supported by the current KMIP version. False otherwise.
codesearchnet
def domain_dimension(self): if self.shape.rank is None: return tensor_shape.Dimension(None) else: return self.shape.dims[-1]
Dimension (in the sense of vector spaces) of the domain of this operator. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `N`. Returns: `Dimension` object.
github-repos
def begin_episode(self, agent_indices): with tf.name_scope('begin_episode/'): if self._last_state is None: reset_state = tf.no_op() else: reset_state = utility.reinit_nested_vars( self._last_state, agent_indices) reset_buffer = self._current_episodes.clear(agent_indices) with tf.control_dependencies([reset_state, reset_buffer]): return tf.constant('')
Reset the recurrent states and stored episode. Args: agent_indices: Tensor containing current batch indices. Returns: Summary tensor.
juraj-google-style
def _resource_apply_sparse(self, grad, handle, indices, apply_state): raise NotImplementedError('Must be implemented in subclasses.')
Add ops to apply sparse gradients to the variable `handle`. Similar to `_apply_sparse`, the `indices` argument to this method has been de-duplicated. Optimizers which deal correctly with non-unique indices may instead override `_resource_apply_sparse_duplicate_indices` to avoid this overhead. Args: grad: a `Tensor` representing the gradient for the affected indices. handle: a `Tensor` of dtype `resource` which points to the variable to be updated. indices: a `Tensor` of integral type representing the indices for which the gradient is nonzero. Indices are unique. apply_state: A dict which is used across multiple apply calls. Returns: An `Operation` which updates the value of the variable.
github-repos
def add_squashed_change(self, path, data): assert self._squashed_count, 'Called while not squashing changes' self._squashed_changes.append([path[1:], data])
Register a squashed change to a particular path. Args: path (list): The path of what has changed, relative to the Block data (object): The new data
codesearchnet
def _ConvertFieldValuePair(self, js, message): names = [] message_descriptor = message.DESCRIPTOR fields_by_json_name = dict((f.json_name, f) for f in message_descriptor.fields) for name in js: try: field = fields_by_json_name.get(name, None) if not field: field = message_descriptor.fields_by_name.get(name, None) if not field: if self.ignore_unknown_fields: continue raise ParseError( 'Message type "{0}" has no field named "{1}".'.format( message_descriptor.full_name, name)) if name in names: raise ParseError('Message type "{0}" should not have multiple ' '"{1}" fields.'.format( message.DESCRIPTOR.full_name, name)) names.append(name) if field.containing_oneof is not None: oneof_name = field.containing_oneof.name if oneof_name in names: raise ParseError('Message type "{0}" should not have multiple ' '"{1}" oneof fields.'.format( message.DESCRIPTOR.full_name, oneof_name)) names.append(oneof_name) value = js[name] if value is None: if (field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_MESSAGE and field.message_type.full_name == 'google.protobuf.Value'): sub_message = getattr(message, field.name) sub_message.null_value = 0 else: message.ClearField(field.name) continue if _IsMapEntry(field): message.ClearField(field.name) self._ConvertMapFieldValue(value, message, field) elif field.label == descriptor.FieldDescriptor.LABEL_REPEATED: message.ClearField(field.name) if not isinstance(value, list): raise ParseError('repeated field {0} must be in [] which is ' '{1}.'.format(name, value)) if field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_MESSAGE: for item in value: sub_message = getattr(message, field.name).add() if (item is None and sub_message.DESCRIPTOR.full_name != 'google.protobuf.Value'): raise ParseError('null is not allowed to be used as an element' ' in a repeated field.') self.ConvertMessage(item, sub_message) else: for item in value: if item is None: raise ParseError('null is not allowed to be used as an element' ' in a repeated field.') getattr(message, field.name).append( _ConvertScalarFieldValue(item, field)) elif field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_MESSAGE: sub_message = getattr(message, field.name) sub_message.SetInParent() self.ConvertMessage(value, sub_message) else: setattr(message, field.name, _ConvertScalarFieldValue(value, field)) except ParseError as e: if field and field.containing_oneof is None: raise ParseError('Failed to parse {0} field: {1}'.format(name, e)) else: raise ParseError(str(e)) except ValueError as e: raise ParseError('Failed to parse {0} field: {1}.'.format(name, e)) except TypeError as e: raise ParseError('Failed to parse {0} field: {1}.'.format(name, e))
Convert field value pairs into regular message. Args: js: A JSON object to convert the field value pairs. message: A regular protocol message to record the data. Raises: ParseError: In case of problems converting.
juraj-google-style
def OpenAndRead(relative_path='debugger-blacklist.yaml'): try: with open(os.path.join(sys.path[0], relative_path), 'r') as f: return Read(f) except IOError: return None
Attempts to find the yaml configuration file, then read it. Args: relative_path: Optional relative path override. Returns: A Config object if the open and read were successful, None if the file does not exist (which is not considered an error). Raises: Error (some subclass): As thrown by the called Read() function.
juraj-google-style
def _compute_llama3_parameters(config: PretrainedConfig, device: 'torch.device', seq_len: Optional[int]=None, **rope_kwargs) -> tuple['torch.Tensor', float]: inv_freq, attention_factor = _compute_default_rope_parameters(config, device, seq_len, **rope_kwargs) factor = config.rope_scaling['factor'] low_freq_factor = config.rope_scaling['low_freq_factor'] high_freq_factor = config.rope_scaling['high_freq_factor'] old_context_len = config.rope_scaling['original_max_position_embeddings'] low_freq_wavelen = old_context_len / low_freq_factor high_freq_wavelen = old_context_len / high_freq_factor wavelen = 2 * math.pi / inv_freq inv_freq_llama = torch.where(wavelen > low_freq_wavelen, inv_freq / factor, inv_freq) smooth_factor = (old_context_len / wavelen - low_freq_factor) / (high_freq_factor - low_freq_factor) smoothed_inv_freq = (1 - smooth_factor) * inv_freq_llama / factor + smooth_factor * inv_freq_llama is_medium_freq = ~(wavelen < high_freq_wavelen) * ~(wavelen > low_freq_wavelen) inv_freq_llama = torch.where(is_medium_freq, smoothed_inv_freq, inv_freq_llama) return (inv_freq_llama, attention_factor)
Computes the inverse frequencies for llama 3.1. Args: config ([`~transformers.PretrainedConfig`]): The model configuration. device (`torch.device`): The device to use for initialization of the inverse frequencies. seq_len (`int`, *optional*): The current sequence length. Unused for this type of RoPE. rope_kwargs (`Dict`, *optional*): BC compatibility with the previous RoPE class instantiation, will be removed in v4.45. Returns: Tuple of (`torch.Tensor`, `float`), containing the inverse frequencies for the RoPE embeddings and the post-processing scaling factor applied to the computed cos/sin.
github-repos
def merge(self, x=None, y=None, ildj=None, kwargs=None, mapping=None): if (mapping is None): mapping = _Mapping(x=x, y=y, ildj=ildj, kwargs=kwargs) elif any(((arg is not None) for arg in [x, y, ildj, kwargs])): raise ValueError('Cannot simultaneously specify mapping and individual arguments.') return _Mapping(x=self._merge(self.x, mapping.x), y=self._merge(self.y, mapping.y), ildj=self._merge(self.ildj, mapping.ildj), kwargs=self._merge(self.kwargs, mapping.kwargs, use_equals=True))
Returns new _Mapping with args merged with self. Args: x: `Tensor` or None. Input to forward; output of inverse. y: `Tensor` or None. Input to inverse; output of forward. ildj: `Tensor`. This is the (un-reduce_sum'ed) inverse log det jacobian. kwargs: Python dictionary. Extra args supplied to forward/inverse/etc functions. mapping: Instance of _Mapping to merge. Can only be specified if no other arg is specified. Returns: mapping: New instance of `_Mapping` which has inputs merged with self. Raises: ValueError: if mapping and any other arg is not `None`.
codesearchnet
def __get_valid_form_data_elements(self, soup): elements = [] for element in soup.find_all(["input", "button", "textarea", "select"]): if element.has_attr("name"): elements.append(element) return elements
Get all valid form input elements. Note: An element is valid when the value can be updated client-side and the element has a name attribute. Args: soup (obj): The BeautifulSoup form. Returns: list(obj): Soup elements.
juraj-google-style
def _div_python2(x, y, name=None): with ops.name_scope(name, 'div', [x, y]) as name: x = ops.convert_to_tensor(x, name='x') y = ops.convert_to_tensor(y, name='y', dtype=x.dtype.base_dtype) x_dtype = x.dtype.base_dtype y_dtype = y.dtype.base_dtype if x_dtype != y_dtype: raise TypeError(f'`x` and `y` must have the same dtype, got {x_dtype!r} != {y_dtype!r}.') if x_dtype.is_floating or x_dtype.is_complex: return gen_math_ops.real_div(x, y, name=name) else: return gen_math_ops.floor_div(x, y, name=name)
Divide two values using Python 2 semantics. Used for Tensor.__div__. Args: x: `Tensor` numerator of real numeric type. y: `Tensor` denominator of real numeric type. name: A name for the operation (optional). Returns: `x / y` returns the quotient of x and y.
github-repos
def from_config(cls, config, custom_objects=None): if 'lr' in config: config['learning_rate'] = config.pop('lr') if 'learning_rate' in config: if isinstance(config['learning_rate'], dict): config['learning_rate'] = learning_rate_schedule.deserialize(config['learning_rate'], custom_objects=custom_objects) return cls(**config)
Creates an optimizer from its config. This method is the reverse of `get_config`, capable of instantiating the same optimizer from the config dictionary. Args: config: A Python dictionary, typically the output of get_config. custom_objects: A Python dictionary mapping names to additional Python objects used to create this optimizer, such as a function used for a hyperparameter. Returns: An optimizer instance.
github-repos
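The backward-compatible key rename at the top of `from_config` can be isolated as a small helper (a sketch; the real method mutates the config dict in place rather than copying it):

```python
def normalize_optimizer_config(config):
    """Accept the legacy 'lr' key and rename it to 'learning_rate'."""
    config = dict(config)  # work on a copy instead of mutating the caller's dict
    if 'lr' in config:
        config['learning_rate'] = config.pop('lr')
    return config
```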
def _get_elements(self, url, key, eclass, id=None, name=None):
    if id is not None and name is not None:
        raise ValueError('id and name cannot be specified together')
    json_elements = self.rest_client.make_request(url)[key]
    return [eclass(element, self.rest_client)
            for element in json_elements
            if _exact_resource(element, id) and _matching_resource(element, name)]
Get elements matching `id` or `name` Args: url(str): url of children. key(str): key in the returned JSON. eclass(subclass type of :py:class:`_ResourceElement`): element class to create instances of. id(str, optional): only return resources whose `id` property matches the given `id` name(str, optional): only return resources whose `name` property matches the given `name` Returns: list(_ResourceElement): List of `eclass` instances Raises: ValueError: both `id` and `name` are specified together
codesearchnet
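The id/name filtering logic can be sketched over plain dicts (the `_exact_resource` and `_matching_resource` helpers are approximated here by exact comparisons on the `'id'` and `'name'` keys; the real helpers may match differently):

```python
def filter_elements(elements, id=None, name=None):
    """Return elements whose 'id' or 'name' matches; a None filter matches all."""
    if id is not None and name is not None:
        raise ValueError('id and name cannot be specified together')

    def exact(element):
        return id is None or element.get('id') == id

    def matching(element):
        return name is None or element.get('name') == name

    return [e for e in elements if exact(e) and matching(e)]
```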
def outline(self, level=logging.INFO, message=""): steps = 1 logger.log(level, "Plan \"%s\":", self.description) for step in self.steps: logger.log( level, " - step: %s: target: \"%s\", action: \"%s\"", steps, step.name, step.fn.__name__, ) steps += 1 if message: logger.log(level, message)
Print an outline of the actions the plan is going to take. The outline will represent the rough ordering of the steps that will be taken. Args: level (int, optional): a valid log level that should be used to log the outline message (str, optional): a message that will be logged to the user after the outline has been logged.
juraj-google-style
def reset_from_key_counter(self, key, counter): counter = _convert_to_state_tensor(counter) key = _convert_to_state_tensor(key) counter.shape.assert_is_compatible_with([_get_state_size(self.algorithm) - 1]) key.shape.assert_is_compatible_with([]) key = array_ops.reshape(key, [1]) state = array_ops.concat([counter, key], 0) self._state_var.assign(state)
Resets the generator by a new key-counter pair. See `from_key_counter` for the meaning of "key" and "counter". Args: key: the new key. counter: the new counter.
github-repos
def export(self, chunk_size=DEFAULT_DATA_CHUNK_SIZE): return self.client.api.export(self.id, chunk_size)
Export the contents of the container's filesystem as a tar archive. Args: chunk_size (int): The number of bytes returned by each iteration of the generator. If ``None``, data will be streamed as it is received. Default: 2 MB Returns: (str): The filesystem tar archive Raises: :py:class:`docker.errors.APIError` If the server returns an error.
codesearchnet
def get_items(self, page=1, order_by=None, filters=None):
    start = (page - 1) * self.per_page
    query = self.get_query()
    if order_by is not None:
        query = query.order_by(self._get_field(order_by))
    if filters is not None:
        query = self._filter(query, filters)
    return query.offset(start).limit(self.per_page), self.count(query)
Fetch items from the database matching the given criteria.

Args:
    page (int): which page will be sliced; slice size is ``self.per_page``.
    order_by (str): a field name to order the query by.
    filters (dict): a ``filter name``: ``value`` dict.

Returns:
    tuple with:
        the items for the requested page (at most ``self.per_page`` of them),
        and the total number of matching items before slicing.
juraj-google-style
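The offset arithmetic for 1-based page numbers is easy to get wrong by one; a minimal sketch (names illustrative):

```python
def page_bounds(page, per_page):
    """Return (offset, limit) for a 1-based page number."""
    start = (page - 1) * per_page
    return start, per_page
```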
def _FindLargestIdPostfixNumber(self, schedule):
    postfix_number_re = re.compile(r'(\d+)$')

    def ExtractPostfixNumber(entity_id):
        """Try to extract an integer from the end of entity_id.

        If entity_id is None or if there is no integer ending the id, zero is
        returned.

        Args:
            entity_id: An id string or None.

        Returns:
            An integer ending the entity_id or zero.
        """
        if entity_id is None:
            return 0
        match = postfix_number_re.search(entity_id)
        if match is not None:
            return int(match.group(1))
        else:
            return 0

    id_data_sets = {'agency_id': schedule.GetAgencyList(),
                    'stop_id': schedule.GetStopList(),
                    'route_id': schedule.GetRouteList(),
                    'trip_id': schedule.GetTripList(),
                    'service_id': schedule.GetServicePeriodList(),
                    'fare_id': schedule.GetFareAttributeList(),
                    'shape_id': schedule.GetShapeList()}
    max_postfix_number = 0
    for id_name, entity_list in id_data_sets.items():
        for entity in entity_list:
            entity_id = getattr(entity, id_name)
            postfix_number = ExtractPostfixNumber(entity_id)
            max_postfix_number = max(max_postfix_number, postfix_number)
    return max_postfix_number
Finds the largest integer used as the ending of an id in the schedule. Args: schedule: The schedule to check. Returns: The maximum integer used as an ending for an id.
codesearchnet
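The postfix-extraction step can be sketched standalone; the regex `(\d+)$` anchors to the end of the id, so only a trailing run of digits counts (function names here are illustrative):

```python
import re

_POSTFIX_RE = re.compile(r'(\d+)$')


def extract_postfix_number(entity_id):
    """Return the integer ending entity_id, or 0 if it is None or has none."""
    if entity_id is None:
        return 0
    match = _POSTFIX_RE.search(entity_id)
    return int(match.group(1)) if match else 0


def largest_postfix_number(ids):
    """Return the largest trailing integer across all ids, or 0 for none."""
    return max((extract_postfix_number(i) for i in ids), default=0)
```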
def find_emails_by_subject(self, subject, limit=50, match_recipient=None):
    self._mail.select("inbox")
    matching_uids = self.__search_email_by_subject(subject, match_recipient)
    # Honor the documented limit on the number of matches returned.
    return matching_uids[:limit]
Searches for emails by subject.

Returns the matching emails' IMAP message UIDs as a list.

Args:
    subject (str): Subject to search for.

Kwargs:
    limit (int): Limit the search to this many matches, default 50.
    match_recipient (str): Recipient that must match exactly
        (ignored if not specified).

Returns:
    list: List of integers representing IMAP message UIDs.
juraj-google-style
def handle(self, message): logger.debug(message) if Utilities.isNotEmpty(message['metadata']['opts']): target = message['metadata']['opts']['target'] self.botThread.create_message(target, message['text'])
Attempts to send a message to the specified destination in Discord. Extends Legobot.Lego.handle() Args: message (Legobot.Message): message w/ metadata to send.
juraj-google-style
def _add_function(self, func, identify_observed): key = self.make_key(func) if key not in self.observers: self.observers[key] = ObserverFunction( func, identify_observed, (key, self.observers)) return True else: return False
Add a function as an observer. Args: func: The function to register as an observer. identify_observed: See docstring for add_observer. Returns: True if the function is added, otherwise False.
juraj-google-style
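A minimal sketch of the registration idiom, assuming `make_key` exists to give bound methods a stable identity (since `obj.method` creates a fresh bound-method object on every attribute access, keying on the object itself would let the same observer register twice; class and method names are illustrative):

```python
class Observable:
    def __init__(self):
        self.observers = {}

    @staticmethod
    def make_key(func):
        # Bound methods compare by instance + underlying function so the
        # same method registered twice maps to the same key; plain
        # functions key on themselves.
        if hasattr(func, '__self__'):
            return (func.__self__, func.__func__)
        return func

    def add_function(self, func):
        """Register func as an observer; return True only on first add."""
        key = self.make_key(func)
        if key not in self.observers:
            self.observers[key] = func
            return True
        return False
```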