Columns: code (string, lengths 20–4.93k) · docstring (string, lengths 33–1.27k) · source (3 classes)
def _MergeEntities(self, a, b):
    def _MergeAgencyId(a_agency_id, b_agency_id):
        """Merge two agency ids.

        The only difference between this and _MergeIdentical() is that the
        values None and '' are regarded as being the same.

        Args:
          a_agency_id: The first agency id.
          b_agency_id: The second agency id.

        Returns:
          The merged agency id.

        Raises:
          MergeError: The agency ids could not be merged.
        """
        a_agency_id = a_agency_id or None
        b_agency_id = b_agency_id or None
        return self._MergeIdentical(a_agency_id, b_agency_id)

    scheme = {'agency_id': _MergeAgencyId,
              'agency_name': self._MergeIdentical,
              'agency_url': self._MergeIdentical,
              'agency_timezone': self._MergeIdentical}
    return self._SchemedMerge(scheme, a, b)
Merges two agencies. To be merged, they are required to have the same id, name, url and timezone. The remaining language attribute is taken from the new agency. Args: a: The first agency. b: The second agency. Returns: The merged agency. Raises: MergeError: The agencies could not be merged.
codesearchnet
def total_cost_function(self, item_a, item_b, time_a, time_b): distances = np.zeros(len(self.weights)) for c, component in enumerate(self.cost_function_components): distances[c] = component(item_a, time_a, item_b, time_b, self.max_values[c]) total_distance = np.sum(self.weights * distances) return total_distance
Calculate total cost function between two items. Args: item_a: STObject item_b: STObject time_a: Timestep in item_a at which cost function is evaluated time_b: Timestep in item_b at which cost function is evaluated Returns: The total weighted distance between item_a and item_b
juraj-google-style
def object_hook(self, object_dict): instance = self.decoder(object_dict) self.condition_list.append(instance) self.index += 1 return self.index
Hook which, when passed into a json.JSONDecoder, will replace each dict in a JSON string with its index and convert the dict to an object as defined by the passed-in condition_decoder. The newly created condition object is appended to the condition_list. Args: object_dict: Dict representing an object. Returns: An index which will be used as the placeholder in the condition_structure
codesearchnet
def _get_augmented_label_matrix(self, L, higher_order=False):
    self.c_data = {}
    for i in range(self.m):
        self.c_data[i] = {
            'start_index': i * self.k,
            'end_index': (i + 1) * self.k,
            'max_cliques': set(j for j in self.c_tree.nodes()
                               if i in self.c_tree.node[j]['members']),
        }
    L_ind = self._create_L_ind(L)
    if higher_order:
        L_aug = np.copy(L_ind)
        for item in chain(self.c_tree.nodes(), self.c_tree.edges()):
            if isinstance(item, int):
                C = self.c_tree.node[item]
                C_type = 'node'
            elif isinstance(item, tuple):
                C = self.c_tree[item[0]][item[1]]
                C_type = 'edge'
            else:
                raise ValueError(item)
            members = list(C['members'])
            nc = len(members)
            if nc == 1:
                C['start_index'] = members[0] * self.k
                C['end_index'] = (members[0] + 1) * self.k
            else:
                L_C = np.ones((self.n, self.k ** nc))
                for i, vals in enumerate(product(range(self.k), repeat=nc)):
                    for j, v in enumerate(vals):
                        L_C[:, i] *= L_ind[:, members[j] * self.k + v]
                if L_aug is not None:
                    C['start_index'] = L_aug.shape[1]
                    C['end_index'] = L_aug.shape[1] + L_C.shape[1]
                    L_aug = np.hstack([L_aug, L_C])
                else:
                    C['start_index'] = 0
                    C['end_index'] = L_C.shape[1]
                    L_aug = L_C
            id = tuple(members) if len(members) > 1 else members[0]
            self.c_data[id] = {
                'start_index': C['start_index'],
                'end_index': C['end_index'],
                'max_cliques': set([item]) if C_type == 'node' else set(item),
            }
        return L_aug
    else:
        return L_ind
Returns an augmented version of L where each column is an indicator for whether a certain source or clique of sources voted in a certain pattern. Args: L: An [n,m] scipy.sparse label matrix with values in {0,1,...,k}
codesearchnet
def check_file(self, fs, info): if self.exclude is not None and fs.match(self.exclude, info.name): return False return fs.match(self.filter, info.name)
Check if a filename should be included. Override to exclude files from the walk. Arguments: fs (FS): A filesystem instance. info (Info): A resource info object. Returns: bool: `True` if the file should be included.
juraj-google-style
def shrink(script, iterations=1): filter_xml = ' <filter name="Erode Selection"/>\n' for _ in range(iterations): util.write_filter(script, filter_xml) return None
Shrink (erode, reduce) the current set of selected faces Args: script: the FilterScript object or script filename to write the filter to. iterations (int): the number of times to shrink the selection. Layer stack: No impacts MeshLab versions: 2016.12 1.3.4BETA
juraj-google-style
def strace_clear(self, handle): data = ctypes.c_int(handle) res = self._dll.JLINK_STRACE_Control(enums.JLinkStraceCommand.TRACE_EVENT_CLR, ctypes.byref(data)) if (res < 0): raise errors.JLinkException('Failed to clear STRACE event.') return None
Clears the trace event specified by the given handle. Args: self (JLink): the ``JLink`` instance. handle (int): handle of the trace event. Returns: ``None`` Raises: JLinkException: on error.
codesearchnet
def create_redis_client(redis_address, password=None): (redis_ip_address, redis_port) = redis_address.split(':') return redis.StrictRedis(host=redis_ip_address, port=int(redis_port), password=password)
Create a Redis client. Args: redis_address: The ip:port address of the Redis server. password: The password of the Redis server, if any. Returns: A Redis client.
codesearchnet
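The address handling above splits an "ip:port" string before building the client. A minimal standalone sketch of that parsing step (a hypothetical helper, not part of the library's API):

```python
def parse_address(redis_address):
    # Split an "ip:port" string into host and integer port.
    host, port = redis_address.split(':')
    return host, int(port)
```

Note this simple split assumes an IPv4 host; an IPv6 literal such as `[::1]:6379` would need dedicated handling.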
def _GetDayOfYear(self, year, month, day_of_month): if (month not in range(1, 13)): raise ValueError('Month value out of bounds.') days_per_month = self._GetDaysPerMonth(year, month) if ((day_of_month < 1) or (day_of_month > days_per_month)): raise ValueError('Day of month value out of bounds.') day_of_year = day_of_month for past_month in range(1, month): day_of_year += self._GetDaysPerMonth(year, past_month) return day_of_year
Retrieves the day of the year for a specific day of a month in a year. Args: year (int): year e.g. 1970. month (int): month, where 1 represents January. day_of_month (int): day of the month, where 1 represents the first day. Returns: int: day of year. Raises: ValueError: if the month or day of month value is out of bounds.
codesearchnet
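The month-accumulation logic above can be sketched standalone with the stdlib `calendar` module taking the place of the class's `_GetDaysPerMonth` (so the leap-year rule comes for free):

```python
import calendar

def day_of_year(year, month, day_of_month):
    # Validate month and day-of-month bounds first.
    if month not in range(1, 13):
        raise ValueError('Month value out of bounds.')
    days_in_month = calendar.monthrange(year, month)[1]
    if day_of_month < 1 or day_of_month > days_in_month:
        raise ValueError('Day of month value out of bounds.')
    # Accumulate the days of all preceding months in the same year.
    total = day_of_month
    for past_month in range(1, month):
        total += calendar.monthrange(year, past_month)[1]
    return total
```

For example, March 1st of a leap year lands on day 61 (31 + 29 + 1).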
def _broadcast_arg(U, arg, argtype, name): if ((arg is None) or isinstance(arg, argtype)): return [arg for _ in range(U.ndim)] elif np.iterable(arg): if (len(arg) != U.ndim): raise ValueError('Parameter {} was specified as a sequence of incorrect length. The length must match the number of tensor dimensions (U.ndim={})'.format(name, U.ndim)) elif (not all([isinstance(a, argtype) for a in arg])): raise TypeError('Parameter {} specified as a sequence of incorrect type. Expected {}.'.format(name, argtype)) else: return arg else: raise TypeError('Parameter {} specified as a {}. Expected {}.'.format(name, type(arg), argtype))
Broadcasts plotting option `arg` to all factors. Args: U : KTensor arg : argument provided by the user argtype : expected type for arg name : name of the variable, used for error handling Returns: iterable version of arg of length U.ndim
codesearchnet
def sqrt(x): zero = _constant_to_tensor(0.0, x.dtype.base_dtype) x = math_ops.maximum(x, zero) return math_ops.sqrt(x)
Element-wise square root. This function clips negative tensor values to 0 before computing the square root. Args: x: Tensor or variable. Returns: A tensor.
github-repos
def FindFileContainingSymbol(self, symbol): symbol = _NormalizeFullyQualifiedName(symbol) try: return self._descriptors[symbol].file except KeyError: pass try: return self._enum_descriptors[symbol].file except KeyError: pass try: file_proto = self._internal_db.FindFileContainingSymbol(symbol) except KeyError as error: if self._descriptor_db: file_proto = self._descriptor_db.FindFileContainingSymbol(symbol) else: raise error if not file_proto: raise KeyError('Cannot find a file containing %s' % symbol) return self._ConvertFileProtoToFileDescriptor(file_proto)
Gets the FileDescriptor for the file containing the specified symbol. Args: symbol: The name of the symbol to search for. Returns: A FileDescriptor that contains the specified symbol. Raises: KeyError: if the file can not be found in the pool.
juraj-google-style
def combs(a, r): if r == 0: return np.asarray([]) a = np.asarray(a) data_type = a.dtype if r == 0 else np.dtype([('', a.dtype)] * r) b = np.fromiter(combinations(a, r), data_type) return b.view(a.dtype).reshape(-1, r)
NumPy implementation of ``itertools.combinations``. Return successive ``r``-length combinations of elements in the array ``a``. Args: a (np.ndarray): The array from which to get combinations. r (int): The length of the combinations. Returns: np.ndarray: An array of combinations.
juraj-google-style
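The same enumeration can be sketched without NumPy using `itertools.combinations` directly; this hypothetical stdlib analogue returns a list of tuples rather than a 2-D array:

```python
from itertools import combinations

def combs_list(a, r):
    # Successive r-length combinations of elements in a, as tuples.
    if r == 0:
        return []
    return list(combinations(a, r))
```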
def ms_bot_framework(self) -> dict: rich_card = {} buttons = [button.ms_bot_framework() for button in self.content] rich_card['buttons'] = buttons if self.text: rich_card['title'] = self.text attachments = [{'contentType': 'application/vnd.microsoft.card.thumbnail', 'content': rich_card}] out_activity = {} out_activity['type'] = 'message' out_activity['attachments'] = attachments return out_activity
Returns MS Bot Framework compatible state of the ButtonsFrame instance. Creates an MS Bot Framework activity blank with a RichCard in "attachments". The RichCard is populated with CardActions corresponding to the buttons embedded in the ButtonsFrame. Returns: control_json: MS Bot Framework representation of ButtonsFrame state.
codesearchnet
def filter(self, scored_list): if (len(scored_list) > 0): avg = np.mean([s[1] for s in scored_list]) std = np.std([s[1] for s in scored_list]) else: avg = 0 std = 0 limiter = (avg + (0.5 * std)) mean_scored = [(sent_idx, score) for (sent_idx, score) in scored_list if (score > limiter)] return mean_scored
Filtering with std. Args: scored_list: The list of (index, score) tuples. Returns: The list of filtered results.
codesearchnet
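The mean-plus-half-a-standard-deviation threshold above can be sketched with the stdlib `statistics` module instead of NumPy (`pstdev` is the population standard deviation, matching `np.std`'s default):

```python
import statistics

def filter_scored(scored_list):
    # Keep (index, score) pairs whose score exceeds mean + 0.5 * std.
    if scored_list:
        scores = [s[1] for s in scored_list]
        avg = statistics.fmean(scores)
        std = statistics.pstdev(scores)
    else:
        avg = 0
        std = 0
    limiter = avg + 0.5 * std
    return [(idx, score) for idx, score in scored_list if score > limiter]
```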
def compress_artifact_if_supported(artifact_path): content_type, encoding = guess_content_type_and_encoding(artifact_path) log.debug('"{}" is encoded with "{}" and has mime/type "{}"'.format(artifact_path, encoding, content_type)) if encoding is None and content_type in _GZIP_SUPPORTED_CONTENT_TYPE: log.info('"{}" can be gzip\'d. Compressing...'.format(artifact_path)) with open(artifact_path, 'rb') as f_in: text_content = f_in.read() with gzip.open(artifact_path, 'wb') as f_out: f_out.write(text_content) encoding = 'gzip' log.info('"{}" compressed'.format(artifact_path)) else: log.debug('"{}" is not supported for compression.'.format(artifact_path)) return content_type, encoding
Compress artifacts with GZip if they're known to be supported. This replaces the artifact given by a gzip binary. Args: artifact_path (str): the path to compress Returns: content_type, content_encoding (tuple): Type and encoding of the file. Encoding equals 'gzip' if compressed.
juraj-google-style
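A self-contained sketch of the gzip-in-place pattern above, using the stdlib `mimetypes.guess_type` in place of the module's `guess_content_type_and_encoding` and an assumed (hypothetical) whitelist of compressible types:

```python
import gzip
import mimetypes
import os
import tempfile

GZIP_SUPPORTED = {'text/plain', 'application/json'}  # assumed subset

def compress_if_supported(path):
    # Guess type/encoding; gzip the file in place when it is a known
    # compressible type and not already encoded.
    content_type, encoding = mimetypes.guess_type(path)
    if encoding is None and content_type in GZIP_SUPPORTED:
        with open(path, 'rb') as f_in:
            raw = f_in.read()
        with gzip.open(path, 'wb') as f_out:
            f_out.write(raw)
        encoding = 'gzip'
    return content_type, encoding
```

Readers can recover the original bytes with `gzip.open(path, 'rb').read()`, which is why replacing the file in place is safe for consumers that honor the `gzip` content encoding.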
def from_service_account_file(cls, filename, **kwargs): (info, signer) = _service_account_info.from_filename(filename, require=['client_email', 'token_uri']) return cls._from_signer_and_info(signer, info, **kwargs)
Creates a Credentials instance from a service account json file. Args: filename (str): The path to the service account json file. kwargs: Additional arguments to pass to the constructor. Returns: google.auth.service_account.Credentials: The constructed credentials.
codesearchnet
def set_parameter(self, name, value): i = self.get_parameter_names(include_frozen=True).index(name) v = self.get_parameter_vector(include_frozen=True) v[i] = value self.set_parameter_vector(v, include_frozen=True)
Set a parameter value by name. Args: name: The name of the parameter. value (float): The new value for the parameter.
codesearchnet
def get_attribute(self, attribute: str) -> 'Node': matches = [ value_node for key_node, value_node in self.yaml_node.value if key_node.value == attribute ] if len(matches) != 1: raise SeasoningError( 'Attribute not found, or found multiple times: {}'.format( matches)) return Node(matches[0])
Returns the node representing the given attribute's value. Use only if is_mapping() returns true. Args: attribute: The name of the attribute to retrieve. Raises: SeasoningError: If the attribute does not exist, or is found multiple times. Returns: A node representing the value.
juraj-google-style
def add_observers(self, count, date_observed): if (not self.can_update()): self._tcex.handle_error(910, [self.type]) data = {'count': count, 'dateObserved': self._utils.format_datetime(date_observed, date_format='%Y-%m-%dT%H:%M:%SZ')} return self.tc_requests.add_observations(self.api_type, self.api_sub_type, self.unique_id, data, owner=self.owner)
Adds an Indicator Observation. Args: count: The number of times the indicator was observed. date_observed: The date the indicator was observed.
codesearchnet
def assert_non_singular(self, name='assert_non_singular'): with self._name_scope(name): return self._assert_non_singular()
Returns an `Op` that asserts this operator is non singular. This operator is considered non-singular if ``` ConditionNumber < max{100, range_dimension, domain_dimension} * eps, eps := np.finfo(self.dtype.as_numpy_dtype).eps ``` Args: name: A string name to prepend to created ops. Returns: An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is singular.
github-repos
def get_keys(self, alias_name, key_format): uri = ((((self.URI + '/keys/') + alias_name) + '?format=') + key_format) return self._client.get(uri)
Retrieves the contents of a PKCS12 file in the format specified. This PKCS12-formatted file contains both the certificate and the key file data. Args: alias_name: Key pair associated with the RabbitMQ. key_format: Valid key formats are Base64 and PKCS12. Returns: dict: RabbitMQ certificate
codesearchnet
def from_poscar_string(poscar_string, transformations=None): p = Poscar.from_string(poscar_string) if not p.true_names: raise ValueError("Transformation can be created only from POSCAR " "strings with proper VASP5 element symbols.") raw_string = re.sub(r"'", "\"", poscar_string) s = p.structure source_info = {"source": "POSCAR", "datetime": str(datetime.datetime.now()), "original_file": raw_string} return TransformedStructure(s, transformations, history=[source_info])
Generates TransformedStructure from a poscar string. Args: poscar_string (str): Input POSCAR string. transformations ([Transformations]): Sequence of transformations to be applied to the input structure.
juraj-google-style
def header_string_from_file(filename='feff.inp'): with zopen(filename, 'r') as fobject: f = fobject.readlines() feff_header_str = [] ln = 0 try: feffpmg = f[0].find('pymatgen') except IndexError: feffpmg = False if feffpmg: nsites = int(f[8].split()[2]) for line in f: ln += 1 if (ln <= (nsites + 9)): feff_header_str.append(line) else: end = 0 for line in f: if (((line[0] == '*') or (line[0] == 'T')) and (end == 0)): feff_header_str.append(line.replace('\r', '')) else: end = 1 return ''.join(feff_header_str)
Reads the header string from either a HEADER file or a feff.inp file. Will also read a header from a non-pymatgen-generated feff.inp file. Args: filename: File name containing the header data. Returns: Header string.
codesearchnet
def _load_hdf5(self, filename, parent_level="CellpyData"):
    if not os.path.isfile(filename):
        self.logger.info(f"file does not exist: (unknown)")
        raise IOError
    store = pd.HDFStore(filename)
    required_keys = ['dfdata', 'dfsummary', 'info']
    required_keys = ["/" + parent_level + "/" + _ for _ in required_keys]
    for key in required_keys:
        if key not in store.keys():
            self.logger.info(f"This hdf-file is not good enough - "
                             f"at least one key is missing: {key}")
            raise Exception(f"OH MY GOD! At least one crucial key "
                            f"is missing {key}!")
    self.logger.debug(f"Keys in current hdf5-file: {store.keys()}")
    data = DataSet()
    if parent_level != "CellpyData":
        self.logger.debug("Using non-default parent label for the "
                          "hdf-store: {}".format(parent_level))
    infotable = store.select(parent_level + "/info")
    try:
        data.cellpy_file_version = \
            self._extract_from_dict(infotable, "cellpy_file_version")
    except Exception as e:
        data.cellpy_file_version = 0
        warnings.warn(f"Unhandled exception raised: {e}")
    if data.cellpy_file_version < MINIMUM_CELLPY_FILE_VERSION:
        raise WrongFileVersion
    if data.cellpy_file_version > CELLPY_FILE_VERSION:
        raise WrongFileVersion
    data.dfsummary = store.select(parent_level + "/dfsummary")
    data.dfdata = store.select(parent_level + "/dfdata")
    try:
        data.step_table = store.select(parent_level + "/step_table")
    except Exception as e:
        self.logger.debug("could not get step_table from cellpy-file")
        data.step_table = pd.DataFrame()
        warnings.warn(f"Unhandled exception raised: {e}")
    try:
        fidtable = store.select(parent_level + "/fidtable")
        fidtable_selected = True
    except Exception as e:
        self.logger.debug("could not get fid-table from cellpy-file")
        fidtable = []
        warnings.warn("no fidtable - you should update your hdf5-file")
        fidtable_selected = False
    self.logger.debug(" h5")
    newtests = []
    data = self._load_infotable(data, infotable, filename)
    if fidtable_selected:
        data.raw_data_files, data.raw_data_files_length = \
            self._convert2fid_list(fidtable)
    else:
        data.raw_data_files = None
        data.raw_data_files_length = None
    newtests.append(data)
    store.close()
    return newtests
Load a cellpy-file. Args: filename (str): Name of the cellpy file. parent_level (str) (optional): name of the parent level (defaults to "CellpyData") Returns: loaded datasets (DataSet-object)
juraj-google-style
def __init__(self, channel): self.ListAlertPolicies = channel.unary_unary( "/google.monitoring.v3.AlertPolicyService/ListAlertPolicies", request_serializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_alert__service__pb2.ListAlertPoliciesRequest.SerializeToString, response_deserializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_alert__service__pb2.ListAlertPoliciesResponse.FromString, ) self.GetAlertPolicy = channel.unary_unary( "/google.monitoring.v3.AlertPolicyService/GetAlertPolicy", request_serializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_alert__service__pb2.GetAlertPolicyRequest.SerializeToString, response_deserializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_alert__pb2.AlertPolicy.FromString, ) self.CreateAlertPolicy = channel.unary_unary( "/google.monitoring.v3.AlertPolicyService/CreateAlertPolicy", request_serializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_alert__service__pb2.CreateAlertPolicyRequest.SerializeToString, response_deserializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_alert__pb2.AlertPolicy.FromString, ) self.DeleteAlertPolicy = channel.unary_unary( "/google.monitoring.v3.AlertPolicyService/DeleteAlertPolicy", request_serializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_alert__service__pb2.DeleteAlertPolicyRequest.SerializeToString, response_deserializer=google_dot_protobuf_dot_empty__pb2.Empty.FromString, ) self.UpdateAlertPolicy = channel.unary_unary( "/google.monitoring.v3.AlertPolicyService/UpdateAlertPolicy", request_serializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_alert__service__pb2.UpdateAlertPolicyRequest.SerializeToString, response_deserializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_alert__pb2.AlertPolicy.FromString, )
Constructor. Args: channel: A grpc.Channel.
juraj-google-style
def projection_error(nodes, projected): relative_err = np.linalg.norm((nodes - projected), ord='fro') if (relative_err != 0.0): relative_err /= np.linalg.norm(nodes, ord='fro') return relative_err
Compute the error between ``nodes`` and the projected nodes. .. note:: This is a helper for :func:`maybe_reduce`, which is in turn a helper for :func:`_full_reduce`. Hence there is no corresponding Fortran speedup. For now, just compute the relative error in the Frobenius norm. But, we may wish to consider the error per row / point instead. Args: nodes (numpy.ndarray): Nodes in a curve. projected (numpy.ndarray): The ``nodes`` projected into the space of degree-elevated nodes. Returns: float: The relative error.
codesearchnet
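The relative Frobenius-norm error above can be sketched in pure Python over nested lists, without NumPy (a hypothetical standalone analogue, not the library's implementation):

```python
import math

def projection_error(nodes, projected):
    # Frobenius norm of the difference between two equally shaped 2-D lists.
    diff_sq = sum((a - b) ** 2
                  for row_a, row_b in zip(nodes, projected)
                  for a, b in zip(row_a, row_b))
    relative_err = math.sqrt(diff_sq)
    # Normalize by the Frobenius norm of nodes, unless the error is zero.
    if relative_err != 0.0:
        norm = math.sqrt(sum(a * a for row in nodes for a in row))
        relative_err /= norm
    return relative_err
```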
def expert_dot_product(q, k, v, info_q, info_k): length_q = common_layers.shape_list(q)[0] length_k = common_layers.shape_list(k)[0] depth_v = v.get_shape().as_list()[(- 1)] bias = attention_bias_coordinates(info_q.coordinates, info_k.coordinates) if (info_k.order is not None): bias += attention_bias_future(info_q.order, info_k.order) (q, k, v) = [tf.expand_dims(tf.expand_dims(t, 0), 0) for t in (q, k, v)] def is_zero(): zeros = tf.zeros(shape=[1, 1, length_q, depth_v], dtype=tf.float32) zeros = tf.Print(zeros, [length_k, length_q], 'length_k/length_q: ') return zeros def is_not_zero(): return dot_product_attention(q, k, v, bias=bias, make_image_summary=False) v_out = tf.cond(tf.logical_or(tf.equal(length_q, 0), tf.equal(length_k, 0)), is_zero, is_not_zero) v_out = tf.squeeze(v_out, axis=0) v_out = tf.squeeze(v_out, axis=0) return v_out
Perform dot product on a subset of the sequence. Can add a mask to the attention to prevent sequences to attend to each other and to prevent attention to the future. Args: q (tf.Tensor): Queries of shape [length_expert_q, depth_k] k (tf.Tensor): Keys of shape [length_expert_k, depth_k] v (tf.Tensor): Values of shape [length_expert_k, depth_v] info_q (BatchInfo): Batch info for queries. If None, no mask is added info_k (BatchInfo): Batch info for keys Returns: tf.Tensor: dot product attention output ([length_expert_q, depth_v])
codesearchnet
def _FusedBatchNormGradGrad(op: ops.Operation, *grad): data_format = op.get_attr('data_format') epsilon = op.get_attr('epsilon') is_training = op.get_attr('is_training') grad_y = op.inputs[0] x = op.inputs[1] scale = op.inputs[2] pop_mean = op.inputs[3] pop_var = op.inputs[4] grad_grad_x = grad[0] grad_grad_scale = grad[1] grad_grad_offset = grad[2] with backprop.GradientTape() as tape: tape.watch(grad_y) tape.watch(x) tape.watch(scale) grad_x, grad_scale, grad_offset = _BatchNormGrad(grad_y, x, scale, pop_mean, pop_var, epsilon, data_format, is_training) grad_initial = [grad_grad_x, grad_grad_scale, grad_grad_offset] grad_grad_y, grad_x, grad_scale = tape.gradient([grad_x, grad_scale, grad_offset], [grad_y, x, scale], grad_initial) return (grad_grad_y, grad_x, grad_scale, None, None)
Returns the gradients for the 3 inputs of FusedBatchNormGrad. Args: op: The FusedBatchNormGradOp for which we need to compute gradients. *grad: An argument list for tensors of gradients wrt the outputs with grad[0] as grad_grad_x, grad[1] as grad_grad_scale, grad[2] as grad_grad_offset. Returns: A tuple (grad_grad_y, grad_x, grad_scale, None, None), where grad_grad_y is the gradient for grad_y, grad_x the gradient for x, grad_scale the gradient for scale.
github-repos
def _add_write_pbs(self, write_pbs): if self._read_only: raise ValueError(_WRITE_READ_ONLY) super(Transaction, self)._add_write_pbs(write_pbs)
Add `Write`` protobufs to this transaction. Args: write_pbs (List[google.cloud.proto.firestore.v1beta1.\ write_pb2.Write]): A list of write protobufs to be added. Raises: ValueError: If this transaction is read-only.
codesearchnet
def recipe_bucket(config, auth_write, bucket_bucket, bucket_emails, bucket_groups): bucket(config, {'auth': auth_write, 'bucket': bucket_bucket, 'emails': bucket_emails, 'groups': bucket_groups})
Create and permission a bucket in Storage. Args: auth_write (authentication) - Credentials used for writing data. bucket_bucket (string) - Name of Google Cloud Bucket to create. bucket_emails (string_list) - Comma separated emails. bucket_groups (string_list) - Comma separated groups.
github-repos
def plot_normal_cdf(rbound=None, lbound=None, mean=0, sd=1): shade = ((rbound is not None) or (lbound is not None)) shade_left = ((rbound is not None) and (lbound is not None)) inf = (3.5 * sd) step = 0.1 rlabel = rbound llabel = lbound if (rbound is None): rbound = (inf + mean) rlabel = '$\\infty$' if (lbound is None): lbound = ((- inf) + mean) llabel = '-$\\infty$' pdf_range = np.arange(((- inf) + mean), (inf + mean), step) plt.plot(pdf_range, stats.norm.pdf(pdf_range, loc=mean, scale=sd), color='k', lw=1) cdf_range = np.arange(lbound, (rbound + step), step) if shade: plt.fill_between(cdf_range, stats.norm.pdf(cdf_range, loc=mean, scale=sd), color='gold') if shade_left: cdf_range = np.arange(((- inf) + mean), (lbound + step), step) plt.fill_between(cdf_range, stats.norm.pdf(cdf_range, loc=mean, scale=sd), color='darkblue') plt.ylim(0, (stats.norm.pdf(0, loc=0, scale=sd) * 1.25)) plt.xlabel('z') plt.ylabel('$\\phi$(z)', rotation=90) plt.title('Normal Curve ~ ($\\mu$ = {0}, $\\sigma$ = {1}) {2} < z < {3}'.format(mean, sd, llabel, rlabel), fontsize=16) plt.show()
Plots a normal curve with specified parameters and area below curve shaded between ``lbound`` and ``rbound``. Args: ``rbound`` (numeric): right boundary of shaded region ``lbound`` (numeric): left boundary of shaded region; by default is negative infinity ``mean`` (numeric): mean/expectation of normal distribution ``sd`` (numeric): standard deviation of normal distribution
codesearchnet
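The shaded region in the plot is just the normal CDF evaluated between the bounds; that area can be computed exactly with the stdlib error function (a standalone sketch, no SciPy):

```python
import math

def normal_cdf(z, mean=0.0, sd=1.0):
    # P(X <= z) for X ~ Normal(mean, sd), via the error function.
    return 0.5 * (1.0 + math.erf((z - mean) / (sd * math.sqrt(2.0))))

def shaded_area(lbound=None, rbound=None, mean=0.0, sd=1.0):
    # Area under the normal curve between the bounds; None means +/- infinity.
    lo = 0.0 if lbound is None else normal_cdf(lbound, mean, sd)
    hi = 1.0 if rbound is None else normal_cdf(rbound, mean, sd)
    return hi - lo
```

For example, `shaded_area(lbound=-1, rbound=1)` recovers the familiar ~68.3% one-sigma mass.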
def import_gssapi_extension(name): try: path = 'gssapi.raw.ext_{0}'.format(name) __import__(path) return sys.modules[path] except ImportError: return None
Import a GSSAPI extension module. This method imports a GSSAPI extension module based on the name of the extension (not including the 'ext_' prefix). If the extension is not available, the method returns None. Args: name (str): the name of the extension Returns: module: Either the extension module or None
codesearchnet
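The optional-import pattern above generalizes to any module name; `importlib.import_module` is tidier than calling `__import__` and digging through `sys.modules` (a generic sketch, not the gssapi helper itself):

```python
import importlib

def import_optional(name):
    # Return the module if importable, else None.
    try:
        return importlib.import_module(name)
    except ImportError:
        return None
```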
def start_of_chunk(prev_tag, tag, prev_type, type_): chunk_start = False if tag == 'B': chunk_start = True if tag == 'S': chunk_start = True if prev_tag == 'E' and tag == 'E': chunk_start = True if prev_tag == 'E' and tag == 'I': chunk_start = True if prev_tag == 'S' and tag == 'E': chunk_start = True if prev_tag == 'S' and tag == 'I': chunk_start = True if prev_tag == 'O' and tag == 'E': chunk_start = True if prev_tag == 'O' and tag == 'I': chunk_start = True if tag != 'O' and tag != '.' and prev_type != type_: chunk_start = True return chunk_start
Checks if a chunk started between the previous and current word. Args: prev_tag: previous chunk tag. tag: current chunk tag. prev_type: previous type. type_: current type. Returns: chunk_start: boolean.
juraj-google-style
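The cascade of `if` statements above collapses to three rules; a condensed standalone restatement of the IOBES boundary logic (a sketch equivalent to, but not literally, the function above):

```python
def starts_chunk(prev_tag, tag, prev_type, type_):
    # A chunk starts on B or S; on I/E right after a chunk end (E/S) or
    # outside text (O); or whenever the entity type changes mid-tag.
    if tag in ('B', 'S'):
        return True
    if prev_tag in ('E', 'S', 'O') and tag in ('E', 'I'):
        return True
    if tag not in ('O', '.') and prev_type != type_:
        return True
    return False
```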
def start_day_cycle(self, day_length): if (day_length <= 0): raise HolodeckException('The given day length should be above 0!') self._should_write_to_command_buffer = True command_to_send = DayCycleCommand(True) command_to_send.set_day_length(day_length) self._commands.add_command(command_to_send)
Queue up a day cycle command to start the day cycle. It will be applied when `tick` or `step` is called next. The sky sphere will now update each tick with an updated sun angle as it moves about the sky. The length of a day will be roughly equivalent to the number of minutes given. Args: day_length (int): The number of minutes each day will be.
codesearchnet
def indicator_associations_types( self, main_type, sub_type, unique_id, association_type, api_branch=None, api_entity=None, owner=None, params=None, ): params = params or {} if owner: params['owner'] = owner api_branch = api_branch or association_type.api_sub_type api_entity = api_entity or association_type.api_entity if not sub_type: url = '/v2/{}/{}/indicators/{}'.format(main_type, unique_id, api_branch) else: url = '/v2/{}/{}/{}/indicators/{}'.format(main_type, sub_type, unique_id, api_branch) for iat in self._iterate(url, params, api_entity): yield iat
Args: owner: main_type: sub_type: unique_id: association_type: api_branch: api_entity: params: Returns:
juraj-google-style
def avg(self, vars_list: List[str]) -> 'TensorFluent': operand = self if (operand.dtype == tf.bool): operand = operand.cast(tf.float32) return self._aggregation_op(tf.reduce_mean, operand, vars_list)
Returns the TensorFluent for the avg aggregation function. Args: vars_list: The list of variables to be aggregated over. Returns: A TensorFluent wrapping the avg aggregation function.
codesearchnet
def batch_flatten(x): x = array_ops.reshape(x, array_ops_stack.stack([-1, prod(shape(x)[1:])])) return x
Turn a nD tensor into a 2D tensor with same 0th dimension. In other words, it flattens each data samples of a batch. Args: x: A tensor or variable. Returns: A tensor. Examples: Flattening a 3D tensor to 2D by collapsing the last dimension. >>> x_batch = tf.keras.backend.ones(shape=(2, 3, 4, 5)) >>> x_batch_flatten = batch_flatten(x_batch) >>> tf.keras.backend.int_shape(x_batch_flatten) (2, 60)
github-repos
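The "flatten everything after the 0th dimension" operation can be sketched on plain nested lists, without TensorFlow (a hypothetical stdlib analogue of `batch_flatten`):

```python
def batch_flatten(batch):
    # Collapse all dims after the 0th: each sample becomes one flat list.
    def flatten(x):
        if isinstance(x, list):
            return [v for item in x for v in flatten(item)]
        return [x]
    return [flatten(sample) for sample in batch]
```

A batch of shape (2, 2, 2) flattens to shape (2, 4), mirroring the `(B, C*H*W)` reshape in the Keras version.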
def __init__(self, model_name: str, columns: list[str], api_key: Optional[str]=None, organization: Optional[str]=None, dimensions: Optional[int]=None, user: Optional[str]=None, max_batch_size: Optional[int]=None, **kwargs): self.model_name = model_name self.api_key = api_key self.organization = organization self.dimensions = dimensions self.user = user self.max_batch_size = max_batch_size super().__init__(columns=columns, **kwargs)
Embedding Config for OpenAI Text Embedding models. Text Embeddings are generated for a batch of text using the OpenAI API. Args: model_name: Name of the OpenAI embedding model columns: The columns where the embeddings will be stored in the output api_key: OpenAI API key organization: OpenAI organization ID dimensions: Specific embedding dimensions to use (if model supports it) user: End-user identifier for tracking and rate limit calculations max_batch_size: Maximum batch size for requests to OpenAI API
github-repos
def make_grid(tensor, nrow=8, padding=2, pad_value=0): if not (isinstance(tensor, np.ndarray) or (isinstance(tensor, list) and all(isinstance(t, np.ndarray) for t in tensor))): raise TypeError('tensor or list of tensors expected, got {}'.format(type(tensor))) if isinstance(tensor, list): tensor = np.stack(tensor, 0) if tensor.ndim == 2: tensor = tensor.reshape((1, tensor.shape[0], tensor.shape[1])) if tensor.ndim == 3: if tensor.shape[0] == 1: tensor = np.concatenate((tensor, tensor, tensor), 0) tensor = tensor.reshape((1, tensor.shape[0], tensor.shape[1], tensor.shape[2])) if tensor.ndim == 4 and tensor.shape[1] == 1: tensor = np.concatenate((tensor, tensor, tensor), 1) if tensor.shape[0] == 1: return np.squeeze(tensor) nmaps = tensor.shape[0] xmaps = min(nrow, nmaps) ymaps = int(math.ceil(float(nmaps) / xmaps)) height, width = int(tensor.shape[2] + padding), int(tensor.shape[3] + padding) grid = np.ones((3, height * ymaps + padding, width * xmaps + padding)) * pad_value k = 0 for y in range(ymaps): for x in range(xmaps): if k >= nmaps: break grid[:, y * height + padding:(y+1) * height,\ x * width + padding:(x+1) * width] = tensor[k] k = k + 1 return grid
Make a grid of images, via numpy. Args: tensor (Tensor or list): 4D mini-batch Tensor of shape (B x C x H x W) or a list of images all of the same size. nrow (int, optional): Number of images displayed in each row of the grid. The Final grid size is (B / nrow, nrow). Default is 8. padding (int, optional): amount of padding. Default is 2. pad_value (float, optional): Value for the padded pixels.
juraj-google-style
def master(self, task_type=None, task_id=None, rpc_layer=None): task_type = task_type if task_type is not None else self.task_type task_id = task_id if task_id is not None else self.task_id if task_type is not None and task_id is not None: return format_master_url(self.cluster_spec().task_address(task_type, task_id), rpc_layer or self.rpc_layer) return ''
Returns the master string for connecting to a TensorFlow master. Args: task_type: (Optional) Overrides the default auto-selected task type. task_id: (Optional) Overrides the default auto-selected task index. rpc_layer: (Optional) Overrides the default RPC protocol TensorFlow uses to communicate across nodes. Returns: A connection string for connecting to a TensorFlow master.
github-repos
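The `format_master_url` helper used above prepends the RPC protocol when one is given; a minimal sketch of that behavior (assuming the conventional `protocol://host:port` form):

```python
def format_master_url(address, rpc_layer=None):
    # Prepend "<rpc_layer>://" only when an RPC protocol is specified.
    if rpc_layer:
        return '%s://%s' % (rpc_layer, address)
    return address
```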
def create_toolbutton(entries, parent=None): btn = QtGui.QToolButton(parent) menu = QtGui.QMenu() actions = [] for label, slot in entries: action = add_menu_action(menu, label, slot) actions.append(action) btn.setPopupMode(QtGui.QToolButton.MenuButtonPopup) btn.setDefaultAction(actions[0]) btn.setMenu(menu) return btn, actions
Create a toolbutton. Args: entries: List of (label, slot) tuples. parent: Optional parent widget. Returns: 2-tuple of (`QtGui.QToolButton`, list of `QtGui.QAction`).
juraj-google-style
def _fit(self, col): column = col[self.col_name].replace({np.nan: np.inf}) frequencies = column.groupby(column).count().rename({np.inf: None}).to_dict() start = 0 end = 0 num_vals = len(col) for val in frequencies: prob = (frequencies[val] / num_vals) end = (start + prob) interval = (start, end) mean = np.mean(interval) std = (prob / 6) self.probability_map[val] = (interval, mean, std) start = end
Create a map of the empirical probability for each category. Args: col(pandas.DataFrame): Data to transform.
codesearchnet
def compose_containerized_launch_cmd(self, filepath, engine_dir, container_image): self.engine_file = os.path.expanduser(filepath) uid = str(uuid.uuid4()) engine_json = None try: with open(self.engine_file, 'r') as f: engine_json = f.read() except OSError as e: logger.error("Could not open engine_json : %s", self.engine_file) raise e return .format(engine_dir, engine_json, container_image, debug_option=self.debug_option, uid=uid)
Reads the json contents from filepath and uses that to compose the engine launch command. Notes: Add this to the ipengine launch for debug logs : --log-to-file --debug Args: filepath (str): Path to the engine file engine_dir (str): CWD for the engines . container_image (str): The container to be used to launch workers
juraj-google-style
def _find_elements(self, result, elements): element_mapping = {} result = StringIO.StringIO(result) for (_, e) in ET.iterparse(result, events=('end',)): if (not elements): break if (e.tag in elements): element_mapping[e.tag] = e.text elements.remove(e.tag) return element_mapping
Find interesting elements from XML. This function tries to only look for specified elements without parsing the entire XML. The specified elements is better located near the beginning. Args: result: response XML. elements: a set of interesting element tags. Returns: A dict from element tag to element value.
codesearchnet
def validate(self, *args, **kwargs): return super(ParameterValidator, self)._validate(*args, **kwargs)
Validate a parameter dict against a parameter schema from an ocrd-tool.json Args: obj (dict): parameter dict to validate schema (dict): parameter schema to validate against
juraj-google-style
def fib(n): assert (n > 0) (a, b) = (1, 1) for i in range((n - 1)): (a, b) = (b, (a + b)) return a
Fibonacci example function Args: n (int): integer Returns: int: n-th Fibonacci number
codesearchnet
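The `fib` entry above is self-contained, so it can be sanity-checked standalone. A minimal sketch reproducing it verbatim; note that because the loop shifts `(a, b)` only `n - 1` times, both `fib(1)` and `fib(2)` return 1:

```python
def fib(n):
    # Iterative Fibonacci, copied from the entry above.
    assert n > 0
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

# fib(1) and fib(2) both yield 1; the sequence then proceeds 2, 3, 5, 8, ...
print([fib(i) for i in range(1, 8)])  # → [1, 1, 2, 3, 5, 8, 13]
```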
def recipe_trends_places_to_sheets_via_value(config, auth_write, secret, key, places_dataset, places_query, places_legacy, destination_sheet, destination_tab): twitter(config, {'auth': auth_write, 'secret': secret, 'key': key, 'trends': {'places': {'single_cell': True, 'bigquery': {'dataset': places_dataset, 'query': places_query, 'legacy': places_legacy}}}, 'out': {'sheets': {'sheet': destination_sheet, 'tab': destination_tab, 'range': 'A1'}}})
Move using hard coded WOEID values. Args: auth_write (authentication) - Credentials used for writing data. secret (string) - NA key (string) - NA places_dataset (string) - NA places_query (string) - NA places_legacy (boolean) - NA destination_sheet (string) - NA destination_tab (string) - NA
github-repos
def share(self, group_id, group_access, expires_at=None, **kwargs): path = '/projects/%s/share' % self.get_id() data = {'group_id': group_id, 'group_access': group_access, 'expires_at': expires_at} self.manager.gitlab.http_post(path, post_data=data, **kwargs)
Share the project with a group. Args: group_id (int): ID of the group. group_access (int): Access level for the group. **kwargs: Extra options to send to the server (e.g. sudo) Raises: GitlabAuthenticationError: If authentication is not correct GitlabCreateError: If the server failed to perform the request
juraj-google-style
def rep1sep(parser: Union[(Parser, Sequence[Input])], separator: Union[(Parser, Sequence[Input])]) -> RepeatedOnceSeparatedParser: if isinstance(parser, str): parser = lit(parser) if isinstance(separator, str): separator = lit(separator) return RepeatedOnceSeparatedParser(parser, separator)
Match a parser one or more times separated by another parser. This matches repeated sequences of ``parser`` separated by ``separator``. If there is at least one match, a list containing the values of the ``parser`` matches is returned. The values from ``separator`` are discarded. If it does not match ``parser`` at all, it fails. Args: parser: Parser or literal separator: Parser or literal
codesearchnet
def _add_sparse_to_tensors_map(sp_input, container=None, shared_name=None, name=None): sp_input = _convert_to_sparse_tensor(sp_input) return gen_sparse_ops.add_sparse_to_tensors_map(sp_input.indices, sp_input.values, sp_input.dense_shape, container=container, shared_name=shared_name, name=name)
Add a `SparseTensor` to a `SparseTensorsMap` and return its handle. Args: sp_input: The input `SparseTensor`. container: The container for the underlying `SparseTensorsMap` (optional). shared_name: The shared name for the underlying `SparseTensorsMap` (optional, defaults to the name of the newly created op). name: A name prefix for the returned tensors (optional). Returns: A string 1-vector (1D `Tensor`), with the single element representing the a unique handle to a `SparseTensor` stored by the `SparseTensorMap` underlying this op. Raises: TypeError: If `sp_input` is not a `SparseTensor`.
github-repos
def to_event(self, event_type, field_name=None, depth=None): if (self.ion_event is None): value = self if isinstance(self, IonPyNull): value = None self.ion_event = IonEvent(event_type, ion_type=self.ion_type, value=value, field_name=field_name, annotations=self.ion_annotations, depth=depth) return self.ion_event
Constructs an IonEvent from this _IonNature value. Args: event_type (IonEventType): The type of the resulting event. field_name (Optional[text]): The field name associated with this value, if any. depth (Optional[int]): The depth of this value. Returns: An IonEvent with the properties from this value.
codesearchnet
def try_get_column(column_name, node, context): selectable = get_node_selectable(node, context) if (not hasattr(selectable, 'c')): raise AssertionError(u'Selectable "{}" does not have a column collection. Context is {}.'.format(selectable, context)) return selectable.c.get(column_name, None)
Attempt to get a column by name from the selectable. Args: column_name: str, name of the column to retrieve. node: SqlNode, the node the column is being retrieved for. context: CompilationContext, compilation specific metadata. Returns: Optional[column], the SQLAlchemy column if found, None otherwise.
codesearchnet
def flash_progress_callback(action, progress_string, percentage): if action.lower() != 'compare': return progress_bar(min(100, percentage), 100, prefix=action) return None
Callback that can be used with ``JLink.flash()``. This callback generates a progress bar in the console to show the progress of each of the steps of the flash. Args: action (str): the current action being invoked progress_string (str): the current step in the progress percentage (int): the percent to which the current step has been done Returns: ``None`` Note: This function ignores the compare action.
juraj-google-style
def from_json(cls, json): return cls(json[cls.BLOB_KEY_PARAM], json[cls.START_INDEX_PARAM], json[cls.END_INDEX_PARAM])
Creates an instance of the InputReader for the given input shard state. Args: json: The InputReader state as a dict-like object. Returns: An instance of the InputReader configured using the values of json.
juraj-google-style
def _InitializeParserObjects(self, parser_filter_expression=None): (self._formats_with_signatures, non_sigscan_parser_names) = parsers_manager.ParsersManager.GetFormatsWithSignatures(parser_filter_expression=parser_filter_expression) self._non_sigscan_parser_names = [] for parser_name in non_sigscan_parser_names: if (parser_name not in ('filestat', 'usnjrnl')): self._non_sigscan_parser_names.append(parser_name) self._file_scanner = parsers_manager.ParsersManager.CreateSignatureScanner(self._formats_with_signatures) self._parsers = parsers_manager.ParsersManager.GetParserObjects(parser_filter_expression=parser_filter_expression) active_parser_names = ', '.join(sorted(self._parsers.keys())) logger.debug('Active parsers: {0:s}'.format(active_parser_names)) self._filestat_parser = self._parsers.get('filestat', None) if ('filestat' in self._parsers): del self._parsers['filestat'] self._mft_parser = self._parsers.get('mft', None) self._usnjrnl_parser = self._parsers.get('usnjrnl', None) if ('usnjrnl' in self._parsers): del self._parsers['usnjrnl']
Initializes the parser objects. Args: parser_filter_expression (Optional[str]): the parser filter expression, None represents all parsers and plugins. The parser filter expression is a comma separated value string that denotes a list of parser names to include and/or exclude. Each entry can have the value of: * An exact match of a list of parsers, or a preset (see data/presets.yaml for the list of predefined presets). * A name of a single parser (case insensitive), e.g. msiecf. * A glob name for a single parser, e.g. '*msie*' (case insensitive).
codesearchnet
def processMailList(platformNames=[], emails=[]): platforms = platform_selection.getPlatformsByName(platformNames, mode="mailfy") results = [] for e in emails: for pla in platforms: entities = pla.getInfo(query=e, mode="mailfy") if entities != {}: results += json.loads(entities) return results
Method to perform the email search. Args: ----- platformNames: List of names of the platforms. emails: List of emails to be queried. Return: ------- A list of verified emails.
juraj-google-style
def _check_state_for_finalize_write(self, writer_results, num_shards): if not writer_results: return ([], [], [], 0) src_glob = FileSystems.join(FileSystems.split(writer_results[0])[0], '*') dst_glob = self._get_final_name_glob(num_shards) src_glob_files = set((file_metadata.path for mr in FileSystems.match([src_glob]) for file_metadata in mr.metadata_list)) dst_glob_files = set((file_metadata.path for mr in FileSystems.match([dst_glob]) for file_metadata in mr.metadata_list)) src_files = [] dst_files = [] delete_files = [] num_skipped = 0 for shard_num, src in enumerate(writer_results): final_name = self._get_final_name(shard_num, num_shards) dst = final_name src_exists = src in src_glob_files dst_exists = dst in dst_glob_files if not src_exists and (not dst_exists): raise BeamIOError('src and dst files do not exist. src: %s, dst: %s' % (src, dst)) if not src_exists and dst_exists: _LOGGER.debug('src: %s -> dst: %s already renamed, skipping', src, dst) num_skipped += 1 continue if src_exists and dst_exists and (FileSystems.checksum(src) == FileSystems.checksum(dst)): _LOGGER.debug('src: %s == dst: %s, deleting src', src, dst) delete_files.append(src) continue src_files.append(src) dst_files.append(dst) self._report_sink_lineage(dst_glob, dst_files) return (src_files, dst_files, delete_files, num_skipped)
Checks writer output files' states. Returns: src_files, dst_files: Lists of files to rename. For each i, finalize_write should rename(src_files[i], dst_files[i]). delete_files: Src files to delete. These could be leftovers from an incomplete (non-atomic) rename operation. num_skipped: Tally of writer results files already renamed, such as from a previous run of finalize_write().
github-repos
def sub_index(self, sub, start=0, end=None): start_index = self.index(sub[0], start, end) end = self._fix_end_index(end) if ((start_index + len(sub)) > end): raise ValueError for i in range(1, len(sub)): if (sub[i] != self[(start_index + i)]): raise ValueError return start_index
Return the index of a subsequence. This runs in O(len(sub)) Args: sub (Sequence): An Iterable to search for Returns: int: The index of the first element of sub Raises: ValueError: If sub isn't a subsequence TypeError: If sub isn't iterable IndexError: If start or end are out of range
codesearchnet
def encode(self, input_ids: jnp.ndarray, attention_mask: Optional[jnp.ndarray]=None, position_ids: Optional[jnp.ndarray]=None, output_attentions: Optional[bool]=None, output_hidden_states: Optional[bool]=None, return_dict: Optional[bool]=None, train: bool=False, params: Optional[dict]=None, dropout_rng: PRNGKey=None): output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states return_dict = return_dict if return_dict is not None else self.config.return_dict if attention_mask is None: attention_mask = jnp.ones_like(input_ids) if position_ids is None: batch_size, sequence_length = input_ids.shape position_ids = jnp.broadcast_to(jnp.arange(sequence_length)[None, :], (batch_size, sequence_length)) rngs = {} if dropout_rng is not None: rngs['dropout'] = dropout_rng def _encoder_forward(module, input_ids, attention_mask, position_ids, **kwargs): encode_module = module._get_encoder_module() return encode_module(input_ids, attention_mask, position_ids, **kwargs) outputs = self.module.apply({'params': params or self.params}, input_ids=jnp.array(input_ids, dtype='i4'), attention_mask=jnp.array(attention_mask, dtype='i4'), position_ids=jnp.array(position_ids, dtype='i4'), output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, deterministic=not train, rngs=rngs, method=_encoder_forward) if return_dict: outputs = FlaxBaseModelOutput(last_hidden_state=outputs.last_hidden_state, hidden_states=outputs.hidden_states, attentions=outputs.attentions) return outputs
Returns: Example: ```python >>> from transformers import FlaxEncoderDecoderModel, BertTokenizer >>> # initialize a bert2gpt2 from pretrained BERT and GPT2 models. Note that the cross-attention layers will be randomly initialized >>> model = FlaxEncoderDecoderModel.from_encoder_decoder_pretrained("google-bert/bert-base-cased", "openai-community/gpt2") >>> tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-cased") >>> text = "My friends are cool but they eat too many carbs." >>> input_ids = tokenizer.encode(text, return_tensors="np") >>> encoder_outputs = model.encode(input_ids) ```
github-repos
def _process_scopes(scopes): all_scopes = set() sufficient_scopes = set() for scope_set in scopes: scope_set_scopes = frozenset(scope_set.split()) all_scopes.update(scope_set_scopes) sufficient_scopes.add(scope_set_scopes) return (all_scopes, sufficient_scopes)
Parse a scopes list into a set of all scopes and a set of sufficient scope sets. scopes: A list of strings, each of which is a space-separated list of scopes. Examples: ['scope1'] ['scope1', 'scope2'] ['scope1', 'scope2 scope3'] Returns: all_scopes: a set of strings, each of which is one scope to check for sufficient_scopes: a set of sets of strings; each inner set is a set of scopes which are sufficient for access. Example: {{'scope1'}, {'scope2', 'scope3'}}
codesearchnet
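The scope-parsing entry above is pure Python with no external dependencies, so its docstring example can be demonstrated directly. A standalone sketch of the same logic (renamed `process_scopes` here only to drop the leading underscore):

```python
def process_scopes(scopes):
    # Each element of `scopes` is a space-separated list of scope names.
    # `all_scopes` collects every scope mentioned anywhere; each
    # space-separated group becomes a frozenset in `sufficient_scopes`,
    # meaning that group alone is sufficient for access.
    all_scopes = set()
    sufficient_scopes = set()
    for scope_set in scopes:
        scope_set_scopes = frozenset(scope_set.split())
        all_scopes.update(scope_set_scopes)
        sufficient_scopes.add(scope_set_scopes)
    return all_scopes, sufficient_scopes

all_s, suff = process_scopes(['scope1', 'scope2 scope3'])
# all_s → {'scope1', 'scope2', 'scope3'}
# suff  → {frozenset({'scope1'}), frozenset({'scope2', 'scope3'})}
```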
def dot(inputs, axes=-1, **kwargs): return Dot(axes=axes, **kwargs)(inputs)
Functional interface to the `Dot` layer. Args: inputs: A list of input tensors (at least 2). axes: Integer or tuple of integers, axis or axes along which to take the dot product. normalize: Whether to L2-normalize samples along the dot product axis before taking the dot product. If set to `True`, then the output of the dot product is the cosine proximity between the two samples. **kwargs: Standard layer keyword arguments. Returns: A tensor, the dot product of the samples from the inputs.
github-repos
def permute(self, ordering: np.ndarray, *, axis: int) -> None: if (axis not in (0, 1)): raise ValueError('Axis must be 0 (rows) or 1 (columns)') for layer in self.layers.values(): layer._permute(ordering, axis=axis) if (axis == 0): if (self.row_graphs is not None): for g in self.row_graphs.values(): g._permute(ordering) for a in self.row_attrs.values(): a._permute(ordering) elif (axis == 1): if (self.col_graphs is not None): for g in self.col_graphs.values(): g._permute(ordering) for a in self.col_attrs.values(): a._permute(ordering)
Permute the view, by permuting its layers, attributes and graphs Args: ordering (np.ndarray): The desired ordering along the axis axis (int): 0, permute rows; 1, permute columns
codesearchnet
def _process_worker(call_queue, result_queue): while True: call_item = call_queue.get(block=True) if call_item is None: result_queue.put(None) return try: r = call_item.fn(*call_item.args, **call_item.kwargs) except BaseException: e = sys.exc_info()[1] result_queue.put(_ResultItem(call_item.work_id, exception=e)) else: result_queue.put(_ResultItem(call_item.work_id, result=r))
Evaluates calls from call_queue and places the results in result_queue. This worker is run in a separate process. Args: call_queue: A multiprocessing.Queue of _CallItems that will be read and evaluated by the worker. result_queue: A multiprocessing.Queue of _ResultItems that will written to by the worker. shutdown: A multiprocessing.Event that will be set as a signal to the worker that it should exit when call_queue is empty.
juraj-google-style
def parse_location(location): def split_dms(text, hemisphere): out = [] sect = [] for i in text: if i.isdigit(): sect.append(i) else: out.append(sect) sect = [] d, m, s = [float(''.join(i)) for i in out] if hemisphere in 'SW': d, m, s = [-1 * x for x in (d, m, s)] return to_dd(d, m, s) for sep in ';, ': chunks = location.split(sep) if len(chunks) == 2: if chunks[0].endswith('N'): latitude = float(chunks[0][:-1]) elif chunks[0].endswith('S'): latitude = -1 * float(chunks[0][:-1]) else: latitude = float(chunks[0]) if chunks[1].endswith('E'): longitude = float(chunks[1][:-1]) elif chunks[1].endswith('W'): longitude = -1 * float(chunks[1][:-1]) else: longitude = float(chunks[1]) return latitude, longitude elif len(chunks) == 4: if chunks[0].endswith(('s', '"')): latitude = split_dms(chunks[0], chunks[1]) else: latitude = float(chunks[0]) if chunks[1] == 'S': latitude = -1 * latitude if chunks[2].endswith(('s', '"')): longitude = split_dms(chunks[2], chunks[3]) else: longitude = float(chunks[2]) if chunks[3] == 'W': longitude = -1 * longitude return latitude, longitude
Parse latitude and longitude from string location. Args: location (str): String to parse Returns: tuple of float: Latitude and longitude of location
juraj-google-style
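The `parse_location` entry above mixes a decimal-degree branch with a DMS branch that relies on an undefined `to_dd` helper. A trimmed, runnable sketch of just the decimal-degree branch (`parse_decimal_location` is a hypothetical name, not from the source):

```python
def parse_decimal_location(location):
    # Simplified sketch: handles decimal-degree "lat;lon" / "lat, lon"
    # pairs only, with optional N/S/E/W hemisphere suffixes.
    # The DMS (degrees-minutes-seconds) branch is omitted.
    for sep in ';, ':
        chunks = location.split(sep)
        if len(chunks) == 2:
            lat_text, lon_text = chunks
            if lat_text.endswith('N'):
                latitude = float(lat_text[:-1])
            elif lat_text.endswith('S'):
                latitude = -float(lat_text[:-1])
            else:
                latitude = float(lat_text)
            if lon_text.endswith('E'):
                longitude = float(lon_text[:-1])
            elif lon_text.endswith('W'):
                longitude = -float(lon_text[:-1])
            else:
                longitude = float(lon_text)
            return latitude, longitude
    raise ValueError('unrecognised location format: %r' % location)

print(parse_decimal_location('52.015N;0.221W'))  # → (52.015, -0.221)
```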
def to_unicode(self, s): if isinstance(s, unicode): return s if isinstance(s, str): return unicode(s, errors='ignore') return s
Convert an elementary datatype to unicode. Args: s: the datatype to be unicoded. Returns: Unicoded data.
codesearchnet
def _CheckAtLeast3DImage(image, require_static=True): try: if image.get_shape().ndims is None: image_shape = image.get_shape().with_rank(3) else: image_shape = image.get_shape().with_rank_at_least(3) except ValueError: raise ValueError("'image' (shape %s) must be at least three-dimensional." % image.shape) if require_static and (not image_shape.is_fully_defined()): raise ValueError("'image' must be fully defined.") if any((x == 0 for x in image_shape[-3:])): raise ValueError("inner 3 dims of 'image.shape' must be > 0: %s" % image_shape) if not image_shape[-3:].is_fully_defined(): return [check_ops.assert_positive(array_ops.shape(image)[-3:], ["inner 3 dims of 'image.shape' must be > 0."]), check_ops.assert_greater_equal(array_ops.rank(image), 3, message="'image' must be at least three-dimensional.")] else: return []
Assert that we are working with a properly shaped image. Args: image: >= 3-D Tensor of size [*, height, width, depth] require_static: If `True`, requires that all dimensions of `image` are known and non-zero. Raises: ValueError: if image.shape is not a [>= 3] vector. Returns: An empty list, if `image` has fully defined dimensions. Otherwise, a list containing an assert op is returned.
github-repos
def zeros_like(x, dtype=None): if any_symbolic_tensors((x,)): return ZerosLike(dtype=dtype).symbolic_call(x) return backend.numpy.zeros_like(x, dtype=dtype)
Return a tensor of zeros with the same shape and type as `x`. Args: x: Input tensor. dtype: Overrides the data type of the result. Returns: A tensor of zeros with the same shape and type as `x`.
github-repos
def softmax_cross_entropy_one_hot(logits, labels, weights_fn=None): with tf.variable_scope("softmax_cross_entropy_one_hot", values=[logits, labels]): del weights_fn cross_entropy = tf.losses.softmax_cross_entropy( onehot_labels=labels, logits=logits) return cross_entropy, tf.constant(1.0)
Calculate softmax cross entropy given one-hot labels and logits. Args: logits: Tensor of size [batch-size, o=1, p=1, num-classes] labels: Tensor of size [batch-size, o=1, p=1, num-classes] weights_fn: Function that takes in labels and weighs examples (unused) Returns: cross-entropy (scalar), weights
juraj-google-style
def _get_associated_classnames(self, classname, namespace, assoc_class, result_class, result_role, role): class_repo = self._get_class_repo(namespace) result_classes = self._classnamedict(result_class, namespace) assoc_classes = self._classnamedict(assoc_class, namespace) rtn_classnames_set = set() role = (role.lower() if role else role) result_role = (result_role.lower() if result_role else result_role) ref_clns = self._get_reference_classnames(classname, namespace, assoc_class, role) cls = [class_repo[cln] for cln in ref_clns] for cl in cls: for prop in six.itervalues(cl.properties): if (prop.type == 'reference'): if self._assoc_prop_matches(prop, cl.classname, assoc_classes, result_classes, result_role): rtn_classnames_set.add(prop.reference_class) return list(rtn_classnames_set)
Get list of classnames that are associated classes for which this classname is a target filtered by the assoc_class, role, result_class, and result_role parameters if they are none. This is a common method used by all of the other reference and associator methods to create a list of reference classnames Returns: list of classnames that satisfy the criteria.
codesearchnet
def deserialize_subject_info(subject_info_xml): try: return d1_common.xml.deserialize(subject_info_xml) except ValueError as e: raise d1_common.types.exceptions.InvalidToken( 0, 'Could not deserialize SubjectInfo. subject_info="{}", error="{}"'.format( subject_info_xml, str(e) ), )
Deserialize SubjectInfo XML doc to native object. Args: subject_info_xml: str SubjectInfo XML doc Returns: SubjectInfo PyXB object
juraj-google-style
def forward(self, hidden_states: torch.Tensor, grid_thw: torch.Tensor) -> torch.Tensor: hidden_states = self.patch_embed(hidden_states) rotary_pos_emb = self.rot_pos_emb(grid_thw) window_index, cu_window_seqlens = self.get_window_index(grid_thw) cu_window_seqlens = torch.tensor(cu_window_seqlens, device=hidden_states.device, dtype=grid_thw.dtype if torch.jit.is_tracing() else torch.int32) cu_window_seqlens = torch.unique_consecutive(cu_window_seqlens) seq_len, _ = hidden_states.size() hidden_states = hidden_states.reshape(seq_len hidden_states = hidden_states[window_index, :, :] hidden_states = hidden_states.reshape(seq_len, -1) rotary_pos_emb = rotary_pos_emb.reshape(seq_len rotary_pos_emb = rotary_pos_emb[window_index, :, :] rotary_pos_emb = rotary_pos_emb.reshape(seq_len, -1) cu_seqlens = torch.repeat_interleave(grid_thw[:, 1] * grid_thw[:, 2], grid_thw[:, 0]).cumsum(dim=0, dtype=grid_thw.dtype if torch.jit.is_tracing() else torch.int32) cu_seqlens = F.pad(cu_seqlens, (1, 0), value=0) for layer_num, blk in enumerate(self.blocks): if layer_num in self.fullatt_block_indexes: cu_seqlens_now = cu_seqlens else: cu_seqlens_now = cu_window_seqlens if self.gradient_checkpointing and self.training: hidden_states = self._gradient_checkpointing_func(blk.__call__, hidden_states, cu_seqlens_now, rotary_pos_emb) else: hidden_states = blk(hidden_states, cu_seqlens=cu_seqlens_now, rotary_pos_emb=rotary_pos_emb) hidden_states = self.merger(hidden_states) reverse_indices = torch.argsort(window_index) hidden_states = hidden_states[reverse_indices, :] return hidden_states
Args: hidden_states (`torch.Tensor` of shape `(seq_len, hidden_size)`): The final hidden states of the model. grid_thw (`torch.Tensor` of shape `(num_images_or_videos, 3)`): The temporal, height and width of feature shape of each image in LLM. Returns: `torch.Tensor`: hidden_states.
github-repos
def query(self, src: Any, use_inferred: bool=False) -> Any: return self._query(0, src, use_inferred)
Query the value from the source object based on current path. Example:: @pg.members([ ('x', pg.typing.Int()), ('y', pg.typing.Str()) ]) class A(pg.Object): pass @pg.members([ ('z', pg.typing.Object(A)) ]) class B(pg.Object): pass b = B(z=A(x=1, y='foo')) assert pg.KeyPath.parse('z.x').query(b) == 1 Args: src: Source value to query. use_inferred: If True, infer `pg.Inferential` values. Otherwise returns their symbolic form. Applicable only for symbolic values. Returns: Value from src if path exists. Raises: KeyError: Path doesn't exist in src. RuntimeError: Called on a KeyPath that is considered as removed.
github-repos
def _is_valid(self, value): if hasattr(self._type, "istypeof"): return self._type.istypeof(value) else: return isinstance(value, self._type)
Return True if the input value is valid for insertion into the inner list. Args: value: An object about to be inserted.
juraj-google-style
def set_shape(self, shape): self._ref().set_shape(shape) self.value().set_shape(shape)
Overrides the shape for this variable. Args: shape: the `TensorShape` representing the overridden shape.
github-repos
def sort_ordered_objects(items, getter=lambda x: x): return sorted(items, key=lambda x: getattr(getter(x), OrderedBase.CREATION_COUNTER_FIELD, -1))
Sort an iterable of OrderedBase instances. Args: items (iterable): the objects to sort getter (callable or None): a function to extract the OrderedBase instance from an object. Examples: >>> sort_ordered_objects([x, y, z]) >>> sort_ordered_objects(v.items(), getter=lambda e: e[1])
juraj-google-style
def rename_nodes(self, renaming_map): if not isinstance(renaming_map, dict): raise TypeError("renaming_map must be a dict") for node in self.traverse_preorder(): if node.label in renaming_map: node.label = renaming_map[node.label]
Rename nodes in this ``Tree`` Args: ``renaming_map`` (``dict``): A dictionary mapping old labels (keys) to new labels (values)
juraj-google-style
def is_done(self, transform: Optional[AppliedPTransform]=None) -> bool: if transform: return self._is_transform_done(transform) for applied_ptransform in self._step_names: if not self._is_transform_done(applied_ptransform): return False return True
Checks completion of a step or the pipeline. Args: transform: AppliedPTransform to check for completion. Returns: True if the step will not produce additional output. If transform is None returns true if all steps are done.
github-repos
def queryString_required(strList): def _dec(function): @wraps(function) def _wrap(request, *args, **kwargs): for i in strList: if i not in request.GET: raise Http404("api does not exist") return function(request, *args, **kwargs) return _wrap return _dec
A decorator checking whether the queryString keys are valid or not Args: strList: allowed queryString keys Returns: the wrapped view function; raises Http404 if the request is missing any required queryString key.
juraj-google-style
def modutf7_decode(data: bytes) -> str: parts = [] is_usascii = True buf = memoryview(data) while buf: byte = buf[0] if is_usascii: if buf[0:2] == b'&-': parts.append('&') buf = buf[2:] elif byte == 0x26: is_usascii = False buf = buf[1:] else: parts.append(chr(byte)) buf = buf[1:] else: for i, byte in enumerate(buf): if byte == 0x2d: to_decode = buf[:i].tobytes() decoded = _modified_b64decode(to_decode) parts.append(decoded) buf = buf[i + 1:] is_usascii = True break if not is_usascii: to_decode = buf.tobytes() decoded = _modified_b64decode(to_decode) parts.append(decoded) return ''.join(parts)
Decode the bytestring using modified UTF-7. Args: data: The encoded bytestring to decode.
juraj-google-style
def reset( self ): self.lattice.reset() for atom in self.atoms.atoms: atom.reset()
Reset all counters for this simulation. Args: None Returns: None
juraj-google-style
def append(self, node): if not isinstance(node, grammar.STATEMENTS): raise ValueError self.to_append[-1].append(node)
Append a statement to the current statement. Note that with multiple calls to append, the last statement appended ends up at the bottom. Args: node: The statement to append. Raises: ValueError: If the given node is not a statement.
juraj-google-style
def clone(self, opts): topt = self.opts.copy() topt.update(opts) return self.__class__(self.modl, self.name, self.info, topt)
Create a new instance of this type with the specified options. Args: opts (dict): The type specific options for the new instance.
juraj-google-style
def __add__(self, other): ret = RichLine() if isinstance(other, str): ret.text = self.text + other ret.font_attr_segs = self.font_attr_segs[:] return ret elif isinstance(other, RichLine): ret.text = self.text + other.text ret.font_attr_segs = self.font_attr_segs[:] old_len = len(self.text) for start, end, font_attr in other.font_attr_segs: ret.font_attr_segs.append((old_len + start, old_len + end, font_attr)) return ret else: raise TypeError('%r cannot be concatenated with a RichLine' % other)
Concatenate two chunks of maybe rich text to make a longer rich line. Does not modify self. Args: other: Another piece of text to concatenate with this one. If it is a plain str, it will be appended to this string with no attributes. If it is a RichLine, it will be appended to this string with its attributes preserved. Returns: A new RichLine comprising both chunks of text, with appropriate attributes applied to the corresponding substrings.
github-repos
def _reciprocal_condition_number(lu_mat, one_norm): if (_scipy_lapack is None): raise OSError('This function requires SciPy for calling into LAPACK.') (rcond, info) = _scipy_lapack.dgecon(lu_mat, one_norm) if (info != 0): raise RuntimeError('The reciprocal 1-norm condition number could not be computed.') return rcond
r"""Compute reciprocal condition number of a matrix. Args: lu_mat (numpy.ndarray): A 2D array of a matrix :math:`A` that has been LU-factored, with the non-diagonal part of :math:`L` stored in the strictly lower triangle and :math:`U` stored in the upper triangle. one_norm (float): The 1-norm of the original matrix :math:`A`. Returns: float: The reciprocal condition number of :math:`A`. Raises: OSError: If SciPy is not installed. RuntimeError: If the reciprocal 1-norm condition number could not be computed.
codesearchnet
def get(self, key, default='', stringify=True): obj = self.__getitem__(key) if obj is None: obj = default elif stringify: obj = str(obj) return obj
Returns dictionary values or default. Args: key: string. Dictionary key to look up. default: string. Return this value if key not found. stringify: bool. Force all return values to string for compatibility reasons. Returns: python-wrapped CF object or default if not found.
juraj-google-style
def _ParseCommon2003CachedEntry(self, value_data, cached_entry_offset): data_type_map = self._GetDataTypeMap( 'appcompatcache_cached_entry_2003_common') try: cached_entry = self._ReadStructureFromByteStream( value_data[cached_entry_offset:], cached_entry_offset, data_type_map) except (ValueError, errors.ParseError) as exception: raise errors.ParseError( 'Unable to parse cached entry value with error: {0!s}'.format( exception)) if cached_entry.path_size > cached_entry.maximum_path_size: raise errors.ParseError('Path size value out of bounds.') path_end_of_string_size = ( cached_entry.maximum_path_size - cached_entry.path_size) if cached_entry.path_size == 0 or path_end_of_string_size != 2: raise errors.ParseError('Unsupported path size values.') return cached_entry
Parses the cached entry structure common for Windows 2003, Vista and 7. Args: value_data (bytes): value data. cached_entry_offset (int): offset of the first cached entry data relative to the start of the value data. Returns: appcompatcache_cached_entry_2003_common: cached entry structure common for Windows 2003, Windows Vista and Windows 7. Raises: ParseError: if the value data could not be parsed.
juraj-google-style
def update(self, resource, id_or_uri=None, timeout=-1):
    uri = resource.pop('uri', None)
    if not uri:
        if not id_or_uri:
            raise ValueError('URI was not provided')
        uri = self._client.build_uri(id_or_uri)
    return self._client.update(resource=resource, uri=uri, timeout=timeout)
Updates the specified alert resource.

Args:
    resource (dict): Object to update.
    id_or_uri: ID or URI of the alert, used when the resource does not
        carry its own 'uri'.
    timeout: Timeout in seconds. Wait for task completion by default.
        The timeout does not abort the operation in OneView; it just
        stops waiting for its completion.

Returns:
    dict: Updated alert.
codesearchnet
def to(self, *args, **kwargs) -> 'BatchFeature': requires_backends(self, ['torch']) import torch device = kwargs.get('device') non_blocking = kwargs.get('non_blocking', False) if device is None and len(args) > 0: arg = args[0] if is_torch_dtype(arg): pass elif isinstance(arg, str) or is_torch_device(arg) or isinstance(arg, int): device = arg else: raise ValueError(f'Attempting to cast a BatchFeature to type {str(arg)}. This is not supported.') def maybe_to(v): if isinstance(v, torch.Tensor) and torch.is_floating_point(v): return v.to(*args, **kwargs) elif isinstance(v, torch.Tensor) and device is not None: return v.to(device=device, non_blocking=non_blocking) else: return v self.data = {k: maybe_to(v) for k, v in self.items()} return self
Send all values to device by calling `v.to(*args, **kwargs)` (PyTorch only).

This should support casting in different `dtypes` and sending the
`BatchFeature` to a different `device`.

Args:
    args (`Tuple`):
        Will be passed to the `to(...)` function of the tensors.
    kwargs (`Dict`, *optional*):
        Will be passed to the `to(...)` function of the tensors. To enable
        asynchronous data transfer, set the `non_blocking` flag in `kwargs`
        (defaults to `False`).

Returns:
    [`BatchFeature`]: The same instance after modification.
github-repos
def check_lines(first, second):
    if not (first.__class__ is Linearization and
            second.__class__ is Linearization and
            first.error == 0.0 and second.error == 0.0):
        return (False, None)

    s, t, success = segment_intersection(
        first.start_node, first.end_node, second.start_node, second.end_node)
    if success:
        if (_helpers.in_interval(s, 0.0, 1.0) and
                _helpers.in_interval(t, 0.0, 1.0)):
            intersections = np.asfortranarray([[s], [t]])
            result = (intersections, False)
        else:
            result = (np.empty((2, 0), order='F'), False)
    else:
        disjoint, params = parallel_lines_parameters(
            first.start_node, first.end_node, second.start_node, second.end_node)
        if disjoint:
            result = (np.empty((2, 0), order='F'), False)
        else:
            result = (params, True)
    return (True, result)
Checks if two curves are lines and tries to intersect them.

.. note::

    This is a helper for :func:`._all_intersections`.

If they are not lines / not linearized, immediately returns :data:`False`
with no "return value".

If they are lines, attempts to intersect them (even if they are parallel
and share a coincident segment).

Args:
    first (Union[SubdividedCurve, Linearization]): First curve being
        intersected.
    second (Union[SubdividedCurve, Linearization]): Second curve being
        intersected.

Returns:
    Tuple[bool, Optional[Tuple[numpy.ndarray, bool]]]: A pair of

    * Flag indicating if both candidates in the pair are lines.
    * Optional "result" populated only if both candidates are lines.
      When this result is populated, it will be a pair of

      * array of parameters of intersection
      * flag indicating if the two candidates share a coincident segment
codesearchnet
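The `segment_intersection` helper used above is not shown in this entry. Under the usual parameterization it solves `start0 + s*(end0 - start0) == start1 + t*(end1 - start1)` for the parameters `(s, t)`. A pure-Python 2x2 solve via Cramer's rule — a hypothetical stand-in for the real helper, without NumPy — might be:

```python
def segment_intersection(start0, end0, start1, end1):
    """Solve start0 + s*d0 == start1 + t*d1 for (s, t) via Cramer's rule.

    Points are (x, y) tuples. Returns (s, t, success); success is False
    when the segments are parallel (zero cross product of directions).
    """
    d0 = (end0[0] - start0[0], end0[1] - start0[1])
    d1 = (end1[0] - start1[0], end1[1] - start1[1])
    # Cross product of the direction vectors; zero means parallel lines.
    denom = d0[0] * d1[1] - d0[1] * d1[0]
    if denom == 0.0:
        return (None, None, False)
    rx = start1[0] - start0[0]
    ry = start1[1] - start0[1]
    s = (rx * d1[1] - ry * d1[0]) / denom
    t = (rx * d0[1] - ry * d0[0]) / denom
    return (s, t, True)
```

An intersection lies on both segments exactly when both `s` and `t` fall in `[0, 1]`, which is the interval check `check_lines` performs.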
def add_authorization_policy(access_token, ck_id, oid): path = '/ContentKeys' body = '{"AuthorizationPolicyId":"' + oid + '"}' return helper_add(access_token, ck_id, path, body)
Add Media Service Authorization Policy.

Args:
    access_token (str): A valid Azure authentication token.
    ck_id (str): A Media Service Asset Content Key ID.
    oid (str): A Media Service OID.

Returns:
    HTTP response. JSON body.
juraj-google-style
def encoding_specs(self, spec):
    raise NotImplementedError(f'{type(self).__name__}.encoding_specs')
Returns a nest of `TypeSpec`(s) describing the encoding for `spec`.

Args:
    spec: The TypeSpec whose encoding should be described.

Returns:
    A nest (as defined by `tf.nest`) of `tf.TypeSpec`, describing the
    values that are returned by `self.encode(spec, ...)`. All TypeSpecs in
    this nest must be batchable.
github-repos
def get_task_info(self):
    return (self.task_type, self.task_id)
Returns job name and task_id for the process which calls this.

This returns the job name and task index for the process which calls this
function according to its rank and cluster specification. The job name and
task index are set after a cluster is constructed by cluster_spec;
otherwise they default to None.

Returns:
    A string specifying the job name the process belongs to and an integer
    specifying the task index the process belongs to in that job.
github-repos
def dump(self, conf_file=None):
    if conf_file:
        conf_dir = os.path.dirname(conf_file)
        if not conf_dir:
            conf_dir = self.__invoke_dir
        elif not os.path.exists(conf_dir):
            os.makedirs(conf_dir)
    else:
        conf_dir = self.__conf_dir

    final_conf = {}
    for key, value in list(self.__config.items()):
        if key in self.__cli:
            continue
        final_conf[key] = value

    for key, value in list(self.__cli.items()):
        if key.endswith('index') or key in ['sitemap', 'output']:
            path = self.__abspath(value, from_conf=False)
            if path:
                relpath = os.path.relpath(path, conf_dir)
                final_conf[key] = relpath
        elif key.endswith('sources') or key.endswith('source_filters'):
            new_list = []
            for path in value:
                path = self.__abspath(path, from_conf=False)
                if path:
                    relpath = os.path.relpath(path, conf_dir)
                    new_list.append(relpath)
            final_conf[key] = new_list
        elif key not in ['command', 'output_conf_file']:
            final_conf[key] = value

    with open(conf_file or self.conf_file or 'hotdoc.json', 'w') as _:
        _.write(json.dumps(final_conf, sort_keys=True, indent=4))
Dump the possibly updated config to a file.

Args:
    conf_file: str, the destination, or None to overwrite the existing
        configuration.
codesearchnet
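The core of `dump` above is a two-pass merge: stored config keys not overridden on the CLI are kept, then CLI values win, with path-valued keys rewritten relative to the config directory so the dumped file stays portable. A reduced sketch of that merge (the helper name and the fixed `path_keys` set are assumptions for illustration) could be:

```python
import posixpath

def merge_config(config, cli, conf_dir, path_keys=('sitemap', 'output')):
    """Merge stored config with CLI overrides, relativizing path-valued keys.

    Simplified sketch of the dump() merge above: CLI values win, and any
    key named in path_keys is rewritten relative to conf_dir.
    """
    # First pass: keep stored keys the CLI does not override.
    final_conf = {k: v for k, v in config.items() if k not in cli}
    # Second pass: CLI overrides, with paths made relative to the conf dir.
    for key, value in cli.items():
        if key in path_keys:
            final_conf[key] = posixpath.relpath(value, conf_dir)
        else:
            final_conf[key] = value
    return final_conf
```

Storing relative paths means the dumped configuration keeps working if the project tree is moved as a whole.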
def content(self):
    if self._content is None:
        self._content = self.parse_files()
    return self._content
Return parsed data. Parse it if not already parsed.

Returns:
    list: list of dictionaries (one for each parsed line).
codesearchnet
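The `content` accessor above is a parse-on-first-access cache: the expensive `parse_files` call runs once and its result is memoized. A self-contained sketch of the pattern (the `LazyParser` class and its line-based payload are invented for illustration) might look like:

```python
class LazyParser:
    """Parse-on-first-access caching, mirroring the content property above."""

    def __init__(self, lines):
        self._lines = lines
        self._content = None
        self.parse_calls = 0  # instrumentation to show caching in action

    def parse_files(self):
        self.parse_calls += 1
        # One dictionary per parsed line, as the docstring above describes.
        return [{'line': ln} for ln in self._lines]

    @property
    def content(self):
        if self._content is None:
            self._content = self.parse_files()
        return self._content
```

One caveat of the `is None` sentinel: if a legitimate parse result could itself be `None` (or should be re-parsed), a separate "parsed" flag would be safer.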
def get_lacp_mode(self, name):
    members = self.get_members(name)
    if not members:
        return DEFAULT_LACP_MODE
    for member in members:
        match = re.search(r'channel-group\s\d+\smode\s(?P<value>.+)',
                          self.get_block('^interface %s' % member))
        if match:
            return match.group('value')
    return DEFAULT_LACP_MODE
Returns the LACP mode for the specified Port-Channel interface.

Args:
    name (str): The Port-Channel interface name to return the LACP mode
        for from the configuration.

Returns:
    The configured LACP mode for the interface. Valid mode values are
    'on', 'passive', 'active'.
juraj-google-style
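The mode extraction above relies on a named capture group. A standalone sketch of just that step is below; note that the fallback value `'on'` for `DEFAULT_LACP_MODE` is an assumption (the constant's value is not shown in the entry), and the guard against a failed match is added here deliberately:

```python
import re

LACP_RE = re.compile(r'channel-group\s\d+\smode\s(?P<value>.+)')
DEFAULT_LACP_MODE = 'on'  # assumed default; not defined in the entry above

def lacp_mode_from_config(config_block):
    """Extract the LACP mode from a member interface's config text.

    Falls back to DEFAULT_LACP_MODE when no channel-group line is present,
    guarding the None-match case.
    """
    match = LACP_RE.search(config_block)
    return match.group('value') if match else DEFAULT_LACP_MODE
```

`(?P<value>.+)` captures everything after `mode ` up to the end of the line, so `channel-group 10 mode active` yields `active`.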
def send_messages(self, email_messages): if not email_messages: return sent_message_count = 0 for email_message in email_messages: if self._send(email_message): sent_message_count += 1 return sent_message_count
Sends one or more EmailMessage objects and returns the number of email
messages sent.

Args:
    email_messages: A list of Django EmailMessage objects.

Returns:
    An integer count of the messages sent.

Raises:
    ClientError: An interaction with the Amazon SES HTTP API failed.
juraj-google-style
def float_value_convert(dictin, dropfailedvalues=False):
    return key_value_convert(dictin, valuefn=float,
                             dropfailedvalues=dropfailedvalues)
Convert values of dictionary to floats.

Args:
    dictin (DictUpperBound): Input dictionary.
    dropfailedvalues (bool): Whether to drop dictionary entries where
        value conversion fails. Defaults to False.

Returns:
    Dict: Dictionary with values converted to floats.
codesearchnet
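`float_value_convert` above delegates to a `key_value_convert` helper that is not shown. A hypothetical stand-in consistent with the entry's signature is sketched below; the failure behavior when `dropfailedvalues` is False (keeping the original value) is an assumption, not documented in the entry:

```python
def key_value_convert(dictin, keyfn=None, valuefn=None, dropfailedvalues=False):
    """Apply conversion functions to keys/values, optionally dropping failures.

    Hypothetical stand-in for the helper the entry above delegates to.
    """
    keyfn = keyfn or (lambda k: k)
    valuefn = valuefn or (lambda v: v)
    dictout = {}
    for key, value in dictin.items():
        try:
            dictout[keyfn(key)] = valuefn(value)
        except (ValueError, TypeError):
            if not dropfailedvalues:
                dictout[keyfn(key)] = value  # assumed: keep original on failure
    return dictout

def float_value_convert(dictin, dropfailedvalues=False):
    return key_value_convert(dictin, valuefn=float,
                             dropfailedvalues=dropfailedvalues)
```

With `dropfailedvalues=True`, entries whose values cannot be parsed as floats simply disappear from the output.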
def tokenize(self, string):
    s = string
    s = re.sub('\t', ' ', s)
    s = re.sub('(' + regex_separator + ')', ' \\g<1> ', s)
    s = re.sub('([^0-9]),', '\\g<1> , ', s)
    s = re.sub(',([^0-9])', ' , \\g<1>', s)
    s = re.sub("^(')", '\\g<1> ', s)
    s = re.sub('(' + regex_not_letter_number + ")'", "\\g<1> '", s)
    s = re.sub('(' + regex_clitics + ')$', ' \\g<1>', s)
    s = re.sub('(' + regex_clitics + ')(' + regex_not_letter_number + ')',
               ' \\g<1> \\g<2>', s)
    words = s.strip().split()
    p1 = re.compile('.*' + regex_letter_number + '\\.')
    p2 = re.compile('^([A-Za-z]\\.([A-Za-z]\\.)+|[A-Z][bcdfghj-nptvxz]+\\.)$')
    token_list = []
    for word in words:
        m1 = p1.match(word)
        m2 = p2.match(word)
        if m1 and word not in abbreviations_list and not m2:
            token_list.append(word[0:word.find('.')])
            token_list.append(word[word.find('.')])
        else:
            token_list.append(word)
    return token_list
Used to parse a string into tokens.

This function takes in a string and returns a list of tokens.

Args:
    string (str): A string of words or a sentence to be parsed into
        tokens.

Returns:
    list: A list of tokens from the string passed in.

Notes:
    Doesn't seem to parse contractions correctly: for example, "don't"
    would parse as two tokens, 'do' and "n't", and this seems to be not
    what we would want. Maybe it should stay "don't", or maybe
    contractions should be expanded into "do not" or "do", "not". This
    could be done with a contraction dictionary and some preprocessing.
codesearchnet
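The tokenizer above is a pipeline of regex substitutions that pad separators, commas, and clitics with spaces before splitting on whitespace. A much-reduced sketch of the idea, using a single `re.findall` instead of the substitution chain (and sharing the original's imperfect clitic handling), could be:

```python
import re

def simple_tokenize(text):
    """Whitespace-and-punctuation tokenizer.

    Reduced sketch of the rule-based tokenizer above: it separates
    punctuation marks from words, but like the original it does not
    handle clitics such as "n't" specially.
    """
    # \w+ grabs runs of word characters; [^\w\s] grabs each punctuation
    # mark as its own token.
    return re.findall(r"\w+|[^\w\s]", text)
```

This sidesteps the original's abbreviation and multi-initial special cases ("U.S.", "Mr.") entirely, which the substitution pipeline exists to protect.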
def _actor_property(self, event, cameo_code, actor_regex): if cameo_code not in self.mapping: return None arguments = self.mapping[cameo_code][event + "-arguments"] if not isinstance(arguments, list): arguments = [arguments] result = list() for a in arguments: match = re.search(actor_regex, a) if match: result.append(match.group(1)) return result[0] if len(result) > 0 else None
Determine the property to use for modeling an actor.

Args:
    event: one of "event1", "event2" or "event3".
    cameo_code: one of the cameo codes.
    actor_regex: one of the regexes above.

Returns:
    The first matching property, or None if the cameo code is not in the
    mapping or no argument matches the regex.
juraj-google-style
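The heart of `_actor_property` above is scanning a list of argument strings and returning the first regex capture-group hit (the original collects all hits and returns `result[0]`, which is observably the same as returning the first). A standalone sketch of that core, with a hypothetical `actor:` regex used only for demonstration, might be:

```python
import re

def first_matching_group(arguments, actor_regex):
    """Return the first capture-group hit across a list of argument strings.

    Mirrors the search loop in _actor_property above, reduced to its core;
    returns None when nothing matches.
    """
    for argument in arguments:
        match = re.search(actor_regex, argument)
        if match:
            return match.group(1)
    return None
```

Returning on the first hit also avoids scanning the remaining arguments, unlike the collect-then-index approach in the entry above.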