Columns: code (string, length 20–4.93k), docstring (string, length 33–1.27k), source (string, 3 classes)
def entitlements(self, request, pk=None):
    enterprise_customer_user = self.get_object()
    instance = {'entitlements': enterprise_customer_user.entitlements}
    serializer = serializers.EnterpriseCustomerUserEntitlementSerializer(
        instance, context={'request': request})
    return Response(serializer.data)
Retrieve the list of entitlements available to this learner. Only entitlements that satisfy the enterprise customer's data-sharing setting are returned. Arguments: request (HttpRequest): Reference to the in-progress request instance. pk (Int): Primary key value of the selected enterprise learner. Returns: (HttpResponse): Response object containing a list of the learner's entitlements.
codesearchnet
def set_peer_link(self, value=None, default=False, disable=False): return self._configure_mlag('peer-link', value, default, disable)
Configures the mlag peer-link value Args: value (str): The value to configure the peer-link default (bool): Configures the peer-link using the default keyword disable (bool): Negates the peer-link using the no keyword Returns: bool: Returns True if the commands complete successfully
juraj-google-style
def make_datastore_query(self, cursor=None):
    filters = {}
    filters['__key__ >= '] = _key_for_namespace(self.namespace_start, self.app)
    filters['__key__ <= '] = _key_for_namespace(self.namespace_end, self.app)
    return datastore.Query('__namespace__',
                           filters=filters,
                           keys_only=True,
                           cursor=cursor,
                           _app=self.app)
Returns a datastore.Query that generates all namespaces in the range. Args: cursor: start cursor for the query. Returns: A datastore.Query instance that generates db.Keys for each namespace in the NamespaceRange.
juraj-google-style
def run_eagerly(self):
    if self.dynamic and self._run_eagerly is False:
        raise ValueError(
            'Your model contains layers that can only be successfully run in '
            'eager execution (layers constructed with `dynamic=True`). '
            'You cannot set `run_eagerly=False`.')
    if self._cluster_coordinator and self._run_eagerly:
        raise ValueError(
            'When using `Model` with `ParameterServerStrategy`, '
            '`run_eagerly` is not supported.')
    return self.dynamic or self._run_eagerly or (
        def_function.functions_run_eagerly() and self._run_eagerly is None)
Settable attribute indicating whether the model should run eagerly. Running eagerly means that your model will be run step by step, like Python code. Your model might run slower, but it should become easier for you to debug it by stepping into individual layer calls. By default, we will attempt to compile your model to a static graph to deliver the best execution performance. Returns: Boolean, whether the model should run eagerly.
github-repos
def _update_listing_client_kwargs(client_kwargs, max_request_entries):
    client_kwargs = client_kwargs.copy()
    if max_request_entries:
        client_kwargs['num_results'] = max_request_entries
    return client_kwargs
Updates client kwargs for listing functions. Args: client_kwargs (dict): Client arguments. max_request_entries (int): If specified, maximum entries returned by request. Returns: dict: Updated client_kwargs
juraj-google-style
def convert_exchange_to_compounds(model):
    exchanges = set()
    for reaction in model.reactions:
        equation = reaction.properties.get('equation')
        if equation is None:
            continue
        if len(equation.compounds) != 1:
            if (len(equation.left) == 0) != (len(equation.right) == 0):
                logger.warning('Exchange reaction {} has more than one'
                               ' compound, it was not converted to'
                               ' exchange compound'.format(reaction.id))
            continue
        exchanges.add(reaction.id)
    for reaction_id in exchanges:
        equation = model.reactions[reaction_id].equation
        compound, value = equation.compounds[0]
        if compound.compartment != model.extracellular_compartment:
            continue
        if compound in model.exchange:
            logger.warning('Compound {} is already defined in the exchange'
                           ' definition'.format(compound))
            continue
        lower_flux, upper_flux = None, None
        if reaction_id in model.limits:
            _, lower, upper = model.limits[reaction_id]
            if lower is not None:
                lower_flux = lower * abs(value)
            if upper is not None:
                upper_flux = upper * abs(value)
        if lower_flux is None and equation.direction == Direction.Forward:
            lower_flux = 0
        if upper_flux is None and equation.direction == Direction.Reverse:
            upper_flux = 0
        if value > 0:
            lower_flux, upper_flux = (
                -upper_flux if upper_flux is not None else None,
                -lower_flux if lower_flux is not None else None)
        model.exchange[compound] = (
            compound, reaction_id, lower_flux, upper_flux)
        model.reactions.discard(reaction_id)
        model.limits.pop(reaction_id, None)
Convert exchange reactions in model to exchange compounds. Only exchange reactions in the extracellular compartment are converted. The extracellular compartment must be defined for the model. Args: model: :class:`NativeModel`.
juraj-google-style
def trailing_stop_loss(self, accountID, **kwargs): return self.create(accountID, order=TrailingStopLossOrderRequest(**kwargs))
Shortcut to create a Trailing Stop Loss Order in an Account Args: accountID : The ID of the Account kwargs : The arguments to create a TrailingStopLossOrderRequest Returns: v20.response.Response containing the results from submitting the request
codesearchnet
def delete_service(self, service: str):
    if not self._manager:
        raise RuntimeError('Services can only be deleted '
                           'on swarm manager nodes')
    self._api_client.remove_service(service)
Removes/stops a Docker service. Only manager nodes can delete a service. Args: service (string): Service name or ID.
juraj-google-style
def _apply_op(self, op_fn): raise NotImplementedError()
Applies given tensor-to-tensor op. This method is used for implementing ops that take a tensor and return a new tensor, such as tf.expand_dims or tf.transpose. Implementing wrappers should apply `op_fn` to the backing tensor(s) and return a new wrapper instance with the updated backing tensor. Args: op_fn: Callable that applies a tensor-to-tensor op to the given Tensor. E.g. applies tf.expand_dims. Returns: A TensorWrapper instance with updated backing tensor(s).
github-repos
def write_gtiff_file(f_name, n_rows, n_cols, data, geotransform, srs,
                     nodata_value, gdal_type=GDT_Float32):
    UtilClass.mkdir(os.path.dirname(FileClass.get_file_fullpath(f_name)))
    driver = gdal.GetDriverByName(str('GTiff'))
    try:
        ds = driver.Create(f_name, n_cols, n_rows, 1, gdal_type)
    except Exception:
        print('Cannot create output file %s' % f_name)
        return
    ds.SetGeoTransform(geotransform)
    try:
        ds.SetProjection(srs.ExportToWkt())
    except AttributeError:
        # srs is already a WKT string rather than an osr.SpatialReference
        ds.SetProjection(srs)
    ds.GetRasterBand(1).SetNoDataValue(nodata_value)
    if isinstance(data, numpy.ndarray) and data.dtype in [
            numpy.dtype('int'), numpy.dtype('float')]:
        data = numpy.where(numpy.isnan(data), nodata_value, data)
    ds.GetRasterBand(1).WriteArray(data)
    ds = None
Output Raster to GeoTiff format file. Args: f_name: output gtiff file name. n_rows: Row count. n_cols: Col count. data: 2D array data. geotransform: geographic transformation. srs: coordinate system. nodata_value: nodata value. gdal_type (:obj:`pygeoc.raster.GDALDataType`): output raster data type, GDT_Float32 as default.
juraj-google-style
def _get_attribute(self, node, obj, cls, name, valself):
    if cls:
        node, attr = self._get_attribute_computed(
            node, cls, name, valself, compute_function='__getattribute__')
    else:
        attr = None
    if attr is None:
        if isinstance(obj, abstract.Class):
            node, attr = self._lookup_from_mro_and_handle_descriptors(
                node, obj, name, valself, skip=())
        else:
            node, attr = self._get_member(node, obj, name, valself)
    is_unknown_instance_attribute = attr is None and obj.maybe_missing_members
    if attr is None and cls:
        node, attr = self.get_attribute(node, cls, name, valself)
        if attr:
            if is_unknown_instance_attribute:
                attr2 = self._lookup_from_mro(node, cls, name, valself, ())
                if any(isinstance(v, abstract.FUNCTION_TYPES)
                       for v in attr2.data):
                    is_unknown_instance_attribute = False
        elif not is_unknown_instance_attribute:
            node, attr = self._get_attribute_computed(
                node, cls, name, valself, compute_function='__getattr__')
    if is_unknown_instance_attribute:
        attr = self.ctx.new_unsolvable(node)
    if attr is not None:
        attr = self._filter_var(node, attr)
    return (node, attr)
Get an attribute from an object or its class. The underlying method called by all of the (_)get_(x_)attribute methods. Attempts to resolve an attribute first with __getattribute__, then by fetching it from the object, then by fetching it from the class, and finally with __getattr__. Arguments: node: The current node. obj: The object. cls: The object's class, may be None. name: The attribute name. valself: The binding to the self reference. Returns: A tuple of the node and the attribute, or None if it was not found.
github-repos
def run_pipeline_steps(steps, context):
    logger.debug("starting")
    assert isinstance(context, dict), (
        "context must be a dictionary, even if empty {}.")
    if steps is None:
        logger.debug("No steps found to execute.")
    else:
        step_count = 0
        for step in steps:
            step_instance = Step(step)
            step_instance.run_step(context)
            step_count += 1
        logger.debug(f"executed {step_count} steps")
    logger.debug("done")
Run the run_step(context) method of each step in steps. Args: steps: list. Sequence of Steps to execute context: pypyr.context.Context. The pypyr context. Will mutate.
juraj-google-style
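The step-runner pattern above can be sketched in a few self-contained lines. `Step` here is a hypothetical stand-in for pypyr's Step class: each step receives the shared context dict and mutates it in order.

```python
class Step:
    """Hypothetical stand-in for pypyr's Step: wraps a callable step body."""

    def __init__(self, fn):
        self.fn = fn

    def run_step(self, context):
        # Each step mutates the shared pipeline context in place.
        self.fn(context)


def run_pipeline_steps(steps, context):
    """Run each step's run_step(context) in sequence; None means no steps."""
    assert isinstance(context, dict), (
        "context must be a dictionary, even if empty {}.")
    if steps is None:
        return
    for step in steps:
        step.run_step(context)


ctx = {"count": 0}
run_pipeline_steps([Step(lambda c: c.update(count=c["count"] + 1)),
                    Step(lambda c: c.update(count=c["count"] * 10))], ctx)
print(ctx["count"])  # 10
```

Because steps share one mutable context, ordering matters: swapping the two steps above would yield 1 instead of 10.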
def create(self, resource, timeout=-1): return self._client.create(resource, timeout=timeout, default_values=self.DEFAULT_VALUES)
Creates a scope. Args: resource (dict): Object to create. timeout: Timeout in seconds. Wait for task completion by default. The timeout does not abort the operation in OneView, just stop waiting for its completion. Returns: dict: Created scope.
juraj-google-style
def partitioned_call_op(name: str,
                        args: Sequence[core.Tensor],
                        is_stateful: bool,
                        tout: Sequence[Any],
                        config: Any = None,
                        executor_type: Optional[str] = None,
                        xla_compile_attr: Any = None) -> ops.Operation:
    if config is None:
        config = function_utils.get_disabled_rewriter_config()
    if executor_type is None:
        executor_type = ''
    args = [ops.convert_to_tensor(x) for x in args]
    tin_attr = attr_value_pb2.AttrValue(
        list=attr_value_pb2.AttrValue.ListValue(
            type=[x.dtype.as_datatype_enum for x in args]))
    tout_attr = attr_value_pb2.AttrValue(
        list=attr_value_pb2.AttrValue.ListValue(type=tout))
    func_attr = attr_value_pb2.AttrValue(
        func=attr_value_pb2.NameAttrList(name=name))
    executor_type_attr = attr_value_pb2.AttrValue(
        s=compat.as_bytes(executor_type))
    config_proto = attr_value_pb2.AttrValue(s=config)
    op_name = 'StatefulPartitionedCall' if is_stateful else 'PartitionedCall'
    op_attrs = {
        'Tin': tin_attr,
        'Tout': tout_attr,
        'f': func_attr,
        'config_proto': config_proto,
        'executor_type': executor_type_attr,
    }
    if xla_compile_attr is not None:
        op_attrs[attributes_lib.XLA_COMPILE] = xla_compile_attr
    op = ops.get_default_graph().create_op(
        op_name, args, tout, name=op_name, attrs=op_attrs)
    return op
Generates a function call op respecting device annotations. Args: name: Name of the function to call. args: The arguments of the function, including captured inputs. is_stateful: If the function is stateful. tout: a list containing the output dtypes enums config: (Optional) A `tensorflow::ConfigProto` proto, serialized. If `None`, all optimizations are disabled. Currently only handled for eager defined functions. executor_type: (Optional) A string for the name of the executor to be used in the function call. If not set, or set to an empty string, the default tensorflow executor will be used. xla_compile_attr: (Optional) value of the XLA compilation attribute. Returns: Returns the operation.
github-repos
def update_aliases(self):
    changed = False
    try:
        response = self.client.api.get_room_state(self.room_id)
    except MatrixRequestError:
        return False
    for chunk in response:
        content = chunk.get('content')
        if content:
            if 'aliases' in content:
                aliases = content['aliases']
                if aliases != self.aliases:
                    self.aliases = aliases
                    changed = True
            if chunk.get('type') == 'm.room.canonical_alias':
                canonical_alias = content['alias']
                if self.canonical_alias != canonical_alias:
                    self.canonical_alias = canonical_alias
                    changed = True
    if changed and self.aliases and not self.canonical_alias:
        self.canonical_alias = self.aliases[0]
    return changed
Get aliases information from room state Returns: boolean: True if the aliases changed, False if not
codesearchnet
def GetUsernameByIdentifier(self, user_identifier,
                            session_identifier=CURRENT_SESSION):
    user_accounts = self._user_accounts.get(session_identifier, {})
    user_account = user_accounts.get(user_identifier, None)
    if not user_account:
        return ''
    return user_account.username or ''
Retrieves the username based on a user identifier. Args: user_identifier (str): user identifier, either a UID or SID. session_identifier (Optional[str]): session identifier, where CURRENT_SESSION represents the active session. Returns: str: username.
codesearchnet
def size_internal(input, name=None, optimize=True, out_type=dtypes.int32):
    if (context.executing_eagerly() and not hasattr(input, 'graph') and
            not isinstance(input, (sparse_tensor.SparseTensor,
                                   sparse_tensor.SparseTensorValue))):
        input = ops.convert_to_tensor(input)
        np_out_type = out_type.as_numpy_dtype
        num_elements = np.prod(input._shape_tuple(), dtype=np_out_type)
        return ops.convert_to_tensor(num_elements, dtype=out_type)
    with ops.name_scope(name, 'Size', [input]) as name:
        if isinstance(input, (sparse_tensor.SparseTensor,
                              sparse_tensor.SparseTensorValue)):
            return gen_math_ops.prod(
                gen_math_ops.cast(input.dense_shape, out_type), 0, name=name)
        else:
            input = ops.convert_to_tensor(input)
            input_shape = input.get_shape()
            if optimize:
                if input_shape.is_fully_defined():
                    return constant(input_shape.num_elements(), out_type,
                                    name=name)
                if input_shape.dims and any(
                        dim == 0 for dim in input_shape.dims):
                    return constant(0, out_type, name=name)
            return gen_array_ops.size(input, name=name, out_type=out_type)
Returns the size of a tensor. Args: input: A `Tensor` or `SparseTensor`. name: A name for the operation (optional). optimize: if true, encode the size as a constant when possible. out_type: (Optional) The specified non-quantized numeric output type of the operation. Defaults to `tf.int32`. Returns: A `Tensor` of type `out_type`. Defaults to `tf.int32`.
github-repos
def query(self, queryEngine, query=None, vendorSpecific=None, **kwargs): response = self.queryResponse(queryEngine, query, vendorSpecific, **kwargs) return self._read_stream_response(response)
See Also: queryResponse() Args: queryEngine: query: vendorSpecific: **kwargs: Returns:
juraj-google-style
def subtract(x1, x2): if any_symbolic_tensors((x1, x2)): return Subtract().symbolic_call(x1, x2) return backend.numpy.subtract(x1, x2)
Subtract arguments element-wise. Args: x1: First input tensor. x2: Second input tensor. Returns: Output tensor, element-wise difference of `x1` and `x2`.
github-repos
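The element-wise semantics of `subtract` can be illustrated with a plain-Python sketch. The real op dispatches to the backend and supports broadcasting; this toy version assumes two equal-length sequences and does neither.

```python
def subtract(x1, x2):
    """Element-wise difference of two equal-length sequences.

    Illustrative only: the backend op also handles broadcasting and
    n-dimensional tensors, which this sketch omits.
    """
    if len(x1) != len(x2):
        raise ValueError("inputs must have the same length")
    return [a - b for a, b in zip(x1, x2)]


print(subtract([5, 7, 9], [1, 2, 3]))  # [4, 5, 6]
```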
def write_command(self, command: Command):
    _logger.debug('Write command.')
    data = command.to_bytes()
    yield from self._connection.write(data)
    self._data_event_dispatcher.notify_write(data)
Write a command to the stream. Args: command: The command. Coroutine.
codesearchnet
def sql_column_like_drug(self, column_name: str) -> str:
    clauses = [
        "{col} LIKE {fragment}".format(
            col=column_name, fragment=sql_string_literal(f))
        for f in self.sql_like_fragments
    ]
    return "({})".format(" OR ".join(clauses))
Returns SQL like .. code-block:: sql (column_name LIKE '%drugname1%' OR column_name LIKE '%drugname2%') for the drug names that this Drug object knows about. Args: column_name: column name, pre-escaped if necessary Returns: SQL fragment as above
juraj-google-style
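The clause-building logic above can be shown standalone. `sql_string_literal` here is a minimal illustrative quoting helper (real code should rely on the database driver's escaping), and the `'%…%'` wrapping of each fragment is assumed from the docstring's example output.

```python
def sql_string_literal(s: str) -> str:
    """Minimal SQL string-literal quoting (illustrative only)."""
    return "'" + s.replace("'", "''") + "'"


def sql_column_like_any(column_name: str, fragments: list) -> str:
    """Builds (col LIKE '%a%' OR col LIKE '%b%'), as in the method above."""
    clauses = [
        "{col} LIKE {fragment}".format(
            col=column_name, fragment=sql_string_literal('%' + f + '%'))
        for f in fragments
    ]
    return "({})".format(" OR ".join(clauses))


print(sql_column_like_any("drug_name", ["aspirin", "ibuprofen"]))
# (drug_name LIKE '%aspirin%' OR drug_name LIKE '%ibuprofen%')
```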
def export_outputs_for_mode(mode, serving_export_outputs=None,
                            predictions=None, loss=None, metrics=None):
    if mode not in SIGNATURE_KEY_MAP:
        raise ValueError(
            f'Export output type not found for `mode`: {mode}. '
            f'Expected one of: {list(SIGNATURE_KEY_MAP.keys())}.')
    signature_key = SIGNATURE_KEY_MAP[mode]
    if mode_keys.is_predict(mode):
        return get_export_outputs(serving_export_outputs, predictions)
    elif mode_keys.is_train(mode):
        return {signature_key: export_output_lib.TrainOutput(
            loss=loss, predictions=predictions, metrics=metrics)}
    else:
        return {signature_key: export_output_lib.EvalOutput(
            loss=loss, predictions=predictions, metrics=metrics)}
Util function for constructing a `ExportOutput` dict given a mode. The returned dict can be directly passed to `build_all_signature_defs` helper function as the `export_outputs` argument, used for generating a SignatureDef map. Args: mode: A `ModeKeys` specifying the mode. serving_export_outputs: Describes the output signatures to be exported to `SavedModel` and used during serving. Should be a dict or None. predictions: A dict of Tensors or single Tensor representing model predictions. This argument is only used if serving_export_outputs is not set. loss: A dict of Tensors or single Tensor representing calculated loss. metrics: A dict of (metric_value, update_op) tuples, or a single tuple. metric_value must be a Tensor, and update_op must be a Tensor or Op Returns: Dictionary mapping the key to an `ExportOutput` object. The key is the expected SignatureDef key for the mode. Raises: ValueError: if an appropriate ExportOutput cannot be found for the mode.
github-repos
def finalize_variable_values(self, var_list):
    if self.use_ema:
        self._overwrite_model_variables_with_average_value(var_list)
Set the final value of model's trainable variables. Sometimes there are some extra steps before ending the variable updates, such as overriding the model variables with its average value. Args: var_list: list of model variables.
github-repos
def download(self, resource_id):
    self.resource_id(str(resource_id))
    self._request_uri = '{}/download'.format(self._request_uri)
Update the request URI to download the document for this resource. Args: resource_id (integer): The group id.
codesearchnet
def ParseFileObject(self, parser_mediator, file_object):
    if file_object.read(1) != b'{':
        raise errors.UnableToParseFile(
            'is not a valid JSON file, missing opening brace.')
    file_object.seek(0, os.SEEK_SET)
    file_entry = parser_mediator.GetFileEntry()
    file_system = file_entry.GetFileSystem()
    json_file_path = parser_mediator.GetDisplayName()
    split_path = file_system.SplitPath(json_file_path)
    try:
        if 'containers' in split_path:
            if 'config.json' in split_path:
                self._ParseContainerConfigJSON(parser_mediator, file_object)
            if json_file_path.endswith('-json.log'):
                self._ParseContainerLogJSON(parser_mediator, file_object)
        elif 'graph' in split_path:
            if 'json' in split_path:
                self._ParseLayerConfigJSON(parser_mediator, file_object)
    except ValueError as exception:
        if str(exception) == 'No JSON object could be decoded':
            raise errors.UnableToParseFile(str(exception))
        raise
Parses various Docker configuration and log files in JSON format. This method checks whether the file_object points to a Docker JSON config or log file, and calls the corresponding _Parse* function to generate events. Args: parser_mediator (ParserMediator): mediates interactions between parsers and other components, such as storage and dfvfs. file_object (dfvfs.FileIO): a file-like object. Raises: UnableToParseFile: when the file cannot be parsed. ValueError: if the JSON file cannot be decoded.
codesearchnet
def added_tokens_decoder(self) -> dict[int, AddedToken]: return self._tokenizer.get_added_tokens_decoder()
Returns the added tokens in the vocabulary as a dictionary of index to AddedToken. Returns: `Dict[int, AddedToken]`: The added tokens.
github-repos
def ExamineEvent(self, mediator, event):
    pathspec = getattr(event, 'pathspec', None)
    if pathspec is None:
        return
    if self._paths_with_hashes.get(pathspec, None):
        return
    hash_attributes = {}
    for attribute_name, attribute_value in event.GetAttributes():
        if attribute_name.endswith('_hash'):
            hash_attributes[attribute_name] = attribute_value
    self._paths_with_hashes[pathspec] = hash_attributes
Analyzes an event and extracts hashes as required. Args: mediator (AnalysisMediator): mediates interactions between analysis plugins and other components, such as storage and dfvfs. event (EventObject): event to examine.
juraj-google-style
async def get_json(self, url, json_callback=None, **kwargs):
    if not json_callback:
        json_callback = json.loads
    response = await self.request(method='get', url=url, **kwargs)
    return json_callback(response)
Get a URL and return its JSON response. Args: url (str): URL to be requested. json_callback (func): Custom JSON loader function. Defaults to :meth:`json.loads`. kwargs (dict): Additional arguments to pass through to the request. Returns: response body returned by :func:`json_callback` function.
codesearchnet
def set_headline(self, level, message, timestamp=None, now_reference=None):
    if self.headline is not None and self.headline.message == message:
        self.headline.created = monotonic()
        self.headline.count += 1
        return
    msg_object = ServiceMessage(level, message, self._last_message_id,
                                timestamp, now_reference)
    self.headline = msg_object
    self._last_message_id += 1
Set the persistent headline message for this service. Args: level (int): The level of the message (info, warning, error) message (string): The message contents timestamp (float): An optional monotonic value in seconds for when the message was created now_reference (float): If timestamp is not relative to monotonic() as called from this module then this should be now() as seen by whoever created the timestamp.
codesearchnet
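The deduplication logic above — repeating the same headline bumps a counter and refreshes the timestamp instead of allocating a new message id — can be sketched self-contained. `Headline` and `Service` are simplified stand-ins for the original `ServiceMessage` and service state.

```python
import time


class Headline:
    """Simplified stand-in for ServiceMessage."""

    def __init__(self, level, message):
        self.level = level
        self.message = message
        self.created = time.monotonic()
        self.count = 1


class Service:
    def __init__(self):
        self.headline = None
        self._last_message_id = 0

    def set_headline(self, level, message):
        # Repeated identical messages only refresh the existing headline.
        if self.headline is not None and self.headline.message == message:
            self.headline.created = time.monotonic()
            self.headline.count += 1
            return
        self.headline = Headline(level, message)
        self._last_message_id += 1


svc = Service()
svc.set_headline(2, "disk full")
svc.set_headline(2, "disk full")
print(svc.headline.count, svc._last_message_id)  # 2 1
```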
def clear_agent(self, short_name, client_id):
    if short_name not in self.services:
        raise ArgumentError('Unknown service name', short_name=short_name)
    if short_name not in self.agents:
        raise ArgumentError('No agent registered for service',
                            short_name=short_name)
    if client_id != self.agents[short_name]:
        raise ArgumentError('Client was not registered for service',
                            short_name=short_name, client_id=client_id,
                            current_client=self.agents[short_name])
    del self.agents[short_name]
Remove a client id from being the command handler for a service. Args: short_name (str): The name of the service to set an agent for. client_id (str): A globally unique id for the client that should no longer receive commands for this service.
codesearchnet
def print(self, tag=None, name=None):
    _name = name
    if _name is None:
        _name = 'print'
    fn = streamsx.topology.functions.print_flush
    if tag is not None:
        tag = str(tag) + ': '
        fn = lambda v: streamsx.topology.functions.print_flush(tag + str(v))
    sp = self.for_each(fn, name=_name)
    sp._op().sl = _SourceLocation(_source_info(), 'print')
    return sp
Prints each tuple to stdout flushing after each tuple. If `tag` is not `None` then each tuple has "tag: " prepended to it before printing. Args: tag: A tag to prepend to each tuple. name(str): Name of the resulting stream. When `None` defaults to a generated name. Returns: streamsx.topology.topology.Sink: Stream termination. .. versionadded:: 1.6.1 `tag`, `name` parameters. .. versionchanged:: 1.7 Now returns a :py:class:`Sink` instance.
codesearchnet
def _add_monomer(self, monomer, mon_vector, move_direction):
    translate_by = (self.molecule.cart_coords[self.end] +
                    self.link_distance * move_direction)
    monomer.translate_sites(range(len(monomer)), translate_by)
    if not self.linear_chain:
        self._align_monomer(monomer, mon_vector, move_direction)
    does_cross = False
    for i, site in enumerate(monomer):
        try:
            self.molecule.append(site.specie, site.coords,
                                 properties=site.properties)
        except Exception:
            does_cross = True
            polymer_length = len(self.molecule)
            self.molecule.remove_sites(
                range(polymer_length - i, polymer_length))
            break
    if not does_cross:
        self.length += 1
        self.end += len(self.monomer)
Extend the polymer molecule by adding a monomer along the mon_vector direction. Args: monomer (Molecule): monomer molecule. mon_vector (numpy.array): monomer vector that points from head to tail. move_direction (numpy.array): direction along which the monomer will be positioned.
codesearchnet
def verify_dataset_shuffled(x):
    assert isinstance(x, data_types.DatasetV2)
    graph_def = get_dataset_graph_def(x)
    for node in graph_def.node:
        if node.op.startswith('ShuffleDataset'):
            return True
    for function in graph_def.library.function:
        for node in function.node_def:
            if node.op.startswith('ShuffleDataset'):
                return True
    logging.warning('Expected a shuffled dataset but input dataset `x` is '
                    'not shuffled. Please invoke `shuffle()` on input '
                    'dataset.')
    return False
Verifies that the dataset is shuffled. Args: x: Dataset passed as an input to the model. Returns: boolean, whether the input dataset is shuffled or not.
github-repos
def set_license(self, license, **kwargs): data = {'license': license} return self.http_post('/license', post_data=data, **kwargs)
Add a new license. Args: license (str): The license string **kwargs: Extra options to send to the server (e.g. sudo) Raises: GitlabAuthenticationError: If authentication is not correct GitlabPostError: If the server cannot perform the request Returns: dict: The new license information
codesearchnet
def dense_labels_to_sparse(dense, length):
    flat_values = array_ops.reshape(dense, [-1])
    flat_indices = math_ops.range(
        array_ops.shape(flat_values, out_type=dtypes.int64)[0])
    mask = array_ops.sequence_mask(length, maxlen=array_ops.shape(dense)[1])
    flat_mask = array_ops.reshape(mask, [-1])
    indices = array_ops.expand_dims(
        array_ops.boolean_mask(flat_indices, flat_mask), 1)
    values = array_ops.boolean_mask(flat_values, flat_mask)
    sparse = sparse_tensor.SparseTensor(
        indices=indices,
        values=math_ops.cast(values, dtypes.int32),
        dense_shape=array_ops.shape(flat_values, out_type=dtypes.int64))
    reshaped = sparse_ops.sparse_reshape(sparse, array_ops.shape(dense))
    max_length = math_ops.reduce_max(length)
    return sparse_tensor.SparseTensor(
        indices=reshaped.indices,
        values=reshaped.values,
        dense_shape=[
            math_ops.cast(reshaped.dense_shape[0], dtypes.int64),
            math_ops.cast(max_length, dtypes.int64)
        ])
Convert dense labels with sequence lengths to sparse tensor. Args: dense: tensor of shape [batch, max_length] length: int tensor of shape [batch] The length of each sequence in dense. Returns: tf.sparse.SparseTensor with values only for the valid elements of sequences.
github-repos
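The effect of the TF helper above can be shown with a pure-Python sketch: keep only the first `length[b]` entries of each row and collect the (indices, values) pairs that a `tf.sparse.SparseTensor` would carry. This omits the tensor machinery (masking, reshaping, dtype casts) entirely.

```python
def dense_labels_to_sparse(dense, length):
    """Pure-Python sketch: valid (batch, time) indices and their values.

    dense: list of equal-length rows; length: per-row valid lengths.
    """
    indices, values = [], []
    for b, (row, n) in enumerate(zip(dense, length)):
        for t, v in enumerate(row[:n]):
            indices.append((b, t))
            values.append(v)
    return indices, values


idx, vals = dense_labels_to_sparse([[7, 8, 0], [5, 0, 0]], [2, 1])
print(idx, vals)  # [(0, 0), (0, 1), (1, 0)] [7, 8, 5]
```

Padding past each sequence's length is dropped, which is exactly what the boolean mask in the real implementation achieves.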
def _send_to_consumer(self, block):
    self._consumer.write(block)
    self._sent += len(block)
    if self._callback:
        self._callback(self._sent, self.length)
Send a block of bytes to the consumer. Args: block (str): Block of bytes
juraj-google-style
def as_string(self) -> str:
    if len(self._messages) != 1:
        raise ValueError('FHIRPath did not evaluate to a single string.')
    if fhir_types.is_type_or_profile_of_code(self._messages[0]):
        return codes.get_code_as_string(self._messages[0])
    return proto_utils.get_value_at_field(self._messages[0], 'value')
Returns the result as a string. Raises: ValueError if the `EvaluationResult` is not a single string.
github-repos
def _generate_bucket_value(self, bucketing_id):
    ratio = float(
        self._generate_unsigned_hash_code_32_bit(bucketing_id)) / MAX_HASH_VALUE
    return math.floor(ratio * MAX_TRAFFIC_VALUE)
Helper function to generate bucket value in half-closed interval [0, MAX_TRAFFIC_VALUE). Args: bucketing_id: ID for bucketing. Returns: Bucket value corresponding to the provided bucketing ID.
juraj-google-style
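The hash-to-bucket mapping above can be sketched self-contained. Optimizely's real implementation hashes with MurmurHash3; `zlib.crc32` is substituted here only to keep the sketch dependency-free, and the `MAX_TRAFFIC_VALUE` of 10000 is assumed from the SDK's convention of bucketing in basis points.

```python
import math
import zlib

MAX_HASH_VALUE = 2 ** 32       # hash codes are unsigned 32-bit
MAX_TRAFFIC_VALUE = 10000      # assumed bucket space (basis points)


def generate_bucket_value(bucketing_id: str) -> int:
    """Maps an id deterministically into [0, MAX_TRAFFIC_VALUE).

    crc32 stands in for the MurmurHash3 used by the real SDK.
    """
    hash_code = zlib.crc32(bucketing_id.encode('utf-8'))
    ratio = float(hash_code) / MAX_HASH_VALUE
    return int(math.floor(ratio * MAX_TRAFFIC_VALUE))


bucket = generate_bucket_value("user-42")
print(0 <= bucket < MAX_TRAFFIC_VALUE)  # True
```

Because the mapping is a pure function of the id, the same user always lands in the same bucket, which is what makes sticky experiment assignment possible.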
def export(self, name=None):
    with ops.name_scope(name, '%s_lookup_table_export_values' % self.name,
                        [self.resource_handle]):
        with ops.colocate_with(self.resource_handle):
            exported_keys, exported_values = (
                gen_lookup_ops.lookup_table_export_v2(
                    self.resource_handle, self._key_dtype, self._value_dtype))
    return (exported_keys, exported_values)
Returns tensors of all keys and values in the table. Args: name: A name for the operation (optional). Returns: A pair of tensors with the first tensor containing all keys and the second tensors containing all values in the table.
github-repos
def read_elastic_tensor(self):
    header_pattern = ('TOTAL ELASTIC MODULI \\(kBar\\)\\s+Direction\\s+'
                      '([X-Z][X-Z]\\s+)+\\-+')
    row_pattern = '[X-Z][X-Z]\\s+' + '\\s+'.join(['(\\-*[\\.\\d]+)'] * 6)
    footer_pattern = '\\-+'
    et_table = self.read_table_pattern(header_pattern, row_pattern,
                                       footer_pattern, postprocess=float)
    self.data['elastic_tensor'] = et_table
Parse the elastic tensor data. Returns: 6x6 array corresponding to the elastic tensor from the OUTCAR.
codesearchnet
def post(self, url=None, post_data={}, parse_data=False, key=None, parameters=None, listener=None): return self._fetch('POST', url, post_data=post_data, parse_data=parse_data, key=key, parameters=parameters, listener=listener, full_return=True)
Issue a POST request. Kwargs: url (str): Destination URL post_data (dict): Dictionary of parameter and values parse_data (bool): If true, parse response data key (string): If parse_data==True, look for this key when parsing data parameters (dict): Additional GET parameters to append to the URL listener (func): callback called when uploading a file Returns: dict. Response (a dict with keys: success, data, info, body) Raises: AuthenticationError, ConnectionError, urllib2.HTTPError, ValueError, Exception
codesearchnet
def convert_to_experiment_list(experiments):
    exp_list = experiments
    if experiments is None:
        exp_list = []
    elif isinstance(experiments, Experiment):
        exp_list = [experiments]
    elif type(experiments) is dict:
        exp_list = [
            Experiment.from_json(name, spec)
            for name, spec in experiments.items()
        ]
    if (type(exp_list) is list and
            all(isinstance(exp, Experiment) for exp in exp_list)):
        if len(exp_list) > 1:
            logger.warning("All experiments will be "
                           "using the same SearchAlgorithm.")
    else:
        raise TuneError("Invalid argument: {}".format(experiments))
    return exp_list
Produces a list of Experiment objects. Converts input from dict, single experiment, or list of experiments to list of experiments. If input is None, will return an empty list. Arguments: experiments (Experiment | list | dict): Experiments to run. Returns: List of experiments.
juraj-google-style
def serialize_ndarray_npy(o):
    with io.BytesIO() as f:
        np.save(f, o)
        f.seek(0)
        serialized = json.dumps(f.read().decode('latin-1'))
    return dict(_type='np.ndarray', npy=serialized)
Serializes a :obj:`numpy.ndarray` using numpy's built-in :obj:`save` function. This produces totally unreadable (and very un-JSON-like) results (in "npy" format), but it's basically guaranteed to work in 100% of cases. Args: o (:obj:`numpy.ndarray`): :obj:`ndarray` to be serialized. Returns: A dictionary that can be passed to :obj:`json.dumps`.
juraj-google-style
def get_point_group_symbol(self):
    rotations = self._space_group_data['rotations']
    if len(rotations) == 0:
        return '1'
    return spglib.get_pointgroup(rotations)[0].strip()
Get the point group associated with the structure. Returns: (Pointgroup): Point group for structure.
codesearchnet
def _destructively_move(self, dest_doc):
    if dest_doc is self:
        raise RuntimeError('Attempted to overwrite a document with itself')
    dest_doc.clear()
    roots = []
    self._push_all_models_freeze()
    try:
        while self.roots:
            r = next(iter(self.roots))
            self.remove_root(r)
            roots.append(r)
    finally:
        self._pop_all_models_freeze()
    for r in roots:
        if r.document is not None:
            raise RuntimeError("Somehow we didn't detach %r" % r)
    if len(self._all_models) != 0:
        raise RuntimeError('_all_models still had stuff in it: %r'
                           % self._all_models)
    for r in roots:
        dest_doc.add_root(r)
    dest_doc.title = self.title
Move all data in this doc to the dest_doc, leaving this doc empty. Args: dest_doc (Document) : The Bokeh document to populate with data from this one Returns: None
codesearchnet
def Draw(self, stoplist=None, triplist=None, height=520):
    output = str()
    if not triplist:
        triplist = []
    if not stoplist:
        stoplist = []
    if not self._cache or triplist or stoplist:
        self._gheight = height
        self._tlist = triplist
        self._slist = stoplist
        self._decorators = []
        self._stations = self._BuildStations(stoplist)
        self._cache = '%s %s %s %s' % (self._DrawBox(), self._DrawHours(),
                                       self._DrawStations(),
                                       self._DrawTrips(triplist))
    output = '%s %s %s %s' % (self._DrawHeader(), self._cache,
                              self._DrawDecorators(), self._DrawFooter())
    return output
Main interface for drawing the marey graph. If called without arguments, the data generated in the previous call will be used. New decorators can be added between calls. Args: # Class Stop is defined in transitfeed.py stoplist: [Stop, Stop, ...] # Class Trip is defined in transitfeed.py triplist: [Trip, Trip, ...] Returns: # A string that contains an svg/xml web-page with a marey graph. " <svg width="1440" height="520" version="1.1" ... "
codesearchnet
def to_representation(self, value): if (not value): return None image = get_thumbnail(value, self.geometry_string, **self.options) try: request = self.context.get('request', None) return request.build_absolute_uri(image.url) except: try: return super(HyperlinkedSorlImageField, self).to_representation(image) except AttributeError: return super(HyperlinkedSorlImageField, self).to_native(image.url)
Perform the actual serialization. Args: value: the image to transform Returns: a url pointing at a scaled and cached image
codesearchnet
def objects_patch(self, bucket, key, info): url = Api._ENDPOINT + (Api._OBJECT_PATH % (bucket, Api._escape_key(key))) return google.datalab.utils.Http.request(url, method='PATCH', data=info, credentials=self._credentials)
Updates the metadata associated with an object. Args: bucket: the name of the bucket containing the object. key: the key of the object being updated. info: the metadata to update. Returns: A parsed object information dictionary. Raises: Exception if there is an error performing the operation.
juraj-google-style
def create_jlink(self, args): jlink = pylink.JLink() jlink.open(args.serial_no, args.ip_addr) if hasattr(args, 'tif') and args.tif is not None: if args.tif.lower() == 'swd': jlink.set_tif(pylink.JLinkInterfaces.SWD) else: jlink.set_tif(pylink.JLinkInterfaces.JTAG) if hasattr(args, 'device') and args.device is not None: jlink.connect(args.device) return jlink
Creates an instance of a J-Link from the given arguments. Args: self (Command): the ``Command`` instance args (Namespace): arguments to construct the ``JLink`` instance from Returns: An instance of a ``JLink``.
juraj-google-style
def nrows(self): if self.rank == 0: return None return self._ragged_shape[0]
The number of rows in this StructuredTensor (if rank>0). This means the length of the outer-most dimension of the StructuredTensor. Notice that if `self.rank > 1`, then this equals the number of rows of the first row partition. That is, `self.nrows() == self.row_partitions[0].nrows()`. Otherwise `self.nrows()` will be the first dimension of the field values. Returns: A scalar integer `Tensor` (or `None` if `self.rank == 0`).
github-repos
def _get_source_chunks(self, input_text, language=None): chunks = ChunkList() seek = 0 result = self._get_annotations(input_text, language=language) tokens = result['tokens'] language = result['language'] for (i, token) in enumerate(tokens): word = token['text']['content'] begin_offset = token['text']['beginOffset'] label = token['dependencyEdge']['label'] pos = token['partOfSpeech']['tag'] if (begin_offset > seek): chunks.append(Chunk.space()) seek = begin_offset chunk = Chunk(word, pos, label) if (chunk.label in _DEPENDENT_LABEL): chunk.dependency = (i < token['dependencyEdge']['headTokenIndex']) if chunk.is_punct(): chunk.dependency = chunk.is_open_punct() chunks.append(chunk) seek += len(word) return (chunks, language)
Returns a chunk list retrieved from Syntax Analysis results. Args: input_text (str): Text to annotate. language (:obj:`str`, optional): Language of the text. Returns: A chunk list. (:obj:`budou.chunk.ChunkList`)
codesearchnet
def prepare_csv_read(data, field_names, *args, **kwargs): if hasattr(data, 'readlines') or isinstance(data, list): pass elif isinstance(data, basestring): data = open(data) else: raise TypeError('Unable to handle data of type %r' % type(data)) return csv.DictReader(data, field_names, *args, **kwargs)
Prepare various input types for CSV parsing. Args: data (iter): Data to read field_names (tuple of str): Ordered names to assign to fields Returns: csv.DictReader: CSV reader suitable for parsing Raises: TypeError: Invalid value for data
juraj-google-style
def get_botcust2(): logger.debug('Getting new botcust2') params = {'botid': 'f6a012073e345a08', 'amp;skin': 'chat'} headers = {'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8', 'Accept-Encoding': 'gzip, deflate, sdch, br', 'Accept-Language': 'en-US,en;q=0.8', 'Connection': 'keep-alive', 'DNT': '1', 'Host': 'kakko.pandorabots.com', 'Upgrade-Insecure-Requests': '1', 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36'} logger.debug('Sending POST request') response = requests.post(url, params=params, headers=headers) logger.debug('POST response {}'.format(response)) try: result = response.headers['set-cookie'][9:25] logger.debug('Getting botcust2 successful') except IndexError: result = False logger.critical('Getting botcust2 from html failed') return result
Gets a botcust2, used to identify a speaker with Mitsuku Returns: botcust2 (str): The botcust2 identifier
codesearchnet
def hstack(gctoos, remove_all_metadata_fields=False, error_report_file=None, fields_to_remove=[], reset_ids=False): row_meta_dfs = [] col_meta_dfs = [] data_dfs = [] srcs = [] for g in gctoos: row_meta_dfs.append(g.row_metadata_df) col_meta_dfs.append(g.col_metadata_df) data_dfs.append(g.data_df) srcs.append(g.src) logger.debug('shapes of row_meta_dfs: {}'.format([x.shape for x in row_meta_dfs])) all_row_metadata_df = assemble_common_meta(row_meta_dfs, fields_to_remove, srcs, remove_all_metadata_fields, error_report_file) all_col_metadata_df = assemble_concatenated_meta(col_meta_dfs, remove_all_metadata_fields) all_data_df = assemble_data(data_dfs, 'horiz') assert (all_data_df.shape[0] == all_row_metadata_df.shape[0]), 'Number of rows in metadata does not match number of rows in data - all_data_df.shape[0]: {} all_row_metadata_df.shape[0]: {}'.format(all_data_df.shape[0], all_row_metadata_df.shape[0]) assert (all_data_df.shape[1] == all_col_metadata_df.shape[0]), 'Number of columns in data does not match number of columns metadata - all_data_df.shape[1]: {} all_col_metadata_df.shape[0]: {}'.format(all_data_df.shape[1], all_col_metadata_df.shape[0]) if reset_ids: do_reset_ids(all_col_metadata_df, all_data_df, 'horiz') logger.info('Build GCToo of all...') concated = GCToo.GCToo(row_metadata_df=all_row_metadata_df, col_metadata_df=all_col_metadata_df, data_df=all_data_df) return concated
Horizontally concatenate gctoos. Args: gctoos (list of gctoo objects) remove_all_metadata_fields (bool): ignore/strip all common metadata when combining gctoos error_report_file (string): path to write file containing error report indicating problems that occurred during hstack, mainly for inconsistencies in common metadata fields_to_remove (list of strings): fields to be removed from the common metadata because they don't agree across files reset_ids (bool): set to True if sample ids are not unique Return: concated (gctoo object)
codesearchnet
def __init__(self, file_system, path_spec): super(SQLiteBlobDirectory, self).__init__(file_system, path_spec) self._number_of_entries = None
Initializes a directory. Args: file_system (SQLiteBlobFileSystem): file system. path_spec (SQLiteBlobPathSpec): path specification.
juraj-google-style
def __call__(self, shape, dtype=None): raise NotImplementedError('Initializer subclasses must implement the `__call__()` method.')
Returns a tensor object initialized as specified by the initializer. Args: shape: Shape of the tensor. dtype: Optional dtype of the tensor.
github-repos
def build_institute(internal_id, display_name, sanger_recipients=None, coverage_cutoff=None, frequency_cutoff=None): LOG.info('Building institute %s with display name %s', internal_id, display_name) institute_obj = Institute(internal_id=internal_id, display_name=display_name, sanger_recipients=sanger_recipients, coverage_cutoff=coverage_cutoff, frequency_cutoff=frequency_cutoff) for key in list(institute_obj): if (institute_obj[key] is None): institute_obj.pop(key) return institute_obj
Build an institute object Args: internal_id(str) display_name(str) sanger_recipients(list(str)): List with email addresses Returns: institute_obj(scout.models.Institute)
codesearchnet
def default_memcache_timeout_policy(key): timeout = None if key is not None and isinstance(key, model.Key): modelclass = model.Model._kind_map.get(key.kind()) if modelclass is not None: policy = getattr(modelclass, '_memcache_timeout', None) if policy is not None: if isinstance(policy, (int, long)): timeout = policy else: timeout = policy(key) return timeout
Default memcache timeout policy. This defers to _memcache_timeout on the Model class. Args: key: Key instance. Returns: Memcache timeout to use (integer), or None.
juraj-google-style
def pmap(f, axis_name=None, devices=None): if devices is None: devices = accelerators() if not isinstance(devices, (list, tuple)): raise ValueError('Must pass a list or tuple of devices') num_devices = len(devices) if not num_devices: raise ValueError('There must be at least 1 device') has_tpu = bool(tpu_devices(devices)) pmap_fn = _get_pmap_impl(f, devices, has_tpu) def wrapper(*args): if _pmap_config.devices() is not None: raise ValueError('Found a surrounding pmap. Nested pmap is not supported yet.') flattened_input_args = nest.flatten(args) flattened_per_device_args = [[] for _ in devices] for arg in flattened_input_args: if isinstance(arg, tensor_lib.Tensor): if not arg.shape.rank or arg.shape[0] != len(devices): raise ValueError('Input tensors need to have a first dimension equal to the number of devices; got tensor of shape %s and %s devices' % (arg.shape, len(devices))) for j, device in enumerate(devices): updated_arg = array_ops.gather_v2(arg, j) if not has_tpu: with ops.device(device): updated_arg = array_ops.identity(updated_arg) flattened_per_device_args[j].append(updated_arg) elif isinstance(arg, ShardedNdArray): for device_args, tensor in zip(flattened_per_device_args, arg.tensors): device_args.append(tensor) else: for device_args in flattened_per_device_args: device_args.append(arg) all_per_device_args = [nest.pack_sequence_as(args, device_args) for device_args in flattened_per_device_args] with pmap_config(axis_name, devices): results = pmap_fn(all_per_device_args) flattened_results = [nest.flatten(result) for result in results] final_tree = [] for i in range(len(flattened_results[0])): tensors = [] for j, device in enumerate(devices): assert isinstance(flattened_results[j][i], tensor_lib.Tensor), 'currently only tensor return items are supported' tensors.append(flattened_results[j][i]) final_tree.append(ShardedNdArray(tensors)) return nest.pack_sequence_as(results[0], final_tree) return wrapper
Transforms a function into a multi-device function. The semantics are similar to JAX's pmap. Args: f: The function to be converted. axis_name: Used for nested pmap, which is not supported yet. devices: The devices over which the returned function will run. Returns: A function that runs the underlying function `f` on `devices`. Its arguments can be `ShardedNdArray`s, tensors or other Python objects, and its return values are all `ShardedNdArray`s. If an input is a tensor, the length of its first dimension must equal the number of devices, and the tensor will be splitted along its first dimension among the devices. If an input is an unknown Python object, it will be replicated among the devices.
github-repos
def redo(self): trigger_log = self._to_live_trigger_log(state=TRIGGER_LOG_STATE['NEW']) trigger_log.save(force_insert=True) self.state = TRIGGER_LOG_STATE['REQUEUED'] self.save(update_fields=['state']) return trigger_log
Re-sync the change recorded in this trigger log. Creates a ``NEW`` live trigger log from the data in this archived trigger log and sets the state of this archived instance to ``REQUEUED``. .. seealso:: :meth:`.TriggerLog.redo` Returns: The :class:`.TriggerLog` instance that was created from the data of this archived log.
codesearchnet
def latent_dirichlet_allocation(concentration, topics_words): topics = ed.Dirichlet(concentration=concentration, name='topics') word_probs = tf.matmul(topics, topics_words) bag_of_words = ed.OneHotCategorical(probs=word_probs, name='bag_of_words') return bag_of_words
Latent Dirichlet Allocation in terms of its generative process. The model posits a distribution over bags of words and is parameterized by a concentration and the topic-word probabilities. It collapses per-word topic assignments. Args: concentration: A Tensor of shape [1, num_topics], which parameterizes the Dirichlet prior over topics. topics_words: A Tensor of shape [num_topics, num_words], where each row (topic) denotes the probability of each word being in that topic. Returns: bag_of_words: A random variable capturing a sample from the model, of shape [1, num_words]. It represents one generated document as a bag of words.
codesearchnet
def apply_encoding_options(self, min_token_count=1, limit_top_tokens=None): if (not self.has_vocab): raise ValueError('You need to build the vocabulary using `build_vocab` before using `apply_encoding_options`') if (min_token_count < 1): raise ValueError('`min_token_count` should atleast be 1') token_counts = list(self._token_counts.items()) token_counts = [x for x in token_counts if (x[1] >= min_token_count)] if (limit_top_tokens is not None): token_counts.sort(key=(lambda x: x[1]), reverse=True) filtered_tokens = list(zip(*token_counts))[0] filtered_tokens = filtered_tokens[:limit_top_tokens] else: filtered_tokens = zip(*token_counts)[0] self.create_token_indices(filtered_tokens)
Applies the given settings for subsequent calls to `encode_texts` and `decode_texts`. This allows you to play with different settings without having to re-run tokenization on the entire corpus. Args: min_token_count: The minimum token count (frequency) in order to include during encoding. All tokens below this frequency will be encoded to `0` which corresponds to unknown token. (Default value = 1) limit_top_tokens: The maximum number of tokens to keep, based their frequency. Only the most common `limit_top_tokens` tokens will be kept. Set to None to keep everything. (Default value: None)
codesearchnet
def __new__(cls, x=None, y=None, ildj=None, kwargs=None): return super(_Mapping, cls).__new__(cls, x, y, ildj, kwargs)
Custom __new__ so namedtuple items have defaults. Args: x: `Tensor` or None. Input to forward; output of inverse. y: `Tensor` or None. Input to inverse; output of forward. ildj: `Tensor`. This is the (un-reduce_sum'ed) inverse log det jacobian. kwargs: Python dictionary. Extra args supplied to forward/inverse/etc functions. Returns: mapping: New instance of _Mapping.
juraj-google-style
def vae(x, z_size, name=None): with tf.variable_scope(name, default_name="vae"): mu = tf.layers.dense(x, z_size, name="mu") log_sigma = tf.layers.dense(x, z_size, name="log_sigma") shape = common_layers.shape_list(x) epsilon = tf.random_normal([shape[0], shape[1], 1, z_size]) z = mu + tf.exp(log_sigma / 2) * epsilon kl = 0.5 * tf.reduce_mean( tf.expm1(log_sigma) + tf.square(mu) - log_sigma, axis=-1) free_bits = z_size kl_loss = tf.reduce_mean(tf.maximum(kl - free_bits, 0.0)) return z, kl_loss, mu, log_sigma
Simple variational autoencoder without discretization. Args: x: Input to the discretization bottleneck. z_size: Number of bits, where discrete codes range from 1 to 2**z_size. name: Name for the bottleneck scope. Returns: Embedding function, latent, loss, mu and log_simga.
juraj-google-style
def from_json(cls, json): if (json['name'] in _KEYRANGES_CLASSES): return _KEYRANGES_CLASSES[json['name']].from_json(json) raise ValueError('Invalid json %s' % json)
Deserialize from json. Args: json: a dict of json compatible fields. Returns: a KeyRanges object. Raises: ValueError: if the json is invalid.
codesearchnet
def contrast(x, severity=1): c = [0.4, 0.3, 0.2, 0.1, 0.05][(severity - 1)] x = (np.array(x) / 255.0) means = np.mean(x, axis=(0, 1), keepdims=True) x_clip = (np.clip((((x - means) * c) + means), 0, 1) * 255) return around_and_astype(x_clip)
Change contrast of images. Args: x: numpy array, uncorrupted image, assumed to have uint8 pixel in [0,255]. severity: integer, severity of corruption. Returns: numpy array, image with uint8 pixels in [0,255]. Changed contrast.
codesearchnet
def _update_album_art_to_full_uri(self, item): if getattr(item, 'album_art_uri', False): item.album_art_uri = self.build_album_art_full_uri(item.album_art_uri)
Update an item's Album Art URI to be an absolute URI. Args: item: The item to update the URI for
codesearchnet
async def _on_state_update(self, state_update): notification_type = state_update.WhichOneof('state_update') if state_update.HasField('conversation'): try: (await self._handle_conversation_delta(state_update.conversation)) except exceptions.NetworkError: logger.warning('Discarding %s for %s: Failed to fetch conversation', notification_type.replace('_', ' '), state_update.conversation.conversation_id.id) return if (notification_type == 'typing_notification'): (await self._handle_set_typing_notification(state_update.typing_notification)) elif (notification_type == 'watermark_notification'): (await self._handle_watermark_notification(state_update.watermark_notification)) elif (notification_type == 'event_notification'): (await self._on_event(state_update.event_notification.event))
Receive a StateUpdate and fan out to Conversations. Args: state_update: hangouts_pb2.StateUpdate instance
codesearchnet
def speed(self): if self._stalled: return 0 time_sum = 0 data_len_sum = 0 for (time_diff, data_len) in self._samples: time_sum += time_diff data_len_sum += data_len if time_sum: return (data_len_sum / time_sum) else: return 0
Return the current transfer speed. Returns: int: The speed in bytes per second.
codesearchnet
def _measure_list_profile_column_widths(self, profile_data): num_columns = len(profile_data.column_names()) widths = [len(column_name) for column_name in profile_data.column_names()] for row in range(profile_data.row_count()): for col in range(num_columns): widths[col] = max(widths[col], len(str(profile_data.row_values(row)[col])) + 2) return widths
Determine the maximum column widths for each data list. Args: profile_data: list of ProfileDatum objects. Returns: List of column widths in the same order as columns in data.
github-repos
def encode(self, input_ids: jnp.ndarray, attention_mask: Optional[jnp.ndarray]=None, position_ids: Optional[jnp.ndarray]=None, output_attentions: Optional[bool]=None, output_hidden_states: Optional[bool]=None, return_dict: Optional[bool]=None, train: bool=False, params: Optional[dict]=None, dropout_rng: PRNGKey=None): output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states return_dict = return_dict if return_dict is not None else self.config.return_dict if attention_mask is None: attention_mask = jnp.ones_like(input_ids) if position_ids is None: batch_size, sequence_length = input_ids.shape position_ids = jnp.broadcast_to(jnp.arange(sequence_length)[None, :], (batch_size, sequence_length)) rngs = {} if dropout_rng is not None: rngs['dropout'] = dropout_rng def _encoder_forward(module, input_ids, attention_mask, position_ids, **kwargs): encode_module = module._get_encoder_module() return encode_module(input_ids, attention_mask, position_ids, **kwargs) return self.module.apply({'params': params or self.params}, input_ids=jnp.array(input_ids, dtype='i4'), attention_mask=jnp.array(attention_mask, dtype='i4'), position_ids=jnp.array(position_ids, dtype='i4'), output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, deterministic=not train, rngs=rngs, method=_encoder_forward)
Returns: Example: ```python >>> from transformers import AutoTokenizer, FlaxBlenderbotSmallForConditionalGeneration >>> model = FlaxBlenderbotSmallForConditionalGeneration.from_pretrained("facebook/blenderbot_small-90M") >>> tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot_small-90M") >>> text = "My friends are cool but they eat too many carbs." >>> inputs = tokenizer(text, max_length=1024, return_tensors="np") >>> encoder_outputs = model.encode(**inputs) ```
github-repos
def plot(self, data, height=1000, render_large_data=False): import IPython if not isinstance(data, pd.DataFrame): raise ValueError('Expect a DataFrame.') if (len(data) > 10000 and not render_large_data): raise ValueError('Facets dive may not work well with more than 10000 rows. ' + 'Reduce data or set "render_large_data" to True.') jsonstr = data.to_json(orient='records') html_id = 'f' + datalab.utils.commands.Html.next_id() HTML_TEMPLATE = html = HTML_TEMPLATE.format(html_id=html_id, jsonstr=jsonstr, height=height) return IPython.core.display.HTML(html)
Plots a detail view of data. Args: data: a Pandas dataframe. height: the height of the output.
juraj-google-style
def brightness(im): im_hsv = cv2.cvtColor(im, cv2.COLOR_BGR2HSV) h, s, v = cv2.split(im_hsv) height, weight = v.shape[:2] total_bright = 0 for i in v: total_bright = total_bright+sum(i) return float(total_bright)/(height*weight)
Return the brightness of an image Args: im(numpy): image Returns: float, average brightness of an image
juraj-google-style
def format(sql, args=None): resolved_vars = {} code = [] SqlStatement._find_recursive_dependencies(sql, args, code=code, resolved_vars=resolved_vars) parts = [] for (escape, placeholder, _, literal) in SqlStatement._get_tokens(sql): if escape: parts.append('$') elif placeholder: variable = placeholder[1:] try: value = resolved_vars[variable] except KeyError as e: raise Exception('Invalid sql. Unable to substitute $%s.' % e.args[0]) if isinstance(value, types.ModuleType): value = _utils.get_default_query_from_module(value) if isinstance(value, SqlStatement): sql = value.format(value._sql, resolved_vars) value = '(%s)' % sql elif '_repr_sql_' in dir(value): value = value._repr_sql_() elif isinstance(value, basestring): value = SqlStatement._escape_string(value) elif isinstance(value, list) or isinstance(value, tuple): if isinstance(value, tuple): value = list(value) expansion = '(' for v in value: if len(expansion) > 1: expansion += ', ' if isinstance(v, basestring): expansion += SqlStatement._escape_string(v) else: expansion += str(v) expansion += ')' value = expansion else: value = str(value) parts.append(value) elif literal: parts.append(literal) expanded = ''.join(parts) return expanded
Resolve variable references in a query within an environment. This computes and resolves the transitive dependencies in the query and raises an exception if that fails due to either undefined or circular references. Args: sql: query to format. args: a dictionary of values to use in variable expansion. Returns: The resolved SQL text with variables expanded. Raises: Exception on failure.
juraj-google-style
def _FormatHumanReadableSize(self, size): magnitude_1000 = 0 size_1000 = float(size) while size_1000 >= 1000: size_1000 /= 1000 magnitude_1000 += 1 magnitude_1024 = 0 size_1024 = float(size) while size_1024 >= 1024: size_1024 /= 1024 magnitude_1024 += 1 size_string_1000 = None if 0 < magnitude_1000 <= 7: size_string_1000 = '{0:.1f}{1:s}'.format( size_1000, self._UNITS_1000[magnitude_1000]) size_string_1024 = None if 0 < magnitude_1024 <= 7: size_string_1024 = '{0:.1f}{1:s}'.format( size_1024, self._UNITS_1024[magnitude_1024]) if not size_string_1000 or not size_string_1024: return '{0:d} B'.format(size) return '{0:s} / {1:s} ({2:d} B)'.format( size_string_1024, size_string_1000, size)
Represents a number of bytes as a human readable string. Args: size (int): size in bytes. Returns: str: human readable string of the size.
juraj-google-style
def chmod(target): assert isinstance(target, str) assert os.path.exists(target) file_mode = stat.S_IRUSR | stat.S_IWUSR folder_mode = stat.S_IRUSR | stat.S_IWUSR | stat.S_IXUSR remove_immutable_attribute(target) if os.path.isfile(target): os.chmod(target, file_mode) elif os.path.isdir(target): os.chmod(target, folder_mode) for root, dirs, files in os.walk(target): for cur_dir in dirs: os.chmod(os.path.join(root, cur_dir), folder_mode) for cur_file in files: os.chmod(os.path.join(root, cur_file), file_mode) else: raise ValueError("Unsupported file type: {}".format(target))
Recursively set the chmod for files to 0600 and 0700 for folders. It's ok unless we need something more specific. Args: target (str): Root file or folder
juraj-google-style
def get_crate(version, crate_root=None): if (not crate_root): crate_root = _crates_cache() _remove_old_crates(crate_root) if _is_project_repo(version): return _extract_tarball(_build_tarball(version)) m = BRANCH_VERSION_RE.match(version) if m: return _build_from_release_branch(m.group(0), crate_root) uri = _lookup_uri(version) crate_dir = _download_and_extract(uri, crate_root) return crate_dir
Retrieve a Crate tarball, extract it and return the path. Args: version: The Crate version to get. Can be specified in different ways: - A concrete version like '0.55.0' - A version including a `x` as wildcards. Like: '1.1.x' or '1.x.x'. This will use the latest version that matches. - Release branch, like `3.1` - An alias: 'latest-stable' or 'latest-testing' - A URI pointing to a crate tarball crate_root: Where to extract the tarball to. If this isn't specified ``$XDG_CACHE_HOME/.cache/cr8/crates`` will be used.
codesearchnet
def is_scalar_batch(self, name='is_scalar_batch'): with self._name_scope(name): return ops.convert_to_tensor(self._is_scalar_helper(self.batch_shape, self.batch_shape_tensor), name='is_scalar_batch')
Indicates that `batch_shape == []`. Args: name: Python `str` prepended to names of ops created by this function. Returns: is_scalar_batch: `bool` scalar `Tensor`.
github-repos
def download_extract_tar(tar_url, folder, tar_filename=''): try: makedirs(folder) except OSError: if not isdir(folder): raise data_file = tar_filename if not data_file: fd, data_file = mkstemp('.tar.gz') download(tar_url, os.fdopen(fd, 'wb')) else: download(tar_url, data_file) with tarfile.open(data_file) as tar: tar.extractall(path=folder)
Download and extract the tar at the url to the given folder Args: tar_url (str): URL of tar file to download folder (str): Location of parent directory to extract to. Doesn't have to exist tar_filename (str): Location to download tar. Default is to a temp file
juraj-google-style
def btemp_threshold(img, min_in, max_in, threshold, threshold_out=None, **kwargs): threshold_out = (threshold_out if (threshold_out is not None) else (176 / 255.0)) low_factor = ((threshold_out - 1.0) / (min_in - threshold)) low_offset = (1.0 + (low_factor * min_in)) high_factor = (threshold_out / (max_in - threshold)) high_offset = (high_factor * max_in) def _bt_threshold(band_data): return da.where((band_data >= threshold), (high_offset - (high_factor * band_data)), (low_offset - (low_factor * band_data))) return apply_enhancement(img.data, _bt_threshold, pass_dask=True)
Scale data linearly in two separate regions. This enhancement scales the input data linearly by splitting the data into two regions; min_in to threshold and threshold to max_in. These regions are mapped to 1 to threshold_out and threshold_out to 0 respectively, resulting in the data being "flipped" around the threshold. A default threshold_out is set to `176.0 / 255.0` to match the behavior of the US National Weather Service's forecasting tool called AWIPS. Args: img (XRImage): Image object to be scaled min_in (float): Minimum input value to scale max_in (float): Maximum input value to scale threshold (float): Input value where to split data in to two regions threshold_out (float): Output value to map the input `threshold` to. Optional, defaults to 176.0 / 255.0.
codesearchnet
def _create_hparam_extractor(hparam_name): def extractor_fn(session_group): if (hparam_name in session_group.hparams): return _value_to_python(session_group.hparams[hparam_name]) return None return extractor_fn
Returns an extractor function that extracts an hparam from a session group. Args: hparam_name: str. Identifies the hparam to extract from the session group. Returns: A function that takes a tensorboard.hparams.SessionGroup protobuffer and returns the value, as a native Python object, of the hparam identified by 'hparam_name'.
codesearchnet
def save_page(self, path=None): path = _prepare_path(path, "html") with open(path, "wb") as f: f.write(encode_string(self.body)) return path
Save a snapshot of the page. If invoked without arguments, it will save a file to :data:`capybara.save_path` and the file will be given a randomly generated filename. If invoked with a relative path, the path will be relative to :data:`capybara.save_path`. Args: path (str, optional): The path to where it should be saved. Returns: str: The path to which the file was saved.
juraj-google-style
def nodeid(self, iv, quantifier=False): return next(iter(self.nodeids(ivs=[iv], quantifier=quantifier)), None)
Return the nodeid of the predication selected by *iv*. Args: iv: the intrinsic variable of the predication to select quantifier: if `True`, treat *iv* as a bound variable and find its quantifier; otherwise the non-quantifier will be returned
juraj-google-style
def execute_command(self, command): self.info_log("executing command: %s" % command) try: ssh = paramiko.SSHClient() ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy()) k = paramiko.RSAKey.from_private_key_file( self.browser_config.get('ssh_key_path') ) ssh.connect( self.private_ip, username=self.browser_config.get('username'), pkey=k ) sleep_time = 0.1 stdout = [] stderr = [] ssh_transport = ssh.get_transport() channel = ssh_transport.open_session() channel.setblocking(0) channel.exec_command(command) while True: while channel.recv_ready(): stdout.append(channel.recv(1000)) while channel.recv_stderr_ready(): stderr.append(channel.recv_stderr(1000)) if channel.exit_status_ready(): break sleep(sleep_time) ssh_transport.close() ssh.close() return b''.join(stdout), b''.join(stderr) except Exception as e: msg = "Execute_command exception: %s" % str(e) self.error_log(msg) raise Exception(msg)
Execute a command on the node Args: command (str)
juraj-google-style
def FromTrimmedData(data, index): header = Header() ms = StreamManager.GetStream(data) reader = BinaryReader(ms) header.DeserializeUnsigned(reader) reader.ReadByte() witness = Witness() witness.Deserialize(reader) header.Script = witness StreamManager.ReleaseStream(ms) return header
Deserialize into a Header object from the provided data. Args: data (bytes): index: UNUSED Returns: Header:
juraj-google-style
def get_by_uri(self, uri): self._helper.validate_resource_uri(uri) data = self._helper.do_get(uri) if data: new_resource = self.new(self._connection, data) else: new_resource = None return new_resource
Retrieves a resource by its URI Args: uri: URI of the resource Returns: Resource object
codesearchnet
def xmoe_tr_dense_2k(): hparams = mtf_transformer2.mtf_bitransformer_base() hparams.encoder_layers = (['self_att', 'drd'] * 4) hparams.decoder_layers = (['self_att', 'enc_att', 'drd'] * 4) hparams.batch_size = 64 hparams.shared_embedding_and_softmax_weights = True hparams.mesh_shape = 'batch:8' return hparams
Series of architectural experiments on Translation. # run on 8-core setup 119M params, einsum=0.95e13 Returns: a hparams
codesearchnet
def UploadUsers(self, hash_algorithm, hash_key, accounts): return self.rpc_helper.UploadAccount(hash_algorithm, base64.urlsafe_b64encode(hash_key), [GitkitUser.ToRequest(i) for i in accounts])
Uploads multiple users to Gitkit server. Args: hash_algorithm: string, the hash algorithm. hash_key: array, raw key of the hash algorithm. accounts: list of GitkitUser. Returns: A dict of failed accounts. The key is the index of the 'accounts' list, starting from 0.
codesearchnet
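The raw hash key above is passed to the server URL-safe base64-encoded. The encoding step in isolation (helper name is illustrative):

```python
import base64


def encode_hash_key(raw_key: bytes) -> bytes:
    """URL-safe base64 encoding of a raw hash key, as used before upload."""
    return base64.urlsafe_b64encode(raw_key)
```

`urlsafe_b64encode` swaps `+`/`/` for `-`/`_`, so the result can travel in URLs and form fields unescaped.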
def create(self, callback_url): resource = self.resource.create({'subscribed_to': 'address', 'callback_url': callback_url}) subscription = self.wrap(resource) self.add(subscription) return subscription
Register a new Subscription on this collection's parent object. Args: callback_url (str): URI of an active endpoint which can receive notifications. Returns: A round.Subscription object if successful.
juraj-google-style
def _verify_pipeline_uuid(self, pipeline_uuid): try: uuid.UUID(pipeline_uuid) except ValueError as ve: raise ValueError(f"Incorrect pipeline uuid: '{pipeline_uuid}'") from ve
Verify the received pipeline_uuid format. Args: pipeline_uuid: uuid of the pipeline. Raises: ValueError: If the pipeline uuid is not a valid UUID.
github-repos
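The validation above hinges on `uuid.UUID` raising `ValueError` for malformed input. The same check as a boolean predicate (helper name is illustrative):

```python
import uuid


def is_valid_uuid(candidate: str) -> bool:
    """Return True when *candidate* parses as a UUID, False otherwise."""
    try:
        uuid.UUID(candidate)
    except ValueError:
        return False
    return True
```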
def _preprocess_movie_lens(ratings_df): ratings_df['data'] = 1.0 num_timestamps = ratings_df[['userId', 'timestamp']].groupby('userId').nunique() last_user_timestamp = ratings_df[['userId', 'timestamp']].groupby('userId').max() ratings_df['numberOfTimestamps'] = ratings_df['userId'].apply((lambda x: num_timestamps['timestamp'][x])) ratings_df['lastTimestamp'] = ratings_df['userId'].apply((lambda x: last_user_timestamp['timestamp'][x])) ratings_df = ratings_df[(ratings_df['numberOfTimestamps'] > 2)] ratings_df = _create_row_col_indices(ratings_df) train_ratings_df = ratings_df[(ratings_df['timestamp'] < ratings_df['lastTimestamp'])] test_ratings_df = ratings_df[(ratings_df['timestamp'] == ratings_df['lastTimestamp'])] return (ratings_df, train_ratings_df, test_ratings_df)
Separate the ratings dataframe into train and test sets. Filters out users with two or fewer distinct timestamps. Creates train set and test set. The test set contains each remaining user's interactions at their last timestamp. Args: ratings_df: pandas dataframe with columns 'userId', 'movieId', 'rating', 'timestamp'. Returns: tuple of dataframes (filtered_ratings, train_ratings, test_ratings).
codesearchnet
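The split logic above (drop users with too few distinct timestamps, hold out each remaining user's last-timestamp interactions) can be sketched pandas-free with plain tuples; names here are illustrative:

```python
def split_last_interaction(events):
    """Split (user, timestamp) events: each user's last timestamp goes to test.

    Users with two or fewer distinct timestamps are dropped, mirroring the
    filter in the dataframe version.
    """
    by_user = {}
    for user, ts in events:
        by_user.setdefault(user, []).append(ts)
    train, test = [], []
    for user, stamps in by_user.items():
        if len(set(stamps)) <= 2:
            continue  # too few distinct timestamps: drop this user
        last = max(stamps)
        for ts in stamps:
            (test if ts == last else train).append((user, ts))
    return train, test
```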
class BlipProcessor(ProcessorMixin): attributes = ['image_processor', 'tokenizer'] image_processor_class = ('BlipImageProcessor', 'BlipImageProcessorFast') tokenizer_class = ('BertTokenizer', 'BertTokenizerFast') def __init__(self, image_processor, tokenizer, **kwargs): tokenizer.return_token_type_ids = False super().__init__(image_processor, tokenizer) self.current_processor = self.image_processor def __call__(self, images: ImageInput=None, text: Optional[Union[str, List[str], TextInput, PreTokenizedInput]]=None, audio=None, videos=None, **kwargs: Unpack[BlipProcessorKwargs]) -> BatchEncoding: if images is None and text is None: raise ValueError('You have to specify either images or text.') text_encoding = None output_kwargs = self._merge_kwargs(BlipProcessorKwargs, tokenizer_init_kwargs=self.tokenizer.init_kwargs, **kwargs) if text is not None: text_encoding = self.tokenizer(text, **output_kwargs['text_kwargs']) if images is not None: encoding_image_processor = self.image_processor(images, **output_kwargs['images_kwargs']) if text_encoding is not None: encoding_image_processor.update(text_encoding) return encoding_image_processor return text_encoding def batch_decode(self, *args, **kwargs): return self.tokenizer.batch_decode(*args, **kwargs) def decode(self, *args, **kwargs): return self.tokenizer.decode(*args, **kwargs) @property def model_input_names(self): tokenizer_input_names = self.tokenizer.model_input_names image_processor_input_names = self.image_processor.model_input_names return list(dict.fromkeys(tokenizer_input_names + image_processor_input_names))
Constructs a BLIP processor which wraps a BERT tokenizer and BLIP image processor into a single processor. [`BlipProcessor`] offers all the functionalities of [`BlipImageProcessor`] and [`BertTokenizerFast`]. See the docstring of [`~BlipProcessor.__call__`] and [`~BlipProcessor.decode`] for more information. Args: image_processor (`BlipImageProcessor`): An instance of [`BlipImageProcessor`]. The image processor is a required input. tokenizer (`BertTokenizerFast`): An instance of [`BertTokenizerFast`]. The tokenizer is a required input.
github-repos
def evalAsync(self, amplstatements, callback, **kwargs): if (self._langext is not None): amplstatements = self._langext.translate(amplstatements, **kwargs) def async_call(): self._lock.acquire() try: self._impl.eval(amplstatements) self._errorhandler_wrapper.check() except Exception: self._lock.release() raise else: self._lock.release() callback.run() Thread(target=async_call).start()
Interpret the given AMPL statement asynchronously. Args: amplstatements: A collection of AMPL statements and declarations to be passed to the interpreter. callback: Callback to be executed when the statement has been interpreted. Raises: RuntimeError: if the input is not a complete AMPL statement (e.g. if it does not end with semicolon) or if the underlying interpreter is not running.
codesearchnet
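The pattern above (hold a lock while the work runs on a background thread, invoke the callback only on success) can be reduced to a small stdlib sketch; `run_async` is an illustrative name, not part of the AMPL API:

```python
import threading


def run_async(work, callback):
    """Run *work* on a background thread; invoke *callback* only on success."""
    lock = threading.Lock()

    def target():
        with lock:
            work()      # an exception here skips the callback, like evalAsync
        callback()      # runs outside the lock, after work completed

    t = threading.Thread(target=target)
    t.start()
    return t
```

Releasing the lock before the callback matters: a callback that re-enters the interpreter would otherwise deadlock on a non-reentrant lock.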
def get_nn_info(self, structure, n): site = structure[n] neighs_dists = structure.get_neighbors(site, self.cutoff) siw = [] if self.get_all_sites: for s, dist in neighs_dists: w = dist siw.append({'site': s, 'image': self._get_image(structure, s), 'weight': w, 'site_index': self._get_original_site(structure, s)}) else: min_dist = min(dist for neigh, dist in neighs_dists) for s, dist in neighs_dists: if dist < (1.0 + self.tol) * min_dist: w = min_dist / dist siw.append({'site': s, 'image': self._get_image(structure, s), 'weight': w, 'site_index': self._get_original_site(structure, s)}) return siw
Get all near-neighbor sites as well as the associated image locations and weights of the site with index n using the closest neighbor distance-based method. Args: structure (Structure): input structure. n (integer): index of site for which to determine near neighbors. Returns: siw (list of tuples (Site, array, float)): tuples, each one of which represents a neighbor site, its image location, and its weight.
codesearchnet
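The distance-based weighting above (nearest neighbor gets weight 1.0, others `min_dist / dist`, cut off at `(1 + tol) * min_dist`) can be isolated as a small pure-Python helper; the function name is illustrative:

```python
def neighbor_weights(distances, tol=0.1):
    """Return (distance, weight) pairs for distances within (1 + tol) * min_dist."""
    min_dist = min(distances)
    cutoff = (1.0 + tol) * min_dist
    # Closest neighbor gets weight 1.0; farther ones decay as min_dist / dist.
    return [(d, min_dist / d) for d in distances if d < cutoff]
```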
def _sparse_tensor(self, data, batch_size=-1): indices = [] values = [] max_col_count = 0 for batch_ix, batch in enumerate(data): for column_ix, column in enumerate(batch): indices.append([batch_ix, column_ix]) values.append(column) max_col_count = max(max_col_count, column_ix + 1) shape = [batch_size if batch_size != -1 else len(data), max_col_count] value_type = dtypes.string if not values or isinstance(values[0], str) else dtypes.int64 return sparse_tensor.SparseTensor(constant_op.constant(indices, dtypes.int64, [len(indices), 2]), constant_op.constant(values, value_type, [len(indices)]), constant_op.constant(shape, dtypes.int64))
Generates a SparseTensor. Args: data: Should be a list of list of strings or int64. Each item of the outer list represents a batch. Each item of the batch is a feature of a specific feature column. batch_size: optional batch size, especially for cases when data has no entry for some batches. Returns: A SparseTensor.
github-repos
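The index construction above is the standard COO (coordinate) encoding of a ragged batch: one `[row, col]` pair per value, plus a dense shape wide enough for the longest row. The TensorFlow-free core of it, as a sketch with an illustrative name:

```python
def to_coo(batches):
    """Flatten a ragged list of rows into COO indices/values plus a dense shape."""
    indices, values, max_cols = [], [], 0
    for row_ix, row in enumerate(batches):
        for col_ix, value in enumerate(row):
            indices.append([row_ix, col_ix])
            values.append(value)
            max_cols = max(max_cols, col_ix + 1)
    # Dense shape: number of rows by the widest row seen.
    return indices, values, [len(batches), max_cols]
```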
def download_artifact_bundle(self, id_or_uri, file_path): uri = self.DOWNLOAD_PATH + '/' + extract_id_from_uri(id_or_uri) return self._client.download(uri, file_path)
Download the Artifact Bundle. Args: id_or_uri: ID or URI of the Artifact Bundle. file_path(str): Destination file path. Returns: bool: Successfully downloaded.
juraj-google-style
def post(cls, payload): if (not isinstance(payload, dict)): raise ValueError("The 'payload' parameter must be provided a dictionary object.") payload = cls.set_id_in_fkeys(payload) payload = cls.check_boolean_fields(payload) payload = cls.add_model_name_to_payload(payload) payload = cls.prepost_hooks(payload) cls.debug_logger.debug('POSTING payload {}'.format(json.dumps(payload, indent=4))) res = requests.post(url=cls.URL, json=payload, headers=HEADERS, verify=False) cls.write_response_html_to_file(res, 'bob.html') if (not res.ok): cls.log_error(res.text) res_json = res.json() if ('exception' in res_json): exc_type = res_json['exception'] if (exc_type == 'ActiveRecord::RecordNotUnique'): raise RecordNotUnique() res.raise_for_status() res = res.json() cls.log_post(res) cls.debug_logger.debug('Success') return res
Posts the data to the specified record. Args: payload: `dict`. This will be JSON-formatted prior to sending the request. Returns: `dict`. The JSON formatted response. Raises: `Requests.exceptions.HTTPError`: The status code is not ok. `RecordNotUnique`: The Rails server returned the exception ActiveRecord::RecordNotUnique.
codesearchnet
def set_token(self, token): self.token = token self.set_header('Authorization', 'Bearer {}'.format(token))
Set the token for the v20 context Args: token: The token used to access the v20 REST api
codesearchnet
def save_r_df(self, state_key, r_value, action_key=None): if action_key is not None: add_r_df = pd.DataFrame([(state_key, action_key, r_value)], columns=["state_key", "action_key", "r_value"]) else: add_r_df = pd.DataFrame([(state_key, r_value)], columns=["state_key", "r_value"]) if self.r_df is not None: self.r_df = pd.concat([add_r_df, self.r_df]) if action_key is not None: self.r_df = self.r_df.drop_duplicates(["state_key", "action_key"]) else: self.r_df = self.r_df.drop_duplicates(["state_key"]) else: self.r_df = add_r_df
Insert or update R-Value in `self.r_df`. Args: state_key: The key of state. r_value: R-Value (Reward). action_key: The key of action, if it is necessary for the parameter of the value function. Exceptions: TypeError: If the type of `r_value` is not float.
juraj-google-style
def read(self, n): d = b'' while n: try: block = self._process.stdout.read(n) except ValueError: block = None if not block: self._process.poll() raise EOFError('Process ended') d += block n -= len(block) return d
Read *n* bytes from the subprocess' output channel. Args: n(int): The number of bytes to read. Returns: bytes: *n* bytes of output. Raises: EOFError: If the process exited.
juraj-google-style
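The loop above implements the classic "read exactly *n* bytes" pattern: keep reading until the count is satisfied, and treat an empty read as end-of-stream. A self-contained version over any file-like object (name is illustrative):

```python
import io


def read_exact(stream, n):
    """Read exactly *n* bytes from *stream*, raising EOFError on early end."""
    data = b""
    while n:
        block = stream.read(n)
        if not block:
            # An empty read means the stream ended before n bytes arrived.
            raise EOFError("stream ended with %d bytes still expected" % n)
        data += block
        n -= len(block)
    return data
```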