code: string (lengths 20 to 4.93k)
docstring: string (lengths 33 to 1.27k)
source: string (3 classes)
def create_box_field(self, box_key, name, field_type, **kwargs):
    self._raise_unimplemented_error()
    uri = '/'.join([self.api_uri, self.boxes_suffix, box_key, self.fields_suffix])
    code, data = self._create_field(uri, name, field_type, **kwargs)
    return code, data
Creates a box field with the provided attributes. Args: box_key specifying the box to add the field to name required name string field_type required type string [TEXT_INPUT, DATE or PERSON] kwargs {} return (status code, field dict)
juraj-google-style
def InsertNodesAfter(new_nodes, target):
    for node in reversed(new_nodes):
        _InsertNodeAt(node, target, after=True)
Insert new_nodes after the given target location in the tree. Arguments: new_nodes: a sequence of new nodes to insert (the nodes should not be in the tree). target: the target node after which the new nodes will be inserted. Raises: RuntimeError: if the tree is corrupted, or the insertion would corrupt it.
github-repos
def get(self, key, index=None):
    records = self.get_multi([key], index=index)
    try:
        return records[0][1]
    except IndexError:
        return None
Retrieves a value associated with a key from the database Args: key (str): The key to retrieve
juraj-google-style
def FillDeviceCapabilities(device, descriptor): preparsed_data = PHIDP_PREPARSED_DATA(0) ret = hid.HidD_GetPreparsedData(device, ctypes.byref(preparsed_data)) if (not ret): raise ctypes.WinError() try: caps = HidCapabilities() ret = hid.HidP_GetCaps(preparsed_data, ctypes.byref(c...
Fill out device capabilities. Fills the HidCapabilities of the device into descriptor. Args: device: A handle to the open device descriptor: DeviceDescriptor to populate with the capabilities Returns: none Raises: WindowsError when unable to obtain capabilities.
codesearchnet
def wont_implement_method(base_type, name, reason=None, explanation=None): if reason is not None: if reason not in _WONT_IMPLEMENT_REASONS: raise AssertionError(f'reason must be one of {list(_WONT_IMPLEMENT_REASONS.keys())}, got {reason!r}') reason_data = _WONT_IMPLEMENT_REASONS[reason] ...
Generate a stub method that raises WontImplementError. Note either reason or explanation must be specified. If both are specified, explanation is ignored. Args: base_type: The pandas type of the method that this is trying to replicate. name: The name of the method that this is aiming to replicate. reason: If specifie...
github-repos
def path(self, path):
    url = furl(self._request.rawurl)
    url.path = path
    self._request.url = url.url
    self.add_matcher(matcher('PathMatcher', path))
Defines a URL path to match. Only call this method if the URL has no path already defined. Arguments: path (str): URL path value to match. E.g: ``/api/users``. Returns: self: current Mock instance.
juraj-google-style
def _ParseAccountsData(self, account_data): if (not account_data): return {} lines = [line for line in account_data.splitlines() if line] user_map = {} for line in lines: if (not all(((ord(c) < 128) for c in line))): self.logger.info('SSH key contains non-ascii character: %s....
Parse the SSH key data into a user map. Args: account_data: string, the metadata server SSH key attributes data. Returns: dict, a mapping of the form: {'username': ['sshkey1', 'sshkey2', ...]}.
codesearchnet
def action_scope(self, action_fluents: Sequence[tf.Tensor]) -> Dict[str, TensorFluent]:
    return dict(zip(self.rddl.domain.action_fluent_ordering, action_fluents))
Returns a partial scope with current action-fluents. Args: action_fluents (Sequence[tf.Tensor]): The action fluents. Returns: A mapping from action fluent names to :obj:`rddl2tf.fluent.TensorFluent`.
codesearchnet
def make_sample_her_transitions(replay_strategy, replay_k, reward_fun): if replay_strategy == 'future': future_p = 1 - (1. / (1 + replay_k)) else: future_p = 0 def _sample_her_transitions(episode_batch, batch_size_in_transitions): T = episode_batch['u'].shape[1] ...
Creates a sample function that can be used for HER experience replay. Args: replay_strategy (in ['future', 'none']): the HER replay strategy; if set to 'none', regular DDPG experience replay is used replay_k (int): the ratio between HER replays and regular replays (e.g. k = 4 -> 4 times as many HER replays as regular ...
juraj-google-style
def _UpdateAuthorizedKeys(self, user, ssh_keys): pw_entry = self._GetUser(user) if (not pw_entry): return uid = pw_entry.pw_uid gid = pw_entry.pw_gid home_dir = pw_entry.pw_dir ssh_dir = os.path.join(home_dir, '.ssh') authorized_keys_file = os.path.join(ssh_dir, 'authorized_keys') ...
Update the authorized keys file for a Linux user with a list of SSH keys. Args: user: string, the name of the Linux user account. ssh_keys: list, the SSH key strings associated with the user. Raises: IOError, raised when there is an exception updating a file. OSError, raised when setting permissions or writing to a r...
codesearchnet
def baseline_optimizer_arguments(self, states, internals, reward): arguments = dict(time=self.global_timestep, variables=self.baseline.get_variables(), arguments=dict(states=states, internals=internals, reward=reward, update=tf.constant(value=True)), fn_reference=self.baseline.reference, fn_loss=self.fn_baseline_lo...
Returns the baseline optimizer arguments including the time, the list of variables to optimize, and various functions which the optimizer might require to perform an update step. Args: states: Dict of state tensors. internals: List of prior internal state tensors. reward: Reward tensor. Returns: Baseline optimizer ar...
codesearchnet
def error_buckets(gold, pred, X=None):
    buckets = defaultdict(list)
    gold = arraylike_to_numpy(gold)
    pred = arraylike_to_numpy(pred)
    for i, (y, l) in enumerate(zip(pred, gold)):
        buckets[(y, l)].append(X[i] if X is not None else i)
    return buckets
Group items by error buckets Args: gold: an array-like of gold labels (ints) pred: an array-like of predictions (ints) X: an iterable of items Returns: buckets: A dict of items where buckets[i,j] is a list of items with predicted label i and true label j. If X is None, return indices instead. For a binary problem wit...
codesearchnet
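A minimal usage sketch of the record above (assuming arraylike_to_numpy, from the same library, simply converts list input to a numpy array):

gold = [1, 1, 2, 2]   # true labels
pred = [1, 2, 2, 2]   # predicted labels
buckets = error_buckets(gold, pred)
# With X=None the bucket values are indices:
# {(1, 1): [0], (2, 1): [1], (2, 2): [2, 3]}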
def __call__(self, *args, **kwargs): def replica_local_fn(*args, **kwargs): if any((isinstance(arg, keras_tensor.KerasTensor) for arg in nest.flatten((args, kwargs)))): update_op = None else: update_op = self.update_state(*args, **kwargs) update_ops = [] ...
Accumulates statistics and then computes metric result value. Args: *args: **kwargs: A mini-batch of inputs to the Metric, passed on to `update_state()`. Returns: The metric value tensor.
github-repos
def ReleaseFileSystem(self, file_system): (identifier, cache_value) = self._file_system_cache.GetCacheValueByObject(file_system) if (not identifier): raise RuntimeError('Object not cached.') if (not cache_value): raise RuntimeError('Invalid cache value.') self._file_system_cache.ReleaseO...
Releases a cached file system object. Args: file_system (FileSystem): file system object. Returns: bool: True if the file system object can be closed. Raises: PathSpecError: if the path specification is incorrect. RuntimeError: if the file system object is not cached or an inconsistency is detected in the cache.
codesearchnet
def _get_required_container_version():
    if 'dev' in beam_version.__version__:
        return names.BEAM_DEV_SDK_CONTAINER_TAG
    else:
        return _get_container_image_tag()
For internal use only; no backwards-compatibility guarantees. Returns: str: The tag of worker container images in GCR that corresponds to current version of the SDK.
github-repos
def _rename_if_any_arg_found_transformer(parent, node, full_name, name, logs, arg_names=None, arg_ok_predicate=None, remove_if_ok=False, message=None): for arg_name in arg_names: rename_node = _rename_if_arg_found_transformer(parent, node, full_name, name, logs, arg_name, arg_ok_predicate, remove_if_ok, mes...
Replaces the given call with tf.compat.v1 if any of the arg_names is found. Args: parent: Parent of node. node: ast.Call node to modify. full_name: full name of function to modify. name: name of function to modify. logs: list of logs to append to. arg_names: list of names of the argument to look for. arg_ok_predicate:...
github-repos
def __call__(cls, *args, **kwargs):
    if cls.instance is None:
        with threading.Lock():
            if cls.instance is None:
                cls.instance = super(Singleton, cls).__call__(*args, **kwargs)
    return cls.instance
Return singleton instance. Args: cls (type): the class. args (tuple/list): initializer function arguments. kwargs (dict): initializer function keyword arguments.
juraj-google-style
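A self-contained sketch of how the metaclass in the record above would be used (only __call__ is shown in the record; the instance class attribute is an assumption implied by the code):

import threading

class Singleton(type):
    instance = None  # assumed default, implied by the __call__ above

    def __call__(cls, *args, **kwargs):
        if cls.instance is None:
            with threading.Lock():
                if cls.instance is None:
                    cls.instance = super(Singleton, cls).__call__(*args, **kwargs)
        return cls.instance

class Config(metaclass=Singleton):
    pass

assert Config() is Config()  # every call returns the same instance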
def multi_post(self, urls, query_params=None, data=None, to_json=True, send_as_file=False):
    return self._multi_request(
        MultiRequest._VERB_POST, urls, query_params, data,
        to_json=to_json, send_as_file=send_as_file)
Issue multiple POST requests. Args: urls - A string URL or list of string URLs query_params - None, a dict, or a list of dicts representing the query params data - None, a dict or string, or a list of dicts and strings representing the data body. to_json - A boolean, should the responses be returned as JSON blobs send...
codesearchnet
def external_ids(self, **kwargs):
    path = self._get_series_id_season_number_episode_number_path('external_ids')
    response = self._GET(path, kwargs)
    self._set_attrs_to_values(response)
    return response
Get the external ids for a TV episode by combination of a season and episode number. Args: language: (optional) ISO 639 code. Returns: A dict representation of the JSON returned from the API.
codesearchnet
def draw(self, current_time, frame_time):
    self.set_default_viewport()
    self.timeline.draw(current_time, frame_time, self.fbo)
Draws a frame. Internally it calls the configured timeline's draw method. Args: current_time (float): The current time (preferably always from the configured timer class) frame_time (float): The duration of the previous frame in seconds
codesearchnet
def ensure_dir(path):
    dirpath = os.path.dirname(path)
    if dirpath and not os.path.exists(dirpath):
        os.makedirs(dirpath)
Ensure directory exists. Args: path(str): dir path
codesearchnet
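Usage sketch for the record above (behavior inferred from the code: the parent directory of the given path is created if missing):

ensure_dir('/tmp/reports/2024/summary.csv')  # creates /tmp/reports/2024 if needed
ensure_dir('summary.csv')                    # dirname is '', so nothing is created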
def disconnect_async(self, connection_id, callback): try: context = self.connections.get_context(connection_id) except ArgumentError: callback(connection_id, self.id, False, "Could not find connection information") return self.connections.begin_disc...
Asynchronously disconnect from a device that has previously been connected Args: connection_id (int): A unique identifier for this connection on the DeviceManager that owns this adapter. callback (callable): A function called as callback(connection_id, adapter_id, success, failure_reason) when the disconnection finish...
juraj-google-style
def _build_param_string(params):
    pairs = []
    for key, value in params.iteritems():
        if value is None:
            value = ''
        pairs.append('{0}={1}'.format(key, value))
    if len(pairs) > 0:
        return '?{0}'.format('&'.join(pairs))
    return ''
Build query params string from a dictionary. Args: params (dict): A dictionary of params Returns: string: A valid url query params string.
juraj-google-style
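Illustrative calls for the record above (note the snippet is Python 2: iteritems() would be items() on Python 3; key order follows dict iteration order):

_build_param_string({'page': 2, 'q': None})  # -> '?page=2&q='
_build_param_string({})                      # -> ''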
def delete_credit_card(self, *, customer_id, credit_card_id):
    fmt = 'customers/{}/creditCards/{}'.format(customer_id, credit_card_id)
    return self.client._delete(self.url + fmt, headers=self.get_headers())
Delete a credit card (Token) associated with a user. Args: customer_id: Identifier of the client of whom you are going to delete the token. credit_card_id: Identifier of the token to be deleted. Returns:
juraj-google-style
def find_slot(self, wanted, slots=None):
    for slot in self.find_slots(wanted, slots):
        return slot
    return None
Searches the given slots or, if not given, active hotbar slot, hotbar, inventory, open window in this order. Args: wanted: function(Slot) or Slot or itemID or (itemID, metadata) Returns: Optional[Slot]: The first slot containing the item or None if not found.
juraj-google-style
def __x_google_quota_descriptor(self, metric_costs):
    return {
        'metricCosts': {
            metric: cost for metric, cost in metric_costs.items()
        }
    } if metric_costs else None
Describes the metric costs for a call. Args: metric_costs: Dict of metric definitions to the integer cost value against that metric. Returns: A dict descriptor describing the Quota limits for the endpoint.
juraj-google-style
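The return shape of the record above, sketched on hypothetical inputs:

# __x_google_quota_descriptor({'read_requests': 1, 'write_requests': 5})
#   -> {'metricCosts': {'read_requests': 1, 'write_requests': 5}}
# __x_google_quota_descriptor({})
#   -> None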
def create_token_type_ids_from_sequences(self, token_ids_0: List[int], token_ids_1: Optional[List[int]]=None) -> List[int]: sep = [self.sep_token_id] cls = [self.cls_token_id] if token_ids_1 is None: return len(cls + token_ids_0 + sep) * [0] return len(cls + token_ids_0 + sep + sep + token_ids_1...
Create a mask from the two sequences passed to be used in a sequence-pair classification task. Args: token_ids_0 (`List[int]`): List of IDs. token_ids_1 (`List[int]`, *optional*): Optional second list of IDs for sequence pairs. Returns: `List[int]`: List of zeros.
github-repos
def run_as_function_for_tape_gradients(make_op, inputs): if gradients_util.PossibleTapeGradientTypes(inputs) == gradients_util.POSSIBLE_GRADIENT_TYPES_HIGHER_ORDER and (not (ops.get_default_graph().building_function and 'cflow_gradient_wrapper' in ops.get_default_graph().name)): results = tracing_compilatio...
Fix higher-order tape gradients by wrapping `make_op` in a function. Args: make_op: A function that takes a list of inputs and returns a list of output tensors. This function should set any handle data relevant to its outputs before returning. inputs: A list of tensors to check for tape gradients and pass to `make_op`...
github-repos
def array(self, size_chunk, start, bytesize): with open(self.img, 'rb') as f1: f1.seek((self.start_byte + (start * self.bytesize))) data = f1.read((size_chunk * self.bytesize)) Z = np.fromstring(data, dtype=self.dtype, count=size_chunk) if (self.grid == 'LOLA'): return (Z...
Read part of the binary file Args: size_chunk (int) : Size of the chunk to read start (int): Starting byte bytesize (int): Ending byte Returns: (np.array): array of the corresponding values
codesearchnet
def _to_enos_roles(roles): def to_host(h): extra = {} for (nic, roles) in h['nics']: for role in roles: extra[role] = nic return Host(h['host'], user='root', extra=extra) enos_roles = {} for (role, hosts) in roles.items(): enos_roles[role] = [to_h...
Transform the roles to use enoslib.host.Host hosts. Args: roles (dict): roles returned by :py:func:`enoslib.infra.provider.Provider.init`
codesearchnet
def VerifyGitkitToken(self, jwt): certs = self.rpc_helper.GetPublicCert() crypt.MAX_TOKEN_LIFETIME_SECS = (30 * 86400) parsed = None for aud in filter((lambda x: (x is not None)), [self.project_id, self.client_id]): try: parsed = crypt.verify_signed_jwt_with_certs(jwt, certs, aud) ...
Verifies a Gitkit token string. Args: jwt: string, the token to be checked Returns: GitkitUser, if the token is valid. None otherwise.
codesearchnet
def _dict_func(self, func, axis, *args, **kwargs): if ('axis' not in kwargs): kwargs['axis'] = axis if (axis == 0): index = self.columns else: index = self.index func = {idx: func[key] for key in func for idx in index.get_indexer_for([key])} def dict_apply_builder(df, func_d...
Apply function to certain indices across given axis. Args: func: The function to apply. axis: Target axis to apply the function along. Returns: A new PandasQueryCompiler.
codesearchnet
def op_or(self, *elements):
    expression = self.add_operator(Operator(','))
    for element in elements:
        expression.add_element(element)
    return expression
Update the ``Expression`` by joining the specified additional ``elements`` using an "OR" ``Operator`` Args: *elements (BaseExpression): The ``Expression`` and/or ``Constraint`` elements which the "OR" ``Operator`` applies to. Returns: Expression: ``self`` or related ``Expression``.
codesearchnet
def fbresnet152(num_classes=1000, pretrained='imagenet'): model = FBResNet(Bottleneck, [3, 8, 36, 3], num_classes=num_classes) if pretrained is not None: settings = pretrained_settings['fbresnet152'][pretrained] assert num_classes == settings['num_classes'], \ "num_classes shoul...
Constructs a ResNet-152 model. Args: pretrained (bool): If True, returns a model pre-trained on ImageNet
juraj-google-style
def _ImportPythonModule(module_name):
    try:
        module_object = list(map(__import__, [module_name]))[0]
    except ImportError:
        return None
    if '.' in module_name:
        for submodule_name in module_name.split('.')[1:]:
            module_object = getattr(module_object, submodule_name, None)
    return module_object
Imports a Python module. Args: module_name (str): name of the module. Returns: module: Python module or None if the module cannot be imported.
juraj-google-style
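Usage sketch for the record above:

json_mod = _ImportPythonModule('json')        # the json module
os_path = _ImportPythonModule('os.path')      # submodules resolved via getattr
missing = _ImportPythonModule('no_such_mod')  # None on ImportError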
def _force_edges_active_move(self, state: _STATE) -> _STATE:
    for _ in range(self._rand.randint(1, 4)):
        state = self._force_edge_active_move(state)
    return state
Move function which repeats _force_edge_active_move a few times. Args: state: Search state, not mutated. Returns: New search state which consists of incremental changes of the original state.
codesearchnet
class PatchTSMixerForTimeSeriesClassificationOutput(ModelOutput):
    loss: Optional[torch.FloatTensor] = None
    prediction_outputs: Optional[torch.FloatTensor] = None
    last_hidden_state: Optional[torch.FloatTensor] = None
    hidden_states: Optional[Tuple[torch.FloatTensor]] = None
Output type of [`PatchTSMixerForTimeSeriesClassificationOutput`]. Args: prediction_outputs (`torch.FloatTensor` of shape `(batch_size, num_labels)`): Prediction output from the classification head. last_hidden_state (`torch.FloatTensor` of shape `(batch_size, num_input_channels, num_patches, d_model)`): Backbone embed...
github-repos
def table_field_to_avro_field(table_field: Dict[str, Any], namespace: str) -> Dict[str, Any]: assert 'type' in table_field, 'Unable to get type for table field {}'.format(table_field) assert table_field['type'] in BIG_QUERY_TO_AVRO_TYPES, 'Unable to map BigQuery field type {} to avro type'.format(table_field['t...
Convert a BigQuery field to an avro field. Args: table_field (Dict[str, Any]): A BigQuery field in dict form. Returns: Dict[str, Any]: An equivalent Avro field in dict form.
github-repos
def get_application_configuration(name):
    _check()
    rc = _ec.get_application_configuration(name)
    if rc is False:
        raise ValueError('Application configuration {0} not found.'.format(name))
    return rc
Get a named application configuration. An application configuration is a named set of securely stored properties where each key and its value in the property set is a string. An application configuration object is used to store information that IBM Streams applications require, such as: * Database connection data * ...
codesearchnet
def update_parameters(parameters, grads, learning_rate=1.2): W1 = parameters["W1"] b1 = parameters["b1"] W2 = parameters["W2"] b2 = parameters["b2"] dW1 = grads["dW1"] db1 = grads["db1"] dW2 = grads["dW2"] db2 = grads["db2"] W1 -= learning_rate * dW1 b1 -= l...
Updates parameters using the gradient descent update rule given above Arguments: parameters -- python dictionary containing your parameters grads -- python dictionary containing your gradients Returns: parameters -- python dictionary containing your updated parameters
juraj-google-style
def _ParseLogLine(self, parser_mediator, structure): if not self._xchat_year: return time_elements_tuple = self._GetTimeElementsTuple(structure) try: date_time = dfdatetime_time_elements.TimeElements( time_elements_tuple=time_elements_tuple) date_time.is_local_time = True ...
Parses a log line. Args: parser_mediator (ParserMediator): mediates interactions between parsers and other components, such as storage and dfvfs. structure (pyparsing.ParseResults): structure of tokens derived from a line of a text file.
juraj-google-style
def _find_metric_value(session_or_group, metric_name):
    for metric_value in session_or_group.metric_values:
        if (metric_value.name.tag == metric_name.tag and
                metric_value.name.group == metric_name.group):
            return metric_value
Returns the metric_value for a given metric in a session or session group. Args: session_or_group: A Session protobuffer or SessionGroup protobuffer. metric_name: A MetricName protobuffer. The metric to search for. Returns: A MetricValue protobuffer representing the value of the given metric or None if no such metric ...
codesearchnet
def __init__(self, callback, callback_lock, options, queue_item):
    threading.Thread.__init__(self)
    self.__callback = callback
    self.__callback_lock = callback_lock
    self.__options = options
    self.__queue_item = queue_item
Constructs a crawler thread instance Args: callback (obj): The method to call when finished callback_lock (bool): The callback lock that prevents race conditions. options (:class:`nyawc.Options`): The settings/options object. queue_item (:class:`nyawc.QueueItem`): The queue item containing a request to execute.
juraj-google-style
def _make_flow(request, scopes, return_url=None): csrf_token = hashlib.sha256(os.urandom(1024)).hexdigest() request.session[_CSRF_KEY] = csrf_token state = json.dumps({ 'csrf_token': csrf_token, 'return_url': return_url, }) flow = client.OAuth2WebServerFlow( clie...
Creates a Web Server Flow Args: request: A Django request object. scopes: the request oauth2 scopes. return_url: The URL to return to after the flow is complete. Defaults to the path of the current request. Returns: An OAuth2 flow object that has been stored in the session.
juraj-google-style
def add_tile(self, tile_source, **kw):
    tile_renderer = TileRenderer(tile_source=tile_source, **kw)
    self.renderers.append(tile_renderer)
    return tile_renderer
Adds new ``TileRenderer`` into ``Plot.renderers`` Args: tile_source (TileSource) : a tile source instance which contain tileset configuration Keyword Arguments: Additional keyword arguments are passed on as-is to the tile renderer Returns: TileRenderer : TileRenderer
codesearchnet
def _prep_binary_content(self): if ((not self.data) and (not self.location) and ('Content-Location' not in self.resource.headers.keys())): raise Exception('creating/updating NonRDFSource requires content from self.binary.data, self.binary.location, or the Content-Location header') elif ('Content-Locatio...
Sets delivery method of either payload or header Favors Content-Location header if set Args: None Returns: None: sets attributes in self.binary and headers
codesearchnet
def framesToFrameRange(frames, sort=True, zfill=0, compress=False): if compress: frames = unique(set(), frames) frames = list(frames) if not frames: return '' if len(frames) == 1: return pad(frames[0], zfill) if sort: frame...
Converts an iterator of frames into a frame range string. Args: frames (collections.Iterable): sequence of frames to process sort (bool): sort the sequence before processing zfill (int): width for zero padding compress (bool): remove any duplicates before processing Returns: str:
juraj-google-style
def marquee(text='', width=78, mark='*'):
    if not text:
        return (mark * width)[:width]
    # number of marks on each side, reconstructed from the docstring examples
    nmark = (width - len(text) - 2) // len(mark) // 2
    if nmark < 0:
        nmark = 0
    marks = mark * nmark
    return '%s %s %s' % (marks, text, marks)
Return the input string centered in a 'marquee'. Args: text (str): Input string width (int): Width of final output string. mark (str): Character used to fill string. :Examples: >>> marquee('A test', width=40) '**************** A test ****************' >>> marquee('A test', width=40, mark='-') '---------------- A te...
codesearchnet
def _get_connection(self): if (not getattr(self, '_connection', None)): logger.debug('Creating new connection.\n dsn: {}'.format(self._dsn)) d = parse_url_to_dict(self._dsn) self._connection = psycopg2.connect(database=d['path'].strip('/'), user=d['username'], password=d['password'], port=...
Returns connection to the postgres database. Returns: connection to the postgres database that stores mpr data.
codesearchnet
def optimize(node):
    node = dead_code_elimination(node)
    node = constant_folding(node)
    node = assignment_propagation(node)
    return node
Perform a series of optimization passes. This function performs a series of optimizations (dead code elimination, constant folding, variable folding) on the given AST. It optimizes the code repeatedly until reaching a fixed point. The fixed point is determined roughly by checking whether the number of lines of generate...
codesearchnet
def _ReceiveItemOnActivity(self, zmq_socket): events = zmq_socket.poll( self._ZMQ_SOCKET_RECEIVE_TIMEOUT_MILLISECONDS) if events: try: received_object = self._zmq_socket.recv_pyobj() return received_object except zmq.error.Again: logger.error( '{0:s}...
Attempts to receive an item from a ZeroMQ socket. Args: zmq_socket (zmq.Socket): used to receive the item. Returns: object: item from the socket. Raises: QueueEmpty: if no item could be received within the timeout. zmq.error.ZMQError: if an error occurs in ZeroMQ
juraj-google-style
def verify_docker_image_sha(chain, link): cot = link.cot task = link.task errors = [] if isinstance(task['payload'].get('image'), dict): docker_image_task_id = task['extra']['chainOfTrust']['inputs']['docker-image'] log.debug("Verifying {} {} against docker-image {}".forma...
Verify that built docker shas match the artifact. Args: chain (ChainOfTrust): the chain we're operating on. link (LinkOfTrust): the task link we're checking. Raises: CoTError: on failure.
juraj-google-style
def pop_stack(stack, op_id):
    if __debug__:
        pushed_stack, pushed_op_id = stack.pop()
        assert pushed_op_id == op_id, 'Wanted %s, got %s' % (op_id, pushed_op_id)
    else:
        pushed_stack = stack.pop()
    return pushed_stack
Proxy of pop, where we know we're popping a stack off of a stack. We know that we don't need to differentiate through this. See pop() for more. Args: stack: The stack to pop from. op_id: A unique variable that is also passed into the matching push. Allows optimization passes to track pairs of pushes and pops. Return...
codesearchnet
def resize_positional_embeddings(positional_embeddings: torch.Tensor, spatial_shapes: torch.LongTensor, max_length: int) -> torch.Tensor: batch_size = spatial_shapes.shape[0] embed_dim = positional_embeddings.shape[-1] source_dtype = positional_embeddings.dtype resulted_positional_embeddings = torch.emp...
Resize positional embeddings to image-specific size and pad to a fixed size. Args: positional_embeddings (`torch.Tensor`): Position embeddings of shape (height, width, embed_dim) spatial_shapes (`torch.LongTensor`): Spatial shapes of shape (batch_size, 2) to resize the positional embeddings to max_length (`int`): Maxi...
github-repos
def transfer(self, payment_id, data={}, **kwargs):
    url = "{}/{}/transfers".format(self.base_url, payment_id)
    return self.post_url(url, data, **kwargs)
Create Transfer for given Payment Id Args: payment_id : Id for which payment object has to be transferred Returns: Payment dict after getting transferred
juraj-google-style
def forward(self, hidden_states: torch.FloatTensor, p_mask: Optional[torch.FloatTensor]=None) -> torch.FloatTensor: x = self.dense(hidden_states).squeeze(-1) if p_mask is not None: if p_mask.dtype == torch.float16: x = x * (1 - p_mask) - 65500 * p_mask else: x = x * (1 - ...
Args: hidden_states (`torch.FloatTensor` of shape `(batch_size, seq_len, hidden_size)`): The final hidden states of the model. p_mask (`torch.FloatTensor` of shape `(batch_size, seq_len)`, *optional*): Mask for tokens at invalid position, such as query and special symbols (PAD, SEP, CLS). 1.0 means token should be mask...
github-repos
def graph_distances(start, edges, distances): adj = {x: [] for x in range(len(distances))} for n1, n2 in edges: adj[n1].append(n2) adj[n2].append(n1) to_visit = [] new_dist = {} for n in adj[start]: heapq.heappush(to_visit, (distances[start, n], n)) while t...
Given an undirected adjacency list and a pairwise distance matrix between all nodes: calculates distances along graph from start node. Args: start (int): start node edges (list): adjacency list of tuples distances (array): 2d array of distances between nodes Returns: dict of node to distance from start
juraj-google-style
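A small worked example for the record above (hypothetical 3-node path graph; the expected result is inferred from the visible part of the code, which seeds Dijkstra's queue with the start node's neighbours):

import numpy as np

edges = [(0, 1), (1, 2)]
distances = np.array([[0.0, 1.0, 9.0],
                      [1.0, 0.0, 1.0],
                      [9.0, 1.0, 0.0]])
graph_distances(0, edges, distances)
# expected: {1: 1.0, 2: 2.0} -- node 2 is reached through node 1 along the graph,
# even though the pairwise matrix says 9.0 for the direct 0 -> 2 distance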
def extract_grid(self, longmin, longmax, latmin, latmax): (sample_min, sample_max) = map(int, map(self.sample_id, [longmin, longmax])) (line_min, line_max) = map(int, map(self.line_id, [latmax, latmin])) X = np.array(map(self.long_id, range(sample_min, sample_max, 1))) Y = np.array(map(self.lat_id, rang...
Extract part of the image ``img`` Args: longmin (float): Minimum longitude of the window longmax (float): Maximum longitude of the window latmin (float): Minimum latitude of the window latmax (float): Maximum latitude of the window Returns: A tuple of three arrays ``(X,Y,Z)`` where ``X`` contains the longitudes, ``Y`...
codesearchnet
def _install_signal_handler(self, signal_number, signal_name): old_signal_handler = None def handler(handled_signal_number, frame): signal.signal(signal_number, signal.SIG_DFL) sys.stderr.write(('TensorBoard caught %s; exiting...\n' % signal_name)) if (old_signal_handler not in (signal....
Set a signal handler to gracefully exit on the given signal. When this process receives the given signal, it will run `atexit` handlers and then exit with `0`. Args: signal_number: The numeric code for the signal to handle, like `signal.SIGTERM`. signal_name: The human-readable signal name.
codesearchnet
def print_terminal_table(headers, data_list, parse_row_fn): data_iter = iter(data_list) try: example = next(data_iter) example_row = parse_row_fn(example) data_iter = itertools.chain([example], data_iter) except StopIteration: example_row = ([''] * len(headers)) format_st...
Uses a set of headers, raw data, and a row parsing function, to print data to the terminal in a table of rows and columns. Args: headers (tuple of strings): The headers for each column of data data_list (list of dicts): Raw response data from the validator parse_row_fn (function): Parses a dict of data into a tuple of...
codesearchnet
def send_raw_tx(self, serialized_tx, id=None, endpoint=None):
    return self._call_endpoint(SEND_TX, params=[serialized_tx], id=id, endpoint=endpoint)
Submits a serialized tx to the network Args: serialized_tx: (str) a hexlified string of a transaction id: (int, optional) id to use for response tracking endpoint: (RPCEndpoint, optional) endpoint to specify to use Returns: bool: whether the tx was accepted or not
juraj-google-style
def GetStatusInformation(self): status = processing_status.TasksStatus() with self._lock: status.number_of_abandoned_tasks = len(self._tasks_abandoned) status.number_of_queued_tasks = len(self._tasks_queued) status.number_of_tasks_pending_merge = (len(self._tasks_pending_merge) + len(sel...
Retrieves status information about the tasks. Returns: TasksStatus: tasks status information.
codesearchnet
def round_f1_macro(y_true, y_predicted):
    try:
        predictions = [np.round(x) for x in y_predicted]
    except TypeError:
        predictions = y_predicted
    return f1_score(np.array(y_true), np.array(predictions), average='macro')
Calculates F1 macro measure. Args: y_true: list of true values y_predicted: list of predicted values Returns: F1 score
codesearchnet
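Usage sketch for the record above (requires numpy and sklearn.metrics.f1_score, as the snippet assumes):

round_f1_macro([0, 1, 1, 0], [0.2, 0.9, 0.6, 0.4])  # probabilities round to [0, 1, 1, 0] -> 1.0
round_f1_macro([0, 1, 1, 0], [0, 1, 0, 0])          # macro F1 over both classes, roughly 0.73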
def sheets_write(config, auth, sheet_url_or_name, sheet_tab, sheet_range, data, append=False, valueInputOption='RAW'): if config.verbose: print('SHEETS WRITE', sheet_url_or_name, sheet_tab, sheet_range) sheet_id = sheets_id(config, auth, sheet_url_or_name) range = sheets_tab_range(sheet_tab, sheet_r...
Write to sheets for specified range. Args: config - see starthinker/util/configuration.py auth - user or service sheet_url_or_name - one of: URL, document title, or id sheet_tab - name of tab to get id for sheet_range - A1 notation or blank if whole sheet data - list of lists representing rows. append - if true, data ...
github-repos
def SyncSleep(delay, name=None):
    return examples_sync_sleep(delay=delay, name=name)
Pause for `delay` seconds (which need not be an integer). This is a synchronous (blocking) version of a sleep op. Its purpose is to be contrasted with Examples>AsyncSleep. Args: delay: tf.Tensor which is a scalar of type float. name: An optional name for the op. Returns: The `delay` value.
github-repos
def remove_all(self, filter, force=False, timeout=-1):
    return self._client.delete_all(filter=filter, force=force, timeout=timeout)
Deletes the set of datacenters according to the specified parameters. A filter is required to identify the set of resources to be deleted. Args: filter: A general filter/query string to narrow the list of items that will be removed. force: If set to true, the operation completes despite any problems with network conne...
codesearchnet
def get_pixel_coordinates(self, point, ccdnum): hdulist_index = self.get_hdulist_idx(ccdnum) if isinstance(point[0], Quantity) and isinstance(point[1], Quantity): pix_point = point[0].value, point[1].value else: pix_point = point if self.reading.inverted:...
Retrieves the pixel location of a point within the current HDUList given the location in the original FITS image. This takes into account that the image may be a cutout of a larger original. Args: point: tuple(float, float) (x, y) in original. Returns: (x, y) pixel in this image. @param extno: the extno from the ori...
juraj-google-style
def get_folder_items(self, folder_id, limit=100, offset=0, fields_list=None):
    qs = {'limit': limit, 'offset': offset}
    if fields_list:
        qs['fields'] = ','.join(fields_list)
    return self.__request('GET', 'folders/%s/items' % (folder_id,), querystring=qs)
Get files and folders inside a given folder Args: folder_id (int): Where to get files and folders info. limit (int): The number of items to return. offset (int): The item at which to begin the response. fields_list (list): List of attributes to get. All attributes if None. Returns: dict. Response from Box. Raises...
codesearchnet
def loadfn(fname): if (fnmatch(fname, "*POSCAR*") or fnmatch(fname, "*CONTCAR*") or ".cif" in fname.lower()) or fnmatch(fname, "*.vasp"): return Structure.from_file(fname) elif fnmatch(fname, "*vasprun*"): from pymatgen.io.vasp import Vasprun return Vasprun(fname) el...
Convenience method to perform quick loading of data from a filename. The type of object returned depends the file type. Args: fname (string): A filename. Returns: Note that fname is matched using unix-style, i.e., fnmatch. (Structure) if *POSCAR*/*CONTCAR*/*.cif (Vasprun) *vasprun* (obj) if *json* (passthrough to mon...
juraj-google-style
def __init__(self, scope, parent, name):
    CodeControlFlow.__init__(self, scope, parent, name)
    self.declarations = None
    self.increment = None
Constructor for loops. Args: scope (CodeEntity): The program scope where this object belongs. parent (CodeEntity): This object's parent in the program tree. name (str): The name of the loop statement in the program.
juraj-google-style
def get_unique_families(hkls): def is_perm(hkl1, hkl2): h1 = np.abs(hkl1) h2 = np.abs(hkl2) return all([(i == j) for (i, j) in zip(sorted(h1), sorted(h2))]) unique = collections.defaultdict(list) for hkl1 in hkls: found = False for hkl2 in unique.keys(): ...
Returns unique families of Miller indices. Families must be permutations of each other. Args: hkls ([h, k, l]): List of Miller indices. Returns: {hkl: multiplicity}: A dict with unique hkl and multiplicity.
codesearchnet
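Illustrative call for the record above (the code is truncated, so the exact representative hkl kept per family is an assumption):

get_unique_families([(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)])
# expected: two families -- the three permutations of (1, 0, 0) collapse into one
# entry with multiplicity 3, and (1, 1, 1) forms its own entry with multiplicity 1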
def convert(self, y): if y is None: return None if isinstance(y, sparse_tensor.SparseTensor): return self._convert_sparse(y) assert isinstance(y, (tensor_lib.Tensor, ops.Operation)), y output = self._convert_helper(y) if isinstance(output, WrappedTensor): assert isinstance(y,...
Returns the converted value corresponding to y. Args: y: A Tensor or a ops.Operation object. If latter, y should not have any outputs. Returns: If y does not need to be converted, it returns y as is. Else it returns the "converted value" corresponding to y.
github-repos
def table(text): def table_bar(col_lengths): return "+-%s-+%s" % ( "-+-".join(["-" * length for length in col_lengths]), os.linesep, ) rows = [] for line in text.splitlines(): rows.append([part.strip() for part in line.split("|")]) max_cols = max(ma...
Format the text as a table. Text in format:

    first | second
    row 2 col 1 | 4

Will be formatted as::

    +-------------+--------+
    | first       | second |
    +-------------+--------+
    | row 2 col 1 | 4      |
    +-------------+--------+

Args: text (str): Text that needs to be formatted. Returns: str: Formatted string.
juraj-google-style
def ParseRow(self, parser_mediator, query, row, **unused_kwargs): query_hash = hash(query) event_data = AndroidWebViewCacheEventData() event_data.content_length = self._GetRowValue( query_hash, row, 'contentlength') event_data.query = query event_data.url = self._GetRowValue(query_hash...
Parses a row from the database. Args: parser_mediator (ParserMediator): mediates interactions between parsers and other components, such as storage and dfvfs. query (str): query that created the row. row (sqlite3.Row): row.
juraj-google-style
def alternatives(self, Class=None, set=None): for e in self.select(AlternativeLayers, None, True, ['Original', 'Suggestion']): if (Class is None): (yield e) elif (len(e) >= 1): for e2 in e: try: if isinstance(e2, Class): ...
Generator over alternatives, either all or only of a specific annotation type, and possibly restrained also by set. Arguments: * ``Class`` - The Class you want to retrieve (e.g. PosAnnotation). Or set to None to select all alternatives regardless of what type they are. * ``set`` - The set you want to retrieve (defau...
codesearchnet
def entropy(rho: Density, base: float = None) -> float:
    op = asarray(rho.asoperator())
    probs = np.linalg.eigvalsh(op)
    probs = np.maximum(probs, 0.0)
    return scipy.stats.entropy(probs, base=base)
Returns the von-Neumann entropy of a mixed quantum state. Args: rho: A density matrix base: Optional logarithm base. Default is base e, and entropy is measured in nats. For bits set base to 2. Returns: The von-Neumann entropy of rho
juraj-google-style
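A worked check for the record above (assuming Density can be constructed from a plain 2x2 density matrix, which is not shown in the record):

import numpy as np

rho = Density(np.eye(2) / 2)   # maximally mixed single-qubit state (hypothetical ctor)
entropy(rho)                   # ~0.693 nats (ln 2)
entropy(rho, base=2)           # ~1.0 bit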
def _trackable_needs_to_be_saved(obj): if hasattr(obj, '__dict__'): if '_serialize_to_tensors' in obj.__dict__ or '_gather_saveables_for_checkpoint' in obj.__dict__ or '_copy_trackable_to_cpu' in obj.__dict__: return True for t in type(obj).mro(): if t is base.Trackable: ...
Returns whether a trackable needs to be saved. Returns a bool to indicate whether obj's class has `_serialize_to_tensors`, `gather_saveables_for_checkpoint`, or `_copy_trackable_to_cpu` defined. Args: obj: A Trackable object.
github-repos
def verify_manylinux_compliance(auditwheel_log: str, compliance_tag: str) -> None: regex = 'following platform tag:\\s+"{}"'.format(compliance_tag) alt_regex = regex.replace('2014', '_2_17') if not (re.search(regex, auditwheel_log) or re.search(alt_regex, auditwheel_log)): raise RuntimeError('The wh...
Verify manylinux compliance. Args: auditwheel_log: "auditwheel show" execution results compliance_tag: manylinux compliance tag Raises: RuntimeError: if the wheel is not manylinux compliant.
github-repos
def add_asset(self, asset, asset_name, asset_type): if not self.can_update(): self._tcex.handle_error(910, [self.type]) if asset == 'PHONE': return self.tc_requests.add_victim_phone_asset(self.unique_id, asset_name) if asset == 'EMAIL': return self.t...
Adds an asset to the Victim Valid asset_type: + PHONE + EMAIL + NETWORK + SOCIAL + WEB Args: asset: asset_name: asset_type: PHONE, EMAIL, NETWORK, SOCIAL, or WEB Returns:
juraj-google-style
def compatible_with(value, logical_value): if isinstance(value, abstract.List) and (not value.is_concrete): return True elif isinstance(value, abstract.Dict) and (not value.is_concrete): return not logical_value or bool(value.get_instance_type_parameter(abstract_utils.K).bindings) elif isins...
Returns the conditions under which the value could be True or False. Args: value: An abstract value. logical_value: Either True or False. Returns: False: If the value could not evaluate to logical_value under any circumstance (e.g. value is the empty list and logical_value is True). True: If it is possible for the va...
github-repos
def confirm(statement):
    prompt = '{statement} [y/n]'.format(statement=statement)
    answer = _ask(prompt, limited_to=['yes', 'no', 'y', 'n'])
    return answer and answer.startswith('y')
Ask the user for confirmation about the specified statement. Args: statement (unicode): statement to ask the user confirmation about. Returns: bool: whether or not specified statement was confirmed.
codesearchnet
def normalize_cluster_spec(cluster_spec): if isinstance(cluster_spec, (dict, cluster_pb2.ClusterDef)): return server_lib.ClusterSpec(cluster_spec) elif not isinstance(cluster_spec, server_lib.ClusterSpec): raise ValueError("`cluster_spec' should be dict or a `tf.train.ClusterSpec` or a `tf.train...
Makes `cluster_spec` into a `ClusterSpec` object. Args: cluster_spec: a dict, ClusterDef or ClusterSpec object specifying the cluster configurations. Returns: a `ClusterSpec` object. Raises: ValueError: if `cluster_spec` is not a dict or a `ClusterSpec` or a `ClusterDef`.
github-repos
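Usage sketch for the record above (addresses are hypothetical):

spec = normalize_cluster_spec({
    'worker': ['localhost:2222', 'localhost:2223'],
    'ps': ['localhost:3333'],
})
# spec is a tf.train.ClusterSpec; passing an existing ClusterSpec returns it unchanged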
def patch_addContext(self, patch, text): if len(text) == 0: return pattern = text[patch.start2 : patch.start2 + patch.length1] padding = 0 while (text.find(pattern) != text.rfind(pattern) and (self.Match_MaxBits == 0 or len(pattern) < self.Match_MaxBits - self.Patch_Margin ...
Increase the context until it is unique, but don't let the pattern expand beyond Match_MaxBits. Args: patch: The patch to grow. text: Source text.
juraj-google-style
def _ParseFile(self, file_obj, line_parser): lines = [ l.strip() for l in utils.ReadFileBytesAsUnicode(file_obj).splitlines() ] try: for index, line in enumerate(lines): if line: line_parser(line) except (IndexError, KeyError) as e: raise parser.ParseError("Inv...
Process a file line by line. Args: file_obj: The file to parse. line_parser: The parser method used to process and store line content. Raises: parser.ParseError if the parser is unable to process the line.
juraj-google-style
def parse_json_path(self, jsonpath): if (jsonpath not in self.parsed): try: self.parsed[jsonpath] = self.parser(jsonpath) except Exception: self.log(('Invalid Json Path: ' + jsonpath), 'error') raise InvalidJsonPathError('Invalid Json Path') return self.parsed...
Parse a jsonpath Args: jsonpath: str Returns: a parsed json path
codesearchnet
def GetContainingXLAContext(ctxt):
    while ctxt:
        if ctxt.IsXLAContext():
            return ctxt
        ctxt = ctxt.outer_context
    return None
Returns the first ancestor XLAContext of `ctxt`. Returns `ctxt` if `ctxt` is a XLAContext, or None if `ctxt` is not in a while loop. Args: ctxt: ControlFlowContext Returns: `ctxt` if `ctxt` is a XLAContext, the most nested XLAContext containing `ctxt`, or None if `ctxt` is not in a while loop.
github-repos
def format(obj, options): formatters = {float_types: (lambda x: '{:.{}g}'.format(x, options.digits))} for (_types, fmtr) in formatters.items(): if isinstance(obj, _types): return fmtr(obj) try: if (six.PY2 and isinstance(obj, six.string_types)): return str(obj.encode(...
Return a string representation of the Python object Args: obj: The Python object options: Format options
codesearchnet
def _collect_process_tree(starting_pid): ret = [] stack = [starting_pid] while stack: pid = stack.pop() if platform.system() == 'Darwin': command = ['pgrep', '-P', str(pid)] else: command = ['ps', '-o', 'pid', '--ppid', str(pid), '--noheaders'] try: ...
Collects PID list of the descendant processes from the given PID. This function is only available on Unix-like systems. Args: starting_pid: The PID from which to start the recursive traversal. Returns: A list of PIDs of the descendant processes.
github-repos
class DepthAnythingReassembleStage(nn.Module): def __init__(self, config): super().__init__() self.config = config self.layers = nn.ModuleList() for channels, factor in zip(config.neck_hidden_sizes, config.reassemble_factors): self.layers.append(DepthAnythingReassembleLa...
This class reassembles the hidden states of the backbone into image-like feature representations at various resolutions. This happens in 3 stages: 1. Take the patch embeddings and reshape them to image-like feature representations. 2. Project the channel dimension of the hidden states according to `config.neck_hidden_...
github-repos
def client_credentials(self, client_id, client_secret, audience, grant_type='client_credentials'): return self.post('https:
Client credentials grant This is the OAuth 2.0 grant that server processes utilize in order to access an API. Use this endpoint to directly request an access_token by using the Application Credentials (a Client Id and a Client Secret). Args: grant_type (str): Denotes the flow you're using. For client credentials use ...
codesearchnet
def user_bounded_trie(namespace, name, metric, ptransform=None):
    labels = create_labels(ptransform=ptransform, namespace=namespace, name=name)
    return create_monitoring_info(
        USER_BOUNDED_TRIE_URN, BOUNDED_TRIE_TYPE,
        metric.to_proto().SerializeToString(), labels)
Return the string set monitoring info for the URN, metric and labels. Args: namespace: User-defined namespace of BoundedTrie. name: Name of BoundedTrie. metric: The BoundedTrieData representing the metrics. ptransform: The ptransform id used as a label.
github-repos
def _ParseFSMVariables(self, template): self.values = [] for line in template: self._line_num += 1 line = line.rstrip() if (not line): return if self.comment_regex.match(line): continue if line.startswith('Value '): try: ...
Extracts Variables from start of template file. Values are expected as a contiguous block at the head of the file. These will be line separated from the State definitions that follow. Args: template: Valid template file, with Value definitions at the top. Raises: TextFSMTemplateError: If syntax or semantic errors ar...
codesearchnet
def assert_no_title(self, title, **kwargs): query = TitleQuery(title, **kwargs) @self.synchronize(wait=query.wait) def assert_no_title(): if query.resolves_for(self): raise ExpectationNotMet(query.negative_failure_message) return True ...
Asserts that the page doesn't have the given title. Args: title (str | RegexObject): The string that the title should include. **kwargs: Arbitrary keyword arguments for :class:`TitleQuery`. Returns: True Raises: ExpectationNotMet: If the assertion hasn't succeeded during the wait time.
juraj-google-style
def get_nc_attrs(nc): meta = { 'experiment': nc.experiment_id, 'frequency': nc.frequency, 'institute': nc.institute_id, 'model': nc.model_id, 'modeling_realm': nc.modeling_realm, 'ensemble_member': 'r{}i{}p{}'.format(nc.realization, nc.initialization_method, nc....
Gets netCDF file metadata attributes. Arguments: nc (netCDF4.Dataset): an open NetCDF4 Dataset to pull attributes from. Returns: dict: Metadata as extracted from the netCDF file.
juraj-google-style
def allconcat(self, x, mesh_axis, concat_axis, stack=False): x = x.to_laid_out_tensor() coord = self.laid_out_pcoord(mesh_axis) t = x.one_slice old_shape = t.shape.as_list() num_parts = self.shape[mesh_axis].size t = tf.expand_dims(t, concat_axis) t *= tf.reshape( tf.one_hot(coo...
Grouped allconcat (like MPI allgather followed by concat). TODO(noam): inefficient - replace with a XLA allconcat when available Args: x: a LaidOutTensor mesh_axis: an integer - the mesh axis along which to group concat_axis: an integer (the Tensor axis along which to concatenate) stack: a boolean - whether to stack ...
juraj-google-style
def processPhoneList(platformNames=[], numbers=[], excludePlatformNames=[]): platforms = platform_selection.getPlatformsByName(platformNames, mode="phonefy", excludePlatformNames=excludePlatformNames) results = [] for num in numbers: for pla in platforms: entities...
Method to perform searches on a series of numbers. Args: ----- platformNames: List of names of the platforms. numbers: List of numbers to be queried. excludePlatformNames: A list of platforms not to be searched. Return: ------- A list of verified emails.
juraj-google-style
def prune_volumes(self, filters=None):
    params = {}
    if filters:
        params['filters'] = utils.convert_filters(filters)
    url = self._url('/volumes/prune')
    return self._result(self._post(url, params=params), True)
Delete unused volumes Args: filters (dict): Filters to process on the prune list. Returns: (dict): A dict containing a list of deleted volume names and the amount of disk space reclaimed in bytes. Raises: :py:class:`docker.errors.APIError` If the server returns an error.
juraj-google-style
def upload(self, content, content_type, filename=None): try: response = self.api.media_upload(content, content_type, filename) if "content_uri" in response: return response["content_uri"] else: raise MatrixUnexpectedResponse( ...
Upload content to the home server and receive a MXC url. Args: content (bytes): The data of the content. content_type (str): The mimetype of the content. filename (str): Optional. Filename of the content. Raises: MatrixUnexpectedResponse: If the homeserver gave a strange response MatrixRequestError: If the upload fai...
juraj-google-style
def output_classes(self):
    return nest.map_structure(
        lambda component_spec: component_spec._to_legacy_output_classes(),
        self._element_spec)
Returns the class of each component of an element of this iterator. The expected values are `tf.Tensor` and `tf.SparseTensor`. Returns: A nested structure of Python `type` objects corresponding to each component of an element of this dataset.
github-repos
def _GetTfRecordEntries(self, path, max_entries, is_sequence, iterator_options):
    return self._GetEntries(
        [path], max_entries,
        partial(tf.python_io.tf_record_iterator, options=iterator_options),
        is_sequence)
Extracts TFRecord examples into a dictionary of feature values. Args: path: The path to the TFRecord file(s). max_entries: The maximum number of examples to load. is_sequence: True if the input data from 'path' are tf.SequenceExamples, False if tf.Examples. Defaults to false. iterator_options: Options to pass to the i...
codesearchnet