code | docstring | source |
|---|---|---|
def _CreateEventTag(self, event, comment, labels):
event_identifier = event.GetIdentifier()
event_tag = events.EventTag(comment=comment)
event_tag.SetEventIdentifier(event_identifier)
event_tag.AddLabels(labels)
event_identifier_string = event_identifier.CopyToString()
logger.debug('Created event tag: {0:s} for event: {1:s}'.format(comment, event_identifier_string))
return event_tag | Creates an event tag.
Args:
event (EventObject): event to tag.
comment (str): event tag comment.
labels (list[str]): event tag labels.
Returns:
EventTag: the event tag. | codesearchnet |
def add_droplets(self, droplet_ids):
return self.get_data(('load_balancers/%s/droplets/' % self.id), type=POST, params={'droplet_ids': droplet_ids}) | Assign Droplets to a LoadBalancer.
Args:
droplet_ids (obj:`list` of `int`): A list of Droplet IDs | codesearchnet |
def extend(self, records):
fields = self.fields
for record in records:
record = _cast_record_to_str_tuple(record, fields)
self._records.append(record) | Add each record in *records* to the end of the table.
Args:
records: an iterable of :class:`Record` or other iterables
containing column values
async def update_pairing_method(self, pairing: Pairing):
do_sequential_pairing = pairing == Pairing.sequential
await self.update(sequential_pairings=do_sequential_pairing) | |methcoro|
Args:
pairing (Pairing): the pairing method to set.
Raises:
APIException | juraj-google-style |
def member_update(self, repl_id, member_id, params):
repl = self[repl_id]
result = repl.member_update(member_id, params)
self[repl_id] = repl
return result | Apply new params to a replica set member.
Args:
repl_id - replica set identity
member_id - member index
params - new member's params
return True if the operation succeeds, otherwise False | juraj-google-style |
def get_attribute(self, obj, attr):
if attr == '*':
return obj
if isinstance(obj, Mapping):
return obj.get(attr, None)
return getattr(obj, attr, None) | Get attribute of given object instance.
Reason for existence of this method is the fact that 'attribute' can
also be the object's key if it is a dict or any other kind of mapping.
Note: it will return None if the attribute key does not exist
Args:
obj (object): internal object to retrieve data from
Returns:
internal object's key value or attribute | juraj-google-style |
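The lookup behavior described above can be sketched standalone (`Mapping` is assumed to come from `collections.abc`):

```python
from collections.abc import Mapping

def get_attribute(obj, attr):
    """Return obj itself for '*', a key for mappings, else an attribute."""
    if attr == '*':
        return obj
    if isinstance(obj, Mapping):
        return obj.get(attr, None)
    return getattr(obj, attr, None)

assert get_attribute({'a': 1}, 'a') == 1           # mapping key lookup
assert get_attribute(3 + 4j, 'real') == 3.0        # attribute lookup
assert get_attribute({'a': 1}, 'missing') is None  # missing key -> None
```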
def with_output_types(self, type_hint):
type_hint = native_type_compatibility.convert_to_beam_type(type_hint)
validate_composite_type_param(type_hint, 'Type hints for a PTransform')
return super().with_output_types(type_hint) | Annotates the output type of a :class:`PTransform` with a type-hint.
Args:
type_hint (type): An instance of an allowed built-in type, a custom class,
or a :class:`~apache_beam.typehints.typehints.TypeConstraint`.
Raises:
TypeError: If **type_hint** is not a valid type-hint. See
:obj:`~apache_beam.typehints.typehints.validate_composite_type_param()`
for further details.
Returns:
PTransform: A reference to the instance of this particular
:class:`PTransform` object. This allows chaining type-hinting related
methods. | github-repos |
def write_uint8(self, value, little_endian=True):
if little_endian:
endian = "<"
else:
endian = ">"
return self.pack('%sB' % endian, value) | Pack the value as an unsigned byte and write 1 byte to the stream.
Args:
value: the value to pack as an unsigned byte.
little_endian (bool): specify the endianness. (Default) Little endian.
Returns:
int: the number of bytes written. | juraj-google-style |
def init_algebra(*, default_hs_cls='LocalSpace'):
from qnet.algebra.core.hilbert_space_algebra import LocalSpace
from qnet.algebra.core.abstract_quantum_algebra import QuantumExpression
default_hs_cls = getattr(importlib.import_module('qnet'), default_hs_cls)
if issubclass(default_hs_cls, LocalSpace):
QuantumExpression._default_hs_cls = default_hs_cls
else:
raise TypeError("default_hs_cls must be a subclass of LocalSpace") | Initialize the algebra system
Args:
default_hs_cls (str): The name of the :class:`.LocalSpace` subclass
that should be used when implicitly creating Hilbert spaces, e.g.
in :class:`.OperatorSymbol` | juraj-google-style |
def _InternalUnpackAny(msg):
from google.protobuf import symbol_database
factory = symbol_database.Default()
type_url = msg.type_url
if not type_url:
return None
type_name = type_url.split('/')[-1]
descriptor = factory.pool.FindMessageTypeByName(type_name)
if descriptor is None:
return None
message_class = factory.GetPrototype(descriptor)
message = message_class()
message.ParseFromString(msg.value)
return message | Unpacks Any message and returns the unpacked message.
This internal method is different from public Any Unpack method which takes
the target message as argument. _InternalUnpackAny method does not have
the target message type and needs to find it in the descriptor pool.
Args:
msg: An Any message to be unpacked.
Returns:
The unpacked message. | juraj-google-style |
def split(self):
assert (self.status == SolverStatus.exhausted)
scopes = []
next_scopes = []
split_i = None
for (i, scope) in enumerate(self.scopes):
if (split_i is None):
r = scope.split()
if (r is not None):
(scope_, next_scope) = r
scopes.append(scope_)
next_scopes.append(next_scope)
split_i = i
continue
scopes.append(scope)
next_scopes.append(scope)
assert (split_i is not None)
phase = copy.copy(self)
phase.scopes = scopes
phase.status = SolverStatus.pending
phase.changed_scopes_i = set([split_i])
next_phase = copy.copy(phase)
next_phase.scopes = next_scopes
return (phase, next_phase) | Split the phase.
When a phase is exhausted, it gets split into a pair of phases to be
further solved. The split happens like so:
1) Select the first unsolved package scope.
2) Find some common dependency in the first N variants of the scope.
3) Split the scope into two: [:N] and [N:].
4) Create two copies of the phase, containing each half of the split
scope.
The result of this split is that we have a new phase (the first phase),
which contains a package scope with a common dependency. This
dependency can now be intersected with the current resolve, thus
progressing it.
Returns:
A 2-tuple of _ResolvePhase objects, where the first phase is the
best contender for resolving. | codesearchnet |
def add(self, X):
for each in X:
self.dpp_vector[each] = X[each]
self.fit(self.dpp_vector.reshape(1, (- 1))) | Add data about known pipeline and scores.
Updates ``dpp_vector`` and refits model with all data.
Args:
X (dict): mapping of pipeline indices to scores. Keys must correspond to the index of a
column in ``dpp_matrix`` and values are the corresponding score for pipeline on
the dataset. | codesearchnet |
def DEFINE_spaceseplist(name, default, help, comma_compat=False, flag_values=_flagvalues.FLAGS, **args):
parser = _argument_parser.WhitespaceSeparatedListParser(comma_compat=comma_compat)
serializer = _argument_parser.ListSerializer(' ')
DEFINE(parser, name, default, help, flag_values, serializer, **args) | Registers a flag whose value is a whitespace-separated list of strings.
Any whitespace can be used as a separator.
Args:
name: str, the flag name.
default: list|str|None, the default value of the flag.
help: str, the help message.
comma_compat: bool - Whether to support comma as an additional separator.
If false then only whitespace is supported. This is intended only for
backwards compatibility with flags that used to be comma-separated.
flag_values: FlagValues, the FlagValues instance with which the flag will
be registered. This should almost never need to be overridden.
**args: Dictionary with extra keyword args that are passed to the
Flag __init__. | codesearchnet |
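The parser's splitting behavior can be shown independently of the flags framework (a simplified stand-in for `WhitespaceSeparatedListParser`, not the real implementation):

```python
def parse_space_sep(value, comma_compat=False):
    # Any run of whitespace separates items; optionally treat commas
    # as whitespace for backwards compatibility.
    if comma_compat:
        value = value.replace(',', ' ')
    return value.split()

assert parse_space_sep('a b\tc') == ['a', 'b', 'c']
assert parse_space_sep('a,b c', comma_compat=True) == ['a', 'b', 'c']
```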
def __init__(self, original_embedding: nn.Embedding, assistant_overlap_token_ids):
super().__init__()
self.original_embedding = original_embedding
self.weight = original_embedding.weight
self.assistant_overlap_token_ids = assistant_overlap_token_ids
self.map = False | Wraps an existing embedding layer and remaps token IDs before lookup.
Args:
original_embedding (nn.Embedding): Pre-trained or existing embedding layer.
assistant_overlap_token_ids (dict): Mapping from original token IDs to new token IDs.
Example: {old_id: new_id} | github-repos |
def reflection_matrix_pow(reflection_matrix: np.ndarray, exponent: float):
squared_phase = np.dot(reflection_matrix[:, 0],
reflection_matrix[0, :])
phase = complex(np.sqrt(squared_phase))
i = np.eye(reflection_matrix.shape[0]) * phase
pos_part = (i + reflection_matrix) * 0.5
neg_part = (i - reflection_matrix) * 0.5
pos_factor = phase**(exponent - 1)
neg_factor = pos_factor * complex(-1)**exponent
pos_part_raised = pos_factor * pos_part
neg_part_raised = neg_part * neg_factor
return pos_part_raised + neg_part_raised | Raises a matrix with two opposing eigenvalues to a power.
Args:
reflection_matrix: The matrix to raise to a power.
exponent: The power to raise the matrix to.
Returns:
The given matrix raised to the given power. | juraj-google-style |
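Because the matrix has two opposing eigenvalues, a fractional power behaves like a matrix root. Applying the function to the Pauli-X reflection with exponent 0.5 yields a matrix square root:

```python
import numpy as np

def reflection_matrix_pow(reflection_matrix, exponent):
    squared_phase = np.dot(reflection_matrix[:, 0], reflection_matrix[0, :])
    phase = complex(np.sqrt(squared_phase))
    i = np.eye(reflection_matrix.shape[0]) * phase
    pos_part = (i + reflection_matrix) * 0.5
    neg_part = (i - reflection_matrix) * 0.5
    pos_factor = phase ** (exponent - 1)
    neg_factor = pos_factor * complex(-1) ** exponent
    return pos_factor * pos_part + neg_part * neg_factor

x = np.array([[0, 1], [1, 0]])          # reflection with eigenvalues +1, -1
sqrt_x = reflection_matrix_pow(x, 0.5)
assert np.allclose(sqrt_x @ sqrt_x, x)  # squaring the half power recovers x
```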
def contains_call_signature(caller, key):
try:
args = inspect.signature(caller).parameters
except AttributeError:
args = inspect.getargspec(caller).args
return (key in args) | Check if a function or method call signature contains a specific
argument.
Args:
caller (Callable):
Method or function to check if signature is contain in.
key (str):
Signature to look for.
Returns:
True if ``key`` exits in ``caller`` call signature.
Examples:
>>> def foo(param): pass
>>> contains_call_signature(foo, "param")
True
>>> contains_call_signature(foo, "not_param")
False
>>> class Bar:
... def baz(self, param): pass
>>> bar = Bar()
>>> contains_call_signature(bar.baz, "param")
True
>>> contains_call_signature(bar.baz, "not_param")
False | codesearchnet |
def get_table_columns(metadata):
cols = OrderedDict()
for col in metadata.c:
name = str(col).rpartition(".")[2]
cols[name] = col.type.python_type.__name__
return cols | Extract column names and Python types from metadata
Args:
metadata: Table metadata
Returns:
dict mapping column names to Python type names | juraj-google-style |
def accept_prompt(self, text=None, response=None, wait=None):
with self.driver.accept_modal('prompt', text=text, response=response, wait=wait):
(yield) | Execute the wrapped code, accepting a prompt, optionally responding to the prompt.
Args:
text (str | RegexObject, optional): Text to match against the text in the modal.
response (str, optional): Response to provide to the prompt.
wait (int | float, optional): Maximum time to wait for the modal to appear after
executing the wrapped code.
Raises:
ModalNotFound: If a modal dialog hasn't been found. | codesearchnet |
def encoder_vgg(x, enc_final_size, reuse=False, scope_prefix='', hparams=None, is_training=True):
with tf.variable_scope((scope_prefix + 'encoder'), reuse=reuse):
x *= 256
x = (x - COLOR_NORMALIZATION_VECTOR)
with arg_scope(vgg.vgg_arg_scope()):
x = tf.pad(x, [[0, 0], [0, (VGG_IMAGE_SIZE - IMG_WIDTH)], [0, (VGG_IMAGE_SIZE - IMG_HEIGHT)], [0, 0]])
(_, end_points) = vgg.vgg_16(x, num_classes=enc_final_size, is_training=is_training)
pool5_key = [key for key in end_points.keys() if ('pool5' in key)]
assert (len(pool5_key) == 1)
enc = end_points[pool5_key[0]]
enc = tf.slice(enc, [0, 0, 0, 0], [(- 1), 2, 2, (- 1)])
enc_shape = enc.get_shape().as_list()
enc_shape[0] = (- 1)
enc_size = ((enc_shape[1] * enc_shape[2]) * enc_shape[3])
enc_flat = tf.reshape(enc, ((- 1), enc_size))
enc_flat = tf.nn.dropout(enc_flat, hparams.enc_keep_prob)
enc_flat = tf.layers.dense(enc_flat, enc_final_size, kernel_initializer=tf.truncated_normal_initializer(stddev=0.0001))
if hparams.enc_pred_use_l2norm:
enc_flat = tf.nn.l2_normalize(enc_flat, 1)
return enc_flat | VGG network to use as encoder without the top few layers.
Can be pretrained.
Args:
x: The image to encode. In the range 0 to 1.
enc_final_size: The desired size of the encoding.
reuse: To reuse in variable scope or not.
scope_prefix: The prefix before the scope name.
hparams: The python hparams.
is_training: boolean value indicating if training is happening.
Returns:
The encoding of the image. | codesearchnet |
def get_effect_class(self, effect_name: str, package_name: str = None) -> Type['Effect']:
return self._project.get_effect_class(effect_name, package_name=package_name) | Get an effect class by the class name
Args:
effect_name (str): Name of the effect class
Keyword Args:
package_name (str): The package the effect belongs to. This is optional and only
needed when effect class names are not unique.
Returns:
:py:class:`Effect` class | juraj-google-style |
def __is_function_action(self, action_function):
is_function_action = True
if (not hasattr(action_function, '__call__')):
return False
try:
for (end_string, context) in action_function():
if (not isinstance(end_string, str)):
self.log_error('Action function must return end of filename as a string as first argument')
if (not isinstance(context, dict)):
self.log_error('Action function must return context as a dict as second argument')
break
except Exception:
is_function_action = False
return is_function_action | Detect if given function is really an action function.
Args:
action_function: Function to test.
Note:
We don't care if the variable refers to a function but rather if it is callable or not. | codesearchnet |
def get_gan_loss(self, true_frames, gen_frames, name):
with tf.variable_scope("%s_discriminator" % name, reuse=tf.AUTO_REUSE):
gan_d_loss, _, fake_logits_stop = self.d_step(
true_frames, gen_frames)
with tf.variable_scope("%s_discriminator" % name, reuse=True):
gan_g_loss_pos_d, gan_g_loss_neg_d = self.g_step(
gen_frames, fake_logits_stop)
gan_g_loss = gan_g_loss_pos_d + gan_g_loss_neg_d
tf.summary.scalar("gan_loss_%s" % name, gan_g_loss_pos_d + gan_d_loss)
if self.hparams.gan_optimization == "joint":
gan_loss = gan_g_loss + gan_d_loss
else:
curr_step = self.get_iteration_num()
gan_loss = tf.cond(
tf.logical_not(curr_step % 2 == 0), lambda: gan_g_loss,
lambda: gan_d_loss)
return gan_loss | Get the discriminator + generator loss at every step.
This performs a 1:1 update of the discriminator and generator at every
step.
Args:
true_frames: 5-D Tensor of shape (num_steps, batch_size, H, W, C)
Assumed to be ground truth.
gen_frames: 5-D Tensor of shape (num_steps, batch_size, H, W, C)
Assumed to be fake.
name: discriminator scope.
Returns:
loss: 0-D Tensor, with d_loss + g_loss | juraj-google-style |
def __init__(self, subscription_path, deduplicate=None, expansion_service=None):
if deduplicate is None:
deduplicate = False
if expansion_service is None:
expansion_service = _default_io_expansion_service()
super().__init__('beam:transform:org.apache.beam:pubsublite_read:v1', NamedTupleBasedPayloadBuilder(_ReadSchema(subscription_path=subscription_path, deduplicate=deduplicate)), expansion_service) | Initializes a read operation from Pub/Sub Lite, returning the serialized
bytes of SequencedMessage protos.
Args:
subscription_path: A Pub/Sub Lite Subscription path.
deduplicate: Whether to deduplicate messages based on the value of
the 'x-goog-pubsublite-dataflow-uuid' attribute. | github-repos |
def get_workunit(self, ignore_list=None):
if ignore_list is None:
ignore_list = []
potential_files = self.get_potential_files(ignore_list)
while len(potential_files) > 0:
potential_file = self.select_potential_file(potential_files)
potential_files.remove(potential_file)
if self._filter(potential_file):
continue
if self.directory_context.get_file_size(potential_file) == 0:
continue
if self.progress_manager.is_done(potential_file):
self._done.append(potential_file)
continue
else:
try:
self.progress_manager.lock(potential_file)
except FileLockedException:
continue
self._already_fetched.append(potential_file)
return self.builder.build_workunit(
self.directory_context.get_full_path(potential_file))
logger.info("No eligible workunits remain to be fetched.")
raise NoAvailableWorkException() | Gets a new unit of work.
Args:
ignore_list: list(str)
A list of filenames which should be ignored. Defaults to None.
Returns:
new_workunit: WorkUnit
A new unit of work that has not yet been processed. A lock on
it has been acquired.
Raises:
NoAvailableWorkException
There is no more work available. | juraj-google-style |
def get_branch_length(self, age=None, pos=0):
if (age is None):
age = self.age
return (self.length * pow(self.branches[pos][0], age)) | Get the length of a branch.
This method calculates the length of a branch at a specific age.
The formula used: length * scale^age.
Args:
age (int): The age, for which you want to know the branch length.
Returns:
float: The length of the branch | codesearchnet |
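The formula `length * scale^age` can be checked in isolation (hypothetical numbers, not tied to any particular tree configuration):

```python
def branch_length(base_length, scale, age):
    # length of a branch after `age` growth steps: length * scale^age
    return base_length * scale ** age

assert branch_length(10.0, 0.5, 2) == 2.5   # 10 * 0.5^2
assert branch_length(10.0, 0.5, 0) == 10.0  # age 0 leaves the length unchanged
```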
def get_shreds(self, feature_extractors, sheet_name):
if self._shreds is None:
shreds = []
_, contours, _ = cv2.findContours(self._foreground_mask,
cv2.RETR_EXTERNAL,
cv2.CHAIN_APPROX_SIMPLE)
for i, contour in enumerate(contours):
shred = self._make_shred(contour, i, feature_extractors,
sheet_name)
if shred is not None:
shreds.append(shred)
self._shreds = shreds
return self._shreds | Detects shreds in the current sheet and constructs Shred instances.
Caches the results for further invocations.
Args:
feature_extractors: iterable of AbstractShredFeature instances to
use for shreds feature assignment.
sheet_name: string, included in shred attributes.
Returns:
list of Shred instances. | juraj-google-style |
def create_token_type_ids_from_sequences(self, token_ids_0: List[int], token_ids_1: Optional[List[int]]=None) -> List[int]:
sep = [self.sep_token_id]
cls = [self.cls_token_id]
if token_ids_1 is None:
return len(cls + token_ids_0 + sep) * [0]
return len(cls + token_ids_0 + sep + sep + token_ids_1 + sep) * [0] | Creates a mask from the two sequences passed to be used in a sequence-pair classification task. MPNet does not
make use of token type ids, therefore a list of zeros is returned.
Args:
token_ids_0 (`List[int]`):
List of ids.
token_ids_1 (`List[int]`, *optional*):
Optional second list of IDs for sequence pairs.
Returns:
`List[int]`: List of zeros. | github-repos |
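The zero mask can be reproduced standalone; the `cls_id`/`sep_id` values below are placeholders, not MPNet's real vocabulary IDs:

```python
def create_token_type_ids(token_ids_0, token_ids_1=None, cls_id=0, sep_id=2):
    # MPNet-style: every position gets token type 0, for single
    # sequences and for pairs alike.
    cls, sep = [cls_id], [sep_id]
    if token_ids_1 is None:
        return len(cls + token_ids_0 + sep) * [0]
    return len(cls + token_ids_0 + sep + sep + token_ids_1 + sep) * [0]

assert create_token_type_ids([5, 6]) == [0, 0, 0, 0]
assert create_token_type_ids([5, 6], [7]) == [0] * 7
```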
class FlavaProcessor(ProcessorMixin):
attributes = ['image_processor', 'tokenizer']
image_processor_class = 'FlavaImageProcessor'
tokenizer_class = ('BertTokenizer', 'BertTokenizerFast')
def __init__(self, image_processor=None, tokenizer=None, **kwargs):
feature_extractor = None
if 'feature_extractor' in kwargs:
warnings.warn('The `feature_extractor` argument is deprecated and will be removed in v5, use `image_processor` instead.', FutureWarning)
feature_extractor = kwargs.pop('feature_extractor')
image_processor = image_processor if image_processor is not None else feature_extractor
if image_processor is None:
raise ValueError('You need to specify an `image_processor`.')
if tokenizer is None:
raise ValueError('You need to specify a `tokenizer`.')
super().__init__(image_processor, tokenizer)
self.current_processor = self.image_processor
def __call__(self, images: Optional[ImageInput]=None, text: Optional[Union[TextInput, PreTokenizedInput, List[TextInput], List[PreTokenizedInput]]]=None, add_special_tokens: bool=True, padding: Union[bool, str, PaddingStrategy]=False, truncation: Union[bool, str, TruncationStrategy]=False, max_length: Optional[int]=None, stride: int=0, pad_to_multiple_of: Optional[int]=None, return_image_mask: Optional[bool]=None, return_codebook_pixels: Optional[bool]=None, return_token_type_ids: Optional[bool]=None, return_attention_mask: Optional[bool]=None, return_overflowing_tokens: bool=False, return_special_tokens_mask: bool=False, return_offsets_mapping: bool=False, return_length: bool=False, verbose: bool=True, return_tensors: Optional[Union[str, TensorType]]=None, **kwargs):
if text is None and images is None:
raise ValueError('You have to specify either text or images. Both cannot be none.')
if text is not None:
encoding = self.tokenizer(text=text, add_special_tokens=add_special_tokens, padding=padding, truncation=truncation, max_length=max_length, stride=stride, pad_to_multiple_of=pad_to_multiple_of, return_token_type_ids=return_token_type_ids, return_attention_mask=return_attention_mask, return_overflowing_tokens=return_overflowing_tokens, return_special_tokens_mask=return_special_tokens_mask, return_offsets_mapping=return_offsets_mapping, return_length=return_length, verbose=verbose, return_tensors=return_tensors, **kwargs)
if images is not None:
image_features = self.image_processor(images, return_image_mask=return_image_mask, return_codebook_pixels=return_codebook_pixels, return_tensors=return_tensors, **kwargs)
if text is not None and images is not None:
encoding.update(image_features)
return encoding
elif text is not None:
return encoding
else:
return BatchEncoding(data=dict(**image_features), tensor_type=return_tensors)
def batch_decode(self, *args, **kwargs):
return self.tokenizer.batch_decode(*args, **kwargs)
def decode(self, *args, **kwargs):
return self.tokenizer.decode(*args, **kwargs)
@property
def model_input_names(self):
tokenizer_input_names = self.tokenizer.model_input_names
image_processor_input_names = self.image_processor.model_input_names
return list(dict.fromkeys(tokenizer_input_names + image_processor_input_names))
@property
def feature_extractor_class(self):
warnings.warn('`feature_extractor_class` is deprecated and will be removed in v5. Use `image_processor_class` instead.', FutureWarning)
return self.image_processor_class
@property
def feature_extractor(self):
warnings.warn('`feature_extractor` is deprecated and will be removed in v5. Use `image_processor` instead.', FutureWarning)
return self.image_processor | Constructs a FLAVA processor which wraps a FLAVA image processor and a FLAVA tokenizer into a single processor.
[`FlavaProcessor`] offers all the functionalities of [`FlavaImageProcessor`] and [`BertTokenizerFast`]. See the
[`~FlavaProcessor.__call__`] and [`~FlavaProcessor.decode`] for more information.
Args:
image_processor ([`FlavaImageProcessor`], *optional*): The image processor is a required input.
tokenizer ([`BertTokenizerFast`], *optional*): The tokenizer is a required input. | github-repos |
def store_sample(self, input_bytes, filename, type_tag):
if type_tag == 'unknown':
print('Info: Unknown File -- Trying to Determine Type...')
type_tag = self.guess_type_tag(input_bytes, filename)
if type_tag == 'lz4':
input_bytes = lz4.loads(input_bytes)
md5 = self.data_store.store_sample(input_bytes, filename, type_tag)
if type_tag != 'lz4':
self.add_tags(md5, type_tag)
return md5 | Store a sample into the DataStore.
Args:
input_bytes: the actual bytes of the sample e.g. f.read()
filename: name of the file (used purely as meta data not for lookup)
type_tag: ('exe','pcap','pdf','json','swf', or ...)
Returns:
the md5 of the sample. | juraj-google-style |
def score_task(self, X, Y, t=0, metric="accuracy", verbose=True, **kwargs):
Y = self._to_numpy(Y)
Y_tp = self.predict_task(X, t=t, **kwargs)
probs = self.predict_proba(X)[t]
score = metric_score(
Y[t], Y_tp, metric, ignore_in_gold=[0], probs=probs, **kwargs
)
if verbose:
print(f"[t={t}] {metric.capitalize()}: {score:.3f}")
return score | Scores the predictive performance of the Classifier on task t
Args:
X: The input for the predict_task method
Y: A [n] or [n, 1] np.ndarray or torch.Tensor of gold labels in
{1,...,K_t}
t: The task index to score
metric: The metric with which to score performance on this task
Returns:
The (float) score of the Classifier for the specified task and
metric | juraj-google-style |
def _map_args(self, node: cfg.CFGNode, args: function.Args) -> tuple[list[tuple[str, _base.BaseValue]], dict[str, cfg.Variable]]:
formal_args: list[tuple[str, _base.BaseValue]] = [(p.name, self.signature.annotations[p.name]) for p in self.pytd_sig.params]
arg_dict: dict[str, cfg.Variable] = {}
for name, arg in zip(self.signature.param_names, args.posargs):
arg_dict[name] = arg
num_expected_posargs = len(self.signature.param_names)
if len(args.posargs) > num_expected_posargs and (not self.pytd_sig.starargs):
raise error_types.WrongArgCount(self.signature, args, self.ctx)
varargs_type = self.signature.annotations.get(self.signature.varargs_name)
if isinstance(varargs_type, _classes.ParameterizedClass):
for i, vararg in enumerate(args.posargs[num_expected_posargs:]):
name = function.argname(num_expected_posargs + i)
arg_dict[name] = vararg
formal_args.append((name, varargs_type.get_formal_type_parameter(abstract_utils.T)))
posonly_names = set(self.signature.posonly_params)
for name, arg in args.namedargs.items():
if name in posonly_names:
continue
elif name in arg_dict:
raise error_types.DuplicateKeyword(self.signature, args, self.ctx, name)
else:
arg_dict[name] = arg
kws = set(args.namedargs)
extra_kwargs = kws - {p.name for p in self.pytd_sig.params}
if extra_kwargs and (not self.pytd_sig.starstarargs):
if function.has_visible_namedarg(node, args, extra_kwargs):
raise error_types.WrongKeywordArgs(self.signature, args, self.ctx, extra_kwargs)
posonly_kwargs = kws & posonly_names
if posonly_kwargs and (not self.signature.kwargs_name):
raise error_types.WrongKeywordArgs(self.signature, args, self.ctx, posonly_kwargs)
kwargs_type = self.signature.annotations.get(self.signature.kwargs_name)
if isinstance(kwargs_type, _classes.ParameterizedClass):
for name in sorted(extra_kwargs):
formal_args.append((name, kwargs_type.get_formal_type_parameter(abstract_utils.V)))
packed_args = [('starargs', self.signature.varargs_name), ('starstarargs', self.signature.kwargs_name)]
for arg_type, name in packed_args:
actual = getattr(args, arg_type)
pytd_val = getattr(self.pytd_sig, arg_type)
if actual and pytd_val:
arg_dict[name] = actual
typ = self.ctx.convert.widen_type(self.signature.annotations[name])
formal_args.append((name, typ))
return (formal_args, arg_dict) | Map the passed arguments to a name->binding dictionary.
Args:
node: The current node.
args: The passed arguments.
Returns:
A tuple of:
a list of formal arguments, each a (name, abstract value) pair;
a name->variable dictionary of the passed arguments.
Raises:
InvalidParameters: If the passed arguments don't match this signature. | github-repos |
def attention_lm_small():
hparams = attention_lm_base()
hparams.num_hidden_layers = 4
hparams.hidden_size = 512
hparams.filter_size = 2048
hparams.layer_prepostprocess_dropout = 0.5
return hparams | Cheap model.
on lm1b_32k:
45M params
2 steps/sec on [GeForce GTX TITAN X]
Returns:
an hparams object. | codesearchnet |
def HandleForwardedIps(self, interface, forwarded_ips, interface_ip=None):
desired = self.ip_forwarding_utils.ParseForwardedIps(forwarded_ips)
configured = self.ip_forwarding_utils.GetForwardedIps(
interface, interface_ip)
to_add = sorted(set(desired) - set(configured))
to_remove = sorted(set(configured) - set(desired))
self._LogForwardedIpChanges(
configured, desired, to_add, to_remove, interface)
self._AddForwardedIps(to_add, interface)
self._RemoveForwardedIps(to_remove, interface) | Handle changes to the forwarded IPs on a network interface.
Args:
interface: string, the output device to configure.
forwarded_ips: list, the forwarded IP address strings desired.
interface_ip: string, current interface ip address. | juraj-google-style |
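The reconciliation step reduces to two set differences; with made-up TEST-NET addresses:

```python
desired = ['192.0.2.1', '192.0.2.2']
configured = ['192.0.2.2', '192.0.2.3']

to_add = sorted(set(desired) - set(configured))
to_remove = sorted(set(configured) - set(desired))

assert to_add == ['192.0.2.1']     # desired but not yet configured
assert to_remove == ['192.0.2.3']  # configured but no longer desired
```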
def ready(self, cluster):
ready_nodes = set()
next_ready_check = 9999999.99
unknown_leaders_exist = False
now = time.time()
exhausted = bool((self._free.queued() > 0))
partitions = list(self._batches.keys())
for tp in partitions:
leader = cluster.leader_for_partition(tp)
if ((leader is None) or (leader == (- 1))):
unknown_leaders_exist = True
continue
elif (leader in ready_nodes):
continue
elif (tp in self.muted):
continue
with self._tp_locks[tp]:
dq = self._batches[tp]
if (not dq):
continue
batch = dq[0]
retry_backoff = (self.config['retry_backoff_ms'] / 1000.0)
linger = (self.config['linger_ms'] / 1000.0)
backing_off = bool(((batch.attempts > 0) and ((batch.last_attempt + retry_backoff) > now)))
waited_time = (now - batch.last_attempt)
time_to_wait = (retry_backoff if backing_off else linger)
time_left = max((time_to_wait - waited_time), 0)
full = bool(((len(dq) > 1) or batch.records.is_full()))
expired = bool((waited_time >= time_to_wait))
sendable = (full or expired or exhausted or self._closed or self._flush_in_progress())
if (sendable and (not backing_off)):
ready_nodes.add(leader)
else:
next_ready_check = min(time_left, next_ready_check)
return (ready_nodes, next_ready_check, unknown_leaders_exist) | Get a list of nodes whose partitions are ready to be sent, and the
earliest time at which any non-sendable partition will be ready;
Also return the flag for whether there are any unknown leaders for the
accumulated partition batches.
A destination node is ready to send if:
* There is at least one partition that is not backing off its send
* and those partitions are not muted (to prevent reordering if
max_in_flight_requests_per_connection is set to 1)
* and any of the following are true:
* The record set is full
* The record set has sat in the accumulator for at least linger_ms
milliseconds
* The accumulator is out of memory and threads are blocking waiting
for data (in this case all partitions are immediately considered
ready).
* The accumulator has been closed
Arguments:
cluster (ClusterMetadata):
Returns:
tuple:
ready_nodes (set): node_ids that have ready batches
next_ready_check (float): secs until next ready after backoff
unknown_leaders_exist (bool): True if metadata refresh needed | codesearchnet |
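The per-batch sendability test can be distilled into a small predicate (a simplified sketch of the conditions listed above, not the accumulator's real API):

```python
def is_sendable(waited, linger, retry_backoff, attempts,
                queue_len=1, batch_full=False, exhausted=False,
                closed=False, flushing=False):
    # A retried batch waits out its backoff window before it is eligible.
    backing_off = attempts > 0 and waited < retry_backoff
    time_to_wait = retry_backoff if backing_off else linger
    expired = waited >= time_to_wait
    full = queue_len > 1 or batch_full
    sendable = full or expired or exhausted or closed or flushing
    return sendable and not backing_off

# batch has lingered past linger_ms -> ready
assert is_sendable(waited=0.2, linger=0.1, retry_backoff=0.1, attempts=0)
# retried batch still inside its backoff window -> not ready
assert not is_sendable(waited=0.05, linger=0.0, retry_backoff=0.1, attempts=1)
```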
def get(self, public_key, spent=None, headers=None):
return self.transport.forward_request(method='GET', path=self.path, params={'public_key': public_key, 'spent': spent}, headers=headers) | Get transaction outputs by public key. The public_key parameter
must be a base58 encoded ed25519 public key associated with
transaction output ownership.
Args:
public_key (str): Public key for which unfulfilled
conditions are sought.
spent (bool): Indicate if the result set should include only spent
or only unspent outputs. If not specified (``None``) the
result includes all the outputs (both spent and unspent)
associated with the public key.
headers (dict): Optional headers to pass to the request.
Returns:
:obj:`list` of :obj:`str`: List of unfulfilled conditions.
Example:
Given a transaction with `id` ``da1b64a907ba54`` having an
`ed25519` condition (at index ``0``) with alice's public
key::
>>> bdb = BigchainDB()
>>> bdb.outputs.get(alice_pubkey)
... ['../transactions/da1b64a907ba54/conditions/0'] | codesearchnet |
def _get_block_sizes(resnet_size):
choices = {18: [2, 2, 2, 2], 34: [3, 4, 6, 3], 50: [3, 4, 6, 3], 101: [3, 4, 23, 3], 152: [3, 8, 36, 3], 200: [3, 24, 36, 3]}
try:
return choices[resnet_size]
except KeyError:
err = 'Could not find layers for selected Resnet size.\nSize received: {}; sizes allowed: {}.'.format(resnet_size, choices.keys())
raise ValueError(err) | Retrieve the size of each block_layer in the ResNet model.
The number of block layers used for the Resnet model varies according
to the size of the model. This helper grabs the layer set we want, throwing
an error if a non-standard size has been selected.
Args:
resnet_size: The number of convolutional layers needed in the model.
Returns:
A list of block sizes to use in building the model.
Raises:
KeyError: if invalid resnet_size is received. | codesearchnet |
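Usage of the lookup, with the size table inlined:

```python
def get_block_sizes(resnet_size):
    # block counts per layer group for the standard ResNet variants
    choices = {18: [2, 2, 2, 2], 34: [3, 4, 6, 3], 50: [3, 4, 6, 3],
               101: [3, 4, 23, 3], 152: [3, 8, 36, 3], 200: [3, 24, 36, 3]}
    try:
        return choices[resnet_size]
    except KeyError:
        raise ValueError(
            'Could not find layers for selected Resnet size.\n'
            'Size received: {}; sizes allowed: {}.'.format(
                resnet_size, list(choices)))

assert get_block_sizes(50) == [3, 4, 6, 3]
try:
    get_block_sizes(42)
except ValueError:
    pass
else:
    raise AssertionError('expected ValueError for unsupported size')
```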
def create_new(cls, mapreduce_id, shard_number):
shard_id = cls.shard_id_from_number(mapreduce_id, shard_number)
state = cls(key_name=shard_id, mapreduce_id=mapreduce_id)
return state | Create new shard state.
Args:
mapreduce_id: unique mapreduce id as string.
shard_number: shard number for which to create shard state.
Returns:
new instance of ShardState ready to put into datastore. | codesearchnet |
def ignore_path(path):
ignore = False
for name in ['.tox', 'dist', 'build', 'node_modules', 'htmlcov']:
if path.find(name) >= 0:
ignore = True
break
return ignore | Verify whether to ignore a path.
Args:
path (str): path to check.
Returns:
bool: True when to ignore given path. | juraj-google-style |
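The check is a plain substring scan; restructured with an early return (same behavior):

```python
def ignore_path(path):
    # a path is ignored when it contains any build/venv directory name
    for name in ['.tox', 'dist', 'build', 'node_modules', 'htmlcov']:
        if path.find(name) >= 0:
            return True
    return False

assert ignore_path('/project/.tox/py37/lib')
assert not ignore_path('/project/src/app.py')
```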
def all_near_zero(a: Union[float, complex, Iterable[float], np.ndarray],
*,
atol: float = 1e-8) -> bool:
return np.all(np.less_equal(np.abs(a), atol)) | Checks if the tensor's elements are all near zero.
Args:
a: Tensor of elements that could all be near zero.
atol: Absolute tolerance. | juraj-google-style |
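The original relies on NumPy; here is a stdlib-only stand-in with the same semantics for scalars and flat iterables (a sketch, not the library implementation):

```python
def all_near_zero(a, *, atol=1e-8):
    # Scalars (including complex) take the single-value path;
    # iterables are checked element-wise against the tolerance.
    values = a if hasattr(a, '__iter__') else [a]
    return all(abs(x) <= atol for x in values)
```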
def connect_all(state):
hosts = [
host for host in state.inventory
if state.is_host_in_limit(host)
]
greenlet_to_host = {
state.pool.spawn(host.connect, state): host
for host in hosts
}
with progress_spinner(greenlet_to_host.values()) as progress:
for greenlet in gevent.iwait(greenlet_to_host.keys()):
host = greenlet_to_host[greenlet]
progress(host)
failed_hosts = set()
for greenlet, host in six.iteritems(greenlet_to_host):
greenlet.get()
if host.connection:
state.activate_host(host)
else:
failed_hosts.add(host)
state.fail_hosts(failed_hosts, activated_count=len(hosts)) | Connect to all the configured servers in parallel. Reads/writes state.inventory.
Args:
state (``pyinfra.api.State`` obj): the state containing an inventory to connect to | juraj-google-style |
def load_audio(audio: Union[str, np.ndarray], sampling_rate=16000, timeout=None) -> np.ndarray:
requires_backends(load_audio, ['librosa'])
if isinstance(audio, str):
if audio.startswith('http://') or audio.startswith('https://'):
audio = librosa.load(BytesIO(requests.get(audio, timeout=timeout).content), sr=sampling_rate)[0]
elif os.path.isfile(audio):
audio = librosa.load(audio, sr=sampling_rate)[0]
elif isinstance(audio, np.ndarray):
audio = audio
else:
raise TypeError('Incorrect format used for `audio`. Should be an url linking to an audio, a local path, or numpy array.')
return audio | Loads `audio` to an np.ndarray object.
Args:
audio (`str` or `np.ndarray`):
The audio to be loaded to the numpy array format.
sampling_rate (`int`, *optional*, defaults to 16000):
The sampling rate to be used when loading the audio. It should be same as the
sampling rate the model you will be using further was trained with.
timeout (`float`, *optional*):
The timeout value in seconds for the URL request.
Returns:
`np.ndarray`: A numpy array representing the audio. | github-repos |
def spherical_to_cartesian(r, theta, phi):
x = r * np.sin(phi) * np.cos(theta)
y = r * np.sin(phi) * np.sin(theta)
z = r * np.cos(phi)
return (x, y, z)
Args:
r,theta,phi = scalar spherical coordinates
Returns:
x,y,z = scalar cartesian coordinates | juraj-google-style |
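A worked version using `math` instead of NumPy. Note the convention in the snippet: `theta` is the azimuthal angle and `phi` the polar angle measured from the +z axis:

```python
import math

def spherical_to_cartesian(r, theta, phi):
    # Physics-style convention: phi measured from the +z axis.
    x = r * math.sin(phi) * math.cos(theta)
    y = r * math.sin(phi) * math.sin(theta)
    z = r * math.cos(phi)
    return x, y, z
```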
def update_network_asset(self, asset_id, name, asset_type):
self.update_asset('NETWORK', asset_id, name, asset_type) | Updates a Network Asset
Args:
name: The name provided to the network asset
asset_type: The type provided to the network asset
asset_id:
Returns: | juraj-google-style |
def set_seat_logical_name(self, seat):
rc = self._libinput.libinput_device_set_seat_logical_name(self._handle, seat.encode())
assert (rc == 0), 'Cannot assign device to {}'.format(seat) | Change the logical seat associated with this device by removing
the device and adding it to the new seat.
This command is identical to physically unplugging the device, then
re-plugging it as a member of the new seat. libinput will generate
a :attr:`~libinput.constant.EventType.DEVICE_REMOVED` event and this
:class:`Device` is considered removed from the context; it will not
generate further events.
A :attr:`~libinput.constant.EventType.DEVICE_ADDED` event is
generated with a new :class:`Device`. It is the caller's
responsibility to update references to the new device accordingly.
If the logical seat name already exists in the device's physical seat,
the device is added to this seat. Otherwise, a new seat is created.
Note:
This change applies to this device until removal or
:meth:`~libinput.LibInput.suspend`, whichever happens earlier.
Args:
seat (str): The new logical seat name.
Raises:
AssertionError | codesearchnet |
def get_unique_families(hkls):
def is_perm(hkl1, hkl2):
h1 = np.abs(hkl1)
h2 = np.abs(hkl2)
return all([i == j for i, j in zip(sorted(h1), sorted(h2))])
unique = collections.defaultdict(list)
for hkl1 in hkls:
found = False
for hkl2 in unique.keys():
if is_perm(hkl1, hkl2):
found = True
unique[hkl2].append(hkl1)
break
if not found:
unique[hkl1].append(hkl1)
pretty_unique = {}
for k, v in unique.items():
pretty_unique[sorted(v)[-1]] = len(v)
return pretty_unique | Returns unique families of Miller indices. Families must be permutations
of each other.
Args:
hkls ([h, k, l]): List of Miller indices.
Returns:
{hkl: multiplicity}: A dict with unique hkl and multiplicity. | juraj-google-style |
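A stdlib-only sketch of the grouping, with tuples used as dict keys and `np.abs` replaced by the builtin `abs`:

```python
import collections

def get_unique_families(hkls):
    # Two indices belong to the same family when their sorted
    # absolute values are equal (i.e. they are signed permutations).
    def is_perm(hkl1, hkl2):
        return sorted(abs(i) for i in hkl1) == sorted(abs(i) for i in hkl2)

    unique = collections.defaultdict(list)
    for hkl1 in hkls:
        for hkl2 in unique:
            if is_perm(hkl1, hkl2):
                unique[hkl2].append(hkl1)
                break
        else:
            unique[hkl1].append(hkl1)
    # Represent each family by its lexicographically largest member.
    return {sorted(v)[-1]: len(v) for v in unique.values()}
```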
def __init__(self, start_at):
super().__init__()
self._timeout = start_at
self._timeout_triggered = False | Creates a timeout behaviour, which is run at start_at
Args:
start_at (datetime.datetime): when to start the behaviour | juraj-google-style |
def fram_wave(waveform: np.array, hop_length: int=160, fft_window_size: int=400, center: bool=True):
warnings.warn('The function `fram_wave` is deprecated and will be removed in version 4.31.0 of Transformers', FutureWarning)
frames = []
for i in range(0, waveform.shape[0] + 1, hop_length):
if center:
half_window = (fft_window_size - 1) // 2 + 1
start = i - half_window if i > half_window else 0
end = i + half_window if i < waveform.shape[0] - half_window else waveform.shape[0]
frame = waveform[start:end]
if start == 0:
padd_width = (-i + half_window, 0)
frame = np.pad(frame, pad_width=padd_width, mode='reflect')
elif end == waveform.shape[0]:
padd_width = (0, i - waveform.shape[0] + half_window)
frame = np.pad(frame, pad_width=padd_width, mode='reflect')
else:
frame = waveform[i:i + fft_window_size]
frame_width = frame.shape[0]
if frame_width < waveform.shape[0]:
frame = np.lib.pad(frame, pad_width=(0, fft_window_size - frame_width), mode='constant', constant_values=0)
frames.append(frame)
frames = np.stack(frames, 0)
return frames | In order to compute the short time fourier transform, the waveform needs to be split in overlapping windowed
segments called `frames`.
The window length (`fft_window_size`) defines how much of the signal is contained in each frame, while the hop length
defines the step between the beginning of each new frame.
Args:
waveform (`np.array` of shape `(sample_length,)`):
The raw waveform which will be split into smaller chunks.
hop_length (`int`, *optional*, defaults to 160):
Step between each window of the waveform.
fft_window_size (`int`, *optional*, defaults to 400):
Defines the size of the window.
center (`bool`, defaults to `True`):
Whether or not to center each frame around the middle of the frame. Centering is done by reflecting the
waveform on the left and on the right.
Return:
framed_waveform (`np.array` of shape `(waveform.shape // hop_length , fft_window_size)`):
The framed waveforms that can be fed to `np.fft`. | github-repos |
def get_appliance(self, id_or_uri, fields=''):
uri = self.URI + '/image-streamer-appliances/' + extract_id_from_uri(id_or_uri)
if fields:
uri += '?fields=' + fields
return self._client.get(uri) | Gets the particular Image Streamer resource based on its ID or URI.
Args:
id_or_uri:
Can be either the Os Deployment Server ID or the URI
fields:
Specifies which fields should be returned in the result.
Returns:
dict: Image Streamer resource. | juraj-google-style |
def processor_groups(mesh_shape, group_dims):
group_numbers = [pnum_to_group(mesh_shape, group_dims, pnum) for pnum in range(mesh_shape.size)]
ret = []
for (pnum, g) in enumerate(group_numbers):
while (len(ret) <= g):
ret.append([])
ret[g].append(pnum)
return ret | Groups of processors which differ only in the given dimensions.
Args:
mesh_shape: a Shape
group_dims: a list of integers
Returns:
a list of lists of integers (processor numbers) | codesearchnet |
def get_unique_variable(name):
candidates = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, name)
if (not candidates):
raise ValueError("Couldn't find variable %s" % name)
for candidate in candidates:
if (candidate.op.name == name):
return candidate
raise ValueError('Variable %s does not uniquely identify a variable' % name)
Args:
name: a name that uniquely identifies the variable.
Returns:
a tensorflow variable.
Raises:
ValueError: if no variable uniquely identified by the name exists. | codesearchnet |
def export_artifacts(self, processed_artifacts, sketch_id):
for timeline_name, artifact_path in processed_artifacts:
print('Uploading {0:s} to timeline {1:s}'.format(
artifact_path, timeline_name))
new_timeline_id = self.upload_timeline(timeline_name, artifact_path)
self.add_timeline_to_sketch(sketch_id, new_timeline_id)
return sketch_id | Upload provided artifacts to specified, or new if non-existent, sketch.
Args:
processed_artifacts: List of (timeline_name, artifact_path) tuples
sketch_id: ID of sketch to append the timeline to
Returns:
int: ID of sketch. | juraj-google-style |
def set_match_statements(self, name, action, seqno, statements):
try:
current_statements = self.get(name)[action][seqno]['match']
except:
current_statements = []
commands = list()
for entry in set(current_statements).difference(statements):
commands.append(('route-map %s %s %s' % (name, action, seqno)))
commands.append(('no match %s' % entry))
for entry in set(statements).difference(current_statements):
commands.append(('route-map %s %s %s' % (name, action, seqno)))
commands.append(('match %s' % entry))
return (self.configure(commands) if commands else True) | Configures the match statements within the routemap clause.
The final configuration of match statements will reflect the list
of statements passed into the statements attribute. This implies
match statements found in the routemap that are not specified in the
statements attribute will be removed.
Args:
name (string): The full name of the routemap.
action (string): The action to take for this routemap clause.
seqno (integer): The sequence number for the routemap clause.
statements (list): A list of the match-related statements. Note
that the statements should omit the leading
match.
Returns:
True if the operation succeeds otherwise False | codesearchnet |
def idle(self, stop_signals: tuple = (SIGINT, SIGTERM, SIGABRT)):
def signal_handler(*args):
self.is_idle = False
for s in stop_signals:
signal(s, signal_handler)
self.is_idle = True
while self.is_idle:
time.sleep(1)
self.stop() | Blocks the program execution until one of the signals are received,
then gently stop the Client by closing the underlying connection.
Args:
stop_signals (``tuple``, *optional*):
Iterable containing signals the signal handler will listen to.
Defaults to (SIGINT, SIGTERM, SIGABRT). | juraj-google-style |
def _publish_scan_response(self, client):
devices = self._manager.scanned_devices
converted_devs = []
for uuid, info in devices.items():
slug = self._build_device_slug(uuid)
message = {}
message['uuid'] = uuid
if uuid in self._connections:
message['user_connected'] = True
elif 'user_connected' in info:
message['user_connected'] = info['user_connected']
else:
message['user_connected'] = False
message['connection_string'] = slug
message['signal_strength'] = info['signal_strength']
converted_devs.append({x: y for x, y in message.items()})
message['type'] = 'notification'
message['operation'] = 'advertisement'
self.client.publish(self.topics.gateway_topic(slug, 'data/advertisement'), message)
probe_message = {}
probe_message['type'] = 'response'
probe_message['client'] = client
probe_message['success'] = True
probe_message['devices'] = converted_devs
self.client.publish(self.topics.status, probe_message) | Publish a scan response message
The message contains all of the devices that are currently known
to this agent. Connection strings for direct connections are
translated to what is appropriate for this agent.
Args:
client (string): A unique id for the client that made this request | juraj-google-style |
def get_saved_model_tag_sets(saved_model_dir):
saved_model = read_saved_model(saved_model_dir)
all_tags = []
for meta_graph_def in saved_model.meta_graphs:
all_tags.append(list(meta_graph_def.meta_info_def.tags))
return all_tags | Retrieves all the tag-sets available in the SavedModel.
Args:
saved_model_dir: Directory containing the SavedModel.
Returns:
List of all tag-sets in the SavedModel, where a tag-set is represented as a
list of strings. | github-repos |
def render_wrapper(self, region='us-east-1'):
base = self.settings['pipeline']['base']
if self.base:
base = self.base
email = self.settings['pipeline']['notifications']['email']
slack = self.settings['pipeline']['notifications']['slack']
baking_process = self.settings['pipeline']['image']['builder']
provider = 'aws'
root_volume_size = self.settings['pipeline']['image']['root_volume_size']
bake_instance_type = self.settings['pipeline']['image']['bake_instance_type']
ami_id = ami_lookup(name=base, region=region)
ami_template_file = generate_packer_filename(provider, region, baking_process)
pipeline_id = self.compare_with_existing(region=region)
data = {
'app': {
'ami_id': ami_id,
'appname': self.app_name,
'group_name': self.group_name,
'repo_name': self.repo_name,
'base': base,
'environment': 'packaging',
'region': region,
'triggerjob': self.trigger_job,
'run_as_user': DEFAULT_RUN_AS_USER,
'email': email,
'slack': slack,
'root_volume_size': root_volume_size,
'bake_instance_type': bake_instance_type,
'ami_template_file': ami_template_file,
'pipeline': self.settings['pipeline']
},
'id': pipeline_id
}
self.log.debug('Wrapper app data:\n%s', pformat(data))
wrapper = get_template(template_file='pipeline/pipeline_wrapper.json.j2', data=data, formats=self.generated)
return json.loads(wrapper) | Generate the base Pipeline wrapper.
This renders the non-repeatable stages in a pipeline, like jenkins, baking, tagging and notifications.
Args:
region (str): AWS Region.
Returns:
dict: Rendered Pipeline wrapper. | juraj-google-style |
def GetDataStream(self, name, case_sensitive=True):
if not isinstance(name, py2to3.STRING_TYPES):
raise ValueError('Name is not a string.')
name_lower = name.lower()
matching_data_stream = None
for data_stream in self._GetDataStreams():
if data_stream.name == name:
return data_stream
if not case_sensitive and data_stream.name.lower() == name_lower:
if not matching_data_stream:
matching_data_stream = data_stream
return matching_data_stream | Retrieves a data stream by name.
Args:
name (str): name of the data stream.
case_sensitive (Optional[bool]): True if the name is case sensitive.
Returns:
DataStream: a data stream or None if not available.
Raises:
ValueError: if the name is not string. | juraj-google-style |
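The lookup logic in isolation: exact matches win immediately, and with `case_sensitive=False` the first case-insensitive match is kept as a fallback. A sketch with a stand-in stream type (the `namedtuple` is illustrative, not the real dfVFS class):

```python
from collections import namedtuple

DataStream = namedtuple('DataStream', ['name'])

def get_data_stream(streams, name, case_sensitive=True):
    if not isinstance(name, str):
        raise ValueError('Name is not a string.')
    name_lower = name.lower()
    matching = None
    for stream in streams:
        if stream.name == name:
            return stream          # exact match always wins
        if not case_sensitive and stream.name.lower() == name_lower:
            if matching is None:
                matching = stream  # remember the first loose match
    return matching
```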
def embedding_lookup(self, features: Any, weights: Optional[Any]=None) -> Any:
if not self._built:
self.build()
nest.assert_same_structure(features, self._feature_config)
flat_inputs = nest.flatten(features)
flat_weights = [None] * len(flat_inputs)
if weights is not None:
nest.assert_same_structure(features, weights)
flat_weights = nest.flatten(weights)
flat_features = nest.flatten_with_joined_string_paths(self._feature_config)
outputs = []
for inp, weight, (path, feature) in zip(flat_inputs, flat_weights, flat_features):
table = self.embedding_tables[feature.table]
if weight is not None:
if isinstance(inp, tensor.Tensor):
raise ValueError('Weight specified for {}, but input is dense.'.format(path))
elif type(weight) is not type(inp):
raise ValueError('Weight for {} is of type {} but it does not match type of the input which is {}.'.format(path, type(weight), type(inp)))
elif feature.max_sequence_length > 0:
raise ValueError('Weight specified for {}, but this is a sequence feature.'.format(path))
if isinstance(inp, tensor.Tensor):
if feature.max_sequence_length > 0:
raise ValueError('Feature {} is a sequence feature but a dense tensor was passed.'.format(path))
outputs.append(embedding_ops.embedding_lookup_v2(table, inp))
elif isinstance(inp, sparse_tensor.SparseTensor):
outputs.append(self._embedding_lookup_for_sparse_tensor(inp, weight, table, feature))
elif isinstance(inp, ragged_tensor.RaggedTensor):
outputs.append(self._embedding_lookup_for_ragged_tensor(inp, weight, table, feature))
else:
raise ValueError('Input {} is type {}. Tensor, SparseTensor or RaggedTensor expected.'.format(path, type(inp)))
return nest.pack_sequence_as(self._feature_config, outputs) | Apply embedding lookup on TPUs using Tensorcore.
Note that all the sparse and ragged tensors will be converted to dense
tensors on CPU and then passed to the TPU to do embedding look up. Large
embedding lookup is not supported by this API, use the TPUEmbedding mid
level api instead.
Args:
features: a nested structure of Tensors, SparseTensors or RaggedTensors.
weights: a nested structure of Tensors, SparseTensors or RaggedTensors or
None for no weights. If not None, structure must match that of inputs,
but entries are allowed to be None.
Returns:
A nested structure of Tensors with the same structure as inputs. | github-repos |
def update(self, domain, type_name, search_command, body):
return self._request(domain, type_name, search_command, 'PUT', body) | Update entry in ThreatConnect Data Store
Args:
domain (string): One of 'local', 'organization', or 'system'.
type_name (string): This is a free form index type name. The ThreatConnect API will use
this resource verbatim.
search_command (string): Search command to pass to ES.
body (str): JSON body | codesearchnet |
def get_token(self, text, start=0):
best_class = best_match = None
for token_class, match in self.matching_tokens(text):
if best_match and best_match.end() >= match.end():
continue
best_match = match
best_class = token_class
return best_class, best_match | Retrieve the next token from some text.
Args:
text (str): the text from which tokens should be extracted
Returns:
(token_kind, token_text): the token kind and its content. | juraj-google-style |
def index_2d(seqs: List[List[Any]], target: Any) -> Tuple[int, int]:
for i in range(len(seqs)):
for j in range(len(seqs[i])):
if (seqs[i][j] == target):
return (i, j)
raise ValueError('Item not present.') | Finds the first index of a target item within a list of lists.
Args:
seqs: The list of lists to search.
target: The item to find.
Raises:
ValueError: Item is not present. | codesearchnet |
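The same row-major scan written with `enumerate`, which reads more idiomatically than index arithmetic:

```python
from typing import Any, List, Tuple

def index_2d(seqs: List[List[Any]], target: Any) -> Tuple[int, int]:
    # Return the first (row, column) position holding target.
    for i, row in enumerate(seqs):
        for j, item in enumerate(row):
            if item == target:
                return i, j
    raise ValueError('Item not present.')
```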
def _ParseCommentRecord(self, structure):
comment = structure[1]
if comment.startswith('Version'):
(_, _, self._version) = comment.partition(':')
elif comment.startswith('Software'):
(_, _, self._software) = comment.partition(':')
elif comment.startswith('Time'):
(_, _, time_format) = comment.partition(':')
if ('local' in time_format.lower()):
self._use_local_timezone = True | Parse a comment and store appropriate attributes.
Args:
structure (pyparsing.ParseResults): parsed log line. | codesearchnet |
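The branching relies on `str.partition`, which always returns a 3-tuple even when the separator is absent. A standalone sketch with a plain dict standing in for the parser's attributes (key names here are illustrative):

```python
def parse_comment_record(comment, state):
    # Mirrors the attribute updates of the parser above.
    if comment.startswith('Version'):
        _, _, state['version'] = comment.partition(':')
    elif comment.startswith('Software'):
        _, _, state['software'] = comment.partition(':')
    elif comment.startswith('Time'):
        _, _, time_format = comment.partition(':')
        if 'local' in time_format.lower():
            state['use_local_timezone'] = True
    return state
```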
def mset(self, values):
for (key, value) in values.items():
self.set(key, value) | Set the value of several keys at once.
Args:
values (dict): maps a key to its value. | codesearchnet |
def stream_sample(self, md5, kwargs=None):
max_rows = kwargs.get('max_rows', None) if kwargs else None
sample = self.get_sample(md5)['sample']
raw_bytes = sample['raw_bytes']
type_tag = sample['type_tag']
if type_tag == 'bro':
bro_log = bro_log_reader.BroLogReader(convert_datetimes=False)
mem_file = StringIO(raw_bytes)
generator = bro_log.read_log(mem_file)
return generator
elif type_tag == 'els_query':
els_log = json.loads(raw_bytes)
if 'fields' in els_log['hits']['hits'][0]:
generator = (row['fields'] for row in els_log['hits']['hits'][:max_rows])
else:
generator = (row['_source'] for row in els_log['hits']['hits'][:max_rows])
return generator
elif type_tag == 'log':
generator = ({'row':row} for row in raw_bytes.split('\n')[:max_rows])
return generator
elif type_tag == 'json':
generator = (row for row in json.loads(raw_bytes)[:max_rows])
return generator
else:
raise RuntimeError('Cannot stream file %s with type_tag:%s' % (md5, type_tag)) | Stream the sample by giving back a generator, typically used on 'logs'.
Args:
md5: the md5 of the sample
kwargs: a way of specifying subsets of samples (None for all)
max_rows: the maximum number of rows to return
Returns:
A generator that yields rows of the file/log | juraj-google-style |
def index_library_datasets(self, tick_f=None):
dataset_n = 0
partition_n = 0
def tick(d, p):
if tick_f:
tick_f('datasets: {} partitions: {}'.format(d, p))
for dataset in self.library.datasets:
if self.backend.dataset_index.index_one(dataset):
dataset_n += 1
tick(dataset_n, partition_n)
for partition in dataset.partitions:
self.backend.partition_index.index_one(partition)
partition_n += 1
tick(dataset_n, partition_n)
else:
pass | Indexes all datasets of the library.
Args:
tick_f (callable, optional): callable of one argument. Gets string with index state. | juraj-google-style |
def fswap(p, q):
yield cirq.ISWAP(q, p), cirq.Z(p) ** 1.5
yield cirq.Z(q) ** 1.5
one iSWAP gate.
Args:
p: the id of the first qubit
q: the id of the second qubit | codesearchnet |
def inner_text(node):
from lxml import etree
parts = [node.text]
for child in node.getchildren():
parts.append(etree.tostring(child, encoding="utf-8", method="text"))
parts.append(child.tail)
return "".join(map(decode_bytes, filter(None, parts))) | Returns the inner text of a given XML node, excluding tags.
Args:
node: (lxml.etree.Element): The node whose inner text is desired.
Returns:
str: The inner text of the node. | juraj-google-style |
def _parse_domain_id(self, config):
match = re.search('domain-id (.+)$', config)
value = (match.group(1) if match else None)
return dict(domain_id=value) | Scans the config block and parses the domain-id value
Args:
config (str): The config block to scan
Returns:
dict: A dict object that is intended to be merged into the
resource dict | codesearchnet |
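The extraction in isolation. `re.M` is added here so `$` anchors at line ends inside a multi-line config block — an assumption about the intended input, not part of the original pattern:

```python
import re

def parse_domain_id(config):
    # Pull the value following 'domain-id', or None when absent.
    match = re.search(r'domain-id (.+)$', config, re.M)
    value = match.group(1) if match else None
    return dict(domain_id=value)
```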
def get(self, key):
value = self.child_datastore.get(key)
return self.deserializedValue(value) | Return the object named by key or None if it does not exist.
Retrieves the value from the ``child_datastore``, and de-serializes
it on the way out.
Args:
key: Key naming the object to retrieve
Returns:
object or None | codesearchnet |
def datasets_update(self, dataset_name, dataset_info):
url = Api._ENDPOINT + (Api._DATASETS_PATH % dataset_name)
return datalab.utils.Http.request(url, method='PUT', data=dataset_info,
credentials=self._credentials) | Updates the Dataset info.
Args:
dataset_name: the name of the dataset to update as a tuple of components.
dataset_info: the Dataset resource with updated fields. | juraj-google-style |
class AltCLIPEncoder(nn.Module):
def __init__(self, config: AltCLIPConfig):
super().__init__()
self.config = config
self.layers = nn.ModuleList([AltCLIPEncoderLayer(config) for _ in range(config.num_hidden_layers)])
self.gradient_checkpointing = False
def forward(self, inputs_embeds, attention_mask: Optional[torch.Tensor]=None, causal_attention_mask: Optional[torch.Tensor]=None, output_attentions: Optional[bool]=None, output_hidden_states: Optional[bool]=None, return_dict: Optional[bool]=None) -> Union[Tuple, BaseModelOutput]:
output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
output_hidden_states = output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
return_dict = return_dict if return_dict is not None else self.config.use_return_dict
encoder_states = () if output_hidden_states else None
all_attentions = () if output_attentions else None
hidden_states = inputs_embeds
for idx, encoder_layer in enumerate(self.layers):
if output_hidden_states:
encoder_states = encoder_states + (hidden_states,)
if self.gradient_checkpointing and self.training:
layer_outputs = self._gradient_checkpointing_func(encoder_layer.__call__, hidden_states, attention_mask, causal_attention_mask, output_attentions)
else:
layer_outputs = encoder_layer(hidden_states, attention_mask, causal_attention_mask, output_attentions=output_attentions)
hidden_states = layer_outputs[0]
if output_attentions:
all_attentions = all_attentions + (layer_outputs[1],)
if output_hidden_states:
encoder_states = encoder_states + (hidden_states,)
if not return_dict:
return tuple((v for v in [hidden_states, encoder_states, all_attentions] if v is not None))
return BaseModelOutput(last_hidden_state=hidden_states, hidden_states=encoder_states, attentions=all_attentions) | Transformer encoder consisting of `config.num_hidden_layers` self attention layers. Each layer is a
[`AltCLIPEncoderLayer`].
Args:
config: AltCLIPConfig | github-repos |
def _verify_watches(self, watch_opts, expected_output_slot, expected_debug_ops, expected_debug_urls):
node_names = []
for watch in watch_opts:
node_names.append(watch.node_name)
if watch.node_name == '*':
self.assertEqual(-1, watch.output_slot)
self.assertEqual(expected_debug_ops, watch.debug_ops)
self.assertEqual(expected_debug_urls, watch.debug_urls)
else:
self.assertEqual(expected_output_slot, watch.output_slot)
self.assertEqual(expected_debug_ops, watch.debug_ops)
self.assertEqual(expected_debug_urls, watch.debug_urls)
return node_names | Verify a list of debug tensor watches.
This requires all watches in the watch list have exactly the same
output_slot, debug_ops and debug_urls.
Args:
watch_opts: Repeated protobuf field of DebugTensorWatch.
expected_output_slot: Expected output slot index, as an integer.
expected_debug_ops: Expected debug ops, as a list of strings.
expected_debug_urls: Expected debug URLs, as a list of strings.
Returns:
List of node names from the list of debug tensor watches. | github-repos |
def create(self, vectors):
if (type(vectors) is dict):
vectors = [vectors]
for vector in vectors:
if (not ('properties' in list(vector.keys()))):
raise Exception('Vector does not contain "properties" field.')
if (not ('item_type' in list(vector['properties'].keys()))):
raise Exception('Vector does not contain "item_type".')
if (not ('ingest_source' in list(vector['properties'].keys()))):
raise Exception('Vector does not contain "ingest_source".')
r = self.gbdx_connection.post(self.create_url, data=json.dumps(vectors))
r.raise_for_status()
return r.json() | Create a vectors in the vector service.
Args:
vectors: A single geojson vector or a list of geojson vectors. Item_type and ingest_source are required.
Returns:
(list): IDs of the vectors created
Example:
>>> vectors.create(
... {
... "type": "Feature",
... "geometry": {
... "type": "Point",
... "coordinates": [1.0,1.0]
... },
... "properties": {
... "text" : "item text",
... "name" : "item name",
... "item_type" : "type",
... "ingest_source" : "source",
... "attributes" : {
... "latitude" : 1,
... "institute_founded" : "2015-07-17",
... "mascot" : "moth"
... }
... }
... }
... ) | codesearchnet |
def add_permissions(self, grp_name, resource, permissions):
self.service.add_permissions(grp_name, resource, permissions, self.url_prefix, self.auth, self.session, self.session_send_opts) | Add additional permissions for the group associated with the given resource.
Args:
grp_name (string): Name of group.
resource (intern.resource.boss.BossResource): Identifies which data model object to operate on.
permissions (list): List of permissions to add to the given resource.
Raises:
requests.HTTPError on failure. | codesearchnet |
def GetHTTPHeaders(self):
http_headers = self._adwords_client.oauth2_client.CreateHttpHeader()
if self.enable_compression:
http_headers['accept-encoding'] = 'gzip'
http_headers.update(self.custom_http_headers)
return http_headers | Returns the HTTP headers required for request authorization.
Returns:
A dictionary containing the required headers. | codesearchnet |
def to_json_str(value: Any, *, json_indent=None, **kwargs) -> str:
def _encode_int_keys(v):
if isinstance(v, dict):
return {f'n_:{k}' if isinstance(k, int) else k: _encode_int_keys(v) for k, v in v.items()}
elif isinstance(v, list):
return [_encode_int_keys(v) for v in v]
return v
return json.dumps(_encode_int_keys(to_json(value, **kwargs)), indent=json_indent) | Serializes a (maybe) symbolic value into a JSON string.
Example::
@pg.members([
('x', pg.typing.Any())
])
class A(pg.Object):
pass
a1 = A(1)
json_str = a1.to_json_str()
a2 = pg.from_json_str(json_str)
assert pg.eq(a1, a2)
Args:
value: Value to serialize.
json_indent: The size of indentation for JSON format.
**kwargs: Additional keyword arguments that are passed to ``pg.to_json``.
Returns:
A JSON string. | github-repos |
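The interesting part is the `_encode_int_keys` helper: JSON objects only allow string keys, so integer keys are rewritten with an `n_:` prefix before serialization (and presumably stripped again by `from_json_str`). A standalone sketch of just that transform:

```python
import json

def encode_int_keys(v):
    # Recursively rewrite integer dict keys as 'n_:<k>' strings.
    if isinstance(v, dict):
        return {f'n_:{k}' if isinstance(k, int) else k: encode_int_keys(x)
                for k, x in v.items()}
    if isinstance(v, list):
        return [encode_int_keys(x) for x in v]
    return v
```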
def cross_section(verts, tris, plane_orig, plane_normal, **kwargs):
mesh = TriangleMesh(verts, tris)
plane = Plane(plane_orig, plane_normal)
return cross_section_mesh(mesh, plane, **kwargs) | Compute the planar cross section of a mesh. This returns a set of
polylines.
Args:
verts: Nx3 array of the vertices position
faces: Nx3 array of the faces, containing vertex indices
plane_orig: 3-vector indicating the plane origin
plane_normal: 3-vector indicating the plane normal
Returns:
A list of Nx3 arrays, each representing a disconnected portion
of the cross section as a polyline | juraj-google-style |
def get_guild_info(self, id: str) -> Dict[str, Any]:
return self._query(f'guilds/{id}', 'GET') | Get a guild's information by its id
Args:
id: snowflake id of the guild
Returns:
Dictionary data for the guild API object
Example:
{
"id": "41771983423143937",
"name": "Discord Developers",
"icon": "SEkgTU9NIElUUyBBTkRSRUkhISEhISEh",
"splash": null,
"owner_id": "80351110224678912",
"region": "us-east",
"afk_channel_id": "42072017402331136",
"afk_timeout": 300,
"embed_enabled": true,
"embed_channel_id": "41771983444115456",
"verification_level": 1,
"roles": [],
"emojis": [],
"features": ["INVITE_SPLASH"],
"unavailable": false
} | codesearchnet |
def prepare_soap_envelope(self, prepared_soap_header, prepared_soap_body):
soap_env_template = (
'<?xml version="1.0"?>'
'<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"'
' s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">'
'{soap_header}'
'<s:Body>'
'{soap_body}'
'</s:Body>'
'</s:Envelope>')
return soap_env_template.format(
soap_header=prepared_soap_header,
soap_body=prepared_soap_body) | Prepare the SOAP Envelope for sending.
Args:
prepared_soap_header (str): A SOAP Header prepared by
`prepare_soap_header`
prepared_soap_body (str): A SOAP Body prepared by
`prepare_soap_body`
Returns:
str: A prepared SOAP Envelope | juraj-google-style |
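A self-contained version of the wrapping, using the standard SOAP 1.1 envelope and encoding-style namespaces:

```python
def prepare_soap_envelope(soap_header, soap_body):
    # Standard SOAP 1.1 envelope/encoding namespaces.
    template = (
        '<?xml version="1.0"?>'
        '<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"'
        ' s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">'
        '{soap_header}'
        '<s:Body>{soap_body}</s:Body>'
        '</s:Envelope>')
    return template.format(soap_header=soap_header, soap_body=soap_body)
```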
def create_and_fill_np_array(start_or_end_logits, dataset, max_len):
step = 0
logits_concat = np.full((len(dataset), max_len), -100, dtype=np.float64)
for i, output_logit in enumerate(start_or_end_logits):
batch_size = output_logit.shape[0]
cols = output_logit.shape[1]
if step + batch_size < len(dataset):
logits_concat[step:step + batch_size, :cols] = output_logit
else:
logits_concat[step:, :cols] = output_logit[:len(dataset) - step]
step += batch_size
return logits_concat | Create and fill numpy array of size len_of_validation_data * max_length_of_output_tensor
Args:
start_or_end_logits(:obj:`tensor`):
This is the output predictions of the model. We can only enter either start or end logits.
eval_dataset: Evaluation dataset
max_len(:obj:`int`):
The maximum length of the output tensor. ( See the model.eval() part for more details ) | github-repos |
def copy_remote_file(web_file, destination):
size = 0
dir_name = os.path.dirname(destination)
if (not os.path.exists(dir_name)):
os.makedirs(dir_name)
with open(destination, 'wb') as file_:
chunk_size = (8 * 1024)
for chunk in web_file.iter_content(chunk_size=chunk_size):
if chunk:
file_.write(chunk)
size += len(chunk)
return size | Check if exist the destination path, and copy the online resource
file to local.
Args:
:web_file: reference to online file resource to take.
:destination: path to store the file. | codesearchnet |
def cancel(self, job_ids):
statuses = []
for job_id in job_ids:
try:
self.delete_instance(job_id)
statuses.append(True)
self.provisioned_blocks -= 1
except Exception:
statuses.append(False)
return statuses | Cancels the resources identified by the job_ids provided by the user.
Args:
- job_ids (list): A list of job identifiers
Returns:
- A list of status from cancelling the job which can be True, False
Raises:
- ExecutionProviderException or its subclasses | juraj-google-style |
def validate_config_has_one_of(config, one_of_keys):
intersection = set(config).intersection(one_of_keys)
if (len(intersection) > 1):
raise Exception(('Only one of the values in "%s" is needed' % ', '.join(intersection)))
if (len(intersection) == 0):
raise Exception(('One of the values in "%s" is needed' % ', '.join(one_of_keys))) | Validate a config dictionary to make sure it has one and only one
key in one_of_keys.
Args:
config: the config to validate.
one_of_keys: the list of possible keys that config can have one and only one.
Raises:
Exception if the config does not have any of them, or multiple of them. | codesearchnet |
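A quick usage sketch of the one-of validation (the config keys below are illustrative, not taken from any real pipeline config):

```python
def validate_config_has_one_of(config, one_of_keys):
    intersection = set(config).intersection(one_of_keys)
    if len(intersection) > 1:
        raise Exception('Only one of the values in "%s" is needed' % ', '.join(intersection))
    if len(intersection) == 0:
        raise Exception('One of the values in "%s" is needed' % ', '.join(one_of_keys))

# Exactly one of 'table'/'query' present: passes silently.
validate_config_has_one_of({'table': 't1', 'verbose': True}, ['table', 'query'])

# Both present: rejected.
try:
    validate_config_has_one_of({'table': 't1', 'query': 'q'}, ['table', 'query'])
except Exception as exc:
    print('rejected:', exc)
```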
def deserialize(config, custom_objects=None):
from tensorflow.python.keras.mixed_precision import loss_scale_optimizer
all_classes = {'adadelta': adadelta_v2.Adadelta, 'adagrad': adagrad_v2.Adagrad, 'adam': adam_v2.Adam, 'adamax': adamax_v2.Adamax, 'nadam': nadam_v2.Nadam, 'rmsprop': rmsprop_v2.RMSprop, 'sgd': gradient_descent_v2.SGD, 'ftrl': ftrl.Ftrl, 'lossscaleoptimizer': loss_scale_optimizer.LossScaleOptimizer, 'lossscaleoptimizerv1': loss_scale_optimizer.LossScaleOptimizer}
if config['class_name'].lower() in all_classes:
config['class_name'] = config['class_name'].lower()
return deserialize_keras_object(config, module_objects=all_classes, custom_objects=custom_objects, printable_module_name='optimizer') | Inverse of the `serialize` function.
Args:
config: Optimizer configuration dictionary.
custom_objects: Optional dictionary mapping names (strings) to custom
objects (classes and functions) to be considered during deserialization.
Returns:
A Keras Optimizer instance. | github-repos |
def ValidatePassword(self, password):
password = to_aes_key(password)
return (hashlib.sha256(password).digest() == self.LoadStoredData('PasswordHash')) | Validates if the provided password matches with the stored password.
Args:
password (string): a password.
Returns:
bool: the provided password matches with the stored password. | codesearchnet |
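A stripped-down sketch of the digest-comparison idea, assuming stand-ins for the wallet internals: `b'derived-key'` plays the role of the key `to_aes_key()` would derive, and `stored_hash` the value `LoadStoredData('PasswordHash')` would return.

```python
import hashlib

# Assumption: the stored value is the SHA-256 digest of the derived key.
stored_hash = hashlib.sha256(b'derived-key').digest()

def validate_password(candidate_key):
    # Hash the candidate key and compare against the stored digest.
    return hashlib.sha256(candidate_key).digest() == stored_hash

print(validate_password(b'derived-key'))  # True
print(validate_password(b'wrong-key'))    # False
```

For code where timing side channels matter, `hmac.compare_digest` would be the safer comparison than `==`.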
def _validate_alias_name(alias_name):
if not alias_name:
raise CLIError(EMPTY_ALIAS_ERROR)
if not re.match('^[a-zA-Z]', alias_name):
raise CLIError(INVALID_STARTING_CHAR_ERROR.format(alias_name[0])) | Check if the alias name is valid.
Args:
alias_name: The name of the alias to validate. | juraj-google-style |
def __gt__(self, other):
if not isinstance(other, interface.DateTimeValues):
raise ValueError('Other not an instance of DateTimeValues')
return not isinstance(other, Never) | Determines if the date time values are greater than other.
Args:
other (DateTimeValues): date time values to compare against.
Returns:
bool: True if the date time values are greater than other.
Raises:
ValueError: if other is not an instance of DateTimeValues. | juraj-google-style |
def GetSubNodeByLocation(self, location):
for sub_node in self.sub_nodes:
sub_node_location = getattr(sub_node.path_spec, 'location', None)
if location == sub_node_location:
return sub_node
return None | Retrieves a sub scan node based on the location.
Args:
location (str): location that should match the location of the path
specification of a sub scan node.
Returns:
SourceScanNode: sub scan node or None if not available. | juraj-google-style |
def files(self, request, id):
gist = self.send(request, id).json()
return gist['files'] | Returns the files in the gist
Arguments:
request: an initial request object
id: the gist identifier
Returns:
A dict mapping each filename in the gist to its file data | codesearchnet
def detect_mbr(self, filename, offset, fs_id):
self.logger.debug('Detecting MBR partition type')
if fs_id not in self.__mbr_plugins:
return None
else:
plugins = self.__mbr_plugins.get(fs_id)
for plugin in plugins:
if plugin.detect(filename, offset):
return plugin.get_volume_object()
return None | Used by rawdisk.session.Session to match mbr partitions against
filesystem plugins.
Args:
filename: device or file that it will read in order to detect
the filesystem fs_id: filesystem id to match (ex. 0x07)
offset: offset for the filesystem that is being matched
Returns:
Volume object supplied by matched plugin.
If there is no match, None is returned | juraj-google-style |
def _validate_write(self, address):
if (not any((address.startswith(ns) for ns in self._write_list))):
raise AuthorizationException(address=address) | Raises an exception if the address is not allowed to be set
in this context, based on txn outputs.
Notes:
Checks that the address is either listed fully as one of the
outputs, or some portion of the address is listed as a namespace
in the outputs of the txn.
Args:
address (str): The address to be validated. The context manager
validates the address correctness (70 hex characters).
Returns:
None
Raises:
AuthorizationException | codesearchnet |
def get_vocabulary(self, include_special_tokens=True):
return self._lookup_layer.get_vocabulary(include_special_tokens) | Returns the current vocabulary of the layer.
Args:
include_special_tokens: If `True`, the returned vocabulary
will include the padding and OOV tokens,
and a term's index in the vocabulary will equal
the term's index when calling the layer. If `False`, the
returned vocabulary will not include any padding
or OOV tokens. | github-repos |
def infer(self, **kwargs) -> Any: | Returns the inferred value.
Args:
**kwargs: Optional keyword arguments for inference, which are usually
inferential subclass specific.
Returns:
Inferred value.
Raises:
AttributeError: If the value cannot be inferred. | github-repos |
def __init__(self, config: FastSpeech2ConformerConfig, num_layers=2, num_chans=384, kernel_size=3, dropout_rate=0.5):
super().__init__()
self.conv_layers = nn.ModuleList()
for idx in range(num_layers):
input_channels = config.hidden_size if idx == 0 else num_chans
layer = FastSpeech2ConformerPredictorLayer(input_channels, num_chans, kernel_size, dropout_rate)
self.conv_layers.append(layer)
self.linear = nn.Linear(num_chans, 1) | Initialize variance predictor module.
Args:
input_dim (`int`): Input dimension.
num_layers (`int`, *optional*, defaults to 2): Number of convolutional layers.
num_chans (`int`, *optional*, defaults to 384): Number of channels of convolutional layers.
kernel_size (`int`, *optional*, defaults to 3): Kernel size of convolutional layers.
dropout_rate (`float`, *optional*, defaults to 0.5): Dropout rate. | github-repos |
def NodeName(node):
if node.type < 256:
return token.tok_name[node.type]
else:
return pygram.python_grammar.number2symbol[node.type] | Produce a string name for a given node.
For a Leaf this is the token name, and for a Node this is the type.
Arguments:
node: a tree node
Returns:
Name as a string. | github-repos |
def permute(self, ordering: np.ndarray, *, axis: int) -> None:
if (axis == 0):
self.values = self.values[ordering, :]
elif (axis == 1):
self.values = self.values[:, ordering]
else:
raise ValueError('axis must be 0 or 1') | Permute the layer along an axis
Args:
axis: The axis to permute (0, permute the rows; 1, permute the columns)
ordering: The permutation vector | codesearchnet |
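A runnable sketch of `permute`, with `Layer` as a minimal stand-in for the owning class and the row/column reordering done via NumPy fancy indexing:

```python
import numpy as np

class Layer:
    """Minimal stand-in holding a 2-D values matrix."""

    def __init__(self, values):
        self.values = values

    def permute(self, ordering, *, axis):
        if axis == 0:
            self.values = self.values[ordering, :]   # reorder rows
        elif axis == 1:
            self.values = self.values[:, ordering]   # reorder columns
        else:
            raise ValueError('axis must be 0 or 1')

layer = Layer(np.array([[1, 2], [3, 4]]))
layer.permute(np.array([1, 0]), axis=0)  # swap the two rows
print(layer.values.tolist())  # [[3, 4], [1, 2]]
```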
def _pipeline_cell(args, cell_body):
name = args.get('name')
if name is None:
raise Exception('Pipeline name was not specified.')
import google.datalab.utils as utils
bq_pipeline_config = utils.commands.parse_config(
cell_body, utils.commands.notebook_environment())
try:
airflow_spec = \
google.datalab.contrib.bigquery.commands.get_airflow_spec_from_config(name,
bq_pipeline_config)
except AttributeError:
return "Perhaps you're missing: import google.datalab.contrib.bigquery.commands"
error_message = ''
gcs_dag_bucket = args.get('gcs_dag_bucket')
gcs_dag_file_path = args.get('gcs_dag_file_path')
if gcs_dag_bucket:
try:
airflow = google.datalab.contrib.pipeline.airflow.Airflow(gcs_dag_bucket, gcs_dag_file_path)
airflow.deploy(name, airflow_spec)
error_message += ("Airflow pipeline successfully deployed! View dashboard for more "
"details.\n")
except AttributeError:
return "Perhaps you're missing: import google.datalab.contrib.pipeline.airflow"
location = args.get('location')
environment = args.get('environment')
if location and environment:
try:
composer = google.datalab.contrib.pipeline.composer.Composer(location, environment)
composer.deploy(name, airflow_spec)
error_message += ("Composer pipeline successfully deployed! View dashboard for more "
"details.\n")
except AttributeError:
return "Perhaps you're missing: import google.datalab.contrib.pipeline.composer"
if args.get('debug'):
error_message += '\n\n' + airflow_spec
return error_message | Implements the pipeline subcommand in the %%bq magic.
Args:
args: the arguments following '%%bq pipeline'.
cell_body: Cell contents. | juraj-google-style |
def DownloadDir(aff4_path, output_dir, bufsize=8192, preserve_path=True):
if (not os.path.isdir(output_dir)):
os.makedirs(output_dir)
fd = aff4.FACTORY.Open(aff4_path)
for child in fd.OpenChildren():
if preserve_path:
full_dir = utils.JoinPath(output_dir, child.urn.Path())
full_dir = os.path.dirname(full_dir)
if (not os.path.isdir(full_dir)):
os.makedirs(full_dir)
outfile = os.path.join(full_dir, child.urn.Basename())
else:
outfile = os.path.join(output_dir, child.urn.Basename())
logging.info(u'Downloading %s to %s', child.urn, outfile)
with open(outfile, 'wb') as out_fd:
try:
buf = child.Read(bufsize)
while buf:
out_fd.write(buf)
buf = child.Read(bufsize)
except IOError as e:
logging.error('Failed to read %s. Err: %s', child.urn, e) | Take an aff4 path and download all files in it to output_dir.
Args:
aff4_path: Any aff4 path as a string
output_dir: A local directory to write to, will be created if not there.
bufsize: Buffer size to use.
preserve_path: If set all paths will be created. Note that this works for
collections as well. It will download all files in the collection. This
only downloads files that are already in the datastore, it doesn't queue
anything on the client. | codesearchnet |
def __init__(self, target_shape, **kwargs):
super(Reshape, self).__init__(**kwargs)
self.target_shape = tuple(target_shape) | Creates a `tf.keras.layers.Reshape` layer instance.
Args:
target_shape: Target shape. Tuple of integers, does not include the
samples dimension (batch size).
**kwargs: Any additional layer keyword arguments. | github-repos |
def setup_build(self):
if not self.make_imports_dir():
return set()
default_output = self.write_default_pyi()
self.write_ninja_preamble()
files = set()
module_to_imports_map = {}
module_to_output = {}
for module, action, deps, stage in self.yield_sorted_modules():
if files >= self.filenames:
logging.info('skipped: %s %s (%s)', action, module.name, stage)
continue
if action == Action.GENERATE_DEFAULT:
module_to_output[module] = default_output
continue
if stage == Stage.SINGLE_PASS:
files.add(module.full_path)
suffix = ''
elif stage == Stage.FIRST_PASS:
suffix = FIRST_PASS_SUFFIX
else:
assert stage == Stage.SECOND_PASS
files.add(module.full_path)
suffix = ''
imports_map = module_to_imports_map[module] = get_imports_map(deps, module_to_imports_map, module_to_output)
imports = self.write_imports(module.name, imports_map, suffix)
deps = tuple((module_to_output[m] for m in deps if module_to_output[m] != default_output))
module_to_output[module] = self.write_build_statement(module, action, deps, imports, suffix)
return files | Write out the full build.ninja file.
Returns:
All files with build statements. | github-repos |