blob_id stringlengths 40 40 | directory_id stringlengths 40 40 | path stringlengths 3 616 | content_id stringlengths 40 40 | detected_licenses listlengths 0 112 | license_type stringclasses 2 values | repo_name stringlengths 5 115 | snapshot_id stringlengths 40 40 | revision_id stringlengths 40 40 | branch_name stringclasses 777 values | visit_date timestamp[us]date 2015-08-06 10:31:46 2023-09-06 10:44:38 | revision_date timestamp[us]date 1970-01-01 02:38:32 2037-05-03 13:00:00 | committer_date timestamp[us]date 1970-01-01 02:38:32 2023-09-06 01:08:06 | github_id int64 4.92k 681M ⌀ | star_events_count int64 0 209k | fork_events_count int64 0 110k | gha_license_id stringclasses 22 values | gha_event_created_at timestamp[us]date 2012-06-04 01:52:49 2023-09-14 21:59:50 ⌀ | gha_created_at timestamp[us]date 2008-05-22 07:58:19 2023-08-21 12:35:19 ⌀ | gha_language stringclasses 149 values | src_encoding stringclasses 26 values | language stringclasses 1 value | is_vendor bool 2 classes | is_generated bool 2 classes | length_bytes int64 3 10.2M | extension stringclasses 188 values | content stringlengths 3 10.2M | authors listlengths 1 1 | author_id stringlengths 1 132 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
22774504cce4883aefe3fdae4ef4056acda15052 | b1d7cf329110f02b8175303ebd09475136e84b0e | /enderecos/migrations/0001_initial.py | 10bdeace7f9208ad618fce5bb3bc078303660d8a | [] | no_license | Aleleonel/projeto_rest | 9df70817f9955399afb75b02121aa9500c9492d1 | a72b4e3b17c22efdbd8001f843c21aa24e9e9ae6 | refs/heads/master | 2022-05-24T10:56:10.818898 | 2020-04-30T15:49:33 | 2020-04-30T15:49:33 | 260,252,578 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 906 | py | # Generated by Django 3.0.5 on 2020-04-30 14:26
from django.db import migrations, models
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='Endereco',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('linha1', models.CharField(max_length=150)),
('linha2', models.CharField(blank=True, max_length=150, null=True)),
('cidade', models.CharField(max_length=100)),
('estado', models.CharField(max_length=50)),
('pais', models.CharField(max_length=70)),
('latitude', models.IntegerField(blank=True, null=True)),
('longitude', models.IntegerField(blank=True, null=True)),
],
),
]
| [
"aleleonel@gmail.com"
] | aleleonel@gmail.com |
437f01bf9c7f7e5398eb3f5fc71d48341c80f91e | 3709d35f525801a27ba0a3549e123d6a0e424831 | /scaling_transformer_inference_efficiency/chunk.py | 49be1f56a9dae04ea2b9c1015c5d40b05de5c173 | [
"CC-BY-4.0",
"Apache-2.0"
] | permissive | jinlmsft/google-research | 2869f23ef1630e5ab5c342b7a1c9b74d11d23469 | da706f1407ee89e21870a30a547c00b5da7e44d2 | refs/heads/master | 2023-04-19T15:17:12.941312 | 2023-04-08T02:56:08 | 2023-04-08T02:59:56 | 286,369,579 | 0 | 0 | Apache-2.0 | 2023-04-10T00:42:21 | 2020-08-10T03:46:22 | null | UTF-8 | Python | false | false | 21,554 | py | # coding=utf-8
# Copyright 2023 The Google Research Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Types storing a chunk of text in raw (token IDs) or processed (vectors) form.
Type `Chunk` stores token IDs, and is typically used as the input of a model.
This is the "unprocessed" representation of text.
Type `FullChunkResult` stores the "full" outputs of the model on a `Chunk`: the
KV caches and the per-token logits. This is often very big: for example on
PaLM-8B the KV cache is 16KiB per token, and the per-token logits are 1MiB
per token. Because this type is big, we prefer not to return it to the host e.g.
on JIT boundaries. Instead, we prefer to only use it for the internals of the
model.
Type `ChunkResult` is a reduced version of `FullChunkResult` that is small
enough to be returned at JIT boundaries. It contains the KV cache, the top few
highest-probability logits for each token, and the full set of logits for the
last token. (On PaLM-8B, the KV cache is 16KiB per token, the
highest-probability logits are 64B per token, and the final-token logits are
1MiB but only on one token.) The `ChunkResult` has sufficient information in it
for most scoring use cases (getting per-token or per-sequence scores) as well as
for generation use cases.
Type `InferFn` is a function type that we expect model implementations to
provide. It represents a single forwards pass through a model, processing input
tokens in a `Chunk` and returning a `FullChunkResult` that corresponds to the
input tokens.
##### Splitting text into chunks
When preparing inputs to a model, you can put all the text in a single chunk, or
in multiple different chunks. For example, all the text in a single chunk:
```
# A chunk with batch size 2.
chunk = Chunk.tokenize(
vocab,
["Humans have 2 legs.",
"Humans have 3 legs.",
],
is_first_chunk=True)
```
Alternatively, text split over multiple chunks:
```
# Prefix has batch size 1.
prefix_chunk = Chunk.tokenize(vocab, ["Humans have"], is_first_chunk=True)
# Suffix has batch size 2.
suffix_chunk = Chunk.tokenize(vocab, ["2 legs.", "3 legs."],
is_first_chunk=False)
chunks = [prefix_chunk, suffix_chunk]
```
Here, `chunk` and `chunks` represent the same set of two sentences, but we
have split them into chunks differently. In particular, in the second
representation we have taken advantage of the common prefix "Humans have" and
stored it only once, rather than twice. This can make the model more efficient
when processing `chunks`. For example, compare inference on `chunk` vs `chunks`:
```
infer: InferFn # From somewhere
weights: Weights # From somewhere
# Processing `chunk` by itself
chunk_result = infer(weights, [], chunk)
# Processing `chunks`, in two infer calls.
prefix_result = infer(weights, [], prefix_chunk)
suffix_result = infer(weights, [prefix_result.kv_cache], suffix_chunk)
```
In this example, when processing `chunk`, the `infer` function must redundantly
run the model on the "Humans have" tokens twice: once for each batch element. In
contrast, when processing `chunks`, the first `infer` call processes the
"Humans have" tokens just once, and then in the second `infer` call this
processing is shared across both batch elements "2 legs." and "3 legs.".
"""
from typing import Any, Optional, Sequence, Tuple, Union
from flax import struct
import jax
from jax import lax
import jax.numpy as jnp
import jax.scipy
from jax.sharding import PartitionSpec as P
import numpy as np
from seqio.vocabularies import Vocabulary
import typing_extensions
from scaling_transformer_inference_efficiency import attention
from scaling_transformer_inference_efficiency import checkpoint
from scaling_transformer_inference_efficiency import partitioning
from scaling_transformer_inference_efficiency import special2
Weights = Any
_BOS_ID = 0
@struct.dataclass
class Chunk:
"""A chunk of token IDs. These are typically used as the input to a model."""
tokens: Union[np.ndarray, jnp.ndarray] # int32[batch, max_len]
lengths: Union[np.ndarray, jnp.ndarray] # int32[batch]
@classmethod
def logical_axes(cls):
return Chunk( # pytype: disable=wrong-arg-types # jax-ndarray
tokens=P('batch', 'time'),
lengths=P('batch'),
)
@classmethod
def physical_axes(cls):
"""Returns the partition specs for the weights in their physical axes."""
return jax.tree_map(partitioning.logical_to_physical, Chunk.logical_axes())
@classmethod
def zeros(cls, batch, seqlen):
"""Returns an all-zeros Chunk of the specified shape.
The returned Chunk doesn't have a useful meaning as text. This function is
primarily intended to be used as initial loop state, which will subsequently
be overwritten with meaningful token IDs.
Args:
batch: batch size.
seqlen: number of tokens in the chunk.
Returns:
A Chunk with zeros in all locations.
"""
return Chunk(
tokens=jnp.zeros((batch, seqlen), jnp.int32),
lengths=jnp.zeros((batch,), jnp.int32),
)
@classmethod
def tokenize(
cls,
vocab,
texts,
is_first_chunk,
append_eos = False,
pad_length = None,
):
"""Parses the text into token IDs and creates a Chunk from them.
For example:
```
# A chunk with batch size 2.
chunk = Chunk.tokenize(
vocab,
["Humans have 2 legs.",
"Humans have 3 legs.",
],
is_first_chunk=True)
```
Alternatively, the same text split over multiple chunks:
```
# Prefix has batch size 1.
prefix_chunk = Chunk.tokenize(vocab, ["Humans have"], is_first_chunk=True)
# Suffix has batch size 2.
suffix_chunk = Chunk.tokenize(vocab, ["2 legs.", "3 legs."],
is_first_chunk=False)
chunks = [prefix_chunk, suffix_chunk]
```
Args:
vocab: The vocabulary with which to parse the text into tokens.
texts: The batch of sequences to parse.
is_first_chunk: Whether this is the first chunk in a logical sequence. If
so, as part of tokenization we will prepend the special
"beginning-of-sequence" token (token ID = 0), which informs the model
that this is indeed the beginning of the sequence.
append_eos: Whether to append eos or not.
pad_length: Optionally pad all sequences to a specified length.
Returns:
The batch of sequences, parsed into a Chunk. The result's sequence length
equals the longest sequence length of any input string.
"""
# t5x/models.py;l=643
# t5x/google/prediction_service/handler.py;l=514
# t5x/google/prediction_service/handler.py;l=425
# seqio/dataset_providers.py;l=1106
# - task evaluation
# seqio/dataset_providers.py;l=943
# - preprocessor evaluation
# prediction_service.gin;l=41
# - Gin config
# seqio.DecoderFeatureConverter
# First we:
# * parse the strings into token IDs
# * pad all the sequences so their lengths equal the longest sequence's
# length, and form a batch
lengths = [] # List[int]
batch_tokens = [] # List[int32[seqlen]]. Each seqlen can be different.
max_length = 0
for text in texts:
# Parsing:
tokens = np.array(vocab.encode_tf(text))
if append_eos:
tokens = jnp.concatenate([tokens, np.array([vocab.eos_id])], axis=-1)
length, = tokens.shape
if length > max_length:
max_length = length
lengths.append(length)
batch_tokens.append(tokens)
if pad_length is not None:
max_length = pad_length
if is_first_chunk:
max_length = max_length - 1
# Padding to max length, and then concatenating into a batch
batch_tokens = np.array([
np.pad(
tokens, (0, max_length - tokens.shape[0]),
constant_values=vocab.pad_id) for tokens in batch_tokens
])
# ^ batch_tokens: int32[batch, seqlen]
lengths = np.array(lengths)
# ^ lengths: int32[batch]
# The model expects a beginning-of-sequence token (id equal to _BOS_ID) at
# the beginning of the logical string of text. If this is the first chunk in
# a list, we should add it. Otherwise, if it's a later chunk in the list,
# then the beginning-of-sequence token has already been added to the first
# one, so we don't need it again.
if is_first_chunk:
batch_tokens = jnp.concatenate([
jnp.full((batch_tokens.shape[0], 1), _BOS_ID, jnp.int32), batch_tokens
],
axis=1)
lengths = lengths + 1
# After padding and beginning-of-sequence insertion, an example output would
# be:
#
# batch_tokens:
# [[0, 123, 456, 789, 0, 0],
# [0, 234, 567, 890, 123, 456],
# ]
# lengths:
# [4, 6]
#
# Alternatively, for the same text but without beginning-of-sequence
# insertion:
#
# batch_tokens:
# [[123, 456, 789, 0, 0],
# [234, 567, 890, 123, 456],
# ]
# lengths:
# [3, 5]
return Chunk(
tokens=batch_tokens,
lengths=lengths,
)
def detokenize(self, vocab):
"""Turns a chunk back into text.
```
orig_texts = ["Humans have 2 legs.",
"Humans have 3 legs.",
]
chunk = Chunk.tokenize(vocab, orig_texts, is_first_chunk=True)
texts = chunk.detokenize(vocab)
assert(texts == orig_texts)
```
Args:
vocab: Vocabulary for detokenization.
Returns:
Text form of the chunk.
"""
me = self.copy_to_host()
# Mask out everything above 'lengths', by replacing it with the
# end-of-sequence token ID. Then vocab.decode_tf won't decode past the
# end-of-sequence token.
masked_tokens = np.where(
np.array(me.token_mask), np.array(me.tokens), vocab.eos_id)
decoded = vocab.decode_tf(masked_tokens)
if hasattr(decoded, 'numpy'):
return list(vocab.decode_tf(masked_tokens).numpy())
else:
return list(vocab.decode_tf(masked_tokens))
@property
def token_mask(self):
"""Gets a mask which is true for in-bounds tokens. bool[batch, seqlen]."""
token_index = jax.lax.broadcasted_iota(jnp.int32, self.tokens.shape, 1)
return token_index < self.lengths[:, np.newaxis]
def copy_to_host(self):
"""Copies the data from the device to the host."""
return Chunk(np.array(self.tokens), np.array(self.lengths))
def split_at(self, n):
"""Splits a chunk into two chunks, where the first has length `n`."""
assert n < self.tokens.shape[1]
me = self.copy_to_host()
first = Chunk(me.tokens[:, :n], np.minimum(me.lengths, n))
second = Chunk(me.tokens[:, n:], np.maximum(me.lengths, n) - n)
return first, second
def pad_to_length(self, n):
"""Pads the chunk to the target length."""
seqlen = self.tokens.shape[1]
assert n >= seqlen
tokens = jnp.pad(self.tokens, ((0, 0), (0, n - seqlen)))
return Chunk(tokens, self.lengths)
def update(self, token_i, token):
"""Writes the batch of tokens to the specified token index."""
assert token.tokens.shape[1] == 1, 'token must have seqlen=1'
return Chunk(
tokens=lax.dynamic_update_index_in_dim(self.tokens, token.tokens[:, 0],
token_i, 1),
lengths=self.lengths+1)
@struct.dataclass
class ChunkResult:
"""Result of analyzing a `Chunk` by the neural net.
This is returned at JIT boundaries.
"""
# Scores and other candidates for the _current_ token (not the next one).
per_token_scores: jnp.ndarray # float32[batch, seqlen]
top_token_ids: jnp.ndarray # int32[batch, seqlen, top_k]
top_token_probs: jnp.ndarray # float32[batch, seqlen, top_k]
# Logits for the _next_ token
next_token_logits: jnp.ndarray # float32[batch, vocab_size]
# KV cache.
kv_cache: attention.KVCache
@classmethod
def logical_axes(cls, circular=False):
return ChunkResult( # pytype: disable=wrong-arg-types # jax-ndarray
per_token_scores=P('batch', 'time'),
top_token_ids=P('batch', 'time', 'top_k'),
top_token_probs=P('batch', 'time', 'top_k'),
next_token_logits=P('logit_batch', 'vocab'),
kv_cache=attention.KVCache.logical_axes(circular=circular),
)
@classmethod
def physical_axes(cls, circular=False):
"""Returns the partition specs for the weights in their physical axes."""
return jax.tree_map(
partitioning.logical_to_physical,
ChunkResult.logical_axes(circular=circular),
)
def copy_to_host(self):
return jax.tree_map(jax.device_get, self)
@classmethod
def zeros(
cls,
hparams,
batch,
seqlen,
kv_batch = None,
circular = False,
):
"""Creates an all-zeros ChunkResult of the specified shape."""
cache_batch = kv_batch if kv_batch is not None else batch
return ChunkResult(
kv_cache=attention.KVCache.zeros(hparams, cache_batch, seqlen, circular), # pylint: disable = line-too-long
per_token_scores=jnp.zeros((batch, seqlen), jnp.float32),
top_token_ids=jnp.zeros((batch, seqlen, _TOP_K), jnp.int32),
top_token_probs=jnp.zeros((batch, seqlen, _TOP_K), jnp.float32),
next_token_logits=jnp.zeros((batch, hparams.vocab), jnp.float32),
)
def update(self,
token_i,
token_chunk,
token_full_result,
per_device = False):
"""Writes a single-token FullChunkResult to the specified index of this.
The index token_i is assumed to be the last token written to this
ChunkResult so far.
Args:
token_i: The seqlen index to write to.
token_chunk: The input tokens with which to write. Shape Chunk[batch, 1].
token_full_result: The results to write. Shape FullChunkResult[batch, 1].
per_device: Whether this is used in a per device or global context.
Returns:
This, but with token written at index token_i.
"""
token_batch, token_seqlen, token_vocab = token_full_result.logits.shape
batch, vocab = self.next_token_logits.shape
assert batch == token_batch
assert token_seqlen == 1
assert token_vocab == vocab
token_small = token_full_result.to_chunk_result(self.next_token_logits,
token_chunk, per_device)
return ChunkResult(
kv_cache=self.kv_cache.write_token(token_i, token_full_result.kv_cache),
per_token_scores=lax.dynamic_update_index_in_dim(
self.per_token_scores, token_small.per_token_scores[:, 0], token_i,
1),
top_token_ids=lax.dynamic_update_index_in_dim(
self.top_token_ids, token_small.top_token_ids[:, 0, :], token_i, 1),
top_token_probs=lax.dynamic_update_index_in_dim(
self.top_token_probs, token_small.top_token_probs[:, 0, :], token_i,
1),
next_token_logits=token_full_result.logits[:, 0, :],
)
_TOP_K = 4
_BOS_ID = 0
def _bos_logits(vocab_size):
"""Logits that assign probability 1.0 to _BOS_ID."""
logits = jnp.full((vocab_size,), -1e10)
return logits.at[_BOS_ID].set(0.0)
@struct.dataclass
class FullChunkResult:
"""Result produced by an 'infer' call."""
logits: jnp.ndarray # float32[batch, seqlen, vocab_size]
kv_cache: attention.KVCache
@classmethod
def logical_axes(cls):
return FullChunkResult( # pytype: disable=wrong-arg-types # jax-ndarray
logits=P('logit_batch', 'time', 'vocab'),
kv_cache=attention.KVCache.logical_axes(),
)
def to_chunk_result(
self,
prev_logits,
chunk,
do_top_k = False,
):
"""Converts this to its more minimal form, ChunkResult.
Args:
prev_logits: The `next_token_logits` of the previous chunk in the
sequence, or None if this is the first chunk in the sequence.
float32[batch, vocab_size]. In 2D [batch.x, time, vocab.yz]
chunk: Input token IDs for this chunk.
do_top_k: Whether to do top_k - small latency impact.
Returns:
This, but in its minimized form.
"""
# Example 1 (first chunk in a sequence):
#
# prev_logits = None
# tokens = [0, 123, 456, 789]
# self.logits = [logits_123, logits_456, logits_789, logits_next]
#
# Here `logits_123` is a set of logits that assigns a reasonably high
# probability to the token ID 123. Note that `self.logits` is shifted left
# by 1 from `tokens`, because `self.logits` is predicting the next token.
#
# We compute scores for the 4 tokens we've seen, by shifting `self.logits`
# right by 1. We need a probability distribution for the first token. Since
# the first token in the sequence must always be the beginning-of-sequence
# token (ID=0), we use the special `_bos_logits` for the first token, which
# assigns probability 1.0 to ID=0.
#
# So we compute per-token scores as:
#
# shifted_logits = [_bos_logits, logits_123, logits_456, logits_789]
# per_token_scores = shifted_logits[tokens]
# = [_bos_logits[0], logits_123[123], logits_456[456], logits_789[789]]
#
# The values `logits_next` have "fallen off the end" of the chunk. They are
# not useful in computing scores for this chunk, but we remember them
# because we'll use them to compute scores for the beginning of the next
# chunk. We store this in `ChunkResult.next_token_logits`.
#
# Example 2 (second chunk in a sequence):
#
# prev_logits = <some jnp.ndarray>
# tokens = [987, 654, 321]
# self.logits = [logits_654, logits_321, logits_next]
#
# This time when computing `shifted_logits`, we don't use `_bos_logits`.
# Instead we use `prev_chunk.next_token_logits`. That yields:
#
# shifted_logits = [prev_logits, logits_654, logits_321]
# per_token_scores = shifted_logits[tokens]
# = [prev_logits[987], logits_654[654], logits_321[321]]
#
# Example 3 (second chunk in a sequence is empty):
#
# prev_chunk = <some ChunkResult>
# tokens = []
# self.logits = []
#
# This is mostly degenerate but there's an important special case we need
# to handle: `ChunkResult.next_token_logits` doesn't come from
# `self.logits[-1]` like usual (because that would be empty); instead it
# comes from `prev_chunk.next_token_logits`.
batch, seqlen, vocab_size = self.logits.shape
lengths = chunk.lengths
# First figure out what logits to use for the first token.
if prev_logits is None:
# Use beginning-of-sequence marker as the logits.
prev_logits = jnp.broadcast_to(
_bos_logits(vocab_size), (batch, vocab_size))
# ^ prev_logits: f32[batch, vocab]
else:
prev_logits = attention.flat_broadcast(prev_logits, batch)
# ^ prev_logits: f32[batch, vocab]
# Now shift in the prev_logits and shift out the last token's logits.
shifted_logits = jnp.concatenate(
[prev_logits[:, np.newaxis, :], self.logits[:, :-1, :]], axis=1)
batch_iota = lax.broadcasted_iota(jnp.int32, (batch,), 0)
next_token_logits = self.logits[batch_iota, lengths - 1, :]
# ^ next_token_logits: f32[batch, vocab]
length_is_zero = lengths == 0
# ^ length_is_zero: bool[batch]
length_is_zero = length_is_zero[:, np.newaxis]
# length_is_zero: bool[batch, 1]
# Special handling for the case where the sequence length is zero, see
# Example 3 above.
next_token_logits = jnp.where(length_is_zero, prev_logits,
next_token_logits)
# Now compute the compressed representation of shifted_logits, extracting
# per-token scores, and per-token top token IDs.
batch_iota = lax.broadcasted_iota(jnp.int32, (batch, seqlen), 0)
token_iota = lax.broadcasted_iota(jnp.int32, (batch, seqlen), 1)
logits_max = jnp.max(shifted_logits, axis=-1)
logits_sumexp = jnp.sum(
special2.exp2(shifted_logits - logits_max[:, :, np.newaxis]), axis=-1
)
logits_sum = jnp.log2(logits_sumexp) + logits_max
per_token_scores = (
shifted_logits[batch_iota, token_iota, chunk.tokens] - logits_sum
) * special2.LN_2
if do_top_k:
top_logits, top_ids = lax.top_k(shifted_logits, k=_TOP_K)
top_probs = special2.exp2(top_logits - logits_max[:, :, np.newaxis]) * (
1.0 / logits_sumexp[:, :, np.newaxis])
# TODO(sholto): Do fast top_k using binary search
else:
top_ids = jnp.zeros((batch, seqlen, _TOP_K), jnp.int32)
top_probs = jnp.zeros((batch, seqlen, _TOP_K), jnp.float32)
return ChunkResult(
per_token_scores=per_token_scores,
top_token_ids=top_ids,
top_token_probs=top_probs,
next_token_logits=next_token_logits,
kv_cache=self.kv_cache,
)
class InferFn(typing_extensions.Protocol):
"""A function providing a forwards pass through a model."""
def __call__(self, weights, kv_caches,
chunk):
Ellipsis
| [
"copybara-worker@google.com"
] | copybara-worker@google.com |
1bc4f38343078af5d25b0a62599c3eece7efd669 | c08b96db4551a3cedbc091b9b19f668e8e58e53e | /tests/test_tasks_publishnb.py | 58e30fb2afe980157a20f04a8cc14c4c0ab3d33d | [
"MIT"
] | permissive | lsst-sqre/sqre-uservice-nbreport | efa1163cc58f7388742d0acfbf14a28150a2da59 | e5911ab1a1f2dfae46cdae0337138cbac786872b | refs/heads/master | 2020-03-23T23:12:05.391843 | 2018-08-21T22:20:11 | 2018-08-21T22:20:11 | 142,221,500 | 1 | 0 | null | 2018-08-21T22:27:53 | 2018-07-24T23:07:15 | Python | UTF-8 | Python | false | false | 1,089 | py | """Tests for the `uservice_nbreport.tasks.publishnb` module.
"""
import responses
from uservice_nbreport.tasks.publishnb import get_edition_url
@responses.activate
def test_get_edition_url():
responses.add(
responses.GET,
'https://keeper.lsst.codes/products/testr-000/editions/',
status=200,
json={
'editions': [
'https://keeper.lsst.codes/editions/119',
'https://keeper.lsst.codes/editions/120'
]
}
)
responses.add(
responses.GET,
'https://keeper.lsst.codes/editions/120',
status=200,
json={
'slug': 'test',
}
)
responses.add(
responses.GET,
'https://keeper.lsst.codes/editions/119',
status=200,
json={
'slug': '1'
}
)
edition_url = get_edition_url(
keeper_url='https://keeper.lsst.codes',
ltd_token='testtoken',
ltd_product='testr-000',
instance_id='1')
assert edition_url == 'https://keeper.lsst.codes/editions/119'
| [
"jsick@lsst.org"
] | jsick@lsst.org |
ef3af1aa439186701c7df404f7ab023da5a62fae | 741ee09b8b73187fab06ecc1f07f46a6ba77e85c | /AutonomousSourceCode/data/raw/sort/c8249785-8c09-49ae-a506-d5303e3f9b3c__sort_words.py | 995c6f3081573379333cfb90301cf89e7624a03d | [] | no_license | erickmiller/AutomatousSourceCode | fbe8c8fbf215430a87a8e80d0479eb9c8807accb | 44ee2fb9ac970acf7389e5da35b930d076f2c530 | refs/heads/master | 2021-05-24T01:12:53.154621 | 2020-11-20T23:50:11 | 2020-11-20T23:50:11 | 60,889,742 | 6 | 1 | null | null | null | null | UTF-8 | Python | false | false | 394 | py | # import re
# def sort_words(s):
# results = re.findall("[\w;]+", s)
# return "\n".join(map(str, sorted(results)))
# print sort_words(" one, ,two three,4,")
# ############## This works as well
def sort_words(s):
for i in sorted("\n".join(s.split(',')).split()):
print i
# print count_words(" one, ,two three,4,")
sort_words(" one, ,two three,4,") | [
"erickmiller@gmail.com"
] | erickmiller@gmail.com |
63d28708c0e3a847179936dab12755af642dbfbe | 1f71d796efcddf51a46cf74f59584f76d56c664e | /venv/Scripts/easy_install-3.7-script.py | e6b080fdbdd6095b4de6bd52dc13c8cc3e704ce3 | [] | no_license | vunited/flask_studentManagement | 12021c7811af2cf95f04fcf635dd62bac0a5b5fa | 9ae15d0e9fd6d4b9111d4f3b3b90d52b4db8ab7a | refs/heads/master | 2020-11-29T11:26:47.109519 | 2019-12-27T14:03:19 | 2019-12-27T14:03:19 | 230,102,430 | 2 | 0 | null | 2019-12-27T14:07:08 | 2019-12-25T12:51:04 | Python | UTF-8 | Python | false | false | 474 | py | #!C:\Users\Administrator\Desktop\flask_studentManagement\venv\Scripts\python.exe
# EASY-INSTALL-ENTRY-SCRIPT: 'setuptools==40.8.0','console_scripts','easy_install-3.7'
__requires__ = 'setuptools==40.8.0'
import re
import sys
from pkg_resources import load_entry_point
if __name__ == '__main__':
sys.argv[0] = re.sub(r'(-script\.pyw?|\.exe)?$', '', sys.argv[0])
sys.exit(
load_entry_point('setuptools==40.8.0', 'console_scripts', 'easy_install-3.7')()
)
| [
"you@example.com"
] | you@example.com |
9c7f3644b9c1ba9169318bd1f02e1f2aa12186d7 | ab8a34e5b821dde7b09abe37c838de046846484e | /twilio/sample-code-master/preview/sync/document_permission/delete-default/delete-default.6.x.py | dbe85db59955ddeaee70650345a046325e275ae2 | [] | no_license | sekharfly/twilio | 492b599fff62618437c87e05a6c201d6de94527a | a2847e4c79f9fbf5c53f25c8224deb11048fe94b | refs/heads/master | 2020-03-29T08:39:00.079997 | 2018-09-21T07:20:24 | 2018-09-21T07:20:24 | 149,721,431 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 516 | py | # Download the helper library from https://www.twilio.com/docs/python/install
from twilio.rest import Client
# Your Account Sid and Auth Token from twilio.com/console
account_sid = 'ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
auth_token = 'your_auth_token'
client = Client(account_sid, auth_token)
client.preview.sync.services('ISXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX') \
.documents('ETXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX') \
.document_permissions('identity') \
.delete()
| [
"sekharfly@gmail.com"
] | sekharfly@gmail.com |
79dc80302a4a44c8de34cbaed417dd4234182c32 | 0115a30d4d26932dfde5752b8533d886f182ebfa | /research/plot_data_heatmap.py | 477258764852cf477503227bf68d23351b27e03e | [] | no_license | mattbellis/Siena_College_Physics_2012_2013_Cosmology | 4d8c8282cc875a4b89fe470db7b0d77122262451 | fd05a64e0280cf3b1e7bd13f23eaee0bbe11c132 | refs/heads/master | 2020-04-06T06:38:48.367779 | 2013-10-02T05:53:31 | 2013-10-02T05:53:31 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,378 | py | ################################################################################
# This is a very fast way of reading in a text file, when
# you know how the data is formatted, e.g. how many columns
# there are.
#
# Depending on the size of the file, this still may take some time (~5-20 sec),
# but is still faster than other traditional ways of reading in files.
#
# The trade-off is that this method works best when you have a good amount of
# memory (RAM) available.
################################################################################
import numpy as np
# Pyplot is module for plotting in matplotlib library.
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
# We need to give the full path to the directory. This will obviously be
# different on your machine, so you will want to edit this by hand.
#infile = open('/Users/Chris/Desktop/M_Bellis Research/astro_data/wechsler_gals_1M.cat')
infile = open('/home/bellis/Work/Astronomy/catalogs/Wechsler/wechsler_gals.cat')
# This command will take the entire file, split it into different values using
# whitespace (tab,space,end-of-line), and iterpret the entries as floats
# (as opposed to strings, characters, or integers).
content = np.array(infile.read().split()).astype('float')
# Now we have this *huge* array. We want to pull out the values we want.
# In this case, we know that the columns are RA, Dec, and z.
# First, how big is this array.
nentries = len(content)
# Next, how many galaxies are in this file?
ncolumns = 3
ngals = nentries/ncolumns
print "# galaxies: %d" % (ngals)
# Now we just need to make an array that has the index of each value we
# want to extract.
index = np.arange(0,nentries,ncolumns)
# So for three columns, this index array looks like
# [0,3,6,9,12,...,nentries-2]
# We can use this now to pull out the columns we want!
ra = content[index]
dec = content[index+1]
z = content[index+2]
# Let's make sure these arrays at least have the same size.
print "\nNumber of entries in coordinate arrays"
print "# ra coords: %d" % (len(ra))
print "# dec coords: %d" % (len(dec))
print "# z coords: %d" % (len(z))
# And just for the heck of it, we can dump the first 5 entries of each array.
print "\nFirst five entries in arrays."
print ra[0:5]
print dec[0:5]
print z[0:5]
print "\n"
# Choose 10k random pts from 1M range.
index = range(100000)
np.random.shuffle(index)
index=index[0:100000]
radius = z[index].copy()
theta = np.deg2rad(ra[index])
phi = np.deg2rad(dec[index])
#radius = z.copy()
#theta = np.deg2rad(ra)
#phi = np.deg2rad(dec)
# Does this free up memory for us?
#del ra
#del dec
#del z
x = radius*np.cos(theta)*np.cos(phi)
y = radius*np.sin(theta)*np.cos(phi)
z = radius*np.sin(phi)
# Plotting RA vs. Dec
fig = plt.figure()
#ax = plt.subplot(111,polar=True)
#ax = fig.add_axes([0.1, -0.75, 0.8, 1.6], projection='polar')
ax = fig.add_axes([0.1, -0.75, 0.8, 1.6])
# Heat map
heatmap, xedges, yedges = np.histogram2d(x, y, bins=200)
#heatmap, xedges, yedges = np.histogram2d(x, y, bins=100)
extent = [xedges[0], xedges[-1], yedges[0], yedges[-1]]
heatmap = np.log(heatmap)
plt.clf()
plt.imshow(heatmap,extent=extent,cmap=plt.cm.winter)
plt.title('RA v. Dec for slices of Z')
plt.xlabel('Right Ascension')
plt.ylabel('Declination')
# Save plot file (must happen before show, which blocks and then discards
# the interactive figure).
plt.savefig('Ra_v_Dec_100k.png')
# Draw plot
plt.show()
| [
"matthew.bellis@gmail.com"
] | matthew.bellis@gmail.com |
f0d5850e364accfc24295ae3d9d98a0046bde1b2 | d3ce58c4576431df14de0990f45cfd574f0aa45f | /.history/riskCalculator/forms_20201020003540.py | 1ff165ad8f4a597786110d5dd3dd387a85489954 | [] | no_license | rahulsolankib/portfolio | fe93f0e6b0b28990f0b9fad84dbf7c3aa07243c4 | 281ed429e2590376aee4649b2ea7b3e8facaf6f1 | refs/heads/master | 2023-01-02T06:55:21.319094 | 2020-10-26T08:55:22 | 2020-10-26T08:55:22 | 305,586,595 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 940 | py | from django import forms
from .models import Que,RiskModel
class QForm(forms.Form):
    age_group = (
        (4, 'Less than 25 years'),
        (3, '25-35 years'),
        (2, '36-40 years'),
        (1, '51 above'),
    )
    # Investment time-horizon choices for ques2 (distinct from age_group).
    time_group = (
        (3, 'more than 5 years'),
        (2, '2-5 years'),
        (1, 'less than 2 years'),
    )
    ques1 = forms.TypedChoiceField(label='Which age group do you belong?', choices=age_group, coerce=int, initial=4)
    ques2 = forms.TypedChoiceField(label='When do you think you need your capital?', choices=time_group, coerce=int, initial=1)
    ques3 = forms.CharField()
    ques4 = forms.CharField()
    ques5 = forms.CharField()

class QueForm(forms.ModelForm):
    class Meta:
        model = Que
        fields = ['ques1', 'ques2', 'ques3', 'ques4', 'ques5']

class RiskForm(forms.ModelForm):
    class Meta:
        model = RiskModel
        fields = ['userid', 'risk_score'] | [
"rahulsolankib@gmail.com"
] | rahulsolankib@gmail.com |
673a4ecf7c8354ffe8868fcaabe971c1ab0b0bed | 36c5770217c104bea5cc1e5e43a9faa803daccec | /2021/Day_10/test_day10.py | 509a257a78a2eee6ec1a3644be31a90002e9419e | [] | no_license | sco1/adventofcode | 3a2ac0905c04e5a42d409d27e71dc7c5b3cf33a4 | cb029bb825f35944f505f8c88346bd2504695821 | refs/heads/main | 2023-04-30T10:25:02.770042 | 2023-04-17T01:07:46 | 2023-04-17T18:11:35 | 160,292,002 | 0 | 1 | null | 2023-04-06T13:17:54 | 2018-12-04T03:37:54 | Python | UTF-8 | Python | false | false | 704 | py | from textwrap import dedent
from .aoc_2021_day10 import parse_subsystem_code, score_autocomplete
SAMPLE_INPUT = dedent(
"""\
[({(<(())[]>[[{[]{<()<>>
[(()[<>])]({[<{<<[]>>(
{([(<{}[<>[]}>{[]{[(<()>
(((({<>}<{<{<>}{[]{[]{}
[[<[([]))<([[{}[[()]]]
[{[{({}]{}}([{[{{{}}([]
{<[[]]>}<{[{[{[]{()[[[]
[<(<(<(<{}))><([]([]()
<{([([[(<>()){}]>(<<{{
<{([{{}}[<[[[<>{}]]]>[]]
"""
).splitlines()
def test_part_one() -> None:
syntax_score, _ = parse_subsystem_code(SAMPLE_INPUT)
assert syntax_score == 26397
def test_part_two() -> None:
_, incomplete_lines = parse_subsystem_code(SAMPLE_INPUT)
assert score_autocomplete(incomplete_lines) == 288957
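The `aoc_2021_day10` module these tests import is not included in this dump. A minimal self-contained sketch of the two routines they exercise — reconstructed from the test expectations and the standard AoC 2021 day 10 scoring tables, not taken from the original module — might look like this:

```python
from statistics import median

PAIRS = {'(': ')', '[': ']', '{': '}', '<': '>'}
ILLEGAL_SCORE = {')': 3, ']': 57, '}': 1197, '>': 25137}
COMPLETE_SCORE = {'(': 1, '[': 2, '{': 3, '<': 4}


def parse_subsystem_code(lines):
    """Return (total syntax-error score, stacks of unclosed chunks)."""
    syntax_score = 0
    incomplete_lines = []
    for line in lines:
        stack = []
        for ch in line:
            if ch in PAIRS:
                stack.append(ch)
            elif not stack or PAIRS[stack.pop()] != ch:
                syntax_score += ILLEGAL_SCORE[ch]  # corrupted line
                break
        else:
            if stack:
                incomplete_lines.append(stack)  # incomplete line
    return syntax_score, incomplete_lines


def score_autocomplete(incomplete_lines):
    """Middle (median) score of the per-line autocomplete scores."""
    scores = []
    for stack in incomplete_lines:
        total = 0
        for ch in reversed(stack):
            total = total * 5 + COMPLETE_SCORE[ch]
        scores.append(total)
    return median(scores)
```

Running it on the sample input above reproduces the expected 26397 and 288957.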
| [
"sco1.git@gmail.com"
] | sco1.git@gmail.com |
afac01645fddae62858ab76b0947a5a0723f02f3 | ca7aa979e7059467e158830b76673f5b77a0f5a3 | /Python_codes/p02675/s941998787.py | 41727de13518ee3d758d33916cfee0252c8a5388 | [] | no_license | Aasthaengg/IBMdataset | 7abb6cbcc4fb03ef5ca68ac64ba460c4a64f8901 | f33f1c5c3b16d0ea8d1f5a7d479ad288bb3f48d8 | refs/heads/main | 2023-04-22T10:22:44.763102 | 2021-05-13T17:27:22 | 2021-05-13T17:27:22 | 367,112,348 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 282 | py | N= int(input())
N1=N%10
if N1==2:
print("hon")
elif N1==4:
print("hon")
elif N1==5:
print("hon")
elif N1==7:
print("hon")
elif N1==9:
print("hon")
elif N1==0:
print("pon")
elif N1==1:
print("pon")
elif N1==6:
print("pon")
elif N1==8:
print("pon")
else:
print("bon")
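The digit-by-digit branching above is a pure lookup on the last decimal digit (the hon/pon/bon counter-word problem). The same mapping can be written as a table; the helper name `counter_word` is made up for illustration:

```python
READING = {
    2: "hon", 4: "hon", 5: "hon", 7: "hon", 9: "hon",
    0: "pon", 1: "pon", 6: "pon", 8: "pon",
    3: "bon",
}


def counter_word(n):
    """Counter-word reading determined by the last decimal digit of n."""
    return READING[n % 10]


print(counter_word(16))  # pon
```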
| [
"66529651+Aastha2104@users.noreply.github.com"
] | 66529651+Aastha2104@users.noreply.github.com |
45dfdc4a46f7d22d0bcaa32665e633280c4e5cd3 | 18eee1dc9d6b3e97aa1bd99addb5401bad2a8647 | /apps/goods/filters.py | dde361f4a48c97e84ac6ef82523c0a202e91b7d6 | [
"Apache-2.0"
] | permissive | xxcfun/mxshop-api | 1a2b1e4c7e4ae86b47e27c16f5dde401a0ff4af0 | 1472ad0d959439ea80c1f8d8bfd3629c15d3017d | refs/heads/master | 2023-08-18T19:34:47.941932 | 2021-09-14T10:57:26 | 2021-09-14T10:57:26 | 380,106,131 | 1 | 1 | null | null | null | null | UTF-8 | Python | false | false | 948 | py | import django_filters
from goods.models import Goods
from django.db.models import Q
class GoodsFilter(django_filters.rest_framework.FilterSet):
""" 商品过滤的类 """
# 参数解读:name是要过滤的字段,lookup是执行的行为
pricemin = django_filters.NumberFilter(field_name='shop_price', lookup_expr='gte')
pricemax = django_filters.NumberFilter(field_name='shop_price', lookup_expr='lte')
top_category = django_filters.NumberFilter(field_name='category', method='top_category_filter')
def top_category_filter(self, queryset, name, value):
        # Match goods whether the selected category is first-, second- or third-level
return queryset.filter(Q(category_id=value) | Q(category__parent_category_id=value) | Q(category__parent_category__parent_category_id=value))
class Meta:
model = Goods
fields = ['pricemin', 'pricemax', 'is_hot', 'is_new']
| [
"55070348+hhdMrLion@users.noreply.github.com"
] | 55070348+hhdMrLion@users.noreply.github.com |
e250315934d3aab16f7dc6c05d0fe65f7ab19055 | 3670f2ca6f5609e14cce8c31cb1348052d0b6358 | /xacro/rqt_runtime_monitor/setup.py | 030f81f149fd6d14d091aef5c2627c152acf4f95 | [] | no_license | jincheng-ai/ros-melodic-python3-opencv4 | b0f4d3860ab7ae3d683ade8aa03e74341eff7fcf | 47c74188560c2274b8304647722d0c9763299a4b | refs/heads/main | 2023-05-28T17:37:34.345164 | 2021-06-17T09:59:25 | 2021-06-17T09:59:25 | 377,856,153 | 5 | 0 | null | null | null | null | UTF-8 | Python | false | false | 228 | py | #!/usr/bin/env python
from distutils.core import setup
from catkin_pkg.python_setup import generate_distutils_setup
d = generate_distutils_setup(
packages=['rqt_runtime_monitor'],
package_dir={'': 'src'}
)
setup(**d)
| [
"shuyuanhao@cetiti.com"
] | shuyuanhao@cetiti.com |
0952c45060b73fc80ede7f00cc3160109fa1c450 | 073c2fd73875ce4e7d061623b8403f8d77c45d92 | /cohesity_management_sdk/models/restore_app_object.py | 9fdd52cdbb80907188731b0c0d483c183d9e339a | [
"Apache-2.0"
] | permissive | naveena-maplelabs/management-sdk-python | b11441b2edccc5a1262785bd559ad4b3ea984c3b | 06ce4119d955dc08cdbc5109c935afcfcd9d65ab | refs/heads/master | 2021-05-20T10:52:12.776816 | 2020-03-10T03:28:08 | 2020-03-10T03:28:08 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,477 | py | # -*- coding: utf-8 -*-
# Copyright 2020 Cohesity Inc.
import cohesity_management_sdk.models.entity_proto
import cohesity_management_sdk.models.restore_app_object_params
class RestoreAppObject(object):
"""Implementation of the 'RestoreAppObject' model.
Message that captures information about an application object being
restored.
Attributes:
app_entity (EntityProto): Specifies the attributes and the latest
statistics about an entity.
display_name (string): The proper display name of this object in the
UI, if app_entity is not empty. For example, for SQL databases the
name should also include the instance name.
restore_params (RestoreAppObjectParams): TODO: type description here.
"""
# Create a mapping from Model property names to API property names
_names = {
"app_entity":'appEntity',
"display_name":'displayName',
"restore_params":'restoreParams'
}
def __init__(self,
app_entity=None,
display_name=None,
restore_params=None):
"""Constructor for the RestoreAppObject class"""
# Initialize members of the class
self.app_entity = app_entity
self.display_name = display_name
self.restore_params = restore_params
@classmethod
def from_dictionary(cls,
dictionary):
"""Creates an instance of this model from a dictionary
Args:
dictionary (dictionary): A dictionary representation of the object as
obtained from the deserialization of the server's response. The keys
MUST match property names in the API description.
Returns:
object: An instance of this structure class.
"""
if dictionary is None:
return None
# Extract variables from the dictionary
app_entity = cohesity_management_sdk.models.entity_proto.EntityProto.from_dictionary(dictionary.get('appEntity')) if dictionary.get('appEntity') else None
display_name = dictionary.get('displayName')
restore_params = cohesity_management_sdk.models.restore_app_object_params.RestoreAppObjectParams.from_dictionary(dictionary.get('restoreParams')) if dictionary.get('restoreParams') else None
# Return an object of this model
return cls(app_entity,
display_name,
restore_params)
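The `_names` mapping plus a `from_dictionary` classmethod is a pattern repeated across this SDK's models: API keys are camelCase, attributes are snake_case, and missing keys fall back to `None`. A stripped-down illustration of the round trip (a toy class for demonstration, not part of the SDK) could be:

```python
class ToyObject(object):
    """Minimal model following the same dictionary-mapping pattern."""

    # Mapping from Model property names to API property names
    _names = {"display_name": 'displayName'}

    def __init__(self, display_name=None):
        self.display_name = display_name

    @classmethod
    def from_dictionary(cls, dictionary):
        """Build an instance from a deserialized API response."""
        if dictionary is None:
            return None
        return cls(display_name=dictionary.get('displayName'))


obj = ToyObject.from_dictionary({'displayName': 'db01'})
print(obj.display_name)  # db01
```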
| [
"ashish@cohesity.com"
] | ashish@cohesity.com |
aa2759ad7133838c170a215aae51575e1f6c5d36 | 603d37a05bada0fae1d468cc36d80d6b9d10ac09 | /randlov1998/balance_lspi.py | a10bc0a144ac92a2ccb1acd8ac405fa24c51b22f | [
"MIT"
] | permissive | eejd/agent-bicycle | 8b8b5162177e21f27889ca0b89348000c1f724d8 | 1ecc3fcad8504385e9e85ccbc539464cb4e6c4e6 | refs/heads/master | 2020-12-31T06:23:01.487407 | 2013-12-11T04:48:16 | 2013-12-11T04:48:16 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,407 | py | from pybrain.rl.agents.linearfa import LinearFA_Agent
from pybrain.rl.learners.valuebased.linearfa import LSPI
from pybrain.rl.experiments import EpisodicExperiment
from environment import Environment
from tasks import LSPIBalanceTask
from training import LinearFATraining
task = LSPIBalanceTask()
learner = LSPI(task.nactions, task.outdim)
# TODO this LSPI does not have eligibility traces.
#learner._lambda = 0.95
task.discount = learner.rewardDiscount
agent = LinearFA_Agent(learner)
# The state has a huge number of dimensions, and the logging causes me to run
# out of memory. We needn't log, since learning is done online.
agent.logging = False
performance_agent = LinearFA_Agent(learner)
performance_agent.logging = False
performance_agent.greedy = True
performance_agent.learning = False
experiment = EpisodicExperiment(task, agent)
# TODO PyBrain says that the learning rate needs to decay, but I don't see that
# described in Randlov's paper.
# A higher number here means the learning rate decays slower.
learner.learningRateDecay = 100000
# NOTE increasing this number above from the default of 100 is what got the
# learning to actually happen, and fixed the bug/issue where the performance
# agent's performance stopped improving.
tr = LinearFATraining('balance_lspi', experiment,
performance_agent, verbose=True)
tr.train(55000, performance_interval=10, n_performance_episodes=5)
| [
"cld72@cornell.edu"
] | cld72@cornell.edu |
7e48a295782fb5d9b146dadab137c5711928f165 | 76adbcc676882343e166485f42c4e8fc38b851f8 | /constants/ad.py | defc860fdf9ebbb5d315cfee86c06adb7a30e0bf | [
"MIT"
] | permissive | adWharf/core | 5856b123fccabfc812707a605270015ed0750304 | f7e04db8b9635f0adf67d9f7488ae64f291a564c | refs/heads/master | 2020-03-18T05:35:01.017239 | 2018-06-07T03:52:29 | 2018-06-07T03:52:29 | 134,350,070 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 620 | py | #!/usr/bin/env python
# encoding: utf-8
"""
@author: william
@contact: 1342247033@qq.com
@site: http://www.xiaolewei.com
@file: ad_status.py
@time: 12/04/2018 17:10
"""
from enum import Enum
ADSTATUS_UNKNOW = -1  # unknown
ADSTATUS_NORMAL = 0  # normal
ADSTATUS_PENDING = 1  # pending review
ADSTATUS_DENIED = 2  # review rejected
ADSTATUS_FROZEN = 3  # frozen
ADSTATUS_SUSPEND = 4  # suspended
ADSTATUS_PREPARE = 5  # preparing
ADSTATUS_DELETED = 6  # deleted
AD_BID_TYPE = Enum('AD_BID_TYPE', ('CPM', 'OCPM'))
AD_BID_TYPE_OCPM_OPT_MORE_CLICK = 2
AD_BID_TYPE_OCPM_OPT_MORE_ORDER = 7
| [
"1342247033@qq.com"
] | 1342247033@qq.com |
c046458b9836688b0409b199f115260b9bf29216 | 15f321878face2af9317363c5f6de1e5ddd9b749 | /solutions_python/Problem_117/1559.py | daf42e84e31ecc0e6ae24ab62ef2ed6872425d11 | [] | no_license | dr-dos-ok/Code_Jam_Webscraper | c06fd59870842664cd79c41eb460a09553e1c80a | 26a35bf114a3aa30fc4c677ef069d95f41665cc0 | refs/heads/master | 2020-04-06T08:17:40.938460 | 2018-10-14T10:12:47 | 2018-10-14T10:12:47 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,439 | py | #f = open('B-small-attempt0.in')
f = open('B-large.in')
#f = open('test.in')
count = int(f.readline())
output = ''
def check():
global matrix,rowCount,columnCount
currentRow = 0
currentMin = 100
for i in range(0,rowCount):
tempMin = min(matrix[i])
if tempMin < currentMin:
currentMin = tempMin
currentRow = i
minIndex = matrix[currentRow].index(currentMin)
if matrix[currentRow].count(currentMin) == len(matrix[currentRow]):
del matrix[currentRow]
rowCount -= 1
if rowCount == 0:
return True
return check()
else:
for j in range(0,rowCount):
if matrix[j][minIndex] != currentMin:
return False
del matrix[j][minIndex]
columnCount -= 1
if columnCount == 0:
return True
return check()
for i in range(0,count):
rowAndColumn = f.readline().split()
rowCount = int(rowAndColumn[0])
columnCount = int(rowAndColumn[1])
matrix = [[]] * rowCount
for j in range(0,rowCount):
matrix[j] = f.readline().split()
for k in range(0,len(matrix[j])):
matrix[j][k] = int(matrix[j][k])
if check():
output += 'Case #' + str(i+1) + ': YES\n'
else:
output += 'Case #' + str(i+1) + ': NO\n'
print(output)
newf = open('output.txt','w')
newf.write(output)
#Case #1: YES
#Case #2: NO
#Case #3: YES
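The recursive `check` above mutates module-level state, which makes it hard to exercise on its own. A self-contained version of the same elimination idea — strip any row that is entirely the global minimum, otherwise require the minimum's column to be constant and strip it — can be tested in isolation. This mirrors the contest file's logic; it is a sketch, not a replacement:

```python
def eliminable(matrix):
    """Return True if repeated row/column elimination empties the matrix."""
    matrix = [row[:] for row in matrix]  # work on a copy
    while matrix and matrix[0]:
        # locate the row containing the global minimum
        row = min(range(len(matrix)), key=lambda i: min(matrix[i]))
        m = min(matrix[row])
        if matrix[row].count(m) == len(matrix[row]):
            del matrix[row]              # the whole row is the minimum
        else:
            col = matrix[row].index(m)
            if any(r[col] != m for r in matrix):
                return False             # the column breaks the pattern
            for r in matrix:
                del r[col]               # strip the constant column
    return True
```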
| [
"miliar1732@gmail.com"
] | miliar1732@gmail.com |
042656e281ad8a91f55e1a538bda15e8a457df7e | e0b6f5bd451aa8af3273fbc948799637681342e1 | /scripts/wm_representation/functions/encoding_leave_one_out.py | 255b51c1b420fca68791c82c9ac918ab9f5aeeab | [] | no_license | davidbestue/encoding | 6b304f6e7429f94f97bd562c7544d1fdccf7bdc1 | c27319aa3bb652b3bfc6b7340044c0fda057bc62 | refs/heads/master | 2022-05-05T23:41:42.419252 | 2022-04-27T08:34:52 | 2022-04-27T08:34:52 | 144,248,690 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,238 | py |
from model_functions import *
from fake_data_generator import *
from Weights_matrixs import *
from Representation import *
from process_encoding import *
from process_wm import *
from data_to_use import *
from bootstrap_functions import *
from leave_one_out import *
from joblib import Parallel, delayed
import multiprocessing
import time
import random
#
numcores = multiprocessing.cpu_count() - 3
Subjects=['d001', 'n001', 'b001', 'r001', 's001', 'l001']
brain_regions = ['visual', 'ips', 'pfc']
path_save_signal='/home/david/Desktop/target_close/signal_encoding.xlsx'
path_save_shuffle='/home/david/Desktop/target_close/shuffle_encoding.xlsx'
Reconstructions=[]
Reconstructions_shuff=[]
for Subject in Subjects:
for Brain_region in brain_regions:
print(Subject + ', ' + Brain_region)
#plt.figure()
### Data to use
enc_fmri_paths, enc_beh_paths, wm_fmri_paths, wm_beh_paths, masks = data_to_use( Subject, 'together', Brain_region)
##### Process training data
training_dataset, training_targets = process_encoding_files(enc_fmri_paths, masks, enc_beh_paths, sys_use='unix', hd=6, TR=2.335) #4
        error_ = Pop_vect_leave_one_out(training_dataset, training_targets)  # no need to parallelize because there is no multiple wm
        Reconstruction = pd.DataFrame([error_])  # there is only one!
Reconstruction.columns=['decoding']
Reconstruction['region'] = Brain_region
Reconstruction['subject'] = Subject
Reconstruction['label'] = 'signal'
Reconstructions.append(Reconstruction)
#
error_shuff = shuff_Pop_vect_leave_one_out2(training_dataset, training_targets, 10)
Reconstruction_shuff = pd.DataFrame(error_shuff)
Reconstruction_shuff.columns=['decoding']
Reconstruction_shuff['region'] = Brain_region
Reconstruction_shuff['subject'] = Subject
Reconstruction_shuff['label'] = 'shuffle'
Reconstructions_shuff.append(Reconstruction_shuff)
### Save signal from the reconstructions and shuffles
Decoding_df = pd.concat(Reconstructions, axis=0)
Decoding_df.to_excel( path_save_signal )
Shuffle_df = pd.concat(Reconstructions_shuff, axis=0)
Shuffle_df.to_excel( path_save_shuffle ) | [
"davidsanchezbestue@hotmail.com"
] | davidsanchezbestue@hotmail.com |
37747e30a88b90ba50ba53fe7451a8b52b8155e2 | 236402efa32923fefc9f3924ba4155142e8052fe | /2017/_10_knot_hash_test.py | c48df44d9d3edca3a18e7a40edb3223d50d2b400 | [
"MIT"
] | permissive | pchudzik/adventofcode | 7c32126948ea57cdef3858ae3eb63cafdd67abb0 | 72304880c6b080d6c177d11fc9b9eb7b58e876b7 | refs/heads/master | 2022-05-08T00:20:58.586672 | 2022-04-29T19:30:34 | 2022-04-29T19:30:34 | 164,089,632 | 0 | 0 | MIT | 2022-04-22T14:29:37 | 2019-01-04T09:51:33 | Python | UTF-8 | Python | false | false | 800 | py | import pytest
from _10_knot_hash import single_round, knot_hash
@pytest.mark.parametrize(
"puzzle, positions, result",
[
([0, 1, 2, 3, 4], [3], [2, 1, 0, 3, 4]),
([0, 1, 2, 3, 4], [3, 4], [4, 3, 0, 1, 2]),
([0, 1, 2, 3, 4], [3, 4, 1], [4, 3, 0, 1, 2]),
([0, 1, 2, 3, 4], [3, 4, 1, 5], [3, 4, 2, 1, 0]),
]
)
def test_rotate(puzzle, positions, result):
assert single_round(puzzle, positions)[0] == result
@pytest.mark.parametrize(
"puzzle, result", [
("", "a2582a3a0e66e6e86e3812dcb672a272"),
("AoC 2017", "33efeb34ea91902bb2f59c9920caa6cd"),
("1,2,3", "3efbe78a8d82f29979031a4aa0b16a9d"),
("1,2,4", "63960835bcdc130f0b66d7ff4f6a5a8e")])
def test_knot_hash(puzzle, result):
assert knot_hash(puzzle) == result
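The `_10_knot_hash` module under test is not included in this dump. A compact reference implementation that satisfies the cases above — one pinch-and-twist pass for `single_round`, and the full 64-round dense hash for `knot_hash` — could look like this (signatures inferred from the tests, not copied from the original file):

```python
from functools import reduce


def single_round(rope, lengths, pos=0, skip=0):
    """Reverse circular segments of `rope`; returns (rope, pos, skip)."""
    rope = rope[:]
    n = len(rope)
    for length in lengths:
        # reverse `length` elements starting at `pos`, wrapping around
        idx = [(pos + i) % n for i in range(length)]
        vals = [rope[i] for i in idx]
        for i, v in zip(idx, reversed(vals)):
            rope[i] = v
        pos = (pos + length + skip) % n
        skip += 1
    return rope, pos, skip


def knot_hash(text):
    """Full AoC 2017 day 10 part 2 hash: 64 rounds, XOR-dense, hex digest."""
    lengths = [ord(c) for c in text] + [17, 31, 73, 47, 23]
    rope, pos, skip = list(range(256)), 0, 0
    for _ in range(64):
        rope, pos, skip = single_round(rope, lengths, pos, skip)
    dense = [reduce(lambda a, b: a ^ b, rope[i:i + 16])
             for i in range(0, 256, 16)]
    return ''.join('{:02x}'.format(b) for b in dense)
```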
| [
"pawel.chudzik@gmail.com"
] | pawel.chudzik@gmail.com |
70b65b3099824795baa28a830cdabe6a194a359a | aeb4759e515adc4493f8d062011814c9fc749ad8 | /src/desktop/Level/Rating/rating.py | 7a4e53d7c1140920469deadf1d6f4d9b16af9c67 | [] | no_license | cloew/PyMine | 3b67f54168ddfe8a2b0262f929e6688e4797486a | eac57ca9c585ec86befff126d50c9df3614b104f | refs/heads/master | 2021-01-16T18:56:55.431131 | 2014-06-23T02:43:17 | 2014-06-23T02:43:17 | 4,244,619 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 279 | py |
class Rating:
""" Represents the Rating """
def __init__(self, level):
""" Initialize the Rating """
self.level = level
self.awarded = False
def checkAwarded(self):
""" Check if the Rating should be awarded """ | [
"cloew123@gmail.com"
] | cloew123@gmail.com |
a32c536428203a1d2fab7eea522cf5846aa50345 | 73f4a527f2dbe9bcfbceab7cab1370c23bbbfa36 | /lec4_serving/send_url.py | 24be6fa0cc47546b98000ea82c02f272420c3463 | [] | no_license | pai-plznw4me/network_study | c5962706c29c5475badb3d32c8e21f20dd21e67a | 845dd045e68bce670b241cf9f1553c23344fb984 | refs/heads/master | 2022-12-14T02:32:30.427499 | 2020-09-13T15:07:43 | 2020-09-13T15:07:43 | 292,311,096 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 639 | py | import json
import base64
from PIL import Image
import numpy as np
import io
import matplotlib.pyplot as plt
import tensorflow as tf
# Image to binary
img_path = "/Users/seongjungkim/PycharmProjects/network_study/lec4_serving/sample.png"
f = open(img_path, mode='rb')
image = f.read()
b_image = base64.encodebytes(image).decode("utf-8")
data = {'image': b_image}
j_image = json.dumps(data)
# binary to Image
raw_image = base64.b64decode(b_image)
img = Image.open(io.BytesIO(raw_image))
#
data = json.dumps({"signature_name": "serving_default", "instances": b_image})
print('Data: {} ... {}'.format(data[:50], data[len(data)-52:]))
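One detail that makes the round trip above work: `base64.encodebytes` inserts a newline every 76 characters (plus a trailing one), while `base64.b64decode` silently discards characters outside the base64 alphabet by default. The check below demonstrates this with plain bytes, so it runs without PIL or an image file (a sketch, not part of the lecture code):

```python
import base64

payload = b'\x89PNG fake image bytes' * 10       # stand-in for real PNG data
b_image = base64.encodebytes(payload).decode('utf-8')

# encodebytes wraps the output in 76-character lines
print('\n' in b_image)                            # True

# b64decode ignores the newlines, recovering the original bytes
round_trip = base64.b64decode(b_image)
print(round_trip == payload)                      # True
```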
| [
"plznw4me@naver.com"
] | plznw4me@naver.com |
b25ec287087535274a5c4e7bd595d08c888d6d73 | dab869acd10a3dc76e2a924e24b6a4dffe0a875f | /Laban/LabanLib/analysis/spreadindAndClosing.py | 66d70d80949167c1383fa65ab43fe1af03e37b9b | [] | no_license | ranBernstein/Laban | d82aff9b0483dd007e03a06e51f7d635f62ed05d | 54c88afa9493deacbdd182904cc5d180ecb208b4 | refs/heads/master | 2021-01-23T13:17:51.777880 | 2017-02-14T09:02:54 | 2017-02-14T09:02:54 | 25,508,010 | 3 | 1 | null | 2017-02-14T09:02:55 | 2014-10-21T07:16:01 | Tcl | UTF-8 | Python | false | false | 1,282 | py | from LabanLib.LabanUtils import AbstractLabanAnalyzer
from LabanLib.LabanUtils import AbstractAnalysis
import mocapUtils.kinect.angleExtraction as ae
class SpreadindAndClosing(AbstractAnalysis.AbstractAnalysis):
def getPositiveAndNegetive(self):
return 'Spreading', 'Closing'
def wrapper(self, lineInFloats, headers, jointsIndices):
return ae.calcAverageDistanceOfIndicesFromLine(lineInFloats, \
jointsIndices, *self.extractor.getLongAxeIndices(headers))
def analyze(inputFile):
extractor = AbstractLabanAnalyzer.getExtractor(inputFile)
analysis = SpreadindAndClosing(extractor)
return analysis.extract(inputFile)
"""
def spreadindAndClosingWrapper(extractor, lineInFloats, headers, jointsIndices):
return ae.calcAverageDistanceOfIndicesFromLine(lineInFloats, \
jointsIndices, *extractor.getLongAxeIndices(headers))
def extractSpreadindAndClosing(extractor, fileName):
return extractor.extractLaban(fileName, extractor.spreadindAndClosingWrapper)
def plotSpreadindAndClosing(extractor, fileName):
input = extractor.extractLaban(fileName, extractor.spreadindAndClosingWrapper)
extractor.plotResults(input, 'Spreading', 'Closing')
""" | [
"bernstein.ran@gmail.com"
] | bernstein.ran@gmail.com |
c557512ab437d92c6ff97db1ed111ba0e9a9e98d | 1f0d46b55fe351dc61436069aca183dfa0d07e92 | /restful01/restful01/settings.py | 6499adabd3b5867434fb5271cd3e6dfa17a9fb17 | [] | no_license | aguncn/DjangoRESTfulWebServices | bfba746fc20f3aaa8bf8cedad6e569d0ea5716b2 | 1ab78478f84178d3eca9de1a262bae242930a9c5 | refs/heads/master | 2020-04-12T04:28:05.056831 | 2018-12-18T14:04:17 | 2018-12-18T14:04:17 | 162,296,613 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 4,141 | py | """
Django settings for restful01 project.
Generated by 'django-admin startproject' using Django 2.0.8.
For more information on this file, see
https://docs.djangoproject.com/en/2.0/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/2.0/ref/settings/
"""
import os
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/2.0/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = '6y8j^#_uy109&92vcddfse(x^=*#)%lj$k0=rm@y(1rb4j$!3_'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'rest_framework',
'toys.apps.ToysConfig',
'drones.apps.DronesConfig',
'django_filters',
'rest_framework.authtoken',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'restful01.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'restful01.wsgi.application'
# Database
# https://docs.djangoproject.com/en/2.0/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
# Password validation
# https://docs.djangoproject.com/en/2.0/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/2.0/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/2.0/howto/static-files/
STATIC_URL = '/static/'
REST_FRAMEWORK = {
'DEFAULT_PAGINATION_CLASS':
'drones.custompagination.LimitOffsetPaginationWithUpperBound',
'PAGE_SIZE': 4,
'DEFAULT_FILTER_BACKENDS': (
'django_filters.rest_framework.DjangoFilterBackend',
'rest_framework.filters.OrderingFilter',
'rest_framework.filters.SearchFilter',
),
'DEFAULT_AUTHENTICATION_CLASSES': (
'rest_framework.authentication.BasicAuthentication',
'rest_framework.authentication.SessionAuthentication',
),
'DEFAULT_THROTTLE_CLASSES': (
'rest_framework.throttling.AnonRateThrottle',
'rest_framework.throttling.UserRateThrottle',
),
'DEFAULT_THROTTLE_RATES': {
'anon': '300/hour',
'user': '100/hour',
'drones': '200/hour',
'pilots': '150/hour',
},
'DEFAULT_VERSIONING_CLASS':
'rest_framework.versioning.NamespaceVersioning',
}
| [
"aguncn@163.com"
] | aguncn@163.com |
57ece42cba9c0ad3533058c32259e55d8a82d896 | 1bd24cc6d3ebd0d57123a89589493ec8a0cfce90 | /cachemagic/client.py | 4e7a4b42650ce62a9b11315dc4dd833f21c86832 | [
"BSD-2-Clause"
] | permissive | ntucker/django-cache-magic | ff6b5892d84147c20d6500972c43ec61cf70dd80 | e6c6195fd80c846f7a49e27d1bc519ee109d80dc | refs/heads/master | 2021-01-16T20:32:44.592682 | 2015-11-26T07:24:29 | 2015-11-26T07:24:29 | 5,353,416 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,041 | py | import os
from eventlet.greenthread import sleep
from eventlet.queue import LightQueue
from eventlet.queue import Empty
from redis import Redis, ConnectionPool
from redis.exceptions import ConnectionError
from redis.connection import UnixDomainSocketConnection, Connection
from django_redis.client.default import DefaultClient
from django.conf import settings
_connection_pools = {}
class EventletConnectionPool(ConnectionPool):
def __init__(self, connection_class=Connection, max_connections=None,
**connection_kwargs):
self.pid = os.getpid()
self.connection_class = connection_class
self.connection_kwargs = connection_kwargs
self.max_connections = max_connections or 2 ** 31
self._created_connections = 0
        self._available_connections = LightQueue()
        self._in_use_connections = set()

    def reset(self):
        # Recreate the eventlet-friendly containers. The stock redis-py
        # reset() (invoked by _checkpid after a fork) would otherwise swap
        # the LightQueue for a plain list and break release().
        self.pid = os.getpid()
        self._created_connections = 0
        self._available_connections = LightQueue()
        self._in_use_connections = set()
def get_connection(self, command_name, *keys, **options):
"Get a connection from the pool"
try:
connection = self._available_connections.get_nowait()
except Empty:
if self._created_connections < self.max_connections:
connection = self.make_connection()
else:
try:
connection = self._available_connections.get()
except Empty:
raise ConnectionError("Couldn't find a free connection")
self._in_use_connections.add(connection)
return connection
def release(self, connection):
"Releases the connection back to the pool"
self._checkpid()
if connection.pid == self.pid:
self._in_use_connections.remove(connection)
self._available_connections.put_nowait(connection)
def disconnect(self):
"Disconnects all connections in the pool"
while True:
try:
self._available_connections.get_nowait().disconnect()
except Empty:
break
for connection in self._in_use_connections:
connection.disconnect()
def get_or_create_connection_pool(**params):
global _connection_pools
key = str(params)
if key not in _connection_pools:
_connection_pools[key] = EventletConnectionPool(**params)
return _connection_pools[key]
class EventletConnectionClient(DefaultClient):
def _connect(self, host, port, db):
"""
Creates a redis connection with connection pool.
"""
kwargs = {
"db": db,
"parser_class": self.parser_class,
"password": self._options.get('PASSWORD', None),
"max_connections": settings.REDIS_POOL_SIZE,
}
if host == "unix":
kwargs.update({'path': port, 'connection_class': UnixDomainSocketConnection})
else:
kwargs.update({'host': host, 'port': port, 'connection_class': Connection})
connection_pool = get_or_create_connection_pool(**kwargs)
connection = Redis(connection_pool=connection_pool)
return connection | [
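The pool's get/release discipline above does not depend on eventlet specifically: get prefers an idle connection, creates one while under the cap, and otherwise blocks until a release. The same queue-backed pattern can be sketched with the standard library (`queue.Queue` standing in for `LightQueue`, and an arbitrary factory standing in for redis connections — names here are illustrative, not from the module):

```python
import queue


class MiniPool:
    """Queue-backed pool: get() prefers an idle item, else creates one."""

    def __init__(self, factory, max_items=2):
        self.factory = factory
        self.max_items = max_items
        self.created = 0
        self.idle = queue.Queue()
        self.in_use = set()

    def get(self):
        try:
            item = self.idle.get_nowait()        # reuse an idle item
        except queue.Empty:
            if self.created < self.max_items:
                item = self.factory()            # still under the cap
                self.created += 1
            else:
                item = self.idle.get()           # block until one is released
        self.in_use.add(item)
        return item

    def release(self, item):
        self.in_use.remove(item)
        self.idle.put_nowait(item)


pool = MiniPool(object, max_items=2)
a, b = pool.get(), pool.get()
pool.release(a)
c = pool.get()          # reuses `a` instead of creating a third object
print(c is a)           # True
```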
"me@ntucker.me"
] | me@ntucker.me |
4777b979f6c28f8a4170bab3f4fd94f6a7ffafe8 | afa2ebb439e6592caf42c507a789833b9fbf44b2 | /supervised_learning/0x0F-word_embeddings/4-fasttext.py | a146dc1491d8836f475b9aa168ce255865c9b637 | [] | no_license | anaruzz/holbertonschool-machine_learning | 64c66a0f1d489434dd0946193747ed296760e6c8 | 91300120d38acb6440a6dbb8c408b1193c07de88 | refs/heads/master | 2023-07-30T20:09:30.416167 | 2021-09-23T16:22:40 | 2021-09-23T16:22:40 | 279,293,274 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 710 | py | #!/usr/bin/env python3
"""
Script that creates and trains a gensim fastText model
"""
from gensim.models import FastText
def fasttext_model(sentences, size=100,
min_count=5, negative=5,
window=5, cbow=True,
iterations=5, seed=0, workers=1):
"""
Returns: the trained model
"""
model = FastText(sentences,
size=size,
min_count=min_count,
negative=negative,
window=window,
sg=not cbow,
iter=iterations,
seed=seed,
workers=workers
)
return model
| [
"laabidigh@gmail.com"
] | laabidigh@gmail.com |
03e0cd27d6677f5cd0a88b2b0f302e2ce6674f03 | 725ac5a0bf72829be627bf8dc82fdc51ba0f94ae | /NER/Bert_CRF_Ner/crf.py | 4cbd9ff657cd4c7be2f5c315e2061e1921994401 | [] | no_license | shawroad/NLP_pytorch_project | fa14b6e4a156229765e1d552901d0492d8e1def3 | 1272fed2dc8fef78a9ded0f1ae1644d613a3b57b | refs/heads/master | 2023-06-25T02:37:35.503251 | 2023-06-12T10:57:11 | 2023-06-12T10:57:11 | 229,694,655 | 530 | 104 | null | 2020-12-08T09:21:47 | 2019-12-23T06:54:29 | Python | UTF-8 | Python | false | false | 20,384 | py | import torch
import torch.nn as nn
from typing import List, Optional
class CRF(nn.Module):
"""Conditional random field.
This module implements a conditional random field [LMP01]_. The forward computation
of this class computes the log likelihood of the given sequence of tags and
emission score tensor. This class also has `~CRF.decode` method which finds
the best tag sequence given an emission score tensor using `Viterbi algorithm`_.
Args:
num_tags: Number of tags.
batch_first: Whether the first dimension corresponds to the size of a minibatch.
Attributes:
start_transitions (`~torch.nn.Parameter`): Start transition score tensor of size
``(num_tags,)``.
end_transitions (`~torch.nn.Parameter`): End transition score tensor of size
``(num_tags,)``.
transitions (`~torch.nn.Parameter`): Transition score tensor of size
``(num_tags, num_tags)``.
.. [LMP01] Lafferty, J., McCallum, A., Pereira, F. (2001).
"Conditional random fields: Probabilistic models for segmenting and
labeling sequence data". *Proc. 18th International Conf. on Machine
Learning*. Morgan Kaufmann. pp. 282–289.
.. _Viterbi algorithm: https://en.wikipedia.org/wiki/Viterbi_algorithm
"""
def __init__(self, num_tags: int, batch_first: bool = False) -> None:
if num_tags <= 0:
raise ValueError(f'invalid number of tags: {num_tags}')
super().__init__()
self.num_tags = num_tags
self.batch_first = batch_first
self.start_transitions = nn.Parameter(torch.empty(num_tags))
self.end_transitions = nn.Parameter(torch.empty(num_tags))
self.transitions = nn.Parameter(torch.empty(num_tags, num_tags))
self.reset_parameters()
def reset_parameters(self) -> None:
"""Initialize the transition parameters.
The parameters will be initialized randomly from a uniform distribution
between -0.1 and 0.1.
"""
nn.init.uniform_(self.start_transitions, -0.1, 0.1)
nn.init.uniform_(self.end_transitions, -0.1, 0.1)
nn.init.uniform_(self.transitions, -0.1, 0.1)
def __repr__(self) -> str:
return f'{self.__class__.__name__}(num_tags={self.num_tags})'
def forward(self, emissions: torch.Tensor,
tags: torch.LongTensor,
mask: Optional[torch.ByteTensor] = None,
reduction: str = 'mean') -> torch.Tensor:
"""Compute the conditional log likelihood of a sequence of tags given emission scores.
Args:
emissions (`~torch.Tensor`): Emission score tensor of size
``(seq_length, batch_size, num_tags)`` if ``batch_first`` is ``False``,
``(batch_size, seq_length, num_tags)`` otherwise.
tags (`~torch.LongTensor`): Sequence of tags tensor of size
``(seq_length, batch_size)`` if ``batch_first`` is ``False``,
``(batch_size, seq_length)`` otherwise.
mask (`~torch.ByteTensor`): Mask tensor of size ``(seq_length, batch_size)``
if ``batch_first`` is ``False``, ``(batch_size, seq_length)`` otherwise.
reduction: Specifies the reduction to apply to the output:
``none|sum|mean|token_mean``. ``none``: no reduction will be applied.
``sum``: the output will be summed over batches. ``mean``: the output will be
averaged over batches. ``token_mean``: the output will be averaged over tokens.
Returns:
`~torch.Tensor`: The log likelihood. This will have size ``(batch_size,)`` if
reduction is ``none``, ``()`` otherwise.
"""
if reduction not in ('none', 'sum', 'mean', 'token_mean'):
raise ValueError(f'invalid reduction: {reduction}')
if mask is None:
mask = torch.ones_like(tags, dtype=torch.uint8, device=tags.device)
if mask.dtype != torch.uint8:
mask = mask.byte()
self._validate(emissions, tags=tags, mask=mask)
if self.batch_first:
emissions = emissions.transpose(0, 1)
tags = tags.transpose(0, 1)
mask = mask.transpose(0, 1)
# shape: (batch_size,)
numerator = self._compute_score(emissions, tags, mask)
# shape: (batch_size,)
denominator = self._compute_normalizer(emissions, mask)
# shape: (batch_size,)
llh = numerator - denominator
if reduction == 'none':
return llh
if reduction == 'sum':
return llh.sum()
if reduction == 'mean':
return llh.mean()
return llh.sum() / mask.float().sum()
def decode(self, emissions: torch.Tensor,
mask: Optional[torch.ByteTensor] = None,
nbest: Optional[int] = None,
pad_tag: Optional[int] = None) -> List[List[List[int]]]:
"""Find the most likely tag sequence using Viterbi algorithm.
Args:
emissions (`~torch.Tensor`): Emission score tensor of size
``(seq_length, batch_size, num_tags)`` if ``batch_first`` is ``False``,
``(batch_size, seq_length, num_tags)`` otherwise.
mask (`~torch.ByteTensor`): Mask tensor of size ``(seq_length, batch_size)``
if ``batch_first`` is ``False``, ``(batch_size, seq_length)`` otherwise.
nbest (`int`): Number of most probable paths for each sequence
pad_tag (`int`): Tag at padded positions. Often input varies in length and
the length will be padded to the maximum length in the batch. Tags at
the padded positions will be assigned with a padding tag, i.e. `pad_tag`
Returns:
A PyTorch tensor of the best tag sequence for each batch of shape
(nbest, batch_size, seq_length)
"""
if nbest is None:
nbest = 1
if mask is None:
mask = torch.ones(emissions.shape[:2], dtype=torch.uint8,
device=emissions.device)
if mask.dtype != torch.uint8:
mask = mask.byte()
self._validate(emissions, mask=mask)
if self.batch_first:
emissions = emissions.transpose(0, 1)
mask = mask.transpose(0, 1)
if nbest == 1:
return self._viterbi_decode(emissions, mask, pad_tag).unsqueeze(0)
return self._viterbi_decode_nbest(emissions, mask, nbest, pad_tag)
def _validate(self, emissions: torch.Tensor,
tags: Optional[torch.LongTensor] = None,
mask: Optional[torch.ByteTensor] = None) -> None:
if emissions.dim() != 3:
raise ValueError(f'emissions must have dimension of 3, got {emissions.dim()}')
if emissions.size(2) != self.num_tags:
raise ValueError(
f'expected last dimension of emissions is {self.num_tags}, '
f'got {emissions.size(2)}')
if tags is not None:
if emissions.shape[:2] != tags.shape:
raise ValueError(
'the first two dimensions of emissions and tags must match, '
f'got {tuple(emissions.shape[:2])} and {tuple(tags.shape)}')
if mask is not None:
if emissions.shape[:2] != mask.shape:
raise ValueError(
'the first two dimensions of emissions and mask must match, '
f'got {tuple(emissions.shape[:2])} and {tuple(mask.shape)}')
no_empty_seq = not self.batch_first and mask[0].all()
no_empty_seq_bf = self.batch_first and mask[:, 0].all()
if not no_empty_seq and not no_empty_seq_bf:
raise ValueError('mask of the first timestep must all be on')
def _compute_score(self, emissions: torch.Tensor,
tags: torch.LongTensor,
mask: torch.ByteTensor) -> torch.Tensor:
# emissions: (seq_length, batch_size, num_tags)
# tags: (seq_length, batch_size)
# mask: (seq_length, batch_size)
seq_length, batch_size = tags.shape
mask = mask.float()
# Start transition score and first emission
# shape: (batch_size,)
score = self.start_transitions[tags[0]]
score += emissions[0, torch.arange(batch_size), tags[0]]
for i in range(1, seq_length):
# Transition score to next tag, only added if next timestep is valid (mask == 1)
# shape: (batch_size,)
score += self.transitions[tags[i - 1], tags[i]] * mask[i]
# Emission score for next tag, only added if next timestep is valid (mask == 1)
# shape: (batch_size,)
score += emissions[i, torch.arange(batch_size), tags[i]] * mask[i]
# End transition score
# shape: (batch_size,)
seq_ends = mask.long().sum(dim=0) - 1
# shape: (batch_size,)
last_tags = tags[seq_ends, torch.arange(batch_size)]
# shape: (batch_size,)
score += self.end_transitions[last_tags]
return score
def _compute_normalizer(self, emissions: torch.Tensor,
mask: torch.ByteTensor) -> torch.Tensor:
# emissions: (seq_length, batch_size, num_tags)
# mask: (seq_length, batch_size)
seq_length = emissions.size(0)
# Start transition score and first emission; score has size of
# (batch_size, num_tags) where for each batch, the j-th column stores
# the score that the first timestep has tag j
# shape: (batch_size, num_tags)
score = self.start_transitions + emissions[0]
for i in range(1, seq_length):
# Broadcast score for every possible next tag
# shape: (batch_size, num_tags, 1)
broadcast_score = score.unsqueeze(2)
# Broadcast emission score for every possible current tag
# shape: (batch_size, 1, num_tags)
broadcast_emissions = emissions[i].unsqueeze(1)
# Compute the score tensor of size (batch_size, num_tags, num_tags) where
# for each sample, entry at row i and column j stores the sum of scores of all
# possible tag sequences so far that end with transitioning from tag i to tag j
# and emitting
# shape: (batch_size, num_tags, num_tags)
next_score = broadcast_score + self.transitions + broadcast_emissions
# Sum over all possible current tags, but we're in score space, so a sum
# becomes a log-sum-exp: for each sample, entry i stores the sum of scores of
# all possible tag sequences so far, that end in tag i
# shape: (batch_size, num_tags)
next_score = torch.logsumexp(next_score, dim=1)
# Set score to the next score if this timestep is valid (mask == 1)
# shape: (batch_size, num_tags)
score = torch.where(mask[i].unsqueeze(1), next_score, score)
# End transition score
# shape: (batch_size, num_tags)
score += self.end_transitions
# Sum (log-sum-exp) over all possible tags
# shape: (batch_size,)
return torch.logsumexp(score, dim=1)
def _viterbi_decode(self, emissions: torch.FloatTensor,
mask: torch.ByteTensor,
                        pad_tag: Optional[int] = None) -> torch.Tensor:
# emissions: (seq_length, batch_size, num_tags)
# mask: (seq_length, batch_size)
# return: (batch_size, seq_length)
if pad_tag is None:
pad_tag = 0
device = emissions.device
seq_length, batch_size = mask.shape
# Start transition and first emission
# shape: (batch_size, num_tags)
score = self.start_transitions + emissions[0]
history_idx = torch.zeros((seq_length, batch_size, self.num_tags),
dtype=torch.long, device=device)
oor_idx = torch.zeros((batch_size, self.num_tags),
dtype=torch.long, device=device)
oor_tag = torch.full((seq_length, batch_size), pad_tag,
dtype=torch.long, device=device)
# - score is a tensor of size (batch_size, num_tags) where for every batch,
# value at column j stores the score of the best tag sequence so far that ends
# with tag j
# - history_idx saves where the best tags candidate transitioned from; this is used
# when we trace back the best tag sequence
# - oor_idx saves the best tags candidate transitioned from at the positions
# where mask is 0, i.e. out of range (oor)
# Viterbi algorithm recursive case: we compute the score of the best tag sequence
# for every possible next tag
for i in range(1, seq_length):
# Broadcast viterbi score for every possible next tag
# shape: (batch_size, num_tags, 1)
broadcast_score = score.unsqueeze(2)
# Broadcast emission score for every possible current tag
# shape: (batch_size, 1, num_tags)
broadcast_emission = emissions[i].unsqueeze(1)
# Compute the score tensor of size (batch_size, num_tags, num_tags) where
# for each sample, entry at row i and column j stores the score of the best
# tag sequence so far that ends with transitioning from tag i to tag j and emitting
# shape: (batch_size, num_tags, num_tags)
next_score = broadcast_score + self.transitions + broadcast_emission
# Find the maximum score over all possible current tag
# shape: (batch_size, num_tags)
next_score, indices = next_score.max(dim=1)
# Set score to the next score if this timestep is valid (mask == 1)
# and save the index that produces the next score
# shape: (batch_size, num_tags)
score = torch.where(mask[i].unsqueeze(-1), next_score, score)
indices = torch.where(mask[i].unsqueeze(-1), indices, oor_idx)
history_idx[i - 1] = indices
# End transition score
# shape: (batch_size, num_tags)
end_score = score + self.end_transitions
_, end_tag = end_score.max(dim=1)
# shape: (batch_size,)
seq_ends = mask.long().sum(dim=0) - 1
# insert the best tag at each sequence end (last position with mask == 1)
history_idx = history_idx.transpose(1, 0).contiguous()
history_idx.scatter_(1, seq_ends.view(-1, 1, 1).expand(-1, 1, self.num_tags),
end_tag.view(-1, 1, 1).expand(-1, 1, self.num_tags))
history_idx = history_idx.transpose(1, 0).contiguous()
# The most probable path for each sequence
best_tags_arr = torch.zeros((seq_length, batch_size),
dtype=torch.long, device=device)
best_tags = torch.zeros(batch_size, 1, dtype=torch.long, device=device)
for idx in range(seq_length - 1, -1, -1):
best_tags = torch.gather(history_idx[idx], 1, best_tags)
best_tags_arr[idx] = best_tags.data.view(batch_size)
return torch.where(mask, best_tags_arr, oor_tag).transpose(0, 1)
def _viterbi_decode_nbest(self, emissions: torch.FloatTensor,
mask: torch.ByteTensor,
nbest: int,
                              pad_tag: Optional[int] = None) -> torch.Tensor:
# emissions: (seq_length, batch_size, num_tags)
# mask: (seq_length, batch_size)
# return: (nbest, batch_size, seq_length)
if pad_tag is None:
pad_tag = 0
device = emissions.device
seq_length, batch_size = mask.shape
# Start transition and first emission
# shape: (batch_size, num_tags)
score = self.start_transitions + emissions[0]
history_idx = torch.zeros((seq_length, batch_size, self.num_tags, nbest),
dtype=torch.long, device=device)
oor_idx = torch.zeros((batch_size, self.num_tags, nbest),
dtype=torch.long, device=device)
oor_tag = torch.full((seq_length, batch_size, nbest), pad_tag,
dtype=torch.long, device=device)
        # - score is a tensor of size (batch_size, num_tags) where for every batch,
        # value at column j stores the score of the best tag sequence so far that ends
        # with tag j
        # - history_idx saves where the best tags candidate transitioned from; this is used
# when we trace back the best tag sequence
# - oor_idx saves the best tags candidate transitioned from at the positions
# where mask is 0, i.e. out of range (oor)
# Viterbi algorithm recursive case: we compute the score of the best tag sequence
# for every possible next tag
for i in range(1, seq_length):
if i == 1:
broadcast_score = score.unsqueeze(-1)
broadcast_emission = emissions[i].unsqueeze(1)
# shape: (batch_size, num_tags, num_tags)
next_score = broadcast_score + self.transitions + broadcast_emission
else:
broadcast_score = score.unsqueeze(-1)
broadcast_emission = emissions[i].unsqueeze(1).unsqueeze(2)
# shape: (batch_size, num_tags, nbest, num_tags)
next_score = broadcast_score + self.transitions.unsqueeze(1) + broadcast_emission
# Find the top `nbest` maximum score over all possible current tag
# shape: (batch_size, nbest, num_tags)
next_score, indices = next_score.view(batch_size, -1, self.num_tags).topk(nbest, dim=1)
if i == 1:
score = score.unsqueeze(-1).expand(-1, -1, nbest)
indices = indices * nbest
# convert to shape: (batch_size, num_tags, nbest)
next_score = next_score.transpose(2, 1)
indices = indices.transpose(2, 1)
# Set score to the next score if this timestep is valid (mask == 1)
# and save the index that produces the next score
# shape: (batch_size, num_tags, nbest)
score = torch.where(mask[i].unsqueeze(-1).unsqueeze(-1), next_score, score)
indices = torch.where(mask[i].unsqueeze(-1).unsqueeze(-1), indices, oor_idx)
history_idx[i - 1] = indices
# End transition score shape: (batch_size, num_tags, nbest)
end_score = score + self.end_transitions.unsqueeze(-1)
_, end_tag = end_score.view(batch_size, -1).topk(nbest, dim=1)
# shape: (batch_size,)
seq_ends = mask.long().sum(dim=0) - 1
# insert the best tag at each sequence end (last position with mask == 1)
history_idx = history_idx.transpose(1, 0).contiguous()
history_idx.scatter_(1, seq_ends.view(-1, 1, 1, 1).expand(-1, 1, self.num_tags, nbest),
end_tag.view(-1, 1, 1, nbest).expand(-1, 1, self.num_tags, nbest))
history_idx = history_idx.transpose(1, 0).contiguous()
# The most probable path for each sequence
best_tags_arr = torch.zeros((seq_length, batch_size, nbest),
dtype=torch.long, device=device)
best_tags = torch.arange(nbest, dtype=torch.long, device=device) \
.view(1, -1).expand(batch_size, -1)
for idx in range(seq_length - 1, -1, -1):
best_tags = torch.gather(history_idx[idx].view(batch_size, -1), 1, best_tags)
best_tags_arr[idx] = best_tags.data.view(batch_size, -1) // nbest
        return torch.where(mask.unsqueeze(-1), best_tags_arr, oor_tag).permute(2, 1, 0)
| ["shawroad@MacBook-ProTCL.local"] | shawroad@MacBook-ProTCL.local |
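The CRF code above computes the log likelihood as `numerator - denominator`, where the numerator is the score of the gold tag path (`_compute_score`) and the denominator is a log-sum-exp over all paths (`_compute_normalizer`). A pure-Python sketch of that identity on a toy 2-tag, 3-step chain, checked against brute-force enumeration — all the scores below are made-up illustrative numbers, not anything from the module:

```python
import math
from itertools import product

# Toy linear-chain CRF: 2 tags, 3 timesteps (illustrative numbers only).
start = [0.1, -0.2]                           # start_transitions[tag]
end = [0.3, 0.0]                              # end_transitions[tag]
trans = [[0.5, -0.1], [0.2, 0.4]]             # transitions[prev][cur]
emit = [[1.0, 0.2], [0.3, 0.8], [0.6, 0.1]]   # emissions[t][tag]

def path_score(tags):
    # start + first emission, then per-step transition + emission, then end
    s = start[tags[0]] + emit[0][tags[0]]
    for t in range(1, len(tags)):
        s += trans[tags[t - 1]][tags[t]] + emit[t][tags[t]]
    return s + end[tags[-1]]

# Forward algorithm in log space (mirrors _compute_normalizer above).
score = [start[j] + emit[0][j] for j in range(2)]
for t in range(1, 3):
    score = [
        math.log(sum(math.exp(score[i] + trans[i][j]) for i in range(2)))
        + emit[t][j]
        for j in range(2)
    ]
log_z = math.log(sum(math.exp(s + end[j]) for j, s in enumerate(score)))

# Brute force over all 2**3 tag paths must give the same normalizer.
brute = math.log(sum(math.exp(path_score(p))
                     for p in product(range(2), repeat=3)))
assert abs(log_z - brute) < 1e-9

# Log likelihood of a gold path = its score minus the normalizer (always < 0).
llh = path_score((0, 1, 1)) - log_z
print(round(llh, 4))
```

The same recursion underlies `_viterbi_decode`, with the log-sum-exp replaced by a max.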
e01a8c3188180bcc943b04780d8c808652a8520a | 1d928c3f90d4a0a9a3919a804597aa0a4aab19a3 | /python/nilearn/2016/12/test_second_level_model.py | 05348ad4246df5a09c80238579bceaaf62e77835 | [] | no_license | rosoareslv/SED99 | d8b2ff5811e7f0ffc59be066a5a0349a92cbb845 | a062c118f12b93172e31e8ca115ce3f871b64461 | refs/heads/main | 2023-02-22T21:59:02.703005 | 2021-01-28T19:40:51 | 2021-01-28T19:40:51 | 306,497,459 | 1 | 1 | null | 2020-11-24T20:56:18 | 2020-10-23T01:18:07 | null | UTF-8 | Python | false | false | 9,472 | py | # emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: nil -*-
# vi: set ft=python sts=4 ts=4 sw=4 et:
"""
Test the second level model.
"""
from __future__ import with_statement
import os
import numpy as np
from nibabel import load, Nifti1Image, save
from nistats.first_level_model import FirstLevelModel, run_glm
from nistats.second_level_model import SecondLevelModel
from nistats.design_matrix import create_second_level_design
from nose.tools import assert_true, assert_equal, assert_raises
from numpy.testing import (assert_almost_equal, assert_array_equal)
from nibabel.tmpdirs import InTemporaryDirectory
import pandas as pd
# This directory path
BASEDIR = os.path.dirname(os.path.abspath(__file__))
FUNCFILE = os.path.join(BASEDIR, 'functional.nii.gz')
def write_fake_fmri_data(shapes, rk=3, affine=np.eye(4)):
mask_file, fmri_files, design_files = 'mask.nii', [], []
for i, shape in enumerate(shapes):
fmri_files.append('fmri_run%d.nii' % i)
data = np.random.randn(*shape)
data[1:-1, 1:-1, 1:-1] += 100
save(Nifti1Image(data, affine), fmri_files[-1])
design_files.append('dmtx_%d.csv' % i)
pd.DataFrame(np.random.randn(shape[3], rk),
columns=['', '', '']).to_csv(design_files[-1])
save(Nifti1Image((np.random.rand(*shape[:3]) > .5).astype(np.int8),
affine), mask_file)
return mask_file, fmri_files, design_files
def test_high_level_glm_with_paths():
with InTemporaryDirectory():
shapes = ((7, 8, 9, 1),)
mask, FUNCFILE, _ = write_fake_fmri_data(shapes)
FUNCFILE = FUNCFILE[0]
func_img = load(FUNCFILE)
# ols case
model = SecondLevelModel(mask=mask)
# asking for contrast before model fit gives error
assert_raises(ValueError, model.compute_contrast, [])
# fit model
Y = [func_img] * 4
X = pd.DataFrame([[1]] * 4)
model = model.fit(Y, design_matrix=X)
c1 = np.eye(len(model.design_matrix_.columns))[0]
z_image = model.compute_contrast(c1, None, 'z_score')
assert_true(isinstance(z_image, Nifti1Image))
assert_array_equal(z_image.get_affine(), load(mask).get_affine())
# Delete objects attached to files to avoid WindowsError when deleting
# temporary directory
del z_image, FUNCFILE, func_img, model
def test_fmri_inputs():
# Test processing of FMRI inputs
with InTemporaryDirectory():
# prepare fake data
p, q = 80, 10
X = np.random.randn(p, q)
shapes = ((7, 8, 9, 10),)
mask, FUNCFILE, _ = write_fake_fmri_data(shapes)
FUNCFILE = FUNCFILE[0]
func_img = load(FUNCFILE)
T = func_img.shape[-1]
des = pd.DataFrame(np.ones((T, 1)), columns=['a'])
des_fname = 'design.csv'
des.to_csv(des_fname)
# prepare correct input first level models
flm = FirstLevelModel(subject_id='1').fit(FUNCFILE, design_matrices=des)
flms = [flm, flm, flm]
# prepare correct input dataframe and lists
shapes = ((7, 8, 9, 1),)
_, FUNCFILE, _ = write_fake_fmri_data(shapes)
FUNCFILE = FUNCFILE[0]
dfcols = ['subject_id', 'map_name', 'effects_map_path']
dfrows = [['1', 'a', FUNCFILE], ['2', 'a', FUNCFILE],
['3', 'a', FUNCFILE]]
niidf = pd.DataFrame(dfrows, columns=dfcols)
niimgs = [FUNCFILE, FUNCFILE, FUNCFILE]
flcondstr = [('a', 'a')]
flcondval = [('a', np.array([1]))]
confounds = pd.DataFrame([['1', 1], ['2', 2], ['3', 3]],
columns=['subject_id', 'conf1'])
sdes = pd.DataFrame(X[:3, :3], columns=['a', 'b', 'c'])
# smoke tests with correct input
# First level models as input
SecondLevelModel(mask=mask).fit(flms, flcondstr)
SecondLevelModel().fit(flms, flcondval)
SecondLevelModel().fit(flms, flcondval, confounds)
SecondLevelModel().fit(flms, flcondval, None, sdes)
# dataframes as input
SecondLevelModel().fit(niidf, None)
SecondLevelModel().fit(niidf, None, confounds)
SecondLevelModel().fit(niidf, None, confounds, sdes)
SecondLevelModel().fit(niidf, None, None, sdes)
SecondLevelModel().fit(niidf, flcondstr)
SecondLevelModel().fit(niidf, flcondstr, confounds)
SecondLevelModel().fit(niidf, flcondstr, confounds, sdes)
SecondLevelModel().fit(niidf, flcondstr, None, sdes)
# niimgs as input
SecondLevelModel().fit(niimgs, None, None, sdes)
# test wrong input errors
# test first level model requirements
assert_raises(ValueError, SecondLevelModel().fit, flm, flcondval)
assert_raises(ValueError, SecondLevelModel().fit, [flm], flcondval)
assert_raises(ValueError, SecondLevelModel().fit, flms)
assert_raises(ValueError, SecondLevelModel().fit, flms + [''],
flcondval)
# test dataframe requirements
assert_raises(ValueError, SecondLevelModel().fit, niidf['subject_id'])
# test niimgs requirements
assert_raises(ValueError, SecondLevelModel().fit, niimgs)
assert_raises(ValueError, SecondLevelModel().fit, niimgs + [[]], sdes)
# test first_level_conditions, confounds, and design
assert_raises(ValueError, SecondLevelModel().fit, flms, ['', []])
assert_raises(ValueError, SecondLevelModel().fit, flms, flcondval, [])
assert_raises(ValueError, SecondLevelModel().fit, flms, flcondval,
confounds['conf1'])
assert_raises(ValueError, SecondLevelModel().fit, flms, flcondval,
None, [])
def _first_level_dataframe():
conditions = ['map_name', 'subject_id', 'map_path']
names = ['con_01', 'con_02', 'con_01', 'con_02']
subjects = ['01', '01', '02', '02']
maps = ['', '', '', '']
dataframe = pd.DataFrame({'map_name': names,
'subject_id': subjects,
'effects_map_path': maps})
return dataframe
def test_create_second_level_design():
with InTemporaryDirectory():
shapes = ((7, 8, 9, 1),)
mask, FUNCFILE, _ = write_fake_fmri_data(shapes)
FUNCFILE = FUNCFILE[0]
first_level_input = _first_level_dataframe()
first_level_input['effects_map_path'] = [FUNCFILE] * 4
confounds = [['01', 0.1], ['02', 0.75]]
confounds = pd.DataFrame(confounds, columns=['subject_id', 'f1'])
design = create_second_level_design(first_level_input, confounds)
expected_design = np.array([[1, 0, 1, 0, 0.1], [0, 1, 1, 0, 0.1],
[1, 0, 0, 1, 0.75], [0, 1, 0, 1, 0.75]])
assert_array_equal(design, expected_design)
assert_true(len(design.columns) == 2 + 2 + 1)
assert_true(len(design) == 2 + 2)
model = SecondLevelModel(mask=mask).fit(first_level_input,
confounds=confounds)
design = model.design_matrix_
assert_array_equal(design, expected_design)
assert_true(len(design.columns) == 2 + 2 + 1)
assert_true(len(design) == 2 + 2)
def test_second_level_model_glm_computation():
with InTemporaryDirectory():
shapes = ((7, 8, 9, 1),)
mask, FUNCFILE, _ = write_fake_fmri_data(shapes)
FUNCFILE = FUNCFILE[0]
func_img = load(FUNCFILE)
# ols case
model = SecondLevelModel(mask=mask)
Y = [func_img] * 4
X = pd.DataFrame([[1]] * 4)
model = model.fit(Y, design_matrix=X)
labels1 = model.labels_
results1 = model.results_
labels2, results2 = run_glm(model.masker_.transform(Y), X, 'ols')
assert_almost_equal(labels1, labels2, decimal=1)
assert_equal(len(results1), len(results2))
def test_second_level_model_contrast_computation():
with InTemporaryDirectory():
shapes = ((7, 8, 9, 1),)
mask, FUNCFILE, _ = write_fake_fmri_data(shapes)
FUNCFILE = FUNCFILE[0]
func_img = load(FUNCFILE)
# ols case
model = SecondLevelModel(mask=mask)
# asking for contrast before model fit gives error
assert_raises(ValueError, model.compute_contrast, [])
# fit model
Y = [func_img] * 4
X = pd.DataFrame([[1]] * 4, columns=['c1'])
model = model.fit(Y, design_matrix=X)
ncol = len(model.design_matrix_.columns)
c1, cnull = np.eye(ncol)[0, :], np.zeros(ncol)
# smoke test for different contrasts in fixed effects
model.compute_contrast(c1)
model.compute_contrast(c1, 'F')
model.compute_contrast(c1, 'F', 'z_score')
model.compute_contrast(c1, 'F', 'stat')
model.compute_contrast(c1, 'F', 'p_value')
model.compute_contrast(c1, 'F', 'effect_size')
model.compute_contrast(c1, 'F', 'effect_variance')
# formula should work (passing variable name directly)
model.compute_contrast('c1')
# passing null contrast should give back a value error
assert_raises(ValueError, model.compute_contrast, cnull)
# passing wrong parameters
assert_raises(ValueError, model.compute_contrast, [])
assert_raises(ValueError, model.compute_contrast, c1, '', '')
assert_raises(ValueError, model.compute_contrast, c1, '', [])
| ["rodrigosoaresilva@gmail.com"] | rodrigosoaresilva@gmail.com |
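`test_create_second_level_design` above expects a design matrix with one indicator column per map name, one per subject, plus a confound column. A small pure-Python sketch of how such a design can be assembled — the column ordering here is an assumption made to match the test's `expected_design`, not nistats' exact implementation:

```python
def second_level_design(rows, confounds):
    """rows: list of (map_name, subject_id); confounds: {subject_id: value}."""
    map_names = sorted({m for m, _ in rows})
    subjects = sorted({s for _, s in rows})
    design = []
    for map_name, subject in rows:
        design.append(
            [1.0 if map_name == m else 0.0 for m in map_names]     # condition dummies
            + [1.0 if subject == s else 0.0 for s in subjects]     # subject dummies
            + [confounds[subject]]                                 # confound value
        )
    return design

rows = [("con_01", "01"), ("con_02", "01"), ("con_01", "02"), ("con_02", "02")]
design = second_level_design(rows, {"01": 0.1, "02": 0.75})

# Same shape and values as expected_design in the test above.
assert design == [
    [1, 0, 1, 0, 0.1],
    [0, 1, 1, 0, 0.1],
    [1, 0, 0, 1, 0.75],
    [0, 1, 0, 1, 0.75],
]
```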
b7ba2e20045fe5087b250059c90d811008bb3a0d | 6a7e9e0e9c08132166f566bd88ae1c46ff8f9c0a | /azure-mgmt-recoveryservicesbackup/azure/mgmt/recoveryservicesbackup/operations/backup_workload_items_operations.py | 1604739bf2f67ed41caf837bd9dff06514352a54 | ["MIT"] | permissive | ashirey-msft/azure-sdk-for-python | d92381d11c48f194ec9f989f5f803db614fb73f2 | e04778e13306dad2e8fb044970215bad6296afb6 | refs/heads/master | 2020-03-23T06:05:39.283442 | 2018-09-15T00:18:26 | 2018-09-15T00:18:26 | 141,188,192 | 0 | 1 | MIT | 2018-07-16T20:02:52 | 2018-07-16T20:02:52 | null | UTF-8 | Python | false | false | 5,793 | py |
# coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is
# regenerated.
# --------------------------------------------------------------------------
import uuid
from msrest.pipeline import ClientRawResponse
from msrestazure.azure_exceptions import CloudError
from .. import models
class BackupWorkloadItemsOperations(object):
"""BackupWorkloadItemsOperations operations.
:param client: Client for service requests.
:param config: Configuration of service client.
:param serializer: An object model serializer.
:param deserializer: An object model deserializer.
:ivar api_version: Client Api Version. Constant value: "2016-12-01".
"""
models = models
def __init__(self, client, config, serializer, deserializer):
self._client = client
self._serialize = serializer
self._deserialize = deserializer
self.api_version = "2016-12-01"
self.config = config
def list(
self, vault_name, resource_group_name, fabric_name, container_name, filter=None, skip_token=None, custom_headers=None, raw=False, **operation_config):
"""Provides a pageable list of workload item of a specific container
according to the query filter and the pagination parameters.
:param vault_name: The name of the recovery services vault.
:type vault_name: str
:param resource_group_name: The name of the resource group where the
recovery services vault is present.
:type resource_group_name: str
:param fabric_name: Fabric name associated with the container.
:type fabric_name: str
:param container_name: Name of the container.
:type container_name: str
:param filter: OData filter options.
:type filter: str
:param skip_token: skipToken Filter.
:type skip_token: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: An iterator like instance of WorkloadItemResource
:rtype:
~azure.mgmt.recoveryservicesbackup.models.WorkloadItemResourcePaged[~azure.mgmt.recoveryservicesbackup.models.WorkloadItemResource]
:raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`
"""
def internal_paging(next_link=None, raw=False):
if not next_link:
# Construct URL
url = self.list.metadata['url']
path_format_arguments = {
'vaultName': self._serialize.url("vault_name", vault_name, 'str'),
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str'),
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str'),
'fabricName': self._serialize.url("fabric_name", fabric_name, 'str'),
'containerName': self._serialize.url("container_name", container_name, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
if filter is not None:
query_parameters['$filter'] = self._serialize.query("filter", filter, 'str')
if skip_token is not None:
query_parameters['$skipToken'] = self._serialize.query("skip_token", skip_token, 'str')
else:
url = next_link
query_parameters = {}
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.get(url, query_parameters)
response = self._client.send(
request, header_parameters, stream=False, **operation_config)
if response.status_code not in [200]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
return response
# Deserialize response
deserialized = models.WorkloadItemResourcePaged(internal_paging, self._deserialize.dependencies)
if raw:
header_dict = {}
client_raw_response = models.WorkloadItemResourcePaged(internal_paging, self._deserialize.dependencies, header_dict)
return client_raw_response
return deserialized
list.metadata = {'url': '/Subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.RecoveryServices/vaults/{vaultName}/backupFabrics/{fabricName}/protectionContainers/{containerName}/items'}
| ["noreply@github.com"] | ashirey-msft.noreply@github.com |
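The `internal_paging` closure in the Azure operation above fetches the first page from the templated URL and then follows each response's next link until the service stops returning one. The same pattern, sketched as a generic generator over a hypothetical `fetch(url)` callable — the names and the fake response dicts below are illustrative, not part of the Azure SDK:

```python
def iter_pages(fetch, first_url):
    """Yield each page dict, following page['next_link'] until it is missing."""
    url = first_url
    while url:
        page = fetch(url)
        yield page
        url = page.get("next_link")

# Fake three-page service for demonstration.
pages = {
    "/items?page=1": {"value": [1, 2], "next_link": "/items?page=2"},
    "/items?page=2": {"value": [3], "next_link": "/items?page=3"},
    "/items?page=3": {"value": [4, 5], "next_link": None},
}
items = [x for page in iter_pages(pages.get, "/items?page=1")
         for x in page["value"]]
print(items)  # -> [1, 2, 3, 4, 5]
```

Wrapping the loop in a generator keeps the transport details (headers, request IDs, deserialization) in `fetch`, which is essentially what the SDK's `Paged` classes do.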
dd85d8be247805272abc012764d2e3e69ca6f17d | 83f9454342304d396910ce1230bfb3c61b535868 | /pygamestartercode-cheeseian-master/00-IntroToPython/syntax_reference.py | 266e2bcad1900d7820827111f10ebf29baa6b00f | [
"MIT"
] | permissive | rhit-catapult/2021-session2-individual-repos | ce3016549f6e80532c601318aeb3595cf8fa1a24 | f3afc0503c658737791794ad8506eaf13469f9ce | refs/heads/main | 2023-06-21T16:54:02.781289 | 2021-07-16T20:07:14 | 2021-07-16T20:07:14 | 386,750,928 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 48 | py | def main():
print("hello, world!")
main()
| ["fisherds@rose-hulman.edu"] | fisherds@rose-hulman.edu |
c897c38ea593fd6d028a685aaec707ba4871e9b4 | dcce56815dca2b18039e392053376636505ce672 | /dumpscripts/pkgutil_nested.py | b4bb0040cc6ae75c9a0913c71ae015783378d7f1 | [] | no_license | robertopauletto/PyMOTW-it_3.0 | 28ff05d8aeccd61ade7d4107a971d9d2576fb579 | c725df4a2aa2e799a969e90c64898f08b7eaad7d | refs/heads/master | 2021-01-20T18:51:30.512327 | 2020-01-09T19:30:14 | 2020-01-09T19:30:14 | 63,536,756 | 4 | 1 | null | null | null | null | UTF-8 | Python | false | false | 249 | py | # pkgutil_nested.py
import nested
import nested.shallow
print('nested.shallow:', nested.shallow.__file__)
nested.shallow.func()
print()
import nested.second.deep
print('nested.second.deep:', nested.second.deep.__file__)
nested.second.deep.func()
| ["roberto.pauletto@gmail.com"] | roberto.pauletto@gmail.com |
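`pkgutil_nested.py` above relies on a `nested` package (with a `second.deep` submodule) living next to the script. A self-contained way to reproduce the same layout is to build the package on disk and import it — the `nested_demo` name and `func` body here are hypothetical stand-ins for the example's package:

```python
import importlib
import os
import sys
import tempfile

# Build a tiny nested package on disk: nested_demo/second/deep.py
root = tempfile.mkdtemp()
pkg = os.path.join(root, "nested_demo", "second")
os.makedirs(pkg)
for d in (os.path.join(root, "nested_demo"), pkg):
    with open(os.path.join(d, "__init__.py"), "w") as f:
        f.write("")
with open(os.path.join(pkg, "deep.py"), "w") as f:
    f.write("def func():\n    return 'deep'\n")

# Make the temp dir importable, then import the submodule like the script does.
sys.path.insert(0, root)
deep = importlib.import_module("nested_demo.second.deep")
print("nested_demo.second.deep:", deep.__file__)
assert deep.func() == "deep"
```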
c929f35734c74a445856865534f29bc5fdd8f689 | 5de0c0e76bdde469156d057007a5008a63a0d66b | /buggernaut/Area.py | 5abcc8083016ae2fc276ffc7c8f945084c107c7b | [] | no_license | mattharkness/sixthdev | 6bcfd1c490efafb114dc5f014c6e5f1d91d56b4d | a7df929147d82d225606c216f69c48d898e19ebe | refs/heads/master | 2023-06-08T05:57:38.928657 | 2021-06-15T16:53:15 | 2021-06-15T16:53:15 | 338,441,562 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 134 | py | __ver__="$Id$"
from strongbox import *
auto = None
class Area(Strongbox):
ID = attr(int, default=auto)
area = attr(str)
| ["sabren"] | sabren |
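`Strongbox` and `attr` in `Area.py` above come from the third-party strongbox library. For readers without it, here is a rough stdlib approximation of a typed attribute with a default, built on the descriptor protocol — a sketch of the idea, not strongbox's actual API or behavior:

```python
class Attr:
    """Typed attribute with a default; rejects values of the wrong type."""

    def __init__(self, typ, default=None):
        self.typ, self.default = typ, default

    def __set_name__(self, owner, name):
        self.name = "_" + name          # per-instance storage slot

    def __get__(self, obj, owner=None):
        if obj is None:
            return self
        return getattr(obj, self.name, self.default)

    def __set__(self, obj, value):
        if value is not None and not isinstance(value, self.typ):
            raise TypeError("expected %s" % self.typ.__name__)
        setattr(obj, self.name, value)

class Area:
    ID = Attr(int, default=None)
    area = Attr(str)

a = Area()
a.area = "downtown"
assert a.ID is None and a.area == "downtown"
try:
    a.ID = "not an int"                 # wrong type is rejected
except TypeError:
    pass
else:
    raise AssertionError("expected TypeError")
```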
a9e503dace64170111dcdcb34e55dd72123e8a86 | 51885733158fe128158e46440eb64994f85898af | /seleniumbase/fixtures/js_utils.py | 1f8c7372fc9fcb387bd5442da26fcc1421f0e529 | [
"MIT"
] | permissive | Jionhi/SeleniumBase | a70bf02dea75669fe1f0cb4c90023ebec649700c | cc5d6cc58e72512b992e2b28a69ce22bcb5b91fd | refs/heads/master | 2023-08-28T23:13:24.850403 | 2021-10-28T23:13:37 | 2021-10-28T23:13:37 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 33,699 | py | """
This module contains useful Javascript utility methods for base_case.py
These helper methods SHOULD NOT be called directly from tests.
"""
import re
import requests
import time
from selenium.common.exceptions import NoSuchElementException
from selenium.common.exceptions import WebDriverException
from seleniumbase import config as sb_config
from seleniumbase.config import settings
from seleniumbase.fixtures import constants
from seleniumbase.fixtures import shared_utils
def wait_for_ready_state_complete(driver, timeout=settings.LARGE_TIMEOUT):
"""
The DOM (Document Object Model) has a property called "readyState".
When the value of this becomes "complete", page resources are considered
fully loaded (although AJAX and other loads might still be happening).
This method will wait until document.readyState == "complete".
This may be redundant, as methods already wait for page elements to load.
If the timeout is exceeded, the test will still continue
because readyState == "interactive" may be good enough.
(Previously, tests would fail immediately if exceeding the timeout.)
"""
start_ms = time.time() * 1000.0
stop_ms = start_ms + (timeout * 1000.0)
for x in range(int(timeout * 10)):
shared_utils.check_if_time_limit_exceeded()
try:
ready_state = driver.execute_script("return document.readyState")
except WebDriverException:
# Bug fix for: [Permission denied to access property "document"]
time.sleep(0.03)
return True
if ready_state == "complete":
time.sleep(0.01) # Better be sure everything is done loading
return True
else:
now_ms = time.time() * 1000.0
if now_ms >= stop_ms:
break
time.sleep(0.1)
return False # readyState stayed "interactive" (Not "complete")
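`wait_for_ready_state_complete` above is an instance of a general poll-until-predicate-or-deadline pattern: check a condition, sleep briefly, and give up (without raising) once the deadline passes. A generic stdlib sketch of that pattern, independent of Selenium — the predicate, timeout, and interval here are illustrative:

```python
import time

def wait_until(predicate, timeout=3.0, interval=0.1):
    """Poll predicate() until it returns True or the deadline passes."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False

# Simulate a "readyState" that flips to complete after a few polls.
state = {"calls": 0}
def ready():
    state["calls"] += 1
    return state["calls"] >= 3

assert wait_until(ready, timeout=2.0, interval=0.01) is True
assert wait_until(lambda: False, timeout=0.05, interval=0.01) is False
```

Returning `False` instead of raising mirrors the method above, which lets the caller continue when `readyState` only reaches `"interactive"`.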
def execute_async_script(driver, script, timeout=settings.EXTREME_TIMEOUT):
driver.set_script_timeout(timeout)
return driver.execute_async_script(script)
def wait_for_angularjs(driver, timeout=settings.LARGE_TIMEOUT, **kwargs):
if not settings.WAIT_FOR_ANGULARJS:
return
NG_WRAPPER = (
"%(prefix)s"
"var $elm=document.querySelector("
"'[data-ng-app],[ng-app],.ng-scope')||document;"
"if(window.angular && angular.getTestability){"
"angular.getTestability($elm).whenStable(%(handler)s)"
"}else{"
"var $inj;try{$inj=angular.element($elm).injector()||"
"angular.injector(['ng'])}catch(ex){"
"$inj=angular.injector(['ng'])};$inj.get=$inj.get||"
"$inj;$inj.get('$browser')."
"notifyWhenNoOutstandingRequests(%(handler)s)}"
"%(suffix)s"
)
def_pre = "var cb=arguments[arguments.length-1];if(window.angular){"
prefix = kwargs.pop("prefix", def_pre)
handler = kwargs.pop("handler", "function(){cb(true)}")
suffix = kwargs.pop("suffix", "}else{cb(false)}")
script = NG_WRAPPER % {
"prefix": prefix,
"handler": handler,
"suffix": suffix,
}
try:
execute_async_script(driver, script, timeout=timeout)
except Exception:
time.sleep(0.05)
def is_html_inspector_activated(driver):
try:
driver.execute_script("HTMLInspector") # Fails if not defined
return True
except Exception:
return False
def is_jquery_activated(driver):
try:
driver.execute_script("jQuery('html')") # Fails if jq is not defined
return True
except Exception:
return False
def wait_for_jquery_active(driver, timeout=None):
if not timeout:
timeout = int(settings.MINI_TIMEOUT * 10.0)
else:
timeout = int(timeout * 10.0)
for x in range(timeout):
# jQuery needs a small amount of time to activate.
try:
driver.execute_script("jQuery('html')")
wait_for_ready_state_complete(driver)
wait_for_angularjs(driver)
return
except Exception:
time.sleep(0.1)
def raise_unable_to_load_jquery_exception(driver):
has_csp_error = False
csp_violation = "violates the following Content Security Policy directive"
browser_logs = []
try:
browser_logs = driver.get_log("browser")
except (ValueError, WebDriverException):
pass
for entry in browser_logs:
if entry["level"] == "SEVERE":
if csp_violation in entry["message"]:
has_csp_error = True
if has_csp_error:
raise Exception(
"""Unable to load jQuery on "%s" due to a violation """
"""of the website's Content Security Policy directive. """
"""To override this policy, add "--disable-csp" on the """
"""command-line when running your tests.""" % driver.current_url
)
else:
raise Exception(
"""Unable to load jQuery on "%s" because this website """
"""restricts external JavaScript resources from loading."""
% driver.current_url
)
def activate_jquery(driver):
"""If "jQuery is not defined", use this method to activate it for use.
This happens because jQuery is not always defined on web sites."""
try:
# Let's first find out if jQuery is already defined.
driver.execute_script("jQuery('html');")
# Since that command worked, jQuery is defined. Let's return.
return
except Exception:
# jQuery is not currently defined. Let's proceed by defining it.
pass
jquery_js = constants.JQuery.MIN_JS
add_js_link(driver, jquery_js)
for x in range(int(settings.MINI_TIMEOUT * 10.0)):
# jQuery needs a small amount of time to activate.
try:
driver.execute_script("jQuery('html');")
return
except Exception:
time.sleep(0.1)
try:
add_js_link(driver, jquery_js)
time.sleep(0.1)
driver.execute_script("jQuery('head');")
except Exception:
pass
# Since jQuery still isn't activating, give up and raise an exception
raise_unable_to_load_jquery_exception(driver)
def are_quotes_escaped(string):
if string.count("\\'") != string.count("'") or (
string.count('\\"') != string.count('"')
):
return True
return False
def escape_quotes_if_needed(string):
"""
re.escape() works differently in Python 3.7.0 than earlier versions:
Python 3.6.5:
>>> import re
>>> re.escape('"')
'\\"'
Python 3.7.0:
>>> import re
>>> re.escape('"')
'"'
SeleniumBase needs quotes to be properly escaped for Javascript calls.
"""
if are_quotes_escaped(string):
if string.count("'") != string.count("\\'"):
string = string.replace("'", "\\'")
if string.count('"') != string.count('\\"'):
string = string.replace('"', '\\"')
return string
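For reference, the quote-escaping pair above can be exercised in isolation. The sketch below re-implements the same count-based logic standalone (the function name is reused only for illustration; this is not the library's own module):

```python
def escape_quotes_if_needed(string):
    """Backslash-escape quotes that are not already escaped, using the
    same occurrence-count check as the helpers above: if the number of
    quotes differs from the number of backslash-quote pairs, some
    quotes are still raw and need escaping."""
    if string.count("'") != string.count("\\'"):
        string = string.replace("'", "\\'")
    if string.count('"') != string.count('\\"'):
        string = string.replace('"', '\\"')
    return string
```

A string whose quotes are all pre-escaped passes through unchanged, which is what keeps the helper safe to call more than once on already-clean input.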
def safe_execute_script(driver, script):
"""When executing a script that contains a jQuery command,
it's important that the jQuery library has been loaded first.
This method will load jQuery if it wasn't already loaded."""
try:
driver.execute_script(script)
except Exception:
# The likely reason this fails is because: "jQuery is not defined"
activate_jquery(driver) # It's a good thing we can define it here
driver.execute_script(script)
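safe_execute_script() follows a general "try, recover, retry once" shape that several helpers in this file share. A standalone sketch of just that shape (the helper name is hypothetical, not part of the library):

```python
def run_with_fallback(action, recover):
    """Try action(); if it raises, run recover() and try exactly once
    more (the pattern safe_execute_script() uses, where recover() is
    'inject jQuery into the page')."""
    try:
        return action()
    except Exception:
        recover()
        return action()
```

The second call is deliberately unguarded: if the action still fails after recovery, the original-style exception should propagate to the caller.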
def wait_for_css_query_selector(
driver, selector, timeout=settings.SMALL_TIMEOUT
):
element = None
start_ms = time.time() * 1000.0
stop_ms = start_ms + (timeout * 1000.0)
for x in range(int(timeout * 10)):
try:
selector = re.escape(selector)
selector = escape_quotes_if_needed(selector)
element = driver.execute_script(
"""return document.querySelector('%s')""" % selector
)
if element:
return element
except Exception:
element = None
if not element:
now_ms = time.time() * 1000.0
if now_ms >= stop_ms:
break
time.sleep(0.1)
raise NoSuchElementException(
"Element {%s} was not present after %s seconds!" % (selector, timeout)
)
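wait_for_css_query_selector() polls roughly ten times per second against a millisecond deadline. The same polling skeleton, stripped of the WebDriver specifics (the helper name and defaults are illustrative):

```python
import time

def poll_until(predicate, timeout=7.0, interval=0.1):
    """Call predicate() repeatedly until it returns a truthy value or
    the timeout (in seconds) expires. Returns the truthy result, or
    None on timeout -- mirroring the start_ms/stop_ms bookkeeping in
    the function above."""
    stop_ms = time.time() * 1000.0 + timeout * 1000.0
    while time.time() * 1000.0 < stop_ms:
        result = predicate()
        if result:
            return result
        time.sleep(interval)
    return None
```

Returning the predicate's own result (rather than a bare True) lets callers reuse the found object directly, just as the original returns the located element.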
def highlight_with_js(driver, selector, loops, o_bs):
script = (
"""document.querySelector('%s').style.boxShadow =
'0px 0px 6px 6px rgba(128, 128, 128, 0.5)';"""
% selector
)
try:
driver.execute_script(script)
except Exception:
return
    colors = [
        "rgba(255, 0, 0, 1)",
        "rgba(128, 0, 128, 1)",
        "rgba(0, 0, 255, 1)",
        "rgba(0, 255, 0, 1)",
        "rgba(128, 128, 0, 1)",
        "rgba(128, 0, 128, 1)",
    ]
    for n in range(loops):
        for color in colors:
            script = (
                """document.querySelector('%s').style.boxShadow =
                '0px 0px 6px 6px %s';"""
                % (selector, color)
            )
            driver.execute_script(script)
            time.sleep(0.0181)
script = """document.querySelector('%s').style.boxShadow =
'%s';""" % (
selector,
o_bs,
)
driver.execute_script(script)
def highlight_with_jquery(driver, selector, loops, o_bs):
script = (
"""jQuery('%s').css('box-shadow',
'0px 0px 6px 6px rgba(128, 128, 128, 0.5)');"""
% selector
)
safe_execute_script(driver, script)
    colors = [
        "rgba(255, 0, 0, 1)",
        "rgba(128, 0, 128, 1)",
        "rgba(0, 0, 255, 1)",
        "rgba(0, 255, 0, 1)",
        "rgba(128, 128, 0, 1)",
        "rgba(128, 0, 128, 1)",
    ]
    for n in range(loops):
        for color in colors:
            script = (
                """jQuery('%s').css('box-shadow',
                '0px 0px 6px 6px %s');"""
                % (selector, color)
            )
            driver.execute_script(script)
            time.sleep(0.0181)
script = """jQuery('%s').css('box-shadow', '%s');""" % (selector, o_bs)
driver.execute_script(script)
def add_css_link(driver, css_link):
script_to_add_css = """function injectCSS(css) {
var head_tag=document.getElementsByTagName("head")[0];
var link_tag=document.createElement("link");
link_tag.rel="stylesheet";
link_tag.type="text/css";
link_tag.href=css;
link_tag.crossorigin="anonymous";
head_tag.appendChild(link_tag);
}
injectCSS("%s");"""
css_link = escape_quotes_if_needed(css_link)
driver.execute_script(script_to_add_css % css_link)
def add_js_link(driver, js_link):
script_to_add_js = """function injectJS(link) {
var body_tag=document.getElementsByTagName("body")[0];
var script_tag=document.createElement("script");
script_tag.src=link;
script_tag.type="text/javascript";
script_tag.crossorigin="anonymous";
    script_tag.defer = true;
script_tag.onload=function() { null };
body_tag.appendChild(script_tag);
}
injectJS("%s");"""
js_link = escape_quotes_if_needed(js_link)
driver.execute_script(script_to_add_js % js_link)
def add_css_style(driver, css_style):
add_css_style_script = """function injectStyle(css) {
var head_tag=document.getElementsByTagName("head")[0];
var style_tag=document.createElement("style");
style_tag.type="text/css";
style_tag.appendChild(document.createTextNode(css));
head_tag.appendChild(style_tag);
}
injectStyle("%s");"""
css_style = css_style.replace("\n", "")
css_style = escape_quotes_if_needed(css_style)
driver.execute_script(add_css_style_script % css_style)
def add_js_code_from_link(driver, js_link):
if js_link.startswith("//"):
js_link = "http:" + js_link
js_code = requests.get(js_link).text
add_js_code_script = (
"""var body_tag=document.getElementsByTagName('body').item(0);"""
"""var script_tag=document.createElement("script");"""
"""script_tag.type="text/javascript";"""
"""script_tag.onload=function() { null };"""
"""script_tag.appendChild(document.createTextNode("%s"));"""
"""body_tag.appendChild(script_tag);"""
)
js_code = js_code.replace("\n", " ")
js_code = escape_quotes_if_needed(js_code)
driver.execute_script(add_js_code_script % js_code)
def add_js_code(driver, js_code):
add_js_code_script = (
"""var body_tag=document.getElementsByTagName('body').item(0);"""
"""var script_tag=document.createElement("script");"""
"""script_tag.type="text/javascript";"""
"""script_tag.onload=function() { null };"""
"""script_tag.appendChild(document.createTextNode("%s"));"""
"""body_tag.appendChild(script_tag);"""
)
js_code = js_code.replace("\n", " ")
js_code = escape_quotes_if_needed(js_code)
driver.execute_script(add_js_code_script % js_code)
def add_meta_tag(driver, http_equiv=None, content=None):
if http_equiv is None:
http_equiv = "Content-Security-Policy"
if content is None:
        content = (
            "default-src *; style-src 'self' 'unsafe-inline'; "
            "script-src 'self' 'unsafe-inline' 'unsafe-eval'"
        )
script_to_add_meta = """function injectMeta() {
var meta_tag=document.createElement('meta');
meta_tag.httpEquiv="%s";
meta_tag.content="%s";
document.getElementsByTagName('head')[0].appendChild(meta_tag);
}
injectMeta();""" % (
http_equiv,
content,
)
driver.execute_script(script_to_add_meta)
def is_jquery_confirm_activated(driver):
try:
driver.execute_script("jconfirm") # Fails if jq_confirm is not defined
return True
except Exception:
return False
def activate_jquery_confirm(driver):
jquery_js = constants.JQuery.MIN_JS
jq_confirm_css = constants.JqueryConfirm.MIN_CSS
jq_confirm_js = constants.JqueryConfirm.MIN_JS
if not is_jquery_activated(driver):
add_js_link(driver, jquery_js)
wait_for_jquery_active(driver, timeout=1.2)
add_css_link(driver, jq_confirm_css)
add_js_link(driver, jq_confirm_js)
for x in range(int(settings.MINI_TIMEOUT * 10.0)):
# jQuery-Confirm needs a small amount of time to load & activate.
try:
driver.execute_script("jconfirm")
wait_for_ready_state_complete(driver)
wait_for_angularjs(driver)
return
except Exception:
time.sleep(0.1)
def activate_html_inspector(driver):
jquery_js = constants.JQuery.MIN_JS
html_inspector_js = constants.HtmlInspector.MIN_JS
if is_html_inspector_activated(driver):
return
if not is_jquery_activated(driver):
add_js_link(driver, jquery_js)
wait_for_jquery_active(driver, timeout=1.2)
wait_for_ready_state_complete(driver)
wait_for_angularjs(driver)
add_js_link(driver, html_inspector_js)
wait_for_ready_state_complete(driver)
wait_for_angularjs(driver)
for x in range(int(settings.MINI_TIMEOUT * 10.0)):
# HTML-Inspector needs a small amount of time to load & activate.
try:
driver.execute_script("HTMLInspector")
wait_for_ready_state_complete(driver)
wait_for_angularjs(driver)
return
except Exception:
time.sleep(0.1)
wait_for_ready_state_complete(driver)
wait_for_angularjs(driver)
def activate_messenger(driver):
from seleniumbase.core import style_sheet
jquery_js = constants.JQuery.MIN_JS
messenger_css = constants.Messenger.MIN_CSS
messenger_js = constants.Messenger.MIN_JS
msgr_theme_flat_js = constants.Messenger.THEME_FLAT_JS
msgr_theme_future_js = constants.Messenger.THEME_FUTURE_JS
msgr_theme_flat_css = constants.Messenger.THEME_FLAT_CSS
msgr_theme_future_css = constants.Messenger.THEME_FUTURE_CSS
msgr_theme_block_css = constants.Messenger.THEME_BLOCK_CSS
msgr_theme_air_css = constants.Messenger.THEME_AIR_CSS
msgr_theme_ice_css = constants.Messenger.THEME_ICE_CSS
spinner_css = constants.Messenger.SPINNER_CSS
underscore_js = constants.Underscore.MIN_JS
msg_style = (
"Messenger.options = {'maxMessages': 8, "
"extraClasses: 'messenger-fixed "
"messenger-on-bottom messenger-on-right', "
"theme: 'flat'}"
)
if not is_jquery_activated(driver):
add_js_link(driver, jquery_js)
wait_for_jquery_active(driver, timeout=0.9)
add_css_link(driver, messenger_css)
add_css_link(driver, msgr_theme_flat_css)
add_css_link(driver, msgr_theme_future_css)
add_css_link(driver, msgr_theme_block_css)
add_css_link(driver, msgr_theme_air_css)
add_css_link(driver, msgr_theme_ice_css)
add_js_link(driver, underscore_js)
add_css_link(driver, spinner_css)
add_js_link(driver, messenger_js)
add_css_style(driver, style_sheet.messenger_style)
for x in range(int(settings.MINI_TIMEOUT * 10.0)):
# Messenger needs a small amount of time to load & activate.
try:
result = driver.execute_script(
""" if (typeof Messenger === 'undefined') { return "U"; } """
)
if result == "U":
time.sleep(0.01)
continue
else:
break
except Exception:
time.sleep(0.01)
try:
driver.execute_script(msg_style)
add_js_link(driver, msgr_theme_flat_js)
add_js_link(driver, msgr_theme_future_js)
wait_for_ready_state_complete(driver)
wait_for_angularjs(driver)
return
except Exception:
time.sleep(0.1)
def set_messenger_theme(
driver, theme="default", location="default", max_messages="default"
):
if theme == "default":
theme = "flat"
if location == "default":
location = "bottom_right"
if sb_config.mobile_emulator:
location = "top_center"
if max_messages == "default":
max_messages = "8"
valid_themes = ["flat", "future", "block", "air", "ice"]
if theme not in valid_themes:
raise Exception("Theme: %s is not in %s!" % (theme, valid_themes))
valid_locations = [
"top_left",
"top_center",
"top_right",
"bottom_left",
"bottom_center",
"bottom_right",
]
if location not in valid_locations:
raise Exception(
"Location: %s is not in %s!" % (location, valid_locations)
)
if location == "top_left":
messenger_location = "messenger-on-top messenger-on-left"
elif location == "top_center":
messenger_location = "messenger-on-top"
elif location == "top_right":
messenger_location = "messenger-on-top messenger-on-right"
elif location == "bottom_left":
messenger_location = "messenger-on-bottom messenger-on-left"
elif location == "bottom_center":
messenger_location = "messenger-on-bottom"
elif location == "bottom_right":
messenger_location = "messenger-on-bottom messenger-on-right"
msg_style = (
"Messenger.options = {'maxMessages': %s, "
"extraClasses: 'messenger-fixed %s', theme: '%s'}"
% (max_messages, messenger_location, theme)
)
try:
driver.execute_script(msg_style)
except Exception:
activate_messenger(driver)
driver.execute_script(msg_style)
time.sleep(0.1)
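The location handling in set_messenger_theme() is a straight keyword-to-CSS-class mapping. The same table can be expressed as a dict; this is a sketch only, with the class strings copied from the code above:

```python
MESSENGER_LOCATIONS = {
    "top_left": "messenger-on-top messenger-on-left",
    "top_center": "messenger-on-top",
    "top_right": "messenger-on-top messenger-on-right",
    "bottom_left": "messenger-on-bottom messenger-on-left",
    "bottom_center": "messenger-on-bottom",
    "bottom_right": "messenger-on-bottom messenger-on-right",
}

def messenger_location_classes(location):
    """Return the CSS classes for a location keyword, or raise with
    the list of valid keywords (mirrors the validation above)."""
    if location not in MESSENGER_LOCATIONS:
        raise Exception(
            "Location: %s is not in %s!"
            % (location, sorted(MESSENGER_LOCATIONS))
        )
    return MESSENGER_LOCATIONS[location]
```

A dict keeps the valid-keyword list and the mapping in one place, so adding a new location cannot drift out of sync with the validation.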
def post_message(driver, message, msg_dur, style="info"):
"""A helper method to post a message on the screen with Messenger.
(Should only be called from post_message() in base_case.py)"""
if not msg_dur:
msg_dur = settings.DEFAULT_MESSAGE_DURATION
msg_dur = float(msg_dur)
message = re.escape(message)
message = escape_quotes_if_needed(message)
messenger_script = (
"""Messenger().post({message: "%s", type: "%s", """
"""hideAfter: %s, hideOnNavigate: true});"""
% (message, style, msg_dur)
)
try:
driver.execute_script(messenger_script)
except Exception:
activate_messenger(driver)
set_messenger_theme(driver)
try:
driver.execute_script(messenger_script)
except Exception:
time.sleep(0.2)
activate_messenger(driver)
time.sleep(0.2)
set_messenger_theme(driver)
time.sleep(0.5)
driver.execute_script(messenger_script)
def post_messenger_success_message(driver, message, msg_dur):
if not msg_dur:
msg_dur = settings.DEFAULT_MESSAGE_DURATION
msg_dur = float(msg_dur)
try:
theme = "flat"
location = "bottom_right"
if sb_config.mobile_emulator:
location = "top_center"
set_messenger_theme(driver, theme=theme, location=location)
post_message(driver, message, msg_dur, style="success")
time.sleep(msg_dur + 0.07)
except Exception:
pass
def post_messenger_error_message(driver, message, msg_dur):
if not msg_dur:
msg_dur = settings.DEFAULT_MESSAGE_DURATION
msg_dur = float(msg_dur)
try:
set_messenger_theme(driver, theme="block", location="top_center")
post_message(driver, message, msg_dur, style="error")
time.sleep(msg_dur + 0.07)
except Exception:
pass
def highlight_with_js_2(driver, message, selector, o_bs, msg_dur):
if selector == "html":
selector = "body"
script = (
"""document.querySelector('%s').style.boxShadow =
'0px 0px 6px 6px rgba(128, 128, 128, 0.5)';"""
% selector
)
try:
driver.execute_script(script)
except Exception:
return
    time.sleep(0.0181)
    for color in [
        "rgba(205, 30, 0, 1)",
        "rgba(128, 0, 128, 1)",
        "rgba(50, 50, 128, 1)",
        "rgba(50, 205, 50, 1)",
    ]:
        script = (
            """document.querySelector('%s').style.boxShadow =
            '0px 0px 6px 6px %s';"""
            % (selector, color)
        )
        driver.execute_script(script)
        time.sleep(0.0181)
try:
activate_jquery(driver)
post_messenger_success_message(driver, message, msg_dur)
except Exception:
pass
script = """document.querySelector('%s').style.boxShadow = '%s';""" % (
selector,
o_bs,
)
driver.execute_script(script)
def highlight_with_jquery_2(driver, message, selector, o_bs, msg_dur):
if selector == "html":
selector = "body"
script = (
"""jQuery('%s').css('box-shadow',
'0px 0px 6px 6px rgba(128, 128, 128, 0.5)');"""
% selector
)
try:
safe_execute_script(driver, script)
except Exception:
return
    time.sleep(0.0181)
    for color in [
        "rgba(205, 30, 0, 1)",
        "rgba(128, 0, 128, 1)",
        "rgba(50, 50, 200, 1)",
        "rgba(50, 205, 50, 1)",
    ]:
        script = (
            """jQuery('%s').css('box-shadow',
            '0px 0px 6px 6px %s');"""
            % (selector, color)
        )
        driver.execute_script(script)
        time.sleep(0.0181)
try:
activate_jquery(driver)
post_messenger_success_message(driver, message, msg_dur)
except Exception:
pass
script = """jQuery('%s').css('box-shadow', '%s');""" % (selector, o_bs)
driver.execute_script(script)
def get_scroll_distance_to_element(driver, element):
try:
scroll_position = driver.execute_script("return window.scrollY;")
element_location = None
element_location = element.location["y"]
element_location = element_location - 130
if element_location < 0:
element_location = 0
distance = element_location - scroll_position
return distance
except Exception:
return 0
def scroll_to_element(driver, element):
element_location = None
try:
element_location = element.location["y"]
except Exception:
# element.location_once_scrolled_into_view # Old hack
return False
element_location = element_location - 130
if element_location < 0:
element_location = 0
scroll_script = "window.scrollTo(0, %s);" % element_location
# The old jQuery scroll_script required by=By.CSS_SELECTOR
# scroll_script = "jQuery('%s')[0].scrollIntoView()" % selector
try:
driver.execute_script(scroll_script)
return True
except Exception:
return False
def slow_scroll_to_element(driver, element, browser):
if browser == "ie":
# IE breaks on slow-scrolling. Do a fast scroll instead.
scroll_to_element(driver, element)
return
scroll_position = driver.execute_script("return window.scrollY;")
element_location = None
try:
element_location = element.location["y"]
except Exception:
element.location_once_scrolled_into_view
return
element_location = element_location - 130
if element_location < 0:
element_location = 0
distance = element_location - scroll_position
if distance != 0:
total_steps = int(abs(distance) / 50.0) + 2.0
step_value = float(distance) / total_steps
new_position = scroll_position
for y in range(int(total_steps)):
time.sleep(0.011)
new_position += step_value
scroll_script = "window.scrollTo(0, %s);" % new_position
driver.execute_script(scroll_script)
time.sleep(0.01)
scroll_script = "window.scrollTo(0, %s);" % element_location
driver.execute_script(scroll_script)
time.sleep(0.01)
if distance > 430 or distance < -300:
# Add small recovery time for long-distance slow-scrolling
time.sleep(0.162)
else:
time.sleep(0.045)
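The stepped-scroll arithmetic in slow_scroll_to_element() (roughly 50 px per step, with a fixed 130 px allowance for a page header) can be checked in isolation. A sketch of just the math, with an illustrative function name and defaults:

```python
def scroll_steps(scroll_position, element_y, header_offset=130, step_px=50.0):
    """Return the list of intermediate scroll positions for a smooth
    scroll from scroll_position toward the element, mirroring the
    arithmetic in slow_scroll_to_element() above."""
    target = max(element_y - header_offset, 0)
    distance = target - scroll_position
    if distance == 0:
        return [target]
    total_steps = int(abs(distance) / step_px) + 2
    step_value = float(distance) / total_steps
    positions = []
    pos = float(scroll_position)
    for _ in range(total_steps):
        pos += step_value
        positions.append(pos)
    positions.append(float(target))  # final snap to the exact target
    return positions
```

The trailing snap matches the original's final window.scrollTo(0, element_location) call, which guarantees the element lands at the intended offset even if the accumulated float steps drift slightly.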
def get_drag_and_drop_script():
script = r"""(function( $ ) {
$.fn.simulateDragDrop = function(options) {
return this.each(function() {
new $.simulateDragDrop(this, options);
});
};
$.simulateDragDrop = function(elem, options) {
this.options = options;
this.simulateEvent(elem, options);
};
$.extend($.simulateDragDrop.prototype, {
simulateEvent: function(elem, options) {
/*Simulating drag start*/
var type = 'dragstart';
var event = this.createEvent(type);
this.dispatchEvent(elem, type, event);
/*Simulating drop*/
type = 'drop';
var dropEvent = this.createEvent(type, {});
dropEvent.dataTransfer = event.dataTransfer;
this.dispatchEvent(
$(options.dropTarget)[0], type, dropEvent);
/*Simulating drag end*/
type = 'dragend';
var dragEndEvent = this.createEvent(type, {});
dragEndEvent.dataTransfer = event.dataTransfer;
this.dispatchEvent(elem, type, dragEndEvent);
},
createEvent: function(type) {
var event = document.createEvent("CustomEvent");
event.initCustomEvent(type, true, true, null);
event.dataTransfer = {
data: {
},
setData: function(type, val){
this.data[type] = val;
},
getData: function(type){
return this.data[type];
}
};
return event;
},
dispatchEvent: function(elem, type, event) {
if(elem.dispatchEvent) {
elem.dispatchEvent(event);
}else if( elem.fireEvent ) {
elem.fireEvent("on"+type, event);
}
}
});
})(jQuery);"""
return script
def get_drag_and_drop_with_offset_script(selector, x, y):
script_a = """
var source = document.querySelector("%s");
var offsetX = %f;
var offsetY = %f;
""" % (
selector,
x,
y,
)
script_b = r"""
var rect = source.getBoundingClientRect();
var dragPt = {x: rect.left + (rect.width >> 1),
y: rect.top + (rect.height >> 1)};
var dropPt = {x: dragPt.x + offsetX, y: dragPt.y + offsetY};
var target = document.elementFromPoint(dropPt.x, dropPt.y);
var dataTransfer = {
dropEffect: '',
effectAllowed: 'all',
files: [],
items: {},
types: [],
setData: function (format, data) {
this.items[format] = data;
this.types.push(format);
},
getData: function (format) {
return this.items[format];
},
clearData: function (format) { }
};
var emit = function (event, target, pt) {
var evt = document.createEvent('MouseEvent');
evt.initMouseEvent(event, true, true, window, 0, 0, 0, pt.x, pt.y,
false, false, false, false, 0, null);
evt.dataTransfer = dataTransfer;
target.dispatchEvent(evt);
};
emit('mousedown', source, dragPt);
emit('mousemove', source, dragPt);
emit('mousemove', source, dropPt);
emit('mouseup', source, dropPt);"""
script = script_a + script_b
return script
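The offset script above computes its drag point as the center of the source element's bounding rect (the `>> 1` halving) and its drop point as that center plus the requested offset. The same point arithmetic in plain Python (a sketch; the function name and tuple layout are illustrative):

```python
def drag_and_drop_points(rect, offset_x, offset_y):
    """Mirror the point arithmetic in the JS above: drag from the
    integer center of the source rect, drop at (center + offset).
    rect = (left, top, width, height)."""
    left, top, width, height = rect
    drag_pt = (left + (width >> 1), top + (height >> 1))
    drop_pt = (drag_pt[0] + offset_x, drag_pt[1] + offset_y)
    return drag_pt, drop_pt
```

Note that `>> 1` is integer halving, so odd widths and heights round down exactly as they do in the JavaScript.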
def clear_out_console_logs(driver):
try:
# Clear out the current page log before navigating to a new page
# (To make sure that assert_no_js_errors() uses current results)
driver.get_log("browser")
except Exception:
pass
def _jq_format(code):
"""
DEPRECATED - Use re.escape() instead, which performs the intended action.
Use before throwing raw code such as 'div[tab="advanced"]' into jQuery.
Selectors with quotes inside of quotes would otherwise break jQuery.
If you just want to escape quotes, there's escape_quotes_if_needed().
This is similar to "json.dumps(value)", but with one less layer of quotes.
"""
code = code.replace("\\", "\\\\").replace("\t", "\\t").replace("\n", "\\n")
code = code.replace('"', '\\"').replace("'", "\\'")
code = code.replace("\v", "\\v").replace("\a", "\\a").replace("\f", "\\f")
code = code.replace("\b", "\\b").replace(r"\u", "\\u").replace("\r", "\\r")
return code
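The escaping order in _jq_format() matters: backslashes must be doubled first, or the backslashes introduced by the later replacements would themselves get re-escaped. A minimal standalone copy of the core replacements to illustrate (this deprecated helper is shown for reference only):

```python
def jq_format(code):
    # Double existing backslashes FIRST, then escape tabs/newlines
    # and quotes; reversing the order would corrupt the new escapes.
    code = code.replace("\\", "\\\\").replace("\t", "\\t").replace("\n", "\\n")
    code = code.replace('"', '\\"').replace("'", "\\'")
    return code
```

This is why selectors like `div[tab="advanced"]` survive being embedded inside a single-quoted jQuery call.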
# Author: mdmintz@gmail.com

# File: namo/multithread_scripts/threaded_test.py (beomjoonkim/adversarial_actor_critic)
import numpy as np
import scipy.io as sio
import os
import sys
import threading
from multiprocessing.pool import ThreadPool  # thread-based pool with the multiprocessing.Pool API
from multiprocessing import cpu_count
import pickle
import socket
import argparse
import csv
import time
import itertools
def worker_p(config):
algo = config[0]
n_data = config[1]
n_trial = config[2]
Qloss = config[3]
epoch = config[4]
n_score = config[5]
d_lr = config[6]
g_lr = config[7]
other_pi = config[8]
explr_const = config[9]
    command = './test_with_gpu.sh ' + str(n_data) + ' ' + str(n_trial) + ' ' \
        + str(algo) + ' ' + str(Qloss) + ' ' + str(n_score) + ' ' + str(epoch) + ' ' \
        + str(d_lr) + ' ' + str(g_lr) + ' ' + str(other_pi) + ' ' + str(explr_const)
    print(command)
    os.system(command)
def worker_wrapper_multi_input(multi_args):
return worker_p(multi_args)
def main():
    algo = sys.argv[1]
    n_datas = [int(k) for k in sys.argv[2].split(',')]
    n_datas = range(n_datas[0], n_datas[1] + 100, 100)
    Qloss = sys.argv[3]
    epochs = [int(k) for k in sys.argv[4].split(',')]
    epochs = range(epochs[0], epochs[1] + 1)
    n_score = sys.argv[5]
    d_lr = float(sys.argv[6])
    g_lr = float(sys.argv[7])
    explr_const = float(sys.argv[8])
    trials = [int(k) for k in sys.argv[9].split(',')]
    n_workers = cpu_count()
    configs = []
    for n_data in n_datas:
        for trial in trials:
            for epoch in epochs:
                otherpi_wfile = ('n_data_' + str(n_data)
                                 + '/onlyplace/adv/dg_lr_0.001_0.0001/n_score_5/n_trial_'
                                 + str(trial) + '/train_results/a_gen_epoch_'
                                 + str(epoch) + '.h5')
                configs.append([algo, n_data, trial, Qloss, epoch, n_score,
                                d_lr, g_lr, otherpi_wfile, explr_const])
    print(configs)
    pool = ThreadPool(n_workers)
    results = pool.map(worker_wrapper_multi_input, configs)
if __name__ == '__main__':
main()
# Author: beomjoon@mit.edu

# File: solutions_python/Problem_155/502.py (dr-dos-ok/Code_Jam_Webscraper)
__author__ = 'Thanabhat Koomsubha'
def solve(cc):
    # S = max shyness level; K = digit string of audience counts per level
    S, K = input().split()
    S = int(S)
    total = 0  # people already standing (renamed from `sum` to avoid shadowing the built-in)
    sol = 0    # friends invited so far
    for i in range(S + 1):
        if int(K[i]) == 0:
            continue
        if total < i:
            # Not enough people standing to trigger shyness level i:
            # invite enough friends to make up the difference.
            sol += (i - total)
            total += (i - total)
        total += int(K[i])
    print('Case #%d: %d' % (cc + 1, sol))
def main():
T = int(input())
for i in range(T):
solve(i)
main()

# Author: miliar1732@gmail.com

# File: scripts/lt2_scripts/test_scripts/awg_trigger_jitter2.py (AdriaanRol/measurement)
import os
import qt
import numpy as np
import msvcrt
from measurement.lib.AWG_HW_sequencer_v2 import Sequence
from measurement.lib.config import awgchannels_lt2 as awgcfg
reload(awgcfg)
AWG = qt.instruments['AWG']
def generate_sequence(do_program=True):
seq = Sequence('Test')
# vars for the channel names
trigger_chan= 'trigger'
trigger2_chan= 'trigger2'
awgcfg.configure_sequence(seq, 'awg_trigger_jitter')
ename='trigger'
seq.add_element(ename,goto_target='trigger')
seq.add_pulse('trigger',trigger_chan,ename,start=0,duration=500, amplitude=1)
seq.add_pulse('wait',trigger_chan,ename,start=0,
start_reference='trigger', link_start_to = 'end',duration=2500, amplitude=0)
seq.add_pulse('trigger2',trigger2_chan,ename,start=0,duration=500, amplitude=1)
seq.add_pulse('wait2',trigger2_chan,ename,start=0,
start_reference='trigger2', link_start_to = 'end',duration=2500, amplitude=0)
#sweep the pulse length
seq.set_instrument(AWG)
seq.set_clock(1e9)
seq.set_send_waveforms(do_program)
seq.set_send_sequence(do_program)
seq.set_program_channels(True)
seq.set_start_sequence(False)
seq.force_HW_sequencing(True)
seq.send_sequence()
return True
if __name__ == "__main__":
generate_sequence()
# Author: wolfgangpfff@gmail.com

# File: DaVinci_v41r2/InstallArea/x86_64-slc6-gcc49-opt/python/CommonParticles/StdVeryLooseJpsi2MuMu.py (Sally27/backup_cmtuser_full)
#!/usr/bin/env python
# =============================================================================
# $Id: StdVeryLooseJpsi2MuMu.py,v 1.1 2010-01-18 10:08:49 gcowan Exp $
# =============================================================================
## @file CommonParticles/StdVeryLooseJpsi2MuMu.py
# configuration file for 'Standard Very Loose Jpsi2MuMu'
# @author Greig Cowan
# @date 2009-06-23
# =============================================================================
"""
Configuration file for 'Standard Very Loose Jpsi2MuMu'
"""
__author__ = "Greig Cowan"
__version__ = "CVS tag $Name: not supported by cvs2svn $, version $Revision: 1.1 $"
# =============================================================================
__all__ = (
'StdVeryLooseJpsi2MuMu' ,
'locations'
)
# =============================================================================
from Gaudi.Configuration import *
from Configurables import CombineParticles
from CommonParticles.Utils import *
## ============================================================================
## create the algorithm
StdVeryLooseJpsi2MuMu = CombineParticles ("StdVeryLooseJpsi2MuMu")
StdVeryLooseJpsi2MuMu.Inputs = ["Phys/StdVeryLooseMuons/Particles"]
StdVeryLooseJpsi2MuMu.DecayDescriptor = "J/psi(1S) -> mu+ mu-"
StdVeryLooseJpsi2MuMu.CombinationCut = "(ADAMASS('J/psi(1S)') < 100.*MeV) & (ADOCACHI2CUT(30, ''))"
StdVeryLooseJpsi2MuMu.MotherCut = "(VFASPF(VCHI2) < 25.)"
## configure Data-On-Demand service
locations = updateDoD ( StdVeryLooseJpsi2MuMu )
## ============================================================================
if '__main__' == __name__:
    print(__doc__)
    print(__author__)
    print(__version__)
    print(locationsDoD(locations))
# Author: slavomirastefkova@b2pcx39016.desy.de

# File: moodledata/vpl_data/77/usersdata/247/43349/submittedfiles/exercicio24.py (rafaelperazzo/programacao-web)
# -*- coding: utf-8 -*-
import math
a=int(input('digite a'))
b=int(input('digite b'))
if a>0 and b>0:
    mdc = 1  # qualquer par de positivos tem pelo menos o divisor comum 1
    d = 2
    while d <= a:
        if a % d == 0 and b % d == 0:
            mdc = d
        d = d + 1
    print("mdc(%d,%d)=%d" % (a, b, mdc))
# Author: rafael.mota@ufca.edu.br

# File: docs/html/python_example.py (apmoore1/rtf2xml, MIT)
#!/usr/bin/env python
import sys
import rtf2xml.ParseRtf
def Handle_Main():
"""Handles options and creates a parse object """
try:
        parse_obj = rtf2xml.ParseRtf.ParseRtf(
in_file = 'in.rtf',
# these are optional
# determine the output file
out_file = 'out.xml',
# determine the run level. The default is 1.
run_level = 3,
            # The name of a debug directory, used when running at
            # run level 3 or higher.
            debug = 'debug_dir',
            # Convert symbol fonts to their unicode equivalents.
            # Default is 1.
            convert_symbol = 1,
            # Convert Zapf fonts to their unicode equivalents.
            # Default is 1.
            convert_zapf = 1,
            # Convert Wingdings fonts to their unicode equivalents.
            # Default is 1.
            convert_wingdings = 1,
# Convert RTF caps to real caps.
# Default is 1.
convert_caps = 1,
# Indent resulting XML.
# Default is 0 (no indent).
indent = 1,
# Form lists from RTF. Default is 1.
form_lists = 1,
# Convert headings to sections. Default is 0.
headings_to_sections = 1,
# Group paragraphs with the same style name. Default is 1.
group_styles = 1,
# Group borders. Default is 1.
group_borders = 1,
            # Write or do not write empty paragraphs. Default is 0.
empty_paragraphs = 0,
)
parse_obj.parse_rtf()
except rtf2xml.ParseRtf.InvalidRtfException, msg:
sys.stderr.write(str(msg))
sys.exit(1)
except rtf2xml.ParseRtf.RtfInvalidCodeException, msg:
sys.stderr.write(str(msg))
sys.exit(1)
if __name__=='__main__':
Handle_Main()
| [
"paulhtremblay@gmail.com"
] | paulhtremblay@gmail.com |
3cfcb913843a61f0714676c7d5e8d964be7c087b | c9ddbdb5678ba6e1c5c7e64adf2802ca16df778c | /cases/pa3/sample/stmt_if-31.py | 364b8bf478ae76a077503c92bb21412131c7b116 | [] | no_license | Virtlink/ccbench-chocopy | c3f7f6af6349aff6503196f727ef89f210a1eac8 | c7efae43bf32696ee2b2ee781bdfe4f7730dec3f | refs/heads/main | 2023-04-07T15:07:12.464038 | 2022-02-03T15:42:39 | 2022-02-03T15:42:39 | 451,969,776 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 85 | py | if False:
print("No")
elif True:
if True:
$Var("Yes")
else:
pass
| [
"647530+Virtlink@users.noreply.github.com"
] | 647530+Virtlink@users.noreply.github.com |
86621207d93660ccde1902492993dbcefdc6f01b | f70273d172dec4c83e6ea6a1fbee63e0dfa679bd | /tests/integ/group_test.py | 6e28cdbaa832dbe0d34d99453f392b1fe1719bb3 | [
"LicenseRef-scancode-unknown-license-reference",
"Apache-2.0"
] | permissive | tverbeiren/hsds | 5b229a7226a5da957c0a81392a5c291d76edd55b | c9d2b56d2188c91341ff578cc32d41d4c6d3a7fd | refs/heads/master | 2020-04-19T18:02:42.601234 | 2019-01-28T07:27:40 | 2019-01-28T07:27:40 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 16,684 | py | ##############################################################################
# Copyright by The HDF Group. #
# All rights reserved. #
# #
# This file is part of HSDS (HDF5 Scalable Data Service), Libraries and #
# Utilities. The full HSDS copyright notice, including #
# terms governing use, modification, and redistribution, is contained in #
# the file COPYING, which can be found at the root of the source code #
# distribution tree. If you do not have access to this file, you may #
# request a copy from help@hdfgroup.org. #
##############################################################################
import unittest
import requests
import time
import json
import helper
import config
class GroupTest(unittest.TestCase):
def __init__(self, *args, **kwargs):
super(GroupTest, self).__init__(*args, **kwargs)
self.base_domain = helper.getTestDomainName(self.__class__.__name__)
helper.setupDomain(self.base_domain)
# main
def testGetRootGroup(self):
print("testGetRootGroup", self.base_domain)
headers = helper.getRequestHeaders(domain=self.base_domain)
req = helper.getEndpoint() + '/'
rsp = requests.get(req, headers=headers)
self.assertEqual(rsp.status_code, 200)
rspJson = json.loads(rsp.text)
root_uuid = rspJson["root"]
helper.validateId(root_uuid)
req = helper.getEndpoint() + '/groups/' + root_uuid
rsp = requests.get(req, headers=headers)
self.assertEqual(rsp.status_code, 200)
rspJson = json.loads(rsp.text)
self.assertTrue("id" in rspJson)
group_id = rspJson["id"]
helper.validateId(group_id)
self.assertTrue("root" in rspJson)
root_id = rspJson["root"]
self.assertEqual(group_id, root_id)
self.assertTrue("domain" in rspJson)
#self.assertEqual(rspJson["domain"], self.base_domain) #TBD
self.assertTrue("created" in rspJson)
self.assertTrue("lastModified" in rspJson)
self.assertTrue("linkCount" in rspJson)
self.assertTrue("attributeCount" in rspJson)
# try get with a different user (who has read permission)
headers = helper.getRequestHeaders(domain=self.base_domain, username="test_user2")
rsp = requests.get(req, headers=headers)
if config.get("default_public"):
self.assertEqual(rsp.status_code, 200)
rspJson = json.loads(rsp.text)
self.assertEqual(rspJson["root"], root_uuid)
else:
self.assertEqual(rsp.status_code, 403)
# try to do a GET with a different domain (should fail)
another_domain = helper.getParentDomain(self.base_domain)
headers = helper.getRequestHeaders(domain=another_domain)
req = helper.getEndpoint() + '/groups/' + root_uuid
rsp = requests.get(req, headers=headers)
self.assertEqual(rsp.status_code, 400)
def testGet(self):
domain = helper.getTestDomain("tall.h5")
headers = helper.getRequestHeaders(domain=domain)
# verify domain exists
req = helper.getEndpoint() + '/'
rsp = requests.get(req, headers=headers)
if rsp.status_code != 200:
print("WARNING: Failed to get domain: {}. Is test data setup?".format(domain))
return # abort rest of test
rspJson = json.loads(rsp.text)
grp_uuid = root_uuid = rspJson["root"]
self.assertTrue(grp_uuid.startswith("g-"))
# get the group json
req = helper.getEndpoint() + '/groups/' + grp_uuid
rsp = requests.get(req, headers=headers)
self.assertEqual(rsp.status_code, 200)
rspJson = json.loads(rsp.text)
for name in ("id", "hrefs", "attributeCount", "linkCount",
"domain", "root", "created", "lastModified"):
self.assertTrue(name in rspJson)
self.assertEqual(rspJson["id"], grp_uuid)
hrefs = rspJson["hrefs"]
self.assertEqual(len(hrefs), 5)
self.assertEqual(rspJson["id"], grp_uuid)
self.assertEqual(rspJson["attributeCount"], 2)
self.assertEqual(rspJson["linkCount"], 2)
self.assertEqual(rspJson["root"], root_uuid)
self.assertEqual(rspJson["domain"], domain)
now = time.time()
# the object shouldn't have been just created or updated
self.assertTrue(rspJson["created"] < now - 60 * 5)
self.assertTrue(rspJson["lastModified"] < now - 60 * 5)
# request the group path
req = helper.getEndpoint() + '/groups/' + grp_uuid
params = {"getalias": 1}
rsp = requests.get(req, params=params, headers=headers)
self.assertEqual(rsp.status_code, 200)
rspJson = json.loads(rsp.text)
self.assertTrue("alias" in rspJson)
self.assertEqual(rspJson["alias"], ['/'])
# verify trying to read this group from a different domain fails
headers = helper.getRequestHeaders(domain=self.base_domain)
req = helper.getEndpoint() + '/groups/' + grp_uuid
rsp = requests.get(req, headers=headers)
self.assertEqual(rsp.status_code, 400)
def testGetInvalidUUID(self):
print("testGetInvalidUUID", self.base_domain)
headers = helper.getRequestHeaders(domain=self.base_domain)
req = helper.getEndpoint() + '/'
invalid_uuid = "foobar"
req = helper.getEndpoint() + "/groups/" + invalid_uuid
rsp = requests.get(req, headers=headers)
self.assertEqual(rsp.status_code, 400)
import uuid
bad_uuid = "g-" + str(uuid.uuid1())
req = helper.getEndpoint() + "/groups/" + bad_uuid
rsp = requests.get(req, headers=headers)
self.assertEqual(rsp.status_code, 404)
def testPost(self):
# test POST group
print("testPost", self.base_domain)
headers = helper.getRequestHeaders(domain=self.base_domain)
req = helper.getEndpoint() + '/groups'
# create a new group
rsp = requests.post(req, headers=headers)
self.assertEqual(rsp.status_code, 201)
rspJson = json.loads(rsp.text)
self.assertEqual(rspJson["linkCount"], 0)
self.assertEqual(rspJson["attributeCount"], 0)
group_id = rspJson["id"]
self.assertTrue(helper.validateId(group_id))
# verify we can do a get on the new group
req = helper.getEndpoint() + '/groups/' + group_id
rsp = requests.get(req, headers=headers)
self.assertEqual(rsp.status_code, 200)
rspJson = json.loads(rsp.text)
self.assertTrue("id" in rspJson)
self.assertEqual(rspJson["id"], group_id)
self.assertTrue("root" in rspJson)
self.assertTrue(rspJson["root"] != group_id)
self.assertTrue("domain" in rspJson)
#self.assertEqual(rspJson["domain"], domain) # TBD
# try getting the path of the group
params = {"getalias": 1}
rsp = requests.get(req, params=params, headers=headers)
self.assertEqual(rsp.status_code, 200)
rspJson = json.loads(rsp.text)
self.assertTrue("alias" in rspJson)
self.assertEqual(rspJson["alias"], [])
# try POST with user who doesn't have create permission on this domain
headers = helper.getRequestHeaders(domain=self.base_domain, username="test_user2")
req = helper.getEndpoint() + '/groups'
rsp = requests.post(req, headers=headers)
self.assertEqual(rsp.status_code, 403) # forbidden
def testPostWithLink(self):
# test PUT_root
print("testPostWithLink", self.base_domain)
headers = helper.getRequestHeaders(domain=self.base_domain)
# get root id
req = helper.getEndpoint() + '/'
rsp = requests.get(req, headers=headers)
self.assertEqual(rsp.status_code, 200)
rspJson = json.loads(rsp.text)
root_uuid = rspJson["root"]
helper.validateId(root_uuid)
# delete the domain
rsp = requests.delete(req, headers=headers)
self.assertEqual(rsp.status_code, 200)
# try getting the domain
rsp = requests.get(req, headers=headers)
self.assertEqual(rsp.status_code, 410)
# try re-creating a domain
rsp = requests.put(req, headers=headers)
self.assertEqual(rsp.status_code, 201)
rspJson = json.loads(rsp.text)
new_root_id = rspJson["root"]
self.assertTrue(new_root_id != root_uuid)
root_uuid = new_root_id
# get root group and verify link count is 0
req = helper.getEndpoint() + '/groups/' + root_uuid
rsp = requests.get(req, headers=headers)
self.assertEqual(rsp.status_code, 200)
rspJson = json.loads(rsp.text)
self.assertEqual(rspJson["linkCount"], 0)
# create new group
payload = { 'link': { 'id': root_uuid, 'name': 'linked_group' } }
req = helper.getEndpoint() + "/groups"
rsp = requests.post(req, data=json.dumps(payload), headers=headers)
self.assertEqual(rsp.status_code, 201)
rspJson = json.loads(rsp.text)
self.assertEqual(rspJson["linkCount"], 0)
self.assertEqual(rspJson["attributeCount"], 0)
new_group_id = rspJson["id"]
self.assertTrue(helper.validateId(rspJson["id"]) )
self.assertTrue(new_group_id != root_uuid)
# get root group and verify link count is 1
req = helper.getEndpoint() + '/groups/' + root_uuid
rsp = requests.get(req, headers=headers)
self.assertEqual(rsp.status_code, 200)
rspJson = json.loads(rsp.text)
self.assertEqual(rspJson["linkCount"], 1)
# read the link back and verify
req = helper.getEndpoint() + "/groups/" + root_uuid + "/links/linked_group"
rsp = requests.get(req, headers=headers)
        self.assertEqual(rsp.status_code, 200)
rspJson = json.loads(rsp.text)
self.assertTrue("link" in rspJson)
link_json = rspJson["link"]
self.assertEqual(link_json["collection"], "groups")
self.assertEqual(link_json["class"], "H5L_TYPE_HARD")
self.assertEqual(link_json["title"], "linked_group")
self.assertEqual(link_json["id"], new_group_id)
# try getting the path of the group
req = helper.getEndpoint() + "/groups/" + new_group_id
params = {"getalias": 1}
rsp = requests.get(req, params=params, headers=headers)
self.assertEqual(rsp.status_code, 200)
rspJson = json.loads(rsp.text)
self.assertTrue("alias" in rspJson)
self.assertEqual(rspJson["alias"], ['/linked_group',])
def testDelete(self):
# test Delete
print("testDelete", self.base_domain)
headers = helper.getRequestHeaders(domain=self.base_domain)
# get domain
req = helper.getEndpoint() + '/'
rsp = requests.get(req, headers=headers)
rspJson = json.loads(rsp.text)
self.assertTrue("root" in rspJson)
root_id = rspJson["root"]
req = helper.getEndpoint() + '/groups'
# create a new group
rsp = requests.post(req, headers=headers)
self.assertEqual(rsp.status_code, 201)
rspJson = json.loads(rsp.text)
self.assertTrue("id" in rspJson)
group_id = rspJson["id"]
self.assertTrue(helper.validateId(group_id))
# verify we can do a get on the new group
req = helper.getEndpoint() + '/groups/' + group_id
rsp = requests.get(req, headers=headers)
self.assertEqual(rsp.status_code, 200)
rspJson = json.loads(rsp.text)
self.assertTrue("id" in rspJson)
self.assertEqual(rspJson["id"], group_id)
self.assertTrue("root" in rspJson)
self.assertTrue(rspJson["root"] != group_id)
self.assertTrue("domain" in rspJson)
#self.assertEqual(rspJson["domain"], self.base_domain) #TBD
# try DELETE with user who doesn't have create permission on this domain
headers = helper.getRequestHeaders(domain=self.base_domain, username="test_user2")
rsp = requests.delete(req, headers=headers)
self.assertEqual(rsp.status_code, 403) # forbidden
# try to do a DELETE with a different domain (should fail)
another_domain = helper.getParentDomain(self.base_domain)
headers = helper.getRequestHeaders(domain=another_domain)
req = helper.getEndpoint() + '/groups/' + group_id
rsp = requests.delete(req, headers=headers)
self.assertEqual(rsp.status_code, 400)
# delete the new group
headers = helper.getRequestHeaders(domain=self.base_domain)
rsp = requests.delete(req, headers=headers)
self.assertEqual(rsp.status_code, 200)
rspJson = json.loads(rsp.text)
self.assertTrue(rspJson is not None)
# a get for the group should now return 410 (GONE)
rsp = requests.get(req, headers=headers)
self.assertEqual(rsp.status_code, 410)
# try deleting the root group
req = helper.getEndpoint() + '/groups/' + root_id
rsp = requests.delete(req, headers=headers)
self.assertEqual(rsp.status_code, 403) # Forbidden
def testGetByPath(self):
domain = helper.getTestDomain("tall.h5")
print("testGetByPath", domain)
headers = helper.getRequestHeaders(domain=domain)
# verify domain exists
req = helper.getEndpoint() + '/'
rsp = requests.get(req, headers=headers)
if rsp.status_code != 200:
print("WARNING: Failed to get domain: {}. Is test data setup?".format(domain))
return # abort rest of test
rspJson = json.loads(rsp.text)
root_uuid = rspJson["root"]
# get the group at "/g1/g1.1"
h5path = "/g1/g1.1"
req = helper.getEndpoint() + "/groups/"
params = {"h5path": h5path}
rsp = requests.get(req, headers=headers, params=params)
self.assertEqual(rsp.status_code, 200)
rspJson = json.loads(rsp.text)
for name in ("id", "hrefs", "attributeCount", "linkCount",
"domain", "root", "created", "lastModified"):
self.assertTrue(name in rspJson)
# verify we get the same id when following the path via service calls
g11id = helper.getUUIDByPath(domain, "/g1/g1.1")
self.assertEqual(g11id, rspJson["id"])
# Try with a trailing slash
h5path = "/g1/g1.1/"
req = helper.getEndpoint() + "/groups/"
params = {"h5path": h5path}
rsp = requests.get(req, headers=headers, params=params)
self.assertEqual(rsp.status_code, 200)
rspJson = json.loads(rsp.text)
self.assertEqual(g11id, rspJson["id"])
# try relative h5path
g1id = helper.getUUIDByPath(domain, "/g1/")
h5path = "./g1.1"
req = helper.getEndpoint() + "/groups/" + g1id
params = {"h5path": h5path}
rsp = requests.get(req, headers=headers, params=params)
self.assertEqual(rsp.status_code, 200)
rspJson = json.loads(rsp.text)
self.assertEqual(g11id, rspJson["id"])
        # try an invalid link and verify a 404 is returned
h5path = "/g1/foobar"
req = helper.getEndpoint() + "/groups/"
params = {"h5path": h5path}
rsp = requests.get(req, headers=headers, params=params)
self.assertEqual(rsp.status_code, 404)
# try passing a path to a dataset and verify we get 404
h5path = "/g1/g1.1/dset1.1.1"
req = helper.getEndpoint() + "/groups/"
params = {"h5path": h5path}
rsp = requests.get(req, headers=headers, params=params)
self.assertEqual(rsp.status_code, 404)
# try getting the path of the group
req = helper.getEndpoint() + "/groups/" + g11id
params = {"getalias": 1}
rsp = requests.get(req, params=params, headers=headers)
self.assertEqual(rsp.status_code, 200)
rspJson = json.loads(rsp.text)
self.assertTrue("alias" in rspJson)
self.assertEqual(rspJson["alias"], ['/g1/g1.1',])
if __name__ == '__main__':
#setup test files
unittest.main()
| [
"jreadey@hdfgroup.org"
] | jreadey@hdfgroup.org |
f361ea6bd7ea53223419d2c8453f09fc3be2b524 | fb67821b542292fe921c9e628ebe69b9bd1ecb66 | /firstpro/firstpro/settings.py | b324acccee7bfd033b42b30bdecb4c4cc0d89c90 | [] | no_license | smrkhan123/MyDjango | 97cc13e33a686f325be03618b915cf571d4a6fc2 | be106cb64a52bf7bef8b4960089af1afe5480df6 | refs/heads/master | 2020-08-20T07:53:59.285250 | 2019-10-18T10:25:56 | 2019-10-18T10:25:56 | 215,998,313 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,236 | py |
"""
Django settings for firstpro project.
Generated by 'django-admin startproject' using Django 1.11.10.
For more information on this file, see
https://docs.djangoproject.com/en/1.11/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/1.11/ref/settings/
"""
import os
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.11/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = '*4fn4+pe&9005+*p#llam7-n%7!*!xy%_l)=a(o72kv-8=$(9k'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'Article',
'sameer',
'Accounts'
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'firstpro.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [os.path.join(BASE_DIR,'templates')],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'firstpro.wsgi.application'
# Database
# https://docs.djangoproject.com/en/1.11/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
# Password validation
# https://docs.djangoproject.com/en/1.11/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/1.11/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/1.11/howto/static-files/
STATIC_URL = '/static/'
STATICFILES_DIRS=[
os.path.join(BASE_DIR,'static')
]
| [
"sk862147@gmail.com"
] | sk862147@gmail.com |
3f49b661713b37e8fcba1533d294c012ebf6b117 | 238900636ac22ba9776dfd022fcd5e2b6e788279 | /src/brainroller/findemptylocs.py | 09a7074d34a65cd3ca0d2c958b2d589bae786de2 | [] | no_license | ricepaper1/tensorflow_apps | 5b22f928a7283c353bde7242fa2d1060e6778594 | bcbf2873afc13e05cfdd0bf99bc5e222aa3820b8 | refs/heads/master | 2022-02-07T13:38:34.578175 | 2019-05-29T19:19:13 | 2019-05-29T19:19:13 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 5,505 | py | #! /usr/bin/env python
'''
Created on May 31, 2016
@author: welling
'''
import sys
import random
import os.path
import cPickle
import math
import spilltree3D
sys.path.extend(['/home/welling/Fiasco/fiasco_final/bin/LINUXX86_64',
'/home/welling/shtools/SHTOOLS-3.2'])
# sys.path.extend(['/home/welling/Fiasco/Fiasco_final/src/fmri',
# '/home/welling/Fiasco/Fiasco_final/bin/LINUXX86_64',
# '/home/welling/git/SHTOOLS'])
from traceneighbors import UsefulVtx, Vtx, loadSkipTable
# from transforms import eulerRzRyRzToTrans, transToEulerRzRyRz, makeAligningRotation
# from writegeom import writeBOV, plotSphere, writeVtkPolylines
# from sampler import ArraySampler
#from yamlblocks import BlockGenerator
radPixels = 20
cutoffRad = 10.0
baseName = 'block'
# radPixels = 100
# baseName = 'bigblock'
maxL = 48
fishCoreFile = '/pylon1/pscstaff/awetzel/ZF-test-files/60nm-cores/V4750_2150_04000-08999.vol'
fishCoreXSize = 1024
fishCoreYSize = 1024
fishCoreZSize = 4900
fishCoreXOffset = 4750. - (fishCoreXSize/2)
fishCoreYOffset = 2150. - (fishCoreYSize/2)
fishCoreZOffset = 4000
#baseDir = '/pylon2/pscstaff/welling'
baseDir = '/home/welling/brainroller'
usefulVtxFile = os.path.join(baseDir, 'useful_trace_neighborhoods.pkl')
skipFile = os.path.join(baseDir, 'skips.txt')
traceFile = os.path.join(baseDir, 'traces.pkl')
class PlainPt(object):
def __init__(self, x, y, z):
self.coords = [x, y, z]
def getLoc(self):
return self.coords
class BoundedRandomSampler(object):
def __init__(self, rMax):
self.rMax = rMax
xMin = rMax
xMax = fishCoreXSize - rMax
yMin = rMax
yMax = fishCoreYSize - rMax
zMin = rMax
zMax = fishCoreYSize - rMax
self.xOffset = xMin + fishCoreXOffset
self.yOffset = yMin + fishCoreYOffset
self.zOffset = zMin + fishCoreZOffset
self.xScale = xMax - xMin
self.yScale = yMax - yMin
self.zScale = zMax - zMin
def getPt(self):
return PlainPt(random.random() * self.xScale + self.xOffset,
random.random() * self.yScale + self.yOffset,
random.random() * self.zScale + self.zOffset)
def outerClip(self, pt):
x, y, z = pt.getLoc()
if z < fishCoreZOffset - self.rMax:
return False
elif x < fishCoreXOffset - self.rMax:
return False
elif y < fishCoreYOffset - self.rMax:
return False
elif z > fishCoreZOffset + fishCoreZSize + self.rMax:
return False
elif x > fishCoreXOffset + fishCoreXSize + self.rMax:
return False
elif y > fishCoreYOffset + fishCoreYSize + self.rMax:
return False
else:
return True
def main():
edgeLen = 2*radPixels + 1
rMax = float(radPixels)
# transformer = SHTransformer(edgeLen, maxL)
# with open(usefulVtxFile, 'r') as pklF:
# with open(skipFile, 'r') as skipF:
# usefulVtxDict = UsefulVtx.load(pklF, 30000, skipF)
with open(traceFile, 'r') as f:
vtxDict, objDict = cPickle.load(f)
with open(skipFile, 'rU') as skipF:
skipTbl = loadSkipTable(skipF, 30000)
for v in vtxDict.values():
v.setSkipTable(skipTbl)
print '%d vertices in %d objects' % (len(vtxDict), len(objDict))
# print 'Loaded %d useful vertices' % len(usefulVtxDict)
ptSampler = BoundedRandomSampler(rMax)
testPts = []
for v in vtxDict.values():
if ptSampler.outerClip(v):
x, y, z = v.getLoc()
testPts.append(PlainPt(x, y, z))
print '%d useful trace points' % len(testPts)
spilltree = spilltree3D.SpTree(testPts)
print 'spilltree created'
random.seed(1234)
samplePts = []
ct = 0
tryCt = 0
cutSqr = cutoffRad * cutoffRad
while True:
pt = ptSampler.getPt()
_, sepsqr = spilltree.findApproxNearest(pt)
# print 'samplept: %s' % pt.getLoc()
# print 'nearPt: %s at %s' % (nearPt.id, nearPt.getLoc())
# print 'sepsqr: %s' % sepsqr
if sepsqr > cutSqr:
samplePts.append(tuple(pt.getLoc()))
ct += 1
tryCt += 1
if tryCt % 1000 == 1:
print '%d samples in %d tries' % (ct, tryCt)
if ct >= 5000:
break
with open('emptySamps.pkl', 'w') as f:
cPickle.dump(samplePts, f)
# blockGen = BlockGenerator(rMax, edgeLen, maxL,
# usefulVtxDict, fishCoreFile,
# fishCoreXSize, fishCoreYSize, fishCoreZSize,
# baseName=baseName)
#
# #sampleVtx = usefulVtxDict[6985]
# #sampleVtx = usefulVtxDict.values()[17]
# random.seed(1234)
# indexList = usefulVtxDict.keys()[:]
# indexList.sort()
# for idx, sampleId in enumerate(random.sample(indexList, 5000)):
# if (idx >= 4968):
# try:
# print 'starting sample %s' % sampleId
# blockGen.writeBlock(sampleId,
# {'xOffset': fishCoreXOffset,
# 'yOffset': fishCoreYOffset,
# 'zOffset': fishCoreZOffset})
# except Exception, e:
# print 'Sample id %s failed: %s' % (sampleId, e)
print 'completed main loop'
if __name__ == '__main__':
main()
| [
"welling@psc.edu"
] | welling@psc.edu |
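The core loop of findemptylocs.py — draw random points and keep only those whose nearest traced vertex lies beyond a cutoff radius — can be sketched without the spill tree by brute-force nearest-neighbor search. This is an illustrative stand-in (names, bounds, and the brute-force query are my own, not the script's):

```python
import random

def sample_empty_points(trace_pts, cutoff, n, bounds, seed=1234):
    """Rejection sampling: keep random points whose squared distance to
    every trace point exceeds cutoff**2 (brute force replaces the
    spill-tree findApproxNearest query)."""
    rng = random.Random(seed)
    cut_sqr = cutoff * cutoff
    out = []
    while len(out) < n:
        p = tuple(rng.uniform(lo, hi) for lo, hi in bounds)
        d2 = min(sum((a - b) ** 2 for a, b in zip(p, q)) for q in trace_pts)
        if d2 > cut_sqr:
            out.append(p)
    return out

samples = sample_empty_points([(0.0, 0.0, 0.0)], 1.0, 5,
                              [(5.0, 6.0), (5.0, 6.0), (5.0, 6.0)])
```

The brute-force minimum is O(len(trace_pts)) per candidate, which is why the original script builds a spill tree: with tens of thousands of trace points and thousands of rejection-sampling attempts, an approximate nearest-neighbor index keeps each query cheap.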
46237aef41eb9f622b8d9a2300c0e36dec363cf0 | e0db9c0559d0cd362a0e3e7b96fd0c1c0fbc68e4 | /string21.py | 6b39115b313556a222d709f14fe90874f5bc1d37 | [] | no_license | BrettMcGregor/w3resource | ba338e91d24db773de6db6aec8c776a7df003ba0 | cea43e3f471edff1ca0843eeab1fa299f491badf | refs/heads/master | 2020-03-13T04:11:10.194964 | 2018-05-22T03:21:13 | 2018-05-22T03:21:13 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 332 | py | # Write a Python function to convert a given string to all
# uppercase if it contains at least 2 uppercase characters in the
# first 4 characters.
stringa = "HtTps://www.w3rresource.com"
count = 0
for i in range(4):  # check only the first 4 characters, as the task states
if stringa[i].isupper():
count += 1
if count > 1:
print(stringa.upper())
else:
print(stringa)
| [
"brett.w.mcgregor@gmail.com"
] | brett.w.mcgregor@gmail.com |
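Wrapped as the function the exercise statement asks for (a sketch; the helper name is my own), the check over the first four characters becomes:

```python
def upper_if_two_caps(s):
    # Count uppercase characters among the first 4 only, then decide
    caps = sum(1 for ch in s[:4] if ch.isupper())
    return s.upper() if caps >= 2 else s

print(upper_if_two_caps("HtTps://www.w3resource.com"))  # prints the string in all caps
```

Slicing with `s[:4]` also sidesteps the IndexError the indexed loop would raise on strings shorter than four characters.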
a1a5068f26f9fcec4afd026d777bab1f3e4795e6 | 3cdf103f66fd032352e96640ed072e30c63e1b74 | /template/__init__.py | 6ee598aa88d19cabed6b7d21aeb4457cc7b52877 | [] | no_license | JayGitH/Bussiness-Monitoring | a193872c08553370c0f4624215a8cbf0f94ea3dc | 0535e26acf4f16a385e0da538178b36dab9bdbc9 | refs/heads/master | 2023-03-16T13:49:23.520708 | 2019-09-17T07:55:24 | 2019-09-17T07:55:24 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 135 | py | # -*- coding:utf-8 -*-
# __author__ = Amos
# Email = 379833553@qq.com
# Create_at = 2019/1/16 2:34 PM
# FileName = __init__.py
| [
"379833553@qq.com"
] | 379833553@qq.com |
b470fb10194019ee727baacef3b307c45c089584 | 0e1e643e864bcb96cf06f14f4cb559b034e114d0 | /Exps_7_v3/doc3d/Ablation4_ch016_ep010/Gather1_W_change_C_fix_2blk/ep0_test/pyr_0s/L5/step10_a.py | 068fb2c7bc9f20951d26d26257e3c2b442a0c6ca | [] | no_license | KongBOy/kong_model2 | 33a94a9d2be5b0f28f9d479b3744e1d0e0ebd307 | 1af20b168ffccf0d5293a393a40a9fa9519410b2 | refs/heads/master | 2022-10-14T03:09:22.543998 | 2022-10-06T11:33:42 | 2022-10-06T11:33:42 | 242,080,692 | 3 | 0 | null | null | null | null | UTF-8 | Python | false | false | 6,723 | py | #############################################################################################################################################################################################################
#############################################################################################################################################################################################################
### Add kong_model2 to sys.path
import os
code_exe_path = os.path.realpath(__file__) ### path of the currently executing step10_b.py
code_exe_path_element = code_exe_path.split("\\") ### split the path; used below to find which level kong_model2 sits at
code_dir = "\\".join(code_exe_path_element[:-1])
kong_layer = code_exe_path_element.index("kong_model2") ### find which level kong_model2 sits at
kong_model2_dir = "\\".join(code_exe_path_element[:kong_layer + 1]) ### locate the kong_model2 dir
import sys ### add kong_model2 to sys.path
sys.path.append(kong_model2_dir)
sys.path.append(code_dir)
# print(__file__.split("\\")[-1])
# print(" code_exe_path:", code_exe_path)
# print(" code_exe_path_element:", code_exe_path_element)
# print(" code_dir:", code_dir)
# print(" kong_layer:", kong_layer)
# print(" kong_model2_dir:", kong_model2_dir)
#############################################################################################################################################################################################################
kong_to_py_layer = len(code_exe_path_element) - 1 - kong_layer ### the -1 converts a length into an index
# print(" kong_to_py_layer:", kong_to_py_layer)
if (kong_to_py_layer == 0): template_dir = ""
elif(kong_to_py_layer == 2): template_dir = code_exe_path_element[kong_layer + 1][0:] ### [7:] used to strip the step1x_ prefix; later decided meaningful names need not be stripped, so changed to 0
elif(kong_to_py_layer == 3): template_dir = code_exe_path_element[kong_layer + 1][0:] + "/" + code_exe_path_element[kong_layer + 2][0:] ### [5:] used to strip the mask_ prefix, which was only added because a Python module name cannot start with a digit; later decided the automatic ordering is acceptable, so changed to 0
elif(kong_to_py_layer > 3): template_dir = code_exe_path_element[kong_layer + 1][0:] + "/" + code_exe_path_element[kong_layer + 2][0:] + "/" + "/".join(code_exe_path_element[kong_layer + 3: -1])
# print(" template_dir:", template_dir) ### e.g. template_dir: 7_mask_unet/5_os_book_and_paper_have_dtd_hdr_mix_bg_tv_s04_mae
#############################################################################################################################################################################################################
exp_dir = template_dir
#############################################################################################################################################################################################################
from step06_a_datas_obj import *
from step09_0side_L5 import *
from step10_a2_loss_info_obj import *
from step10_b2_exp_builder import Exp_builder
rm_paths = [path for path in sys.path if code_dir in path]
for rm_path in rm_paths: sys.path.remove(rm_path)
rm_moduless = [module for module in sys.modules if "step09" in module]
for rm_module in rm_moduless: del sys.modules[rm_module]
import Exps_7_v3.doc3d.Ablation4_ch016_ep010.I_w_M_to_W_pyr.pyr_0s.L5.step10_a as I_w_M_to_W_p20_pyr
from Exps_7_v3.doc3d.Ablation4_ch016_ep010.W_w_M_to_C_pyr.pyr_2s.L5.step10_a import ch032_1side_6__2side_6__ep010 as W_w_M_to_C_p20_2s_L5_Mae_Sob_k09
#############################################################################################################################################################################################################
'''
exp_dir is the folder one level above result_dir; a nested exp_dir works too.
For example, with exp_dir = "6_mask_unet/a_name_you_choose", the result dirs all live under:
    6_mask_unet/a_name_you_choose/result_a
    6_mask_unet/a_name_you_choose/result_b
    6_mask_unet/a_name_you_choose/...
'''
use_db_obj = type8_blender_kong_doc3d_v2
use_loss_obj = [mae_s001_sobel_k9_s001_loss_info_builder.set_loss_target("UNet_Wz").copy(), mae_s001_sobel_k9_s001_loss_info_builder.set_loss_target("UNet_Wy").copy(), mae_s001_sobel_k9_s001_loss_info_builder.set_loss_target("UNet_Wx").copy(), mae_s001_sobel_k9_s001_loss_info_builder.set_loss_target("UNet_Cx").copy(), mae_s001_sobel_k9_s001_loss_info_builder.set_loss_target("UNet_Cy").copy()] ### the z, y, x order corresponds to step07_b_0b_Multi_UNet
#############################################################
### Build an empty Exp_builder so result_analyze can draw blank figures
empty = Exp_builder().set_basic("train", use_db_obj, ch032_pyramid_0side_and_1s6_2s6, use_loss_obj, exp_dir=exp_dir, code_exe_path=code_exe_path, describe_end=ch032_pyramid_0side.kong_model.model_describe) .set_train_args(epochs= 1) .set_train_iter_args(it_see_fq=900 * 5, it_save_fq=900 * 5, it_down_step="half", it_down_fq=900).set_train_in_gt_use_range(use_in_range=Range(0, 1), use_gt_range=Range(0, 1)).set_result_name(result_name="為了resul_analyze畫空白的圖,建一個empty的 Exp_builder")
#############################################################
ch032_0side = Exp_builder().set_basic("train", use_db_obj, ch032_pyramid_0side_and_1s6_2s6, use_loss_obj, exp_dir=exp_dir, code_exe_path=code_exe_path, describe_end=ch032_pyramid_0side.kong_model.model_describe) .set_train_args(epochs= 1) .set_train_iter_args(it_see_fq=900 * 5, it_save_fq=900 * 5, it_down_step="half", it_down_fq=900).set_train_in_gt_use_range(use_in_range=Range(0, 1), use_gt_range=Range(0, 1)).set_multi_model_reload_exp_builders_dict(I_to_Wx_Wy_Wz=I_w_M_to_W_p20_pyr.ch032_0side, W_to_Cx_Cy=W_w_M_to_C_p20_2s_L5_Mae_Sob_k09).set_result_name(result_name="p20_L5-ch032_0side")
#############################################################
if(__name__ == "__main__"):
print("build exps cost time:", time.time() - start_time)
if len(sys.argv) < 2:
############################################################################################################
        ### Press F5 directly, or run "python step10_b1_exp_obj_load_and_train_and_test.py" with no arguments, so execution does not fall through to the code below meant for step10_b_subprocess.py
ch032_0side.build().run()
# print('no argument')
sys.exit()
    ### The part below is for step10_b_subprocess.py; it is equivalent to running "python step10_b1_exp_obj_load_and_train_and_test.py some_exp.build().run()" from cmd
eval(sys.argv[1])
| [
"s89334roy@yahoo.com.tw"
] | s89334roy@yahoo.com.tw |
97a0aef5d117499161a3018f5013cc2915e46737 | 51f887286aa3bd2c3dbe4c616ad306ce08976441 | /pybind/slxos/v17s_1_02/routing_system/protocol/hide_vrrp_holder/__init__.py | 9c1361ce79cf8dbdd16c59ad7805d05c225207bd | [
"Apache-2.0"
] | permissive | b2220333/pybind | a8c06460fd66a97a78c243bf144488eb88d7732a | 44c467e71b2b425be63867aba6e6fa28b2cfe7fb | refs/heads/master | 2020-03-18T09:09:29.574226 | 2018-04-03T20:09:50 | 2018-04-03T20:09:50 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 9,002 | py |
from operator import attrgetter
import pyangbind.lib.xpathhelper as xpathhelper
from pyangbind.lib.yangtypes import RestrictedPrecisionDecimalType, RestrictedClassType, TypedListType
from pyangbind.lib.yangtypes import YANGBool, YANGListType, YANGDynClass, ReferenceType
from pyangbind.lib.base import PybindBase
from decimal import Decimal
from bitarray import bitarray
import __builtin__
class hide_vrrp_holder(PybindBase):
"""
This class was auto-generated by the PythonClass plugin for PYANG
from YANG module brocade-common-def - based on the path /routing-system/protocol/hide-vrrp-holder. Each member element of
the container is represented as a class variable - with a specific
YANG type.
"""
__slots__ = ('_pybind_generated_by', '_path_helper', '_yang_name', '_rest_name', '_extmethods', '__vrrp','__vrrp_extended',)
_yang_name = 'hide-vrrp-holder'
_rest_name = ''
_pybind_generated_by = 'container'
def __init__(self, *args, **kwargs):
path_helper_ = kwargs.pop("path_helper", None)
if path_helper_ is False:
self._path_helper = False
elif path_helper_ is not None and isinstance(path_helper_, xpathhelper.YANGPathHelper):
self._path_helper = path_helper_
elif hasattr(self, "_parent"):
path_helper_ = getattr(self._parent, "_path_helper", False)
self._path_helper = path_helper_
else:
self._path_helper = False
extmethods = kwargs.pop("extmethods", None)
if extmethods is False:
self._extmethods = False
elif extmethods is not None and isinstance(extmethods, dict):
self._extmethods = extmethods
elif hasattr(self, "_parent"):
extmethods = getattr(self._parent, "_extmethods", None)
self._extmethods = extmethods
else:
self._extmethods = False
    self.__vrrp = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="vrrp", rest_name="vrrp", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Virtual Router Redundancy Protocol (VRRP)', u'cli-show-no': None}}, namespace='urn:brocade.com:mgmt:brocade-vrrp', defining_module='brocade-vrrp', yang_type='empty', is_config=True)
    self.__vrrp_extended = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="vrrp-extended", rest_name="vrrp-extended", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Virtual Router Redundancy Protocol Extended (VRRP-E)', u'cli-show-no': None}}, namespace='urn:brocade.com:mgmt:brocade-vrrp', defining_module='brocade-vrrp', yang_type='empty', is_config=True)
load = kwargs.pop("load", None)
if args:
if len(args) > 1:
raise TypeError("cannot create a YANG container with >1 argument")
all_attr = True
for e in self._pyangbind_elements:
if not hasattr(args[0], e):
all_attr = False
break
if not all_attr:
raise ValueError("Supplied object did not have the correct attributes")
for e in self._pyangbind_elements:
nobj = getattr(args[0], e)
if nobj._changed() is False:
continue
setmethod = getattr(self, "_set_%s" % e)
if load is None:
setmethod(getattr(args[0], e))
else:
setmethod(getattr(args[0], e), load=load)
def _path(self):
if hasattr(self, "_parent"):
return self._parent._path()+[self._yang_name]
else:
return [u'routing-system', u'protocol', u'hide-vrrp-holder']
def _rest_path(self):
if hasattr(self, "_parent"):
if self._rest_name:
return self._parent._rest_path()+[self._rest_name]
else:
return self._parent._rest_path()
else:
return [u'protocol']
def _get_vrrp(self):
"""
Getter method for vrrp, mapped from YANG variable /routing_system/protocol/hide_vrrp_holder/vrrp (empty)
    YANG Description: Virtual Router Redundancy Protocol (VRRP)
"""
return self.__vrrp
def _set_vrrp(self, v, load=False):
"""
Setter method for vrrp, mapped from YANG variable /routing_system/protocol/hide_vrrp_holder/vrrp (empty)
If this variable is read-only (config: false) in the
source YANG file, then _set_vrrp is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_vrrp() directly.
    YANG Description: Virtual Router Redundancy Protocol (VRRP)
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
      t = YANGDynClass(v,base=YANGBool, is_leaf=True, yang_name="vrrp", rest_name="vrrp", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Virtual Router Redundancy Protocol (VRRP)', u'cli-show-no': None}}, namespace='urn:brocade.com:mgmt:brocade-vrrp', defining_module='brocade-vrrp', yang_type='empty', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """vrrp must be of a type compatible with empty""",
'defined-type': "empty",
        'generated-type': """YANGDynClass(base=YANGBool, is_leaf=True, yang_name="vrrp", rest_name="vrrp", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Virtual Router Redundancy Protocol (VRRP)', u'cli-show-no': None}}, namespace='urn:brocade.com:mgmt:brocade-vrrp', defining_module='brocade-vrrp', yang_type='empty', is_config=True)""",
})
self.__vrrp = t
if hasattr(self, '_set'):
self._set()
def _unset_vrrp(self):
    self.__vrrp = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="vrrp", rest_name="vrrp", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Virtual Router Redundancy Protocol (VRRP)', u'cli-show-no': None}}, namespace='urn:brocade.com:mgmt:brocade-vrrp', defining_module='brocade-vrrp', yang_type='empty', is_config=True)
def _get_vrrp_extended(self):
"""
Getter method for vrrp_extended, mapped from YANG variable /routing_system/protocol/hide_vrrp_holder/vrrp_extended (empty)
    YANG Description: Virtual Router Redundancy Protocol Extended (VRRP-E)
"""
return self.__vrrp_extended
def _set_vrrp_extended(self, v, load=False):
"""
Setter method for vrrp_extended, mapped from YANG variable /routing_system/protocol/hide_vrrp_holder/vrrp_extended (empty)
If this variable is read-only (config: false) in the
source YANG file, then _set_vrrp_extended is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_vrrp_extended() directly.
    YANG Description: Virtual Router Redundancy Protocol Extended (VRRP-E)
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
      t = YANGDynClass(v,base=YANGBool, is_leaf=True, yang_name="vrrp-extended", rest_name="vrrp-extended", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Virtual Router Redundancy Protocol Extended (VRRP-E)', u'cli-show-no': None}}, namespace='urn:brocade.com:mgmt:brocade-vrrp', defining_module='brocade-vrrp', yang_type='empty', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """vrrp_extended must be of a type compatible with empty""",
'defined-type': "empty",
        'generated-type': """YANGDynClass(base=YANGBool, is_leaf=True, yang_name="vrrp-extended", rest_name="vrrp-extended", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Virtual Router Redundancy Protocol Extended (VRRP-E)', u'cli-show-no': None}}, namespace='urn:brocade.com:mgmt:brocade-vrrp', defining_module='brocade-vrrp', yang_type='empty', is_config=True)""",
})
self.__vrrp_extended = t
if hasattr(self, '_set'):
self._set()
def _unset_vrrp_extended(self):
    self.__vrrp_extended = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="vrrp-extended", rest_name="vrrp-extended", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Virtual Router Redundancy Protocol Extended (VRRP-E)', u'cli-show-no': None}}, namespace='urn:brocade.com:mgmt:brocade-vrrp', defining_module='brocade-vrrp', yang_type='empty', is_config=True)
vrrp = __builtin__.property(_get_vrrp, _set_vrrp)
vrrp_extended = __builtin__.property(_get_vrrp_extended, _set_vrrp_extended)
_pyangbind_elements = {'vrrp': vrrp, 'vrrp_extended': vrrp_extended, }
| [
"badaniya@brocade.com"
] | badaniya@brocade.com |
68dbd7dfa64f57c6443696841085abbebad9111d | d6952f048727add5b54a521d04f6c9b5889bcd35 | /pollination_sdk/models/project_folder.py | 0e150c669fb8aacf50defcdfda3311527b182f19 | [] | no_license | TfedUD/python-sdk | bf719644041c2ab7b741af9c7fb8e5acfe085922 | 7ddc34611de44d2f9c5b217cf9b9e7cec27b2a27 | refs/heads/master | 2023-08-10T21:13:45.270193 | 2021-06-21T14:48:36 | 2021-06-21T14:51:01 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 5,473 | py | # coding: utf-8
"""
pollination-server
Pollination Server OpenAPI Definition # noqa: E501
The version of the OpenAPI document: 0.13.0
Contact: info@pollination.cloud
Generated by: https://openapi-generator.tech
"""
import pprint
import re # noqa: F401
import six
from pollination_sdk.configuration import Configuration
class ProjectFolder(object):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
"""
Attributes:
openapi_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
openapi_types = {
'annotations': 'dict(str, str)',
'path': 'str',
'type': 'str'
}
attribute_map = {
'annotations': 'annotations',
'path': 'path',
'type': 'type'
}
def __init__(self, annotations=None, path=None, type='ProjectFolder', local_vars_configuration=None): # noqa: E501
"""ProjectFolder - a model defined in OpenAPI""" # noqa: E501
if local_vars_configuration is None:
local_vars_configuration = Configuration()
self.local_vars_configuration = local_vars_configuration
self._annotations = None
self._path = None
self._type = None
self.discriminator = None
if annotations is not None:
self.annotations = annotations
if path is not None:
self.path = path
if type is not None:
self.type = type
@property
def annotations(self):
"""Gets the annotations of this ProjectFolder. # noqa: E501
An optional dictionary to add annotations to inputs. These annotations will be used by the client side libraries. # noqa: E501
:return: The annotations of this ProjectFolder. # noqa: E501
:rtype: dict(str, str)
"""
return self._annotations
@annotations.setter
def annotations(self, annotations):
"""Sets the annotations of this ProjectFolder.
An optional dictionary to add annotations to inputs. These annotations will be used by the client side libraries. # noqa: E501
:param annotations: The annotations of this ProjectFolder. # noqa: E501
:type annotations: dict(str, str)
"""
self._annotations = annotations
@property
def path(self):
"""Gets the path of this ProjectFolder. # noqa: E501
The path to a folder where files and folders can be sourced. For a local filesystem this can be \"C:\\Users\\me\\jobs\\test\". # noqa: E501
:return: The path of this ProjectFolder. # noqa: E501
:rtype: str
"""
return self._path
@path.setter
def path(self, path):
"""Sets the path of this ProjectFolder.
The path to a folder where files and folders can be sourced. For a local filesystem this can be \"C:\\Users\\me\\jobs\\test\". # noqa: E501
:param path: The path of this ProjectFolder. # noqa: E501
:type path: str
"""
self._path = path
@property
def type(self):
"""Gets the type of this ProjectFolder. # noqa: E501
:return: The type of this ProjectFolder. # noqa: E501
:rtype: str
"""
return self._type
@type.setter
def type(self, type):
"""Sets the type of this ProjectFolder.
:param type: The type of this ProjectFolder. # noqa: E501
:type type: str
"""
if (self.local_vars_configuration.client_side_validation and
type is not None and not re.search(r'^ProjectFolder$', type)): # noqa: E501
raise ValueError(r"Invalid value for `type`, must be a follow pattern or equal to `/^ProjectFolder$/`") # noqa: E501
self._type = type
def to_dict(self):
"""Returns the model properties as a dict"""
result = {}
for attr, _ in six.iteritems(self.openapi_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
result[attr] = value
return result
def to_str(self):
"""Returns the string representation of the model"""
return pprint.pformat(self.to_dict())
def __repr__(self):
"""For `print` and `pprint`"""
return self.to_str()
def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, ProjectFolder):
return False
return self.to_dict() == other.to_dict()
def __ne__(self, other):
"""Returns true if both objects are not equal"""
if not isinstance(other, ProjectFolder):
return True
return self.to_dict() != other.to_dict()
| [
"antoinedao1@gmail.com"
] | antoinedao1@gmail.com |
9642a250b171c78d131ca215119d3d7609d57884 | 35d67ed561d875293aa137d21ced50d2912dbff8 | /pydl/pydlutils/fits/__init__.py | 11301fe5886469adf07784fdc9986cc084c8eada | [] | no_license | eigenbrot/pydl | 917b7b8794c26c45658e549bc1ea805f5ca29bc4 | aa8e37b031c8b5114851519e1ed0860a9cdb4ba3 | refs/heads/master | 2021-01-15T08:48:40.861756 | 2013-10-19T15:55:28 | 2013-10-19T15:55:28 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 38 | py | from hogg_mrdfits import hogg_mrdfits
| [
"benjamin.weaver@nyu.edu"
] | benjamin.weaver@nyu.edu |
3ab90496ea31ecc33f1d7296a54b98f6b01e95a3 | fe3265b72e691c6df8ecd936c25b6d48ac33b59a | /homeassistant/components/livisi/switch.py | bcb9a2044119ad26ab3c66b78575729ed4c684bc | [
"Apache-2.0"
] | permissive | bdraco/home-assistant | dcaf76c0967783a08eec30ce704e5e9603a2f0ca | bfa315be51371a1b63e04342a0b275a57ae148bd | refs/heads/dev | 2023-08-16T10:39:15.479821 | 2023-02-21T22:38:50 | 2023-02-21T22:38:50 | 218,684,806 | 13 | 7 | Apache-2.0 | 2023-02-21T23:40:57 | 2019-10-31T04:33:09 | Python | UTF-8 | Python | false | false | 5,432 | py | """Code to handle a Livisi switches."""
from __future__ import annotations
from typing import Any
from homeassistant.components.switch import SwitchEntity
from homeassistant.config_entries import ConfigEntry
from homeassistant.core import HomeAssistant, callback
from homeassistant.exceptions import HomeAssistantError
from homeassistant.helpers.dispatcher import async_dispatcher_connect
from homeassistant.helpers.entity import DeviceInfo
from homeassistant.helpers.entity_platform import AddEntitiesCallback
from homeassistant.helpers.update_coordinator import CoordinatorEntity
from .const import (
DOMAIN,
LIVISI_REACHABILITY_CHANGE,
LIVISI_STATE_CHANGE,
LOGGER,
PSS_DEVICE_TYPE,
)
from .coordinator import LivisiDataUpdateCoordinator
async def async_setup_entry(
hass: HomeAssistant,
config_entry: ConfigEntry,
async_add_entities: AddEntitiesCallback,
) -> None:
"""Set up switch device."""
coordinator: LivisiDataUpdateCoordinator = hass.data[DOMAIN][config_entry.entry_id]
@callback
def handle_coordinator_update() -> None:
"""Add switch."""
shc_devices: list[dict[str, Any]] = coordinator.data
entities: list[SwitchEntity] = []
for device in shc_devices:
if (
device["type"] == PSS_DEVICE_TYPE
and device["id"] not in coordinator.devices
):
livisi_switch: SwitchEntity = create_entity(
config_entry, device, coordinator
)
LOGGER.debug("Include device type: %s", device["type"])
coordinator.devices.add(device["id"])
entities.append(livisi_switch)
async_add_entities(entities)
config_entry.async_on_unload(
coordinator.async_add_listener(handle_coordinator_update)
)
def create_entity(
config_entry: ConfigEntry,
device: dict[str, Any],
coordinator: LivisiDataUpdateCoordinator,
) -> SwitchEntity:
"""Create Switch Entity."""
config_details: dict[str, Any] = device["config"]
capabilities: list = device["capabilities"]
room_id: str = device["location"]
room_name: str = coordinator.rooms[room_id]
livisi_switch = LivisiSwitch(
config_entry,
coordinator,
unique_id=device["id"],
manufacturer=device["manufacturer"],
device_type=device["type"],
name=config_details["name"],
capability_id=capabilities[0],
room=room_name,
)
return livisi_switch
class LivisiSwitch(CoordinatorEntity[LivisiDataUpdateCoordinator], SwitchEntity):
"""Represents the Livisi Switch."""
def __init__(
self,
config_entry: ConfigEntry,
coordinator: LivisiDataUpdateCoordinator,
unique_id: str,
manufacturer: str,
device_type: str,
name: str,
capability_id: str,
room: str,
) -> None:
"""Initialize the Livisi Switch."""
self.config_entry = config_entry
self._attr_unique_id = unique_id
self._attr_name = name
self._capability_id = capability_id
self.aio_livisi = coordinator.aiolivisi
self._attr_available = False
self._attr_device_info = DeviceInfo(
identifiers={(DOMAIN, unique_id)},
manufacturer=manufacturer,
model=device_type,
name=name,
suggested_area=room,
via_device=(DOMAIN, config_entry.entry_id),
)
super().__init__(coordinator)
async def async_turn_on(self, **kwargs: Any) -> None:
"""Turn the entity on."""
response = await self.aio_livisi.async_pss_set_state(
self._capability_id, is_on=True
)
if response is None:
self._attr_available = False
raise HomeAssistantError(f"Failed to turn on {self._attr_name}")
async def async_turn_off(self, **kwargs: Any) -> None:
"""Turn the entity off."""
response = await self.aio_livisi.async_pss_set_state(
self._capability_id, is_on=False
)
if response is None:
self._attr_available = False
raise HomeAssistantError(f"Failed to turn off {self._attr_name}")
async def async_added_to_hass(self) -> None:
"""Register callbacks."""
response = await self.coordinator.async_get_pss_state(self._capability_id)
if response is None:
self._attr_is_on = False
self._attr_available = False
else:
self._attr_is_on = response
self.async_on_remove(
async_dispatcher_connect(
self.hass,
f"{LIVISI_STATE_CHANGE}_{self._capability_id}",
self.update_states,
)
)
self.async_on_remove(
async_dispatcher_connect(
self.hass,
f"{LIVISI_REACHABILITY_CHANGE}_{self.unique_id}",
self.update_reachability,
)
)
@callback
def update_states(self, state: bool) -> None:
"""Update the states of the switch device."""
self._attr_is_on = state
self.async_write_ha_state()
@callback
def update_reachability(self, is_reachable: bool) -> None:
"""Update the reachability of the switch device."""
self._attr_available = is_reachable
self.async_write_ha_state()
| [
"noreply@github.com"
] | bdraco.noreply@github.com |
d6485400fdda6e0149259558267ad521c3cc1308 | acb8e84e3b9c987fcab341f799f41d5a5ec4d587 | /langs/3/gPt.py | 6609d7c4ae48c079d14c390b468e1c66a9fc91f9 | [] | no_license | G4te-Keep3r/HowdyHackers | 46bfad63eafe5ac515da363e1c75fa6f4b9bca32 | fb6d391aaecb60ab5c4650d4ae2ddd599fd85db2 | refs/heads/master | 2020-08-01T12:08:10.782018 | 2016-11-13T20:45:50 | 2016-11-13T20:45:50 | 73,624,224 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 486 | py | import sys
def printFunction(lineRemaining):
if lineRemaining[0] == '"' and lineRemaining[-1] == '"':
if len(lineRemaining) > 2:
#data to print
lineRemaining = lineRemaining[1:-1]
print ' '.join(lineRemaining)
else:
print
def main(fileName):
with open(fileName) as f:
for line in f:
data = line.split()
if data[0] == 'gPT':
printFunction(data[1:])
else:
print 'ERROR'
return
if __name__ == '__main__':
main(sys.argv[1]) | [
"juliettaylorswift@gmail.com"
] | juliettaylorswift@gmail.com |
c42ca593d2e114e99beff8bf337b2848a70f7845 | f62ddee2dcadcae0e90f969be513b04e16dabf58 | /data/source/tests/pyyal_test_support/test_case.py | 4f8ed1569fc2bf2986ab49641031b7bcfc7bd2b2 | [
"Apache-2.0"
] | permissive | libyal/libyal | 30bccf56471dbf874292fe32d5d9173fd470df0e | 124111953917f65782a66a80e96a502ce2331b09 | refs/heads/main | 2023-07-25T09:25:46.068071 | 2023-07-08T09:56:44 | 2023-07-08T09:56:44 | 23,780,738 | 196 | 30 | Apache-2.0 | 2022-11-27T19:01:42 | 2014-09-08T05:57:58 | C | UTF-8 | Python | false | false | 85 | py | class SupportFunctionsTests(unittest.TestCase):
"""Tests the support functions."""
| [
"joachim.metz@gmail.com"
] | joachim.metz@gmail.com |
84a74f7f1adcb4948a83a6a6bc07fe6fd2aebf37 | 69851e673bad63c54138fd4c6a7532d298b28728 | /test/asyncore_echo_server.py | b12383219e34e278fe9d0fe6ed31f733b0d8f4cc | [] | no_license | ppppdm/mtcpsoft | a23e5b7b5f0144a2bad927824194b9534ee0a2f0 | 3a02474960d2903d4979a89b1c7568932f7ec006 | refs/heads/master | 2020-05-30T15:05:12.850440 | 2013-08-02T09:00:06 | 2013-08-02T09:00:06 | 9,294,934 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 4,062 | py | import logging
import asyncore
import socket
logging.basicConfig(level=logging.DEBUG, format="%(created)-15s %(msecs)d %(levelname)8s %(thread)d %(name)s %(message)s")
log = logging.getLogger(__name__)
BACKLOG = 5
SIZE = 1024
class EchoHandler(asyncore.dispatcher):
def __init__(self, conn_sock, client_address, server):
self.server = server
self.client_address = client_address
self.buffer = ""
# We dont have anything to write, to start with
self.is_writable = False
# Create ourselves, but with an already provided socket
asyncore.dispatcher.__init__(self, conn_sock)
log.debug("created handler; waiting for loop")
def readable(self):
return True # We are always happy to read
def writable(self):
return self.is_writable # But we might not have
# anything to send all the time
def handle_read(self):
log.debug("handle_read")
data = self.recv(SIZE)
log.debug("after recv")
if data:
log.debug("got data")
self.buffer += data
            self.is_writable = True # something to send back now
else:
log.debug("got null data")
def handle_write(self):
log.debug("handle_write")
if self.buffer:
sent = self.send(self.buffer)
log.debug("sent data")
self.buffer = self.buffer[sent:]
else:
log.debug("nothing to send")
if len(self.buffer) == 0:
self.is_writable = False
# Will this ever get called? Does loop() call
# handle_close() if we called close, to start with?
def handle_close(self):
log.debug("handle_close")
log.info("conn_closed: client_address=%s:%s" % \
(self.client_address[0],
self.client_address[1]))
self.close()
class EchoServer(asyncore.dispatcher):
allow_reuse_address = False
request_queue_size = 5
address_family = socket.AF_INET
socket_type = socket.SOCK_STREAM
def __init__(self, address, handlerClass=EchoHandler):
self.address = address
self.handlerClass = handlerClass
asyncore.dispatcher.__init__(self)
self.create_socket(self.address_family,
self.socket_type)
if self.allow_reuse_address:
            self.set_reuse_addr()
self.server_bind()
self.server_activate()
def server_bind(self):
self.bind(self.address)
log.debug("bind: address=%s:%s" % (self.address[0], self.address[1]))
def server_activate(self):
self.listen(self.request_queue_size)
log.debug("listen: backlog=%d" % self.request_queue_size)
def fileno(self):
return self.socket.fileno()
def serve_forever(self):
asyncore.loop()
# TODO: try to implement handle_request()
# Internal use
def handle_accept(self):
(conn_sock, client_address) = self.accept()
if self.verify_request(conn_sock, client_address):
self.process_request(conn_sock, client_address)
def verify_request(self, conn_sock, client_address):
return True
def process_request(self, conn_sock, client_address):
log.info("conn_made: client_address=%s:%s" % \
(client_address[0],
client_address[1]))
self.handlerClass(conn_sock, client_address, self)
def handle_close(self):
self.close()
if __name__=='__main__':
import asyncore_echo_server
interface = 'localhost'
port = 6001
server = asyncore_echo_server.EchoServer((interface, port))
server.serve_forever()
| [
"ppppdm@gmail.com"
] | ppppdm@gmail.com |
58e929b86c5c6af28135527b1d7338671acd45f5 | a8939556f37cbc7313b7e648f0feed143951fc86 | /biblioteca/apps/permisos/migrations/0003_permiso_emma.py | ab9d6c63eda9676785137caa3ae5987b4ba00510 | [] | no_license | miemma/biblioteca | 212336177fd304be7d20f57001fb567e220bb1ef | 0c170cc9ae75f0047a6e1ef6a039d47084989333 | refs/heads/master | 2020-12-25T15:09:17.683860 | 2016-08-02T14:50:56 | 2016-08-02T14:50:56 | 51,967,706 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 450 | py | # -*- coding: utf-8 -*-
# Generated by Django 1.9.2 on 2016-07-27 14:14
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('permisos', '0002_auto_20160726_1028'),
]
operations = [
migrations.AddField(
model_name='permiso',
name='emma',
field=models.BooleanField(default=True),
),
]
| [
"mauriciodinki@gmail.com"
] | mauriciodinki@gmail.com |
c8846c0df39e4346d96c1582996cda74c3687160 | 09cc53ebea64b1c73763ee3ddc561996c61ba531 | /sur/apps.py | 72b4f6be067b305d3bb765c7d2b09b519a2a2fa0 | [] | no_license | phasety/sur | a06e5c629f5b4e79f117fe722d70f11a4a2f7a0d | 0e4ceba33cde686f91e28de92eadeb1a1d5b7b34 | refs/heads/develop | 2021-01-21T14:40:16.370011 | 2016-05-22T16:35:32 | 2016-05-22T16:35:32 | 59,416,879 | 4 | 1 | null | 2016-07-25T22:50:38 | 2016-05-22T14:34:50 | Python | UTF-8 | Python | false | false | 105 | py | from django.apps import AppConfig
class SurConfig(AppConfig):
name = 'sur'
verbose_name = "Sur" | [
"gaitan@gmail.com"
] | gaitan@gmail.com |
a4d2108824be0ac9fcd647eb84358ec9b51fbaea | 15f321878face2af9317363c5f6de1e5ddd9b749 | /solutions_python/Problem_207/535.py | 49c82db37b6a6d5585e9da8e7da7c644ace9e867 | [] | no_license | dr-dos-ok/Code_Jam_Webscraper | c06fd59870842664cd79c41eb460a09553e1c80a | 26a35bf114a3aa30fc4c677ef069d95f41665cc0 | refs/heads/master | 2020-04-06T08:17:40.938460 | 2018-10-14T10:12:47 | 2018-10-14T10:12:47 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,029 | py | # input() reads a string with a line of input, stripping the '\n' (newline) at the end.
# This is all you need for most Google Code Jam problems.
t = int(input()) # read a line with a single integer
for c in range(1, t + 1):
n,r,o,y,g,b,v = map(int, input().split(" ")) # read a list of integers, 2 in this case
l = {'R':r, 'O':o, 'Y': y, 'G': g, 'B': b, 'V': v}
neigh = {'R': 'BGY', 'O': 'VGB', 'Y': 'BVR', 'G': 'VRO', 'B': 'ROY', 'V': 'OYG'}
# pierwszy nie ma znaczenia
can = True
res = ""
m = max(l, key=l.get)
res += m
l[m] = l[m] - 1
    # then take from the longest cycles
visited = {'R': -1, 'O':-1, 'Y': -1, 'G': -1, 'B': -1, 'V': -1}
visited[m] = 0
for i in range(n-1):
prev = res[-1]
nex = dict((k, l[k]) for k in neigh[prev] if l[k])
if not nex:
can = False
break
m = min(nex, key=visited.get)
res += m
visited[m] = i
l[m] = l[m] - 1
if can and n > 1:
can = res[-1] in neigh[res[0]]
print("Case #{}: {}".format(c, res if can else "IMPOSSIBLE"))
| [
"miliar1732@gmail.com"
] | miliar1732@gmail.com |
54b39fb0c7e6ef24872826a2dddd60ede9dae49d | f561a219c57bd75790d3155acac6f54299a88b08 | /city/admin.py | 0519873f37af6e7fa4913e0e2fe60a712456d5c7 | [] | no_license | ujjwalagrawal17/OfferCartServer | 1e81cf2dc17f19fa896062c2a084e6b232a8929e | b3cd1c5f8eecc167b6f4baebed3c4471140d905f | refs/heads/master | 2020-12-30T15:31:04.380084 | 2017-05-24T18:26:20 | 2017-05-24T18:26:20 | 91,155,405 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 523 | py | from django.contrib import admin
from .models import *
# Register your models here.
class CityDataAdmin(admin.ModelAdmin):
list_display = ["id", "name", "created", "modified"]
admin.site.register(CityData, CityDataAdmin)
class UserCityDataAdmin(admin.ModelAdmin):
list_display = ["id", "city_id", "user_id"]
admin.site.register(UserCityData, UserCityDataAdmin)
class CityFcmDataAdmin(admin.ModelAdmin):
list_display = ["id", "city_id", "user_id"]
admin.site.register(CityFcmData, CityFcmDataAdmin)
| [
"ujjwal.iitism@gmail.com"
] | ujjwal.iitism@gmail.com |
bc9c12c96f2b5f6f38674dd2cee18c4c49df274b | a85303ac9116e57d756afd5feb9e0b22f6ebe7a4 | /tools/region_recall.py | 765657c8c9aa1bfcf4fb68703c9d956f4c26c156 | [] | no_license | TWSFar/visdrone | 866b1a80f02bd05183176047ea25a4600d34a3cc | 54bb301cfdd7b0ce44e3e4d168441721776efe11 | refs/heads/master | 2020-07-12T05:14:50.191525 | 2019-08-27T10:33:40 | 2019-08-27T10:33:40 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,150 | py | import os, sys
import cv2
import argparse
import numpy as np
from glob import glob
from tqdm import tqdm
import utils
import pdb
from datasets import get_dataset
def parse_args():
parser = argparse.ArgumentParser(description="show mask results")
parser.add_argument('dataset', type=str, default='VisDrone',
choices=['VisDrone', 'HKB'], help='dataset name')
args = parser.parse_args()
return args
if __name__ == '__main__':
args = parse_args()
dataset = get_dataset(args.dataset)
val_list = dataset.get_imglist('val')
mask_path = '../pytorch-deeplab-xception/run/mask-hkbval'
label_object = []
detect_object = []
mask_object = []
undetected_img = []
pixel_num = []
for img_path in tqdm(val_list, ncols=80):
img_name = os.path.basename(img_path)
raw_file = os.path.join(mask_path, img_name[:-4]+'.png')
img = cv2.imread(img_path)
height, width = img.shape[:2]
mask_img = cv2.imread(raw_file, cv2.IMREAD_GRAYSCALE)
mask_h, mask_w = mask_img.shape[:2]
pixel_num.append(np.sum(mask_img))
label_box, _ = dataset.get_gtbox(img_path)
region_box, contours = utils.generate_box_from_mask(mask_img)
region_box = utils.region_postprocess(region_box, contours, (mask_w, mask_h))
region_box = utils.resize_box(region_box, (mask_w, mask_h), (width, height))
region_box = utils.generate_crop_region(region_box, (width, height))
count = 0
for box1 in label_box:
for box2 in region_box:
if utils.overlap(box2, box1):
count += 1
break
label_object.append(len(label_box))
detect_object.append(count)
mask_object.append(len(region_box))
if len(label_box) != count:
undetected_img.append(img_name)
print('recall: %f' % (np.sum(detect_object) / np.sum(label_object)))
# print('cost avg: %f, std: %f' % (np.mean(pixel_num), np.std(pixel_num)))
print('detect box avg: %f' %(np.mean(mask_object)))
print(sorted(undetected_img)) | [
"cyfhorse@gmail.com"
] | cyfhorse@gmail.com |
7828582f9ad5c1df41c4957b9ddc46bc8217c64a | ebd24e400986c57b4bb1b9578ebd8807a6db62e8 | /InstaGrade-FormBuilder/xlsxwriter/test/comparison/test_chart_format12.py | 5d79eda350742be44b8247dff8014a096551d270 | [] | no_license | nate-parrott/ig | 6abed952bf32119a536a524422037ede9b431926 | 6e0b6ac0fb4b59846680567150ce69a620e7f15d | refs/heads/master | 2021-01-12T10:15:15.825004 | 2016-12-13T21:23:17 | 2016-12-13T21:23:17 | 76,399,529 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,926 | py | ###############################################################################
#
# Tests for XlsxWriter.
#
# Copyright (c), 2013-2014, John McNamara, jmcnamara@cpan.org
#
from ..excel_comparsion_test import ExcelComparisonTest
from ...workbook import Workbook
class TestCompareXLSXFiles(ExcelComparisonTest):
"""
Test file created by XlsxWriter against a file created by Excel.
"""
def setUp(self):
self.maxDiff = None
filename = 'chart_format12.xlsx'
test_dir = 'xlsxwriter/test/comparison/'
self.got_filename = test_dir + '_test_' + filename
self.exp_filename = test_dir + 'xlsx_files/' + filename
self.ignore_files = []
self.ignore_elements = {}
def test_create_file(self):
"""Test the creation of an XlsxWriter file with chart formatting."""
workbook = Workbook(self.got_filename)
worksheet = workbook.add_worksheet()
chart = workbook.add_chart({'type': 'line'})
chart.axis_ids = [54794880, 56296576]
data = [
[1, 2, 3, 4, 5],
[2, 4, 6, 8, 10],
[3, 6, 9, 12, 15],
]
worksheet.write_column('A1', data[0])
worksheet.write_column('B1', data[1])
worksheet.write_column('C1', data[2])
chart.add_series({
'categories': '=Sheet1!$A$1:$A$5',
'values': '=Sheet1!$B$1:$B$5',
'trendline': {
'type': 'moving_average',
'period': 2,
'line': {
'color': 'red',
'width': 1,
'dash_type': 'long_dash',
},
},
})
chart.add_series({
'categories': '=Sheet1!$A$1:$A$5',
'values': '=Sheet1!$C$1:$C$5',
})
worksheet.insert_chart('E9', chart)
workbook.close()
self.assertExcelEqual()
| [
"nateparro2t@gmail.com"
] | nateparro2t@gmail.com |
1ae9b2b955575614b0511b5ddb39ed26c3e76eb2 | 0b01cb61a4ae4ae236a354cbfa23064e9057e434 | /alipay/aop/api/domain/KoubeiMerchantKbcloudSubuserinfoQueryModel.py | 6eff376ef9495acbde6543c3e202f036563ce525 | [
"Apache-2.0"
] | permissive | hipacloud/alipay-sdk-python-all | e4aec2869bf1ea6f7c6fb97ac7cc724be44ecd13 | bdbffbc6d5c7a0a3dd9db69c99443f98aecf907d | refs/heads/master | 2022-11-14T11:12:24.441822 | 2020-07-14T03:12:15 | 2020-07-14T03:12:15 | 277,970,730 | 0 | 0 | Apache-2.0 | 2020-07-08T02:33:15 | 2020-07-08T02:33:14 | null | UTF-8 | Python | false | false | 896 | py | #!/usr/bin/env python
# -*- coding: utf-8 -*-
import json
from alipay.aop.api.constant.ParamConstants import *
class KoubeiMerchantKbcloudSubuserinfoQueryModel(object):
def __init__(self):
self._user_id = None
@property
def user_id(self):
return self._user_id
@user_id.setter
def user_id(self, value):
self._user_id = value
def to_alipay_dict(self):
params = dict()
if self.user_id:
if hasattr(self.user_id, 'to_alipay_dict'):
params['user_id'] = self.user_id.to_alipay_dict()
else:
params['user_id'] = self.user_id
return params
@staticmethod
def from_alipay_dict(d):
if not d:
return None
o = KoubeiMerchantKbcloudSubuserinfoQueryModel()
if 'user_id' in d:
o.user_id = d['user_id']
return o
| [
"liuqun.lq@alibaba-inc.com"
] | liuqun.lq@alibaba-inc.com |
8a6360f4ec2a34b4453ff5db3ba9d22e1f3948ef | 1460bad3dfffb5d194bad82ec79c1aac32c46a4d | /06. Inventory.py | 716a0482eb6aad72539af07ef3861ee45d21fe56 | [] | no_license | antondelchev/Objects-and-Classes---Exericse | 29942f9db057995efb41b6cdc1afac0b246f5546 | 199512a917798b81518549fa1c792be07558be3f | refs/heads/main | 2023-06-01T04:29:36.817189 | 2021-06-26T11:55:40 | 2021-06-26T11:55:40 | 379,929,431 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 675 | py | class Inventory:
def __init__(self, capacity):
self.__capacity = capacity
self.items = []
def __repr__(self):
return f"Items: {', '.join(self.items)}.\nCapacity left: {self.get_capacity() - len(self.items)}"
def add_item(self, item):
if len(self.items) < self.__capacity:
self.items.append(item)
else:
return "not enough room in the inventory"
def get_capacity(self):
return self.__capacity
inventory = Inventory(2)
inventory.add_item("potion")
inventory.add_item("sword")
print(inventory.add_item("bottle"))
print(inventory.get_capacity())
print(inventory)
| [
"noreply@github.com"
] | antondelchev.noreply@github.com |
ef72b21dbd259ad4ab9f45b39c810dc2058e2b49 | 46ce3ba4d13a4d6aa20cbfc167937882b18b7f79 | /text-to-speech/caching.py | a590f96f840d3dbaf479ab8954f2f50bc7bb8a20 | [
"MIT"
] | permissive | hsouporto/Bahasa-NLP-Tensorflow | 835645b9cc68b0b69e331298648f820981508be6 | 4e6427230e36c2d79ec951c7f2c3501bf75f9a8a | refs/heads/master | 2022-03-04T17:07:14.443843 | 2019-11-24T17:33:02 | 2019-11-24T17:33:02 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,272 | py | import numpy as np
import librosa
import os
import scipy
import tqdm
sampling_rate = 22050
n_fft = 2048
frame_shift = 0.0125
frame_length = 0.05
fourier_window_size = 2048
max_db = 100
ref_db = 20
preemphasis = 0.97
hop_length = int(sampling_rate * frame_shift)
win_length = int(sampling_rate * frame_length)
n_mels = 80
resampled = 5
reduction_factor = 5
def get_spectrogram(audio_file):
y, sr = librosa.load(audio_file, sr = sampling_rate)
y, _ = librosa.effects.trim(y)
y = np.append(y[0], y[1:] - preemphasis * y[:-1])
linear = librosa.stft(
y = y,
n_fft = fourier_window_size,
hop_length = hop_length,
win_length = win_length,
)
mag = np.abs(linear)
mel_basis = librosa.filters.mel(sampling_rate, fourier_window_size, n_mels)
mel = np.dot(mel_basis, mag)
mel = 20 * np.log10(np.maximum(1e-5, mel))
mag = 20 * np.log10(np.maximum(1e-5, mag))
mel = np.clip((mel - ref_db + max_db) / max_db, 1e-8, 1)
mag = np.clip((mag - ref_db + max_db) / max_db, 1e-8, 1)
return mel.T.astype(np.float32), mag.T.astype(np.float32)
def load_file(path):
mel, mag = get_spectrogram(path)
t = mel.shape[0]
num_paddings = resampled - (t % resampled) if t % resampled != 0 else 0
mel = np.pad(mel, [[0, num_paddings], [0, 0]], mode = 'constant')
mag = np.pad(mag, [[0, num_paddings], [0, 0]], mode = 'constant')
return mel.reshape((-1, n_mels * resampled)), mag
if not os.path.exists('mel'):
os.mkdir('mel')
if not os.path.exists('mag'):
os.mkdir('mag')
tolong_sebut = [
'tolong-sebut/' + i for i in os.listdir('tolong-sebut') if '.wav' in i
]
sebut_perkataan_man = [
'sebut-perkataan-man/' + i
for i in os.listdir('sebut-perkataan-man')
if '.wav' in i
]
sebut_perkataan_woman = [
'sebut-perkataan-woman/' + i
for i in os.listdir('sebut-perkataan-woman')
if '.wav' in i
]
wavs = tolong_sebut + sebut_perkataan_man + sebut_perkataan_woman
for path in tqdm.tqdm(wavs):
try:
mel, mag = load_file(path)
root, ext = os.path.splitext(path)
root = root.replace('/', '-')
np.save('mel/%s.npy' % (root), mel)
np.save('mag/%s.npy' % (root), mag)
except Exception as e:
print(e)
pass
| [
"husein.zol05@gmail.com"
] | husein.zol05@gmail.com |
d12073849a04a0418ceebefbbd28dba15b9a9086 | bde6ed092b7b29703737e11c5a5ff90934af3d74 | /hackerrank/30-days-of-code/day22.py | 3692eb9c7775e83b659713b768a2fa6812c9da0b | [] | no_license | takecian/ProgrammingStudyLog | 2ab7ea601e0996b3fa502b81ec141bc3772442b6 | 94485d131c0cc9842f1f4799da2d861dbf09b12a | refs/heads/master | 2023-04-28T16:56:18.943574 | 2023-04-18T06:34:58 | 2023-04-18T06:34:58 | 128,525,713 | 4 | 0 | null | 2022-12-09T06:15:19 | 2018-04-07T12:21:29 | Python | UTF-8 | Python | false | false | 951 | py | class Node:
def __init__(self, data):
self.right = self.left = None
self.data = data
class Solution:
def insert(self, root, data):
if root == None:
return Node(data)
else:
if data <= root.data:
cur = self.insert(root.left, data)
root.left = cur
else:
cur = self.insert(root.right, data)
root.right = cur
return root
def getHeight(self, root):
# Write your code here
if root == None:
return 0
else:
if root.left == None and root.right == None:
return 0
else:
return 1 + max(self.getHeight(root.left), self.getHeight(root.right))
T = int(input())
myTree = Solution()
root = None
for i in range(T):
data = int(input())
root = myTree.insert(root, data)
height = myTree.getHeight(root)
print(height) | [
"takecian@gmail.com"
] | takecian@gmail.com |
57228292fe41800007162ca6be34000fa2208823 | 50948d4cb10dcb1cc9bc0355918478fb2841322a | /azure-mgmt-network/azure/mgmt/network/v2018_10_01/models/topology.py | 0838d2b70b2837816c9f5e03ae5813cc13404e92 | [
"MIT"
] | permissive | xiafu-msft/azure-sdk-for-python | de9cd680b39962702b629a8e94726bb4ab261594 | 4d9560cfd519ee60667f3cc2f5295a58c18625db | refs/heads/master | 2023-08-12T20:36:24.284497 | 2019-05-22T00:55:16 | 2019-05-22T00:55:16 | 187,986,993 | 1 | 0 | MIT | 2020-10-02T01:17:02 | 2019-05-22T07:33:46 | Python | UTF-8 | Python | false | false | 1,821 | py | # coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is
# regenerated.
# --------------------------------------------------------------------------
from msrest.serialization import Model
class Topology(Model):
"""Topology of the specified resource group.
Variables are only populated by the server, and will be ignored when
sending a request.
:ivar id: GUID representing the operation id.
:vartype id: str
:ivar created_date_time: The datetime when the topology was initially
created for the resource group.
:vartype created_date_time: datetime
:ivar last_modified: The datetime when the topology was last modified.
:vartype last_modified: datetime
:param resources:
:type resources:
list[~azure.mgmt.network.v2018_10_01.models.TopologyResource]
"""
_validation = {
'id': {'readonly': True},
'created_date_time': {'readonly': True},
'last_modified': {'readonly': True},
}
_attribute_map = {
'id': {'key': 'id', 'type': 'str'},
'created_date_time': {'key': 'createdDateTime', 'type': 'iso-8601'},
'last_modified': {'key': 'lastModified', 'type': 'iso-8601'},
'resources': {'key': 'resources', 'type': '[TopologyResource]'},
}
def __init__(self, **kwargs):
super(Topology, self).__init__(**kwargs)
self.id = None
self.created_date_time = None
self.last_modified = None
self.resources = kwargs.get('resources', None)
| [
"lmazuel@microsoft.com"
] | lmazuel@microsoft.com |
f864ade49aab086a7a31f2e135fad9a46cdb13ca | 163bbb4e0920dedd5941e3edfb2d8706ba75627d | /Code/CodeRecords/2425/60631/272083.py | 5ac84ac36934d8e5049118c3115c584027eceaf7 | [] | no_license | AdamZhouSE/pythonHomework | a25c120b03a158d60aaa9fdc5fb203b1bb377a19 | ffc5606817a666aa6241cfab27364326f5c066ff | refs/heads/master | 2022-11-24T08:05:22.122011 | 2020-07-28T16:21:24 | 2020-07-28T16:21:24 | 259,576,640 | 2 | 1 | null | null | null | null | UTF-8 | Python | false | false | 376 | py | t=int(input())
for ti in range(t):
si=input().split(' ')
n=int(si[0])
k=si[1]
s=input().split(' ')
for i in range(n):
if i+1==n:
break
lo=int(s[i])-int(k)
hi=int(s[i+1])-int(k)
if lo*hi <0:
if hi+lo>0:
print(s[i])
else:
print(s[i+1])
#print(n,k,s)
| [
"1069583789@qq.com"
] | 1069583789@qq.com |
798069de8b0f9e0ce67fbbced24721a901e7d47c | 6b2a8dd202fdce77c971c412717e305e1caaac51 | /solutions_5766201229705216_0/Python/bponsler/binary.py | b5e3209f540bab8969cf4715b64d40daee21ea46 | [] | no_license | alexandraback/datacollection | 0bc67a9ace00abbc843f4912562f3a064992e0e9 | 076a7bc7693f3abf07bfdbdac838cb4ef65ccfcf | refs/heads/master | 2021-01-24T18:27:24.417992 | 2017-05-23T09:23:38 | 2017-05-23T09:23:38 | 84,313,442 | 2 | 4 | null | null | null | null | UTF-8 | Python | false | false | 2,355 | py | from sys import stdin
def addToTree(items, tree):
treeItem0 = tree.get(items[0], [])
treeItem1 = tree.get(items[1], [])
treeItem0.append(items[1])
treeItem1.append(items[0])
tree[items[0]] = treeItem0
tree[items[1]] = treeItem1
return tree
def countChildren(tree, node, visited=None):
visited = [] if visited is None else visited
visited.append(node)
children = tree[node]
# Ignore parents
children = filter(lambda c: c not in visited, children)
num = len(children)
for child in children:
c = countChildren(tree, child, visited)
num += c
return num
def numChildrenToRemove(tree, node, visited=None):
visited = [] if visited is None else visited
visited.append(node)
numChildren = 0
children = tree[node]
# Ignore visited nodes
children = filter(lambda c: c not in visited, children)
if len(children) == 0:
return 0 # No children have to be removed, valid leaf
elif len(children) == 1:
# Have to remove the child and its children
nc = 1 + countChildren(tree, children[0], visited)
numChildren += nc
else:
childCounts = [numChildrenToRemove(tree, c, visited) \
for c in children]
numChildren += sum(childCounts)
# Need to remove children until achieve 2 children
numChildren += len(childCounts) - 2
return numChildren
def handleTest(case, numNodes, lines):
tree = {}
for line in lines:
items = map(int, line.split(" "))
tree = addToTree(items, tree)
    # A root candidate must have at least 2 neighbors (children) to start
possibleRoots = filter(lambda e: len(tree[e]) >= 2, tree)
minCount = None
for root in possibleRoots:
count = numChildrenToRemove(tree, root)
minCount = count if minCount is None else min(minCount, count)
if minCount is None:
minCount = len(tree)
print "Case #%d: %s" % (case, minCount)
if __name__ == '__main__':
data = stdin.read().strip()
lines = data.split("\n")
numTests = int(lines[0])
case = 1
index = 1
while index < len(lines):
numNodes = int(lines[index])
testLines = lines[index+1:index + numNodes]
handleTest(case, numNodes, testLines)
case += 1
index += numNodes
| [
"eewestman@gmail.com"
] | eewestman@gmail.com |
b60d9dcf9a73cb0bc79ae108a7050edd0b8d277c | 98c6ea9c884152e8340605a706efefbea6170be5 | /examples/data/Assignment_9/glljos003/question2.py | 21e349f32092b4b61fac4704fcb4b74a41414501 | [] | no_license | MrHamdulay/csc3-capstone | 479d659e1dcd28040e83ebd9e3374d0ccc0c6817 | 6f0fa0fa1555ceb1b0fb33f25e9694e68b6a53d2 | refs/heads/master | 2021-03-12T21:55:57.781339 | 2014-09-22T02:22:22 | 2014-09-22T02:22:22 | 22,372,174 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,533 | py | choice_input = input("Enter the input filename:\n")
choice_output = input("Enter the output filename:\n")
input_file = open(choice_input, "r")
output_file = open(choice_output, "w")
width = eval(input("Enter the line width:\n"))
string = input_file.read()
x = string.splitlines(True)
string = "".join(x)
paragraphs = string.split("\n\n")
for i in range(len(paragraphs)):
paragraphs[i] = paragraphs[i].replace("\n", " ")
formatted_paragraphs = []
for para in paragraphs:
para = para.split(" ")
new_string = []
count = 0
for s in para: #s is each word in the current paragraph
if count + int(len(s)) <= width: #when the length of the new line is under the specified width, the string is just added to the list
new_string.append(s)
new_string.append(" ")
count+= int(len(s)+1)
else:
new_string.append("\n") #when the length of the new line exceeds the specified with, a newline character is added then string is appended
count = 0
new_string.append(s)
new_string.append(" ")
count+= int(len(s)+1)
formatted_paragraphs.append(new_string)
for i in formatted_paragraphs:
if i[-1] == " ":
i[-1] = ""
else:
continue
for para in formatted_paragraphs:
string = "".join(para)
string = string + "\n"
print(string, file=output_file)
input_file.close()
output_file.close()
| [
"jarr2000@gmail.com"
] | jarr2000@gmail.com |
5f19f916ce55cb6e965e8125fbe30a94008013c9 | 88ae8695987ada722184307301e221e1ba3cc2fa | /v8/tools/release/list_deprecated.py | 3549ecd427e785df1a537870920d2a6b5cb18bc2 | [
"BSD-3-Clause",
"SunPro",
"Apache-2.0"
] | permissive | iridium-browser/iridium-browser | 71d9c5ff76e014e6900b825f67389ab0ccd01329 | 5ee297f53dc7f8e70183031cff62f37b0f19d25f | refs/heads/master | 2023-08-03T16:44:16.844552 | 2023-07-20T15:17:00 | 2023-07-23T16:09:30 | 220,016,632 | 341 | 40 | BSD-3-Clause | 2021-08-13T13:54:45 | 2019-11-06T14:32:31 | null | UTF-8 | Python | false | false | 6,376 | py | #!/usr/bin/env python3
# Copyright 2018 the V8 project authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
import argparse
from datetime import datetime
import re
import subprocess
import sys
from pathlib import Path
import logging
from multiprocessing import Pool
RE_GITHASH = re.compile(r"^[0-9a-f]{40}")
RE_AUTHOR_TIME = re.compile(r"^author-time (\d+)$")
RE_FILENAME = re.compile(r"^filename (.+)$")
VERSION_CACHE = dict()
RE_VERSION_MAJOR = re.compile(r".*V8_MAJOR_VERSION ([0-9]+)")
RE_VERSION_MINOR = re.compile(r".*V8_MINOR_VERSION ([0-9]+)")
RE_MACRO_END = re.compile(r"\);")
RE_DEPRECATE_MACRO = re.compile(r"\(.*?,(.*)\);", re.MULTILINE)
class HeaderFile(object):
def __init__(self, path):
self.path = path
self.blame_list = self.get_blame_list()
@classmethod
def get_api_header_files(cls, options):
files = subprocess.check_output(
['git', 'ls-tree', '--name-only', '-r', 'HEAD', options.include_dir],
encoding='UTF-8')
files = map(Path, filter(lambda l: l.endswith('.h'), files.splitlines()))
with Pool(processes=24) as pool:
return pool.map(cls, files)
def extract_version(self, hash):
if hash in VERSION_CACHE:
return VERSION_CACHE[hash]
if hash == '0000000000000000000000000000000000000000':
return 'HEAD'
result = subprocess.check_output(
['git', 'show', f"{hash}:include/v8-version.h"], encoding='UTF-8')
major = RE_VERSION_MAJOR.search(result).group(1)
minor = RE_VERSION_MINOR.search(result).group(1)
version = f"{major}.{minor}"
VERSION_CACHE[hash] = version
return version
def get_blame_list(self):
logging.info(f"blame list for {self.path}")
result = subprocess.check_output(
['git', 'blame', '-t', '--line-porcelain', self.path],
encoding='UTF-8')
line_iter = iter(result.splitlines())
blame_list = list()
current_blame = None
while True:
line = next(line_iter, None)
if line is None:
break
if RE_GITHASH.match(line):
if current_blame is not None:
blame_list.append(current_blame)
hash = line.split(" ")[0]
current_blame = {
'datetime': 0,
'filename': None,
'content': None,
'hash': hash
}
continue
match = RE_AUTHOR_TIME.match(line)
if match:
current_blame['datetime'] = datetime.fromtimestamp(
int(match.groups()[0]))
continue
match = RE_FILENAME.match(line)
if match:
current_blame['filename'] = match.groups()[0]
current_blame['content'] = next(line_iter).strip()
continue
blame_list.append(current_blame)
return blame_list
def filter_and_print(self, macro, options):
before = options.before
index = 0
re_macro = re.compile(macro)
deprecated = list()
while index < len(self.blame_list):
blame = self.blame_list[index]
line = blame['content']
if line.startswith("#") or line.startswith("//"):
index += 1
continue
commit_datetime = blame['datetime']
if commit_datetime >= before:
index += 1
continue
commit_hash = blame['hash']
match = re_macro.search(line)
if match:
pos = match.end()
start = -1
parens = 0
while True:
if pos >= len(line):
# Extend to next line
index = index + 1
blame = self.blame_list[index]
line = line + blame['content']
if line[pos] == '(':
parens = parens + 1
elif line[pos] == ')':
parens = parens - 1
if parens == 0:
          # Exclude the closing ")"
pos = pos - 1
break
elif line[pos] == '"' and start == -1:
start = pos + 1
pos = pos + 1
# Extract content and replace double quotes from merged lines
content = line[start:pos].strip().replace('""', '')
deprecated.append((index + 1, commit_datetime, commit_hash, content))
index = index + 1
for linenumber, commit_datetime, commit_hash, content in deprecated:
self.print_details(linenumber, commit_datetime, commit_hash, content)
def print_details(self, linenumber, commit_datetime, commit_hash, content):
commit_date = commit_datetime.date()
file_position = (f"{self.path}:{linenumber}").ljust(40)
v8_version = f"v{self.extract_version(commit_hash)}".rjust(5)
print(f"{file_position} {v8_version} {commit_date} {commit_hash[:8]}"
f" {content}")
def print_v8_version(self, options):
commit_hash, commit_datetime = subprocess.check_output(
['git', 'log', '-1', '--format=%H%n%ct', self.path],
encoding='UTF-8').splitlines()
commit_datetime = datetime.fromtimestamp(int(commit_datetime))
self.print_details(11, commit_datetime, commit_hash, content="")
def parse_options(args):
parser = argparse.ArgumentParser(
description="Collect deprecation statistics")
parser.add_argument("include_dir", nargs='?', help="Path to includes dir")
parser.add_argument("--before", help="Filter by date")
parser.add_argument("--verbose",
"-v",
help="Verbose logging",
action="store_true")
options = parser.parse_args(args)
if options.verbose:
logging.basicConfig(level=logging.DEBUG)
if options.before:
options.before = datetime.strptime(options.before, '%Y-%m-%d')
else:
options.before = datetime.now()
if options.include_dir is None:
base_path = Path(__file__).parent.parent
options.include_dir = str((base_path / 'include').relative_to(base_path))
return options
def main(args):
options = parse_options(args)
print("# CURRENT V8 VERSION:")
version = HeaderFile(Path(options.include_dir) / 'v8-version.h')
version.print_v8_version(options)
header_files = HeaderFile.get_api_header_files(options)
print("\n")
print("# V8_DEPRECATE_SOON:")
for header in header_files:
header.filter_and_print("V8_DEPRECATE_SOON", options)
print("\n")
print("# V8_DEPRECATED:")
for header in header_files:
header.filter_and_print("V8_DEPRECATED", options)
if __name__ == "__main__":
main(sys.argv[1:])
| [
"jengelh@inai.de"
] | jengelh@inai.de |
35b61c03110c01137c01c1f734774cc7bd7e4811 | 2de1934821e11edaf8c4cbf4993f5138a17b20f2 | /tasks/migrations/0007_remove_project_dedline.py | 8a4e6215bab5d342f332ea44dcdd89d9285d449d | [] | no_license | jonqwerty/taskmanager | e39c0cc5b27619bd21e6d064dda6d779337cf9e0 | 04a8a2672ae50f726bab3f7b9e794e544b9c2bd2 | refs/heads/main | 2023-01-03T09:54:48.832504 | 2020-10-28T19:07:49 | 2020-10-28T19:07:49 | 301,083,308 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 325 | py | # Generated by Django 3.0.2 on 2020-10-18 09:04
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('tasks', '0006_project_dedline'),
]
operations = [
migrations.RemoveField(
model_name='project',
name='dedline',
),
]
| [
"you@example.com"
] | you@example.com |
e3d450aa45d3d9aff94356466a82ee5821f57f30 | 4ac9cf4c921e71ad4a5308b6de4900051fc6e162 | /MAIN/tasks/Others.py | e8a4911ca01a52171db6036d33b84b96eeba58a2 | [] | no_license | heyuantao/ACMTOOLS | 0928cb889222746dc20e677728c8b6816e28b2a0 | cd0c7dee272dc6b14c496cf02bfbbce863acfd59 | refs/heads/master | 2022-12-09T22:43:39.326033 | 2020-05-10T12:46:04 | 2020-05-10T12:46:04 | 172,885,165 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 244 | py | from celery.task import Task
import time
class HelpTask(Task):
name = 'help_task'
def run(self,*args,**kwargs):
print('start course task')
time.sleep(10)
print(kwargs)
#runTask(1000)
print('end') | [
"he_yuan_tao@163.com"
] | he_yuan_tao@163.com |
d20d7ab7c1a3393d6cd4d838f2875087cc0e35db | 4419d7b3479ea5ae7a76d6153fd21f8cec78a406 | /virtual/lib/python3.6/site-packages/werkzeug/contrib/cache.py | 6f676a3e100fa3a56d4d4dd16294b4b635dc2bf1 | [
"MIT"
] | permissive | monicaoyugi/News-Updates | 276f2e5b2e7ec1d1bc21786160f6df32d5f0d8c2 | 2652b28153f36284447952fdef4c63496af90418 | refs/heads/master | 2020-12-26T08:36:37.365993 | 2020-02-07T13:03:51 | 2020-02-07T13:03:51 | 237,449,423 | 0 | 2 | null | null | null | null | UTF-8 | Python | false | false | 32,117 | py | # -*- coding: utf-8 -*-
"""
werkzeug.contrib.cache
~~~~~~~~~~~~~~~~~~~~~~
The main problem with dynamic Web sites is, well, they're dynamic. Each
time a user requests a page, the webserver executes a lot of code, queries
the database, renders templates until the visitor gets the page he sees.
This is a lot more expensive than just loading a file from the file system
and sending it to the visitor.
For most Web applications, this overhead isn't a big deal but once it
becomes, you will be glad to have a cache system in place.
How Caching Works
=================
Caching is pretty simple. Basically you have a cache object lurking around
somewhere that is connected to a remote cache or the file system or
something else. When the request comes in you check if the current page
is already in the cache and if so, you're returning it from the cache.
Otherwise you generate the page and put it into the cache. (Or a fragment
of the page, you don't have to cache the full thing)
Here is a simple example of how to cache a sidebar for 5 minutes::
def get_sidebar(user):
identifier = 'sidebar_for/user%d' % user.id
value = cache.get(identifier)
if value is not None:
return value
value = generate_sidebar_for(user=user)
cache.set(identifier, value, timeout=60 * 5)
return value
Creating a Cache Object
=======================
To create a cache object you just import the cache system of your choice
from the cache module and instantiate it. Then you can start working
with that object:
>>> from werkzeug.contrib.cache import SimpleCache
>>> c = SimpleCache()
>>> c.set("foo", "value")
>>> c.get("foo")
'value'
>>> c.get("missing") is None
True
Please keep in mind that you have to create the cache and put it somewhere
you have access to it (either as a module global you can import or you just
put it into your WSGI application).
:copyright: 2007 Pallets
    :license: BSD-3-Clause
"""
import errno
import os
import platform
import re
import tempfile
import warnings
from hashlib import md5
from time import time
from .._compat import integer_types
from .._compat import iteritems
from .._compat import string_types
from .._compat import text_type
from .._compat import to_native
from ..posixemulation import rename
try:
import cPickle as pickle
except ImportError: # pragma: no cover
import pickle
warnings.warn(
"'werkzeug.contrib.cache' is deprecated as of version 0.15 and will"
" be removed in version 1.0. It has moved to https://github.com"
"/pallets/cachelib.",
DeprecationWarning,
stacklevel=2,
)
def _items(mappingorseq):
"""Wrapper for efficient iteration over mappings represented by dicts
or sequences::
>>> for k, v in _items((i, i*i) for i in xrange(5)):
... assert k*k == v
>>> for k, v in _items(dict((i, i*i) for i in xrange(5))):
... assert k*k == v
"""
if hasattr(mappingorseq, "items"):
return iteritems(mappingorseq)
return mappingorseq
class BaseCache(object):
"""Baseclass for the cache systems. All the cache systems implement this
API or a superset of it.
:param default_timeout: the default timeout (in seconds) that is used if
no timeout is specified on :meth:`set`. A timeout
of 0 indicates that the cache never expires.
"""
def __init__(self, default_timeout=300):
self.default_timeout = default_timeout
def _normalize_timeout(self, timeout):
if timeout is None:
timeout = self.default_timeout
return timeout
def get(self, key):
"""Look up key in the cache and return the value for it.
:param key: the key to be looked up.
:returns: The value if it exists and is readable, else ``None``.
"""
return None
def delete(self, key):
"""Delete `key` from the cache.
:param key: the key to delete.
:returns: Whether the key existed and has been deleted.
:rtype: boolean
"""
return True
def get_many(self, *keys):
"""Returns a list of values for the given keys.
For each key an item in the list is created::
foo, bar = cache.get_many("foo", "bar")
Has the same error handling as :meth:`get`.
:param keys: The function accepts multiple keys as positional
arguments.
"""
return [self.get(k) for k in keys]
def get_dict(self, *keys):
"""Like :meth:`get_many` but return a dict::
d = cache.get_dict("foo", "bar")
foo = d["foo"]
bar = d["bar"]
:param keys: The function accepts multiple keys as positional
arguments.
"""
return dict(zip(keys, self.get_many(*keys)))
def set(self, key, value, timeout=None):
"""Add a new key/value to the cache (overwrites value, if key already
exists in the cache).
:param key: the key to set
:param value: the value for the key
:param timeout: the cache timeout for the key in seconds (if not
specified, it uses the default timeout). A timeout of
                        0 indicates that the cache never expires.
:returns: ``True`` if key has been updated, ``False`` for backend
errors. Pickling errors, however, will raise a subclass of
``pickle.PickleError``.
:rtype: boolean
"""
return True
def add(self, key, value, timeout=None):
"""Works like :meth:`set` but does not overwrite the values of already
existing keys.
:param key: the key to set
:param value: the value for the key
:param timeout: the cache timeout for the key in seconds (if not
specified, it uses the default timeout). A timeout of
                        0 indicates that the cache never expires.
:returns: Same as :meth:`set`, but also ``False`` for already
existing keys.
:rtype: boolean
"""
return True
def set_many(self, mapping, timeout=None):
"""Sets multiple keys and values from a mapping.
:param mapping: a mapping with the keys/values to set.
:param timeout: the cache timeout for the key in seconds (if not
specified, it uses the default timeout). A timeout of
                        0 indicates that the cache never expires.
:returns: Whether all given keys have been set.
:rtype: boolean
"""
rv = True
for key, value in _items(mapping):
if not self.set(key, value, timeout):
rv = False
return rv
def delete_many(self, *keys):
"""Deletes multiple keys at once.
:param keys: The function accepts multiple keys as positional
arguments.
:returns: Whether all given keys have been deleted.
:rtype: boolean
"""
return all(self.delete(key) for key in keys)
def has(self, key):
"""Checks if a key exists in the cache without returning it. This is a
cheap operation that bypasses loading the actual data on the backend.
This method is optional and may not be implemented on all caches.
:param key: the key to check
"""
raise NotImplementedError(
"%s doesn't have an efficient implementation of `has`. That "
"means it is impossible to check whether a key exists without "
"fully loading the key's data. Consider using `self.get` "
"explicitly if you don't care about performance."
)
def clear(self):
"""Clears the cache. Keep in mind that not all caches support
completely clearing the cache.
:returns: Whether the cache has been cleared.
:rtype: boolean
"""
return True
def inc(self, key, delta=1):
"""Increments the value of a key by `delta`. If the key does
not yet exist it is initialized with `delta`.
For supporting caches this is an atomic operation.
:param key: the key to increment.
:param delta: the delta to add.
:returns: The new value or ``None`` for backend errors.
"""
value = (self.get(key) or 0) + delta
return value if self.set(key, value) else None
def dec(self, key, delta=1):
"""Decrements the value of a key by `delta`. If the key does
not yet exist it is initialized with `-delta`.
For supporting caches this is an atomic operation.
:param key: the key to increment.
:param delta: the delta to subtract.
:returns: The new value or `None` for backend errors.
"""
value = (self.get(key) or 0) - delta
return value if self.set(key, value) else None
class NullCache(BaseCache):
"""A cache that doesn't cache. This can be useful for unit testing.
:param default_timeout: a dummy parameter that is ignored but exists
for API compatibility with other caches.
"""
def has(self, key):
return False
class SimpleCache(BaseCache):
"""Simple memory cache for single process environments. This class exists
mainly for the development server and is not 100% thread safe. It tries
to use as many atomic operations as possible and no locks for simplicity
but it could happen under heavy load that keys are added multiple times.
:param threshold: the maximum number of items the cache stores before
it starts deleting some.
:param default_timeout: the default timeout that is used if no timeout is
specified on :meth:`~BaseCache.set`. A timeout of
0 indicates that the cache never expires.
"""
def __init__(self, threshold=500, default_timeout=300):
BaseCache.__init__(self, default_timeout)
self._cache = {}
self.clear = self._cache.clear
self._threshold = threshold
def _prune(self):
if len(self._cache) > self._threshold:
now = time()
toremove = []
for idx, (key, (expires, _)) in enumerate(self._cache.items()):
if (expires != 0 and expires <= now) or idx % 3 == 0:
toremove.append(key)
for key in toremove:
self._cache.pop(key, None)
def _normalize_timeout(self, timeout):
timeout = BaseCache._normalize_timeout(self, timeout)
if timeout > 0:
timeout = time() + timeout
return timeout
def get(self, key):
try:
expires, value = self._cache[key]
if expires == 0 or expires > time():
return pickle.loads(value)
except (KeyError, pickle.PickleError):
return None
def set(self, key, value, timeout=None):
expires = self._normalize_timeout(timeout)
self._prune()
self._cache[key] = (expires, pickle.dumps(value, pickle.HIGHEST_PROTOCOL))
return True
def add(self, key, value, timeout=None):
expires = self._normalize_timeout(timeout)
self._prune()
item = (expires, pickle.dumps(value, pickle.HIGHEST_PROTOCOL))
if key in self._cache:
return False
self._cache.setdefault(key, item)
return True
def delete(self, key):
return self._cache.pop(key, None) is not None
def has(self, key):
try:
expires, value = self._cache[key]
return expires == 0 or expires > time()
except KeyError:
return False
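The expiry convention SimpleCache uses above (each entry is an `(expires, pickled_value)` pair, with `expires == 0` meaning "never expires") can be shown with a self-contained sketch; this is not the class itself, just the same storage convention reduced to two functions:

```python
import pickle
from time import time

# Sketch of SimpleCache's storage convention: each entry is an
# (expires, pickled_value) pair, and expires == 0 means "never expires".
store = {}

def set_entry(key, value, timeout):
    expires = time() + timeout if timeout > 0 else 0
    store[key] = (expires, pickle.dumps(value, pickle.HIGHEST_PROTOCOL))

def get_entry(key):
    try:
        expires, blob = store[key]
        if expires == 0 or expires > time():
            return pickle.loads(blob)
    except (KeyError, pickle.PickleError):
        return None

set_entry('a', [1, 2, 3], timeout=60)  # expires 60 seconds from now
set_entry('b', 'forever', timeout=0)   # never expires
```

Note that an expired entry simply falls off the `if` and returns `None` implicitly, exactly as `SimpleCache.get` does.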
_test_memcached_key = re.compile(r"[^\x00-\x21\xff]{1,250}$").match
class MemcachedCache(BaseCache):
"""A cache that uses memcached as backend.
The first argument can either be an object that resembles the API of a
:class:`memcache.Client` or a tuple/list of server addresses. In the
event that a tuple/list is passed, Werkzeug tries to import the best
available memcache library.
This cache looks into the following packages/modules to find bindings for
memcached:
- ``pylibmc``
- ``google.appengine.api.memcache``
- ``memcache``
- ``libmc``
Implementation notes: This cache backend works around some limitations in
memcached to simplify the interface. For example unicode keys are encoded
to utf-8 on the fly. Methods such as :meth:`~BaseCache.get_dict` return
the keys in the same format as passed. Furthermore all get methods
silently ignore key errors to not cause problems when untrusted user data
is passed to the get methods which is often the case in web applications.
:param servers: a list or tuple of server addresses or alternatively
a :class:`memcache.Client` or a compatible client.
:param default_timeout: the default timeout that is used if no timeout is
specified on :meth:`~BaseCache.set`. A timeout of
0 indicates that the cache never expires.
:param key_prefix: a prefix that is added before all keys. This makes it
possible to use the same memcached server for different
applications. Keep in mind that
:meth:`~BaseCache.clear` will also clear keys with a
different prefix.
"""
def __init__(self, servers=None, default_timeout=300, key_prefix=None):
BaseCache.__init__(self, default_timeout)
if servers is None or isinstance(servers, (list, tuple)):
if servers is None:
servers = ["127.0.0.1:11211"]
self._client = self.import_preferred_memcache_lib(servers)
if self._client is None:
raise RuntimeError("no memcache module found")
else:
# NOTE: servers is actually an already initialized memcache
# client.
self._client = servers
self.key_prefix = to_native(key_prefix)
def _normalize_key(self, key):
key = to_native(key, "utf-8")
if self.key_prefix:
key = self.key_prefix + key
return key
def _normalize_timeout(self, timeout):
timeout = BaseCache._normalize_timeout(self, timeout)
if timeout > 0:
timeout = int(time()) + timeout
return timeout
def get(self, key):
key = self._normalize_key(key)
# memcached does not support keys longer than 250 characters. Over-long
# keys frequently come from untrusted user-submitted data, so rather
# than raising we fail silently on get.
if _test_memcached_key(key):
return self._client.get(key)
def get_dict(self, *keys):
key_mapping = {}
have_encoded_keys = False
for key in keys:
encoded_key = self._normalize_key(key)
if not isinstance(key, str):
have_encoded_keys = True
if _test_memcached_key(key):
key_mapping[encoded_key] = key
_keys = list(key_mapping)
d = rv = self._client.get_multi(_keys)
if have_encoded_keys or self.key_prefix:
rv = {}
for key, value in iteritems(d):
rv[key_mapping[key]] = value
if len(rv) < len(keys):
for key in keys:
if key not in rv:
rv[key] = None
return rv
def add(self, key, value, timeout=None):
key = self._normalize_key(key)
timeout = self._normalize_timeout(timeout)
return self._client.add(key, value, timeout)
def set(self, key, value, timeout=None):
key = self._normalize_key(key)
timeout = self._normalize_timeout(timeout)
return self._client.set(key, value, timeout)
def get_many(self, *keys):
d = self.get_dict(*keys)
return [d[key] for key in keys]
def set_many(self, mapping, timeout=None):
new_mapping = {}
for key, value in _items(mapping):
key = self._normalize_key(key)
new_mapping[key] = value
timeout = self._normalize_timeout(timeout)
failed_keys = self._client.set_multi(new_mapping, timeout)
return not failed_keys
def delete(self, key):
key = self._normalize_key(key)
if _test_memcached_key(key):
return self._client.delete(key)
def delete_many(self, *keys):
new_keys = []
for key in keys:
key = self._normalize_key(key)
if _test_memcached_key(key):
new_keys.append(key)
return self._client.delete_multi(new_keys)
def has(self, key):
key = self._normalize_key(key)
if _test_memcached_key(key):
return self._client.append(key, "")
return False
def clear(self):
return self._client.flush_all()
def inc(self, key, delta=1):
key = self._normalize_key(key)
return self._client.incr(key, delta)
def dec(self, key, delta=1):
key = self._normalize_key(key)
return self._client.decr(key, delta)
def import_preferred_memcache_lib(self, servers):
"""Returns an initialized memcache client. Used by the constructor."""
try:
import pylibmc
except ImportError:
pass
else:
return pylibmc.Client(servers)
try:
from google.appengine.api import memcache
except ImportError:
pass
else:
return memcache.Client()
try:
import memcache
except ImportError:
pass
else:
return memcache.Client(servers)
try:
import libmc
except ImportError:
pass
else:
return libmc.Client(servers)
# backwards compatibility
GAEMemcachedCache = MemcachedCache
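The module-level `_test_memcached_key` regex above encodes memcached's key rules: 1 to 250 characters, excluding the range `\x00`-`\x21` (control characters, space and `!`) and `\xff`. A standalone illustration of the same pattern:

```python
import re

# Same key filter as _test_memcached_key above: keys must be 1-250
# characters drawn from outside \x00-\x21 (control chars and space) and \xff.
_key_ok = re.compile(r"[^\x00-\x21\xff]{1,250}$").match

ok = bool(_key_ok("session:user:42"))   # plain printable key -> accepted
bad_space = bool(_key_ok("has space"))  # contains 0x20 -> rejected
bad_long = bool(_key_ok("x" * 251))     # longer than 250 chars -> rejected
```

Keys that fail this test are silently skipped by `get`, `get_dict`, `delete` and `has` above, which is why those methods can return `None`/`False` for oddly shaped keys instead of raising.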
class RedisCache(BaseCache):
"""Uses the Redis key-value store as a cache backend.
The first argument can be either a string denoting address of the Redis
server or an object resembling an instance of a redis.Redis class.
Note: Python Redis API already takes care of encoding unicode strings on
the fly.
.. versionadded:: 0.7
.. versionadded:: 0.8
`key_prefix` was added.
.. versionchanged:: 0.8
This cache backend now properly serializes objects.
.. versionchanged:: 0.8.3
This cache backend now supports password authentication.
.. versionchanged:: 0.10
``**kwargs`` is now passed to the redis object.
:param host: address of the Redis server or an object which API is
compatible with the official Python Redis client (redis-py).
:param port: port number on which Redis server listens for connections.
:param password: password authentication for the Redis server.
:param db: db (zero-based numeric index) on Redis Server to connect.
:param default_timeout: the default timeout that is used if no timeout is
specified on :meth:`~BaseCache.set`. A timeout of
0 indicates that the cache never expires.
:param key_prefix: A prefix that should be added to all keys.
Any additional keyword arguments will be passed to ``redis.Redis``.
"""
def __init__(
self,
host="localhost",
port=6379,
password=None,
db=0,
default_timeout=300,
key_prefix=None,
**kwargs
):
BaseCache.__init__(self, default_timeout)
if host is None:
raise ValueError("RedisCache host parameter may not be None")
if isinstance(host, string_types):
try:
import redis
except ImportError:
raise RuntimeError("no redis module found")
if kwargs.get("decode_responses", None):
raise ValueError("decode_responses is not supported by RedisCache.")
self._client = redis.Redis(
host=host, port=port, password=password, db=db, **kwargs
)
else:
self._client = host
self.key_prefix = key_prefix or ""
def _normalize_timeout(self, timeout):
timeout = BaseCache._normalize_timeout(self, timeout)
if timeout == 0:
timeout = -1
return timeout
def dump_object(self, value):
"""Dumps an object into a string for redis. By default it serializes
integers as regular string and pickle dumps everything else.
"""
t = type(value)
if t in integer_types:
return str(value).encode("ascii")
return b"!" + pickle.dumps(value)
def load_object(self, value):
"""The reversal of :meth:`dump_object`. This might be called with
None.
"""
if value is None:
return None
if value.startswith(b"!"):
try:
return pickle.loads(value[1:])
except pickle.PickleError:
return None
try:
return int(value)
except ValueError:
# before 0.8 we did not have serialization. Still support that.
return value
def get(self, key):
return self.load_object(self._client.get(self.key_prefix + key))
def get_many(self, *keys):
if self.key_prefix:
keys = [self.key_prefix + key for key in keys]
return [self.load_object(x) for x in self._client.mget(keys)]
def set(self, key, value, timeout=None):
timeout = self._normalize_timeout(timeout)
dump = self.dump_object(value)
if timeout == -1:
result = self._client.set(name=self.key_prefix + key, value=dump)
else:
result = self._client.setex(
name=self.key_prefix + key, value=dump, time=timeout
)
return result
def add(self, key, value, timeout=None):
timeout = self._normalize_timeout(timeout)
dump = self.dump_object(value)
return self._client.setnx(
name=self.key_prefix + key, value=dump
) and self._client.expire(name=self.key_prefix + key, time=timeout)
def set_many(self, mapping, timeout=None):
timeout = self._normalize_timeout(timeout)
# Use transaction=False to batch without calling redis MULTI
# which is not supported by twemproxy
pipe = self._client.pipeline(transaction=False)
for key, value in _items(mapping):
dump = self.dump_object(value)
if timeout == -1:
pipe.set(name=self.key_prefix + key, value=dump)
else:
pipe.setex(name=self.key_prefix + key, value=dump, time=timeout)
return pipe.execute()
def delete(self, key):
return self._client.delete(self.key_prefix + key)
def delete_many(self, *keys):
if not keys:
return
if self.key_prefix:
keys = [self.key_prefix + key for key in keys]
return self._client.delete(*keys)
def has(self, key):
return self._client.exists(self.key_prefix + key)
def clear(self):
status = False
if self.key_prefix:
keys = self._client.keys(self.key_prefix + "*")
if keys:
status = self._client.delete(*keys)
else:
status = self._client.flushdb()
return status
def inc(self, key, delta=1):
return self._client.incr(name=self.key_prefix + key, amount=delta)
def dec(self, key, delta=1):
return self._client.decr(name=self.key_prefix + key, amount=delta)
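The `dump_object`/`load_object` scheme above keeps integers as plain ASCII digits (so Redis `INCR`/`DECR` still work on them) and pickles everything else behind a `b"!"` sentinel. A self-contained sketch of the same round trip, without a Redis server:

```python
import pickle

# Sketch of RedisCache's serialization scheme: ints stay human-readable,
# everything else is pickled behind a b"!" sentinel byte.
def dump_object(value):
    if isinstance(value, int):
        return str(value).encode("ascii")
    return b"!" + pickle.dumps(value)

def load_object(value):
    if value is None:
        return None
    if value.startswith(b"!"):
        try:
            return pickle.loads(value[1:])
        except pickle.PickleError:
            return None
    try:
        # int() accepts bytes holding an integer literal
        return int(value)
    except ValueError:
        # values written before serialization existed are returned raw
        return value
```

Storing ints unpickled is the design choice that lets `inc`/`dec` above delegate straight to the Redis `INCR`/`DECR` commands.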
class FileSystemCache(BaseCache):
"""A cache that stores the items on the file system. This cache depends
on being the only user of the `cache_dir`. Make absolutely sure that
nobody but this cache stores files there or otherwise the cache will
randomly delete files therein.
:param cache_dir: the directory where cache files are stored.
:param threshold: the maximum number of items the cache stores before
it starts deleting some. A threshold value of 0
indicates no threshold.
:param default_timeout: the default timeout that is used if no timeout is
specified on :meth:`~BaseCache.set`. A timeout of
0 indicates that the cache never expires.
:param mode: the file mode wanted for the cache files, default 0600
"""
#: used for temporary files by the FileSystemCache
_fs_transaction_suffix = ".__wz_cache"
#: keep amount of files in a cache element
_fs_count_file = "__wz_cache_count"
def __init__(self, cache_dir, threshold=500, default_timeout=300, mode=0o600):
BaseCache.__init__(self, default_timeout)
self._path = cache_dir
self._threshold = threshold
self._mode = mode
try:
os.makedirs(self._path)
except OSError as ex:
if ex.errno != errno.EEXIST:
raise
self._update_count(value=len(self._list_dir()))
@property
def _file_count(self):
return self.get(self._fs_count_file) or 0
def _update_count(self, delta=None, value=None):
# If we have no threshold, don't count files
if self._threshold == 0:
return
if delta:
new_count = self._file_count + delta
else:
new_count = value or 0
self.set(self._fs_count_file, new_count, mgmt_element=True)
def _normalize_timeout(self, timeout):
timeout = BaseCache._normalize_timeout(self, timeout)
if timeout != 0:
timeout = time() + timeout
return int(timeout)
def _list_dir(self):
"""return a list of (fully qualified) cache filenames
"""
mgmt_files = [
self._get_filename(name).split("/")[-1] for name in (self._fs_count_file,)
]
return [
os.path.join(self._path, fn)
for fn in os.listdir(self._path)
if not fn.endswith(self._fs_transaction_suffix) and fn not in mgmt_files
]
def _prune(self):
if self._threshold == 0 or not self._file_count > self._threshold:
return
entries = self._list_dir()
now = time()
for idx, fname in enumerate(entries):
try:
remove = False
with open(fname, "rb") as f:
expires = pickle.load(f)
remove = (expires != 0 and expires <= now) or idx % 3 == 0
if remove:
os.remove(fname)
except (IOError, OSError):
pass
self._update_count(value=len(self._list_dir()))
def clear(self):
for fname in self._list_dir():
try:
os.remove(fname)
except (IOError, OSError):
self._update_count(value=len(self._list_dir()))
return False
self._update_count(value=0)
return True
def _get_filename(self, key):
if isinstance(key, text_type):
key = key.encode("utf-8") # XXX unicode review
hash = md5(key).hexdigest()
return os.path.join(self._path, hash)
def get(self, key):
filename = self._get_filename(key)
try:
with open(filename, "rb") as f:
pickle_time = pickle.load(f)
if pickle_time == 0 or pickle_time >= time():
return pickle.load(f)
else:
os.remove(filename)
return None
except (IOError, OSError, pickle.PickleError):
return None
def add(self, key, value, timeout=None):
filename = self._get_filename(key)
if not os.path.exists(filename):
return self.set(key, value, timeout)
return False
def set(self, key, value, timeout=None, mgmt_element=False):
# Management elements have no timeout
if mgmt_element:
timeout = 0
# Don't prune on management element update, to avoid loop
else:
self._prune()
timeout = self._normalize_timeout(timeout)
filename = self._get_filename(key)
try:
fd, tmp = tempfile.mkstemp(
suffix=self._fs_transaction_suffix, dir=self._path
)
with os.fdopen(fd, "wb") as f:
pickle.dump(timeout, f, 1)
pickle.dump(value, f, pickle.HIGHEST_PROTOCOL)
rename(tmp, filename)
os.chmod(filename, self._mode)
except (IOError, OSError):
return False
else:
# Management elements should not count towards threshold
if not mgmt_element:
self._update_count(delta=1)
return True
def delete(self, key, mgmt_element=False):
try:
os.remove(self._get_filename(key))
except (IOError, OSError):
return False
else:
# Management elements should not count towards threshold
if not mgmt_element:
self._update_count(delta=-1)
return True
def has(self, key):
filename = self._get_filename(key)
try:
with open(filename, "rb") as f:
pickle_time = pickle.load(f)
if pickle_time == 0 or pickle_time >= time():
return True
else:
os.remove(filename)
return False
except (IOError, OSError, pickle.PickleError):
return False
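`FileSystemCache.set` above relies on an atomic-write pattern: serialize into a temporary file beside the target, then `rename()` over the final name, so a concurrent reader never observes a half-written cache file. The pattern in isolation:

```python
import os
import pickle
import shutil
import tempfile

# Sketch of the atomic-write pattern used by FileSystemCache.set: write a
# (timeout header, payload) pair to a temp file in the target directory,
# then rename it into place.
def atomic_dump(path, expires, value):
    fd, tmp = tempfile.mkstemp(suffix=".__wz_cache", dir=os.path.dirname(path))
    with os.fdopen(fd, "wb") as f:
        pickle.dump(expires, f, 1)                      # timeout header first
        pickle.dump(value, f, pickle.HIGHEST_PROTOCOL)  # payload second
    os.rename(tmp, path)  # atomic on POSIX within one filesystem

workdir = tempfile.mkdtemp()
target = os.path.join(workdir, "entry")
atomic_dump(target, 0, {"hello": "world"})
with open(target, "rb") as f:
    header = pickle.load(f)   # 0 -> never expires
    payload = pickle.load(f)
shutil.rmtree(workdir)
```

Writing the timeout as its own leading pickle is what lets `get`, `has` and `_prune` above read just the expiry without unpickling the payload.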
class UWSGICache(BaseCache):
"""Implements the cache using uWSGI's caching framework.
.. note::
This class cannot be used when running under PyPy, because the uWSGI
API implementation for PyPy is lacking the needed functionality.
:param default_timeout: The default timeout in seconds.
:param cache: The name of the caching instance to connect to, for
example: mycache@localhost:3031, defaults to an empty string, which
means uWSGI will cache in the local instance. If the cache is in the
same instance as the werkzeug app, you only have to provide the name of
the cache.
"""
def __init__(self, default_timeout=300, cache=""):
BaseCache.__init__(self, default_timeout)
if platform.python_implementation() == "PyPy":
raise RuntimeError(
"uWSGI caching does not work under PyPy, see "
"the docs for more details."
)
try:
import uwsgi
self._uwsgi = uwsgi
except ImportError:
raise RuntimeError(
"uWSGI could not be imported, are you running under uWSGI?"
)
self.cache = cache
def get(self, key):
rv = self._uwsgi.cache_get(key, self.cache)
if rv is None:
return
return pickle.loads(rv)
def delete(self, key):
return self._uwsgi.cache_del(key, self.cache)
def set(self, key, value, timeout=None):
return self._uwsgi.cache_update(
key, pickle.dumps(value), self._normalize_timeout(timeout), self.cache
)
def add(self, key, value, timeout=None):
return self._uwsgi.cache_set(
key, pickle.dumps(value), self._normalize_timeout(timeout), self.cache
)
def clear(self):
return self._uwsgi.cache_clear(self.cache)
def has(self, key):
return self._uwsgi.cache_exists(key, self.cache) is not None
# (end of file; author: monicaoyugi@gmail.com)
# ============================================================================
# File: tests/tools/tool_arguments_tests.py
# Repo: avast/retdec-regression-tests-framework (license: MIT)
# ============================================================================
"""
Tests for the :mod:`regression_tests.tools.tool_arguments` module.
"""
import os
import unittest
from regression_tests.filesystem.directory import Directory
from regression_tests.filesystem.file import File
from regression_tests.filesystem.file import StandaloneFile
from regression_tests.test_settings import InvalidTestSettingsError
from regression_tests.tools.decompiler_arguments import DecompilerArguments
from regression_tests.tools.tool_arguments import ToolArguments
from regression_tests.tools.tool_test_settings import ToolTestSettings
from tests.filesystem.directory_tests import ROOT_DIR
class ToolArgumentsTests(unittest.TestCase):
"""Tests for `ToolArguments`."""
def test_args_is_set_to_none_when_empty_args_are_passed(self):
# When a user writes the following settings
#
# args=''
#
# we want to consider it as
#
# args=None
#
# Otherwise, the runner would pass an extra empty argument when running
# the tool.
args = ToolArguments(args='')
self.assertIsNone(args.args)
def test_as_list_returns_empty_list_when_nothing_is_set(self):
args = ToolArguments()
self.assertEqual(args.as_list, [])
def test_as_list_returns_correct_list_when_just_input_file_is_set(self):
args = ToolArguments(
input_files=(StandaloneFile('file.exe'),)
)
self.assertEqual(args.as_list, ['file.exe'])
def test_as_list_returns_correct_list_when_just_args_is_set(self):
args = ToolArguments(
args=' --arg1 --arg2 '
)
self.assertEqual(args.as_list, ['--arg1', '--arg2'])
def test_as_str_returns_space_separated_string_of_arguments(self):
args = ToolArguments(
input_files=(StandaloneFile('file.exe'),),
args='--arg1 --arg2'
)
self.assertEqual(args.as_str, 'file.exe --arg1 --arg2')
def test_args_as_list_returns_empty_list_when_no_args(self):
args = ToolArguments(
args=None
)
self.assertEqual(args.args_as_list, [])
def test_args_as_list_separates_args_by_whitespace(self):
args = ToolArguments(
args=' \t --arg1 \t --arg2 \t '
)
self.assertEqual(args.args_as_list, ['--arg1', '--arg2'])
def test_from_test_settings_input_files_are_present_when_single_input_is_given(self):
test_settings = ToolTestSettings(tool='tool', input='test.exe')
args = ToolArguments.from_test_settings(test_settings)
self.assertEqual(len(args.input_files), 1)
self.assertEqual(args.input_files[0].name, 'test.exe')
def test_from_test_settings_input_files_are_present_when_two_inputs_are_given(self):
test_settings = ToolTestSettings(tool='tool', input=('test1.exe', 'test2.exe'))
args = ToolArguments.from_test_settings(test_settings)
self.assertEqual(len(args.input_files), 2)
self.assertEqual(args.input_files[0].name, 'test1.exe')
self.assertEqual(args.input_files[1].name, 'test2.exe')
def test_from_test_settings_input_files_is_empty_tuple_when_input_is_not_given(self):
test_settings = ToolTestSettings(tool='tool')
args = ToolArguments.from_test_settings(test_settings)
self.assertEqual(args.input_files, ())
def test_from_test_settings_args_is_present_when_set(self):
test_settings = ToolTestSettings(
tool='tool',
input='test.exe',
args='--arg1 --arg2'
)
args = ToolArguments.from_test_settings(test_settings)
self.assertEqual(args.args, test_settings.args)
def scenario_invalid_settings_error_is_raised(self, test_settings, ref_exc_substr):
with self.assertRaises(InvalidTestSettingsError) as cm:
ToolArguments.from_test_settings(test_settings)
self.assertIn(ref_exc_substr, str(cm.exception))
def test_from_test_settings_error_is_raised_when_input_is_list(self):
test_settings = ToolTestSettings(
tool='tool',
input=['test1.exe', 'test2.exe']
)
self.scenario_invalid_settings_error_is_raised(test_settings, 'input')
def test_from_test_settings_error_is_raised_when_args_is_list(self):
test_settings = ToolTestSettings(
tool='tool',
input='test.exe',
args=['--arg1', '--arg2']
)
self.scenario_invalid_settings_error_is_raised(test_settings, 'args')
def test_without_paths_and_output_files_returns_same_args_when_there_are_no_files(self):
args = ToolArguments()
self.assertEqual(args, args.without_paths_and_output_files)
def test_without_paths_and_output_files_returns_correct_args_when_there_are_files(self):
args = ToolArguments(
input_files=(File('file.exe', Directory(os.path.join(ROOT_DIR, 'inputs'))),),
)
stripped_args = args.without_paths_and_output_files
self.assertEqual(len(stripped_args.input_files), 1)
self.assertEqual(stripped_args.input_files[0].path, 'file.exe')
def test_with_rebased_files_returns_same_args_when_there_are_no_files(self):
args = ToolArguments()
rebased_args = args.with_rebased_files(
Directory(os.path.join(ROOT_DIR, 'inputs')),
Directory(os.path.join(ROOT_DIR, 'outputs'))
)
self.assertEqual(args, rebased_args)
def test_with_rebased_files_returns_correct_args_when_there_are_files(self):
args = ToolArguments(
input_files=(StandaloneFile('file.exe'),)
)
rebased_args = args.with_rebased_files(
Directory(os.path.join(ROOT_DIR, 'inputs')),
Directory(os.path.join(ROOT_DIR, 'outputs'))
)
self.assertEqual(len(rebased_args.input_files), 1)
self.assertEqual(
rebased_args.input_files[0].path,
os.path.join(ROOT_DIR, 'inputs', 'file.exe')
)
def test_clone_returns_other_args_equal_to_original_args(self):
args = ToolArguments(
input_files=(StandaloneFile('file.exe'),),
args='--arg'
)
cloned_args = args.clone()
self.assertIsNot(args, cloned_args)
self.assertEqual(args, cloned_args)
def test_clone_preserves_instance_type(self):
args = DecompilerArguments()
cloned_args = args.clone()
self.assertIsInstance(cloned_args, DecompilerArguments)
def test_clone_but_returns_other_args_equal_to_original_args_except_for_changed_attributes(self):
args = ToolArguments(
input_files=(StandaloneFile('file.exe'),),
args='--arg'
)
cloned_args = args.clone_but(args='--other-arg')
self.assertIsNot(args, cloned_args)
self.assertEqual(cloned_args.input_files, args.input_files)
self.assertEqual(cloned_args.args, '--other-arg')
def test_clone_but_preserves_instance_type(self):
args = DecompilerArguments()
cloned_args = args.clone_but()
self.assertIsInstance(cloned_args, DecompilerArguments)
def test_two_args_having_same_data_are_equal(self):
args1 = ToolArguments(
input_files=(StandaloneFile('file.exe'),),
args='--arg'
)
args2 = ToolArguments(
input_files=(StandaloneFile('file.exe'),),
args='--arg'
)
self.assertEqual(args1, args2)
def test_two_args_having_different_input_file_are_not_equal(self):
args1 = ToolArguments(
input_files=(StandaloneFile('file1.exe'),)
)
args2 = ToolArguments(
input_files=(StandaloneFile('file2.exe'),)
)
self.assertNotEqual(args1, args2)
def test_two_args_having_different_args_are_not_equal(self):
args1 = ToolArguments(
input_files=(StandaloneFile('file.exe'),),
args='--arg'
)
args2 = ToolArguments(
input_files=(StandaloneFile('file.exe'),),
args='--other-arg'
)
self.assertNotEqual(args1, args2)
def test_repr_returns_executable_repr_that_creates_original_args(self):
args = ToolArguments(
input_files=(StandaloneFile('file.exe'),),
args='--arg'
)
self.assertEqual(args, eval(repr(args)))
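The final test asserts the `eval(repr(x))` round-trip convention: `ToolArguments.__repr__` is expected to emit executable source that reconstructs an equal object. The convention in miniature, with a hypothetical `Point` class standing in for `ToolArguments`:

```python
# Sketch of the eval(repr(x)) round-trip convention the last test asserts:
# __repr__ returns executable source that recreates an equal object.
class Point(object):
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __eq__(self, other):
        return (self.x, self.y) == (other.x, other.y)

    def __repr__(self):
        return 'Point(x=%r, y=%r)' % (self.x, self.y)

p = Point(1, 2)
round_tripped = eval(repr(p))
```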
# (end of file; author: petr.zemek@avast.com)
# ============================================================================
# File: settings/dev_files.py
# Repo: brenns10/fluffy (license: MIT)
# ============================================================================
# fluffy-specific configuration options
# storage backend (how are the files stored after being uploaded?)
STORAGE_BACKEND = {
'name': 'file',
'object_path': 'tmp/object/{name}',
'html_path': 'tmp/html/{name}',
}
# branding
BRANDING = 'fluffy'
CUSTOM_FOOTER_HTML = None
# URL patterns
HOME_URL = 'http://localhost:5000/'
FILE_URL = 'http://localhost:5001/object/{name}'
HTML_URL = 'http://localhost:5001/html/{name}'
STATIC_ASSETS_URL = 'http://localhost:5000/{name}'
# abuse contact email address
ABUSE_CONTACT = 'abuse@example.com'
# max upload size per file (in bytes)
MAX_UPLOAD_SIZE = 10 * 1048576 # 10 MB
# max size Flask will accept; maybe a little larger?
MAX_CONTENT_LENGTH = MAX_UPLOAD_SIZE * 2
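Hypothetical usage sketch (the real call sites live elsewhere in the fluffy application, not in this settings file): the `{name}` placeholders in the URL patterns above are presumably expanded with `str.format` once an upload has been assigned a storage name.

```python
# Hypothetical illustration of how fluffy presumably expands these templates.
FILE_URL = 'http://localhost:5001/object/{name}'
MAX_UPLOAD_SIZE = 10 * 1048576  # 10 MB

url = FILE_URL.format(name='abc123.txt')
```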
# (end of file; author: ckuehl@ocf.berkeley.edu)
# ============================================================================
# File: build/lib/multi_dataset_crystalography/dataset/sample_loader.py
# Repo: ConorFWild/pandda (no license)
# ============================================================================
from __future__ import print_function
from collections import OrderedDict
import traceback
import sys, copy, gc
import time
import logging
logger = logging.getLogger(__name__)
import numpy
from scipy import spatial
from joblib import Parallel, delayed
import iotbx.pdb, iotbx.mtz, iotbx.ccp4_map
import cctbx.maptbx, cctbx.uctbx
from libtbx import easy_mp
from libtbx.utils import Sorry, Failure
from libtbx.math_utils import ifloor, iceil
import cctbx
import cctbx_uctbx_ext
import scitbx.matrix
from scitbx.array_family import flex
from giant.xray.maps.scale import scale_map_to_reference
from multi_dataset_crystalography.grid import Grid, GridPartition
from multi_dataset_crystalography.grid.masks import AtomicMask, GridMask
from giant.structure.select import calphas, protein, sel_altloc, non_water
import joblib as jl
from bamboo.common import Meta, Info
from bamboo.common.holders import HolderList
from multi_dataset_crystalography.functions import wrapper_run
from multi_dataset_crystalography.dataset.reference import PanddaReferenceDataset
import dask
# from dask.distributed import worker_client
class MapLoaderDask(object):
def __init__(self, verbose, resolution_factor, density_scaling):
self.verbose = verbose
self.resolution_factor = resolution_factor
self.density_scaling = density_scaling
def __call__(self, dataset, grid, reference_map, map_resolution):
assert reference_map.is_sparse(), 'Reference map is not in sparse form'
# ============================================================================>
# Create map handler in the native coordinate frame
# ============================================================================>
# Extract the map data
# TODO: make sure new version works
# fft_map = dataset.data.fft_maps['truncated']
dataset.data.fft_maps['truncated'] = dataset.data.miller_arrays['truncated'].fft_map(
resolution_factor=float(self.resolution_factor),
d_min=float(map_resolution),
symmetry_flags=cctbx.maptbx.use_space_group_symmetry)
fft_map = dataset.data.fft_maps['truncated']
gc.collect()
# Scale the map
if self.density_scaling == 'none':
pass
elif self.density_scaling == 'sigma':
fft_map.apply_sigma_scaling()
elif self.density_scaling == 'volume':
fft_map.apply_volume_scaling()
# Create map object
# native_map_true = ElectronDensityMap.from_fft_map(fft_map).as_map()
native_map_true = ElectronDensityMap.from_fft_map(fft_map)
# ============================================================================>
# Morph the map to the reference frame
# ============================================================================>
# Extract the map sites from the grid partition
point_mappings_grid = grid.partition.nn_groups[grid.global_mask().outer_mask_indices()]
assert sum(point_mappings_grid == -1) == 0
sites_cart_map = grid.grid2cart(grid.global_mask().outer_mask(),
origin_shift=True,
)
# Translate the grid partition mappings to the dataset alignment mappings
mappings_grid2dataset = get_interpolated_mapping_between_coordinates(query_list=grid.partition.sites_cart,
ref_list=dataset.model.alignment.reference_sites,
tol=0.01,
)
point_mappings_dataset = numpy.array([mappings_grid2dataset[i] for i in point_mappings_grid])
assert sum(point_mappings_dataset == -1) == 0
sites_cart_map_d = dataset.model.alignment.ref2nat(coordinates=sites_cart_map,
mappings=point_mappings_dataset,
)
morphed_map_data = native_map_true.get_cart_values(sites_cart_map_d)
# Scale map to reference
scale_mask = grid.index_on_other(query=grid.global_mask().inner_mask_indices(),
other=grid.global_mask().outer_mask_indices())
scaled_map_data = scale_map_to_reference(ref_vals=reference_map.data,
vals=morphed_map_data,
mask_idxs=flex.size_t(scale_mask))
# Create map holder
morphed_map = reference_map.new_from_template(map_data=scaled_map_data, sparse=reference_map.is_sparse())
morphed_map.meta.num = dataset.num
morphed_map.meta.tag = dataset.tag
morphed_map.meta.type = 'observed map'
morphed_map.meta.resolution = reference_map.meta.resolution
morphed_map.meta.map_uncertainty = None
morphed_map.meta.obs_map_mean = morphed_map_data.min_max_mean().mean
morphed_map.meta.obs_map_rms = morphed_map_data.standard_deviation_of_the_sample()
morphed_map.meta.scl_map_mean = scaled_map_data.min_max_mean().mean
morphed_map.meta.scl_map_rms = scaled_map_data.standard_deviation_of_the_sample()
# Print a running row of dots
print('>', end='')
sys.stdout.flush()
return morphed_map.make_sparse()
def repr(self):
repr = OrderedDict()
return repr
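`MapLoaderDask.__call__` above leans on `get_interpolated_mapping_between_coordinates` to translate grid-partition indices into dataset-alignment indices. A hedged, pure-Python sketch of what that helper appears to do (map each query coordinate to the index of the nearest reference coordinate, or -1 when nothing lies within `tol`); the real helper is presumably vectorised, e.g. via the `scipy.spatial` import at the top of this module:

```python
# Hypothetical stand-in for get_interpolated_mapping_between_coordinates:
# nearest-neighbour index mapping with a distance tolerance.
def nn_mapping(query_list, ref_list, tol=0.01):
    mapping = []
    for q in query_list:
        best, best_d2 = -1, tol * tol
        for i, r in enumerate(ref_list):
            d2 = sum((a - b) ** 2 for a, b in zip(q, r))
            if d2 < best_d2:
                best, best_d2 = i, d2
        mapping.append(best)  # -1 survives when no reference site is close
    return mapping

ref_sites = [(0.0, 0.0, 0.0), (5.0, 0.0, 0.0)]
mapped = nn_mapping([(0.001, 0.0, 0.0), (5.0, 0.002, 0.0)], ref_sites)
unmapped = nn_mapping([(2.5, 0.0, 0.0)], ref_sites)
```

The `-1` sentinel matches the assertions in `__call__` above (`sum(point_mappings == -1) == 0`), which verify that every grid point found a mapping partner.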
class DefaultSampleLoader:
def __init__(self, resolution_factor, density_scaling, cpus, verbose, grid_getter, reference=None,
multiprocessing="dask", grid=None, ref_map=None):
self.resolution_factor = resolution_factor
self.density_scaling = density_scaling
self.cpus = cpus
self.verbose = verbose
self.grid_getter = grid_getter
self.grid = grid
self.reference = reference
self.ref_map = ref_map
self.truncated_reference = None
self.truncated_datasets = None
self.ref_map = ref_map
self.multiprocessing = multiprocessing
def __call__(self, cut_resolution, datasets):
# ============================================================================>
# Load maps for characterisation datasets
# ============================================================================>
gc.collect()
pandda_load_and_morph_maps = PanddaLoadAndMorphMaps(self.resolution_factor, self.density_scaling,
self.cpus,
self.verbose,
multiprocessing=self.multiprocessing
)
sample = pandda_load_and_morph_maps(datasets=self.truncated_datasets,
ref_map=self.ref_map,
map_resolution=cut_resolution,
grid=self.grid)
return sample
def truncate_datasets(self, datasets):
pandda_diffraction_data_truncater = PanddaDiffractionDataTruncater()
truncated_reference, truncated_datasets = pandda_diffraction_data_truncater(datasets=datasets,
reference=self.reference)
self.truncated_reference = truncated_reference
self.truncated_datasets = truncated_datasets
def get_reference(self, cut_resolution):
# ============================================================================>
# Load the reference map so that we can re-scale the individual maps to this
# ============================================================================>
pandda_reference_map_loader = PanddaReferenceMapLoader(self.resolution_factor, self.density_scaling)
ref_map = pandda_reference_map_loader(self.reference,
cut_resolution,
self.grid)
self.ref_map = ref_map
def get_grid(self, reference):
self.grid = self.grid_getter(reference)
return self.grid
def get_sample(self, cut_resolution, dataset):
arg = MapLoader(dataset=dataset, grid=self.grid, reference_map=self.ref_map, verbose=self.verbose,
map_resolution=cut_resolution, resolution_factor=self.resolution_factor,
density_scaling=self.density_scaling)
sample = wrapper_run(arg)
return sample
def instantiate(self, grid, ref_map):
return DefaultSampleLoader(self.resolution_factor,
self.density_scaling,
self.cpus,
self.verbose,
self.grid_getter,
reference=self.reference,
multiprocessing="dask",
grid=grid,
ref_map=ref_map)
def __repr__(self):
summary = {"resolution_factor": self.resolution_factor,
"density_scaling": self.density_scaling,
}
return str(summary)
class PanddaDiffractionDataTruncater:
def __init__(self):
pass
def __call__(self, datasets, reference):
"""Truncate data at the same indices across all the datasets"""
# ==============================>
# Find how many reflections are present in the reference dataset
# ==============================>
ref_cols = reference.meta.column_labels
ref_size = reference.data.miller_arrays[ref_cols].set().size()
# ==============================>
# Truncate miller indices to the common set (not including the reference dataset)
# ==============================>
datasets = [d for dtag, d in datasets.items()]
common_set = datasets[0].data.miller_arrays['scaled'].set()
for dataset in datasets[1:]:
common_set = common_set.common_set(dataset.data.miller_arrays['scaled'], assert_is_similar_symmetry=False)
# ==============================>
# Truncate diffraction data for all of the datasets (including the reference dataset)
# ==============================>
reference.data.miller_arrays['truncated'] = reference.data.miller_arrays[
ref_cols].common_set(common_set, assert_is_similar_symmetry=False)
for dataset in datasets:
dataset.data.miller_arrays['truncated'] = dataset.data.miller_arrays['scaled'].common_set(common_set,
assert_is_similar_symmetry=False)
# TODO: Make sure that the reference and datasets modified in this whole class are copies rather than references
truncated_reference = reference
truncated_datasets = {d.tag: d for d in datasets}
return truncated_reference, truncated_datasets
def repr(self):
summary = OrderedDict()
return summary
class ElectronDensityMap(object):
def __init__(self, map_data, unit_cell, map_indices=None, map_size=None, map_origin=(0.0, 0.0, 0.0), sparse=False,
meta=None, parent=None, children=None):
assert isinstance(map_data, flex.double)
assert isinstance(unit_cell, cctbx.uctbx.unit_cell) or isinstance(unit_cell, cctbx_uctbx_ext.unit_cell)
if sparse:
assert map_data.nd() == 1, 'Map data must be 1-dimensional when sparse=True'
assert [map_indices, map_size].count(None) == 0, 'Must provide map_indices and map_size when sparse=True'
assert len(map_data) == len(
map_indices), 'map_data and map_indices must be the same length when sparse=True ({} != {})'.format(
len(map_data), len(map_indices))
assert max(map_indices) < numpy.prod(map_size), 'indices are not compatible with map_size ({} > {})'.format(
max(map_indices), numpy.prod(map_size))
if not isinstance(map_indices, flex.size_t):
map_indices = flex.size_t(map_indices)
else:
if map_size is None:
assert map_data.nd() == 3, 'map_data must be 3-dimensional if map_size is not given'
map_size = map_data.all()
assert len(map_size) == 3, 'map_size must be three dimensional'
assert map_indices is None, 'Do not provide map_indices for non-sparse matrices'
assert numpy.prod(map_size) == map_data.size()
# Reshape the map data if necessary
if map_data.nd() == 1:
map_data = map_data.deep_copy()
map_data.reshape(flex.grid(map_size))
assert map_data.all() == map_size, 'map_data is not the same shape as map_size ({} != {})'.format(
map_data.all(), map_size)
self.data = map_data
self._map_size = map_size
self._map_indices = map_indices
self._map_origin = map_origin
self.unit_cell = unit_cell
self.meta = meta if meta else Meta()
self.parent = parent
self.children = children if children else []
assert len(self._map_size) == 3, 'map_size must be tuple of length 3'
assert sparse == self.is_sparse()
@classmethod
def from_fft_map(cls, fft_map):
return cls(map_data=fft_map.real_map(), unit_cell=fft_map.unit_cell())
def new_from_template(self, map_data, sparse=False, copy_meta=False, same_parent=False):
"""Create a new ElectronDensityMap using this map as a template"""
# map_data is in sparse form
if sparse:
# Make sparse copy if template is not sparse
if not self.is_sparse():
self = self.copy().make_sparse()
# Check input map data is compatible
assert map_data.nd() == 1, 'map_data must be 1-dimensional'
assert map_data.size() == self._map_indices.size()
# Extract parameters for sparseness
map_size = self._map_size
map_indices = self._map_indices
else:
assert map_data.size() == numpy.prod(self._map_size)
map_size = self._map_size
map_indices = None
if copy_meta:
meta = copy.deepcopy(self.meta)
else:
meta = None
if same_parent:
parent = self.parent
else:
parent = None
return ElectronDensityMap(map_data=map_data,
unit_cell=self.unit_cell,
map_indices=map_indices,
map_size=map_size,
map_origin=self._map_origin,
meta=meta,
parent=parent,
sparse=sparse)
def copy(self):
return self.new_from_template(map_data=self.data.deep_copy(),
sparse=self.is_sparse(),
copy_meta=True,
same_parent=True)
def normalised_copy(self):
"""Perform rms scaling on map data"""
# Create output copy and make sparse (always calculate the mean and rms from the sparse values)
result = self.copy().make_sparse()
map_data = result.get_map_data(sparse=True)
# Apply normalisation
result.data = (result.data - numpy.mean(map_data)) * (1.0 / numpy.std(map_data))
# Return the modified map
if self.is_sparse():
return result.make_sparse()
else:
return result.make_dense()
def _check_compatibility(self, other):
assert self.is_sparse() is other.is_sparse()
def __add__(self, other):
if isinstance(other, ElectronDensityMap):
self._check_compatibility(other=other)
return self.__add__(other.data)
else:
return self.new_from_template(map_data=self.data + other, sparse=self.is_sparse())
def __sub__(self, other):
if isinstance(other, ElectronDensityMap):
self._check_compatibility(other=other)
return self.__sub__(other.data)
else:
return self.new_from_template(map_data=self.data - other, sparse=self.is_sparse())
def __mul__(self, other):
if isinstance(other, ElectronDensityMap):
self._check_compatibility(other=other)
return self.__mul__(other.data)
else:
return self.new_from_template(map_data=self.data * other, sparse=self.is_sparse())
def __truediv__(self, other):
if isinstance(other, ElectronDensityMap):
self._check_compatibility(other=other)
return self.__truediv__(other.data)
else:
return self.new_from_template(map_data=self.data * (1.0 / other), sparse=self.is_sparse())
# Python 2 name kept as an alias for backwards compatibility
__div__ = __truediv__
def is_sparse(self):
return (self._map_indices is not None)
def embed(self, map_data):
"""Embed map data relative to the real map origin, rather than (0,0,0)"""
if self._map_origin == (0.0, 0.0, 0.0): return map_data
return cctbx.maptbx.rotate_translate_map(unit_cell=self.unit_cell,
map_data=map_data,
rotation_matrix=scitbx.matrix.rec([1, 0, 0, 0, 1, 0, 0, 0, 1],
(3, 3)).elems,
translation_vector=(
-1.0 * scitbx.matrix.rec(self._map_origin, (3, 1))).elems)
def as_map(self):
map_data = self.get_map_data(sparse=False)
return basic_map(
cctbx.maptbx.basic_map_unit_cell_flag(),
self.embed(map_data),
map_data.focus(),
self.unit_cell.orthogonalization_matrix(),
cctbx.maptbx.out_of_bounds_clamp(0).as_handle(),
self.unit_cell)
def get_cart_values(self, cart_points):
assert not self.is_sparse(), 'map must not be in sparse format for sampling'
# Shift input points to the grid frame -- TODO: implement in a function so that rotations can be automatically integrated
cart_points = (cart_points - self._map_origin)
frac_values = self.unit_cell.fractionalize(cart_points)
# Get the map data with the correct origin
map_data = self.get_map_data(sparse=False)
map_vals = [map_data.eight_point_interpolation(fv) for fv in frac_values]
return flex.double(map_vals)
def to_file(self, filename, space_group):
map_data = self.get_map_data(sparse=False)
iotbx.ccp4_map.write_ccp4_map(
file_name=filename,
unit_cell=self.unit_cell,
space_group=space_group,
map_data=self.embed(map_data),
labels=flex.std_string(['Output map from giant/pandda']))
def get_map_data(self, sparse):
"""Get the map data as sparse/dense without altering state of master object"""
if sparse is not self.is_sparse():
result = self.copy()
if sparse:
result.make_sparse()
else:
result.make_dense()
else:
result = self
return result.data
def make_sparse(self):
"""Convert the map data into sparse form"""
if self.is_sparse(): return self
data_flat = self.data.as_1d()
data_mask = (data_flat != 0.0)
sparse_idxs = data_mask.iselection()
sparse_data = data_flat.select(data_mask)
self.data = sparse_data
self._map_indices = sparse_idxs
return self
def make_dense(self):
"""Convert the map data into dense form"""
if not self.is_sparse(): return self
data_bulk = numpy.zeros(numpy.prod(self._map_size))
data_bulk.put(indices=self._map_indices, values=self.data)
data_bulk = flex.double(data_bulk)
data_bulk.reshape(flex.grid(self._map_size))
self.data = data_bulk
self._map_indices = None
return self
class PanddaReferenceMapLoader:
def __init__(self, resolution_factor, density_scaling):
self.resolution_factor = resolution_factor
self.density_scaling = density_scaling
def __call__(self, reference, map_resolution, grid):
"""Load the reference map, and calculate some map statistics"""
# ==============================>
# Take the scaled diffraction data for the reference dataset and create fft
# ==============================>
# ref_dataset = self.datasets.reference()
ref_dataset = reference
fft_map = ref_dataset.data.miller_arrays['truncated'].fft_map(
resolution_factor=float(self.resolution_factor),
d_min=float(map_resolution),
symmetry_flags=cctbx.maptbx.use_space_group_symmetry)
# ==============================>
# Scale the map
# ==============================>
if self.density_scaling == 'none':
pass
elif self.density_scaling == 'sigma':
fft_map.apply_sigma_scaling()
elif self.density_scaling == 'volume':
fft_map.apply_volume_scaling()
# ==============================>
# Transform to the reference frame
# ==============================>
# Extract the points for the map (in the grid frame)
masked_cart = grid.grid2cart(grid.global_mask().outer_mask(), origin_shift=True)
# Create map handler in the native frame and extract the map values
ref_map_true = ElectronDensityMap.from_fft_map(fft_map)
masked_vals = ref_map_true.get_cart_values(masked_cart)
# ==============================>
# Create a new electron density map object for the "grid map"
# ==============================>
ref_map = ElectronDensityMap(map_data=masked_vals, unit_cell=grid.unit_cell(),
map_indices=grid.global_mask().outer_mask_indices(),
map_size=grid.grid_size(),
map_origin=grid.cart_origin(),
sparse=True)
# Store the map as a child of the dataset
ref_dataset.child = ref_map
# ==============================>
# Add some meta for debugging, etc
# ==============================>
ref_map.meta.type = 'reference-map'
ref_map.meta.resolution = map_resolution
ref_map.meta.map_mean = ref_map.get_map_data(sparse=True).min_max_mean().mean
ref_map.meta.map_rms = ref_map.get_map_data(sparse=True).standard_deviation_of_the_sample()
return ref_map
def repr(self):
summary = OrderedDict()
summary["resolution_factor"] = self.resolution_factor
summary["density_scaling"] = self.density_scaling
return summary
class MapHolderList(HolderList):
_holder_class = ElectronDensityMap
def _get_num(self, item):
return item.meta.num
def _get_tag(self, item):
return item.meta.tag
class PanddaLoadAndMorphMaps:
def __init__(self, resolution_factor, density_scaling, cpus, verbose, multiprocessing):
self.resolution_factor = resolution_factor
self.density_scaling = density_scaling
self.cpus = cpus
self.verbose = verbose
self.multiprocessing = multiprocessing
def __call__(self, datasets, ref_map, map_resolution, grid):
"""Create map from miller arrays. Transform map into the reference frame by sampling at the given points."""
assert ref_map.is_sparse(), 'Reference map is not in sparse form'
# ==============================>
# Create holder for the output map objects
# ==============================>
sample = {}
# ==============================>
# Return empty list if no datasets
# ==============================>
if not datasets: return sample
# ==============================>
# Load maps in parallel
# ==============================>
print('Loading maps (using {!s} cores)'.format(self.cpus))
arg_list = [
MapLoader(dataset=d, grid=grid, reference_map=ref_map, verbose=self.verbose,
map_resolution=map_resolution, resolution_factor=self.resolution_factor,
density_scaling=self.density_scaling)
for dtag, d in datasets.items()]
# Print a sort of progress bar
print('1' + ''.join(['{:<5}'.format(i) for i in range(0, len(arg_list) + 5, 5)])[2:])
print(' ' * len(arg_list) + '|\r', end='')
sys.stdout.flush()
gc.collect()
if self.multiprocessing == "dask":
with worker_client(timeout=120, separate_thread=False) as client:
dataset_maps_futures = client.map(wrapper_run, arg_list)
dataset_maps = client.gather(dataset_maps_futures)
# results = []
# for arg in arg_list:
# y = dask.delayed(wrapper_run)(arg)
# results.append(y)
# dataset_maps = dask.compute(results)
# print(dask.distributed.get_worker())
#
# client = dask.distributed.get_client()
# map_futures = client.map(wrapper_run, arg_list)
# dask.distributed.secede()
# dataset_maps = client.gather(map_futures)
# dask.distributed.rejoin()
else:
dataset_maps = jl.Parallel(n_jobs=self.cpus,
verbose=5)(jl.delayed(wrapper_run)(arg)
for arg
in arg_list)
# ==============================>
# Collect the results into a dict keyed by dataset tag
# ==============================>
print('|')
sample = {m.meta.tag: m
for m
in dataset_maps}
# ==============================>
# Clear fft map data to save memory
# ==============================>
# for dtag, m in sample.items():
# # TODO: is this the best way of handling this now?
# map_dataset = datasets[m.meta.tag]
# map_dataset.data.fft_maps['truncated'] = None
return sample
class MapLoader(object):
def __init__(self, dataset, grid, reference_map, verbose, map_resolution, resolution_factor, density_scaling):
"""
The main object for loading the maps for PanDDA.
Constructed so that the class can be initialised and then called within a multiprocessing function.
Usage:
new = MapLoader(...)
output = new.run()
"""
self.data = (dataset, grid, reference_map, verbose, map_resolution, resolution_factor, density_scaling)
# @classmethod
# def process(cls, dataset, grid, reference_map, args, verbose):
# """Process the dataset immediately and return output"""
# return cls(dataset=dataset, grid=grid, reference_map=reference_map, args=args, verbose=verbose)
def run(self):
dataset, grid, reference_map, verbose, map_resolution, resolution_factor, density_scaling = self.data
# log_file = dataset.file_manager.get_file('dataset_log')
assert reference_map.is_sparse(), 'Reference map is not in sparse form'
# ============================================================================>
# Create map handler in the native coordinate frame
# ============================================================================>
# Extract the map data
# TODO: make sure new version works
# fft_map = dataset.data.fft_maps['truncated']
dataset.data.fft_maps['truncated'] = dataset.data.miller_arrays['truncated'].fft_map(
resolution_factor=float(resolution_factor),
d_min=float(map_resolution),
symmetry_flags=cctbx.maptbx.use_space_group_symmetry)
fft_map = dataset.data.fft_maps['truncated']
gc.collect()
# Scale the map
if density_scaling == 'none':
pass
elif density_scaling == 'sigma':
fft_map.apply_sigma_scaling()
elif density_scaling == 'volume':
fft_map.apply_volume_scaling()
# Create map object
# native_map_true = ElectronDensityMap.from_fft_map(fft_map).as_map()
native_map_true = ElectronDensityMap.from_fft_map(fft_map)
# ============================================================================>
# Morph the map to the reference frame
# ============================================================================>
# Extract the map sites from the grid partition
point_mappings_grid = grid.partition.nn_groups[grid.global_mask().outer_mask_indices()]
assert sum(point_mappings_grid == -1) == 0
sites_cart_map = grid.grid2cart(grid.global_mask().outer_mask(), origin_shift=True)
# Translate the grid partition mappings to the dataset alignment mappings
mappings_grid2dataset = get_interpolated_mapping_between_coordinates(query_list=grid.partition.sites_cart,
ref_list=dataset.model.alignment.reference_sites,
tol=0.01)
point_mappings_dataset = numpy.array([mappings_grid2dataset[i] for i in point_mappings_grid])
assert sum(point_mappings_dataset == -1) == 0
sites_cart_map_d = dataset.model.alignment.ref2nat(coordinates=sites_cart_map, mappings=point_mappings_dataset)
morphed_map_data = native_map_true.get_cart_values(sites_cart_map_d)
# Scale map to reference
scale_mask = grid.index_on_other(query=grid.global_mask().inner_mask_indices(),
other=grid.global_mask().outer_mask_indices())
scaled_map_data = scale_map_to_reference(ref_vals=reference_map.data,
vals=morphed_map_data,
mask_idxs=flex.size_t(scale_mask))
# Create map holder
morphed_map = reference_map.new_from_template(map_data=scaled_map_data, sparse=reference_map.is_sparse())
morphed_map.meta.num = dataset.num
morphed_map.meta.tag = dataset.tag
morphed_map.meta.type = 'observed map'
morphed_map.meta.resolution = reference_map.meta.resolution
morphed_map.meta.map_uncertainty = None
morphed_map.meta.obs_map_mean = morphed_map_data.min_max_mean().mean
morphed_map.meta.obs_map_rms = morphed_map_data.standard_deviation_of_the_sample()
morphed_map.meta.scl_map_mean = scaled_map_data.min_max_mean().mean
morphed_map.meta.scl_map_rms = scaled_map_data.standard_deviation_of_the_sample()
# Print a running row of dots
print('>', end='')
sys.stdout.flush()
return morphed_map.make_sparse()
class PanDDAGridSetup:
def __init__(self, cpus, mask_pdb, align_mask_to_reference, alignment_method,
outer_mask, inner_mask, inner_mask_symmetry, grid_spacing, padding, verbose, mask_selection_string):
self.cpus = cpus
self.mask_pdb = mask_pdb
self.align_mask_to_reference = align_mask_to_reference
self.alignment_method = alignment_method
self.outer_mask = outer_mask
self.inner_mask = inner_mask
self.inner_mask_symmetry = inner_mask_symmetry
self.grid_spacing = grid_spacing
self.padding = padding
self.verbose = verbose
self.mask_selection_string = mask_selection_string
self.grid = None
def __call__(self, reference=None):
"""Generate the grid objects for the analysis"""
# ============================================================================>
#####
# Create Sampling Grid (for generated maps)
#####
# Create reference grid based on the reference structure
# ============================================================================>
if self.grid is None:
# Which dataset to be used to mask the grid
if self.mask_pdb:
mask_dataset = PanddaReferenceDataset.from_file(model_filename=self.mask_pdb).label(
tag='masking')
if self.align_mask_to_reference:
try:
mask_dataset.model.alignment = None
mask_dataset.model.align_to(other_hierarchy=reference.model.hierarchy,
method=self.alignment_method,
require_hierarchies_identical=False)
except Exception:
msg = traceback.format_exc()
msg += '\n------------------>>>'
msg += '\n\nFailed to align masking pdb ({}) to the reference structure.'.format(
self.mask_pdb)
msg += '\nIf the masking structure does not need alignment, rerun with params.masks.align_mask_to_reference=False'
raise Failure(msg)
else:
mask_dataset.set_origin_shift((0.0, 0.0, 0.0))
else:
mask_dataset = reference.copy()
# Create the grid using the masking dataset (for determining size and extent of grid)
print("Creating reference grid")
self.create_reference_grid(dataset=mask_dataset, grid_spacing=self.grid_spacing, reference=reference)
print("Masking reference grid")
self.mask_reference_grid(dataset=mask_dataset, selection=self.mask_selection_string)
# Store the transformation to shift the reference dataset to the "grid frame", where the grid origin is (0,0,0)
print("Shifting reference origin")
reference.set_origin_shift([-1.0 * a for a in self.grid.cart_origin()])
# Partition the grid with the reference dataset (which grid points use which coordinate transformations)
# print("Partitioning reference grid")
self.partition_reference_grid(dataset=reference)
return self.grid
def create_reference_grid(self, dataset, grid_spacing, reference):
"""Create a grid over the given dataset"""
# ============================================================================>
# Extract sites and calculate grid extent
# ============================================================================>
sites_cart = dataset.model.alignment.nat2ref(dataset.model.hierarchy.atoms().extract_xyz())
# Calculate the extent of the grid
buffer = self.outer_mask + self.padding
grid_min = flex.double([s - buffer for s in sites_cart.min()])
grid_max = flex.double([s + buffer for s in sites_cart.max()])
# ============================================================================>
# Create main grid object
# ============================================================================>
self.grid = Grid(grid_spacing=grid_spacing,
origin=tuple(grid_min),
approx_max=tuple(grid_max),
verbose=self.verbose)
# ==============================>
# Calculate alignment between reference dataset and grid
# ==============================>
ref_dataset = reference
ref_dataset.set_origin_shift(-1.0 * grid_min)
# Write out masks if selected
# if self.args.output.developer.write_grid_frame_masks:
# f_name = splice_ext(self.file_manager.get_file('reference_structure'), 'grid', position=-1)
# if not os.path.exists(f_name):
# tmp_h = ref_dataset.model.hierarchy.deep_copy()
# tmp_h.atoms().set_xyz(ref_dataset.ref2grid(tmp_h.atoms().extract_xyz()))
# tmp_h.write_pdb_file(f_name)
# f_name = splice_ext(self.file_manager.get_file('reference_symmetry'), 'grid', position=-1)
# if not os.path.exists(f_name):
# tmp_h = ref_dataset.model.crystal_contacts(distance_cutoff=self.args.params.masks.outer_mask+5, combine_copies=True)
# tmp_h.atoms().set_xyz(ref_dataset.ref2grid(tmp_h.atoms().extract_xyz()))
# tmp_h.write_pdb_file(f_name)
return self.grid
def mask_reference_grid(self, dataset, selection=None):
"""Create masks for the reference grid based on distances from atoms in the reference structure"""
# ============================================================================>
# Get main and neighbouring symmetry copies of the masking structure
# ============================================================================>
ref_h = dataset.model.hierarchy
sym_h = dataset.model.crystal_contacts(distance_cutoff=self.outer_mask + 5.0, combine_copies=True)
# ============================================================================>
# Apply mask (protein=default if selection is not given)
# ============================================================================>
if selection:
ref_h = ref_h.select(ref_h.atom_selection_cache().selection(selection), copy_atoms=True)
else:
ref_h = protein(ref_h)
# ============================================================================>
# Always generate symmetry mask using all non-water atoms - TODO: also allow custom definitions?
# ============================================================================>
sym_h = non_water(sym_h)
# ============================================================================>
# Check that these contain atoms
# ============================================================================>
if len(ref_h.atoms()) == 0: raise Sorry('Zero atoms have been selected to mask the grid')
if len(sym_h.atoms()) == 0: raise Sorry('Zero atoms have been selected to mask the grid')
# ============================================================================>
# Extract coordinates
# ============================================================================>
ref_sites_cart = dataset.model.alignment.nat2ref(ref_h.atoms().extract_xyz())
sym_sites_cart = dataset.model.alignment.nat2ref(sym_h.atoms().extract_xyz())
# ============================================================================>
# Global mask used for removing points in the bulk solvent regions
# ============================================================================>
if self.grid.global_mask() is None:
global_mask = AtomicMask(parent=self.grid,
sites_cart=ref_sites_cart,
max_dist=self.outer_mask,
min_dist=self.inner_mask)
self.grid.set_global_mask(global_mask)
# ============================================================================>
# Global mask used for removing points close to symmetry copies of the protein
# ============================================================================>
if self.grid.symmetry_mask() is None:
symmetry_mask = GridMask(parent=self.grid,
sites_cart=sym_sites_cart,
max_dist=self.outer_mask,
min_dist=self.inner_mask_symmetry)
self.grid.set_symmetry_mask(symmetry_mask)
# ============================================================================>
# Write masked maps
# ============================================================================>
# # Write protein masked map
# indices = self.grid.global_mask().total_mask_indices()
# f_name = self.file_manager.get_file('reference_dataset').replace('.mtz','.totalmask.ccp4')
# if self.args.output.developer.write_grid_frame_masks:
# self.grid.write_indices_as_map(indices=indices, f_name=splice_ext(f_name, 'grid', position=-1), origin_shift=False)
# if 1 or self.args.output.developer.write_reference_frame_common_masks_and_maps:
# self.grid.write_indices_as_map(indices=indices, f_name=splice_ext(f_name, 'ref', position=-1), origin_shift=True)
#
# # Write symmetry masked map
# indices = self.grid.symmetry_mask().total_mask_indices()
# f_name = self.file_manager.get_file('reference_dataset').replace('.mtz','.symmask.ccp4')
# if self.args.output.developer.write_grid_frame_masks:
# self.grid.write_indices_as_map(indices=indices, f_name=splice_ext(f_name, 'grid', position=-1), origin_shift=False)
# if 1 or self.args.output.developer.write_reference_frame_common_masks_and_maps:
# self.grid.write_indices_as_map(indices=indices, f_name=splice_ext(f_name, 'ref', position=-1), origin_shift=True)
return self.grid
def partition_reference_grid(self, dataset, altlocs=['', 'A']):
# ============================================================================>
# Select the sites for generating the voronoi alignments (calphas)
# ============================================================================>
partition_h = calphas(sel_altloc(dataset.model.hierarchy, altlocs=altlocs))
site_cart_ca = partition_h.atoms().extract_xyz()
# ============================================================================>
# Create voronoi cells based on these atoms
# ============================================================================>
t1 = time.time()
# self.grid.create_grid_partition(sites_cart=site_cart_ca)
# self.grid.partition.partition(mask = self.grid.global_mask(),
# cpus = self.cpus)
self.grid.partition = partition_grid(self.grid,
reference_dataset=dataset,
executor=ExecutorJoblib(cpus=self.cpus),
)
t2 = time.time()
# ============================================================================>
# Print cell-by-cell summary or the partitioning
# ============================================================================>
# self.log.bar(True, False)
# self.log('Partition Summary:')
# self.log.bar(False, True)
# voronoi_counts = dict(zip(*numpy.unique(self.grid.partition.nn_groups, return_counts=True)))
# # Cell-by-Cell summary of the voronoi cells
# self.log.bar()
# self.log('CHN - RES - RESID - ATOM - ALT : VORONOI VOLUME')
# self.log.bar()
# for i_atom, atom in enumerate(partition_h.atoms_with_labels()):
# self.log('{:<3} - {:<3} - {:<7} - {:<4} - {:<3} : {:>10} points'.format(atom.chain_id, atom.resname, atom.resid(), atom.name, atom.altloc, voronoi_counts.get(i_atom,0)))
# self.log.bar()
# self.log('Unpartitioned space: {} points'.format(voronoi_counts.get(-1,0)))
# self.log.bar()
# # Chain-by-chain summary of the voronoi cells
# for c in partition_h.chains():
# self.log('Chain {:1} - {:5} regions - ({:5} residues)'.format(c.id, len(c.atoms()), len(c.residue_groups())))
# self.log.bar()
# self.log('Total: {} regions ({} chains, {} residues)'.format(len(partition_h.atoms()), len(list(partition_h.chains())), len(list(partition_h.residue_groups()))))
return self.grid
def __repr__(self):
summary = {"cpus": self.cpus,
"grid_spacing": self.grid_spacing,
}
return str(summary)
def partition_grid(grid, reference_dataset, executor, mask=None, altlocs=['', 'A']):
logger.info("Partitioning Calphas")
partition_h = calphas(sel_altloc(reference_dataset.model.hierarchy, altlocs=altlocs))
logger.info("Getting site carts")
site_cart_ca = partition_h.atoms().extract_xyz()
logger.info("making partition")
grid_partition = GridPartition(grid,
site_cart_ca,
)
# assert isinstance(cpus, int) and (cpus > 0)
# Sites that we are partitioning
logger.info("querying sites")
if mask:
query_sites = flex.vec3_double(mask.outer_mask())
else:
query_sites = flex.vec3_double(grid.grid_points())
# Find the nearest grid_site for each query_site (returns index of the grid site)
logger.info("Starting multiprocessing")
if executor.cpus == 1:
output = [find_sites((grid_partition.sites_grid, query_sites))]
else:
# Chunk the points into groups
chunk_size = iceil(1.0 * len(query_sites) / executor.cpus)
chunked_points = [query_sites[i:i + chunk_size]
for i
in range(0, len(query_sites), chunk_size)]
assert sum(map(len, chunked_points)) == len(query_sites)
# Ceiling-division chunking can yield fewer chunks than cpus
assert len(chunked_points) <= executor.cpus
# Map to cpus
# arg_list = [(self.sites_grid, chunk) for chunk in chunked_points]
# output = easy_mp.pool_map(fixed_func=find_sites, args=arg_list, processes=cpus)
funcs = []
for i, chunk in enumerate(chunked_points):
f = FindSites(sites_grid=grid_partition.sites_grid,
chunk=chunk,
)
funcs.append(f
)
output = executor(funcs)
assert len(output) == len(funcs), '{!s} != {!s}'.format(len(output), len(funcs))
logger.info("finding sites")
# output = executor(find_sites,
# [(grid_partition.sites_grid, [query_site])
# for query_site
# in query_sites
# ],
# )
# funcs = [lambda: find_sites((grid_partition.sites_grid,
# [query_site],
# )
# )
# for query_site
# in query_sites
# ]
# output = executor(funcs)
# assert len(output) == cpus, '{!s} != {!s}'.format(len(output), cpus)
# Extract the indices of the mapped points
logger.info("nn_groups")
nn_groups = []
for o in output:
nn_groups.extend(o)
nn_groups = numpy.array(nn_groups)
logger.info("asserting")
logger.info(output)
assert len(query_sites) == len(nn_groups)
logger.info("assertion passed")
# Reformat into full grid size
if mask:
grid_partition.nn_groups = -1 * numpy.ones(grid.grid_size_1d(), dtype=int)
grid_partition.nn_groups.put(mask.outer_mask_indices(), nn_groups)
else:
grid_partition.nn_groups = nn_groups
logger.info("returning from nn groups")
return grid_partition
# NOTE: the following query_* functions take `self` and use GridPartition
# attributes (nn_groups, grid, sites_cart); they belong on the GridPartition
# class, which is defined elsewhere in the codebase.
def query_by_grid_indices(self, idxs):
"""Return the atom label for a grid site index"""
assert self.nn_groups is not None, 'Grid not yet partitioned'
return numpy.array([self.nn_groups[i] for i in idxs])
def query_by_grid_points(self, gps):
"""Return the atom label for a grid point"""
assert self.nn_groups is not None, 'Grid not yet partitioned'
indxr = self.grid.indexer()
return numpy.array([self.nn_groups[indxr(g)] for g in gps])
def query_by_cart_points(self, sites_cart):
"""Dynamically calculate the nearest atom site to the input points"""
tree = spatial.KDTree(data=self.sites_cart)
nn_dists, nn_groups = tree.query(sites_cart)
return numpy.array(nn_groups)
def find_sites(sites_tuple):
ref_sites, query_sites = sites_tuple
tree = spatial.KDTree(data=ref_sites)
nn_dists, nn_groups = tree.query(query_sites)
return nn_groups

def get_interpolated_mapping_between_coordinates(query_list, ref_list, tol=0.01):
    """
    Take each site in query_list and find the matching site in ref_list (within tolerance).
    Missing sites will be interpolated to the closest neighbouring site.
    Return a list of indices mapping each site in one list to the closest site in the other.
    """
    ref_list = flex.vec3_double(ref_list)
    tmp_idxs_q_to_r = [closest_point_within_tolerance(query=q, ref_list=ref_list, tol=tol) for q in query_list]
    assert tmp_idxs_q_to_r.count(-1) != len(tmp_idxs_q_to_r), 'no matching sites found between mappings'
    out_idxs_q_to_r = copy.copy(tmp_idxs_q_to_r)
    l = len(tmp_idxs_q_to_r)
    # Populate the missing values with the nearest value
    for i in range(l):
        d_i = 0
        while out_idxs_q_to_r[i] == -1:
            d_i += 1
            p_i = i + d_i
            n_i = i - d_i
            if (p_i < l) and (tmp_idxs_q_to_r[p_i] != -1):
                out_idxs_q_to_r[i] = out_idxs_q_to_r[p_i]
            elif (n_i >= 0) and (tmp_idxs_q_to_r[n_i] != -1):
                out_idxs_q_to_r[i] = out_idxs_q_to_r[n_i]
    return out_idxs_q_to_r

def closest_point_within_tolerance(query, ref_list, tol):
    """Return the index of the closest site in ref_list to query, or -1 if none lies within tol"""
    dist_sq = list((ref_list - query).dot())
    dmin_sq = min(dist_sq)
    if dmin_sq > tol ** 2:
        return -1
    return dist_sq.index(dmin_sq)
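The gap-filling loop in `get_interpolated_mapping_between_coordinates` can be isolated as a pure-Python function (no cctbx required). The name `fill_missing_from_neighbours` is an illustrative assumption; the logic mirrors the loop above: each `-1` entry takes the value of the nearest resolved entry, searching outwards one position at a time, with the forward neighbour preferred at equal distance:

```python
def fill_missing_from_neighbours(idxs):
    # Each -1 entry is replaced by the value of the nearest non-(-1) entry.
    # Note: an all-(-1) input would loop forever, which is why the caller
    # asserts that at least one site was matched.
    out = list(idxs)
    l = len(idxs)
    for i in range(l):
        d = 0
        while out[i] == -1:
            d += 1
            p, n = i + d, i - d
            if p < l and idxs[p] != -1:
                out[i] = idxs[p]
            elif n >= 0 and idxs[n] != -1:
                out[i] = idxs[n]
    return out

# -1 gaps are filled from the closest mapped neighbour on either side.
filled = fill_missing_from_neighbours([4, -1, -1, 7, -1])
# filled == [4, 4, 7, 7, 7]
```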

class ExecutorEasyMP:
    def __init__(self,
                 cpus=21,
                 ):
        self.cpus = cpus

    def __call__(self, funcs):
        results = easy_mp.pool_map(func=wrapper_run,
                                   args=funcs,
                                   processes=self.cpus,
                                   chunksize=1,
                                   )
        return results

class ExecutorJoblib:
    def __init__(self,
                 cpus=21,
                 ):
        self.cpus = cpus

    def __call__(self, func_list):
        results = Parallel(n_jobs=self.cpus,
                           verbose=8,
                           )(delayed(func)() for func in func_list)
        return results

class FindSites:
    def __init__(self,
                 sites_grid,
                 chunk,
                 ):
        self.sites_grid = sites_grid
        self.chunk = chunk

    def __call__(self):
        return find_sites((self.sites_grid,
                           self.chunk,
                           )
                          )

    def repr(self):
        repr = OrderedDict()
        repr["len_chunk"] = len(self.chunk)
        return repr
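To show how these pieces compose — split the query sites into chunks, wrap each chunk in a no-argument callable, run the callables through an executor, then flatten the per-chunk results — here is a hedged sketch with a serial stand-in executor. `ExecutorSerial` and `chunk` are illustrative helpers, not part of the original module:

```python
class ExecutorSerial:
    # Minimal stand-in for ExecutorEasyMP / ExecutorJoblib: same interface
    # (called with a list of no-argument callables), but runs serially.
    def __call__(self, funcs):
        return [f() for f in funcs]


def chunk(seq, n):
    # Split seq into n roughly equal pieces (assumed analogue of how
    # query_sites are chunked before being wrapped in FindSites objects).
    k = (len(seq) + n - 1) // n
    return [seq[i:i + k] for i in range(0, len(seq), k)]


executor = ExecutorSerial()
# Default-argument capture (c=c) is needed so each lambda keeps its own chunk.
funcs = [lambda c=c: [x * x for x in c] for c in chunk([1, 2, 3, 4, 5], 2)]
output = executor(funcs)
flat = []
for o in output:
    flat.extend(o)
# flat == [1, 4, 9, 16, 25]
```

The same flattening step appears above where the per-chunk `nn_groups` are concatenated before being mapped back onto the full grid.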
#
# PySNMP MIB module BW-NetworkServerFault (http://snmplabs.com/pysmi)
# ASN.1 source file:///Users/davwang4/Dev/mibs.snmplabs.com/asn1/BW-NetworkServerFault
# Produced by pysmi-0.3.4 at Wed May 1 11:42:18 2019
# On host DAVWANG4-M-1475 platform Darwin version 18.5.0 by user davwang4
# Using Python version 3.7.3 (default, Mar 27 2019, 09:23:15)
#
OctetString, Integer, ObjectIdentifier = mibBuilder.importSymbols("ASN1", "OctetString", "Integer", "ObjectIdentifier")
NamedValues, = mibBuilder.importSymbols("ASN1-ENUMERATION", "NamedValues")
SingleValueConstraint, ValueSizeConstraint, ConstraintsUnion, ValueRangeConstraint, ConstraintsIntersection = mibBuilder.importSymbols("ASN1-REFINEMENT", "SingleValueConstraint", "ValueSizeConstraint", "ConstraintsUnion", "ValueRangeConstraint", "ConstraintsIntersection")
common, alarmName, faultFields, severity, timeStamp, alarmState, subcomponent, problemText, component, recommendedActionsText, systemName, identifier = mibBuilder.importSymbols("BroadworksFault", "common", "alarmName", "faultFields", "severity", "timeStamp", "alarmState", "subcomponent", "problemText", "component", "recommendedActionsText", "systemName", "identifier")
ModuleCompliance, NotificationGroup = mibBuilder.importSymbols("SNMPv2-CONF", "ModuleCompliance", "NotificationGroup")
Counter64, iso, IpAddress, Bits, Counter32, TimeTicks, Gauge32, MibIdentifier, ModuleIdentity, MibScalar, MibTable, MibTableRow, MibTableColumn, ObjectIdentity, Unsigned32, Integer32, NotificationType = mibBuilder.importSymbols("SNMPv2-SMI", "Counter64", "iso", "IpAddress", "Bits", "Counter32", "TimeTicks", "Gauge32", "MibIdentifier", "ModuleIdentity", "MibScalar", "MibTable", "MibTableRow", "MibTableColumn", "ObjectIdentity", "Unsigned32", "Integer32", "NotificationType")
TextualConvention, DisplayString = mibBuilder.importSymbols("SNMPv2-TC", "TextualConvention", "DisplayString")
systemFaults = ModuleIdentity((1, 3, 6, 1, 4, 1, 6431, 1, 1, 1))
systemFaults.setRevisions(('2000-09-19 14:31',))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    if mibBuilder.loadTexts: systemFaults.setRevisionsDescriptions(('',))
if mibBuilder.loadTexts: systemFaults.setLastUpdated('200201220000Z')
if mibBuilder.loadTexts: systemFaults.setOrganization('Broadsoft, Inc')
if mibBuilder.loadTexts: systemFaults.setContactInfo('Broadsoft, Inc. 220 Perry Parkway Gaithersburg, MD 20877 301-977-9440')
if mibBuilder.loadTexts: systemFaults.setDescription('The defines the fault ')
bwPMNSExecutionServerLaunched = NotificationType((1, 3, 6, 1, 4, 1, 6431, 1, 1, 1, 1001)).setObjects(("BroadworksFault", "identifier"), ("BroadworksFault", "timeStamp"), ("BroadworksFault", "alarmName"), ("BroadworksFault", "systemName"), ("BroadworksFault", "severity"), ("BroadworksFault", "component"), ("BroadworksFault", "subcomponent"), ("BroadworksFault", "problemText"), ("BroadworksFault", "recommendedActionsText"))
if mibBuilder.loadTexts: bwPMNSExecutionServerLaunched.setStatus('current')
if mibBuilder.loadTexts: bwPMNSExecutionServerLaunched.setDescription('For the actual description, refer the BroadWorks FaultManagementGuide as it may contain variable data.')
bwPMNSExecutionServerShutDown = NotificationType((1, 3, 6, 1, 4, 1, 6431, 1, 1, 1, 1002)).setObjects(("BroadworksFault", "identifier"), ("BroadworksFault", "timeStamp"), ("BroadworksFault", "alarmName"), ("BroadworksFault", "systemName"), ("BroadworksFault", "severity"), ("BroadworksFault", "component"), ("BroadworksFault", "subcomponent"), ("BroadworksFault", "problemText"), ("BroadworksFault", "recommendedActionsText"))
if mibBuilder.loadTexts: bwPMNSExecutionServerShutDown.setStatus('current')
if mibBuilder.loadTexts: bwPMNSExecutionServerShutDown.setDescription('For the actual description, refer the BroadWorks FaultManagementGuide as it may contain variable data.')
bwPMNSExecutionServerRestarted = NotificationType((1, 3, 6, 1, 4, 1, 6431, 1, 1, 1, 1003)).setObjects(("BroadworksFault", "identifier"), ("BroadworksFault", "timeStamp"), ("BroadworksFault", "alarmName"), ("BroadworksFault", "systemName"), ("BroadworksFault", "severity"), ("BroadworksFault", "component"), ("BroadworksFault", "subcomponent"), ("BroadworksFault", "problemText"), ("BroadworksFault", "recommendedActionsText"))
if mibBuilder.loadTexts: bwPMNSExecutionServerRestarted.setStatus('current')
if mibBuilder.loadTexts: bwPMNSExecutionServerRestarted.setDescription('For the actual description, refer the BroadWorks FaultManagementGuide as it may contain variable data.')
bwPMNSExecutionServerDeath = NotificationType((1, 3, 6, 1, 4, 1, 6431, 1, 1, 1, 1004)).setObjects(("BroadworksFault", "identifier"), ("BroadworksFault", "timeStamp"), ("BroadworksFault", "alarmName"), ("BroadworksFault", "systemName"), ("BroadworksFault", "severity"), ("BroadworksFault", "component"), ("BroadworksFault", "subcomponent"), ("BroadworksFault", "problemText"), ("BroadworksFault", "recommendedActionsText"))
if mibBuilder.loadTexts: bwPMNSExecutionServerDeath.setStatus('current')
if mibBuilder.loadTexts: bwPMNSExecutionServerDeath.setDescription('For the actual description, refer the BroadWorks FaultManagementGuide as it may contain variable data.')
bwPMNSProvisioningServerLaunched = NotificationType((1, 3, 6, 1, 4, 1, 6431, 1, 1, 1, 1005)).setObjects(("BroadworksFault", "identifier"), ("BroadworksFault", "timeStamp"), ("BroadworksFault", "alarmName"), ("BroadworksFault", "systemName"), ("BroadworksFault", "severity"), ("BroadworksFault", "component"), ("BroadworksFault", "subcomponent"), ("BroadworksFault", "problemText"), ("BroadworksFault", "recommendedActionsText"))
if mibBuilder.loadTexts: bwPMNSProvisioningServerLaunched.setStatus('current')
if mibBuilder.loadTexts: bwPMNSProvisioningServerLaunched.setDescription('For the actual description, refer the BroadWorks FaultManagementGuide as it may contain variable data.')
bwPMNSProvisioningServerShutDown = NotificationType((1, 3, 6, 1, 4, 1, 6431, 1, 1, 1, 1006)).setObjects(("BroadworksFault", "identifier"), ("BroadworksFault", "timeStamp"), ("BroadworksFault", "alarmName"), ("BroadworksFault", "systemName"), ("BroadworksFault", "severity"), ("BroadworksFault", "component"), ("BroadworksFault", "subcomponent"), ("BroadworksFault", "problemText"), ("BroadworksFault", "recommendedActionsText"))
if mibBuilder.loadTexts: bwPMNSProvisioningServerShutDown.setStatus('current')
if mibBuilder.loadTexts: bwPMNSProvisioningServerShutDown.setDescription('For the actual description, refer the BroadWorks FaultManagementGuide as it may contain variable data.')
bwPMNSProvisioningServerRestarted = NotificationType((1, 3, 6, 1, 4, 1, 6431, 1, 1, 1, 1007)).setObjects(("BroadworksFault", "identifier"), ("BroadworksFault", "timeStamp"), ("BroadworksFault", "alarmName"), ("BroadworksFault", "systemName"), ("BroadworksFault", "severity"), ("BroadworksFault", "component"), ("BroadworksFault", "subcomponent"), ("BroadworksFault", "problemText"), ("BroadworksFault", "recommendedActionsText"))
if mibBuilder.loadTexts: bwPMNSProvisioningServerRestarted.setStatus('current')
if mibBuilder.loadTexts: bwPMNSProvisioningServerRestarted.setDescription('For the actual description, refer the BroadWorks FaultManagementGuide as it may contain variable data.')
bwPMNSProvisioningServerDeath = NotificationType((1, 3, 6, 1, 4, 1, 6431, 1, 1, 1, 1008)).setObjects(("BroadworksFault", "identifier"), ("BroadworksFault", "timeStamp"), ("BroadworksFault", "alarmName"), ("BroadworksFault", "systemName"), ("BroadworksFault", "severity"), ("BroadworksFault", "component"), ("BroadworksFault", "subcomponent"), ("BroadworksFault", "problemText"), ("BroadworksFault", "recommendedActionsText"))
if mibBuilder.loadTexts: bwPMNSProvisioningServerDeath.setStatus('current')
if mibBuilder.loadTexts: bwPMNSProvisioningServerDeath.setDescription('For the actual description, refer the BroadWorks FaultManagementGuide as it may contain variable data.')
bwNoRowInNNACLFailure = NotificationType((1, 3, 6, 1, 4, 1, 6431, 1, 1, 1, 1009)).setObjects(("BroadworksFault", "identifier"), ("BroadworksFault", "timeStamp"), ("BroadworksFault", "alarmName"), ("BroadworksFault", "systemName"), ("BroadworksFault", "severity"), ("BroadworksFault", "component"), ("BroadworksFault", "subcomponent"), ("BroadworksFault", "problemText"), ("BroadworksFault", "recommendedActionsText"))
if mibBuilder.loadTexts: bwNoRowInNNACLFailure.setStatus('current')
if mibBuilder.loadTexts: bwNoRowInNNACLFailure.setDescription('For the actual description, refer the BroadWorks FaultManagementGuide as it may contain variable data.')
bwNSMemLeakInSessionFactory = NotificationType((1, 3, 6, 1, 4, 1, 6431, 1, 1, 1, 1010)).setObjects(("BroadworksFault", "identifier"), ("BroadworksFault", "timeStamp"), ("BroadworksFault", "alarmName"), ("BroadworksFault", "systemName"), ("BroadworksFault", "severity"), ("BroadworksFault", "component"), ("BroadworksFault", "subcomponent"), ("BroadworksFault", "problemText"), ("BroadworksFault", "recommendedActionsText"))
if mibBuilder.loadTexts: bwNSMemLeakInSessionFactory.setStatus('current')
if mibBuilder.loadTexts: bwNSMemLeakInSessionFactory.setDescription('For the actual description, refer the BroadWorks FaultManagementGuide as it may contain variable data.')
bwNSPolicyDeploymentError = NotificationType((1, 3, 6, 1, 4, 1, 6431, 1, 1, 1, 1011)).setObjects(("BroadworksFault", "identifier"), ("BroadworksFault", "timeStamp"), ("BroadworksFault", "alarmName"), ("BroadworksFault", "systemName"), ("BroadworksFault", "severity"), ("BroadworksFault", "component"), ("BroadworksFault", "subcomponent"), ("BroadworksFault", "problemText"), ("BroadworksFault", "recommendedActionsText"))
if mibBuilder.loadTexts: bwNSPolicyDeploymentError.setStatus('current')
if mibBuilder.loadTexts: bwNSPolicyDeploymentError.setDescription('For the actual description, refer the BroadWorks FaultManagementGuide as it may contain variable data.')
bwNSDatabaseDataInconsistencyError = NotificationType((1, 3, 6, 1, 4, 1, 6431, 1, 1, 1, 1012)).setObjects(("BroadworksFault", "identifier"), ("BroadworksFault", "timeStamp"), ("BroadworksFault", "alarmName"), ("BroadworksFault", "systemName"), ("BroadworksFault", "severity"), ("BroadworksFault", "component"), ("BroadworksFault", "subcomponent"), ("BroadworksFault", "problemText"), ("BroadworksFault", "recommendedActionsText"))
if mibBuilder.loadTexts: bwNSDatabaseDataInconsistencyError.setStatus('current')
if mibBuilder.loadTexts: bwNSDatabaseDataInconsistencyError.setDescription('For the actual description, refer the BroadWorks FaultManagementGuide as it may contain variable data.')
bwNSCallGotTreatment = NotificationType((1, 3, 6, 1, 4, 1, 6431, 1, 1, 1, 1013)).setObjects(("BroadworksFault", "identifier"), ("BroadworksFault", "timeStamp"), ("BroadworksFault", "alarmName"), ("BroadworksFault", "systemName"), ("BroadworksFault", "severity"), ("BroadworksFault", "component"), ("BroadworksFault", "subcomponent"), ("BroadworksFault", "problemText"), ("BroadworksFault", "recommendedActionsText"))
if mibBuilder.loadTexts: bwNSCallGotTreatment.setStatus('current')
if mibBuilder.loadTexts: bwNSCallGotTreatment.setDescription('For the actual description, refer the BroadWorks FaultManagementGuide as it may contain variable data.')
bwNSInvalidDialPlan = NotificationType((1, 3, 6, 1, 4, 1, 6431, 1, 1, 1, 1014)).setObjects(("BroadworksFault", "identifier"), ("BroadworksFault", "timeStamp"), ("BroadworksFault", "alarmName"), ("BroadworksFault", "systemName"), ("BroadworksFault", "severity"), ("BroadworksFault", "component"), ("BroadworksFault", "subcomponent"), ("BroadworksFault", "problemText"), ("BroadworksFault", "recommendedActionsText"))
if mibBuilder.loadTexts: bwNSInvalidDialPlan.setStatus('current')
if mibBuilder.loadTexts: bwNSInvalidDialPlan.setDescription('For the actual description, refer the BroadWorks FaultManagementGuide as it may contain variable data.')
bwNSSCRPInconsistentList = NotificationType((1, 3, 6, 1, 4, 1, 6431, 1, 1, 1, 1015)).setObjects(("BroadworksFault", "identifier"), ("BroadworksFault", "timeStamp"), ("BroadworksFault", "alarmName"), ("BroadworksFault", "systemName"), ("BroadworksFault", "severity"), ("BroadworksFault", "component"), ("BroadworksFault", "subcomponent"), ("BroadworksFault", "problemText"), ("BroadworksFault", "recommendedActionsText"))
if mibBuilder.loadTexts: bwNSSCRPInconsistentList.setStatus('current')
if mibBuilder.loadTexts: bwNSSCRPInconsistentList.setDescription('For the actual description, refer the BroadWorks FaultManagementGuide as it may contain variable data.')
bwNSUnlicensedFeature = NotificationType((1, 3, 6, 1, 4, 1, 6431, 1, 1, 1, 1016)).setObjects(("BroadworksFault", "identifier"), ("BroadworksFault", "timeStamp"), ("BroadworksFault", "alarmName"), ("BroadworksFault", "systemName"), ("BroadworksFault", "severity"), ("BroadworksFault", "component"), ("BroadworksFault", "subcomponent"), ("BroadworksFault", "problemText"), ("BroadworksFault", "recommendedActionsText"))
if mibBuilder.loadTexts: bwNSUnlicensedFeature.setStatus('current')
if mibBuilder.loadTexts: bwNSUnlicensedFeature.setDescription('For the actual description, refer the BroadWorks FaultManagementGuide as it may contain variable data.')
bwCallLogRegister = NotificationType((1, 3, 6, 1, 4, 1, 6431, 1, 1, 1, 1019)).setObjects(("BroadworksFault", "identifier"), ("BroadworksFault", "timeStamp"), ("BroadworksFault", "alarmName"), ("BroadworksFault", "systemName"), ("BroadworksFault", "severity"), ("BroadworksFault", "component"), ("BroadworksFault", "subcomponent"), ("BroadworksFault", "problemText"), ("BroadworksFault", "recommendedActionsText"))
if mibBuilder.loadTexts: bwCallLogRegister.setStatus('current')
if mibBuilder.loadTexts: bwCallLogRegister.setDescription('For the actual description, refer the BroadWorks FaultManagementGuide as it may contain variable data.')
bwCallLogUnregister = NotificationType((1, 3, 6, 1, 4, 1, 6431, 1, 1, 1, 1020)).setObjects(("BroadworksFault", "identifier"), ("BroadworksFault", "timeStamp"), ("BroadworksFault", "alarmName"), ("BroadworksFault", "systemName"), ("BroadworksFault", "severity"), ("BroadworksFault", "component"), ("BroadworksFault", "subcomponent"), ("BroadworksFault", "problemText"), ("BroadworksFault", "recommendedActionsText"))
if mibBuilder.loadTexts: bwCallLogUnregister.setStatus('current')
if mibBuilder.loadTexts: bwCallLogUnregister.setDescription('For the actual description, refer the BroadWorks FaultManagementGuide as it may contain variable data.')
bwCallLogFailure = NotificationType((1, 3, 6, 1, 4, 1, 6431, 1, 1, 1, 1021)).setObjects(("BroadworksFault", "identifier"), ("BroadworksFault", "timeStamp"), ("BroadworksFault", "alarmName"), ("BroadworksFault", "systemName"), ("BroadworksFault", "severity"), ("BroadworksFault", "component"), ("BroadworksFault", "subcomponent"), ("BroadworksFault", "problemText"), ("BroadworksFault", "recommendedActionsText"))
if mibBuilder.loadTexts: bwCallLogFailure.setStatus('current')
if mibBuilder.loadTexts: bwCallLogFailure.setDescription('For the actual description, refer the BroadWorks FaultManagementGuide as it may contain variable data.')
bwCallLogUnregisterFailure = NotificationType((1, 3, 6, 1, 4, 1, 6431, 1, 1, 1, 1022)).setObjects(("BroadworksFault", "identifier"), ("BroadworksFault", "timeStamp"), ("BroadworksFault", "alarmName"), ("BroadworksFault", "systemName"), ("BroadworksFault", "severity"), ("BroadworksFault", "component"), ("BroadworksFault", "subcomponent"), ("BroadworksFault", "problemText"), ("BroadworksFault", "recommendedActionsText"))
if mibBuilder.loadTexts: bwCallLogUnregisterFailure.setStatus('current')
if mibBuilder.loadTexts: bwCallLogUnregisterFailure.setDescription('For the actual description, refer the BroadWorks FaultManagementGuide as it may contain variable data.')
bwNSASRUnknowHostError = NotificationType((1, 3, 6, 1, 4, 1, 6431, 1, 1, 1, 1023)).setObjects(("BroadworksFault", "identifier"), ("BroadworksFault", "timeStamp"), ("BroadworksFault", "alarmName"), ("BroadworksFault", "systemName"), ("BroadworksFault", "severity"), ("BroadworksFault", "component"), ("BroadworksFault", "subcomponent"), ("BroadworksFault", "problemText"), ("BroadworksFault", "recommendedActionsText"))
if mibBuilder.loadTexts: bwNSASRUnknowHostError.setStatus('current')
if mibBuilder.loadTexts: bwNSASRUnknowHostError.setDescription('For the actual description, refer the BroadWorks FaultManagementGuide as it may contain variable data.')
bwNSSynchUnknownHostnameError = NotificationType((1, 3, 6, 1, 4, 1, 6431, 1, 1, 1, 1024)).setObjects(("BroadworksFault", "identifier"), ("BroadworksFault", "timeStamp"), ("BroadworksFault", "alarmName"), ("BroadworksFault", "systemName"), ("BroadworksFault", "severity"), ("BroadworksFault", "component"), ("BroadworksFault", "subcomponent"), ("BroadworksFault", "problemText"), ("BroadworksFault", "recommendedActionsText"))
if mibBuilder.loadTexts: bwNSSynchUnknownHostnameError.setStatus('current')
if mibBuilder.loadTexts: bwNSSynchUnknownHostnameError.setDescription('For the actual description, refer the BroadWorks FaultManagementGuide as it may contain variable data.')
bwNSSynchTrustedKeyError = NotificationType((1, 3, 6, 1, 4, 1, 6431, 1, 1, 1, 1025)).setObjects(("BroadworksFault", "identifier"), ("BroadworksFault", "timeStamp"), ("BroadworksFault", "alarmName"), ("BroadworksFault", "systemName"), ("BroadworksFault", "severity"), ("BroadworksFault", "component"), ("BroadworksFault", "subcomponent"), ("BroadworksFault", "problemText"), ("BroadworksFault", "recommendedActionsText"))
if mibBuilder.loadTexts: bwNSSynchTrustedKeyError.setStatus('current')
if mibBuilder.loadTexts: bwNSSynchTrustedKeyError.setDescription('For the actual description, refer the BroadWorks FaultManagementGuide as it may contain variable data.')
bwNSSynchExceptionError = NotificationType((1, 3, 6, 1, 4, 1, 6431, 1, 1, 1, 1026)).setObjects(("BroadworksFault", "identifier"), ("BroadworksFault", "timeStamp"), ("BroadworksFault", "alarmName"), ("BroadworksFault", "systemName"), ("BroadworksFault", "severity"), ("BroadworksFault", "component"), ("BroadworksFault", "subcomponent"), ("BroadworksFault", "problemText"), ("BroadworksFault", "recommendedActionsText"))
if mibBuilder.loadTexts: bwNSSynchExceptionError.setStatus('current')
if mibBuilder.loadTexts: bwNSSynchExceptionError.setDescription('For the actual description, refer the BroadWorks FaultManagementGuide as it may contain variable data.')
bwNSSynchUpdateXMLError = NotificationType((1, 3, 6, 1, 4, 1, 6431, 1, 1, 1, 1027)).setObjects(("BroadworksFault", "identifier"), ("BroadworksFault", "timeStamp"), ("BroadworksFault", "alarmName"), ("BroadworksFault", "systemName"), ("BroadworksFault", "severity"), ("BroadworksFault", "component"), ("BroadworksFault", "subcomponent"), ("BroadworksFault", "problemText"), ("BroadworksFault", "recommendedActionsText"))
if mibBuilder.loadTexts: bwNSSynchUpdateXMLError.setStatus('current')
if mibBuilder.loadTexts: bwNSSynchUpdateXMLError.setDescription('For the actual description, refer the BroadWorks FaultManagementGuide as it may contain variable data.')
bwNSSynchUpdateFailureError = NotificationType((1, 3, 6, 1, 4, 1, 6431, 1, 1, 1, 1028)).setObjects(("BroadworksFault", "identifier"), ("BroadworksFault", "timeStamp"), ("BroadworksFault", "alarmName"), ("BroadworksFault", "systemName"), ("BroadworksFault", "severity"), ("BroadworksFault", "component"), ("BroadworksFault", "subcomponent"), ("BroadworksFault", "problemText"), ("BroadworksFault", "recommendedActionsText"))
if mibBuilder.loadTexts: bwNSSynchUpdateFailureError.setStatus('current')
if mibBuilder.loadTexts: bwNSSynchUpdateFailureError.setDescription('For the actual description, refer the BroadWorks FaultManagementGuide as it may contain variable data.')
bwNSSynchUpdateExceptionError = NotificationType((1, 3, 6, 1, 4, 1, 6431, 1, 1, 1, 1029)).setObjects(("BroadworksFault", "identifier"), ("BroadworksFault", "timeStamp"), ("BroadworksFault", "alarmName"), ("BroadworksFault", "systemName"), ("BroadworksFault", "severity"), ("BroadworksFault", "component"), ("BroadworksFault", "subcomponent"), ("BroadworksFault", "problemText"), ("BroadworksFault", "recommendedActionsText"))
if mibBuilder.loadTexts: bwNSSynchUpdateExceptionError.setStatus('current')
if mibBuilder.loadTexts: bwNSSynchUpdateExceptionError.setDescription('For the actual description, refer the BroadWorks FaultManagementGuide as it may contain variable data.')
bwNSSynchUpdateIncorrectVersionError = NotificationType((1, 3, 6, 1, 4, 1, 6431, 1, 1, 1, 1030)).setObjects(("BroadworksFault", "identifier"), ("BroadworksFault", "timeStamp"), ("BroadworksFault", "alarmName"), ("BroadworksFault", "systemName"), ("BroadworksFault", "severity"), ("BroadworksFault", "component"), ("BroadworksFault", "subcomponent"), ("BroadworksFault", "problemText"), ("BroadworksFault", "recommendedActionsText"))
if mibBuilder.loadTexts: bwNSSynchUpdateIncorrectVersionError.setStatus('current')
if mibBuilder.loadTexts: bwNSSynchUpdateIncorrectVersionError.setDescription('For the actual description, refer the BroadWorks FaultManagementGuide as it may contain variable data.')
bwNSSynchUpdateIncorrectProtocolError = NotificationType((1, 3, 6, 1, 4, 1, 6431, 1, 1, 1, 1031)).setObjects(("BroadworksFault", "identifier"), ("BroadworksFault", "timeStamp"), ("BroadworksFault", "alarmName"), ("BroadworksFault", "systemName"), ("BroadworksFault", "severity"), ("BroadworksFault", "component"), ("BroadworksFault", "subcomponent"), ("BroadworksFault", "problemText"), ("BroadworksFault", "recommendedActionsText"))
if mibBuilder.loadTexts: bwNSSynchUpdateIncorrectProtocolError.setStatus('current')
if mibBuilder.loadTexts: bwNSSynchUpdateIncorrectProtocolError.setDescription('For the actual description, refer the BroadWorks FaultManagementGuide as it may contain variable data.')
bwNetworkDeviceNodeIsFailed = NotificationType((1, 3, 6, 1, 4, 1, 6431, 1, 1, 1, 1032)).setObjects(("BroadworksFault", "identifier"), ("BroadworksFault", "timeStamp"), ("BroadworksFault", "alarmName"), ("BroadworksFault", "systemName"), ("BroadworksFault", "severity"), ("BroadworksFault", "component"), ("BroadworksFault", "subcomponent"), ("BroadworksFault", "problemText"), ("BroadworksFault", "recommendedActionsText"))
if mibBuilder.loadTexts: bwNetworkDeviceNodeIsFailed.setStatus('current')
if mibBuilder.loadTexts: bwNetworkDeviceNodeIsFailed.setDescription('For the actual description, refer the BroadWorks FaultManagementGuide as it may contain variable data.')
bwNetworkDeviceNodeIsOnline = NotificationType((1, 3, 6, 1, 4, 1, 6431, 1, 1, 1, 1033)).setObjects(("BroadworksFault", "identifier"), ("BroadworksFault", "timeStamp"), ("BroadworksFault", "alarmName"), ("BroadworksFault", "systemName"), ("BroadworksFault", "severity"), ("BroadworksFault", "component"), ("BroadworksFault", "subcomponent"), ("BroadworksFault", "problemText"), ("BroadworksFault", "recommendedActionsText"))
if mibBuilder.loadTexts: bwNetworkDeviceNodeIsOnline.setStatus('current')
if mibBuilder.loadTexts: bwNetworkDeviceNodeIsOnline.setDescription('For the actual description, refer the BroadWorks FaultManagementGuide as it may contain variable data.')
bwLicenseViolation = NotificationType((1, 3, 6, 1, 4, 1, 6431, 1, 1, 1, 1034)).setObjects(("BroadworksFault", "identifier"), ("BroadworksFault", "timeStamp"), ("BroadworksFault", "alarmName"), ("BroadworksFault", "systemName"), ("BroadworksFault", "severity"), ("BroadworksFault", "component"), ("BroadworksFault", "subcomponent"), ("BroadworksFault", "problemText"), ("BroadworksFault", "recommendedActionsText"))
if mibBuilder.loadTexts: bwLicenseViolation.setStatus('current')
if mibBuilder.loadTexts: bwLicenseViolation.setDescription('For the actual description, refer the BroadWorks FaultManagementGuide as it may contain variable data.')
bwLicenseThreshold = NotificationType((1, 3, 6, 1, 4, 1, 6431, 1, 1, 1, 1035)).setObjects(("BroadworksFault", "identifier"), ("BroadworksFault", "timeStamp"), ("BroadworksFault", "alarmName"), ("BroadworksFault", "systemName"), ("BroadworksFault", "severity"), ("BroadworksFault", "component"), ("BroadworksFault", "subcomponent"), ("BroadworksFault", "problemText"), ("BroadworksFault", "recommendedActionsText"))
if mibBuilder.loadTexts: bwLicenseThreshold.setStatus('current')
if mibBuilder.loadTexts: bwLicenseThreshold.setDescription('For the actual description, refer the BroadWorks FaultManagementGuide as it may contain variable data.')
bwServiceControlProxyConnFailed = NotificationType((1, 3, 6, 1, 4, 1, 6431, 1, 1, 1, 1036)).setObjects(("BroadworksFault", "identifier"), ("BroadworksFault", "timeStamp"), ("BroadworksFault", "alarmName"), ("BroadworksFault", "systemName"), ("BroadworksFault", "severity"), ("BroadworksFault", "component"), ("BroadworksFault", "subcomponent"), ("BroadworksFault", "problemText"), ("BroadworksFault", "recommendedActionsText"))
if mibBuilder.loadTexts: bwServiceControlProxyConnFailed.setStatus('current')
if mibBuilder.loadTexts: bwServiceControlProxyConnFailed.setDescription('For the actual description, refer the BroadWorks FaultManagementGuide as it may contain variable data.')
bwServiceControlProxyConnTerminated = NotificationType((1, 3, 6, 1, 4, 1, 6431, 1, 1, 1, 1037)).setObjects(("BroadworksFault", "identifier"), ("BroadworksFault", "timeStamp"), ("BroadworksFault", "alarmName"), ("BroadworksFault", "systemName"), ("BroadworksFault", "severity"), ("BroadworksFault", "component"), ("BroadworksFault", "subcomponent"), ("BroadworksFault", "problemText"), ("BroadworksFault", "recommendedActionsText"))
if mibBuilder.loadTexts: bwServiceControlProxyConnTerminated.setStatus('current')
if mibBuilder.loadTexts: bwServiceControlProxyConnTerminated.setDescription('For the actual description, refer the BroadWorks FaultManagementGuide as it may contain variable data.')
bwNonInviteLicenseViolation = NotificationType((1, 3, 6, 1, 4, 1, 6431, 1, 1, 1, 1038)).setObjects(("BroadworksFault", "identifier"), ("BroadworksFault", "timeStamp"), ("BroadworksFault", "alarmName"), ("BroadworksFault", "systemName"), ("BroadworksFault", "severity"), ("BroadworksFault", "component"), ("BroadworksFault", "subcomponent"), ("BroadworksFault", "problemText"), ("BroadworksFault", "recommendedActionsText"))
if mibBuilder.loadTexts: bwNonInviteLicenseViolation.setStatus('current')
if mibBuilder.loadTexts: bwNonInviteLicenseViolation.setDescription('For the actual description, refer the BroadWorks FaultManagementGuide as it may contain variable data.')
bwNonInviteLicenseThreshold = NotificationType((1, 3, 6, 1, 4, 1, 6431, 1, 1, 1, 1039)).setObjects(("BroadworksFault", "identifier"), ("BroadworksFault", "timeStamp"), ("BroadworksFault", "alarmName"), ("BroadworksFault", "systemName"), ("BroadworksFault", "severity"), ("BroadworksFault", "component"), ("BroadworksFault", "subcomponent"), ("BroadworksFault", "problemText"), ("BroadworksFault", "recommendedActionsText"))
if mibBuilder.loadTexts: bwNonInviteLicenseThreshold.setStatus('current')
if mibBuilder.loadTexts: bwNonInviteLicenseThreshold.setDescription('For the actual description, refer the BroadWorks FaultManagementGuide as it may contain variable data.')
bwLocationAPIRequestError = NotificationType((1, 3, 6, 1, 4, 1, 6431, 1, 1, 1, 1040)).setObjects(("BroadworksFault", "identifier"), ("BroadworksFault", "timeStamp"), ("BroadworksFault", "alarmName"), ("BroadworksFault", "systemName"), ("BroadworksFault", "severity"), ("BroadworksFault", "component"), ("BroadworksFault", "subcomponent"), ("BroadworksFault", "problemText"), ("BroadworksFault", "recommendedActionsText"))
if mibBuilder.loadTexts: bwLocationAPIRequestError.setStatus('current')
if mibBuilder.loadTexts: bwLocationAPIRequestError.setDescription('For the actual description, refer the BroadWorks FaultManagementGuide as it may contain variable data.')
mibBuilder.exportSymbols("BW-NetworkServerFault", bwPMNSExecutionServerRestarted=bwPMNSExecutionServerRestarted, bwNSInvalidDialPlan=bwNSInvalidDialPlan, bwNSSynchUnknownHostnameError=bwNSSynchUnknownHostnameError, bwLocationAPIRequestError=bwLocationAPIRequestError, bwPMNSProvisioningServerLaunched=bwPMNSProvisioningServerLaunched, bwNSSynchExceptionError=bwNSSynchExceptionError, bwLicenseThreshold=bwLicenseThreshold, bwPMNSProvisioningServerRestarted=bwPMNSProvisioningServerRestarted, bwNSMemLeakInSessionFactory=bwNSMemLeakInSessionFactory, bwNSSynchUpdateIncorrectProtocolError=bwNSSynchUpdateIncorrectProtocolError, bwServiceControlProxyConnFailed=bwServiceControlProxyConnFailed, bwCallLogRegister=bwCallLogRegister, bwNSSynchUpdateXMLError=bwNSSynchUpdateXMLError, bwPMNSExecutionServerLaunched=bwPMNSExecutionServerLaunched, bwPMNSProvisioningServerDeath=bwPMNSProvisioningServerDeath, bwNetworkDeviceNodeIsFailed=bwNetworkDeviceNodeIsFailed, bwServiceControlProxyConnTerminated=bwServiceControlProxyConnTerminated, bwNSSynchUpdateFailureError=bwNSSynchUpdateFailureError, bwPMNSProvisioningServerShutDown=bwPMNSProvisioningServerShutDown, bwPMNSExecutionServerShutDown=bwPMNSExecutionServerShutDown, bwNSDatabaseDataInconsistencyError=bwNSDatabaseDataInconsistencyError, bwNSASRUnknowHostError=bwNSASRUnknowHostError, bwLicenseViolation=bwLicenseViolation, bwNSPolicyDeploymentError=bwNSPolicyDeploymentError, systemFaults=systemFaults, bwPMNSExecutionServerDeath=bwPMNSExecutionServerDeath, bwNonInviteLicenseViolation=bwNonInviteLicenseViolation, bwNSCallGotTreatment=bwNSCallGotTreatment, bwNoRowInNNACLFailure=bwNoRowInNNACLFailure, bwNonInviteLicenseThreshold=bwNonInviteLicenseThreshold, PYSNMP_MODULE_ID=systemFaults, bwNetworkDeviceNodeIsOnline=bwNetworkDeviceNodeIsOnline, bwNSUnlicensedFeature=bwNSUnlicensedFeature, bwNSSynchTrustedKeyError=bwNSSynchTrustedKeyError, bwCallLogUnregisterFailure=bwCallLogUnregisterFailure, bwCallLogFailure=bwCallLogFailure, 
bwNSSCRPInconsistentList=bwNSSCRPInconsistentList, bwNSSynchUpdateIncorrectVersionError=bwNSSynchUpdateIncorrectVersionError, bwNSSynchUpdateExceptionError=bwNSSynchUpdateExceptionError, bwCallLogUnregister=bwCallLogUnregister)
| [
"dcwangmit01@gmail.com"
] | dcwangmit01@gmail.com |
6eef0c5ebdb3cdd47d6bb998056cf17b8f881c47 | aec23242da73e6a1a67dd80263b2f4628a97251f | /test/integ/linktest.py | f50b7288897fb88260542746ee9e689b74be74ad | [
"LicenseRef-scancode-warranty-disclaimer"
] | no_license | wenzowski/h5serv | 81b245faba6c57c591c4deda7091f0233e80105d | aac69032aa9abd596e9ea7897372d86472d9be0d | refs/heads/develop | 2020-12-26T00:07:53.830528 | 2016-08-30T15:17:48 | 2016-08-30T16:06:26 | 66,952,480 | 0 | 0 | null | 2016-08-30T15:15:27 | 2016-08-30T15:15:27 | null | UTF-8 | Python | false | false | 18,566 | py | ##############################################################################
# Copyright by The HDF Group. #
# All rights reserved. #
# #
# This file is part of H5Serv (HDF5 REST Server) Service, Libraries and #
# Utilities. The full HDF5 REST Server copyright notice, including #
# terms governing use, modification, and redistribution, is contained in #
# the file COPYING, which can be found at the root of the source code #
# distribution tree. If you do not have access to this file, you may #
# request a copy from help@hdfgroup.org. #
##############################################################################
import requests
import config
import unittest
import helper
import json
import logging
class LinkTest(unittest.TestCase):
def __init__(self, *args, **kwargs):
super(LinkTest, self).__init__(*args, **kwargs)
self.endpoint = 'http://' + config.get('server') + ':' + str(config.get('port'))
def testGetHard(self):
logging.info("LinkTest.testGetHard")
for domain_name in ('tall', 'tall_ro'):
g1_uuid = None
domain = domain_name + '.' + config.get('domain')
root_uuid = helper.getRootUUID(domain)
req = self.endpoint + "/groups/" + root_uuid + "/links/g1"
headers = {'host': domain}
rsp = requests.get(req, headers=headers)
self.assertEqual(rsp.status_code, 200)
rspJson = json.loads(rsp.text)
self.assertTrue("created" in rspJson)
self.assertTrue("lastModified" in rspJson)
self.assertTrue('link' in rspJson)
target = rspJson['link']
self.assertTrue(helper.validateId(target['id']))
self.assertEqual(target['class'], 'H5L_TYPE_HARD')
self.assertEqual(target['title'], 'g1')
self.assertEqual(target['collection'], 'groups')
    def testGetMissing(self):
logging.info("LinkTest.testGetMissing")
for domain_name in ('tall', 'tall_ro'):
g1_uuid = None
domain = domain_name + '.' + config.get('domain')
root_uuid = helper.getRootUUID(domain)
req = self.endpoint + "/groups/" + root_uuid + "/links/not_a_link"
headers = {'host': domain}
rsp = requests.get(req, headers=headers)
self.assertEqual(rsp.status_code, 404)
def testGetSoft(self):
logging.info("LinkTest.testGetSoft")
for domain_name in ('tall', 'tall_ro'):
g1_uuid = None
domain = domain_name + '.' + config.get('domain')
root_uuid = helper.getRootUUID(domain)
g1_uuid = helper.getUUID(domain, root_uuid, 'g1')
g12_uuid = helper.getUUID(domain, g1_uuid, 'g1.2')
g121_uuid = helper.getUUID(domain, g12_uuid, 'g1.2.1')
req = self.endpoint + "/groups/" + g121_uuid + "/links/slink"
headers = {'host': domain}
rsp = requests.get(req, headers=headers)
self.assertEqual(rsp.status_code, 200)
rspJson = json.loads(rsp.text)
self.assertTrue("created" in rspJson)
self.assertTrue("lastModified" in rspJson)
target = rspJson['link']
self.assertEqual(target['h5path'], 'somevalue')
self.assertEqual(target['class'], 'H5L_TYPE_SOFT')
self.assertEqual(target['title'], 'slink')
self.assertTrue('collection' not in target)
def testGetExternal(self):
logging.info("LinkTest.testGetExternal")
for domain_name in ('tall', 'tall_ro'):
g1_uuid = None
domain = domain_name + '.' + config.get('domain')
root_uuid = helper.getRootUUID(domain)
g1_uuid = helper.getUUID(domain, root_uuid, 'g1')
g12_uuid = helper.getUUID(domain, g1_uuid, 'g1.2')
req = self.endpoint + "/groups/" + g12_uuid + "/links/extlink"
headers = {'host': domain}
rsp = requests.get(req, headers=headers)
self.assertEqual(rsp.status_code, 200)
rspJson = json.loads(rsp.text)
self.assertTrue("created" in rspJson)
self.assertTrue("lastModified" in rspJson)
target = rspJson['link']
# self.assertEqual(target, "http://somefile/#h5path(somepath)")
self.assertEqual(target['class'], 'H5L_TYPE_EXTERNAL')
self.assertEqual(target['h5domain'], 'somefile')
self.assertEqual(target['h5path'], 'somepath')
self.assertEqual(target['title'], 'extlink')
self.assertTrue('collection' not in target)
def testGetUDLink(self):
logging.info("LinkTest.testGetUDLink")
domain_name = 'tall_with_udlink'
domain = domain_name + '.' + config.get('domain')
root_uuid = helper.getRootUUID(domain)
g2_uuid = helper.getUUID(domain, root_uuid, 'g2')
req = self.endpoint + "/groups/" + g2_uuid + "/links/udlink"
headers = {'host': domain}
rsp = requests.get(req, headers=headers)
self.assertEqual(rsp.status_code, 200)
rspJson = json.loads(rsp.text)
self.assertTrue("created" in rspJson)
self.assertTrue("lastModified" in rspJson)
target = rspJson['link']
self.assertEqual(target['class'], 'H5L_TYPE_USER_DEFINED')
self.assertEqual(target['title'], 'udlink')
def testGetLinks(self):
logging.info("LinkTest.testGetLinks")
for domain_name in ('tall', 'tall_ro'):
g1_uuid = None
domain = domain_name + '.' + config.get('domain')
root_uuid = helper.getRootUUID(domain)
g1_uuid = helper.getUUID(domain, root_uuid, 'g1')
g12_uuid = helper.getUUID(domain, g1_uuid, 'g1.2')
req = self.endpoint + "/groups/" + g12_uuid + "/links"
headers = {'host': domain}
rsp = requests.get(req, headers=headers)
self.assertEqual(rsp.status_code, 200)
rspJson = json.loads(rsp.text)
self.assertTrue("links" in rspJson)
links = rspJson["links"]
self.assertEqual(len(links), 2)
for link in links:
self.assertTrue("title" in link)
self.assertTrue("class" in link)
def testGetBatch(self):
logging.info("LinkTest.testGetBatch")
domain = 'group1k.' + config.get('domain')
root_uuid = helper.getRootUUID(domain)
req = helper.getEndpoint() + "/groups/" + root_uuid + "/links"
headers = {'host': domain}
params = {'Limit': 50 }
names = set()
# get links in 20 batches of 50 links each
lastName = None
for batchno in range(20):
if lastName:
params['Marker'] = lastName
rsp = requests.get(req, headers=headers, params=params)
self.assertEqual(rsp.status_code, 200)
if rsp.status_code != 200:
break
rspJson = json.loads(rsp.text)
links = rspJson['links']
            self.assertTrue(len(links) <= 50)
for link in links:
lastName = link['title']
names.add(lastName)
if len(links) == 0:
break
self.assertEqual(len(names), 1000) # should get 1000 unique links
#Fix - This needs to be made more efficient - when deleting links, the code now
# searches all objects to see if the linked target needs to be made anonymous or not.
# idea: keep back pointers for all links?
# Tracked as Issue #12 in Github
"""
def testMoveLinks(self):
logging.info("LinkTest.testMoveLinks")
domain = 'group1k_updated.' + config.get('domain')
root_uuid = helper.getRootUUID(domain)
# create a new subgroup to move others to
targetGroupId = helper.createGroup(domain)
req = helper.getEndpoint() + "/groups/" + root_uuid + "/links"
headers = {'host': domain}
params = {'Limit': 100 }
names = set()
# get links in batches of 100 links each
count = 0
while True:
            print('count:', count)
rsp = requests.get(req, headers=headers, params=params)
self.assertEqual(rsp.status_code, 200)
if rsp.status_code != 200:
break
rspJson = json.loads(rsp.text)
links = rspJson['links']
if len(links) == 0:
break
count += len(links)
for link in links:
# delete link
del_req = helper.getEndpoint() + "/groups/" + root_uuid + "/links/" + link['title']
rsp = requests.delete(del_req, headers=headers)
self.assertEqual(rsp.status_code, 200)
self.assertEqual(count, 1000) # should get 1000 unique links
"""
def testGetBadParam(self):
logging.info("LinkTest.testGetBatchBadParam")
domain = 'tall.' + config.get('domain')
root_uuid = helper.getRootUUID(domain)
req = helper.getEndpoint() + "/groups/" + root_uuid + "/links"
headers = {'host': domain}
params = {'Limit': 'abc' }
rsp = requests.get(req, headers=headers, params=params)
self.assertEqual(rsp.status_code, 400)
def testPut(self):
logging.info("LinkTest.testPut")
domain = 'tall_updated.' + config.get('domain')
grpId = helper.createGroup(domain)
rootId = helper.getRootUUID(domain)
name = 'g3'
req = helper.getEndpoint() + "/groups/" + rootId + "/links/" + name
payload = {"id": grpId}
headers = {'host': domain}
rsp = requests.get(req, data=json.dumps(payload), headers=headers)
self.assertEqual(rsp.status_code, 404) # link doesn't exist
rsp = requests.put(req, data=json.dumps(payload), headers=headers)
self.assertEqual(rsp.status_code, 201)
rsp = requests.get(req, data=json.dumps(payload), headers=headers)
self.assertEqual(rsp.status_code, 200) # it's there now!
# make a request second time (verify idempotent)
rsp = requests.put(req, data=json.dumps(payload), headers=headers)
self.assertEqual(rsp.status_code, 201)
# now try with a different payload
grpId2 = helper.createGroup(domain)
payload["id"] = grpId2
rsp = requests.put(req, data=json.dumps(payload), headers=headers)
self.assertEqual(rsp.status_code, 201)
def testPutNameWithSpaces(self):
logging.info("LinkTest.testPutNameWithSpaces")
domain = 'tall_updated.' + config.get('domain')
grpId = helper.createGroup(domain)
rootId = helper.getRootUUID(domain)
name = 'name with spaces'
req = helper.getEndpoint() + "/groups/" + rootId + "/links/" + name
payload = {"id": grpId}
headers = {'host': domain}
rsp = requests.put(req, data=json.dumps(payload), headers=headers)
self.assertEqual(rsp.status_code, 201)
# verify we can read the link back
rsp = requests.get(req, headers=headers)
self.assertEqual(rsp.status_code, 200)
rspJson = json.loads(rsp.text)
self.assertTrue("link" in rspJson)
link = rspJson["link"]
self.assertTrue("title" in link)
self.assertEqual(link["title"], name)
self.assertTrue("class" in link)
self.assertEqual(link["class"], "H5L_TYPE_HARD")
def testPutBadReqId(self):
logging.info("LinkTest.testPutBadReqId")
domain = 'tall_updated.' + config.get('domain')
grpId = helper.createGroup(domain)
badReqId = 'b2771194-347f-11e4-bb67-3c15c2da029e' # doesn't exist in tall.h5
name = 'g3'
req = helper.getEndpoint() + "/groups/" + badReqId + "/links/" + name
payload = {"id": grpId}
headers = {'host': domain}
rsp = requests.put(req, data=json.dumps(payload), headers=headers)
self.assertEqual(rsp.status_code, 404)
def testPutBadLinkId(self):
logging.info("LinkTest.testPutBadLinkId")
domain = 'tall_updated.' + config.get('domain')
grpId = helper.createGroup(domain)
rootId = helper.getRootUUID(domain)
badLinkId = 'b2771194-347f-11e4-bb67-3c15c2da029e' # doesn't exist in tall.h5
name = 'g3'
req = helper.getEndpoint() + "/groups/" + rootId + "/links/" + name
payload = {"id": badLinkId}
headers = {'host': domain}
rsp = requests.put(req, data=json.dumps(payload), headers=headers)
self.assertEqual(rsp.status_code, 404)
def testPutNoName(self):
logging.info("LinkTest.testPutNoName")
domain = 'tall_updated.' + config.get('domain')
grpId = helper.createGroup(domain)
rootId = helper.getRootUUID(domain)
req = helper.getEndpoint() + "/groups/" + rootId + "/links/"
payload = {"id": grpId}
headers = {'host': domain}
rsp = requests.put(req, data=json.dumps(payload), headers=headers)
self.assertEqual(rsp.status_code, 400)
def testPutBadName(self):
logging.info("LinkTest.testPutBadName")
domain = 'tall_updated.' + config.get('domain')
grpId = helper.createGroup(domain)
rootId = helper.getRootUUID(domain)
name = 'bad/name' # forward slash not allowed
req = helper.getEndpoint() + "/groups/" + rootId + "/links/" + name
payload = {"id": grpId}
headers = {'host': domain}
rsp = requests.put(req, data=json.dumps(payload), headers=headers)
self.assertEqual(rsp.status_code, 400)
def testPutSoftLink(self):
logging.info("LinkTest.testPutSoftLink")
domain = 'tall_updated.' + config.get('domain')
grpId = helper.createGroup(domain)
rootId = helper.getRootUUID(domain)
name = 'softlink'
req = helper.getEndpoint() + "/groups/" + rootId + "/links/" + name
payload = {"h5path": "somewhere"}
headers = {'host': domain}
# verify softlink does not exist
rsp = requests.get(req, data=json.dumps(payload), headers=headers)
self.assertEqual(rsp.status_code, 404)
# make request
rsp = requests.put(req, data=json.dumps(payload), headers=headers)
self.assertEqual(rsp.status_code, 201)
# verify link is created
rsp = requests.get(req, data=json.dumps(payload), headers=headers)
self.assertEqual(rsp.status_code, 200)
# verify idempotent
rsp = requests.put(req, data=json.dumps(payload), headers=headers)
self.assertEqual(rsp.status_code, 201)
def testPutExternalLink(self):
logging.info("LinkTest.testPutExternalLink")
domain = 'tall_updated.' + config.get('domain')
target_domain = 'external_target.' + config.get('domain')
target_path = '/dset1'
grpId = helper.createGroup(domain)
rootId = helper.getRootUUID(domain)
name = 'extlink'
req = helper.getEndpoint() + "/groups/" + rootId + "/links/" + name
payload = {"h5path": target_path, "h5domain": target_domain}
headers = {'host': domain}
# verify extlink does not exist
rsp = requests.get(req, data=json.dumps(payload), headers=headers)
self.assertEqual(rsp.status_code, 404)
# make request
rsp = requests.put(req, data=json.dumps(payload), headers=headers)
self.assertEqual(rsp.status_code, 201)
# verify link is created
rsp = requests.get(req, data=json.dumps(payload), headers=headers)
self.assertEqual(rsp.status_code, 200)
# verify that it is an external link
rspJson = json.loads(rsp.text)
target = rspJson['link']
self.assertEqual(target['class'], 'H5L_TYPE_EXTERNAL')
self.assertEqual(target['h5domain'], target_domain)
self.assertEqual(target['h5path'], target_path)
def testPutExternalMissingPath(self):
logging.info("LinkTest.testPutExternalMissingPath")
fakeId = "14bfeeb8-68b1-11e4-a69a-3c15c2da029e"
domain = 'tall_updated.' + config.get('domain')
external_domain = 'external_target.' + config.get('domain')
grpId = helper.createGroup(domain)
rootId = helper.getRootUUID(domain)
name = 'extlinkid'
req = helper.getEndpoint() + "/groups/" + rootId + "/links/" + name
payload = {"h5domain": external_domain}
headers = {'host': domain}
# verify extlink does not exist
rsp = requests.get(req, data=json.dumps(payload), headers=headers)
self.assertEqual(rsp.status_code, 404)
# make request
rsp = requests.put(req, data=json.dumps(payload), headers=headers)
self.assertEqual(rsp.status_code, 400)
def testDelete(self):
logging.info("LinkTest.testDelete")
domain = 'tall_updated.' + config.get('domain')
grpId = helper.createGroup(domain)
rootId = helper.getRootUUID(domain)
name = 'deleteme'
req = helper.getEndpoint() + "/groups/" + rootId + "/links/" + name
payload = {"id": grpId}
headers = {'host': domain}
rsp = requests.put(req, data=json.dumps(payload), headers=headers)
self.assertEqual(rsp.status_code, 201)
# now remove the link
rsp = requests.delete(req, headers=headers)
self.assertEqual(rsp.status_code, 200)
# get should fail
rsp = requests.get(req, headers=headers)
self.assertEqual(rsp.status_code, 410)
# Group should still be accessible via uuid
req = self.endpoint + "/groups/" + grpId
rsp = requests.get(req, headers=headers)
self.assertEqual(rsp.status_code, 200)
if __name__ == '__main__':
unittest.main()
| [
"jreadey@hdfgroup.org"
] | jreadey@hdfgroup.org |
8ab10d663bdfbf0e0308ded035ba9e5f154f2c15 | add0bb7a309ea346614d7f560a24e653d3d0ff67 | /test/人机交互/数据交互.py | 03219ac54f95ceed913795f764cdd147bd7f0d56 | [] | no_license | 1572903465/PythonProjects | 935aff08d5b3d3f146393764a856369061513d36 | 73576080174f72ea1df9b36d201cf3949419041b | refs/heads/master | 2023-06-10T15:50:49.178112 | 2021-07-05T15:42:53 | 2021-07-05T15:42:53 | 301,328,267 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,918 | py | from flask import Flask, render_template, g, request, url_for, session, redirect
from dataclasses import dataclass
app = Flask(__name__, static_url_path="/")
app.config['SECRET_KEY']="sdfklas0lk42j"
@dataclass
class User:
id: int
username:str
password:str
users = [
User(1,"Admin","123456"),
User(2,"Eason","888888"),
User(3,"Tommy","666666")
]
@app.before_request
def before_request():
g.user = None
if 'user_id' in session:
user = [u for u in users if u.id == session['user_id']]
g.user=user[0]
print(g)
# @app.route('/')
# def begin():
# return render_template("login.html")
@app.route('/login',methods=['GET','POST'])
def login():
if request.method =='POST':
session.pop('user_id',None)
username = request.form.get("username",None)
password = request.form.get("password",None)
user = [u for u in users if u.username==username]
if len(user) > 0:
user = user[0]
if user and user.password == password:
session['user_id'] = user.id
# print(url_for('profile'))
# return redirect(url_for('profile'),)
user = {
'username':username,
'uid':user.id
}
return render_template("profile.html",userinfo=user)
return render_template("login.html")
@app.route("/profile")
def profile():
if not g.user:
return redirect(url_for('login'))
return render_template('profile.html')
@app.route("/modifyTeim",methods=["GET","POST"])
def modify_item():
print("111111111111111")
print(request.args)
request.args.get("name","")
    return {"success": 0}
if __name__ == '__main__':
# 运行本项目,host=0.0.0.0可以让其他电脑也能访问到该网站,port指定访问的端口。默认的host是127.0.0.1,port为5000
app.run(host='127.0.0.1',port=9000) | [
"1572903465@qq.com"
] | 1572903465@qq.com |
a6b7b62c9f1ac79f10ddf614af9f7bf20439ed33 | 0d2c2ffe431b159a87bcd78c97147422dce8d778 | /GUI学习/01PyQt5快速开发与实战/ch06信号与槽/05事件处理机制01.py | 43465b6595902f3927dcab02d7536babf580a255 | [] | no_license | YuanXianguo/Python-Project-ITC | 9e297fc1e1e8ec2b136e6e8b1db0afaaba81c16c | afd14cbe501147ec66b4aa0c1c7907b3ae41d148 | refs/heads/master | 2020-04-16T13:54:33.727825 | 2019-12-20T02:16:52 | 2019-12-20T02:16:52 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 5,225 | py | import sys
from PyQt5.QtCore import QEvent, QTimer, Qt
from PyQt5.QtWidgets import QApplication, QMenu, QWidget
from PyQt5.QtGui import QPainter
class Widget(QWidget):
def __init__(self):
super().__init__()
self.setWindowTitle('Event Demo')
self.setGeometry(300, 300, 300, 200)
self.just_double_clicked = False
self.key = ''
self.text = ''
self.message = ''
QTimer.singleShot(3000, self.give_help)
def give_help(self):
self.text = '请点击这触发追踪鼠标功能'
        self.update() # schedule a repaint, i.e. trigger paintEvent
def closeEvent(self, event):
        """Reimplement the close event"""
print('Closed')
def contextMenuEvent(self, event):
        """Reimplement the context-menu event, i.e. the menu shown on right click"""
menu = QMenu()
one_action = menu.addAction('&One')
one_action.triggered.connect(self.one)
two_action = menu.addAction('&Two')
two_action.triggered.connect(self.two)
if not self.message:
menu.addSeparator()
three_action = menu.addAction('&Three')
three_action.triggered.connect(self.three)
menu.exec_(event.globalPos())
    """Context-menu slot functions"""
def one(self):
self.message = 'Menu option One'
self.update()
def two(self):
self.message = 'Menu option Two'
self.update()
def three(self):
self.message = 'Menu option Three'
self.update()
def paintEvent(self, event):
        """Reimplement the paint event"""
text = self.text
i = text.find('\n\n')
if i >= 0:
text = text[0:i]
        if self.key: # if a key was pressed, record it in the info text
text += '\n\n你按下了:{0}'.format(self.key)
painter = QPainter(self)
painter.setRenderHint(QPainter.TextAntialiasing)
        painter.drawText(self.rect(), Qt.AlignCenter, text) # draw the info text
        if self.message: # if a message exists, draw it bottom-centered, then clear it and repaint after 5 seconds
painter.drawText(self.rect(), Qt.AlignBottom|Qt.AlignCenter, self.message)
QTimer.singleShot(5000, self.clear_message)
QTimer.singleShot(5000, self.update)
def clear_message(self):
        """Slot that clears the message text"""
self.message = ''
def resizeEvent(self, event):
self.text = '调整窗口的大小为:QSize({},{})'.format(event.size().width(), event.size().height())
self.update()
def mouseReleaseEvent(self, event):
        """Reimplement the mouse release event"""
        # on a double-click release, do not toggle mouse tracking
        # on a single-click release, toggle the tracking state: track if enabled, otherwise don't
if self.just_double_clicked:
self.just_double_clicked = False
else:
            self.setMouseTracking(not self.hasMouseTracking()) # single click toggles tracking
if self.hasMouseTracking():
self.text = '开启鼠标跟踪功能.\n' + '请移动一下鼠标!\n' + \
'单击鼠标可以关闭这个功能'
else:
self.text = '关闭鼠标跟踪功能.\n' + '单击鼠标可以开启这个功能'
self.update()
def mouseMoveEvent(self, event):
        """Reimplement the mouse move event"""
if not self.just_double_clicked:
            globalPos = self.mapToGlobal(event.pos()) # convert window coordinates to screen coordinates
self.text = """鼠标位置:
窗口坐标为:QPoint({}, {})
屏幕坐标为:QPoint({}, {})""".format(event.pos().x(), event.pos().y(),
globalPos.x(), globalPos.y())
self.update()
def mouseDoubleClickEvent(self, event):
        """Reimplement the mouse double-click event"""
self.just_double_clicked = True
self.text = '你双击了鼠标'
self.update()
def keyPressEvent(self, event):
        """Reimplement the key press event"""
self.key = ''
if event.key() == Qt.Key_Home:
self.key = 'Home'
elif event.key() == Qt.Key_End:
self.key = 'End'
elif event.key() == Qt.Key_PageUp:
if event.modifiers() & Qt.ControlModifier:
self.key = 'Ctrl+PageUp'
else:
self.key = 'PageUp'
elif event.key() == Qt.Key_PageDown:
if event.modifiers() & Qt.ControlModifier:
self.key = 'Ctrl+PageDown'
else:
self.key = 'PageDown'
elif Qt.Key_A <= event.key() <= Qt.Key_Z:
            if event.modifiers() & Qt.ShiftModifier:
                self.key = 'Shift+'
            self.key += event.text()
        if self.key:
            self.update()
else:
QWidget.keyPressEvent(self, event)
if __name__ == '__main__':
app = QApplication(sys.argv)
my_show = Widget()
my_show.show()
sys.exit(app.exec_())
| [
"736913978@qq.com"
] | 736913978@qq.com |
b4818ae7a9622cf19a01a92d9c769226d31d19a8 | e262e64415335060868e9f7f73ab8701e3be2f7b | /.history/Test_001/test_link_20201125110915.py | a02ef9ae464982e8e63a84a477479ab6fcfb08ce | [] | no_license | Allison001/developer_test | 6e211f1e2bd4287ee26fd2b33baf1c6a8d80fc63 | b8e04b4b248b0c10a35e93128a5323165990052c | refs/heads/master | 2023-06-18T08:46:40.202383 | 2021-07-23T03:31:54 | 2021-07-23T03:31:54 | 322,807,303 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 247 | py |
import allure
@allure.link("https://www.baidu.com",name="链接地址")
def test_link_a():
print("测试连接的测试用例")
testcase="https://www.baidu.com"
@allure.testcase
def test_testcase():
print("这个是测试用例地址") | [
"zhangyingxbba@gmail.com"
] | zhangyingxbba@gmail.com |
65ed6f471c70db25d6ea06054f3ba7f8adeaa18b | fa798e1779af170ee31bfd710a6faca9904a99ef | /7day/7. ex1.py | c141f66dd49c7579bdaf015710ae54329f342ae1 | [] | no_license | itwebMJ/pythonStudy | 1c573f98b78ce8c9273ae17a44d59a5a26c61b2c | 8ea3112c9c587b6aeb8a5fa6ef715053286fbaae | refs/heads/master | 2023-06-28T05:37:29.239010 | 2021-08-06T08:01:54 | 2021-08-06T08:01:54 | 375,879,186 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,657 | py | '''
When the program starts, create the memo directory if it does not exist (first run only).
Memo pad
1. Write
   filename => duplicate : overwrite : erase the existing content and write anew
   append : keep the existing content and write after it => take input from the keyboard ('/exit' stops input), write it to the file => close the file and finish
   new name : create a new file
2. Read
   print the file list of the memo directory => select a file => open that file in read mode, read its content and print it
3. Delete
   print the file list of the memo directory => select the file to delete => delete the selected file
4. Quit
'''
import os
def init(path): # initialization helper
    if not os.path.isdir(path): # if the given directory does not exist
        os.mkdir(path) # create it
def selectFile(path): # file selection helper
    flist = os.listdir(path) # read the file list of the memo directory, e.g. [a.txt, b.txt, c.txt]
if len(flist)==0:
print('파일이 없다')
return
print('메모 파일 목록')
    for i in range(0, len(flist)): # print the file list
        print(i, '. '+flist[i]) # index. filename
while True:
idx = int(input('선택할 파일의 번호를 입력하시오'))
if 0 <= idx <= len(flist)-1:
break
return flist[idx]
def readFile(path):
fname = selectFile(path)
if fname == None:
return
print('선택한 파일명:', fname)
f = open(path+fname, 'r', encoding='utf-8')
content = f.read()
f.close()
print('===', fname, '내용===')
print(content)
def nameCheck(path, fname): # check for a duplicate filename and, on collision, choose a mode; returns (mode, new filename entered on collision)
    flist = os.listdir(path) # file list of the memo directory
    mode = 'w' # set default mode to 'w'
if len(flist)==0:
return mode, ''
newName = ''
    if fname in flist: # 'in': True if the preceding value is in the list, else False; here fname is a duplicated name
newName = flist[0]
x = int(input('1.덮어쓰기 2.이어쓰기 3.새파일명입력'))
if x==1:
mode = 'w'
newName = ''
elif x==2:
mode = 'a'
newName = ''
elif x==3:
            while newName in flist: # keep prompting until a non-duplicate name is entered
newName = input('새파일명:')
else:
print('중복처리 중 잘못된 메뉴로 종료')
return
return mode, newName
# return value None: bad input, abort writing
# newName == '': write using only the mode value
# newName != '': a new filename was entered
def writeFile(path):
fname = input('파일명 입력')
res = nameCheck(path, fname)
mode = ''
if res == None:
print('파일 중복처리 선택을 잘못해서 여기서 끝냄')
return
elif res[1]=='':
mode = res[0]
elif res[1]!='':
mode = res[0]
fname = res[1]
else:
return
f = open(path+fname, mode, encoding='utf-8')
while True:
msg = input('내용입력(멈추려면 /exit):')
if msg=='/exit':
break
else:
f.write(msg+'\n')
f.close()
def main():
memo_path = 'memo/'
init(memo_path)
while True:
menu = input('1.읽기 2.쓰기 3.삭제 4.종료')
if menu=='1':
readFile(memo_path)
elif menu=='2':
writeFile(memo_path)
elif menu=='4':
break
main() | [
"rlaalwn61@naver.com"
] | rlaalwn61@naver.com |
fdd8a87f35d8ed1df1de0ea2daeaafd48ffa105a | 7a53f6c98c9a15772632dd1346a5507f01cf462c | /brick_server/__init__.py | f03901f54b3bffbb5bff2d157c921b74e481ac92 | [
"MIT"
] | permissive | jbkoh/brick-server | 59c1642665b908b74f344a7a1cacdae66c7caf59 | 945196e4915a7ae65cf60344eab146ee4926d9dd | refs/heads/master | 2020-04-14T21:48:36.888356 | 2019-03-29T23:30:00 | 2019-03-29T23:30:00 | 164,140,827 | 0 | 3 | MIT | 2019-03-26T19:39:37 | 2019-01-04T18:19:52 | Python | UTF-8 | Python | false | false | 2,301 | py | import pdb
import json
from flask import Flask
from flask_injector import FlaskInjector
from apispec import APISpec
from apispec.ext.marshmallow import MarshmallowPlugin
from apispec_webframeworks.flask import FlaskPlugin
from .apis import blueprint, entity_api
configs = json.load(open('configs/configs.json'))
API_V1_PREFIX = '/api/v1'
def configure_binding(binder):
from brick_data.timeseries import BrickTimeseries
from brick_data.sparql import BrickSparql
from brick_server.extensions.lockmanager import LockManager
brick_ts_configs = configs['timeseries']
brick_ts = BrickTimeseries(brick_ts_configs['dbname'],
brick_ts_configs['user'],
brick_ts_configs['password'],
brick_ts_configs['host'],
brick_ts_configs['port'],
)
lockmanager_configs = configs['lockmanager']
lock_manager = LockManager(lockmanager_configs['host'],
lockmanager_configs['port'],
lockmanager_configs['dbname'],
lockmanager_configs['user'],
lockmanager_configs['password'],
)
brick_configs = configs['brick']
if configs['server']['use_hostname_as_ns']:
base_ns = 'http://{hostname}{api_prefix}{entity_api_prefix}/'.format(
hostname = configs['server']['hostname'],
api_prefix = API_V1_PREFIX,
entity_api_prefix = entity_api.path
)
else:
base_ns = brick_configs['base_ns']
brick_sparql = BrickSparql(brick_configs['host'],
brick_configs['brick_version'],
#base_ns=brick_configs['base_ns'],
base_ns=base_ns,
load_schema=True,
)
binder.bind(BrickTimeseries, to=brick_ts)
binder.bind(BrickSparql, to=brick_sparql)
binder.bind(LockManager, to=lock_manager)
def create_app(**kwargs):
app = Flask(__name__)
app.register_blueprint(blueprint, url_prefix=API_V1_PREFIX)
FlaskInjector(app=app, modules=[configure_binding])
return app
| [
"bk7749@gmail.com"
] | bk7749@gmail.com |
e98a53c426404b9b4481a4b5586e8ce40502a809 | 5b4c803f68e52849a1c1093aac503efc423ad132 | /UnPyc/tests/tests/CFG/2/return/return_for_for_.py | 5e5fb0ffb37383d7173273d0c71be087d01fdbb8 | [] | no_license | Prashant-Jonny/UnPyc | 9ce5d63b1e0d2ec19c1faa48d932cc3f71f8599c | 4b9d4ab96dfc53a0b4e06972443e1402e9dc034f | refs/heads/master | 2021-01-17T12:03:17.314248 | 2013-02-22T07:22:35 | 2013-02-22T07:22:35 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 65 | py | def f():
for i in range(3):
for i in range(3):
return
| [
"d.v.kornev@gmail.com"
] | d.v.kornev@gmail.com |
f61a44374d1cea30adaa69d4941b90640bbdb742 | 2f98aa7e5bfc2fc5ef25e4d5cfa1d7802e3a7fae | /python/python_7931.py | 6c90b1af3190bb66e38e849cbae91192e04bcb7c | [] | no_license | AK-1121/code_extraction | cc812b6832b112e3ffcc2bb7eb4237fd85c88c01 | 5297a4a3aab3bb37efa24a89636935da04a1f8b6 | refs/heads/master | 2020-05-23T08:04:11.789141 | 2015-10-22T19:19:40 | 2015-10-22T19:19:40 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 103 | py | # IPython: How to save 2 ranges of lines with the save magic command?
save two-line-ranges.py 1-4 8-14
| [
"ubuntu@ip-172-31-7-228.us-west-2.compute.internal"
] | ubuntu@ip-172-31-7-228.us-west-2.compute.internal |
4c35192d5f0030784b18b331608a102726fba307 | 81d8e62f4f5cb921c129777653b139a75dbc64a3 | /module1/file1.py | f6c4acd4666db1d4527d30b6efdc8637b0c0ec39 | [] | no_license | sambapython/python21 | 96ef61c9e6bf75ed93ce2ea9c36028a4ff448ae7 | 6ed5c8fb79dfcd3777b063d1bf539a3ff80b9722 | refs/heads/master | 2020-07-31T08:25:37.699753 | 2016-12-18T10:57:30 | 2016-12-18T10:57:30 | 73,603,370 | 0 | 2 | null | null | null | null | UTF-8 | Python | false | false | 123 | py | print "program started"
def fun():
return "this is fun in file1"
print "other statements in program"
print "program ended" | [
"sambapython@gmail.com"
] | sambapython@gmail.com |
3a1f2199bc64a36f4049e02a0ae900a3fecdef66 | bae75bf1de75fb1b76e19b0d32c778e566de570a | /smodels-database/13TeV/ATLAS/ATLAS-SUSY-2016-26/orig/InputFile_HepData_Reader.py | b11122a202cb33cc8d5dcf13d5a87699d16bbb08 | [] | no_license | andlessa/RDM | 78ae5cbadda1875c24e1bb726096b05c61627249 | ac6b242871894fee492e089d378806c2c2e7aad8 | refs/heads/master | 2023-08-16T00:47:14.415434 | 2021-09-21T20:54:25 | 2021-09-21T20:54:25 | 228,639,778 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 4,611 | py | # This script reads files given as 'input' format in the HEP data website.
# e.g. see ATLAS susy analyses.
"""
This function reads the X,Y and values from the input file.
It returns the three corresponding lists of objects.
First it creates a general list, whose entries are the lines that were read, each in the form of a list:
i.e. ListOfLines = [ [x1,y1,z1] , [x2,y2,z2] , ... , [xn,yn,zn] ]
Then it extracts the entries number 0,1 and 2 for each list, and fills the vectors XArrays, YArrays ... ZArrays, that are then returned.
Note that you have to adapt the number of vectors that are returned and read according to the number of columns present in the file.
num_col is the total number of entries you need (x,y,z for efficiencies - x,y for exclusion lines)
column is the column that you need
"""
def Reading_Values(input,num_col,column):
print 'Reading the values from the input file: ',input,' .The column containing the chosen values is number: ',column , ' . \n'
ListOfLines = []
inputFile = open(input,'r')
for line in inputFile:
if line[0] != '#' and line[0] != '*' and line[0] != '\n':
# print line
lineElements = line.split(';')
# print lineElements;
elementsList = []
for element in lineElements:
if element and element != '\n':
#fElement=float(element)
element = element.replace(' ','')
elementsList.append(element)
ListOfLines.append(elementsList)
inputFile.close()
XArray = []
YArray = []
ZArray = []
# print ListOfLines
    # It saves in the third list the values contained in the column number you specified in the parameter 'column'.
if(num_col ==3):
for list in ListOfLines:
XArray.append(list[0])
YArray.append(list[1])
ZArray.append(list[column-1])
return XArray, YArray, ZArray
elif(num_col ==2):
if(column == num_col):
for list in ListOfLines:
XArray.append(list[0])
YArray.append(list[1])
return XArray, YArray
"""
This function produces the efficiency maps: it multiplies the values for acceptance and efficiency and creates the .txt files for each region
The input parameters are the two name of the Acc and Eff files;
topo and SR are used to create the name of the output files.
BE CAREFUL if you want to divide or not by 10.000 ( i.e. if the values given are in percentage or absolute ): you can state this option
by inputting a normalization value in Norm
"""
def Map_Multiplier(topo, SR, accFile, effFile, num_col, column, Norm):
    X1,Y1,Acc = Reading_Values(accFile,num_col,column)
    X2,Y2,Eff = Reading_Values(effFile,num_col,column)
    outputMap = open('EffMap_'+topo+"_"+SR+".txt",'w')
    outputMap.write('# MassX , MassY , Eff*Acc '+'\n')
    for x1,y1,acc in zip(X1,Y1,Acc):
        for x2,y2,eff in zip(X2,Y2,Eff):
            if x1==x2 and y1==y2:
                # print x1 + ' ' + x2 + ' ' + y1 + ' ' + y2 + ' \n' # just to check if the selected values from the two files match
                outputMap.write(x1 + ' ' + y1 + ' ' + str(float(acc)*float(eff)/Norm) + '\n')
    outputMap.close()
    print "Map ", 'EffMap_'+topo+"_"+SR+".txt", ' written!'
"""
This function simply rewrites the values that you want to plot into a .dat file, in a SModelS-friendly format. It takes the values of the arrays from the Reading_Values function.
Give as parameters the arrays you want to plot, and the name of the output file.
With 'type_of_data' you specify what kind of values you are extracting.
"""
def Simple_Map_Producer(X,Y,Z,type_of_data,outputName):
    output = open(outputName+'.dat','w')
    output.write('# MassX , MassY ' + type_of_data+'\n')
    for x,y,z in zip(X,Y,Z):
        output.write(x+' '+y+' '+z +'\n')
    output.close()
def Simple_Exclusion_Producer(X,Y,type_of_data,outputName):
    output = open(outputName+'.dat','w')
    output.write('# MassX , MassY ' + type_of_data+'\n')
    for x,y in zip(X,Y):
        output.write(x+' '+y+'\n')
    output.close()
for SR in ['SR1','SR2','SR3','SR4','SR5']:
ACC = 'T2cc_ACC_'+SR+".csv"
EFF = 'T2cc_EFF_'+SR+".csv"
Map_Multiplier('T2cc',SR,ACC,EFF,3,3,10000)
X,Y = Reading_Values("T2cc_Obs_Excl.csv",2,2) #Obs_Line.dat
Simple_Exclusion_Producer(X,Y,"Obs_Excl","T2cc_Obs_Excl.dat")
X,Y = Reading_Values("T2cc_Exp_Excl.csv",2,2) #Exp_Line.dat
Simple_Exclusion_Producer(X,Y,"Exp_Excl","T2cc_Exp_Excl.dat")
X,Y,Z = Reading_Values("T2cc_Obs_UL.csv",3,3)
Simple_Map_Producer(X,Y,Z,"Obs_UL","T2cc_Obs_UL.dat")
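For reference, the parse-then-multiply flow implemented above can be sketched in a self-contained, runnable (Python 3) form; the input lines and the 10000 normalization below are invented for illustration, not taken from the T2cc files:

```python
# Minimal Python 3 sketch of the Reading_Values / Map_Multiplier flow above.
# The ';'-separated input lines and the normalization are illustrative only.

def read_columns(lines, column):
    """Parse ';'-separated lines, skipping comment lines, and return (x, y, chosen column)."""
    rows = []
    for line in lines:
        if line and line[0] not in ('#', '*', '\n'):
            parts = [p.replace(' ', '') for p in line.split(';') if p and p != '\n']
            rows.append(parts)
    xs = [r[0] for r in rows]
    ys = [r[1] for r in rows]
    zs = [r[column - 1] for r in rows]
    return xs, ys, zs

acc_lines = ['# mX ; mY ; acc', '100 ; 50 ; 12.0', '200 ; 100 ; 20.0']
eff_lines = ['100 ; 50 ; 50.0', '200 ; 100 ; 25.0']

x1, y1, acc = read_columns(acc_lines, 3)
x2, y2, eff = read_columns(eff_lines, 3)

# Multiply acceptance by efficiency on matching mass points,
# dividing by 10000 when both values are given in percent.
eff_map = {}
for xa, ya, a in zip(x1, y1, acc):
    for xe, ye, e in zip(x2, y2, eff):
        if xa == xe and ya == ye:
            eff_map[(xa, ya)] = float(a) * float(e) / 10000.0

print(eff_map)
```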
| [
"wolfgang.waltenberger@gmail.com"
] | wolfgang.waltenberger@gmail.com |
f89ffcd21d944932d0bc3df067c549070844ae55 | 2f9c2bb2c8d32368f90ef798c08848cec4ea2ebd | /jina/types/message/common.py | b3b3245b299fc6ce48d10701153c5fd2fd5037a6 | [
"Apache-2.0"
] | permissive | automation555/jina | 9e0aafd9d894bd5995f091ea0f8566a9ed0f781d | 337526c00265190fc45235b80df10c0a75b51c09 | refs/heads/master | 2023-06-03T04:33:18.460871 | 2021-06-17T08:51:21 | 2021-06-17T08:51:21 | 377,765,051 | 0 | 0 | Apache-2.0 | 2021-06-17T08:55:30 | 2021-06-17T08:50:48 | Python | UTF-8 | Python | false | false | 1,434 | py | from . import Message
from ..request import Request
from ...proto import jina_pb2
_available_commands = dict(
jina_pb2.RequestProto.ControlRequestProto.DESCRIPTOR.enum_values_by_name
)
__all__ = ['ControlMessage']
class ControlMessage(Message):
"""
Class of the protobuf message.
:param command: Command with string content. (e.g. 'IDLE', 'CANCEL', 'TERMINATE', 'STATUS')
:param pod_name: Name of the current pod, to represent routes only.
:param identity: The identity of the current pod
:param args: Additional positional arguments which are just used for the parent initialization
:param kwargs: Additional keyword arguments which are just used for the parent initialization
"""
def __init__(
self, command: str, pod_name: str = '', identity: str = '', *args, **kwargs
):
req = Request(jina_pb2.RequestProto())
if command in _available_commands:
req.control.command = getattr(
jina_pb2.RequestProto.ControlRequestProto, command
)
else:
raise ValueError(
f'command "{command}" is not supported, must be one of {_available_commands}'
)
super().__init__(
None, req, pod_name=pod_name, identity=identity, *args, **kwargs
)
req.request_type = 'control'
args = kwargs.get('args', None)
if args:
req.args = args
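The command validation in ControlMessage — look the command string up in the enum's values-by-name mapping, otherwise raise ValueError — can be sketched without the protobuf dependency. The command-to-value mapping below is a stand-in for illustration, not jina_pb2's actual enum:

```python
# Stand-in for jina_pb2's ControlRequestProto values-by-name mapping
# (the numeric values here are invented for illustration).
_available_commands = {'IDLE': 0, 'CANCEL': 1, 'TERMINATE': 2, 'STATUS': 3}

def resolve_command(command):
    """Return the enum value for a command string, mirroring ControlMessage's check."""
    if command in _available_commands:
        return _available_commands[command]
    raise ValueError(
        f'command "{command}" is not supported, must be one of {_available_commands}')

print(resolve_command('TERMINATE'))  # → 2
```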
| [
"rajashree.patil@embold.io"
] | rajashree.patil@embold.io |
a5f75c4b6cd99db91c0f65af43367b7e6670c70b | 2315c570965da85ddb276840ee158319b2fb9df4 | /tests/suggestions/test_suggest_event_webcast_controller.py | e5926bcfccf85118c0e2706f07a1dcd2e02f1fa6 | [
"MIT"
] | permissive | enterstudio/the-blue-alliance | c1779676f809471d39486d077c834c7e78520467 | b53f752fe1f059b4b6f91c841e1865a6c6b81268 | refs/heads/master | 2022-11-26T06:50:11.159102 | 2017-02-03T16:53:26 | 2017-02-03T16:53:26 | 80,987,951 | 0 | 0 | MIT | 2022-11-19T06:05:18 | 2017-02-05T11:19:22 | HTML | UTF-8 | Python | false | false | 4,945 | py | from datetime import datetime
import unittest2
import webapp2
import webtest
from google.appengine.ext import ndb
from google.appengine.ext import testbed
from webapp2_extras.routes import RedirectRoute
from consts.district_type import DistrictType
from consts.event_type import EventType
from controllers.suggestions.suggest_event_webcast_controller import SuggestEventWebcastController
from models.account import Account
from models.event import Event
from models.suggestion import Suggestion
class TestSuggestEventWebcastController(unittest2.TestCase):
def loginUser(self):
self.testbed.setup_env(
user_email="user@example.com",
user_id="123",
user_is_admin='0',
overwrite=True)
Account.get_or_insert(
"123",
email="user@example.com",
registered=True)
def setUp(self):
self.testbed = testbed.Testbed()
self.testbed.activate()
self.testbed.init_datastore_v3_stub()
self.testbed.init_memcache_stub()
self.testbed.init_user_stub()
ndb.get_context().clear_cache() # Prevent data from leaking between tests
app = webapp2.WSGIApplication([
RedirectRoute(r'/suggest/event/webcast', SuggestEventWebcastController, 'suggest-webcast', strict_slash=True),
], debug=True)
self.testapp = webtest.TestApp(app)
self.event = Event(
id="2016necmp",
name="New England District Championship",
event_type_enum=EventType.DISTRICT_CMP,
event_district_enum=DistrictType.NEW_ENGLAND,
short_name="New England",
event_short="necmp",
year=2016,
end_date=datetime(2016, 03, 27),
official=False,
city='Hartford',
state_prov='CT',
country='USA',
venue="Some Venue",
venue_address="Some Venue, Hartford, CT, USA",
timezone_id="America/New_York",
start_date=datetime(2016, 03, 24),
webcast_json="",
website="http://www.firstsv.org",
)
self.event.put()
def tearDown(self):
self.testbed.deactivate()
def getSuggestionForm(self, event_key):
response = self.testapp.get('/suggest/event/webcast?event_key={}'.format(event_key))
self.assertEqual(response.status_int, 200)
form = response.forms.get('suggest_webcast', None)
self.assertIsNotNone(form)
return form
def testLoginRedirect(self):
response = self.testapp.get('/suggest/event/webcast?event_key=2016necmp', status='3*')
response = response.follow(expect_errors=True)
self.assertTrue(response.request.path.startswith("/account/login_required"))
def testNoParams(self):
self.loginUser()
response = self.testapp.get('/suggest/event/webcast', status='3*')
response = response.follow(expect_errors=True)
self.assertEqual(response.request.path, '/')
def testSubmitEmptyForm(self):
self.loginUser()
form = self.getSuggestionForm('2016necmp')
response = form.submit().follow()
self.assertEqual(response.status_int, 200)
request = response.request
self.assertEqual(request.GET.get('status'), 'blank_webcast')
def testSubmitBadUrl(self):
self.loginUser()
form = self.getSuggestionForm('2016necmp')
form['webcast_url'] = 'The Blue Alliance'
response = form.submit().follow()
self.assertEqual(response.status_int, 200)
request = response.request
self.assertEqual(request.GET.get('status'), 'invalid_url')
def testSubmitTBAUrl(self):
self.loginUser()
form = self.getSuggestionForm('2016necmp')
form['webcast_url'] = 'http://thebluealliance.com'
response = form.submit().follow()
self.assertEqual(response.status_int, 200)
request = response.request
self.assertEqual(request.GET.get('status'), 'invalid_url')
def testSubmitWebcast(self):
self.loginUser()
form = self.getSuggestionForm('2016necmp')
form['webcast_url'] = 'https://twitch.tv/frcgamesense'
response = form.submit().follow()
self.assertEqual(response.status_int, 200)
request = response.request
self.assertEqual(request.GET.get('status'), 'success')
# Make sure the Suggestion gets created
suggestion = Suggestion.query().fetch()[0]
self.assertIsNotNone(suggestion)
self.assertEqual(suggestion.review_state, Suggestion.REVIEW_PENDING)
self.assertEqual(suggestion.target_key, '2016necmp')
self.assertEqual(suggestion.contents['webcast_url'], 'https://twitch.tv/frcgamesense')
self.assertIsNotNone(suggestion.contents.get('webcast_dict'))
| [
"noreply@github.com"
] | enterstudio.noreply@github.com |
4cf4469143aebe731974ed947d501afecb9cceab | 5636cb0c282d03e91a830d30cec3bd54c225bd3b | /TP_SPE_Supplementaires/Mines_Ponts_2015/programmes/TD02_piles_Patricia.py | 23095a301d660d6e8b3c515c2a69d854cfbce056 | [] | no_license | xpessoles/Informatique | 24d4d05e871f0ac66b112eee6c51cfa6c78aea05 | 3cb4183647dc21e3acbcbe0231553a00e41e4e55 | refs/heads/master | 2023-08-30T21:10:56.788526 | 2021-01-26T20:57:51 | 2021-01-26T20:57:51 | 375,464,331 | 2 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,542 | py | ### TD02 - piles
# question 1 : fonction est_vide
def est_vide(pile):
return len(pile)==0
# >>> est_vide((1,2,3))
# False
# >>> est_vide(())
# True
# question 2: est_pleine function
def est_pleine(pile,nb):
return len(pile)==nb
# >>> est_pleine((1,2,3),3)
# True
# >>> est_pleine((1,2,3),6)
# False
# question 3: add an element (a bit contrived)
def push(pile,el):
    pile=pile+(el,0) # I could not concatenate a tuple with a single element, minimum 2
    pile=pile[:-1]
    return(pile)
# >>> push((1,2,3),(94))
# (1, 2, 3, 94)
def pop(pile):
    dernier=pile[-1]
    pile=pile[:-1] # I cannot change the tuple itself, which is immutable
    return dernier,pile
# >>> pop((1,2,3,4,5))
# (5, (1, 2, 3, 4))
### Exercice 2 : notation polonaise inversee
# est-ce un element au hasard ?
# la pile est un tuple de strings
def est_nombre(pile,i):
return pile[i] not in ['+','-','*','/']
# >>> est_nombre(('+','1','3','*'),1)
# True
def est_operation(pile,i):
return pile[i] in ['+','-','*','/']
# >>> est_operation(('+','1','3','*'),0)
# True
# >>> est_operation(('+','1','3','*'),1)
# False
def evaluer(exp):
    ''' the expression exp must be in postfix notation '''
    pile=()
    for element in exp:
        pile=push(pile,element)
    # return pile  # result OK
    res=()
    for elt in pile:
        if elt == '+':
            b = float(pop(res)[0])
            res=pop(res)[1]
            a=float(pop(res)[0])
            res=pop(res)[1]
            res=push(res,(a+b))
        elif elt == '*':
            b = float(pop(res)[0])
            res=pop(res)[1]
            a=float(pop(res)[0])
            res=pop(res)[1]
            res=push(res,(a*b))
        elif elt == '-':
            b = float(pop(res)[0])
            res=pop(res)[1]
            a=float(pop(res)[0])
            res=pop(res)[1]
            res=push(res,(a-b))
        elif elt == '/':
            b = float(pop(res)[0])
            res=pop(res)[1]
            a=float(pop(res)[0])
            res=pop(res)[1]
            res=push(res,(a/b))
        else:
            res=push(res,(float(elt)))
    return res[0]
# does NOT work
# Question 4: '12+4*3-5+'
### Exercise 3 - road crossing
# creating random lists
import random as rd
f1=[rd.randint(0,1) for i in range(10)]
f2=[rd.randint(0,1) for i in range(8)]
def croisement(f1,f2):
f3=[]
while len(f1)!=0 and len(f2)!=0:
if f1[-1]==1: # si un véhicule dans la file 1 il est prioritaire
f3.append(1) # la file 3 reçoit le véhicule de la file 1
f1.pop() #la file 1 est dépilée
if f2[-1]==0:
f2.pop() #si pas de voiture sur la file 2 du stop avancer d'un véhicule
else: # si pas de véhicule sur la file 1 dépiler la file 2
if f2[-1]==1:
f3.append(1)
f1.pop()
f2.pop()
else:
f3.append(0)
f1.pop()
f2.pop()
if len(f1)!=0: #quand une file est vide les véhicules de la file suivant ese vide dans file 3
for i in range(len(f1)):
f3.append(f1.pop())
else:
for i in range(len(f2)):
f3.append(f2.pop())
f3.reverse() #inverser la file 3 pour avoir les véhicules dans l'ordre d'arrivée
return f3
# >>> croisement([0, 1, 1, 0, 0, 1, 1, 0, 1, 1],[0, 1, 0, 1, 1, 1, 1, 0])
# [0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
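The student's evaluator above is noted as not working on inputs such as '12+4*3-5+': iterating over the string splits multi-digit numbers into single characters. Here is a hedged sketch of an RPN evaluator over whitespace-separated tokens; the token format is an assumption and differs from the exercise's input string:

```python
# Sketch of an RPN evaluator over whitespace-separated tokens, so that
# multi-digit numbers such as 12 are kept whole (the character-by-character
# version above splits them apart).
def evaluate_rpn(expr):
    ops = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
           '*': lambda a, b: a * b, '/': lambda a, b: a / b}
    stack = []
    for token in expr.split():
        if token in ops:
            b = stack.pop()  # right operand is on top of the stack
            a = stack.pop()
            stack.append(ops[token](a, b))
        else:
            stack.append(float(token))
    return stack[0]

print(evaluate_rpn('12 4 + 3 * 5 -'))  # (12+4)*3-5 → 43.0
```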
| [
"xpessoles.ptsi@free.fr"
] | xpessoles.ptsi@free.fr |
3dc0ba05a8b84dc198cec8fdc9cb9f8b08f0af92 | c91775afdc25f8897c6839cf8294869f3e928083 | /PythonFiles/snowmass_cfg_Bj_14TEV_0_300_Conf4v2_8.py | 54b77ac3a6ffa1c5ee02b194cf972c65b0cb3be4 | [] | no_license | Saptaparna/Miscellaneous | 7e6df9cdfd10d4861e2e382b1837dbd4c26fb249 | b954189d85e56a02fe257b5f5cbd779365719c00 | refs/heads/master | 2021-01-23T13:29:30.283308 | 2017-12-20T08:26:37 | 2017-12-20T08:26:37 | 42,525,018 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 9,275 | py | import FWCore.ParameterSet.Config as cms
import FWCore.PythonUtilities.LumiList as LumiList
import FWCore.ParameterSet.Types as CfgTypes
#
# Parameters that can be set via command line
# when submitting Condor jobs
#
isMc_settable = True
isSignalMc_settable = False
def FindFile(name):
fname = 'file.txt'
return fname
process = cms.Process("LJMetCom")
##################################################################
#
# All input files needed for the job to run
# Specify them here, and they will automatically be correctly
# transferred to Condor when needed
# NOTE: you can define as many or as few entries as you wish,
# names are up to you
miscFiles = {}
miscFiles['jec_uncertainty'] = '../cond/Summer12_V2_DATA_AK5PF_UncertaintySources.txt'
miscFiles['btag_performance'] = '../cond/btag_performance_db062012.root'
miscFiles['json'] = '../data/json/Cert_190456-208686_8TeV_PromptReco_Collisions12_JSON.txt'
miscFiles['MCL1JetPar'] = '../data/START53_V7G_L1FastJet_AK5PFchs.txt'
miscFiles['MCL2JetPar'] = '../data/START53_V7G_L2Relative_AK5PFchs.txt'
miscFiles['MCL3JetPar'] = '../data/START53_V7G_L3Absolute_AK5PFchs.txt'
miscFiles['DataL1JetPar'] = '../data/FT_53_V10_AN3_L1FastJet_AK5PFchs.txt'
miscFiles['DataL2JetPar'] = '../data/FT_53_V10_AN3_L2Relative_AK5PFchs.txt'
miscFiles['DataL3JetPar'] = '../data/FT_53_V10_AN3_L3Absolute_AK5PFchs.txt'
miscFiles['DataResJetPar'] = '../data/FT_53_V10_AN3_L2L3Residual_AK5PFchs.txt'
#Arguments from condor submit script which are used more than once
condorIsMC = bool(True)
relBase = str('/uscms_data/d2/sapta/work/LJMetCode_fromGena/Dilepton_Feb25/CMSSW_5_3_7_patch4')
condorJSON = str('None')
# Dilepton calculator options
process.load('LJMet.Com.DileptonCalc_cfi')
process.DileptonCalc.isMc = condorIsMC
process.DileptonCalc.dataType = cms.string('None')
############################################################
#
# FWLite application options
#
process.ljmet = cms.PSet(
isMc = cms.bool(condorIsMC),
runs = cms.vint32([]),
verbosity = cms.int32(0)
)
#Exclude unnecessary calculators
process.ljmet.excluded_calculators = cms.vstring(
'WprimeCalc',
'LjetsTopoCalc',
'LjetsTopoCalcNew',
'StopCalc'
)
############################################################
#
# common calculator options
process.load('LJMet.Com.commonCalc_cfi')
process.CommonCalc.dummy_parameter = cms.string('Dummy parameter value')
############################################################
#
# pileup calculator options
process.load('LJMet.Com.pileupCalc_cfi')
process.PileUpCalc.verbosity = process.ljmet.verbosity
############################################################
#
# Event selector options
#
process.event_selector = cms.PSet(
selection = cms.string('DileptonSelector'),
isMc = cms.bool(condorIsMC),
# cuts
#HLT
trigger_cut = cms.bool(True),
dump_trigger = cms.bool(False),
#Can use same trigger paths for data and MC since MC is always one of the data versions
trigger_path_ee = cms.vstring('HLT_Ele17_CaloIdT_CaloIsoVL_TrkIdVL_TrkIsoVL_Ele8_CaloIdT_CaloIsoVL_TrkIdVL_TrkIsoVL_v15',
'HLT_Ele17_CaloIdT_CaloIsoVL_TrkIdVL_TrkIsoVL_Ele8_CaloIdT_CaloIsoVL_TrkIdVL_TrkIsoVL_v16',
'HLT_Ele17_CaloIdT_CaloIsoVL_TrkIdVL_TrkIsoVL_Ele8_CaloIdT_CaloIsoVL_TrkIdVL_TrkIsoVL_v17',
'HLT_Ele17_CaloIdT_CaloIsoVL_TrkIdVL_TrkIsoVL_Ele8_CaloIdT_CaloIsoVL_TrkIdVL_TrkIsoVL_v18',
'HLT_Ele17_CaloIdT_CaloIsoVL_TrkIdVL_TrkIsoVL_Ele8_CaloIdT_CaloIsoVL_TrkIdVL_TrkIsoVL_v19'),
trigger_path_em = cms.vstring('HLT_Mu8_Ele17_CaloIdT_CaloIsoVL_TrkIdVL_TrkIsoVL_v4', 'HLT_Mu8_Ele17_CaloIdT_CaloIsoVL_TrkIdVL_TrkIsoVL_v5',
'HLT_Mu8_Ele17_CaloIdT_CaloIsoVL_TrkIdVL_TrkIsoVL_v6', 'HLT_Mu8_Ele17_CaloIdT_CaloIsoVL_TrkIdVL_TrkIsoVL_v7',
'HLT_Mu8_Ele17_CaloIdT_CaloIsoVL_TrkIdVL_TrkIsoVL_v8', 'HLT_Mu8_Ele17_CaloIdT_CaloIsoVL_TrkIdVL_TrkIsoVL_v9',
'HLT_Mu17_Ele8_CaloIdT_CaloIsoVL_TrkIdVL_TrkIsoVL_v4', 'HLT_Mu17_Ele8_CaloIdT_CaloIsoVL_TrkIdVL_TrkIsoVL_v5',
'HLT_Mu17_Ele8_CaloIdT_CaloIsoVL_TrkIdVL_TrkIsoVL_v6', 'HLT_Mu17_Ele8_CaloIdT_CaloIsoVL_TrkIdVL_TrkIsoVL_v7',
'HLT_Mu17_Ele8_CaloIdT_CaloIsoVL_TrkIdVL_TrkIsoVL_v8', 'HLT_Mu17_Ele8_CaloIdT_CaloIsoVL_TrkIdVL_TrkIsoVL_v9'),
trigger_path_mm = cms.vstring('HLT_Mu17_Mu8_v16', 'HLT_Mu17_Mu8_v17', 'HLT_Mu17_Mu8_v18',
'HLT_Mu17_Mu8_v19', 'HLT_Mu17_Mu8_v21', 'HLT_Mu17_Mu8_v22',
'HLT_Mu17_TkMu8_v9', 'HLT_Mu17_TkMu8_v10', 'HLT_Mu17_TkMu8_v11',
'HLT_Mu17_TkMu8_v12', 'HLT_Mu17_TkMu8_v13', 'HLT_Mu17_TkMu8_v14'),
pv_cut = cms.bool(False),
hbhe_cut = cms.bool(False),
jet_cuts = cms.bool(False),
jet_minpt = cms.double(20.0),
jet_maxeta = cms.double(5),
min_jet = cms.int32(0),
max_jet = cms.int32(4000),
muon_cuts = cms.bool(True),
min_muon = cms.int32(0),
muon_minpt = cms.double(10.0),
muon_maxeta = cms.double(4.0),
max_muon = cms.int32(20),
electron_cuts = cms.bool(True),
min_electron = cms.int32(0),
electron_minpt = cms.double(10.0),
electron_maxeta = cms.double(4.0),
max_electron = cms.int32(20),
min_lepton = cms.int32(2),
met_cuts = cms.bool(False),
min_met = cms.double(0.0),
btag_cuts = cms.bool(False),
btagOP = cms.string("CSVM"),
btag_1 = cms.bool(True),
btag_2 = cms.bool(True),
btag_3 = cms.bool(False),
trigger_collection = cms.InputTag('TriggerResults::HLT'),
pv_collection = cms.InputTag('goodOfflinePrimaryVertices'),
jet_collection = cms.InputTag('goodPatJetsPFlow'),
muon_collection = cms.InputTag('selectedPatMuonsPFlowLoose'),
electron_collection = cms.InputTag('selectedPatElectronsPFlowLoose'),
met_collection = cms.InputTag('patMETsPFlow'),
JEC_txtfile = cms.string(miscFiles['jec_uncertainty']),
JECup = cms.bool(False),
JECdown = cms.bool(False),
JERup = cms.bool(False),
JERdown = cms.bool(False),
BTagUncertUp = cms.bool(False),
BTagUncertDown = cms.bool(True),
do53xJEC = cms.bool(True),
MCL1JetPar = cms.string(miscFiles['MCL1JetPar']),
MCL2JetPar = cms.string(miscFiles['MCL2JetPar']),
MCL3JetPar = cms.string(miscFiles['MCL3JetPar']),
DataL1JetPar = cms.string(miscFiles['DataL1JetPar']),
DataL2JetPar = cms.string(miscFiles['DataL2JetPar']),
DataL3JetPar = cms.string(miscFiles['DataL3JetPar']),
DataResJetPar = cms.string(miscFiles['DataResJetPar']),
keepFullMChistory = cms.bool(True)
)
##################################################################
#
# Input files
#
# NOTE: keep your test inputs in the python files as in
# this example, and they will be correctly substituted with
# specified input events when you submit to Condor
# (
#
# nEvents and skipEvents are for interactive use, their
# values will be correctly reset when you submit Condor
#
input_module = 'LJMet.Com.Bj_14TEV_0_300_Conf4v2_8'
process.load(input_module)
process.inputs.nEvents = cms.int32(-1)
process.inputs.skipEvents = cms.int32(0)
############################################################
#
# JSON
JsonFile = miscFiles['json']
myList = LumiList.LumiList(filename=JsonFile).getCMSSWString().split(',')
if not condorIsMC:
process.inputs.lumisToProcess.extend(myList)
#######################################################
#
# Output
#
process.outputs = cms.PSet (
outputName = cms.string('Bj_14TEV_0_300_Conf4v2_8'),
treeName = cms.string('ljmet'),
)
#######################################################
#
# Object selector options
#
# Primary vertex
process.load('PhysicsTools.SelectorUtils.pvSelector_cfi')
process.pvSelector.pvSrc = cms.InputTag('goodOfflinePrimaryVertices')
process.pvSelector.minNdof = cms.double(4.0)
process.pvSelector.maxZ = cms.double(24.0)
process.pvSelector.maxRho = cms.double(2.0)
# jets
process.load('PhysicsTools.SelectorUtils.pfJetIDSelector_cfi')
process.pfJetIDSelector.version = cms.string('FIRSTDATA')
process.pfJetIDSelector.quality = cms.string('LOOSE')
| [
"saptaparna@gmail.com"
] | saptaparna@gmail.com |
26d9469054aaa2d8af40439d3ff87f189436e3f0 | 56f5b2ea36a2258b8ca21e2a3af9a5c7a9df3c6e | /CMGTools/H2TauTau/prod/25aug_corrMC/up/mc/DY3JetsToLL_M-50_TuneZ2Star_8TeV-madgraph/Summer12_DR53X-PU_S10_START53_V7A-v1/AODSIM/V5_B/PAT_CMG_V5_16_0_1377544839/HTT_24Jul_newTES_manzoni_Up_Jobs/Job_58/run_cfg.py | f7490b8599e07eb7ffec34a1228ef176f43b870c | [] | no_license | rmanzoni/HTT | 18e6b583f04c0a6ca10142d9da3dd4c850cddabc | a03b227073b2d4d8a2abe95367c014694588bf98 | refs/heads/master | 2016-09-06T05:55:52.602604 | 2014-02-20T16:35:34 | 2014-02-20T16:35:34 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,494 | py | import FWCore.ParameterSet.Config as cms
import os,sys
sys.path.append('/afs/cern.ch/user/m/manzoni/summer13/CMGTools/CMSSW_5_3_9/src/CMGTools/H2TauTau/prod/25aug_corrMC/up/mc/DY3JetsToLL_M-50_TuneZ2Star_8TeV-madgraph/Summer12_DR53X-PU_S10_START53_V7A-v1/AODSIM/V5_B/PAT_CMG_V5_16_0_1377544839/HTT_24Jul_newTES_manzoni_Up_Jobs')
from base_cfg import *
process.source = cms.Source("PoolSource",
noEventSort = cms.untracked.bool(True),
inputCommands = cms.untracked.vstring('keep *',
'drop cmgStructuredPFJets_cmgStructuredPFJetSel__PAT'),
duplicateCheckMode = cms.untracked.string('noDuplicateCheck'),
fileNames = cms.untracked.vstring('/store/cmst3/user/cmgtools/CMG/DY3JetsToLL_M-50_TuneZ2Star_8TeV-madgraph/Summer12_DR53X-PU_S10_START53_V7A-v1/AODSIM/V5_B/PAT_CMG_V5_16_0/cmgTuple_316.root',
'/store/cmst3/user/cmgtools/CMG/DY3JetsToLL_M-50_TuneZ2Star_8TeV-madgraph/Summer12_DR53X-PU_S10_START53_V7A-v1/AODSIM/V5_B/PAT_CMG_V5_16_0/cmgTuple_317.root',
'/store/cmst3/user/cmgtools/CMG/DY3JetsToLL_M-50_TuneZ2Star_8TeV-madgraph/Summer12_DR53X-PU_S10_START53_V7A-v1/AODSIM/V5_B/PAT_CMG_V5_16_0/cmgTuple_318.root',
'/store/cmst3/user/cmgtools/CMG/DY3JetsToLL_M-50_TuneZ2Star_8TeV-madgraph/Summer12_DR53X-PU_S10_START53_V7A-v1/AODSIM/V5_B/PAT_CMG_V5_16_0/cmgTuple_319.root',
'/store/cmst3/user/cmgtools/CMG/DY3JetsToLL_M-50_TuneZ2Star_8TeV-madgraph/Summer12_DR53X-PU_S10_START53_V7A-v1/AODSIM/V5_B/PAT_CMG_V5_16_0/cmgTuple_32.root')
)
| [
"riccardo.manzoni@cern.ch"
] | riccardo.manzoni@cern.ch |
add097c8c0bbfc990db0229b131cc4d6e9aee2c8 | 45e376ae66b78b17788b1d3575b334b2cb1d0b1c | /tests/terraform/checks/resource/azure/test_AppServiceJavaVersion.py | b5041c7eaa0dafe935c47390d2a7a832719f6014 | [
"Apache-2.0"
] | permissive | bridgecrewio/checkov | aeb8febed2ed90e61d5755f8f9d80b125362644d | e64cbd27ffb6f09c2c9f081b45b7a821a3aa1a4d | refs/heads/main | 2023-08-31T06:57:21.990147 | 2023-08-30T23:01:47 | 2023-08-30T23:01:47 | 224,386,599 | 5,929 | 1,056 | Apache-2.0 | 2023-09-14T20:10:23 | 2019-11-27T08:55:14 | Python | UTF-8 | Python | false | false | 1,421 | py | import os
import unittest
from checkov.runner_filter import RunnerFilter
from checkov.terraform.runner import Runner
from checkov.terraform.checks.resource.azure.AppServiceJavaVersion import check
class TestAppServiceJavaVersion(unittest.TestCase):
def test(self):
runner = Runner()
current_dir = os.path.dirname(os.path.realpath(__file__))
test_files_dir = os.path.join(current_dir, "example_AppServiceJavaVersion")
report = runner.run(root_folder=test_files_dir,
runner_filter=RunnerFilter(checks=[check.id]))
summary = report.get_summary()
passing_resources = {
'azurerm_app_service.pass',
}
failing_resources = {
'azurerm_app_service.fail',
}
skipped_resources = {}
passed_check_resources = set([c.resource for c in report.passed_checks])
failed_check_resources = set([c.resource for c in report.failed_checks])
self.assertEqual(summary['passed'], len(passing_resources))
self.assertEqual(summary['failed'], len(failing_resources))
self.assertEqual(summary['skipped'], len(skipped_resources))
self.assertEqual(summary['parsing_errors'], 0)
self.assertEqual(passing_resources, passed_check_resources)
self.assertEqual(failing_resources, failed_check_resources)
if __name__ == '__main__':
unittest.main() | [
"noreply@github.com"
] | bridgecrewio.noreply@github.com |
461c936aa43dfc116d3a4e6bf313f171ee477ef0 | c05ed32f1ef7e1eb7d73efd674e7d1fd710ad171 | /daily-coding-problems/problem140.py | 840bf38914b4da57f1be1d0cb9995f9fdd704039 | [] | no_license | carlhinderer/python-exercises | c8367517fdf835fa1117f96dbfee3dccc596afa6 | 4e09bbb4c4e2bd5644ed50e997db9f3c289a18f7 | refs/heads/master | 2021-06-01T16:17:00.389134 | 2021-02-09T18:21:01 | 2021-02-09T18:21:01 | 150,902,917 | 0 | 0 | null | 2021-04-20T20:33:11 | 2018-09-29T21:03:36 | Python | UTF-8 | Python | false | false | 386 | py | # Problem 140
# Medium
# Asked by Facebook
#
# Given an array of integers in which two elements appear exactly once and all other elements appear
# exactly twice, find the two elements that appear only once.
#
# For example, given the array [2, 4, 6, 8, 10, 2, 6, 10], return 4 and 8. The order does not matter.
#
# Follow-up: Can you do this in linear time and constant space?
# | [
"carl.hinderer4@gmail.com"
] | carl.hinderer4@gmail.com |
e848dbcd393d04149e44c247f5a4581502207f3c | 98c6ea9c884152e8340605a706efefbea6170be5 | /examples/data/Assignment_3/chnjea007/question3.py | 0bc5afd1bdca7889b5168b1f137bc1c573eb5d55 | [] | no_license | MrHamdulay/csc3-capstone | 479d659e1dcd28040e83ebd9e3374d0ccc0c6817 | 6f0fa0fa1555ceb1b0fb33f25e9694e68b6a53d2 | refs/heads/master | 2021-03-12T21:55:57.781339 | 2014-09-22T02:22:22 | 2014-09-22T02:22:22 | 22,372,174 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 541 | py | # Assignment 3 question 3
message = input("Enter the message:\n")
repeat = eval(input("Enter the message repeat count:\n"))
thickness = eval(input("Enter the frame thickness:\n"))
for r in range (thickness):
print("|" * r,"+", "-" * ((len(message) + 2) + (thickness * 2) - 2 - r * 2), "+", "|" * r, sep = "")
for r in range (repeat):
print ("|" * thickness, message, "|" * thickness)
for r in range (thickness - 1, -1, -1):
print("|" * r,"+", "-" * ((len(message) + 2) + (thickness * 2) - 2 - r * 2), "+", "|" * r, sep = "") | [
"jarr2000@gmail.com"
] | jarr2000@gmail.com |
2b2d0ae0edba4a1583fff16fce9629c63291e3dc | 520b9a66a71e16c77beeaa28d9bc59a03cf77e79 | /shop/models.py | 92ecbc32c3cb702262f8b6657c6672214f313b16 | [] | no_license | gmachielsen/MadebyLoni | 0e40a6cc970a6ef7cc414e6e1f9e640dcbba5076 | e13706e7d61780ac0b1af9f6f8caf7f908d9ace3 | refs/heads/master | 2022-12-10T18:37:25.522641 | 2020-02-04T21:48:55 | 2020-02-04T21:48:55 | 238,231,154 | 0 | 1 | null | 2022-12-08T03:33:22 | 2020-02-04T14:52:57 | JavaScript | UTF-8 | Python | false | false | 2,476 | py | from django.db import models
from django.urls import reverse
from django.conf import settings
# Create your models here.
class Category(models.Model):
name = models.CharField(max_length=250, unique=True)
slug = models.SlugField(max_length=250, unique=True)
description = models.TextField(blank=True)
image = models.ImageField(upload_to='category', blank=True)
class Meta:
ordering = ('name',)
verbose_name = 'category'
verbose_name_plural = 'categories'
def get_url(self):
return reverse('shop:products_by_category', args=[self.slug])
def __str__(self):
return '{}'.format(self.name)
class Product(models.Model):
SIZES = (
('X', 'No size selected'),
('4', 'Size 4'),
('6', 'Size 6'),
('8', 'Size 8'),
('10','Size 10'),
('12', 'Size 12'),
('14', 'Size 14'),
('16', 'Size 16'),
)
TYPES = (
('X', 'Select type'),
('D', 'Dresses'),
('B', 'Bracelets')
)
COLOURS = (
('X', 'Select colour'),
('Black', 'Black'),
('White', 'White'),
('Red', 'Red'),
('Orange', 'Orange'),
('Yellow', 'Yellow'),
('Green', 'Green'),
('Blue', 'Blue'),
('Purple', 'Purple'),
('Pink', 'Pink'),
('Brown', 'Brown'),
('Different', 'Different'),
)
name = models.CharField(max_length=250, unique=True)
slug = models.SlugField(max_length=250, unique=True)
description = models.TextField(blank=True)
category = models.ForeignKey(Category, on_delete=models.CASCADE)
price = models.DecimalField(max_digits=10, decimal_places=2)
image = models.ImageField(upload_to='product', blank=True)
size = models.CharField(blank=True, null=True, max_length=1, default='X', choices=SIZES)
type = models.CharField(blank=True, null=True, max_length=1, default='X', choices=TYPES)
color = models.CharField(blank=True, null=True, max_length=1, default='X', choices=COLOURS)
stock = models.IntegerField()
available = models.BooleanField(default=True)
created = models.DateTimeField(auto_now_add=True)
updated = models.DateTimeField(auto_now=True)
views = models.IntegerField(default=0)
class Meta:
ordering = ('name',)
verbose_name = 'product'
verbose_name_plural = 'products'
def get_url(self):
return reverse('shop:ProdCatDetail', args=[self.category.slug, self.slug])
def __str__(self):
return '{}'.format(self.name)
class ProductImages(models.Model):
productpictures = models.ForeignKey(Product, related_name='product', on_delete=models.CASCADE)
images = models.ImageField(upload_to='product_images', blank=True)
| [
"g.machielsen@gmail.com"
] | g.machielsen@gmail.com |
0d139c974953af96e31cf51435326ff0a8ff5944 | 747febe786dd6b7fd6c63cfe73dbe3023354daa8 | /src/tt_personal_messages/tt_personal_messages/tests/test_operations.py | 212e985f4bae43ff752463c31c6c88ea1501c1a2 | [
"BSD-3-Clause"
] | permissive | the-tale/the-tale | 4e4b8d91dc873a5fb935fe58e9721a877baa6d3f | e8450bd2332344da805b1851e728da5a3e5bf0ef | refs/heads/develop | 2023-08-01T13:53:46.835667 | 2022-12-25T18:04:56 | 2022-12-25T18:04:56 | 1,949,167 | 98 | 52 | BSD-3-Clause | 2023-02-15T18:57:33 | 2011-06-24T18:49:48 | Python | UTF-8 | Python | false | false | 35,758 | py |
from aiohttp import test_utils
from tt_web import postgresql as db
from .. import relations
from .. import operations
from . import helpers
class OperationsTests(helpers.BaseTests):
async def check_account_created(self, number=1, id=666, new_messages_number=0, contacts=[]):
result = await db.sql('SELECT * FROM accounts ORDER BY created_at DESC')
self.assertEqual(len(result), number)
self.assertEqual(result[0]['id'], id)
self.assertEqual(result[0]['new_messages_number'], new_messages_number)
@test_utils.unittest_run_loop
async def test_increment_new_messages(self):
await operations.increment_new_messages(666)
await self.check_account_created(new_messages_number=1)
await operations.increment_new_messages(666)
await operations.increment_new_messages(666)
await self.check_account_created(new_messages_number=3)
@test_utils.unittest_run_loop
async def test_new_messages_number__has_account(self):
await operations.increment_new_messages(666)
await db.sql('UPDATE accounts SET new_messages_number=7')
number = await operations.new_messages_number(666)
self.assertEqual(number, 7)
@test_utils.unittest_run_loop
async def test_new_messages_number__no_account(self):
number = await operations.new_messages_number(666)
self.assertEqual(number, 0)
@test_utils.unittest_run_loop
async def test_read_messages__has_account(self):
await operations.increment_new_messages(666)
await db.sql('UPDATE accounts SET new_messages_number=7')
await operations.read_messages(666)
number = await operations.new_messages_number(666)
self.assertEqual(number, 0)
@test_utils.unittest_run_loop
async def test_read_messages__no_account(self):
await operations.read_messages(666)
number = await operations.new_messages_number(666)
self.assertEqual(number, 0)
@test_utils.unittest_run_loop
async def test_create_visibility(self):
message_1_id = await operations.create_message(sender_id=666, recipients_ids=[1, 3, 7], body='some странный text')
message_2_id = await operations.create_message(sender_id=666, recipients_ids=[1, 3, 7], body='some странный text')
await operations.create_visibility(1, message_1_id)
await operations.create_visibility(2, message_2_id)
result = await db.sql('SELECT account, message FROM visibilities')
self.assertCountEqual([dict(row) for row in result],
[{'account': 1, 'message': message_1_id},
{'account': 2, 'message': message_2_id}])
@test_utils.unittest_run_loop
async def test_add_to_conversation(self):
message_1_id = await operations.create_message(sender_id=666, recipients_ids=[1, 3, 7], body='some странный text')
message_2_id = await operations.create_message(sender_id=666, recipients_ids=[1, 3, 7], body='some странный text')
await operations.add_to_conversation(1, 2, message_1_id)
await operations.add_to_conversation(2, 1, message_2_id)
result = await db.sql('SELECT account_1, account_2, message FROM conversations')
self.assertCountEqual([dict(row) for row in result],
[{'account_1': 1, 'account_2': 2, 'message': message_1_id},
{'account_1': 1, 'account_2': 2, 'message': message_2_id}])
@test_utils.unittest_run_loop
async def test_create_message(self):
message_id = await operations.create_message(sender_id=666, recipients_ids=[1, 3, 7], body='some странный text')
result = await db.sql('SELECT * FROM messages')
self.assertEqual(len(result), 1)
self.assertEqual(result[0]['sender'], 666)
self.assertEqual(result[0]['recipients'], [1, 3, 7])
self.assertEqual(result[0]['body'], 'some странный text')
@test_utils.unittest_run_loop
async def test_send_message__visibilities_created(self):
message_id = await operations.send_message(sender_id=666, recipients_ids=[1, 3, 7], body='some странный text')
result = await db.sql('SELECT account, message, visible FROM visibilities')
self.assertCountEqual([dict(row) for row in result],
[{'account': 666, 'message': message_id, 'visible': True},
{'account': 1, 'message': message_id, 'visible': True},
{'account': 3, 'message': message_id, 'visible': True},
{'account': 7, 'message': message_id, 'visible': True}])
@test_utils.unittest_run_loop
async def test_send_message__conversations_created(self):
message_id = await operations.send_message(sender_id=666, recipients_ids=[1, 3, 7], body='some странный text')
result = await db.sql('SELECT account_1, account_2, message FROM conversations')
self.assertCountEqual([dict(row) for row in result],
[{'account_1': 1, 'account_2': 666, 'message': message_id},
{'account_1': 3, 'account_2': 666, 'message': message_id},
{'account_1': 7, 'account_2': 666, 'message': message_id}])
@test_utils.unittest_run_loop
async def test_send_message__new_messages_increment(self):
await operations.send_message(sender_id=666, recipients_ids=[1, 3, 7], body='some странный text')
await operations.send_message(sender_id=1, recipients_ids=[7], body='some странный text')
result = await db.sql('SELECT id, new_messages_number FROM accounts')
self.assertCountEqual([dict(row) for row in result],
[{'id': 1, 'new_messages_number': 1},
{'id': 3, 'new_messages_number': 1},
{'id': 7, 'new_messages_number': 2}])
@test_utils.unittest_run_loop
async def test_send_message__contacts_created(self):
message_id = await operations.send_message(sender_id=666, recipients_ids=[1, 3, 7], body='some странный text')
contacts = await operations.get_contacts(666)
self.assertCountEqual(contacts, [1, 3, 7])
contacts = await operations.get_contacts(3)
self.assertCountEqual(contacts, [666])
@test_utils.unittest_run_loop
async def test_send_message__duplicate_recipients(self):
message_id = await operations.send_message(sender_id=666, recipients_ids=[1, 3, 7, 3, 7, 7], body='some странный text')
result = await db.sql('SELECT recipients, body FROM messages')
self.assertEqual([row['body'] for row in result], ['some странный text'])
self.assertEqual(len(result[0]['recipients']), 3)
self.assertEqual(set(result[0]['recipients']), {1, 3, 7})
@test_utils.unittest_run_loop
async def test_send_message__sender_is_recipient(self):
message_id = await operations.send_message(sender_id=666, recipients_ids=[666], body='some странный text')
self.assertEqual(message_id, None)
result = await db.sql('SELECT body FROM messages')
self.assertEqual(result, [])
@test_utils.unittest_run_loop
async def test_send_message__remove_sender_from_recipients(self):
message_id = await operations.send_message(sender_id=666, recipients_ids=[1, 3, 666, 7], body='some странный text')
result = await db.sql('SELECT body FROM messages')
self.assertEqual([row['body'] for row in result], ['some странный text'])
result = await db.sql('SELECT id FROM accounts')
self.assertEqual({row['id'] for row in result}, {1, 3, 7})
result = await db.sql('SELECT recipients FROM messages WHERE id=%(id)s', {'id': message_id})
self.assertEqual(set(result[0]['recipients']), {1, 3, 7})
result = await db.sql('SELECT account, message, visible FROM visibilities')
self.assertCountEqual([dict(row) for row in result],
[{'account': 666, 'message': message_id, 'visible': True},
{'account': 1, 'message': message_id, 'visible': True},
{'account': 3, 'message': message_id, 'visible': True},
{'account': 7, 'message': message_id, 'visible': True}])
result = await db.sql('SELECT account_1, account_2, message FROM conversations')
self.assertCountEqual([dict(row) for row in result],
[{'account_1': 1, 'account_2': 666, 'message': message_id},
{'account_1': 3, 'account_2': 666, 'message': message_id},
{'account_1': 7, 'account_2': 666, 'message': message_id}])
contacts = await operations.get_contacts(666)
self.assertCountEqual(contacts, [1, 3, 7])
contacts = await operations.get_contacts(3)
self.assertCountEqual(contacts, [666])
@test_utils.unittest_run_loop
async def test_send_message__duplicate_contacts(self):
await operations.send_message(sender_id=666, recipients_ids=[1, 3, 7], body='1')
await operations.send_message(sender_id=3, recipients_ids=[1, 666], body='2')
contacts = await operations.get_contacts(666)
self.assertCountEqual(contacts, [1, 3, 7])
contacts = await operations.get_contacts(1)
self.assertCountEqual(contacts, [3, 666])
contacts = await operations.get_contacts(3)
self.assertCountEqual(contacts, [1, 666])
contacts = await operations.get_contacts(7)
self.assertCountEqual(contacts, [666])
@test_utils.unittest_run_loop
async def test_hide_message(self):
message_id = await operations.send_message(sender_id=666, recipients_ids=[1, 3, 7], body='some странный text')
await operations.hide_message(666, message_id)
await operations.hide_message(3, message_id)
result = await db.sql('SELECT account, message, visible FROM visibilities')
self.assertCountEqual([dict(row) for row in result],
[{'account': 666, 'message': message_id, 'visible': False},
{'account': 1, 'message': message_id, 'visible': True},
{'account': 3, 'message': message_id, 'visible': False},
{'account': 7, 'message': message_id, 'visible': True}])
@test_utils.unittest_run_loop
async def test_hide_all_messages(self):
message_1_id = await operations.send_message(sender_id=666, recipients_ids=[1, 3, 7], body='some странный text')
message_2_id = await operations.send_message(sender_id=3, recipients_ids=[1, 666], body='some странный text')
await operations.hide_all_messages(666)
await operations.hide_all_messages(1)
result = await db.sql('SELECT account, message, visible FROM visibilities')
self.assertCountEqual([dict(row) for row in result],
[{'account': 666, 'message': message_1_id, 'visible': False},
{'account': 1, 'message': message_1_id, 'visible': False},
{'account': 3, 'message': message_1_id, 'visible': True},
{'account': 7, 'message': message_1_id, 'visible': True},
{'account': 666, 'message': message_2_id, 'visible': False},
{'account': 1, 'message': message_2_id, 'visible': False},
{'account': 3, 'message': message_2_id, 'visible': True}])
@test_utils.unittest_run_loop
async def test_hide_conversation(self):
message_1_id = await operations.send_message(sender_id=666, recipients_ids=[1, 3, 7], body='some странный text')
message_2_id = await operations.send_message(sender_id=3, recipients_ids=[1, 666], body='some странный text')
message_3_id = await operations.send_message(sender_id=666, recipients_ids=[3], body='some странный text')
await operations.hide_conversation(666, 3)
result = await db.sql('SELECT account, message, visible FROM visibilities')
self.assertCountEqual([dict(row) for row in result],
[{'account': 666, 'message': message_1_id, 'visible': False},
{'account': 1, 'message': message_1_id, 'visible': True},
{'account': 3, 'message': message_1_id, 'visible': True},
{'account': 7, 'message': message_1_id, 'visible': True},
{'account': 666, 'message': message_2_id, 'visible': False},
{'account': 1, 'message': message_2_id, 'visible': True},
{'account': 3, 'message': message_2_id, 'visible': True},
{'account': 666, 'message': message_3_id, 'visible': False},
{'account': 3, 'message': message_3_id, 'visible': True} ])
total, messages = await operations.load_conversation(666, 3)
self.assertEqual(total, 0)
total, messages = await operations.load_conversation(3, 666)
self.assertEqual(total, 3)
@test_utils.unittest_run_loop
async def test_old_messages_ids(self):
message_1_id = await operations.send_message(sender_id=1, recipients_ids=[2, 3, 4], body='1')
message_2_id = await operations.send_message(sender_id=2, recipients_ids=[3, 4, 5], body='2')
message_3_id = await operations.send_message(sender_id=3, recipients_ids=[4, 5, 6], body='3')
result = await db.sql('SELECT created_at FROM messages WHERE id=%(id)s', {'id': message_2_id})
messages_ids = await operations.old_messages_ids(accounts_ids=[1, 2, 3], barrier=result[0]['created_at'])
self.assertEqual(messages_ids, [message_1_id])
@test_utils.unittest_run_loop
async def test_remove_messages(self):
message_1_id = await operations.send_message(sender_id=1, recipients_ids=[2, 3, 4], body='1')
message_2_id = await operations.send_message(sender_id=2, recipients_ids=[3, 4, 5], body='2')
message_3_id = await operations.send_message(sender_id=3, recipients_ids=[4, 5, 6], body='3')
await operations.remove_messages(messages_ids=[message_1_id, message_3_id])
result = await db.sql('SELECT sender FROM messages')
self.assertEqual({row['sender'] for row in result}, {2})
result = await db.sql('SELECT account, message FROM visibilities')
self.assertCountEqual([dict(row) for row in result],
[{'account': 2, 'message': message_2_id},
{'account': 3, 'message': message_2_id},
{'account': 4, 'message': message_2_id},
{'account': 5, 'message': message_2_id}])
result = await db.sql('SELECT account_1, account_2, message FROM conversations')
self.assertCountEqual([dict(row) for row in result],
[{'account_1': 2, 'account_2': 3, 'message': message_2_id},
{'account_1': 2, 'account_2': 4, 'message': message_2_id},
{'account_1': 2, 'account_2': 5, 'message': message_2_id}])
class LoadMessagesTests(helpers.BaseTests):
async def fill_database(self):
self.messages_ids = [await operations.send_message(sender_id=1, recipients_ids=[2, 3], body='1 ааа'),
await operations.send_message(sender_id=2, recipients_ids=[1, 3], body='2 ббб'),
await operations.send_message(sender_id=1, recipients_ids=[2, 4], body='3 ссс'),
await operations.send_message(sender_id=2, recipients_ids=[1, 4], body='4 ааа'),
await operations.send_message(sender_id=1, recipients_ids=[3, 4], body='5 ббб'),
await operations.send_message(sender_id=2, recipients_ids=[3, 4], body='6 ссс'),
await operations.send_message(sender_id=1, recipients_ids=[5], body='7 ааа'),
await operations.send_message(sender_id=2, recipients_ids=[5], body='8 ббб'),
await operations.send_message(sender_id=1, recipients_ids=[5], body='9 ссс')]
@test_utils.unittest_run_loop
async def test_no_messages(self):
await self.fill_database()
total, messages = await operations.load_messages(666, relations.OWNER_TYPE.random())
self.assertEqual(total, 0)
self.assertEqual(messages, [])
@test_utils.unittest_run_loop
async def test_account_and_type(self):
await self.fill_database()
total, messages = await operations.load_messages(1, relations.OWNER_TYPE.SENDER)
self.assertEqual(total, 5)
self.assertEqual({m.id for m in messages}, set(self.messages_ids[0:9:2]))
total, messages = await operations.load_messages(1, relations.OWNER_TYPE.RECIPIENT)
self.assertEqual(total, 2)
self.assertEqual({m.id for m in messages}, {self.messages_ids[1], self.messages_ids[3]})
total, messages = await operations.load_messages(2, relations.OWNER_TYPE.SENDER)
self.assertEqual(total, 4)
self.assertEqual({m.id for m in messages}, set(self.messages_ids[1:9:2]))
total, messages = await operations.load_messages(2, relations.OWNER_TYPE.RECIPIENT)
self.assertEqual(total, 2)
self.assertEqual({m.id for m in messages}, {self.messages_ids[0], self.messages_ids[2]})
@test_utils.unittest_run_loop
async def test_order(self):
await self.fill_database()
total, messages = await operations.load_messages(1, relations.OWNER_TYPE.SENDER)
self.assertEqual(total, 5)
self.assertEqual([m.id for m in messages], [m_id for m_id in reversed(self.messages_ids[0:9:2])])
@test_utils.unittest_run_loop
async def test_text(self):
await self.fill_database()
total, messages = await operations.load_messages(1, relations.OWNER_TYPE.SENDER, text='ааа')
self.assertEqual(total, 2)
self.assertEqual({m.id for m in messages}, {self.messages_ids[0], self.messages_ids[6]})
total, messages = await operations.load_messages(1, relations.OWNER_TYPE.RECIPIENT, text='ааа')
self.assertEqual(total, 1)
self.assertEqual({m.id for m in messages}, {self.messages_ids[3]})
@test_utils.unittest_run_loop
async def test_offset(self):
await self.fill_database()
total, messages = await operations.load_messages(1, relations.OWNER_TYPE.SENDER, offset=1)
self.assertEqual(total, 5)
self.assertEqual({m.id for m in messages}, set(self.messages_ids[0:8:2])) # does not include last record
@test_utils.unittest_run_loop
async def test_limit(self):
await self.fill_database()
total, messages = await operations.load_messages(1, relations.OWNER_TYPE.SENDER, limit=2)
self.assertEqual(total, 5)
self.assertEqual({m.id for m in messages}, set(self.messages_ids[6:9:2]))
@test_utils.unittest_run_loop
async def test_offset_and_limit(self):
await self.fill_database()
total, messages = await operations.load_messages(1, relations.OWNER_TYPE.SENDER, offset=1, limit=2)
self.assertEqual(total, 5)
self.assertEqual({m.id for m in messages}, set(self.messages_ids[4:7:2]))
@test_utils.unittest_run_loop
async def test_visibility(self):
await self.fill_database()
await operations.hide_message(1, self.messages_ids[1])
await operations.hide_message(1, self.messages_ids[2])
await operations.hide_message(1, self.messages_ids[8])
total, messages = await operations.load_messages(1, relations.OWNER_TYPE.SENDER, visibility=True)
self.assertEqual(total, 3)
self.assertEqual({message.id for message in messages},
{self.messages_ids[0], self.messages_ids[4], self.messages_ids[6]})
total, messages = await operations.load_messages(1, relations.OWNER_TYPE.SENDER, visibility=False)
self.assertEqual(total, 2)
self.assertEqual({message.id for message in messages},
{self.messages_ids[2], self.messages_ids[8]})
total, messages = await operations.load_messages(1, relations.OWNER_TYPE.SENDER, visibility=None)
self.assertEqual(total, 5)
self.assertEqual({message.id for message in messages},
{self.messages_ids[0],
self.messages_ids[2],
self.messages_ids[4],
self.messages_ids[6],
self.messages_ids[8]})
total, messages = await operations.load_messages(1, relations.OWNER_TYPE.RECIPIENT, visibility=True)
self.assertEqual(total, 1)
self.assertEqual({message.id for message in messages},
{self.messages_ids[3]})
total, messages = await operations.load_messages(1, relations.OWNER_TYPE.RECIPIENT, visibility=False)
self.assertEqual(total, 1)
self.assertEqual({message.id for message in messages},
{self.messages_ids[1]})
total, messages = await operations.load_messages(1, relations.OWNER_TYPE.RECIPIENT, visibility=None)
self.assertEqual(total, 2)
self.assertEqual({message.id for message in messages},
{self.messages_ids[1],
self.messages_ids[3]})
class LoadConversationTests(helpers.BaseTests):
async def fill_database(self):
self.messages_ids = [await operations.send_message(sender_id=1, recipients_ids=[2, 3], body='1 ааа'),
await operations.send_message(sender_id=2, recipients_ids=[1, 3], body='2 ббб'),
await operations.send_message(sender_id=1, recipients_ids=[2, 4], body='3 ссс'),
await operations.send_message(sender_id=2, recipients_ids=[1, 4], body='4 ааа'),
await operations.send_message(sender_id=1, recipients_ids=[3, 4], body='5 ббб'),
await operations.send_message(sender_id=2, recipients_ids=[3, 4], body='6 ссс'),
await operations.send_message(sender_id=2, recipients_ids=[5], body='10'),
await operations.send_message(sender_id=2, recipients_ids=[5], body='11'),
await operations.send_message(sender_id=1, recipients_ids=[5], body='7 ааа'),
await operations.send_message(sender_id=2, recipients_ids=[5], body='8 ббб'),
await operations.send_message(sender_id=1, recipients_ids=[5], body='9 ссс')]
@test_utils.unittest_run_loop
async def test_no_messages(self):
await self.fill_database()
total, messages = await operations.load_conversation(666, 1)
self.assertEqual(total, 0)
self.assertEqual(messages, [])
total, messages = await operations.load_conversation(3, 5)
self.assertEqual(total, 0)
self.assertEqual(messages, [])
@test_utils.unittest_run_loop
async def test_success(self):
await self.fill_database()
total, messages = await operations.load_conversation(1, 5)
self.assertEqual(total, 2)
self.assertEqual({m.id for m in messages}, {self.messages_ids[-1], self.messages_ids[-3]})
total, messages = await operations.load_conversation(5, 1)
self.assertEqual(total, 2)
self.assertEqual({m.id for m in messages}, {self.messages_ids[-1], self.messages_ids[-3]})
@test_utils.unittest_run_loop
async def test_filter_text(self):
await self.fill_database()
total, messages = await operations.load_conversation(1, 2, text='ааа')
self.assertEqual(total, 2)
self.assertEqual({m.id for m in messages}, {self.messages_ids[0], self.messages_ids[3]})
@test_utils.unittest_run_loop
async def test_success__multiple_recipients(self):
await self.fill_database()
total, messages = await operations.load_conversation(2, 3)
self.assertEqual(total, 2)
self.assertEqual({m.id for m in messages}, {self.messages_ids[1], self.messages_ids[5]})
total, messages = await operations.load_conversation(3, 2)
self.assertEqual(total, 2)
self.assertEqual({m.id for m in messages}, {self.messages_ids[1], self.messages_ids[5]})
@test_utils.unittest_run_loop
async def test_order(self):
await self.fill_database()
total, messages = await operations.load_conversation(1, 5)
self.assertEqual(total, 2)
self.assertEqual([m.id for m in messages], [self.messages_ids[-1], self.messages_ids[-3]])
total, messages = await operations.load_conversation(5, 1)
self.assertEqual(total, 2)
self.assertEqual([m.id for m in messages], [self.messages_ids[-1], self.messages_ids[-3]])
@test_utils.unittest_run_loop
async def test_offset(self):
await self.fill_database()
total, messages = await operations.load_conversation(1, 5, offset=1)
self.assertEqual(total, 2)
self.assertEqual([m.id for m in messages], [self.messages_ids[-3]])
@test_utils.unittest_run_loop
async def test_limit(self):
await self.fill_database()
total, messages = await operations.load_conversation(1, 5, limit=1)
self.assertEqual(total, 2)
self.assertEqual([m.id for m in messages], [self.messages_ids[-1]])
@test_utils.unittest_run_loop
async def test_offset_and_limit(self):
await self.fill_database()
total, messages = await operations.load_conversation(2, 5)
self.assertEqual(total, 3)
self.assertEqual([m.id for m in messages], [self.messages_ids[-2], self.messages_ids[-4], self.messages_ids[-5]])
total, messages = await operations.load_conversation(2, 5, offset=1, limit=1)
self.assertEqual(total, 3)
self.assertEqual([m.id for m in messages], [self.messages_ids[-4]])
class LoadMessageTests(helpers.BaseTests):
async def fill_database(self):
self.messages_ids = [await operations.send_message(sender_id=1, recipients_ids=[2], body='1 ааа')]
@test_utils.unittest_run_loop
async def test_sender(self):
await self.fill_database()
message = await operations.load_message(1, self.messages_ids[0])
self.assertEqual(message.body, '1 ааа')
@test_utils.unittest_run_loop
async def test_recipient(self):
await self.fill_database()
message = await operations.load_message(2, self.messages_ids[0])
self.assertEqual(message.body, '1 ааа')
@test_utils.unittest_run_loop
async def test_no_relation(self):
await self.fill_database()
message = await operations.load_message(3, self.messages_ids[0])
self.assertEqual(message, None)
@test_utils.unittest_run_loop
async def test_visibility__hide_from_sender(self):
await self.fill_database()
message_id = self.messages_ids[0]
await operations.hide_message(1, message_id)
message = await operations.load_message(1, message_id, visibility=True)
self.assertEqual(message, None)
message = await operations.load_message(1, message_id, visibility=False)
self.assertEqual(message.id, message_id)
message = await operations.load_message(1, message_id, visibility=None)
self.assertEqual(message.id, message_id)
message = await operations.load_message(2, message_id, visibility=True)
self.assertEqual(message.id, message_id)
message = await operations.load_message(2, message_id, visibility=False)
self.assertEqual(message, None)
message = await operations.load_message(2, message_id, visibility=None)
self.assertEqual(message.id, message_id)
@test_utils.unittest_run_loop
async def test_visibility__hide_from_recipient(self):
await self.fill_database()
message_id = self.messages_ids[0]
await operations.hide_message(2, message_id)
message = await operations.load_message(1, message_id, visibility=True)
self.assertEqual(message.id, message_id)
message = await operations.load_message(1, message_id, visibility=False)
self.assertEqual(message, None)
message = await operations.load_message(1, message_id, visibility=None)
self.assertEqual(message.id, message_id)
message = await operations.load_message(2, message_id, visibility=True)
self.assertEqual(message, None)
message = await operations.load_message(2, message_id, visibility=False)
self.assertEqual(message.id, message_id)
message = await operations.load_message(2, message_id, visibility=None)
self.assertEqual(message.id, message_id)
class CandidatesToRemoveIdsTests(helpers.BaseTests):
async def fill_database(self):
message_1_id = await operations.send_message(sender_id=1, recipients_ids=[2, 3, 4], body='1')
message_2_id = await operations.send_message(sender_id=2, recipients_ids=[3, 4, 5], body='2')
message_3_id = await operations.send_message(sender_id=3, recipients_ids=[4, 5, 6], body='3')
message_4_id = await operations.send_message(sender_id=4, recipients_ids=[2], body='4')
return [message_1_id, message_2_id, message_3_id, message_4_id]
@test_utils.unittest_run_loop
async def test_no_messages(self):
messages_ids = await operations.candidates_to_remove_ids()
self.assertEqual(messages_ids, [])
@test_utils.unittest_run_loop
async def test_no_candidates(self):
await self.fill_database()
messages_ids = await operations.candidates_to_remove_ids()
self.assertEqual(messages_ids, [])
@test_utils.unittest_run_loop
async def test_has_candidates(self):
messages_ids = await self.fill_database()
for message_id, account_id in [(messages_ids[0], 1),
(messages_ids[0], 2),
(messages_ids[0], 3),
(messages_ids[0], 4),
(messages_ids[1], 2),
(messages_ids[2], 4),
(messages_ids[2], 5),
(messages_ids[2], 6),
(messages_ids[3], 4),
(messages_ids[3], 2)]:
await operations.hide_message(account_id, message_id)
candidates_ids = await operations.candidates_to_remove_ids()
self.assertCountEqual(candidates_ids, [messages_ids[0], messages_ids[3]])
class GetDataReportTests(helpers.BaseTests):
async def fill_database(self):
message_1_id = await operations.send_message(sender_id=1, recipients_ids=[2, 3, 4], body='1')
message_2_id = await operations.send_message(sender_id=2, recipients_ids=[3, 4, 5], body='2')
message_3_id = await operations.send_message(sender_id=3, recipients_ids=[4, 5, 6], body='3')
message_4_id = await operations.send_message(sender_id=4, recipients_ids=[2], body='4')
return [message_1_id, message_2_id, message_3_id, message_4_id]
@test_utils.unittest_run_loop
async def test_no_messages(self):
report = await operations.get_data_report(666)
self.assertEqual(report, [])
@test_utils.unittest_run_loop
async def test_account_has_no_messages(self):
await self.fill_database()
report = await operations.get_data_report(7)
self.assertEqual(report, [])
@test_utils.unittest_run_loop
async def test_account_has_messages(self):
messages_ids = await self.fill_database()
messages = []
for message_id in messages_ids:
message = await operations.load_message(3, message_id)
messages.append(message)
report = await operations.get_data_report(3)
self.assertCountEqual(report, [('message', message.data_of(3))
for message in messages[:3]])
async def check_report(self,
account_id,
all_messages_ids,
expected_messages):
messages = []
for message_id in all_messages_ids:
message = await operations.load_message(account_id, message_id, visibility=None)
messages.append(message)
report = await operations.get_data_report(account_id)
self.assertCountEqual(report, [('message', messages[message_index].data_of(account_id))
for message_index in expected_messages])
@test_utils.unittest_run_loop
async def test_remove_messages(self):
messages_ids = await self.fill_database()
for message_id, account_id in [(messages_ids[0], 1),
(messages_ids[0], 2),
(messages_ids[0], 3),
(messages_ids[0], 4),
(messages_ids[1], 2),
(messages_ids[2], 4),
(messages_ids[2], 5),
(messages_ids[2], 6),
(messages_ids[3], 4),
(messages_ids[3], 2)]:
await operations.hide_message(account_id, message_id)
removed_messages_ids = await operations.candidates_to_remove_ids()
self.assertNotEqual(removed_messages_ids, [])
await operations.remove_messages(removed_messages_ids)
await self.check_report(1, messages_ids, [])
await self.check_report(2, messages_ids, [1])
await self.check_report(3, messages_ids, [1, 2])
await self.check_report(4, messages_ids, [1, 2])
await self.check_report(5, messages_ids, [1, 2])
await self.check_report(6, messages_ids, [2])
await operations.hide_message(3, messages_ids[2])
await self.check_report(1, messages_ids, [])
await self.check_report(2, messages_ids, [1])
await self.check_report(3, messages_ids, [1, 2])
await self.check_report(4, messages_ids, [1, 2])
await self.check_report(5, messages_ids, [1, 2])
await self.check_report(6, messages_ids, [2])
removed_messages_ids = await operations.candidates_to_remove_ids()
self.assertNotEqual(removed_messages_ids, [])
await operations.remove_messages(removed_messages_ids)
await self.check_report(1, messages_ids, [])
await self.check_report(2, messages_ids, [1])
await self.check_report(3, messages_ids, [1])
await self.check_report(4, messages_ids, [1])
await self.check_report(5, messages_ids, [1])
await self.check_report(6, messages_ids, [])
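The removal tests above exercise a simple invariant: a message becomes a removal candidate only once every participant (sender and all recipients) has hidden it. A minimal in-memory sketch of that rule, with illustrative names that mirror but are not the service's actual `operations` API:

```python
class MessageStore:
    """In-memory sketch of the visibility/removal rules the tests exercise."""

    def __init__(self):
        self.next_id = 1
        self.visibilities = {}  # message_id -> {account_id: visible}

    def send_message(self, sender_id, recipients_ids):
        # mirror the tested behaviour: drop duplicate recipients and the sender
        recipients = set(recipients_ids) - {sender_id}
        if not recipients:
            return None  # a message only to yourself is rejected
        message_id = self.next_id
        self.next_id += 1
        # every participant, including the sender, starts with the message visible
        self.visibilities[message_id] = {a: True for a in recipients | {sender_id}}
        return message_id

    def hide_message(self, account_id, message_id):
        self.visibilities[message_id][account_id] = False

    def candidates_to_remove_ids(self):
        # a message may be removed once no participant can still see it
        return [m for m, vis in self.visibilities.items()
                if not any(vis.values())]
```

This matches `test_send_message__sender_is_recipient` (self-only messages return `None`) and `test_has_candidates` (only fully-hidden messages are returned).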
# author: a.eletsky@gmail.com
# File: benchmark/startQiskit_Class941.py
# Repo: UCLA-SEAL/QDiff (BSD-3-Clause, Python)
# qubit number=5
# total number=37
import cirq
import qiskit
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister
from qiskit import BasicAer, execute, transpile
from pprint import pprint
from qiskit.test.mock import FakeVigo
from math import log2, floor, sqrt, pi
import numpy as np
import networkx as nx
def build_oracle(n: int, f) -> QuantumCircuit:
# implement the oracle O_f^\pm
# NOTE: use U1 gate (P gate) with \lambda = 180 ==> CZ gate
# or multi_control_Z_gate (issue #127)
controls = QuantumRegister(n, "ofc")
oracle = QuantumCircuit(controls, name="Zf")
for i in range(2 ** n):
rep = np.binary_repr(i, n)
if f(rep) == "1":
for j in range(n):
if rep[j] == "0":
oracle.x(controls[j])
# oracle.h(controls[n])
if n >= 2:
oracle.mcu1(pi, controls[1:], controls[0])
for j in range(n):
if rep[j] == "0":
oracle.x(controls[j])
# oracle.barrier()
return oracle
def make_circuit(n:int,f) -> QuantumCircuit:
# circuit begin
input_qubit = QuantumRegister(n,"qc")
classical = ClassicalRegister(n, "qm")
prog = QuantumCircuit(input_qubit, classical)
prog.h(input_qubit[0]) # number=3
prog.h(input_qubit[1]) # number=4
prog.h(input_qubit[2]) # number=5
prog.h(input_qubit[2]) # number=34
prog.cz(input_qubit[3],input_qubit[2]) # number=35
prog.h(input_qubit[2]) # number=36
prog.y(input_qubit[2]) # number=33
prog.h(input_qubit[3]) # number=6
prog.h(input_qubit[4]) # number=21
Zf = build_oracle(n, f)
repeat = floor(sqrt(2 ** n) * pi / 4)
for i in range(1):
prog.append(Zf.to_gate(), [input_qubit[i] for i in range(n)])
prog.h(input_qubit[0]) # number=1
prog.h(input_qubit[1]) # number=2
prog.h(input_qubit[2]) # number=7
prog.h(input_qubit[3]) # number=8
prog.h(input_qubit[3]) # number=30
prog.cz(input_qubit[4],input_qubit[3]) # number=31
prog.h(input_qubit[3]) # number=32
prog.h(input_qubit[2]) # number=29
prog.cx(input_qubit[1],input_qubit[0]) # number=22
prog.cx(input_qubit[3],input_qubit[1]) # number=25
prog.x(input_qubit[0]) # number=23
prog.cx(input_qubit[1],input_qubit[0]) # number=24
prog.x(input_qubit[1]) # number=10
prog.x(input_qubit[2]) # number=11
prog.x(input_qubit[3]) # number=12
prog.x(input_qubit[1]) # number=27
if n>=2:
prog.mcu1(pi,input_qubit[1:],input_qubit[0])
prog.x(input_qubit[0]) # number=13
prog.x(input_qubit[1]) # number=14
prog.x(input_qubit[2]) # number=15
prog.x(input_qubit[3]) # number=16
prog.h(input_qubit[0]) # number=17
prog.h(input_qubit[1]) # number=18
prog.h(input_qubit[2]) # number=19
prog.h(input_qubit[3]) # number=20
prog.h(input_qubit[0])
prog.h(input_qubit[1])
prog.h(input_qubit[2])
prog.h(input_qubit[3])
# circuit end
return prog
if __name__ == '__main__':
key = "00000"
f = lambda rep: str(int(rep == key))
prog = make_circuit(5,f)
backend = BasicAer.get_backend('statevector_simulator')
    sample_shot = 7924
info = execute(prog, backend=backend).result().get_statevector()
qubits = round(log2(len(info)))
info = {
np.binary_repr(i, qubits): round((info[i]*(info[i].conjugate())).real,3)
for i in range(2 ** qubits)
}
backend = FakeVigo()
circuit1 = transpile(prog,backend,optimization_level=2)
writefile = open("../data/startQiskit_Class941.csv","w")
print(info,file=writefile)
print("results end", file=writefile)
print(circuit1.depth(),file=writefile)
print(circuit1,file=writefile)
writefile.close()
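The benchmark's `repeat = floor(sqrt(2 ** n) * pi / 4)` line computes the standard Grover iteration count for a single marked state among `2**n` candidates (here the script only runs one iteration regardless). The formula in isolation, as a small helper:

```python
from math import floor, pi, sqrt


def grover_iterations(n_qubits: int) -> int:
    """Optimal number of Grover iterations for one marked state among
    2**n_qubits candidates -- the same expression the script computes."""
    return floor(sqrt(2 ** n_qubits) * pi / 4)
```

For the 5-qubit circuit above this gives 4 iterations, so a single application of `Zf` plus diffusion (as the script does) undershoots the optimum.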
# author: wangjiyuan123@yeah.net
# File: apps/data_taking_scripts/2015-10-jpl-park/sweep_and_stream_at_min_s21_two_groups.py
# Repo: vapor36/kid_readout (BSD-2-Clause, Python)
__author__ = 'gjones'
import time
import sys
import numpy as np
from kid_readout.roach import heterodyne
from kid_readout.utils import data_file, sweeps
from kid_readout.equipment import hittite_controller, lockin_controller
hittite = hittite_controller.hittiteController(addr='192.168.0.200')
lockin = lockin_controller.lockinController()
print lockin.get_idn()
ri = heterodyne.RoachHeterodyne(adc_valon='/dev/ttyUSB0')
ri.iq_delay = 0
ri.set_lo(1410.0)
#group_1_lo = 1020.0
#group_2_lo = 1410.0
#all_f0s = np.load('/data/readout/resonances/2016-01-13-jpl-2015-10-park-dark-32-resonances-split-at-1300.npy') -0.5
#group_1_f0 = all_f0s[all_f0s < 1300]
#group_2_f0 = all_f0s[all_f0s > 1300]
"""
all_f0s = np.load('/data/readout/resonances/2016-02-12-jpl-park-100nm-32-resonances.npy')
group_1_f0 = all_f0s[all_f0s<1500]
group_2_f0 = all_f0s[all_f0s>1800]
group_1_lo = 1220.0
group_2_lo = 1810.0
"""
all_f0s = np.load('/data/readout/resonances/2016-02-24-jpl-park-2015-10-40nm-al-niobium-gp-two-groups.npy')
group_1_f0 = all_f0s[all_f0s<1300]
group_2_f0 = all_f0s[all_f0s>1300]
group_1_lo = 1030.0
group_2_lo = 1420.0
#responsive_resonances = np.load('/data/readout/resonances/2015-11-26-jpl-nevins-responsive-resonances.npy')
suffix = "sweep_and_stream"
mmw_source_modulation_freq = ri.set_modulation_output(rate=7)
mmw_source_frequency = -1 #148e9
hittite.set_freq(mmw_source_frequency/12.0)
mmw_atten_turns = (4.5, 4.5)
#print "modulating at: {}".format(mmw_source_modulation_freq),
atonce = 16
df = data_file.DataFile(suffix=suffix)
df.nc.mmw_atten_turns = mmw_atten_turns
for group_num,(lo,f0s) in enumerate(zip([group_1_lo,group_2_lo],[group_1_f0,group_2_f0])):
print "group",group_num,"lo",lo,"min f0",f0s.min()
ri.set_lo(lo)
nsamp = 2**16
step = 1
nstep = 64
f0binned = np.round(f0s * nsamp / 512.0) * 512.0 / nsamp
offset_bins = np.arange(-(nstep + 1), (nstep + 1)) * step
offsets = offset_bins * 512.0 / nsamp
measured_freqs = sweeps.prepare_sweep(ri, f0binned, offsets, nsamp=nsamp)
for atten_index,dac_atten in enumerate([0,20]):
print "at dac atten", dac_atten
ri.set_dac_atten(dac_atten)
ri.set_modulation_output('low')
df.log_hw_state(ri)
df.log_adc_snap(ri)
sweep_data = sweeps.do_prepared_sweep(ri, nchan_per_step=atonce, reads_per_step=2)
df.add_sweep(sweep_data)
fmins = []
for k in range(len(f0s)):
fr, s21, errors = sweep_data.select_index(k)
fmins.append(fr[np.abs(s21).argmin()])
fmins.sort()
ri.add_tone_freqs(np.array(fmins))
ri.select_bank(ri.tone_bins.shape[0] - 1)
# ri.set_tone_freqs(responsive_resonances[:32],nsamp=2**15)
ri.select_fft_bins(range(len(f0s)))
ri._sync()
time.sleep(0.5)
print "taking data with source on"
# raw_input("press enter to start")
ri.set_modulation_output('low')
df.log_hw_state(ri)
nsets = len(f0s) / atonce
tsg = None
for iset in range(nsets):
selection = range(len(f0s))[iset::nsets]
ri.select_fft_bins(selection)
ri._sync()
time.sleep(0.4)
t0 = time.time()
dmod, addr = ri.get_data(256) # about 30 seconds of data
# x, y, r, theta = lockin.get_data()
tsg = df.add_timestream_data(dmod, ri, t0, tsg=tsg)
df.sync()
print "taking sweep with source on"
ri.set_modulation_output('high')
df.log_hw_state(ri)
df.log_adc_snap(ri)
sweep_data = sweeps.do_prepared_sweep(ri, nchan_per_step=atonce, reads_per_step=2)
df.add_sweep(sweep_data)
fmins = []
for k in range(len(f0s)):
fr, s21, errors = sweep_data.select_index(k)
fmins.append(fr[np.abs(s21).argmin()])
fmins.sort()
ri.add_tone_freqs(np.array(fmins))
ri.select_bank(ri.tone_bins.shape[0] - 1)
# ri.set_tone_freqs(responsive_resonances[:32],nsamp=2**15)
ri.select_fft_bins(range(len(f0s)))
ri._sync()
time.sleep(0.5)
print "taking timestream with source off"
# raw_input("press enter to start")
ri.set_modulation_output('high')
df.log_hw_state(ri)
nsets = len(f0s) / atonce
tsg = None
for iset in range(nsets):
selection = range(len(f0s))[iset::nsets]
ri.select_fft_bins(selection)
ri._sync()
time.sleep(0.4)
t0 = time.time()
dmod, addr = ri.get_data(256) # about 30 seconds of data
# x, y, r, theta = lockin.get_data()
tsg = df.add_timestream_data(dmod, ri, t0, tsg=tsg)
df.sync()
#raw_input("finished")
print "taking data with source modulated"
ri.set_modulation_output(7)
df.log_hw_state(ri)
nsets = len(f0s) / atonce
tsg = None
for iset in range(nsets):
selection = range(len(f0s))[iset::nsets]
ri.select_fft_bins(selection)
ri._sync()
time.sleep(0.4)
t0 = time.time()
dmod, addr = ri.get_data(16) # about 2 seconds of data
x, y, r, theta = lockin.get_data()
tsg = df.add_timestream_data(dmod, ri, t0, tsg=tsg,zbd_voltage=r,mmw_source_freq=mmw_source_frequency)
df.sync()
#ri.set_modulation_output('high')
df.close() | [
"glenn.caltech@gmail.com"
] | glenn.caltech@gmail.com |
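The sweep script above snaps each requested resonance frequency onto the tone-bin grid with `np.round(f0s * nsamp / 512.0) * 512.0 / nsamp`. A minimal pure-Python sketch of that quantization step (the `nsamp` and `512.0` values are taken from the script; the function name is illustrative, and no hardware is assumed):

```python
def snap_to_bin_grid(f0_mhz, nsamp=2 ** 16, scale=512.0):
    """Round a frequency (MHz) to the nearest representable tone bin.

    The bin spacing is scale / nsamp MHz, mirroring the script's
    f0binned = np.round(f0s * nsamp / 512.0) * 512.0 / nsamp.
    """
    return round(f0_mhz * nsamp / scale) * scale / nsamp

# Bin spacing is 512 / 65536 = 0.0078125 MHz, so every snapped frequency
# lies within half a bin of the requested one.
snapped = snap_to_bin_grid(1030.123)
assert abs(snapped - 1030.123) <= 0.5 * 512.0 / 2 ** 16
# A frequency already on the grid passes through unchanged.
assert snap_to_bin_grid(1030.0) == 1030.0
```

The script applies the same rounding vectorized over the whole `f0s` array via NumPy; the scalar form above just makes the bin arithmetic explicit.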
6c0a86308a5a8a1586671cc3b900cc1e6306f8d9 | 028d788c0fa48a8cb0cc6990a471e8cd46f6ec50 | /Python-Web/pythons/pythons/pythons_auth/forms.py | 37236c9031fdea32b32534fdb04a213e7b4490ee | [] | no_license | Sheko1/SoftUni | d6b8e79ae545116f4c0e5705ad842f12d77a9c9d | a9fbeec13a30231b6a97c2b22bb35257ac1481c0 | refs/heads/main | 2023-07-13T15:39:48.826925 | 2021-08-21T12:51:02 | 2021-08-21T12:51:02 | 317,266,200 | 2 | 3 | null | null | null | null | UTF-8 | Python | false | false | 669 | py | from django import forms
from django.contrib.auth import authenticate
from django.contrib.auth.forms import AuthenticationForm
from django.core.exceptions import ValidationError
class LoginForm(AuthenticationForm):
user = None
username = forms.CharField(
max_length=150
)
password = forms.CharField(
widget=forms.PasswordInput()
)
def clean(self):
super().clean()
self.user = authenticate(username=self.cleaned_data['username'], password=self.cleaned_data['password'])
if not self.user:
raise ValidationError('Wrong username or password!!')
def save(self):
return self.user
| [
"martinkypar@gmail.com"
] | martinkypar@gmail.com |
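The form above follows a common pattern: authenticate inside `clean()`, stash the resulting user on the instance, raise a validation error on failure, and hand the stashed user back from `save()`. A framework-free toy sketch of that flow (plain Python with hypothetical names, not Django's API):

```python
class LoginCheck:
    """Toy analogue of the LoginForm above: validate credentials in
    clean(), stash the result, and return it from save()."""

    def __init__(self, users):
        self._users = users          # username -> password, illustrative only
        self.user = None

    def clean(self, username, password):
        # Mirrors the form: authenticate, and fail loudly if it doesn't match.
        if self._users.get(username) != password:
            raise ValueError('Wrong username or password!')
        self.user = username
        return self.user

    def save(self):
        # Like the form's save(), this just hands back the validated user.
        return self.user

checker = LoginCheck({'alice': 's3cret'})
assert checker.clean('alice', 's3cret') == 'alice'
assert checker.save() == 'alice'
```

In the real form, `authenticate()` consults Django's auth backends and `ValidationError` is surfaced on the bound form; the sketch only illustrates the validate-then-stash control flow.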
92e841ff0bf1be4e124ac812dc415c948c1bca8e | f51c6d0cebb27c377ce9830deec4b727b9b2ee90 | /AI/04_Plot/heatmap/heatmap1.py | a737755c2ee0a26831af3838ed9931f17c38f058 | [] | no_license | dbbudd/Python-Experiments | 1c3c1322583aaaf2016a2f2f3061e6d034c5d1c8 | b6d294bf11a5c92b8578d16aa2f63cc27fc47b07 | refs/heads/master | 2020-04-17T02:21:36.693593 | 2019-01-17T00:18:34 | 2019-01-17T00:18:34 | 166,130,283 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 316 | py | #!/usr/bin/env python
import matplotlib.pyplot as plt
import matplotlib
import matplotlib.patches as mpatches
import matplotlib.cm
import pylab as pl
data = pl.random((25,25)) # 25x25 matrix of values
fig = plt.figure()
ax = fig.add_subplot(111)  # fixed limits of (0, 3) would clip the 25x25 grid
pl.pcolor(data)
pl.colorbar()
pl.show() | [
"dbbudd@gmail.com"
] | dbbudd@gmail.com |
550f9a58d50db74c3fb57272a3a0564aca4205a3 | 3c52eda991b4a37e2b807dd1e05f07139637c758 | /examples/client_server.py | 080e7b03ea846b1f8525127d5609ed971d3c8e54 | [
"Apache-2.0"
] | permissive | pgiri/pycos | ebea05b045f15f505eff5cf175798c0cf2b4a1db | 6594c311a02490ae0701fa741b508c335f305816 | refs/heads/master | 2022-12-25T21:53:15.091319 | 2022-12-18T17:27:05 | 2022-12-18T17:27:05 | 91,977,091 | 52 | 9 | NOASSERTION | 2020-02-19T01:47:09 | 2017-05-21T17:58:23 | Python | UTF-8 | Python | false | false | 888 | py | #!/usr/bin/env python
# client and server tasks communicating with message passing
# (asynchronous concurrent programming);
# see https://pycos.sourceforge.io/pycos.html for details.
import random
import pycos
def server_proc(task=None):
task.set_daemon()
while True:
msg = yield task.receive()
print('Received %s' % (msg))
def client_proc(server, n, task=None):
global msg_id
for i in range(3):
yield task.suspend(random.uniform(0.5, 3))
# although multiple clients execute this method, locking is not
        # necessary, as a task is not preempted (unlike Python threads) and runs
# till 'yield'
msg_id += 1
server.send('msg_id %d: client %d, msg %d' % (msg_id, n, i))
msg_id = 0
# create server
server = pycos.Task(server_proc)
# create 10 clients
for i in range(10):
pycos.Task(client_proc, server, i)
| [
"pgiri@yahoo.com"
] | pgiri@yahoo.com |
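The pycos example above is the classic one-mailbox/many-senders message-passing pattern. As a hedged analogue (standard-library `asyncio`, not pycos itself), the same shape can be sketched with an `asyncio.Queue` playing the role of the server task's inbox:

```python
import asyncio

async def server(mailbox, n_expected, received):
    # Drain the mailbox until every expected message has arrived,
    # like server_proc's `yield task.receive()` loop.
    for _ in range(n_expected):
        received.append(await mailbox.get())

async def client(mailbox, client_id, n_msgs):
    for i in range(n_msgs):
        await asyncio.sleep(0)               # yield control, like task.suspend()
        await mailbox.put((client_id, i))    # no locking needed: single thread

async def main(n_clients=3, n_msgs=2):
    mailbox = asyncio.Queue()
    received = []
    senders = [client(mailbox, c, n_msgs) for c in range(n_clients)]
    await asyncio.gather(server(mailbox, n_clients * n_msgs, received), *senders)
    return received

messages = asyncio.run(main())
assert len(messages) == 6                    # 3 clients x 2 messages each
```

As with pycos tasks, the coroutines here run cooperatively on one thread, which is why the shared `msg_id`-style state in the original needs no lock.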
daea595ccbb9c88ee865036583ea376a5607127f | 00e804d17f4882e10c192bccebc6f90d60a78162 | /test/verif.py | 1762cadff3a340214e58ca062096aac3f716d3b7 | [] | no_license | devnandito/dataProcess | 2b57006b5f39c47b292e18293db9bdecdfee0744 | b5da91184bf6d8702f74cabbef46e2b4b25b16ac | refs/heads/master | 2023-02-16T21:03:07.412468 | 2021-01-18T21:32:26 | 2021-01-18T21:32:26 | 324,022,239 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,499 | py | import pandas as pd, json, csv, re, os, sys
from datetime import datetime, timedelta
if __name__ == '__main__':
now = datetime.now()
ihour = now.hour
iminute = now.minute
isecond = now.second
start = timedelta(hours=ihour, minutes=iminute, seconds=isecond)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
while True:
initial = input('Enter options start/quit:')
initial = initial.lower()
if initial == 'quit':
break
elif initial == 'start':
file_log = os.path.join(BASE_DIR, 'set/vjsonveriflog.txt')
f = open(file_log, "r")
f1 = f.readlines()
list_log = []
for x in f1:
list_log.append(x)
count = int(list_log[0])
fname = list_log[1]
f.close()
file_json = input('Enter file json:')
output_json = input('Enter output file:')
data_frame1 = pd.read_json(file_json).to_dict(orient='records')
data_join = list()
line = 0
counts = dict()
activities = dict()
sources = dict()
for row in data_frame1:
line += 1
ci = row['ci']
if ci not in counts:
counts[ci] = 1
activities[ci] = row['activity']
sources[ci] = row['source']
data_join.append({
'ci': row['ci'],
'ruc': row['ruc'],
'name': row['name'],
'activity': activities[ci],
'status': row['status'],
'salary': row['salary'],
'source': sources[ci],
'count': counts[ci]
})
print('New: {}, Line: {}'.format(ci,line))
else:
counts[ci] = counts[ci] + 1
activities[ci] = activities[ci] + '/' + row['activity'] + '-' + str(counts[ci])
sources[ci] = sources[ci] + '/' + row['source']
                    for i in range(len(data_join)):
                        if data_join[i]['ci'] == ci:
                            data_join[i]['activity'] = activities[ci]
                            data_join[i]['source'] = sources[ci]
                            data_join[i]['count'] = counts[ci]
                            # stop once the kept record for this ci is updated
                            break
print('Duplicated: {}, Line: {}'.format(ci,line))
ofile = os.path.join(BASE_DIR, 'set/results/'+output_json)
with open(ofile, 'w+') as outfile:
json.dump(data_join, outfile, indent=4)
now = datetime.now()
ohour = now.hour
ominute = now.minute
osecond = now.second
end = timedelta(hours=ohour, minutes=ominute, seconds=osecond)
timerun = end - start
message = '''
Time start: {} \n
Runtime: {} \n
Time finish: {} \n
File: {}
'''.format(start, timerun, end, ofile)
print(message)
count += 1
f = open(file_log, 'w')
f.write(str(count)+'\n')
f.write(str(list_log[1]))
f.close()
else:
continue
| [
"fhersa@gmail.com"
] | fhersa@gmail.com |
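The deduplication loop in `verif.py` keys records on `ci`, keeps the first occurrence, and folds later activities/sources into it while counting duplicates. A minimal dict-based sketch of that fold (field names borrowed from the script; the data is purely illustrative):

```python
def fold_records(rows):
    """Collapse rows sharing a 'ci' key, concatenating activity/source
    and counting occurrences, like the script's counts/activities dicts."""
    merged = {}                      # ci -> merged record, insertion-ordered
    for row in rows:
        ci = row["ci"]
        if ci not in merged:
            merged[ci] = dict(row, count=1)
        else:
            rec = merged[ci]
            rec["count"] += 1
            # Same concatenation scheme as the script: slash-joined, with
            # the running duplicate count appended to the activity.
            rec["activity"] += "/" + row["activity"] + "-" + str(rec["count"])
            rec["source"] += "/" + row["source"]
    return list(merged.values())

rows = [
    {"ci": "1", "activity": "farming", "source": "a.csv"},
    {"ci": "2", "activity": "retail", "source": "a.csv"},
    {"ci": "1", "activity": "fishing", "source": "b.csv"},
]
out = fold_records(rows)
assert len(out) == 2
assert out[0]["activity"] == "farming/fishing-2"
assert out[0]["source"] == "a.csv/b.csv"
```

Keying a dict on `ci` also removes the script's linear rescan of `data_join` on every duplicate, turning the fold into a single pass.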
40c97a61ac24b8475ed30e89689f8f2dea1aff73 | 1aa94863e9c2667ab937ebc23bcbe467c1c17424 | /homeworks/hw_op_1/parentheses.py | 306f3a5a08bf894fce5a98212565cfb7a0f29036 | [] | no_license | cyr1z/learn_python | 9d6648f10a1babd3bcff7cb3e19e63942518953a | 188ae51737f3b47e9acaaebf9a91530b2fa60194 | refs/heads/master | 2022-12-09T05:24:53.494187 | 2020-09-05T20:17:20 | 2020-09-05T20:17:20 | 281,864,043 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,098 | py | # 3. Write a Python class to find validity of a string of parentheses, '(', ')', '{', '}', '[' and '].'
# ' These brackets must be close in the correct order, for example "()" and "()[]{}" are valid '
# but "[)", "({[)]" and ' '"{{{" are invalid.
class Parentheses:
open_list = ["[", "{", "("]
close_list = ["]", "}", ")"]
def is_parentheses(self, string):
stack = []
for i in string:
if i in self.open_list:
stack.append(i)
            if i in self.close_list:
                # A closer with nothing on the stack has no opener to match.
                if not stack:
                    return False
                if self.close_list.index(i) == self.open_list.index(stack[-1]):
                    stack.pop()
                else:
                    return False
if stack:
return False
return True
balanced_string = '123 (14) [2, 3: {16, 9}, (90, a[1])]'
unbalanced_string = '44 (5t6y) [2, 3: {16, 9}, {(90, a[1]))]'
print(Parentheses().is_parentheses(balanced_string)) # True
print(Parentheses().is_parentheses(unbalanced_string)) # False
print(Parentheses().is_parentheses('{}')) # True
print(Parentheses().is_parentheses('{')) # False
| [
"cyr@zolotarev.pp.ua"
] | cyr@zolotarev.pp.ua |
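The stack-based matcher above can also be written with a close-to-open pair map, avoiding the paired `.index()` lookups. A compact sketch with the same semantics (including rejecting a closer that arrives on an empty stack):

```python
def is_balanced(text):
    pairs = {")": "(", "]": "[", "}": "{"}   # close -> matching open
    stack = []
    for ch in text:
        if ch in "([{":
            stack.append(ch)
        elif ch in pairs:
            # A closer must match the most recent unmatched opener.
            if not stack or stack.pop() != pairs[ch]:
                return False
    return not stack   # leftover openers mean the string is unbalanced

assert is_balanced("123 (14) [2, 3: {16, 9}, (90, a[1])]")
assert not is_balanced("44 (5t6y) [2, 3: {16, 9}, {(90, a[1]))]")
assert not is_balanced(")(")
```

Non-bracket characters fall through both branches and are ignored, so mixed text like the example strings above works unchanged.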