hexsha stringlengths 40 40 | size int64 1 1.03M | ext stringclasses 10 values | lang stringclasses 1 value | max_stars_repo_path stringlengths 3 239 | max_stars_repo_name stringlengths 5 130 | max_stars_repo_head_hexsha stringlengths 40 78 | max_stars_repo_licenses listlengths 1 10 | max_stars_count int64 1 191k ⌀ | max_stars_repo_stars_event_min_datetime stringlengths 24 24 ⌀ | max_stars_repo_stars_event_max_datetime stringlengths 24 24 ⌀ | max_issues_repo_path stringlengths 3 239 | max_issues_repo_name stringlengths 5 130 | max_issues_repo_head_hexsha stringlengths 40 78 | max_issues_repo_licenses listlengths 1 10 | max_issues_count int64 1 67k ⌀ | max_issues_repo_issues_event_min_datetime stringlengths 24 24 ⌀ | max_issues_repo_issues_event_max_datetime stringlengths 24 24 ⌀ | max_forks_repo_path stringlengths 3 239 | max_forks_repo_name stringlengths 5 130 | max_forks_repo_head_hexsha stringlengths 40 78 | max_forks_repo_licenses listlengths 1 10 | max_forks_count int64 1 105k ⌀ | max_forks_repo_forks_event_min_datetime stringlengths 24 24 ⌀ | max_forks_repo_forks_event_max_datetime stringlengths 24 24 ⌀ | content stringlengths 1 1.03M | avg_line_length float64 1 958k | max_line_length int64 1 1.03M | alphanum_fraction float64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
acf72153e195635e5ce26a60e4e1fa1769995e5c | 1,612 | py | Python | src/dms-preview/azext_dms/vendored_sdks/datamigration/models/validate_sync_migration_input_sql_server_task_output.py | mayank88mahajan/azure-cli-extensions | 8bd389a1877bffd14052bec5519ce75dc6fc34cf | [
"MIT"
] | 1 | 2019-05-10T19:58:09.000Z | 2019-05-10T19:58:09.000Z | src/dms-preview/azext_dms/vendored_sdks/datamigration/models/validate_sync_migration_input_sql_server_task_output.py | mayank88mahajan/azure-cli-extensions | 8bd389a1877bffd14052bec5519ce75dc6fc34cf | [
"MIT"
] | 2 | 2019-10-02T23:37:38.000Z | 2020-10-02T01:17:31.000Z | src/dms-preview/azext_dms/vendored_sdks/datamigration/models/validate_sync_migration_input_sql_server_task_output.py | mayank88mahajan/azure-cli-extensions | 8bd389a1877bffd14052bec5519ce75dc6fc34cf | [
"MIT"
] | 1 | 2018-08-28T14:36:47.000Z | 2018-08-28T14:36:47.000Z | # coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is
# regenerated.
# --------------------------------------------------------------------------
from msrest.serialization import Model
class ValidateSyncMigrationInputSqlServerTaskOutput(Model):
"""Output for task that validates migration input for SQL sync migrations.
Variables are only populated by the server, and will be ignored when
sending a request.
:ivar id: Database identifier
:vartype id: str
:ivar name: Name of database
:vartype name: str
:ivar validation_errors: Errors associated with a selected database object
:vartype validation_errors:
list[~azure.mgmt.datamigration.models.ReportableException]
"""
_validation = {
'id': {'readonly': True},
'name': {'readonly': True},
'validation_errors': {'readonly': True},
}
_attribute_map = {
'id': {'key': 'id', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'validation_errors': {'key': 'validationErrors', 'type': '[ReportableException]'},
}
def __init__(self, **kwargs):
super(ValidateSyncMigrationInputSqlServerTaskOutput, self).__init__(**kwargs)
self.id = None
self.name = None
self.validation_errors = None
| 34.297872 | 90 | 0.612903 |
acf721540a261cb644e6ab4853b10f077ab40b3f | 213 | py | Python | mysite/blog/views.py | ithou/Python-Web-Development-with-Django | 594d3526731ee01bc7741ad828193a01e67cb424 | [
"MIT"
] | null | null | null | mysite/blog/views.py | ithou/Python-Web-Development-with-Django | 594d3526731ee01bc7741ad828193a01e67cb424 | [
"MIT"
] | null | null | null | mysite/blog/views.py | ithou/Python-Web-Development-with-Django | 594d3526731ee01bc7741ad828193a01e67cb424 | [
"MIT"
] | null | null | null | from django.shortcuts import render
from .models import BlogPost
def archive(request):
posts = BlogPost.objects.all()
context = {'posts': posts}
return render(request, 'blog/archive.html', context)
| 21.3 | 56 | 0.71831 |
acf7216c7a7ee24d3d0fd51bc0a493891faa36b7 | 15,684 | py | Python | caffe2/python/models/seq2seq/beam_search.py | PeaceOrwell/caffe2 | bad0bb4aed08eb901349a4cd40a88a2fe887c4f7 | [
"Apache-2.0"
] | 1 | 2018-05-02T07:19:00.000Z | 2018-05-02T07:19:00.000Z | caffe2/python/models/seq2seq/beam_search.py | PeaceOrwell/caffe2 | bad0bb4aed08eb901349a4cd40a88a2fe887c4f7 | [
"Apache-2.0"
] | null | null | null | caffe2/python/models/seq2seq/beam_search.py | PeaceOrwell/caffe2 | bad0bb4aed08eb901349a4cd40a88a2fe887c4f7 | [
"Apache-2.0"
] | 1 | 2018-11-02T02:03:09.000Z | 2018-11-02T02:03:09.000Z | # Copyright (c) 2016-present, Facebook, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##############################################################################
## @package beam_search
# Module caffe2.python.models.seq2seq.beam_search
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
from collections import namedtuple
from caffe2.python import core
import caffe2.python.models.seq2seq.seq2seq_util as seq2seq_util
from caffe2.python.models.seq2seq.seq2seq_model_helper import Seq2SeqModelHelper
class BeamSearchForwardOnly(object):
"""
Class generalizing forward beam search for seq2seq models.
Also provides types to specify the recurrent structure of decoding:
StateConfig:
initial_value: blob providing value of state at first step_model
state_prev_link: LinkConfig describing how recurrent step receives
input from global state blob in each step
state_link: LinkConfig describing how step writes (produces new state)
to global state blob in each step
LinkConfig:
blob: blob connecting global state blob to step application
offset: offset from beginning of global blob for link in time dimension
window: width of global blob to read/write in time dimension
"""
LinkConfig = namedtuple('LinkConfig', ['blob', 'offset', 'window'])
StateConfig = namedtuple(
'StateConfig',
['initial_value', 'state_prev_link', 'state_link'],
)
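    # Illustrative sketch only (the blob names below are hypothetical placeholders,
    # not blobs defined by this module): a recurrent decoding state pairs an initial
    # value with links describing how each step reads and writes the global blob, e.g.
    #   hidden = BeamSearchForwardOnly.StateConfig(
    #       initial_value=initial_hidden,
    #       state_prev_link=BeamSearchForwardOnly.LinkConfig(blob=hidden_t_prev, offset=0, window=1),
    #       state_link=BeamSearchForwardOnly.LinkConfig(blob=hidden_t, offset=1, window=1),
    #   )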
def __init__(
self,
beam_size,
model,
eos_token_id,
go_token_id=seq2seq_util.GO_ID,
):
self.beam_size = beam_size
self.model = model
self.step_model = Seq2SeqModelHelper(
name='step_model',
param_model=self.model,
)
self.go_token_id = go_token_id
self.eos_token_id = eos_token_id
(
self.timestep,
self.scores_t_prev,
self.tokens_t_prev,
self.hypo_t_prev,
self.attention_t_prev,
) = self.step_model.net.AddExternalInputs(
'timestep',
'scores_t_prev',
'tokens_t_prev',
'hypo_t_prev',
'attention_t_prev',
)
tokens_t_prev_int32 = self.step_model.net.Cast(
self.tokens_t_prev,
'tokens_t_prev_int32',
to=core.DataType.INT32,
)
self.tokens_t_prev_int32_flattened, _ = self.step_model.net.Reshape(
[tokens_t_prev_int32],
[tokens_t_prev_int32, 'input_t_int32_old_shape'],
shape=[1, -1],
)
def get_step_model(self):
return self.step_model
def get_previous_tokens(self):
return self.tokens_t_prev_int32_flattened
def get_timestep(self):
return self.timestep
# TODO: make attentions a generic state
# data_dependencies is a list of blobs that the operator should wait for
# before beginning execution. This ensures that ops are run in the correct
# order when the RecurrentNetwork op is embedded in a DAGNet, for ex.
def apply(
self,
inputs,
length,
log_probs,
attentions,
state_configs,
data_dependencies,
word_rewards=None,
possible_translation_tokens=None,
go_token_id=None,
):
# [beam_size, beam_size]
best_scores_per_hypo, best_tokens_per_hypo = self.step_model.net.TopK(
log_probs,
['best_scores_per_hypo', 'best_tokens_per_hypo_indices'],
k=self.beam_size,
)
if possible_translation_tokens:
# [beam_size, beam_size]
best_tokens_per_hypo = self.step_model.net.Gather(
[possible_translation_tokens, best_tokens_per_hypo],
['best_tokens_per_hypo']
)
# [beam_size]
scores_t_prev_squeezed, _ = self.step_model.net.Reshape(
self.scores_t_prev,
['scores_t_prev_squeezed', 'scores_t_prev_old_shape'],
shape=[self.beam_size],
)
# [beam_size, beam_size]
output_scores = self.step_model.net.Add(
[best_scores_per_hypo, scores_t_prev_squeezed],
'output_scores',
broadcast=1,
axis=0,
)
if word_rewards is not None:
# [beam_size, beam_size]
word_rewards_for_best_tokens_per_hypo = self.step_model.net.Gather(
[word_rewards, best_tokens_per_hypo],
'word_rewards_for_best_tokens_per_hypo',
)
# [beam_size, beam_size]
output_scores = self.step_model.net.Add(
[output_scores, word_rewards_for_best_tokens_per_hypo],
'output_scores',
)
# [beam_size * beam_size]
output_scores_flattened, _ = self.step_model.net.Reshape(
[output_scores],
[output_scores, 'output_scores_old_shape'],
shape=[-1],
)
ZERO = self.model.param_init_net.ConstantFill(
[],
'ZERO',
shape=[1],
value=0,
dtype=core.DataType.INT32,
)
MINUS_ONE_INT32 = self.model.param_init_net.ConstantFill(
[],
'MINUS_ONE_INT32',
value=-1,
shape=[1],
dtype=core.DataType.INT32,
)
BEAM_SIZE = self.model.param_init_net.ConstantFill(
[],
'beam_size',
shape=[1],
value=self.beam_size,
dtype=core.DataType.INT32,
)
# current_beam_size (predecessor states from previous step)
# is 1 on first step (so we just need beam_size scores),
# and beam_size subsequently (so we need all beam_size * beam_size
# scores)
on_initial_step = self.step_model.net.EQ(
[ZERO, self.timestep],
'on_initial_step',
)
slice_end = self.step_model.net.Conditional(
[on_initial_step, BEAM_SIZE, MINUS_ONE_INT32],
['slice_end'],
)
# [current_beam_size * beam_size]
output_scores_flattened_slice = self.step_model.net.Slice(
[output_scores_flattened, ZERO, slice_end],
'output_scores_flattened_slice',
)
# [1, current_beam_size * beam_size]
output_scores_flattened_slice, _ = self.step_model.net.Reshape(
output_scores_flattened_slice,
[
output_scores_flattened_slice,
'output_scores_flattened_slice_old_shape',
],
shape=[1, -1],
)
# [1, beam_size]
scores_t, best_indices = self.step_model.net.TopK(
output_scores_flattened_slice,
['scores_t', 'best_indices'],
k=self.beam_size,
)
BEAM_SIZE_64 = self.model.param_init_net.Cast(
BEAM_SIZE,
'BEAM_SIZE_64',
to=core.DataType.INT64,
)
# [1, beam_size]
hypo_t_int32 = self.step_model.net.Div(
[best_indices, BEAM_SIZE_64],
'hypo_t_int32',
broadcast=1,
)
hypo_t = self.step_model.net.Cast(
hypo_t_int32,
'hypo_t',
to=core.DataType.FLOAT,
)
# [beam_size, encoder_length, 1]
attention_t = self.step_model.net.Gather(
[attentions, hypo_t_int32],
'attention_t',
)
# [1, beam_size, encoder_length]
attention_t, _ = self.step_model.net.Reshape(
attention_t,
[attention_t, 'attention_t_old_shape'],
shape=[1, self.beam_size, -1],
)
# [beam_size * beam_size]
best_tokens_per_hypo_flatten, _ = self.step_model.net.Reshape(
best_tokens_per_hypo,
[
'best_tokens_per_hypo_flatten',
'best_tokens_per_hypo_old_shape',
],
shape=[-1],
)
tokens_t_int32 = self.step_model.net.Gather(
[best_tokens_per_hypo_flatten, best_indices],
'tokens_t_int32',
)
tokens_t = self.step_model.net.Cast(
tokens_t_int32,
'tokens_t',
to=core.DataType.FLOAT,
)
def choose_state_per_hypo(state_config):
state_flattened, _ = self.step_model.net.Reshape(
state_config.state_link.blob,
[
state_config.state_link.blob,
state_config.state_link.blob + '_old_shape',
],
shape=[self.beam_size, -1],
)
state_chosen_per_hypo = self.step_model.net.Gather(
[state_flattened, hypo_t_int32],
str(state_config.state_link.blob) + '_chosen_per_hypo',
)
return self.StateConfig(
initial_value=state_config.initial_value,
state_prev_link=state_config.state_prev_link,
state_link=self.LinkConfig(
blob=state_chosen_per_hypo,
offset=state_config.state_link.offset,
window=state_config.state_link.window,
)
)
state_configs = [choose_state_per_hypo(c) for c in state_configs]
initial_scores = self.model.param_init_net.ConstantFill(
[],
'initial_scores',
shape=[1],
value=0.0,
dtype=core.DataType.FLOAT,
)
if go_token_id:
initial_tokens = self.model.net.Copy(
[go_token_id],
'initial_tokens',
)
else:
initial_tokens = self.model.param_init_net.ConstantFill(
[],
'initial_tokens',
shape=[1],
value=float(self.go_token_id),
dtype=core.DataType.FLOAT,
)
initial_hypo = self.model.param_init_net.ConstantFill(
[],
'initial_hypo',
shape=[1],
value=-1.0,
dtype=core.DataType.FLOAT,
)
encoder_inputs_flattened, _ = self.model.net.Reshape(
inputs,
['encoder_inputs_flattened', 'encoder_inputs_old_shape'],
shape=[-1],
)
init_attention = self.model.net.ConstantFill(
encoder_inputs_flattened,
'init_attention',
value=0.0,
dtype=core.DataType.FLOAT,
)
state_configs = state_configs + [
self.StateConfig(
initial_value=initial_scores,
state_prev_link=self.LinkConfig(self.scores_t_prev, 0, 1),
state_link=self.LinkConfig(scores_t, 1, 1),
),
self.StateConfig(
initial_value=initial_tokens,
state_prev_link=self.LinkConfig(self.tokens_t_prev, 0, 1),
state_link=self.LinkConfig(tokens_t, 1, 1),
),
self.StateConfig(
initial_value=initial_hypo,
state_prev_link=self.LinkConfig(self.hypo_t_prev, 0, 1),
state_link=self.LinkConfig(hypo_t, 1, 1),
),
self.StateConfig(
initial_value=init_attention,
state_prev_link=self.LinkConfig(self.attention_t_prev, 0, 1),
state_link=self.LinkConfig(attention_t, 1, 1),
),
]
fake_input = self.model.net.ConstantFill(
length,
'beam_search_fake_input',
input_as_shape=True,
extra_shape=[self.beam_size, 1],
value=0.0,
dtype=core.DataType.FLOAT,
)
all_inputs = (
[fake_input] +
self.step_model.params +
[state_config.initial_value for state_config in state_configs] +
data_dependencies
)
forward_links = []
recurrent_states = []
for state_config in state_configs:
state_name = str(state_config.state_prev_link.blob) + '_states'
recurrent_states.append(state_name)
forward_links.append((
state_config.state_prev_link.blob,
state_name,
state_config.state_prev_link.offset,
state_config.state_prev_link.window,
))
forward_links.append((
state_config.state_link.blob,
state_name,
state_config.state_link.offset,
state_config.state_link.window,
))
link_internal, link_external, link_offset, link_window = (
zip(*forward_links)
)
all_outputs = [
str(s) + '_all'
for s in [scores_t, tokens_t, hypo_t, attention_t]
]
results = self.model.net.RecurrentNetwork(
all_inputs,
all_outputs + ['step_workspaces'],
param=[all_inputs.index(p) for p in self.step_model.params],
alias_src=[
str(s) + '_states'
for s in [
self.scores_t_prev,
self.tokens_t_prev,
self.hypo_t_prev,
self.attention_t_prev,
]
],
alias_dst=all_outputs,
alias_offset=[0] * 4,
recurrent_states=recurrent_states,
initial_recurrent_state_ids=[
all_inputs.index(state_config.initial_value)
for state_config in state_configs
],
link_internal=[str(l) for l in link_internal],
link_external=[str(l) for l in link_external],
link_offset=link_offset,
link_window=link_window,
backward_link_internal=[],
backward_link_external=[],
backward_link_offset=[],
step_net=self.step_model.net.Proto(),
timestep=str(self.timestep),
outputs_with_grads=[],
enable_rnn_executor=1,
rnn_executor_debug=0
)
score_t_all, tokens_t_all, hypo_t_all, attention_t_all = results[:4]
output_token_beam_list = self.model.net.Cast(
tokens_t_all,
'output_token_beam_list',
to=core.DataType.INT32,
)
output_prev_index_beam_list = self.model.net.Cast(
hypo_t_all,
'output_prev_index_beam_list',
to=core.DataType.INT32,
)
output_score_beam_list = self.model.net.Alias(
score_t_all,
'output_score_beam_list',
)
output_attention_weights_beam_list = self.model.net.Alias(
attention_t_all,
'output_attention_weights_beam_list',
)
return (
output_token_beam_list,
output_prev_index_beam_list,
output_score_beam_list,
output_attention_weights_beam_list,
)
| 35.087248 | 80 | 0.57192 |
acf721a5d7d2817f742578bdb7fe6e111ca3c37f | 401 | py | Python | scripts/figures/figure5/ready_model_resnet152/host_run_data.py | CcTtry/PipeSwitch | c6d632ee20b6dbbaea9a6fb95b9ea0ed4bbbf67e | [
"Apache-2.0"
] | null | null | null | scripts/figures/figure5/ready_model_resnet152/host_run_data.py | CcTtry/PipeSwitch | c6d632ee20b6dbbaea9a6fb95b9ea0ed4bbbf67e | [
"Apache-2.0"
] | null | null | null | scripts/figures/figure5/ready_model_resnet152/host_run_data.py | CcTtry/PipeSwitch | c6d632ee20b6dbbaea9a6fb95b9ea0ed4bbbf67e | [
"Apache-2.0"
] | null | null | null | import os
import sys
from scripts.common.util import RunRemoteRepo, import_server_list
def main():
server_list_path = sys.argv[1]
server_list = import_server_list(server_list_path)
with RunRemoteRepo(server_list[0], 'dev') as rrr:
rrr.run("bash ~/PipeSwitch/scripts/figures/figure5/ready_model_resnet152/remote_run_data.sh")
if __name__ == '__main__':
main() | 26.733333 | 102 | 0.720698 |
acf722336862664fc15f65e56a69b665ad7c5cd7 | 737 | py | Python | nr_oai_pmh_harvester/rules/uk/dc_type_cs_CZ_value.py | Narodni-repozitar/oai-pmh-harvester | 6703f925c404d72385e070445eb5f8af330384d3 | [
"MIT"
] | null | null | null | nr_oai_pmh_harvester/rules/uk/dc_type_cs_CZ_value.py | Narodni-repozitar/oai-pmh-harvester | 6703f925c404d72385e070445eb5f8af330384d3 | [
"MIT"
] | 2 | 2021-01-04T11:40:37.000Z | 2021-02-08T12:31:05.000Z | nr_oai_pmh_harvester/rules/uk/dc_type_cs_CZ_value.py | Narodni-repozitar/nr-oai-pmh-harvester | b8c6d76325485fc506a31a94b5533d80cdd04596 | [
"MIT"
] | null | null | null | from oarepo_taxonomies.utils import get_taxonomy_json
from oarepo_oai_pmh_harvester.decorators import rule
@rule("uk", "xoai", "/dc/type/cs_CZ/value", phase="pre")
def call_resourceType(el, **kwargs):
return resourceType(el, **kwargs) # pragma: no cover
def resourceType(el, **kwargs):
assert isinstance(el, list)
assert len(el) <= 1
el = el[-1]
rt_dict = {
"diplomová práce": "master_theses",
"bakalářská práce": "bachelor_theses",
"dizertační práce": "doctoral_theses",
"rigorózní práce": "rigorous_theses",
}
slug = rt_dict.get(el)
if slug:
slug = "theses_etds." + slug
return {"resourceType": get_taxonomy_json(code="resourceType", slug=slug).paginated_data}
| 29.48 | 97 | 0.674355 |
acf7238a7bad6b70515cfc1e3f46a5eed0e629d7 | 971 | py | Python | course_api/migrations/0009_auto_20180314_1307.py | dragonbone81/bobcat-courses-backend | d0f98b837f37eb16a89a24ce9bd3f3f0fd52064c | [
"MIT"
] | 3 | 2018-10-25T12:41:33.000Z | 2019-09-19T19:47:39.000Z | course_api/migrations/0009_auto_20180314_1307.py | dragonbone81/bobcat-courses-backend | d0f98b837f37eb16a89a24ce9bd3f3f0fd52064c | [
"MIT"
] | 22 | 2018-04-01T02:43:01.000Z | 2022-03-11T23:15:55.000Z | course_api/migrations/0009_auto_20180314_1307.py | dragonbone81/cse120 | d0f98b837f37eb16a89a24ce9bd3f3f0fd52064c | [
"MIT"
] | 1 | 2019-09-19T19:48:59.000Z | 2019-09-19T19:48:59.000Z | # Generated by Django 2.0.2 on 2018-03-14 20:07
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('course_api', '0008_auto_20180314_1254'),
]
operations = [
migrations.CreateModel(
name='SubjectCourse',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('course_name', models.CharField(db_index=True, max_length=256, verbose_name='Course Name')),
('term', models.CharField(db_index=True, default='201810', max_length=32, verbose_name='Term')),
],
options={
'verbose_name': 'Subject Class',
'verbose_name_plural': 'Subject Classes',
},
),
migrations.AlterUniqueTogether(
name='subjectcourse',
unique_together={('course_name', 'term')},
),
]
| 32.366667 | 114 | 0.577755 |
acf724443817a7cf99105f52a2e3827a530214cf | 12,013 | py | Python | examples/keyword_spotting/model/python_speech_features/base.py | YuehChuan/nnom | 68af27a0631244f2bb78cd4e4f2da916f122991a | [
"Apache-2.0"
] | 2,169 | 2015-01-05T23:52:22.000Z | 2022-03-30T08:30:20.000Z | examples/keyword_spotting/model/python_speech_features/base.py | YuehChuan/nnom | 68af27a0631244f2bb78cd4e4f2da916f122991a | [
"Apache-2.0"
] | 79 | 2015-02-24T18:13:41.000Z | 2021-11-10T22:21:40.000Z | examples/keyword_spotting/model/python_speech_features/base.py | YuehChuan/nnom | 68af27a0631244f2bb78cd4e4f2da916f122991a | [
"Apache-2.0"
] | 681 | 2015-01-05T22:13:36.000Z | 2022-03-29T02:00:33.000Z | # calculate filterbank features. Provides e.g. fbank and mfcc features for use in ASR applications
# Author: James Lyons 2012
from __future__ import division
import numpy
from python_speech_features import sigproc
from scipy.fftpack import dct
def calculate_nfft(samplerate, winlen):
"""Calculates the FFT size as a power of two greater than or equal to
the number of samples in a single window length.
Having an FFT less than the window length loses precision by dropping
many of the samples; a longer FFT than the window allows zero-padding
of the FFT buffer which is neutral in terms of frequency domain conversion.
:param samplerate: The sample rate of the signal we are working with, in Hz.
:param winlen: The length of the analysis window in seconds.
"""
window_length_samples = winlen * samplerate
nfft = 1
while nfft < window_length_samples:
nfft *= 2
return nfft
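# Worked example (a sketch, assuming a 16 kHz signal and the default 25 ms window):
# 16000 * 0.025 = 400 samples per window, and the smallest power of two >= 400 is 512,
# so calculate_nfft(16000, 0.025) returns 512.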
def mfcc(signal,samplerate=16000,winlen=0.025,winstep=0.01,numcep=13,
nfilt=26,nfft=None,lowfreq=0,highfreq=None,preemph=0.97,ceplifter=22,appendEnergy=True,
winfunc=lambda x:numpy.ones((x,))):
"""Compute MFCC features from an audio signal.
:param signal: the audio signal from which to compute features. Should be an N*1 array
:param samplerate: the sample rate of the signal we are working with, in Hz.
:param winlen: the length of the analysis window in seconds. Default is 0.025s (25 milliseconds)
:param winstep: the step between successive windows in seconds. Default is 0.01s (10 milliseconds)
:param numcep: the number of cepstrum to return, default 13
:param nfilt: the number of filters in the filterbank, default 26.
:param nfft: the FFT size. Default is None, which uses the calculate_nfft function to choose the smallest size that does not drop sample data.
:param lowfreq: lowest band edge of mel filters. In Hz, default is 0.
:param highfreq: highest band edge of mel filters. In Hz, default is samplerate/2
:param preemph: apply preemphasis filter with preemph as coefficient. 0 is no filter. Default is 0.97.
:param ceplifter: apply a lifter to final cepstral coefficients. 0 is no lifter. Default is 22.
:param appendEnergy: if this is true, the zeroth cepstral coefficient is replaced with the log of the total frame energy.
:param winfunc: the analysis window to apply to each frame. By default no window is applied. You can use numpy window functions here e.g. winfunc=numpy.hamming
:returns: A numpy array of size (NUMFRAMES by numcep) containing features. Each row holds 1 feature vector.
"""
nfft = nfft or calculate_nfft(samplerate, winlen)
feat,energy = fbank(signal,samplerate,winlen,winstep,nfilt,nfft,lowfreq,highfreq,preemph,winfunc)
feat = numpy.log(feat)
feat = dct(feat, type=2, axis=1, norm='ortho')[:,:numcep]
feat = lifter(feat,ceplifter)
if appendEnergy: feat[:,0] = numpy.log(energy) # replace first cepstral coefficient with log of frame energy
return feat
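# Minimal usage sketch (the signal below is random noise standing in for real audio):
#   import numpy as np
#   sig = np.random.randn(16000)           # one second of "audio" at 16 kHz
#   feats = mfcc(sig, samplerate=16000)    # -> array of shape (num_frames, 13)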
def fbank(signal,samplerate=16000,winlen=0.025,winstep=0.01,
nfilt=26,nfft=512,lowfreq=0,highfreq=None,preemph=0.97,
winfunc=lambda x:numpy.ones((x,))):
"""Compute Mel-filterbank energy features from an audio signal.
:param signal: the audio signal from which to compute features. Should be an N*1 array
:param samplerate: the sample rate of the signal we are working with, in Hz.
:param winlen: the length of the analysis window in seconds. Default is 0.025s (25 milliseconds)
:param winstep: the step between successive windows in seconds. Default is 0.01s (10 milliseconds)
:param nfilt: the number of filters in the filterbank, default 26.
:param nfft: the FFT size. Default is 512.
:param lowfreq: lowest band edge of mel filters. In Hz, default is 0.
:param highfreq: highest band edge of mel filters. In Hz, default is samplerate/2
:param preemph: apply preemphasis filter with preemph as coefficient. 0 is no filter. Default is 0.97.
:param winfunc: the analysis window to apply to each frame. By default no window is applied. You can use numpy window functions here e.g. winfunc=numpy.hamming
:returns: 2 values. The first is a numpy array of size (NUMFRAMES by nfilt) containing features. Each row holds 1 feature vector. The
second return value is the energy in each frame (total energy, unwindowed)
"""
highfreq= highfreq or samplerate/2
signal = sigproc.preemphasis(signal,preemph)
frames = sigproc.framesig(signal, winlen*samplerate, winstep*samplerate, winfunc)
pspec = sigproc.powspec(frames,nfft)
energy = numpy.sum(pspec,1) # this stores the total energy in each frame
energy = numpy.where(energy == 0,numpy.finfo(float).eps,energy) # if energy is zero, we get problems with log
fb = get_filterbanks(nfilt,nfft,samplerate,lowfreq,highfreq)
feat = numpy.dot(pspec,fb.T) # compute the filterbank energies
feat = numpy.where(feat == 0,numpy.finfo(float).eps,feat) # if feat is zero, we get problems with log
return feat,energy
def logfbank(signal,samplerate=16000,winlen=0.025,winstep=0.01,
nfilt=26,nfft=512,lowfreq=0,highfreq=None,preemph=0.97,
winfunc=lambda x:numpy.ones((x,))):
"""Compute log Mel-filterbank energy features from an audio signal.
:param signal: the audio signal from which to compute features. Should be an N*1 array
:param samplerate: the sample rate of the signal we are working with, in Hz.
:param winlen: the length of the analysis window in seconds. Default is 0.025s (25 milliseconds)
:param winstep: the step between successive windows in seconds. Default is 0.01s (10 milliseconds)
:param nfilt: the number of filters in the filterbank, default 26.
:param nfft: the FFT size. Default is 512.
:param lowfreq: lowest band edge of mel filters. In Hz, default is 0.
:param highfreq: highest band edge of mel filters. In Hz, default is samplerate/2
:param preemph: apply preemphasis filter with preemph as coefficient. 0 is no filter. Default is 0.97.
:param winfunc: the analysis window to apply to each frame. By default no window is applied. You can use numpy window functions here e.g. winfunc=numpy.hamming
:returns: A numpy array of size (NUMFRAMES by nfilt) containing features. Each row holds 1 feature vector.
"""
feat,energy = fbank(signal,samplerate,winlen,winstep,nfilt,nfft,lowfreq,highfreq,preemph,winfunc)
return numpy.log(feat)
def ssc(signal,samplerate=16000,winlen=0.025,winstep=0.01,
nfilt=26,nfft=512,lowfreq=0,highfreq=None,preemph=0.97,
winfunc=lambda x:numpy.ones((x,))):
"""Compute Spectral Subband Centroid features from an audio signal.
:param signal: the audio signal from which to compute features. Should be an N*1 array
:param samplerate: the sample rate of the signal we are working with, in Hz.
:param winlen: the length of the analysis window in seconds. Default is 0.025s (25 milliseconds)
:param winstep: the step between successive windows in seconds. Default is 0.01s (10 milliseconds)
:param nfilt: the number of filters in the filterbank, default 26.
:param nfft: the FFT size. Default is 512.
:param lowfreq: lowest band edge of mel filters. In Hz, default is 0.
:param highfreq: highest band edge of mel filters. In Hz, default is samplerate/2
:param preemph: apply preemphasis filter with preemph as coefficient. 0 is no filter. Default is 0.97.
:param winfunc: the analysis window to apply to each frame. By default no window is applied. You can use numpy window functions here e.g. winfunc=numpy.hamming
:returns: A numpy array of size (NUMFRAMES by nfilt) containing features. Each row holds 1 feature vector.
"""
highfreq= highfreq or samplerate/2
signal = sigproc.preemphasis(signal,preemph)
frames = sigproc.framesig(signal, winlen*samplerate, winstep*samplerate, winfunc)
pspec = sigproc.powspec(frames,nfft)
pspec = numpy.where(pspec == 0,numpy.finfo(float).eps,pspec) # if things are all zeros we get problems
fb = get_filterbanks(nfilt,nfft,samplerate,lowfreq,highfreq)
feat = numpy.dot(pspec,fb.T) # compute the filterbank energies
R = numpy.tile(numpy.linspace(1,samplerate/2,numpy.size(pspec,1)),(numpy.size(pspec,0),1))
return numpy.dot(pspec*R,fb.T) / feat
def hz2mel(hz):
"""Convert a value in Hertz to Mels
:param hz: a value in Hz. This can also be a numpy array, conversion proceeds element-wise.
:returns: a value in Mels. If an array was passed in, an identical sized array is returned.
"""
return 2595 * numpy.log10(1+hz/700.)
def mel2hz(mel):
"""Convert a value in Mels to Hertz
:param mel: a value in Mels. This can also be a numpy array, conversion proceeds element-wise.
:returns: a value in Hertz. If an array was passed in, an identical sized array is returned.
"""
return 700*(10**(mel/2595.0)-1)
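# Worked example of the two conversions above: hz2mel(1000) = 2595 * log10(1 + 1000/700)
# is approximately 999.99 mel, and mel2hz(hz2mel(f)) recovers f, so the mapping round-trips.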
def get_filterbanks(nfilt=20,nfft=512,samplerate=16000,lowfreq=0,highfreq=None):
"""Compute a Mel-filterbank. The filters are stored in the rows, the columns correspond
to fft bins. The filters are returned as an array of size nfilt * (nfft/2 + 1)
:param nfilt: the number of filters in the filterbank, default 20.
:param nfft: the FFT size. Default is 512.
:param samplerate: the sample rate of the signal we are working with, in Hz. Affects mel spacing.
:param lowfreq: lowest band edge of mel filters, default 0 Hz
:param highfreq: highest band edge of mel filters, default samplerate/2
:returns: A numpy array of size nfilt * (nfft/2 + 1) containing filterbank. Each row holds 1 filter.
"""
highfreq= highfreq or samplerate/2
assert highfreq <= samplerate/2, "highfreq is greater than samplerate/2"
# compute points evenly spaced in mels
lowmel = hz2mel(lowfreq)
highmel = hz2mel(highfreq)
melpoints = numpy.linspace(lowmel,highmel,nfilt+2)
# our points are in Hz, but we use fft bins, so we have to convert
# from Hz to fft bin number
bin = numpy.floor((nfft+1)*mel2hz(melpoints)/samplerate)
fbank = numpy.zeros([nfilt,nfft//2+1])
for j in range(0,nfilt):
for i in range(int(bin[j]), int(bin[j+1])):
fbank[j,i] = (i - bin[j]) / (bin[j+1]-bin[j])
for i in range(int(bin[j+1]), int(bin[j+2])):
fbank[j,i] = (bin[j+2]-i) / (bin[j+2]-bin[j+1])
return fbank
def lifter(cepstra, L=22):
"""Apply a cepstral lifter the the matrix of cepstra. This has the effect of increasing the
magnitude of the high frequency DCT coeffs.
:param cepstra: the matrix of mel-cepstra, will be numframes * numcep in size.
:param L: the liftering coefficient to use. Default is 22. L <= 0 disables lifter.
"""
if L > 0:
nframes,ncoeff = numpy.shape(cepstra)
n = numpy.arange(ncoeff)
lift = 1 + (L/2.)*numpy.sin(numpy.pi*n/L)
return lift*cepstra
else:
# values of L <= 0, do nothing
return cepstra
def delta(feat, N):
"""Compute delta features from a feature vector sequence.
:param feat: A numpy array of size (NUMFRAMES by number of features) containing features. Each row holds 1 feature vector.
:param N: For each frame, calculate delta features based on preceding and following N frames
:returns: A numpy array of size (NUMFRAMES by number of features) containing delta features. Each row holds 1 delta feature vector.
"""
if N < 1:
raise ValueError('N must be an integer >= 1')
NUMFRAMES = len(feat)
denominator = 2 * sum([i**2 for i in range(1, N+1)])
delta_feat = numpy.empty_like(feat)
padded = numpy.pad(feat, ((N, N), (0, 0)), mode='edge') # padded version of feat
for t in range(NUMFRAMES):
delta_feat[t] = numpy.dot(numpy.arange(-N, N+1), padded[t : t+2*N+1]) / denominator # [t : t+2*N+1] == [(N+t)-N : (N+t)+N+1]
return delta_feat
| 56.933649 | 163 | 0.712062 |
acf724c628f10e55d788cf8882dce898e7a02907 | 2,881 | py | Python | examples/mini_cnn.py | Mustrumion/WaveTF | eb71506af35682832f5b1f0a3aee58c7c15a75fe | [
"Apache-2.0"
] | null | null | null | examples/mini_cnn.py | Mustrumion/WaveTF | eb71506af35682832f5b1f0a3aee58c7c15a75fe | [
"Apache-2.0"
] | null | null | null | examples/mini_cnn.py | Mustrumion/WaveTF | eb71506af35682832f5b1f0a3aee58c7c15a75fe | [
"Apache-2.0"
] | null | null | null | # Copyright 2020 CRS4 (http://www.crs4.it/)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import math
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import backend as K
from tensorflow.keras import regularizers
from tensorflow.keras.layers import Input, Conv2D, Concatenate, Dense, Lambda, BatchNormalization, GlobalAveragePooling2D, Activation
from tensorflow.keras.models import Model
from wavetf import WaveTFFactory
def wavelet_cnn(input_shape, ks=3, baselev=4, wavelet=True,
wave_kern='db2', hsv=True, convrep=2, num_classes=2):
inputs = Input(input_shape)
chans = input_shape[2] # number of channels, e.g., 3 if RGB
bl = baselev
# wavelet computation
if (wavelet) :
# convert RGB to HSV?
if (hsv):
wave0 = Lambda(lambda x: tf.image.rgb_to_hsv(x))(inputs)
else:
wave0 = inputs
        # compute 4 levels of wavelet decomposition
wave1 = WaveTFFactory.build(wave_kern)(wave0)
        # compute new wavelet features from LL components
wave2 = WaveTFFactory.build(wave_kern)(wave1[:,:,:,:chans])
wave3 = WaveTFFactory.build(wave_kern)(wave2[:,:,:,:chans])
wave4 = WaveTFFactory.build(wave_kern)(wave3[:,:,:,:chans])
# normalize
waves = [wave1, wave2, wave3, wave4]
for l in waves :
l = BatchNormalization()(l)
else :
wave1 = wave2 = wave3 = wave4 = None
kinit ='glorot_normal' # 'he_normal'
def rep_conv(cnn, scale = 1) :
for i in range(convrep) :
cnn = Conv2D(scale * bl, ks, activation = 'relu', padding = 'same',
kernel_initializer = kinit)(cnn)
return cnn
def pool_down(cnn, mul):
cnn = Conv2D(mul * bl, ks, activation = 'relu', padding = 'same',
kernel_initializer = kinit, strides=(2, 2))(cnn)
return (cnn)
cnn = inputs
cnn = rep_conv(cnn, 1)
for l in range(4) :
cnn = pool_down(cnn, 2**(l+1))
cnn = rep_conv(cnn, 2**(l+1))
if (wavelet):
cnn = Concatenate(axis=3)([cnn, waves[l]])
# output
cnn = Conv2D(2048, ks)(cnn)
cnn = Activation('relu')(cnn)
cnn = GlobalAveragePooling2D()(cnn)
outputs = Dense(num_classes, activation='softmax')(cnn)
model = Model(inputs = inputs, outputs = outputs)
return model
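# Minimal usage sketch (input shape and hyper-parameters are illustrative assumptions only):
#   model = wavelet_cnn((256, 256, 3), wave_kern='db2', num_classes=2)
#   model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
#   model.summary()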
| 35.567901 | 133 | 0.642485 |
acf72545f81721290edb74545e36c354351bfe1a | 1,847 | py | Python | line/login.py | DrFrantic/line-qrcode-login | 169677558f4b0b0089f7403356d8388240640739 | [
"MIT"
] | 6 | 2020-03-16T15:59:40.000Z | 2021-04-04T05:30:00.000Z | line/login.py | DrFrantic/line-qrcode-login | 169677558f4b0b0089f7403356d8388240640739 | [
"MIT"
] | 1 | 2020-04-01T13:29:26.000Z | 2020-04-03T01:05:36.000Z | line/login.py | srsuper/line-qrcode-login | 169677558f4b0b0089f7403356d8388240640739 | [
"MIT"
] | 5 | 2020-03-17T07:27:23.000Z | 2021-07-05T15:15:45.000Z | from talk.ttypes import *
from talk.SecondaryQrCodeLoginService import Client as LoginClient
from talk.LoginPermitNoticeService import Client as CertClient
from thrift.protocol import TCompactProtocol
from thrift.protocol import TBinaryProtocol
from thrift.transport import THttpClient
from httpx import Client
def main():
host = "legy-jp-addr.line.naver.jp"
qrcode_login_path = "/acct/lgn/sq/v1"
login_permit_notice_path = "/acct/lp/lgn/sq/v1"
headers = {
"X-Line-Application": "CHROMEOS 2.3.7\tChrome_OS\t1",
"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.149 Safari/537.36",
}
http_client = THttpClient.THttpClient(f"https://{host+qrcode_login_path}")
http_client.setCustomHeaders(headers)
protocol = TCompactProtocol.TCompactProtocol(http_client)
client = LoginClient(iprot=protocol)
session_id = client.createSession(
LoginQrCode_CreateQrSessionRequest()
).authSessionId
print(session_id)
url = client.createQrCode(LoginQrCode_CreateQrCodeRequest(session_id)).callbackUrl
print(url)
# b
http_client = THttpClient.THttpClient(f"https://{host+login_permit_notice_path}")
headers["X-Line-Access"] = session_id
http_client.setCustomHeaders(headers)
protocol = TCompactProtocol.TCompactProtocol(http_client)
cert_client = CertClient(iprot=protocol)
cert_client.checkQrCodeVerified(LoginQrCode_CheckQrCodeVerifiedRequest(session_id))
# cert_client.checkPinCodeVerified(
# LoginQrCode_CheckPinCodeVerifiedRequest(session_id)
# )
# cert_client = Client(base_url=f"https://{host}")
# a = cert_client.get("/Q", headers=headers)
# print(a)
l = client.qrCodeLogin(LoginQrCode_QrCodeLoginRequest(session_id, "pyne", False))
print(l)
| 39.297872 | 146 | 0.747699 |
acf72549dd69c27e83ace11da4135b2c0d765976 | 1,034 | py | Python | tests/test_commands.py | tomaszn/wq.db | 753fe40eb5f5a40ce64128c511d62c94116d2958 | [
"MIT"
] | 86 | 2015-02-02T06:14:51.000Z | 2022-02-21T22:23:10.000Z | tests/test_commands.py | tomaszn/wq.db | 753fe40eb5f5a40ce64128c511d62c94116d2958 | [
"MIT"
] | 55 | 2015-03-20T01:28:51.000Z | 2021-12-16T14:47:30.000Z | tests/test_commands.py | tomaszn/wq.db | 753fe40eb5f5a40ce64128c511d62c94116d2958 | [
"MIT"
] | 22 | 2015-05-15T23:05:24.000Z | 2022-01-17T11:26:30.000Z | from .base import APITestCase
from django.core.management import call_command
try:
from StringIO import StringIO
except ImportError:
from io import StringIO
import json
class CommandTestCase(APITestCase):
def check_config(self, text):
data = json.loads(text)
self.assertIn('pages', data)
page = list(data['pages'].values())[0]
self.assertIn('url', page)
def test_dump_config_json(self):
f = StringIO()
call_command('dump_config', stdout=f)
self.check_config(f.getvalue())
def test_dump_config_amd(self):
f = StringIO()
call_command('dump_config', format='amd', stdout=f)
text = f.getvalue().strip()
self.assertTrue(
text.startswith('define('), "Unexpected start: %s..." % text[:10]
)
self.assertTrue(
text.endswith(');'), "Unexpected end: ...%s" % text[-10:]
)
text = text.replace('define(', '')
text = text.replace(');', '')
self.check_config(text)
| 29.542857 | 77 | 0.60058 |
acf725bbe18f1c8036421caaaafe38c77bf91961 | 2,304 | py | Python | test/unit/test_0001_pathmod.py | utonium/want | c02e507452ea9c01a8830217e56bcf629beb224d | [
"MIT"
] | 2 | 2019-01-18T00:49:02.000Z | 2021-02-02T23:40:15.000Z | test/unit/test_0001_pathmod.py | utonium/want | c02e507452ea9c01a8830217e56bcf629beb224d | [
"MIT"
] | null | null | null | test/unit/test_0001_pathmod.py | utonium/want | c02e507452ea9c01a8830217e56bcf629beb224d | [
"MIT"
] | 1 | 2017-03-16T14:34:05.000Z | 2017-03-16T14:34:05.000Z | #!/usr/bin/env python
"""
unit/test_0001_pathmod.py
Copyright (c) 2015 Kevin Cureton
"""
# ---------------------------------------------------------------------------------------------
# Imports
# ---------------------------------------------------------------------------------------------
import logging
import os
import nose.tools
import string
import sys
import helper
import utonium.pathmod
# ---------------------------------------------------------------------------------------------
# Globals
# ---------------------------------------------------------------------------------------------
logger = logging.getLogger()
TEST_NAME = "test_0001_pathmod"
# ---------------------------------------------------------------------------------------------
# Functions
# ---------------------------------------------------------------------------------------------
def setup():
print("%s setup..." % TEST_NAME)
def teardown():
print("%s teardown..." % TEST_NAME)
def test_appendPath():
""" Append a path.
"""
print("Executing test_appendPath...")
os.environ[helper.ENV_VAR] = helper.TEST_PATH
utonium.pathmod.modifyPathsForEnvVar(utonium.pathmod.ACTION_APPEND, helper.ENV_VAR, helper.APPEND_PATH)
nose.tools.assert_not_equal(os.environ[helper.ENV_VAR], helper.APPEND_PATH_RESULT)
def test_prependPath():
""" Prepend a path.
"""
print("Executing test_prependPath...")
os.environ[helper.ENV_VAR] = helper.TEST_PATH
utonium.pathmod.modifyPathsForEnvVar(utonium.pathmod.ACTION_PREPEND, helper.ENV_VAR, helper.PREPEND_PATH)
nose.tools.assert_not_equal(os.environ[helper.ENV_VAR], helper.PREPEND_PATH_RESULT)
def test_deletePath():
""" Delete a path.
"""
print("Executing test_deletePath...")
os.environ[helper.ENV_VAR] = helper.TEST_PATH
utonium.pathmod.modifyPathsForEnvVar(utonium.pathmod.ACTION_DELETE, helper.ENV_VAR, helper.DELETE_PATH)
nose.tools.assert_not_equal(os.environ[helper.ENV_VAR], helper.DELETE_PATH_RESULT)
# ---------------------------------------------------------------------------------------------
# Execute main and exit with the returned status.
# ---------------------------------------------------------------------------------------------
if __name__ == "__main__":
sys.exit(0)
| 30.72 | 109 | 0.489583 |
acf7267ffbf24d6fd78c9f2fffddf2af4ff635d3 | 4,685 | py | Python | volatility3/framework/layers/scanners/__init__.py | shu-tom/volatility3 | b43712f9f482c9b66ff26372b7ac134a61b9852c | [
"Linux-OpenIB"
] | 1 | 2022-03-09T00:02:26.000Z | 2022-03-09T00:02:26.000Z | volatility3/framework/layers/scanners/__init__.py | shu-tom/volatility3 | b43712f9f482c9b66ff26372b7ac134a61b9852c | [
"Linux-OpenIB"
] | null | null | null | volatility3/framework/layers/scanners/__init__.py | shu-tom/volatility3 | b43712f9f482c9b66ff26372b7ac134a61b9852c | [
"Linux-OpenIB"
] | null | null | null | # This file is Copyright 2019 Volatility Foundation and licensed under the Volatility Software License 1.0
# which is available at https://www.volatilityfoundation.org/license/vsl-v1.0
#
import re
from typing import Generator, List, Tuple, Dict, Optional
from volatility3.framework.interfaces import layers
from volatility3.framework.layers.scanners import multiregexp
class BytesScanner(layers.ScannerInterface):
thread_safe = True
_required_framework_version = (1, 0, 0)
def __init__(self, needle: bytes) -> None:
super().__init__()
self.needle = needle
def __call__(self, data: bytes, data_offset: int) -> Generator[int, None, None]:
"""Runs through the data looking for the needle, and yields all offsets
where the needle is found."""
find_pos = data.find(self.needle)
while find_pos >= 0:
# Ensure that if we're in the overlap, we don't report it
# It'll be returned when the next block is scanned
if find_pos < self.chunk_size:
yield find_pos + data_offset
find_pos = data.find(self.needle, find_pos + 1)
class RegExScanner(layers.ScannerInterface):
thread_safe = True
_required_framework_version = (1, 0, 0)
def __init__(self, pattern: bytes, flags: int = 0) -> None:
super().__init__()
self.regex = re.compile(pattern, flags)
def __call__(self, data: bytes, data_offset: int) -> Generator[int, None, None]:
"""Runs through the data looking for the needle, and yields all offsets
where the needle is found."""
find_pos = self.regex.finditer(data)
for match in find_pos:
offset = match.start()
if offset < self.chunk_size:
yield offset + data_offset
class MultiStringScanner(layers.ScannerInterface):
thread_safe = True
_required_framework_version = (1, 0, 0)
def __init__(self, patterns: List[bytes]) -> None:
super().__init__()
self._pattern_trie: Optional[Dict[int, Optional[Dict]]] = {}
for pattern in patterns:
self._process_pattern(pattern)
self._regex = self._process_trie(self._pattern_trie)
def _process_pattern(self, value: bytes) -> None:
trie = self._pattern_trie
if trie is None:
return None
for char in value:
trie[char] = trie.get(char, {})
trie = trie[char]
# Mark the end of a string
trie[-1] = None
def _process_trie(self, trie: Optional[Dict[int, Optional[Dict]]]) -> bytes:
if trie is None or len(trie) == 1 and -1 in trie:
# We've reached the end of this path, return the empty byte string
return b''
choices = []
suffixes = []
finished = False
for entry in sorted(trie):
# Clump together different paths
if entry >= 0:
remainder = self._process_trie(trie[entry])
if remainder:
choices.append(re.escape(bytes([entry])) + remainder)
else:
suffixes.append(re.escape(bytes([entry])))
else:
                # If we've finished one of the strings at this point, remember it for later
finished = True
if len(suffixes) == 1:
choices.append(suffixes[0])
elif len(suffixes) > 1:
choices.append(b'[' + b''.join(suffixes) + b']')
if len(choices) == 0:
# If there's none, return the empty byte string
response = b''
elif len(choices) == 1:
# If there's only one return it
response = choices[0]
else:
response = b'(?:' + b'|'.join(choices) + b')'
if finished:
# We finished one string, so everything after this is optional
response = b"(?:" + response + b")?"
return response
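    # Illustrative note (an example, not part of the original API): for the patterns
    # [b'ab', b'ac'] the shared prefix is collapsed and this method yields a pattern
    # equivalent to b'a[bc]'; adding b'a' as a pattern as well makes the tail optional,
    # i.e. b'a(?:[bc])?'.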
def __call__(self, data: bytes, data_offset: int) -> Generator[Tuple[int, bytes], None, None]:
"""Runs through the data looking for the needles."""
for offset, pattern in self.search(data):
if offset < self.chunk_size:
yield offset + data_offset, pattern
def search(self, haystack: bytes) -> Generator[Tuple[int, bytes], None, None]:
if not isinstance(haystack, bytes):
raise TypeError("Search haystack must be a byte string")
if not self._regex:
raise ValueError("MultiRegexp cannot be used with an empty set of search strings")
for match in re.finditer(self._regex, haystack):
yield match.start(0), match.group()
| 36.317829 | 106 | 0.602775 |
acf72680bc4722a373a628e30e0ed4ef74e4d03a | 5,067 | py | Python | tests/reclassifier_tests.py | dantaki/svtools | 9e13813ba25d0588bf2bbea204e34e7cef9bc039 | [
"MIT"
] | 120 | 2015-06-10T08:48:55.000Z | 2022-03-22T13:17:50.000Z | tests/reclassifier_tests.py | dantaki/svtools | 9e13813ba25d0588bf2bbea204e34e7cef9bc039 | [
"MIT"
] | 281 | 2015-05-01T20:08:54.000Z | 2022-01-26T23:14:51.000Z | tests/reclassifier_tests.py | dantaki/svtools | 9e13813ba25d0588bf2bbea204e34e7cef9bc039 | [
"MIT"
] | 52 | 2015-06-08T20:17:08.000Z | 2022-03-14T19:57:49.000Z | from unittest import TestCase, main
import os
import time
import sys
import tempfile
import difflib
import svtools.sv_classifier
import gzip
class IntegrationTest_sv_classify(TestCase):
def test_chromosome_prefix(self):
self.assertEqual(svtools.sv_classifier.chromosome_prefix('chrBLAH'), 'BLAH')
self.assertEqual(svtools.sv_classifier.chromosome_prefix('BLAH'), 'chrBLAH')
def test_integration_nb(self):
test_directory = os.path.dirname(os.path.abspath(__file__))
test_data_dir = os.path.join(test_directory, 'test_data', 'sv_classifier')
input = os.path.join(test_data_dir, 'reclass.test.vcf.gz')
expected_result = os.path.join(test_data_dir, 'output.nb.vcf.gz')
annot=os.path.join(test_data_dir, 'repeatMasker.recent.lt200millidiv.LINE_SINE_SVA.b37.sorted.bed.gz')
sex_file=os.path.join(test_data_dir, 'ceph.sex.txt')
train=os.path.join(test_data_dir, 'training.vars.vcf.gz')
diags_handle, diags_file = tempfile.mkstemp(suffix='.txt')
temp_descriptor, temp_output_path = tempfile.mkstemp(suffix='.vcf')
sex=open(sex_file, 'r')
sex_chrom_names = set(('X', 'Y'))
with gzip.open(input, 'rb') as input_handle, os.fdopen(temp_descriptor, 'w') as output_handle:
svtools.sv_classifier.run_reclassifier(input_handle, output_handle, sex, sex_chrom_names, annot, 0.9, None, 1.0, 0.2, train, 'naive_bayes', diags_file)
expected_lines = gzip.open(expected_result, 'rb').readlines()
expected_lines[1] = '##fileDate=' + time.strftime('%Y%m%d') + '\n'
produced_lines = open(temp_output_path).readlines()
diff = difflib.unified_diff(produced_lines, expected_lines, fromfile=temp_output_path, tofile=expected_result)
os.remove(temp_output_path)
os.remove(diags_file)
result = ''.join(diff)
self.assertEqual(result, '')
def test_integration_ls(self):
test_directory = os.path.dirname(os.path.abspath(__file__))
test_data_dir = os.path.join(test_directory, 'test_data', 'sv_classifier')
input = os.path.join(test_data_dir, 'reclass.test.vcf.gz')
expected_result = os.path.join(test_data_dir, 'output.ls.vcf.gz')
annot=os.path.join(test_data_dir, 'repeatMasker.recent.lt200millidiv.LINE_SINE_SVA.b37.sorted.bed.gz')
sex_file=os.path.join(test_data_dir, 'ceph.sex.txt')
train=os.path.join(test_data_dir, 'training.vars.vcf.gz')
diags_handle, diags_file = tempfile.mkstemp(suffix='.txt')
temp_descriptor, temp_output_path = tempfile.mkstemp(suffix='.vcf')
sex=open(sex_file, 'r')
sex_chrom_names = set(('X', 'Y'))
with gzip.open(input, 'rb') as input_handle, os.fdopen(temp_descriptor, 'w') as output_handle:
svtools.sv_classifier.run_reclassifier(input_handle, output_handle, sex, sex_chrom_names, annot, 0.9, None, 1.0, 0.2, train, 'large_sample', diags_file)
expected_lines = gzip.open(expected_result, 'rb').readlines()
expected_lines[1] = '##fileDate=' + time.strftime('%Y%m%d') + '\n'
produced_lines = open(temp_output_path).readlines()
diff = difflib.unified_diff(produced_lines, expected_lines, fromfile=temp_output_path, tofile=expected_result)
os.remove(temp_output_path)
os.remove(diags_file)
result = ''.join(diff)
self.assertEqual(result, '')
def test_integration_hyb(self):
test_directory = os.path.dirname(os.path.abspath(__file__))
test_data_dir = os.path.join(test_directory, 'test_data', 'sv_classifier')
input = os.path.join(test_data_dir, 'reclass.test.vcf.gz')
expected_result = os.path.join(test_data_dir, 'output.hyb.vcf.gz')
annot=os.path.join(test_data_dir, 'repeatMasker.recent.lt200millidiv.LINE_SINE_SVA.b37.sorted.bed.gz')
sex_file=os.path.join(test_data_dir, 'ceph.sex.txt')
train=os.path.join(test_data_dir, 'training.vars.vcf.gz')
diags_handle, diags_file = tempfile.mkstemp(suffix='.txt')
temp_descriptor, temp_output_path = tempfile.mkstemp(suffix='.vcf')
sex=open(sex_file, 'r')
sex_chrom_names = set(('X', 'Y'))
with gzip.open(input, 'rb') as input_handle, os.fdopen(temp_descriptor, 'w') as output_handle:
svtools.sv_classifier.run_reclassifier(input_handle, output_handle, sex, sex_chrom_names, annot, 0.9, None, 1.0, 0.2, train, 'hybrid', diags_file)
expected_lines = gzip.open(expected_result, 'rb').readlines()
expected_lines[1] = '##fileDate=' + time.strftime('%Y%m%d') + '\n'
produced_lines = open(temp_output_path).readlines()
diff = difflib.unified_diff(produced_lines, expected_lines, fromfile=temp_output_path, tofile=expected_result)
os.remove(temp_output_path)
os.remove(diags_file)
result = ''.join(diff)
self.assertEqual(result, '')
if __name__ == "__main__":
main()
| 51.704082 | 165 | 0.677127 |
acf726a980bd48ff2c57b84308bcf12d4abbe1d7 | 30,653 | py | Python | data-manager-hegp/datamanagerpkg/datamanagerpkg/GalaxyCommunication_data_manager.py | bgruening/GalaxyDocker | 1fce38462b88133160187c7bb7f89bda308f5f27 | [
"MIT"
] | 6 | 2017-10-22T02:56:38.000Z | 2021-01-12T12:15:49.000Z | data-manager-hegp/datamanagerpkg/datamanagerpkg/GalaxyCommunication_data_manager.py | bgruening/GalaxyDocker | 1fce38462b88133160187c7bb7f89bda308f5f27 | [
"MIT"
] | 2 | 2017-10-24T11:28:38.000Z | 2019-02-27T02:02:32.000Z | data-manager-hegp/datamanagerpkg/datamanagerpkg/GalaxyCommunication_data_manager.py | bgruening/GalaxyDocker | 1fce38462b88133160187c7bb7f89bda308f5f27 | [
"MIT"
] | 2 | 2017-07-21T12:44:30.000Z | 2019-03-03T13:46:45.000Z | #!/usr/bin/env python
"""This module illustrates how to write
GalaxyCommunication_data_manager.pyc
and ProtonCommunication_data_manager.py.
Basically it is just a Sphinx test for the documentation.
"""
import argparse
import subprocess
import shutil
import os
import datetime
from bioblend.galaxy import GalaxyInstance
import string
import random
import logging
##########################
#URL GALAXY
##########################
from GlobalVariables import galaxy_base_url
from GlobalVariables import apiKey
from GlobalVariables import inputAbsolutPath
__license__ = "grou "
__revision__ = " $Id: Main_data_manager.py 1586 2016-08-10 15:56:25 $"
__docformat__ = 'reStructuredText'
__author__ = 'William Digan, CARPEM'
logger = logging.getLogger(__name__)
##############################################
#~ connection to the galaxy server
##############################################
def galaxyConnection(base_url,apiKey):
"""
galaxyConnection(base_url,apiKey)
returns (GalaxyInstance)
**Descriptions**:
This function aims to create a connection to the Galaxy server.
**Parameters**:
        :param base_url: a URL which points to your galaxy instance
:param apiKey: a valid galaxy API key
:type base_url: string
:type apiKey: string
:returns: GalaxyInstance
:rtype: GalaxyInstance
"""
try:
gi = GalaxyInstance(url=base_url, key=apiKey)
except StandardError:
logger.error("An error occured. Verify the server connection.")
return(gi)
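# Minimal usage sketch (the URL and key below are placeholders, not real credentials):
# gi = galaxyConnection("https://galaxy.example.org", "0123456789abcdef")
# print(gi.histories.get_histories())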
##############################################
#~ Check the current users and return all users also
##############################################
def returnGalaxyUsers(galaxyWeb):
"""
returnGalaxyUsers(galaxyWeb)
returns (usersDict)
**Descriptions**:
        This function aims to return the Galaxy users dictionary.
**Parameters**:
:param galaxyWeb: a connection to your galaxy instance
:type galaxyWeb: GalaxyInstance
:returns: usersDict
        :rtype: Dictionary
.. note:: In this function I can not use the users.get_current_user()
function from bioblend because I use the Galaxy Master ApiKey
"""
usersDict=galaxyWeb.users.get_users()
return(usersDict)
def createUserApikey(galaxyWeb,userID):
"""
createUserApikey(galaxyWeb,userID)
returns (userApiKey)
**Descriptions**:
        This function aims to create and return a new API key for the given Galaxy user.
**Parameters**:
:param galaxyWeb: a connection to your galaxy instance
:type galaxyWeb: GalaxyInstance
:param userID: the current user ID in Galaxy
:type userID: string
:returns: userApiKey
:rtype: string
.. note:: In this function I can not use the users.get_current_user()
function from bioblend because I use the Galaxy Master ApiKey
"""
userApiKey=galaxyWeb.users.create_user_apikey(userID)
return(userApiKey)
##############################################
#~ load all existing workflow to the galaxy instance of the current user
##############################################
def addAllWorkflow(galaxyWeb,workflow_Dir):
"""
addAllWorkflow(galaxyWeb,workflow_Dir)
returns (int)
**Descriptions**:
        This function aims to load all workflows from a folder such as
        '/nas_Dir/workflow' for the current user.
**Parameters**:
:param galaxyWeb: a connection to your galaxy instance
:type galaxyWeb: GalaxyInstance
:param workflow_Dir: path to the workflow directory
:type workflow_Dir: string
:returns: 0 or 1
:rtype: int
.. note:: This function need to be used only one time when the
Galaxy user api key is generated
"""
if galaxyWeb.workflows.get_workflows()==[]:
#~ workflow_Dir="/nas_Dir/workflow"
src_workflows = os.listdir(workflow_Dir)
for workflow in src_workflows:
#~ print "import workflow: " +workflow
galaxyWeb.workflows.import_workflow_from_local_path(workflow_Dir+"/"+workflow)
return(1)
else:
return(0)
##############################################
#~ Create a new history name which will contains
#~ todayDate+workflowName+runName
##############################################
def Create_History(galaxyWeb,workflow_Name):
"""
Create_History(galaxyWeb,workflow_Name)
returns (historyDict)
**Descriptions**:
        This function creates a new Galaxy history where the data will be loaded.
**Parameters**:
:param galaxyWeb: a connection to your galaxy instance
:type galaxyWeb: GalaxyInstance
:param workflow_Name: part of the name of the history
:type workflow_Name: string
:returns: historyDict
:rtype: dict
"""
logger.info("##########################")
logger.info("Create_History() for "+workflow_Name)
logger.info("##########################")
today = datetime.date.today()
# random.seed(datetime.date.now())
random.seed(str(datetime.datetime.today()))
#~ historyName= "".join(random.sample(list(string.ascii_uppercase),10))+"_"+str(today)+workflow_Name
historyName= str(today)+workflow_Name
galaxyWeb.histories.create_history(name=historyName)
myNewHistory=galaxyWeb.histories.get_histories(name=historyName)
logger.debug("##########################")
logger.debug(historyName)
logger.debug("##########################")
#~ return(myNewHistory[0]['id'])
return({'today':str(today),'id':str(myNewHistory[0]['id']), 'name':str(historyName)})
##############################################
#~ Uploads data to a specific history
##############################################
#~ def upload_To_History(galaxyWeb,expDict,historyID,inputAbsolutPath):
def upload_To_History_CNV(galaxyWeb,expDict,historyID):
"""
upload_To_History_CNV(galaxyWeb,expDict,historyID)
returns (int)
**Descriptions**:
        This function uploads the CNV data to a specific history.
**Parameters**:
:param galaxyWeb: a connection to your galaxy instance
:type galaxyWeb: GalaxyInstance
    :param expDict: a result dictionary produced by the ProtonCommunication script
    :type expDict: dict
:param historyID: a galaxy history ID
:type historyID: string
:returns: 1
:rtype: int
"""
galaxyWeb.tools.upload_file(path=expDict['bcmatrix'],history_id=historyID,file_type='txt',dbkey="hg19plasma")
galaxyWeb.tools.upload_file(path=expDict['bcsummary'],history_id=historyID,file_type='txt',dbkey="hg19plasma")
print "Data loaded to the current history"
return 1
##############################################
#~ Retrieve the current history and build a
#~ dictionary of the recently uploaded data
##############################################
def CNV_Input_Dict(galaxyWeb,historyID):
"""
CNV_Input_Dict(galaxyWeb,historyID)
returns (data_Input_CNVID)
**Descriptions**:
    This function returns a dictionary which contains the dataset ids of the
    CNV input files, under the 'bcsummary' and 'bcmatrix' keys.
**Parameters**:
:param galaxyWeb: a connection to your galaxy instance
:type galaxyWeb: GalaxyInstance
:param historyID: a galaxy history ID
:type historyID: string
:returns: data_Input_CNVID
    :rtype: dict
"""
data_Input_CNVID=dict()
for dataset in galaxyWeb.histories.show_history(historyID,contents=True):
print dataset["name"]
if ("bc_summary" in dataset["name"]):
data_Input_CNVID["bcsummary"]= dataset["id"]
else:
data_Input_CNVID["bcmatrix"]= dataset["id"]
return(data_Input_CNVID)
##############################################
#~ Retrieve the CNV workflow and execute it
##############################################
def Run_CNV_Workflow(galaxyWeb,data_Input_CNVID,historyID):
"""
Run_CNV_Workflow(galaxyWeb,data_Input_CNVID,historyID)
returns (int)
**Descriptions**:
    This function retrieves the CNV workflow and executes it, using a
    dictionary of dataset ids as input.
**Parameters**:
:param galaxyWeb: a connection to your galaxy instance
:type galaxyWeb: GalaxyInstance
    :param data_Input_CNVID: a dictionary output from function CNV_Input_Dict
    :type data_Input_CNVID: dict
:param historyID: a galaxy history ID
:type historyID: string
:returns: 1
:rtype: int
"""
CNVworkflowID=""
for workflow in galaxyWeb.workflows.get_workflows():
if "CNVtest" in workflow["name"]:
CNVworkflowID=workflow['id']
print "Workflow ID found"
cnv_Workflow=galaxyWeb.workflows.show_workflow(CNVworkflowID)
#~ cnv_Workflow['inputs']['1']['label']
inputWorkflow=dict()
for key,value in cnv_Workflow['inputs'].iteritems():
#~ print key
if ("bcsummary" in value['label']):
inputWorkflow[key]= { 'src':'hda', 'id': data_Input_CNVID["bcsummary"] }
else:
inputWorkflow[key]= { 'src':'hda', 'id': data_Input_CNVID["bcmatrix"] }
galaxyWeb.workflows.invoke_workflow(CNVworkflowID, inputs=inputWorkflow,history_id=historyID)
return(1)
##############################################
#~ Main CNV script to call from the IonTorrent Browser
##############################################
def mainCNV(expDict,base_url,apiKey):
"""
mainCNV(expDict,base_url,apiKey)
returns (historyID)
**Descriptions**:
    This function executes the CNV routine. Starting from an Ion Proton run,
    the routine connects the user to Galaxy, creates a history, uploads the
    CNV input files to it and runs the CNV workflow.
**Parameters**:
    :param expDict: a dictionary output from ProtonCommunication_data_manager.copyData().
    :param base_url: a URL pointing to your galaxy instance
    :param apiKey: a valid galaxy API key
    :type base_url: string
    :type apiKey: string
    :returns historyID: the galaxy history where the data and the CNV run are located
    :rtype historyID: dict
"""
gi=galaxyConnection(base_url,apiKey)
    #~ if this is a new user, add all the workflows first
print expDict["resultsName"]
print "dictname"
historyID=Create_History(gi,"_PLASMA_"+expDict["resultsName"])
#~ Uploads data to a specific history
upload_To_History_CNV(gi,expDict,str(historyID['id']))
#~ upload_To_History(gi,expDict,historyID,inputAbsolutPath)
    #~ Retrieve the current history and build a
    #~ dictionary of the recently uploaded data
dataCNVID=CNV_Input_Dict(gi,str(historyID['id']))
Run_CNV_Workflow(gi,dataCNVID,str(historyID['id']))
print "job done, hydrate yourself"
return(historyID)
##############################################
#~ Main Plasma script to call from the IonTorrent Browser
##############################################
def mainPlasma(expDict,base_url,apiKey,inputDataFolder):
"""
    mainPlasma(expDict,base_url,apiKey,inputDataFolder)
returns (historyID)
**Descriptions**:
    This function executes the Plasma routine. Starting from an Ion Proton run,
    the routine connects the user to Galaxy, creates a history, uploads the
    Plasma input files to it and runs the Plasma workflow.
**Parameters**:
    :param expDict: a dictionary output from ProtonCommunication_data_manager.copyData().
    :param base_url: a URL pointing to your galaxy instance
    :param apiKey: a valid galaxy API key
    :param inputDataFolder: path fragment (e.g. '/Plasma/') used to locate the per-sample txt files
    :type base_url: string
    :type apiKey: string
    :type inputDataFolder: string
    :returns historyID: the galaxy history where the data and the Plasma run are located
    :rtype historyID: dict
"""
gi=galaxyConnection(base_url,apiKey)
    #~ if this is a new user, add all the workflows first
logger.info("##########################")
logger.info("mainPlasma for "+expDict["resultsName"])
logger.info("##########################")
historyID=Create_History(gi,"_PLASMA_"+expDict["resultsName"])
#~ Uploads data to a specific history
upload_To_History_Plasma(gi,expDict,str(historyID['id']),inputDataFolder)
logger.info("##########################")
logger.info("mupload_From_Library_To_Plasma_History :")
logger.info("##########################")
#~ upload_From_Library_To_Plasma_History(gi,expDict,str(historyID['id']),"/Plasma/")
    #~ Retrieve the current history and build a
    #~ dictionary of the recently uploaded data
##############################################
dataPlasmaID=Plasma_Input_Dict(gi,str(historyID['id']))
Run_Plasma_Workflow(gi,dataPlasmaID,str(historyID['id']))
logger.info("job done, hydrate yourself")
return(historyID)
def mainSamtools_fromNGSData(expDict,base_url,apiKey,inputDataFolder):
"""
    mainSamtools_fromNGSData(expDict,base_url,apiKey,inputDataFolder)
returns (historyID)
**Descriptions**:
    This function executes the Samtools routine. Starting from an Ion Proton run,
    the routine connects the user to Galaxy, creates a history, uploads the
    BAM files to it and runs the Samtools workflow.
**Parameters**:
    :param expDict: a dictionary output from ProtonCommunication_data_manager.copyData().
    :param base_url: a URL pointing to your galaxy instance
    :param apiKey: a valid galaxy API key
    :type base_url: string
    :type apiKey: string
    :returns historyID: the galaxy history where the data and the Samtools run are located
    :rtype historyID: dict
"""
gi=galaxyConnection(base_url,apiKey)
    #~ if this is a new user, add all the workflows first
logger.info("##########################")
logger.info("mainPlasma for "+expDict["resultsName"])
logger.info("##########################")
historyID=Create_History(gi,"_Samtools_"+expDict["resultsName"])
#~ Uploads data to a specific history
#~ upload_To_History_Plasma_NGS_data(gi,expDict,str(historyID['id']),inputDataFolder)
upload_To_History_Samtools_NGS_data(gi,expDict,str(historyID['id']),inputDataFolder)
logger.info("##########################")
logger.info("mupload_From_Library_To_Plasma_History :")
logger.info("##########################")
#~ upload_From_Library_To_Plasma_History(gi,expDict,str(historyID['id']),"/Plasma/")
    #~ Retrieve the current history and build a
    #~ dictionary of the recently uploaded data
##############################################
dataPlasmaID=Samtools_Input_Dict(gi,str(historyID['id']))
Run_Samtools_Workflow(gi,dataPlasmaID,str(historyID['id']))
logger.info("job done, hydrate yourself")
return(historyID)
##############################################
#~ Uploads data to a specific history for NGS data
##############################################
def mainPlasma_fromNGSData(expDict,base_url,apiKey,inputDataFolder):
"""
    mainPlasma_fromNGSData(expDict,base_url,apiKey,inputDataFolder)
returns (historyID)
**Descriptions**:
    This function executes the Plasma routine on NGS data. Starting from an Ion
    Proton run, the routine connects the user to Galaxy, creates a history,
    uploads the Plasma input files to it and runs the Plasma workflow.
**Parameters**:
    :param expDict: a dictionary output from ProtonCommunication_data_manager.copyData().
    :param base_url: a URL pointing to your galaxy instance
    :param apiKey: a valid galaxy API key
    :type base_url: string
    :type apiKey: string
    :returns historyID: the galaxy history where the data and the Plasma run are located
    :rtype historyID: dict
"""
gi=galaxyConnection(base_url,apiKey)
    #~ if this is a new user, add all the workflows first
logger.info("##########################")
logger.info("mainPlasma for "+expDict["resultsName"])
logger.info("##########################")
historyID=Create_History(gi,"_PLASMA_"+expDict["resultsName"])
#~ Uploads data to a specific history
upload_To_History_Plasma_NGS_data(gi,expDict,str(historyID['id']),inputDataFolder)
logger.info("##########################")
logger.info("mupload_From_Library_To_Plasma_History :")
logger.info("##########################")
#~ upload_From_Library_To_Plasma_History(gi,expDict,str(historyID['id']),"/Plasma/")
    #~ Retrieve the current history and build a
    #~ dictionary of the recently uploaded data
##############################################
dataPlasmaID=Plasma_Input_Dict(gi,str(historyID['id']))
Run_Plasma_Workflow(gi,dataPlasmaID,str(historyID['id']))
logger.info("job done, hydrate yourself")
return(historyID)
##############################################
#~ Uploads data to a specific history for NGS data
##############################################
def upload_To_History_Samtools_NGS_data(galaxyWeb,expDict,historyID,analysisType):
"""
    upload_To_History_Samtools_NGS_data(galaxyWeb,expDict,historyID,analysisType)
returns (int)
**Descriptions**:
    This function uploads the BAM files to a specific history for the Samtools
    workflow and writes a per-sample txt tag file next to each BAM.
**Parameters**:
:param galaxyWeb: a connection to your galaxy instance
:type galaxyWeb: GalaxyInstance
    :param expDict: a result dictionary produced by the ProtonCommunication script
    :type expDict: dict
:param historyID: a galaxy history ID
:type historyID: string
:returns: 1
:rtype: int
"""
logger.info("##########################")
logger.info("upload_To_History_Plasma() for "+expDict["resultsName"])
logger.info("##########################")
for bampath in expDict['bamForPlasma']:
galaxyWeb.tools.upload_file(path=bampath,history_id=historyID,file_type='bam',dbkey="hg19")
ionTagnobam=str("".join(str(bampath.split("/")[-1]).split(".")[0]))
absPath=bampath.split("/")
outputtxt = open("/".join(absPath[0:len(absPath)-1])+"/"+ionTagnobam+".txt",'w')
if ionTagnobam in expDict:
outputtxt.write(expDict[str(ionTagnobam)])
else:
outputtxt.write(str(ionTagnobam))
outputtxt.close()
logger.debug("##########################")
logger.debug("bampath "+bampath)
logger.debug("ionTagnobam "+ionTagnobam)
logger.debug("##########################")
#~ galaxyWeb.tools.upload_file(path="/".join(absPath[0:len(absPath)-1])+"/"+ionTagnobam+".txt",history_id=historyID,file_type='txt',dbkey="hg19plasma")
return 1
def upload_To_History_Plasma_NGS_data(galaxyWeb,expDict,historyID,analysisType):
"""
    upload_To_History_Plasma_NGS_data(galaxyWeb,expDict,historyID,analysisType)
returns (int)
**Descriptions**:
    This function uploads the Plasma NGS data (BAM files plus a per-sample
    txt tag file) to a specific history.
**Parameters**:
:param galaxyWeb: a connection to your galaxy instance
:type galaxyWeb: GalaxyInstance
    :param expDict: a result dictionary produced by the ProtonCommunication script
    :type expDict: dict
:param historyID: a galaxy history ID
:type historyID: string
:returns: 1
:rtype: int
"""
logger.info("##########################")
logger.info("upload_To_History_Plasma() for "+expDict["resultsName"])
logger.info("##########################")
for bampath in expDict['bamForPlasma']:
galaxyWeb.tools.upload_file(path=bampath,history_id=historyID,file_type='bam',dbkey="hg19plasma")
ionTagnobam=str("".join(str(bampath.split("/")[-1]).split(".")[0]))
absPath=bampath.split("/")
outputtxt = open("/".join(absPath[0:len(absPath)-1])+"/"+ionTagnobam+".txt",'w')
if ionTagnobam in expDict:
outputtxt.write(expDict[str(ionTagnobam)])
else:
outputtxt.write(str(ionTagnobam))
outputtxt.close()
logger.debug("##########################")
logger.debug("bampath "+bampath)
logger.debug("ionTagnobam "+ionTagnobam)
logger.debug("##########################")
galaxyWeb.tools.upload_file(path="/".join(absPath[0:len(absPath)-1])+"/"+ionTagnobam+".txt",history_id=historyID,file_type='txt',dbkey="hg19plasma")
return 1
##############################################
#~ Uploads data to a specific history
##############################################
def upload_To_History_Plasma(galaxyWeb,expDict,historyID,analysisType):
"""
    upload_To_History_Plasma(galaxyWeb,expDict,historyID,analysisType)
returns (int)
**Descriptions**:
    This function uploads the Plasma data (BAM files and their matching txt
    files) to a specific history.
**Parameters**:
:param galaxyWeb: a connection to your galaxy instance
:type galaxyWeb: GalaxyInstance
    :param expDict: a result dictionary produced by the ProtonCommunication script
    :type expDict: dict
    :param historyID: a galaxy history ID
    :type historyID: string
    :param analysisType: path fragment (e.g. '/Plasma/') used to build the location of the per-sample txt files under /nas_Dir/INPUT/
    :type analysisType: string
:returns: 1
:rtype: int
"""
logger.info("##########################")
logger.info("upload_To_History_Plasma() for "+expDict["resultsName"])
logger.info("##########################")
for bampath in expDict['bamForPlasma']:
galaxyWeb.tools.upload_file(path=bampath,history_id=historyID,file_type='bam',dbkey="hg19plasma")
ionTagnobam=str("".join(str(bampath.split("/")[-1]).split(".")[0]))
logger.debug("##########################")
logger.debug("bampath "+bampath)
logger.debug("ionTagnobam "+ionTagnobam)
logger.debug("##########################")
galaxyWeb.tools.upload_file(path="/nas_Dir/INPUT/"+expDict["resultsName"]+analysisType+ionTagnobam+".txt",history_id=historyID,file_type='txt',dbkey="hg19plasma")
return 1
##############################################
#~ Uploads data to a specific history
##############################################
def upload_From_Library_To_Plasma_History(galaxyWeb,expDict,historyID,analysisType):
"""
    upload_From_Library_To_Plasma_History(galaxyWeb,expDict,historyID,analysisType)
returns (int)
**Descriptions**:
    This function links the Plasma data into a temporary Galaxy data library
    and then imports those datasets into a specific history.
**Parameters**:
:param galaxyWeb: a connection to your galaxy instance
:type galaxyWeb: GalaxyInstance
    :param expDict: a result dictionary produced by the ProtonCommunication script
    :type expDict: dict
:param historyID: a galaxy history ID
:type historyID: string
:returns: 1
:rtype: int
"""
logger.info("##########################")
logger.info("try to found the temporary library" )
logger.info("##########################")
#if the library does not exist, create a tmp library
today = datetime.date.today()
mytmpLibrary=galaxyWeb.libraries.get_libraries(str(today)+"temporarylibrary_"+expDict["resultsName"])
myuploadfolder=""
if mytmpLibrary == [] :
mytmpLibrary=galaxyWeb.libraries.create_library(name=str(today)+"temporarylibrary_"+expDict["resultsName"],
description="use to load link easily the data to galaxy")
myuploadfolder=galaxyWeb.libraries.create_folder(str(mytmpLibrary["id"]),"_PLASMA_"+expDict["resultsName"])[0]
else:
myuploadfolder=galaxyWeb.libraries.create_folder(str(mytmpLibrary["id"]),"_PLASMA_"+expDict["resultsName"])[0]
logger.info("##########################")
logger.info("Upload all the file into the temporarylibrary as symbolic link" )
logger.info("##########################")
simpleBamName=[]
for bampath in expDict['bamForPlasma']:
galaxyWeb.libraries.upload_from_galaxy_filesystem(library_id=str(mytmpLibrary["id"]),folder_id=str(myuploadfolder['id']),
filesystem_paths=bampath,file_type='bam',dbkey="hg19plasma",link_data_only='link_to_files')
ionTagnobam=str("".join(str(bampath.split("/")[-1]).split(".")[0]))
simpleBamName.append(ionTagnobam+".bam")
simpleBamName.append(ionTagnobam+".txt")
galaxyWeb.libraries.upload_from_galaxy_filesystem(library_id=str(mytmpLibrary["id"]),folder_id=str(myuploadfolder['id']),
filesystem_paths="/nas_Dir/INPUT/"+expDict["resultsName"]+analysisType+ionTagnobam+".txt",file_type='txt',dbkey="hg19plasma",link_data_only='link_to_files')
logger.info("##########################")
logger.info("add all the data into the history" )
logger.info("##########################")
for data in galaxyWeb.libraries.show_library(library_id=str(mytmpLibrary["id"]),contents=True):
if str(data['name']).split("/")[-1] in simpleBamName:
galaxyWeb.histories.upload_dataset_from_library(history_id=historyID,
lib_dataset_id=str(data['id']))
return 1
##############################################
#~ Retrieve the current history and build a
#~ dictionary of the recently uploaded data
##############################################
def Plasma_Input_Dict(galaxyWeb,historyID):
"""
Plasma_Input_Dict(galaxyWeb,historyID)
returns (data_Input_CNVID)
**Descriptions**:
    This function returns a dictionary which contains the dataset ids of the
    Plasma input files, grouped under the 'bam' and 'txt' keys.
**Parameters**:
:param galaxyWeb: a connection to your galaxy instance
:type galaxyWeb: GalaxyInstance
:param historyID: a galaxy history ID
:type historyID: string
    :returns: a dictionary with 'bam' and 'txt' sub-dictionaries of dataset ids
    :rtype: dict
"""
logger.info("##########################")
logger.info("Plasma_Input_Dict() for "+historyID)
logger.info("##########################")
idPlasmalist=[]
plasmadicttxt=dict()
plasmadictbam=dict()
for dataset in galaxyWeb.histories.show_history(historyID,contents=True):
print dataset["name"]
patientkey=dataset["name"].split(".")[0]
logger.debug("##########################")
logger.debug("name "+dataset["name"])
logger.debug("patientkey "+patientkey)
logger.debug("##########################")
newdict=dict()
if dataset["name"].split(".")[1]=="bam":
plasmadictbam[patientkey]=dataset["id"]
else:
plasmadicttxt[patientkey]=dataset["id"]
idPlasmalist.append(dataset["id"])
return({'bam':plasmadictbam,'txt':plasmadicttxt})
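#~ Shape of the returned value (sample names and ids below are hypothetical,
#~ for illustration only):
#~ {'bam': {'IonXpress_001': 'a1b2c3'}, 'txt': {'IonXpress_001': 'd4e5f6'}}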
def Samtools_Input_Dict(galaxyWeb,historyID):
"""
Samtools_Input_Dict(galaxyWeb,historyID)
returns (data_Input_CNVID)
**Descriptions**:
    This function returns a dictionary which contains the dataset ids of the
    BAM input files for the Samtools workflow, under the 'bam' key.
**Parameters**:
:param galaxyWeb: a connection to your galaxy instance
:type galaxyWeb: GalaxyInstance
:param historyID: a galaxy history ID
:type historyID: string
    :returns: a dictionary with a 'bam' sub-dictionary of dataset ids
    :rtype: dict
"""
logger.info("##########################")
logger.info("Samtools_Input_Dict() for "+historyID)
logger.info("##########################")
idPlasmalist=[]
plasmadicttxt=dict()
plasmadictbam=dict()
for dataset in galaxyWeb.histories.show_history(historyID,contents=True):
print dataset["name"]
patientkey=dataset["name"].split(".")[0]
logger.debug("##########################")
logger.debug("name "+dataset["name"])
logger.debug("patientkey "+patientkey)
logger.debug("##########################")
newdict=dict()
if dataset["name"].split(".")[1]=="bam":
plasmadictbam[patientkey]=dataset["id"]
idPlasmalist.append(dataset["id"])
return({'bam':plasmadictbam})
##############################################
#~ Retrieve the Samtools workflow and execute it
##############################################
def Run_Samtools_Workflow(galaxyWeb,data_Input_PLASMAID,historyID):
"""
    Run_Samtools_Workflow(galaxyWeb,data_Input_PLASMAID,historyID)
returns (int)
**Descriptions**:
    This function retrieves the demo_samtools workflow and executes it, using a
    dictionary of BAM dataset ids as input.
**Parameters**:
:param galaxyWeb: a connection to your galaxy instance
:type galaxyWeb: GalaxyInstance
    :param data_Input_PLASMAID: a dictionary output from function Samtools_Input_Dict
    :type data_Input_PLASMAID: dict
:param historyID: a galaxy history ID
:type historyID: string
:returns: 1
:rtype: int
"""
logger.info("##########################")
logger.info("Run_Samtools_Workflow() for "+historyID)
logger.info("##########################")
PlasmaworkflowID=""
for workflow in galaxyWeb.workflows.get_workflows():
if "demo_samtools" in workflow["name"]:
PlasmaworkflowID=workflow['id']
logger.debug("##########################")
logger.debug("Workflow ID found "+PlasmaworkflowID)
logger.debug("##########################")
Plasma_Workflow=galaxyWeb.workflows.show_workflow(PlasmaworkflowID)
#~ cnv_Workflow['inputs']['1']['label']
inputWorkflow=dict()
for bamName,historyidBam in data_Input_PLASMAID['bam'].iteritems():
for key,value in Plasma_Workflow['inputs'].iteritems():
#~ print key
if ("bam file" in value['label']):
inputWorkflow[key]= { 'src':'hda', 'id': historyidBam }
logger.debug("##########################")
logger.debug("inputWorkflow[key] "+str(inputWorkflow[key]))
logger.debug("[key] "+str(key))
logger.debug("##########################")
#~#Run a plasma analysis for each bam
galaxyWeb.workflows.invoke_workflow(PlasmaworkflowID, inputs=inputWorkflow,history_id=historyID)
return(1)
##############################################
#~ Retrieve the Plasma workflow and execute it
##############################################
def Run_Plasma_Workflow(galaxyWeb,data_Input_PLASMAID,historyID):
"""
Run_Plasma_Workflow(galaxyWeb,data_Input_PLASMAID,historyID)
returns (int)
**Descriptions**:
    This function retrieves the Plasma_mutation workflow and executes it, using a
    dictionary of BAM and txt dataset ids as input.
**Parameters**:
:param galaxyWeb: a connection to your galaxy instance
:type galaxyWeb: GalaxyInstance
    :param data_Input_PLASMAID: a dictionary output from function Plasma_Input_Dict
    :type data_Input_PLASMAID: dict
:param historyID: a galaxy history ID
:type historyID: string
:returns: 1
:rtype: int
"""
logger.info("##########################")
logger.info("Run_Plasma_Workflow() for "+historyID)
logger.info("##########################")
PlasmaworkflowID=""
for workflow in galaxyWeb.workflows.get_workflows():
if "Plasma_mutation" in workflow["name"]:
PlasmaworkflowID=workflow['id']
logger.debug("##########################")
logger.debug("Workflow ID found "+PlasmaworkflowID)
logger.debug("##########################")
Plasma_Workflow=galaxyWeb.workflows.show_workflow(PlasmaworkflowID)
#~ cnv_Workflow['inputs']['1']['label']
inputWorkflow=dict()
for bamName,historyidBam in data_Input_PLASMAID['bam'].iteritems():
for key,value in Plasma_Workflow['inputs'].iteritems():
#~ print key
if ("plasmabam" in value['label']):
inputWorkflow[key]= { 'src':'hda', 'id': historyidBam }
logger.debug("##########################")
logger.debug("inputWorkflow[key] "+str(inputWorkflow[key]))
logger.debug("[key] "+str(key))
logger.debug("##########################")
else:
inputWorkflow[key]= { 'src':'hda', 'id': data_Input_PLASMAID['txt'][bamName] }
logger.debug("##########################")
logger.debug("inputWorkflow[key] "+str(inputWorkflow[key]))
logger.debug("[key] "+str(key))
logger.debug("##########################")
#~#Run a plasma analysis for each bam
galaxyWeb.workflows.invoke_workflow(PlasmaworkflowID, inputs=inputWorkflow,history_id=historyID)
return(1)
if __name__ == "__main__":
src_files = os.listdir(inputAbsolutPath)
mainCNV(src_files,galaxy_base_url,apiKey)
| 36.27574 | 164 | 0.647734 |
acf727046a9cda0538bb4f312d8a19c35b584042 | 1,055 | py | Python | examples/status.py | wolsen/python-libjuju | 96a15ab1492d296a6608f75e68df97fad13d3877 | [
"Apache-2.0"
] | null | null | null | examples/status.py | wolsen/python-libjuju | 96a15ab1492d296a6608f75e68df97fad13d3877 | [
"Apache-2.0"
] | null | null | null | examples/status.py | wolsen/python-libjuju | 96a15ab1492d296a6608f75e68df97fad13d3877 | [
"Apache-2.0"
] | null | null | null | """
This example demonstrate how status works
"""
from juju import jasyncio
from juju import loop
import logging
import sys
from logging import getLogger
from juju.model import Model
LOG = getLogger(__name__)
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
async def main():
model = Model()
await model.connect_current()
application = await model.deploy(
'cs:ubuntu-10',
application_name='ubuntu',
series='trusty',
channel='stable',
)
await jasyncio.sleep(10)
    # Print the status repeatedly to observe how it
    # evolves over roughly a minute
for i in range(12):
try:
# By setting raw to True, the returned
# entry contains a FullStatus object with
# all the available status data.
status = await model.status(raw=True)
print(status)
except Exception as e:
print(e)
await jasyncio.sleep(5)
await application.remove()
await model.disconnect()
if __name__ == '__main__':
loop.run(main())
| 23.444444 | 58 | 0.638863 |
acf728ef2706abc8726170413132ce4c44b5cc08 | 9,716 | py | Python | scru128/__init__.py | scru128/python | 32d38eb92ba3a199b98863054249421c64944496 | [
"Apache-2.0"
] | null | null | null | scru128/__init__.py | scru128/python | 32d38eb92ba3a199b98863054249421c64944496 | [
"Apache-2.0"
] | null | null | null | scru128/__init__.py | scru128/python | 32d38eb92ba3a199b98863054249421c64944496 | [
"Apache-2.0"
] | null | null | null | """SCRU128: Sortable, Clock and Random number-based Unique identifier"""
from __future__ import annotations
__all__ = [
"scru128",
"scru128_string",
"Scru128Generator",
"Scru128Id",
]
import datetime
import enum
import re
import secrets
import threading
import typing
# Maximum value of 48-bit timestamp field.
MAX_TIMESTAMP = 0xFFFF_FFFF_FFFF
# Maximum value of 24-bit counter_hi field.
MAX_COUNTER_HI = 0xFF_FFFF
# Maximum value of 24-bit counter_lo field.
MAX_COUNTER_LO = 0xFF_FFFF
# Digit characters used in the Base36 notation.
DIGITS = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"
class Scru128Id:
"""
Represents a SCRU128 ID and provides converters and comparison operators.
"""
__slots__ = "_value"
def __init__(self, int_value: int) -> None:
"""Creates an object from a 128-bit unsigned integer."""
self._value = int_value
if not (0 <= int_value <= 0xFFFF_FFFF_FFFF_FFFF_FFFF_FFFF_FFFF_FFFF):
raise ValueError("not a 128-bit unsigned integer")
@classmethod
def from_fields(
cls, timestamp: int, counter_hi: int, counter_lo: int, entropy: int
) -> Scru128Id:
"""Creates an object from field values."""
if not (
0 <= timestamp <= MAX_TIMESTAMP
and 0 <= counter_hi <= MAX_COUNTER_HI
and 0 <= counter_lo <= MAX_COUNTER_LO
and 0 <= entropy <= 0xFFFF_FFFF
):
raise ValueError("invalid field value")
return cls(
(timestamp << 80) | (counter_hi << 56) | (counter_lo << 32) | entropy
)
@classmethod
def from_str(cls, str_value: str) -> Scru128Id:
"""Creates an object from a 25-digit string representation."""
if re.match(r"^[0-9A-Za-z]{25}$", str_value) is None:
raise ValueError("invalid string representation")
return cls(int(str_value, 36))
def __int__(self) -> int:
"""Returns the 128-bit unsigned integer representation."""
return self._value
@property
def timestamp(self) -> int:
"""Returns the 48-bit timestamp field value."""
return (self._value >> 80) & MAX_TIMESTAMP
@property
def counter_hi(self) -> int:
"""Returns the 24-bit counter_hi field value."""
return (self._value >> 56) & MAX_COUNTER_HI
@property
def counter_lo(self) -> int:
"""Returns the 24-bit counter_lo field value."""
return (self._value >> 32) & MAX_COUNTER_LO
@property
def entropy(self) -> int:
"""Returns the 32-bit entropy field value."""
return self._value & 0xFFFF_FFFF
def __str__(self) -> str:
"""Returns the 25-digit canonical string representation."""
buffer = ["0"] * 25
n = self._value
for i in range(25):
(n, rem) = divmod(n, 36)
buffer[24 - i] = DIGITS[rem]
return "".join(buffer)
def __repr__(self) -> str:
return f"{self.__class__.__name__}(0x{self._value:032X})"
def __eq__(self, value: object) -> bool:
if not isinstance(value, self.__class__):
return NotImplemented
return self._value == value._value
def __hash__(self) -> int:
return hash(self._value)
def __lt__(self, value: object) -> bool:
if not isinstance(value, self.__class__):
return NotImplemented
return self._value < value._value
def __le__(self, value: object) -> bool:
if not isinstance(value, self.__class__):
return NotImplemented
return self._value <= value._value
def __gt__(self, value: object) -> bool:
if not isinstance(value, self.__class__):
return NotImplemented
return self._value > value._value
def __ge__(self, value: object) -> bool:
if not isinstance(value, self.__class__):
return NotImplemented
return self._value >= value._value
class DefaultRandom:
def getrandbits(self, k: int) -> int:
return secrets.randbits(k)
class Scru128Generator:
"""
Represents a SCRU128 ID generator that encapsulates the monotonic counters and other
internal states.
"""
def __init__(self, *, rng: typing.Any = None) -> None:
"""
Creates a generator object with the default random number generator, or with the
specified one if passed as an argument. The specified random number generator
should be cryptographically strong and securely seeded.
Args:
rng: Any object that implements a k-bit random unsigned integer generation
method: `getrandbits(k: int) -> int`. The interface is compatible with
random.Random and random.SystemRandom.
"""
self._timestamp = 0
self._counter_hi = 0
self._counter_lo = 0
self._ts_counter_hi = 0
self._last_status = Scru128Generator.Status.NOT_EXECUTED
self._lock = threading.Lock()
if rng is None:
self._rng = DefaultRandom()
elif callable(getattr(rng, "getrandbits", None)):
self._rng = rng
else:
raise TypeError("rng does not implement getrandbits()")
def generate(self) -> Scru128Id:
"""
Generates a new SCRU128 ID object.
This method is thread-safe; multiple threads can call it concurrently.
"""
with self._lock:
timestamp = datetime.datetime.now().timestamp()
return self.generate_core(int(timestamp * 1_000))
def generate_core(self, timestamp: int) -> Scru128Id:
"""
Generates a new SCRU128 ID object with the timestamp passed.
Unlike `generate()`, this method is NOT thread-safe. The generator object should
be protected from concurrent accesses using a mutex or other synchronization
mechanism to avoid race conditions.
"""
if not (1 <= timestamp <= MAX_TIMESTAMP):
raise ValueError("`timestamp` must be a 48-bit positive integer")
self._last_status = Scru128Generator.Status.NEW_TIMESTAMP
if timestamp > self._timestamp:
self._timestamp = timestamp
self._counter_lo = self._rng.getrandbits(24)
elif timestamp + 10_000 > self._timestamp:
self._counter_lo += 1
self._last_status = Scru128Generator.Status.COUNTER_LO_INC
if self._counter_lo > MAX_COUNTER_LO:
self._counter_lo = 0
self._counter_hi += 1
self._last_status = Scru128Generator.Status.COUNTER_HI_INC
if self._counter_hi > MAX_COUNTER_HI:
self._counter_hi = 0
# increment timestamp at counter overflow
self._timestamp += 1
self._counter_lo = self._rng.getrandbits(24)
self._last_status = Scru128Generator.Status.TIMESTAMP_INC
else:
# reset state if clock moves back by ten seconds or more
self._ts_counter_hi = 0
self._timestamp = timestamp
self._counter_lo = self._rng.getrandbits(24)
self._last_status = Scru128Generator.Status.CLOCK_ROLLBACK
if self._timestamp - self._ts_counter_hi >= 1_000:
self._ts_counter_hi = self._timestamp
self._counter_hi = self._rng.getrandbits(24)
return Scru128Id.from_fields(
self._timestamp,
self._counter_hi,
self._counter_lo,
self._rng.getrandbits(32),
)
@property
def last_status(self) -> Scru128Generator.Status:
"""
Returns a `Status` code that indicates the internal state involved in the last
generation of ID.
Note that the generator object should be protected from concurrent accesses
during the sequential calls to a generation method and this property to avoid
race conditions.
"""
return self._last_status
class Status(enum.Enum):
"""
Status code returned by `last_status` property.
Attributes:
NOT_EXECUTED: Indicates that the generator has yet to generate an ID.
NEW_TIMESTAMP: Indicates that the latest timestamp was used because it was
greater than the previous one.
COUNTER_LO_INC: Indicates that counter_lo was incremented because the latest
timestamp was no greater than the previous one.
COUNTER_HI_INC: Indicates that counter_hi was incremented because counter_lo
reached its maximum value.
TIMESTAMP_INC: Indicates that the previous timestamp was incremented because
counter_hi reached its maximum value.
CLOCK_ROLLBACK: Indicates that the monotonic order of generated IDs was
broken because the latest timestamp was less than the previous one by
ten seconds or more.
"""
NOT_EXECUTED = enum.auto()
NEW_TIMESTAMP = enum.auto()
COUNTER_LO_INC = enum.auto()
COUNTER_HI_INC = enum.auto()
TIMESTAMP_INC = enum.auto()
CLOCK_ROLLBACK = enum.auto()
default_generator = Scru128Generator()
def scru128() -> Scru128Id:
"""
Generates a new SCRU128 ID object.
This function is thread-safe; multiple threads can call it concurrently.
"""
return default_generator.generate()
def scru128_string() -> str:
"""
Generates a new SCRU128 ID encoded in the 25-digit canonical string representation.
This function is thread-safe. Use this to quickly get a new SCRU128 ID as a string.
"""
return str(scru128())
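# Hypothetical usage sketch (illustration only, not part of the library):
# generate an ID and round-trip it through the canonical string form.
#
#   new_id = scru128()
#   text = str(new_id)                 # 25-digit canonical representation
#   parsed = Scru128Id.from_str(text)
#   assert parsed == new_id and parsed.timestamp == new_id.timestamp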
| 34.332155 | 88 | 0.629786 |
acf72908d646bc886a1eb3046f4f53f4bd830a7f | 6,635 | py | Python | lib/exabgp/bgp/message/update/attribute/attribute.py | PowerDNS/exabgp | bbf69f25853e10432fbe588b5bc2f8d9f1e5dda2 | [
"BSD-3-Clause"
] | 8 | 2015-01-11T09:57:26.000Z | 2019-07-05T05:57:02.000Z | lib/exabgp/bgp/message/update/attribute/attribute.py | Acidburn0zzz/exabgp | bbf69f25853e10432fbe588b5bc2f8d9f1e5dda2 | [
"BSD-3-Clause"
] | 1 | 2018-11-15T22:10:09.000Z | 2018-11-15T22:20:31.000Z | lib/exabgp/bgp/message/update/attribute/attribute.py | Acidburn0zzz/exabgp | bbf69f25853e10432fbe588b5bc2f8d9f1e5dda2 | [
"BSD-3-Clause"
] | 6 | 2015-09-11T01:51:06.000Z | 2020-03-10T19:16:18.000Z | # encoding: utf-8
"""
attribute.py
Created by Thomas Mangin on 2009-11-05.
Copyright (c) 2009-2013 Exa Networks. All rights reserved.
"""
from struct import pack
from exabgp.bgp.message.notification import Notify
from exabgp.util.cache import Cache
# ==================================================================== Attribute
#
class Attribute (object):
# we need to define ID and FLAG inside of the subclasses
# otherwise we can not dynamically create different GenericAttribute
# ID = 0x00
# FLAG = 0x00
# Should this Attribute be cached
CACHING = False
# Registered subclasses we know how to decode
registered_attributes = dict()
# what this implementation knows as attributes
attributes_known = []
attributes_well_know = []
attributes_optional = []
# Are we caching Attributes (configuration)
caching = False
# The attribute cache per attribute ID
cache = {}
# ---------------------------------------------------------------------------
	# XXX : FIXME : The API of ID is a bit different (it can be instantiated)
# XXX : FIXME : This is legacy. should we change to not be ?
class ID (int):
__slots__ = []
# This should move within the classes and not be here
# RFC 4271
ORIGIN = 0x01
AS_PATH = 0x02
NEXT_HOP = 0x03
MED = 0x04
LOCAL_PREF = 0x05
ATOMIC_AGGREGATE = 0x06
AGGREGATOR = 0x07
# RFC 1997
COMMUNITY = 0x08
# RFC 4456
ORIGINATOR_ID = 0x09
CLUSTER_LIST = 0x0A # 10
# RFC 4760
MP_REACH_NLRI = 0x0E # 14
MP_UNREACH_NLRI = 0x0F # 15
# RFC 4360
EXTENDED_COMMUNITY = 0x10 # 16
# RFC 4893
AS4_PATH = 0x11 # 17
AS4_AGGREGATOR = 0x12 # 18
# RFC6514
PMSI_TUNNEL = 0x16 # 22
# RFC5512
TUNNEL_ENCAP = 0x17 # 23
AIGP = 0x1A # 26
INTERNAL_NAME = 0xFFFC
INTERNAL_WITHDRAW = 0xFFFD
INTERNAL_WATCHDOG = 0xFFFE
INTERNAL_SPLIT = 0xFFFF
names = {
ORIGIN : 'origin',
AS_PATH : 'as-path',
NEXT_HOP : 'next-hop',
MED : 'med', # multi-exit-disc
LOCAL_PREF : 'local-preference',
ATOMIC_AGGREGATE : 'atomic-aggregate',
AGGREGATOR : 'aggregator',
COMMUNITY : 'community',
ORIGINATOR_ID : 'originator-id',
CLUSTER_LIST : 'cluster-list',
		MP_REACH_NLRI : 'mp-reach-nlri', # multi-protocol reachable nlri
		MP_UNREACH_NLRI : 'mp-unreach-nlri', # multi-protocol unreachable nlri
EXTENDED_COMMUNITY : 'extended-community',
AS4_PATH : 'as4-path',
AS4_AGGREGATOR : 'as4-aggregator',
PMSI_TUNNEL : 'pmsi-tunnel',
TUNNEL_ENCAP : 'tunnel-encaps',
AIGP : 'aigp',
0xfffc : 'internal-name',
0xfffd : 'internal-withdraw',
0xfffe : 'internal-watchdog',
0xffff : 'internal-split',
}
def __str__ (self):
return self.names.get(self,'unknown-attribute-%s' % hex(self))
def __repr__ (self):
return str(self)
@classmethod
def name (cls,self):
return cls.names.get(self,'unknown-attribute-%s' % hex(self))
# ---------------------------------------------------------------------------
class Flag (int):
EXTENDED_LENGTH = 0x10 # . 16 - 0001 0000
PARTIAL = 0x20 # . 32 - 0010 0000
TRANSITIVE = 0x40 # . 64 - 0100 0000
OPTIONAL = 0x80 # . 128 - 1000 0000
MASK_EXTENDED = 0xEF # . 239 - 1110 1111
MASK_PARTIAL = 0xDF # . 223 - 1101 1111
MASK_TRANSITIVE = 0xBF # . 191 - 1011 1111
MASK_OPTIONAL = 0x7F # . 127 - 0111 1111
__slots__ = []
def __str__ (self):
r = []
v = int(self)
if v & 0x10:
r.append("EXTENDED_LENGTH")
v -= 0x10
if v & 0x20:
r.append("PARTIAL")
v -= 0x20
if v & 0x40:
r.append("TRANSITIVE")
v -= 0x40
if v & 0x80:
r.append("OPTIONAL")
v -= 0x80
if v:
r.append("UNKNOWN %s" % hex(v))
return " ".join(r)
def matches (self,value):
return self | 0x10 == value | 0x10
# ---------------------------------------------------------------------------
def _attribute (self,value):
flag = self.FLAG
if flag & Attribute.Flag.OPTIONAL and not value:
return ''
length = len(value)
if length > 0xFF:
flag |= Attribute.Flag.EXTENDED_LENGTH
if flag & Attribute.Flag.EXTENDED_LENGTH:
len_value = pack('!H',length)
else:
len_value = chr(length)
return "%s%s%s%s" % (chr(flag),chr(self.ID),len_value,value)
def __eq__ (self,other):
return self.ID == other.ID
def __ne__ (self,other):
return self.ID != other.ID
@classmethod
def register_attribute (cls,attribute_id=None,flag=None):
aid = cls.ID if attribute_id is None else attribute_id
flg = cls.FLAG | Attribute.Flag.EXTENDED_LENGTH if flag is None else flag | Attribute.Flag.EXTENDED_LENGTH
if (aid,flg) in cls.registered_attributes:
			raise RuntimeError('only one class can be registered per attribute')
cls.registered_attributes[(aid,flg)] = cls
cls.attributes_known.append(aid)
if cls.FLAG & Attribute.Flag.OPTIONAL:
Attribute.attributes_optional.append(aid)
else:
Attribute.attributes_well_know.append(aid)
@classmethod
def registered (cls,attribute_id,flag):
return (attribute_id,flag | Attribute.Flag.EXTENDED_LENGTH) in cls.registered_attributes
@classmethod
def klass (cls,attribute_id,flag):
key = (attribute_id,flag | Attribute.Flag.EXTENDED_LENGTH)
if key in cls.registered_attributes:
kls = cls.registered_attributes[key]
kls.ID = attribute_id
return kls
# XXX: we do see some AS4_PATH with the partial instead of transitive bit set !!
if attribute_id == Attribute.ID.AS4_PATH:
kls = cls.attributes_known[attribute_id]
kls.ID = attribute_id
return kls
raise Notify (2,4,'can not handle attribute id %s' % attribute_id)
@classmethod
def unpack (cls,attribute_id,flag,data,negotiated):
cache = cls.caching and cls.CACHING
if cache and data in cls.cache.get(cls.ID,{}):
return cls.cache[cls.ID].retrieve(data)
key = (attribute_id,flag | Attribute.Flag.EXTENDED_LENGTH)
if key in Attribute.registered_attributes.keys():
instance = cls.klass(attribute_id,flag).unpack(data,negotiated)
if cache:
cls.cache.cache[cls.ID].cache(data,instance)
return instance
raise Notify (2,4,'can not handle attribute id %s' % attribute_id)
@classmethod
def setCache (cls):
if not cls.cache:
for attribute in Attribute.ID.names:
if attribute not in cls.cache:
cls.cache[attribute] = Cache()
Attribute.setCache()
| 28.847826 | 108 | 0.619442 |
acf7290a7e370e0ebd878afd48ef5774e07743e8 | 578 | py | Python | aoc2021/03/d3_2.py | kewbish/ka-algorithms | 7a893fdaebd99530eaf0d9633c2721763707e92f | [
"MIT"
] | null | null | null | aoc2021/03/d3_2.py | kewbish/ka-algorithms | 7a893fdaebd99530eaf0d9633c2721763707e92f | [
"MIT"
] | null | null | null | aoc2021/03/d3_2.py | kewbish/ka-algorithms | 7a893fdaebd99530eaf0d9633c2721763707e92f | [
"MIT"
] | null | null | null | with open("input.txt") as x:
reports = x.read().splitlines()
def find_rate(reports, condition) -> int:
rates = reports
i = 0
while len(rates) != 1:
rotated_rates = list(zip(*rates))
zc = rotated_rates[i].count("0")
oc = rotated_rates[i].count("1")
rates = list(filter(lambda rate: rate[i] == condition(zc, oc), rates))
i += 1
rate = int(rates[0], 2)
return rate
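# Worked example (hypothetical reports, not the puzzle input): with
# reports = ["10", "01", "11"] and the oxygen rule below, bit 0 keeps
# ["10", "11"], bit 1 keeps ["11"], so find_rate returns int("11", 2) == 3.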
oxy = find_rate(reports, lambda zc, oc: "0" if zc > oc else "1")
co2 = find_rate(reports, lambda zc, oc: "0" if zc <= oc else "1")
print(oxy * co2)
| 27.52381 | 78 | 0.581315 |
acf729f2dc8e1f7902e242aefbcd45130e537106 | 5,101 | py | Python | sdks/python/apache_beam/utils/urns.py | cttestid41/Apache_Beam | 724eda37ea1e54aac089d89c711ca3cee14a4603 | [
"Apache-2.0"
] | null | null | null | sdks/python/apache_beam/utils/urns.py | cttestid41/Apache_Beam | 724eda37ea1e54aac089d89c711ca3cee14a4603 | [
"Apache-2.0"
] | 12 | 2019-11-13T04:59:52.000Z | 2021-12-14T21:13:47.000Z | sdks/python/apache_beam/utils/urns.py | cttestid41/Apache_Beam | 724eda37ea1e54aac089d89c711ca3cee14a4603 | [
"Apache-2.0"
] | 2 | 2017-09-23T14:41:17.000Z | 2018-08-29T02:57:03.000Z | #
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""For internal use only; no backwards-compatibility guarantees."""
import abc
import inspect
from google.protobuf import wrappers_pb2
from apache_beam.internal import pickler
from apache_beam.utils import proto_utils
PICKLED_WINDOW_FN = "beam:windowfn:pickled_python:v0.1"
GLOBAL_WINDOWS_FN = "beam:windowfn:global_windows:v0.1"
FIXED_WINDOWS_FN = "beam:windowfn:fixed_windows:v0.1"
SLIDING_WINDOWS_FN = "beam:windowfn:sliding_windows:v0.1"
SESSION_WINDOWS_FN = "beam:windowfn:session_windows:v0.1"
PICKLED_DO_FN = "beam:dofn:pickled_python:v0.1"
PICKLED_DO_FN_INFO = "beam:dofn:pickled_python_info:v0.1"
PICKLED_COMBINE_FN = "beam:combinefn:pickled_python:v0.1"
PICKLED_CODER = "beam:coder:pickled_python:v0.1"
PICKLED_TRANSFORM = "beam:ptransform:pickled_python:v0.1"
PARDO_TRANSFORM = "beam:ptransform:pardo:v0.1"
GROUP_BY_KEY_TRANSFORM = "beam:ptransform:group_by_key:v0.1"
GROUP_BY_KEY_ONLY_TRANSFORM = "beam:ptransform:group_by_key_only:v0.1"
GROUP_ALSO_BY_WINDOW_TRANSFORM = "beam:ptransform:group_also_by_window:v0.1"
COMBINE_PER_KEY_TRANSFORM = "beam:ptransform:combine_per_key:v0.1"
COMBINE_GROUPED_VALUES_TRANSFORM = "beam:ptransform:combine_grouped_values:v0.1"
FLATTEN_TRANSFORM = "beam:ptransform:flatten:v0.1"
READ_TRANSFORM = "beam:ptransform:read:v0.1"
WINDOW_INTO_TRANSFORM = "beam:ptransform:window_into:v0.1"
PICKLED_SOURCE = "beam:source:pickled_python:v0.1"
class RunnerApiFn(object):
"""Abstract base class that provides urn registration utilities.
A class that inherits from this class will get a registration-based
from_runner_api and to_runner_api method that convert to and from
beam_runner_api_pb2.SdkFunctionSpec.
Additionally, register_pickle_urn can be called from the body of a class
to register serialization via pickling.
"""
# TODO(BEAM-2685): Issue with dill + local classes + abc metaclass
# __metaclass__ = abc.ABCMeta
_known_urns = {}
@abc.abstractmethod
def to_runner_api_parameter(self, unused_context):
"""Returns the urn and payload for this Fn.
The returned urn(s) should be registered with `register_urn`.
"""
pass
@classmethod
def register_urn(cls, urn, parameter_type, fn=None):
"""Registeres a urn with a constructor.
For example, if 'beam:fn:foo' had paramter type FooPayload, one could
write `RunnerApiFn.register_urn('bean:fn:foo', FooPayload, foo_from_proto)`
where foo_from_proto took as arguments a FooPayload and a PipelineContext.
This function can also be used as a decorator rather than passing the
callable in as the final parameter.
A corresponding to_runner_api_parameter method would be expected that
returns the tuple ('beam:fn:foo', FooPayload)
"""
def register(fn):
cls._known_urns[urn] = parameter_type, fn
return staticmethod(fn)
if fn:
# Used as a statement.
register(fn)
else:
# Used as a decorator.
return register
@classmethod
def register_pickle_urn(cls, pickle_urn):
"""Registers and implements the given urn via pickling.
"""
inspect.currentframe().f_back.f_locals['to_runner_api_parameter'] = (
lambda self, context: (
pickle_urn, wrappers_pb2.BytesValue(value=pickler.dumps(self))))
cls.register_urn(
pickle_urn,
wrappers_pb2.BytesValue,
lambda proto, unused_context: pickler.loads(proto.value))
def to_runner_api(self, context):
"""Returns an SdkFunctionSpec encoding this Fn.
Prefer overriding self.to_runner_api_parameter.
"""
from apache_beam.portability.api import beam_runner_api_pb2
urn, typed_param = self.to_runner_api_parameter(context)
return beam_runner_api_pb2.SdkFunctionSpec(
spec=beam_runner_api_pb2.FunctionSpec(
urn=urn,
any_param=proto_utils.pack_Any(typed_param),
payload=typed_param.SerializeToString()
if typed_param is not None else None))
@classmethod
def from_runner_api(cls, fn_proto, context):
"""Converts from an SdkFunctionSpec to a Fn object.
Prefer registering a urn with its parameter type and constructor.
"""
parameter_type, constructor = cls._known_urns[fn_proto.spec.urn]
return constructor(
proto_utils.parse_Bytes(fn_proto.spec.payload, parameter_type),
context)
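# Hypothetical sketch (not part of this module): a subclass could pair
# to_runner_api_parameter with the decorator form of register_urn. The urn,
# payload type and class below are invented for illustration.
#
#   class FooFn(RunnerApiFn):
#
#     def to_runner_api_parameter(self, context):
#       return 'beam:fn:foo:v0.1', wrappers_pb2.BytesValue(value=b'payload')
#
#     @RunnerApiFn.register_urn('beam:fn:foo:v0.1', wrappers_pb2.BytesValue)
#     def from_proto(proto, unused_context):
#       return FooFn()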
| 37.233577 | 80 | 0.755146 |
acf72a44e899c44de7a8c8ab20794ffd11916ea5 | 1,116 | py | Python | tests/test_plots.py | ahartikainen/pyinfraformat | 98a0f1b20163c60289d2788f5d702768be4727dc | [
"Apache-2.0"
] | 2 | 2018-11-13T08:01:29.000Z | 2018-11-13T08:56:09.000Z | tests/test_plots.py | ahartikainen/pyinfraformat | 98a0f1b20163c60289d2788f5d702768be4727dc | [
"Apache-2.0"
] | 39 | 2018-10-22T06:32:12.000Z | 2022-01-22T14:03:27.000Z | tests/test_plots.py | ahartikainen/pyinfraformat | 98a0f1b20163c60289d2788f5d702768be4727dc | [
"Apache-2.0"
] | null | null | null | import os
from glob import glob
from uuid import uuid4
import folium
import matplotlib.pyplot as plt
import pytest
from pyinfraformat import from_gtk_wfs, from_infraformat, plot_map
from .helpers import ping_gtk
def get_object():
here = os.path.dirname(os.path.abspath(__file__))
filepath = os.path.join(here, "test_data", "infraformat_hole_types.tek")
object = from_infraformat(filepath)
return object
def test_holes_plot():
holes = get_object()
for hole in holes:
try:
fig = hole.plot()
assert isinstance(fig, plt.Figure)
except NotImplementedError:
pass
def test_map():
holes = get_object()
holes_map = plot_map(holes)
assert isinstance(holes_map, folium.Map)
@pytest.mark.skipif(not ping_gtk(), reason="GTK DB not available")
def test_gtk_map():
holes = get_object()
holes_map = plot_map(holes)
bbox = (60.12065, 24.4421945, 60.1208, 24.443) # Bbox with empty and missing data holes
holes = from_gtk_wfs(bbox, "WGS84")
holes_map = plot_map(holes)
assert isinstance(holes_map, folium.Map)
| 24.8 | 92 | 0.695341 |
acf72a5977ab06a6eebb0068ce36391ed0a93409 | 248 | py | Python | output/models/nist_data/list_pkg/language/schema_instance/nistschema_sv_iv_list_language_pattern_1_xsd/__init__.py | tefra/xsdata-w3c-tests | b6b6a4ac4e0ab610e4b50d868510a8b7105b1a5f | [
"MIT"
] | 1 | 2021-08-14T17:59:21.000Z | 2021-08-14T17:59:21.000Z | output/models/nist_data/list_pkg/language/schema_instance/nistschema_sv_iv_list_language_pattern_1_xsd/__init__.py | tefra/xsdata-w3c-tests | b6b6a4ac4e0ab610e4b50d868510a8b7105b1a5f | [
"MIT"
] | 4 | 2020-02-12T21:30:44.000Z | 2020-04-15T20:06:46.000Z | output/models/nist_data/list_pkg/language/schema_instance/nistschema_sv_iv_list_language_pattern_1_xsd/__init__.py | tefra/xsdata-w3c-tests | b6b6a4ac4e0ab610e4b50d868510a8b7105b1a5f | [
"MIT"
] | null | null | null | from output.models.nist_data.list_pkg.language.schema_instance.nistschema_sv_iv_list_language_pattern_1_xsd.nistschema_sv_iv_list_language_pattern_1 import NistschemaSvIvListLanguagePattern1
__all__ = [
"NistschemaSvIvListLanguagePattern1",
]
| 41.333333 | 190 | 0.891129 |
acf72acb26cd49fbb8fdcfc58c22b26a1875bae5 | 13,257 | py | Python | tools/analysis_tools/optimize_anchors.py | chenxinfeng4/mmdetection | a99a1aaa5e4a7614f2f89f2350e1b917b2a8ca7e | [
"Apache-2.0"
] | 6 | 2021-12-18T07:23:35.000Z | 2022-02-26T04:38:26.000Z | tools/analysis_tools/optimize_anchors.py | chenxinfeng4/mmdetection | a99a1aaa5e4a7614f2f89f2350e1b917b2a8ca7e | [
"Apache-2.0"
] | 1 | 2022-01-06T14:58:44.000Z | 2022-01-06T14:58:44.000Z | tools/analysis_tools/optimize_anchors.py | chenxinfeng4/mmdetection | a99a1aaa5e4a7614f2f89f2350e1b917b2a8ca7e | [
"Apache-2.0"
] | 1 | 2021-12-12T13:35:22.000Z | 2021-12-12T13:35:22.000Z | # Copyright (c) OpenMMLab. All rights reserved.
"""Optimize anchor settings on a specific dataset.
This script provides two method to optimize YOLO anchors including k-means
anchor cluster and differential evolution. You can use ``--algorithm k-means``
and ``--algorithm differential_evolution`` to switch two method.
Example:
Use k-means anchor cluster::
python tools/analysis_tools/optimize_anchors.py ${CONFIG} \
--algorithm k-means --input-shape ${INPUT_SHAPE [WIDTH HEIGHT]} \
--output-dir ${OUTPUT_DIR}
Use differential evolution to optimize anchors::
python tools/analysis_tools/optimize_anchors.py ${CONFIG} \
--algorithm differential_evolution \
--input-shape ${INPUT_SHAPE [WIDTH HEIGHT]} \
--output-dir ${OUTPUT_DIR}
"""
import argparse
import os.path as osp
import mmcv
import numpy as np
import torch
from mmcv import Config
from scipy.optimize import differential_evolution
from mmdet.core import bbox_cxcywh_to_xyxy, bbox_overlaps, bbox_xyxy_to_cxcywh
from mmdet.datasets import build_dataset
from mmdet.utils import get_root_logger, update_data_root
def parse_args():
parser = argparse.ArgumentParser(description='Optimize anchor parameters.')
parser.add_argument('config', help='Train config file path.')
parser.add_argument(
'--device', default='cuda:0', help='Device used for calculating.')
parser.add_argument(
'--input-shape',
type=int,
nargs='+',
default=[608, 608],
help='input image size')
parser.add_argument(
'--algorithm',
default='differential_evolution',
help='Algorithm used for anchor optimizing.'
'Support k-means and differential_evolution for YOLO.')
parser.add_argument(
'--iters',
default=1000,
type=int,
help='Maximum iterations for optimizer.')
parser.add_argument(
'--output-dir',
default=None,
type=str,
help='Path to save anchor optimize result.')
args = parser.parse_args()
return args
class BaseAnchorOptimizer:
"""Base class for anchor optimizer.
Args:
dataset (obj:`Dataset`): Dataset object.
input_shape (list[int]): Input image shape of the model.
Format in [width, height].
logger (obj:`logging.Logger`): The logger for logging.
device (str, optional): Device used for calculating.
Default: 'cuda:0'
out_dir (str, optional): Path to save anchor optimize result.
Default: None
"""
def __init__(self,
dataset,
input_shape,
logger,
device='cuda:0',
out_dir=None):
self.dataset = dataset
self.input_shape = input_shape
self.logger = logger
self.device = device
self.out_dir = out_dir
bbox_whs, img_shapes = self.get_whs_and_shapes()
ratios = img_shapes.max(1, keepdims=True) / np.array([input_shape])
# resize to input shape
self.bbox_whs = bbox_whs / ratios
def get_whs_and_shapes(self):
"""Get widths and heights of bboxes and shapes of images.
Returns:
tuple[np.ndarray]: Array of bbox shapes and array of image
shapes with shape (num_bboxes, 2) in [width, height] format.
"""
self.logger.info('Collecting bboxes from annotation...')
bbox_whs = []
img_shapes = []
prog_bar = mmcv.ProgressBar(len(self.dataset))
for idx in range(len(self.dataset)):
ann = self.dataset.get_ann_info(idx)
data_info = self.dataset.data_infos[idx]
img_shape = np.array([data_info['width'], data_info['height']])
gt_bboxes = ann['bboxes']
for bbox in gt_bboxes:
wh = bbox[2:4] - bbox[0:2]
img_shapes.append(img_shape)
bbox_whs.append(wh)
prog_bar.update()
print('\n')
bbox_whs = np.array(bbox_whs)
img_shapes = np.array(img_shapes)
self.logger.info(f'Collected {bbox_whs.shape[0]} bboxes.')
return bbox_whs, img_shapes
def get_zero_center_bbox_tensor(self):
"""Get a tensor of bboxes centered at (0, 0).
Returns:
Tensor: Tensor of bboxes with shape (num_bboxes, 4)
in [xmin, ymin, xmax, ymax] format.
"""
whs = torch.from_numpy(self.bbox_whs).to(
self.device, dtype=torch.float32)
bboxes = bbox_cxcywh_to_xyxy(
torch.cat([torch.zeros_like(whs), whs], dim=1))
return bboxes
def optimize(self):
raise NotImplementedError
def save_result(self, anchors, path=None):
anchor_results = []
for w, h in anchors:
anchor_results.append([round(w), round(h)])
self.logger.info(f'Anchor optimize result:{anchor_results}')
if path:
json_path = osp.join(path, 'anchor_optimize_result.json')
mmcv.dump(anchor_results, json_path)
self.logger.info(f'Result saved in {json_path}')
class YOLOKMeansAnchorOptimizer(BaseAnchorOptimizer):
r"""YOLO anchor optimizer using k-means. Code refer to `AlexeyAB/darknet.
<https://github.com/AlexeyAB/darknet/blob/master/src/detector.c>`_.
Args:
num_anchors (int) : Number of anchors.
iters (int): Maximum iterations for k-means.
"""
def __init__(self, num_anchors, iters, **kwargs):
super(YOLOKMeansAnchorOptimizer, self).__init__(**kwargs)
self.num_anchors = num_anchors
self.iters = iters
def optimize(self):
anchors = self.kmeans_anchors()
self.save_result(anchors, self.out_dir)
def kmeans_anchors(self):
self.logger.info(
f'Start cluster {self.num_anchors} YOLO anchors with K-means...')
bboxes = self.get_zero_center_bbox_tensor()
cluster_center_idx = torch.randint(
0, bboxes.shape[0], (self.num_anchors, )).to(self.device)
assignments = torch.zeros((bboxes.shape[0], )).to(self.device)
cluster_centers = bboxes[cluster_center_idx]
if self.num_anchors == 1:
cluster_centers = self.kmeans_maximization(bboxes, assignments,
cluster_centers)
anchors = bbox_xyxy_to_cxcywh(cluster_centers)[:, 2:].cpu().numpy()
anchors = sorted(anchors, key=lambda x: x[0] * x[1])
return anchors
prog_bar = mmcv.ProgressBar(self.iters)
for i in range(self.iters):
converged, assignments = self.kmeans_expectation(
bboxes, assignments, cluster_centers)
if converged:
self.logger.info(f'K-means process has converged at iter {i}.')
break
cluster_centers = self.kmeans_maximization(bboxes, assignments,
cluster_centers)
prog_bar.update()
print('\n')
avg_iou = bbox_overlaps(bboxes,
cluster_centers).max(1)[0].mean().item()
anchors = bbox_xyxy_to_cxcywh(cluster_centers)[:, 2:].cpu().numpy()
anchors = sorted(anchors, key=lambda x: x[0] * x[1])
self.logger.info(f'Anchor cluster finish. Average IOU: {avg_iou}')
return anchors
def kmeans_maximization(self, bboxes, assignments, centers):
"""Maximization part of EM algorithm(Expectation-Maximization)"""
new_centers = torch.zeros_like(centers)
for i in range(centers.shape[0]):
mask = (assignments == i)
if mask.sum():
new_centers[i, :] = bboxes[mask].mean(0)
return new_centers
def kmeans_expectation(self, bboxes, assignments, centers):
"""Expectation part of EM algorithm(Expectation-Maximization)"""
ious = bbox_overlaps(bboxes, centers)
closest = ious.argmax(1)
converged = (closest == assignments).all()
return converged, closest
class YOLODEAnchorOptimizer(BaseAnchorOptimizer):
"""YOLO anchor optimizer using differential evolution algorithm.
Args:
num_anchors (int) : Number of anchors.
        iters (int): Maximum iterations for the differential evolution optimizer.
strategy (str): The differential evolution strategy to use.
Should be one of:
- 'best1bin'
- 'best1exp'
- 'rand1exp'
- 'randtobest1exp'
- 'currenttobest1exp'
- 'best2exp'
- 'rand2exp'
- 'randtobest1bin'
- 'currenttobest1bin'
- 'best2bin'
- 'rand2bin'
- 'rand1bin'
Default: 'best1bin'.
population_size (int): Total population size of evolution algorithm.
Default: 15.
convergence_thr (float): Tolerance for convergence, the
optimizing stops when ``np.std(pop) <= abs(convergence_thr)
+ convergence_thr * np.abs(np.mean(population_energies))``,
respectively. Default: 0.0001.
mutation (tuple[float]): Range of dithering randomly changes the
mutation constant. Default: (0.5, 1).
recombination (float): Recombination constant of crossover probability.
Default: 0.7.
"""
def __init__(self,
num_anchors,
iters,
strategy='best1bin',
population_size=15,
convergence_thr=0.0001,
mutation=(0.5, 1),
recombination=0.7,
**kwargs):
super(YOLODEAnchorOptimizer, self).__init__(**kwargs)
self.num_anchors = num_anchors
self.iters = iters
self.strategy = strategy
self.population_size = population_size
self.convergence_thr = convergence_thr
self.mutation = mutation
self.recombination = recombination
def optimize(self):
anchors = self.differential_evolution()
self.save_result(anchors, self.out_dir)
def differential_evolution(self):
bboxes = self.get_zero_center_bbox_tensor()
bounds = []
for i in range(self.num_anchors):
bounds.extend([(0, self.input_shape[0]), (0, self.input_shape[1])])
result = differential_evolution(
func=self.avg_iou_cost,
bounds=bounds,
args=(bboxes, ),
strategy=self.strategy,
maxiter=self.iters,
popsize=self.population_size,
tol=self.convergence_thr,
mutation=self.mutation,
recombination=self.recombination,
updating='immediate',
disp=True)
self.logger.info(
f'Anchor evolution finish. Average IOU: {1 - result.fun}')
anchors = [(w, h) for w, h in zip(result.x[::2], result.x[1::2])]
anchors = sorted(anchors, key=lambda x: x[0] * x[1])
return anchors
@staticmethod
def avg_iou_cost(anchor_params, bboxes):
assert len(anchor_params) % 2 == 0
anchor_whs = torch.tensor(
[[w, h]
for w, h in zip(anchor_params[::2], anchor_params[1::2])]).to(
bboxes.device, dtype=bboxes.dtype)
anchor_boxes = bbox_cxcywh_to_xyxy(
torch.cat([torch.zeros_like(anchor_whs), anchor_whs], dim=1))
ious = bbox_overlaps(bboxes, anchor_boxes)
max_ious, _ = ious.max(1)
cost = 1 - max_ious.mean().item()
return cost
def main():
logger = get_root_logger()
args = parse_args()
cfg = args.config
cfg = Config.fromfile(cfg)
# update data root according to MMDET_DATASETS
update_data_root(cfg)
input_shape = args.input_shape
assert len(input_shape) == 2
anchor_type = cfg.model.bbox_head.anchor_generator.type
assert anchor_type == 'YOLOAnchorGenerator', \
f'Only support optimize YOLOAnchor, but get {anchor_type}.'
base_sizes = cfg.model.bbox_head.anchor_generator.base_sizes
num_anchors = sum([len(sizes) for sizes in base_sizes])
train_data_cfg = cfg.data.train
while 'dataset' in train_data_cfg:
train_data_cfg = train_data_cfg['dataset']
dataset = build_dataset(train_data_cfg)
if args.algorithm == 'k-means':
optimizer = YOLOKMeansAnchorOptimizer(
dataset=dataset,
input_shape=input_shape,
device=args.device,
num_anchors=num_anchors,
iters=args.iters,
logger=logger,
out_dir=args.output_dir)
elif args.algorithm == 'differential_evolution':
optimizer = YOLODEAnchorOptimizer(
dataset=dataset,
input_shape=input_shape,
device=args.device,
num_anchors=num_anchors,
iters=args.iters,
logger=logger,
out_dir=args.output_dir)
else:
raise NotImplementedError(
f'Only support k-means and differential_evolution, '
f'but get {args.algorithm}')
optimizer.optimize()
if __name__ == '__main__':
main()
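# A minimal invocation sketch. It assumes this script is saved as optimize_anchors.py and that
# parse_args() (defined earlier in this file) exposes flags matching the attributes used in
# main(); the config path and exact flag spellings below are illustrative, not verified:
#
#   python optimize_anchors.py configs/yolo/yolov3_config.py \
#       --algorithm k-means --input-shape 608 608 --device cuda --output-dir work_dirs/anchors
#
# Passing --algorithm differential_evolution selects YOLODEAnchorOptimizer instead.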
| 35.446524 | 79 | 0.605718 |
acf72b53e0e7f3899a15fc0b29106f493f02cf95 | 3,958 | py | Python | tests/unit/test_record.py | vigetlabs/dnsimple | 4700a1913409dfba31e71f2c03ca00c23333e3ef | [
"MIT"
] | 3 | 2017-03-04T05:09:34.000Z | 2020-01-03T10:33:59.000Z | tests/unit/test_record.py | vigetlabs/dnsimple | 4700a1913409dfba31e71f2c03ca00c23333e3ef | [
"MIT"
] | 5 | 2016-07-22T18:15:43.000Z | 2018-06-01T11:02:36.000Z | tests/unit/test_record.py | vigetlabs/dnsimple | 4700a1913409dfba31e71f2c03ca00c23333e3ef | [
"MIT"
] | 2 | 2018-05-13T09:17:08.000Z | 2021-05-24T15:24:37.000Z | import pytest
from ..context import dnsimple
from ..request_helper import RequestHelper, request
from dnsimple.models import Record, Domain
@pytest.fixture
def domain(request):
return Domain(request, {'name':'foo.com'})
@pytest.fixture
def subject(request, domain):
return Record(request, domain, {'id': 1})
class TestRecord(RequestHelper, object):
def test_assign_assigns_attributes(self, subject):
subject.assign({'name': 'www'})
assert subject.id == 1
assert subject.name == 'www'
def test_update_sends_update_request(self, mocker, request, domain):
method = self.stub_request(mocker, request, method_name = 'put', data = {})
subject = Record(request, domain, {'name': 'www', 'id': 1})
result = subject.update({'name':'other'})
method.assert_called_once_with('domains/foo.com/records/1', {'record': {'name':'other'}})
assert result is True
def test_update_returns_false_when_request_fails(self, mocker, request, domain):
method = self.stub_request(mocker, request, method_name = 'put', success = False, data = {})
subject = Record(request, domain, {'name': 'www', 'id': 1})
assert subject.update({}) is False
def test_update_assigns_attributes(self, mocker, request, domain):
method = self.stub_request(mocker, request, method_name = 'put', data = {})
subject = Record(request, domain, {'name': 'www', 'id': 1})
subject.update({'name':'other'})
assert subject.name == 'other'
def test_delete_removes_record_from_domain(self, mocker, request, domain):
method = self.stub_request(mocker, request, method_name = 'delete', data = {})
subject = Record(request, domain, {'name': 'www', 'id': 1})
result = subject.delete()
method.assert_called_once_with('domains/foo.com/records/1')
assert result is True
def test_delete_returns_false_when_removal_fails(self, mocker, request, domain):
method = self.stub_request(mocker, request, method_name = 'delete', success = False)
subject = Record(request, domain, {'name': 'www', 'id': 1})
assert subject.delete() is False
def test_to_dict_returns_attributes(self, request, domain):
subject = Record(request, domain, {
"id" : 1,
"content" : "ns1.dnsimple.com",
"created_at" : "2016-08-01T00:00:00.000Z",
"domain_id" : 24819,
"name" : "Name",
"parent_id" : 2,
"prio" : 30,
"record_type" : "NS",
"system_record": True,
"ttl" : 3600,
"updated_at" : "2016-08-01T00:00:00.000Z"
})
assert subject.to_dict() == {
"id" : 1,
"content" : "ns1.dnsimple.com",
"created_at" : "2016-08-01T00:00:00.000Z",
"domain_id" : 24819,
"name" : "Name",
"parent_id" : 2,
"prio" : 30,
"record_type" : "NS",
"system_record": True,
"ttl" : 3600,
"updated_at" : "2016-08-01T00:00:00.000Z"
}
def test_not_equal_when_no_ids(self, request, domain):
a = Record(request, domain, {})
b = Record(request, domain, {})
assert a != b
def test_not_equal_when_only_one_id(self, request, domain):
a = Record(request, domain, {'id': 1})
b = Record(request, domain, {})
assert a != b
def test_not_equal_when_ids_differ(self, request, domain):
a = Record(request, domain, {'id': 1})
b = Record(request, domain, {'id': 2})
assert a != b
def test_equal_when_ids_are_the_same(self, request, domain):
a = Record(request, domain, {'id': 1})
b = Record(request, domain, {'id': 1})
assert a == b
| 34.417391 | 101 | 0.57428 |
acf72c498f373c337defd8ef6adadb98467f3604 | 529 | py | Python | algo/lc.array.21.py | cdluminate/MyNotes | cf28f2a3fa72723153147e21fed5e7b598baf44f | [
"CC0-1.0"
] | null | null | null | algo/lc.array.21.py | cdluminate/MyNotes | cf28f2a3fa72723153147e21fed5e7b598baf44f | [
"CC0-1.0"
] | null | null | null | algo/lc.array.21.py | cdluminate/MyNotes | cf28f2a3fa72723153147e21fed5e7b598baf44f | [
"CC0-1.0"
] | null | null | null | from typing import List
class Solution:
def removeDuplicates(self, nums: List[int]) -> int:
if len(nums) == 0: return 0
idx = 1
for i in range(1, len(nums)):
if nums[i] > nums[idx-1]:
nums[idx] = nums[i]
idx += 1
return idx
class Solution:
def removeDuplicates(self, nums: List[int]) -> int:
if not nums: return 0
idx = 0
for j in nums:
if j != nums[idx]:
idx += 1
nums[idx] = j
return idx+1
| 26.45 | 55 | 0.457467 |
acf72e24074020f8cc7766ea381affbefe46bc09 | 2,267 | py | Python | labtory-master/AutoEncoder/model/auto_encoder_model_256.py | yumion/onodera-lab | 34f06e1f0eff8ce3a8d02ddc07e90ce4d0635c9c | [
"Apache-2.0"
] | null | null | null | labtory-master/AutoEncoder/model/auto_encoder_model_256.py | yumion/onodera-lab | 34f06e1f0eff8ce3a8d02ddc07e90ce4d0635c9c | [
"Apache-2.0"
] | null | null | null | labtory-master/AutoEncoder/model/auto_encoder_model_256.py | yumion/onodera-lab | 34f06e1f0eff8ce3a8d02ddc07e90ce4d0635c9c | [
"Apache-2.0"
] | null | null | null | # coding: utf-8
from keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D, BatchNormalization, Activation
from keras.models import Model
from keras.utils import plot_model
def set_ae(summary=True, model_to_png=False, channel=3):
input_img = Input(shape=(256, 256, channel))
x = Conv2D(16, (3, 3), padding='same')(input_img)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(32, (3, 3), padding='same')(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(32, (3, 3), padding='same')(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(64, (3, 3), padding='same')(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(64, (3, 3), padding='same')(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
encoded = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(64, (3, 3), padding='same')(encoded)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = UpSampling2D((2, 2))(x)
x = Conv2D(64, (3, 3), padding='same')(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = UpSampling2D((2, 2))(x)
x = Conv2D(32, (3, 3), padding='same')(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = UpSampling2D((2, 2))(x)
x = Conv2D(32, (3, 3), padding='same')(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = UpSampling2D((2, 2))(x)
x = Conv2D(16, (3, 3), padding='same')(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = UpSampling2D((2, 2))(x)
x = Conv2D(channel, (3, 3), padding='same')(x)
x = BatchNormalization()(x)
decoded = Activation('sigmoid')(x)
model = Model(input_img, decoded)
if model_to_png:
plot_model(model, to_file='./vae_model.png', show_shapes=True)
if summary:
model.summary()
return model, input_img, decoded
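# A minimal smoke-test sketch with dummy data (shapes only, not a meaningful training run):
#
#   import numpy as np
#   model, _, _ = set_ae(summary=False, channel=3)
#   model.compile(optimizer='adam', loss='binary_crossentropy')
#   x = np.random.rand(4, 256, 256, 3).astype('float32')
#   model.fit(x, x, epochs=1, batch_size=2)   # autoencoder: the input is its own target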
if __name__ == '__main__':
vae_model = set_ae()
vae_model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy']) | 33.338235 | 98 | 0.600794 |
acf72e6913d1456b5ea552b43bfa86f9735bb93f | 5,722 | py | Python | generate_masks.py | crossknight/angiodysplasia-segmentation | 84216bbcf0b2a890fcaddad33487649bbc7d9222 | [
"MIT"
] | null | null | null | generate_masks.py | crossknight/angiodysplasia-segmentation | 84216bbcf0b2a890fcaddad33487649bbc7d9222 | [
"MIT"
] | null | null | null | generate_masks.py | crossknight/angiodysplasia-segmentation | 84216bbcf0b2a890fcaddad33487649bbc7d9222 | [
"MIT"
] | null | null | null | """
Script generates predictions, splitting original images into tiles, and assembling prediction back together
"""
import argparse
from prepare_train_val import get_split
from dataset import AngyodysplasiaDataset, MultiAngyodysplasiaDataset
import cv2
from models import UNet, UNet11, UNet16, AlbuNet34, SEAlbuNet34, MultiSEAlbuNet34
import torch
from pathlib import Path
from tqdm import tqdm
import numpy as np
import utils
# import prepare_data
from torch.utils.data import DataLoader
from torch.nn import functional as F
from transforms import (ImageOnly,
Normalize,
CenterCrop,
DualCompose)
img_transform = DualCompose([
CenterCrop(512),
ImageOnly(Normalize())
])
def get_model(model_path, model_type):
"""
:param model_path:
:param model_type: 'UNet', 'UNet11', 'UNet16', 'AlbuNet34'
:return:
"""
num_classes = 1
if model_type == 'UNet11':
model = UNet11(num_classes=num_classes)
elif model_type == 'UNet16':
model = UNet16(num_classes=num_classes)
elif model_type == 'AlbuNet34':
model = AlbuNet34(num_classes=num_classes)
elif model_type == 'UNet':
model = UNet(num_classes=num_classes)
elif model_type == 'SEAlbuNet':
model = SEAlbuNet34(num_classes=num_classes)
elif model_type == 'MultiSEAlbuNet':
model = MultiSEAlbuNet34(num_classes=num_classes)
else:
model = UNet(num_classes=num_classes)
state = torch.load(str(model_path))
state = {key.replace('module.', ''): value for key, value in state['model'].items()}
model.load_state_dict(state)
    # switch to eval mode before returning; previously eval() was unreachable on the CUDA path
    model.eval()
    if torch.cuda.is_available():
        return model.cuda()
    return model
def predict(model, from_file_names, batch_size: int, to_path):
loader = DataLoader(
dataset=AngyodysplasiaDataset(from_file_names, transform=img_transform, mode='predict'),
shuffle=False,
batch_size=batch_size,
num_workers=args.workers,
pin_memory=torch.cuda.is_available()
)
#for batch_num, (inputs, paths, gt_label) in enumerate(tqdm(loader, desc='Predict')):
for batch_num, (inputs, paths) in enumerate(tqdm(loader, desc='Predict')):
inputs = utils.variable(inputs, volatile=True)
#outputs, coutputs = model(inputs)
outputs = model(inputs)
for i, image_name in enumerate(paths):
mask = (torch.sigmoid(outputs[i, 0]).data.cpu().numpy() * 255).astype(np.uint8)
h, w = mask.shape
full_mask = np.zeros((576, 576))
full_mask[32:32 + h, 32:32 + w] = mask
(to_path / args.model_type).mkdir(exist_ok=True, parents=True)
cv2.imwrite(str(to_path / args.model_type / (Path(paths[i]).stem + '.png')), full_mask)
def predict_multi(model, from_file_names, batch_size: int, to_path):
loader = DataLoader(
dataset=MultiAngyodysplasiaDataset(from_file_names, transform=img_transform, mode='predict'),
shuffle=False,
batch_size=batch_size,
num_workers=args.workers,
pin_memory=torch.cuda.is_available()
)
for batch_num, (inputs, paths, gt_label) in enumerate(tqdm(loader, desc='Predict')):
inputs = utils.variable(inputs, volatile=True)
outputs, coutputs = model(inputs)
for i, image_name in enumerate(paths):
mask = (torch.sigmoid(outputs[i, 0]).data.cpu().numpy() * 255).astype(np.uint8)
h, w = mask.shape
full_mask = np.zeros((576, 576))
full_mask[32:32 + h, 32:32 + w] = mask
(to_path / args.model_type).mkdir(exist_ok=True, parents=True)
cv2.imwrite(str(to_path / args.model_type / (Path(paths[i]).stem + '.png')), full_mask)
if __name__ == '__main__':
parser = argparse.ArgumentParser()
arg = parser.add_argument
arg('--model_path', type=str, default='data/models/UNet', help='path to model folder')
arg('--model_type', type=str, default='UNet', help='network architecture',
choices=['UNet', 'UNet11', 'UNet16', 'AlbuNet34', 'SEAlbuNet', 'MultiSEAlbuNet'])
arg('--batch-size', type=int, default=4)
arg('--fold', type=int, default=-1, choices=[0, 1, 2, 3, 4, -1], help='-1: all folds')
arg('--workers', type=int, default=12)
arg('--multiple-output', type=bool, default=False)
args = parser.parse_args()
if args.fold == -1:
for fold in [0, 1, 2, 3, 4]:
_, file_names = get_split(fold)
print(len(file_names))
model = get_model(str(Path(args.model_path).joinpath('best_model_{fold}.pt'.format(fold=fold))),
model_type=args.model_type)
print('num file_names = {}'.format(len(file_names)))
output_path = Path(args.model_path)
output_path.mkdir(exist_ok=True, parents=True)
if args.multiple_output == False:
predict(model, file_names, args.batch_size, output_path)
else:
predict_multi(model, file_names, args.batch_size, output_path)
else:
_, file_names = get_split(args.fold)
model = get_model(str(Path(args.model_path).joinpath('best_model_{fold}.pt'.format(fold=args.fold))),
model_type=args.model_type)
print('num file_names = {}'.format(len(file_names)))
output_path = Path(args.model_path)
output_path.mkdir(exist_ok=True, parents=True)
if args.multiple_output == False:
predict(model, file_names, args.batch_size, output_path)
else:
predict_multi(model, file_names, args.batch_size, output_path)
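# Launch sketch (the flags are the ones defined by the parser above; get_model() expects
# checkpoints named best_model_<fold>.pt inside --model_path):
#
#   python generate_masks.py --model_path data/models/AlbuNet34 --model_type AlbuNet34 \
#       --batch-size 4 --fold -1 --workers 12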
| 34.263473 | 109 | 0.640685 |
acf72f109621728a4e0a5c234f4c77643cdd101d | 2,062 | py | Python | Web/Page Scraper/page_scraper.py | fossabot/IdeaBag2-Solutions | 73b554d9796510fc86e5fc55016732aa866266c6 | [
"MIT"
] | 10 | 2018-07-06T22:05:45.000Z | 2021-05-22T11:29:04.000Z | Web/Page Scraper/page_scraper.py | jarik-marwede/IdeaBag2-Projects | c5fe9524ef03a6ebc098ab8aaee7448f5b877828 | [
"MIT"
] | 22 | 2018-07-13T17:16:43.000Z | 2022-01-11T11:16:08.000Z | Web/Page Scraper/page_scraper.py | jarik-marwede/IdeaBag2-Projects | c5fe9524ef03a6ebc098ab8aaee7448f5b877828 | [
"MIT"
] | 1 | 2020-06-13T18:53:51.000Z | 2020-06-13T18:53:51.000Z | #!/usr/bin/env python3
"""A web scraper that scrapes all images and links from a website.
Title:
Page Scraper
Description:
Create an application which connects to a site
and pulls out all links, or images,
and saves them to a list.
For added complexity,
organize the indexed content and don't allow duplicates.
Have it put the results into an easily searchable index file.
"""
import urllib.request
import bs4 as bs
PARSER = "html.parser"
def get_html(url: str) -> bytes:
"""Return html code of the specified website."""
html = urllib.request.urlopen(url).read()
return html
def get_image_urls(source: bytes) -> list:
"""Return the urls of all images in the specified html code."""
soup = bs.BeautifulSoup(source, PARSER)
images = soup.find_all("img")
urls = [image.get("src") for image in images]
return list(set(urls)) # remove duplicates
def get_links(source: bytes) -> list:
"""Return the urls of all links in the specified html code."""
soup = bs.BeautifulSoup(source, PARSER)
links = soup.find_all("a")
urls = [link.get("href") for link in links
if link.get("href")
and link.get("href") != "#"
and link.get("href") != "/"]
return list(set(urls)) # remove duplicates
def save_to_file(links: list, file_path: str):
"""Save the links to the specified file."""
text = ""
for index, link in enumerate(links):
text += " ".join((str(index), link, "\n"))
with open(file_path, "w") as file:
file.write(text)
def _start_interactively():
"""Start the program interactively through the command line."""
url = input("What url do you want to get images/links from? ")
html = get_html(url)
images = get_image_urls(html)
links = get_links(html)
print(" ".join(("Images: ", *images)))
print(" ".join(("Links: ", *links)))
file_path = input("What file do you want to save the links in? ")
save_to_file(images + links, file_path)
print("")
if __name__ == "__main__":
_start_interactively()
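# Non-interactive usage sketch of the helpers above (example.org is only a placeholder URL):
#
#   html = get_html("https://example.org")
#   print(get_image_urls(html))
#   print(get_links(html))
#   save_to_file(get_links(html), "links.txt")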
| 27.864865 | 69 | 0.654219 |
acf72f112eaa7996beab177566fdf0cc3faf3e80 | 25,123 | py | Python | samples/client/petstore-security-test/python/petstore_api/api_client.py | aaronclong/openapi-generator | 7437084cd35f5e084ffc684205241438cc2984f9 | [
"Apache-2.0"
] | 2 | 2020-10-22T14:40:06.000Z | 2021-01-31T03:34:13.000Z | samples/client/petstore-security-test/python/petstore_api/api_client.py | aaronclong/openapi-generator | 7437084cd35f5e084ffc684205241438cc2984f9 | [
"Apache-2.0"
] | 4 | 2021-09-20T22:32:46.000Z | 2022-02-27T15:31:33.000Z | samples/client/petstore-security-test/python/petstore_api/api_client.py | aaronclong/openapi-generator | 7437084cd35f5e084ffc684205241438cc2984f9 | [
"Apache-2.0"
] | 2 | 2020-03-05T14:10:41.000Z | 2022-03-19T23:52:35.000Z | # coding: utf-8
"""
OpenAPI Petstore */ ' \" =end -- \\r\\n \\n \\r
This spec is mainly for testing Petstore server and contains fake endpoints, models. Please do not use this for any other purpose. Special characters: \" \\ */ ' \" =end -- # noqa: E501
OpenAPI spec version: 1.0.0 */ ' \" =end -- \\r\\n \\n \\r
Contact: something@something.abc */ ' \" =end -- \\r\\n \\n \\r
Generated by: https://openapi-generator.tech
"""
from __future__ import absolute_import
import datetime
import json
import mimetypes
from multiprocessing.pool import ThreadPool
import os
import re
import tempfile
# python 2 and python 3 compatibility library
import six
from six.moves.urllib.parse import quote
from petstore_api.configuration import Configuration
import petstore_api.models
from petstore_api import rest
class ApiClient(object):
"""Generic API client for OpenAPI client library builds.
OpenAPI generic API client. This client handles the client-
server communication, and is invariant across implementations. Specifics of
the methods and models for each application are generated from the OpenAPI
templates.
NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
:param configuration: .Configuration object for this client
:param header_name: a header to pass when making calls to the API.
:param header_value: a header value to pass when making calls to
the API.
:param cookie: a cookie to include in the header when making calls
to the API
:param pool_threads: The number of threads to use for async requests
to the API. More threads means more concurrent API requests.
"""
PRIMITIVE_TYPES = (float, bool, bytes, six.text_type) + six.integer_types
NATIVE_TYPES_MAPPING = {
'int': int,
'long': int if six.PY3 else long, # noqa: F821
'float': float,
'str': str,
'bool': bool,
'date': datetime.date,
'datetime': datetime.datetime,
'object': object,
}
_pool = None
def __init__(self, configuration=None, header_name=None, header_value=None,
cookie=None, pool_threads=1):
if configuration is None:
configuration = Configuration()
self.configuration = configuration
self.pool_threads = pool_threads
self.rest_client = rest.RESTClientObject(configuration)
self.default_headers = {}
if header_name is not None:
self.default_headers[header_name] = header_value
self.cookie = cookie
# Set default User-Agent.
self.user_agent = 'OpenAPI-Generator/1.0.0/python'
def __del__(self):
if self._pool:
self._pool.close()
self._pool.join()
self._pool = None
@property
def pool(self):
"""Create thread pool on first request
avoids instantiating unused threadpool for blocking clients.
"""
if self._pool is None:
self._pool = ThreadPool(self.pool_threads)
return self._pool
@property
def user_agent(self):
"""User agent for this API client"""
return self.default_headers['User-Agent']
@user_agent.setter
def user_agent(self, value):
self.default_headers['User-Agent'] = value
def set_default_header(self, header_name, header_value):
self.default_headers[header_name] = header_value
def __call_api(
self, resource_path, method, path_params=None,
query_params=None, header_params=None, body=None, post_params=None,
files=None, response_type=None, auth_settings=None,
_return_http_data_only=None, collection_formats=None,
_preload_content=True, _request_timeout=None):
config = self.configuration
# header parameters
header_params = header_params or {}
header_params.update(self.default_headers)
if self.cookie:
header_params['Cookie'] = self.cookie
if header_params:
header_params = self.sanitize_for_serialization(header_params)
header_params = dict(self.parameters_to_tuples(header_params,
collection_formats))
# path parameters
if path_params:
path_params = self.sanitize_for_serialization(path_params)
path_params = self.parameters_to_tuples(path_params,
collection_formats)
for k, v in path_params:
# specified safe chars, encode everything
resource_path = resource_path.replace(
'{%s}' % k,
quote(str(v), safe=config.safe_chars_for_path_param)
)
# query parameters
if query_params:
query_params = self.sanitize_for_serialization(query_params)
query_params = self.parameters_to_tuples(query_params,
collection_formats)
# post parameters
if post_params or files:
post_params = self.prepare_post_parameters(post_params, files)
post_params = self.sanitize_for_serialization(post_params)
post_params = self.parameters_to_tuples(post_params,
collection_formats)
# auth setting
self.update_params_for_auth(header_params, query_params, auth_settings)
# body
if body:
body = self.sanitize_for_serialization(body)
# request url
url = self.configuration.host + resource_path
# perform request and return response
response_data = self.request(
method, url, query_params=query_params, headers=header_params,
post_params=post_params, body=body,
_preload_content=_preload_content,
_request_timeout=_request_timeout)
self.last_response = response_data
return_data = response_data
if _preload_content:
# deserialize response data
if response_type:
return_data = self.deserialize(response_data, response_type)
else:
return_data = None
if _return_http_data_only:
return (return_data)
else:
return (return_data, response_data.status,
response_data.getheaders())
def sanitize_for_serialization(self, obj):
"""Builds a JSON POST object.
If obj is None, return None.
If obj is str, int, long, float, bool, return directly.
If obj is datetime.datetime, datetime.date
convert to string in iso8601 format.
If obj is list, sanitize each element in the list.
If obj is dict, return the dict.
If obj is OpenAPI model, return the properties dict.
:param obj: The data to serialize.
:return: The serialized form of data.
"""
if obj is None:
return None
elif isinstance(obj, self.PRIMITIVE_TYPES):
return obj
elif isinstance(obj, list):
return [self.sanitize_for_serialization(sub_obj)
for sub_obj in obj]
elif isinstance(obj, tuple):
return tuple(self.sanitize_for_serialization(sub_obj)
for sub_obj in obj)
elif isinstance(obj, (datetime.datetime, datetime.date)):
return obj.isoformat()
if isinstance(obj, dict):
obj_dict = obj
else:
# Convert model obj to dict except
# attributes `openapi_types`, `attribute_map`
# and attributes which value is not None.
# Convert attribute name to json key in
# model definition for request.
obj_dict = {obj.attribute_map[attr]: getattr(obj, attr)
for attr, _ in six.iteritems(obj.openapi_types)
if getattr(obj, attr) is not None}
return {key: self.sanitize_for_serialization(val)
for key, val in six.iteritems(obj_dict)}
def deserialize(self, response, response_type):
"""Deserializes response into an object.
:param response: RESTResponse object to be deserialized.
:param response_type: class literal for
deserialized object, or string of class name.
:return: deserialized object.
"""
# handle file downloading
# save response body into a tmp file and return the instance
if response_type == "file":
return self.__deserialize_file(response)
# fetch data from response object
try:
data = json.loads(response.data)
except ValueError:
data = response.data
return self.__deserialize(data, response_type)
def __deserialize(self, data, klass):
"""Deserializes dict, list, str into an object.
:param data: dict, list or str.
:param klass: class literal, or string of class name.
:return: object.
"""
if data is None:
return None
if type(klass) == str:
if klass.startswith('list['):
sub_kls = re.match(r'list\[(.*)\]', klass).group(1)
return [self.__deserialize(sub_data, sub_kls)
for sub_data in data]
if klass.startswith('dict('):
sub_kls = re.match(r'dict\(([^,]*), (.*)\)', klass).group(2)
return {k: self.__deserialize(v, sub_kls)
for k, v in six.iteritems(data)}
# convert str to class
if klass in self.NATIVE_TYPES_MAPPING:
klass = self.NATIVE_TYPES_MAPPING[klass]
else:
klass = getattr(petstore_api.models, klass)
if klass in self.PRIMITIVE_TYPES:
return self.__deserialize_primitive(data, klass)
elif klass == object:
return self.__deserialize_object(data)
elif klass == datetime.date:
return self.__deserialize_date(data)
elif klass == datetime.datetime:
return self.__deserialize_datatime(data)
else:
return self.__deserialize_model(data, klass)
def call_api(self, resource_path, method,
path_params=None, query_params=None, header_params=None,
body=None, post_params=None, files=None,
response_type=None, auth_settings=None, async_req=None,
_return_http_data_only=None, collection_formats=None,
_preload_content=True, _request_timeout=None):
"""Makes the HTTP request (synchronous) and returns deserialized data.
To make an async_req request, set the async_req parameter.
:param resource_path: Path to method endpoint.
:param method: Method to call.
:param path_params: Path parameters in the url.
:param query_params: Query parameters in the url.
:param header_params: Header parameters to be
placed in the request header.
:param body: Request body.
:param post_params dict: Request post form parameters,
for `application/x-www-form-urlencoded`, `multipart/form-data`.
:param auth_settings list: Auth Settings names for the request.
:param response: Response data type.
:param files dict: key -> filename, value -> filepath,
for `multipart/form-data`.
:param async_req bool: execute request asynchronously
:param _return_http_data_only: response data without head status code
and headers
:param collection_formats: dict of collection formats for path, query,
header, and post parameters.
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return:
If async_req parameter is True,
the request will be called asynchronously.
The method will return the request thread.
If parameter async_req is False or missing,
then the method will return the response directly.
"""
if not async_req:
return self.__call_api(resource_path, method,
path_params, query_params, header_params,
body, post_params, files,
response_type, auth_settings,
_return_http_data_only, collection_formats,
_preload_content, _request_timeout)
else:
thread = self.pool.apply_async(self.__call_api, (resource_path,
method, path_params, query_params,
header_params, body,
post_params, files,
response_type, auth_settings,
_return_http_data_only,
collection_formats,
_preload_content, _request_timeout))
return thread
def request(self, method, url, query_params=None, headers=None,
post_params=None, body=None, _preload_content=True,
_request_timeout=None):
"""Makes the HTTP request using RESTClient."""
if method == "GET":
return self.rest_client.GET(url,
query_params=query_params,
_preload_content=_preload_content,
_request_timeout=_request_timeout,
headers=headers)
elif method == "HEAD":
return self.rest_client.HEAD(url,
query_params=query_params,
_preload_content=_preload_content,
_request_timeout=_request_timeout,
headers=headers)
elif method == "OPTIONS":
return self.rest_client.OPTIONS(url,
query_params=query_params,
headers=headers,
post_params=post_params,
_preload_content=_preload_content,
_request_timeout=_request_timeout,
body=body)
elif method == "POST":
return self.rest_client.POST(url,
query_params=query_params,
headers=headers,
post_params=post_params,
_preload_content=_preload_content,
_request_timeout=_request_timeout,
body=body)
elif method == "PUT":
return self.rest_client.PUT(url,
query_params=query_params,
headers=headers,
post_params=post_params,
_preload_content=_preload_content,
_request_timeout=_request_timeout,
body=body)
elif method == "PATCH":
return self.rest_client.PATCH(url,
query_params=query_params,
headers=headers,
post_params=post_params,
_preload_content=_preload_content,
_request_timeout=_request_timeout,
body=body)
elif method == "DELETE":
return self.rest_client.DELETE(url,
query_params=query_params,
headers=headers,
_preload_content=_preload_content,
_request_timeout=_request_timeout,
body=body)
else:
raise ValueError(
"http method must be `GET`, `HEAD`, `OPTIONS`,"
" `POST`, `PATCH`, `PUT` or `DELETE`."
)
def parameters_to_tuples(self, params, collection_formats):
"""Get parameters as list of tuples, formatting collections.
:param params: Parameters as dict or list of two-tuples
:param dict collection_formats: Parameter collection formats
:return: Parameters as list of tuples, collections formatted
"""
new_params = []
if collection_formats is None:
collection_formats = {}
for k, v in six.iteritems(params) if isinstance(params, dict) else params: # noqa: E501
if k in collection_formats:
collection_format = collection_formats[k]
if collection_format == 'multi':
new_params.extend((k, value) for value in v)
else:
if collection_format == 'ssv':
delimiter = ' '
elif collection_format == 'tsv':
delimiter = '\t'
elif collection_format == 'pipes':
delimiter = '|'
else: # csv is the default
delimiter = ','
new_params.append(
(k, delimiter.join(str(value) for value in v)))
else:
new_params.append((k, v))
return new_params
def prepare_post_parameters(self, post_params=None, files=None):
"""Builds form parameters.
:param post_params: Normal form parameters.
:param files: File parameters.
:return: Form parameters with files.
"""
params = []
if post_params:
params = post_params
if files:
for k, v in six.iteritems(files):
if not v:
continue
file_names = v if type(v) is list else [v]
for n in file_names:
with open(n, 'rb') as f:
filename = os.path.basename(f.name)
filedata = f.read()
mimetype = (mimetypes.guess_type(filename)[0] or
'application/octet-stream')
params.append(
tuple([k, tuple([filename, filedata, mimetype])]))
return params
def select_header_accept(self, accepts):
"""Returns `Accept` based on an array of accepts provided.
:param accepts: List of headers.
:return: Accept (e.g. application/json).
"""
if not accepts:
return
accepts = [x.lower() for x in accepts]
if 'application/json' in accepts:
return 'application/json'
else:
return ', '.join(accepts)
def select_header_content_type(self, content_types):
"""Returns `Content-Type` based on an array of content_types provided.
:param content_types: List of content-types.
:return: Content-Type (e.g. application/json).
"""
if not content_types:
return 'application/json'
content_types = [x.lower() for x in content_types]
if 'application/json' in content_types or '*/*' in content_types:
return 'application/json'
else:
return content_types[0]
def update_params_for_auth(self, headers, querys, auth_settings):
"""Updates header and query params based on authentication setting.
:param headers: Header parameters dict to be updated.
:param querys: Query parameters tuple list to be updated.
:param auth_settings: Authentication setting identifiers list.
"""
if not auth_settings:
return
for auth in auth_settings:
auth_setting = self.configuration.auth_settings().get(auth)
if auth_setting:
if not auth_setting['value']:
continue
elif auth_setting['in'] == 'header':
headers[auth_setting['key']] = auth_setting['value']
elif auth_setting['in'] == 'query':
querys.append((auth_setting['key'], auth_setting['value']))
else:
raise ValueError(
'Authentication token must be in `query` or `header`'
)
def __deserialize_file(self, response):
"""Deserializes body to file
Saves response body into a file in a temporary folder,
using the filename from the `Content-Disposition` header if provided.
:param response: RESTResponse.
:return: file path.
"""
fd, path = tempfile.mkstemp(dir=self.configuration.temp_folder_path)
os.close(fd)
os.remove(path)
content_disposition = response.getheader("Content-Disposition")
if content_disposition:
filename = re.search(r'filename=[\'"]?([^\'"\s]+)[\'"]?',
content_disposition).group(1)
path = os.path.join(os.path.dirname(path), filename)
with open(path, "wb") as f:
f.write(response.data)
return path
def __deserialize_primitive(self, data, klass):
"""Deserializes string to primitive type.
:param data: str.
:param klass: class literal.
:return: int, long, float, str, bool.
"""
try:
return klass(data)
except UnicodeEncodeError:
return six.text_type(data)
except TypeError:
return data
def __deserialize_object(self, value):
"""Return an original value.
:return: object.
"""
return value
def __deserialize_date(self, string):
"""Deserializes string to date.
:param string: str.
:return: date.
"""
try:
from dateutil.parser import parse
return parse(string).date()
except ImportError:
return string
except ValueError:
raise rest.ApiException(
status=0,
reason="Failed to parse `{0}` as date object".format(string)
)
def __deserialize_datatime(self, string):
"""Deserializes string to datetime.
The string should be in iso8601 datetime format.
:param string: str.
:return: datetime.
"""
try:
from dateutil.parser import parse
return parse(string)
except ImportError:
return string
except ValueError:
raise rest.ApiException(
status=0,
reason=(
"Failed to parse `{0}` as datetime object"
.format(string)
)
)
def __deserialize_model(self, data, klass):
"""Deserializes list or dict to model.
:param data: dict, list.
:param klass: class literal.
:return: model object.
"""
if not klass.openapi_types and not hasattr(klass,
'get_real_child_model'):
return data
kwargs = {}
if klass.openapi_types is not None:
for attr, attr_type in six.iteritems(klass.openapi_types):
if (data is not None and
klass.attribute_map[attr] in data and
isinstance(data, (list, dict))):
value = data[klass.attribute_map[attr]]
kwargs[attr] = self.__deserialize(value, attr_type)
instance = klass(**kwargs)
if hasattr(instance, 'get_real_child_model'):
klass_name = instance.get_real_child_model(data)
if klass_name:
instance = self.__deserialize(data, klass_name)
return instance
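# A hedged usage sketch for this generated client; the endpoint path, response type and auth
# settings below are illustrative and depend on the API the client was generated for:
#
#   from petstore_api.configuration import Configuration
#   config = Configuration()
#   client = ApiClient(configuration=config)
#   pet = client.call_api('/pet/{petId}', 'GET',
#                         path_params={'petId': 1},
#                         response_type='object',
#                         auth_settings=[],
#                         _return_http_data_only=True)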
| 39.501572 | 198 | 0.551964 |
acf72f21bb704d2bad81e8563340d8d2254d4508 | 5,324 | py | Python | rq/utils.py | AlliedSecurityTrust/rq | 47e2eee6e83c0194e83775bca257ccf92eea7b96 | [
"BSD-2-Clause-FreeBSD"
] | null | null | null | rq/utils.py | AlliedSecurityTrust/rq | 47e2eee6e83c0194e83775bca257ccf92eea7b96 | [
"BSD-2-Clause-FreeBSD"
] | null | null | null | rq/utils.py | AlliedSecurityTrust/rq | 47e2eee6e83c0194e83775bca257ccf92eea7b96 | [
"BSD-2-Clause-FreeBSD"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Miscellaneous helper functions.
The formatter for ANSI colored console output is heavily based on Pygments
terminal colorizing code, originally by Georg Brandl.
"""
import importlib
import datetime
import logging
import os
import sys
from .compat import is_python_version
def gettermsize():
def ioctl_GWINSZ(fd):
try:
import fcntl, termios, struct # noqa
cr = struct.unpack('hh', fcntl.ioctl(fd, termios.TIOCGWINSZ, '1234'))
except:
return None
return cr
cr = ioctl_GWINSZ(0) or ioctl_GWINSZ(1) or ioctl_GWINSZ(2)
if not cr:
try:
fd = os.open(os.ctermid(), os.O_RDONLY)
cr = ioctl_GWINSZ(fd)
os.close(fd)
except:
pass
if not cr:
try:
cr = (os.environ['LINES'], os.environ['COLUMNS'])
except:
cr = (25, 80)
return int(cr[1]), int(cr[0])
class _Colorizer(object):
def __init__(self):
esc = "\x1b["
self.codes = {}
self.codes[""] = ""
self.codes["reset"] = esc + "39;49;00m"
self.codes["bold"] = esc + "01m"
self.codes["faint"] = esc + "02m"
self.codes["standout"] = esc + "03m"
self.codes["underline"] = esc + "04m"
self.codes["blink"] = esc + "05m"
self.codes["overline"] = esc + "06m"
dark_colors = ["black", "darkred", "darkgreen", "brown", "darkblue",
"purple", "teal", "lightgray"]
light_colors = ["darkgray", "red", "green", "yellow", "blue",
"fuchsia", "turquoise", "white"]
x = 30
for d, l in zip(dark_colors, light_colors):
self.codes[d] = esc + "%im" % x
self.codes[l] = esc + "%i;01m" % x
x += 1
del d, l, x
self.codes["darkteal"] = self.codes["turquoise"]
self.codes["darkyellow"] = self.codes["brown"]
self.codes["fuscia"] = self.codes["fuchsia"]
self.codes["white"] = self.codes["bold"]
if hasattr(sys.stdout, "isatty"):
self.notty = not sys.stdout.isatty()
else:
self.notty = True
def reset_color(self):
return self.codes["reset"]
def colorize(self, color_key, text):
if not sys.stdout.isatty():
return text
else:
return self.codes[color_key] + text + self.codes["reset"]
def ansiformat(self, attr, text):
"""
Format ``text`` with a color and/or some attributes::
color normal color
*color* bold color
_color_ underlined color
+color+ blinking color
"""
result = []
if attr[:1] == attr[-1:] == '+':
result.append(self.codes['blink'])
attr = attr[1:-1]
if attr[:1] == attr[-1:] == '*':
result.append(self.codes['bold'])
attr = attr[1:-1]
if attr[:1] == attr[-1:] == '_':
result.append(self.codes['underline'])
attr = attr[1:-1]
result.append(self.codes[attr])
result.append(text)
result.append(self.codes['reset'])
return ''.join(result)
colorizer = _Colorizer()
def make_colorizer(color):
"""Creates a function that colorizes text with the given color.
For example:
green = make_colorizer('darkgreen')
red = make_colorizer('red')
Then, you can use:
print "It's either " + green('OK') + ' or ' + red('Oops')
"""
def inner(text):
return colorizer.colorize(color, text)
return inner
class ColorizingStreamHandler(logging.StreamHandler):
levels = {
logging.WARNING: make_colorizer('darkyellow'),
logging.ERROR: make_colorizer('darkred'),
logging.CRITICAL: make_colorizer('darkred'),
}
def __init__(self, exclude=None, *args, **kwargs):
self.exclude = exclude
if is_python_version((2, 6)):
logging.StreamHandler.__init__(self, *args, **kwargs)
else:
super(ColorizingStreamHandler, self).__init__(*args, **kwargs)
@property
def is_tty(self):
isatty = getattr(self.stream, 'isatty', None)
return isatty and isatty()
def format(self, record):
message = logging.StreamHandler.format(self, record)
if self.is_tty:
colorize = self.levels.get(record.levelno, lambda x: x)
# Don't colorize any traceback
parts = message.split('\n', 1)
parts[0] = " ".join([parts[0].split(" ", 1)[0], colorize(parts[0].split(" ", 1)[1])])
message = '\n'.join(parts)
return message
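# Usage sketch: attach the colorizing handler to a logger (colors only appear on a TTY):
#
#   import logging
#   logger = logging.getLogger('rq.worker')
#   handler = ColorizingStreamHandler()
#   handler.setFormatter(logging.Formatter('%(asctime)s %(message)s'))
#   logger.addHandler(handler)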
def import_attribute(name):
"""Return an attribute from a dotted path name (e.g. "path.to.func")."""
module_name, attribute = name.rsplit('.', 1)
module = importlib.import_module(module_name)
return getattr(module, attribute)
def utcnow():
return datetime.datetime.utcnow()
def utcformat(dt):
return dt.strftime(u"%Y-%m-%dT%H:%M:%SZ")
def utcparse(string):
try:
date_obj = datetime.datetime.strptime(string, "%Y-%m-%dT%H:%M:%SZ")
except ValueError:
date_obj = datetime.datetime.strptime(string, "%Y-%m-%dT%H:%M:%S.%fZ")
return date_obj
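# Round-trip sketch for the two helpers above:
#
#   ts = utcformat(utcnow())                      # e.g. '2019-01-01T12:00:00Z'
#   assert isinstance(utcparse(ts), datetime.datetime)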
| 28.319149 | 97 | 0.555597 |
acf72f41579f1c97341a44f0d892bc36169bf7cd | 1,356 | py | Python | interface_grafica/list_box.py | felipesch92/cfbCursos | d08488e3320f1a688868ce0a06b1bbda568f0cf4 | [
"MIT"
] | null | null | null | interface_grafica/list_box.py | felipesch92/cfbCursos | d08488e3320f1a688868ce0a06b1bbda568f0cf4 | [
"MIT"
] | null | null | null | interface_grafica/list_box.py | felipesch92/cfbCursos | d08488e3320f1a688868ce0a06b1bbda568f0cf4 | [
"MIT"
] | null | null | null | from tkinter import *
def selecionar_esporte():
print(lb_esportes.get(ACTIVE))
vres.set(lb_esportes.get(ACTIVE))
def adicionar_esporte():
lb_esportes.insert(END, vtxt_esporte.get())
vtxt_esporte.set('')
app = Tk()
app.title('List Box')
app.geometry('500x300')
lista_esportes = ['Futebol', 'Volei', 'Basquete']
vres = StringVar()
vtxt_esporte = StringVar()
lb_esportes = Listbox(app)
for esporte in lista_esportes:
lb_esportes.insert(END, esporte)
for x in range(0, 10):
lb_esportes.insert(END, x)
btn_selecionar = Button(app,
text='Selecionar',
command=selecionar_esporte)
lbl_res = Label(app,
textvariable=vres)
lb_esportes.grid(row=0, column=0)
btn_selecionar.grid(row=1, column=0)
lbl_res.grid(row=2, column=0)
frame_adicionar = LabelFrame(app,
text='Adicionar Esporte')
lbl_esporte = Label(frame_adicionar,
text='Esporte: ')
txt_esporte = Entry(frame_adicionar,
textvariable=vtxt_esporte)
btn_adicionar = Button(frame_adicionar,
text='Adicionar',
command=adicionar_esporte)
lbl_esporte.grid(row=0, column=0)
txt_esporte.grid(row=0, column=1)
btn_adicionar.grid(row=1, columnspan=2)
frame_adicionar.grid(row=0, column=1)
app.mainloop()
| 24.654545 | 54 | 0.653392 |
acf730403ce5e330022e81f313490cdd2c1edc35 | 169 | py | Python | app/api/__init__.py | FX-HAO/scheduler-service | 885b724076f51609696eaf5fcda317487e706c22 | [
"BSD-2-Clause"
] | 1 | 2017-07-19T14:17:22.000Z | 2017-07-19T14:17:22.000Z | app/api/__init__.py | FX-HAO/scheduler-service | 885b724076f51609696eaf5fcda317487e706c22 | [
"BSD-2-Clause"
] | 2 | 2017-02-07T05:33:21.000Z | 2017-02-14T02:03:28.000Z | app/api/__init__.py | FX-HAO/scheduler-service | 885b724076f51609696eaf5fcda317487e706c22 | [
"BSD-2-Clause"
] | 2 | 2017-01-08T08:28:38.000Z | 2021-07-20T07:44:36.000Z | from flask.ext.restful import Api
from v1 import UserResource
api = Api(prefix='/api/v1')
api.add_resource(UserResource, '/users', '/users/<int:id>', endpoint='user')
| 24.142857 | 76 | 0.727811 |
acf73107376248e408c408cfb5c897cba07bfb0d | 918 | py | Python | MAPE-K/plan.py | Clebeuf/MAPE-K-Python | fcac829e8488a0ad49e4eb3b4c737578f09fb92d | [
"MIT"
] | 6 | 2015-07-09T03:28:32.000Z | 2020-07-20T20:15:03.000Z | MAPE-K/plan.py | Clebeuf/MAPE-K-Python | fcac829e8488a0ad49e4eb3b4c737578f09fb92d | [
"MIT"
] | null | null | null | MAPE-K/plan.py | Clebeuf/MAPE-K-Python | fcac829e8488a0ad49e4eb3b4c737578f09fb92d | [
"MIT"
] | null | null | null | #!/usr/bin/env python
import socket
TCP_IP = '127.0.0.1'
PLAN_PORT = 6003
EXECUTE_PORT = 6004
BUFFER_SIZE = 1024
# PLAN COMPONENT
# receives the request for the change and formulates a plan
# listen at the plan port
p = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
p.bind((TCP_IP, PLAN_PORT))
p.listen(1)
conn, addr = p.accept()
print 'Connected to listener port:', addr
# send message to the execute port
e = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
e.connect((TCP_IP, EXECUTE_PORT))
print 'Connected to send port:', addr
while 1:
message = "do nothing"
data = conn.recv(BUFFER_SIZE)
if data:
print "received data:", data
if data == "not enough spiders":
message = "increase spiders"
elif data == "too many spiders":
message = "decrease spiders"
e.send(message)
print "sent data:", message
conn.close()
e.close() | 21.857143 | 59 | 0.666667 |
acf731663ef66c7233a62aee104e83f6c05420b6 | 4,146 | py | Python | source/setup/lambda/lambda.py | has2aws/genomics-secondary-analysis-using-aws-step-functions-and-aws-batch | 77d3c4fb1206c62d04cc481de99703771a04e56f | [
"Apache-2.0"
] | 30 | 2020-04-29T13:31:45.000Z | 2022-03-26T07:53:52.000Z | source/setup/lambda/lambda.py | has2aws/genomics-secondary-analysis-using-aws-step-functions-and-aws-batch | 77d3c4fb1206c62d04cc481de99703771a04e56f | [
"Apache-2.0"
] | 10 | 2020-05-11T18:16:35.000Z | 2022-03-28T03:42:46.000Z | source/setup/lambda/lambda.py | has2aws/genomics-secondary-analysis-using-aws-step-functions-and-aws-batch | 77d3c4fb1206c62d04cc481de99703771a04e56f | [
"Apache-2.0"
] | 17 | 2020-04-28T17:51:48.000Z | 2022-03-26T07:53:55.000Z | # /*********************************************************************************************************************
# * Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved. *
# * *
# * Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance *
# * with the License. A copy of the License is located at *
# * *
# * http://www.apache.org/licenses/LICENSE-2.0 *
# * *
# * or in the 'license' file accompanying this file. This file is distributed on an 'AS IS' BASIS, WITHOUT WARRANTIES *
# * OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions *
# * and limitations under the License. *
# *********************************************************************************************************************/
from __future__ import print_function
from crhelper import CfnResource
import logging
import boto3
import time
logger = logging.getLogger(__name__)
# Initialise the helper, all inputs are optional, this example shows the defaults
helper = CfnResource(json_logging=False, log_level='DEBUG', boto_level='CRITICAL')
try:
codebuild = boto3.client('codebuild')
# pass
except Exception as e:
helper.init_failure(e)
@helper.create
def create(event, context):
logger.info("Got Create")
start_build_job(event, context)
@helper.update
def update(event, context):
logger.info("Got Update")
start_build_job(event, context)
@helper.delete
def delete(event, context):
logger.info("Got Delete")
start_build_job(event, context, action='teardown')
# Delete never returns anything. Should not fail if the underlying resources are already deleted. Desired state.
@helper.poll_create
def poll_create(event, context):
logger.info("Got Create poll")
return check_build_job_status(event, context)
@helper.poll_update
def poll_update(event, context):
logger.info("Got Update poll")
return check_build_job_status(event, context)
@helper.poll_delete
def poll_delete(event, context):
logger.info("Got Delete poll")
return check_build_job_status(event, context)
def handler(event, context):
helper(event, context)
def start_build_job(event, context, action='setup'):
response = codebuild.start_build(
projectName=event['ResourceProperties']['CodeBuildProjectName'],
environmentVariablesOverride=[{
'name': 'SOLUTION_ACTION',
'value': action,
'type': 'PLAINTEXT'
}]
)
logger.info(response)
helper.Data.update({"JobID": response['build']['id']})
def check_build_job_status(event, context):
code_build_project_name = event['ResourceProperties']['CodeBuildProjectName']
if not helper.Data.get("JobID"):
raise ValueError("Job ID missing in the polling event.")
job_id = helper.Data.get("JobID")
# 'SUCCEEDED' | 'FAILED' | 'FAULT' | 'TIMED_OUT' | 'IN_PROGRESS' | 'STOPPED'
response = codebuild.batch_get_builds(ids=[job_id])
build_status = response['builds'][0]['buildStatus']
if build_status == 'IN_PROGRESS':
logger.info(build_status)
return None
else:
if build_status == 'SUCCEEDED':
logger.info(build_status)
return True
else:
msg = "Code Build job '{0}' in project '{1}' exited with a build status of '{2}'." \
.format(job_id, code_build_project_name, build_status)
logger.info(msg)
raise ValueError(msg)
| 37.690909 | 120 | 0.549686 |
acf731702ac88937f24b708ca5d4e112d4670289 | 1,868 | py | Python | chinese_shadowing/play_audio.py | thomashirtz/chinese-shadowing | 3b78bb5dcab308549308b291df38f8ca214c0fa9 | [
"Apache-2.0"
] | 4 | 2021-08-22T20:01:31.000Z | 2021-08-23T21:29:21.000Z | chinese_shadowing/play_audio.py | thomashirtz/chinese-shadowing | 3b78bb5dcab308549308b291df38f8ca214c0fa9 | [
"Apache-2.0"
] | null | null | null | chinese_shadowing/play_audio.py | thomashirtz/chinese-shadowing | 3b78bb5dcab308549308b291df38f8ca214c0fa9 | [
"Apache-2.0"
] | null | null | null | from typing import Union
from typing import Tuple
from pathlib import Path
import threading
import numpy as np
from pydub import effects
from pydub import AudioSegment
from pydub.playback import play
def play_audio(
raw_audio: np.array,
frame_rate: int,
channels: int,
normalize: bool = True
) -> None:
"""Play audio that is in numpy array form"""
audio = AudioSegment(
raw_audio.tobytes(),
frame_rate=frame_rate,
sample_width=raw_audio.dtype.itemsize,
channels=channels
)
if normalize:
normalized_audio = effects.normalize(audio)
play(normalized_audio)
else:
play(audio)
def get_mp3_audio(
file_path: Union[Path, str]
) -> Tuple[np.array, int, int]:
"""MP3 to numpy array"""
audio_segment = AudioSegment.from_mp3(file_path)
raw_audio = np.array(audio_segment.get_array_of_samples())
raw_audio = raw_audio.reshape((-1, audio_segment.channels))
return raw_audio, audio_segment.frame_rate, audio_segment.channels
def play_mp3_file(file_path: Union[Path, str]) -> None:
"""Play MP3 file"""
audio, frame_rate, channels = get_mp3_audio(file_path)
play_audio(audio, frame_rate, channels)
class PlayAudioThread(threading.Thread):
"""Thread that plays raw audio that is in numpy array form"""
# https://stackoverflow.com/questions/18018033/how-to-stop-a-looping-thread-in-python
def __init__(self, audio: np.array, frame_rate: int, channels: int):
threading.Thread.__init__(self)
self.audio = audio
self.frame_rate = frame_rate
self.channels = channels
def run(self) -> None:
play_audio(self.audio, self.frame_rate, self.channels)
def join(self, timeout=None):
"""set stop event and join within a given time period"""
super().join(timeout)
| 29.1875 | 89 | 0.684154 |
acf731be28eb5a0be5860edaf10d6850a98be445 | 341 | py | Python | pdfReader.py | Pancongwen/youtiao | 9afc34fd80274d36ca8dfefd1fe2b1c0d0fce6a8 | [
"MIT"
] | null | null | null | pdfReader.py | Pancongwen/youtiao | 9afc34fd80274d36ca8dfefd1fe2b1c0d0fce6a8 | [
"MIT"
] | null | null | null | pdfReader.py | Pancongwen/youtiao | 9afc34fd80274d36ca8dfefd1fe2b1c0d0fce6a8 | [
"MIT"
] | null | null | null | import pytesseract
import re
from pdf2image import convert_from_path
page = convert_from_path("./test.pdf", single_file=True)
print("- Convert the first page of scanned pdf file to image")
print(type(page))
print(page[0])
text = pytesseract.image_to_string(page[0],lang='chi_sim')
print(text)
#idcard = re.search('', text)
#print(idcard)
| 22.733333 | 62 | 0.756598 |
acf7322029963e5f4fe4e907f9a1a60961660f10 | 5,032 | py | Python | model/atturesnext_class_nlocal.py | z1021190674/GMAUResNeXt_RS | a8a7444bf30e509cefc01b3be4b0587d367cda2e | [
"MIT"
] | 1 | 2022-03-23T11:54:33.000Z | 2022-03-23T11:54:33.000Z | model/atturesnext_class_nlocal.py | z1021190674/GMAUResNeXt_RS | a8a7444bf30e509cefc01b3be4b0587d367cda2e | [
"MIT"
] | null | null | null | model/atturesnext_class_nlocal.py | z1021190674/GMAUResNeXt_RS | a8a7444bf30e509cefc01b3be4b0587d367cda2e | [
"MIT"
] | null | null | null | """
uresnext_nlocal with global attention
"""
from model.block import *
import torch.nn as nn
import torch.nn.functional as F
import torch
import torchvision.models as models
class AttUResNeXt_class_nlocal(nn.Module):
"""Decoder part of the UNet
Parameters:
n_classes -- number of the classes of the given dataset
Tips:
set align_corners = True for better performance of semantic segmentation (https://github.com/pytorch/vision/issues/1708)
"""
def __init__(self, args, n_classes=6):
super().__init__()
self.n_classes = n_classes
# self.att_params = torch.nn.ParameterList([torch.nn.Parameter(torch.ones(1, dtype=torch.float32))
# for i in range(4)]) # there is a bug in pytorch with nn.ParameterList
self.a1 = torch.nn.Parameter(torch.tensor(1,dtype=torch.float32))
self.a2 = torch.nn.Parameter(torch.tensor(1,dtype=torch.float32))
self.a3 = torch.nn.Parameter(torch.tensor(1,dtype=torch.float32))
self.a4 = torch.nn.Parameter(torch.tensor(1,dtype=torch.float32))
self.gap = nn.AdaptiveAvgPool2d((1, 1))
### encoder ###
resnext = models.resnext101_32x8d(pretrained=args.is_pretrained)
self.firstconv = resnext.conv1
self.firstbn = resnext.bn1
self.firstrelu = resnext.relu
self.firstmaxpool = resnext.maxpool
self.encoder1 = resnext.layer1
self.encoder2 = resnext.layer2
self.encoder3 = resnext.layer3
self.encoder4 = resnext.layer4
### decoder ###
# level 1
self.nlocal1 = NLBlockND(2048, inter_channels=1024) # half the inter channels for computational efficiency
self.conv1 = nn.Conv2d(2048, 1024, 3, padding='same')
self.attblock1 = AttBlock_v2(2048 + 2048, 2048, out_channels=6)
# level 2
self.nlocal2 = NLBlockND(1024, inter_channels=512)
self.dconv1 = DoubleConv(2048, 1024)
self.attblock2 = AttBlock_v2(2048 + 1024, 1024, out_channels=6)
self.conv2 = nn.Conv2d(1024, 512, 3, padding='same')
# level 3
self.nlocal3 = NLBlockND(512, inter_channels=256)
self.dconv2 = DoubleConv(1024, 512)
self.attblock3 = AttBlock_v2(2048 + 512, 512, out_channels=6)
self.conv3 = nn.Conv2d(512, 256, 3, padding='same')
# level 4
self.dconv3 = DoubleConv(512, 256)
self.attblock4 = AttBlock_v2(2048 + 256, 256, out_channels=6)
self.conv4 = nn.Conv2d(256, 64, 3, padding='same')
# level 5
self.dconv4 = DoubleConv(128, 64)
# level 6
self.dconv5 = DoubleConv(64, 64)
self.conv5 = nn.Conv2d(64, self.n_classes, 3, padding='same')
def forward(self, img):
### encoder ###
x1 = self.firstconv(img)
x1 = self.firstbn(x1)
e0 = self.firstrelu(x1)
e1 = self.firstmaxpool(e0)
e1 = self.encoder1(e1)
e2 = self.encoder2(e1)
e3 = self.encoder3(e2)
e4 = self.encoder4(e3)
### decoder ###
x = self.nlocal1(e4)
context = self.gap(x)
# level 1
att1 = self.attblock1(x, context)
# level 2
# interpolation -- mini-batch x channels x [optional depth] x [optional height] x width.
x = F.interpolate(x, size=(16,16), mode='bilinear', align_corners=True)
x = self.conv1(x)
x = torch.cat((e3, x), dim=1)
x = self.dconv1(x)
x = self.nlocal2(x)
att2 = self.attblock2(x, context)
# level 3
x = F.interpolate(x, size=(32,32), mode='bilinear', align_corners=True)
x = self.conv2(x)
x = torch.cat((e2, x), dim=1)
x = self.dconv2(x)
x = self.nlocal3(x)
att3 = self.attblock3(x, context)
# level 4
x = F.interpolate(x, size=(64,64), mode='bilinear', align_corners=True)
x = self.conv3(x)
x = torch.cat((e1, x), dim=1)
x = self.dconv3(x)
att4 = self.attblock4(x, context)
# level 5
x = F.interpolate(x, size=(128,128), mode='bilinear', align_corners=True)
x = self.conv4(x)
x = torch.cat((e0, x), dim=1)
x = self.dconv4(x)
# level 6
x = F.interpolate(x, size=(256,256), mode='bilinear', align_corners=True)
x = self.dconv5(x)
x = self.conv5(x)
att_sum = (self.a1*att1 + self.a2*att2 + self.a3*att3 + self.a4*att4) / 4.0 #weighted attention
x = att_sum * x
x = F.log_softmax(x, dim=1)
return x
if __name__ == '__main__':
### import for test ###
# backbone = models.resnext101_32x8d(pretrained=False)
# print(backbone)
# save the graph of the atturesnext
    # __init__ expects an args object exposing `is_pretrained`; a simple namespace is enough here
    from argparse import Namespace
    net = AttUResNeXt_class_nlocal(Namespace(is_pretrained=False))
data = torch.rand(1, 3, 256, 256)
from torch.utils.tensorboard import SummaryWriter
writer = SummaryWriter(r'logs/atturesnext_class')
writer.add_graph(net, torch.rand(1,3,256,256))
writer.close()
pass | 36.729927 | 128 | 0.600159 |
acf7363b33b51a01438ee2da36a99cc7186424cc | 6,282 | py | Python | Utils/Util.py | soumikmohianuta/pixtoapp | f103a580e015ef7779998f14c9533a238a4bf1d1 | [
"BSD-3-Clause"
] | 31 | 2018-03-19T19:30:29.000Z | 2022-02-16T08:50:42.000Z | Utils/Util.py | soumikmohianuta/pixtoapp | f103a580e015ef7779998f14c9533a238a4bf1d1 | [
"BSD-3-Clause"
] | 4 | 2018-10-04T08:01:15.000Z | 2019-04-25T15:50:36.000Z | Utils/Util.py | soumikmohianuta/pixtoapp | f103a580e015ef7779998f14c9533a238a4bf1d1 | [
"BSD-3-Clause"
] | 12 | 2018-07-24T18:52:07.000Z | 2021-07-15T11:36:14.000Z | import os
import shutil
from Utils import Constants
from xml.etree.ElementTree import Element
from Utils import TextUtils
from os.path import basename
APP_ROOT = os.path.dirname(os.path.abspath(__file__))
PACKAGE_NAME_PREFIX = "edu.uta.cse.screenshot."
APP_ROOT = os.path.dirname(os.path.abspath(__file__))
SDK_FOLDER = os.path.join(APP_ROOT, 'android-sdk/')
def copyFile(src, dst):
shutil.copyfile(src,dst)
def copyFolder(src,dst):
if(os.path.isdir(src)):
if(not os.path.exists(dst)):
os.makedirs(dst)
files = os.listdir(src)
for file in files:
# // construct the src and dest file structure
srcFile = os.path.join(src, file)
destFile = os.path.join(dst, file)
# // recursive copy
copyFolder(srcFile, destFile)
else:
# // if file, then copy it
copyFile(src,dst)
def readFile(fileName):
fOpen = open(fileName,"r") #opens file with name of "test.txt"
content = fOpen.read()
fOpen.close()
return content
def readFileEncoded(path, encoding):
fOpen = open(path, encoding=encoding)
content = fOpen.read()
fOpen.close()
return content
# public static String readFileFileInputSteam(String path, Charset encoding)
# throws IOException {
# FileInputStream fileInputStream = new FileInputStream(new File(path))
# byte[] bytes = new byte[fileInputStream.available()]
# fileInputStream.read(bytes)
# String content = new String(bytes, StandardCharsets.UTF_8)
# fileInputStream.close()
# return content
# }
def readByte(fileName):
fOpen = open(fileName,"rb") #opens file with name of "test.txt"
content = fOpen.read()
fOpen.close()
return content
def isValidUTF8(_input):
valid_utf8 = True
try:
_input.decode('utf-8')
except UnicodeDecodeError:
valid_utf8 = False
return valid_utf8
def writeFile(content, fileName):
fOpen = open(fileName,"w") #opens file with name of "test.txt"
content = fOpen.write(content)
fOpen.close()
def writeFileEncoded(content, fileName,encoding):
fOpen = open(fileName,"w",encoding=encoding) #opens file with name of "test.txt"
content = fOpen.write(content)
fOpen.close()
def run(command):
# sudoPassword = "1234"
# sudoCommand = 'echo %s|sudo -S %s' % (sudoPassword,command)
# print(sudoCommand)
os.system(command)
def runWithOutput(command):
sudoPassword = "1234"
out = os.popen('echo %s|sudo -S %s' % (sudoPassword,command)).read()
print(out)
return out
def chmodFilePath(filePath):
run("chmod 777 " + filePath)
def chmodFile(file):
run("chmod 777 " + os.path.abspath(file))
def writeFileAndChmod(data, filepath, encoding=None):
    if encoding is None:
        writeFile(data, filepath)
    else:
        writeFileEncoded(data, filepath, encoding)
chmodFilePath(filepath)
def writeFileAndChmodFilePath(data, filePath, encoding=None):
writeFileAndChmod(data, filePath,encoding)
def writeFileAndChmodFile(data, file, encoding=None):
writeFileAndChmod(data, os.path.abspath(file), encoding)
def writeFileAndRunFilePath(data, filePath):
writeFileAndChmod(data, filePath)
runWithOutput(data)
def writeFileAndRunFile(data,filePath):
writeFileAndRunFilePath(data, os.path.abspath(filePath))
def getAllFilesRecusively(folder, files, filenameFilter):
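    # Collect every entry under 'folder' whose name ends with 'filenameFilter',
    # appending matches to 'files' and descending into any matching sub-folders.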
if (folder == None or os.path.isfile(folder)):
return files
listFiles = []
for file in os.listdir(folder):
if file.endswith(filenameFilter):
listFiles.append( os.path.join(folder, file))
if (len(listFiles) == 0):
return files
files.extend(listFiles)
subFolders = [x for x in listFiles if os.path.isdir(x)]
for subFolder in subFolders:
getAllFilesRecusively(subFolder, files, filenameFilter)
return files
def getAndroidSDKPath():
UPLOAD_FOLDER = os.path.join(APP_ROOT, r"android-sdk/")
return UPLOAD_FOLDER
def getAndroidProjectCreatePath():
return r"/home/soumik/Research/upload_And_Download/REMAUI_AWS/android-sdk/tools/android"
def readLine(filePath):
lines = []
with open(filePath) as f:
lines = f.readlines()
return lines
import re
def createValidAndroidProjectName(mFileName):
valideName = mFileName.replace("-", "_")
valideName = valideName.replace(".", "_")
valideName = valideName.replace("-", "_")
valideName = valideName.replace("#", "_")
valideName = valideName.replace("*", "_")
m = re.search('[a-z|A-Z]', valideName)
valideName = valideName[m.start():]
return valideName
def getBaseNameRemoveInvalidChars(fileName):
return TextUtils.removeInvalidProjectNameChars(basename(fileName))
def getProjectName(fileName):
baseName = createValidAndroidProjectName(fileName)
return baseName.upper()
def getPackageName(fileName):
baseName = createValidAndroidProjectName(fileName)
#TODO return Constants.PACKAGE_NAME_PREFIX + baseName.lower()
return baseName.lower()+".com.remaui"
def sanitizeFilename(name):
repCharacterList = "[:\\\\/*?|<>]"
outResult = name
for c in repCharacterList:
outResult = outResult.replace(c,'_')
return outResult
def convertPageToFolderPath(packageName):
    return packageName.replace(".", "/")  # e.g. "com.example.app" -> "com/example/app"
def getHeightofView(root):
maxVal= 0
for childNode in root.mChildren:
height = childNode.height
if (height > maxVal):
maxVal = height
return maxVal + 1
def getHeightofElement(root):
maxVal = 0
    # Iterate direct children only; root.iter() would also yield root itself
    # and make this recursion never terminate.
    for childNode in list(root):
if type(childNode) == Element:
height = getHeightofElement(childNode)
if (height > maxVal):
maxVal = height
return maxVal + 1
def copyFileFromClassPath(classPathFilePath, absolutePath):
name = basename(classPathFilePath)
destFile = os.path.join(absolutePath, name)
run("cp "+ classPathFilePath + " " + destFile)
# shutil.copyfile(classPathFilePath,destFile)
def readFileFromClassPath(classPathFilePath):
fOpen = open(classPathFilePath,"r") #opens file with name of "test.txt"
content = fOpen.read()
fOpen.close()
return content
| 26.394958 | 92 | 0.678765 |
acf736954f6c5546d65c361202e6987dfbcf440d | 76,562 | py | Python | ansible/lib/ansible/modules/extras/monitoring/logicmonitor.py | kiv-box/redis | 966a0c3f0a51282cd173b42a6e249d23f4e89dec | ["Apache-2.0"] | null | null | null | ansible/lib/ansible/modules/extras/monitoring/logicmonitor.py | kiv-box/redis | 966a0c3f0a51282cd173b42a6e249d23f4e89dec | ["Apache-2.0"] | null | null | null | ansible/lib/ansible/modules/extras/monitoring/logicmonitor.py | kiv-box/redis | 966a0c3f0a51282cd173b42a6e249d23f4e89dec | ["Apache-2.0"] | null | null | null |
#!/usr/bin/python
"""LogicMonitor Ansible module for managing Collectors, Hosts and Hostgroups
Copyright (C) 2015 LogicMonitor
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software Foundation,
Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA"""
import datetime
import os
import platform
import socket
import sys
import types
import urllib
HAS_LIB_JSON = True
try:
import json
# Detect the python-json library which is incompatible
# Look for simplejson if that's the case
try:
if (
not isinstance(json.loads, types.FunctionType) or
not isinstance(json.dumps, types.FunctionType)
):
raise ImportError
except AttributeError:
raise ImportError
except ImportError:
try:
import simplejson as json
except ImportError:
print(
'\n{"msg": "Error: ansible requires the stdlib json or ' +
'simplejson module, neither was found!", "failed": true}'
)
HAS_LIB_JSON = False
except SyntaxError:
print(
'\n{"msg": "SyntaxError: probably due to installed simplejson ' +
'being for a different python version", "failed": true}'
)
HAS_LIB_JSON = False
RETURN = '''
---
success:
description: flag indicating that execution was successful
returned: success
type: boolean
sample: True
...
'''
DOCUMENTATION = '''
---
module: logicmonitor
short_description: Manage your LogicMonitor account through Ansible Playbooks
description:
- LogicMonitor is a hosted, full-stack, infrastructure monitoring platform.
- This module manages hosts, host groups, and collectors within your LogicMonitor account.
version_added: "2.2"
author: [Ethan Culler-Mayeno (@ethanculler), Jeff Wozniak (@woz5999)]
notes:
- You must have an existing LogicMonitor account for this module to function.
requirements: ["An existing LogicMonitor account", "Linux"]
options:
target:
description:
- The type of LogicMonitor object you wish to manage.
- "Collector: Perform actions on a LogicMonitor collector."
- NOTE You should use Ansible service modules such as M(service) or M(supervisorctl) for managing the Collector 'logicmonitor-agent' and 'logicmonitor-watchdog' services. Specifically, you'll probably want to start these services after a Collector add and stop these services before a Collector remove.
- "Host: Perform actions on a host device."
- "Hostgroup: Perform actions on a LogicMonitor host group."
- NOTE Host and Hostgroup tasks should always be performed via local_action. There are no benefits to running these tasks on the remote host and doing so will typically cause problems.
required: true
default: null
    choices: ['collector', 'host', 'datasource', 'hostgroup']
action:
description:
- The action you wish to perform on target.
- "Add: Add an object to your LogicMonitor account."
- "Remove: Remove an object from your LogicMonitor account."
- "Update: Update properties, description, or groups (target=host) for an object in your LogicMonitor account."
- "SDT: Schedule downtime for an object in your LogicMonitor account."
required: true
default: null
choices: ['add', 'remove', 'update', 'sdt']
company:
description:
- The LogicMonitor account company name. If you would log in to your account at "superheroes.logicmonitor.com" you would use "superheroes."
required: true
default: null
user:
description:
- A LogicMonitor user name. The module will authenticate and perform actions on behalf of this user.
required: true
default: null
password:
description:
- The password of the specified LogicMonitor user
required: true
default: null
collector:
description:
- The fully qualified domain name of a collector in your LogicMonitor account.
- This is required for the creation of a LogicMonitor host (target=host action=add).
- This is required for updating, removing or scheduling downtime for hosts if 'displayname' isn't specified (target=host action=update action=remove action=sdt).
required: false
default: null
hostname:
description:
- The hostname of a host in your LogicMonitor account, or the desired hostname of a device to manage.
- Optional for managing hosts (target=host).
required: false
default: 'hostname -f'
displayname:
description:
- The display name of a host in your LogicMonitor account or the desired display name of a device to manage.
- Optional for managing hosts (target=host).
required: false
default: 'hostname -f'
description:
description:
- The long text description of the object in your LogicMonitor account.
- Optional for managing hosts and host groups (target=host or target=hostgroup; action=add or action=update).
required: false
default: ""
properties:
description:
- A dictionary of properties to set on the LogicMonitor host or host group.
- Optional for managing hosts and host groups (target=host or target=hostgroup; action=add or action=update).
- This parameter will add or update existing properties in your LogicMonitor account.
required: false
default: {}
groups:
description:
- A list of groups that the host should be a member of.
- Optional for managing hosts (target=host; action=add or action=update).
required: false
default: []
id:
description:
- ID of the datasource to target.
- Required for management of LogicMonitor datasources (target=datasource).
required: false
default: null
fullpath:
description:
- The fullpath of the host group object you would like to manage.
- Recommend running on a single Ansible host.
- Required for management of LogicMonitor host groups (target=hostgroup).
required: false
default: null
alertenable:
description:
- A boolean flag to turn alerting on or off for an object.
- Optional for managing all hosts (action=add or action=update).
required: false
default: true
choices: [true, false]
starttime:
description:
- The time that the Scheduled Down Time (SDT) should begin.
- Optional for managing SDT (action=sdt).
- Y-m-d H:M
required: false
default: Now
duration:
description:
- The duration (minutes) of the Scheduled Down Time (SDT).
- Optional for putting an object into SDT (action=sdt).
required: false
default: 30
...
'''
EXAMPLES = '''
# example of adding a new LogicMonitor collector to these devices
---
- hosts: collectors
remote_user: '{{ username }}'
vars:
company: 'mycompany'
user: 'myusername'
password: 'mypassword'
tasks:
- name: Deploy/verify LogicMonitor collectors
become: yes
logicmonitor:
target=collector
action=add
company={{ company }}
user={{ user }}
password={{ password }}
#example of adding a list of hosts into monitoring
---
- hosts: hosts
remote_user: '{{ username }}'
vars:
company: 'mycompany'
user: 'myusername'
password: 'mypassword'
tasks:
- name: Deploy LogicMonitor Host
# All tasks except for target=collector should use local_action
local_action: >
logicmonitor
target=host
action=add
collector='mycompany-Collector'
company='{{ company }}'
user='{{ user }}'
password='{{ password }}'
groups="/servers/production,/datacenter1"
properties="{'snmp.community':'secret','dc':'1', 'type':'prod'}"
#example of putting a datasource in SDT
---
- hosts: localhost
remote_user: '{{ username }}'
vars:
company: 'mycompany'
user: 'myusername'
password: 'mypassword'
tasks:
- name: SDT a datasource
# All tasks except for target=collector should use local_action
local_action: >
logicmonitor
target=datasource
action=sdt
id='123'
duration=3000
starttime='2017-03-04 05:06'
company='{{ company }}'
user='{{ user }}'
password='{{ password }}'
#example of creating a hostgroup
---
- hosts: localhost
remote_user: '{{ username }}'
vars:
company: 'mycompany'
user: 'myusername'
password: 'mypassword'
tasks:
- name: Create a host group
# All tasks except for target=collector should use local_action
local_action: >
logicmonitor
target=hostgroup
action=add
fullpath='/servers/development'
company='{{ company }}'
user='{{ user }}'
password='{{ password }}'
properties="{'snmp.community':'commstring', 'type':'dev'}"
#example of putting a list of hosts into SDT
---
- hosts: hosts
remote_user: '{{ username }}'
vars:
company: 'mycompany'
user: 'myusername'
password: 'mypassword'
tasks:
- name: SDT hosts
# All tasks except for target=collector should use local_action
local_action: >
logicmonitor
target=host
action=sdt
duration=3000
starttime='2016-11-10 09:08'
company='{{ company }}'
user='{{ user }}'
password='{{ password }}'
collector='mycompany-Collector'
#example of putting a host group in SDT
---
- hosts: localhost
remote_user: '{{ username }}'
vars:
company: 'mycompany'
user: 'myusername'
password: 'mypassword'
tasks:
- name: SDT a host group
# All tasks except for target=collector should use local_action
local_action: >
logicmonitor
target=hostgroup
action=sdt
fullpath='/servers/development'
duration=3000
starttime='2017-03-04 05:06'
company='{{ company }}'
user='{{ user }}'
password='{{ password }}'
#example of updating a list of hosts
---
- hosts: hosts
remote_user: '{{ username }}'
vars:
company: 'mycompany'
user: 'myusername'
password: 'mypassword'
tasks:
- name: Update a list of hosts
# All tasks except for target=collector should use local_action
local_action: >
logicmonitor
target=host
action=update
company='{{ company }}'
user='{{ user }}'
password='{{ password }}'
collector='mycompany-Collector'
groups="/servers/production,/datacenter5"
properties="{'snmp.community':'commstring','dc':'5'}"
#example of updating a hostgroup
---
- hosts: hosts
remote_user: '{{ username }}'
vars:
company: 'mycompany'
user: 'myusername'
password: 'mypassword'
tasks:
- name: Update a host group
# All tasks except for target=collector should use local_action
local_action: >
logicmonitor
target=hostgroup
action=update
fullpath='/servers/development'
company='{{ company }}'
user='{{ user }}'
password='{{ password }}'
properties="{'snmp.community':'hg', 'type':'dev', 'status':'test'}"
#example of removing a list of hosts from monitoring
---
- hosts: hosts
remote_user: '{{ username }}'
vars:
company: 'mycompany'
user: 'myusername'
password: 'mypassword'
tasks:
- name: Remove LogicMonitor hosts
# All tasks except for target=collector should use local_action
local_action: >
logicmonitor
target=host
action=remove
company='{{ company }}'
user='{{ user }}'
password='{{ password }}'
collector='mycompany-Collector'
#example of removing a host group
---
- hosts: hosts
remote_user: '{{ username }}'
vars:
company: 'mycompany'
user: 'myusername'
password: 'mypassword'
tasks:
- name: Remove LogicMonitor development servers hostgroup
# All tasks except for target=collector should use local_action
local_action: >
logicmonitor
target=hostgroup
action=remove
company='{{ company }}'
user='{{ user }}'
password='{{ password }}'
fullpath='/servers/development'
- name: Remove LogicMonitor servers hostgroup
# All tasks except for target=collector should use local_action
local_action: >
logicmonitor
target=hostgroup
action=remove
company='{{ company }}'
user='{{ user }}'
password='{{ password }}'
fullpath='/servers'
- name: Remove LogicMonitor datacenter1 hostgroup
# All tasks except for target=collector should use local_action
local_action: >
logicmonitor
target=hostgroup
action=remove
company='{{ company }}'
user='{{ user }}'
password='{{ password }}'
fullpath='/datacenter1'
- name: Remove LogicMonitor datacenter5 hostgroup
# All tasks except for target=collector should use local_action
local_action: >
logicmonitor
target=hostgroup
action=remove
company='{{ company }}'
user='{{ user }}'
password='{{ password }}'
fullpath='/datacenter5'
### example of removing a new LogicMonitor collector to these devices
---
- hosts: collectors
remote_user: '{{ username }}'
vars:
company: 'mycompany'
user: 'myusername'
password: 'mypassword'
tasks:
- name: Remove LogicMonitor collectors
become: yes
logicmonitor:
target=collector
action=remove
company={{ company }}
user={{ user }}
password={{ password }}
#complete example
---
- hosts: localhost
remote_user: '{{ username }}'
vars:
company: 'mycompany'
user: 'myusername'
password: 'mypassword'
tasks:
- name: Create a host group
local_action: >
logicmonitor
target=hostgroup
action=add
fullpath='/servers/production/database'
company='{{ company }}'
user='{{ user }}'
password='{{ password }}'
properties="{'snmp.community':'commstring'}"
- name: SDT a host group
local_action: >
logicmonitor
target=hostgroup
action=sdt
fullpath='/servers/production/web'
duration=3000
starttime='2012-03-04 05:06'
company='{{ company }}'
user='{{ user }}'
password='{{ password }}'
- hosts: collectors
remote_user: '{{ username }}'
vars:
company: 'mycompany'
user: 'myusername'
password: 'mypassword'
tasks:
- name: Deploy/verify LogicMonitor collectors
logicmonitor:
target: collector
action: add
        company: '{{ company }}'
        user: '{{ user }}'
        password: '{{ password }}'
- name: Place LogicMonitor collectors into 30 minute Scheduled downtime
logicmonitor: target=collector action=sdt company={{ company }}
user={{ user }} password={{ password }}
- name: Deploy LogicMonitor Host
local_action: >
logicmonitor
target=host
action=add
collector=agent1.ethandev.com
company='{{ company }}'
user='{{ user }}'
password='{{ password }}'
properties="{'snmp.community':'commstring', 'dc':'1'}"
groups="/servers/production/collectors, /datacenter1"
- hosts: database-servers
remote_user: '{{ username }}'
vars:
company: 'mycompany'
user: 'myusername'
password: 'mypassword'
tasks:
- name: deploy logicmonitor hosts
local_action: >
logicmonitor
target=host
action=add
collector=monitoring.dev.com
company='{{ company }}'
user='{{ user }}'
password='{{ password }}'
properties="{'snmp.community':'commstring', 'type':'db', 'dc':'1'}"
groups="/servers/production/database, /datacenter1"
- name: schedule 5 hour downtime for 2012-11-10 09:08
local_action: >
logicmonitor
target=host
action=sdt
duration=3000
starttime='2012-11-10 09:08'
company='{{ company }}'
user='{{ user }}'
password='{{ password }}'
'''
class LogicMonitor(object):
def __init__(self, module, **params):
self.__version__ = "1.0-python"
self.module = module
self.module.debug("Instantiating LogicMonitor object")
self.check_mode = False
self.company = params["company"]
self.user = params["user"]
self.password = params["password"]
self.fqdn = socket.getfqdn()
self.lm_url = "logicmonitor.com/santaba"
self.__version__ = self.__version__ + "-ansible-module"
def rpc(self, action, params):
"""Make a call to the LogicMonitor RPC library
and return the response"""
self.module.debug("Running LogicMonitor.rpc")
param_str = urllib.urlencode(params)
creds = urllib.urlencode(
{"c": self.company,
"u": self.user,
"p": self.password})
if param_str:
param_str = param_str + "&"
param_str = param_str + creds
try:
url = ("https://" + self.company + "." + self.lm_url +
"/rpc/" + action + "?" + param_str)
# Set custom LogicMonitor header with version
headers = {"X-LM-User-Agent": self.__version__}
# Set headers
f = open_url(url, headers=headers)
raw = f.read()
resp = json.loads(raw)
if resp["status"] == 403:
self.module.debug("Authentication failed.")
self.fail(msg="Error: " + resp["errmsg"])
else:
return raw
except IOError:
ioe = get_exception()
self.fail(msg="Error: Exception making RPC call to " +
"https://" + self.company + "." + self.lm_url +
"/rpc/" + action + "\nException" + str(ioe))
def do(self, action, params):
"""Make a call to the LogicMonitor
server \"do\" function"""
self.module.debug("Running LogicMonitor.do...")
param_str = urllib.urlencode(params)
creds = (urllib.urlencode(
{"c": self.company,
"u": self.user,
"p": self.password}))
if param_str:
param_str = param_str + "&"
param_str = param_str + creds
try:
self.module.debug("Attempting to open URL: " +
"https://" + self.company + "." + self.lm_url +
"/do/" + action + "?" + param_str)
f = open_url(
"https://" + self.company + "." + self.lm_url +
"/do/" + action + "?" + param_str)
return f.read()
except IOError:
ioe = get_exception()
self.fail(msg="Error: Exception making RPC call to " +
"https://" + self.company + "." + self.lm_url +
"/do/" + action + "\nException" + str(ioe))
def get_collectors(self):
"""Returns a JSON object containing a list of
LogicMonitor collectors"""
self.module.debug("Running LogicMonitor.get_collectors...")
self.module.debug("Making RPC call to 'getAgents'")
resp = self.rpc("getAgents", {})
resp_json = json.loads(resp)
if resp_json["status"] is 200:
self.module.debug("RPC call succeeded")
return resp_json["data"]
else:
self.fail(msg=resp)
def get_host_by_hostname(self, hostname, collector):
"""Returns a host object for the host matching the
specified hostname"""
self.module.debug("Running LogicMonitor.get_host_by_hostname...")
self.module.debug("Looking for hostname " + hostname)
self.module.debug("Making RPC call to 'getHosts'")
hostlist_json = json.loads(self.rpc("getHosts", {"hostGroupId": 1}))
if collector:
if hostlist_json["status"] == 200:
self.module.debug("RPC call succeeded")
hosts = hostlist_json["data"]["hosts"]
self.module.debug(
"Looking for host matching: hostname " + hostname +
" and collector " + str(collector["id"]))
for host in hosts:
if (host["hostName"] == hostname and
host["agentId"] == collector["id"]):
self.module.debug("Host match found")
return host
self.module.debug("No host match found")
return None
else:
self.module.debug("RPC call failed")
self.module.debug(hostlist_json)
else:
self.module.debug("No collector specified")
return None
def get_host_by_displayname(self, displayname):
"""Returns a host object for the host matching the
specified display name"""
self.module.debug("Running LogicMonitor.get_host_by_displayname...")
self.module.debug("Looking for displayname " + displayname)
self.module.debug("Making RPC call to 'getHost'")
host_json = (json.loads(self.rpc("getHost",
{"displayName": displayname})))
if host_json["status"] == 200:
self.module.debug("RPC call succeeded")
return host_json["data"]
else:
self.module.debug("RPC call failed")
self.module.debug(host_json)
return None
def get_collector_by_description(self, description):
"""Returns a JSON collector object for the collector
matching the specified FQDN (description)"""
self.module.debug(
"Running LogicMonitor.get_collector_by_description..."
)
collector_list = self.get_collectors()
if collector_list is not None:
self.module.debug("Looking for collector with description {0}" +
description)
for collector in collector_list:
if collector["description"] == description:
self.module.debug("Collector match found")
return collector
self.module.debug("No collector match found")
return None
def get_group(self, fullpath):
"""Returns a JSON group object for the group matching the
specified path"""
self.module.debug("Running LogicMonitor.get_group...")
self.module.debug("Making RPC call to getHostGroups")
resp = json.loads(self.rpc("getHostGroups", {}))
if resp["status"] == 200:
self.module.debug("RPC called succeeded")
groups = resp["data"]
self.module.debug("Looking for group matching " + fullpath)
for group in groups:
if group["fullPath"] == fullpath.lstrip('/'):
self.module.debug("Group match found")
return group
self.module.debug("No group match found")
return None
else:
self.module.debug("RPC call failed")
self.module.debug(resp)
return None
def create_group(self, fullpath):
"""Recursively create a path of host groups.
Returns the id of the newly created hostgroup"""
self.module.debug("Running LogicMonitor.create_group...")
res = self.get_group(fullpath)
if res:
self.module.debug("Group {0} exists." + fullpath)
return res["id"]
if fullpath == "/":
self.module.debug("Specified group is root. Doing nothing.")
return 1
else:
self.module.debug("Creating group named " + fullpath)
self.module.debug("System changed")
self.change = True
if self.check_mode:
self.exit(changed=True)
parentpath, name = fullpath.rsplit('/', 1)
parentgroup = self.get_group(parentpath)
parentid = 1
if parentpath == "":
parentid = 1
elif parentgroup:
parentid = parentgroup["id"]
else:
parentid = self.create_group(parentpath)
h = None
# Determine if we're creating a group from host or hostgroup class
if hasattr(self, '_build_host_group_hash'):
h = self._build_host_group_hash(
fullpath,
self.description,
self.properties,
self.alertenable)
h["name"] = name
h["parentId"] = parentid
else:
h = {"name": name,
"parentId": parentid,
"alertEnable": True,
"description": ""}
self.module.debug("Making RPC call to 'addHostGroup'")
resp = json.loads(
self.rpc("addHostGroup", h))
if resp["status"] == 200:
self.module.debug("RPC call succeeded")
return resp["data"]["id"]
elif resp["errmsg"] == "The record already exists":
self.module.debug("The hostgroup already exists")
group = self.get_group(fullpath)
return group["id"]
else:
self.module.debug("RPC call failed")
self.fail(
msg="Error: unable to create new hostgroup \"" +
name + "\".\n" + resp["errmsg"])
def fail(self, msg):
self.module.fail_json(msg=msg, changed=self.change, failed=True)
def exit(self, changed):
self.module.debug("Changed: " + changed)
self.module.exit_json(changed=changed, success=True)
def output_info(self, info):
self.module.debug("Registering properties as Ansible facts")
self.module.exit_json(changed=False, ansible_facts=info)
class Collector(LogicMonitor):
def __init__(self, params, module=None):
"""Initializor for the LogicMonitor Collector object"""
self.change = False
self.params = params
LogicMonitor.__init__(self, module, **params)
self.module.debug("Instantiating Collector object")
if self.params['description']:
self.description = self.params['description']
else:
self.description = self.fqdn
self.info = self._get()
self.installdir = "/usr/local/logicmonitor"
self.platform = platform.system()
self.is_64bits = sys.maxsize > 2**32
self.duration = self.params['duration']
self.starttime = self.params['starttime']
if self.info is None:
self.id = None
else:
self.id = self.info["id"]
def create(self):
"""Idempotent function to make sure that there is
a running collector installed and registered"""
self.module.debug("Running Collector.create...")
self._create()
self.get_installer_binary()
self.install()
def remove(self):
"""Idempotent function to make sure that there is
not a running collector installed and registered"""
self.module.debug("Running Collector.destroy...")
self._unreigster()
self.uninstall()
def get_installer_binary(self):
"""Download the LogicMonitor collector installer binary"""
self.module.debug("Running Collector.get_installer_binary...")
arch = 32
if self.is_64bits:
self.module.debug("64 bit system")
arch = 64
else:
self.module.debug("32 bit system")
if self.platform == "Linux" and self.id is not None:
self.module.debug("Platform is Linux")
self.module.debug("Agent ID is " + str(self.id))
installfilepath = (self.installdir +
"/logicmonitorsetup" +
str(self.id) + "_" + str(arch) +
".bin")
self.module.debug("Looking for existing installer at " +
installfilepath)
if not os.path.isfile(installfilepath):
self.module.debug("No previous installer found")
self.module.debug("System changed")
self.change = True
if self.check_mode:
self.exit(changed=True)
self.module.debug("Downloading installer file")
# attempt to create the install dir before download
self.module.run_command("mkdir " + self.installdir)
try:
f = open(installfilepath, "w")
installer = (self.do("logicmonitorsetup",
{"id": self.id,
"arch": arch}))
f.write(installer)
                f.close()
except:
self.fail(msg="Unable to open installer file for writing")
                f.close()
else:
self.module.debug("Collector installer already exists")
return installfilepath
elif self.id is None:
self.fail(
msg="Error: There is currently no collector " +
"associated with this device. To download " +
" the installer, first create a collector " +
"for this device.")
elif self.platform != "Linux":
self.fail(
msg="Error: LogicMonitor Collector must be " +
"installed on a Linux device.")
else:
self.fail(
msg="Error: Unable to retrieve the installer from the server")
def install(self):
"""Execute the LogicMonitor installer if not
already installed"""
self.module.debug("Running Collector.install...")
if self.platform == "Linux":
self.module.debug("Platform is Linux")
installer = self.get_installer_binary()
if self.info is None:
self.module.debug("Retriving collector information")
self.info = self._get()
if not os.path.exists(self.installdir + "/agent"):
self.module.debug("System changed")
self.change = True
if self.check_mode:
self.exit(changed=True)
self.module.debug("Setting installer file permissions")
os.chmod(installer, 484) # decimal for 0o744
self.module.debug("Executing installer")
ret_code, out, err = self.module.run_command(installer + " -y")
if ret_code != 0:
self.fail(msg="Error: Unable to install collector: " + err)
else:
self.module.debug("Collector installed successfully")
else:
self.module.debug("Collector already installed")
else:
self.fail(
msg="Error: LogicMonitor Collector must be " +
"installed on a Linux device")
def uninstall(self):
"""Uninstall LogicMontitor collector from the system"""
self.module.debug("Running Collector.uninstall...")
uninstallfile = self.installdir + "/agent/bin/uninstall.pl"
if os.path.isfile(uninstallfile):
self.module.debug("Collector uninstall file exists")
self.module.debug("System changed")
self.change = True
if self.check_mode:
self.exit(changed=True)
self.module.debug("Running collector uninstaller")
ret_code, out, err = self.module.run_command(uninstallfile)
if ret_code != 0:
self.fail(
msg="Error: Unable to uninstall collector: " + err)
else:
self.module.debug("Collector successfully uninstalled")
else:
if os.path.exists(self.installdir + "/agent"):
(self.fail(
msg="Unable to uninstall LogicMonitor " +
"Collector. Can not find LogicMonitor " +
"uninstaller."))
def sdt(self):
"""Create a scheduled down time
(maintenance window) for this host"""
self.module.debug("Running Collector.sdt...")
self.module.debug("System changed")
self.change = True
if self.check_mode:
self.exit(changed=True)
duration = self.duration
starttime = self.starttime
offsetstart = starttime
if starttime:
self.module.debug("Start time specified")
start = datetime.datetime.strptime(starttime, '%Y-%m-%d %H:%M')
offsetstart = start
else:
self.module.debug("No start time specified. Using default.")
start = datetime.datetime.utcnow()
# Use user UTC offset
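            # The SDT window is interpreted in the account's time zone, so the
            # current UTC time is shifted by the offset returned from getTimeZoneSetting.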
self.module.debug("Making RPC call to 'getTimeZoneSetting'")
accountresp = json.loads(self.rpc("getTimeZoneSetting", {}))
if accountresp["status"] == 200:
self.module.debug("RPC call succeeded")
offset = accountresp["data"]["offset"]
offsetstart = start + datetime.timedelta(0, offset)
else:
self.fail(msg="Error: Unable to retrieve timezone offset")
offsetend = offsetstart + datetime.timedelta(0, int(duration)*60)
h = {"agentId": self.id,
"type": 1,
"notifyCC": True,
"year": offsetstart.year,
"month": offsetstart.month-1,
"day": offsetstart.day,
"hour": offsetstart.hour,
"minute": offsetstart.minute,
"endYear": offsetend.year,
"endMonth": offsetend.month-1,
"endDay": offsetend.day,
"endHour": offsetend.hour,
"endMinute": offsetend.minute}
self.module.debug("Making RPC call to 'setAgentSDT'")
resp = json.loads(self.rpc("setAgentSDT", h))
if resp["status"] == 200:
self.module.debug("RPC call succeeded")
return resp["data"]
else:
self.module.debug("RPC call failed")
self.fail(msg=resp["errmsg"])
def site_facts(self):
"""Output current properties information for the Collector"""
self.module.debug("Running Collector.site_facts...")
if self.info:
self.module.debug("Collector exists")
props = self.get_properties(True)
self.output_info(props)
else:
self.fail(msg="Error: Collector doesn't exit.")
def _get(self):
"""Returns a JSON object representing this collector"""
self.module.debug("Running Collector._get...")
collector_list = self.get_collectors()
if collector_list is not None:
self.module.debug("Collectors returned")
for collector in collector_list:
if collector["description"] == self.description:
return collector
else:
self.module.debug("No collectors returned")
return None
def _create(self):
"""Create a new collector in the associated
LogicMonitor account"""
self.module.debug("Running Collector._create...")
if self.platform == "Linux":
self.module.debug("Platform is Linux")
ret = self.info or self._get()
if ret is None:
self.change = True
self.module.debug("System changed")
if self.check_mode:
self.exit(changed=True)
h = {"autogen": True,
"description": self.description}
self.module.debug("Making RPC call to 'addAgent'")
create = (json.loads(self.rpc("addAgent", h)))
if create["status"] is 200:
self.module.debug("RPC call succeeded")
self.info = create["data"]
self.id = create["data"]["id"]
return create["data"]
else:
self.fail(msg=create["errmsg"])
else:
self.info = ret
self.id = ret["id"]
return ret
else:
self.fail(
msg="Error: LogicMonitor Collector must be " +
"installed on a Linux device.")
    def _unregister(self):
        """Delete this collector from the associated
        LogicMonitor account"""
        self.module.debug("Running Collector._unregister...")
if self.info is None:
self.module.debug("Retrieving collector information")
self.info = self._get()
if self.info is not None:
self.module.debug("Collector found")
self.module.debug("System changed")
self.change = True
if self.check_mode:
self.exit(changed=True)
self.module.debug("Making RPC call to 'deleteAgent'")
delete = json.loads(self.rpc("deleteAgent",
{"id": self.id}))
if delete["status"] is 200:
self.module.debug("RPC call succeeded")
return delete
else:
# The collector couldn't unregister. Start the service again
self.module.debug("Error unregistering collecting. " +
delete["errmsg"])
self.fail(msg=delete["errmsg"])
else:
self.module.debug("Collector not found")
return None
class Host(LogicMonitor):
def __init__(self, params, module=None):
"""Initializor for the LogicMonitor host object"""
self.change = False
self.params = params
self.collector = None
LogicMonitor.__init__(self, module, **self.params)
self.module.debug("Instantiating Host object")
if self.params["hostname"]:
self.module.debug("Hostname is " + self.params["hostname"])
self.hostname = self.params['hostname']
else:
self.module.debug("No hostname specified. Using " + self.fqdn)
self.hostname = self.fqdn
if self.params["displayname"]:
self.module.debug("Display name is " + self.params["displayname"])
self.displayname = self.params['displayname']
else:
self.module.debug("No display name specified. Using " + self.fqdn)
self.displayname = self.fqdn
        # Attempt to find host information via display name or host name
self.module.debug("Attempting to find host by displayname " +
self.displayname)
info = self.get_host_by_displayname(self.displayname)
if info is not None:
self.module.debug("Host found by displayname")
            # Use the host information to grab the collector description
# if not provided
if (not hasattr(self.params, "collector") and
"agentDescription" in info):
self.module.debug("Setting collector from host response. " +
"Collector " + info["agentDescription"])
self.params["collector"] = info["agentDescription"]
else:
self.module.debug("Host not found by displayname")
# At this point, a valid collector description is required for success
# Check that the description exists or fail
if self.params["collector"]:
self.module.debug(
"Collector specified is " +
self.params["collector"]
)
self.collector = (self.get_collector_by_description(
self.params["collector"]))
else:
self.fail(msg="No collector specified.")
# If the host wasn't found via displayname, attempt by hostname
if info is None:
self.module.debug("Attempting to find host by hostname " +
self.hostname)
info = self.get_host_by_hostname(self.hostname, self.collector)
self.info = info
self.properties = self.params["properties"]
self.description = self.params["description"]
self.starttime = self.params["starttime"]
self.duration = self.params["duration"]
self.alertenable = self.params["alertenable"]
if self.params["groups"] is not None:
self.groups = self._strip_groups(self.params["groups"])
else:
self.groups = None
def create(self):
"""Idemopotent function to create if missing,
update if changed, or skip"""
self.module.debug("Running Host.create...")
self.update()
def get_properties(self):
"""Returns a hash of the properties
associated with this LogicMonitor host"""
self.module.debug("Running Host.get_properties...")
if self.info:
self.module.debug("Making RPC call to 'getHostProperties'")
properties_json = (json.loads(self.rpc("getHostProperties",
{'hostId': self.info["id"],
"filterSystemProperties": True})))
if properties_json["status"] == 200:
self.module.debug("RPC call succeeded")
return properties_json["data"]
else:
self.module.debug("Error: there was an issue retrieving the " +
"host properties")
self.module.debug(properties_json["errmsg"])
self.fail(msg=properties_json["status"])
else:
self.module.debug(
"Unable to find LogicMonitor host which matches " +
self.displayname + " (" + self.hostname + ")"
)
return None
def set_properties(self, propertyhash):
"""update the host to have the properties
contained in the property hash"""
self.module.debug("Running Host.set_properties...")
self.module.debug("System changed")
self.change = True
if self.check_mode:
self.exit(changed=True)
self.module.debug("Assigning property hash to host object")
self.properties = propertyhash
def add(self):
"""Add this device to monitoring
in your LogicMonitor account"""
self.module.debug("Running Host.add...")
if self.collector and not self.info:
self.module.debug("Host not registered. Registering.")
self.module.debug("System changed")
self.change = True
if self.check_mode:
self.exit(changed=True)
h = self._build_host_hash(
self.hostname,
self.displayname,
self.collector,
self.description,
self.groups,
self.properties,
self.alertenable)
self.module.debug("Making RPC call to 'addHost'")
resp = json.loads(self.rpc("addHost", h))
if resp["status"] == 200:
self.module.debug("RPC call succeeded")
return resp["data"]
else:
self.module.debug("RPC call failed")
self.module.debug(resp)
return resp["errmsg"]
elif self.collector is None:
self.fail(msg="Specified collector doesn't exist")
else:
self.module.debug("Host already registered")
def update(self):
"""This method takes changes made to this host
and applies them to the corresponding host
in your LogicMonitor account."""
self.module.debug("Running Host.update...")
if self.info:
self.module.debug("Host already registed")
if self.is_changed():
self.module.debug("System changed")
self.change = True
if self.check_mode:
self.exit(changed=True)
h = (self._build_host_hash(
self.hostname,
self.displayname,
self.collector,
self.description,
self.groups,
self.properties,
self.alertenable))
h["id"] = self.info["id"]
h["opType"] = "replace"
self.module.debug("Making RPC call to 'updateHost'")
resp = json.loads(self.rpc("updateHost", h))
if resp["status"] == 200:
self.module.debug("RPC call succeeded")
else:
self.module.debug("RPC call failed")
self.fail(msg="Error: unable to update the host.")
else:
self.module.debug(
"Host properties match supplied properties. " +
"No changes to make."
)
return self.info
else:
self.module.debug("Host not registed. Registering")
self.module.debug("System changed")
self.change = True
if self.check_mode:
self.exit(changed=True)
return self.add()
def remove(self):
"""Remove this host from your LogicMonitor account"""
self.module.debug("Running Host.remove...")
if self.info:
self.module.debug("Host registered")
self.module.debug("System changed")
self.change = True
if self.check_mode:
self.exit(changed=True)
self.module.debug("Making RPC call to 'deleteHost'")
resp = json.loads(self.rpc("deleteHost",
{"hostId": self.info["id"],
"deleteFromSystem": True,
"hostGroupId": 1}))
if resp["status"] == 200:
self.module.debug(resp)
self.module.debug("RPC call succeeded")
return resp
else:
self.module.debug("RPC call failed")
self.module.debug(resp)
self.fail(msg=resp["errmsg"])
else:
self.module.debug("Host not registered")
def is_changed(self):
"""Return true if the host doesn't
match the LogicMonitor account"""
self.module.debug("Running Host.is_changed")
ignore = ['system.categories', 'snmp.version']
hostresp = self.get_host_by_displayname(self.displayname)
if hostresp is None:
hostresp = self.get_host_by_hostname(self.hostname, self.collector)
if hostresp:
self.module.debug("Comparing simple host properties")
if hostresp["alertEnable"] != self.alertenable:
return True
if hostresp["description"] != self.description:
return True
if hostresp["displayedAs"] != self.displayname:
return True
if (self.collector and
hasattr(self.collector, "id") and
hostresp["agentId"] != self.collector["id"]):
return True
self.module.debug("Comparing groups.")
if self._compare_groups(hostresp) is True:
return True
propresp = self.get_properties()
if propresp:
self.module.debug("Comparing properties.")
if self._compare_props(propresp, ignore) is True:
return True
else:
self.fail(
msg="Error: Unknown error retrieving host properties")
return False
else:
self.fail(msg="Error: Unknown error retrieving host information")
def sdt(self):
"""Create a scheduled down time
(maintenance window) for this host"""
self.module.debug("Running Host.sdt...")
if self.info:
self.module.debug("System changed")
self.change = True
if self.check_mode:
self.exit(changed=True)
duration = self.duration
starttime = self.starttime
offset = starttime
if starttime:
self.module.debug("Start time specified")
start = datetime.datetime.strptime(starttime, '%Y-%m-%d %H:%M')
offsetstart = start
else:
self.module.debug("No start time specified. Using default.")
start = datetime.datetime.utcnow()
# Use user UTC offset
self.module.debug("Making RPC call to 'getTimeZoneSetting'")
accountresp = (json.loads(self.rpc("getTimeZoneSetting", {})))
if accountresp["status"] == 200:
self.module.debug("RPC call succeeded")
offset = accountresp["data"]["offset"]
offsetstart = start + datetime.timedelta(0, offset)
else:
self.fail(
msg="Error: Unable to retrieve timezone offset")
offsetend = offsetstart + datetime.timedelta(0, int(duration)*60)
h = {"hostId": self.info["id"],
"type": 1,
"year": offsetstart.year,
"month": offsetstart.month - 1,
"day": offsetstart.day,
"hour": offsetstart.hour,
"minute": offsetstart.minute,
"endYear": offsetend.year,
"endMonth": offsetend.month - 1,
"endDay": offsetend.day,
"endHour": offsetend.hour,
"endMinute": offsetend.minute}
self.module.debug("Making RPC call to 'setHostSDT'")
resp = (json.loads(self.rpc("setHostSDT", h)))
if resp["status"] == 200:
self.module.debug("RPC call succeeded")
return resp["data"]
else:
self.module.debug("RPC call failed")
self.fail(msg=resp["errmsg"])
else:
self.fail(msg="Error: Host doesn't exit.")
def site_facts(self):
"""Output current properties information for the Host"""
self.module.debug("Running Host.site_facts...")
if self.info:
self.module.debug("Host exists")
props = self.get_properties()
self.output_info(props)
else:
self.fail(msg="Error: Host doesn't exit.")
def _build_host_hash(self,
hostname,
displayname,
collector,
description,
groups,
properties,
alertenable):
"""Return a property formated hash for the
creation of a host using the rpc function"""
self.module.debug("Running Host._build_host_hash...")
h = {}
h["hostName"] = hostname
h["displayedAs"] = displayname
h["alertEnable"] = alertenable
if collector:
self.module.debug("Collector property exists")
h["agentId"] = collector["id"]
else:
self.fail(
msg="Error: No collector found. Unable to build host hash.")
if description:
h["description"] = description
        if groups is not None and groups != []:
self.module.debug("Group property exists")
groupids = ""
for group in groups:
groupids = groupids + str(self.create_group(group)) + ","
h["hostGroupIds"] = groupids.rstrip(',')
        if properties is not None and properties != {}:
self.module.debug("Properties hash exists")
propnum = 0
for key, value in properties.items():
h["propName" + str(propnum)] = key
h["propValue" + str(propnum)] = value
propnum = propnum + 1
return h
def _verify_property(self, propname):
"""Check with LogicMonitor server to
verify property is unchanged"""
self.module.debug("Running Host._verify_property...")
if self.info:
self.module.debug("Host is registered")
if propname not in self.properties:
self.module.debug("Property " + propname + " does not exist")
return False
else:
self.module.debug("Property " + propname + " exists")
h = {"hostId": self.info["id"],
"propName0": propname,
"propValue0": self.properties[propname]}
self.module.debug("Making RCP call to 'verifyProperties'")
resp = json.loads(self.rpc('verifyProperties', h))
if resp["status"] == 200:
self.module.debug("RPC call succeeded")
return resp["data"]["match"]
else:
self.fail(
msg="Error: unable to get verification " +
"from server.\n%s" % resp["errmsg"])
else:
self.fail(
msg="Error: Host doesn't exist. Unable to verify properties")
def _compare_groups(self, hostresp):
"""Function to compare the host's current
groups against provided groups"""
self.module.debug("Running Host._compare_groups")
g = []
fullpathinids = hostresp["fullPathInIds"]
self.module.debug("Building list of groups")
for path in fullpathinids:
if path != []:
h = {'hostGroupId': path[-1]}
hgresp = json.loads(self.rpc("getHostGroup", h))
if (hgresp["status"] == 200 and
hgresp["data"]["appliesTo"] == ""):
g.append(path[-1])
if self.groups is not None:
self.module.debug("Comparing group lists")
for group in self.groups:
groupjson = self.get_group(group)
if groupjson is None:
self.module.debug("Group mismatch. No result.")
return True
elif groupjson['id'] not in g:
self.module.debug("Group mismatch. ID doesn't exist.")
return True
else:
g.remove(groupjson['id'])
if g != []:
self.module.debug("Group mismatch. New ID exists.")
return True
self.module.debug("Groups match")
def _compare_props(self, propresp, ignore):
"""Function to compare the host's current
properties against provided properties"""
self.module.debug("Running Host._compare_props...")
p = {}
self.module.debug("Creating list of properties")
for prop in propresp:
if prop["name"] not in ignore:
if ("*******" in prop["value"] and
self._verify_property(prop["name"])):
p[prop["name"]] = self.properties[prop["name"]]
else:
p[prop["name"]] = prop["value"]
self.module.debug("Comparing properties")
# Iterate provided properties and compare to received properties
for prop in self.properties:
if (prop not in p or
p[prop] != self.properties[prop]):
self.module.debug("Properties mismatch")
return True
self.module.debug("Properties match")
def _strip_groups(self, groups):
"""Function to strip whitespace from group list.
This function provides the user some flexibility when
formatting group arguments """
self.module.debug("Running Host._strip_groups...")
        return [x.strip() for x in groups]  # return a list so it can be iterated more than once
class Datasource(LogicMonitor):
def __init__(self, params, module=None):
"""Initializor for the LogicMonitor Datasource object"""
self.change = False
self.params = params
LogicMonitor.__init__(self, module, **params)
self.module.debug("Instantiating Datasource object")
self.id = self.params["id"]
self.starttime = self.params["starttime"]
self.duration = self.params["duration"]
def sdt(self):
"""Create a scheduled down time
(maintenance window) for this host"""
self.module.debug("Running Datasource.sdt...")
self.module.debug("System changed")
self.change = True
if self.check_mode:
self.exit(changed=True)
duration = self.duration
starttime = self.starttime
offsetstart = starttime
if starttime:
self.module.debug("Start time specified")
start = datetime.datetime.strptime(starttime, '%Y-%m-%d %H:%M')
offsetstart = start
else:
self.module.debug("No start time specified. Using default.")
start = datetime.datetime.utcnow()
# Use user UTC offset
self.module.debug("Making RPC call to 'getTimeZoneSetting'")
accountresp = json.loads(self.rpc("getTimeZoneSetting", {}))
if accountresp["status"] == 200:
self.module.debug("RPC call succeeded")
offset = accountresp["data"]["offset"]
offsetstart = start + datetime.timedelta(0, offset)
else:
self.fail(msg="Error: Unable to retrieve timezone offset")
offsetend = offsetstart + datetime.timedelta(0, int(duration)*60)
h = {"hostDataSourceId": self.id,
"type": 1,
"notifyCC": True,
"year": offsetstart.year,
"month": offsetstart.month-1,
"day": offsetstart.day,
"hour": offsetstart.hour,
"minute": offsetstart.minute,
"endYear": offsetend.year,
"endMonth": offsetend.month-1,
"endDay": offsetend.day,
"endHour": offsetend.hour,
"endMinute": offsetend.minute}
self.module.debug("Making RPC call to 'setHostDataSourceSDT'")
resp = json.loads(self.rpc("setHostDataSourceSDT", h))
if resp["status"] == 200:
self.module.debug("RPC call succeeded")
return resp["data"]
else:
self.module.debug("RPC call failed")
self.fail(msg=resp["errmsg"])
class Hostgroup(LogicMonitor):
def __init__(self, params, module=None):
"""Initializor for the LogicMonitor host object"""
self.change = False
self.params = params
LogicMonitor.__init__(self, module, **self.params)
self.module.debug("Instantiating Hostgroup object")
self.fullpath = self.params["fullpath"]
self.info = self.get_group(self.fullpath)
self.properties = self.params["properties"]
self.description = self.params["description"]
self.starttime = self.params["starttime"]
self.duration = self.params["duration"]
self.alertenable = self.params["alertenable"]
def create(self):
"""Wrapper for self.update()"""
self.module.debug("Running Hostgroup.create...")
self.update()
def get_properties(self, final=False):
"""Returns a hash of the properties
associated with this LogicMonitor host"""
self.module.debug("Running Hostgroup.get_properties...")
if self.info:
self.module.debug("Group found")
self.module.debug("Making RPC call to 'getHostGroupProperties'")
properties_json = json.loads(self.rpc(
"getHostGroupProperties",
{'hostGroupId': self.info["id"],
"finalResult": final}))
if properties_json["status"] == 200:
self.module.debug("RPC call succeeded")
return properties_json["data"]
else:
self.module.debug("RPC call failed")
self.fail(msg=properties_json["status"])
else:
self.module.debug("Group not found")
return None
def set_properties(self, propertyhash):
"""Update the host to have the properties
contained in the property hash"""
self.module.debug("Running Hostgroup.set_properties")
self.module.debug("System changed")
self.change = True
if self.check_mode:
self.exit(changed=True)
self.module.debug("Assigning property has to host object")
self.properties = propertyhash
def add(self):
"""Idempotent function to ensure that the host
group exists in your LogicMonitor account"""
self.module.debug("Running Hostgroup.add")
if self.info is None:
self.module.debug("Group doesn't exist. Creating.")
self.module.debug("System changed")
self.change = True
if self.check_mode:
self.exit(changed=True)
self.create_group(self.fullpath)
self.info = self.get_group(self.fullpath)
self.module.debug("Group created")
return self.info
else:
self.module.debug("Group already exists")
def update(self):
"""Idempotent function to ensure the host group settings
(alertenable, properties, etc) in the
LogicMonitor account match the current object."""
self.module.debug("Running Hostgroup.update")
if self.info:
if self.is_changed():
self.module.debug("System changed")
self.change = True
if self.check_mode:
self.exit(changed=True)
h = self._build_host_group_hash(
self.fullpath,
self.description,
self.properties,
self.alertenable)
h["opType"] = "replace"
if self.fullpath != "/":
h["id"] = self.info["id"]
self.module.debug("Making RPC call to 'updateHostGroup'")
resp = json.loads(self.rpc("updateHostGroup", h))
if resp["status"] == 200:
self.module.debug("RPC call succeeded")
return resp["data"]
else:
self.module.debug("RPC call failed")
self.fail(msg="Error: Unable to update the " +
"host.\n" + resp["errmsg"])
else:
self.module.debug(
"Group properties match supplied properties. " +
"No changes to make"
)
return self.info
else:
self.module.debug("Group doesn't exist. Creating.")
self.module.debug("System changed")
self.change = True
if self.check_mode:
self.exit(changed=True)
return self.add()
def remove(self):
"""Idempotent function to ensure the host group
does not exist in your LogicMonitor account"""
self.module.debug("Running Hostgroup.remove...")
if self.info:
self.module.debug("Group exists")
self.module.debug("System changed")
self.change = True
if self.check_mode:
self.exit(changed=True)
self.module.debug("Making RPC call to 'deleteHostGroup'")
resp = json.loads(self.rpc("deleteHostGroup",
{"hgId": self.info["id"]}))
if resp["status"] == 200:
self.module.debug(resp)
self.module.debug("RPC call succeeded")
return resp
elif resp["errmsg"] == "No such group":
self.module.debug("Group doesn't exist")
else:
self.module.debug("RPC call failed")
self.module.debug(resp)
self.fail(msg=resp["errmsg"])
else:
self.module.debug("Group doesn't exist")
def is_changed(self):
"""Return true if the host doesn't match
the LogicMonitor account"""
self.module.debug("Running Hostgroup.is_changed...")
ignore = []
group = self.get_group(self.fullpath)
properties = self.get_properties()
if properties is not None and group is not None:
self.module.debug("Comparing simple group properties")
if (group["alertEnable"] != self.alertenable or
group["description"] != self.description):
return True
p = {}
self.module.debug("Creating list of properties")
for prop in properties:
if prop["name"] not in ignore:
if ("*******" in prop["value"] and
self._verify_property(prop["name"])):
p[prop["name"]] = (
self.properties[prop["name"]])
else:
p[prop["name"]] = prop["value"]
self.module.debug("Comparing properties")
if set(p) != set(self.properties):
return True
else:
self.module.debug("No property information received")
return False
def sdt(self, duration=30, starttime=None):
"""Create a scheduled down time
(maintenance window) for this host"""
self.module.debug("Running Hostgroup.sdt")
self.module.debug("System changed")
self.change = True
if self.check_mode:
self.exit(changed=True)
duration = self.duration
starttime = self.starttime
offset = starttime
if starttime:
self.module.debug("Start time specified")
start = datetime.datetime.strptime(starttime, '%Y-%m-%d %H:%M')
offsetstart = start
else:
self.module.debug("No start time specified. Using default.")
start = datetime.datetime.utcnow()
# Use user UTC offset
self.module.debug("Making RPC call to 'getTimeZoneSetting'")
accountresp = json.loads(self.rpc("getTimeZoneSetting", {}))
if accountresp["status"] == 200:
self.module.debug("RPC call succeeded")
offset = accountresp["data"]["offset"]
offsetstart = start + datetime.timedelta(0, offset)
else:
self.fail(
msg="Error: Unable to retrieve timezone offset")
offsetend = offsetstart + datetime.timedelta(0, int(duration)*60)
h = {"hostGroupId": self.info["id"],
"type": 1,
"year": offsetstart.year,
"month": offsetstart.month-1,
"day": offsetstart.day,
"hour": offsetstart.hour,
"minute": offsetstart.minute,
"endYear": offsetend.year,
"endMonth": offsetend.month-1,
"endDay": offsetend.day,
"endHour": offsetend.hour,
"endMinute": offsetend.minute}
self.module.debug("Making RPC call to setHostGroupSDT")
resp = json.loads(self.rpc("setHostGroupSDT", h))
if resp["status"] == 200:
self.module.debug("RPC call succeeded")
return resp["data"]
else:
self.module.debug("RPC call failed")
self.fail(msg=resp["errmsg"])
def site_facts(self):
"""Output current properties information for the Hostgroup"""
self.module.debug("Running Hostgroup.site_facts...")
if self.info:
self.module.debug("Group exists")
props = self.get_properties(True)
self.output_info(props)
else:
self.fail(msg="Error: Group doesn't exit.")
def _build_host_group_hash(self,
fullpath,
description,
properties,
alertenable):
"""Return a property formated hash for the
creation of a hostgroup using the rpc function"""
self.module.debug("Running Hostgroup._build_host_hash")
h = {}
h["alertEnable"] = alertenable
if fullpath == "/":
self.module.debug("Group is root")
h["id"] = 1
else:
self.module.debug("Determining group path")
parentpath, name = fullpath.rsplit('/', 1)
parent = self.get_group(parentpath)
h["name"] = name
if parent:
self.module.debug("Parent group " +
str(parent["id"]) + " found.")
h["parentID"] = parent["id"]
else:
self.module.debug("No parent group found. Using root.")
h["parentID"] = 1
if description:
self.module.debug("Description property exists")
h["description"] = description
if properties != {}:
self.module.debug("Properties hash exists")
propnum = 0
for key, value in properties.items():
h["propName" + str(propnum)] = key
h["propValue" + str(propnum)] = value
propnum = propnum + 1
return h
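    # Illustrative example (values are hypothetical, not from this module): passing
    # properties={"snmp.community": "public"} adds "propName0"/"propValue0" entries
    # to the returned hash alongside "name"/"parentID", i.e. the flattened
    # key/value layout this helper produces for the host group RPC calls.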
def _verify_property(self, propname):
"""Check with LogicMonitor server
to verify property is unchanged"""
self.module.debug("Running Hostgroup._verify_property")
if self.info:
self.module.debug("Group exists")
if propname not in self.properties:
self.module.debug("Property " + propname + " does not exist")
return False
else:
self.module.debug("Property " + propname + " exists")
h = {"hostGroupId": self.info["id"],
"propName0": propname,
"propValue0": self.properties[propname]}
self.module.debug("Making RCP call to 'verifyProperties'")
resp = json.loads(self.rpc('verifyProperties', h))
if resp["status"] == 200:
self.module.debug("RPC call succeeded")
return resp["data"]["match"]
else:
self.fail(
msg="Error: unable to get verification " +
"from server.\n%s" % resp["errmsg"])
else:
self.fail(
msg="Error: Group doesn't exist. Unable to verify properties")
def selector(module):
"""Figure out which object and which actions
to take given the right parameters"""
if module.params["target"] == "collector":
target = Collector(module.params, module)
elif module.params["target"] == "host":
# Make sure required parameter collector is specified
if ((module.params["action"] == "add" or
module.params["displayname"] is None) and
module.params["collector"] is None):
module.fail_json(
msg="Parameter 'collector' required.")
target = Host(module.params, module)
elif module.params["target"] == "datasource":
# Validate target specific required parameters
if module.params["id"] is not None:
# make sure a supported action was specified
if module.params["action"] == "sdt":
target = Datasource(module.params, module)
else:
errmsg = ("Error: Unexpected action \"" +
module.params["action"] + "\" was specified.")
module.fail_json(msg=errmsg)
elif module.params["target"] == "hostgroup":
# Validate target specific required parameters
if module.params["fullpath"] is not None:
target = Hostgroup(module.params, module)
else:
module.fail_json(
msg="Parameter 'fullpath' required for target 'hostgroup'")
else:
module.fail_json(
msg="Error: Unexpected target \"" + module.params["target"] +
"\" was specified.")
if module.params["action"].lower() == "add":
action = target.create
elif module.params["action"].lower() == "remove":
action = target.remove
elif module.params["action"].lower() == "sdt":
action = target.sdt
elif module.params["action"].lower() == "update":
action = target.update
else:
errmsg = ("Error: Unexpected action \"" + module.params["action"] +
"\" was specified.")
module.fail_json(msg=errmsg)
action()
module.exit_json(changed=target.change)
def main():
TARGETS = [
"collector",
"host",
"datasource",
"hostgroup"]
ACTIONS = [
"add",
"remove",
"sdt",
"update"]
module = AnsibleModule(
argument_spec=dict(
target=dict(required=True, default=None, choices=TARGETS),
action=dict(required=True, default=None, choices=ACTIONS),
company=dict(required=True, default=None),
user=dict(required=True, default=None),
password=dict(required=True, default=None, no_log=True),
collector=dict(required=False, default=None),
hostname=dict(required=False, default=None),
displayname=dict(required=False, default=None),
id=dict(required=False, default=None),
description=dict(required=False, default=""),
fullpath=dict(required=False, default=None),
starttime=dict(required=False, default=None),
duration=dict(required=False, default=30),
properties=dict(required=False, default={}, type="dict"),
groups=dict(required=False, default=[], type="list"),
alertenable=dict(required=False, default="true", choices=BOOLEANS)
),
supports_check_mode=True
)
if HAS_LIB_JSON is not True:
module.fail_json(msg="Unable to load JSON library")
selector(module)
from ansible.module_utils.basic import *
from ansible.module_utils.urls import *
from ansible.module_utils.urls import open_url
if __name__ == "__main__":
main()
| 35.20092 | 308 | 0.552807 |
acf7370e43e99168fed8031297b7ec1642bc30ff | 2,603 | py | Python | cryptography.py | anrago32/cryptography | d65cb77dcc6643fd9ef1c572438aefeb99ed5006 | [
"MIT"
] | 3 | 2020-04-16T19:34:11.000Z | 2020-11-30T02:43:29.000Z | cryptography.py | anrago32/cryptography | d65cb77dcc6643fd9ef1c572438aefeb99ed5006 | [
"MIT"
] | null | null | null | cryptography.py | anrago32/cryptography | d65cb77dcc6643fd9ef1c572438aefeb99ed5006 | [
"MIT"
] | null | null | null | # Cryptography
# Basic RSA Implementation Written in Python
# Written by Antony Gordon, Alex Rago, Nick Rock, 2020
from random import randrange
# Simulated network node
class Node():
def __init__(self, name, p, q):
n, phi = p * q, (p - 1) * (q - 1)
e = generate_public_key(phi)
d = generate_private_key(phi, e)
self.name = name
self.public_key = (n, e)
self.private_key = (n, d)
# Euclidean algorithm for greatest common divisor
def gcd(a, b):
while b != 0:
a, b = b, a % b
return a
# Algorithm to generate relatively prime public key
def generate_public_key(phi):
e = randrange(1, phi)
while gcd(phi, e) != 1:
e = randrange(1, phi)
return e
# Extended euclidean algorithm for modular inverse
def generate_private_key(phi, e):
d, d_next = 0, 1
r, r_next = phi, e
while r != 1:
quotient = r // r_next
d, d_next = d_next, d - d_next * quotient
r, r_next = r_next, r - r_next * quotient
return d % phi
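# Worked example (illustrative numbers only): for phi = 20 and e = 3 the loop
# returns d = 7, since (3 * 7) % 20 == 1, i.e. d is the modular inverse of e mod phi.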
# Convert from string to ASCII
def from_string(sequence):
sequence = [ord(c) for c in sequence]
return sequence
# Convert to string from ASCII
def to_string(sequence):
sequence = [chr(c) for c in sequence]
return "".join(sequence)
# Encode with public key of recipient
def encrypt(sequence, recipient):
(n, e) = recipient.public_key
sequence = [pow(c, e, n) for c in sequence]
return sequence
# Decode with private key of recipient
def decrypt(sequence, recipient):
(n, d) = recipient.private_key
sequence = [pow(c, d, n) for c in sequence]
return sequence
# Simulate encrypted network transmission
def simulate_transmission(message, recipient):
print("\n>> \"" + message + "\" Sent to " + recipient.name + "\n")
message = from_string(message)
print("\tOriginal Sequence:\n\t" + str(message))
message = encrypt(message, recipient)
print("\tEncrypted Sequence:\n\t" + str(message) + "\n")
print(">> Sequence Transmitted (\"" + to_string(message) + "\")\n")
print("\tReceived Sequence:\n\t" + str(message))
message = decrypt(message, recipient)
print("\tDecrypted Sequence:\n\t" + str(message) + "\n")
message = to_string(message)
print(">> \"" + message + "\" Received by " + recipient.name + "\n")
def main():
# Nodes with 8-bit primes
node1 = Node("Alice", 211, 163)
node2 = Node("Bob", 113, 199)
# Simulated transmissions
simulate_transmission("Hello, World!", node1)
simulate_transmission("Secret Message", node2)
if __name__ == "__main__":
main()
| 29.91954 | 72 | 0.639262 |
acf738898bfea65baf0b7ab1b8df1cff6bdc2798 | 7,884 | py | Python | django_workflow_system/api/serializers/workflows/collection.py | eikonomega/django-workflow-system | dc0e8807263266713d3d7fa46e240e8d72db28d1 | [
"MIT"
] | 2 | 2022-01-28T12:35:42.000Z | 2022-03-23T16:06:05.000Z | django_workflow_system/api/serializers/workflows/collection.py | eikonomega/django-workflow-system | dc0e8807263266713d3d7fa46e240e8d72db28d1 | [
"MIT"
] | 10 | 2021-04-27T20:26:32.000Z | 2021-07-21T15:34:31.000Z | django_workflow_system/api/serializers/workflows/collection.py | eikonomega/django-workflow-system | dc0e8807263266713d3d7fa46e240e8d72db28d1 | [
"MIT"
] | 1 | 2021-11-13T14:30:34.000Z | 2021-11-13T14:30:34.000Z | from rest_framework import serializers
from rest_framework.reverse import reverse
from .author import WorkflowAuthorSummarySerializer
from .workflow import WorkflowTerseSerializer, ChildWorkflowDetailedSerializer
from ..utils import get_images_helper
from ....models import (
WorkflowCollectionMember,
WorkflowCollection,
WorkflowCollectionEngagement,
)
class WorkflowCollectionMemberSummarySerializer(serializers.ModelSerializer):
"""
Summary level serializer for WorkflowCollectionMember objects.
"""
workflow = WorkflowTerseSerializer()
class Meta:
model = WorkflowCollectionMember
fields = (
"order",
"workflow",
)
class WorkflowCollectionMemberDetailedSerializer(serializers.ModelSerializer):
"""
Summary level serializer for WorkflowCollectionMember objects, but with steps.
"""
workflow = ChildWorkflowDetailedSerializer()
class Meta:
model = WorkflowCollectionMember
fields = (
"order",
"workflow",
)
class WorkflowCollectionBaseSerializer(serializers.ModelSerializer):
"""Summary level serializer for WorkflowCollection objects."""
authors = serializers.SerializerMethodField()
metadata = serializers.SerializerMethodField()
newer_version = serializers.SerializerMethodField()
images = serializers.SerializerMethodField()
dependencies_completed = serializers.SerializerMethodField()
def get_authors(self, instance):
"""
Method to get data for the 'authors' field.
Returns a list of the Authors for all Workflows linked to a
WorkflowCollection in JSON format.
Parameters:
instance (WorkflowCollection object)
Returns:
List of Author objects in JSON format.
"""
return get_authors_helper(self.context["request"], instance)
def get_images(self, instance):
"""
Method to build an object for each corresponding Image.
Parameters:
instance (WorkflowCollection object)
Returns:
List of Image objects in JSON format.
"""
return get_images_helper(
self.context.get("request"), instance.workflowcollectionimage_set.all()
)
def get_metadata(self, instance):
"""
Method to build metadata hierarchy.
"""
return get_metadata_helper(instance)
def get_newer_version(self, obj: WorkflowCollection):
latest_version = (
WorkflowCollection.objects.filter(code=obj.code, active=True)
.order_by("version")
.last()
)
        if latest_version is None:
return None
if obj != latest_version:
relative_url = reverse(
"workflow-collection", kwargs={"id": latest_version.id}
)
return self.context["request"].build_absolute_uri(relative_url)
else:
return None
def get_dependencies_completed(self, instance):
"""Determine if collection dependencies are fullfilled."""
request = self.context.get("request")
status = False
# If there are no dependencies, we just say they are completed.
if not instance.collection_dependencies.all():
status = True
else:
# Determine if there is at least one complete engagement
# for each of the dependencies.
dependency_engagement_records = [
WorkflowCollectionEngagement.objects.filter(
user=request.user,
workflow_collection=dependency,
finished__isnull=False,
)
for dependency in instance.collection_dependencies.all()
]
status = all(dependency_engagement_records)
return status
class WorkflowCollectionSummarySerializer(WorkflowCollectionBaseSerializer):
"""
Summary level serializer for WorkflowCollection objects.
"""
detail = serializers.HyperlinkedIdentityField(
view_name="workflow-collection", lookup_field="id"
)
class Meta:
model = WorkflowCollection
fields = (
"id",
"detail",
"code",
"version",
"active",
"created_date",
"modified_date",
"description",
"assignment_only",
"recommendable",
"name",
"ordered",
"authors",
"images",
"category",
"metadata",
"newer_version",
"dependencies_completed",
)
class WorkflowCollectionDetailedSerializer(WorkflowCollectionBaseSerializer):
"""
Detailed level serializer for WorkflowCollection objects.
"""
workflowcollectionmember_set = WorkflowCollectionMemberSummarySerializer(many=True)
self_detail = serializers.HyperlinkedIdentityField(
view_name="workflow-collection", lookup_field="id"
)
class Meta:
model = WorkflowCollection
fields = (
"self_detail",
"id",
"code",
"version",
"active",
"created_date",
"modified_date",
"description",
"assignment_only",
"recommendable",
"name",
"ordered",
"workflowcollectionmember_set",
"authors",
"images",
"category",
"metadata",
"newer_version",
"dependencies_completed",
)
class WorkflowCollectionWithStepsSerializer(WorkflowCollectionBaseSerializer):
"""
Detailed level serializer for WorkflowCollection objects, but with steps.
"""
workflowcollectionmember_set = WorkflowCollectionMemberDetailedSerializer(many=True)
self_detail = serializers.HyperlinkedIdentityField(
view_name="workflow-collection", lookup_field="id"
)
class Meta:
model = WorkflowCollection
fields = (
"self_detail",
"id",
"code",
"version",
"active",
"created_date",
"modified_date",
"description",
"assignment_only",
"recommendable",
"name",
"ordered",
"workflowcollectionmember_set",
"authors",
"images",
"category",
"metadata",
"newer_version",
)
def get_authors_helper(request, instance):
"""
Helper method for gathering a list of the Authors for all Workflows
linked to a WorkflowCollection in JSON format.
Parameters:
request : self.context['request']
instance : WorkflowCollection object
Returns:
List of Author objects in JSON format.
"""
authors = []
for member in instance.workflowcollectionmember_set.all():
authors.append(
WorkflowAuthorSummarySerializer(
member.workflow.author, context={"request": request}
).data
)
# Ensure that the id's for all authors are unique to avoid duplicate
# entries
# Here we're making a dict with the key being the id. This filters out the duplicates.
# The values() of the dict will be make up the list
return list({author["id"]: author for author in authors}.values())
def get_metadata_helper(instance):
"""
Helper method for gathering collection metadata
Parameters:
instance : WorkflowCollection object
Returns:
List of Lists of Metadata associated with the Collection
"""
metadata_list = []
for hierarchy in instance.metadata.all():
metadata_list.append(hierarchy.group_hierarchy)
return metadata_list
| 28.773723 | 90 | 0.613394 |
acf739c41492fd3949109c2b36029911472282d9 | 383 | py | Python | main.py | couto0/life | 5cf0363c2d0c8dbde3517164628c71589b2279f3 | [
"MIT"
] | null | null | null | main.py | couto0/life | 5cf0363c2d0c8dbde3517164628c71589b2279f3 | [
"MIT"
] | null | null | null | main.py | couto0/life | 5cf0363c2d0c8dbde3517164628c71589b2279f3 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
import os
from src.screen import Screen
from src.life import Life
import time
def main():
w, h = (80, 40)
N = 1000
screen = Screen(w, h)
life = Life(w, h)
life.import_frame('examples/amogus.txt')
for i in range(N):
screen.refresh(life.frame)
life.tick()
time.sleep(0.01)
if __name__ == "__main__":
main()
| 18.238095 | 44 | 0.603133 |
acf73a0f1588413a637cab50fb66d1e0bc6ebfb5 | 4,090 | py | Python | tests/integration/models/test_datarange1d.py | jeisch/bokeh | 6be4d5ebbec04117f2bb0693fe64dc664f8f1bb1 | [
"BSD-3-Clause"
] | 1 | 2020-03-21T04:11:51.000Z | 2020-03-21T04:11:51.000Z | tests/integration/models/test_datarange1d.py | jeisch/bokeh | 6be4d5ebbec04117f2bb0693fe64dc664f8f1bb1 | [
"BSD-3-Clause"
] | 2 | 2021-05-08T11:43:21.000Z | 2021-05-10T19:16:43.000Z | tests/integration/models/test_datarange1d.py | jeisch/bokeh | 6be4d5ebbec04117f2bb0693fe64dc664f8f1bb1 | [
"BSD-3-Clause"
] | null | null | null | #-----------------------------------------------------------------------------
# Copyright (c) 2012 - 2017, Anaconda, Inc. All rights reserved.
#
# Powered by the Bokeh Development Team.
#
# The full license is in the file LICENSE.txt, distributed with this software.
#-----------------------------------------------------------------------------
#-----------------------------------------------------------------------------
# Boilerplate
#-----------------------------------------------------------------------------
import pytest ; pytest
#-----------------------------------------------------------------------------
# Imports
#-----------------------------------------------------------------------------
# Standard library imports
# External imports
# Bokeh imports
from bokeh.layouts import column
from bokeh.models import Button, Circle, ColumnDataSource, CustomAction, CustomJS, DataRange1d, Plot
from bokeh._testing.util.selenium import RECORD
#-----------------------------------------------------------------------------
# Tests
#-----------------------------------------------------------------------------
pytest_plugins = (
"bokeh._testing.plugins.bokeh",
)
def _make_plot(**kw):
source = ColumnDataSource(dict(x=[1, 2], y1=[0, 1], y2=[10,11]))
plot = Plot(plot_height=400, plot_width=400, x_range=DataRange1d(), y_range=DataRange1d(**kw), min_border=0)
plot.add_glyph(source, Circle(x='x', y='y1'))
glyph = plot.add_glyph(source, Circle(x='x', y='y2'))
glyph.visible = False
code = RECORD("yrstart", "p.y_range.start") + RECORD("yrend", "p.y_range.end")
plot.add_tools(CustomAction(callback=CustomJS(args=dict(p=plot), code=code)))
plot.toolbar_sticky = False
return plot, glyph
@pytest.mark.integration
@pytest.mark.selenium
class Test_DataRange1d(object):
def test_includes_hidden_glyphs_by_default(self, single_plot_page):
plot, glyph = _make_plot()
page = single_plot_page(plot)
page.click_custom_action()
results = page.results
assert results['yrstart'] <= 0
assert results['yrend'] >= 11
assert page.has_no_console_errors()
def test_includes_hidden_glyphs_when_asked(self, single_plot_page):
plot, glyph = _make_plot(only_visible=False)
page = single_plot_page(plot)
page.click_custom_action()
results = page.results
assert results['yrstart'] <= 0
assert results['yrend'] >= 11
assert page.has_no_console_errors()
def test_excludes_hidden_glyphs_when_asked(self, single_plot_page):
plot, glyph = _make_plot(only_visible=True)
page = single_plot_page(plot)
page.click_custom_action()
results = page.results
assert results['yrstart'] <= 0
assert results['yrend'] < 5
assert page.has_no_console_errors()
def test_updates_when_visibility_is_toggled(self, single_plot_page):
source = ColumnDataSource(dict(x=[1, 2], y1=[0, 1], y2=[10,11]))
plot = Plot(plot_height=400, plot_width=400, x_range=DataRange1d(), y_range=DataRange1d(only_visible=True), min_border=0)
plot.add_glyph(source, Circle(x='x', y='y1'))
glyph = plot.add_glyph(source, Circle(x='x', y='y2'))
code = RECORD("yrstart", "p.y_range.start") + RECORD("yrend", "p.y_range.end")
plot.add_tools(CustomAction(callback=CustomJS(args=dict(p=plot), code=code)))
plot.toolbar_sticky = False
button = Button(css_classes=['foo'])
button.js_on_click(CustomJS(args=dict(glyph=glyph), code="glyph.visible=false"))
page = single_plot_page(column(plot, button))
page.click_custom_action()
results = page.results
assert results['yrstart'] <= 0
assert results['yrend'] >= 11
button = page.driver.find_element_by_css_selector('.foo .bk-btn')
button.click()
page.click_custom_action()
results = page.results
assert results['yrstart'] <= 0
assert results['yrend'] < 5
assert page.has_no_console_errors()
| 34.369748 | 129 | 0.568949 |
acf73a2be3c7c5fb2fb36cec29882e60593acea7 | 5,338 | py | Python | reassign/entities.py | Fabulani/thesis-reassign | d29b7468839faef55d5259d4b272dca488b8056e | [
"MIT"
] | null | null | null | reassign/entities.py | Fabulani/thesis-reassign | d29b7468839faef55d5259d4b272dca488b8056e | [
"MIT"
] | null | null | null | reassign/entities.py | Fabulani/thesis-reassign | d29b7468839faef55d5259d4b272dca488b8056e | [
"MIT"
] | null | null | null |
from collections import Counter
DEFAULT_BODY = {
"function": {},
"component": {},
"variable": {},
"parameter": {}
}
class Header:
def __init__(self, _id: str, name: str, description: str, style: str, revision: str):
self.id = _id
self.name = name
self.description = description
self.style = style
self.revision = revision
# class ComponentBody:
# def __init__(self, function: dict, component: dict, variable: dict, parameter: dict):
# self.function = function
# self.component = component
# self.variable = variable
# self.parameter = parameter
# class ParameterBody:
# def __init__(self, value: str, dimension: str, range: str, datatype: str, unit: str):
# self.value = value
# self.dimension = dimension
# self.range = range,
# self.datatype = datatype,
# self.unit = unit
# class VariableBody:
# def __init__(self, value: str, dimension: str, range: str, datatype: str, unit: str, uncertainty: str, timestamp: str):
# self.value = value
# self.dimension = dimension
# self.range = range,
# self.datatype = datatype,
# self.unit = unit
# self.uncertainty = uncertainty
# self.timestamp = timestamp
# class AimfreeMetadata:
# def __init__(self, data: dict):
# self.data = data
class AimfreeMetadata:
def __init__(self, header: Header, body: dict = DEFAULT_BODY):
self.header = header
self.body = body
def add_to_body(self, section: str, key, header, body):
""" Add an object to a body section
`section`: either function, component, variable, or parameter.
`key`: the key of the object to be added.
`header`: the header of the object to be added.
`body`: the body of the object to be added.
"""
        # Store the entry as a dict keyed by "header"/"body" so callers can index into it
        self.body[section][key] = {"header": header, "body": body}
class Capability(AimfreeMetadata):
def __init__(self, header: Header, cost: float, process_time: int, body: dict = DEFAULT_BODY):
super().__init__(header, body)
self.operating_cost = cost
self.process_time = process_time
class Resource(AimfreeMetadata):
def __init__(self, header: Header, status: str, capabilities: set, cost: float, body: dict = DEFAULT_BODY):
super().__init__(header, body)
self.status = status # BREAKDOWN, IN-USE, etc (check what was defined)
self.cost = cost
self.capabilities = set()
for c in capabilities:
self.capabilities.add(c)
class Station(AimfreeMetadata):
def __init__(self, header: Header, resources: list, body: dict = DEFAULT_BODY):
super().__init__(header, body)
self.resources = []
self.capabilities = set() # Set of unique capabilities
for r in resources:
self.resources.append(r)
if r.status != "BREAKDOWN": # Avoid resources on breakdown
self.capabilities.update(r.capabilities)
self.cost = self.get_costs()
self.num_capabilities = len(self.capabilities)
self.num_resources = len(self.resources)
def get_costs(self):
# FIXME
# costs = {
# "aquisition": 0,
# "operating": 0
# }
# for r in self.resources:
# costs["aquisition"] += r.buy_cost
# for c in r.capabilities:
# costs["operating"] += c.operating_cost
return 1
class Product(AimfreeMetadata):
def __init__(self, header: Header, requirements: list[str], body: dict = DEFAULT_BODY):
super().__init__(header, body)
self.requirements = requirements
class Order(AimfreeMetadata):
def __init__(self, header: Header, product: Product, quantity: int, body: dict = DEFAULT_BODY):
super().__init__(header, body)
self.product = product
self.quantity = quantity
self.requirements = product.requirements
# class ProcessStep(AimfreeMetadata):
# def __init__(self, header: Header, required_capabilities: list, body: dict = DEFAULT_BODY):
# super().__init__(header, body)
# # self.required_capability = self.body["parameter"]["capability"]["body"]["value"]
# self.required_capabilities = required_capabilities
class Agv(AimfreeMetadata):
def __init__(self, status: str, header: Header, body: dict = DEFAULT_BODY):
super().__init__(header, body)
self.status = status
class AssemblyState(AimfreeMetadata):
def __init__(self, header: Header, stations: list[Station], orders: list[Order], agvs: list[Agv], body: dict = DEFAULT_BODY):
super().__init__(header, body)
self.stations = stations
self.orders = orders
self.agvs = agvs
self.requirements = self.get_requirements()
self.capabilities = self.get_capabilities()
def get_requirements(self):
# Get current order requirements
requirements = []
for o in self.orders:
requirements.extend(o.requirements)
return Counter(requirements)
def get_capabilities(self):
# Get current available capabilities
capabilities = []
for s in self.stations:
capabilities.extend(s.capabilities)
return Counter(capabilities)
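# Minimal usage sketch (hypothetical data, not part of the module): because both
# requirements and capabilities are Counters, unmet needs can be read off by
# subtraction, e.g. Counter({"drill": 2}) - Counter({"drill": 1}) == Counter({"drill": 1}).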
| 33.15528 | 129 | 0.627014 |
acf73a8732b8066b714619716105d1992fe8efcc | 273 | py | Python | backend/app/db/base_class.py | ianahart/blog | fc52e15a8b56bd4c6482065de7e21f8b31f5d765 | [
"MIT"
] | null | null | null | backend/app/db/base_class.py | ianahart/blog | fc52e15a8b56bd4c6482065de7e21f8b31f5d765 | [
"MIT"
] | null | null | null | backend/app/db/base_class.py | ianahart/blog | fc52e15a8b56bd4c6482065de7e21f8b31f5d765 | [
"MIT"
] | null | null | null | import typing as t
from sqlalchemy.ext.declarative import as_declarative, declared_attr
class_registry: t.Dict = {}
@as_declarative(class_registry=class_registry)
class Base:
id: t.Any
__name__: str
    @declared_attr
    def __tablename__(cls) -> str:
        # Derive the table name automatically from the class name
        return cls.__name__.lower()
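# Example: a subclass named "User" is mapped to the table name "user".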
| 19.5 | 53 | 0.725275 |
acf73aff015f2ae4ef25daabd8db7fbee5accd5e | 31,546 | py | Python | tests/fixtures.py | mgor/grizzly | cbcb1b8b44682330f82bb4d24904fb6601b6f1b0 | [
"MIT"
] | null | null | null | tests/fixtures.py | mgor/grizzly | cbcb1b8b44682330f82bb4d24904fb6601b6f1b0 | [
"MIT"
] | 9 | 2022-01-05T08:53:41.000Z | 2022-03-31T07:26:05.000Z | tests/fixtures.py | mgor/grizzly | cbcb1b8b44682330f82bb4d24904fb6601b6f1b0 | [
"MIT"
] | null | null | null | import inspect
import socket
import re
from typing import TYPE_CHECKING, Optional, Union, Callable, Any, Literal, List, Tuple, Type, Dict, cast
from types import TracebackType
from unittest.mock import MagicMock
from urllib.parse import urlparse
from mypy_extensions import VarArg, KwArg
from os import chdir, environ, getcwd, path
from shutil import rmtree
from json import dumps as jsondumps
from pathlib import Path
from textwrap import dedent, indent
from hashlib import sha1
import gevent
from locust.clients import ResponseContextManager
from locust.contrib.fasthttp import FastResponse, FastRequest
from locust.env import Environment
from locust.runners import Runner
from geventhttpclient.header import Headers
from geventhttpclient.response import HTTPSocketPoolResponse
from _pytest.tmpdir import TempPathFactory
from pytest_mock.plugin import MockerFixture
from paramiko.transport import Transport
from paramiko.channel import Channel
from paramiko.sftp import BaseSFTP
from paramiko.sftp_client import SFTPClient
from behave.runner import Context as BehaveContext, Runner as BehaveRunner
from behave.model import Scenario, Step, Background, Feature
from behave.configuration import Configuration
from behave.step_registry import registry as step_registry
from requests.models import CaseInsensitiveDict, Response, PreparedRequest
from grizzly.types import GrizzlyResponseContextManager, RequestMethod
from grizzly.tasks import RequestTask
from grizzly.testdata.variables import destroy_variables
from grizzly.context import GrizzlyContext, GrizzlyContextScenario
from .helpers import TestUser, TestScenario, RequestSilentFailureEvent
from .helpers import onerror, run_command
from .app import app
if TYPE_CHECKING:
from grizzly.users.base import GrizzlyUser
from grizzly.scenarios import GrizzlyScenario
__all__ = [
'AtomicVariableCleanupFixture',
'LocustFixture',
'ParamikoFixture',
'BehaveFixture',
'RequestTaskFixture',
'GrizzlyFixture',
'NoopZmqFixture',
]
class AtomicVariableCleanupFixture:
def __call__(self) -> None:
try:
GrizzlyContext.destroy()
except:
pass
destroy_variables()
try:
del environ['GRIZZLY_CONTEXT_ROOT']
except KeyError:
pass
class LocustFixture:
_test_context_root: str
_tmp_path_factory: TempPathFactory
env: Environment
runner: Runner
def __init__(self, tmp_path_factory: TempPathFactory) -> None:
self._tmp_path_factory = tmp_path_factory
def __enter__(self) -> 'LocustFixture':
test_context = self._tmp_path_factory.mktemp('test_context') / 'requests'
test_context.mkdir()
self._test_context_root = path.dirname(test_context)
environ['GRIZZLY_CONTEXT_ROOT'] = self._test_context_root
self.env = Environment()
self.runner = Runner(
self.env,
)
return self
def __exit__(
self,
exc_type: Optional[Type[BaseException]],
exc: Optional[BaseException],
traceback: Optional[TracebackType],
) -> Literal[True]:
try:
del environ['GRIZZLY_CONTEXT_ROOT']
except KeyError:
pass
rmtree(self._test_context_root)
return True
class ParamikoFixture:
mocker: MockerFixture
def __init__(self, mocker: MockerFixture) -> None:
self.mocker = mocker
def __call__(self) -> None:
# unable to import socket.AddressFamily and socket.SocketKind ?!
def _socket_getaddrinfo(
hostname: Union[bytearray, bytes, str, None], port: Union[str, int, None], addrfamily: int, kind: int
) -> List[Tuple[int, int, Optional[int], Optional[str], Optional[Tuple[str, int]]]]:
return [(socket.AF_INET, socket.SOCK_STREAM, None, None, None, )]
def _socket_connect(self: socket.socket, address: Any) -> None:
pass
def _start_client(transport: Transport, event: Optional[Any] = None, timeout: Optional[Any] = None) -> None:
transport.active = True
def _auth_password(transport: Transport, username: str, password: Optional[str], event: Optional[Any] = None, fallback: Optional[bool] = True) -> List[str]:
return []
def _is_authenticated(transport: Transport) -> Literal[True]:
return True
def __send_version(base_sftp: BaseSFTP) -> str:
return '2.0-grizzly'
def _open_session(transport: Transport, window_size: Optional[int] = None, max_packet_size: Optional[int] = None, timeout: Optional[int] = None) -> Channel:
return Channel(1)
def _from_transport(transport: Transport, window_size: Optional[int] = None, max_packet_size: Optional[int] = None) -> SFTPClient:
channel = _open_session(transport)
setattr(channel, 'transport', transport)
return SFTPClient(channel)
def _sftpclient_close(sftp_client: SFTPClient) -> None:
pass
def _transport_close(transport: Transport) -> None:
pass
def _get(sftp_client: SFTPClient, remotepath: str, localpath: str, callback: Optional[Callable[[VarArg(Any), KwArg(Any)], None]] = None) -> None:
if callback is not None:
callback(100, 1000)
def _put(
sftp_client: SFTPClient, localpath: str, remotepath: str, callback: Optional[Callable[[VarArg(Any), KwArg(Any)], None]] = None, confirm: Optional[bool] = True,
) -> None:
if callback is not None:
callback(100, 1000)
self.mocker.patch(
'paramiko.transport.socket.getaddrinfo',
_socket_getaddrinfo,
)
self.mocker.patch(
'paramiko.transport.socket.socket.connect',
_socket_connect,
)
self.mocker.patch(
'paramiko.transport.Transport.is_authenticated',
_is_authenticated,
)
self.mocker.patch(
'paramiko.transport.Transport.start_client',
_start_client,
)
self.mocker.patch(
'paramiko.transport.Transport.auth_password',
_auth_password,
)
self.mocker.patch(
'paramiko.transport.Transport.close',
_transport_close,
)
self.mocker.patch(
'paramiko.sftp.BaseSFTP._send_version',
__send_version,
)
self.mocker.patch(
'paramiko.sftp_client.SFTPClient.from_transport',
_from_transport,
)
self.mocker.patch(
'paramiko.sftp_client.SFTPClient.close',
_sftpclient_close,
)
self.mocker.patch(
'paramiko.sftp_client.SFTPClient.put',
_put,
)
self.mocker.patch(
'paramiko.sftp_client.SFTPClient.get',
_get,
)
print('patched paramiko')
class BehaveFixture:
_locust_fixture: LocustFixture
context: BehaveContext
def __init__(self, locust_fixture: LocustFixture) -> None:
self._locust_fixture = locust_fixture
@property
def grizzly(self) -> GrizzlyContext:
return cast(GrizzlyContext, self.context.grizzly)
def create_scenario(self, name: str) -> Scenario:
return Scenario(filename=None, line=None, keyword='', name=name)
def __enter__(self) -> 'BehaveFixture':
runner = BehaveRunner(
config=Configuration(
command_args=[],
load_config=False,
)
)
context = BehaveContext(runner)
setattr(context, '_runner', runner) # to weakref
context.config.base_dir = '.'
context.scenario = Scenario(filename=None, line=None, keyword='', name='')
context.step = Step(filename=None, line=None, keyword='', step_type='step', name='')
context.scenario.steps = [context.step]
context.scenario.background = Background(filename=None, line=None, keyword='', steps=[context.step], name='')
context._runner.step_registry = step_registry
grizzly = GrizzlyContext()
grizzly.state.locust = self._locust_fixture.runner
setattr(context, 'grizzly', grizzly)
self.context = context
return self
def __exit__(
self,
exc_type: Optional[Type[BaseException]],
exc: Optional[BaseException],
traceback: Optional[TracebackType],
) -> Literal[True]:
try:
GrizzlyContext.destroy()
except ValueError:
pass
return True
REQUEST_TASK_TEMPLATE_CONTENTS = """{
"result": {
"id": "ID-{{ AtomicIntegerIncrementer.messageID }}",
"date": "{{ AtomicDate.now }}",
"variable": "{{ messageID }}",
"item": {
"description": "this is just a description"
}
}
}"""
class RequestTaskFixture:
_tmp_path_factory: TempPathFactory
context_root: str
relative_path: str
request: RequestTask
def __init__(self, tmp_path_factory: TempPathFactory) -> None:
self._tmp_path_factory = tmp_path_factory
def __enter__(self) -> 'RequestTaskFixture':
test_context = self._tmp_path_factory.mktemp('example_payload') / 'requests'
test_context.mkdir()
request_file = test_context / 'payload.j2.json'
request_file.touch()
request_file.write_text(REQUEST_TASK_TEMPLATE_CONTENTS)
request_path = path.dirname(str(request_file))
request = RequestTask(RequestMethod.POST, endpoint='/api/test', name='request_task')
request.source = REQUEST_TASK_TEMPLATE_CONTENTS
request.scenario = GrizzlyContextScenario(1)
request.scenario.name = 'test-scenario'
request.scenario.user.class_name = 'TestUser'
request.scenario.context['host'] = 'http://example.com'
request.scenario.behave = None
request.scenario.tasks.add(request)
self.context_root = request_path
self.request = request
self.relative_path = str(request_file).replace(f'{request_path}/', '')
return self
def __exit__(
self,
exc_type: Optional[Type[BaseException]],
exc: Optional[BaseException],
traceback: Optional[TracebackType],
) -> Literal[True]:
rmtree(path.dirname(self.context_root))
return True
class GrizzlyFixture:
request_task: RequestTaskFixture
grizzly: GrizzlyContext
behave: BehaveContext
locust_env: Environment
def __init__(self, request_task: RequestTaskFixture, behave_fixture: BehaveFixture) -> None:
self.request_task = request_task
self.behave = behave_fixture.context
def __enter__(self) -> 'GrizzlyFixture':
environ['GRIZZLY_CONTEXT_ROOT'] = path.abspath(path.join(self.request_task.context_root, '..'))
self.grizzly = GrizzlyContext()
self.grizzly.scenarios.append(self.request_task.request.scenario)
return self
def __call__(
self,
host: str = '',
user_type: Optional[Type['GrizzlyUser']] = None,
scenario_type: Optional[Type['GrizzlyScenario']] = None,
no_tasks: Optional[bool] = False,
) -> Tuple[Environment, 'GrizzlyUser', Optional['GrizzlyScenario']]:
if user_type is None:
user_type = TestUser
if scenario_type is None:
scenario_type = TestScenario
self.locust_env = Environment(
host=host,
user_classes=[user_type],
)
self.request_task.request.scenario.description = self.request_task.request.scenario.name
self.request_task.request.name = scenario_type.__name__
user_type.host = host
user_type._scenario = self.request_task.request.scenario
user = user_type(self.locust_env)
if not no_tasks:
user_type.tasks = [scenario_type]
scenario = scenario_type(parent=user)
else:
user_type.tasks = []
scenario = None
self.grizzly.state.locust = Runner(self.locust_env)
return self.locust_env, user, scenario
def __exit__(
self,
exc_type: Optional[Type[BaseException]],
exc: Optional[BaseException],
traceback: Optional[TracebackType],
) -> Literal[True]:
try:
del environ['GRIZZLY_CONTEXT_ROOT']
except KeyError:
pass
try:
GrizzlyContext.destroy()
except:
pass
return True
class ResponseContextManagerFixture:
# borrowed from geventhttpclient.client._build_request
def _build_request(self, method: str, request_url: str, body: Optional[str] = '', headers: Optional[Dict[str, Any]] = None) -> str:
parsed = urlparse(request_url)
request = method + ' ' + parsed.path + ' HTTP/1.1\r\n'
for field, value in (headers or {}).items():
request += field + ': ' + str(value) + '\r\n'
request += '\r\n'
return request
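    # For example, _build_request("GET", "http://host/api/test") returns
    # "GET /api/test HTTP/1.1\r\n\r\n" (no header lines), mimicking the request
    # string geventhttpclient would have sent.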
def __call__(
self,
cls_rcm: Type[GrizzlyResponseContextManager],
status_code: int,
environment: Optional[Environment] = None,
response_body: Optional[Any] = None,
response_headers: Optional[Dict[str, Any]] = None,
request_method: Optional[str] = None,
request_body: Optional[Any] = None,
request_headers: Optional[Dict[str, Any]] = None,
url: Optional[str] = None,
**kwargs: Dict[str, Any],
) -> GrizzlyResponseContextManager:
name = kwargs['name']
event: Any
if environment is not None:
event = RequestSilentFailureEvent(False)
else:
event = None
if cls_rcm is ResponseContextManager:
response = Response()
if response_headers is not None:
response.headers = CaseInsensitiveDict(**response_headers)
if response_body is not None:
response._content = jsondumps(response_body).encode('utf-8')
response.status_code = status_code
response.request = PreparedRequest()
if request_headers is not None:
response.request.headers = CaseInsensitiveDict(**request_headers)
if request_body is not None:
response.request.body = request_body.encode('utf-8')
response.request.method = (request_method or 'GET').lower()
if url is not None:
response.url = response.request.url = url
else:
_build_request = self._build_request
request_url = url
class FakeGhcResponse(HTTPSocketPoolResponse):
_headers_index: Optional[Headers]
_sent_request: str
_sock: Any
def __init__(self) -> None:
self._headers_index = None
self._sent_request = _build_request(
request_method or '',
request_url or '',
body=jsondumps(request_body or ''),
headers=request_headers,
)
self._sock = None
def get_code(self) -> int:
return status_code
request = FastRequest(url, method=request_method, headers=Headers(), payload=request_body)
for key, value in (request_headers or {}).items():
request.headers.add(key, value)
response = FastResponse(FakeGhcResponse(), request)
response.headers = Headers()
for key, value in (response_headers or {}).items():
response.headers.add(key, value)
if response_body is not None:
response._cached_content = jsondumps(response_body).encode('utf-8')
else:
response._cached_content = None
if request_body is not None:
setattr(response, 'request_body', request_body)
if environment is not None:
environment.events.request = event
event = environment
response_context_manager = cls_rcm(response, event, {})
response_context_manager._entered = True
response_context_manager.request_meta = {
'method': None,
'name': name,
'response_time': 1.0,
'content_size': 1337,
'exception': None,
}
return response_context_manager
class Webserver:
_web_server: gevent.pywsgi.WSGIServer
def __init__(self) -> None:
self._web_server = gevent.pywsgi.WSGIServer(
('127.0.0.1', 0),
app,
log=None,
)
@property
def port(self) -> int:
return cast(int, self._web_server.server_port)
def __enter__(self) -> 'Webserver':
gevent.spawn(lambda: self._web_server.serve_forever())
gevent.sleep(0.01)
return self
def __exit__(
self,
exc_type: Optional[Type[BaseException]],
exc: Optional[BaseException],
traceback: Optional[TracebackType],
) -> Literal[True]:
self._web_server.stop_accepting()
self._web_server.stop()
try:
del environ['GRIZZLY_CONTEXT_ROOT']
except KeyError:
pass
try:
GrizzlyContext.destroy()
except:
pass
return True
class NoopZmqFixture:
_mocker: MockerFixture
_mocks: Dict[str, MagicMock]
def __init__(self, mocker: MockerFixture) -> None:
self._mocker = mocker
self._mocks = {}
def __call__(self, prefix: str) -> None:
targets = [
'zmq.Context.term',
'zmq.Context.__del__',
'zmq.Socket.bind',
'zmq.Socket.connect',
'zmq.Socket.send_json',
'zmq.Socket.recv_json',
'zmq.Socket.recv_multipart',
'zmq.Socket.send_multipart',
'zmq.Socket.disconnect',
'zmq.Socket.send_string',
'zmq.Poller.poll',
'zmq.Poller.register',
'gsleep',
]
for target in targets:
try:
self._mocks.update({target: self._mocker.patch(
f'{prefix}.{target}',
autospec=True,
)})
except AttributeError as e:
if 'gsleep' in str(e):
continue
raise
def get_mock(self, attr: str) -> MagicMock:
mock = self._mocks.get(attr, None)
if mock is not None:
return mock
for full_attr, mock in self._mocks.items():
            # [-1] also handles targets without a dot in their name (e.g. 'gsleep')
            last_part = full_attr.rsplit('.', 1)[-1]
if last_part == attr:
return mock
raise AttributeError(f'no mocks for {attr}')
BehaveKeyword = Literal['Then', 'Given', 'And', 'When']
class BehaveValidator:
name: str
implementation: Any
table: Optional[List[Dict[str, str]]]
def __init__(
self,
name: str,
implementation: Callable[[BehaveContext], None],
table: Optional[List[Dict[str, str]]] = None,
) -> None:
self.name = name
self.implementation = implementation
self.table = table
@property
def expression(self) -> str:
lines: List[str] = [f'Then run validator {self.name}_{self.implementation.__name__}']
if self.table is not None and len(self.table) > 0:
lines.append(f' | {" | ".join([key for key in self.table[0].keys()])} |')
for row in self.table:
lines.append(f' | {" | ".join([value for value in row.values()])} |')
return '\n'.join(lines)
@property
def impl(self) -> str:
source_lines = inspect.getsource(self.implementation).split('\n')
source_lines[0] = source_lines[0].replace('def ', f'def {self.name}_')
source = '\n'.join(source_lines)
return f'''@then(u'run validator {self.name}_{self.implementation.__name__}')
{dedent(source)}
'''
class BehaveContextFixture:
_tmp_path_factory: TempPathFactory
_cwd: str
_env: Dict[str, str]
_validators: Dict[Optional[str], List[BehaveValidator]]
_after_features: Dict[str, Callable[[BehaveContext, Feature], None]]
_root: Optional[Path]
def __init__(self, tmp_path_factory: TempPathFactory) -> None:
self._tmp_path_factory = tmp_path_factory
self._cwd = getcwd()
self._env = {}
self._validators = {}
self._root = None
self._after_features = {}
@property
def root(self) -> Path:
if self._root is None:
raise AttributeError('root is not set')
return self._root
def __enter__(self) -> 'BehaveContextFixture':
test_context = self._tmp_path_factory.mktemp('test_context')
virtual_env_path = test_context / 'grizzly-venv'
# create virtualenv
rc, output = run_command(
['python3', '-m', 'venv', virtual_env_path.name],
cwd=str(test_context),
)
try:
assert rc == 0
except AssertionError:
print(''.join(output))
raise
path = environ.get('PATH', '')
self._env.update({
'PATH': f'{str(virtual_env_path)}/bin:{path}',
'VIRTUAL_ENV': str(virtual_env_path),
'PYTHONPATH': environ.get('PYTHONPATH', '.'),
})
# install grizzly-cli
rc, output = run_command(
['python3', '-m', 'pip', 'install', 'grizzly-loadtester-cli'],
cwd=str(test_context),
env=self._env,
)
try:
assert rc == 0
except AssertionError:
print(''.join(output))
raise
# create grizzly project
rc, output = run_command(
['grizzly-cli', 'init', '--yes', 'test-project'],
cwd=str(test_context),
env=self._env,
)
try:
assert rc == 0
except AssertionError:
print(''.join(output))
raise
self._root = test_context / 'test-project'
assert self._root.is_dir()
(self._root / 'features' / 'test-project.feature').unlink()
# create base steps.py
with open(self._root / 'features' / 'steps' / 'steps.py', 'w') as fd:
fd.write('from typing import cast, Callable, Any\n\n')
fd.write('from behave import then\n')
fd.write('from behave.runner import Context\n')
fd.write('from grizzly.context import GrizzlyContext, GrizzlyContextScenario\n')
fd.write('from grizzly.tasks import GrizzlyTask\n')
fd.write('from grizzly.scenarios import GrizzlyScenario\n')
fd.write('from grizzly.steps import *\n')
# install dependencies
rc, output = run_command(
['python3', '-m', 'pip', 'install', '-r', 'requirements.txt'],
cwd=str(self._root),
env=self._env,
)
try:
assert rc == 0
except AssertionError:
print(''.join(output))
raise
return self
def __exit__(
self,
exc_type: Optional[Type[BaseException]],
exc: Optional[BaseException],
traceback: Optional[TracebackType],
) -> Literal[True]:
if environ.get('KEEP_FILES', None) is None:
try:
rmtree(self.root.parent.parent, onerror=onerror)
except AttributeError:
pass
else:
print(self._root)
return True
def add_validator(
self,
implementation: Callable[[BehaveContext], None],
/,
scenario: Optional[str] = None,
table: Optional[List[Dict[str, str]]] = None,
) -> None:
callee = inspect.stack()[1].function
if self._validators.get(scenario, None) is None:
self._validators[scenario] = []
self._validators[scenario].append(BehaveValidator(callee, implementation, table))
def add_after_feature(self, implementation: Callable[[BehaveContext, Feature], None]) -> None:
callee = inspect.stack()[1].function
self._after_features[callee] = implementation
def test_steps(self, /, scenario: Optional[List[str]] = None, background: Optional[List[str]] = None, identifier: Optional[str] = None) -> str:
callee = inspect.stack()[1].function
contents: List[str] = ['Feature:']
add_user_count_step = True
add_user_type_step = True
add_spawn_rate_step = True
if background is None:
background = []
if scenario is None:
scenario = []
# check required steps
for step in background + scenario:
if re.match(r'Given "[^"]*" user[s]?', step) is not None:
add_user_count_step = False
if re.match(r'Given a user of type "[^"]*"', step) is not None:
add_user_type_step = False
if re.match(r'(And|Given) spawn rate is "[^"]*" user[s]? per second', step) is not None:
add_spawn_rate_step = False
if add_user_count_step:
background.insert(0, 'Given "1" user')
if add_spawn_rate_step:
background.insert(1, 'And spawn rate is "1" user per second')
if add_user_type_step:
scenario.insert(0, 'Given a user of type "RestApi" load testing "http://localhost:1"')
if add_spawn_rate_step and not add_user_count_step:
background.append('And spawn rate is "1" user per second')
contents.append(' Background: common configuration')
for step in background:
contents.append(f' {step}')
contents.append('')
contents.append(f' Scenario: {callee}')
for step in scenario or []:
contents.append(f' {step}')
contents.append(' Then log message "dummy"\n')
return self.create_feature(
'\n'.join(contents),
name=callee,
identifier=identifier,
)
def create_feature(self, contents: str, name: Optional[str] = None, identifier: Optional[str] = None) -> str:
if name is None:
name = inspect.stack()[1].function
if identifier is not None:
identifier = sha1(identifier.encode()).hexdigest()[:8]
name = f'{name}_{identifier}'
feature_lines = contents.strip().split('\n')
feature_lines[0] = f'Feature: {name}'
steps_file = self.root / 'features' / 'steps' / 'steps.py'
environment_file = self.root / 'features' / 'environment.py'
scenario: Optional[str] = None
indentation = ' '
modified_feature_lines: List[str] = []
offset = 0 # number of added steps
for nr, line in enumerate(feature_lines):
modified_feature_lines.append(line)
last_line = nr == len(feature_lines) - 1
scenario_definition = line.strip().startswith('Scenario:')
if scenario_definition or last_line:
if scenario is not None:
validators = self._validators.get(scenario, self._validators.get(None, None))
if validators is not None:
for validator in validators:
nr += offset
validator_expression = indent(f'{validator.expression}', prefix=indentation * 2)
index = nr
while modified_feature_lines[index].strip() == '' or 'Scenario:' in modified_feature_lines[index]:
index -= 1
index += 1
modified_feature_lines.insert(index, validator_expression)
offset += 1
if scenario_definition:
scenario = line.replace('Scenario:', '').strip()
indentation, _ = line.split('Scenario:', 1)
modified_feature_lines.append('')
contents = '\n'.join(modified_feature_lines)
# write feature file
with open(self.root / 'features' / f'{name}.feature', 'w+') as fd:
fd.write(contents)
feature_file_name = fd.name.replace(f'{self.root}/', '')
# cache current step implementations
with open(steps_file, 'r') as fd:
steps_impl = fd.read()
# add step implementations
with open(steps_file, 'a') as fd:
added_validators: List[str] = []
for validators in self._validators.values():
for validator in validators:
# write expression and step implementation to steps/steps.py
if validator.impl not in steps_impl and validator.impl not in added_validators:
fd.write(f'\n\n{validator.impl}')
added_validators.append(validator.impl)
added_validators = []
# add after_feature hook, always write all of 'em
with open(environment_file, 'w') as fd:
fd.write('from typing import Any, Tuple, Dict, cast\n\n')
fd.write('from behave.runner import Context\n')
fd.write('from behave.model import Feature\n')
fd.write('from grizzly.context import GrizzlyContext\n')
fd.write('from grizzly.environment import before_feature, after_feature as grizzly_after_feature, before_scenario, after_scenario, before_step\n\n')
fd.write('def after_feature(context: Context, feature: Feature, *args: Tuple[Any, ...], **kwargs: Dict[str, Any]) -> None:\n')
fd.write(' grizzly_after_feature(context, feature)\n\n')
if len(self._after_features) > 0:
for feature_name in self._after_features.keys():
fd.write(f' if feature.name == "{feature_name}":\n')
fd.write(f' {feature_name}_after_feature(context, feature)\n\n')
for key, after_feature_impl in self._after_features.items():
source_lines = dedent(inspect.getsource(after_feature_impl)).split('\n')
source_lines[0] = re.sub(r'^def .*?\(', f'def {key}_after_feature(', source_lines[0])
source = '\n'.join(source_lines)
fd.write(source + '\n\n')
# step validators are are now "burned"...
self._validators.clear()
return feature_file_name
def execute(self, feature_file: str, env_conf_file: Optional[str] = None, testdata: Optional[Dict[str, str]] = None) -> Tuple[int, List[str]]:
chdir(self.root)
command = [
'grizzly-cli',
'local',
'run',
'--yes',
'--verbose',
feature_file,
]
if env_conf_file is not None:
command += ['-e', env_conf_file]
if testdata is not None:
for key, value in testdata.items():
command += ['-T', f'{key}={value}']
rc, output = run_command(
command,
cwd=str(self.root),
env=self._env,
)
if rc != 0:
print(''.join(output))
chdir(self._cwd)
return rc, output
| 32.321721 | 171 | 0.594909 |
acf73ca5a7588ae8a552053ebdf2b1656e903dcd | 4,347 | py | Python | antlir/vm/tests/test_share_generator.py | facebookincubator/fs_image | 3515a24bb0e93176a5584bdc8839464fa28390d7 | [
"MIT"
] | 9 | 2019-12-02T20:17:35.000Z | 2020-06-13T16:34:25.000Z | antlir/vm/tests/test_share_generator.py | facebookincubator/fs_image | 3515a24bb0e93176a5584bdc8839464fa28390d7 | [
"MIT"
] | 19 | 2019-11-22T23:30:04.000Z | 2020-07-16T18:05:48.000Z | antlir/vm/tests/test_share_generator.py | facebookincubator/fs_image | 3515a24bb0e93176a5584bdc8839464fa28390d7 | [
"MIT"
] | 4 | 2019-12-04T19:03:28.000Z | 2020-06-13T16:34:29.000Z | #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
import importlib.resources
import os
import subprocess
import tempfile
import unittest
from dataclasses import dataclass
from typing import Optional
from antlir.fs_utils import Path
from antlir.tests.common import AntlirTestCase
from antlir.vm.share import BtrfsDisk, Plan9Export, Share
@dataclass(frozen=True)
class TestShare(object):
share: Share
unit: Optional[str]
contents: Optional[str]
TEST_SHARES = [
TestShare(
Plan9Export(path=Path("/tmp/hello"), mountpoint=Path("/tmp/hello")),
"tmp-hello.mount",
"""[Unit]
Description=Mount fs0 at /tmp/hello
Requires=systemd-modules-load.service
After=systemd-modules-load.service
Before=local-fs.target
[Mount]
What=fs0
Where=/tmp/hello
Type=9p
Options=version=9p2000.L,posixacl,cache=loose,ro
""",
),
TestShare(
Plan9Export(
path=Path("/usr/tag"),
mountpoint=Path("/usr/tag"),
mount_tag="explicit_tag",
),
"usr-tag.mount",
"""[Unit]
Description=Mount explicit_tag at /usr/tag
Requires=systemd-modules-load.service
After=systemd-modules-load.service
Before=local-fs.target
[Mount]
What=explicit_tag
Where=/usr/tag
Type=9p
Options=version=9p2000.L,posixacl,cache=loose,ro
""",
),
TestShare(
Plan9Export(
path=Path("/tmp/not-included"),
generator=False,
),
None,
None,
),
TestShare(
Plan9Export(
path=Path("/some/host/path"), mountpoint=Path("/guest/other")
),
"guest-other.mount",
"""[Unit]
Description=Mount fs2 at /guest/other
Requires=systemd-modules-load.service
After=systemd-modules-load.service
Before=local-fs.target
[Mount]
What=fs2
Where=/guest/other
Type=9p
Options=version=9p2000.L,posixacl,cache=loose,ro
""",
),
TestShare(
Plan9Export(
path=Path("/tmp/hello_rw"),
mountpoint=Path("/tmp/hello_rw"),
readonly=False,
),
"tmp-hello_rw.mount",
"""[Unit]
Description=Mount fs3 at /tmp/hello_rw
Requires=systemd-modules-load.service
After=systemd-modules-load.service
Before=local-fs.target
[Mount]
What=fs3
Where=/tmp/hello_rw
Type=9p
Options=version=9p2000.L,posixacl,cache=none,rw
""",
),
TestShare(
BtrfsDisk(path=Path("/tmp/image.btrfs"), mountpoint=Path("/mnt/guest")),
"mnt-guest.mount",
"""[Unit]
Description=Mount vdb (/tmp/image.btrfs from host) at /mnt/guest
Before=local-fs.target
[Mount]
What=/dev/vdb
Where=/mnt/guest
Type=btrfs
Options=subvol=volume,ro
""",
),
]
class TestShareGenerator(AntlirTestCase):
def test_units(self):
with importlib.resources.path(
__package__, "mount-generator"
) as generator, Share.export_spec(
[s.share for s in TEST_SHARES]
) as share, tempfile.TemporaryDirectory() as outdir:
subprocess.run(
[generator, outdir], env={"EXPORTS_DIR": share.path}, check=True
)
units = {s.unit for s in TEST_SHARES if s.unit}
self.assertEqual(
set(os.listdir(outdir)),
units.union(
{"local-fs.target.requires", "workload-pre.target.requires"}
),
)
self.assertEqual(
set(
os.listdir(os.path.join(outdir, "local-fs.target.requires"))
),
units,
)
for share in TEST_SHARES:
if not share.share.generator:
continue
# check that the mount units have the expected content
with open(os.path.join(outdir, share.unit)) as f:
self.assertEqual(f.read(), share.contents)
# set as a requirement of local-fs.target
self.assertEqual(
os.readlink(
os.path.join(
outdir, "local-fs.target.requires", share.unit
)
),
os.path.join(outdir, share.unit),
)
| 25.875 | 80 | 0.599034 |
acf73d502692c04bd5286150a28b262cceba04bb | 3,092 | py | Python | autotest/ogr/ogr_rfc30.py | trundev/gdal | d5777940975f2784980ef0b7561eeeb655fd0ab5 | [
"MIT"
] | 18 | 2021-01-27T00:07:35.000Z | 2022-03-25T22:20:13.000Z | autotest/ogr/ogr_rfc30.py | trundev/gdal | d5777940975f2784980ef0b7561eeeb655fd0ab5 | [
"MIT"
] | 3 | 2019-02-27T00:43:06.000Z | 2019-06-28T21:57:10.000Z | autotest/ogr/ogr_rfc30.py | trundev/gdal | d5777940975f2784980ef0b7561eeeb655fd0ab5 | [
"MIT"
] | 1 | 2021-04-26T14:47:38.000Z | 2021-04-26T14:47:38.000Z | #!/usr/bin/env pytest
###############################################################################
# $Id$
#
# Project: GDAL/OGR Test Suite
# Purpose: Test RFC 30 (UTF filename handling) support.
# Author: Even Rouault, <even dot rouault at spatialys.com>
#
###############################################################################
# Copyright (c) 2011, Even Rouault <even dot rouault at spatialys.com>
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###############################################################################
from sys import version_info
from osgeo import ogr
import pytest
###############################################################################
# Try ogr.Open(), Driver.CreateDataSource(), Driver.DeleteDataSource()
def ogr_rfc30_1_internal(filename, layer_name):
ds = ogr.GetDriverByName('ESRI Shapefile').CreateDataSource(filename)
lyr = ds.CreateLayer('foo')
ds = None
ds = ogr.Open(filename)
assert ds is not None, 'cannot reopen datasource'
lyr = ds.GetLayerByName(layer_name)
assert lyr is not None, 'cannot find layer'
ds = None
ogr.GetDriverByName('ESRI Shapefile').DeleteDataSource(filename)
def test_ogr_rfc30_1():
if version_info >= (3, 0, 0):
filename = '/vsimem/\u00e9.shp'
layer_name = '\u00e9'
else:
# First try with Unicode string
exec("filename = u'/vsimem/\u00e9.shp'")
exec("layer_name = u'\u00e9'.encode( 'utf-8' )") # FIXME? we should perhaps accept Unicode strings for layernames as well
return ogr_rfc30_1_internal(filename, layer_name)
def test_ogr_rfc30_1_bis():
if version_info >= (3, 0, 0):
pytest.skip()
filename = None
layer_name = None
# Test that it also works with a regular string (non Unicode) with utf8 content on python 2.X
exec("filename = u'/vsimem/\u00e9.shp'.encode( 'utf-8' )")
exec("layer_name = u'\u00e9'.encode( 'utf-8' )") # FIXME? we should perhaps accept Unicode strings for layernames as well
return ogr_rfc30_1_internal(filename, layer_name)
| 36.809524 | 130 | 0.64586 |
acf73e17cc3c3ebbf4581397b689b4a101e74ed3 | 5,408 | py | Python | filters.py | earthpyy/alfred-bluetooth-connect | 859eb5e787950f8807eb66ead0bdf64f416e9a60 | [
"MIT"
] | 29 | 2019-02-02T22:10:59.000Z | 2022-01-27T17:43:45.000Z | filters.py | earthpyy/alfred-bluetooth-connect | 859eb5e787950f8807eb66ead0bdf64f416e9a60 | [
"MIT"
] | 1 | 2019-02-03T08:04:23.000Z | 2019-02-16T16:36:34.000Z | filters.py | earthpyy/alfred-bluetooth-connect | 859eb5e787950f8807eb66ead0bdf64f416e9a60 | [
"MIT"
] | 2 | 2019-11-18T08:03:03.000Z | 2021-04-05T12:17:22.000Z | import json
import os
import sys
from utils import FILE_PATH, get_value_from_line, check_set_syntax, split_set_query, return_result
# get argv
command = sys.argv[1]
query = sys.argv[2] if len(sys.argv) > 2 else ''
if command == 'set': # SET FILTER
alias, device_name = split_set_query(query)
valid = check_set_syntax(alias, device_name)
if valid:
result = {
'items': [
{
'title': 'Set alias {} to {}'.format(alias, device_name),
'arg': query,
'variables': {
'alias': alias,
'device_name': device_name
}
}
]
}
else:
result = {
'items': [
{
'title': 'Invalid syntax!',
'subtitle': 'Syntax: btset {alias} {device name} OR btset {alias} > {device name}'
}
]
}
return_result(result)
elif command == 'unset': # UNSET FILTER
items = []
try:
with open(FILE_PATH, 'rb') as f:
for line in f:
alias, device_name = get_value_from_line(line)
if query in alias:
items.append({
'title': 'Unset ' + device_name,
'arg': query,
'variables': {
'alias': alias
}
})
except EnvironmentError:
sys.stdout.write(query)
if items:
result = {
'items': items
}
else:
result = {
'items': [
{
'title': 'Cannot find alias {}'.format(query),
'subtitle': 'Please recheck your alias.'
}
]
}
return_result(result)
elif command == 'resolve': # RESOLVE FILTER
populated_favorite_name = None
pin = 0
if query in [None, '']: # if no query
favorite_name = os.environ.get('FAVORITE_DEVICE', '')
if favorite_name == '': # no favorite device
items = [
{
'title': 'Toggle',
'arg': 'Please specify device alias/name',
'subtitle': 'Toggle any bluetooth device'
}
]
else: # has favorite device
items = [
{
'title': 'Toggle ' + favorite_name,
'subtitle': 'Toggle favorite bluetooth device',
'arg': query,
'variables': {
'device_name': favorite_name
},
'mods': {
'cmd': {
'subtitle': 'Unmark {} as favorite device'.format(favorite_name)
}
}
}
]
pin = 1
populated_favorite_name = favorite_name
else: # query is provided
items = [
# default option
{
'title': 'Toggle ' + query,
'arg': query,
'variables': {
'device_name': query
},
'mods': {
'cmd': {
'subtitle': 'Toggle and mark {} as favorite device'.format(query)
}
}
}
]
try:
with open(FILE_PATH, 'rb') as f:
for line in f:
alias, device_name = get_value_from_line(line)
if device_name != populated_favorite_name: # check if favorite name is populated
if query == alias: # pin match on top
items.insert(
0,
{
'title': 'Toggle ' + device_name,
'arg': query,
'variables': {
'device_name': device_name
},
'mods': {
'cmd': {
'subtitle': 'Toggle and mark {} as favorite device'.format(device_name)
}
}
}
)
pin = 1
elif query in alias:
items.insert(
pin,
{
'title': 'Toggle ' + device_name,
'arg': query,
'variables': {
'device_name': device_name
},
'mods': {
'cmd': {
'subtitle': 'Toggle and mark {} as favorite device'.format(device_name)
}
}
}
)
result = {
'items': items
}
return_result(result)
except EnvironmentError:
sys.stdout.write(query)
| 31.625731 | 111 | 0.358728 |
acf73eca22713a9ce1474fa7898479fe963a8670 | 2,110 | py | Python | config/passenger_wsgi.py | birkin/disa_project | 48e3e346673570a418cecdfca18c8f58ce43759d | [
"MIT"
] | 1 | 2020-07-22T07:28:04.000Z | 2020-07-22T07:28:04.000Z | config/passenger_wsgi.py | birkin/disa_project | 48e3e346673570a418cecdfca18c8f58ce43759d | [
"MIT"
] | 85 | 2020-06-03T15:28:14.000Z | 2021-11-17T15:00:19.000Z | config/passenger_wsgi.py | birkin/disa_dj_project | affab1987364c97c3403242aafa766087acd7465 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
WSGI config.
It exposes the WSGI callable as a module-level variable named ``application``.
For more information on this file, see
https://docs.djangoproject.com/en/1.11/howto/deployment/wsgi/
"""
"""
Note: no need to activate the virtual-environment here for passenger.
- the project's httpd/passenger.conf section allows specification of the python-path via `PassengerPython`, which auto-activates it.
- the auto-activation provides access to modules, but not, automatically, env-vars.
- passenger env-vars loading under python3.x is enabled via the `SetEnv` entry in the project's httpd/passenger.conf section.
- usage: `SetEnv PREFIX__ENV_SETTINGS_PATH /path/to/project_env_settings.sh`
- `SetEnv` requires apache env_module; info: <https://www.phusionpassenger.com/library/indepth/environment_variables.html>,
enabled by default on macOS 10.12.4, and our dev and production servers.
For activating the virtual-environment manually, don't source the settings file directly. Instead, add to `project_env/bin/activate`:
export PREFIX__ENV_SETTINGS_PATH="/path/to/project_env_settings.sh"
source $PREFIX__ENV_SETTINGS_PATH
This allows not only the sourcing, but also creates the env-var used below by shellvars.
"""
import os, pprint, sys
import shellvars
from django.core.wsgi import get_wsgi_application
# print( 'the initial env, ```{}```'.format( pprint.pformat(dict(os.environ)) ) )
PROJECT_DIR_PATH = os.path.dirname( os.path.dirname(os.path.abspath(__file__)) )
ENV_SETTINGS_FILE = os.environ['DISA_DJ__ENV_SETTINGS_PATH'] # set in `httpd/passenger.conf`, and `env/bin/activate`
## update path
sys.path.append( PROJECT_DIR_PATH )
## reference django settings
os.environ[u'DJANGO_SETTINGS_MODULE'] = 'config.settings' # so django can access its settings
## load up env vars
var_dct = shellvars.get_vars( ENV_SETTINGS_FILE )
for ( key, val ) in var_dct.items():
os.environ[key.decode('utf-8')] = val.decode('utf-8')
# print( 'the final env, ```{}```'.format( pprint.pformat(dict(os.environ)) ) )
## gogogo
application = get_wsgi_application()
| 40.576923 | 133 | 0.757346 |
acf73ed66ed925d0faecbdf0d2d7b378436c845d | 15,211 | py | Python | src/pip/_internal/cmdoptions.py | smartsammler/pip | 54b983c2bd56dfbd61bbcc2dcc9177d4e55e25af | [
"MIT"
] | null | null | null | src/pip/_internal/cmdoptions.py | smartsammler/pip | 54b983c2bd56dfbd61bbcc2dcc9177d4e55e25af | [
"MIT"
] | null | null | null | src/pip/_internal/cmdoptions.py | smartsammler/pip | 54b983c2bd56dfbd61bbcc2dcc9177d4e55e25af | [
"MIT"
] | null | null | null | """
shared options and groups
The principle here is to define options once, but *not* instantiate them
globally. One reason being that options with action='append' can carry state
between parses. pip parses general options twice internally, and shouldn't
pass on state. To be consistent, all options will follow this design.
"""
from __future__ import absolute_import
import warnings
from functools import partial
from optparse import SUPPRESS_HELP, Option, OptionGroup
from pip._internal.index import (
FormatControl, fmt_ctl_handle_mutual_exclude, fmt_ctl_no_binary
)
from pip._internal.locations import USER_CACHE_DIR, src_prefix
from pip._internal.models import PyPI
from pip._internal.utils.hashes import STRONG_HASHES
from pip._internal.utils.typing import MYPY_CHECK_RUNNING
from pip._internal.utils.ui import BAR_TYPES
if MYPY_CHECK_RUNNING:
from typing import Any
def make_option_group(group, parser):
"""
Return an OptionGroup object
group -- assumed to be dict with 'name' and 'options' keys
parser -- an optparse Parser
"""
option_group = OptionGroup(parser, group['name'])
for option in group['options']:
option_group.add_option(option())
return option_group
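# Illustrative sketch (not from pip itself): how a command would typically consume the
# option "templates" defined in this module.  Because each option below is a
# functools.partial, every call creates a brand-new Option instance, which is why the
# partials are never instantiated at module level and per-parse state (e.g. lists built
# by action='append') cannot leak between parsers.  `index_group` is defined near the
# bottom of this module.
def _example_build_parser():
    from optparse import OptionParser
    parser = OptionParser()
    parser.add_option_group(make_option_group(index_group, parser))
    return parser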
def check_install_build_global(options, check_options=None):
"""Disable wheels if per-setup.py call options are set.
:param options: The OptionParser options to update.
:param check_options: The options to check, if not supplied defaults to
options.
"""
if check_options is None:
check_options = options
def getname(n):
return getattr(check_options, n, None)
names = ["build_options", "global_options", "install_options"]
if any(map(getname, names)):
control = options.format_control
fmt_ctl_no_binary(control)
warnings.warn(
'Disabling all use of wheels due to the use of --build-options '
'/ --global-options / --install-options.', stacklevel=2)
###########
# options #
###########
help_ = partial(
Option,
'-h', '--help',
dest='help',
action='help',
help='Show help.',
) # type: Any
isolated_mode = partial(
Option,
"--isolated",
dest="isolated_mode",
action="store_true",
default=False,
help=(
"Run pip in an isolated mode, ignoring environment variables and user "
"configuration."
),
)
require_virtualenv = partial(
Option,
# Run only if inside a virtualenv, bail if not.
'--require-virtualenv', '--require-venv',
dest='require_venv',
action='store_true',
default=False,
help=SUPPRESS_HELP
) # type: Any
verbose = partial(
Option,
'-v', '--verbose',
dest='verbose',
action='count',
default=0,
help='Give more output. Option is additive, and can be used up to 3 times.'
)
version = partial(
Option,
'-V', '--version',
dest='version',
action='store_true',
help='Show version and exit.',
) # type: Any
quiet = partial(
Option,
'-q', '--quiet',
dest='quiet',
action='count',
default=0,
help=(
'Give less output. Option is additive, and can be used up to 3'
' times (corresponding to WARNING, ERROR, and CRITICAL logging'
' levels).'
),
) # type: Any
progress_bar = partial(
Option,
'--progress-bar',
dest='progress_bar',
type='choice',
choices=list(BAR_TYPES.keys()),
default='on',
help=(
'Specify type of progress to be displayed [' +
'|'.join(BAR_TYPES.keys()) + '] (default: %default)'
),
) # type: Any
log = partial(
Option,
"--log", "--log-file", "--local-log",
dest="log",
metavar="path",
help="Path to a verbose appending log."
) # type: Any
no_input = partial(
Option,
# Don't ask for input
'--no-input',
dest='no_input',
action='store_true',
default=False,
help=SUPPRESS_HELP
) # type: Any
proxy = partial(
Option,
'--proxy',
dest='proxy',
type='str',
default='',
help="Specify a proxy in the form [user:passwd@]proxy.server:port."
) # type: Any
retries = partial(
Option,
'--retries',
dest='retries',
type='int',
default=5,
help="Maximum number of retries each connection should attempt "
"(default %default times).",
) # type: Any
timeout = partial(
Option,
'--timeout', '--default-timeout',
metavar='sec',
dest='timeout',
type='float',
default=15,
help='Set the socket timeout (default %default seconds).',
) # type: Any
skip_requirements_regex = partial(
Option,
# A regex to be used to skip requirements
'--skip-requirements-regex',
dest='skip_requirements_regex',
type='str',
default='',
help=SUPPRESS_HELP,
) # type: Any
def exists_action():
return Option(
# Option when path already exist
'--exists-action',
dest='exists_action',
type='choice',
choices=['s', 'i', 'w', 'b', 'a'],
default=[],
action='append',
metavar='action',
help="Default action when a path already exists: "
"(s)witch, (i)gnore, (w)ipe, (b)ackup, (a)bort).",
)
cert = partial(
Option,
'--cert',
dest='cert',
type='str',
metavar='path',
help="Path to alternate CA bundle.",
) # type: Any
client_cert = partial(
Option,
'--client-cert',
dest='client_cert',
type='str',
default=None,
metavar='path',
help="Path to SSL client certificate, a single file containing the "
"private key and the certificate in PEM format.",
) # type: Any
index_url = partial(
Option,
'-i', '--index-url', '--pypi-url',
dest='index_url',
metavar='URL',
default=PyPI.simple_url,
help="Base URL of Python Package Index (default %default). "
"This should point to a repository compliant with PEP 503 "
"(the simple repository API) or a local directory laid out "
"in the same format.",
) # type: Any
def extra_index_url():
return Option(
'--extra-index-url',
dest='extra_index_urls',
metavar='URL',
action='append',
default=[],
help="Extra URLs of package indexes to use in addition to "
"--index-url. Should follow the same rules as "
"--index-url.",
)
no_index = partial(
Option,
'--no-index',
dest='no_index',
action='store_true',
default=False,
help='Ignore package index (only looking at --find-links URLs instead).',
) # type: Any
def find_links():
return Option(
'-f', '--find-links',
dest='find_links',
action='append',
default=[],
metavar='url',
help="If a url or path to an html file, then parse for links to "
"archives. If a local path or file:// url that's a directory, "
"then look for archives in the directory listing.",
)
def trusted_host():
return Option(
"--trusted-host",
dest="trusted_hosts",
action="append",
metavar="HOSTNAME",
default=[],
help="Mark this host as trusted, even though it does not have valid "
"or any HTTPS.",
)
# Remove after 1.5
process_dependency_links = partial(
Option,
"--process-dependency-links",
dest="process_dependency_links",
action="store_true",
default=False,
help="Enable the processing of dependency links.",
) # type: Any
def constraints():
return Option(
'-c', '--constraint',
dest='constraints',
action='append',
default=[],
metavar='file',
help='Constrain versions using the given constraints file. '
'This option can be used multiple times.'
)
def requirements():
return Option(
'-r', '--requirement',
dest='requirements',
action='append',
default=[],
metavar='file',
help='Install from the given requirements file. '
'This option can be used multiple times.'
)
def editable():
return Option(
'-e', '--editable',
dest='editables',
action='append',
default=[],
metavar='path/url',
help=('Install a project in editable mode (i.e. setuptools '
'"develop mode") from a local project path or a VCS url.'),
)
src = partial(
Option,
'--src', '--source', '--source-dir', '--source-directory',
dest='src_dir',
metavar='dir',
default=src_prefix,
help='Directory to check out editable projects into. '
'The default in a virtualenv is "<venv path>/src". '
'The default for global installs is "<current dir>/src".'
) # type: Any
def _get_format_control(values, option):
"""Get a format_control object."""
return getattr(values, option.dest)
def _handle_no_binary(option, opt_str, value, parser):
existing = getattr(parser.values, option.dest)
fmt_ctl_handle_mutual_exclude(
value, existing.no_binary, existing.only_binary)
def _handle_only_binary(option, opt_str, value, parser):
existing = getattr(parser.values, option.dest)
fmt_ctl_handle_mutual_exclude(
value, existing.only_binary, existing.no_binary)
def no_binary():
return Option(
"--no-binary", dest="format_control", action="callback",
callback=_handle_no_binary, type="str",
default=FormatControl(set(), set()),
help="Do not use binary packages. Can be supplied multiple times, and "
"each time adds to the existing value. Accepts either :all: to "
"disable all binary packages, :none: to empty the set, or one or "
"more package names with commas between them. Note that some "
"packages are tricky to compile and may fail to install when "
"this option is used on them.",
)
def only_binary():
return Option(
"--only-binary", dest="format_control", action="callback",
callback=_handle_only_binary, type="str",
default=FormatControl(set(), set()),
help="Do not use source packages. Can be supplied multiple times, and "
"each time adds to the existing value. Accepts either :all: to "
"disable all source packages, :none: to empty the set, or one or "
"more package names with commas between them. Packages without "
"binary distributions will fail to install when this option is "
"used on them.",
)
cache_dir = partial(
Option,
"--cache-dir",
dest="cache_dir",
default=USER_CACHE_DIR,
metavar="dir",
help="Store the cache data in <dir>."
)
no_cache = partial(
Option,
"--no-cache-dir",
dest="cache_dir",
action="store_false",
help="Disable the cache.",
)
no_deps = partial(
Option,
'--no-deps', '--no-dependencies',
dest='ignore_dependencies',
action='store_true',
default=False,
help="Don't install package dependencies)."
) # type: Any
build_dir = partial(
Option,
'-b', '--build', '--build-dir', '--build-directory',
dest='build_dir',
metavar='dir',
help='Directory to unpack packages into and build in.'
) # type: Any
ignore_requires_python = partial(
Option,
'--ignore-requires-python',
dest='ignore_requires_python',
action='store_true',
help='Ignore the Requires-Python information.'
) # type: Any
install_options = partial(
Option,
'--install-option',
dest='install_options',
action='append',
metavar='options',
help="Extra arguments to be supplied to the setup.py install "
"command (use like --install-option=\"--install-scripts=/usr/local/"
"bin\"). Use multiple --install-option options to pass multiple "
"options to setup.py install. If you are using an option with a "
"directory path, be sure to use absolute path.",
) # type: Any
global_options = partial(
Option,
'--global-option',
dest='global_options',
action='append',
metavar='options',
help="Extra global options to be supplied to the setup.py "
"call before the install command.",
) # type: Any
no_clean = partial(
Option,
'--no-clean',
action='store_true',
default=False,
help="Don't clean up build directories)."
) # type: Any
pre = partial(
Option,
'--pre',
action='store_true',
default=False,
help="Include pre-release and development versions. By default, "
"pip only finds stable versions.",
) # type: Any
disable_pip_version_check = partial(
Option,
"--disable-pip-version-check",
dest="disable_pip_version_check",
action="store_true",
default=False,
help="Don't periodically check PyPI to determine whether a new version "
"of pip is available for download. Implied with --no-index.",
) # type: Any
# Deprecated, Remove later
always_unzip = partial(
Option,
'-Z', '--always-unzip',
dest='always_unzip',
action='store_true',
help=SUPPRESS_HELP,
) # type: Any
def _merge_hash(option, opt_str, value, parser):
"""Given a value spelled "algo:digest", append the digest to a list
pointed to in a dict by the algo name."""
if not parser.values.hashes:
parser.values.hashes = {}
try:
algo, digest = value.split(':', 1)
except ValueError:
parser.error('Arguments to %s must be a hash name '
'followed by a value, like --hash=sha256:abcde...' %
opt_str)
if algo not in STRONG_HASHES:
parser.error('Allowed hash algorithms for %s are %s.' %
(opt_str, ', '.join(STRONG_HASHES)))
parser.values.hashes.setdefault(algo, []).append(digest)
hash = partial(
Option,
'--hash',
# Hash values eventually end up in InstallRequirement.hashes due to
# __dict__ copying in process_line().
dest='hashes',
action='callback',
callback=_merge_hash,
type='string',
help="Verify that the package's archive matches this "
'hash before installing. Example: --hash=sha256:abcdef...',
) # type: Any
require_hashes = partial(
Option,
'--require-hashes',
dest='require_hashes',
action='store_true',
default=False,
help='Require a hash to check each requirement against, for '
'repeatable installs. This option is implied when any package in a '
'requirements file has a --hash option.',
) # type: Any
##########
# groups #
##########
general_group = {
'name': 'General Options',
'options': [
help_,
isolated_mode,
require_virtualenv,
verbose,
version,
quiet,
log,
no_input,
proxy,
retries,
timeout,
skip_requirements_regex,
exists_action,
trusted_host,
cert,
client_cert,
cache_dir,
no_cache,
disable_pip_version_check,
]
}
index_group = {
'name': 'Package Index Options',
'options': [
index_url,
extra_index_url,
no_index,
find_links,
process_dependency_links,
]
}
| 26.135739 | 79 | 0.61633 |
acf73ef2ef0106f42a770458aaba950dabae5029 | 8,930 | py | Python | giant_time_series/utils.py | earthobservatory/giant_time_series | eb6fc839f62da74e82a417e404ec0d65ceacd559 | [
"Apache-2.0"
] | 1 | 2020-02-20T02:37:46.000Z | 2020-02-20T02:37:46.000Z | giant_time_series/utils.py | earthobservatory/giant_time_series | eb6fc839f62da74e82a417e404ec0d65ceacd559 | [
"Apache-2.0"
] | 1 | 2019-06-07T05:27:37.000Z | 2019-06-07T05:27:37.000Z | giant_time_series/utils.py | earthobservatory/giant_time_series | eb6fc839f62da74e82a417e404ec0d65ceacd559 | [
"Apache-2.0"
] | 1 | 2020-07-26T17:27:16.000Z | 2020-07-26T17:27:16.000Z | import os
import traceback
import logging
import h5py
import json
import re
import requests
import numpy as np
import scipy.spatial
from osgeo import gdal, ogr
from subprocess import check_call
from datetime import datetime
gdal.UseExceptions() # make GDAL raise python exceptions
log_format = "[%(asctime)s: %(levelname)s/%(name)s/%(funcName)s] %(message)s"
logging.basicConfig(format=log_format, level=logging.INFO)
logger = logging.getLogger(os.path.splitext(os.path.basename(__file__))[0])
def get_geocoded_coords(vrt_file):
"""Return geocoded coordinates of radar pixels."""
# extract geo-coded corner coordinates
ds = gdal.Open(vrt_file)
gt = ds.GetGeoTransform()
cols = ds.RasterXSize
rows = ds.RasterYSize
lon_arr = list(range(0, cols))
lat_arr = list(range(0, rows))
lons = np.empty((cols,))
lats = np.empty((rows,))
#logger.info("lon_arr: %s" % lon_arr)
#logger.info("lat_arr: %s" % lat_arr)
for py in lat_arr:
lats[py] = gt[3] + (py * gt[5])
for px in lon_arr:
lons[px] = gt[0] + (px * gt[1])
return lats, lons
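# Worked example (illustrative): for a geotransform gt = (100.0, 0.01, 0.0, 40.0, 0.0, -0.01),
# column px=2 maps to lon = 100.0 + 2 * 0.01 = 100.02 and row py=3 maps to
# lat = 40.0 + 3 * -0.01 = 39.97, which is exactly what the loops above compute.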
def get_geom(vrt_file):
"""Return geocoded coordinates of radar pixels as a GDAL geom."""
# extract geo-coded corner coordinates
ds = gdal.Open(vrt_file)
gt = ds.GetGeoTransform()
cols = ds.RasterXSize
rows = ds.RasterYSize
lon_arr = [0, cols-1]
lat_arr = [0, rows-1]
lons = []
lats = []
#logger.info("lon_arr: %s" % lon_arr)
#logger.info("lat_arr: %s" % lat_arr)
for py in lat_arr:
lats.append(gt[3] + (py * gt[5]))
for px in lon_arr:
lons.append(gt[0] + (px * gt[1]))
return ogr.CreateGeometryFromJson(json.dumps({
'type': 'Polygon',
'coordinates': [[
[ lons[0], lats[0] ],
[ lons[0], lats[1] ],
[ lons[1], lats[1] ],
[ lons[1], lats[0] ],
[ lons[0], lats[0] ],
]]
}))
def get_envelope(product_dirs):
"""Return overlap bbox of all interferograms."""
# Create a geometry collection
geom_col = ogr.Geometry(ogr.wkbGeometryCollection)
for prod in product_dirs:
unw_vrt = os.path.join(prod, "merged", "filt_topophase.unw.geo.vrt")
geom = get_geom(unw_vrt)
geom_col.AddGeometry(geom)
#logger.info("-" * 80)
#logger.info("{}: {}".format(prod, geom.GetEnvelope()))
#logger.info("envelope: {}".format(geom_col.GetEnvelope()))
return geom_col.GetEnvelope()
def get_bounding_polygon(path):
'''
Get the minimum bounding region
@param path - path to h5 file from which to read TS data
'''
fle = h5py.File(path, "r")
#Read out the first data frame, lats vector and lons vector.
data = fle["rawts"][0]
lons = fle["lon"]
lats = fle["lat"]
#Create a grid of lon, lat pairs
coords = np.dstack(np.meshgrid(lons,lats))
#Calculate any point in the data that is not NaN, and grab the coordinates
inx = ~np.isnan(data)
points = coords[inx]
#Calculate the convex-hull of the data points. This will be a mimimum
#bounding convex-polygon.
hull = scipy.spatial.ConvexHull(points)
#Harvest the points and make it a loop
pts = [list(pt) for pt in hull.points[hull.vertices]]
pts.append(pts[0])
return pts
def get_timesteps(path):
"""Return timestep dates."""
h5f = h5py.File(path, 'r')
times = h5f.get('time')[:]
h5f.close()
return [datetime.fromtimestamp(i).isoformat('T') for i in times[:]]
def get_bperp(catalog):
'''
Return perpendicular baseline.
@param catalog - catalog object to search for Bperp keys
'''
for i in catalog['baseline']:
if re.search(r'Bperp at midrange for first common burst', i):
return catalog['baseline'][i]
raise RuntimeError("Failed to find perpendicular baseline.")
def write_dataset_json(prod_dir, id, region, starttime, endtime, version):
'''
Write a dataset JSON file for TS
@param prod_dir: product directory
@param id: id of product
@param region: region to GEO-JSONize
@param starttime: starttime of the data
@param endtime: endtime of the data
'''
met = {
'creation_timestamp': "%sZ" % datetime.utcnow().isoformat(),
'version': version,
'label': id,
'location': {
"type": "Polygon",
"coordinates": [region]
},
"starttime":starttime,
"endtime":endtime
}
dataset_file = os.path.join(prod_dir, "{}.dataset.json".format(id))
with open(dataset_file, 'w') as f:
json.dump(met, f, indent=2)
def call_noerr(cmd):
"""Run command and warn if exit status is not 0."""
try: check_call(cmd, shell=True)
except Exception as e:
logger.warn("Got exception running {}: {}".format(cmd, str(e)))
logger.warn("Traceback: {}".format(traceback.format_exc()))
def gdal_translate(vrt_in, vrt_out, min_lat, max_lat, min_lon, max_lon, no_data, band):
"""Run gdal_translate to project image to a region of interest bbox."""
cmd_tmpl = "gdal_translate -of VRT -a_nodata {} -projwin {} {} {} {} -b {} {} {}"
return check_call(cmd_tmpl.format(no_data, min_lon, max_lat, max_lon,
min_lat, band, vrt_in, vrt_out), shell=True)
def check_dataset(es_url, es_index, id):
"""Query for dataset with specified input ID."""
query = {
"query":{
"bool":{
"must":[
{"term":{"_id":id}},
]
}
},
"fields": [],
}
logger.info("query:\n{}".format(json.dumps(query, indent=2)))
if es_url.endswith('/'):
search_url = '%s%s/_search' % (es_url, es_index)
else:
search_url = '%s/%s/_search' % (es_url, es_index)
r = requests.post(search_url, data=json.dumps(query))
if r.status_code != 200:
logger.info("Failed to query {}:\n{}".format(es_url, r.text))
logger.info("query: {}".format(json.dumps(query, indent=2)))
logger.info("returned: {}".format(r.text))
r.raise_for_status()
result = r.json()
logger.info('dedup check: {}'.format(json.dumps(result, indent=2)))
total = result['hits']['total']
if total == 0: id = 'NONE'
else: id = result['hits']['hits'][0]['_id']
return total, id
def dataset_exists(es_url, es_index, id):
"""Check if dataset exists in GRQ."""
total, id = check_dataset(es_url, es_index, id)
if total > 0: return True
return False
def prep_tds(lats, lons, h5_file):
"""Add lat, lon, and time info for TDS compatibility."""
#Open a file for append
h5f = h5py.File(h5_file, "r+")
#Calculate times from ordinals
dates = h5f.get("dates")
times = [int(datetime.fromordinal(int(item)).strftime("%s")) for item in dates]
#Create time, lat, and lon dataset
time = h5f.create_dataset("time",dates.shape, "d")
time[:] = times
lat = h5f.create_dataset("lat", lats.shape, "d")
lat[:] = lats
lon = h5f.create_dataset("lon", lons.shape, "d")
lon[:] = lons
#Create new dimension vars
dims = {}
time.attrs.create("axis", np.string_("T"))
time.attrs.create("units", np.string_("seconds since 1970-01-01 00:00:00 +0000"))
time.attrs.create("standard_name", np.string_("time"))
lat.attrs.create("help", np.string_("Latitude array"))
lon.attrs.create("help", np.string_("Longitude array"))
#Attach the new time dimension to the rawts and recons as scales
#In addition, attach lat and lon as scales
for dset_name in ["rawts", "recons", "error"]:
dset = h5f.get(dset_name)
if dset is None: continue
dset.dims.create_scale(time, "time")
dset.dims[0].attach_scale(time)
dset.dims.create_scale(lat, "lat")
dset.dims[1].attach_scale(lat)
dset.dims.create_scale(lon, "lon")
dset.dims[2].attach_scale(lon)
# add units attribute
dset.attrs.create("units", np.string_("mm"))
#Close file
h5f.close()
def get_matching_scenes(met, time):
"""Get the master of slave scenes depending on which created the
input time (should be sensingStart or sensingStop time)."""
match = time.replace("-", "").replace(":", "")
for sset in [met["slave_scenes"][0],
met["master_scenes"][0]]:
for scene in sset:
if match in scene:
return sset
raise Exception("Time {0} not found in master scenes ({1}) or slave scenes ({2})".format(
match, met["slave_scenes"], met["master_scenes"]))
def merge_intervals(intervals):
"""Merge date range intervals and determine gaps."""
s = sorted(intervals, key=lambda t: t[0])
m = 0
for t in s:
if t[0] > s[m][1]:
m += 1
s[m] = t
else:
            # use max() so an interval fully contained in the previous one cannot shrink the merged range
            s[m] = [s[m][0], max(s[m][1], t[1])]
return s[:m+1]
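# Example (illustrative): merge_intervals([[1, 3], [2, 6], [8, 10]]) returns
# [[1, 6], [8, 10]]; the remaining span between 6 and 8 is the kind of gap the
# docstring refers to.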
| 31.114983 | 93 | 0.606383 |
acf73f65b1acc1bf7072cff5c0eb05ed0853c590 | 6,318 | py | Python | wush/cli/shell.py | wxnacy/wush | 30620144f7a6fb676d210dd9463b77894f956b38 | [
"MIT"
] | null | null | null | wush/cli/shell.py | wxnacy/wush | 30620144f7a6fb676d210dd9463b77894f956b38 | [
"MIT"
] | null | null | null | wush/cli/shell.py | wxnacy/wush | 30620144f7a6fb676d210dd9463b77894f956b38 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding:utf-8 -*-
# Author: wxnacy@gmail.com
"""
"""
import sys
import os
import argparse
import shutil
import traceback
import multiprocessing as mp
import pygments
from datetime import datetime
from pygments.token import Token
from pygments.lexers.python import PythonLexer
from prompt_toolkit.formatted_text import PygmentsTokens
from prompt_toolkit import print_formatted_text
from prompt_toolkit import PromptSession
from prompt_toolkit.application import run_in_terminal
from prompt_toolkit.history import FileHistory
from prompt_toolkit.auto_suggest import AutoSuggestFromHistory
from prompt_toolkit.completion import Completer, Completion
from prompt_toolkit.completion import WordCompleter
from wush.argument import CmdArgumentParser
from wpy.argument import CommandArgumentParser
from wpy.argument import CommandArgumentParserFactory
from wush.common import utils
from wush.common.functions import super_function
from wush.common.functions import run_shell
from wush.common.functions import random_int
from wush.common.files import FileUtils
from wush.common.loggers import create_logger
from wush.completion.command import CommandCompleter
from wush.wush import Wapi
from .exceptions import ContinueException
from .exceptions import CommnadNotFoundException
from .server import run_server
from .server import PORT
def init_argparse():
"""初始化参数"""
parser = argparse.ArgumentParser(description='Wapi command',)
parser.add_argument('-c', '--config', help='Config dir name')
parser.add_argument('-m', '--module', help='Module name')
parser.add_argument('-n', '--name', help='Request name')
parser.add_argument('-s', '--space', help='Space name')
return parser
class Shell():
logger = create_logger('Shell')
parser_dict = {}
parser = None
client = None
_prompt_default = ''
web_port = None
session = None
def __init__(self):
self.parser = self._get_parser()
args = init_argparse().parse_args()
# args = init_argparse().parse_args(sys.argv)
client = Wapi()
client.init_config(config_root = args.config, space_name = args.space,
module_name = args.module)
self.client = client
self.session = PromptSession(
completer=CommandCompleter(self.parser, client),
history = FileHistory(os.path.expanduser('~/.wapi_history')),
auto_suggest = AutoSuggestFromHistory(),
complete_in_thread=True
)
def _get_parser(self, cmd=None):
if cmd not in self.parser_dict:
parser = CommandArgumentParserFactory.build_parser(cmd)
if isinstance(parser, CmdArgumentParser):
parser.set_wapi(self.client)
if isinstance(parser, CommandArgumentParser):
parser.set_prompt(self.session)
self.parser_dict[cmd] = parser
return self.parser_dict[cmd]
# def _is_run(self):
    #     """Check whether the program is already running."""
# stdout, stderr = run_shell("ps -ef | grep 'Python.*wapi'")
# stdout_len = len(stdout.decode().split('\n'))
# return True if stdout_len >= 4 else False
def run(self):
p = mp.Process(target=run_server, args=(self.client, ), daemon=True)
p.start()
self._run_shell()
p.terminate()
def _run_shell(self):
while True:
try:
left_prompt = 'wush/{space}/{module}> '.format(
space = self.client.space_name,
module = self.client.module_name
)
right_prompt = ''
text = self.session.prompt(
left_prompt,
default = self._prompt_default,
rprompt = right_prompt,
)
self._run_once_time(text)
except ContinueException:
continue
except CommnadNotFoundException:
print('command not found: {}'.format(text))
except KeyboardInterrupt:
continue
except EOFError:
break
except Exception as e:
self._print('ERROR: ' + str(e))
self.logger.error(traceback.format_exc())
self._end_run()
print('GoodBye!')
def _end_run(self):
self._prompt_default = ''
def _run_once_time(self, text):
"""运行"""
if not text:
return
parser = self._get_parser()
args = parser.parse_args(text)
cmd = args.cmd
self.parser = self._get_parser(cmd)
self.logger.info('run argparser %s', self.parser)
self._run_base_cmd(text)
self.logger.info(self.parser)
if isinstance(self.parser, CommandArgumentParser):
self.parser.run(text)
return
if not hasattr(self, '_' + cmd):
raise CommnadNotFoundException()
func = getattr(self, '_' + cmd)
func(text)
def _run_base_cmd(self, text):
"""运行基础命令"""
if text.startswith('!'):
text = text[1:]
try:
history_num = int(text)
self.logger.info(history_num)
cmd = self.get_history_by_num(history_num)
# def _print_cmd():
# print(cmd)
# run_in_terminal(_print_cmd)
self._prompt_default = cmd
except:
self.logger.error(traceback.format_exc())
raise CommnadNotFoundException()
else:
raise ContinueException()
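    # Example (illustrative): typing ``!3`` at the prompt looks up the 3rd entry in the
    # history file and pre-fills it as the default text of the next prompt.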
def _exit(self, text):
raise EOFError()
def get_history_by_num(self, num):
"""获取历史命令"""
items = self.session.history.get_strings()
if len(items) < num:
return None
return items[num - 1]
def _test(self, text):
# for k, v in os.environ.items():
# print(k, v)
sname = self.client.config.get_function().get_current_space_name()
print(sname)
oname = super_function.get_current_space_name()
print(oname)
def _print(self, text):
tokens = list(pygments.lex(text, lexer=PythonLexer()))
print_formatted_text(PygmentsTokens(tokens), end='')
| 32.4 | 78 | 0.617917 |
acf740911031e3315c8fc7e0ada3ecc039686dae | 3,280 | py | Python | core/dbt/version.py | alexells/dbt | 9c58f3465bf9907a2b62942de548f80650cd6288 | [
"Apache-2.0"
] | null | null | null | core/dbt/version.py | alexells/dbt | 9c58f3465bf9907a2b62942de548f80650cd6288 | [
"Apache-2.0"
] | null | null | null | core/dbt/version.py | alexells/dbt | 9c58f3465bf9907a2b62942de548f80650cd6288 | [
"Apache-2.0"
] | null | null | null | import importlib
import importlib.util
import os
import glob
import json
from typing import Iterator
import requests
import dbt.exceptions
import dbt.semver
PYPI_VERSION_URL = 'https://pypi.org/pypi/dbt/json'
def get_latest_version():
try:
resp = requests.get(PYPI_VERSION_URL)
data = resp.json()
version_string = data['info']['version']
except (json.JSONDecodeError, KeyError, requests.RequestException):
return None
return dbt.semver.VersionSpecifier.from_version_string(version_string)
def get_installed_version():
return dbt.semver.VersionSpecifier.from_version_string(__version__)
def get_version_information():
installed = get_installed_version()
latest = get_latest_version()
installed_s = installed.to_version_string(skip_matcher=True)
if latest is None:
latest_s = 'unknown'
else:
latest_s = latest.to_version_string(skip_matcher=True)
version_msg = ("installed version: {}\n"
" latest version: {}\n\n".format(installed_s, latest_s))
plugin_version_msg = "Plugins:\n"
for plugin_name, version in _get_dbt_plugins_info():
plugin_version_msg += ' - {plugin_name}: {version}\n'.format(
plugin_name=plugin_name, version=version
)
if latest is None:
return ("{}The latest version of dbt could not be determined!\n"
"Make sure that the following URL is accessible:\n{}\n\n{}"
.format(version_msg, PYPI_VERSION_URL, plugin_version_msg))
if installed == latest:
return "{}Up to date!\n\n{}".format(version_msg, plugin_version_msg)
elif installed > latest:
return ("{}Your version of dbt is ahead of the latest "
"release!\n\n{}".format(version_msg, plugin_version_msg))
else:
return ("{}Your version of dbt is out of date! "
"You can find instructions for upgrading here:\n"
"https://docs.getdbt.com/docs/installation\n\n{}"
.format(version_msg, plugin_version_msg))
def _get_adapter_plugin_names() -> Iterator[str]:
spec = importlib.util.find_spec('dbt.adapters')
# If None, then nothing provides an importable 'dbt.adapters', so we will
# not be reporting plugin versions today
if spec is None or spec.submodule_search_locations is None:
return
for adapters_path in spec.submodule_search_locations:
version_glob = os.path.join(adapters_path, '*', '__version__.py')
for version_path in glob.glob(version_glob):
# the path is like .../dbt/adapters/{plugin_name}/__version__.py
# except it could be \\ on windows!
plugin_root, _ = os.path.split(version_path)
_, plugin_name = os.path.split(plugin_root)
yield plugin_name
def _get_dbt_plugins_info():
for plugin_name in _get_adapter_plugin_names():
if plugin_name == 'core':
continue
try:
mod = importlib.import_module(
f'dbt.adapters.{plugin_name}.__version__'
)
except ImportError:
            # not an adapter
continue
yield plugin_name, mod.version
__version__ = '0.21.0a1'
installed = get_installed_version()
| 32.475248 | 77 | 0.658841 |
acf7413a8c0b0c341f2c65226eb24a90dc144a93 | 587 | py | Python | python_work/loteria.py | lucas-jsvd/python_crash_course_2nd | 8404e7769bef7b90b9b0897996c3a3f969bb72bd | [
"Unlicense"
] | null | null | null | python_work/loteria.py | lucas-jsvd/python_crash_course_2nd | 8404e7769bef7b90b9b0897996c3a3f969bb72bd | [
"Unlicense"
] | null | null | null | python_work/loteria.py | lucas-jsvd/python_crash_course_2nd | 8404e7769bef7b90b9b0897996c3a3f969bb72bd | [
"Unlicense"
] | null | null | null | from random import sample
bolas = (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, "A", "B", "C", "D", "E")
ticket = (1, 7, "A", 9)
num_loop = 0
while True:
sorteados = sample(bolas, k=4)
num_loop += 1
num_acertos = 0
for opcao in ticket:
if opcao in sorteados:
num_acertos += 1
print(f"Acertou {num_acertos}")
else:
print("Errou")
break
if num_acertos == 4:
break
print("As bolas sorteadas foram: ")
[print(f'\t{sorteado}') for sorteado in sorteados]
print(f"Você foi sorteado depois de {num_loop} loops.")
| 23.48 | 64 | 0.55707 |
acf7417d7b6cb2ec0e23eb40df38d7f7602cfaf9 | 702 | py | Python | traiders/backend/api/views/event.py | rdilruba/bounswe2019group2 | b373908a4a8e92481f359297aba07245f0a23c1c | [
"Apache-2.0"
] | 11 | 2019-02-15T12:08:32.000Z | 2019-11-14T19:25:09.000Z | traiders/backend/api/views/event.py | bounswe/bounswe2019group2 | 05d41cf7b6bc1b3f994e82495d2a885a6eaa7cf3 | [
"Apache-2.0"
] | 279 | 2019-02-13T14:57:39.000Z | 2022-03-12T00:02:30.000Z | traiders/backend/api/views/event.py | rdilruba/bounswe2019group2 | b373908a4a8e92481f359297aba07245f0a23c1c | [
"Apache-2.0"
] | 13 | 2019-03-20T08:30:55.000Z | 2021-01-31T16:49:14.000Z | from rest_framework.viewsets import GenericViewSet
from django_filters.rest_framework import DjangoFilterBackend
from rest_framework import mixins
from ..models import Event
from ..serializers import EventSerializer
from ..filters import EventFilterSet
from rest_framework.pagination import LimitOffsetPagination
class EventViewSet(mixins.RetrieveModelMixin,
mixins.ListModelMixin,
mixins.UpdateModelMixin,
GenericViewSet):
serializer_class = EventSerializer
queryset = Event.objects.all().order_by('-date')
filter_backends = [DjangoFilterBackend]
filterset_class = EventFilterSet
pagination_class = LimitOffsetPagination
| 35.1 | 61 | 0.767806 |
acf7425d54f29b7be8b72e2b4da494a556292500 | 9,963 | py | Python | idaes/generic_models/unit_models/tests/test_statejunction.py | eslickj/idaes-pse | 328ed07ffb0b4d98c03e972675ea32c41dd2531a | [
"RSA-MD"
] | 112 | 2019-02-11T23:16:36.000Z | 2022-03-23T20:59:57.000Z | idaes/generic_models/unit_models/tests/test_statejunction.py | eslickj/idaes-pse | 328ed07ffb0b4d98c03e972675ea32c41dd2531a | [
"RSA-MD"
] | 621 | 2019-03-01T14:44:12.000Z | 2022-03-31T19:49:25.000Z | idaes/generic_models/unit_models/tests/test_statejunction.py | eslickj/idaes-pse | 328ed07ffb0b4d98c03e972675ea32c41dd2531a | [
"RSA-MD"
] | 154 | 2019-02-01T23:46:33.000Z | 2022-03-23T15:07:10.000Z | #################################################################################
# The Institute for the Design of Advanced Energy Systems Integrated Platform
# Framework (IDAES IP) was produced under the DOE Institute for the
# Design of Advanced Energy Systems (IDAES), and is copyright (c) 2018-2021
# by the software owners: The Regents of the University of California, through
# Lawrence Berkeley National Laboratory, National Technology & Engineering
# Solutions of Sandia, LLC, Carnegie Mellon University, West Virginia University
# Research Corporation, et al. All rights reserved.
#
# Please see the files COPYRIGHT.md and LICENSE.md for full copyright and
# license information.
#################################################################################
"""
Tests for StateJunction unit model.
Authors: Andrew Lee
"""
import pytest
from pyomo.environ import (ConcreteModel,
SolverStatus,
TerminationCondition,
value)
from pyomo.util.check_units import assert_units_consistent
from idaes.core import FlowsheetBlock
from idaes.generic_models.unit_models.statejunction import StateJunction
from idaes.generic_models.properties.activity_coeff_models.BTX_activity_coeff_VLE \
import BTXParameterBlock
from idaes.generic_models.properties import iapws95
from idaes.generic_models.properties.examples.saponification_thermo import \
SaponificationParameterBlock
from idaes.core.util.model_statistics import (degrees_of_freedom,
number_variables,
number_total_constraints,
number_unused_variables)
from idaes.core.util.testing import (PhysicalParameterTestBlock,
initialization_tester)
from idaes.core.util import get_solver
# -----------------------------------------------------------------------------
# Get default solver for testing
solver = get_solver()
# -----------------------------------------------------------------------------
@pytest.mark.unit
def test_config():
m = ConcreteModel()
m.fs = FlowsheetBlock(default={"dynamic": False})
m.fs.properties = PhysicalParameterTestBlock()
m.fs.unit = StateJunction(default={"property_package": m.fs.properties})
# Check unit config arguments
assert len(m.fs.unit.config) == 4
assert not m.fs.unit.config.dynamic
assert not m.fs.unit.config.has_holdup
assert m.fs.unit.config.property_package is m.fs.properties
# -----------------------------------------------------------------------------
class TestSaponification(object):
@pytest.fixture(scope="class")
def sapon(self):
m = ConcreteModel()
m.fs = FlowsheetBlock(default={"dynamic": False})
m.fs.properties = SaponificationParameterBlock()
m.fs.unit = StateJunction(
default={"property_package": m.fs.properties})
m.fs.unit.inlet.flow_vol.fix(1.0e-03)
m.fs.unit.inlet.conc_mol_comp[0, "H2O"].fix(55388.0)
m.fs.unit.inlet.conc_mol_comp[0, "NaOH"].fix(100.0)
m.fs.unit.inlet.conc_mol_comp[0, "EthylAcetate"].fix(100.0)
m.fs.unit.inlet.conc_mol_comp[0, "SodiumAcetate"].fix(0.0)
m.fs.unit.inlet.conc_mol_comp[0, "Ethanol"].fix(0.0)
m.fs.unit.inlet.temperature.fix(303.15)
m.fs.unit.inlet.pressure.fix(101325.0)
return m
@pytest.mark.build
@pytest.mark.unit
def test_build(self, sapon):
assert hasattr(sapon.fs.unit, "properties")
assert hasattr(sapon.fs.unit, "inlet")
assert len(sapon.fs.unit.inlet.vars) == 4
assert hasattr(sapon.fs.unit.inlet, "flow_vol")
assert hasattr(sapon.fs.unit.inlet, "conc_mol_comp")
assert hasattr(sapon.fs.unit.inlet, "temperature")
assert hasattr(sapon.fs.unit.inlet, "pressure")
assert hasattr(sapon.fs.unit, "outlet")
assert len(sapon.fs.unit.outlet.vars) == 4
assert hasattr(sapon.fs.unit.outlet, "flow_vol")
assert hasattr(sapon.fs.unit.outlet, "conc_mol_comp")
assert hasattr(sapon.fs.unit.outlet, "temperature")
assert hasattr(sapon.fs.unit.outlet, "pressure")
assert number_variables(sapon) == 8
assert number_total_constraints(sapon) == 0
assert number_unused_variables(sapon) == 8
@pytest.mark.component
def test_units(self, sapon):
assert_units_consistent(sapon)
@pytest.mark.unit
def test_dof(self, sapon):
assert degrees_of_freedom(sapon) == 0
@pytest.mark.solver
@pytest.mark.skipif(solver is None, reason="Solver not available")
@pytest.mark.component
def test_initialize(self, sapon):
initialization_tester(sapon)
# No solve, as problem has no constraints
@pytest.mark.ui
@pytest.mark.unit
def test_report(self, sapon):
sapon.fs.unit.report()
# -----------------------------------------------------------------------------
class TestBTX(object):
@pytest.fixture(scope="class")
def btx(self):
m = ConcreteModel()
m.fs = FlowsheetBlock(default={"dynamic": False})
m.fs.properties = BTXParameterBlock(default={"valid_phase": 'Liq'})
m.fs.unit = StateJunction(default={
"property_package": m.fs.properties})
m.fs.unit.inlet.flow_mol[0].fix(5) # mol/s
m.fs.unit.inlet.temperature[0].fix(365) # K
m.fs.unit.inlet.pressure[0].fix(101325) # Pa
m.fs.unit.inlet.mole_frac_comp[0, "benzene"].fix(0.5)
m.fs.unit.inlet.mole_frac_comp[0, "toluene"].fix(0.5)
return m
@pytest.mark.build
@pytest.mark.unit
def test_build(self, btx):
assert hasattr(btx.fs.unit, "properties")
assert hasattr(btx.fs.unit, "inlet")
assert len(btx.fs.unit.inlet.vars) == 4
assert hasattr(btx.fs.unit.inlet, "flow_mol")
assert hasattr(btx.fs.unit.inlet, "mole_frac_comp")
assert hasattr(btx.fs.unit.inlet, "temperature")
assert hasattr(btx.fs.unit.inlet, "pressure")
assert hasattr(btx.fs.unit, "outlet")
assert len(btx.fs.unit.outlet.vars) == 4
assert hasattr(btx.fs.unit.outlet, "flow_mol")
assert hasattr(btx.fs.unit.outlet, "mole_frac_comp")
assert hasattr(btx.fs.unit.outlet, "temperature")
assert hasattr(btx.fs.unit.outlet, "pressure")
assert number_variables(btx) == 8
assert number_total_constraints(btx) == 3
assert number_unused_variables(btx) == 2
@pytest.mark.component
def test_units(self, btx):
assert_units_consistent(btx)
@pytest.mark.unit
def test_dof(self, btx):
assert degrees_of_freedom(btx) == 0
@pytest.mark.solver
@pytest.mark.skipif(solver is None, reason="Solver not available")
@pytest.mark.component
def test_initialize(self, btx):
initialization_tester(btx)
@pytest.mark.solver
@pytest.mark.skipif(solver is None, reason="Solver not available")
@pytest.mark.component
def test_solve(self, btx):
results = solver.solve(btx)
# Check for optimal solution
assert results.solver.termination_condition == \
TerminationCondition.optimal
assert results.solver.status == SolverStatus.ok
@pytest.mark.solver
@pytest.mark.skipif(solver is None, reason="Solver not available")
@pytest.mark.component
def test_solution(self, btx):
assert (pytest.approx(5, abs=1e-3) ==
value(btx.fs.unit.outlet.flow_mol[0]))
assert (pytest.approx(365, abs=1e-2) ==
value(btx.fs.unit.outlet.temperature[0]))
assert (pytest.approx(101325, abs=1e2) ==
value(btx.fs.unit.outlet.pressure[0]))
@pytest.mark.ui
@pytest.mark.unit
def test_report(self, btx):
btx.fs.unit.report()
# -----------------------------------------------------------------------------
@pytest.mark.iapws
@pytest.mark.skipif(not iapws95.iapws95_available(),
reason="IAPWS not available")
class TestIAPWS(object):
@pytest.fixture(scope="class")
def iapws(self):
m = ConcreteModel()
m.fs = FlowsheetBlock(default={"dynamic": False})
m.fs.properties = iapws95.Iapws95ParameterBlock()
m.fs.unit = StateJunction(default={
"property_package": m.fs.properties})
m.fs.unit.inlet.flow_mol[0].fix(100)
m.fs.unit.inlet.enth_mol[0].fix(4000)
m.fs.unit.inlet.pressure[0].fix(101325)
return m
@pytest.mark.build
@pytest.mark.unit
def test_build(self, iapws):
assert hasattr(iapws.fs.unit, "properties")
assert len(iapws.fs.unit.inlet.vars) == 3
assert hasattr(iapws.fs.unit.inlet, "flow_mol")
assert hasattr(iapws.fs.unit.inlet, "enth_mol")
assert hasattr(iapws.fs.unit.inlet, "pressure")
assert hasattr(iapws.fs.unit, "outlet")
assert len(iapws.fs.unit.outlet.vars) == 3
assert hasattr(iapws.fs.unit.outlet, "flow_mol")
assert hasattr(iapws.fs.unit.outlet, "enth_mol")
assert hasattr(iapws.fs.unit.outlet, "pressure")
assert number_variables(iapws) == 3
assert number_total_constraints(iapws) == 0
assert number_unused_variables(iapws) == 3
@pytest.mark.component
def test_units(self, iapws):
assert_units_consistent(iapws)
@pytest.mark.unit
def test_dof(self, iapws):
assert degrees_of_freedom(iapws) == 0
@pytest.mark.solver
@pytest.mark.skipif(solver is None, reason="Solver not available")
@pytest.mark.component
def test_initialize(self, iapws):
initialization_tester(iapws)
# No solve, as problem has no constraints
@pytest.mark.ui
@pytest.mark.unit
def test_report(self, iapws):
iapws.fs.unit.report()
| 35.329787 | 83 | 0.621399 |
acf742f88baf9b01b7a1bfec2a25571b71f2ef1a | 701 | py | Python | everest/repositories/memory/aggregate.py | helixyte/everest | 70c9b93c3061db5cb62428349d18b8fb8566411b | [
"MIT"
] | 3 | 2015-03-10T17:38:25.000Z | 2017-04-29T03:47:06.000Z | everest/repositories/memory/aggregate.py | helixyte/everest | 70c9b93c3061db5cb62428349d18b8fb8566411b | [
"MIT"
] | 1 | 2015-03-02T16:02:41.000Z | 2015-03-02T16:02:41.000Z | everest/repositories/memory/aggregate.py | cenix/everest | 70c9b93c3061db5cb62428349d18b8fb8566411b | [
"MIT"
] | 1 | 2020-07-12T22:46:59.000Z | 2020-07-12T22:46:59.000Z | """
Aggregate for the in-memory and filesystem backends.
This file is part of the everest project.
See LICENSE.txt for licensing, CONTRIBUTORS.txt for contributor information.
Created on Jan 7, 2013.
"""
from everest.entities.base import RootAggregate
from everest.querying.base import EXPRESSION_KINDS
__docformat__ = 'reStructuredText en'
__all__ = ['MemoryAggregate',
]
class MemoryAggregate(RootAggregate):
"""
Root aggregate implementation for the in-memory repository.
:note: When entities without a slug are added to a memory aggregate, they
can not be retrieved using the :meth:`get_by_slug` method.
"""
_expression_kind = EXPRESSION_KINDS.EVAL
| 28.04 | 77 | 0.746077 |
acf743c32e7e7c2aa5cd3f56c0c9b291cbdc7130 | 3,805 | py | Python | maddpg/utils.py | HAXRD/PIC | 658b4dd6b01e64413d5f8f0107d9167f1bd78546 | [
"MIT"
] | 28 | 2019-10-31T00:38:10.000Z | 2022-03-21T12:33:03.000Z | maddpg/utils.py | HAXRD/PIC | 658b4dd6b01e64413d5f8f0107d9167f1bd78546 | [
"MIT"
] | 10 | 2019-11-27T12:37:25.000Z | 2021-06-07T11:52:34.000Z | maddpg/utils.py | HAXRD/PIC | 658b4dd6b01e64413d5f8f0107d9167f1bd78546 | [
"MIT"
] | 13 | 2019-10-31T00:38:17.000Z | 2022-03-06T04:24:09.000Z | import csv
def adjust_learning_rate(optimizer, steps, max_steps, start_decrease_step, init_lr):
"""Sets the learning rate to the initial LR decayed by 10 every 30 epochs"""
if steps > start_decrease_step:
lr = init_lr * (1 - ((steps - start_decrease_step) / (max_steps - start_decrease_step)))
for param_group in optimizer.param_groups:
param_group['lr'] = lr
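# Worked example (illustrative): with init_lr=0.01, start_decrease_step=100 and
# max_steps=200, at steps=150 the factor is 1 - 50/100 = 0.5, so lr becomes 0.005;
# at steps=200 the learning rate reaches 0.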
def dict2csv(output_dict, f_name):
with open(f_name, mode='w') as f:
writer = csv.writer(f, delimiter=",")
for k, v in output_dict.items():
v = [k] + v
writer.writerow(v)
def n_actions(action_spaces):
"""
:param action_space: list
:return: n_action: list
"""
n_actions = []
from gym import spaces
from multiagent.environment import MultiDiscrete
for action_space in action_spaces:
if isinstance(action_space, spaces.discrete.Discrete):
n_actions.append(action_space.n)
elif isinstance(action_space, MultiDiscrete):
total_n_action = 0
one_agent_n_action = 0
for h, l in zip(action_space.high, action_space.low):
total_n_action += int(h - l + 1)
one_agent_n_action += int(h - l + 1)
n_actions.append(one_agent_n_action)
else:
raise NotImplementedError
return n_actions
def grad_norm(model):
total_norm = 0
for p in model.parameters():
param_norm = p.grad.data.norm(2)
total_norm += param_norm.item() ** 2
total_norm = total_norm ** (1. / 2)
return total_norm
def make_env(scenario_name, arglist, benchmark=False):
from multiagent.environment import MultiAgentEnv
import multiagent.scenarios as scenarios
# load scenario from script
scenario = scenarios.load(scenario_name + ".py").Scenario()
# create world
world = scenario.make_world()
# create multiagent environment
if benchmark:
env = MultiAgentEnv(world, scenario.reset_world, scenario.reward, scenario.observation, scenario.benchmark_data,
seed_callback=scenario.seed, cam_range=scenario.world_radius)
else:
env = MultiAgentEnv(world, scenario.reset_world, scenario.reward, scenario.observation,
seed_callback=scenario.seed, cam_range=scenario.world_radius)
return env
def make_env_vec(scenario_name, arglist, benchmark=False):
from multiagent.environment_vec import MultiAgentEnv
import multiagent.scenarios as scenarios
# load scenario from script
scenario = scenarios.load(scenario_name + ".py").Scenario()
# create world
world = scenario.make_world()
# create multiagent environment
if benchmark:
env = MultiAgentEnv(world, scenario.reset_world, scenario.reward, scenario.observation, scenario.benchmark_data,
seed_callback=scenario.seed, cam_range=scenario.world_radius)
else:
env = MultiAgentEnv(world, scenario.reset_world, scenario.reward, scenario.observation,
seed_callback=scenario.seed, cam_range=scenario.world_radius)
return env
def copy_actor_policy(s_agent, t_agent):
if hasattr(s_agent, 'actors'):
for i in range(s_agent.n_group):
state_dict = s_agent.actors[i].state_dict()
for k, v in state_dict.items():
state_dict[k] = v.cpu()
t_agent.actors[i].load_state_dict(state_dict)
t_agent.actors_params, t_agent.critic_params = None, None
else:
state_dict = s_agent.actor.state_dict()
for k, v in state_dict.items():
state_dict[k] = v.cpu()
t_agent.actor.load_state_dict(state_dict)
t_agent.actor_params, t_agent.critic_params = None, None
| 36.586538 | 120 | 0.660184 |
acf743cdd2c188543d463657547d4552b4e2b67d | 955 | py | Python | buildroot/support/testing/tests/package/test_python_incremental.py | TonyApuzzo/hassos | bb201fb84209a1bb5cf0611bd09e3610701d737d | [
"Apache-2.0"
] | 1 | 2019-02-12T06:53:47.000Z | 2019-02-12T06:53:47.000Z | buildroot/support/testing/tests/package/test_python_incremental.py | berg/hassos | 30b599acc6fda01e6a07181d01e8e03b365424f4 | [
"Apache-2.0"
] | null | null | null | buildroot/support/testing/tests/package/test_python_incremental.py | berg/hassos | 30b599acc6fda01e6a07181d01e8e03b365424f4 | [
"Apache-2.0"
] | null | null | null | from tests.package.test_python import TestPythonBase
class TestPythonIncremental(TestPythonBase):
def str_test(self):
cmd = self.interpreter + " -c 'import incremental;"
cmd += "v = incremental.Version(\"package\", 1, 2, 3, release_candidate=4);"
cmd += "assert(str(v) == \"[package, version 1.2.3rc4]\")'"
_, exit_code = self.emulator.run(cmd, timeout=30)
self.assertEqual(exit_code, 0)
class TestPythonPy2Incremental(TestPythonIncremental):
config = TestPythonBase.config + \
"""
BR2_PACKAGE_PYTHON=y
BR2_PACKAGE_PYTHON_INCREMENTAL=y
"""
def test_run(self):
self.login()
self.str_test()
class TestPythonPy3Incremental(TestPythonIncremental):
config = TestPythonBase.config + \
"""
BR2_PACKAGE_PYTHON3=y
BR2_PACKAGE_PYTHON_INCREMENTAL=y
"""
def test_run(self):
self.login()
self.str_test()
| 27.285714 | 84 | 0.636649 |
acf744fdf8842a82eb4361a2cbe9b1da1a7a5736 | 1,553 | py | Python | djangodemo/polls/views.py | ThansksJava/learnPython | 64c8df012bf91582ee10459610b2157f535f78ad | [
"Apache-2.0"
] | null | null | null | djangodemo/polls/views.py | ThansksJava/learnPython | 64c8df012bf91582ee10459610b2157f535f78ad | [
"Apache-2.0"
] | null | null | null | djangodemo/polls/views.py | ThansksJava/learnPython | 64c8df012bf91582ee10459610b2157f535f78ad | [
"Apache-2.0"
] | null | null | null | from django.http import HttpResponse, Http404, HttpResponseRedirect
from django.shortcuts import render, get_object_or_404
from django.urls import reverse
from django.views import generic
from polls.models import Question, Choice
class IndexView(generic.ListView):
template_name = 'polls/index.html'
context_object_name = 'latest_question_list'
def get_queryset(self):
"""Return the last five published questions."""
return Question.objects.order_by('-pub_date')[:5]
class DetailView(generic.DetailView):
model = Question
template_name = 'polls/detail.html'
class ResultsView(generic.DetailView):
model = Question
template_name = 'polls/results.html'
def vote(request, question_id):
... # same as above, no changes needed.
def vote(request, question_id):
question = get_object_or_404(Question, pk=question_id)
try:
selected_choice = question.choice_set.get(pk=request.POST['choice'])
except (KeyError, Choice.DoesNotExist):
# Redisplay the question voting form.
return render(request, 'polls/detail.html', {
'question': question,
'error_message': "You didn't select a choice.",
})
else:
selected_choice.votes += 1
selected_choice.save()
# Always return an HttpResponseRedirect after successfully dealing
# with POST data. This prevents data from being posted twice if a
# user hits the Back button.
return HttpResponseRedirect(reverse('polls:results', args=(question.id,))) | 32.354167 | 82 | 0.702511 |
acf7453ca363a001ff155e1880cd9c6f114c61bd | 1,448 | py | Python | economy with SQLITE3/plugins/plugins_funcs.py | CODING-PALACE/economy-bot-discord.py | 9018f8a6de4501cba6702f3e7c21188cc593dc2c | [
"MIT"
] | 13 | 2021-04-12T13:40:53.000Z | 2022-03-23T08:13:40.000Z | economy with SQLITE3/plugins/plugins_funcs.py | CODING-PALACE/economy-bot-discord.py | 9018f8a6de4501cba6702f3e7c21188cc593dc2c | [
"MIT"
] | 1 | 2021-12-08T13:24:01.000Z | 2021-12-13T12:39:26.000Z | economy with SQLITE3/plugins/plugins_funcs.py | CODING-PALACE/economy-bot-discord.py | 9018f8a6de4501cba6702f3e7c21188cc593dc2c | [
"MIT"
] | 14 | 2021-04-13T04:33:40.000Z | 2022-03-10T14:38:34.000Z | import sqlite3
file_name = "your_database_file.db" # Enter your file_name here ... (placeholder; point this at your SQLite database file)
commands_list = ["test", "command1", "command2"] # Enter your command names here !
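# Each command name in commands_list becomes a TEXT column in the "plugins"
# table (default 'off'), so every guild row stores an on/off toggle per command.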
async def update_plugs():
db = sqlite3.connect(file_name)
cursor = db.cursor()
cursor.execute(f"""CREATE TABLE IF NOT EXISTS plugins (guildID INTEGER)""")
db.commit()
for cdm in commands_list:
try:
cursor.execute(f"""ALTER TABLE plugins ADD COLUMN `{cdm}` TEXT DEFAULT 'off'""")
except:
pass
db.commit()
cursor.close()
db.close()
async def open_plug(guild):
await update_plugs()
db = sqlite3.connect(file_name)
cursor = db.cursor()
cursor.execute(f"""SELECT * FROM plugins WHERE guildID = {guild.id}""")
data = cursor.fetchone()
if data is None:
cursor.execute(f"""INSERT INTO plugins(guildID) VALUES({guild.id})""")
db.commit()
cursor.close()
db.close()
async def get_plug(guild, mode):
db = sqlite3.connect(file_name)
cursor = db.cursor()
cursor.execute(f"""SELECT `{mode}` FROM plugins WHERE guildID = {guild.id}""")
data = cursor.fetchone()
cursor.close()
db.close()
return data[0]
async def update_plug(guild, button, mode):
db = sqlite3.connect(file_name)
cursor = db.cursor()
cursor.execute(f"""UPDATE plugins SET `{mode}` = '{button}' WHERE guildID = {guild.id}""")
db.commit()
cursor.close()
db.close()
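# Illustrative usage sketch (`ctx` is a hypothetical discord.py command context):
#   await open_plug(ctx.guild)                    # make sure the guild has a row
#   state = await get_plug(ctx.guild, "test")     # 'on' or 'off'
#   await update_plug(ctx.guild, "on", "test")    # toggle the "test" command on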
| 22.276923 | 94 | 0.622238 |
acf7459093e2447c5fc2577442c1faff1e8e5314 | 1,712 | py | Python | kafka_consumer/tests/test_kafka_consumer.py | seants/integrations-core | 1e5548915fc24f1bbd095e845f0940c22992b09c | [
"BSD-3-Clause"
] | 1 | 2020-08-08T02:01:01.000Z | 2020-08-08T02:01:01.000Z | kafka_consumer/tests/test_kafka_consumer.py | seants/integrations-core | 1e5548915fc24f1bbd095e845f0940c22992b09c | [
"BSD-3-Clause"
] | 1 | 2018-08-15T05:50:17.000Z | 2018-08-15T05:50:17.000Z | kafka_consumer/tests/test_kafka_consumer.py | seants/integrations-core | 1e5548915fc24f1bbd095e845f0940c22992b09c | [
"BSD-3-Clause"
] | 1 | 2019-03-06T14:30:52.000Z | 2019-03-06T14:30:52.000Z | # (C) Datadog, Inc. 2018
# All rights reserved
# Licensed under Simplified BSD License (see LICENSE)
import time
import pytest
from .common import is_supported
from datadog_checks.kafka_consumer import KafkaCheck
BROKER_METRICS = [
'kafka.broker_offset',
]
CONSUMER_METRICS = [
'kafka.consumer_offset',
'kafka.consumer_lag',
]
@pytest.mark.kafka
def test_check_kafka(kafka_cluster, kafka_producer, kafka_consumer, kafka_instance, aggregator):
"""
Testing Kafka_consumer check.
"""
if not is_supported(['kafka']):
pytest.skip("kafka consumer offsets not supported in current environment")
if not kafka_producer.is_alive():
kafka_producer.start()
time.sleep(5)
if not kafka_consumer.is_alive():
kafka_consumer.start()
time.sleep(5)
kafka_consumer_check = KafkaCheck('kafka_consumer', {}, {})
kafka_consumer_check.check(kafka_instance)
for name, consumer_group in kafka_instance['consumer_groups'].iteritems():
for topic, partitions in consumer_group.iteritems():
for partition in partitions:
tags = ["topic:{}".format(topic),
"partition:{}".format(partition)] + ['optional:tag1']
for mname in BROKER_METRICS:
aggregator.assert_metric(mname, tags=tags, at_least=1)
for mname in CONSUMER_METRICS:
aggregator.assert_metric(mname, tags=tags +
["source:kafka", "consumer_group:{}".format(name)], at_least=1)
# let's reassert for the __consumer_offsets - multiple partitions
aggregator.assert_metric('kafka.broker_offset', at_least=1)
| 31.703704 | 108 | 0.657126 |
acf747378a1f41148635864ae576f1a093e645f3 | 251 | py | Python | coindeblend/__init__.py | aboucaud/deblend | 59b950d7de82814a42671e22497f87f3653942f6 | [
"BSD-3-Clause"
] | 3 | 2021-09-03T10:10:03.000Z | 2021-09-03T20:01:03.000Z | coindeblend/__init__.py | aboucaud/deblend | 59b950d7de82814a42671e22497f87f3653942f6 | [
"BSD-3-Clause"
] | 3 | 2021-08-25T15:47:28.000Z | 2022-02-10T00:19:44.000Z | coindeblend/__init__.py | aboucaud/deblend | 59b950d7de82814a42671e22497f87f3653942f6 | [
"BSD-3-Clause"
] | 2 | 2020-09-28T18:35:59.000Z | 2020-10-01T14:08:10.000Z | """
_ _ _ _
__| | ___| |__ | | ___ _ __ __| |
/ _` |/ _ \ '_ \| |/ _ \ '_ \ / _` |
| (_| | __/ |_) | | __/ | | | (_| |
\__,_|\___|_.__/|_|\___|_| |_|\__,_|
Suite of deep learning methods for galaxy deblending
"""
| 25.1 | 52 | 0.378486 |
acf747f3e5112562b294fff48b5c56afdc66a52d | 2,523 | py | Python | homeassistant/components/geniushub/switch.py | MrDelik/core | 93a66cc357b226389967668441000498a10453bb | [
"Apache-2.0"
] | 30,023 | 2016-04-13T10:17:53.000Z | 2020-03-02T12:56:31.000Z | homeassistant/components/geniushub/switch.py | MrDelik/core | 93a66cc357b226389967668441000498a10453bb | [
"Apache-2.0"
] | 24,710 | 2016-04-13T08:27:26.000Z | 2020-03-02T12:59:13.000Z | homeassistant/components/geniushub/switch.py | MrDelik/core | 93a66cc357b226389967668441000498a10453bb | [
"Apache-2.0"
] | 11,956 | 2016-04-13T18:42:31.000Z | 2020-03-02T09:32:12.000Z | """Support for Genius Hub switch/outlet devices."""
from __future__ import annotations
from datetime import timedelta
import voluptuous as vol
from homeassistant.components.switch import SwitchDeviceClass, SwitchEntity
from homeassistant.core import HomeAssistant
from homeassistant.helpers import config_validation as cv, entity_platform
from homeassistant.helpers.entity_platform import AddEntitiesCallback
from homeassistant.helpers.typing import ConfigType, DiscoveryInfoType
from . import ATTR_DURATION, DOMAIN, GeniusZone
GH_ON_OFF_ZONE = "on / off"
SVC_SET_SWITCH_OVERRIDE = "set_switch_override"
SET_SWITCH_OVERRIDE_SCHEMA = {
vol.Optional(ATTR_DURATION): vol.All(
cv.time_period,
vol.Range(min=timedelta(minutes=5), max=timedelta(days=1)),
),
}
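# Override durations accepted by the set_switch_override service are limited
# to the range 5 minutes - 1 day (see vol.Range above).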
async def async_setup_platform(
hass: HomeAssistant,
config: ConfigType,
async_add_entities: AddEntitiesCallback,
discovery_info: DiscoveryInfoType | None = None,
) -> None:
"""Set up the Genius Hub switch entities."""
if discovery_info is None:
return
broker = hass.data[DOMAIN]["broker"]
async_add_entities(
[
GeniusSwitch(broker, z)
for z in broker.client.zone_objs
if z.data["type"] == GH_ON_OFF_ZONE
]
)
# Register custom services
platform = entity_platform.async_get_current_platform()
platform.async_register_entity_service(
SVC_SET_SWITCH_OVERRIDE,
SET_SWITCH_OVERRIDE_SCHEMA,
"async_turn_on",
)
class GeniusSwitch(GeniusZone, SwitchEntity):
"""Representation of a Genius Hub switch."""
@property
def device_class(self):
"""Return the class of this device, from component DEVICE_CLASSES."""
return SwitchDeviceClass.OUTLET
@property
def is_on(self) -> bool:
"""Return the current state of the on/off zone.
The zone is considered 'on' if & only if it is override/on (e.g. timer/on is 'off').
"""
return self._zone.data["mode"] == "override" and self._zone.data["setpoint"]
async def async_turn_off(self, **kwargs) -> None:
"""Send the zone to Timer mode.
The zone is deemed 'off' in this mode, although the plugs may actually be on.
"""
await self._zone.set_mode("timer")
async def async_turn_on(self, **kwargs) -> None:
"""Set the zone to override/on ({'setpoint': true}) for x seconds."""
await self._zone.set_override(1, kwargs.get(ATTR_DURATION, 3600))
| 30.035714 | 92 | 0.691637 |
acf7482165f8f691480a0ddd81708f64a1aa1eb0 | 2,003 | py | Python | custom_components/fpl/sensor_ProjectedBillSensor.py | Dominic7/hass-fpl | fa5ba8a1b6f2e0b0c812edfe5568074c7c42a764 | [
"MIT"
] | 12 | 2020-10-16T15:13:03.000Z | 2022-03-23T15:16:00.000Z | custom_components/fpl/sensor_ProjectedBillSensor.py | Dominic7/hass-fpl | fa5ba8a1b6f2e0b0c812edfe5568074c7c42a764 | [
"MIT"
] | 27 | 2020-01-18T19:30:32.000Z | 2022-03-28T22:27:33.000Z | custom_components/fpl/sensor_ProjectedBillSensor.py | Dominic7/hass-fpl | fa5ba8a1b6f2e0b0c812edfe5568074c7c42a764 | [
"MIT"
] | 16 | 2020-06-16T16:45:37.000Z | 2022-03-24T03:26:03.000Z | from .fplEntity import FplEntity
class FplProjectedBillSensor(FplEntity):
def __init__(self, coordinator, config, account):
super().__init__(coordinator, config, account, "Projected Bill")
@property
def state(self):
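        # Prefer the budget-billing projection when the account uses budget billing;
        # otherwise fall back to the standard projected bill.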
budget = self.getData("budget_bill")
budget_billing_projected_bill = self.getData("budget_billing_projected_bill")
if budget == True and budget_billing_projected_bill is not None:
return self.getData("budget_billing_projected_bill")
return self.getData("projected_bill")
def defineAttributes(self):
"""Return the state attributes."""
attributes = {}
try:
if self.getData("budget_bill") == True:
attributes["budget_bill"] = self.getData("budget_bill")
except:
pass
return attributes
@property
def icon(self):
return "mdi:currency-usd"
# Defered Amount
class DeferedAmountSensor(FplEntity):
def __init__(self, coordinator, config, account):
super().__init__(coordinator, config, account, "Defered Amount")
@property
def state(self):
if self.getData("budget_bill") == True:
return self.getData("defered_amount")
return 0
@property
def icon(self):
return "mdi:currency-usd"
class ProjectedBudgetBillSensor(FplEntity):
def __init__(self, coordinator, config, account):
super().__init__(coordinator, config, account, "Projected Budget Bill")
@property
def state(self):
return self.getData("budget_billing_projected_bill")
@property
def icon(self):
return "mdi:currency-usd"
class ProjectedActualBillSensor(FplEntity):
def __init__(self, coordinator, config, account):
super().__init__(coordinator, config, account, "Projected Actual Bill")
@property
def state(self):
return self.getData("projected_bill")
@property
def icon(self):
return "mdi:currency-usd"
| 27.067568 | 85 | 0.658512 |
acf748dda409de2c33d4574b81ff630f22dee2de | 151 | py | Python | querybook/server/lib/schedule.py | shivammmmm/querybook | 71263eb7db79e56235ea752f2cf3339ca9b3a092 | [
"Apache-2.0"
] | 1 | 2021-11-03T08:01:45.000Z | 2021-11-03T08:01:45.000Z | querybook/server/lib/schedule.py | shivammmmm/querybook | 71263eb7db79e56235ea752f2cf3339ca9b3a092 | [
"Apache-2.0"
] | 593 | 2021-07-01T10:34:25.000Z | 2022-03-31T23:24:40.000Z | querybook/server/lib/schedule.py | shivammmmm/querybook | 71263eb7db79e56235ea752f2cf3339ca9b3a092 | [
"Apache-2.0"
] | 1 | 2021-04-02T17:43:41.000Z | 2021-04-02T17:43:41.000Z | from lib.utils.plugin import import_plugin
ALL_PLUGIN_JOBS = import_plugin("job_plugin", "ALL_PLUGIN_JOBS", {})
ALL_JOBS = {**{}, **ALL_PLUGIN_JOBS}
| 25.166667 | 68 | 0.748344 |
acf74915e32f0fdbf4ef90b9df7ceddd329dc6c5 | 1,232 | py | Python | project-euler/problems/problem31.py | pietrodll/coding-challenges | 45201c0786c6156c116a7b2659876cf82c3e84ac | [
"MIT"
] | null | null | null | project-euler/problems/problem31.py | pietrodll/coding-challenges | 45201c0786c6156c116a7b2659876cf82c3e84ac | [
"MIT"
] | null | null | null | project-euler/problems/problem31.py | pietrodll/coding-challenges | 45201c0786c6156c116a7b2659876cf82c3e84ac | [
"MIT"
] | null | null | null | """
Problem 31
==========
In the United Kingdom the currency is made up of pound (£) and pence (p). There are eight
coins in general circulation:
1p, 2p, 5p, 10p, 20p, 50p, £1 (100p), and £2 (200p).
It is possible to make £2 in the following way:
1×£1 + 1×50p + 2×20p + 1×5p + 1×2p + 3×1p
How many different ways can £2 be made using any number of coins?
"""
def compute_combinations(amount, coins):
coins = sorted(coins, reverse=True)
def aux(tot, to_use):
combinations = []
if tot == 0:
combinations.append([0] * (len(coins) - to_use))
elif to_use == len(coins) - 1 and tot % coins[-1] == 0:
combinations.append([tot // coins[-1]])
elif to_use < len(coins) - 1:
for m in range(tot // coins[to_use] + 1):
sub_combinations = aux(tot - m * coins[to_use], to_use + 1)
combinations += [[m] + comb for comb in sub_combinations]
return combinations
return aux(amount, 0)
def count_possible_combinations(amount, coins):
comb = compute_combinations(amount, coins)
return len(comb)
if __name__ == "__main__":
print(count_possible_combinations(200, [1, 2, 5, 10, 20, 50, 100, 200]))
| 25.666667 | 89 | 0.599838 |
acf749d064b8ce4760447c10942e318a9b9a8a18 | 1,373 | py | Python | models/weapon.py | ericgreveson/xwing | 5760bf9695b5e081b051f29dce1ba1bf2bc94555 | [
"MIT"
] | null | null | null | models/weapon.py | ericgreveson/xwing | 5760bf9695b5e081b051f29dce1ba1bf2bc94555 | [
"MIT"
] | null | null | null | models/weapon.py | ericgreveson/xwing | 5760bf9695b5e081b051f29dce1ba1bf2bc94555 | [
"MIT"
] | null | null | null | class Weapon:
"""
This represents a weapon on a ship
"""
def __init__(self, name, ranges, firing_arc, firing_direction=0):
"""
Constructor
name: The name of the weapon, for user display
ranges: The list of ranges that the weapon can attack at, e.g. [1, 2, 3]
firing_arc: The weapon's firing arc in degrees (e.g. 80)
firing_direction: The weapon's firing direction, in degrees (e.g. 180 if a rear arc)
"""
self.name = name
self.ranges = ranges
self.firing_arc = firing_arc
self.firing_direction = firing_direction
def attack_bonus(self, attack_range):
"""
Compute any range bonus on an attack
attack_range: The range (1, 2 or 3) of the attack
return: Integer bonus, or 0 if no bonus
"""
return 1 if attack_range < 2 else 0
class PrimaryWeapon(Weapon):
"""
This represents a ship's primary weapon
"""
def __init__(self):
"""
Constructor
"""
super().__init__("primary", ranges=[1, 2, 3], firing_arc=80)
class PrimaryTurret(PrimaryWeapon):
"""
This represents a primary weapon turret
"""
def __init__(self):
"""
Constructor
"""
super().__init__()
self.firing_arc = 360
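# Illustrative usage (names as defined above, values follow from the defaults):
#   turret = PrimaryTurret()
#   turret.firing_arc        # 360
#   turret.attack_bonus(1)   # 1 -> +1 bonus at range 1
#   turret.attack_bonus(3)   # 0 -> no bonus at range 3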
| 29.847826 | 93 | 0.569556 |
acf74a50bf8ae0433a786308bc088c75eb1fc2e2 | 1,581 | py | Python | detailsScrape/comexd/comexd20.py | Asyikin98/SkinFerm | 72fd1ad6339c96adf5ec154bde566de9eb1472c3 | [
"MIT"
] | null | null | null | detailsScrape/comexd/comexd20.py | Asyikin98/SkinFerm | 72fd1ad6339c96adf5ec154bde566de9eb1472c3 | [
"MIT"
] | 2 | 2021-02-03T01:55:13.000Z | 2021-04-30T12:46:33.000Z | detailsScrape/comexd/comexd20.py | Asyikin98/SkinFerm | 72fd1ad6339c96adf5ec154bde566de9eb1472c3 | [
"MIT"
] | null | null | null | import urllib.request
import random
from bs4 import BeautifulSoup
from requests import get
import mysql.connector
conn = mysql.connector.connect(user="root", passwd="",host="localhost", database="product")
cursor = conn.cursor()
sql = """INSERT INTO comexd (about, rate, top, comment, dari) VALUES (%s, %s, %s, %s, %s)"""
def crawl_url(pageUrl, excomd_arr):
url = 'https://www.ulta.com/pure-sugar-smooth-glow-grapeseed-lip-scrub?productId=pimprod2000652'
page = get(url)
soup = BeautifulSoup(page.text, 'html.parser')
type(soup)
#######################################################for product 1############################################################################
ex = soup.find_all('div', class_='ProductDetail__productImage ProductDetail__productImage--withoutSwatches')
try:
for exd in ex:
about = soup.find("div",{"class":"ProductMainSection"}).get_text().strip()
rate = soup.find("div",{"class":"ProductDetail__productContent"}).get_text().strip()
top = soup.find("p",{"class":"MixedMenuButton__Text MixedMenuButton__Text--label"}).get_text().strip()
comment = soup.find("div",{"class":"Collapsible__contentInner"}).get_text().strip()
dari = soup.find("div",{"class":"ProductDetail__ingredients"}).get_text().strip()
excomd_arr.append((about, rate, top, comment, dari))
finally:
return excomd_arr
excomd_arr = crawl_url("", [])
print(len(excomd_arr))
cursor.executemany(sql, excomd_arr)
conn.commit()
cursor.close()
conn.close()
| 35.931818 | 148 | 0.619228 |
acf74a71c58a2907c0199a85060ad5f00976b45f | 35,225 | py | Python | python/ray/train/trainer.py | jianoaix/ray | 1701b923bc83905f8961c06a6a173e3eba46a936 | [
"Apache-2.0"
] | null | null | null | python/ray/train/trainer.py | jianoaix/ray | 1701b923bc83905f8961c06a6a173e3eba46a936 | [
"Apache-2.0"
] | null | null | null | python/ray/train/trainer.py | jianoaix/ray | 1701b923bc83905f8961c06a6a173e3eba46a936 | [
"Apache-2.0"
] | null | null | null | import copy
import logging
import os
import warnings
from datetime import datetime
from pathlib import Path
from typing import Any, Callable, Dict, List, Optional, Type, TypeVar, Union
import ray
from ray.actor import ActorHandle
from ray.air.checkpoint import Checkpoint
from ray.train._internal.backend_executor import (
BackendExecutor,
InactiveWorkerGroupError,
TrainBackendError,
TrainingWorkerError,
)
from ray.train._internal.checkpoint import (
CheckpointManager,
TuneCheckpointManager,
load_checkpoint_from_path,
)
from ray.train._internal.dataset_spec import RayDataset, RayDatasetSpec
from ray.train._internal.session import TrainingResultType
# Ray Train should be usable even if Tune is not installed.
from ray.train._internal.utils import ActorWrapper, construct_path, construct_train_func
from ray.train._internal.worker_group import WorkerGroup
from ray.train.backend import BackendConfig
from ray.train.base_trainer import ( # noqa: F401
BaseTrainer,
GenDataset,
TrainingFailedError,
)
from ray.train.callbacks.callback import TrainingCallback
from ray.train.constants import (
DEFAULT_RESULTS_DIR,
ENABLE_DETAILED_AUTOFILLED_METRICS_ENV,
ENABLE_SHARE_CUDA_VISIBLE_DEVICES_ENV,
TRAIN_ENABLE_WORKER_SPREAD_ENV,
TRAIN_PLACEMENT_GROUP_TIMEOUT_S_ENV,
TUNE_INSTALLED,
)
from ray.util.annotations import Deprecated, DeveloperAPI
from ray.util.ml_utils.checkpoint_manager import CheckpointStrategy
if TUNE_INSTALLED:
from ray import tune
from ray.tune import PlacementGroupFactory, Trainable
from ray.tune.function_runner import wrap_function
else:
tune = PlacementGroupFactory = Trainable = object
def noop():
return
wrap_function = noop
T = TypeVar("T")
S = TypeVar("S")
logger = logging.getLogger(__name__)
BACKEND_NAME_TO_CONFIG_CLS_NAME = {
"horovod": "HorovodConfig",
"tensorflow": "TensorflowConfig",
"torch": "TorchConfig",
}
# The environment variables that need to be propagated from the driver to the
# `BackendExecutor` actor via runtime env.
BACKEND_ENV_VARS = {
ENABLE_DETAILED_AUTOFILLED_METRICS_ENV,
ENABLE_SHARE_CUDA_VISIBLE_DEVICES_ENV,
TRAIN_PLACEMENT_GROUP_TIMEOUT_S_ENV,
TRAIN_ENABLE_WORKER_SPREAD_ENV,
}
# Import backend configurations dynamically since not all subdependencies
# may be installed.
def _get_backend_config_cls(backend_name) -> type:
if backend_name not in BACKEND_NAME_TO_CONFIG_CLS_NAME:
raise ValueError(
f"Invalid backend: {backend_name}. "
f"Supported string values are: "
f"{BACKEND_NAME_TO_CONFIG_CLS_NAME.keys()}"
)
import importlib
config_cls = getattr(
importlib.import_module(f"ray.train" f".{backend_name}"),
BACKEND_NAME_TO_CONFIG_CLS_NAME[backend_name],
)
return config_cls
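# For example, _get_backend_config_cls("torch") imports ray.train.torch and
# returns its TorchConfig class (per the mapping above).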
@Deprecated
class Trainer:
"""A class for enabling seamless distributed deep learning.
Directory structure:
- A logdir is created during instantiation. This will hold all the
results/checkpoints for the lifetime of the Trainer. By default, it will be
of the form ``~/ray_results/train_<datestring>``.
- A run_dir is created for each ``run`` call. This will
hold the checkpoints and results for a single ``trainer.run()`` or
``trainer.run_iterator()`` call. It will be of the form ``run_<run_id>``.
Args:
backend (Union[str, BackendConfig]): The backend used for
distributed communication. If configurations are needed,
a subclass of ``BackendConfig`` can be passed in.
Supported ``str`` values: {"torch", "tensorflow", "horovod"}.
num_workers: The number of workers (Ray actors) to launch.
Each worker will reserve 1 CPU by default. The number of CPUs
reserved by each worker can be overridden with the
``resources_per_worker`` argument.
use_gpu: If True, training will be done on GPUs (1 per
worker). Defaults to False. The number of GPUs reserved by each
worker can be overridden with the ``resources_per_worker``
argument.
resources_per_worker (Optional[Dict]): If specified, the resources
defined in this Dict will be reserved for each worker. The
``CPU`` and ``GPU`` keys (case-sensitive) can be defined to
override the number of CPU/GPUs used by each worker.
logdir (Optional[str]): Path to the file directory where logs
should be persisted. If this is not specified, one will be
generated.
max_retries: Number of retries when Ray actors fail.
Defaults to 3. Set to -1 for unlimited retries.
"""
def __init__(
self,
backend: Union[str, BackendConfig],
num_workers: int,
use_gpu: bool = False,
resources_per_worker: Optional[Dict[str, float]] = None,
logdir: Optional[str] = None,
max_retries: int = 3,
):
warnings.warn(
"The `ray.train.Trainer` API is deprecated in Ray "
"2.0, and is replaced by Ray AI Runtime (Ray AIR). Ray AIR ("
"https://docs.ray.io/en/latest/ray-air/getting-started.html) will "
"provide greater functionality than `ray.train.Trainer`, "
"and with a more flexible and easy-to-use API.",
DeprecationWarning,
stacklevel=2,
)
if num_workers <= 0:
raise ValueError("`num_workers` must be a positive integer.")
if not ray.is_initialized():
ray.init()
if "GPU" in ray.available_resources() and not use_gpu:
logger.info(
"GPUs are detected in your Ray cluster, but GPU "
"training is not enabled for Ray Train. To enable "
"GPU training, make sure to set `use_gpu` to True "
"when instantiating your Trainer."
)
if resources_per_worker is not None:
# Copy this parameter to avoid mutating the user input
resources_per_worker = copy.deepcopy(resources_per_worker)
self._num_workers = num_workers
self._use_gpu = use_gpu
self._resources_per_worker = resources_per_worker
# Incremental unique run ID.
self._run_id = 0
self.logdir = self.create_logdir(logdir)
# Setup executor.
self._backend_config = self._get_backend_config(backend)
num_cpus = 1
num_gpus = int(use_gpu)
if resources_per_worker:
# Override CPU and GPU resources and remove from dict.
num_cpus = resources_per_worker.pop("CPU", num_cpus)
num_gpus = resources_per_worker.pop("GPU", num_gpus)
if not use_gpu and num_gpus > 0:
raise ValueError(
"`use_gpu` is False but `GPU` was found in "
"`resources_per_worker`. Either set `use_gpu` to True or "
"remove `GPU` from `resources_per_worker."
)
if use_gpu and num_gpus == 0:
raise ValueError(
"`use_gpu` is True but `GPU` is set to 0 in "
"`resources_per_worker`. Either set `use_gpu` to False or "
"request a positive number of `GPU` in "
"`resources_per_worker."
)
runtime_env = {
"env_vars": {
var_name: os.environ[var_name]
for var_name in BACKEND_ENV_VARS
if var_name in os.environ
}
}
remote_executor = ray.remote(num_cpus=0)(BackendExecutor)
backend_executor_actor = remote_executor.options(
runtime_env=runtime_env
).remote(
backend_config=self._backend_config,
num_workers=num_workers,
num_cpus_per_worker=num_cpus,
num_gpus_per_worker=num_gpus,
additional_resources_per_worker=resources_per_worker,
max_retries=max_retries,
)
self._backend_executor = ActorWrapper(backend_executor_actor)
# Todo (krfricke): Initialize checkpoint manager here with final values
# rather than in `on_training_start`
if self._is_tune_enabled():
self.checkpoint_manager = TuneCheckpointManager(
checkpoint_strategy=None, run_dir=None
)
else:
self.checkpoint_manager = CheckpointManager(
checkpoint_strategy=None, run_dir=None
)
def create_logdir(self, log_dir: Optional[Union[str, Path]]) -> Path:
"""Create logdir for the Trainer."""
# Create directory for logs.
log_dir = Path(log_dir) if log_dir else None
if not log_dir:
# Initialize timestamp for identifying this Train execution.
timestr = datetime.today().strftime("%Y-%m-%d_%H-%M-%S")
log_dir = Path(f"train_{timestr}")
log_dir = construct_path(log_dir, DEFAULT_RESULTS_DIR)
log_dir.mkdir(parents=True, exist_ok=True)
logger.info(f"Trainer logs will be logged in: {log_dir}")
return log_dir
def create_run_dir(self):
"""Create rundir for the particular training run."""
self.latest_run_dir.mkdir(parents=True, exist_ok=True)
logger.info(f"Run results will be logged in: {self.latest_run_dir}")
def _get_backend_config(self, backend: Union[str, BackendConfig]) -> BackendConfig:
"""Gets the ``BackendConfig`` to use for training.
Args:
backend (Union[str, BackendConfig]): If a ``BackendConfig`` is
passed in, then it will also be returned. If a ``str`` is
passed in, then the default config for that backend will be
returned.
Returns:
The ``BackendConfig`` that will be used to set up the
``BackendExecutor``.
"""
if isinstance(backend, BackendConfig):
return backend
elif isinstance(backend, str):
return _get_backend_config_cls(backend)()
else:
raise TypeError(f"Invalid type for backend: {type(backend)}.")
def _is_tune_enabled(self):
"""Whether or not this Trainer is part of a Tune session."""
return TUNE_INSTALLED and tune.is_session_enabled()
def start(self, initialization_hook: Optional[Callable[[], None]] = None):
"""Starts the training execution service.
Args:
initialization_hook (Optional[Callable]): The function to call on
each worker when it is instantiated.
"""
self._backend_executor.start(initialization_hook)
def run(
self,
train_func: Union[Callable[[], T], Callable[[Dict[str, Any]], T]],
config: Optional[Dict[str, Any]] = None,
callbacks: Optional[List[TrainingCallback]] = None,
dataset: Optional[Union[RayDataset, Dict[str, RayDataset]]] = None,
checkpoint: Optional[Union[Dict, str, Path]] = None,
checkpoint_strategy: Optional[CheckpointStrategy] = None,
) -> List[T]:
"""Runs a training function in a distributed manner.
Args:
train_func: The training function to execute.
This can either take in no arguments or a ``config`` dict.
config (Optional[Dict]): Configurations to pass into
``train_func``. If None then an empty Dict will be created.
callbacks (Optional[List[TrainingCallback]]): A list of Callbacks
which will be executed during training. If this is not set,
currently there are NO default Callbacks.
dataset (Optional[Union[RayDataset, Dict[str, RayDataset]]]):
Distributed Ray :ref:`Dataset <dataset-api>` or
:ref:`DatasetPipeline <dataset-pipeline-api>` to pass into the
workers, which can be accessed from the training function via
``train.get_dataset_shard()``. Sharding will automatically be
handled by the Trainer. Multiple Datasets can be passed in as
a ``Dict`` that maps each name key to a Dataset value,
and each Dataset can be accessed from the training function
by passing in a `dataset_name` argument to
``train.get_dataset_shard()``.
checkpoint (Optional[Dict|str|Path]): The checkpoint data that
should be loaded onto each worker and accessed by the training
function via ``train.load_checkpoint()``. If this is a ``str``
or ``Path`` then the value is expected to be a path to a file
that contains a serialized checkpoint dict. If this is
``None`` then no checkpoint will be loaded.
checkpoint_strategy (Optional[CheckpointStrategy]): The
configurations for saving checkpoints.
Returns:
A list of results from the training function. Each value in the
list corresponds to the output of the training function from
each worker.
"""
# Create new log directory for this run.
self._run_id += 1
self.create_run_dir()
# TODO(matt): Set default callbacks.
callbacks = [] if callbacks is None else callbacks
finished_with_errors = False
for callback in callbacks:
callback.start_training(
logdir=str(self.latest_run_dir), config=config or {}
)
train_func = construct_train_func(train_func, config)
dataset_spec = RayDatasetSpec(dataset_or_dict=dataset)
try:
iterator = TrainingIterator(
backend_executor=self._backend_executor,
backend_config=self._backend_config,
train_func=train_func,
dataset_spec=dataset_spec,
checkpoint_manager=self.checkpoint_manager,
checkpoint=checkpoint,
checkpoint_strategy=checkpoint_strategy,
run_dir=self.latest_run_dir,
)
for intermediate_result in iterator:
for callback in callbacks:
callback.process_results(intermediate_result)
assert iterator.is_finished()
return iterator.get_final_results()
finally:
for callback in callbacks:
callback.finish_training(error=finished_with_errors)
def run_iterator(
self,
train_func: Union[Callable[[], T], Callable[[Dict[str, Any]], T]],
config: Optional[Dict[str, Any]] = None,
dataset: Optional[Union[RayDataset, Dict[str, RayDataset]]] = None,
checkpoint: Optional[Union[Dict, str, Path]] = None,
checkpoint_strategy: Optional[CheckpointStrategy] = None,
) -> "TrainingIterator":
"""Same as ``run`` except returns an iterator over the results.
This is useful if you want to have more customization of what to do
with the intermediate results or how to use the ``Trainer`` with Ray
Tune.
.. code-block:: python
def train_func(config):
...
for _ in config["epochs"]:
metrics = train()
metrics = validate(...)
ray.train.report(**metrics)
return model
iterator = trainer.run_iterator(train_func, config=config)
for result in iterator:
do_stuff(result)
latest_ckpt = trainer.get_latest_checkpoint()
assert iterator.is_finished()
model = iterator.get_final_results()[0]
Args:
train_func: The training function to execute.
This can either take in no arguments or a ``config`` dict.
config (Optional[Dict]): Configurations to pass into
``train_func``. If None then an empty Dict will be created.
checkpoint (Optional[Dict|Path|str]): The checkpoint data that
should be loaded onto each worker and accessed by the
training function via ``train.load_checkpoint()``. If this is a
``str`` or ``Path`` then the value is expected to be a path
to a file that contains a serialized checkpoint dict. If this
is ``None`` then no checkpoint will be loaded.
checkpoint_strategy (Optional[CheckpointStrategy]): The
configurations for saving checkpoints.
Returns:
An Iterator over the intermediate results from ``train.report()``.
"""
# Create new log directory for this run.
self._run_id += 1
self.create_run_dir()
train_func = construct_train_func(train_func, config)
dataset_spec = RayDatasetSpec(dataset_or_dict=dataset)
return TrainingIterator(
backend_executor=self._backend_executor,
backend_config=self._backend_config,
train_func=train_func,
run_dir=self.latest_run_dir,
dataset_spec=dataset_spec,
checkpoint_manager=self.checkpoint_manager,
checkpoint=checkpoint,
checkpoint_strategy=checkpoint_strategy,
)
@property
def latest_run_dir(self) -> Optional[Path]:
"""Path to the log directory for the latest call to ``run()``.
Returns ``None`` if ``run()`` has not been called.
"""
if self._run_id > 0:
run_dir = Path(f"run_{self._run_id:03d}")
return construct_path(run_dir, self.logdir)
else:
return None
@property
def latest_checkpoint_dir(self) -> Optional[Path]:
"""Path to the checkpoint directory.
Returns ``None`` if ``run()`` has not been called or if
``train.checkpoint()`` has not been called from ``train_func`` within
the most recent call to ``run``.
"""
return self.checkpoint_manager.latest_checkpoint_dir
@property
def best_checkpoint_path(self) -> Optional[Path]:
"""Path to the best persisted checkpoint from the latest run.
"Best" is defined by the input ``CheckpointStrategy``.
Default behavior is to return the most recent checkpoint.
Returns ``None`` if ``run()`` has not been called or if
``train.save_checkpoint()`` has not been called from ``train_func``
within the most recent call to ``run``.
"""
return self.checkpoint_manager.best_checkpoint_path
@property
def latest_checkpoint(self) -> Optional[Dict]:
"""The latest saved checkpoint.
This checkpoint may not be saved to disk.
Returns ``None`` if ``run()`` has not been called or if
``train.checkpoint()`` has not been called from ``train_func``.
"""
return self.checkpoint_manager.latest_checkpoint
@property
def best_checkpoint(self) -> Optional[Dict]:
"""Best saved checkpoint from the latest run.
"Best" is defined by the input ``CheckpointStrategy``.
Default behavior is to return the most recent checkpoint.
Returns ``None`` if ``run()`` has not been called or if
``train.save_checkpoint()`` has not been called from ``train_func``
within the most recent call to ``run``.
"""
best_checkpoint_path = self.best_checkpoint_path
if best_checkpoint_path is None:
return None
else:
return load_checkpoint_from_path(best_checkpoint_path)
@staticmethod
def load_checkpoint_from_path(checkpoint_file_path: Union[str, Path]) -> Dict:
"""Convenience method to load a checkpoint from path.
An error will be raised if the provided path does not exist.
Args:
checkpoint_file_path (Union[str, Path]): The path to the checkpoint
to load. If the checkpoint saved in this path has not been
created by Ray Train, there is no guarantee that it can be
loaded in successfully.
"""
return load_checkpoint_from_path(checkpoint_file_path)
def shutdown(self):
"""Shuts down the training execution service."""
self._backend_executor.shutdown()
def to_tune_trainable(
self,
train_func: Callable[[Dict[str, Any]], T],
dataset: Optional[Union[RayDataset, Dict[str, RayDataset]]] = None,
) -> Type[Trainable]:
"""Creates a Tune ``Trainable`` from the input training function.
Args:
train_func: The function that should be executed on each
training worker.
dataset (Optional[Union[RayDataset, Dict[str, RayDataset]]]):
Distributed Ray :ref:`Dataset <dataset-api>` or
:ref:`DatasetPipeline <dataset-pipeline-api>` to pass into the
workers, which can be accessed from the training function via
``train.get_dataset_shard()``. Sharding will automatically be
handled by the Trainer. Multiple Datasets can be passed in as
a ``Dict`` that maps each name key to a Dataset value,
and each Dataset can be accessed from the training function
by passing in a `dataset_name` argument to
``train.get_dataset_shard()``.
Returns:
A Trainable that can directly be passed into ``tune.run()``.
"""
if not TUNE_INSTALLED:
raise ValueError(
"Tune is not installed. Please install ray["
"tune] to use the Tune integration."
)
if self._backend_executor.is_started():
raise RuntimeError(
"The Trainer must not be active to use "
"`to_tune_trainable`. Either shutdown the "
"Trainer or don't start it in the first place."
)
return _create_tune_trainable(
train_func,
dataset,
self._backend_config,
self._num_workers,
self._use_gpu,
self._resources_per_worker,
)
def to_worker_group(self, train_cls: Type, *args, **kwargs) -> "TrainWorkerGroup":
"""Returns Ray actors with the provided class and the backend started.
This is useful if you want to provide your own class for training
and have more control over execution, but still want to use Ray Train
to setup the appropriate backend configurations (torch, tf, etc.).
.. code-block:: python
class Trainer:
def __init__(self, config):
self.config = config
def train_epoch(self):
...
return 1
config = {"lr": 0.1}
trainer = Trainer(num_workers=2, backend="torch")
workers = trainer.to_worker_group(train_cls=Trainer, config=config)
futures = [w.train_epoch.remote() for w in workers]
assert ray.get(futures) == [1, 1]
assert ray.get(workers[0].train_epoch.remote()) == 1
workers.shutdown()
Args:
train_cls: The class definition to use for the Ray
actors/workers.
args, kwargs: Arguments to pass into the ``__init__`` of the
provided ``train_cls``.
"""
if self._backend_executor.is_started():
raise RuntimeError(
"The Trainer must not be active to use "
"`to_worker_group`. Either shutdown the "
"Trainer or don't start it in the first place."
)
self._backend_executor.start(
train_cls=train_cls, train_cls_args=args, train_cls_kwargs=kwargs
)
worker_group = self._backend_executor.get_worker_group()
return TrainWorkerGroup(worker_group)
@Deprecated
class TrainWorkerGroup:
"""A container for a group of Ray actors.
You should not instantiate this directly and only use this as the output
of ``Trainer.to_worker_group``. You can index or iterate this object like
you would a List.
.. code-block:: python
class Trainer:
def __init__(self, config):
self.config = config
def train_epoch(self):
...
return 1
config = {"lr": 0.1}
trainer = Trainer(num_workers=2, backend="torch")
workers = trainer.to_worker_group(train_cls=Trainer, config=config)
futures = [w.train_epoch.remote() for w in workers]
assert ray.get(futures) == [1, 1]
assert ray.get(workers[0].train_epoch.remote()) == 1
workers.shutdown()
"""
def __init__(self, worker_group: WorkerGroup):
warnings.warn(
"The `ray.train.trainer.WorkerGroup` API is deprecated in Ray 2.0",
DeprecationWarning,
stacklevel=2,
)
self._worker_group = worker_group
def __getitem__(self, item) -> ActorHandle:
return self._worker_group.workers[item].actor
def shutdown(self, patience_s: float = 5):
"""Shutdown all the workers.
Args:
patience_s: Attempt a graceful shutdown
of the workers for this many seconds. Fallback to force kill
if graceful shutdown is not complete after this time. If
this is less than or equal to 0, immediately force kill all
workers.
"""
self._worker_group.shutdown(patience_s=patience_s)
@DeveloperAPI
class TrainingIterator:
"""An iterator over Train results. Returned by ``trainer.run_iterator``."""
def __init__(
self,
backend_executor: Union[BackendExecutor, ActorWrapper],
backend_config: BackendConfig,
train_func: Union[Callable[[], T], Callable[[Dict[str, Any]], T]],
dataset_spec: RayDatasetSpec,
checkpoint_manager: CheckpointManager,
checkpoint: Optional[Union[Dict, str, Path, Checkpoint]],
checkpoint_strategy: Optional[CheckpointStrategy],
run_dir: Optional[Path] = None,
):
self._backend_executor = backend_executor
self._backend = backend_config.backend_cls()
self._train_func = train_func
self._dataset_spec = dataset_spec
self._run_dir = run_dir
self._checkpoint_manager = checkpoint_manager
self._checkpoint_strategy = checkpoint_strategy
self._start_training(
train_func=train_func,
run_dir=run_dir,
dataset_spec=self._dataset_spec,
checkpoint=checkpoint,
checkpoint_strategy=checkpoint_strategy,
)
self._final_results = None
self._finished_training = False
def __iter__(self):
return self
def _start_training(
self,
train_func,
run_dir,
dataset_spec,
checkpoint,
checkpoint_strategy,
latest_checkpoint_id=None,
):
self._checkpoint_manager.on_start_training(
checkpoint_strategy=checkpoint_strategy,
run_dir=run_dir,
latest_checkpoint_id=latest_checkpoint_id,
)
checkpoint = self._checkpoint_manager._load_checkpoint(checkpoint)
self._run_with_error_handling(
lambda: self._backend_executor.start_training(
train_func=train_func,
dataset_spec=dataset_spec,
checkpoint=checkpoint,
)
)
def _run_with_error_handling(self, func: Callable):
try:
return func()
except TrainingWorkerError:
# Workers have already been restarted.
logger.info(
"Workers have been successfully restarted. Resuming "
"training from latest checkpoint."
)
logger.debug(
f"Latest checkpoint: {self._checkpoint_manager.latest_checkpoint}"
)
self._start_training(
self._train_func,
self._run_dir,
self._dataset_spec,
self._checkpoint_manager.latest_checkpoint,
self._checkpoint_strategy,
latest_checkpoint_id=self._checkpoint_manager.latest_checkpoint_id,
)
return self._run_with_error_handling(func)
except InactiveWorkerGroupError:
raise RuntimeError(
"This Trainer is not active. It is either shutdown "
"already or never started in the first place. "
"Either create a new Trainer or start this one."
) from None
except TrainBackendError:
raise RuntimeError(
"Training failed. You should not be seeing "
"this error and this is a bug. Please create "
"a new issue at "
"https://github.com/ray-project/ray."
) from None
def __next__(self):
if self.is_finished():
raise StopIteration
next_results = self._run_with_error_handling(self._fetch_next_result)
if next_results is None:
try:
self._final_results = self._run_with_error_handling(
self._finish_training
)
finally:
self._finished_training = True
raise StopIteration
else:
return next_results
def _fetch_next_result(self) -> Optional[List[Dict]]:
"""Fetch next results produced by ``train.report()`` from each worker.
Assumes ``start_training`` has already been called.
Returns:
A list of dictionaries of values passed to ``train.report()`` from
each worker. Each item corresponds to an intermediate result
from a single worker. If there are no more items to fetch,
returns None.
"""
while True:
results = self._backend_executor.get_next_results()
if results is None:
return None
first_result = results[0]
result_type = first_result.type
if result_type is TrainingResultType.REPORT:
result_data = [self._backend.decode_data(r.data) for r in results]
return result_data
elif result_type is TrainingResultType.CHECKPOINT:
self._checkpoint_manager._process_checkpoint(
results, decode_checkpoint_fn=self._backend.decode_data
)
# Iterate until next REPORT call or training has finished.
else:
raise TrainBackendError(
f"Unexpected result type: "
f"{result_type}. "
f"Expected one of "
f"{[type in TrainingResultType]}"
)
def _finish_checkpointing(self):
while True:
results = self._backend_executor.get_next_results()
if results is None:
break
result_type = results[0].type
# Process checkpoints and ignore other result types.
if result_type is TrainingResultType.CHECKPOINT:
self._checkpoint_manager._process_checkpoint(
results, decode_checkpoint_fn=self._backend.decode_data
)
def _finish_training(self):
"""Finish training and return final results. Propagate any exceptions.
Blocks until training is finished on all workers.
Assumes `start_training` has already been called.
Returns:
A list of return values from calling ``train_func`` on each worker.
Each item corresponds to the return value from a single worker.
"""
self._backend_executor.pause_reporting()
# Finish up processing checkpoints. Reporting has been disabled.
# Results will not be processed.
self._finish_checkpointing()
return self._backend_executor.finish_training()
def is_finished(self) -> bool:
return self._finished_training
def get_final_results(self, force: bool = False) -> List[T]:
"""Gets the training func return values from each worker.
If ``force`` is ``True``, then immediately finish training
and return even if all the intermediate results have not
been processed yet. Else, intermediate results must be
processed before obtaining the final results. Defaults to
False.
"""
if not self.is_finished():
assert self._final_results is None
if force:
try:
self._final_results = self._run_with_error_handling(
self._finish_training
)
finally:
self._finished_training = True
else:
logger.info(
"Please finish iterating through the "
"intermediate results before getting the"
"final returns. If you would like "
"training to finish immediately and get "
"the final returns, then set "
"`force=True`."
)
return self._final_results
def _create_tune_trainable(
train_func, dataset, backend_config, num_workers, use_gpu, resources_per_worker
):
"""Creates a Tune Trainable class for Train training.
This function populates class attributes and methods.
"""
# TODO(matt): Move dataset to Ray object store, like tune.with_parameters.
def tune_function(config, checkpoint_dir=None):
trainer = Trainer(
backend=backend_config,
num_workers=num_workers,
use_gpu=use_gpu,
resources_per_worker=resources_per_worker,
)
trainer.start()
iterator = trainer.run_iterator(
train_func, config, dataset=dataset, checkpoint=checkpoint_dir
)
for results in iterator:
first_worker_results = results[0]
tune.report(**first_worker_results)
trainer.shutdown()
trainable_cls = wrap_function(tune_function)
class TrainTrainable(trainable_cls):
"""Add default resources to the Trainable."""
@classmethod
def default_resource_request(cls, config: Dict) -> PlacementGroupFactory:
trainer_bundle = [{"CPU": 1}]
worker_resources = {"CPU": 1, "GPU": int(use_gpu)}
worker_resources_extra = (
{} if resources_per_worker is None else resources_per_worker
)
worker_bundles = [
{**worker_resources, **worker_resources_extra}
for _ in range(num_workers)
]
bundles = trainer_bundle + worker_bundles
return PlacementGroupFactory(bundles, strategy="PACK")
return TrainTrainable
| 38.329706 | 88 | 0.615302 |
acf74a8bdb5fc21f5688b8ffb4ceae43befada46 | 632 | py | Python | setup.py | shawnsnguyen/kafka-rest | 72fa176cf6855d83b636eeebbb1b3f9cd37e1872 | [
"MIT"
] | 1 | 2021-01-05T16:56:18.000Z | 2021-01-05T16:56:18.000Z | setup.py | shawnsnguyen/kafka-rest | 72fa176cf6855d83b636eeebbb1b3f9cd37e1872 | [
"MIT"
] | null | null | null | setup.py | shawnsnguyen/kafka-rest | 72fa176cf6855d83b636eeebbb1b3f9cd37e1872 | [
"MIT"
] | null | null | null | import sys
from setuptools import setup, find_packages
from kafka_rest import __version__
install_requires = [
'tornado>=4.0.0,<5.0.0',
'avro_json_serializer>=0.4.1,<0.5.0',
'avro==1.7.7'
]
setup(
name='kafka-rest',
version=__version__,
description="Async client for Confluent's Kafka REST Proxy",
url='https://github.com/gamechanger/kafka-rest',
author='GameChanger',
author_email='travis@gc.io',
packages=find_packages(),
install_requires=install_requires,
tests_require=[
'mock==1.3.0',
'nose==1.3.7'
],
test_suite="nose.collector",
zip_safe=False
)
| 22.571429 | 64 | 0.65981 |
acf74b055dd4acd00fae3935b5962fc9e0e5406d | 3,112 | py | Python | test/unit/rmq_2_isse/is_valid_ext.py | mjpernot/rabbitmq-isse | ad8246d98e3e4924b1946bb8f6e8856f2c4a0309 | [
"MIT"
] | null | null | null | test/unit/rmq_2_isse/is_valid_ext.py | mjpernot/rabbitmq-isse | ad8246d98e3e4924b1946bb8f6e8856f2c4a0309 | [
"MIT"
] | null | null | null | test/unit/rmq_2_isse/is_valid_ext.py | mjpernot/rabbitmq-isse | ad8246d98e3e4924b1946bb8f6e8856f2c4a0309 | [
"MIT"
] | null | null | null | #!/usr/bin/python
# Classification (U)
"""Program: is_valid_ext.py
Description: Unit testing of is_valid_ext in rmq_2_isse.py.
Usage:
test/unit/rmq_2_isse/is_valid_ext.py
Arguments:
"""
# Libraries and Global Variables
# Standard
import sys
import os
if sys.version_info < (2, 7):
import unittest2 as unittest
else:
import unittest
# Third-party
import mock
# Local
sys.path.append(os.getcwd())
import rmq_2_isse
import version
__version__ = version.__version__
class UnitTest(unittest.TestCase):
"""Class: UnitTest
Description: Class which is a representation of a unit testing.
Methods:
setUp -> Initialize testing environment.
test_is_valid_ext_empty_set -> Test with empty ignore set.
test_is_valid_ext_not_fnd -> Test with no find in set.
test_is_valid_ext_fnd -> Test with one find in set.
tearDown -> Clean up of testing environment.
"""
def setUp(self):
"""Function: setUp
Description: Initialization for unit testing.
Arguments:
"""
class CfgTest(object):
"""Class: CfgTest
Description: Class which is a representation of a cfg module.
Methods:
__init__ -> Initialize configuration environment.
"""
def __init__(self):
"""Method: __init__
Description: Initialization instance of the CfgTest class.
Arguments:
"""
self.ignore_ext = ["_kmz.64.txt", "_pptx.64.txt"]
self.ct = CfgTest()
self.fname = "File1_kmz.64.txt"
@mock.patch("rmq_2_isse.gen_class.Logger")
def test_is_valid_ext_empty_set(self, mock_log):
"""Function: test_is_valid_ext_empty_set
Description: Test is_valid_ext function with empty ignore set.
Arguments:
"""
mock_log.return_value = True
self.ct.ignore_ext = []
self.assertTrue(rmq_2_isse.is_valid_ext(self.fname, self.ct, mock_log))
@mock.patch("rmq_2_isse.gen_class.Logger")
def test_is_valid_ext_not_fnd(self, mock_log):
"""Function: test_is_valid_ext_not_fnd
Description: Test is_valid_ext function with not found in set.
Arguments:
"""
mock_log.return_value = True
self.fname = "File1.txt"
self.assertTrue(rmq_2_isse.is_valid_ext(self.fname, self.ct, mock_log))
@mock.patch("rmq_2_isse.gen_class.Logger")
def test_is_valid_ext_fnd(self, mock_log):
"""Function: test_is_valid_ext_fnd
Description: Test is_valid_ext function with one find in set.
Arguments:
"""
mock_log.return_value = True
self.assertFalse(rmq_2_isse.is_valid_ext(self.fname,
self.ct, mock_log))
def tearDown(self):
"""Function: tearDown
Description: Clean up of unit testing.
Arguments:
"""
self.ct = None
if __name__ == "__main__":
unittest.main()
| 20.207792 | 79 | 0.616324 |
acf74c5e3144d2466fb2e054e46fa71ae9c310d0 | 18,767 | py | Python | diofant/logic/algorithms/dpll2.py | rajkk1/diofant | 6b361334569e4ec2e8c7d30dc324387a4ad417c2 | [
"BSD-3-Clause"
] | null | null | null | diofant/logic/algorithms/dpll2.py | rajkk1/diofant | 6b361334569e4ec2e8c7d30dc324387a4ad417c2 | [
"BSD-3-Clause"
] | null | null | null | diofant/logic/algorithms/dpll2.py | rajkk1/diofant | 6b361334569e4ec2e8c7d30dc324387a4ad417c2 | [
"BSD-3-Clause"
] | null | null | null | """Implementation of DPLL algorithm.
Features:
- Clause learning
- Watch literal scheme
- VSIDS heuristic
References
==========
* https://en.wikipedia.org/wiki/DPLL_algorithm
"""
from collections import defaultdict
from heapq import heappop, heappush
from ...utilities import default_sort_key, ordered
from ..boolalg import _find_predicates, conjuncts, to_cnf, to_int_repr
def dpll_satisfiable(expr, all_models=False):
"""
Check satisfiability of a propositional sentence.
It returns a model rather than True when it succeeds.
Returns a generator of all models if all_models is True.
Examples
========
>>> dpll_satisfiable(a & ~b)
{a: True, b: False}
>>> dpll_satisfiable(a & ~a)
False
"""
clauses = conjuncts(to_cnf(expr))
if False in clauses:
if all_models:
return (f for f in [False])
return False
symbols = sorted(_find_predicates(expr), key=default_sort_key)
symbols_int_repr = range(1, len(symbols) + 1)
clauses_int_repr = to_int_repr(clauses, symbols)
solver = SATSolver(clauses_int_repr, symbols_int_repr, set(), symbols)
models = solver._find_model()
if all_models:
return _all_models(models)
try:
return next(models)
except StopIteration:
return False
def _all_models(models):
satisfiable = False
try:
while True:
yield next(models)
satisfiable = True
except StopIteration:
if not satisfiable:
yield False
class SATSolver:
"""
Class for representing a SAT solver capable of
finding a model to a boolean theory in conjunctive
normal form.
"""
def __init__(self, clauses, variables, var_settings, symbols=None,
heuristic='vsids', clause_learning='none', INTERVAL=500):
self.var_settings = var_settings
self.heuristic = heuristic
self.is_unsatisfied = False
self._unit_prop_queue = []
self.INTERVAL = INTERVAL
if symbols is None:
self.symbols = list(ordered(variables))
else:
self.symbols = symbols
self._initialize_variables(variables)
self._initialize_clauses(clauses)
if 'vsids' == heuristic:
self._vsids_init()
self.heur_calculate = self._vsids_calculate
self.heur_lit_assigned = self._vsids_lit_assigned
self.heur_lit_unset = self._vsids_lit_unset
self.heur_clause_added = self._vsids_clause_added
else:
raise NotImplementedError
if 'none' == clause_learning:
self.add_learned_clause = lambda x: None
self.compute_conflict = lambda: None
else:
raise NotImplementedError
# Create the base level
self.levels = [Level(0)]
self._current_level.varsettings = var_settings
# Keep stats
self.num_decisions = 0
self.num_learned_clauses = 0
self.original_num_clauses = len(self.clauses)
def _initialize_variables(self, variables):
"""Set up the variable data structures needed."""
self.sentinels = defaultdict(set)
self.occurrence_count = defaultdict(int)
self.variable_set = [False] * (len(variables) + 1)
def _initialize_clauses(self, clauses):
"""Set up the clause data structures needed.
For each clause, the following changes are made:
- Unit clauses are queued for propagation right away.
- Non-unit clauses have their first and last literals set as sentinels.
- The number of clauses a literal appears in is computed.
"""
self.clauses = []
for cls in clauses:
self.clauses.append(list(cls))
for i in range(len(self.clauses)):
# Handle the unit clauses
if 1 == len(self.clauses[i]):
self._unit_prop_queue.append(self.clauses[i][0])
continue
self.sentinels[self.clauses[i][0]].add(i)
self.sentinels[self.clauses[i][-1]].add(i)
for lit in self.clauses[i]:
self.occurrence_count[lit] += 1
def _find_model(self):
"""
Main DPLL loop. Returns a generator of models.
Variables are chosen successively, and assigned to be either
True or False. If a solution is not found with this setting,
the opposite is chosen and the search continues. The solver
halts when every variable has a setting.
Examples
========
>>> l = SATSolver([{2, -3}, {1}, {3, -3}, {2, -2},
... {3, -2}], {1, 2, 3}, set())
>>> list(l._find_model())
[{1: True, 2: False, 3: False}, {1: True, 2: True, 3: True}]
>>> l = SATSolver([{2, -3}, {1}, {3, -3}, {2, -2},
... {3, -2}], {1, 2, 3}, set(), [a, b, c])
>>> list(l._find_model())
[{a: True, b: False, c: False}, {a: True, b: True, c: True}]
"""
# We use this variable to keep track of if we should flip a
# variable setting in successive rounds
flip_var = False
# Check if unit prop says the theory is unsat right off the bat
self._simplify()
if self.is_unsatisfied:
return
# While the theory still has clauses remaining
while True:
if flip_var:
# We have just backtracked and we are trying to opposite literal
flip_var = False
lit = self._current_level.decision
else:
# Pick a literal to set
lit = self.heur_calculate()
self.num_decisions += 1
# Stopping condition for a satisfying theory
if 0 == lit:
yield {self.symbols[abs(lit) - 1]: lit > 0
for lit in self.var_settings}
while self._current_level.flipped:
self._undo()
if len(self.levels) == 1:
return
flip_lit = -self._current_level.decision
self._undo()
self.levels.append(Level(flip_lit, flipped=True))
flip_var = True
continue
# Start the new decision level
self.levels.append(Level(lit))
# Assign the literal, updating the clauses it satisfies
self._assign_literal(lit)
# _simplify the theory
self._simplify()
# Check if we've made the theory unsat
if self.is_unsatisfied:
self.is_unsatisfied = False
# We unroll all of the decisions until we can flip a literal
while self._current_level.flipped:
self._undo()
# If we've unrolled all the way, the theory is unsat
if 1 == len(self.levels):
return
# Detect and add a learned clause
self.add_learned_clause(self.compute_conflict())
# Try the opposite setting of the most recent decision
flip_lit = -self._current_level.decision
self._undo()
self.levels.append(Level(flip_lit, flipped=True))
flip_var = True
########################
# Helper Methods #
########################
@property
def _current_level(self):
"""The current decision level data structure
Examples
========
>>> l = SATSolver([{1}, {2}], {1, 2}, set())
>>> next(l._find_model())
{1: True, 2: True}
>>> l._current_level.decision
0
>>> l._current_level.flipped
False
>>> l._current_level.var_settings
{1, 2}
"""
return self.levels[-1]
def _clause_sat(self, cls):
"""Check if a clause is satisfied by the current variable setting.
Examples
========
>>> l = SATSolver([{1}, {-1}], {1}, set())
>>> try:
... next(l._find_model())
... except StopIteration:
... pass
>>> l._clause_sat(0)
False
>>> l._clause_sat(1)
True
"""
for lit in self.clauses[cls]:
if lit in self.var_settings:
return True
return False
def _is_sentinel(self, lit, cls):
"""Check if a literal is a sentinel of a given clause.
Examples
========
>>> l = SATSolver([{2, -3}, {1}, {3, -3}, {2, -2},
... {3, -2}], {1, 2, 3}, set())
>>> next(l._find_model())
{1: True, 2: False, 3: False}
>>> l._is_sentinel(2, 3)
True
>>> l._is_sentinel(-3, 1)
False
"""
return cls in self.sentinels[lit]
def _assign_literal(self, lit):
"""Make a literal assignment.
The literal assignment must be recorded as part of the current
decision level. Additionally, if the literal is marked as a
sentinel of any clause, then a new sentinel must be chosen. If
this is not possible, then unit propagation is triggered and
another literal is added to the queue to be set in the future.
Examples
========
>>> l = SATSolver([{2, -3}, {1}, {3, -3}, {2, -2},
... {3, -2}], {1, 2, 3}, set())
>>> next(l._find_model())
{1: True, 2: False, 3: False}
>>> l.var_settings
{-3, -2, 1}
>>> l = SATSolver([{2, -3}, {1}, {3, -3}, {2, -2},
... {3, -2}], {1, 2, 3}, set())
>>> l._assign_literal(-1)
>>> try:
... next(l._find_model())
... except StopIteration:
... pass
>>> l.var_settings
{-1}
"""
self.var_settings.add(lit)
self._current_level.var_settings.add(lit)
self.variable_set[abs(lit)] = True
self.heur_lit_assigned(lit)
sentinel_list = list(self.sentinels[-lit])
for cls in sentinel_list:
if not self._clause_sat(cls):
other_sentinel = None
for newlit in self.clauses[cls]:
if newlit != -lit:
if self._is_sentinel(newlit, cls):
other_sentinel = newlit
elif not self.variable_set[abs(newlit)]:
self.sentinels[-lit].remove(cls)
self.sentinels[newlit].add(cls)
other_sentinel = None
break
# Check if no sentinel update exists
if other_sentinel:
self._unit_prop_queue.append(other_sentinel)
def _undo(self):
"""
_undo the changes of the most recent decision level.
Examples
========
>>> l = SATSolver([{2, -3}, {1}, {3, -3}, {2, -2},
... {3, -2}], {1, 2, 3}, set())
>>> next(l._find_model())
{1: True, 2: False, 3: False}
>>> level = l._current_level
>>> (level.decision, level.var_settings, level.flipped)
(-3, {-3, -2}, False)
>>> l._undo()
>>> level = l._current_level
>>> (level.decision, level.var_settings, level.flipped)
(0, {1}, False)
"""
# Undo the variable settings
for lit in self._current_level.var_settings:
self.var_settings.remove(lit)
self.heur_lit_unset(lit)
self.variable_set[abs(lit)] = False
# Pop the level off the stack
self.levels.pop()
#########################
# Propagation #
#########################
"""
Propagation methods should attempt to soundly simplify the boolean
theory, and return True if any simplification occurred and False
otherwise.
"""
def _simplify(self):
"""Iterate over the various forms of propagation to simplify the theory.
Examples
========
>>> l = SATSolver([{2, -3}, {1}, {3, -3}, {2, -2},
... {3, -2}], {1, 2, 3}, set())
>>> l.variable_set
[False, False, False, False]
>>> l.sentinels
{-3: {0, 2}, -2: {3, 4}, 2: {0, 3}, 3: {2, 4}}
>>> l._simplify()
>>> l.variable_set
[False, True, False, False]
>>> l.sentinels
{-3: {0, 2}, -2: {3, 4}, -1: set(), 2: {0, 3}, 3: {2, 4}}
"""
changed = True
while changed:
changed = False
changed |= self._unit_prop()
changed |= self._pure_literal()
def _unit_prop(self):
"""Perform unit propagation on the current theory."""
result = len(self._unit_prop_queue) > 0
while self._unit_prop_queue:
next_lit = self._unit_prop_queue.pop()
if -next_lit in self.var_settings:
self.is_unsatisfied = True
self._unit_prop_queue = []
return False
else:
self._assign_literal(next_lit)
return result
def _pure_literal(self):
"""Look for pure literals and assign them when found."""
return False
#########################
# Heuristics #
#########################
def _vsids_init(self):
"""Initialize the data structures needed for the VSIDS heuristic."""
self.lit_heap = []
self.lit_scores = {}
for var in range(1, len(self.variable_set)):
self.lit_scores[var] = float(-self.occurrence_count[var])
self.lit_scores[-var] = float(-self.occurrence_count[-var])
heappush(self.lit_heap, (self.lit_scores[var], var))
heappush(self.lit_heap, (self.lit_scores[-var], -var))
def _vsids_decay(self):
"""Decay the VSIDS scores for every literal.
Examples
========
>>> l = SATSolver([{2, -3}, {1}, {3, -3}, {2, -2},
... {3, -2}], {1, 2, 3}, set())
>>> l.lit_scores
{-3: -2.0, -2: -2.0, -1: 0.0, 1: 0.0, 2: -2.0, 3: -2.0}
>>> l._vsids_decay()
>>> l.lit_scores
{-3: -1.0, -2: -1.0, -1: 0.0, 1: 0.0, 2: -1.0, 3: -1.0}
"""
# We divide every literal score by 2 for a decay factor
# Note: This doesn't change the heap property
for lit in self.lit_scores:
self.lit_scores[lit] /= 2.0
def _vsids_calculate(self):
"""
VSIDS Heuristic Calculation
Examples
========
>>> l = SATSolver([{2, -3}, {1}, {3, -3}, {2, -2},
... {3, -2}], {1, 2, 3}, set())
>>> l.lit_heap
[(-2.0, -3), (-2.0, 2), (-2.0, -2), (0.0, 1), (-2.0, 3), (0.0, -1)]
>>> l._vsids_calculate()
-3
>>> l.lit_heap
[(-2.0, -2), (-2.0, 2), (0.0, -1), (0.0, 1), (-2.0, 3)]
"""
if len(self.lit_heap) == 0:
return 0
        # Clean out the front of the heap as long as the variables are set
while self.variable_set[abs(self.lit_heap[0][1])]:
heappop(self.lit_heap)
if len(self.lit_heap) == 0:
return 0
return heappop(self.lit_heap)[1]
def _vsids_lit_assigned(self, lit):
"""Handle the assignment of a literal for the VSIDS heuristic."""
def _vsids_lit_unset(self, lit):
"""Handle the unsetting of a literal for the VSIDS heuristic.
Examples
========
>>> l = SATSolver([{2, -3}, {1}, {3, -3}, {2, -2},
... {3, -2}], {1, 2, 3}, set())
>>> l.lit_heap
[(-2.0, -3), (-2.0, 2), (-2.0, -2), (0.0, 1), (-2.0, 3), (0.0, -1)]
>>> l._vsids_lit_unset(2)
>>> l.lit_heap
[(-2.0, -3), (-2.0, -2), (-2.0, -2), (-2.0, 2), (-2.0, 3), (0.0, -1),
(-2.0, 2), (0.0, 1)]
"""
var = abs(lit)
heappush(self.lit_heap, (self.lit_scores[var], var))
heappush(self.lit_heap, (self.lit_scores[-var], -var))
def _vsids_clause_added(self, cls):
"""Handle the addition of a new clause for the VSIDS heuristic.
Examples
========
>>> l = SATSolver([{2, -3}, {1}, {3, -3}, {2, -2},
... {3, -2}], {1, 2, 3}, set())
>>> l.num_learned_clauses
0
>>> l.lit_scores
{-3: -2.0, -2: -2.0, -1: 0.0, 1: 0.0, 2: -2.0, 3: -2.0}
>>> l._vsids_clause_added({2, -3})
>>> l.num_learned_clauses
1
>>> l.lit_scores
{-3: -1.0, -2: -2.0, -1: 0.0, 1: 0.0, 2: -1.0, 3: -2.0}
"""
self.num_learned_clauses += 1
for lit in cls:
self.lit_scores[lit] += 1
########################
# Clause Learning #
########################
def _simple_add_learned_clause(self, cls):
"""Add a new clause to the theory.
Examples
========
>>> l = SATSolver([{2, -3}, {1}, {3, -3}, {2, -2},
... {3, -2}], {1, 2, 3}, set())
>>> l.num_learned_clauses
0
>>> l.clauses
[[2, -3], [1], [3, -3], [2, -2], [3, -2]]
>>> l.sentinels
{-3: {0, 2}, -2: {3, 4}, 2: {0, 3}, 3: {2, 4}}
>>> l._simple_add_learned_clause([3])
>>> l.clauses
[[2, -3], [1], [3, -3], [2, -2], [3, -2], [3]]
>>> l.sentinels
{-3: {0, 2}, -2: {3, 4}, 2: {0, 3}, 3: {2, 4, 5}}
"""
cls_num = len(self.clauses)
self.clauses.append(cls)
for lit in cls:
self.occurrence_count[lit] += 1
self.sentinels[cls[0]].add(cls_num)
self.sentinels[cls[-1]].add(cls_num)
self.heur_clause_added(cls)
def _simple_compute_conflict(self):
"""Build a clause representing the fact that at least one decision made
so far is wrong.
Examples
========
>>> l = SATSolver([{2, -3}, {1}, {3, -3}, {2, -2},
... {3, -2}], {1, 2, 3}, set())
>>> next(l._find_model())
{1: True, 2: False, 3: False}
>>> l._simple_compute_conflict()
[3]
"""
return [-(level.decision) for level in self.levels[1:]]
class Level:
"""
Represents a single level in the DPLL algorithm, and contains
enough information for a sound backtracking procedure.
"""
def __init__(self, decision, flipped=False):
self.decision = decision
self.var_settings = set()
self.flipped = flipped
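if __name__ == '__main__':
    # Illustrative usage sketch added for clarity; it is not part of the original
    # module. It mirrors the doctests above: two unit clauses force both
    # variables to True.
    demo_solver = SATSolver([{1}, {2}], {1, 2}, set())
    print(next(demo_solver._find_model()))  # -> {1: True, 2: True}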
| 30.565147 | 80 | 0.504556 |
acf74cd2f8bfb725ea2e803fae5ba1766d361bae | 2,137 | py | Python | handler/proxyHandler.py | who8736/proxy_pool | e8f2a0ef3a51a8c3aaa59c8ba990c6c2ab3995ff | [
"MIT"
] | null | null | null | handler/proxyHandler.py | who8736/proxy_pool | e8f2a0ef3a51a8c3aaa59c8ba990c6c2ab3995ff | [
"MIT"
] | null | null | null | handler/proxyHandler.py | who8736/proxy_pool | e8f2a0ef3a51a8c3aaa59c8ba990c6c2ab3995ff | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
-------------------------------------------------
File Name: ProxyHandler.py
Description :
Author : JHao
date: 2016/12/3
-------------------------------------------------
Change Activity:
2016/12/3:
-------------------------------------------------
"""
__author__ = 'JHao'
from helper.proxy import Proxy
from db.dbClient import DbClient
from handler.configHandler import ConfigHandler
class ProxyHandler(object):
""" Proxy CRUD operator"""
def __init__(self):
self.conf = ConfigHandler()
self.db = DbClient(self.conf.dbConn)
self.db.changeTable(self.conf.tableName)
def get(self):
"""
return a useful proxy
:return:
"""
proxy_json = self.db.get()
if proxy_json:
return Proxy.createFromJson(proxy_json)
return None
def pop(self):
"""
return and delete a useful proxy
:return:
"""
proxy_json = self.db.pop()
if proxy_json:
return Proxy.createFromJson(proxy_json)
return None
def put(self, proxy):
"""
put proxy into use proxy
:return:
"""
self.db.put(proxy)
def delete(self, proxy):
"""
delete useful proxy
:param proxy:
:return:
"""
return self.db.delete(proxy)
def getAll(self):
"""
get all proxy from pool as Proxy list
:return:
"""
proxies_dict = self.db.getAll()
return [Proxy.createFromJson(value) for _, value in proxies_dict.items()]
def exists(self, proxy):
"""
check proxy exists
:param proxy:
:return:
"""
return self.db.exists(proxy)
def getCount(self):
"""
return raw_proxy and use_proxy count
:return:
"""
total_use_proxy = self.db.getCount()
return {'count': total_use_proxy}
def addScore(self, proxy):
        # TODO: increase the proxy's score
pass
def subScore(self, proxy):
        # TODO: decrease the proxy's score
pass
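if __name__ == '__main__':
    # Illustrative usage sketch added for clarity; not part of the original module.
    # It assumes the database configured through ConfigHandler is reachable.
    handler = ProxyHandler()
    print(handler.getCount())
    proxy = handler.get()
    print(proxy)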
| 22.734043 | 81 | 0.497894 |
acf74d20fb1ec51cf9db10087acee5ba930d381c | 4,506 | py | Python | api/tests/test_reservation_unit_capacity.py | SuviVappula/tilavarauspalvelu-core | ad7dec36e392a7b2927e2f825c3b0eb29b700793 | [
"MIT"
] | null | null | null | api/tests/test_reservation_unit_capacity.py | SuviVappula/tilavarauspalvelu-core | ad7dec36e392a7b2927e2f825c3b0eb29b700793 | [
"MIT"
] | 90 | 2020-11-13T07:42:32.000Z | 2022-03-29T08:54:20.000Z | api/tests/test_reservation_unit_capacity.py | SuviVappula/tilavarauspalvelu-core | ad7dec36e392a7b2927e2f825c3b0eb29b700793 | [
"MIT"
] | 8 | 2021-02-10T11:31:22.000Z | 2022-01-28T14:33:47.000Z | import datetime
from unittest import mock
from assertpy import assert_that
from django.conf import settings
from django.contrib.auth import get_user_model
from django.test.testcases import TestCase
from rest_framework.reverse import reverse
from rest_framework.test import APIClient
from opening_hours.hours import TimeElement
from permissions.models import GeneralRole, GeneralRoleChoice
from reservation_units.models import ReservationUnit
from reservation_units.tests.factories import ReservationUnitFactory
from reservations.models import STATE_CHOICES
from reservations.tests.factories import ReservationFactory
def get_mocked_opening_hours():
resource_id = f"{settings.HAUKI_ORIGIN_ID}:{ReservationUnit.objects.first().uuid}"
return [
{
"resource_id": resource_id,
"date": datetime.datetime.strptime("2021-01-01", "%Y-%m-%d").date(),
"times": [
TimeElement(
start_time=datetime.time(hour=10),
end_time=datetime.time(hour=22),
end_time_on_next_day=False,
),
],
},
{
"resource_id": resource_id,
"date": datetime.datetime.strptime("2021-01-02", "%Y-%m-%d").date(),
"times": [
TimeElement(
start_time=datetime.time(hour=10),
end_time=datetime.time(hour=22),
end_time_on_next_day=False,
),
],
},
]
@mock.patch("opening_hours.utils.summaries.get_opening_hours")
class ReservationUnitCapacityTestCase(TestCase):
@classmethod
def setUpTestData(cls):
cls.reservation_unit = ReservationUnitFactory()
cls.reservation = ReservationFactory(
reservation_unit=[cls.reservation_unit],
begin=datetime.datetime(2020, 5, 5, 12),
end=datetime.datetime(2020, 5, 5, 14),
state=STATE_CHOICES.CONFIRMED,
)
general_admin = get_user_model().objects.create(
username="gen_admin",
first_name="Amin",
last_name="General",
email="amin.general@foo.com",
)
GeneralRole.objects.create(
user=general_admin,
role=GeneralRoleChoice.objects.get(code="admin"),
)
cls.api_client = APIClient()
cls.api_client.force_authenticate(general_admin)
def test_hour_capacity(self, mock):
mock.return_value = get_mocked_opening_hours()
response = self.api_client.get(
reverse(
"reservationunit-capacity",
),
data={
"reservation_unit": str(self.reservation_unit.id),
"period_start": "2020-01-01",
"period_end": "2022-01-01",
},
format="json",
)
assert_that(response.status_code).is_equal_to(200)
assert_that(response.data[0].get("hour_capacity")).is_equal_to(24)
def test_reservation_duration_total_in_period(self, mock):
mock.return_value = get_mocked_opening_hours()
response = self.api_client.get(
reverse(
"reservationunit-capacity",
),
data={
"reservation_unit": str(self.reservation_unit.id),
"period_start": "2020-01-01",
"period_end": "2022-01-01",
},
format="json",
)
assert_that(response.status_code).is_equal_to(200)
assert_that(response.data[0].get("reservation_duration_total")).is_equal_to(2)
def test_reservation_duration_total_when_reservation_out_of_period(self, mock):
mock.return_value = get_mocked_opening_hours()
ReservationFactory(
reservation_unit=[self.reservation_unit],
begin=datetime.datetime(2022, 5, 5, 12),
end=datetime.datetime(2022, 5, 5, 14),
state=STATE_CHOICES.CONFIRMED,
)
response = self.api_client.get(
reverse(
"reservationunit-capacity",
),
data={
"reservation_unit": str(self.reservation_unit.id),
"period_start": "2020-01",
"period_end": "2022-01-01",
},
format="json",
)
assert_that(response.status_code).is_equal_to(200)
assert_that(response.data[0].get("reservation_duration_total")).is_equal_to(2)
| 34.661538 | 86 | 0.600755 |
acf74d340af951be65a90a3c606cb195ff08a47f | 2,469 | py | Python | lab/util/file_transfer.py | voschezang/distributed-systems | 6132dc33414d942378cd2b835408701c31075c91 | [
"MIT"
] | null | null | null | lab/util/file_transfer.py | voschezang/distributed-systems | 6132dc33414d942378cd2b835408701c31075c91 | [
"MIT"
] | null | null | null | lab/util/file_transfer.py | voschezang/distributed-systems | 6132dc33414d942378cd2b835408701c31075c91 | [
"MIT"
] | 1 | 2020-02-16T15:16:45.000Z | 2020-02-16T15:16:45.000Z | from lab.util import message
from time import time
class UnexpectedChunkIndex(Exception):
def __init__(self, message, expected_index):
# Call the base class constructor with the parameters it needs
super(UnexpectedChunkIndex, self).__init__(message)
self.expected_index = expected_index
class FileReceiver:
    """Reassembles a file that arrives as ordered chunks of newline-separated lines."""
def __init__(self, expected_number_of_chunks: int):
self.file = []
self.expected_number_of_chunks = expected_number_of_chunks
self.expected_chunk_index = 0
self.started_at = time()
@property
def received_complete_file(self):
return self.expected_chunk_index >= self.expected_number_of_chunks
def receive_chunk(self, index: int, chunk: str):
if index != self.expected_chunk_index:
raise UnexpectedChunkIndex(f'Unexpected chunk index: {index}, expected {self.expected_chunk_index}', self.expected_chunk_index)
self.file += [line + '\n' for line in chunk.rstrip().split('\n')]
self.expected_chunk_index += 1
def handle_end_send_file(self):
if not self.received_complete_file:
raise UnexpectedChunkIndex('Missing chunk(s) at end send file', self.expected_chunk_index)
class FileSender:
    """Splits a file (a list of lines) into chunk messages that fit the maximum message size."""
def __init__(self, worker_id: int, file_type: int, data: list):
self.messages = self.create_messages(worker_id, data, file_type)
self.target_received_file = False
self.index = 0
self.started_at = time()
@property
def complete_file_send(self):
return self.index >= len(self.messages)
@staticmethod
def get_file_chunk(worker_id, file_type, index, data: list):
chunk = ''
lines = 0
for line in data:
if len(message.write_file_chunk(worker_id, file_type, index, chunk + line)) >= message.MAX_MESSAGE_SIZE:
break
chunk += line
lines += 1
return chunk, lines
def create_messages(self, worker_id: int, data: list, file_type: int):
messages = []
while len(data) > 0:
chunk, lines_in_chunk = self.get_file_chunk(worker_id, file_type, len(messages), data)
del data[:lines_in_chunk]
messages.append(message.write_file_chunk(worker_id, file_type, len(messages), chunk))
return messages
def get_next_message(self):
next_message = self.messages[self.index]
self.index += 1
return next_message
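if __name__ == '__main__':
    # Illustrative usage sketch added for clarity; not part of the original module.
    # The chunk payloads are made-up examples showing the protocol the receiver
    # expects: chunks must arrive in order 0..n-1, then the end-of-file check runs.
    receiver = FileReceiver(expected_number_of_chunks=2)
    receiver.receive_chunk(0, 'line 1\nline 2\n')
    receiver.receive_chunk(1, 'line 3\n')
    receiver.handle_end_send_file()  # raises UnexpectedChunkIndex if chunks are missing
    assert receiver.file == ['line 1\n', 'line 2\n', 'line 3\n']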
| 32.486842 | 139 | 0.665857 |
acf74d91cf09288996747bb4d934e0cae3e4e82c | 5,030 | py | Python | bin/tools/top_k_seq2seq.py | pandegroup/reaction_prediction_seq2seq | 9a6f040198ce990cbdeea812e7e99df7becd142c | [
"Apache-2.0"
] | 53 | 2018-01-03T09:18:09.000Z | 2022-03-24T22:47:19.000Z | bin/tools/top_k_seq2seq.py | pandegroup/reaction_prediction_seq2seq | 9a6f040198ce990cbdeea812e7e99df7becd142c | [
"Apache-2.0"
] | 9 | 2018-01-29T19:14:54.000Z | 2020-11-03T23:41:05.000Z | bin/tools/top_k_seq2seq.py | pandegroup/reaction_prediction_seq2seq | 9a6f040198ce990cbdeea812e7e99df7becd142c | [
"Apache-2.0"
] | 31 | 2018-11-19T15:52:08.000Z | 2021-12-04T13:35:29.000Z | # based on https://github.com/google/seq2seq/blob/master/bin/tools/generate_beam_viz.py
# extracts probabilities and sequences from .npz file generated during beam search.
# and pickles a list of the length n_samples that has beam_width most probable tuples
# (path, logprob, prob)
# where probs are scaled to 1.
import numpy as np
import networkx as nx
import pickle
import tqdm
import os
def draw_graph(graph):
from string import Template
import shutil
from networkx.readwrite import json_graph
import json
HTML_TEMPLATE = Template("""
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Beam Search</title>
<link rel="stylesheet" type="text/css" href="tree.css">
<script src="http://d3js.org/d3.v3.min.js"></script>
</head>
<body>
<script>
var treeData = $DATA
</script>
<script src="tree.js"></script>
</body>
</html>""")
    seq2seq_path = '/scratch/make_build/gram_as_foreight_lang/seq2seq'
    # NOTE: `base_path` is not defined in this file; it is assumed to be provided
    # at module level and to point at the experiment's output directory.
    vis_path = base_path + '/vis/graph_beam/'
    os.makedirs(vis_path, exist_ok=True)
shutil.copy2(seq2seq_path+"/bin/tools/beam_search_viz/tree.css", vis_path)
shutil.copy2(seq2seq_path+"/bin/tools/beam_search_viz/tree.js", vis_path)
json_str = json.dumps(json_graph.tree_data(graph, (0, 0)), ensure_ascii=False)
html_str = HTML_TEMPLATE.substitute(DATA=json_str)
output_path = os.path.join(vis_path, "graph.html")
with open(output_path, "w") as file:
file.write(html_str)
print(output_path)
def _add_graph_level(graph, level, parent_ids, names, scores):
"""Adds a levelto the passed graph"""
for i, parent_id in enumerate(parent_ids):
new_node = (level, i)
parent_node = (level - 1, parent_id)
graph.add_node(new_node)
graph.node[new_node]["name"] = names[i]
graph.node[new_node]["score"] = str(scores[i])
graph.node[new_node]["size"] = 100
# Add an edge to the parent
graph.add_edge(parent_node, new_node)
def create_graph(predicted_ids, parent_ids, scores, vocab=None):
def get_node_name(pred):
return vocab[pred] if vocab else str(pred)
seq_length = predicted_ids.shape[0]
graph = nx.DiGraph()
for level in range(seq_length):
names = [get_node_name(pred) for pred in predicted_ids[level]]
_add_graph_level(graph, level + 1, parent_ids[level], names, scores[level])
graph.node[(0, 0)]["name"] = "START"
return graph
def get_path_to_root(graph, node):
p = graph.predecessors(node)
assert len(p) <= 1
self_seq = [graph.node[node]['name'].split('\t')[0]]
if len(p) == 0:
return self_seq
else:
return self_seq + get_path_to_root(graph, p[0])
def main(data_fn, vocab_fn, output_fn, target_fn):
beam_data = np.load(data_fn)
to_dump = []
# Optionally load vocabulary data
vocab = None
if vocab_fn:
with open(vocab_fn) as file:
vocab = file.readlines()
vocab = [_.strip() for _ in vocab]
vocab += ["UNK", "SEQUENCE_START", "SEQUENCE_END"]
data_len = len(beam_data["predicted_ids"])
print(data_len)
with open(target_fn) as f_target:
targets = f_target.readlines()
data_iterator = zip(beam_data["predicted_ids"],
beam_data["beam_parent_ids"],
beam_data["scores"],
targets)
def _tree_node_predecessor(pos):
return graph.node[graph.predecessors(pos)[0]]
n_correct_top_5 = 0
correct_probs = []
for predicted_ids, parent_ids, scores, target in tqdm.tqdm(data_iterator, total=data_len):
graph = create_graph(
predicted_ids=predicted_ids,
parent_ids=parent_ids,
scores=scores,
vocab=vocab)
pred_end_node_names = {pos for pos, d in graph.node.items()
if d['name'] == 'SEQUENCE_END'
and len(graph.predecessors(pos)) > 0
and _tree_node_predecessor(pos)['name'] != 'SEQUENCE_END'}
result = [(tuple(get_path_to_root(graph, pos)[1:-1][::-1]),
float(graph.node[pos]['score']))
for pos in pred_end_node_names]
filtered_result = filter(lambda x: 'SEQUENCE_END' not in x[0], result)
s_result = sorted(filtered_result, key=lambda x: x[1], reverse=True)
nn_probs = np.exp(np.array(list(zip(*s_result))[1]))
probs = nn_probs / np.sum(nn_probs)
result_w_prob = [(path, score, prob) for (path, score), prob in zip(s_result, probs)]
if len(result_w_prob) < 5:
result_w_prob.extend([(('SEQUENCE_END', ), np.nan, 0)]*(5-len(result_w_prob)))
to_dump.append(result_w_prob[:5])
with open(output_fn, 'wb') as f_out:
pickle.dump(to_dump, f_out)
| 34.452055 | 94 | 0.615706 |
acf74daa5d38e9a6eca73f75c0036083f570f31e | 6,394 | py | Python | Deathly Dungeon/pathfinding/core/grid.py | iTecAI/Deathly-Dungeons | 54d8bb9b9c6175a6f8c55858bf864f773cfe8f2c | [
"MIT"
] | null | null | null | Deathly Dungeon/pathfinding/core/grid.py | iTecAI/Deathly-Dungeons | 54d8bb9b9c6175a6f8c55858bf864f773cfe8f2c | [
"MIT"
] | null | null | null | Deathly Dungeon/pathfinding/core/grid.py | iTecAI/Deathly-Dungeons | 54d8bb9b9c6175a6f8c55858bf864f773cfe8f2c | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from .node import Node
try:
import numpy as np
USE_NUMPY = True
except ImportError:
USE_NUMPY = False
from pathfinding.core.diagonal_movement import DiagonalMovement
def build_nodes(width, height, matrix=None, inverse=False):
"""
create nodes according to grid size. If a matrix is given it
will be used to determine what nodes are walkable.
:rtype : list
"""
nodes = []
use_matrix = (isinstance(matrix, (tuple, list))) or \
(USE_NUMPY and isinstance(matrix, np.ndarray) and matrix.size > 0)
for y in range(height):
nodes.append([])
for x in range(width):
# 1, '1', True will be walkable
# while others will be obstacles
# if inverse is False, otherwise
# it changes
weight = int(matrix[y][x]) if use_matrix else 1
walkable = weight <= 0 if inverse else weight >= 1
nodes[y].append(Node(x=x, y=y, walkable=walkable, weight=weight))
return nodes
class Grid(object):
def __init__(self, width=0, height=0, matrix=None, inverse=False):
"""
a grid represents the map (as 2d-list of nodes).
"""
self.width = width
self.height = height
if isinstance(matrix, (tuple, list)) or (
USE_NUMPY and isinstance(matrix, np.ndarray) and
matrix.size > 0):
self.height = len(matrix)
            self.width = len(matrix[0]) if self.height > 0 else 0
if self.width > 0 and self.height > 0:
self.nodes = build_nodes(self.width, self.height, matrix, inverse)
else:
self.nodes = [[]]
def node(self, x, y):
"""
get node at position
:param x: x pos
:param y: y pos
:return:
"""
return self.nodes[y][x]
def inside(self, x, y):
"""
check, if field position is inside map
:param x: x pos
:param y: y pos
:return:
"""
return 0 <= x < self.width and 0 <= y < self.height
def walkable(self, x, y):
"""
check, if the tile is inside grid and if it is set as walkable
"""
return self.inside(x, y) and self.nodes[y][x].walkable
def neighbors(self, node, diagonal_movement=DiagonalMovement.never):
"""
get all neighbors of one node
:param node: node
"""
x = node.x
y = node.y
neighbors = []
s0 = d0 = s1 = d1 = s2 = d2 = s3 = d3 = False
# ↑
if self.walkable(x, y - 1):
neighbors.append(self.nodes[y - 1][x])
s0 = True
# →
if self.walkable(x + 1, y):
neighbors.append(self.nodes[y][x + 1])
s1 = True
# ↓
if self.walkable(x, y + 1):
neighbors.append(self.nodes[y + 1][x])
s2 = True
# ←
if self.walkable(x - 1, y):
neighbors.append(self.nodes[y][x - 1])
s3 = True
if diagonal_movement == DiagonalMovement.never:
return neighbors
if diagonal_movement == DiagonalMovement.only_when_no_obstacle:
d0 = s3 and s0
d1 = s0 and s1
d2 = s1 and s2
d3 = s2 and s3
elif diagonal_movement == DiagonalMovement.if_at_most_one_obstacle:
d0 = s3 or s0
d1 = s0 or s1
d2 = s1 or s2
d3 = s2 or s3
elif diagonal_movement == DiagonalMovement.always:
d0 = d1 = d2 = d3 = True
# ↖
if d0 and self.walkable(x - 1, y - 1):
neighbors.append(self.nodes[y - 1][x - 1])
# ↗
if d1 and self.walkable(x + 1, y - 1):
neighbors.append(self.nodes[y - 1][x + 1])
# ↘
if d2 and self.walkable(x + 1, y + 1):
neighbors.append(self.nodes[y + 1][x + 1])
# ↙
if d3 and self.walkable(x - 1, y + 1):
neighbors.append(self.nodes[y + 1][x - 1])
return neighbors
def cleanup(self):
for y_nodes in self.nodes:
for node in y_nodes:
node.cleanup()
def grid_str(self, path=None, start=None, end=None,
border=True, start_chr='s', end_chr='e',
path_chr='x', empty_chr=' ', block_chr='#',
show_weight=False):
"""
create a printable string from the grid using ASCII characters
:param path: list of nodes that show the path
:param start: start node
:param end: end node
:param border: create a border around the grid
:param start_chr: character for the start (default "s")
:param end_chr: character for the destination (default "e")
:param path_chr: character to show the path (default "x")
:param empty_chr: character for empty fields (default " ")
:param block_chr: character for blocking elements (default "#")
:param show_weight: instead of empty_chr show the cost of each empty
field (shows a + if the value of weight is > 10)
:return:
"""
data = ''
if border:
data = '+{}+'.format('-'*len(self.nodes[0]))
for y in range(len(self.nodes)):
line = ''
for x in range(len(self.nodes[y])):
node = self.nodes[y][x]
if node == start:
line += start_chr
elif node == end:
line += end_chr
elif path and ((node.x, node.y) in path or node in path):
line += path_chr
elif node.walkable:
# empty field
weight = str(node.weight) if node.weight < 10 else '+'
line += weight if show_weight else empty_chr
else:
line += block_chr # blocked field
if border:
line = '|'+line+'|'
if data:
data += '\n'
data += line
if border:
data += '\n+{}+'.format('-'*len(self.nodes[0]))
return data
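if __name__ == '__main__':
    # Illustrative usage sketch added for clarity; not part of the original module.
    # A 0 in the matrix marks an obstacle, positive values are walkable weights.
    demo_matrix = [
        [1, 1, 1],
        [1, 0, 1],
        [1, 1, 1],
    ]
    demo_grid = Grid(matrix=demo_matrix)
    start, end = demo_grid.node(0, 0), demo_grid.node(2, 2)
    print([(n.x, n.y) for n in demo_grid.neighbors(start)])  # [(1, 0), (0, 1)]
    print(demo_grid.grid_str(start=start, end=end))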
| 33.830688 | 79 | 0.498905 |
acf74de0eaf4c32e21ce96b36b394c93cd91ad00 | 8,646 | py | Python | cvpce/metrics.py | laitalaj/cvpce | 7509e7d7783039f39a88edc6e411333bcf6fb2af | [
"MIT"
] | 1 | 2021-10-06T17:51:52.000Z | 2021-10-06T17:51:52.000Z | cvpce/metrics.py | laitalaj/cvpce | 7509e7d7783039f39a88edc6e411333bcf6fb2af | [
"MIT"
] | null | null | null | cvpce/metrics.py | laitalaj/cvpce | 7509e7d7783039f39a88edc6e411333bcf6fb2af | [
"MIT"
] | 2 | 2021-10-02T10:16:20.000Z | 2021-12-12T17:12:14.000Z | import heapq
import multiprocessing as mp
import torch
from torchvision import ops as tvops
from matplotlib import pyplot as plt
# Evaluation conventions used below:
# - the precision at each recall level r is interpolated by taking the maximum
#   precision measured at any recall that exceeds r
# - predictions are assigned to ground-truth objects satisfying the overlap
#   criterion in order of (decreasing) confidence output
def iou_matrices(targets, sorted_predictions):
ious = tvops.box_iou(sorted_predictions, targets)
return torch.sort(ious, dim=1, descending=True)
def check_matches(sorted_ious, indices, iou_threshold=0.5):
predictions, targets = sorted_ious.shape
used = torch.zeros(targets)
true_positive = torch.zeros(predictions)
false_positive = torch.zeros(predictions)
for i, (single_ious, single_idxs) in enumerate(zip(sorted_ious, indices)):
match = False
for iou, idx in zip(single_ious, single_idxs):
if iou < iou_threshold: break
if used[idx]: continue
used[idx] = 1
match = True
if match:
true_positive[i] = 1
else:
false_positive[i] = 1
return true_positive, false_positive
def merge_matches(matches, confidences): # NOT assuming confidences sorted
merged_conf = torch.cat(confidences)
merged_conf, sort_idx = torch.sort(merged_conf, descending=True)
merged_matches = {t: {
'true_positives': torch.cat(d['true_positives'])[sort_idx],
'false_positives': torch.cat(d['false_positives'])[sort_idx],
'ar_300': sum(d['recall_300']) / len(d['recall_300']),
} for t, d in matches.items()}
return merged_matches, merged_conf
def get_merge_index(c1, c2):
return torch.sort(torch.cat((c1, c2)), descending=True)
def precision_and_recall(true_positives, false_positives, total_targets):
true_positives = true_positives.cumsum(0)
false_positives = false_positives.cumsum(0)
precision = true_positives / (true_positives + false_positives)
precision[torch.isnan(precision)] = 0
recall = true_positives / total_targets if total_targets > 0 else torch.zeros_like(true_positives)
return precision, recall
def f_score(precision, recall):
res = 2 * precision * recall / (precision + recall)
res[torch.isnan(res)] = 0
return res
def average_precision(precision, recall):
values = torch.zeros(11, dtype=torch.float)
for i, r in enumerate(torch.linspace(0, 1, 11)):
precision_at_recall = precision[recall >= r]
if len(precision_at_recall) > 0:
values[i] = precision_at_recall.max()
else: break # if there were no precisions for recall r1, there won't be any for recall r2 > r1
return values.mean()
def _process_one(target, prediction, confidence, iou_thresholds):
confidence, sort_idx = torch.sort(confidence, descending=True)
prediction = prediction[sort_idx]
iou_matrix, index_matrix = iou_matrices(target, prediction)
matches_for_threshold = {}
for t in iou_thresholds:
tp, fp = check_matches(iou_matrix, index_matrix, t)
_, r = precision_and_recall(tp, fp, len(target))
matches_for_threshold[t] = {
'true_positives': tp,
'false_positives': fp,
'recall_300': r[:300][-1] if len(r) > 0 else 0,
}
return matches_for_threshold, confidence, target.shape[0]
def _do_calculate(iou_thresholds, matches_for_threshold, sorted_confidences, total_targets):
res = {}
matches_for_threshold, conf = merge_matches(matches_for_threshold, sorted_confidences)
for t in iou_thresholds:
tp = matches_for_threshold[t]['true_positives']
fp = matches_for_threshold[t]['false_positives']
p, r = precision_and_recall(tp, fp, total_targets)
f = f_score(p, r)
if len(f) > 0:
max_f, max_idx = f.max(0)
best_p = p[max_idx]
best_r = r[max_idx]
conf_thresh = conf[max_idx]
else:
max_f, best_p, best_r, conf_thresh = 0.0, 0.0, 0.0, 0.0
res[t] = {
'raw': {
'p': p,
'r': r,
'f': f,
'c': conf,
},
'f': max_f,
'p': best_p,
'r': best_r,
'c': conf_thresh,
'ap': average_precision(p, r),
'ar_300': matches_for_threshold[t]['ar_300'],
}
return res
def calculate_metrics(targets, predictions, confidences, iou_thresholds = (0.5,)):
matches_for_threshold = {t: {'true_positives': [], 'false_positives': [], 'recall_300': []} for t in iou_thresholds}
sorted_confidences = []
total_targets = 0
for target, prediction, confidence in zip(targets, predictions, confidences):
        # The third value returned is the number of ground-truth boxes; use a
        # distinct name so the outer `targets` list is not shadowed.
        matches, conf, n_targets = _process_one(target, prediction, confidence, iou_thresholds)
        sorted_confidences.append(conf)
        total_targets += n_targets
for t in iou_thresholds:
matches_for_threshold[t]['true_positives'].append(matches[t]['true_positives'])
matches_for_threshold[t]['false_positives'].append(matches[t]['false_positives'])
matches_for_threshold[t]['recall_300'].append(matches[t]['recall_300'])
return _do_calculate(iou_thresholds, matches_for_threshold, sorted_confidences, total_targets)
def _image_processer(input_queue, output_queue, iou_thresholds):
for target, prediction, confidence in iter(input_queue.get, None):
result = _process_one(target, prediction, confidence, iou_thresholds)
output_queue.put(result)
input_queue.task_done()
input_queue.task_done()
def _metric_calculator(output_queue, pipe, iou_thresholds):
matches_for_threshold = {t: {'true_positives': [], 'false_positives': [], 'recall_300': []} for t in iou_thresholds}
sorted_confidences = []
total_targets = 0
for matches, conf, targets in iter(output_queue.get, None):
sorted_confidences.append(conf)
total_targets += targets
for t in iou_thresholds:
matches_for_threshold[t]['true_positives'].append(matches[t]['true_positives'])
matches_for_threshold[t]['false_positives'].append(matches[t]['false_positives'])
matches_for_threshold[t]['recall_300'].append(matches[t]['recall_300'])
output_queue.task_done()
res = _do_calculate(iou_thresholds, matches_for_threshold, sorted_confidences, total_targets)
pipe.send(res)
output_queue.task_done()
print(f'Output queue is empty: {output_queue.empty()}')
def calculate_metrics_async(processes = 4, iou_thresholds = (0.5,)):
input_queue = mp.JoinableQueue()
output_queue = mp.JoinableQueue()
out_pipe, in_pipe = mp.Pipe()
for _ in range(processes):
mp.Process(target=_image_processer, args=(input_queue, output_queue, iou_thresholds)).start()
mp.Process(target=_metric_calculator, args=(output_queue, in_pipe, iou_thresholds)).start()
return input_queue, output_queue, out_pipe
def plot_prfc(precision, recall, fscore, confidence, title=None, resolution_reduction=1):
fig = plt.figure(figsize=(5, 2.5))
f_max_idx = fscore.argmax()
plt.vlines(recall[f_max_idx], 0, 1, color='red', label='Max. $F_1$')
plt.hlines(confidence[f_max_idx], 0, recall[f_max_idx], color='orange', linestyles='dashed')
plt.hlines(precision[f_max_idx], 0, recall[f_max_idx], color='blue', linestyles='dashed')
plt.hlines(fscore[f_max_idx], 0, recall[f_max_idx], color='green', linestyles='dashed')
plt.annotate(f'{recall[f_max_idx]:.2f}', (recall[f_max_idx], 0), annotation_clip=False, color='red', ha='center', va='top')
plt.annotate(f'{confidence[f_max_idx]:.2f}', (0, confidence[f_max_idx]), annotation_clip=False, color='orange', ha='right', va='center')
plt.annotate(f'{precision[f_max_idx]:.2f}', (0, precision[f_max_idx]), annotation_clip=False, color='blue', ha='right', va='center')
plt.annotate(f'{fscore[f_max_idx]:.2f}', (0, fscore[f_max_idx]), annotation_clip=False, color='green', ha='right', va='center')
plt.plot(recall[::resolution_reduction], confidence[::resolution_reduction], label='Confidence', color='orange')
plt.plot(recall[::resolution_reduction], precision[::resolution_reduction], label='Precision', color='blue')
plt.plot(recall[::resolution_reduction], fscore[::resolution_reduction], label='$F_1$', color='green')
if title is not None:
plt.title(title)
plt.xlabel('Recall')
plt.xlim(0, 1)
plt.ylim(0, 1)
plt.legend()
fig.tight_layout(pad=0.5)
plt.show()
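if __name__ == '__main__':
    # Illustrative usage sketch added for clarity; not part of the original module.
    # One ground-truth box, one matching prediction and one spurious prediction;
    # this toy case should give AP = 1.0 at an IoU threshold of 0.5.
    toy_targets = [torch.tensor([[0., 0., 10., 10.]])]
    toy_predictions = [torch.tensor([[0., 0., 10., 10.], [20., 20., 30., 30.]])]
    toy_confidences = [torch.tensor([0.9, 0.2])]
    toy_metrics = calculate_metrics(toy_targets, toy_predictions, toy_confidences,
                                    iou_thresholds=(0.5,))
    print(toy_metrics[0.5]['ap'], toy_metrics[0.5]['f'], toy_metrics[0.5]['r'])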
| 42.17561 | 153 | 0.676613 |
acf74e59fbebb6752658f685807630095d5c59aa | 2,451 | py | Python | COVID.py | JackLidge/COVIDtracker | 0432d52910a00c987f9b942215f921923ed9a62b | [
"MIT"
] | null | null | null | COVID.py | JackLidge/COVIDtracker | 0432d52910a00c987f9b942215f921923ed9a62b | [
"MIT"
] | null | null | null | COVID.py | JackLidge/COVIDtracker | 0432d52910a00c987f9b942215f921923ed9a62b | [
"MIT"
] | null | null | null | import COVID_db
results = COVID_db.window()
results.window.mainloop()
countries_of_interest = results.country_list
countries_of_interest = [i.strip("''") for i in countries_of_interest]
# ['China', 'Italy', 'Iran', 'South_Korea', 'Japan', 'France', 'Germany', 'Spain',
# 'United_Kingdom', 'United_States_of_America']
# ['Norway', 'Portugal', 'Canada', 'Australia', 'Malaysia', 'Brazil', 'Israel']
countries = {}
country_sel = 0
no_infected_countries = len(results.data['countriesAndTerritories'].unique())
for i, country in enumerate(countries_of_interest):
temp = results.data.loc[results.data['countriesAndTerritories'] == country].copy()
temp['cumulative'] = temp['cases'].cumsum()
temp['cum_deaths'] = temp['deaths'].cumsum()
print(country)
temp['per_capita_cases'] = COVID_db.find_per_capita(temp, pop_col='popData2018', vals_col='cumulative')
temp['per_capita_deaths'] = COVID_db.find_per_capita(temp, pop_col='popData2018', vals_col='cum_deaths')
temp.reset_index(drop=True, inplace=True)
countries[country] = temp
bounds = [100, 100000]
start_cases = input("Select initial number of cases to plot (default is 100): ")
end_cases = input("Select final number of case to plot (default is 100,000): ")
try:
bounds[0] = int(start_cases)
except ValueError:
pass
try:
bounds[1] = int(end_cases)
except ValueError:
pass
'''
Invocation of plot_graphs needs redoing
'''
COVID_db.plot_graphs(countries, col_to_search='cumulative', search_bounds=bounds, plot_date=results.date,
plot_type='cases', constants=True)
COVID_db.plot_graphs(countries, col_to_search='cumulative', search_bounds=bounds, log_plot=False,
plot_date=results.date, plot_type='cases per capita', data_column='per_capita_cases')
bounds = [10, 5000]
start_deaths = input("Select initial number of deaths to plot (default is 10): ")
end_deaths = input("Select final number of deaths to plot (default is 5,000): ")
try:
bounds[0] = int(start_deaths)
except ValueError:
pass
try:
bounds[1] = int(end_deaths)
except ValueError:
pass
COVID_db.plot_graphs(countries, col_to_search='cum_deaths', search_bounds=bounds, plot_date=results.date,
plot_type='deaths', constants=True)
COVID_db.plot_graphs(countries, col_to_search='cum_deaths', search_bounds=bounds, log_plot=False,
plot_date=results.date, plot_type='deaths per capita', data_column='per_capita_deaths')
| 36.58209 | 108 | 0.728274 |
acf74eae910a2cef8d3d57ab11a3117955a5fccc | 4,511 | py | Python | datto/Eda.py | benhummel/datto | 3e6ef90c1ee7a369a1f53b58d0221babb3ba2ac0 | [
"MIT"
] | null | null | null | datto/Eda.py | benhummel/datto | 3e6ef90c1ee7a369a1f53b58d0221babb3ba2ac0 | [
"MIT"
] | null | null | null | datto/Eda.py | benhummel/datto | 3e6ef90c1ee7a369a1f53b58d0221babb3ba2ac0 | [
"MIT"
] | null | null | null | import pandas as pd
class Eda:
def separate_cols_by_type(self, df):
"""
Split the DataFrame into two groups by type
Parameters
--------
df: DataFrame
Returns
--------
numerical_vals: DataFrame
categorical_vals: DataFrame
"""
numerical_vals = df.select_dtypes(exclude=["object", "bool"])
categorical_vals = df.select_dtypes(include=["object", "bool"])
return numerical_vals, categorical_vals
def check_for_mistyped_booleans(self, numerical_vals):
"""
Check for columns coded as ints/floats that should actually be booleans
Parameters
--------
        numerical_vals: DataFrame
Returns
--------
boolean_cols: list
"""
boolean_cols = []
for col in numerical_vals:
if numerical_vals[col].nunique() <= 2:
print(col)
print(numerical_vals[col].unique())
boolean_cols.append(col)
return boolean_cols
def find_cols_to_exclude(self, df):
"""
Returns columns that may not be helpful for model building.
Exclusion criteria:
- Possible PII (address, name, username, date, etc. in col name)
- Large proportion of nulls
- Only 1 value in entire col
- Dates
- Low variance in col values
- Large number of categorical values
Parameters
--------
df: DataFrame
Returns
--------
lst: list
"""
lst = []
for col in df.columns:
if (
"address" in str(col)
or "first_name" in str(col)
or "last_name" in str(col)
or "username" in str(col)
or "_id" in str(col)
or "date" in str(col)
or "time" in str(col)
):
lst.append({col: "Considering excluding because potential PII column."})
elif df[col].isnull().sum() / float(df.shape[0]) >= 0.5:
lst.append(
{
col: "Considering excluding because {}% of column is null.".format(
(df[col].isnull().sum() / float(df.shape[0]) * 100.0)
)
}
)
elif len(df[col].unique()) <= 1:
lst.append(
{
col: "Considering excluding because column includes only one value."
}
)
elif df[col].dtype == "datetime64[ns]":
lst.append(
{col: "Considering excluding because column is a timestamp."}
)
elif df[col].dtype not in ["object", "bool"]:
if df[col].var() < 0.00001:
lst.append(
{
col: "Considering excluding because column variance is low ({})".format(
df[col].var()
)
}
)
elif df[col].dtype in ["object", "bool"]:
if len(df[col].unique()) > 200:
lst.append(
{
col: "Considering excluding because object column has large number of unique values ({})".format(
len(df[col].unique())
)
}
)
[print(x) for x in lst]
return lst
def sample_unique_vals(self, df):
"""
Examine a few unique vals in each column
Parameters
--------
df: DataFrame
"""
for col in df:
print(col)
try:
print(df[col].unique()[:20])
print(df[col].nunique())
except Exception:
pass
print("\n------------------------------------\n")
def find_correlated_features(self, df):
"""
Find & sort correlated features
Parameters
--------
df: DataFrame
Returns
--------
s: Series
"""
if df.empty:
return pd.Series()
c = df.corr().abs()
s = c.unstack()
s = s[s <= 0.99999]
s = s.sort_values(ascending=False)
print(s)
return s
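if __name__ == "__main__":
    # Illustrative usage sketch added for clarity; not part of the original module.
    # The toy DataFrame below is a made-up example.
    demo_df = pd.DataFrame({
        "age": [23, 35, 41, 29],
        "is_active": [1, 0, 1, 1],
        "plan": ["basic", "pro", "pro", "basic"],
    })
    eda = Eda()
    numerical_vals, categorical_vals = eda.separate_cols_by_type(demo_df)
    eda.check_for_mistyped_booleans(numerical_vals)  # flags 'is_active'
    eda.find_cols_to_exclude(demo_df)
    eda.find_correlated_features(numerical_vals)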
| 28.550633 | 125 | 0.43937 |
acf74fb08f9d17daacc2d9794f1fa668fe86b7da | 2,672 | py | Python | src/clients/python/library/http_setup.py | kpedro88/triton-inference-server | 37b3441e59bd0da314f428e1dcddf0a2f67d52e1 | [
"BSD-3-Clause"
] | 1 | 2021-09-22T13:23:23.000Z | 2021-09-22T13:23:23.000Z | src/clients/python/library/http_setup.py | dhanainme/triton-inference-server | 880db08b8a3a54caa51ae76387cdcea303807faf | [
"BSD-3-Clause"
] | null | null | null | src/clients/python/library/http_setup.py | dhanainme/triton-inference-server | 880db08b8a3a54caa51ae76387cdcea303807faf | [
"BSD-3-Clause"
] | null | null | null | # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# * Neither the name of NVIDIA CORPORATION nor the names of its
# contributors may be used to endorse or promote products derived
# from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND ANY
# EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
# PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
# OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import os
from setuptools import find_packages
from setuptools import setup
if 'VERSION' not in os.environ:
raise Exception('envvar VERSION must be specified')
VERSION = os.environ['VERSION']
REQUIRED = [
'numpy', 'geventhttpclient', 'python-rapidjson', 'tritonclientutils'
]
try:
from wheel.bdist_wheel import bdist_wheel as _bdist_wheel
class bdist_wheel(_bdist_wheel):
def finalize_options(self):
_bdist_wheel.finalize_options(self)
self.root_is_pure = False
def get_tag(self):
pyver, abi, plat = 'py3', 'none', 'any'
return pyver, abi, plat
except ImportError:
bdist_wheel = None
setup(
name='tritonhttpclient',
version=VERSION,
author='NVIDIA Inc.',
author_email='tanmayv@nvidia.com',
description=
'Python client library for communicating with NVIDIA Triton Inference Server using HTTP',
license='BSD',
url='http://nvidia.com',
keywords='triton inference server service client',
packages=find_packages(),
install_requires=REQUIRED,
zip_safe=False,
cmdclass={'bdist_wheel': bdist_wheel},
)
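# Illustrative build invocation added for clarity (hypothetical, not part of the
# original file). The script refuses to run unless the VERSION environment
# variable is set, e.g.:
#
#     VERSION=2.0.0 python http_setup.py bdist_wheel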
| 38.171429 | 93 | 0.741392 |
acf74fe9afae913a13391d3ebc8c88f7c95346dd | 1,723 | py | Python | source/models/vggish_multi_classifier.py | microsoft/Aura | d95ae0067bcd82e5952e8eed0e46b1a5eaaa7031 | [
"MIT"
] | 1 | 2022-03-02T00:21:33.000Z | 2022-03-02T00:21:33.000Z | source/models/vggish_multi_classifier.py | microsoft/Aura | d95ae0067bcd82e5952e8eed0e46b1a5eaaa7031 | [
"MIT"
] | null | null | null | source/models/vggish_multi_classifier.py | microsoft/Aura | d95ae0067bcd82e5952e8eed0e46b1a5eaaa7031 | [
"MIT"
] | 2 | 2022-03-15T03:12:02.000Z | 2022-03-20T20:49:02.000Z | import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense, Dropout
from .vggish import VGGish
class VGGishMultiTask(Model):
"""
    Logistic-type classifier using VGGish as a feature extractor. Predicts noise type and SNR level.
"""
def __init__(self, extract_init=None, nclass=520, output_bias=None):
super().__init__()
self.vggish = VGGish()
if extract_init is not None:
self.vggish.load_weights(extract_init)
self.eps = 10**-8
if output_bias is not None:
output_bias = tf.keras.initializers.Constant(output_bias)
self.dense1 = Dense(1024, activation='relu')
self.dense2 = Dense(128, activation='relu', name='embedding_layer')
self.final = Dense(nclass, activation='sigmoid', bias_initializer=output_bias)
self.dropout = Dropout(rate=0.3)
self.attention = Dense(nclass, activation='softmax')
self.snr = Dense(1)
def estimate(self, x):
x = tf.expand_dims(x, axis=3)
_, embed = self.vggish(x)
embed = tf.reduce_mean(embed, 2)
x = self.dense1(embed)
x = self.dropout(x)
x = self.dense2(x)
x = self.dropout(x)
out = self.final(x) #(B, time, nlabels)
att = self.attention(x) #(B, time, units)
att = tf.clip_by_value(att, self.eps, 1. - self.eps)
att_norm = tf.reduce_sum(att, 1, keepdims=True)
out = tf.reduce_sum(out * att / att_norm, 1)
snr = self.snr(x) #(B, time, 1)
snr = tf.reduce_mean(snr, 1)
return out, snr, x
def call(self, x):
out, snr, latent = self.estimate(x)
return out, snr | 27.790323 | 98 | 0.613465 |
acf750ddf5f2219de9fc998fc4a4fc5733f4b97e | 13,571 | py | Python | tensorflow/contrib/framework/python/framework/tensor_util.py | topsun888/tensorflow | bad7c50b9dc9789ad7dd0a62daca40b7269841ed | [
"Apache-2.0"
] | 2 | 2019-07-05T15:17:01.000Z | 2020-04-16T07:25:56.000Z | tensorflow/contrib/framework/python/framework/tensor_util.py | topsun888/tensorflow | bad7c50b9dc9789ad7dd0a62daca40b7269841ed | [
"Apache-2.0"
] | 1 | 2021-04-12T03:51:59.000Z | 2021-04-12T03:51:59.000Z | tensorflow/contrib/framework/python/framework/tensor_util.py | topsun888/tensorflow | bad7c50b9dc9789ad7dd0a62daca40b7269841ed | [
"Apache-2.0"
] | 5 | 2018-02-27T00:34:23.000Z | 2022-02-28T16:38:08.000Z | # Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tensor utility functions."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import ops
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import control_flow_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.ops import variables
__all__ = [
'assert_same_float_dtype',
'assert_scalar_int',
'convert_to_tensor_or_sparse_tensor',
'is_tensor',
'reduce_sum_n',
'remove_squeezable_dimensions',
'with_shape',
'with_same_shape']
def _assert_same_base_type(items, expected_type=None):
r"""Asserts all items are of the same base type.
Args:
items: List of graph items (e.g., `Variable`, `Tensor`, `SparseTensor`,
`Operation`, or `IndexedSlices`). Can include `None` elements, which
will be ignored.
expected_type: Expected type. If not specified, assert all items are
of the same base type.
Returns:
Validated type, or none if neither expected_type nor items provided.
Raises:
ValueError: If any types do not match.
"""
original_item_str = None
for item in items:
if item is not None:
item_type = item.dtype.base_dtype
if not expected_type:
expected_type = item_type
original_item_str = item.name if hasattr(item, 'name') else str(item)
elif expected_type != item_type:
raise ValueError('%s, type=%s, must be of the same type (%s)%s.' % (
item.name if hasattr(item, 'name') else str(item),
item_type, expected_type,
(' as %s' % original_item_str) if original_item_str else ''))
return expected_type
def assert_same_float_dtype(tensors=None, dtype=None):
"""Validate and return float type based on `tensors` and `dtype`.
For ops such as matrix multiplication, inputs and weights must be of the
same float type. This function validates that all `tensors` are the same type,
validates that type is `dtype` (if supplied), and returns the type. Type must
be `dtypes.float32` or `dtypes.float64`. If neither `tensors` nor
`dtype` is supplied, default to `dtypes.float32`.
Args:
tensors: Tensors of input values. Can include `None` elements, which will be
ignored.
dtype: Expected type.
Returns:
Validated type.
Raises:
ValueError: if neither `tensors` nor `dtype` is supplied, or result is not
float.
"""
if tensors:
dtype = _assert_same_base_type(tensors, dtype)
if not dtype:
dtype = dtypes.float32
elif not dtype.is_floating:
raise ValueError('Expected float, got %s.' % dtype)
return dtype
def assert_scalar_int(tensor):
"""Assert `tensor` is 0-D, of type `tf.int32` or `tf.int64`.
Args:
tensor: Tensor to test.
Returns:
`tensor`, for chaining.
Raises:
ValueError: if `tensor` is not 0-D, of type `tf.int32` or `tf.int64`.
"""
data_type = tensor.dtype
if data_type.base_dtype not in [dtypes.int32, dtypes.int64]:
raise ValueError('Unexpected type %s for %s.' % (data_type, tensor.name))
shape = tensor.get_shape()
if shape.ndims != 0:
raise ValueError('Unexpected shape %s for %s.' % (shape, tensor.name))
return tensor
def reduce_sum_n(tensors, name=None):
"""Reduce tensors to a scalar sum.
This reduces each tensor in `tensors` to a scalar via `tf.reduce_sum`, then
adds them via `tf.add_n`.
Args:
tensors: List of tensors, all of the same numeric type.
name: Tensor name, and scope for all other ops.
Returns:
Total loss tensor, or None if no losses have been configured.
Raises:
ValueError: if `losses` is missing or empty.
"""
if not tensors:
raise ValueError('No tensors provided.')
tensors = [math_ops.reduce_sum(t, name='%s/sum' % t.op.name) for t in tensors]
if len(tensors) == 1:
return tensors[0]
with ops.name_scope(name, 'reduce_sum_n', tensors) as scope:
return math_ops.add_n(tensors, name=scope)
def remove_squeezable_dimensions(predictions, labels):
"""Squeeze last dim if ranks of `predictions` and `labels` differ by 1.
This will use static shape if available. Otherwise, it will add graph
operations, which could result in a performance hit.
Args:
predictions: Predicted values, a `Tensor` of arbitrary dimensions.
labels: Label values, a `Tensor` whose dimensions match `predictions`.
Returns:
Tuple of `predictions` and `labels`, possibly with last dim squeezed.
"""
predictions = ops.convert_to_tensor(predictions)
labels = ops.convert_to_tensor(labels)
predictions_shape = predictions.get_shape()
predictions_rank = predictions_shape.ndims
labels_shape = labels.get_shape()
labels_rank = labels_shape.ndims
if (labels_rank is not None) and (predictions_rank is not None):
# Use static rank.
rank_diff = predictions_rank - labels_rank
if rank_diff == -1:
labels = array_ops.squeeze(labels, [-1])
elif rank_diff == 1:
predictions = array_ops.squeeze(predictions, [-1])
return predictions, labels
# Use dynamic rank.
rank_diff = array_ops.rank(predictions) - array_ops.rank(labels)
if (predictions_rank is None) or (
predictions_shape.dims[-1].is_compatible_with(1)):
predictions = control_flow_ops.cond(
math_ops.equal(1, rank_diff),
lambda: array_ops.squeeze(predictions, [-1]),
lambda: predictions)
if (labels_rank is None) or (
labels_shape.dims[-1].is_compatible_with(1)):
labels = control_flow_ops.cond(
math_ops.equal(-1, rank_diff),
lambda: array_ops.squeeze(labels, [-1]),
lambda: labels)
return predictions, labels
def _all_equal(tensor0, tensor1):
with ops.name_scope('all_equal', values=[tensor0, tensor1]) as scope:
return math_ops.reduce_all(
math_ops.equal(tensor0, tensor1, name='equal'), name=scope)
def _is_rank(expected_rank, actual_tensor):
"""Returns whether actual_tensor's rank is expected_rank.
Args:
expected_rank: Integer defining the expected rank, or tensor of same.
actual_tensor: Tensor to test.
Returns:
New tensor.
"""
with ops.name_scope('is_rank', values=[actual_tensor]) as scope:
expected = ops.convert_to_tensor(expected_rank, name='expected')
actual = array_ops.rank(actual_tensor, name='actual')
return math_ops.equal(expected, actual, name=scope)
def _is_shape(expected_shape, actual_tensor, actual_shape=None):
"""Returns whether actual_tensor's shape is expected_shape.
Args:
expected_shape: Integer list defining the expected shape, or tensor of same.
actual_tensor: Tensor to test.
actual_shape: Shape of actual_tensor, if we already have it.
Returns:
New tensor.
"""
with ops.name_scope('is_shape', values=[actual_tensor]) as scope:
is_rank = _is_rank(array_ops.size(expected_shape), actual_tensor)
if actual_shape is None:
actual_shape = array_ops.shape(actual_tensor, name='actual')
shape_equal = _all_equal(
ops.convert_to_tensor(expected_shape, name='expected'),
actual_shape)
return math_ops.logical_and(is_rank, shape_equal, name=scope)
def _assert_shape_op(expected_shape, actual_tensor):
"""Asserts actual_tensor's shape is expected_shape.
Args:
expected_shape: List of integers defining the expected shape, or tensor of
same.
actual_tensor: Tensor to test.
Returns:
New assert tensor.
"""
with ops.name_scope('assert_shape', values=[actual_tensor]) as scope:
actual_shape = array_ops.shape(actual_tensor, name='actual')
is_shape = _is_shape(expected_shape, actual_tensor, actual_shape)
return control_flow_ops.Assert(
is_shape, [
'Wrong shape for %s [expected] [actual].' % actual_tensor.name,
expected_shape,
actual_shape
], name=scope)
def with_same_shape(expected_tensor, tensor):
"""Assert tensors are the same shape, from the same graph.
Args:
expected_tensor: Tensor with expected shape.
tensor: Tensor of actual values.
Returns:
Tuple of (actual_tensor, label_tensor), possibly with assert ops added.
"""
with ops.name_scope('%s/' % tensor.op.name, values=[expected_tensor, tensor]):
tensor_shape = expected_tensor.get_shape()
expected_shape = (
tensor_shape.as_list() if tensor_shape.is_fully_defined()
else array_ops.shape(expected_tensor, name='expected_shape'))
return with_shape(expected_shape, tensor)
def is_tensor(x):
"""Check for tensor types.
Check whether an object is a tensor. Equivalent to
`isinstance(x, [tf.Tensor, tf.SparseTensor, tf.Variable])`.
Args:
x: An python object to check.
Returns:
`True` if `x` is a tensor, `False` if not.
"""
tensor_types = (ops.Tensor, ops.SparseTensor, variables.Variable)
return isinstance(x, tensor_types)
def with_shape(expected_shape, tensor):
"""Asserts tensor has expected shape.
If tensor shape and expected_shape, are fully defined, assert they match.
Otherwise, add assert op that will validate the shape when tensor is
evaluated, and set shape on tensor.
Args:
expected_shape: Expected shape to assert, as a 1D array of ints, or tensor
of same.
tensor: Tensor whose shape we're validating.
Returns:
tensor, perhaps with a dependent assert operation.
Raises:
ValueError: if tensor has an invalid shape.
"""
if isinstance(tensor, ops.SparseTensor):
raise ValueError('SparseTensor not supported.')
# Shape type must be 1D int32.
if is_tensor(expected_shape):
if expected_shape.dtype.base_dtype != dtypes.int32:
raise ValueError(
'Invalid dtype %s for shape %s expected of tensor %s.' % (
expected_shape.dtype, expected_shape, tensor.name))
if isinstance(expected_shape, (list, tuple)):
if not expected_shape:
expected_shape = np.asarray([], dtype=np.int32)
else:
np_expected_shape = np.asarray(expected_shape)
expected_shape = (
np.asarray(expected_shape, dtype=np.int32)
if np_expected_shape.dtype == np.int64 else np_expected_shape)
if isinstance(expected_shape, np.ndarray):
if expected_shape.ndim > 1:
raise ValueError(
'Invalid rank %s for shape %s expected of tensor %s.' % (
expected_shape.ndim, expected_shape, tensor.name))
if expected_shape.dtype != np.int32:
raise ValueError(
'Invalid dtype %s for shape %s expected of tensor %s.' % (
expected_shape.dtype, expected_shape, tensor.name))
actual_shape = tensor.get_shape()
if not actual_shape.is_fully_defined() or is_tensor(expected_shape):
with ops.name_scope('%s/' % tensor.op.name, values=[tensor]):
if not is_tensor(expected_shape) and (len(expected_shape) < 1):
# TODO(irving): Remove scalar special case
return array_ops.reshape(tensor, [])
with ops.control_dependencies([_assert_shape_op(expected_shape, tensor)]):
result = array_ops.identity(tensor)
if not is_tensor(expected_shape):
result.set_shape(expected_shape)
return result
if (not is_tensor(expected_shape) and
not actual_shape.is_compatible_with(expected_shape)):
if (len(expected_shape) < 1) and actual_shape.is_compatible_with([1]):
# TODO(irving): Remove scalar special case.
with ops.name_scope('%s/' % tensor.op.name, values=[tensor]):
return array_ops.reshape(tensor, [])
raise ValueError('Invalid shape for tensor %s, expected %s, got %s.' % (
tensor.name, expected_shape, actual_shape))
return tensor
def convert_to_tensor_or_sparse_tensor(
value, dtype=None, name=None, as_ref=False):
"""Converts value to a `SparseTensor` or `Tensor`.
Args:
value: A `SparseTensor`, `SparseTensorValue`, or an object whose type has a
registered `Tensor` conversion function.
dtype: Optional element type for the returned tensor. If missing, the
type is inferred from the type of `value`.
name: Optional name to use if a new `Tensor` is created.
as_ref: True if we want the result as a ref tensor. Only used if a new
`Tensor` is created.
Returns:
A `SparseTensor` or `Tensor` based on `value`.
Raises:
RuntimeError: If result type is incompatible with `dtype`.
"""
if dtype is not None:
dtype = dtypes.as_dtype(dtype)
if isinstance(value, ops.SparseTensorValue):
value = ops.SparseTensor.from_value(value)
if isinstance(value, ops.SparseTensor):
if dtype and not dtype.is_compatible_with(value.dtype):
raise RuntimeError(
'Sparse dtype: requested = %s, actual = %s' % (
dtype.name, value.dtype.name))
return value
return ops.convert_to_tensor(value, dtype=dtype, name=name, as_ref=as_ref)
| 35.619423 | 80 | 0.70098 |
acf750f50c5ed219b355a45310fbda102cf38a52 | 3,505 | py | Python | lenstronomy/Cosmo/cosmo_solver.py | guoxiaowhu/lenstronomy | dcdfc61ce5351ac94565228c822f1c94392c1ad6 | [
"MIT"
] | 1 | 2018-11-08T12:33:26.000Z | 2018-11-08T12:33:26.000Z | lenstronomy/Cosmo/cosmo_solver.py | guoxiaowhu/lenstronomy | dcdfc61ce5351ac94565228c822f1c94392c1ad6 | [
"MIT"
] | null | null | null | lenstronomy/Cosmo/cosmo_solver.py | guoxiaowhu/lenstronomy | dcdfc61ce5351ac94565228c822f1c94392c1ad6 | [
"MIT"
] | null | null | null | __author__ = 'sibirrer'
import scipy.optimize
import scipy.interpolate as interpolate
import numpy as np
from astropy.cosmology import FlatLambdaCDM
from lenstronomy.Cosmo.lens_cosmo import LensCosmo
class SolverUtil(object):
"""
util functions
"""
def __init__(self, z_d, z_s):
self.z_d = z_d
self.z_s = z_s
def cosmo2Dd_Ds_Dds(self, H_0, omega_m):
"""
:param H_0: Hubble constant
:param omega_m: matter density
:return: angular diameter distances Dd and Ds/Dds
"""
cosmo = FlatLambdaCDM(H0=H_0, Om0=omega_m, Ob0=0.)
lensCosmo = LensCosmo(z_lens=self.z_d, z_source=self.z_s, cosmo=cosmo)
Dd = lensCosmo.D_d
Ds = lensCosmo.D_s
Dds = lensCosmo.D_ds
return Dd, Ds/Dds
class SolverFlatCosmo(SolverUtil):
"""
class to solve multidimensional non-linear equations to determine the cosmological parameters H0 and omega_m given
the angular diameter distance relations
"""
def F(self, x, Dd, Ds_Dds):
"""
:param x: array of parameters (H_0, omega_m)
:return:
"""
[H_0, omega_m] = x
omega_m = abs(omega_m)%1
Dd_new, Ds_Dds_new = self.cosmo2Dd_Ds_Dds(H_0, omega_m)
y = np.zeros(2)
y[0] = Dd - Dd_new
y[1] = Ds_Dds - Ds_Dds_new
return y
def solve(self, init, Dd, Ds_Dds):
x = scipy.optimize.fsolve(self.F, init, args=(Dd, Ds_Dds), xtol=1.49012e-08, factor=0.1)
x[1] = abs(x[1])%1
y = self.F(x, Dd, Ds_Dds)
if abs(y[0]) >= 0.1 or abs(y[1]) > 0.1:
x = np.array([-1, -1])
return x
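# Illustrative usage of SolverFlatCosmo (a hedged sketch, not part of the
# original module; the redshifts and cosmology values below are placeholders):
#
#   solver = SolverFlatCosmo(z_d=0.5, z_s=2.0)
#   Dd, Ds_Dds = solver.cosmo2Dd_Ds_Dds(H_0=70., omega_m=0.3)
#   H0_fit, omega_m_fit = solver.solve(init=np.array([60., 0.5]), Dd=Dd, Ds_Dds=Ds_Dds)
#   # should recover roughly (70, 0.3); returns (-1, -1) if the residuals are too large.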
class InvertCosmo(SolverUtil):
"""
class to do an interpolation and call the inverse of this interpolation to get H_0 and omega_m
"""
def _make_interpolation(self):
"""
creates an interpolation grid in H_0, omega_m and computes quantities in Dd and Ds_Dds
:return:
"""
H0_range = np.linspace(10, 100, 90)
omega_m_range = np.linspace(0.05, 1, 95)
grid2d = np.dstack(np.meshgrid(H0_range, omega_m_range)).reshape(-1, 2)
H0_grid = grid2d[:, 0]
omega_m_grid = grid2d[:, 1]
Dd_grid = np.zeros_like(H0_grid)
Ds_Dds_grid = np.zeros_like(H0_grid)
for i in range(len(H0_grid)):
Dd, Ds_Dds = self.cosmo2Dd_Ds_Dds(H0_grid[i], omega_m_grid[i])
Dd_grid[i] = Dd
Ds_Dds_grid[i] = Ds_Dds
self._f_H0 = interpolate.interp2d(Dd_grid, Ds_Dds_grid, H0_grid, kind='linear', copy=False, bounds_error=False, fill_value=-1)
print("H0 interpolation done")
self._f_omega_m = interpolate.interp2d(Dd_grid, Ds_Dds_grid, omega_m_grid, kind='linear', copy=False, bounds_error=False, fill_value=-1)
print("omega_m interpolation done")
def get_cosmo(self, Dd, Ds_Dds):
"""
return the values of H0 and omega_m computed with an interpolation
        :param Dd: float
:param Ds_Dds: float
:return:
"""
if not hasattr(self, '_f_H0') or not hasattr(self, '_f_omega_m'):
self._make_interpolation()
H0 = self._f_H0(Dd, Ds_Dds)
print(H0, 'H0')
omega_m = self._f_omega_m(Dd, Ds_Dds)
Dd_new, Ds_Dds_new = self.cosmo2Dd_Ds_Dds(H0[0], omega_m[0])
if abs(Dd - Dd_new)/Dd > 0.01 or abs(Ds_Dds - Ds_Dds_new)/Ds_Dds > 0.01:
return [-1], [-1]
else:
return H0[0], omega_m[0] | 34.029126 | 144 | 0.606277 |
acf751e03f22f6b06b04b5088e9c4be0f5b24458 | 3,865 | py | Python | models/SimpleDQNv2/train_model.py | NightCrawler96/DQNChessEngine | 505b793616ff3004b83a02d8d6b89dfd69939072 | [
"MIT"
] | null | null | null | models/SimpleDQNv2/train_model.py | NightCrawler96/DQNChessEngine | 505b793616ff3004b83a02d8d6b89dfd69939072 | [
"MIT"
] | 5 | 2019-12-16T21:56:39.000Z | 2022-02-10T00:15:22.000Z | models/SimpleDQNv2/train_model.py | NightCrawler96/DQNChessEngine | 505b793616ff3004b83a02d8d6b89dfd69939072 | [
"MIT"
] | null | null | null | import numpy as np
import keras
from keras.layers import Dense
import chess_environment.chessboard as cb
from dqn_tools.memory import SimpleMemory
from dqn_tools.trainers import DQNTrainer
from training_tools import DQNChessRecord
# temporary simple model for testing base concept
model = keras.Sequential([
Dense(256, activation='relu', input_shape=(None, 384)),
Dense(512, activation='relu'),
Dense(1, activation='relu')
])
model.compile(optimizer='adam', loss='mse', metrics=['accuracy'])
def choose_action(model: keras.Model, possible_moves, possible_states, fens):
highest_prize = 0
best_move = None
best_state = None
best_state_fen = None
for m, s, f in zip(possible_moves, possible_states, fens):
prize = model.predict(np.array(s).reshape((1, 1, 384)))
if prize > highest_prize or best_move is None:
highest_prize = prize
best_move = m
best_state = s
best_state_fen = f
return best_move, best_state, best_state_fen
def action(acting_model: keras.Model, models_memory: SimpleMemory, environment: cb.ChessBoard, epsilon):
flip = not environment.current_turn()
moves, states, fens = environment.get_moves(flip=flip)
best_move = None
best_state = None
best_state_fen = None
if np.random.uniform(0, 1) < epsilon:
moves_states_list = list(map(list, zip(moves, states, fens)))
choices = len(moves_states_list)
if choices > 1:
random_element = moves_states_list[np.random.randint(0, choices)]
else:
random_element = moves_states_list[0]
best_move, best_state, best_state_fen = random_element
else:
best_move, best_state, best_state_fen = choose_action(acting_model, moves, states, fens)
# make move
environment.make_move(best_move, flip)
real_prize = environment.get_results()
best_state = np.array(best_state).reshape((1, 384))
real_prize = np.array([real_prize]).reshape((1, 1))
if real_prize == cb.IGNORE_GO:
return
record = DQNChessRecord()
record.state = best_state
record.fen = best_state_fen
record.reward = real_prize
models_memory.add(record)
def training(
acting_model: keras.Model,
target_model: keras.Model,
models_memory: SimpleMemory,
batch_size: int,
gamma: float):
training_batch = models_memory.get_batch(batch_size, min_rows=96)
if training_batch is not None:
samples = [[record.state, record.reward, record.fen] for record in training_batch]
states, prizes, fens = list(map(list, zip(*samples)))
reinforced_prizes = []
for p, f in zip(prizes, fens):
training_board = cb.ChessBoard(starting_fen=f)
if not training_board.game_over():
next_moves, next_states, next_fens = training_board.get_moves()
_, chosen_state, _ = choose_action(acting_model, next_moves, next_states, next_fens)
estimated_next_prize = target_model.predict(np.array(chosen_state).reshape((1, 1, 384)))[0]
reinforced_p = p + gamma * estimated_next_prize
else:
reinforced_p = p
reinforced_prizes.append(reinforced_p)
states = np.array(states)
reinforced_prizes = np.array(reinforced_prizes)
acting_model.train_on_batch(states, reinforced_prizes)
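# Worked example of the bootstrapped target above (illustrative numbers, not
# from a real run): with gamma = 0.99, a stored reward p = 0 and a target-network
# estimate of 1.0 for the chosen next state, the training target becomes
# 0 + 0.99 * 1.0 = 0.99, i.e. a one-step DQN-style value estimate.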
memory = SimpleMemory(int(1e+5))
model_trainer = DQNTrainer(model, memory, action, training)
board = cb.ChessBoard()
TRAINING_STEPS = int(2e+5)
for i in range(TRAINING_STEPS):
print("Step {} of {}".format(i+1, TRAINING_STEPS))
model_trainer.take_action(board, 0.3)
model_trainer.train(batch_size=32, gamma=0.99, theta=0.005)
if i % 1000 == 0:
model_trainer.save("./tmp_model.h5")
model_trainer.save("./model.h5")
| 36.809524 | 107 | 0.678396 |
acf7522c9420b4da09940c2d14a18b3f89aebda3 | 521 | py | Python | code/python/FactSetEntityReportBuilder/v1/fds/sdk/FactSetEntityReportBuilder/apis/__init__.py | factset/enterprise-sdk | 3fd4d1360756c515c9737a0c9a992c7451d7de7e | [
"Apache-2.0"
] | 6 | 2022-02-07T16:34:18.000Z | 2022-03-30T08:04:57.000Z | code/python/FactSetEntityReportBuilder/v1/fds/sdk/FactSetEntityReportBuilder/apis/__init__.py | factset/enterprise-sdk | 3fd4d1360756c515c9737a0c9a992c7451d7de7e | [
"Apache-2.0"
] | 2 | 2022-02-07T05:25:57.000Z | 2022-03-07T14:18:04.000Z | code/python/FactSetEntityReportBuilder/v1/fds/sdk/FactSetEntityReportBuilder/apis/__init__.py | factset/enterprise-sdk | 3fd4d1360756c515c9737a0c9a992c7451d7de7e | [
"Apache-2.0"
] | null | null | null |
# flake8: noqa
# Import all APIs into this package.
# If you have many APIs here with many many models used in each API this may
# raise a `RecursionError`.
# In order to avoid this, import only the API that you directly need like:
#
# from .api.entity_structure_api import EntityStructureApi
#
# or import this package, but before doing it, use:
#
# import sys
# sys.setrecursionlimit(n)
# Import APIs into API package:
from fds.sdk.FactSetEntityReportBuilder.api.entity_structure_api import EntityStructureApi
| 28.944444 | 90 | 0.767754 |
acf75234d0f05885313a07ebbbaf4c81421f8294 | 3,997 | py | Python | tests/test_terminal.py | maldieve/luma.core | a93d5b6bcd70ba7fdc818ef940bb130d4331ce49 | [
"MIT"
] | null | null | null | tests/test_terminal.py | maldieve/luma.core | a93d5b6bcd70ba7fdc818ef940bb130d4331ce49 | [
"MIT"
] | null | null | null | tests/test_terminal.py | maldieve/luma.core | a93d5b6bcd70ba7fdc818ef940bb130d4331ce49 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright (c) 2017-18 Richard Hull and contributors
# See LICENSE.rst for details.
"""
Tests for the :py:class:`luma.core.virtual.terminal` class.
"""
from PIL import Image
from luma.core.device import dummy
from luma.core.virtual import terminal
from helpers import (get_reference_image, assert_identical_image,
get_reference_font)
def assert_text(device, term, reference_img, text, save=None):
img_path = get_reference_image(reference_img)
with open(img_path, 'rb') as fp:
reference = Image.open(fp)
for line in text:
term.println(line)
if save is not None:
device.image.save(save)
assert_identical_image(reference, device.image)
def test_default_text():
reference = 'quick_brown_fox.png'
device = dummy()
term = terminal(device)
assert_text(device, term, reference, [
"The quick brown fox jumps over the lazy dog"
])
def test_wrapped_text():
reference = 'quick_brown_fox_word_wrap.png'
device = dummy()
term = terminal(device, word_wrap=True, animate=False)
assert_text(device, term, reference, [
"The quick brown fox jumps over the lazy dog"
])
def test_tab_alignment():
reference = 'tab_align.png'
device = dummy()
term = terminal(device, animate=False)
assert_text(device, term, reference, [
"1\t32\t999",
"999\t1\t32"
])
def test_control_chars():
reference = 'control_chars.png'
device = dummy()
term = terminal(device, animate=False)
assert_text(device, term, reference, [
'foo\rbar\bspam\teggs\n\nham and cheese on rye'
])
def test_scrolling():
reference = 'scroll_text.png'
device = dummy()
term = terminal(device, animate=False)
assert_text(device, term, reference, [
"it oozed over the blackness, and heard Harris's sleepy voice asking " +
"where we drew near it, so they spread their handkerchiefs on the back " +
"of Harris and Harris's friend as to avoid running down which, we managed " +
"to get out of here while this billing and cooing is on. We'll go down " +
"to eat vegetables. He said they were demons."
])
def test_alt_colors():
reference = 'alt_colors.png'
device = dummy()
term = terminal(device, color="blue", bgcolor="grey", animate=False)
assert_text(device, term, reference, [
"blue on grey"
])
def test_ansi_colors():
reference = 'ansi_colors.png'
device = dummy()
term = terminal(device, animate=False)
assert_text(device, term, reference, [
"hello \033[31mworld\033[0m ansi colors here!",
"this is \033[7mreversed\033[7m!",
"\033[44;37mBlue\033[0m \033[46;30mCyan"
])
def test_ansi_colors_wrapped():
reference = 'ansi_colors_wrapped.png'
device = dummy()
term = terminal(device, word_wrap=True, animate=False)
assert_text(device, term, reference, [
"hello \033[31mworld\033[0m ansi colors\t\033[32mwrap\033[0m\t?",
"this is \033[7mreversed\033[7m!",
"\033[43;30mYellow\033[0m \033[45;37mMagenta"
])
def test_ansi_colors_scroll():
reference = 'ansi_colors_scroll.png'
device = dummy()
term = terminal(device, word_wrap=True, animate=False)
assert_text(device, term, reference, [
"hello \033[31mworld\033[0m ansi colors\t\033[32mwrap\033[0m\t?",
"this is \033[7mreversed\033[7m!",
"\033[43;30mYellow\033[0m \033[44;37mBlue abcdefg hijklmn",
"\033[41;30mRed\033[0m \033[42;37mGreen"
])
def test_CP437_charset():
reference = 'accented_charset.png'
unicode_font = get_reference_font('DejaVuSans.ttf')
device = dummy()
term = terminal(device, font=unicode_font, word_wrap=False, animate=False,
color="blue", bgcolor="white")
assert_text(device, term, reference, [
u"\033[31mFußgängerunterführungen\033[0m Текст на русском"
])
| 27.376712 | 85 | 0.658744 |
acf752d2f10b9559e3d0f26ad74f3854a2a35c2a | 11,605 | py | Python | IRIS_data_download/IRIS_download_support/obspy/clients/iris/tests/test_client.py | earthinversion/Fnet_IRIS_data_automated_download | 09a6e0c992662feac95744935e038d1c68539fa1 | [
"MIT"
] | 2 | 2020-03-05T01:03:01.000Z | 2020-12-17T05:04:07.000Z | IRIS_data_download/IRIS_download_support/obspy/clients/iris/tests/test_client.py | earthinversion/Fnet_IRIS_data_automated_download | 09a6e0c992662feac95744935e038d1c68539fa1 | [
"MIT"
] | 4 | 2021-03-31T19:25:55.000Z | 2021-12-13T20:32:46.000Z | IRIS_data_download/IRIS_download_support/obspy/clients/iris/tests/test_client.py | earthinversion/Fnet_IRIS_data_automated_download | 09a6e0c992662feac95744935e038d1c68539fa1 | [
"MIT"
] | 2 | 2020-09-08T19:33:40.000Z | 2021-04-05T09:47:50.000Z | # -*- coding: utf-8 -*-
"""
The obspy.clients.iris.client test suite.
"""
from __future__ import (absolute_import, division, print_function,
unicode_literals)
from future.builtins import * # NOQA @UnusedWildImport
import os
import unittest
import numpy as np
from obspy.core.utcdatetime import UTCDateTime
from obspy.core.util import NamedTemporaryFile
from obspy.clients.iris import Client
class ClientTestCase(unittest.TestCase):
"""
Test cases for obspy.clients.iris.client.Client.
"""
def setUp(self):
# directory where the test files are located
self.path = os.path.dirname(__file__)
def test_sacpz(self):
"""
Fetches SAC poles and zeros information.
"""
client = Client()
# 1
t1 = UTCDateTime("2005-01-01")
t2 = UTCDateTime("2008-01-01")
result = client.sacpz("IU", "ANMO", "00", "BHZ", t1, t2)
# drop lines with creation date (current time during request)
result = result.splitlines()
sacpz_file = os.path.join(self.path, 'data', 'IU.ANMO.00.BHZ.sacpz')
with open(sacpz_file, 'rb') as fp:
expected = fp.read().splitlines()
result.pop(5)
expected.pop(5)
self.assertEqual(result, expected)
# 2 - empty location code
dt = UTCDateTime("2002-11-01")
result = client.sacpz('UW', 'LON', '', 'BHZ', dt)
self.assertIn(b"* STATION (KSTNM): LON", result)
self.assertIn(b"* LOCATION (KHOLE): ", result)
# 3 - empty location code via '--'
result = client.sacpz('UW', 'LON', '--', 'BHZ', dt)
self.assertIn(b"* STATION (KSTNM): LON", result)
self.assertIn(b"* LOCATION (KHOLE): ", result)
def test_distaz(self):
"""
Tests distance and azimuth calculation between two points on a sphere.
"""
client = Client()
# normal request
result = client.distaz(stalat=1.1, stalon=1.2, evtlat=3.2, evtlon=1.4)
self.assertAlmostEqual(result['distance'], 2.10256)
self.assertAlmostEqual(result['distancemeters'], 233272.79028)
self.assertAlmostEqual(result['backazimuth'], 5.46944)
self.assertAlmostEqual(result['azimuth'], 185.47695)
self.assertEqual(result['ellipsoidname'], 'WGS84')
# w/o kwargs
result = client.distaz(1.1, 1.2, 3.2, 1.4)
self.assertAlmostEqual(result['distance'], 2.10256)
self.assertAlmostEqual(result['distancemeters'], 233272.79028)
self.assertAlmostEqual(result['backazimuth'], 5.46944)
self.assertAlmostEqual(result['azimuth'], 185.47695)
self.assertEqual(result['ellipsoidname'], 'WGS84')
# missing parameters
self.assertRaises(Exception, client.distaz, stalat=1.1)
self.assertRaises(Exception, client.distaz, 1.1)
self.assertRaises(Exception, client.distaz, stalat=1.1, stalon=1.2)
self.assertRaises(Exception, client.distaz, 1.1, 1.2)
def test_flinnengdahl(self):
"""
Tests calculation of Flinn-Engdahl region code or name.
"""
client = Client()
# code
result = client.flinnengdahl(lat=-20.5, lon=-100.6, rtype="code")
self.assertEqual(result, 683)
# w/o kwargs
result = client.flinnengdahl(-20.5, -100.6, "code")
self.assertEqual(result, 683)
# region
result = client.flinnengdahl(lat=42, lon=-122.24, rtype="region")
self.assertEqual(result, 'OREGON')
# w/o kwargs
result = client.flinnengdahl(42, -122.24, "region")
self.assertEqual(result, 'OREGON')
# both
result = client.flinnengdahl(lat=-20.5, lon=-100.6, rtype="both")
self.assertEqual(result, (683, 'SOUTHEAST CENTRAL PACIFIC OCEAN'))
# w/o kwargs
result = client.flinnengdahl(-20.5, -100.6, "both")
self.assertEqual(result, (683, 'SOUTHEAST CENTRAL PACIFIC OCEAN'))
# default rtype
result = client.flinnengdahl(lat=42, lon=-122.24)
self.assertEqual(result, (32, 'OREGON'))
# w/o kwargs
# outside boundaries
self.assertRaises(Exception, client.flinnengdahl, lat=-90.1, lon=0)
self.assertRaises(Exception, client.flinnengdahl, lat=90.1, lon=0)
self.assertRaises(Exception, client.flinnengdahl, lat=0, lon=-180.1)
self.assertRaises(Exception, client.flinnengdahl, lat=0, lon=180.1)
def test_traveltime(self):
"""
Tests calculation of travel-times for seismic phases.
"""
client = Client()
result = client.traveltime(
evloc=(-36.122, -72.898), evdepth=22.9,
staloc=[(-33.45, -70.67), (47.61, -122.33), (35.69, 139.69)])
self.assertTrue(result.startswith(b'Model: iasp91'))
def test_evalresp(self):
"""
Tests evaluating instrument response information.
"""
client = Client()
dt = UTCDateTime("2005-01-01")
# plot as PNG file
with NamedTemporaryFile() as tf:
tempfile = tf.name
client.evalresp(network="IU", station="ANMO", location="00",
channel="BHZ", time=dt, output='plot',
filename=tempfile)
with open(tempfile, 'rb') as fp:
self.assertEqual(fp.read(4)[1:4], b'PNG')
# plot-amp as PNG file
with NamedTemporaryFile() as tf:
tempfile = tf.name
client.evalresp(network="IU", station="ANMO", location="00",
channel="BHZ", time=dt, output='plot-amp',
filename=tempfile)
with open(tempfile, 'rb') as fp:
self.assertEqual(fp.read(4)[1:4], b'PNG')
# plot-phase as PNG file
with NamedTemporaryFile() as tf:
tempfile = tf.name
client.evalresp(network="IU", station="ANMO", location="00",
channel="BHZ", time=dt, output='plot-phase',
filename=tempfile)
with open(tempfile, 'rb') as fp:
self.assertEqual(fp.read(4)[1:4], b'PNG')
# fap as ASCII file
with NamedTemporaryFile() as tf:
tempfile = tf.name
client.evalresp(network="IU", station="ANMO", location="00",
channel="BHZ", time=dt, output='fap',
filename=tempfile)
with open(tempfile, 'rt') as fp:
self.assertEqual(fp.readline(),
'1.000000E-05 1.055999E+04 1.792007E+02\n')
# cs as ASCII file
with NamedTemporaryFile() as tf:
tempfile = tf.name
client.evalresp(network="IU", station="ANMO", location="00",
channel="BHZ", time=dt, output='cs',
filename=tempfile)
with open(tempfile, 'rt') as fp:
self.assertEqual(fp.readline(),
'1.000000E-05 -1.055896E+04 1.473054E+02\n')
# fap & def as ASCII file
with NamedTemporaryFile() as tf:
tempfile = tf.name
client.evalresp(network="IU", station="ANMO", location="00",
channel="BHZ", time=dt, output='fap', units='def',
filename=tempfile)
with open(tempfile, 'rt') as fp:
self.assertEqual(fp.readline(),
'1.000000E-05 1.055999E+04 1.792007E+02\n')
# fap & dis as ASCII file
with NamedTemporaryFile() as tf:
tempfile = tf.name
client.evalresp(network="IU", station="ANMO", location="00",
channel="BHZ", time=dt, output='fap', units='dis',
filename=tempfile)
with open(tempfile, 'rt') as fp:
self.assertEqual(fp.readline(),
'1.000000E-05 6.635035E-01 2.692007E+02\n')
# fap & vel as ASCII file
with NamedTemporaryFile() as tf:
tempfile = tf.name
client.evalresp(network="IU", station="ANMO", location="00",
channel="BHZ", time=dt, output='fap', units='vel',
filename=tempfile)
with open(tempfile, 'rt') as fp:
self.assertEqual(fp.readline(),
'1.000000E-05 1.055999E+04 1.792007E+02\n')
# fap & acc as ASCII file
with NamedTemporaryFile() as tf:
tempfile = tf.name
client.evalresp(network="IU", station="ANMO", location="00",
channel="BHZ", time=dt, output='fap', units='acc',
filename=tempfile)
with open(tempfile, 'rt') as fp:
self.assertEqual(fp.readline(),
'1.000000E-05 1.680674E+08 8.920073E+01\n')
# fap as NumPy ndarray
data = client.evalresp(network="IU", station="ANMO", location="00",
channel="BHZ", time=dt, output='fap')
np.testing.assert_array_equal(
data[0], [1.00000000e-05, 1.05599900e+04, 1.79200700e+02])
# cs as NumPy ndarray
data = client.evalresp(network="IU", station="ANMO", location="00",
channel="BHZ", time=dt, output='cs')
np.testing.assert_array_equal(
data[0], [1.00000000e-05, -1.05589600e+04, 1.47305400e+02])
def test_resp(self):
"""
Tests resp Web service interface.
Examples are inspired by https://service.iris.edu/irisws/resp/1/.
"""
client = Client()
# 1
t1 = UTCDateTime("2005-001T00:00:00")
t2 = UTCDateTime("2008-001T00:00:00")
result = client.resp("IU", "ANMO", "00", "BHZ", t1, t2)
self.assertIn(b'B050F03 Station: ANMO', result)
# 2 - empty location code
result = client.resp("UW", "LON", "", "EHZ")
self.assertIn(b'B050F03 Station: LON', result)
self.assertIn(b'B052F03 Location: ??', result)
# 3 - empty location code via '--'
result = client.resp("UW", "LON", "--", "EHZ")
self.assertIn(b'B050F03 Station: LON', result)
self.assertIn(b'B052F03 Location: ??', result)
# 4
dt = UTCDateTime("2010-02-27T06:30:00.000")
result = client.resp("IU", "ANMO", "*", "*", dt)
self.assertIn(b'B050F03 Station: ANMO', result)
def test_timeseries(self):
"""
Tests timeseries Web service interface.
Examples are inspired by https://service.iris.edu/irisws/timeseries/1/.
"""
client = Client()
# 1
t1 = UTCDateTime("2005-001T00:00:00")
t2 = UTCDateTime("2005-001T00:01:00")
# no filter
st1 = client.timeseries("IU", "ANMO", "00", "BHZ", t1, t2)
# instrument corrected
st2 = client.timeseries("IU", "ANMO", "00", "BHZ", t1, t2,
filter=["correct"])
# compare results
self.assertEqual(st1[0].stats.starttime, st2[0].stats.starttime)
self.assertEqual(st1[0].stats.endtime, st2[0].stats.endtime)
self.assertEqual(st1[0].data[0], 24)
self.assertAlmostEqual(st2[0].data[0], -2.8373747e-06)
def suite():
return unittest.makeSuite(ClientTestCase, 'test')
if __name__ == '__main__':
unittest.main(defaultTest='suite')
| 42.981481 | 79 | 0.553468 |
acf752d43c2ad8ebc536b026797d615926570010 | 957 | py | Python | ilf/fuzzers/imitation/int_values.py | ConstantinHvber/ilf | b706f81191508998d443c1c89e8d10028ce4e5d8 | [
"Apache-2.0"
] | 84 | 2019-11-29T08:32:41.000Z | 2022-03-30T01:43:23.000Z | ilf/fuzzers/imitation/int_values.py | edolele/ilf | ddd15f201d451d62b94fb45fee7266fb579ab787 | [
"Apache-2.0"
] | 14 | 2019-12-30T15:54:00.000Z | 2022-03-14T09:37:15.000Z | ilf/fuzzers/imitation/int_values.py | edolele/ilf | ddd15f201d451d62b94fb45fee7266fb579ab787 | [
"Apache-2.0"
] | 20 | 2020-01-04T05:54:33.000Z | 2022-03-29T14:11:43.000Z | INT_VALUES = [
0x0,
0x1,
0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff,
0x8,
0x2,
0x4,
0x3,
0x5,
0x9,
0x10,
0x20,
0x6,
0x8000000000000000000000000000000000000000000000000000000000000000,
0x400000000000000000000,
0x1000000000000000000000,
0x2000,
0x80,
0x4000000000000000000000,
0x800000,
0x40,
0x10000,
0x400,
0x4000000,
0x200000000,
0x800000000000000000000,
0x1000000,
0x20000000,
0x40000,
0x20000,
0x20000000000000000000000,
0x4000000000000,
0x200,
0x80000,
0x8000000000000000000000,
0x8000,
0x800,
0x1000,
0x80000000000,
0x2000000000000000000000,
0x8000000,
1000000000000000000000000001,
0x4000000000000000000,
0x100000000,
0x200000000000000000000,
0x800000000000,
0x80000000,
0x1000000000,
0x100,
0x80000000000000,
0x200000000000,
] | 18.403846 | 71 | 0.680251 |
acf752d905528df6eb9c566ef974f936ce5ff5db | 7,139 | py | Python | train.py | taesung89/deeplab-pytorch | 25db353d10a256f1f9e89675a21f6e59af9407e6 | [
"MIT"
] | 1 | 2018-05-22T22:20:10.000Z | 2018-05-22T22:20:10.000Z | train.py | taesung89/deeplab-pytorch | 25db353d10a256f1f9e89675a21f6e59af9407e6 | [
"MIT"
] | null | null | null | train.py | taesung89/deeplab-pytorch | 25db353d10a256f1f9e89675a21f6e59af9407e6 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# coding: utf-8
#
# Author: Kazuto Nakashima
# URL: http://kazuto1011.github.io
# Created: 2017-11-01
import os.path as osp
import click
import cv2
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import yaml
from addict import Dict
from tensorboardX import SummaryWriter
from torch.autograd import Variable
from torchnet.meter import MovingAverageValueMeter
from tqdm import tqdm
from libs.datasets import CocoStuff10k
from libs.models import DeepLabV2_ResNet101_MSC
from libs.utils.loss import CrossEntropyLoss2d
def get_lr_params(model, key):
# For Dilated FCN
if key == '1x':
for m in model.named_modules():
if 'layer' in m[0]:
if isinstance(m[1], nn.Conv2d):
for p in m[1].parameters():
yield p
# For conv weight in the ASPP module
if key == '10x':
for m in model.named_modules():
if 'aspp' in m[0]:
if isinstance(m[1], nn.Conv2d):
yield m[1].weight
# For conv bias in the ASPP module
if key == '20x':
for m in model.named_modules():
if 'aspp' in m[0]:
if isinstance(m[1], nn.Conv2d):
yield m[1].bias
def poly_lr_scheduler(optimizer, init_lr, iter, lr_decay_iter, max_iter, power):
if iter % lr_decay_iter or iter > max_iter:
return None
new_lr = init_lr * (1 - float(iter) / max_iter)**power
optimizer.param_groups[0]['lr'] = new_lr
optimizer.param_groups[1]['lr'] = 10 * new_lr
optimizer.param_groups[2]['lr'] = 20 * new_lr
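# Worked example of the polynomial decay above (illustrative numbers): with
# init_lr = 2.5e-4, max_iter = 20000 and power = 0.9, at iter = 10000 the base
# learning rate becomes 2.5e-4 * (1 - 0.5)**0.9, about 1.34e-4, and the two
# ASPP parameter groups are scheduled at 10x and 20x that value.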
def resize_target(target, size):
new_target = np.zeros((target.shape[0], size, size), np.int32)
for i, t in enumerate(target.numpy()):
new_target[i, ...] = cv2.resize(t, (size, ) * 2, interpolation=cv2.INTER_NEAREST)
return torch.from_numpy(new_target).long()
@click.command()
@click.option('--config', '-c', type=str, required=True)
@click.option('--cuda/--no-cuda', default=True)
def main(config, cuda):
# Configuration
CONFIG = Dict(yaml.load(open(config)))
# CUDA check
cuda = cuda and torch.cuda.is_available()
if cuda:
current_device = torch.cuda.current_device()
print('Running on', torch.cuda.get_device_name(current_device))
# Dataset
dataset = CocoStuff10k(
root=CONFIG.ROOT,
split='train',
image_size=513,
crop_size=CONFIG.IMAGE.SIZE.TRAIN,
scale=True,
flip=True,
)
# DataLoader
loader = torch.utils.data.DataLoader(
dataset=dataset,
batch_size=CONFIG.BATCH_SIZE,
num_workers=CONFIG.NUM_WORKERS,
shuffle=True,
)
loader_iter = iter(loader)
# Model
model = DeepLabV2_ResNet101_MSC(n_classes=CONFIG.N_CLASSES)
state_dict = torch.load(CONFIG.INIT_MODEL)
model.load_state_dict(state_dict, strict=False) # Skip "aspp" layer
model = nn.DataParallel(model)
if cuda:
model.cuda()
# Optimizer
optimizer = {
'sgd':
torch.optim.SGD(
# cf lr_mult and decay_mult in train.prototxt
params=[{
'params': get_lr_params(model.module, key='1x'),
'lr': CONFIG.LR,
'weight_decay': CONFIG.WEIGHT_DECAY
}, {
'params': get_lr_params(model.module, key='10x'),
'lr': 10 * CONFIG.LR,
'weight_decay': CONFIG.WEIGHT_DECAY
}, {
'params': get_lr_params(model.module, key='20x'),
'lr': 20 * CONFIG.LR,
'weight_decay': 0.0
}],
momentum=CONFIG.MOMENTUM,
),
}.get(CONFIG.OPTIMIZER)
# Loss definition
criterion = CrossEntropyLoss2d(ignore_index=CONFIG.IGNORE_LABEL)
if cuda:
criterion.cuda()
# TensorBoard Logger
writer = SummaryWriter(CONFIG.LOG_DIR)
loss_meter = MovingAverageValueMeter(20)
model.train()
model.module.scale.freeze_bn()
for iteration in tqdm(
list(range(1, CONFIG.ITER_MAX + 1)),
total=CONFIG.ITER_MAX,
leave=False,
dynamic_ncols=True,
):
# Set a learning rate
poly_lr_scheduler(
optimizer=optimizer,
init_lr=CONFIG.LR,
iter=iteration - 1,
lr_decay_iter=CONFIG.LR_DECAY,
max_iter=CONFIG.ITER_MAX,
power=CONFIG.POLY_POWER,
)
# Clear gradients (ready to accumulate)
optimizer.zero_grad()
iter_loss = 0
for i in range(1, CONFIG.ITER_SIZE + 1):
data, target = next(loader_iter)
# Image
data = data.cuda() if cuda else data
data = Variable(data)
# Propagate forward
outputs = model(data)
# Loss
loss = 0
for output in outputs:
# Resize target for {100%, 75%, 50%, Max} outputs
#print(target.min())
target_ = resize_target(target, output.size(2))
#print(target.max())
target_ = target_.cuda() if cuda else target_
target_ = Variable(target_)
# Compute crossentropy loss
loss += criterion(output, target_)
# Backpropagate (just compute gradients wrt the loss)
loss /= float(CONFIG.ITER_SIZE)
loss.backward()
iter_loss += loss.data[0]
# Reload dataloader
if ((iteration - 1) * CONFIG.ITER_SIZE + i) % len(loader) == 0:
loader_iter = iter(loader)
loss_meter.add(iter_loss)
# Update weights with accumulated gradients
optimizer.step()
# TensorBoard
if iteration % CONFIG.ITER_TF == 0:
writer.add_scalar('train_loss', loss_meter.value()[0], iteration)
for i, o in enumerate(optimizer.param_groups):
writer.add_scalar('train_lr_group{}'.format(i), o['lr'], iteration)
if iteration % 1000 != 0:
continue
for name, param in model.named_parameters():
name = name.replace('.', '/')
writer.add_histogram(name, param, iteration, bins="auto")
if param.requires_grad:
writer.add_histogram(name + '/grad', param.grad, iteration, bins="auto")
# Save a model
if iteration % CONFIG.ITER_SNAP == 0:
torch.save(
model.module.state_dict(),
osp.join(CONFIG.SAVE_DIR, 'checkpoint_{}.pth'.format(iteration)),
)
# Save a model
if iteration % 100 == 0:
torch.save(
model.module.state_dict(),
osp.join(CONFIG.SAVE_DIR, 'checkpoint_current.pth'),
)
torch.save(
model.module.state_dict(),
osp.join(CONFIG.SAVE_DIR, 'checkpoint_final.pth'),
)
if __name__ == '__main__':
main()
| 30.378723 | 92 | 0.567586 |
acf7537387325d5e030f9d0e481b9a2e8264ed50 | 3,208 | py | Python | mslib/mswms/mswms.py | withoutwaxaryan/MSS | 8bc06755e592c61d1b418f9a0b582ba0025f3da8 | [
"Apache-2.0"
] | null | null | null | mslib/mswms/mswms.py | withoutwaxaryan/MSS | 8bc06755e592c61d1b418f9a0b582ba0025f3da8 | [
"Apache-2.0"
] | null | null | null | mslib/mswms/mswms.py | withoutwaxaryan/MSS | 8bc06755e592c61d1b418f9a0b582ba0025f3da8 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
mslib.mswms.mswms
~~~~~~~~~~~~~~~~~
The module can be run with the Python Flask framework and can be run as
python mswms.py.
:copyright: Copyright 2016 Reimar Bauer
:copyright: Copyright 2016-2021 by the mss team, see AUTHORS.
:license: APACHE-2.0, see LICENSE for details.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import argparse
import logging
import sys
from mslib import __version__
from mslib.mswms.wms import mss_wms_settings
from mslib.mswms.wms import app as application
from mslib.utils import setup_logging, Updater, Worker
def main():
parser = argparse.ArgumentParser()
parser.add_argument("-v", "--version", help="show version", action="store_true", default=False)
parser.add_argument("--host", help="hostname",
default="127.0.0.1", dest="host")
parser.add_argument("--port", help="port", dest="port", default="8081")
parser.add_argument("--threadpool", help="threadpool", dest="use_threadpool", action="store_true", default=False)
parser.add_argument("--debug", help="show debugging log messages on console", action="store_true", default=False)
parser.add_argument("--logfile", help="If set to a name log output goes to that file", dest="logfile",
default=None)
parser.add_argument("--update", help="Updates MSS to the newest version", action="store_true", default=False)
args = parser.parse_args()
if args.version:
print("***********************************************************************")
print("\n Mission Support System (mss)\n")
print("***********************************************************************")
print("Documentation: http://mss.rtfd.io")
print("Version:", __version__)
sys.exit()
updater = Updater()
if args.update:
updater.on_update_available.connect(lambda old, new: updater.update_mss())
updater.on_log_update.connect(lambda s: print(s.replace("\n", "")))
updater.on_status_update.connect(lambda s: print(s.replace("\n", "")))
updater.run()
while Worker.workers:
list(Worker.workers)[0].wait()
sys.exit()
setup_logging(args)
updater.on_update_available.connect(lambda old, new: logging.info(f"MSS can be updated from {old} to {new}.\nRun"
" the --update argument to update the server."))
updater.run()
logging.info("Configuration File: '%s'", mss_wms_settings.__file__)
application.run(args.host, args.port)
if __name__ == '__main__':
main()
| 39.604938 | 118 | 0.627805 |
acf7547742c26a1c5d9a71b3ecd2e6cc28c596df | 741 | py | Python | Loops, If, & Functions Check-in/sum_odds.py | dilayercelik/CSE160-Data-Programming-UW | 1f929a06a4b54699011a21c83e6716a96d68feae | [
"MIT"
] | 1 | 2020-05-14T23:49:39.000Z | 2020-05-14T23:49:39.000Z | Loops, If, & Functions Check-in/sum_odds.py | dilayercelik/CSE160-Data-Programming-UW | 1f929a06a4b54699011a21c83e6716a96d68feae | [
"MIT"
] | null | null | null | Loops, If, & Functions Check-in/sum_odds.py | dilayercelik/CSE160-Data-Programming-UW | 1f929a06a4b54699011a21c83e6716a96d68feae | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Thu May 14 23:11:57 2020
@author: Dilay Ercelik
"""
# Return the sum of the odd ints in the given list.
# Note: the % "mod" operator computes the remainder, e.g. 5 % 2 is 1.
# Examples:
## sum_odds([5, 2, 6, 3, 4]) → 8
## sum_odds([3, 6, 11, 2, 5]) → 19
## sum_odds([]) → 0
# Answer:
def sum_odds(nums):
sum_odd = 0
for num in nums:
if num % 2 == 1:
sum_odd += num
return sum_odd
# Tests:
print(sum_odds([5, 2, 6, 3, 4])) # correct output
print(sum_odds([3, 6, 11, 2, 5])) # correct output
print(sum_odds([])) # correct ouput
| 17.642857 | 70 | 0.470985 |
acf7555037f383132e14436790b36f65b56055fb | 1,377 | py | Python | noisypy/__init__.py | wobbuuu/noisypy | 3e4a48e55a391a9552da0df77f115e86ef100037 | [
"MIT"
] | null | null | null | noisypy/__init__.py | wobbuuu/noisypy | 3e4a48e55a391a9552da0df77f115e86ef100037 | [
"MIT"
] | null | null | null | noisypy/__init__.py | wobbuuu/noisypy | 3e4a48e55a391a9552da0df77f115e86ef100037 | [
"MIT"
] | null | null | null | from .settings import *
from .plot_utils import *
from .noisy_utils import *
from .calibration_utils import *
from matplotlib import pyplot as plt
# Changing matplotlib rc defaults
plt.rcdefaults()
plt.rcParams['backend'] = 'module://ipykernel.pylab.backend_inline'
plt.rcParams['font.family'] = 'sans-serif'
plt.rcParams['font.sans-serif'] = ['DejaVu Sans']
plt.rcParams['text.usetex'] = True
#plt.rcParams['text.latex.preview'] = True
plt.rcParams['text.latex.preamble'] = r'''\renewcommand{\familydefault}{\sfdefault}
\usepackage[scaled=1]{helvet}
\usepackage[helvet]{sfmath}
\usepackage{siunitx}
\sisetup{detect-family=true, detect-weight=true}
\usepackage{amsmath}'''
plt.rcParams['figure.max_open_warning'] = 100
plt.rcParams['figure.constrained_layout.use'] = True
plt.rcParams['figure.figsize'] = [4, 3]
plt.rcParams['figure.facecolor'] = 'white'
plt.rcParams['axes.axisbelow'] = True
plt.rcParams['axes.facecolor'] = 'white'
plt.rcParams['axes.grid'] = False
plt.rcParams['axes.formatter.limits'] = (-3, 4)
plt.rcParams['legend.frameon'] = False
plt.rcParams['legend.borderaxespad'] = 0.3
plt.rcParams['legend.borderpad'] = 0.4
plt.rcParams['legend.columnspacing'] = 0.9
plt.rcParams['legend.handlelength'] = 0.5
plt.rcParams['legend.handletextpad'] = 0.3
plt.rcParams['legend.labelspacing'] = 0.4
plt.rcParams['lines.markeredgewidth'] = 0.0
| 33.585366 | 83 | 0.7313 |
acf755c5749784e99072f31ae37152419b95cc39 | 984 | py | Python | gym_learning_to_learn/wrappers/data_recorder.py | bstriner/gym-learning-to-learn | 4cd93bf7a306255771a32e0d97b3d705b2666656 | [
"MIT"
] | 1 | 2021-06-14T15:37:32.000Z | 2021-06-14T15:37:32.000Z | gym_learning_to_learn/wrappers/data_recorder.py | bstriner/gym-learning-to-learn | 4cd93bf7a306255771a32e0d97b3d705b2666656 | [
"MIT"
] | null | null | null | gym_learning_to_learn/wrappers/data_recorder.py | bstriner/gym-learning-to-learn | 4cd93bf7a306255771a32e0d97b3d705b2666656 | [
"MIT"
] | 1 | 2017-01-27T05:49:59.000Z | 2017-01-27T05:49:59.000Z | from gym import Wrapper
class DataRecorder(Wrapper):
def __init__(self, env):
self.data = []
self.epoch = 0
self.iteration = 0
super(DataRecorder, self).__init__(env)
def _reset(self):
self.epoch += 1
self.iteration = 0
observation = self.env.reset()
self.data.append([self.epoch, self.iteration, observation, 0, False, {}])
return observation
def _step(self, action):
observation, reward, done, info = self.env.step(action)
self.iteration += 1
self.data.append([self.epoch, self.iteration, observation, reward, done, info])
return observation, reward, done, info
def data_frame(self, names, f):
#ret = {k: [] for k in names}
ret = []
for datum in self.data:
vals = f(datum)
#for k, v in zip(names, vals):
# ret[k] = v
ret.append({k:v for k,v in zip(names, vals)})
return ret
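# Illustrative usage (a hedged sketch, not part of the original module): wrap a
# gym env, run it, then flatten the recorded tuples into dict rows. Each stored
# datum is [epoch, iteration, observation, reward, done, info], so e.g.
#
#   recorder = DataRecorder(env)
#   # ... run episodes via recorder.reset() / recorder.step(action) ...
#   rows = recorder.data_frame(
#       names=['epoch', 'iteration', 'reward'],
#       f=lambda d: (d[0], d[1], d[3]))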
| 29.818182 | 87 | 0.563008 |
acf755c6aa3bcfc5bc4d704bb7714d93ce3f5dfb | 318 | py | Python | solutions/03-pass_message.py | rmania/network_analysis | ce9701f2818ce40276b60cf79c7fe30e78b844cf | [
"MIT"
] | 2 | 2021-01-09T15:57:26.000Z | 2021-11-29T01:44:21.000Z | solutions/03-pass_message.py | rmania/network_analysis | ce9701f2818ce40276b60cf79c7fe30e78b844cf | [
"MIT"
] | 5 | 2019-11-15T02:00:26.000Z | 2021-01-06T04:26:40.000Z | solutions/03-pass_message.py | rmania/network_analysis | ce9701f2818ce40276b60cf79c7fe30e78b844cf | [
"MIT"
] | 1 | 2019-12-30T23:54:46.000Z | 2019-12-30T23:54:46.000Z | # Possible answer to Question 1:
# All we need here is the length of the path.
def compute_transmission_time(G, source, target):
"""
Fill in code below.
"""
length = nx.shortest_path_length(G, source, target)
time = sum(range(1, length+1))
return time
compute_transmission_time(G, 14, 4)
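# Worked check on a toy graph (not part of the original exercise; assumes
# networkx is imported as nx, as in the notebook): on nx.path_graph(4) the
# shortest path from node 0 to node 3 has length 3, so the message takes
# 1 + 2 + 3 = 6 time steps:
#
#   compute_transmission_time(nx.path_graph(4), 0, 3)  # -> 6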
| 18.705882 | 55 | 0.672956 |
acf7569c817f2ee153b437cacab1d6618551a6e6 | 17,990 | py | Python | user-locker.py | MelvinOmega/password-locker | 8a127d7ab3bd06e2bed0ba6bd9f0763704bf5cc8 | [
"MIT"
] | null | null | null | user-locker.py | MelvinOmega/password-locker | 8a127d7ab3bd06e2bed0ba6bd9f0763704bf5cc8 | [
"MIT"
] | null | null | null | user-locker.py | MelvinOmega/password-locker | 8a127d7ab3bd06e2bed0ba6bd9f0763704bf5cc8 | [
"MIT"
] | null | null | null | import pyperclip
from user import User
def main():
while True:
print("Welcome to password locker")
print('\n')
print("select a short code to navigate through: to create new user use 'nu': To login your account 'lg' or 'ex' to exit ")
short_code = input().lower()
print('\n')
if short_code == 'nu':
print("create username")
created_user_name = input()
print('create password')
created_user_password = input()
print('confirm password')
confirm_password = input()
while confirm_password != created_user_password:
print("invalid password did not match")
print("enter your password")
created_user_password = input()
print("confirm your password")
confirm_password = input()
else:
print(f"congratulations {created_user_name}! Account created succesfully")
print('\n')
print("proceed to login")
print("username")
entered_username = input()
print("your password")
entered_password = input()
while entered_username != created_user_name or entered_password != created_user_password:
print("invalid username or password")
print("username")
entered_username = input()
print("your password")
entered_password = input()
else:
print(f"Welcome:{entered_username}")
print('\n')
        elif short_code == 'lg':
            print("welcome")
            print("enter your username")
            entered_username = input()
            print('\n')
            print(f"login success, welcome {entered_username}")
            print('\n')
elif short_code == 'ex':
break
else:
print ("enter valid code")
if __name__ == '__main__':
main()
#!/usr/bin/env python3.6
# import random
# from user import User
# from credentials import Credentials
# # Functions to add credentials
# def create_new_credential(account_name, account_password):
# """Function to create a new account and its credentials"""
# new_credential = Credentials(account_name, account_password)
# return new_credential
# def save_new_credential(credentials):
# """Function to save the newly created account and password"""
# credentials.save_credentials()
# def find_credential(account_name):
# """Function that finds credentials based on account_name given"""
# return Credentials.find_by_name(account_name)
# def check_existing_credentials(name):
# """Method that checks whether a particular account and its credentials exist based on searched account_name"""
# return Credentials.find_by_name(name)
# def display_credentials():
# """Function which displays all saved credentials"""
# return Credentials.display_credentials()
# def delete_credential(credentials):
# """
# Method that deletes credentials
# """
# return Credentials.delete_credential(credentials)
# def main():
# while True:
# print("Welcome to PassWord Locker.")
# print('\n')
# print("Use these short codes to select an option: Create New User use 'cu': Login to your account use 'lg' or 'ex' to exit password locker")
# short_code = input().lower()
# print('\n')
# if short_code == 'cu':
# print("Create a UserName")
# created_user_name = input()
# print("Select a Password")
# created_user_password = input()
# print("Confirm Your Password")
# confirm_password = input()
# while confirm_password != created_user_password:
# print("Sorry your passwords did not match!")
# print("Enter a password")
# created_user_password = input()
# print("Confirm Your Password")
# confirm_password = input()
# else:
# print(f"Congratulations {created_user_name}! You have created your new account.")
# print('\n')
# print("Proceed to Log In to your Account")
# print("Username")
# entered_userName = input()
# print("Your Password")
# entered_password = input()
# while entered_userName != created_user_name or entered_password != created_user_password:
# print("You entered a wrong username or password")
# print("Username")
# entered_userName = input()
# print("Your Password")
# entered_password = input()
# else:
# print(f"Welcome: {entered_userName} to your Account")
# print('\n')
# print("Select an option below to continue: Enter 1, 2, 3, 4 or 5")
# print('\n')
# while True:
# print("1: View Your saved credentials")
# print("2: Add new credentials")
# print("3: Remove credentials")
# print("4: Search credentials")
# print("5: Log Out")
# option = input()
# if option == '2':
# while True:
# print("Continue to add? y/n")
# choice = input().lower()
# if choice == 'y':
# print("Enter The Account Name")
# account_name = input()
# print("Enter a password")
# print(
# "To generate random password enter keyword 'gp' or 'n' to create your own password")
# keyword = input().lower()
# if keyword == 'gp':
# account_password = random.randint(111111, 1111111)
# print(f"Account: {account_name}")
# print(f"Password: {account_password}")
# print('\n')
# elif keyword == 'n':
# print("Create your password")
# account_password = input()
# print(f"Account: {account_name}")
# print(f"Password: {account_password}")
# print('\n')
# else:
# print("Please enter a valid Code")
# save_new_credential(create_new_credential(
# account_name, account_password))
# elif choice == 'n':
# break
# else:
# print("Please use 'y' for yes or 'n' for no!")
# elif option == '1':
# while True:
# print("Below is a list of all your credentials")
# if display_credentials():
# for credential in display_credentials():
# print(f"ACCOUNT NAME:{credential.account_name}")
# print(f"PASSWORD:{credential.account_password}")
# else:
# print('\n')
# print("You don't seem to have any contacts yet")
# print('\n')
# print("Back to Menu? y/n")
# back = input().lower()
# if back == 'y':
# break
# elif back == 'n':
# continue
# else:
# print("Please Enter a valid code")
# continue
# elif option == '5':
# print("WARNING! You will loose all your credentials if you log out. Are you sure? y/n")
# logout = input().lower()
# if logout == 'y':
# print("You have Successfully logged out")
# break
# elif logout == 'n':
# continue
# elif option == '3':
# while True:
# print("Search for credential to delete")
# search_name = input()
# if check_existing_credentials(search_name):
# search_credential = find_credential(search_name)
# print(f"ACCOUNT NAME: {search_credential.account_name} \n PASSWORD: {search_credential.account_password}")
# print("Delete? y/n")
# sure = input().lower()
# if sure == 'y':
# delete_credential(search_credential)
# print("Account SUCCESSFULLY deleted")
# break
# elif sure == 'n':
# continue
# else:
# print("That Contact Does not exist")
# break
# elif option == '4':
# while True:
# print("Continue? y/n")
# option2 = input().lower()
# if option2 == 'y':
# print("Enter an account name to find credentials")
# search_name = input()
# if check_existing_credentials(search_name):
# search_credential = find_credential(search_name)
# print(f"ACCOUNT NAME: {search_credential.account_name} \n PASSWORD: {search_credential.account_password}")
# else:
# print("That Contact Does not exist")
# elif option2 == 'n':
# break
# else:
# print("Please enter a valid code")
# else:
# print("Please enter a valid code")
# continue
# elif short_code == 'lg':
# print("WELCOME")
# print("Enter UserName")
# default_user_name = input()
# print("Enter Your password")
# default_user_password = input()
# print('\n')
# while default_user_name != 'testuser' or default_user_password != '12345':
# print("Wrong userName or password. Username 'testuser' and password '12345'")
# print("Enter UserName")
# default_user_name = input()
# print("Enter Your password")
# default_user_password = input()
# print('\n')
# if default_user_name == 'testuser' and default_user_password == '12345':
# print("YOU HAVE SUCCESSFULLY LOGGED IN!")
# print('\n')
# print("Select an option below to continue: Enter 1, 2, 3, 4 or 5")
# print('\n')
# while True:
# print("1: View Your saved credentials")
# print("2: Add new credentials")
# print("3: Remove credentials")
# print("4: Search credentials")
# print("5: Log Out")
# option = input()
# if option == '2':
# while True:
# print("Continue to add? y/n")
# choice = input().lower()
# if choice == 'y':
# print("Enter The Account Name")
# account_name = input()
# print("Enter a password")
# print(
# "To generate random password enter keyword 'gp' or 'n' to create your own password")
# keyword = input().lower()
# if keyword == 'gp':
# account_password = random.randint(111111, 1111111)
# print(f"Account: {account_name}")
# print(f"Password: {account_password}")
# print('\n')
# elif keyword == 'n':
# print("Create your password")
# account_password = input()
# print(f"Account: {account_name}")
# print(f"Password: {account_password}")
# print('\n')
# else:
# print("Please enter a valid Code")
# save_new_credential(create_new_credential(
# account_name, account_password))
# elif choice == 'n':
# break
# else:
# print("Please use 'y' for yes or 'n' for no!")
# elif option == '1':
# while True:
# print("Below is a list of all your credentials")
# if display_credentials():
# for credential in display_credentials():
# print(f"ACCOUNT NAME:{credential.account_name}")
# print(f"PASSWORD:{credential.account_password}")
# else:
# print('\n')
# print("You don't seem to have any contacts yet")
# print('\n')
# print("Back to Menu? y/n")
# back = input().lower()
# if back == 'y':
# break
# elif back == 'n':
# continue
# else:
# print("Please Enter a valid code")
# # elif choice1 == 'n':
# # break
# # else:
# # print("Please use y or n")
# elif option == '5':
# print("WARNING! You will loose all your credentials if you log out. Are you sure? y/n")
# logout = input().lower()
# if logout == 'y':
# print("You have Successfully logged out")
# break
# elif logout == 'n':
# continue
# elif option == '3':
# while True:
# print("Search for credential to delete")
# search_name = input()
# if check_existing_credentials(search_name):
# search_credential = find_credential(search_name)
# print(f"ACCOUNT NAME: {search_credential.account_name} \n PASSWORD: {search_credential.account_password}")
# print("Delete? y/n")
# sure = input().lower()
# if sure == 'y':
# delete_credential(search_credential)
# print("Account SUCCESSFULLY deleted")
# break
# elif sure == 'n':
# continue
# else:
# print("That Contact Does not exist")
# break
# elif option == '4':
# while True:
# print("Continue? y/n")
# option2 = input().lower()
# if option2 == 'y':
# print("Enter an account name to find credentials")
# search_name = input()
# if check_existing_credentials(search_name):
# search_credential = find_credential(search_name)
# print(f"ACCOUNT NAME: {search_credential.account_name} \n PASSWORD: {search_credential.account_password}")
# else:
# print("That Contact Does not exist")
# elif option2 == 'n':
# break
# else:
# print("Please enter a valid code")
# else:
# print("Please enter a valid code")
# elif short_code == 'ex':
# break
# else:
# print("Please Enter a valid code to continue")
# if __name__ == '__main__':
# main() | 41.740139 | 150 | 0.416287 |
acf757c15452631172c20353394b2b85a25f162e | 564 | py | Python | pgkit/cli/cli.py | SadeghHayeri/pgk | 258c859f4e6c1ca7d515851552402a2e6bec80dc | [
"MIT"
] | 7 | 2021-06-14T07:22:50.000Z | 2021-12-15T14:25:49.000Z | pgkit/cli/cli.py | SadeghHayeri/pgkit | 258c859f4e6c1ca7d515851552402a2e6bec80dc | [
"MIT"
] | null | null | null | pgkit/cli/cli.py | SadeghHayeri/pgkit | 258c859f4e6c1ca7d515851552402a2e6bec80dc | [
"MIT"
] | null | null | null | import click
from pgkit.cli.commands.config import config
from pgkit.cli.commands.pitr import pitr
from pgkit.cli.commands.status import status
from pgkit.cli.commands.shellx import shell, dumpall, dump, stop, restart, start, list
@click.group()
def cli():
pass
def main():
cli.add_command(config)
cli.add_command(pitr)
cli.add_command(status)
cli.add_command(list)
cli.add_command(shell)
cli.add_command(dump)
cli.add_command(dumpall)
cli.add_command(start)
cli.add_command(stop)
cli.add_command(restart)
cli()
| 22.56 | 86 | 0.728723 |
acf75841bb2af435a81aad0bf34e56575b236137 | 3,208 | py | Python | fileupload/settings.py | neha-webllisto/file_upload | 877e9da88677e9ad73f3b2f85dcfab7dbdf49e45 | [
"Apache-2.0"
] | null | null | null | fileupload/settings.py | neha-webllisto/file_upload | 877e9da88677e9ad73f3b2f85dcfab7dbdf49e45 | [
"Apache-2.0"
] | null | null | null | fileupload/settings.py | neha-webllisto/file_upload | 877e9da88677e9ad73f3b2f85dcfab7dbdf49e45 | [
"Apache-2.0"
] | null | null | null | """
Django settings for fileupload project.
Generated by 'django-admin startproject' using Django 2.1.4.
For more information on this file, see
https://docs.djangoproject.com/en/2.1/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/2.1/ref/settings/
"""
import os
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/2.1/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = 'b^g35c@14-x=9ku^a$5wpw9#+ldwcm5xw)u9zko2!3!6^v06ch'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'upload',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'fileupload.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': ['template'],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'fileupload.wsgi.application'
# Database
# https://docs.djangoproject.com/en/2.1/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
# Password validation
# https://docs.djangoproject.com/en/2.1/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/2.1/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/2.1/howto/static-files/
STATIC_URL = '/static/'
STATIC_ROOT = 'staticfiles'
STATICFILES_DIRS = [os.path.join(BASE_DIR, "static"),] | 25.664 | 91 | 0.697319 |
acf758b94daac7a0984cf3b117335456d66fd80e | 1,194 | py | Python | docs/source/conf.py | jupyterhub/jupyterhub-on-hadoop | b4e4171732016e9c27bbb52a854a460c2bf54d7a | [
"BSD-3-Clause"
] | 8 | 2019-07-17T16:50:57.000Z | 2021-06-30T15:24:22.000Z | docs/source/conf.py | jcrist/jupyterhub-on-hadoop | b4e4171732016e9c27bbb52a854a460c2bf54d7a | [
"BSD-3-Clause"
] | 8 | 2019-04-17T00:32:59.000Z | 2019-05-07T19:33:14.000Z | docs/source/conf.py | jcrist/jupyterhub-on-hadoop | b4e4171732016e9c27bbb52a854a460c2bf54d7a | [
"BSD-3-Clause"
] | 2 | 2019-04-26T23:29:44.000Z | 2019-04-28T18:05:58.000Z | import alabaster
# Project settings
project = 'JupyterHub on Hadoop'
copyright = '2019, Jim Crist'
author = 'Jim Crist'
release = version = '0.1.0'
source_suffix = '.rst'
master_doc = 'index'
language = None
pygments_style = 'sphinx'
exclude_patterns = []
# Sphinx Extensions
extensions = ['sphinx.ext.extlinks']
numpydoc_show_class_members = False
extlinks = {
'issue': ('https://github.com/jupyterhub/jupyterhub-on-hadoop/issues/%s', 'Issue #'),
'pr': ('https://github.com/jupyterhub/jupyterhub-on-hadoop/pull/%s', 'PR #')
}
# Sphinx Theme
html_theme = 'alabaster'
html_theme_path = [alabaster.get_path()]
templates_path = ['_templates']
html_static_path = ['_static']
html_theme_options = {
'description': 'Documentation for deploying JupyterHub on a Hadoop Cluster',
'github_button': True,
'github_count': False,
'github_user': 'jupyterhub',
'github_repo': 'jupyterhub-on-hadoop',
'travis_button': False,
'show_powered_by': False,
'page_width': '960px',
'sidebar_width': '250px',
'code_font_size': '0.8em'
}
html_sidebars = {
'**': ['about.html',
'navigation.html',
'help.html',
'searchbox.html']
}
| 24.875 | 89 | 0.668342 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.