hexsha stringlengths 40 40 | size int64 5 2.06M | ext stringclasses 11 values | lang stringclasses 1 value | max_stars_repo_path stringlengths 3 251 | max_stars_repo_name stringlengths 4 130 | max_stars_repo_head_hexsha stringlengths 40 78 | max_stars_repo_licenses listlengths 1 10 | max_stars_count int64 1 191k ⌀ | max_stars_repo_stars_event_min_datetime stringlengths 24 24 ⌀ | max_stars_repo_stars_event_max_datetime stringlengths 24 24 ⌀ | max_issues_repo_path stringlengths 3 251 | max_issues_repo_name stringlengths 4 130 | max_issues_repo_head_hexsha stringlengths 40 78 | max_issues_repo_licenses listlengths 1 10 | max_issues_count int64 1 116k ⌀ | max_issues_repo_issues_event_min_datetime stringlengths 24 24 ⌀ | max_issues_repo_issues_event_max_datetime stringlengths 24 24 ⌀ | max_forks_repo_path stringlengths 3 251 | max_forks_repo_name stringlengths 4 130 | max_forks_repo_head_hexsha stringlengths 40 78 | max_forks_repo_licenses listlengths 1 10 | max_forks_count int64 1 105k ⌀ | max_forks_repo_forks_event_min_datetime stringlengths 24 24 ⌀ | max_forks_repo_forks_event_max_datetime stringlengths 24 24 ⌀ | content stringlengths 1 1.05M | avg_line_length float64 1 1.02M | max_line_length int64 3 1.04M | alphanum_fraction float64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
b8ada7f96a8a91b1795a09283b5bb56adf3d888d | 2,373 | py | Python | tests/_geom/test_path_control_x_interface.py | ynsnf/apysc | b10ffaf76ec6beb187477d0a744fca00e3efc3fb | [
"MIT"
] | 16 | 2021-04-16T02:01:29.000Z | 2022-01-01T08:53:49.000Z | tests/_geom/test_path_control_x_interface.py | ynsnf/apysc | b10ffaf76ec6beb187477d0a744fca00e3efc3fb | [
"MIT"
] | 613 | 2021-03-24T03:37:38.000Z | 2022-03-26T10:58:37.000Z | tests/_geom/test_path_control_x_interface.py | simon-ritchie/apyscript | c319f8ab2f1f5f7fad8d2a8b4fc06e7195476279 | [
"MIT"
] | 2 | 2021-06-20T07:32:58.000Z | 2021-12-26T08:22:11.000Z | from random import randint
from retrying import retry
import apysc as ap
from apysc._geom.path_control_x_interface import PathControlXInterface
| 43.944444 | 78 | 0.728614 |
b8add48d3538b0aee1f01094470a9d13e1f3491d | 1,060 | py | Python | test/PySrc/tools/collect_tutorials.py | lifubang/live-py-plugin | 38a3cf447fd7d9c4e6014b71134e178b0d8a01de | [
"MIT"
] | 224 | 2015-03-22T23:40:52.000Z | 2022-03-01T21:45:51.000Z | test/PySrc/tools/collect_tutorials.py | lifubang/live-py-plugin | 38a3cf447fd7d9c4e6014b71134e178b0d8a01de | [
"MIT"
] | 371 | 2015-04-28T05:14:00.000Z | 2022-03-28T01:31:22.000Z | test/PySrc/tools/collect_tutorials.py | lifubang/live-py-plugin | 38a3cf447fd7d9c4e6014b71134e178b0d8a01de | [
"MIT"
] | 53 | 2015-10-30T07:52:07.000Z | 2022-02-28T12:56:35.000Z | import json
from argparse import ArgumentParser, ArgumentDefaultsHelpFormatter, FileType
from pathlib import Path
main()
| 34.193548 | 82 | 0.592453 |
b8ae0034ffcbb27bca0f7745d8873b03677fa88a | 1,569 | py | Python | autobiography.py | wcmckee/wcmckee | 19315a37b592b7bcebb5f2720c965aea58f928ce | [
"MIT"
] | null | null | null | autobiography.py | wcmckee/wcmckee | 19315a37b592b7bcebb5f2720c965aea58f928ce | [
"MIT"
] | null | null | null | autobiography.py | wcmckee/wcmckee | 19315a37b592b7bcebb5f2720c965aea58f928ce | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# <nbformat>3.0</nbformat>
# <markdowncell>
# My name is William Clifford Mckee and this is my autobiography. Written in November 2014.
#
# Structure:
#
# <markdowncell>
# Hello and goodNI
# testing one two three.
# <markdowncell>
# Hello. Testing one two three.
# Screw you guys. I'm going to bed.
#
# History. Mum and Dad
#
#
# One of my early drawing memories was of my friend Wayne. Around age 10. His art was better than mine. I wanted to be better.
# I can't remember seriously drawing until after high school.
#
# I had a friend at high school whose artwork I admired.
# He got in trouble once for drawing nudes.
# I remember being in the art room at intermediate. I have better memories of cooking and woodwork than art.
# Paint yourself said the reliever.
#
# We had art folder. Kids would cover these black folders in art. I was always embarrassed by the art on mine. I would hide it by carrying the folder so that art was facing the inside.
# Today I walk around with a visual diary and will let anyone look.
# I'm always very critical of my art though.
#
#
# I hated using artist models and copying their paintings.
# My painting skills were low - needed to develop drawing and confidence.
# I am tired.
# More bad news tonight
#
# I had some excellent tutors that helped me develop my painting. Most notable were Gary Freemantle and Roger Key.
# Gary pushed my abstraction and color
#
# Key pushed observational painting and focusing on lights and darks.
#
# The classes I did at The Learning Connextion were
| 32.020408 | 185 | 0.733588 |
b8ae248b83fdee036686d9358abb1c53e99adc81 | 26,125 | py | Python | tests/configured_tests.py | maxcountryman/flask-security | ccb41df095177b11e8526958c1001d2f887d9feb | [
"MIT"
] | null | null | null | tests/configured_tests.py | maxcountryman/flask-security | ccb41df095177b11e8526958c1001d2f887d9feb | [
"MIT"
] | null | null | null | tests/configured_tests.py | maxcountryman/flask-security | ccb41df095177b11e8526958c1001d2f887d9feb | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from __future__ import with_statement
import base64
import time
import simplejson as json
from flask.ext.security.utils import capture_registrations, \
capture_reset_password_requests, capture_passwordless_login_requests
from flask.ext.security.forms import LoginForm, ConfirmRegisterForm, RegisterForm, \
ForgotPasswordForm, ResetPasswordForm, SendConfirmationForm, \
PasswordlessLoginForm
from flask.ext.security.forms import TextField, SubmitField, valid_user_email
from tests import SecurityTest
| 33.407928 | 104 | 0.633952 |
b8ae72a8774e3f5e5b83670734de99743ac5f598 | 94 | py | Python | Server/programs/__init__.py | VHirtz/CC-mastermind | 11dc4e043ed67c86e66230812cbd86f736e6a7d1 | [
"MIT"
] | null | null | null | Server/programs/__init__.py | VHirtz/CC-mastermind | 11dc4e043ed67c86e66230812cbd86f736e6a7d1 | [
"MIT"
] | null | null | null | Server/programs/__init__.py | VHirtz/CC-mastermind | 11dc4e043ed67c86e66230812cbd86f736e6a7d1 | [
"MIT"
] | null | null | null | from . import program
from . import turtle_test
from . import antoine_test
from . import dance | 23.5 | 26 | 0.797872 |
b8af272edd34ec0b3bc42014f30caae48187c86a | 8,283 | py | Python | src/robotkernel/utils.py | robocorp/robocode-kernel | b9c7ed20ba0046d0b3bae4e461205f9fa19b77a8 | [
"BSD-3-Clause"
] | 4 | 2020-04-01T16:24:01.000Z | 2022-02-16T19:22:44.000Z | src/robotkernel/utils.py | robocorp/robocode-kernel | b9c7ed20ba0046d0b3bae4e461205f9fa19b77a8 | [
"BSD-3-Clause"
] | 8 | 2020-04-21T13:35:02.000Z | 2022-03-12T00:39:17.000Z | src/robotkernel/utils.py | robocorp/robocode-kernel | b9c7ed20ba0046d0b3bae4e461205f9fa19b77a8 | [
"BSD-3-Clause"
] | 1 | 2020-04-03T10:48:31.000Z | 2020-04-03T10:48:31.000Z | # -*- coding: utf-8 -*-
from copy import deepcopy
from difflib import SequenceMatcher
from IPython.core.display import Image
from IPython.core.display import JSON
from json import JSONDecodeError
from lunr.builder import Builder
from lunr.stemmer import stemmer
from lunr.stop_word_filter import stop_word_filter
from lunr.trimmer import trimmer
from operator import itemgetter
from pygments.formatters import HtmlFormatter
from pygments.lexers import get_lexer_by_name
from robot.libdocpkg.htmlwriter import DocToHtml
from robotkernel.constants import HAS_RF32_PARSER
import base64
import json
import os
import pygments
import re
if HAS_RF32_PARSER:
else:
from robot.parsing.settings import Documentation
def javascript_uri(html, filename=""):
"""Because data-uri for text/html is not supported by IE."""
if isinstance(html, str):
html = html.encode("utf-8")
return (
"javascript:(function(el){{"
"var w=window.open();var d='{}';"
"w.document.open();"
"w.document.write(window.atob(d));"
"w.document.close();"
"var a=w.document.createElement('a');"
"a.appendChild(w.document.createTextNode('Download'));"
"a.href='data:text/html;base64,' + d;"
"a.download='{}';"
"a.style='position:fixed;top:0;right:0;"
"color:white;background:black;text-decoration:none;"
"font-weight:bold;padding:7px 14px;border-radius:0 0 0 5px;';"
"w.document.body.append(a);"
"}})(this);".format(base64.b64encode(html).decode("utf-8"), filename)
)
def lunr_builder(ref, fields):
"""A convenience function to configure and construct a lunr.Builder.
Returns:
        Builder: The configured Builder, ready to build a search index.
"""
builder = Builder()
builder.pipeline.add(trimmer, stop_word_filter, stemmer)
builder.search_pipeline.add(stemmer)
builder.ref(ref)
for field in fields:
builder.field(field)
return builder
def readable_keyword(s):
"""Return keyword with only the first letter in title case."""
if s and not s.startswith("*") and not s.startswith("["):
if s.count("."):
library, name = s.rsplit(".", 1)
return library + "." + name[0].title() + name[1:].lower()
else:
return s[0].title() + s[1:].lower()
else:
return s
def detect_robot_context(code, cursor_pos):
"""Return robot code context in cursor position."""
code = code[:cursor_pos]
line = code.rsplit("\n")[-1]
context_parts = code.rsplit("***", 2)
if len(context_parts) != 3:
return "__root__"
else:
context_name = context_parts[1].strip().lower()
if context_name == "settings":
return "__settings__"
elif line.lstrip() == line:
return "__root__"
elif context_name in ["tasks", "test cases"]:
return "__tasks__"
elif context_name == "keywords":
return "__keywords__"
else:
return "__root__"
NAME_REGEXP = re.compile("`(.+?)`")
def to_html(obj):
"""Return object as highlighted JSON."""
return highlight("json", json.dumps(obj, sort_keys=False, indent=4))
# noinspection PyProtectedMember
| 32.482353 | 87 | 0.60268 |
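The `readable_keyword` helper in the robotkernel row above lowercases everything but the first letter of the keyword name while leaving the library prefix and special tokens (starting with `*` or `[`) untouched. A small self-contained check of that behavior, with the function reproduced verbatim from the snippet above:

```python
def readable_keyword(s):
    """Return keyword with only the first letter in title case."""
    if s and not s.startswith("*") and not s.startswith("["):
        if s.count("."):
            # Keep the library prefix as-is, normalize only the keyword name
            library, name = s.rsplit(".", 1)
            return library + "." + name[0].title() + name[1:].lower()
        return s[0].title() + s[1:].lower()
    return s


print(readable_keyword("RUN KEYWORD"))       # -> Run keyword
print(readable_keyword("BuiltIn.Log Many"))  # -> BuiltIn.Log many
print(readable_keyword("[Teardown]"))        # -> [Teardown] (unchanged)
```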
b8b06e91b0fbc55f204d0286612efe3154be4b90 | 5,022 | py | Python | Pyduino/__init__.py | ItzTheDodo/Pyduino | a68d6a3214d5fb452e8b8e53cb013ee7205734bb | [
"Apache-2.0"
] | null | null | null | Pyduino/__init__.py | ItzTheDodo/Pyduino | a68d6a3214d5fb452e8b8e53cb013ee7205734bb | [
"Apache-2.0"
] | null | null | null | Pyduino/__init__.py | ItzTheDodo/Pyduino | a68d6a3214d5fb452e8b8e53cb013ee7205734bb | [
"Apache-2.0"
] | null | null | null | # Function Credits: https://github.com/lekum/pyduino/blob/master/pyduino/pyduino.py (lekum (as of 2014))
# Written By: ItzTheDodo
from Pyduino.Boards.Uno import UnoInfo
from Pyduino.Boards.Mega import MegaInfo
from Pyduino.Boards.Diecimila import DiecimilaInfo
from Pyduino.Boards.Due import DueInfo
from Pyduino.Boards.Nano import NanoInfo
from Pyduino.Boards.Mini import MiniInfo
from Pyduino.Boards.Lilypad import LilypadInfo
from Pyduino.Boards.CustomBoardProfile import BoardProfileInfo
from Pyduino.Utils.Pins import *
from Pyduino.Utils.ReadWrite import *
from Pyduino.Utils.ReadOnly import *
import serial
import time
import sys
if __name__ == "__main__":
# set all pins high for 1 sec then low for 0.5 sec
p = Pyduino("Uno")
for i in range(p.getBoardInfo().getMainInfo()["0"]):
p.pinMode(i, OUTPUT)
while True:
for i in range(p.getBoardInfo().getMainInfo()["0"]):
p.setPin(i, HIGH)
p.delay(1000)
for i in range(p.getBoardInfo().getMainInfo()["0"]):
p.setPin(i, LOW)
p.delay(500)
| 27.293478 | 105 | 0.554162 |
b8b10e6e66f7f88c735881020e22e44e43687a75 | 2,218 | py | Python | apps/api/v1/social_auth.py | asmuratbek/oobamarket | 1053976a13ea84b9aabfcbbcbcffd79549ce9538 | [
"MIT"
] | null | null | null | apps/api/v1/social_auth.py | asmuratbek/oobamarket | 1053976a13ea84b9aabfcbbcbcffd79549ce9538 | [
"MIT"
] | 7 | 2020-06-05T23:36:01.000Z | 2022-01-13T01:42:07.000Z | apps/api/v1/social_auth.py | asmuratbek/oobamarket | 1053976a13ea84b9aabfcbbcbcffd79549ce9538 | [
"MIT"
] | null | null | null | from allauth.socialaccount.helpers import complete_social_login
from allauth.socialaccount.models import SocialApp, SocialToken, SocialLogin, SocialAccount
from allauth.socialaccount.providers.facebook.views import fb_complete_login
from allauth.socialaccount.providers.google.views import GoogleOAuth2Adapter
from django.http import JsonResponse
from requests import HTTPError
from rest_framework.authtoken.models import Token
from apps.users.models import User
__author__ = 'kolyakoikelov'
| 41.074074 | 118 | 0.701533 |
b8b4581f931e18341efca7f99abcc93a3432695c | 13,076 | py | Python | queryable_properties/properties/common.py | W1ldPo1nter/django-queryable-properties | 9bb4ecb4fbdd7a9e0f610f937c8101a643027fb1 | [
"BSD-3-Clause"
] | 36 | 2019-10-22T11:44:37.000Z | 2022-03-15T21:27:03.000Z | queryable_properties/properties/common.py | W1ldPo1nter/django-queryable-properties | 9bb4ecb4fbdd7a9e0f610f937c8101a643027fb1 | [
"BSD-3-Clause"
] | 6 | 2020-10-03T15:13:26.000Z | 2021-09-25T14:05:50.000Z | queryable_properties/properties/common.py | W1ldPo1nter/django-queryable-properties | 9bb4ecb4fbdd7a9e0f610f937c8101a643027fb1 | [
"BSD-3-Clause"
] | 3 | 2021-04-26T08:30:46.000Z | 2021-08-18T09:04:49.000Z | # encoding: utf-8
import operator
import six
from django.db.models import BooleanField, Field, Q
from ..utils.internal import MISSING_OBJECT, ModelAttributeGetter, QueryPath
from .base import QueryableProperty
from .mixins import AnnotationGetterMixin, AnnotationMixin, boolean_filter, LookupFilterMixin
| 45.245675 | 119 | 0.613643 |
b8b45d47d2d2b0c8935936a0ff5a2cb55518f1d6 | 2,558 | py | Python | experiments/examples/example_run_bench_s1_periodic_bench.py | cogsys-tuebingen/uninas | 06729b9cf517ec416fb798ae387c5bd9c3a278ac | [
"MIT"
] | 18 | 2020-11-22T16:03:08.000Z | 2022-03-15T12:11:46.000Z | experiments/examples/example_run_bench_s1_periodic_bench.py | cogsys-tuebingen/uninas | 06729b9cf517ec416fb798ae387c5bd9c3a278ac | [
"MIT"
] | 2 | 2022-01-04T08:10:17.000Z | 2022-01-05T08:13:14.000Z | experiments/examples/example_run_bench_s1_periodic_bench.py | cogsys-tuebingen/uninas | 06729b9cf517ec416fb798ae387c5bd9c3a278ac | [
"MIT"
] | 6 | 2021-03-08T07:08:52.000Z | 2022-02-24T12:00:43.000Z | """
training a super-network and periodically evaluating its performance on bench architectures
a work in this direction exists: https://arxiv.org/abs/2001.01431
"""
from uninas.main import Main
# default configurations, for the search process and the network design
# config_files = "{path_conf_bench_tasks}/s1_fairnas_cifar.run_config, {path_conf_net_search}/bench201.run_config"
config_files = "{path_conf_bench_tasks}/s1_random_cifar.run_config, {path_conf_net_search}/bench201.run_config"
# these changes are applied to the default configuration in the config files
changes = {
"{cls_task}.is_test_run": True,
"{cls_task}.save_dir": "{path_tmp}/run_bench_s1_per/",
"{cls_task}.save_del_old": True,
"{cls_trainer}.max_epochs": 4,
"{cls_data}.dir": "{path_data}/cifar_data/",
"{cls_data}.fake": False,
"{cls_data}.download": False,
"{cls_data}.batch_size_train": 96,
# example how to mask options
"{cls_method}.mask_indices": "0, 1, 4", # mask Zero, Skip, Pool
"{cls_network_body}.cell_order": "n, n, r, n, n, r, n, n", # 2 normal cells, one reduction cell, ...
"{cls_network_stem}.features": 16, # start with 16 channels
# some augmentations
"cls_augmentations": "DartsCifarAug", # default augmentations for cifar
"{cls_schedulers#0}.warmup_epochs": 0,
# specifying how to add weights, note that SplitWeightsMixedOp requires a SplitWeightsMixedOpCallback
"{cls_network_cells_primitives#0}.mixed_cls": "MixedOp", # MixedOp, BiasD1MixedOp, ...
"{cls_network_cells_primitives#1}.mixed_cls": "MixedOp", # MixedOp, BiasD1MixedOp, ...
"cls_callbacks": "CheckpointCallback, CreateBenchCallback",
"{cls_callbacks#1}.each_epochs": 1,
"{cls_callbacks#1}.reset_bn": True,
"{cls_callbacks#1}.benchmark_path": "{path_data}/bench/nats/nats_bench_1.1_subset_m_test.pt",
# what and how to evaluate each specific network
"cls_cb_objectives": "NetValueEstimator",
"{cls_cb_objectives#0}.key": "acc1/valid",
"{cls_cb_objectives#0}.is_constraint": False,
"{cls_cb_objectives#0}.is_objective": True,
"{cls_cb_objectives#0}.maximize": True,
"{cls_cb_objectives#0}.load": True,
"{cls_cb_objectives#0}.batches_forward": 20,
"{cls_cb_objectives#0}.batches_train": 0,
"{cls_cb_objectives#0}.batches_eval": -1,
"{cls_cb_objectives#0}.value": "val/accuracy/1",
}
if __name__ == "__main__":
task = Main.new_task(config_files, args_changes=changes)
task.run()
| 41.258065 | 114 | 0.69742 |
b8b6326ff4e90a353f713e0c09d84e4633fbcdd7 | 9,650 | py | Python | zuds/photometry.py | charlotteaward/zuds-pipeline | 52423859498374203d13fdc15c88bdc1260db183 | [
"BSD-3-Clause"
] | null | null | null | zuds/photometry.py | charlotteaward/zuds-pipeline | 52423859498374203d13fdc15c88bdc1260db183 | [
"BSD-3-Clause"
] | null | null | null | zuds/photometry.py | charlotteaward/zuds-pipeline | 52423859498374203d13fdc15c88bdc1260db183 | [
"BSD-3-Clause"
] | null | null | null | import numpy as np
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql as psql
from sqlalchemy.orm import relationship
from sqlalchemy.ext.hybrid import hybrid_property
from sqlalchemy.schema import UniqueConstraint
from astropy import units as u
from .core import Base
from .constants import APER_KEY, APERTURE_RADIUS
__all__ = ['ForcedPhotometry', 'raw_aperture_photometry', 'aperture_photometry']
| 33.623693 | 108 | 0.594922 |
b8b69dfcb1d5f6e006ee8b568536b7b0df129c02 | 5,521 | py | Python | models/Libraries/UnitTest.py | yangshiquan/GraphDialog | 5bb1239bf502c8d79c4c888f69c7aff0c02c2928 | [
"MIT"
] | 26 | 2020-09-25T02:19:43.000Z | 2022-03-27T09:03:34.000Z | models/Libraries/UnitTest.py | yangshiquan/GraphDialog | 5bb1239bf502c8d79c4c888f69c7aff0c02c2928 | [
"MIT"
] | 1 | 2020-10-28T11:28:35.000Z | 2020-10-28T11:28:35.000Z | models/Libraries/UnitTest.py | yangshiquan/GraphDialog | 5bb1239bf502c8d79c4c888f69c7aff0c02c2928 | [
"MIT"
] | 2 | 2020-12-17T08:49:13.000Z | 2021-04-18T13:08:48.000Z | import tensorflow as tf
from models.Libraries.BidirectionalGraphEncoder import BidirectionalGraphEncoder
from tensorflow.python.ops import array_ops
if __name__ == "__main__":
# units=2, input_dim=2, edge_types=10, recurrent_size=4
bi_graph_encoder = BidirectionalGraphEncoder(2, 2, 10, 4)
# inputs: batch_size=8, max_len=3, embedding_dim=2
# inputs: batch_size*max_len*embedding_dim
inputs = tf.convert_to_tensor([[[0.1, 0.2],[0.0, 0.0],[0.0, 0.0]],[[0.1, 0.2],[0.3, 0.4],[0.0, 0.0]],[[0.1, 0.2],[0.3, 0.4],[0.5, 0.6]],[[0.1, 0.2],[0.3, 0.4],[0.5, 0.6]],[[0.1, 0.2],[0.3, 0.4],[0.5, 0.6]],[[0.1, 0.2],[0.3, 0.4],[0.5, 0.6]],[[0.1, 0.2],[0.3, 0.4],[0.5, 0.6]],[[0.1, 0.2],[0.3, 0.4],[0.5, 0.6]]])
# deps: 2*batch_size*max_len*(recurrent_size-1)
deps = tf.convert_to_tensor([[[['$', '$', '$'],['$', '$', '$'],['$', '$', '$']],[['$', '$', '$'],['$', '$', '$'],['$', '$', '$']], [['$', '$', '$'],['$', '$', '$'],['$', '$', '$']],[['$', '$', '$'],['$', '$', '$'],['$', '$', '$']],[['$', '$', '$'],['0', '$', '$'],['1', '$', '$']],[['$', '$', '$'],['0', '$', '$'],['1', '$', '$']],[['$', '$', '$'],['0', '$', '$'],['1', '$', '$']],[['$', '$', '$'],['0', '$', '$'],['1', '$', '$']]],[[['$', '$', '$'],['$', '$', '$'],['$', '$', '$']],[['$', '$', '$'],['$', '$', '$'],['$', '$', '$']],[['$', '$', '$'],['0', '$', '$'],['1', '$', '$']],[['$', '$', '$'],['0', '$', '$'],['1', '$', '$']],[['$', '$', '$'],['0', '$', '$'],['1', '$', '$']],[['$', '$', '$'],['0', '$', '$'],['1', '$', '$']],[['$', '$', '$'],['0', '$', '$'],['1', '$', '$']],[['$', '$', '$'],['0', '$', '$'],['1', '$', '$']]]])
# edge_types = tf.convert_to_tensor([[[['0', '$', '$', '$'],['0', '2', '$', '$'],['0', '2', '$', '$']],[['0', '$', '$', '$'],['0', '2', '$', '$'],['0', '2', '$', '$']],[['0', '$', '$', '$'],['0', '2', '$', '$'],['0', '2', '$', '$']],[['0', '$', '$', '$'],['0', '2', '$', '$'],['0', '2', '$', '$']],[['0', '$', '$', '$'],['0', '2', '$', '$'],['0', '2', '$', '$']],[['0', '$', '$', '$'],['0', '2', '$', '$'],['0', '2', '$', '$']],[['0', '$', '$', '$'],['0', '2', '$', '$'],['0', '2', '$', '$']],[['0', '$', '$', '$'],['0', '2', '$', '$'],['0', '2', '$', '$']]], [[['0', '$', '$', '$'],['0', '2', '$', '$'],['0', '2', '$', '$']],[['0', '$', '$', '$'],['0', '2', '$', '$'],['0', '2', '$', '$']],[['0', '$', '$', '$'],['0', '2', '$', '$'],['0', '2', '$', '$']],[['0', '$', '$', '$'],['0', '2', '$', '$'],['0', '2', '$', '$']],[['0', '$', '$', '$'],['0', '2', '$', '$'],['0', '2', '$', '$']],[['0', '$', '$', '$'],['0', '2', '$', '$'],['0', '2', '$', '$']],[['0', '$', '$', '$'],['0', '2', '$', '$'],['0', '2', '$', '$']],[['0', '$', '$', '$'],['0', '2', '$', '$'],['0', '2', '$', '$']]]])
# edge_types = tf.convert_to_tensor([[[[0, -1, -1, -1],[0, 2, -1, -1],[0, 2, -1, -1]],[[0, -1, -1, -1],[0, 2, -1, -1],[0, 2, -1, -1]],[[0, -1, -1, -1],[0, 2, -1, -1],[0, 2, -1, -1]],[[0, -1, -1, -1],[0, 2, -1, -1],[0, 2, -1, -1]],[[0, -1, -1, -1],[0, 2, -1, -1],[0, 2, -1, -1]],[[0, -1, -1, -1],[0, 2, -1, -1],[0, 2, -1, -1]],[[0, -1, -1, -1],[0, 2, -1, -1],[0, 2, -1, -1]],[[0, -1, -1, -1],[0, 2, -1, -1],[0, 2, -1, -1]]],[[[0, -1, -1, -1],[0, 2, -1, -1],[0, 2, -1, -1]],[[0, -1, -1, -1],[0, 2, -1, -1],[0, 2, -1, -1]],[[0, -1, -1, -1],[0, 2, -1, -1],[0, 2, -1, -1]],[[0, -1, -1, -1],[0, 2, -1, -1],[0, 2, -1, -1]],[[0, -1, -1, -1],[0, 2, -1, -1],[0, 2, -1, -1]],[[0, -1, -1, -1],[0, 2, -1, -1],[0, 2, -1, -1]],[[0, -1, -1, -1],[0, 2, -1, -1],[0, 2, -1, -1]],[[0, -1, -1, -1],[0, 2, -1, -1],[0, 2, -1, -1]]]])
# edge_types: 2*batch_size*max_len*recurrent_size
edge_types = tf.convert_to_tensor([[[[0, 9, 9, 9],[0, 2, 9, 9],[0, 2, 9, 9]],[[1, 9, 9, 9],[0, 2, 9, 9],[0, 2, 9, 9]],[[2, 9, 9, 9],[0, 2, 9, 9],[0, 2, 9, 9]],[[0, 9, 9, 9],[0, 2, 9, 9],[0, 2, 9, 9]],[[0, 9, 9, 9],[0, 2, 9, 9],[0, 2, 9, 9]],[[0, 9, 9, 9],[0, 2, 9, 9],[0, 2, 9, 9]],[[0, 9, 9, 9],[0, 2, 9, 9],[0, 2, 9, 9]],[[0, 9, 9, 9],[0, 2, 9, 9],[0, 2, 9, 9]]], [[[0, 9, 9, 9],[0, 2, 9, 9],[0, 2, 9, 9]],[[0, 9, 9, 9],[0, 2, 9, 9],[0, 2, 9, 9]],[[0, 9, 9, 9],[0, 2, 9, 9],[0, 2, 9, 9]],[[0, 9, 9, 9],[0, 2, 9, 9],[0, 2, 9, 9]],[[0, 9, 9, 9],[0, 2, 9, 9],[0, 2, 9, 9]],[[0, 9, 9, 9],[0, 2, 9, 9],[0, 2, 9, 9]],[[0, 9, 9, 9],[0, 2, 9, 9],[0, 2, 9, 9]],[[0, 9, 9, 9],[0, 2, 9, 9],[0, 2, 9, 9]]]])
# mask: batch_size*max_len
mask = tf.convert_to_tensor([[1,1,1],[1,1,1],[1,1,1],[1,1,0],[1,1,0],[1,1,0],[1,0,0],[1,0,0]])
# cell_mask: 2*batch_size*max_len*recurrent_size
cell_mask = tf.convert_to_tensor([[[[1, 0, 0, 0],[1, 1, 0, 0],[1, 1, 0, 0]],[[1, 0, 0, 0],[1, 1, 0, 0],[1, 1, 0, 0]],[[1, 0, 0, 0],[1, 1, 0, 0],[1, 1, 0, 0]],[[1, 0, 0, 0],[1, 1, 0, 0],[1, 1, 0, 0]],[[1, 0, 0, 0],[1, 1, 0, 0],[1, 1, 0, 0]],[[1, 0, 0, 0],[1, 1, 0, 0],[1, 1, 0, 0]],[[1, 0, 0, 0],[1, 1, 0, 0],[1, 1, 0, 0]],[[1, 0, 0, 0],[1, 1, 0, 0],[1, 1, 0, 0]]],[[[1, 0, 0, 0],[1, 1, 0, 0],[1, 1, 0, 0]],[[1, 0, 0, 0],[1, 1, 0, 0],[1, 1, 0, 0]],[[1, 0, 0, 0],[1, 1, 0, 0],[1, 1, 0, 0]],[[1, 0, 0, 0],[1, 1, 0, 0],[1, 1, 0, 0]],[[1, 0, 0, 0],[1, 1, 0, 0],[1, 1, 0, 0]],[[1, 0, 0, 0],[1, 1, 0, 0],[1, 1, 0, 0]],[[1, 0, 0, 0],[1, 1, 0, 0],[1, 1, 0, 0]],[[1, 0, 0, 0],[1, 1, 0, 0],[1, 1, 0, 0]]]])
# initial_state: 2*recurrent_size*batch_size*embedding_dim
initial_state = array_ops.zeros([2, 4, 8, 2])
input_lengths = tf.convert_to_tensor([1, 2, 3, 3, 3, 3, 3, 3])
outputs, hidden_f, hidden_b = bi_graph_encoder(inputs, input_lengths, deps, edge_types, mask, cell_mask, initial_state, True)
print(outputs)
print(hidden_f)
print(hidden_b) | 197.178571 | 1,087 | 0.307915 |
b8b7d48ab2b3078dad4877e762a40e5343a5d8aa | 96 | py | Python | animazya/apps.py | KenFon/kenfontaine.fr | 6b4055de791e3cc47b473c1890b2fcafab8a635d | [
"MIT"
] | null | null | null | animazya/apps.py | KenFon/kenfontaine.fr | 6b4055de791e3cc47b473c1890b2fcafab8a635d | [
"MIT"
] | null | null | null | animazya/apps.py | KenFon/kenfontaine.fr | 6b4055de791e3cc47b473c1890b2fcafab8a635d | [
"MIT"
] | null | null | null | from django.apps import AppConfig
| 16 | 34 | 0.71875 |
b8b7e91501f23e4c04cf067b13d9a9480a460c77 | 59 | py | Python | python/testData/debug/test4.py | jnthn/intellij-community | 8fa7c8a3ace62400c838e0d5926a7be106aa8557 | [
"Apache-2.0"
] | 2 | 2018-12-29T09:53:39.000Z | 2018-12-29T09:53:42.000Z | python/testData/debug/test4.py | jnthn/intellij-community | 8fa7c8a3ace62400c838e0d5926a7be106aa8557 | [
"Apache-2.0"
] | 173 | 2018-07-05T13:59:39.000Z | 2018-08-09T01:12:03.000Z | python/testData/debug/test4.py | jnthn/intellij-community | 8fa7c8a3ace62400c838e0d5926a7be106aa8557 | [
"Apache-2.0"
] | 1 | 2020-11-27T10:36:50.000Z | 2020-11-27T10:36:50.000Z | xval = 0
xvalue1 = 1
xvalue2 = 2
print(xvalue1 + xvalue2)
| 9.833333 | 24 | 0.677966 |
b8b9118b82c32808b4a088d2dcdc263280ad9a3a | 3,000 | py | Python | pres/ray-tracing/main.py | sosterwalder/mte7103-qde | e4ed8beda40c9cd15d3c815567d9a4e3396adee7 | [
"MIT"
] | null | null | null | pres/ray-tracing/main.py | sosterwalder/mte7103-qde | e4ed8beda40c9cd15d3c815567d9a4e3396adee7 | [
"MIT"
] | null | null | null | pres/ray-tracing/main.py | sosterwalder/mte7103-qde | e4ed8beda40c9cd15d3c815567d9a4e3396adee7 | [
"MIT"
] | null | null | null | from kivy.animation import Animation
from kivy.app import App
from kivy.core.window import Window
from kivy.graphics import Color
from kivy.graphics import Ellipse
from kivy.graphics import Line
from kivy.uix.widget import Widget
if __name__ == '__main__':
RayTracingApp().run()
| 28.037383 | 77 | 0.553 |
b8b92ae2e5cf67849b6f6b332521716f375c2982 | 3,960 | py | Python | sookie.py | anygard/sookie | 5732f7644d2d908911735e62c8574863825174a2 | [
"MIT"
] | null | null | null | sookie.py | anygard/sookie | 5732f7644d2d908911735e62c8574863825174a2 | [
"MIT"
] | null | null | null | sookie.py | anygard/sookie | 5732f7644d2d908911735e62c8574863825174a2 | [
"MIT"
] | null | null | null |
""" Sookie, is a waiter, waits for a socket to be listening then it moves on
Usage:
sookie <socket> [--timeout=<to>] [--retry=<rt>] [--logsocket=<ls>] [--logfacility=<lf>] [--loglevel=<ll>]
sookie -h | --help
sookie --version
Options:
-h --help Show this screen
--version Show version
  --timeout=<to>     Timeout in seconds [default: 1800]
--retry=<rt> Interval between retries in seconds [default: 20]
--logsocket=<ls> Socket to send syslog messages to, only logging to local syslog if omitted.
--logfacility=<lf> The syslog facility to use for logging [default: user]
--loglevel=<ll> The syslog severity level to use, i.e the verbosity level [default: info]
<socket> Socket to wait for, 'host:port'
Sookie is intended to be a simple way of providing som measure of management of
inter server dependencies in complex environments. All it does is wait for a
socket to start listening for connections then it exits. It is supposed to be
used as a "smart" sleep in a startup script.
Sookie logs to syslog, and optionally to a remote syslog server aswell. Level
and facility values can be taken from syslog(1)
Sookie Stackhouse is a waitress.
exitcodes
0: ok, the server answered
1: waited until timout
2: invalid syntax
"""
import docopt
import logging
import logging.handlers
import os
import socket
import sys
import time
if __name__ == '__main__':
args = docopt.docopt(__doc__, version='0.1')
main(args)
| 30.697674 | 109 | 0.630303 |
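The sookie docstring above fully specifies the behavior: poll a host:port until it accepts a TCP connection, sleeping between retries, and give up at the timeout. A minimal hedged sketch of such a wait loop — the function name and defaults here are illustrative, not sookie's actual implementation:

```python
import socket
import time


def wait_for_socket(host, port, timeout=1800, retry=20):
    """Return True once host:port accepts TCP connections, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(min(retry, 5))
            if sock.connect_ex((host, port)) == 0:
                return True   # the server answered (sookie exit code 0)
        time.sleep(retry)     # interval between retries
    return False              # waited until timeout (sookie exit code 1)


# Usage in a startup script might look like:
# wait_for_socket("db.example.com", 5432, timeout=600, retry=10)
```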
b8bd55689822e6f7e5a2823014bfe14020f8b719 | 912 | py | Python | tests/parser/syntax/test_ann_assign.py | williamremor/vyper | 4d33dc4140f7d0c339876afb6af7b417bd0ed8e0 | [
"MIT"
] | 1 | 2018-08-31T02:32:57.000Z | 2018-08-31T02:32:57.000Z | tests/parser/syntax/test_ann_assign.py | williamremor/vyper | 4d33dc4140f7d0c339876afb6af7b417bd0ed8e0 | [
"MIT"
] | null | null | null | tests/parser/syntax/test_ann_assign.py | williamremor/vyper | 4d33dc4140f7d0c339876afb6af7b417bd0ed8e0 | [
"MIT"
] | null | null | null | import pytest
from pytest import raises
from vyper import compiler
from vyper.exceptions import VariableDeclarationException, TypeMismatchException
fail_list = [
"""
@public
def test():
a = 1
""",
"""
@public
def test():
a = 33.33
""",
"""
@public
def test():
a = "test string"
""",
("""
@public
def test():
a: num = 33.33
""", TypeMismatchException)
]
valid_list = [
"""
@public
def test():
a: num = 1
""",
]
| 16.888889 | 80 | 0.634868 |
b8bd59b6d2fd731f6b088f01ce1a174d704adcae | 7,568 | py | Python | tests/test_mongoengine_dsl.py | StoneMoe/mongoengine_dsl | 310d77c30e77ba1f695b3d644737fcfc3c2ab304 | [
"MIT"
] | 3 | 2021-08-25T02:08:34.000Z | 2022-03-23T08:32:09.000Z | tests/test_mongoengine_dsl.py | StoneMoe/mongoengine_dsl | 310d77c30e77ba1f695b3d644737fcfc3c2ab304 | [
"MIT"
] | 1 | 2021-08-24T09:41:11.000Z | 2021-08-24T10:02:43.000Z | tests/test_mongoengine_dsl.py | StoneMoe/mongoengine_dsl | 310d77c30e77ba1f695b3d644737fcfc3c2ab304 | [
"MIT"
] | 1 | 2021-08-24T14:25:28.000Z | 2021-08-24T14:25:28.000Z | #!/usr/bin/env python
import unittest
from mongoengine import Document, Q, StringField, connect
from mongoengine_dsl import Query
from mongoengine_dsl.errors import InvalidSyntaxError, TransformHookError
from tests.utils import ts2dt
| 35.698113 | 88 | 0.532505 |
b8be08575104b59466524c927af95ffef96623e1 | 2,891 | py | Python | dmidecode/__init__.py | hamgom95/dmidecode | d8d82fecdbfe578ad5e9c561753dcbc6fdfdc02c | [
"MIT"
] | null | null | null | dmidecode/__init__.py | hamgom95/dmidecode | d8d82fecdbfe578ad5e9c561753dcbc6fdfdc02c | [
"MIT"
] | null | null | null | dmidecode/__init__.py | hamgom95/dmidecode | d8d82fecdbfe578ad5e9c561753dcbc6fdfdc02c | [
"MIT"
] | null | null | null | import subprocess
from collections import UserDict
from functools import lru_cache
def _parse_handle_section(lines):
"""
Parse a section of dmidecode output
* 1st line contains address, type and size
* 2nd line is title
* line started with one tab is one option and its value
* line started with two tabs is a member of list
"""
data = {"_title": next(lines).rstrip()}
for line in lines:
line = line.rstrip()
if line.startswith("\t\t"):
try:
data[k].append(line.lstrip())
except AttributeError:
# ignore stray <OUT OF SPEC> lines
pass
elif line.startswith("\t"):
k, v = [i.strip() for i in line.lstrip().split(":", 1)]
if v is "":
data[k] = []
else:
data[k] = v
else:
break
return data
| 28.623762 | 79 | 0.528191 |
b8beac8ab26c148a31cb1d0f421ff54922a1ebcd | 1,580 | py | Python | {{cookiecutter.repo_name}}/webapp/config/settings/cache.py | bopo/django-template | 465f48563bc9625e37bb278a32800e7a55d9e256 | [
"BSD-3-Clause"
] | null | null | null | {{cookiecutter.repo_name}}/webapp/config/settings/cache.py | bopo/django-template | 465f48563bc9625e37bb278a32800e7a55d9e256 | [
"BSD-3-Clause"
] | null | null | null | {{cookiecutter.repo_name}}/webapp/config/settings/cache.py | bopo/django-template | 465f48563bc9625e37bb278a32800e7a55d9e256 | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
try:
from .base import MIDDLEWARE_CLASSES
except ImportError as e:
raise e
# MIDDLEWARE_CLASSES += (
# 'django.middleware.cache.CacheMiddleware',
# 'django.middleware.cache.UpdateCacheMiddleware',
# 'django.middleware.cache.FetchFromCacheMiddleware',
# )
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.filebased.FileBasedCache',
'LOCATION': '/var/tmp/django_cache',
},
# 'locmem': {
# 'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
# 'LOCATION': 'unique-snowflake',
# },
# 'dummy': {
# 'BACKEND': 'django.core.cache.backends.dummy.DummyCache',
# },
# 'redis': {
# 'BACKEND': 'redis_cache.RedisCache',
# 'LOCATION': '127.0.0.1:6379',
# 'OPTIONS': {
# 'DB': 0,
# 'PASSWORD': '',
# 'CONNECTION_POOL_CLASS': 'redis.BlockingConnectionPool',
# 'CONNECTION_POOL_CLASS_KWARGS': {
# 'max_connections': 50,
# 'timeout': 20,
# }
# },
# },
# 'memcache': {
# 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
# 'LOCATION': '127.0.0.1:11211',
# 'LOCATION': 'unix:/tmp/memcached.sock',
# },
# 'database': {
# 'BACKEND': 'django.core.cache.backends.db.DatabaseCache',
# 'LOCATION': 'my_cache_table',
# }
}
REDIS_TIMEOUT = 7 * 24 * 60 * 60
CUBES_REDIS_TIMEOUT = 60 * 60
NEVER_REDIS_TIMEOUT = 365 * 24 * 60 * 60
| 28.727273 | 75 | 0.563924 |
b8c05b3185ce376bc8351fd54c6fd146defe890b | 5,144 | py | Python | scripts/evaluate_hatexplain.py | GKingA/POTATO | 585eb002d95375a94b496b0f38637fdf69cd8a9e | [
"MIT"
] | 26 | 2021-10-05T14:57:33.000Z | 2022-03-27T04:26:21.000Z | scripts/evaluate_hatexplain.py | GKingA/POTATO | 585eb002d95375a94b496b0f38637fdf69cd8a9e | [
"MIT"
] | 20 | 2021-12-01T09:03:41.000Z | 2022-03-09T10:45:58.000Z | scripts/evaluate_hatexplain.py | GKingA/POTATO | 585eb002d95375a94b496b0f38637fdf69cd8a9e | [
"MIT"
] | 3 | 2021-11-18T07:14:56.000Z | 2022-02-17T09:14:46.000Z | from typing import List, Dict
import json
import numpy as np
from pandas import DataFrame
import logging
from argparse import ArgumentParser, ArgumentError
from sklearn.metrics import classification_report
from xpotato.graph_extractor.extract import FeatureEvaluator
from xpotato.dataset.explainable_dataset import ExplainableDataset
if __name__ == "__main__":
argparser = ArgumentParser()
argparser.add_argument(
"--mode",
"-m",
choices=["find_good_features", "evaluate"],
help="The mode of operation",
default="evaluate",
)
argparser.add_argument(
"--features",
"-f",
help="Path to the feature to evaluate.Used in both modes",
required=True,
)
argparser.add_argument(
"--target",
"-tg",
help="The target category of your features. If not given, than the code will choose one from the feature file.",
)
argparser.add_argument(
"--threshold",
"-th",
help="The minimum precision with which we consider a feature good.",
default=0.8,
type=float,
)
argparser.add_argument(
"--train", "-t", help="The train file in potato format", nargs="+"
)
argparser.add_argument(
"--valid", "-v", help="The validation file in potato format", nargs="+"
)
argparser.add_argument(
"--save_features",
"-sf",
help="Path to the feature file where the good features will be saved in find_good features mode",
)
args = argparser.parse_args()
if args.mode == "find_good_features":
if args.train is None:
raise ArgumentError(
argument=args.train,
message="Training file is needed in find_good_features mode",
)
find_good_features(
args.features,
args.train,
args.valid,
args.save_features,
args.target,
args.threshold,
)
else:
if args.train is None and args.valid is None:
raise ArgumentError(
argument=args.train,
message="At least one training file or validation is needed in evaluate mode",
)
train = [] if args.train is None else args.train
valid = [] if args.valid is None else args.valid
evaluate(args.features, train + valid, args.target)
| 34.756757 | 120 | 0.626361 |
b8c18579e2101b06416f377ffa427b6e165dcba7 | 53 | py | Python | agency/memory/__init__.py | jackharmer/agency | 5a78dd23e14c44c4076e49ea44b83ab1697e51c8 | [
"MIT"
] | 2 | 2022-03-30T19:51:42.000Z | 2022-03-30T20:05:39.000Z | agency/memory/__init__.py | jackharmer/agency | 5a78dd23e14c44c4076e49ea44b83ab1697e51c8 | [
"MIT"
] | null | null | null | agency/memory/__init__.py | jackharmer/agency | 5a78dd23e14c44c4076e49ea44b83ab1697e51c8 | [
"MIT"
] | null | null | null | from .episodic import EpisodicMemory, EpisodicBuffer
| 26.5 | 52 | 0.867925 |
b8c19e5ee50c09165615a57248929fdadd0a46be | 1,876 | py | Python | examples/car_on_hill_fqi.py | doroK/mushroom | 47e5b1d09b65da585c1b19a6cc7f0366849d7863 | [
"MIT"
] | null | null | null | examples/car_on_hill_fqi.py | doroK/mushroom | 47e5b1d09b65da585c1b19a6cc7f0366849d7863 | [
"MIT"
] | null | null | null | examples/car_on_hill_fqi.py | doroK/mushroom | 47e5b1d09b65da585c1b19a6cc7f0366849d7863 | [
"MIT"
] | null | null | null | import numpy as np
from joblib import Parallel, delayed
from sklearn.ensemble import ExtraTreesRegressor
from mushroom.algorithms.value import FQI
from mushroom.core import Core
from mushroom.environments import *
from mushroom.policy import EpsGreedy
from mushroom.utils.dataset import compute_J
from mushroom.utils.parameters import Parameter
"""
This script aims to replicate the experiments on the Car on Hill MDP as
presented in:
"Tree-Based Batch Mode Reinforcement Learning", Ernst D. et al.. 2005.
"""
if __name__ == '__main__':
n_experiment = 1
Js = Parallel(n_jobs=-1)(delayed(experiment)() for _ in range(n_experiment))
    print(np.mean(Js))
| 26.422535 | 80 | 0.655117 |
b8c1beea49870d673c41dbabd215c2bea4001620 | 1,854 | py | Python | db.py | dashimaki360/mahjong-line-bot | e119e83308bed1bfe6d66d53e41a4b7908dceb5e | [
"MIT"
] | null | null | null | db.py | dashimaki360/mahjong-line-bot | e119e83308bed1bfe6d66d53e41a4b7908dceb5e | [
"MIT"
] | 5 | 2018-04-19T06:59:47.000Z | 2018-04-20T00:07:34.000Z | db.py | dashimaki360/mahjong-line-bot | e119e83308bed1bfe6d66d53e41a4b7908dceb5e | [
"MIT"
] | null | null | null | import os
from datetime import datetime
from flask_sqlalchemy import SQLAlchemy
# heroku postgresql setting
app.config['SQLALCHEMY_DATABASE_URI'] = os.getenv('DATABASE_URL', None)
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = True
db = SQLAlchemy(app)
def addToSql(event, reply, sticker=False, image=False):
'''
add message data to sql
'''
if sticker:
msg = "stamp {} {}".format(event.message.package_id, event.message.sticker_id)
elif image:
msg = "IMAGE_MESSAGE"
else:
        msg = event.message.text
add_data = usermessage(
id=event.message.id,
user_id=event.source.user_id,
message=msg,
reply_message=reply,
timestamp=datetime.fromtimestamp(int(event.timestamp)/1000)
)
try:
db.session.add(add_data)
db.session.commit()
    except Exception as e:  # the SQLAlchemy class imported above has no .exc attribute
print("sql error happen")
print(e)
| 28.090909 | 86 | 0.60356 |
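The `usermessage` model and the Flask `app` object that db.py relies on are defined elsewhere in that repository; the insert-commit-or-log shape of `addToSql` can be sketched standalone with stdlib `sqlite3` (the table name and columns below are illustrative, not the real model):

```python
import sqlite3
from datetime import datetime

conn = sqlite3.connect(':memory:')
conn.execute('create table usermessage '
             '(id text primary key, user_id text, message text, ts text)')

def add_message(conn, msg_id, user_id, message):
    try:
        conn.execute('insert into usermessage values (?, ?, ?, ?)',
                     (msg_id, user_id, message, datetime.now().isoformat()))
        conn.commit()
        return True
    except sqlite3.Error as e:  # catches the whole DB-API error family
        print("sql error happen")
        print(e)
        return False

print(add_message(conn, '1', 'u1', 'hello'))  # True; repeating the id logs and returns False
```

Catching `sqlite3.Error` (rather than a bare `except`) mirrors what catching SQLAlchemy's `SQLAlchemyError`/`DBAPIError` pair is meant to do: only database failures are swallowed and logged.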
b8c255c102573468b31394960a4e3c18d4bdfc95 | 728 | py | Python | loss/voxel_match_loss.py | sennnnn/Refer-it-in-RGBD | ac8dcaed80e28d2708f14cba5142fec5301eb3cc | [
"MIT"
] | 28 | 2021-03-26T09:24:23.000Z | 2022-02-17T20:14:43.000Z | loss/voxel_match_loss.py | sennnnn/Refer-it-in-RGBD | ac8dcaed80e28d2708f14cba5142fec5301eb3cc | [
"MIT"
] | 1 | 2021-07-12T02:38:51.000Z | 2021-07-12T11:43:31.000Z | loss/voxel_match_loss.py | sennnnn/Refer-it-in-RGBD | ac8dcaed80e28d2708f14cba5142fec5301eb3cc | [
"MIT"
] | 4 | 2021-08-05T01:57:05.000Z | 2022-02-17T20:26:35.000Z | import torch
import torch.nn as nn | 42.823529 | 96 | 0.715659 |
b8c32fb0b4535e967806c491e7dce8ba89fb1433 | 1,134 | py | Python | app/cachedmodel/migrations/0001_initial.py | Uniquode/uniquode2 | 385f3e0b26383c042d8da64b52350e82414580ea | [
"MIT"
] | null | null | null | app/cachedmodel/migrations/0001_initial.py | Uniquode/uniquode2 | 385f3e0b26383c042d8da64b52350e82414580ea | [
"MIT"
] | null | null | null | app/cachedmodel/migrations/0001_initial.py | Uniquode/uniquode2 | 385f3e0b26383c042d8da64b52350e82414580ea | [
"MIT"
] | null | null | null | # Generated by Django 3.2.7 on 2021-09-19 03:41
from django.db import migrations, models
import django.db.models.deletion
import django.db.models.manager
| 32.4 | 128 | 0.574074 |
b8c4664f2ad6a4052e0d5d282f88dba0b1d97427 | 8,305 | py | Python | conduit/utils/awsbatch_operator.py | elenimath/saber | 71acab9798cf3aee1c4d64b09453e5234f8fdf1e | [
"Apache-2.0"
] | 12 | 2018-05-14T17:43:18.000Z | 2021-11-16T04:03:33.000Z | conduit/utils/awsbatch_operator.py | elenimath/saber | 71acab9798cf3aee1c4d64b09453e5234f8fdf1e | [
"Apache-2.0"
] | 34 | 2019-05-06T19:13:36.000Z | 2021-05-06T19:12:35.000Z | conduit/utils/awsbatch_operator.py | elenimath/saber | 71acab9798cf3aee1c4d64b09453e5234f8fdf1e | [
"Apache-2.0"
] | 3 | 2019-10-08T17:42:17.000Z | 2021-07-28T05:52:02.000Z | # Copyright 2019 The Johns Hopkins University Applied Physics Laboratory
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
import parse
from math import log1p
from time import sleep, time
from airflow.exceptions import AirflowException
from airflow.models import BaseOperator
from airflow.utils import apply_defaults
from conduit.utils.datajoint_hook import DatajointHook, JobMetadata
from airflow.contrib.hooks.aws_hook import AwsHook
from datajoint.errors import DuplicateError
| 38.627907 | 116 | 0.596388 |
b8c4711e42028105dddd073eaee5ccd39e86f063 | 17,579 | py | Python | csapi.py | ria-ee/X-Road-cs-api | 37d28886e47eea21cb4e46ad20b84bbfcafe79ad | [
"MIT"
] | 1 | 2020-04-16T06:31:54.000Z | 2020-04-16T06:31:54.000Z | csapi.py | ria-ee/X-Road-cs-api | 37d28886e47eea21cb4e46ad20b84bbfcafe79ad | [
"MIT"
] | null | null | null | csapi.py | ria-ee/X-Road-cs-api | 37d28886e47eea21cb4e46ad20b84bbfcafe79ad | [
"MIT"
] | 1 | 2019-09-09T08:07:15.000Z | 2019-09-09T08:07:15.000Z | #!/usr/bin/env python3
"""This is a module for X-Road Central Server API.
This module allows:
* adding new member to the X-Road Central Server.
* adding new subsystem to the X-Road Central Server.
"""
import json
import logging
import re
import psycopg2
from flask import request, jsonify
from flask_restful import Resource
DB_CONF_FILE = '/etc/xroad/db.properties'
LOGGER = logging.getLogger('csapi')
def get_db_conf():
"""Get Central Server database configuration parameters"""
conf = {
'database': '',
'username': '',
'password': ''
}
# Getting database credentials from X-Road configuration
try:
with open(DB_CONF_FILE, 'r') as db_conf:
for line in db_conf:
match_res = re.match('^database\\s*=\\s*(.+)$', line)
if match_res:
conf['database'] = match_res.group(1)
match_res = re.match('^username\\s*=\\s*(.+)$', line)
if match_res:
conf['username'] = match_res.group(1)
match_res = re.match('^password\\s*=\\s*(.+)$', line)
if match_res:
conf['password'] = match_res.group(1)
except IOError:
pass
return conf
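`get_db_conf` can be exercised without a real `/etc/xroad/db.properties`: the same regex-per-line scan applied to an in-memory sample (credential values invented for illustration) behaves like this:

```python
import re

SAMPLE = (
    "adapter = postgresql\n"
    "database = centerui_production\n"
    "username = centerui\n"
    "password = secret\n"
)

def parse_db_conf(text):
    # same scan as get_db_conf, but over a string instead of a file
    conf = {'database': '', 'username': '', 'password': ''}
    for line in text.splitlines():
        for key in conf:
            match_res = re.match(r'^%s\s*=\s*(.+)$' % key, line)
            if match_res:
                conf[key] = match_res.group(1)
    return conf

print(parse_db_conf(SAMPLE))
```

Lines whose key is not one of the three wanted fields (like `adapter` above) are simply ignored, matching the original's behavior.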
def get_db_connection(conf):
"""Get connection object for Central Server database"""
return psycopg2.connect(
'host={} port={} dbname={} user={} password={}'.format(
'localhost', '5432', conf['database'], conf['username'], conf['password']))
def get_member_class_id(cur, member_class):
"""Get ID of member class from Central Server"""
cur.execute("""select id from member_classes where code=%(str)s""", {'str': member_class})
rec = cur.fetchone()
if rec:
return rec[0]
return None
def subsystem_exists(cur, member_id, subsystem_code):
"""Check if subsystem exists in Central Server"""
cur.execute(
"""
select exists(
select * from security_server_clients
where type='Subsystem' and xroad_member_id=%(member_id)s
and subsystem_code=%(subsystem_code)s
)
""", {'member_id': member_id, 'subsystem_code': subsystem_code})
return cur.fetchone()[0]
def get_member_data(cur, class_id, member_code):
"""Get member data from Central Server"""
cur.execute(
"""
select id, name
from security_server_clients
where type='XRoadMember' and member_class_id=%(class_id)s
and member_code=%(member_code)s
""", {'class_id': class_id, 'member_code': member_code})
rec = cur.fetchone()
if rec:
return {'id': rec[0], 'name': rec[1]}
return None
def get_utc_time(cur):
"""Get current time in UTC timezone from Central Server database"""
cur.execute("""select current_timestamp at time zone 'UTC'""")
return cur.fetchone()[0]
def add_member_identifier(cur, **kwargs):
"""Add new X-Road member identifier to Central Server
Required keyword arguments:
member_class, member_code, utc_time
"""
cur.execute(
"""
insert into identifiers (
object_type, xroad_instance, member_class, member_code, type, created_at,
updated_at
) values (
'MEMBER', (select value from system_parameters where key='instanceIdentifier'),
%(class)s, %(code)s, 'ClientId', %(time)s, %(time)s
) returning id
""", {
'class': kwargs['member_class'], 'code': kwargs['member_code'],
'time': kwargs['utc_time']}
)
return cur.fetchone()[0]
def add_subsystem_identifier(cur, **kwargs):
"""Add new X-Road subsystem identifier to Central Server
Required keyword arguments:
member_class, member_code, subsystem_code, utc_time
"""
cur.execute(
"""
insert into identifiers (
object_type, xroad_instance, member_class, member_code, subsystem_code, type,
created_at, updated_at
) values (
'SUBSYSTEM', (select value from system_parameters where key='instanceIdentifier'),
%(class)s, %(member_code)s, %(subsystem_code)s, 'ClientId', %(time)s, %(time)s
) returning id
""", {
'class': kwargs['member_class'], 'member_code': kwargs['member_code'],
'subsystem_code': kwargs['subsystem_code'], 'time': kwargs['utc_time']}
)
return cur.fetchone()[0]
def add_member_client(cur, **kwargs):
"""Add new X-Road member client to Central Server
Required keyword arguments:
member_code, member_name, class_id, identifier_id, utc_time
"""
cur.execute(
"""
insert into security_server_clients (
member_code, name, member_class_id, server_client_id, type, created_at, updated_at
) values (
%(code)s, %(name)s, %(class_id)s, %(identifier_id)s, 'XRoadMember', %(time)s,
%(time)s
)
""", {
'code': kwargs['member_code'], 'name': kwargs['member_name'],
'class_id': kwargs['class_id'], 'identifier_id': kwargs['identifier_id'],
'time': kwargs['utc_time']
}
)
def add_subsystem_client(cur, **kwargs):
"""Add new X-Road subsystem as a client to Central Server
Required keyword arguments:
subsystem_code, member_id, identifier_id, utc_time
"""
cur.execute(
"""
insert into security_server_clients (
subsystem_code, xroad_member_id, server_client_id, type, created_at, updated_at
) values (
%(subsystem_code)s, %(member_id)s, %(identifier_id)s, 'Subsystem', %(time)s,
%(time)s
)
""", {
'subsystem_code': kwargs['subsystem_code'], 'member_id': kwargs['member_id'],
'identifier_id': kwargs['identifier_id'], 'time': kwargs['utc_time']
}
)
def add_client_name(cur, **kwargs):
"""Add new X-Road client name to Central Server
Required keyword arguments:
member_name, identifier_id, utc_time
"""
cur.execute(
"""
insert into security_server_client_names (
name, client_identifier_id, created_at, updated_at
) values (
%(name)s, %(identifier_id)s, %(time)s, %(time)s
)
""", {
'name': kwargs['member_name'], 'identifier_id': kwargs['identifier_id'],
'time': kwargs['utc_time']}
)
def add_member(member_class, member_code, member_name, json_data):
"""Add new X-Road member to Central Server"""
conf = get_db_conf()
if not conf['username'] or not conf['password'] or not conf['database']:
LOGGER.error('DB_CONF_ERROR: Cannot access database configuration')
return {
'http_status': 500, 'code': 'DB_CONF_ERROR',
'msg': 'Cannot access database configuration'}
with get_db_connection(conf) as conn:
with conn.cursor() as cur:
class_id = get_member_class_id(cur, member_class)
if class_id is None:
LOGGER.warning(
'INVALID_MEMBER_CLASS: Provided Member Class does not exist '
'(Request: %s)', json_data)
return {
'http_status': 400, 'code': 'INVALID_MEMBER_CLASS',
'msg': 'Provided Member Class does not exist'}
if get_member_data(cur, class_id, member_code) is not None:
LOGGER.warning(
'MEMBER_EXISTS: Provided Member already exists '
'(Request: %s)', json_data)
return {
'http_status': 409, 'code': 'MEMBER_EXISTS',
'msg': 'Provided Member already exists'}
# Timestamps must be in UTC timezone
utc_time = get_utc_time(cur)
identifier_id = add_member_identifier(
cur, member_class=member_class, member_code=member_code, utc_time=utc_time)
add_member_client(
cur, member_code=member_code, member_name=member_name, class_id=class_id,
identifier_id=identifier_id, utc_time=utc_time)
add_client_name(
cur, member_name=member_name, identifier_id=identifier_id, utc_time=utc_time)
conn.commit()
LOGGER.info(
'Added new Member: member_code=%s, member_name=%s, member_class=%s',
member_code, member_name, member_class)
return {'http_status': 201, 'code': 'CREATED', 'msg': 'New Member added'}
def add_subsystem(member_class, member_code, subsystem_code, json_data):
"""Add new X-Road subsystem to Central Server"""
conf = get_db_conf()
if not conf['username'] or not conf['password'] or not conf['database']:
LOGGER.error('DB_CONF_ERROR: Cannot access database configuration')
return {
'http_status': 500, 'code': 'DB_CONF_ERROR',
'msg': 'Cannot access database configuration'}
with get_db_connection(conf) as conn:
with conn.cursor() as cur:
class_id = get_member_class_id(cur, member_class)
if class_id is None:
LOGGER.warning(
'INVALID_MEMBER_CLASS: Provided Member Class does not exist '
'(Request: %s)', json_data)
return {
'http_status': 400, 'code': 'INVALID_MEMBER_CLASS',
'msg': 'Provided Member Class does not exist'}
member_data = get_member_data(cur, class_id, member_code)
if member_data is None:
LOGGER.warning(
'INVALID_MEMBER: Provided Member does not exist '
'(Request: %s)', json_data)
return {
'http_status': 400, 'code': 'INVALID_MEMBER',
'msg': 'Provided Member does not exist'}
if subsystem_exists(cur, member_data['id'], subsystem_code):
LOGGER.warning(
'SUBSYSTEM_EXISTS: Provided Subsystem already exists '
'(Request: %s)', json_data)
return {
'http_status': 409, 'code': 'SUBSYSTEM_EXISTS',
'msg': 'Provided Subsystem already exists'}
# Timestamps must be in UTC timezone
utc_time = get_utc_time(cur)
identifier_id = add_subsystem_identifier(
cur, member_class=member_class, member_code=member_code,
subsystem_code=subsystem_code, utc_time=utc_time)
add_subsystem_client(
cur, subsystem_code=subsystem_code, member_id=member_data['id'],
identifier_id=identifier_id, utc_time=utc_time)
add_client_name(
cur, member_name=member_data['name'], identifier_id=identifier_id,
utc_time=utc_time)
conn.commit()
LOGGER.info(
'Added new Subsystem: member_class=%s, member_code=%s, subsystem_code=%s',
member_class, member_code, subsystem_code)
return {'http_status': 201, 'code': 'CREATED', 'msg': 'New Subsystem added'}
def make_response(data):
"""Create JSON response object"""
response = jsonify({'code': data['code'], 'msg': data['msg']})
response.status_code = data['http_status']
LOGGER.info('Response: %s', data)
return response
def get_input(json_data, param_name):
"""Get parameter from request parameters
Returns two items:
* parameter value
* error response (if parameter not found).
If one parameter is set then other is always None.
"""
try:
param = json_data[param_name]
except KeyError:
LOGGER.warning(
'MISSING_PARAMETER: Request parameter %s is missing '
'(Request: %s)', param_name, json_data)
return None, {
'http_status': 400, 'code': 'MISSING_PARAMETER',
'msg': 'Request parameter {} is missing'.format(param_name)}
return param, None
def load_config(config_file):
"""Load configuration from JSON file"""
try:
with open(config_file, 'r') as conf:
LOGGER.info('Configuration loaded from file "%s"', config_file)
return json.load(conf)
except IOError as err:
LOGGER.error('Cannot load configuration file "%s": %s', config_file, str(err))
return None
except json.JSONDecodeError as err:
LOGGER.error('Invalid JSON configuration file "%s": %s', config_file, str(err))
return None
def check_client(config, client_dn):
"""Check if client dn is in whitelist"""
    # If config is None then no client is allowed
if config is None:
return False
if config.get('allow_all', False) is True:
return True
allowed = config.get('allowed')
if client_dn is None or not isinstance(allowed, list):
return False
if client_dn in allowed:
return True
return False
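As a quick usage illustration of the whitelist rules above (the function is restated here so the calls are self-contained):

```python
def check_client(config, client_dn):
    """Mirror of csapi's check_client, for illustration only."""
    if config is None:
        return False
    if config.get('allow_all', False) is True:
        return True
    allowed = config.get('allowed')
    if client_dn is None or not isinstance(allowed, list):
        return False
    if client_dn in allowed:
        return True
    return False

print(check_client(None, 'CN=client'))                          # no config: deny
print(check_client({'allow_all': True}, None))                  # allow_all: permit
print(check_client({'allowed': ['CN=monitor']}, 'CN=monitor'))  # whitelisted
```

Note the `isinstance(allowed, list)` guard: a configuration where `allowed` is accidentally a single string instead of a list denies everyone rather than matching substrings.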
def incorrect_client(client_dn):
"""Return error response when client is not allowed"""
LOGGER.error('FORBIDDEN: Client certificate is not allowed: %s', client_dn)
return make_response({
'http_status': 403, 'code': 'FORBIDDEN',
'msg': 'Client certificate is not allowed: {}'.format(client_dn)})
def test_db():
"""Add new X-Road subsystem to Central Server"""
conf = get_db_conf()
if not conf['username'] or not conf['password'] or not conf['database']:
LOGGER.error('DB_CONF_ERROR: Cannot access database configuration')
return {
'http_status': 500, 'code': 'DB_CONF_ERROR',
'msg': 'Cannot access database configuration'}
with get_db_connection(conf) as conn:
with conn.cursor() as cur:
cur.execute("""select 1 from system_parameters where key='instanceIdentifier'""")
rec = cur.fetchone()
if rec:
return {
'http_status': 200, 'code': 'OK',
'msg': 'API is ready'}
return {'http_status': 500, 'code': 'DB_ERROR', 'msg': 'Unexpected DB state'}
| 34.878968 | 98 | 0.598669 |
b210a7b86cf5f45e110a190e8d8eb560c075e998 | 397 | py | Python | dotacni_matice/migrations/0002_history.py | CzechInvest/ciis | c6102598f564a717472e5e31e7eb894bba2c8104 | [
"MIT"
] | 1 | 2019-05-26T22:24:01.000Z | 2019-05-26T22:24:01.000Z | dotacni_matice/migrations/0002_history.py | CzechInvest/ciis | c6102598f564a717472e5e31e7eb894bba2c8104 | [
"MIT"
] | 6 | 2019-01-22T14:53:43.000Z | 2020-09-22T16:20:28.000Z | dotacni_matice/migrations/0002_history.py | CzechInvest/ciis | c6102598f564a717472e5e31e7eb894bba2c8104 | [
"MIT"
] | null | null | null | # Generated by Django 2.2.3 on 2019-12-27 18:46
from django.db import migrations, models
| 20.894737 | 58 | 0.602015 |
b2112b5aca6a5b5632c5810795648be898bf0703 | 898 | py | Python | telegrambotapiwrapper/printpretty.py | pynista/telegrambotapiwrapper | 4310882a1a7db94f5256b010ff8a3103b405dc0d | [
"MIT"
] | 1 | 2021-05-10T06:49:52.000Z | 2021-05-10T06:49:52.000Z | telegrambotapiwrapper/printpretty.py | pynista/telegrambotapiwrapper | 4310882a1a7db94f5256b010ff8a3103b405dc0d | [
"MIT"
] | null | null | null | telegrambotapiwrapper/printpretty.py | pynista/telegrambotapiwrapper | 4310882a1a7db94f5256b010ff8a3103b405dc0d | [
"MIT"
] | null | null | null | from collections import OrderedDict
from dataclasses import (
fields,
)
from prettyprinter.prettyprinter import pretty_call, register_pretty
| 23.631579 | 71 | 0.678174 |
b2113a9f179d1a1302e99c7904123f0326d3e145 | 1,055 | py | Python | bot.py | ctrezevant/GEFS-bot | 9fdfbb87e33399051ef2287e629baae234800dcf | [
"MIT"
] | null | null | null | bot.py | ctrezevant/GEFS-bot | 9fdfbb87e33399051ef2287e629baae234800dcf | [
"MIT"
] | null | null | null | bot.py | ctrezevant/GEFS-bot | 9fdfbb87e33399051ef2287e629baae234800dcf | [
"MIT"
] | null | null | null | """
GEFS Chart Bot
Polls https://www.tropicaltidbits.com/storminfo/11L_gefs_latest.png, but it can
really be used to monitor/notify about changes to any file on the web.
(c) Charlton Trezevant 2017
MIT License
Enjoy!
"""
import time, sys
sys.dont_write_bytecode = True
sys.path.insert(0, 'lib')
from EtagMonitor import EtagMonitor
from slackclient import SlackClient
CHART_URL = 'https://www.tropicaltidbits.com/storminfo/11L_gefs_latest.png'
DB_PATH = 'etag.db'
SLACK_TOKEN = ' '
SLACK_CHANNEL = ' '
monitor = EtagMonitor(dbpath=DB_PATH, url=CHART_URL)
slack = SlackClient(SLACK_TOKEN)
if monitor.has_updated() is True:
curtime = time.strftime('%b %d, %Y at %H:%M')
nocache = "?nocache=" + time.strftime('%d%H%M')
msg_text = 'Updated GEFS Chart: ' + curtime + '\n(NOAA, Irma-GEFS)'
msg_attachments = [{"title": "GEFS Chart - Updated " + curtime, "image_url": CHART_URL + nocache}]
slack.api_call("chat.postMessage", channel=SLACK_CHANNEL,
text=msg_text, attachments=msg_attachments)
| 30.142857 | 102 | 0.700474 |
b2116bed98a7e670916911f64ee1ba8f859af9bb | 6,442 | py | Python | tests/test_periodbase.py | pierfra-ro/astrobase | b9f62c59a3ab9cdc1388d409fa281c26f1e6db6c | [
"MIT"
] | 45 | 2017-03-09T19:08:44.000Z | 2022-03-24T00:36:28.000Z | tests/test_periodbase.py | pierfra-ro/astrobase | b9f62c59a3ab9cdc1388d409fa281c26f1e6db6c | [
"MIT"
] | 92 | 2016-12-21T19:01:20.000Z | 2022-01-03T15:28:45.000Z | tests/test_periodbase.py | pierfra-ro/astrobase | b9f62c59a3ab9cdc1388d409fa281c26f1e6db6c | [
"MIT"
] | 20 | 2016-12-20T23:01:29.000Z | 2021-03-07T16:24:15.000Z | '''test_periodbase.py - Waqas Bhatti (wbhatti@astro.princeton.edu) - Feb 2018
License: MIT - see the LICENSE file for details.
This tests the following:
- downloads a light curve from the github repository notebooks/nb-data dir
- reads the light curve using astrobase.hatlc
- runs the GLS, WIN, PDM, AoV, BLS, AoVMH, and ACF period finders on the LC
'''
from __future__ import print_function
import os
import os.path
try:
from urllib import urlretrieve
except Exception:
from urllib.request import urlretrieve
from numpy.testing import assert_allclose
from astrobase.hatsurveys import hatlc
from astrobase import periodbase
# separate testing for kbls and abls from now on
from astrobase.periodbase import kbls
from astrobase.periodbase import abls
try:
import transitleastsquares
from astrobase.periodbase import htls
htls_ok = True
except Exception:
htls_ok = False
############
## CONFIG ##
############
# this is the light curve used for tests
LCURL = ("https://github.com/waqasbhatti/astrobase-notebooks/raw/master/"
"nb-data/HAT-772-0554686-V0-DR0-hatlc.sqlite.gz")
# this function is used to check progress of the download
def on_download_chunk(transferred, blocksize, totalsize):
    # minimal urlretrieve reporthook; the original implementation is not
    # included in this snippet, so this is a stand-in with the same role
    progress = min(transferred * blocksize / float(totalsize), 1.0)
    print('{:.1%} downloaded'.format(progress), end='\r')
# get the light curve if it's not there
modpath = os.path.abspath(__file__)
LCPATH = os.path.abspath(os.path.join(os.getcwd(),
'HAT-772-0554686-V0-DR0-hatlc.sqlite.gz'))
if not os.path.exists(LCPATH):
localf, headerr = urlretrieve(
LCURL,LCPATH,reporthook=on_download_chunk
)
###########
## TESTS ##
###########
def test_gls():
'''
Tests periodbase.pgen_lsp.
'''
lcd, msg = hatlc.read_and_filter_sqlitecurve(LCPATH)
gls = periodbase.pgen_lsp(lcd['rjd'], lcd['aep_000'], lcd['aie_000'])
assert isinstance(gls, dict)
assert_allclose(gls['bestperiod'], 1.54289477)
def test_win():
'''
Tests periodbase.specwindow_lsp
'''
lcd, msg = hatlc.read_and_filter_sqlitecurve(LCPATH)
win = periodbase.specwindow_lsp(lcd['rjd'], lcd['aep_000'], lcd['aie_000'])
assert isinstance(win, dict)
assert_allclose(win['bestperiod'], 592.0307682142864)
def test_pdm():
'''
Tests periodbase.stellingwerf_pdm.
'''
lcd, msg = hatlc.read_and_filter_sqlitecurve(LCPATH)
pdm = periodbase.stellingwerf_pdm(lcd['rjd'],
lcd['aep_000'],
lcd['aie_000'])
assert isinstance(pdm, dict)
assert_allclose(pdm['bestperiod'], 3.08578956)
def test_aov():
'''
Tests periodbase.aov_periodfind.
'''
lcd, msg = hatlc.read_and_filter_sqlitecurve(LCPATH)
aov = periodbase.aov_periodfind(lcd['rjd'],
lcd['aep_000'],
lcd['aie_000'])
assert isinstance(aov, dict)
assert_allclose(aov['bestperiod'], 3.08578956)
def test_aovhm():
'''
Tests periodbase.aov_periodfind.
'''
lcd, msg = hatlc.read_and_filter_sqlitecurve(LCPATH)
mav = periodbase.aovhm_periodfind(lcd['rjd'],
lcd['aep_000'],
lcd['aie_000'])
assert isinstance(mav, dict)
assert_allclose(mav['bestperiod'], 3.08578956)
def test_acf():
'''
Tests periodbase.macf_period_find.
'''
lcd, msg = hatlc.read_and_filter_sqlitecurve(LCPATH)
acf = periodbase.macf_period_find(lcd['rjd'],
lcd['aep_000'],
lcd['aie_000'],
smoothacf=721)
assert isinstance(acf, dict)
assert_allclose(acf['bestperiod'], 3.0750854011348565)
def test_kbls_serial():
'''
Tests periodbase.kbls.bls_serial_pfind.
'''
lcd, msg = hatlc.read_and_filter_sqlitecurve(LCPATH)
bls = kbls.bls_serial_pfind(lcd['rjd'],
lcd['aep_000'],
lcd['aie_000'],
startp=1.0)
assert isinstance(bls, dict)
assert_allclose(bls['bestperiod'], 3.08560655)
def test_kbls_parallel():
'''
Tests periodbase.kbls.bls_parallel_pfind.
'''
lcd, msg = hatlc.read_and_filter_sqlitecurve(LCPATH)
bls = kbls.bls_parallel_pfind(lcd['rjd'],
lcd['aep_000'],
lcd['aie_000'],
startp=1.0)
assert isinstance(bls, dict)
assert_allclose(bls['bestperiod'], 3.08560655)
def test_abls_serial():
'''
This tests periodbase.abls.bls_serial_pfind.
'''
EXPECTED_PERIOD = 3.0873018
lcd, msg = hatlc.read_and_filter_sqlitecurve(LCPATH)
bls = abls.bls_serial_pfind(lcd['rjd'],
lcd['aep_000'],
lcd['aie_000'],
startp=1.0,
ndurations=50)
assert isinstance(bls, dict)
assert_allclose(bls['bestperiod'], EXPECTED_PERIOD)
def test_abls_parallel():
'''
This tests periodbase.abls.bls_parallel_pfind.
'''
EXPECTED_PERIOD = 3.0848887
lcd, msg = hatlc.read_and_filter_sqlitecurve(LCPATH)
bls = abls.bls_parallel_pfind(lcd['rjd'],
lcd['aep_000'],
lcd['aie_000'],
startp=1.0,
ndurations=50)
assert isinstance(bls, dict)
assert_allclose(bls['bestperiod'], EXPECTED_PERIOD, atol=1.0e-4)
if htls_ok:
def test_tls_parallel():
'''
This tests periodbase.htls.tls_parallel_pfind.
'''
EXPECTED_PERIOD = 3.0848887
lcd, msg = hatlc.read_and_filter_sqlitecurve(LCPATH)
tlsdict = htls.tls_parallel_pfind(
lcd['rjd'],
lcd['aep_000'],
lcd['aie_000'],
startp=2.0,
endp=5.0
)
tlsresult = tlsdict['tlsresult']
assert isinstance(tlsresult, dict)
# ensure period is within 2 sigma of what's expected.
assert_allclose(tlsdict['bestperiod'], EXPECTED_PERIOD,
atol=2.0*tlsresult['period_uncertainty'])
| 26.510288 | 80 | 0.591276 |
b211aa66d913dc22688021d4550c75de1f8811d6 | 2,877 | py | Python | tests/sdict/test_sdict_substitutor.py | nikitanovosibirsk/district42-exp-types | e36e43da62f32d58d4b14c65afa16856dc8849e1 | [
"Apache-2.0"
] | null | null | null | tests/sdict/test_sdict_substitutor.py | nikitanovosibirsk/district42-exp-types | e36e43da62f32d58d4b14c65afa16856dc8849e1 | [
"Apache-2.0"
] | 2 | 2021-08-01T05:02:21.000Z | 2021-08-01T10:06:28.000Z | tests/sdict/test_sdict_substitutor.py | nikitanovosibirsk/district42-exp-types | e36e43da62f32d58d4b14c65afa16856dc8849e1 | [
"Apache-2.0"
] | null | null | null | from _pytest.python_api import raises
from baby_steps import given, then, when
from district42 import schema
from revolt import substitute
from revolt.errors import SubstitutionError
from district42_exp_types.sdict import schema_sdict
| 23.77686 | 54 | 0.468891 |
b211f2e30f6de646dd73a75da8055e28f37f148d | 1,735 | py | Python | algo/problems/pascal_triangle.py | avi3tal/knowledgebase | fd30805aa94332a6c14c9d8631c7044673fb3e2c | [
"MIT"
] | null | null | null | algo/problems/pascal_triangle.py | avi3tal/knowledgebase | fd30805aa94332a6c14c9d8631c7044673fb3e2c | [
"MIT"
] | null | null | null | algo/problems/pascal_triangle.py | avi3tal/knowledgebase | fd30805aa94332a6c14c9d8631c7044673fb3e2c | [
"MIT"
] | 1 | 2021-11-19T13:45:59.000Z | 2021-11-19T13:45:59.000Z | """
Pascal's Triangle
1
11
121
1331
14641
Question:
Find the value at a given row and column
First solution: brute force
Second solution: Dynamic programming
An alternative recursive solution:
def pascal(r, c):
print(f"row: {r}, col: {c}")
if r == 0 or r == 1 or c == 0:
return 1
return pascal(r-1, c-1) + pascal(r-1, c)
res = pascal(4, 2)
print(res)
"""
from algo import dynamic_programming
if __name__ == "__main__":
print(find_value_brute_force(50, 28))
print(find_value_dynamically(50, 28))
| 20.903614 | 98 | 0.595389 |
b212f83168a6342d8bcbdaa233860a911b7cdadb | 1,117 | py | Python | drf_ujson/parsers.py | radzhome/drf-ujson-renderer | b65c01edc5311404178a9d245d40ccc10733c5d7 | [
"MIT"
] | null | null | null | drf_ujson/parsers.py | radzhome/drf-ujson-renderer | b65c01edc5311404178a9d245d40ccc10733c5d7 | [
"MIT"
] | null | null | null | drf_ujson/parsers.py | radzhome/drf-ujson-renderer | b65c01edc5311404178a9d245d40ccc10733c5d7 | [
"MIT"
] | 1 | 2019-04-04T13:25:22.000Z | 2019-04-04T13:25:22.000Z | from __future__ import unicode_literals
import codecs
from django.conf import settings
from rest_framework.compat import six
from rest_framework.parsers import BaseParser, ParseError
from rest_framework import renderers
from rest_framework.settings import api_settings
import ujson
| 32.852941 | 78 | 0.72068 |
b213326f9b1abfe3dfc2e0c0ee4f51afa2c00f6e | 778 | py | Python | Software_University/python_basics/exam_preparation/4_exam_prep/renovation.py | Ivanazzz/SoftUni-W3resource-Python | 892321a290e22a91ff2ac2fef5316179a93f2f17 | [
"MIT"
] | 1 | 2022-01-26T07:38:11.000Z | 2022-01-26T07:38:11.000Z | Software_University/python_basics/exam_preparation/4_exam_prep/renovation.py | Ivanazzz/SoftUni-W3resource-Python | 892321a290e22a91ff2ac2fef5316179a93f2f17 | [
"MIT"
] | null | null | null | Software_University/python_basics/exam_preparation/4_exam_prep/renovation.py | Ivanazzz/SoftUni-W3resource-Python | 892321a290e22a91ff2ac2fef5316179a93f2f17 | [
"MIT"
] | null | null | null | from math import ceil
walls_hight = int(input())
walls_witdh = int(input())
percentage_walls_tottal_area_not_painted = int(input())
total_walls_area = walls_hight * walls_witdh * 4
quadratic_meters_left = total_walls_area - ceil(total_walls_area * percentage_walls_tottal_area_not_painted / 100)
while True:
paint_liters = input()
if paint_liters == "Tired!":
print(f"{quadratic_meters_left} quadratic m left.")
break
paint_liters = int(paint_liters)
quadratic_meters_left -= paint_liters
if quadratic_meters_left < 0:
print(f"All walls are painted and you have {abs(quadratic_meters_left)} l paint left!")
break
elif quadratic_meters_left == 0:
print("All walls are painted! Great job, Pesho!")
break
| 32.416667 | 114 | 0.717224 |
b21485714fab66b89d8a0a3cada0bde14841a26b | 19,069 | py | Python | Commands.py | ehasting/psybot | 8699f1ad8010bac5d2622486cb549898fc979036 | [
"BSD-2-Clause"
] | null | null | null | Commands.py | ehasting/psybot | 8699f1ad8010bac5d2622486cb549898fc979036 | [
"BSD-2-Clause"
] | null | null | null | Commands.py | ehasting/psybot | 8699f1ad8010bac5d2622486cb549898fc979036 | [
"BSD-2-Clause"
] | null | null | null | import datetime
import re
import os
import requests
import json
import uuid
import random
import calendar
import time
import libs.SerializableDict as SerializableDict
import libs.StorageObjects as StorageObjects
import libs.Models as Models
import libs.Loggiz as Loggiz
from pytz import timezone
import pytz
import telegram
import logging
'''
Copyright (c) 2016, Egil Hasting
All rights reserved.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation and/or
other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
'''
__author__ = "Egil Hasting"
__copyright__ = "Copyright 2016"
__credits__ = ["Egil Hasting"]
__license__ = "BSD"
__version__ = "1.0.0"
__maintainer__ = "Egil Hasting"
__email__ = "egil.hasting@higen.org"
__status__ = "Production"
# self.db = dbobject
# self.uindex = dbobject.Get("userindex")
if __name__ == '__main__':
pass | 39.317526 | 204 | 0.608579 |
b218434a962715f2504f0272b199565a159dcf7b | 115 | py | Python | aim/pytorch.py | avkudr/aim | 5961f31d358929287986ace09c73310886a94704 | [
"Apache-2.0"
] | 2,195 | 2020-01-23T03:08:11.000Z | 2022-03-31T14:32:19.000Z | aim/pytorch.py | deepanprabhu/aim | c00d8ec7bb2d9fd230a9430b516ca90cdb8072cb | [
"Apache-2.0"
] | 696 | 2020-02-08T21:55:45.000Z | 2022-03-31T16:52:22.000Z | aim/pytorch.py | deepanprabhu/aim | c00d8ec7bb2d9fd230a9430b516ca90cdb8072cb | [
"Apache-2.0"
] | 150 | 2020-03-27T10:44:25.000Z | 2022-03-21T21:29:41.000Z | # Alias to SDK PyTorch utils
from aim.sdk.adapters.pytorch import track_params_dists, track_gradients_dists # noqa
| 38.333333 | 85 | 0.834783 |
b21881d06efcd08194a38d1b8b2a7efa72fa56b5 | 890 | py | Python | src/tools/checkDeckByUrl.py | kentokura/xenoparts | d861ca474accdf1ec7bcf6afcac6be9246cf4c85 | [
"MIT"
] | null | null | null | src/tools/checkDeckByUrl.py | kentokura/xenoparts | d861ca474accdf1ec7bcf6afcac6be9246cf4c85 | [
"MIT"
] | null | null | null | src/tools/checkDeckByUrl.py | kentokura/xenoparts | d861ca474accdf1ec7bcf6afcac6be9246cf4c85 | [
"MIT"
] | null | null | null | # coding: utf-8
# Your code here!
import csv
def encode_cardd_by_url(url: str) -> dict:
"""
:param url: deck-sharing URL containing a "c=" card-list query parameter
:return: { card_id: num } dict mapping each card id to its copy count

The card ids are taken from the dot-separated "c=" parameter of the URL.
"""
site_url, card_url = url.split("c=")
card_url, key_card_url = card_url.split("&")
arr_card_id = card_url.split(".")
deck = { card_id: arr_card_id.count(card_id) for card_id in arr_card_id }
return deck
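For illustration (this example is not part of the original file), here is how the parser behaves on a hypothetical deck URL — the host and `key` parameter are made up. A standalone copy of the function, with the same logic as `encode_cardd_by_url` above, is included so the sketch runs on its own:

```python
def encode_card_deck_by_url(url: str) -> dict:
    # Same logic as encode_cardd_by_url above: take the "c=" query
    # parameter, split the dot-separated card ids, and count copies.
    _, card_url = url.split("c=")
    card_url, _ = card_url.split("&")
    arr_card_id = card_url.split(".")
    return {card_id: arr_card_id.count(card_id) for card_id in arr_card_id}

# Hypothetical URL: three copies of card 100 and one copy of card 205
deck = encode_card_deck_by_url("https://example.com/deck?c=100.100.100.205&key=abc")
print(deck)  # {'100': 3, '205': 1}
```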
# Read a deck URL from stdin and decode it into {card_id: num}
deck = encode_cardd_by_url(input())
card_details = []
# Open the card database CSV and look up details for cards in the deck
with open('../db/dmps_card_db.csv') as card_db:
reader = csv.reader(card_db)
for row in reader:
for card_id, num in deck.items():
# Keep rows whose key (first column) matches a card id in the deck;
# csv.reader already yields each row as a list of fields
if row and row[0] == card_id:
card_details.append(row)
# Print the collected card details
for card_detail in card_details:
print(card_detail)
| 22.25 | 77 | 0.61573 |
b21bfec88e0dfd45846324420361a10ba1865cb9 | 193 | py | Python | kleeneup/__init__.py | caiopo/kleeneup | 0050054853ac7a3a2e40d492cc5fe741ef737191 | [
"MIT"
] | null | null | null | kleeneup/__init__.py | caiopo/kleeneup | 0050054853ac7a3a2e40d492cc5fe741ef737191 | [
"MIT"
] | null | null | null | kleeneup/__init__.py | caiopo/kleeneup | 0050054853ac7a3a2e40d492cc5fe741ef737191 | [
"MIT"
] | 1 | 2018-10-10T00:59:54.000Z | 2018-10-10T00:59:54.000Z | from .regular_grammar import RegularGrammar
from .finite_automaton import FiniteAutomaton, State, Symbol, Sentence
from .regular_expression import RegularExpression, StitchedBinaryTree, Lambda
| 48.25 | 77 | 0.870466 |
b21c7a1509c7bfd68dcac48270c795470f743c73 | 89 | py | Python | recipe_backend/recipes/apps.py | jbernal0019/Recipe_site | 30090b521cac84156cf5f05429a12dd5889f8703 | [
"MIT"
] | null | null | null | recipe_backend/recipes/apps.py | jbernal0019/Recipe_site | 30090b521cac84156cf5f05429a12dd5889f8703 | [
"MIT"
] | 3 | 2020-02-12T01:22:24.000Z | 2021-06-10T21:49:21.000Z | recipe_backend/recipes/apps.py | jbernal0019/Recipe_site | 30090b521cac84156cf5f05429a12dd5889f8703 | [
"MIT"
] | null | null | null | from django.apps import AppConfig
| 14.833333 | 33 | 0.752809 |
b21e2321bf77cc16cff7c91db7f72ea88ee39b5b | 1,337 | py | Python | mass_circular_weighing/constants.py | MSLNZ/Mass-Circular-Weighing | f144158b9e2337d7e9446326d6927e1dd606ed38 | [
"MIT"
] | 1 | 2020-02-19T09:10:43.000Z | 2020-02-19T09:10:43.000Z | mass_circular_weighing/constants.py | MSLNZ/Mass-Circular-Weighing | f144158b9e2337d7e9446326d6927e1dd606ed38 | [
"MIT"
] | null | null | null | mass_circular_weighing/constants.py | MSLNZ/Mass-Circular-Weighing | f144158b9e2337d7e9446326d6927e1dd606ed38 | [
"MIT"
] | null | null | null | """
A repository for constants and symbols used in the mass weighing program
Modify default folder paths as necessary
"""
import os
MU_STR = 'µ'  # ALT+0181 or 'μ'. use 'u' if running into issues
SIGMA_STR = 'σ'  # \u03C3 for sigma sign
DELTA_STR = 'Δ'  # \u0394 for capital delta sign
SQUARED_STR = '²'
SUFFIX = {'ng': 1e-9, 'µg': 1e-6, 'ug': 1e-6, 'mg': 1e-3, 'g': 1, 'kg': 1e3}
DEGREE_SIGN = '°'  # \xb0
IN_DEGREES_C = ' ('+DEGREE_SIGN+'C)'
NBC = True #
REL_UNC = 0.03 # relative uncertainty in ppm for no buoyancy correction: typically 0.03 or 0.1
local_backup = r'C:\CircularWeighingData'
ROOT_DIR = os.path.dirname(os.path.abspath(__file__))
admin_default = os.path.join(ROOT_DIR, r'utils\default_admin.xlsx')
config_default = os.path.join(ROOT_DIR, r'utils\default_config.xml')
save_folder_default = r'G:\My Drive'
commercial21_folder = r'I:\MSL\Private\Mass\Commercial Calibrations\2021'
mass_folder = r'I:\MSL\Private\Mass'
mydrive = r'G:\My Drive'
job_default = "J00000"
client_default = "Client"
client_wt_IDs_default = '1 2 5 10 20 50 100 200 500 1000 2000 5000 10000'.split()
MAX_BAD_RUNS = 6 # limit for aborting circular weighing due to multiple bad runs
FONTSIZE = 32 # size of text in large pop-ups
| 36.135135 | 107 | 0.658938 |
b21e3496e5fd13e41a572208964a13c7cf7ed7c2 | 3,032 | py | Python | UsbVibrationDevice.py | Suitceyes-Project-Code/Vibration-Pattern-Player | 44d8bac61eed0ee7712eb0299d0d7029f688fe24 | [
"MIT"
] | null | null | null | UsbVibrationDevice.py | Suitceyes-Project-Code/Vibration-Pattern-Player | 44d8bac61eed0ee7712eb0299d0d7029f688fe24 | [
"MIT"
] | null | null | null | UsbVibrationDevice.py | Suitceyes-Project-Code/Vibration-Pattern-Player | 44d8bac61eed0ee7712eb0299d0d7029f688fe24 | [
"MIT"
] | 1 | 2021-10-04T14:26:49.000Z | 2021-10-04T14:26:49.000Z | import PyCmdMessenger
from VestDeviceBase import VestDevice | 30.938776 | 115 | 0.539248 |
b2207477c30bc92b4836e3b6d2c7c4f40fd9d5d3 | 923 | py | Python | webapi/tests/test_models.py | c2masamichi/webapp-example-python-django | f0771526623bf5d1021ad1c5c8baf480fb285190 | [
"MIT"
] | null | null | null | webapi/tests/test_models.py | c2masamichi/webapp-example-python-django | f0771526623bf5d1021ad1c5c8baf480fb285190 | [
"MIT"
] | 4 | 2021-03-21T10:43:05.000Z | 2022-02-10T12:46:20.000Z | webapi/tests/test_models.py | c2masamichi/webapp-example-python-django | f0771526623bf5d1021ad1c5c8baf480fb285190 | [
"MIT"
] | null | null | null | from django.core.exceptions import ValidationError
import pytest
from api.models import Product
| 21.97619 | 50 | 0.576381 |
b222b2832f0113a6843a7ce7ec02f0e981a7b9ca | 12,099 | py | Python | tests/app/main/views/test_users.py | AusDTO/dto-digitalmarketplace-admin-frontend | 1858a653623999d81bb4fa3e51f7cb4df4b83079 | [
"MIT"
] | 1 | 2018-01-04T18:10:28.000Z | 2018-01-04T18:10:28.000Z | tests/app/main/views/test_users.py | AusDTO/dto-digitalmarketplace-admin-frontend | 1858a653623999d81bb4fa3e51f7cb4df4b83079 | [
"MIT"
] | 5 | 2016-12-12T04:58:12.000Z | 2019-02-05T21:19:38.000Z | tests/app/main/views/test_users.py | AusDTO/dto-digitalmarketplace-admin-frontend | 1858a653623999d81bb4fa3e51f7cb4df4b83079 | [
"MIT"
] | 3 | 2017-06-19T07:51:38.000Z | 2021-01-12T12:30:22.000Z |
import mock
import pytest
import copy
import six
from lxml import html
from ...helpers import LoggedInApplicationTest
from dmapiclient import HTTPError
| 42.452632 | 112 | 0.659889 |
b223d904c6830f2000cc2bff850aed8bde569ecc | 3,460 | py | Python | code/makestellar.py | gitter-badger/DHOD | f2f084fea6c299f95d15cbea5ec94d404bc946b5 | [
"MIT"
] | null | null | null | code/makestellar.py | gitter-badger/DHOD | f2f084fea6c299f95d15cbea5ec94d404bc946b5 | [
"MIT"
] | null | null | null | code/makestellar.py | gitter-badger/DHOD | f2f084fea6c299f95d15cbea5ec94d404bc946b5 | [
"MIT"
] | null | null | null | import numpy as np
import sys, os
from scipy.optimize import minimize
import json
import matplotlib.pyplot as plt
#
sys.path.append('./utils')
import tools
#
bs, ncf, stepf = 400, 512, 40
path = '../data/z00/'
ftype = 'L%04d_N%04d_S%04d_%02dstep/'
ftypefpm = 'L%04d_N%04d_S%04d_%02dstep_fpm/'
mm = np.load('../data/Illustris_halo_groupmass.npy').T
mh = mm[1]*1e10
ms = mm[2]*1e10
if __name__=='__main__':
if os.path.isfile('../data/stellar.json'): print('Stellar fit exists')
else: dofit()
dofit()
for seed in range(100, 1000, 100):
scattercatalog(seed)
| 29.827586 | 101 | 0.601734 |
b22419f7d9aaad90e17b3010a06a273060fa238e | 1,729 | py | Python | mem_py/login/forms.py | Ciuel/Proyecto-Django | a466659fa7e84e77d0692f4f3c3f8c5f541079d4 | [
"MIT"
] | null | null | null | mem_py/login/forms.py | Ciuel/Proyecto-Django | a466659fa7e84e77d0692f4f3c3f8c5f541079d4 | [
"MIT"
] | null | null | null | mem_py/login/forms.py | Ciuel/Proyecto-Django | a466659fa7e84e77d0692f4f3c3f8c5f541079d4 | [
"MIT"
] | 1 | 2021-07-17T19:41:40.000Z | 2021-07-17T19:41:40.000Z | from django import forms
from django.contrib.auth.forms import UserCreationForm,AuthenticationForm
from .models import UserProfile
# Create your forms here
| 39.295455 | 90 | 0.632736 |
b224af29a62a1d5910e33f4af9c4dfcede1d3b53 | 556 | py | Python | diagrams/alibabacloud/analytics.py | bry-c/diagrams | 4c377a073e0aa8fe41934195da7a0869f31c58eb | [
"MIT"
] | 17,037 | 2020-02-03T01:30:30.000Z | 2022-03-31T18:09:15.000Z | diagrams/alibabacloud/analytics.py | bry-c/diagrams | 4c377a073e0aa8fe41934195da7a0869f31c58eb | [
"MIT"
] | 529 | 2020-02-03T10:43:41.000Z | 2022-03-31T17:33:08.000Z | diagrams/alibabacloud/analytics.py | bry-c/diagrams | 4c377a073e0aa8fe41934195da7a0869f31c58eb | [
"MIT"
] | 1,068 | 2020-02-05T11:54:29.000Z | 2022-03-30T23:28:55.000Z | # This module is automatically generated by autogen.sh. DO NOT EDIT.
from . import _AlibabaCloud
# Aliases
| 17.375 | 68 | 0.726619 |
b224f08977080d30a8248e3383147fd3fad725df | 1,487 | py | Python | numpy_examples/basic_5_structured_arrays.py | stealthness/sklearn-examples | e755fd3804cc15dd28ff2a38e299e80c83565d0a | [
"BSD-3-Clause"
] | null | null | null | numpy_examples/basic_5_structured_arrays.py | stealthness/sklearn-examples | e755fd3804cc15dd28ff2a38e299e80c83565d0a | [
"BSD-3-Clause"
] | null | null | null | numpy_examples/basic_5_structured_arrays.py | stealthness/sklearn-examples | e755fd3804cc15dd28ff2a38e299e80c83565d0a | [
"BSD-3-Clause"
] | null | null | null | """
Purpose of this file is to give examples of structured arrays
This script is partially dirived from the LinkedIn learning course
https://www.linkedin.com/learning/numpy-data-science-essential-training/create-arrays-from-python-structures
"""
import numpy as np
person_data_def = [('name', 'S6'), ('height', 'f8'), ('weight', 'f8'), ('age', 'i8')]
# create a structured array
people_array = np.zeros(4, dtype=person_data_def)
print(f'The structured array is of type {type(people_array)}\n{people_array}')
# let us change some the data values
# note that any int for height or weight will processed as default
people_array[2] = ('Cat', 130, 56, 22)
people_array[0] = ('Amy', 126, 60, 25)
people_array[1] = ('Bell', 146, 60, 20)
people_array[3] = ('Amy', 140, 80, 55)
print(people_array)
# we can print the information for name, height, weight and age
ages = people_array['age']
print(f'the ages of the people are {ages}')
print(f'The names of the people are {people_array["name"]}')
print(f'The heights of the people are {people_array["height"]}')
print(f'The weights of the people are {people_array["weight"]}')
youthful = ages/2
print(f'The young ages are {youthful}')
# Note that youthful does not change the original data
print(f'The original ages are {ages}')
print(people_array[['name', 'age']])
# Record array is a thin wrapper around structured array
person_record_array = np.rec.array([('a', 100, 80, 50), ('b', 190, 189, 20)])
print(type(person_record_array[0])) | 33.795455 | 108 | 0.718897 |
b225b39eea6ed7af22b6d9216dba4156c3fa8839 | 5,716 | py | Python | scripts/ebook_meta_rename.py | mcxiaoke/python-labs | 61c0a1f91008ba82fc2f5a5deb19e60aec9df960 | [
"Apache-2.0"
] | 7 | 2016-07-08T10:53:13.000Z | 2021-07-20T00:20:10.000Z | scripts/ebook_meta_rename.py | mcxiaoke/python-labs | 61c0a1f91008ba82fc2f5a5deb19e60aec9df960 | [
"Apache-2.0"
] | 1 | 2021-05-11T05:20:18.000Z | 2021-05-11T05:20:18.000Z | scripts/ebook_meta_rename.py | mcxiaoke/python-labs | 61c0a1f91008ba82fc2f5a5deb19e60aec9df960 | [
"Apache-2.0"
] | 7 | 2016-10-31T06:31:54.000Z | 2020-08-31T20:55:00.000Z | '''
File: ebook_fix.py
Created: 2021-03-06 15:46:09
Modified: 2021-03-06 15:46:14
Author: mcxiaoke (github@mcxiaoke.com)
License: Apache License 2.0
'''
import sys
import os
from pprint import pprint
from types import new_class
from mobi import Mobi
from ebooklib import epub
import argparse
from multiprocessing.dummy import Pool
from functools import partial
RET_OK = 0
RET_IGNORE = -1
RET_SKIP = -2
RET_PARSE_ERROR = -101
RET_OS_ERROR = -102
BOOK_FORMATS = ('.mobi', '.azw', '.azw3', '.epub')
def list_files(source, recrusily=False, ext_filter=None):
files = []
if not recrusily:
names = os.listdir(source)
if not ext_filter:
files.extend([os.path.join(source, name) for name in names])
else:
for name in names:
_, ext = os.path.splitext(name)
if ext and ext.lower() in ext_filter:
files.append(os.path.join(source, name))
else:
for root, dirs, names in os.walk(source):
if not ext_filter:
files.extend([os.path.join(root, name) for name in names])
else:
for name in names:
_, ext = os.path.splitext(name)
if ext and ext.lower() in ext_filter:
files.append(os.path.join(root, name))
return files
def rename_one_book(fname, idx, total, execute=False):
print('Task({}/{}):\t{}'.format(idx, total, fname))
name = os.path.basename(fname)
_, ext = os.path.splitext(name)
if ext in ('.mobi', '.azw', '.azw3'):
book = MobiParser(fname)
elif ext == '.epub':
book = EpubParser(fname)
else:
print('Unknown Format: {}'.format(name))
book = None
if book:
if execute:
book.rename()
else:
book.check()
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument(
'source', help='Source folder contains ebooks')
parser.add_argument('-e', '--execute', action='store_true',
help='Rename all ebooks [default:False]')
parser.add_argument('-r', '--recrusily', action='store_true',
help='Process books in source folder recursively [default:False]')
args = parser.parse_args()
print(args)
rename_books(args.source, args.execute, args.recrusily)
| 31.755556 | 90 | 0.574003 |
b2268ed3f38975678da47248462c6f15c287a3c3 | 387 | py | Python | sources/boltun/util/collections.py | meiblorn/boltun | d141f555b4f0ed604d8d71883c0bc8811e74370e | [
"MIT"
] | 1 | 2019-12-06T04:19:37.000Z | 2019-12-06T04:19:37.000Z | sources/boltun/util/collections.py | meiblorn/boltun | d141f555b4f0ed604d8d71883c0bc8811e74370e | [
"MIT"
] | null | null | null | sources/boltun/util/collections.py | meiblorn/boltun | d141f555b4f0ed604d8d71883c0bc8811e74370e | [
"MIT"
] | null | null | null | from __future__ import absolute_import, division, print_function
import attr
| 18.428571 | 64 | 0.648579 |
b22a4ac4d8d41f1f54853d90f7a7aa435b4d6a78 | 41 | py | Python | test/python/echo_hi_then_error.py | WrkMetric/Python--NodeJS | 502bb3d81152ef9a16fb618f71f9e9fc43777349 | [
"MIT",
"Unlicense"
] | 1,869 | 2015-01-07T18:06:52.000Z | 2022-03-30T08:35:39.000Z | test/python/echo_hi_then_error.py | PavanAnanthSharma/python-shell | 502bb3d81152ef9a16fb618f71f9e9fc43777349 | [
"MIT",
"Unlicense"
] | 252 | 2015-01-08T17:33:58.000Z | 2022-03-31T09:04:38.000Z | test/python/echo_hi_then_error.py | PavanAnanthSharma/python-shell | 502bb3d81152ef9a16fb618f71f9e9fc43777349 | [
"MIT",
"Unlicense"
] | 238 | 2015-03-22T11:22:30.000Z | 2022-03-15T22:01:44.000Z | print('hi')
raise Exception('fibble-fah') | 20.5 | 29 | 0.731707 |
b22a61d2c3956ab8bd21246cdd7e1d90a774793b | 105 | py | Python | lamdata_baisal89/df_util.py | Baisal89/ds_8_lamdata | 67911b6f15ae6230a65c439a978303ac4b492075 | [
"MIT"
] | null | null | null | lamdata_baisal89/df_util.py | Baisal89/ds_8_lamdata | 67911b6f15ae6230a65c439a978303ac4b492075 | [
"MIT"
] | 1 | 2020-03-31T11:12:26.000Z | 2020-03-31T11:12:26.000Z | lamdata_baisal89/df_util.py | Baisal89/ds_8_lamdata | 67911b6f15ae6230a65c439a978303ac4b492075 | [
"MIT"
] | null | null | null | """
Utility functions for working with DataFrame
"""
import pandas
TEST_DF = pandas.DataFrame([1,2,3])
| 13.125 | 44 | 0.72381 |
b22aabd883c6bf5301a8cac5ec9620f3e682a650 | 2,160 | py | Python | lab05/parsePhoneNrs.py | peter201943/pjm349-CS265-winter2019 | 704ffa8fe0a51795670b6c2b40b153291846fe0b | [
"MIT"
] | null | null | null | lab05/parsePhoneNrs.py | peter201943/pjm349-CS265-winter2019 | 704ffa8fe0a51795670b6c2b40b153291846fe0b | [
"MIT"
] | null | null | null | lab05/parsePhoneNrs.py | peter201943/pjm349-CS265-winter2019 | 704ffa8fe0a51795670b6c2b40b153291846fe0b | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
#
#Peter J. Mangelsdorf
#
# Kurt Schmidt
# 7/06
#
#
# parsePhoneNrs.py - an example of 'grouping' - extracting parts of a match
#
# Python 3.5.2
# on Linux 4.4.0-36-generic x86_64
#
# Demonstrates: regexp, re, search, groups
#
# Usage: By default, reads telNrs.txt . You may supply a different filename
#
# Notes:
# The pattern:
# Note that it is not perfect, but allows a bit of leeway in how we
# write a phone #. No extensions.
# Of course, only handles US-style numbers
#
# EDITOR: cols=120, tabstop=2
#
import sys
import re
stderr = sys.stderr
DEF_A_CODE = "None"
main()
| 24.545455 | 77 | 0.602315 |
b22ba815e08036847020bc1f981a8232bfaa3cd2 | 567 | py | Python | board/gpio.py | JonathanItakpe/realtime-office-light-dashboard | a783152bfee3e099d039c574ca1ea5635f79900d | [
"MIT"
] | 1 | 2017-09-04T14:05:59.000Z | 2017-09-04T14:05:59.000Z | board/gpio.py | JonathanItakpe/realtime-office-light-dashboard | a783152bfee3e099d039c574ca1ea5635f79900d | [
"MIT"
] | null | null | null | board/gpio.py | JonathanItakpe/realtime-office-light-dashboard | a783152bfee3e099d039c574ca1ea5635f79900d | [
"MIT"
] | null | null | null | import RPi.GPIO as gpio
from pusher import Pusher
import time
pusher = Pusher(app_id=u'394325', key=u'cc900daae41222ea463e', secret=u'02ae96830fe03a094573', cluster=u'eu')
gpio.setmode(gpio.BCM)
gpio.setup(2, gpio.OUT)
# TODO: Map each gpio pin to a room eg 2: HNG Main
while True:
gpio.output(2, gpio.OUT)
passcode = raw_input('What is pi? ')
if passcode == 'Awesome':
gpio.output(2, gpio.HIGH)
pusher.trigger(u'statuses', u'new_status', {u'room': u'HNG Main', u'status': u'Off'})
time.sleep(4)
else:
gpio.output(2, gpio.LOW)
print 'Wrong Password'
| 27 | 109 | 0.708995 |
b22bc88a2d140b8c45a0fbac6ce8fea46af69f26 | 1,036 | py | Python | courses/python/mflac/vuln_app/patched_admin.py | tank1st99/securitygym | 2e4fbdf8002afbe51648706906f0db2c294362a6 | [
"MIT"
] | 49 | 2021-05-20T12:49:28.000Z | 2022-03-13T11:35:03.000Z | courses/python/mflac/vuln_app/patched_admin.py | tank1st99/securitygym | 2e4fbdf8002afbe51648706906f0db2c294362a6 | [
"MIT"
] | null | null | null | courses/python/mflac/vuln_app/patched_admin.py | tank1st99/securitygym | 2e4fbdf8002afbe51648706906f0db2c294362a6 | [
"MIT"
] | 5 | 2021-05-20T12:58:34.000Z | 2021-12-05T19:08:13.000Z | import functools
from flask import Blueprint
from flask import render_template
from flask import g
from flask import redirect
from flask import url_for
from flask import flash
from mflac.vuln_app.db import get_db
bp = Blueprint("admin", __name__, url_prefix="/admin")
| 25.268293 | 76 | 0.697876 |
b22c071ff2cdd5ff5f1c6258280e4c7e042b6c35 | 3,708 | py | Python | inpainting/common/eval_test.py | yuyay/ASNG-NAS | 6b908dd25e49471e454d3c2b1e93638af2bd8ecc | [
"MIT"
] | 96 | 2019-05-22T19:04:39.000Z | 2021-12-21T07:50:51.000Z | inpainting/common/eval_test.py | pawopawo/ASNG-NAS | a13c4828cfa9acc1eebd598dc1f88ee18e152159 | [
"MIT"
] | 3 | 2019-11-11T02:13:24.000Z | 2019-11-28T13:25:40.000Z | inpainting/common/eval_test.py | pawopawo/ASNG-NAS | a13c4828cfa9acc1eebd598dc1f88ee18e152159 | [
"MIT"
] | 14 | 2019-05-24T07:50:15.000Z | 2021-07-25T14:16:18.000Z | import os
import pandas as pd
import torch
from torch import nn
import common.utils as util
import scipy.misc as spmi
| 43.623529 | 128 | 0.562567 |
b22daaab8d4ecd141b8b7df40454e33e53d6bbdf | 10,218 | py | Python | connectivity/connectivity.py | vagechirkov/NI-project | fa0687d81ffad9b2e3737fe9115a151335bda358 | [
"MIT"
] | 1 | 2021-06-01T08:06:15.000Z | 2021-06-01T08:06:15.000Z | connectivity/connectivity.py | vagechirkov/NI-project | fa0687d81ffad9b2e3737fe9115a151335bda358 | [
"MIT"
] | null | null | null | connectivity/connectivity.py | vagechirkov/NI-project | fa0687d81ffad9b2e3737fe9115a151335bda358 | [
"MIT"
] | null | null | null | import numpy as np
import matplotlib.pyplot as plt
import networkx as nx
from nxviz import CircosPlot
from neurolib.utils import atlases
# https://doi.org/10.1016/j.neuroimage.2015.07.075 Table 2
# number corresponds to AAL2 labels indices
CORTICAL_REGIONS = {
'central_region': [1, 2, 61, 62, 13, 14],
'frontal_lobe': {
'Lateral surface': [3, 4, 5, 6, 7, 8, 9, 10],
'Medial surface': [19, 20, 15, 16, 73, 74],
'Orbital surface': [11, 12, 17, 18, 21, 22, 23, 24, 25, 26, 27, 28,
29, 30, 31, 32]
},
'temporal_lobe': {
'Lateral surface': [83, 84, 85, 86, 89, 90, 93, 94]
},
'parietal_lobe': {
'Lateral surface': [63, 64, 65, 66, 67, 68, 69, 70],
'Medial surface': [71, 72],
},
'occipital_lobe': {
'Lateral surface': [53, 54, 55, 56, 57, 58],
'Medial and inferior surfaces': [47, 48, 49, 50, 51, 52, 59, 60],
},
'limbic_lobe': [87, 88, 91, 92, 35, 36, 37, 38, 39, 40, 33, 34]
}
def aal2_atlas_add_cortical_regions(aal2_atlas):
"""Add groups of cortical regions.
Parameters
----------
aal2_atlas : neurolib.utils.atlases.AutomatedAnatomicalParcellation2
AAL2 atlas
"""
for i in CORTICAL_REGIONS.items():
inx = []
if not isinstance(i[1], list):
for ii in i[1].items():
inx.append(ii[1])
inx = sum(inx, [])
else:
inx = i[1]
# reindexing from 1 to 0
inx = [i-1 for i in inx]
setattr(aal2_atlas, i[0], inx)
return aal2_atlas
if __name__ == '__main__':
from neurolib.utils.loadData import Dataset
ds = Dataset("gw")
G = make_graph(ds.Cmats[1])
| 33.501639 | 78 | 0.592582 |
b22e2d62e71b8fbbd1c5658b31e4eb4e56a96389 | 140 | py | Python | secao5/exercicio6.py | robinson-1985/exercicios_python_geek_university | 6dfc740472de9ff7c029e26a2ba8f51080e3860b | [
"MIT"
] | null | null | null | secao5/exercicio6.py | robinson-1985/exercicios_python_geek_university | 6dfc740472de9ff7c029e26a2ba8f51080e3860b | [
"MIT"
] | null | null | null | secao5/exercicio6.py | robinson-1985/exercicios_python_geek_university | 6dfc740472de9ff7c029e26a2ba8f51080e3860b | [
"MIT"
] | null | null | null | '''6. Escreva um programa que, dados dois nmeros inteiros, mostre na tela o maior deles,
assim como a diferena existente entre ambos.'''
| 35 | 89 | 0.757143 |
b22e6d6bc215d8e3aa72605534263f2c5a57156d | 1,694 | py | Python | src/conf.py | RJTK/dwglasso_cweeds | eaaa9cd3b3b4f0120f6d9061b585ec46f0678740 | [
"MIT"
] | null | null | null | src/conf.py | RJTK/dwglasso_cweeds | eaaa9cd3b3b4f0120f6d9061b585ec46f0678740 | [
"MIT"
] | null | null | null | src/conf.py | RJTK/dwglasso_cweeds | eaaa9cd3b3b4f0120f6d9061b585ec46f0678740 | [
"MIT"
] | null | null | null | '''
This is the config file for the code in src/. Essentially it
holds things like file and variable names.
'''
# The folder locations of the below files are specified by the
# cookie cutter data science format and are hardcoded into the code.
# I'm not entirely sure that that was the best way to go about it,
# but thats how it is for now.
import os
cwd = os.getcwd() # Current working directory
# Directories continaing data
RAW_DATA_DIR = cwd + '/data/raw/'
INTERIM_DATA_DIR = cwd + '/data/interim/'
PROCESSED_DATA_DIR = cwd + '/data/processed/'
# Path of initial locations text file
LOC_DATA_FILE = RAW_DATA_DIR + 'locations.txt'
# Path to pickle location data
LOC_PKL_FILE = INTERIM_DATA_DIR + 'locations.pkl'
# Path to HDFStores
HDF_INTERIM_FILE = INTERIM_DATA_DIR + 'interim_data.hdf'
HDF_FINAL_FILE = PROCESSED_DATA_DIR + 'final_data.hdf'
# Path to a place to store figures
FIGURE_ROOT = cwd + '/reports/figures/'
# The key for the locations DataFrame in the HDFStore
LOCATIONS_KEY = '/locations/D'
# File prefixes for pickle files
ZZT_FILE_PREFIX = cwd + '/data/processed/ZZT'
YZT_FILE_PREFIX = cwd + '/data/processed/YZT'
X_VALIDATE_FILE_PREFIX = cwd + '/data/processed/X_validate'
# The maximum value of p we are likely to use
MAX_P = 3
# The actual value of p that is used
P_LAG = 2
# The location of the canada shape file for geopandas
CANADA_SHAPE = cwd + '/reports/shapefiles/Canada/Canada.shp'
# Name of the temperature key in hdf
TEMPERATURE_TS_ROOT = 'temperature_ts'
# Used time intervals
INIT_YEAR = 1980 # The initial year for final dataset
FINAL_YEAR = 1990 # The final year for final dataset
FINAL_YEAR_VALIDATE = 1995 # last year for validation set.
| 30.25 | 68 | 0.756198 |
b22e87213200baf4d5c3c89eb335262571cc546e | 1,486 | py | Python | HackerEarth/Python/BasicProgramming/InputOutput/BasicsOfInputOutput/MinimizeCost.py | cychitivav/programming_exercises | e8e7ddb4ec4eea52ee0d3826a144c7dc97195e78 | [
"MIT"
] | null | null | null | HackerEarth/Python/BasicProgramming/InputOutput/BasicsOfInputOutput/MinimizeCost.py | cychitivav/programming_exercises | e8e7ddb4ec4eea52ee0d3826a144c7dc97195e78 | [
"MIT"
] | null | null | null | HackerEarth/Python/BasicProgramming/InputOutput/BasicsOfInputOutput/MinimizeCost.py | cychitivav/programming_exercises | e8e7ddb4ec4eea52ee0d3826a144c7dc97195e78 | [
"MIT"
] | null | null | null | #!/Usr/bin/env python
"""
You are given an array of numbers Ai which contains positive as well as negative numbers . The cost of the array can be defined as C(X)
C(x) = |A1 + T1| + |A2 + T2| + ... + |An + Tn|, where T is the transfer array which contains N zeros initially.
You need to minimize this cost. You can transfer value from one array element to another if and only if the distance between them is at most K.
Also, transfer value can't be transferred further.
Say array contains 3, -1, -2 and K = 1
if we transfer 3 from 1st element to 2nd, the array becomes
Original Value 3, -1, -2
Transferred value -3, 3, 0
C(x) = |3 - 3| + |-1 + 3| + ... + |-2 + 0| = 4 which is minimum in this case
Note :
Only positive value can be transferred
It is not necessary to transfer whole value i.e partial transfer is also acceptable. This means that if you have A[i] = 5 then you can distribute the value 5 across many other array elements provided that they finally sum to a number less than equal to 5. For example 5 can be transferred in chunks of smaller values say 2 , 3 but their sum should not exceed 5.
INPUT:
First line contains N and K separated by space
Second line denotes an array of size N
OUTPUT:
Minimum value of C(X)
CONSTRAINTS:
1 ≤ N, K ≤ 10^5
-10^9 ≤ Ai ≤ 10^9
"""
import io
import os
__author__ = "Cristian Chitiva"
__date__ = "March 18, 2019"
__email__ = "cychitivav@unal.edu.co"
N=os.read(0,2).decode()
print(type(N))
| 31.617021 | 362 | 0.691117 |
b231777aaaf136ffab975467c0c084dcecffc14f | 973 | py | Python | ansible/utils/check_droplet.py | louis-pre/NewsBlur | b4e9a56041ff187ef77b38dfd0778daf41b53f4f | [
"MIT"
] | 3,073 | 2015-01-01T07:20:18.000Z | 2022-03-31T20:33:41.000Z | ansible/utils/check_droplet.py | louis-pre/NewsBlur | b4e9a56041ff187ef77b38dfd0778daf41b53f4f | [
"MIT"
] | 1,054 | 2015-01-02T13:32:35.000Z | 2022-03-30T04:21:21.000Z | ansible/utils/check_droplet.py | louis-pre/NewsBlur | b4e9a56041ff187ef77b38dfd0778daf41b53f4f | [
"MIT"
] | 676 | 2015-01-03T16:40:29.000Z | 2022-03-30T14:00:40.000Z | import sys
import time
import digitalocean
import subprocess
TOKEN_FILE = "/srv/secrets-newsblur/keys/digital_ocean.token"
droplet_name = sys.argv[1]
with open(TOKEN_FILE) as f:
token = f.read().strip()
manager = digitalocean.Manager(token=token)
timeout = 180
timer = 0
ssh_works = False
while not ssh_works:
if timer > timeout:
raise Exception(f"The {droplet_name} droplet was not created.")
droplets = [drop for drop in manager.get_all_droplets() if drop.name == droplet_name]
if droplets:
droplet = droplets[0]
print(f"Found the {droplet_name} droplet. IP address is {droplet.ip_address}. Testing ssh...")
ssh_works = test_ssh(droplet)
time.sleep(3)
timer += 3
print("Success!") | 27.027778 | 105 | 0.697842 |
b231e5387c29a9c303c2891e543771ef5034fb5e | 671 | py | Python | logmappercommon/definitions/logmapperkeys.py | abaena78/logmapper-master | ef4cc7470aec274095afa09f0fe97d9d48299418 | [
"MIT"
] | null | null | null | logmappercommon/definitions/logmapperkeys.py | abaena78/logmapper-master | ef4cc7470aec274095afa09f0fe97d9d48299418 | [
"MIT"
] | null | null | null | logmappercommon/definitions/logmapperkeys.py | abaena78/logmapper-master | ef4cc7470aec274095afa09f0fe97d9d48299418 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Sun Apr 8 09:45:29 2018
@author: abaena
"""
DATATYPE_AGENT = 'agent'
DATATYPE_PATH_METRICS = 'pathmet'
DATATYPE_LOG_EVENTS = 'logeve'
DATATYPE_LOG_METRICS = 'logmet'
DATATYPE_MONITOR_HOST = 'host'
DATATYPE_MONITOR_MICROSERVICE = 'ms'
DATATYPE_MONITOR_TOMCAT = 'tomc'
DATATYPE_MONITOR_POSTGRES = 'psql'
SOURCE_TYPE_READER = "reader"
SOURCE_TYPE_HOST = "host"
SOURCE_TYPE_SPRINGMICROSERVICE = "spring_microservice"
SOURCE_TYPE_TOMCAT = "tomcat"
SOURCE_TYPE_POSTGRES = "postgres"
MEASURE_CAT_METRIC=0
MEASURE_CAT_EVENT=1
TRANSF_TYPE_NONE=0
TRANSF_TYPE_MINMAX=1
TRANSF_TYPE_STD=2
TRANSF_TYPE_PERCENTAGE=3
TRANSF_TYPE_FUZZY_1=4
| 19.735294 | 54 | 0.797317 |
b23410413a31fad8b057b8f858083b133ba2f903 | 7,438 | py | Python | construct/expr.py | DominicAntonacci/construct | abd48c4892ceddc60c11d25f4a955573e2c61111 | [
"MIT"
] | 57 | 2019-12-08T00:02:14.000Z | 2022-03-24T20:40:40.000Z | construct/expr.py | DominicAntonacci/construct | abd48c4892ceddc60c11d25f4a955573e2c61111 | [
"MIT"
] | 3 | 2020-01-26T03:38:31.000Z | 2020-06-21T13:42:46.000Z | construct/expr.py | DominicAntonacci/construct | abd48c4892ceddc60c11d25f4a955573e2c61111 | [
"MIT"
] | 8 | 2020-04-20T08:17:57.000Z | 2021-10-04T06:04:51.000Z | import operator
if not hasattr(operator, "div"):
    operator.div = operator.truediv
opnames = {
    operator.add : "+",
    operator.sub : "-",
    operator.mul : "*",
    operator.div : "/",
    operator.floordiv : "//",
    operator.mod : "%",
    operator.pow : "**",
    operator.xor : "^",
    operator.lshift : "<<",
    operator.rshift : ">>",
    operator.and_ : "and",
    operator.or_ : "or",
    operator.not_ : "not",
    operator.neg : "-",
    operator.pos : "+",
    operator.contains : "in",
    operator.gt : ">",
    operator.ge : ">=",
    operator.lt : "<",
    operator.le : "<=",
    operator.eq : "==",
    operator.ne : "!=",
}
this = Path("this")
obj_ = Path("obj_")
list_ = Path2("list_")
len_ = FuncPath(len)
sum_ = FuncPath(sum)
min_ = FuncPath(min)
max_ = FuncPath(max)
abs_ = FuncPath(abs)
| 28.941634 | 103 | 0.60285 |
b2341f237ea46f0ced528101120f6ba97f84d73f | 14,362 | py | Python | ci/unit_tests/functions_deploy/main_test.py | xverges/watson-assistant-workbench | b899784506c7469be332cb58ed447ca8f607ed30 | [
"Apache-2.0"
] | 1 | 2020-03-27T16:39:38.000Z | 2020-03-27T16:39:38.000Z | ci/unit_tests/functions_deploy/main_test.py | xverges/watson-assistant-workbench | b899784506c7469be332cb58ed447ca8f607ed30 | [
"Apache-2.0"
] | 1 | 2021-01-29T16:14:58.000Z | 2021-02-03T16:10:07.000Z | ci/unit_tests/functions_deploy/main_test.py | xverges/watson-assistant-workbench | b899784506c7469be332cb58ed447ca8f607ed30 | [
"Apache-2.0"
] | 1 | 2021-01-22T13:13:36.000Z | 2021-01-22T13:13:36.000Z | """
Copyright 2019 IBM Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import os
import uuid
import zipfile
from urllib.parse import quote
import pytest
import requests
import functions_delete_package
import functions_deploy
from wawCommons import getFunctionResponseJson
from ...test_utils import BaseTestCaseCapture
| 49.524138 | 137 | 0.598176 |
b2354dd4a0bef69531cc2ff0b6a96364cece153b | 503 | py | Python | python_scripts/tip_loss/tip_loss.py | lawsonro3/python_scripts | 875ff607727ab37006d7b3cb793f1dd97c538d1b | [
"Apache-2.0"
] | null | null | null | python_scripts/tip_loss/tip_loss.py | lawsonro3/python_scripts | 875ff607727ab37006d7b3cb793f1dd97c538d1b | [
"Apache-2.0"
] | null | null | null | python_scripts/tip_loss/tip_loss.py | lawsonro3/python_scripts | 875ff607727ab37006d7b3cb793f1dd97c538d1b | [
"Apache-2.0"
] | null | null | null | import numpy as np
import matplotlib.pyplot as plt
plt.close('all')
# From section 3.8.3 of wind energy explained
# Prandlt tip loss calc
B = 3 # number of blades
R = 1 # blade length
phi = np.deg2rad(10) # relative wind angle
r = np.linspace(0,R,100)
F = 2/np.pi * np.arccos(np.exp(-((B/2)*(1-(r/R)))/((r/R)*np.sin(phi))))
plt.figure(num='Tip loss for phi = %2.1f deg and %d blades' % (np.rad2deg(phi), B))
plt.plot(r,F)
plt.xlabel('Non-Dimensional Blade Radius (r/R)')
plt.ylabel('Tip Loss Factor')
| 29.588235 | 83 | 0.66998 |
b2356595618aa8cdf6515e41ee52e8a997567521 | 854 | py | Python | filter/mot.py | oza6ut0ne/CVStreamer | a299ab2802fe5c116df5c90c4ed872f2d05faaed | [
"MIT"
] | null | null | null | filter/mot.py | oza6ut0ne/CVStreamer | a299ab2802fe5c116df5c90c4ed872f2d05faaed | [
"MIT"
] | null | null | null | filter/mot.py | oza6ut0ne/CVStreamer | a299ab2802fe5c116df5c90c4ed872f2d05faaed | [
"MIT"
] | null | null | null | import time
import cv2
import numpy as np
| 28.466667 | 104 | 0.598361 |
b235d4cd98481beba4ed5022736424b39eba18ea | 8,730 | py | Python | social_auth/backends/contrib/vkontakte.py | ryr/django-social-auth | e1aa22ba8be027ea8e8b0a62caee90485aa44836 | [
"BSD-2-Clause",
"BSD-3-Clause"
] | null | null | null | social_auth/backends/contrib/vkontakte.py | ryr/django-social-auth | e1aa22ba8be027ea8e8b0a62caee90485aa44836 | [
"BSD-2-Clause",
"BSD-3-Clause"
] | null | null | null | social_auth/backends/contrib/vkontakte.py | ryr/django-social-auth | e1aa22ba8be027ea8e8b0a62caee90485aa44836 | [
"BSD-2-Clause",
"BSD-3-Clause"
] | null | null | null | """
VKontakte OpenAPI and OAuth 2.0 support.
This contribution adds support for VKontakte OpenAPI and OAuth 2.0 service in the form
www.vkontakte.ru. Username is retrieved from the identity returned by server.
"""
from django.conf import settings
from django.contrib.auth import authenticate
from django.utils import simplejson
from urllib import urlencode, unquote
from urllib2 import Request, urlopen, HTTPError
from hashlib import md5
from time import time
from social_auth.backends import SocialAuthBackend, OAuthBackend, BaseAuth, BaseOAuth2, USERNAME
VKONTAKTE_API_URL = 'https://api.vkontakte.ru/method/'
VKONTAKTE_SERVER_API_URL = 'http://api.vkontakte.ru/api.php'
VKONTAKTE_API_VERSION = '3.0'
VKONTAKTE_OAUTH2_SCOPE = [''] # Enough for authentication
EXPIRES_NAME = getattr(settings, 'SOCIAL_AUTH_EXPIRATION', 'expires')
USE_APP_AUTH = getattr(settings, 'VKONTAKTE_APP_AUTH', False)
LOCAL_HTML = getattr(settings, 'VKONTAKTE_LOCAL_HTML', 'vkontakte.html')
def vkontakte_api(method, data):
""" Calls VKontakte OpenAPI method
http://vkontakte.ru/apiclub,
http://vkontakte.ru/pages.php?o=-1&p=%C2%FB%EF%EE%EB%ED%E5%ED%E8%E5%20%E7%E0%EF%F0%EE%F1%EE%E2%20%EA%20API
"""
# We need to perform server-side call if no access_token
if not 'access_token' in data:
if not 'v' in data:
data['v'] = VKONTAKTE_API_VERSION
if not 'api_id' in data:
data['api_id'] = USE_APP_AUTH.get('id') if USE_APP_AUTH else settings.VKONTAKTE_APP_ID
data['method'] = method
data['format'] = 'json'
url = VKONTAKTE_SERVER_API_URL
secret = USE_APP_AUTH.get('key') if USE_APP_AUTH else settings.VKONTAKTE_APP_SECRET
param_list = sorted(list(item + '=' + data[item] for item in data))
data['sig'] = md5(''.join(param_list) + secret).hexdigest()
else:
url = VKONTAKTE_API_URL + method
params = urlencode(data)
api_request = Request(url + '?' + params)
try:
return simplejson.loads(urlopen(api_request).read())
except (TypeError, KeyError, IOError, ValueError, IndexError):
return None
# Backend definition
BACKENDS = {
'vkontakte': VKontakteAuth,
'vkontakte-oauth2': VKontakteOAuth2
}
| 35.778689 | 118 | 0.652921 |
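The server-side request signature built in `vkontakte_api` above (parameters formatted as sorted `key=value` fragments, concatenated, the app secret appended, then MD5-hashed) can be isolated into a standalone sketch. The parameter values and secret here are dummies, not real VK credentials, and `.encode()` is added because the original module targets Python 2:

```python
from hashlib import md5

def vk_signature(data, secret):
    # sorted "key=value" fragments, joined, with the secret appended, then MD5
    param_list = sorted(item + '=' + data[item] for item in data)
    return md5((''.join(param_list) + secret).encode()).hexdigest()

params = {'method': 'getProfiles', 'v': '3.0', 'api_id': '1234', 'format': 'json'}
sig = vk_signature(params, secret='dummysecret')
print(len(sig))  # 32 hex characters
```

Because the fragments are sorted before hashing, the signature does not depend on the order in which the parameters were added to the dict.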
b23604c9ede5f1199e722240913b11cf6fdf151d | 1,260 | py | Python | main.py | Harmanjit14/face-distance-detector | 82a491308e32e584750a9b2f757cacafc47e5aaf | [
"MIT"
] | null | null | null | main.py | Harmanjit14/face-distance-detector | 82a491308e32e584750a9b2f757cacafc47e5aaf | [
"MIT"
] | null | null | null | main.py | Harmanjit14/face-distance-detector | 82a491308e32e584750a9b2f757cacafc47e5aaf | [
"MIT"
] | null | null | null | import cv2
import cvzone
from cvzone.FaceMeshModule import FaceMeshDetector
import numpy as np
cap = cv2.VideoCapture(0)
detector = FaceMeshDetector()
text = ['Hello there.', 'My Name is Harman', 'I am bored!']
while True:
success, img = cap.read()
img, faces = detector.findFaceMesh(img, draw=False)
txt = np.zeros_like(img)
if faces:
face = faces[0]
left_pupil = face[145]
right_pupil = face[374]
# cv2.circle(img, left_pupil, 2, (255, 0, 255), cv2.FILLED)
# cv2.circle(img, right_pupil, 2, (255, 0, 255), cv2.FILLED)
# cv2.line(img, left_pupil, right_pupil, (255, 0, 255), 1)
w, _info, _image = detector.findDistance(
left_pupil, right_pupil, img)
W = 6.3
f = 600
D = W*f/w
for i, t in enumerate(text):
top_padding = 20 + int(D/2)
scale = 0.4+D/50
cv2.putText(txt, t, (50, 50+(i*top_padding)),
cv2.FONT_HERSHEY_PLAIN, scale, (255, 255, 255), 2)
cvzone.putTextRect(
img, f'Distance {int(D)} cm', (face[10][0]-100, face[10][1]-20), 2, 3)
stack = cvzone.stackImages([img, txt], 2, 1)
cv2.imshow("Image", stack)
cv2.waitKey(1)
| 27.391304 | 82 | 0.565079 |
b2362a2c9bbd2b259775e9395541cd8ca6653d97 | 3,188 | py | Python | bokeh/util/terminal.py | kinghows/bokeh | aeb7abc1dbe2b67ce0f4422838a96fb8362c52c7 | [
"BSD-3-Clause"
] | 1 | 2018-11-14T19:08:18.000Z | 2018-11-14T19:08:18.000Z | bokeh/util/terminal.py | kinghows/bokeh | aeb7abc1dbe2b67ce0f4422838a96fb8362c52c7 | [
"BSD-3-Clause"
] | 1 | 2021-05-09T02:45:17.000Z | 2021-05-09T02:45:17.000Z | bokeh/util/terminal.py | kinghows/bokeh | aeb7abc1dbe2b67ce0f4422838a96fb8362c52c7 | [
"BSD-3-Clause"
] | 1 | 2020-06-17T05:47:16.000Z | 2020-06-17T05:47:16.000Z | #-----------------------------------------------------------------------------
# Copyright (c) 2012 - 2017, Anaconda, Inc. All rights reserved.
#
# Powered by the Bokeh Development Team.
#
# The full license is in the file LICENSE.txt, distributed with this software.
#-----------------------------------------------------------------------------
''' Provide utilities for formatting terminal output.
'''
#-----------------------------------------------------------------------------
# Boilerplate
#-----------------------------------------------------------------------------
from __future__ import absolute_import, division, print_function, unicode_literals
import logging
log = logging.getLogger(__name__)
#-----------------------------------------------------------------------------
# Imports
#-----------------------------------------------------------------------------
# Standard library imports
import sys
# External imports
# Bokeh imports
#-----------------------------------------------------------------------------
# General API
#-----------------------------------------------------------------------------
# provide fallbacks for highlights in case colorama is not installed
try:
import colorama
from colorama import Fore, Style
sys.platform == "win32" and colorama.init()
except ImportError:
#-----------------------------------------------------------------------------
# Dev API
#-----------------------------------------------------------------------------
#-----------------------------------------------------------------------------
# Private API
#-----------------------------------------------------------------------------
#-----------------------------------------------------------------------------
# Code
#-----------------------------------------------------------------------------
| 35.422222 | 91 | 0.414994 |
b2366956ff664bff3318ce968898d3246b9f6204 | 16,462 | py | Python | exp.py | SOPSLab/SwarmAggregation | 2678208bec747de4f1a925a0bed862cd4205743f | [
"MIT"
] | null | null | null | exp.py | SOPSLab/SwarmAggregation | 2678208bec747de4f1a925a0bed862cd4205743f | [
"MIT"
] | null | null | null | exp.py | SOPSLab/SwarmAggregation | 2678208bec747de4f1a925a0bed862cd4205743f | [
"MIT"
] | null | null | null | # Project: SwarmAggregation
# Filename: exp.py
# Authors: Joshua J. Daymude (jdaymude@asu.edu) and Noble C. Harasha
# (nharasha@mit.edu).
"""
exp: A flexible, unifying framework for defining and running experiments for
swarm aggregation.
"""
import argparse
from aggregation import aggregation, ideal
from itertools import product
from math import sin, cos, hypot, ceil
from matplotlib.animation import FFMpegWriter, ArtistAnimation
import matplotlib.cm as cm
from matplotlib.collections import LineCollection, PatchCollection, PolyCollection
import matplotlib.pyplot as plt
from metrics import *
import numpy as np
import pickle
from tqdm import tqdm
def load_exp(fname):
"""
Load an experiment from the specified file.
"""
with open(fname, 'rb') as f:
exp = pickle.load(f)
return exp
### DATA EXPERIMENTS ###
def exp_base(seed=None):
"""
With default parameters, investigate aggregation over time.
"""
params = {} # This uses all default values.
exp = Experiment('base', params, seed=seed)
exp.run()
exp.save()
exp.plot_evo(runs=[0], iters=[0])
exp.animate(run=0, iter=0)
def exp_symm(seed=None):
"""
With default parameters and symmetric initialization, investigate
aggregation over time for a few system sizes.
"""
N = [3, 5, 10]
params = {'N' : N, 'init' : ['symm']}
exp = Experiment('symm', params, seed=seed)
exp.run()
exp.save()
exp.plot_evo(runs=np.arange(len(exp.params)), iters=[0], metrics=['disp'], \
labels=['{} robots'.format(i) for i in N], \
title='Symmetric Initial Configuration')
def exp_errprob(seed=None):
"""
With default parameters and a range of error probabilities, investigate
average time to aggregation with a 15% stopping condition.
"""
N = [10, 25, 50, 100]
errprob = np.arange(0, 0.501, 0.0125)
params = {'N' : N, 'noise' : [('err', p) for p in errprob], 'stop' : [0.15]}
exp = Experiment('errprob', params, iters=25, savehist=False, seed=seed)
exp.run()
exp.save()
exp.plot_aggtime(N, errprob, 'Error Probability')
def exp_motion(seed=None):
"""
With default parameters and a range of motion noise strengths, investigate
average time to aggregation with a 15% stopping condition.
"""
N = [10, 25, 50, 100]
fmax = np.arange(0, 40.1, 1.25)
params = {'N' : N, 'noise' : [('mot', f) for f in fmax], 'stop' : [0.15]}
exp = Experiment('motion', params, iters=25, savehist=False, seed=seed)
exp.run()
exp.save()
exp.plot_aggtime(N, fmax, 'Max. Noise Force (N)')
def exp_cone(seed=None):
"""
With default parameters and a range of sight sensor sizes, investigate
average time to aggregation with a 15% stopping condition.
"""
N = [10, 25, 50, 100]
sensor = np.arange(0, np.pi, 0.1)
params = {'N' : N, 'sensor' : sensor, 'stop' : [0.15]}
exp = Experiment('cone', params, iters=25, savehist=False, seed=seed)
exp.run()
exp.save()
exp.plot_aggtime(N, sensor, 'Sight Sensor Size (rad)')
### CALIBRATION EXPERIMENTS ###
def exp_step(seed=None):
"""
With default parameters and a range of time step durations, investigate
aggregation over time.
"""
step = [0.0005, 0.001, 0.005, 0.01, 0.025]
params = {'N' : [50], 'time' : [120], 'step' : step}
exp = Experiment('step', params, seed=seed)
exp.run()
exp.save()
exp.plot_evo(runs=np.arange(len(exp.params)), iters=[0], metrics=['disp'], \
labels=['{}s'.format(i) for i in step])
if __name__ == '__main__':
# Parse command line arguments.
parser = argparse.ArgumentParser(description=__doc__)
parser.add_argument('-E', '--exps', type=str, nargs='+', required=True, \
help='IDs of experiments to run')
parser.add_argument('-R', '--rand_seed', type=int, default=None, \
help='Seed for random number generation')
args = parser.parse_args()
# Run selected experiments.
exps = {'base' : exp_base, 'symm' : exp_symm, 'errprob' : exp_errprob, \
'motion' : exp_motion, 'cone' : exp_cone, 'step' : exp_step}
for id in args.exps:
exps[id](args.rand_seed)
| 40.848635 | 82 | 0.551209 |
b2376657b0293a1d78aa6eb2c5f7730819b325c9 | 867 | py | Python | pychron/experiment/tests/comment_template.py | ael-noblegas/pychron | 6ebbbb1f66a614972b62b7a9be4c784ae61b5d62 | [
"Apache-2.0"
] | 1 | 2019-02-27T21:57:44.000Z | 2019-02-27T21:57:44.000Z | pychron/experiment/tests/comment_template.py | ael-noblegas/pychron | 6ebbbb1f66a614972b62b7a9be4c784ae61b5d62 | [
"Apache-2.0"
] | 80 | 2018-07-17T20:10:20.000Z | 2021-08-17T15:38:24.000Z | pychron/experiment/tests/comment_template.py | AGESLDEO/pychron | 1a81e05d9fba43b797f335ceff6837c016633bcf | [
"Apache-2.0"
] | null | null | null |
from __future__ import absolute_import
__author__ = 'ross'
import unittest
from pychron.experiment.utilities.comment_template import CommentTemplater
if __name__ == '__main__':
    unittest.main()
| 22.815789 | 78 | 0.682814 |
b2377bde1e5c8e5670fad099a5e53482fcf577c1 | 1,823 | py | Python | apps/roles/views.py | andipandiber/CajaAhorros | cb0769fc04529088768ea650f9ee048bd9a55837 | [
"MIT"
] | null | null | null | apps/roles/views.py | andipandiber/CajaAhorros | cb0769fc04529088768ea650f9ee048bd9a55837 | [
"MIT"
] | 8 | 2021-03-30T13:39:24.000Z | 2022-03-12T00:36:15.000Z | apps/roles/views.py | andresbermeoq/CajaAhorros | cb0769fc04529088768ea650f9ee048bd9a55837 | [
"MIT"
] | null | null | null | from django.shortcuts import render
from django.urls import reverse_lazy
from django.contrib.auth.mixins import LoginRequiredMixin
from django.views.generic import CreateView, ListView, UpdateView, DeleteView, TemplateView
from .models import Role
| 29.403226 | 91 | 0.705979 |
b2377be653f5937e37d815cfdc93d265c2fab546 | 4,227 | py | Python | vjezba5/DPcli-part.py | vmilkovic/primjena-blockchain-tehnologije | bb18abea1fc6d1a25ae936966231de70b2531bba | [
"MIT"
] | null | null | null | vjezba5/DPcli-part.py | vmilkovic/primjena-blockchain-tehnologije | bb18abea1fc6d1a25ae936966231de70b2531bba | [
"MIT"
] | null | null | null | vjezba5/DPcli-part.py | vmilkovic/primjena-blockchain-tehnologije | bb18abea1fc6d1a25ae936966231de70b2531bba | [
"MIT"
] | null | null | null | import rpyc
from Crypto.Signature import pkcs1_15
from Crypto.Hash import SHA256
from Crypto.PublicKey import RSA
#############
## CLIENT  ##
#############
flag = True
try:
    # the client loads its public and private key from previously created files
    prKey = RSA.import_key(open('private_key.pem').read())
    puKey = RSA.import_key(open('public_key.pem').read())
except FileNotFoundError:
    # if the key files are not found, proceed to create new ones
    print("No address associated with this client was found!")
    odabir = input("Generate a new address? [Y/N]: ")
    odabir = odabir.lower()
    if odabir == 'y':
        if generiraj_kljuceve():
            print("Key creation succeeded")
            prKey = RSA.import_key(open('private_key.pem').read())
            puKey = RSA.import_key(open('public_key.pem').read())
    else:
        print('Exiting the program!')
        flag = False

if flag:
    c = rpyc.connect("127.0.0.1", 25555)
    # after connecting to the server, enter the user-interface loop
    while True:
        opcija = int(input(
            """ 1-Send a transaction to a chosen address
                2-Check the balance of your own address
                3-Check the balance of another address
                4-Register your address on the network
                5-Quit
                Choice [1-5]: """))
        if opcija == 1:
            ###############################################
            # implement entry of the destination address  #
            # and amount -> the user is asked to enter    #
            # those 2 values                              #
            ###############################################
            adresa_primatelja = input('Enter the recipient address: ')
            iznos = input('Enter the transaction amount: ')
            # message holds a string with the transaction details in the form:
            # sender_address#recipient_address#amount
            # the # character is the delimiter between the individual values
            adresa_posiljatelja = str(puKey.n)
            ##################################################################
            # build the string that will be sent to the server as described  #
            # above and store it in the variable message                     #
            ##################################################################
            message = '#'.join([adresa_posiljatelja, adresa_primatelja, iznos])
            # hacked system (recipient listed first instead of the sender):
            #message = '#'.join([adresa_primatelja, adresa_posiljatelja, iznos])
            # before creating the signature, the regular string must be converted to a byte string
            message = message.encode()
            # compute the hash of the message
            h = SHA256.new(message)
            # the hash is encrypted with the client's private key, which yields the signature.
            # the server can decrypt the signature with the client's public key and recover the hash
            # the server can derive the client's public key from the client's address
            signature = pkcs1_15.new(prKey).sign(h)
            print(c.root.transakcija(message, signature))
            # the line above sends the transaction with a digital signature; the commented line below sends it without one
            ##print(c.root.transakcija(message))
        elif opcija == 2:
            print('Address: ')
            print(str(puKey.n))
            print('Balance: ')
            # send the client's address
            # the address is taken from the public key via its n attribute
            # the address is returned as an integer, so it must be converted to a string
            print(c.root.provjeri_adresu(str(puKey.n)))
        elif opcija == 3:
            add = str(input('Enter the address to check: '))
            print('Balance: ')
            print(c.root.provjeri_adresu(add))
        elif opcija == 4:
            print(c.root.registriraj_adresu(str(puKey.n)))
        else:
            break
| 40.257143 | 109 | 0.581263 |
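The wire format and hashing used by menu option 1 above can be reproduced with the standard library alone. The RSA PKCS#1 v1.5 signing itself needs the pycryptodome package, so this sketch stops at the SHA-256 digest; the addresses and amount are made-up values:

```python
import hashlib

def build_message(sender, recipient, amount):
    # same wire format as the client above: values joined with '#'
    return '#'.join([sender, recipient, amount]).encode()

msg = build_message("1234", "5678", "10")
digest = hashlib.sha256(msg).hexdigest()
print(msg)              # b'1234#5678#10'
print(len(digest))      # 64 hex characters
# with pycryptodome installed, the client then signs this digest:
#   signature = pkcs1_15.new(prKey).sign(SHA256.new(msg))
```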
b237ee0ace32e691329070ad414c8eef66fccd44 | 175 | py | Python | waiguan/layers/modules/__init__.py | heixialeeLeon/SSD | afdc90fafea0c0629bba789f546e3e0ca279f205 | [
"MIT"
] | null | null | null | waiguan/layers/modules/__init__.py | heixialeeLeon/SSD | afdc90fafea0c0629bba789f546e3e0ca279f205 | [
"MIT"
] | null | null | null | waiguan/layers/modules/__init__.py | heixialeeLeon/SSD | afdc90fafea0c0629bba789f546e3e0ca279f205 | [
"MIT"
] | null | null | null | from .l2norm import L2Norm
from .multibox_loss import MultiBoxLoss
from .multibox_focalloss import MultiBoxFocalLoss
__all__ = ['L2Norm', 'MultiBoxLoss', 'MultiBoxFocalLoss'] | 35 | 57 | 0.822857 |
b238f91a5ac084ae34b9c4b97d9a95b7ebca4518 | 418 | py | Python | hlwtadmin/migrations/0035_location_disambiguation.py | Kunstenpunt/havelovewilltravel | 6a27824b4d3d8b1bf19e0bc0d0648f0f4e8abc83 | [
"Apache-2.0"
] | 1 | 2020-10-16T16:29:01.000Z | 2020-10-16T16:29:01.000Z | hlwtadmin/migrations/0035_location_disambiguation.py | Kunstenpunt/havelovewilltravel | 6a27824b4d3d8b1bf19e0bc0d0648f0f4e8abc83 | [
"Apache-2.0"
] | 365 | 2020-02-03T12:46:53.000Z | 2022-02-27T17:20:46.000Z | hlwtadmin/migrations/0035_location_disambiguation.py | Kunstenpunt/havelovewilltravel | 6a27824b4d3d8b1bf19e0bc0d0648f0f4e8abc83 | [
"Apache-2.0"
] | null | null | null | # Generated by Django 3.0 on 2020-07-22 08:26
from django.db import migrations, models
| 22 | 74 | 0.617225 |
b23ae56df03e56d6049586357b729e447c6dec2f | 658 | py | Python | 819. Rotate Array/Slicing.py | tulsishankarreddy/leetcode | fffe90b0ab43a57055c248550f31ac18967fe183 | [
"MIT"
] | 1 | 2022-01-19T16:26:49.000Z | 2022-01-19T16:26:49.000Z | 819. Rotate Array/Slicing.py | tulsishankarreddy/leetcode | fffe90b0ab43a57055c248550f31ac18967fe183 | [
"MIT"
] | null | null | null | 819. Rotate Array/Slicing.py | tulsishankarreddy/leetcode | fffe90b0ab43a57055c248550f31ac18967fe183 | [
"MIT"
] | null | null | null | '''This can be solved using list slicing. We modify the list in place by moving the
last k elements of the array to the front and joining them with the remaining part of the list placed after them''' | 54.833333 | 120 | 0.600304 |
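A runnable version of the slicing idea described in the docstring above (a hypothetical reconstruction, since the dump truncated the actual solution):

```python
def rotate(nums, k):
    # move the last k elements to the front, then the rest after them
    k %= len(nums)
    nums[:] = nums[-k:] + nums[:-k]

a = [1, 2, 3, 4, 5, 6, 7]
rotate(a, 3)
print(a)  # [5, 6, 7, 1, 2, 3, 4]
```

Taking `k` modulo the length handles rotations larger than the list; when the reduced `k` is 0, `nums[-0:]` is the whole list and `nums[:-0]` is empty, so the list is unchanged.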
b23c1d878bde31a9833fb50b46f378e78aeb39e0 | 4,019 | py | Python | src/pdfOut.py | virus-on/magister_work | 803d218f83cba31900156ee5f2e2f4df807ccfff | [
"MIT"
] | 2 | 2020-12-02T12:45:08.000Z | 2021-11-15T10:55:10.000Z | src/pdfOut.py | virus-on/magister_work | 803d218f83cba31900156ee5f2e2f4df807ccfff | [
"MIT"
] | null | null | null | src/pdfOut.py | virus-on/magister_work | 803d218f83cba31900156ee5f2e2f4df807ccfff | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
import subprocess
import time
| 42.755319 | 151 | 0.607863 |
b23ca7a903bc4922dc5e8b76e4f255954b93daec | 10,324 | py | Python | ecs/notifications/models.py | programmierfabrik/ecs | 2389a19453e21b2ea4e40b272552bcbd42b926a9 | [
"Apache-2.0"
] | 9 | 2017-02-13T18:17:13.000Z | 2020-11-21T20:15:54.000Z | ecs/notifications/models.py | programmierfabrik/ecs | 2389a19453e21b2ea4e40b272552bcbd42b926a9 | [
"Apache-2.0"
] | 2 | 2021-05-20T14:26:47.000Z | 2021-05-20T14:26:48.000Z | ecs/notifications/models.py | programmierfabrik/ecs | 2389a19453e21b2ea4e40b272552bcbd42b926a9 | [
"Apache-2.0"
] | 4 | 2017-04-02T18:48:59.000Z | 2021-11-23T15:40:35.000Z | from importlib import import_module
from django.conf import settings
from django.db import models
from django.utils.translation import ugettext_lazy as _
from django.utils.translation import ugettext
from django.template import loader
from django.utils.text import slugify
from django.utils import timezone
from reversion.models import Version
from reversion import revisions as reversion
from ecs.documents.models import Document
from ecs.utils.viewutils import render_pdf_context
from ecs.notifications.constants import SAFETY_TYPE_CHOICES
from ecs.notifications.managers import NotificationManager
from ecs.authorization.managers import AuthorizationManager
def get_submission(self):
sf = self.get_submission_form()
if sf:
return sf.submission
return None
def get_filename(self, suffix='.pdf'):
ec_num = '_'.join(
str(num)
for num in self.submission_forms
.order_by('submission__ec_number')
.distinct()
.values_list('submission__ec_number', flat=True)
)
base = '{}-{}'.format(slugify(ec_num), slugify(self.type.name))
return base[:(250 - len(suffix))] + suffix
def render_pdf(self):
tpl = self.type.get_template('notifications/pdf/%s.html')
submission_forms = self.submission_forms.select_related('submission')
return render_pdf_context(tpl, {
'notification': self,
'submission_forms': submission_forms,
'documents': self.documents.order_by('doctype__identifier', 'date', 'name'),
})
def render_pdf_document(self):
assert self.pdf_document is None
pdfdata = self.render_pdf()
self.pdf_document = Document.objects.create_from_buffer(pdfdata,
doctype='notification', parent_object=self, name=str(self)[:250],
original_file_name=self.get_filename())
self.save()
class ReportNotification(Notification):
study_started = models.BooleanField(default=True)
reason_for_not_started = models.TextField(null=True, blank=True)
recruited_subjects = models.PositiveIntegerField(null=True, blank=False)
finished_subjects = models.PositiveIntegerField(null=True, blank=False)
aborted_subjects = models.PositiveIntegerField(null=True, blank=False)
SAE_count = models.PositiveIntegerField(default=0, blank=False)
SUSAR_count = models.PositiveIntegerField(default=0, blank=False)
class CompletionReportNotification(ReportNotification):
study_aborted = models.BooleanField(default=False)
completion_date = models.DateField()
class ProgressReportNotification(ReportNotification):
runs_till = models.DateField(null=True, blank=True)
class AmendmentNotification(DiffNotification, Notification):
is_substantial = models.BooleanField(default=False)
meeting = models.ForeignKey('meetings.Meeting', null=True,
related_name='amendments')
needs_signature = models.BooleanField(default=False)
class SafetyNotification(Notification):
safety_type = models.CharField(max_length=6, db_index=True, choices=SAFETY_TYPE_CHOICES, verbose_name=_('Type'))
class CenterCloseNotification(Notification):
investigator = models.ForeignKey('core.Investigator', related_name="closed_by_notification")
close_date = models.DateField()
NOTIFICATION_MODELS = (
Notification, CompletionReportNotification, ProgressReportNotification,
AmendmentNotification, SafetyNotification, CenterCloseNotification,
)
| 37.955882 | 117 | 0.6852 |
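The filename logic in `get_filename` above caps the slug so that slug plus suffix never exceeds 250 characters. The rule in isolation, with a plain string standing in for the slugified EC numbers (Django's `slugify` is not needed for the length argument):

```python
def capped_filename(base, suffix='.pdf', limit=250):
    # truncate the base so that base + suffix fits within the limit
    return base[:(limit - len(suffix))] + suffix

print(len(capped_filename('x' * 300)))  # 250
print(capped_filename('report'))        # report.pdf
```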
b23cc3375a6c8a89472ca912854ca2234009998d | 2,339 | py | Python | event/event_handler.py | rafty/ServerlessEventSoutcing | 4759a187373af6f0bfded4ff388ba74c09fc4368 | [
"Apache-2.0"
] | null | null | null | event/event_handler.py | rafty/ServerlessEventSoutcing | 4759a187373af6f0bfded4ff388ba74c09fc4368 | [
"Apache-2.0"
] | null | null | null | event/event_handler.py | rafty/ServerlessEventSoutcing | 4759a187373af6f0bfded4ff388ba74c09fc4368 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
import logging
from functools import reduce
from retrying import retry
from model import EventStore, Snapshot
from error import ItemRanShort, IntegrityError
from retry_handler import is_integrity_error, is_not_item_ran_short
logger = logging.getLogger()
logger.setLevel(logging.INFO)
| 29.987179 | 68 | 0.650278 |
b23d36fa5033cff1b7860caf5d44f22ca9d35ade | 3,422 | py | Python | iwjam_import.py | patrickgh3/iwjam | fd6f58bd5217dc13ed475779fe7f1ff6ca7f13be | [
"MIT"
] | null | null | null | iwjam_import.py | patrickgh3/iwjam | fd6f58bd5217dc13ed475779fe7f1ff6ca7f13be | [
"MIT"
] | null | null | null | iwjam_import.py | patrickgh3/iwjam | fd6f58bd5217dc13ed475779fe7f1ff6ca7f13be | [
"MIT"
] | null | null | null | from lxml import etree
import os
import sys
import shutil
import iwjam_util
# Performs an import of a mod project into a base project given a
# previously computed ProjectDiff between them,
# and a list of folder names to prefix
# ('%modname%' will be replaced with the mod's name)
| 36.404255 | 78 | 0.63647 |
b23d41e777497c29e58e3ac4394589928318d38e | 4,663 | py | Python | subspacemethods/basesubspace.py | AdriBesson/spl2018_joint_sparse | bc52b31a9361c73f07ee52b4d5f36a58fb231c96 | [
"MIT"
] | 2 | 2020-07-12T02:04:10.000Z | 2021-05-23T06:37:36.000Z | subspacemethods/basesubspace.py | AdriBesson/joint_sparse_algorithms | bc52b31a9361c73f07ee52b4d5f36a58fb231c96 | [
"MIT"
] | null | null | null | subspacemethods/basesubspace.py | AdriBesson/joint_sparse_algorithms | bc52b31a9361c73f07ee52b4d5f36a58fb231c96 | [
"MIT"
] | null | null | null | from abc import ABCMeta, abstractmethod
import numpy as np | 30.477124 | 107 | 0.57581 |
b23fca8a65b936733d00f0bac508e61b99fa0f3c | 4,550 | py | Python | glow/generate_data_sources.py | tomcent-tom/glow | 6ba5e8142416251a12e361f4216a40936562cfa1 | [
"Apache-2.0"
] | null | null | null | glow/generate_data_sources.py | tomcent-tom/glow | 6ba5e8142416251a12e361f4216a40936562cfa1 | [
"Apache-2.0"
] | null | null | null | glow/generate_data_sources.py | tomcent-tom/glow | 6ba5e8142416251a12e361f4216a40936562cfa1 | [
"Apache-2.0"
] | null | null | null | from connectors.tableau.tableau import TableauConnector
from posixpath import join
from typing import List, Dict, Tuple
import argparse
import connectors.tableau
import os
import utils
import logging
import sys
import yaml
logging.basicConfig(level=logging.INFO)
MAIN_PATH = '/Users/tomevers/projects/airglow'
CONNECTIONS_CONF_FILE = 'airglow_connections.yml'
DS_FILENAME = 'data sources.yml'
DS_TEMPLATE = 'templates/data_source.md'
def get_datasource_definitions(yaml_format=True) -> dict:
""" returns the data source definition yaml file as a dict.
Returns:
a dict with all data sources defined in the yaml file.
"""
yaml_file = os.path.join(MAIN_PATH, 'definitions', DS_FILENAME)
try:
return utils.get_file(yaml_file, yaml_format)
except FileNotFoundError:
logging.exception(FileNotFoundError('Datasource definition file can not be found.'))
sys.exit(1)
if __name__ == "__main__":
parser = argparse.ArgumentParser('Script to convert event definitions file into markdown format.')
parser.add_argument('--docs_dir', type=str,
help='path to the folder where the generated docs should be stored. The script will need write access to this folder. Defaults to "./docs/"')
parser.add_argument('--use_local_definitions', type=str,
help='path to the folder where the generated docs should be stored. The script will need write access to this folder. Defaults to "./docs/"')
args = parser.parse_args()
main(args)
| 39.565217 | 165 | 0.667253 |
b23fd53ddd58d9be266428160e71ab6d0021666d | 4,748 | py | Python | src/app/views/cookbook/recipe.py | rico0821/fridge | c564f9a4b656c06384d5c40db038328c35ccf1ed | [
"MIT"
] | null | null | null | src/app/views/cookbook/recipe.py | rico0821/fridge | c564f9a4b656c06384d5c40db038328c35ccf1ed | [
"MIT"
] | null | null | null | src/app/views/cookbook/recipe.py | rico0821/fridge | c564f9a4b656c06384d5c40db038328c35ccf1ed | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
app.views.cookbook.recipe
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Module for handling recipe view and upload.
--
:copyright: (c)2020 by rico0821
"""
from bson.objectid import ObjectId
from datetime import datetime
from flask import abort, request
from flask_jwt_extended import jwt_required
from schematics.types import DictType, IntType, ListType, StringType
from app.context import context_property
from app.decorators.validation import PayLoadLocation, BaseModel, validate_with_schematics
from app.extensions import mongo_db
from app.misc.imaging import make_filename, save_image
from app.misc.logger import Log
from app.views import BaseResource
| 31.236842 | 90 | 0.571398 |
b2410ae215724bbd3d52cfc6ac8fa233e41ad029 | 5,141 | py | Python | modules/password.py | MasterBurnt/ToolBurnt | 479a310b7ffff58d00d362ac0fa59d95750e3304 | [
"Apache-2.0"
] | 1 | 2021-10-18T09:03:21.000Z | 2021-10-18T09:03:21.000Z | modules/password.py | MasterBurnt/ToolBurnt | 479a310b7ffff58d00d362ac0fa59d95750e3304 | [
"Apache-2.0"
] | null | null | null | modules/password.py | MasterBurnt/ToolBurnt | 479a310b7ffff58d00d362ac0fa59d95750e3304 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
# -*- coding:utf-8 -*-
#
# @name : PassList
# @url : http://github.com/MasterBurnt
# @author : MasterBurnt
#Libraries
from concurrent.futures import ThreadPoolExecutor
import datetime,os,sys,random,time
from colorama import Fore,init,Style
#C&B&I
init()
c1 = Style.BRIGHT + Fore.LIGHTWHITE_EX
c2 = Style.BRIGHT + Fore.LIGHTGREEN_EX
c3 = Style.BRIGHT + Fore.LIGHTCYAN_EX
c4 = Style.BRIGHT + Fore.LIGHTRED_EX
c5 = Style.BRIGHT + Fore.LIGHTYELLOW_EX
c6 = Style.BRIGHT + Fore.LIGHTBLUE_EX
c7 = Fore.RESET
#Clear Console
clear = lambda: os.system('cls' if os.name in ('nt', 'dos') else 'clear')
list = []
out =[]
#Run
| 30.064327 | 183 | 0.491928 |
b24345cfa90040fa81b341f92e8e1c158be7a95e | 673 | py | Python | dvc/parsing/__init__.py | mbraakhekke/dvc | 235d4c9a94603131e00c9b770125584fdb369481 | [
"Apache-2.0"
] | null | null | null | dvc/parsing/__init__.py | mbraakhekke/dvc | 235d4c9a94603131e00c9b770125584fdb369481 | [
"Apache-2.0"
] | null | null | null | dvc/parsing/__init__.py | mbraakhekke/dvc | 235d4c9a94603131e00c9b770125584fdb369481 | [
"Apache-2.0"
] | null | null | null | import logging
from itertools import starmap
from funcy import join
from .context import Context
from .interpolate import resolve
logger = logging.getLogger(__name__)
STAGES = "stages"
| 24.035714 | 71 | 0.665676 |
b243c92f9b965a3b5d10ee0df149df6c22ac02d0 | 1,332 | py | Python | Mundo 2/Aula14.Ex59.py | uirasiqueira/Exercicios_Python | 409b7be9cf278e3043149654de7b41be56a3d951 | [
"MIT"
] | null | null | null | Mundo 2/Aula14.Ex59.py | uirasiqueira/Exercicios_Python | 409b7be9cf278e3043149654de7b41be56a3d951 | [
"MIT"
] | null | null | null | Mundo 2/Aula14.Ex59.py | uirasiqueira/Exercicios_Python | 409b7be9cf278e3043149654de7b41be56a3d951 | [
"MIT"
] | null | null | null | '''Crie um programa que leia dois valores e mostre um menu na tela:
[1] somar
[2] multiplicar
[3] maior
[4] novos numeros
[5] sair do programa
Seu programa devera realizar a operao solicitada em cada caso'''
v1= int(input('Digite um numero: '))
v2 = int(input('Digite outro numero: '))
operacao = 0
print('''[1] somar
[2] multiplicar
[3] maior
[4] novos numeros
[5] sair do programa''')
operacao = int(input('Para realizar uma das operaes anteriores, escolha uma das opes numericas: '))
while operacao!=0:
if operacao == 1:
v = v1+v2
operacao = int(input(f'O valor sera {v}. Qual a proxima operao a ser realizada? '))
if operacao == 2:
v = v1*v2
operacao = int(input(f'O valor sera {v}. Qual a proxima operao a ser realizada? '))
if operacao == 3:
if v1>v2:
operacao = int(input(f'O maior valor sera {v1}. Qual a proxima operao a ser realizada? '))
else:
operacao = int(input(f'O maior valor sera {v2}. Qual a proxima operao a ser realizada? '))
if operacao == 4:
v1 = int(input('Digite um novo numero: '))
v2 = int(input('Digite mais um novo numero: '))
operacao = int(input('Qual a proxima operao a ser realizada? '))
if operacao == 5:
operacao = 0
print('Fim')
| 33.3 | 104 | 0.62012 |
b243f7691e46a57fcead4522c62b345ef6662d0c | 1,692 | py | Python | interviewbit/TwoPointers/kthsmallest.py | zazhang/coding-problems | 704f0ab22ecdc5fca1978ac7791f43258eb441dd | [
"MIT"
] | null | null | null | interviewbit/TwoPointers/kthsmallest.py | zazhang/coding-problems | 704f0ab22ecdc5fca1978ac7791f43258eb441dd | [
"MIT"
] | null | null | null | interviewbit/TwoPointers/kthsmallest.py | zazhang/coding-problems | 704f0ab22ecdc5fca1978ac7791f43258eb441dd | [
"MIT"
] | null | null | null | #!usr/bin/env ipython
"""Coding interview problem (array, math):
See `https://www.interviewbit.com/problems/kth-smallest-element-in-the-array/`
Find the kth smallest element in an unsorted array of non-negative integers.
Definition of kth smallest element:
kth smallest element is the minimum possible n such that there are at least k elements in the array <= n.
In other words, if the array A was sorted, then A[k - 1] ( k is 1 based, while the arrays are 0 based )
NOTE:
You are not allowed to modify the array ( The array is read only ).
Try to do it using constant extra space.
Example:
A : [2 1 4 3 2]
k : 3
answer : 2
"""
if __name__ == '__main__':
s = Solution() # create Solution object
A = (1,3,2,234,5,6,1)
k = 4
print s.kthsmallest(A,k)
| 26.030769 | 105 | 0.613475 |
b2445103c2858f39d46bd3d45d182776355fdcdc | 90 | py | Python | grab_screen/__init__.py | andrei-shabanski/grab-screen | 758187262156aac85f6736c9b8299187b49e43a5 | [
"MIT"
] | 9 | 2017-08-15T03:45:03.000Z | 2022-02-21T18:06:32.000Z | grab_screen/__init__.py | andrei-shabanski/grab-screen | 758187262156aac85f6736c9b8299187b49e43a5 | [
"MIT"
] | 211 | 2017-07-03T15:24:15.000Z | 2022-02-21T14:09:36.000Z | grab_screen/__init__.py | andrei-shabanski/grab-screen | 758187262156aac85f6736c9b8299187b49e43a5 | [
"MIT"
] | 4 | 2017-08-15T03:44:46.000Z | 2022-02-03T10:25:20.000Z | from .cli import main
from .version import __version__
__all__ = ['__version__', 'main']
| 18 | 33 | 0.744444 |
b244e34a9bc2f4dc206325d9907079cdca8ac5ad | 1,021 | py | Python | Test/test_conf_ap/conf_hostapd/create_config.py | liquidinvestigations/wifi-test | beae8674730d78330b1b18214c86206d858ed604 | [
"MIT"
] | null | null | null | Test/test_conf_ap/conf_hostapd/create_config.py | liquidinvestigations/wifi-test | beae8674730d78330b1b18214c86206d858ed604 | [
"MIT"
] | null | null | null | Test/test_conf_ap/conf_hostapd/create_config.py | liquidinvestigations/wifi-test | beae8674730d78330b1b18214c86206d858ed604 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
from hostapdconf.parser import HostapdConf
from hostapdconf import helpers as ha
import subprocess
def create_hostapd_conf(ssid, password, interface):
"""
Create a new hostapd.conf with the given ssid, password, interface.
Overwrites the current config file.
"""
subprocess.call(['touch', './hostapd.conf'])
conf = HostapdConf('./hostapd.conf')
# set some common options
ha.set_ssid(conf, ssid)
ha.reveal_ssid(conf)
ha.set_iface(conf, interface)
ha.set_driver(conf, ha.STANDARD)
ha.set_channel(conf, 2)
ha.enable_wpa(conf, passphrase=password, wpa_mode=ha.WPA2_ONLY)
ha.set_country(conf, 'ro')
# my hostapd doesn't like the default values of -1 here, so we set some
# dummy values
conf.update({'rts_threshold': 0, 'fragm_threshold': 256})
print("writing configuration")
conf.write()
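For reference, the file this function writes would contain hostapd directives roughly like the following. The values come from the calls above, but the exact key names emitted for driver, WPA mode, and country are internals of the `hostapdconf` helpers, so treat this as an approximate sketch rather than the literal output:

```ini
interface=wlan0
ssid=test_conf_supplicant
channel=2
country_code=ro
wpa=2
wpa_passphrase=password
rts_threshold=0
fragm_threshold=256
```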
if __name__ == '__main__':
print("Creating conf file...")
create_hostapd_conf('test_conf_supplicant', 'password', 'wlan0')
| 27.594595 | 75 | 0.695397 |
b245f0f4f9cda0ef7cfead4f0aa73f69f90186e7 | 1,669 | py | Python | tests/test_paddle.py | ankitshah009/MMdnn | a03d800eb4016765e97f82eb5d2e69f98de3a9cf | [
"MIT"
] | 3,442 | 2017-11-20T08:39:51.000Z | 2019-05-06T10:51:19.000Z | tests/test_paddle.py | ankitshah009/MMdnn | a03d800eb4016765e97f82eb5d2e69f98de3a9cf | [
"MIT"
] | 430 | 2017-11-29T04:21:48.000Z | 2019-05-06T05:37:37.000Z | tests/test_paddle.py | ankitshah009/MMdnn | a03d800eb4016765e97f82eb5d2e69f98de3a9cf | [
"MIT"
] | 683 | 2017-11-20T08:50:34.000Z | 2019-05-04T04:25:14.000Z | from __future__ import absolute_import
from __future__ import print_function
import os
import sys
from conversion_imagenet import TestModels
from conversion_imagenet import is_paddle_supported
if __name__ == '__main__':
test_paddle()
| 30.345455 | 56 | 0.559617 |
b246c295c51a8336e9c8fb87cdefb3fbbe9fe216 | 900 | py | Python | cabinet/tools.py | cauabernardino/cabinet | 96bf0d6e467f35d6241ea97f0553bb449fefd15e | [
"MIT"
] | null | null | null | cabinet/tools.py | cauabernardino/cabinet | 96bf0d6e467f35d6241ea97f0553bb449fefd15e | [
"MIT"
] | null | null | null | cabinet/tools.py | cauabernardino/cabinet | 96bf0d6e467f35d6241ea97f0553bb449fefd15e | [
"MIT"
] | null | null | null | import pathlib
import shutil
from typing import Dict, List, Union
from cabinet.consts import SUPPORTED_FILETYPES
def dir_parser(path_to_dir: str) -> Dict[str, Dict[str, str]]:
"""
Parses the given directory, and returns the path, stem and suffix for files.
"""
files = pathlib.Path(path_to_dir).resolve().glob("*.*")
files_data = {}
for file in files:
files_data[file.stem] = {
"suffix": file.suffix,
"path": file.as_posix(),
}
return files_data
def bin_resolver(file_data: Dict[str, str]) -> Union[List[str], None]:
"""
Resolves the right binary to run the script.
"""
file_suffix = file_data["suffix"]
if file_suffix in SUPPORTED_FILETYPES.keys():
commands = SUPPORTED_FILETYPES[file_suffix].split(" ")
commands[0] = shutil.which(commands[0])
return commands
return None
| 23.684211 | 80 | 0.638889 |
b2473e8998bf083e1cd206ca3716ffba6efcc23c | 1,778 | py | Python | stickyuploads/utils.py | caktus/django-sticky-uploads | a57539655ba991f63f31f0a5c98d790947bcd1b8 | [
"BSD-3-Clause"
] | 11 | 2015-08-14T14:38:02.000Z | 2019-12-16T14:39:30.000Z | stickyuploads/utils.py | caktus/django-sticky-uploads | a57539655ba991f63f31f0a5c98d790947bcd1b8 | [
"BSD-3-Clause"
] | 16 | 2015-08-05T14:02:19.000Z | 2018-03-28T15:43:47.000Z | stickyuploads/utils.py | caktus/django-sticky-uploads | a57539655ba991f63f31f0a5c98d790947bcd1b8 | [
"BSD-3-Clause"
] | 6 | 2015-08-14T12:34:52.000Z | 2019-10-16T04:18:37.000Z | from __future__ import unicode_literals
import os
from django.core import signing
from django.core.exceptions import ImproperlyConfigured
from django.core.files.storage import get_storage_class
from django.utils.functional import LazyObject
def serialize_upload(name, storage, url):
"""
Serialize uploaded file by name and storage. Namespaced by the upload url.
"""
if isinstance(storage, LazyObject):
# Unwrap lazy storage class
storage._setup()
cls = storage._wrapped.__class__
else:
cls = storage.__class__
return signing.dumps({
'name': name,
'storage': '%s.%s' % (cls.__module__, cls.__name__)
}, salt=url)
def deserialize_upload(value, url):
"""
Restore file and name and storage from serialized value and the upload url.
"""
result = {'name': None, 'storage': None}
try:
result = signing.loads(value, salt=url)
except signing.BadSignature:
# TODO: Log invalid signature
pass
else:
try:
result['storage'] = get_storage_class(result['storage'])
except (ImproperlyConfigured, ImportError):
# TODO: Log invalid class
result = {'name': None, 'storage': None}
return result
def open_stored_file(value, url):
"""
Deserialize value for a given upload url and return open file.
Returns None if deserialization fails.
"""
upload = None
result = deserialize_upload(value, url)
filename = result['name']
storage_class = result['storage']
if storage_class and filename:
storage = storage_class()
if storage.exists(filename):
upload = storage.open(filename)
upload.name = os.path.basename(filename)
return upload
| 29.147541 | 79 | 0.654668 |
b248f043c0feea53fbb2ab2028061229d654718b | 693 | py | Python | build/beginner_tutorials/cmake/beginner_tutorials-genmsg-context.py | aracelis-git/beginner_tutorials | 3bb11e496c414237543e8783dd01b57ef8952bca | [
"Apache-2.0"
] | null | null | null | build/beginner_tutorials/cmake/beginner_tutorials-genmsg-context.py | aracelis-git/beginner_tutorials | 3bb11e496c414237543e8783dd01b57ef8952bca | [
"Apache-2.0"
] | null | null | null | build/beginner_tutorials/cmake/beginner_tutorials-genmsg-context.py | aracelis-git/beginner_tutorials | 3bb11e496c414237543e8783dd01b57ef8952bca | [
"Apache-2.0"
] | null | null | null | # generated from genmsg/cmake/pkg-genmsg.context.in
messages_str = "/home/viki/catkin_ws/src/beginner_tutorials/msg/Num.msg"
services_str = "/home/viki/catkin_ws/src/beginner_tutorials/srv/ResetCount.srv;/home/viki/catkin_ws/src/beginner_tutorials/srv/AddTwoInts.srv"
pkg_name = "beginner_tutorials"
dependencies_str = "std_msgs"
langs = "gencpp;genlisp;genpy"
dep_include_paths_str = "beginner_tutorials;/home/viki/catkin_ws/src/beginner_tutorials/msg;std_msgs;/opt/ros/indigo/share/std_msgs/cmake/../msg"
PYTHON_EXECUTABLE = "/usr/bin/python"
package_has_static_sources = '' == 'TRUE'
genmsg_check_deps_script = "/opt/ros/indigo/share/genmsg/cmake/../../../lib/genmsg/genmsg_check_deps.py"
| 57.75 | 145 | 0.799423 |
b249e4cc4dd6019c8854e04867ecd673f6f4e948 | 9,392 | py | Python | demo/utils.py | NguyenTuan-Dat/Custom_3D | 148d3e4baa0d0d36714ec2c164ef31cff1bb5751 | [
"Apache-2.0"
] | 41 | 2021-09-16T08:19:19.000Z | 2022-03-22T10:10:31.000Z | demo/utils.py | NguyenTuan-Dat/Custom_3D | 148d3e4baa0d0d36714ec2c164ef31cff1bb5751 | [
"Apache-2.0"
] | null | null | null | demo/utils.py | NguyenTuan-Dat/Custom_3D | 148d3e4baa0d0d36714ec2c164ef31cff1bb5751 | [
"Apache-2.0"
] | 2 | 2021-11-26T14:55:32.000Z | 2021-12-05T12:57:24.000Z | import os
import cv2
import numpy as np
import torch
import torch.nn as nn
import yaml
IMG_EXTENSIONS = (".jpg", ".jpeg", ".png", ".ppm", ".bmp", ".pgm", ".tif", ".tiff", "webp")
def xmkdir(path):
"""Create directory PATH recursively if it does not exist."""
os.makedirs(path, exist_ok=True)
| 40.834783 | 117 | 0.544719 |
b24b76ff37f2289a78c64dcda02fb884eb113dbd | 227 | py | Python | examples/scannet_normals/data.py | goodok/fastai_sparse | 802ede772c19ccca7449eb13d0a107bc0c10ab0f | [
"MIT"
] | 49 | 2019-03-31T21:20:27.000Z | 2021-06-30T18:46:58.000Z | examples/scannet_normals/data.py | goodok/fastai_sparse | 802ede772c19ccca7449eb13d0a107bc0c10ab0f | [
"MIT"
] | 6 | 2019-04-17T16:01:05.000Z | 2020-11-10T09:22:10.000Z | examples/scannet_normals/data.py | goodok/fastai_sparse | 802ede772c19ccca7449eb13d0a107bc0c10ab0f | [
"MIT"
] | 5 | 2019-04-01T10:46:29.000Z | 2021-01-03T05:18:08.000Z | # -*- coding: utf-8 -*-
from functools import partial
from fastai_sparse.data import SparseDataBunch
merge_fn = partial(SparseDataBunch.merge_fn, keys_lists=['id', 'labels_raw', 'filtred_mask', 'random_seed', 'num_points'])
| 28.375 | 122 | 0.753304 |
b24d5ff4a2324937255e18e0f636457956239a07 | 1,749 | py | Python | plugins/okta/komand_okta/actions/reset_password/schema.py | lukaszlaszuk/insightconnect-plugins | 8c6ce323bfbb12c55f8b5a9c08975d25eb9f8892 | [
"MIT"
] | 46 | 2019-06-05T20:47:58.000Z | 2022-03-29T10:18:01.000Z | plugins/okta/komand_okta/actions/reset_password/schema.py | lukaszlaszuk/insightconnect-plugins | 8c6ce323bfbb12c55f8b5a9c08975d25eb9f8892 | [
"MIT"
] | 386 | 2019-06-07T20:20:39.000Z | 2022-03-30T17:35:01.000Z | plugins/okta/komand_okta/actions/reset_password/schema.py | lukaszlaszuk/insightconnect-plugins | 8c6ce323bfbb12c55f8b5a9c08975d25eb9f8892 | [
"MIT"
] | 43 | 2019-07-09T14:13:58.000Z | 2022-03-28T12:04:46.000Z | # GENERATED BY KOMAND SDK - DO NOT EDIT
import komand
import json
| 22.714286 | 184 | 0.606632 |
b24fa470c54ab2d92980faab3b5c114f1efa0392 | 151 | py | Python | Recursion/recursiopow.py | TheG0dfath3r/Python | 73f40e9828b953c3e614a21a8980eaa81b5c066e | [
"MIT"
] | null | null | null | Recursion/recursiopow.py | TheG0dfath3r/Python | 73f40e9828b953c3e614a21a8980eaa81b5c066e | [
"MIT"
] | null | null | null | Recursion/recursiopow.py | TheG0dfath3r/Python | 73f40e9828b953c3e614a21a8980eaa81b5c066e | [
"MIT"
] | 2 | 2019-09-30T21:17:57.000Z | 2019-10-01T16:23:33.000Z | x=int(input("no 1 "))
y=int(input("no 2 "))
print(pow(x,y))
| 16.777778 | 29 | 0.463576 |
b2517e50917150fdb0763470ea2ed80dc851178d | 1,800 | py | Python | scripts/egomotion_kitti_eval/old/generate_grid_search_validation_freak_stage2.py | bartn8/stereo-vision | 1180045fe560478e5c441e75202cc899fe90ec3d | [
"BSD-3-Clause"
] | 52 | 2016-04-02T18:18:48.000Z | 2022-02-14T11:47:58.000Z | scripts/egomotion_kitti_eval/old/generate_grid_search_validation_freak_stage2.py | bartn8/stereo-vision | 1180045fe560478e5c441e75202cc899fe90ec3d | [
"BSD-3-Clause"
] | 3 | 2016-08-01T14:36:44.000Z | 2021-02-14T08:15:50.000Z | scripts/egomotion_kitti_eval/old/generate_grid_search_validation_freak_stage2.py | bartn8/stereo-vision | 1180045fe560478e5c441e75202cc899fe90ec3d | [
"BSD-3-Clause"
] | 26 | 2016-08-25T11:28:05.000Z | 2022-02-18T12:17:47.000Z | #!/usr/bin/python
hamming_threshold = [50, 60]
pattern_scale = [4.0, 6.0, 8.0, 10.0]
fp_runscript = open("/mnt/ssd/kivan/cv-stereo/scripts/eval_batch/run_batch_validation.sh", 'w')
fp_runscript.write("#!/bin/bash\n\n")
cnt = 0
for i in range(len(hamming_threshold)):
for j in range(len(pattern_scale)):
cnt += 1
filepath = "/home/kivan/Projects/cv-stereo/config_files/experiments/kitti/validation_freak/freak_tracker_validation_stage2_" + str(cnt) + ".txt"
print(filepath)
fp = open(filepath, 'w')
fp.write("odometry_method = VisualOdometryRansac\n")
fp.write("use_deformation_field = false\n")
fp.write("ransac_iters = 1000\n\n")
fp.write("tracker = StereoTracker\n")
fp.write("max_disparity = 160\n")
fp.write("stereo_wsz = 15\n")
fp.write("ncc_threshold_s = 0.7\n\n")
fp.write("tracker_mono = TrackerBFMcv\n")
fp.write("max_features = 5000\n")
fp.write("search_wsz = 230\n\n")
fp.write("hamming_threshold = " + str(hamming_threshold[i]) + "\n\n")
fp.write("detector = FeatureDetectorHarrisFREAK\n")
fp.write("harris_block_sz = 3\n")
fp.write("harris_filter_sz = 1\n")
fp.write("harris_k = 0.04\n")
fp.write("harris_thr = 1e-06\n")
fp.write("harris_margin = 15\n\n")
fp.write("freak_norm_scale = false\n")
fp.write("freak_norm_orient = false\n")
fp.write("freak_pattern_scale = " + str(pattern_scale[j]) + "\n")
fp.write("freak_num_octaves = 0\n")
fp.write("use_bundle_adjustment = false")
fp.close()
fp_runscript.write('./run_kitti_evaluation_dinodas.sh "' + filepath + '"\n')
fp_runscript.close()
| 40 | 152 | 0.606111 |
b25199ace7d60d001d07006102f2cf38ff218d27 | 8,618 | py | Python | tensorflow_tts/processor/baker_online_tts.py | outman2008/TensorFlowTTS | 7e84f9d91fcfefc031c28df5203779af5614fe5e | [
"Apache-2.0"
] | null | null | null | tensorflow_tts/processor/baker_online_tts.py | outman2008/TensorFlowTTS | 7e84f9d91fcfefc031c28df5203779af5614fe5e | [
"Apache-2.0"
] | null | null | null | tensorflow_tts/processor/baker_online_tts.py | outman2008/TensorFlowTTS | 7e84f9d91fcfefc031c28df5203779af5614fe5e | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
# coding: utf-8
#
# python online_tts.py -client_secret=client_secret -client_id=client_secret -file_save_path=test.wav --text= --audiotype=6
from typing import TextIO
import requests
import json
import argparse
import os
import time
from g2p_en import G2p as grapheme_to_phn
import random
import soundfile as sf
import winsound
# access_token
#
#
train_f_name: str = "metadata.csv"
data_dir: str = "C:\\Users\\outman.t.yang\\Pictures\\baker_test\\new"
positions = {
"wave_file": 0,
"text": 1,
"text_norm": 2,
}
get_g2p = grapheme_to_phn()
# def create_wavs(access_token, args):
# file_list = os.listdir(data_dir)
# for file in file_list:
# fileName = os.path.splitext(file)
# if fileName[1] == '.txt':
# file_path = os.path.join(data_dir, file)
# # with open(file_path, encoding="utf-8") as ttf:
# # line = ttf.readline().strip()
# utt_id = fileName[0]
# wav_path = os.path.join(data_dir, "%s.wav" % utt_id)
# utt_id = utt_id.replace("LJ00", "2")
# utt_id = utt_id.replace("-", "")
# dstTxt = os.path.join(data_dir, "%s.txt" % utt_id)
# dstWav = os.path.join(data_dir, "%s.wav" % utt_id)
# os.rename(file_path, dstTxt)
# os.rename(wav_path, dstWav)
# print('create_items rename', utt_id)
# # #
# # audiotype = args.audiotype
# # domain = args.domain
# # language = args.language
# # voice_name = args.voice_name
# # data = {'access_domain': access_token, 'audiotype': audiotype, 'domain': domain, 'language': language,
# # 'voice_name': voice_name, 'text': line}
# # content = get_audio(data)
# # #
# # with open(wav_path, 'wb') as audio:
# # audio.write(content)
# # time.sleep(0.1)
# # print('create_items', utt_id)
charList = []
if __name__ == '__main__':
try:
# args = get_args()
# # create_items()
# # access_token
# # client_secret = args.client_secret
# # client_id = args.client_id
# # # print("running", args)
# # access_token = get_access_token(client_secret, client_id)
# # print("access_token", access_token)
# access_token = 'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOlsiKiJdLCJzY29wZSI6WyJ0dHMtb25lc2hvdCJdLCJleHAiOjE2Mzk0NjQwMzMsImF1dGhvcml0aWVzIjpbIioiXSwianRpIjoiNjk2MTM0NGItODMyZS00YWJkLTllNDgtMDVjOWJlNDU4YTRhIiwiY2xpZW50X2lkIjoiODRmM2RiYTZhNjliNDIwNzhmOWZlMTk0MmJhOGVjZjMifQ.uwdrR7TjZZjyO3VAb2FN4v_MJz8vCjcriIA3yLSGTHc'
# # #
# audiotype = args.audiotype
# domain = args.domain
# language = args.language
# voice_name = args.voice_name
# create_wavs(access_token, args)
# text = args.text
# data = {'access_domain': access_token, 'audiotype': audiotype, 'domain': domain, 'language': language,
# 'voice_name': voice_name, 'text': text}
# content = get_audio(data)
# #
# with open('test.wav', 'wb') as audio:
# audio.write(content)
# txt = get_phoneme_from_g2p_en("All prisoners passed their time in absolute idleness, or killed it by gambling and loose conversation.")
# print(txt)
audio_lst = ['200003', '200006', '200008']
audios = []
for word in audio_lst:
wav_path = os.path.join(data_dir, f"{word}.wav")
print(wav_path)
if os.path.exists(wav_path):
# with open(wav_path, 'rb') as audio:
audio, rate = sf.read(wav_path)
print(audio)
# winsound.PlaySound(audio.read(), winsound.SND_MEMORY)
audios.append(audio)
# winsound.PlaySound(audios, winsound.SND_MEMORY)
except Exception as e:
print(e)
| 35.759336 | 330 | 0.598863 |