blob_id stringlengths 40 40 | directory_id stringlengths 40 40 | path stringlengths 3 616 | content_id stringlengths 40 40 | detected_licenses listlengths 0 112 | license_type stringclasses 2 values | repo_name stringlengths 5 115 | snapshot_id stringlengths 40 40 | revision_id stringlengths 40 40 | branch_name stringclasses 777 values | visit_date timestamp[us]date 2015-08-06 10:31:46 2023-09-06 10:44:38 | revision_date timestamp[us]date 1970-01-01 02:38:32 2037-05-03 13:00:00 | committer_date timestamp[us]date 1970-01-01 02:38:32 2023-09-06 01:08:06 | github_id int64 4.92k 681M ⌀ | star_events_count int64 0 209k | fork_events_count int64 0 110k | gha_license_id stringclasses 22 values | gha_event_created_at timestamp[us]date 2012-06-04 01:52:49 2023-09-14 21:59:50 ⌀ | gha_created_at timestamp[us]date 2008-05-22 07:58:19 2023-08-21 12:35:19 ⌀ | gha_language stringclasses 149 values | src_encoding stringclasses 26 values | language stringclasses 1 value | is_vendor bool 2 classes | is_generated bool 2 classes | length_bytes int64 3 10.2M | extension stringclasses 188 values | content stringlengths 3 10.2M | authors listlengths 1 1 | author_id stringlengths 1 132 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
f4269983761d21127d14240f5c8bf6a09d96ad3c | 05c70f2396e81328b1e8e1155994fccd52104fad | /databricks/notebooks/tools/mlflow_http_client.py | 14e17b5444eeec41c8a515e81d1abbf48a21d165 | [] | no_license | amesar/mlflow-fun | 0e015189e546f39b730375b292288bac1210fb88 | 31b575b97329e78bd9b0c062a270e7375f10e170 | refs/heads/master | 2023-05-13T04:55:57.367744 | 2022-11-28T16:24:12 | 2022-11-28T16:24:12 | 142,216,836 | 27 | 15 | null | 2023-05-09T18:06:50 | 2018-07-24T21:56:27 | Python | UTF-8 | Python | false | false | 1,420 | py | # Databricks notebook source
# MAGIC %md ### MlflowHttpClient - Requests Client for MLflow REST API
# MAGIC * See: https://mlflow.org/docs/latest/rest-api.html
# MAGIC * See notebook [test_mlflow_http_client](https://demo.cloud.databricks.com/#notebook/3652184) for usage
# COMMAND ----------
import requests
class MlflowHttpClient(object):
def __init__(self):
self.token = dbutils.notebook.entry_point.getDbutils().notebook().getContext().apiToken().get()
host_name = dbutils.notebook.entry_point.getDbutils().notebook().getContext().tags().get("browserHostName").get()
self.base_uri = "https://{}/api/2.0/preview/mlflow".format(host_name)
def get(self, path):
uri = self.create_uri(path)
rsp = requests.get(uri, headers={'Authorization': 'Bearer '+self.token})
self.check_response(rsp, uri)
return rsp.text
def post(self, path,data):
uri = self.create_uri(path)
rsp = requests.post(self.create_uri(path), headers={'Authorization': 'Bearer '+self.token}, data=data)
self.check_response(rsp, uri)
return rsp.text
def create_uri(self, path):
return "{}/{}".format(self.base_uri,path)
def check_response(self, rsp, uri):
if rsp.status_code < 200 or rsp.status_code > 299:
raise Exception("HTTP status code: {} Reason: '{}' URL: {}".format(rsp.status_code,rsp.reason,uri)) | [
"amesar@users.noreply.github.com"
] | amesar@users.noreply.github.com |
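The record above defines a thin `requests` wrapper around the MLflow REST API that pulls its token and host from the Databricks `dbutils` context. Outside a notebook (where `dbutils` is unavailable), the same URI-building and status-checking logic can be sketched with the host and token passed in explicitly — the workspace host and endpoint below are placeholder assumptions, not values from the record:

```python
class SimpleMlflowHttpClient:
    """Standalone variant of the notebook's MlflowHttpClient: host/token are passed in."""

    def __init__(self, host_name, token):
        self.token = token
        self.base_uri = "https://{}/api/2.0/preview/mlflow".format(host_name)

    def create_uri(self, path):
        return "{}/{}".format(self.base_uri, path)

    def check_response(self, rsp, uri):
        # Mirror the notebook client: anything outside 2xx is an error
        if rsp.status_code < 200 or rsp.status_code > 299:
            raise Exception("HTTP status code: {} Reason: '{}' URL: {}".format(
                rsp.status_code, rsp.reason, uri))

    def get(self, path):
        import requests  # imported lazily; only needed when a request is actually made
        uri = self.create_uri(path)
        rsp = requests.get(uri, headers={"Authorization": "Bearer " + self.token})
        self.check_response(rsp, uri)
        return rsp.text

# Hypothetical usage (host and token are placeholders):
# client = SimpleMlflowHttpClient("my-workspace.cloud.databricks.com", "dapi-token")
# print(client.get("experiments/list"))
```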
2b766768a039858cfcc739797460163dfe150e89 | 1651184ccacf43c6a87864d5f0e4b4ea5453b98c | /backend/users/migrations/0002_auto_20201219_1700.py | d0e8017c1750c87d4aa5fd91ad4684584641a8a0 | [] | no_license | crowdbotics-apps/the-jumper-app-23425 | b20ef6908e2c4c2269dfba6d109d246044e10a08 | 470b864f8a8ae9638075c734709ae2b9aed1c7b4 | refs/heads/master | 2023-01-31T04:53:08.499998 | 2020-12-19T18:18:34 | 2020-12-19T18:18:34 | 322,364,725 | 0 | 0 | null | 2020-12-19T20:19:10 | 2020-12-17T17:22:56 | Python | UTF-8 | Python | false | false | 1,275 | py | # Generated by Django 2.2.17 on 2020-12-19 17:00
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("users", "0001_initial"),
]
operations = [
migrations.AddField(
model_name="user",
name="last_updated",
field=models.DateTimeField(auto_now=True, null=True),
),
migrations.AddField(
model_name="user",
name="timestamp_created",
field=models.DateTimeField(auto_now_add=True, null=True),
),
migrations.AlterField(
model_name="user",
name="email",
field=models.EmailField(blank=True, max_length=255, null=True),
),
migrations.AlterField(
model_name="user",
name="first_name",
field=models.CharField(blank=True, max_length=255, null=True),
),
migrations.AlterField(
model_name="user",
name="last_name",
field=models.CharField(blank=True, max_length=255, null=True),
),
migrations.AlterField(
model_name="user",
name="name",
field=models.CharField(blank=True, max_length=255, null=True),
),
]
| [
"team@crowdbotics.com"
] | team@crowdbotics.com |
18adf7ef8f6ea9f3c2adfaaf8f694d42fb2d2a72 | 3880497e60f93cec22b86e1d77cf68c6546b4c51 | /liyida2/settings.py | 53eb68e450a685fe0cc0eb04826a663bef63e6bc | [] | no_license | li-yi-da/liyida_blog | b63ac1bf2add1a6a7d4b2af0a8a2e07c3d5c89ea | 32306f84f45a4f633de2bef17621a4b09d3120f8 | refs/heads/master | 2020-05-16T03:39:22.664695 | 2019-04-26T09:57:40 | 2019-04-26T09:57:40 | 182,736,436 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,968 | py | """
Django settings for liyida2 project.
Generated by 'django-admin startproject' using Django 2.1.2.
For more information on this file, see
https://docs.djangoproject.com/en/2.1/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/2.1/ref/settings/
"""
import os
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/2.1/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = '^z$11%5(f#ktdyg^8c3qtkn_-rt73h7aoqaqdh=0jvx@_yh1zl'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = ['*']
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'app01',
'myblog',
'DjangoUeditor',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
# 'Middle.m1.R1',
# 'Middle.m1.R2',
# 'Middle.m1.R3',
]
ROOT_URLCONF = 'liyida2.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [os.path.join(BASE_DIR, 'templates')]
,
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'liyida2.wsgi.application'
# Database
# https://docs.djangoproject.com/en/2.1/ref/settings/#databases
# DATABASES = {
# 'default': {
# 'ENGINE': 'django.db.backends.sqlite3',
# 'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
# }
# }
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.mysql',
'NAME': 'liyida2',
'USER': 'root',
'PASSWORD': 'LIYIDafei103540',
'HOST': '45.76.66.211',
'PORT': '3306',
}
}
# Password validation
# https://docs.djangoproject.com/en/2.1/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/2.1/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/2.1/howto/static-files/
STATIC_URL = '/static/'
MEDIA_URL = '/media/'
MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
STATICFILES_DIRS = (
os.path.join(BASE_DIR,'static'),
# os.path.join(BASE_DIR,'media'),
)
STATIC_ROOT = 'all_static_files'
import sys
sys.path.insert(0, os.path.join(BASE_DIR, 'apps'))
sys.path.insert(0, os.path.join(BASE_DIR, 'extra_apps'))
| [
"root@vultr.guest"
] | root@vultr.guest |
6df7faaace8e0d5146130f9fa68b5334f410dd30 | 61296b98e4d481893db4bc51d75652c7109ae626 | /0000_examples/xym_rrtcplanning_exe.py | b5c6912ba577bb01987d0e8664f3507732e13f06 | [
"MIT"
] | permissive | Shogo-Hayakawa/wrs | 23d4560b1062cf103ed32db4b2ef1fc2261dd765 | 405f15be1a3f7740f3eb7d234d96998f6d057a54 | refs/heads/main | 2023-08-19T19:29:15.409949 | 2021-11-02T01:22:29 | 2021-11-02T01:22:29 | 423,663,614 | 0 | 0 | MIT | 2021-11-02T00:59:17 | 2021-11-02T00:59:17 | null | UTF-8 | Python | false | false | 2,732 | py | import math
import numpy as np
import visualization.panda.world as wd
import modeling.geometric_model as gm
import modeling.collision_model as cm
import robot_sim.robots.xarm7_shuidi_mobile.xarm7_shuidi_mobile as xav
import motion.probabilistic.rrt_connect as rrtc
import robot_con.xarm_shuidi.xarm.xarm_client as xac
base = wd.World(cam_pos=[3, 1, 2], lookat_pos=[0, 0, 0])
gm.gen_frame().attach_to(base)
# object
object = cm.CollisionModel("./objects/bunnysim.stl")
object.set_pos(np.array([.85, 0, .37]))
object.set_rgba([.5,.7,.3,1])
object.attach_to(base)
# robot_s
component_name='arm'
robot_s = xav.XArm7YunjiMobile()
robot_s.fk(component_name, np.array([0, math.pi * 2 / 3, 0, math.pi, 0, -math.pi / 6, 0]))
# robot_x
robot_x = xac.XArm7(host="192.168.50.77:18300")
init_jnt_angles = robot_x.get_jnt_vlaues()
print(init_jnt_angles)
rrtc_planner = rrtc.RRTConnect(robot_s)
path = rrtc_planner.plan(start_conf=init_jnt_angles,
# goal_conf=np.array([math.pi/3, math.pi * 1 / 3, 0, math.pi/2, 0, math.pi / 6, 0]),
goal_conf = robot_s.manipulator_dict['arm'].homeconf,
obstacle_list=[object],
ext_dist= .1,
max_time=300,
component_name=component_name)
robot_x.move_jspace_path(path, time_interval=.1)
# print(path)
for pose in path:
# print(pose)
robot_s.fk(component_name, pose)
robot_meshmodel = robot_s.gen_meshmodel()
robot_meshmodel.attach_to(base)
# robot_meshmodel.show_cdprimit()
robot_s.gen_stickmodel().attach_to(base)
# hol1
# robot_s.hold(object, jawwidth=.05)
# robot_s.fk(np.array([0, 0, 0, math.pi/6, math.pi * 2 / 3, 0, math.pi, 0, -math.pi / 6, math.pi/6]))
# robot_meshmodel = robot_s.gen_meshmodel()
# robot_meshmodel.attach_to(base)
# robot_s.show_cdprimit()
# tic = time.time()
# result = robot_s.is_collided() # problematic
# toc = time.time()
# print(result, toc - tic)
# base.run()
# release
# robot_s.release(object, jawwidth=.082)
# robot_s.fk(np.array([0, 0, 0, math.pi/3, math.pi * 2 / 3, 0, math.pi, 0, -math.pi / 6, math.pi/6]))
# robot_meshmodel = robot_s.gen_meshmodel()
# robot_meshmodel.attach_to(base)
# robot_meshmodel.show_cdprimit()
# tic = time.time()
# result = robot_s.is_collided()
# toc = time.time()
# print(result, toc - tic)
#copy
# robot_instance2 = robot_s.copy()
# robot_instance2.move_to(pos=np.array([.5,0,0]), rotmat=rm.rotmat_from_axangle([0,0,1], math.pi/6))
# objcm_list = robot_instance2.get_hold_objlist()
# robot_instance2.release(objcm_list[-1], jawwidth=.082)
# robot_meshmodel = robot_instance2.gen_meshmodel()
# robot_meshmodel.attach_to(base)
# robot_instance2.show_cdprimit()
base.run()
| [
"wanweiwei07@gmail.com"
] | wanweiwei07@gmail.com |
a1e0d8b350d7ea0d8ccb0adaee667c82278dcfde | 3a02bff6397eb23afd55cc17faf81c24a8751f2d | /fsoft/Week 1/B2/bai20.py | 78996fb509a0e64714b00daeed6da53583635b3e | [] | no_license | cothuyanninh/Python_Code | 909fd4d798cbd856e8993f9d4fea55b4b7c97a1f | 7f657db61845cf8c06725a2da067df526e696b93 | refs/heads/master | 2022-11-06T01:00:39.939194 | 2019-01-13T15:27:38 | 2019-01-13T15:27:38 | 164,468,626 | 0 | 1 | null | 2022-10-13T16:16:21 | 2019-01-07T17:40:51 | Python | UTF-8 | Python | false | false | 225 | py | ip_src = input("Type: ").split(".")
ip_new_list = [int(i) for i in ip_src]
print(ip_new_list)
# print("."join(str(i) for i in ip_new_list))
result = ""
for i in ip_new_list:
result += str(i)
result += "."
print(result[:-1]) | [
"cothuyanninh@gmail.com"
] | cothuyanninh@gmail.com |
7a3099b7410237ec975d64598d9a19e6bbc19740 | e0760295cc8221dff41af7e98fb49dd77a8fca1e | /test_product_of_array_except_self.py | b3526de15d4571ac8cb2f5f5b3b01d1af6372d91 | [
"MIT"
] | permissive | jaebradley/leetcode.py | 422dd89749482fd9e98530ca1141737a6cdbfca4 | b37b14f49b4b6ee9304a3956b3b52f30d22fac29 | refs/heads/master | 2023-01-24T08:32:11.954951 | 2023-01-18T13:21:56 | 2023-01-18T13:21:56 | 177,721,059 | 1 | 0 | MIT | 2021-07-23T03:52:32 | 2019-03-26T05:32:18 | Python | UTF-8 | Python | false | false | 358 | py | from unittest import TestCase
from product_of_array_except_self import Solution
class TestProductOfArrayExceptSelf(TestCase):
def test_values(self):
self.assertEqual(Solution().productExceptSelf([1, 2, 3, 4]), [24, 12, 8, 6])
def test_values_2(self):
self.assertEqual(Solution().productExceptSelf([2, 3, 4, 5]), [60, 40, 30, 24])
| [
"jae.b.bradley@gmail.com"
] | jae.b.bradley@gmail.com |
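The record above contains only the unit tests; the `Solution.productExceptSelf` implementation they import is not part of this record. A standard O(n) prefix/suffix-product sketch that satisfies these tests (the class and method names simply mirror what the tests expect) would be:

```python
class Solution:
    def productExceptSelf(self, nums):
        n = len(nums)
        result = [1] * n
        # Left pass: result[i] holds the product of everything before index i
        prefix = 1
        for i in range(n):
            result[i] = prefix
            prefix *= nums[i]
        # Right pass: fold in the product of everything after index i
        suffix = 1
        for i in range(n - 1, -1, -1):
            result[i] *= suffix
            suffix *= nums[i]
        return result
```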
613a1e04c91353377a76461f50ec234ef3598ccd | 472578974401c83509d81ea4d832fc3fd821f295 | /python资料/day8.7/day06/exercise03.py | 61083cd262387074b83c85661c8b524df53d0abd | [
"MIT"
] | permissive | why1679158278/python-stu | f038ec89e9c3c7cc80dc0ff83b76e7c3078e279e | 0d95451f17e1d583d460b3698047dbe1a6910703 | refs/heads/master | 2023-01-05T04:34:56.128363 | 2020-11-06T09:05:16 | 2020-11-06T09:05:16 | 298,263,579 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 427 | py | # Use list comprehensions
# Generate the numbers between 1 and 50 that are divisible by 3 or 5
# Generate the squares of the numbers between 5 and 100
# list_result = []
# for number in range(1, 51):
# if number % 3 == 0 or number % 5 == 0:
# list_result.append(number)
list_result = [number for number in range(1, 51) if number % 3 == 0 or number % 5 == 0]
print(list_result)
list_result = [number ** 2 for number in range(5, 101)]
print(list_result)
| [
"1679158278@qq.com"
] | 1679158278@qq.com |
05bfaa9628b66aca16f86bdaef755c910f795db5 | 9dd1703046eb90c23910795bc4b3851badc873bb | /blog_venv/bin/wheel | bd687b7af1f2db8155933ac7b136a8b3f55c7519 | [] | no_license | abhishek-verma/Blog_Website | d02c4e4a9646d9eed4d0cd5491f0383217df089e | dae6752e5cddcfda9d1cc75da3d422ea1fcfd1f6 | refs/heads/master | 2020-04-02T01:41:34.892147 | 2018-10-19T10:29:03 | 2018-10-19T10:29:03 | 153,869,900 | 1 | 0 | null | 2018-10-20T04:35:11 | 2018-10-20T04:35:11 | null | UTF-8 | Python | false | false | 237 | #!/home/nandini/stuff/blog_venv/bin/python3
# -*- coding: utf-8 -*-
import re
import sys
from wheel.cli import main
if __name__ == '__main__':
sys.argv[0] = re.sub(r'(-script\.pyw?|\.exe)?$', '', sys.argv[0])
sys.exit(main())
| [
"nandini.soni8@gmail.com"
] | nandini.soni8@gmail.com | |
09094452c5150588bd91de1f510c8944c0d702c4 | dfd1dba6fa990810b9609bd25a433c974ab2098d | /backend/purple_cloud_1/urls.py | c766a76997628b9a2824319a8a1a930b4daed168 | [] | no_license | saaaab1213/purple-cloud-1 | fbc54e73e2ba5bf6ed1db3acc9d2357031985343 | 10651eaf6ec13a656ed456ab9bfe504691e1d49d | refs/heads/master | 2023-02-08T02:46:46.009493 | 2021-01-05T09:54:22 | 2021-01-05T09:54:22 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,987 | py | """purple_cloud_1 URL Configuration
The `urlpatterns` list routes URLs to views. For more information please see:
https://docs.djangoproject.com/en/2.2/topics/http/urls/
Examples:
Function views
1. Add an import: from my_app import views
2. Add a URL to urlpatterns: path('', views.home, name='home')
Class-based views
1. Add an import: from other_app.views import Home
2. Add a URL to urlpatterns: path('', Home.as_view(), name='home')
Including another URLconf
1. Import the include() function: from django.urls import include, path
2. Add a URL to urlpatterns: path('blog/', include('blog.urls'))
"""
from django.contrib import admin
from django.urls import path, include
from allauth.account.views import confirm_email
from rest_framework import permissions
from drf_yasg.views import get_schema_view
from drf_yasg import openapi
urlpatterns = [
path("", include("home.urls")),
path("accounts/", include("allauth.urls")),
path("modules/", include("modules.urls")),
path("api/v1/", include("home.api.v1.urls")),
path("admin/", admin.site.urls),
path("users/", include("users.urls", namespace="users")),
path("rest-auth/", include("rest_auth.urls")),
# Override email confirm to use allauth's HTML view instead of rest_auth's API view
path("rest-auth/registration/account-confirm-email/<str:key>/", confirm_email),
path("rest-auth/registration/", include("rest_auth.registration.urls")),
]
admin.site.site_header = "Purple Cloud"
admin.site.site_title = "Purple Cloud Admin Portal"
admin.site.index_title = "Purple Cloud Admin"
# swagger
api_info = openapi.Info(
title="Purple Cloud API",
default_version="v1",
description="API documentation for Purple Cloud App",
)
schema_view = get_schema_view(
api_info,
public=True,
permission_classes=(permissions.IsAuthenticated,),
)
urlpatterns += [
path("api-docs/", schema_view.with_ui("swagger", cache_timeout=0), name="api_docs")
]
| [
"lorence@crowdbotics.com"
] | lorence@crowdbotics.com |
132ec6ef5bb76e2e07bc3e2a9960380a2a326fb4 | 88064e96a4ce3aaa472222c8f294799655e923b8 | /lesson07/exercise4.py | b8277aa2405a2c1a73075dafeb8f5860d524cbe7 | [] | no_license | manosxatz/python | 52d91b42dcd3e516296811a036f23a988542008d | 789f2306f7c6c1ad798c228000bb0d49e99c9629 | refs/heads/master | 2022-11-09T18:46:54.002464 | 2020-06-24T16:57:19 | 2020-06-24T16:57:19 | 275,238,123 | 1 | 0 | null | 2020-06-26T20:05:26 | 2020-06-26T20:05:26 | null | UTF-8 | Python | false | false | 929 | py | from random import seed
from random import randrange

seed()  # seed once, before any randrange call; with no argument, Python seeds from system time/entropy
N = 30
pupils = set()
for number in range(N):
pupils.add("pupil" + str(number))
list_pupils = list(pupils)
math_teams = set()
for _ in range(N//2):
pos1 = randrange(0, len(list_pupils))
pupil1 = list_pupils.pop(pos1)
pos2 = randrange(0, len(list_pupils))
pupil2 = list_pupils.pop(pos2)
team = (pupil1, pupil2)
math_teams.add(team)
print("Math teams: " + str(math_teams))
list_pupils = list(pupils)
geography_teams = set()
for _ in range(N//2):
pos1 = randrange(0, len(list_pupils))
pupil1 = list_pupils.pop(pos1)
pos2 = randrange(0, len(list_pupils))
pupil2 = list_pupils.pop(pos2)
team = (pupil1, pupil2)
geography_teams.add(team)
print("Geography teams: " + str(geography_teams)) | [
"noreply@github.com"
] | manosxatz.noreply@github.com |
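The pairing loops in the record above draw two pupils at random positions and pop them one at a time. An equivalent and shorter approach — shuffle once, then take consecutive pairs — can be sketched as follows (the function name is illustrative, not from the original):

```python
import random

def random_teams(pupils):
    """Partition an even-sized collection into disjoint random pairs."""
    shuffled = list(pupils)
    random.shuffle(shuffled)
    # Consecutive elements of the shuffled list form the teams
    return {(shuffled[i], shuffled[i + 1]) for i in range(0, len(shuffled), 2)}

# math_teams = random_teams({"pupil" + str(n) for n in range(30)})
```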
b0cef5e31db47df5b9a9610be799ecda16d177bd | 7805134ab326271dfceccdfe29cdbc2f85a40e85 | /ncarrara/budgeted_rl/main/egreedy/learn_bftq_egreedy.py | 2b34fca76ddb4f464adebb8f9b9b17620944903f | [] | no_license | ncarrara/budgeted-rl | 9248c9a206bfa2c6c588e9cde0219f443922e3f7 | b588361a263022eb624fe83e8b16abac4e68e33e | refs/heads/master | 2020-08-18T18:35:11.731139 | 2019-10-17T15:35:08 | 2019-10-17T15:35:08 | 215,821,809 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 9,765 | py | # coding=utf-8
from multiprocessing.pool import Pool
import matplotlib.pyplot as plt
from ncarrara.budgeted_rl.bftq.pytorch_budgeted_fittedq import PytorchBudgetedFittedQ, NetBFTQ
from ncarrara.budgeted_rl.tools.features import feature_factory
from ncarrara.budgeted_rl.tools.utils_run import execute_policy_from_config, datas_to_transitions
from ncarrara.utils import math_utils
from ncarrara.utils.math_utils import set_seed, near_split, zip_with_singletons
from ncarrara.utils.os import makedirs
from ncarrara.utils.torch_utils import get_memory_for_pid
from ncarrara.utils_rl.environments import envs_factory
from ncarrara.utils_rl.environments.envs_factory import get_actions_str
from ncarrara.utils_rl.environments.gridworld.envgridworld import EnvGridWorld
from ncarrara.utils_rl.environments.gridworld.world import World
from ncarrara.utils_rl.transition.replay_memory import Memory
from ncarrara.budgeted_rl.tools.policies import RandomBudgetedPolicy, PytorchBudgetedFittedPolicy
from ncarrara.budgeted_rl.tools.policies import EpsilonGreedyPolicy
import numpy as np
import logging
import os
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
def main(generate_envs, feature_str, betas_for_exploration, gamma, gamma_c, bftq_params, bftq_net_params, N_trajs,
workspace, seed, device, normalize_reward, trajs_by_ftq_batch, epsilon_decay, general, **args):
# Prepare BFTQ
envs, params = envs_factory.generate_envs(**generate_envs)
e = envs[0]
set_seed(seed, e)
rm = Memory()
feature = feature_factory(feature_str)
def build_fresh_bftq():
bftq = PytorchBudgetedFittedQ(
device=device,
workspace=workspace / "batch=0",
actions_str=get_actions_str(e),
policy_network=NetBFTQ(size_state=len(feature(e.reset(), e)), n_actions=e.action_space.n,
**bftq_net_params),
gamma=gamma,
gamma_c=gamma_c,
cpu_processes=general["cpu"]["processes"],
env=e,
split_batches=general["gpu"]["split_batches"],
hull_options=general["hull_options"],
**bftq_params)
return bftq
# Prepare learning
i_traj = 0
decays = math_utils.epsilon_decay(**epsilon_decay, N=N_trajs, savepath=workspace)
betas_for_exploration = np.array(eval(betas_for_exploration))
memory_by_batch = [get_current_memory()]
batch_sizes = near_split(N_trajs, size_bins=trajs_by_ftq_batch)
pi_epsilon_greedy_config = {
"__class__": repr(EpsilonGreedyPolicy),
"pi_greedy": {"__class__": repr(RandomBudgetedPolicy)},
"pi_random": {"__class__": repr(RandomBudgetedPolicy)},
"epsilon": decays[0],
"hull_options": general["hull_options"],
"clamp_Qc": bftq_params["clamp_Qc"]
}
# Main loop
trajs = []
for batch, batch_size in enumerate(batch_sizes):
# Prepare workers
cpu_processes = min(general["cpu"]["processes_when_linked_with_gpu"] or os.cpu_count(), batch_size)
workers_n_trajectories = near_split(batch_size, cpu_processes)
workers_start = np.cumsum(workers_n_trajectories)
workers_traj_indexes = [np.arange(*times) for times in zip(np.insert(workers_start[:-1], 0, 0), workers_start)]
if betas_for_exploration.size:
workers_betas = [betas_for_exploration.take(indexes, mode='wrap') for indexes in workers_traj_indexes]
else:
workers_betas = [np.random.random(indexes.size) for indexes in workers_traj_indexes]
workers_seeds = np.random.randint(0, 10000, cpu_processes).tolist()
workers_epsilons = [decays[i_traj + indexes] for indexes in workers_traj_indexes]
workers_params = list(zip_with_singletons(
generate_envs, pi_epsilon_greedy_config, workers_seeds, gamma, gamma_c, workers_n_trajectories,
workers_betas, workers_epsilons, None, general["dictConfig"]))
# Collect trajectories
logger.info("Collecting trajectories with {} workers...".format(cpu_processes))
if cpu_processes == 1:
results = []
for params in workers_params:
results.append(execute_policy_from_config(*params))
else:
with Pool(processes=cpu_processes) as pool:
results = pool.starmap(execute_policy_from_config, workers_params)
i_traj += sum([len(trajectories) for trajectories, _ in results])
# Fill memory
[rm.push(*sample) for trajectories, _ in results for trajectory in trajectories for sample in trajectory]
transitions_ftq, transition_bftq = datas_to_transitions(rm.memory, e, feature, 0, normalize_reward)
# Fit model
logger.info("[BATCH={}]---------------------------------------".format(batch))
logger.info("[BATCH={}][learning bftq pi greedy] #samples={} #traj={}"
.format(batch, len(transition_bftq), i_traj))
logger.info("[BATCH={}]---------------------------------------".format(batch))
bftq = build_fresh_bftq()
bftq.reset(True)
bftq.workspace = workspace / "batch={}".format(batch)
makedirs(bftq.workspace)
if isinstance(e, EnvGridWorld):
for trajectories, _ in results:
for traj in trajectories:
trajs.append(traj)
w = World(e)
w.draw_frame()
w.draw_lattice()
w.draw_cases()
w.draw_source_trajectories(trajs)
w.save((bftq.workspace / "bftq_on_2dworld_sources").as_posix())
q = bftq.fit(transition_bftq)
# Save policy
network_path = bftq.save_policy()
os.system("cp {}/policy.pt {}/policy.pt".format(bftq.workspace, workspace))
# Save memory
save_memory(bftq, memory_by_batch, by_batch=False)
# Update greedy policy
pi_epsilon_greedy_config["pi_greedy"] = {
"__class__": repr(PytorchBudgetedFittedPolicy),
"feature_str": feature_str,
"network_path": network_path,
"betas_for_discretisation": bftq.betas_for_discretisation,
"device": bftq.device,
"hull_options": general["hull_options"],
"clamp_Qc": bftq_params["clamp_Qc"]
}
if isinstance(e, EnvGridWorld):
def pi(state, beta):
import torch
from ncarrara.budgeted_rl.bftq.pytorch_budgeted_fittedq import convex_hull, \
optimal_pia_pib
with torch.no_grad():
hull = convex_hull(s=torch.tensor([state], device=device, dtype=torch.float32),
Q=q,
action_mask=np.zeros(e.action_space.n),
id="run_" + str(state), disp=False,
betas=bftq.betas_for_discretisation,
device=device,
hull_options=general["hull_options"],
clamp_Qc=bftq_params["clamp_Qc"])
opt, _ = optimal_pia_pib(beta=beta, hull=hull, statistic={})
return opt
def qr(state, a, beta):
import torch
s = torch.tensor([[state]], device=device)
b = torch.tensor([[[beta]]], device=device)
sb = torch.cat((s, b), dim=2)
return q(sb).squeeze()[a]
def qc(state, a, beta):
import torch
s = torch.tensor([[state]], device=device)
b = torch.tensor([[[beta]]], device=device)
sb = torch.cat((s, b), dim=2)
return q(sb).squeeze()[e.action_space.n + a]
w = World(e, bftq.betas_for_discretisation)
w.draw_frame()
w.draw_lattice()
w.draw_cases()
w.draw_policy_bftq(pi, qr, qc, bftq.betas_for_discretisation)
w.save((bftq.workspace / "bftq_on_2dworld").as_posix())
save_memory(bftq, memory_by_batch, by_batch=True)
def save_memory(bftq, memory_by_batch, by_batch=False):
if not by_batch:
plt.rcParams["figure.figsize"] = (30, 5)
plt.grid()
y_mem = np.asarray(bftq.memory_tracking)[:, 1]
y_mem = [int(mem) for mem in y_mem]
plt.plot(range(len(y_mem)), y_mem)
props = {'ha': 'center', 'va': 'center', 'bbox': {'fc': '0.8', 'pad': 0}}
for i, couple in enumerate(bftq.memory_tracking):
id, memory = couple
plt.scatter(i, memory, s=25)
plt.text(i, memory, id, props, rotation=90)
plt.savefig(bftq.workspace / "memory_tracking.png")
plt.rcParams["figure.figsize"] = (5, 5)
plt.close()
memory_by_batch.append(get_current_memory())
else:
plt.plot(range(len(memory_by_batch)), memory_by_batch)
plt.grid()
plt.title("memory_by_batch")
plt.savefig(bftq.workspace / "memory_by_batch.png")
plt.close()
def get_current_memory():
return sum(get_memory_for_pid(os.getpid()))
if __name__ == "__main__":
import sys
if len(sys.argv) > 2:
config_file = sys.argv[1]
force = bool(sys.argv[2])
else:
config_file = "../config/test_egreedy.json"
force = True
from ncarrara.budgeted_rl.tools.configuration import C
C.load(config_file).create_fresh_workspace(force=force).load_pytorch().load_matplotlib('agg')
main(device=C.device,
seed=C.seed,
workspace=C.path_learn_bftq_egreedy,
**C.dict["learn_bftq_egreedy"],
**C.dict)
| [
"nicolas.carrara1u@gmail.com"
] | nicolas.carrara1u@gmail.com |
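The training loop in the record above relies on `near_split` (imported from `ncarrara.utils.math_utils`) to cut `N_trajs` trajectories into batches and to distribute a batch across workers. That helper is not shown in this record; a plausible stand-in consistent with how it is called — `size_bins` caps the size of each bin, `num_bins` fixes the bin count — could look like this (an assumption about the real implementation):

```python
def near_split(x, num_bins=None, size_bins=None):
    """Split integer x into a list of near-equal integer bins.

    Exactly one of num_bins / size_bins should be given:
    - num_bins: produce that many bins whose sizes differ by at most 1;
    - size_bins: produce ceil(x / size_bins) bins, each of size at most size_bins.
    """
    if size_bins is not None:
        return near_split(x, num_bins=-(-x // size_bins))  # ceiling division
    if num_bins is not None:
        quotient, remainder = divmod(x, num_bins)
        return [quotient + 1] * remainder + [quotient] * (num_bins - remainder)
    raise ValueError("one of num_bins / size_bins is required")
```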
f1b5ec65d65f614da16fa1d29140d2ba61bc3f54 | 6527b66fd08d9e7f833973adf421faccd8b765f5 | /yuancloud/addons/report_webkit/__yuancloud__.py | 4dff514cfa8d054469a394cd8baa16fae6ecef04 | [] | no_license | cash2one/yuancloud | 9a41933514e57167afb70cb5daba7f352673fb4d | 5a4fd72991c846d5cb7c5082f6bdfef5b2bca572 | refs/heads/master | 2021-06-19T22:11:08.260079 | 2017-06-29T06:26:15 | 2017-06-29T06:26:15 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,700 | py | # -*- coding: utf-8 -*-
# Part of YuanCloud. See LICENSE file for full copyright and licensing details.
# Copyright (c) 2010 Camptocamp SA (http://www.camptocamp.com)
# Author : Nicolas Bessi (Camptocamp)
{
'name': 'Webkit Report Engine',
'description': """
This module adds a new Report Engine based on the WebKit library (wkhtmltopdf) to support reports designed in HTML + CSS.
=====================================================================================================================
The module structure and some code is inspired by the report_openoffice module.
The module allows:
------------------
- HTML report definition
- Multi header support
- Multi logo
- Multi company support
- HTML and CSS-3 support (In the limit of the actual WebKIT version)
- JavaScript support
- Raw HTML debugger
- Book printing capabilities
- Margins definition
- Paper size definition
Multiple headers and logos can be defined per company. CSS style, header and
footer body are defined per company.
For a sample report see also the webkit_report_sample module, and this video:
http://files.me.com/nbessi/06n92k.mov
Requirements and Installation:
------------------------------
This module requires the ``wkhtmltopdf`` library to render HTML documents as
PDF. Version 0.9.9 or later is necessary, and can be found at
http://code.google.com/p/wkhtmltopdf/ for Linux, Mac OS X (i386) and Windows (32bits).
After installing the library on the YuanCloud Server machine, you may need to set
the path to the ``wkhtmltopdf`` executable file in a system parameter named
``webkit_path`` in Settings -> Customization -> Parameters -> System Parameters
If you are experiencing missing header/footer problems on Linux, be sure to
install a 'static' version of the library. The default ``wkhtmltopdf`` on
Ubuntu is known to have this issue.
TODO:
-----
* JavaScript support activation deactivation
* Collated and book format support
* Zip return for separated PDF
* Web client WYSIWYG
""",
'version': '0.9',
'depends': ['base','report'],
'author': '北京山水物源科技有限公司',
'category' : 'Tools', # i.e a technical module, not shown in Application install menu
'url': 'http://www.yuancloud.cn/page/tools',
'data': [ 'security/ir.model.access.csv',
'data.xml',
'wizard/report_webkit_actions_view.xml',
'company_view.xml',
'header_view.xml',
'ir_report_view.xml',
],
'demo': [
"report/webkit_report_demo.xml",
],
'test': [
"test/print.yml",
],
'installable': True,
'auto_install': False,
}
| [
"liuganghao@lztogether.com"
] | liuganghao@lztogether.com |
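The manifest above requires wkhtmltopdf 0.9.9 or later and a `webkit_path` system parameter pointing at the executable. A small, dependency-free sketch of the version gate an installer might perform is shown below — the parsed `wkhtmltopdf X.Y.Z` output format and the function name are assumptions, not part of the module:

```python
import re

def wkhtmltopdf_is_supported(version_output, minimum=(0, 9, 9)):
    """Parse 'wkhtmltopdf X.Y.Z ...' output and compare against a minimum version."""
    match = re.search(r"(\d+)\.(\d+)\.(\d+)", version_output)
    if not match:
        return False
    version = tuple(int(part) for part in match.groups())
    return version >= minimum

# Hypothetical usage, feeding in the output of `wkhtmltopdf --version`:
# ok = wkhtmltopdf_is_supported("wkhtmltopdf 0.12.6 (with patched qt)")
```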
d216baf5afc15e5e4d3d49a521a2e2b3a26e18d2 | 9dde5311a5fe0357995a737eb8bc9b54a5cc21d8 | /betago/processor.py | 1fee7dd7acbbda0d32df2d67e948e987fa3f981f | [
"MIT"
] | permissive | maxpumperla/betago | 08163cbc5a61d4c5e19fd59adc4299a19427ec90 | ff06b467e16d7a7a22555d14181b723d853e1a70 | refs/heads/master | 2023-08-21T14:48:53.854515 | 2020-12-22T08:33:54 | 2020-12-22T08:33:54 | 56,266,535 | 747 | 192 | MIT | 2019-11-18T09:45:36 | 2016-04-14T20:02:28 | Python | UTF-8 | Python | false | false | 7,497 | py | from __future__ import absolute_import
import numpy as np
from .dataloader.base_processor import GoDataProcessor, GoFileProcessor
from six.moves import range
class SevenPlaneProcessor(GoDataProcessor):
'''
Implementation of a Go data processor, using seven planes of 19x19 values to represent the position of
a go board, as explained below.
This closely reflects the representation suggested in Clark, Storkey:
http://arxiv.org/abs/1412.3409
'''
def __init__(self, data_directory='data', num_planes=7, consolidate=True, use_generator=False):
super(SevenPlaneProcessor, self).__init__(data_directory=data_directory,
num_planes=num_planes,
consolidate=consolidate,
use_generator=use_generator)
def feature_and_label(self, color, move, go_board, num_planes):
'''
Parameters
----------
color: color of the next person to move
move: move they decided to make
go_board: represents the state of the board before they moved
Planes we write:
        0: our stones with 1 liberty
        1: our stones with 2 liberties
        2: our stones with 3 or more liberties
        3: their stones with 1 liberty
        4: their stones with 2 liberties
        5: their stones with 3 or more liberties
        6: simple ko
'''
row, col = move
enemy_color = go_board.other_color(color)
label = row * 19 + col
move_array = np.zeros((num_planes, go_board.board_size, go_board.board_size))
for row in range(0, go_board.board_size):
for col in range(0, go_board.board_size):
pos = (row, col)
if go_board.board.get(pos) == color:
if go_board.go_strings[pos].liberties.size() == 1:
move_array[0, row, col] = 1
elif go_board.go_strings[pos].liberties.size() == 2:
move_array[1, row, col] = 1
elif go_board.go_strings[pos].liberties.size() >= 3:
move_array[2, row, col] = 1
if go_board.board.get(pos) == enemy_color:
if go_board.go_strings[pos].liberties.size() == 1:
move_array[3, row, col] = 1
elif go_board.go_strings[pos].liberties.size() == 2:
move_array[4, row, col] = 1
elif go_board.go_strings[pos].liberties.size() >= 3:
move_array[5, row, col] = 1
if go_board.is_simple_ko(color, pos):
move_array[6, row, col] = 1
return move_array, label
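The if/elif ladders above implement a fixed mapping from a stone's liberty count to a feature plane: planes 0-2 for our stones with 1, 2, and 3+ liberties, planes 3-5 for enemy stones. That mapping can be sketched standalone (the helper names below are illustrative, not betago API):

```python
import numpy as np

def plane_index(liberties, is_enemy):
    """Plane a stone is written into: 0/1/2 for our stones with
    1/2/3+ liberties, 3/4/5 for enemy stones (ko plane 6 is separate)."""
    base = 3 if is_enemy else 0
    return base + min(liberties, 3) - 1

def encode(stones, board_size=19, num_planes=7):
    """stones: iterable of (row, col, liberties, is_enemy) tuples."""
    planes = np.zeros((num_planes, board_size, board_size))
    for row, col, libs, enemy in stones:
        planes[plane_index(libs, enemy), row, col] = 1
    return planes
```

Only the liberty bucketing is compressed here; the real method derives liberties from `go_board.go_strings`.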
class ThreePlaneProcessor(GoDataProcessor):
'''
Simpler version of the above processor using just three planes. This data processor uses one plane for
stone positions of each color and one for ko.
'''
def __init__(self, data_directory='data', num_planes=3, consolidate=True, use_generator=False):
super(ThreePlaneProcessor, self).__init__(data_directory=data_directory,
num_planes=num_planes,
consolidate=consolidate,
use_generator=use_generator)
def feature_and_label(self, color, move, go_board, num_planes):
'''
Parameters
----------
color: color of the next person to move
move: move they decided to make
go_board: represents the state of the board before they moved
Planes we write:
0: our stones
1: their stones
2: ko
'''
row, col = move
enemy_color = go_board.other_color(color)
label = row * 19 + col
move_array = np.zeros((num_planes, go_board.board_size, go_board.board_size))
for row in range(0, go_board.board_size):
for col in range(0, go_board.board_size):
pos = (row, col)
if go_board.board.get(pos) == color:
move_array[0, row, col] = 1
if go_board.board.get(pos) == enemy_color:
move_array[1, row, col] = 1
if go_board.is_simple_ko(color, pos):
move_array[2, row, col] = 1
return move_array, label
class SevenPlaneFileProcessor(GoFileProcessor):
'''
File processor corresponding to the above data processor. Loading all available data into memory is simply
not feasible, and this class allows preprocessing into an efficient, binary format.
'''
def __init__(self, data_directory='data', num_planes=7, consolidate=True):
super(SevenPlaneFileProcessor, self).__init__(data_directory=data_directory,
num_planes=num_planes, consolidate=consolidate)
def store_results(self, data_file, color, move, go_board):
'''
Parameters
----------
color: color of the next person to move
move: move they decided to make
go_board: represents the state of the board before they moved
Planes we write:
        0: our stones with 1 liberty
        1: our stones with 2 liberties
        2: our stones with 3 or more liberties
        3: their stones with 1 liberty
        4: their stones with 2 liberties
        5: their stones with 3 or more liberties
        6: simple ko
'''
row, col = move
enemy_color = go_board.other_color(color)
data_file.write('GO')
label = row * 19 + col
data_file.write(chr(label % 256))
data_file.write(chr(label // 256))
data_file.write(chr(0))
data_file.write(chr(0))
thisbyte = 0
thisbitpos = 0
for plane in range(0, 7):
for row in range(0, go_board.board_size):
for col in range(0, go_board.board_size):
thisbit = 0
pos = (row, col)
if go_board.board.get(pos) == color:
if plane == 0 and go_board.go_strings[pos].liberties.size() == 1:
thisbit = 1
elif plane == 1 and go_board.go_strings[pos].liberties.size() == 2:
thisbit = 1
elif plane == 2 and go_board.go_strings[pos].liberties.size() >= 3:
thisbit = 1
if go_board.board.get(pos) == enemy_color:
if plane == 3 and go_board.go_strings[pos].liberties.size() == 1:
thisbit = 1
elif plane == 4 and go_board.go_strings[pos].liberties.size() == 2:
thisbit = 1
elif plane == 5 and go_board.go_strings[pos].liberties.size() >= 3:
thisbit = 1
if plane == 6 and go_board.is_simple_ko(color, pos):
thisbit = 1
thisbyte = thisbyte + (thisbit << (7 - thisbitpos))
thisbitpos = thisbitpos + 1
if thisbitpos == 8:
data_file.write(chr(thisbyte))
thisbitpos = 0
thisbyte = 0
if thisbitpos != 0:
data_file.write(chr(thisbyte))
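store_results streams each plane out as a packed bitfield: bits accumulate MSB-first into `thisbyte`, a byte is flushed every 8 bits, and a trailing partial byte is zero-padded. The packing logic in isolation (an illustrative helper, not part of GoFileProcessor):

```python
def pack_bits(bits):
    """Pack an iterable of 0/1 bits into bytes, MSB first, zero-padding
    the final byte; mirrors the thisbyte/thisbitpos loop above."""
    out = bytearray()
    byte = 0
    pos = 0
    for bit in bits:
        byte += bit << (7 - pos)
        pos += 1
        if pos == 8:          # flush a completed byte
            out.append(byte)
            byte = 0
            pos = 0
    if pos != 0:              # pad and flush the leftover bits
        out.append(byte)
    return bytes(out)
```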
| [
"max.pumperla@googlemail.com"
] | max.pumperla@googlemail.com |
e42f78d219dc229a17fb5870bdb01f72df7e0996 | ca7aa979e7059467e158830b76673f5b77a0f5a3 | /Python_codes/p03105/s666582429.py | ab016687c44a5c29f7baead18b3f59c2b7730337 | [] | no_license | Aasthaengg/IBMdataset | 7abb6cbcc4fb03ef5ca68ac64ba460c4a64f8901 | f33f1c5c3b16d0ea8d1f5a7d479ad288bb3f48d8 | refs/heads/main | 2023-04-22T10:22:44.763102 | 2021-05-13T17:27:22 | 2021-05-13T17:27:22 | 367,112,348 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 815 | py | #from statistics import median
#import collections
#aa = collections.Counter(a) # list to list || .most_common(2) pulls out the top 2 entries, e.g. a[0][0]
from fractions import gcd
from itertools import combinations,permutations,accumulate, product # (string,3) 3回
#from collections import deque
from collections import deque,defaultdict,Counter
import decimal
import re
#import bisect
#
# d = m - k[i] - k[j]
# if kk[bisect.bisect_right(kk,d) - 1] == d:
#
#
#
# If it's too slow in Python, submitting with PyPy might pass!!
#
#
# my_round_int = lambda x:np.round((x*2 + 1)//2)
# 四捨五入g
import sys
sys.setrecursionlimit(10000000)
mod = 10**9 + 7
#mod = 9982443453
def readInts():
return list(map(int,input().split()))
def I():
return int(input())
a,b,c = readInts()
print(min(c, b//a))
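The one-liner caps the answer both by c and by how many whole a's fit into b. Named out as a function (the roles of a, b, and c are my reading of the input format, since the problem statement is not in the file):

```python
def answer(a, b, c):
    # b // a whole units of size a fit into b; no more than c are available
    return min(c, b // a)
```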
| [
"66529651+Aastha2104@users.noreply.github.com"
] | 66529651+Aastha2104@users.noreply.github.com |
50de8f8f99b8bc3466cf24ce85afa5ad4e51cfdb | e232de1f42a922dc0c94d889c1f72d4f66b325d6 | /genfiles/proto/rec/gui/proto/Resizer_pb2.py | 065e69fbf5842f8fa39226a86cfa8d2e31a3ed35 | [] | no_license | rec/slow_gold | b19fcd684e469978bf20cd0638fa83786fc5ffae | f4551785cf7f9cf45605a850d013eef5d80f4ea6 | refs/heads/master | 2022-10-20T06:37:00.370927 | 2017-02-04T11:39:59 | 2017-02-04T11:39:59 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | true | 1,563 | py | # Generated by the protocol buffer compiler. DO NOT EDIT!
# source: rec/gui/proto/Resizer.proto
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import descriptor_pb2
# @@protoc_insertion_point(imports)
DESCRIPTOR = _descriptor.FileDescriptor(
name='rec/gui/proto/Resizer.proto',
package='rec.gui',
serialized_pb='\n\x1brec/gui/proto/Resizer.proto\x12\x07rec.gui\"!\n\x0cResizerProto\x12\x11\n\tmin_value\x18\x01 \x01(\t')
_RESIZERPROTO = _descriptor.Descriptor(
name='ResizerProto',
full_name='rec.gui.ResizerProto',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='min_value', full_name='rec.gui.ResizerProto.min_value', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=unicode("", "utf-8"),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
extension_ranges=[],
serialized_start=40,
serialized_end=73,
)
DESCRIPTOR.message_types_by_name['ResizerProto'] = _RESIZERPROTO
class ResizerProto(_message.Message):
__metaclass__ = _reflection.GeneratedProtocolMessageType
DESCRIPTOR = _RESIZERPROTO
# @@protoc_insertion_point(class_scope:rec.gui.ResizerProto)
# @@protoc_insertion_point(module_scope)
| [
"tom@swirly.com"
] | tom@swirly.com |
4203d1a854a8dd37fdb9eda8a1e7cac35ecb4cee | b162de01d1ca9a8a2a720e877961a3c85c9a1c1c | /165.compare-version-numbers.python3.py | 87a07d78743ccbd88f763f85f7771909751a5969 | [] | no_license | richnakasato/lc | 91d5ff40a1a3970856c76c1a53d7b21d88a3429c | f55a2decefcf075914ead4d9649d514209d17a34 | refs/heads/master | 2023-01-19T09:55:08.040324 | 2020-11-19T03:13:51 | 2020-11-19T03:13:51 | 114,937,686 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,153 | py | #
# [165] Compare Version Numbers
#
# https://leetcode.com/problems/compare-version-numbers/description/
#
# algorithms
# Medium (22.00%)
# Total Accepted: 118.9K
# Total Submissions: 540.6K
# Testcase Example: '"0.1"\n"1.1"'
#
# Compare two version numbers version1 and version2.
# If version1 > version2 return 1; if version1 < version2 return -1;otherwise
# return 0.
#
# You may assume that the version strings are non-empty and contain only digits
# and the . character.
# The . character does not represent a decimal point and is used to separate
# number sequences.
# For instance, 2.5 is not "two and a half" or "half way to version three", it
# is the fifth second-level revision of the second first-level revision.
#
# Example 1:
#
#
# Input: version1 = "0.1", version2 = "1.1"
# Output: -1
#
# Example 2:
#
#
# Input: version1 = "1.0.1", version2 = "1"
# Output: 1
#
# Example 3:
#
#
# Input: version1 = "7.5.2.4", version2 = "7.5.3"
# Output: -1
#
#
class Solution:
def compareVersion(self, version1, version2):
"""
:type version1: str
:type version2: str
:rtype: int
"""
| [
"richnakasato@hotmail.com"
] | richnakasato@hotmail.com |
a4a2d2dbd4a2b79ce6744d1e9304e8b3b5400cee | 1bed2f766620acf085ed2d7fd3e354a3482b8960 | /tests/components/roku/test_select.py | 003487c0adfdf8e753fab56230179045e1c451c3 | [
"Apache-2.0"
] | permissive | elupus/home-assistant | 5cbb79a2f25a2938a69f3988534486c269b77643 | 564150169bfc69efdfeda25a99d803441f3a4b10 | refs/heads/dev | 2023-08-28T16:36:04.304864 | 2022-09-16T06:35:12 | 2022-09-16T06:35:12 | 114,460,522 | 2 | 2 | Apache-2.0 | 2023-02-22T06:14:54 | 2017-12-16T12:50:55 | Python | UTF-8 | Python | false | false | 7,752 | py | """Tests for the Roku select platform."""
from unittest.mock import MagicMock
import pytest
from rokuecp import (
Application,
Device as RokuDevice,
RokuConnectionError,
RokuConnectionTimeoutError,
RokuError,
)
from homeassistant.components.roku.const import DOMAIN
from homeassistant.components.roku.coordinator import SCAN_INTERVAL
from homeassistant.components.select import DOMAIN as SELECT_DOMAIN
from homeassistant.components.select.const import ATTR_OPTION, ATTR_OPTIONS
from homeassistant.const import ATTR_ENTITY_ID, ATTR_ICON, SERVICE_SELECT_OPTION
from homeassistant.core import HomeAssistant
from homeassistant.exceptions import HomeAssistantError
from homeassistant.helpers import entity_registry as er
import homeassistant.util.dt as dt_util
from tests.common import MockConfigEntry, async_fire_time_changed
async def test_application_state(
hass: HomeAssistant,
mock_config_entry: MockConfigEntry,
mock_device: RokuDevice,
mock_roku: MagicMock,
) -> None:
"""Test the creation and values of the Roku selects."""
entity_registry = er.async_get(hass)
entity_registry.async_get_or_create(
SELECT_DOMAIN,
DOMAIN,
"1GU48T017973_application",
suggested_object_id="my_roku_3_application",
disabled_by=None,
)
mock_config_entry.add_to_hass(hass)
await hass.config_entries.async_setup(mock_config_entry.entry_id)
await hass.async_block_till_done()
state = hass.states.get("select.my_roku_3_application")
assert state
assert state.attributes.get(ATTR_ICON) == "mdi:application"
assert state.attributes.get(ATTR_OPTIONS) == [
"Home",
"Amazon Video on Demand",
"Free FrameChannel Service",
"MLB.TV" + "\u00AE",
"Mediafly",
"Netflix",
"Pandora",
"Pluto TV - It's Free TV",
"Roku Channel Store",
]
assert state.state == "Home"
entry = entity_registry.async_get("select.my_roku_3_application")
assert entry
assert entry.unique_id == "1GU48T017973_application"
await hass.services.async_call(
SELECT_DOMAIN,
SERVICE_SELECT_OPTION,
{
ATTR_ENTITY_ID: "select.my_roku_3_application",
ATTR_OPTION: "Netflix",
},
blocking=True,
)
assert mock_roku.launch.call_count == 1
mock_roku.launch.assert_called_with("12")
mock_device.app = mock_device.apps[1]
async_fire_time_changed(hass, dt_util.utcnow() + SCAN_INTERVAL)
await hass.async_block_till_done()
state = hass.states.get("select.my_roku_3_application")
assert state
assert state.state == "Netflix"
await hass.services.async_call(
SELECT_DOMAIN,
SERVICE_SELECT_OPTION,
{
ATTR_ENTITY_ID: "select.my_roku_3_application",
ATTR_OPTION: "Home",
},
blocking=True,
)
assert mock_roku.remote.call_count == 1
mock_roku.remote.assert_called_with("home")
mock_device.app = Application(
app_id=None, name="Roku", version=None, screensaver=None
)
async_fire_time_changed(hass, dt_util.utcnow() + (SCAN_INTERVAL * 2))
await hass.async_block_till_done()
state = hass.states.get("select.my_roku_3_application")
assert state
assert state.state == "Home"
@pytest.mark.parametrize(
"error, error_string",
[
(RokuConnectionError, "Error communicating with Roku API"),
(RokuConnectionTimeoutError, "Timeout communicating with Roku API"),
(RokuError, "Invalid response from Roku API"),
],
)
async def test_application_select_error(
hass: HomeAssistant,
mock_config_entry: MockConfigEntry,
mock_roku: MagicMock,
error: RokuError,
error_string: str,
) -> None:
"""Test error handling of the Roku selects."""
entity_registry = er.async_get(hass)
entity_registry.async_get_or_create(
SELECT_DOMAIN,
DOMAIN,
"1GU48T017973_application",
suggested_object_id="my_roku_3_application",
disabled_by=None,
)
mock_config_entry.add_to_hass(hass)
await hass.config_entries.async_setup(mock_config_entry.entry_id)
await hass.async_block_till_done()
mock_roku.launch.side_effect = error
with pytest.raises(HomeAssistantError, match=error_string):
await hass.services.async_call(
SELECT_DOMAIN,
SERVICE_SELECT_OPTION,
{
ATTR_ENTITY_ID: "select.my_roku_3_application",
ATTR_OPTION: "Netflix",
},
blocking=True,
)
state = hass.states.get("select.my_roku_3_application")
assert state
assert state.state == "Home"
assert mock_roku.launch.call_count == 1
mock_roku.launch.assert_called_with("12")
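The parametrized table above pairs each rokuecp exception with the message Home Assistant surfaces. The same most-specific-first mapping, sketched with stand-in exception classes (the real hierarchy lives in rokuecp; the subclass relations below are assumptions for the sketch):

```python
class RokuError(Exception):
    """Stand-in for the rokuecp base error."""

class RokuConnectionError(RokuError):
    """Stand-in: connection failures."""

class RokuConnectionTimeoutError(RokuConnectionError):
    """Stand-in: connection timeouts."""

def error_message(exc):
    """Mirror the parametrized error -> message table from the tests.
    Most specific class first, since isinstance matches subclasses too."""
    if isinstance(exc, RokuConnectionTimeoutError):
        return "Timeout communicating with Roku API"
    if isinstance(exc, RokuConnectionError):
        return "Error communicating with Roku API"
    if isinstance(exc, RokuError):
        return "Invalid response from Roku API"
    raise exc
```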
@pytest.mark.parametrize("mock_device", ["roku/rokutv-7820x.json"], indirect=True)
async def test_channel_state(
hass: HomeAssistant,
init_integration: MockConfigEntry,
mock_device: RokuDevice,
mock_roku: MagicMock,
) -> None:
"""Test the creation and values of the Roku selects."""
entity_registry = er.async_get(hass)
state = hass.states.get("select.58_onn_roku_tv_channel")
assert state
assert state.attributes.get(ATTR_ICON) == "mdi:television"
assert state.attributes.get(ATTR_OPTIONS) == [
"99.1",
"QVC (1.3)",
"WhatsOn (1.1)",
"getTV (14.3)",
]
assert state.state == "getTV (14.3)"
entry = entity_registry.async_get("select.58_onn_roku_tv_channel")
assert entry
assert entry.unique_id == "YN00H5555555_channel"
# channel name
await hass.services.async_call(
SELECT_DOMAIN,
SERVICE_SELECT_OPTION,
{
ATTR_ENTITY_ID: "select.58_onn_roku_tv_channel",
ATTR_OPTION: "WhatsOn (1.1)",
},
blocking=True,
)
assert mock_roku.tune.call_count == 1
mock_roku.tune.assert_called_with("1.1")
mock_device.channel = mock_device.channels[0]
async_fire_time_changed(hass, dt_util.utcnow() + SCAN_INTERVAL)
await hass.async_block_till_done()
state = hass.states.get("select.58_onn_roku_tv_channel")
assert state
assert state.state == "WhatsOn (1.1)"
# channel number
await hass.services.async_call(
SELECT_DOMAIN,
SERVICE_SELECT_OPTION,
{
ATTR_ENTITY_ID: "select.58_onn_roku_tv_channel",
ATTR_OPTION: "99.1",
},
blocking=True,
)
assert mock_roku.tune.call_count == 2
mock_roku.tune.assert_called_with("99.1")
mock_device.channel = mock_device.channels[3]
async_fire_time_changed(hass, dt_util.utcnow() + SCAN_INTERVAL)
await hass.async_block_till_done()
state = hass.states.get("select.58_onn_roku_tv_channel")
assert state
assert state.state == "99.1"
@pytest.mark.parametrize("mock_device", ["roku/rokutv-7820x.json"], indirect=True)
async def test_channel_select_error(
hass: HomeAssistant,
init_integration: MockConfigEntry,
mock_roku: MagicMock,
) -> None:
"""Test error handling of the Roku selects."""
mock_roku.tune.side_effect = RokuError
with pytest.raises(HomeAssistantError, match="Invalid response from Roku API"):
await hass.services.async_call(
SELECT_DOMAIN,
SERVICE_SELECT_OPTION,
{
ATTR_ENTITY_ID: "select.58_onn_roku_tv_channel",
ATTR_OPTION: "99.1",
},
blocking=True,
)
state = hass.states.get("select.58_onn_roku_tv_channel")
assert state
assert state.state == "getTV (14.3)"
assert mock_roku.tune.call_count == 1
mock_roku.tune.assert_called_with("99.1")
| [
"noreply@github.com"
] | elupus.noreply@github.com |
822862f3f180bee1dd9909c9182fcf3e328eaab8 | bfe6c95fa8a2aae3c3998bd59555583fed72900a | /longestObstacleCourseAtEachPosition.py | 0ca0dd3396e0d71e0b4f329dc296a161e89b0e39 | [] | no_license | zzz136454872/leetcode | f9534016388a1ba010599f4771c08a55748694b2 | b5ea6c21bff317884bdb3d7e873aa159b8c30215 | refs/heads/master | 2023-09-01T17:26:57.624117 | 2023-08-29T03:18:56 | 2023-08-29T03:18:56 | 240,464,565 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 826 | py | from typing import List
class Solution:
def longestObstacleCourseAtEachPosition(self,
obstacles: List[int]) -> List[int]:
d = [obstacles[0]]
res = [1]
for ob in obstacles[1:]:
if ob >= d[-1]:
d.append(ob)
res.append(len(d))
else:
left = 0
right = len(d) - 1
while left <= right:
mid = (left + right) // 2
if d[mid] <= ob:
left = mid + 1
else:
right = mid - 1
res.append(left + 1)
d[left] = ob
return res
obstacles = [1, 2, 3, 2]
print(Solution().longestObstacleCourseAtEachPosition(obstacles))
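The hand-rolled binary search above finds the leftmost tail strictly greater than the current obstacle; the standard library's bisect_right does exactly that, so an equivalent sketch is:

```python
from bisect import bisect_right
from typing import List

def longest_obstacle_course(obstacles: List[int]) -> List[int]:
    tails = []  # tails[k] = smallest tail of a non-decreasing course of length k+1
    res = []
    for ob in obstacles:
        idx = bisect_right(tails, ob)  # first tail strictly greater than ob
        res.append(idx + 1)
        if idx == len(tails):
            tails.append(ob)
        else:
            tails[idx] = ob            # tighten the tail for this length
    return res
```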
| [
"zzz136454872@163.com"
] | zzz136454872@163.com |
b7987901a5a75205f490089306580edb93381290 | d579fdffa059724aff5a540e1ca6c12f508fd7b4 | /flex/django/ussd/screens/__init__.py | 4dafb9db0d2438990b3d4bade50a804a1be4d3ba | [
"MIT"
] | permissive | centergy/flex_ussd | 7493afddeea7c142e6ae6ee9f85406e165e65404 | ddc0ccd192e3a0a82e8b7705f088862d59656c28 | refs/heads/master | 2020-03-24T21:21:46.790958 | 2018-09-23T01:07:13 | 2018-09-23T01:07:13 | 143,027,887 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 186 | py | from .base import UssdScreenType, UssdScreen, UssdPayload, ScreenState, ScreenRef
from .base import get_screen, get_screen_uid, get_home_screen, render_screen
from .base import END, CON
| [
"davidmkyalo@gmail.com"
] | davidmkyalo@gmail.com |
887b1ddeae86361e13fc5082defd58656ea36555 | f71ee969fa331560b6a30538d66a5de207e03364 | /scripts/client/messenger/gui/scaleform/channels/bw_factories.py | 35ded875a0efc378793bae087ca90f289d31b095 | [] | no_license | webiumsk/WOT-0.9.8-CT | 31356ed01cb110e052ba568e18cb2145d4594c34 | aa8426af68d01ee7a66c030172bd12d8ca4d7d96 | refs/heads/master | 2016-08-03T17:54:51.752169 | 2015-05-12T14:26:00 | 2015-05-12T14:26:00 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,259 | py | # Embedded file name: scripts/client/messenger/gui/Scaleform/channels/bw_factories.py
import chat_shared
from constants import PREBATTLE_TYPE
from debug_utils import LOG_ERROR
from messenger.gui.Scaleform.channels import bw_lobby_controllers
from messenger.gui.Scaleform.channels import bw_battle_controllers
from messenger.gui.interfaces import IControllerFactory
from messenger.m_constants import LAZY_CHANNEL
from messenger.proto.bw import find_criteria
from messenger.storage import storage_getter
class LobbyControllersFactory(IControllerFactory):
def __init__(self):
super(LobbyControllersFactory, self).__init__()
@storage_getter('channels')
def channelsStorage(self):
return None
def init(self):
controllers = []
channels = self.channelsStorage.getChannelsByCriteria(find_criteria.BWLobbyChannelFindCriteria())
for channel in channels:
controller = self.factory(channel)
if controller is not None:
controllers.append(controller)
return controllers
def factory(self, channel):
controller = None
if channel.getName() in LAZY_CHANNEL.ALL:
if channel.getName() == LAZY_CHANNEL.SPECIAL_BATTLES:
controller = bw_lobby_controllers.BSLazyChannelController(channel)
else:
controller = bw_lobby_controllers.LazyChannelController(channel)
elif channel.isPrebattle():
prbType = channel.getPrebattleType()
            if prbType == 0:
                LOG_ERROR('Prebattle type is not found', channel)
                return
            if prbType == PREBATTLE_TYPE.TRAINING:
controller = bw_lobby_controllers.TrainingChannelController(channel)
else:
controller = bw_lobby_controllers.PrebattleChannelController(prbType, channel)
elif not channel.isBattle():
controller = bw_lobby_controllers.LobbyChannelController(channel)
return controller
class BattleControllersFactory(IControllerFactory):
@storage_getter('channels')
def channelsStorage(self):
return None
def init(self):
controllers = []
channels = self.channelsStorage.getChannelsByCriteria(find_criteria.BWBattleChannelFindCriteria())
squad = self.channelsStorage.getChannelByCriteria(find_criteria.BWPrbChannelFindCriteria(PREBATTLE_TYPE.SQUAD))
if squad is not None:
channels.append(squad)
for channel in channels:
controller = self.factory(channel)
if controller is not None:
controllers.append(controller)
return controllers
def factory(self, channel):
controller = None
flags = channel.getProtoData().flags
if flags & chat_shared.CHAT_CHANNEL_BATTLE != 0:
if flags & chat_shared.CHAT_CHANNEL_BATTLE_TEAM != 0:
controller = bw_battle_controllers.TeamChannelController(channel)
else:
controller = bw_battle_controllers.CommonChannelController(channel)
elif flags & chat_shared.CHAT_CHANNEL_SQUAD != 0:
controller = bw_battle_controllers.SquadChannelController(channel)
return controller
| [
"info@webium.sk"
] | info@webium.sk |
44f4aee9550e7bcb38abfeee770c372685253753 | 2fad455bd88dc4c2b51058022019ed982c3c445e | /nsls2_gui/dpc_gui.py | ddc42d0fcad13b761c3cc5ec711ca09fa12a9404 | [
"BSD-3-Clause"
] | permissive | cmazzoli/nsls2_gui | 77080f1e73d58f86d98a3a897b63c9afa46c186a | f3610444ad5ac1a7c5ccbb4fde6dc2a357ddcd89 | refs/heads/master | 2021-01-12T22:36:52.031321 | 2014-08-19T11:57:46 | 2014-08-19T11:57:46 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 38,302 | py | from __future__ import print_function
import os
import sys
import multiprocessing as mp
import time
from PyQt4 import (QtCore, QtGui)
from PyQt4.QtCore import Qt
import matplotlib.cm as cm
import Image
import PIL
import scipy
from scipy.misc import imsave
import numpy as np
import matplotlib as mpl
from matplotlib.backends.backend_qt4agg import FigureCanvasQTAgg as FigureCanvas
from matplotlib.figure import Figure
import matplotlib.gridspec as gridspec
import ImageEnhance
from nsls2_gui import dpc_gui_kernel as dpc
sys.path.insert(0, '/home/nanopos/ecli/')
import pyspecfile
SOLVERS = ['Nelder-Mead',
'Powell',
'CG',
'BFGS',
'Newton-CG',
'Anneal',
'L-BFGS-B',
'TNC',
'COBYLA',
'SLS QP',
'dogleg',
'trust-ncg',
]
roi_x1 = 0
roi_x2 = 0
roi_y1 = 0
roi_y2 = 0
a = None
gx = None
gy = None
phi = None
CMAP_PREVIEW_PATH = os.path.join(os.path.dirname(__file__), '.cmap_previews')
def brush_to_color_tuple(brush):
r, g, b, a = brush.color().getRgbF()
return (r, g, b)
class DPCThread(QtCore.QThread):
def __init__(self, canvas, pool=None, parent=None):
QtCore.QThread.__init__(self, parent)
DPCThread.instance = self
self.canvas = canvas
self.pool = pool
self.fig = None
    def update_display(self, a, gx, gy, phi, flag=None):
def show_image(ax, image):
#return ax.imshow(np.flipud(image.T), interpolation='nearest',
# origin='lower', cmap=cm.Greys_r)
return ax.imshow(image, interpolation='nearest',
origin='lower', cmap=cm.Greys_r)
def show_image_line(ax, image, start, end, direction=1):
if direction == 1:
ax.axhspan(start, end, facecolor='0.5', alpha=0.5)
return ax.imshow(image, interpolation='nearest',
origin='lower', cmap=cm.Greys_r)
if direction == -1:
ax.axvspan(start, end, facecolor='0.5', alpha=0.5)
return ax.imshow(image, interpolation='nearest',
origin='lower', cmap=cm.Greys_r)
main = DPCDialog.instance
canvas = self.canvas
fig = canvas.figure
fig.clear()
fig.subplots_adjust(top=0.95, left=0, right=0.95, bottom=0)
gs = gridspec.GridSpec(2, 2)
if main.ion_data is not None:
pixels = a.shape[0] * a.shape[1]
ion_data = np.zeros(pixels)
ion_data[:len(main.ion_data)] = main.ion_data
ion_data[len(main.ion_data):] = ion_data[0]
ion_data = ion_data.reshape(a.shape)
min_ = np.min(a[np.where(a > 0)])
a[np.where(a == 0)] = min_
canvas.a_ax = a_ax = fig.add_subplot(gs[0, 1])
a_ax.set_title('a')
a_data = a / ion_data * ion_data[0]
canvas.ima = ima = show_image(a_ax, a_data)
fig.colorbar(ima)
canvas.gx_ax = gx_ax = fig.add_subplot(gs[1, 0])
gx_ax.set_title('X')
canvas.imx = imx = show_image(gx_ax, gx)
fig.colorbar(imx)
canvas.gy_ax = gy_ax = fig.add_subplot(gs[1, 1])
gy_ax.set_title('Y')
canvas.imy = imy = show_image(gy_ax, gy)
fig.colorbar(imy)
"""
def onclick(event):
print ('button=%d, x=%d, y=%d, xdata=%f, ydata=%f'%(
event.button, event.x, event.y, event.xdata, event.ydata))
cid = gy_ax.canvas.mpl_connect('button_press_event', onclick)
"""
if phi is not None:
if main.ion_data is not None:
phi_ax = fig.add_subplot(gs[0, 0])
else:
phi_ax = fig.add_subplot(gs[0, :])
canvas.phi_ax = phi_ax
phi_ax.set_title('phi')
            if flag is None:
canvas.imphi = imphi = show_image(phi_ax, phi)
if flag == "strap":
canvas.imphi = imphi = show_image_line(phi_ax, phi,
DPCDialog.instance.strap_start.value(),
DPCDialog.instance.strap_end.value(),
DPCDialog.instance.direction)
fig.colorbar(imphi)
imphi.set_cmap(main._color_map)
"""
def onclick(event):
print ('button=%d, x=%d, y=%d, xdata=%f, ydata=%f'%(
event.button, event.x, event.y, event.xdata, event.ydata))
cid = phi_ax.canvas.mpl_connect('button_press_event', onclick)
"""
canvas.draw()
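update_display pads the ion-chamber readings out to the full scan grid (missing points reuse the first reading), reshapes them, and rescales the amplitude image against the first row of readings. That normalization in isolation (an illustrative helper, not part of this class):

```python
import numpy as np

def ion_normalize(a, ion_readings):
    """Pad ion readings to a's pixel count, reshape to the image grid,
    and rescale a relative to the first row of readings, mirroring
    the ion_data handling in update_display."""
    pixels = a.size
    ion = np.empty(pixels)
    n = min(len(ion_readings), pixels)
    ion[:n] = ion_readings[:n]
    ion[n:] = ion_readings[0]   # missing points fall back to the first reading
    ion = ion.reshape(a.shape)
    return a / ion * ion[0]     # ion[0] (first row) broadcasts over rows
```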
def run(self):
print('DPC thread started')
try:
ret = dpc.main(pool=self.pool, display_fcn=self.update_display,
**self.dpc_settings)
print('DPC finished')
global a
global gx
global gy
global phi
a, gx, gy, phi = ret
main = DPCDialog.instance
main.a, main.gx, main.gy, main.phi = a, gx, gy, phi
self.update_display(a, gx, gy, phi)
DPCDialog.instance.line_btn.setEnabled(True)
#DPCDialog.instance.direction_btn.setEnabled(True)
#DPCDialog.instance.removal_btn.setEnabled(True)
#DPCDialog.instance.confirm_btn.setEnabled(True)
finally:
DPCDialog.instance.set_running(False)
class MplCanvas(FigureCanvas):
"""
Canvas which allows us to use matplotlib with pyqt4
"""
def __init__(self, parent=None, width=5, height=4, dpi=100):
fig = Figure(figsize=(width, height), dpi=dpi)
# We want the axes cleared every time plot() is called
self.axes = fig.add_subplot(1, 1, 1)
self.axes.hold(False)
FigureCanvas.__init__(self, fig)
# self.figure
self.setParent(parent)
FigureCanvas.setSizePolicy(self,
QtGui.QSizePolicy.Expanding,
QtGui.QSizePolicy.Expanding)
FigureCanvas.updateGeometry(self)
self._title = ''
self.title_font = {'family': 'serif', 'fontsize': 10}
self._title_size = 0
self.figure.subplots_adjust(top=0.95, bottom=0.15)
window_brush = self.window().palette().window()
fig.set_facecolor(brush_to_color_tuple(window_brush))
fig.set_edgecolor(brush_to_color_tuple(window_brush))
self._active = False
def _get_title(self):
return self._title
def _set_title(self, title):
self._title = title
if self.axes:
self.axes.set_title(title, fontdict=self.title_font)
#bbox = t.get_window_extent()
#bbox = bbox.inverse_transformed(self.figure.transFigure)
#self._title_size = bbox.height
#self.figure.subplots_adjust(top=1.0 - self._title_size)
title = property(_get_title, _set_title)
class Label(QtGui.QLabel):
def __init__(self, parent = None):
super(Label, self).__init__(parent)
self.rubberBand = QtGui.QRubberBand(QtGui.QRubberBand.Rectangle, self)
self.origin = QtCore.QPoint()
def mousePressEvent(self, event):
global roi_x1
global roi_y1
self.rubberBand.hide()
if event.button() == Qt.LeftButton:
self.origin = QtCore.QPoint(event.pos())
self.rubberBand.setGeometry(QtCore.QRect(self.origin, QtCore.QSize()))
self.rubberBand.show()
roi_x1 = event.pos().x()
roi_y1 = event.pos().y()
def mouseMoveEvent(self, event):
if event.buttons() == QtCore.Qt.NoButton:
pos = event.pos()
if not self.origin.isNull():
self.rubberBand.setGeometry(QtCore.QRect(self.origin, event.pos()).normalized())
def mouseReleaseEvent(self, event):
global roi_x2
global roi_y2
roi_x2 = event.pos().x()
roi_y2 = event.pos().y()
if((roi_x1, roi_y1)!=(roi_x2, roi_y2)):
DPCDialog.instance.roi_x1_widget.setValue(roi_x1)
DPCDialog.instance.roi_y1_widget.setValue(roi_y1)
DPCDialog.instance.roi_x2_widget.setValue(roi_x2)
DPCDialog.instance.roi_y2_widget.setValue(roi_y2)
else:
if DPCDialog.instance.bad_flag != 0:
DPCDialog.instance.bad_pixels_widget.addItem('%d, %d' %
(event.pos().x(), event.pos().y()))
self.rubberBand.show()
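mouseReleaseEvent stores the drag's end corner, but a drag can go in any direction, so a region defined by the press/release points has to be normalized before use; QRect.normalized() does this for the rubber band itself. A plain-Python sketch of the same normalization (hypothetical helper, not in this file):

```python
def normalize_roi(x1, y1, x2, y2):
    """Return (left, top, width, height) regardless of drag direction."""
    left, right = sorted((x1, x2))
    top, bottom = sorted((y1, y2))
    return left, top, right - left, bottom - top
```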
class paintLabel(QtGui.QLabel):
def __init__(self, parent = None):
super(paintLabel, self).__init__(parent)
def paintEvent(self, event):
super(paintLabel, self).paintEvent(event)
qp = QtGui.QPainter()
qp.begin(self)
self.drawLine(event, qp)
qp.end()
def drawLine(self, event, qp):
size = self.size()
pen = QtGui.QPen(QtCore.Qt.red)
qp.setPen(pen)
qp.drawLine(size.width()/2, 0, size.width()/2, size.height()-1)
qp.drawLine(size.width()/2 - 1, 0, size.width()/2 - 1, size.height()-1)
qp.drawLine(0, size.height()/2, size.width()-1, size.height()/2)
qp.drawLine(0, size.height()/2-1, size.width()-1, size.height()/2-1)
pen.setStyle(QtCore.Qt.DashLine)
pen.setColor(QtCore.Qt.black)
qp.setPen(pen)
qp.drawLine(0, 0, size.width()-1, 0)
qp.drawLine(0, size.height()-1, size.width()-1, size.height()-1)
qp.drawLine(0, 0, 0, size.height()-1)
qp.drawLine(size.width()-1, 0, size.width()-1, size.height()-1)
class DPCDialog(QtGui.QDialog):
CM_DEFAULT = 'jet'
def __init__(self, parent=None):
QtGui.QDialog.__init__(self, parent)
DPCDialog.instance = self
self.bin_num = 2**16
self._thread = None
self.ion_data = None
self.bad_flag = 0
self.direction = 1 # 1 for horizontal and -1 for vertical
self.gx, self.gy, self.phi, self.a = None, None, None, None
self.file_widget = QtGui.QLineEdit('Chromosome_9_%05d.tif')
self.file_widget.setFixedWidth(400)
self.focus_widget = QtGui.QDoubleSpinBox()
self.dx_widget = QtGui.QDoubleSpinBox()
self.dy_widget = QtGui.QDoubleSpinBox()
self.pixel_widget = QtGui.QSpinBox()
self.energy_widget = QtGui.QDoubleSpinBox()
self.rows_widget = QtGui.QSpinBox()
self.cols_widget = QtGui.QSpinBox()
self.roi_x1_widget = QtGui.QSpinBox()
self.roi_x2_widget = QtGui.QSpinBox()
self.roi_y1_widget = QtGui.QSpinBox()
self.roi_y2_widget = QtGui.QSpinBox()
self.strap_start = QtGui.QSpinBox()
self.strap_end = QtGui.QSpinBox()
self.bad_pixels_widget = QtGui.QListWidget()
self.bad_pixels_widget.setContextMenuPolicy(Qt.CustomContextMenu)
self.bad_pixels_widget.customContextMenuRequested.connect(self._bad_pixels_menu)
self.ref_widget = QtGui.QSpinBox()
self.first_widget = QtGui.QSpinBox()
self.processes_widget = QtGui.QSpinBox()
self.solver_widget = QtGui.QComboBox()
for solver in SOLVERS:
self.solver_widget.addItem(solver)
self.start_widget = QtGui.QPushButton('S&tart')
self.stop_widget = QtGui.QPushButton('&Stop')
self.save_widget = QtGui.QPushButton('Sa&ve')
self.scan_button = QtGui.QPushButton('Load from s&can')
self.color_map = QtGui.QComboBox()
self.update_color_maps()
self.color_map.currentIndexChanged.connect(self._set_color_map)
self._color_map = mpl.cm.get_cmap(self.CM_DEFAULT)
self.start_widget.clicked.connect(self.start)
self.stop_widget.clicked.connect(self.stop)
self.save_widget.clicked.connect(self.save)
self.scan_button.clicked.connect(self.load_from_scan)
self.layout1 = QtGui.QFormLayout()
self.settings_widget1 = QtGui.QFrame()
self.settings_widget1.setLayout(self.layout1)
self.layout1.setRowWrapPolicy(self.layout1.WrapAllRows)
self.layout1.addRow('&File format', self.file_widget)
self.layout1.addRow('Phi color map', self.color_map)
self.layout2 = QtGui.QFormLayout()
self.settings_widget2 = QtGui.QFrame()
self.settings_widget2.setLayout(self.layout2)
self.layout2.addRow('&X step (um)', self.dx_widget)
self.layout2.addRow('P&ixel size (um)', self.pixel_widget)
self.layout2.addRow('&Rows', self.rows_widget)
self.layout2.addRow('&Reference image', self.ref_widget)
self.layout2.addRow('ROI X1', self.roi_x1_widget)
self.layout2.addRow('ROI Y1', self.roi_y1_widget)
self.layout3 = QtGui.QFormLayout()
self.settings_widget3 = QtGui.QFrame()
self.settings_widget3.setLayout(self.layout3)
self.layout3.addRow('&Y step (um)', self.dy_widget)
self.layout3.addRow('Energy (keV)', self.energy_widget)
self.layout3.addRow('&Columns', self.cols_widget)
self.layout3.addRow('&First image', self.first_widget)
self.layout3.addRow('ROI X2', self.roi_x2_widget)
self.layout3.addRow('ROI Y2', self.roi_y2_widget)
self.splitter1 = QtGui.QSplitter(QtCore.Qt.Horizontal)
self.splitter1.setLineWidth(1)
self.splitter1.minimumSizeHint()
self.splitter1.addWidget(self.settings_widget2)
self.splitter1.addWidget(self.settings_widget3)
## The ROI image related components
# roi_image is the image used to select the ROI and it is a QPixmap object
# roi_img shows the same image but in a PIL image format
# roi_image_temp is the temporary image used to show roi_image with an
# enhanced contrast and etc.
self.roi_image = QtGui.QPixmap(str(self.file_widget.text()) % self.ref_widget.value())
self.roi_img = Image.open(str(self.file_widget.text()) % self.ref_widget.value())
self.calHist()
self.preContrast()
self.roi_image_x = self.roi_image.size().width()
self.roi_image_y = self.roi_image.size().height()
self.img_lbl = Label(self)
self.img_lbl.setPixmap(self.roi_image)
self.img_lbl.setFixedWidth(self.roi_image_x)
self.img_lbl.setFixedHeight(self.roi_image_y)
self.temp_lbl = paintLabel(self)
self.temp_lbl.setFixedWidth(168)
self.temp_lbl.setFixedHeight(168)
self.txt_lbl = QtGui.QLabel(self)
self.img_btn = QtGui.QPushButton('Select an image')
self.img_btn.clicked.connect(self.load_an_image)
self.his_btn = QtGui.QPushButton('Histogram equalization')
self.his_btn.setCheckable(True)
self.his_btn.clicked[bool].connect(self.histgramEqua)
self.bri_btn = QtGui.QPushButton('Brightest pixels')
self.bri_btn.clicked.connect(self.select_bri_pixels)
self.bad_btn = QtGui.QPushButton('Select bad pixels')
self.bad_btn.setCheckable(True)
self.bad_btn.clicked[bool].connect(self.bad_enable)
self.sld = QtGui.QSlider(QtCore.Qt.Horizontal, self)
self.sld.setFocusPolicy(QtCore.Qt.NoFocus)
self.sld.valueChanged[int].connect(self.change_contrast)
self.line_btn = QtGui.QPushButton('Add/Change the strap')
self.line_btn.setEnabled(False)
self.line_btn.clicked.connect(self.add_strap)
self.direction_btn = QtGui.QPushButton('Change the direction')
self.direction_btn.clicked.connect(self.change_direction)
self.direction_btn.setEnabled(False)
self.removal_btn = QtGui.QPushButton('Remove the background')
self.removal_btn.clicked.connect(self.remove_background)
self.removal_btn.setEnabled(False)
self.confirm_btn = QtGui.QPushButton('Confirm')
self.confirm_btn.clicked.connect(self.confirm)
self.confirm_btn.setEnabled(False)
self.layout4 = QtGui.QFormLayout()
self.settings_widget4 = QtGui.QFrame()
self.settings_widget4.setLayout(self.layout4)
self.layout4.addRow(self.txt_lbl)
self.layout4.addRow(self.img_btn, self.his_btn)
self.layout4.addRow(self.bri_btn, self.bad_btn)
self.layout4.addRow(self.sld)
self.layout4.addRow(self.temp_lbl)
self.splitter3 = QtGui.QSplitter(QtCore.Qt.Vertical)
self.splitter3.setLineWidth(1)
self.splitter3.minimumSizeHint()
self.splitter3.addWidget(self.img_lbl)
self.splitter3.addWidget(self.settings_widget4)
self.layout5 = QtGui.QFormLayout()
self.settings_widget5 = QtGui.QFrame()
self.settings_widget5.setLayout(self.layout5)
self.layout5.addRow('Bad pixels', self.bad_pixels_widget)
self.layout5.addRow('F&ocus to detector (um)', self.focus_widget)
self.layout5.addRow('&Solver method', self.solver_widget)
self.layout5.addRow('&Processes', self.processes_widget)
self.layout5.addRow('Strap start', self.strap_start)
self.layout5.addRow('Strap end', self.strap_end)
self.layout5.addRow(self.line_btn, self.direction_btn)
self.layout5.addRow(self.removal_btn, self.confirm_btn)
self.layout5.addRow(self.stop_widget, self.start_widget)
self.layout5.addRow(self.save_widget, self.scan_button)
self.splitter2 = QtGui.QSplitter(QtCore.Qt.Vertical)
self.splitter2.setLineWidth(1)
self.splitter2.minimumSizeHint()
self.splitter2.addWidget(self.settings_widget1)
self.splitter2.addWidget(self.splitter1)
self.splitter2.addWidget(self.settings_widget5)
self.last_path = ''
self._settings = {
'file_format': [lambda: self.file_format, lambda value: self.file_widget.setText(value)],
'dx': [lambda: self.dx, lambda value: self.dx_widget.setValue(float(value))],
'dy': [lambda: self.dy, lambda value: self.dy_widget.setValue(float(value))],
'x1': [lambda: self.roi_x1, lambda value: self.roi_x1_widget.setValue(int(value))],
'y1': [lambda: self.roi_y1, lambda value: self.roi_y1_widget.setValue(int(value))],
'x2': [lambda: self.roi_x2, lambda value: self.roi_x2_widget.setValue(int(value))],
'y2': [lambda: self.roi_y2, lambda value: self.roi_y2_widget.setValue(int(value))],
'pixel_size': [lambda: self.pixel_size, lambda value: self.pixel_widget.setValue(float(value))],
'focus_to_det': [lambda: self.focus, lambda value: self.focus_widget.setValue(float(value))],
'energy': [lambda: self.energy, lambda value: self.energy_widget.setValue(float(value))],
'rows': [lambda: self.rows, lambda value: self.rows_widget.setValue(int(value))],
'cols': [lambda: self.cols, lambda value: self.cols_widget.setValue(int(value))],
'first_image': [lambda: self.first_image, lambda value: self.first_widget.setValue(int(value))],
'ref_image': [lambda: self.ref_image, lambda value: self.ref_widget.setValue(int(value))],
'processes': [lambda: self.processes, lambda value: self.processes_widget.setValue(int(value))],
'bad_pixels': [lambda: self.bad_pixels, lambda value: self.set_bad_pixels(value)],
'solver': [lambda: self.solver, lambda value: self.set_solver(value)],
'last_path': [lambda: self.last_path, lambda value: setattr(self, 'last_path', value)],
#'color_map': [lambda: self._color_map, lambda value: setattr(self, 'last_path', value)],
}
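
# Sidebar: the _settings table above pairs every persisted key with a
# (getter, setter) couple, so save_settings()/load_settings() only have to
# loop over one dict.  A minimal self-contained sketch of that pattern
# (_RegistryDemo is a hypothetical class, illustration only, not part of
# the GUI):
class _RegistryDemo(object):
    def __init__(self):
        self.value = 0
        self._settings = {
            'value': [lambda: self.value,
                      lambda v: setattr(self, 'value', int(v))],
        }

    def save(self):
        # snapshot every registered key through its getter
        return dict((key, get()) for key, (get, _set) in self._settings.items())

    def load(self, data):
        # push stored values back through the matching setter
        for key, (_get, _set) in self._settings.items():
            if key in data:
                _set(data[key])
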
for w in [self.pixel_widget, self.focus_widget, self.energy_widget,
self.dx_widget, self.dy_widget, self.rows_widget, self.cols_widget,
self.roi_x1_widget, self.roi_x2_widget, self.roi_y1_widget, self.roi_y2_widget,
self.ref_widget, self.first_widget,
]:
w.setMinimum(0)
w.setMaximum(int(2 ** 31 - 1))
try:
w.setDecimals(3)
except AttributeError:  # plain QSpinBox has no setDecimals(); only QDoubleSpinBox does
pass
self.canvas = MplCanvas(width=8, height=0.25, dpi=50)
self.splitter = QtGui.QSplitter()
self.splitter.setOrientation(Qt.Horizontal)
self.splitter.setLineWidth(1)
self.splitter.minimumSizeHint()
self.splitter.addWidget(self.splitter3)
self.splitter.addWidget(self.splitter2)
self.splitter.addWidget(self.canvas)
self.layout = QtGui.QVBoxLayout()
self.layout.addWidget(self.splitter)
self.setLayout(self.layout)
self.load_settings()
def add_strap(self, pressed):
"""
Add two lines in the Phi image
"""
self.confirm_btn.setEnabled(False)
self.direction_btn.setEnabled(True)
self.removal_btn.setEnabled(True)
DPCThread.instance.update_display(a, gx, gy, phi, "strap")
def change_direction(self, pressed):
"""
Change the orientation of the strap
"""
self.direction = -self.direction
DPCThread.instance.update_display(a, gx, gy, phi, "strap")
def remove_background(self, pressed):
"""
Remove the background of the phase image
"""
global phi
self.confirm_btn.setEnabled(True)
self.direction_btn.setEnabled(False)
if self.direction == 1:
strap = phi[self.strap_start.value():self.strap_end.value(), :]
line = np.mean(strap, axis=0)
self.phi_r = phi - line
DPCThread.instance.update_display(a, gx, gy, self.phi_r)
if self.direction == -1:
strap = phi[:, self.strap_start.value():self.strap_end.value()]
line = np.mean(strap, axis=1)
self.phi_r = np.transpose(phi)
self.phi_r = self.phi_r - line
self.phi_r = np.transpose(self.phi_r)
DPCThread.instance.update_display(a, gx, gy, self.phi_r)
def confirm(self, pressed):
"""
Confirm the background removal
"""
global phi
phi = self.phi_r
imsave('phi.jpg', phi)
np.savetxt('phi.txt', phi)
self.confirm_btn.setEnabled(False)
self.direction_btn.setEnabled(False)
self.removal_btn.setEnabled(False)
def bad_enable(self, pressed):
"""
Enable or disable bad pixels selection by changing the bad_flag value
"""
if pressed:
self.bad_flag = 1
else:
self.bad_flag = 0
def histgramEqua(self, pressed):
"""
Histogram equalization for the ROI image
"""
if pressed:
self.roi_image_temp = QtGui.QPixmap('equalizedImg.tif')
self.img_lbl.setPixmap(self.roi_image_temp)
else:
self.img_lbl.setPixmap(self.roi_image)
def preContrast(self):
self.contrastImage = self.roi_img.convert('L')
self.enh = ImageEnhance.Contrast(self.contrastImage)
def calHist(self):
"""
Calculate the histogram of the image used to select ROI
"""
img = np.array(self.roi_img.getdata(), dtype=np.uint16)
imhist,bins = np.histogram(img, bins=self.bin_num, range=(0, self.bin_num), density=True)
cdf = imhist.cumsum()
cdf = (self.bin_num-1) * cdf / cdf[-1]
equalizedImg = np.uint16(np.floor(np.interp(img, bins[:-1], cdf)))
equalizedImg = np.reshape(equalizedImg, (self.roi_img.size[1], self.roi_img.size[0]), order='C')
scipy.misc.imsave('equalizedImg.tif', equalizedImg)
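
# Self-contained sketch of the equalization math used in calHist above
# (pure NumPy; _equalize_demo and the tiny bin_num are hypothetical,
# for illustration only):
def _equalize_demo(img, bin_num):
    import numpy as np
    flat = img.ravel()
    # normalized histogram -> cumulative distribution, rescaled to the bin range
    imhist, bins = np.histogram(flat, bins=bin_num, range=(0, bin_num), density=True)
    cdf = imhist.cumsum()
    cdf = (bin_num - 1) * cdf / cdf[-1]
    # map every pixel value through the CDF
    out = np.uint16(np.floor(np.interp(flat, bins[:-1], cdf)))
    return out.reshape(img.shape)
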
def select_bri_pixels(self):
"""
Select the bad pixels (pixels with the maximum pixel value)
"""
img = np.array(self.roi_img.getdata(), dtype=np.uint16)
array = np.reshape(img, (self.roi_img.size[1], self.roi_img.size[0]), order='C')
indices = np.where(array==array.max())
indices_num = indices[0].size
for i in range(indices_num):
self.bad_pixels_widget.addItem('%d, %d' % (indices[1][i], indices[0][i]))
def change_contrast(self, value):
"""
Change the contrast of the ROI image by slider bar
"""
delta = value / 10.0
self.enh.enhance(delta).save('change_contrast.tif')
contrastImageTemp = QtGui.QPixmap('change_contrast.tif')
self.img_lbl.setPixmap(contrastImageTemp)
def eventFilter(self, source, event):
"""
Event filter to enable cursor coordinates tracking on the ROI image
"""
if (event.type() == QtCore.QEvent.MouseMove and
source is self.img_lbl):
if event.buttons() == QtCore.Qt.NoButton:
pos = event.pos()
self.txt_lbl.setText('x=%d, y=%d, value=%d ' % (pos.x(),
pos.y(), self.roi_img.getpixel((pos.x(), pos.y()))))
top_left_x = pos.x()-10 if pos.x()-10>=0 else 0
top_left_y = pos.y()-10 if pos.y()-10>=0 else 0
bottom_right_x = pos.x()+10 if pos.x()+10<self.roi_img.size[0] else self.roi_img.size[0]-1
bottom_right_y = pos.y()+10 if pos.y()+10<self.roi_img.size[1] else self.roi_img.size[1]-1
if (pos.y()-10)<0:
self.temp_lbl.setAlignment(QtCore.Qt.AlignBottom)
if (pos.x()+10)>=self.roi_img.size[0]:
self.temp_lbl.setAlignment(QtCore.Qt.AlignLeft)
if (pos.x()-10)<0:
self.temp_lbl.setAlignment(QtCore.Qt.AlignRight)
if (pos.y()+10)>=self.roi_img.size[1]:
self.temp_lbl.setAlignment(QtCore.Qt.AlignTop)
width = bottom_right_x - top_left_x + 1
height = bottom_right_y - top_left_y + 1
img_fraction = self.img_lbl.pixmap().copy(top_left_x, top_left_y, width, height)
scaled_img_fraction = img_fraction.scaled(width*8, height*8)
self.temp_lbl.setPixmap(scaled_img_fraction)
if (event.type() == QtCore.QEvent.MouseMove and
source is not self.img_lbl):
if event.buttons() == QtCore.Qt.NoButton:
self.txt_lbl.setText('')
self.temp_lbl.clear()
return QtGui.QDialog.eventFilter(self, source, event)
def load_an_image(self):
"""
Load an image to select the ROI
"""
fname = QtGui.QFileDialog.getOpenFileName(self, 'Open file',
'/home')
self.roi_temp_image = QtGui.QPixmap(fname)
if not self.roi_temp_image.isNull():
self.roi_image = self.roi_temp_image
self.roi_img = Image.open(str(fname))
self.calHist()
self.preContrast()
self.roi_image_x = self.roi_image.size().width()
self.roi_image_y = self.roi_image.size().height()
self.img_lbl.setPixmap(self.roi_image)
self.img_lbl.setFixedWidth(self.roi_image_x)
self.img_lbl.setFixedHeight(self.roi_image_y)
def _set_color_map(self, index):
"""
User changed color map callback.
"""
cm_ = str(self.color_map.itemText(index))
print('Color map set to: %s' % cm_)
self._color_map = mpl.cm.get_cmap(cm_)
try:
for im in [self.canvas.imphi, ]:
im.set_cmap(self._color_map)
except Exception as ex:
print('failed to set color map: (%s) %s' % (ex.__class__.__name__,
ex))
finally:
self.canvas.draw()
def create_cmap_previews(self):
"""
Create the color map previews for the combobox
"""
cm_names = sorted(_cm for _cm in mpl.cm.datad.keys()
if not _cm.endswith('_r'))
cm_filenames = [os.path.join(CMAP_PREVIEW_PATH, '%s.png' % cm_name)
for cm_name in cm_names]
ret = zip(cm_names, cm_filenames)
points = np.outer(np.ones(10), np.arange(0, 1, 0.01))
if not os.path.exists(CMAP_PREVIEW_PATH):
try:
os.mkdir(CMAP_PREVIEW_PATH)
except Exception as ex:
print('Unable to create preview path: %s' % ex)
return ret
for cm_name, fn in zip(cm_names, cm_filenames):
if not os.path.exists(fn):
print('Generating colormap preview: %s' % fn)
canvas = MplCanvas(width=2, height=0.25, dpi=50)
fig = canvas.figure
fig.clear()
ax = fig.add_subplot(1, 1, 1)
ax.axis("off")
fig.subplots_adjust(top=1, left=0, right=1, bottom=0)
_cm = mpl.cm.get_cmap(cm_name)
ax.imshow(points, aspect='auto', cmap=_cm, origin='lower')
try:
fig.savefig(fn)
except Exception as ex:
print('Unable to create color map preview "%s"' % fn,
file=sys.stderr)
break
return ret
def update_color_maps(self):
size = None
for i, (cm_name, fn) in enumerate(self.create_cmap_previews()):
print('Color map', fn)
if os.path.exists(fn):
self.color_map.addItem(QtGui.QIcon(fn), cm_name)
if size is None:
size = QtGui.QPixmap(fn).size()
self.color_map.setIconSize(size)
else:
self.color_map.addItem(cm_name)
if cm_name == self.CM_DEFAULT:
self.color_map.setCurrentIndex(i)
@property
def settings(self):
return QtCore.QSettings('BNL', 'DPC-GUI')
def save_settings(self):
settings = self.settings
for key, (getter, setter) in self._settings.items():
settings.setValue(key, getter())
settings.setValue('geometry', self.geometry())
settings.setValue('splitter', self.splitter.saveState())
def load_settings(self):
settings = self.settings
for key, (getter, setter) in self._settings.items():
value = settings.value(key).toPyObject()
if value is not None:
setter(value)
try:
self.setGeometry(settings.value('geometry').toPyObject())
self.splitter.restoreState(settings.value('splitter').toByteArray())
except Exception:  # first run: no saved geometry/splitter state yet
pass
def closeEvent(self, event=None):
self.save_settings()
@property
def dx(self):
return float(self.dx_widget.text())
@property
def dy(self):
return float(self.dy_widget.text())
@property
def processes(self):
return int(self.processes_widget.text())
@property
def file_format(self):
return str(self.file_widget.text())
@property
def pixel_size(self):
return self.pixel_widget.value()
@property
def focus(self):
return self.focus_widget.value()
@property
def energy(self):
return self.energy_widget.value()
@property
def rows(self):
return self.rows_widget.value()
@property
def cols(self):
return self.cols_widget.value()
@property
def first_image(self):
return self.first_widget.value()
@property
def ref_image(self):
return self.ref_widget.value()
@property
def roi_x1(self):
return self.roi_x1_widget.value()
@property
def roi_x2(self):
return self.roi_x2_widget.value()
@property
def roi_y1(self):
return self.roi_y1_widget.value()
@property
def roi_y2(self):
return self.roi_y2_widget.value()
@property
def bad_pixels(self):
w = self.bad_pixels_widget
def fix_tuple(item):
item = str(item.text())
return [int(x) for x in item.split(',')]
return [fix_tuple(w.item(i)) for i in range(w.count())]
def _bad_pixels_menu(self, pos):
def add():
s, ok = QtGui.QInputDialog.getText(self, 'Position?', 'Position in the format: x, y')
if ok:
s = str(s)
x, y = s.split(',')
x = int(x)
y = int(y)
self.bad_pixels_widget.addItem('%d, %d' % (x, y))
def remove():
rows = [index.row() for index in self.bad_pixels_widget.selectedIndexes()]
for row in reversed(sorted(rows)):
self.bad_pixels_widget.takeItem(row)
def clear():
self.bad_pixels_widget.clear()
self.menu = menu = QtGui.QMenu()
add_action = menu.addAction('&Add', add)
remove_action = menu.addAction('&Remove', remove)
clear_action = menu.addAction('&Clear', clear)
menu.popup(self.bad_pixels_widget.mapToGlobal(pos))
def load_from_scan(self):
filename = QtGui.QFileDialog.getOpenFileName(self, 'Scan filename', self.last_path, '*.spec')
if not filename:
return
self.last_path = filename
print('Loading %s' % filename)
with pyspecfile.SPECFileReader(filename, parse_data=False) as f:
scans = dict((int(scan['number']), scan) for scan in f.scans)
scan_info = ['%04d - %s' % (number, scan['command'])
for number, scan in scans.items()
if 'mesh' in scan['command']]
scan_info.sort()
print('\n'.join(scan_info))
s, ok = QtGui.QInputDialog.getItem(self, 'Scan selection', 'Scan number?', scan_info, 0, False)
if ok:
print('Selected scan', s)
number = int(s.split(' ')[0])
sd = scans[number]
f.parse_data(sd)
timepix_index = sd['columns'].index('tpx_image')
line0 = sd['lines'][0]
timepix_first_image = int(line0[timepix_index])
try:
ion1_index = sd['columns'].index('Ion1')
self.ion_data = np.array([line[ion1_index] for line in sd['lines']])
except Exception as ex:
print('Failed loading Ion1 data (%s) %s' % (ex, ex.__class__.__name__))
self.ion_data = None
print('First timepix image:', timepix_first_image)
self.ref_widget.setValue(timepix_first_image - 1)
self.first_widget.setValue(timepix_first_image - 1)
command = sd['command'].replace('  ', ' ')  # collapse double spaces before split
x = [2, 3, 4] # x start, end, points
y = [6, 7, 8] # y start, end, points
info = command.split(' ')
x_info = [float(info[i]) for i in x]
y_info = [float(info[i]) for i in y]
dx = (x_info[1] - x_info[0]) / (x_info[2] - 1)
dy = (y_info[1] - y_info[0]) / (y_info[2] - 1)
self.rows_widget.setValue(int(y_info[-1]))
self.cols_widget.setValue(int(x_info[-1]))
self.dx_widget.setValue(float(dx))
self.dy_widget.setValue(float(dy))
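# Worked example of the mesh-command parsing above (hypothetical command
# string 'mesh sx 0.0 10.0 11 sy 0.0 5.0 6 0.1'):
#   info[2:5] -> x_info = [0.0, 10.0, 11.0]; info[6:9] -> y_info = [0.0, 5.0, 6.0]
#   dx = (10.0 - 0.0) / (11 - 1) = 1.0; dy = (5.0 - 0.0) / (6 - 1) = 1.0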
@property
def solver(self):
return SOLVERS[self.solver_widget.currentIndex()]
def set_solver(self, solver):
self.solver_widget.setCurrentIndex(SOLVERS.index(solver))
def set_bad_pixels(self, pixels):
w = self.bad_pixels_widget
w.clear()
for item in pixels:
x, y = item
w.addItem('%d, %d' % (x, y, ))
@property
def dpc_settings(self):
ret = {}
for key, (getter, setter) in self._settings.items():
ret[key] = getter()
return ret
def start(self):
self.line_btn.setEnabled(False)
self.direction_btn.setEnabled(False)
self.removal_btn.setEnabled(False)
self.confirm_btn.setEnabled(False)
if self._thread is not None and self._thread.isFinished():
self._thread = None
if self._thread is None:
if self.processes == 0:
pool = None
else:
pool = mp.Pool(processes=self.processes)
thread = self._thread = DPCThread(self.canvas, pool=pool)
thread.dpc_settings = self.dpc_settings
del thread.dpc_settings['processes']
del thread.dpc_settings['last_path']
thread.start()
self.set_running(True)
def set_running(self, running):
self.start_widget.setEnabled(not running)
self.stop_widget.setEnabled(running)
def stop(self):
if self._thread is not None:
pool = self._thread.pool
if pool is not None:
pool.terminate()
self._thread.pool = None
time.sleep(0.2)
self._thread.terminate()
self._thread = None
self.set_running(False)
def save(self):
filename = QtGui.QFileDialog.getSaveFileName(self, 'Save filename prefix', '', '')
if not filename:
return
arrays = [('gx', self.gx),
('gy', self.gy),
('phi', self.phi),
('a', self.a)]
for name, arr in arrays:
im = PIL.Image.fromarray(arr)
im.save('%s_%s.tif' % (filename, name))
np.savetxt('%s_%s.txt' % (filename, name), arr)
if __name__ == '__main__':
app = QtGui.QApplication(sys.argv)
dialog = DPCDialog()
dialog.show()
app.installEventFilter(dialog)
sys.exit(app.exec_())

# /Source Codes/CodeJamData/13/21/1.py
def solvable(A, motes, adds):
for m in motes:
while m >= A and adds > 0:
A += A - 1
adds -= 1
if m >= A:
return False
A += m
return True
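
# Worked example (hypothetical values): A=2, adds=1, motes=[1, 1, 4].
# Absorb 1 -> A=3, absorb 1 -> A=4; the mote 4 now satisfies m >= A, so
# spend the single add (A += A - 1 -> 7) and absorb it -> A=11: solvable.
# With adds=0 the loop stops at the 4 and the function returns False.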
def solve_case(test_case):
A, N = map(int, raw_input().split())
motes = sorted(map(int, raw_input().split()))
best = 1000000000
for adds in xrange(N + 1):
for removes in xrange(N + 1):
if solvable(A, motes[:N - removes], adds):
best = min(best, adds + removes)
print "Case #{0}: {1}".format(test_case, best)
for test_case in xrange(1, int(raw_input()) + 1):
    solve_case(test_case)

# applications/ji164/models/fdproduct0cart.py
#
# table for controller: product_cart
#
from gluon.contrib.populate import populate
db.define_table('dproduct0cart',
Field('f0', label='key', writable = True , length= 1000),
Field('f1', 'text', label='data string', length= 1000),
Field('f2', 'text', label='save data string', length= 1000, default='' ),
)
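#
# Example lookup (sketch; web2py DAL, run inside the app context): f0 is the
# key -- a widget-type prefix plus a numeric id -- and f1 the display string.
#
#   row = db(db.dproduct0cart.f0 == 'sx12846').select().first()
#   if row:
#       print(row.f1)  # '(12846)Dashboard v.1'
#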
#
if not db(db.dproduct0cart.id ).count():
db.dproduct0cart.insert( f0= 'sp12837', f1= '(12837)outdated')
db.dproduct0cart.insert( f0= 'pc12838', f1= '(12838)to improve your experience.')
db.dproduct0cart.insert( f0= 'aa12839', f1= '(12839)upgrade your browser')
db.dproduct0cart.insert( f0= 'sx12844', f1= '(12844)Ecommerce')
db.dproduct0cart.insert( f0= 'sx12846', f1= '(12846)Dashboard v.1')
db.dproduct0cart.insert( f0= 'sx12848', f1= '(12848)Dashboard v.2')
db.dproduct0cart.insert( f0= 'sx12850', f1= '(12850)Dashboard v.3')
db.dproduct0cart.insert( f0= 'sx12852', f1= '(12852)Product List')
db.dproduct0cart.insert( f0= 'sx12854', f1= '(12854)Product Edit')
db.dproduct0cart.insert( f0= 'sx12856', f1= '(12856)Product Detail')
db.dproduct0cart.insert( f0= 'sx12858', f1= '(12858)Product Cart')
db.dproduct0cart.insert( f0= 'sx12860', f1= '(12860)Product Payment')
db.dproduct0cart.insert( f0= 'sx12862', f1= '(12862)Analytics')
db.dproduct0cart.insert( f0= 'sx12864', f1= '(12864)Widgets')
db.dproduct0cart.insert( f0= 'sx12866', f1= '(12866)Mailbox')
db.dproduct0cart.insert( f0= 'sx12868', f1= '(12868)Inbox')
db.dproduct0cart.insert( f0= 'sx12870', f1= '(12870)View Mail')
db.dproduct0cart.insert( f0= 'sx12872', f1= '(12872)Compose Mail')
db.dproduct0cart.insert( f0= 'sx12874', f1= '(12874)Interface')
db.dproduct0cart.insert( f0= 'sx12876', f1= '(12876)Google Map')
db.dproduct0cart.insert( f0= 'sx12878', f1= '(12878)Data Maps')
db.dproduct0cart.insert( f0= 'sx12880', f1= '(12880)Pdf Viewer')
db.dproduct0cart.insert( f0= 'sx12882', f1= '(12882)X-Editable')
db.dproduct0cart.insert( f0= 'sx12884', f1= '(12884)Code Editor')
db.dproduct0cart.insert( f0= 'sx12886', f1= '(12886)Tree View')
db.dproduct0cart.insert( f0= 'sx12888', f1= '(12888)Preloader')
db.dproduct0cart.insert( f0= 'sx12890', f1= '(12890)Images Cropper')
db.dproduct0cart.insert( f0= 'sx12892', f1= '(12892)Miscellaneous')
db.dproduct0cart.insert( f0= 'sx12894', f1= '(12894)File Manager')
db.dproduct0cart.insert( f0= 'sx12896', f1= '(12896)Blog')
db.dproduct0cart.insert( f0= 'sx12898', f1= '(12898)Blog Details')
db.dproduct0cart.insert( f0= 'sx12900', f1= '(12900)404 Page')
db.dproduct0cart.insert( f0= 'sx12902', f1= '(12902)500 Page')
db.dproduct0cart.insert( f0= 'sx12904', f1= '(12904)Charts')
db.dproduct0cart.insert( f0= 'sx12906', f1= '(12906)Bar Charts')
db.dproduct0cart.insert( f0= 'sx12908', f1= '(12908)Line Charts')
db.dproduct0cart.insert( f0= 'sx12910', f1= '(12910)Area Charts')
db.dproduct0cart.insert( f0= 'sx12912', f1= '(12912)Rounded Charts')
db.dproduct0cart.insert( f0= 'sx12914', f1= '(12914)C3 Charts')
db.dproduct0cart.insert( f0= 'sx12916', f1= '(12916)Sparkline Charts')
db.dproduct0cart.insert( f0= 'sx12918', f1= '(12918)Peity Charts')
db.dproduct0cart.insert( f0= 'sx12920', f1= '(12920)Data Tables')
db.dproduct0cart.insert( f0= 'sx12922', f1= '(12922)Static Table')
db.dproduct0cart.insert( f0= 'sx12924', f1= '(12924)Data Table')
db.dproduct0cart.insert( f0= 'sx12926', f1= '(12926)Forms Elements')
db.dproduct0cart.insert( f0= 'sx12928', f1= '(12928)Bc Form Elements')
db.dproduct0cart.insert( f0= 'sx12930', f1= '(12930)Ad Form Elements')
db.dproduct0cart.insert( f0= 'sx12932', f1= '(12932)Password Meter')
db.dproduct0cart.insert( f0= 'sx12934', f1= '(12934)Multi Upload')
db.dproduct0cart.insert( f0= 'sx12936', f1= '(12936)Text Editor')
db.dproduct0cart.insert( f0= 'sx12938', f1= '(12938)Dual List Box')
db.dproduct0cart.insert( f0= 'sx12940', f1= '(12940)App views')
db.dproduct0cart.insert( f0= 'sx12942', f1= '(12942)Notifications')
db.dproduct0cart.insert( f0= 'sx12944', f1= '(12944)Alerts')
db.dproduct0cart.insert( f0= 'sx12946', f1= '(12946)Modals')
db.dproduct0cart.insert( f0= 'sx12948', f1= '(12948)Buttons')
db.dproduct0cart.insert( f0= 'sx12950', f1= '(12950)Tabs')
db.dproduct0cart.insert( f0= 'sx12952', f1= '(12952)Accordion')
db.dproduct0cart.insert( f0= 'sx12953', f1= '(12953)Pages')
db.dproduct0cart.insert( f0= 'sx12955', f1= '(12955)Login')
db.dproduct0cart.insert( f0= 'sx12957', f1= '(12957)Register')
db.dproduct0cart.insert( f0= 'sx12959', f1= '(12959)Lock')
db.dproduct0cart.insert( f0= 'sx12961', f1= '(12961)Password Recovery')
db.dproduct0cart.insert( f0= 'sx12962', f1= '(12962)Landing Page')
db.dproduct0cart.insert( f0= 'aa12965', f1= '(12965)Home')
db.dproduct0cart.insert( f0= 'aa12966', f1= '(12966)About')
db.dproduct0cart.insert( f0= 'aa12967', f1= '(12967)Services')
db.dproduct0cart.insert( f0= 'aa12968', f1= '(12968)Support')
db.dproduct0cart.insert( f0= 'hh12969', f1= '(12969)Message')
db.dproduct0cart.insert( f0= 'sx12971', f1= '(12971)16 Sept')
db.dproduct0cart.insert( f0= 'hh12972', f1= '(12972)Advanda Cro')
db.dproduct0cart.insert( f0= 'pa12973', f1= '(12973)Please done this project as soon possible.')
db.dproduct0cart.insert( f0= 'sx12975', f1= '(12975)16 Sept')
db.dproduct0cart.insert( f0= 'hh12976', f1= '(12976)Sulaiman din')
db.dproduct0cart.insert( f0= 'pa12977', f1= '(12977)Please done this project as soon possible.')
db.dproduct0cart.insert( f0= 'sx12979', f1= '(12979)16 Sept')
db.dproduct0cart.insert( f0= 'hh12980', f1= '(12980)Victor Jara')
db.dproduct0cart.insert( f0= 'pa12981', f1= '(12981)Please done this project as soon possible.')
db.dproduct0cart.insert( f0= 'sx12983', f1= '(12983)16 Sept')
db.dproduct0cart.insert( f0= 'hh12984', f1= '(12984)Victor Jara')
db.dproduct0cart.insert( f0= 'pa12985', f1= '(12985)Please done this project as soon possible.')
db.dproduct0cart.insert( f0= 'aa12986', f1= '(12986)View All Messages')
db.dproduct0cart.insert( f0= 'hh12987', f1= '(12987)Notifications')
db.dproduct0cart.insert( f0= 'sx12988', f1= '(12988)16 Sept')
db.dproduct0cart.insert( f0= 'hh12989', f1= '(12989)Advanda Cro')
db.dproduct0cart.insert( f0= 'pa12990', f1= '(12990)Please done this project as soon possible.')
db.dproduct0cart.insert( f0= 'sx12991', f1= '(12991)16 Sept')
db.dproduct0cart.insert( f0= 'hh12992', f1= '(12992)Sulaiman din')
db.dproduct0cart.insert( f0= 'pa12993', f1= '(12993)Please done this project as soon possible.')
db.dproduct0cart.insert( f0= 'sx12994', f1= '(12994)16 Sept')
db.dproduct0cart.insert( f0= 'hh12995', f1= '(12995)Victor Jara')
db.dproduct0cart.insert( f0= 'pa12996', f1= '(12996)Please done this project as soon possible.')
db.dproduct0cart.insert( f0= 'sx12997', f1= '(12997)16 Sept')
db.dproduct0cart.insert( f0= 'hh12998', f1= '(12998)Victor Jara')
db.dproduct0cart.insert( f0= 'pa12999', f1= '(12999)Please done this project as soon possible.')
db.dproduct0cart.insert( f0= 'aa13000', f1= '(13000)View All Notification')
db.dproduct0cart.insert( f0= 'sx13001', f1= '(13001)Advanda Cro')
db.dproduct0cart.insert( f0= 'aa13005', f1= '(13005)News')
db.dproduct0cart.insert( f0= 'aa13006', f1= '(13006)Activity')
db.dproduct0cart.insert( f0= 'aa13007', f1= '(13007)Settings')
db.dproduct0cart.insert( f0= 'pa13008', f1= '(13008)You have 10 New News.')
db.dproduct0cart.insert( f0= 'pa13010', f1= '(13010)The point of using Lorem Ipsum is that it has a more-or-less normal.')
db.dproduct0cart.insert( f0= 'sp13011', f1= '(13011)Yesterday 2:45 pm')
db.dproduct0cart.insert( f0= 'pa13013', f1= '(13013)The point of using Lorem Ipsum is that it has a more-or-less normal.')
db.dproduct0cart.insert( f0= 'sp13014', f1= '(13014)Yesterday 2:45 pm')
db.dproduct0cart.insert( f0= 'pa13016', f1= '(13016)The point of using Lorem Ipsum is that it has a more-or-less normal.')
db.dproduct0cart.insert( f0= 'sp13017', f1= '(13017)Yesterday 2:45 pm')
db.dproduct0cart.insert( f0= 'pa13019', f1= '(13019)The point of using Lorem Ipsum is that it has a more-or-less normal.')
db.dproduct0cart.insert( f0= 'sp13020', f1= '(13020)Yesterday 2:45 pm')
db.dproduct0cart.insert( f0= 'pa13022', f1= '(13022)The point of using Lorem Ipsum is that it has a more-or-less normal.')
db.dproduct0cart.insert( f0= 'sp13023', f1= '(13023)Yesterday 2:45 pm')
db.dproduct0cart.insert( f0= 'pa13025', f1= '(13025)The point of using Lorem Ipsum is that it has a more-or-less normal.')
db.dproduct0cart.insert( f0= 'sp13026', f1= '(13026)Yesterday 2:45 pm')
db.dproduct0cart.insert( f0= 'pa13028', f1= '(13028)The point of using Lorem Ipsum is that it has a more-or-less normal.')
db.dproduct0cart.insert( f0= 'sp13029', f1= '(13029)Yesterday 2:45 pm')
db.dproduct0cart.insert( f0= 'pa13031', f1= '(13031)The point of using Lorem Ipsum is that it has a more-or-less normal.')
db.dproduct0cart.insert( f0= 'sp13032', f1= '(13032)Yesterday 2:45 pm')
db.dproduct0cart.insert( f0= 'pa13034', f1= '(13034)The point of using Lorem Ipsum is that it has a more-or-less normal.')
db.dproduct0cart.insert( f0= 'sp13035', f1= '(13035)Yesterday 2:45 pm')
db.dproduct0cart.insert( f0= 'pa13037', f1= '(13037)The point of using Lorem Ipsum is that it has a more-or-less normal.')
db.dproduct0cart.insert( f0= 'sp13038', f1= '(13038)Yesterday 2:45 pm')
db.dproduct0cart.insert( f0= 'pa13039', f1= '(13039)You have 20 Recent Activity.')
db.dproduct0cart.insert( f0= 'hh13040', f1= '(13040)New User Registered')
db.dproduct0cart.insert( f0= 'pa13041', f1= '(13041)The point of using Lorem Ipsum is that it has a more or less normal.')
db.dproduct0cart.insert( f0= 'sx13042', f1= '(13042)1 hours ago')
db.dproduct0cart.insert( f0= 'hh13043', f1= '(13043)New Order Received')
db.dproduct0cart.insert( f0= 'pa13044', f1= '(13044)The point of using Lorem Ipsum is that it has a more or less normal.')
db.dproduct0cart.insert( f0= 'sx13045', f1= '(13045)2 hours ago')
db.dproduct0cart.insert( f0= 'hh13046', f1= '(13046)New Order Received')
db.dproduct0cart.insert( f0= 'pa13047', f1= '(13047)The point of using Lorem Ipsum is that it has a more or less normal.')
db.dproduct0cart.insert( f0= 'sx13048', f1= '(13048)3 hours ago')
db.dproduct0cart.insert( f0= 'hh13049', f1= '(13049)New Order Received')
db.dproduct0cart.insert( f0= 'pa13050', f1= '(13050)The point of using Lorem Ipsum is that it has a more or less normal.')
db.dproduct0cart.insert( f0= 'sx13051', f1= '(13051)4 hours ago')
db.dproduct0cart.insert( f0= 'hh13052', f1= '(13052)New User Registered')
db.dproduct0cart.insert( f0= 'pa13053', f1= '(13053)The point of using Lorem Ipsum is that it has a more or less normal.')
db.dproduct0cart.insert( f0= 'sx13054', f1= '(13054)5 hours ago')
db.dproduct0cart.insert( f0= 'hh13055', f1= '(13055)New Order')
db.dproduct0cart.insert( f0= 'pa13056', f1= '(13056)The point of using Lorem Ipsum is that it has a more or less normal.')
db.dproduct0cart.insert( f0= 'sx13057', f1= '(13057)6 hours ago')
db.dproduct0cart.insert( f0= 'hh13058', f1= '(13058)New User')
db.dproduct0cart.insert( f0= 'pa13059', f1= '(13059)The point of using Lorem Ipsum is that it has a more or less normal.')
db.dproduct0cart.insert( f0= 'sx13060', f1= '(13060)7 hours ago')
db.dproduct0cart.insert( f0= 'hh13061', f1= '(13061)New Order')
db.dproduct0cart.insert( f0= 'pa13062', f1= '(13062)The point of using Lorem Ipsum is that it has a more or less normal.')
db.dproduct0cart.insert( f0= 'sx13063', f1= '(13063)9 hours ago')
db.dproduct0cart.insert( f0= 'pa13064', f1= '(13064)You have 20 Settings. 5 not completed.')
db.dproduct0cart.insert( f0= 'hh13065', f1= '(13065)Show notifications')
db.dproduct0cart.insert( f0= 'hh13066', f1= '(13066)Disable Chat')
db.dproduct0cart.insert( f0= 'hh13067', f1= '(13067)Enable history')
db.dproduct0cart.insert( f0= 'hh13068', f1= '(13068)Show charts')
db.dproduct0cart.insert( f0= 'hh13069', f1= '(13069)Update everyday')
db.dproduct0cart.insert( f0= 'hh13070', f1= '(13070)Global search')
db.dproduct0cart.insert( f0= 'hh13071', f1= '(13071)Offline users')
db.dproduct0cart.insert( f0= 'aa13073', f1= '(13073)Dashboard v.1')
db.dproduct0cart.insert( f0= 'aa13075', f1= '(13075)Dashboard v.2')
db.dproduct0cart.insert( f0= 'aa13077', f1= '(13077)Dashboard v.3')
db.dproduct0cart.insert( f0= 'aa13079', f1= '(13079)Product List')
db.dproduct0cart.insert( f0= 'aa13081', f1= '(13081)Product Edit')
db.dproduct0cart.insert( f0= 'aa13083', f1= '(13083)Product Detail')
db.dproduct0cart.insert( f0= 'aa13085', f1= '(13085)Product Cart')
db.dproduct0cart.insert( f0= 'aa13087', f1= '(13087)Product Payment')
db.dproduct0cart.insert( f0= 'aa13089', f1= '(13089)Analytics')
db.dproduct0cart.insert( f0= 'aa13091', f1= '(13091)Widgets')
db.dproduct0cart.insert( f0= 'aa13093', f1= '(13093)Inbox')
db.dproduct0cart.insert( f0= 'aa13095', f1= '(13095)View Mail')
db.dproduct0cart.insert( f0= 'aa13097', f1= '(13097)Compose Mail')
db.dproduct0cart.insert( f0= 'aa13099', f1= '(13099)File Manager')
db.dproduct0cart.insert( f0= 'aa13101', f1= '(13101)Contacts Client')
db.dproduct0cart.insert( f0= 'aa13103', f1= '(13103)Project')
db.dproduct0cart.insert( f0= 'aa13105', f1= '(13105)Project Details')
db.dproduct0cart.insert( f0= 'aa13107', f1= '(13107)Blog')
db.dproduct0cart.insert( f0= 'aa13109', f1= '(13109)Blog Details')
db.dproduct0cart.insert( f0= 'aa13111', f1= '(13111)404 Page')
db.dproduct0cart.insert( f0= 'aa13113', f1= '(13113)500 Page')
db.dproduct0cart.insert( f0= 'aa13115', f1= '(13115)Google Map')
db.dproduct0cart.insert( f0= 'aa13117', f1= '(13117)Data Maps')
db.dproduct0cart.insert( f0= 'aa13119', f1= '(13119)Pdf Viewer')
db.dproduct0cart.insert( f0= 'aa13121', f1= '(13121)X-Editable')
db.dproduct0cart.insert( f0= 'aa13123', f1= '(13123)Code Editor')
db.dproduct0cart.insert( f0= 'aa13125', f1= '(13125)Tree View')
db.dproduct0cart.insert( f0= 'aa13127', f1= '(13127)Preloader')
db.dproduct0cart.insert( f0= 'aa13129', f1= '(13129)Images Cropper')
db.dproduct0cart.insert( f0= 'aa13131', f1= '(13131)Bar Charts')
db.dproduct0cart.insert( f0= 'aa13133', f1= '(13133)Line Charts')
db.dproduct0cart.insert( f0= 'aa13135', f1= '(13135)Area Charts')
db.dproduct0cart.insert( f0= 'aa13137', f1= '(13137)Rounded Charts')
db.dproduct0cart.insert( f0= 'aa13139', f1= '(13139)C3 Charts')
db.dproduct0cart.insert( f0= 'aa13141', f1= '(13141)Sparkline Charts')
db.dproduct0cart.insert( f0= 'aa13143', f1= '(13143)Peity Charts')
db.dproduct0cart.insert( f0= 'aa13145', f1= '(13145)Static Table')
db.dproduct0cart.insert( f0= 'aa13147', f1= '(13147)Data Table')
db.dproduct0cart.insert( f0= 'aa13149', f1= '(13149)Basic Form Elements')
db.dproduct0cart.insert( f0= 'aa13151', f1= '(13151)Advanced Form Elements')
db.dproduct0cart.insert( f0= 'aa13153', f1= '(13153)Password Meter')
db.dproduct0cart.insert( f0= 'aa13155', f1= '(13155)Multi Upload')
db.dproduct0cart.insert( f0= 'aa13157', f1= '(13157)Text Editor')
db.dproduct0cart.insert( f0= 'aa13159', f1= '(13159)Dual List Box')
db.dproduct0cart.insert( f0= 'aa13161', f1= '(13161)Basic Form Elements')
db.dproduct0cart.insert( f0= 'aa13163', f1= '(13163)Advanced Form Elements')
db.dproduct0cart.insert( f0= 'aa13165', f1= '(13165)Password Meter')
db.dproduct0cart.insert( f0= 'aa13167', f1= '(13167)Multi Upload')
db.dproduct0cart.insert( f0= 'aa13169', f1= '(13169)Text Editor')
db.dproduct0cart.insert( f0= 'aa13171', f1= '(13171)Dual List Box')
db.dproduct0cart.insert( f0= 'aa13173', f1= '(13173)Login')
db.dproduct0cart.insert( f0= 'aa13175', f1= '(13175)Register')
db.dproduct0cart.insert( f0= 'aa13177', f1= '(13177)Lock')
db.dproduct0cart.insert( f0= 'aa13179', f1= '(13179)Password Recovery')
db.dproduct0cart.insert( f0= 'pb13180', f1= '(13180)Search...')
db.dproduct0cart.insert( f0= 'aa13181', f1= '(13181)Home')
db.dproduct0cart.insert( f0= 'sx13182', f1= '(13182)Product Cart')
db.dproduct0cart.insert( f0= 'hh13183', f1= '(13183)Shopping Cart')
db.dproduct0cart.insert( f0= 'hc13184', f1= '(13184)Shopping')
db.dproduct0cart.insert( f0= 'hh13186', f1= '(13186)Jewelery Title 1')
db.dproduct0cart.insert( f0= 'pa13187', f1= '(13187)Lorem ipsum dolor sit consec te imperdiet iaculis ipsum.')
db.dproduct0cart.insert( f0= 'hh13189', f1= '(13189)Jewelery Title 2')
db.dproduct0cart.insert( f0= 'pa13190', f1= '(13190)Lorem ipsum dolor sit consec te imperdiet iaculis ipsum.')
db.dproduct0cart.insert( f0= 'hh13192', f1= '(13192)Jewelery Title 3')
db.dproduct0cart.insert( f0= 'pa13193', f1= '(13193)Lorem ipsum dolor sit consec te imperdiet iaculis ipsum.')
db.dproduct0cart.insert( f0= 'hh13195', f1= '(13195)Jewelery Title 4')
db.dproduct0cart.insert( f0= 'pa13196', f1= '(13196)Lorem ipsum dolor sit consec te imperdiet iaculis ipsum.')
db.dproduct0cart.insert( f0= 'hh13198', f1= '(13198)Jewelery Title 5')
db.dproduct0cart.insert( f0= 'pa13199', f1= '(13199)Lorem ipsum dolor sit consec te imperdiet iaculis ipsum.')
db.dproduct0cart.insert( f0= 'hh13200', f1= '(13200)Delivery Details')
db.dproduct0cart.insert( f0= 'hc13201', f1= '(13201)Shopping')
db.dproduct0cart.insert( f0= 'hh13202', f1= '(13202)Payment Details')
db.dproduct0cart.insert( f0= 'hc13203', f1= '(13203)Shopping')
db.dproduct0cart.insert( f0= 'bu13225', f1= '(13225)Submit')
db.dproduct0cart.insert( f0= 'hh13226', f1= '(13226)Confirmation')
db.dproduct0cart.insert( f0= 'hh13227', f1= '(13227)Congratulations! Your Order is accepted.')
db.dproduct0cart.insert( f0= 'bu13228', f1= '(13228)Track Order')
db.dproduct0cart.insert( f0= 'pc13229', f1= '(13229)All rights reserved.')
db.dproduct0cart.insert( f0= 'pf13230', f1= '(13230)Copyright © 2018')
db.commit()
#
| [
"ab96343@gmail.com"
] | ab96343@gmail.com |
d35cc85e0174d8db3ca2cc290807848677c9a764 | 53fab060fa262e5d5026e0807d93c75fb81e67b9 | /backup/user_325/ch21_2020_03_02_20_41_04_179689.py | 7debae918cd7832c89f016bfcd18e1ac314c922c | [] | no_license | gabriellaec/desoft-analise-exercicios | b77c6999424c5ce7e44086a12589a0ad43d6adca | 01940ab0897aa6005764fc220b900e4d6161d36b | refs/heads/main | 2023-01-31T17:19:42.050628 | 2020-12-16T05:21:31 | 2020-12-16T05:21:31 | 306,735,108 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 277 | py | x = int(input("quantos dias "))
y = int(input("quantas horas "))
z = int(input("quantos minutos "))
w = int(input("quantos segundos "))
def soma(dias,horas,minutos,segundos):
d = 86400*dias
h = 3600*horas
m = 60*minutos
s = segundos
c = d+h+m+s
return c | [
"you@example.com"
] | you@example.com |
42a6cae0ad7fee4653a7741f2cc9a49c1c33ea4a | 600df3590cce1fe49b9a96e9ca5b5242884a2a70 | /third_party/chromite/third_party/gcloud/test_iterator.py | 102da9655d536db9e4cca40e7e1bf55cdc5a9640 | [
"LGPL-2.0-or-later",
"GPL-1.0-or-later",
"MIT",
"Apache-2.0",
"BSD-3-Clause",
"LicenseRef-scancode-python-cwi",
"Python-2.0",
"LicenseRef-scancode-other-copyleft",
"LicenseRef-scancode-free-unknown",
"LicenseRef-scancode-unknown-license-reference"
] | permissive | metux/chromium-suckless | efd087ba4f4070a6caac5bfbfb0f7a4e2f3c438a | 72a05af97787001756bae2511b7985e61498c965 | refs/heads/orig | 2022-12-04T23:53:58.681218 | 2017-04-30T10:59:06 | 2017-04-30T23:35:58 | 89,884,931 | 5 | 3 | BSD-3-Clause | 2022-11-23T20:52:53 | 2017-05-01T00:09:08 | null | UTF-8 | Python | false | false | 9,979 | py | # Copyright 2015 Google Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import unittest2
class TestIterator(unittest2.TestCase):
def _getTargetClass(self):
from gcloud.iterator import Iterator
return Iterator
def _makeOne(self, *args, **kw):
return self._getTargetClass()(*args, **kw)
def test_ctor(self):
connection = _Connection()
client = _Client(connection)
PATH = '/foo'
iterator = self._makeOne(client, PATH)
self.assertTrue(iterator.client is client)
self.assertEqual(iterator.path, PATH)
self.assertEqual(iterator.page_number, 0)
self.assertEqual(iterator.next_page_token, None)
def test___iter__(self):
PATH = '/foo'
KEY1 = 'key1'
KEY2 = 'key2'
ITEM1, ITEM2 = object(), object()
ITEMS = {KEY1: ITEM1, KEY2: ITEM2}
def _get_items(response):
for item in response.get('items', []):
yield ITEMS[item['name']]
connection = _Connection({'items': [{'name': KEY1}, {'name': KEY2}]})
client = _Client(connection)
iterator = self._makeOne(client, PATH)
iterator.get_items_from_response = _get_items
self.assertEqual(list(iterator), [ITEM1, ITEM2])
kw, = connection._requested
self.assertEqual(kw['method'], 'GET')
self.assertEqual(kw['path'], PATH)
self.assertEqual(kw['query_params'], {})
def test_has_next_page_new(self):
connection = _Connection()
client = _Client(connection)
PATH = '/foo'
iterator = self._makeOne(client, PATH)
self.assertTrue(iterator.has_next_page())
def test_has_next_page_w_number_no_token(self):
connection = _Connection()
client = _Client(connection)
PATH = '/foo'
iterator = self._makeOne(client, PATH)
iterator.page_number = 1
self.assertFalse(iterator.has_next_page())
def test_has_next_page_w_number_w_token(self):
connection = _Connection()
client = _Client(connection)
PATH = '/foo'
TOKEN = 'token'
iterator = self._makeOne(client, PATH)
iterator.page_number = 1
iterator.next_page_token = TOKEN
self.assertTrue(iterator.has_next_page())
def test_get_query_params_no_token(self):
connection = _Connection()
client = _Client(connection)
PATH = '/foo'
iterator = self._makeOne(client, PATH)
self.assertEqual(iterator.get_query_params(), {})
def test_get_query_params_w_token(self):
connection = _Connection()
client = _Client(connection)
PATH = '/foo'
TOKEN = 'token'
iterator = self._makeOne(client, PATH)
iterator.next_page_token = TOKEN
self.assertEqual(iterator.get_query_params(),
{'pageToken': TOKEN})
def test_get_query_params_extra_params(self):
connection = _Connection()
client = _Client(connection)
PATH = '/foo'
extra_params = {'key': 'val'}
iterator = self._makeOne(client, PATH, extra_params=extra_params)
self.assertEqual(iterator.get_query_params(), extra_params)
def test_get_query_params_w_token_and_extra_params(self):
connection = _Connection()
client = _Client(connection)
PATH = '/foo'
TOKEN = 'token'
extra_params = {'key': 'val'}
iterator = self._makeOne(client, PATH, extra_params=extra_params)
iterator.next_page_token = TOKEN
expected_query = extra_params.copy()
expected_query.update({'pageToken': TOKEN})
self.assertEqual(iterator.get_query_params(), expected_query)
def test_get_query_params_w_token_collision(self):
connection = _Connection()
client = _Client(connection)
PATH = '/foo'
extra_params = {'pageToken': 'val'}
self.assertRaises(ValueError, self._makeOne, client, PATH,
extra_params=extra_params)
def test_get_next_page_response_new_no_token_in_response(self):
PATH = '/foo'
TOKEN = 'token'
KEY1 = 'key1'
KEY2 = 'key2'
connection = _Connection({'items': [{'name': KEY1}, {'name': KEY2}],
'nextPageToken': TOKEN})
client = _Client(connection)
iterator = self._makeOne(client, PATH)
response = iterator.get_next_page_response()
self.assertEqual(response['items'], [{'name': KEY1}, {'name': KEY2}])
self.assertEqual(iterator.page_number, 1)
self.assertEqual(iterator.next_page_token, TOKEN)
kw, = connection._requested
self.assertEqual(kw['method'], 'GET')
self.assertEqual(kw['path'], PATH)
self.assertEqual(kw['query_params'], {})
def test_get_next_page_response_no_token(self):
connection = _Connection()
client = _Client(connection)
PATH = '/foo'
iterator = self._makeOne(client, PATH)
iterator.page_number = 1
self.assertRaises(RuntimeError, iterator.get_next_page_response)
def test_reset(self):
connection = _Connection()
client = _Client(connection)
PATH = '/foo'
TOKEN = 'token'
iterator = self._makeOne(client, PATH)
iterator.page_number = 1
iterator.next_page_token = TOKEN
iterator.reset()
self.assertEqual(iterator.page_number, 0)
self.assertEqual(iterator.next_page_token, None)
def test_get_items_from_response_raises_NotImplementedError(self):
PATH = '/foo'
connection = _Connection()
client = _Client(connection)
iterator = self._makeOne(client, PATH)
self.assertRaises(NotImplementedError,
iterator.get_items_from_response, object())
class TestMethodIterator(unittest2.TestCase):
def _getTargetClass(self):
from gcloud.iterator import MethodIterator
return MethodIterator
def _makeOne(self, *args, **kw):
return self._getTargetClass()(*args, **kw)
def test_ctor_defaults(self):
wlm = _WithListMethod()
iterator = self._makeOne(wlm.list_foo)
self.assertEqual(iterator._method, wlm.list_foo)
self.assertEqual(iterator._token, None)
self.assertEqual(iterator._page_size, None)
self.assertEqual(iterator._kw, {})
self.assertEqual(iterator._max_calls, None)
self.assertEqual(iterator._page_num, 0)
def test_ctor_explicit(self):
wlm = _WithListMethod()
TOKEN = wlm._letters
SIZE = 4
CALLS = 2
iterator = self._makeOne(wlm.list_foo, TOKEN, SIZE, CALLS,
foo_type='Bar')
self.assertEqual(iterator._method, wlm.list_foo)
self.assertEqual(iterator._token, TOKEN)
self.assertEqual(iterator._page_size, SIZE)
self.assertEqual(iterator._kw, {'foo_type': 'Bar'})
self.assertEqual(iterator._max_calls, CALLS)
self.assertEqual(iterator._page_num, 0)
def test___iter___defaults(self):
import string
wlm = _WithListMethod()
iterator = self._makeOne(wlm.list_foo)
found = []
for char in iterator:
found.append(char)
self.assertEqual(found, list(string.printable))
self.assertEqual(len(wlm._called_with), len(found) // 10)
for i, (token, size, kw) in enumerate(wlm._called_with):
if i == 0:
self.assertEqual(token, None)
else:
self.assertEqual(token, string.printable[i * 10:])
self.assertEqual(size, None)
self.assertEqual(kw, {})
def test___iter___explicit_size_and_maxcalls_and_kw(self):
import string
wlm = _WithListMethod()
iterator = self._makeOne(wlm.list_foo, page_size=2, max_calls=3,
foo_type='Bar')
found = []
for char in iterator:
found.append(char)
self.assertEqual(found, list(string.printable[:2 * 3]))
self.assertEqual(len(wlm._called_with), len(found) // 2)
for i, (token, size, kw) in enumerate(wlm._called_with):
if i == 0:
self.assertEqual(token, None)
else:
self.assertEqual(token, string.printable[i * 2:])
self.assertEqual(size, 2)
self.assertEqual(kw, {'foo_type': 'Bar'})
class _WithListMethod(object):
def __init__(self):
import string
self._called_with = []
self._letters = string.printable
def list_foo(self, page_token, page_size, **kw):
if page_token is not None:
assert page_token == self._letters
self._called_with.append((page_token, page_size, kw))
if page_size is None:
page_size = 10
page, self._letters = (
self._letters[:page_size], self._letters[page_size:])
token = self._letters or None
return page, token
class _Connection(object):
def __init__(self, *responses):
self._responses = responses
self._requested = []
def api_request(self, **kw):
self._requested.append(kw)
response, self._responses = self._responses[0], self._responses[1:]
return response
class _Client(object):
def __init__(self, connection):
self.connection = connection
| [
"enrico.weigelt@gr13.net"
] | enrico.weigelt@gr13.net |
a05fd43c86e27b01c3f109a88d01540a3b937ad3 | e52765058c96b2123aacf690204d8583a90ad145 | /book_figures/chapter1/fig_dr7_quasar.py | d8d68ebc0777cb56647e6aefdfbe601cda1ee2ea | [
"BSD-3-Clause",
"BSD-2-Clause"
] | permissive | astrofanlee/astroML | dcecb1dbc40dcf8c47deeb11230cff62a451e3c9 | acb02f4385049e7a09ee5eab6f1686e7f119b066 | refs/heads/master | 2020-12-29T03:30:56.700274 | 2013-03-04T16:49:40 | 2013-03-04T16:49:40 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,022 | py | """
SDSS DR7 Quasars
----------------
This example shows how to fetch the SDSS quasar photometric data, and to
plot the relationship between redshift and color.
"""
# Author: Jake VanderPlas <vanderplas@astro.washington.edu>
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
from matplotlib import pyplot as plt
from astroML.datasets import fetch_dr7_quasar
#------------------------------------------------------------
# Fetch the quasar data
data = fetch_dr7_quasar()
# select the first 10000 points
data = data[:10000]
r = data['mag_r']
i = data['mag_i']
z = data['redshift']
#------------------------------------------------------------
# Plot the quasar data
ax = plt.axes()
ax.plot(z, r - i, marker='.', markersize=4, linestyle='none', color='black')
ax.set_xlim(0, 5)
ax.set_ylim(-0.5, 1.0)
ax.set_xlabel('redshift')
ax.set_ylabel('r-i')
plt.show()
| [
"vanderplas@astro.washington.edu"
] | vanderplas@astro.washington.edu |
112a6f1ec6d0a186ec15749a72e95dc940f209bc | 6a6ee6dcb2c6422b51f815ce47e984e461c0d6bf | /mtil/algos/mtbc/mtbc.py | cd413985031afb523aa916043a3c83354530dd11 | [
"ISC"
] | permissive | qxcv/mtil | b4d7aaa193669ad2bd76f42b40f5a07365360bb7 | 62608046efb570b53f8107b8de9a7a1f28aee28a | refs/heads/master | 2022-12-23T03:16:12.079565 | 2020-10-05T05:43:01 | 2020-10-05T05:43:01 | 232,435,789 | 2 | 0 | null | null | null | null | UTF-8 | Python | false | false | 11,759 | py | """Multi-Task Behavioural Cloning (MTBC). Train on several environments, with
one "head" per environment. For now this only works with MILBench environments,
so it assumes that all environments have the same input & output spaces."""
import collections
import os
import re
from magical.benchmarks import EnvName
import numpy as np
from rlpyt.utils.prog_bar import ProgBarCounter
import torch
from torch import nn
import torch.nn.functional as F
from mtil.models import FixedTaskModelWrapper
from mtil.utils.misc import load_state_dict_or_model, tree_map
from mtil.utils.torch import repeat_dataset
LATEST_MARKER = 'LATEST'
def get_latest_path(path_template):
abs_path = os.path.abspath(path_template)
dir_name, base_name = os.path.split(abs_path)
# find last occurrence
latest_ind_rev = base_name[::-1].find(LATEST_MARKER[::-1])
if latest_ind_rev == -1:
raise ValueError(f"No occurrence of marker '{LATEST_MARKER}' in "
f"path template '{path_template}'")
latest_start = len(base_name) - latest_ind_rev - len(LATEST_MARKER)
latest_stop = latest_start + len(LATEST_MARKER)
bn_prefix = base_name[:latest_start]
bn_suffix = base_name[latest_stop:]
best_num = None
best_fn = None
for entry in os.listdir(dir_name):
if not (entry.startswith(bn_prefix) and entry.endswith(bn_suffix)):
continue
end_idx = len(entry) - len(bn_suffix)
num_str = entry[latest_start:end_idx]
try:
num = int(num_str)
except ValueError as ex:
raise ValueError(
f"Error trying to parse file name '{entry}' with template "
                f"'{path_template}': {ex}")
if best_fn is None or num > best_num:
best_fn = entry
best_num = num
if best_fn is None:
raise ValueError("Couldn't find any files matching path template "
f"'{path_template}' in '{dir_name}'")
return os.path.join(dir_name, best_fn)
def copy_model_into_agent_eval(model, agent, prefix='model'):
    """Update `agent` in place so that it contains the weights from `model`.
    Call this before doing evaluation rollouts between epochs of training."""
state_dict = model.state_dict()
assert hasattr(agent, prefix)
updated_state_dict = {
prefix + '.' + key: value
for key, value in state_dict.items()
}
agent.load_state_dict(updated_state_dict)
# make sure agent is in eval mode no matter what
agent.model.eval()
class MinBCWeightingModule(nn.Module):
"""Module for computing min-BC loss weights."""
__constants__ = ['num_demo_sources', 'num_tasks']
def __init__(self, num_tasks, num_demo_sources):
super().__init__()
self.num_demo_sources = num_demo_sources
self.num_tasks = num_tasks
self.weight_logits = nn.Parameter(
# id 0 should never be used
torch.zeros(num_tasks, num_demo_sources + 1))
self.register_parameter('weight_logits', self.weight_logits)
def forward(self, task_ids, variant_ids, source_ids):
# perform a separate softmax for each task_id, then average uniformly
# over task IDs
# TODO: torch.jit() this once you know it works
orig_shape = task_ids.shape
task_ids = task_ids.flatten()
source_ids = source_ids.flatten()
result = torch.zeros_like(task_ids, dtype=torch.float)
max_id = torch.max(source_ids)
min_id = torch.min(source_ids)
assert min_id >= 1 and max_id <= self.num_demo_sources, \
(min_id, max_id, self.num_demo_sources)
for task_id in task_ids.unique():
selected_mask = task_ids == task_id
selected_sources = source_ids[selected_mask]
task_weights = self.weight_logits[task_id]
source_weights = task_weights[selected_sources]
# scaling by this target_weight ensures that each task counts
# equally
target_weight = selected_mask.float().mean()
final_weights = target_weight * F.softmax(source_weights, dim=0)
result[selected_mask] = final_weights
return result.reshape(orig_shape)
def do_training_mt(loader, model, opt, dev, aug_model, min_bc_module,
n_batches):
# @torch.jit.script
def do_loss_forward_back(obs_batch_obs, obs_batch_task, obs_batch_var,
obs_batch_source, acts_batch):
# we don't use the value output
logits_flat, _ = model(obs_batch_obs, task_ids=obs_batch_task)
losses = F.cross_entropy(logits_flat,
acts_batch.long(),
reduction='none')
if min_bc_module is not None:
# weight using a model-dependent strategy
mbc_weights = min_bc_module(obs_batch_task, obs_batch_var,
obs_batch_source)
assert mbc_weights.shape == losses.shape, (mbc_weights.shape,
losses.shape)
loss = (losses * mbc_weights).sum()
else:
# no weighting
loss = losses.mean()
loss.backward()
return losses.detach().cpu().numpy()
# make sure we're in train mode
model.train()
# for logging
loss_ewma = None
losses = []
per_task_losses = collections.defaultdict(lambda: [])
progress = ProgBarCounter(n_batches)
inf_batch_iter = repeat_dataset(loader)
ctr_batch_iter = zip(range(1, n_batches), inf_batch_iter)
for batches_done, loader_batch in ctr_batch_iter:
# (task_ids_batch, obs_batch, acts_batch)
# copy to GPU
obs_batch = loader_batch['obs']
acts_batch = loader_batch['acts']
# reminder: attributes are .observation, .task_id, .variant_id
obs_batch = tree_map(lambda t: t.to(dev), obs_batch)
acts_batch = acts_batch.to(dev)
if aug_model is not None:
# apply augmentations
obs_batch = obs_batch._replace(
observation=aug_model(obs_batch.observation))
# compute loss & take opt step
opt.zero_grad()
batch_losses = do_loss_forward_back(obs_batch.observation,
obs_batch.task_id,
obs_batch.variant_id,
obs_batch.source_id, acts_batch)
opt.step()
# for logging
progress.update(batches_done)
f_loss = np.mean(batch_losses)
loss_ewma = f_loss if loss_ewma is None \
else 0.9 * loss_ewma + 0.1 * f_loss
losses.append(f_loss)
# also track separately for each task
tv_ids = torch.stack((obs_batch.task_id, obs_batch.variant_id), axis=1)
np_tv_ids = tv_ids.cpu().numpy()
assert len(np_tv_ids.shape) == 2 and np_tv_ids.shape[1] == 2, \
np_tv_ids.shape
for tv_id in np.unique(np_tv_ids, axis=0):
tv_mask = np.all(np_tv_ids == tv_id[None], axis=-1)
rel_losses = batch_losses[tv_mask]
if len(rel_losses) > 0:
task_id, variant_id = tv_id
per_task_losses[(task_id, variant_id)] \
.append(np.mean(rel_losses))
progress.stop()
return loss_ewma, losses, per_task_losses
_no_version_re = re.compile(r'^(?P<env_name>.*?)(-v\d+)?$')
_alnum_re = re.compile(r'[a-zA-Z0-9]+')
def make_env_tag(env_name):
"""Take a Gym env name like 'fooBar-BazQux-v3' and return more concise string
of the form 'FooBarBazQux' (no version string, no non-alphanumeric
characters, letters that formerly separated words are always
capitalised)."""
no_version = _no_version_re.match(env_name).groupdict()['env_name']
alnum_parts = _alnum_re.findall(no_version)
final_name = ''.join(part[0].upper() + part[1:] for part in alnum_parts)
return final_name
def strip_mb_preproc_name(env_name):
"""Strip any preprocessor name from a MAGICAL env name."""
en = EnvName(env_name)
return '-'.join((en.name_prefix, en.demo_test_spec, en.version_suffix))
def wrap_model_for_fixed_task(model, env_name):
"""Wrap a loaded multi-task model in a `FixedTaskModelWrapper` that _only_
uses the weights for the given env. Useful for `test` and `testall`."""
# contra its name, .env_ids_and_names is list of tuples of form
# (environment name, numeric environment ID)
env_name_to_id = dict(model.env_ids_and_names)
if env_name not in env_name_to_id:
env_names = ', '.join(
[f'{name} ({eid})' for name, eid in model.env_ids_and_names])
raise ValueError(
f"Supplied environment name '{env_name}' is not supported by "
f"model. Supported names (& IDs) are: {env_names}")
env_id = env_name_to_id[env_name]
# this returns (pi, v)
ft_wrapper = FixedTaskModelWrapper(task_id=env_id,
model_ctor=None,
model_kwargs=None,
model=model)
return ft_wrapper
def eval_model(sampler_mt, itr, n_traj=10):
# BUG: this doesn't reset the sampler envs & agent (because I don't know
# how), so it yields somewhat inaccurate results when called repeatedly on
# envs with different horizons.
# BUG: if you only call this once (or if you fix the reset issue so that it
# always resets when called) then it will be biased towards short
# trajectories. Not an issue for fixed horizon, but will be an issue for
# other things.
scores_by_task_var = collections.defaultdict(lambda: [])
while (not scores_by_task_var
or min(map(len, scores_by_task_var.values())) < n_traj):
samples_pyt, _ = sampler_mt.obtain_samples(itr)
eval_scores = samples_pyt.env.env_info.eval_score
dones = samples_pyt.env.done.flatten()
done_scores = eval_scores.flatten()[dones]
done_task_ids = samples_pyt.env.observation.task_id.flatten()[dones]
done_var_ids = samples_pyt.env.observation.variant_id.flatten()[dones]
for score, task_id, var_id in zip(done_scores, done_task_ids,
done_var_ids):
key = (task_id.item(), var_id.item())
scores_by_task_var[key].append(score.item())
# clip any extra rollouts
scores_by_task_var = {k: v[:n_traj] for k, v in scores_by_task_var.items()}
return scores_by_task_var
def eval_model_st(sampler_st, itr, n_traj=10):
# BUGS: same as eval_model()
scores = []
while len(scores) < n_traj:
samples_pyt, _ = sampler_st.obtain_samples(itr)
eval_scores = samples_pyt.env.env_info.eval_score
dones = samples_pyt.env.done.flatten()
done_scores = eval_scores.flatten()[dones]
scores.extend(s.item() for s in done_scores)
# clip any extra rollouts
return scores[:n_traj]
def saved_model_loader_ft(state_dict_or_model_path, env_name):
"""Loads a saved policy model and then wraps it in a
FixedTaskModelWrapper."""
model = load_state_dict_or_model(state_dict_or_model_path)
ft_wrapper = wrap_model_for_fixed_task(model, env_name)
return ft_wrapper
# weird policy "constructor" that actually reloads a pretrained multi-task
# policy from disk & rebuilds it to handle new set of tasks
def adapt_pol_loader(pol_path, task_ids_and_demo_env_names):
saved_model = load_state_dict_or_model(pol_path)
new_model = saved_model.rebuild_net(
task_ids_and_demo_env_names)
return new_model
| [
"sam@qxcv.net"
] | sam@qxcv.net |
fd496e277fdefb2a2c9b541c99a63156f780d0f2 | 67640d102dbf68c635fdcbac4ae16e91b327e684 | /demo01_test/test_cases/__init__.py | eea3614887f048b7f90597b92be325518e4f6032 | [] | no_license | B9527/unittest_demo | f2ba28fdda309731f0a925732a06ea9824ec03ce | d93846d3497c8bad66f34796b96d014093cd60b7 | refs/heads/master | 2021-01-21T17:10:35.242416 | 2017-07-28T02:42:13 | 2017-07-28T02:42:13 | 98,510,977 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 278 | py | #!/usr/bin/env python
# -*- coding:utf-8 -*-
"""
#----------------------------------------------
# Copyright (python)
# FileName: __init__.py
# Version: 0.0.2
# Author : baiyang
# LastChange: 2017/7/27 15:19
# Desc:
# History:
#--------------------------------------------
""" | [
"1335239218@qq.com"
] | 1335239218@qq.com |
99691c947fd22166b442e041f3350b90e4c0514f | ac89e5d51d0d15ffdecfde25985c28a2af9c2e43 | /tbaapiv3client/models/team_simple.py | 7dd97d3b1a1a0965445e844a29f7181c3f059ad1 | [] | no_license | TBA-API/tba-api-client-python | 20dc4a634be32926054ffc4c52b94027ee40ac7d | 4f6ded8fb4bf8f7896891a9aa778ce15a2ef720b | refs/heads/master | 2021-07-15T16:36:32.234217 | 2020-05-07T00:20:43 | 2020-05-07T00:20:43 | 134,112,743 | 4 | 8 | null | 2019-07-01T03:14:12 | 2018-05-20T02:13:45 | Python | UTF-8 | Python | false | false | 8,893 | py | # coding: utf-8
"""
The Blue Alliance API v3
# Overview Information and statistics about FIRST Robotics Competition teams and events. # Authentication All endpoints require an Auth Key to be passed in the header `X-TBA-Auth-Key`. If you do not have an auth key yet, you can obtain one from your [Account Page](/account). A `User-Agent` header may need to be set to prevent a 403 Unauthorized error. # noqa: E501
The version of the OpenAPI document: 3.8.0
Generated by: https://openapi-generator.tech
"""
import pprint
import re # noqa: F401
import six
from tbaapiv3client.configuration import Configuration
class TeamSimple(object):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
"""
Attributes:
openapi_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
openapi_types = {
'key': 'str',
'team_number': 'int',
'nickname': 'str',
'name': 'str',
'city': 'str',
'state_prov': 'str',
'country': 'str'
}
attribute_map = {
'key': 'key',
'team_number': 'team_number',
'nickname': 'nickname',
'name': 'name',
'city': 'city',
'state_prov': 'state_prov',
'country': 'country'
}
def __init__(self, key=None, team_number=None, nickname=None, name=None, city=None, state_prov=None, country=None, local_vars_configuration=None): # noqa: E501
"""TeamSimple - a model defined in OpenAPI""" # noqa: E501
if local_vars_configuration is None:
local_vars_configuration = Configuration()
self.local_vars_configuration = local_vars_configuration
self._key = None
self._team_number = None
self._nickname = None
self._name = None
self._city = None
self._state_prov = None
self._country = None
self.discriminator = None
self.key = key
self.team_number = team_number
if nickname is not None:
self.nickname = nickname
self.name = name
if city is not None:
self.city = city
if state_prov is not None:
self.state_prov = state_prov
if country is not None:
self.country = country
@property
def key(self):
"""Gets the key of this TeamSimple. # noqa: E501
TBA team key with the format `frcXXXX` with `XXXX` representing the team number. # noqa: E501
:return: The key of this TeamSimple. # noqa: E501
:rtype: str
"""
return self._key
@key.setter
def key(self, key):
"""Sets the key of this TeamSimple.
TBA team key with the format `frcXXXX` with `XXXX` representing the team number. # noqa: E501
:param key: The key of this TeamSimple. # noqa: E501
:type: str
"""
if self.local_vars_configuration.client_side_validation and key is None: # noqa: E501
raise ValueError("Invalid value for `key`, must not be `None`") # noqa: E501
self._key = key
@property
def team_number(self):
"""Gets the team_number of this TeamSimple. # noqa: E501
Official team number issued by FIRST. # noqa: E501
:return: The team_number of this TeamSimple. # noqa: E501
:rtype: int
"""
return self._team_number
@team_number.setter
def team_number(self, team_number):
"""Sets the team_number of this TeamSimple.
Official team number issued by FIRST. # noqa: E501
:param team_number: The team_number of this TeamSimple. # noqa: E501
:type: int
"""
if self.local_vars_configuration.client_side_validation and team_number is None: # noqa: E501
raise ValueError("Invalid value for `team_number`, must not be `None`") # noqa: E501
self._team_number = team_number
@property
def nickname(self):
"""Gets the nickname of this TeamSimple. # noqa: E501
Team nickname provided by FIRST. # noqa: E501
:return: The nickname of this TeamSimple. # noqa: E501
:rtype: str
"""
return self._nickname
@nickname.setter
def nickname(self, nickname):
"""Sets the nickname of this TeamSimple.
Team nickname provided by FIRST. # noqa: E501
:param nickname: The nickname of this TeamSimple. # noqa: E501
:type: str
"""
self._nickname = nickname
@property
def name(self):
"""Gets the name of this TeamSimple. # noqa: E501
Official long name registered with FIRST. # noqa: E501
:return: The name of this TeamSimple. # noqa: E501
:rtype: str
"""
return self._name
@name.setter
def name(self, name):
"""Sets the name of this TeamSimple.
Official long name registered with FIRST. # noqa: E501
:param name: The name of this TeamSimple. # noqa: E501
:type: str
"""
if self.local_vars_configuration.client_side_validation and name is None: # noqa: E501
raise ValueError("Invalid value for `name`, must not be `None`") # noqa: E501
self._name = name
@property
def city(self):
"""Gets the city of this TeamSimple. # noqa: E501
City of team derived from parsing the address registered with FIRST. # noqa: E501
:return: The city of this TeamSimple. # noqa: E501
:rtype: str
"""
return self._city
@city.setter
def city(self, city):
"""Sets the city of this TeamSimple.
City of team derived from parsing the address registered with FIRST. # noqa: E501
:param city: The city of this TeamSimple. # noqa: E501
:type: str
"""
self._city = city
@property
def state_prov(self):
"""Gets the state_prov of this TeamSimple. # noqa: E501
State of team derived from parsing the address registered with FIRST. # noqa: E501
:return: The state_prov of this TeamSimple. # noqa: E501
:rtype: str
"""
return self._state_prov
@state_prov.setter
def state_prov(self, state_prov):
"""Sets the state_prov of this TeamSimple.
State of team derived from parsing the address registered with FIRST. # noqa: E501
:param state_prov: The state_prov of this TeamSimple. # noqa: E501
:type: str
"""
self._state_prov = state_prov
@property
def country(self):
"""Gets the country of this TeamSimple. # noqa: E501
Country of team derived from parsing the address registered with FIRST. # noqa: E501
:return: The country of this TeamSimple. # noqa: E501
:rtype: str
"""
return self._country
@country.setter
def country(self, country):
"""Sets the country of this TeamSimple.
Country of team derived from parsing the address registered with FIRST. # noqa: E501
:param country: The country of this TeamSimple. # noqa: E501
:type: str
"""
self._country = country
def to_dict(self):
"""Returns the model properties as a dict"""
result = {}
for attr, _ in six.iteritems(self.openapi_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
result[attr] = value
return result
def to_str(self):
"""Returns the string representation of the model"""
return pprint.pformat(self.to_dict())
def __repr__(self):
"""For `print` and `pprint`"""
return self.to_str()
def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, TeamSimple):
return False
return self.to_dict() == other.to_dict()
def __ne__(self, other):
"""Returns true if both objects are not equal"""
if not isinstance(other, TeamSimple):
return True
return self.to_dict() != other.to_dict()
| [
"travis@example.org"
] | travis@example.org |
ae6e701cace2665da5f5c82b25c7da7db4d65f08 | bb33e6be8316f35decbb2b81badf2b6dcf7df515 | /source/res/scripts/client/gui/doc_loaders/manual_xml_data_reader.py | a24e72f505ab926a90e4f1164eff6081430137c1 | [] | no_license | StranikS-Scan/WorldOfTanks-Decompiled | 999c9567de38c32c760ab72c21c00ea7bc20990c | d2fe9c195825ececc728e87a02983908b7ea9199 | refs/heads/1.18 | 2023-08-25T17:39:27.718097 | 2022-09-22T06:49:44 | 2022-09-22T06:49:44 | 148,696,315 | 103 | 39 | null | 2022-09-14T17:50:03 | 2018-09-13T20:49:11 | Python | UTF-8 | Python | false | false | 8,652 | py | # Python bytecode 2.7 (decompiled from Python 2.7)
# Embedded file name: scripts/client/gui/doc_loaders/manual_xml_data_reader.py
import itertools
import logging
from gui.impl import backport
from gui.impl.gen import R
from helpers.html import translation
import resource_helper
from gui.Scaleform.genConsts.MANUAL_TEMPLATES import MANUAL_TEMPLATES
from gui.shared.utils.functions import makeTooltip
_logger = logging.getLogger(__name__)
_CHAPTERS_DATA_PATH = 'gui/manual/'
_CHAPTERS_LIST_XML = 'chapters_list.xml'
class ManualPageTypes(object):
HINTS_PAGE = 'hints_page'
BOOTCAMP_PAGE = 'bootcamp_page'
MAPS_TRAINING_PAGE = 'maps_training_page'
VIDEO_PAGE = 'video_page'
_MANUAL_LESSON_TEMPLATES = {ManualPageTypes.HINTS_PAGE: MANUAL_TEMPLATES.HINTS,
ManualPageTypes.BOOTCAMP_PAGE: MANUAL_TEMPLATES.BOOTCAMP,
ManualPageTypes.MAPS_TRAINING_PAGE: MANUAL_TEMPLATES.MAPS_TRAINING,
ManualPageTypes.VIDEO_PAGE: MANUAL_TEMPLATES.VIDEO}
def getChapters(filterFunction):
chaptersListPath = _CHAPTERS_DATA_PATH + _CHAPTERS_LIST_XML
    with resource_helper.root_generator(chaptersListPath) as (ctx, root):
chapters = __readChapters(ctx, root, filterFunction)
return chapters
def getPagesIndexesList(filterFunction):
chaptersData = getChapters(filterFunction)
return itertools.chain.from_iterable([ chapter['pageIDs'] for chapter in chaptersData ])
def getChaptersIndexesList(filterFunction):
chaptersData = getChapters(filterFunction)
return [ chapter['uiData']['index'] for chapter in chaptersData ]
def getChapterData(chapterFileName, filterFunction, bootcampRunCount, chapterTitle=''):
_logger.debug('ManualXMLDataReader: requested chapter data: %s', chapterFileName)
chapterPath = _CHAPTERS_DATA_PATH + chapterFileName
    with resource_helper.root_generator(chapterPath) as (ctx, root):
chapter = __readChapter(ctx, root, filterFunction, bootcampRunCount, chapterTitle)
return chapter
def __isNew(lessonCtx, lessonSection):
return bool(__getCustomSectionValue(lessonCtx, lessonSection, 'new', safe=True))
def __readChapter(ctx, root, filterFunction, bootcampRunCount, chapterTitle=''):
pages = []
details = []
index = 0
ctx, section = resource_helper.getSubSection(ctx, root, 'lessons')
for lessonCtx, lessonSection in resource_helper.getIterator(ctx, section):
template = __getCustomSectionValue(lessonCtx, lessonSection, 'template')
if not filterFunction(template):
continue
title = translation(__getCustomSectionValue(lessonCtx, lessonSection, 'title'))
background = __getCustomSectionValue(lessonCtx, lessonSection, 'background')
description = __getCustomSectionValue(lessonCtx, lessonSection, 'description', safe=True)
pageId = __getCustomSectionValue(lessonCtx, lessonSection, 'id')
if description is None:
description = ''
else:
description = translation(description)
contentRendererLinkage = ''
if template == ManualPageTypes.MAPS_TRAINING_PAGE:
contentRendererData = {'text': backport.text(R.strings.maps_training.manualPage.button())}
contentRendererLinkage = _MANUAL_LESSON_TEMPLATES.get(template)
elif template == ManualPageTypes.BOOTCAMP_PAGE:
contentRendererData = __getBootcampRendererData(bootcampRunCount)
contentRendererLinkage = _MANUAL_LESSON_TEMPLATES.get(template)
elif template == ManualPageTypes.VIDEO_PAGE:
contentRendererData = __getVideoRendererData(lessonCtx, lessonSection)
contentRendererLinkage = _MANUAL_LESSON_TEMPLATES.get(template)
else:
contentRendererData, hintsCount = __getHintsRendererData(lessonCtx, lessonSection)
if hintsCount > 0:
contentRendererLinkage = _MANUAL_LESSON_TEMPLATES.get(template)
pages.append({'buttonsGroup': 'ManualChapterGroup',
'pageIndex': int(index),
'selected': False,
'hasNewContent': __isNew(lessonCtx, lessonSection),
'label': str(int(index) + 1),
'tooltip': {'tooltip': makeTooltip(title)}})
details.append({'title': title,
'chapterTitle': chapterTitle,
'description': description,
'background': background,
'contentRendererLinkage': contentRendererLinkage,
'contentRendererData': contentRendererData,
'id': pageId,
'pageType': template})
index += 1
chapterData = {'pages': pages,
'details': details}
_logger.debug('ManualXMLDataReader: Read chapter: %s', chapterData)
return chapterData
def __readChapters(ctx, root, filterFunction):
ctx, section = resource_helper.getSubSection(ctx, root, 'chapters')
chapters = []
index = 0
for chapterCtx, chapterSection in resource_helper.getIterator(ctx, section):
filePath = __getCustomSectionValue(chapterCtx, chapterSection, 'file-path')
title = __getCustomSectionValue(chapterCtx, chapterSection, 'title')
background = __getCustomSectionValue(chapterCtx, chapterSection, 'background')
attributes = __getChapterAttributes(filePath, filterFunction)
ids = attributes.get('ids', [])
if len(ids) != len(set(ids)):
_logger.warning('chapter %s has duplicate page ids', title)
chapter = {'filePath': filePath,
'pageIDs': ids,
'newPageIDs': attributes.get('newIds', []),
'uiData': {'index': int(index),
'label': translation(title),
'image': background,
'tooltip': makeTooltip(translation(title), '\n'.join(attributes.get('chaptersTitles', [])))}}
if any((ids in chapter['pageIDs'] for chapter in chapters)):
_logger.warning('chapter %s has duplicate page ids from another chapters', title)
_logger.debug('ManualXMLDataReader: Read chapters. Chapter: %s', chapter)
chapters.append(chapter)
index += 1
return chapters
def __getChapterAttributes(chapterFileName, filterFunction):
chaptersTitles = []
ids = []
newIds = []
chapterPath = _CHAPTERS_DATA_PATH + chapterFileName
    with resource_helper.root_generator(chapterPath) as (ctx, root):
ctx, section = resource_helper.getSubSection(ctx, root, 'lessons')
for lessonCtx, lessonSection in resource_helper.getIterator(ctx, section):
template = __getCustomSectionValue(lessonCtx, lessonSection, 'template')
if not filterFunction(template):
continue
lessonId = int(__getCustomSectionValue(lessonCtx, lessonSection, 'id'))
ids.append(lessonId)
if __getCustomSectionValue(lessonCtx, lessonSection, 'new', safe=True):
newIds.append(lessonId)
chaptersTitles.append(translation(__getCustomSectionValue(lessonCtx, lessonSection, 'title')))
return {'ids': ids,
'newIds': newIds,
'chaptersTitles': chaptersTitles}
def __getCustomSectionValue(ctx, section, name, safe=False):
valueCtx, valueSection = resource_helper.getSubSection(ctx, section, name, safe)
result = None
if valueSection is not None:
item = resource_helper.readItem(valueCtx, valueSection, name)
result = item.value
return result
def __getVideoRendererData(lessonCtx, lessonSection):
video = __getCustomSectionValue(lessonCtx, lessonSection, 'video', safe=True)
if video is None:
video = ''
preview = __getCustomSectionValue(lessonCtx, lessonSection, 'preview', safe=True)
if preview is None:
preview = ''
return {'previewImage': preview,
'videoUrl': video}
def __getBootcampRendererData(bootcampRunCount):
if bootcampRunCount == 0:
bootcampText = translation('#bootcamp:request/bootcamp/start')
else:
bootcampText = translation('#bootcamp:request/bootcamp/return')
return {'text': bootcampText}
def __getHintsRendererData(lessonCtx, lessonSection):
hints = []
contentRendererData = None
hintsCtx, hintsSection = resource_helper.getSubSection(lessonCtx, lessonSection, 'hints', safe=True)
if hintsSection is not None:
for hintCtx, hintSection in resource_helper.getIterator(hintsCtx, hintsSection):
hintText = translation(__getCustomSectionValue(hintCtx, hintSection, 'text'))
hintIcon = __getCustomSectionValue(hintCtx, hintSection, 'icon')
hints.append({'text': hintText,
'icon': hintIcon})
contentRendererData = {'hints': hints}
return (contentRendererData, len(hints))
| [
"StranikS_Scan@mail.ru"
] | StranikS_Scan@mail.ru |
3e83a4254bd9cd516265553423831b535c48d256 | d62028695590b6c52cbc71af1dde6e7fdecb3f9b | /FALCON/src/utils/default_param.py | 7314c5af350c0464df65e30c7b8c8930b80db20d | [
"Apache-2.0"
] | permissive | zijiantang168/Reproducability-FALCON | 3cf0ed46ddf9df414fcf1baf5adb0a9c42111273 | eef9d8d72ae3b763d6a88107b90db9533afedd9e | refs/heads/master | 2023-04-20T11:07:40.587829 | 2021-05-13T14:04:32 | 2021-05-13T14:04:32 | 367,063,997 | 0 | 0 | Apache-2.0 | 2021-05-13T13:52:37 | 2021-05-13T13:52:37 | null | UTF-8 | Python | false | false | 5,163 | py | """
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
FALCON: FAst and Lightweight CONvolution
File: utils/default_param.py
- Contains source code for receiving arguments.
Version: 1.0
"""
import argparse
def get_default_param():
"""
Receive arguments.
"""
parser = argparse.ArgumentParser()
parser.add_argument("-train", "--is_train",
help="whether train_test the model (train_test-True; test-False)",
action="store_true")
parser.add_argument("-bs", "--batch_size",
help="batch size of training",
type=int,
default=128)
parser.add_argument("-ep", "--epochs",
help="epochs of training",
type=int,
default=350)
parser.add_argument("-lr", "--learning_rate",
help="set beginning learning rate",
type=float,
default=0.01)
parser.add_argument("-op", "--optimizer",
help="choose optimizer",
choices=["SGD", "Adagrad", "Adam", "RMSprop"],
type=str,
default='SGD')
parser.add_argument("-conv", "--convolution",
help="choose convolution",
choices=["StandardConv",
"FALCON",
"RankFALCON",
"StConvBranch",
"FALCONBranch"],
type=str,
default="StandardConv")
parser.add_argument("-k", "--rank",
help="if the model is Rank K, the rank(k) in range {1,2,3}",
choices=[1, 2, 3, 4],
type=int,
default=1)
parser.add_argument("-al", "--alpha",
help="Width Multiplier in range (0,1]",
# choices=[1, 0.75, 0.5, 0.33, 0.25],
type=float,
default=1)
parser.add_argument("-m", "--model",
help="model type - VGG16/VGG19/ResNet",
choices=['VGG16', 'VGG19', 'ResNet'],
type=str,
default='VGG19')
parser.add_argument("-data", "--datasets",
help="specify datasets - cifar10/cifar100/svhn/mnist/tinyimagenet/imagenet",
choices=['cifar10', 'cifar100', 'svhn', 'mnist', 'tinyimagenet', 'imagenet'],
type=str,
default='cifar100')
parser.add_argument("-ns", "--not_save",
help="do not save the model",
action="store_true")
parser.add_argument("-b", "--beta",
help="balance between classification loss and transfer loss",
type=float,
default=0.0)
parser.add_argument('-bn', '--bn',
action='store_true',
help='add batch_normalization after FALCON')
parser.add_argument('-relu', '--relu',
action='store_true',
help='add relu function after FALCON')
parser.add_argument("-lrd", "--lr_decay_rate",
help="learning rate dacay rate",
type=int,
default=10)
parser.add_argument("-exp", "--expansion",
help="expansion ration in MobileConvV2",
type=float,
default=6.0)
parser.add_argument('-init', '--init',
action='store_true',
help='Whether initialize FALCON')
parser.add_argument("-g", "--groups",
help="groups number for pointwise convolution",
type=int,
default=1)
parser.add_argument("--stconv_path",
help="restore StConv model from the path",
type=str,
default='')
parser.add_argument("--restore_path",
help="restore model from the path",
type=str,
default='')
return parser
| [
"none@none.com"
] | none@none.com |
09906cf7bda1b80f0d3b583f2413572b3f3fcd22 | ab19b1e637109f6a6f32e99714ea1c7cbe1d5ec0 | /articles/migrations/0019_article_publish_after.py | a9d38592c2a964ef44a79aeec0ecba6e0d1db057 | [] | no_license | devonwarren/totemag | daf05876cfe636c4dcfe83b764900a0bc4c9c29d | 304ab0e2f72b926e63de706a6e3dc0b043db36fd | refs/heads/master | 2021-01-17T20:48:48.671352 | 2016-06-02T00:57:11 | 2016-06-02T00:57:11 | 58,146,953 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 538 | py | # -*- coding: utf-8 -*-
# Generated by Django 1.9.2 on 2016-02-18 22:41
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('articles', '0018_auto_20151124_2026'),
]
operations = [
migrations.AddField(
model_name='article',
name='publish_after',
field=models.DateTimeField(blank=True, help_text='If set will be published after this point automatically', null=True),
),
]
| [
"devon.warren@gmail.com"
] | devon.warren@gmail.com |
f993e3b5fb115450f780e5412a64f240b4667778 | 9e7d7b4d029554eed0f760a027cd94558b919ae2 | /chapter2/continue_break_statements.py | e9af6c8a148f493882b7cecd565d9844fcdfc65c | [] | no_license | pooja1506/AutomateTheBoringStuff_2e | 8247b68a195d5e1976c6474f0e97d947906ffd35 | 5bab9ccdcdb22ee10fe1272c91042be40fd67c17 | refs/heads/master | 2022-04-10T19:21:44.402829 | 2020-04-05T12:10:32 | 2020-04-05T12:10:32 | 249,620,282 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 427 | py | while True:
print("who are you?")
name = input()
if name != 'Joe':
continue
print("hello joe , please enter your password")
password = input()
if password == 'Seasword':
break
print("access granted")
# The continue statement is used to jump back to the start of the loop and re-evaluate the input until it's correct
# The break statement is used to immediately exit the while loop
"pooja.dmehta15@gmail.com"
] | pooja.dmehta15@gmail.com |
901b1b56c55239bb850491d05e8b6501220bd9f6 | 7cf119239091001cbe687f73018dc6a58b5b1333 | /datashufflepy-zeus/src/branch_scripts2/NEWS/ZX_CJXW_ZYCJ/ZX_CJXW_ZYCJ_JJW_CJZQ.py | bb53ef4acf7a8bb3b4a7661eb8cf6b64186458af | [
"Apache-2.0"
] | permissive | ILKKAI/dataETL | 0f5b80c3482994f735f092a1e01fa1009bac4109 | 32f7ec3aaaf32b5074536a615cb9cd5c28bd499c | refs/heads/master | 2022-04-04T19:27:05.747852 | 2020-02-28T11:17:48 | 2020-02-28T11:17:48 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 358 | py | # -*- coding: utf-8 -*-
from database._mongodb import MongoClient
def data_shuffle(data):
return data
if __name__ == '__main__':
main_mongo = MongoClient(entity_code="ZX_CJXW_ZYCJ_JJW_CJZQ", mongo_collection="ZX_CJXW_ZYCJ")
data_list = main_mongo.main()
for data in data_list:
re_data = data_shuffle(data)
print(re_data)
| [
"499413642@qq.com"
] | 499413642@qq.com |
e6bdd6de0cabea72e54e64e2694a3cd452cc4fa4 | bad6970aa7c929bcd8447106c1f3af6567a42456 | /tests/test_snowflake.py | 70f14b17835adcae044d93f7c7c15fc290185db5 | [
"MIT"
] | permissive | wrodney/simple-ddl-parser | ad498221367bf657ed94106feded5ff8c3fdb46b | 6e4d1d65f74fa2f9266a4f9e28bd8b6e42ddaa14 | refs/heads/main | 2023-08-15T01:26:48.214664 | 2021-09-22T06:20:11 | 2021-09-22T06:20:11 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 5,073 | py | from simple_ddl_parser import DDLParser
def test_clone_db():
ddl = """
create database mytestdb_clone clone mytestdb;
"""
result = DDLParser(ddl).run(group_by_type=True)
expected = {
"databases": [
{"clone": {"from": "mytestdb"}, "database_name": "mytestdb_clone"}
],
"domains": [],
"schemas": [],
"sequences": [],
"tables": [],
"types": [],
}
assert result == expected
def test_clone_table():
expected = {
"domains": [],
"schemas": [],
"sequences": [],
"tables": [
{
"alter": {},
"checks": [],
"columns": [],
"index": [],
"like": {"schema": None, "table_name": "orders"},
"partitioned_by": [],
"primary_key": [],
"schema": None,
"table_name": "orders_clone",
"tablespace": None,
}
],
"types": [],
}
ddl = """
create table orders_clone clone orders;
"""
result = DDLParser(ddl).run(group_by_type=True)
assert expected == result
def test_clone_schema():
expected = {
"domains": [],
"schemas": [
{"clone": {"from": "testschema"}, "schema_name": "mytestschema_clone"}
],
"sequences": [],
"tables": [],
"types": [],
}
ddl = """
create schema mytestschema_clone clone testschema;
"""
result = DDLParser(ddl).run(group_by_type=True)
assert expected == result
def test_cluster_by():
ddl = """
create table mytable (date timestamp_ntz, id number, content variant) cluster by (date, id);
"""
result = DDLParser(ddl).run(group_by_type=True)
expected = {
"domains": [],
"schemas": [],
"sequences": [],
"tables": [
{
"alter": {},
"checks": [],
"cluster_by": ["date", "id"],
"columns": [
{
"check": None,
"default": None,
"name": "date",
"nullable": True,
"references": None,
"size": None,
"type": "timestamp_ntz",
"unique": False,
},
{
"check": None,
"default": None,
"name": "id",
"nullable": True,
"references": None,
"size": None,
"type": "number",
"unique": False,
},
{
"check": None,
"default": None,
"name": "content",
"nullable": True,
"references": None,
"size": None,
"type": "variant",
"unique": False,
},
],
"index": [],
"partitioned_by": [],
"primary_key": [],
"schema": None,
"table_name": "mytable",
"tablespace": None,
}
],
"types": [],
}
assert expected == result
def test_enforced():
ddl = """
create table table2 (
col1 integer not null,
col2 integer not null,
constraint pkey_1 primary key (col1, col2) not enforced
);
"""
result = DDLParser(ddl).run(group_by_type=True)
expected = {
"domains": [],
"schemas": [],
"sequences": [],
"tables": [
{
"alter": {},
"checks": [],
"columns": [
{
"check": None,
"default": None,
"name": "col1",
"nullable": False,
"references": None,
"size": None,
"type": "integer",
"unique": False,
},
{
"check": None,
"default": None,
"name": "col2",
"nullable": False,
"references": None,
"size": None,
"type": "integer",
"unique": False,
},
],
"index": [],
"partitioned_by": [],
"primary_key": [],
"primary_key_enforced": False,
"schema": None,
"table_name": "table2",
"tablespace": None,
}
],
"types": [],
}
assert expected == result
| [
"xnuinside@gmail.com"
] | xnuinside@gmail.com |
85d28b2ed8514cc96ce157349b50199f1c467e84 | 39f37b192565bf0a30252099a0310d0394b5cb2c | /deepsearch/cps/apis/public/models/annotated_text.py | 2e9d8b1de43c2ba2f27f3f35027cb62caa3a2b3a | [
"MIT"
] | permissive | DS4SD/deepsearch-toolkit | 5a6608b744bcf2d7da5106ece735691ee10f79fb | 20d8198db6d5a75cfe374060910a0375312dadef | refs/heads/main | 2023-08-04T14:17:21.345813 | 2023-07-28T11:51:18 | 2023-07-28T11:51:18 | 498,645,500 | 88 | 13 | MIT | 2023-09-07T15:17:29 | 2022-06-01T08:05:41 | Python | UTF-8 | Python | false | false | 6,220 | py | # coding: utf-8
"""
Corpus Processing Service (CPS) API
No description provided (generated by Openapi Generator https://github.com/openapitools/openapi-generator) # noqa: E501
The version of the OpenAPI document: 2.0.0
Generated by: https://openapi-generator.tech
"""
import pprint
import re # noqa: F401
import six
from deepsearch.cps.apis.public.configuration import Configuration
class AnnotatedText(object):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
"""
Attributes:
openapi_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
openapi_types = {
'entities': 'dict(str, list[EntityAnnotation])',
'properties': 'object',
'relationships': 'dict(str, list[object])',
'text': 'str'
}
attribute_map = {
'entities': 'entities',
'properties': 'properties',
'relationships': 'relationships',
'text': 'text'
}
def __init__(self, entities=None, properties=None, relationships=None, text=None, local_vars_configuration=None): # noqa: E501
"""AnnotatedText - a model defined in OpenAPI""" # noqa: E501
if local_vars_configuration is None:
local_vars_configuration = Configuration()
self.local_vars_configuration = local_vars_configuration
self._entities = None
self._properties = None
self._relationships = None
self._text = None
self.discriminator = None
self.entities = entities
self.properties = properties
self.relationships = relationships
self.text = text
@property
def entities(self):
"""Gets the entities of this AnnotatedText. # noqa: E501
:return: The entities of this AnnotatedText. # noqa: E501
:rtype: dict(str, list[EntityAnnotation])
"""
return self._entities
@entities.setter
def entities(self, entities):
"""Sets the entities of this AnnotatedText.
:param entities: The entities of this AnnotatedText. # noqa: E501
:type: dict(str, list[EntityAnnotation])
"""
if self.local_vars_configuration.client_side_validation and entities is None: # noqa: E501
raise ValueError("Invalid value for `entities`, must not be `None`") # noqa: E501
self._entities = entities
@property
def properties(self):
"""Gets the properties of this AnnotatedText. # noqa: E501
:return: The properties of this AnnotatedText. # noqa: E501
:rtype: object
"""
return self._properties
@properties.setter
def properties(self, properties):
"""Sets the properties of this AnnotatedText.
:param properties: The properties of this AnnotatedText. # noqa: E501
:type: object
"""
if self.local_vars_configuration.client_side_validation and properties is None: # noqa: E501
raise ValueError("Invalid value for `properties`, must not be `None`") # noqa: E501
self._properties = properties
@property
def relationships(self):
"""Gets the relationships of this AnnotatedText. # noqa: E501
:return: The relationships of this AnnotatedText. # noqa: E501
:rtype: dict(str, list[object])
"""
return self._relationships
@relationships.setter
def relationships(self, relationships):
"""Sets the relationships of this AnnotatedText.
:param relationships: The relationships of this AnnotatedText. # noqa: E501
:type: dict(str, list[object])
"""
if self.local_vars_configuration.client_side_validation and relationships is None: # noqa: E501
raise ValueError("Invalid value for `relationships`, must not be `None`") # noqa: E501
self._relationships = relationships
@property
def text(self):
"""Gets the text of this AnnotatedText. # noqa: E501
:return: The text of this AnnotatedText. # noqa: E501
:rtype: str
"""
return self._text
@text.setter
def text(self, text):
"""Sets the text of this AnnotatedText.
:param text: The text of this AnnotatedText. # noqa: E501
:type: str
"""
if self.local_vars_configuration.client_side_validation and text is None: # noqa: E501
raise ValueError("Invalid value for `text`, must not be `None`") # noqa: E501
self._text = text
def to_dict(self):
"""Returns the model properties as a dict"""
result = {}
for attr, _ in six.iteritems(self.openapi_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
result[attr] = value
return result
def to_str(self):
"""Returns the string representation of the model"""
return pprint.pformat(self.to_dict())
def __repr__(self):
"""For `print` and `pprint`"""
return self.to_str()
def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, AnnotatedText):
return False
return self.to_dict() == other.to_dict()
def __ne__(self, other):
"""Returns true if both objects are not equal"""
if not isinstance(other, AnnotatedText):
return True
return self.to_dict() != other.to_dict()
| [
"dol@zurich.ibm.com"
] | dol@zurich.ibm.com |
31c52a620fc8d4a5f84e8007650b67030b297537 | cf070f44eb2e4a218af93432608a04e85e1bbfac | /web/tests/eSearch/test_4525.py | c27132a82294aa5b3a5be54be55fc3d504b75fdb | [] | no_license | NadyaDi/kms-automation | fbb680e95394b0c3286653ac5ae187f6bc02845e | 3309a6f516386e824c23e03c6f6cb47661ea5ddd | refs/heads/master | 2022-01-15T14:50:22.616235 | 2019-05-16T10:29:20 | 2019-05-16T10:29:20 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 18,468 | py | import time, pytest
import sys,os
from _ast import Num
sys.path.insert(1,os.path.abspath(os.path.join(os.path.dirname( __file__ ),'..','..','lib')))
from clsCommon import Common
import clsTestService
from localSettings import *
import localSettings
from utilityTestFunc import *
import enums
class Test:
#================================================================================================================================
# @Author: Horia Cus
    # Test Name: Filter by custom-data - with search - Add to channel - My Media tab - negative and positive
# Test description:
# Verify that proper entries are displayed based on the custom data by: single date, single text, single list and unlimited text
#================================================================================================================================
testNum = "4525"
supported_platforms = clsTestService.updatePlatforms(testNum)
status = "Pass"
timeout_accured = "False"
driver = None
common = None
entryName = "Filter by custom-data"
searchPage = "Add to channel - My Media tab - with search"
channelName = "Channel for eSearch"
@pytest.fixture(scope='module',params=supported_platforms)
def driverFix(self,request):
return request.param
def test_01(self,driverFix,env):
#write to log we started the test
logStartTest(self,driverFix)
try:
########################### TEST SETUP ###########################
#capture test start time
self.startTime = time.time()
#initialize all the basic vars and start playing
self,self.driver = clsTestService.initializeAndLoginAsUser(self, driverFix)
self.common = Common(self.driver)
# Entries and dictionaries
self.entryName1 = "Filter by custom-data - Video"
self.entryName2 = "Filter by custom-data - Image"
self.entryName3 = "Filter by custom-data - Quiz - Quiz"
self.entryName4 = "Filter by custom-data - Audio"
self.year = 2018
self.month = 5
self.dayEntry1 = "1"
self.dayEntry2 = "2"
self.dayEntry3 = "3"
self.dayEntry4 = "4"
self.dayInvalid = "22"
self.textEntry1 = "Video"
self.textEntry2 = "Image"
self.textEntry3 = "Quiz"
self.textEntry4 = "Audio"
self.textInvalid = "horiacus"
self.listOne = {self.entryName1: True, self.entryName2: False, self.entryName3: False, self.entryName4: False}
self.listTwo = {self.entryName1: False, self.entryName2: True, self.entryName3: False, self.entryName4: False}
self.listThree = {self.entryName1: False, self.entryName2: False, self.entryName3: True, self.entryName4: False}
self.listFour = {self.entryName1: False, self.entryName2: False, self.entryName3: False, self.entryName4: True}
self.listInvalid = {self.entryName1: False, self.entryName2: False, self.entryName3: False, self.entryName4: False}
self.enumListOne = enums.SingleList.LIST_ONE
self.enumListTwo = enums.SingleList.LIST_TWO
self.enumListThree = enums.SingleList.LIST_THREE
self.enumListFour = enums.SingleList.LIST_FOUR
self.enumListInvalid = enums.SingleList.LIST_SIX
self.singleDateMap = {self.dayEntry1:self.listOne, self.dayEntry2:self.listTwo, self.dayEntry3:self.listThree, self.dayEntry4:self.listFour}
self.singleTextMap = {self.textEntry1:self.listOne, self.textEntry2:self.listTwo, self.textEntry3:self.listThree, self.textEntry4:self.listFour}
self.singleListMap = {self.enumListOne:[self.listOne, enums.SingleList.LIST_ONE.value], self.enumListTwo:[self.listTwo, enums.SingleList.LIST_TWO.value], self.enumListThree:[self.listThree, enums.SingleList.LIST_THREE.value], self.enumListFour:[self.listFour, enums.SingleList.LIST_FOUR.value]}
##################### TEST STEPS - MAIN FLOW #####################
i = 1
writeToLog("INFO","Step " + str(i) + ": Going to navigate to " + self.searchPage)
if self.common.channel.navigateToAddToChannel(self.channelName) == False:
self.status = "Fail"
writeToLog("INFO","Step " + str(i) + ": FAILED to navigate to " + self.searchPage)
return
else:
i = i + 1
i = i
writeToLog("INFO","Step " + str(i) + ": Going to make a search in " + self.searchPage)
if self.common.channel.searchInAddToChannel(self.entryName) == False:
self.status = "Fail"
writeToLog("INFO","Step " + str(i) + ": FAILED to make a search in " +self.searchPage)
return
else:
i = i + 1
i = i
# SINGLE LIST NEGATIVE
writeToLog("INFO", "STEP " + str(i) + ": Going to filter " + self.searchPage + " entries by: " + enums.SingleList.LIST_SIX.value + "'")
if self.common.myMedia.SortAndFilter(enums.SortAndFilter.SINGLE_LIST, enums.SingleList.LIST_SIX) == False:
self.status = "Fail"
writeToLog("INFO", "STEP " + str(i) + ": FAILED to filter " + self.searchPage + " entries by '" + enums.SingleList.LIST_SIX.value + "'")
return
else:
i = i + 1
i = i
writeToLog("INFO", "STEP " + str(i) + ": Going to verify filter " + self.searchPage + " entries by: " + enums.SingleList.LIST_SIX.value + "'")
if self.common.channel.verifyFiltersInAddToChannel(self.listInvalid) == False:
self.status = "Fail"
writeToLog("INFO", "STEP " + str(i) + ": FAILED to verify filter " + self.searchPage + " entries by '" + enums.SingleList.LIST_SIX.value + "'")
return
else:
i = i + 1
i = i
writeToLog("INFO", "STEP " + str(i) + ": Going to clear the filter search menu")
if self.common.myMedia.filterClearAllWhenOpened() == False:
self.status = "Fail"
writeToLog("INFO", "STEP " + str(i) + ": Failed to clear the search menu")
return
else:
i = i + 1
i = i
# SINGLE LIST POSITIVE
for entry in self.singleListMap:
i = i
writeToLog("INFO", "Step " + str(i) + ": Going to filter " + self.searchPage + " entries by: " + self.singleListMap[entry][1] + "'")
if self.common.myMedia.SortAndFilter(enums.SortAndFilter.SINGLE_LIST, entry) == False:
self.status = "Fail"
writeToLog("INFO", "Step" + str(i) + ": FAILED to filter " + self.searchPage + " entries by '" + self.singleListMap[entry][1] + "'")
return
else:
i = i + 1
writeToLog("INFO", "Step " + str(i) + ": Going to verify filter " + self.searchPage + " entries by: " + self.singleListMap[entry][1] + "'")
if self.common.channel.verifyFiltersInAddToChannel(self.singleListMap[entry][0]) == False:
self.status = "Fail"
writeToLog("INFO", "Step" + str(i) + ": FAILED to verify filter " + self.searchPage + " entries by '" + self.singleListMap[entry][1] + "'")
return
else:
i = i + 1
writeToLog("INFO", "Step " + str(i) + ": Going to clear the filter search menu")
if self.common.myMedia.filterClearAllWhenOpened() == False:
self.status = "Fail"
writeToLog("INFO", "Step" + str(i) + ": Failed to clear the search menu")
return
else:
i = i + 1
i = i
# SINGLE TEXT NEGATIVE
writeToLog("INFO", "STEP " + str(i) + ": Going to filter " + self.searchPage + " entries by: " + enums.FreeText.SINGLE_TEXT.value + " and " + self.textInvalid + " text")
if self.common.myMedia.SortAndFilter(enums.SortAndFilter.FREE_TEXT, enums.FreeText.SINGLE_TEXT, text=self.textInvalid) == False:
self.status = "Fail"
writeToLog("INFO", "STEP " + str(i) + ": FAILED to filter " + self.searchPage + " entries by '" + enums.FreeText.SINGLE_TEXT.value + " and " + self.textInvalid + " text")
return
else:
i = i + 1
i = i
writeToLog("INFO", "STEP " + str(i) + ": Going to verify filter " + self.searchPage + " entries by: " + enums.FreeText.SINGLE_TEXT.value + " and " + self.textInvalid + " text")
if self.common.channel.verifyFiltersInAddToChannel(self.listInvalid) == False:
self.status = "Fail"
writeToLog("INFO", "STEP " + str(i) + ": FAILED to verify filter " + self.searchPage + " entries by '" + enums.FreeText.SINGLE_TEXT.value + " and " + self.textInvalid + " text")
return
else:
i = i + 1
i = i
writeToLog("INFO", "STEP " + str(i) + ": Going to clear the filter search menu")
if self.common.myMedia.filterClearAllWhenOpened() == False:
self.status = "Fail"
writeToLog("INFO", "STEP " + str(i) + ": Failed to clear the search menu")
return
i = i
# SINGLE TEXT POSITIVE
for entry in self.singleTextMap:
i = i
writeToLog("INFO", "Step " + str(i) + ": Going to filter " + self.searchPage + " entries by '" + enums.FreeText.SINGLE_TEXT.value + " and " + entry + " text")
if self.common.myMedia.SortAndFilter(enums.SortAndFilter.FREE_TEXT, enums.FreeText.SINGLE_TEXT, text=entry) == False:
self.status = "Fail"
                    writeToLog("INFO", "Step" + str(i) + ": FAILED to filter " + self.searchPage + " entries by '" + enums.FreeText.SINGLE_TEXT.value + " and " + entry + " text")
return
else:
i = i + 1
writeToLog("INFO", "Step " + str(i) + ": Going to verify filter " + self.searchPage + " entries by '" + enums.FreeText.SINGLE_TEXT.value + " and " + entry + " text")
if self.common.channel.verifyFiltersInAddToChannel(self.singleTextMap[entry]) == False:
self.status = "Fail"
writeToLog("INFO", "Step" + str(i) + ": FAILED to verify filter " + self.searchPage + " entries by '" + enums.FreeText.SINGLE_TEXT.value + " and " + entry + " text")
return
else:
i = i + 1
writeToLog("INFO", "Step " + str(i) + ": Going to clear the filter search menu")
if self.common.myMedia.filterClearAllWhenOpened() == False:
self.status = "Fail"
writeToLog("INFO", "Step" + str(i) + ": Failed to clear the search menu")
return
else:
i = i + 1
i = i
# SINGLE UNLIMITED NEGATIVE
writeToLog("INFO", "STEP " + str(i) + ": Going to filter " + self.searchPage + " entries by: " + enums.FreeText.UNLIMITED_TEXT.value + " and " + self.textInvalid + " text")
if self.common.myMedia.SortAndFilter(enums.SortAndFilter.FREE_TEXT, enums.FreeText.UNLIMITED_TEXT, text=self.textInvalid) == False:
self.status = "Fail"
writeToLog("INFO", "STEP " + str(i) + ": FAILED to filter " + self.searchPage + " entries by '" + enums.FreeText.UNLIMITED_TEXT.value + " and " + self.textInvalid + " text")
return
else:
i = i + 1
i = i
writeToLog("INFO", "STEP " + str(i) + ": Going to verify filter " + self.searchPage + " entries by: " + enums.FreeText.UNLIMITED_TEXT.value + " and " + self.textInvalid + " text")
if self.common.channel.verifyFiltersInAddToChannel(self.listInvalid) == False:
self.status = "Fail"
writeToLog("INFO", "STEP " + str(i) + ": FAILED to verify filter " + self.searchPage + " entries by '" + enums.FreeText.UNLIMITED_TEXT.value + " and " + self.textInvalid + " text")
return
else:
i = i + 1
i = i
writeToLog("INFO", "STEP " + str(i) + ": Going to clear the filter search menu")
if self.common.myMedia.filterClearAllWhenOpened() == False:
self.status = "Fail"
writeToLog("INFO", "STEP " + str(i) + ": Failed to clear the search menu")
return
i = i
# SINGLE UNLIMITED POSITIVE
for entry in self.singleTextMap:
i = i
writeToLog("INFO", "Step " + str(i) + ": Going to filter " + self.searchPage + " entries by '" + enums.FreeText.UNLIMITED_TEXT.value + " and " + entry + " text")
if self.common.myMedia.SortAndFilter(enums.SortAndFilter.FREE_TEXT, enums.FreeText.UNLIMITED_TEXT, text=entry) == False:
self.status = "Fail"
                    writeToLog("INFO", "Step" + str(i) + ": FAILED to filter " + self.searchPage + " entries by '" + enums.FreeText.UNLIMITED_TEXT.value + " and " + entry + " text")
return
else:
i = i + 1
writeToLog("INFO", "Step " + str(i) + ": Going to verify filter " + self.searchPage + " entries by '" + enums.FreeText.UNLIMITED_TEXT.value + " and " + entry + " text")
if self.common.channel.verifyFiltersInAddToChannel(self.singleTextMap[entry]) == False:
self.status = "Fail"
writeToLog("INFO", "Step" + str(i) + ": FAILED to verify filter " + self.searchPage + " entries by '" + enums.FreeText.UNLIMITED_TEXT.value + " and " + entry + " text")
return
else:
i = i + 1
writeToLog("INFO", "Step " + str(i) + ": Going to clear the filter search menu")
if self.common.myMedia.filterClearAllWhenOpened() == False:
self.status = "Fail"
writeToLog("INFO", "Step" + str(i) + ": Failed to clear the search menu")
return
else:
i = i + 1
i = i
# SINGLE DATE NEGATIVE
writeToLog("INFO", "Step " + str(i) + ": Going to filter " + self.searchPage + " entries by: " + enums.SingleDate.DATE.value + "'")
if self.common.myMedia.SortAndFilter(enums.SortAndFilter.SINGLE_DATE, enums.SingleDate.DATE, year=self.year, month=self.month, day=self.dayInvalid) == False:
self.status = "Fail"
writeToLog("INFO", "Step " + str(i) + ": FAILED to filter " + self.searchPage + " entries by '" + enums.SingleDate.DATE.value + "'")
return
else:
i = i + 1
i = i
writeToLog("INFO", "STEP " + str(i) + ": Going to verify filter " + self.searchPage + " entries by: " + enums.SingleDate.DATE.value + "'")
if self.common.channel.verifyFiltersInAddToChannel(self.listInvalid) == False:
self.status = "Fail"
writeToLog("INFO", "STEP " + str(i) + ": FAILED to verify filter " + self.searchPage + " entries by '" + enums.SingleDate.DATE.value + "'")
return
else:
i = i + 1
i = i
writeToLog("INFO", "STEP " + str(i) + ": Going to clear the filter search menu")
if self.common.myMedia.filterClearAllWhenOpened() == False:
self.status = "Fail"
writeToLog("INFO", "STEP " + str(i) + ": Failed to clear the search menu")
return
else:
i = i + 1
i = i
# SINGLE DATE POSITIVE
# for entry in self.singleDateMap:
# i = i + 1
# writeToLog("INFO", "Step " + str(i) + ": Going to filter " + self.searchPage + " entries by: " + enums.SortAndFilter.SINGLE_DATE.value + "'")
# if self.common.myMedia.SortAndFilter(enums.SortAndFilter.SINGLE_DATE, enums.SingleDate.DATE, year=self.year, month=self.month, day=entry) == False:
# self.status = "Fail"
# writeToLog("INFO", "Step" + str(i) + ": FAILED to filter " + self.searchPage + " entries by '" + enums.SortAndFilter.SINGLE_DATE.value + "'")
# return
# else:
# i = i + 1
#
# writeToLog("INFO", "Step " + str(i) + ": Going to verify filter " + self.searchPage + " entries by: " + enums.SortAndFilter.SINGLE_DATE.value + "'")
# if self.common.channel.verifyFiltersInAddToChannel(self.singleDateMap[entry]) == False:
# self.status = "Fail"
# writeToLog("INFO", "Step" + str(i) + ": FAILED to verify filter " + self.searchPage + " entries by '" + enums.SortAndFilter.SINGLE_DATE.value + "'")
# return
# else:
# i = i + 1
##################################################################
writeToLog("INFO","TEST PASSED: All the entries are properly displayed in " + self.searchPage + " while using custom data filters")
# if an exception happened we need to handle it and fail the test
except Exception as inst:
self.status = clsTestService.handleException(self,inst,self.startTime)
########################### TEST TEARDOWN ###########################
def teardown_method(self,method):
try:
self.common.handleTestFail(self.status)
writeToLog("INFO","**************** Starting: teardown_method ****************")
writeToLog("INFO","**************** Ended: teardown_method *******************")
except:
pass
clsTestService.basicTearDown(self)
#write to log we finished the test
logFinishedTest(self,self.startTime)
assert (self.status == "Pass")
pytest.main('test_' + testNum + '.py --tb=line') | [
"45174452+CusHoria@users.noreply.github.com"
] | 45174452+CusHoria@users.noreply.github.com |
008a83290ee640ced68a64e70c2f58b05d59253e | caaf1b0754db1e676c37a6f1e58f19183754e654 | /sdk/network/azure-mgmt-network/generated_samples/application_gateway_delete.py | 74f5ae94361700e9cb4246e6cb6c0d04bc1c2e8a | [
"LicenseRef-scancode-generic-cla",
"MIT",
"LGPL-2.1-or-later"
] | permissive | rdomenzain/azure-sdk-for-python | 45dfb39121a0abda048c22e7309733a56259f525 | 58984255aeb904346b6958c5ba742749a2cc7d1b | refs/heads/master | 2023-07-07T06:53:12.967120 | 2023-07-04T16:27:37 | 2023-07-04T16:27:37 | 258,050,134 | 0 | 0 | MIT | 2020-04-23T00:12:14 | 2020-04-23T00:12:13 | null | UTF-8 | Python | false | false | 1,526 | py | # coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
"""
# PREREQUISITES
pip install azure-identity
pip install azure-mgmt-network
# USAGE
python application_gateway_delete.py
Before running the sample, please set the values of the client ID, tenant ID and client secret
of the AAD application as environment variables: AZURE_CLIENT_ID, AZURE_TENANT_ID,
AZURE_CLIENT_SECRET. For more info about how to get the value, please see:
https://docs.microsoft.com/azure/active-directory/develop/howto-create-service-principal-portal
"""
def main():
client = NetworkManagementClient(
credential=DefaultAzureCredential(),
subscription_id="subid",
)
client.application_gateways.begin_delete(
resource_group_name="rg1",
application_gateway_name="appgw",
).result()
# x-ms-original-file: specification/network/resource-manager/Microsoft.Network/stable/2022-11-01/examples/ApplicationGatewayDelete.json
if __name__ == "__main__":
main()
| [
"noreply@github.com"
] | rdomenzain.noreply@github.com |
b6136bca21599639df960e68a3941f35468a6848 | 67f36de3eec7f64b4b1070727ab7bc34e38d3724 | /apps/users/migrations/0002_usersdesc.py | 18e5cba06a0ea729d80fe2f82125dfdd954abab7 | [] | no_license | shd0812/django_shd_restframework | a46371756accffddfe3e915dc22f96f0bc2f0740 | d6ab9c2896b0e3766e17983019d2ac09a8c81b99 | refs/heads/main | 2023-01-19T11:17:19.097711 | 2020-11-22T11:25:05 | 2020-11-22T11:25:05 | 315,007,491 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 874 | py | # Generated by Django 3.1.2 on 2020-11-15 04:05
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('users', '0001_initial'),
]
operations = [
migrations.CreateModel(
name='UsersDesc',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('favorite', models.CharField(max_length=100)),
('address', models.CharField(max_length=100)),
('phone', models.CharField(max_length=11, unique=True)),
('users', models.OneToOneField(on_delete=django.db.models.deletion.CASCADE, to='users.userinfo')),
],
options={
'db_table': 'tb_users_desc',
},
),
]
| [
"759275499@qq.com"
] | 759275499@qq.com |
bf4b2947db1bed8d4e6c36d47effa7e96ffa5242 | ded13e921c8365c6113911a5834969ec3d33f989 | /063/Unique Paths II.py | a082d0d5a27c38b339744a82a15a7aaff03aecc0 | [] | no_license | ArrayZoneYour/LeetCode | b7b785ef0907640623e5ab8eec1b8b0a9d0024d8 | d09f56d4fef859ca4749dc753d869828f5de901f | refs/heads/master | 2021-04-26T23:03:10.026205 | 2018-05-09T15:49:08 | 2018-05-09T15:49:08 | 123,922,098 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 991 | py | # /usr/bin/python
# coding: utf-8
class Solution:
def uniquePathsWithObstacles(self, obstacleGrid):
"""
:type obstacleGrid: List[List[int]]
:rtype: int
"""
height = len(obstacleGrid)
width = len(obstacleGrid[0])
step_num = [[0 for col in range(width)] for row in range(height)]
if not obstacleGrid[0][0]:
step_num[0][0] = 1
for row in range(1, height):
if not obstacleGrid[row][0]:
step_num[row][0] = step_num[row-1][0]
for col in range(1, width):
if not obstacleGrid[0][col]:
step_num[0][col] = step_num[0][col-1]
for row in range(1, height):
for col in range(1, width):
if not obstacleGrid[row][col]:
step_num[row][col] = step_num[row-1][col] + step_num[row][col-1]
return step_num[-1][-1]
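
# The 2-D table above can be cross-checked with a rolling single-row variant
# of the same DP. This is an independent verification sketch, not part of the
# original solution:

```python
def unique_paths_with_obstacles(grid):
    # Same recurrence as step_num above, but keeps only one row in memory:
    # row[col] accumulates paths arriving from the left; an obstacle resets
    # a cell to 0 (no path can pass through it).
    if not grid or grid[0][0] == 1:
        return 0
    width = len(grid[0])
    row = [0] * width
    row[0] = 1
    for r in grid:
        for col in range(width):
            if r[col] == 1:
                row[col] = 0
            elif col > 0:
                row[col] += row[col - 1]
    return row[-1]

# The grid used in the demo below: 2 paths avoid the central obstacle.
assert unique_paths_with_obstacles([[0, 0, 0], [0, 1, 0], [0, 0, 0]]) == 2
assert unique_paths_with_obstacles([[0, 1], [0, 0]]) == 1
```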
print(Solution().uniquePathsWithObstacles([
[0,0,0],
[0,1,0],
[0,0,0]
])) | [
"hustliyidong@gmail.com"
] | hustliyidong@gmail.com |
5b2b9ffdd1e257920851c47cf8527bcd03b9c247 | 0b134572e3ac3903ebb44df6d4138cbab9d3327c | /app/grandchallenge/workstation_configs/migrations/0008_auto_20210920_1439.py | a5c71b249716b80883be22b6699341c64c3c59fb | [
"Apache-2.0"
] | permissive | comic/grand-challenge.org | 660de3bafaf8f4560317f1dfd9ae9585ec272896 | dac25f93b395974b32ba2a8a5f9e19b84b49e09d | refs/heads/main | 2023-09-01T15:57:14.790244 | 2023-08-31T14:23:04 | 2023-08-31T14:23:04 | 4,557,968 | 135 | 53 | Apache-2.0 | 2023-09-14T13:41:03 | 2012-06-05T09:26:39 | Python | UTF-8 | Python | false | false | 1,446 | py | # Generated by Django 3.1.13 on 2021-09-20 14:39
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
(
"workstation_configs",
"0007_workstationconfig_auto_jump_center_of_gravity",
)
]
operations = [
migrations.RemoveField(
model_name="workstationconfig", name="client_rendered_sidebar"
),
migrations.AddField(
model_name="workstationconfig",
name="show_algorithm_output_plugin",
field=models.BooleanField(
default=True,
help_text="Show algorithm outputs with navigation controls",
),
),
migrations.AddField(
model_name="workstationconfig",
name="show_image_switcher_plugin",
field=models.BooleanField(default=True),
),
migrations.AddField(
model_name="workstationconfig",
name="show_lut_selection_tool",
field=models.BooleanField(default=True),
),
migrations.AddField(
model_name="workstationconfig",
name="show_overlay_plugin",
field=models.BooleanField(default=True),
),
migrations.AddField(
model_name="workstationconfig",
name="show_overlay_selection_tool",
field=models.BooleanField(default=True),
),
]
| [
"noreply@github.com"
] | comic.noreply@github.com |
d12a3c1a0a1d4a430b8fb15efcaa15d9f558aec6 | 892ab4d7836e369d0e657be91b4dcd3e8153f372 | /compute/wps/tests/test_tasks_edas.py | 34133d825a836e498dd903b6530238665d2e93a4 | [] | no_license | davidcaron/esgf-compute-wps | 9e28e3af1ec1f524abdcce39cfe42c7bbcb3d50b | c3ffca449f65f8d206032d040ead6e14085f04ab | refs/heads/master | 2020-04-16T20:54:59.095653 | 2018-05-04T22:00:45 | 2018-05-04T22:00:45 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,793 | py | import os
import shutil
import cwt
import mock
from django import test
from . import helpers
from wps import models
from wps import settings
from wps import WPSError
from wps.tasks import cdat
from wps.tasks import edas
class EDASTaskTestCase(test.TestCase):
fixtures = ['processes.json', 'servers.json', 'users.json']
def setUp(self):
self.user = models.User.objects.all()[0]
self.server = models.Server.objects.all()[0]
self.process = models.Process.objects.all()[0]
def test_check_exceptions_error(self):
with self.assertRaises(WPSError) as e:
edas.check_exceptions('!<response><exceptions><exception>error</exception></exceptions></response>')
def test_check_exceptions(self):
edas.check_exceptions('!<response></response>')
def test_listen_edas_output_timeout(self):
mock_self = mock.MagicMock()
mock_poller = mock.MagicMock(**{'poll.return_value': {}})
mock_job = mock.MagicMock()
with self.assertRaises(WPSError) as e:
edas.listen_edas_output(mock_self, mock_poller, mock_job)
def test_listen_edas_output_heartbeat(self):
mock_self = mock.MagicMock()
mock_poller = mock.MagicMock(**{'poll.return_value': [(mock.MagicMock(**{'recv.side_effect': ['response', 'file']}), 0)]})
mock_job = mock.MagicMock()
result = edas.listen_edas_output(mock_self, mock_poller, mock_job)
self.assertIsInstance(result, str)
self.assertEqual(result, 'file')
def test_listen_edas_output(self):
mock_self = mock.MagicMock()
mock_poller = mock.MagicMock(**{'poll.return_value': [(mock.MagicMock(**{'recv.return_value': 'file'}), 0)]})
mock_job = mock.MagicMock()
result = edas.listen_edas_output(mock_self, mock_poller, mock_job)
self.assertIsInstance(result, str)
self.assertEqual(result, 'file')
@mock.patch('wps.tasks.edas.cdms2.open')
@mock.patch('shutil.move')
@mock.patch('wps.tasks.edas.listen_edas_output')
@mock.patch('wps.tasks.edas.initialize_socket')
def test_edas_submit_listen_failed(self, mock_init_socket, mock_listen, mock_move, mock_open):
mock_open.return_value = mock.MagicMock()
mock_open.return_value.__enter__.return_value.variables.keys.return_value = ['tas']
mock_job = mock.MagicMock()
mock_listen.return_value = None
variables = {
'v0': {'id': 'tas|v0', 'uri': 'file:///test.nc'},
}
operation = {
'name': 'EDAS.sum',
'input': ['v0'],
'axes': 'time'
}
with self.assertRaises(WPSError) as e:
edas.edas_submit({}, variables, {}, operation, user_id=self.user.id, job_id=0)
@mock.patch('wps.tasks.edas.process.Process')
@mock.patch('wps.tasks.edas.cdms2.open')
@mock.patch('shutil.move')
@mock.patch('wps.tasks.edas.listen_edas_output')
@mock.patch('wps.tasks.edas.initialize_socket')
def test_edas_submit(self, mock_init_socket, mock_listen, mock_move, mock_open, mock_process):
mock_open.return_value = mock.MagicMock()
mock_open.return_value.__enter__.return_value.variables.keys.return_value = ['tas']
mock_job = mock.MagicMock()
variables = {
'v0': {'id': 'tas|v0', 'uri': 'file:///test.nc'},
}
operation = {
'name': 'EDAS.sum',
'input': ['v0'],
'axes': 'time'
}
result = edas.edas_submit({}, variables, {}, operation, user_id=self.user.id, job_id=0)
self.assertIsInstance(result, dict)
calls = mock_init_socket.call_args_list
self.assertEqual(calls[0][0][3], 5670)
self.assertEqual(calls[1][0][3], 5671)
| [
"boutte.jason@gmail.com"
] | boutte.jason@gmail.com |
bd5e6da4ad39b58828acace4b57acb32cd9a2953 | 1f11c41403d664893f8fe85e422ebbd9d9422cee | /ABC048/a.py | 095ce3873f8db294aeed3b65473a8cf0094ff912 | [] | no_license | shibata-kento/AtCoder | 8ad558f098f715ed50912fe7bf934d301a35c4ca | 28512621a359a6973bd8f1615bcd9f4ea8a9fa6c | refs/heads/master | 2023-01-28T10:36:14.109629 | 2020-12-09T13:33:20 | 2020-12-09T13:33:20 | 267,014,562 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 52 | py | s = input().split()[1][0]
print('A{}C'.format(s)) | [
"shibata0203.jp@gmail.com"
] | shibata0203.jp@gmail.com |
8fd790caf96a76d675989b45a6882cf5d8639859 | 824a58f05b24ef9ec0920aa046498c816f5c5121 | /models.py | 39fe32464103cff49911ac03919aff948c927db7 | [] | no_license | zhakguder/adversarial-generation | c383e4338b53e3763ccbf572644ff8261d717ea6 | 10d2eb64f1b6a9117d86f3333a236c25b399dd3a | refs/heads/master | 2020-05-17T16:40:50.319921 | 2019-05-21T14:25:50 | 2019-05-21T14:25:50 | 183,825,906 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 4,985 | py | from functools import partial
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Flatten
from utils import _softplus_inverse
import tensorflow_probability as tfp
from custom_layers import LSHLayer, clusterLayer
from settings import get_settings
from functools import reduce
tfd = tfp.distributions
tfpl = tfp.layers
from ipdb import set_trace
flags, params = get_settings()
forward_calls = ''
layer_count = 0
def build_net(hidden_dims, trainable=True):
dense_relu = partial(Dense, activation='tanh')
net = Sequential()
if forward_calls in ['encoder', 'mnist']:
prev_dim = (28, 28)
elif forward_calls == 'decoder':
prev_dim = params['latent_dim']
if forward_calls in ['mnist', 'encoder']:
net.add(Flatten(input_shape=prev_dim))
prev_dim = reduce(lambda x,y: x*y, prev_dim)
for idx, dim in enumerate(hidden_dims):
net.add(dense_relu(dim, name="{}_relu_{}".format(forward_calls, idx), input_shape = [prev_dim], trainable=trainable)) #
#print('Dim: {}'.format(prev_dim))
prev_dim = dim
return net
def make_encoder(hidden_dims, latent_dim, out_activation, network=None):
global forward_calls
forward_calls = 'encoder'
if network is not None:
encoder_net = network
else:
encoder_net = build_net(hidden_dims)
encoder_net.add(Dense(latent_dim * 2, activation = out_activation, name = '{}_{}'.format(forward_calls, out_activation)))
def encoder(inputs):
outputs = encoder_net(inputs)
return outputs
return encoder, encoder_net
def make_decoder(hidden_dims, output_dim, network=None):
global forward_calls
out_activation = 'linear'
forward_calls = 'decoder'
if network is not None:
decoder_net = network
else:
decoder_net = build_net(hidden_dims)
decoder_net.add(Dense(output_dim, activation = out_activation, name = '{}_{}'.format(forward_calls, out_activation)))
def decoder(sample):
reconstruction = decoder_net(sample)
return reconstruction
return decoder, decoder_net
def make_lsh(dim, w):
net = Sequential()
net.add(LSHLayer(dim, w))
def lsh(reconstructions):
hash_codes = net(reconstructions)
return hash_codes
return lsh, net
def make_cluster():
net = Sequential()
net.add(clusterLayer())
def cluster(inputs):
q_s = net(inputs)
return q_s
return cluster
def make_mnist(network_dims):
global forward_calls
forward_calls = 'mnist'
net = build_net(network_dims, trainable=True)
net.add(Dense(10, activation='linear', trainable=True))
return net
def initialize_eval_mnist(net):
data = tf.random.normal((1, 784))
net(data)
return net
def set_mnist_weights(net, weights):
used = 0
for i, layer in enumerate(net.layers):
if i > 0:
weight_shape = layer.weights[0].shape
bias_shape = layer.weights[1].shape
n_weight = tf.reduce_prod(weight_shape).numpy()
n_bias = tf.reduce_prod(bias_shape).numpy()
tmp_used = used + n_weight
layer_weights = tf.reshape(weights[used:tmp_used], weight_shape)
used = tmp_used
tmp_used += n_bias
layer_biases = weights[used:tmp_used]
used = tmp_used
net.layers[i].set_weights([layer_weights, layer_biases])
return net
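
# The slicing arithmetic in set_mnist_weights (consume prod(weight_shape)
# values for the kernel, then prod(bias_shape) values for the bias, layer by
# layer, ignoring the weightless Flatten layer) can be illustrated with a
# framework-free sketch. `unpack_flat` is a hypothetical helper for this
# illustration only, not part of the codebase:

```python
def unpack_flat(flat, shapes):
    # Mirror of the slicing in set_mnist_weights: for each layer, take
    # prod(weight_shape) values, then prod(bias_shape) values, advancing a
    # single `used` cursor through the flat parameter vector.
    def prod(shape):
        n = 1
        for d in shape:
            n *= d
        return n

    used, layers = 0, []
    for w_shape, b_shape in shapes:
        n_w, n_b = prod(w_shape), prod(b_shape)
        w = flat[used:used + n_w]
        used += n_w
        b = flat[used:used + n_b]
        used += n_b
        layers.append((w, b))
    assert used == len(flat), "flat vector must match the total parameter count"
    return layers

# Two dense layers, (3 -> 2) then (2 -> 1): 6+2 + 2+1 = 11 parameters.
layers = unpack_flat(list(range(11)), [((3, 2), (2,)), ((2, 1), (1,))])
assert layers[0] == (list(range(6)), [6, 7])
assert layers[1] == ([8, 9], [10])
```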
def mnist_classifier_net(input_shape, output_shape, training):
net = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=input_shape),
tf.keras.layers.Dense(128, activation='relu', trainable=training),
tf.keras.layers.Dense(256, activation='relu', trainable=training),
tf.keras.layers.Dense(output_shape, trainable=training)
])
return net
def cifar10_classifier_net(filters_array, dropout_array, input_shape, output_shape, training):
from tensorflow.keras.layers import Conv2D, BatchNormalization, MaxPooling2D, Dropout
net = tf.keras.models.Sequential()
layer_count = 0
for filters, dropout in zip (filters_array, dropout_array):
for i in range(2):
if layer_count == 0:
net.add(
Conv2D(filters, (3,3), padding='same', activation='relu', trainable=training, input_shape = input_shape))
net.add(BatchNormalization())
layer_count +=1
else:
net.add(Conv2D(filters, (3,3), padding='same', activation='relu', trainable=training))
net.add(BatchNormalization())
layer_count += 1
net.add(MaxPooling2D(pool_size=(2,2)))
net.add(Dropout(dropout))
net.add(Flatten())
net.add(tf.keras.layers.Dense(output_shape, trainable=training))
return net
if __name__ == '__main__':
    net = cifar10_classifier_net([32, 64, 128], [0.2, 0.3, 0.4], (32, 32, 3), 10, True)  # CIFAR-10 images are 32x32x3
| [
"zeynep.hakguder@huskers.unl.edu"
] | zeynep.hakguder@huskers.unl.edu |
2528570aad7cb0dbdf5b4fe262b0745d0d60e920 | ecb4eb32a75e626ebee29142fa0b28e18362dd8c | /adam/domain/document_item.py | c3eb014f1ffdab1b17baccbf0ead952880b7b1d3 | [
"BSD-3-Clause"
] | permissive | souzaux/adam | a2e29d210ccfbf8eb02a5f1a1dc5be297515fdc3 | cbb261b909adde39b874d355c47fe3824cd3e9e1 | refs/heads/master | 2020-05-30T13:25:55.290349 | 2016-08-04T09:22:12 | 2016-08-04T09:22:12 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,726 | py | # -*- coding: utf-8 -*-
"""
adam.domain.document_item.py
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
'document_item' schema settings.
:copyright: (c) 2016 by Nicola Iarocci and CIR2000.
:license: BSD, see LICENSE for more details.
"""
from common import variation
from vat import vat_field
from warehouses import warehouse_field
detail = {
'type': 'dict',
'schema': {
'sku': {'type': 'string'},
'description': {'type': 'string'},
'color': {'type': 'string'},
'unit_of_measure': {'type': 'string'},
'notes': {'type': 'string'},
'serial_number': {'type': 'string'},
'lot': {
'type': 'dict',
'schema': {
'number': {'type': 'string'},
'date': {'type': 'datetime'},
'expiration': {'type': 'datetime'},
}
},
'size': {
'type': 'dict',
'schema': {
'number': {'type': 'string'},
'name': {'type': 'string'},
}
},
}
}
item = {
'type': 'dict',
'schema': {
'guid': {'type': 'string'},
'quantity': {'type': 'float'},
'processed_quantity': {'type': 'float'},
'price': {'type': 'integer'},
'net_price': {'type': 'integer'},
'price_vat_inclusive': {'type': 'integer'},
'total': {'type': 'integer'},
'withholding_tax': {'type': 'boolean'},
'commission': {'type': 'float'},
'area_manager_commission': {'type': 'float'},
'detail': detail,
'vat': vat_field,
'price_list': {'type': 'string'},
'warehouse': warehouse_field,
'variation_collection': variation,
}
}
| [
"nicola@nicolaiarocci.com"
] | nicola@nicolaiarocci.com |
b9ada5a859b6f7ba9697a4b51f1a7f8c32c8c74f | e976eb4db57ddee4947cbab8746446dd53f6cf6f | /1-50/与所有单词相关联的字串.py | 745ac8d7d7a3e650d68f9b5975bee5c1a37da875 | [] | no_license | Aiyane/aiyane-LeetCode | 5328529079bcfbc84f4e4d67e3d8736b9745dc0d | 3c4d5aacc33f3ed66b6294894a767862170fb4f6 | refs/heads/master | 2020-04-01T20:33:54.125654 | 2019-06-25T09:56:10 | 2019-06-25T09:56:10 | 153,610,015 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,974 | py | #!/usr/bin/env/python3
# -*- coding: utf-8 -*-
# 与所有单词相关联的字串.py
"""
给定一个字符串 s 和一些长度相同的单词 words。在 s 中找出可以恰好串联 words 中所有单词的子串的起始位置。
注意子串要与 words 中的单词完全匹配,中间不能有其他字符,但不需要考虑 words 中单词串联的顺序。
示例 1:
输入:
s = "barfooothefooobarman",
words = ["fooo","bar"]
输出: [0,9]
解释: 从索引 0 和 9 开始的子串分别是 "barfoor" 和 "foobar" 。
输出的顺序不重要, [9,0] 也是有效答案。
示例 2:
输入:
s = "wordgoodstudentgoodword",
words = ["word","student"]
输出: []
"""
"""
思路:构造单词字典,因为长度是一样的,所以大循环里只需要循环min(width, length_s - length_words + 1)
如果获取的单词比结果还多,从第一个单词开始去掉,直到符合结果
"""
import profile
class Solution:
def findSubstring(self, s, words):
"""
:type s: str
:type words: List[str]
:rtype: List[int]
"""
if not s or not words:
return []
length_s = len(s)
width = len(words[0])
length_words = len(words)*width
if length_s < length_words:
return []
result = []
        # First, build a dict counting how many times each word is required
times = dict()
for word in words:
if word not in times:
times[word] = 1
else:
times[word] += 1
        # Offsets repeat with period width, so min(width, length_s - length_words + 1) starting points cover all cases
ll = min(width, length_s - length_words + 1)
for i in range(ll):
s_start, s_end = i, i
d = dict()
while s_start + width <= length_s:
word = s[s_end:s_end+width]
s_end += width
if word not in times:
s_start = s_end
d.clear()
                    # Not enough characters left for a full match; stop early
if length_s - s_start < length_words:
break
else:
if word not in d:
d[word] = 1
else:
d[word] += 1
                    # If this word now occurs more often than allowed, drop words from the left until the counts fit
while d[word] > times[word]:
d[s[s_start:s_start+width]] -= 1
s_start += width
if s_end - s_start == length_words:
result.append(s_start)
return result
def main():
s = "wordgoodgoodgoodbestword"
words = ["word","good","best","word"]
# s = "barfoothefoobarman"
# words = ["foo", "bar"]
sol = Solution()
print(sol.findSubstring(s, words))
if __name__ == '__main__':
profile.run('main()')
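
# For a quick sanity check, here is a compact independent re-implementation
# of the same word-dict sliding window using collections.Counter, asserted
# against the two inputs exercised in main() above. This is a verification
# sketch, not a replacement for the class:

```python
from collections import Counter

def find_substring(s, words):
    # Same idea as Solution.findSubstring: one sliding-window pass per
    # word-boundary offset, shrinking the window from the left whenever a
    # word is over-counted.
    if not s or not words:
        return []
    width = len(words[0])
    need = Counter(words)
    result = []
    for offset in range(width):
        left, count, window = offset, 0, Counter()
        for right in range(offset, len(s) - width + 1, width):
            word = s[right:right + width]
            if word not in need:
                # Reset the window past the invalid word.
                window.clear()
                count = 0
                left = right + width
                continue
            window[word] += 1
            count += 1
            while window[word] > need[word]:
                window[s[left:left + width]] -= 1
                left += width
                count -= 1
            if count == len(words):
                result.append(left)
    return result

assert sorted(find_substring("barfoothefoobarman", ["foo", "bar"])) == [0, 9]
assert find_substring("wordgoodgoodgoodbestword", ["word", "good", "best", "word"]) == []
```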
| [
"2310091880qq@gmail.com"
] | 2310091880qq@gmail.com |
3569013ca6cef03a594258fb5f91a3563d856154 | 801f367bd19b8f2ab08669fd0a85aad7ace961ac | /cleaned_version/_train_utils/utils.py | b830833f01ab67eaec0665e6da7d821dc5b770b1 | [
"MIT"
] | permissive | Wendong-Huo/thesis-bodies | d91b694a6b1b6a911476573ed1ed27eb27fb000d | dceb8a36efd2cefc611f6749a52b56b9d3572f7a | refs/heads/main | 2023-04-17T18:32:38.541537 | 2021-03-12T19:53:23 | 2021-03-12T19:53:23 | 623,471,326 | 1 | 0 | null | 2023-04-04T12:45:48 | 2023-04-04T12:45:47 | null | UTF-8 | Python | false | false | 6,053 | py | import glob
import os
import gym
from stable_baselines3 import A2C, DDPG, DQN, HER, PPO, SAC, TD3
from stable_baselines3.common.monitor import Monitor
from stable_baselines3.common.vec_env import DummyVecEnv, VecNormalize
from stable_baselines3.common.utils import set_random_seed
from stable_baselines3.common.callbacks import BaseCallback, EvalCallback
from policies.ppo_with_body_info import PPO_with_body_info
from policies.ppo_without_body_info import PPO_without_body_info
ALGOS = {
# "a2c": A2C,
# "ddpg": DDPG,
# "dqn": DQN,
# "her": HER,
# "sac": SAC,
# "td3": TD3,
"ppo": PPO_without_body_info,
"ppo_w_body": PPO_with_body_info
}
def linear_schedule(initial_value):
"""
Linear learning rate schedule.
:param initial_value: (float or str)
:return: (function)
"""
if isinstance(initial_value, str):
initial_value = float(initial_value)
def func(progress):
"""
Progress will decrease from 1 (beginning) to 0
:param progress: (float)
:return: (float)
"""
return progress * initial_value
return func
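The closure above can be exercised on its own; this standalone sketch repeats its body so the call pattern is visible:

```python
def linear_schedule(initial_value):
    # progress decreases from 1.0 (start of training) to 0.0 (end)
    if isinstance(initial_value, str):
        initial_value = float(initial_value)

    def func(progress):
        return progress * initial_value

    return func

lr = linear_schedule("3e-4")  # string input is accepted, as above
print(lr(1.0))  # 0.0003
print(lr(0.5))  # 0.00015
print(lr(0.0))  # 0.0
```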
def get_latest_run_id(log_path, env_id):
"""
Returns the latest run number for the given log name and log path,
by finding the greatest number in the directories.
:param log_path: (str) path to log folder
:param env_id: (str)
:return: (int) latest run number
"""
max_run_id = 0
for path in glob.glob(log_path + "/{}_[0-9]*".format(env_id)):
file_name = path.split("/")[-1]
ext = file_name.split("_")[-1]
if env_id == "_".join(file_name.split("_")[:-1]) and ext.isdigit() and int(ext) > max_run_id:
max_run_id = int(ext)
return max_run_id
def create_env(n_envs, env_id, kwargs, seed=0, normalize=False, normalize_kwargs=None, eval_env=False, log_dir=None):
    """
    Create the environment and wrap it if necessary
    :param n_envs: (int) number of parallel environments
    :param env_id: (str) gym environment id
    :param kwargs: env constructor kwargs: a single dict when n_envs == 1,
        otherwise a list with one dict per environment
    :param seed: (int)
    :param normalize: (bool) whether to wrap the result in ``VecNormalize``
    :param normalize_kwargs: (dict) keyword arguments for ``VecNormalize``
    :param eval_env: (bool) Whether it is an environment used for evaluation or not
    :param log_dir: (str) directory for ``Monitor`` logs
    :return: (Union[gym.Env, VecEnv])
    """
if n_envs == 1:
# use rank=127 so eval_env won't overlap with any training_env.
env = DummyVecEnv(
[make_env(env_id, 127, seed, log_dir=log_dir, env_kwargs=kwargs)]
)
else:
# env = SubprocVecEnv([make_env(env_id, i, args.seed) for i in range(n_envs)])
# On most env, SubprocVecEnv does not help and is quite memory hungry
env = DummyVecEnv(
[
make_env(env_id, i, seed, log_dir=log_dir, env_kwargs=kwargs[i])
for i in range(n_envs)
]
)
if normalize:
# Copy to avoid changing default values by reference
local_normalize_kwargs = normalize_kwargs.copy()
# Do not normalize reward for env used for evaluation
if eval_env:
if len(local_normalize_kwargs) > 0:
local_normalize_kwargs["norm_reward"] = False
else:
local_normalize_kwargs = {"norm_reward": False}
env = VecNormalize(env, **local_normalize_kwargs)
return env
def make_env(env_id, rank=0, seed=0, log_dir=None, wrapper_class=None, env_kwargs=None):
"""
Helper function to multiprocess training
and log the progress.
:param env_id: (str)
:param rank: (int)
:param seed: (int)
:param log_dir: (str)
:param wrapper_class: (Type[gym.Wrapper]) a subclass of gym.Wrapper
to wrap the original env with
:param env_kwargs: (Dict[str, Any]) Optional keyword argument to pass to the env constructor
"""
if log_dir is not None:
os.makedirs(log_dir, exist_ok=True)
if env_kwargs is None:
env_kwargs = {}
def _init():
set_random_seed(seed * 128 + rank)
env = gym.make(env_id, **env_kwargs)
# Wrap first with a monitor (e.g. for Atari env where reward clipping is used)
log_file = os.path.join(log_dir, str(rank)) if log_dir is not None else None
# Monitor success rate too for the real robot
info_keywords = ("is_success",) if "Neck" in env_id else ()
env = Monitor(env, log_file, info_keywords=info_keywords)
# Dict observation space is currently not supported.
# https://github.com/hill-a/stable-baselines/issues/321
# We allow a Gym env wrapper (a subclass of gym.Wrapper)
if wrapper_class:
env = wrapper_class(env)
env.seed(seed * 128 + rank)
return env
return _init
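`make_env` seeds each worker with `seed * 128 + rank`, which keeps worker streams disjoint and leaves `rank=127` (reserved for the eval env in `create_env` above) collision-free for up to 127 training workers. A quick check of that arithmetic:

```python
def env_seed(seed, rank):
    # Same formula as make_env above.
    return seed * 128 + rank

training = [env_seed(3, rank) for rank in range(8)]
evaluation = env_seed(3, 127)
print(training)    # [384, 385, 386, 387, 388, 389, 390, 391]
print(evaluation)  # 511
```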
class SaveVecNormalizeCallback(BaseCallback):
"""
Callback for saving a VecNormalize wrapper every ``save_freq`` steps
:param save_freq: (int)
:param save_path: (str) Path to the folder where ``VecNormalize`` will be saved, as ``vecnormalize.pkl``
:param name_prefix: (str) Common prefix to the saved ``VecNormalize``, if None (default)
only one file will be kept.
"""
def __init__(self, save_freq: int, save_path: str, name_prefix=None, verbose=0):
super(SaveVecNormalizeCallback, self).__init__(verbose)
self.save_freq = save_freq
self.save_path = save_path
self.name_prefix = name_prefix
def _init_callback(self) -> None:
# Create folder if needed
if self.save_path is not None:
os.makedirs(self.save_path, exist_ok=True)
def _on_step(self) -> bool:
if self.n_calls % self.save_freq == 0:
if self.name_prefix is not None:
path = os.path.join(self.save_path, f"{self.name_prefix}_{self.num_timesteps}_steps.pkl")
else:
path = os.path.join(self.save_path, "vecnormalize.pkl")
if self.model.get_vec_normalize_env() is not None:
self.model.get_vec_normalize_env().save(path)
if self.verbose > 1:
print(f"Saving VecNormalize to {path}")
return True
| [
"sliu1@uvm.edu"
] | sliu1@uvm.edu |
065d7e8d210b3c6e4e6c55a6995e288bbd83b8c6 | fe0017ae33385d7a2857d0aa39fa8861b40c8a88 | /env/lib/python3.8/site-packages/sklearn/mixture/base.py | 1daf2061f2fd01b8a342289d9497009a817083cd | [] | no_license | enriquemoncerrat/frasesback | eec60cc7f078f9d24d155713ca8aa86f401c61bf | e2c77f839c77f54e08a2f0930880cf423e66165b | refs/heads/main | 2023-01-03T23:21:05.968846 | 2020-10-18T21:20:27 | 2020-10-18T21:20:27 | 305,198,286 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 556 | py |
# THIS FILE WAS AUTOMATICALLY GENERATED BY deprecated_modules.py
import sys
# mypy error: Module X has no attribute y (typically for C extensions)
from . import _base # type: ignore
from ..externals._pep562 import Pep562
from ..utils.deprecation import _raise_dep_warning_if_not_pytest
deprecated_path = 'sklearn.mixture.base'
correct_import_path = 'sklearn.mixture'
_raise_dep_warning_if_not_pytest(deprecated_path, correct_import_path)
def __getattr__(name):
return getattr(_base, name)
if not sys.version_info >= (3, 7):
Pep562(__name__)
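The shim above works through module-level `__getattr__` (PEP 562, hence the `Pep562` fallback for Python < 3.7). A minimal illustration of the same pattern; all module names and values here are invented:

```python
import sys
import types
import warnings

real = types.ModuleType("pkg._impl")     # stand-in for the new module
real.answer = 42

legacy = types.ModuleType("pkg.legacy")  # stand-in for the deprecated path

def _forward(name):
    warnings.warn("pkg.legacy is deprecated; use pkg._impl instead",
                  FutureWarning)
    return getattr(real, name)

legacy.__getattr__ = _forward  # honoured by Python >= 3.7 (PEP 562)
sys.modules["pkg.legacy"] = legacy

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    value = legacy.answer

print(value)  # 42, and a FutureWarning was recorded
```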
| [
"enriquemoncerrat@gmail.com"
] | enriquemoncerrat@gmail.com |
282b16e40ec888666211472ca10dba0e709687b1 | 81fe7f2faea91785ee13cb0297ef9228d832be93 | /HackerRank/Contests/WeekOfCode21/kangaroo.py | 0ba85ccd8448b9b12f2e0718b459aa938a331f45 | [] | no_license | blegloannec/CodeProblems | 92349c36e1a35cfc1c48206943d9c2686ea526f8 | 77fd0fa1f1a519d4d55265b9a7abf12f1bd7d19e | refs/heads/master | 2022-05-16T20:20:40.578760 | 2021-12-30T11:10:25 | 2022-04-22T08:11:07 | 54,330,243 | 5 | 1 | null | null | null | null | UTF-8 | Python | false | false | 283 | py | #!/usr/bin/env python
import sys
def main():
x1,v1,x2,v2 = map(int,sys.stdin.readline().split())
if x1==x2:
print 'YES'
elif v1==v2:
print 'NO'
elif (x1-x2)*(v2-v1)>=0 and (x1-x2)%(v2-v1)==0:
print 'YES'
else:
print 'NO'
main()
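The divisibility test above can be packaged as a function and spot-checked (the function name is mine):

```python
def kangaroo_meets(x1, v1, x2, v2):
    # They meet iff the gap is a multiple of the closing speed,
    # with signs agreeing (a non-negative number of jumps).
    if x1 == x2:
        return True
    if v1 == v2:
        return False
    return (x1 - x2) * (v2 - v1) >= 0 and (x1 - x2) % (v2 - v1) == 0

print(kangaroo_meets(0, 3, 4, 2))  # True: they meet after 4 jumps
print(kangaroo_meets(0, 2, 5, 3))  # False: the leader is also faster
```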
| [
"blg@gmx.com"
] | blg@gmx.com |
8d1ae47c9f07f66017f56445147ce843cb789c27 | 504a5e7c9319bda04e3d33978f64404bba47392a | /Python 200/034.py | 4afcfdef5e71d2866eeb60876b0ee093ce0122a4 | [] | no_license | Ani-Gil/Python | 9fd02d321a7e21c07ea5fa30ae0f0336cae15861 | 5bb019bfbe19f10f20f6c5883299011717d20f55 | refs/heads/main | 2023-03-15T02:21:07.467329 | 2021-02-24T01:20:00 | 2021-02-24T01:20:00 | 324,080,371 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 383 | py | # 034.py - 문자열 포맷팅 이해하기
txt1 = '자바'; txt2 = '파이썬'
num1 = 5; num2 = 10
print('나는 %s보다 %s에 더 익숙합니다.' % (txt1, txt2))
print('%s은 %s보다 %d배 더 쉽습니다.' % (txt2, txt1, num1))
print('%d + %d + %d' % (num1, num2, num1 + num2))
print('작년 세계 경제 성장률은 전년에 비해 %d%% 포인트 증가했다.' % num1)
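For comparison, the same lesson with f-strings (Python 3.6+); the wording below is an English sketch of the file's strings:

```python
txt1, txt2 = 'Java', 'Python'
num1, num2 = 5, 10
print(f'Compared to {txt1}, I am more used to {txt2}.')
print(f'{txt2} is easier to pick up than {txt1} by a factor of {num1}.')
print(f'{num1} + {num2} + {num1 + num2}')
print(f'Growth rose by {num1}% points.')  # no %% escaping needed in f-strings
```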
| [
"dlatldn426@gmail.com"
] | dlatldn426@gmail.com |
8ff9155fca196073894615de07c53e2ea89d6b7b | 364b36d699d0a6b5ddeb43ecc6f1123fde4eb051 | /_downloads_1ed/fig_lyndenbell_toy.py | 80a9743fd39ab85865499aac101b2c96c46b7eaf | [] | no_license | astroML/astroml.github.com | eae3bfd93ee2f8bc8b5129e98dadf815310ee0ca | 70f96d04dfabcd5528978b69c217d3a9a8bc370b | refs/heads/master | 2022-02-27T15:31:29.560052 | 2022-02-08T21:00:35 | 2022-02-08T21:00:35 | 5,871,703 | 2 | 5 | null | 2022-02-08T21:00:36 | 2012-09-19T12:55:23 | HTML | UTF-8 | Python | false | false | 4,082 | py | """
Luminosity function code on toy data
------------------------------------
Figure 4.9.
An example of using Lynden-Bell's C- method to estimate a bivariate
distribution from a truncated sample. The lines in the left panel show the true
one-dimensional distributions of x and y (truncated Gaussian distributions).
The two-dimensional distribution is assumed to be separable; see eq. 4.85.
A realization of the distribution is shown in the right panel, with a
truncation given by the solid line. The points in the left panel are computed
from the truncated data set using the C- method, with error bars from 20
bootstrap resamples.
"""
# Author: Jake VanderPlas
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
import numpy as np
from matplotlib import pyplot as plt
from scipy import stats
from astroML.lumfunc import bootstrap_Cminus
#----------------------------------------------------------------------
# This function adjusts matplotlib settings for a uniform feel in the textbook.
# Note that with usetex=True, fonts are rendered with LaTeX. This may
# result in an error if LaTeX is not installed on your system. In that case,
# you can set usetex to False.
from astroML.plotting import setup_text_plots
setup_text_plots(fontsize=8, usetex=True)
#------------------------------------------------------------
# Define and sample our distributions
N = 10000
np.random.seed(42)
# Define the input distributions for x and y
x_pdf = stats.truncnorm(-2, 1, 0.66666, 0.33333)
y_pdf = stats.truncnorm(-1, 2, 0.33333, 0.33333)
x = x_pdf.rvs(N)
y = y_pdf.rvs(N)
# define the truncation: we'll design this to be symmetric
# so that xmax(y) = max_func(y)
# and ymax(x) = max_func(x)
max_func = lambda t: 1. / (0.5 + t) - 0.5
xmax = max_func(y)
xmax[xmax > 1] = 1 # cutoff at x=1
ymax = max_func(x)
ymax[ymax > 1] = 1 # cutoff at y=1
# truncate the data
flag = (x < xmax) & (y < ymax)
x = x[flag]
y = y[flag]
xmax = xmax[flag]
ymax = ymax[flag]
x_fit = np.linspace(0, 1, 21)
y_fit = np.linspace(0, 1, 21)
#------------------------------------------------------------
# compute the Cminus distributions (with bootstrap)
x_dist, dx_dist, y_dist, dy_dist = bootstrap_Cminus(x, y, xmax, ymax,
x_fit, y_fit,
Nbootstraps=20,
normalize=True)
x_mid = 0.5 * (x_fit[1:] + x_fit[:-1])
y_mid = 0.5 * (y_fit[1:] + y_fit[:-1])
#------------------------------------------------------------
# Plot the results
fig = plt.figure(figsize=(5, 2))
fig.subplots_adjust(bottom=0.2, top=0.95,
left=0.1, right=0.92, wspace=0.25)
# First subplot is the true & inferred 1D distributions
ax = fig.add_subplot(121)
ax.plot(x_mid, x_pdf.pdf(x_mid), '-k', label='$p(x)$')
ax.plot(y_mid, y_pdf.pdf(y_mid), '--k', label='$p(y)$')
ax.legend(loc='lower center')
ax.errorbar(x_mid, x_dist, dx_dist, fmt='ok', ecolor='k', lw=1, ms=4)
ax.errorbar(y_mid, y_dist, dy_dist, fmt='^k', ecolor='k', lw=1, ms=4)
ax.set_ylim(0, 1.8)
ax.set_xlim(0, 1)
ax.set_xlabel('$x$, $y$')
ax.set_ylabel('normalized distribution')
# Second subplot is the "observed" 2D distribution
ax = fig.add_subplot(122)
H, xb, yb = np.histogram2d(x, y, bins=np.linspace(0, 1, 41))
plt.imshow(H.T, origin='lower', interpolation='nearest',
extent=[0, 1, 0, 1], cmap=plt.cm.binary)
cb = plt.colorbar()
x_limit = np.linspace(-0.1, 1.1, 1000)
y_limit = max_func(x_limit)
x_limit[y_limit > 1] = 0
y_limit[x_limit > 1] = 0
ax.plot(x_limit, y_limit, '-k')
ax.set_xlim(0, 1.1)
ax.set_ylim(0, 1.1)
ax.set_xlabel('$x$')
ax.set_ylabel('$y$')
cb.set_label('counts per pixel')
ax.text(0.93, 0.93, '%i points' % len(x), ha='right', va='top',
transform=ax.transAxes)
plt.show()
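The error bars in the left panel come from `bootstrap_Cminus(..., Nbootstraps=20)`. Stripped of astroML, the bootstrap idea reduces to resampling with replacement; a dependency-free sketch with an illustrative statistic:

```python
import random

def bootstrap_statistic(data, statistic, n_resamples, seed=0):
    # Resample with replacement and collect the statistic each time;
    # the spread of these values estimates the error bar.
    rng = random.Random(seed)
    collected = []
    for _ in range(n_resamples):
        sample = [rng.choice(data) for _ in data]
        collected.append(statistic(sample))
    return collected

data = [1.0, 2.0, 3.0, 4.0, 5.0]
means = bootstrap_statistic(data, lambda xs: sum(xs) / len(xs), 20)
print(len(means))  # 20, one mean per resample
```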
| [
"vanderplas@astro.washington.edu"
] | vanderplas@astro.washington.edu |
29c37a9dfe43e6415a3d457fbea720baf1e1e1d2 | dfcb9827b966a5055a47e27b884eaacd88269eb1 | /ssseg/cfgs/ce2p/cfgs_voc_resnet101os8.py | 385c6008fa55db823a508dafaaef19402fd531a9 | [
"MIT"
] | permissive | RiDang/sssegmentation | cdff2be603fc709c1d03897383032e69f850f0cd | 2a79959a3d7dff346bab9d8e917889aa5621615a | refs/heads/main | 2023-02-05T12:52:35.391061 | 2020-12-27T05:59:58 | 2020-12-27T05:59:58 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,380 | py | '''define the config file for voc and resnet101os8'''
from .base_cfg import *
# modify dataset config
DATASET_CFG = DATASET_CFG.copy()
DATASET_CFG['train'].update(
{
'type': 'voc',
'set': 'trainaug',
'rootdir': 'data/VOCdevkit/VOC2012',
}
)
DATASET_CFG['test'].update(
{
'type': 'voc',
'rootdir': 'data/VOCdevkit/VOC2012',
}
)
# modify dataloader config
DATALOADER_CFG = DATALOADER_CFG.copy()
# modify optimizer config
OPTIMIZER_CFG = OPTIMIZER_CFG.copy()
OPTIMIZER_CFG.update(
{
'max_epochs': 60,
}
)
# modify losses config
LOSSES_CFG = LOSSES_CFG.copy()
# modify model config
MODEL_CFG = MODEL_CFG.copy()
MODEL_CFG.update(
{
'num_classes': 21,
'backbone': {
'type': 'resnet101',
'series': 'resnet',
'pretrained': True,
'outstride': 8,
'use_stem': True
}
}
)
# modify common config
COMMON_CFG = COMMON_CFG.copy()
COMMON_CFG['train'].update(
{
'backupdir': 'ce2p_resnet101os8_voc_train',
'logfilepath': 'ce2p_resnet101os8_voc_train/train.log',
}
)
COMMON_CFG['test'].update(
{
'backupdir': 'ce2p_resnet101os8_voc_test',
'logfilepath': 'ce2p_resnet101os8_voc_test/test.log',
'resultsavepath': 'ce2p_resnet101os8_voc_test/ce2p_resnet101os8_voc_results.pkl'
}
) | [
"1159254961@qq.com"
] | 1159254961@qq.com |
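A caveat on the config pattern in the file above: `dict.copy()` is shallow, so calls like `DATASET_CFG['train'].update(...)` mutate the nested dict that the copy still shares with the base config, which is worth knowing when the base config is reused elsewhere. A minimal demonstration with made-up values:

```python
import copy

base = {'train': {'type': 'ade20k'}}

shallow = base.copy()
shallow['train'].update({'type': 'voc'})
print(base['train']['type'])   # voc: the nested dict is shared

deep = copy.deepcopy(base)
deep['train'].update({'type': 'cityscapes'})
print(base['train']['type'])   # still voc: deepcopy detached the nested dict
```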
b7eb6dfe9463dbbb7e94e66dc97dd8fd8e80d49b | 0809673304fe85a163898983c2cb4a0238b2456e | /src/lesson_algorithms/contextlib_exitstack_callbacks_error.py | ce142be2225b3d9e9678f796d9a6bd379eda226d | [
"Apache-2.0"
] | permissive | jasonwee/asus-rt-n14uhp-mrtg | 244092292c94ff3382f88f6a385dae2aa6e4b1e1 | 4fa96c3406e32ea6631ce447db6d19d70b2cd061 | refs/heads/master | 2022-12-13T18:49:02.908213 | 2018-10-05T02:16:41 | 2018-10-05T02:16:41 | 25,589,776 | 3 | 1 | Apache-2.0 | 2022-11-27T04:03:06 | 2014-10-22T15:42:28 | Python | UTF-8 | Python | false | false | 356 | py | import contextlib
def callback(*args, **kwds):
print('closing callback({}, {})'.format(args, kwds))
try:
with contextlib.ExitStack() as stack:
stack.callback(callback, 'arg1', 'arg2')
stack.callback(callback, arg3='val3')
raise RuntimeError('thrown error')
except RuntimeError as err:
print('ERROR: {}'.format(err))
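Two properties the output of the script above depends on: `ExitStack` callbacks still run when the `RuntimeError` propagates, and they run in LIFO order, mirroring nested `with` blocks. The ordering in isolation:

```python
import contextlib

order = []

with contextlib.ExitStack() as stack:
    stack.callback(order.append, 'registered first')
    stack.callback(order.append, 'registered second')

print(order)  # ['registered second', 'registered first']
```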
| [
"peichieh@gmail.com"
] | peichieh@gmail.com |
3549ee4210441cea0700fff60b1ab8ff44b03cf8 | 86cc17a69213569af670faed7ad531cb599b960d | /prooo26.py | 19af9019514aa873268da93a2c03efbcc42129c0 | [] | no_license | LakshmikanthRavi/guvi-lux | ed1c389e27a9ec62e0fd75c140322563f68d311a | 5c29f73903aa9adb6484c76103edf18ac165259e | refs/heads/master | 2020-04-15T05:07:19.743874 | 2019-08-13T08:53:00 | 2019-08-13T08:53:00 | 164,409,489 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 301 | py | z1=int(input())
l=list(map(int,input().split()))
p=[]
t=[]
u=[]
o=[]
for i in range(0,len(l)+1):
for j in range(0,len(l)+1):
p.append(l[i:j])
for i in p:
if i!=[]:
t.append(i)
for i in t:
if sorted(i)==i:
u.append(i)
for i in u:
o.append(len(i))
print(max(o))
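The program above enumerates every sublist to find the length of the longest non-decreasing contiguous run, which is roughly cubic work. The same answer falls out of a single pass (helper name is mine):

```python
def longest_sorted_run(values):
    # Extend the current non-decreasing run, or restart it at length 1.
    if not values:
        return 0
    best = run = 1
    for prev, cur in zip(values, values[1:]):
        run = run + 1 if cur >= prev else 1
        best = max(best, run)
    return best

print(longest_sorted_run([2, 2, 1, 3, 4, 1]))  # 3 (the run 1, 3, 4)
```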
| [
"noreply@github.com"
] | LakshmikanthRavi.noreply@github.com |
02a0734050ac0604dbf68fd0310300efecd5d176 | 780ab93e6c6871673ae667c9a180892bd3073f56 | /app/__init__.py | 073ff3e32cc176e023d0c1fa0460e64bcd462e67 | [] | no_license | changbj00/ApiTestManage | d6a8d030289455555a0ed460ed880081da08b4bf | c2fd19c54100acf72ce61bf920dadeae5a34f747 | refs/heads/master | 2020-04-01T00:15:11.097011 | 2018-09-30T09:41:30 | 2018-09-30T09:41:30 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,083 | py | # encoding: utf-8
import os
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from flask_moment import Moment
from flask_login import LoginManager
from config import config
from config import config_log
from config import ConfigTask
from .util import global_variable  # initialize file paths
login_manager = LoginManager()
login_manager.session_protection = 'None'
# login_manager.login_view = '.login'
db = SQLAlchemy()
moment = Moment()
scheduler = ConfigTask().scheduler
basedir = os.path.abspath(os.path.dirname(__file__))
def create_app(config_name):
app = Flask(__name__)
app.config.from_object(config[config_name])
    app.logger.addHandler(config_log())  # initialize logging
config[config_name].init_app(app)
moment.init_app(app)
    # https://blog.csdn.net/yannanxiu/article/details/53426359 on errors when scheduled jobs access the database
    # The pitfall is the difference between db = SQLAlchemy() and db = SQLAlchemy(app)
db.init_app(app)
db.app = app
db.create_all()
login_manager.init_app(app)
    scheduler.start()  # start the scheduled jobs
# from .main import main as main_blueprint
# app.register_blueprint(main_blueprint)
# from .pro import pro as pro_blueprint
# app.register_blueprint(pro_blueprint, url_prefix='/pro')
#
# from .DataTool import DataTools as DataTool_blueprint
# app.register_blueprint(DataTool_blueprint, url_prefix='/dataTool')
#
# from .TestCase import TestCases as TestCase_blueprint
# app.register_blueprint(TestCase_blueprint, url_prefix='/TestCase')
#
# from .testpage import testpages as testpage_blueprint
# app.register_blueprint(testpage_blueprint, url_prefix='/testpage')
#
# from .apiManage import apiManages as apiManages_blueprint
# app.register_blueprint(apiManages_blueprint, url_prefix='/apiManage')
from .api import api as api_blueprint
app.register_blueprint(api_blueprint, url_prefix='/api')
# from .api.model import api as api_blueprint
# app.register_blueprint(api_blueprint, url_prefix='/api')
return app
| [
"362508572@qq.com"
] | 362508572@qq.com |
e709360b59b2c74ebfaf4a1c0ea08606f4413773 | 590a68b6e68b41b6b9f8d8f5240df3181af0b07f | /RNN/test/dataloader_frames_p2.py | bf4661febeea33d94a2b6ba2ae72bb83bd3f9d6e | [] | no_license | ziqinXU/RNN-Video-Segmentation | e1a3098f5597960f57a1626c2cec08ad5f6635b0 | 0baa348fd08fa7f6813cd55e70004b96c559b46a | refs/heads/master | 2022-04-04T10:51:44.583855 | 2020-02-06T07:14:49 | 2020-02-06T07:14:49 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,396 | py | import os
import pickle as pk
import reader
import numpy as np
import sys
import torch
import torchvision.transforms as transforms
from torch.utils.data import Dataset
from generate_frames_p2 import extract_frames
MEAN = [0.485, 0.456, 0.406]
STD = [0.229, 0.224, 0.225]
class DATA(Dataset):
def __init__(self, opt):
# Call frame generator function
self.frames_data = extract_frames(opt)
# Transform the image
self.transform = transforms.Compose([
transforms.ToTensor(), # (H,W,C)->(C,H,W), [0,255]->[0, 1.0] RGB->RGB
transforms.Normalize(MEAN, STD)
])
def __getitem__(self, idx):
frames_list = self.frames_data[idx] # contains sampled frames for one video, in a list
N = len(frames_list) # number of frames in this particular video
frames_tensor = torch.zeros(N, 3, 240, 320) # tensor of dimension NxCx240x320, which will contain all N frames for one video
# Transform each frame (currently numpy array) in the list into a tensor, and put it into the pre-allocated tensor
for i in range(N):
frames_tensor[i,:,:,:] = self.transform(frames_list[i]) # each frame is now a tensor, Cx240x320
return frames_tensor
def __len__(self):
return len(self.frames_data) # number of videos that were sampled
| [
"noreply@github.com"
] | ziqinXU.noreply@github.com |
78765e98d7aca64c1627f12e61159fa84b27b105 | 1287bbb696e240dd0b92d56d4fdf4246370f3e14 | /_requests.py | d8a233f7f9ee5c53302d0f30e1a99ef3ed08382e | [] | no_license | omerfarukcelenk/PythonCalismalari | ed0c204084860fddcb892e6edad84fdbc1ed38ec | 28da12d7d042ec306f064fb1cc3a1a026cb57b74 | refs/heads/main | 2023-04-13T18:23:15.270020 | 2021-04-26T21:06:21 | 2021-04-26T21:06:21 | 361,893,918 | 3 | 0 | null | null | null | null | UTF-8 | Python | false | false | 236 | py | import requests
import json
result = requests.get("https://jsonplaceholder.typicode.com/todos")
result = json.loads(result.text)
print(result[0]["title"])
print(result[0])
for i in result:
print(i["title"])
print(type(result))
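The parsing step can be exercised without a network call by handing `json.loads` a canned payload shaped like the placeholder API's todos (sample data below is illustrative; note that `requests` responses also offer a `.json()` shortcut for the `json.loads(result.text)` combination above):

```python
import json

payload = '[{"id": 1, "title": "delectus aut autem", "completed": false}]'
todos = json.loads(payload)

print(todos[0]["title"])  # delectus aut autem
for todo in todos:
    print(todo["completed"])  # False (JSON false becomes Python False)
```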
| [
"omerfar0133@gmail.com"
] | omerfar0133@gmail.com |
81fd332f532043398a89498afa06b89ca560e631 | 944a91cbdb75e53e3de22604b258d2404bc7415d | /jobs/urls.py | 5848eb07e3640a5b1574e0abbc5be4e7e4dcadbd | [] | no_license | lopezjronald/django-finance-project | 3662adbe406071b7a07ab36c9592ac194f5e91ea | a49667f2c4b4b0350d0322a4def49803c3b19c15 | refs/heads/master | 2022-12-25T00:11:22.905175 | 2020-09-29T15:45:20 | 2020-09-29T15:45:20 | 293,967,619 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 216 | py | from django.urls import path
from .views import JobListView, JobDetailView
urlpatterns = [
path('', JobListView.as_view(), name='job_list'),
path('<uuid:pk>/', JobDetailView.as_view(), name='job_detail'),
]
| [
"lopez.j.ronald@gmail.com"
] | lopez.j.ronald@gmail.com |
130de02d1d2b163956df11486ad856c734619805 | f569978afb27e72bf6a88438aa622b8c50cbc61b | /douyin_open/ToutiaoOauth2Oauth2/api_client.py | b9f4d43ca18c53cd6d62c12d7850d3b9c288375d | [] | no_license | strangebank/swagger-petstore-perl | 4834409d6225b8a09b8195128d74a9b10ef1484a | 49dfc229e2e897cdb15cbf969121713162154f28 | refs/heads/master | 2023-01-05T10:21:33.518937 | 2020-11-05T04:33:16 | 2020-11-05T04:33:16 | 310,189,316 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 25,077 | py | # coding: utf-8
"""
No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen) # noqa: E501
OpenAPI spec version: 1.0.0
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import datetime
import json
import mimetypes
from multiprocessing.pool import ThreadPool
import os
import re
import tempfile
# python 2 and python 3 compatibility library
import six
from six.moves.urllib.parse import quote
from douyin_open.ToutiaoOauth2Oauth2.configuration import Configuration
import douyin_open.ToutiaoOauth2Oauth2.models
from douyin_open.ToutiaoOauth2Oauth2 import rest
class ApiClient(object):
"""Generic API client for Swagger client library builds.
Swagger generic API client. This client handles the client-
server communication, and is invariant across implementations. Specifics of
the methods and models for each application are generated from the Swagger
templates.
NOTE: This class is auto generated by the swagger code generator program.
Ref: https://github.com/swagger-api/swagger-codegen
Do not edit the class manually.
:param configuration: .Configuration object for this client
:param header_name: a header to pass when making calls to the API.
:param header_value: a header value to pass when making calls to
the API.
:param cookie: a cookie to include in the header when making calls
to the API
"""
PRIMITIVE_TYPES = (float, bool, bytes, six.text_type) + six.integer_types
NATIVE_TYPES_MAPPING = {
'int': int,
'long': int if six.PY3 else long, # noqa: F821
'float': float,
'str': str,
'bool': bool,
'date': datetime.date,
'datetime': datetime.datetime,
'object': object,
}
def __init__(self, configuration=None, header_name=None, header_value=None,
cookie=None):
if configuration is None:
configuration = Configuration()
self.configuration = configuration
# Use the pool property to lazily initialize the ThreadPool.
self._pool = None
self.rest_client = rest.RESTClientObject(configuration)
self.default_headers = {}
if header_name is not None:
self.default_headers[header_name] = header_value
self.cookie = cookie
# Set default User-Agent.
self.user_agent = 'Swagger-Codegen/1.0.0/python'
def __del__(self):
if self._pool is not None:
self._pool.close()
self._pool.join()
@property
def pool(self):
if self._pool is None:
self._pool = ThreadPool()
return self._pool
@property
def user_agent(self):
"""User agent for this API client"""
return self.default_headers['User-Agent']
@user_agent.setter
def user_agent(self, value):
self.default_headers['User-Agent'] = value
def set_default_header(self, header_name, header_value):
self.default_headers[header_name] = header_value
def __call_api(
self, resource_path, method, path_params=None,
query_params=None, header_params=None, body=None, post_params=None,
files=None, response_type=None, auth_settings=None,
_return_http_data_only=None, collection_formats=None,
_preload_content=True, _request_timeout=None):
config = self.configuration
# header parameters
header_params = header_params or {}
header_params.update(self.default_headers)
if self.cookie:
header_params['Cookie'] = self.cookie
if header_params:
header_params = self.sanitize_for_serialization(header_params)
header_params = dict(self.parameters_to_tuples(header_params,
collection_formats))
# path parameters
if path_params:
path_params = self.sanitize_for_serialization(path_params)
path_params = self.parameters_to_tuples(path_params,
collection_formats)
for k, v in path_params:
# specified safe chars, encode everything
resource_path = resource_path.replace(
'{%s}' % k,
quote(str(v), safe=config.safe_chars_for_path_param)
)
# query parameters
if query_params:
query_params = self.sanitize_for_serialization(query_params)
query_params = self.parameters_to_tuples(query_params,
collection_formats)
# post parameters
if post_params or files:
post_params = self.prepare_post_parameters(post_params, files)
post_params = self.sanitize_for_serialization(post_params)
post_params = self.parameters_to_tuples(post_params,
collection_formats)
# auth setting
self.update_params_for_auth(header_params, query_params, auth_settings)
# body
if body:
body = self.sanitize_for_serialization(body)
# request url
url = self.configuration.host + resource_path
# perform request and return response
response_data = self.request(
method, url, query_params=query_params, headers=header_params,
post_params=post_params, body=body,
_preload_content=_preload_content,
_request_timeout=_request_timeout)
self.last_response = response_data
return_data = response_data
if _preload_content:
# deserialize response data
if response_type:
return_data = self.deserialize(response_data, response_type)
else:
return_data = None
if _return_http_data_only:
return (return_data)
else:
return (return_data, response_data.status,
response_data.getheaders())
def sanitize_for_serialization(self, obj):
"""Builds a JSON POST object.
If obj is None, return None.
If obj is str, int, long, float, bool, return directly.
If obj is datetime.datetime, datetime.date
convert to string in iso8601 format.
If obj is list, sanitize each element in the list.
If obj is dict, return the dict.
If obj is swagger model, return the properties dict.
:param obj: The data to serialize.
:return: The serialized form of data.
"""
if obj is None:
return None
elif isinstance(obj, self.PRIMITIVE_TYPES):
return obj
elif isinstance(obj, list):
return [self.sanitize_for_serialization(sub_obj)
for sub_obj in obj]
elif isinstance(obj, tuple):
return tuple(self.sanitize_for_serialization(sub_obj)
for sub_obj in obj)
elif isinstance(obj, (datetime.datetime, datetime.date)):
return obj.isoformat()
if isinstance(obj, dict):
obj_dict = obj
else:
# Convert model obj to dict except
# attributes `swagger_types`, `attribute_map`
# and attributes which value is not None.
# Convert attribute name to json key in
# model definition for request.
obj_dict = {obj.attribute_map[attr]: getattr(obj, attr)
for attr, _ in six.iteritems(obj.swagger_types)
if getattr(obj, attr) is not None}
return {key: self.sanitize_for_serialization(val)
for key, val in six.iteritems(obj_dict)}
def deserialize(self, response, response_type):
"""Deserializes response into an object.
:param response: RESTResponse object to be deserialized.
:param response_type: class literal for
deserialized object, or string of class name.
:return: deserialized object.
"""
# handle file downloading
# save response body into a tmp file and return the instance
if response_type == "file":
return self.__deserialize_file(response)
# fetch data from response object
try:
data = json.loads(response.data)
except ValueError:
data = response.data
return self.__deserialize(data, response_type)
def __deserialize(self, data, klass):
"""Deserializes dict, list, str into an object.
:param data: dict, list or str.
:param klass: class literal, or string of class name.
:return: object.
"""
if data is None:
return None
if type(klass) == str:
if klass.startswith('list['):
sub_kls = re.match(r'list\[(.*)\]', klass).group(1)
return [self.__deserialize(sub_data, sub_kls)
for sub_data in data]
if klass.startswith('dict('):
sub_kls = re.match(r'dict\(([^,]*), (.*)\)', klass).group(2)
return {k: self.__deserialize(v, sub_kls)
for k, v in six.iteritems(data)}
# convert str to class
if klass in self.NATIVE_TYPES_MAPPING:
klass = self.NATIVE_TYPES_MAPPING[klass]
else:
klass = getattr(douyin_open.ToutiaoOauth2Oauth2.models, klass)
if klass in self.PRIMITIVE_TYPES:
return self.__deserialize_primitive(data, klass)
elif klass == object:
return self.__deserialize_object(data)
elif klass == datetime.date:
return self.__deserialize_date(data)
elif klass == datetime.datetime:
return self.__deserialize_datatime(data)
else:
return self.__deserialize_model(data, klass)
def call_api(self, resource_path, method,
path_params=None, query_params=None, header_params=None,
body=None, post_params=None, files=None,
response_type=None, auth_settings=None, async_req=None,
_return_http_data_only=None, collection_formats=None,
_preload_content=True, _request_timeout=None):
"""Makes the HTTP request (synchronous) and returns deserialized data.
To make an async request, set the async_req parameter.
:param resource_path: Path to method endpoint.
:param method: Method to call.
:param path_params: Path parameters in the url.
:param query_params: Query parameters in the url.
:param header_params: Header parameters to be
placed in the request header.
:param body: Request body.
:param post_params dict: Request post form parameters,
for `application/x-www-form-urlencoded`, `multipart/form-data`.
:param auth_settings list: Auth Settings names for the request.
:param response: Response data type.
:param files dict: key -> filename, value -> filepath,
for `multipart/form-data`.
:param async_req bool: execute request asynchronously
:param _return_http_data_only: response data without head status code
and headers
:param collection_formats: dict of collection formats for path, query,
header, and post parameters.
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return:
If async_req parameter is True,
the request will be called asynchronously.
The method will return the request thread.
If parameter async_req is False or missing,
then the method will return the response directly.
"""
if not async_req:
return self.__call_api(resource_path, method,
path_params, query_params, header_params,
body, post_params, files,
response_type, auth_settings,
_return_http_data_only, collection_formats,
_preload_content, _request_timeout)
else:
thread = self.pool.apply_async(self.__call_api, (resource_path,
method, path_params, query_params,
header_params, body,
post_params, files,
response_type, auth_settings,
_return_http_data_only,
collection_formats,
_preload_content, _request_timeout))
return thread
def request(self, method, url, query_params=None, headers=None,
post_params=None, body=None, _preload_content=True,
_request_timeout=None):
"""Makes the HTTP request using RESTClient."""
if method == "GET":
return self.rest_client.GET(url,
query_params=query_params,
_preload_content=_preload_content,
_request_timeout=_request_timeout,
headers=headers)
elif method == "HEAD":
return self.rest_client.HEAD(url,
query_params=query_params,
_preload_content=_preload_content,
_request_timeout=_request_timeout,
headers=headers)
elif method == "OPTIONS":
return self.rest_client.OPTIONS(url,
query_params=query_params,
headers=headers,
post_params=post_params,
_preload_content=_preload_content,
_request_timeout=_request_timeout,
body=body)
elif method == "POST":
return self.rest_client.POST(url,
query_params=query_params,
headers=headers,
post_params=post_params,
_preload_content=_preload_content,
_request_timeout=_request_timeout,
body=body)
elif method == "PUT":
return self.rest_client.PUT(url,
query_params=query_params,
headers=headers,
post_params=post_params,
_preload_content=_preload_content,
_request_timeout=_request_timeout,
body=body)
elif method == "PATCH":
return self.rest_client.PATCH(url,
query_params=query_params,
headers=headers,
post_params=post_params,
_preload_content=_preload_content,
_request_timeout=_request_timeout,
body=body)
elif method == "DELETE":
return self.rest_client.DELETE(url,
query_params=query_params,
headers=headers,
_preload_content=_preload_content,
_request_timeout=_request_timeout,
body=body)
else:
raise ValueError(
"http method must be `GET`, `HEAD`, `OPTIONS`,"
" `POST`, `PATCH`, `PUT` or `DELETE`."
)
def parameters_to_tuples(self, params, collection_formats):
"""Get parameters as list of tuples, formatting collections.
:param params: Parameters as dict or list of two-tuples
:param dict collection_formats: Parameter collection formats
:return: Parameters as list of tuples, collections formatted
"""
new_params = []
if collection_formats is None:
collection_formats = {}
for k, v in six.iteritems(params) if isinstance(params, dict) else params: # noqa: E501
if k in collection_formats:
collection_format = collection_formats[k]
if collection_format == 'multi':
new_params.extend((k, value) for value in v)
else:
if collection_format == 'ssv':
delimiter = ' '
elif collection_format == 'tsv':
delimiter = '\t'
elif collection_format == 'pipes':
delimiter = '|'
else: # csv is the default
delimiter = ','
new_params.append(
(k, delimiter.join(str(value) for value in v)))
else:
new_params.append((k, v))
return new_params
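A standalone sketch of the collection-format expansion that `parameters_to_tuples` performs; the parameter names and formats below are hypothetical, covering the `csv` and `multi` branches:

```python
# Minimal sketch of the collection-format expansion above: 'multi' repeats
# the key once per value, 'csv' joins values with commas (the default).
params = {"ids": [1, 2, 3], "tags": ["a", "b"], "q": "demo"}
collection_formats = {"ids": "csv", "tags": "multi"}

new_params = []
for k, v in params.items():
    fmt = collection_formats.get(k)
    if fmt == "multi":
        new_params.extend((k, value) for value in v)
    elif fmt == "csv":
        new_params.append((k, ",".join(str(value) for value in v)))
    else:
        new_params.append((k, v))

print(new_params)
# [('ids', '1,2,3'), ('tags', 'a'), ('tags', 'b'), ('q', 'demo')]
```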
def prepare_post_parameters(self, post_params=None, files=None):
"""Builds form parameters.
:param post_params: Normal form parameters.
:param files: File parameters.
:return: Form parameters with files.
"""
params = []
if post_params:
params = post_params
if files:
for k, v in six.iteritems(files):
if not v:
continue
file_names = v if type(v) is list else [v]
for n in file_names:
with open(n, 'rb') as f:
filename = os.path.basename(f.name)
filedata = f.read()
mimetype = (mimetypes.guess_type(filename)[0] or
'application/octet-stream')
params.append(
tuple([k, tuple([filename, filedata, mimetype])]))
return params
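The triples built above feed a `multipart/form-data` body. A sketch of the `(field, (filename, data, mimetype))` shape, with a hypothetical file name and payload:

```python
import mimetypes

# Shape of one multipart file parameter as assembled by
# prepare_post_parameters; filename and payload here are hypothetical.
filename = "report.json"
mimetype = mimetypes.guess_type(filename)[0] or "application/octet-stream"
file_param = ("file", (filename, b"{}", mimetype))
print(file_param)
```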
def select_header_accept(self, accepts):
"""Returns `Accept` based on an array of accepts provided.
:param accepts: List of headers.
:return: Accept (e.g. application/json).
"""
if not accepts:
return
accepts = [x.lower() for x in accepts]
if 'application/json' in accepts:
return 'application/json'
else:
return ', '.join(accepts)
def select_header_content_type(self, content_types):
"""Returns `Content-Type` based on an array of content_types provided.
:param content_types: List of content-types.
:return: Content-Type (e.g. application/json).
"""
if not content_types:
return 'application/json'
content_types = [x.lower() for x in content_types]
if 'application/json' in content_types or '*/*' in content_types:
return 'application/json'
else:
return content_types[0]
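The selection rule above prefers JSON and otherwise falls back to the first declared type. A standalone restatement of that rule:

```python
# Standalone version of the Content-Type selection rule: prefer
# application/json (also when '*/*' is accepted), else take the first entry.
def pick_content_type(content_types):
    if not content_types:
        return "application/json"
    lowered = [c.lower() for c in content_types]
    if "application/json" in lowered or "*/*" in lowered:
        return "application/json"
    return lowered[0]

print(pick_content_type([]))                        # application/json
print(pick_content_type(["text/plain", "*/*"]))     # application/json
print(pick_content_type(["text/XML", "text/csv"]))  # text/xml
```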
def update_params_for_auth(self, headers, querys, auth_settings):
"""Updates header and query params based on authentication setting.
:param headers: Header parameters dict to be updated.
:param querys: Query parameters tuple list to be updated.
:param auth_settings: Authentication setting identifiers list.
"""
if not auth_settings:
return
for auth in auth_settings:
auth_setting = self.configuration.auth_settings().get(auth)
if auth_setting:
if not auth_setting['value']:
continue
elif auth_setting['in'] == 'header':
headers[auth_setting['key']] = auth_setting['value']
elif auth_setting['in'] == 'query':
querys.append((auth_setting['key'], auth_setting['value']))
else:
raise ValueError(
'Authentication token must be in `query` or `header`'
)
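A sketch of how one auth setting is injected by `update_params_for_auth`; the setting dict mirrors the shape it consumes and the token value is a placeholder:

```python
# One auth setting routed either into headers or query parameters,
# matching the 'in' dispatch above (hypothetical values).
headers, querys = {}, []
auth_setting = {"in": "header", "key": "Authorization", "value": "Bearer TOKEN"}

if auth_setting["in"] == "header":
    headers[auth_setting["key"]] = auth_setting["value"]
elif auth_setting["in"] == "query":
    querys.append((auth_setting["key"], auth_setting["value"]))

print(headers)  # {'Authorization': 'Bearer TOKEN'}
```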
def __deserialize_file(self, response):
"""Deserializes body to file
Saves response body into a file in a temporary folder,
using the filename from the `Content-Disposition` header if provided.
:param response: RESTResponse.
:return: file path.
"""
fd, path = tempfile.mkstemp(dir=self.configuration.temp_folder_path)
os.close(fd)
os.remove(path)
content_disposition = response.getheader("Content-Disposition")
if content_disposition:
filename = re.search(r'filename=[\'"]?([^\'"\s]+)[\'"]?',
content_disposition).group(1)
path = os.path.join(os.path.dirname(path), filename)
with open(path, "wb") as f:
f.write(response.data)
return path
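The filename extraction above relies on a small regex over the `Content-Disposition` header. The same pattern applied to a hypothetical header value:

```python
import re

# The filename regex used by __deserialize_file, applied to a sample header.
header = 'attachment; filename="export.csv"'
match = re.search(r'filename=[\'"]?([^\'"\s]+)[\'"]?', header)
print(match.group(1))  # export.csv
```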
def __deserialize_primitive(self, data, klass):
"""Deserializes string to primitive type.
:param data: str.
:param klass: class literal.
:return: int, long, float, str, bool.
"""
try:
return klass(data)
except UnicodeEncodeError:
return six.text_type(data)
except TypeError:
return data
def __deserialize_object(self, value):
"""Return a original value.
:return: object.
"""
return value
def __deserialize_date(self, string):
"""Deserializes string to date.
:param string: str.
:return: date.
"""
try:
from dateutil.parser import parse
return parse(string).date()
except ImportError:
return string
except ValueError:
raise rest.ApiException(
status=0,
reason="Failed to parse `{0}` as date object".format(string)
)
def __deserialize_datatime(self, string):
"""Deserializes string to datetime.
The string should be in iso8601 datetime format.
:param string: str.
:return: datetime.
"""
try:
from dateutil.parser import parse
return parse(string)
except ImportError:
return string
except ValueError:
raise rest.ApiException(
status=0,
reason=(
"Failed to parse `{0}` as datetime object"
.format(string)
)
)
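`dateutil` is treated as optional above (an `ImportError` falls back to the raw string). For the plain ISO-8601 form the docstring mentions, the standard library alone can parse it; this is a stdlib-only sketch, not the code path the client takes when `dateutil` is installed:

```python
from datetime import datetime

# Stdlib-only parse of a strict ISO-8601 timestamp; dateutil accepts far
# more input variants than fromisoformat does.
dt = datetime.fromisoformat("2019-01-23T15:04:00")
print(dt.date())  # 2019-01-23
```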
def __hasattr(self, object, name):
return name in object.__class__.__dict__
def __deserialize_model(self, data, klass):
"""Deserializes list or dict to model.
:param data: dict, list.
:param klass: class literal.
:return: model object.
"""
if (not klass.swagger_types and
not self.__hasattr(klass, 'get_real_child_model')):
return data
kwargs = {}
if klass.swagger_types is not None:
for attr, attr_type in six.iteritems(klass.swagger_types):
if (data is not None and
klass.attribute_map[attr] in data and
isinstance(data, (list, dict))):
value = data[klass.attribute_map[attr]]
kwargs[attr] = self.__deserialize(value, attr_type)
instance = klass(**kwargs)
if (isinstance(instance, dict) and
klass.swagger_types is not None and
isinstance(data, dict)):
for key, value in data.items():
if key not in klass.swagger_types:
instance[key] = value
if self.__hasattr(instance, 'get_real_child_model'):
klass_name = instance.get_real_child_model(data)
if klass_name:
instance = self.__deserialize(data, klass_name)
return instance
# --- File: /Part1/auth_uri_foursquare.py (repo: llord1/Mining-Georeferenced-Data) ---
#!/usr/bin/env python
import foursquare
accounts = {"tutorial": {"client_id": "CLIENT_ID",
"client_secret": "CLIENT_SECRET",
"access_token": ""
}
}
app = accounts["tutorial"]
client = foursquare.Foursquare(client_id=app["client_id"],
client_secret=app["client_secret"],
redirect_uri='http://www.bgoncalves.com/redirect')
auth_uri = client.oauth.auth_url()
print auth_uri
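Under the hood, `client.oauth.auth_url()` points the user at Foursquare's OAuth2 authorize endpoint. A rough stdlib-only sketch of the URL being assembled; the endpoint is Foursquare's documented OAuth2 entry point and the credential values are placeholders:

```python
from urllib.parse import urlencode

# Sketch of the authorize URL the foursquare client builds (placeholder
# credentials; the real client fills these from its configuration).
params = {
    "client_id": "CLIENT_ID",
    "response_type": "code",
    "redirect_uri": "http://www.bgoncalves.com/redirect",
}
auth_uri = "https://foursquare.com/oauth2/authenticate?" + urlencode(params)
print(auth_uri)
```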
# --- File: src/importlinter/__init__.py (repo: skarzi/import-linter, BSD-2-Clause) ---
__version__ = "1.2"
from .application import output # noqa
from .domain import fields # noqa
from .domain.contract import Contract, ContractCheck # noqa
# --- File: generated_tempdir_2019_09_15_163300/generated_part010168.py (repo: Upabjojr/rubi_generated) ---
from sympy.abc import *
from matchpy.matching.many_to_one import CommutativeMatcher
from matchpy import *
from matchpy.utils import VariableWithCount
from collections import deque
from multiset import Multiset
from sympy.integrals.rubi.constraints import *
from sympy.integrals.rubi.utility_function import *
from sympy.integrals.rubi.rules.miscellaneous_integration import *
from sympy import *
class CommutativeMatcher51127(CommutativeMatcher):
_instance = None
patterns = {
0: (0, Multiset({}), [
(VariableWithCount('i3.2.1.2.1.0', 1, 1, None), Mul),
(VariableWithCount('i3.2.1.2.1.0_1', 1, 1, S(1)), Mul)
])
}
subjects = {}
subjects_by_id = {}
bipartite = BipartiteGraph()
associative = Mul
max_optional_count = 1
anonymous_patterns = set()
def __init__(self):
self.add_subject(None)
@staticmethod
def get():
if CommutativeMatcher51127._instance is None:
CommutativeMatcher51127._instance = CommutativeMatcher51127()
return CommutativeMatcher51127._instance
@staticmethod
def get_match_iter(subject):
subjects = deque([subject]) if subject is not None else deque()
subst0 = Substitution()
# State 51126
return
yield
from collections import deque

# --- File: footlbotest/leaveMsgUI/51xingsheng/leaveMsgUI004.py (repo: lobo1233456/footlbotestproj) ---
#!/usr/bin/env python3
# -*- coding: UTF-8 -*-
import random
import time
from retrying import retry
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException, NoAlertPresentException, ElementClickInterceptedException
from selenium.webdriver.support.select import Select
from footlbolib.testcase import FootlboTestCase
class leaveMsgUI004(FootlboTestCase):
'''
非合作商页面右侧留言窗口
'''
owner = "liubo"
timeout = 5
priority = FootlboTestCase.EnumPriority.High
status = FootlboTestCase.EnumStatus.Design
tags = "BVT"
def pre_test(self):
self.accept_next_alert = True
def run_test(self):
self.driver = webdriver.Firefox()
driver = self.driver
# i = random.randint(7,9)
# print("----------%s-----------"%i)
driver.get("https://www.51xinsheng.com/ctbj/")
driver.find_element_by_xpath(
u"(.//*[normalize-space(text()) and normalize-space(.)='首页'])[2]/following::li[1]").click()
driver.find_element_by_link_text(u"客厅").click()
driver.find_element_by_xpath(
u"(.//*[normalize-space(text()) and normalize-space(.)='混搭'])[2]/following::img[1]").click()
driver.find_element_by_xpath(
u"(.//*[normalize-space(text()) and normalize-space(.)='装修成这样花多少钱'])[1]/following::select[1]").click()
Select(driver.find_element_by_xpath(
u"(.//*[normalize-space(text()) and normalize-space(.)='装修成这样花多少钱'])[1]/following::select[1]")).select_by_visible_text(
u"河北省")
driver.find_element_by_xpath(
u"(.//*[normalize-space(text()) and normalize-space(.)='装修成这样花多少钱'])[1]/following::option[4]").click()
driver.find_element_by_xpath(
u"(.//*[normalize-space(text()) and normalize-space(.)='装修成这样花多少钱'])[1]/following::select[2]").click()
Select(driver.find_element_by_xpath(
u"(.//*[normalize-space(text()) and normalize-space(.)='装修成这样花多少钱'])[1]/following::select[2]")).select_by_visible_text(
u"邯郸市")
driver.find_element_by_xpath(
u"(.//*[normalize-space(text()) and normalize-space(.)='装修成这样花多少钱'])[1]/following::option[40]").click()
driver.find_element_by_xpath(
u"(.//*[normalize-space(text()) and normalize-space(.)='装修成这样花多少钱'])[1]/following::select[3]").click()
Select(driver.find_element_by_xpath(
u"(.//*[normalize-space(text()) and normalize-space(.)='装修成这样花多少钱'])[1]/following::select[3]")).select_by_visible_text(
u"101-150㎡")
driver.find_element_by_xpath(
u"(.//*[normalize-space(text()) and normalize-space(.)='装修成这样花多少钱'])[1]/following::option[52]").click()
driver.find_element_by_xpath(
u"(.//*[normalize-space(text()) and normalize-space(.)='装修成这样花多少钱'])[1]/following::input[1]").clear()
driver.find_element_by_xpath(
u"(.//*[normalize-space(text()) and normalize-space(.)='装修成这样花多少钱'])[1]/following::input[1]").send_keys(
"13764743157")
driver.find_element_by_xpath(
u"(.//*[normalize-space(text()) and normalize-space(.)='装修成这样花多少钱'])[1]/following::button[1]").click()
time.sleep(2)
msg = self.close_alert_and_get_its_text(driver)
self.log_info(msg)
self.assert_("检查成功提交的结果", u"报价有疑问?装修管家稍后致电为您解答" == msg)
def post_test(self):
self.driver.quit()
self.log_info("testOver")
if __name__ == '__main__':
leaveMsgUI004().debug_run()
# --- File: ecommerce/orders/migrations/0009_auto_20190123_2034.py (repo: gayatribasude/GayatrisWorld) ---
# Generated by Django 2.1.3 on 2019-01-23 15:04
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('orders', '0008_auto_20190123_1027'),
]
operations = [
migrations.AlterField(
model_name='order',
name='status',
field=models.CharField(choices=[('created', 'Created'), ('paid', 'Paid'), ('shipped', 'Shipped'), ('refunded', 'Refunded')], default='created', max_length=20),
),
]
# --- File: tools/testrunner/base_runner.py (repo: mengyoo/v8, BSD-3-Clause) ---
# Copyright 2017 the V8 project authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
from collections import OrderedDict
import json
import optparse
import os
import shlex
import sys
# Add testrunner to the path.
sys.path.insert(
0,
os.path.dirname(
os.path.dirname(os.path.abspath(__file__))))
from testrunner.local import testsuite
from testrunner.local import utils
from testrunner.test_config import TestConfig
from testrunner.testproc import progress
from testrunner.testproc.rerun import RerunProc
from testrunner.testproc.shard import ShardProc
from testrunner.testproc.sigproc import SignalProc
from testrunner.testproc.timeout import TimeoutProc
BASE_DIR = (
os.path.dirname(
os.path.dirname(
os.path.dirname(
os.path.abspath(__file__)))))
DEFAULT_OUT_GN = 'out.gn'
# Map of test name synonyms to lists of test suites. Should be ordered by
# expected runtimes (suites with slow test cases first). These groups are
# invoked in separate steps on the bots.
TEST_MAP = {
# This needs to stay in sync with test/bot_default.isolate.
"bot_default": [
"debugger",
"mjsunit",
"cctest",
"wasm-spec-tests",
"inspector",
"webkit",
"mkgrokdump",
"fuzzer",
"message",
"preparser",
"intl",
"unittests",
],
# This needs to stay in sync with test/default.isolate.
"default": [
"debugger",
"mjsunit",
"cctest",
"wasm-spec-tests",
"inspector",
"mkgrokdump",
"fuzzer",
"message",
"preparser",
"intl",
"unittests",
],
# This needs to stay in sync with test/d8_default.isolate.
"d8_default": [
# TODO(machenbach): uncomment after infra side lands.
#"debugger",
"mjsunit",
"webkit",
#"message",
#"preparser",
#"intl",
],
# This needs to stay in sync with test/optimize_for_size.isolate.
"optimize_for_size": [
"debugger",
"mjsunit",
"cctest",
"inspector",
"webkit",
"intl",
],
"unittests": [
"unittests",
],
}
# Double the timeout for these:
SLOW_ARCHS = ["arm",
"mips",
"mipsel",
"mips64",
"mips64el",
"s390",
"s390x",
"arm64"]
class ModeConfig(object):
def __init__(self, flags, timeout_scalefactor, status_mode, execution_mode):
self.flags = flags
self.timeout_scalefactor = timeout_scalefactor
self.status_mode = status_mode
self.execution_mode = execution_mode
DEBUG_FLAGS = ["--nohard-abort", "--enable-slow-asserts", "--verify-heap"]
RELEASE_FLAGS = ["--nohard-abort"]
MODES = {
"debug": ModeConfig(
flags=DEBUG_FLAGS,
timeout_scalefactor=4,
status_mode="debug",
execution_mode="debug",
),
"optdebug": ModeConfig(
flags=DEBUG_FLAGS,
timeout_scalefactor=4,
status_mode="debug",
execution_mode="debug",
),
"release": ModeConfig(
flags=RELEASE_FLAGS,
timeout_scalefactor=1,
status_mode="release",
execution_mode="release",
),
# Normal trybot release configuration. There, dchecks are always on which
# implies debug is set. Hence, the status file needs to assume debug-like
# behavior/timeouts.
"tryrelease": ModeConfig(
flags=RELEASE_FLAGS,
timeout_scalefactor=1,
status_mode="debug",
execution_mode="release",
),
# This mode requires v8 to be compiled with dchecks and slow dchecks.
"slowrelease": ModeConfig(
flags=RELEASE_FLAGS + ["--enable-slow-asserts"],
timeout_scalefactor=2,
status_mode="debug",
execution_mode="release",
),
}
PROGRESS_INDICATORS = {
'verbose': progress.VerboseProgressIndicator,
'dots': progress.DotsProgressIndicator,
'color': progress.ColorProgressIndicator,
'mono': progress.MonochromeProgressIndicator,
}
class TestRunnerError(Exception):
pass
class BuildConfig(object):
def __init__(self, build_config):
# In V8 land, GN's x86 is called ia32.
if build_config['v8_target_cpu'] == 'x86':
self.arch = 'ia32'
else:
self.arch = build_config['v8_target_cpu']
self.is_debug = build_config['is_debug']
self.asan = build_config['is_asan']
self.cfi_vptr = build_config['is_cfi']
self.dcheck_always_on = build_config['dcheck_always_on']
self.gcov_coverage = build_config['is_gcov_coverage']
self.msan = build_config['is_msan']
self.no_i18n = not build_config['v8_enable_i18n_support']
self.no_snap = not build_config['v8_use_snapshot']
self.predictable = build_config['v8_enable_verify_predictable']
self.tsan = build_config['is_tsan']
self.ubsan_vptr = build_config['is_ubsan_vptr']
# Export only for MIPS target
if self.arch in ['mips', 'mipsel', 'mips64', 'mips64el']:
self.mips_arch_variant = build_config['mips_arch_variant']
self.mips_use_msa = build_config['mips_use_msa']
def __str__(self):
detected_options = []
if self.asan:
detected_options.append('asan')
if self.cfi_vptr:
detected_options.append('cfi_vptr')
if self.dcheck_always_on:
detected_options.append('dcheck_always_on')
if self.gcov_coverage:
detected_options.append('gcov_coverage')
if self.msan:
detected_options.append('msan')
if self.no_i18n:
detected_options.append('no_i18n')
if self.no_snap:
detected_options.append('no_snap')
if self.predictable:
detected_options.append('predictable')
if self.tsan:
detected_options.append('tsan')
if self.ubsan_vptr:
detected_options.append('ubsan_vptr')
return '\n'.join(detected_options)
class BaseTestRunner(object):
def __init__(self, basedir=None):
self.basedir = basedir or BASE_DIR
self.outdir = None
self.build_config = None
self.mode_name = None
self.mode_options = None
def execute(self, sys_args=None):
if sys_args is None: # pragma: no cover
sys_args = sys.argv[1:]
try:
parser = self._create_parser()
options, args = self._parse_args(parser, sys_args)
self._load_build_config(options)
try:
self._process_default_options(options)
self._process_options(options)
except TestRunnerError:
parser.print_help()
raise
args = self._parse_test_args(args)
suites = self._get_suites(args, options)
self._load_status_files(suites, options)
self._setup_env()
return self._do_execute(suites, args, options)
except TestRunnerError:
return 1
except KeyboardInterrupt:
return 2
def _create_parser(self):
parser = optparse.OptionParser()
parser.usage = '%prog [options] [tests]'
parser.description = """TESTS: %s""" % (TEST_MAP["default"])
self._add_parser_default_options(parser)
self._add_parser_options(parser)
return parser
def _add_parser_default_options(self, parser):
parser.add_option("--gn", help="Scan out.gn for the last built"
" configuration",
default=False, action="store_true")
parser.add_option("--outdir", help="Base directory with compile output",
default="out")
parser.add_option("--buildbot", help="DEPRECATED!",
default=False, action="store_true")
parser.add_option("--arch",
help="The architecture to run tests for")
parser.add_option("-m", "--mode",
help="The test mode in which to run (uppercase for ninja"
" and buildbot builds): %s" % MODES.keys())
parser.add_option("--shell-dir", help="DEPRECATED! Executables from build "
"directory will be used")
parser.add_option("--total-timeout-sec", default=0, type="int",
help="How long should fuzzer run")
# Shard
parser.add_option("--shard-count", default=1, type=int,
help="Split tests into this number of shards")
parser.add_option("--shard-run", default=1, type=int,
help="Run this shard from the split up tests.")
# Progress
parser.add_option("-p", "--progress",
choices=PROGRESS_INDICATORS.keys(), default="mono",
help="The style of progress indicator (verbose, dots, "
"color, mono)")
parser.add_option("--json-test-results",
help="Path to a file for storing json results.")
parser.add_option("--junitout", help="File name of the JUnit output")
parser.add_option("--junittestsuite", default="v8tests",
help="The testsuite name in the JUnit output file")
# Rerun
parser.add_option("--rerun-failures-count", default=0, type=int,
help="Number of times to rerun each failing test case. "
"Very slow tests will be rerun only once.")
parser.add_option("--rerun-failures-max", default=100, type=int,
help="Maximum number of failing test cases to rerun")
# Test config
parser.add_option("--command-prefix", default="",
help="Prepended to each shell command used to run a test")
parser.add_option("--extra-flags", action="append", default=[],
help="Additional flags to pass to each test command")
parser.add_option("--isolates", action="store_true", default=False,
help="Whether to test isolates")
parser.add_option("--no-harness", "--noharness",
default=False, action="store_true",
help="Run without test harness of a given suite")
parser.add_option("--random-seed", default=0, type=int,
help="Default seed for initializing random generator")
parser.add_option("-t", "--timeout", default=60, type=int,
help="Timeout for single test in seconds")
parser.add_option("-v", "--verbose", default=False, action="store_true",
help="Verbose output")
# TODO(machenbach): Temporary options for rolling out new test runner
# features.
parser.add_option("--mastername", default='',
help="Mastername property from infrastructure. Not "
"setting this option indicates manual usage.")
parser.add_option("--buildername", default='',
help="Buildername property from infrastructure. Not "
"setting this option indicates manual usage.")
def _add_parser_options(self, parser):
pass
def _parse_args(self, parser, sys_args):
options, args = parser.parse_args(sys_args)
if any(map(lambda v: v and ',' in v,
[options.arch, options.mode])): # pragma: no cover
print 'Multiple arch/mode are deprecated'
raise TestRunnerError()
return options, args
def _load_build_config(self, options):
for outdir in self._possible_outdirs(options):
try:
self.build_config = self._do_load_build_config(outdir, options.verbose)
except TestRunnerError:
pass
if not self.build_config: # pragma: no cover
print 'Failed to load build config'
raise TestRunnerError
print 'Build found: %s' % self.outdir
if str(self.build_config):
print '>>> Autodetected:'
print self.build_config
# Returns possible build paths in order:
# gn
# outdir
# outdir/arch.mode
# Each path is provided in two versions: <path> and <path>/mode for buildbot.
def _possible_outdirs(self, options):
def outdirs():
if options.gn:
yield self._get_gn_outdir()
return
yield options.outdir
if options.arch and options.mode:
yield os.path.join(options.outdir,
'%s.%s' % (options.arch, options.mode))
for outdir in outdirs():
yield os.path.join(self.basedir, outdir)
# buildbot option
if options.mode:
yield os.path.join(self.basedir, outdir, options.mode)
def _get_gn_outdir(self):
gn_out_dir = os.path.join(self.basedir, DEFAULT_OUT_GN)
latest_timestamp = -1
latest_config = None
for gn_config in os.listdir(gn_out_dir):
gn_config_dir = os.path.join(gn_out_dir, gn_config)
if not os.path.isdir(gn_config_dir):
continue
if os.path.getmtime(gn_config_dir) > latest_timestamp:
latest_timestamp = os.path.getmtime(gn_config_dir)
latest_config = gn_config
if latest_config:
print(">>> Latest GN build found: %s" % latest_config)
return os.path.join(DEFAULT_OUT_GN, latest_config)
def _do_load_build_config(self, outdir, verbose=False):
build_config_path = os.path.join(outdir, "v8_build_config.json")
if not os.path.exists(build_config_path):
if verbose:
print("Didn't find build config: %s" % build_config_path)
raise TestRunnerError()
with open(build_config_path) as f:
try:
build_config_json = json.load(f)
except Exception: # pragma: no cover
print("%s exists but contains invalid json. Is your build up-to-date?"
% build_config_path)
raise TestRunnerError()
# In auto-detect mode the outdir is always where we found the build config.
# This ensures that we'll also take the build products from there.
self.outdir = os.path.dirname(build_config_path)
return BuildConfig(build_config_json)
def _process_default_options(self, options):
# We don't use the mode for more path-magic.
# Therefore transform the buildbot mode here to fix build_config value.
if options.mode:
options.mode = self._buildbot_to_v8_mode(options.mode)
build_config_mode = 'debug' if self.build_config.is_debug else 'release'
if options.mode:
if options.mode not in MODES: # pragma: no cover
print '%s mode is invalid' % options.mode
raise TestRunnerError()
if MODES[options.mode].execution_mode != build_config_mode:
print ('execution mode (%s) for %s is inconsistent with build config '
'(%s)' % (
MODES[options.mode].execution_mode,
options.mode,
build_config_mode))
raise TestRunnerError()
self.mode_name = options.mode
else:
self.mode_name = build_config_mode
self.mode_options = MODES[self.mode_name]
if options.arch and options.arch != self.build_config.arch:
print('--arch value (%s) inconsistent with build config (%s).' % (
options.arch, self.build_config.arch))
raise TestRunnerError()
if options.shell_dir: # pragma: no cover
print('Warning: --shell-dir is deprecated. Searching for executables in '
'build directory (%s) instead.' % self.outdir)
options.command_prefix = shlex.split(options.command_prefix)
options.extra_flags = sum(map(shlex.split, options.extra_flags), [])
def _buildbot_to_v8_mode(self, config):
"""Convert buildbot build configs to configs understood by the v8 runner.
V8 configs are always lower case and without the additional _x64 suffix
for 64 bit builds on windows with ninja.
"""
mode = config[:-4] if config.endswith('_x64') else config
return mode.lower()
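A standalone restatement of the buildbot-to-v8 mode mapping above, showing the `_x64` suffix stripping and lowercasing on hypothetical config names:

```python
# Same transformation as _buildbot_to_v8_mode: drop a trailing '_x64'
# (windows/ninja 64-bit configs) and lowercase the result.
def buildbot_to_v8_mode(config):
    mode = config[:-4] if config.endswith("_x64") else config
    return mode.lower()

print(buildbot_to_v8_mode("Release_x64"))  # release
print(buildbot_to_v8_mode("Debug"))        # debug
```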
def _process_options(self, options):
pass
def _setup_env(self):
# Use the v8 root as cwd as some test cases use "load" with relative paths.
os.chdir(self.basedir)
# Many tests assume an English interface.
os.environ['LANG'] = 'en_US.UTF-8'
symbolizer_option = self._get_external_symbolizer_option()
if self.build_config.asan:
asan_options = [
symbolizer_option,
'allow_user_segv_handler=1',
'allocator_may_return_null=1',
]
if not utils.GuessOS() in ['macos', 'windows']:
# LSAN is not available on mac and windows.
asan_options.append('detect_leaks=1')
else:
asan_options.append('detect_leaks=0')
os.environ['ASAN_OPTIONS'] = ":".join(asan_options)
if self.build_config.cfi_vptr:
os.environ['UBSAN_OPTIONS'] = ":".join([
'print_stacktrace=1',
'print_summary=1',
'symbolize=1',
symbolizer_option,
])
if self.build_config.ubsan_vptr:
os.environ['UBSAN_OPTIONS'] = ":".join([
'print_stacktrace=1',
symbolizer_option,
])
if self.build_config.msan:
os.environ['MSAN_OPTIONS'] = symbolizer_option
if self.build_config.tsan:
suppressions_file = os.path.join(
self.basedir,
'tools',
'sanitizers',
'tsan_suppressions.txt')
os.environ['TSAN_OPTIONS'] = " ".join([
symbolizer_option,
'suppressions=%s' % suppressions_file,
'exit_code=0',
'report_thread_leaks=0',
'history_size=7',
'report_destroy_locked=0',
])
def _get_external_symbolizer_option(self):
external_symbolizer_path = os.path.join(
self.basedir,
'third_party',
'llvm-build',
'Release+Asserts',
'bin',
'llvm-symbolizer',
)
if utils.IsWindows():
# Quote, because sanitizers might confuse colon as option separator.
external_symbolizer_path = '"%s.exe"' % external_symbolizer_path
return 'external_symbolizer_path=%s' % external_symbolizer_path
def _parse_test_args(self, args):
if not args:
args = self._get_default_suite_names()
# Expand arguments with grouped tests. The args should reflect the list
# of suites as otherwise filters would break.
def expand_test_group(name):
return TEST_MAP.get(name, [name])
return reduce(list.__add__, map(expand_test_group, args), [])
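The group expansion above turns a synonym like `bot_default` into its member suites while passing plain suite names through. A sketch with a toy map (the real group names live in `TEST_MAP` above):

```python
from functools import reduce

# Toy version of the TEST_MAP group expansion in _parse_test_args.
test_map = {"bot_default": ["mjsunit", "cctest"]}

def expand(name):
    return test_map.get(name, [name])

args = ["bot_default", "webkit"]
expanded = reduce(list.__add__, map(expand, args), [])
print(expanded)  # ['mjsunit', 'cctest', 'webkit']
```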
def _get_suites(self, args, options):
names = self._args_to_suite_names(args)
return self._load_suites(names, options)
def _args_to_suite_names(self, args):
# Use default tests if no test configuration was provided at the cmd line.
all_names = set(utils.GetSuitePaths(os.path.join(self.basedir, 'test')))
args_names = OrderedDict([(arg.split('/')[0], None) for arg in args]) # set
return [name for name in args_names if name in all_names]
def _get_default_suite_names(self):
return []
def _load_suites(self, names, options):
test_config = self._create_test_config(options)
def load_suite(name):
if options.verbose:
print '>>> Loading test suite: %s' % name
return testsuite.TestSuite.LoadTestSuite(
os.path.join(self.basedir, 'test', name),
test_config)
return map(load_suite, names)
def _load_status_files(self, suites, options):
# simd_mips is true if SIMD is fully supported on MIPS
variables = self._get_statusfile_variables(options)
for s in suites:
s.ReadStatusFile(variables)
def _get_statusfile_variables(self, options):
simd_mips = (
self.build_config.arch in ['mipsel', 'mips', 'mips64', 'mips64el'] and
self.build_config.mips_arch_variant == "r6" and
self.build_config.mips_use_msa)
# TODO(all): Combine "simulator" and "simulator_run".
# TODO(machenbach): In GN we can derive simulator run from
# target_arch != v8_target_arch in the dumped build config.
return {
"arch": self.build_config.arch,
"asan": self.build_config.asan,
"byteorder": sys.byteorder,
"dcheck_always_on": self.build_config.dcheck_always_on,
"deopt_fuzzer": False,
"endurance_fuzzer": False,
"gc_fuzzer": False,
"gc_stress": False,
"gcov_coverage": self.build_config.gcov_coverage,
"isolates": options.isolates,
"mode": self.mode_options.status_mode,
"msan": self.build_config.msan,
"no_harness": options.no_harness,
"no_i18n": self.build_config.no_i18n,
"no_snap": self.build_config.no_snap,
"novfp3": False,
"predictable": self.build_config.predictable,
"simd_mips": simd_mips,
"simulator": utils.UseSimulator(self.build_config.arch),
"simulator_run": False,
"system": utils.GuessOS(),
"tsan": self.build_config.tsan,
"ubsan_vptr": self.build_config.ubsan_vptr,
}
def _create_test_config(self, options):
timeout = options.timeout * self._timeout_scalefactor(options)
return TestConfig(
command_prefix=options.command_prefix,
extra_flags=options.extra_flags,
isolates=options.isolates,
mode_flags=self.mode_options.flags,
no_harness=options.no_harness,
noi18n=self.build_config.no_i18n,
random_seed=options.random_seed,
shell_dir=self.outdir,
timeout=timeout,
verbose=options.verbose,
)
def _timeout_scalefactor(self, options):
factor = self.mode_options.timeout_scalefactor
# Simulators are slow, therefore allow a longer timeout.
if self.build_config.arch in SLOW_ARCHS:
factor *= 2
# Predictable mode is slower.
if self.build_config.predictable:
factor *= 2
return factor
# TODO(majeski): remove options & args parameters
def _do_execute(self, suites, args, options):
raise NotImplementedError()
def _create_shard_proc(self, options):
myid, count = self._get_shard_info(options)
if count == 1:
return None
return ShardProc(myid - 1, count)
def _get_shard_info(self, options):
"""
Returns pair:
(id of the current shard [1; number of shards], number of shards)
"""
# Read gtest shard configuration from environment (e.g. set by swarming).
# If none is present, use values passed on the command line.
shard_count = int(
os.environ.get('GTEST_TOTAL_SHARDS', options.shard_count))
shard_run = os.environ.get('GTEST_SHARD_INDEX')
if shard_run is not None:
# The v8 shard_run starts at 1, while GTEST_SHARD_INDEX starts at 0.
shard_run = int(shard_run) + 1
else:
shard_run = options.shard_run
if options.shard_count > 1:
# Log if a value was passed on the cmd line and it differs from the
# environment variables.
if options.shard_count != shard_count: # pragma: no cover
print("shard_count from cmd line differs from environment variable "
"GTEST_TOTAL_SHARDS")
if (options.shard_run > 1 and
options.shard_run != shard_run): # pragma: no cover
print("shard_run from cmd line differs from environment variable "
"GTEST_SHARD_INDEX")
if shard_run < 1 or shard_run > shard_count:
# TODO(machenbach): Turn this into an assert. If that's wrong on the
# bots, printing will be quite useless. Or refactor this code to make
# sure we get a return code != 0 after testing if we got here.
print "shard-run not a valid number, should be in [1:shard-count]"
print "defaulting back to running all tests"
return 1, 1
return shard_run, shard_count
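The environment-variable precedence implemented in `_get_shard_info` can be condensed into a standalone sketch (the function name `resolve_shard` and the injectable `environ` parameter are illustrative, not part of the runner):

```python
import os

def resolve_shard(shard_count_opt, shard_run_opt, environ=None):
    # Environment variables (e.g. set by swarming) take precedence over the
    # command-line options; GTEST_SHARD_INDEX is 0-based while the runner's
    # shard_run is 1-based.
    env = os.environ if environ is None else environ
    count = int(env.get('GTEST_TOTAL_SHARDS', shard_count_opt))
    run = env.get('GTEST_SHARD_INDEX')
    run = int(run) + 1 if run is not None else shard_run_opt
    if run < 1 or run > count:
        return 1, 1  # invalid selection: fall back to running all tests
    return run, count
```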
def _create_progress_indicators(self, options):
procs = [PROGRESS_INDICATORS[options.progress]()]
if options.junitout:
procs.append(progress.JUnitTestProgressIndicator(options.junitout,
options.junittestsuite))
if options.json_test_results:
procs.append(progress.JsonTestProgressIndicator(
options.json_test_results,
self.build_config.arch,
self.mode_options.execution_mode))
return procs
def _create_timeout_proc(self, options):
if not options.total_timeout_sec:
return None
return TimeoutProc(options.total_timeout_sec)
def _create_signal_proc(self):
return SignalProc()
def _create_rerun_proc(self, options):
if not options.rerun_failures_count:
return None
return RerunProc(options.rerun_failures_count,
options.rerun_failures_max)
# authors: commit-bot@chromium.org

# === file: npmctree/sampling.py (repo: argriffing/npmctree, no license) ===
"""
Joint state sampling algorithm for a Markov chain on a NetworkX tree graph.
"""
from __future__ import division, print_function, absolute_import
import random
import numpy as np
import networkx as nx
from npmctree import dynamic_fset_lhood, dynamic_lmap_lhood
from .util import normalized, weighted_choice
__all__ = [
'sample_history',
'sample_histories',
'sample_unconditional_history',
'sample_unconditional_histories',
]
def sample_history(T, edge_to_P, root,
root_prior_distn1d, node_to_data_lmap):
"""
Jointly sample states on a tree.
This is called a history.
"""
v_to_subtree_partial_likelihoods = dynamic_lmap_lhood._backward(
T, edge_to_P, root, root_prior_distn1d, node_to_data_lmap)
node_to_state = _sample_states_preprocessed(T, edge_to_P, root,
v_to_subtree_partial_likelihoods)
return node_to_state
def sample_histories(T, edge_to_P, root,
root_prior_distn1d, node_to_data_lmap, nhistories):
"""
Sample multiple histories.
Each history is a joint sample of states on the tree.
"""
v_to_subtree_partial_likelihoods = dynamic_lmap_lhood._backward(
T, edge_to_P, root, root_prior_distn1d, node_to_data_lmap)
for i in range(nhistories):
node_to_state = _sample_states_preprocessed(T, edge_to_P, root,
v_to_subtree_partial_likelihoods)
yield node_to_state
def _sample_states_preprocessed(T, edge_to_P, root,
v_to_subtree_partial_likelihoods):
"""
Jointly sample states on a tree.
This variant requires subtree partial likelihoods.
"""
root_partial_likelihoods = v_to_subtree_partial_likelihoods[root]
n = root_partial_likelihoods.shape[0]
if not root_partial_likelihoods.any():
return None
distn1d = normalized(root_partial_likelihoods)
root_state = weighted_choice(n, p=distn1d)
v_to_sampled_state = {root : root_state}
for edge in nx.bfs_edges(T, root):
va, vb = edge
P = edge_to_P[edge]
# For the relevant parent state,
# compute an unnormalized distribution over child states.
sa = v_to_sampled_state[va]
# Construct conditional transition probabilities.
sb_weights = P[sa] * v_to_subtree_partial_likelihoods[vb]
# Sample the state.
distn1d = normalized(sb_weights)
v_to_sampled_state[vb] = weighted_choice(n, p=distn1d)
return v_to_sampled_state
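The per-edge sampling step above (the transition row for the sampled parent state, reweighted by the child's subtree partial likelihoods) can be sketched with only the standard library; `sample_child_state` is an illustrative name, and `random.choices` stands in for this module's `weighted_choice`:

```python
import random

def sample_child_state(P_row, partial_likelihoods, rng=random):
    # Unnormalized conditional child distribution: transition probabilities
    # from the sampled parent state, reweighted by the child's subtree
    # partial likelihoods, as in _sample_states_preprocessed.
    weights = [p * l for p, l in zip(P_row, partial_likelihoods)]
    return rng.choices(range(len(weights)), weights=weights)[0]
```

`random.choices` accepts unnormalized weights, so the explicit `normalized` step above is not needed in this sketch.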
def sample_unconditional_history(T, edge_to_P, root, root_prior_distn1d):
"""
No data is used in the sampling of this state history at nodes.
"""
nstates = root_prior_distn1d.shape[0]
node_to_state = {root : weighted_choice(nstates, p=root_prior_distn1d)}
for edge in nx.bfs_edges(T, root):
va, vb = edge
P = edge_to_P[edge]
sa = node_to_state[va]
node_to_state[vb] = weighted_choice(nstates, p=P[sa])
return node_to_state
def sample_unconditional_histories(T, edge_to_P, root,
root_prior_distn1d, nhistories):
"""
Sample multiple unconditional histories.
This function is not as useful as its conditional sampling analog,
because this function does not require pre-processing.
"""
for i in range(nhistories):
yield sample_unconditional_history(
T, edge_to_P, root, root_prior_distn1d)
# authors: argriffi@ncsu.edu

# === file: server/workers/dataprocessing/run_dataprocessing.py (repo: chreman/Headstart, MIT) ===
import os
import json
import redis
from dataprocessing.src.headstart import Dataprocessing
if __name__ == '__main__':
redis_config = {
"host": os.getenv("REDIS_HOST"),
"port": os.getenv("REDIS_PORT"),
"db": os.getenv("REDIS_DB"),
"password": os.getenv("REDIS_PASSWORD")
}
redis_store = redis.StrictRedis(**redis_config)
dp = Dataprocessing("./other-scripts", "run_vis_layout.R",
redis_store=redis_store,
loglevel=os.environ.get("HEADSTART_LOGLEVEL", "INFO"))
dp.run()
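A minimal sketch of the environment-driven configuration pattern used above. The defaults and integer casts here are assumptions for illustration; the original forwards the raw `os.getenv()` values to `redis.StrictRedis` unchanged:

```python
import os

def redis_config_from_env(environ=None):
    # Gather Redis connection settings from the environment, falling back to
    # hypothetical defaults and casting the numeric fields explicitly.
    env = os.environ if environ is None else environ
    return {
        "host": env.get("REDIS_HOST", "localhost"),
        "port": int(env.get("REDIS_PORT", 6379)),
        "db": int(env.get("REDIS_DB", 0)),
        "password": env.get("REDIS_PASSWORD"),
    }
```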
# authors: web@christopherkittel.eu

# === file: paintHouse.py (repo: bch6179/Pyn, no license) ===
class Solution(object):
    def paintHouse(self, costs):
        if costs is None or len(costs) == 0: return -1
        n = len(costs)
        for i in range(1, n):
            costs[i][0] += min(costs[i-1][1], costs[i-1][2])
            costs[i][1] += min(costs[i-1][0], costs[i-1][2])
            costs[i][2] += min(costs[i-1][0], costs[i-1][1])
        return min(costs[n-1][0], costs[n-1][1], costs[n-1][2])
    def paintHouseK(self, costs, k):
        if costs is None or len(costs) == 0: return -1
        n = len(costs)
        prevMin, prevSec = 0, 0
        prevIdx = -1
        for i in range(0, n):
            curMin, curSec = float('inf'), float('inf')
            curIdx = -1
            for j in range(0, k):
                # add cheapest previous total, avoiding the same color twice in a row
                costs[i][j] += prevMin if prevIdx != j else prevSec
                if costs[i][j] < curMin:
                    curSec = curMin
                    curMin = costs[i][j]
                    curIdx = j
                elif costs[i][j] < curSec:
                    curSec = costs[i][j]
            prevMin, prevSec, prevIdx = curMin, curSec, curIdx
        return prevMin
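The three-color recurrence in `paintHouse` can equivalently be written with O(1) extra space and without mutating the input; `min_paint_cost` is a standalone sketch, not part of the original class:

```python
def min_paint_cost(costs):
    # Rolling minima over the three colors: each house adds the cheapest
    # total from the previous house that used a different color.
    if not costs:
        return 0
    a, b, c = costs[0]
    for x, y, z in costs[1:]:
        a, b, c = x + min(b, c), y + min(a, c), z + min(a, b)
    return min(a, b, c)
```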
s = Solution()
print(s.paintHouse([[1,2,3],[1,2,3],[1,2,3]]))

# authors: bch6179@gmail.com

# === file: BOJ/01000/01075.py (repo: Has3ong/Algorithm, no license) ===
def solution():
N = int(input())
F = int(input())
N = (N // 100) * 100
for i in range(100):
M = N + i
if M % F == 0:
break
print(str(M)[-2:])
solution()

# authors: khsh5592@naver.com

# === file: src/rayoptics/raytr/analyses.py (repo: mjhoptics/ray-optics, BSD-3-Clause) ===
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# Copyright © 2020 Michael J. Hayford
"""Aberration calculations for (fld, wvl, foc), including focus and image shift
This module refactors some existing ray trace and aberration calculations
in other modules to be expressed for a single field point and wavelength.
The ability to apply focus and image shifts to an already acquired data set
is provided for use interactively and in other performance critical areas.
The following classes are implemented in this module:
- :class:`~.Ray`: trace a single ray
- :class:`~.RayFan`: trace a fan of rays in either the x or y meridian
- :class:`~.RayList`: trace a list of rays from an object point
- :class:`~.RayGrid`: trace a rectilinear grid of rays
All but the `Ray` class are supported by a group of functions to trace the
rays, accumulate the data (trace_*), and refocus (focus_*) the data. A
all-in-one function (eval_*) to trace and apply focus is supplied also.
These are used in the update_data methods of the classes to generate the
ray data.
This module also has functions to calculate chief ray and reference sphere
information as well as functions for calculating the monochromatic PSF of
the model.
.. Created on Sat Feb 22 22:01:56 2020
.. codeauthor: Michael J. Hayford
"""
import numpy as np
from numpy.fft import fftshift, fft2
from scipy.interpolate import interp1d
import rayoptics.optical.model_constants as mc
from rayoptics.raytr import sampler
from rayoptics.raytr import trace
from rayoptics.raytr import traceerror as terr
from rayoptics.raytr import waveabr
# --- Single ray
class Ray():
"""A ray at the given field and wavelength.
Attributes:
opt_model: :class:`~.OpticalModel` instance
p: relative 2d pupil coordinates
f: index into :class:`~.FieldSpec` or a :class:`~.Field` instance
wl: wavelength (nm) to trace the ray, or central wavelength if None
foc: focus shift to apply to the results
image_pt_2d: base image point. if None, the chief ray is used
image_delta: image offset to apply to image_pt_2d
srf_save:
'single': save the ray data for surface srf_indx
'all': save all of the surface by surface ray data
srf_indx: for single surface retention, the surface index to save
"""
def __init__(self, opt_model, p, f=0, wl=None, foc=None, image_pt_2d=None,
image_delta=None, srf_indx=-1, srf_save='single',
output_filter=None, rayerr_filter=None, color=None):
self.opt_model = opt_model
osp = opt_model.optical_spec
self.pupil = p
self.fld = osp['fov'].fields[f] if isinstance(f, int) else f
self.wvl = osp['wvls'].central_wvl if wl is None else wl
self.foc = osp['focus'].focus_shift if foc is None else foc
self.image_pt_2d = image_pt_2d
self.image_delta = image_delta
self.output_filter = output_filter
self.rayerr_filter = rayerr_filter
self.color = color
self.srf_save = srf_save
self.srf_indx = srf_indx
self.update_data()
def update_data(self, **kwargs):
"""Trace the ray and calculate transverse aberrations. """
ref_sphere, cr_pkg = trace.setup_pupil_coords(
self.opt_model, self.fld, self.wvl, self.foc,
image_pt=self.image_pt_2d, image_delta=self.image_delta
)
build = kwargs.pop('build', 'rebuild')
if build == 'rebuild':
ray_pkg = trace.trace_safe(
self.opt_model, self.pupil, self.fld, self.wvl,
self.output_filter, self.rayerr_filter,
use_named_tuples=True, **kwargs)
self.ray_seg = ray_pkg[0][self.srf_indx]
if self.srf_save == 'all':
self.ray_pkg = ray_pkg
ray_seg = self.ray_seg
dist = self.foc / ray_seg[mc.d][2]
defocused_pt = ray_seg[mc.p] + dist*ray_seg[mc.d]
reference_image_pt = ref_sphere[0]
self.t_abr = defocused_pt[:2] - reference_image_pt[:2]
return self
# --- Fan of rays
class RayFan():
"""A fan of rays across the pupil at the given field and wavelength.
Attributes:
opt_model: :class:`~.OpticalModel` instance
f: index into :class:`~.FieldSpec` or a :class:`~.Field` instance
wl: wavelength (nm) to trace the fan, or central wavelength if None
foc: focus shift to apply to the results
image_pt_2d: base image point. if None, the chief ray is used
image_delta: image offset to apply to image_pt_2d
num_rays: number of samples along the fan
xyfan: 'x' or 'y', specifies the axis the fan is sampled on
"""
def __init__(self, opt_model, f=0, wl=None, foc=None, image_pt_2d=None,
image_delta=None, num_rays=21, xyfan='y', output_filter=None,
rayerr_filter=None, color=None, **kwargs):
self.opt_model = opt_model
osp = opt_model.optical_spec
self.fld = osp.field_of_view.fields[f] if isinstance(f, int) else f
self.wvl = osp.spectral_region.central_wvl if wl is None else wl
self.foc = osp.defocus.focus_shift if foc is None else foc
self.image_pt_2d = image_pt_2d
self.image_delta = image_delta
self.num_rays = num_rays
if xyfan == 'x':
self.xyfan = 0
elif xyfan == 'y':
self.xyfan = 1
else:
self.xyfan = int(xyfan)
self.color = color
self.output_filter = output_filter
self.rayerr_filter = rayerr_filter
self.update_data()
def __json_encode__(self):
attrs = dict(vars(self))
del attrs['opt_model']
del attrs['fan_pkg']
return attrs
def update_data(self, **kwargs):
"""Set the fan attribute to a list of (pupil coords), dx, dy, opd."""
build = kwargs.get('build', 'rebuild')
if build == 'rebuild':
self.fan_pkg = trace_fan(
self.opt_model, self.fld, self.wvl, self.foc, self.xyfan,
image_pt_2d=self.image_pt_2d, image_delta=self.image_delta,
num_rays=self.num_rays,
output_filter=self.output_filter,
rayerr_filter=self.rayerr_filter)
self.fan = focus_fan(self.opt_model, self.fan_pkg,
self.fld, self.wvl, self.foc,
image_pt_2d=self.image_pt_2d,
image_delta=self.image_delta)
return self
def select_plot_data(fan, xyfan, data_type):
"""Given a fan of data, select the sample points and the resulting data."""
f_x = []
f_y = []
for p, val in fan:
f_x.append(p[xyfan])
f_y.append(val[data_type])
f_x = np.array(f_x)
f_y = np.array(f_y)
return f_x, f_y
def smooth_plot_data(f_x, f_y, num_points=100):
"""Interpolate fan data points and return a smoothed version."""
interpolator = interp1d(f_x, f_y,
kind='cubic', assume_sorted=True)
x_sample = np.linspace(f_x.min(), f_x.max(), num_points)
y_fit = interpolator(x_sample)
return x_sample, y_fit
def trace_ray_fan(opt_model, fan_rng, fld, wvl, foc,
output_filter=None, rayerr_filter=None, **kwargs):
"""Trace a fan of rays, according to fan_rng. """
start = np.array(fan_rng[0])
stop = fan_rng[1]
num = fan_rng[2]
step = (stop - start)/(num - 1)
fan = []
for r in range(num):
pupil = np.array(start)
ray_result = trace.trace_safe(opt_model, pupil, fld, wvl,
output_filter, rayerr_filter,
use_named_tuples=True, **kwargs)
if ray_result is not None:
fan.append([pupil[0], pupil[1], ray_result])
start += step
return fan
def eval_fan(opt_model, fld, wvl, foc, xy,
image_pt_2d=None, image_delta=None, num_rays=21,
output_filter=None, rayerr_filter=None, **kwargs):
"""Trace a fan of rays and evaluate dx, dy, & OPD across the fan."""
fod = opt_model['analysis_results']['parax_data'].fod
ref_sphere, cr_pkg = trace.setup_pupil_coords(opt_model, fld, wvl, foc,
image_pt=image_pt_2d,
image_delta=image_delta)
fld.chief_ray = cr_pkg
fld.ref_sphere = ref_sphere
fan_start = np.array([0., 0.])
fan_stop = np.array([0., 0.])
fan_start[xy] = -1.0
fan_stop[xy] = 1.0
fan_def = [fan_start, fan_stop, num_rays]
fan = trace_ray_fan(opt_model, fan_def, fld, wvl, foc,
output_filter=output_filter,
rayerr_filter=rayerr_filter, **kwargs)
central_wvl = opt_model.optical_spec.spectral_region.central_wvl
convert_to_opd = 1/opt_model.nm_to_sys_units(central_wvl)
def rfc(fi):
pupil_x, pupil_y, ray_pkg = fi
if ray_pkg is not None:
image_pt = ref_sphere[0]
ray = ray_pkg[mc.ray]
dist = foc / ray[-1][mc.d][2]
defocused_pt = ray[-1][mc.p] + dist*ray[-1][mc.d]
t_abr = defocused_pt - image_pt
opdelta = waveabr.wave_abr_full_calc(fod, fld, wvl, foc, ray_pkg,
cr_pkg, ref_sphere)
opd = convert_to_opd*opdelta
return (pupil_x, pupil_y), (t_abr[0], t_abr[1], opd)
else:
            return (pupil_x, pupil_y), (np.NaN, np.NaN, np.NaN)
fan_data = [rfc(i) for i in fan]
return fan_data
def trace_fan(opt_model, fld, wvl, foc, xy,
image_pt_2d=None, image_delta=None, num_rays=21,
output_filter=None, rayerr_filter=None, **kwargs):
"""Trace a fan of rays and precalculate data for rapid refocus later."""
fod = opt_model['analysis_results']['parax_data'].fod
ref_sphere, cr_pkg = trace.setup_pupil_coords(opt_model, fld, wvl, foc,
image_pt=image_pt_2d,
image_delta=image_delta)
fld.chief_ray = cr_pkg
fld.ref_sphere = ref_sphere
""" xy determines whether x (=0) or y (=1) fan """
fan_start = np.array([0., 0.])
fan_stop = np.array([0., 0.])
fan_start[xy] = -1.0
fan_stop[xy] = 1.0
fan_def = [fan_start, fan_stop, num_rays]
fan = trace_ray_fan(opt_model, fan_def, fld, wvl, foc,
output_filter=output_filter,
rayerr_filter=rayerr_filter, **kwargs)
def wpc(fi):
pupil_x, pupil_y, ray_pkg = fi
if ray_pkg is not None and not isinstance(ray_pkg, terr.TraceError):
pre_opd_pkg = waveabr.wave_abr_pre_calc(fod, fld, wvl, foc,
ray_pkg, cr_pkg)
return pre_opd_pkg
else:
return None
upd_fan = [wpc(i) for i in fan]
return fan, upd_fan
def focus_fan(opt_model, fan_pkg, fld, wvl, foc,
image_pt_2d=None, image_delta=None, **kwargs):
"""Refocus the fan of rays and return the tranverse abr. and OPD."""
fod = opt_model['analysis_results']['parax_data'].fod
fan, upd_fan = fan_pkg
ref_sphere, cr_pkg = trace.setup_pupil_coords(opt_model, fld, wvl, foc,
image_pt=image_pt_2d,
image_delta=image_delta)
central_wvl = opt_model.optical_spec.spectral_region.central_wvl
convert_to_opd = 1/opt_model.nm_to_sys_units(central_wvl)
def rfc(fi, fiu):
pupil_x, pupil_y, ray_pkg = fi
if ray_pkg is not None and not isinstance(ray_pkg, terr.TraceError):
image_pt = ref_sphere[0]
ray = ray_pkg[mc.ray]
dist = foc / ray[-1][mc.d][2]
defocused_pt = ray[-1][mc.p] + dist*ray[-1][mc.d]
t_abr = defocused_pt - image_pt
opdelta = waveabr.wave_abr_calc(fod, fld, wvl, foc,
ray_pkg, cr_pkg, fiu, ref_sphere)
opd = convert_to_opd*opdelta
return (pupil_x, pupil_y), (t_abr[0], t_abr[1], opd)
else:
            return (pupil_x, pupil_y), (np.NaN, np.NaN, np.NaN)
fan_data = [rfc(fi, fiu) for fi, fiu in zip(fan, upd_fan)]
return fan_data
# --- List of rays
class RayList():
"""Container class for a list of rays produced from a list or generator
Attributes:
opt_model: :class:`~.OpticalModel` instance
pupil_gen: (fct, args, kwargs), where:
- fct: a function returning a generator returning a 2d coordinate
- args: passed to fct
- kwargs: passed to fct
pupil_coords: list of 2d coordinates. If None, filled in by calling
pupil_gen.
        num_rays: number of samples per side of the grid. Used only if pupil_coords and
pupil_gen are None.
f: index into :class:`~.FieldSpec` or a :class:`~.Field` instance
wl: wavelength (nm) to trace the fan, or central wavelength if None
foc: focus shift to apply to the results
image_pt_2d: base image point. if None, the chief ray is used
image_delta: image offset to apply to image_pt_2d
apply_vignetting: whether to apply vignetting factors to pupil coords
"""
def __init__(self, opt_model,
pupil_gen=None, pupil_coords=None, num_rays=21,
f=0, wl=None, foc=None, image_pt_2d=None, image_delta=None,
apply_vignetting=True):
self.opt_model = opt_model
osp = opt_model.optical_spec
if pupil_coords is not None and pupil_gen is None:
self.pupil_coords = pupil_coords
self.pupil_gen = None
else:
if pupil_gen is not None:
self.pupil_gen = pupil_gen
else:
grid_start = np.array([-1., -1.])
grid_stop = np.array([1., 1.])
grid_def = [grid_start, grid_stop, num_rays]
self.pupil_gen = (sampler.csd_grid_ray_generator,
(grid_def,), {})
fct, args, kwargs = self.pupil_gen
self.pupil_coords = fct(*args, **kwargs)
self.fld = osp.field_of_view.fields[f] if isinstance(f, int) else f
self.wvl = osp.spectral_region.central_wvl if wl is None else wl
self.foc = osp.defocus.focus_shift if foc is None else foc
self.image_pt_2d = image_pt_2d
self.image_delta = image_delta
self.apply_vignetting = apply_vignetting
self.update_data()
def __json_encode__(self):
attrs = dict(vars(self))
del attrs['opt_model']
del attrs['pupil_gen']
del attrs['pupil_coords']
del attrs['ray_list']
return attrs
def update_data(self, **kwargs):
build = kwargs.get('build', 'rebuild')
if build == 'rebuild':
if self.pupil_gen:
fct, args, kwargs = self.pupil_gen
self.pupil_coords = fct(*args, **kwargs)
self.ray_list = trace_pupil_coords(
self.opt_model, self.pupil_coords,
self.fld, self.wvl, self.foc,
image_pt_2d=self.image_pt_2d, image_delta=self.image_delta,
apply_vignetting=self.apply_vignetting)
ray_list_data = focus_pupil_coords(
self.opt_model, self.ray_list,
self.fld, self.wvl, self.foc,
image_pt_2d=self.image_pt_2d,
image_delta=self.image_delta)
self.ray_abr = np.rollaxis(ray_list_data, 1)
return self
def trace_ray_list(opt_model, pupil_coords, fld, wvl, foc,
append_if_none=False,
output_filter=None, rayerr_filter=None,
**kwargs):
"""Trace a list of rays at fld and wvl and return ray_pkgs in a list."""
ray_list = []
for pupil in pupil_coords:
ray_result = trace.trace_safe(opt_model, pupil, fld, wvl,
output_filter, rayerr_filter,
**kwargs)
if ray_result is not None:
ray_list.append([pupil[0], pupil[1], ray_result])
else: # ray outside pupil or failed
if append_if_none:
ray_list.append([pupil[0], pupil[1], None])
return ray_list
def trace_list_of_rays(opt_model, rays,
output_filter=None, rayerr_filter=None,
**kwargs):
"""Trace a list of rays (pt, dir, wvl) and return ray_pkgs in a list.
Args:
opt_model: :class:`~.OpticalModel` instance
rays: list of (pt0, dir0, wvl)
- pt0: starting point in coords of first interface
- dir0: starting direction cosines in coords of first interface
- wvl: wavelength in nm
output_filter: None, "last", or a callable. See below
        **kwargs: keyword args passed to the trace function
The output_filter keyword argument controls what ray data is returned to
the caller.
- if None, returns the entire traced ray
- if "last", returns the ray data from the last interface
- if a callable, it must take a ray_pkg as an argument and return the
desired data or None
Returns:
A list with an entry for each ray in rays
"""
ray_list = []
for ray in rays:
pt0, dir0, wvl = ray
try:
ray_pkg = trace.trace(opt_model.seq_model, pt0, dir0, wvl, **kwargs)
except terr.TraceError as rayerr:
if rayerr_filter is None:
pass
elif rayerr_filter == 'full':
ray_list.append((ray, rayerr))
elif rayerr_filter == 'summary':
rayerr.ray_pkg = None
ray_list.append((ray, rayerr))
else:
pass
else:
if output_filter is None:
ray_list.append(ray_pkg)
elif output_filter == 'last':
ray, op_delta, wvl = ray_pkg
ray_list.append((ray[-1], op_delta, wvl))
else:
ray_list.append(output_filter(ray_pkg))
return ray_list
def eval_pupil_coords(opt_model, fld, wvl, foc, image_pt_2d=None,
image_delta=None, num_rays=21, **kwargs):
"""Trace a list of rays and return the transverse abr."""
ref_sphere, cr_pkg = trace.setup_pupil_coords(opt_model, fld, wvl, foc,
image_pt=image_pt_2d,
image_delta=image_delta)
fld.chief_ray = cr_pkg
fld.ref_sphere = ref_sphere
grid_start = np.array([-1., -1.])
grid_stop = np.array([1., 1.])
grid_def = [grid_start, grid_stop, num_rays]
ray_list = trace_ray_list(opt_model, sampler.grid_ray_generator(grid_def),
fld, wvl, foc, check_apertures=True, **kwargs)
def rfc(ri):
pupil_x, pupil_y, ray_pkg = ri
if ray_pkg is not None:
image_pt = ref_sphere[0]
ray = ray_pkg[mc.ray]
dist = foc / ray[-1][mc.d][2]
defocused_pt = ray[-1][mc.p] + dist*ray[-1][mc.d]
t_abr = defocused_pt - image_pt
return t_abr[0], t_abr[1]
else:
            return np.NaN, np.NaN
ray_list_data = [rfc(ri) for ri in ray_list]
return np.array(ray_list_data)
def trace_pupil_coords(opt_model, pupil_coords, fld, wvl, foc,
image_pt_2d=None, image_delta=None, **kwargs):
"""Trace a list of rays and return data needed for rapid refocus."""
ref_sphere, cr_pkg = trace.setup_pupil_coords(opt_model, fld, wvl, foc,
image_pt=image_pt_2d,
image_delta=image_delta)
fld.chief_ray = cr_pkg
fld.ref_sphere = ref_sphere
ray_list = trace_ray_list(opt_model, pupil_coords,
fld, wvl, foc,
check_apertures=True, **kwargs)
return ray_list
def focus_pupil_coords(opt_model, ray_list, fld, wvl, foc,
image_pt_2d=None, image_delta=None):
"""Given pre-traced rays and a ref. sphere, return the transverse abr."""
ref_sphere, cr_pkg = trace.setup_pupil_coords(opt_model, fld, wvl, foc,
image_pt=image_pt_2d,
image_delta=image_delta)
def rfc(ri):
pupil_x, pupil_y, ray_pkg = ri
if ray_pkg is not None:
image_pt = ref_sphere[0]
ray = ray_pkg[mc.ray]
dist = foc / ray[-1][mc.d][2]
defocused_pt = ray[-1][mc.p] + dist*ray[-1][mc.d]
t_abr = defocused_pt - image_pt
return t_abr[0], t_abr[1]
else:
            return np.NaN, np.NaN
ray_list_data = [rfc(ri) for ri in ray_list]
return np.array(ray_list_data)
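The refocusing step shared by these `rfc` helpers (propagate the final ray segment by `foc / d_z`, then subtract the reference image point) can be isolated as a pure-Python sketch; the name `transverse_abr` and the tuple-based signature are illustrative:

```python
def transverse_abr(p, d, foc, image_pt):
    # Propagate the last ray segment (point p, direction d) to the
    # defocused plane, then take the transverse offset from the
    # reference image point.
    dist = foc / d[2]
    defocused = [p[i] + dist * d[i] for i in range(3)]
    return defocused[0] - image_pt[0], defocused[1] - image_pt[1]
```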
# --- Square grid of rays
class RayGrid():
"""Container for a square grid of rays.
Attributes:
opt_model: :class:`~.OpticalModel` instance
f: index into :class:`~.FieldSpec` or a :class:`~.Field` instance
wl: wavelength (nm) to trace the fan, or central wavelength if None
foc: focus shift to apply to the results
image_pt_2d: base image point. if None, the chief ray is used
image_delta: image offset to apply to image_pt_2d
num_rays: number of samples along the side of the grid
"""
def __init__(self, opt_model, f=0, wl=None, foc=None, image_pt_2d=None,
image_delta=None, num_rays=21, value_if_none=np.NaN):
self.opt_model = opt_model
osp = opt_model.optical_spec
self.fld = osp.field_of_view.fields[f] if isinstance(f, int) else f
self.wvl = osp.spectral_region.central_wvl if wl is None else wl
self.foc = osp.defocus.focus_shift if foc is None else foc
self.image_pt_2d = image_pt_2d
self.image_delta = image_delta
self.num_rays = num_rays
self.value_if_none = value_if_none
self.update_data()
def __json_encode__(self):
attrs = dict(vars(self))
del attrs['opt_model']
del attrs['grid_pkg']
return attrs
def update_data(self, **kwargs):
build = kwargs.get('build', 'rebuild')
if build == 'rebuild':
self.grid_pkg = trace_wavefront(
self.opt_model, self.fld, self.wvl, self.foc,
image_pt_2d=self.image_pt_2d, image_delta=self.image_delta,
num_rays=self.num_rays)
opd = focus_wavefront(self.opt_model, self.grid_pkg,
self.fld, self.wvl, self.foc,
image_pt_2d=self.image_pt_2d,
image_delta=self.image_delta,
value_if_none=self.value_if_none)
self.grid = np.rollaxis(opd, 2)
return self
def trace_ray_grid(opt_model, grid_rng, fld, wvl, foc, append_if_none=True,
output_filter=None, rayerr_filter=None, **kwargs):
"""Trace a grid of rays at fld and wvl and return ray_pkgs in 2d list."""
start = np.array(grid_rng[0])
stop = grid_rng[1]
num = grid_rng[2]
step = np.array((stop - start)/(num - 1))
grid = []
for i in range(num):
grid_row = []
for j in range(num):
pupil = np.array(start)
ray_result = trace.trace_safe(opt_model, pupil, fld, wvl,
output_filter, rayerr_filter,
apply_vignetting=False, **kwargs)
if ray_result is not None:
grid_row.append([pupil[0], pupil[1], ray_result])
else: # ray outside pupil or failed
if append_if_none:
grid_row.append([pupil[0], pupil[1], None])
start[1] += step[1]
grid.append(grid_row)
start[0] += step[0]
start[1] = grid_rng[0][1]
return grid
def eval_wavefront(opt_model, fld, wvl, foc, image_pt_2d=None,
image_delta=None, num_rays=21, value_if_none=np.NaN):
"""Trace a grid of rays and evaluate the OPD across the wavefront."""
fod = opt_model['analysis_results']['parax_data'].fod
ref_sphere, cr_pkg = trace.setup_pupil_coords(opt_model, fld, wvl, foc,
image_pt=image_pt_2d,
image_delta=image_delta)
fld.chief_ray = cr_pkg
fld.ref_sphere = ref_sphere
vig_bbox = fld.vignetting_bbox(opt_model['osp']['pupil'])
vig_grid_def = [vig_bbox[0], vig_bbox[1], num_rays]
grid = trace_ray_grid(opt_model, vig_grid_def,
fld, wvl, foc, check_apertures=True)
central_wvl = opt_model.optical_spec.spectral_region.central_wvl
convert_to_opd = 1/opt_model.nm_to_sys_units(central_wvl)
def rfc(gij):
pupil_x, pupil_y, ray_pkg = gij
if ray_pkg is not None:
opdelta = waveabr.wave_abr_full_calc(fod, fld, wvl, foc, ray_pkg,
cr_pkg, ref_sphere)
opd = convert_to_opd*opdelta
return pupil_x, pupil_y, opd
else:
return pupil_x, pupil_y, value_if_none
opd_grid = [[rfc(j) for j in i] for i in grid]
return np.array(opd_grid)
def trace_wavefront(opt_model, fld, wvl, foc,
image_pt_2d=None, image_delta=None, num_rays=21):
"""Trace a grid of rays and pre-calculate data needed for rapid refocus."""
fod = opt_model['analysis_results']['parax_data'].fod
ref_sphere, cr_pkg = trace.setup_pupil_coords(opt_model, fld, wvl, foc,
image_pt=image_pt_2d,
image_delta=image_delta)
fld.chief_ray = cr_pkg
fld.ref_sphere = ref_sphere
vig_bbox = fld.vignetting_bbox(opt_model['osp']['pupil'])
vig_grid_def = [vig_bbox[0], vig_bbox[1], num_rays]
grid = trace_ray_grid(opt_model, vig_grid_def,
fld, wvl, foc, check_apertures=True)
def wpc(gij):
pupil_x, pupil_y, ray_pkg = gij
if ray_pkg is not None:
pre_opd_pkg = waveabr.wave_abr_pre_calc(fod, fld, wvl, foc,
ray_pkg, cr_pkg)
return pre_opd_pkg
else:
return None
upd_grid = [[wpc(j) for j in i] for i in grid]
return grid, upd_grid
def focus_wavefront(opt_model, grid_pkg, fld, wvl, foc, image_pt_2d=None,
image_delta=None, value_if_none=np.NaN):
"""Given pre-traced rays and a ref. sphere, return the ray's OPD."""
fod = opt_model['analysis_results']['parax_data'].fod
grid, upd_grid = grid_pkg
ref_sphere, cr_pkg = trace.setup_pupil_coords(opt_model, fld, wvl, foc,
image_pt=image_pt_2d,
image_delta=image_delta)
central_wvl = opt_model.optical_spec.spectral_region.central_wvl
convert_to_opd = 1/opt_model.nm_to_sys_units(central_wvl)
def rfc(gij, uij):
pupil_x, pupil_y, ray_pkg = gij
if ray_pkg is not None:
opdelta = waveabr.wave_abr_calc(fod, fld, wvl, foc,
ray_pkg, cr_pkg, uij, ref_sphere)
opd = convert_to_opd*opdelta
return pupil_x, pupil_y, opd
else:
return pupil_x, pupil_y, value_if_none
refocused_grid = [[rfc(jg, ju) for jg, ju in zip(ig, iu)]
for ig, iu in zip(grid, upd_grid)]
return np.array(refocused_grid)
# --- PSF calculation
def psf_sampling(n=None, n_pupil=None, n_airy=None):
"""Given 2 of 3 parameters, calculate the third.
Args:
n: The total width of the sampling grid
n_pupil: The sampling across the pupil
n_airy: The sampling across the central peak of the Airy disk
Returns: (n, n_pupil, n_airy)
"""
npa = n, n_pupil, n_airy
i = npa.index(None)
if i == 0:
n = round((n_pupil * n_airy)/2.44)
elif i == 1:
n_pupil = round(2.44*n/n_airy)
elif i == 2:
n_airy = round(2.44*n/n_pupil)
return n, n_pupil, n_airy
def calc_psf_scaling(pupil_grid, ndim, maxdim):
"""Calculate the input and output grid spacings.
Args:
pupil_grid: A RayGrid instance
ndim: The sampling across the wavefront
maxdim: The total width of the sampling grid
Returns:
delta_x: The linear grid spacing on the entrance pupil
delta_xp: The linear grid spacing on the image plane
"""
opt_model = pupil_grid.opt_model
fod = opt_model['analysis_results']['parax_data'].fod
wl = opt_model.nm_to_sys_units(pupil_grid.wvl)
fill_factor = ndim/maxdim
max_D = 2 * fod.enp_radius / fill_factor
delta_x = max_D / maxdim
C = wl/fod.exp_radius
delta_theta = (fill_factor * C) / 2
ref_sphere_radius = pupil_grid.fld.ref_sphere[2]
delta_xp = delta_theta * ref_sphere_radius
return delta_x, delta_xp
def calc_psf(wavefront, ndim, maxdim):
"""Calculate the point spread function of wavefront W.
Args:
wavefront: ndim x ndim Numpy array of wavefront errors. No data
condition is indicated by nan
ndim: The sampling across the wavefront
maxdim: The total width of the sampling grid
Returns: AP, the PSF of the input wavefront
"""
maxdim_by_2 = maxdim//2
W = np.zeros([maxdim, maxdim])
nd2 = ndim//2
W[maxdim_by_2-(nd2-1):maxdim_by_2+(nd2+1),
maxdim_by_2-(nd2-1):maxdim_by_2+(nd2+1)] = np.nan_to_num(wavefront)
phase = np.exp(1j*2*np.pi*W)
for i in range(len(phase)):
for j in range(len(phase)):
if phase[i][j] == 1:
phase[i][j] = 0
AP = abs(fftshift(fft2(fftshift(phase))))**2
AP_max = np.nanmax(AP)
AP = AP/AP_max
return AP
def update_psf_data(pupil_grid, build='rebuild'):
pupil_grid.update_data(build=build)
ndim = pupil_grid.num_rays
maxdim = pupil_grid.maxdim
AP = calc_psf(pupil_grid.grid[2], ndim, maxdim)
return AP
| [
"mjhoptics@gmail.com"
] | mjhoptics@gmail.com |
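The `psf_sampling` helper in the module above encodes the relation n ≈ n_pupil · n_airy / 2.44 between total grid width, pupil sampling, and sampling across the Airy peak. A standalone sketch of the same arithmetic (re-implemented here so it runs without the rest of the module; the sample numbers are illustrative, not taken from the source):

```python
def psf_sampling(n=None, n_pupil=None, n_airy=None):
    """Given two of the three grid parameters, derive the third
    from the relation n ~= n_pupil * n_airy / 2.44."""
    npa = (n, n_pupil, n_airy)
    i = npa.index(None)          # which parameter is missing
    if i == 0:
        n = round(n_pupil * n_airy / 2.44)
    elif i == 1:
        n_pupil = round(2.44 * n / n_airy)
    elif i == 2:
        n_airy = round(2.44 * n / n_pupil)
    return n, n_pupil, n_airy

print(psf_sampling(n_pupil=32, n_airy=6))  # -> (79, 32, 6)
print(psf_sampling(n=79, n_pupil=32))      # -> (79, 32, 6)
```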
21b5efb5f0bf40c0818e980c8203a97196cd386d | e93b025ce6a5d5f9277df5404fc6ae654409d8fc | /face/management/__init__.py | 4add8514b88b1fcdb6ae2817efc8e799aa492565 | [] | no_license | russelltao/django-facerec | 721797465391f3af653ffa7b000ab1222baff316 | b8c16ffbfbf93564ecdf2feb0b03114d4705fbce | refs/heads/master | 2021-08-17T09:06:44.411703 | 2017-11-21T01:49:48 | 2017-11-21T01:49:48 | 106,383,041 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 115 | py | """
Creates permissions for all installed apps that need permissions.
"""
from __future__ import unicode_literals
| [
"admin@example.com"
] | admin@example.com |
5b58c08e3876a61c430a35961b7173b43b21a8a3 | 53fab060fa262e5d5026e0807d93c75fb81e67b9 | /backup/user_304/ch85_2019_06_06_19_36_38_339098.py | 1a496634416c079b1b60dd94e2c6d2b3b6661571 | [] | no_license | gabriellaec/desoft-analise-exercicios | b77c6999424c5ce7e44086a12589a0ad43d6adca | 01940ab0897aa6005764fc220b900e4d6161d36b | refs/heads/main | 2023-01-31T17:19:42.050628 | 2020-12-16T05:21:31 | 2020-12-16T05:21:31 | 306,735,108 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 253 | py | with open ('macacos-me-mordam.txt', 'r') as arquivo:
conteudo=arquivo.read()
maiusculo=conteudo.upper()
conta_banana=maiusculo.split()
contador=0
for e in conta_banana:
if e == 'BANANA':
contador+=1
print (contador)
| [
"you@example.com"
] | you@example.com |
c35f2ecffbddad9a986f066f6f1b2c9ab404aeda | 0be0bdd8574eda8ec6f0ff340e88bd8677618bb3 | /s01-create-project/meal_options/meal_options/app.py | 01ffc0ddd4b02021af961c76f059fa482d052a55 | [] | no_license | genzj/flask-restful-api-course | 1e97396d6fea50f34922530dc3777393178995c0 | f9ab1bc7da724019dfacc5b94536ec5e8b6afad7 | refs/heads/master | 2023-05-11T06:19:52.307109 | 2022-12-20T20:35:57 | 2022-12-20T20:35:57 | 165,891,710 | 1 | 1 | null | 2023-05-02T00:19:49 | 2019-01-15T17:12:22 | Python | UTF-8 | Python | false | false | 140 | py | from . import create_app
app = create_app()
@app.route("/")
def index():
app.logger.warning("hello world")
return "Hello World!"
| [
"zj0512@gmail.com"
] | zj0512@gmail.com |
902836baed5634be041c938b400c0b355c6b5b0f | 632e8ed762f9f694e8f72d4d65303b5246a11217 | /py/word_count.py | e66f432c740864a91e65bfe0e842bbf04ea07f34 | [] | no_license | yiyusheng/django | 36537bedf7efd2db3e41809c898cdabe2af6c381 | a8a79ef0323cd9234cec83735618940f63dfc2a4 | refs/heads/master | 2022-11-19T11:36:30.868697 | 2022-11-06T04:01:50 | 2022-11-06T04:01:50 | 97,066,542 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,404 | py | # -*- coding: utf-8 -*-
import pymysql,warnings,jieba,sys
import pandas as pd
from datetime import datetime,timedelta
from collections import Counter
# mysql connect
def sqlcon():
conn = pymysql.connect(
host = "localhost",
user = "root",
passwd = "qwer1234",
charset = "utf8",
database = 'prichat'
)
return conn
# get data from table chat_logs for one hour
def get_chats(sc,ts_now,all=False):
cursor = sc.cursor()
if all:
sql = "SELECT content FROM chat_logs"
else:
sql = "SELECT content FROM chat_logs WHERE date_format(time,'%%Y-%%m-%%d %%H:00:00')=%s"
cursor.execute(sql,(ts_now.strftime('%Y-%m-%d %H:%M:%S')))
data = cursor.fetchall()
data = [d[0] for d in data]
return data
# parse these data with jieba
def data_parse(data):
word_list = [list(jieba.cut(d,cut_all=False)) for d in data]
word_list = [i for a in word_list for i in a]
return word_list
# store them to table word_count and word_count_hourly
def word_insert(sc,word_count,ts_now,len_logs):
cursor = sc.cursor()
df = pd.DataFrame(list(word_count.items()),columns=['word','count'])
df['weighted_count'] = df['count']/len_logs
df['weighted_count'] = df['weighted_count'].round(4)
df['time'] = ts_now.strftime('%Y-%m-%d %H:%M:%S')
# for hourly count
sql = "INSERT INTO word_count_hourly(time,word,count,weighted_count) VALUES(%s,%s,%s,%s)"
cursor.executemany(sql,df[['time','word','count','weighted_count']].values.tolist())
cursor.connection.commit()
# for total count
sql = "INSERT INTO word_count(time,word,count) VALUES(%s,%s,%s) ON DUPLICATE KEY UPDATE count=count+%s"
for i in range(len(df)):
cursor.execute(sql,(df['time'][i],df['word'][i],str(df['count'][i]),str(df['count'][i])))
cursor.connection.commit()
if __name__=='__main__':
argv = sys.argv
#ts_now = datetime.strptime(argv[1],'%Y-%m-%dT%H:%M:%S')
if len(argv)==2:
ts_now = datetime.strptime(argv[1],'%Y-%m-%dT%H:%M:%S')
print 'special time:%s' %(ts_now)
else:
# for last hour
ts_now = datetime.utcnow().replace(minute=0,second=0,microsecond=0)-timedelta(hours=1)
sc = sqlcon()
data = get_chats(sc,ts_now)
word_list = data_parse(data)
word_count = Counter(word_list)
word_insert(sc,word_count,ts_now,len(data))
| [
"yiyusheng.hust@gmail.com"
] | yiyusheng.hust@gmail.com |
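The hourly aggregation in word_count.py above boils down to running `collections.Counter` over the jieba-segmented words. A minimal, self-contained sketch of that counting step (the sample words are made up, standing in for one hour of tokenized chat logs):

```python
from collections import Counter

# Stand-in for the flattened word_list produced by data_parse().
words = ["spam", "egg", "spam", "ham", "spam"]
counts = Counter(words)

print(counts["spam"])         # -> 3
print(counts.most_common(1))  # -> [('spam', 3)]
print(counts["missing"])      # -> 0 (absent keys count as zero)
```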
3a4b99682fa6c9d529af4e44480f812cef0d3781 | d2c4934325f5ddd567963e7bd2bdc0673f92bc40 | /tests/artificial/transf_Fisher/trend_ConstantTrend/cycle_5/ar_12/test_artificial_128_Fisher_ConstantTrend_5_12_0.py | 825a6bdaa5c4a8acc661aa02e67ba5736b2b433f | [
"BSD-3-Clause",
"LicenseRef-scancode-unknown-license-reference"
] | permissive | jmabry/pyaf | 797acdd585842474ff4ae1d9db5606877252d9b8 | afbc15a851a2445a7824bf255af612dc429265af | refs/heads/master | 2020-03-20T02:14:12.597970 | 2018-12-17T22:08:11 | 2018-12-17T22:08:11 | 137,104,552 | 0 | 0 | BSD-3-Clause | 2018-12-17T22:08:12 | 2018-06-12T17:15:43 | Python | UTF-8 | Python | false | false | 270 | py | import pyaf.Bench.TS_datasets as tsds
import pyaf.tests.artificial.process_artificial_dataset as art
art.process_dataset(N = 128 , FREQ = 'D', seed = 0, trendtype = "ConstantTrend", cycle_length = 5, transform = "Fisher", sigma = 0.0, exog_count = 0, ar_order = 12); | [
"antoine.carme@laposte.net"
] | antoine.carme@laposte.net |
2b3de044fbd1ca7140aa5bd56bb39b86532524cb | b121b4135f0edf0e39c1ae7343c7df19f56a077f | /prototypes/decorators/Fibonacci_Cache_decorator_Class.py | ddb98fbbb340a1ac1a4c0dd94506ff2aa344a4df | [] | no_license | MPIBGC-TEE/bgc-md | 25379c03d2333481bd385211f49aff6351e5dd05 | 8912a26d1b7e404ed3ebee4d4799a3518f507756 | refs/heads/master | 2021-05-08T19:07:46.930394 | 2020-10-21T12:08:53 | 2020-10-21T12:08:53 | 119,548,100 | 3 | 3 | null | null | null | null | UTF-8 | Python | false | false | 389 | py | class Memoize:
def __init__(self,fn):
self.fn=fn
self.memo={}
def __call__(self,*args):
if args not in self.memo:
self.memo[args] = self.fn(*args)
return self.memo[args]
@Memoize
def fib(n):
if n == 0:
return 0
elif n == 1:
return 1
else:
return fib(n-1) + fib(n-2)
print(fib(36))
| [
"markus.mueller.1.g@googlemail.com"
] | markus.mueller.1.g@googlemail.com |
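The hand-rolled `Memoize` class above is essentially what the standard library's `functools.lru_cache` provides; an equivalent sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Same recurrence as the decorated fib above; the cache turns
    # the exponential recursion into a linear number of calls.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(36))  # -> 14930352
```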
a039f6918e1185e2714bffca130845312af93bae | 163bbb4e0920dedd5941e3edfb2d8706ba75627d | /Code/CodeRecords/2716/60712/321105.py | e2f159e0ba7662e623164f51e9759d7ba83ab298 | [] | no_license | AdamZhouSE/pythonHomework | a25c120b03a158d60aaa9fdc5fb203b1bb377a19 | ffc5606817a666aa6241cfab27364326f5c066ff | refs/heads/master | 2022-11-24T08:05:22.122011 | 2020-07-28T16:21:24 | 2020-07-28T16:21:24 | 259,576,640 | 2 | 1 | null | null | null | null | UTF-8 | Python | false | false | 144 | py | l=[]
for i in range(3):
l.append(input())
if l==['[', ' "//",', ' "/ "']:
print(3)
elif l==[[0, 0]]:
print(0)
else:
print(l) | [
"1069583789@qq.com"
] | 1069583789@qq.com |
8d1385879e292e12bcc56f1d11b3835812fd8220 | cbbce6a21a57c6a638fc0144f2e4dd9583adb30f | /Estrutura_De_Decisao/estruturadedecisao-13.py | f65664112836ff48a86a6bf5972fb42292eba44c | [] | no_license | gaijinctfx/PythonExercicios | 5d0e0decfe8122b5d07713b33aea66b736700554 | 1e0cde4f27b14ac192e37da210bad3f7023437c7 | refs/heads/master | 2022-05-11T21:57:17.184820 | 2017-06-08T03:21:49 | 2017-06-08T03:21:49 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,603 | py | # author: ZumbiPy
# E-mail: zumbipy@gmail.com
# Exercicio do site http://wiki.python.org.br/EstruturaDeDecisao
# Para execurta o programa on line entra no link a baixo:
# https://repl.it/I2Ee/0
"""
13 - Faça um Programa que leia um número e exiba o dia correspondente
da semana. (1-Domingo, 2- Segunda, etc.), se digitar outro valor
deve aparecer valor inválido.
"""
# ================================================================================
# Variáveis do programa
# ================================================================================
# Entrada de Dados.
dia_da_semana = int(input("Digite Um numero de 1 ao 7: "))
# ================================================================================
# Logica do programa
# ================================================================================
# Comparações.
if dia_da_semana == 1:
print("Numero {} correspondente á Domingo".format(dia_da_semana))
elif dia_da_semana == 2:
print("Numero {} correspondente á Segunda".format(dia_da_semana))
elif dia_da_semana == 3:
print("Numero {} correspondente á Terça".format(dia_da_semana))
elif dia_da_semana == 4:
print("Numero {} correspondente á Quarta".format(dia_da_semana))
elif dia_da_semana == 5:
print("Numero {} correspondente á Quinta".format(dia_da_semana))
elif dia_da_semana == 6:
print("Numero {} correspondente á Sexta".format(dia_da_semana))
elif dia_da_semana == 7:
print("Numero {} correspondente á Sabado".format(dia_da_semana))
else:
print("Valor Invalido.")
| [
"zumbipy@gmail.com"
] | zumbipy@gmail.com |
41e5ce490aacef1706d7e4fc24123d2a67a90fa1 | 99259216f11b15ec60446b4a141b3592a35560ce | /wex-python-api/ibmwex/models/nlp_document.py | bf8459f2bec05fb8a89b69ac55ae507867f2e20b | [] | no_license | adam725417/Walsin | 296ba868f0837077abff93e4f236c6ee50917c06 | 7fbefb9bb5064dabccf4a7e2bf49d2a43e0f66e9 | refs/heads/master | 2020-04-12T14:14:07.607675 | 2019-03-05T01:54:03 | 2019-03-05T01:54:03 | 162,546,202 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,784 | py | # coding: utf-8
"""
WEX REST APIs
Authentication methods - Basic Auth - JSON Web Token - [POST /api/v1/usermgmt/login](#!/User/signinUser) - [POST /api/v1/usermgmt/logout](#!/User/doLogout) - Python client sample [Download](/docs/wex-python-api.zip)
OpenAPI spec version: 12.0.2.417
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from pprint import pformat
from six import iteritems
import re
class NLPDocument(object):
"""
NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
"""
"""
Attributes:
swagger_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
swagger_types = {
'fields': 'object',
'metadata': 'object'
}
attribute_map = {
'fields': 'fields',
'metadata': 'metadata'
}
def __init__(self, fields=None, metadata=None):
"""
NLPDocument - a model defined in Swagger
"""
self._fields = None
self._metadata = None
self.fields = fields
if metadata is not None:
self.metadata = metadata
@property
def fields(self):
"""
Gets the fields of this NLPDocument.
:return: The fields of this NLPDocument.
:rtype: object
"""
return self._fields
@fields.setter
def fields(self, fields):
"""
Sets the fields of this NLPDocument.
:param fields: The fields of this NLPDocument.
:type: object
"""
if fields is None:
raise ValueError("Invalid value for `fields`, must not be `None`")
self._fields = fields
@property
def metadata(self):
"""
Gets the metadata of this NLPDocument.
:return: The metadata of this NLPDocument.
:rtype: object
"""
return self._metadata
@metadata.setter
def metadata(self, metadata):
"""
Sets the metadata of this NLPDocument.
:param metadata: The metadata of this NLPDocument.
:type: object
"""
self._metadata = metadata
def to_dict(self):
"""
Returns the model properties as a dict
"""
result = {}
for attr, _ in iteritems(self.swagger_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
result[attr] = value
return result
def to_str(self):
"""
Returns the string representation of the model
"""
return pformat(self.to_dict())
def __repr__(self):
"""
For `print` and `pprint`
"""
return self.to_str()
def __eq__(self, other):
"""
Returns true if both objects are equal
"""
if not isinstance(other, NLPDocument):
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
"""
Returns true if both objects are not equal
"""
return not self == other
| [
"adamtp_chen@walsin.com"
] | adamtp_chen@walsin.com |
0979f9239fd35acdc47af14483eb5da3f0e0521b | 6ea94d75b6e48952c1df2bda719a886f638ed479 | /devel/lib/python2.7/dist-packages/object_recognition_core/__init__.py | 69401447b8ce90354d2b13050999f5c8523ae0b6 | [] | no_license | lievech/ork_ws | 634e26355503c69b76df7fca41402ea43c228f49 | e828846b962974a038be08a5ce39601b692d4045 | refs/heads/master | 2020-08-02T20:19:43.109158 | 2019-09-28T11:56:56 | 2019-09-28T11:56:56 | 211,493,180 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,018 | py | # -*- coding: utf-8 -*-
# generated from catkin/cmake/template/__init__.py.in
# keep symbol table as clean as possible by deleting all unnecessary symbols
from os import path as os_path
from sys import path as sys_path
from pkgutil import extend_path
__extended_path = "/home/lhn/ork_ws/src/ork_core/python".split(";")
for p in reversed(__extended_path):
sys_path.insert(0, p)
del p
del sys_path
__path__ = extend_path(__path__, __name__)
del extend_path
__execfiles = []
for p in __extended_path:
src_init_file = os_path.join(p, __name__ + '.py')
if os_path.isfile(src_init_file):
__execfiles.append(src_init_file)
else:
src_init_file = os_path.join(p, __name__, '__init__.py')
if os_path.isfile(src_init_file):
__execfiles.append(src_init_file)
del src_init_file
del p
del os_path
del __extended_path
for __execfile in __execfiles:
with open(__execfile, 'r') as __fh:
exec(__fh.read())
del __fh
del __execfile
del __execfiles
| [
"2328187416@qq.com"
] | 2328187416@qq.com |
18c2333a7d73bf5e2b005e41fdca714b2f66cfd9 | cc578cec7c485e2c1060fd075ccc08eb18124345 | /cs15211/ImplementstrStr.py | 59fcc2663c6bd51efa5a098006a1f2466283bea6 | [
"Apache-2.0"
] | permissive | JulyKikuAkita/PythonPrac | 18e36bfad934a6112f727b4906a5e4b784182354 | 0ba027d9b8bc7c80bc89ce2da3543ce7a49a403c | refs/heads/master | 2021-01-21T16:49:01.482561 | 2019-02-07T06:15:29 | 2019-02-07T06:15:29 | 91,907,704 | 1 | 1 | Apache-2.0 | 2019-02-07T06:15:30 | 2017-05-20T18:12:53 | Python | UTF-8 | Python | false | false | 8,727 | py | __source__ = 'https://leetcode.com/problems/implement-strstr/'
# https://github.com/kamyu104/LeetCode/blob/master/Python/implement-strstr.py
# Time: O(n + m)
# Space: O(m)
# String - KMP algo
#
# Description: Leetcode # 28. Implement strStr()
#
# Implement strStr().
#
# Returns a pointer to the first occurrence of needle in haystack,
# or null if needle is not part of haystack.
#
# Companies
# Pocket Gems Microsoft Apple Facebook Google
# Related Topics
# Two Pointers String
# Similar Questions
# Shortest Palindrome Repeated Substring Pattern
# Wiki of KMP algorithm:
# http://en.wikipedia.org/wiki/Knuth-Morris-Pratt_algorithm
# Easy explanation: http://jakeboxer.com/blog/2009/12/13/the-knuth-morris-pratt-algorithm-in-my-own-words/
import unittest
class Solution:
# @param haystack, a string
# @param needle, a string
# @return a string or None
def strStr(self, haystack, needle):
if not needle:
return 0
if len(haystack) < len(needle):
return -1
i = self.KMP(haystack, needle)
if i > -1:
return i
else:
return -1
def KMP(self, text, pattern):
prefix = self.getPrefix(pattern)
j = -1
for i in xrange(len(text)):
while j > -1 and pattern[j + 1] != text[i]:
j = prefix[j]
if pattern[j + 1] == text[i]:
j += 1
if j == len(pattern) - 1:
return i - j
return -1
def getPrefix(self, pattern):
prefix = [-1] * len(pattern)
j = -1
for i in xrange(1, len(pattern)):
while j > -1 and pattern[j + 1] != pattern[i]:
j = prefix[j]
if pattern[j + 1] == pattern[i]:
j += 1
prefix[i] = j
return prefix
# Time: (n * m)
# Space: (1)
class Solution2:
# @param haystack, a string
# @param needle, a string
# @return a string or None
def strStr(self, haystack, needle):
        for i in xrange(len(haystack) - len(needle) + 1):
if haystack[i : i + len(needle)] == needle:
return haystack[i:]
return None
class SolutionOther:
# @param haystack, a string
# @param needle, a string
# @return a string or None
def strStrKMP(self, haystack, needle):
lenh, lenn = len(haystack), len(needle)
if lenn == 0:
return haystack
next, p = [-1] * (lenn), -1
for i in range(1, lenn):
print i,p, needle[i] , needle[p + 1], next, lenn
while p >= 0 and needle[i] != needle[p + 1]:
p = next[p]
if needle[i] == needle[p + 1]:
p = p + 1
next[i] = p
p = -1
for i in range(lenh):
print i,p, haystack[i] , next, haystack[i - p:],lenh ,needle[p + 1]
while p >= 0 and haystack[i] != needle[p + 1]:
p = next[p]
if haystack[i] == needle[p + 1]:
p = p + 1
if p + 1 == lenn:
return haystack[i - p:]
return None
def strStr(self, haystack, needle):
j =0
if len(needle) ==0 and len(haystack)==0:
return ""
elif len(haystack)!=0 and len(needle)==0:
return haystack
elif len(haystack)!=0 and len(needle)!=0:
for i in range(len(haystack)):
if (i + len(needle) )> len(haystack):
break
for j in range(len(needle)):
print i, haystack[i+j], j, needle[j]
if haystack[i+j] != needle[j]:
break
else:
print i
return haystack[i::]
return None
class Naive:
# @param haystack, a string
# @param needle, a string
# @return a string or None
def strStr(self, haystack, needle):
        if not needle:
            return 0
        if not haystack:
            return -1
for i in xrange(len(haystack)):
if i + len(needle) > len(haystack):
return -1
m = i
for j in xrange(len(needle)):
if needle[j] == haystack[m]:
if j == len(needle) - 1:
return i
m += 1
else:
break
return -1
t1=SolutionOther()
#print t1.strStr("haystackneedle","needle")
#print t1.strStr("haystackneedle","neekle")
#print t1.strStr("haystackneedle","")
#print t1.strStr("","a") #null
#print t1.strStr("","") # ""
#print t1.strStr("aaa","a") #aaa
#print t1.strStr("mississippi", "issip") #issippi
#print t1.strStr("a","")
#print t1.strStr("mississippi", "a")
#p = [-1] * (len("needle"))
#print p
#print t1.strStrKMP("haystackneedle","needle")
#print t1.strStrKMP("aaa","a") #aaa
#print t1.strStrKMP("mississippi", "a")
#print t1.strStrKMP("mississippi", "issip")
class TestMethods(unittest.TestCase):
def test_Local(self):
self.assertEqual(1, 1)
print Solution2().strStr("a", "")
print Solution2().strStr("abababcdab", "ababcdx")
print Naive().strStr("abababcdab", "ababcdx")
if __name__ == '__main__':
unittest.main()
Java = '''
# Thought:
# http://www.programcreek.com/2012/12/leetcode-implement-strstr-java/
# 3ms 99.64%
class Solution {
public int strStr(String haystack, String needle) {
return haystack.indexOf(needle);
}
}
# 7ms 44.59%
class Solution {
public int strStr(String haystack, String needle) {
for (int i = 0; ; i++) {
for (int j = 0; ; j++) {
if (j == needle.length()) return i;
if (i + j == haystack.length()) return -1;
if (needle.charAt(j) != haystack.charAt(i + j)) break;
}
}
}
}
# 3ms 99.64%
class Solution {
public int strStr(String haystack, String needle) {
if (needle.length() == 0) {
return 0;
}
for (int i = 0; i <= haystack.length() - needle.length(); i++) {
if (haystack.charAt(i) == needle.charAt(0) && valid(haystack, i + 1, needle)) {
return i;
}
}
return -1;
}
private boolean valid(String haystack, int start, String needle) {
for (int i = 1; i < needle.length(); i++) {
if (needle.charAt(i) != haystack.charAt(start++)) {
return false;
}
}
return true;
}
}
3 KMP
# 7ms 44.59%
class Solution {
public int strStr(String haystack, String needle) {
if(haystack == null || needle == null) return -1;
int hLen = haystack.length();
int nLen = needle.length();
if(hLen < nLen) return -1;
if(nLen == 0) return 0;
int[] next = next(needle);
int i = 0, j = 0;
while(i < hLen && j < nLen) {
if(j == -1 || haystack.charAt(i) == needle.charAt(j)) {
i++; j++;
} else {
j = next[j];
}
if(j == nLen) return i - nLen;
}
return -1;
}
int[] next(String needle) {
int len = needle.length();
char[] cs = needle.toCharArray();
int i = -1, j = 0;
int[] next = new int[len];
next[0] = -1;
while(j < len-1) {
if(i == -1 || cs[i] == cs[j]) {
i++; j++;
next[j] = i;
} else i = next[i];
}
return next;
}
}
# KMP
# 68ms 12,21%
class Solution {
public int strStr(String haystack, String needle) {
if(haystack == null || needle == null) return -1;
int hLen = haystack.length();
int nLen = needle.length();
if(hLen < nLen) return -1;
if(nLen == 0) return 0;
int[] lps = getLPS(needle);
for( int n : lps) {
System.out.print(n + " ");
}
int i = 0, j = 0; //note j != -1
while ( i < haystack.length()) {
while (j >= 0 && haystack.charAt(i) != needle.charAt(j)) {
j = lps[j];
}
i++;
j++;
if (j == needle.length()) {
return i - j;
//j = lps[j];
}
}
return -1;
}
int[] getLPS(String needle) {
int[] lps = new int[needle.length() + 1];
int j = -1, i = 0;
lps[0] = j;
while (i < needle.length()) {
while ( j >= 0 && needle.charAt(i) != needle.charAt(j)) {
j = lps[j];
}
i++;
j++;
lps[i] = j;
}
return lps;
}
}
''' | [
"b92701105@gmail.com"
] | b92701105@gmail.com |
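The KMP variants in the file above all hinge on a precomputed failure (prefix) table. A standalone sketch using the common 0-based convention — note the Python `getPrefix` above uses a -1-initialized variant, so its entries are each one less than these:

```python
def prefix_function(p):
    # pi[i] = length of the longest proper prefix of p[:i+1]
    # that is also a suffix of it (the classic KMP failure table).
    pi = [0] * len(p)
    k = 0
    for i in range(1, len(p)):
        while k and p[i] != p[k]:
            k = pi[k - 1]          # fall back to the next shorter border
        if p[i] == p[k]:
            k += 1
        pi[i] = k
    return pi

print(prefix_function("ababcdx"))  # -> [0, 0, 1, 2, 0, 0, 0]
```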
3e0e1ee40d064048f3b637b4d1b098ac730d61bc | 3e660e22783e62f19e9b41d28e843158df5bd6ef | /script.me.syncsmashingfromgithub/smashingfavourites/scripts/testscripts/refreshpvrinfo/disableiptvtest1.py | 0b6379f6ab0f50cde1e4ee88d0ded24ceaa8d61e | [] | no_license | monthou66/repository.smashingfavourites | a9603906236000d2424d2283b50130c7a6103966 | f712e2e4715a286ff6bff304ca30bf3ddfaa112f | refs/heads/master | 2020-04-09T12:14:34.470077 | 2018-12-04T10:56:45 | 2018-12-04T10:56:45 | 160,341,386 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,179 | py | # -*- coding: utf-8 -*-
import xbmc
def printstar():
print "***************************************************************************************"
print "****************************************************************************************"
printstar()
print "test1.py has just been started"
printstar()
xbmc.executebuiltin('Notification(test1.py, started)')
if xbmc.getCondVisibility('System.HasAddon(pvr.iptvsimple)'):
xbmc.executeJSONRPC('{"jsonrpc":"2.0","method":"Addons.SetAddonEnabled","id":8,"params":{"addonid":"pvr.iptvsimple","enabled":false}}')
xbmc.sleep(300)
xbmc.executebuiltin("ActivateWindow(10021)")
xbmc.executebuiltin( "XBMC.Action(Right)" )
xbmc.executebuiltin( "XBMC.Action(Down)" )
xbmc.executebuiltin( "XBMC.Action(Down)" )
xbmc.executebuiltin( "XBMC.Action(Down)" )
xbmc.executebuiltin( "XBMC.Action(Down)" )
xbmc.executebuiltin( "XBMC.Action(Down)" )
xbmc.executebuiltin( "XBMC.Action(Down)" )
xbmc.executebuiltin( "XBMC.Action(Down)" )
xbmc.executebuiltin( "XBMC.Action(Select)" )
xbmc.executebuiltin('SendClick(11)')
# xbmc.sleep(300)
# xbmc.executebuiltin( "XBMC.Action(Back)" )
xbmc.executebuiltin("ActivateWindow(Home)")
| [
"davemullane@gmail.com"
] | davemullane@gmail.com |
67eac9c88b11d3565f79cf41ffc9d127d3f4b194 | df9187f1c78cf61075fa23c27432adef0cce285a | /iteratorss/generators_example1.py | 29c46365c6fa1acc2622cad63f8b2dd8c505b5c8 | [] | no_license | santoshr1016/techmonthppt | 6a17c6c7fc97ef7faad9264ed4f89d455e4db100 | cc5e6bce71b3495a6abd2064f74ac8be8c973820 | refs/heads/master | 2020-03-07T00:17:14.395984 | 2018-03-28T14:56:10 | 2018-03-28T14:56:10 | 127,153,491 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 341 | py | def gen_function():
n = 1
print('This is printed first')
yield n
n += 1
print('This is printed second')
yield n
n += 1
print('This is printed at last')
yield n
gen = gen_function()
print(next(gen))
print(next(gen))
print(next(gen))
# using the for loop
# for item in gen_function():
# print(item)
| [
"santy1016@gmail.com"
] | santy1016@gmail.com |
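The three-yield generator above shows that a generator's body only runs between `next()` calls; the same mechanics in a compact, loop-based form:

```python
def countdown(n):
    while n > 0:
        yield n   # execution pauses here until the next next() call
        n -= 1

g = countdown(3)
print(next(g))  # -> 3 (runs the body up to the first yield)
print(list(g))  # -> [2, 1] (drains the remaining values)
```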
43aba475c1cfef902574ad46c9157de811b4527b | deba1fb8df5fa58563b172546ee06d3c69fb59a8 | /shop_asistant_dj/shop_asistant_dj/apps/purchase/migrations/0005_auto_20200814_1432.py | f954ccf8881328d75cde6ce92c8eeded2d69462d | [] | no_license | korid24/shop_list_via_django | 52d71873fe546ab26a253438ec349c5034211122 | 9426f9305697754c10b402ac7b6e858974d14f96 | refs/heads/master | 2023-01-01T09:30:05.339463 | 2020-10-22T10:03:26 | 2020-10-22T10:03:26 | 289,884,426 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 698 | py | # Generated by Django 3.1 on 2020-08-14 07:32
from django.db import migrations, models
import django.utils.timezone
class Migration(migrations.Migration):
dependencies = [
('purchase', '0004_auto_20200814_1203'),
]
operations = [
migrations.AlterField(
model_name='purchase',
name='creation_time',
field=models.DateTimeField(default=django.utils.timezone.now, verbose_name='Creation time'),
),
migrations.AlterField(
model_name='purchaseslist',
name='creation_time',
field=models.DateTimeField(default=django.utils.timezone.now, verbose_name='Creation time'),
),
]
| [
"korid24.dev@gmail.com"
] | korid24.dev@gmail.com |
ae4ddbf46e50fdf217fcb2813690524c927fc0a1 | 89b45e528f3d495f1dd6f5bcdd1a38ff96870e25 | /ProgrammingInPython3/exercise_6_2.py | 2266f779130cb1b5b8a2988643424ad35f97f458 | [] | no_license | imatyukin/python | 2ec6e712d4d988335fc815c7f8da049968cc1161 | 58e72e43c835fa96fb2e8e800fe1a370c7328a39 | refs/heads/master | 2023-07-21T13:00:31.433336 | 2022-08-24T13:34:32 | 2022-08-24T13:34:32 | 98,356,174 | 2 | 0 | null | 2023-07-16T02:31:48 | 2017-07-25T22:45:29 | Python | UTF-8 | Python | false | false | 17,716 | py | #!/usr/bin/env python3
# 2. Modify the Image.py class to provide a resize(width, height) method. If the
# new width or height is smaller than the current value, any colors outside
# the new boundaries must be deleted. If either width or height is None then
# use the existing width or height. At the end, make sure you regenerate
# the self.__colors set. Return a Boolean to indicate whether a change
# was made or not. The method can be implemented in fewer than 20 lines
# (fewer than 35 including a docstring with a simple doctest). A solution is
# provided in Image_ans.py.
"""
This module provides the Image class which holds (x, y, color) triples
and a background color to provide a kind of sparse-array representation of
an image. A method to export the image in XPM format is also provided.
>>> import os
>>> import tempfile
>>> red = "#FF0000"
>>> blue = "#0000FF"
>>> img = os.path.join(tempfile.gettempdir(), "test.img")
>>> xpm = os.path.join(tempfile.gettempdir(), "test.xpm")
>>> image = Image(10, 8, img)
>>> for x, y in ((0, 0), (0, 7), (1, 0), (1, 1), (1, 6), (1, 7), (2, 1),
... (2, 2), (2, 5), (2, 6), (2, 7), (3, 2), (3, 3), (3, 4),
... (3, 5), (3, 6), (4, 3), (4, 4), (4, 5), (5, 3), (5, 4),
... (5, 5), (6, 2), (6, 3), (6, 4), (6, 5), (6, 6), (7, 1),
... (7, 2), (7, 5), (7, 6), (7, 7), (8, 0), (8, 1), (8, 6),
... (8, 7), (9, 0), (9, 7)):
... image[x, y] = blue
>>> for x, y in ((3, 1), (4, 0), (4, 1), (4, 2), (5, 0), (5, 1), (5, 2),
... (6, 1)):
... image[(x, y)] = red
>>> print(image.width, image.height, len(image.colors), image.background)
10 8 3 #FFFFFF
>>> border_color = "#FF0000" # red
>>> square_color = "#0000FF" # blue
>>> width, height = 240, 60
>>> midx, midy = width // 2, height // 2
>>> image = Image(width, height, img, "#F0F0F0")
>>> for x in range(width):
... for y in range(height):
... if x < 5 or x >= width - 5 or y < 5 or y >= height -5:
... image[x, y] = border_color
... elif midx - 20 < x < midx + 20 and midy - 20 < y < midy + 20:
... image[x, y] = square_color
>>> print(image.width, image.height, len(image.colors), image.background)
240 60 3 #F0F0F0
>>> image.save()
>>> newimage = Image(1, 1, img)
>>> newimage.load()
>>> print(newimage.width, newimage.height, len(newimage.colors), newimage.background)
240 60 3 #F0F0F0
>>> image.export(xpm)
>>> image.thing
Traceback (most recent call last):
...
AttributeError: 'Image' object has no attribute 'thing'
>>> for name in (img, xpm):
... try:
... os.remove(name)
... except EnvironmentError:
... pass
"""
import os
import pickle
USE_GETATTR = False
# Declarations of the custom exceptions
class ImageError(Exception): pass
# These inherit from ImageError
class CoordinateError(ImageError): pass
class LoadError(ImageError): pass
class SaveError(ImageError): pass
class ExportError(ImageError): pass
class NoFilenameError(ImageError): pass
class Image:
"""
    The Image class stores a single background color plus those pixels of the
    image whose color differs from the background. It is implemented with a
    dictionary used as a sparse array: each key is an (x, y) coordinate pair
    and the value is the color at that point
"""
def __init__(self, width, height, filename="",
background="#FFFFFF"):
"""An image represented as HTML-style color values
(color names or hex strings) at (x, y) coordinates with any
unspecified points assumed to be the background
"""
        self.filename = filename  # file name (optional, has a default value)
        # Private attributes
        self.__background = background  # background color (optional, has a default value)
        self.__data = {}  # keys are (x, y) coordinates; values are strings naming a color
        self.__width = width  # width
        self.__height = height  # height
        self.__colors = {self.__background}  # initialized with the background color -
        # it holds every unique color value present in the image
if USE_GETATTR:
def __getattr__(self, name):
"""
>>> image = Image(10, 10)
>>> len(image.colors) == 1
True
>>> image.width == image.height == 10
True
>>> image.thing
Traceback (most recent call last):
...
AttributeError: 'Image' object has no attribute 'thing'
"""
if name == "colors":
return set(self.__colors)
classname = self.__class__.__name__
if name in frozenset({"background", "width", "height"}):
return self.__dict__["_{classname}__{name}".format(
**locals())]
raise AttributeError("'{classname}' object has no "
"attribute '{name}'".format(**locals()))
else:
# Access private attributes via properties
@property
def background(self):
return self.__background
@property
def width(self):
return self.__width
@property
def height(self):
return self.__height
@property
def colors(self):
return set(self.__colors)
def __getitem__(self, coordinate):
"""
Special method __getitem__(self, k)
Usage: y[k]
Returns the k-th item of sequence y, or the value for key k in mapping y
This method returns the color of the pixel at the given coordinates
when the caller uses the item-access operator ([])
Returns the color at the given (x, y) coordinate; this will
be the background color if the color has never been set
"""
assert len(coordinate) == 2, "coordinate should be a 2-tuple"
if (not (0 <= coordinate[0] < self.width) or
not (0 <= coordinate[1] < self.height)):
raise CoordinateError(str(coordinate))
return self.__data.get(tuple(coordinate), self.__background)
def __setitem__(self, coordinate, color):
"""
Special method __setitem__(self, k, v)
Usage: y[k] = v
Sets the k-th item of sequence y, or the value for key k in mapping y
Stores the given color value at the specified coordinates
Sets the color at the given (x, y), coordinate
"""
assert len(coordinate) == 2, "coordinate should be a 2-tuple"
if (not (0 <= coordinate[0] < self.width) or
not (0 <= coordinate[1] < self.height)):
raise CoordinateError(str(coordinate))
if color == self.__background:
self.__data.pop(tuple(coordinate), None)
else:
self.__data[tuple(coordinate)] = color
self.__colors.add(color)
def __delitem__(self, coordinate):
"""
Special method __delitem__(self, k)
Usage: del y[k]
Deletes the k-th item of sequence y, or the item with key k in mapping y
When the color value for the given coordinates is deleted, those coordinates take on the background color
Deletes the color at the given (x, y) coordinate
In effect this makes the coordinate's color the background color.
"""
assert len(coordinate) == 2, "coordinate should be a 2-tuple"
if (not (0 <= coordinate[0] < self.width) or
not (0 <= coordinate[1] < self.height)):
raise CoordinateError(str(coordinate))
self.__data.pop(tuple(coordinate), None)
def save(self, filename=None):
"""
Saves the image to disk
Pickles the data (converts it into a byte sequence or a string)
Saves the current image, or the one specified by filename
If filename is specified the internal filename is set to it.
"""
# Check that a file name is available
# If the Image object was created without a file name and no file name has been set since,
# then an explicit file name must be passed when calling save()
if filename is not None:
self.filename = filename # set the filename attribute
# If no current file name is set, raise an exception
if not self.filename:
raise NoFilenameError()
fh = None
try:
# Build a list holding the object data to be saved,
# including the self.__data dictionary of coordinate-color items
data = [self.width, self.height, self.__background,
self.__data]
# Open the file for writing in binary mode
fh = open(self.filename, "wb")
# Call pickle.dump(), which writes the object data to the file
pickle.dump(data, fh, pickle.HIGHEST_PROTOCOL) # protocol 3 - a compact binary format
except (EnvironmentError, pickle.PicklingError) as err:
raise SaveError(str(err))
finally:
if fh is not None:
fh.close()
def load(self, filename=None):
"""
Loads the image from disk
Loads the current image, or the one specified by filename
If filename is specified the internal filename is set to it.
"""
# Determine the name of the file to load
if filename is not None:
self.filename = filename
if not self.filename:
raise NoFilenameError()
fh = None
try:
# The file must be opened for reading in binary mode
fh = open(self.filename, "rb")
# The read itself is performed by pickle.load()
# The data object is an exact reconstruction of what was saved, i.e. a list containing
# the integer width and height, the background color string, and the coordinate-color dictionary
data = pickle.load(fh)
# Tuple unpacking assigns each element of the data list to the corresponding variable
# The set of unique colors is rebuilt by collecting every color
# stored in the dictionary and then adding the background color
(self.__width, self.__height, self.__background,
self.__data) = data
self.__colors = (set(self.__data.values()) |
{self.__background})
except (EnvironmentError, pickle.UnpicklingError) as err:
raise LoadError(str(err))
finally:
if fh is not None:
fh.close()
def export(self, filename):
"""Exports the image to the specified filename
"""
if filename.lower().endswith(".xpm"):
self.__export_xpm(filename)
else:
raise ExportError("unsupported export format: " +
os.path.splitext(filename)[1])
def __export_xpm(self, filename):
"""Exports the image as an XPM file if less than 8930 colors are
used
"""
name = os.path.splitext(os.path.basename(filename))[0]
count = len(self.__colors)
chars = [chr(x) for x in range(32, 127) if chr(x) != '"']
if count > len(chars):
chars = []
for x in range(32, 127):
if chr(x) == '"':
continue
for y in range(32, 127):
if chr(y) == '"':
continue
chars.append(chr(x) + chr(y))
chars.reverse()
if count > len(chars):
raise ExportError("cannot export XPM: too many colors")
fh = None
try:
fh = open(filename, "w", encoding="ascii")
fh.write("/* XPM */\n")
fh.write("static char *{0}[] = {{\n".format(name))
fh.write("/* columns rows colors chars-per-pixel */\n")
fh.write('"{0.width} {0.height} {1} {2}",\n'.format(
self, count, len(chars[0])))
char_for_colour = {}
for color in self.__colors:
char = chars.pop()
fh.write('"{char} c {color}",\n'.format(**locals()))
char_for_colour[color] = char
fh.write("/* pixels */\n")
for y in range(self.height):
row = []
for x in range(self.width):
color = self.__data.get((x, y), self.__background)
row.append(char_for_colour[color])
fh.write('"{0}",\n'.format("".join(row)))
fh.write("};\n")
except EnvironmentError as err:
raise ExportError(str(err))
finally:
if fh is not None:
fh.close()
def resize(self, width=None, height=None):
"""
The resize(width, height) method
If the new width or height is smaller than the current value, every color
that falls outside the new image bounds is deleted.
If None is passed for the new width or height,
the corresponding dimension is left unchanged.
Resizes to the given dimensions; returns True if changes made
If a dimension is None; keeps the original. Deletes all out of
range points.
>>> image = Image(10, 10)
>>> for x, y in zip(range(10), range(10)):
... image[x, y] = "#00FF00" if x < 5 else "#0000FF"
>>> image.width, image.height, len(image.colors)
(10, 10, 3)
>>> image.resize(5, 5)
True
>>> image.width, image.height, len(image.colors)
(5, 5, 2)
"""
if width is None and height is None:
return False
if width is None:
width = self.width
if height is None:
height = self.height
if width >= self.width and height >= self.height:
self.__width = width
self.__height = height
return True
self.__width = width
self.__height = height
for x, y in list(self.__data.keys()):
if x >= self.width or y >= self.height:
del self.__data[(x, y)]
self.__colors = set(self.__data.values()) | {self.__background}
return True
if __name__ == "__main__":
import doctest
doctest.testmod()
border_color = "#FF0000" # red
square_color = "#0000FF" # blue
width, height = 240, 60
midx, midy = width // 2, height // 2
image = Image(width, height, "square_eye.img")
for x in range(width):
for y in range(height):
if x < 5 or x >= width - 5 or y < 5 or y >= height - 5:
image[x, y] = border_color
elif midx - 20 < x < midx + 20 and midy - 20 < y < midy + 20:
image[x, y] = square_color
image.save()
image.export("square_eye.xpm") | [
"i.matukin@gmail.com"
] | i.matukin@gmail.com |
7d65ca613df3d3dd46ccacae4b6e90189ce005c9 | f3d38d0e1d50234ce5f17948361a50090ea8cddf | /CodeUp/Python 기초 100제/6048번 ; 정수 2개 입력받아 비교하기1.py | 7bc89b85386b03f9c73d7d64141d73bdd23be07a | [] | no_license | bright-night-sky/algorithm_study | 967c512040c183d56c5cd923912a5e8f1c584546 | 8fd46644129e92137a62db657187b9b707d06985 | refs/heads/main | 2023-08-01T10:27:33.857897 | 2021-10-04T14:36:21 | 2021-10-04T14:36:21 | 323,322,211 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 480 | py | # https://codeup.kr/problem.php?id=6048
# Import stdin so that readline can be used.
from sys import stdin
# Read two integers separated by a space.
# Convert each to an int and assign them to variables.
a, b = map(int, stdin.readline().split(' '))
# If a is less than b
if a < b:
# print True
print(True)
# Otherwise, i.e. if a is not less than b
else:
# print False
print(False) | [
"bright_night_sky@naver.com"
] | bright_night_sky@naver.com |
d0b3d534982c31fb50ebe286974255e9b7a45d14 | 7872b02b8f066fa228bbfa2dd6fcfb5a9ee49dc7 | /tests/dump_tests/module_tests/acl_test.py | 105b948a53f44a84fcc38b9e261f2194e140a4fe | [
"LicenseRef-scancode-generic-cla",
"Apache-2.0"
] | permissive | stephenxs/sonic-utilities | 1d6168206140c5b790cfc1a70cea6a8288040cb0 | c2bc150a6a05c97362d540c874deff81fad6f870 | refs/heads/master | 2023-03-16T05:55:37.688189 | 2023-03-06T18:56:51 | 2023-03-06T18:56:51 | 187,989,457 | 0 | 1 | NOASSERTION | 2023-03-09T13:21:43 | 2019-05-22T07:48:38 | Python | UTF-8 | Python | false | false | 6,227 | py | import os
import pytest
from deepdiff import DeepDiff
from dump.helper import create_template_dict, sort_lists, populate_mock
from dump.plugins.acl_table import Acl_Table
from dump.plugins.acl_rule import Acl_Rule
from dump.match_infra import MatchEngine, ConnectionPool
from swsscommon.swsscommon import SonicV2Connector
from utilities_common.constants import DEFAULT_NAMESPACE
# Location for dedicated db's used for UT
module_tests_path = os.path.dirname(__file__)
dump_tests_path = os.path.join(module_tests_path, "../")
tests_path = os.path.join(dump_tests_path, "../")
dump_test_input = os.path.join(tests_path, "dump_input")
port_files_path = os.path.join(dump_test_input, "acl")
# Define the mock files to read from
dedicated_dbs = {}
dedicated_dbs['CONFIG_DB'] = os.path.join(port_files_path, "config_db.json")
dedicated_dbs['COUNTERS_DB'] = os.path.join(port_files_path, "counters_db.json")
dedicated_dbs['ASIC_DB'] = os.path.join(port_files_path, "asic_db.json")
@pytest.fixture(scope="class", autouse=True)
def match_engine():
os.environ["VERBOSE"] = "1"
# Monkey Patch the SonicV2Connector Object
from ...mock_tables import dbconnector
db = SonicV2Connector()
# populate the db with mock data
db_names = list(dedicated_dbs.keys())
try:
populate_mock(db, db_names, dedicated_dbs)
except Exception as e:
assert False, "Mock initialization failed: " + str(e)
# Initialize connection pool
conn_pool = ConnectionPool()
conn_pool.fill(DEFAULT_NAMESPACE, db, db_names)
# Initialize match_engine
match_engine = MatchEngine(conn_pool)
yield match_engine
os.environ["VERBOSE"] = "0"
@pytest.mark.usefixtures("match_engine")
class TestAclTableModule:
def test_basic(self, match_engine):
"""
Scenario: When the basic config is properly applied and propagated
"""
params = {Acl_Table.ARG_NAME: "DATAACL", "namespace": ""}
m_acl_table = Acl_Table(match_engine)
returned = m_acl_table.execute(params)
expect = create_template_dict(dbs=["CONFIG_DB", "ASIC_DB"])
expect["CONFIG_DB"]["keys"].append("ACL_TABLE|DATAACL")
expect["ASIC_DB"]["keys"].append("ASIC_STATE:SAI_OBJECT_TYPE_ACL_TABLE:oid:0x7000000000600")
expect["ASIC_DB"]["keys"].append("ASIC_STATE:SAI_OBJECT_TYPE_ACL_TABLE_GROUP_MEMBER:oid:0xc000000000601")
expect["ASIC_DB"]["keys"].append("ASIC_STATE:SAI_OBJECT_TYPE_ACL_TABLE_GROUP_MEMBER:oid:0xc000000000602")
expect["ASIC_DB"]["keys"].append("ASIC_STATE:SAI_OBJECT_TYPE_ACL_TABLE_GROUP:oid:0xb0000000005f5")
expect["ASIC_DB"]["keys"].append("ASIC_STATE:SAI_OBJECT_TYPE_ACL_TABLE_GROUP:oid:0xb0000000005f7")
ddiff = DeepDiff(sort_lists(returned), sort_lists(expect))
assert not ddiff, ddiff
def test_no_counter_mapping(self, match_engine):
"""
Scenario: When there is no ACL_COUNTER_RULE_MAP mapping for rule
"""
params = {Acl_Table.ARG_NAME: "DATAACL1", "namespace": ""}
m_acl_table = Acl_Table(match_engine)
returned = m_acl_table.execute(params)
expect = create_template_dict(dbs=["CONFIG_DB", "ASIC_DB"])
expect["CONFIG_DB"]["keys"].append("ACL_TABLE|DATAACL1")
ddiff = DeepDiff(sort_lists(returned), sort_lists(expect))
assert not ddiff, ddiff
def test_with_table_type(self, match_engine):
"""
Scenario: When there is ACL_TABLE_TYPE configured for this table
"""
params = {Acl_Table.ARG_NAME: "DATAACL2", "namespace": ""}
m_acl_table = Acl_Table(match_engine)
returned = m_acl_table.execute(params)
expect = create_template_dict(dbs=["CONFIG_DB", "ASIC_DB"])
expect["CONFIG_DB"]["keys"].append("ACL_TABLE|DATAACL2")
expect["CONFIG_DB"]["keys"].append("ACL_TABLE_TYPE|MY_TYPE")
expect["ASIC_DB"]["keys"].append("ASIC_STATE:SAI_OBJECT_TYPE_ACL_TABLE:oid:0x7100000000600")
expect["ASIC_DB"]["keys"].append("ASIC_STATE:SAI_OBJECT_TYPE_ACL_TABLE_GROUP_MEMBER:oid:0xc100000000601")
expect["ASIC_DB"]["keys"].append("ASIC_STATE:SAI_OBJECT_TYPE_ACL_TABLE_GROUP_MEMBER:oid:0xc100000000602")
expect["ASIC_DB"]["keys"].append("ASIC_STATE:SAI_OBJECT_TYPE_ACL_TABLE_GROUP:oid:0xb0000000005f5")
expect["ASIC_DB"]["keys"].append("ASIC_STATE:SAI_OBJECT_TYPE_ACL_TABLE_GROUP:oid:0xb0000000005f7")
ddiff = DeepDiff(sort_lists(returned), sort_lists(expect))
assert not ddiff, ddiff
@pytest.mark.usefixtures("match_engine")
class TestAclRuleModule:
def test_basic(self, match_engine):
"""
Scenario: When the config is properly applied and propagated
"""
params = {Acl_Rule.ARG_NAME: "DATAACL|R0", "namespace": ""}
m_acl_rule = Acl_Rule(match_engine)
returned = m_acl_rule.execute(params)
expect = create_template_dict(dbs=["CONFIG_DB", "ASIC_DB"])
expect["CONFIG_DB"]["keys"].append("ACL_RULE|DATAACL|R0")
expect["ASIC_DB"]["keys"].append("ASIC_STATE:SAI_OBJECT_TYPE_ACL_COUNTER:oid:0x9000000000606")
expect["ASIC_DB"]["keys"].append("ASIC_STATE:SAI_OBJECT_TYPE_ACL_ENTRY:oid:0x8000000000609")
ddiff = DeepDiff(sort_lists(returned), sort_lists(expect))
assert not ddiff, ddiff
def test_with_ranges(self, match_engine):
"""
Scenario: When ACL rule has range configuration
"""
params = {Acl_Rule.ARG_NAME: "DATAACL2|R0", "namespace": ""}
m_acl_rule = Acl_Rule(match_engine)
returned = m_acl_rule.execute(params)
expect = create_template_dict(dbs=["CONFIG_DB", "ASIC_DB"])
expect["CONFIG_DB"]["keys"].append("ACL_RULE|DATAACL2|R0")
expect["ASIC_DB"]["keys"].append("ASIC_STATE:SAI_OBJECT_TYPE_ACL_COUNTER:oid:0x9100000000606")
expect["ASIC_DB"]["keys"].append("ASIC_STATE:SAI_OBJECT_TYPE_ACL_ENTRY:oid:0x8100000000609")
expect["ASIC_DB"]["keys"].append("ASIC_STATE:SAI_OBJECT_TYPE_ACL_RANGE:oid:0xa100000000607")
expect["ASIC_DB"]["keys"].append("ASIC_STATE:SAI_OBJECT_TYPE_ACL_RANGE:oid:0xa100000000608")
ddiff = DeepDiff(sort_lists(returned), sort_lists(expect))
assert not ddiff, ddiff
| [
"noreply@github.com"
] | stephenxs.noreply@github.com |
a6df023df1316eb45efd17d0843f1cfed8d86f28 | ca7aa979e7059467e158830b76673f5b77a0f5a3 | /Python_codes/p02412/s889541613.py | 889b2dbcdc586cb5aac3f12cd5a26d0b3e20daca | [] | no_license | Aasthaengg/IBMdataset | 7abb6cbcc4fb03ef5ca68ac64ba460c4a64f8901 | f33f1c5c3b16d0ea8d1f5a7d479ad288bb3f48d8 | refs/heads/main | 2023-04-22T10:22:44.763102 | 2021-05-13T17:27:22 | 2021-05-13T17:27:22 | 367,112,348 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 292 | py | while True:
i = input().split()
n, x = map(int, i)
if n == 0 and x == 0:
break
count = 0
for a in range(1, n+1):
for b in range(a+1, n+1):
for c in range(b+1, n+1):
if a+b+c == x:
count += 1
print(count) | [
"66529651+Aastha2104@users.noreply.github.com"
] | 66529651+Aastha2104@users.noreply.github.com |
20cb167f243d0a2395152f20be419b70fa1a0efd | 8fb16223e667e6bf35e3131ba6ed6bc5e0862fd1 | /src/util/constant.py | eeb5d72b7be93835aeb592492eaeff7f46afdbff | [] | no_license | ydup/robot-personal | 22f07738cde2d84d02d97255aa7c0c2d38537eaf | c7b16091bfbe280d1d92a38a46f7d5bbad55e5db | refs/heads/master | 2020-12-26T16:41:08.208720 | 2020-01-31T17:04:54 | 2020-01-31T17:04:54 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 4,915 | py | import sys
import os
import random
curPath = os.path.abspath(os.path.dirname(__file__))
rootPath = os.path.split(curPath)[0]
BASE_PATH = os.path.split(rootPath)[0]
sys.path.append(BASE_PATH)
#### Redis Key Begin
# Each city keeps its own key as a set; subscribed users are stored in that set
USE_REDIS = False
# All current epidemic data; type: list
STATE_NCOV_INFO = 'state_ncov_info'
# Set of all areas with epidemic cases
ALL_AREA_KEY = 'all_area'
# Flag marking that the data has been updated
SHOULD_UPDATE = 'should_update'
# Data updates that need to be pushed
UPDATE_CITY = 'update_city'
# Set of cities that currently have subscriptions; type: set
ORDER_KEY = 'order_area'
# IDs of the group chats the user follows
USER_FOCUS_GROUP = 'user_focus_group'
# Names of the group chats the user follows
USER_FOCUS_GROUP_NAME = 'user_focus_group_name'
#### Redis Key End
### Reg Pattern Begin
UN_REGIST_PATTERN = '^取关|取消(关注)?.+'
UN_REGIST_PATTERN2 = '^取关|取消(关注)?'
### REG PAttern End
BASE_DIR = os.path.join(BASE_PATH, 'resource')
DATA_DIR = os.path.join(BASE_DIR, 'data')
# for localhost redis
REDIS_HOST = '127.0.0.1'
## for docker redis
REDIS_HOST_DOCKER = 'redis'
LOGGING_FORMAT = '%(asctime)s %(filename)s[line:%(lineno)d] %(levelname)s %(message)s'
AREA_TAIL = '(自治+)|省|市|县|区|镇'
FIRST_NCOV_INFO = '{}目前有确诊病例{}例,死亡病例{}例,治愈病例{}例。'
FIRST_NCOV_INFO2 = '{}目前有确诊病例{}例,死亡病例{}例,治愈病例{}例。'
INFO1 = '\n向所有奋斗在抗击疫情一线的工作人员、志愿者致敬!'
INFO2 = '\nfeiyan.help,病毒无情,但人间有爱。'
INFO3 = '\n疫情严峻,请您尽量减少外出,避免出入公共场所'
INFO4 = '\n为了保证您能持续最新的疫情消息,根据WX的规则,建议您偶尔回复我一下~'
INFO5 = '\n全部数据来源于腾讯实时疫情追踪平台:https://news.qq.com//zt2020/page/feiyan.htm'
INFO6 = '\n我们是公益组织wuhan.support,网址 https://feiyan.help'
INFO7 = '\n这里是面向疫区内外民众和医疗机构的多维度信息整合平台,https://feiyan.help'
INFO8 = '\nhttps://feiyan.help,支持武汉,我们在一起。'
INFO9 = '\n开源地址:https://github.com/wuhan-support,支持武汉,我们在一起。'
INFO10 = '\n查看更多信息可以戳这里,https://feiyan.help。'
INFO11 = '\n这是一个为了避免微信阻塞消息的随机小尾巴...'
INFO12 = '\n众志成城,抵御疫情,武汉加油!'
INFO13 = '\nhttps://feiyan.help,筑牢抵御疫情蔓延的一道屏障'
INFO_TAILS = [INFO1, INFO2, INFO3, INFO4, INFO5, INFO6, INFO7, INFO8, INFO9, INFO10, INFO11, INFO12, INFO13]
UPDATE_NCOV_INFO = '{}有数据更新,新增确诊病例{}例,目前共有确诊病例{}例,死亡病例{}例,治愈病例{}例。'
UPDATE_NCOV_INFO_ALL = '{}有数据更新,新增确诊病例{}例,疑似病例{}例,目前共有确诊病例{}例,疑似病例{}例,死亡病例{}例,治愈病例{}例。'
NO_NCOV_INFO = '{}暂无疫情信息,请检查地区名称是否正确。'
INFO_TAIL = "若{}等地区数据有更新,我会在第一时间通知您!您也可以通过发送 '取消+地区名'取消关注该地区,比如'取消{}','取消全部'。"
INFO_TAIL_ALL = "若全国的数据有更新,我会在第一时间通知您!您也可以通过发送'取消全部'取消对全部数据的关注。"
FOCUS_TAIL = "如果该群转发的新闻、分享存在谣言,将会自动发送辟谣链接!您也可以通过发送'停止辟谣+群名'取消对该群的谣言检查。"
CHAOYANG_INFO = '您的订阅"朝阳"有些模糊,如果您想订阅北京朝阳区,请输入订阅朝阳区,如果想订阅辽宁省朝阳市,请输入订阅朝阳市'
TIME_SPLIT = 60 * 3
SHORT_TIME_SPLIT = 60 * 5
LONG_TIME_SPLIT = 60 * 60
SEND_SPLIT = random.random() * 10
SEND_SPLIT_SHORT = random.random() * 5
HELP_CONTENT = "您好!这是微信疫情信息小助手(非官方)!我有以下功能:\n1.若好友向您发送 订阅/取消+地区名 关注/取消该地区疫情并实时向该好友推送;" \
"\n2.您向文件传输助手发送辟谣+群名,比如\"辟谣家族群\",将对该群的新闻长文、链接分享自动进行辟谣,使用停止辟谣+群名停止该功能。发送\"CX\"可查询已关注的群聊。" \
"\n以上所有数据来自腾讯\"疫情实时追踪\"平台,链接:https://news.qq.com//zt2020/page/feiyan.htm"
GROUP_CONTENT_HELP = "您对这些群启用了辟谣功能:{}。若发现漏掉了一些群,请将该群保存到通讯录再重新发送辟谣+群名。"
NO_GROUP_CONTENT_HELP = "您目前没有对任何群开启辟谣功能。若发现有遗漏,请将该群保存到通讯录再重新发送辟谣+群名。"
FILE_HELPER = 'filehelper'
ONLINE_TEXT = 'Hello, 微信疫情信息小助手(自动机器人)又上线啦'
| [
"maicius@outlook.com"
] | maicius@outlook.com |
6096776bf8dd21fe58ce88a2c47d00d6451aff58 | f576f0ea3725d54bd2551883901b25b863fe6688 | /sdk/purview/azure-mgmt-purview/generated_samples/accounts_list_by_resource_group.py | b8ffb87223d71e319f8cff856c97d022bb768056 | [
"LicenseRef-scancode-generic-cla",
"MIT",
"LGPL-2.1-or-later"
] | permissive | Azure/azure-sdk-for-python | 02e3838e53a33d8ba27e9bcc22bd84e790e4ca7c | c2ca191e736bb06bfbbbc9493e8325763ba990bb | refs/heads/main | 2023-09-06T09:30:13.135012 | 2023-09-06T01:08:06 | 2023-09-06T01:08:06 | 4,127,088 | 4,046 | 2,755 | MIT | 2023-09-14T21:48:49 | 2012-04-24T16:46:12 | Python | UTF-8 | Python | false | false | 1,586 | py | # coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------
from azure.identity import DefaultAzureCredential
from azure.mgmt.purview import PurviewManagementClient
"""
# PREREQUISITES
pip install azure-identity
pip install azure-mgmt-purview
# USAGE
python accounts_list_by_resource_group.py
Before run the sample, please set the values of the client ID, tenant ID and client secret
of the AAD application as environment variables: AZURE_CLIENT_ID, AZURE_TENANT_ID,
AZURE_CLIENT_SECRET. For more info about how to get the value, please see:
https://docs.microsoft.com/azure/active-directory/develop/howto-create-service-principal-portal
"""
def main():
client = PurviewManagementClient(
credential=DefaultAzureCredential(),
subscription_id="34adfa4f-cedf-4dc0-ba29-b6d1a69ab345",
)
response = client.accounts.list_by_resource_group(
resource_group_name="SampleResourceGroup",
)
for item in response:
print(item)
# x-ms-original-file: specification/purview/resource-manager/Microsoft.Purview/stable/2021-07-01/examples/Accounts_ListByResourceGroup.json
if __name__ == "__main__":
main()
| [
"noreply@github.com"
] | Azure.noreply@github.com |
ee673cd5d39b84dd58e45d5415c4e2ec6428723f | c2f809fb0c3aaf5c92f2ec04c41df5e0e764a088 | /zoo/saved_instances/CCAT/common/train/confidence_calibrated_adversarial_training.py | 54a88108da63c9a0a859b18ca5372fb07a8bc860 | [] | no_license | lavanova/adaptive-auto-attack | 7f4834cdc9dbeb6e161fc869f71bb284e854604a | 8ed8b33afc6757a334c4d3f046fcb7793dd2c873 | refs/heads/master | 2023-05-07T17:44:33.466128 | 2021-05-20T09:03:53 | 2021-05-20T09:03:53 | 369,143,471 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 13,372 | py | import numpy
import torch
import common.torch
import common.summary
import common.numpy
import attacks
from .adversarial_training import *
class ConfidenceCalibratedAdversarialTraining(AdversarialTraining):
"""
Confidence-calibrated adversarial training.
"""
def __init__(self, model, trainset, testset, optimizer, scheduler, attack, objective, loss, transition, fraction=0.5, augmentation=None, writer=common.summary.SummaryWriter(), cuda=False):
"""
Constructor.
:param model: model
:type model: torch.nn.Module
:param trainset: training set
:type trainset: torch.utils.data.DataLoader
:param testset: test set
:type testset: torch.utils.data.DataLoader
:param optimizer: optimizer
:type optimizer: torch.optim.Optimizer
:param scheduler: scheduler
:type scheduler: torch.optim.LRScheduler
:param attack: attack
:type attack: attacks.Attack
:param objective: objective
:type objective: attacks.Objective
:param loss: loss
:type loss: callable
:param transition: transition
:type transition: callable
:param fraction: fraction of adversarial examples per batch
:type fraction: float
:param augmentation: augmentation
:type augmentation: imgaug.augmenters.Sequential
:param writer: summary writer
:type writer: torch.utils.tensorboard.SummaryWriter or TensorboardX equivalent
:param cuda: run on CUDA device
:type cuda: bool
"""
assert fraction < 1
super(ConfidenceCalibratedAdversarialTraining, self).__init__(model, trainset, testset, optimizer, scheduler, attack, objective, fraction, augmentation, writer, cuda)
self.loss = loss
""" (callable) Loss. """
self.transition = transition
""" (callable) Transition. """
self.N_class = None
""" (int) Number of classes. """
if getattr(self.model, 'N_class', None) is not None:
self.N_class = self.model.N_class
self.writer.add_text('config/loss', self.loss.__name__)
self.writer.add_text('config/transition', self.transition.__name__)
def train(self, epoch):
"""
Training step.
:param epoch: epoch
:type epoch: int
"""
for b, (inputs, targets) in enumerate(self.trainset):
if self.augmentation is not None:
inputs = self.augmentation.augment_images(inputs.numpy())
inputs = common.torch.as_variable(inputs, self.cuda)
inputs = inputs.permute(0, 3, 1, 2)
targets = common.torch.as_variable(targets, self.cuda)
if self.N_class is None:
_ = self.model.forward(inputs)
self.N_class = _.size(1)
distributions = common.torch.one_hot(targets, self.N_class)
split = int(self.fraction * inputs.size()[0])
# update fraction for correct loss computation
fraction = split / float(inputs.size(0))
clean_inputs = inputs[:split]
adversarial_inputs = inputs[split:]
clean_targets = targets[:split]
adversarial_targets = targets[split:]
clean_distributions = distributions[:split]
adversarial_distributions = distributions[split:]
self.model.eval()
self.objective.set(adversarial_targets)
adversarial_perturbations, adversarial_objectives = self.attack.run(self.model, adversarial_inputs, self.objective)
adversarial_perturbations = common.torch.as_variable(adversarial_perturbations, self.cuda)
adversarial_inputs = adversarial_inputs + adversarial_perturbations
gamma, adversarial_norms = self.transition(adversarial_perturbations)
gamma = common.torch.expand_as(gamma, adversarial_distributions)
adversarial_distributions = adversarial_distributions * (1 - gamma)
adversarial_distributions += gamma * torch.ones_like(adversarial_distributions) / self.N_class
inputs = torch.cat((clean_inputs, adversarial_inputs), dim=0)
self.model.train()
self.optimizer.zero_grad()
logits = self.model(inputs)
clean_logits = logits[:split]
adversarial_logits = logits[split:]
adversarial_loss = self.loss(adversarial_logits, adversarial_distributions)
adversarial_error = common.torch.classification_error(adversarial_logits, adversarial_targets)
clean_loss = self.loss(clean_logits, clean_distributions)
clean_error = common.torch.classification_error(clean_logits, clean_targets)
loss = (1 - fraction) * clean_loss + fraction * adversarial_loss
loss.backward()
self.optimizer.step()
self.scheduler.step()
global_step = epoch * len(self.trainset) + b
self.writer.add_scalar('train/lr', self.scheduler.get_lr()[0], global_step=global_step)
self.writer.add_scalar('train/loss', clean_loss.item(), global_step=global_step)
self.writer.add_scalar('train/error', clean_error.item(), global_step=global_step)
self.writer.add_scalar('train/confidence', torch.mean(torch.max(torch.nn.functional.softmax(clean_logits, dim=1), dim=1)[0]).item(), global_step=global_step)
self.writer.add_histogram('train/logits', torch.max(clean_logits, dim=1)[0], global_step=global_step)
self.writer.add_histogram('train/confidences', torch.max(torch.nn.functional.softmax(clean_logits, dim=1), dim=1)[0], global_step=global_step)
success = torch.clamp(torch.abs(adversarial_targets - torch.max(torch.nn.functional.softmax(adversarial_logits, dim=1), dim=1)[1]), max=1)
self.writer.add_scalar('train/adversarial_loss', adversarial_loss.item(), global_step=global_step)
self.writer.add_scalar('train/adversarial_error', adversarial_error.item(), global_step=global_step)
self.writer.add_scalar('train/adversarial_confidence', torch.mean(torch.max(torch.nn.functional.softmax(adversarial_logits, dim=1), dim=1)[0]).item(), global_step=global_step)
self.writer.add_scalar('train/adversarial_success', torch.mean(success.float()).item(), global_step=global_step)
self.writer.add_histogram('train/adversarial_logits', torch.max(adversarial_logits, dim=1)[0], global_step=global_step)
self.writer.add_histogram('train/adversarial_confidences', torch.max(torch.nn.functional.softmax(adversarial_logits, dim=1), dim=1)[0], global_step=global_step)
self.writer.add_histogram('train/adversarial_objectives', adversarial_objectives, global_step=global_step)
self.writer.add_histogram('train/adversarial_norms', adversarial_norms, global_step=global_step)
if self.summary_gradients:
for name, parameter in self.model.named_parameters():
self.writer.add_histogram('train_weights/%s' % name, parameter.view(-1), global_step=global_step)
self.writer.add_histogram('train_gradients/%s' % name, parameter.grad.view(-1), global_step=global_step)
self.writer.add_images('train/images', inputs[:min(16, split)], global_step=global_step)
self.writer.add_images('train/adversarial_images', inputs[split:split + 16], global_step=global_step)
self.progress(epoch, b, len(self.trainset))
def test(self, epoch):
"""
Test on adversarial examples.
:param epoch: epoch
:type epoch: int
"""
self.model.eval()
# reason to repeat this here: use correct loss for statistics
losses = None
errors = None
logits = None
confidences = None
for b, (inputs, targets) in enumerate(self.testset):
inputs = common.torch.as_variable(inputs, self.cuda)
inputs = inputs.permute(0, 3, 1, 2)
targets = common.torch.as_variable(targets, self.cuda)
if self.N_class is None:
_ = self.model.forward(inputs)
self.N_class = _.size(1)
distributions = common.torch.as_variable(common.torch.one_hot(targets, self.N_class))
outputs = self.model(inputs)
losses = common.numpy.concatenate(losses, self.loss(outputs, distributions, reduction='none').detach().cpu().numpy())
errors = common.numpy.concatenate(errors, common.torch.classification_error(outputs, targets, reduction='none').detach().cpu().numpy())
logits = common.numpy.concatenate(logits, torch.max(outputs, dim=1)[0].detach().cpu().numpy())
confidences = common.numpy.concatenate(confidences, torch.max(torch.nn.functional.softmax(outputs, dim=1), dim=1)[0].detach().cpu().numpy())
self.progress(epoch, b, len(self.testset))
global_step = epoch # epoch * len(self.trainset) + len(self.trainset) - 1
self.writer.add_scalar('test/loss', numpy.mean(losses), global_step=global_step)
self.writer.add_scalar('test/error', numpy.mean(errors), global_step=global_step)
self.writer.add_scalar('test/logit', numpy.mean(logits), global_step=global_step)
self.writer.add_scalar('test/confidence', numpy.mean(confidences), global_step=global_step)
self.writer.add_histogram('test/losses', losses, global_step=global_step)
self.writer.add_histogram('test/errors', errors, global_step=global_step)
self.writer.add_histogram('test/logits', logits, global_step=global_step)
self.writer.add_histogram('test/confidences', confidences, global_step=global_step)
self.model.eval()
losses = None
errors = None
logits = None
confidences = None
successes = None
norms = None
objectives = None
for b, (inputs, targets) in enumerate(self.testset):
if b >= self.max_batches:
break
inputs = common.torch.as_variable(inputs, self.cuda)
inputs = inputs.permute(0, 3, 1, 2)
targets = common.torch.as_variable(targets, self.cuda)
distributions = common.torch.as_variable(common.torch.one_hot(targets, self.N_class))
self.objective.set(targets)
adversarial_perturbations, adversarial_objectives = self.attack.run(self.model, inputs, self.objective)
objectives = common.numpy.concatenate(objectives, adversarial_objectives)
adversarial_perturbations = common.torch.as_variable(adversarial_perturbations, self.cuda)
inputs = inputs + adversarial_perturbations
gamma, adversarial_norms = self.transition(adversarial_perturbations)
gamma = common.torch.expand_as(gamma, distributions)
distributions = distributions * (1 - gamma) + gamma * torch.ones_like(distributions) / self.N_class
outputs = self.model(inputs)
losses = common.numpy.concatenate(losses, self.loss(outputs, distributions, reduction='none').detach().cpu().numpy())
errors = common.numpy.concatenate(errors, common.torch.classification_error(outputs, targets, reduction='none').detach().cpu().numpy())
logits = common.numpy.concatenate(logits, torch.max(outputs, dim=1)[0].detach().cpu().numpy())
confidences = common.numpy.concatenate(confidences, torch.max(torch.nn.functional.softmax(outputs, dim=1), dim=1)[0].detach().cpu().numpy())
successes = common.numpy.concatenate(successes, torch.clamp(torch.abs(targets - torch.max(torch.nn.functional.softmax(outputs, dim=1), dim=1)[1]), max=1).detach().cpu().numpy())
norms = common.numpy.concatenate(norms, adversarial_norms.detach().cpu().numpy())
self.progress(epoch, b, self.max_batches)
        global_step = epoch + 1  # * len(self.trainset) + len(self.trainset) - 1
self.writer.add_scalar('test/adversarial_loss', numpy.mean(losses), global_step=global_step)
self.writer.add_scalar('test/adversarial_error', numpy.mean(errors), global_step=global_step)
self.writer.add_scalar('test/adversarial_logit', numpy.mean(logits), global_step=global_step)
self.writer.add_scalar('test/adversarial_confidence', numpy.mean(confidences), global_step=global_step)
self.writer.add_scalar('test/adversarial_norm', numpy.mean(norms), global_step=global_step)
self.writer.add_scalar('test/adversarial_objective', numpy.mean(objectives), global_step=global_step)
self.writer.add_scalar('test/adversarial_success', numpy.mean(successes), global_step=global_step)
self.writer.add_histogram('test/adversarial_losses', losses, global_step=global_step)
self.writer.add_histogram('test/adversarial_errors', errors, global_step=global_step)
self.writer.add_histogram('test/adversarial_logits', logits, global_step=global_step)
self.writer.add_histogram('test/adversarial_confidences', confidences, global_step=global_step)
self.writer.add_histogram('test/adversarial_norms', norms, global_step=global_step)
self.writer.add_histogram('test/adversarial_objectives', objectives, global_step=global_step) | [
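The loop above leans on two pieces worth spelling out: a `common.numpy.concatenate` that tolerates a `None` accumulator (so the first batch initializes it), and the gamma-weighted blend of the one-hot targets with a uniform distribution. A minimal sketch of both, assuming those semantics (the real `common` helper module is not shown in this file, and a scalar `gamma` is used here for illustration):

```python
import numpy

def concatenate(accumulator, batch):
    # Assumed behavior of common.numpy.concatenate: a None accumulator
    # means "first batch", so the batch array simply initializes it.
    batch = numpy.asarray(batch)
    if accumulator is None:
        return batch
    return numpy.concatenate((accumulator, batch))

def smooth_targets(one_hot, gamma, n_class):
    # The blend from the loop body: gamma = 0 keeps the one-hot label,
    # gamma = 1 discards it entirely in favor of the uniform
    # distribution over n_class labels.
    uniform = numpy.ones_like(one_hot) / n_class
    return one_hot * (1.0 - gamma) + gamma * uniform
```

With `gamma = 0.5` and four classes, a one-hot `[1, 0, 0, 0]` becomes `[0.625, 0.125, 0.125, 0.125]`, which still sums to one.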
"icebergmatt0109@gmail.com"
] | icebergmatt0109@gmail.com |
53010e1bfd118bc7c2ed2950dc17576e508e457a | 2545624bbbf982aa6243acf8b0cb9f7eaef155d6 | /2019/round_1a/rhyme.py | 28194d837671b486030f48e3044cdf0851600a4f | [] | no_license | dprgarner/codejam | 9f420003fb48c2155bd54942803781a095e984d1 | d7e1134fe3fe850b419aa675260c4ced630731d0 | refs/heads/master | 2021-07-12T05:36:08.465603 | 2021-07-03T12:37:46 | 2021-07-03T12:37:46 | 87,791,734 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,168 | py | """
Codejam boilerplate. Copy/paste this file with get_cases and handle_case
customised.
"""
import sys
class BaseInteractiveCaseHandler():
"""
Boilerplate class.
"""
def __init__(self):
self.source = self.get_source()
def get_source(self):
try:
while True:
yield sys.stdin.readline()
except EOFError:
pass
def read(self):
return next(self.source).strip()
def write(self, txt):
print(str(txt))
sys.stdout.flush()
def run(self):
cases = int(next(self.source))
for i in range(1, cases + 1):
self.handle_case(i)
def debug(self, *txt):
# Uncomment for debugging.
return
print(*[str(t) for t in txt], file=sys.stderr)
def handle_case(self, i):
raise NotImplementedError
class CaseHandler(BaseInteractiveCaseHandler):
"""
https://codingcompetitions.withgoogle.com/codejam/round/0000000000051635/0000000000104e05
Practice, ~35m, 1 incorrect
"""
def handle_case(self, i):
n = int(self.read())
words = [self.read() for _ in range(n)]
soln = self.solve(words)
self.write('Case #{}: {}'.format(i, soln))
def solve(self, raw_words):
words = sorted([w[::-1] for w in raw_words])
# In python, '' is sorted before 'A'.
self.debug(words)
for accent_l in range(max(len(w) for w in words) - 1, 0, -1):
self.debug(accent_l)
i = len(words) - 2
while i >= 0:
self.debug('i', i)
self.debug(words)
self.debug(' ', i, words[i][:accent_l], words[i+1][:accent_l])
if words[i][:accent_l] == words[i+1][:accent_l]:
stem = words[i][:accent_l]
x = words.pop(i)
y = words.pop(i)
self.debug('removed ', x, y)
i -= 1
while i >= 0 and words[i][:accent_l] == stem:
i -= 1
i -= 1
return len(raw_words) - len(words)
CaseHandler().run()
| [
"dprgarner@gmail.com"
] | dprgarner@gmail.com |
13f19f5bcf551f3f4499f60bbf7cd5325f1a18a6 | fd21d6384ba36aa83d0c9f05f889bdbf8912551a | /a10sdk/core/gslb/gslb_zone_service_dns_a_record_dns_a_record_srv.py | e6e43c2ee8a7be28521783527603b200e3b349b5 | [
"Apache-2.0"
] | permissive | 0xtobit/a10sdk-python | 32a364684d98c1d56538aaa4ccb0e3a5a87ecd00 | 1ea4886eea3a1609b2ac1f81e7326758d3124dba | refs/heads/master | 2021-01-18T03:08:58.576707 | 2014-12-10T00:31:52 | 2014-12-10T00:31:52 | 34,410,031 | 0 | 0 | null | 2015-04-22T19:05:12 | 2015-04-22T19:05:12 | null | UTF-8 | Python | false | false | 2,598 | py | from a10sdk.common.A10BaseClass import A10BaseClass
class DnsARecordSrv(A10BaseClass):
"""Class Description::
Specify DNS Address Record.
Class dns-a-record-srv supports CRUD Operations and inherits from `common/A10BaseClass`.
This class is the `"PARENT"` class for this module.`
:param as_replace: {"default": 0, "optional": true, "type": "number", "description": "Return this Service-IP when enable ip-replace", "format": "flag"}
:param as_backup: {"default": 0, "optional": true, "type": "number", "description": "As backup when fail", "format": "flag"}
:param weight: {"description": "Specify weight for Service-IP (Weight value)", "format": "number", "type": "number", "maximum": 100, "minimum": 1, "optional": true}
:param svrname: {"description": "Specify name", "format": "string", "minLength": 1, "optional": false, "maxLength": 63, "type": "string", "$ref": "/axapi/v3/gslb/service-ip"}
:param disable: {"default": 0, "optional": true, "type": "number", "description": "Disable this Service-IP", "format": "flag"}
:param static: {"default": 0, "optional": true, "type": "number", "description": "Return this Service-IP in DNS server mode", "format": "flag"}
:param ttl: {"optional": true, "type": "number", "description": "Specify TTL for Service-IP", "format": "number"}
:param admin_ip: {"description": "Specify admin priority of Service-IP (Specify the priority)", "format": "number", "type": "number", "maximum": 255, "minimum": 1, "optional": true}
:param no_resp: {"default": 0, "optional": true, "type": "number", "description": "Don't use this Service-IP as DNS response", "format": "flag"}
:param DeviceProxy: The device proxy for REST operations and session handling. Refer to `common/device_proxy.py`
URL for this object::
`https://<Hostname|Ip address>//axapi/v3/gslb/zone/{name}/service/{service_port}+{service_name}/dns-a-record/dns-a-record-srv/{svrname}`.
"""
def __init__(self, **kwargs):
self.ERROR_MSG = ""
self.required = [ "svrname"]
self.b_key = "dns-a-record-srv"
self.a10_url="/axapi/v3/gslb/zone/{name}/service/{service_port}+{service_name}/dns-a-record/dns-a-record-srv/{svrname}"
self.DeviceProxy = ""
self.as_replace = ""
self.as_backup = ""
self.weight = ""
self.svrname = ""
self.disable = ""
self.static = ""
self.ttl = ""
self.admin_ip = ""
self.no_resp = ""
for keys, value in kwargs.items():
setattr(self,keys, value)
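The `a10_url` attribute above is an ordinary Python format string. Purely as an illustration (the zone, port, service, and server values below are hypothetical examples, not anything defined by the a10sdk library), it expands like this:

```python
# Hypothetical example values; only the template itself comes from the class.
a10_url = ("/axapi/v3/gslb/zone/{name}/service/{service_port}+{service_name}"
           "/dns-a-record/dns-a-record-srv/{svrname}")

url = a10_url.format(name="example-zone", service_port=80,
                     service_name="http", svrname="web1")
print(url)
# -> /axapi/v3/gslb/zone/example-zone/service/80+http/dns-a-record/dns-a-record-srv/web1
```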
| [
"doug@parksidesoftware.com"
] | doug@parksidesoftware.com |
b9ea48c03deb0ec979884538322f7b48b21f5023 | 62e58c051128baef9452e7e0eb0b5a83367add26 | /edifact/D98B/SUPMAND98BUN.py | ba37762c3029a0327ed8a98c72ab27c2349e27b6 | [] | no_license | dougvanhorn/bots-grammars | 2eb6c0a6b5231c14a6faf194b932aa614809076c | 09db18d9d9bd9d92cefbf00f1c0de1c590fe3d0d | refs/heads/master | 2021-05-16T12:55:58.022904 | 2019-05-17T15:22:23 | 2019-05-17T15:22:23 | 105,274,633 | 0 | 0 | null | 2017-09-29T13:21:21 | 2017-09-29T13:21:21 | null | UTF-8 | Python | false | false | 1,855 | py | #Generated by bots open source edi translator from UN-docs.
from bots.botsconfig import *
from edifact import syntax
from recordsD98BUN import recorddefs
structure = [
{ID: 'UNH', MIN: 1, MAX: 1, LEVEL: [
{ID: 'BGM', MIN: 1, MAX: 1},
{ID: 'RFF', MIN: 1, MAX: 6},
{ID: 'CUX', MIN: 0, MAX: 1},
{ID: 'DTM', MIN: 0, MAX: 9},
{ID: 'FTX', MIN: 0, MAX: 5},
{ID: 'NAD', MIN: 0, MAX: 6, LEVEL: [
{ID: 'CTA', MIN: 0, MAX: 5, LEVEL: [
{ID: 'COM', MIN: 0, MAX: 1},
]},
]},
{ID: 'UNS', MIN: 1, MAX: 1},
{ID: 'NAD', MIN: 1, MAX: 999999, LEVEL: [
{ID: 'DTM', MIN: 1, MAX: 15},
{ID: 'ATT', MIN: 0, MAX: 9},
{ID: 'RFF', MIN: 0, MAX: 9},
{ID: 'REL', MIN: 0, MAX: 99, LEVEL: [
{ID: 'NAD', MIN: 1, MAX: 1},
{ID: 'PCD', MIN: 0, MAX: 1},
]},
{ID: 'EMP', MIN: 0, MAX: 9, LEVEL: [
{ID: 'PCD', MIN: 0, MAX: 1},
{ID: 'CUX', MIN: 0, MAX: 1},
{ID: 'NAD', MIN: 0, MAX: 9},
{ID: 'MOA', MIN: 0, MAX: 9, LEVEL: [
{ID: 'PAT', MIN: 0, MAX: 1},
{ID: 'DTM', MIN: 0, MAX: 1},
]},
]},
{ID: 'GIS', MIN: 1, MAX: 20, LEVEL: [
{ID: 'DTM', MIN: 0, MAX: 1},
]},
{ID: 'MEM', MIN: 0, MAX: 9, LEVEL: [
{ID: 'ATT', MIN: 0, MAX: 9, LEVEL: [
{ID: 'PCD', MIN: 0, MAX: 1},
]},
{ID: 'COT', MIN: 0, MAX: 99, LEVEL: [
{ID: 'MOA', MIN: 0, MAX: 1},
{ID: 'PCD', MIN: 0, MAX: 1},
{ID: 'DTM', MIN: 0, MAX: 3},
{ID: 'PAT', MIN: 0, MAX: 1},
{ID: 'FTX', MIN: 0, MAX: 3},
]},
]},
]},
{ID: 'CNT', MIN: 0, MAX: 9},
{ID: 'AUT', MIN: 0, MAX: 1},
{ID: 'UNT', MIN: 1, MAX: 1},
]},
]
| [
"jason.capriotti@gmail.com"
] | jason.capriotti@gmail.com |
c19f68d19757b7a8fe724bdac5d978174237d67e | 993ef8924418866f932396a58e3ad0c2a940ddd3 | /Production/python/PrivateSamples/EMJ_UL17_mMed-2000_mDark-6_kappa-1p23_aligned-down_cff.py | 86a9a2c1c4a0634c9dc4f7596fd42d2bc4c0da36 | [] | no_license | TreeMaker/TreeMaker | 48d81f6c95a17828dbb599d29c15137cd6ef009a | 15dd7fe9e9e6f97d9e52614c900c27d200a6c45f | refs/heads/Run2_UL | 2023-07-07T15:04:56.672709 | 2023-07-03T16:43:17 | 2023-07-03T16:43:17 | 29,192,343 | 16 | 92 | null | 2023-07-03T16:43:28 | 2015-01-13T13:59:30 | Python | UTF-8 | Python | false | false | 1,961 | py | import FWCore.ParameterSet.Config as cms
maxEvents = cms.untracked.PSet( input = cms.untracked.int32(-1) )
readFiles = cms.untracked.vstring()
secFiles = cms.untracked.vstring()
source = cms.Source ("PoolSource",fileNames = readFiles, secondaryFileNames = secFiles)
readFiles.extend( [
'root://cmseos.fnal.gov///store/group/lpcsusyhad/ExoEMJAnalysis2020/Signal.Oct.2021/UL17/step4_MINIAODv2_mMed-2000_mDark-6_kappa-1p23_aligned-down_n-500_part-1.root',
'root://cmseos.fnal.gov///store/group/lpcsusyhad/ExoEMJAnalysis2020/Signal.Oct.2021/UL17/step4_MINIAODv2_mMed-2000_mDark-6_kappa-1p23_aligned-down_n-500_part-10.root',
'root://cmseos.fnal.gov///store/group/lpcsusyhad/ExoEMJAnalysis2020/Signal.Oct.2021/UL17/step4_MINIAODv2_mMed-2000_mDark-6_kappa-1p23_aligned-down_n-500_part-2.root',
'root://cmseos.fnal.gov///store/group/lpcsusyhad/ExoEMJAnalysis2020/Signal.Oct.2021/UL17/step4_MINIAODv2_mMed-2000_mDark-6_kappa-1p23_aligned-down_n-500_part-3.root',
'root://cmseos.fnal.gov///store/group/lpcsusyhad/ExoEMJAnalysis2020/Signal.Oct.2021/UL17/step4_MINIAODv2_mMed-2000_mDark-6_kappa-1p23_aligned-down_n-500_part-4.root',
'root://cmseos.fnal.gov///store/group/lpcsusyhad/ExoEMJAnalysis2020/Signal.Oct.2021/UL17/step4_MINIAODv2_mMed-2000_mDark-6_kappa-1p23_aligned-down_n-500_part-5.root',
'root://cmseos.fnal.gov///store/group/lpcsusyhad/ExoEMJAnalysis2020/Signal.Oct.2021/UL17/step4_MINIAODv2_mMed-2000_mDark-6_kappa-1p23_aligned-down_n-500_part-6.root',
'root://cmseos.fnal.gov///store/group/lpcsusyhad/ExoEMJAnalysis2020/Signal.Oct.2021/UL17/step4_MINIAODv2_mMed-2000_mDark-6_kappa-1p23_aligned-down_n-500_part-7.root',
'root://cmseos.fnal.gov///store/group/lpcsusyhad/ExoEMJAnalysis2020/Signal.Oct.2021/UL17/step4_MINIAODv2_mMed-2000_mDark-6_kappa-1p23_aligned-down_n-500_part-8.root',
'root://cmseos.fnal.gov///store/group/lpcsusyhad/ExoEMJAnalysis2020/Signal.Oct.2021/UL17/step4_MINIAODv2_mMed-2000_mDark-6_kappa-1p23_aligned-down_n-500_part-9.root',
] )
| [
"enochnotsocool@gmail.com"
] | enochnotsocool@gmail.com |
d6be93f57e6800ddb1758330fd39c0ad37c84fdd | 809e8079051ae2a062c4b867654d6fb7b5db722d | /test/export.py | 451c615fd8392dc1982344ca4d0dd46091790b50 | [] | no_license | OpenSourceBrain/TobinEtAl2017 | 035fa4fc490e01c98a22dcb79e707152af5b1288 | 5afd481cdf92197e1438ec483955dae20293dc63 | refs/heads/master | 2020-04-11T23:18:05.057605 | 2019-05-10T13:15:30 | 2019-05-10T13:15:30 | 162,162,576 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 157 | py | from pyneuroml.swc.ExportSWC import convert_to_swc
files = ['bask.cell.nml', 'pyr_4_sym.cell.nml']
for f in files:
convert_to_swc(f, add_comments=True) | [
"p.gleeson@gmail.com"
] | p.gleeson@gmail.com |
a07f19ef5b2f19ab5ca36e897948080c68a05850 | ec7591c3f478c43e76257aaa500d8f6a2e763d74 | /stanza/models/constituency/evaluate_treebanks.py | 11f3084b3413a8f82eef0949f0a8023a1ec187dd | [
"Apache-2.0"
] | permissive | stanfordnlp/stanza | 5cc3dbe70a96dd565639b7dae1efde6b4fa76985 | c530c9af647d521262b56b717bcc38b0cfc5f1b8 | refs/heads/main | 2023-09-01T12:01:38.980322 | 2023-03-14T16:10:05 | 2023-03-14T16:10:05 | 104,854,615 | 4,281 | 599 | NOASSERTION | 2023-09-10T00:31:36 | 2017-09-26T08:00:56 | Python | UTF-8 | Python | false | false | 1,249 | py | """
Read multiple treebanks, score the results.
Reports the k-best score if multiple predicted treebanks are given.
"""
import argparse
from stanza.models.constituency import tree_reader
from stanza.server.parser_eval import EvaluateParser, ParseResult
def main():
parser = argparse.ArgumentParser(description='Get scores for one or more treebanks against the gold')
parser.add_argument('gold', type=str, help='Which file to load as the gold trees')
parser.add_argument('pred', type=str, nargs='+', help='Which file(s) are the predictions. If more than one is given, the evaluation will be "k-best" with the first prediction treated as the canonical')
args = parser.parse_args()
print("Loading gold treebank: " + args.gold)
gold = tree_reader.read_treebank(args.gold)
print("Loading predicted treebanks: " + args.pred)
pred = [tree_reader.read_treebank(x) for x in args.pred]
full_results = [ParseResult(parses[0], [*parses[1:]])
for parses in zip(gold, *pred)]
if len(pred) <= 1:
kbest = None
else:
kbest = len(pred)
with EvaluateParser(kbest=kbest) as evaluator:
response = evaluator.process(full_results)
if __name__ == '__main__':
main()
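The list comprehension in `main` relies on `zip(gold, *pred)` to pair each gold tree with one prediction from every system. A standalone illustration with placeholder strings (not real parse trees):

```python
gold = ["gold_1", "gold_2"]
pred = [["sys_a_1", "sys_a_2"], ["sys_b_1", "sys_b_2"]]

# Each tuple from zip(gold, *pred) holds the gold tree first, then the
# k system outputs for that sentence, in the order the systems were given.
grouped = [(parses[0], [*parses[1:]]) for parses in zip(gold, *pred)]
print(grouped[0])  # ('gold_1', ['sys_a_1', 'sys_b_1'])
```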
| [
"horatio@gmail.com"
] | horatio@gmail.com |
c9b5d98cfab617283e43057e2f975465d57c1fdb | c8fb08292d264780c8cd3ac6734dadf2b15d9818 | /doc/_gallery/plot_2_6_monocomp_nonstat_colored_gaussian_noise.py | c4ab25ebb03cf1fd16656ad864eb6eede8c230f5 | [] | no_license | dhuadaar/pytftb | d2e761ae86053b4a78f494ee3272ca3f4cde05ad | d3f5e99775cb7bc3455440ac19bd80806c47b33f | refs/heads/master | 2021-01-17T22:03:29.804566 | 2016-01-18T06:42:06 | 2016-01-18T06:42:06 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 796 | py | #! /usr/bin/env python
# -*- coding: utf-8 -*-
# vim:fenc=utf-8
#
# Copyright © 2015 jaidev <jaidev@newton>
#
# Distributed under terms of the MIT license.
"""
=========================
Noisy Monocomponent Chirp
=========================
This example demonstrates the construction of a monocomponent signal with
linear frequency modulation and colored Gaussian noise.
"""
from tftb.generators import fmlin, amgauss, noisecg, sigmerge
from numpy import real
import matplotlib.pyplot as plt
fm, _ = fmlin(256)
am = amgauss(256)
signal = fm * am
noise = noisecg(256, .8)
sign = sigmerge(signal, noise, -10)
plt.plot(real(sign))
plt.xlabel('Time')
plt.ylabel('Real part')
plt.title('Gaussian transient signal embedded in -10 dB colored Gaussian noise')
plt.xlim(0, 256)
plt.grid()
plt.show()
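`sigmerge(signal, noise, -10)` mixes the noise into the signal at the requested signal-to-noise ratio. A sketch under the assumption (standard for the tftb toolbox lineage, but an assumption here) that the noise is rescaled so the energy ratio equals the given dB value:

```python
import numpy

def sigmerge_sketch(x1, x2, ratio_db):
    # Assumed semantics: scale x2 so that energy(x1) / energy(h * x2)
    # equals 10 ** (ratio_db / 10), then add the two signals.
    ex1 = numpy.mean(numpy.abs(x1) ** 2)
    ex2 = numpy.mean(numpy.abs(x2) ** 2)
    h = numpy.sqrt(ex1 / (ex2 * 10 ** (ratio_db / 10.0)))
    return x1 + h * x2
```

At `ratio_db = 0` the scaled noise carries exactly as much energy as the signal; at `-10` it carries ten times more.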
| [
"deshpande.jaidev@gmail.com"
] | deshpande.jaidev@gmail.com |
efebffacfb752e09618d19fdc96b01a793cb83ad | 3de10fd67accd642e4cac75f360e73c5d07865d2 | /weatherCron.py | 020a9d233819405d29c18fd6fc80817a96109ed1 | [] | no_license | urbancrazy119/crazyBot | 3a48d4705dc03450e1b347be08bbaf46172515f7 | 10aeb42ea3bbe49ff11f6e0ed1de08413cc89ec7 | refs/heads/master | 2021-01-21T06:38:38.201920 | 2017-03-03T02:52:32 | 2017-03-03T02:52:32 | 82,866,606 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 117 | py | import funcWeather as weather
import funcBot as f
msg = weather.make_weather_msg('all')
f.broadcast(msg)
#print msg
| [
"root@debian"
] | root@debian |
5017780ee6b8321374fe9235fe905cae865ce388 | de01cb554c2292b0fbb79b4d5413a2f6414ea472 | /algorithms/Medium/684.redundant-connection.py | 94b04ed4eec6fe7450dfa43b9c4bf80eac217280 | [] | no_license | h4hany/yeet-the-leet | 98292017eadd3dde98a079aafcd7648aa98701b4 | 563d779467ef5a7cc85cbe954eeaf3c1f5463313 | refs/heads/master | 2022-12-10T08:35:39.830260 | 2020-09-02T23:12:15 | 2020-09-02T23:12:15 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,934 | py | #
# @lc app=leetcode id=684 lang=python3
#
# [684] Redundant Connection
#
# https://leetcode.com/problems/redundant-connection/description/
#
# algorithms
# Medium (57.63%)
# Total Accepted: 100.8K
# Total Submissions: 174.9K
# Testcase Example: '[[1,2],[1,3],[2,3]]'
#
#
# In this problem, a tree is an undirected graph that is connected and has no
# cycles.
#
# The given input is a graph that started as a tree with N nodes (with distinct
# values 1, 2, ..., N), with one additional edge added. The added edge has two
# different vertices chosen from 1 to N, and was not an edge that already
# existed.
#
# The resulting graph is given as a 2D-array of edges. Each element of edges
# is a pair [u, v] with u < v, that represents an undirected edge connecting
# nodes u and v.
#
# Return an edge that can be removed so that the resulting graph is a tree of N
# nodes. If there are multiple answers, return the answer that occurs last in
# the given 2D-array. The answer edge [u, v] should be in the same format,
# with u < v.
# Example 1:
#
# Input: [[1,2], [1,3], [2,3]]
# Output: [2,3]
# Explanation: The given undirected graph will be like this:
# 1
# / \
# 2 - 3
#
#
# Example 2:
#
# Input: [[1,2], [2,3], [3,4], [1,4], [1,5]]
# Output: [1,4]
# Explanation: The given undirected graph will be like this:
# 5 - 1 - 2
# | |
# 4 - 3
#
#
# Note:
# The size of the input 2D-array will be between 3 and 1000.
# Every integer represented in the 2D-array will be between 1 and N, where N is
# the size of the input array.
#
#
#
#
#
# Update (2017-09-26):
# We have overhauled the problem description + test cases and specified clearly
# the graph is an undirected graph. For the directed graph follow up please see
# Redundant Connection II). We apologize for any inconvenience caused.
#
#
class Solution:
    def findRedundantConnection(self, edges: List[List[int]]) -> List[int]:
        # Union-find: process edges in input order; the first edge whose
        # endpoints are already connected closes the unique cycle, and by
        # construction it is the last cycle edge in the input, as required.
        parent = list(range(len(edges) + 1))

        def find(x: int) -> int:
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x

        for u, v in edges:
            ru, rv = find(u), find(v)
            if ru == rv:
                return [u, v]
            parent[ru] = rv
        return []
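As a quick standalone check of the standard union-find approach to this problem, here is a self-contained sketch run against the two worked examples in the statement above (expected outputs `[2,3]` and `[1,4]` per the statement):

```python
def find_redundant_connection(edges):
    # Union-find with path halving; the first edge whose endpoints are
    # already connected closes the cycle and, processed in input order,
    # is the last cycle edge in the array, as the statement requires.
    parent = list(range(len(edges) + 1))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return [u, v]
        parent[ru] = rv
    return []

print(find_redundant_connection([[1, 2], [1, 3], [2, 3]]))  # [2, 3]
```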
"kevin.wkmiao@gmail.com"
] | kevin.wkmiao@gmail.com |