blob_id stringlengths 40 40 | directory_id stringlengths 40 40 | path stringlengths 3 616 | content_id stringlengths 40 40 | detected_licenses listlengths 0 112 | license_type stringclasses 2 values | repo_name stringlengths 5 115 | snapshot_id stringlengths 40 40 | revision_id stringlengths 40 40 | branch_name stringclasses 777 values | visit_date timestamp[us]date 2015-08-06 10:31:46 2023-09-06 10:44:38 | revision_date timestamp[us]date 1970-01-01 02:38:32 2037-05-03 13:00:00 | committer_date timestamp[us]date 1970-01-01 02:38:32 2023-09-06 01:08:06 | github_id int64 4.92k 681M ⌀ | star_events_count int64 0 209k | fork_events_count int64 0 110k | gha_license_id stringclasses 22 values | gha_event_created_at timestamp[us]date 2012-06-04 01:52:49 2023-09-14 21:59:50 ⌀ | gha_created_at timestamp[us]date 2008-05-22 07:58:19 2023-08-21 12:35:19 ⌀ | gha_language stringclasses 149 values | src_encoding stringclasses 26 values | language stringclasses 1 value | is_vendor bool 2 classes | is_generated bool 2 classes | length_bytes int64 3 10.2M | extension stringclasses 188 values | content stringlengths 3 10.2M | authors listlengths 1 1 | author_id stringlengths 1 132 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ba3fe86f04f4feeb9d8e21a759d4a3d71dcbfa87 | 015383d460fa4321391d964c4f65c4d0c044dcc1 | /.venv/lib/python3.7/site-packages/faker/providers/color/uk_UA/__init__.py | 082c0e3c70d4d54b748de7d862ec0568cffeaf26 | [
"Unlicense"
] | permissive | kobbyrythm/temperature_stories_django | 8f400c8d3c8190b0e83f7bcfece930d696c4afe9 | 552d39f1f6f3fc1f0a2f7308a7da61bf1b9b3de3 | refs/heads/main | 2023-07-03T21:28:46.020709 | 2021-07-20T09:44:29 | 2021-07-20T09:44:29 | 468,728,039 | 3 | 0 | Unlicense | 2022-03-11T11:41:47 | 2022-03-11T11:41:46 | null | UTF-8 | Python | false | false | 10,523 | py | from collections import OrderedDict
from .. import Provider as ColorProvider
class Provider(ColorProvider):
"""Implement color provider for ``uk_UA`` locale.
Sources:
- https://uk.wikipedia.org/wiki/Список_кольорів
"""
all_colors = OrderedDict((
('Абрикосовий', '#FBCEB1'),
('Аквамариновий', '#7FFFD4'),
('Алізариновий червоний', '#E32636'),
('Амарантовий', '#E52B50'),
('Амарантово-рожевий', '#F19CBB'),
('Аметистовий', '#9966CC'),
('Андроїдний зелений', '#A4C639'),
('Арсеновий', '#3B444B'),
('Атомний мандаріновий', '#FF9966'),
('Багряний', '#FF2400'),
('Баклажановий', '#990066'),
('Барвінковий', '#CCCCFF'),
('Бежевий', '#F5F5DC'),
('Берлінська лазур', '#003153'),
('Блаватний', '#6495ED'),
('Блакитний', '#AFEEEE'),
('Блакитний Брандейса', '#0070FF'),
('Блакитно-зелений', '#00DDDD'),
('Блакитно-фіолетовий', '#8A2BE2'),
('Блідий рожево-ліловий', '#996666'),
('Блідо-брунатний', '#987654'),
('Блідо-волошковий', '#ABCDEF'),
('Блідо-карміновий', '#AF4035'),
('Блідо-каштановий', '#DDADAF'),
('Блідо-пурпуровий', '#F984E5'),
('Блідо-пісочний', '#DABDAB'),
('Блідо-рожевий', '#FADADD'),
('Болотний', '#ACB78E'),
('Бронзовий', '#CD7F32'),
('Брунатний', '#964B00'),
('Брунато-малиновий', '#800000'),
('Будяковий', '#D8BFD8'),
('Бузковий', '#C8A2C8'),
('Бургундський', '#900020'),
('Бурий', '#755A57'),
('Бурштиновий', '#FFBF00'),
('Білий', '#FFFFFF'),
('Білий навахо', '#FFDEAD'),
('Бірюзовий', '#30D5C8'),
('Бістр', '#3D2B1F'),
('Вода пляжа Бонді', '#0095B6'),
('Вохра', '#CC7722'),
('Відбірний жовтий', '#FFBA00'),
('Візантійський', '#702963'),
('Гарбуз', '#FF7518'),
('Гарячо-рожевий', '#FC0FC0'),
('Геліотроп', '#DF73FF'),
('Глибокий фіолетовий', '#423189'),
('Глицінія', '#C9A0DC'),
('Грушевий', '#D1E231'),
('Гумігут', '#E49B0F'),
('Гірчичний', '#FFDB58'),
('Дерева', '#79443B'),
('Джинсовий', '#1560BD'),
('Діамантово-рожевий', '#FF55A3'),
('Жовтий', '#FFFF00'),
('Жовто-зелений', '#ADFF2F'),
('Жовто-персиковий', '#FADFAD'),
('Захисний синій', '#1E90FF'),
('Зелена весна', '#00FF7F'),
('Зелена мʼята', '#98FF98'),
('Зелена сосна', '#01796F'),
('Зелене море', '#2E8B57'),
('Зелений', '#00FF00'),
('Зелений армійський', '#4B5320'),
('Зелений мох', '#ADDFAD'),
('Зелений папороть', '#4F7942'),
('Зелений чай', '#D0F0C0'),
('Зелено-сірий чай', '#CADABA'),
('Зеленувато-блакитний', '#008080'),
('Золотаво-березовий', '#DAA520'),
('Золотий', '#FFD700'),
('Золотисто-каштановий', '#6D351A'),
('Індиго', '#4B0082'),
('Іржавий', '#B7410E'),
('Кардинал (колір)', '#C41E3A'),
('Карміновий', '#960018'),
('Каштановий', '#CD5C5C'),
('Кобальтовий', '#0047AB'),
('Колір жовтого шкільного автобуса', '#FFD800'),
('Колір засмаги', '#D2B48C'),
('Колір морської піни', '#FFF5EE'),
('Колір морської хвилі', '#00FFFF'),
('Кораловий', '#FF7F50'),
('Королівський синій', '#4169E1'),
('Кремовий', '#FFFDD0'),
('Кукурудзяний', '#FBEC5D'),
('Кіновар', '#FF4D00'),
('Лавандний', '#E6E6FA'),
('Лазуровий', '#007BA7'),
('Лазурово-синій', '#2A52BE'),
('Лайм', '#CCFF00'),
('Латунний', '#B5A642'),
('Лимонний', '#FDE910'),
('Лимонно-кремовий', '#FFFACD'),
('Лляний', '#EEDC82'),
('Лляний', '#FAF0E6'),
('Лососевий', '#FF8C69'),
('Ліловий', '#DB7093'),
('Малахітовий', '#0BDA51'),
('Малиновий', '#DC143C'),
('Мандариновий', '#FFCC00'),
('Мисливський', '#004225'),
('Морквяний', '#ED9121'),
('Мідний', '#B87333'),
('Міжнародний помаранчевий', '#FF4F00'),
('Нефритовий', '#00A86B'),
('Ніжно-блакитний', '#E0FFFF'),
('Ніжно-оливковий', '#6B8E23'),
('Ніжно-рожевий', '#FB607F'),
('Оливковий', '#808000'),
('Опівнічно-синій', '#003366'),
('Орхідея', '#DA70D6'),
('Палена сіена', '#E97451'),
('Палений оранжевий', '#CC5500'),
('Панг', '#C7FCEC'),
('Паросток папаї', '#FFEFD5'),
('Пастельно-зелений', '#77DD77'),
('Пастельно-рожевий', '#FFD1DC'),
('Персиковий', '#FFE5B4'),
('Перський синій', '#6600FF'),
('Помаранчевий', '#FFA500'),
('Помаранчево-персиковий', '#FFCC99'),
('Помаранчево-рожевий', '#FF9966'),
('Пурпурний', '#FF00FF'),
('Пурпуровий', '#660099'),
('Пшеничний', '#F5DEB3'),
('Пісочний колір', '#F4A460'),
('Рожевий', '#FFC0CB'),
('Рожевий Маунтбеттена', '#997A8D'),
('Рожево-лавандний', '#FFF0F5'),
('Рожево-ліловий', '#993366'),
('Салатовий', '#7FFF00'),
('Сангрія', '#92000A'),
('Сапфіровий', '#082567'),
('Світло-синій', '#007DFF'),
('Сепія', '#704214'),
('Сиваво-зелений', '#ACE1AF'),
('Сигнально-помаранчевий', '#FF9900'),
('Синя пил', '#003399'),
('Синя сталь', '#4682B4'),
('Сині яйця малинівки', '#00CCCC'),
('Синій', '#0000FF'),
('Синій (RYB)', '#0247FE'),
('Синій (пігмент)', '#333399'),
('Синій ВПС', '#5D8AA8'),
('Синій Клейна', '#3A75C4'),
('Сливовий', '#660066'),
('Смарагдовий', '#50C878'),
('Спаржевий', '#7BA05B'),
('Срібний', '#C0C0C0'),
('Старе золото', '#CFB53B'),
('Сіра спаржа', '#465945'),
('Сірий', '#808080'),
('Сірий шифер', '#708090'),
('Темний весняно-зелений', '#177245'),
('Темний жовто-брунатний', '#918151'),
('Темний зелений чай', '#BADBAD'),
('Темний пастельно-зелений', '#03C03C'),
('Темний хакі', '#BDB76B'),
('Темний індиго', '#310062'),
('Темно-аспідний сірий', '#2F4F4F'),
('Темно-брунатний', '#654321'),
('Темно-бірюзовий', '#116062'),
('Темно-зелений', '#013220'),
('Темно-зелений хакі', '#78866B'),
('Темно-золотий', '#B8860B'),
('Темно-карміновий', '#560319'),
('Темно-каштановий', '#986960'),
('Темно-кораловий', '#CD5B45'),
('Темно-лазурний', '#08457E'),
('Темно-лососевий', '#E9967A'),
('Темно-мандариновий', '#FFA812'),
('Темно-оливковий', '#556832'),
('Темно-персиковий', '#FFDAB9'),
('Темно-рожевий', '#E75480'),
('Темно-синій', '#000080'),
('Ультрамариновий', '#120A8F'),
('Умбра', '#734A12'),
('Умбра палена', '#8A3324'),
('Фуксія', '#FF00FF'),
('Фіолетовий', '#8B00FF'),
('Фіолетово-баклажановий', '#991199'),
('Фіолетово-червоний', '#C71585'),
('Хакі', '#C3B091'),
('Цинамоновий', '#7B3F00'),
('Циннвальдит', '#EBC2AF'),
('Ціан (колір)', '#00FFFF'),
('Ціано-блакитний', '#F0F8FF'),
('Червоний', '#FF0000'),
('Червоно-буро-помаранчевий', '#CD5700'),
('Червоновато-брунатний', '#CC8899'),
('Чорний', '#000000'),
('Шафрановий', '#F4C430'),
('Шкіра буйвола', '#F0DC82'),
('Шоколадний', '#D2691E'),
('Яскраво-бурштиновий', '#FF7E00'),
('Яскраво-бірюзовий', '#08E8DE'),
('Яскраво-зелений', '#66FF00'),
('Яскраво-зелений', '#40826D'),
('Яскраво-рожевий', '#FF007F'),
('Яскраво-фіолетовий', '#CD00CD'),
('Ясно-брунатний', '#CD853F'),
('Ясно-вишневий', '#DE3163'),
('Ясно-лазуровий', '#007FFF'),
('Ясно-лазуровий (веб)', '#F0FFFF'),
))
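
    # Minimal usage sketch (added; assumes the standard Faker entry point,
    # whose color_name() draws from the mapping above):
    #   from faker import Faker
    #   fake = Faker('uk_UA')
    #   fake.color_name()   # e.g. 'Бірюзовий'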
| [
"b.scharlau@abdn.ac.uk"
] | b.scharlau@abdn.ac.uk |
fdb8b901086777739d2ab22a08e6e45e12e6d7f2 | 3ae62276c9aad8b9612d3073679b5cf3cb695e38 | /easyleetcode/leetcodes/SYL_4数学和位运算_9Fast Power.py | ce4086955015bc4ab044ceec5c0c1ae6f05889ed | [
"Apache-2.0"
] | permissive | gongtian1234/easy_leetcode | bc0b33c3c4f61d58a6111d76707903efe0510cb4 | d2b8eb5d2cafc71ee1ca633ce489c1a52bcc39ce | refs/heads/master | 2022-11-16T17:48:33.596752 | 2020-07-13T02:55:03 | 2020-07-13T02:55:03 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 767 | py |
class Solution:
"""
@param a, b, n: 32bit integers
@return: An integer
"""
def fastPower(self, a, b, n):
if n == 1:
return a % b
elif n == 0:
return 1 % b
elif n < 0:
return -1
        # split: (a * b) % p = ((a % p) * (b % p)) % p
        # split: (a^n) % p = ((a^(n//2)) * (a^(n//2))) % p
        product = self.fastPower(a, b, n // 2)
        product = (product * product) % b
        # odd exponent: multiply in one extra factor of a
if n % 2 == 1:
product = (product * a) % b
return product
    '''
    No need to compute the full power (e.g. 2^32) first and only then reduce mod b!
    (a+b) % p = ((a % p) + (b % p)) % p
    (a*b) % p = ((a % p) * (b % p)) % p
    '''
s = Solution()
print(s.fastPower(2, 3, 31))
print(s.fastPower(100, 1000, 1000))
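# Sanity check (added): Python's built-in three-argument pow() performs the
# same modular exponentiation natively; note that the argument order here is
# fastPower(base, modulus, exponent).
assert s.fastPower(2, 3, 31) == pow(2, 31, 3)
assert s.fastPower(100, 1000, 1000) == pow(100, 1000, 1000)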
| [
"425776024@qq.com"
] | 425776024@qq.com |
44eb876e9aaf0e2fd5d2c0b3e7b4f9bda5960f39 | 7393987b67f845cd5db4c83e3063b3d36108aa58 | /ansible/roles/cloud_master/files/api_srv/do_api.py | 77bcdca698bd9bbf5cfd3ce948fd211766bf3407 | [] | no_license | HackerDom/ructfe-2020 | 2d859afe113203813b1f65e9a55d275963b3af65 | a7a13546389bac2d39ef51e65eb3320569d02247 | refs/heads/main | 2023-02-04T21:10:40.035902 | 2020-12-26T20:37:25 | 2020-12-26T20:37:25 | 310,278,476 | 9 | 2 | null | null | null | null | UTF-8 | Python | false | false | 7,932 | py | # Developed by Alexander Bersenev from Hackerdom team, bay@hackerdom.ru
"""Common functions that make requests to digital ocean api"""
import requests
import time
import json
import sys
from do_token import TOKEN
VERBOSE = True
HEADERS = {
"Content-Type": "application/json",
"Authorization": "Bearer %s" % TOKEN,
}
def log(*params):
if VERBOSE:
print(*params, file=sys.stderr)
def get_all_vms(attempts=5, timeout=10):
vms = {}
url = "https://api.digitalocean.com/v2/droplets?per_page=200"
cur_attempt = 1
while True:
try:
resp = requests.get(url, headers=HEADERS)
if not str(resp.status_code).startswith("2"):
log(resp.status_code, resp.headers, resp.text)
raise Exception("bad status code %d" % resp.status_code)
data = json.loads(resp.text)
for droplet in data["droplets"]:
vms[droplet["id"]] = droplet
if ("links" in data and "pages" in data["links"] and
"next" in data["links"]["pages"]):
url = data["links"]["pages"]["next"]
else:
break
except Exception as e:
log("get_all_vms trying again %s" % (e,))
cur_attempt += 1
if cur_attempt > attempts:
return None # do not return parts of the output
time.sleep(timeout)
return list(vms.values())
def get_ids_by_vmname(vm_name):
ids = set()
droplets = get_all_vms()
if droplets is None:
return None
for droplet in droplets:
if droplet["name"] == vm_name:
ids.add(droplet['id'])
return ids
def check_vm_exists(vm_name):
droplets = get_all_vms()
if droplets is None:
return None
for droplet in droplets:
if droplet["name"] == vm_name:
return True
return False
def create_vm(vm_name, image, ssh_keys,
region="ams2", size="s-1vcpu-1gb", attempts=10, timeout=20):
for i in range(attempts):
try:
data = json.dumps({
"name": vm_name,
"region": region,
"size": size,
"image": image,
"ssh_keys": ssh_keys,
"backups": False,
"ipv6": False,
"user_data": "#!/bin/bash\n\n",
"private_networking": None,
"volumes": None,
"tags": [] # tags are too unstable in DO
})
log("creating new")
url = "https://api.digitalocean.com/v2/droplets"
resp = requests.post(url, headers=HEADERS, data=data)
if resp.status_code not in [200, 201, 202]:
log(resp.status_code, resp.headers, resp.text)
raise Exception("bad status code %d" % resp.status_code)
droplet_id = json.loads(resp.text)["droplet"]["id"]
return droplet_id
except Exception as e:
log("create_vm trying again %s" % (e,))
time.sleep(timeout)
return None
def delete_vm_by_id(droplet_id, attempts=10, timeout=20):
for i in range(attempts):
try:
log("deleting droplet")
url = "https://api.digitalocean.com/v2/droplets/%d" % droplet_id
resp = requests.delete(url, headers=HEADERS)
if not str(resp.status_code).startswith("2"):
log(resp.status_code, resp.headers, resp.text)
raise Exception("bad status code %d" % resp.status_code)
return True
except Exception as e:
log("delete_vm_by_id trying again %s" % (e,))
time.sleep(timeout)
return False
def get_ip_by_id(droplet_id, attempts=5, timeout=20):
for i in range(attempts):
try:
url = "https://api.digitalocean.com/v2/droplets/%d" % droplet_id
resp = requests.get(url, headers=HEADERS)
data = json.loads(resp.text)
ip = data['droplet']['networks']['v4'][0]['ip_address']
if ip.startswith("10."):
# take next
ip = data['droplet']['networks']['v4'][1]['ip_address']
return ip
except Exception as e:
log("get_ip_by_id trying again %s" % (e,))
time.sleep(timeout)
log("failed to get ip by id")
return None
def get_ip_by_vmname(vm_name):
ids = set()
droplets = get_all_vms()
if droplets is None:
return None
for droplet in droplets:
if droplet["name"] == vm_name:
ids.add(droplet['id'])
if len(ids) > 1:
log("warning: there are more than one droplet with name " + vm_name +
", using random :)")
if not ids:
return None
return get_ip_by_id(list(ids)[0])
def get_all_domain_records(domain, attempts=5, timeout=20):
records = {}
url = ("https://api.digitalocean.com/v2/domains/" + domain +
"/records?per_page=200")
cur_attempt = 1
while True:
try:
resp = requests.get(url, headers=HEADERS)
if not str(resp.status_code).startswith("2"):
log(resp.status_code, resp.headers, resp.text)
raise Exception("bad status code %d" % resp.status_code)
data = json.loads(resp.text)
for record in data["domain_records"]:
records[record["id"]] = record
if ("links" in data and "pages" in data["links"] and
"next" in data["links"]["pages"]):
url = data["links"]["pages"]["next"]
else:
break
except Exception as e:
log("get_all_domain_records trying again %s" % (e,))
cur_attempt += 1
if cur_attempt > attempts:
return None # do not return parts of the output
time.sleep(timeout)
return list(records.values())
def get_domain_ids_by_hostname(host_name, domain, print_warning_on_fail=False):
ids = set()
records = get_all_domain_records(domain)
if records is None:
return None
for record in records:
if record["type"] == "A" and record["name"] == host_name:
ids.add(record['id'])
if not ids:
if print_warning_on_fail:
log("failed to get domain ids by hostname", host_name)
return ids
def create_domain_record(name, ip, domain, attempts=10, timeout=20):
for i in range(attempts):
try:
data = json.dumps({
"type": "A",
"name": name,
"data": ip,
"ttl": 30
})
url = "https://api.digitalocean.com/v2/domains/%s/records" % domain
resp = requests.post(url, headers=HEADERS, data=data)
if not str(resp.status_code).startswith("2"):
log(resp.status_code, resp.headers, resp.text)
raise Exception("bad status code %d" % resp.status_code)
return True
except Exception as e:
log("create_domain_record trying again %s" % (e,))
time.sleep(timeout)
return None
def delete_domain_record(domain_id, domain, attempts=10, timeout=20):
for i in range(attempts):
try:
log("deleting domain record %d" % domain_id)
url = ("https://api.digitalocean.com/v2/domains" +
"/%s/records/%d" % (domain, domain_id))
resp = requests.delete(url, headers=HEADERS)
if not str(resp.status_code).startswith("2"):
log(resp.status_code, resp.headers, resp.text)
raise Exception("bad status code %d" % resp.status_code)
return True
except Exception as e:
log("delete_domain_record trying again %s" % (e,))
time.sleep(timeout)
return False
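
# Minimal usage sketch (added for illustration only: the droplet name, image
# slug and empty key list are hypothetical, and real calls are billed to the
# account behind TOKEN):
if __name__ == "__main__":
    droplet_id = create_vm("test-droplet", "ubuntu-20-04-x64", [])
    if droplet_id is not None:
        print(get_ip_by_id(droplet_id))
        delete_vm_by_id(droplet_id)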
| [
"bay@hackerdom.ru"
] | bay@hackerdom.ru |
6d49234b323cdcab62ce20e342bac60b8ce76fdb | 5ee3aa64cffb7cd13df824b0e669145cf41ca106 | /kinoko/misc/__init__.py | 6a575e89380da378b3e32bb85991341812c11671 | [
"MIT"
] | permissive | youxiaoxing/kinoko | aa0af77bde7a8349293c29a02e977d147c06f9d1 | 4750d8e6b1a68ba771cd89b352989ef05b293d45 | refs/heads/master | 2022-03-25T19:28:55.737172 | 2019-10-19T17:56:03 | 2019-10-19T17:56:03 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 171 | py | #!/usr/bin/env python
# -*- coding: utf-8 -*-
# vim: tabstop=4 shiftwidth=4 expandtab number
"""
Authors: qianweishuo<qzy922@gmail.com>
Date:    2019/6/27 11:20 PM
"""
| [
"koyo922@qq.com"
] | koyo922@qq.com |
a0508c2b3dce01a6195c9765c62d941647c47e98 | a03eba726a432d8ef133f2dc55894ba85cdc4a08 | /events/mixins.py | c91997975613113f8e2e94939e013eea469b1be0 | [
"MIT"
] | permissive | mansonul/events | 2546c9cfe076eb59fbfdb7b4ec8bcd708817d59b | 4f6ca37bc600dcba3f74400d299826882d53b7d2 | refs/heads/master | 2021-01-15T08:53:22.442929 | 2018-01-30T16:14:20 | 2018-01-30T16:14:20 | 99,572,230 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 697 | py | from django.http import JsonResponse
class AjaxFormMixin(object):
def form_invalid(self, form):
response = super(AjaxFormMixin, self).form_invalid(form)
if self.request.is_ajax():
return JsonResponse(form.errors, status=400)
else:
return response
def form_valid(self, form):
response = super(AjaxFormMixin, self).form_valid(form)
if self.request.is_ajax():
data = {
'title': form.instance.title,
'description': form.instance.description,
}
return JsonResponse(data)
else:
return response
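
# Minimal usage sketch (added; the Event model, fields and URL are
# hypothetical, kept as comments so the module stays importable without them):
#
#   from django.views.generic.edit import CreateView
#   from .models import Event
#
#   class EventCreateView(AjaxFormMixin, CreateView):
#       model = Event
#       fields = ['title', 'description']
#       success_url = '/events/'
#
# AJAX submissions then receive JSON (form errors with status 400, or the
# saved title/description), while ordinary requests follow the normal redirect.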
| [
"contact@dragosnicu.com"
] | contact@dragosnicu.com |
c3ebfd0b4f069d5706c9ede95f3ab0a751c57bf6 | 4f2cdd9a34fce873ff5995436edf403b38fb2ea5 | /Data-Structures/String/part1/P008.py | 5734c472a41b5a886b102af012eaecfaaa7e3e87 | [] | no_license | sanjeevseera/Python-Practice | 001068e9cd144c52f403a026e26e9942b56848b0 | 5ad502c0117582d5e3abd434a169d23c22ef8419 | refs/heads/master | 2021-12-11T17:24:21.136652 | 2021-08-17T10:25:01 | 2021-08-17T10:25:01 | 153,397,297 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 325 | py | """
Write a Python function that takes a list of words and returns the length of the longest one
"""
def Lword(wlist):
    longest = wlist[0]
    for w in wlist[1:]:
        if len(w) > len(longest):
            longest = w
    # the exercise asks for the length of the longest word, not the word itself
    return len(longest)
wlist = input("Enter the words, comma separated: ").split(',')
print(Lword(wlist)) | [
"seerasanjeev@gmail.com"
] | seerasanjeev@gmail.com |
3aa4e4c2b6fe388c724541de0a28a0383afd51c3 | 5ba3115523fb052d32db827e09443248ec5f6629 | /algorithm/PycharmProjects/0211/ladder.py | eb6210c9100757556cc04b234568b9a34172aa51 | [] | no_license | oliviaspark0825/TIL | 841095003ae794e14bd8c7e8c883826667c25f37 | 8bc66836f9a1eea5f42e9e1172f81f005abc042d | refs/heads/master | 2023-01-10T22:14:15.341489 | 2019-08-22T09:09:52 | 2019-08-22T09:09:52 | 162,099,057 | 0 | 0 | null | 2023-01-04T07:52:28 | 2018-12-17T08:32:43 | Jupyter Notebook | UTF-8 | Python | false | false | 1,040 | py | import sys
sys.stdin = open("ladder_input.txt")
T = 10
SIZE = 100
for _ in range(T):
    tc = int(input())  # each test case begins with its case number
    data = [list(map(int, input().split())) for _ in range(SIZE)]
    # Climb from the destination (the 2 on the bottom row) back up to row 0.
    # Walking the ladder in reverse means a rung is never re-entered, so the
    # already-visited 1s do not need to be overwritten.
    y = data[SIZE - 1].index(2)
    for x in range(SIZE - 1, 0, -1):
        if y > 0 and data[x][y - 1] == 1:
            # a rung leads left: follow it to its end
            while y > 0 and data[x][y - 1] == 1:
                y -= 1
        elif y < SIZE - 1 and data[x][y + 1] == 1:
            # a rung leads right: follow it to its end
            while y < SIZE - 1 and data[x][y + 1] == 1:
                y += 1
        # then continue one row up
    print("#{} {}".format(tc, y)) | [
"suhyunpark0825@gmail.com"
] | suhyunpark0825@gmail.com |
baeeb7458a0b27e77d946ec28a0609a49a7c9719 | b545bc57f3359a42b034078e3acb3e4d0c77a971 | /src/containerapp/azext_containerapp/commands.py | b118add8f3858b623c9bfacbd068194a7c7a47ca | [
"LicenseRef-scancode-generic-cla",
"MIT"
] | permissive | ShichaoQiu/azure-cli-extensions | d91672b3f7bf2ffae4f1072830e99632b66cf754 | 8134c01681963387a496b5d4627527a5ed044e19 | refs/heads/main | 2023-08-24T09:09:55.689202 | 2023-08-15T06:08:35 | 2023-08-15T06:08:35 | 230,201,126 | 0 | 1 | MIT | 2020-12-11T07:14:51 | 2019-12-26T05:33:04 | Python | UTF-8 | Python | false | false | 14,994 | py | # --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
# pylint: disable=line-too-long, too-many-statements, bare-except
# from azure.cli.core.commands import CliCommandType
# from msrestazure.tools import is_valid_resource_id, parse_resource_id
from azext_containerapp._client_factory import ex_handler_factory
from ._validators import validate_ssh
from ._transformers import (transform_containerapp_output,
transform_containerapp_list_output,
transform_job_execution_list_output,
transform_job_execution_show_output,
transform_revision_list_output,
transform_revision_output)
def load_command_table(self, _):
with self.command_group('containerapp') as g:
g.custom_show_command('show', 'show_containerapp', table_transformer=transform_containerapp_output)
g.custom_command('list', 'list_containerapp', table_transformer=transform_containerapp_list_output)
g.custom_command('create', 'create_containerapp', supports_no_wait=True, exception_handler=ex_handler_factory(), table_transformer=transform_containerapp_output)
g.custom_command('update', 'update_containerapp', supports_no_wait=True, exception_handler=ex_handler_factory(), table_transformer=transform_containerapp_output)
g.custom_command('delete', 'delete_containerapp', supports_no_wait=True, confirmation=True, exception_handler=ex_handler_factory())
g.custom_command('exec', 'containerapp_ssh', validator=validate_ssh)
g.custom_command('up', 'containerapp_up', supports_no_wait=False, exception_handler=ex_handler_factory())
g.custom_command('browse', 'open_containerapp_in_browser')
with self.command_group('containerapp replica') as g:
g.custom_show_command('show', 'get_replica') # TODO implement the table transformer
g.custom_command('list', 'list_replicas')
with self.command_group('containerapp logs') as g:
g.custom_show_command('show', 'stream_containerapp_logs', validator=validate_ssh)
with self.command_group('containerapp env logs') as g:
g.custom_show_command('show', 'stream_environment_logs')
with self.command_group('containerapp env') as g:
g.custom_show_command('show', 'show_managed_environment')
g.custom_command('list', 'list_managed_environments')
g.custom_command('create', 'create_managed_environment', supports_no_wait=True, exception_handler=ex_handler_factory())
g.custom_command('delete', 'delete_managed_environment', supports_no_wait=True, confirmation=True, exception_handler=ex_handler_factory())
g.custom_command('update', 'update_managed_environment', supports_no_wait=True, exception_handler=ex_handler_factory())
with self.command_group('containerapp job') as g:
g.custom_show_command('show', 'show_containerappsjob')
g.custom_command('list', 'list_containerappsjob')
g.custom_command('create', 'create_containerappsjob', supports_no_wait=True, exception_handler=ex_handler_factory())
g.custom_command('delete', 'delete_containerappsjob', supports_no_wait=True, confirmation=True, exception_handler=ex_handler_factory())
g.custom_command('update', 'update_containerappsjob', supports_no_wait=True, exception_handler=ex_handler_factory())
g.custom_command('start', 'start_containerappsjob', supports_no_wait=True, exception_handler=ex_handler_factory())
g.custom_command('stop', 'stop_containerappsjob', supports_no_wait=True, exception_handler=ex_handler_factory())
with self.command_group('containerapp job execution') as g:
g.custom_show_command('list', 'listexecution_containerappsjob', table_transformer=transform_job_execution_list_output)
g.custom_show_command('show', 'getSingleExecution_containerappsjob', table_transformer=transform_job_execution_show_output)
with self.command_group('containerapp job secret') as g:
g.custom_command('list', 'list_secrets_job')
g.custom_show_command('show', 'show_secret_job')
g.custom_command('remove', 'remove_secrets_job', confirmation=True, exception_handler=ex_handler_factory())
g.custom_command('set', 'set_secrets_job', exception_handler=ex_handler_factory())
with self.command_group('containerapp job identity') as g:
g.custom_command('assign', 'assign_managed_identity_job', supports_no_wait=True, exception_handler=ex_handler_factory())
g.custom_command('remove', 'remove_managed_identity_job', confirmation=True, supports_no_wait=True, exception_handler=ex_handler_factory())
g.custom_show_command('show', 'show_managed_identity_job')
with self.command_group('containerapp env dapr-component') as g:
g.custom_command('list', 'list_dapr_components')
g.custom_show_command('show', 'show_dapr_component')
g.custom_command('set', 'create_or_update_dapr_component')
g.custom_command('remove', 'remove_dapr_component')
with self.command_group('containerapp env certificate') as g:
g.custom_command('create', 'create_managed_certificate', is_preview=True)
g.custom_command('list', 'list_certificates', is_preview=True)
g.custom_command('upload', 'upload_certificate')
g.custom_command('delete', 'delete_certificate', confirmation=True, exception_handler=ex_handler_factory(), is_preview=True)
with self.command_group('containerapp env storage') as g:
g.custom_show_command('show', 'show_storage')
g.custom_command('list', 'list_storage')
g.custom_command('set', 'create_or_update_storage', supports_no_wait=True, exception_handler=ex_handler_factory())
g.custom_command('remove', 'remove_storage', supports_no_wait=True, confirmation=True, exception_handler=ex_handler_factory())
with self.command_group('containerapp service', is_preview=True) as g:
g.custom_command('list', 'list_all_services')
with self.command_group('containerapp service redis') as g:
g.custom_command('create', 'create_redis_service', supports_no_wait=True)
g.custom_command('delete', 'delete_redis_service', confirmation=True, supports_no_wait=True)
with self.command_group('containerapp service postgres') as g:
g.custom_command('create', 'create_postgres_service', supports_no_wait=True)
g.custom_command('delete', 'delete_postgres_service', confirmation=True, supports_no_wait=True)
with self.command_group('containerapp service kafka') as g:
g.custom_command('create', 'create_kafka_service', supports_no_wait=True)
g.custom_command('delete', 'delete_kafka_service', confirmation=True, supports_no_wait=True)
with self.command_group('containerapp service mariadb') as g:
g.custom_command('create', 'create_mariadb_service', supports_no_wait=True)
g.custom_command('delete', 'delete_mariadb_service', confirmation=True, supports_no_wait=True)
with self.command_group('containerapp identity') as g:
g.custom_command('assign', 'assign_managed_identity', supports_no_wait=True, exception_handler=ex_handler_factory())
g.custom_command('remove', 'remove_managed_identity', supports_no_wait=True, exception_handler=ex_handler_factory())
g.custom_show_command('show', 'show_managed_identity')
with self.command_group('containerapp github-action') as g:
g.custom_command('add', 'create_or_update_github_action', exception_handler=ex_handler_factory())
g.custom_show_command('show', 'show_github_action', exception_handler=ex_handler_factory())
g.custom_command('delete', 'delete_github_action', exception_handler=ex_handler_factory())
with self.command_group('containerapp revision') as g:
g.custom_command('activate', 'activate_revision')
g.custom_command('deactivate', 'deactivate_revision')
g.custom_command('list', 'list_revisions', table_transformer=transform_revision_list_output, exception_handler=ex_handler_factory())
g.custom_command('restart', 'restart_revision')
g.custom_show_command('show', 'show_revision', table_transformer=transform_revision_output, exception_handler=ex_handler_factory())
g.custom_command('copy', 'copy_revision', exception_handler=ex_handler_factory())
g.custom_command('set-mode', 'set_revision_mode', exception_handler=ex_handler_factory())
with self.command_group('containerapp revision label') as g:
g.custom_command('add', 'add_revision_label')
g.custom_command('remove', 'remove_revision_label')
g.custom_command('swap', 'swap_revision_label')
with self.command_group('containerapp ingress') as g:
g.custom_command('enable', 'enable_ingress', exception_handler=ex_handler_factory())
g.custom_command('disable', 'disable_ingress', exception_handler=ex_handler_factory())
g.custom_command('update', 'update_ingress', exception_handler=ex_handler_factory())
g.custom_show_command('show', 'show_ingress')
with self.command_group('containerapp ingress traffic') as g:
g.custom_command('set', 'set_ingress_traffic', exception_handler=ex_handler_factory())
g.custom_show_command('show', 'show_ingress_traffic')
with self.command_group('containerapp ingress sticky-sessions') as g:
g.custom_command('set', 'set_ingress_sticky_session', exception_handler=ex_handler_factory())
g.custom_show_command('show', 'show_ingress_sticky_session')
with self.command_group('containerapp ingress access-restriction') as g:
g.custom_command('set', 'set_ip_restriction', exception_handler=ex_handler_factory())
g.custom_command('remove', 'remove_ip_restriction')
g.custom_show_command('list', 'show_ip_restrictions')
with self.command_group('containerapp ingress cors') as g:
g.custom_command('enable', 'enable_cors_policy', exception_handler=ex_handler_factory())
g.custom_command('disable', 'disable_cors_policy', exception_handler=ex_handler_factory())
g.custom_command('update', 'update_cors_policy', exception_handler=ex_handler_factory())
g.custom_show_command('show', 'show_cors_policy')
with self.command_group('containerapp registry') as g:
g.custom_command('set', 'set_registry', exception_handler=ex_handler_factory())
g.custom_show_command('show', 'show_registry')
g.custom_command('list', 'list_registry')
g.custom_command('remove', 'remove_registry', exception_handler=ex_handler_factory())
with self.command_group('containerapp secret') as g:
g.custom_command('list', 'list_secrets')
g.custom_show_command('show', 'show_secret')
g.custom_command('remove', 'remove_secrets', exception_handler=ex_handler_factory())
g.custom_command('set', 'set_secrets', exception_handler=ex_handler_factory())
with self.command_group('containerapp dapr') as g:
g.custom_command('enable', 'enable_dapr', exception_handler=ex_handler_factory())
g.custom_command('disable', 'disable_dapr', exception_handler=ex_handler_factory())
with self.command_group('containerapp auth') as g:
g.custom_show_command('show', 'show_auth_config')
g.custom_command('update', 'update_auth_config', exception_handler=ex_handler_factory())
with self.command_group('containerapp auth microsoft') as g:
g.custom_show_command('show', 'get_aad_settings')
g.custom_command('update', 'update_aad_settings', exception_handler=ex_handler_factory())
with self.command_group('containerapp auth facebook') as g:
g.custom_show_command('show', 'get_facebook_settings')
g.custom_command('update', 'update_facebook_settings', exception_handler=ex_handler_factory())
with self.command_group('containerapp auth github') as g:
g.custom_show_command('show', 'get_github_settings')
g.custom_command('update', 'update_github_settings', exception_handler=ex_handler_factory())
with self.command_group('containerapp auth google') as g:
g.custom_show_command('show', 'get_google_settings')
g.custom_command('update', 'update_google_settings', exception_handler=ex_handler_factory())
with self.command_group('containerapp auth twitter') as g:
g.custom_show_command('show', 'get_twitter_settings')
g.custom_command('update', 'update_twitter_settings', exception_handler=ex_handler_factory())
with self.command_group('containerapp auth apple') as g:
g.custom_show_command('show', 'get_apple_settings')
g.custom_command('update', 'update_apple_settings', exception_handler=ex_handler_factory())
with self.command_group('containerapp auth openid-connect') as g:
g.custom_show_command('show', 'get_openid_connect_provider_settings')
g.custom_command('add', 'add_openid_connect_provider_settings', exception_handler=ex_handler_factory())
g.custom_command('update', 'update_openid_connect_provider_settings', exception_handler=ex_handler_factory())
g.custom_command('remove', 'remove_openid_connect_provider_settings', confirmation=True)
with self.command_group('containerapp ssl') as g:
g.custom_command('upload', 'upload_ssl', exception_handler=ex_handler_factory())
with self.command_group('containerapp hostname') as g:
g.custom_command('add', 'add_hostname', exception_handler=ex_handler_factory())
g.custom_command('bind', 'bind_hostname', exception_handler=ex_handler_factory())
g.custom_command('list', 'list_hostname')
g.custom_command('delete', 'delete_hostname', confirmation=True, exception_handler=ex_handler_factory())
with self.command_group('containerapp compose') as g:
g.custom_command('create', 'create_containerapps_from_compose')
with self.command_group('containerapp env workload-profile') as g:
g.custom_command('list-supported', 'list_supported_workload_profiles')
g.custom_command('list', 'list_workload_profiles')
g.custom_show_command('show', 'show_workload_profile')
g.custom_command('set', 'set_workload_profile', deprecate_info=g.deprecate(hide=True))
g.custom_command('add', 'add_workload_profile')
g.custom_command('update', 'update_workload_profile')
g.custom_command('delete', 'delete_workload_profile')
with self.command_group('containerapp patch', is_preview=True) as g:
g.custom_command('list', 'patch_list')
g.custom_command('apply', 'patch_apply')
g.custom_command('interactive', 'patch_interactive')
| [
"noreply@github.com"
] | ShichaoQiu.noreply@github.com |
63fa17d294b0132764b6e2dcc7e328765d7b7745 | d3d730cda1d4fd89dc2f52bcb5366c8e7dd8e1db | /Tenka1/D2.py | 4ad83d559b672ddd0b07c717e3cc39720f530999 | [] | no_license | sasakishun/atcoder | 86e3c161f306d96e026172138aca06ac9a90f3ea | 687afaa05b5a98a04675ab24ac7a53943a295d8e | refs/heads/master | 2020-03-19T19:58:07.717059 | 2019-05-04T14:39:00 | 2019-05-04T14:39:00 | 136,882,026 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 429 | py | N=int(input())
k = 2
# N must be a triangular number k*(k-1)/2; k subsets can then be built in
# which every pair of subsets shares exactly one common element.
while N >= k * (k - 1) // 2:
    if N == k * (k - 1) // 2:
print("Yes")
print(k)
a=list([] for i in range(k))
t=1
for i in range(0,k):
for n in range(1,k-i):
a[i].append(t)
a[i+n].append(t)
t+=1
for i in range(k):
print(str(k-1)+" "+" ".join(map(str,a[i])))
exit()
else:
k+=1
print("No")
| [
"Pakka-xeno@keio.jp"
] | Pakka-xeno@keio.jp |
a5b7bc2c28b4c11679b81349b8926af80e4b08ab | 1c2428489013d96ee21bcf434868358312f9d2af | /ultracart/models/gift_certificate_response.py | 199e3478d7a6a976125ce016d228ad964699033a | [
"Apache-2.0"
] | permissive | UltraCart/rest_api_v2_sdk_python | 7821a0f6e0e19317ee03c4926bec05972900c534 | 8529c0bceffa2070e04d467fcb2b0096a92e8be4 | refs/heads/master | 2023-09-01T00:09:31.332925 | 2023-08-31T12:52:10 | 2023-08-31T12:52:10 | 67,047,356 | 2 | 0 | null | null | null | null | UTF-8 | Python | false | false | 6,234 | py | # coding: utf-8
"""
UltraCart Rest API V2
UltraCart REST API Version 2 # noqa: E501
OpenAPI spec version: 2.0.0
Contact: support@ultracart.com
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
import pprint
import re # noqa: F401
import six
class GiftCertificateResponse(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
"""
"""
Attributes:
swagger_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
swagger_types = {
'error': 'Error',
'gift_certificate': 'GiftCertificate',
'metadata': 'ResponseMetadata',
'success': 'bool',
'warning': 'Warning'
}
attribute_map = {
'error': 'error',
'gift_certificate': 'gift_certificate',
'metadata': 'metadata',
'success': 'success',
'warning': 'warning'
}
def __init__(self, error=None, gift_certificate=None, metadata=None, success=None, warning=None): # noqa: E501
"""GiftCertificateResponse - a model defined in Swagger""" # noqa: E501
self._error = None
self._gift_certificate = None
self._metadata = None
self._success = None
self._warning = None
self.discriminator = None
if error is not None:
self.error = error
if gift_certificate is not None:
self.gift_certificate = gift_certificate
if metadata is not None:
self.metadata = metadata
if success is not None:
self.success = success
if warning is not None:
self.warning = warning
@property
def error(self):
"""Gets the error of this GiftCertificateResponse. # noqa: E501
:return: The error of this GiftCertificateResponse. # noqa: E501
:rtype: Error
"""
return self._error
@error.setter
def error(self, error):
"""Sets the error of this GiftCertificateResponse.
:param error: The error of this GiftCertificateResponse. # noqa: E501
:type: Error
"""
self._error = error
@property
def gift_certificate(self):
"""Gets the gift_certificate of this GiftCertificateResponse. # noqa: E501
:return: The gift_certificate of this GiftCertificateResponse. # noqa: E501
:rtype: GiftCertificate
"""
return self._gift_certificate
@gift_certificate.setter
def gift_certificate(self, gift_certificate):
"""Sets the gift_certificate of this GiftCertificateResponse.
:param gift_certificate: The gift_certificate of this GiftCertificateResponse. # noqa: E501
:type: GiftCertificate
"""
self._gift_certificate = gift_certificate
@property
def metadata(self):
"""Gets the metadata of this GiftCertificateResponse. # noqa: E501
:return: The metadata of this GiftCertificateResponse. # noqa: E501
:rtype: ResponseMetadata
"""
return self._metadata
@metadata.setter
def metadata(self, metadata):
"""Sets the metadata of this GiftCertificateResponse.
:param metadata: The metadata of this GiftCertificateResponse. # noqa: E501
:type: ResponseMetadata
"""
self._metadata = metadata
@property
def success(self):
"""Gets the success of this GiftCertificateResponse. # noqa: E501
Indicates if API call was successful # noqa: E501
:return: The success of this GiftCertificateResponse. # noqa: E501
:rtype: bool
"""
return self._success
@success.setter
def success(self, success):
"""Sets the success of this GiftCertificateResponse.
Indicates if API call was successful # noqa: E501
:param success: The success of this GiftCertificateResponse. # noqa: E501
:type: bool
"""
self._success = success
@property
def warning(self):
"""Gets the warning of this GiftCertificateResponse. # noqa: E501
:return: The warning of this GiftCertificateResponse. # noqa: E501
:rtype: Warning
"""
return self._warning
@warning.setter
def warning(self, warning):
"""Sets the warning of this GiftCertificateResponse.
:param warning: The warning of this GiftCertificateResponse. # noqa: E501
:type: Warning
"""
self._warning = warning
def to_dict(self):
"""Returns the model properties as a dict"""
result = {}
for attr, _ in six.iteritems(self.swagger_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
result[attr] = value
if issubclass(GiftCertificateResponse, dict):
for key, value in self.items():
result[key] = value
return result
def to_str(self):
"""Returns the string representation of the model"""
return pprint.pformat(self.to_dict())
def __repr__(self):
"""For `print` and `pprint`"""
return self.to_str()
def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, GiftCertificateResponse):
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
"""Returns true if both objects are not equal"""
return not self == other
| [
"perry@ultracart.com"
] | perry@ultracart.com |
8ee0e7e7f5d3d875d56c624b78678ea83bbff8eb | 669eee5f6d8093186a5485456c5cea3ba7f76b5f | /1.py | 4be2afec506ba70e8fc35fc14d8f129e1cb6e93c | [] | no_license | AnnaCt/Zadanie2 | ff1fcfb4b08851a2281328c40f7e8bb7450197a1 | 914ac4cc753353824cf0eff3d4067d6329246f74 | refs/heads/master | 2023-08-05T16:45:24.961910 | 2021-10-01T10:21:40 | 2021-10-01T10:21:40 | 412,194,890 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 112 | py | name = input()
surname = input()
date_of_birth = input()
print(name, '_', surname, '_', int(date_of_birth)+60)
| [
"you@example.com"
] | you@example.com |
1f5101cb29cfe1daa30392aa6f2aeeafa8e4209b | eba02c3c98f00288e81b5898a201cc29518364f7 | /chapter_005/exercises/more_conditional_tests.py | 9f4d2560c6178f258467c7d80860c4bb30f28dcb | [] | no_license | kengru/pcrash-course | 29f3cf49acfd4a177387634410d28de71d279e06 | 5aa5b174e85a0964eaeee1874b2be1c144b7c192 | refs/heads/master | 2021-05-16T09:36:16.349626 | 2017-10-11T17:56:56 | 2017-10-11T17:56:56 | 104,481,645 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 429 | py | # Exercise 5-2. Creating different test conditions based on what was learned.
if 'ham' == 'ham' and 'ham' != 'cheese':
print('Yes it does, ham.')
if 'ham' == 'HAM'.lower() or 'magic' != 'cool':
print('Lower works.')
if 45 > 22:
print('Math is on point.')
compilation = ['movies', 'tv', 'internet']
if 'movies' in compilation:
    print("It's here.")
if 'ipad' not in compilation:
print('Yes, it is not here.')
| [
"kengrullon@gmail.com"
] | kengrullon@gmail.com |
5b8097b0e65b0a05a6362af661749b6e98f2a706 | 1a9852fe468f18e1ac3042c09286ccda000a4135 | /Specialist Certificate in Data Analytics Essentials/DataCamp/05-Working_with_Dates_and_Times/e21_what_time_did_the_bike_leave_global_edition.py | 6d9438cf0cfab5cf725998d2d137511ed7776d3e | [] | no_license | sarmabhamidipati/UCD | 452b2f1e166c1079ec06d78e473730e141f706b2 | 101ca3152207e2fe67cca118923896551d5fee1c | refs/heads/master | 2023-08-14T15:41:24.312859 | 2021-09-22T17:33:01 | 2021-09-22T17:33:01 | 386,592,878 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,105 | py | """
What time did the bike leave? (Global edition)
When you need to move a datetime from one timezone into another, use .astimezone() and tz.
Often you will be moving things into UTC, but for fun let's try moving things from 'America/New_York'
into a few different time zones.
Set uk to be the timezone for the UK: 'Europe/London'.
Change local to be in the uk timezone and assign it to notlocal.
Set ist to be the timezone for India: 'Asia/Kolkata'.
Change local to be in the ist timezone and assign it to notlocal.
Set sm to be the timezone for Samoa: 'Pacific/Apia'.
Change local to be in the sm timezone and assign it to notlocal.
"""
from dateutil import tz
from datetime import datetime
onebike_datetimes = [
{'start': datetime(2017, 10, 1, 15, 23, 25), 'end': datetime(2017, 10, 1, 15, 26, 26)},
{'start': datetime(2017, 10, 1, 15, 42, 57), 'end': datetime(2017, 10, 1, 17, 49, 59)},
{'start': datetime(2017, 10, 2, 6, 37, 10), 'end': datetime(2017, 10, 2, 6, 42, 53)},
{'start': datetime(2017, 10, 2, 8, 56, 45), 'end': datetime(2017, 10, 2, 9, 18, 3)},
{'start': datetime(2017, 10, 2, 18, 23, 48), 'end': datetime(2017, 10, 2, 18, 45, 5)}
]
# Create the timezone object
uk = tz.gettz('Europe/London')
# Pull out the start of the first trip
local = onebike_datetimes[0]['start']
# What time was it in the UK?
notlocal = local.astimezone(uk)
# Print them out and see the difference
print(local.isoformat())
print(notlocal.isoformat())
# Create the timezone object
ist = tz.gettz('Asia/Kolkata')
# Pull out the start of the first trip
local = onebike_datetimes[0]['start']
# What time was it in India?
notlocal = local.astimezone(ist)
# Print them out and see the difference
print(local.isoformat())
print(notlocal.isoformat())
# Create the timezone object
sm = tz.gettz('Pacific/Apia')
# Pull out the start of the first trip
local = onebike_datetimes[0]['start']
# What time was it in Samoa?
notlocal = local.astimezone(sm)
# Print them out and see the difference
print(local.isoformat())
print(notlocal.isoformat()) | [
"b_vvs@yahoo.com"
] | b_vvs@yahoo.com |
1da4536c0af19d8e8bde11b153ecb6d410d36f41 | 9743d5fd24822f79c156ad112229e25adb9ed6f6 | /xai/brain/wordbase/nouns/_moderator.py | 74b75d194a30c3cf654b57a41026b96f7db76daf | [
"MIT"
] | permissive | cash2one/xai | de7adad1758f50dd6786bf0111e71a903f039b64 | e76f12c9f4dcf3ac1c7c08b0cc8844c0b0a104b6 | refs/heads/master | 2021-01-19T12:33:54.964379 | 2017-01-28T02:00:50 | 2017-01-28T02:00:50 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 695 | py |
# class header
class _MODERATOR():
def __init__(self,):
self.name = "MODERATOR"
self.definitions = [u'someone who tries to help other people come to an agreement: ', u'someone who makes certain that a formal discussion happens without problems and follows the rules: ', u'someone who makes certain that all the people marking an examination use the same standards: ', u'someone who makes sure that the rules of an internet discussion are not broken, for example by removing any threatening or offensive messages']
self.parents = []
self.childen = []
self.properties = []
self.jsondata = {}
self.specie = 'nouns'
def run(self, obj1 = [], obj2 = []):
return self.jsondata
| [
"xingwang1991@gmail.com"
] | xingwang1991@gmail.com |
62d9c8feb5e10a434cb0452b1e2a854e61bd7836 | 4751fd86184b64316d694a98671d34faae76ffe6 | /plannerrr/migrations/0025_schedules_course_title.py | 3e43b6f4e109551e60a1553e79f52dd5b692afbb | [] | no_license | mohammedaliyu136/dg_planner | 8a6a4888cc109d6c3a1cb115494a1e6decbb864a | a0fb87e182527e541e7758a2c4720ddbb2438145 | refs/heads/master | 2020-04-03T08:09:02.020426 | 2018-10-29T19:57:16 | 2018-10-29T19:57:16 | 155,124,132 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 469 | py | # -*- coding: utf-8 -*-
# Generated by Django 1.11.1 on 2018-01-29 06:42
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('planner', '0024_auto_20180129_0715'),
]
operations = [
migrations.AddField(
model_name='schedules',
name='course_title',
field=models.CharField(max_length=50, null=True),
),
]
| [
"mohammedaliyu136@gmail.com"
] | mohammedaliyu136@gmail.com |
7b0e8af35ae9596fa6784ec9856f4ceaa39818e0 | 97886c65242f9fa3814f205b509483890b709e8a | /1_Zadania/Dzien_2/4_Iteratory_generatory/zad_1.py | 2b92f19d7f42761f1100686273b493a56f09f56c | [] | no_license | Danutelka/Coderslab-Python-progr-obiektowe | d16cad0711079c9dd83676066f8f44dedb9013a2 | b68aeda14024be48fdb4fb1b5e3d48afbaac0b8c | refs/heads/master | 2020-08-04T15:26:20.479103 | 2019-05-05T18:16:04 | 2019-05-05T18:25:56 | 212,183,724 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 439 | py | import random
class Dice:
def __init__(self, type):
self._types = [3, 4, 6, 8, 10, 12, 20, 100]
self.type = type
@property
def type(self):
return self._type
@type.setter
def type(self, type):
if type in self._types:
self._type = type
else:
self._type = 6
def roll(self):
return random.randint(1, self.type)
d = Dice(10)
print(d.roll())
| [
"kawecka.d@gmail.com"
] | kawecka.d@gmail.com |
9631e0cb94c18704e11042ac1a62aa26b73253be | 3a547785455c4b447de5f43e134aee1f57388a7e | /SWEA/4406.py | 66281b078c22f92d03c424cb7c51ce997463f0e7 | [] | no_license | Jungwoo-20/Algorithm | bfdbca1b87500e508307a639dc2af5a86258c227 | 7767bf5b0ce089155809743af0b562b076e75d9b | refs/heads/master | 2023-08-17T05:59:25.083859 | 2021-09-22T03:26:45 | 2021-09-22T03:26:45 | 289,666,779 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 287 | py | T = int(input())
vowels = ['a', 'e', 'i', 'o', 'u']
for cnt in range(1, T + 1):
    word = input()
    # keep only the non-vowel characters
    result = ''.join(ch for ch in word if ch not in vowels)
    print('#' + str(cnt) + ' ' + result) | [
"jungwoo7250@naver.com"
] | jungwoo7250@naver.com |
96351d6ee72ddd536395d7c25445d3e4beb59284 | f5dd6cb8e9c4faa637a5ad0a1a09301665a2da72 | /src/scripts/adaboost.py | 3526ac9d81851678f4233f496390c85cb52f2272 | [
"MIT"
] | permissive | elfmanryan/CRC4Docker | ac33b8952ddb5e9b2619fc0b9e28b30edec85ca6 | 8f9db8a1abb2bd3bb45f510c39268f365ed8a616 | refs/heads/master | 2020-03-29T16:01:30.890993 | 2018-09-07T16:32:19 | 2018-09-07T16:32:19 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 9,880 | py | #!/usr/bin/env python
#******************************************************************************
# Name: adaboost.py
# Purpose: supervised classification of multispectral images with ADABOOST.M1
# Usage:
# python adaboost.py
#
# Copyright (c) 2018 Mort Canty
import auxil.supervisedclass as sc
import auxil.readshp as rs
import gdal, os, time, sys, getopt
from osgeo.gdalconst import GA_ReadOnly, GDT_Byte
import matplotlib.pyplot as plt
import numpy as np
def seq_class(ffns,Xs,alphas,K):
# weighted classification of observations Xs with list of FFN classifiers
# returns labels, class membership probabilities
M = len(ffns)
_,ps1 = ffns[0].classify(Xs)
ps = alphas[0]*ps1
for i in range(1,M):
_,ps1 = ffns[i].classify(Xs)
ps += alphas[i]*ps1
den = np.sum(ps,0)
for i in range(K):
ps[i,:] = ps[i,:]/den
    labels = np.argmax(ps, 0)  # per-observation labels: ps has shape (K, n)
return (labels,ps)
class Ffnekfab(sc.Ffnekf):
def __init__(self,Gs,ls,p,L,epochs=5):
sc.Ffnekf.__init__(self,Gs,ls,L,epochs)
tmp = np.roll(np.cumsum(p),1)
tmp[0] = 0.0
self._sd = tmp
def train(self):
try:
# update matrices for hidden and output weight
dWh = np.zeros((self._N+1,self._L))
dWo = np.zeros((self._L+1,self._K))
cost = []
costv = []
itr = 0
epoch = 0
maxitr = self._epochs*self._m
while itr < maxitr:
# select training pair from distribution d
nu = np.sum(np.where(self._sd < np.random.rand(),1,0))-1
x = self._Gs[:,nu]
y = self._ls[:,nu]
# forward pass
m = self.forwardpass(x)
# output error
e = y - m
# loop over output neurons
for k in range(self._K):
# linearized input
Ao = m[k,0]*(1-m[k,0])*self._n
# Kalman gain
So = self._So[:,:,k]
SA = So*Ao
Ko = SA/((Ao.T*SA)[0] + 1)
# determine delta for this neuron
dWo[:,k] = (Ko*e[k,0]).ravel()
# update its covariance matrix
So -= Ko*Ao.T*So
self._So[:,:,k] = So
# update the output weights
self._Wo = self._Wo + dWo
# backpropagated error
beta_o = e.A*m.A*(1-m.A)
# loop over hidden neurons
for j in range(self._L):
# linearized input
Ah = x*(self._n)[j+1,0]*(1-self._n[j+1,0])
# Kalman gain
Sh = self._Sh[:,:,j]
SA = Sh*Ah
Kh = SA/((Ah.T*SA)[0] + 1)
# determine delta for this neuron
dWh[:,j] = (Kh*(self._Wo[j+1,:]*beta_o)).ravel()
# update its covariance matrix
Sh -= Kh*Ah.T*Sh
self._Sh[:,:,j] = Sh
# update the hidden weights
self._Wh = self._Wh + dWh
if itr % self._m == 0:
cost.append(self.cost())
costv.append(self.costv())
epoch += 1
itr += 1
return (cost,costv)
except Exception as e:
print 'Error: %s'%e
return None
def main():
usage = '''
Usage:
------------------------------------------------
supervised classification of multispectral images with ADABOOST.M1
python %s [OPTIONS] filename trainShapefile
Options:
-h this help
-p <list> band positions e.g. -p [1,2,3,4]
-L <int> number of hidden neurons (default 10)
-n <int> number of nnet instances (default 50)
-e <int> epochs for ekf training (default 3)
If the input file is named
path/filenbasename.ext then
The output classification file is named
path/filebasename_class.ext
------------------------------------------------''' %sys.argv[0]
outbuffer = 100
options, args = getopt.getopt(sys.argv[1:],'hp:n:e:L:')
pos = None
L = [10]
epochs = 3
instances = 50
for option, value in options:
if option == '-h':
print usage
return
elif option == '-p':
pos = eval(value)
elif option == '-e':
epochs = eval(value)
elif option == '-n':
instances = eval(value)
elif option == '-L':
L = [eval(value)]
if len(args) != 2:
print 'Incorrect number of arguments'
print usage
sys.exit(1)
print 'Training with ADABOOST.M1 and %i epochs per ffn'%epochs
infile = args[0]
trnfile = args[1]
gdal.AllRegister()
if infile:
inDataset = gdal.Open(infile,GA_ReadOnly)
cols = inDataset.RasterXSize
rows = inDataset.RasterYSize
bands = inDataset.RasterCount
geotransform = inDataset.GetGeoTransform()
else:
return
if pos is None:
pos = range(1,bands+1)
N = len(pos)
rasterBands = []
for b in pos:
rasterBands.append(inDataset.GetRasterBand(b))
# output file
path = os.path.dirname(infile)
basename = os.path.basename(infile)
root, ext = os.path.splitext(basename)
outfile = '%s/%s_class%s'%(path,root,ext)
# setup output class image dataset
driver = inDataset.GetDriver()
outDataset = driver.Create(outfile,cols,rows,1,GDT_Byte)
projection = inDataset.GetProjection()
if geotransform is not None:
outDataset.SetGeoTransform(geotransform)
if projection is not None:
outDataset.SetProjection(projection)
outBand = outDataset.GetRasterBand(1)
# get the training data
Xs,Ls,K,_ = rs.readshp(trnfile,inDataset,pos)
m = Ls.shape[0]
# stretch the pixel vectors to [-1,1]
maxx = np.max(Xs,0)
minx = np.min(Xs,0)
for j in range(len(pos)):
Xs[:,j] = 2*(Xs[:,j]-minx[j])/(maxx[j]-minx[j]) - 1.0
# random permutation of training data
idx = np.random.permutation(m)
Xs = Xs[idx,:]
Ls = Ls[idx,:]
# train on 2/3 of training examples, rest for testing
mtrn = int(0.67*m)
mtst = m-mtrn
Xstrn = Xs[:mtrn,:]
Lstrn = Ls[:mtrn,:]
Xstst = Xs[mtrn:,:]
Lstst = Ls[mtrn:,:]
labels_train = np.argmax(Lstrn,1)
labels_test = np.argmax(Lstst,1)
# list of network instances, weights and errors
ffns = []
alphas = []
errtrn = []
errtst = []
# initial probability distribution
p = np.ones(mtrn)/mtrn
# loop through the network instance
start = time.time()
instance = 1
while instance<instances:
trial = 1
while trial < 6:
print 'running instance: %i trial: %i' \
%(instance,trial)
# instantiate a ffn and train it
ffn = Ffnekfab(Xstrn,Lstrn,p,L,epochs)
ffn.train()
# determine beta
labels,_ = ffn.classify(Xstrn)
labels -= 1
idxi = np.where(labels != labels_train)[0]
idxc = np.where(labels == labels_train)[0]
epsilon = np.sum(p[idxi])
beta = epsilon/(1-epsilon)
if beta < 1.0:
# continue
ffns.append(ffn)
alphas.append(np.log(1.0/beta))
# update distribution
p[idxc] = p[idxc]*beta
p = p/np.sum(p)
# train error
labels,_=seq_class(ffns,Xstrn,alphas,K)
tmp=np.where(labels!=labels_train,1,0)
errtrn.append(np.sum(tmp)/float(mtrn))
# test error
labels,_=seq_class(ffns,Xstst,alphas,K)
tmp = np.where(labels!=labels_test,1,0)
errtst.append(np.sum(tmp)/float(mtst))
print 'train error: %f test error: %f'\
%(errtrn[-1],errtst[-1])
# this instance is done
trial = 6
instance += 1
else:
trial += 1
# break off training
if trial==6:
instance = instances
print 'elapsed time %s' %str(time.time()-start)
# plot errors
n = len(errtrn)
errtrn = np.array(errtrn)
errtst = np.array(errtst)
x = np.arange(1,n+1,1)
ax = plt.subplot(111)
ax.semilogx(x,errtrn,label='train')
ax.semilogx(x,errtst,label='test')
ax.legend()
ax.set_xlabel('number of networks')
ax.set_ylabel('classification error')
plt.show()
# classify the image
print 'classifying...'
start = time.time()
tile = np.zeros((outbuffer*cols,N),dtype=np.float32)
for row in range(rows/outbuffer):
print 'row: %i'%(row*outbuffer)
for j in range(N):
tile[:,j] = rasterBands[j].ReadAsArray(0,row*outbuffer,cols,outbuffer).ravel()
tile[:,j] = 2*(tile[:,j]-minx[j])/(maxx[j]-minx[j]) - 1.0
cls, _ = seq_class(ffns,tile,alphas,K)
outBand.WriteArray(np.reshape(cls,(outbuffer,cols)),0,row*outbuffer)
outBand.FlushCache()
print 'thematic map written to: %s'%outfile
print 'elapsed time %s' %str(time.time()-start)
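# Example invocation (added; the image and shapefile paths are hypothetical):
#   python adaboost.py -p [1,2,3,4] -L 10 -n 50 -e 3 imagery/may0107.tif train/train.shp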
if __name__ == '__main__':
main() | [
"mort.canty@gmail.com"
] | mort.canty@gmail.com |
a3e2fd3c796bc8fc667d472ac22d714eeb9e8107 | 4175c20f89fc408696d22a488a29b46836e15cbf | /travelly/travelly/wsgi.py | b96b60b26a9cae9a3f63b84f092c6c3d187f1e2f | [
"MIT",
"LicenseRef-scancode-unknown-license-reference"
] | permissive | UPstartDeveloper/fiercely-souvenir | 47df7885a153b8df9e6af1aac72579da4af85e63 | 65d933c64a3bf830f51ac237f5781ddfb69f342c | refs/heads/master | 2022-12-09T21:37:19.810414 | 2021-04-30T23:47:39 | 2021-04-30T23:47:39 | 228,493,146 | 0 | 0 | MIT | 2022-12-08T03:24:23 | 2019-12-16T23:20:11 | JavaScript | UTF-8 | Python | false | false | 393 | py | """
WSGI config for travelly project.
It exposes the WSGI callable as a module-level variable named ``application``.
For more information on this file, see
https://docs.djangoproject.com/en/3.0/howto/deployment/wsgi/
"""
import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'travelly.settings')
application = get_wsgi_application()
| [
"zainr7989@gmail.com"
] | zainr7989@gmail.com |
68c6dbebfeb0c8c88a4d8d02279baf07b53e202c | facb8b9155a569b09ba66aefc22564a5bf9cd319 | /wp2/merra_scripts/01_netCDF_extraction/merra902Combine/258-tideGauge.py | c13716ac8bcd72579f0ccc17290b62f790b515e3 | [] | no_license | moinabyssinia/modeling-global-storm-surges | 13e69faa8f45a1244a964c5de4e2a5a6c95b2128 | 6e385b2a5f0867df8ceabd155e17ba876779c1bd | refs/heads/master | 2023-06-09T00:40:39.319465 | 2021-06-25T21:00:44 | 2021-06-25T21:00:44 | 229,080,191 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,376 | py | # -*- coding: utf-8 -*-
"""
Created on Tue Jun 17 11:28:00 2020
--------------------------------------------
Load predictors for each TG and combine them
--------------------------------------------
@author: Michael Tadesse
"""
import os
import pandas as pd
#define directories
# dir_name = 'F:\\01_erainterim\\01_eraint_predictors\\eraint_D3'
dir_in = "/lustre/fs0/home/mtadesse/merraLocalized"
dir_out = "/lustre/fs0/home/mtadesse/merraAllCombined"
def combine():
os.chdir(dir_in)
#get names
tg_list_name = os.listdir()
x = 258
y = 259
for tg in range(x, y):
os.chdir(dir_in)
tg_name = tg_list_name[tg]
print(tg_name, '\n')
#looping through each TG folder
os.chdir(tg_name)
#check for empty folders
if len(os.listdir()) == 0:
continue
#defining the path for each predictor
where = os.getcwd()
csv_path = {'slp' : os.path.join(where, 'slp.csv'),\
"wnd_u": os.path.join(where, 'wnd_u.csv'),\
'wnd_v' : os.path.join(where, 'wnd_v.csv')}
first = True
for pr in csv_path.keys():
print(tg_name, ' ', pr)
#read predictor
pred = pd.read_csv(csv_path[pr])
#remove unwanted columns
pred.drop(['Unnamed: 0'], axis = 1, inplace=True)
#sort based on date as merra files are scrambled
pred.sort_values(by = 'date', inplace=True)
#give predictor columns a name
pred_col = list(pred.columns)
for pp in range(len(pred_col)):
if pred_col[pp] == 'date':
continue
pred_col[pp] = pr + str(pred_col[pp])
pred.columns = pred_col
#merge all predictors
if first:
pred_combined = pred
first = False
else:
pred_combined = pd.merge(pred_combined, pred, on = 'date')
#saving pred_combined
os.chdir(dir_out)
tg_name = str(tg) + "_" + tg_name
pred_combined.to_csv('.'.join([tg_name, 'csv']))
os.chdir(dir_in)
print('\n')
#run script
combine()
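# Minimal sketch of the merge-on-date pattern used in combine(); the column
# names below are illustrative, not the real MERRA headers:
#
# slp = pd.DataFrame({'date': ['d1', 'd2'], 'slp0': [1, 2]})
# wnd = pd.DataFrame({'date': ['d1', 'd2'], 'wnd_u0': [3, 4]})
# combined = pd.merge(slp, wnd, on='date')
# # -> one row per date, each predictor contributing its prefixed columns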
| [
"michaelg.tadesse@gmail.com"
] | michaelg.tadesse@gmail.com |
cc5701e8bd14efc27bacd072459b9c2c0c1f0638 | f4f147a9859c5605b22429b05dc43315b06b3215 | /manage.py | 32de577bd064ffbff0ca93d7ae91639c2e93486b | [] | no_license | nickdotreid/hit-fails | 151cebdf3ecfb5168bf7a5b3937d6ed4abdb1cb9 | 7e8f0b0af60181d1922043270de27159ba4f4337 | refs/heads/master | 2016-09-15T18:08:00.311352 | 2014-07-22T16:43:51 | 2014-07-22T16:43:51 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 366 | py | #!/usr/bin/env python
import os
import sys
if __name__ == "__main__":
# Load the Heroku environment.
from herokuapp.env import load_env
load_env(__file__, "hitfails")
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "hitfails.settings")
from django.core.management import execute_from_command_line
execute_from_command_line(sys.argv)
| [
"nickreid@nickreid.com"
] | nickreid@nickreid.com |
d30f9f68f0df2ebbe23d6062fc3e63d1497ed96f | ffa8b19913d891a655ff78384847ea9fdc5b0bc9 | /test/test_content_api.py | 912e0cf62412bffafa42e0aad94aefbb8ce1888f | [] | no_license | ccalipSR/python_sdk2 | b76124f409e26128ff291d2c33612883929c1b5f | d8979ed7434f4ffbc62fc30c90d40d93a327b7d1 | refs/heads/master | 2020-04-09T17:13:43.581633 | 2018-12-05T06:53:50 | 2018-12-05T06:53:50 | 160,473,001 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 4,321 | py | # coding: utf-8
"""
Looker API 3.0 Reference
### Authorization The Looker API uses Looker **API3** credentials for authorization and access control. Looker admins can create API3 credentials on Looker's **Admin/Users** page. Pass API3 credentials to the **/login** endpoint to obtain a temporary access_token. Include that access_token in the Authorization header of Looker API requests. For details, see [Looker API Authorization](https://looker.com/docs/r/api/authorization) ### Client SDKs The Looker API is a RESTful system that should be usable by any programming language capable of making HTTPS requests. Client SDKs for a variety of programming languages can be generated from the Looker API's Swagger JSON metadata to streamline use of the Looker API in your applications. A client SDK for Ruby is available as an example. For more information, see [Looker API Client SDKs](https://looker.com/docs/r/api/client_sdks) ### Try It Out! The 'api-docs' page served by the Looker instance includes 'Try It Out!' buttons for each API method. After logging in with API3 credentials, you can use the \"Try It Out!\" buttons to call the API directly from the documentation page to interactively explore API features and responses. ### Versioning Future releases of Looker will expand this API release-by-release to securely expose more and more of the core power of Looker to API client applications. API endpoints marked as \"beta\" may receive breaking changes without warning. Stable (non-beta) API endpoints should not receive breaking changes in future releases. For more information, see [Looker API Versioning](https://looker.com/docs/r/api/versioning) # noqa: E501
OpenAPI spec version: 3.0.0
Contact: support@looker.com
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import unittest
import swagger_client
from swagger_client.api.content_api import ContentApi # noqa: E501
from swagger_client.rest import ApiException
class TestContentApi(unittest.TestCase):
"""ContentApi unit test stubs"""
def setUp(self):
self.api = swagger_client.api.content_api.ContentApi() # noqa: E501
def tearDown(self):
pass
def test_all_content_metadata_accesss(self):
"""Test case for all_content_metadata_accesss
Get All Content Metadata Accesss # noqa: E501
"""
pass
def test_all_content_metadatas(self):
"""Test case for all_content_metadatas
Get All Content Metadatas # noqa: E501
"""
pass
def test_content_favorite(self):
"""Test case for content_favorite
Get Favorite Content # noqa: E501
"""
pass
def test_content_metadata(self):
"""Test case for content_metadata
Get Content Metadata # noqa: E501
"""
pass
def test_create_content_favorite(self):
"""Test case for create_content_favorite
Create Favorite Content # noqa: E501
"""
pass
def test_create_content_metadata_access(self):
"""Test case for create_content_metadata_access
Create Content Metadata Access # noqa: E501
"""
pass
def test_delete_content_favorite(self):
"""Test case for delete_content_favorite
Delete Favorite Content # noqa: E501
"""
pass
def test_delete_content_metadata_access(self):
"""Test case for delete_content_metadata_access
Delete Content Metadata Access # noqa: E501
"""
pass
def test_search_content_favorites(self):
"""Test case for search_content_favorites
Search Favorite Contents # noqa: E501
"""
pass
def test_search_content_views(self):
"""Test case for search_content_views
Search Content Views # noqa: E501
"""
pass
def test_update_content_metadata(self):
"""Test case for update_content_metadata
Update Content Metadata # noqa: E501
"""
pass
def test_update_content_metadata_access(self):
"""Test case for update_content_metadata_access
Update Content Metadata Access # noqa: E501
"""
pass
if __name__ == '__main__':
unittest.main()
| [
"ccalip@shoprunner.com"
] | ccalip@shoprunner.com |
eacc91c40d88de42dd69c15079c96805027b981b | e4cd0810417fecc5aaa5f1e5ccaf4af75c57b4cd | /data_set/error_row_parsing.py | 268e86abffd99f42508fc86ab90101e8b92bedbd | [] | no_license | Areum120/epis_data_project | c7c9d859d70df1f9bef4b7dd691a09c27d078e8f | 567c51aa89139666521e45f76c9fd23029d2660b | refs/heads/master | 2023-06-01T15:03:16.908647 | 2021-04-14T01:39:37 | 2021-04-14T01:39:37 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 149 | py | import json
with open('bds_safe_restaurant_error.json', encoding='utf-8') as f:
ls = f.readlines()
for l in ls:
print(json.loads(l))
| [
"oceanfog1@gmail.com"
] | oceanfog1@gmail.com |
ad79bcdb94077ac1c13af74936728b2ff0f4b9bf | 1577e1cf4e89584a125cffb855ca50a9654c6d55 | /pyobjc/pyobjc/pyobjc-framework-Cocoa-2.5.1/PyObjCTest/test_cfdictionary.py | be87f85973a7b9246d9e2b734926df2bda623acc | [
"MIT"
] | permissive | apple-open-source/macos | a4188b5c2ef113d90281d03cd1b14e5ee52ebffb | 2d2b15f13487673de33297e49f00ef94af743a9a | refs/heads/master | 2023-08-01T11:03:26.870408 | 2023-03-27T00:00:00 | 2023-03-27T00:00:00 | 180,595,052 | 124 | 24 | null | 2022-12-27T14:54:09 | 2019-04-10T14:06:23 | null | UTF-8 | Python | false | false | 6,093 | py | from CoreFoundation import *
from Foundation import NSDictionary, NSMutableDictionary, NSCFDictionary
from PyObjCTools.TestSupport import *
try:
long
except NameError:
long = int
class TestCFDictionary (TestCase):
def testCreation(self):
dictionary = CFDictionaryCreate(None,
('aap', 'noot', 'mies', 'wim'),
('monkey', 'nut', 'missy', 'john'),
4, kCFTypeDictionaryKeyCallBacks, kCFTypeDictionaryValueCallBacks)
self.assert_(isinstance(dictionary, CFDictionaryRef))
self.assertEqual(dictionary, {
'aap': 'monkey',
'noot': 'nut',
'mies': 'missy',
'wim': 'john'
})
dictionary = CFDictionaryCreateMutable(None, 0, kCFTypeDictionaryKeyCallBacks, kCFTypeDictionaryValueCallBacks)
self.assert_(isinstance(dictionary, CFMutableDictionaryRef))
CFDictionarySetValue(dictionary, 'hello', 'world')
self.assertEqual(dictionary, {'hello': 'world'})
def testApplyFunction(self):
dictionary = CFDictionaryCreate(None,
('aap', 'noot', 'mies', 'wim'),
('monkey', 'nut', 'missy', 'john'), 4, kCFTypeDictionaryKeyCallBacks, kCFTypeDictionaryValueCallBacks)
context = []
def function(key, value, context):
context.append((key, value))
self.assertArgIsFunction(CFDictionaryApplyFunction, 1, b'v@@@', False)
self.assertArgHasType(CFDictionaryApplyFunction, 2, b'@')
CFDictionaryApplyFunction(dictionary, function, context)
context.sort()
self.assertEqual(len(context) , 4)
self.assertEqual(context,
[
(b'aap'.decode('ascii'), b'monkey'.decode('ascii')),
(b'mies'.decode('ascii'), b'missy'.decode('ascii')),
(b'noot'.decode('ascii'), b'nut'.decode('ascii')),
(b'wim'.decode('ascii'), b'john'.decode('ascii'))
])
def testTypeID(self):
self.assertIsInstance(CFDictionaryGetTypeID(), (int, long))
def testCreationAndCopy(self):
dct = CFDictionaryCreate(None, [b"key1".decode('ascii'), b"key2".decode('ascii')], [42, 43], 2, kCFTypeDictionaryKeyCallBacks, kCFTypeDictionaryValueCallBacks)
self.assertIsInstance(dct, CFDictionaryRef)
dct = CFDictionaryCreateCopy(None, dct)
self.assertIsInstance(dct, CFDictionaryRef)
dct = CFDictionaryCreateMutable(None, 0, kCFTypeDictionaryKeyCallBacks, kCFTypeDictionaryValueCallBacks)
self.assertIsInstance(dct, CFDictionaryRef)
dct = CFDictionaryCreateMutableCopy(None, 0, dct)
self.assertIsInstance(dct, CFDictionaryRef)
def testInspection(self):
dct = CFDictionaryCreate(None, [b"key1".decode('ascii'), b"key2".decode('ascii')], [42, 42], 2, kCFTypeDictionaryKeyCallBacks, kCFTypeDictionaryValueCallBacks)
self.assertIsInstance(dct, CFDictionaryRef)
self.assertEqual(CFDictionaryGetCount(dct) , 2)
self.assertEqual(CFDictionaryGetCountOfKey(dct, b"key1".decode('ascii')) , 1)
self.assertEqual(CFDictionaryGetCountOfKey(dct, b"key3".decode('ascii')) , 0)
self.assertEqual(CFDictionaryGetCountOfValue(dct, 42) , 2)
self.assertEqual(CFDictionaryGetCountOfValue(dct, 44) , 0)
self.assertResultHasType(CFDictionaryContainsKey, objc._C_NSBOOL)
self.assertTrue(CFDictionaryContainsKey(dct, b"key1".decode('ascii')))
self.assertFalse(CFDictionaryContainsKey(dct, b"key3".decode('ascii')))
self.assertResultHasType(CFDictionaryContainsValue, objc._C_NSBOOL)
self.assertTrue(CFDictionaryContainsValue(dct, 42))
self.assertFalse(CFDictionaryContainsValue(dct, b"key3".decode('ascii')))
self.assertEqual(CFDictionaryGetValue(dct, "key2") , 42)
self.assertIs(CFDictionaryGetValue(dct, "key3"), None)
self.assertResultHasType(CFDictionaryGetValueIfPresent, objc._C_NSBOOL)
self.assertArgIsOut(CFDictionaryGetValueIfPresent, 2)
ok, value = CFDictionaryGetValueIfPresent(dct, "key2", None)
self.assertTrue(ok)
self.assertEqual(value , 42)
ok, value = CFDictionaryGetValueIfPresent(dct, "key3", None)
self.assertFalse(ok)
self.assertIs(value, None)
keys, values = CFDictionaryGetKeysAndValues(dct, None, None)
self.assertEqual(values , (42, 42))
keys = list(keys)
keys.sort()
self.assertEqual(keys , ['key1', 'key2'])
def testMutation(self):
dct = CFDictionaryCreateMutable(None, 0, kCFTypeDictionaryKeyCallBacks, kCFTypeDictionaryValueCallBacks)
self.assertEqual(CFDictionaryGetCount(dct) , 0)
CFDictionaryAddValue(dct, b"key1".decode('ascii'), b"value1".decode('ascii'))
self.assertEqual(CFDictionaryGetCount(dct) , 1)
self.assertTrue(CFDictionaryContainsKey(dct, b"key1".decode('ascii')))
CFDictionarySetValue(dct, b"key2".decode('ascii'), b"value2".decode('ascii'))
self.assertEqual(CFDictionaryGetCount(dct) , 2)
self.assertTrue(CFDictionaryContainsKey(dct, b"key2".decode('ascii')))
CFDictionaryReplaceValue(dct, b"key2".decode('ascii'), b"value2b".decode('ascii'))
self.assertEqual(CFDictionaryGetCount(dct) , 2)
self.assertTrue(CFDictionaryContainsKey(dct, b"key2".decode('ascii')))
self.assertEqual(CFDictionaryGetValue(dct, "key2") , b"value2b".decode('ascii'))
CFDictionaryReplaceValue(dct, b"key3".decode('ascii'), b"value2b".decode('ascii'))
self.assertEqual(CFDictionaryGetCount(dct) , 2)
self.assertFalse(CFDictionaryContainsKey(dct, b"key3".decode('ascii')))
CFDictionaryRemoveValue(dct, b"key1".decode('ascii'))
self.assertFalse(CFDictionaryContainsKey(dct, b"key1".decode('ascii')))
CFDictionaryRemoveAllValues(dct)
self.assertFalse(CFDictionaryContainsKey(dct, b"key2".decode('ascii')))
self.assertEqual(CFDictionaryGetCount(dct) , 0)
if __name__ == "__main__":
main()
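# Rough Python-dict correspondence for the CoreFoundation calls exercised
# above (informal orientation only):
# CFDictionaryCreate(None, ks, vs, n, ...) ~ dict(zip(ks, vs))
# CFDictionarySetValue(d, k, v) ~ d[k] = v
# CFDictionaryGetValue(d, k) ~ d.get(k)
# CFDictionaryRemoveValue(d, k) ~ d.pop(k, None)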
| [
"opensource@apple.com"
] | opensource@apple.com |
4347b0b5ff4fe7dbd42fa72e6f26f59966cde029 | f336bcdc1eeab553e0d3d1de2ca6da64cd7f27bc | /macd/ma.py | 095e1514d3c74ba792b173092c2ffe23a7839f1a | [] | no_license | tonylibing/stockpractice | 04568c017a96815e3796c895e74f11fa128d3ffe | 039e144b3a4cc00e400338174b31fa277df55517 | refs/heads/main | 2023-09-05T03:53:02.565539 | 2021-10-30T22:08:16 | 2021-10-30T22:08:16 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,019 | py | # coding:utf-8
# Practice program for live trading with 1,000 yuan
# Tests indicators for telling bull markets from bear markets
# Based on Chapter 2 of Appel's book on moving-average trading
import pandas as pd
import numpy as np
import akshare as ak
import run
import tools
import efinance as ef
import datetime, quandl
import matplotlib.pyplot as plt
import os
from backtest import BackTest
import strategy as st
# Backtest the strategy
@run.change_dir
def backTest(refresh = False):
month = 15*12
code = "000300" # 沪深300指数
benchmark = tools.getBenchmarkData(month = month, refresh = refresh, path = "./stockdata/")
backtest = BackTest(codes = [code], strategy = st.MA, benchmark = benchmark, month = month, cash = 1000000, refresh = refresh, path = "./stockdata/", bOpt = False)
results = backtest.getResults()
print(results)
backtest.drawResults(code + "result")
# res = backtest.optRun(period = range(5,200))
# print("测试c", res)
if __name__ == "__main__":
tools.init()
backTest(refresh = False)
| [
"zwdnet@163.com"
] | zwdnet@163.com |
8fdde8c62e9df33aab12ae2f99d6ab864c1398b3 | f80ef3a3cf859b13e8af8433af549b6b1043bf6e | /pyobjc-core/PyObjCTest/test_nsdecimal.py | 1bc56a02ba036eff309a722cfc92e26df3b754b2 | [
"MIT"
] | permissive | ronaldoussoren/pyobjc | 29dc9ca0af838a56105a9ddd62fb38ec415f0b86 | 77b98382e52818690449111cd2e23cd469b53cf5 | refs/heads/master | 2023-09-01T05:15:21.814504 | 2023-06-13T20:00:17 | 2023-06-13T20:00:17 | 243,933,900 | 439 | 49 | null | 2023-06-25T02:49:07 | 2020-02-29T08:43:12 | Python | UTF-8 | Python | false | false | 13,611 | py | """
Tests for the NSDecimal wrapper type
"""
import decimal
import warnings
import objc
from objc import super
from PyObjCTest.decimal import OC_TestDecimal
from PyObjCTools.TestSupport import TestCase, expectedFailure
class TestNSDecimalWrapper(TestCase):
def test_creation(self):
d = objc.NSDecimal(0)
self.assertEqual(str(d), "0")
d = objc.NSDecimal(-5)
self.assertEqual(str(d), "-5")
with self.assertRaisesRegex(OverflowError, "int too big to convert"):
objc.NSDecimal(1 << 66)
d = objc.NSDecimal(0.0)
self.assertEqual(str(d), "0")
d = objc.NSDecimal(0.5)
self.assertEqual(str(d), "0.5")
d = objc.NSDecimal("1.24")
self.assertEqual(str(d), "1.24")
d = objc.NSDecimal(500, 3, False)
self.assertEqual(str(d), str(500 * 10**3))
d = objc.NSDecimal(500, -6, True)
self.assertEqual(str(d), str(500 * 10**-6 * -1))
with self.assertRaisesRegex(
ValueError, "depythonifying 'unsigned long long', got 'str'"
):
objc.NSDecimal("a", -6, True)
with self.assertRaisesRegex(ValueError, "depythonifying 'short', got 'str'"):
objc.NSDecimal(500, "a", True)
with self.assertRaisesRegex(
TypeError,
r"NSDecimal\(value\) or NSDecimal\(mantissa, exponent, isNegative\)",
):
objc.NSDecimal(500, -6, True, False)
with self.assertRaisesRegex(
TypeError, "cannot convert instance of NSObject to NSDecimal"
):
objc.NSDecimal(objc.lookUpClass("NSObject").new())
d = objc.NSDecimal("invalid")
self.assertEqual(str(d), "NaN")
def test_comparing(self):
d1 = objc.NSDecimal("1.500")
d2 = objc.NSDecimal("1.500")
d3 = objc.NSDecimal("1.4")
d4 = objc.NSDecimal("1.6")
self.assertTrue(d1 == d1)
self.assertTrue(d1 == d2)
self.assertFalse(d1 != d1)
self.assertFalse(d1 != d2)
self.assertTrue(d1 != d3)
self.assertFalse(d1 != d2)
self.assertFalse(d1 != d1)
self.assertTrue(d1 <= d1)
self.assertTrue(d1 <= d2)
self.assertFalse(d1 <= d3)
self.assertTrue(d1 <= d4)
self.assertFalse(d1 < d1)
self.assertFalse(d1 < d2)
self.assertFalse(d1 < d3)
self.assertTrue(d1 < d4)
self.assertFalse(d1 > d1)
self.assertFalse(d1 > d2)
self.assertTrue(d1 > d3)
self.assertFalse(d1 > d4)
self.assertTrue(d1 >= d1)
self.assertTrue(d1 >= d2)
self.assertTrue(d1 >= d3)
self.assertFalse(d1 >= d4)
self.assertEqual(objc.NSDecimal("1.50"), objc.NSDecimal("1.500"))
# Comparison with other types is possible when
# they can be casted to NSDecimal without loosing
# precision.
d5 = objc.NSDecimal("5")
i5 = 5
f5 = 5.0
D5 = decimal.Decimal(5)
self.assertTrue(d5 == i5)
self.assertTrue(d5 == f5)
self.assertFalse(d5 == D5)
self.assertFalse(d5 != i5)
self.assertFalse(d5 != f5)
self.assertTrue(d5 != D5)
self.assertFalse(d5 < i5)
self.assertFalse(d5 < f5)
with self.assertRaisesRegex(
TypeError, "Cannot compare NSDecimal and decimal.Decimal"
):
d5 < D5 # noqa: B015
self.assertFalse(d5 > i5)
self.assertFalse(d5 > f5)
with self.assertRaisesRegex(
TypeError, "Cannot compare NSDecimal and decimal.Decimal"
):
d5 > D5 # noqa: B015
self.assertTrue(d5 >= i5)
self.assertTrue(d5 >= f5)
with self.assertRaisesRegex(
TypeError, "Cannot compare NSDecimal and decimal.Decimal"
):
d5 >= D5 # noqa: B015
self.assertTrue(d5 <= i5)
self.assertTrue(d5 <= f5)
with self.assertRaisesRegex(
TypeError, "Cannot compare NSDecimal and decimal.Decimal"
):
d5 <= D5 # noqa: B015
def test_hash(self):
self.assertEqual(hash(objc.NSDecimal("1.50")), hash(objc.NSDecimal("1.500")))
def test_conversion(self):
d1 = objc.NSDecimal("1.5")
d2 = objc.NSDecimal("25")
self.assertEqual(d1.as_int(), 1)
self.assertEqual(d2.as_int(), 25)
self.assertEqual(d1.as_float(), 1.5)
self.assertEqual(d2.as_float(), 25.0)
with self.assertRaisesRegex(
TypeError, r"int\(\) argument must .*, not 'objc.NSDecimal'"
):
int(d1)
with self.assertRaisesRegex(
TypeError, r"float\(\) argument must be .*, not 'objc.NSDecimal'"
):
float(d1)
def test_rounding(self):
d1 = objc.NSDecimal("1.5781")
d2 = round(d1)
self.assertEqual(d2, objc.NSDecimal("2"))
d2 = round(d1, 2)
self.assertEqual(d2, objc.NSDecimal("1.58"))
d2 = round(d1, 3)
self.assertEqual(d2, objc.NSDecimal("1.578"))
d1 = objc.NSDecimal("15.44")
d2 = round(d1, -1)
self.assertEqual(d2, objc.NSDecimal("20"))
with self.assertRaisesRegex(
TypeError, r"function takes at most 1 argument \(2 given\)"
):
d1.__round__(1, 2)
def test_pow(self):
with self.assertRaisesRegex(
TypeError, r"pow\(\) and \*\* are not supported for NSDecimal"
):
pow(objc.NSDecimal("3.5"), 3, 1)
with self.assertRaisesRegex(
TypeError, r"pow\(\) and \*\* are not supported for NSDecimal"
):
pow(objc.NSDecimal("3.5"), 3)
with self.assertRaisesRegex(
TypeError, r"pow\(\) and \*\* are not supported for NSDecimal"
):
objc.NSDecimal("3.5") ** 3
with self.assertRaisesRegex(
TypeError, r"pow\(\) and \*\* are not supported for NSDecimal"
):
objc.NSDecimal("3.5") ** objc.NSDecimal("2")
def test_operators(self):
d1 = objc.NSDecimal("1.5")
self.assertEqual(+d1, d1)
self.assertEqual(-d1, objc.NSDecimal("-1.5"))
d2 = objc.NSDecimal("0.5")
o = d1 + d2
self.assertEqual(o, objc.NSDecimal("2"))
o = d1 - d2
self.assertEqual(o, objc.NSDecimal("1.0"))
o = d1 / d2
self.assertEqual(o, objc.NSDecimal("3.0"))
o = d1 * d2
self.assertEqual(o, objc.NSDecimal("0.75"))
o = d1 + 1
self.assertEqual(o, objc.NSDecimal("2.5"))
o = d1 - 1
self.assertEqual(o, objc.NSDecimal("0.5"))
o = d1 * 2
self.assertEqual(o, objc.NSDecimal("3"))
o = d1 / 2
self.assertEqual(o, objc.NSDecimal("0.75"))
o = d1 // 2
self.assertEqual(o, objc.NSDecimal("0"))
with self.assertRaisesRegex(
TypeError,
r"unsupported operand type\(s\) for \+: 'objc.NSDecimal' and 'float'",
):
d1 + 0.5
with self.assertRaisesRegex(
TypeError,
r"unsupported operand type\(s\) for -: 'objc.NSDecimal' and 'float'",
):
d1 - 0.5
with self.assertRaisesRegex(
TypeError,
r"unsupported operand type\(s\) for \*: 'objc.NSDecimal' and 'float'",
):
d1 * 0.5
with self.assertRaisesRegex(
TypeError,
r"unsupported operand type\(s\) for /: 'objc.NSDecimal' and 'float'",
):
d1 / 0.5
with self.assertRaisesRegex(
TypeError,
r"unsupported operand type\(s\) for \+: 'float' and 'objc.NSDecimal'",
):
0.5 + d1
with self.assertRaisesRegex(
TypeError,
r"unsupported operand type\(s\) for -: 'float' and 'objc.NSDecimal'",
):
0.5 - d1
with self.assertRaisesRegex(
TypeError,
r"unsupported operand type\(s\) for \*: 'float' and 'objc.NSDecimal'",
):
0.5 * d1
with self.assertRaisesRegex(
TypeError,
r"unsupported operand type\(s\) for /: 'float' and 'objc.NSDecimal'",
):
0.5 / d1
def test_inplace_ro(self):
d1 = objc.NSDecimal("1.5")
d2 = objc.NSDecimal("0.5")
orig = d1
d1 += d2
self.assertEqual(d1, objc.NSDecimal("2.0"))
self.assertEqual(orig, objc.NSDecimal("1.5"))
d1 = orig
d1 -= d2
self.assertEqual(d1, objc.NSDecimal("1.0"))
self.assertEqual(orig, objc.NSDecimal("1.5"))
d1 = orig
d1 /= d2
self.assertEqual(d1, objc.NSDecimal("3.0"))
self.assertEqual(orig, objc.NSDecimal("1.5"))
d1 = orig
d1 *= d2
self.assertEqual(d1, objc.NSDecimal("0.75"))
self.assertEqual(orig, objc.NSDecimal("1.5"))
class TestUsingNSDecimalNumber(TestCase):
def test_creation(self):
cls = objc.lookUpClass("NSDecimalNumber")
d = objc.NSDecimal("1.5")
n = cls.decimalNumberWithDecimal_(d)
self.assertIsInstance(n, cls)
self.assertEqual(str(n), str(d))
n = cls.alloc().initWithDecimal_(d)
self.assertIsInstance(n, cls)
self.assertEqual(str(n), str(d))
v = n.decimalValue()
self.assertEqual(d, v)
with self.assertRaisesRegex(TypeError, "expected 1 arguments, got 2"):
cls.decimalNumberWithDecimal_(d, 1)
with self.assertRaisesRegex(
TypeError, "Expecting an NSDecimal, got instance of 'str'"
):
cls.decimalNumberWithDecimal_("42.5")
with warnings.catch_warnings():
warnings.simplefilter("ignore", category=objc.UninitializedDeallocWarning)
with self.assertRaisesRegex(TypeError, "expected 1 arguments, got 2"):
cls.alloc().initWithDecimal_(d, 1)
with self.assertRaisesRegex(
TypeError, "Expecting an NSDecimal, got instance of 'str'"
):
cls.alloc().initWithDecimal_("42.5")
@expectedFailure
def test_subclassing(self):
# At least on macOS 13 subclassing of NSDecimalNumber basically doesn't work,
# leaving the test here as a reminder of that.
NSDecimalNumber = objc.lookUpClass("NSDecimalNumber")
class OC_DecimalNumberPlusOne(NSDecimalNumber):
@objc.objc_method(signature=NSDecimalNumber.initWithDecimal_.signature)
def initWithDecimal_(self, value):
return super().initWithDecimal_(value)
def decimalValue(self):
return super().decimalValue() + 1
with warnings.catch_warnings():
warnings.simplefilter("ignore", category=objc.UninitializedDeallocWarning)
o = OC_DecimalNumberPlusOne.alloc().initWithDecimal_(objc.NSDecimal("1.5"))
v = objc.NSDecimal(o)
print(v)
class TestDecimalByReference(TestCase):
def test_byref_in(self):
d = objc.NSDecimal("1.5")
o = OC_TestDecimal.alloc().init()
self.assertArgIsIn(o.stringFromDecimal_, 0)
r = o.stringFromDecimal_(d)
self.assertIsInstance(r, str)
self.assertEqual(r, "1.5")
with self.assertRaisesRegex(
TypeError, "Expecting an NSDecimal, got instance of 'str'"
):
o.stringFromDecimal_("42.5")
def test_byref_out(self):
o = OC_TestDecimal.alloc().init()
self.assertArgIsOut(o.getDecimal_, 0)
r = o.getDecimal_(None)
self.assertIsInstance(r, tuple)
self.assertEqual(r[0], 1)
d = r[1]
self.assertIsInstance(d, objc.NSDecimal)
self.assertEqual(str(d), "2.5")
objc._updatingMetadata(True)
objc.registerMetaDataForSelector(
b"OC_TestDecimal",
b"getDecimal:",
{
"arguments": {
2
+ 0: {
"type_modifier": objc._C_OUT,
"type": b"^{_NSDecimal=b8b4b1b1b18[8S]}",
"null_accepted": False,
}
}
},
)
objc._updatingMetadata(False)
self.assertArgIsOut(o.getDecimal_, 0)
r = o.getDecimal_(None)
self.assertIsInstance(r, tuple)
self.assertEqual(r[0], 1)
d = r[1]
self.assertIsInstance(d, objc.NSDecimal)
self.assertEqual(str(d), "2.5")
def test_byref_inout(self):
d1 = objc.NSDecimal("1.25")
o = OC_TestDecimal.alloc().init()
self.assertArgIsInOut(o.doubleDecimal_, 0)
d2 = o.doubleDecimal_(d1)
self.assertIsNot(d1, d2)
self.assertEqual(str(d1), "1.25")
self.assertIsInstance(d2, objc.NSDecimal)
self.assertEqual(str(d2), "2.5")
def test_to_id(self):
v = objc.NSDecimal("2.75")
a = objc.lookUpClass("NSArray").arrayWithArray_([v])
o = a[0]
self.assertIsInstance(o, objc.lookUpClass("NSDecimalNumber"))
self.assertEqual(o, v)
b = objc.NSDecimal(o)
self.assertIsInstance(b, objc.NSDecimal)
self.assertIsNot(b, v)
self.assertEqual(b, v)
def test_create_no_args(self):
v = objc.NSDecimal()
self.assertEqual(v, objc.NSDecimal("0.0"))
def test_bool_context(self):
v = objc.NSDecimal("0")
self.assertIs(bool(v), False)
v = objc.NSDecimal("0.0001")
self.assertIs(bool(v), True)
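# For orientation: objc.NSDecimal(mantissa, exponent, isNegative) denotes
# sign * mantissa * 10**exponent, so NSDecimal(500, 3, False) in
# test_creation is 500 * 10**3 = 500000 and NSDecimal(500, -6, True) is
# -(500 * 10**-6) = -0.0005.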
| [
"ronaldoussoren@mac.com"
] | ronaldoussoren@mac.com |
1f6538e3cfedc22ca80c5652086f6deb7f4bf652 | 6fce025097cebfd9d1dd37f6611e7fdfdbea90e6 | /data_sync/nwp_prec_map.py | 6c629d1d70cd7fceffb4ccd6c33fc27c2260f5bf | [] | no_license | ANU-WALD/pluvi_pondus | ec0439d19acdcf4fdf712d6b14a1714297d661b2 | ff8680f7115ab2cb75138bf6705abb59618e47d1 | refs/heads/master | 2021-07-01T14:32:14.501631 | 2020-08-22T09:41:28 | 2020-08-22T09:41:28 | 138,804,652 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,533 | py | import xarray as xr
import numpy as np
import sys
import imageio
import os
if len(sys.argv) != 3:
sys.exit(1)
ds = xr.open_dataset(sys.argv[1])
print(ds['tp'].shape)
p = ds['tp'][0,:,:].data * 1000  # presumably metres -> millimetres
p = np.clip(p, 0, 150)
# single log transform, scaled by log(1 + 150) ~ 5.01728 so norm_p lies in [0, 1]
norm_p = np.log(1 + p) / 5.01728
im = np.zeros((p.shape[0], p.shape[1], 4), dtype=np.float64)
im[:,:,2] = 1
im[:,:,3] = norm_p
im = (im*255).astype(np.uint8)
fname, _ = os.path.splitext(sys.argv[2])
imageio.imwrite(sys.argv[2], im)
os.system("gdal_translate -of GTiff -a_ullr -180 90 180 -90 -a_srs EPSG:4326 {}.png {}.tif".format(fname, fname))
os.system("gdalwarp -of GTiff -s_srs EPSG:4326 -t_srs EPSG:3857 -te_srs EPSG:4326 -te -180 -85.0511 180 85.0511 {}.tif {}_proj.tif".format(fname, fname))
os.system("gdal_translate -of PNG {}_proj.tif {}.png".format(fname, fname))
os.system("rm *.tif")
print(ds['cp'].shape)
p = ds['cp'][0,:,:].data * 1000  # presumably metres -> millimetres
p = np.clip(p, 0, 150)
# single log transform, scaled by log(1 + 150) ~ 5.01728 so norm_p lies in [0, 1]
norm_p = np.log(1 + p) / 5.01728
im = np.zeros((p.shape[0], p.shape[1], 4), dtype=np.float64)
im[:,:,2] = 1
im[:,:,3] = norm_p
im = (im*255).astype(np.uint8)
fname = "CP-" + fname
imageio.imwrite("{}.png".format(fname), im)
os.system("gdal_translate -of GTiff -a_ullr -180 90 180 -90 -a_srs EPSG:4326 {}.png {}.tif".format(fname, fname))
os.system("gdalwarp -of GTiff -s_srs EPSG:4326 -t_srs EPSG:3857 -te_srs EPSG:4326 -te -180 -85.0511 180 85.0511 {}.tif {}_proj.tif".format(fname, fname))
os.system("gdal_translate -of PNG {}_proj.tif {}.png".format(fname, fname))
os.system("rm *.tif")
| [
"pablo.larraondo@anu.edu.au"
] | pablo.larraondo@anu.edu.au |
31c568493d455ebdff42337b27ee809862d80424 | 16516732031deb7f7e074be9fe757897557eee2d | /朝活/朝活/20200420/A - C-Filter.py | 8f6a3a88e641497c59026898a80ef871bdebcde2 | [] | no_license | cale-i/atcoder | 90a04d3228864201cf63c8f8fae62100a19aefa5 | c21232d012191ede866ee4b9b14ba97eaab47ea9 | refs/heads/master | 2021-06-24T13:10:37.006328 | 2021-03-31T11:41:59 | 2021-03-31T11:41:59 | 196,288,266 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 364 | py | # https://atcoder.jp/contests/digitalarts2012/tasks/digitalarts_1
import re
s=list(input().split())
n=int(input())
t=[input().replace('*','.') for _ in range(n)]
for pat in t:
regex=re.compile(r'^{}$'.format(pat))
for i in range(len(s)):
has_word=regex.search(s[i])
if has_word:
s[i]='*'*len(s[i])
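# Each '*' in an NG word is rewritten to regex '.', i.e. it matches exactly
# one character, and the ^...$ anchors make search() behave as a full match;
# matched words are masked with '*' of equal length before printing.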
print(*s) | [
"calei078029@gmail.com"
] | calei078029@gmail.com |
2eb4a12253b0787e1d7727e128ff4724255853eb | a4b938b953d25bb529564d0e3f025b5a93a73d8b | /gui/http_api_e2e_test.py | 5bc9859a94d4cf20f236341bd64619f36fb03bcb | [
"DOC",
"Apache-2.0"
] | permissive | greg-gallaway/grr | 7887d3ecca33f9b5544f297d1bb1320672678078 | 919a844c396136bd49c457f18d853dd10b79abed | refs/heads/master | 2021-01-18T17:27:19.111359 | 2015-06-05T09:44:25 | 2015-06-05T09:44:25 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,378 | py | #!/usr/bin/env python
"""End-to-end tests for HTTP API.
HTTP API plugins are tested with their own dedicated unit-tests that are
protocol- and server-independent. Tests in this file test the full GRR server
stack with regards to the HTTP API.
"""
import json
import requests
import logging
from grr.gui import runtests
from grr.lib import config_lib
from grr.lib import flags
from grr.lib import test_lib
class HTTPApiEndToEndTestProgram(test_lib.GrrTestProgram):
server_port = None
def setUp(self):
self.trd = runtests.DjangoThread()
self.trd.StartAndWaitUntilServing()
class CSRFProtectionTest(test_lib.GRRBaseTest):
"""Tests GRR's CSRF protection logic for the HTTP API."""
def setUp(self):
super(CSRFProtectionTest, self).setUp()
port = (HTTPApiEndToEndTestProgram.server_port or
config_lib.CONFIG["AdminUI.port"])
self.base_url = "http://localhost:%s" % port
def testGETRequestWithoutCSRFTokenSucceeds(self):
response = requests.get(self.base_url + "/api/config")
self.assertEquals(response.status_code, 200)
# Assert XSSI protection is in place.
self.assertEquals(response.text[:5], ")]}'\n")
def testPOSTRequestWithoutCSRFTokenFails(self):
data = {
"client_ids": ["C.0000000000000000"],
"labels": ["foo", "bar"]
}
response = requests.post(self.base_url + "/api/clients/labels/add",
data=json.dumps(data))
self.assertEquals(response.status_code, 403)
self.assertTrue("CSRF" in response.text)
def testPOSTRequestWithCSRFTokenSucceeds(self):
# Fetch csrf token from the cookie set on the main page.
index_response = requests.get(self.base_url)
csrf_token = index_response.cookies.get("csrftoken")
headers = {
"x-csrftoken": csrf_token,
"x-requested-with": "XMLHttpRequest"
}
data = {
"client_ids": ["C.0000000000000000"],
"labels": ["foo", "bar"]
}
cookies = {
"csrftoken": csrf_token
}
response = requests.post(self.base_url + "/api/clients/labels/add",
headers=headers, data=json.dumps(data),
cookies=cookies)
self.assertEquals(response.status_code, 200)
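# The passing POST demonstrates the double-submit pattern Django relies on:
# the csrftoken cookie and the x-csrftoken header must carry the same value,
# so a request bearing only one of the two is rejected with a 403 as in the
# earlier test.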
def main(argv):
HTTPApiEndToEndTestProgram(argv=argv)
if __name__ == "__main__":
flags.StartMain(main)
| [
"github@mailgreg.com"
] | github@mailgreg.com |
f2d7d79f569684d8a1acd419e1f4cded176a399e | 71894f980d1209017837d7d02bc38ffb5dbcb22f | /audio/DIYAmazonAlexa/DIYAmazonAlexa.py | 9238be8e205d02461cb2611d43648d5199630f4c | [] | no_license | masomel/py-iot-apps | 0f2418f8d9327a068e5db2cdaac487c321476f97 | 6c22ff2f574a37ba40a02625d6ed68d7bc7058a9 | refs/heads/master | 2021-03-22T04:47:59.930338 | 2019-05-16T06:48:32 | 2019-05-16T06:48:32 | 112,631,988 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,807 | py | #! /usr/bin/env python
import os
import random
import time
import random
from creds import *
import requests
import json
import re
import subprocess
from memcache import Client
# Setup
recorded = False
servers = ["127.0.0.1:11211"]
mc = Client(servers, debug=1)
path = os.path.dirname(os.path.realpath(__file__)) + os.path.sep
def internet_on():
print("Checking Internet Connection")
try:
r = requests.get('https://api.amazon.com/auth/o2/token')
print("Connection OK")
return True
except:
print("Connection Failed")
return False
def gettoken():
token = mc.get("access_token")
refresh = refresh_token
if token:
return token
elif refresh:
payload = {"client_id": Client_ID, "client_secret": Client_Secret,
"refresh_token": refresh, "grant_type": "refresh_token", }
url = "https://api.amazon.com/auth/o2/token"
print("payload=")
print(payload)
r = requests.post(url, data=payload)
print("res=")
print((r.text))
resp = json.loads(r.text)
mc.set("access_token", resp['access_token'], 3570)
return resp['access_token']
else:
return False
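# Token lifecycle in gettoken() (informal): a cached access_token is reused
# from memcache until its TTL lapses (stored for 3570 s, just under the
# typical 3600 s lifetime of an Amazon LWA access token); otherwise the
# long-lived refresh_token is exchanged at api.amazon.com/auth/o2/token for
# a fresh access_token.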
def alexa():
url = 'https://access-alexa-na.amazon.com/v1/avs/speechrecognizer/recognize'
headers = {'Authorization': 'Bearer %s' % gettoken()}
d = { # a dict
"messageHeader": {
"deviceContext": [
{
"name": "playbackState",
"namespace": "AudioPlayer",
"payload": {
"streamId": "",
"offsetInMilliseconds": "0",
"playerActivity": "IDLE"
}
}
]
},
"messageBody": {
"profile": "alexa-close-talk",
"locale": "en-us",
"format": "audio/L16; rate=16000; channels=1"
}
}
with open(path + 'recording.wav', 'rb') as inf:  # read the raw audio as bytes
files = [ # a list
('file', ('request', json.dumps(d), 'application/json; charset=UTF-8')),
('file', ('audio', inf, 'audio/L16; rate=16000; channels=1'))
]
print(type(files))
print(type(d))
r = requests.post(url, headers=headers, files=files)
if r.status_code == 200:
for v in r.headers['content-type'].split(";"):
if re.match('.*boundary.*', v):
boundary = v.split("=")[1]
data = r.content.split(boundary)
for d in data:
if (len(d) >= 1024):
audio = d.split('\r\n\r\n')[1].rstrip('--')
print(type(audio))
with open(path + "response.mp3", 'wb') as f:
f.write(audio)
os.system(
'mpg123 -q {}1sec.mp3 {}response.mp3'.format(path + "/assets/", path))
else:
print("requests returned r.status_code = %r" % r.status_code)
def start():
print("Touch MATRIX Creator IR Sensor")
process = subprocess.Popen(
['./micarray/build/micarray_dump'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
audio, err = process.communicate()
with open(path + 'recording.wav', 'wb') as rf:  # raw audio must be written as bytes
    rf.write(audio)
alexa()
if __name__ == "__main__":
print("This is a MATRIX Creator demo - not ready for production")
print("Running workaround for GPIO 16 (IR-RX) ")
subprocess.Popen(['sudo', 'rmmod', 'lirc_rpi'])
while internet_on() == False:
print(".")
token = gettoken()
os.system('mpg123 -q {}1sec.mp3 {}hello.mp3'.format(path +
"/assets/", path + "/assets/"))
while True:
subprocess.Popen(['gpio','edge','16','both'])
start()
| [
"msmelara@gmail.com"
] | msmelara@gmail.com |
067bb5ca47e251d38571d4f8c1e9fea477cedd2b | bd8d89a09438328e0e9b76b1ed8bc7517cfd0f79 | /pifify/materials/inconel.py | c510e89e163c0968faac5471f318f885d8af349d | [] | no_license | bkappes/pifify | 3361925b875ce3ce216361d0657251f058e30d82 | 92ed2d27d7bca26c23db4604e155c7565f14413c | refs/heads/master | 2021-01-01T03:35:37.989045 | 2016-11-17T17:28:12 | 2016-11-17T17:28:12 | 58,249,456 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,777 | py | import sys, os
# sys.path.append(os.path.dirname(os.path.realpath(__file__) + \
# os.path.sep + os.path.pardir + \
# os.path.sep + os.path.pardir))
# this is specific the location of pypif, since I haven't
# installed pypif
sys.path.append('/Users/bkappes/src/citrine/pypif')
from pypif import pif
from alloy import AlloyBase
class Inconel718(AlloyBase):
def __init__(self, **kwds):
super(Inconel718, self).__init__(**kwds)
# set names
names = ['Inconel', 'Inconel 718', '718', 'UNS N07718',
'W.Nr. 2.4668', 'AMS 5596', 'ASTM B637']
self.names = names
# set references
url='http://www.specialmetals.com/documents/Inconel%20alloy%20718.pdf'
references = [pif.Reference(url=url)]
self.references = references
# preparation
if 'preparation' in kwds:
self.preparation = kwds['preparation']
else:
self.preparation = []
# set composition
balance = {'low' : 100., 'high' : 100.}
# at some point, allow the user to tweak the composition on an
# element-by-element basis by passing something to the class
# alloy compositions are typically defined in weight/mass percent
# with one element set by "balance".
composition = []
for elem, (low, high) in (('Ni', (50., 55.)),
('Cr', (17., 21.)),
('Nb', (4.75, 5.5)),
('Mo', (2.8, 3.3)),
('Ti', (0.65, 1.15)),
('Al', (0.2, 0.8)),
('Co', (0.0, 1.0)),
('C', (0.0, 0.08)),
('Mn', (0.0, 0.35)),
('Si', (0.0, 0.35)),
('P', (0.0, 0.015)),
('S', (0.0, 0.015)),
('B', (0.0, 0.006)),
('Cu', (0.0, 0.30))):
balance['low'] -= high
balance['high'] -= low
component = pif.Composition(element=elem,
ideal_weight_percent=pif.Scalar(minimum=low,
maximum=high))
composition.append(component)
assert(balance['low'] >= 0.0)
assert(balance['high'] >= 0.0)
component = pif.Composition(element='Fe',
ideal_weight_percent=pif.Scalar(minimum=balance['low'],
maximum=balance['high']))
composition.append(component)
self.composition = composition
#end 'class Inconel718(pif.Alloy):'
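# Note on the balance bookkeeping in __init__: for the element specified "by
# balance", the minimum is 100 minus the sum of the other elements' maxima and
# the maximum is 100 minus the sum of their minima -- hence each loop
# iteration subtracts `high` from balance['low'] and `low` from
# balance['high'] before the Fe component is appended.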
| [
"bkappes@mines.edu"
] | bkappes@mines.edu |
ac66cdaca079fc5ed364b91e7b7c335ff66e2240 | 09e5cfe06e437989a2ccf2aeecb9c73eb998a36c | /modules/cctbx_project/gltbx/viewer_utils.py | cf30f350988463aeda5d40e35106362d81ad140a | [
"BSD-3-Clause",
"BSD-3-Clause-LBNL",
"LicenseRef-scancode-unknown-license-reference"
] | permissive | jorgediazjr/dials-dev20191018 | b81b19653624cee39207b7cefb8dfcb2e99b79eb | 77d66c719b5746f37af51ad593e2941ed6fbba17 | refs/heads/master | 2020-08-21T02:48:54.719532 | 2020-01-25T01:41:37 | 2020-01-25T01:41:37 | 216,089,955 | 0 | 1 | BSD-3-Clause | 2020-01-25T01:41:39 | 2019-10-18T19:03:17 | Python | UTF-8 | Python | false | false | 1,298 | py | from __future__ import absolute_import, division, print_function
import scitbx.array_family.flex # import dependency
import time
import boost.python
ext = boost.python.import_ext("gltbx_viewer_utils_ext")
from gltbx_viewer_utils_ext import *
def read_pixels_to_str(x, y, width, height):
from gltbx.gl import glPixelStorei, glReadPixels, \
GL_PACK_ALIGNMENT, GL_RGB, GL_UNSIGNED_BYTE
glPixelStorei(GL_PACK_ALIGNMENT, 1)
pixels = []
glReadPixels(
x=0, y=0, width=width, height=height,
format=GL_RGB, type=GL_UNSIGNED_BYTE,
pixels=pixels)
return pixels[0]
def read_pixels_to_pil_image(x, y, width, height):
try:
import PIL.Image
except ImportError:
return None
mode = "RGB"
size = (width, height)
data = read_pixels_to_str(x=x, y=y, width=width, height=height)
decoder_name = "raw"
raw_mode = "RGB"
stride = 0
orientation = -1
return PIL.Image.frombytes(
mode, size, data, decoder_name, raw_mode, stride, orientation)
class fps_monitor(object):
def __init__(self):
self._t_start = time.time()
self._n = 0
def update(self):
self._n += 1
if (self._n % 10 == 0):
t_curr = time.time()
t_elapsed = t_curr - self._t_start
self._t_start = t_curr
print("%.2f fps" % (10 / t_elapsed))
self._n = 0
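# Typical render-loop usage of fps_monitor (sketch):
#   fps = fps_monitor()
#   while rendering:
#       draw_frame()
#       fps.update()   # prints the mean fps once every 10 frames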
| [
"jorge7soccer@gmail.com"
] | jorge7soccer@gmail.com |
ff9d75b69e1c2286b7676c81629e2cea48dce8fd | 24dfab72bd987988a0f1d7786ba98281287704f7 | /proposed_algorithms/RF_DT_xgboost_demo.py | a87b41a354680189a11e8b79999b7570916a5dbe | [] | no_license | yougwypf1991/application_classification | 435432aea5b2ad055c67889057047291ef200feb | 667a86b98eb7cc2d8bd87eb1dcdad0efeaca38a7 | refs/heads/master | 2022-11-14T19:13:17.054673 | 2020-07-13T03:15:32 | 2020-07-13T03:15:32 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 4,893 | py | import random
from sklearn import metrics
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from numpy_load_and_arff import load_npy_data
from xgboost import XGBClassifier
from sklearn.preprocessing import StandardScaler
random.seed(20)
def main_xgboost(X, Y, session_size=2000, test_percent=0.1):
input_size = 500
reduce_feature_flg = False
if reduce_feature_flg:
print(f'Using PCA to reduce features.')
sc = StandardScaler()
X = sc.fit_transform(X)
pca_model = PCA(n_components=input_size, random_state=0)
X = pca_model.fit_transform(X)
explained_variance = pca_model.explained_variance_ratio_
print(f'explained_variance={explained_variance}')
# session_size = input_size # X.shape[1]
print(f'X.shape={X.shape}')
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=test_percent, random_state=42)
print(f'train_test_ratio:[{1-test_percent}:{test_percent}]')
# listdir = {}
# for i in range(0, len(y_train)):
# if y_train[i] not in listdir:
# listdir.update({y_train[i]: 0})
# else:
# listdir[y_train[i]] = listdir[y_train[i]] + 1
# print(f'X_train:{listdir}')
#
# listdir = {}
# for i in range(0, len(y_test)):
# if y_test[i] not in listdir:
# listdir.update({y_test[i]: 0})
# else:
# listdir[y_test[i]] = listdir[y_test[i]] + 1
# print(f'X_test:{listdir}')
# train&test
result = []
# for i in range(10,300,30):
# value.append(i)
# value = [100]
value = 100
print("n_estimators: ", value)
truncatelist = [10, 100, 300, 500, 2000, 3000, 6000, 7000, 8000, session_size]
# truncatelist = [i * 100 + 500 for i in range(50, 100, 5)]
print(f'{truncatelist}')
# for i in range(50,1500,50):
for i in truncatelist:
print(f'session_size:{i}')
# clf = RandomForestClassifier(n_estimators=value, min_samples_leaf=2)
# clf = RandomForestClassifier()
# clf = DecisionTreeClassifier(criterion="entropy", splitter="best",
# max_depth = 20,
# # min_samples_split=5,
# min_samples_leaf=5,
# # min_weight_fraction_leaf=0.,
# # max_features=None,
# random_state=20)
clf = DecisionTreeClassifier(random_state=20)
# clf = XGBClassifier(n_estimators=150)
X_train_t = X_train[:, :i]
X_test_t = X_test[:, :i]
print("before input....")
print(f'X_train_t.shape:{X_train_t.shape}')
print(y_train.shape)
print(f'X_test_t.shape:{X_test_t.shape}')
print(y_test.shape)
# print((X_train_t[0])[0:10])
clf.fit(X_train_t, y_train)
predtrain = clf.predict(X_train_t)
print(confusion_matrix(y_train, predtrain))
predtest = clf.predict(X_test_t)
print(confusion_matrix(y_test, predtest))
print("train acc:", metrics.accuracy_score(y_train, predtrain))
print("test acc", metrics.accuracy_score(y_test, predtest))
result.append(metrics.accuracy_score(y_test, predtest))
# print(result)
print(f'test acc: {result}')
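# Each pass above truncates every sample to its first i columns before
# fitting, so `result` traces test accuracy as a function of how much of
# each session the classifier is allowed to see.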
if __name__ == '__main__':
input_file = '../input_data/trdata-8000B_payload.npy'
# input_file = '../input_data/trdata-8000B_header_payload_20190326.npy'
input_file = '../input_data/trdata_P_8000.npy' # test acc: [0.26696329254727474, 0.42936596218020023, 0.43492769744160176, 0.492769744160178, 0.6651835372636262, 0.6685205784204672]
# input_file = '../input_data/trdata_PH_8000.npy' # test acc: [0.22024471635150167, 0.5183537263626251, 0.5717463848720801, 0.610678531701891, 0.8153503893214683, 0.8209121245828699]
# input_file = '../input_data/trdata_PHT_8000.npy' # test acc: [0.27697441601779754, 0.5116796440489433, 0.6028921023359288, 0.6062291434927698, 0.8075639599555061, 0.8186874304783093]
# input_file = '../input_data/trdata_PT_8000.npy' # test acc: [0.24916573971078976, 0.45161290322580644, 0.5617352614015573, 0.5761957730812013, 0.7552836484983315, 0.7552836484983315]
# input_file ='../input_data/trdata_PT_8000_padding.npy' # test acc: [0.389321468298109, 0.5828698553948832, 0.6262513904338154, 0.6551724137931034, 0.8731924360400445, 0.8921023359288098]
input_file = '../input_data/newapp_10220_pt.npy'
session_size = 10220
X, y = load_npy_data(input_file, session_size)
main_xgboost(X, y, session_size)
| [
"kun.bj@foxmail.com"
] | kun.bj@foxmail.com |
3cbf273f544246dd4945252ce79cf5936764e9eb | 9743d5fd24822f79c156ad112229e25adb9ed6f6 | /xai/brain/wordbase/adverbs/_enormously.py | 08cbf1915bafcb261f9de99d048c67794cf5bd02 | [
"MIT"
] | permissive | cash2one/xai | de7adad1758f50dd6786bf0111e71a903f039b64 | e76f12c9f4dcf3ac1c7c08b0cc8844c0b0a104b6 | refs/heads/master | 2021-01-19T12:33:54.964379 | 2017-01-28T02:00:50 | 2017-01-28T02:00:50 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 387 | py |
#calss header
class _ENORMOUSLY():
def __init__(self,):
self.name = "ENORMOUSLY"
self.definitions = [u'extremely or very much: ']
self.parents = []
self.childen = []
self.properties = []
self.jsondata = {}
self.specie = 'adverbs'
def run(self, obj1, obj2):
self.jsondata[obj2] = {}
self.jsondata[obj2]['properties'] = self.name.lower()
return self.jsondata
| [
"xingwang1991@gmail.com"
] | xingwang1991@gmail.com |
887f2b39b26d4a530f903ceabe283c002de6052c | 05c5349fff1c85c41c92c9894006e2fe2464177b | /lib/api/mineration/mineration_controller.py | ea6b03f5a61d5d0bfeec33029e3474d5a6c76236 | [] | no_license | gabrielmoreira-dev/blockchain-flask | 2a0b637a4a3e1d3f732f06cc59ae6e01422efd76 | df70ed9535e397d192ddaff04be017a15b621253 | refs/heads/main | 2023-04-26T21:18:43.490144 | 2021-05-30T18:04:21 | 2021-05-30T18:04:21 | 361,319,236 | 0 | 0 | null | 2021-05-30T18:04:22 | 2021-04-25T03:11:00 | Python | UTF-8 | Python | false | false | 2,376 | py | from domain.model.block import Block
from domain.use_case.add_transaction_uc import AddTransactionUC, AddTransactionUCParams
from domain.use_case.create_block_uc import CreateBlockUC, CreateBlockUCParams
from domain.use_case.get_hash_uc import GetHashUC, GetHashUCParams
from domain.use_case.get_address_uc import GetAddressUC
from domain.use_case.get_previous_block_uc import GetPreviousBlockUC
from domain.use_case.get_proof_of_work_uc import GetProofOfWorkUC, GetProofOfWorkUCParams
from .mineration_mapper import MinerationMapper
class MinerationController:
def __init__(self, add_transaction_uc: AddTransactionUC,
get_address_uc: GetAddressUC,
get_previous_block_uc: GetPreviousBlockUC,
get_proof_of_work_uc: GetProofOfWorkUC,
get_hash_uc: GetHashUC, create_block_uc: CreateBlockUC):
self.add_transaction_uc = add_transaction_uc
self.get_address_uc = get_address_uc
self.get_previous_block_uc = get_previous_block_uc
self.get_proof_of_work_uc = get_proof_of_work_uc
self.get_hash_uc = get_hash_uc
self.create_block_uc = create_block_uc
def mine_block(self):
previous_block = self._get_previous_block()
proof = self._get_proof_of_work(previous_proof=previous_block.proof)
previous_hash = self._generate_block_hash(previous_block)
self._get_reward()
block = self._create_block(proof, previous_hash)
return MinerationMapper.toDict(block)
def _get_previous_block(self):
return self.get_previous_block_uc.execute()
def _get_proof_of_work(self, previous_proof: str):
params = GetProofOfWorkUCParams(previous_proof)
return self.get_proof_of_work_uc.execute(params)
def _generate_block_hash(self, block: Block):
params = GetHashUCParams(block)
return self.get_hash_uc.execute(params)
def _get_reward(self):
node_address = self.get_address_uc.execute()
params = AddTransactionUCParams(sender='',
receiver=node_address,
amount=1)
self.add_transaction_uc.execute(params)
def _create_block(self, proof: str, previous_hash: str):
params = CreateBlockUCParams(proof, previous_hash)
return self.create_block_uc.execute(params) | [
"="
] | = |
d1b1e05652e4e222093eb2da065223abf007094f | 2b42b40ae2e84b438146003bf231532973f1081d | /spec/mgm4459484.3.spec | f867d798a8dd43df1de6c166d285c799b7b42912 | [] | no_license | MG-RAST/mtf | 0ea0ebd0c0eb18ec6711e30de7cc336bdae7215a | e2ddb3b145068f22808ef43e2bbbbaeec7abccff | refs/heads/master | 2020-05-20T15:32:04.334532 | 2012-03-05T09:51:49 | 2012-03-05T09:51:49 | 3,625,755 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 14,311 | spec | {
"id": "mgm4459484.3",
"metadata": {
"mgm4459484.3.metadata.json": {
"format": "json",
"provider": "metagenomics.anl.gov"
}
},
"providers": {
"metagenomics.anl.gov": {
"files": {
"100.preprocess.info": {
"compression": null,
"description": null,
"size": 736,
"type": "txt",
"url": "http://api.metagenomics.anl.gov/analysis/data/id/mgm4459484.3/file/100.preprocess.info"
},
"100.preprocess.passed.fna.gz": {
"compression": "gzip",
"description": null,
"size": 731987,
"type": "fasta",
"url": "http://api.metagenomics.anl.gov/analysis/data/id/mgm4459484.3/file/100.preprocess.passed.fna.gz"
},
"100.preprocess.passed.fna.stats": {
"compression": null,
"description": null,
"size": 310,
"type": "fasta",
"url": "http://api.metagenomics.anl.gov/analysis/data/id/mgm4459484.3/file/100.preprocess.passed.fna.stats"
},
"100.preprocess.removed.fna.gz": {
"compression": "gzip",
"description": null,
"size": 17992,
"type": "fasta",
"url": "http://api.metagenomics.anl.gov/analysis/data/id/mgm4459484.3/file/100.preprocess.removed.fna.gz"
},
"100.preprocess.removed.fna.stats": {
"compression": null,
"description": null,
"size": 304,
"type": "fasta",
"url": "http://api.metagenomics.anl.gov/analysis/data/id/mgm4459484.3/file/100.preprocess.removed.fna.stats"
},
"205.screen.h_sapiens_asm.info": {
"compression": null,
"description": null,
"size": 450,
"type": "txt",
"url": "http://api.metagenomics.anl.gov/analysis/data/id/mgm4459484.3/file/205.screen.h_sapiens_asm.info"
},
"299.screen.info": {
"compression": null,
"description": null,
"size": 410,
"type": "txt",
"url": "http://api.metagenomics.anl.gov/analysis/data/id/mgm4459484.3/file/299.screen.info"
},
"299.screen.passed.fna.gcs": {
"compression": null,
"description": null,
"size": 1652,
"type": "fasta",
"url": "http://api.metagenomics.anl.gov/analysis/data/id/mgm4459484.3/file/299.screen.passed.fna.gcs"
},
"299.screen.passed.fna.gz": {
"compression": "gzip",
"description": null,
"size": 420091,
"type": "fasta",
"url": "http://api.metagenomics.anl.gov/analysis/data/id/mgm4459484.3/file/299.screen.passed.fna.gz"
},
"299.screen.passed.fna.lens": {
"compression": null,
"description": null,
"size": 504,
"type": "fasta",
"url": "http://api.metagenomics.anl.gov/analysis/data/id/mgm4459484.3/file/299.screen.passed.fna.lens"
},
"299.screen.passed.fna.stats": {
"compression": null,
"description": null,
"size": 310,
"type": "fasta",
"url": "http://api.metagenomics.anl.gov/analysis/data/id/mgm4459484.3/file/299.screen.passed.fna.stats"
},
"440.cluster.rna97.fna.gz": {
"compression": "gzip",
"description": null,
"size": 9593,
"type": "fasta",
"url": "http://api.metagenomics.anl.gov/analysis/data/id/mgm4459484.3/file/440.cluster.rna97.fna.gz"
},
"440.cluster.rna97.fna.stats": {
"compression": null,
"description": null,
"size": 306,
"type": "fasta",
"url": "http://api.metagenomics.anl.gov/analysis/data/id/mgm4459484.3/file/440.cluster.rna97.fna.stats"
},
"440.cluster.rna97.info": {
"compression": null,
"description": null,
"size": 947,
"type": "txt",
"url": "http://api.metagenomics.anl.gov/analysis/data/id/mgm4459484.3/file/440.cluster.rna97.info"
},
"440.cluster.rna97.mapping": {
"compression": null,
"description": null,
"size": 1185010,
"type": "txt",
"url": "http://api.metagenomics.anl.gov/analysis/data/id/mgm4459484.3/file/440.cluster.rna97.mapping"
},
"440.cluster.rna97.mapping.stats": {
"compression": null,
"description": null,
"size": 49,
"type": "txt",
"url": "http://api.metagenomics.anl.gov/analysis/data/id/mgm4459484.3/file/440.cluster.rna97.mapping.stats"
},
"450.rna.expand.lca.gz": {
"compression": "gzip",
"description": null,
"size": 77913,
"type": "txt",
"url": "http://api.metagenomics.anl.gov/analysis/data/id/mgm4459484.3/file/450.rna.expand.lca.gz"
},
"450.rna.expand.rna.gz": {
"compression": "gzip",
"description": null,
"size": 22845,
"type": "txt",
"url": "http://api.metagenomics.anl.gov/analysis/data/id/mgm4459484.3/file/450.rna.expand.rna.gz"
},
"450.rna.sims.filter.gz": {
"compression": "gzip",
"description": null,
"size": 15190,
"type": "txt",
"url": "http://api.metagenomics.anl.gov/analysis/data/id/mgm4459484.3/file/450.rna.sims.filter.gz"
},
"450.rna.sims.gz": {
"compression": "gzip",
"description": null,
"size": 161498,
"type": "txt",
"url": "http://api.metagenomics.anl.gov/analysis/data/id/mgm4459484.3/file/450.rna.sims.gz"
},
"900.abundance.function.gz": {
"compression": "gzip",
"description": null,
"size": 6583,
"type": "txt",
"url": "http://api.metagenomics.anl.gov/analysis/data/id/mgm4459484.3/file/900.abundance.function.gz"
},
"900.abundance.lca.gz": {
"compression": "gzip",
"description": null,
"size": 4640,
"type": "txt",
"url": "http://api.metagenomics.anl.gov/analysis/data/id/mgm4459484.3/file/900.abundance.lca.gz"
},
"900.abundance.md5.gz": {
"compression": "gzip",
"description": null,
"size": 10275,
"type": "txt",
"url": "http://api.metagenomics.anl.gov/analysis/data/id/mgm4459484.3/file/900.abundance.md5.gz"
},
"900.abundance.ontology.gz": {
"compression": "gzip",
"description": null,
"size": 43,
"type": "txt",
"url": "http://api.metagenomics.anl.gov/analysis/data/id/mgm4459484.3/file/900.abundance.ontology.gz"
},
"900.abundance.organism.gz": {
"compression": "gzip",
"description": null,
"size": 14622,
"type": "txt",
"url": "http://api.metagenomics.anl.gov/analysis/data/id/mgm4459484.3/file/900.abundance.organism.gz"
},
"900.loadDB.sims.filter.seq": {
"compression": null,
"description": null,
"size": 11243894,
"type": "txt",
"url": "http://api.metagenomics.anl.gov/analysis/data/id/mgm4459484.3/file/900.loadDB.sims.filter.seq"
},
"900.loadDB.source.stats": {
"compression": null,
"description": null,
"size": 122,
"type": "txt",
"url": "http://api.metagenomics.anl.gov/analysis/data/id/mgm4459484.3/file/900.loadDB.source.stats"
},
"999.done.COG.stats": {
"compression": null,
"description": null,
"size": 1,
"type": "txt",
"url": "http://api.metagenomics.anl.gov/analysis/data/id/mgm4459484.3/file/999.done.COG.stats"
},
"999.done.KO.stats": {
"compression": null,
"description": null,
"size": 1,
"type": "txt",
"url": "http://api.metagenomics.anl.gov/analysis/data/id/mgm4459484.3/file/999.done.KO.stats"
},
"999.done.NOG.stats": {
"compression": null,
"description": null,
"size": 1,
"type": "txt",
"url": "http://api.metagenomics.anl.gov/analysis/data/id/mgm4459484.3/file/999.done.NOG.stats"
},
"999.done.Subsystems.stats": {
"compression": null,
"description": null,
"size": 1,
"type": "txt",
"url": "http://api.metagenomics.anl.gov/analysis/data/id/mgm4459484.3/file/999.done.Subsystems.stats"
},
"999.done.class.stats": {
"compression": null,
"description": null,
"size": 439,
"type": "txt",
"url": "http://api.metagenomics.anl.gov/analysis/data/id/mgm4459484.3/file/999.done.class.stats"
},
"999.done.domain.stats": {
"compression": null,
"description": null,
"size": 29,
"type": "txt",
"url": "http://api.metagenomics.anl.gov/analysis/data/id/mgm4459484.3/file/999.done.domain.stats"
},
"999.done.family.stats": {
"compression": null,
"description": null,
"size": 1153,
"type": "txt",
"url": "http://api.metagenomics.anl.gov/analysis/data/id/mgm4459484.3/file/999.done.family.stats"
},
"999.done.genus.stats": {
"compression": null,
"description": null,
"size": 1379,
"type": "txt",
"url": "http://api.metagenomics.anl.gov/analysis/data/id/mgm4459484.3/file/999.done.genus.stats"
},
"999.done.order.stats": {
"compression": null,
"description": null,
"size": 590,
"type": "txt",
"url": "http://api.metagenomics.anl.gov/analysis/data/id/mgm4459484.3/file/999.done.order.stats"
},
"999.done.phylum.stats": {
"compression": null,
"description": null,
"size": 236,
"type": "txt",
"url": "http://api.metagenomics.anl.gov/analysis/data/id/mgm4459484.3/file/999.done.phylum.stats"
},
"999.done.rarefaction.stats": {
"compression": null,
"description": null,
"size": 22912,
"type": "txt",
"url": "http://api.metagenomics.anl.gov/analysis/data/id/mgm4459484.3/file/999.done.rarefaction.stats"
},
"999.done.sims.stats": {
"compression": null,
"description": null,
"size": 79,
"type": "txt",
"url": "http://api.metagenomics.anl.gov/analysis/data/id/mgm4459484.3/file/999.done.sims.stats"
},
"999.done.species.stats": {
"compression": null,
"description": null,
"size": 4426,
"type": "txt",
"url": "http://api.metagenomics.anl.gov/analysis/data/id/mgm4459484.3/file/999.done.species.stats"
}
},
"id": "mgm4459484.3",
"provider": "metagenomics.anl.gov",
"providerId": "mgm4459484.3"
}
},
"raw": {
"mgm4459484.3.fna.gz": {
"compression": "gzip",
"format": "fasta",
"provider": "metagenomics.anl.gov",
"url": "http://api.metagenomics.anl.gov/reads/mgm4459484.3"
}
}
} | [
"jared.wilkening@gmail.com"
] | jared.wilkening@gmail.com |
6ce349ee762970cd72732c32964c64ab5a9aa36f | e23a4f57ce5474d468258e5e63b9e23fb6011188 | /125_algorithms/_exercises/templates/Practical_Python_Programming_Practices/Practice 39. How to get Percentage of Uppercase and NUKE.py | ebdaf0316b7c79b9a2f73adaa920b50ccc796029 | [] | no_license | syurskyi/Python_Topics | 52851ecce000cb751a3b986408efe32f0b4c0835 | be331826b490b73f0a176e6abed86ef68ff2dd2b | refs/heads/master | 2023-06-08T19:29:16.214395 | 2023-05-29T17:09:11 | 2023-05-29T17:09:11 | 220,583,118 | 3 | 2 | null | 2023-02-16T03:08:10 | 2019-11-09T02:58:47 | Python | UTF-8 | Python | false | false | 328 | py | s = input("Insert some strings of Uppercase and Lowercase: ")
len_str = len(s)
upper = lower = 0
for i in s:
    if 'a' <= i <= 'z':  # inclusive bounds so 'a' and 'z' themselves are counted
        lower += 1
    elif 'A' <= i <= 'Z':
        upper += 1
print("Percentage of Uppercase: %.2f %%" % (upper/len_str * 100))
print("Percentage of Lowercase: %.2f %%" % (lower/len_str * 100)) | [
"sergejyurskyj@yahoo.com"
] | sergejyurskyj@yahoo.com |
d6bd4f9b0d5c2cf392d5313ef024de5f1b8ae429 | cfd93c9d0a39c1f1a2778a23977e7b3bd5fd9b84 | /baseline2018a-doc/baseline-doc/configs/astro-lsst-01_2022/deepdrillingcosmology1_prop.py | 43bbee995904d3508477450e9a88d2b4e4213039 | [
"CC-BY-4.0"
] | permissive | lsst-pst/survey_strategy | ec3f7a277b60559c6d7f0fad4a837e22255565ea | 47aa3e00576172bfeec264e2b594c99355b875ea | refs/heads/main | 2023-06-11T17:36:24.000124 | 2023-05-25T23:07:29 | 2023-05-25T23:07:29 | 102,892,062 | 10 | 6 | null | 2022-09-23T21:08:27 | 2017-09-08T18:24:28 | Jupyter Notebook | UTF-8 | Python | false | false | 8,041 | py | import lsst.sims.ocs.configuration.science.deep_drilling_cosmology1
assert type(config)==lsst.sims.ocs.configuration.science.deep_drilling_cosmology1.DeepDrillingCosmology1, 'config is of type %s.%s instead of lsst.sims.ocs.configuration.science.deep_drilling_cosmology1.DeepDrillingCosmology1' % (type(config).__module__, type(config).__name__)
# The maximum airmass allowed for any field.
config.sky_constraints.max_airmass=1.5
# The maximum fraction of clouds allowed for any field.
config.sky_constraints.max_cloud=0.7
# Flag to use 2 degree exclusion zone around bright planets.
config.sky_constraints.exclude_planets=True
# The minimum distance (units=degrees) from the moon a field must be.
config.sky_constraints.min_distance_moon=30.0
# Name for the proposal.
config.name='DeepDrillingCosmology1'
# Sky user regions for the proposal as a list of field Ids.
config.sky_user_regions=[290, 744, 1427, 2412, 2786]
config.sub_sequences={}
config.sub_sequences[0]=lsst.sims.ocs.configuration.proposal.sub_sequence.SubSequence()
# Time (units=seconds) between subsequent visits for a field/filter combination. Must be non-zero if number of grouped visits is greater than one.
config.sub_sequences[0].time_interval=259200.0
# The number of visits required for each filter in the sub-sequence.
config.sub_sequences[0].visits_per_filter=[20, 10, 20, 26, 20]
# Relative time when the window reaches maximum rank for subsequent grouped visits.
config.sub_sequences[0].time_window_max=1.0
# The number of required events for the sub-sequence.
config.sub_sequences[0].num_events=27
# The maximum number of events the sub-sequence is allowed to miss.
config.sub_sequences[0].num_max_missed=0
# Weighting factor for scaling the shape of the time window.
config.sub_sequences[0].time_weight=1.0
# The list of filters required for the sub-sequence.
config.sub_sequences[0].filters=['r', 'g', 'i', 'z', 'y']
# Relative time when the window opens for subsequent grouped visits.
config.sub_sequences[0].time_window_start=0.8
# Relative time when the window ends for subsequent grouped visits.
config.sub_sequences[0].time_window_end=1.4
# The identifier for the sub-sequence.
config.sub_sequences[0].name='main'
config.sub_sequences[1]=lsst.sims.ocs.configuration.proposal.sub_sequence.SubSequence()
# Time (units=seconds) between subsequent visits for a field/filter combination. Must be non-zero if number of grouped visits is greater than one.
config.sub_sequences[1].time_interval=86400.0
# The number of visits required for each filter in the sub-sequence.
config.sub_sequences[1].visits_per_filter=[20]
# Relative time when the window reaches maximum rank for subsequent grouped visits.
config.sub_sequences[1].time_window_max=1.0
# The number of required events for the sub-sequence.
config.sub_sequences[1].num_events=7
# The maximum number of events the sub-sequence is allowed to miss.
config.sub_sequences[1].num_max_missed=0
# Weighting factor for scaling the shape of the time window.
config.sub_sequences[1].time_weight=1.0
# The list of filters required for the sub-sequence.
config.sub_sequences[1].filters=['u']
# Relative time when the window opens for subsequent grouped visits.
config.sub_sequences[1].time_window_start=0.8
# Relative time when the window ends for subsequent grouped visits.
config.sub_sequences[1].time_window_end=1.4
# The identifier for the sub-sequence.
config.sub_sequences[1].name='u-band'
config.filters={}
config.filters['g']=lsst.sims.ocs.configuration.proposal.band_filter.BandFilter()
# Brightest magnitude limit for filter.
config.filters['g'].bright_limit=19.5
# Darkest magnitude limit for filter.
config.filters['g'].dark_limit=30.0
# The maximum seeing limit for filter
config.filters['g'].max_seeing=1.5
# Band name of the filter.
config.filters['g'].name='g'
# The list of exposure times (units=seconds) for the filter
config.filters['g'].exposures=[15.0, 15.0]
config.filters['i']=lsst.sims.ocs.configuration.proposal.band_filter.BandFilter()
# Brightest magnitude limit for filter.
config.filters['i'].bright_limit=19.5
# Darkest magnitude limit for filter.
config.filters['i'].dark_limit=30.0
# The maximum seeing limit for filter
config.filters['i'].max_seeing=1.5
# Band name of the filter.
config.filters['i'].name='i'
# The list of exposure times (units=seconds) for the filter
config.filters['i'].exposures=[15.0, 15.0]
config.filters['r']=lsst.sims.ocs.configuration.proposal.band_filter.BandFilter()
# Brightest magnitude limit for filter.
config.filters['r'].bright_limit=19.5
# Darkest magnitude limit for filter.
config.filters['r'].dark_limit=30.0
# The maximum seeing limit for filter
config.filters['r'].max_seeing=1.5
# Band name of the filter.
config.filters['r'].name='r'
# The list of exposure times (units=seconds) for the filter
config.filters['r'].exposures=[15.0, 15.0]
config.filters['u']=lsst.sims.ocs.configuration.proposal.band_filter.BandFilter()
# Brightest magnitude limit for filter.
config.filters['u'].bright_limit=21.3
# Darkest magnitude limit for filter.
config.filters['u'].dark_limit=30.0
# The maximum seeing limit for filter
config.filters['u'].max_seeing=1.5
# Band name of the filter.
config.filters['u'].name='u'
# The list of exposure times (units=seconds) for the filter
config.filters['u'].exposures=[15.0, 15.0]
config.filters['y']=lsst.sims.ocs.configuration.proposal.band_filter.BandFilter()
# Brightest magnitude limit for filter.
config.filters['y'].bright_limit=17.5
# Darkest magnitude limit for filter.
config.filters['y'].dark_limit=30.0
# The maximum seeing limit for filter
config.filters['y'].max_seeing=1.5
# Band name of the filter.
config.filters['y'].name='y'
# The list of exposure times (units=seconds) for the filter
config.filters['y'].exposures=[15.0, 15.0]
config.filters['z']=lsst.sims.ocs.configuration.proposal.band_filter.BandFilter()
# Brightest magnitude limit for filter.
config.filters['z'].bright_limit=17.5
# Darkest magnitude limit for filter.
config.filters['z'].dark_limit=30.0
# The maximum seeing limit for filter
config.filters['z'].max_seeing=1.5
# Band name of the filter.
config.filters['z'].name='z'
# The list of exposure times (units=seconds) for the filter
config.filters['z'].exposures=[15.0, 15.0]
# Flag to restart sequences that were lost due to observational constraints.
config.scheduling.restart_lost_sequences=True
# Bonus to apply to fields giving precedence to low-airmass ones. Bonus runs from 0 to 1.
config.scheduling.airmass_bonus=0.0
# Bonus to apply to fields giving precedence to fields near the meridian. Bonus runs from 0 to 1.
config.scheduling.hour_angle_bonus=0.3
# The maximum number of visits requested for the proposal over the lifetime of the survey. This affects the time-balancing for the proposal, but does not prevent more visits from being taken.
config.scheduling.max_visits_goal=250000
# Flag to determine if consecutive visits are accepted.
config.scheduling.accept_consecutive_visits=True
# Flag to restart sequences that were already completed.
config.scheduling.restart_complete_sequences=True
# Maximum hour angle (units=hours) for the bonus factor calculation. Hour angles larger than this will cause the bonus to be negative. Range is 0.1 to 12.
config.scheduling.hour_angle_max=6.0
# The maximum number of targets the proposal will propose.
config.scheduling.max_num_targets=100
# Flag to determine if observations other than proposal's top target are accepted.
config.scheduling.accept_serendipity=False
# The sun altitude (units=degrees) for twilight consideration.
config.sky_nightly_bounds.twilight_boundary=-12.0
# LST extent (units=degrees) before sunset LST (-) and after sunrise LST (+) for providing a region of the sky to select.
config.sky_nightly_bounds.delta_lst=60.0
config.master_sub_sequences={}
# Angle (units=degrees) around the observing site's latitude for which to create a Declination window for field selection.
config.sky_exclusion.dec_window=90.0
config.sky_exclusion.selections={}
| [
"lynnej@uw.edu"
] | lynnej@uw.edu |
4161acb2fdd7cfcdf83223d02609b5bd32490bb8 | 42eaacac77b57d7bd1379afe249e2d3286596fe4 | /problems/1/problem173.py | bbdea0dd73ae7f3abe9553900c94ffebec17e20c | [] | no_license | JustinKnueppel/ProjectEuler | 08256bda59a4ad6c40d33bada17c59e3338c3525 | 21a805a061383bc75a2ec6eb7473975e377e701a | refs/heads/master | 2021-09-27T07:46:03.073046 | 2018-11-07T01:20:37 | 2018-11-07T01:20:37 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 542 | py | """Solution to problem 173 on project euler"""
# https://projecteuler.net/problem=173
# We shall define a square lamina to be a square outline with a square "hole" so that the shape possesses vertical and horizontal symmetry. For example, using exactly thirty-two square tiles we can form two different square laminae:
# With one-hundred tiles, and not necessarily using all of the tiles at one time, it is possible to form forty-one different square laminae.
# Using up to one million tiles how many different square laminae can be formed?
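
# A hedged sketch of one possible approach (not necessarily the intended
# solution): a lamina with outer side n and a centred square hole of side m
# (same parity, 1 <= m <= n - 2) uses n*n - m*m tiles, so count the (n, m)
# pairs whose tile count stays within the limit.
def count_laminae(limit=10**6):
    count = 0
    outer = 3
    while 4 * outer - 4 <= limit:  # 4n - 4 tiles is the thinnest lamina for side n
        inner = outer - 2
        while inner >= 1 and outer * outer - inner * inner <= limit:
            count += 1
            inner -= 2
        outer += 1
    return count


if __name__ == "__main__":
    print(count_laminae())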
| [
"justinknueppel@gmail.com"
] | justinknueppel@gmail.com |
65842432a6e0af776d78eb30c10bf3c942ea422a | e3365bc8fa7da2753c248c2b8a5c5e16aef84d9f | /indices/devilri.py | f0d9762b1a00466ade759ffe4c8bfaca93e61ad1 | [] | no_license | psdh/WhatsintheVector | e8aabacc054a88b4cb25303548980af9a10c12a8 | a24168d068d9c69dc7a0fd13f606c080ae82e2a6 | refs/heads/master | 2021-01-25T10:34:22.651619 | 2015-09-23T11:54:06 | 2015-09-23T11:54:06 | 42,749,205 | 2 | 3 | null | 2015-09-23T11:54:07 | 2015-09-18T22:06:38 | Python | UTF-8 | Python | false | false | 81 | py | ii = [('KembFJ1.py', 1), ('AinsWRR.py', 1), ('KembFJ2.py', 1), ('LewiMJW.py', 1)] | [
"prabhjyotsingh95@gmail.com"
] | prabhjyotsingh95@gmail.com |
8048ac30dfdd0589ef3441d32f3d6debb2b76c92 | 039f2c747a9524daa1e45501ada5fb19bd5dd28f | /ARC041/ARC041d.py | 0d72600f2ec256a877e7d9897a34350f18b80ed0 | [
"Unlicense"
] | permissive | yuto-moriizumi/AtCoder | 86dbb4f98fea627c68b5391bf0cc25bcce556b88 | 21acb489f1594bbb1cdc64fbf8421d876b5b476d | refs/heads/master | 2023-03-25T08:10:31.738457 | 2021-03-23T08:48:01 | 2021-03-23T08:48:01 | 242,283,632 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 136 | py | #ARC041d
def main():
import sys
input=sys.stdin.readline
sys.setrecursionlimit(10**6)
if __name__ == '__main__':
main() | [
"kurvan1112@gmail.com"
] | kurvan1112@gmail.com |
72a4439b0560de9d63f09cece712d914e09dd71d | 1046db6bc56b41d01b5ccb885f3686918c657ecc | /matrix/argparser.py | bb228ab7d929d67c5622d509a5757d29198ba904 | [] | no_license | astsu-dev/matrix | 016d4044da640337ec3fde1611befcbbbb76f749 | 1835804fba08ad2d24cb430c6d3234d736001074 | refs/heads/master | 2023-04-26T08:49:21.993850 | 2021-05-30T10:23:50 | 2021-05-30T10:23:50 | 307,655,496 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 601 | py | import argparse
from .consts import AVAILABLE_COLORS
from .defaults import DEFAULT_CHARS, DEFAULT_COLOR, DEFAULT_SPEED
def setup_argparser(parser: argparse.ArgumentParser) -> None:
    """Register the matrix CLI options (color, speed, chars) on the given parser."""
parser.add_argument("--color", "-c", default=DEFAULT_COLOR,
choices=AVAILABLE_COLORS, help="matrix characters color")
parser.add_argument("--speed", "-s", type=int,
default=DEFAULT_SPEED, help="lines per second")
parser.add_argument("--chars", "-ch", type=list, default=DEFAULT_CHARS,
help="matrix will consist of these characters")
| [
"None"
] | None |
faaba892ad47eb547e3d7fe82eb2ae986794cda1 | 537b7b1d67f39b2c0351d58906e7b24125866d5f | /github_apis_sample/create_new_branch_from_working_branch.py | 0420cd33a9821e808d20e4f53c1703b4bf523418 | [] | no_license | PhungXuanAnh/python-note | 8836ca37d9254c504f9801acca3977e2c28a4f60 | 9181c4845af32c2148e65313cbb51c2837420078 | refs/heads/master | 2023-08-31T02:43:16.469562 | 2023-08-22T03:14:44 | 2023-08-22T03:14:44 | 94,281,269 | 16 | 5 | null | 2023-05-22T22:30:46 | 2017-06-14T02:46:49 | Python | UTF-8 | Python | false | false | 8,240 | py | import time
import requests
import json
import sys
sys.path.append("/home/xuananh/repo/python-note")
from subprocess_sample.subprocess_sample import run_command_print_output
gh_token = open("/home/xuananh/Dropbox/Work/Other/credentials_bk/github_basic-token-PhungXuanAnh.txt", "r").read()
def create_pull_request(owner, repo, base_branch, working_branch):
"""
Reference: https://docs.github.com/en/rest/pulls/pulls?apiVersion=2022-11-28#create-a-pull-request
curl -L \
-X POST \
-H "Accept: application/vnd.github+json" \
-H "Authorization: Bearer <YOUR-TOKEN>"\
-H "X-GitHub-Api-Version: 2022-11-28" \
https://api.github.com/repos/OWNER/REPO/pulls \
-d '{"title":"Amazing new feature","body":"Please pull these awesome changes in!","head":"octocat:new-feature","base":"master"}'
"""
resp = requests.post(
url=f"https://api.github.com/repos/{owner}/{repo}/pulls",
headers={
"Authorization": f"Bearer {gh_token}",
"Accept": "application/vnd.github+json",
"X-GitHub-Api-Version": "2022-11-28",
},
json={
"title": f"Temporary PR for {working_branch}",
"body": "Please pull these awesome changes in!",
"head": working_branch,
"base": base_branch,
},
)
response = resp.json()
pr_number = response.get('number')
if pr_number:
print(f" ==============> pull request id: {pr_number}")
print(f" ==============> pull request url: {response.get('html_url')}")
else:
print("Create PR failed: ", json.dumps(response, indent=4, sort_keys=True))
return response
def create_new_branch(repository_dir, base_branch, working_branch):
"""
    base_branch: the branch from which working_branch is created
"""
new_branch_name = f"{working_branch}_{int(time.time())}"
command = (
f"cd {repository_dir} && "
f"git checkout {base_branch} && "
f"git pull xuananh {base_branch} && "
f"git checkout -b {new_branch_name} && "
f"git push xuananh {new_branch_name}"
)
return_code = run_command_print_output(command)
if return_code != 0:
print("Error while creating new branch")
sys.exit()
else:
print(" ==============> new branch name: ", new_branch_name)
return new_branch_name
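
# For example (illustrative values): base_branch="main" and working_branch="feature/x"
# yield a branch named like "feature/x_1700000000" pushed to the "xuananh" remote.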
def get_pr(owner, repo, pr_number):
"""
Reference: https://docs.github.com/en/rest/pulls/pulls?apiVersion=2022-11-28#get-a-pull-request
curl -L \
-H "Accept: application/vnd.github+json" \
-H "Authorization: Bearer <YOUR-TOKEN>"\
-H "X-GitHub-Api-Version: 2022-11-28" \
https://api.github.com/repos/OWNER/REPO/pulls/PULL_NUMBER
"""
resp = requests.get(
url=f"https://api.github.com/repos/{owner}/{repo}/pulls/{pr_number}",
headers={
"Authorization": f"Bearer {gh_token}",
"Accept": "application/vnd.github+json",
"X-GitHub-Api-Version": "2022-11-28",
},
)
resp = resp.json()
# print(json.dumps(resp, indent=4, sort_keys=True))
return resp
def is_mergeable(owner, repo, pr_number):
pr = get_pr(owner, repo, pr_number)
return pr.get('mergeable')
def merge_pr(owner, repo, pr_number, commit_title):
"""
Reference: https://docs.github.com/en/rest/pulls/pulls?apiVersion=2022-11-28#merge-a-pull-request
curl -L \
-X PUT \
-H "Accept: application/vnd.github+json" \
-H "Authorization: Bearer <YOUR-TOKEN>"\
-H "X-GitHub-Api-Version: 2022-11-28" \
https://api.github.com/repos/OWNER/REPO/pulls/PULL_NUMBER/merge \
-d '{"commit_title":"Expand enum","commit_message":"Add a new value to the merge_method enum"}'
"""
resp = requests.put(
url=f"https://api.github.com/repos/{owner}/{repo}/pulls/{pr_number}/merge",
headers={
"Authorization": f"Bearer {gh_token}",
"Accept": "application/vnd.github+json",
"X-GitHub-Api-Version": "2022-11-28",
},
json={
"commit_title": commit_title,
"commit_message": "",
"merge_method": "squash"
},
)
response = resp.json()
print(json.dumps(response, indent=4, sort_keys=True))
return response
def pull_new_branch_after_merge(repository_dir, new_branch_name):
command = (
f"cd {repository_dir} && "
f"git checkout {new_branch_name} && "
f"git pull xuananh {new_branch_name}"
)
return_code = run_command_print_output(command)
if return_code != 0:
print("EEEEEEEEEEEEEEEError while pull branch")
sys.exit()
else:
print(f"Completed to create new working branch: {new_branch_name}")
def merge_working_branch_to_main_branch(working_branch, main_branch, repository_dir, owner, repo, merge_right_now):
"""
    1. Create a new branch <working_branch>_<timestamp> from the main branch and
       push it to the PhungXuanAnh remote.
    2. Create a PR from the working branch into the new branch:
       https://docs.github.com/en/rest/pulls/pulls?apiVersion=2022-11-28#create-a-pull-request
    3. Check the PR for conflicts:
       https://docs.github.com/en/rest/pulls/pulls?apiVersion=2022-11-28#get-a-pull-request
    4. If there is no conflict, accept the PR and squash-merge it; otherwise stop
       and finish manually: push the new branch to the remote, note the printed
       branch name, and create the PR by hand.
    5. Optionally continue by creating a PR that merges the new branch into the
       main branch.
"""
# merge working branch to new branch
new_branch_name = create_new_branch(repository_dir, main_branch, working_branch)
temporary_pr = create_pull_request(owner, repo, base_branch=new_branch_name, working_branch=working_branch)
pr_number = temporary_pr.get("number")
if is_mergeable(owner, repo, pr_number):
merge_pr(owner, repo, pr_number, commit_title=working_branch)
pull_new_branch_after_merge(repository_dir, new_branch_name)
else:
print(f"Cannot merge pull request for updating code to new created branch: {pr_number}")
if merge_right_now:
# merge new branch to main branch
new_pr = create_pull_request(owner, repo, base_branch=main_branch, working_branch=new_branch_name)
new_pr_number = new_pr.get("number")
if is_mergeable(owner, repo, new_pr_number):
merge_pr(owner, repo, new_pr_number, commit_title=working_branch)
pull_new_branch_after_merge(repository_dir, main_branch)
else:
print(f"Cannot merge pull request: {new_pr_number}")
def create_new_branch_spectre_dashboard_repo(working_branch, main_branch, merge_right_now):
repository_dir = "/home/xuananh/repo/Spectre.Dashboard.Backend"
owner = "PhungXuanAnh"
repo = "Spectre.Dashboard.Backend"
merge_working_branch_to_main_branch(working_branch, main_branch, repository_dir, owner, repo, merge_right_now)
def create_new_branch_ablr_repo(working_branch, main_branch, merge_right_now):
repository_dir = "/home/xuananh/repo/ablr_django"
owner = "PhungXuanAnh"
repo = "ablr_django"
merge_working_branch_to_main_branch(working_branch, main_branch, repository_dir, owner, repo, merge_right_now)
def create_new_branch_castnet_repo(working_branch, main_branch, merge_right_now):
repository_dir = "/home/xuananh/repo/castnet"
owner = "PhungXuanAnh"
repo = "castnet"
merge_working_branch_to_main_branch(working_branch, main_branch, repository_dir, owner, repo, merge_right_now)
if __name__ == "__main__":
working_branch = 'feature/update-settlements-api___fixtest'
main_branch = 'feature/update-settlements-api'
merge_right_now = True
# create_new_branch_spectre_dashboard_repo(working_branch, main_branch, merge_right_now)
create_new_branch_ablr_repo(working_branch, main_branch, merge_right_now)
# create_new_branch_castnet_repo(working_branch, main_branch, merge_right_now)
| [
"phungxuananh1991@gmail.com"
] | phungxuananh1991@gmail.com |
64ed1cf2b4a71b236997456f92b6ad8258b2fd68 | 9fb52109b2fb6e6e2ebc49d646e5436406bc60c2 | /tests/pools/test_add_liquidity_initial.py | 5c4e5361bf7bc6979040788c2e6c73affcf2f90f | [] | no_license | lurium/curve-factory | d5083c116b006f3a68f6500081d3494d3a96d317 | d6f0ef79f0fbb215033330cd1b61e78eee5cb0a1 | refs/heads/master | 2023-03-13T01:35:57.683184 | 2021-03-01T16:52:03 | 2021-03-01T16:52:03 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 995 | py | import brownie
import pytest
pytestmark = pytest.mark.usefixtures("mint_alice", "approve_alice")
@pytest.mark.parametrize("min_amount", [0, 2 * 10**18])
def test_initial(
alice, swap, wrapped_coins, min_amount, wrapped_decimals, initial_amounts, base_pool
):
amounts = [10**i for i in wrapped_decimals]
swap.add_liquidity(amounts, min_amount, {'from': alice})
for coin, amount, initial in zip(wrapped_coins, amounts, initial_amounts):
assert coin.balanceOf(alice) == initial - amount
assert coin.balanceOf(swap) == amount
ideal = 10**18 + base_pool.get_virtual_price()
assert 0.9999 < swap.balanceOf(alice) / ideal < 1
assert swap.balanceOf(alice) == swap.totalSupply()
@pytest.mark.parametrize("idx", range(2))
def test_initial_liquidity_missing_coin(alice, swap, idx, wrapped_decimals):
amounts = [10**i for i in wrapped_decimals]
amounts[idx] = 0
with brownie.reverts():
swap.add_liquidity(amounts, 0, {'from': alice})
| [
"ben@hauser.id"
] | ben@hauser.id |
b84d3b11fa6dfbb8473a9b3f047db072ab6e4e1c | 733496067584ee32eccc333056c82d60f673f211 | /idfy_rest_client/models/lei_extension.py | 1450e531f5c6749b7d897d4b93f8308aab8171d1 | [
"MIT"
] | permissive | dealflowteam/Idfy | 90ee5fefaa5283ce7dd3bcee72ace4615ffd15d2 | fa3918a6c54ea0eedb9146578645b7eb1755b642 | refs/heads/master | 2020-03-07T09:11:15.410502 | 2018-03-30T08:12:40 | 2018-03-30T08:12:40 | 127,400,869 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,046 | py | # -*- coding: utf-8 -*-
"""
idfy_rest_client.models.lei_extension
This file was automatically generated for Idfy by APIMATIC v2.0 ( https://apimatic.io )
"""
import idfy_rest_client.models.lei_normalizations
class LeiExtension(object):
"""Implementation of the 'LeiExtension' model.
TODO: type model description here.
Attributes:
normalizations (LeiNormalizations): TODO: type description here.
"""
# Create a mapping from Model property names to API property names
_names = {
"normalizations":'Normalizations'
}
def __init__(self,
normalizations=None,
additional_properties = {}):
"""Constructor for the LeiExtension class"""
# Initialize members of the class
self.normalizations = normalizations
# Add additional model properties to the instance
self.additional_properties = additional_properties
@classmethod
def from_dictionary(cls,
dictionary):
"""Creates an instance of this model from a dictionary
Args:
dictionary (dictionary): A dictionary representation of the object as
obtained from the deserialization of the server's response. The keys
MUST match property names in the API description.
Returns:
object: An instance of this structure class.
"""
if dictionary is None:
return None
# Extract variables from the dictionary
normalizations = idfy_rest_client.models.lei_normalizations.LeiNormalizations.from_dictionary(dictionary.get('Normalizations')) if dictionary.get('Normalizations') else None
# Clean out expected properties from dictionary
for key in cls._names.values():
if key in dictionary:
del dictionary[key]
# Return an object of this model
return cls(normalizations,
dictionary)
| [
"runes@unipluss.no"
] | runes@unipluss.no |
b04fa65ba521b2aebad06173fd2eff8204459b7f | a2f6e449e6ec6bf54dda5e4bef82ba75e7af262c | /venv/Lib/site-packages/pandas/tests/io/test_spss.py | 0cf9168a66d613bfa9649ed5cfc46d95a4139ef2 | [] | no_license | mylonabusiness28/Final-Year-Project- | e4b79ccce6c19a371cac63c7a4ff431d6e26e38f | 68455795be7902b4032ee1f145258232212cc639 | refs/heads/main | 2023-07-08T21:43:49.300370 | 2021-06-05T12:34:16 | 2021-06-05T12:34:16 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 129 | py | version https://git-lfs.github.com/spec/v1
oid sha256:ee5880b545343d135efe088b0a70ff099001d051e96f7319293415e05e61e458
size 2821
| [
"chuksajeh1@gmail.com"
] | chuksajeh1@gmail.com |
f4660a795a1f9fe843844d07ee849fd94b4979b2 | e59fe240f0359aa32c59b5e9f581db0bfdb315b8 | /galaxy-dist/lib/galaxy/jobs/__init__.py | 42fa0270e1aa2eca880ded988ff6bcb1e0755ca9 | [
"CC-BY-2.5",
"AFL-2.1",
"AFL-3.0",
"CC-BY-3.0",
"LicenseRef-scancode-unknown-license-reference"
] | permissive | subway/Galaxy-Distribution | dc269a0258471597d483687a0f1dd9e10bd47448 | d16d6f9b6a8b7f41a218c06539863c8ce4d5a73c | refs/heads/master | 2021-06-30T06:26:55.237251 | 2015-07-04T23:55:51 | 2015-07-04T23:55:51 | 15,899,275 | 1 | 2 | null | 2020-10-07T06:17:26 | 2014-01-14T10:47:28 | Groff | UTF-8 | Python | false | false | 80,687 | py | """
Support for running a tool in Galaxy via an internal job management system
"""
import time
import copy
import datetime
import galaxy
import logging
import os
import pwd
import random
import re
import shutil
import subprocess
import sys
import threading
import traceback
from galaxy import model, util
from galaxy.datatypes import metadata
from galaxy.exceptions import ObjectInvalid, ObjectNotFound
from galaxy.jobs.actions.post import ActionBox
from galaxy.jobs.mapper import JobRunnerMapper
from galaxy.jobs.runners import BaseJobRunner
from galaxy.util.bunch import Bunch
from galaxy.util.expressions import ExpressionContext
from galaxy.util.json import from_json_string
from galaxy.util import unicodify
from .output_checker import check_output
log = logging.getLogger( __name__ )
DATABASE_MAX_STRING_SIZE = util.DATABASE_MAX_STRING_SIZE
DATABASE_MAX_STRING_SIZE_PRETTY = util.DATABASE_MAX_STRING_SIZE_PRETTY
# This file, if created in the job's working directory, will be used for
# setting advanced metadata properties on the job and its associated outputs.
# This interface is currently experimental, is only used by the upload tool,
# and should eventually become API'd
TOOL_PROVIDED_JOB_METADATA_FILE = 'galaxy.json'
class Sleeper( object ):
"""
Provides a 'sleep' method that sleeps for a number of seconds *unless*
    the wake method is called (from a different thread).
"""
def __init__( self ):
self.condition = threading.Condition()
def sleep( self, seconds ):
self.condition.acquire()
self.condition.wait( seconds )
self.condition.release()
def wake( self ):
self.condition.acquire()
self.condition.notify()
self.condition.release()
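
    # Illustrative use (assumed, not part of the original class): a monitor
    # thread loops on sleeper.sleep( interval ) while another thread calls
    # sleeper.wake() to end the wait immediately, e.g. during shutdown.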
class JobDestination( Bunch ):
"""
Provides details about where a job runs
"""
def __init__(self, **kwds):
self['id'] = None
self['url'] = None
self['tags'] = None
self['runner'] = None
self['legacy'] = False
self['converted'] = False
# dict is appropriate (rather than a bunch) since keys may not be valid as attributes
self['params'] = dict()
super(JobDestination, self).__init__(**kwds)
# Store tags as a list
if self.tags is not None:
self['tags'] = [ x.strip() for x in self.tags.split(',') ]
class JobToolConfiguration( Bunch ):
"""
Provides details on what handler and destination a tool should use
A JobToolConfiguration will have the required attribute 'id' and optional
attributes 'handler', 'destination', and 'params'
"""
def __init__(self, **kwds):
self['handler'] = None
self['destination'] = None
self['params'] = dict()
super(JobToolConfiguration, self).__init__(**kwds)
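
    # e.g. (illustrative values): JobToolConfiguration(id='filter_tool',
    # handler='main', destination='local_out') maps that tool to a specific
    # handler and destination.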
class JobConfiguration( object ):
"""A parser and interface to advanced job management features.
These features are configured in the job configuration, by default, ``job_conf.xml``
"""
DEFAULT_NWORKERS = 4
def __init__(self, app):
"""Parse the job configuration XML.
"""
self.app = app
self.runner_plugins = []
self.handlers = {}
self.default_handler_id = None
self.destinations = {}
self.destination_tags = {}
self.default_destination_id = None
self.tools = {}
self.limits = Bunch()
# Initialize the config
try:
tree = util.parse_xml(self.app.config.job_config_file)
self.__parse_job_conf_xml(tree)
except IOError:
log.warning( 'Job configuration "%s" does not exist, using legacy job configuration from Galaxy config file "%s" instead' % ( self.app.config.job_config_file, self.app.config.config_file ) )
self.__parse_job_conf_legacy()
def __parse_job_conf_xml(self, tree):
"""Loads the new-style job configuration from options in the job config file (by default, job_conf.xml).
:param tree: Object representing the root ``<job_conf>`` object in the job config file.
:type tree: ``xml.etree.ElementTree.Element``
"""
root = tree.getroot()
log.debug('Loading job configuration from %s' % self.app.config.job_config_file)
# Parse job plugins
plugins = root.find('plugins')
if plugins is not None:
for plugin in self.__findall_with_required(plugins, 'plugin', ('id', 'type', 'load')):
if plugin.get('type') == 'runner':
workers = plugin.get('workers', plugins.get('workers', JobConfiguration.DEFAULT_NWORKERS))
runner_kwds = self.__get_params(plugin)
runner_info = dict(id=plugin.get('id'),
load=plugin.get('load'),
workers=int(workers),
kwds=runner_kwds)
self.runner_plugins.append(runner_info)
else:
log.error('Unknown plugin type: %s' % plugin.get('type'))
# Load tasks if configured
if self.app.config.use_tasked_jobs:
self.runner_plugins.append(dict(id='tasks', load='tasks', workers=self.app.config.local_task_queue_workers))
# Parse handlers
handlers = root.find('handlers')
if handlers is not None:
for handler in self.__findall_with_required(handlers, 'handler'):
id = handler.get('id')
if id in self.handlers:
log.error("Handler '%s' overlaps handler with the same name, ignoring" % id)
else:
log.debug("Read definition for handler '%s'" % id)
self.handlers[id] = (id,)
if handler.get('tags', None) is not None:
for tag in [ x.strip() for x in handler.get('tags').split(',') ]:
if tag in self.handlers:
self.handlers[tag].append(id)
else:
self.handlers[tag] = [id]
# Determine the default handler(s)
self.default_handler_id = self.__get_default(handlers, self.handlers.keys())
# Parse destinations
destinations = root.find('destinations')
for destination in self.__findall_with_required(destinations, 'destination', ('id', 'runner')):
id = destination.get('id')
job_destination = JobDestination(**dict(destination.items()))
job_destination['params'] = self.__get_params(destination)
self.destinations[id] = (job_destination,)
if job_destination.tags is not None:
for tag in job_destination.tags:
if tag not in self.destinations:
self.destinations[tag] = []
self.destinations[tag].append(job_destination)
# Determine the default destination
self.default_destination_id = self.__get_default(destinations, self.destinations.keys())
# Parse tool mappings
tools = root.find('tools')
if tools is not None:
for tool in self.__findall_with_required(tools, 'tool'):
# There can be multiple definitions with identical ids, but different params
id = tool.get('id').lower()
if id not in self.tools:
self.tools[id] = list()
self.tools[id].append(JobToolConfiguration(**dict(tool.items())))
self.tools[id][-1]['params'] = self.__get_params(tool)
types = dict(registered_user_concurrent_jobs = int,
anonymous_user_concurrent_jobs = int,
walltime = str,
output_size = int)
self.limits = Bunch(registered_user_concurrent_jobs = None,
anonymous_user_concurrent_jobs = None,
walltime = None,
walltime_delta = None,
output_size = None,
concurrent_jobs = {})
# Parse job limits
limits = root.find('limits')
if limits is not None:
for limit in self.__findall_with_required(limits, 'limit', ('type',)):
type = limit.get('type')
if type == 'concurrent_jobs':
id = limit.get('tag', None) or limit.get('id')
self.limits.concurrent_jobs[id] = int(limit.text)
elif limit.text:
self.limits.__dict__[type] = types.get(type, str)(limit.text)
if self.limits.walltime is not None:
h, m, s = [ int( v ) for v in self.limits.walltime.split( ':' ) ]
self.limits.walltime_delta = datetime.timedelta( 0, s, 0, 0, m, h )
log.debug('Done loading job configuration')
def __parse_job_conf_legacy(self):
"""Loads the old-style job configuration from options in the galaxy config file (by default, universe_wsgi.ini).
"""
log.debug('Loading job configuration from %s' % self.app.config.config_file)
# Always load local and lwr
self.runner_plugins = [dict(id='local', load='local', workers=self.app.config.local_job_queue_workers), dict(id='lwr', load='lwr', workers=self.app.config.cluster_job_queue_workers)]
# Load tasks if configured
if self.app.config.use_tasked_jobs:
self.runner_plugins.append(dict(id='tasks', load='tasks', workers=self.app.config.local_task_queue_workers))
for runner in self.app.config.start_job_runners:
self.runner_plugins.append(dict(id=runner, load=runner, workers=self.app.config.cluster_job_queue_workers))
# Set the handlers
for id in self.app.config.job_handlers:
self.handlers[id] = (id,)
self.handlers['default_job_handlers'] = self.app.config.default_job_handlers
self.default_handler_id = 'default_job_handlers'
# Set tool handler configs
for id, tool_handlers in self.app.config.tool_handlers.items():
self.tools[id] = list()
for handler_config in tool_handlers:
# rename the 'name' key to 'handler'
handler_config['handler'] = handler_config.pop('name')
self.tools[id].append(JobToolConfiguration(**handler_config))
# Set tool runner configs
for id, tool_runners in self.app.config.tool_runners.items():
# Might have been created in the handler parsing above
if id not in self.tools:
self.tools[id] = list()
for runner_config in tool_runners:
url = runner_config['url']
if url not in self.destinations:
# Create a new "legacy" JobDestination - it will have its URL converted to a destination params once the appropriate plugin has loaded
self.destinations[url] = (JobDestination(id=url, runner=url.split(':', 1)[0], url=url, legacy=True, converted=False),)
for tool_conf in self.tools[id]:
if tool_conf.params == runner_config.get('params', {}):
tool_conf['destination'] = url
break
else:
# There was not an existing config (from the handlers section) with the same params
# rename the 'url' key to 'destination'
runner_config['destination'] = runner_config.pop('url')
self.tools[id].append(JobToolConfiguration(**runner_config))
self.destinations[self.app.config.default_cluster_job_runner] = (JobDestination(id=self.app.config.default_cluster_job_runner, runner=self.app.config.default_cluster_job_runner.split(':', 1)[0], url=self.app.config.default_cluster_job_runner, legacy=True, converted=False),)
self.default_destination_id = self.app.config.default_cluster_job_runner
# Set the job limits
self.limits = Bunch(registered_user_concurrent_jobs = self.app.config.registered_user_job_limit,
anonymous_user_concurrent_jobs = self.app.config.anonymous_user_job_limit,
walltime = self.app.config.job_walltime,
walltime_delta = self.app.config.job_walltime_delta,
output_size = self.app.config.output_size_limit,
concurrent_jobs = {})
log.debug('Done loading job configuration')
def __get_default(self, parent, names):
"""Returns the default attribute set in a parent tag like <handlers> or <destinations>, or return the ID of the child, if there is no explicit default and only one child.
:param parent: Object representing a tag that may or may not have a 'default' attribute.
:type parent: ``xml.etree.ElementTree.Element``
:param names: The list of destination or handler IDs or tags that were loaded.
:type names: list of str
:returns: str -- id or tag representing the default.
"""
rval = parent.get('default')
if rval is not None:
# If the parent element has a 'default' attribute, use the id or tag in that attribute
if rval not in names:
raise Exception("<%s> default attribute '%s' does not match a defined id or tag in a child element" % (parent.tag, rval))
log.debug("<%s> default set to child with id or tag '%s'" % (parent.tag, rval))
elif len(names) == 1:
log.info("Setting <%s> default to child with id '%s'" % (parent.tag, names[0]))
rval = names[0]
else:
raise Exception("No <%s> default specified, please specify a valid id or tag with the 'default' attribute" % parent.tag)
return rval
def __findall_with_required(self, parent, match, attribs=None):
"""Like ``xml.etree.ElementTree.Element.findall()``, except only returns children that have the specified attribs.
:param parent: Parent element in which to find.
:type parent: ``xml.etree.ElementTree.Element``
:param match: Name of child elements to find.
:type match: str
:param attribs: List of required attributes in children elements.
:type attribs: list of str
:returns: list of ``xml.etree.ElementTree.Element``
"""
rval = []
if attribs is None:
attribs = ('id',)
for elem in parent.findall(match):
for attrib in attribs:
if attrib not in elem.attrib:
log.warning("required '%s' attribute is missing from <%s> element" % (attrib, match))
break
else:
rval.append(elem)
return rval
def __get_params(self, parent):
"""Parses any child <param> tags in to a dictionary suitable for persistence.
:param parent: Parent element in which to find child <param> tags.
:type parent: ``xml.etree.ElementTree.Element``
:returns: dict
"""
rval = {}
for param in parent.findall('param'):
rval[param.get('id')] = param.text
return rval
@property
def default_job_tool_configuration(self):
"""The default JobToolConfiguration, used if a tool does not have an explicit defintion in the configuration. It consists of a reference to the default handler and default destination.
:returns: JobToolConfiguration -- a representation of a <tool> element that uses the default handler and destination
"""
return JobToolConfiguration(id='default', handler=self.default_handler_id, destination=self.default_destination_id)
# Called upon instantiation of a Tool object
def get_job_tool_configurations(self, ids):
"""Get all configured JobToolConfigurations for a tool ID, or, if given a list of IDs, the JobToolConfigurations for the first id in ``ids`` matching a tool definition.
.. note::
You should not mix tool shed tool IDs, versionless tool shed IDs, and tool config tool IDs that refer to the same tool.
:param ids: Tool ID or IDs to fetch the JobToolConfiguration of.
:type ids: list or str.
:returns: list -- JobToolConfiguration Bunches representing <tool> elements matching the specified ID(s).
Example tool ID strings include:
* Full tool shed id: ``toolshed.example.org/repos/nate/filter_tool_repo/filter_tool/1.0.0``
* Tool shed id less version: ``toolshed.example.org/repos/nate/filter_tool_repo/filter_tool``
* Tool config tool id: ``filter_tool``
"""
rval = []
# listify if ids is a single (string) id
ids = util.listify(ids)
for id in ids:
if id in self.tools:
# If a tool has definitions that include job params but not a
# definition for jobs without params, include the default
# config
for job_tool_configuration in self.tools[id]:
if not job_tool_configuration.params:
break
else:
rval.append(self.default_job_tool_configuration)
rval.extend(self.tools[id])
break
else:
rval.append(self.default_job_tool_configuration)
return rval
def __get_single_item(self, collection):
"""Given a collection of handlers or destinations, return one item from the collection at random.
"""
        # Skip the call into the random module when there is only one item, on the assumption that avoiding it is faster
if len(collection) == 1:
return collection[0]
else:
return random.choice(collection)
# This is called by Tool.get_job_handler()
def get_handler(self, id_or_tag):
"""Given a handler ID or tag, return the provided ID or an ID matching the provided tag
:param id_or_tag: A handler ID or tag.
:type id_or_tag: str
:returns: str -- A valid job handler ID.
"""
if id_or_tag is None:
id_or_tag = self.default_handler_id
return self.__get_single_item(self.handlers[id_or_tag])
def get_destination(self, id_or_tag):
"""Given a destination ID or tag, return the JobDestination matching the provided ID or tag
:param id_or_tag: A destination ID or tag.
:type id_or_tag: str
:returns: JobDestination -- A valid destination
Destinations are deepcopied as they are expected to be passed in to job
runners, which will modify them for persisting params set at runtime.
"""
if id_or_tag is None:
id_or_tag = self.default_destination_id
return copy.deepcopy(self.__get_single_item(self.destinations[id_or_tag]))
def get_destinations(self, id_or_tag):
"""Given a destination ID or tag, return all JobDestinations matching the provided ID or tag
:param id_or_tag: A destination ID or tag.
:type id_or_tag: str
:returns: list or tuple of JobDestinations
Destinations are not deepcopied, so they should not be passed to
anything which might modify them.
"""
return self.destinations.get(id_or_tag, None)
def get_job_runner_plugins(self):
"""Load all configured job runner plugins
:returns: list of job runner plugins
"""
rval = {}
for runner in self.runner_plugins:
class_names = []
module = None
id = runner['id']
load = runner['load']
if ':' in load:
# Name to load was specified as '<module>:<class>'
module_name, class_name = load.rsplit(':', 1)
class_names = [ class_name ]
module = __import__( module_name )
else:
# Name to load was specified as '<module>'
if '.' not in load:
# For legacy reasons, try from galaxy.jobs.runners first if there's no '.' in the name
module_name = 'galaxy.jobs.runners.' + load
try:
module = __import__( module_name )
except ImportError:
# No such module, we'll retry without prepending galaxy.jobs.runners.
# All other exceptions (e.g. something wrong with the module code) will raise
pass
if module is None:
# If the name included a '.' or loading from the static runners path failed, try the original name
module = __import__( load )
module_name = load
if module is None:
# Module couldn't be loaded, error should have already been displayed
continue
for comp in module_name.split( "." )[1:]:
module = getattr( module, comp )
if not class_names:
# If there's not a ':', we check <module>.__all__ for class names
try:
assert module.__all__
class_names = module.__all__
except AssertionError:
log.error( 'Runner "%s" does not contain a list of exported classes in __all__' % load )
continue
for class_name in class_names:
runner_class = getattr( module, class_name )
try:
assert issubclass(runner_class, BaseJobRunner)
except TypeError:
log.warning("A non-class name was found in __all__, ignoring: %s" % id)
continue
except AssertionError:
log.warning("Job runner classes must be subclassed from BaseJobRunner, %s has bases: %s" % (id, runner_class.__bases__))
continue
try:
rval[id] = runner_class( self.app, runner[ 'workers' ], **runner.get( 'kwds', {} ) )
except TypeError:
log.warning( "Job runner '%s:%s' has not been converted to a new-style runner" % ( module_name, class_name ) )
rval[id] = runner_class( self.app )
log.debug( "Loaded job runner '%s:%s' as '%s'" % ( module_name, class_name, id ) )
return rval
def is_id(self, collection):
"""Given a collection of handlers or destinations, indicate whether the collection represents a tag or a real ID
:param collection: A representation of a destination or handler
:type collection: tuple or list
:returns: bool
"""
return type(collection) == tuple
def is_tag(self, collection):
"""Given a collection of handlers or destinations, indicate whether the collection represents a tag or a real ID
:param collection: A representation of a destination or handler
:type collection: tuple or list
:returns: bool
"""
return type(collection) == list
def is_handler(self, server_name):
"""Given a server name, indicate whether the server is a job handler
:param server_name: The name to check
:type server_name: str
:return: bool
"""
for collection in self.handlers.values():
if server_name in collection:
return True
return False
def convert_legacy_destinations(self, job_runners):
"""Converts legacy (from a URL) destinations to contain the appropriate runner params defined in the URL.
:param job_runners: All loaded job runner plugins.
:type job_runners: list of job runner plugins
"""
for id, destination in [ ( id, destinations[0] ) for id, destinations in self.destinations.items() if self.is_id(destinations) ]:
# Only need to deal with real destinations, not members of tags
if destination.legacy and not destination.converted:
if destination.runner in job_runners:
destination.params = job_runners[destination.runner].url_to_destination(destination.url).params
destination.converted = True
if destination.params:
log.debug("Legacy destination with id '%s', url '%s' converted, got params:" % (id, destination.url))
for k, v in destination.params.items():
log.debug(" %s: %s" % (k, v))
else:
log.debug("Legacy destination with id '%s', url '%s' converted, got params:" % (id, destination.url))
else:
log.warning("Legacy destination with id '%s' could not be converted: Unknown runner plugin: %s" % (id, destination.runner))
class JobWrapper( object ):
"""
Wraps a 'model.Job' with convenience methods for running processes and
state management.
"""
def __init__( self, job, queue ):
self.job_id = job.id
self.session_id = job.session_id
self.user_id = job.user_id
self.tool = queue.app.toolbox.tools_by_id.get( job.tool_id, None )
self.queue = queue
self.app = queue.app
self.sa_session = self.app.model.context
self.extra_filenames = []
self.command_line = None
# Tool versioning variables
self.version_string_cmd = None
self.version_string = ""
self.galaxy_lib_dir = None
# With job outputs in the working directory, we need the working
# directory to be set before prepare is run, or else premature deletion
# and job recovery fail.
# Create the working dir if necessary
try:
self.app.object_store.create(job, base_dir='job_work', dir_only=True, extra_dir=str(self.job_id))
self.working_directory = self.app.object_store.get_filename(job, base_dir='job_work', dir_only=True, extra_dir=str(self.job_id))
log.debug('(%s) Working directory for job is: %s' % (self.job_id, self.working_directory))
except ObjectInvalid:
raise Exception('Unable to create job working directory, job failure')
self.output_paths = None
self.output_hdas_and_paths = None
self.tool_provided_job_metadata = None
# Wrapper holding the info required to restore and clean up from files used for setting metadata externally
self.external_output_metadata = metadata.JobExternalOutputMetadataWrapper( job )
self.job_runner_mapper = JobRunnerMapper( self, queue.dispatcher.url_to_destination, self.app.job_config )
self.params = None
if job.params:
self.params = from_json_string( job.params )
self.__user_system_pwent = None
self.__galaxy_system_pwent = None
def can_split( self ):
# Should the job handler split this job up?
return self.app.config.use_tasked_jobs and self.tool.parallelism
def get_job_runner_url( self ):
log.warning('(%s) Job runner URLs are deprecated, use destinations instead.' % self.job_id)
return self.job_destination.url
def get_parallelism(self):
return self.tool.parallelism
# legacy naming
get_job_runner = get_job_runner_url
@property
def job_destination(self):
"""Return the JobDestination that this job will use to run. This will
either be a configured destination, a randomly selected destination if
the configured destination was a tag, or a dynamically generated
destination from the dynamic runner.
Calling this method for the first time causes the dynamic runner to do
its calculation, if any.
:returns: ``JobDestination``
"""
return self.job_runner_mapper.get_job_destination(self.params)
def get_job( self ):
return self.sa_session.query( model.Job ).get( self.job_id )
def get_id_tag(self):
        # For compatibility with drmaa, which uses job_id right now, and TaskWrapper
return self.get_job().get_id_tag()
def get_param_dict( self ):
"""
Restore the dictionary of parameters from the database.
"""
job = self.get_job()
param_dict = dict( [ ( p.name, p.value ) for p in job.parameters ] )
param_dict = self.tool.params_from_strings( param_dict, self.app )
return param_dict
def get_version_string_path( self ):
return os.path.abspath(os.path.join(self.app.config.new_file_path, "GALAXY_VERSION_STRING_%s" % self.job_id))
def prepare( self ):
"""
Prepare the job to run by creating the working directory and the
config files.
"""
        self.sa_session.expunge_all()  # this prevents the metadata reversion that has been seen in conjunction with the PBS job runner
if not os.path.exists( self.working_directory ):
os.mkdir( self.working_directory )
# Restore parameters from the database
job = self.get_job()
if job.user is None and job.galaxy_session is None:
raise Exception( 'Job %s has no user and no session.' % job.id )
incoming = dict( [ ( p.name, p.value ) for p in job.parameters ] )
incoming = self.tool.params_from_strings( incoming, self.app )
# Do any validation that could not be done at job creation
self.tool.handle_unvalidated_param_values( incoming, self.app )
# Restore input / output data lists
inp_data = dict( [ ( da.name, da.dataset ) for da in job.input_datasets ] )
out_data = dict( [ ( da.name, da.dataset ) for da in job.output_datasets ] )
inp_data.update( [ ( da.name, da.dataset ) for da in job.input_library_datasets ] )
out_data.update( [ ( da.name, da.dataset ) for da in job.output_library_datasets ] )
# Set up output dataset association for export history jobs. Because job
# uses a Dataset rather than an HDA or LDA, it's necessary to set up a
# fake dataset association that provides the needed attributes for
# preparing a job.
class FakeDatasetAssociation ( object ):
def __init__( self, dataset=None ):
self.dataset = dataset
self.file_name = dataset.file_name
self.metadata = dict()
self.children = []
special = self.sa_session.query( model.JobExportHistoryArchive ).filter_by( job=job ).first()
if not special:
special = self.sa_session.query( model.GenomeIndexToolData ).filter_by( job=job ).first()
if special:
out_data[ "output_file" ] = FakeDatasetAssociation( dataset=special.dataset )
# These can be passed on the command line if wanted as $__user_*__
incoming.update( model.User.user_template_environment( job.history and job.history.user ) )
# Build params, done before hook so hook can use
param_dict = self.tool.build_param_dict( incoming,
inp_data, out_data,
self.get_output_fnames(),
self.working_directory )
# Certain tools require tasks to be completed prior to job execution
# ( this used to be performed in the "exec_before_job" hook, but hooks are deprecated ).
self.tool.exec_before_job( self.queue.app, inp_data, out_data, param_dict )
# Run the before queue ("exec_before_job") hook
self.tool.call_hook( 'exec_before_job', self.queue.app, inp_data=inp_data,
out_data=out_data, tool=self.tool, param_dict=incoming)
self.sa_session.flush()
# Build any required config files
config_filenames = self.tool.build_config_files( param_dict, self.working_directory )
# FIXME: Build the param file (might return None, DEPRECATED)
param_filename = self.tool.build_param_file( param_dict, self.working_directory )
# Build the job's command line
self.command_line = self.tool.build_command_line( param_dict )
# FIXME: for now, tools get Galaxy's lib dir in their path
if self.command_line and self.command_line.startswith( 'python' ):
self.galaxy_lib_dir = os.path.abspath( "lib" ) # cwd = galaxy root
# Shell fragment to inject dependencies
if self.app.config.use_tool_dependencies:
self.dependency_shell_commands = self.tool.build_dependency_shell_commands()
else:
self.dependency_shell_commands = None
# We need command_line persisted to the db in order for Galaxy to re-queue the job
# if the server was stopped and restarted before the job finished
job.command_line = self.command_line
self.sa_session.add( job )
self.sa_session.flush()
# Return list of all extra files
extra_filenames = config_filenames
if param_filename is not None:
extra_filenames.append( param_filename )
self.param_dict = param_dict
self.extra_filenames = extra_filenames
self.version_string_cmd = self.tool.version_string_cmd
return extra_filenames
def fail( self, message, exception=False, stdout="", stderr="", exit_code=None ):
"""
Indicate job failure by setting state and message on all output
datasets.
"""
job = self.get_job()
self.sa_session.refresh( job )
# if the job was deleted, don't fail it
if not job.state == job.states.DELETED:
# Check if the failure is due to an exception
if exception:
# Save the traceback immediately in case we generate another
# below
job.traceback = traceback.format_exc()
# Get the exception and let the tool attempt to generate
# a better message
etype, evalue, tb = sys.exc_info()
m = self.tool.handle_job_failure_exception( evalue )
if m:
message = m
if self.app.config.outputs_to_working_directory:
for dataset_path in self.get_output_fnames():
try:
shutil.move( dataset_path.false_path, dataset_path.real_path )
log.debug( "fail(): Moved %s to %s" % ( dataset_path.false_path, dataset_path.real_path ) )
except ( IOError, OSError ), e:
log.error( "fail(): Missing output file in working directory: %s" % e )
for dataset_assoc in job.output_datasets + job.output_library_datasets:
dataset = dataset_assoc.dataset
self.sa_session.refresh( dataset )
dataset.state = dataset.states.ERROR
dataset.blurb = 'tool error'
dataset.info = message
dataset.set_size()
dataset.dataset.set_total_size()
dataset.mark_unhidden()
if dataset.ext == 'auto':
dataset.extension = 'data'
# Update (non-library) job output datasets through the object store
if dataset not in job.output_library_datasets:
self.app.object_store.update_from_file(dataset.dataset, create=True)
# Pause any dependent jobs (and those jobs' outputs)
for dep_job_assoc in dataset.dependent_jobs:
self.pause( dep_job_assoc.job, "Execution of this dataset's job is paused because its input datasets are in an error state." )
self.sa_session.add( dataset )
self.sa_session.flush()
job.state = job.states.ERROR
job.command_line = self.command_line
job.info = message
# TODO: Put setting the stdout, stderr, and exit code in one place
# (not duplicated with the finish method).
if ( len( stdout ) > DATABASE_MAX_STRING_SIZE ):
stdout = util.shrink_string_by_size( stdout, DATABASE_MAX_STRING_SIZE, join_by="\n..\n", left_larger=True, beginning_on_size_error=True )
log.info( "stdout for job %d is greater than %s, only a portion will be logged to database" % ( job.id, DATABASE_MAX_STRING_SIZE_PRETTY ) )
job.stdout = stdout
if ( len( stderr ) > DATABASE_MAX_STRING_SIZE ):
stderr = util.shrink_string_by_size( stderr, DATABASE_MAX_STRING_SIZE, join_by="\n..\n", left_larger=True, beginning_on_size_error=True )
log.info( "stderr for job %d is greater than %s, only a portion will be logged to database" % ( job.id, DATABASE_MAX_STRING_SIZE_PRETTY ) )
job.stderr = stderr
# Let the exit code be Null if one is not provided:
if ( exit_code != None ):
job.exit_code = exit_code
self.sa_session.add( job )
self.sa_session.flush()
        # Perform email action even on failure.
for pja in [pjaa.post_job_action for pjaa in job.post_job_actions if pjaa.post_job_action.action_type == "EmailAction"]:
ActionBox.execute(self.app, self.sa_session, pja, job)
# If the job was deleted, call tool specific fail actions (used for e.g. external metadata) and clean up
if self.tool:
self.tool.job_failed( self, message, exception )
if self.app.config.cleanup_job == 'always' or (self.app.config.cleanup_job == 'onsuccess' and job.state == job.states.DELETED):
self.cleanup()
def pause( self, job=None, message=None ):
if job is None:
job = self.get_job()
if message is None:
message = "Execution of this dataset's job is paused"
if job.state == job.states.NEW:
for dataset_assoc in job.output_datasets + job.output_library_datasets:
dataset_assoc.dataset.dataset.state = dataset_assoc.dataset.dataset.states.PAUSED
dataset_assoc.dataset.info = message
self.sa_session.add( dataset_assoc.dataset )
job.state = job.states.PAUSED
self.sa_session.add( job )
def change_state( self, state, info = False ):
job = self.get_job()
self.sa_session.refresh( job )
for dataset_assoc in job.output_datasets + job.output_library_datasets:
dataset = dataset_assoc.dataset
self.sa_session.refresh( dataset )
dataset.state = state
if info:
dataset.info = info
self.sa_session.add( dataset )
self.sa_session.flush()
if info:
job.info = info
job.state = state
self.sa_session.add( job )
self.sa_session.flush()
def get_state( self ):
job = self.get_job()
self.sa_session.refresh( job )
return job.state
def set_runner( self, runner_url, external_id ):
log.warning('set_runner() is deprecated, use set_job_destination()')
self.set_job_destination(self.job_destination, external_id)
def set_job_destination(self, job_destination, external_id=None ):
"""
Persist job destination params in the database for recovery.
self.job_destination is not used because a runner may choose to rewrite
parts of the destination (e.g. the params).
"""
job = self.get_job()
self.sa_session.refresh(job)
log.debug('(%s) Persisting job destination (destination id: %s)' % (job.id, job_destination.id))
job.destination_id = job_destination.id
job.destination_params = job_destination.params
job.job_runner_name = job_destination.runner
job.job_runner_external_id = external_id
self.sa_session.add(job)
self.sa_session.flush()
def finish( self, stdout, stderr, tool_exit_code=None ):
"""
Called to indicate that the associated command has been run. Updates
the output datasets based on stderr and stdout from the command, and
the contents of the output files.
"""
stdout = unicodify( stdout )
stderr = unicodify( stderr )
# default post job setup
self.sa_session.expunge_all()
job = self.get_job()
# TODO: After failing here, consider returning from the function.
try:
self.reclaim_ownership()
except:
log.exception( '(%s) Failed to change ownership of %s, failing' % ( job.id, self.working_directory ) )
return self.fail( job.info, stdout=stdout, stderr=stderr, exit_code=tool_exit_code )
# if the job was deleted, don't finish it
if job.state == job.states.DELETED or job.state == job.states.ERROR:
# SM: Note that, at this point, the exit code must be saved in case
# there was an error. Errors caught here could mean that the job
# was deleted by an administrator (based on old comments), but it
# could also mean that a job was broken up into tasks and one of
# the tasks failed. So include the stderr, stdout, and exit code:
return self.fail( job.info, stderr=stderr, stdout=stdout, exit_code=tool_exit_code )
# Check the tool's stdout, stderr, and exit code for errors, but only
# if the job has not already been marked as having an error.
# The job's stdout and stderr will be set accordingly.
# We set final_job_state to use for dataset management, but *don't* set
# job.state until after dataset collection to prevent history issues
if job.states.ERROR != job.state:
if ( self.check_tool_output( stdout, stderr, tool_exit_code, job )):
final_job_state = job.states.OK
else:
final_job_state = job.states.ERROR
if self.version_string_cmd:
version_filename = self.get_version_string_path()
if os.path.exists(version_filename):
self.version_string = open(version_filename).read()
os.unlink(version_filename)
if self.app.config.outputs_to_working_directory and not self.__link_file_check():
for dataset_path in self.get_output_fnames():
try:
shutil.move( dataset_path.false_path, dataset_path.real_path )
log.debug( "finish(): Moved %s to %s" % ( dataset_path.false_path, dataset_path.real_path ) )
except ( IOError, OSError ):
# this can happen if Galaxy is restarted during the job's
# finish method - the false_path file has already moved,
# and when the job is recovered, it won't be found.
if os.path.exists( dataset_path.real_path ) and os.stat( dataset_path.real_path ).st_size > 0:
log.warning( "finish(): %s not found, but %s is not empty, so it will be used instead" % ( dataset_path.false_path, dataset_path.real_path ) )
else:
# Prior to fail we need to set job.state
job.state = final_job_state
return self.fail( "Job %s's output dataset(s) could not be read" % job.id )
job_context = ExpressionContext( dict( stdout = job.stdout, stderr = job.stderr ) )
for dataset_assoc in job.output_datasets + job.output_library_datasets:
context = self.get_dataset_finish_context( job_context, dataset_assoc.dataset.dataset )
            # should this also be checking library associations? - can a library item be added from a history before the job has ended? - let's not allow this to occur
            for dataset in dataset_assoc.dataset.dataset.history_associations + dataset_assoc.dataset.dataset.library_associations:  # need to update all associated output HDAs, e.g. if the history was shared while the job was running
trynum = 0
while trynum < self.app.config.retry_job_output_collection:
try:
# Attempt to short circuit NFS attribute caching
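                        # (stat plus a chown to our own uid, group unchanged,
                        # nudges the NFS client to refresh cached attributes)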
os.stat( dataset.dataset.file_name )
os.chown( dataset.dataset.file_name, os.getuid(), -1 )
trynum = self.app.config.retry_job_output_collection
except ( OSError, ObjectNotFound ), e:
trynum += 1
log.warning( 'Error accessing %s, will retry: %s', dataset.dataset.file_name, e )
time.sleep( 2 )
dataset.blurb = 'done'
dataset.peek = 'no peek'
dataset.info = (dataset.info or '')
if context['stdout'].strip():
#Ensure white space between entries
dataset.info = dataset.info.rstrip() + "\n" + context['stdout'].strip()
if context['stderr'].strip():
#Ensure white space between entries
dataset.info = dataset.info.rstrip() + "\n" + context['stderr'].strip()
dataset.tool_version = self.version_string
dataset.set_size()
if 'uuid' in context:
dataset.dataset.uuid = context['uuid']
# Update (non-library) job output datasets through the object store
if dataset not in job.output_library_datasets:
self.app.object_store.update_from_file(dataset.dataset, create=True)
if job.states.ERROR == final_job_state:
dataset.blurb = "error"
dataset.mark_unhidden()
elif dataset.has_data():
# If the tool was expected to set the extension, attempt to retrieve it
if dataset.ext == 'auto':
dataset.extension = context.get( 'ext', 'data' )
dataset.init_meta( copy_from=dataset )
#if a dataset was copied, it won't appear in our dictionary:
#either use the metadata from originating output dataset, or call set_meta on the copies
#it would be quicker to just copy the metadata from the originating output dataset,
#but somewhat trickier (need to recurse up the copied_from tree), for now we'll call set_meta()
if ( not self.external_output_metadata.external_metadata_set_successfully( dataset, self.sa_session ) and self.app.config.retry_metadata_internally ):
dataset.datatype.set_meta( dataset, overwrite = False ) #call datatype.set_meta directly for the initial set_meta call during dataset creation
elif not self.external_output_metadata.external_metadata_set_successfully( dataset, self.sa_session ) and job.states.ERROR != final_job_state:
dataset._state = model.Dataset.states.FAILED_METADATA
else:
#load metadata from file
#we need to no longer allow metadata to be edited while the job is still running,
#since if it is edited, the metadata changed on the running output will no longer match
#the metadata that was stored to disk for use via the external process,
#and the changes made by the user will be lost, without warning or notice
dataset.metadata.from_JSON_dict( self.external_output_metadata.get_output_filenames_by_dataset( dataset, self.sa_session ).filename_out )
try:
assert context.get( 'line_count', None ) is not None
if ( not dataset.datatype.composite_type and dataset.dataset.is_multi_byte() ) or self.tool.is_multi_byte:
dataset.set_peek( line_count=context['line_count'], is_multi_byte=True )
else:
dataset.set_peek( line_count=context['line_count'] )
except:
if ( not dataset.datatype.composite_type and dataset.dataset.is_multi_byte() ) or self.tool.is_multi_byte:
dataset.set_peek( is_multi_byte=True )
else:
dataset.set_peek()
try:
# set the name if provided by the tool
dataset.name = context['name']
except:
pass
else:
dataset.blurb = "empty"
if dataset.ext == 'auto':
dataset.extension = 'txt'
self.sa_session.add( dataset )
if job.states.ERROR == final_job_state:
log.debug( "setting dataset state to ERROR" )
# TODO: This is where the state is being set to error. Change it!
dataset_assoc.dataset.dataset.state = model.Dataset.states.ERROR
# Pause any dependent jobs (and those jobs' outputs)
for dep_job_assoc in dataset_assoc.dataset.dependent_jobs:
self.pause( dep_job_assoc.job, "Execution of this dataset's job is paused because its input datasets are in an error state." )
else:
dataset_assoc.dataset.dataset.state = model.Dataset.states.OK
# If any of the rest of the finish method below raises an
# exception, the fail method will run and set the datasets to
# ERROR. The user will never see that the datasets are in error if
# they were flushed as OK here, since upon doing so, the history
# panel stops checking for updates. So allow the
# self.sa_session.flush() at the bottom of this method set
# the state instead.
for pja in job.post_job_actions:
ActionBox.execute(self.app, self.sa_session, pja.post_job_action, job)
# Flush all the dataset and job changes above. Dataset state changes
# will now be seen by the user.
self.sa_session.flush()
# Save stdout and stderr
if len( job.stdout ) > DATABASE_MAX_STRING_SIZE:
log.info( "stdout for job %d is greater than %s, only a portion will be logged to database" % ( job.id, DATABASE_MAX_STRING_SIZE_PRETTY ) )
job.stdout = util.shrink_string_by_size( job.stdout, DATABASE_MAX_STRING_SIZE, join_by="\n..\n", left_larger=True, beginning_on_size_error=True )
if len( job.stderr ) > DATABASE_MAX_STRING_SIZE:
log.info( "stderr for job %d is greater than %s, only a portion will be logged to database" % ( job.id, DATABASE_MAX_STRING_SIZE_PRETTY ) )
job.stderr = util.shrink_string_by_size( job.stderr, DATABASE_MAX_STRING_SIZE, join_by="\n..\n", left_larger=True, beginning_on_size_error=True )
# The exit code will be null if there is no exit code to be set.
# This is so that we don't assign an exit code, such as 0, that
# is either incorrect or has the wrong semantics.
        if tool_exit_code is not None:
job.exit_code = tool_exit_code
# custom post process setup
inp_data = dict( [ ( da.name, da.dataset ) for da in job.input_datasets ] )
out_data = dict( [ ( da.name, da.dataset ) for da in job.output_datasets ] )
inp_data.update( [ ( da.name, da.dataset ) for da in job.input_library_datasets ] )
out_data.update( [ ( da.name, da.dataset ) for da in job.output_library_datasets ] )
        param_dict = dict( [ ( p.name, p.value ) for p in job.parameters ] )  # TODO: why not re-use self.param_dict here? Probably should - rebuilding from strings causes tools.parameters.basic.UnvalidatedValue to be used in the following methods instead of validated and transformed values, e.g. when running workflows
param_dict = self.tool.params_from_strings( param_dict, self.app )
# Check for and move associated_files
self.tool.collect_associated_files(out_data, self.working_directory)
gitd = self.sa_session.query( model.GenomeIndexToolData ).filter_by( job=job ).first()
if gitd:
self.tool.collect_associated_files({'' : gitd}, self.working_directory)
# Create generated output children and primary datasets and add to param_dict
collected_datasets = {'children':self.tool.collect_child_datasets(out_data, self.working_directory),'primary':self.tool.collect_primary_datasets(out_data, self.working_directory)}
param_dict.update({'__collected_datasets__':collected_datasets})
# Certain tools require tasks to be completed after job execution
# ( this used to be performed in the "exec_after_process" hook, but hooks are deprecated ).
self.tool.exec_after_process( self.queue.app, inp_data, out_data, param_dict, job = job )
# Call 'exec_after_process' hook
self.tool.call_hook( 'exec_after_process', self.queue.app, inp_data=inp_data,
out_data=out_data, param_dict=param_dict,
tool=self.tool, stdout=job.stdout, stderr=job.stderr )
job.command_line = self.command_line
bytes = 0
# Once datasets are collected, set the total dataset size (includes extra files)
for dataset_assoc in job.output_datasets:
dataset_assoc.dataset.dataset.set_total_size()
bytes += dataset_assoc.dataset.dataset.get_total_size()
if job.user:
job.user.total_disk_usage += bytes
# fix permissions
for path in [ dp.real_path for dp in self.get_mutable_output_fnames() ]:
util.umask_fix_perms( path, self.app.config.umask, 0666, self.app.config.gid )
# Finally set the job state. This should only happen *after* all
# dataset creation, and will allow us to eliminate force_history_refresh.
job.state = final_job_state
self.sa_session.flush()
log.debug( 'job %d ended' % self.job_id )
if self.app.config.cleanup_job == 'always' or ( not stderr and self.app.config.cleanup_job == 'onsuccess' ):
self.cleanup()
def check_tool_output( self, stdout, stderr, tool_exit_code, job ):
return check_output( self.tool, stdout, stderr, tool_exit_code, job )
def cleanup( self ):
# remove temporary files
try:
for fname in self.extra_filenames:
os.remove( fname )
self.external_output_metadata.cleanup_external_metadata( self.sa_session )
galaxy.tools.imp_exp.JobExportHistoryArchiveWrapper( self.job_id ).cleanup_after_job( self.sa_session )
galaxy.tools.imp_exp.JobImportHistoryArchiveWrapper( self.app, self.job_id ).cleanup_after_job()
galaxy.tools.genome_index.GenomeIndexToolWrapper( self.job_id ).postprocessing( self.sa_session, self.app )
self.app.object_store.delete(self.get_job(), base_dir='job_work', entire_dir=True, dir_only=True, extra_dir=str(self.job_id))
except:
log.exception( "Unable to cleanup job %d" % self.job_id )
def get_output_sizes( self ):
sizes = []
output_paths = self.get_output_fnames()
for outfile in [ str( o ) for o in output_paths ]:
if os.path.exists( outfile ):
sizes.append( ( outfile, os.stat( outfile ).st_size ) )
else:
sizes.append( ( outfile, 0 ) )
return sizes
def check_limits(self, runtime=None):
if self.app.job_config.limits.output_size > 0:
for outfile, size in self.get_output_sizes():
if size > self.app.config.output_size_limit:
log.warning( '(%s) Job output %s is over the output size limit' % ( self.get_id_tag(), os.path.basename( outfile ) ) )
return 'Job output file grew too large (greater than %s), please try different inputs or parameters' % util.nice_size( self.app.job_config.limits.output_size )
if self.app.job_config.limits.walltime_delta is not None and runtime is not None:
if runtime > self.app.job_config.limits.walltime_delta:
log.warning( '(%s) Job has reached walltime, it will be terminated' % ( self.get_id_tag() ) )
return 'Job ran longer than the maximum allowed execution time (%s), please try different inputs or parameters' % self.app.job_config.limits.walltime
return None
def get_command_line( self ):
return self.command_line
def get_session_id( self ):
return self.session_id
def get_env_setup_clause( self ):
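        # Returns a shell fragment that sources the configured setup file,
        # e.g. (illustrative path): [ -f "/galaxy/env.sh" ] && . /galaxy/env.sh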
if self.app.config.environment_setup_file is None:
return ''
return '[ -f "%s" ] && . %s' % ( self.app.config.environment_setup_file, self.app.config.environment_setup_file )
def get_input_dataset_fnames( self, ds ):
filenames = []
filenames = [ ds.file_name ]
#we will need to stage in metadata file names also
#TODO: would be better to only stage in metadata files that are actually needed (found in command line, referenced in config files, etc.)
for key, value in ds.metadata.items():
if isinstance( value, model.MetadataFile ):
filenames.append( value.file_name )
return filenames
def get_input_fnames( self ):
job = self.get_job()
filenames = []
for da in job.input_datasets + job.input_library_datasets: #da is JobToInputDatasetAssociation object
if da.dataset:
filenames.extend(self.get_input_dataset_fnames(da.dataset))
return filenames
def get_output_fnames( self ):
if self.output_paths is None:
self.compute_outputs()
return self.output_paths
def get_mutable_output_fnames( self ):
if self.output_paths is None:
self.compute_outputs()
return filter( lambda dsp: dsp.mutable, self.output_paths )
def get_output_hdas_and_fnames( self ):
if self.output_hdas_and_paths is None:
self.compute_outputs()
return self.output_hdas_and_paths
def compute_outputs( self ) :
class DatasetPath( object ):
def __init__( self, dataset_id, real_path, false_path = None, mutable = True ):
self.dataset_id = dataset_id
self.real_path = real_path
self.false_path = false_path
self.mutable = mutable
def __str__( self ):
if self.false_path is None:
return self.real_path
else:
return self.false_path
job = self.get_job()
        # Job output datasets are a combination of history, library, jeha (JobExportHistoryArchive) and gitd (GenomeIndexToolData) datasets.
special = self.sa_session.query( model.JobExportHistoryArchive ).filter_by( job=job ).first()
if not special:
special = self.sa_session.query( model.GenomeIndexToolData ).filter_by( job=job ).first()
false_path = None
if self.app.config.outputs_to_working_directory:
self.output_paths = []
self.output_hdas_and_paths = {}
for name, hda in [ ( da.name, da.dataset ) for da in job.output_datasets + job.output_library_datasets ]:
false_path = os.path.abspath( os.path.join( self.working_directory, "galaxy_dataset_%d.dat" % hda.dataset.id ) )
dsp = DatasetPath( hda.dataset.id, hda.dataset.file_name, false_path, mutable = hda.dataset.external_filename is None )
self.output_paths.append( dsp )
self.output_hdas_and_paths[name] = hda, dsp
if special:
false_path = os.path.abspath( os.path.join( self.working_directory, "galaxy_dataset_%d.dat" % special.dataset.id ) )
else:
results = [ ( da.name, da.dataset, DatasetPath( da.dataset.dataset.id, da.dataset.file_name, mutable = da.dataset.dataset.external_filename is None ) ) for da in job.output_datasets + job.output_library_datasets ]
self.output_paths = [t[2] for t in results]
self.output_hdas_and_paths = dict([(t[0], t[1:]) for t in results])
if special:
dsp = DatasetPath( special.dataset.id, special.dataset.file_name, false_path )
self.output_paths.append( dsp )
return self.output_paths
def get_output_file_id( self, file ):
if self.output_paths is None:
self.get_output_fnames()
for dp in self.output_paths:
if self.app.config.outputs_to_working_directory and os.path.basename( dp.false_path ) == file:
return dp.dataset_id
elif os.path.basename( dp.real_path ) == file:
return dp.dataset_id
return None
def get_tool_provided_job_metadata( self ):
if self.tool_provided_job_metadata is not None:
return self.tool_provided_job_metadata
# Look for JSONified job metadata
self.tool_provided_job_metadata = []
meta_file = os.path.join( self.working_directory, TOOL_PROVIDED_JOB_METADATA_FILE )
if os.path.exists( meta_file ):
for line in open( meta_file, 'r' ):
try:
line = from_json_string( line )
assert 'type' in line
except:
log.exception( '(%s) Got JSON data from tool, but data is improperly formatted or no "type" key in data' % self.job_id )
log.debug( 'Offending data was: %s' % line )
continue
# Set the dataset id if it's a dataset entry and isn't set.
# This isn't insecure. We loop the job's output datasets in
# the finish method, so if a tool writes out metadata for a
# dataset id that it doesn't own, it'll just be ignored.
if line['type'] == 'dataset' and 'dataset_id' not in line:
try:
line['dataset_id'] = self.get_output_file_id( line['dataset'] )
except KeyError:
log.warning( '(%s) Tool provided job dataset-specific metadata without specifying a dataset' % self.job_id )
continue
self.tool_provided_job_metadata.append( line )
return self.tool_provided_job_metadata
def get_dataset_finish_context( self, job_context, dataset ):
for meta in self.get_tool_provided_job_metadata():
if meta['type'] == 'dataset' and meta['dataset_id'] == dataset.id:
return ExpressionContext( meta, job_context )
return job_context
def setup_external_metadata( self, exec_dir=None, tmp_dir=None, dataset_files_path=None, config_root=None, config_file=None, datatypes_config=None, set_extension=True, **kwds ):
# extension could still be 'auto' if this is the upload tool.
job = self.get_job()
if set_extension:
for output_dataset_assoc in job.output_datasets:
if output_dataset_assoc.dataset.ext == 'auto':
context = self.get_dataset_finish_context( dict(), output_dataset_assoc.dataset.dataset )
output_dataset_assoc.dataset.extension = context.get( 'ext', 'data' )
self.sa_session.flush()
if tmp_dir is None:
            # this dir should be relative to the exec_dir
tmp_dir = self.app.config.new_file_path
if dataset_files_path is None:
dataset_files_path = self.app.model.Dataset.file_path
if config_root is None:
config_root = self.app.config.root
if config_file is None:
config_file = self.app.config.config_file
if datatypes_config is None:
datatypes_config = self.app.datatypes_registry.integrated_datatypes_configs
return self.external_output_metadata.setup_external_metadata( [ output_dataset_assoc.dataset for output_dataset_assoc in job.output_datasets ],
self.sa_session,
exec_dir = exec_dir,
tmp_dir = tmp_dir,
dataset_files_path = dataset_files_path,
config_root = config_root,
config_file = config_file,
datatypes_config = datatypes_config,
job_metadata = os.path.join( self.working_directory, TOOL_PROVIDED_JOB_METADATA_FILE ),
**kwds )
@property
def user( self ):
job = self.get_job()
if job.user is not None:
return job.user.email
elif job.galaxy_session is not None and job.galaxy_session.user is not None:
return job.galaxy_session.user.email
elif job.history is not None and job.history.user is not None:
return job.history.user.email
elif job.galaxy_session is not None:
return 'anonymous@' + job.galaxy_session.remote_addr.split()[-1]
else:
return 'anonymous@unknown'
def __link_file_check( self ):
""" outputs_to_working_directory breaks library uploads where data is
linked. This method is a hack that solves that problem, but is
specific to the upload tool and relies on an injected job param. This
method should be removed ASAP and replaced with some properly generic
and stateful way of determining link-only datasets. -nate
"""
job = self.get_job()
param_dict = job.get_param_values( self.app )
return self.tool.id == 'upload1' and param_dict.get( 'link_data_only', None ) == 'link_to_files'
def _change_ownership( self, username, gid ):
job = self.get_job()
# FIXME: hardcoded path
cmd = [ '/usr/bin/sudo', '-E', self.app.config.external_chown_script, self.working_directory, username, str( gid ) ]
log.debug( '(%s) Changing ownership of working directory with: %s' % ( job.id, ' '.join( cmd ) ) )
p = subprocess.Popen( cmd, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE )
# TODO: log stdout/stderr
stdout, stderr = p.communicate()
assert p.returncode == 0
def change_ownership_for_run( self ):
job = self.get_job()
if self.app.config.external_chown_script and job.user is not None:
try:
self._change_ownership( self.user_system_pwent[0], str( self.user_system_pwent[3] ) )
except:
log.exception( '(%s) Failed to change ownership of %s, making world-writable instead' % ( job.id, self.working_directory ) )
os.chmod( self.working_directory, 0777 )
def reclaim_ownership( self ):
job = self.get_job()
if self.app.config.external_chown_script and job.user is not None:
self._change_ownership( self.galaxy_system_pwent[0], str( self.galaxy_system_pwent[3] ) )
@property
def user_system_pwent( self ):
if self.__user_system_pwent is None:
job = self.get_job()
try:
self.__user_system_pwent = pwd.getpwnam( job.user.email.split('@')[0] )
except:
pass
return self.__user_system_pwent
@property
def galaxy_system_pwent( self ):
if self.__galaxy_system_pwent is None:
self.__galaxy_system_pwent = pwd.getpwuid(os.getuid())
return self.__galaxy_system_pwent
def get_output_destination( self, output_path ):
"""
Destination for outputs marked as from_work_dir. This is the normal case,
        just copy these files directly to the ultimate destination.
"""
return output_path
@property
def requires_setting_metadata( self ):
if self.tool:
return self.tool.requires_setting_metadata
return False
class TaskWrapper(JobWrapper):
"""
Extension of JobWrapper intended for running tasks.
Should be refactored into a generalized executable unit wrapper parent, then jobs and tasks.
"""
# Abstract this to be more useful for running tasks that *don't* necessarily compose a job.
def __init__(self, task, queue):
super(TaskWrapper, self).__init__(task.job, queue)
self.task_id = task.id
self.working_directory = task.working_directory
if task.prepare_input_files_cmd is not None:
self.prepare_input_files_cmds = [ task.prepare_input_files_cmd ]
else:
self.prepare_input_files_cmds = None
self.status = task.states.NEW
def can_split( self ):
# Should the job handler split this job up? TaskWrapper should
# always return False as the job has already been split.
return False
def get_job( self ):
if self.job_id:
return self.sa_session.query( model.Job ).get( self.job_id )
else:
return None
def get_task( self ):
return self.sa_session.query(model.Task).get(self.task_id)
def get_id_tag(self):
# For compatibility with drmaa job runner and TaskWrapper, instead of using job_id directly
return self.get_task().get_id_tag()
def get_param_dict( self ):
"""
Restore the dictionary of parameters from the database.
"""
job = self.sa_session.query( model.Job ).get( self.job_id )
param_dict = dict( [ ( p.name, p.value ) for p in job.parameters ] )
param_dict = self.tool.params_from_strings( param_dict, self.app )
return param_dict
def prepare( self ):
"""
Prepare the job to run by creating the working directory and the
config files.
"""
# Restore parameters from the database
job = self.get_job()
task = self.get_task()
if job.user is None and job.galaxy_session is None:
raise Exception( 'Job %s has no user and no session.' % job.id )
incoming = dict( [ ( p.name, p.value ) for p in job.parameters ] )
incoming = self.tool.params_from_strings( incoming, self.app )
# Do any validation that could not be done at job creation
self.tool.handle_unvalidated_param_values( incoming, self.app )
# Restore input / output data lists
inp_data = dict( [ ( da.name, da.dataset ) for da in job.input_datasets ] )
out_data = dict( [ ( da.name, da.dataset ) for da in job.output_datasets ] )
inp_data.update( [ ( da.name, da.dataset ) for da in job.input_library_datasets ] )
out_data.update( [ ( da.name, da.dataset ) for da in job.output_library_datasets ] )
# DBTODO New method for generating command line for a task?
# These can be passed on the command line if wanted as $userId $userEmail
if job.history and job.history.user: # check for anonymous user!
userId = '%d' % job.history.user.id
userEmail = str(job.history.user.email)
else:
userId = 'Anonymous'
userEmail = 'Anonymous'
incoming['userId'] = userId
incoming['userEmail'] = userEmail
# Build params, done before hook so hook can use
param_dict = self.tool.build_param_dict( incoming, inp_data, out_data, self.get_output_fnames(), self.working_directory )
fnames = {}
for v in self.get_input_fnames():
fnames[v] = os.path.join(self.working_directory, os.path.basename(v))
for dp in [x.real_path for x in self.get_output_fnames()]:
fnames[dp] = os.path.join(self.working_directory, os.path.basename(dp))
# Certain tools require tasks to be completed prior to job execution
# ( this used to be performed in the "exec_before_job" hook, but hooks are deprecated ).
self.tool.exec_before_job( self.queue.app, inp_data, out_data, param_dict )
# Run the before queue ("exec_before_job") hook
self.tool.call_hook( 'exec_before_job', self.queue.app, inp_data=inp_data,
out_data=out_data, tool=self.tool, param_dict=incoming)
self.sa_session.flush()
# Build any required config files
config_filenames = self.tool.build_config_files( param_dict, self.working_directory )
for config_filename in config_filenames:
config_contents = open(config_filename, "r").read()
for k, v in fnames.iteritems():
config_contents = config_contents.replace(k, v)
open(config_filename, "w").write(config_contents)
# FIXME: Build the param file (might return None, DEPRECATED)
param_filename = self.tool.build_param_file( param_dict, self.working_directory )
# Build the job's command line
self.command_line = self.tool.build_command_line( param_dict )
# HACK, Fix this when refactored.
for k, v in fnames.iteritems():
self.command_line = self.command_line.replace(k, v)
# FIXME: for now, tools get Galaxy's lib dir in their path
if self.command_line and self.command_line.startswith( 'python' ):
self.galaxy_lib_dir = os.path.abspath( "lib" ) # cwd = galaxy root
# Shell fragment to inject dependencies
if self.app.config.use_tool_dependencies:
self.dependency_shell_commands = self.tool.build_dependency_shell_commands()
else:
self.dependency_shell_commands = None
# We need command_line persisted to the db in order for Galaxy to re-queue the job
# if the server was stopped and restarted before the job finished
task.command_line = self.command_line
self.sa_session.add( task )
self.sa_session.flush()
# # Return list of all extra files
extra_filenames = config_filenames
if param_filename is not None:
extra_filenames.append( param_filename )
self.param_dict = param_dict
self.extra_filenames = extra_filenames
self.status = 'prepared'
return extra_filenames
def fail( self, message, exception=False ):
log.error("TaskWrapper Failure %s" % message)
self.status = 'error'
# How do we want to handle task failure? Fail the job and let it clean up?
def change_state( self, state, info = False ):
task = self.get_task()
self.sa_session.refresh( task )
if info:
task.info = info
task.state = state
self.sa_session.add( task )
self.sa_session.flush()
def get_state( self ):
task = self.get_task()
self.sa_session.refresh( task )
return task.state
def get_exit_code( self ):
task = self.get_task()
self.sa_session.refresh( task )
return task.exit_code
def set_runner( self, runner_url, external_id ):
task = self.get_task()
self.sa_session.refresh( task )
task.task_runner_name = runner_url
task.task_runner_external_id = external_id
# DBTODO Check task job_runner_stuff
self.sa_session.add( task )
self.sa_session.flush()
def finish( self, stdout, stderr, tool_exit_code=None ):
# DBTODO integrate previous finish logic.
# Simple finish for tasks. Just set the flag OK.
"""
Called to indicate that the associated command has been run. Updates
the output datasets based on stderr and stdout from the command, and
the contents of the output files.
"""
stdout = unicodify( stdout )
stderr = unicodify( stderr )
# This may have ended too soon
log.debug( 'task %s for job %d ended; exit code: %d'
% (self.task_id, self.job_id,
                      tool_exit_code if tool_exit_code is not None else -256 ) )
# default post job setup_external_metadata
self.sa_session.expunge_all()
task = self.get_task()
# if the job was deleted, don't finish it
if task.state == task.states.DELETED:
# Job was deleted by an administrator
if self.app.config.cleanup_job in ( 'always', 'onsuccess' ):
self.cleanup()
return
elif task.state == task.states.ERROR:
self.fail( task.info )
return
# Check what the tool returned. If the stdout or stderr matched
# regular expressions that indicate errors, then set an error.
# The same goes if the tool's exit code was in a given range.
if ( self.check_tool_output( stdout, stderr, tool_exit_code, task ) ):
task.state = task.states.OK
else:
task.state = task.states.ERROR
# Save stdout and stderr
if len( stdout ) > DATABASE_MAX_STRING_SIZE:
log.error( "stdout for task %d is greater than %s, only a portion will be logged to database" % ( task.id, DATABASE_MAX_STRING_SIZE_PRETTY ) )
task.stdout = util.shrink_string_by_size( stdout, DATABASE_MAX_STRING_SIZE, join_by="\n..\n", left_larger=True, beginning_on_size_error=True )
if len( stderr ) > DATABASE_MAX_STRING_SIZE:
log.error( "stderr for task %d is greater than %s, only a portion will be logged to database" % ( task.id, DATABASE_MAX_STRING_SIZE_PRETTY ) )
task.stderr = util.shrink_string_by_size( stderr, DATABASE_MAX_STRING_SIZE, join_by="\n..\n", left_larger=True, beginning_on_size_error=True )
task.exit_code = tool_exit_code
task.command_line = self.command_line
self.sa_session.flush()
def cleanup( self ):
# There is no task cleanup. The job cleans up for all tasks.
pass
def get_command_line( self ):
return self.command_line
def get_session_id( self ):
return self.session_id
def get_output_file_id( self, file ):
# There is no permanent output file for tasks.
return None
def get_tool_provided_job_metadata( self ):
# DBTODO Handle this as applicable for tasks.
return None
def get_dataset_finish_context( self, job_context, dataset ):
# Handled at the parent job level. Do nothing here.
pass
def setup_external_metadata( self, exec_dir=None, tmp_dir=None, dataset_files_path=None, config_root=None, config_file=None, datatypes_config=None, set_extension=True, **kwds ):
# There is no metadata setting for tasks. This is handled after the merge, at the job level.
return ""
def get_output_destination( self, output_path ):
"""
Destination for outputs marked as from_work_dir. These must be copied with
        the same basename as the path for the ultimate output destination. This is
required in the task case so they can be merged.
"""
return os.path.join( self.working_directory, os.path.basename( output_path ) )
class NoopQueue( object ):
"""
Implements the JobQueue / JobStopQueue interface but does nothing
"""
def put( self, *args, **kwargs ):
return
def put_stop( self, *args ):
return
def shutdown( self ):
return
class ParallelismInfo(object):
"""
Stores the information (if any) for running multiple instances of the tool in parallel
on the same set of inputs.
"""
def __init__(self, tag):
self.method = tag.get('method')
if isinstance(tag, dict):
items = tag.iteritems()
else:
items = tag.attrib.items()
self.attributes = dict([item for item in items if item[0] != 'method' ])
if len(self.attributes) == 0:
# legacy basic mode - provide compatible defaults
self.attributes['split_size'] = 20
self.attributes['split_mode'] = 'number_of_parts'
| [
"sabba_88@hotmail.com"
] | sabba_88@hotmail.com |
65e36f6f95b9f30cdc66164c386d744a95e41b19 | eef72818143c9ffef4c759a1331e8227c14be792 | /sktime/transformations/hierarchical/aggregate.py | 9e32e6482cd11d765a034561ab0bfe86bfb4d839 | [
"BSD-3-Clause"
] | permissive | jattenberg/sktime | 66a723d7844146ac1675d2e4e73f35a486babc65 | fbe4af4d8419a01ada1e82da1aa63c0218d13edb | refs/heads/master | 2023-08-12T07:32:22.462661 | 2022-08-16T10:19:22 | 2022-08-16T10:19:22 | 298,256,169 | 0 | 0 | BSD-3-Clause | 2020-09-24T11:20:23 | 2020-09-24T11:20:23 | null | UTF-8 | Python | false | false | 9,500 | py | # -*- coding: utf-8 -*-
# copyright: sktime developers, BSD-3-Clause License (see LICENSE file)
"""Implements a transfromer to generate hierarcical data from bottom level."""
__author__ = ["ciaran-g"]
from warnings import warn
import numpy as np
import pandas as pd
from sktime.transformations.base import BaseTransformer
# todo: add any necessary sktime internal imports here
class Aggregator(BaseTransformer):
"""Prepare hierarchical data, including aggregate levels, from bottom level.
This transformer adds aggregate levels via summation to a DataFrame with a
multiindex. The aggregate levels are included with the special tag "__total"
in the index. The aggregate nodes are discovered from top-to-bottom from
the input data multiindex.
Parameters
----------
flatten_single_level : boolean (default=True)
Remove aggregate nodes, i.e. ("__total"), where there is only a single
child to the level
See Also
--------
ReconcilerForecaster
Reconciler
References
----------
.. [1] https://otexts.com/fpp3/hierarchical.html
Examples
--------
>>> from sktime.transformations.hierarchical.aggregate import Aggregator
>>> from sktime.utils._testing.hierarchical import _bottom_hier_datagen
>>> agg = Aggregator()
>>> y = _bottom_hier_datagen(
... no_bottom_nodes=3,
... no_levels=1,
... random_seed=123,
... )
>>> y = agg.fit_transform(y)
"""
_tags = {
"scitype:transform-input": "Series",
"scitype:transform-output": "Series",
"scitype:transform-labels": "None",
# todo instance wise?
"scitype:instancewise": True, # is this an instance-wise transform?
"X_inner_mtype": [
"pd.Series",
"pd.DataFrame",
"pd-multiindex",
"pd_multiindex_hier",
],
"y_inner_mtype": "None", # which mtypes do _fit/_predict support for y?
"capability:inverse_transform": False, # does transformer have inverse
"skip-inverse-transform": True, # is inverse-transform skipped when called?
"univariate-only": False, # can the transformer handle multivariate X?
"handles-missing-data": False, # can estimator handle missing data?
"X-y-must-have-same-index": False, # can estimator handle different X/y index?
"fit_is_empty": True, # is fit empty and can be skipped? Yes = True
"transform-returns-same-time-index": False,
}
def __init__(self, flatten_single_levels=True):
self.flatten_single_levels = flatten_single_levels
super(Aggregator, self).__init__()
def _transform(self, X, y=None):
"""Transform X and return a transformed version.
private _transform containing core logic, called from transform
Parameters
----------
X : Panel of pd.DataFrame data to be transformed.
y : Ignored argument for interface compatibility.
Returns
-------
Transformed version of X
"""
if X.index.nlevels == 1:
warn(
"Aggregator is intended for use with X.index.nlevels > 1. "
"Returning X unchanged."
)
return X
        # check that the index does not already contain aggregate ("__total") levels
if not _check_index_no_total(X):
warn(
"Found elemnts in the index of X named '__total'. Removing "
"these levels and aggregating."
)
X = self._inverse_transform(X)
# starting from top aggregate
df_out = X.copy()
for i in range(0, X.index.nlevels - 1, 1):
            # find "__total" parent/child aggregates, working from the top level down
indx_grouper = np.arange(0, i, 1).tolist()
indx_grouper.append(X.index.nlevels - 1)
out = X.groupby(level=indx_grouper).sum()
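            # e.g. for i=0 this groups by the time index alone (grand total, all
            # other levels become "__total"); for i=1 level 0 is kept and the
            # remaining non-time levels are aggregated, and so on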
# get new index with aggregate levels to match with old
new_idx = []
for j in range(0, X.index.nlevels - 1, 1):
if j in indx_grouper:
new_idx.append(out.index.get_level_values(j))
else:
new_idx.append(["__total"] * len(out.index))
# add in time index
new_idx.append(out.index.get_level_values(-1))
new_idx = pd.MultiIndex.from_arrays(new_idx, names=X.index.names)
out = out.set_index(new_idx)
df_out = pd.concat([out, df_out])
# now remove duplicated aggregate indexes
if self.flatten_single_levels:
new_index = _flatten_single_indexes(X)
nm = X.index.names[-1]
if nm is None:
nm = "level_" + str(X.index.nlevels - 1)
else:
pass
# now reindex with new non-duplicated axis
df_out = (
df_out.reset_index(level=-1).loc[new_index].set_index(nm, append=True)
).rename_axis(X.index.names, axis=0)
df_out.sort_index(inplace=True)
return df_out
def _inverse_transform(self, X, y=None):
"""Inverse transform, inverse operation to transform.
private _inverse_transform containing core logic, called from inverse_transform
Parameters
----------
X : Panel of pd.DataFrame data to be inverse transformed.
y : Ignored argument for interface compatibility.
Returns
-------
Inverse transformed version of X.
"""
if X.index.nlevels == 1:
warn(
"Aggregator is intended for use with X.index.nlevels > 1. "
"Returning X unchanged."
)
return X
if _check_index_no_total(X):
warn(
"Inverse is inteded to be used with aggregated data. "
"Returning X unchanged."
)
else:
for i in range(X.index.nlevels - 1):
X = X.drop(index="__total", level=i)
return X
@classmethod
def get_test_params(cls):
"""Return testing parameter settings for the estimator.
Returns
-------
params : dict or list of dict, default = {}
Parameters to create testing instances of the class
Each dict are parameters to construct an "interesting" test instance, i.e.,
`MyClass(**params)` or `MyClass(**params[i])` creates a valid test instance.
`create_test_instance` uses the first (or only) dictionary in `params`
"""
params = {"flatten_single_levels": True}
return params
def _check_index_no_total(X):
"""Check the index of X and return boolean."""
# check the elements of the index for "__total"
chk_list = []
for i in range(0, X.index.nlevels - 1, 1):
chk_list.append(X.index.get_level_values(level=i).isin(["__total"]).sum())
tot_chk = sum(chk_list) == 0
return tot_chk
def _flatten_single_indexes(X):
"""Check the index of X and return new unique index object."""
# get unique indexes outwith timepoints
inds = list(X.droplevel(-1).index.unique())
ind_df = pd.DataFrame(inds)
# add the new top aggregate level
if len(ind_df.columns) == 1:
out_list = ["__total"]
else:
out_list = [tuple(np.repeat("__total", len(ind_df.columns)))]
# for each level check there are child nodes of length >1
for i in range(1, len(ind_df.columns)):
# all levels from top
ind_aggs = ind_df.loc[:, ind_df.columns[0:-i:]]
# filter and check for child nodes with only 1 nunique name
if len(ind_aggs.columns) > 1:
filter_cols = list(ind_aggs.columns[0:-1])
filter_inds = ind_aggs.groupby(
by=filter_cols, as_index=False
).transform(lambda x: x.nunique())
filter_inds = filter_inds[(filter_inds > 1)].dropna().index
ind_aggs = ind_aggs.iloc[filter_inds, :]
else:
pass
tmp = ind_aggs.groupby(by=list(ind_aggs.columns)).size()
        # get index of these nodes
agg_ids = list(tmp[tmp > 1].dropna().index)
        # add the aggregate label down to the length of the original index
# only add if >=1 elements in list and not at the 2nd aggregate level
add_indicator1 = (i < (len(ind_df.columns) - 1)) & (len(agg_ids) >= 1)
# or at the second most aggregate level and there are two aggs to add
# or at the second most aggregate level and there is 1 agg to add,
# but the top level has more than one unique index
add_indicator2 = (len(agg_ids) > 1) | (
(len(agg_ids) == 1) & (ind_df.iloc[:, 0].nunique() > 1)
)
if add_indicator1 | add_indicator2:
agg_ids = [tuple([x]) if type(x) is not tuple else x for x in agg_ids]
for _j in range(i):
agg_ids = [x + ("__total",) for x in agg_ids]
out_list.extend(agg_ids)
else:
pass
# add to original index
inds.extend(out_list)
if len(ind_df.columns) == 1:
new_index = pd.Index(inds, name=X.index.droplevel(-1).name)
else:
new_index = pd.MultiIndex.from_tuples(
inds,
names=X.index.droplevel(-1).names,
)
return new_index
| [
"noreply@github.com"
] | jattenberg.noreply@github.com |
7ae557cb0c4814beab359729b85cd437ee2b1ccc | f07a42f652f46106dee4749277d41c302e2b7406 | /Data Set/bug-fixing-4/81640a2c67e9ab8088a72ca02d8d58ecf41abcd1-<set_bios_attributes>-fix.py | 0206fc5bbd8980d4bd0678cff66cf7da3a9a9cef | [] | no_license | wsgan001/PyFPattern | e0fe06341cc5d51b3ad0fe29b84098d140ed54d1 | cc347e32745f99c0cd95e79a18ddacc4574d7faa | refs/heads/main | 2023-08-25T23:48:26.112133 | 2021-10-23T14:11:22 | 2021-10-23T14:11:22 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,030 | py | def set_bios_attributes(self, attr):
result = {
}
key = 'Bios'
response = self.get_request((self.root_uri + self.systems_uri))
if (response['ret'] is False):
return response
result['ret'] = True
data = response['data']
if (key not in data):
return {
'ret': False,
'msg': ('Key %s not found' % key),
}
bios_uri = data[key]['@odata.id']
response = self.get_request((self.root_uri + bios_uri))
if (response['ret'] is False):
return response
result['ret'] = True
data = response['data']
set_bios_attr_uri = data['@Redfish.Settings']['SettingsObject']['@odata.id']
bios_attr = (((('{"' + attr['bios_attr_name']) + '":"') + attr['bios_attr_value']) + '"}')
payload = {
'Attributes': json.loads(bios_attr),
}
response = self.patch_request((self.root_uri + set_bios_attr_uri), payload, HEADERS)
if (response['ret'] is False):
return response
return {
'ret': True,
} | [
"dg1732004@smail.nju.edu.cn"
] | dg1732004@smail.nju.edu.cn |
ba5e3d97a8dc58bac13af5ebf5ddf5edc2160f80 | ca7aa979e7059467e158830b76673f5b77a0f5a3 | /Python_codes/p02748/s008643234.py | bb6afcb3d80563b76660339227a400095939720e | [] | no_license | Aasthaengg/IBMdataset | 7abb6cbcc4fb03ef5ca68ac64ba460c4a64f8901 | f33f1c5c3b16d0ea8d1f5a7d479ad288bb3f48d8 | refs/heads/main | 2023-04-22T10:22:44.763102 | 2021-05-13T17:27:22 | 2021-05-13T17:27:22 | 367,112,348 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 296 | py | a,b,m = map(int,input().split())
a_list = [int(x.strip()) for x in input().split()]
b_list = [int(x.strip()) for x in input().split()]
ans = min(a_list)+min(b_list)
for i in range(m):
ai,bi,ci = map(int,input().split())
ch = a_list[ai-1]+b_list[bi-1]-ci
if ch < ans:
ans = ch
print(ans) | [
"66529651+Aastha2104@users.noreply.github.com"
] | 66529651+Aastha2104@users.noreply.github.com |
51dd909f38145255ef950fa9793e47e41c6a99a5 | 0b7e418fc63cf0ed65c0feeee6b749a89e7f4972 | /untitled/app.py | c070463e6be460209603a911dd7193811fe99409 | [] | no_license | pipoted/bs | 7e7a46942d7b37cada3de8e834c6c67050733505 | a4091eb54dbe74e86defdee89f729e3c73ad3ed1 | refs/heads/master | 2020-04-28T06:36:44.745625 | 2019-04-18T04:57:01 | 2019-04-18T04:57:01 | 175,065,023 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 332 | py | from flask import Flask, render_template, request
app = Flask(__name__)
@app.route('/', methods=['GET', 'POST'])
def hello_world():
if request.method == "POST":
print('test')
return render_template('test.html')
else:
return render_template('test.html')
if __name__ == '__main__':
app.run()
| [
"32660879+pipoted@users.noreply.github.com"
] | 32660879+pipoted@users.noreply.github.com |
d864a2019c0937083b6ed018af1078af20af7e7f | aae8d348ea13956cfa6136ad711b0d3a40116101 | /test_client.py | 91ef95c8bcc222e9086805621489b79c860293b9 | [
"MIT"
] | permissive | TrendingTechnology/supabase-client | b15b76cb1135abcb239907690351e9cd3f97f8d0 | 5ba6b9c5753d20c506cd5f7f6abbd887dc29295b | refs/heads/master | 2023-07-06T12:32:38.502579 | 2021-08-08T13:35:50 | 2021-08-08T13:35:50 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,579 | py | import asyncio
import unittest
from supabase_client.supabase_client import Client
from dotenv import dotenv_values
config = dotenv_values(".env")
def async_test(async_func):
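    # unittest cannot await a coroutine directly, so run the wrapped test
    # function to completion on the default event loop.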
def wrapper(*args, **kwargs):
loop = asyncio.get_event_loop()
loop.run_until_complete(async_func(*args, **kwargs))
return wrapper
class TestSupabaseClient(unittest.TestCase):
supabase = Client(
api_url=config.get("SUPABASE_URL"),
api_key=config.get("SUPABASE_KEY")
)
@async_test
async def test_read(self):
error, results = await (
self.supabase.table("posts")
.select("*")
.query()
)
if not error:
self.assertEqual(type(results), list)
@async_test
async def test_insert(self):
error, results = await (
self.supabase.table("posts")
.select("*")
.query()
)
if not error:
self.assertEqual(type(results), list)
previous_length = len(results)
error, result = await (
self.supabase.table("posts")
.insert([{"title": "test new title"}])
)
if not error:
error, new_results = await (
self.supabase.table("posts")
.select("*")
.query()
)
if not error:
self.assertNotEqual(previous_length,len(new_results))
@async_test
async def test_update(self):
_id = 1
error, results = await (
self.supabase.table("posts")
.select("*")
.eq("id", _id)
.query()
)
if not error:
self.assertEqual(type(results), list)
if results:
new_title = "updated title"
error, result = await (
self.supabase.table("posts")
.update({"id": f"eq.{_id}"},
{"title":new_title}
)
)
if not error:
error, results = await (
self.supabase.table("posts")
.select("*")
.eq("id", _id)
.query()
)
if not error:
if results:
data = results[0]
                            self.assertEqual(data.get("title"), new_title)
@async_test
async def test_delete(self):
error, results = await (
self.supabase.table("posts")
.select("*")
.query()
)
if not error:
self.assertEqual(type(results), list)
previous_length = len(results)
error, result = await (
self.supabase.table("posts")
.delete({"title": "test new title"})
)
if not error:
error, new_results = await (
self.supabase.table("posts")
.select("*")
.query()
)
if not error:
self.assertNotEqual(previous_length,len(new_results))
if __name__ == "__main__":
unittest.main() | [
"noreply@github.com"
] | TrendingTechnology.noreply@github.com |
1933bbc98fbf6522dea9c65b084743453763dc35 | 59fbeea017110472a788218db3c6459e9130c7fe | /watering-plants-ii/watering-plants-ii.py | 45a13807ec424f5749b3d29c04c8beef1be12a74 | [] | no_license | niufenjujuexianhua/Leetcode | 82b55d9382bc9f63f4d9da9431194e20a4d299f1 | 542c99e038d21429853515f62af51a77deaa4d9c | refs/heads/master | 2022-04-27T16:55:00.035969 | 2022-03-10T01:10:04 | 2022-03-10T01:10:04 | 79,742,663 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 5,666 | py | # Alice and Bob want to water n plants in their garden. The plants are arranged
# in a row and are labeled from 0 to n - 1 from left to right where the iᵗʰ plant
# is located at x = i.
#
# Each plant needs a specific amount of water. Alice and Bob have a watering
# can each, initially full. They water the plants in the following way:
#
#
# Alice waters the plants in order from left to right, starting from the 0ᵗʰ
# plant. Bob waters the plants in order from right to left, starting from the (n - 1
# )ᵗʰ plant. They begin watering the plants simultaneously.
# If one does not have enough water to completely water the current plant, he/
# she refills the watering can instantaneously.
# It takes the same amount of time to water each plant regardless of how much
# water it needs.
# One cannot refill the watering can early.
# Each plant can be watered either by Alice or by Bob.
# In case both Alice and Bob reach the same plant, the one with more water
# currently in his/her watering can should water this plant. If they have the same
# amount of water, then Alice should water this plant.
#
#
# Given a 0-indexed integer array plants of n integers, where plants[i] is the
# amount of water the iᵗʰ plant needs, and two integers capacityA and capacityB
# representing the capacities of Alice's and Bob's watering cans respectively,
# return the number of times they have to refill to water all the plants.
#
#
# Example 1:
#
#
# Input: plants = [2,2,3,3], capacityA = 5, capacityB = 5
# Output: 1
# Explanation:
# - Initially, Alice and Bob have 5 units of water each in their watering cans.
# - Alice waters plant 0, Bob waters plant 3.
# - Alice and Bob now have 3 units and 2 units of water respectively.
# - Alice has enough water for plant 1, so she waters it. Bob does not have
# enough water for plant 2, so he refills his can then waters it.
# So, the total number of times they have to refill to water all the plants is 0
# + 0 + 1 + 0 = 1.
#
# Example 2:
#
#
# Input: plants = [2,2,3,3], capacityA = 3, capacityB = 4
# Output: 2
# Explanation:
# - Initially, Alice and Bob have 3 units and 4 units of water in their
# watering cans respectively.
# - Alice waters plant 0, Bob waters plant 3.
# - Alice and Bob now have 1 unit of water each, and need to water plants 1 and
# 2 respectively.
# - Since neither of them have enough water for their current plants, they
# refill their cans and then water the plants.
# So, the total number of times they have to refill to water all the plants is 0
# + 1 + 1 + 0 = 2.
#
# Example 3:
#
#
# Input: plants = [5], capacityA = 10, capacityB = 8
# Output: 0
# Explanation:
# - There is only one plant.
# - Alice's watering can has 10 units of water, whereas Bob's can has 8 units.
# Since Alice has more water in her can, she waters this plant.
# So, the total number of times they have to refill is 0.
#
# Example 4:
#
#
# Input: plants = [1,2,4,4,5], capacityA = 6, capacityB = 5
# Output: 2
# Explanation:
# - Initially, Alice and Bob have 6 units and 5 units of water in their
# watering cans respectively.
# - Alice waters plant 0, Bob waters plant 4.
# - Alice and Bob now have 5 units and 0 units of water respectively.
# - Alice has enough water for plant 1, so she waters it. Bob does not have
# enough water for plant 3, so he refills his can then waters it.
# - Alice and Bob now have 3 units and 1 unit of water respectively.
# - Since Alice has more water, she waters plant 2. However, she does not have
# enough water to completely water this plant. Hence she refills her can then
# waters it.
# So, the total number of times they have to refill to water all the plants is 0
# + 0 + 1 + 1 + 0 = 2.
#
# Example 5:
#
#
# Input: plants = [2,2,5,2,2], capacityA = 5, capacityB = 5
# Output: 1
# Explanation:
# Both Alice and Bob will reach the middle plant with the same amount of water,
# so Alice will water it.
# She will have 1 unit of water when she reaches it, so she will refill her can.
#
# This is the only refill needed.
#
#
#
# Constraints:
#
#
# n == plants.length
# 1 <= n <= 10⁵
# 1 <= plants[i] <= 10⁶
# max(plants[i]) <= capacityA, capacityB <= 10⁹
#
# 👍 37 👎 49
# leetcode submit region begin(Prohibit modification and deletion)
class Solution(object):
def minimumRefill(self, plants, capacityA, capacityB):
"""
:type plants: List[int]
:type capacityA: int
:type capacityB: int
:rtype: int
"""
i, j = 0, len(plants) - 1
a, b = capacityA, capacityB
res = 0
while i <= j:
if i == j:
if a >= b:
if a < plants[i]:
res += 1
a = capacityA - plants[i]
else:
a -= plants[i]
else:
if b < plants[j]:
res += 1
b = capacityB - plants[j]
else:
b -= plants[j]
else:
if a < plants[i]:
res += 1
a = capacityA - plants[i]
else:
a -= plants[i]
if b < plants[j]:
res += 1
b = capacityB - plants[j]
else:
b -= plants[j]
i += 1
j -= 1
return res
# print(Solution().minimumRefill([7,7,7,7,7,7,7]
# ,7
# ,8))
# leetcode submit region end(Prohibit modification and deletion)
| [
"wutuo123@yeah.net"
] | wutuo123@yeah.net |
8afefab9874bb820a3a39fa90e5b8006b7d3faba | ca7aa979e7059467e158830b76673f5b77a0f5a3 | /Python_codes/p03415/s176801549.py | 96191503a2ba7933971acf26eeaccf2bdde05e41 | [] | no_license | Aasthaengg/IBMdataset | 7abb6cbcc4fb03ef5ca68ac64ba460c4a64f8901 | f33f1c5c3b16d0ea8d1f5a7d479ad288bb3f48d8 | refs/heads/main | 2023-04-22T10:22:44.763102 | 2021-05-13T17:27:22 | 2021-05-13T17:27:22 | 367,112,348 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 63 | py | S1,S2,S3 = [input() for _ in range(3)]
print(S1[0]+S2[1]+S3[2]) | [
"66529651+Aastha2104@users.noreply.github.com"
] | 66529651+Aastha2104@users.noreply.github.com |
8ba9ee326fed8b16c26ba53c76e76b2f9a085003 | 6b5242766c7c199d82064f7b7b244ed16fa4275c | /venv/bin/pip | 70e778de2da6d5e7974433f74fa165b38be7bdb8 | [] | no_license | uuboyscy/tibame-db105 | 55822b30bb4b1b30d759e6010f83944be7f1cbaf | 2a6b3ebb0aee49369621da099fa457dcb5ea48f6 | refs/heads/master | 2022-11-21T09:57:00.086000 | 2019-11-10T14:20:33 | 2019-11-10T14:20:33 | 219,315,128 | 1 | 1 | null | 2022-11-04T05:40:00 | 2019-11-03T14:41:29 | Python | UTF-8 | Python | false | false | 410 | #!/Users/uuboy.scy/PycharmProjects/tibame-db105/venv/bin/python
# EASY-INSTALL-ENTRY-SCRIPT: 'pip==19.0.3','console_scripts','pip'
__requires__ = 'pip==19.0.3'
import re
import sys
from pkg_resources import load_entry_point
if __name__ == '__main__':
sys.argv[0] = re.sub(r'(-script\.pyw?|\.exe)?$', '', sys.argv[0])
sys.exit(
load_entry_point('pip==19.0.3', 'console_scripts', 'pip')()
)
| [
"aegis12321@gmail.com"
] | aegis12321@gmail.com | |
f9761d545344e24746558d32496fc7ac83279c2b | 696f501c25bb5059c8f6d184cff2e17e1da164a7 | /testing/testing_journaling.py | 253b018c560fc2855cd171b87e7bbaa49520113d | [
"MIT"
] | permissive | ibosity/flow-dashboard | 92a1a515f23ff4d684d93067c79581c22c2e109c | db1e9d3cc91bbbf5c41758710b4837128269bff3 | refs/heads/master | 2020-03-19T16:02:25.239699 | 2018-06-03T19:00:13 | 2018-06-03T19:00:13 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,701 | py | #!/usr/bin/python
# -*- coding: utf8 -*-
from google.appengine.api import memcache
from google.appengine.ext import db
from google.appengine.ext import testbed
from datetime import datetime, timedelta
from google.appengine.ext import deferred
from base_test_case import BaseTestCase
from models import JournalTag, MiniJournal, User
from flow import app as tst_app
class JournalingTestCase(BaseTestCase):
def setUp(self):
self.set_application(tst_app)
self.setup_testbed()
self.init_datastore_stub()
self.init_memcache_stub()
self.init_taskqueue_stub()
self.init_mail_stub()
self.register_search_api_stub()
u = User.Create(email="test@example.com")
u.put()
self.u = u
    def test_journal_tag_parsing(self):
volley = [
("Fun #PoolParty with @KatyRoth", ["#PoolParty"], ["@KatyRoth"]),
("Stressful day at work with @BarackObama", [], ["@BarackObama"]),
("Went #Fishing with @JohnKariuki and got #Sick off #Seafood", ["#Fishing", "#Sick", "#Seafood"], ["@JohnKariuki"]),
("Went #Fishing with @BarackObama", ["#Fishing"], ["@BarackObama"]),
(None, [], []),
]
for v in volley:
txt, expected_hashes, expected_people = v
jts = JournalTag.CreateFromText(self.u, txt)
hashes = map(lambda jt: jt.key.id(), filter(lambda jt: not jt.person(), jts))
people = map(lambda jt: jt.key.id(), filter(lambda jt: jt.person(), jts))
self.assertEqual(expected_hashes, hashes)
self.assertEqual(expected_people, people)
self.assertEqual(len(JournalTag.All(self.u)), 7)
| [
"onejgordon@gmail.com"
] | onejgordon@gmail.com |
f44cfb7716ed0ebdfc46ee6555adc9770d503a4b | c6b38205d6c722b4646f5410b8d9bd5f3b3ffeb0 | /account/migrations/0007_auto_20200218_1625.py | d1da78c25d42df4a988730a7c2aef9b44bc8f67e | [] | no_license | MehdioKhan/proman-back | bd6f3be0cec34236c63260eaf1215bd9c538e89c | 2c894df943d89aaebcbaf5717958fd7962fd897b | refs/heads/master | 2022-12-21T13:43:40.117989 | 2020-02-18T16:29:34 | 2020-02-18T16:29:34 | 230,323,480 | 0 | 0 | null | 2021-09-22T18:26:34 | 2019-12-26T20:20:14 | Python | UTF-8 | Python | false | false | 1,068 | py | # Generated by Django 3.0 on 2020-02-18 16:25
import django.contrib.postgres.fields
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('account', '0006_auto_20200217_0659'),
]
operations = [
migrations.AlterField(
model_name='role',
name='permissions',
field=django.contrib.postgres.fields.ArrayField(base_field=models.TextField(choices=[('view_project', 'View project'), ('add_project', 'Add project'), ('change_project', 'Modify project'), ('delete_project', 'Delete project'), ('view_task', 'View task'), ('add_task', 'Add task'), ('change_task', 'Modify task'), ('comment_task', 'Comment task'), ('delete_task', 'Delete task'), ('change_project', 'Change project'), ('delete_project', 'Delete project'), ('add_member', 'Add member'), ('remove_member', 'Remove member'), ('admin_project_values', 'Admin project values'), ('admin_roles', 'Admin roles')]), blank=True, default=list, null=True, size=None, verbose_name='permissions'),
),
]
| [
"mehdiokhan@gmail.com"
] | mehdiokhan@gmail.com |
e2a97d4e446166a56cfb764a3bea65806752659b | acb8e84e3b9c987fcab341f799f41d5a5ec4d587 | /langs/9/wr4.py | 596543aa3c6aaf48dc5e3a21e59fa32a76fccc3e | [] | no_license | G4te-Keep3r/HowdyHackers | 46bfad63eafe5ac515da363e1c75fa6f4b9bca32 | fb6d391aaecb60ab5c4650d4ae2ddd599fd85db2 | refs/heads/master | 2020-08-01T12:08:10.782018 | 2016-11-13T20:45:50 | 2016-11-13T20:45:50 | 73,624,224 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 486 | py | import sys
def printFunction(lineRemaining):
if lineRemaining[0] == '"' and lineRemaining[-1] == '"':
if len(lineRemaining) > 2:
#data to print
lineRemaining = lineRemaining[1:-1]
print ' '.join(lineRemaining)
else:
print
def main(fileName):
with open(fileName) as f:
for line in f:
data = line.split()
if data[0] == 'wR4':
printFunction(data[1:])
else:
print 'ERROR'
return
if __name__ == '__main__':
main(sys.argv[1]) | [
"juliettaylorswift@gmail.com"
] | juliettaylorswift@gmail.com |
fd967bef434e7a35d437f8eab4c13f0a6d4c05bb | c819e434d642670ad02c1b3919a5300568dc8b99 | /lib/python3.8/site-packages/ib_common/constants/language_choices.py | 726d8139589f0450579c41ea8cc5e185684e2de8 | [] | no_license | ushatirumalasetty/venv | c8dcf002c8259501cb745b65d38fe753d4ae2da1 | 7c892873c1221a816a62cdba0fb9ad491cfba261 | refs/heads/master | 2022-11-27T18:54:54.643274 | 2020-07-23T09:54:42 | 2020-07-23T09:54:42 | 281,914,259 | 0 | 1 | null | 2022-11-20T15:47:34 | 2020-07-23T09:54:18 | Python | UTF-8 | Python | false | false | 656 | py | from enum import Enum
from .base_enum_class import BaseEnumClass
__author__ = 'vedavidh'
class LanguageEnum(BaseEnumClass, Enum):
ENGLISH = 'ENGLISH'
HINDI = 'HINDI'
TELUGU = 'TELUGU'
TAMIL = 'TAMIL'
KANNADA = 'KANNADA'
class Languages(BaseEnumClass, Enum):
"""
Enum class representing all the languages supported using vernacular
"""
ENGLISH = 'en'
HINDI = 'hi'
TELUGU = 'te'
TAMIL = 'ta'
KANNADA = 'kn'
BENGALI = 'bn'
MARATHI = 'mr'
LANGUAGE_CHOICES = [(e.value, e.value) for e in LanguageEnum]
LANGUAGES = [e.value for e in LanguageEnum]
DEFAULT_LANGUAGE = LanguageEnum.ENGLISH.value
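
# Illustrative Django usage (the model field below is hypothetical, not part
# of this package):
#     language = models.CharField(
#         max_length=16, choices=LANGUAGE_CHOICES, default=DEFAULT_LANGUAGE)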
| [
"rayvaleshusha@gmail.com"
] | rayvaleshusha@gmail.com |
dc75f176215a65bfeed775933dcc25c9bfd0a06f | ca7aa979e7059467e158830b76673f5b77a0f5a3 | /Python_codes/p02771/s302588631.py | 3c08a44ee3a970eb4bad4bd40e4b66271cd4ab8b | [] | no_license | Aasthaengg/IBMdataset | 7abb6cbcc4fb03ef5ca68ac64ba460c4a64f8901 | f33f1c5c3b16d0ea8d1f5a7d479ad288bb3f48d8 | refs/heads/main | 2023-04-22T10:22:44.763102 | 2021-05-13T17:27:22 | 2021-05-13T17:27:22 | 367,112,348 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 128 | py | A,B,C=map(int,input().split())
if (A==B and A!=C) or (B==C and B!=A) or (C==A and C!=B):
print('Yes')
else:
print('No') | [
"66529651+Aastha2104@users.noreply.github.com"
] | 66529651+Aastha2104@users.noreply.github.com |
cc6d22ecfcd67fde6b613179f16017076a966161 | 17a3418a6143ea2d953cf6509aeca7cc6e074686 | /Final-Project/backend/venv/bin/aws_completer | 3a97357ae88d3f6bd91dfee2589a1bd515f52d17 | [] | no_license | francolmenar-USYD/Internet-Software-Platforms | addb69a5582a63877e5f3408d64485a7ca942721 | 9e82ab6e7d0f8d4b3d55789cf5cfcd8e524a85df | refs/heads/master | 2022-04-22T02:07:25.419086 | 2020-04-22T10:02:43 | 2020-04-22T10:02:43 | 256,714,630 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,177 | #!/mnt/c/Shared/ELEC3609/bird-repo/backend/venv/bin/python3
# Copyright 2012 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
# http://aws.amazon.com/apache2.0/
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import os
if os.environ.get('LC_CTYPE', '') == 'UTF-8':
os.environ['LC_CTYPE'] = 'en_US.UTF-8'
import awscli.completer
if __name__ == '__main__':
# bash exports COMP_LINE and COMP_POINT, tcsh COMMAND_LINE only
cline = os.environ.get('COMP_LINE') or os.environ.get('COMMAND_LINE') or ''
cpoint = int(os.environ.get('COMP_POINT') or len(cline))
try:
awscli.completer.complete(cline, cpoint)
except KeyboardInterrupt:
# If the user hits Ctrl+C, we don't want to print
# a traceback to the user.
pass
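
# How this script is typically wired into the shell (illustrative; the path
# is a placeholder for wherever aws_completer is installed):
#     complete -C '/usr/local/bin/aws_completer' aws
# bash then exports COMP_LINE/COMP_POINT on each Tab press, which is exactly
# what the code above reads.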
| [
"francolmenar@outlook.es"
] | francolmenar@outlook.es | |
72848852e83be523f39f31f32ac0dcfc34edae11 | d7b4e2e391e1f15fd7cb4fbf4d9aee598131b007 | /AE_Datasets/R_A/datasets/CWRUFFT.py | 254be0baee6bfeaa9abd1637485dccf4ee008772 | [
"MIT"
] | permissive | wuyou33/DL-based-Intelligent-Diagnosis-Benchmark | eba2ce6f948b5abe68069e749f64501a32e1d7ca | e534f925cf454d07352f7ef82d75a8d6dac5355c | refs/heads/master | 2021-01-02T15:06:29.041349 | 2019-12-28T21:47:21 | 2019-12-28T21:47:21 | 239,673,952 | 1 | 0 | MIT | 2020-02-11T04:15:21 | 2020-02-11T04:15:20 | null | UTF-8 | Python | false | false | 5,624 | py | import os
import torch
import numpy as np
import pandas as pd
from scipy.io import loadmat
from sklearn.model_selection import train_test_split
from datasets.SequenceDatasets import dataset
from datasets.sequence_aug import *
from tqdm import tqdm
#Digital data was collected at 12,000 samples per second
signal_size=1024
datasetname = ["12k Drive End Bearing Fault Data", "12k Fan End Bearing Fault Data", "48k Drive End Bearing Fault Data",
"Normal Baseline Data"]
normalname = ["97.mat", "98.mat", "99.mat", "100.mat"]
# For 12k Drive End Bearing Fault Data
dataname1 = ["105.mat", "118.mat", "130.mat", "169.mat", "185.mat", "197.mat", "209.mat", "222.mat",
"234.mat"] # 1797rpm
dataname2 = ["106.mat", "119.mat", "131.mat", "170.mat", "186.mat", "198.mat", "210.mat", "223.mat",
"235.mat"] # 1772rpm
dataname3 = ["107.mat", "120.mat", "132.mat", "171.mat", "187.mat", "199.mat", "211.mat", "224.mat",
"236.mat"] # 1750rpm
dataname4 = ["108.mat", "121.mat", "133.mat", "172.mat", "188.mat", "200.mat", "212.mat", "225.mat",
"237.mat"] # 1730rpm
# For 12k Fan End Bearing Fault Data
dataname5 = ["278.mat", "282.mat", "294.mat", "274.mat", "286.mat", "310.mat", "270.mat", "290.mat",
"315.mat"] # 1797rpm
dataname6 = ["279.mat", "283.mat", "295.mat", "275.mat", "287.mat", "309.mat", "271.mat", "291.mat",
"316.mat"] # 1772rpm
dataname7 = ["280.mat", "284.mat", "296.mat", "276.mat", "288.mat", "311.mat", "272.mat", "292.mat",
"317.mat"] # 1750rpm
dataname8 = ["281.mat", "285.mat", "297.mat", "277.mat", "289.mat", "312.mat", "273.mat", "293.mat",
"318.mat"] # 1730rpm
# For 48k Drive End Bearing Fault Data
dataname9 = ["109.mat", "122.mat", "135.mat", "174.mat", "189.mat", "201.mat", "213.mat", "250.mat",
"262.mat"] # 1797rpm
dataname10 = ["110.mat", "123.mat", "136.mat", "175.mat", "190.mat", "202.mat", "214.mat", "251.mat",
"263.mat"] # 1772rpm
dataname11 = ["111.mat", "124.mat", "137.mat", "176.mat", "191.mat", "203.mat", "215.mat", "252.mat",
"264.mat"] # 1750rpm
dataname12 = ["112.mat", "125.mat", "138.mat", "177.mat", "192.mat", "204.mat", "217.mat", "253.mat",
"265.mat"] # 1730rpm
# label
label = [1, 2, 3, 4, 5, 6, 7, 8, 9] # The failure data is labeled 1-9
axis = ["_DE_time", "_FE_time", "_BA_time"]
# generate Training Dataset and Testing Dataset
def get_files(root, test=False):
'''
This function is used to generate the final training set and test set.
root:The location of the data set
normalname:List of normal data
dataname:List of failure data
'''
data_root1 = os.path.join('/tmp', root, datasetname[3])
data_root2 = os.path.join('/tmp', root, datasetname[0])
path1 = os.path.join('/tmp', data_root1, normalname[0]) # 0->1797rpm ;1->1772rpm;2->1750rpm;3->1730rpm
data, lab = data_load(path1, axisname=normalname[0],label=0) # The label for normal data is 0
for i in tqdm(range(len(dataname1))):
path2 = os.path.join('/tmp', data_root2, dataname1[i])
data1, lab1 = data_load(path2, dataname1[i], label=label[i])
data += data1
lab += lab1
return [data, lab]
def data_load(filename, axisname, label):
'''
This function is mainly used to generate test data and training data.
filename:Data location
axisname:Select which channel's data,---->"_DE_time","_FE_time","_BA_time"
'''
datanumber = axisname.split(".")
if eval(datanumber[0]) < 100:
realaxis = "X0" + datanumber[0] + axis[0]
else:
realaxis = "X" + datanumber[0] + axis[0]
fl = loadmat(filename)[realaxis]
fl = fl.reshape(-1,)
data = []
lab = []
start, end = 0, signal_size
while end <= fl.shape[0]:
x = fl[start:end]
x = np.fft.fft(x)
x = np.abs(x) / len(x)
x = x[range(int(x.shape[0] / 2))]
x = x.reshape(-1,1)
data.append(x)
lab.append(label)
start += signal_size
end += signal_size
return data, lab
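
# Note on data_load above (explanatory comment, not from the original source):
# each signal_size-long window is replaced by its one-sided amplitude
# spectrum. np.abs(np.fft.fft(x)) / len(x) keeps the magnitudes, and only the
# first half of the bins is retained because the spectrum of a real-valued
# signal is symmetric, so every sample fed to the model has length
# signal_size // 2 = 512.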
def data_transforms(dataset_type="train", normlize_type="-1-1"):
transforms = {
'train': Compose([
Reshape(),
Normalize(normlize_type),
RandomAddGaussian(),
RandomScale(),
RandomStretch(),
RandomCrop(),
Retype()
]),
'val': Compose([
Reshape(),
Normalize(normlize_type),
Retype()
])
}
return transforms[dataset_type]
class CWRUFFT(object):
num_classes = 10
inputchannel = 1
def __init__(self, data_dir,normlizetype):
self.data_dir = data_dir
self.normlizetype = normlizetype
def data_preprare(self, test=False):
list_data = get_files(self.data_dir, test)
if test:
test_dataset = dataset(list_data=list_data, test=True, transform=None)
return test_dataset
else:
data_pd = pd.DataFrame({"data": list_data[0], "label": list_data[1]})
train_pd, val_pd = train_test_split(data_pd, test_size=0.2, random_state=40, stratify=data_pd["label"])
train_dataset = dataset(list_data=train_pd, transform=data_transforms('train',self.normlizetype))
val_dataset = dataset(list_data=val_pd, transform=data_transforms('val',self.normlizetype))
return train_dataset, val_dataset
| [
"646032073@qq.com"
] | 646032073@qq.com |
0c1c7c59a90d3a47d0f88a3cbbd025f0cc467d3a | 74482894c61156c13902044b4d39917df8ed9551 | /cryptoapis/model/validate_address_request_body_data_item.py | 734ae14dc2265ee6ebf43f95b063cbf521ca71fa | [
"MIT"
] | permissive | xan187/Crypto_APIs_2.0_SDK_Python | bb8898556ba014cc7a4dd31b10e24bec23b74a19 | a56c75df54ef037b39be1315ed6e54de35bed55b | refs/heads/main | 2023-06-22T15:45:08.273635 | 2021-07-21T03:41:05 | 2021-07-21T03:41:05 | 387,982,780 | 1 | 0 | NOASSERTION | 2021-07-21T03:35:29 | 2021-07-21T03:35:29 | null | UTF-8 | Python | false | false | 7,042 | py | """
CryptoAPIs
Crypto APIs 2.0 is a complex and innovative infrastructure layer that radically simplifies the development of any Blockchain and Crypto related applications. Organized around REST, Crypto APIs 2.0 can assist both novice Bitcoin/Ethereum enthusiasts and crypto experts with the development of their blockchain applications. Crypto APIs 2.0 provides unified endpoints and data, raw data, automatic tokens and coins forwardings, callback functionalities, and much more. # noqa: E501
The version of the OpenAPI document: 2.0.0
Contact: developers@cryptoapis.io
Generated by: https://openapi-generator.tech
"""
import re # noqa: F401
import sys # noqa: F401
from cryptoapis.model_utils import ( # noqa: F401
ApiTypeError,
ModelComposed,
ModelNormal,
ModelSimple,
cached_property,
change_keys_js_to_python,
convert_js_args_to_python_args,
date,
datetime,
file_type,
none_type,
validate_get_composed_info,
)
class ValidateAddressRequestBodyDataItem(ModelNormal):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
Attributes:
allowed_values (dict): The key is the tuple path to the attribute
and the for var_name this is (var_name,). The value is a dict
with a capitalized key describing the allowed value and an allowed
value. These dicts store the allowed enum values.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
discriminator_value_class_map (dict): A dict to go from the discriminator
variable value to the discriminator class name.
validations (dict): The key is the tuple path to the attribute
and the for var_name this is (var_name,). The value is a dict
that stores validations for max_length, min_length, max_items,
min_items, exclusive_maximum, inclusive_maximum, exclusive_minimum,
inclusive_minimum, and regex.
additional_properties_type (tuple): A tuple of classes accepted
as additional properties values.
"""
allowed_values = {
}
validations = {
}
additional_properties_type = None
_nullable = False
@cached_property
def openapi_types():
"""
This must be a method because a model may have properties that are
of type self, this must run after the class is loaded
Returns
openapi_types (dict): The key is attribute name
and the value is attribute type.
"""
return {
'address': (str,), # noqa: E501
}
@cached_property
def discriminator():
return None
attribute_map = {
'address': 'address', # noqa: E501
}
_composed_schemas = {}
required_properties = set([
'_data_store',
'_check_type',
'_spec_property_naming',
'_path_to_item',
'_configuration',
'_visited_composed_classes',
])
@convert_js_args_to_python_args
def __init__(self, address, *args, **kwargs): # noqa: E501
"""ValidateAddressRequestBodyDataItem - a model defined in OpenAPI
Args:
address (str): Represents the specific address that will be checked if it's valid or not.
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', False)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
if args:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
self.address = address
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
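
# Illustrative construction (the address literal is a placeholder, not a real
# chain address):
#     item = ValidateAddressRequestBodyDataItem(address="<some-address>")
#     item.address  # -> "<some-address>"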
| [
"kristiyan.ivanov@menasoftware.com"
] | kristiyan.ivanov@menasoftware.com |
80faed9b37a683651a800576bdc93c0757087338 | 6b19ed8845f7cb020ad49da57a0c0fe85314a274 | /zerver/migrations/0170_submessage.py | 751932249a3011674f642f189788660cb10396ba | [
"LicenseRef-scancode-free-unknown",
"Apache-2.0"
] | permissive | jahau/zulip | eb4da13858892065591caced88fc9a086fa0e0d2 | 51a8873579b9d4bb95219cd4a5c859fa972fa06b | refs/heads/master | 2021-05-18T03:44:32.003307 | 2020-03-27T22:29:55 | 2020-03-28T19:04:36 | 251,087,399 | 1 | 0 | Apache-2.0 | 2020-03-29T17:11:42 | 2020-03-29T17:11:42 | null | UTF-8 | Python | false | false | 931 | py | # -*- coding: utf-8 -*-
# Generated by Django 1.11.6 on 2018-01-26 21:54
from __future__ import unicode_literals
import django.db.models.deletion
from django.conf import settings
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('zerver', '0169_stream_is_announcement_only'),
]
operations = [
migrations.CreateModel(
name='SubMessage',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('msg_type', models.TextField()),
('content', models.TextField()),
('message', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='zerver.Message')),
('sender', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
),
]
| [
"tabbott@zulipchat.com"
] | tabbott@zulipchat.com |
a9cbb2646e980b7d6c69201039cc77287c106129 | a59dcdb8e8b963a5082d4244889b82a4379510f6 | /bemani/tests/test_CardCipher.py | b290ec9ee4bbd90cdec566f7689dae3218fe85c2 | [
"LicenseRef-scancode-warranty-disclaimer",
"LicenseRef-scancode-public-domain"
] | permissive | vangar/bemaniutils | 9fdda035eb29e5e0b874d475fdbfdc8b99aeb003 | 284153ef2e2e40014656fbfeb254c05f3a8c61e8 | refs/heads/trunk | 2023-04-04T04:00:21.088088 | 2023-02-17T03:32:27 | 2023-02-17T03:40:07 | 332,388,243 | 1 | 0 | null | 2021-01-24T07:07:34 | 2021-01-24T07:07:33 | null | UTF-8 | Python | false | false | 1,596 | py | # vim: set fileencoding=utf-8
import unittest
from bemani.common import CardCipher
class TestCardCipher(unittest.TestCase):
def test_internal_cipher(self) -> None:
test_ciphers = [
(
[0x68, 0xFC, 0xA5, 0x27, 0x00, 0x01, 0x04, 0xE0],
[0xC7, 0xD0, 0xB3, 0x85, 0xAD, 0x1F, 0xD9, 0x49],
),
(
[0x2C, 0x10, 0xA6, 0x27, 0x00, 0x01, 0x04, 0xE0],
[0x33, 0xC6, 0xE6, 0x2E, 0x6E, 0x33, 0x38, 0x74],
),
]
for pair in test_ciphers:
inp = bytes(pair[0])
out = bytes(pair[1])
encoded = CardCipher._encode(inp)
self.assertEqual(
encoded, out, f"Card encode {encoded!r} doesn't match expected {out!r}"
)
decoded = CardCipher._decode(out)
self.assertEqual(
decoded, inp, f"Card decode {decoded!r} doesn't match expected {inp!r}"
)
def test_external_cipher(self) -> None:
test_cards = [
("S6E523E30ZK7ML1P", "E004010027A5FC68"),
("78B592HZSM9E6712", "E004010027A6102C"),
]
for card in test_cards:
back = card[0]
db = card[1]
decoded = CardCipher.decode(back)
self.assertEqual(
decoded, db, f"Card DB {decoded} doesn't match expected {db}"
)
encoded = CardCipher.encode(db)
self.assertEqual(
encoded, back, f"Card back {encoded} doesn't match expected {back}"
)
| [
"dragonminded@dragonminded.com"
] | dragonminded@dragonminded.com |
154cc575312b41fe86166cc220dbc83fdd0a36cd | 444a9480bce2035565332d4d4654244c0b5cd47b | /research/cv/fishnet99/src/config.py | 09542e1e0df894872cd2291de2ec208e8be86557 | [
"Apache-2.0",
"LicenseRef-scancode-unknown-license-reference",
"LicenseRef-scancode-proprietary-license"
] | permissive | mindspore-ai/models | 7ede9c6454e77e995e674628204e1c6e76bd7b27 | eab643f51336dbf7d711f02d27e6516e5affee59 | refs/heads/master | 2023-07-20T01:49:34.614616 | 2023-07-17T11:43:18 | 2023-07-17T11:43:18 | 417,393,380 | 301 | 92 | Apache-2.0 | 2023-05-17T11:22:28 | 2021-10-15T06:38:37 | Python | UTF-8 | Python | false | false | 1,681 | py | # Copyright 2021 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""from googlenet"""
from easydict import EasyDict as edict
imagenet_cfg = edict({
'name': 'imagenet',
'pre_trained': False,
'num_classes': 1000,
'lr_init': 0.05, # Ascend_1P: 0.05, Ascend_8P: 0.4, GPU_1P: 0.05, GPU_2P: 0.1
'batch_size': 128,
'epoch_size': 160, # GPU_2P: 110
'momentum': 0.9,
'weight_decay': 1e-4,
'image_height': 224,
'image_width': 224,
'data_path': '/data/ILSVRC2012_train/',
'val_data_path': '/data/ILSVRC2012_val/',
'device_target': 'Ascend',
'device_id': 0,
'keep_checkpoint_max': 30,
'checkpoint_path': None,
'onnx_filename': 'fishnet99',
'air_filename': 'fishnet99',
# optimizer and lr related
'lr_scheduler': 'cosine_annealing',
'lr_epochs': [30, 60, 90, 120],
'lr_gamma': 0.3,
'eta_min': 0.0,
'T_max': 150, # GPU_2P: 100
'warmup_epochs': 0,
# loss related
'is_dynamic_loss_scale': 0,
'loss_scale': 1024,
'label_smooth_factor': 0.1,
'use_label_smooth': True,
})
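
# Illustrative access (EasyDict exposes keys as attributes; values shown are
# the defaults above):
#     lr = imagenet_cfg.lr_init        # 0.05
#     bs = imagenet_cfg['batch_size']  # plain dict-style access also works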
| [
"chenhaozhe1@huawei.com"
] | chenhaozhe1@huawei.com |
15d9fcafc6d3e710f817f1f63f034500a2afd560 | f4bc045760aa9017ff08dfabd7eb8dd2134e9da8 | /src/security/errorHandlers.py | e38e1dcbbad4081d13de896d04afde638fd374be | [
"MIT"
] | permissive | nagasudhirpulla/wrldc_mis_flask_ui | 196aab28eb1120e682f2d58c01b1261236da2145 | fc438a01ba29200591f9f1ae53fab3d716169ffd | refs/heads/master | 2023-03-10T22:31:07.317901 | 2021-02-25T05:44:55 | 2021-02-25T05:44:55 | 291,735,397 | 0 | 3 | MIT | 2020-12-07T06:16:39 | 2020-08-31T14:19:26 | Python | UTF-8 | Python | false | false | 674 | py | from flask import render_template
def page_forbidden(err):
return render_template('message.html.j2', title='403 Forbidden',
message='You must be logged in to access this content.'), 403
def page_unauthorized(err):
return render_template('message.html.j2', title='401 Unauthorized',
message='You must be authorized in to access this content.'), 401
def page_not_found(err):
return render_template('message.html.j2', title='404 Not Found',
message='The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.'), 404
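
# Illustrative registration on a Flask app (the `app` object is assumed to
# exist elsewhere in this project):
#     app.register_error_handler(403, page_forbidden)
#     app.register_error_handler(401, page_unauthorized)
#     app.register_error_handler(404, page_not_found)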
| [
"nagasudhirpulla@gmail.com"
] | nagasudhirpulla@gmail.com |
3b362a09495427acbe39d45d7eb9a5175fe36c3b | 83179c14ae81a2ed0733812195747340c9ef0555 | /Takahashi_Unevoleved.py | 9b54d332f59c6ad8eaf04e41b44dac034d700298 | [] | no_license | susami-jpg/atcoder_solved_probrem | 20f90ba2e3238c6857e370ed04a8407271ccc36f | 741a4acd79f637d6794c4dbcc2cad1c601b749fc | refs/heads/master | 2023-07-21T12:38:27.460309 | 2021-08-29T10:26:31 | 2021-08-29T10:26:31 | 375,561,679 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 940 | py | # -*- coding: utf-8 -*-
"""
Created on Sat Jun 19 00:37:30 2021
@author: kazuk
"""
x, y, a, b = map(int, input().split())
def is_ok(t):
cost = x
cnt = 0
while 1:
if cnt == t:
break
if cost * a < b:
cost *= a
cnt += 1
else:
break
cost += (t - cnt) * b
if cost < y:
return True
else:
return False
def meguru_bisect(ng, ok):
'''
    Takes the initial values ng and ok and returns the smallest (or largest)
    ok that satisfies is_ok.
    Define is_ok first.
    ng and ok should be (smallest possible value - 1) and (largest possible
    value + 1); if max and min are the other way around, swap them accordingly.
'''
while (abs(ok - ng) > 1):
mid = (ok + ng) // 2
if is_ok(mid):
ok = mid
else:
ng = mid
return ok
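
# Note (explanatory comment, not from the original source): meguru_bisect
# calls the module-level is_ok, and because ng=y > ok=0 the search runs in
# reverse, converging to the largest t for which is_ok(t) still holds, i.e.
# the most steps for which the simulated cost stays below y.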
ans = meguru_bisect(y, 0)
print(ans)
| [
"kazuki_susami@icloud.com"
] | kazuki_susami@icloud.com |
91f123874a66cec1442164b3cc12adf9525b11a9 | 2fa102b20ea99d796cc3677c9305f1a80be18e6b | /cf_977_a.py | 6432622d333ee0d5453c12b811960b283a09f017 | [] | no_license | pronob1010/Codeforces_Solve | e5186b2379230790459328964d291f6b40a4bb07 | 457b92879a04f30aa0003626ead865b0583edeb2 | refs/heads/master | 2023-03-12T11:38:31.114189 | 2021-03-03T05:49:17 | 2021-03-03T05:49:17 | 302,124,730 | 3 | 0 | null | null | null | null | UTF-8 | Python | false | false | 140 | py | a,b = list(map(int, input().split()))
for i in range(b):
c = a%10
if c == 0:
a //= 10
else:
a = a - 1
print(a) | [
"pronobmozumder.info@gmail.com"
] | pronobmozumder.info@gmail.com |
f55908dbef33fc48ad2e595c0ff179d825e7d76f | 3ca1a812ab4e7278e88a9bc6a183e341803cb9ad | /2022/06/main.py | cbf8d1ad542bad6d1a84954a42298b0f10a5285a | [] | no_license | acarlson99/advent-of-code | 11608a436e3fd4eadc8a3960811b6e495859d024 | be7c0cf337a5c70be12df6812cbddb48d7207495 | refs/heads/master | 2023-05-25T14:57:28.883802 | 2023-05-20T01:36:48 | 2023-05-20T01:36:48 | 225,130,286 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 522 | py | #!/usr/bin/env python3
import fileinput
if __name__=='__main__':
lines = []
for line in fileinput.input():
lines.append(line.strip())
s = lines[0]
print(s)
    for i in range(4, len(s) + 1):  # +1 so a marker ending at the last char is found
rl = s[i-4:i]
if len(set(rl))==4:
print(rl)
print(set(rl))
print(i)
break
    for i in range(14, len(s) + 1):  # same off-by-one fix for the 14-char marker
rl = s[i-14:i]
if len(set(rl))==14:
print(rl)
print(set(rl))
print(i)
break
| [
"a@a.a"
] | a@a.a |
7870dadc7fcf4f57d56e72d69ef936c12785c0d2 | ca7aa979e7059467e158830b76673f5b77a0f5a3 | /Python_codes/p02725/s087554992.py | 731dabb171585e41f2a3d2db76f359d80c174b9b | [] | no_license | Aasthaengg/IBMdataset | 7abb6cbcc4fb03ef5ca68ac64ba460c4a64f8901 | f33f1c5c3b16d0ea8d1f5a7d479ad288bb3f48d8 | refs/heads/main | 2023-04-22T10:22:44.763102 | 2021-05-13T17:27:22 | 2021-05-13T17:27:22 | 367,112,348 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 185 | py | import sys
from array import array
read = sys.stdin.buffer.read
k, n, *A = map(int, read().split())
A += [k + A[0]]
far = max(array("l", [x - y for x, y in zip(A[1:], A)]))
print(k-far) | [
"66529651+Aastha2104@users.noreply.github.com"
] | 66529651+Aastha2104@users.noreply.github.com |
e445505e34d8a75cf9f93fe8268057531f532fa4 | 03e3138f99f275d15d41a5c5bfb212f85d64d02e | /source/res/scripts/common/Lib/ctypes/test/test_errcheck.py | e5cdb39ddec7c316011427cc60216299774d573d | [] | no_license | TrenSeP/WorldOfTanks-Decompiled | e428728e7901146d0b599d02c930d70532232a97 | 1faa748acec1b7e435b657fd054ecba23dd72778 | refs/heads/1.4.1 | 2020-04-27T08:07:49.813023 | 2019-03-05T17:37:06 | 2019-03-05T17:37:06 | 174,159,837 | 1 | 0 | null | 2019-03-06T14:33:33 | 2019-03-06T14:24:36 | Python | UTF-8 | Python | false | false | 153 | py | # Python bytecode 2.7 (decompiled from Python 2.7)
# Embedded file name: scripts/common/Lib/ctypes/test/test_errcheck.py
import sys
from ctypes import *
| [
"StranikS_Scan@mail.ru"
] | StranikS_Scan@mail.ru |
842aaeca36cb8fe937d897dc527eb11e4f3aafd3 | f0d713996eb095bcdc701f3fab0a8110b8541cbb | /Luhb2KStP2wiX6FMp_7.py | 03174c243680289a3989162ffdf71abc0d1a5d1b | [] | no_license | daniel-reich/turbo-robot | feda6c0523bb83ab8954b6d06302bfec5b16ebdf | a7a25c63097674c0a81675eed7e6b763785f1c41 | refs/heads/main | 2023-03-26T01:55:14.210264 | 2021-03-23T16:08:01 | 2021-03-23T16:08:01 | 350,773,815 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 259 | py | """
Create a function to return the amount of potatoes there are in a string.
### Examples
potatoes("potato") ➞ 1
potatoes("potatopotato") ➞ 2
potatoes("potatoapple") ➞ 1
### Notes
N/A
"""
potatoes=lambda p:p.count('potato')
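
# Quick self-check against the docstring examples (illustrative only):
assert potatoes("potato") == 1
assert potatoes("potatopotato") == 2
assert potatoes("potatoapple") == 1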
| [
"daniel.reich@danielreichs-MacBook-Pro.local"
] | daniel.reich@danielreichs-MacBook-Pro.local |
f46d78197be08429fc60e17a7d504549e4bbc2ad | e9f9e38f935748ee043647452a2bfb949c30ff46 | /backend/event/migrations/0001_initial.py | d859b916059be0a1c063e392df533f02287ed429 | [] | no_license | crowdbotics-apps/test2-19679 | 19ff09828e3c4241bc48ff827cf22586b68718d5 | 788c724e16b87cbf21d980b1baf777f84751d388 | refs/heads/master | 2022-12-01T04:41:52.053674 | 2020-08-20T08:10:21 | 2020-08-20T08:10:21 | 288,945,795 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 5,766 | py | # Generated by Django 2.2.15 on 2020-08-20 08:09
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='Category',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('description', models.TextField()),
('name', models.CharField(blank=True, max_length=256, null=True)),
],
),
migrations.CreateModel(
name='Faq',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('title', models.CharField(max_length=256)),
('description', models.TextField()),
],
),
migrations.CreateModel(
name='Location',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('amenities', models.TextField(blank=True, null=True)),
('name', models.CharField(blank=True, max_length=256, null=True)),
('image', models.SlugField(blank=True, null=True)),
],
),
migrations.CreateModel(
name='Vendor',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.TextField()),
('logo_image', models.SlugField(blank=True, null=True)),
('type', models.TextField(blank=True, null=True)),
('website', models.URLField(blank=True, null=True)),
('category', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='vendor_category', to='event.Category')),
('location', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='vendor_location', to='event.Location')),
],
),
migrations.CreateModel(
name='VendorDetail',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('website', models.URLField()),
('description', models.TextField()),
('associated_name', models.TextField(blank=True, null=True)),
('vendor_id', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='vendordetail_vendor_id', to='event.Vendor')),
],
),
migrations.CreateModel(
name='Sponsor',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.TextField()),
('logo_image', models.SlugField()),
('sponsor_level', models.TextField()),
('presenter', models.BooleanField()),
('website', models.URLField(blank=True, null=True)),
('location', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='sponsor_location', to='event.Location')),
],
),
migrations.CreateModel(
name='Schedule',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('dateTime', models.DateTimeField()),
('description', models.TextField(blank=True, null=True)),
('track', models.TextField(blank=True, null=True)),
('location', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='schedule_location', to='event.Location')),
],
),
migrations.CreateModel(
name='Presenter',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=256)),
('title', models.CharField(max_length=256)),
('schedule', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='presenter_schedule', to='event.Schedule')),
],
),
migrations.CreateModel(
name='MySchedule',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('schedule', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='myschedule_schedule', to='event.Schedule')),
('user', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='myschedule_user', to=settings.AUTH_USER_MODEL)),
],
),
migrations.CreateModel(
name='Favorites',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('user', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='favorites_user', to=settings.AUTH_USER_MODEL)),
('vendor', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='favorites_vendor', to='event.Vendor')),
],
),
]
| [
"team@crowdbotics.com"
] | team@crowdbotics.com |
b5555f4a7a53a167eaca76b9c45cdc9e564dcdde | ee880f62a8ffc1b8544695d3bc1f4bcf809965ab | /load.py | a992777614213450331b4e97154455b4acf0eff7 | [] | no_license | kosyachniy/twianalysis | 1ba3ba6319cbeedf4f19e83ff31f01ced8b26e54 | 514a0ebb7829a4abb340d499e1151b5c55f26f80 | refs/heads/master | 2021-01-01T16:01:29.477061 | 2017-07-29T18:31:25 | 2017-07-29T18:31:25 | 97,756,836 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 565 | py | import requests, time, json
from bs4 import BeautifulSoup
url='http://mfd.ru/news/company/view/?id=3&page='
with open('db.txt', 'w') as file:
for s in range(893):
query=str(s)
page = requests.get(url + query).text
soup = BeautifulSoup(page, 'lxml')
table = soup.find('table', id='issuerNewsList')
tr=table.find_all('tr')
for i in tr:
td=i.find_all('td')
date=td[0].contents[0].strip()
name=td[1].a.contents[0].strip()
print(date, name)
a=json.dumps({'date':date, 'name':name}, ensure_ascii=False)
print(a, file=file)
time.sleep(1) | [
"polozhev@mail.ru"
] | polozhev@mail.ru |
ab909628407b321d527a8c15cae9362a49b3f812 | 27e890f900bd4bfb2e66f4eab85bc381cf4d5d3f | /tests/unit/modules/cloud/amazon/test_s3_bucket_notification.py | cc78ea111cf57ad2d6fb9fc23601859b81494d07 | [] | no_license | coll-test/notstdlib.moveitallout | eb33a560070bbded5032385d0aea2f3cf60e690b | 0987f099b783c6cf977db9233e1c3d9efcbcb3c7 | refs/heads/master | 2020-12-19T22:28:33.369557 | 2020-01-23T18:51:26 | 2020-01-23T18:51:26 | 235,865,139 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 9,695 | py | # Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import pytest
from ansible_collections.notstdlib.moveitallout.tests.unit.compat.mock import MagicMock, patch
from ansible_collections.notstdlib.moveitallout.tests.unit.modules.utils import AnsibleExitJson, AnsibleFailJson, ModuleTestCase, set_module_args
from ansible_collections.notstdlib.moveitallout.plugins.modules.s3_bucket_notification import AmazonBucket, Config
from ansible_collections.notstdlib.moveitallout.plugins.modules import s3_bucket_notification
try:
from botocore.exceptions import ClientError
except ImportError:
pass
class TestAmazonBucketOperations:
def test_current_config(self):
api_config = {
'Id': 'test-id',
'LambdaFunctionArn': 'test-arn',
'Events': [],
'Filter': {
'Key': {
'FilterRules': [{
'Name': 'Prefix',
'Value': ''
}, {
'Name': 'Suffix',
'Value': ''
}]
}
}
}
client = MagicMock()
client.get_bucket_notification_configuration.return_value = {
'LambdaFunctionConfigurations': [api_config]
}
bucket = AmazonBucket(client, 'test-bucket')
current = bucket.current_config('test-id')
assert current.raw == api_config
assert client.get_bucket_notification_configuration.call_count == 1
def test_current_config_empty(self):
client = MagicMock()
client.get_bucket_notification_configuration.return_value = {
'LambdaFunctionConfigurations': []
}
bucket = AmazonBucket(client, 'test-bucket')
current = bucket.current_config('test-id')
assert current is None
assert client.get_bucket_notification_configuration.call_count == 1
def test_apply_invalid_config(self):
client = MagicMock()
client.get_bucket_notification_configuration.return_value = {
'LambdaFunctionConfigurations': []
}
client.put_bucket_notification_configuration.side_effect = ClientError({}, '')
bucket = AmazonBucket(client, 'test-bucket')
config = Config.from_params(**{
'event_name': 'test_event',
'lambda_function_arn': 'lambda_arn',
'lambda_version': 1,
'events': ['s3:ObjectRemoved:*', 's3:ObjectCreated:*'],
'prefix': '',
'suffix': ''
})
with pytest.raises(ClientError):
bucket.apply_config(config)
def test_apply_config(self):
client = MagicMock()
client.get_bucket_notification_configuration.return_value = {
'LambdaFunctionConfigurations': []
}
bucket = AmazonBucket(client, 'test-bucket')
config = Config.from_params(**{
'event_name': 'test_event',
'lambda_function_arn': 'lambda_arn',
'lambda_version': 1,
'events': ['s3:ObjectRemoved:*', 's3:ObjectCreated:*'],
'prefix': '',
'suffix': ''
})
bucket.apply_config(config)
assert client.get_bucket_notification_configuration.call_count == 1
assert client.put_bucket_notification_configuration.call_count == 1
def test_apply_config_add_event(self):
api_config = {
'Id': 'test-id',
'LambdaFunctionArn': 'test-arn',
'Events': ['s3:ObjectRemoved:*'],
'Filter': {
'Key': {
'FilterRules': [{
'Name': 'Prefix',
'Value': ''
}, {
'Name': 'Suffix',
'Value': ''
}]
}
}
}
client = MagicMock()
client.get_bucket_notification_configuration.return_value = {
'LambdaFunctionConfigurations': [api_config]
}
bucket = AmazonBucket(client, 'test-bucket')
config = Config.from_params(**{
'event_name': 'test-id',
'lambda_function_arn': 'test-arn',
'lambda_version': 1,
'events': ['s3:ObjectRemoved:*', 's3:ObjectCreated:*'],
'prefix': '',
'suffix': ''
})
bucket.apply_config(config)
assert client.get_bucket_notification_configuration.call_count == 1
assert client.put_bucket_notification_configuration.call_count == 1
client.put_bucket_notification_configuration.assert_called_with(
Bucket='test-bucket',
NotificationConfiguration={
'LambdaFunctionConfigurations': [{
'Id': 'test-id',
'LambdaFunctionArn': 'test-arn:1',
'Events': ['s3:ObjectCreated:*', 's3:ObjectRemoved:*'],
'Filter': {
'Key': {
'FilterRules': [{
'Name': 'Prefix',
'Value': ''
}, {
'Name': 'Suffix',
'Value': ''
}]
}
}
}]
}
)
def test_delete_config(self):
api_config = {
'Id': 'test-id',
'LambdaFunctionArn': 'test-arn',
'Events': [],
'Filter': {
'Key': {
'FilterRules': [{
'Name': 'Prefix',
'Value': ''
}, {
'Name': 'Suffix',
'Value': ''
}]
}
}
}
client = MagicMock()
client.get_bucket_notification_configuration.return_value = {
'LambdaFunctionConfigurations': [api_config]
}
bucket = AmazonBucket(client, 'test-bucket')
config = Config.from_params(**{
'event_name': 'test-id',
'lambda_function_arn': 'lambda_arn',
'lambda_version': 1,
'events': [],
'prefix': '',
'suffix': ''
})
bucket.delete_config(config)
assert client.get_bucket_notification_configuration.call_count == 1
assert client.put_bucket_notification_configuration.call_count == 1
client.put_bucket_notification_configuration.assert_called_with(
Bucket='test-bucket',
NotificationConfiguration={'LambdaFunctionConfigurations': []}
)
class TestConfig:
def test_config_from_params(self):
config = Config({
'Id': 'test-id',
'LambdaFunctionArn': 'test-arn:10',
'Events': [],
'Filter': {
'Key': {
'FilterRules': [{
'Name': 'Prefix',
'Value': ''
}, {
'Name': 'Suffix',
'Value': ''
}]
}
}
})
config_from_params = Config.from_params(**{
'event_name': 'test-id',
'lambda_function_arn': 'test-arn',
'lambda_version': 10,
'events': [],
'prefix': '',
'suffix': ''
})
assert config.raw == config_from_params.raw
assert config == config_from_params
class TestModule(ModuleTestCase):
def test_module_fail_when_required_args_missing(self):
with pytest.raises(AnsibleFailJson):
set_module_args({})
s3_bucket_notification.main()
@patch('ansible_collections.notstdlib.moveitallout.plugins.modules.s3_bucket_notification.AnsibleAWSModule.client')
def test_add_s3_bucket_notification(self, aws_client):
aws_client.return_value.get_bucket_notification_configuration.return_value = {
'LambdaFunctionConfigurations': []
}
set_module_args({
'region': 'us-east-2',
'lambda_function_arn': 'test-lambda-arn',
'bucket_name': 'test-lambda',
'event_name': 'test-id',
'events': ['s3:ObjectCreated:*', 's3:ObjectRemoved:*'],
'state': 'present',
'prefix': '/images',
'suffix': '.jpg'
})
with pytest.raises(AnsibleExitJson) as context:
s3_bucket_notification.main()
result = context.value.args[0]
assert result['changed'] is True
assert aws_client.return_value.get_bucket_notification_configuration.call_count == 1
aws_client.return_value.put_bucket_notification_configuration.assert_called_with(
Bucket='test-lambda',
NotificationConfiguration={
'LambdaFunctionConfigurations': [{
'Id': 'test-id',
'LambdaFunctionArn': 'test-lambda-arn',
'Events': ['s3:ObjectCreated:*', 's3:ObjectRemoved:*'],
'Filter': {
'Key': {
'FilterRules': [{
'Name': 'Prefix',
'Value': '/images'
}, {
'Name': 'Suffix',
'Value': '.jpg'
}]
}
}
}]
})
| [
"wk@sydorenko.org.ua"
] | wk@sydorenko.org.ua |
c9f649966413184912b0a18b36262eb636f1c1bc | 8dcd3ee098b4f5b80879c37a62292f42f6b2ae17 | /venv/Lib/site-packages/win32/test/test_win32rcparser.py | 02510974cd502991311d8e0bd3161538cefaf9e1 | [] | no_license | GregVargas1999/InfinityAreaInfo | 53fdfefc11c4af8f5d2b8f511f7461d11a3f7533 | 2e4a7c6a2424514ca0ec58c9153eb08dc8e09a4a | refs/heads/master | 2022-12-01T20:26:05.388878 | 2020-08-11T18:37:05 | 2020-08-11T18:37:05 | 286,821,452 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,485 | py | import os
import sys
import tempfile
import unittest
import win32con
import win32rcparser
class TestParser(unittest.TestCase):
def setUp(self):
rc_file = os.path.join(os.path.dirname(__file__), "win32rcparser", "test.rc")
self.resources = win32rcparser.Parse(rc_file)
def testStrings(self):
for sid, expected in [
("IDS_TEST_STRING4", "Test 'single quoted' string"),
("IDS_TEST_STRING1", 'Test "quoted" string'),
("IDS_TEST_STRING3", 'String with single " quote'),
("IDS_TEST_STRING2", 'Test string'),
]:
got = self.resources.stringTable[sid].value
self.assertEqual(got, expected)
def testStandardIds(self):
for idc in "IDOK IDCANCEL".split():
correct = getattr(win32con, idc)
self.assertEqual(self.resources.names[correct], idc)
self.assertEqual(self.resources.ids[idc], correct)
def testTabStop(self):
d = self.resources.dialogs["IDD_TEST_DIALOG2"]
tabstop_names = ["IDC_EDIT1", "IDOK"] # should have WS_TABSTOP
tabstop_ids = [self.resources.ids[name] for name in tabstop_names]
notabstop_names = ["IDC_EDIT2"] # should have WS_TABSTOP
notabstop_ids = [self.resources.ids[name] for name in notabstop_names]
num_ok = 0
for cdef in d[1:]: # skip dlgdef
# print cdef
cid = cdef[2]
style = cdef[-2]
styleex = cdef[-1]
if cid in tabstop_ids:
self.failUnlessEqual(style & win32con.WS_TABSTOP, win32con.WS_TABSTOP)
num_ok += 1
elif cid in notabstop_ids:
self.failUnlessEqual(style & win32con.WS_TABSTOP, 0)
num_ok += 1
self.failUnlessEqual(num_ok, len(tabstop_ids) + len(notabstop_ids))
class TestGenerated(TestParser):
def setUp(self):
# don't call base!
rc_file = os.path.join(os.path.dirname(__file__), "win32rcparser", "test.rc")
py_file = tempfile.mktemp('test_win32rcparser.py')
try:
win32rcparser.GenerateFrozenResource(rc_file, py_file)
py_source = open(py_file).read()
finally:
if os.path.isfile(py_file):
os.unlink(py_file)
# poor-man's import :)
globs = {}
exec(py_source, globs, globs)
self.resources = globs["FakeParser"]()
if __name__ == '__main__':
unittest.main()
| [
"44142880+GregVargas1999@users.noreply.github.com"
] | 44142880+GregVargas1999@users.noreply.github.com |
cdff15de0b8ec7264a8c9b69e2d3a96475f6c8de | 187a6558f3c7cb6234164677a2bda2e73c26eaaf | /jdcloud_sdk/services/mongodb/apis/DescribeSecurityIpsRequest.py | b2c3816b3457164790bb60201ef3ace536c5987e | [
"Apache-2.0"
] | permissive | jdcloud-api/jdcloud-sdk-python | 4d2db584acc2620b7a866af82d21658cdd7cc227 | 3d1c50ed9117304d3b77a21babe899f939ae91cd | refs/heads/master | 2023-09-04T02:51:08.335168 | 2023-08-30T12:00:25 | 2023-08-30T12:00:25 | 126,276,169 | 18 | 36 | Apache-2.0 | 2023-09-07T06:54:49 | 2018-03-22T03:47:02 | Python | UTF-8 | Python | false | false | 1,347 | py | # coding=utf8
# Copyright 2018 JDCLOUD.COM
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# NOTE: This class is auto generated by the jdcloud code generator program.
from jdcloud_sdk.core.jdcloudrequest import JDCloudRequest
class DescribeSecurityIpsRequest(JDCloudRequest):
"""
查询实例访问白名单
"""
def __init__(self, parameters, header=None, version="v1"):
super(DescribeSecurityIpsRequest, self).__init__(
'/regions/{regionId}/instances/{instanceId}/securityIps', 'GET', header, version)
self.parameters = parameters
class DescribeSecurityIpsParameters(object):
def __init__(self, regionId, instanceId, ):
"""
:param regionId: Region ID
:param instanceId: Instance ID
"""
self.regionId = regionId
self.instanceId = instanceId
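
# Illustrative usage (region and instance ids are placeholders):
#     params = DescribeSecurityIpsParameters('cn-north-1', 'mongodb-xxxxxxxx')
#     request = DescribeSecurityIpsRequest(params)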
| [
"oulinbao@jd.com"
] | oulinbao@jd.com |
9b3a33eba56dd2a09d23dcb118330a360473d9ab | bfc25f1ad7bfe061b57cfab82aba9d0af1453491 | /data/external/repositories_2to3/93704/kaggle-allstate-purchase-master/pre_parse.py | 2e32dced90c7215c9210ca3e6123bde503f8189e | [
"MIT"
] | permissive | Keesiu/meta-kaggle | 77d134620ebce530d183467202cf45639d9c6ff2 | 87de739aba2399fd31072ee81b391f9b7a63f540 | refs/heads/master | 2020-03-28T00:23:10.584151 | 2018-12-20T19:09:50 | 2018-12-20T19:09:50 | 147,406,338 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 5,057 | py | """
Project: http://www.kaggle.com/c/allstate-purchase-prediction-challenge
Ranking: 9th from 1571 teams
Work Period: 12-may-2014 to 19-may-2014
Author: Euclides Fernandes Filho
email: euclides5414@gmail.com
"""
import numpy as np
import pandas as pd
from os import path
import conv
from time import sleep, time
from imports import *
ROOT = "./"
PRE_TRAIN_FILE = ROOT + 'data/train.csv'
PRE_TEST_FILE = ROOT + 'data/test_v2.csv'
TRAIN_FILE = ROOT + 'data/train_P.csv'
TEST_FILE = ROOT + 'data/test_v2_P.csv'
def pre_parse():
convs = {'car_value':conv.conv_car_value, 'state':conv.conv_state, 'C_previous':conv.conv_C_previous, 'duration_previous':conv.conv_duration_previous, 'time':conv.conv_time}
if not path.exists(TRAIN_FILE):
train = pd.read_csv(PRE_TRAIN_FILE, converters=convs)
train = do_risk(train)
train.to_csv(TRAIN_FILE, sep=',',na_rep="NA")
else:
train = pd.read_csv(TRAIN_FILE)
if not path.exists(TEST_FILE):
if path.exists(TEST_FILE + ".tmp"):
test = pd.read_csv(TEST_FILE + ".tmp")
else:
test = pd.read_csv(PRE_TEST_FILE, converters=convs)
test = do_risk(test)
# save a tmp file for safety in the case of a further error
test.to_csv(TEST_FILE + ".tmp", sep=',',na_rep="NA")
cols = list(test.columns.values)
print(cols)
for c in cols:
if c.startswith('Unnamed'):
test = test.drop(c,1)
print(c, "droped")
#some test location NAs
imp = Imputer(strategy='median',axis=0)
for state in np.unique(test.state):
v = test[test['state']==state].values
# sklearn bug version 0.14.1 - need to stack a dummy column before median imputation
# see http://stackoverflow.com/questions/23742005/scikit-learn-imputer-class-possible-bug-with-median-strategy
z = np.zeros(len(v))
z = z.reshape((len(z),1))
v = np.hstack((z,v))
v = imp.fit_transform(v)
test[test['state']==state] = v
test.to_csv(TEST_FILE, sep=',',na_rep="NA")
else:
test = pd.read_csv(TEST_FILE)
print(train.shape, test.shape)
return train, test
def do_risk(dt):
state, old_state = "FL",""
age_youngest = 75
age_oldest = 0
print("You'd better off drink a beer .... it will take a while .....")
sleep(2)
t0 = time()
for i in range(dt.shape[0]):
risk_factor = dt['risk_factor'][i]
if np.isnan(risk_factor):
state, age_oldest, age_youngest = dt['state'][i], dt['age_oldest'][i],dt['age_youngest'][i]
if state != old_state:
q_state = dt[(dt['state']==state) & (~np.isnan(dt['risk_factor']))]
old_state = state
q = q_state[((q_state['age_youngest']==age_youngest) & (q_state['age_oldest']==age_oldest))]
if len(q) > 0:
v = q['risk_factor'].median()
if np.isnan(v):
print(i,"ISNAN")
print(q)
dt['risk_factor'][i] = v
else:
for l,off in enumerate([1,2,3,4]):
q = q_state[((q_state['age_youngest']>=(age_youngest - off)) & (q_state['age_youngest'] <=(age_youngest + off)))\
& ((q_state['age_oldest']>=(age_oldest - off)) & (q_state['age_oldest']<=(age_oldest + off)))]
if len(q) > 0:
dt['risk_factor'][i] = q['risk_factor'].median()
print(i,":::LEVEL %i::::" % (l+1), len(q_state), len(q), state, age_youngest, age_oldest)
break
if len(q) == 0:
q = dt[((dt['age_youngest']>=(age_youngest - off)) & (dt['age_youngest'] <=(age_youngest + off)))\
& ((dt['age_oldest']>=(age_oldest - off)) & (dt['age_oldest']<=(age_oldest + off)))]
if len(q) > 0:
dt['risk_factor'][i] = q['risk_factor'].median()
print(i,":::LEVEL %i::::" % (l+2), len(q_state), len(q), state, age_youngest, age_oldest)
else:
if len(q) > 0:
q = dt[((dt['age_youngest']==age_youngest) & (dt['age_oldest']==age_oldest) & (~np.isnan(dt['risk_factor'])))]
print(i,":::LEVEL %i::::" % (l+3), len(q_state), len(q), state, age_youngest, age_oldest)
else:
print(i,":::FAILED::::", len(q_state), len(q), state, age_youngest, age_oldest)
break
print("risk NA done in %2.2f s" % (time() - t0))
print(dt.shape)
return dt
def main():
print(__doc__)
pre_parse()
if __name__ == '__main__':
main()
| [
"keesiu.wong@gmail.com"
] | keesiu.wong@gmail.com |
3c7835191a26ec3a57d8671e9429790fd4def095 | 98b1956594921aeef6e4b3c0f5b15703c3eee6a7 | /atom/nucleus/python/nucleus_api/api/performance_api.py | 49a6050dfbe4ba042e42fdfbe7990e4308316eee | [
"Apache-2.0"
] | permissive | sumit4-ttn/SDK | d4db3dcac077e9c9508a8227010a2ab764c31023 | b3ae385e5415e47ac70abd0b3fdeeaeee9aa7cff | refs/heads/master | 2022-11-25T14:05:16.911068 | 2020-08-09T17:31:55 | 2020-08-09T17:31:55 | 286,413,715 | 0 | 0 | Apache-2.0 | 2020-08-10T08:03:04 | 2020-08-10T08:03:03 | null | UTF-8 | Python | false | false | 104,391 | py | # coding: utf-8
"""
Hydrogen Atom API
The Hydrogen Atom API # noqa: E501
OpenAPI spec version: 1.7.0
Contact: info@hydrogenplatform.com
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from nucleus_api.api_client import ApiClient
class PerformanceApi(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
Ref: https://github.com/swagger-api/swagger-codegen
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
def get_account_performance_using_get(self, account_id, **kwargs): # noqa: E501
"""Account Performance # noqa: E501
Get information on the performance of an account using IRR (Internal Rate of Return). You must provide the unique account_id. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_account_performance_using_get(account_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str account_id: Account Id -/account (required)
:param str active_premium_period: Q (quarterly), Monthly (M) , Annually (Y), Daily (D) --caps matter, codes in () - (statId: 19, default: 'D')
:param str annualized_return_period: Q (quarterly), Monthly (M) , Annually (Y), Daily (D) --caps matter, codes in () - (statId: 19, default: 'D')
:param str benchmark_id: Client Benchmark or Tenant Benchmark id -/benchmark
:param date end_date: end date
:param float hist_factor: Histogram factor- (statId: 39, default: 5)
:param float mar_down_side_deviation: minimum acceptable return for downside deviation - (statId: 58, default: 0)
        :param float max_percentile_monte_carlo: max percentile for monte carlo, i.e. 80 - (statId: 62, default: 95)
        :param float mean_percentile_monte_carlo: mean percentile for monte carlo, i.e. 50 - (statId: 62, default: 50)
        :param float min_percentile_monte_carlo: min percentile for monte carlo, i.e. 20 - (statId: 62, default: 5)
:param int moving_average_n_day: number of days for moving average n-day - (statId: 18, default: 7)
:param int n_day_returns: number of days for Rolling n-day returns - (statId: 2, default: 7)
:param int n_path_monte_carlo: number of points for a simulation- (statId: 62, default: 100)
:param int n_rolling_max_drawdown: number of days for Rolling n-day max drawdown- (statId: 46, default: 7)
:param int n_rolling_volatility: number of days for Rolling n-day volatility- (statId: 34, default: 7)
:param int num_sim_monte_carlo: number of simulations - (statId: 62, default: 1000)
:param str period_type: Quarter (Q), Monthly (M) , Annually (Y), Daily (D) --caps matter, codes in () -Carries out stats on either daily, monthly, annually or quarterly dates (default: 'D')
:param float risk_free_alpha: risk free val alpha - (statId: 52, default: 0)
:param float risk_free_sharpe: risk free val sharpe- (statId: 49, default: 0)
:param float risk_free_sortino: risk free val sortino - (statId: 56, default: 0)
:param float risk_free_treynor: risk free val treynor- (statId: 51, default: 0)
:param date start_date: start date
:param str stat: A stat type - /statistics
        :param float var_conf_interval: VaR Confidence Interval ( alpha ), i.e. 99, 95, etc - (statId: 40, default: 95)
:return: object
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_account_performance_using_get_with_http_info(account_id, **kwargs) # noqa: E501
else:
(data) = self.get_account_performance_using_get_with_http_info(account_id, **kwargs) # noqa: E501
return data
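    # Illustrative call pattern (the account id and keyword values below are
    # hypothetical):
    #     api = PerformanceApi()
    #     perf = api.get_account_performance_using_get(
    #         "some-account-id", period_type="D", stat="all")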
def get_account_performance_using_get_with_http_info(self, account_id, **kwargs): # noqa: E501
"""Account Performance # noqa: E501
Get information on the performance of an account using IRR (Internal Rate of Return). You must provide the unique account_id. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_account_performance_using_get_with_http_info(account_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str account_id: Account Id -/account (required)
:param str active_premium_period: Q (quarterly), Monthly (M) , Annually (Y), Daily (D) --caps matter, codes in () - (statId: 19, default: 'D')
:param str annualized_return_period: Q (quarterly), Monthly (M) , Annually (Y), Daily (D) --caps matter, codes in () - (statId: 19, default: 'D')
:param str benchmark_id: Client Benchmark or Tenant Benchmark id -/benchmark
:param date end_date: end date
:param float hist_factor: Histogram factor- (statId: 39, default: 5)
:param float mar_down_side_deviation: minimum acceptable return for downside deviation - (statId: 58, default: 0)
:param float max_percentile_monte_carlo: max percentile for monte carlo, i.entity. 80 - (statId: 62, default: 95)
:param float mean_percentile_monte_carlo: mean percentile for monte carlo i.entity. 50- (statId: 62, default: 50)
:param float min_percentile_monte_carlo: min percentile for monte carlo i.entity. 20 - (statId: 62, default: 5)
:param int moving_average_n_day: number of days for moving average n-day - (statId: 18, default: 7)
:param int n_day_returns: number of days for Rolling n-day returns - (statId: 2, default: 7)
:param int n_path_monte_carlo: number of points for a simulation- (statId: 62, default: 100)
:param int n_rolling_max_drawdown: number of days for Rolling n-day max drawdown- (statId: 46, default: 7)
:param int n_rolling_volatility: number of days for Rolling n-day volatility- (statId: 34, default: 7)
:param int num_sim_monte_carlo: number of simulations - (statId: 62, default: 1000)
:param str period_type: Quarter (Q), Monthly (M) , Annually (Y), Daily (D) --caps matter, codes in () -Carries out stats on either daily, monthly, annually or quarterly dates (default: 'D')
:param float risk_free_alpha: risk free val alpha - (statId: 52, default: 0)
:param float risk_free_sharpe: risk free val sharpe- (statId: 49, default: 0)
:param float risk_free_sortino: risk free val sortino - (statId: 56, default: 0)
:param float risk_free_treynor: risk free val treynor- (statId: 51, default: 0)
:param date start_date: start date
:param str stat: A stat type - /statistics
        :param float var_conf_interval: VaR Confidence Interval ( alpha ), i.e. 99, 95, etc - (statId: 40, default: 95)
:return: object
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['account_id', 'active_premium_period', 'annualized_return_period', 'benchmark_id', 'end_date', 'hist_factor', 'mar_down_side_deviation', 'max_percentile_monte_carlo', 'mean_percentile_monte_carlo', 'min_percentile_monte_carlo', 'moving_average_n_day', 'n_day_returns', 'n_path_monte_carlo', 'n_rolling_max_drawdown', 'n_rolling_volatility', 'num_sim_monte_carlo', 'period_type', 'risk_free_alpha', 'risk_free_sharpe', 'risk_free_sortino', 'risk_free_treynor', 'start_date', 'stat', 'var_conf_interval'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
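        # `locals()` captures the named arguments (plus `self` and `kwargs`);
        # the loop below rejects any keyword not in the `all_params` whitelist,
        # then flattens the accepted kwargs into `params` so positional and
        # keyword arguments can be handled uniformly.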
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_account_performance_using_get" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'account_id' is set
if ('account_id' not in params or
params['account_id'] is None):
raise ValueError("Missing the required parameter `account_id` when calling `get_account_performance_using_get`") # noqa: E501
collection_formats = {}
path_params = {}
if 'account_id' in params:
path_params['account_id'] = params['account_id'] # noqa: E501
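        # `account_id` replaces the `{account_id}` placeholder in the URL
        # template passed to `call_api` below; all optional parameters travel
        # as query string key/value pairs.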
query_params = []
if 'active_premium_period' in params:
query_params.append(('active_premium_period', params['active_premium_period'])) # noqa: E501
if 'annualized_return_period' in params:
query_params.append(('annualized_return_period', params['annualized_return_period'])) # noqa: E501
if 'benchmark_id' in params:
query_params.append(('benchmark_id', params['benchmark_id'])) # noqa: E501
if 'end_date' in params:
query_params.append(('end_date', params['end_date'])) # noqa: E501
if 'hist_factor' in params:
query_params.append(('hist_factor', params['hist_factor'])) # noqa: E501
if 'mar_down_side_deviation' in params:
query_params.append(('mar_down_side_deviation', params['mar_down_side_deviation'])) # noqa: E501
if 'max_percentile_monte_carlo' in params:
query_params.append(('max_percentile_monte_carlo', params['max_percentile_monte_carlo'])) # noqa: E501
if 'mean_percentile_monte_carlo' in params:
query_params.append(('mean_percentile_monte_carlo', params['mean_percentile_monte_carlo'])) # noqa: E501
if 'min_percentile_monte_carlo' in params:
query_params.append(('min_percentile_monte_carlo', params['min_percentile_monte_carlo'])) # noqa: E501
if 'moving_average_n_day' in params:
query_params.append(('moving_average_n_day', params['moving_average_n_day'])) # noqa: E501
if 'n_day_returns' in params:
query_params.append(('n_day_returns', params['n_day_returns'])) # noqa: E501
if 'n_path_monte_carlo' in params:
query_params.append(('n_path_monte_carlo', params['n_path_monte_carlo'])) # noqa: E501
if 'n_rolling_max_drawdown' in params:
query_params.append(('n_rolling_max_drawdown', params['n_rolling_max_drawdown'])) # noqa: E501
if 'n_rolling_volatility' in params:
query_params.append(('n_rolling_volatility', params['n_rolling_volatility'])) # noqa: E501
if 'num_sim_monte_carlo' in params:
query_params.append(('num_sim_monte_carlo', params['num_sim_monte_carlo'])) # noqa: E501
if 'period_type' in params:
query_params.append(('period_type', params['period_type'])) # noqa: E501
if 'risk_free_alpha' in params:
query_params.append(('risk_free_alpha', params['risk_free_alpha'])) # noqa: E501
if 'risk_free_sharpe' in params:
query_params.append(('risk_free_sharpe', params['risk_free_sharpe'])) # noqa: E501
if 'risk_free_sortino' in params:
query_params.append(('risk_free_sortino', params['risk_free_sortino'])) # noqa: E501
if 'risk_free_treynor' in params:
query_params.append(('risk_free_treynor', params['risk_free_treynor'])) # noqa: E501
if 'start_date' in params:
query_params.append(('start_date', params['start_date'])) # noqa: E501
if 'stat' in params:
query_params.append(('stat', params['stat'])) # noqa: E501
if 'var_conf_interval' in params:
query_params.append(('var_conf_interval', params['var_conf_interval'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['*/*']) # noqa: E501
# Authentication setting
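        # `oauth2` names the security scheme configured on the shared
        # api_client, which injects the bearer access token into each request.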
auth_settings = ['oauth2'] # noqa: E501
return self.api_client.call_api(
'/account/{account_id}/performance', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='object', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
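    # Illustrative usage sketch (not part of the generated client): assuming an
    # authenticated instance of this API class named `api`; the account id and
    # dates below are hypothetical placeholders.
    #
    #     from datetime import date
    #     stats = api.get_account_performance_using_get(
    #         'hypothetical-account-id',
    #         period_type='M',
    #         start_date=date(2020, 1, 1),
    #         end_date=date(2020, 12, 31),
    #     )
    #     # Async variant: returns a worker thread immediately.
    #     thread = api.get_account_performance_using_get(
    #         'hypothetical-account-id', async_req=True)
    #     stats = thread.get()  # blocks until the request completes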
def get_allocation_performance_using_get(self, allocation_id, **kwargs): # noqa: E501
"""Allocation Performance # noqa: E501
Get information on the performance of an allocation using TWR (Time Weighted Return). You must provide the unique allocation_id. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_allocation_performance_using_get(allocation_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str allocation_id: Allocation Id -/allocation (required)
        :param str active_premium_period: Quarterly (Q), Monthly (M), Annually (Y), Daily (D) -- case matters; pass the code in parentheses - (statId: 19, default: 'D')
        :param str annualized_return_period: Quarterly (Q), Monthly (M), Annually (Y), Daily (D) -- case matters; pass the code in parentheses - (statId: 19, default: 'D')
        :param str benchmark_id: Tenant Benchmark Id -/benchmark
        :param date end_date: end date
        :param float hist_factor: Histogram factor - (statId: 39, default: 5)
        :param bool is_current_weight: is_current_weight
        :param float mar_down_side_deviation: minimum acceptable return for downside deviation - (statId: 58, default: 0)
        :param float max_percentile_monte_carlo: max percentile for Monte Carlo, i.e. 80 - (statId: 62, default: 95)
        :param float mean_percentile_monte_carlo: mean percentile for Monte Carlo, i.e. 50 - (statId: 62, default: 50)
        :param float min_percentile_monte_carlo: min percentile for Monte Carlo, i.e. 20 - (statId: 62, default: 5)
        :param int moving_average_n_day: number of days for moving average n-day - (statId: 18, default: 7)
        :param int n_day_returns: number of days for rolling n-day returns - (statId: 2, default: 7)
        :param int n_path_monte_carlo: number of points for a simulation - (statId: 62, default: 100)
        :param int n_rolling_max_drawdown: number of days for rolling n-day max drawdown - (statId: 46, default: 7)
        :param int n_rolling_volatility: number of days for rolling n-day volatility - (statId: 34, default: 7)
        :param int num_sim_monte_carlo: number of simulations - (statId: 62, default: 1000)
        :param str period_type: Quarterly (Q), Monthly (M), Annually (Y), Daily (D) -- case matters; pass the code in parentheses. Carries out stats on daily, monthly, quarterly or annual dates (default: 'D')
        :param float risk_free_alpha: risk free val alpha - (statId: 52, default: 0)
        :param float risk_free_sharpe: risk free val sharpe - (statId: 49, default: 0)
        :param float risk_free_sortino: risk free val sortino - (statId: 56, default: 0)
        :param float risk_free_treynor: risk free val treynor - (statId: 51, default: 0)
        :param date start_date: start date
        :param str stat: A stat type found under the Statistics banner
        :param float var_conf_interval: VaR confidence interval (alpha), i.e. 99, 95, etc. - (statId: 40, default: 95)
:return: dict(str, object)
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_allocation_performance_using_get_with_http_info(allocation_id, **kwargs) # noqa: E501
else:
(data) = self.get_allocation_performance_using_get_with_http_info(allocation_id, **kwargs) # noqa: E501
            return data

    def get_allocation_performance_using_get_with_http_info(self, allocation_id, **kwargs): # noqa: E501
"""Allocation Performance # noqa: E501
Get information on the performance of an allocation using TWR (Time Weighted Return). You must provide the unique allocation_id. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_allocation_performance_using_get_with_http_info(allocation_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str allocation_id: Allocation Id -/allocation (required)
        :param str active_premium_period: Quarterly (Q), Monthly (M), Annually (Y), Daily (D) -- case matters; pass the code in parentheses - (statId: 19, default: 'D')
        :param str annualized_return_period: Quarterly (Q), Monthly (M), Annually (Y), Daily (D) -- case matters; pass the code in parentheses - (statId: 19, default: 'D')
        :param str benchmark_id: Tenant Benchmark Id -/benchmark
        :param date end_date: end date
        :param float hist_factor: Histogram factor - (statId: 39, default: 5)
        :param bool is_current_weight: is_current_weight
        :param float mar_down_side_deviation: minimum acceptable return for downside deviation - (statId: 58, default: 0)
        :param float max_percentile_monte_carlo: max percentile for Monte Carlo, i.e. 80 - (statId: 62, default: 95)
        :param float mean_percentile_monte_carlo: mean percentile for Monte Carlo, i.e. 50 - (statId: 62, default: 50)
        :param float min_percentile_monte_carlo: min percentile for Monte Carlo, i.e. 20 - (statId: 62, default: 5)
        :param int moving_average_n_day: number of days for moving average n-day - (statId: 18, default: 7)
        :param int n_day_returns: number of days for rolling n-day returns - (statId: 2, default: 7)
        :param int n_path_monte_carlo: number of points for a simulation - (statId: 62, default: 100)
        :param int n_rolling_max_drawdown: number of days for rolling n-day max drawdown - (statId: 46, default: 7)
        :param int n_rolling_volatility: number of days for rolling n-day volatility - (statId: 34, default: 7)
        :param int num_sim_monte_carlo: number of simulations - (statId: 62, default: 1000)
        :param str period_type: Quarterly (Q), Monthly (M), Annually (Y), Daily (D) -- case matters; pass the code in parentheses. Carries out stats on daily, monthly, quarterly or annual dates (default: 'D')
        :param float risk_free_alpha: risk free val alpha - (statId: 52, default: 0)
        :param float risk_free_sharpe: risk free val sharpe - (statId: 49, default: 0)
        :param float risk_free_sortino: risk free val sortino - (statId: 56, default: 0)
        :param float risk_free_treynor: risk free val treynor - (statId: 51, default: 0)
        :param date start_date: start date
        :param str stat: A stat type found under the Statistics banner
        :param float var_conf_interval: VaR confidence interval (alpha), i.e. 99, 95, etc. - (statId: 40, default: 95)
:return: dict(str, object)
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['allocation_id', 'active_premium_period', 'annualized_return_period', 'benchmark_id', 'end_date', 'hist_factor', 'is_current_weight', 'mar_down_side_deviation', 'max_percentile_monte_carlo', 'mean_percentile_monte_carlo', 'min_percentile_monte_carlo', 'moving_average_n_day', 'n_day_returns', 'n_path_monte_carlo', 'n_rolling_max_drawdown', 'n_rolling_volatility', 'num_sim_monte_carlo', 'period_type', 'risk_free_alpha', 'risk_free_sharpe', 'risk_free_sortino', 'risk_free_treynor', 'start_date', 'stat', 'var_conf_interval'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_allocation_performance_using_get" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'allocation_id' is set
if ('allocation_id' not in params or
params['allocation_id'] is None):
raise ValueError("Missing the required parameter `allocation_id` when calling `get_allocation_performance_using_get`") # noqa: E501
collection_formats = {}
path_params = {}
if 'allocation_id' in params:
path_params['allocation_id'] = params['allocation_id'] # noqa: E501
query_params = []
if 'active_premium_period' in params:
query_params.append(('active_premium_period', params['active_premium_period'])) # noqa: E501
if 'annualized_return_period' in params:
query_params.append(('annualized_return_period', params['annualized_return_period'])) # noqa: E501
if 'benchmark_id' in params:
query_params.append(('benchmark_id', params['benchmark_id'])) # noqa: E501
if 'end_date' in params:
query_params.append(('end_date', params['end_date'])) # noqa: E501
if 'hist_factor' in params:
query_params.append(('hist_factor', params['hist_factor'])) # noqa: E501
if 'is_current_weight' in params:
query_params.append(('is_current_weight', params['is_current_weight'])) # noqa: E501
if 'mar_down_side_deviation' in params:
query_params.append(('mar_down_side_deviation', params['mar_down_side_deviation'])) # noqa: E501
if 'max_percentile_monte_carlo' in params:
query_params.append(('max_percentile_monte_carlo', params['max_percentile_monte_carlo'])) # noqa: E501
if 'mean_percentile_monte_carlo' in params:
query_params.append(('mean_percentile_monte_carlo', params['mean_percentile_monte_carlo'])) # noqa: E501
if 'min_percentile_monte_carlo' in params:
query_params.append(('min_percentile_monte_carlo', params['min_percentile_monte_carlo'])) # noqa: E501
if 'moving_average_n_day' in params:
query_params.append(('moving_average_n_day', params['moving_average_n_day'])) # noqa: E501
if 'n_day_returns' in params:
query_params.append(('n_day_returns', params['n_day_returns'])) # noqa: E501
if 'n_path_monte_carlo' in params:
query_params.append(('n_path_monte_carlo', params['n_path_monte_carlo'])) # noqa: E501
if 'n_rolling_max_drawdown' in params:
query_params.append(('n_rolling_max_drawdown', params['n_rolling_max_drawdown'])) # noqa: E501
if 'n_rolling_volatility' in params:
query_params.append(('n_rolling_volatility', params['n_rolling_volatility'])) # noqa: E501
if 'num_sim_monte_carlo' in params:
query_params.append(('num_sim_monte_carlo', params['num_sim_monte_carlo'])) # noqa: E501
if 'period_type' in params:
query_params.append(('period_type', params['period_type'])) # noqa: E501
if 'risk_free_alpha' in params:
query_params.append(('risk_free_alpha', params['risk_free_alpha'])) # noqa: E501
if 'risk_free_sharpe' in params:
query_params.append(('risk_free_sharpe', params['risk_free_sharpe'])) # noqa: E501
if 'risk_free_sortino' in params:
query_params.append(('risk_free_sortino', params['risk_free_sortino'])) # noqa: E501
if 'risk_free_treynor' in params:
query_params.append(('risk_free_treynor', params['risk_free_treynor'])) # noqa: E501
if 'start_date' in params:
query_params.append(('start_date', params['start_date'])) # noqa: E501
if 'stat' in params:
query_params.append(('stat', params['stat'])) # noqa: E501
if 'var_conf_interval' in params:
query_params.append(('var_conf_interval', params['var_conf_interval'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['*/*']) # noqa: E501
# Authentication setting
auth_settings = ['oauth2'] # noqa: E501
return self.api_client.call_api(
'/allocation/{allocation_id}/performance', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
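            # This endpoint deserializes the response into a string-keyed dict;
            # the account, benchmark, client, goal and model variants return a
            # bare `object`.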
response_type='dict(str, object)', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def get_benchmark_performance_using_get(self, benchmark_id, **kwargs): # noqa: E501
"""Benchmark Performance # noqa: E501
Get information on the performance of a benchmark using TWR (Time Weighted Return). You must provide the unique benchmark_id. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_benchmark_performance_using_get(benchmark_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str benchmark_id: Benchmark Id - /benchmark (required)
        :param str active_premium_period: Quarterly (Q), Monthly (M), Annually (Y), Daily (D) -- case matters; pass the code in parentheses - (statId: 19, default: 'D')
        :param str annualized_return_period: Quarterly (Q), Monthly (M), Annually (Y), Daily (D) -- case matters; pass the code in parentheses - (statId: 19, default: 'D')
        :param str comparison_benchmark_id: comparison_benchmark_id
        :param date end_date: end date
        :param float hist_factor: Histogram factor - (statId: 39, default: 5)
        :param float mar_down_side_deviation: minimum acceptable return for downside deviation - (statId: 58, default: 0)
        :param float max_percentile_monte_carlo: max percentile for Monte Carlo, i.e. 80 - (statId: 62, default: 95)
        :param float mean_percentile_monte_carlo: mean percentile for Monte Carlo, i.e. 50 - (statId: 62, default: 50)
        :param float min_percentile_monte_carlo: min percentile for Monte Carlo, i.e. 20 - (statId: 62, default: 5)
        :param int moving_average_n_day: number of days for moving average n-day - (statId: 18, default: 7)
        :param int n_day_returns: number of days for rolling n-day returns - (statId: 2, default: 7)
        :param int n_path_monte_carlo: number of points for a simulation - (statId: 62, default: 100)
        :param int n_rolling_max_drawdown: number of days for rolling n-day max drawdown - (statId: 46, default: 7)
        :param int n_rolling_volatility: number of days for rolling n-day volatility - (statId: 34, default: 7)
        :param int num_sim_monte_carlo: number of simulations - (statId: 62, default: 1000)
        :param str period_type: Quarterly (Q), Monthly (M), Annually (Y), Daily (D) -- case matters; pass the code in parentheses. Carries out stats on daily, monthly, quarterly or annual dates (default: 'D')
        :param float risk_free_alpha: risk free val alpha - (statId: 52, default: 0)
        :param float risk_free_sharpe: risk free val sharpe - (statId: 49, default: 0)
        :param float risk_free_sortino: risk free val sortino - (statId: 56, default: 0)
        :param float risk_free_treynor: risk free val treynor - (statId: 51, default: 0)
        :param date start_date: start date
        :param str stat: Stat type - /statistics endpoint
        :param float var_conf_interval: VaR confidence interval (alpha), i.e. 99, 95, etc. - (statId: 40, default: 95)
:return: object
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_benchmark_performance_using_get_with_http_info(benchmark_id, **kwargs) # noqa: E501
else:
(data) = self.get_benchmark_performance_using_get_with_http_info(benchmark_id, **kwargs) # noqa: E501
            return data

    def get_benchmark_performance_using_get_with_http_info(self, benchmark_id, **kwargs): # noqa: E501
"""Benchmark Performance # noqa: E501
Get information on the performance of a benchmark using TWR (Time Weighted Return). You must provide the unique benchmark_id. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_benchmark_performance_using_get_with_http_info(benchmark_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str benchmark_id: Benchmark Id - /benchmark (required)
        :param str active_premium_period: Quarterly (Q), Monthly (M), Annually (Y), Daily (D) -- case matters; pass the code in parentheses - (statId: 19, default: 'D')
        :param str annualized_return_period: Quarterly (Q), Monthly (M), Annually (Y), Daily (D) -- case matters; pass the code in parentheses - (statId: 19, default: 'D')
        :param str comparison_benchmark_id: comparison_benchmark_id
        :param date end_date: end date
        :param float hist_factor: Histogram factor - (statId: 39, default: 5)
        :param float mar_down_side_deviation: minimum acceptable return for downside deviation - (statId: 58, default: 0)
        :param float max_percentile_monte_carlo: max percentile for Monte Carlo, i.e. 80 - (statId: 62, default: 95)
        :param float mean_percentile_monte_carlo: mean percentile for Monte Carlo, i.e. 50 - (statId: 62, default: 50)
        :param float min_percentile_monte_carlo: min percentile for Monte Carlo, i.e. 20 - (statId: 62, default: 5)
        :param int moving_average_n_day: number of days for moving average n-day - (statId: 18, default: 7)
        :param int n_day_returns: number of days for rolling n-day returns - (statId: 2, default: 7)
        :param int n_path_monte_carlo: number of points for a simulation - (statId: 62, default: 100)
        :param int n_rolling_max_drawdown: number of days for rolling n-day max drawdown - (statId: 46, default: 7)
        :param int n_rolling_volatility: number of days for rolling n-day volatility - (statId: 34, default: 7)
        :param int num_sim_monte_carlo: number of simulations - (statId: 62, default: 1000)
        :param str period_type: Quarterly (Q), Monthly (M), Annually (Y), Daily (D) -- case matters; pass the code in parentheses. Carries out stats on daily, monthly, quarterly or annual dates (default: 'D')
        :param float risk_free_alpha: risk free val alpha - (statId: 52, default: 0)
        :param float risk_free_sharpe: risk free val sharpe - (statId: 49, default: 0)
        :param float risk_free_sortino: risk free val sortino - (statId: 56, default: 0)
        :param float risk_free_treynor: risk free val treynor - (statId: 51, default: 0)
        :param date start_date: start date
        :param str stat: Stat type - /statistics endpoint
        :param float var_conf_interval: VaR confidence interval (alpha), i.e. 99, 95, etc. - (statId: 40, default: 95)
:return: object
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['benchmark_id', 'active_premium_period', 'annualized_return_period', 'comparison_benchmark_id', 'end_date', 'hist_factor', 'mar_down_side_deviation', 'max_percentile_monte_carlo', 'mean_percentile_monte_carlo', 'min_percentile_monte_carlo', 'moving_average_n_day', 'n_day_returns', 'n_path_monte_carlo', 'n_rolling_max_drawdown', 'n_rolling_volatility', 'num_sim_monte_carlo', 'period_type', 'risk_free_alpha', 'risk_free_sharpe', 'risk_free_sortino', 'risk_free_treynor', 'start_date', 'stat', 'var_conf_interval'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_benchmark_performance_using_get" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'benchmark_id' is set
if ('benchmark_id' not in params or
params['benchmark_id'] is None):
raise ValueError("Missing the required parameter `benchmark_id` when calling `get_benchmark_performance_using_get`") # noqa: E501
collection_formats = {}
path_params = {}
if 'benchmark_id' in params:
path_params['benchmark_id'] = params['benchmark_id'] # noqa: E501
query_params = []
if 'active_premium_period' in params:
query_params.append(('active_premium_period', params['active_premium_period'])) # noqa: E501
if 'annualized_return_period' in params:
query_params.append(('annualized_return_period', params['annualized_return_period'])) # noqa: E501
if 'comparison_benchmark_id' in params:
query_params.append(('comparison_benchmark_id', params['comparison_benchmark_id'])) # noqa: E501
if 'end_date' in params:
query_params.append(('end_date', params['end_date'])) # noqa: E501
if 'hist_factor' in params:
query_params.append(('hist_factor', params['hist_factor'])) # noqa: E501
if 'mar_down_side_deviation' in params:
query_params.append(('mar_down_side_deviation', params['mar_down_side_deviation'])) # noqa: E501
if 'max_percentile_monte_carlo' in params:
query_params.append(('max_percentile_monte_carlo', params['max_percentile_monte_carlo'])) # noqa: E501
if 'mean_percentile_monte_carlo' in params:
query_params.append(('mean_percentile_monte_carlo', params['mean_percentile_monte_carlo'])) # noqa: E501
if 'min_percentile_monte_carlo' in params:
query_params.append(('min_percentile_monte_carlo', params['min_percentile_monte_carlo'])) # noqa: E501
if 'moving_average_n_day' in params:
query_params.append(('moving_average_n_day', params['moving_average_n_day'])) # noqa: E501
if 'n_day_returns' in params:
query_params.append(('n_day_returns', params['n_day_returns'])) # noqa: E501
if 'n_path_monte_carlo' in params:
query_params.append(('n_path_monte_carlo', params['n_path_monte_carlo'])) # noqa: E501
if 'n_rolling_max_drawdown' in params:
query_params.append(('n_rolling_max_drawdown', params['n_rolling_max_drawdown'])) # noqa: E501
if 'n_rolling_volatility' in params:
query_params.append(('n_rolling_volatility', params['n_rolling_volatility'])) # noqa: E501
if 'num_sim_monte_carlo' in params:
query_params.append(('num_sim_monte_carlo', params['num_sim_monte_carlo'])) # noqa: E501
if 'period_type' in params:
query_params.append(('period_type', params['period_type'])) # noqa: E501
if 'risk_free_alpha' in params:
query_params.append(('risk_free_alpha', params['risk_free_alpha'])) # noqa: E501
if 'risk_free_sharpe' in params:
query_params.append(('risk_free_sharpe', params['risk_free_sharpe'])) # noqa: E501
if 'risk_free_sortino' in params:
query_params.append(('risk_free_sortino', params['risk_free_sortino'])) # noqa: E501
if 'risk_free_treynor' in params:
query_params.append(('risk_free_treynor', params['risk_free_treynor'])) # noqa: E501
if 'start_date' in params:
query_params.append(('start_date', params['start_date'])) # noqa: E501
if 'stat' in params:
query_params.append(('stat', params['stat'])) # noqa: E501
if 'var_conf_interval' in params:
query_params.append(('var_conf_interval', params['var_conf_interval'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['*/*']) # noqa: E501
# Authentication setting
auth_settings = ['oauth2'] # noqa: E501
return self.api_client.call_api(
'/benchmark/{benchmark_id}/performance', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='object', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def get_client_performance_using_get(self, client_id, **kwargs): # noqa: E501
"""Client Performance # noqa: E501
Get information on the performance of a client using IRR (Internal Rate of Return). You must provide the unique client_id. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_client_performance_using_get(client_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str client_id: Client Id -/client (required)
        :param str active_premium_period: Quarterly (Q), Monthly (M), Annually (Y), Daily (D) -- case matters; pass the code in parentheses - (statId: 19, default: 'D')
        :param str annualized_return_period: Quarterly (Q), Monthly (M), Annually (Y), Daily (D) -- case matters; pass the code in parentheses - (statId: 19, default: 'D')
        :param str benchmark_id: Client Benchmark or Tenant Benchmark id -/benchmark
        :param date end_date: end date
        :param float hist_factor: Histogram factor - (statId: 39, default: 5)
        :param float mar_down_side_deviation: minimum acceptable return for downside deviation - (statId: 58, default: 0)
        :param float max_percentile_monte_carlo: max percentile for Monte Carlo, i.e. 80 - (statId: 62, default: 95)
        :param float mean_percentile_monte_carlo: mean percentile for Monte Carlo, i.e. 50 - (statId: 62, default: 50)
        :param float min_percentile_monte_carlo: min percentile for Monte Carlo, i.e. 20 - (statId: 62, default: 5)
        :param int moving_average_n_day: number of days for moving average n-day - (statId: 18, default: 7)
        :param int n_day_returns: number of days for rolling n-day returns - (statId: 2, default: 7)
        :param int n_path_monte_carlo: number of points for a simulation - (statId: 62, default: 100)
        :param int n_rolling_max_drawdown: number of days for rolling n-day max drawdown - (statId: 46, default: 7)
        :param int n_rolling_volatility: number of days for rolling n-day volatility - (statId: 34, default: 7)
        :param int num_sim_monte_carlo: number of simulations - (statId: 62, default: 1000)
        :param str period_type: Quarterly (Q), Monthly (M), Annually (Y), Daily (D) -- case matters; pass the code in parentheses. Carries out stats on daily, monthly, quarterly or annual dates (default: 'D')
        :param float risk_free_alpha: risk free val alpha - (statId: 52, default: 0)
        :param float risk_free_sharpe: risk free val sharpe - (statId: 49, default: 0)
        :param float risk_free_sortino: risk free val sortino - (statId: 56, default: 0)
        :param float risk_free_treynor: risk free val treynor - (statId: 51, default: 0)
        :param date start_date: start date
        :param str stat: A stat type - /statistics
        :param float var_conf_interval: VaR confidence interval (alpha), i.e. 99, 95, etc. - (statId: 40, default: 95)
:return: object
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_client_performance_using_get_with_http_info(client_id, **kwargs) # noqa: E501
else:
(data) = self.get_client_performance_using_get_with_http_info(client_id, **kwargs) # noqa: E501
            return data

    def get_client_performance_using_get_with_http_info(self, client_id, **kwargs): # noqa: E501
"""Client Performance # noqa: E501
Get information on the performance of a client using IRR (Internal Rate of Return). You must provide the unique client_id. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_client_performance_using_get_with_http_info(client_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str client_id: Client Id -/client (required)
        :param str active_premium_period: Quarterly (Q), Monthly (M), Annually (Y), Daily (D) -- case matters; pass the code in parentheses - (statId: 19, default: 'D')
        :param str annualized_return_period: Quarterly (Q), Monthly (M), Annually (Y), Daily (D) -- case matters; pass the code in parentheses - (statId: 19, default: 'D')
        :param str benchmark_id: Client Benchmark or Tenant Benchmark id -/benchmark
        :param date end_date: end date
        :param float hist_factor: Histogram factor - (statId: 39, default: 5)
        :param float mar_down_side_deviation: minimum acceptable return for downside deviation - (statId: 58, default: 0)
        :param float max_percentile_monte_carlo: max percentile for Monte Carlo, i.e. 80 - (statId: 62, default: 95)
        :param float mean_percentile_monte_carlo: mean percentile for Monte Carlo, i.e. 50 - (statId: 62, default: 50)
        :param float min_percentile_monte_carlo: min percentile for Monte Carlo, i.e. 20 - (statId: 62, default: 5)
        :param int moving_average_n_day: number of days for moving average n-day - (statId: 18, default: 7)
        :param int n_day_returns: number of days for rolling n-day returns - (statId: 2, default: 7)
        :param int n_path_monte_carlo: number of points for a simulation - (statId: 62, default: 100)
        :param int n_rolling_max_drawdown: number of days for rolling n-day max drawdown - (statId: 46, default: 7)
        :param int n_rolling_volatility: number of days for rolling n-day volatility - (statId: 34, default: 7)
        :param int num_sim_monte_carlo: number of simulations - (statId: 62, default: 1000)
        :param str period_type: Quarterly (Q), Monthly (M), Annually (Y), Daily (D) -- case matters; pass the code in parentheses. Carries out stats on daily, monthly, quarterly or annual dates (default: 'D')
        :param float risk_free_alpha: risk free val alpha - (statId: 52, default: 0)
        :param float risk_free_sharpe: risk free val sharpe - (statId: 49, default: 0)
        :param float risk_free_sortino: risk free val sortino - (statId: 56, default: 0)
        :param float risk_free_treynor: risk free val treynor - (statId: 51, default: 0)
        :param date start_date: start date
        :param str stat: A stat type - /statistics
        :param float var_conf_interval: VaR confidence interval (alpha), i.e. 99, 95, etc. - (statId: 40, default: 95)
:return: object
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['client_id', 'active_premium_period', 'annualized_return_period', 'benchmark_id', 'end_date', 'hist_factor', 'mar_down_side_deviation', 'max_percentile_monte_carlo', 'mean_percentile_monte_carlo', 'min_percentile_monte_carlo', 'moving_average_n_day', 'n_day_returns', 'n_path_monte_carlo', 'n_rolling_max_drawdown', 'n_rolling_volatility', 'num_sim_monte_carlo', 'period_type', 'risk_free_alpha', 'risk_free_sharpe', 'risk_free_sortino', 'risk_free_treynor', 'start_date', 'stat', 'var_conf_interval'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_client_performance_using_get" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'client_id' is set
if ('client_id' not in params or
params['client_id'] is None):
raise ValueError("Missing the required parameter `client_id` when calling `get_client_performance_using_get`") # noqa: E501
collection_formats = {}
path_params = {}
if 'client_id' in params:
path_params['client_id'] = params['client_id'] # noqa: E501
query_params = []
if 'active_premium_period' in params:
query_params.append(('active_premium_period', params['active_premium_period'])) # noqa: E501
if 'annualized_return_period' in params:
query_params.append(('annualized_return_period', params['annualized_return_period'])) # noqa: E501
if 'benchmark_id' in params:
query_params.append(('benchmark_id', params['benchmark_id'])) # noqa: E501
if 'end_date' in params:
query_params.append(('end_date', params['end_date'])) # noqa: E501
if 'hist_factor' in params:
query_params.append(('hist_factor', params['hist_factor'])) # noqa: E501
if 'mar_down_side_deviation' in params:
query_params.append(('mar_down_side_deviation', params['mar_down_side_deviation'])) # noqa: E501
if 'max_percentile_monte_carlo' in params:
query_params.append(('max_percentile_monte_carlo', params['max_percentile_monte_carlo'])) # noqa: E501
if 'mean_percentile_monte_carlo' in params:
query_params.append(('mean_percentile_monte_carlo', params['mean_percentile_monte_carlo'])) # noqa: E501
if 'min_percentile_monte_carlo' in params:
query_params.append(('min_percentile_monte_carlo', params['min_percentile_monte_carlo'])) # noqa: E501
if 'moving_average_n_day' in params:
query_params.append(('moving_average_n_day', params['moving_average_n_day'])) # noqa: E501
if 'n_day_returns' in params:
query_params.append(('n_day_returns', params['n_day_returns'])) # noqa: E501
if 'n_path_monte_carlo' in params:
query_params.append(('n_path_monte_carlo', params['n_path_monte_carlo'])) # noqa: E501
if 'n_rolling_max_drawdown' in params:
query_params.append(('n_rolling_max_drawdown', params['n_rolling_max_drawdown'])) # noqa: E501
if 'n_rolling_volatility' in params:
query_params.append(('n_rolling_volatility', params['n_rolling_volatility'])) # noqa: E501
if 'num_sim_monte_carlo' in params:
query_params.append(('num_sim_monte_carlo', params['num_sim_monte_carlo'])) # noqa: E501
if 'period_type' in params:
query_params.append(('period_type', params['period_type'])) # noqa: E501
if 'risk_free_alpha' in params:
query_params.append(('risk_free_alpha', params['risk_free_alpha'])) # noqa: E501
if 'risk_free_sharpe' in params:
query_params.append(('risk_free_sharpe', params['risk_free_sharpe'])) # noqa: E501
if 'risk_free_sortino' in params:
query_params.append(('risk_free_sortino', params['risk_free_sortino'])) # noqa: E501
if 'risk_free_treynor' in params:
query_params.append(('risk_free_treynor', params['risk_free_treynor'])) # noqa: E501
if 'start_date' in params:
query_params.append(('start_date', params['start_date'])) # noqa: E501
if 'stat' in params:
query_params.append(('stat', params['stat'])) # noqa: E501
if 'var_conf_interval' in params:
query_params.append(('var_conf_interval', params['var_conf_interval'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['*/*']) # noqa: E501
# Authentication setting
auth_settings = ['oauth2'] # noqa: E501
return self.api_client.call_api(
'/client/{client_id}/performance', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='object', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def get_goal_performance_using_get(self, client_id, goal_id, **kwargs): # noqa: E501
"""Goal Performance # noqa: E501
Get information on the performance of a goal using IRR (Internal Rate of Return). You must provide the unique goal_id. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_goal_performance_using_get(client_id, goal_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str client_id: Client associated with the account - /client (required)
        :param str goal_id: Goal Id - /goal (required)
        :param str active_premium_period: Quarterly (Q), Monthly (M), Annually (Y), Daily (D) -- case matters; pass the code in parentheses - (statId: 19, default: 'D')
        :param str annualized_return_period: Quarterly (Q), Monthly (M), Annually (Y), Daily (D) -- case matters; pass the code in parentheses - (statId: 19, default: 'D')
        :param str benchmark_id: Client Benchmark or Tenant Benchmark id -/benchmark
        :param date end_date: end date
        :param float hist_factor: Histogram factor - (statId: 39, default: 5)
        :param float mar_down_side_deviation: minimum acceptable return for downside deviation - (statId: 58, default: 0)
        :param float max_percentile_monte_carlo: max percentile for Monte Carlo, i.e. 80 - (statId: 62, default: 95)
        :param float mean_percentile_monte_carlo: mean percentile for Monte Carlo, i.e. 50 - (statId: 62, default: 50)
        :param float min_percentile_monte_carlo: min percentile for Monte Carlo, i.e. 20 - (statId: 62, default: 5)
        :param int moving_average_n_day: number of days for moving average n-day - (statId: 18, default: 7)
        :param int n_day_returns: number of days for rolling n-day returns - (statId: 2, default: 7)
        :param int n_path_monte_carlo: number of points for a simulation - (statId: 62, default: 100)
        :param int n_rolling_max_drawdown: number of days for rolling n-day max drawdown - (statId: 46, default: 7)
        :param int n_rolling_volatility: number of days for rolling n-day volatility - (statId: 34, default: 7)
        :param int num_sim_monte_carlo: number of simulations - (statId: 62, default: 1000)
        :param str period_type: Quarterly (Q), Monthly (M), Annually (Y), Daily (D) -- case matters; pass the code in parentheses. Carries out stats on daily, monthly, quarterly or annual dates (default: 'D')
        :param bool portfolio_goal: portfolio_goal
        :param float risk_free_alpha: risk free val alpha - (statId: 52, default: 0)
        :param float risk_free_sharpe: risk free val sharpe - (statId: 49, default: 0)
        :param float risk_free_sortino: risk free val sortino - (statId: 56, default: 0)
        :param float risk_free_treynor: risk free val treynor - (statId: 51, default: 0)
        :param date start_date: start date
        :param str stat: A stat type - /statistics
        :param float var_conf_interval: VaR confidence interval (alpha), i.e. 99, 95, etc. - (statId: 40, default: 95)
:return: object
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_goal_performance_using_get_with_http_info(client_id, goal_id, **kwargs) # noqa: E501
else:
(data) = self.get_goal_performance_using_get_with_http_info(client_id, goal_id, **kwargs) # noqa: E501
            return data

    def get_goal_performance_using_get_with_http_info(self, client_id, goal_id, **kwargs): # noqa: E501
"""Goal Performance # noqa: E501
Get information on the performance of a goal using IRR (Internal Rate of Return). You must provide the unique goal_id. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_goal_performance_using_get_with_http_info(client_id, goal_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str client_id: Client associated with the account - /client (required)
        :param str goal_id: Goal Id - /goal (required)
        :param str active_premium_period: Quarterly (Q), Monthly (M), Annually (Y), Daily (D) -- case matters; pass the code in parentheses - (statId: 19, default: 'D')
        :param str annualized_return_period: Quarterly (Q), Monthly (M), Annually (Y), Daily (D) -- case matters; pass the code in parentheses - (statId: 19, default: 'D')
        :param str benchmark_id: Client Benchmark or Tenant Benchmark id -/benchmark
        :param date end_date: end date
        :param float hist_factor: Histogram factor - (statId: 39, default: 5)
        :param float mar_down_side_deviation: minimum acceptable return for downside deviation - (statId: 58, default: 0)
        :param float max_percentile_monte_carlo: max percentile for Monte Carlo, i.e. 80 - (statId: 62, default: 95)
        :param float mean_percentile_monte_carlo: mean percentile for Monte Carlo, i.e. 50 - (statId: 62, default: 50)
        :param float min_percentile_monte_carlo: min percentile for Monte Carlo, i.e. 20 - (statId: 62, default: 5)
        :param int moving_average_n_day: number of days for moving average n-day - (statId: 18, default: 7)
        :param int n_day_returns: number of days for rolling n-day returns - (statId: 2, default: 7)
        :param int n_path_monte_carlo: number of points for a simulation - (statId: 62, default: 100)
        :param int n_rolling_max_drawdown: number of days for rolling n-day max drawdown - (statId: 46, default: 7)
        :param int n_rolling_volatility: number of days for rolling n-day volatility - (statId: 34, default: 7)
        :param int num_sim_monte_carlo: number of simulations - (statId: 62, default: 1000)
        :param str period_type: Quarterly (Q), Monthly (M), Annually (Y), Daily (D) -- case matters; pass the code in parentheses. Carries out stats on daily, monthly, quarterly or annual dates (default: 'D')
        :param bool portfolio_goal: portfolio_goal
        :param float risk_free_alpha: risk free val alpha - (statId: 52, default: 0)
        :param float risk_free_sharpe: risk free val sharpe - (statId: 49, default: 0)
        :param float risk_free_sortino: risk free val sortino - (statId: 56, default: 0)
        :param float risk_free_treynor: risk free val treynor - (statId: 51, default: 0)
        :param date start_date: start date
        :param str stat: A stat type - /statistics
        :param float var_conf_interval: VaR confidence interval (alpha), i.e. 99, 95, etc. - (statId: 40, default: 95)
:return: object
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['client_id', 'goal_id', 'active_premium_period', 'annualized_return_period', 'benchmark_id', 'end_date', 'hist_factor', 'mar_down_side_deviation', 'max_percentile_monte_carlo', 'mean_percentile_monte_carlo', 'min_percentile_monte_carlo', 'moving_average_n_day', 'n_day_returns', 'n_path_monte_carlo', 'n_rolling_max_drawdown', 'n_rolling_volatility', 'num_sim_monte_carlo', 'period_type', 'portfolio_goal', 'risk_free_alpha', 'risk_free_sharpe', 'risk_free_sortino', 'risk_free_treynor', 'start_date', 'stat', 'var_conf_interval'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_goal_performance_using_get" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'client_id' is set
if ('client_id' not in params or
params['client_id'] is None):
raise ValueError("Missing the required parameter `client_id` when calling `get_goal_performance_using_get`") # noqa: E501
# verify the required parameter 'goal_id' is set
if ('goal_id' not in params or
params['goal_id'] is None):
raise ValueError("Missing the required parameter `goal_id` when calling `get_goal_performance_using_get`") # noqa: E501
collection_formats = {}
path_params = {}
if 'client_id' in params:
path_params['client_id'] = params['client_id'] # noqa: E501
if 'goal_id' in params:
path_params['goal_id'] = params['goal_id'] # noqa: E501
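        # Note: the URL template below contains only `{goal_id}`; the
        # `client_id` entry has no matching placeholder, so it does not appear
        # in the request path.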
query_params = []
if 'active_premium_period' in params:
query_params.append(('active_premium_period', params['active_premium_period'])) # noqa: E501
if 'annualized_return_period' in params:
query_params.append(('annualized_return_period', params['annualized_return_period'])) # noqa: E501
if 'benchmark_id' in params:
query_params.append(('benchmark_id', params['benchmark_id'])) # noqa: E501
if 'end_date' in params:
query_params.append(('end_date', params['end_date'])) # noqa: E501
if 'hist_factor' in params:
query_params.append(('hist_factor', params['hist_factor'])) # noqa: E501
if 'mar_down_side_deviation' in params:
query_params.append(('mar_down_side_deviation', params['mar_down_side_deviation'])) # noqa: E501
if 'max_percentile_monte_carlo' in params:
query_params.append(('max_percentile_monte_carlo', params['max_percentile_monte_carlo'])) # noqa: E501
if 'mean_percentile_monte_carlo' in params:
query_params.append(('mean_percentile_monte_carlo', params['mean_percentile_monte_carlo'])) # noqa: E501
if 'min_percentile_monte_carlo' in params:
query_params.append(('min_percentile_monte_carlo', params['min_percentile_monte_carlo'])) # noqa: E501
if 'moving_average_n_day' in params:
query_params.append(('moving_average_n_day', params['moving_average_n_day'])) # noqa: E501
if 'n_day_returns' in params:
query_params.append(('n_day_returns', params['n_day_returns'])) # noqa: E501
if 'n_path_monte_carlo' in params:
query_params.append(('n_path_monte_carlo', params['n_path_monte_carlo'])) # noqa: E501
if 'n_rolling_max_drawdown' in params:
query_params.append(('n_rolling_max_drawdown', params['n_rolling_max_drawdown'])) # noqa: E501
if 'n_rolling_volatility' in params:
query_params.append(('n_rolling_volatility', params['n_rolling_volatility'])) # noqa: E501
if 'num_sim_monte_carlo' in params:
query_params.append(('num_sim_monte_carlo', params['num_sim_monte_carlo'])) # noqa: E501
if 'period_type' in params:
query_params.append(('period_type', params['period_type'])) # noqa: E501
if 'portfolio_goal' in params:
query_params.append(('portfolio_goal', params['portfolio_goal'])) # noqa: E501
if 'risk_free_alpha' in params:
query_params.append(('risk_free_alpha', params['risk_free_alpha'])) # noqa: E501
if 'risk_free_sharpe' in params:
query_params.append(('risk_free_sharpe', params['risk_free_sharpe'])) # noqa: E501
if 'risk_free_sortino' in params:
query_params.append(('risk_free_sortino', params['risk_free_sortino'])) # noqa: E501
if 'risk_free_treynor' in params:
query_params.append(('risk_free_treynor', params['risk_free_treynor'])) # noqa: E501
if 'start_date' in params:
query_params.append(('start_date', params['start_date'])) # noqa: E501
if 'stat' in params:
query_params.append(('stat', params['stat'])) # noqa: E501
if 'var_conf_interval' in params:
query_params.append(('var_conf_interval', params['var_conf_interval'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['*/*']) # noqa: E501
# Authentication setting
auth_settings = ['oauth2'] # noqa: E501
return self.api_client.call_api(
'/goal/{goal_id}/performance', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='object', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def get_model_performance_using_get(self, model_id, **kwargs): # noqa: E501
"""Model Performance # noqa: E501
Get information on the performance of a model using TWR (Time Weighted Return). You must provide the unique model_id. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_model_performance_using_get(model_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str model_id: Model Id - /model (required)
        :param str active_premium_period: Quarterly (Q), Monthly (M), Annually (Y), Daily (D) -- case matters; pass the code in parentheses - (statId: 19, default: 'D')
        :param str annualized_return_period: Quarterly (Q), Monthly (M), Annually (Y), Daily (D) -- case matters; pass the code in parentheses - (statId: 19, default: 'D')
        :param str benchmark_id: Tenant Benchmark Id -/benchmark
        :param date end_date: end date
        :param float hist_factor: Histogram factor - (statId: 39, default: 5)
        :param float mar_down_side_deviation: minimum acceptable return for downside deviation - (statId: 58, default: 0)
        :param float max_percentile_monte_carlo: max percentile for Monte Carlo, i.e. 80 - (statId: 62, default: 95)
        :param float mean_percentile_monte_carlo: mean percentile for Monte Carlo, i.e. 50 - (statId: 62, default: 50)
        :param float min_percentile_monte_carlo: min percentile for Monte Carlo, i.e. 20 - (statId: 62, default: 5)
        :param int moving_average_n_day: number of days for moving average n-day - (statId: 18, default: 7)
        :param int n_day_returns: number of days for rolling n-day returns - (statId: 2, default: 7)
        :param int n_path_monte_carlo: number of points for a simulation - (statId: 62, default: 100)
        :param int n_rolling_max_drawdown: number of days for rolling n-day max drawdown - (statId: 46, default: 7)
        :param int n_rolling_volatility: number of days for rolling n-day volatility - (statId: 34, default: 7)
        :param int num_sim_monte_carlo: number of simulations - (statId: 62, default: 1000)
        :param str period_type: Quarterly (Q), Monthly (M), Annually (Y), Daily (D) -- case matters; pass the code in parentheses. Carries out stats on daily, monthly, quarterly or annual dates (default: 'D')
        :param float risk_free_alpha: risk free val alpha - (statId: 52, default: 0)
        :param float risk_free_sharpe: risk free val sharpe - (statId: 49, default: 0)
        :param float risk_free_sortino: risk free val sortino - (statId: 56, default: 0)
        :param float risk_free_treynor: risk free val treynor - (statId: 51, default: 0)
        :param date start_date: start date
        :param str stat: Stat Type
        :param float var_conf_interval: VaR confidence interval (alpha), i.e. 99, 95, etc. - (statId: 40, default: 95)
:return: object
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_model_performance_using_get_with_http_info(model_id, **kwargs) # noqa: E501
else:
(data) = self.get_model_performance_using_get_with_http_info(model_id, **kwargs) # noqa: E501
            return data

    def get_model_performance_using_get_with_http_info(self, model_id, **kwargs): # noqa: E501
"""Model Performance # noqa: E501
Get information on the performance of a model using TWR (Time Weighted Return). You must provide the unique model_id. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_model_performance_using_get_with_http_info(model_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str model_id: Model Id - /model (required)
        :param str active_premium_period: Quarterly (Q), Monthly (M), Annually (Y), Daily (D) -- case matters; pass the code in parentheses - (statId: 19, default: 'D')
        :param str annualized_return_period: Quarterly (Q), Monthly (M), Annually (Y), Daily (D) -- case matters; pass the code in parentheses - (statId: 19, default: 'D')
        :param str benchmark_id: Tenant Benchmark Id -/benchmark
        :param date end_date: end date
        :param float hist_factor: Histogram factor - (statId: 39, default: 5)
        :param float mar_down_side_deviation: minimum acceptable return for downside deviation - (statId: 58, default: 0)
        :param float max_percentile_monte_carlo: max percentile for Monte Carlo, i.e. 80 - (statId: 62, default: 95)
        :param float mean_percentile_monte_carlo: mean percentile for Monte Carlo, i.e. 50 - (statId: 62, default: 50)
        :param float min_percentile_monte_carlo: min percentile for Monte Carlo, i.e. 20 - (statId: 62, default: 5)
        :param int moving_average_n_day: number of days for moving average n-day - (statId: 18, default: 7)
        :param int n_day_returns: number of days for rolling n-day returns - (statId: 2, default: 7)
        :param int n_path_monte_carlo: number of points for a simulation - (statId: 62, default: 100)
        :param int n_rolling_max_drawdown: number of days for rolling n-day max drawdown - (statId: 46, default: 7)
        :param int n_rolling_volatility: number of days for rolling n-day volatility - (statId: 34, default: 7)
        :param int num_sim_monte_carlo: number of simulations - (statId: 62, default: 1000)
        :param str period_type: Quarterly (Q), Monthly (M), Annually (Y), Daily (D) -- case matters; pass the code in parentheses. Carries out stats on daily, monthly, quarterly or annual dates (default: 'D')
        :param float risk_free_alpha: risk free val alpha - (statId: 52, default: 0)
        :param float risk_free_sharpe: risk free val sharpe - (statId: 49, default: 0)
        :param float risk_free_sortino: risk free val sortino - (statId: 56, default: 0)
        :param float risk_free_treynor: risk free val treynor - (statId: 51, default: 0)
        :param date start_date: start date
        :param str stat: Stat Type
        :param float var_conf_interval: VaR confidence interval (alpha), i.e. 99, 95, etc. - (statId: 40, default: 95)
:return: object
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['model_id', 'active_premium_period', 'annualized_return_period', 'benchmark_id', 'end_date', 'hist_factor', 'mar_down_side_deviation', 'max_percentile_monte_carlo', 'mean_percentile_monte_carlo', 'min_percentile_monte_carlo', 'moving_average_n_day', 'n_day_returns', 'n_path_monte_carlo', 'n_rolling_max_drawdown', 'n_rolling_volatility', 'num_sim_monte_carlo', 'period_type', 'risk_free_alpha', 'risk_free_sharpe', 'risk_free_sortino', 'risk_free_treynor', 'start_date', 'stat', 'var_conf_interval'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_model_performance_using_get" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'model_id' is set
if ('model_id' not in params or
params['model_id'] is None):
raise ValueError("Missing the required parameter `model_id` when calling `get_model_performance_using_get`") # noqa: E501
collection_formats = {}
path_params = {}
if 'model_id' in params:
path_params['model_id'] = params['model_id'] # noqa: E501
query_params = []
if 'active_premium_period' in params:
query_params.append(('active_premium_period', params['active_premium_period'])) # noqa: E501
if 'annualized_return_period' in params:
query_params.append(('annualized_return_period', params['annualized_return_period'])) # noqa: E501
if 'benchmark_id' in params:
query_params.append(('benchmark_id', params['benchmark_id'])) # noqa: E501
if 'end_date' in params:
query_params.append(('end_date', params['end_date'])) # noqa: E501
if 'hist_factor' in params:
query_params.append(('hist_factor', params['hist_factor'])) # noqa: E501
if 'mar_down_side_deviation' in params:
query_params.append(('mar_down_side_deviation', params['mar_down_side_deviation'])) # noqa: E501
if 'max_percentile_monte_carlo' in params:
query_params.append(('max_percentile_monte_carlo', params['max_percentile_monte_carlo'])) # noqa: E501
if 'mean_percentile_monte_carlo' in params:
query_params.append(('mean_percentile_monte_carlo', params['mean_percentile_monte_carlo'])) # noqa: E501
if 'min_percentile_monte_carlo' in params:
query_params.append(('min_percentile_monte_carlo', params['min_percentile_monte_carlo'])) # noqa: E501
if 'moving_average_n_day' in params:
query_params.append(('moving_average_n_day', params['moving_average_n_day'])) # noqa: E501
if 'n_day_returns' in params:
query_params.append(('n_day_returns', params['n_day_returns'])) # noqa: E501
if 'n_path_monte_carlo' in params:
query_params.append(('n_path_monte_carlo', params['n_path_monte_carlo'])) # noqa: E501
if 'n_rolling_max_drawdown' in params:
query_params.append(('n_rolling_max_drawdown', params['n_rolling_max_drawdown'])) # noqa: E501
if 'n_rolling_volatility' in params:
query_params.append(('n_rolling_volatility', params['n_rolling_volatility'])) # noqa: E501
if 'num_sim_monte_carlo' in params:
query_params.append(('num_sim_monte_carlo', params['num_sim_monte_carlo'])) # noqa: E501
if 'period_type' in params:
query_params.append(('period_type', params['period_type'])) # noqa: E501
if 'risk_free_alpha' in params:
query_params.append(('risk_free_alpha', params['risk_free_alpha'])) # noqa: E501
if 'risk_free_sharpe' in params:
query_params.append(('risk_free_sharpe', params['risk_free_sharpe'])) # noqa: E501
if 'risk_free_sortino' in params:
query_params.append(('risk_free_sortino', params['risk_free_sortino'])) # noqa: E501
if 'risk_free_treynor' in params:
query_params.append(('risk_free_treynor', params['risk_free_treynor'])) # noqa: E501
if 'start_date' in params:
query_params.append(('start_date', params['start_date'])) # noqa: E501
if 'stat' in params:
query_params.append(('stat', params['stat'])) # noqa: E501
if 'var_conf_interval' in params:
query_params.append(('var_conf_interval', params['var_conf_interval'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['*/*']) # noqa: E501
# Authentication setting
auth_settings = ['oauth2'] # noqa: E501
return self.api_client.call_api(
'/model/{model_id}/performance', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='object', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
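# Illustrative usage sketch (an assumption, not part of the generated client):
# with `api` an already-configured instance of this class, the model
# performance endpoint above can be exercised with Monte Carlo overrides --
#
# stats = api.get_model_performance_using_get(
#     'MODEL_ID',
#     num_sim_monte_carlo=500,   # statId 62 override
#     period_type='M',           # monthly stats
#     var_conf_interval=99)
#
# The asynchronous variant returns a thread instead:
# thread = api.get_model_performance_using_get('MODEL_ID', async_req=True)
# stats = thread.get()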
def get_portfolio_performance_using_get(self, account_id, client_id, portfolio_id, portfolioid, **kwargs): # noqa: E501
"""Portfolio Performance # noqa: E501
Get information on the performance of a portfolio using IRR (Internal Rate of Return). You must provide the unique portfolio_id. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_portfolio_performance_using_get(account_id, client_id, portfolio_id, portfolioid, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str account_id: Account Id -/account (required)
:param str client_id: Client Id -/client (required)
:param str portfolio_id: portfolio_id (required)
:param str portfolioid: Portfolio Id -/portfolio (required)
:param str active_premium_period: Quarterly (Q), Monthly (M), Annually (Y), Daily (D) -- caps matter, codes in () - (statId: 19, default: 'D')
:param str annualized_return_period: Quarterly (Q), Monthly (M), Annually (Y), Daily (D) -- caps matter, codes in () - (statId: 19, default: 'D')
:param str benchmark_id: Benchmark Id - benchmarkId or clientBenchmarkId -/benchmark
:param date end_date: end date
:param float hist_factor: Histogram factor - (statId: 39, default: 5)
:param float mar_down_side_deviation: minimum acceptable return for downside deviation - (statId: 58, default: 0)
:param float max_percentile_monte_carlo: max percentile for monte carlo, i.e. 80 - (statId: 62, default: 95)
:param float mean_percentile_monte_carlo: mean percentile for monte carlo, i.e. 50 - (statId: 62, default: 50)
:param float min_percentile_monte_carlo: min percentile for monte carlo, i.e. 20 - (statId: 62, default: 5)
:param int moving_average_n_day: number of days for moving average n-day - (statId: 18, default: 7)
:param int n_day_returns: number of days for Rolling n-day returns - (statId: 2, default: 7)
:param int n_path_monte_carlo: number of points for a simulation - (statId: 62, default: 100)
:param int n_rolling_max_drawdown: number of days for Rolling n-day max drawdown - (statId: 46, default: 7)
:param int n_rolling_volatility: number of days for Rolling n-day volatility - (statId: 34, default: 7)
:param int num_sim_monte_carlo: number of simulations - (statId: 62, default: 1000)
:param str period_type: Quarterly (Q), Monthly (M), Annually (Y), Daily (D) -- caps matter, codes in () - carries out stats on daily, monthly, quarterly, or annual dates (default: 'D')
:param float risk_free_alpha: risk-free value for alpha - (statId: 52, default: 0)
:param float risk_free_sharpe: risk-free value for sharpe - (statId: 49, default: 0)
:param float risk_free_sortino: risk-free value for sortino - (statId: 56, default: 0)
:param float risk_free_treynor: risk-free value for treynor - (statId: 51, default: 0)
:param date start_date: start date
:param str stat: A stat type - /statistics endpoint to get types
:param float var_conf_interval: VaR Confidence Interval (alpha), i.e. 99, 95, etc. - (statId: 40, default: 95)
:return: object
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_portfolio_performance_using_get_with_http_info(account_id, client_id, portfolio_id, portfolioid, **kwargs) # noqa: E501
else:
(data) = self.get_portfolio_performance_using_get_with_http_info(account_id, client_id, portfolio_id, portfolioid, **kwargs) # noqa: E501
return data
def get_portfolio_performance_using_get_with_http_info(self, account_id, client_id, portfolio_id, portfolioid, **kwargs): # noqa: E501
"""Portfolio Performance # noqa: E501
Get information on the performance of a portfolio using IRR (Internal Rate of Return). You must provide the unique portfolio_id. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_portfolio_performance_using_get_with_http_info(account_id, client_id, portfolio_id, portfolioid, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str account_id: Account Id -/account (required)
:param str client_id: Client Id -/client (required)
:param str portfolio_id: portfolio_id (required)
:param str portfolioid: Portfolio Id -/portfolio (required)
:param str active_premium_period: Quarterly (Q), Monthly (M), Annually (Y), Daily (D) -- caps matter, codes in () - (statId: 19, default: 'D')
:param str annualized_return_period: Quarterly (Q), Monthly (M), Annually (Y), Daily (D) -- caps matter, codes in () - (statId: 19, default: 'D')
:param str benchmark_id: Benchmark Id - benchmarkId or clientBenchmarkId -/benchmark
:param date end_date: end date
:param float hist_factor: Histogram factor - (statId: 39, default: 5)
:param float mar_down_side_deviation: minimum acceptable return for downside deviation - (statId: 58, default: 0)
:param float max_percentile_monte_carlo: max percentile for monte carlo, i.e. 80 - (statId: 62, default: 95)
:param float mean_percentile_monte_carlo: mean percentile for monte carlo, i.e. 50 - (statId: 62, default: 50)
:param float min_percentile_monte_carlo: min percentile for monte carlo, i.e. 20 - (statId: 62, default: 5)
:param int moving_average_n_day: number of days for moving average n-day - (statId: 18, default: 7)
:param int n_day_returns: number of days for Rolling n-day returns - (statId: 2, default: 7)
:param int n_path_monte_carlo: number of points for a simulation - (statId: 62, default: 100)
:param int n_rolling_max_drawdown: number of days for Rolling n-day max drawdown - (statId: 46, default: 7)
:param int n_rolling_volatility: number of days for Rolling n-day volatility - (statId: 34, default: 7)
:param int num_sim_monte_carlo: number of simulations - (statId: 62, default: 1000)
:param str period_type: Quarterly (Q), Monthly (M), Annually (Y), Daily (D) -- caps matter, codes in () - carries out stats on daily, monthly, quarterly, or annual dates (default: 'D')
:param float risk_free_alpha: risk-free value for alpha - (statId: 52, default: 0)
:param float risk_free_sharpe: risk-free value for sharpe - (statId: 49, default: 0)
:param float risk_free_sortino: risk-free value for sortino - (statId: 56, default: 0)
:param float risk_free_treynor: risk-free value for treynor - (statId: 51, default: 0)
:param date start_date: start date
:param str stat: A stat type - /statistics endpoint to get types
:param float var_conf_interval: VaR Confidence Interval (alpha), i.e. 99, 95, etc. - (statId: 40, default: 95)
:return: object
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['account_id', 'client_id', 'portfolio_id', 'portfolioid', 'active_premium_period', 'annualized_return_period', 'benchmark_id', 'end_date', 'hist_factor', 'mar_down_side_deviation', 'max_percentile_monte_carlo', 'mean_percentile_monte_carlo', 'min_percentile_monte_carlo', 'moving_average_n_day', 'n_day_returns', 'n_path_monte_carlo', 'n_rolling_max_drawdown', 'n_rolling_volatility', 'num_sim_monte_carlo', 'period_type', 'risk_free_alpha', 'risk_free_sharpe', 'risk_free_sortino', 'risk_free_treynor', 'start_date', 'stat', 'var_conf_interval'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_portfolio_performance_using_get" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'account_id' is set
if ('account_id' not in params or
params['account_id'] is None):
raise ValueError("Missing the required parameter `account_id` when calling `get_portfolio_performance_using_get`") # noqa: E501
# verify the required parameter 'client_id' is set
if ('client_id' not in params or
params['client_id'] is None):
raise ValueError("Missing the required parameter `client_id` when calling `get_portfolio_performance_using_get`") # noqa: E501
# verify the required parameter 'portfolio_id' is set
if ('portfolio_id' not in params or
params['portfolio_id'] is None):
raise ValueError("Missing the required parameter `portfolio_id` when calling `get_portfolio_performance_using_get`") # noqa: E501
# verify the required parameter 'portfolioid' is set
if ('portfolioid' not in params or
params['portfolioid'] is None):
raise ValueError("Missing the required parameter `portfolioid` when calling `get_portfolio_performance_using_get`") # noqa: E501
collection_formats = {}
path_params = {}
if 'account_id' in params:
path_params['account_id'] = params['account_id'] # noqa: E501
if 'client_id' in params:
path_params['client_id'] = params['client_id'] # noqa: E501
if 'portfolio_id' in params:
path_params['portfolio_id'] = params['portfolio_id'] # noqa: E501
if 'portfolioid' in params:
path_params['portfolioid'] = params['portfolioid'] # noqa: E501
query_params = []
if 'active_premium_period' in params:
query_params.append(('active_premium_period', params['active_premium_period'])) # noqa: E501
if 'annualized_return_period' in params:
query_params.append(('annualized_return_period', params['annualized_return_period'])) # noqa: E501
if 'benchmark_id' in params:
query_params.append(('benchmark_id', params['benchmark_id'])) # noqa: E501
if 'end_date' in params:
query_params.append(('end_date', params['end_date'])) # noqa: E501
if 'hist_factor' in params:
query_params.append(('hist_factor', params['hist_factor'])) # noqa: E501
if 'mar_down_side_deviation' in params:
query_params.append(('mar_down_side_deviation', params['mar_down_side_deviation'])) # noqa: E501
if 'max_percentile_monte_carlo' in params:
query_params.append(('max_percentile_monte_carlo', params['max_percentile_monte_carlo'])) # noqa: E501
if 'mean_percentile_monte_carlo' in params:
query_params.append(('mean_percentile_monte_carlo', params['mean_percentile_monte_carlo'])) # noqa: E501
if 'min_percentile_monte_carlo' in params:
query_params.append(('min_percentile_monte_carlo', params['min_percentile_monte_carlo'])) # noqa: E501
if 'moving_average_n_day' in params:
query_params.append(('moving_average_n_day', params['moving_average_n_day'])) # noqa: E501
if 'n_day_returns' in params:
query_params.append(('n_day_returns', params['n_day_returns'])) # noqa: E501
if 'n_path_monte_carlo' in params:
query_params.append(('n_path_monte_carlo', params['n_path_monte_carlo'])) # noqa: E501
if 'n_rolling_max_drawdown' in params:
query_params.append(('n_rolling_max_drawdown', params['n_rolling_max_drawdown'])) # noqa: E501
if 'n_rolling_volatility' in params:
query_params.append(('n_rolling_volatility', params['n_rolling_volatility'])) # noqa: E501
if 'num_sim_monte_carlo' in params:
query_params.append(('num_sim_monte_carlo', params['num_sim_monte_carlo'])) # noqa: E501
if 'period_type' in params:
query_params.append(('period_type', params['period_type'])) # noqa: E501
if 'risk_free_alpha' in params:
query_params.append(('risk_free_alpha', params['risk_free_alpha'])) # noqa: E501
if 'risk_free_sharpe' in params:
query_params.append(('risk_free_sharpe', params['risk_free_sharpe'])) # noqa: E501
if 'risk_free_sortino' in params:
query_params.append(('risk_free_sortino', params['risk_free_sortino'])) # noqa: E501
if 'risk_free_treynor' in params:
query_params.append(('risk_free_treynor', params['risk_free_treynor'])) # noqa: E501
if 'start_date' in params:
query_params.append(('start_date', params['start_date'])) # noqa: E501
if 'stat' in params:
query_params.append(('stat', params['stat'])) # noqa: E501
if 'var_conf_interval' in params:
query_params.append(('var_conf_interval', params['var_conf_interval'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['*/*']) # noqa: E501
# Authentication setting
auth_settings = ['oauth2'] # noqa: E501
return self.api_client.call_api(
'/portfolio/{portfolio_id}/performance', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='object', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
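# Illustrative usage sketch (assumption: `api` is a configured instance of
# this class): the portfolio performance call requires all four identifiers --
#
# perf = api.get_portfolio_performance_using_get(
#     'ACCOUNT_ID', 'CLIENT_ID', 'PORTFOLIO_ID', 'PORTFOLIO_ID',
#     period_type='Q')   # quarterly IRR statistics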
def get_security_performance_using_get(self, security_id, **kwargs): # noqa: E501
"""Security Performance # noqa: E501
Get performance statistics for a security using TWR (Time Weighted Return). You must provide the unique security_id # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_security_performance_using_get(security_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str security_id: security_id (required)
:param str active_premium_period: Quarterly (Q), Monthly (M), Annually (Y), Daily (D) -- caps matter, codes in () - (statId: 19, default: 'D')
:param str annualized_return_period: Quarterly (Q), Monthly (M), Annually (Y), Daily (D) -- caps matter, codes in () - (statId: 19, default: 'D')
:param str bench_ticker: Bench Ticker for security - (default: ^GSPC)
:param str benchmark_id: benchmark_id
:param date end_date: Ending parameter for time window
:param float hist_factor: Histogram factor - (statId: 39, default: 5)
:param float mar_down_side_deviation: minimum acceptable return for downside deviation - (statId: 58, default: 0)
:param float max_percentile_monte_carlo: max percentile for monte carlo, i.e. 80 - (statId: 62, default: 95)
:param float mean_percentile_monte_carlo: mean percentile for monte carlo, i.e. 50 - (statId: 62, default: 50)
:param float min_percentile_monte_carlo: min percentile for monte carlo, i.e. 20 - (statId: 62, default: 5)
:param int moving_average_n_day: number of days for moving average n-day - (statId: 18, default: 7)
:param int n_day_returns: number of days for Rolling n-day returns - (statId: 2, default: 7)
:param int n_path_monte_carlo: number of points for a simulation - (statId: 62, default: 100)
:param int n_rolling_max_drawdown: number of days for Rolling n-day max drawdown - (statId: 46, default: 7)
:param int n_rolling_volatility: number of days for Rolling n-day volatility - (statId: 34, default: 7)
:param int num_sim_monte_carlo: number of simulations - (statId: 62, default: 1000)
:param str period_type: Quarterly (Q), Monthly (M), Annually (Y), Daily (D) -- caps matter, codes in () - carries out stats on daily, monthly, quarterly, or annual dates (default: 'D')
:param float risk_free_alpha: risk-free value for alpha - (statId: 52, default: 0)
:param float risk_free_sharpe: risk-free value for sharpe - (statId: 49, default: 0)
:param float risk_free_sortino: risk-free value for sortino - (statId: 56, default: 0)
:param float risk_free_treynor: risk-free value for treynor - (statId: 51, default: 0)
:param date start_date: Starting parameter for time window
:param str stat: A stat type - /statistics endpoint
:param str ticker: Ticker for security
:param float var_conf_interval: VaR Confidence Interval (alpha), i.e. 99, 95, etc. - (statId: 40, default: 95)
:return: object
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_security_performance_using_get_with_http_info(security_id, **kwargs) # noqa: E501
else:
(data) = self.get_security_performance_using_get_with_http_info(security_id, **kwargs) # noqa: E501
return data
def get_security_performance_using_get_with_http_info(self, security_id, **kwargs): # noqa: E501
"""Security Performance # noqa: E501
Get performance statistics for a security using TWR (Time Weighted Return). You must provide the unique security_id # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_security_performance_using_get_with_http_info(security_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str security_id: security_id (required)
:param str active_premium_period: Quarterly (Q), Monthly (M), Annually (Y), Daily (D) -- caps matter, codes in () - (statId: 19, default: 'D')
:param str annualized_return_period: Quarterly (Q), Monthly (M), Annually (Y), Daily (D) -- caps matter, codes in () - (statId: 19, default: 'D')
:param str bench_ticker: Bench Ticker for security - (default: ^GSPC)
:param str benchmark_id: benchmark_id
:param date end_date: Ending parameter for time window
:param float hist_factor: Histogram factor - (statId: 39, default: 5)
:param float mar_down_side_deviation: minimum acceptable return for downside deviation - (statId: 58, default: 0)
:param float max_percentile_monte_carlo: max percentile for monte carlo, i.e. 80 - (statId: 62, default: 95)
:param float mean_percentile_monte_carlo: mean percentile for monte carlo, i.e. 50 - (statId: 62, default: 50)
:param float min_percentile_monte_carlo: min percentile for monte carlo, i.e. 20 - (statId: 62, default: 5)
:param int moving_average_n_day: number of days for moving average n-day - (statId: 18, default: 7)
:param int n_day_returns: number of days for Rolling n-day returns - (statId: 2, default: 7)
:param int n_path_monte_carlo: number of points for a simulation - (statId: 62, default: 100)
:param int n_rolling_max_drawdown: number of days for Rolling n-day max drawdown - (statId: 46, default: 7)
:param int n_rolling_volatility: number of days for Rolling n-day volatility - (statId: 34, default: 7)
:param int num_sim_monte_carlo: number of simulations - (statId: 62, default: 1000)
:param str period_type: Quarterly (Q), Monthly (M), Annually (Y), Daily (D) -- caps matter, codes in () - carries out stats on daily, monthly, quarterly, or annual dates (default: 'D')
:param float risk_free_alpha: risk-free value for alpha - (statId: 52, default: 0)
:param float risk_free_sharpe: risk-free value for sharpe - (statId: 49, default: 0)
:param float risk_free_sortino: risk-free value for sortino - (statId: 56, default: 0)
:param float risk_free_treynor: risk-free value for treynor - (statId: 51, default: 0)
:param date start_date: Starting parameter for time window
:param str stat: A stat type - /statistics endpoint
:param str ticker: Ticker for security
:param float var_conf_interval: VaR Confidence Interval (alpha), i.e. 99, 95, etc. - (statId: 40, default: 95)
:return: object
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['security_id', 'active_premium_period', 'annualized_return_period', 'bench_ticker', 'benchmark_id', 'end_date', 'hist_factor', 'mar_down_side_deviation', 'max_percentile_monte_carlo', 'mean_percentile_monte_carlo', 'min_percentile_monte_carlo', 'moving_average_n_day', 'n_day_returns', 'n_path_monte_carlo', 'n_rolling_max_drawdown', 'n_rolling_volatility', 'num_sim_monte_carlo', 'period_type', 'risk_free_alpha', 'risk_free_sharpe', 'risk_free_sortino', 'risk_free_treynor', 'start_date', 'stat', 'ticker', 'var_conf_interval'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_security_performance_using_get" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'security_id' is set
if ('security_id' not in params or
params['security_id'] is None):
raise ValueError("Missing the required parameter `security_id` when calling `get_security_performance_using_get`") # noqa: E501
collection_formats = {}
path_params = {}
if 'security_id' in params:
path_params['security_id'] = params['security_id'] # noqa: E501
query_params = []
if 'active_premium_period' in params:
query_params.append(('active_premium_period', params['active_premium_period'])) # noqa: E501
if 'annualized_return_period' in params:
query_params.append(('annualized_return_period', params['annualized_return_period'])) # noqa: E501
if 'bench_ticker' in params:
query_params.append(('benchTicker', params['bench_ticker'])) # noqa: E501
if 'benchmark_id' in params:
query_params.append(('benchmark_id', params['benchmark_id'])) # noqa: E501
if 'end_date' in params:
query_params.append(('end_date', params['end_date'])) # noqa: E501
if 'hist_factor' in params:
query_params.append(('hist_factor', params['hist_factor'])) # noqa: E501
if 'mar_down_side_deviation' in params:
query_params.append(('mar_down_side_deviation', params['mar_down_side_deviation'])) # noqa: E501
if 'max_percentile_monte_carlo' in params:
query_params.append(('max_percentile_monte_carlo', params['max_percentile_monte_carlo'])) # noqa: E501
if 'mean_percentile_monte_carlo' in params:
query_params.append(('mean_percentile_monte_carlo', params['mean_percentile_monte_carlo'])) # noqa: E501
if 'min_percentile_monte_carlo' in params:
query_params.append(('min_percentile_monte_carlo', params['min_percentile_monte_carlo'])) # noqa: E501
if 'moving_average_n_day' in params:
query_params.append(('moving_average_n_day', params['moving_average_n_day'])) # noqa: E501
if 'n_day_returns' in params:
query_params.append(('n_day_returns', params['n_day_returns'])) # noqa: E501
if 'n_path_monte_carlo' in params:
query_params.append(('n_path_monte_carlo', params['n_path_monte_carlo'])) # noqa: E501
if 'n_rolling_max_drawdown' in params:
query_params.append(('n_rolling_max_drawdown', params['n_rolling_max_drawdown'])) # noqa: E501
if 'n_rolling_volatility' in params:
query_params.append(('n_rolling_volatility', params['n_rolling_volatility'])) # noqa: E501
if 'num_sim_monte_carlo' in params:
query_params.append(('num_sim_monte_carlo', params['num_sim_monte_carlo'])) # noqa: E501
if 'period_type' in params:
query_params.append(('period_type', params['period_type'])) # noqa: E501
if 'risk_free_alpha' in params:
query_params.append(('risk_free_alpha', params['risk_free_alpha'])) # noqa: E501
if 'risk_free_sharpe' in params:
query_params.append(('risk_free_sharpe', params['risk_free_sharpe'])) # noqa: E501
if 'risk_free_sortino' in params:
query_params.append(('risk_free_sortino', params['risk_free_sortino'])) # noqa: E501
if 'risk_free_treynor' in params:
query_params.append(('risk_free_treynor', params['risk_free_treynor'])) # noqa: E501
if 'start_date' in params:
query_params.append(('start_date', params['start_date'])) # noqa: E501
if 'stat' in params:
query_params.append(('stat', params['stat'])) # noqa: E501
if 'ticker' in params:
query_params.append(('ticker', params['ticker'])) # noqa: E501
if 'var_conf_interval' in params:
query_params.append(('var_conf_interval', params['var_conf_interval'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['*/*']) # noqa: E501
# Authentication setting
auth_settings = ['oauth2'] # noqa: E501
return self.api_client.call_api(
'/security/{security_id}/performance', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='object', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
| [
"hydrogen@Hydrogens-MacBook-Pro.local"
] | hydrogen@Hydrogens-MacBook-Pro.local |
0f294442352392103ed94eb88fc76668f87af676 | 41acd1d7fcfba63d3b06b82d18d8a4d97dd40927 | /old/test_selenium.py | 9e85c7310eb625b00eacc66736b190a955884302 | [] | no_license | wancy86/learn_python | a33e3091b271840c8bf89cbbf991fe33b951a266 | 44e45a91361d6d46b9ab4a172af7e48e0f6df7dd | refs/heads/master | 2021-01-15T15:42:45.377381 | 2016-12-06T00:36:46 | 2016-12-06T00:36:46 | 55,651,391 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 606 | py | from selenium import webdriver
# Create a Chrome webdriver instance (this launches Google Chrome)
driver = webdriver.Chrome()
# To drive Internet Explorer instead: driver = webdriver.Ie()
# To drive Firefox instead: driver = webdriver.Firefox()
# Go to the Baidu home page
driver.get("http://www.baidu.com")
# Locate the search input box
inputElement = driver.find_element_by_xpath("//input[@name='wd']")
# Type the search terms
inputElement.send_keys("selenium python")
# Click the "Baidu Search" button (it must be located first; 'su' is the id
# of Baidu's submit button)
submitElement = driver.find_element_by_id("su")
submitElement.submit()
# Print the page title
print(driver.title)
# Quit the webdriver
driver.quit()
# Running this script opens Chrome automatically and starts the test
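# Optional hardening sketch (an illustrative assumption, selenium 3.x API):
# an explicit wait avoids racing the page load before locating elements.
#
# from selenium.webdriver.common.by import By
# from selenium.webdriver.support.ui import WebDriverWait
# from selenium.webdriver.support import expected_conditions as EC
#
# driver = webdriver.Chrome()
# driver.get("http://www.baidu.com")
# box = WebDriverWait(driver, 10).until(
#     EC.presence_of_element_located((By.NAME, "wd")))
# box.send_keys("selenium python")
# box.submit()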
"wancy86@sina.com"
] | wancy86@sina.com |
7280a8c038a404bc9a0f451fcc58c89c88927b29 | 5a8c7a330d6be1fcc90ee0ef298fcecfe204951b | /lectures/class_two/classes.py | 83a85c6c964abf2971c24b6e5a15be413b8f2201 | [] | no_license | EricSchles/nyu_python_class | 299448b55c03dcd90a8606de1df13f52982628eb | 2b19bf70f6b233e00fadfe7664ebd3b635e9df44 | refs/heads/master | 2021-01-25T05:45:11.997409 | 2017-05-17T15:40:00 | 2017-05-17T15:40:00 | 80,680,329 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 1,507 | py | import math
import statistics as st
class DescribeData:
def __init__(self, List): #stands for initialize
self.List = List
def describe(self):
print("Here are some statistics about our data")
print("---------------------------------------")
print("Our list has ",len(self.List),"many elements")
print("The mean is ",self.average()) #automatic type casting to string
print("The median is ",self.median())
if self.average() > self.median():
print("And the mean is ",abs(self.average()-self.median()),"greater than the median")
def average(self):
return st.mean(self.List)
def median(self):
return st.median(self.List)
def standard_deviation(self):
return st.stdev(self.List)
def describe(List):
ave = st.mean(List)
middle_number = st.median(List)
std_dev = st.stdev(List)
print("Here are some statistics about our data")
print("---------------------------------------")
print("Our list has ",len(List),"many elements")
print("The mean is ",ave) #automatic type casting to string
print("The median is ",middle_number)
if ave > middle_number:
print("And the mean is ",abs(ave-middle_number),"greater than the median")
if __name__ == '__main__':
import random
List = []
for i in range(200):
List.append(random.randint(0,100))
describer = DescribeData(List)
import code
code.interact(local=locals())
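# Illustrative follow-up (runs inside the interactive session opened above):
# >>> describer.describe()
# >>> describer.standard_deviation()
# Note: statistics.stdev computes the *sample* standard deviation;
# statistics.pstdev would give the population standard deviation instead.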
| [
"ericschles@gmail.com"
] | ericschles@gmail.com |
2bd407814b0f7d9875e56a9543aa05be1470ed27 | c4c159a21d2f1ea0d7dfaa965aeff01c8ef70dce | /flask/flaskenv/Lib/site-packages/keras/applications/vgg16.py | 76c77c1f512a4b52134aed97846b7a911effe39e | [] | no_license | AhsonAslam/webapi | 54cf7466aac4685da1105f9fb84c686e38f92121 | 1b2bfa4614e7afdc57c9210b0674506ea70b20b5 | refs/heads/master | 2020-07-27T06:05:36.057953 | 2019-09-17T06:35:33 | 2019-09-17T06:35:33 | 208,895,450 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 128 | py | version https://git-lfs.github.com/spec/v1
oid sha256:d16551dce42289cab53b58657d749a6e4322069372faf4930082d8bd2d661d1e
size 518
| [
"github@cuba12345"
] | github@cuba12345 |
00ebdf6d66acc5dca1a037cfbaf180fb57b81fa0 | ad5ad404d24f1ef195d069b2e9d36b1a22cfd25d | /libs/llvm-meta/clang-tools-extra/clang-tools-extra.py | 6e159d72f9efac1c8dc04b7f11484ed7b6b2585c | [
"BSD-2-Clause"
] | permissive | arruor/craft-blueprints-kde | 6643941c87afd09f20dd54635022d8ceab95e317 | e7e2bef76d8efbc9c4b84411aa1e1863ac8633c1 | refs/heads/master | 2020-03-22T17:54:38.445587 | 2018-07-10T11:47:21 | 2018-07-10T11:47:21 | 140,423,580 | 0 | 0 | null | 2018-07-10T11:43:08 | 2018-07-10T11:43:07 | null | UTF-8 | Python | false | false | 563 | py | # -*- coding: utf-8 -*-
import info
class subinfo(info.infoclass):
def setTargets(self):
self.versionInfo.setDefaultValues(packageName="clang-tools-extra", gitUrl="[git]https://git.llvm.org/git/clang-tools-extra.git")
def setDependencies(self):
self.runtimeDependencies["virtual/base"] = "default"
self.runtimeDependencies["libs/llvm-meta/llvm"] = "default"
from Package.VirtualPackageBase import *
class Package(SourceComponentPackageBase):
def __init__(self, **args):
SourceComponentPackageBase.__init__(self)
| [
"vonreth@kde.org"
] | vonreth@kde.org |
bb9d04bc3c077bf44239e3310d88723efd751376 | d1ad7bfeb3f9e3724f91458277284f7d0fbe4b2d | /python/002-tcp-server/server.py | 496f0090ef0b35aa6ca74b4f0ea938f3a4f0629a | [] | no_license | qu4ku/tutorials | 01d2d5a3e8740477d896476d02497d729a833a2b | ced479c5f81c8aff0c4c89d2a572227824445a38 | refs/heads/master | 2023-03-10T20:21:50.590017 | 2023-03-04T21:57:08 | 2023-03-04T21:57:08 | 94,262,493 | 0 | 0 | null | 2023-01-04T21:37:16 | 2017-06-13T22:07:54 | PHP | UTF-8 | Python | false | false | 459 | py | import socket
server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
host = socket.gethostbyname(socket.gethostname())
port = 444
server_socket.bind((host, port))
server_socket.listen(3) # backlog: max number of queued connections
while True:
client_socket, address = server_socket.accept()
print(f'Received connection from {address}')
message = 'Thank you for connecting to the server\r\n'
client_socket.send(message.encode('ascii'))
client_socket.close()
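# Minimal companion client sketch (illustrative; assumes the server above is
# running on the same machine and that port 444 is reachable):
#
# import socket
# client_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# client_socket.connect((socket.gethostbyname(socket.gethostname()), 444))
# print(client_socket.recv(1024).decode('ascii'))
# client_socket.close()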
"qu4ku@hotmail.com"
] | qu4ku@hotmail.com |
3366f2acd4ae0124323c631d854eb5b0713dc2b2 | 7b8d505758cbadb002c9aa4449caf643836a829e | /tbkt/apps/task/urls.py | d2465afd980e2c934d98983da8ae0114cf218d0b | [] | no_license | GUAN-YE/myproject | 455a01dcc76629fc33d8154efcba9ef1af0faaa9 | 21e48a150dd6009a6bf2572b2d42eee20fbcbdfc | refs/heads/master | 2020-03-07T22:51:41.821356 | 2018-04-02T14:13:19 | 2018-04-02T14:13:19 | 127,765,750 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 294 | py | # coding: utf-8
from django.conf.urls import include, url, patterns
# Kept for the old version of the Chinese (语文) subject: the old interface is
# modified while its parameters and returned data stay unchanged
urlpatterns = patterns('apps.task.views',
(r"^sms$", "p_givesms"), # send homework by SMS
(r"^checkNum$", "p_check_num"), # number of assignments pending check
) | [
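# Illustrative equivalent for newer Django releases (assumption: the same view
# functions live in apps.task.views), where patterns() was removed:
#
# from django.conf.urls import url
# from apps.task import views
#
# urlpatterns = [
#     url(r"^sms$", views.p_givesms),
#     url(r"^checkNum$", views.p_check_num),
# ]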
"15670549987@163.com"
] | 15670549987@163.com |
91d7f63a68a0f38b448fd8e5cf361d5ad1d37cad | 67b7e6d2c08f08403ec086c510622be48b8d26d8 | /src/test/tinc/tincrepo/mpp/gpdb/tests/storage/aoco_compression/__init__.py | 998ff57cee46fd7c1974553b91f2eae5eca02cba | [
"Apache-2.0",
"PostgreSQL",
"LicenseRef-scancode-rsa-md4",
"OLDAP-2.8",
"HPND-sell-variant",
"BSD-4-Clause-UC",
"BSD-3-Clause",
"Zlib",
"LicenseRef-scancode-zeusbench",
"LicenseRef-scancode-mit-modification-obligations",
"OpenSSL",
"MIT",
"LicenseRef-scancode-other-copyleft",
"bzip2-1.0.6"... | permissive | sshyran/gpdb | 41012411d22b0294204dfb0fe67a1f4c8d1ecaf6 | 2d065ecdd2b5535cb42474f17a0ee6592b4e6837 | refs/heads/master | 2023-04-09T14:05:44.030212 | 2016-11-12T08:33:33 | 2016-11-12T08:34:36 | 73,544,159 | 0 | 0 | Apache-2.0 | 2023-04-04T00:30:10 | 2016-11-12T09:43:54 | PLpgSQL | UTF-8 | Python | false | false | 63,732 | py | """
Copyright (C) 2004-2015 Pivotal Software, Inc. All rights reserved.
This program and the accompanying materials are made available under
the terms of the under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import os
import string
import tinctest
from tinctest.lib import local_path, run_shell_command
from mpp.models import MPPTestCase
from mpp.lib.PSQL import PSQL
class GenerateSqls(MPPTestCase):
def __init__(self):
self.compress_type_list = ["quicklz","rle_type", "zlib"]
self.block_size_list = ["8192", "32768", "65536", "1048576", "2097152"]
self.compress_level_list = [1, 2, 3, 4, 5, 6, 7, 8, 9]
self.all_columns = "a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11,a12,a13,a14,a15,a16,a17,a18,a19,a20,a21,a22,a23,a24,a25,a26,a27,a28,a29,a30,a31,a32,a33,a34,a35,a36,a37,a38,a39,a40,a41,a42"
self.alter_comprtype = {"quicklz":"zlib","rle_type":"quicklz","zlib":"rle_type"}
def compare_data_with_uncompressed_table(self,tablename, sqlfile, part_table="No"):
''' Part of SQL file generation: compare data with the uncompressed table '''
## Select number of rows from the uncompressed table
sqlfile.write("--\n-- Select number of rows from the uncompressed table \n--\n")
sqlfile.write("SELECT count(*) as count_uncompressed from " + tablename + "_uncompr ;" + "\n")
## Select number of rows from the compressed table
sqlfile.write("--\n-- Select number of rows from the compressed table \n--\n")
sqlfile.write("SELECT count(*) as count_compressed from " + tablename + ";" + "\n")
## Select number of rows using a FULL outer join on all the columns of the two tables: Count should match with above result if the all the rows uncompressed correctly:
sqlfile.write("--\n-- Select number of rows using a FULL outer join on all the columns of the two tables \n")
sqlfile.write("-- Count should match with above result if the all the rows uncompressed correctly: \n--\n")
join_string = "Select count(*) as count_join from " + tablename + " t1 full outer join " + tablename + "_uncompr t2 on t1.id=t2.id and "
clm_list = self.get_column_list()
clm_excl = ['a20', 'a21', 'a25', 'a26', 'a28', 'a31']
for c in range (len(clm_list)):
clm = clm_list[c]
if clm in clm_excl : # point and polygon has no operators
continue;
join_string = join_string + "t1." + clm + "=t2." + clm + " and "
join_string = join_string[:-4] + ";"
sqlfile.write(join_string + "\n")
### Truncate the table
sqlfile.write("--\n-- Truncate the table \n--\n")
sqlfile.write("TRUNCATE table " + tablename + ";" + "\n")
### Insert data again
sqlfile.write("--\n-- Insert data again \n--\n")
sqlfile.write("insert into " + tablename + " select * from " + tablename + "_uncompr order by a1;\n\n")
#get_ao_compression_ratio
sqlfile.write("--\n-- Compression ratio\n--\n")
if part_table == "No":
sqlfile.write("select 'compression_ratio' as compr_ratio ,get_ao_compression_ratio('" + tablename + "'); \n\n")
def validation_sqls(self,tablename, sqlfile):
''' Generate sqls for validation'''
#### Validation using psql utility ###
sqlfile.write("\d+ " + tablename + "\n\n")
#get_ao_compression_ratio
sqlfile.write("--\n-- Compression ratio\n--\n")
sqlfile.write("select 'compression_ratio' as compr_ratio, get_ao_compression_ratio('" + tablename + "'); \n\n")
## Select from pg_attribute_encoding to see the table entry
if 'with' not in tablename:
sqlfile.write ("--Select from pg_attribute_encoding to see the table entry \n")
sqlfile.write ("select attrelid::regclass as relname, attnum, attoptions from pg_class c, pg_attribute_encoding e where c.relname = '" + tablename + "' and c.oid=e.attrelid order by relname, attnum limit 3; \n")
###Compare the selected data with that of an uncompressed table to see if the data in two tables are same when selected.
sqlfile.write("--\n-- Compare data with uncompressed table\n--\n")
self.compare_data_with_uncompressed_table(tablename, sqlfile)
def generate_copy_files(self):
''' Generate the data files to be copied to the tables '''
# Create base tables with the inserts
PSQL.run_sql_file(local_path('create_base_tables.sql'), out_file=local_path('create_base_tables.out'))
# Copy the rows to data files
for tablename in ('base_small', 'base_large'):
copy_file = local_path('data/copy_%s' % tablename)
cp_out_cmd = "Copy %s To '%s' DELIMITER AS '|'" % (tablename, copy_file)
out = PSQL.run_sql_command(cp_out_cmd, flags = '-q -t')
if 'COPY 0' in out:
raise Exception ("Copy did not work for tablename %s " % tablename)
else:
tinctest.logger.info('Created copy file for %s' % tablename)
def insert_data(self,tablename, sqlfile, block_size,compress_type):
''' Part of sql file creation: Insert data'''
if block_size in ('8192', '32768', '65536'):
copy_file = local_path('data/copy_base_small')
elif block_size in ('1048576', '2097152'):
copy_file = local_path('data/copy_base_large')
copy_string = "COPY " + tablename + "(a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11,a12,a13,a14,a15,a16,a17,a18,a19,a20,a21,a22,a23,a24,a25,a26,a27,a28,a29,a30,a31,a32,a33,a34,a35,a36,a37,a38,a39,a40,a41,a42) FROM '" + copy_file + "' DELIMITER AS '|' ;"
sqlfile.write(copy_string + "\n\n")
def get_compresslevel_list(self,compress_type):
''' Returns a list of compresslevel for the given compresstype'''
if (compress_type == "quicklz"):
compress_lvl_list = [1]
elif (compress_type == "rle_type"):
compress_lvl_list = [1,2,3,4]
else:
compress_lvl_list = self.compress_level_list
return compress_lvl_list
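# Expected mapping (follows directly from the branches above):
#   get_compresslevel_list('quicklz')  -> [1]
#   get_compresslevel_list('rle_type') -> [1, 2, 3, 4]
#   get_compresslevel_list('zlib')     -> [1, 2, 3, 4, 5, 6, 7, 8, 9]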
def get_table_definition(self):
listfilename = "column_list"
list_file = open(local_path(listfilename), "r")
tabledefinition = "(id SERIAL,"
for line in list_file:
tabledefinition = tabledefinition + line.strip('\n') + ","
tabledefinition = tabledefinition[:-1]
list_file.close()
return tabledefinition
def get_column_list(self):
column_list = ["a1", "a2", "a3", "a4", "a5", "a6", "a7", "a8", "a9", "a10", "a11", "a12", "a13", "a14", "a15", "a16", "a17", "a18", "a19", "a20", "a21", "a22", "a23", "a24", "a25", "a26", "a27", "a28", "a29", "a30", "a31", "a32", "a33", "a34", "a35", "a36", "a37", "a38", "a39", "a40", "a41", "a42"]
return column_list
def alter_column_tests(self,tablename, sqlfile):
''' Alter column test cases '''
# Alter type of a column
sqlfile.write("--Alter table alter type of a column \n")
sqlfile.write("Alter table " + tablename + " Alter column a3 TYPE int4; \n")
sqlfile.write("--Insert data to the table, select count(*)\n")
sqlfile.write("Insert into " + tablename + "(" + self.all_columns + ") select " + self.all_columns + " from " + tablename + " where id =10;\n")
sqlfile.write("Select count(*) from " + tablename + "; \n\n")
#Drop a column
sqlfile.write("--Alter table drop a column \n")
sqlfile.write("Alter table " + tablename + " Drop column a12; \n")
sqlfile.write("Insert into " + tablename + "(a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11,a13,a14,a15,a16,a17,a18,a19,a20,a21,a22,a23,a24,a25,a26,a27,a28,a29,a30,a31,a32,a33,a34,a35,a36,a37,a38,a39,a40,a41,a42) select a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11,a13,a14,a15,a16,a17,a18,a19,a20,a21,a22,a23,a24,a25,a26,a27,a28,a29,a30,a31,a32,a33,a34,a35,a36,a37,a38,a39,a40,a41,a42 from " + tablename + " where id =10;\n")
sqlfile.write("Select count(*) from " + tablename + "; \n\n")
#Rename a column
sqlfile.write("--Alter table rename a column \n")
sqlfile.write("Alter table " + tablename + " Rename column a13 TO after_rename_a13; \n")
sqlfile.write("--Insert data to the table, select count(*)\n")
sqlfile.write("Insert into " + tablename + "(a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11,after_rename_a13,a14,a15,a16,a17,a18,a19,a20,a21,a22,a23,a24,a25,a26,a27,a28,a29,a30,a31,a32,a33,a34,a35,a36,a37,a38,a39,a40,a41,a42) select a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11,after_rename_a13,a14,a15,a16,a17,a18,a19,a20,a21,a22,a23,a24,a25,a26,a27,a28,a29,a30,a31,a32,a33,a34,a35,a36,a37,a38,a39,a40,a41,a42 from " + tablename + " where id =10;\n")
sqlfile.write("Select count(*) from " + tablename + "; \n\n")
#Add a column
sqlfile.write("--Alter table add a column \n")
sqlfile.write("Alter table " + tablename + " Add column a12 text default 'new column'; \n")
sqlfile.write("--Insert data to the table, select count(*)\n")
sqlfile.write("Insert into " + tablename + "(a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11,after_rename_a13,a14,a15,a16,a17,a18,a19,a20,a21,a22,a23,a24,a25,a26,a27,a28,a29,a30,a31,a32,a33,a34,a35,a36,a37,a38,a39,a40,a41,a42) select a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11,after_rename_a13,a14,a15,a16,a17,a18,a19,a20,a21,a22,a23,a24,a25,a26,a27,a28,a29,a30,a31,a32,a33,a34,a35,a36,a37,a38,a39,a40,a41,a42 from " + tablename + " where id =10;\n")
sqlfile.write("Select count(*) from " + tablename + "; \n\n")
def split_and_exchange(self,tablename, sqlfile, part_level, part_type, encoding_type, orientation, tabledefinition):
'''Exchange and split partition cases '''
# Exchange partition
sqlfile.write("--Alter table Exchange Partition \n--Create a table to use in exchange partition \n")
exchange_part = tablename + "_exch"
if orientation == "column" :
storage_string = " WITH (appendonly=true, orientation=column, compresstype=zlib) "
else :
storage_string = " WITH (appendonly=true, orientation=row, compresstype=zlib) "
exchange_table_str = "Drop Table if exists "+ exchange_part +"; \n CREATE TABLE " + exchange_part + tabledefinition + ")" + storage_string + " distributed randomly;\n"
sqlfile.write(exchange_table_str + " \n")
sqlfile.write("Insert into " + exchange_part + "(" + self.all_columns + ") select " + self.all_columns + " from " + tablename + " where a1=10 and a2!='C';\n\n")
if part_level == "sub_part":
if part_type == "range":
sqlfile.write("Alter table " + tablename + " alter partition FOR (RANK(1)) exchange partition sp1 with table " + exchange_part + ";\n")
sqlfile.write("\d+ " + tablename + "_1_prt_1_2_prt_sp1\n\n")
sqlfile.write("--\n-- Compression ratio\n--\n select 'compression_ratio' as compr_ratio ,get_ao_compression_ratio('" + tablename + "_1_prt_1_2_prt_sp1'); \n\n")
else:
if encoding_type == "with":
pname = "p1"
else:
pname = "p2"
sqlfile.write("Alter table " + tablename + " alter partition "+ pname +" exchange partition FOR (RANK(1)) with table " + exchange_part + ";\n")
sqlfile.write("\d+ " + tablename + "_1_prt_"+ pname + "_2_prt_2\n\n")
sqlfile.write("--\n-- Compression ratio\n--\n select 'compression_ratio' as compr_ratio ,get_ao_compression_ratio('" + tablename + "_1_prt_"+ pname +"_2_prt_2'); \n\n")
else:
if part_type == "range":
sqlfile.write("Alter table " + tablename + " exchange partition FOR (RANK(1)) with table " + exchange_part + ";\n")
sqlfile.write("\d+ " + tablename + "_1_prt_1\n\n")
sqlfile.write("--\n-- Compression ratio\n--\n select 'compression_ratio' as compr_ratio ,get_ao_compression_ratio('" + tablename + "_1_prt_1'); \n\n")
else:
sqlfile.write("Alter table " + tablename + " exchange partition p1 with table " + exchange_part + ";\n")
sqlfile.write("\d+ " + tablename + "_1_prt_p1\n\n")
sqlfile.write("--\n-- Compression ratio\n--\n select 'compression_ratio' as compr_ratio ,get_ao_compression_ratio('" + tablename + "_1_prt_p1'); \n\n")
sqlfile.write("\d+ " + tablename + "_1_prt_df_p\n\n")
#split partition
if part_level == "sub_part":
if part_type == "list":
if encoding_type == "with":
pname = "p2"
else:
pname = "p1"
sqlfile.write("--Alter table Split Partition \n Alter table " + tablename + " alter partition "+ pname +" split partition FOR (RANK(4)) at(4000) into (partition splita,partition splitb) ;\n")
sqlfile.write("\d+ " + tablename + "_1_prt_"+ pname +"_2_prt_splita \n\n")
sqlfile.write("--\n-- Compression ratio\n--\n select 'compression_ratio' as compr_ratio ,get_ao_compression_ratio('" + tablename + "_1_prt_"+ pname +"_2_prt_splita'); \n\n")
else:
if part_type == "range":
sqlfile.write("--Alter table Split Partition \n Alter table " + tablename + " split partition FOR (RANK(2)) at(1050) into (partition splitc,partition splitd) ;\n")
sqlfile.write("\d+ " + tablename + "_1_prt_splitd \n\n")
sqlfile.write("--\n-- Compression ratio\n--\n select 'compression_ratio' as compr_ratio ,get_ao_compression_ratio('" + tablename + "_1_prt_splitd'); \n\n")
sqlfile.write("Select count(*) from " + tablename + "; \n\n")
def alter_partition_tests(self,tablename, sqlfile, part_level, part_type, encoding_type, orientation):
''' Testcases for altering partitions '''
tabledefinition = self.get_table_definition()
if encoding_type =='with':
co_str = " WITH (appendonly=true, orientation=column, compresstype=zlib, compresslevel=1)"
else :
co_str = ""
# Add partition
sqlfile.write("\n--Alter table Add Partition \n")
if part_type == "range":
if part_level == "sub_part":
sqlfile.write("alter table " + tablename + " add partition new_p start(5050) end (6051)" + co_str +";\n\n")
sqlfile.write("--Validation with psql utility \n \d+ " + tablename + "_1_prt_new_p_2_prt_sp1\n\n")
sqlfile.write("alter table " + tablename + " add default partition df_p ;\n\n")
sqlfile.write("--Validation with psql utility \n \d+ " + tablename + "_1_prt_df_p_2_prt_sp2\n\n")
else:
sqlfile.write("alter table " + tablename + " add partition new_p start(5050) end (5061)" + co_str +";\n\n")
sqlfile.write("alter table " + tablename + " add default partition df_p;\n\n")
else:
if part_level == "sub_part":
sqlfile.write("alter table " + tablename + " add partition new_p values('C') " + co_str +";\n\n")
sqlfile.write("--Validation with psql utility \n \d+ " + tablename + "_1_prt_new_p_2_prt_3\n\n")
sqlfile.write("alter table " + tablename + " add default partition df_p ;\n\n")
sqlfile.write("--Validation with psql utility \n \d+ " + tablename + "_1_prt_df_p_2_prt_2\n\n")
else:
sqlfile.write("alter table " + tablename + " add partition new_p values('C')" + co_str +";\n\n")
sqlfile.write("alter table " + tablename + " add default partition df_p;\n\n")
sqlfile.write("-- Insert data \n")
sqlfile.write("Insert into " + tablename + "(" + self.all_columns + ") values(generate_series(1,5000),'C',2011,'t','a','dfjjjjjj','2001-12-24 02:26:11','hghgh',333,'2011-10-11','Tddd','sss','1234.56',323453,4454,7845,'0011','2005-07-16 01:51:15+1359','2001-12-13 01:51:15','((1,2),(0,3),(2,1))','((2,3)(4,5))','08:00:2b:01:02:03','1-2','dfdf','((2,3)(4,5))','(6,7)',11.222,'((4,5),7)',32,3214,'(1,0,2,3)','2010-02-21',43564,'$1,000.00','192.168.1','126.1.3.4','12:30:45','ggg','1','0',12,23) ; \n\n")
sqlfile.write("Insert into " + tablename + "(" + self.all_columns + ") values(generate_series(5061,6050),'F',2011,'t','a','dfjjjjjj','2001-12-24 02:26:11','hghgh',333,'2011-10-11','Tddd','sss','1234.56',323453,4454,7845,'0011','2005-07-16 01:51:15+1359','2001-12-13 01:51:15','((1,2),(0,3),(2,1))','((2,3)(4,5))','08:00:2b:01:02:03','1-2','dfdf','((2,3)(4,5))','(6,7)',11.222,'((4,5),7)',32,3214,'(1,0,2,3)','2010-02-21',43564,'$1,000.00','192.168.1','126.1.3.4','12:30:45','ggg','1','0',12,23) ; \n\n")
sqlfile.write("Insert into " + tablename + "(" + self.all_columns + ") values(generate_series(5051,6050),'M',2011,'t','a','dfjjjjjj','2001-12-24 02:26:11','hghgh',333,'2011-10-11','Tddd','sss','1234.56',323453,4454,7845,'0011','2005-07-16 01:51:15+1359','2001-12-13 01:51:15','((1,2),(0,3),(2,1))','((2,3)(4,5))','08:00:2b:01:02:03','1-2','dfdf','((2,3)(4,5))','(6,7)',11.222,'((4,5),7)',32,3214,'(1,0,2,3)','2010-02-21',43564,'$1,000.00','192.168.1','126.1.3.4','12:30:45','ggg','1','0',12,23) ; \n\n")
if part_type == "range":
if part_level == "sub_part":
sqlfile.write("--\n-- Compression ratio\n--\n select 'compression_ratio' as compr_ratio ,get_ao_compression_ratio('" + tablename + "_1_prt_new_p_2_prt_sp1'); \n\n")
else:
if encoding_type == "with" : #due to MPP-17780
sqlfile.write("--\n-- Compression ratio\n--\n select 'compression_ratio' as compr_ratio ,get_ao_compression_ratio('" + tablename + "_1_prt_new_p'); \n\n")
else:
if part_level == "sub_part":
sqlfile.write("--\n-- Compression ratio\n--\n select 'compression_ratio' as compr_ratio ,get_ao_compression_ratio('" + tablename + "_1_prt_new_p_2_prt_3'); \n\n")
else:
if encoding_type == "with" : #due to MPP-17780
sqlfile.write("--\n-- Compression ratio\n--\n select 'compression_ratio' as compr_ratio ,get_ao_compression_ratio('" + tablename + "_1_prt_new_p'); \n\n")
if encoding_type == "with": # only adding for with clause . split partition limits concurrent run of tests due to duplicate oid issue
self.split_and_exchange(tablename, sqlfile, part_level, part_type, encoding_type, orientation, tabledefinition)
# Drop partition
sqlfile.write("--Alter table Drop Partition \n")
sqlfile.write("alter table " + tablename + " drop partition new_p;\n\n")
# Drop the default partition
sqlfile.write("-- Drop the default partition \n")
sqlfile.write("alter table " + tablename + " drop default partition;\n\n")
def generate_sql_files_part(self,sqlfile, tablename, create_table_string, block_size,compress_type, part_level, part_type, encoding_type, orientation):
tabledefinition = self.get_table_definition()
sqlfile.write("--\n-- Drop table if exists\n--\n")
sqlfile.write("DROP TABLE if exists " + tablename + " cascade;\n\n")
sqlfile.write("DROP TABLE if exists " + tablename + "_uncompr cascade;\n\n")
sqlfile.write("--\n-- Create table\n--\n")
sqlfile.write(create_table_string + "\n\n")
### Create Indexes ###
index_name1 = tablename + "_idx_bitmap"
index_string1 = "CREATE INDEX " + index_name1 + " ON " + tablename + " USING bitmap (a1);"
index_name2 = tablename + "_idx_btree"
index_string2 = "CREATE INDEX " + index_name2 + " ON " + tablename + "(a9);"
sqlfile.write("-- \n-- Create Indexes\n--\n")
sqlfile.write(index_string1 + "\n\n")
sqlfile.write(index_string2 + "\n\n")
#### Insert data ###
sqlfile.write("--\n-- Insert data to the table\n--\n")
self.insert_data(tablename, sqlfile, block_size,compress_type)
### Create an uncompressed table of same definition for comparing data ###
if part_type == "range":
if part_level == "sub_part":
uncompressed_table_string = "CREATE TABLE " + tablename + "_uncompr" + tabledefinition + ") WITH (appendonly=true, orientation=" + orientation + ") distributed randomly Partition by range(a1) Subpartition by list(a2) subpartition template ( subpartition sp1 values('M') , subpartition sp2 values('F') ) (start(1) end(5000) every(1000)) ;"
else:
uncompressed_table_string = "CREATE TABLE " + tablename + "_uncompr" + tabledefinition + ") WITH (appendonly=true, orientation=" + orientation + ") distributed randomly Partition by range(a1) (start(1) end(5000) every(1000)) ;"
else:
if part_level == "sub_part":
uncompressed_table_string = "CREATE TABLE " + tablename + "_uncompr" + tabledefinition + ") WITH (appendonly=true, orientation=" + orientation + ") distributed randomly Partition by list(a2) Subpartition by range(a1) subpartition template (start(1) end(5000) every(1000)) (default partition p1 , partition p2 values ('M') );"
else:
uncompressed_table_string = "CREATE TABLE " + tablename + "_uncompr" + tabledefinition + ") WITH (appendonly=true, orientation=" + orientation + ") distributed randomly Partition by list(a2) (default partition p1 , partition p2 values ('M') );"
sqlfile.write("\n--Create Uncompressed table of same schema definition" + "\n\n")
sqlfile.write(uncompressed_table_string + "\n\n")
#### Insert to uncompressed table ###
sqlfile.write("--\n-- Insert to uncompressed table\n--\n")
self.insert_data(tablename + "_uncompr", sqlfile, block_size,compress_type)
sqlfile.write("--\n-- ********Validation******* \n--\n")
#### Validation using psql utility ###
if part_type == "range":
if part_level == "sub_part":
sqlfile.write("\d+ " + tablename + "_1_prt_1_2_prt_sp2\n\n")
sqlfile.write("--\n-- Compression ratio\n--\n select 'compression_ratio' as compr_ratio ,get_ao_compression_ratio('" + tablename + "_1_prt_1_2_prt_sp2'); \n\n")
else:
sqlfile.write("\d+ " + tablename + "_1_prt_1\n\n")
sqlfile.write("--\n-- Compression ratio\n--\n select 'compression_ratio' as compr_ratio ,get_ao_compression_ratio('" + tablename + "_1_prt_1'); \n\n")
else:
if part_level == "sub_part":
sqlfile.write("\d+ " + tablename + "_1_prt_p1_2_prt_2 \n\n")
sqlfile.write("--\n-- Compression ratio\n--\n select 'compression_ratio' as compr_ratio ,get_ao_compression_ratio('" + tablename + "_1_prt_p1_2_prt_2'); \n\n")
else:
sqlfile.write("\d+ " + tablename + "_1_prt_p2\n\n")
sqlfile.write("--\n-- Compression ratio\n--\n select 'compression_ratio' as compr_ratio ,get_ao_compression_ratio('" + tablename + "_1_prt_p2'); \n\n")
## Select from pg_partition_encoding to see the table entry in case of a sub partition
if part_level == "sub_part":
sqlfile.write ("--Select from pg_attribute_encoding to see the table entry \n")
sqlfile.write ("select parencattnum, parencattoptions from pg_partition_encoding e, pg_partition p, pg_class c where c.relname = '" + tablename + "' and c.oid = p.parrelid and p.oid = e.parencoid order by parencattnum limit 3; \n")
###Compare the selected data with that of an uncompressed table to see if the data in two tables are same when selected.
sqlfile.write("--\n-- Compare data with uncompressed table\n--\n")
self.compare_data_with_uncompressed_table(tablename, sqlfile, part_table="Yes")
self.alter_partition_tests(tablename, sqlfile, part_level, part_type, encoding_type, orientation)
# Alter column tests
self.alter_column_tests(tablename, sqlfile)
def generate_sql_files(self,sqlfile, tablename, create_table_string, block_size,compress_type):
tabledefinition = self.get_table_definition()
sqlfile.write("--\n-- Drop table if exists\n--\n")
sqlfile.write("DROP TABLE if exists " + tablename + " cascade;\n\n")
sqlfile.write("DROP TABLE if exists " + tablename + "_uncompr cascade;\n\n")
sqlfile.write("--\n-- Create table\n--\n")
sqlfile.write(create_table_string + "\n\n")
### Create Indexes ###
index_name1 = tablename + "_idx_bitmap"
index_string1 = "CREATE INDEX " + index_name1 + " ON " + tablename + " USING bitmap (a1);"
index_name2 = tablename + "_idx_btree"
index_string2 = "CREATE INDEX " + index_name2 + " ON " + tablename + "(a9);"
sqlfile.write("-- \n-- Create Indexes\n--\n")
sqlfile.write(index_string1 + "\n\n")
sqlfile.write(index_string2 + "\n\n")
#### Insert data ###
sqlfile.write("--\n-- Insert data to the table\n--\n")
self.insert_data(tablename, sqlfile, block_size,compress_type)
### Create an uncompressed table of same definition for comparing data ###
if "AO" in string.upper(tablename):
uncompressed_table_string = "CREATE TABLE " + tablename + "_uncompr" + tabledefinition + ") WITH (appendonly=true, orientation=row) distributed randomly;"
else:
uncompressed_table_string = "CREATE TABLE " + tablename + "_uncompr" + tabledefinition + ") WITH (appendonly=true, orientation=column) distributed randomly;"
sqlfile.write("\n--Create Uncompressed table of same schema definition" + "\n\n")
sqlfile.write(uncompressed_table_string + "\n\n")
#### Insert to uncompressed table ###
sqlfile.write("--\n-- Insert to uncompressed table\n--\n")
self.insert_data(tablename + "_uncompr", sqlfile, block_size,compress_type)
sqlfile.write("--\n-- ********Validation******* \n--\n")
self.validation_sqls(tablename, sqlfile)
def create_table(self,encoding_type="storage_directive", orientation="column", reference_type="column"):
        ''' Generate SQL files for create table with different encoding types: storage_directive, WITH clause, column_reference.
        All the data types have the same encoding.'''
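        # For illustration only (the column definition below is hypothetical, since
        # the real ones come from the column_list file), a storage_directive
        # statement built by this loop has roughly this shape:
        #   CREATE TABLE co_crtb_stg_dir_zlib_8192_1 (id SERIAL,
        #       a1 int ENCODING (compresstype=zlib,compresslevel=1,blocksize=8192))
        #   WITH (appendonly=true, orientation=column) distributed randomly;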
listfilename = "column_list"
test_listname = ""
tabledefinition = self.get_table_definition()
column_list = self.get_column_list()
count = 1
if orientation == "row":
tb_prefix = "ao_"
else:
tb_prefix = "co_"
for compress_type in self.compress_type_list:
for block_size in self.block_size_list:
compress_lvl_list = self.get_compresslevel_list(compress_type)
for compress_level in compress_lvl_list:
create_table_string = ""
tablename = ""
if encoding_type == "storage_directive" :
tablename = tb_prefix + "crtb_stg_dir_" + compress_type + "_" + block_size + "_" + str(compress_level)
column_str = ""
test_listname = "create_" + encoding_type
list_file = open(local_path(listfilename), "r")
create_table_string = create_table_string + "CREATE TABLE " + tablename + " (id SERIAL," + "\n" + "\t"
for line in list_file:
column_str = column_str + line.strip('\n') + " ENCODING (" + "compresstype=" + compress_type + ",compresslevel=" + str(compress_level) + ",blocksize=" + block_size + ")," + "\n"
column_str = column_str[:-2]
create_table_string = create_table_string + " " + column_str + ") WITH (appendonly=true, orientation=column) distributed randomly;"
list_file.close()
elif encoding_type == "with":
if orientation == "row" and compress_type == "rle_type":
                            continue
test_listname = "create_" + encoding_type + "_" + orientation
tablename = tb_prefix + "crtb_with_" + orientation + "_" + compress_type + "_" + block_size + "_" + str(compress_level)
create_table_string = create_table_string + "CREATE TABLE " + tablename + " \n" + "\t" + tabledefinition + " )" + "\n"
create_table_string = create_table_string + " WITH (appendonly=true, orientation=" + orientation + ",compresstype=" + compress_type + ",compresslevel=" + str(compress_level) + ",blocksize=" + block_size + ") distributed randomly;"
elif encoding_type == "column_reference":
test_listname = "create_" + encoding_type + "_" + reference_type
tablename = tb_prefix + "crtb_col_ref_" + reference_type + "_" + compress_type + "_" + block_size + "_" + str(compress_level)
create_table_string = "CREATE TABLE " + tablename + "\n" + "\t" + tabledefinition + " " + "\n"
column_str = ""
if reference_type == "column":
for column_name in column_list:
column_str = column_str + ", COLUMN " + column_name + " ENCODING (" + "compresstype=" + compress_type + ",compresslevel=" + str(compress_level) + ",blocksize=" + block_size + ")" + "\n"
create_table_string = create_table_string + column_str + ") WITH (appendonly=true, orientation=column) distributed randomly;"
else:
column_str = ", DEFAULT COLUMN ENCODING (" + "compresstype=" + compress_type + ",compresslevel=" + str(compress_level) + ",blocksize=" + block_size + ")"
create_table_string = create_table_string + column_str + " ) WITH (appendonly=true, orientation=column) distributed randomly;"
sqlfilename = tablename + ".sql"
### Generate the sql file
if block_size in ('8192', '32768', '65536'):
sql_dir = tb_prefix + test_listname + '/small'
elif block_size == '1048576':
if compress_type == 'zlib':
if compress_level in (1,2,3,4,5):
sql_dir = tb_prefix + test_listname + '/large_1G_zlib'
else:
sql_dir = tb_prefix + test_listname + '/large_1G_zlib_2'
else:
sql_dir = tb_prefix + test_listname + '/large_1G_quick_rle'
else:
if compress_type == 'zlib':
if compress_level in (1,2,3,4,5):
sql_dir = tb_prefix + test_listname + '/large_2G_zlib'
else:
sql_dir = tb_prefix + test_listname + '/large_2G_zlib_2'
else:
sql_dir = tb_prefix + test_listname + '/large_2G_quick_rle'
sqlfile1 = open(local_path(sql_dir + '/' +sqlfilename), "w")
self.generate_sql_files(sqlfile1, tablename, create_table_string, block_size,compress_type)
# Alter column tests
self.alter_column_tests(tablename, sqlfile1)
# Drop the table
sqlfile1.write("--Drop table \n")
sqlfile1.write("DROP table " + tablename + "; \n\n")
# Create the table
sqlfile1.write("--Create table again and insert data \n")
sqlfile1.write(create_table_string + "\n")
self.insert_data(tablename, sqlfile1, block_size,compress_type)
# Drop a column
sqlfile1.write("--Alter table drop a column \n")
sqlfile1.write("Alter table " + tablename + " Drop column a12; \n")
#Create a CTAS table
create_ctas_string = "CREATE TABLE " + tablename + "_ctas WITH (appendonly=true, orientation=column) AS Select * from " + tablename + ";\n\n"
sqlfile1.write("--Create CTAS table \n\n Drop table if exists " + tablename + "_ctas ;\n")
sqlfile1.write("--Create a CTAS table \n")
sqlfile1.write(create_ctas_string)
count += 1
def create_partition_table(self,encoding_type="column_reference", orientation="column", reference_type="column", part_level="part"):
        ''' Generate SQL files for create table with different encoding types: WITH clause, column_reference.
        Each file creates different types of partition tables with the corresponding compression encoding.'''
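        # Sketch of a "with"-encoded range partition built below (names and
        # option values are illustrative, not taken from a real run):
        #   CREATE TABLE ao_wt_partzlib8192_1 (...) WITH (appendonly=true, orientation=row)
        #   distributed randomly Partition by range(a1) (start(1) end(5000) every(1000)
        #   WITH (appendonly=true, orientation=row,compresstype=zlib,compresslevel=1,blocksize=8192));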
test_listname = ""
tabledefinition = self.get_table_definition()
count = 1
if orientation == "row":
tb_prefix = "ao_"
else:
tb_prefix = "co_"
part_str1 = ""
part_str2 = ""
if part_level == "part":
part_str1 = "Partition by range(a1) (start(1) end(5000) every(1000)"
part_str2 = "Partition by list(a2) (partition p1 values ('M'), partition p2 values ('F') "
else:
part_str1 = " Partition by range(a1) Subpartition by list(a2) subpartition template ( default subpartition df_sp, subpartition sp1 values('M') , subpartition sp2 values('F') "
part_str2 = " Partition by list(a2) Subpartition by range(a1) subpartition template (default subpartition df_sp, start(1) end(5000) every(1000)"
for compress_type in self.compress_type_list:
for block_size in self.block_size_list:
compress_lvl_list = self.get_compresslevel_list(compress_type)
for compress_level in compress_lvl_list:
create_table_string1 = ""
create_table_string2 = ""
tablename = ""
nonpart_column_str = " COLUMN a5 ENCODING (" + "compresstype=" + self.alter_comprtype[compress_type] + ",compresslevel=1, blocksize=" + block_size + ")"
if encoding_type == "with":
if orientation == "row" and compress_type == "rle_type":
continue;
if part_level == 'sub_part' and block_size in ("1048576", "2097152"): #out of memeory issue with large blocksize and partitions
continue;
test_listname = "create_with_" + orientation + "_" + part_level
tablename = tb_prefix + "wt_" + part_level + compress_type + block_size + "_" + str(compress_level)
create_table_string1 = "CREATE TABLE " + tablename + " \n" + "\t" + tabledefinition + " )" + "\n"
create_table_string2 = "CREATE TABLE " + tablename + "_2 \n" + "\t" + tabledefinition + " )" + "\n"
if part_level == "part":
create_table_string1 = create_table_string1 + " WITH (appendonly=true, orientation=" + orientation + ") distributed randomly " + part_str1 + " \n WITH (appendonly=true, orientation=" + orientation + ",compresstype=" + compress_type + ",compresslevel=" + str(compress_level) + ",blocksize=" + block_size + "));"
create_table_string2 = create_table_string2 + " WITH (appendonly=true, orientation=" + orientation + ") distributed randomly " + part_str2 + " \n WITH (appendonly=true, orientation=" + orientation + ",compresstype=" + compress_type + ",compresslevel=" + str(compress_level) + ",blocksize=" + block_size + "));"
else:
create_table_string1 = create_table_string1 + " WITH (appendonly=true, orientation=" + orientation + ") distributed randomly " + part_str1 + " \n WITH (appendonly=true, orientation=" + orientation + ",compresstype=" + compress_type + ",compresslevel=" + str(compress_level) + ",blocksize=" + block_size + ")) (start(1) end(5000) every(1000) );"
create_table_string2 = create_table_string2 + " WITH (appendonly=true, orientation=" + orientation + ") distributed randomly " + part_str2 + " \n WITH (appendonly=true, orientation=" + orientation + ",compresstype=" + compress_type + ",compresslevel=" + str(compress_level) + ",blocksize=" + block_size + ")) (partition p1 values ('M'), partition p2 values ('F'));"
elif encoding_type == "column_reference":
                        if part_level == 'sub_part' and block_size in ("1048576", "2097152"):  # out of memory issue with large blocksize and partitions
                            continue
test_listname = "create_" + encoding_type + "_" + reference_type + "_" + part_level
tablename = tb_prefix + "cr_" + part_level + compress_type + block_size + "_" + str(compress_level)
create_table_string1 = "CREATE TABLE " + tablename + " \n" + "\t" + tabledefinition + " ) WITH (appendonly=true, orientation=column) distributed randomly \n"
create_table_string2 = "CREATE TABLE " + tablename + "_2 \n" + "\t" + tabledefinition + " ) WITH (appendonly=true, orientation=column) distributed randomly \n"
column_str = " ENCODING (" + "compresstype=" + compress_type + ",compresslevel=" + str(compress_level) + ",blocksize=" + block_size + ")"
default_str = " DEFAULT COLUMN ENCODING (" + "compresstype=" + compress_type + ",compresslevel=" + str(compress_level) + ",blocksize=" + block_size + ")"
if part_level == "part":
create_table_string1 = create_table_string1 + part_str1 + " ,\n COLUMN a1" + column_str + ",\n" + nonpart_column_str + ",\n" + default_str + ");"
create_table_string2 = create_table_string2 + part_str2 + " ,\n COLUMN a2" + column_str + ",\n" + nonpart_column_str + ", \n" + default_str + ");"
else:
create_table_string1 = create_table_string1 + part_str1 + " , \n COLUMN a2 " + column_str + ", \n COLUMN a1 encoding (compresstype = " + self.alter_comprtype[compress_type] + "),\n" + nonpart_column_str + ",\n" + default_str + ") (start(1) end(5000) every(1000));"
create_table_string2 = create_table_string2 + part_str2 + ", \n COLUMN a2 " + column_str + ", \n COLUMN a1 encoding (compresstype = " + self.alter_comprtype[compress_type] + "),\n" + nonpart_column_str + ",\n" + default_str + ") (partition p1 values('F'), partition p2 values ('M'));"
sqlfilename = tablename + ".sql"
### Generate the sql file
if block_size in ('8192', '32768', '65536'):
sql_dir = tb_prefix + test_listname + '/small'
elif block_size == '1048576':
if compress_type == 'zlib':
if compress_level in (1,2,3,4,5):
sql_dir = tb_prefix + test_listname + '/large_1G_zlib'
else:
sql_dir = tb_prefix + test_listname + '/large_1G_zlib_2'
else:
sql_dir = tb_prefix + test_listname + '/large_1G_quick_rle'
else:
if compress_type == 'zlib':
if compress_level in (1,2,3,4,5):
sql_dir = tb_prefix + test_listname + '/large_2G_zlib'
else:
sql_dir = tb_prefix + test_listname + '/large_2G_zlib_2'
else:
sql_dir = tb_prefix + test_listname + '/large_2G_quick_rle'
sqlfile6 = open(local_path(sql_dir + '/' +sqlfilename), "w")
self.generate_sql_files_part(sqlfile6, tablename, create_table_string1, block_size,compress_type, part_level, "range", encoding_type, orientation)
self.generate_sql_files_part(sqlfile6, tablename + "_2", create_table_string2, block_size,compress_type, part_level, "list", encoding_type, orientation)
count += 1
def create_table_columns_with_different_encoding_type(self,storage_directive="no", column_reference="no"):
        ''' Generate SQL files for create table with columns having a different storage_directive encoding per column'''
listfilename = "column_list"
tabledefinition = self.get_table_definition()
column_list = self.get_column_list()
test_listname = ""
count = 1
for compress_lvl in self.compress_level_list:
tablename = ""
create_table_string = ""
column_ref_str = ""
storage_dir_str = ""
list_file = open(local_path(listfilename), "r")
clm_list = list_file.readlines()
for i in range(len(clm_list)):
if(i % 4 == 1):
compress_type = self.compress_type_list[0]
compress_level = 1
block_size = self.block_size_list[1]
elif(i % 4 == 2):
compress_type = self.compress_type_list[1]
if (compress_lvl < 5):
compress_level = compress_lvl
else:
compress_level = 1
block_size = self.block_size_list[2]
else:
compress_type = self.compress_type_list[2]
compress_level = compress_lvl
block_size = self.block_size_list[3]
if column_reference == "yes" and (i % 3 <> 3):
column_ref_str = column_ref_str + ", COLUMN " + column_list[i] + " ENCODING (" + "compresstype=" + compress_type + ",compresslevel=" + str(compress_level) + ",blocksize=" + block_size + ")" + "\n"
if storage_directive == "yes" :
if (i % 3 <> 3):
storage_dir_str = storage_dir_str + clm_list[i].strip('\n') + " ENCODING (" + "compresstype=" + compress_type + ",compresslevel=" + str(compress_level) + ",blocksize=" + block_size + ")," + "\n"
else:
storage_dir_str = storage_dir_str + clm_list[i].strip('\n') + ", \n"
storage_dir_str = storage_dir_str[:-2]
column_ref_str = column_ref_str[:-1]
list_file.close()
if storage_directive == "yes" and column_reference == "yes" :
test_listname = "create_col_with_storage_directive_and_col_ref"
tablename = "co_" + "crtb_with_strg_dir_and_col_ref_" + str(compress_lvl)
create_table_string = "CREATE TABLE " + tablename + " ( \n id SERIAL," + storage_dir_str + column_ref_str + ",DEFAULT COLUMN ENCODING (compresstype=quicklz,blocksize=8192)) WITH (appendonly=true, orientation=column) distributed randomly;"
elif column_reference == "yes":
tablename = "co_" + "crtb_col_with_diff_col_ref_" + str(compress_lvl)
test_listname = "create_col_with_diff_column_reference"
create_table_string = "CREATE TABLE " + tablename + "\n" + "\t" + tabledefinition + " " + "\n" + column_ref_str + ") WITH (appendonly=true, orientation=column) distributed randomly;"
elif storage_directive == "yes":
tablename = "co_" + "crtb_col_with_diff_strg_dir_" + str(compress_lvl)
test_listname = "create_col_with_diff_storage_directive"
create_table_string = "CREATE TABLE " + tablename + " ( \n id SERIAL," + storage_dir_str + ") WITH (appendonly=true, orientation=column) distributed randomly;"
sqlfilename = tablename + ".sql"
sql_dir = test_listname
sqlfile3 = open(local_path(sql_dir + '/' +sqlfilename), "w")
self.generate_sql_files(sqlfile3, tablename, create_table_string, block_size,compress_type)
# Alter column tests
self.alter_column_tests(tablename, sqlfile3)
count += 1
def alter_table(self):
''' Alter table add column with encoding and alter table alter column encoding'''
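        # Shape of the ALTER statements emitted below (the column definition
        # comes from the column_list_with_default file; values are illustrative):
        #   Alter table co_alter_table_add_zlib_8192_1 ADD COLUMN <column definition>
        #       ENCODING (compresstype=zlib,compresslevel=1,blocksize=8192);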
listfilename = "column_list_with_default"
tabledefinition = self.get_table_definition()
count = 1
create_table_string = ""
for block_size in ("8192", "32768", "65536"):
for compress_type in self.compress_type_list:
compress_lvl_list = self.get_compresslevel_list(compress_type)
for compress_lvl in compress_lvl_list:
tablename = "co_" + "alter_table_add_" + compress_type + "_" + block_size + "_" + str(compress_lvl)
create_table_string = "CREATE TABLE " + tablename + " ( \n id SERIAL, DEFAULT COLUMN ENCODING (compresstype=" + self.alter_comprtype[compress_type] + ",blocksize=8192,compresslevel=1)) WITH (appendonly=true, orientation=column) distributed randomly ;"
sqlfilename = tablename + ".sql"
sql_dir = 'co_alter_table_add'
sqlfile4 = open(local_path(sql_dir + '/' +sqlfilename), "w")
sqlfile4.write("--\n -- Drop table if exists\n --\n")
sqlfile4.write("DROP TABLE if exists " + tablename + ";\n\n")
sqlfile4.write("DROP TABLE if exists " + tablename + "_uncompr; \n\n")
sqlfile4.write("--\n -- Create table\n --\n")
sqlfile4.write(create_table_string + "\n")
list_file = open(local_path(listfilename), "r")
clm_list = list_file.readlines()
for i in range(len(clm_list)):
alter_str = "Alter table " + tablename + " ADD COLUMN " + clm_list[i].strip('\n') + " ENCODING (" + "compresstype=" + compress_type + ",compresslevel=" + str(compress_lvl) + ",blocksize=" + block_size + ");" + "\n"
sqlfile4.write(alter_str + "\n")
#### Insert data ###
sqlfile4.write("--\n -- Insert data to the table\n --\n")
self.insert_data(tablename, sqlfile4, block_size,compress_type)
### Alter table SET distributed by
sqlfile4.write("--\n--Alter table set distributed by \n")
sqlfile4.write("ALTER table " + tablename + " set with ( reorganize='true') distributed by (a1);\n")
### Create an uncompressed table of same definition for comparing data ###
sqlfile4.write("-- Create Uncompressed table of same schema definition" + "\n")
uncompressed_table_string = "CREATE TABLE " + tablename + "_uncompr " + tabledefinition + ")" + " WITH (appendonly=true, orientation=column) distributed randomly;"
sqlfile4.write(uncompressed_table_string + "\n\n")
#### Insert to uncompressed table ###
sqlfile4.write("--\n-- Insert to uncompressed table\n --\n")
self.insert_data(tablename + "_uncompr", sqlfile4, block_size,compress_type)
###Validation
self.validation_sqls(tablename, sqlfile4)
# Alter column tests
self.alter_column_tests(tablename, sqlfile4)
count += 1
def alter_type(self):
'''Alter the different data types with encoding: Create a table using these types '''
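        # Shape of the ALTER TYPE statements emitted below (values illustrative);
        # each one is followed by a pg_type_encoding lookup to confirm the result:
        #   ALTER TYPE <data type> SET DEFAULT ENCODING (compresstype=zlib,compresslevel=1,blocksize=8192);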
listfilename = "datatype_list"
tabledefinition = self.get_table_definition()
test_listname = "alter_type"
count = 1
create_table_string = ""
for compress_type in self.compress_type_list:
for block_size in ("8192", "32768", "65536"):
compress_lvl_list = self.get_compresslevel_list(compress_type)
for compress_level in compress_lvl_list:
tablename = "co_" + "alter_type_" + compress_type + "_" + block_size + "_" + str(compress_level)
list_file = open(local_path(listfilename), "r")
sqlfilename = tablename + ".sql"
sql_dir = 'co_alter_type'
sqlfile5 = open(local_path(sql_dir + '/' +sqlfilename), "w")
for data_type in list_file:
data_type = data_type.strip('\n')
alter_str = "ALTER TYPE " + data_type + " SET DEFAULT ENCODING (compresstype=" + compress_type + ",compresslevel=" + str(compress_level) + ",blocksize=" + block_size + ");" + "\n"
sqlfile5.write(alter_str + "\n")
sqlfile5.write("select typoptions from pg_type_encoding where typid='" + data_type + " '::regtype;\n\n")
create_table_string = "CREATE TABLE " + tablename + "\n" + "\t" + tabledefinition + ") WITH (appendonly=true, orientation=column) distributed randomly;" + "\n"
### Create table
sqlfile5.write("--\n -- Drop table if exists\n --\n")
sqlfile5.write("DROP TABLE if exists " + tablename + ";\n\n")
sqlfile5.write("DROP TABLE if exists " + tablename + "_uncompr; \n\n")
sqlfile5.write("-- Create table " + "\n")
sqlfile5.write(create_table_string + "\n\n")
#### Insert data ###
sqlfile5.write("--\n -- Insert data to the table\n --\n")
self.insert_data(tablename, sqlfile5, block_size,compress_type)
### Create an uncompressed table of same definition for comparing data ###
sqlfile5.write("-- Create Uncompressed table of same schema definition" + "\n")
sqlfile5.write("-- First Alter the data types back to compression=None" + "\n")
list_file2 = open(local_path(listfilename), "r")
for data_type in list_file2:
alter2_str = "ALTER TYPE " + data_type.strip('\n') + " SET DEFAULT ENCODING (compresstype=none,compresslevel=0);" + "\n"
sqlfile5.write(alter2_str + "\n")
sqlfile5.write("select typoptions from pg_type_encoding where typid='" + data_type + " '::regtype;\n\n")
uncompressed_table_string = "CREATE TABLE " + tablename + "_uncompr " + tabledefinition + ")" + " WITH (appendonly=true, orientation=column) distributed randomly;"
sqlfile5.write(uncompressed_table_string + "\n\n")
#### Insert to uncompressed table ###
sqlfile5.write("--\n-- Insert to uncompressed table\n --\n")
self.insert_data(tablename + "_uncompr", sqlfile5, block_size,compress_type)
###Validation
self.validation_sqls(tablename, sqlfile5)
#Alter type drop column
sqlfile5.write("--Alter table drop a column \n")
sqlfile5.write("Alter table " + tablename + " Drop column a12; \n")
#Alter type add column with encoding
sqlfile5.write("--Alter table add a column \n")
sqlfile5.write("Alter table " + tablename + " Add column a12 text default 'default value' encoding (compresstype=" + self.alter_comprtype[compress_type] + ",compresslevel=1,blocksize=32768); \n")
count += 1
def create_type(self):
'''Create different data types with encoding: Create a table using these types '''
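        # Each iteration below creates, per base type, a shell type, an in/out
        # function pair, and then the full type, e.g. for int with zlib (values
        # taken from the tables above, layout illustrative):
        #   CREATE TYPE int_zlib( input = int_zlib_in, output = int_zlib_out,
        #       internallength = 4, default = 55, passedbyvalue,
        #       compresstype=zlib, blocksize=8192, compresslevel=1);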
datatype_list = ["int", "char", "text", "varchar", "date", "timestamp"]
type_func_in = {"int":"int4in", "char":"charin", "text":"textin", "date":"date_in", "varchar":"varcharin", "timestamp":"timestamp_in"}
type_func_out = {"int":"int4out", "char":"charout", "text":"textout", "date":"date_out", "varchar":"varcharout", "timestamp":"timestamp_out"}
type_default = {"int":55, "char":" 'asd' ", "text":" 'hfkdshfkjsdhflkshadfkhsadflkh' ", "date":" '2001-12-11' ", "varchar":" 'ajhgdjagdjasdkjashk' ", "timestamp":" '2001-12-24 02:26:11' "}
test_listname = "create_type"
count = 1
create_table_string = ""
for blocksize in self.block_size_list:
for compress_type in self.compress_type_list:
compress_lvl_list = self.get_compresslevel_list(compress_type)
for compress_level in compress_lvl_list:
tablename = "co_" + "create_type_" + compress_type + "_" + blocksize + "_" + str(compress_level)
sqlfilename = tablename + ".sql"
sql_dir = 'co_create_type'
sqlfile5 = open(local_path(sql_dir + '/' +sqlfilename), "w")
for dtype in datatype_list:
data_type = dtype + "_" + compress_type
sqlfile5.write("DROP type if exists " + data_type + " cascade ; \n\n")
shell_type_str = "CREATE type " + data_type + ";"
sqlfile5.write(shell_type_str + "\n")
function_in_str = "CREATE FUNCTION " + data_type + "_in(cstring) \n RETURNS " + data_type + "\n AS '" + type_func_in[dtype] + "' \n LANGUAGE internal IMMUTABLE STRICT; \n"
function_out_str = "CREATE FUNCTION " + data_type + "_out(" + data_type + ") \n RETURNS cstring" "\n AS '" + type_func_out[dtype] + "' \n LANGUAGE internal IMMUTABLE STRICT; \n"
sqlfile5.write(function_in_str + "\n")
sqlfile5.write(function_out_str + "\n")
compress_str = " compresstype=" + compress_type + ",\n blocksize=" + blocksize + ",\n compresslevel=" + str(compress_level)
if dtype == "varchar" or dtype == "text":
create_type_str = "CREATE TYPE " + data_type + "( \n input = " + data_type + "_in ,\n output = " + data_type + "_out ,\n internallength = variable, \n default =" + str(type_default[dtype]) + ", \n"
else:
create_type_str = "CREATE TYPE " + data_type + "( \n input = " + data_type + "_in ,\n output = " + data_type + "_out ,\n internallength = 4, \n default =" + str(type_default[dtype]) + ", \n passedbyvalue, \n"
create_type_str = create_type_str + compress_str + ");\n"
sqlfile5.write(create_type_str + "\n")
#Drop and recreate the type
sqlfile5.write("--Drop and recreate the data type \n\n ")
sqlfile5.write("Drop type if exists " + data_type + " cascade;\n\n")
sqlfile5.write(function_in_str + "\n\n")
sqlfile5.write(function_out_str + "\n\n")
sqlfile5.write(create_type_str + "\n\n")
sqlfile5.write("select typoptions from pg_type_encoding where typid='" + data_type + " '::regtype;\n\n")
create_table_string = "CREATE TABLE " + tablename + "\n" + "\t (id serial, a1 int_" + compress_type + ", a2 char_" + compress_type + ", a3 text_" + compress_type + ", a4 date_" + compress_type + ", a5 varchar_" + compress_type + ", a6 timestamp_" + compress_type + " ) WITH (appendonly=true, orientation=column) distributed randomly;" + "\n"
### Create table
sqlfile5.write("DROP table if exists " + tablename + "; \n")
sqlfile5.write("-- Create table " + "\n")
sqlfile5.write(create_table_string + "\n\n")
sqlfile5.write("\d+ " + tablename + "\n\n")
#### Insert data ###
sqlfile5.write("INSERT into " + tablename + " DEFAULT VALUES ; \n")
sqlfile5.write("Select * from " + tablename + ";\n\n")
sqlfile5.write("Insert into " + tablename + " select * from " + tablename + "; \n" + "Insert into " + tablename + " select * from " + tablename + "; \n")
sqlfile5.write("Insert into " + tablename + " select * from " + tablename + "; \n" + "Insert into " + tablename + " select * from " + tablename + "; \n")
sqlfile5.write("Insert into " + tablename + " select * from " + tablename + "; \n" + "Insert into " + tablename + " select * from " + tablename + "; \n\n")
sqlfile5.write("Select * from " + tablename + ";\n\n")
#Drop a column
sqlfile5.write("--Alter table drop a column \n")
sqlfile5.write("Alter table " + tablename + " Drop column a2; \n")
sqlfile5.write("Insert into " + tablename + "(a1,a3,a4,a5,a6) select a1,a3,a4,a5,a6 from "+ tablename + " ;\n")
sqlfile5.write("Select count(*) from " + tablename + "; \n\n")
#Rename a column
sqlfile5.write("--Alter table rename a column \n")
sqlfile5.write("Alter table " + tablename + " Rename column a3 TO after_rename_a3; \n")
sqlfile5.write("--Insert data to the table, select count(*)\n")
sqlfile5.write("Insert into " + tablename + "(a1,after_rename_a3,a4,a5,a6) select a1,after_rename_a3,a4,a5,a6 from "+ tablename + " ;\n")
sqlfile5.write("Select count(*) from " + tablename + "; \n\n")
#Alter type to new encoding
sqlfile5.write("Alter type int_" + compress_type + " set default encoding (compresstype="+ self.alter_comprtype[compress_type] +",compresslevel=1);\n\n")
sqlfile5.write("--Add a column \n Alter table "+ tablename + " Add column new_cl int_"+ compress_type + " default '5'; \n\n")
sqlfile5.write("\d+ " + tablename +"\n\n")
sqlfile5.write("Insert into " + tablename + "(a1,after_rename_a3,a4,a5,a6) select a1,after_rename_a3,a4,a5,a6 from "+ tablename + " ;\n")
sqlfile5.write("Select count(*) from " + tablename + "; \n\n")
count += 1
def generate_sqls(self):
self.create_table("storage_directive", "column", "column")
self.create_table("column_reference", "column", "default")
self.create_table("column_reference", "column", "column")
self.create_table("with", "row", "column")
self.create_table("with", "column", "column")
self.create_partition_table("with", "row", "column", "part")
self.create_partition_table("with", "row", "column", "sub_part")
self.create_partition_table("with", "column", "column", "part")
self.create_partition_table("with", "column", "column", "sub_part")
self.create_partition_table("column_reference", "column", "column", "part")
self.create_partition_table("column_reference", "column", "column", "sub_part")
self.create_table_columns_with_different_encoding_type("no", "yes")
self.create_table_columns_with_different_encoding_type("yes", "no")
self.create_table_columns_with_different_encoding_type("yes","yes")
self.alter_table()
self.alter_type()
self.create_type()
tinctest.logger.info('Done with generating the sqls... Lets create the copy files for inserting data ...')
self.generate_copy_files()
| [
"jyih@pivotal.io"
] | jyih@pivotal.io |
9d6d65fc99c6672c9e5e685bdd430006548c5761 | 41a008ceea2ae75b94cf2110a1370af1f789ff3f | /lava_scheduler_app/south_migrations/0026_auto__add_field_device_device_version.py | e529cb46a7b185e6da7d6aff50367fac8771b08e | [] | no_license | guanhe0/lava_v1 | 937916a0009c0a3f801e61f7580b96e324da64b1 | c49e753ce55104e3eadb0126088b7580a39446fe | refs/heads/master | 2022-10-28T02:33:52.924608 | 2017-01-04T07:24:59 | 2017-01-04T08:43:37 | 78,068,030 | 0 | 1 | null | 2022-10-07T02:00:16 | 2017-01-05T01:36:27 | Python | UTF-8 | Python | false | false | 13,044 | py | # -*- coding: utf-8 -*-
from south.db import db
from south.v2 import SchemaMigration
class Migration(SchemaMigration):
def forwards(self, orm):
# Adding field 'Device.device_version'
db.add_column('lava_scheduler_app_device', 'device_version',
self.gf('django.db.models.fields.CharField')(default=None, max_length=200, null=True),
keep_default=False)
def backwards(self, orm):
# Deleting field 'Device.device_version'
db.delete_column('lava_scheduler_app_device', 'device_version')
models = {
'auth.group': {
'Meta': {'object_name': 'Group'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '80'}),
'permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'})
},
'auth.permission': {
'Meta': {'ordering': "('content_type__app_label', 'content_type__model', 'codename')", 'unique_together': "(('content_type', 'codename'),)", 'object_name': 'Permission'},
'codename': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['contenttypes.ContentType']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
},
'auth.user': {
'Meta': {'object_name': 'User'},
'date_joined': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'blank': 'True'}),
'first_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'groups': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Group']", 'symmetrical': 'False', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'is_staff': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'is_superuser': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'last_login': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'last_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'password': ('django.db.models.fields.CharField', [], {'max_length': '128'}),
'user_permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'}),
'username': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '30'})
},
'contenttypes.contenttype': {
'Meta': {'ordering': "('name',)", 'unique_together': "(('app_label', 'model'),)", 'object_name': 'ContentType', 'db_table': "'django_content_type'"},
'app_label': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'model': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'})
},
'dashboard_app.bundle': {
'Meta': {'ordering': "['-uploaded_on']", 'object_name': 'Bundle'},
'_gz_content': ('django.db.models.fields.files.FileField', [], {'max_length': '100', 'null': 'True', 'db_column': "'gz_content'"}),
'_raw_content': ('django.db.models.fields.files.FileField', [], {'max_length': '100', 'null': 'True', 'db_column': "'content'"}),
'bundle_stream': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'bundles'", 'to': "orm['dashboard_app.BundleStream']"}),
'content_filename': ('django.db.models.fields.CharField', [], {'max_length': '256'}),
'content_sha1': ('django.db.models.fields.CharField', [], {'max_length': '40', 'unique': 'True', 'null': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_deserialized': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'uploaded_by': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'uploaded_bundles'", 'null': 'True', 'to': "orm['auth.User']"}),
'uploaded_on': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.utcnow'})
},
'dashboard_app.bundlestream': {
'Meta': {'object_name': 'BundleStream'},
'group': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.Group']", 'null': 'True', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_anonymous': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'is_public': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '64', 'blank': 'True'}),
'pathname': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '128'}),
'slug': ('django.db.models.fields.CharField', [], {'max_length': '64', 'blank': 'True'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']", 'null': 'True', 'blank': 'True'})
},
'lava_scheduler_app.device': {
'Meta': {'object_name': 'Device'},
'current_job': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'+'", 'null': 'True', 'on_delete': 'models.SET_NULL', 'to': "orm['lava_scheduler_app.TestJob']", 'blank': 'True', 'unique': 'True'}),
'device_type': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['lava_scheduler_app.DeviceType']"}),
'device_version': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '200', 'null': 'True'}),
'health_status': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'hostname': ('django.db.models.fields.CharField', [], {'max_length': '200', 'primary_key': 'True'}),
'last_health_report_job': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'+'", 'null': 'True', 'on_delete': 'models.SET_NULL', 'to': "orm['lava_scheduler_app.TestJob']", 'blank': 'True', 'unique': 'True'}),
'status': ('django.db.models.fields.IntegerField', [], {'default': '1'}),
'tags': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['lava_scheduler_app.Tag']", 'symmetrical': 'False', 'blank': 'True'})
},
'lava_scheduler_app.devicestatetransition': {
'Meta': {'object_name': 'DeviceStateTransition'},
'created_by': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']", 'null': 'True', 'on_delete': 'models.SET_NULL', 'blank': 'True'}),
'created_on': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'device': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'transitions'", 'to': "orm['lava_scheduler_app.Device']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'job': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['lava_scheduler_app.TestJob']", 'null': 'True', 'on_delete': 'models.SET_NULL', 'blank': 'True'}),
'message': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
'new_state': ('django.db.models.fields.IntegerField', [], {}),
'old_state': ('django.db.models.fields.IntegerField', [], {})
},
'lava_scheduler_app.devicetype': {
'Meta': {'object_name': 'DeviceType'},
'display': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'health_check_job': ('django.db.models.fields.TextField', [], {'default': 'None', 'null': 'True', 'blank': 'True'}),
'name': ('django.db.models.fields.SlugField', [], {'max_length': '50', 'primary_key': 'True'}),
'use_celery': ('django.db.models.fields.BooleanField', [], {'default': 'False'})
},
'lava_scheduler_app.tag': {
'Meta': {'object_name': 'Tag'},
'description': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.SlugField', [], {'unique': 'True', 'max_length': '50'})
},
'lava_scheduler_app.testjob': {
'Meta': {'object_name': 'TestJob'},
'_results_bundle': ('django.db.models.fields.related.OneToOneField', [], {'null': 'True', 'db_column': "'results_bundle_id'", 'on_delete': 'models.SET_NULL', 'to': "orm['dashboard_app.Bundle']", 'blank': 'True', 'unique': 'True'}),
'_results_link': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '400', 'null': 'True', 'db_column': "'results_link'", 'blank': 'True'}),
'actual_device': ('django.db.models.fields.related.ForeignKey', [], {'default': 'None', 'related_name': "'+'", 'null': 'True', 'blank': 'True', 'to': "orm['lava_scheduler_app.Device']"}),
'definition': ('django.db.models.fields.TextField', [], {}),
'description': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '200', 'null': 'True', 'blank': 'True'}),
'end_time': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'}),
'group': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.Group']", 'null': 'True', 'blank': 'True'}),
'health_check': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_public': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'log_file': ('django.db.models.fields.files.FileField', [], {'default': 'None', 'max_length': '100', 'null': 'True', 'blank': 'True'}),
'requested_device': ('django.db.models.fields.related.ForeignKey', [], {'default': 'None', 'related_name': "'+'", 'null': 'True', 'blank': 'True', 'to': "orm['lava_scheduler_app.Device']"}),
'requested_device_type': ('django.db.models.fields.related.ForeignKey', [], {'default': 'None', 'related_name': "'+'", 'null': 'True', 'blank': 'True', 'to': "orm['lava_scheduler_app.DeviceType']"}),
'start_time': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'}),
'status': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'submit_time': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'submit_token': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['linaro_django_xmlrpc.AuthToken']", 'null': 'True', 'on_delete': 'models.SET_NULL', 'blank': 'True'}),
'submitter': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'+'", 'to': "orm['auth.User']"}),
'tags': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['lava_scheduler_app.Tag']", 'symmetrical': 'False', 'blank': 'True'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']", 'null': 'True', 'blank': 'True'})
},
'linaro_django_xmlrpc.authtoken': {
'Meta': {'object_name': 'AuthToken'},
'created_on': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'}),
'description': ('django.db.models.fields.TextField', [], {'default': "''", 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'last_used_on': ('django.db.models.fields.DateTimeField', [], {'null': 'True'}),
'secret': ('django.db.models.fields.CharField', [], {'default': "'ql3hon1ufbemikjdr23ps3qzhpmq9gsrbryfiy5b8aokm0wo7w5u9b8r2yaybtgu0hp943x5a00ifjh5kqclpu3sbbei1um33c7axjp59sa7sdi2xxfpfq0xp0hw7ya0'", 'unique': 'True', 'max_length': '128'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'auth_tokens'", 'to': "orm['auth.User']"})
}
}
complete_apps = ['lava_scheduler_app']
| [
"fanghuangcai@163.com"
] | fanghuangcai@163.com |
a396dee88c812cab4ef400af373230a30d9681b9 | 59a7f64d91b8466074630f0006f048cad66f01e8 | /focuslock/noneWidgets.py | 19b199146ee8eff83e006db7a537ec2367e25328 | [] | no_license | amancebo/acadia_new | cda1eafba65af71e0c4889f25c7dd527aa06abdc | 0c4bbd382140b92a85338b8b3335b2b660b8266b | refs/heads/master | 2021-03-30T16:49:46.252851 | 2015-06-01T23:46:22 | 2015-06-01T23:46:22 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,791 | py | #!/usr/bin/python
#
# Dummy classes for use when you have some but not all
# of the focus lock functionality.
#
# Hazen 12/09
#
# Fake QPD
class QPD():
def __init__(self):
pass
def qpdScan(self):
return [1000.0, 0.0, 0.0]
def shutDown(self):
pass
# Fake nano-positioner
class NanoP():
def __init__(self):
pass
def moveTo(self, axis, position):
pass
def shutDown(self):
pass
# Fake IR laser
class IRLaser():
def __init__(self):
pass
def on(self):
pass
def off(self):
pass
#
# The MIT License
#
# Copyright (c) 2009 Zhuang Lab, Harvard University
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
| [
"amancebo8@gmail.com"
] | amancebo8@gmail.com |
a277ddf116ac88b0ac509ddd3d6c40bd7491a945 | 896f6d118902da9d207783c623ef9ef81076dbbe | /core/arxiv/submission/domain/event/flag.py | c728c2b127e2616322ecca1990ace5cf2c25854e | [
"MIT"
] | permissive | Based-GOD-FUCKED-MY-Bitch-FUCKZIG/arxiv-submission-core | ab3b5f52053e4240a8ccecbbb2bdabcf106f5531 | 4cc26d0c47bc4a8afb8da75da302f050d69b07a7 | refs/heads/master | 2020-07-04T08:01:51.178399 | 2019-07-23T17:38:29 | 2019-07-23T17:38:29 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 8,334 | py | """Events/commands related to quality assurance."""
from typing import Optional, Union
from dataclasses import field
from .util import dataclass
from .base import Event
from ..flag import Flag, ContentFlag, MetadataFlag, UserFlag
from ..submission import Submission, SubmissionMetadata, Hold, Waiver
from ...exceptions import InvalidEvent
@dataclass()
class AddFlag(Event):
"""Base class for flag events; not for direct use."""
NAME = "add flag"
NAMED = "flag added"
flag_data: Optional[Union[int, str, float, dict, list]] \
= field(default=None)
comment: Optional[str] = field(default=None)
def validate(self, submission: Submission) -> None:
"""Not implemented."""
raise NotImplementedError("Invoke a child event instead")
def project(self, submission: Submission) -> Submission:
"""Not implemented."""
raise NotImplementedError("Invoke a child event instead")
@dataclass()
class RemoveFlag(Event):
"""Remove a :class:`.domain.Flag` from a submission."""
NAME = "remove flag"
NAMED = "flag removed"
flag_id: Optional[str] = field(default=None)
"""This is the ``event_id`` of the event that added the flag."""
def validate(self, submission: Submission) -> None:
"""Verify that the flag exists."""
if self.flag_id not in submission.flags:
raise InvalidEvent(self, f"Unknown flag: {self.flag_id}")
def project(self, submission: Submission) -> Submission:
"""Remove the flag from the submission."""
submission.flags.pop(self.flag_id)
return submission
@dataclass()
class AddContentFlag(AddFlag):
"""Add a :class:`.domain.ContentFlag` related to content."""
NAME = "add content flag"
NAMED = "content flag added"
flag_type: Optional[ContentFlag.Type] = None
def validate(self, submission: Submission) -> None:
"""Verify that we have a known flag."""
if self.flag_type not in ContentFlag.Type:
raise InvalidEvent(self, f"Unknown content flag: {self.flag_type}")
def project(self, submission: Submission) -> Submission:
"""Add the flag to the submission."""
submission.flags[self.event_id] = ContentFlag(
event_id=self.event_id,
created=self.created,
creator=self.creator,
proxy=self.proxy,
flag_type=self.flag_type,
flag_data=self.flag_data,
comment=self.comment
)
return submission
def __post_init__(self) -> None:
"""Make sure that `flag_type` is an enum instance."""
if type(self.flag_type) is str:
self.flag_type = ContentFlag.Type(self.flag_type)
super(AddContentFlag, self).__post_init__()
@dataclass()
class AddMetadataFlag(AddFlag):
"""Add a :class:`.domain.MetadataFlag` related to the metadata."""
NAME = "add metadata flag"
NAMED = "metadata flag added"
flag_type: Optional[MetadataFlag.Type] = field(default=None)
field: Optional[str] = field(default=None)
"""Name of the metadata field to which the flag applies."""
def validate(self, submission: Submission) -> None:
"""Verify that we have a known flag and metadata field."""
if self.flag_type not in MetadataFlag.Type:
raise InvalidEvent(self, f"Unknown meta flag: {self.flag_type}")
if not hasattr(SubmissionMetadata, self.field):
raise InvalidEvent(self, "Not a valid metadata field")
def project(self, submission: Submission) -> Submission:
"""Add the flag to the submission."""
submission.flags[self.event_id] = MetadataFlag(
event_id=self.event_id,
created=self.created,
creator=self.creator,
proxy=self.proxy,
flag_type=self.flag_type,
flag_data=self.flag_data,
comment=self.comment,
field=self.field
)
return submission
def __post_init__(self) -> None:
"""Make sure that `flag_type` is an enum instance."""
if type(self.flag_type) is str:
self.flag_type = MetadataFlag.Type(self.flag_type)
super(AddMetadataFlag, self).__post_init__()
@dataclass()
class AddUserFlag(AddFlag):
"""Add a :class:`.domain.UserFlag` related to the submitter."""
NAME = "add user flag"
NAMED = "user flag added"
flag_type: Optional[UserFlag.Type] = field(default=None)
def validate(self, submission: Submission) -> None:
"""Verify that we have a known flag."""
        if self.flag_type not in UserFlag.Type:
            raise InvalidEvent(self, f"Unknown user flag: {self.flag_type}")
def project(self, submission: Submission) -> Submission:
"""Add the flag to the submission."""
submission.flags[self.event_id] = UserFlag(
event_id=self.event_id,
created=self.created,
creator=self.creator,
flag_type=self.flag_type,
flag_data=self.flag_data,
comment=self.comment
)
return submission
def __post_init__(self) -> None:
"""Make sure that `flag_type` is an enum instance."""
if type(self.flag_type) is str:
self.flag_type = UserFlag.Type(self.flag_type)
super(AddUserFlag, self).__post_init__()
@dataclass()
class AddHold(Event):
"""Add a :class:`.Hold` to a :class:`.Submission`."""
NAME = "add hold"
NAMED = "hold added"
hold_type: Hold.Type = field(default=Hold.Type.PATCH)
hold_reason: Optional[str] = field(default_factory=str)
def validate(self, submission: Submission) -> None:
pass
def project(self, submission: Submission) -> Submission:
"""Add the hold to the submission."""
submission.holds[self.event_id] = Hold(
event_id=self.event_id,
created=self.created,
creator=self.creator,
hold_type=self.hold_type,
hold_reason=self.hold_reason
)
# submission.status = Submission.ON_HOLD
return submission
def __post_init__(self) -> None:
"""Make sure that `hold_type` is an enum instance."""
if type(self.hold_type) is str:
self.hold_type = Hold.Type(self.hold_type)
super(AddHold, self).__post_init__()
@dataclass()
class RemoveHold(Event):
"""Remove a :class:`.Hold` from a :class:`.Submission`."""
NAME = "remove hold"
NAMED = "hold removed"
hold_event_id: str = field(default_factory=str)
hold_type: Hold.Type = field(default=Hold.Type.PATCH)
removal_reason: Optional[str] = field(default_factory=str)
def validate(self, submission: Submission) -> None:
if self.hold_event_id not in submission.holds:
raise InvalidEvent(self, "No such hold")
def project(self, submission: Submission) -> Submission:
"""Remove the hold from the submission."""
submission.holds.pop(self.hold_event_id)
# submission.status = Submission.SUBMITTED
return submission
def __post_init__(self) -> None:
"""Make sure that `hold_type` is an enum instance."""
if type(self.hold_type) is str:
self.hold_type = Hold.Type(self.hold_type)
super(RemoveHold, self).__post_init__()
@dataclass()
class AddWaiver(Event):
"""Add a :class:`.Waiver` to a :class:`.Submission`."""
NAME = "add waiver"
NAMED = "waiver added"
waiver_type: Hold.Type = field(default=Hold.Type.SOURCE_OVERSIZE)
waiver_reason: str = field(default_factory=str)
def validate(self, submission: Submission) -> None:
pass
def project(self, submission: Submission) -> Submission:
"""Add the :class:`.Waiver` to the :class:`.Submission`."""
submission.waivers[self.event_id] = Waiver(
event_id=self.event_id,
created=self.created,
creator=self.creator,
waiver_type=self.waiver_type,
waiver_reason=self.waiver_reason
)
return submission
def __post_init__(self) -> None:
"""Make sure that `waiver_type` is an enum instance."""
if type(self.waiver_type) is str:
self.waiver_type = Hold.Type(self.waiver_type)
super(AddWaiver, self).__post_init__()
| [
"brp53@cornell.edu"
] | brp53@cornell.edu |
9f2777d2bca827752a0dc1fd85b9000f6c520599 | 6d54a7b26d0eb82152a549a6a9dfde656687752c | /scripts/py_matter_idl/matter_idl/lint/lint_rules_parser.py | 9499e360399230add24b5ebd18a3326bfd70a103 | [
"Apache-2.0",
"LicenseRef-scancode-warranty-disclaimer"
] | permissive | project-chip/connectedhomeip | 81a123d675cf527773f70047d1ed1c43be5ffe6d | ea3970a7f11cd227ac55917edaa835a2a9bc4fc8 | refs/heads/master | 2023-09-01T11:43:37.546040 | 2023-09-01T08:01:32 | 2023-09-01T08:01:32 | 244,694,174 | 6,409 | 1,789 | Apache-2.0 | 2023-09-14T20:56:31 | 2020-03-03T17:05:10 | C++ | UTF-8 | Python | false | false | 12,005 | py | #!/usr/bin/env python
import logging
import os
import xml.etree.ElementTree
from dataclasses import dataclass
from enum import Enum, auto
from typing import List, MutableMapping, Optional, Tuple, Union
from lark import Lark
from lark.visitors import Discard, Transformer, v_args
try:
from .types import (AttributeRequirement, ClusterCommandRequirement, ClusterRequirement, ClusterValidationRule,
RequiredAttributesRule, RequiredCommandsRule)
except ImportError:
import sys
sys.path.append(os.path.join(os.path.abspath(
os.path.dirname(__file__)), "..", ".."))
from matter_idl.lint.types import (AttributeRequirement, ClusterCommandRequirement, ClusterRequirement, ClusterValidationRule,
RequiredAttributesRule, RequiredCommandsRule)
class ElementNotFoundError(Exception):
def __init__(self, name):
super().__init__(f"Could not find {name}")
def parseNumberString(n: str) -> int:
if n.startswith('0x'):
return int(n[2:], 16)
else:
return int(n)
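# For example: parseNumberString("0x10") == 16 and parseNumberString("16") == 16.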
@dataclass
class RequiredAttribute:
name: str
code: int
@dataclass
class RequiredCommand:
name: str
code: int
@dataclass
class DecodedCluster:
name: str
code: int
required_attributes: List[RequiredAttribute]
required_commands: List[RequiredCommand]
class ClusterActionEnum(Enum):
REQUIRE = auto()
REJECT = auto()
@dataclass
class ServerClusterRequirement:
action: ClusterActionEnum
id: Union[str, int]
def DecodeClusterFromXml(element: xml.etree.ElementTree.Element):
if element.tag != 'cluster':
logging.error("Not a cluster element: %r" % element)
return None
# cluster elements contain among other children
# - name (general name for this cluster)
# - code (unique identifier, may be hex or numeric)
# - attribute with side, code and optional attributes
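    # For illustration, a minimal cluster element of the shape handled here
    # could look like this (names and codes are hypothetical):
    #   <cluster>
    #     <name>On Off</name>
    #     <code>0x0006</code>
    #     <attribute side="server" code="0x0000">OnOff</attribute>
    #     <command source="client" code="0x00" name="Off"/>
    #   </cluster>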
try:
name = element.find('name')
if name is None or not name.text:
raise ElementNotFoundError('name')
name = name.text.replace(' ', '')
required_attributes = []
required_commands = []
for attr in element.findall('attribute'):
if attr.attrib['side'] != 'server':
continue
if 'optional' in attr.attrib and attr.attrib['optional'] == 'true':
continue
# when introducing access controls, the content of attributes may either be:
# <attribute ...>myName</attribute>
# or
# <attribute ...><description>myName</description><access .../>...</attribute>
attr_name = attr.text
description = attr.find('description')
if description is not None:
attr_name = description.text
required_attributes.append(
RequiredAttribute(
name=attr_name,
code=parseNumberString(attr.attrib['code'])
))
for cmd in element.findall('command'):
if cmd.attrib['source'] != 'client':
continue
if 'optional' in cmd.attrib and cmd.attrib['optional'] == 'true':
continue
required_commands.append(RequiredCommand(
name=cmd.attrib["name"], code=parseNumberString(cmd.attrib['code'])))
code = element.find('code')
if code is None:
raise Exception("Failed to find cluster code")
return DecodedCluster(
name=name,
code=parseNumberString(code.text),
required_attributes=required_attributes,
required_commands=required_commands
)
except Exception:
logging.exception("Failed to decode cluster %r" % element)
return None
def ClustersInXmlFile(path: str):
logging.info("Loading XML from %s" % path)
# root is expected to be just a "configurator" object
configurator = xml.etree.ElementTree.parse(path).getroot()
for child in configurator:
if child.tag != 'cluster':
continue
yield child
class LintRulesContext:
"""Represents a context for loadint lint rules.
Handles:
- loading referenced files (matter xml definitions)
- adding linter rules as data is parsed
- Looking up identifiers for various rules
"""
def __init__(self):
self._required_attributes_rule = RequiredAttributesRule(
"Required attributes")
self._cluster_validation_rule = ClusterValidationRule(
"Cluster validation")
self._required_commands_rule = RequiredCommandsRule(
"Required commands")
# Map cluster names to the underlying code
self._cluster_codes: MutableMapping[str, int] = {}
def GetLinterRules(self):
return [self._required_attributes_rule, self._required_commands_rule, self._cluster_validation_rule]
def RequireAttribute(self, r: AttributeRequirement):
self._required_attributes_rule.RequireAttribute(r)
def FindClusterCode(self, name: str) -> Optional[Tuple[str, int]]:
if name not in self._cluster_codes:
# Name may be a number. If this can be parsed as a number, accept it anyway
try:
return "ID_%s" % name, parseNumberString(name)
except ValueError:
logging.error("UNKNOWN cluster name %s" % name)
logging.error("Known names: %s" %
(",".join(self._cluster_codes.keys()), ))
return None
else:
return name, self._cluster_codes[name]
def RequireClusterInEndpoint(self, name: str, code: int):
"""Mark that a specific cluster is always required in the given endpoint
"""
cluster_info = self.FindClusterCode(name)
if not cluster_info:
return
name, cluster_code = cluster_info
self._cluster_validation_rule.RequireClusterInEndpoint(ClusterRequirement(
endpoint_id=code,
cluster_code=cluster_code,
cluster_name=name,
))
def RejectClusterInEndpoint(self, name: str, code: int):
"""Mark that a specific cluster is always rejected in the given endpoint
"""
cluster_info = self.FindClusterCode(name)
if not cluster_info:
return
name, cluster_code = cluster_info
self._cluster_validation_rule.RejectClusterInEndpoint(ClusterRequirement(
endpoint_id=code,
cluster_code=cluster_code,
cluster_name=name,
))
def LoadXml(self, path: str):
"""Load XML data from the given path and add it to
internal processing. Adds attribute requirement rules
as needed.
"""
for cluster in ClustersInXmlFile(path):
decoded = DecodeClusterFromXml(cluster)
if not decoded:
continue
self._cluster_codes[decoded.name] = decoded.code
for attr in decoded.required_attributes:
self._required_attributes_rule.RequireAttribute(AttributeRequirement(
code=attr.code, name=attr.name, filter_cluster=decoded.code))
for cmd in decoded.required_commands:
self._required_commands_rule.RequireCommand(
ClusterCommandRequirement(
cluster_code=decoded.code,
command_code=cmd.code,
command_name=cmd.name
))
class LintRulesTransformer(Transformer):
"""
A transformer capable to transform data parsed by Lark according to
lint_rules_grammar.lark.
"""
def __init__(self, file_name: str):
self.context = LintRulesContext()
self.file_name = file_name
def positive_integer(self, tokens):
"""Numbers in the grammar are integers or hex numbers.
"""
if len(tokens) != 1:
raise Exception("Unexpected argument counts")
return parseNumberString(tokens[0].value)
@v_args(inline=True)
def negative_integer(self, value):
return -value
@v_args(inline=True)
def integer(self, value):
return value
def id(self, tokens):
"""An id is a string containing an identifier
"""
if len(tokens) != 1:
raise Exception("Unexpected argument counts")
return tokens[0].value
def ESCAPED_STRING(self, s):
# handle escapes, skip the start and end quotes
return s.value[1:-1].encode('utf-8').decode('unicode-escape')
def start(self, instructions):
# At this point processing is considered done, return all
# linter rules that were found
return self.context.GetLinterRules()
def instruction(self, instruction):
return Discard
def all_endpoint_rule(self, attributes):
for attribute in attributes:
self.context.RequireAttribute(attribute)
return Discard
@v_args(inline=True)
def load_xml(self, path):
if not os.path.isabs(path):
path = os.path.abspath(os.path.join(
os.path.dirname(self.file_name), path))
self.context.LoadXml(path)
@v_args(inline=True)
def required_global_attribute(self, name, code):
return AttributeRequirement(code=code, name=name)
@v_args(inline=True)
def specific_endpoint_rule(self, code, *requirements):
for requirement in requirements:
if requirement.action == ClusterActionEnum.REQUIRE:
self.context.RequireClusterInEndpoint(requirement.id, code)
elif requirement.action == ClusterActionEnum.REJECT:
self.context.RejectClusterInEndpoint(requirement.id, code)
else:
raise Exception("Unexpected requirement action %r" %
requirement.action)
return Discard
@v_args(inline=True)
def required_server_cluster(self, id):
return ServerClusterRequirement(ClusterActionEnum.REQUIRE, id)
@v_args(inline=True)
def rejected_server_cluster(self, id):
return ServerClusterRequirement(ClusterActionEnum.REJECT, id)
class Parser:
def __init__(self, parser, file_name: str):
self.parser = parser
self.file_name = file_name
def parse(self):
data = LintRulesTransformer(self.file_name).transform(
self.parser.parse(open(self.file_name, "rt").read()))
return data
def CreateParser(file_name: str):
"""
    Generates a parser that will process a lint rules file into a list of linter rules
"""
return Parser(
Lark.open('lint_rules_grammar.lark', rel_to=__file__, parser='lalr', propagate_positions=True), file_name=file_name)
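# A minimal usage sketch (the file name below is a placeholder, not a path
# from this repository):
#   rules = CreateParser('my_lint_rules.txt').parse()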
if __name__ == '__main__':
    # This parser is generally not intended to be run as a stand-alone binary.
    # The ability to run it directly exists for debugging and for printing out
    # the parsed rules.
import click
# Supported log levels, mapping string values required for argument
# parsing into logging constants
__LOG_LEVELS__ = {
'debug': logging.DEBUG,
'info': logging.INFO,
'warn': logging.WARN,
'fatal': logging.FATAL,
}
@click.command()
@click.option(
'--log-level',
default='INFO',
type=click.Choice(list(__LOG_LEVELS__.keys()), case_sensitive=False),
help='Determines the verbosity of script output.')
@click.argument('filename')
def main(log_level, filename=None):
logging.basicConfig(
level=__LOG_LEVELS__[log_level],
format='%(asctime)s %(levelname)-7s %(message)s',
)
logging.info("Starting to parse ...")
data = CreateParser(filename).parse()
logging.info("Parse completed")
logging.info("Data:")
logging.info("%r" % data)
main(auto_envvar_prefix='CHIP')
| [
"noreply@github.com"
] | project-chip.noreply@github.com |
b752d574ddacb2a426c671af560d8757f88254a7 | ca7aa979e7059467e158830b76673f5b77a0f5a3 | /Python_codes/p03434/s582963555.py | 202b3b79bca8e1fc753461f081b5a5eff838303c | [] | no_license | Aasthaengg/IBMdataset | 7abb6cbcc4fb03ef5ca68ac64ba460c4a64f8901 | f33f1c5c3b16d0ea8d1f5a7d479ad288bb3f48d8 | refs/heads/main | 2023-04-22T10:22:44.763102 | 2021-05-13T17:27:22 | 2021-05-13T17:27:22 | 367,112,348 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 447 | py | """Boot-camp-for-Beginners_Easy010_B_Toll-Gates_29-August-2020.py"""
N = int(input())
a = list(map(int, input().split()))
a = sorted(a, reverse=True)
Alice = [a[i] for i in range(0, len(a), 2)]
Bob = [a[i] for i in range(1, len(a), 2)]
#print(Alice)
#print(Bob)
s_max = sum(Alice)
s_min = sum(Bob)
#print(s_max)
#print(s_min)
print(s_max-s_min)
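# Quick check using what I believe is the first sample of the original
# problem (assumed values, not part of the submission): N = 2 with cards
# "3 1" gives Alice 3 and Bob 1, so the program prints 2.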
| [
"66529651+Aastha2104@users.noreply.github.com"
] | 66529651+Aastha2104@users.noreply.github.com |
b263b1252c936c5b02eccbceeade34b685d6ea60 | bc2effb57e82128b81371fb03547689255d5ef15 | /백준/다이나믹 프로그래밍/2579(계단 오르기).py | 8007584be6e6c9eadf04a66b339c98f4460e1283 | [] | no_license | CharmingCheol/python-algorithm | 393fa3a8921f76d25e0d3f02402eae529cc283ad | 61c8cddb72ab3b1fba84171e03f3a36f8c672648 | refs/heads/master | 2023-03-01T11:00:52.801945 | 2021-01-31T13:38:29 | 2021-01-31T13:38:29 | 229,561,513 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,153 | py | """
1. Find the pattern
 - If there is 1 stair
   > print nums[0]
 - If there are 2 stairs
   > print nums[0] + nums[1]
   > because both stairs can be climbed one step at a time
 - If there are 3 or more stairs
   > case1 = nums[index] + nums[index - 1] + dp[index - 3]
     case2 = nums[index] + dp[index - 2]
     dp[index] = max(case1, case2)
"""
import sys
size = int(sys.stdin.readline())
nums = list(int(sys.stdin.readline()) for _ in range(size))
dp = [0] * size
dp[0] = nums[0]
if size == 1:
print(nums[0])
elif size == 2:
print(nums[0] + nums[1])
else:
for index in range(1, 3):
        case1 = nums[index - 1] + nums[index]  # previous stair + current stair
        case2 = nums[index]  # current stair
        if index == 2:
            case2 += nums[index - 2]  # the stair two below
dp[index] = max(case1, case2)
for index in range(3, size):
        # current stair + previous stair + dp value from three stairs below
        case1 = nums[index] + nums[index - 1] + dp[index - 3]
        # current stair + dp value from two stairs below
case2 = nums[index] + dp[index - 2]
dp[index] = max(case1, case2)
print(dp[size - 1])
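# Worked example (assumed Baekjoon 2579 sample, not part of the original
# file): for the six stairs [10, 20, 15, 25, 10, 20] the optimal path steps
# on 10, 20, 25 and 20, so the program prints 75.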
| [
"54410332+chamincheol@users.noreply.github.com"
] | 54410332+chamincheol@users.noreply.github.com |
8127e734bff3bf1170acb97c2ed2a4638f4c4742 | 78e1a576b128c25b0e85234675f820a1d2d54595 | /urls_collector/migrations/0001_initial.py | a6930c1d867a6a24ca22fefa6f704642d985ef9d | [] | no_license | Evgenus/comeet-pdf-challenge | 7e0e3af9ddfd4558c37284d14c87ff5923b53e93 | 9dc3a3b1bbab0b06ed4f1a35f86fbb2826b64858 | refs/heads/master | 2021-05-10T10:20:26.112256 | 2018-01-22T13:32:36 | 2018-01-22T13:32:36 | 118,379,927 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,894 | py | # -*- coding: utf-8 -*-
# Generated by Django 1.11 on 2018-01-20 12:14
from __future__ import unicode_literals
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='Document',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', models.DateTimeField(auto_now_add=True)),
('filename', models.CharField(max_length=250)),
],
options={
'ordering': ('created',),
},
),
migrations.CreateModel(
name='Occurence',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('document', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='urls_collector.Document')),
],
),
migrations.CreateModel(
name='URL',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('url', models.CharField(max_length=2000)),
('documents', models.ManyToManyField(through='urls_collector.Occurence', to='urls_collector.Document')),
],
),
migrations.AddField(
model_name='occurence',
name='url',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='urls_collector.URL'),
),
migrations.AddField(
model_name='document',
name='urls',
field=models.ManyToManyField(through='urls_collector.Occurence', to='urls_collector.URL'),
),
]
| [
"chernyshov.eugene@gmail.com"
] | chernyshov.eugene@gmail.com |
9ea3a93d380538a95060def4a48b25f559346ddf | c8836d495a97c1c5183169d7c33c909ae579f69d | /worldcup/worldcup/matches/load_stage2_matches.py | db3ed408225dfda250c4b39e72393539ba1e7be8 | [
"MIT"
] | permissive | raprasad/worldcup | 4042f54231083d627d9cb9c9ed764eaf5914984e | e993bb3345d4be339211fac2698832886973ec7c | refs/heads/master | 2021-01-01T19:42:49.034715 | 2014-05-23T20:39:30 | 2014-05-23T20:39:30 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,269 | py | from datetime import datetime
from worldcup.matches.models import *
"""
# match num team 1 team 2 grid group date time
matches = '''49 1a 2b 1 6/26 16:00 Nelson Mandela Bay
50 1c 2d 1 6/26 20:30 Rustenburg
51 1d 2c 2 6/27 16:00 Mangaung / Bloemfontein
52 1b 2a 2 6/27 20:30 Johannesburg
53 1e 2f 3 6/28 16:00 Durban
54 1g 2h 3 6/28 20:30 Johannesburg
55 1f 2e 4 6/29 16:00 Tshwane/Pretoria
56 1h 2g 4 6/29 20:30 Cape Town
57 53 54 7/2 16:00 Nelson Mandela Bay
58 49 50 7/2 20:30 Johannesburg
59 52 51 7/3 16:00 Cape Town
60 55 56 7/3 20:30 Johannesburg
61 1 3 7/6 20:30 Cape Town
62 2 4 7/7 20:30 Durban
63 1,3 2,4 7/11 20:30 Nelson Mandela Bay
64 1,3 2,4 7/12 20:30 Johannesburg'''.split('\n')
from worldcup.matches.models import *
from worldcup.teams.models import get_team_not_determined
from datetime import datetime
try:
match_type = MatchType.objects.get(name="Knockout Stage")
except MatchType.DoesNotExist:
match_type =MatchType(name="Knockout Stage", last_day_to_predict=datetime(year=2010,month=6,day=25))
match_type.save()
for line in matches:
items = line.split('\t')
print items
    match_num, team1_choices, team2_choices, grid_group, dt, time_str, venue_str = items
mm, dd = dt.split('/')
hh, min = time_str.split(':')
dt_obj = datetime(year=2010, month=int(mm), day=int(dd), hour=int(hh), minute=int(min))
team1 = get_team_not_determined() #Team.objects.get(name=team1)
team2 = get_team_not_determined() #Team.objects.get(name=team2)
venues = Venue.objects.filter(name__startswith=venue_str)
if venues.count() == 0:
print 'venue not found: %s' % venue_str
else:
venue=venues[0]
match_num = int(match_num)
if match_num == 64:
mname = 'Game %s: Final' % match_num
elif match_num == 63:
mname = 'Game %s: 3rd/4th Place' % match_num
elif match_num in [61, 62]:
mname = 'Game %s: Semi Finals' % match_num
elif match_num in range(57,61):
mname = 'Game %s: Quarter Finals' % match_num
else:
mname = 'Game %s: Round of 16' % match_num
new_match = Match(name=mname, team1=team1, team2=team2, match_type=match_type, match_time=dt_obj)
new_match.venue = venue
new_match.save()
"""
| [
"raman_prasad@harvard.edu"
] | raman_prasad@harvard.edu |
3beb0061ecb0b50b4fea93fde1963afe43fda6aa | 2b16a66bfc186b52ed585081ae987e97cab8223b | /test/db/test_APIAliasGenerate.py | c488b48d65177957b58186d5b9fd019dedf7f3cd | [] | no_license | OldPickles/SKnowledgeGraph | d334000c7a41dd5014fd59154bbe070fcc754e4c | 6d131ad6bf3a09a5ce6461fa03690117d703c9e8 | refs/heads/master | 2022-01-09T11:27:00.043712 | 2019-06-06T07:57:06 | 2019-06-06T07:57:06 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 5,717 | py | from unittest import TestCase
from db.alias_util import APIAliasGeneratorFactory
from db.model import APIAlias, APIEntity
class TestAPIAliasGenerate(TestCase):
def test_get_method_qualifier_parameter_type(self):
generator = APIAliasGeneratorFactory.create_generator(
APIAlias.ALIAS_TYPE_SIMPLE_METHOD_WITH_QUALIFIER_PARAMETER_TYPE)
self.one_name(generator,
"org.xml.sax.ext.Attributes2Impl.addAttribute(java.lang.String,java.lang.String,java.lang.String,java.lang.String,java.lang.String)",
"addAttribute(java.lang.String,java.lang.String,java.lang.String,java.lang.String,java.lang.String)",
APIAlias.ALIAS_TYPE_SIMPLE_METHOD_WITH_QUALIFIER_PARAMETER_TYPE)
def test_get_alias_name(self):
generator = APIAliasGeneratorFactory.create_generator(
APIAlias.ALIAS_TYPE_SIMPLE_CLASS_NAME_METHOD_WITH_QUALIFIER_PARAMETER_TYPE)
self.one_name(generator,
"org.xml.sax.ext.Attributes2Impl.addAttribute(java.lang.String,java.lang.String,java.lang.String,java.lang.String,java.lang.String)",
"Attributes2Impl.addAttribute(java.lang.String,java.lang.String,java.lang.String,java.lang.String,java.lang.String)",
APIAlias.ALIAS_TYPE_SIMPLE_CLASS_NAME_METHOD_WITH_QUALIFIER_PARAMETER_TYPE)
def test_get_simple_type_method_alias_name(self):
alias_type = APIAlias.ALIAS_TYPE_SIMPLE_NAME_METHOD_WITH_SIMPLE_PARAMETER_TYPE
generator = APIAliasGeneratorFactory.create_generator(
alias_type)
self.one_name(generator,
"org.xml.sax.ext.Attributes2Impl.addAttribute.drainTo(java.shared.Collection<? super java.shared.concurrent.PriorityBlockingQueue>)",
"drainTo(Collection<? super PriorityBlockingQueue>)",
alias_type)
self.one_name(generator,
"org.xml.sax.ext.Attributes2Impl.sort(T[],java.shared.Comparator<? super T>)",
"sort(T[],Comparator<? super T>)",
alias_type)
self.one_name(generator,
"org.xml.sax.ext.Attributes2Impl.compute(java.lang.Object,java.shared.function.BiFunction<? super,? super,? extends java.lang.Object>)",
"compute(Object,BiFunction<? super,? super,? extends Object>)",
alias_type)
self.one_name(generator,
"org.xml.sax.ext.Attributes2Impl.getBoolean(java.lang.String)",
"getBoolean(String)",
alias_type)
self.one_name(generator,
"org.xml.sax.ext.Attributes2Impl.createXMLStreamWriter(OutputStream,java.lang.String)",
"createXMLStreamWriter(OutputStream,String)",
alias_type)
def test_get_simple_type_class_method_alias_name(self):
alias_type = APIAlias.ALIAS_TYPE_SIMPLE_CLASS_NAME_METHOD_WITH_SIMPLE_PARAMETER_TYPE
generator = APIAliasGeneratorFactory.create_generator(
alias_type)
self.one_name(generator,
"org.xml.sax.ext.Attributes2Impl.drainTo(java.shared.Collection<? super java.shared.concurrent.PriorityBlockingQueue>)",
"Attributes2Impl.drainTo(Collection<? super PriorityBlockingQueue>)",
alias_type)
self.one_name(generator,
"org.xml.sax.ext.Attributes2Impl.sort(T[],java.shared.Comparator<? super T>)",
"Attributes2Impl.sort(T[],Comparator<? super T>)",
alias_type)
self.one_name(generator,
"org.xml.sax.ext.Attributes2Impl.compute(java.lang.Object,java.shared.function.BiFunction<? super,? super,? extends java.lang.Object>)",
"Attributes2Impl.compute(Object,BiFunction<? super,? super,? extends Object>)",
alias_type)
self.one_name(generator,
"org.xml.sax.ext.Attributes2Impl.getBoolean(java.lang.String)",
"Attributes2Impl.getBoolean(String)",
alias_type)
self.one_name(generator,
"org.xml.sax.ext.Attributes2Impl.createXMLStreamWriter(OutputStream,java.lang.String)",
"Attributes2Impl.createXMLStreamWriter(OutputStream,String)",
alias_type)
def test_get_camel_case_alias_name(self):
alias_type = APIAlias.ALIAS_TYPE_CAMEL_CASE_TO_SPACE
generator = APIAliasGeneratorFactory.create_generator(
alias_type)
examples = [
(
"org.xml.sax.ext.Attributes2Impl.drainTo(java.shared.Collection<? super java.shared.concurrent.PriorityBlockingQueue>)",
"drain To"),
("org.xml.sax.ext.Attributes2Impl",
"Attributes2 Impl"),
("Attributes2Impl.createXMLStreamWriter(OutputStream,String)",
"create XML Stream Writer",),
("sort", None),
("java.lang.Object", None)]
for example in examples:
self.one_name(generator, example[0], example[1], alias_type)
def one_name(self, generator, qualifier_name, right_answer, right_type):
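        """Check that the first generated alias matches the expected name and
        type, or that no aliases are generated when right_answer is None.
        """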
aliases = generator.generate_aliases(
APIEntity(qualified_name=qualifier_name, api_type=APIEntity.API_TYPE_METHOD))
        print(aliases)
if right_answer is not None:
self.assertEqual(aliases[0].alias,
right_answer)
self.assertEqual(aliases[0].type, right_type)
else:
self.assertEqual(aliases, [])
| [
"467701860@qq.com"
] | 467701860@qq.com |
bec3c4222a01732f770d2e955ab9729e86491cfe | e0b6b5708aa81fcb6f9bae26be06b7f15984274c | /leetcode/most-common-word/epicarts.py | 35e87f0d0999a72290e69ce5e72d45587680d89f | [] | no_license | DongLee99/algorithm-study | aab021b71f04140bbad2842b868ef063e7bf1117 | aebe1bc4e461be47b335337c6f05d6a25e19e80a | refs/heads/master | 2023-03-18T05:39:52.708544 | 2021-03-17T14:56:05 | 2021-03-17T14:56:05 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 497 | py | import re
from typing import List
import collections
class Solution:
def mostCommonWord(self, paragraph: str, banned: List[str]) -> str:
        words = re.sub(r'[^\w]', ' ', paragraph).lower().split()
words = [word for word in words if word not in banned]
counts = collections.Counter(words)
return counts.most_common(1)[0][0]
paragraph = "Bob hit a ball, the hit BALL flew far after it was hit."
banned = ["hit"]
print(Solution().mostCommonWord(paragraph, banned))
| [
"0505zxc@gmail.com"
] | 0505zxc@gmail.com |
2b585ec0903bda35bd59d65a1be1bd1fa14c9967 | a2c7bc7f0cf5c18ba84e9a605cfc722fbf169901 | /python_1001_to_2000/1167_Minimum_Cost_to_Connect_Sticks.py | 54e5622039c7c0898974b8651fd030e7ebadff08 | [] | no_license | jakehoare/leetcode | 3bf9edd499034ce32be462d4c197af9a8ed53b5d | 05e0beff0047f0ad399d0b46d625bb8d3459814e | refs/heads/master | 2022-02-07T04:03:20.659422 | 2022-01-26T22:03:00 | 2022-01-26T22:03:00 | 71,602,471 | 58 | 38 | null | null | null | null | UTF-8 | Python | false | false | 1,044 | py | _author_ = 'jake'
_project_ = 'leetcode'
# https://leetcode.com/problems/minimum-cost-to-connect-sticks/
# You have some sticks with positive integer lengths.
# You can connect any two sticks of lengths X and Y into one stick by paying a cost of X + Y.
# You perform this action until there is one stick remaining.
# Return the minimum cost of connecting all the given sticks into one stick in this way.
# Since the cost is proportional to the lengths of sticks connected, we want to connect the shortest sticks.
# Maintain a heap and repeatedly connect the shortest 2 sticks, putting the connected stick back on the heap.
# Time - O(n log n)
# Space - O(n)
import heapq
class Solution(object):
def connectSticks(self, sticks):
"""
:type sticks: List[int]
:rtype: int
"""
cost = 0
heapq.heapify(sticks)
while len(sticks) > 1:
x, y = heapq.heappop(sticks), heapq.heappop(sticks)
cost += x + y
heapq.heappush(sticks, x + y)
return cost
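# Minimal usage sketch (illustrative values): joining [2, 4, 3] connects
# 2 + 3 for a cost of 5, then 5 + 4 for a cost of 9, a total of 14.
# print(Solution().connectSticks([2, 4, 3]))  # -> 14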
| [
"jake_hoare@hotmail.com"
] | jake_hoare@hotmail.com |
219c8305644dc74a59f5c9a419b18ef748fdc326 | 8af3b7ed8c4694dd0109de50e9b235ec35838d02 | /src/purchase/utils.py | a9ae4a00cd9510a7c0bd6b8a62347aad8bb1895f | [
"MIT",
"LicenseRef-scancode-free-unknown"
] | permissive | vishalhjoshi/croma | 9b8640a9ce46320e865211c31fb3b4b503d47f6f | 5b033a1136a9a8290118801f0e7092aebd9d64cc | refs/heads/master | 2020-06-19T14:57:42.909264 | 2019-05-16T20:10:58 | 2019-05-16T20:10:58 | 196,753,381 | 1 | 0 | MIT | 2019-07-13T18:23:29 | 2019-07-13T18:23:29 | null | UTF-8 | Python | false | false | 1,979 | py |
def create_PurchaseInvDtl_JSON_QuerySet(PurInvDtl_queryset):
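    """Serialize a queryset of purchase invoice detail rows into a list of
    plain dicts for JSON output. Fields that depend on the related batch
    fall back to defaults ("-" or 0) when the batch data is unavailable.
    """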
sale_dtl_item_arr = []
for item in PurInvDtl_queryset:
item_set = {}
batch_instance = item.get_batch_instance()
item_set['item_name'] = str(item.get_item_name())
item_set['item_id'] = str(item.get_item_id())
item_set['batch_no'] = str(item.batch_no)
try:
item_set['expiry'] = str(batch_instance.expiry.strftime("%Y-%m"))
except:
item_set['expiry'] = "-"
item_set['strip_qty'] = int(item.strip_qty)
item_set['nos_qty'] = int(item.nos_qty)
item_set['strip_free'] = int(item.strip_free)
item_set['nos_free'] = int(item.nos_free)
item_set['rate'] = float(item.rate)
try:
item_set['mrp'] = float(batch_instance.mrp)
except:
item_set['mrp'] = 0.00
try:
item_set['pur_rate'] = float(batch_instance.strip_pur)
except:
item_set['pur_rate'] = 0.00
item_set['amount'] = float(item.amount)
item_set['discount'] = float(item.discount)
item_set['disc_amt'] = float(item.disc_amt)
item_set['disc_type'] = str(item.disc_type)
item_set['excise'] = float(item.excise)
item_set['excise_type'] = str(item.excise_type)
item_set['other_charge'] = float(item.other_charge)
item_set['conv'] = float(item.get_unit_conv())
item_set['sgst_amt'] = float(item.sgst_amt)
item_set['cgst_amt'] = float(item.cgst_amt)
gst = item.get_item_gst()
item_set['sgst'] = float(gst[1])
item_set['cgst'] = float(gst[0])
try:
item_set['trade_rate'] = float(batch_instance.trade_rt)
item_set['std_rate'] = float(batch_instance.std_rt)
item_set['inst_rate'] = float(batch_instance.inst_rt)
item_set['strip_stock'] = int(batch_instance.strip)
item_set['nos_stock'] = int(batch_instance.nos)
except:
item_set['trade_rate'] = 0.00
item_set['std_rate'] = 0.00
item_set['inst_rate'] = 0.00
item_set['strip_stock'] = 0
item_set['nos_stock'] = 0
item_set['deleted'] = 0
sale_dtl_item_arr.append(item_set)
return sale_dtl_item_arr
| [
"jaisinghal48@gmail.com"
] | jaisinghal48@gmail.com |
75575eb2638dceef09b04424f6e74afdc222d8e6 | 1838f4b35088773c7491122e3b2104d987245abc | /chanceServer/urls.py | 1de44501e2d0151747f7dcb04b12d7f288eb3ca0 | [] | no_license | Neeky/chanceServer | d1393d46f3ba28fd07e7497be0d2f94bb03e05fe | 944ffad9277884518d567bb10c47e0cb6fb2025b | refs/heads/master | 2021-01-06T20:44:27.333328 | 2017-08-12T04:36:49 | 2017-08-12T04:36:49 | 99,553,146 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 769 | py | """chanceServer URL Configuration
The `urlpatterns` list routes URLs to views. For more information please see:
https://docs.djangoproject.com/en/1.11/topics/http/urls/
Examples:
Function views
1. Add an import: from my_app import views
2. Add a URL to urlpatterns: url(r'^$', views.home, name='home')
Class-based views
1. Add an import: from other_app.views import Home
2. Add a URL to urlpatterns: url(r'^$', Home.as_view(), name='home')
Including another URLconf
1. Import the include() function: from django.conf.urls import url, include
2. Add a URL to urlpatterns: url(r'^blog/', include('blog.urls'))
"""
from django.conf.urls import url
from django.contrib import admin
urlpatterns = [
url(r'^admin/', admin.site.urls),
]
| [
"1721900707@qq.com"
] | 1721900707@qq.com |