blob_id stringlengths 40 40 | directory_id stringlengths 40 40 | path stringlengths 2 616 | content_id stringlengths 40 40 | detected_licenses listlengths 0 69 | license_type stringclasses 2 values | repo_name stringlengths 5 118 | snapshot_id stringlengths 40 40 | revision_id stringlengths 40 40 | branch_name stringlengths 4 63 | visit_date timestamp[us] | revision_date timestamp[us] | committer_date timestamp[us] | github_id int64 2.91k 686M ⌀ | star_events_count int64 0 209k | fork_events_count int64 0 110k | gha_license_id stringclasses 23 values | gha_event_created_at timestamp[us] | gha_created_at timestamp[us] | gha_language stringclasses 213 values | src_encoding stringclasses 30 values | language stringclasses 1 value | is_vendor bool 2 classes | is_generated bool 2 classes | length_bytes int64 2 10.3M | extension stringclasses 246 values | content stringlengths 2 10.3M | authors listlengths 1 1 | author_id stringlengths 0 212 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ad2d9777e6c12e9718f41a658bad6641ab83d38e | 8280e66f3d4df8a8bf5fc94456bdd92c0cc09679 | /cdcs.py | 969e17383afd27568cc1452fed01b01726b9a697 | [] | no_license | lukefromstarwars/cdcs | 9d8f453f466d0407033790c9bec89d7260f1259a | 79a6d74e62a32e015751fc0fd5da452629870cc1 | refs/heads/master | 2021-01-13T05:01:20.964843 | 2017-02-07T15:51:42 | 2017-02-07T15:51:42 | 81,186,160 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 8,429 | py | from utils import *
class DV:
    # df = read_pickle('Detailed_Institutions')
    # cols = get_cols_alphabetically(df)
    # for col in cols:
    #     print('{} = \'{}\''.format(col.lower(), col))
    acces = 'ACCES'
    activfr = 'ACTIVFR'
    adnum = 'ADNUM'
    adresfr = 'ADRESFR'
    agrement = 'AGREMENT'
    benef = 'BENEF'
    bnum = 'BNUM'
    but = 'BUT'
    categ = 'CATEG'
    commune = 'COMMUNE'
    email = 'EMAIL'
    email_nl = 'EMAIL_NL'
    fax = 'FAX'
    fiche = 'FICHE'
    http = 'HTTP'
    http_nl = 'HTTP_NL'
    langstat = 'LANGSTAT'
    latitude = 'LATITUDE'
    longitude = 'LONGITUDE'
    mother = 'MOTHER'
    natoffre = 'NATOFFRE'
    nmofffr = 'NMOFFFR'
    nmservfr = 'NMSERVFR'
    nmusefr = 'NMUSEFR'
    offrling = 'OFFRLING'
    permanfr = 'PERMANFR'
    postfr = 'POSTFR'
    remarque = 'REMARQUE'
    revue = 'REVUE'
    section = 'SECTION'
    section_id = 'SECTION_ID'
    statut = 'STATUT'
    statuut = 'STATUUT'
    subject = 'SUBJECT'
    subject_id = 'SUBJECT_ID'
    tel = 'TEL'
    tel_nl = 'TEL_NL'
    terrain = 'TERRAIN'
    theme = 'THEME'
    topic = 'TOPIC'
    topic_id = 'TOPIC_ID'
    xcoord = 'XCOORD'
    xcoord_th = 'XCOORD_TH'
    ycoord = 'YCOORD'
    ycoord_th = 'YCOORD_TH'
    zip = 'ZIP'
def fiches_as_pickle():
    df = pd.read_excel(excel_folder + 'CDCS_fiches.xls')
    df.columns
    # 'PM_RP', national enterprise register identification
    cols = ['ACCES',
            'ADNUM',
            'ADRESFR',
            'AGREMENT',
            'ACTIVFR',
            'BENEF',
            'BNUM',
            'BUT',
            'CATEG',
            'COMMUNE',
            'EMAIL',
            'EMAIL_NL',
            'FAX',
            'FICHE',
            'HTTP',
            'HTTP_NL',
            'LANGSTAT',
            'LATITUDE',
            'LONGITUDE',
            'MOTHER',
            'NATOFFRE',
            'NMOFFFR',
            'NMSERVFR',
            'NMUSEFR',
            'OFFRLING',
            'PERMANFR',
            'POSTFR',
            'REMARQUE',
            'REVUE',
            'STATUT',
            'STATUUT',
            'TEL',
            'TEL_NL',
            'TERRAIN',
            'THEME',
            'XCOORD',
            'XCOORD_TH',
            'YCOORD',
            'YCOORD_TH',
            'ZIP']
    df = df[cols]
    get_cols_alphabetically(df)
    # parenthesize the comparison: `|` binds tighter than `==` in pandas expressions
    df.loc[(df[DV.xcoord] == 0) | df[DV.xcoord].isnull(), 'LATITUDE'] = 0
    df.loc[(df[DV.xcoord] == 0) | df[DV.xcoord].isnull(), 'LONGITUDE'] = 0
    df.dropna(subset=[DV.categ], inplace=True)
    df[DV.categ] = df[DV.categ]
    save_as_pickle(df, 'Institutions')
# def categories_as_pickle():
#     df_categories = pd.read_excel(excel_folder + 'CDCS_cats.xlsx')
#     df_categories.columns
#     df_categories = df_categories.fillna(999)
#
#     df_categories.SUB_id = df_categories.SUB_id.astype(int)
#     df_categories.MAIN_id = df_categories.MAIN_id.astype(int)
#     save_as_pickle(df_categories, 'Categories')
def rename_agreements():
    df = read_pickle('Institutions')
    old_str = ['Cfl', 'C.flamande', 'C.fla.', 'Cfl', 'Comm.flamande']
    new_str = 'Communauté flamande'
    # old_str = ['K&G']
    # new_str = 'Kind en Gezin'
    # old_str = ['Région Bruxelles-Capitale']
    # new_str = 'RBC'
    for old in old_str:  # renamed from `str` to avoid shadowing the built-in
        df[DV.agrement] = df[DV.agrement].str.replace(old, new_str)
    # str_contains = df[DV.agrement].str.contains(old_str, case=False, na=False)
    # df[str_contains]
    #
    agreements_list = get_unique_rows([DV.agrement], df)
    print_full(agreements_list)
    # save_as_xlsx(agreements_list, 'Agreements')
    save_as_pickle(df, 'Institutions_renamed')
def categories():
    df = pd.read_excel(excel_folder + 'CDCS_SECTIONS.xlsx')
    df = df.dropna(how='all')
    # parenthesize the comparisons: `&` binds tighter than `>` in pandas expressions
    df.loc[df[DV.topic_id].isnull() & (df[DV.topic_id].shift(-1) > 0), DV.subject_id] = 1
    new_ids = DataFrame(df[df[DV.subject_id] > 0][DV.subject_id])
    new_ids[DV.subject_id] = np.arange(1, new_ids.size + 1)
    del df[DV.subject_id]
    df = pd.concat([df, new_ids], axis=1)
    df.loc[df[DV.section].notnull(), DV.section_id] = 1
    new_ids = DataFrame(df[df[DV.section_id] > 0][DV.section_id])
    new_ids[DV.section_id] = np.arange(1, new_ids.size + 1)
    del df[DV.section_id]
    df = pd.concat([df, new_ids], axis=1)
    df.loc[df[DV.topic_id].isnull() & (df[DV.subject_id] > 0), 'SUBJECT'] = df['TOPIC']
    df.loc[df[DV.topic_id].isnull() & (df[DV.subject_id] > 0), 'TOPIC'] = np.nan
    df = df.dropna(how='all')
    df = df.fillna(method='ffill')
    print_full_rows(df, 10)
    save_as_pickle(df, 'Categories')
def detailed_institutions_as_pickle():
    # DV.category = "CATÉGORIE"
    # DV.maincategory = "CATÉGORIE PRINCIPALE"
    # rename_agreements()
    DV.agrement_organisation = 'AGREMENT_ORGANISATION'
    df = read_pickle('Institutions')
    get_cols_alphabetically(df)
    # split categories
    split_str = ','
    df_categories = df[DV.categ].str.split(split_str).apply(pd.Series, 1).stack()
    df_categories.dropna()
    df_categories.index = df_categories.index.droplevel(-1)
    df_categories.name = DV.categ
    # split agreement
    split_str = '\\r\\n'
    agreements = df[DV.agrement].str.split(split_str).apply(pd.Series, 1).stack()
    agreements = agreements.dropna()
    agreements.index = agreements.index.droplevel(-1)
    df_agreements = DataFrame(agreements, columns=[DV.agrement])
    get_unique_rows([DV.agrement], df_agreements)
    print_full(df_agreements)
    # split agreement from organization
    split_str = ' - '
    df_agreements = df_agreements[DV.agrement].str.split(split_str).apply(pd.Series, 1)
    df_agreements.dropna(how='all')
    df_agreements.columns = [DV.agrement, DV.agrement_organisation]
    df_agreements[DV.agrement_organisation] = df_agreements[DV.agrement_organisation].str.strip()
    get_unique_rows([DV.agrement_organisation, DV.agrement], df_agreements)
    # rename cols
    old_strs = [
        'Cfl',
        'C. flamande',
        'C. fla.',
        'Cfl',
        'Comm. flamande',
        'Com. flam.',
        'K&G',
        'RBC',
        'AUTORITÉ FÉDÉRALE'
    ]
    new_strs = [
        'Communauté flamande',
        'Communauté flamande',
        'Communauté flamande',
        'Communauté flamande',
        'Communauté flamande',
        'Communauté flamande',
        'Kind en Gezin',
        'Région Bruxelles-Capitale',
        'Fédéral'
    ]
    # df_agreements = df_agreements.replace(old_strs, new_strs)
    for o, n in zip(old_strs, new_strs):
        print(o, n)
        df_agreements[DV.agrement_organisation] = df_agreements[DV.agrement_organisation].str.replace(o, n)
    agreement_orgs = get_unique_rows([DV.agrement_organisation], df_agreements)
    save_as_xlsx(agreement_orgs, 'Agreement_orgs')
    # merge detailed institutions
    del df[DV.categ]
    del df[DV.agrement]
    df = df.join(df_categories)
    df[DV.categ] = df[DV.categ].str.strip()
    df = df[df[DV.categ] != '']
    df[DV.categ] = df[DV.categ].astype(int)
    df.dropna(subset=[DV.categ])
    get_unique_values(DV.categ, df)
    df_categories = read_pickle('Categories')
    df_categories = df_categories.dropna()
    df_categories[df_categories[DV.topic_id] == 482]
    df = pd.merge(df, df_categories, left_on=DV.categ, right_on=DV.topic_id)
    get_cols_alphabetically(df)
    save_as_pickle(df, 'Detailed_Institutions')
    # save_as_xlsx(df, 'Detailed_Institutions')
# save_as_xlsx(df, 'Detailed_Institutions')
def check_assuetudes_coords():
    df = read_pickle('Detailed_Institutions')
    get_unique_rows([DV.section, DV.section_id], df)
    # parenthesize the comparison: `|` binds tighter than `==` in pandas expressions
    df[(df[DV.xcoord] == 0) | df[DV.xcoord].isnull()]
def get_assuetudes():
    DV.fulladr = "FULLADRESS"
    df = read_pickle('Detailed_Institutions')
    df[DV.fulladr] = df[DV.adresfr] + ' ' + df[DV.adnum].astype(str) + ', ' + df[DV.postfr]
    df_categories = read_pickle('Categories')
    df_categories = df_categories.dropna()
    sections = get_unique_rows([DV.section, DV.section_id], df)
    subjects = get_unique_rows([DV.subject, DV.subject_id], df)
    topics = get_unique_rows([DV.topic, DV.topic_id], df)
    agreement = get_unique_rows([DV.agrement], df)
    save_as_xlsx(agreement, 'Agreement')
    cats = get_unique_rows([DV.section, DV.section_id, DV.subject, DV.subject_id, DV.topic, DV.topic_id], df)
    # save_as_xlsx(cats, 'sections')
    pvt = pd.pivot_table(cats, index=[DV.section, DV.subject])
    pvt.columns
    pvt.index
    cols = [
        DV.activfr,
        DV.agrement,
        DV.benef,
        DV.fiche,
        DV.fulladr,
        DV.nmofffr,
        DV.offrling,
        DV.permanfr,
        DV.section_id,
        DV.subject,
        DV.subject_id,
        DV.topic,
        DV.topic_id,
        DV.section
    ]
    cols_pvt = [
        DV.fiche,
        DV.nmofffr,
        DV.fulladr,
        DV.benef,
        DV.offrling,
        DV.permanfr,
        DV.agrement,
        DV.subject,
        DV.topic,
        DV.activfr
    ]
    # selected_cats = [18, 20, 16, 17, 4]
    selected_cats = [18]
    df_tmp = df[df[DV.section_id].isin(selected_cats)][cols]
    # print_full(df_tmp)
    df_pivot = pd.pivot_table(df_tmp, index=cols_pvt)
    df_pivot.to_html(open(html_folder + 'new_file.html', 'w'))
    df_institutions = df_pivot.reset_index()
    df_pivot = df_pivot.reset_index()
    print_full(df_pivot)
    print_full(df_pivot.index.droplevel(-1))
    df_pivot.unstack(level='TOPIC')
    del df_pivot[DV.topic_id]
    del df_pivot[DV.section_id]
    del df_pivot[DV.subject_id]
    df_pivot.stack()
    save_as_xlsx(df_institutions, 'Institutions_tox')
| [
"lukefromstarwars@gmail.com"
] | lukefromstarwars@gmail.com |
170d3334a15c242f409b50f67787dd80478f223b | 9743d5fd24822f79c156ad112229e25adb9ed6f6 | /xai/brain/wordbase/adjectives/_amoral.py | 1ffdd9157b8e7b56bb3e5ebbc4833764df514bda | [
"MIT"
] | permissive | cash2one/xai | de7adad1758f50dd6786bf0111e71a903f039b64 | e76f12c9f4dcf3ac1c7c08b0cc8844c0b0a104b6 | refs/heads/master | 2021-01-19T12:33:54.964379 | 2017-01-28T02:00:50 | 2017-01-28T02:00:50 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 384 | py |
# class header
class _AMORAL():
    def __init__(self):
        self.name = "AMORAL"
        self.definitions = [u'without moral principles: ']
        self.parents = []
        self.childen = []
        self.properties = []
        self.jsondata = {}
        self.specie = 'adjectives'

    def run(self, obj1, obj2):
        self.jsondata[obj2] = {}
        self.jsondata[obj2]['properties'] = self.name.lower()
        return self.jsondata
| [
"xingwang1991@gmail.com"
] | xingwang1991@gmail.com |
aab164dddd265e0c92853e355a08e84927d73f6b | 523e845ced594e5fe1a4222d978c5f2c22e3164f | /Final_Project/urls.py | b881f6e163d68ea661ddaa425f30ca560836785b | [] | no_license | craiglymus/Final_Project | d399411f422fb4e90f81720140ef5f229bf4d27b | d64cdb292ebab1f05fee27e6f8514e05895320c0 | refs/heads/master | 2020-04-08T13:43:02.208314 | 2018-12-06T21:16:20 | 2018-12-06T21:16:20 | 159,403,464 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 880 | py | from django.urls import path
from . import views
from django.conf.urls.static import static
from django.conf import settings
urlpatterns = [
    path('', views.index, name='index'),
    path('register', views.register, name='register'),
    path('login', views.user_login, name='user_login'),
    path('logout', views.user_logout, name='logout'),
    # path('api/users', views.sendJson, name='sendJson'),
    path('special', views.special, name='special'),
    path('map', views.map, name='map'),
    path('about', views.about, name='about'),
    path('like', views.like, name='like'),
    path('api/likes', views.sendJsonLikes, name='sendJsonLikes'),
    path('profile', views.profile_view, name='profile_view'),
    path('like/<int:pk>/delete', views.delete, name='delete'),
]
# if settings.DEBUG:
#     urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
"craiglymus@gmail.com"
] | craiglymus@gmail.com |
6f975e3d3182da9e70487af00419fad4fa355458 | 1dc837eb06d0c9778483dc20647ce6d3df77c310 | /loginReg_proj/manage.py | 6c9fe95be8357aa59e4ed4c718641f64c41f2b06 | [] | no_license | sbeck0109/DojoReads | afdc3cc0486456ca7ca78366da5f0e8e2478707b | 2ed828ff7bcbebf82170957bd561bfab161f4a9c | refs/heads/master | 2022-12-12T06:42:10.461439 | 2020-08-31T06:07:11 | 2020-08-31T06:07:11 | 291,629,308 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 633 | py | #!/usr/bin/env python
"""Django's command-line utility for administrative tasks."""
import os
import sys
def main():
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'loginReg_proj.settings')
    try:
        from django.core.management import execute_from_command_line
    except ImportError as exc:
        raise ImportError(
            "Couldn't import Django. Are you sure it's installed and "
            "available on your PYTHONPATH environment variable? Did you "
            "forget to activate a virtual environment?"
        ) from exc
    execute_from_command_line(sys.argv)


if __name__ == '__main__':
    main()
| [
"sbeck0109@gmail.com"
] | sbeck0109@gmail.com |
b19779ecded0d5e3d24b43d6c9b5854ce22747da | 5e63d35b296947a1e06fd8d180b20cd484080b74 | /site/config/settings/dev.py | 63eb37d0ace723ee1372647e5992ec3b7ea244dc | [] | no_license | XUJINKAI/DuoBlog | 3d883f0eeb29d0bcd6bc83982e70df82a999ec26 | 3b0273d31543cbc4a7e8e2ee4ca59d6035d2ce7e | refs/heads/master | 2021-01-22T17:28:49.186903 | 2018-06-19T11:16:11 | 2018-06-19T11:16:11 | 85,017,476 | 2 | 0 | null | 2017-03-18T11:44:06 | 2017-03-15T02:03:02 | Python | UTF-8 | Python | false | false | 4,542 | py | """
Django settings for config project.
Generated by 'django-admin startproject' using Django 1.10.4.
For more information on this file, see
https://docs.djangoproject.com/en/1.10/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/1.10/ref/settings/
"""
import os
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
# all user data saved here
USER_DATA_DIR = os.path.join(BASE_DIR, 'data')
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.10/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = 'a0l*+xp2a)_svomnw4an2ge6mx&k6p+amzr1g#f3(z%_siqikr'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
DEBUG_API = False
# //TODO
DEMO = False
ALLOWED_HOSTS = ['*']
BLOG_PROGRAM_NAME = 'DuoBlog'
# Application definition
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'django.contrib.sites',
    'allauth',
    'allauth.account',
    'crispy_forms',
    'rest_framework',
    'accounts',
    'blog',
    'api',
]
MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'config.urls'
TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                'django.template.context_processors.debug',
                'django.template.context_processors.request',
                'django.contrib.auth.context_processors.auth',
                'django.contrib.messages.context_processors.messages',
            ],
        },
    },
]
WSGI_APPLICATION = 'config.wsgi.application'
# Database
# https://docs.djangoproject.com/en/1.10/ref/settings/#databases
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(USER_DATA_DIR, 'db.sqlite3'),
    }
}
# Password validation
# https://docs.djangoproject.com/en/1.10/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
    {
        'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
    },
]
# Internationalization
# https://docs.djangoproject.com/en/1.10/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = False
USE_L10N = False
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/1.10/howto/static-files/
# all url for static has this prefix
STATIC_URL = '/static/'
# custom, for uploads
UPLOADS_URL = '/uploads/'
STATIC_ROOT = os.path.join(BASE_DIR, "static")
UPLOADS_ROOT = os.path.join(USER_DATA_DIR, "uploads")
REST_FRAMEWORK = {
    'DEFAULT_PERMISSION_CLASSES': (
        'rest_framework.permissions.IsAdminUser',
    ),
    'DEFAULT_AUTHENTICATION_CLASSES': (
        'rest_framework.authentication.SessionAuthentication',
    ),
    'DEFAULT_RENDERER_CLASSES': (
        'rest_framework.renderers.JSONRenderer',
        'rest_framework.renderers.BrowsableAPIRenderer',
    ),
    # http://django-filter.readthedocs.io/en/stable/
    'DEFAULT_FILTER_BACKENDS': ('django_filters.rest_framework.DjangoFilterBackend',),
    'DEFAULT_PAGINATION_CLASS': 'rest_framework.pagination.LimitOffsetPagination',
}
SITE_ID = 1
AUTH_USER_MODEL = 'accounts.User'
CRISPY_TEMPLATE_PACK = 'bootstrap3'
LOGIN_URL = '/account/login/'
### permission required
# management.middleware.py
RESTRICTED_URLS = (
    (r'^/manage/', 'access_manage'),
    (r'^/admin/', 'access_admin'),
)
RESTRICTED_URLS_EXCEPTIONS = (
)
# http://example.com/<POSTS_URL_FIELD>/<slug>
POSTS_URL_FIELD = 'p' | [
"xujinkai@gmail.com"
] | xujinkai@gmail.com |
1d7ebb2188e55a1b208a6193496d6348ea4589cf | 1aaa0e8069e668fef18eb57cbf8b768780d6b819 | /sync-flickr.py | c75481f380d2549fd23404f2b3d1fca03676d1bb | [] | no_license | aimxhaisse/an-aer | db3e660a24139479484042a802462a5b977263ab | c96679d21c3fb542341d7bfb10e40d7463d068f2 | refs/heads/master | 2020-05-26T06:23:56.401114 | 2015-08-19T19:34:14 | 2015-08-19T19:34:14 | 37,366,466 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,867 | py | #!/usr/bin/env python2
import os
import frontmatter
import flickrapi
import json
import requests
tmp_file='/tmp/flickr.jpg'
api_key=os.environ['FLICKR_KEY']
api_secret=os.environ['FLICKR_SECRET']
user_id='133795382@N04'
flickr=flickrapi.FlickrAPI(api_key, api_secret)
flickr.authenticate_via_browser(perms='write')
images=json.loads(flickr.photos.search(user_id=user_id, format='json').decode('utf-8'))
def photo_exists(title):
    for photo in images['photos']['photo']:
        if photo['title'] == title:
            return True
    return False


def sync_photo(url, title, desc):
    if photo_exists(title):
        return
    with open(tmp_file, 'wb') as handle:
        resp = requests.get(url, stream=True)
        if not resp.ok:
            return
        for chunk in resp.iter_content(4096):
            handle.write(chunk)
    flickr.upload(filename=tmp_file, title=title, is_public=0, description=desc)


def main():
    for root, subdirs, files in os.walk('almace-scaffolding/_app/_posts/series'):
        for file in files:
            path = '{0}/{1}'.format(root, file)
            post = frontmatter.load(path)
            # skip series
            if post['layout'] != 'photo':
                continue
            serie = post['category']
            category = post['desc']
            image = post['image']
            # skip already uploaded files
            if post.get('flickr', None) != 'sync':
                continue
            url = 'http://storage.an-aer.bzh/aer/series/{0}/{1}/large.jpg'.format(serie, image)
            title = u'{0} #{1}'.format(category, image)
            filename = file[11:]
            filename = filename[:-3]
            desc = 'More info about this image at http://an-aer.bzh/{0}/{1}.html'.format(serie, filename)
            sync_photo(url, title, desc)


if __name__ == '__main__':
    main()
| [
"mxs@sbrk.org"
] | mxs@sbrk.org |
115ce2928dc8674efb001a44ef748c761a18d489 | a24e1f5ee48aef0835e4073f27ca05f59ad08830 | /Demo/oss_operate_upload.py | 64cef35803583bdcf727e8c7d24d075008d3e6f1 | [] | no_license | Bass0315/BeagleBoard | 73ff1b9a3ade0431ed9095b9a8578c4ba419b055 | 3197e21e8739e5a573e6cec67e0be2f6bf163358 | refs/heads/main | 2023-07-22T08:55:40.530758 | 2021-08-26T06:12:07 | 2021-08-26T06:12:07 | 399,308,478 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,527 | py | #!/usr/bin/env python
# tary, 14:49 2018/10/18
import os
import oss2
import time
import sys
#bucket_name = sys.argv[1]
#need_file_prex = sys.argv[2]
bucket_name = "102110561"
#need_file_prex = "BBBVC20200400020"
bucket_name_bk = "102991313"
class log_uploader():
    def __init__(self):
        self.access_key_id = "xxxxxxxxxxxxxxx"
        self.access_key_secret = "xxxxxxxxxxxxxxxxxxxxxxxxxx"
        self.bucket_name = bucket_name
        self.bucket_name_bk = bucket_name_bk
        self.endpoint = "oss-cn-hangzhou.aliyuncs.com"
        self.auth = oss2.Auth(self.access_key_id, self.access_key_secret)
        self.bucket = oss2.Bucket(self.auth, self.endpoint, self.bucket_name)
        #self.put_object()
        self.bucket_bk = oss2.Bucket(self.auth, self.endpoint, self.bucket_name_bk)

    def uploadfile(self, logfileName, filePath):
        # realfilepath = filePath + "/" + logfileName
        realfilepath = logfileName
        print(realfilepath)
        try:
            oss2.resumable_upload(self.bucket, logfileName, filePath)
            time.sleep(0.1)
            file_exist_check = self.bucket.object_exists(logfileName)
            # print(file_exist_check)
            if file_exist_check != True:
                return False
        except Exception as e:
            print(e)
            return False
        return True

    def uploadfile_bk(self, logfileName, filePath):
        realfilepath = filePath + "/" + logfileName
        try:
            oss2.resumable_upload(self.bucket_bk, logfileName, filePath)
            time.sleep(0.1)
            file_exist_check = self.bucket_bk.object_exists(logfileName)
            if file_exist_check != True:
                return False
        except Exception as e:
            print(e)
            return False
        return True

    def downloadfile(self):
        try:
            for object_info in oss2.ObjectIterator(self.bucket, prefix=need_file_prex):
                print(object_info.key)
                self.bucket.get_object_to_file(object_info.key, object_info.key)
        except:
            print("Error")

    def deletefile(self):
        for obj in oss2.ObjectIterator(self.bucket, prefix="4016"):
            print(obj.key)
            self.bucket.delete_object(obj.key)

    def download_delete(self):
        try:
            for object_info in oss2.ObjectIterator(self.bucket, prefix=need_file_prex):
                print(object_info.key)
                self.bucket.get_object_to_file(object_info.key, object_info.key)
                self.bucket.delete_object(object_info.key)
        except:
            print("Error")

    def search(self):
        for object_info in oss2.ObjectIterator(self.bucket, prefix=need_file_prex):
            print(object_info.key)

    def copyfile(self):
        d_bucket = oss2.Bucket(self.auth, "oss-cn-hangzhou.aliyuncs.com", '102991313')
        for object_info in oss2.ObjectIterator(self.bucket, prefix="11399"):
            print(object_info.key)
            d_bucket.copy_object("mestestbak", object_info.key, object_info.key)


if __name__ == '__main__':
    if len(sys.argv) < 2:
        print("Usage: {} filename filepath".format(sys.argv[0]))
        quit(1)
    uploader = log_uploader()
    if uploader.uploadfile(sys.argv[1], sys.argv[2]):
        quit(0)
    quit(2)
    # uploader.copyfile()
    # uploader.deletefile()
    # uploader.downloadfile()
    # uploader.download_delete()
| [
"1217354870@qq.com"
] | 1217354870@qq.com |
53b6cc03d03e845c4fcee066f865a5837923dab4 | dacd2488a96c52acfd9bb282d1a5d5fc8f04d20b | /largest_palindrome_product.py | 017616247a817509e40e11cde549af1f56d206d4 | [] | no_license | snj0x03/MyProjectEulerSolutions | 1dfb9179a7130e3aca6a6a3e6693ab2fbbf10a70 | b16dc1340ea4c2a305531e9e6b600637e0028c6f | refs/heads/main | 2023-08-01T05:52:35.760183 | 2021-09-16T02:56:46 | 2021-09-16T02:56:46 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 249 | py | def isPalindrome(n):
    temp = int(str(n)[::-1])
    if temp == n:
        return True
    return False


ans = 0
for i in range(100, 1000):
    for j in range(i, 1000):
        if isPalindrome(i*j) and i*j > ans:
            ans = i*j
print(ans)
| [
"sawnaysoe.mm@gmail.com"
] | sawnaysoe.mm@gmail.com |
784f80e5912c451b33f78e669f8c5db7589bc16c | f5ffd566166948c4202eb1e66bef44cf55a70033 | /openapi_client/model/single_token_no_id.py | 0574862288f2077ebbf419ad0659d700468ded35 | [] | no_license | skyportal/skyportal_client | ed025ac6d23589238a9c133d712d4f113bbcb1c9 | 15514e4dfb16313e442d06f69f8477b4f0757eaa | refs/heads/master | 2023-02-10T02:54:20.757570 | 2021-01-05T02:18:03 | 2021-01-05T02:18:03 | 326,860,562 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 8,676 | py | """
Fritz: SkyPortal API
SkyPortal provides an API to access most of its underlying functionality. To use it, you will need an API token. This can be generated via the web application from your profile page or, if you are an admin, you may use the system provisioned token stored inside of `.tokens.yaml`. ### Accessing the SkyPortal API Once you have a token, you may access SkyPortal programmatically as follows. #### Python ```python import requests token = 'ea70a5f0-b321-43c6-96a1-b2de225e0339' def api(method, endpoint, data=None): headers = {'Authorization': f'token {token}'} response = requests.request(method, endpoint, json=data, headers=headers) return response response = api('GET', 'http://localhost:5000/api/sysinfo') print(f'HTTP code: {response.status_code}, {response.reason}') if response.status_code in (200, 400): print(f'JSON response: {response.json()}') ``` #### Command line (curl) ```shell curl -s -H 'Authorization: token ea70a5f0-b321-43c6-96a1-b2de225e0339' http://localhost:5000/api/sysinfo ``` ### Response In the above examples, the SkyPortal server is located at `http://localhost:5000`. In case of success, the HTTP response is 200: ``` HTTP code: 200, OK JSON response: {'status': 'success', 'data': {}, 'version': '0.9.dev0+git20200819.84c453a'} ``` On failure, it is 400; the JSON response has `status=\"error\"` with the reason for the failure given in `message`: ```js { \"status\": \"error\", \"message\": \"Invalid API endpoint\", \"data\": {}, \"version\": \"0.9.1\" } ``` # Authentication <!-- ReDoc-Inject: <security-definitions> --> # noqa: E501
The version of the OpenAPI document: 0.9.dev0+git20201221.76627dd
Generated by: https://openapi-generator.tech
"""
import re # noqa: F401
import sys # noqa: F401
import nulltype # noqa: F401
from openapi_client.model_utils import ( # noqa: F401
ApiTypeError,
ModelComposed,
ModelNormal,
ModelSimple,
cached_property,
change_keys_js_to_python,
convert_js_args_to_python_args,
date,
datetime,
file_type,
none_type,
validate_get_composed_info,
)
def lazy_import():
    from openapi_client.model.token_no_id import TokenNoID
    globals()['TokenNoID'] = TokenNoID
class SingleTokenNoID(ModelNormal):
    """NOTE: This class is auto generated by OpenAPI Generator.
    Ref: https://openapi-generator.tech

    Do not edit the class manually.

    Attributes:
      allowed_values (dict): The key is the tuple path to the attribute
          and the for var_name this is (var_name,). The value is a dict
          with a capitalized key describing the allowed value and an allowed
          value. These dicts store the allowed enum values.
      attribute_map (dict): The key is attribute name
          and the value is json key in definition.
      discriminator_value_class_map (dict): A dict to go from the discriminator
          variable value to the discriminator class name.
      validations (dict): The key is the tuple path to the attribute
          and the for var_name this is (var_name,). The value is a dict
          that stores validations for max_length, min_length, max_items,
          min_items, exclusive_maximum, inclusive_maximum, exclusive_minimum,
          inclusive_minimum, and regex.
      additional_properties_type (tuple): A tuple of classes accepted
          as additional properties values.
    """

    allowed_values = {
        ('status',): {
            'SUCCESS': "success",
        },
    }

    validations = {
    }

    additional_properties_type = None

    _nullable = False

    @cached_property
    def openapi_types():
        """
        This must be a method because a model may have properties that are
        of type self, this must run after the class is loaded

        Returns
            openapi_types (dict): The key is attribute name
                and the value is attribute type.
        """
        lazy_import()
        return {
            'status': (str,),  # noqa: E501
            'message': (str,),  # noqa: E501
            'data': (TokenNoID,),  # noqa: E501
        }

    @cached_property
    def discriminator():
        return None

    attribute_map = {
        'status': 'status',  # noqa: E501
        'message': 'message',  # noqa: E501
        'data': 'data',  # noqa: E501
    }

    _composed_schemas = {}

    required_properties = set([
        '_data_store',
        '_check_type',
        '_spec_property_naming',
        '_path_to_item',
        '_configuration',
        '_visited_composed_classes',
    ])
    @convert_js_args_to_python_args
    def __init__(self, *args, **kwargs):  # noqa: E501
        """SingleTokenNoID - a model defined in OpenAPI

        Args:

        Keyword Args:
            status (str): defaults to "success", must be one of ["success", ]  # noqa: E501
            _check_type (bool): if True, values for parameters in openapi_types
                will be type checked and a TypeError will be
                raised if the wrong type is input.
                Defaults to True
            _path_to_item (tuple/list): This is a list of keys or values to
                drill down to the model in received_data
                when deserializing a response
            _spec_property_naming (bool): True if the variable names in the input data
                are serialized names, as specified in the OpenAPI document.
                False if the variable names in the input data
                are pythonic names, e.g. snake case (default)
            _configuration (Configuration): the instance to use when
                deserializing a file_type parameter.
                If passed, type conversion is attempted
                If omitted no type conversion is done.
            _visited_composed_classes (tuple): This stores a tuple of
                classes that we have traveled through so that
                if we see that class again we will not use its
                discriminator again.
                When traveling through a discriminator, the
                composed schema that is
                is traveled through is added to this set.
                For example if Animal has a discriminator
                petType and we pass in "Dog", and the class Dog
                allOf includes Animal, we move through Animal
                once using the discriminator, and pick Dog.
                Then in Dog, we will make an instance of the
                Animal class but this time we won't travel
                through its discriminator because we passed in
                _visited_composed_classes = (Animal,)
            message (str): [optional]  # noqa: E501
            data (TokenNoID): [optional]  # noqa: E501
        """

        status = kwargs.get('status', "success")
        _check_type = kwargs.pop('_check_type', True)
        _spec_property_naming = kwargs.pop('_spec_property_naming', False)
        _path_to_item = kwargs.pop('_path_to_item', ())
        _configuration = kwargs.pop('_configuration', None)
        _visited_composed_classes = kwargs.pop('_visited_composed_classes', ())

        if args:
            raise ApiTypeError(
                "Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
                    args,
                    self.__class__.__name__,
                ),
                path_to_item=_path_to_item,
                valid_classes=(self.__class__,),
            )

        self._data_store = {}
        self._check_type = _check_type
        self._spec_property_naming = _spec_property_naming
        self._path_to_item = _path_to_item
        self._configuration = _configuration
        self._visited_composed_classes = _visited_composed_classes + (self.__class__,)

        self.status = status
        for var_name, var_value in kwargs.items():
            if var_name not in self.attribute_map and \
                    self._configuration is not None and \
                    self._configuration.discard_unknown_keys and \
                    self.additional_properties_type is None:
                # discard variable.
                continue
            setattr(self, var_name, var_value)
| [
"profjsb@gmail.com"
] | profjsb@gmail.com |
ca5718e18f441d3016775009fc4ce404bf60bf1d | c723fc194b8f07e341635fca736f69d2b3f23b42 | /rinself.py | eed880d1a560ddc35fa3bde2564292ef96a1b29d | [] | no_license | Kaneki711/kpro3a | fc15f7e82f4704a591a5b84cae8efb04c1b83468 | a51c5abdd7964dae9be94496e8b39358a3ef4638 | refs/heads/master | 2020-03-26T01:33:38.925234 | 2018-08-01T19:23:26 | 2018-08-01T19:23:26 | 144,372,352 | 0 | 1 | null | 2018-08-11T09:25:31 | 2018-08-11T09:25:31 | null | UTF-8 | Python | false | false | 126,463 | py | # -*- coding: utf-8 -*-
from LineAPI.linepy import *
from LineAPI.akad.ttypes import Message
from LineAPI.akad.ttypes import ContentType as Type
from gtts import gTTS
from time import sleep
from datetime import datetime, timedelta
from bs4 import BeautifulSoup
from googletrans import Translator
from humanfriendly import format_timespan, format_size, format_number, format_length
import time, random, sys, json, codecs, threading, glob, re, string, os, requests, six, ast, pytz, urllib, urllib3, urllib.parse, traceback, atexit, subprocess
ririn = LINE("Evx7aX9igmrbbAvCoIMf.K9hUseF6j4f/WE5DLTHHBW.8cu6k4noj/VDblh9ro6rb5cE7Lxj0F4j3Pqy76h35ME=")
#ririn = LINE("TOKENMU")
ririnMid = ririn.profile.mid
ririnProfile = ririn.getProfile()
ririnSettings = ririn.getSettings()
ririnPoll = OEPoll(ririn)
botStart = time.time()
print ("╔═════════════════════════\n║╔════════════════════════\n║╠❂➣ DNA BERHASIL LOGIN\n║╚════════════════════════\n╚═════════════════════════")
msg_dict = {}
wait = {
"autoAdd": True,
"autoJoin": True,
"autoLeave": False,
"autoRead": False,
"autoRespon": True,
"autoResponPc": False,
"autoJoinTicket": True,
"checkContact": False,
"checkPost": False,
"checkSticker": False,
"changePictureProfile": False,
"changeGroupPicture": [],
"keyCommand": "",
"leaveRoom": True,
"myProfile": {
"displayName": "",
"coverId": "",
"pictureStatus": "",
"statusMessage": ""
},
"mimic": {
"copy": False,
"status": False,
"target": {}
},
"Protectcancel": True,
"Protectgr": True,
"Protectinvite": True,
"Protectjoin": False,
"setKey": False,
"sider": False,
"unsendMessage": True
}
cctv = {
"cyduk":{},
"point":{},
"sidermem":{}
}
read = {
"ROM": {},
"readPoint": {},
"readMember": {},
"readTime": {}
}
list_language = {
"list_textToSpeech": {
"id": "Indonesia",
"af" : "Afrikaans",
"sq" : "Albanian",
"ar" : "Arabic",
"hy" : "Armenian",
"bn" : "Bengali",
"ca" : "Catalan",
"zh" : "Chinese",
"zh-cn" : "Chinese (Mandarin/China)",
"zh-tw" : "Chinese (Mandarin/Taiwan)",
"zh-yue" : "Chinese (Cantonese)",
"hr" : "Croatian",
"cs" : "Czech",
"da" : "Danish",
"nl" : "Dutch",
"en" : "English",
"en-au" : "English (Australia)",
"en-uk" : "English (United Kingdom)",
"en-us" : "English (United States)",
"eo" : "Esperanto",
"fi" : "Finnish",
"fr" : "French",
"de" : "German",
"el" : "Greek",
"hi" : "Hindi",
"hu" : "Hungarian",
"is" : "Icelandic",
"id" : "Indonesian",
"it" : "Italian",
"ja" : "Japanese",
"km" : "Khmer (Cambodian)",
"ko" : "Korean",
"la" : "Latin",
"lv" : "Latvian",
"mk" : "Macedonian",
"no" : "Norwegian",
"pl" : "Polish",
"pt" : "Portuguese",
"ro" : "Romanian",
"ru" : "Russian",
"sr" : "Serbian",
"si" : "Sinhala",
"sk" : "Slovak",
"es" : "Spanish",
"es-es" : "Spanish (Spain)",
"es-us" : "Spanish (United States)",
"sw" : "Swahili",
"sv" : "Swedish",
"ta" : "Tamil",
"th" : "Thai",
"tr" : "Turkish",
"uk" : "Ukrainian",
"vi" : "Vietnamese",
"cy" : "Welsh"
},
"list_translate": {
"af": "afrikaans",
"sq": "albanian",
"am": "amharic",
"ar": "arabic",
"hy": "armenian",
"az": "azerbaijani",
"eu": "basque",
"be": "belarusian",
"bn": "bengali",
"bs": "bosnian",
"bg": "bulgarian",
"ca": "catalan",
"ceb": "cebuano",
"ny": "chichewa",
"zh-cn": "chinese (simplified)",
"zh-tw": "chinese (traditional)",
"co": "corsican",
"hr": "croatian",
"cs": "czech",
"da": "danish",
"nl": "dutch",
"en": "english",
"eo": "esperanto",
"et": "estonian",
"tl": "filipino",
"fi": "finnish",
"fr": "french",
"fy": "frisian",
"gl": "galician",
"ka": "georgian",
"de": "german",
"el": "greek",
"gu": "gujarati",
"ht": "haitian creole",
"ha": "hausa",
"haw": "hawaiian",
"iw": "hebrew",
"hi": "hindi",
"hmn": "hmong",
"hu": "hungarian",
"is": "icelandic",
"ig": "igbo",
"id": "indonesian",
"ga": "irish",
"it": "italian",
"ja": "japanese",
"jw": "javanese",
"kn": "kannada",
"kk": "kazakh",
"km": "khmer",
"ko": "korean",
"ku": "kurdish (kurmanji)",
"ky": "kyrgyz",
"lo": "lao",
"la": "latin",
"lv": "latvian",
"lt": "lithuanian",
"lb": "luxembourgish",
"mk": "macedonian",
"mg": "malagasy",
"ms": "malay",
"ml": "malayalam",
"mt": "maltese",
"mi": "maori",
"mr": "marathi",
"mn": "mongolian",
"my": "myanmar (burmese)",
"ne": "nepali",
"no": "norwegian",
"ps": "pashto",
"fa": "persian",
"pl": "polish",
"pt": "portuguese",
"pa": "punjabi",
"ro": "romanian",
"ru": "russian",
"sm": "samoan",
"gd": "scots gaelic",
"sr": "serbian",
"st": "sesotho",
"sn": "shona",
"sd": "sindhi",
"si": "sinhala",
"sk": "slovak",
"sl": "slovenian",
"so": "somali",
"es": "spanish",
"su": "sundanese",
"sw": "swahili",
"sv": "swedish",
"tg": "tajik",
"ta": "tamil",
"te": "telugu",
"th": "thai",
"tr": "turkish",
"uk": "ukrainian",
"ur": "urdu",
"uz": "uzbek",
"vi": "vietnamese",
"cy": "welsh",
"xh": "xhosa",
"yi": "yiddish",
"yo": "yoruba",
"zu": "zulu",
"fil": "Filipino",
"he": "Hebrew"
}
}
try:
with open("Log_data.json","r",encoding="utf_8_sig") as f:
msg_dict = json.loads(f.read())
except:
print("Couldn't read Log data")
wait["myProfile"]["displayName"] = ririnProfile.displayName
wait["myProfile"]["statusMessage"] = ririnProfile.statusMessage
wait["myProfile"]["pictureStatus"] = ririnProfile.pictureStatus
coverId = ririn.getProfileDetail()["result"]["objectId"]
wait["myProfile"]["coverId"] = coverId
def restartBot():
print ("[ INFO ] BOT RESTART")
python = sys.executable
os.execl(python, python, *sys.argv)
def logError(text):
ririn.log("[ ERROR ] {}".format(str(text)))
tz = pytz.timezone("Asia/Jakarta")
timeNow = datetime.now(tz=tz)
timeHours = datetime.strftime(timeNow,"(%H:%M)")
day = ["Sunday", "Monday", "Tuesday", "Wednesday", "Thursday","Friday", "Saturday"]
hari = ["Minggu", "Senin", "Selasa", "Rabu", "Kamis", "Jumat", "Sabtu"]
bulan = ["Januari", "Februari", "Maret", "April", "Mei", "Juni", "Juli", "Agustus", "September", "Oktober", "November", "Desember"]
inihari = datetime.now(tz=tz)
hr = inihari.strftime('%A')
bln = inihari.strftime('%m')
for i in range(len(day)):
if hr == day[i]: hasil = hari[i]
    bln = bulan[int(bln) - 1]
    timeStr = "{}, {} - {} - {} | {}".format(str(hasil), str(inihari.strftime('%d')), str(bln), str(inihari.strftime('%Y')), str(inihari.strftime('%H:%M:%S')))
    with open("logError.txt","a") as error:
        error.write("\n[ {} ] {}".format(str(timeStr), text))
def cTime_to_datetime(unixtime):
return datetime.fromtimestamp(int(str(unixtime)[:len(str(unixtime))-3]))
def dt_to_str(dt):
return dt.strftime('%H:%M:%S')
def delete_log():
    # iterate over a copy so entries can be deleted while looping
    for data in list(msg_dict):
        if (datetime.utcnow() - cTime_to_datetime(msg_dict[data]["createdTime"])) > timedelta(1):
            if "path" in msg_dict[data]:
                ririn.deleteFile(msg_dict[data]["path"])
            del msg_dict[data]
def sendMention(to, text="", mids=[]):
arrData = ""
arr = []
mention = "@dee "
if mids == []:
raise Exception("Invalid mids")
if "@!" in text:
if text.count("@!") != len(mids):
raise Exception("Invalid mids")
texts = text.split("@!")
textx = ""
for mid in mids:
textx += str(texts[mids.index(mid)])
            slen = len(textx)
            elen = slen + len(mention) - 1  # span covers "@dee", excluding the trailing space
            arrData = {'S':str(slen), 'E':str(elen), 'M':mid}
arr.append(arrData)
textx += mention
textx += str(texts[len(mids)])
else:
textx = ""
        slen = len(textx)
        elen = slen + len(mention) - 1  # span covers "@dee", excluding the trailing space
        arrData = {'S':str(slen), 'E':str(elen), 'M':mids[0]}
arr.append(arrData)
textx += mention + str(text)
ririn.sendMessage(to, textx, {'MENTION': str('{"MENTIONEES":' + json.dumps(arr) + '}')}, 0)
def command(text):
pesan = text.lower()
if wait["setKey"] == True:
if pesan.startswith(wait["keyCommand"]):
cmd = pesan.replace(wait["keyCommand"],"")
else:
cmd = "Undefined command"
else:
cmd = text.lower()
return cmd
def helpmessage():
if wait['setKey'] == True:
key = wait['keyCommand']
else:
key = ''
helpMessage = "╔════════════════════╗" + "\n" + \
" ✰ ᴅɴᴀ ʙᴏᴛ ✰" + "\n" + \
"╚════════════════════╝" + "\n" + \
"╔════════════════════╗" + "\n" + \
" ✪ 🅷🅴🅻🅿 🅼🅴🆂🆂🅰🅶🅴 ✪" + "\n" + \
"╠════════════════════╝" + "\n" + \
"╠❂➣ " + key + "ʜᴇʟᴘ " + "\n" + \
"╠❂➣ " + key + "ᴛᴛs " + "\n" + \
"╠❂➣ " + key + "ᴛʀᴀɴsʟᴀᴛᴇ " + "\n" + \
"╔════════════════════╗" + "\n" + \
" ✪ 🆂🆃🅰🆃🆄🆂 ✪" + "\n" + \
"╠════════════════════╝" + "\n" + \
"╠❂➣ " + key + "ʀᴇsᴛᴀʀᴛ" + "\n" + \
"╠❂➣ " + key + "ʀᴜɴᴛɪᴍᴇ" + "\n" + \
"╠❂➣ " + key + "sᴘ" + "\n" + \
"╠❂➣ " + key + "sᴘᴇᴇᴅ" + "\n" + \
"╠❂➣ " + key + "sᴛᴀᴛᴜs" + "\n" + \
"╠❂➣ ᴍʏᴋᴇʏ" + "\n" + \
"╠❂➣ sᴇᴛᴋᴇʏ「ᴏɴ/ᴏғғ」" + "\n" + \
"╔════════════════════╗" + "\n" + \
" ✪ 🆂🅴🆃🆃🅸🅽🅶🆂 ✪" + "\n" + \
"╠════════════════════╝" + "\n" + \
"╠❂➣ " + key + "ᴀᴜᴛᴏᴀᴅᴅ「ᴏɴ/ᴏғғ」" + "\n" + \
"╠❂➣ " + key + "ᴀᴜᴛᴏᴊᴏɪɴ「ᴏɴ/ᴏғғ」" + "\n" + \
"╠❂➣ " + key + "ᴀᴜᴛᴏᴊᴏɪɴᴛɪᴄᴋᴇᴛ「ᴏɴ/ᴏғғ」" + "\n" + \
"╠❂➣ " + key + "ᴀᴜᴛᴏʟᴇᴀᴠᴇ「ᴏɴ/ᴏғғ」" + "\n" + \
"╠❂➣ " + key + "ᴀᴜᴛᴏʀᴇᴀᴅ「ᴏɴ/ᴏғғ」" + "\n" + \
"╠❂➣ " + key + "ᴀᴜᴛᴏʀᴇsᴘᴏɴ「ᴏɴ/ᴏғғ」" + "\n" + \
"╠❂➣ " + key + "ᴀᴜᴛᴏʀᴇsᴘᴏɴᴘᴄ「ᴏɴ/ᴏғғ」" + "\n" + \
"╠❂➣ " + key + "ᴄʜᴇᴄᴋᴄᴏɴᴛᴀᴄᴛ「ᴏɴ/ᴏғғ」" + "\n" + \
"╠❂➣ " + key + "ᴄʜᴇᴄᴋᴘᴏsᴛ「ᴏɴ/ᴏғғ」" + "\n" + \
"╠❂➣ " + key + "ᴄʜᴇᴄᴋsᴛɪᴄᴋᴇʀ「ᴏɴ/ᴏғғ」" + "\n" + \
"╠❂➣ " + key + "ᴜɴsᴇɴᴅᴄʜᴀᴛ「ᴏɴ/ᴏғғ」" + "\n" + \
"╔════════════════════╗" + "\n" + \
" ✪ 🆂🅴🅻🅵 ✪" + "\n" + \
"╠════════════════════╝" + "\n" + \
"╠❂➣ " + key + "ʙᴀᴄᴋᴜᴘᴘʀᴏғɪʟᴇ" + "\n" + \
"╠❂➣ " + key + "ᴄʜᴀɴɢᴇʙɪᴏ:「ǫᴜᴇʀʏ」" + "\n" + \
"╠❂➣ " + key + "ᴄʜᴀɴɢᴇɴᴀᴍᴇ:「ǫᴜᴇʀʏ」" + "\n" + \
"╠❂➣ " + key + "ᴄʟᴏɴᴇᴘʀᴏғɪʟᴇ「ᴍᴇɴᴛɪᴏɴ」" + "\n" + \
"╠❂➣ " + key + "ᴄʜᴀɴɢᴇᴘɪᴄᴛᴜʀᴇᴘʀᴏғɪʟᴇ" + "\n" + \
"╠❂➣ " + key + "ᴍᴇ" + "\n" + \
"╠❂➣ " + key + "ᴍʏᴍɪᴅ" + "\n" + \
"╠❂➣ " + key + "ᴍʏɴᴀᴍᴇ" + "\n" + \
"╠❂➣ " + key + "ᴍʏʙɪᴏ" + "\n" + \
"╠❂➣ " + key + "ᴍʏᴘɪᴄᴛᴜʀᴇ" + "\n" + \
"╠❂➣ " + key + "ᴍʏᴠɪᴅᴇᴏᴘʀᴏғɪʟᴇ" + "\n" + \
"╠❂➣ " + key + "ᴍʏᴄᴏᴠᴇʀ" + "\n" + \
"╠❂➣ " + key + "ʀᴇsᴛᴏʀᴇᴘʀᴏғɪʟᴇ" + "\n" + \
"╠❂➣ " + key + "sᴛᴇᴀʟᴄᴏɴᴛᴀᴄᴛ「ᴍᴇɴᴛɪᴏɴ」" + "\n" + \
"╠❂➣ " + key + "sᴛᴇᴀʟᴍɪᴅ「ᴍᴇɴᴛɪᴏɴ」" + "\n" + \
"╠❂➣ " + key + "sᴛᴇᴀʟɴᴀᴍᴇ「ᴍᴇɴᴛɪᴏɴ」" + "\n" + \
"╠❂➣ " + key + "sᴛᴇᴀʟʙɪᴏ「ᴍᴇɴᴛɪᴏɴ」" + "\n" + \
"╠❂➣ " + key + "sᴛᴇᴀʟᴘɪᴄᴛᴜʀᴇ「ᴍᴇɴᴛɪᴏɴ」" + "\n" + \
"╠❂➣ " + key + "sᴛᴇᴀʟᴠɪᴅᴇᴏᴘʀᴏғɪʟᴇ「ᴍᴇɴᴛɪᴏɴ」" + "\n" + \
"╠❂➣ " + key + "sᴛᴇᴀʟᴄᴏᴠᴇʀ「ᴍᴇɴᴛɪᴏɴ」" + "\n" + \
"╔════════════════════╗" + "\n" + \
" ✪ 🆂🅿🅴🅲🅸🅰🅻 ✪" + "\n" + \
"╠════════════════════╝" + "\n" + \
"╠❂➣ " + key + "ʟᴜʀᴋɪɴɢ" + "\n" + \
"╠❂➣ " + key + "ʟᴜʀᴋɪɴɢ「ᴏɴ/ᴏғғ/ʀᴇsᴇᴛ」" + "\n" + \
"╠❂➣ " + key + "ᴍᴇɴᴛɪᴏɴ" + "\n" + \
"╠❂➣ " + key + "ᴍɪᴍɪᴄ「ᴏɴ/ᴏғғ」" + "\n" + \
"╠❂➣ " + key + "ᴍɪᴍɪᴄᴀᴅᴅ「ᴍᴇɴᴛɪᴏɴ」" + "\n" + \
"╠❂➣ " + key + "ᴍɪᴍɪᴄᴅᴇʟ「ᴍᴇɴᴛɪᴏɴ」" + "\n" + \
"╠❂➣ " + key + "ᴍɪᴍɪᴄʟɪsᴛ" + "\n" + \
"╠❂➣ " + key + "sɪᴅᴇʀ「ᴏɴ/ᴏғғ」" + "\n" + \
"╔════════════════════╗" + "\n" + \
" ✪ 🅶🆁🅾🆄🅿 ✪" + "\n" + \
"╠════════════════════╝" + "\n" + \
"╠❂➣ " + key + "ᴄʜᴀɴɢᴇɢʀᴏᴜᴘᴘɪᴄᴛᴜʀᴇ" + "\n" + \
"╠❂➣ " + key + "ɢʀᴏᴜᴘᴄʀᴇᴀᴛᴏʀ" + "\n" + \
"╠❂➣ " + key + "ɢʀᴏᴜᴘɪᴅ" + "\n" + \
"╠❂➣ " + key + "ɢʀᴏᴜᴘɴᴀᴍᴇ" + "\n" + \
"╠❂➣ " + key + "ɢʀᴏᴜᴘᴘɪᴄᴛᴜʀᴇ" + "\n" + \
"╠❂➣ " + key + "ɢʀᴏᴜᴘᴛɪᴄᴋᴇᴛ" + "\n" + \
"╠❂➣ " + key + "ɢʀᴏᴜᴘᴛɪᴄᴋᴇᴛ「ᴏɴ/ᴏғғ」" + "\n" + \
"╠❂➣ " + key + "ɢʀᴏᴜᴘᴍᴇᴍʙᴇʀʟɪsᴛ" + "\n" + \
"╠❂➣ " + key + "ɢʀᴏᴜᴘɪɴғᴏ" + "\n" + \
"╠❂➣ " + key + "ɢʀᴏᴜᴘʟɪsᴛ" + "\n" + \
"╠❂➣ " + key + "ɪɴᴠɪᴛᴇɢᴄ「ᴀᴍᴏᴜɴᴛ」" + "\n" + \
"╔════════════════════╗" + "\n" + \
" ✪ 🅼🅴🅳🅸🅰 ✪" + "\n" + \
"╠════════════════════╝" + "\n" + \
"╠❂➣ " + key + "ᴄʜᴇᴄᴋᴅᴀᴛᴇ「ᴅᴀᴛᴇ」" + "\n" + \
"╠❂➣ " + key + "ᴄʜᴇᴄᴋʟᴏᴄᴀᴛɪᴏɴ「ʟᴏᴄᴀᴛɪᴏɴ」" + "\n" + \
"╠❂➣ " + key + "ᴄʜᴇᴄᴋᴘʀᴀʏᴛɪᴍᴇ「ʟᴏᴄᴀᴛɪᴏɴ」" + "\n" + \
"╠❂➣ " + key + "ᴄʜᴇᴄᴋᴡᴇᴀᴛʜᴇʀ「ʟᴏᴄᴀᴛɪᴏɴ」" + "\n" + \
"╠❂➣ " + key + "ᴄʜᴇᴄᴋᴡᴇʙsɪᴛᴇ「ᴜʀʟ」" + "\n" + \
"╠❂➣ " + key + "ɪɴsᴛᴀɪɴғᴏ 「ᴜsᴇʀɴᴀᴍᴇ」" + "\n" + \
"╠❂➣ " + key + "sᴇᴀʀᴄʜɪᴍᴀɢᴇ 「sᴇᴀʀᴄʜ」" + "\n" + \
"╠❂➣ " + key + "sᴇᴀʀᴄʜʟʏʀɪᴄ 「sᴇᴀʀᴄʜ」" + "\n" + \
"╠❂➣ " + key + "sᴇᴀʀᴄʜᴍᴜsɪᴄ 「sᴇᴀʀᴄʜ」" + "\n" + \
"╠❂➣ " + key + "sᴇᴀʀᴄʜʏᴏᴜᴛᴜʙᴇ「sᴇᴀʀᴄʜ」" + "\n" + \
"╠════════════════════╗" + "\n" + \
" ᴄʀᴇᴅɪᴛs ʙʏ : ᴅ̶ᴇ̶ᴇ̶ ✯" + "\n" + \
"╚════════════════════╝" + "\n" + \
"╔════════════════════╗" + "\n" + \
" ✰ ᴅɴᴀ ʙᴏᴛ ✰" + "\n" + \
"╚════════════════════╝"
return helpMessage
def helptexttospeech():
if wait['setKey'] == True:
key = wait['keyCommand']
else:
key = ''
helpTextToSpeech = "╔════════════════════╗" + "\n" + \
" ✰ ᴅɴᴀ ʙᴏᴛ ✰" + "\n" + \
"╚════════════════════╝" + "\n" + \
"╔════════════════════╗" + "\n" + \
" ◄]·✪·ᴛᴇxᴛᴛᴏsᴘᴇᴇᴄʜ·✪·[►" + "\n" + \
"╠════════════════════╝ " + "\n" + \
"╠❂➣ " + key + "ᴀғ : ᴀғʀɪᴋᴀᴀɴs" + "\n" + \
"╠❂➣ " + key + "sǫ : ᴀʟʙᴀɴɪᴀɴ" + "\n" + \
"╠❂➣ " + key + "ᴀʀ : ᴀʀᴀʙɪᴄ" + "\n" + \
"╠❂➣ " + key + "ʜʏ : ᴀʀᴍᴇɴɪᴀɴ" + "\n" + \
"╠❂➣ " + key + "ʙɴ : ʙᴇɴɢᴀʟɪ" + "\n" + \
"╠❂➣ " + key + "ᴄᴀ : ᴄᴀᴛᴀʟᴀɴ" + "\n" + \
"╠❂➣ " + key + "ᴢʜ : ᴄʜɪɴᴇsᴇ" + "\n" + \
"╠❂➣ " + key + "ᴢʜʏᴜᴇ : ᴄʜɪɴᴇsᴇ (ᴄᴀɴᴛᴏɴᴇsᴇ)" + "\n" + \
"╠❂➣ " + key + "ʜʀ : ᴄʀᴏᴀᴛɪᴀɴ" + "\n" + \
"╠❂➣ " + key + "ᴄs : ᴄᴢᴇᴄʜ" + "\n" + \
"╠❂➣ " + key + "ᴅᴀ : ᴅᴀɴɪsʜ" + "\n" + \
"╠❂➣ " + key + "ɴʟ : ᴅᴜᴛᴄʜ" + "\n" + \
"╠❂➣ " + key + "ᴇɴ : ᴇɴɢʟɪsʜ" + "\n" + \
"╠❂➣ " + key + "ᴇɴᴀᴜ : ᴇɴɢʟɪsʜ (ᴀᴜsᴛʀᴀʟɪᴀ)" + "\n" + \
"╠❂➣ " + key + "ᴇɴᴜᴋ : ᴇɴɢʟɪsʜ (ᴜᴋ)" + "\n" + \
"╠❂➣ " + key + "ᴇɴᴜs : ᴇɴɢʟɪsʜ (ᴜs)" + "\n" + \
"╠❂➣ " + key + "ᴇᴏ : ᴇsᴘᴇʀᴀɴᴛᴏ" + "\n" + \
"╠❂➣ " + key + "ғɪ : ғɪɴɴɪsʜ" + "\n" + \
"╠❂➣ " + key + "ғʀ : ғʀᴇɴᴄʜ" + "\n" + \
"╠❂➣ " + key + "ᴅᴇ : ɢᴇʀᴍᴀɴ" + "\n" + \
"╠❂➣ " + key + "ᴇʟ : ɢʀᴇᴇᴋ" + "\n" + \
"╠❂➣ " + key + "ʜɪ : ʜɪɴᴅɪ" + "\n" + \
"╠❂➣ " + key + "ʜᴜ : ʜᴜɴɢᴀʀɪᴀɴ" + "\n" + \
"╠❂➣ " + key + "ɪs : ɪᴄᴇʟᴀɴᴅɪᴄ" + "\n" + \
"╠❂➣ " + key + "ɪᴅ : ɪɴᴅᴏɴᴇsɪᴀɴ" + "\n" + \
"╠❂➣ " + key + "ɪᴛ : ɪᴛᴀʟɪᴀɴ" + "\n" + \
"╠❂➣ " + key + "ᴊᴀ : ᴊᴀᴘᴀɴᴇsᴇ" + "\n" + \
"╠❂➣ " + key + "ᴋᴍ : ᴋʜᴍᴇʀ (ᴄᴀᴍʙᴏᴅɪᴀɴ)" + "\n" + \
"╠❂➣ " + key + "ᴋᴏ : ᴋᴏʀᴇᴀɴ" + "\n" + \
"╠❂➣ " + key + "ʟᴀ : ʟᴀᴛɪɴ" + "\n" + \
"╠❂➣ " + key + "ʟᴠ : ʟᴀᴛᴠɪᴀɴ" + "\n" + \
"╠❂➣ " + key + "ᴍᴋ : ᴍᴀᴄᴇᴅᴏɴɪᴀɴ" + "\n" + \
"╠❂➣ " + key + "ɴᴏ : ɴᴏʀᴡᴇɢɪᴀɴ" + "\n" + \
"╠❂➣ " + key + "ᴘʟ : ᴘᴏʟɪsʜ" + "\n" + \
"╠❂➣ " + key + "ᴘᴛ : ᴘᴏʀᴛᴜɢᴜᴇsᴇ" + "\n" + \
"╠❂➣ " + key + "ʀᴏ : ʀᴏᴍᴀɴɪᴀɴ" + "\n" + \
"╠❂➣ " + key + "ʀᴜ : ʀᴜssɪᴀɴ" + "\n" + \
"╠❂➣ " + key + "sʀ : sᴇʀʙɪᴀɴ" + "\n" + \
"╠❂➣ " + key + "sɪ : sɪɴʜᴀʟᴀ" + "\n" + \
"╠❂➣ " + key + "sᴋ : sʟᴏᴠᴀᴋ" + "\n" + \
"╠❂➣ " + key + "ᴇs : sᴘᴀɴɪsʜ" + "\n" + \
"╠❂➣ " + key + "ᴇsᴇs : sᴘᴀɴɪsʜ (sᴘᴀɪɴ)" + "\n" + \
"╠❂➣ " + key + "ᴇsᴜs : sᴘᴀɴɪsʜ (ᴜs)" + "\n" + \
"╠❂➣ " + key + "sᴡ : sᴡᴀʜɪʟɪ" + "\n" + \
"╠❂➣ " + key + "sᴠ : sᴡᴇᴅɪsʜ" + "\n" + \
"╠❂➣ " + key + "ᴛᴀ : ᴛᴀᴍɪʟ" + "\n" + \
"╠❂➣ " + key + "ᴛʜ : ᴛʜᴀɪ" + "\n" + \
"╠❂➣ " + key + "ᴛʀ : ᴛᴜʀᴋɪsʜ" + "\n" + \
"╠❂➣ " + key + "ᴜᴋ : ᴜᴋʀᴀɪɴɪᴀɴ" + "\n" + \
"╠❂➣ " + key + "ᴠɪ : ᴠɪᴇᴛɴᴀᴍᴇsᴇ" + "\n" + \
"╠❂➣ " + key + "ᴄʏ : ᴡᴇʟsʜ" + "\n" + \
"╠════════════════════╗" + "\n" + \
" ᴄʀᴇᴅɪᴛs ʙʏ : ©ᴅ̶ᴇ̶ᴇ̶ ✯" + "\n" + \
"╚════════════════════╝" + "\n" + \
"╔════════════════════╗" + "\n" + \
" ✰ ᴅɴᴀ ʙᴏᴛ ✰" + "\n" + \
"╚════════════════════╝" + "\n" + \
"ᴄᴏɴᴛᴏʜ : " + key + "sᴀʏ-ɪᴅ ʀɪʀɪɴ"
return helpTextToSpeech
def helptranslate():
if wait['setKey'] == True:
key = wait['keyCommand']
else:
key = ''
helpTranslate = "╔════════════════════╗" + "\n" + \
" ✰ ᴅɴᴀ ʙᴏᴛ ✰" + "\n" + \
"╚════════════════════╝" + "\n" + \
"╔════════════════════╗" + "\n" + \
" ◄]·✪·ᴛʀᴀɴsʟᴀᴛᴇ·✪·[►" + "\n" + \
"╠════════════════════╝" + "\n" + \
"╠❂➣ " + key + "ᴀғ : ᴀғʀɪᴋᴀᴀɴs" + "\n" + \
"╠❂➣ " + key + "sǫ : ᴀʟʙᴀɴɪᴀɴ" + "\n" + \
"╠❂➣ " + key + "ᴀᴍ : ᴀᴍʜᴀʀɪᴄ" + "\n" + \
"╠❂➣ " + key + "ᴀʀ : ᴀʀᴀʙɪᴄ" + "\n" + \
"╠❂➣ " + key + "ʜʏ : ᴀʀᴍᴇɴɪᴀɴ" + "\n" + \
"╠❂➣ " + key + "ᴀᴢ : ᴀᴢᴇʀʙᴀɪᴊᴀɴɪ" + "\n" + \
"╠❂➣ " + key + "ᴇᴜ : ʙᴀsǫᴜᴇ" + "\n" + \
"╠❂➣ " + key + "ʙᴇ : ʙᴇʟᴀʀᴜsɪᴀɴ" + "\n" + \
"╠❂➣ " + key + "ʙɴ : ʙᴇɴɢᴀʟɪ" + "\n" + \
"╠❂➣ " + key + "ʙs : ʙᴏsɴɪᴀɴ" + "\n" + \
"╠❂➣ " + key + "ʙɢ : ʙᴜʟɢᴀʀɪᴀɴ" + "\n" + \
"╠❂➣ " + key + "ᴄᴀ : ᴄᴀᴛᴀʟᴀɴ" + "\n" + \
"╠❂➣ " + key + "ᴄᴇʙ : ᴄᴇʙᴜᴀɴᴏ" + "\n" + \
"╠❂➣ " + key + "ɴʏ : ᴄʜɪᴄʜᴇᴡᴀ" + "\n" + \
"╠❂➣ " + key + "ᴢʜᴄɴ : ᴄʜɪɴᴇsᴇ (sɪᴍᴘʟɪғɪᴇᴅ)" + "\n" + \
"╠❂➣ " + key + "ᴢʜᴛᴡ : ᴄʜɪɴᴇsᴇ (ᴛʀᴀᴅɪᴛɪᴏɴᴀʟ)" + "\n" + \
"╠❂➣ " + key + "ᴄᴏ : ᴄᴏʀsɪᴄᴀɴ" + "\n" + \
"╠❂➣ " + key + "ʜʀ : ᴄʀᴏᴀᴛɪᴀɴ" + "\n" + \
"╠❂➣ " + key + "ᴄs : ᴄᴢᴇᴄʜ" + "\n" + \
"╠❂➣ " + key + "ᴅᴀ : ᴅᴀɴɪsʜ" + "\n" + \
"╠❂➣ " + key + "ɴʟ : ᴅᴜᴛᴄʜ" + "\n" + \
"╠❂➣ " + key + "ᴇɴ : ᴇɴɢʟɪsʜ" + "\n" + \
"╠❂➣ " + key + "ᴇᴏ : ᴇsᴘᴇʀᴀɴᴛᴏ" + "\n" + \
"╠❂➣ " + key + "ᴇᴛ : ᴇsᴛᴏɴɪᴀɴ" + "\n" + \
"╠❂➣ " + key + "ᴛʟ : ғɪʟɪᴘɪɴᴏ" + "\n" + \
"╠❂➣ " + key + "ғɪ : ғɪɴɴɪsʜ" + "\n" + \
"╠❂➣ " + key + "ғʀ : ғʀᴇɴᴄʜ" + "\n" + \
"╠❂➣ " + key + "ғʏ : ғʀɪsɪᴀɴ" + "\n" + \
"╠❂➣ " + key + "ɢʟ : ɢᴀʟɪᴄɪᴀɴ" + "\n" + \
"╠❂➣ " + key + "ᴋᴀ : ɢᴇᴏʀɢɪᴀɴ" + "\n" + \
"╠❂➣ " + key + "ᴅᴇ : ɢᴇʀᴍᴀɴ" + "\n" + \
"╠❂➣ " + key + "ᴇʟ : ɢʀᴇᴇᴋ" + "\n" + \
"╠❂➣ " + key + "ɢᴜ : ɢᴜᴊᴀʀᴀᴛɪ" + "\n" + \
"╠❂➣ " + key + "ʜᴛ : ʜᴀɪᴛɪᴀɴ ᴄʀᴇᴏʟᴇ" + "\n" + \
"╠❂➣ " + key + "ʜᴀ : ʜᴀᴜsᴀ" + "\n" + \
"╠❂➣ " + key + "ʜᴀᴡ : ʜᴀᴡᴀɪɪᴀɴ" + "\n" + \
"╠❂➣ " + key + "ɪᴡ : ʜᴇʙʀᴇᴡ" + "\n" + \
"╠❂➣ " + key + "ʜɪ : ʜɪɴᴅɪ" + "\n" + \
"╠❂➣ " + key + "ʜᴍɴ : ʜᴍᴏɴɢ" + "\n" + \
"╠❂➣ " + key + "ʜᴜ : ʜᴜɴɢᴀʀɪᴀɴ" + "\n" + \
"╠❂➣ " + key + "ɪs : ɪᴄᴇʟᴀɴᴅɪᴄ" + "\n" + \
"╠❂➣ " + key + "ɪɢ : ɪɢʙᴏ" + "\n" + \
"╠❂➣ " + key + "ɪᴅ : ɪɴᴅᴏɴᴇsɪᴀɴ" + "\n" + \
"╠❂➣ " + key + "ɢᴀ : ɪʀɪsʜ" + "\n" + \
"╠❂➣ " + key + "ɪᴛ : ɪᴛᴀʟɪᴀɴ" + "\n" + \
"╠❂➣ " + key + "ᴊᴀ : ᴊᴀᴘᴀɴᴇsᴇ" + "\n" + \
"╠❂➣ " + key + "ᴊᴡ : ᴊᴀᴠᴀɴᴇsᴇ" + "\n" + \
"╠❂➣ " + key + "ᴋɴ : ᴋᴀɴɴᴀᴅᴀ" + "\n" + \
"╠❂➣ " + key + "ᴋᴋ : ᴋᴀᴢᴀᴋʜ" + "\n" + \
"╠❂➣ " + key + "ᴋᴍ : ᴋʜᴍᴇʀ" + "\n" + \
"╠❂➣ " + key + "ᴋᴏ : ᴋᴏʀᴇᴀɴ" + "\n" + \
"╠❂➣ " + key + "ᴋᴜ : ᴋᴜʀᴅɪsʜ (ᴋᴜʀᴍᴀɴᴊɪ)" + "\n" + \
"╠❂➣ " + key + "ᴋʏ : ᴋʏʀɢʏᴢ" + "\n" + \
"╠❂➣ " + key + "ʟᴏ : ʟᴀᴏ" + "\n" + \
"╠❂➣ " + key + "ʟᴀ : ʟᴀᴛɪɴ" + "\n" + \
"╠❂➣ " + key + "ʟᴠ : ʟᴀᴛᴠɪᴀɴ" + "\n" + \
"╠❂➣ " + key + "ʟᴛ : ʟɪᴛʜᴜᴀɴɪᴀɴ" + "\n" + \
"╠❂➣ " + key + "ʟʙ : ʟᴜxᴇᴍʙᴏᴜʀɢɪsʜ" + "\n" + \
"╠❂➣ " + key + "ᴍᴋ : ᴍᴀᴄᴇᴅᴏɴɪᴀɴ" + "\n" + \
"╠❂➣ " + key + "ᴍɢ : ᴍᴀʟᴀɢᴀsʏ" + "\n" + \
"╠❂➣ " + key + "ᴍs : ᴍᴀʟᴀʏ" + "\n" + \
"╠❂➣ " + key + "ᴍʟ : ᴍᴀʟᴀʏᴀʟᴀᴍ" + "\n" + \
"╠❂➣ " + key + "ᴍᴛ : ᴍᴀʟᴛᴇsᴇ" + "\n" + \
"╠❂➣ " + key + "ᴍɪ : ᴍᴀᴏʀɪ" + "\n" + \
"╠❂➣ " + key + "ᴍʀ : ᴍᴀʀᴀᴛʜɪ" + "\n" + \
"╠❂➣ " + key + "ᴍɴ : ᴍᴏɴɢᴏʟɪᴀɴ" + "\n" + \
"╠❂➣ " + key + "ᴍʏ : ᴍʏᴀɴᴍᴀʀ (ʙᴜʀᴍᴇsᴇ)" + "\n" + \
"╠❂➣ " + key + "ɴᴇ : ɴᴇᴘᴀʟɪ" + "\n" + \
"╠❂➣ " + key + "ɴᴏ : ɴᴏʀᴡᴇɢɪᴀɴ" + "\n" + \
"╠❂➣ " + key + "ᴘs : ᴘᴀsʜᴛᴏ" + "\n" + \
"╠❂➣ " + key + "ғᴀ : ᴘᴇʀsɪᴀɴ" + "\n" + \
"╠❂➣ " + key + "ᴘʟ : ᴘᴏʟɪsʜ" + "\n" + \
"╠❂➣ " + key + "ᴘᴛ : ᴘᴏʀᴛᴜɢᴜᴇsᴇ" + "\n" + \
"╠❂➣ " + key + "ᴘᴀ : ᴘᴜɴᴊᴀʙɪ" + "\n" + \
"╠❂➣ " + key + "ʀᴏ : ʀᴏᴍᴀɴɪᴀɴ" + "\n" + \
"╠❂➣ " + key + "ʀᴜ : ʀᴜssɪᴀɴ" + "\n" + \
"╠❂➣ " + key + "sᴍ : sᴀᴍᴏᴀɴ" + "\n" + \
"╠❂➣ " + key + "ɢᴅ : sᴄᴏᴛs ɢᴀᴇʟɪᴄ" + "\n" + \
"╠❂➣ " + key + "sʀ : sᴇʀʙɪᴀɴ" + "\n" + \
"╠❂➣ " + key + "sᴛ : sᴇsᴏᴛʜᴏ" + "\n" + \
"╠❂➣ " + key + "sɴ : sʜᴏɴᴀ" + "\n" + \
"╠❂➣ " + key + "sᴅ : sɪɴᴅʜɪ" + "\n" + \
"╠❂➣ " + key + "sɪ : sɪɴʜᴀʟᴀ" + "\n" + \
"╠❂➣ " + key + "sᴋ : sʟᴏᴠᴀᴋ" + "\n" + \
"╠❂➣ " + key + "sʟ : sʟᴏᴠᴇɴɪᴀɴ" + "\n" + \
"╠❂➣ " + key + "sᴏ : sᴏᴍᴀʟɪ" + "\n" + \
"╠❂➣ " + key + "ᴇs : sᴘᴀɴɪsʜ" + "\n" + \
"╠❂➣ " + key + "sᴜ : sᴜɴᴅᴀɴᴇsᴇ" + "\n" + \
"╠❂➣ " + key + "sᴡ : sᴡᴀʜɪʟɪ" + "\n" + \
"╠❂➣ " + key + "sᴠ : sᴡᴇᴅɪsʜ" + "\n" + \
"╠❂➣ " + key + "ᴛɢ : ᴛᴀᴊɪᴋ" + "\n" + \
"╠❂➣ " + key + "ᴛᴀ : ᴛᴀᴍɪʟ" + "\n" + \
"╠❂➣ " + key + "ᴛᴇ : ᴛᴇʟᴜɢᴜ" + "\n" + \
"╠❂➣ " + key + "ᴛʜ : ᴛʜᴀɪ" + "\n" + \
"╠❂➣ " + key + "ᴛʀ : ᴛᴜʀᴋɪsʜ" + "\n" + \
"╠❂➣ " + key + "ᴜᴋ : ᴜᴋʀᴀɪɴɪᴀɴ" + "\n" + \
"╠❂➣ " + key + "ᴜʀ : ᴜʀᴅᴜ" + "\n" + \
"╠❂➣ " + key + "ᴜᴢ : ᴜᴢʙᴇᴋ" + "\n" + \
"╠❂➣ " + key + "ᴠɪ : ᴠɪᴇᴛɴᴀᴍᴇsᴇ" + "\n" + \
"╠❂➣ " + key + "ᴄʏ : ᴡᴇʟsʜ" + "\n" + \
"╠❂➣ " + key + "xʜ : xʜᴏsᴀ" + "\n" + \
"╠❂➣ " + key + "ʏɪ : ʏɪᴅᴅɪsʜ" + "\n" + \
"╠❂➣ " + key + "ʏᴏ : ʏᴏʀᴜʙᴀ" + "\n" + \
"╠❂➣ " + key + "ᴢᴜ : ᴢᴜʟᴜ" + "\n" + \
"╠❂➣ " + key + "ғɪʟ : ғɪʟɪᴘɪɴᴏ" + "\n" + \
"╠❂➣ " + key + "ʜᴇ : ʜᴇʙʀᴇᴡ" + "\n" + \
"╠════════════════════╗" + "\n" + \
" ᴄʀᴇᴅɪᴛs ʙʏ : ©ᴅ̶ᴇ̶ᴇ̶ ✯" + "\n" + \
"╚════════════════════╝" + "\n" + \
"╔════════════════════╗" + "\n" + \
" ✰ ᴅɴᴀ ʙᴏᴛ ✰" + "\n" + \
"╚════════════════════╝" + "\n" + \
"ᴄᴏɴᴛᴏʜ : " + key + "ᴛʀ-ɪᴅ ʀɪʀɪɴ"
return helpTranslate
def ririnBot(op):
try:
if op.type == 0:
print ("[ 0 ] Succes")
return
if op.type == 5:
print ("[ 5 ] Add Contact")
if wait["autoAdd"] == True:
ririn.findAndAddContactsByMid(op.param1)
ririn.sendMessage(op.param1, "╔════════════════════╗\n 「ᴀᴜᴛᴏ ʀᴇᴘʟʏ」\n ʙʏ:\n ✰ ᴅɴᴀ ʙᴏᴛ ✰\n╚════════════════════╝\n ʜᴀʟʟᴏ, ᴛʜᴀɴᴋs ғᴏʀ ᴀᴅᴅ ᴍᴇ\n\n ᴏᴘᴇɴ ᴏʀᴅᴇʀ :\n ✪ sᴇʟғʙᴏᴛ ᴏɴʟʏ ✪\n ✪ sᴇʟғʙᴏᴛ + ᴀssɪsᴛ ✪\n ✪ ʙᴏᴛ ᴘʀᴏᴛᴇᴄᴛ ✪\n 「ᴀʟʟ ʙᴏᴛ ᴘʏᴛʜᴏɴ з」\n\n ᴍɪɴᴀᴛ ᴘᴄ ᴀᴋᴜɴ ᴅɪ ʙᴀᴡᴀʜ :\n [line.me/ti/p/ppgIZ0JLDW]")
if op.type == 13:
print ("[ 13 ] Invite Into Group")
if ririnMid in op.param3:
if wait["autoJoin"] == True:
ririn.acceptGroupInvitation(op.param1)
dan = ririn.getContact(op.param2)
tgb = ririn.getGroup(op.param1)
ririn.sendMessage(op.param1, "ʜᴀʟᴏ, ᴛʜx ғᴏʀ ɪɴᴠɪᴛᴇ ᴍᴇ")
ririn.sendContact(op.param1, op.param2)
ririn.sendImageWithURL(op.param1, "http://dl.profile.line-cdn.net{}".format(dan.picturePath))
if op.type == 15:
dan = ririn.getContact(op.param2)
tgb = ririn.getGroup(op.param1)
ririn.sendMessage(op.param1, "ɴᴀʜ ᴋᴀɴ ʙᴀᴘᴇʀ 「{}」, ɢᴀᴋ ᴜsᴀʜ ʙᴀʟɪᴋ ᴅɪ {} ʟᴀɢɪ ʏᴀ\nsᴇʟᴀᴍᴀᴛ ᴊᴀʟᴀɴ ᴅᴀɴ sᴇᴍᴏɢᴀʜ ᴛᴇɴᴀɴɢ ᴅɪʟᴜᴀʀ sᴀɴᴀ 😘😘😘".format(str(dan.displayName),str(tgb.name)))
ririn.sendContact(op.param1, op.param2)
ririn.sendImageWithURL(op.param1, "http://dl.profile.line-cdn.net{}".format(dan.picturePath))
if op.type == 17:
dan = ririn.getContact(op.param2)
tgb = ririn.getGroup(op.param1)
sendMention(op.param1, "ʜᴏʟᴀ @! ,\nᴡᴇʟᴄᴏᴍᴇ ᴛᴏ ɢʀᴏᴜᴘ {} \nᴊᴀɴɢᴀɴ ʟᴜᴘᴀ ᴄʜᴇᴄᴋ ɴᴏᴛᴇ ʏᴀ \nᴀᴡᴀs ᴋᴀʟᴀᴜ ʙᴀᴘᴇʀᴀɴ 😘😘😘".format(str(tgb.name)),[op.param2])
ririn.sendContact(op.param1, op.param2)
ririn.sendImageWithURL(op.param1, "http://dl.profile.line-cdn.net{}".format(dan.picturePath))
if op.type == 22:
if wait["leaveRoom"] == True:
ririn.leaveRoom(op.param1)
if op.type == 24:
if wait["leaveRoom"] == True:
ririn.leaveRoom(op.param1)
if op.type == 25:
try:
print ("[ 25 ] SEND MESSAGE")
msg = op.message
text = msg.text
msg_id = msg.id
receiver = msg.to
sender = msg._from
setKey = wait["keyCommand"].title()
if wait["setKey"] == False:
setKey = ''
if msg.toType == 0 or msg.toType == 1 or msg.toType == 2:
if msg.toType == 0:
if sender != ririn.profile.mid:
to = sender
else:
to = receiver
elif msg.toType == 1:
to = receiver
elif msg.toType == 2:
to = receiver
if msg.contentType == 0:
if text is None:
return
else:
cmd = command(text)
if cmd == "help":
helpMessage = helpmessage()
ririn.sendMessage(to, str(helpMessage))
ririn.sendContact(to, "u93d1ee4847fa27817ec1ee5d96d8616f")
elif cmd == "tts":
helpTextToSpeech = helptexttospeech()
ririn.sendMessage(to, str(helpTextToSpeech))
ririn.sendContact(to, "u93d1ee4847fa27817ec1ee5d96d8616f")
elif cmd == "translate":
helpTranslate = helptranslate()
ririn.sendMessage(to, str(helpTranslate))
ririn.sendContact(to, "u93d1ee4847fa27817ec1ee5d96d8616f")
elif cmd.startswith("changekey:"):
sep = text.split(" ")
key = text.replace(sep[0] + " ","")
if " " in key:
ririn.sendMessage(to, "ᴅᴏɴ'ᴛ ᴛʏᴘᴏ ʙʀᴏ")
else:
wait["keyCommand"] = str(key).lower()
ririn.sendMessage(to, "sᴜᴄᴄᴇs ᴄʜᴀɴɢᴇ ᴋᴇʏ [ {} ]".format(str(key).lower()))
elif cmd == "sp":
ririn.sendMessage(to, "❂➣ ʟᴏᴀᴅɪɴɢ...")
sp = int(round(time.time() *1000))
ririn.sendMessage(to,"ᴍʏ sᴘᴇᴇᴅ : %sms" % (sp - op.createdTime))
elif cmd == "speed":
start = time.time()
ririn.sendMessage(to, "❂➣ ʟᴏᴀᴅɪɴɢ...")
                                elapsed_time = (time.time() - start) * 1000
                                ririn.sendMessage(to, "ᴍʏ sᴘᴇᴇᴅ : %sms" % (elapsed_time))
elif cmd == "runtime":
timeNow = time.time()
runtime = timeNow - botStart
runtime = format_timespan(runtime)
ririn.sendMessage(to, "ʀᴜɴɴɪɴɢ ɪɴ.. {}".format(str(runtime)))
elif cmd == "restart":
ririn.sendMessage(to, "ʙᴏᴛ ʜᴀᴠᴇ ʙᴇᴇɴ ʀᴇsᴛᴀʀᴛ")
restartBot()
#------------------------------------============================------------------------------------#
#======================-----------✰ ᴅɴᴀ ʙᴏᴛ ✰-----------======================#
#------------------------------------============================------------------------------------#
elif cmd == "autoadd on":
wait["autoAdd"] = True
ririn.sendMessage(to, "ᴀᴜᴛᴏ ᴀᴅᴅ ᴏɴ")
elif cmd == "autoadd off":
wait["autoAdd"] = False
ririn.sendMessage(to, "ᴀᴜᴛᴏ ᴀᴅᴅ ᴏғғ")
elif cmd == "autojoin on":
wait["autoJoin"] = True
ririn.sendMessage(to, "ᴀᴜᴛᴏ ᴊᴏɪɴ ᴏɴ")
elif cmd == "autojoin off":
wait["autoJoin"] = False
                                ririn.sendMessage(to, "ᴀᴜᴛᴏ ᴊᴏɪɴ ᴏғғ")
elif cmd == "autoleave on":
wait["autoLeave"] = True
ririn.sendMessage(to, "ᴀᴜᴛᴏ ʟᴇᴀᴠᴇ ᴏɴ")
elif cmd == "autoleave off":
wait["autoLeave"] = False
ririn.sendMessage(to, "ᴀᴜᴛᴏ ʟᴇᴀᴠᴇ ᴏғғ")
elif cmd == "autoresponpc on":
wait["autoResponPc"] = True
ririn.sendMessage(to, "ᴀᴜᴛᴏ ʀᴇsᴘᴏɴ ғᴏʀ ᴘᴇʀsᴏɴᴀʟ ᴄʜᴀᴛ ᴏɴ")
elif cmd == "autoresponpc off":
wait["autoResponPc"] = False
ririn.sendMessage(to, "ᴀᴜᴛᴏ ʀᴇsᴘᴏɴ ғᴏʀ ᴘᴇʀsᴏɴᴀʟ ᴄʜᴀᴛ ᴏғғ")
elif cmd == "autorespon on":
wait["autoRespon"] = True
ririn.sendMessage(to, "ᴀᴜᴛᴏ ʀᴇsᴘᴏɴ ᴏɴ")
elif cmd == "autorespon off":
wait["autoRespon"] = False
ririn.sendMessage(to, "ᴀᴜᴛᴏ ʀᴇsᴘᴏɴ ᴏғғ")
elif cmd == "autoread on":
wait["autoRead"] = True
ririn.sendMessage(to, "ᴀᴜᴛᴏ ʀᴇᴀᴅ ᴏɴ")
elif cmd == "autoread off":
wait["autoRead"] = False
ririn.sendMessage(to, "ᴀᴜᴛᴏ ʀᴇᴀᴅ ᴏғғ")
elif cmd == "autojointicket on":
wait["autoJoinTicket"] = True
ririn.sendMessage(to, "ᴊᴏɪɴ ʙʏ ᴛɪᴄᴋᴇᴛ ᴏɴ")
                            elif cmd == "autojointicket off":
                                wait["autoJoinTicket"] = False
                                ririn.sendMessage(to, "ᴊᴏɪɴ ʙʏ ᴛɪᴄᴋᴇᴛ ᴏғғ")
                            elif cmd == "checkcontact on":
                                wait["checkContact"] = True
                                ririn.sendMessage(to, "ᴄʜᴇᴄᴋ ᴄᴏɴᴛᴀᴄᴛ ᴏɴ")
                            elif cmd == "checkcontact off":
                                wait["checkContact"] = False
                                ririn.sendMessage(to, "ᴄʜᴇᴄᴋ ᴄᴏɴᴛᴀᴄᴛ ᴏғғ")
elif cmd == "checkpost on":
wait["checkPost"] = True
ririn.sendMessage(to, "ᴄʜᴇᴄᴋ ᴘᴏsᴛ ᴏɴ")
elif cmd == "checkpost off":
wait["checkPost"] = False
ririn.sendMessage(to, "ᴄʜᴇᴄᴋ ᴘᴏsᴛ ᴏғғ")
elif cmd == "checksticker on":
wait["checkSticker"] = True
ririn.sendMessage(to, "ᴄʜᴇᴄᴋ sᴛɪᴄᴋᴇʀ ᴏɴ")
elif cmd == "checksticker off":
wait["checkSticker"] = False
ririn.sendMessage(to, "ᴄʜᴇᴄᴋ sᴛɪᴄᴋᴇʀ ᴏғғ")
elif cmd == "unsendchat on":
wait["unsendMessage"] = True
ririn.sendMessage(to, "ᴜɴsᴇɴᴅ ᴍᴇssᴀɢᴇ ᴏɴ")
elif cmd == "unsendchat off":
wait["unsendMessage"] = False
ririn.sendMessage(to, "ᴜɴsᴇɴᴅ ᴍᴇssᴀɢᴇ ᴏғғ")
elif cmd == "status":
try:
ret_ = "╔═════[ ·✪·sᴛᴀᴛᴜs·✪· ]═════╗"
if wait["autoAdd"] == True: ret_ += "\n╠❂➣ [ ᴏɴ ] ᴀᴜᴛᴏ ᴀᴅᴅ 「⚪」"
else: ret_ += "\n╠❂➣ [ ᴏғғ ] ᴀᴜᴛᴏ ᴀᴅᴅ 「⚫」"
if wait["autoJoin"] == True: ret_ += "\n╠❂➣ [ ᴏɴ ] ᴀᴜᴛᴏ ᴊᴏɪɴ 「⚪」"
else: ret_ += "\n╠❂➣ [ ᴏғғ ] ᴀᴜᴛᴏ ᴊᴏɪɴ 「⚫」"
if wait["autoLeave"] == True: ret_ += "\n╠❂➣ [ ᴏɴ ] ᴀᴜᴛᴏ ʟᴇᴀᴠᴇ 「⚪」"
else: ret_ += "\n╠❂➣ [ ᴏғғ ] ᴀᴜᴛᴏ ʟᴇᴀᴠᴇ 「⚫」"
if wait["autoJoinTicket"] == True: ret_ += "\n╠❂➣ [ ᴏɴ ] ᴊᴏɪɴ ᴛɪᴄᴋᴇᴛ 「⚪」"
else: ret_ += "\n╠❂➣ [ ᴏғғ ] ᴊᴏɪɴ ᴛɪᴄᴋᴇᴛ 「⚫」"
if wait["autoRead"] == True: ret_ += "\n╠❂➣ [ ᴏɴ ] ᴀᴜᴛᴏ ʀᴇᴀᴅ 「⚪」"
else: ret_ += "\n╠❂➣ [ ᴏғғ ] ᴀᴜᴛᴏ ʀᴇᴀᴅ 「⚫」"
if wait["autoRespon"] == True: ret_ += "\n╠❂➣ [ ᴏɴ ] ᴀᴜᴛᴏ ʀᴇsᴘᴏɴ 「⚪」"
else: ret_ += "\n╠❂➣ [ ᴏғғ ] ᴀᴜᴛᴏ ʀᴇsᴘᴏɴ 「⚫」"
if wait["autoResponPc"] == True: ret_ += "\n╠❂➣ [ ᴏɴ ] ᴀᴜᴛᴏ ʀᴇsᴘᴏɴ ᴘᴄ 「⚪」"
else: ret_ += "\n╠❂➣ [ ᴏғғ ] ᴀᴜᴛᴏ ʀᴇsᴘᴏɴ ᴘᴄ 「⚫」"
if wait["checkContact"] == True: ret_ += "\n╠❂➣ [ ᴏɴ ] ᴄʜᴇᴄᴋ ᴄᴏɴᴛᴀᴄᴛ 「⚪」"
else: ret_ += "\n╠❂➣ [ ᴏғғ ] ᴄʜᴇᴄᴋ ᴄᴏɴᴛᴀᴄᴛ 「⚫」"
if wait["checkPost"] == True: ret_ += "\n╠❂➣ [ ᴏɴ ] ᴄʜᴇᴄᴋ ᴘᴏsᴛ 「⚪」"
else: ret_ += "\n╠❂➣ [ ᴏғғ ] ᴄʜᴇᴄᴋ ᴘᴏsᴛ 「⚫」"
if wait["checkSticker"] == True: ret_ += "\n╠❂➣ [ ᴏɴ ] ᴄʜᴇᴄᴋ sᴛɪᴄᴋᴇʀ 「⚪」"
else: ret_ += "\n╠❂➣ [ ᴏғғ ] ᴄʜᴇᴄᴋ sᴛɪᴄᴋᴇʀ 「⚫」"
if wait["setKey"] == True: ret_ += "\n╠❂➣ [ ᴏɴ ] sᴇᴛ ᴋᴇʏ 「⚪」"
else: ret_ += "\n╠❂➣ [ ᴏғғ ] sᴇᴛ ᴋᴇʏ 「⚫」"
if wait["unsendMessage"] == True: ret_ += "\n╠❂➣ [ ᴏɴ ] ᴜɴsᴇɴᴅ ᴍsɢ 「⚪」"
else: ret_ += "\n╠❂➣ [ ᴏғғ ] ᴜɴsᴇɴᴅ ᴍsɢ 「⚫」"
ret_ += "\n╚═════[ ✯ ᴅɴᴀ ʙᴏᴛ ✯ ]═════╝"
ririn.sendMessage(to, str(ret_))
ririn.sendContact(to, "u93d1ee4847fa27817ec1ee5d96d8616f")
except Exception as e:
ririn.sendMessage(msg.to, str(e))
#------------------------------------============================------------------------------------#
#======================-----------✰ ᴅɴᴀ ʙᴏᴛ ✰-----------======================#
#------------------------------------============================------------------------------------#
elif cmd == "crash":
                                ririn.sendContact(to, "u1f41296217e740650e0448b96851a3e2")
elif cmd.startswith("changename:"):
sep = text.split(" ")
string = text.replace(sep[0] + " ","")
if len(string) <= 20:
profile = ririn.getProfile()
profile.displayName = string
ririn.updateProfile(profile)
ririn.sendMessage(to,"ᴄʜᴀɴɢᴇ ɴᴀᴍᴇ sᴜᴄᴄᴇs :{}".format(str(string)))
elif cmd.startswith("changebio:"):
sep = text.split(" ")
string = text.replace(sep[0] + " ","")
if len(string) <= 500:
profile = ririn.getProfile()
profile.statusMessage = string
ririn.updateProfile(profile)
ririn.sendMessage(to,"ᴄʜᴀɴɢᴇ ᴘʀᴏғɪʟᴇ sᴜᴄᴄᴇs :{}".format(str(string)))
elif cmd == "me":
ririn.sendContact(to, sender)
elif cmd == "mymid":
ririn.sendMessage(to, "[ ᴍɪᴅ ]\n{}".format(sender))
elif cmd == "myname":
contact = ririn.getContact(sender)
ririn.sendMessage(to, "[ ᴅɪsᴘʟᴀʏ ɴᴀᴍᴇ ]\n{}".format(contact.displayName))
elif cmd == "mybio":
contact = ririn.getContact(sender)
ririn.sendMessage(to, "[ sᴛᴀᴛᴜs ᴍᴇssᴀɢᴇ ]\n{}".format(contact.statusMessage))
elif cmd == "mypicture":
contact = ririn.getContact(sender)
ririn.sendImageWithURL(to,"http://dl.profile.line-cdn.net/{}".format(contact.pictureStatus))
elif cmd == "myvideoprofile":
contact = ririn.getContact(sender)
ririn.sendVideoWithURL(to,"http://dl.profile.line-cdn.net/{}/vp".format(contact.pictureStatus))
elif cmd == "mycover":
channel = ririn.getProfileCoverURL(sender)
path = str(channel)
ririn.sendImageWithURL(to, path)
                            elif cmd.startswith('invitegc '):
if msg.toType == 2:
sep = text.split(" ")
strnum = text.replace(sep[0] + " ","")
num = int(strnum)
ririn.sendMessage(to, "sᴜᴄᴄᴇs ɪɴᴠɪᴛᴇ ɢʀᴏᴜᴘ ᴄᴀʟʟ")
for var in range(0,num):
group = ririn.getGroup(to)
members = [mem.mid for mem in group.members]
ririn.inviteIntoGroupCall(to, contactIds=members)
elif cmd.startswith("cloneprofile "):
                                if 'MENTION' in msg.contentMetadata:
names = re.findall(r'@(\w+)', text)
mention = ast.literal_eval(msg.contentMetadata['MENTION'])
mentionees = mention['MENTIONEES']
lists = []
for mention in mentionees:
if mention["M"] not in lists:
lists.append(mention["M"])
for ls in lists:
contact = ririn.getContact(ls)
ririn.cloneContactProfile(ls)
ririn.sendMessage(to, "ᴄʟᴏɴᴇ ᴘʀᴏғɪʟᴇ sᴜᴄᴄᴇs : {}".format(contact.displayName))
elif cmd == "restoreprofile":
try:
ririnProfile = ririn.getProfile()
ririnProfile.displayName = str(wait["myProfile"]["displayName"])
ririnProfile.statusMessage = str(wait["myProfile"]["statusMessage"])
ririnProfile.pictureStatus = str(wait["myProfile"]["pictureStatus"])
ririn.updateProfileAttribute(8, ririnProfile.pictureStatus)
ririn.updateProfile(ririnProfile)
coverId = str(wait["myProfile"]["coverId"])
ririn.updateProfileCoverById(coverId)
ririn.sendMessage(to, "ʀᴇsᴛᴏʀᴇ ᴘʀᴏғɪʟᴇ sᴜᴄᴄᴇs, ᴡᴀɪᴛ ᴀ ғᴇᴡ ᴍɪɴᴜᴛᴇs")
except Exception as e:
ririn.sendMessage(to, "ʀᴇsᴛᴏʀᴇ ᴘʀᴏғɪʟᴇ ғᴀɪʟᴇᴅ")
                                    logError(e)
elif cmd == "backupprofile":
try:
profile = ririn.getProfile()
wait["myProfile"]["displayName"] = str(profile.displayName)
wait["myProfile"]["statusMessage"] = str(profile.statusMessage)
wait["myProfile"]["pictureStatus"] = str(profile.pictureStatus)
coverId = ririn.getProfileDetail()["result"]["objectId"]
wait["myProfile"]["coverId"] = str(coverId)
ririn.sendMessage(to, "ʙᴀᴄᴋᴜᴘ ᴘʀᴏғɪʟᴇ sᴜᴄᴄᴇs")
except Exception as e:
ririn.sendMessage(to, "ʙᴀᴄᴋᴜᴘ ᴘʀᴏғɪʟᴇ ғᴀɪʟᴇᴅ")
                                    logError(e)
elif cmd.startswith("stealmid "):
                                if 'MENTION' in msg.contentMetadata:
names = re.findall(r'@(\w+)', text)
mention = ast.literal_eval(msg.contentMetadata['MENTION'])
mentionees = mention['MENTIONEES']
lists = []
for mention in mentionees:
if mention["M"] not in lists:
lists.append(mention["M"])
ret_ = "[ Mid User ]"
for ls in lists:
ret_ += "\n{}".format(str(ls))
ririn.sendMessage(to, str(ret_))
elif cmd.startswith("stealname "):
                                if 'MENTION' in msg.contentMetadata:
names = re.findall(r'@(\w+)', text)
mention = ast.literal_eval(msg.contentMetadata['MENTION'])
mentionees = mention['MENTIONEES']
lists = []
for mention in mentionees:
if mention["M"] not in lists:
lists.append(mention["M"])
for ls in lists:
contact = ririn.getContact(ls)
ririn.sendMessage(to, "[ Display Name ]\n{}".format(str(contact.displayName)))
elif cmd.startswith("stealbio "):
                                if 'MENTION' in msg.contentMetadata:
names = re.findall(r'@(\w+)', text)
mention = ast.literal_eval(msg.contentMetadata['MENTION'])
mentionees = mention['MENTIONEES']
lists = []
for mention in mentionees:
if mention["M"] not in lists:
lists.append(mention["M"])
for ls in lists:
contact = ririn.getContact(ls)
ririn.sendMessage(to, "[ sᴛᴀᴛᴜs ᴍᴇssᴀɢᴇ ]\n{}".format(str(contact.statusMessage)))
elif cmd.startswith("stealpicture"):
                                if 'MENTION' in msg.contentMetadata:
names = re.findall(r'@(\w+)', text)
mention = ast.literal_eval(msg.contentMetadata['MENTION'])
mentionees = mention['MENTIONEES']
lists = []
for mention in mentionees:
if mention["M"] not in lists:
lists.append(mention["M"])
for ls in lists:
contact = ririn.getContact(ls)
path = "http://dl.profile.line.naver.jp/{}".format(contact.pictureStatus)
ririn.sendImageWithURL(to, str(path))
elif cmd.startswith("stealvideoprofile "):
                                if 'MENTION' in msg.contentMetadata:
names = re.findall(r'@(\w+)', text)
mention = ast.literal_eval(msg.contentMetadata['MENTION'])
mentionees = mention['MENTIONEES']
lists = []
for mention in mentionees:
if mention["M"] not in lists:
lists.append(mention["M"])
for ls in lists:
contact = ririn.getContact(ls)
path = "http://dl.profile.line.naver.jp/{}/vp".format(contact.pictureStatus)
ririn.sendVideoWithURL(to, str(path))
elif cmd.startswith("stealcover "):
if ririn != None:
                if 'MENTION' in msg.contentMetadata:
names = re.findall(r'@(\w+)', text)
mention = ast.literal_eval(msg.contentMetadata['MENTION'])
mentionees = mention['MENTIONEES']
lists = []
for mention in mentionees:
if mention["M"] not in lists:
lists.append(mention["M"])
for ls in lists:
channel = ririn.getProfileCoverURL(ls)
path = str(channel)
ririn.sendImageWithURL(to, str(path))
#------------------------------------============================------------------------------------#
#======================-----------✰ ᴅɴᴀ ʙᴏᴛ ✰-----------======================#
#------------------------------------============================------------------------------------#
elif cmd == 'groupcreator':
group = ririn.getGroup(to)
GS = group.creator.mid
ririn.sendContact(to, GS)
elif cmd == 'groupid':
gid = ririn.getGroup(to)
ririn.sendMessage(to, "[ɢʀᴏᴜᴘ ɪᴅ : : ]\n" + gid.id)
elif cmd == 'grouppicture':
group = ririn.getGroup(to)
path = "http://dl.profile.line-cdn.net/" + group.pictureStatus
ririn.sendImageWithURL(to, path)
elif cmd == 'groupname':
gid = ririn.getGroup(to)
ririn.sendMessage(to, "[ɢʀᴏᴜᴘ ɴᴀᴍᴇ : ]\n" + gid.name)
elif cmd == 'groupticket':
if msg.toType == 2:
group = ririn.getGroup(to)
if group.preventedJoinByTicket == False:
ticket = ririn.reissueGroupTicket(to)
ririn.sendMessage(to, "[ ɢʀᴏᴜᴘ ᴛɪᴄᴋᴇᴛ ]\nhttps://line.me/R/ti/g/{}".format(str(ticket)))
else:
ririn.sendMessage(to, "ᴛʜᴇ ǫʀ ɢʀᴏᴜᴘ ɪs ɴᴏᴛ ᴏᴘᴇɴ ᴘʟᴇᴀsᴇ ᴏᴘᴇɴ ɪᴛ ғɪʀsᴛ ᴡɪᴛʜ ᴛʜᴇ ᴄᴏᴍᴍᴀɴᴅ {}openqr".format(str(wait["keyCommand"])))
elif cmd == 'groupticket on':
if msg.toType == 2:
group = ririn.getGroup(to)
if group.preventedJoinByTicket == False:
ririn.sendMessage(to, "ᴀʟʀᴇᴀᴅʏ ᴏᴘᴇɴ")
else:
group.preventedJoinByTicket = False
ririn.updateGroup(group)
ririn.sendMessage(to, "sᴜᴄᴄᴇs ᴏᴘᴇɴ ǫʀ ɢʀᴏᴜᴘ")
elif cmd == 'groupticket off':
if msg.toType == 2:
group = ririn.getGroup(to)
if group.preventedJoinByTicket == True:
ririn.sendMessage(to, "ᴀʟʀᴇᴀᴅʏ ᴄʟᴏsᴇᴅ")
else:
group.preventedJoinByTicket = True
ririn.updateGroup(group)
ririn.sendMessage(to, "sᴜᴄᴄᴇs ᴄʟᴏsᴇ ǫʀ ɢʀᴏᴜᴘ")
elif cmd == 'groupinfo':
group = ririn.getGroup(to)
try:
gCreator = group.creator.displayName
except:
gCreator = "ɴᴏᴛ ғᴏᴜɴᴅ"
if group.invitee is None:
gPending = "0"
else:
gPending = str(len(group.invitee))
if group.preventedJoinByTicket == True:
gQr = "ᴄʟᴏsᴇᴅ"
gTicket = "ɴᴏʟ'"
else:
gQr = "ᴏᴘᴇɴ"
gTicket = "https://line.me/R/ti/g/{}".format(str(ririn.reissueGroupTicket(group.id)))
path = "http://dl.profile.line-cdn.net/" + group.pictureStatus
ret_ = "╔════[ ·✪ɢʀᴏᴜᴘ ɪɴғᴏ✪· ]════╗"
ret_ += "\n╠❂➣ ɢʀᴏᴜᴘ ɴᴀᴍᴇ : {}".format(str(group.name))
ret_ += "\n╠❂➣ ɢʀᴏᴜᴘ ɪᴅ : {}".format(group.id)
ret_ += "\n╠❂➣ ᴄʀᴇᴀᴛᴏʀ : {}".format(str(gCreator))
ret_ += "\n╠❂➣ ᴍᴇᴍʙᴇʀ : {}".format(str(len(group.members)))
ret_ += "\n╠❂➣ ᴘᴇɴᴅɪɴɢ : {}".format(gPending)
ret_ += "\n╠❂➣ ǫʀ ɢʀᴏᴜᴘ : {}".format(gQr)
ret_ += "\n╠❂➣ ᴛɪᴄᴋᴇᴛ ɢʀᴏᴜᴘ : {}".format(gTicket)
ret_ += "\n╚═════[ ✯ ᴅɴᴀ ʙᴏᴛ ✯ ]═════╝"
ririn.sendMessage(to, str(ret_))
ririn.sendImageWithURL(to, path)
elif cmd == 'memberlist':
if msg.toType == 2:
group = ririn.getGroup(to)
ret_ = "╔══[ ᴍᴇᴍʙᴇʀ ʟɪsᴛ ]══✪"
                no = 1
for mem in group.members:
ret_ += "\n╠❂➣ {}. {}".format(str(no), str(mem.displayName))
no += 1
ret_ += "\n╚═══[ ᴛᴏᴛᴀʟ : {} ]═══✪".format(str(len(group.members)))
ririn.sendMessage(to, str(ret_))
elif cmd == 'grouplist':
groups = ririn.groups
ret_ = "╔═[ ✯ ɢʀᴏᴜᴘ ʟɪsᴛ ✯ ]═✪"
            no = 1
for gid in groups:
group = ririn.getGroup(gid)
ret_ += "\n╠❂➣ {}. {} | {}".format(str(no), str(group.name), str(len(group.members)))
no += 1
ret_ += "\n╚═══[ ᴛᴏᴛᴀʟ : {} ]═══✪".format(str(len(groups)))
ririn.sendMessage(to, str(ret_))
#------------------------------------============================------------------------------------#
#======================-----------✰ ᴅɴᴀ ʙᴏᴛ ✰-----------======================#
#------------------------------------============================------------------------------------#
elif cmd == "changepictureprofile":
wait["changePictureProfile"] = True
ririn.sendMessage(to, "sᴇɴᴅ ᴘɪᴄᴛᴜʀᴇ")
elif cmd == "changegrouppicture":
if msg.toType == 2:
if to not in wait["changeGroupPicture"]:
wait["changeGroupPicture"].append(to)
ririn.sendMessage(to, "sᴇɴᴅ ᴘɪᴄᴛᴜʀᴇ")
elif cmd == 'mention':
group = ririn.getGroup(msg.to)
nama = [contact.mid for contact in group.members]
k = len(nama)//100
for a in range(k+1):
txt = u''
s=0
b=[]
for i in group.members[a*100 : (a+1)*100]:
b.append({"S":str(s), "E" :str(s+6), "M":i.mid})
s += 7
txt += u'@Zero \n'
ririn.sendMessage(to, text=txt, contentMetadata={u'MENTION': json.dumps({'MENTIONEES':b})}, contentType=0)
ririn.sendMessage(to, "Total {} Mention".format(str(len(nama))))
elif cmd == "sider on":
try:
del cctv['point'][msg.to]
del cctv['sidermem'][msg.to]
del cctv['cyduk'][msg.to]
except:
pass
cctv['point'][msg.to] = msg.id
cctv['sidermem'][msg.to] = ""
cctv['cyduk'][msg.to]=True
wait["Sider"] = True
ririn.sendMessage(msg.to,"sɪᴅᴇʀ sᴇᴛ ᴛᴏ ᴏɴ")
elif cmd == "sider off":
if msg.to in cctv['point']:
cctv['cyduk'][msg.to]=False
wait["Sider"] = False
ririn.sendMessage(msg.to,"sɪᴅᴇʀ sᴇᴛ ᴛᴏ ᴏғғ")
else:
ririn.sendMessage(msg.to,"sɪᴅᴇʀ ɴᴏᴛ sᴇᴛ")
elif cmd == "lurking on":
tz = pytz.timezone("Asia/Makassar")
timeNow = datetime.now(tz=tz)
day = ["Sunday", "Monday", "Tuesday", "Wednesday", "Thursday","Friday", "Saturday"]
hari = ["Minggu", "Senin", "Selasa", "Rabu", "Kamis", "Jumat", "Sabtu"]
bulan = ["Januari", "Februari", "Maret", "April", "Mei", "Juni", "Juli", "Agustus", "September", "Oktober", "November", "Desember"]
hr = timeNow.strftime("%A")
bln = timeNow.strftime("%m")
for i in range(len(day)):
if hr == day[i]: hasil = hari[i]
            bln = bulan[int(bln) - 1]
readTime = hasil + ", " + timeNow.strftime('%d') + " - " + bln + " - " + timeNow.strftime('%Y') + "\nJam : [ " + timeNow.strftime('%H:%M:%S') + " ]"
if receiver in read['readPoint']:
try:
del read['readPoint'][receiver]
del read['readMember'][receiver]
del read['readTime'][receiver]
except:
pass
read['readPoint'][receiver] = msg_id
read['readMember'][receiver] = ""
read['readTime'][receiver] = readTime
read['ROM'][receiver] = {}
ririn.sendMessage(receiver,"ʟᴜʀᴋɪɴɢ sᴇᴛ ᴏɴ")
else:
try:
del read['readPoint'][receiver]
del read['readMember'][receiver]
del read['readTime'][receiver]
except:
pass
read['readPoint'][receiver] = msg_id
read['readMember'][receiver] = ""
read['readTime'][receiver] = readTime
read['ROM'][receiver] = {}
ririn.sendMessage(receiver,"sᴇᴛ ʀᴇᴀᴅɪɴɢ ᴘᴏɪɴᴛ : \n" + readTime)
elif cmd == "lurking off":
tz = pytz.timezone("Asia/Makassar")
timeNow = datetime.now(tz=tz)
day = ["Sunday", "Monday", "Tuesday", "Wednesday", "Thursday","Friday", "Saturday"]
hari = ["Minggu", "Senin", "Selasa", "Rabu", "Kamis", "Jumat", "Sabtu"]
bulan = ["Januari", "Februari", "Maret", "April", "Mei", "Juni", "Juli", "Agustus", "September", "Oktober", "November", "Desember"]
hr = timeNow.strftime("%A")
bln = timeNow.strftime("%m")
for i in range(len(day)):
if hr == day[i]: hasil = hari[i]
            bln = bulan[int(bln) - 1]
readTime = hasil + ", " + timeNow.strftime('%d') + " - " + bln + " - " + timeNow.strftime('%Y') + "\nJam : [ " + timeNow.strftime('%H:%M:%S') + " ]"
if receiver not in read['readPoint']:
ririn.sendMessage(receiver,"ʟᴜʀᴋɪɴɢ sᴇᴛ ᴏғғ")
else:
try:
del read['readPoint'][receiver]
del read['readMember'][receiver]
del read['readTime'][receiver]
except:
pass
ririn.sendMessage(receiver,"ᴅᴇʟᴇᴛᴇ ʀᴇᴀᴅɪɴɢ ᴘᴏɪɴᴛ : \n" + readTime)
elif cmd == "lurking reset":
tz = pytz.timezone("Asia/Makassar")
timeNow = datetime.now(tz=tz)
day = ["Sunday", "Monday", "Tuesday", "Wednesday", "Thursday","Friday", "Saturday"]
hari = ["Minggu", "Senin", "Selasa", "Rabu", "Kamis", "Jumat", "Sabtu"]
bulan = ["Januari", "Februari", "Maret", "April", "Mei", "Juni", "Juli", "Agustus", "September", "Oktober", "November", "Desember"]
hr = timeNow.strftime("%A")
bln = timeNow.strftime("%m")
for i in range(len(day)):
if hr == day[i]: hasil = hari[i]
            bln = bulan[int(bln) - 1]
readTime = hasil + ", " + timeNow.strftime('%d') + " - " + bln + " - " + timeNow.strftime('%Y') + "\nJam : [ " + timeNow.strftime('%H:%M:%S') + " ]"
if msg.to in read["readPoint"]:
try:
del read["readPoint"][msg.to]
del read["readMember"][msg.to]
del read["readTime"][msg.to]
del read["ROM"][msg.to]
except:
pass
read['readPoint'][receiver] = msg_id
read['readMember'][receiver] = ""
read['readTime'][receiver] = readTime
read['ROM'][receiver] = {}
ririn.sendMessage(msg.to, "ʀᴇsᴇᴛ ʀᴇᴀᴅɪɴɢ ᴘᴏɪɴᴛ : \n" + readTime)
else:
ririn.sendMessage(msg.to, "ʟᴜʀᴋɪɴɢ ɴᴏᴛ ᴀᴋᴛɪᴠᴇ, ᴄᴏᴜʟᴅ ɴᴏᴛ ʙᴇ ʀᴇsᴇᴛ")
elif cmd == "lurking":
tz = pytz.timezone("Asia/Makassar")
timeNow = datetime.now(tz=tz)
day = ["Sunday", "Monday", "Tuesday", "Wednesday", "Thursday","Friday", "Saturday"]
hari = ["Minggu", "Senin", "Selasa", "Rabu", "Kamis", "Jumat", "Sabtu"]
bulan = ["Januari", "Februari", "Maret", "April", "Mei", "Juni", "Juli", "Agustus", "September", "Oktober", "November", "Desember"]
hr = timeNow.strftime("%A")
bln = timeNow.strftime("%m")
for i in range(len(day)):
if hr == day[i]: hasil = hari[i]
            bln = bulan[int(bln) - 1]
readTime = hasil + ", " + timeNow.strftime('%d') + " - " + bln + " - " + timeNow.strftime('%Y') + "\nJam : [ " + timeNow.strftime('%H:%M:%S') + " ]"
if receiver in read['readPoint']:
                if not read["ROM"][receiver]:
ririn.sendMessage(receiver,"ɴᴏ sɪᴅᴇʀ")
else:
chiya = []
for rom in read["ROM"][receiver].items():
chiya.append(rom[1])
cmem = ririn.getContacts(chiya)
zx = ""
zxc = ""
zx2 = []
xpesan = '[ ʀ ᴇ ᴀ ᴅ ᴇ ʀ ]\n'
for x in range(len(cmem)):
xname = str(cmem[x].displayName)
pesan = ''
pesan2 = pesan+"@c\n"
xlen = str(len(zxc)+len(xpesan))
xlen2 = str(len(zxc)+len(pesan2)+len(xpesan)-1)
zx = {'S':xlen, 'E':xlen2, 'M':cmem[x].mid}
zx2.append(zx)
zxc += pesan2
text = xpesan+ zxc + "\n" + readTime
try:
ririn.sendMessage(receiver, text, contentMetadata={'MENTION':str('{"MENTIONEES":'+json.dumps(zx2).replace(' ','')+'}')}, contentType=0)
except Exception as error:
print (error)
pass
else:
ririn.sendMessage(receiver,"ʟᴜʀᴋɪɴɢ ɴᴏᴛ ᴀᴄᴛɪᴠᴇ")
elif cmd.startswith("mimicadd"):
targets = []
key = eval(msg.contentMetadata["MENTION"])
for x in key["MENTIONEES"]:
targets.append(x["M"])
for target in targets:
try:
wait["mimic"]["target"][target] = True
ririn.sendMessage(msg.to,"ᴛᴀʀɢᴇᴛ ᴀᴅᴅᴇᴅ")
break
except:
ririn.sendMessage(msg.to,"ғᴀɪʟᴇᴅ ᴀᴅᴅᴇᴅ ᴛᴀʀɢᴇᴛ")
break
elif cmd.startswith("mimicdel"):
targets = []
key = eval(msg.contentMetadata["MENTION"])
for x in key["MENTIONEES"]:
targets.append(x["M"])
for target in targets:
try:
del wait["mimic"]["target"][target]
ririn.sendMessage(msg.to,"ᴛᴀɢᴇᴛ ᴅᴇʟᴇᴛᴇᴅ")
break
except:
ririn.sendMessage(msg.to,"ғᴀɪʟ ᴅᴇʟᴇᴛᴇᴅ ᴛᴀʀɢᴇᴛ")
break
elif cmd == "mimiclist":
if wait["mimic"]["target"] == {}:
ririn.sendMessage(msg.to,"ɴᴏ ᴛᴀʀɢᴇᴛ")
else:
mc = "╔════[ ·✪·ᴍɪᴍɪᴄ ʟɪsᴛ·✪· ]════╗"
for mi_d in wait["mimic"]["target"]:
mc += "\n╠❂➣ "+ririn.getContact(mi_d).displayName
mc += "\n╚═════[ ✯ ᴅɴᴀ ʙᴏᴛ ✯ ]═════╝"
ririn.sendMessage(msg.to,mc)
elif cmd.startswith("mimic"):
sep = text.split(" ")
mic = text.replace(sep[0] + " ","")
if mic == "on":
if wait["mimic"]["status"] == False:
wait["mimic"]["status"] = True
ririn.sendMessage(msg.to,"ᴍɪᴍɪᴄ ᴏɴ")
elif mic == "off":
if wait["mimic"]["status"] == True:
wait["mimic"]["status"] = False
ririn.sendMessage(msg.to,"ᴍɪᴍɪᴄ ᴏғғ")
#------------------------------------============================------------------------------------#
#======================-----------✰ ᴅɴᴀ ʙᴏᴛ ✰-----------======================#
#------------------------------------============================------------------------------------#
elif cmd.startswith("checkwebsite"):
try:
sep = text.split(" ")
query = text.replace(sep[0] + " ","")
r = requests.get("http://rahandiapi.herokuapp.com/sswebAPI?key=betakey&link={}".format(urllib.parse.quote(query)))
data = r.text
data = json.loads(data)
ririn.sendImageWithURL(to, data["result"])
except Exception as error:
logError(error)
elif cmd.startswith("checkdate"):
try:
sep = msg.text.split(" ")
tanggal = msg.text.replace(sep[0] + " ","")
r = requests.get('https://script.google.com/macros/exec?service=AKfycbw7gKzP-WYV2F5mc9RaR7yE3Ve1yN91Tjs91hp_jHSE02dSv9w&nama=ervan&tanggal='+tanggal)
data=r.text
data=json.loads(data)
ret_ = "[ D A T E ]"
ret_ += "\nDate Of Birth : {}".format(str(data["data"]["lahir"]))
ret_ += "\nAge : {}".format(str(data["data"]["usia"]))
ret_ += "\nBirthday : {}".format(str(data["data"]["ultah"]))
ret_ += "\nZodiak : {}".format(str(data["data"]["zodiak"]))
ririn.sendMessage(to, str(ret_))
except Exception as error:
logError(error)
elif cmd.startswith("checkpraytime "):
separate = msg.text.split(" ")
location = msg.text.replace(separate[0] + " ","")
r = requests.get("http://api.corrykalam.net/apisholat.php?lokasi={}".format(location))
data = r.text
data = json.loads(data)
tz = pytz.timezone("Asia/Makassar")
timeNow = datetime.now(tz=tz)
if data[1] != "sᴜʙᴜʜ : " and data[2] != "ᴅᴢᴜʜᴜʀ : " and data[3] != "ᴀsʜᴀʀ : " and data[4] != "ᴍᴀɢʜʀɪʙ : " and data[5] != "ɪsʜᴀ : ":
ret_ = "╔═══[ ᴊᴀᴅᴡᴀʟ sʜᴏʟᴀᴛ ]"
ret_ += "\n╠══[ sᴇᴋɪᴛᴀʀ " + data[0] + " ]"
ret_ += "\n╠❂➣ ᴛᴀɴɢɢᴀʟ : " + datetime.strftime(timeNow,'%Y-%m-%d')
ret_ += "\n╠❂➣ ᴊᴀᴍ : " + datetime.strftime(timeNow,'%H:%M:%S')
ret_ += "\n╠❂➣ " + data[1]
ret_ += "\n╠❂➣ " + data[2]
ret_ += "\n╠❂➣ " + data[3]
ret_ += "\n╠❂➣ " + data[4]
ret_ += "\n╠❂➣ " + data[5]
ret_ += "\n╚════[ ✯ ᴅɴᴀ ʙᴏᴛ ✯ ]"
ririn.sendMessage(msg.to, str(ret_))
elif cmd.startswith("checkweather "):
try:
sep = text.split(" ")
location = text.replace(sep[0] + " ","")
r = requests.get("http://api.corrykalam.net/apicuaca.php?kota={}".format(location))
data = r.text
data = json.loads(data)
tz = pytz.timezone("Asia/Makassar")
timeNow = datetime.now(tz=tz)
if "result" not in data:
ret_ = "╔═══[ ᴡᴇᴀᴛʜᴇʀ sᴛᴀᴛᴜs ]"
ret_ += "\n╠❂➣ ʟᴏᴄᴀᴛɪᴏɴ : " + data[0].replace("Temperatur di kota ","")
ret_ += "\n╠❂➣ sᴜʜᴜ : " + data[1].replace("Suhu : ","") + "°ᴄ"
ret_ += "\n╠❂➣ ᴋᴇʟᴇᴍʙᴀʙᴀɴ : " + data[2].replace("Kelembaban : ","") + "%"
ret_ += "\n╠❂➣ ᴛᴇᴋᴀɴᴀɴ ᴜᴅᴀʀᴀ : " + data[3].replace("Tekanan udara : ","") + "ʜᴘᴀ "
ret_ += "\n╠❂➣ ᴋᴇᴄᴇᴘᴀᴛᴀɴ ᴀɴɢɪɴ : " + data[4].replace("Kecepatan angin : ","") + "ᴍ/s"
ret_ += "\n╠════[ ᴛɪᴍᴇ sᴛᴀᴛᴜs ]"
ret_ += "\n╠❂➣ ᴛᴀɴɢɢᴀʟ : " + datetime.strftime(timeNow,'%Y-%m-%d')
ret_ += "\n╠❂➣ ᴊᴀᴍ : " + datetime.strftime(timeNow,'%H:%M:%S') + " ᴡɪʙ"
ret_ += "\n╚════[ ✯ ᴅɴᴀ ʙᴏᴛ ✯ ]"
ririn.sendMessage(to, str(ret_))
except Exception as error:
logError(error)
elif cmd.startswith("checklocation "):
try:
sep = text.split(" ")
location = text.replace(sep[0] + " ","")
r = requests.get("http://api.corrykalam.net/apiloc.php?lokasi={}".format(location))
data = r.text
data = json.loads(data)
if data[0] != "" and data[1] != "" and data[2] != "":
link = "https://www.google.co.id/maps/@{},{},15z".format(str(data[1]), str(data[2]))
ret_ = "╔═══[ ʟᴏᴄᴀᴛɪᴏɴ sᴛᴀᴛᴜs ]"
ret_ += "\n╠❂➣ ʟᴏᴄᴀᴛɪᴏɴ : " + data[0]
ret_ += "\n╠❂➣ ɢᴏᴏɢʟᴇ ᴍᴀᴘs : " + link
ret_ += "\n╚════[ ✯ ᴅɴᴀ ʙᴏᴛ ✯ ]"
ririn.sendMessage(to, str(ret_))
except Exception as error:
logError(error)
elif cmd.startswith("instainfo"):
try:
sep = text.split(" ")
search = text.replace(sep[0] + " ","")
r = requests.get("https://www.instagram.com/{}/?__a=1".format(search))
data = r.text
data = json.loads(data)
if data != []:
ret_ = "╔══[ Profile Instagram ]"
ret_ += "\n╠ Nama : {}".format(str(data["graphql"]["user"]["full_name"]))
ret_ += "\n╠ Username : {}".format(str(data["graphql"]["user"]["username"]))
ret_ += "\n╠ Bio : {}".format(str(data["graphql"]["user"]["biography"]))
ret_ += "\n╠ Pengikut : {}".format(str(data["graphql"]["user"]["edge_followed_by"]["count"]))
ret_ += "\n╠ Diikuti : {}".format(str(data["graphql"]["user"]["edge_follow"]["count"]))
if data["graphql"]["user"]["is_verified"] == True:
ret_ += "\n╠ Verifikasi : Sudah"
else:
ret_ += "\n╠ Verifikasi : Belum"
if data["graphql"]["user"]["is_private"] == True:
ret_ += "\n╠ Akun Pribadi : Iya"
else:
ret_ += "\n╠ Akun Pribadi : Tidak"
ret_ += "\n╠ Total Post : {}".format(str(data["graphql"]["user"]["edge_owner_to_timeline_media"]["count"]))
ret_ += "\n╚══[ https://www.instagram.com/{} ]".format(search)
path = data["graphql"]["user"]["profile_pic_url_hd"]
ririn.sendImageWithURL(to, str(path))
ririn.sendMessage(to, str(ret_))
except Exception as error:
logError(error)
elif cmd.startswith("instapost"):
try:
sep = text.split(" ")
text = text.replace(sep[0] + " ","")
cond = text.split("|")
username = cond[0]
no = cond[1]
r = requests.get("http://rahandiapi.herokuapp.com/instapost/{}/{}?key=betakey".format(str(username), str(no)))
data = r.text
data = json.loads(data)
if data["find"] == True:
if data["media"]["mediatype"] == 1:
ririn.sendImageWithURL(msg.to, str(data["media"]["url"]))
if data["media"]["mediatype"] == 2:
ririn.sendVideoWithURL(msg.to, str(data["media"]["url"]))
ret_ = "╔══[ Info Post ]"
ret_ += "\n╠ Jumlah Like : {}".format(str(data["media"]["like_count"]))
ret_ += "\n╠ Jumlah Comment : {}".format(str(data["media"]["comment_count"]))
ret_ += "\n╚══[ Caption ]\n{}".format(str(data["media"]["caption"]))
ririn.sendMessage(to, str(ret_))
except Exception as error:
logError(error)
elif cmd.startswith("instastory"):
try:
sep = text.split(" ")
text = text.replace(sep[0] + " ","")
cond = text.split("|")
search = str(cond[0])
if len(cond) == 2:
r = requests.get("http://rahandiapi.herokuapp.com/instastory/{}?key=betakey".format(search))
data = r.text
data = json.loads(data)
if data["url"] != []:
num = int(cond[1])
if num <= len(data["url"]):
search = data["url"][num - 1]
if search["tipe"] == 1:
ririn.sendImageWithURL(to, str(search["link"]))
if search["tipe"] == 2:
ririn.sendVideoWithURL(to, str(search["link"]))
except Exception as error:
logError(error)
elif cmd.startswith("say-"):
sep = text.split("-")
sep = sep[1].split(" ")
lang = sep[0]
say = text.replace("say-" + lang + " ","")
if lang not in list_language["list_textToSpeech"]:
return ririn.sendMessage(to, "ʟᴀɴɢᴜᴀɢᴇ ɴᴏᴛ ғᴏᴜɴᴅ")
tts = gTTS(text=say, lang=lang)
tts.save("hasil.mp3")
ririn.sendAudio(to,"hasil.mp3")
elif cmd.startswith("searchimage"):
try:
separate = msg.text.split(" ")
search = msg.text.replace(separate[0] + " ","")
r = requests.get("http://rahandiapi.herokuapp.com/imageapi?key=betakey&q={}".format(search))
data = r.text
data = json.loads(data)
if data["result"] != []:
items = data["result"]
path = random.choice(items)
a = items.index(path)
b = len(items)
ririn.sendImageWithURL(to, str(path))
except Exception as error:
logError(error)
elif cmd.startswith("searchmusic "):
sep = msg.text.split(" ")
query = msg.text.replace(sep[0] + " ","")
cond = query.split("|")
search = str(cond[0])
result = requests.get("http://api.ntcorp.us/joox/search?q={}".format(str(search)))
data = result.text
data = json.loads(data)
if len(cond) == 1:
num = 0
ret_ = "╔══[ ʀᴇsᴜʟᴛ ᴍᴜsɪᴄ ]"
for music in data["result"]:
num += 1
ret_ += "\n╠ {}. {}".format(str(num), str(music["single"]))
ret_ += "\n╚══[ ᴛᴏᴛᴀʟ {} ᴍᴜsɪᴄ ] ".format(str(len(data["result"])))
ret_ += "\n\nᴜɴᴛᴜᴋ ᴍᴇʟɪʜᴀᴛ ᴅᴇᴛᴀɪʟs ᴍᴜsɪᴄ, sɪʟᴀʜᴋᴀɴ ɢᴜɴᴀᴋᴀɴ ᴄᴏᴍᴍᴀɴᴅ {}sᴇᴀʀᴄʜᴍᴜsɪᴄ {}|「ɴᴜᴍʙᴇʀ」".format(str(setKey), str(search))
ririn.sendMessage(to, str(ret_))
elif len(cond) == 2:
num = int(cond[1])
if num <= len(data["result"]):
music = data["result"][num - 1]
result = requests.get("http://api.ntcorp.us/joox/song_info?sid={}".format(str(music["sid"])))
data = result.text
data = json.loads(data)
if data["result"] != []:
ret_ = "╔══════[ ᴍᴜsɪᴄ ]"
ret_ += "\n╠❂➣ ᴛɪᴛʟᴇ : {}".format(str(data["result"]["song"]))
ret_ += "\n╠❂➣ ᴀʟʙᴜᴍ : {}".format(str(data["result"]["album"]))
ret_ += "\n╠❂➣ sɪᴢᴇ : {}".format(str(data["result"]["size"]))
ret_ += "\n╠❂➣ ʟɪɴᴋ : {}".format(str(data["result"]["mp3"][0]))
ret_ += "\n╚════[ ✯ ᴅɴᴀ ʙᴏᴛ ✯ ]"
ririn.sendImageWithURL(to, str(data["result"]["img"]))
ririn.sendMessage(to, str(ret_))
ririn.sendAudioWithURL(to, str(data["result"]["mp3"][0]))
elif cmd.startswith("searchlyric"):
sep = msg.text.split(" ")
query = msg.text.replace(sep[0] + " ","")
cond = query.split("|")
search = cond[0]
api = requests.get("http://api.secold.com/joox/cari/{}".format(str(search)))
data = api.text
data = json.loads(data)
if len(cond) == 1:
num = 0
ret_ = "╔══[ ʀᴇsᴜʟᴛ ʟʏʀɪᴄ ]"
for lyric in data["results"]:
num += 1
ret_ += "\n╠❂➣ {}. {}".format(str(num), str(lyric["single"]))
ret_ += "\n╚══[ ᴛᴏᴛᴀʟ {} ᴍᴜsɪᴄ ]".format(str(len(data["results"])))
ret_ += "\n\nᴜɴᴛᴜᴋ ᴍᴇʟɪʜᴀᴛ ᴅᴇᴛᴀɪʟs ʟʏʀɪᴄ, sɪʟᴀʜᴋᴀɴ ɢᴜɴᴀᴋᴀɴ ᴄᴏᴍᴍᴀɴᴅ {}sᴇᴀʀᴄʜʟʏʀɪᴄ {}|「ɴᴜᴍʙᴇʀ」".format(str(setKey), str(search))
ririn.sendMessage(to, str(ret_))
elif len(cond) == 2:
num = int(cond[1])
if num <= len(data["results"]):
lyric = data["results"][num - 1]
api = requests.get("http://api.secold.com/joox/sid/{}".format(str(lyric["songid"])))
data = api.text
data = json.loads(data)
lyrics = data["results"]["lyric"]
lyric = lyrics.replace('ti:','Title - ')
lyric = lyric.replace('ar:','Artist - ')
lyric = lyric.replace('al:','Album - ')
removeString = "[1234567890.:]"
for char in removeString:
lyric = lyric.replace(char,'')
ririn.sendMessage(msg.to, str(lyric))
elif cmd.startswith("searchyoutube"):
sep = text.split(" ")
search = text.replace(sep[0] + " ","")
params = {"search_query": search}
r = requests.get("https://www.youtube.com/results", params = params)
soup = BeautifulSoup(r.content, "html5lib")
ret_ = "╔══[ ʀᴇsᴜʟᴛ ʏᴏᴜᴛᴜʙᴇ ]"
datas = []
for data in soup.select(".yt-lockup-title > a[title]"):
if "&lists" not in data["href"]:
datas.append(data)
for data in datas:
ret_ += "\n╠❂➣{} ]".format(str(data["title"]))
ret_ += "\n╠❂ https://www.youtube.com{}".format(str(data["href"]))
ret_ += "\n╚══[ ᴛᴏᴛᴀʟ {} ᴠɪᴅᴇᴏ ]".format(len(datas))
ririn.sendMessage(to, str(ret_))
elif cmd.startswith("tr-"):
sep = text.split("-")
sep = sep[1].split(" ")
lang = sep[0]
say = text.replace("tr-" + lang + " ","")
if lang not in list_language["list_translate"]:
return ririn.sendMessage(to, "Language not found")
translator = Translator()
hasil = translator.translate(say, dest=lang)
A = hasil.text
ririn.sendMessage(to, str(A))
#------------------------------------============================------------------------------------#
#======================-----------✰ ᴅɴᴀ ʙᴏᴛ ✰-----------======================#
#------------------------------------============================------------------------------------#
if text.lower() == "mykey":
ririn.sendMessage(to, "ᴋᴇʏᴄᴏᴍᴍᴀɴᴅ sᴀᴀᴛ ɪɴɪ [ {} ]".format(str(wait["keyCommand"])))
elif text.lower() == "setkey on":
wait["setKey"] = True
ririn.sendMessage(to, "ʙᴇʀʜᴀsɪʟ ᴍᴇɴɢᴀᴋᴛɪғᴋᴀɴ sᴇᴛᴋᴇʏ")
elif text.lower() == "setkey off":
wait["setKey"] = False
ririn.sendMessage(to, "ʙᴇʀʜᴀsɪʟ ᴍᴇɴᴏɴᴀᴋᴛɪғᴋᴀɴ sᴇᴛᴋᴇʏ")
#------------------------------------============================------------------------------------#
#======================-----------✰ ᴅɴᴀ ʙᴏᴛ ✰-----------======================#
#------------------------------------============================------------------------------------#
elif msg.contentType == 1:
if wait["changePictureProfile"] == True:
path = ririn.downloadObjectMsg(msg_id)
wait["changePictureProfile"] = False
ririn.updateProfilePicture(path)
ririn.sendMessage(to, "sᴜᴄᴄᴇs ᴄʜᴀɴɢᴇ ᴘʜᴏᴛᴏ ᴘʀᴏғɪʟᴇ")
if msg.toType == 2:
if to in wait["changeGroupPicture"]:
path = ririn.downloadObjectMsg(msg_id)
wait["changeGroupPicture"].remove(to)
ririn.updateGroupPicture(to, path)
ririn.sendMessage(to, "sᴜᴄᴄᴇs ᴄʜᴀɴɢᴇ ᴘʜᴏᴛᴏ ɢʀᴏᴜᴘ")
elif msg.contentType == 7:
if wait["checkSticker"] == True:
stk_id = msg.contentMetadata['STKID']
stk_ver = msg.contentMetadata['STKVER']
pkg_id = msg.contentMetadata['STKPKGID']
ret_ = "╔════[ sᴛɪᴄᴋᴇʀ ɪɴғᴏ ] "
ret_ += "\n╠❂➣ sᴛɪᴄᴋᴇʀ ɪᴅ : {}".format(stk_id)
ret_ += "\n╠❂➣ sᴛɪᴄᴋᴇʀ ᴘᴀᴄᴋᴀɢᴇs ɪᴅ : {}".format(pkg_id)
ret_ += "\n╠❂➣ sᴛɪᴄᴋᴇʀ ᴠᴇʀsɪᴏɴ : {}".format(stk_ver)
ret_ += "\n╠❂➣ sᴛɪᴄᴋᴇʀ ᴜʀʟ : line://shop/detail/{}".format(pkg_id)
ret_ += "\n╚════[ ✯ ᴅɴᴀ ʙᴏᴛ ✯ ]"
ririn.sendMessage(to, str(ret_))
elif msg.contentType == 13:
if wait["checkContact"] == True:
try:
contact = ririn.getContact(msg.contentMetadata["mid"])
if ririn != None:
cover = ririn.getProfileCoverURL(msg.contentMetadata["mid"])
else:
cover = "Tidak dapat masuk di line channel"
path = "http://dl.profile.line-cdn.net/{}".format(str(contact.pictureStatus))
try:
ririn.sendImageWithURL(to, str(path))
except:
pass
ret_ = "╔═══[ ᴅᴇᴛᴀɪʟs ᴄᴏɴᴛᴀᴄᴛ ]"
ret_ += "\n╠❂➣ ɴᴀᴍᴀ : {}".format(str(contact.displayName))
ret_ += "\n╠❂➣ ᴍɪᴅ : {}".format(str(msg.contentMetadata["mid"]))
ret_ += "\n╠❂➣ ʙɪᴏ : {}".format(str(contact.statusMessage))
ret_ += "\n╠❂➣ ɢᴀᴍʙᴀʀ ᴘʀᴏғɪʟᴇ : http://dl.profile.line-cdn.net/{}".format(str(contact.pictureStatus))
ret_ += "\n╠❂➣ ɢᴀᴍʙᴀʀ ᴄᴏᴠᴇʀ : {}".format(str(cover))
ret_ += "\n╚════[ ✯ ᴅɴᴀ ʙᴏᴛ ✯ ]"
ririn.sendMessage(to, str(ret_))
except:
ririn.sendMessage(to, "ᴋᴏɴᴛᴀᴋ ᴛɪᴅᴀᴋ ᴠᴀʟɪᴅ")
elif msg.contentType == 16:
if wait["checkPost"] == True:
try:
ret_ = "╔════[ ᴅᴇᴛᴀɪʟs ᴘᴏsᴛ ]"
if msg.contentMetadata["serviceType"] == "GB":
contact = ririn.getContact(sender)
auth = "\n╠❂➣ ᴀᴜᴛʜᴏʀ : {}".format(str(contact.displayName))
else:
auth = "\n╠❂➣ ᴀᴜᴛʜᴏʀ : {}".format(str(msg.contentMetadata["serviceName"]))
purl = "\n╠❂➣ ᴜʀʟ : {}".format(str(msg.contentMetadata["postEndUrl"]).replace("line://","https://line.me/R/"))
ret_ += auth
ret_ += purl
if "mediaOid" in msg.contentMetadata:
object_ = msg.contentMetadata["mediaOid"].replace("svc=myhome|sid=h|","")
if msg.contentMetadata["mediaType"] == "V":
if msg.contentMetadata["serviceType"] == "GB":
ourl = "\n╠❂➣ ᴏʙᴊᴇᴄᴛ ᴜʀʟ : https://obs-us.line-apps.com/myhome/h/download.nhn?tid=612w&{}".format(str(msg.contentMetadata["mediaOid"]))
murl = "\n╠❂➣ ᴍᴇᴅɪᴀ ᴜʀʟ : https://obs-us.line-apps.com/myhome/h/download.nhn?{}".format(str(msg.contentMetadata["mediaOid"]))
else:
ourl = "\n╠❂➣ ᴏʙᴊᴇᴄᴛ ᴜʀʟ : https://obs-us.line-apps.com/myhome/h/download.nhn?tid=612w&{}".format(str(object_))
murl = "\n╠❂➣ ᴍᴇᴅɪᴀ ᴜʀʟ : https://obs-us.line-apps.com/myhome/h/download.nhn?{}".format(str(object_))
ret_ += murl
else:
if msg.contentMetadata["serviceType"] == "GB":
ourl = "\n╠❂➣ ᴏʙᴊᴇᴄᴛ ᴜʀʟ : https://obs-us.line-apps.com/myhome/h/download.nhn?tid=612w&{}".format(str(msg.contentMetadata["mediaOid"]))
else:
ourl = "\n╠❂➣ ᴏʙᴊᴇᴄᴛ ᴜʀʟ : https://obs-us.line-apps.com/myhome/h/download.nhn?tid=612w&{}".format(str(object_))
ret_ += ourl
if "stickerId" in msg.contentMetadata:
stck = "\n╠❂➣ sᴛɪᴄᴋᴇʀ : https://line.me/R/shop/detail/{}".format(str(msg.contentMetadata["packageId"]))
ret_ += stck
if "text" in msg.contentMetadata:
text = "\n╠❂➣ ɴᴏᴛᴇ : {}".format(str(msg.contentMetadata["text"]))
ret_ += text
ret_ += "\n╚════[ ✯ ᴅɴᴀ ʙᴏᴛ ✯ ]"
ririn.sendMessage(to, str(ret_))
except:
ririn.sendMessage(to, "ɪɴᴠᴀʟɪᴅ ᴘᴏsᴛ")
except Exception as error:
logError(error)
traceback.print_tb(error.__traceback__)
if op.type == 26:
msg = op.message
if wait["autoResponPc"] == True:
if msg.toType == 0:
# ririn.sendChatChecked(msg._from,msg.id)
contact = ririn.getContact(msg._from)
cName = contact.displayName
balas = ["╔════════════════════╗\n 「ᴀᴜᴛᴏ ʀᴇᴘʟʏ」\n ʙʏ:\n ✰ ᴅɴᴀ ʙᴏᴛ ✰\n╚════════════════════╝\n\nʜᴀʟʟᴏ 「" + cName + "」\nᴍᴏʜᴏɴ ᴍᴀᴀғ sᴀʏᴀ sᴇᴅᴀɴɢ sɪʙᴜᴋ, ɪɴɪ ᴀᴅᴀʟᴀʜ ᴘᴇsᴀɴ ᴏᴛᴏᴍᴀᴛɪs, ᴊɪᴋᴀ ᴀᴅᴀ ʏᴀɴɢ ᴘᴇɴᴛɪɴɢ ᴍᴏʜᴏɴ ʜᴜʙᴜɴɢɪ sᴀʏᴀ ɴᴀɴᴛɪ, ᴛᴇʀɪᴍᴀᴋᴀsɪʜ...","╔════════════════════╗\n 「ᴀᴜᴛᴏ ʀᴇᴘʟʏ」\n ʙʏ:\n ✰ ᴅɴᴀ ʙᴏᴛ ✰\n╚════════════════════╝\n\nʜᴀʟʟᴏ 「" + cName + "」\nsᴀʏᴀ ʟᴀɢɪ sɪʙᴜᴋ ʏᴀ ᴋᴀᴋ ᴊᴀɴɢᴀɴ ᴅɪɢᴀɴɢɢᴜ","╔════════════════════╗\n 「ᴀᴜᴛᴏ ʀᴇᴘʟʏ」\n ʙʏ:\n ✰ ᴅɴᴀ ʙᴏᴛ ✰\n╚════════════════════╝\n\nʜᴀʟʟᴏ 「" + cName + "」\nsᴀʏᴀ sᴇᴅᴀɴɢ ᴛɪᴅᴜʀ ᴋᴀᴋ"]
                dee = random.choice(balas)
ririn.sendImageWithURL(msg._from, "http://dl.profile.line-cdn.net{}".format(contact.picturePath))
ririn.sendMessage(msg._from,dee)
if op.type == 26:
try:
print ("[ 26 ] RECIEVE MESSAGE")
msg = op.message
text = msg.text
msg_id = msg.id
receiver = msg.to
sender = msg._from
if msg.toType == 0 or msg.toType == 1 or msg.toType == 2:
if msg.toType == 0:
if sender != ririn.profile.mid:
to = sender
else:
to = receiver
elif msg.toType == 1:
to = receiver
elif msg.toType == 2:
to = receiver
if wait["autoRead"] == True:
ririn.sendChatChecked(to, msg_id)
if to in read["readPoint"]:
if sender not in read["ROM"][to]:
read["ROM"][to][sender] = True
if sender in wait["mimic"]["target"] and wait["mimic"]["status"] == True and wait["mimic"]["target"][sender] == True:
text = msg.text
if text is not None:
ririn.sendMessage(msg.to,text)
if wait["unsendMessage"] == True:
try:
msg = op.message
if msg.toType == 0:
ririn.log("[{} : {}]".format(str(msg._from), str(msg.text)))
else:
ririn.log("[{} : {}]".format(str(msg.to), str(msg.text)))
msg_dict[msg.id] = {"text": msg.text, "from": msg._from, "createdTime": msg.createdTime, "contentType": msg.contentType, "contentMetadata": msg.contentMetadata}
except Exception as error:
logError(error)
if msg.contentType == 0:
if text is None:
return
if "/ti/g/" in msg.text.lower():
if wait["autoJoinTicket"] == True:
link_re = re.compile('(?:line\:\/|line\.me\/R)\/ti\/g\/([a-zA-Z0-9_-]+)?')
links = link_re.findall(text)
n_links = []
for l in links:
if l not in n_links:
n_links.append(l)
for ticket_id in n_links:
group = ririn.findGroupByTicket(ticket_id)
ririn.acceptGroupInvitationByTicket(group.id,ticket_id)
ririn.sendMessage(to, "sᴜᴄᴄᴇssғᴜʟʟʏ ᴇɴᴛᴇʀᴇᴅ ᴛʜᴇ ɢʀᴏᴜᴘ %s" % str(group.name))
if 'MENTION' in msg.contentMetadata.keys()!= None:
names = re.findall(r'@(\w+)', text)
mention = ast.literal_eval(msg.contentMetadata['MENTION'])
mentionees = mention['MENTIONEES']
lists = []
for mention in mentionees:
if ririnMid in mention["M"]:
if wait["autoRespon"] == True:
# ririn.sendChatChecked(msg._from,msg.id)
contact = ririn.getContact(msg._from)
ririn.sendImageWithURL(msg._from, "http://dl.profile.line-cdn.net{}".format(contact.picturePath))
sendMention(sender, "makasih @! ,\nSudah tag aku", [sender])
break
except Exception as error:
logError(error)
traceback.print_tb(error.__traceback__)
if op.type == 65:
print ("[ 65 ] NOTIFIED DESTROY MESSAGE")
if wait["unsendMessage"] == True:
try:
at = op.param1
msg_id = op.param2
if msg_id in msg_dict:
if msg_dict[msg_id]["from"]:
contact = ririn.getContact(msg_dict[msg_id]["from"])
if contact.displayNameOverridden != None:
name_ = contact.displayNameOverridden
else:
name_ = contact.displayName
ret_ = "sᴇɴᴅ ᴍᴇssᴀɢᴇ ᴄᴀɴᴄᴇʟʟᴇᴅ."
ret_ += "\nsᴇɴᴅᴇʀ : @! "
ret_ += "\nsᴇɴᴅ ᴀᴛ : {}".format(str(dt_to_str(cTime_to_datetime(msg_dict[msg_id]["createdTime"]))))
ret_ += "\nᴛʏᴘᴇ : {}".format(str(Type._VALUES_TO_NAMES[msg_dict[msg_id]["contentType"]]))
ret_ += "\nᴛᴇxᴛ : {}".format(str(msg_dict[msg_id]["text"]))
sendMention(at, str(ret_), [contact.mid])
del msg_dict[msg_id]
else:
ririn.sendMessage(at,"sᴇɴᴛᴍᴇssᴀɢᴇ ᴄᴀɴᴄᴇʟʟᴇᴅ,ʙᴜᴛ ɪ ᴅɪᴅɴ'ᴛ ʜᴀᴠᴇ ʟᴏɢ ᴅᴀᴛᴀ.\nsᴏʀʀʏ > <")
except Exception as error:
logError(error)
traceback.print_tb(error.__traceback__)
if op.type == 55:
try:
group_id = op.param1
user_id=op.param2
subprocess.Popen('echo "'+ user_id+'|'+str(op.createdTime)+'" >> dataSeen/%s.txt' % group_id, shell=True, stdout=subprocess.PIPE, )
except Exception as e:
print(e)
if op.type == 55:
try:
if cctv['cyduk'][op.param1]==True:
if op.param1 in cctv['point']:
Name = ririn.getContact(op.param2).displayName
if Name in cctv['sidermem'][op.param1]:
pass
else:
cctv['sidermem'][op.param1] += "\n• " + Name
if " " in Name:
nick = Name.split(' ')
if len(nick) == 2:
ririn.sendMessage(op.param1, "ᴡᴏʏ " + "☞ " + Name + " ☜" + "\nᴅɪᴇᴍ ᴅɪᴇᴍ ʙᴀᴇ...\nsɪɴɪ ɪᴋᴜᴛ ɴɢᴏᴘɪ ")
else:
ririn.sendMessage(op.param1, "ᴍʙʟᴏ " + "☞ " + Name + " ☜" + "\nɴɢɪɴᴛɪᴘ ᴅᴏᴀɴɢ ʟᴜ\nsɪɴɪ ɢᴀʙᴜɴɢ ")
else:
ririn.sendMessage(op.param1, "ᴛᴏɴɢ " + "☞ " + Name + " ☜" + "\nɴɢᴀᴘᴀɪɴ ʟᴜ...\nɢᴀʙᴜɴɢ ᴄʜᴀᴛ sɪɴɪ")
else:
pass
else:
pass
except:
pass
else:
pass
if op.type == 55:
print ("[ 55 ] NOTIFIED READ MESSAGE")
try:
if op.param1 in read['readPoint']:
if op.param2 in read['readMember'][op.param1]:
pass
else:
read['readMember'][op.param1] += op.param2
read['ROM'][op.param1][op.param2] = op.param2
else:
pass
except Exception as error:
logError(error)
traceback.print_tb(error.__traceback__)
except Exception as error:
logError(error)
traceback.print_tb(error.__traceback__)
while True:
try:
delete_log()
ops = ririnPoll.singleTrace(count=50)
if ops is not None:
for op in ops:
ririnBot(op)
ririnPoll.setRevision(op.revision)
except Exception as error:
logError(error)
def atend():
print("Saving")
with open("Log_data.json","w",encoding='utf8') as f:
json.dump(msg_dict, f, ensure_ascii=False, indent=4,separators=(',', ': '))
print("BYE")
atexit.register(atend)
| [
"noreply@github.com"
] | Kaneki711.noreply@github.com |
8100083afb33211bd35f7ed5b0458867af9443b8 | b961b17c346800f93e49c0888e89f9d8e4952066 | Trimestre1/Ejercicio0.py | e2831d265127d823ba96e245da8077aacf0bb20c | [] | no_license | pabloTSDAW/Programacion-Python | 425e652b9fc019f7c5486270ca7bc41c60559207 | 9387671252433afc03cb8b47c6176ee649a64539 | refs/heads/master | 2021-08-18T19:20:41.190723 | 2017-11-23T16:27:08 | 2017-11-23T16:27:08 | 111,829,919 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 111 | py | """Write a program that prints from 10 down to 1"""
x=10
while x>0:
print(x)
x=x-1
| [
"pablotsdaw@gmail.com"
] | pablotsdaw@gmail.com |
5df69f02dee8d7c0128dc781ba6d8d2646d5868e | 266a80aec89479cd35266ad392abf4cb65634eff | /raser/g4particles.py | b1eedc3be0d2f77ecc4397a91571d4927c98545a | [
"MIT"
] | permissive | yangtaogit/raser | ada3f75c98b1aa75a32d1245cf760e13798be5c9 | ae4c58a3577cb27898c8ee178c40cefa00e01f51 | refs/heads/main | 2023-09-03T09:01:51.939955 | 2021-11-17T14:12:44 | 2021-11-17T14:12:44 | 351,357,290 | 0 | 0 | MIT | 2021-11-17T14:12:45 | 2021-03-25T08:13:39 | Python | UTF-8 | Python | false | false | 19,997 | py | # -*- encoding: utf-8 -*-
'''
Description:
geant4_pybind simulates the energy deposition of a
beta-source time resolution experiment
@Date : 2021/09/02 12:46:27
@Author : tanyuhang
@version : 1.0
'''
import geant4_pybind as g4b
import sys
import numpy as np
# Geant4 main process
class Particles:
def __init__(self, my_d, my_f, dset):
"""
Description:
Geant4 main process
Simulate s_num electrons through device
Record the energy deposition in the device
Parameters:
---------
energy_steps : list
Energy deposition of each step in simulation
edep_devices : list
Total energy deposition in device
@Modify:
---------
2021/09/02
"""
g4_dic = dset.pygeant4
my_g4d = MyDetectorConstruction(my_d,my_f,g4_dic['name'],g4_dic['maxstep'])
if g4_dic['g4_vis']:
ui = None
ui = g4b.G4UIExecutive(len(sys.argv), sys.argv)
gRunManager = g4b.G4RunManagerFactory.CreateRunManager(g4b.G4RunManagerType.Default)
rand_engine= g4b.RanecuEngine()
g4b.HepRandom.setTheEngine(rand_engine)
g4b.HepRandom.setTheSeed(dset.g4seed)
gRunManager.SetUserInitialization(my_g4d)
# set physics list
physics_list = g4b.FTFP_BERT()
physics_list.SetVerboseLevel(1)
physics_list.RegisterPhysics(g4b.G4StepLimiterPhysics())
gRunManager.SetUserInitialization(physics_list)
# define global parameters
global s_eventIDs,s_edep_devices,s_p_steps,s_energy_steps,s_events_angle
s_eventIDs,s_edep_devices,s_p_steps,s_energy_steps,s_events_angle=[],[],[],[],[]
#define action
gRunManager.SetUserInitialization(MyActionInitialization(
g4_dic['par_in'],
g4_dic['par_out']))
if g4_dic['g4_vis']:
visManager = g4b.G4VisExecutive()
visManager.Initialize()
UImanager = g4b.G4UImanager.GetUIpointer()
UImanager.ApplyCommand('/control/execute init_vis.mac')
else:
UImanager = g4b.G4UImanager.GetUIpointer()
UImanager.ApplyCommand('/run/initialize')
gRunManager.BeamOn(int(dset.total_events))
if g4_dic['g4_vis']:
ui.SessionStart()
self.p_steps=s_p_steps
self.energy_steps=s_energy_steps
self.edep_devices=s_edep_devices
self.events_angle=s_events_angle
self.init_tz_device = my_g4d.init_tz_device
del s_eventIDs,s_edep_devices,s_p_steps,s_energy_steps,s_events_angle
def __del__(self):
pass
#Geant4 for particle drift path
class MyDetectorConstruction(g4b.G4VUserDetectorConstruction):
"My Detector Construction"
def __init__(self,my_d,my_f,sensor_model,maxStep=0.5):
g4b.G4VUserDetectorConstruction.__init__(self)
self.solid = {}
self.logical = {}
self.physical = {}
self.checkOverlaps = True
self.create_world(my_d)
#3D source order: beta->sic->si
#2D source order: beta->Si->SiC
tx_all = my_d.l_x/2.0*g4b.um
ty_all = my_d.l_y/2.0*g4b.um
if "plugin3D" in sensor_model:
tz_Si = 0*g4b.um
tz_device = 10000*g4b.um+my_d.l_z/2.0*g4b.um
self.init_tz_device = 10000
tz_pcb2 = 10000*g4b.um-750*g4b.um
device_x = (my_f.sx_r-my_f.sx_l)*g4b.um
device_y = (my_f.sy_r-my_f.sy_l)*g4b.um
device_z = my_d.l_z*g4b.um
elif "planar3D" in sensor_model:
tz_Si = 10000*g4b.um
tz_device = my_d.l_z/2.0*g4b.um
self.init_tz_device = 0
tz_pcb2 = -1100*g4b.um
device_x = my_d.l_x*g4b.um
device_y = my_d.l_y*g4b.um
device_z = my_d.l_z*g4b.um
self.create_AlorSi_box(
name = "Al",
sidex = my_d.l_x*g4b.um,
sidey = my_d.l_y*g4b.um,
sidez = 10*g4b.um,
translation = [tx_all,ty_all,15000*g4b.um],
material_type = "G4_Al",
colour = [1,0.1,0.8],
mother = 'world')
self.create_AlorSi_box(
name = "Si_main",
sidex = 1300*g4b.um,
sidey = 1300*g4b.um,
sidez = 33*g4b.um,
translation = [tx_all,ty_all,tz_Si],
material_type = "G4_Si",
colour = [1,1,1],
mother = 'world')
self.create_AlorSi_box(
name = "Si_sub",
sidex = 1300*g4b.um,
sidey = 1300*g4b.um,
sidez = 300*g4b.um,
translation = [tx_all,ty_all,
tz_Si-166.5*g4b.um],
material_type = "G4_Si",
colour = [0,0,1],
mother = 'world')
self.create_pcb_board(
name = "pcb1",
sidex = 20000*g4b.um,
sidey = 20000*g4b.um,
sidez = 1500*g4b.um,
translation = [tx_all,ty_all,
tz_Si-1066.5*g4b.um],
tub_radius = 500*g4b.um,
tub_depth = 1500*g4b.um,
material_Si = "Si",
material_O = "O",
colour = [0,0.5,0.8],
mother = 'world')
self.create_sic_box(
name = "Device",
sidex = device_x,
sidey = device_y,
sidez = device_z,
translation = [tx_all,ty_all,tz_device],
material_Si = "Si",
material_c = "C",
colour = [1,0,0],
mother = 'world')
self.create_pcb_board(
name = "pcb2",
sidex = 20000*g4b.um,
sidey = 20000*g4b.um,
sidez = 1500*g4b.um,
translation = [tx_all,ty_all,tz_pcb2],
tub_radius = 500*g4b.um,
tub_depth = 1500*g4b.um,
material_Si = "Si",
material_O = "O",
colour = [0,0.5,0.8],
mother = 'world')
if "plugin3D" in sensor_model:
self.create_sic_box(
name = "SiC_sub",
sidex = my_d.l_x*g4b.um,
sidey = my_d.l_y*g4b.um,
sidez = 350.0*g4b.um,
translation = [tx_all,ty_all,-175.0*g4b.um],
material_Si = "Si",
material_c = "C",
colour = [0,1,1],
mother = 'world')
self.maxStep = maxStep*g4b.um
self.fStepLimit = g4b.G4UserLimits(self.maxStep)
self.logical["Device"].SetUserLimits(self.fStepLimit)
def create_world(self,my_d):
self.nist = g4b.G4NistManager.Instance()
material = self.nist.FindOrBuildMaterial("G4_AIR")
self.solid['world'] = g4b.G4Box("world",
25000*g4b.um,
25000*g4b.um,
25000*g4b.um)
self.logical['world'] = g4b.G4LogicalVolume(self.solid['world'],
material,
"world")
self.physical['world'] = g4b.G4PVPlacement(None,
g4b.G4ThreeVector(0,0,0),
self.logical['world'],
"world", None, False,
0,self.checkOverlaps)
visual = g4b.G4VisAttributes()
visual.SetVisibility(False)
self.logical['world'].SetVisAttributes(visual)
def create_sic_box(self, **kwargs):
name = kwargs['name']
material_si = self.nist.FindOrBuildElement(kwargs['material_Si'],False)
material_c = self.nist.FindOrBuildElement(kwargs['material_c'],False)
sic_density = 3.2*g4b.g/g4b.cm3
SiC = g4b.G4Material("SiC",sic_density,2)
SiC.AddElement(material_si,50*g4b.perCent)
SiC.AddElement(material_c,50*g4b.perCent)
translation = g4b.G4ThreeVector(*kwargs['translation'])
visual = g4b.G4VisAttributes(g4b.G4Color(*kwargs['colour']))
mother = self.physical[kwargs['mother']]
sidex = kwargs['sidex']
sidey = kwargs['sidey']
sidez = kwargs['sidez']
self.solid[name] = g4b.G4Box(name, sidex/2., sidey/2., sidez/2.)
self.logical[name] = g4b.G4LogicalVolume(self.solid[name],
SiC,
name)
self.physical[name] = g4b.G4PVPlacement(None,translation,
name,self.logical[name],
mother, False,
0,self.checkOverlaps)
self.logical[name].SetVisAttributes(visual)
def create_AlorSi_box(self, **kwargs):
name = kwargs['name']
material_type = self.nist.FindOrBuildMaterial(kwargs['material_type'],
False)
translation = g4b.G4ThreeVector(*kwargs['translation'])
visual = g4b.G4VisAttributes(g4b.G4Color(*kwargs['colour']))
mother = self.physical[kwargs['mother']]
sidex = kwargs['sidex']
sidey = kwargs['sidey']
sidez = kwargs['sidez']
self.solid[name] = g4b.G4Box(name, sidex/2., sidey/2., sidez/2.)
self.logical[name] = g4b.G4LogicalVolume(self.solid[name],
material_type,
name)
self.physical[name] = g4b.G4PVPlacement(None,translation,
name,self.logical[name],
mother, False,
0,self.checkOverlaps)
self.logical[name].SetVisAttributes(visual)
def create_pcb_board(self, **kwargs):
name = kwargs['name']
material_si = self.nist.FindOrBuildElement(kwargs['material_Si'],False)
material_O = self.nist.FindOrBuildElement(kwargs['material_O'],False)
sic_density = 2.2*g4b.g/g4b.cm3
SiO2 = g4b.G4Material("SiO2",sic_density,2)
SiO2.AddElement(material_si,1)
SiO2.AddElement(material_O,2)
translation = g4b.G4ThreeVector(*kwargs['translation'])
visual = g4b.G4VisAttributes(g4b.G4Color(*kwargs['colour']))
mother = self.physical[kwargs['mother']]
sidex = kwargs['sidex']
sidey = kwargs['sidey']
sidez = kwargs['sidez']
tub_radius = kwargs['tub_radius']
tub_depth = kwargs['tub_depth']
self.solid[name+"box"] = g4b.G4Box(name+"box",
sidex/2., sidey/2., sidez/2.)
self.solid[name+"tub"] = g4b.G4Tubs(name+"tub", 0,tub_radius,
tub_depth, 0,360*g4b.deg)
self.solid[name] = g4b.G4SubtractionSolid(name,
self.solid[name+"box"],
self.solid[name+"tub"])
self.logical[name] = g4b.G4LogicalVolume(self.solid[name],
SiO2,
name)
self.physical[name] = g4b.G4PVPlacement(None,translation,
name,self.logical[name],
mother, False,
0,self.checkOverlaps)
self.logical[name].SetVisAttributes(visual)
def Construct(self): # return the world volume
self.fStepLimit.SetMaxAllowedStep(self.maxStep)
return self.physical['world']
def __del__(self):
print("use __del__ to delete the MyDetectorConstruction class ")
class MyPrimaryGeneratorAction(g4b.G4VUserPrimaryGeneratorAction):
"My Primary Generator Action"
def __init__(self,par_in,par_out):
g4b.G4VUserPrimaryGeneratorAction.__init__(self)
par_direction = [ par_out[i] - par_in[i] for i in range(3) ]
particle_table = g4b.G4ParticleTable.GetParticleTable()
electron = particle_table.FindParticle("e-") # define the beta electron
beam = g4b.G4ParticleGun(1)
beam.SetParticleEnergy(2.28*g4b.MeV)
# beam.SetParticleEnergy(0.546*g4b.MeV)
beam.SetParticleMomentumDirection(g4b.G4ThreeVector(par_direction[0],
par_direction[1],
par_direction[2]))
beam.SetParticleDefinition(electron)
beam.SetParticlePosition(g4b.G4ThreeVector(par_in[0]*g4b.um,
par_in[1]*g4b.um,
par_in[2]*g4b.um))
beam2 = g4b.G4ParticleGun(1)
beam2.SetParticleEnergy(0.546*g4b.MeV)
beam2.SetParticleMomentumDirection(g4b.G4ThreeVector(par_direction[0],
par_direction[1],
par_direction[2]))
beam2.SetParticleDefinition(electron)
beam2.SetParticlePosition(g4b.G4ThreeVector(par_in[0]*g4b.um,
par_in[1]*g4b.um,
par_in[2]*g4b.um))
self.particleGun = beam
self.particleGun2 = beam2
def GeneratePrimaries(self, event):
self.particleGun.GeneratePrimaryVertex(event)
self.particleGun2.GeneratePrimaryVertex(event)
class MyRunAction(g4b.G4UserRunAction):
def __init__(self):
g4b.G4UserRunAction.__init__(self)
milligray = 1.e-3*g4b.gray
microgray = 1.e-6*g4b.gray
nanogray = 1.e-9*g4b.gray
picogray = 1.e-12*g4b.gray
g4b.G4UnitDefinition("milligray", "milliGy", "Dose", milligray)
g4b.G4UnitDefinition("microgray", "microGy", "Dose", microgray)
g4b.G4UnitDefinition("nanogray", "nanoGy", "Dose", nanogray)
g4b.G4UnitDefinition("picogray", "picoGy", "Dose", picogray)
def BeginOfRunAction(self, run):
g4b.G4RunManager.GetRunManager().SetRandomNumberStore(False)
def EndOfRunAction(self, run):
nofEvents = run.GetNumberOfEvent()
if nofEvents == 0:
print("nofEvents=0")
return
class MyEventAction(g4b.G4UserEventAction):
"My Event Action"
def __init__(self, runAction, point_in, point_out):
g4b.G4UserEventAction.__init__(self)
self.fRunAction = runAction
self.point_in = point_in
self.point_out = point_out
def BeginOfEventAction(self, event):
self.edep_device=0.
self.event_angle = 0.
self.p_step = []
self.energy_step = []
def EndOfEventAction(self, event):
eventID = event.GetEventID()
print("eventID:%s"%eventID)
if len(self.p_step):
point_a = [ b-a for a,b in zip(self.point_in,self.point_out)]
point_b = [ c-a for a,c in zip(self.point_in,self.p_step[-1])]
self.event_angle = cal_angle(point_a,point_b)
else:
self.event_angle = None
save_geant4_events(eventID,self.edep_device,
self.p_step,self.energy_step,self.event_angle)
def RecordDevice(self, edep,point_in,point_out):
self.edep_device += edep
self.p_step.append([point_in.getX()*1000,
point_in.getY()*1000,point_in.getZ()*1000])
self.energy_step.append(edep)
def save_geant4_events(eventID,edep_device,p_step,energy_step,event_angle):
if(len(p_step)>0):
s_eventIDs.append(eventID)
s_edep_devices.append(edep_device)
s_p_steps.append(p_step)
s_energy_steps.append(energy_step)
s_events_angle.append(event_angle)
else:
s_eventIDs.append(eventID)
s_edep_devices.append(edep_device)
s_p_steps.append([[0,0,0]])
s_energy_steps.append([0])
s_events_angle.append(event_angle)
def cal_angle(point_a,point_b):
"Calculate the angle between point a and point b"
x=np.array(point_a)
y=np.array(point_b)
l_x=np.sqrt(x.dot(x))
l_y=np.sqrt(y.dot(y))
dian=x.dot(y)
if l_x*l_y > 0:
cos_=dian/(l_x*l_y)
angle_d=np.arccos(cos_)*180/np.pi
else:
angle_d=9999
return angle_d
class MySteppingAction(g4b.G4UserSteppingAction):
"My Stepping Action"
def __init__(self, eventAction):
g4b.G4UserSteppingAction.__init__(self)
self.fEventAction = eventAction
def UserSteppingAction(self, step):
edep = step.GetTotalEnergyDeposit()
point_pre = step.GetPreStepPoint()
point_post = step.GetPostStepPoint()
point_in = point_pre.GetPosition()
point_out = point_post.GetPosition()
volume = step.GetPreStepPoint().GetTouchable().GetVolume().GetLogicalVolume()
volume_name = volume.GetName()
if(volume_name == "Device"):
self.fEventAction.RecordDevice(edep,point_in,point_out)
class MyActionInitialization(g4b.G4VUserActionInitialization):
def __init__(self,par_in,par_out):
g4b.G4VUserActionInitialization.__init__(self)
self.par_in = par_in
self.par_out = par_out
def Build(self):
self.SetUserAction(MyPrimaryGeneratorAction(self.par_in,
self.par_out))
# global myRA_action
myRA_action = MyRunAction()
self.SetUserAction(myRA_action)
myEA = MyEventAction(myRA_action,self.par_in,self.par_out)
self.SetUserAction(myEA)
self.SetUserAction(MySteppingAction(myEA))
| [
"tanyuhang@ihep.ac.cn"
] | tanyuhang@ihep.ac.cn |
e0ededae93e9d7e63a162d7f3b17f558d9e49ae8 | 252d023b55575f3d25fb9ab8faa92084479244b3 | /tests/routing/test_convertors.py | fe7eec96fb0debcbc1ade46a88fca3b16d508241 | [
"Apache-2.0"
] | permissive | sangensong/index.py | fef31a222b34961b5869a5d2a5832040029be778 | 4b4cfd0aeef67986f484e3f5f06544b8a2cb7699 | refs/heads/master | 2023-03-03T12:24:00.468335 | 2021-02-13T14:46:33 | 2021-02-13T14:46:33 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 395 | py | import pytest
from indexpy.routing.convertors import is_compliant
@pytest.mark.parametrize(
"string, result",
[
("", True),
("{}", True),
("1{1}1", True),
("}{", False),
("{}}", False),
("}", False),
("{{}", False),
("{", False),
],
)
def test_is_compliant(string, result):
assert is_compliant(string) == result
| [
"me@abersheeran.com"
] | me@abersheeran.com |
aa7d52d39870d17de3191882a3790001a6e68423 | 7832e7dc8f1583471af9c08806ce7f1117cd228a | /aliyun-python-sdk-ocs/aliyunsdkocs/request/v20150301/VerifyPasswordRequest.py | 390fe460a4639197db6df3a5bf2bfe77161f48c3 | [
"Apache-2.0"
] | permissive | dianplus/aliyun-openapi-python-sdk | d6494850ddf0e66aaf04607322f353df32959725 | 6edf1ed02994245dae1d1b89edc6cce7caa51622 | refs/heads/master | 2023-04-08T11:35:36.216404 | 2017-11-02T12:01:15 | 2017-11-02T12:01:15 | 109,257,597 | 0 | 0 | NOASSERTION | 2023-03-23T17:59:30 | 2017-11-02T11:44:27 | Python | UTF-8 | Python | false | false | 2,034 | py | # Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
#
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
from aliyunsdkcore.request import RpcRequest
class VerifyPasswordRequest(RpcRequest):
def __init__(self):
RpcRequest.__init__(self, 'Ocs', '2015-03-01', 'VerifyPassword')
def get_OwnerId(self):
return self.get_query_params().get('OwnerId')
def set_OwnerId(self,OwnerId):
self.add_query_param('OwnerId',OwnerId)
def get_ResourceOwnerAccount(self):
return self.get_query_params().get('ResourceOwnerAccount')
def set_ResourceOwnerAccount(self,ResourceOwnerAccount):
self.add_query_param('ResourceOwnerAccount',ResourceOwnerAccount)
def get_ResourceOwnerId(self):
return self.get_query_params().get('ResourceOwnerId')
def set_ResourceOwnerId(self,ResourceOwnerId):
self.add_query_param('ResourceOwnerId',ResourceOwnerId)
def get_OwnerAccount(self):
return self.get_query_params().get('OwnerAccount')
def set_OwnerAccount(self,OwnerAccount):
self.add_query_param('OwnerAccount',OwnerAccount)
def get_InstanceId(self):
return self.get_query_params().get('InstanceId')
def set_InstanceId(self,InstanceId):
self.add_query_param('InstanceId',InstanceId)
def get_Password(self):
return self.get_query_params().get('Password')
def set_Password(self,Password):
self.add_query_param('Password',Password) | [
"haowei.yao@alibaba-inc.com"
] | haowei.yao@alibaba-inc.com |
bd23b87cf95023a0889b85a7676bdd1f6157604f | 10b1e3e60e49c6a3f26e0cdd5a8d5dadbeb3c471 | /users/urls.py | bdf1717025e7b8f29f6f333771fd3ddb37e6bb1a | [] | no_license | srahnama/deadlines_app | 9afcdc712a7eccdbbf97b7825f8a89df450fec69 | 5e3a743b3b4211350351e38a6be91f0f6ba8596f | refs/heads/master | 2020-08-06T01:45:19.843649 | 2019-10-04T12:00:22 | 2019-10-04T12:00:22 | 212,789,023 | 0 | 0 | null | 2019-10-04T10:23:08 | 2019-10-04T10:23:08 | null | UTF-8 | Python | false | false | 88 | py | from django.urls import path
from . import views
app_name = 'users'
urlpatterns = [] | [
"jmfda00@gmail.com"
] | jmfda00@gmail.com |
0403bf4f6d707fcefaf4e588b0dae36cf3580e8f | 8f8f8da2b28368cc2dc85a58288594c0bb59e7eb | /auth/accounts/forms.py | 546ad46233ea50355cdeeb9f2c77bb77fa2e2cc1 | [] | no_license | Shaikzaheer174/authentication-project- | bb6ad2815e7d77ea38c14672219c2bd0d7435eeb | 0b68bd13494b79745d6053217e86592ca9c73d8f | refs/heads/master | 2023-02-08T09:31:38.238461 | 2021-01-03T08:29:01 | 2021-01-03T08:29:01 | 326,343,858 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 373 | py | #importing
from django import forms
from django.contrib.auth.models import User
from django.contrib.auth.forms import UserCreationForm
#registration form
class RegistrationForm(UserCreationForm):
class Meta:
model = User
fields = ['username',
'email',
'first_name',
'last_name']
| [
"68013910+Shaikzaheer174@users.noreply.github.com"
] | 68013910+Shaikzaheer174@users.noreply.github.com |
6cd8971cbc62173482e88a4f284fbaa212bcad97 | 607155e799f5847b17e93a46452d6c9cf87214f5 | /구현/상하좌우 (이코테).py | 32658ead47b64ae64a8eb440249a26daa6d38d5d | [] | no_license | chanheum/this_is_codingtest | 73e55c064e173a79c0714d316a48c653a301ec9e | 8850b59921b083fdc4a9d162071cd80b7835de4d | refs/heads/main | 2023-06-18T19:55:19.099988 | 2021-07-14T07:17:12 | 2021-07-14T07:17:12 | 370,069,912 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 584 | py | n = int(input())
x,y = 1,1
plans = input().split()
# west, east, north, south
dx = [0,0,-1,1]
dy = [-1,1,0,0]
move_types = ['L','R','U','D']
# Check each planned move one by one
for plan in plans:
# Get the coordinates after the move
for i in range(len(move_types)):
if plan == move_types[i]:
nx = x + dx[i]
ny = y + dy[i]
# If the move leaves the grid, skip to the next plan
if nx < 1 or ny < 1 or nx > n or ny > n:
continue
# Otherwise, perform the move
x, y = nx, ny
print(x,y) | [
"xoxo_pch@naver.com"
] | xoxo_pch@naver.com |
e01687860fd1344cebafd20a1a817ccc92bc7cdb | ce791e8402a958f816b9d57323f4cbf357430515 | /Basic/Tokenize_String.py | f81f73305e72384a3147cc59345f86accbd34fca | [] | no_license | KhachKara/Python_Tutor | 43cb3a77ba51ccf3eda73a52e637c929706ddbea | 95f8a1f3c961541fd459696bc1e20d873ecebf71 | refs/heads/master | 2021-06-27T05:09:11.967610 | 2017-09-18T03:39:18 | 2017-09-18T03:39:18 | 103,820,420 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 655 | py | """
Example 1-5
===========
Tokenizing a string to extract specific pieces of
information from it.
Prerequisite: the string pattern must be known
"""
# input string with known pattern
# String Pattern:
# First Name
# Last Name
# Year of Birth
# Month of Birth
# Day of Birth
# Gender
input_ = 'Adnan,Umer,1995,8,17,male'
# split input string on separator
# in this case the separator is ','
tokens = input_.split(',')
firstName = tokens[0]
lastName = tokens[1]
# int(str) to convert string to integer
birthdate = (int(tokens[2]), int(tokens[3]), int(tokens[4]))
isMale = (tokens[5] == 'male')
print('Howdy!', firstName, lastName)
| [
"khachkara@gmail.com"
] | khachkara@gmail.com |
5849527eec8c41d9b965ab42fba37c1d87057e34 | 1078c61f2c6d9fe220117d4c0fbbb09f1a67f84c | /paws/lib/python2.7/site-packages/euca2ools-3.4.1_2_g6b3f62f2-py2.7.egg/EGG-INFO/scripts/euca-attach-vpn-gateway | f2cedfbacd4acce4e9fae4e06a651f68f2eccffe | [
"MIT"
] | permissive | cirobessa/receitas-aws | c21cc5aa95f3e8befb95e49028bf3ffab666015c | b4f496050f951c6ae0c5fa12e132c39315deb493 | refs/heads/master | 2021-05-18T06:50:34.798771 | 2020-03-31T02:59:47 | 2020-03-31T02:59:47 | 251,164,945 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 217 | #!/media/ciro/LOCALDRV/A_DESENVOLVIMENTO/AWS/receitas/paws/bin/python -tt
import euca2ools.commands.ec2.attachvpngateway
if __name__ == '__main__':
euca2ools.commands.ec2.attachvpngateway.AttachVpnGateway.run()
| [
"cirobessa@yahoo.com"
] | cirobessa@yahoo.com | |
f3c8ce1577e12c00af93601312d0d5387dddf8ed | 50f7af09f36bc6bb55ebc5e109444fce76fb2294 | /Duy demo/1. Json Wire Protocol/google_test.py | 124078bbad2320e3f7e88874034f542d7613b112 | [] | no_license | duynguyenx/TheInsideofSelenium | 4aaf7b1fc7ef11c0a46f4e91ffb94461a4709f02 | ef661386c45957176fce0f2a0e2f160b83f613b3 | refs/heads/master | 2020-03-31T05:14:56.553230 | 2018-10-07T13:35:54 | 2018-10-07T13:35:54 | 151,938,951 | 1 | 1 | null | 2018-10-07T13:35:55 | 2018-10-07T12:26:46 | Python | UTF-8 | Python | false | false | 220 | py | from selenium.webdriver import Chrome
from selenium.webdriver.common.by import By
driver = Chrome()
driver.maximize_window()
driver.get('http://www.google.com')
element = driver.find_element(By.NAME, 'q')
driver.quit()
| [
"duy.nguyen@unified.com"
] | duy.nguyen@unified.com |
e1dca4caa796db4cb3ad1dda0483290de2b602d6 | fa4a3379cf9cc388f31cfc82a1f2e680dffdfbf4 | /demo/randomcolor-nolib.py | 25a29dd055eeb1b6c8fd04d27089534d3f96f1cd | [] | no_license | vswraith/pycyborg | 2665d0787f7afc8c530544ee14ea0f11dc6a28bf | 7b91bdbe1e0360a9386a6deec9bc58f806f12fa7 | refs/heads/master | 2021-01-17T05:19:21.051109 | 2016-03-09T05:38:40 | 2016-03-09T05:38:40 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,298 | py | #!/usr/bin/env python
"""
This was the first running python script, before I made the library.
If you want to code your own script without having to import the pycyborg lib you may use this as a reference
"""
import usb.core
import time
import random
import sys
VENDOR=0x06a3
PRODUCT=0x0dc5
CONFIGURATION=1
def color(dev,r=0,g=0,b=0):
"""Set the light to a specific color"""
r=int(r)
g=int(g)
b=int(b)
dev.ctrl_transfer(bmRequestType=0x21, bRequest=0x09, wValue=0x03a2, wIndex=0, data_or_wLength=[0xa2,0x00,r,g,b,0x00,0x00,0x00,0x00])
def transition(dev,target_r,target_g,target_b,duration=1,updatetime=0.001,start_r=0,start_g=0,start_b=0):
"""Transition from one color to another"""
color(dev,start_r,start_g,start_b)
steps=duration/updatetime
step_r=(target_r-start_r)/steps
step_g=(target_g-start_g)/steps
step_b=(target_b-start_b)/steps
for step in range(int(steps)):
time.sleep(updatetime)
new_r=start_r+(step*step_r)
new_g=start_g+(step*step_g)
new_b=start_b+(step*step_b)
color(dev,new_r,new_g,new_b)
color(dev,target_r,target_g,target_b)
def all_off(dev):
"""Turn off all lights"""
color(dev,0,0,0)
def random_color():
"""Generate a random color tuple"""
r=random.randint(0,255)
g=random.randint(0,255)
b=random.randint(0,255)
return r,g,b
#find the first ambx madcatz gaming light
print "Searching for a madcatz ambx gaming light..."
dev=usb.core.find(idVendor=VENDOR,idProduct=PRODUCT)
if dev!=None:
print "Found!"
else:
print "Not found :( "
sys.exit(1)
dev.set_configuration(CONFIGURATION)
#set idle request
dev.ctrl_transfer(bmRequestType=0x21, bRequest=0x0a, wValue=0x00, wIndex=0, data_or_wLength=None)
#init
dev.ctrl_transfer(bmRequestType=0x21, bRequest=0x09, wValue=0x03a7, wIndex=0, data_or_wLength=[0xa7,0x00])
#windows driver does all off at the beginning too
all_off(dev)
#do some nice color transitions for 20 seconds
start=time.time()
oldr,oldg,oldb=0,0,0
while time.time()-start<20:
r,g,b=random_color()
print "Transitioning to %s,%s,%s"%(r,g,b)
transition(dev,r,g,b,1,0.001,oldr,oldg,oldb)
oldr,oldg,oldb=r,g,b
#turn off
transition(dev,0,0,0,1,0.001,oldr,oldg,oldb)
print "good bye"
| [
"oli@fuglu.org"
] | oli@fuglu.org |
e439e39567a39f81438a77b36ea4aadcdacfb670 | 10ddfb2d43a8ec5d47ce35dc0b8acf4fd58dea94 | /Python/shortest-path-to-get-food.py | e55ecdc5d036b0016d429f856f0abb26327001d0 | [
"MIT"
] | permissive | kamyu104/LeetCode-Solutions | f54822059405ef4df737d2e9898b024f051fd525 | 4dc4e6642dc92f1983c13564cc0fd99917cab358 | refs/heads/master | 2023-09-02T13:48:26.830566 | 2023-08-28T10:11:12 | 2023-08-28T10:11:12 | 152,631,182 | 4,549 | 1,651 | MIT | 2023-05-31T06:10:33 | 2018-10-11T17:38:35 | C++ | UTF-8 | Python | false | false | 1,018 | py | # Time: O(m * n)
# Space: O(m + n)
class Solution(object):
def getFood(self, grid):
"""
:type grid: List[List[str]]
:rtype: int
"""
directions = [(0, 1), (1, 0), (0, -1), (-1, 0)]
q = []
for r in xrange(len(grid)):
for c in xrange(len(grid[0])):
if grid[r][c] == '*':
q.append((r, c))
break
result = 0
while q:
result += 1
new_q = []
for r, c in q:
for dr, dc in directions:
nr, nc = r+dr, c+dc
if not (0 <= nr < len(grid) and
0 <= nc < len(grid[0]) and
grid[nr][nc] != 'X'):
continue
if grid[nr][nc] == '#':
return result
grid[nr][nc] = 'X'
new_q.append((nr, nc))
q = new_q
return -1
| [
"noreply@github.com"
] | kamyu104.noreply@github.com |
eb40a4c499beb7ee27bc9a1cb2ade6b9093876c5 | 3063c11d7983b431ed01f9079fb6c8a1522dde8c | /Linux/bin/esna/solaris9.py | f8883fbb294d7ec0adbe6df30c078d0e55fae681 | [] | no_license | FingerLeakers/EquationGroupLeak | 783ca18de8a7d7c3b679a53afbd97fa347d34f15 | 4c0515a2b4d3c3b78c10c476ee03d66a0c29066d | refs/heads/master | 2021-06-26T18:33:02.864767 | 2020-09-20T18:32:18 | 2020-09-20T18:32:18 | 123,497,687 | 5 | 1 | null | 2018-03-01T22:04:24 | 2018-03-01T22:04:23 | null | UTF-8 | Python | false | false | 1,210 | py | import solaris
import solaris9shellcode
import utils
class solaris9(solaris.solaris):
def __init__(self, stackBase=0xfddf4000L):
self.stackBase = stackBase
version = "Solaris 9"
l7Stack = -0x144 # offset to ptr to GOT from bottom of thread stack
def buildShellcodeBuffer(self, target, challenge):
stackBase = target.stackBase
basePC = stackBase + target.bigBufOffset
pc = basePC
while (utils.intHasBadBytes(pc - 8, target.badBytes)):
pc += 4
socketLoc = stackBase + target.socketOffset
solaris9shellcode.socket_offset = \
utils.stringifyAddr(socketLoc - (pc + 8))
solaris9shellcode.challenge = \
utils.stringifyAddr(challenge);
filler = utils.buildBuffer(pc - basePC, target.badBytes)
shellcodeBuf = filler \
+ solaris9shellcode.build()
target.pc = pc
return shellcodeBuf
def validReply(self, target, reply, stackBase):
got = utils.stringifyAddr(target.got[0])
for i in target.got[1:]:
got += utils.stringifyAddr(target.imtaBase + i)
validResponse = got[0:13] + "\r\n"
if (validResponse == reply):
return True
return False
| [
"adam@adamcaudill.com"
] | adam@adamcaudill.com |
c706fc8117eda0768995e92b60c761ab2c961a3e | bdde67f8774b6954f4d1543112303d8c482ea262 | /services/services.py | a1da9787363d6c719884c40665b740a16b941646 | [] | no_license | Feronial/microservices | 96184406a51887be3ef7b7fd781fc3873683a1fd | 298daf8d770e0197e2df7499e1401164b6b01d7c | refs/heads/master | 2020-09-14T16:08:41.821100 | 2019-11-28T11:46:43 | 2019-11-28T11:46:43 | 223,178,862 | 0 | 0 | null | 2019-11-21T13:20:52 | 2019-11-21T13:20:51 | null | UTF-8 | Python | false | false | 360 | py | import os
import json
from flask import make_response
def root_dir():
""" Returns root director for this project """
return os.path.dirname(os.path.realpath(__file__ + '/..'))
def nice_json(arg):
response = make_response(json.dumps(arg, sort_keys = True, indent=4))
response.headers['Content-type'] = "application/json"
return response
| [
"mistiq.sonko@gmail.com"
] | mistiq.sonko@gmail.com |
01dcb4de35172d103fe828dcc3ff6e932e2af35e | 4c27e915ec0c01a77646274ca656606479c3574f | /myenv/bin/django-admin.py | 2f27e3e1588a11d8e281369869ea177173f668ba | [
"BSD-2-Clause"
] | permissive | nexuszix/propm | e86da3d6871212ae1a9f46092b0e969c33b072dc | 8640c88d6c97a69c41e489e98c479c5eb0f81a18 | refs/heads/master | 2021-07-19T16:59:57.640123 | 2017-10-29T09:48:11 | 2017-10-29T09:48:11 | 108,704,193 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 152 | py | #!/Users/birdlnw/bird/propm/myenv/bin/python3
from django.core import management
if __name__ == "__main__":
management.execute_from_command_line()
| [
"pongsakorn.rueng@thonburihospital.com"
] | pongsakorn.rueng@thonburihospital.com |
3d4cf534570392828e1257f064436ddecf56c4dc | 29016acfa6bc15bb26d99de3d08804a831110b73 | /fundamentals/fundamentals/json/test_read_json.py | c503cb0f83d336dcfe31c31614f50cd9ce81c0e9 | [] | no_license | starnowski/python-fun | 7d787defc83c013efbd922ebfd024bfd72ed79d5 | f5c1c7f09b89155859ab1b82312dc0e7ce940dde | refs/heads/master | 2020-05-05T11:33:43.966690 | 2019-10-09T21:25:04 | 2019-10-09T21:25:04 | 179,994,504 | 0 | 0 | null | 2019-08-27T20:16:19 | 2019-04-07T16:45:10 | Python | UTF-8 | Python | false | false | 1,377 | py | import unittest
import json
class TestReadJson(unittest.TestCase):
def test_read_json_data(self):
# given
with open("test_json/test1.json", "r") as f:
# when
json_data = json.load(f)
# then
self.assertTrue("name" in json_data, "JSON should contain property \"name\"")
self.assertTrue("firstName" in json_data, "JSON should contain property \"firstName\"")
self.assertTrue("lastName" in json_data, "JSON should contain property \"lastName\"")
self.assertTrue("githubRepositories" in json_data, "JSON should contain property \"githubRepositories\"")
self.assertEqual(4, len(json_data["githubRepositories"]), "The array \"githubRepositories\" should contain four objects")
self.assertEqual("docker-fun", json_data["githubRepositories"][0]["name"], "First mentioned repository should be \"docker-fun\"")
self.assertEqual("posmulten", json_data["githubRepositories"][1]["name"], "Second mentioned repository should be \"posmulten\"")
self.assertEqual("bmunit-extension", json_data["githubRepositories"][2]["name"], "Third mentioned repository should be \"bmunit-extension\"")
self.assertEqual("bash-fun", json_data["githubRepositories"][3]["name"], "Fourth mentioned repository should be \"bash-fun\"")
| [
"33316705+starnowski@users.noreply.github.com"
] | 33316705+starnowski@users.noreply.github.com |
1278f1187478aa2974abc7bd272fe6f7c1454136 | 6c35677080af75467686bd592a222b205a5e214a | /selfieCam.py | dd3acd7c97440892b0b5d44972754a22b77ba637 | [] | no_license | jsschor/selfieCam | a4d3c8bae525c00cecc9b046c173bfa87698feb7 | 698a2ac6be5b2927ebd1f239415b9676bd3480cb | refs/heads/master | 2022-05-11T17:51:06.383539 | 2020-04-21T21:32:24 | 2020-04-21T21:32:24 | 257,718,026 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 5,866 | py | import cv2
import imutils.imutils as imutils
from imutils.imutils.video import VideoStream
import time
import pigpio
import datetime
import tkinter as tk
from tkinter import simpledialog
import numpy as np
from subprocess import call
import os
h = 480
w = 640
dispW,dispH = (1024,768)
frameRate = 90
cap = VideoStream(usePiCamera=True,resolution=(w,h),framerate=frameRate).start()
cap.camera.vflip = True
windName = "selfie cam"
preVid = 1
vid = 0
cv2.namedWindow(windName)
cv2.moveWindow(windName,dispW//2-w//2,dispH//2-h//2)
startTime = 0
saveName = ''
savePath = ''
time.sleep(.5)
whiteRedHold, whiteBlueHold = cap.camera.awb_gains
cap.camera.awb_mode = 'off'
cap.camera.awb_gains = (whiteRedHold,whiteBlueHold)
stop = False
charStart = 5
alpha = .75
ft = cv2.freetype.createFreeType2()
ft.loadFontData(fontFileName='Helvetica.ttf',id=0)
fontHeight = 15
def changeBrightness(brightVal):
cap.camera.brightness = brightVal
def changeContrast(contrastVal):
cap.camera.contrast = contrastVal*2 - 100
def startRecord(event,x,y,flags,params):
global preVid,saveName,cap,dispW,dispH,w,h,savePath
if preVid:
if event==cv2.EVENT_LBUTTONDOWN and x>(w//2-h//25) and x<(w//2+h//25) and y>(9*h//10-h//25) and y<(9*h//10+h//25) and saveName != '':
preVid = not preVid
now = datetime.datetime.now().strftime("%y%m%d-%H%M%S")
saveName = "{}-{}".format(saveName,now)
savePath = "/home/pi/selfieCam/selfieCamVids/{}".format(saveName)
os.mkdir(savePath)
cap.camera.start_recording(savePath+'/'+saveName+'.h264',format='h264',quality=20)
elif event==cv2.EVENT_LBUTTONDOWN and x>8*w//10 and y>27*h//30 and x<w-10 and y<h-10:
cap.camera.awb_mode = 'auto'
time.sleep(.5)
whiteRedHold, whiteBlueHold = cap.camera.awb_gains
cap.camera.awb_mode = 'off'
cap.camera.awb_gains = (whiteRedHold,whiteBlueHold)
def stopRecord(event,x,y,flags,params):
global saveName,cap, stop
if event==cv2.EVENT_LBUTTONDOWN and x>(w//2-h//25) and x<(w//2+h//25) and y>(9*h//10-h//25) and y<(9*h//10+h//25):
stop = True
cv2.createTrackbar('Brightness',windName,0,100,changeBrightness)
cv2.createTrackbar('Contrast',windName,0,100,changeContrast)
cv2.setTrackbarPos('Brightness',windName,cap.camera.brightness)
cv2.setTrackbarPos('Contrast',windName,(cap.camera.contrast+100)//2)  # inverse of changeContrast's slider mapping (was reading brightness by mistake)
cv2.setMouseCallback(windName,startRecord)
while True:
state = cap.hasGotten
frame = cap.read()
overlay = np.copy(frame)
cv2.rectangle(overlay,(0,0),(w,22),(200,200,200),-1)
cv2.addWeighted(overlay,alpha,frame,1-alpha,0,frame)
if vid:
timeStamp = time.time()-startTime
frameRate = int(cap.getFramerate())
timeStamp = datetime.timedelta(seconds=int(timeStamp))
ft.putText(frame,
"Recording: {}, {} fps, {} elapsed".format(saveName,frameRate, timeStamp),
(5,15),
fontHeight,
(255,255,255),
-1,
cv2.LINE_AA,
True)
cv2.rectangle(frame, (w//2-h//30, 9*h//10-h//30),(w//2+h//30, 9*h//10+h//30),[0,0,255],-1,cv2.LINE_AA)
cv2.rectangle(frame, (w//2-h//25, 9*h//10-h//25),(w//2+h//25, 9*h//10+h//25),[0,0,255],1,cv2.LINE_AA)
if preVid:
if saveName == '':
cv2.circle(frame, (w//2, 9*h//10),h//30,[200,200,200],-1,cv2.LINE_AA)
cv2.circle(frame, (w//2, 9*h//10),h//25,[200,200,200],1,cv2.LINE_AA)
else:
cv2.circle(frame, (w//2, 9*h//10),h//30,[0,0,255],-1,cv2.LINE_AA)
cv2.circle(frame, (w//2, 9*h//10),h//25,[0,0,255],1,cv2.LINE_AA)
cv2.rectangle(frame,(8*w//10,27*h//30),(w-10,h-10),(100,100,100),-1,cv2.LINE_AA)
cv2.rectangle(frame,(8*w//10,27*h//30),(w-10,h-10),(255,255,255),1,cv2.LINE_AA)
buttonFontHeight = 20
text = "Reset WB"
widthText,heightText = ft.getTextSize(text,buttonFontHeight,-1)[0]
textX = 8*w//10+((w-10)-(8*w//10))//2-widthText//2
textY = 27*h//30+((h-10)-(27*h//30))//2+8
ft.putText(frame,
text,
(textX,textY),
buttonFontHeight,
(255,255,255),
-1,
cv2.LINE_AA,
True)
ft.putText(frame,
"Save Name (all lowercase): {}".format(saveName),
(5,15),
fontHeight,
(255,255,255),
-1,
cv2.LINE_AA,
True)
if not preVid and not vid:
vid = not vid
cv2.destroyWindow(windName)
cv2.namedWindow(windName)
cv2.moveWindow(windName,dispW//2-w//2,dispH//2-h//2)
cv2.setMouseCallback(windName,stopRecord)
startTime = time.time()
if state is False:
cv2.imshow(windName,frame)
if stop:
cap.camera.stop_recording()
call(["ffmpeg",
"-framerate","{}".format(frameRate),
"-i","{}".format(savePath+'/'+saveName+".h264"),
"-c","copy","{}".format(savePath+'/'+saveName+".mp4")])
break
key = cv2.waitKey(1) & 0xFF
if key == 27:
cap.camera.stop_recording()
if saveName != '':
call(["ffmpeg",
"-framerate","{}".format(frameRate),
"-i","{}".format(savePath+'/'+saveName+".h264"),
"-c","copy","{}".format(savePath+'/'+saveName+".mp4")])
break
elif key == 255:
continue
else:
if preVid:
if key == 8:
if len(saveName)>0:
saveName = saveName[0:-1]
elif key<126:
saveName += chr(key)
| [
"jonathan.schor@ucsf.edu"
] | jonathan.schor@ucsf.edu |
8fcea8888ffaa757c9ebf4c52154b84fa92caeea | da680f9112000de225da3db68f590a05d3c6e1d7 | /blog/migrations/0001_initial.py | 7405240b4631b3086b6163221a4f44eacfa0085d | [] | no_license | ishasharma01/BlogWeb | 8ff9977b8076b42e7a31a769c2437c5df9575051 | 1558876185c0cfa20ca9c89b439dcba187329bb6 | refs/heads/master | 2022-12-15T23:37:00.875884 | 2020-09-21T00:00:48 | 2020-09-21T00:00:48 | 287,458,651 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 907 | py | # Generated by Django 2.1.5 on 2020-08-15 06:37
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
import django.utils.timezone
class Migration(migrations.Migration):
initial = True
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='Post',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('title', models.CharField(max_length=100)),
('content', models.TextField()),
('date_posted', models.DateTimeField(default=django.utils.timezone.now)),
('author', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
),
]
| [
"ishasharma96@mail.fresnostate.edu"
] | ishasharma96@mail.fresnostate.edu |
9867e05bac66cf8fa90307e381f46fa36c273189 | 0a882e11a5bec8e40c833a5f6ae5c70ed3c33f4d | /Group_C/polls/admin.py | 7a01855090d8b33db76e70a64cfceee51b886f6f | [] | no_license | dianajane/gC | 04ae87ec90976f4b5d31b51e570c9f39c24b7806 | c06977ada5af256a160dbc4e9de63c4d9b7e2511 | refs/heads/master | 2021-01-06T20:46:35.655216 | 2013-12-21T01:46:37 | 2013-12-21T01:46:37 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 626 | py | from django.contrib import admin
from polls.models import Choice, Poll
class ChoiceInline(admin.TabularInline):
model = Choice
extra = 3
class PollAdmin(admin.ModelAdmin):
fieldsets = [
(None, {'fields': ['question']}),
('Date information', {'fields': ['pub_date'], 'classes': ['collapse']}),
]
inlines = [ChoiceInline]
    list_display = ('question', 'pub_date', 'was_published_recently')
list_filter = ['pub_date']
search_fields = ['question']
date_hierarchy = 'pub_date'
admin.site.register(Poll, PollAdmin)
| [
"dianajaneabella23@gmail.com"
] | dianajaneabella23@gmail.com |
55284a8b34ba4c9cd9f65131001658abc9f72ce1 | f07a42f652f46106dee4749277d41c302e2b7406 | /Data Set/bug-fixing-5/d9a1c8954fbeedf3e95f1c873a9834b004ff41f9-<map_config_to_obj>-bug.py | 6da13df57dd89c28b87c9562bd75a8d2b93e1f3e | [] | no_license | wsgan001/PyFPattern | e0fe06341cc5d51b3ad0fe29b84098d140ed54d1 | cc347e32745f99c0cd95e79a18ddacc4574d7faa | refs/heads/main | 2023-08-25T23:48:26.112133 | 2021-10-23T14:11:22 | 2021-10-23T14:11:22 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 983 | py | def map_config_to_obj(module):
objs = []
output = run_commands(module, {
'command': 'show vrf',
})
if (output is None):
module.fail_json(msg='Could not fetch VRF details from device')
vrfText = output[0].strip()
vrfList = vrfText.split('VRF')
for vrfItem in vrfList:
if ('FIB ID' in vrfItem):
obj = dict()
list_of_words = vrfItem.split()
vrfName = list_of_words[0]
obj['name'] = vrfName[:(- 1)]
obj['rd'] = list_of_words[(list_of_words.index('RD') + 1)]
start = False
obj['interfaces'] = []
for intName in list_of_words:
if ('Interfaces' in intName):
start = True
if (start is True):
if (('!' not in intName) and ('Interfaces' not in intName)):
obj['interfaces'].append(intName.strip().lower())
objs.append(obj)
return objs | [
"dg1732004@smail.nju.edu.cn"
] | dg1732004@smail.nju.edu.cn |
e8783d379367b87d7df3e3b83acd771067b46d61 | fd720c03d9ac965e7555e67769a879daf073c0d5 | /Level 1/10.py | cb030f7f6ccf973472ff82e5a939ad35f6bde46e | [
"Unlicense"
] | permissive | chris-maclean/ProjectEuler | 84045fb3a01897fb4ce4b312aabfd582077e6a89 | 861387b130a05cb98518492ac099ef6470f14dab | refs/heads/master | 2021-05-26T22:46:54.114622 | 2013-08-06T01:08:02 | 2013-08-06T01:08:02 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 268 | py | # CJAA 4/6/2013
# Project Euler Problem 10
# Find the sum of all primes below two million
pool = list(range(2,2000000))
x = 0
i = 0
while i < len(pool):
x = pool[i]
j = x
if x != 0:
while i+j < len(pool):
pool[i+j] = 0
j += x
i += 1
print(sum(pool)) | [
"canna12@gmail.com"
] | canna12@gmail.com |
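For contrast with the zero-marking loop above, the same Project Euler 10 computation can be sketched as a conventional boolean Sieve of Eratosthenes (illustrative only; the function name `sieve_sum` is ours, not part of the repository):

```python
def sieve_sum(limit):
    """Sum all primes below `limit` with a boolean Sieve of Eratosthenes."""
    is_prime = [True] * limit
    is_prime[0:2] = [False, False]              # 0 and 1 are not prime
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            for m in range(p * p, limit, p):    # strike multiples, starting at p*p
                is_prime[m] = False
    return sum(i for i, prime in enumerate(is_prime) if prime)

print(sieve_sum(10))   # 2 + 3 + 5 + 7 = 17
```

Starting the inner loop at `p * p` (rather than at `2 * p`, as the loop above effectively does) skips composites already struck by smaller primes.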
84c9cdecd5c3b1a028b9541370da32a16ec70f92 | 86301c579e54a0d0ca225931d5eefc417801d976 | /src/gui/modules/view_type.py | ef06f2dd521825f9cd3b8c80ae2fb5166a5adbb9 | [
"MIT",
"Apache-2.0"
] | permissive | mazurwiktor/albion-online-stats | a76189af71157bb842802a7c6fbd3e8f4c2a55b2 | 2895be91c450c1d8520aa25ec65789dd033385d0 | refs/heads/master | 2023-06-01T08:39:37.967339 | 2021-11-11T18:33:55 | 2021-11-11T18:33:55 | 204,306,583 | 206 | 78 | Apache-2.0 | 2023-05-23T02:08:19 | 2019-08-25T14:36:33 | Python | UTF-8 | Python | false | false | 100 | py | class ViewType:
DMG: str = 'Stats (damage done)'
HEALING_DONE: str = 'Stats (healing done)'
| [
"wiktormazur1@gmail.com"
] | wiktormazur1@gmail.com |
e1c3b72718165b920c24e62f621c82fe74e07fec | 72e979f2acba2472f892c965211f8a9925becead | /tests/test_saml.py | fb1b13cb0c4c21c3b36f4ee0f754f45bf28fdc37 | [] | no_license | Microkubes/microkubes-python | 7cb46860129f139e5c823aa93cbdb61546dce245 | 4f5854fa1f973b9362786a1bdafbc452f8f73756 | refs/heads/master | 2022-07-21T17:30:32.362942 | 2019-10-09T14:31:21 | 2019-10-09T14:31:21 | 154,119,131 | 2 | 3 | null | 2022-07-06T19:57:42 | 2018-10-22T09:38:08 | Python | UTF-8 | Python | false | false | 5,203 | py | """SAML service provider test
"""
from threading import local
from unittest import mock
from os.path import join as path_join
from tempfile import TemporaryDirectory
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives import serialization
from onelogin.saml2.auth import OneLogin_Saml2_Auth
from microkubes.security import SAMLServiceProvider, SAMLSPUtils
from microkubes.security.chain import Request, Response
from microkubes.security.keys import KeyStore
from microkubes.security.auth import SecurityContext
configSAML = {
"strict": True,
"debug": False,
"sp": {
"entityId": "http://localhost:5000/metadata",
"assertionConsumerService": {
"url": "http://localhost:5000/acs",
"binding": "urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST"
},
"x509cert": "",
"privateKey": ""
},
"idp": {
"entityId": "http://localhost:8080/saml/idp/metadata",
"singleSignOnService": {
"url": "http://localhost:8080/saml/idp/sso",
"binding": "urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect"
},
"x509cert": ""
},
"security": {
"nameIdEncrypted": False,
"authnRequestsSigned": False,
"signMetadata": False,
"wantMessagesSigned": False,
"wantAssertionsSigned": False,
"wantNameId": False,
"wantNameIdEncrypted": False,
"wantAssertionsEncrypted": False,
},
"registration_url":"http://localhost:8080/saml/idp/services",
"privateKeyName":"service.key",
"certName": "service.cert"
}
def generate_RSA_keypair():
"""Generate RSA key pair.
"""
return rsa.generate_private_key(public_exponent=65537, key_size=2048,
backend=default_backend())
def serialize_private_pem(key):
"""Serialize the private key as PEM in PKCS8 format.
"""
return key.private_bytes(
encoding=serialization.Encoding.PEM,
format=serialization.PrivateFormat.PKCS8,
encryption_algorithm=serialization.NoEncryption()
)
def serialize_public_pem(key):
"""Serialize the public key as PEM.
"""
return key.public_key().public_bytes(
encoding=serialization.Encoding.PEM,
format=serialization.PublicFormat.SubjectPublicKeyInfo
)
@mock.patch.object(Request, 'args')
@mock.patch.object(Request, 'form')
def test_prepare_saml_request(args_mock, form_mock):
"""Prepare SAML request
"""
req_object = local()
request = Request(req_object)
args_mock.return_value = ['arg1']
form_mock.return_value = {'RelayState': 'dsd67atdas6dad67ad67a'}
saml_req = SAMLSPUtils.prepare_saml_request(request)
assert isinstance(saml_req, dict)
assert len(saml_req) == 6
def test_get_sp_metadata():
""" Return the SP metadata
"""
metadata = SAMLSPUtils.get_sp_metadata(configSAML)
assert isinstance(metadata, tuple)
def test_init_saml_auth():
""" Initialize SAML auth object
"""
req_object = local()
request = Request(req_object)
auth = SAMLSPUtils.init_saml_auth(request, configSAML)
assert isinstance(auth, OneLogin_Saml2_Auth)
@mock.patch.object(SAMLServiceProvider, '_register_sp')
@mock.patch.object(Request, 'args')
@mock.patch.object(Request, 'form')
@mock.patch.object(Request, 'host')
def test_saml_sp(host_mock, form_mock, args_mock, register_sp_mock):
""" Test SAML SP middleware
"""
register_sp_mock.return_value = 'OK'
args_mock.return_value = ['arg1']
form_mock.return_value = {'RelayState': 'dsd67atdas6dad67ad67a'}
host_mock = 'localhost:5000'
with TemporaryDirectory() as tmpdir:
rsa_key = generate_RSA_keypair()
priv_key = serialize_private_pem(rsa_key)
pub_key = serialize_public_pem(rsa_key)
with open(path_join(tmpdir, 'service.key'), 'wb') as keyfile:
keyfile.write(priv_key)
with open(path_join(tmpdir, 'service.cert'), 'wb') as keyfile:
keyfile.write(pub_key)
key_store = KeyStore(dir_path=tmpdir)
local_context = local() # Thread-Local underlying local context
context = SecurityContext(local_context=local_context)
req_object = local()
request = Request(req_object)
response = Response()
sp = SAMLServiceProvider(key_store, configSAML)
sp(context, request, response)
assert response.redirect_url.startswith('http://localhost:8080/saml/idp/sso')
saml_session = {
'samlUserdata': {
'urn:oid:0.9.2342.19200300.100.1.1': ['test-user'],
'urn:oid:1.3.6.1.4.1.5923.1.1.1.6': ['test@example.com'],
'urn:oid:1.3.6.1.4.1.5923.1.1.1.1': ['user']
}
}
sp = SAMLServiceProvider(key_store, configSAML, saml_session=saml_session)
sp(context, request, response)
auth = context.get_auth()
assert context.has_auth()
assert auth.user_id == 'test-user'
assert auth.roles == ['user']
assert auth.username == 'test@example.com'
| [
"vladoohr@gmail.com"
] | vladoohr@gmail.com |
360e24cbfaffe3dccdfcd759b133d63a739bbcb6 | a63d907ad63ba6705420a6fb2788196d1bd3763c | /src/api/dataflow/modeling/basic_model/model_controller.py | 64d54d6efa8764108960512df4f6ea47f6a4eac0 | [
"MIT"
] | permissive | Tencent/bk-base | a38461072811667dc2880a13a5232004fe771a4b | 6d483b4df67739b26cc8ecaa56c1d76ab46bd7a2 | refs/heads/master | 2022-07-30T04:24:53.370661 | 2022-04-02T10:30:55 | 2022-04-02T10:30:55 | 381,257,882 | 101 | 51 | NOASSERTION | 2022-04-02T10:30:56 | 2021-06-29T06:10:01 | Python | UTF-8 | Python | false | false | 10,300 | py | # -*- coding: utf-8 -*-
"""
Tencent is pleased to support the open source community by making BK-BASE 蓝鲸基础平台 available.
Copyright (C) 2021 THL A29 Limited, a Tencent company. All rights reserved.
BK-BASE 蓝鲸基础平台 is licensed under the MIT License.
License for BK-BASE 蓝鲸基础平台:
--------------------------------------------------------------------
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
documentation files (the "Software"), to deal in the Software without restriction, including without limitation
the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software,
and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial
portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT
LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN
NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
"""
import json
from common.exceptions import ApiRequestError, ValidationError
from dataflow.batch.handlers.processing_batch_info import ProcessingBatchInfoHandler
from dataflow.modeling.basic_model.basic_model_serializer import ModelSerializer
from dataflow.modeling.exceptions.comp_exceptions import TableNotExistsError
from dataflow.modeling.handler.algorithm import AlgorithmHandler
from dataflow.modeling.handler.algorithm_version import AlgorithmVersionHandler
from dataflow.modeling.handler.mlsql_model_info import MLSqlModelInfoHandler
from dataflow.modeling.models import MLSqlModelInfo
from dataflow.modeling.utils.modeling_utils import ModelingUtils
class ModelingModelStorageController(object):
def __init__(
self,
expires,
active,
priority,
generate_type,
physical_table_name,
storage_config,
storage_config_id,
created_by,
updated_by,
):
self.expires = expires
self.active = active
self.priority = priority
self.generate_type = generate_type
self.physical_table_name = physical_table_name
self.storage_config = storage_config
self.storage_config_id = storage_config_id
self.created_by = created_by
self.updated_by = updated_by
self.description = self.physical_table_name
self.id = 0
def create(self):
params = {
"expires": self.expires,
"active": self.active,
"priority": self.priority,
"generate_type": self.generate_type,
"physical_table_name": self.physical_table_name,
"storage_config": json.dumps(self.storage_config),
"storage_cluster_config_id": self.storage_config_id,
"created_by": self.created_by,
"updated_by": self.updated_by,
"description": self.description,
"data_type": "parquet",
}
model_storage = MLSqlModelInfoHandler.create_model_storage(params)
self.id = model_storage.id
class ModelingModelController(object):
queryset = MLSqlModelInfoHandler.fetch_all_models()
serializer_class = ModelSerializer
model_name = None
def __init__(self, model_name, created_by=None):
self.model_name = model_name
self.created_by = created_by
def create(self, args):
MLSqlModelInfoHandler.create_mlsql_model_info(**args)
def update(self, status=None, active=None):
update_args = {"status": status, "active": active}
model = MLSqlModelInfo.objects.filter(model_name=self.model_name)
model.update(**update_args)
def retrieve(self, active=None, status=None):
request_body = {
"model_name": self.model_name,
}
if active is not None:
request_body["active"] = active
if status is not None:
request_body["status"] = status
response = MLSqlModelInfoHandler.filter(**request_body)
serializer = ModelSerializer(response, many=True)
model_item_list = []
for model_item in serializer.data:
model_storage_id = model_item["model_storage_id"]
storage_result = MLSqlModelInfoHandler.filter_storage_model(id=model_storage_id)
for storage in storage_result:
model_item["storage"] = {"path": json.loads(storage.storage_config)["path"]}
model_item_list.append(model_item)
if model_item_list:
return model_item_list[0]
else:
return {}
def destroy(self, active=None):
return MLSqlModelInfoHandler.delete_model_by_permission(self.model_name, self.created_by)
@classmethod
def fetch_list(cls, bk_username, model_name):
if bk_username:
args = {"active": 1, "created_by": bk_username}
result_models = MLSqlModelInfoHandler.filter(**args)
serializer = ModelSerializer(result_models, many=True)
return serializer.data
elif model_name:
basic_models = MLSqlModelInfoHandler.filter(model_name__in=model_name, active=1)
result_models = []
for model in basic_models:
result_models.append(
{
"model_name": model.model_name,
"algorithm_name": model.algorithm_name,
"status": model.status,
"created_by": model.created_by,
"created_at": model.create_at,
}
)
return result_models
else:
raise ValidationError("The parameter must contain bk_username or model_name")
class ModelingDDLOperator(object):
def __init__(self, table_id, notebook_id, cell_id, bk_username):
self.table_id = table_id
self.notebook_id = notebook_id
self.cell_id = cell_id
self.bk_username = bk_username
self.notebook_info = {
"notebook_id": self.notebook_id,
"cell_id": self.cell_id,
"bk_username": self.bk_username,
}
def truncate(self):
try:
ModelingUtils.truncate_result_tables(self.table_id, **self.notebook_info)
except ApiRequestError:
raise TableNotExistsError(message_kv={"name": self.table_id})
except Exception as e:
raise e
def drop(self):
try:
ModelingUtils.delete_result_tables(self.table_id, **self.notebook_info)
except ApiRequestError:
raise TableNotExistsError(message_kv={"name": self.table_id})
except Exception as e:
raise e
try:
ProcessingBatchInfoHandler.delete_proc_batch_info(self.table_id)
except Exception as e:
raise e
"""
algorithm_name = models.CharField(primary_key=True, max_length=64)
algorithm_alias = models.CharField(max_length=64)
description = models.TextField(blank=True, null=True)
algorithm_type = models.CharField(max_length=32, blank=True, null=True)
generate_type = models.CharField(max_length=32)
sensitivity = models.CharField(max_length=32)
project_id = models.IntegerField(blank=True, null=True)
run_env = models.CharField(max_length=64, blank=True, null=True)
framework = models.CharField(max_length=64, blank=True, null=True)
created_by = models.CharField(max_length=50)
created_at = models.DateTimeField(auto_now_add=True)
updated_by = models.CharField(max_length=50)
updated_at = models.DateTimeField(auto_now=True)
algorithm_name = models.CharField(max_length=64)
version = models.IntegerField()
logic = models.TextField(blank=True, null=True)
config = models.TextField()
execute_config = models.TextField()
properties = models.TextField()
created_by = models.CharField(max_length=50)
created_at = models.DateTimeField(auto_now_add=True)
updated_by = models.CharField(max_length=50)
updated_at = models.DateTimeField(auto_now=True)
description = models.TextField(blank=True, null=True)
"""
class ModelingAlgorithmController(object):
@classmethod
def create(cls):
algorithm_list = AlgorithmHandler.get_spark_algorithm()
for algorithm in algorithm_list:
params = {
"algorithm_name": algorithm.algorithm_name,
"algorithm_alias": algorithm.algorithm_alias,
"description": algorithm.description,
"algorithm_type": algorithm.algorithm_type,
"generate_type": algorithm.generate_type,
"sensitivity": algorithm.sensitivity,
"project_id": algorithm.project_id,
"run_env": algorithm.run_env,
"framework": algorithm.framework,
"created_by": algorithm.created_by,
"created_at": algorithm.created_at,
"updated_by": algorithm.updated_by,
}
AlgorithmHandler.save(**params)
algorithm_version = AlgorithmVersionHandler.get_alorithm_by_name(algorithm.algorithm_name)
version_params = {
"id": algorithm_version.id,
"algorithm_name": algorithm_version.algorithm_name,
"version": algorithm_version.version,
"logic": algorithm_version.logic,
"config": algorithm_version.config,
"execute_config": algorithm_version.execute_config,
"properties": algorithm_version.properties,
"created_by": algorithm_version.created_by,
"created_at": algorithm_version.created_at,
"updated_by": algorithm_version.updated_by,
"description": algorithm_version.description,
}
AlgorithmVersionHandler.save(**version_params)
| [
"terrencehan@tencent.com"
] | terrencehan@tencent.com |
5c245966477963c28a3d12c61a02c5648ede4d3a | cf6a50732d708a3a3db0f297b73cb6f449a00b44 | /Practice4_lists/change_list.py | 3faa396ddbe5ab69f071885c15c902f9b0884922 | [] | no_license | subash319/PythonDepth | 9fe3920f4b0a25be02a9abbeeb60976853ab812e | 0de840b7776009e8e4362d059af14afaac6a8879 | refs/heads/master | 2022-11-16T03:03:56.874422 | 2020-07-17T01:19:39 | 2020-07-17T01:19:39 | 266,921,459 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,691 | py | # 1. listA = [1, 2, 3, 4, 5, 6, 7, 8]
#
# Write a statement to change the second last element of this list to 200.
listA = [1, 2, 3, 4, 5, 6, 7, 8]
listA[-2] = 200
print(listA)
# 2. listA = [1, 2, 3, 4, 5, 6, 7, 8]
#
# In this list, replace the elements 3,4,5,6 with elements 30,40,50,60,70,80
listA = [1, 2, 3, 4, 5, 6, 7, 8]
listA[2:6] = [30, 40, 50, 60, 70, 80]
print(listA)
# 3. listA = [1, 2, 3, 4, 5, 6, 7, 8]
#
# Write a statement to replace all the elements from index 3 onwards with the characters of the string 'pqr'.
#
# Resulting list should be [1, 2, 3, 'p', 'q', 'r']
listA = [1, 2, 3, 4, 5, 6, 7, 8]
listA[3:] = 'pqr'
print(listA)
# 4. listA = [1, 2, 3, 4, 5, 6, 7, 8]
#
# Write a statement to insert new elements 10,20,30,40,50 starting at index 5.
#
# Resulting list should be [1, 2, 3, 4, 5, 10, 20, 30, 40, 50, 6, 7, 8]
listA = [1, 2, 3, 4, 5, 6, 7, 8]
listA[5:5] = [10, 20, 30, 40, 50]
print(listA)
# 5. listA = [1, 2, 3, 4, 5, 6, 7, 8]
#
# Write a statement to delete all elements from index 2 to index 5.
#
# Resulting list should be [1, 2, 7, 8]
listA = [1, 2, 3, 4, 5, 6, 7, 8]
listA[2:6] = []
print(listA)
# 6. listA = [1, 2, 3, 4, 5, 6, 7, 8]
#
# Write a statement to make a new list named cpy that is a copy of this list listA.
listA = [1, 2, 3, 4, 5, 6, 7, 8]
cpy = listA[:]
print(cpy)
print(id(cpy))
print(listA)
print(id(listA))
# 7. listA = [1, 2, 3, 4, 5, 6, 7, 8]
#
# Write a statement to make a new list named rev that is reverse of this list listA.
listA = [1, 2, 3, 4, 5, 6, 7, 8]
rev = listA[::-1]
print(rev)
# What will be the output
listA = [4,5,6,7,8,9,10]
listA[2:5] = []
print(listA) #[4,5,10]
listA[2]=[]
print(listA) #[4,5,[],10] | [
"subas319@gmail.com"
] | subas319@gmail.com |
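The exercises above all turn on the difference between assigning to a slice and assigning to a single index; a short self-checking sketch of those semantics:

```python
listA = [1, 2, 3, 4, 5, 6, 7, 8]

listA[2:6] = [30, 40]            # slice assignment: lengths need not match
assert listA == [1, 2, 30, 40, 7, 8]

listA[2:2] = [10, 20]            # empty-range slice inserts without removing
assert listA == [1, 2, 10, 20, 30, 40, 7, 8]

listA[2:4] = []                  # assigning [] to a slice deletes the range
assert listA == [1, 2, 30, 40, 7, 8]

listA[2] = []                    # plain index assignment stores [] as one element
assert listA == [1, 2, [], 40, 7, 8]
```

This is why `listA[2:5] = []` shrinks the list while `listA[2] = []` leaves its length unchanged — exactly the behavior probed at the end of the exercise file.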
6276f32df7a72d66b9eb457516d20bf0666fe0a9 | 84a325bc5e038a601c84f4af291ff0b877b360bb | /tests/test_serialization.py | a875b79b256cbd0dc6ccb76bd0cacb282e85af48 | [
"MIT"
] | permissive | afcarl/nomen | 6cecae13f7dc0d0c0eeeed4d0b80c45d119e7ad2 | f9d579b60b8c9ad60af2f2e5e8524244a6365b58 | refs/heads/master | 2020-03-22T10:25:12.930476 | 2018-06-22T00:13:10 | 2018-06-22T00:13:10 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 633 | py | from util import cfg, get_path
import os
import nomen
import yaml
import copy
def test_serialization():
tmp_path = get_path('tmp_config.yml')
with open(tmp_path, 'w') as f:
f.write(str(cfg))
tmp_cfg = cfg.copy()
tmp_cfg.update_from_yaml(tmp_path)
assert str(tmp_cfg) == str(cfg)
# test yaml variables
assert tmp_cfg['second_house/windows'] == cfg['second_house/windows']
os.remove(tmp_path)
def test_loading():
tmp_cfg = cfg.copy()
tmp_cfg.update_from_yaml(get_path('local_config.yml'))
assert tmp_cfg['model/learning_rate'] == 0.232
if __name__ == '__main__':
test_serialization()
test_loading()
| [
"jaan.altosaar@gmail.com"
] | jaan.altosaar@gmail.com |
5e3c5aa73c6369cb87cbd58d130549dd347e7cf3 | 7a8345e0a3b84614f9c8f31bb249f7e211cbd549 | /PycharmProjects/untitled/func/test1.py | aba58eb970f6e4b49870bed7918301c711633c92 | [] | no_license | oumingwang/ubuntupythoncode | 5ac5baf16acecec3cd518094a49f9766dc6a823b | 48dd76f848efedf13ba049c5d4ef9402d3285675 | refs/heads/master | 2020-07-04T03:42:16.080197 | 2016-11-19T15:21:49 | 2016-11-19T15:21:49 | 74,214,129 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 393 | py | #coding:utf-8
def convert(func, seq):
print 'conv sequence of numbers to same type'
return [func(eachNum) for eachNum in seq]
#解析式
myseq = (123,45.67,-6.2e8,99999999L)
print convert(int,myseq)
print convert(float,myseq)
print convert(long,myseq)
def MyFunc(name):
print name + ' world'
MyFunc('hello')
doc = MyFunc.__doc__   # None: MyFunc has no docstring (renamed from `list` to avoid shadowing the builtin)
print doc
help(MyFunc)           # help() prints its report itself and returns None
"474978390@qq.com"
] | 474978390@qq.com |
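The file above is Python 2 (`print` statements, the `long` type). A minimal Python 3 rendering of the same `convert` idea, for readers on a current interpreter:

```python
def convert(func, seq):
    """Apply func to every element of seq, returning a new list."""
    return [func(item) for item in seq]

myseq = (123, 45.67, -6.2e8, 99999999)          # no `L` suffix in Python 3
print(convert(int, myseq))     # [123, 45, -620000000, 99999999]
print(convert(float, myseq))   # [123.0, 45.67, -620000000.0, 99999999.0]
```

Python 3 unified `int` and `long`, so the original's third call, `convert(long, myseq)`, has no direct equivalent.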
bd67ccf0e1115f6c66b0342f3e775786b7d3b3d7 | 2bffd56acb0d10cd7f142f2214e957373ed59114 | /4/femfel/pyimagesearch/panorama.py | bbe507ab286dbb741a3900c4bfe088e52d97fe2f | [] | no_license | OlofHarrysson/imageanalysis | e53e55872476c36032d9b95c4456fd34e785b13b | 5a37471e69b0d5eb72cbc32d2d9f342a8e9b918c | refs/heads/master | 2021-07-18T00:35:02.524800 | 2017-10-22T11:06:51 | 2017-10-22T11:06:51 | 105,156,313 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,876 | py | # import the necessary packages
import numpy as np
import imutils
import cv2
class Stitcher:
def __init__(self):
# determine if we are using OpenCV v3.X
self.isv3 = imutils.is_cv3()
def stitch(self, images, ratio=0.75, reprojThresh=4.0,
showMatches=False):
# unpack the images, then detect keypoints and extract
# local invariant descriptors from them
(imageB, imageA) = images
(kpsA, featuresA) = self.detectAndDescribe(imageA)
(kpsB, featuresB) = self.detectAndDescribe(imageB)
# match features between the two images
M = self.matchKeypoints(kpsA, kpsB,
featuresA, featuresB, ratio, reprojThresh)
# if the match is None, then there aren't enough matched
# keypoints to create a panorama
if M is None:
return None
# otherwise, apply a perspective warp to stitch the images
# together
(matches, H, status) = M
result = cv2.warpPerspective(imageA, H, (imageA.shape[1], imageA.shape[0]), borderValue= (255, 255, 255))
# check to see if the keypoint matches should be visualized
if showMatches:
vis = self.drawMatches(imageA, imageB, kpsA, kpsB, matches,
status)
# return a tuple of the stitched image and the
# visualization
return (result, vis)
# return the stitched image
return result
def detectAndDescribe(self, image):
# convert the image to grayscale
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# check to see if we are using OpenCV 3.X
if self.isv3:
# detect and extract features from the image
descriptor = cv2.xfeatures2d.SIFT_create()
(kps, features) = descriptor.detectAndCompute(image, None)
# otherwise, we are using OpenCV 2.4.X
else:
# detect keypoints in the image
detector = cv2.FeatureDetector_create("SIFT")
kps = detector.detect(gray)
# extract features from the image
extractor = cv2.DescriptorExtractor_create("SIFT")
(kps, features) = extractor.compute(gray, kps)
# convert the keypoints from KeyPoint objects to NumPy
# arrays
kps = np.float32([kp.pt for kp in kps])
# return a tuple of keypoints and features
return (kps, features)
def matchKeypoints(self, kpsA, kpsB, featuresA, featuresB,
ratio, reprojThresh):
# compute the raw matches and initialize the list of actual
# matches
matcher = cv2.DescriptorMatcher_create("BruteForce")
rawMatches = matcher.knnMatch(featuresA, featuresB, 2)
matches = []
# loop over the raw matches
for m in rawMatches:
# ensure the distance is within a certain ratio of each
# other (i.e. Lowe's ratio test)
if len(m) == 2 and m[0].distance < m[1].distance * ratio:
matches.append((m[0].trainIdx, m[0].queryIdx))
# computing a homography requires at least 4 matches
if len(matches) > 4:
# construct the two sets of points
ptsA = np.float32([kpsA[i] for (_, i) in matches])
ptsB = np.float32([kpsB[i] for (i, _) in matches])
# compute the homography between the two sets of points
(H, status) = cv2.findHomography(ptsA, ptsB, cv2.RANSAC,
reprojThresh)
            # return the matches along with the homography matrix
# and status of each matched point
return (matches, H, status)
        # otherwise, no homography could be computed
return None
def drawMatches(self, imageA, imageB, kpsA, kpsB, matches, status):
# initialize the output visualization image
(hA, wA) = imageA.shape[:2]
(hB, wB) = imageB.shape[:2]
vis = np.zeros((max(hA, hB), wA + wB, 3), dtype="uint8")
vis[0:hA, 0:wA] = imageA
vis[0:hB, wA:] = imageB
# loop over the matches
for ((trainIdx, queryIdx), s) in zip(matches, status):
# only process the match if the keypoint was successfully
# matched
if s == 1:
# draw the match
ptA = (int(kpsA[queryIdx][0]), int(kpsA[queryIdx][1]))
ptB = (int(kpsB[trainIdx][0]) + wA, int(kpsB[trainIdx][1]))
cv2.line(vis, ptA, ptB, (0, 255, 0), 1)
# return the visualization
return vis | [
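The loop in matchKeypoints above implements Lowe's ratio test: a raw 2-NN match is kept only when its best distance is clearly smaller than the second best. The filter can be sketched independently of OpenCV's match objects (the tuple layout below is an assumption for the sketch; cv2 actually returns DMatch objects):

```python
def lowe_filter(raw_matches, ratio=0.75):
    """Keep (trainIdx, queryIdx) pairs passing Lowe's ratio test.

    raw_matches: 2-nearest-neighbour results, one list per query
    keypoint, as (trainIdx, queryIdx, distance) tuples sorted by
    increasing distance.
    """
    kept = []
    for pair in raw_matches:
        # same condition as above: best distance well below second best
        if len(pair) == 2 and pair[0][2] < pair[1][2] * ratio:
            kept.append((pair[0][0], pair[0][1]))
    return kept

raw = [
    [(3, 0, 10.0), (7, 0, 20.0)],  # 10.0 < 0.75 * 20.0 -> kept
    [(4, 1, 18.0), (9, 1, 20.0)],  # 18.0 >= 0.75 * 20.0 -> dropped
]
print(lowe_filter(raw))  # -> [(3, 0)]
```

With OpenCV available, the surviving index pairs feed cv2.findHomography exactly as matchKeypoints does above.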
"harrysson.olof@gmail.com"
] | harrysson.olof@gmail.com |
75b96b8fc35b74e1ba693c617e27ca511b260e49 | d6882f1f09350f7f03a2d615c9c64462a10438c7 | /WX_FUNC/text/carnumber.py | 5ac4f78c4da8d2da3acd0b63621c0a998826c8c3 | [] | no_license | yantaobattler/WXGZ | be7ce8d3dfffb1bebf32508b0530284304dc6df2 | 24e562e9fd6171a8aa75689a0413604626c15963 | refs/heads/master | 2020-04-26T04:43:05.603952 | 2019-08-07T11:39:01 | 2019-08-07T11:39:01 | 173,307,840 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 406 | py | # -*- coding: UTF-8 -*-
from WX_FUNC.publictools import *
def carnumberlimit(content):
    # '限号' means "plate-number restriction"; the bare command asks for a city.
    if content == '限号':
        return '请输入您要查询的城市'  # "Please enter the city you want to query"
    # Default reply: "No restriction information for the area you queried."
    rsp_content = '您查询的地区没有限号信息'
    city = content[2:]  # strip the two-character '限号' command prefix
    rsp_dict = xianhao.excute()  # external helper from publictools (name as defined there)
    for k in rsp_dict:
        if k.endswith(city):
            rsp_content = k + '\n' + rsp_dict[k]
    return rsp_content
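The handler above answers by suffix-matching the city name against the keys returned by the external xianhao.excute() helper (not shown here). The lookup pattern in isolation, with made-up English data standing in for the real table:

```python
def lookup_by_suffix(table, city):
    """Return the matching key and its value (newline-joined), or a default."""
    for key, value in table.items():
        if key.endswith(city):
            return key + '\n' + value
    return 'no restriction info for this area'

table = {
    'Beijing City': 'plates 1 and 6 restricted today',
    'Tianjin City': 'plates 2 and 7 restricted today',
}
print(lookup_by_suffix(table, 'Tianjin City'))
```

Note the original keeps scanning so the last matching key wins; the early return here is a simplification for illustration.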
| [
"yantao2212@126.com"
] | yantao2212@126.com |
e76b1150207bcbd1c3dcbb7e7de1e7db7260d1ee | 3cd78ff37cf258cb3a6b85a850c42f2d296ba348 | /scripts/plotify | 1084b0c983c6afb045646cc5746d9067360d00ee | [
"BSD-2-Clause"
] | permissive | emhuff/Piff | b93408995f729726bed69e057835f1ae171f8634 | e310bc699a66fb5b95bde9d0e501dea93a82b371 | refs/heads/master | 2020-07-05T23:06:24.901392 | 2019-07-12T21:13:35 | 2019-07-19T22:53:19 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,106 | #!/usr/bin/env python
# Copyright (c) 2016 by Mike Jarvis and the other collaborators on GitHub at
# https://github.com/rmjarvis/Piff All rights reserved.
#
# Piff is free software: Redistribution and use in source and binary forms
# with or without modification, are permitted provided that the following
# conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the disclaimer given in the accompanying LICENSE
# file.
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the disclaimer given in the documentation
# and/or other materials provided with the distribution.
from __future__ import print_function
import sys
import piff
def parse_args():
"""Handle the command line arguments to plotify executable.
Returns the args as an argparse.Namespace object.
It will have the following fields:
args.config_file
args.variables
args.verbose
args.log_file
args.version
"""
import argparse
version_str = "Piff version %s"%piff.version
description = "Print a Piff PSF model from a list of fits files.\n"
    description += "See https://github.com/rmjarvis/Piff for documentation."
parser = argparse.ArgumentParser(description=description, add_help=True, epilog=version_str)
parser.add_argument(
'config_file', type=str, nargs='?',
help='the configuration file')
parser.add_argument(
'variables', type=str, nargs='*',
help='additional variables or modifications to variables in the config file. ')
parser.add_argument(
'-v', '--verbose', type=int, action='store', default=None, choices=(0, 1, 2, 3),
help='integer verbosity level: min=0, max=3 '
'[default=1; overrides config verbose value]')
parser.add_argument(
'-l', '--log_file', type=str, action='store', default=None,
help='filename for storing logging output [default is to stream to stdout]')
parser.add_argument(
'--version', action='store_const', default=False, const=True,
help='show the version of Piff')
args = parser.parse_args()
    if args.config_file is None:
if args.version:
print(version_str)
else:
parser.print_help()
sys.exit()
elif args.version:
print(version_str)
return args
def main():
args = parse_args()
# Read the config file
config = piff.config.read_config(args.config_file)
# Create a logger with the given verbosity and log_file
if args.verbose is None:
verbose = config.get('verbose', 1)
else:
verbose = args.verbose
logger = piff.config.setup_logger(verbose, args.log_file)
    logger.warning('Using config file %s'%args.config_file)
# Add the additional variables to the config file
piff.config.parse_variables(config, args.variables, logger)
# Run the plotify function
piff.plotify(config, logger)
if __name__ == '__main__':
main()
| [
"chris.pa.davis@gmail.com"
] | chris.pa.davis@gmail.com | |
29fba49053b602e1a4fd38b60a029bb58e9f89af | 1e09be95bb410c091860982f1569c5346992ccc5 | /AutoDebias/baselines/CausE.py | 228a1809e3bfbd78bf0ab76906340328217a8e83 | [
"MIT"
] | permissive | colagold/RepetitionAlgorithm | 1410855f125ecc54ba4c4e3aa0be4cbef2c6a48b | 0112d73e29ac63fb6c4f4e22f851e6672c302e6c | refs/heads/master | 2023-09-05T16:00:07.428209 | 2021-11-10T12:00:44 | 2021-11-10T12:00:44 | 418,433,782 | 4 | 0 | null | null | null | null | UTF-8 | Python | false | false | 8,007 | py | import os
import numpy as np
import random
import torch
import torch.nn as nn
from model import *
import arguments
import utils.load_dataset
import utils.data_loader
import utils.metrics
from utils.early_stop import EarlyStopping, Stop_args
def setup_seed(seed):
torch.manual_seed(seed)
if torch.cuda.is_available():
torch.cuda.manual_seed(seed)
np.random.seed(seed)
random.seed(seed)
def para(args):
if args.dataset == 'yahooR3':
args.training_args = {'batch_size': 1024, 'epochs': 500, 'patience': 60, 'block_batch': [6000, 500]}
args.base_model_args = {'emb_dim': 10, 'learning_rate': 0.0001, 'weight_decay': 0}
args.teacher_model_args = {'emb_dim': 10, 'learning_rate': 0.1, 'weight_decay': 10}
elif args.dataset == 'coat':
args.training_args = {'batch_size': 128, 'epochs': 500, 'patience': 60, 'block_batch': [64, 64]}
args.base_model_args = {'emb_dim': 10, 'learning_rate': 0.001, 'weight_decay': 0.1}
args.teacher_model_args = {'emb_dim': 10, 'learning_rate': 0.1, 'weight_decay': 10}
else:
print('invalid arguments')
        os._exit(1)
def train_and_eval(train_data, unif_data, val_data, test_data, device = 'cuda',
model_args: dict = {'emb_dim': 10, 'learning_rate': 0.001, 'weight_decay': 0.0}, teacher_args: dict = {'emb_dim': 10, 'learning_rate': 0.1, 'weight_decay': 10},
training_args: dict = {'batch_size': 1024, 'epochs': 100, 'patience': 20, 'block_batch': [1000, 100]}):
# build data_loader.
train_loader = utils.data_loader.Block(train_data, u_batch_size=training_args['block_batch'][0], i_batch_size=training_args['block_batch'][1], device=device)
unif_loader = utils.data_loader.Block(unif_data, u_batch_size=training_args['block_batch'][0], i_batch_size=training_args['block_batch'][1], device=device)
val_loader = utils.data_loader.DataLoader(utils.data_loader.Interactions(val_data), batch_size=training_args['batch_size'], shuffle=False, num_workers=0)
test_loader = utils.data_loader.DataLoader(utils.data_loader.Interactions(test_data), batch_size=training_args['batch_size'], shuffle=False, num_workers=0)
# data shape
n_user, n_item = train_data.shape
# model and its optimizer.
model = MF(n_user, n_item * 2, dim=model_args['emb_dim'], dropout=0).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=model_args['learning_rate'], weight_decay=0)
# loss_criterion
criterion = nn.MSELoss(reduction='sum')
# begin training
stopping_args = Stop_args(patience=training_args['patience'], max_epochs=training_args['epochs'])
early_stopping = EarlyStopping(model, **stopping_args)
for epo in range(early_stopping.max_epochs):
training_loss = 0
for u_batch_idx, users in enumerate(train_loader.User_loader):
for i_batch_idx, items in enumerate(train_loader.Item_loader):
# loss of training set
model.train()
users_train, items_train, y_train = train_loader.get_batch(users, items)
users_unif, items_unif, y_unif = unif_loader.get_batch(users, items)
items_unif = items_unif + n_item
users_combine = torch.cat((users_train, users_unif))
items_combine = torch.cat((items_train, items_unif))
y_combine = torch.cat((y_train, y_unif))
y_hat = model(users_combine, items_combine)
student_items_embedding = model.item_latent.weight[items_train]
teacher_items_embedding = torch.detach(model.item_latent.weight[items_train + n_item])
reg = torch.sum(torch.abs(student_items_embedding - teacher_items_embedding))
loss = criterion(y_hat, y_combine) + model_args['weight_decay'] * model.l2_norm(users_combine, items_combine) \
+ teacher_args['weight_decay'] * reg
optimizer.zero_grad()
loss.backward()
optimizer.step()
training_loss += loss.item()
model.eval()
with torch.no_grad():
# train metrics
train_pre_ratings = torch.empty(0).to(device)
train_ratings = torch.empty(0).to(device)
for u_batch_idx, users in enumerate(train_loader.User_loader):
for i_batch_idx, items in enumerate(train_loader.Item_loader):
users_train, items_train, y_train = train_loader.get_batch(users, items)
users_unif, items_unif, y_unif = unif_loader.get_batch(users, items)
pre_ratings = model(users_train, items_train)
train_pre_ratings = torch.cat((train_pre_ratings, pre_ratings))
train_ratings = torch.cat((train_ratings, y_train))
# validation metrics
val_pre_ratings = torch.empty(0).to(device)
val_ratings = torch.empty(0).to(device)
for batch_idx, (users, items, ratings) in enumerate(val_loader):
pre_ratings = model(users, items)
val_pre_ratings = torch.cat((val_pre_ratings, pre_ratings))
val_ratings = torch.cat((val_ratings, ratings))
train_results = utils.metrics.evaluate(train_pre_ratings, train_ratings, ['MSE', 'NLL'])
val_results = utils.metrics.evaluate(val_pre_ratings, val_ratings, ['MSE', 'NLL', 'AUC'])
            print('Epoch: {0:2d} / {1}, Training: {2}, Validation: {3}'.
format(epo, training_args['epochs'], ' '.join([key+':'+'%.3f'%train_results[key] for key in train_results]),
' '.join([key+':'+'%.3f'%val_results[key] for key in val_results])))
if early_stopping.check([val_results['AUC']], epo):
break
# testing loss
print('Loading {}th epoch'.format(early_stopping.best_epoch))
model.load_state_dict(early_stopping.best_state)
# validation metrics
val_pre_ratings = torch.empty(0).to(device)
val_ratings = torch.empty(0).to(device)
for batch_idx, (users, items, ratings) in enumerate(val_loader):
pre_ratings = model(users, items)
val_pre_ratings = torch.cat((val_pre_ratings, pre_ratings))
val_ratings = torch.cat((val_ratings, ratings))
# test metrics
test_users = torch.empty(0, dtype=torch.int64).to(device)
test_items = torch.empty(0, dtype=torch.int64).to(device)
test_pre_ratings = torch.empty(0).to(device)
test_ratings = torch.empty(0).to(device)
for batch_idx, (users, items, ratings) in enumerate(test_loader):
pre_ratings = model(users, items)
test_users = torch.cat((test_users, users))
test_items = torch.cat((test_items, items))
test_pre_ratings = torch.cat((test_pre_ratings, pre_ratings))
test_ratings = torch.cat((test_ratings, ratings))
val_results = utils.metrics.evaluate(val_pre_ratings, val_ratings, ['MSE', 'NLL', 'AUC'])
test_results = utils.metrics.evaluate(test_pre_ratings, test_ratings, ['MSE', 'NLL', 'AUC', 'Recall_Precision_NDCG@'], users=test_users, items=test_items)
print('-'*30)
print('The performance of validation set: {}'.format(' '.join([key+':'+'%.3f'%val_results[key] for key in val_results])))
print('The performance of testing set: {}'.format(' '.join([key+':'+'%.3f'%test_results[key] for key in test_results])))
print('-'*30)
return val_results,test_results
if __name__ == "__main__":
args = arguments.parse_args()
para(args)
setup_seed(args.seed)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
train, unif_train, validation, test = utils.load_dataset.load_dataset(data_name=args.dataset, type = 'explicit', seed = args.seed, device=device)
train_and_eval(train, unif_train, validation, test, device, model_args = args.base_model_args, teacher_args = args.teacher_model_args, training_args = args.training_args)
| [
"1346380661@qq.com"
] | 1346380661@qq.com |
8f64ee6336f681650a0dd495cc61921e035cd197 | de24f83a5e3768a2638ebcf13cbe717e75740168 | /moodledata/vpl_data/59/usersdata/246/26430/submittedfiles/testes.py | 12faf80c31b214c2cbf4b4d9142d76efdbdfe08c | [] | no_license | rafaelperazzo/programacao-web | 95643423a35c44613b0f64bed05bd34780fe2436 | 170dd5440afb9ee68a973f3de13a99aa4c735d79 | refs/heads/master | 2021-01-12T14:06:25.773146 | 2017-12-22T16:05:45 | 2017-12-22T16:05:45 | 69,566,344 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 100 | py | # -*- coding: utf-8 -*-
from __future__ import division
#COMECE AQUI ABAIXO
a = 5.2
print('%d' %a)
| [
"rafael.mota@ufca.edu.br"
] | rafael.mota@ufca.edu.br |
a2dbc886b9861378eedfc1ff9a9e22384c135b00 | c933e9f705aca2586a866cbb489804eb37103b6f | /Archives/test1/FELion_avgSpec.py | aa5add61b82b239d11784740f3456b9d817062b8 | [
"MIT"
] | permissive | aravindhnivas/FELion-Spectrum-Analyser | ce49b6b23323a5e58df0cd763e94129efccad0ff | 430f16884482089b2f717ea7dd50625078971e48 | refs/heads/master | 2020-04-08T00:24:30.809611 | 2019-08-29T14:21:44 | 2019-08-29T14:21:44 | 158,850,892 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 6,136 | py | #!/usr/bin/python3
import os
import numpy as np
import pylab as P
import sys
import copy
from os import path
from scipy.optimize import leastsq
from FELion_normline import norm_line_felix
from FELion_normline import felix_binning
from matplotlib.ticker import MultipleLocator, FormatStrFormatter, NullFormatter, NullLocator
import matplotlib.pyplot as plt
## modules
import os
from tkinter import Tk, messagebox
DELTA=2.0
def export_file(fname, wn, inten):
f = open(fname.split(".pdf")[0] + '.dat','w')
f.write("#DATA points as shown in figure: " + fname + ".pdf file!\n")
f.write("#wn (cm-1) intensity\n")
for i in range(len(wn)):
f.write("{:8.3f}\t{:8.2f}\n".format(wn[i], inten[i]))
f.close()
def main(**kwargs):
t="Title"
ts=10
lgs=5
minor=5
major=50
majorTickSize=8
xmin=1000
xmax=2000
fig = plt.subplot(1,1,1)
plt.rcParams['figure.figsize'] = [6,4]
plt.rcParams['figure.dpi'] = 80
plt.rcParams['savefig.dpi'] = 100
plt.rcParams['font.size'] = ts # Title Size
plt.rcParams['legend.fontsize'] = lgs # Legend Size
my_path = os.getcwd() # getting current directory
pwd = os.listdir(my_path + "/DATA") # going into the data folder to fetch all the available data filename.
    fileNameList = [] # creating a variable list : Don't add any data here. You can use the script as it is since it automatically takes the data in the DATA folder
for p in pwd:
if p.endswith(".felix"): # finding the files only with .felix extension
filename = os.path.basename(p) # getting the name of the file
file = os.path.splitext(filename)[0] # printing only the file name without the extension .felix
fileNameList.append([file]) # saving all the file names in the variable list fileNameList
else:
continue
xs = np.array([],dtype='double')
ys = np.array([],dtype='double')
for l in fileNameList:
a,b = norm_line_felix(l[0])
fig.plot(a, b, ls='', marker='o', ms=1, label=l[0])
xs = np.append(xs,a)
ys = np.append(ys,b)
fig.legend(title=t) #Set the fontsize for each label
#Binning
binns, inten = felix_binning(xs, ys, delta=DELTA)
fig.plot(binns, inten, ls='-', marker='', c='k')
#Exporting the Binned file.
F = 'OUT/average_Spectrum.pdf'
export_file(F, binns, inten)
#Set the Xlim values and fontsizes.
fig.set_xlim([xmin,xmax])
fig.set_xlabel(r"Calibrated lambda (cm-1)", fontsize=10)
fig.set_ylabel(r"Normalized Intensity", fontsize=10)
fig.tick_params(axis='both', which='major', labelsize=majorTickSize)
#Set the Grid value False if you don't need it.
fig.grid(True)
#Set the no. of Minor and Major ticks.
fig.xaxis.set_minor_locator(MultipleLocator(minor))
fig.xaxis.set_major_locator(MultipleLocator(major))
plt.savefig(F)
plt.close()
print("Completed.")
print()
def avgSpec_plot(t, ts, lgs, minor, major, \
majorTickSize, outFilename,\
location, mname, temp, bwidth, ie, save,\
specificFiles, allFiles
):
# Custom definitions:
def filesaved():
if os.path.isfile(my_path+"/OUT/{}.pdf".format(outFilename)) and save:
#os.chdir(my_path+"/OUT")
if "/OUT/{}.pdf".format(outFilename).endswith(".pdf"):
root = Tk()
root.withdraw()
messagebox.showinfo("Information", "File '{}.pdf' Saved".format(outFilename))
root.destroy()
def filenotfound():
root = Tk()
root.withdraw()
messagebox.showerror("Error", "FILE NOT FOUND (or some of the file's .base files are missing)")
root.destroy()
#save = True
show = True
os.chdir(location)
my_path = os.getcwd()
try:
fig = plt.subplot(1,1,1)
plt.rcParams['figure.figsize'] = [6,4]
plt.rcParams['figure.dpi'] = 80
plt.rcParams['savefig.dpi'] = 100
plt.rcParams['font.size'] = ts # Title Size
plt.rcParams['legend.fontsize'] = lgs # Legend Size
pwd = os.listdir(my_path + "/DATA") # going into the data folder to fetch all the available data filename.
        fileNameList = [] # creating a variable list : Don't add any data here. You can use the script as it is since it automatically takes the data in the DATA folder
for f in pwd:
if f.find(".felix")>=0:
fileNameList.append(f.split(".felix")[0])
xs = np.array([],dtype='double')
ys = np.array([],dtype='double')
        if allFiles and not specificFiles:
foravgshow = True
normshow = False
for filelist in fileNameList:
a,b = norm_line_felix(filelist, mname, temp, bwidth, ie, save, foravgshow, normshow)
fig.plot(a, b, ls='', marker='o', ms=1, label=filelist)
xs = np.append(xs,a)
ys = np.append(ys,b)
fig.legend(title=t) #Set the fontsize for each label
#Binning
binns, inten = felix_binning(xs, ys, delta=DELTA)
fig.plot(binns, inten, ls='-', marker='', c='k')
#Set the Xlim values and fontsizes.
#fig.set_xlim([xmin,xmax])
fig.set_xlabel(r"Calibrated lambda (cm-1)", fontsize=10)
fig.set_ylabel(r"Normalized Intensity", fontsize=10)
fig.tick_params(axis='both', which='major', labelsize=majorTickSize)
#Set the Grid value False if you don't need it.
fig.grid(True)
#Set the no. of Minor and Major ticks.
fig.xaxis.set_minor_locator(MultipleLocator(minor))
fig.xaxis.set_major_locator(MultipleLocator(major))
if save:
# Saving and exporting the Binned file.
F = 'OUT/%s.pdf'%(outFilename)
export_file(F, binns, inten)
plt.savefig(F)
if show:
plt.show()
filesaved()
plt.close()
print()
print("Completed.")
print()
    except Exception:
filenotfound()
return
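felix_binning is imported from FELion_normline and its implementation is not part of this file; below is a plausible pure-Python sketch of delta-wide binning that averages intensities per bin (an assumption about what the real function does, offered only as a reading aid):

```python
def bin_average(x, y, delta=2.0):
    """Average y values whose x falls into the same delta-wide bin."""
    sums, counts = {}, {}
    for xi, yi in zip(x, y):
        b = int(xi // delta)          # index of the bin containing xi
        sums[b] = sums.get(b, 0.0) + yi
        counts[b] = counts.get(b, 0) + 1
    centers = sorted(sums)
    binned_x = [(b + 0.5) * delta for b in centers]   # bin centres
    binned_y = [sums[b] / counts[b] for b in centers]
    return binned_x, binned_y

bx, by = bin_average([1000.1, 1000.9, 1005.0], [1.0, 3.0, 5.0], delta=2.0)
print(bx, by)  # -> [1001.0, 1005.0] [2.0, 5.0]
```

The real function may also resample onto a regular wavenumber grid; treat this only as a sketch of the averaging step.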
| [
"aravindhnivas28@gmail.com"
] | aravindhnivas28@gmail.com |
9c9c5ab28f50c4805bc5f3749bffb7f5979136de | ba567a54a54ca54607302c6860f05620b18ef8f9 | /Content/Compute/Lambda/WorkloadManager/src/lambda/main.py | 370a786523be751d183d3db840a6852a23f4f0aa | [
"Apache-2.0"
] | permissive | gurramd/cloudcentersuite | c9a24bb5864cb948f9a1ba65de39adbd3c142632 | 82e00457404562f278120480e19c2440c7f3a0ec | refs/heads/master | 2020-05-14T00:56:39.282849 | 2019-04-16T11:39:15 | 2019-04-16T11:39:15 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,556 | py |
import os
import sys
from util import *
from lambda_management import LambdaManagement
import json
try:
function_name = os.environ["functionName"]
function_description = os.environ["functionDescription"]
run_time = os.environ["runtimes"]
app_package = os.environ["appPackage"]
role_for_lambda=os.environ["roleForLambda"]
dynamo_db_table_name = ""
dependents = False
if 'CliqrDependencies' in os.environ:
dependents = os.environ.get('CliqrDependencies', "")
if dependents:
dynamo_db_table_name = os.environ.get('CliqrTier_'+dependents+'_tableName')
    if not dependents:
        print_error("There is no dependency found to create the table.")
        sys.exit(127)
print_log("table name ="+dynamo_db_table_name)
if dynamo_db_table_name == "":
print_error("There is no table found.")
sys.exit(127)
print_log(function_name)
print_log(function_description)
print_log(run_time)
print_log(app_package)
print_log(role_for_lambda)
except Exception as er:
print_log("some of the parameters are not given properly...")
print("my error",er)
print_error("some of the parameters are not given properly.")
sys.exit(127)
def get_package_base_name(appPackage):
name_list=appPackage.split('/')
package_zip=name_list[len(name_list)-1].split('.')
return package_zip[0]
def start():
object = LambdaManagement()
app_package_base_name = get_package_base_name(app_package)
handler = app_package_base_name + "/" + os.environ["initFile"] + "." + os.environ["invokeMethod"]
app_package_local="/opt/remoteFiles/cliqr_local_file/"+app_package_base_name+".zip"
fun_response = object.function_creation(function_name, run_time, handler, function_description, app_package_local,role_for_lambda)
if fun_response["FunctionArn"] is not None:
print_log("Lambda Function created.....")
else:
print_log("Lambda function creation failed.")
trigger_response=object.event_source_mapping(fun_response["FunctionArn"], dynamo_db_table_name)
trigger_uuid = trigger_response["UUID"]
result = {
"hostName": os.environ.get('appTierName', ""),
"environment": {
"trigger_uuid": trigger_uuid
}
}
    print(json.dumps(result))
def stop():
object = LambdaManagement()
trigger_uuid = os.environ.get("trigger_uuid", "")
object.delete_trigger_mapping(trigger_uuid)
object.delete_role_created(role_for_lambda)
object.delete_lambda_function(function_name)
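get_package_base_name above recovers the archive base name by splitting on '/' and then on '.'. The stdlib os.path equivalent is shown below as a sketch; note it behaves differently for names containing extra dots ('my.pkg.zip' yields 'my.pkg' here, while the split-based version returns 'my'):

```python
import os.path

def package_base_name(app_package):
    """Archive base name, e.g. 'dir/handler.zip' -> 'handler'."""
    return os.path.splitext(os.path.basename(app_package))[0]

print(package_base_name('cliqr_local_file/handler.zip'))  # -> handler
```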
| [
"suryamsc05@gmail.com"
] | suryamsc05@gmail.com |
81913a058df3768444ce74f18b1d6eb66f3e3893 | deca7b96384e8195d728ad1f32d72fc4cb662cef | /trace.py | 06ab35f5d860853ef1596db2320aa1bf1ac5ab07 | [] | no_license | kaelspencer/vmenu | c5ee19584b501927f8d1eb26e7963d1c15d4a428 | bab4120408469676c5cef1e2a4e6b6512e274ed1 | refs/heads/master | 2021-01-22T00:59:38.568147 | 2015-01-25T19:41:44 | 2015-01-25T19:41:44 | 18,473,974 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 327 | py | from time import time
import logging
def trace(f, *args, **kwargs):
return tracen(f.__name__, f, *args, **kwargs)
def tracen(name, f, *args, **kwargs):
logging.debug('%s start', name)
start = time()
ret = f(*args, **kwargs)
end = time() - start
logging.debug('%s end: %.3f', name, end)
return ret
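A quick usage sketch of the timing wrappers, with the helpers re-declared so the snippet runs standalone:

```python
import logging
from time import time

def tracen(name, f, *args, **kwargs):
    # Same shape as above: log start, run f, log elapsed seconds.
    logging.debug('%s start', name)
    start = time()
    ret = f(*args, **kwargs)
    logging.debug('%s end: %.3f', name, time() - start)
    return ret

def trace(f, *args, **kwargs):
    return tracen(f.__name__, f, *args, **kwargs)

logging.basicConfig(level=logging.DEBUG)
print(trace(sorted, [3, 1, 2]))  # logs timing, returns [1, 2, 3]
```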
| [
"kaelspencer@gmail.com"
] | kaelspencer@gmail.com |
ac087b05674afb1c7abb90905bdb696642f5ef53 | ec41617e83de6fdd484a1db282bf48bb391a6ea4 | /typeshed/aiostream/stream.pyi | 67c4d10451254a3a90b7bd058b89fdeb023651f0 | [
"BSD-3-Clause",
"BSD-2-Clause"
] | permissive | tsufeki/python-restclientaio | 4e06bf8fb78a604539d324949f903f66062a5e83 | 2af2ded9e22ba5552ace193691ed3a4b520cadf8 | refs/heads/master | 2021-01-01T20:12:53.593465 | 2017-08-07T18:51:35 | 2017-08-07T18:51:35 | 98,786,459 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 502 | pyi |
from typing import List, Iterable, Awaitable, AsyncIterable, \
AsyncContextManager, Union, TypeVar
_T = TypeVar('_T')
class Stream(AsyncIterable[_T], Awaitable[_T]):
def stream(self) -> 'Streamer': ...
def __getitem__(self, index: Union[int, slice]) -> Stream[_T]: ...
class Streamer(Stream[_T], AsyncContextManager['Streamer[_T]']):
pass
def iterate(source: Union[Iterable[_T], AsyncIterable[_T]]) -> Stream[_T]: ...
async def list(source: AsyncIterable[_T]) -> List[_T]: ...
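The stub above only declares types for aiostream's iterate/list pair; their runtime behaviour can be mimicked with a stdlib-only analogue (hypothetical helper names below, not the aiostream implementation):

```python
import asyncio
from typing import AsyncIterable, Iterable, List, TypeVar, Union

T = TypeVar('T')

async def _aiter_from(source):
    # Wrap a plain iterable so both kinds of source expose __aiter__.
    for item in source:
        yield item

def iterate(source: Union[Iterable[T], AsyncIterable[T]]) -> AsyncIterable[T]:
    return source if hasattr(source, '__aiter__') else _aiter_from(source)

async def alist(source: AsyncIterable[T]) -> List[T]:
    return [item async for item in source]

result = asyncio.run(alist(iterate(range(4))))
print(result)  # -> [0, 1, 2, 3]
```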
| [
"tsufeki@ymail.com"
] | tsufeki@ymail.com |
ef940fafaa11ac195895d0192558f881c09f9ac8 | e480ca9d627a364f5c9b64424fe637e77a59d717 | /text_based_game.py | f8fe74f9b389e6e65424c499508d3a83de17ff8c | [] | no_license | acbahr/Python-Practice-Projects | ecf9b60cc578274183e981f749e80149fc5fdcb8 | 43c277726dbb770e6598afc4f34e07a9f97473a2 | refs/heads/main | 2023-02-28T13:38:43.713737 | 2021-01-28T07:20:44 | 2021-01-28T07:20:44 | 333,669,672 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,661 | py | # This is a story with alternate endings and routes one can take. Each choice leads to a different outcome.
character_attributes = {'names': ['aaron', 'kandee'], 'locations': ['maui', 'rome', 'machu pichu']}
# travel = fly (don't trust cruises due to the rona)
# AARON AND KANDEE VACATION TEXT BASED GAME
print('\nOne day, ' + ' & '.join(character_attributes['names']).title() + ' decided to go on a vacation.')
print('They had saved for a while and have chosen 3 places they would like to visit:')
for location in character_attributes['locations']:
print('\t' + location.title())
travel = input('\n\tWill they be flying or cruising? \n<fly> or <cruise>: --> ')
if travel.lower() == 'fly':
print('They chose to fly because nobody wants to catch the "rona" and cruiseships are incubators for that s*&%.')
elif travel.lower() == 'cruise':
print('They chose to cruise because f**k the "rona" and fake news! Plus the ocean and pretty islands.')
else:
print('\t<Invalid input. Choose either fly or cruise.>')
# for ROME - 1st would be colosseum, 2nd catacombs, 3rd vatican city, sistine chapel, leo davinci's secret work room
# ...try to get into bottom of pottery archeology site
# for MACHU PICHU - 1st would pet a llama, 2nd explore ancient ruins, 3rd get into lake titikaka to see the seahorses
# ...(the seahorses are from the aliens, obvi)
def rome():
site1 = 'colosseum'
site2 = 'catacombs'
site3 = {'vatican city': ['sistine chapel', 'leonardo davinci secret workshop', 'pottery archeology site']}
def machu_pichu():
site1 = 'pet a llama'
site2 = 'explore ancient ruins'
site3 = {'lake titicaca': 'seahorses'}
| [
"noreply@github.com"
] | acbahr.noreply@github.com |
65b087c8cc7f23933fbaadf58ced035e04d11596 | 4f8900cb2474282ae355a952901df4bc9b95e81c | /mooring/migrations/0158_auto_20210323_1119.py | 2a70da15655e21cf1a2dbcf79c1fe0df2d426cc8 | [
"Apache-2.0"
] | permissive | dbca-wa/moorings | cb6268c2b7c568b0b34ac007a42210fd767620f7 | 37d2942efcbdaad072f7a06ac876a40e0f69f702 | refs/heads/master | 2023-06-09T04:16:57.600541 | 2023-05-31T05:50:55 | 2023-05-31T05:50:55 | 209,494,941 | 0 | 6 | NOASSERTION | 2023-05-22T04:56:38 | 2019-09-19T07:53:30 | Python | UTF-8 | Python | false | false | 559 | py | # -*- coding: utf-8 -*-
# Generated by Django 1.11.29 on 2021-03-23 03:19
from __future__ import unicode_literals
import django.core.files.storage
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('mooring', '0157_vessellicence'),
]
operations = [
migrations.AddField(
model_name='mooringarea',
name='mooring_specification',
field=models.SmallIntegerField(choices=[(1, 'Rental Mooring'), (2, 'Private Mooring')], default=1),
),
]
| [
"jason@digitalreach.com.au"
] | jason@digitalreach.com.au |
946e7d214f0b4cd9586a5b90853ff0bf3d07fa88 | 97df2c66247e02082a68194bf1e5fb3d01c96468 | /daemon/stores/containers.py | 4c2aab53fa6e73d5496047af7451d044e881087e | [
"Apache-2.0"
] | permissive | VenusTokyo/jina | fecd8ecb35928158f436ada90e773c4eccbf7b64 | 4265163fafe499f80dc52be4a437087bf3c1799f | refs/heads/master | 2023-07-31T13:18:51.727331 | 2021-10-02T12:22:59 | 2021-10-02T12:22:59 | 407,812,228 | 1 | 1 | Apache-2.0 | 2021-10-02T12:23:00 | 2021-09-18T09:10:57 | null | UTF-8 | Python | false | false | 12,102 | py | import os
import sys
import asyncio
from copy import deepcopy
from platform import uname
from http import HTTPStatus
from typing import Dict, TYPE_CHECKING, Union
import aiohttp
from jina import __docker_host__
from jina.helper import colored, random_port
from jina.enums import RemoteWorkspaceState
from .base import BaseStore
from ..dockerize import Dockerizer
from ..excepts import (
PartialDaemon400Exception,
PartialDaemonConnectionException,
)
from ..helper import if_alive, id_cleaner, error_msg_from
from ..models import DaemonID
from ..models.ports import PortMappings
from ..models.enums import UpdateOperation, IDLiterals
from ..models.containers import (
ContainerArguments,
ContainerItem,
ContainerMetadata,
ContainerStoreStatus,
)
if TYPE_CHECKING:
from pydantic import BaseModel
class ContainerStore(BaseStore):
"""A Store of Containers spawned by daemon"""
_kind = 'container'
_status_model = ContainerStoreStatus
async def _add(self, uri, *args, **kwargs):
"""Implements jina object creation in `partial-daemon`
.. #noqa: DAR101"""
raise NotImplementedError
@if_alive
async def _update(self, uri: str, params: Dict, **kwargs) -> Dict:
"""Sends `PUT` request to `partial-daemon` to execute a command on a Flow.
:param uri: uri of partial-daemon
:param params: json payload to be sent
:param kwargs: keyword args
:raises PartialDaemon400Exception: if update fails
:return: response from partial-daemon
"""
self._logger.debug(
f'sending PUT request to partial-daemon on {uri}/{self._kind}'
)
async with aiohttp.request(
method='PUT', url=f'{uri}/{self._kind}', params=params
) as response:
response_json = await response.json()
if response.status != HTTPStatus.OK:
raise PartialDaemon400Exception(error_msg_from(response_json))
return response_json
async def _delete(self, uri, *args, **kwargs):
"""Implements jina object termination in `partial-daemon`
.. #noqa: DAR101"""
raise NotImplementedError
async def ready(self, uri) -> bool:
"""Check if the container with partial-daemon is alive
:param uri: uri of partial-daemon
:return: True if partial-daemon is ready"""
async with aiohttp.ClientSession() as session:
for _ in range(20):
try:
async with session.get(uri) as response:
if response.status == HTTPStatus.OK:
self._logger.debug(
f'connected to {uri} to create a {self._kind.title()}'
)
return True
except aiohttp.ClientConnectionError as e:
await asyncio.sleep(0.5)
continue
except Exception as e:
self._logger.error(
f'error while checking if partial-daemon is ready: {e}'
)
self._logger.error(
f'couldn\'t reach {self._kind.title()} container at {uri} after 10secs'
)
return False
def _uri(self, port: int) -> str:
"""Returns uri of partial-daemon.
NOTE: JinaD (running inside a container) needs to access other containers via dockerhost.
Mac/WSL: this would work as is, as dockerhost is accessible.
Linux: this would only work if we start jinad passing extra_hosts.
NOTE: Checks if we actually are in docker (needed for unit tests). If not docker, use localhost.
:param port: mini jinad port
:return: uri for partial-daemon
"""
if (
sys.platform == 'linux'
and 'microsoft' not in uname().release
and not os.path.exists('/.dockerenv')
):
return f'http://localhost:{port}'
else:
return f'http://{__docker_host__}:{port}'
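The platform branch in _uri can be isolated as a parameterised sketch; inputs are passed explicitly instead of read from sys.platform, uname() and /.dockerenv, and __docker_host__ is assumed to be 'host.docker.internal' as in Jina:

```python
def select_host(platform: str, kernel_release: str, in_docker: bool,
                docker_host: str = 'host.docker.internal') -> str:
    """Mirror of the branch in _uri above: plain Linux outside a
    container can reach partial-daemon on localhost; Mac, WSL and
    in-container callers must go through the docker host."""
    if platform == 'linux' and 'microsoft' not in kernel_release and not in_docker:
        return 'localhost'
    return docker_host

print(select_host('linux', '5.10.0-generic', in_docker=False))  # -> localhost
print(select_host('darwin', '21.6.0', in_docker=False))         # -> host.docker.internal
```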
def _command(self, port: int, workspace_id: DaemonID) -> str:
"""Returns command for partial-daemon container to be appended to default entrypoint
NOTE: `command` is appended to already existing entrypoint, hence removed the prefix `jinad`
NOTE: Important to set `workspace_id` here as this gets set in jina objects in the container
:param port: partial-daemon port
:param workspace_id: workspace id
:return: command for partial-daemon container
"""
return f'--port {port} --mode {self._kind} --workspace-id {workspace_id.jid}'
@BaseStore.dump
async def add(
self,
id: DaemonID,
workspace_id: DaemonID,
params: 'BaseModel',
ports: Union[Dict, PortMappings],
envs: Dict[str, str] = {},
**kwargs,
) -> DaemonID:
"""Add a container to the store
:param id: id of the container
:param workspace_id: workspace id where the container lives
:param params: pydantic model representing the args for the container
:param ports: ports to be mapped to local
:param envs: dict of env vars to be passed
:param kwargs: keyword args
:raises KeyError: if workspace_id doesn't exist in the store or not ACTIVE
:raises PartialDaemonConnectionException: if jinad cannot connect to partial
:return: id of the container
"""
try:
from . import workspace_store
if workspace_id not in workspace_store:
raise KeyError(f'{workspace_id} not found in workspace store')
elif workspace_store[workspace_id].state != RemoteWorkspaceState.ACTIVE:
raise KeyError(
f'{workspace_id} is not ACTIVE yet. Please retry once it becomes ACTIVE'
)
partiald_port = random_port()
dockerports = (
ports.docker_ports if isinstance(ports, PortMappings) else ports
)
dockerports.update({f'{partiald_port}/tcp': partiald_port})
uri = self._uri(partiald_port)
command = self._command(partiald_port, workspace_id)
params = params.dict(exclude={'log_config'})
self._logger.debug(
'creating container with following arguments \n'
+ '\n'.join(
[
'{:15s} -> {:15s}'.format('id', id),
'{:15s} -> {:15s}'.format('workspace', workspace_id),
'{:15s} -> {:15s}'.format('dockerports', str(dockerports)),
'{:15s} -> {:15s}'.format('command', command),
]
)
)
container, network, dockerports = Dockerizer.run(
workspace_id=workspace_id,
container_id=id,
command=command,
ports=dockerports,
envs=envs,
)
if not await self.ready(uri):
raise PartialDaemonConnectionException(
f'{id.type.title()} creation failed, couldn\'t reach the container at {uri} after 10secs'
)
kwargs.update(
{'ports': ports.dict()} if isinstance(ports, PortMappings) else {}
)
object = await self._add(uri=uri, params=params, **kwargs)
except Exception as e:
self._logger.error(f'{self._kind} creation failed as {e}')
container_logs = Dockerizer.logs(container.id)
if container_logs and isinstance(
e, (PartialDaemon400Exception, PartialDaemonConnectionException)
):
self._logger.debug(
f'error logs from partial daemon: \n {container_logs}'
)
if e.message and isinstance(e.message, list):
e.message += container_logs.split('\n')
elif e.message and isinstance(e.message, str):
e.message += container_logs
if id in Dockerizer.containers:
self._logger.info(f'removing container {id_cleaner(container.id)}')
Dockerizer.rm_container(container.id)
raise
else:
self[id] = ContainerItem(
metadata=ContainerMetadata(
container_id=id_cleaner(container.id),
container_name=container.name,
image_id=id_cleaner(container.image.id),
network=network,
ports=dockerports,
uri=uri,
),
arguments=ContainerArguments(
command=f'jinad {command}',
object=object,
),
workspace_id=workspace_id,
)
self._logger.success(
f'{colored(id, "green")} is added to workspace {colored(workspace_id, "green")}'
)
workspace_store[workspace_id].metadata.managed_objects.add(id)
return id
@BaseStore.dump
async def update(
self,
id: DaemonID,
kind: UpdateOperation,
dump_path: str,
pod_name: str,
shards: int = None,
**kwargs,
) -> DaemonID:
"""Update the container in the store
:param id: id of the container
:param kind: type of update command to execute (only rolling_update for now)
:param dump_path: the path to which to dump on disk
:param pod_name: pod to target with the dump request
:param shards: nr of shards to dump
:param kwargs: keyword args
:raises KeyError: if id doesn't exist in the store
:return: id of the container
"""
if id not in self:
raise KeyError(f'{colored(id, "red")} not found in store.')
if id.jtype == IDLiterals.JFLOW:
params = {
'kind': kind.value,
'dump_path': dump_path,
'pod_name': pod_name,
}
params.update({'shards': shards} if shards else {})
elif id.jtype == IDLiterals.JPOD:
params = {'kind': kind.value, 'dump_path': dump_path}
else:
self._logger.error(f'update not supported for {id.type} {id}')
return id
uri = self[id].metadata.uri
try:
object = await self._update(uri, params)
except Exception as e:
self._logger.error(f'Error while updating the {self._kind.title()}: \n{e}')
raise
else:
self[id].arguments.object = object
self._logger.success(f'{colored(id, "green")} is updated successfully')
return id
@BaseStore.dump
async def delete(self, id: DaemonID, **kwargs) -> None:
"""Delete a container from the store
:param id: id of the container
:param kwargs: keyword args
:raises KeyError: if id doesn't exist in the store
"""
if id not in self:
raise KeyError(f'{colored(id, "red")} not found in store.')
uri = self[id].metadata.uri
try:
await self._delete(uri=uri)
except Exception as e:
            self._logger.error(f'Error while deleting the {self._kind.title()}: \n{e}')
raise
else:
workspace_id = self[id].workspace_id
del self[id]
from . import workspace_store
Dockerizer.rm_container(id)
workspace_store[workspace_id].metadata.managed_objects.remove(id)
self._logger.success(f'{colored(id, "green")} is released from the store.')
async def clear(self, **kwargs) -> None:
"""Delete all the objects in the store
        :param kwargs: keyword args
"""
_status = deepcopy(self.status)
for k in _status.items.keys():
await self.delete(id=k, workspace=True, **kwargs)
| [
"noreply@github.com"
] | VenusTokyo.noreply@github.com |
dd2183a5edf45184db4142f6a18c37dda7297669 | f66ca310db65faedb9c097d5b5fa14a9ecddf8f5 | /climate/classes/MonthlyReport.py | 4dae73f5d9d9adb6c0512110f450b51106ab4bb6 | [] | no_license | zivatar/thesis | 6fdae954007c427da84dce48771030dfd0b80418 | 19cfe2e56ff21644d97cc9da403614b48de854d9 | refs/heads/master | 2022-12-13T05:00:02.584101 | 2020-03-01T13:55:23 | 2020-03-01T13:55:23 | 83,134,676 | 0 | 0 | null | 2022-12-07T23:45:41 | 2017-02-25T13:48:51 | Python | UTF-8 | Python | false | false | 8,183 | py | import simplejson as json
from climate.classes.Report import Report
from climate.classes.Climate import Climate
from climate.classes.Month import Month
class MonthlyReport(Report):
"""
| Monthly report
| managed: False
"""
class Meta:
managed = False
def collect_daily_temperature(self):
"""
| Collect daily average, minimum and maximum temperature values
:return: minimum temperature list, average temperature list, maximum temperature list
"""
Tavg = []
Tmin = []
Tmax = []
for i in self.days:
hasData = False
for j in self.dayObjs:
if j.day == i:
hasData = True
Tavg.append(j.tempAvg)
Tmin.append(j.tempMin)
Tmax.append(j.tempMax)
if not hasData:
Tmin.append(None)
Tmax.append(None)
Tavg.append(None)
return Tmin, Tavg, Tmax
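    # The nested day-matching scan above (and the same pattern in the collectors
    # below) can be expressed as one dict lookup per calendar day; a minimal
    # sketch using hypothetical dict records in place of the day objects:

```python
def collect_daily(days, day_objs, field):
    # Index the records by day once, then emit the field (or None) per day.
    by_day = {obj["day"]: obj for obj in day_objs}
    return [by_day[d][field] if d in by_day else None for d in days]

records = [{"day": 1, "tempAvg": 10.5}, {"day": 3, "tempAvg": 12.0}]
```

    # This keeps the missing-day None padding while replacing the
    # O(days x records) scan with O(days + records).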
def collect_daily_precipitation(self):
"""
| Collect daily precipitation values
:return: precipitation list
"""
prec = []
for i in self.days:
hasData = False
for j in self.dayObjs:
if j.day == i:
hasData = True
prec.append(j.precipitation)
if not hasData:
prec.append(None)
return prec
def collect_snow_depth(self):
"""
| Collect every morning snow depth data
:return: snow depth list
"""
s = []
for i in self.days:
hasData = False
for j in self.manualDayObjs:
if j.day == i:
hasData = True
s.append(j.snowDepth)
if not hasData:
s.append(None)
return s
def generate_temp_distribution(self):
"""
| Generate temperature distribution
:return: temperature distribution list
"""
dist = []
for l in range(len(Climate.TEMP_DISTRIBUTION_LIMITS)):
sublist = []
for i in self.days:
hasData = False
for j in self.dayObjs:
if j.day == i:
hasData = True
dailyData = j.tempDistribution
if dailyData != None and dailyData != "":
sublist.append(int(float(dailyData.split(',')[l])))
if not hasData:
sublist.append(None)
dist.append(sublist)
return dist
def generate_rh_distribution(self):
"""
| Generate relative humidity distribution
:return: relative humidity distribution list
"""
dist = []
for l in range(len(Climate.RH_DISTRIBUTION_LIMITS)):
sublist = []
for i in self.days:
hasData = False
for j in self.dayObjs:
if j.day == i:
hasData = True
dailyData = j.rhDistribution
if dailyData != None and dailyData != "":
sublist.append(int(float(dailyData.split(',')[l])))
if not hasData:
sublist.append(None)
dist.append(sublist)
return dist
def generate_wind_distribution(self):
"""
| Generate wind direction distribution
:return: wind direction distribution list
"""
dist = []
for l in range(len(Climate.WIND_DIRECTION_LIMITS)):
sublist = []
for i in self.days:
hasData = False
for j in self.dayObjs:
if j.day == i:
hasData = True
dailyData = j.windDistribution
if dailyData != None and dailyData != "":
sublist.append(int(float(dailyData.split(',')[l])))
if not hasData:
sublist.append(None)
dist.append(sublist)
return dist
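    # All three distribution builders above apply the same transform: split a
    # per-day comma-separated count string and transpose it into one list per
    # bin. The core of that transform on toy data (a sketch; here missing days
    # are padded with None, matching the hasData branch):

```python
def distribution_columns(daily_strings, n_bins):
    # daily_strings holds one "c0,c1,...,cn" string (or None) per day.
    dist = []
    for b in range(n_bins):
        col = []
        for s in daily_strings:
            col.append(int(float(s.split(',')[b])) if s else None)
        dist.append(col)
    return dist
```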
def calculate_climate_index_days(self):
"""
| Calculate climate index days
:return: {'frostDays': fd, 'winterDays': wd, 'coldDays': cd, 'warmNights': wn, 'summerDays': sd, 'warmDays': wwd, 'hotDays': hd}
"""
return ({
'frostDays': Climate.get_nr_frost_days(self.tempMins),
'winterDays': Climate.get_nr_winter_days(self.tempMaxs),
'coldDays': Climate.get_nr_cold_days(self.tempMins),
'warmNights': Climate.get_nr_warm_nights(self.tempMins),
            # summer/warm/hot days are conventionally defined on daily maxima
            'summerDays': Climate.get_nr_summer_days(self.tempMaxs),
            'warmDays': Climate.get_nr_warm_days(self.tempMaxs),
            'hotDays': Climate.get_nr_hot_days(self.tempMaxs)
})
def calculate_number_of_available_data(self):
"""
| Calculate number of available data
:return: {"temp": temp, "tempDist": tempDist, "rhDist": rhDist, "prec": prec, "windDist": windDist, "sign": sign, "snowDepth": snowDepth}
"""
temp = Climate.number(self.tempMins) > 0 and Climate.number(self.tempMaxs) > 0
tempDist = Climate.number2(self.generate_temp_distribution()) > 0
rhDist = Climate.number2(self.generate_rh_distribution()) > 0
prec = Climate.number(self.collect_daily_precipitation()) > 0 and Climate.sum(
self.collect_daily_precipitation()) > 0
windDist = Climate.number2(self.generate_wind_distribution(), True) > 0
sign = Climate.sum([self.monthObjs[0].significants.get(i) for i in self.monthObjs[0].significants]) > 0
snowDepth = Climate.number(self.collect_snow_depth()) > 0 and Climate.sum(self.collect_snow_depth()) > 0
return {
"temp": temp,
"tempDist": tempDist,
"rhDist": rhDist,
"prec": prec,
"windDist": windDist,
"sign": sign,
"snowDepth": snowDepth,
}
def get_comments(self):
"""
Get comments of manual raw data
:return: list of { "day": d, "comment": c }
"""
s = []
for d in self.manualDayObjs:
if d.comment and d.comment != "":
s.append({
"day": d.day,
"comment": d.comment
})
return s
def __init__(self, siteId, year, month, monthObjs, yearObj, dayObjs, manualDayObjs):
self.siteId = siteId
self.year = year
self.month = month
self.monthObjs = monthObjs
self.yearObj = yearObj
self.dayObjs = dayObjs
self.manualDayObjs = manualDayObjs
self.monthObj = Month(year=self.year, month=self.month)
self.days = Month(year=self.year, month=self.month).get_days_of_month()
self.tempMins, self.tempAvgs, self.tempMaxs = self.collect_daily_temperature()
self.indices = self.calculate_climate_index_days()
self.tempDist = json.dumps(self.generate_temp_distribution())
self.rhDist = json.dumps(self.generate_rh_distribution())
self.prec = json.dumps(self.collect_daily_precipitation())
self.precDist = Climate.get_precipitation_over_limits(self.collect_daily_precipitation())
self.windDist = json.dumps(self.generate_wind_distribution())
self.significants = json.dumps(monthObjs[0].significants)
self.precipitation = Climate.sum(self.collect_daily_precipitation())
self.tmin = Climate.avg(self.tempMins)
self.tmax = Climate.avg(self.tempMaxs)
self.tavg = Climate.avg2(self.tempMins, self.tempMaxs)
self.dataAvailable = self.calculate_number_of_available_data()
self.tempMins = json.dumps(self.tempMins)
self.tempAvgs = json.dumps(self.tempAvgs)
self.tempMaxs = json.dumps(self.tempMaxs)
self.snowDepths = json.dumps(self.collect_snow_depth())
self.snowDays = self.get_nr_of_snow_days()
self.comments = self.get_comments()
| [
"macgyver1024@gmail.com"
] | macgyver1024@gmail.com |
83acc424ef522d4a277c229a5bf815d13d2ec91d | 3fb13cab1fafe157d323d6e9d6598e0d86fe9191 | /16HaarCascade.py | bf683fb21ab227503c8cd5b03612ebe924e38f5f | [] | no_license | swallowsyulika/OpenCV_practice | 5800016b2b21b2d58d718206d37ea50ac55ef4c5 | f450658ef9e69526e5a398c08b2fc05e0fdc519c | refs/heads/main | 2023-08-11T07:14:07.678694 | 2021-10-12T10:02:33 | 2021-10-12T10:02:33 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,404 | py | import cv2
import numpy as np
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
mouth_cascade = cv2.CascadeClassifier('haarcascade_smile.xml')
cap = cv2.VideoCapture(0)
num = 0
while True:
ret, frame = cap.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, 1.3, 5) # detectMultiScale(img, scaleFactor, minNeighbors)
for (fx, fy, fw, fh) in faces:
cv2.rectangle(frame, (fx, fy), (fx+fw, fy+fh), (255, 0, 0), 2)
cv2.putText(frame, "face", (fx, fy), cv2.FONT_HERSHEY_SIMPLEX, 1, (200, 255, 255), 2, cv2.LINE_AA)
roi_gray = gray[fy:fy+fh, fx:fx+fw]
roi_color = frame[fy:fy+fh, fx:fx+fw]
mouth = mouth_cascade.detectMultiScale(roi_gray)
max_size = 0
mx, my, mw, mh = 0, 0, 0, 0
        for (ex, ey, ew, eh) in mouth:
            if ey >= fh / 2:  # ey is relative to the face ROI, so compare against half the face height
if (ex+ew)*(ey+eh) > max_size:
max_size = (ex+ew)*(ey+eh)
mx, my, mw, mh = ex, ey, ew, eh
cv2.rectangle(roi_color, (mx, my), (mx+mw, my+mh), (0, 255, 0), 2)
cv2.putText(roi_color, "mouth", (mx, my), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (200, 255, 255), 1, cv2.LINE_AA)
print(f"frame {num} done!")
num += 1
cv2.imshow('img', frame)
if cv2.waitKey(1) & 0xFF == 27:
break
cap.release()
cv2.destroyAllWindows()
| [
"nooseeleaf@gmail.com"
] | nooseeleaf@gmail.com |
4c2e7947c80e872883c19b38280b6b3b8dfdd207 | 2f69a8bf8bf8e097525ed2aedf9bd0f0dad32e3f | /newfile/roblox-claimable-group-finder-main/lib/threads.py | 487261ace6a446703ddf5194a988c13e7d9afff1 | [
"MIT"
] | permissive | hetrom111/ok | 17ba3d788938cb2594269739582812db12d221f4 | fc1c5097fdeb4382b6690fa82bb0e11db0eea651 | refs/heads/main | 2023-09-05T15:57:43.520147 | 2021-11-09T12:38:36 | 2021-11-09T12:38:36 | 426,201,398 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 5,935 | py | from .constants import GROUP_API, GROUP_API_ADDR, BATCH_GROUP_REQUEST,\
SINGLE_GROUP_REQUEST
from .utils import parse_batch_response, make_http_socket, shutdown_socket,\
send_webhook, make_embed
from datetime import datetime, timezone
from time import time, sleep
from json import loads as json_loads
from zlib import decompress
def log_notifier(log_queue, webhook_url):
while True:
date, group_info = log_queue.get()
print(f"[{date.strftime('%H:%M:%S')}]",
f"roblox.com/groups/{group_info['id']:08d}",
"|", f"{str(group_info['memberCount']).rjust(2)} member(s)",
"|", group_info["name"])
if webhook_url:
try:
send_webhook(
webhook_url, embeds=(make_embed(group_info, date),))
except Exception as err:
print(f"Error while sending webhook: {err!r}")
def stat_updater(count_queue):
count_cache = {}
while True:
while True:
try:
for ts, count in count_queue.get(block=False):
ts = int(ts)
count_cache[ts] = count_cache.get(ts, 0) + count
except:
break
now = time()
total_count = 0
for ts, count in tuple(count_cache.items()):
if now - ts > 60:
count_cache.pop(ts)
continue
total_count += count
print(f"Speed: {total_count/1e6:.2f}m RPM", end="\r")
sleep(0.1)
def group_scanner(log_queue, count_queue, proxy_iter, timeout,
gid_ranges, gid_cutoff, gid_chunk_size):
gid_tracked = set()
gid_list = [
str(gid).encode()
for gid_range in gid_ranges
for gid in range(*gid_range)
]
gid_list_len = len(gid_list)
gid_list_idx = 0
if gid_cutoff:
gid_cutoff = str(gid_cutoff).encode()
while gid_list_len >= gid_chunk_size:
proxy_auth, proxy_addr = next(proxy_iter) if proxy_iter else (None, None)
try:
sock = make_http_socket(
GROUP_API_ADDR,
timeout,
proxy_addr,
proxy_headers={"Proxy-Authorization": proxy_auth} if proxy_auth else {},
hostname=GROUP_API)
except:
continue
while True:
gid_chunk = [
gid_list[(gid_list_idx + n) % gid_list_len]
for n in range(1, gid_chunk_size + 1)
]
gid_list_idx += gid_chunk_size
try:
# Request batch group details.
sock.sendall(BATCH_GROUP_REQUEST % b",".join(gid_chunk))
resp = sock.recv(1048576)
if not resp.startswith(b"HTTP/1.1 200 OK"):
break
resp = resp.partition(b"\r\n\r\n")[2]
while resp[-1] != 0:
resp += sock.recv(1048576)
owner_status = parse_batch_response(decompress(resp, -15), gid_chunk_size)
for gid in gid_chunk:
if gid not in owner_status:
# Group is missing from the batch response.
if not gid_cutoff or gid_cutoff > gid:
# Group is outside of cut-off range.
# Assume it doesn't exist and ignore it in the future.
gid_list.remove(gid)
gid_list_len -= 1
continue
if gid not in gid_tracked:
if owner_status[gid]:
# Group has an owner and this is the first time it's been checked.
# Mark it as tracked.
gid_tracked.add(gid)
else:
                            # Group doesn't have an owner, and this is the first time it's been checked.
# Assume that it's locked or manual-approval only, and ignore it in the future.
gid_list.remove(gid)
gid_list_len -= 1
continue
if owner_status[gid]:
# Group has an owner and it's been checked previously.
# Skip to next group in the batch.
continue
# Group is marked as tracked and doesn't have an owner.
# Request extra details and determine if it's claimable.
sock.sendall(SINGLE_GROUP_REQUEST % gid)
resp = sock.recv(1048576)
if not resp.startswith(b"HTTP/1.1 200 OK"):
break
group_info = json_loads(resp.partition(b"\r\n\r\n")[2])
if (
not group_info["publicEntryAllowed"]
or group_info["owner"]
or "isLocked" in group_info
):
# Group cannot be claimed, ignore it in the future.
gid_list.remove(gid)
gid_list_len -= 1
continue
# Send group info back to main process.
log_queue.put((datetime.now(timezone.utc), group_info))
# Ignore group in the future.
gid_list.remove(gid)
gid_list_len -= 1
# Let the counter know gid_chunk_size groups were checked.
count_queue.put((time(), gid_chunk_size))
except KeyboardInterrupt:
exit()
except Exception as err:
break
shutdown_socket(sock)
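# `group_scanner` above walks `gid_list` round-robin: each batch takes the next
# `gid_chunk_size` entries modulo the list length, so scanning wraps back to the
# start and a shrinking list simply tightens the cycle. The indexing scheme in
# isolation (a minimal sketch):

```python
def next_chunk(items, start_idx, chunk_size):
    # Take chunk_size items starting after start_idx, wrapping around.
    n = len(items)
    chunk = [items[(start_idx + k) % n] for k in range(1, chunk_size + 1)]
    return chunk, start_idx + chunk_size
```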
| [
"noreply@github.com"
] | hetrom111.noreply@github.com |
2c0f82e91b7900c543eed69d0120e0797d1662ca | b171ee654417e7a953837dda965b2f6a420d715d | /mytestproject/manage.py | 6bee2d9f6a4bd23f63cf81d168f663d4edff60f2 | [] | no_license | Qurbonov-AA/DjangoRestFramework | 45a464f31cc31e140351f0df9473f7d74d877dd5 | d9f77214ca184e89161ab71376c452163c27318a | refs/heads/master | 2023-05-14T01:17:58.841584 | 2021-06-13T12:35:39 | 2021-06-13T12:35:39 | 376,248,849 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 669 | py | #!/usr/bin/env python
"""Django's command-line utility for administrative tasks."""
import os
import sys
def main():
"""Run administrative tasks."""
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'mytestproject.settings')
try:
from django.core.management import execute_from_command_line
except ImportError as exc:
raise ImportError(
"Couldn't import Django. Are you sure it's installed and "
"available on your PYTHONPATH environment variable? Did you "
"forget to activate a virtual environment?"
) from exc
execute_from_command_line(sys.argv)
if __name__ == '__main__':
main()
| [
"akmal.q@jafton.com"
] | akmal.q@jafton.com |
576a6b4f84ba33b136094930fd0af3facaf312c4 | daaf640a700260b48ea9b17cbd6374ae1ee10de1 | /my_django15_project/my_django15_project/wsgi.py | 1d3fc214a5bb0928aabf7e01d056a017c7816f32 | [] | no_license | newton2304/my-first-blog | b0a93896bd5dd271963a8e05332187b71ef00fb8 | 4c1ae43d422f3cf037c8972577a644af44a89786 | refs/heads/master | 2021-01-10T07:09:43.615525 | 2015-05-29T17:41:11 | 2015-05-29T17:41:11 | 36,519,985 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 413 | py | """
WSGI config for my_django15_project project.
It exposes the WSGI callable as a module-level variable named ``application``.
For more information on this file, see
https://docs.djangoproject.com/en/1.7/howto/deployment/wsgi/
"""
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "my_django15_project.settings")
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
| [
"newton2304@gmail.com"
] | newton2304@gmail.com |
df8e74cca7548aa19d7339e3fcc38319a3c182b6 | bc0d9f8089fddd2a7a2c17c08c79af1e644dd591 | /src/bbru/__init__.py | c3806e309e148e94d5c9e5da876afffe9b063f66 | [] | no_license | ilshad/bbru | d5f2463b4aa60907ac3acfc373916b6f0f0e0783 | c56a6e7def8a247928a09f95183f2da4032bcc5a | refs/heads/master | 2016-09-03T07:40:24.989323 | 2010-10-14T22:11:01 | 2010-10-14T22:11:01 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 227 | py | # coding: utf-8
# This code was developed for http://bluebream.ru by its community and
# placed under Public Domain.
# для удобства импортируем из корня
from bbru.localsite.interfaces import ISite
| [
"astoon.net@gmail.com"
] | astoon.net@gmail.com |
f8914046922f01b8b6ee22db455ede2d79514d26 | 72849fa61d1966dddde110943d1ee80761eccd89 | /src/OldStateEstimator/2020.09.08_auvStateEstimator/test.py | d32874e40b40e6d95dfb8b3eb3847b5a79bdd25e | [] | no_license | mfkiwl/AuvEstimator | c474d5d675a0cf7d819c5e3cad8b6ea37399d746 | 471754d81219bd6d8da283fcfded1d334eae5412 | refs/heads/main | 2023-04-15T06:06:15.368061 | 2021-04-30T00:14:56 | 2021-04-30T00:14:56 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 865 | py | import numpy as np
import matplotlib.pyplot as plt
from math import sqrt, cos, sin, atan2
def rbt2map(xrf,yrf,xr0,yr0,psi0,xm0,ym0):
# Converts pose in the robot frame to pose in the map frame
# Calculate translations and rotations in robot frame
Txr = xrf-xr0
Tyr = yrf-yr0
# Calculate intermediate length and angle
li = sqrt(Txr**2+Tyr**2)
psii = atan2(yrf-yr0, xrf-xr0) # atan or atan2
# Calculate translations and rotations in map frame
Txm = cos(psii+psi0)*li
Tym = sin(psii+psi0)*li
# Calculate individual components in the map frame
xmf = xm0+Txm
ymf = ym0+Tym
print('Txr', Txr)
print('Tyr', Tyr)
print('li', li)
print('psii', psii)
print('Txm', Txm)
print('Tym', Tym)
return xmf, ymf
[xmf, ymf] = rbt2map(
1,
1,
0,
0,
90,
0,
0
)
print(1e1)
| [
"awong3@andrew.cmu.edu"
] | awong3@andrew.cmu.edu |
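# `rbt2map` above is the standard 2-D rigid transform: the polar form
# (length `li`, angle `psii + psi0`) is algebraically the robot-frame
# translation rotated by the initial heading `psi0` (which the trig functions
# read in radians, not degrees). A cross-check against the explicit rotation
# matrix (a sketch, not part of the original module):

```python
import numpy as np
from math import cos, sin

def rbt2map_matrix(xrf, yrf, xr0, yr0, psi0, xm0, ym0):
    # Rotate the robot-frame translation by psi0, then add the map origin.
    R = np.array([[cos(psi0), -sin(psi0)],
                  [sin(psi0),  cos(psi0)]])
    t = R @ np.array([xrf - xr0, yrf - yr0])
    return xm0 + t[0], ym0 + t[1]
```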
2c468df5c843d206e98f51df943af1f388985ab2 | 55d097123f4695bd3020c32f3454fda3a3b7f386 | /code/heart_sounds/feature_enhance2.py | 6906d2fd335d3afae8bfa792a1cc386703dd5745 | [] | no_license | wglnngt/heart-sound | 54e52866d5a22c8014ecbf9b350494b25722a6c4 | 43a7c4c571841851b1443970f91c3c66afeffe36 | refs/heads/master | 2022-09-09T22:38:42.981758 | 2020-05-28T11:06:38 | 2020-05-28T11:06:38 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 20,263 | py | import numpy as np
import csv
import os
import wave
import librosa
import math
import pandas as pd
from scipy.stats import skew, kurtosis
from scipy.fftpack import dct
from utils import undersampling
from stacking_method import model_training_stack
import signal
from python_speech_features import mfcc
import pywt
"""
NFFT = 300
NFFT1 = 300
n_maj = 0.25
n_min = 1.0
epochs = 1
len_frame = 300
frame_mov = 90
"""
def normalize_option(row_data,op):
if op == 1:
data_max = np.max(row_data)
data_min = np.min(row_data)
op_data = (row_data - data_min) / (data_max - data_min)
elif op == 2:
data_std = np.std(row_data)
data_mean = np.mean(row_data)
op_data = (row_data - data_mean) / data_std
elif op == 3:
op_data = []
row_data = row_data.tolist()
for i in range(len(row_data)):
op_data.append(1.0/(1+np.exp(-float(row_data[i]))))
return op_data
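# The three branches above are min-max scaling (op 1), z-score standardization
# (op 2), and an element-wise logistic squash (op 3). A standalone demo of the
# same formulas (a sketch on a toy numpy array):

```python
import numpy as np

x = np.array([0.0, 5.0, 10.0])
minmax = (x - x.min()) / (x.max() - x.min())       # op 1: maps to [0, 1]
zscore = (x - x.mean()) / x.std()                  # op 2: zero mean, unit std
logistic = [1.0 / (1 + np.exp(-v)) for v in x]     # op 3: squashes into (0, 1)
```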
def data_normalized(data_feature,feature_name,op):
for i in range(len(feature_name)):
data_feature[feature_name[i]] = normalize_option(data_feature[feature_name[i]],op)
return data_feature
def time_enh(data_sequence):
frame_data = enframe(data_sequence, 256, 80,np.array([0.54 - 0.46 * np.cos(2 * np.pi * n / (NFFT - 1)) for n in range(NFFT)]))
    rms_list = []
    boxing = []     # waveform (shape) factor
    maichong = []   # impulse factor
    yudu = []       # margin (clearance) factor
    for w in range(frame_data.shape[0]):
        sqrt_sum = 0
        rms = math.sqrt(pow(np.mean(frame_data[w]), 2) + pow(frame_data[w].std(), 2))
        dif = frame_data[w][-1] - frame_data[w][0]
        for v in frame_data[w]:
            sqrt_sum += math.sqrt(abs(v))
        rms_list.append(rms)
        boxing.append(rms / (abs(frame_data[w]).mean()))
        maichong.append((max(frame_data[w])) / (abs(frame_data[w]).mean()))
        yudu.append((max(frame_data[w])) / (pow(sqrt_sum / dif, 2)))
    return [np.mean(rms_list), np.mean(boxing), np.mean(maichong), np.mean(yudu)]
def frequency_features(data_sequence,num_cof,len_frame,frame_mov):
# computes the power spectrum of the signal
hamming_distance = data_sequence*np.hamming(len(data_sequence))
fft = np.absolute(np.fft.rfft(hamming_distance, len_frame))
fft_trans = np.mean(fft)/np.max(fft)
power_spec = np.around(fft[:len_frame//2], decimals=4)
p_spec = ((1.0 / len_frame) * ((fft) ** 2))
# computes the mel frequency cepstral coefficient of the sound signal
mel_coeff = mel_coefficients(1000, 40, p_spec,13,len_frame)
medain_power = np.median(power_spec)
return mel_coeff,medain_power,fft_trans
def cal_skew(data_sequence,len_frame,frame_mov):
frame_data = enframe(data_sequence, len_frame, frame_mov,np.array([0.54 - 0.46 * np.cos(2 * np.pi * n / (NFFT - 1)) for n in range(NFFT)]))
skew_list = []
for w in range(frame_data.shape[0]):
skew_list.append(skew(frame_data[w]))
return np.mean(skew_list)
def cal_kurtosis(data_sequence,len_frame,frame_mov):
frame_data = enframe(data_sequence, len_frame, frame_mov,np.array([0.54 - 0.46 * np.cos(2 * np.pi * n / (NFFT - 1)) for n in range(NFFT)]))
kurtosis_list = []
for w in range(frame_data.shape[0]):
        kurtosis_list.append(kurtosis(frame_data[w]))
return np.mean(kurtosis_list)
def cal_energy(data_sequence,len_frame,frame_mov):
frame_data = enframe(data_sequence, len_frame, frame_mov,np.array([0.54 - 0.46 * np.cos(2 * np.pi * n / (NFFT - 1)) for n in range(NFFT)]))
sum_list = []
for w in range(frame_data.shape[0]):
sum = 0
for i in range(len(frame_data[w])):
sum += pow(frame_data[w][i],2)
sum_list.append(sum)
return np.mean(sum_list)
def zero_pass(data_sequence,len_frame,frame_mov):
frame_data = enframe(data_sequence, len_frame, frame_mov,np.array([0.54 - 0.46 * np.cos(2 * np.pi * n / (NFFT - 1)) for n in range(NFFT)]))
count_list = []
for w in range(frame_data.shape[0]):
count = 0
        for i in range(len(frame_data[w]) - 1):
if frame_data[w][i] * frame_data[w][i+1] < 0:
count+=1
count_list.append(count)
return np.mean(count_list)
def cal_mean(wav_in):
last_list = []
for i in range(13):
buf_list = []
for j in range(len(wav_in)):
buf_list.append(wav_in[j][i])
last_list.append(np.mean(buf_list))
return last_list
def enframe(wave_data, nw, inc, winfunc):
    wlen = len(wave_data)  # total signal length
    if wlen <= nw:  # if the signal is shorter than one frame, use a single frame
        nf = 1
    else:  # otherwise compute the total number of frames
        nf = int(np.ceil((1.0 * wlen - nw + inc) / inc))
    pad_length = int((nf - 1) * inc + nw)  # total flattened length of all frames laid end to end
    zeros = np.zeros((pad_length - wlen,))
    pad_signal = np.concatenate((wave_data, zeros))
    indices = np.tile(np.arange(0, nw), (nf, 1)) + np.tile(np.arange(0, nf * inc, inc), (nw, 1)).T
    indices = np.array(indices, dtype=np.int32)  # convert indices to an integer matrix
    frames = pad_signal[indices]  # framed signal
    win = np.tile(winfunc, (nf, 1))
    return frames * win
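# `enframe` zero-pads the tail so every frame is full, which fixes the frame
# count at nf = ceil((wlen - nw + inc) / inc). The same slicing restated
# compactly with broadcasting (a sketch with a rectangular window, i.e. no
# winfunc applied):

```python
import numpy as np

def frame_signal(x, nw, inc):
    wlen = len(x)
    nf = 1 if wlen <= nw else int(np.ceil((1.0 * wlen - nw + inc) / inc))
    pad = np.concatenate((x, np.zeros(int((nf - 1) * inc + nw) - wlen)))
    # Row r holds samples [r*inc, r*inc + nw).
    idx = np.tile(np.arange(nw), (nf, 1)) + np.arange(0, nf * inc, inc)[:, None]
    return pad[idx]
```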
def mel_coefficients(sample_rate, nfilt, pow_frames,num_cof,len_frame):
low_freq_mel = 0
num_mel_coeff = num_cof
high_freq_mel = (2595 * np.log10(1 + (sample_rate / 2.0) / 700.0)) # Convert Hz to Mel
mel_points = np.linspace(low_freq_mel, high_freq_mel, nfilt + 2) # Equally spaced in Mel scale
hz_points = (700 * (10**(mel_points / 2595) - 1)) # Convert Mel to Hz
bin = np.floor((len_frame + 1) * hz_points / sample_rate)
fbank = np.zeros((nfilt, int(np.floor(len_frame / 2 + 1))))
for m in range(1, nfilt + 1):
f_m_minus = int(bin[m - 1]) # left
f_m = int(bin[m]) # center
f_m_plus = int(bin[m + 1]) # right
for k in range(f_m_minus, f_m):
fbank[m - 1, k] = (k - bin[m - 1]) / (bin[m] - bin[m - 1])
for k in range(f_m, f_m_plus):
fbank[m - 1, k] = (bin[m + 1] - k) / (bin[m + 1] - bin[m])
filter_banks = np.dot(pow_frames, fbank.T)
filter_banks = np.where(filter_banks == 0, np.finfo(float).eps, filter_banks) # Numerical Stability
filter_banks = 20 * np.log10(filter_banks) # dB
mfcc = dct(filter_banks, type=2, axis=0, norm='ortho')[1:(num_mel_coeff+1)]
(ncoeff,) = mfcc.shape
cep_lifter = ncoeff
n = np.arange(ncoeff)
lift = 1 + (cep_lifter / 2) * np.sin(np.pi * n / cep_lifter)
mfcc *= lift
mfcc -= (np.mean(mfcc, axis=0) + 1e-8)
return mfcc
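# The filterbank above spaces its centers uniformly on the mel scale via
# mel = 2595 * log10(1 + f/700) and its inverse; those two conversions in
# isolation:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10 ** (m / 2595.0) - 1)
```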
def total_frequency(data_sequence,len_frame,frame_mov):
tol_mel = []
frame_data = enframe(data_sequence, len_frame, frame_mov,np.array([0.54 - 0.46 * np.cos(2 * np.pi * n / (NFFT - 1)) for n in range(NFFT)]))
for i in range(frame_data.shape[0]):
pro_seq = frame_data[i,:]
fft = np.absolute(np.fft.rfft(pro_seq, NFFT))
p_spec = ((1.0 / NFFT) * ((fft) ** 2))
mel_coeff = mel_coefficients(1000, 40, p_spec, 13,len_frame)
tol_mel.append(list(mel_coeff))
return cal_mean(tol_mel)
def diff_frequency(data_sequence,len_frame,frame_mov):
tol_mel = []
frame_data = enframe(data_sequence, len_frame, frame_mov,
np.array([0.54 - 0.46 * np.cos(2 * np.pi * n / (NFFT - 1)) for n in range(NFFT)]))
for i in range(frame_data.shape[0]):
pro_seq = frame_data[i, :]
fft = np.absolute(np.fft.rfft(pro_seq, NFFT))
p_spec = ((1.0 / NFFT) * ((fft) ** 2))
mel_coeff = mel_coefficients(1000, 40, p_spec, 13,len_frame)
tol_mel.append(list(mel_coeff))
return mel_dif(tol_mel)
def label_extraction(label_path): #'/Users/mac/Desktop/heart_science/wav_label.txt'
hs_label = txt_read(label_path)
y_label = []
for i in range(len(hs_label)):
y_label.append(hs_label[i].split('\n')[0])
return y_label
def txt_read(file_name):
read_list = []
with open(file_name) as f:
lines = f.readlines()
for line in lines:
read_list.append(line)
return read_list
def data_downsampling(file_path,save_path,sample_fs = 1000):
data,sr = librosa.load(file_path)
data_resample = librosa.resample(data,sr,sample_fs)
data_sample = list(data_resample)
with open(save_path,'w+') as f:
for i in range(len(data_sample)):
f.writelines(str(data_sample[i]))
f.writelines('\n')
def contents_gain(contents_path):
path_list = []
print(len(os.listdir(contents_path)))
for each_file in os.listdir(contents_path):
path_list.append(os.path.join(contents_path,each_file))
return path_list
def counter(input_list):  # find the indices where the heart-sound state changes
now_list = np.diff(input_list)
tran_index = []
for i in range(len(now_list)):
if now_list[i] != 0:
tran_index.append(i+1)
return tran_index
def element_div(list_1,list_2):  # element-wise division, used to compute s1/systole and s2/diastole
result_list = []
assert len(list_1) == len(list_2)
for i in range(len(list_1)):
result_list.append(np.round(list_1[i]/list_2[i],decimals=4))
return result_list
def del_zero_element(mfcc_list,num_mel,num_zero):  # strip the zero-padded part
data_length = len(mfcc_list[0])
for i in range(num_mel):
mfcc_list[i] = mfcc_list[i][data_length - num_zero :]
return mfcc_list
def different(list):
A = []
for i in range(2,len(list)-2):
A.append(np.abs(np.round((-2*list[i-2]-list[i-1]+list[i+1]+2*list[i+2]),decimals=0))/np.sqrt(10))
#A.append(np.round(list[i+1] - list[i],decimals=0))
return np.round(np.mean(A),decimals=0),A
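# `different` applies the 5-point weighted stencil (-2, -1, 0, 1, 2)/sqrt(10) --
# the regression form used for delta features -- so on a unit-slope ramp every
# raw output is sqrt(10) (before the abs/round applied above). The raw stencil
# on its own:

```python
import numpy as np

def five_point_delta(seq):
    w = np.array([-2, -1, 0, 1, 2]) / np.sqrt(10)
    return [float(np.dot(w, seq[i - 2:i + 3])) for i in range(2, len(seq) - 2)]
```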
def mel_dif(list_in):
dif_list1 = []
dif_list2 = []
buffer = []
for j in range(4):
number_list0 = []
for i in range(len(list_in)):
number_list0.append(list_in[i][j])
A,B = different(number_list0)
dif_list1.append(A)
buffer.append(B)
for j in range(4):
number_list1 = []
for i in range(len(buffer[0])):
number_list1.append(buffer[j][i])
C,_ = different(number_list1)
dif_list2.append(C)
return dif_list1,dif_list2
def exteraction_feature(hs_amps_path,num_feature,frame_len,frame_mov):
wav_list = txt_read('/home/deep/heart_science/wav_path.txt')
label_list = label_extraction('/home/deep/heart_science/wav_label.txt')
data_feature = np.zeros((len(wav_list), num_feature)).tolist()
contentes_amps = []
contentes_state = []
data_number = []
hs_amps = []
hs_state = []
if hs_amps_path is None:
for idx in range(len(wav_list)):
save_path = '/home/deep/heart_science/hs_amps/undersampling_' + str(idx + 1) + '.txt'
data_downsampling(str(wav_list[idx].split('\n')[0]), save_path, sample_fs=1000)
            print('finished downsampling file %d' % idx)
        print('downsampled data is ready')
    else:
        print('downsampled data is ready')
for i in range(1, 3240):
contentes_amps.append('/home/deep/heart_science/hs_amps/undersampling_' + str(i) + '.txt')
for i in range(1, 3240):
contentes_state.append('/home/deep/heart_science/hs_segment/wav_segment' + str(i) + '.txt')
for i in range(len(contentes_amps)):
data_number.append(i)
with open(contentes_amps[i], 'r+') as f:
hs_amps.append(f.read())
for i in range(len(contentes_state)):
with open(contentes_state[i], 'r+') as f:
hs_state.append(f.read())
for i,state,amp in zip(data_number,hs_state,hs_amps):
try:
indivadul_list = []
feature_label = label_list[i]
list1 = []
list2 = []
list3 = []
list4 = []
enhance_list1 = []
enhance_list2 = []
fft_trans_list1 = []
fft_trans_list2 = []
            print('processing record {0}'.format(i))
state = np.array(list(state.split(',')),dtype='float32')
amp = list(amp.split('\n'))
change_position = counter(state)
            buffer_list = np.zeros((4,math.ceil(len(change_position)/4))).tolist()  # holds the duration of each state per cycle
amp_list = np.zeros((4,math.ceil(len(change_position)/4))).tolist()
index = 0
for j in range(len(change_position) - 1):
now_state = int(state[change_position[j]] - 1)
now_length = int(change_position[j+1]-change_position[j])
now_amps = amp[change_position[j]:change_position[j+1]]
try:
buffer_list[now_state][index] = now_length
amp_list[now_state][index] = now_amps
                    if now_state == 0:  # collect amplitudes per state (kurtosis and skewness are computed later)
list1.extend(now_amps)
elif now_state == 1:
list2.extend(now_amps)
enhance_list1.extend(now_amps)
elif now_state == 2:
list3.extend(now_amps)
elif now_state == 3:
list4.extend(now_amps)
enhance_list2.extend(now_amps)
except IndexError:
break
if (j+1)%4 == 0:
index += 1
            # states 1..4 have been remapped to indices 0..3
indivadul_list.append(len(buffer_list[0]))
indivadul_list.append(len(buffer_list[1]))
indivadul_list.append(len(buffer_list[2]))
indivadul_list.append(len(buffer_list[3]))
num_cycle = min(indivadul_list) - 1
enhance_mel1,enhance_mel3 = diff_frequency(np.array(enhance_list1, dtype='float32'),frame_len,frame_mov)
enhance_mel2,enhance_mel4 = diff_frequency(np.array(enhance_list2, dtype='float32'),frame_len,frame_mov)
_,_,fft_trans1 = frequency_features(np.array(list2, dtype='float32'), 12, frame_len,
frame_mov)
_,_,fft_trans2 = frequency_features(np.array(list4, dtype='float32'), 12, frame_len,
frame_mov)
fft_trans_list1.append(fft_trans1)
fft_trans_list2.append(fft_trans2)
list1 = np.array(list1,dtype='float32')
list2 = np.array(list2,dtype='float32')
list3 = np.array(list3,dtype='float32')
list4 = np.array(list4,dtype='float32')
#enh_time1 = time_enh(list2)
#enh_time2 = time_enh(list4)
            _mel_s1_list = total_frequency(list1,frame_len,frame_mov)  # MFCC
            s1_zero = zero_pass(list1,frame_len,frame_mov)  # short-time zero-crossing rate
            s1_energy = cal_energy(list1,frame_len,frame_mov)  # short-time energy
            s1_skew = cal_skew(list1,frame_len,frame_mov)
            s1_kurtosis = cal_kurtosis(list1,frame_len,frame_mov)
            _mel_systole_list = total_frequency(list2,frame_len,frame_mov)  # MFCC
            systole_zero = zero_pass(list2,frame_len,frame_mov)  # short-time zero-crossing rate
            systole_energy = cal_energy(list2,frame_len,frame_mov)  # short-time energy
            systole_skew = cal_skew(list2,frame_len,frame_mov)
            systole_kurtosis = cal_kurtosis(list2,frame_len,frame_mov)
            _mel_s2_list = total_frequency(list3,frame_len,frame_mov)  # MFCC
            s2_zero = zero_pass(list3,frame_len,frame_mov)  # short-time zero-crossing rate
            s2_energy = cal_energy(list3,frame_len,frame_mov)  # short-time energy
            s2_skew = cal_skew(list3,frame_len,frame_mov)
            s2_kurtosis = cal_kurtosis(list3,frame_len,frame_mov)
            _mel_diastole_list = total_frequency(list4,frame_len,frame_mov)  # MFCC
            diastole_zero = zero_pass(list4,frame_len,frame_mov)  # short-time zero-crossing rate
            diastole_energy = cal_energy(list4,frame_len,frame_mov)  # short-time energy
            diastole_skew = cal_skew(list4,frame_len,frame_mov)
            diastole_kurtosis = cal_kurtosis(list4,frame_len,frame_mov)
feature_list = [np.mean(_mel_s1_list[0]),np.mean(_mel_s1_list[1]),np.mean(_mel_s1_list[2]),np.mean(_mel_s1_list[3]),
np.mean(_mel_s1_list[4]),np.mean(_mel_s1_list[5]),np.mean(_mel_s1_list[6]),np.mean(_mel_s1_list[7]),
np.mean(_mel_s1_list[8]),np.mean(_mel_s1_list[9]),np.mean(_mel_s1_list[10]),np.mean(_mel_s1_list[11]),
np.mean(_mel_systole_list[0]),np.mean(_mel_systole_list[1]),np.mean(_mel_systole_list[2]),np.mean(_mel_systole_list[3]),
np.mean(_mel_systole_list[4]),np.mean(_mel_systole_list[5]),np.mean(_mel_systole_list[6]),np.mean(_mel_systole_list[7]),
np.mean(_mel_systole_list[8]),np.mean(_mel_systole_list[9]),np.mean(_mel_systole_list[10]),np.mean(_mel_systole_list[11]),
np.mean(_mel_s2_list[0]),np.mean(_mel_s2_list[1]),np.mean(_mel_s2_list[2]),np.mean(_mel_s2_list[3]),
np.mean(_mel_s2_list[4]),np.mean(_mel_s2_list[5]),np.mean(_mel_s2_list[6]),np.mean(_mel_s2_list[7]),
np.mean(_mel_s2_list[8]),np.mean(_mel_s2_list[9]),np.mean(_mel_s2_list[10]),np.mean(_mel_s2_list[11]),
np.mean(_mel_diastole_list[0]),np.mean(_mel_diastole_list[1]),np.mean(_mel_diastole_list[2]),np.mean(_mel_diastole_list[3]),
np.mean(_mel_diastole_list[4]),np.mean(_mel_diastole_list[5]),np.mean(_mel_diastole_list[6]),np.mean(_mel_diastole_list[7]),
np.mean(_mel_diastole_list[8]),np.mean(_mel_diastole_list[9]),np.mean(_mel_diastole_list[10]),np.mean(_mel_diastole_list[11]),
np.mean(s1_energy),np.mean(s1_zero),np.mean(s1_kurtosis),np.mean(s1_skew),
np.mean(systole_energy),np.mean(systole_zero),np.mean(systole_kurtosis),np.mean(systole_skew),
np.mean(s2_energy), np.mean(s2_zero), np.mean(s2_kurtosis), np.mean(s2_skew),
np.mean(diastole_energy), np.mean(diastole_zero), np.mean(diastole_kurtosis), np.mean(diastole_skew)]
for idx in range(len(feature_list)):
data_feature[i][idx] = feature_list[idx]
#data_feature[i] = data_feature[i] + enhance_mel1 + enhance_mel2
#data_feature[i].append(_mel_systole_list[-1])
#data_feature[i].append(_mel_diastole_list[-1])
#data_feature[i].extend(enh_time1)
#data_feature[i].extend(enh_time2)
#data_feature[i].append(np.mean(fft_trans_list1))
#data_feature[i].append(np.mean(fft_trans_list2))
data_feature[i].append(feature_label)
except:
pass
return data_feature
if __name__ == "__main__":
n_maj = 0.25
n_min = 1.0
len_frame = 512
cof = 0.25
for i in range(7):
frame_mov = int(len_frame*cof)
NFFT = len_frame
extracted_feature = exteraction_feature('/home/deep/heart_science/hs_amps.txt',64,len_frame,frame_mov)
df_feat_ext = pd.DataFrame(extracted_feature)
#out_file = '/home/deep/heart_science/frame_wav/' + str(cof) + '.csv'
data_path = '/home/deep/heart_science/data_feature0.csv'
try:
df_feat_ext.to_csv(data_path, index=False)
except Exception:
print("Output path does not exist")
train_data = pd.read_csv(data_path)
train_data.dropna(inplace=True)
feature_list0 = train_data.columns.values.tolist()
feature_label = feature_list0[-1]
feature_list0.remove(feature_label)
nor_data = data_normalized(train_data, feature_list0, 1)
df_feat_ext = pd.DataFrame(nor_data)
out_file = '/home/deep/heart_science/frame_len/' + str(cof)\
+ '.csv'
try:
df_feat_ext.to_csv(out_file, index=False)
except Exception:
print("Output path does not exist")
cof+=0.05
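The extractor above computes short-time features over overlapping frames of length `frame_len` with hop `frame_mov`. A minimal, dependency-free sketch of that framing step (illustrative only; the function name is hypothetical):

```python
def frame_signal(x, frame_len, frame_mov):
    # at least one frame, then as many additional full frames as the hop allows
    n = 1 + max(0, (len(x) - frame_len) // frame_mov)
    return [x[i * frame_mov : i * frame_mov + frame_len] for i in range(n)]

frames = frame_signal(list(range(10)), frame_len=4, frame_mov=2)
print(len(frames), frames[1])  # 4 [2, 3, 4, 5]
```

With `frame_len=512` and `cof=0.25` as in the main loop above, each frame would advance by 128 samples, i.e. a 75% overlap.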
| [
"chenjianfei16@jd.com"
] | chenjianfei16@jd.com |
420add03aec6f4d46ad9a81f655c1dbf2d97933d | 66b1c2fb8fe1bd5222c78e2fde3087a45685646e | /doubt_folder/file1.py | 4fc265daa64dda824e8113babf73138e9db29733 | [] | no_license | nsharma0619/pythonbatchwac | a12233359d1dcf87c01a164381d49b7818631cb6 | f2c878c47e6a610170366c86d2879b86019ab0c3 | refs/heads/master | 2023-01-30T03:40:59.285379 | 2020-12-07T12:17:37 | 2020-12-07T12:17:37 | 310,302,491 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 160 | py |
def printforme():
print("hello there i am printforme function and i will print this thing")
if __name__ == "__main__":
print("i am in file1")
| [
"nsharma0619@gmail.com"
] | nsharma0619@gmail.com |
30f2fce99a7623512eae080861a761697a855b10 | 7275f7454ce7c3ce519aba81b3c99994d81a56d3 | /sp1/廖学峰/yield函数.py | 84e282134cb93d421a55e7284fcbd097a0ef13ab | [] | no_license | chengqiangaoci/back | b4c964b17fb4b9e97ab7bf0e607bdc13e2724f06 | a26da4e4f088afb57c4122eedb0cd42bb3052b16 | refs/heads/master | 2020-03-22T08:36:48.360430 | 2018-08-10T03:53:55 | 2018-08-10T03:53:55 | 139,777,994 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 406 | py | def h():
print ('Wen Chuan')
yield 5#就是个生成器
print ('Fighting!')
print ("xxxxx")
return
c = h()
print(next(c))#第一次生成到yield就停止
print(next(c))#第二次从yield下面开始
#yield 的作用: yield可以看出是“暂停”了函数的执行,
#然后在调用函数的.next() 方法之后, 函数开始执行直到下一个 yield的表达式。
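A minimal standalone illustration of the pause/resume behaviour described in the comments above (hypothetical extra snippet, not part of the original file):

```python
def g():
    yield 1      # first pause point
    yield 2      # second pause point

gen = g()
print(next(gen))  # runs to the first yield -> 1
print(next(gen))  # resumes after the first yield -> 2
```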
| [
"2395618655@qq.com"
] | 2395618655@qq.com |
c6b8c919972b891ce2bb0d8ec469c528a40a618c | 478b4edbb35f39f745a410d9bf7634ba78dd6483 | /datastructures/graph/bfs_traversal.py | 8e84ba102586ec80cf5ba18a84c8eb5709b5cb75 | [] | no_license | mohanakrishnavh/Data-Structures-and-Algorithms-in-Python | 8fc6614bde943322e36333ba6d68f0d808d9324c | e392086d7ee381aebe2804c170a7e69cf590ca4b | refs/heads/master | 2021-06-04T18:02:07.385620 | 2020-05-17T13:03:40 | 2020-05-17T13:03:40 | 143,103,689 | 2 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,058 | py | from datastructures.graph.Graph import Graph
from collections import deque
def bfs_traversal(graph, source):
"""
Time Complexity: O(V + E)
"""
result = []
traversal_queue = deque([source])
while traversal_queue:
# Pop the vertex from the queue & add it to the result
current_vertex = traversal_queue.popleft()
result.append(current_vertex)
# Get the adjacency list of the vertex and iterate over it and add all its adjacent vertices to the queue
if current_vertex in graph.adjacency_list:
adjacency_list_of_current_node = graph.adjacency_list[current_vertex]
head_node = adjacency_list_of_current_node.get_head()
current_node = head_node
while current_node is not None:
traversal_queue.append(current_node.data)
current_node = current_node.next
return result
g = Graph(5)
g.add_edges([[0, 1], [0, 2], [1, 3], [1, 4]])
g.print_graph()
traversed_list = bfs_traversal(g, source=0)
print(traversed_list)
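For comparison, here is a minimal self-contained sketch of the same BFS idea using a plain dict-of-lists adjacency map (illustrative only; it does not depend on the Graph/LinkedList classes above, and it additionally tracks visited vertices to stay safe on graphs with shared or cyclic edges):

```python
from collections import deque

def bfs(adj, source):
    """Breadth-first traversal over a dict adjacency list."""
    visited = [source]
    queue = deque([source])
    while queue:
        vertex = queue.popleft()
        for neighbour in adj.get(vertex, []):
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)
    return visited

print(bfs({0: [1, 2], 1: [3, 4]}, 0))  # [0, 1, 2, 3, 4]
```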
| [
"mohanakrishnavh@gmail.com"
] | mohanakrishnavh@gmail.com |
1519652a231ca7e0759337ed965d5244ac2f0f3e | 9da5fcf3488c4be56784e91ed1ae001ce8212af0 | /pipelines/adult_preparers.py | 22f2895b990e1e75549179617c96e3f62f22b62b | [
"MIT"
] | permissive | leanguardia/msc-project-pipelines | db5aad8f2901f39e5dcfa3c2362fb1a677eceb91 | 87d3af747ce390df0448f9fb1db4115f36df67ce | refs/heads/master | 2023-01-08T19:34:34.560708 | 2020-11-13T01:44:56 | 2020-11-13T01:44:56 | 285,151,592 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,084 | py | import numpy as np
from pipelines.transformation import dummify
from pipelines.preparer import Preparer
from pipelines.adult_schema import adult_schema
class AdultPreparerETL(Preparer):
def __init__(self):
super(AdultPreparerETL, self).__init__(adult_schema)
self.input_validator.add_validators(self.schema.validators(which='input'))
def prepare(self, data):
df = super(AdultPreparerETL, self).prepare(data)
self.input_validator.validate(df)
df.drop_duplicates(keep='first', inplace=True)
# Select all text columns and strip all values
text_cols = df.dtypes == np.object
texts_df = df.loc[:, text_cols].copy()
df.loc[:, text_cols] = texts_df.applymap(lambda text: text.strip())
# Remove extra dots from target.
df['>50K<=50K'] = df['>50K<=50K'].replace({'>50K.': '>50K', '<=50K.': '<=50K'})
# Replace '?' with None to identify missing values.
df['workclass'] = df['workclass'].replace({'?': None})
df['occupation'] = df['occupation'].replace({'?': None})
df['native_country'] = df['native_country'].replace({'?': None})
# Feature Engineering
# workclasses = ['State-gov', 'Self-emp-not-inc', 'Private','Federal-gov',
# 'Local-gov', 'Self-emp-inc', 'Without-pay', 'Never-worked']
# df = dummify(df, 'workclass', workclasses, dummy_na=True)
# df = dummify(df, 'race', ['White', 'Black', 'Asian-Pac-Islander',
# 'Amer-Indian-Eskimo', 'Other'])
# df = dummify(df, 'sex', ['Male', 'Female'])
df['>50K'] = df['>50K<=50K'].map({'>50K': 1, '<=50K': 0})
return df
class AdultPreparer(Preparer):
def __init__(self):
super(AdultPreparer, self).__init__(adult_schema)
def prepare(self, data):
df = super(AdultPreparer, self).prepare(data)
selected_features = ['age', 'fnlwgt', 'education_num', 'capital_gain',
'capital_loss', 'hours_per_week']
return df[selected_features].copy()
| [
"lean.guardia93@gmail.com"
] | lean.guardia93@gmail.com |
cf3a839946fb8122061a8aab6318284a726438e0 | eac5d8cb1fce89af0794f560461231d9c23dd530 | /sift.py | e83e2bde1042aaa76addbff6757d487b8726dabc | [
"MIT"
] | permissive | DongChengdongHangZhou/sift | 9205383c0beb4e17fc5bba92245e8973c24923d1 | 9ff2195933e4931aeffa9b5fffd749743b480e8a | refs/heads/main | 2023-07-23T06:40:10.890224 | 2021-08-27T07:02:09 | 2021-08-27T07:02:09 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,715 | py | import matplotlib.image as mpimg
import matplotlib.pyplot as plt
image1 = mpimg.imread("3.bmp")
image2 = mpimg.imread("4.bmp")
plt.figure()
plt.imshow(image1)
plt.savefig('image1.png', dpi = 300)
plt.figure()
plt.imshow(image2)
plt.savefig('image2.png', dpi = 300)
import cv2
import time
# time the keypoint detection and descriptor computation
start = time.time()
sift = cv2.SIFT_create()  # older cv2 versions use cv2.xfeatures2d.SIFT_create()
# use SIFT to find the key points and descriptors
kp1, des1 = sift.detectAndCompute(image1, None)
kp2, des2 = sift.detectAndCompute(image2, None)
end = time.time()
print("Keypoint detection & description time: %.2f s" % (end-start))
kp_image1 = cv2.drawKeypoints(image1, kp1, None)
kp_image2 = cv2.drawKeypoints(image2, kp2, None)
plt.figure()
plt.imshow(kp_image1)
plt.savefig('kp_image1.png', dpi = 300)
plt.figure()
plt.imshow(kp_image2)
plt.savefig('kp_image2.png', dpi = 300)
# inspect the keypoints
print("Number of keypoints:", len(kp1))
for i in range(2):
    print("Keypoint", i)
    print("Data type:", type(kp1[i]))
    print("Coordinates:", kp1[i].pt)
    print("Neighbourhood diameter:", kp1[i].size)
    print("Orientation:", kp1[i].angle)
    print("Pyramid octave:", kp1[i].octave)
    print("================")
# inspect the descriptors
print("Descriptor array shape:", des1.shape)
for i in range(2):
    print("Descriptor", i)
    print(des1[i])
ratio = 0.7
# time the descriptor matching
start = time.time()
# k-nearest-neighbour matching returns, for each query descriptor, its k closest descriptors
matcher = cv2.BFMatcher()
raw_matches = matcher.knnMatch(des1, des2, k = 2)
good_matches = []
for m1, m2 in raw_matches:
    # Lowe's ratio test: keep the closest match only if it is clearly better than the second closest
    if m1.distance < ratio * m2.distance:
        good_matches.append([m1])
end = time.time()
print("Matching time: %.2f s" % (end-start))
matches = cv2.drawMatchesKnn(image1, kp1, image2, kp2, good_matches, None, flags = 2)
plt.figure()
plt.imshow(matches)
plt.savefig('matches.png', dpi = 300)
# number of matched pairs
print("Number of good matches:", len(good_matches))
for i in range(2):
    print("Match", i)
    print("Data type:", type(good_matches[i][0]))
    print("Distance between descriptors:", good_matches[i][0].distance)
    print("Descriptor index in the query image:", good_matches[i][0].queryIdx)
    print("Descriptor index in the train image:", good_matches[i][0].trainIdx)
    print("================")
import numpy as np
# A homography has eight parameters; each point correspondence yields two equations
# (one for x, one for y), so four correspondences are enough to solve for it
if len(good_matches) > 4:
    # time the homography estimation and warp
    start = time.time()
    ptsA= np.float32([kp1[m[0].queryIdx].pt for m in good_matches]).reshape(-1, 1, 2)
    ptsB = np.float32([kp2[m[0].trainIdx].pt for m in good_matches]).reshape(-1, 1, 2)
    ransacReprojThreshold = 4
    # the homography can rotate/warp one image so that it aligns with the other
    H, status =cv2.findHomography(ptsA,ptsB,cv2.RANSAC,ransacReprojThreshold)
    imgOut = cv2.warpPerspective(image2, H, (image1.shape[1],image1.shape[0]),flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
    end = time.time()
    print("Homography & warp time: %.2f s" % (end-start))
plt.figure()
plt.imshow(image1)
plt.figure()
plt.imshow(image2)
plt.figure()
plt.imshow(imgOut)
plt.savefig('imgOut.png', dpi = 300)
cv2.imwrite('imgOut.jpg',imgOut) | [
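The ratio test used above can be reduced to a distance-only sketch (illustrative; in the real code the values come from cv2.DMatch.distance):

```python
def ratio_test(distance_pairs, ratio=0.7):
    # keep a best match only when it is clearly better than the runner-up
    return [best for best, second in distance_pairs if best < ratio * second]

print(ratio_test([(10.0, 20.0), (18.0, 20.0)]))  # [10.0]
```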
"noreply@github.com"
] | DongChengdongHangZhou.noreply@github.com |
2eb9864ff832a3a0fe802a6f917a4a80bccc3d9f | 392b190de3a04f5d437dee56fe03361478741f24 | /Backup/20170324024512/LaTeXTools/bibliography_plugins/newBibliography.py | 1af98ba1d8fa56ca09dc8732daf1d903b639e5f1 | [] | no_license | nikhilhassija/st3-config | c691c30fc53e1c8ef18358914759a30c3dc46f71 | 75165813ad67166dbcae44dbac38884c4857bf1c | refs/heads/master | 2021-01-01T15:32:45.307474 | 2017-07-29T11:07:50 | 2017-07-29T11:07:50 | 97,637,945 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 4,856 | py | from latextools_plugin import LaTeXToolsPlugin
from external.bibtex import Parser
from external.bibtex.names import Name
from external.bibtex.tex import tokenize_list
from external import latex_chars
from latextools_utils import bibcache
import codecs
from collections import Mapping
import sublime
import traceback
# LaTeX -> Unicode decoder
latex_chars.register()
if sublime.version() < '3000':
def _get_people_long(people):
return u' and '.join([unicode(x) for x in people])
else:
def _get_people_long(people):
return u' and '.join([str(x) for x in people])
def _get_people_short(people):
if len(people) <= 2:
return u' & '.join([x.last if x.last != '' else x.first for x in people])
else:
return people[0].last if people[0].last != '' else people[0].first + \
u', et al.'
def remove_latex_commands(s):
u'''
Simple function to remove any LaTeX commands or brackets from the string,
replacing it with its contents.
'''
chars = []
FOUND_SLASH = False
for c in s:
if c == '{':
# i.e., we are entering the contents of the command
if FOUND_SLASH:
FOUND_SLASH = False
elif c == '}':
pass
elif c == '\\':
FOUND_SLASH = True
elif not FOUND_SLASH:
chars.append(c)
elif c.isspace():
FOUND_SLASH = False
return ''.join(chars)
# wrapper to implement a dict-like interface for bibliographic entries
# returning formatted value, if it is available
class EntryWrapper(Mapping):
def __init__(self, entry):
self.entry = entry
def __getitem__(self, key):
if not key:
return u''
key = key.lower()
result = None
short = False
if key.endswith('_short'):
short = True
key = key[:-6]
if key == 'keyword' or key == 'citekey':
return self.entry.cite_key
if key in Name.NAME_FIELDS:
people = []
for x in tokenize_list(self.entry[key]):
if x.strip() == '':
continue
try:
people.append(Name(x))
except:
print(u'Error handling field "{0}" with value "{1}"'.format(
key, x
))
traceback.print_exc()
if len(people) == 0:
return u''
if short:
result = _get_people_short(people)
else:
result = _get_people_long(people)
if not result:
result = self.entry[key]
return remove_latex_commands(codecs.decode(result, 'latex'))
def __iter__(self):
return iter(self.entry)
def __len__(self):
return len(self.entry)
class NewBibliographyPlugin(LaTeXToolsPlugin):
def get_entries(self, *bib_files):
entries = []
parser = Parser()
for bibfname in bib_files:
bib_cache = bibcache.BibCache("new", bibfname)
try:
cached_entries = bib_cache.get()
entries.extend(cached_entries)
continue
except:
pass
try:
bibf = codecs.open(bibfname, 'r', 'UTF-8', 'ignore') # 'ignore' to be safe
except IOError:
print("Cannot open bibliography file %s !" % (bibfname,))
sublime.status_message("Cannot open bibliography file %s !" % (bibfname,))
continue
else:
bib_data = parser.parse(bibf.read())
print ('Loaded %d bibitems' % (len(bib_data)))
bib_entries = []
for key in bib_data:
entry = bib_data[key]
if entry.entry_type in ('xdata', 'comment', 'string'):
continue
# purge some unnecessary fields from the bib entry to save
# some space and time reloading
for k in [
'abstract', 'annotation', 'annote', 'execute',
'langidopts', 'options'
]:
if k in entry:
del entry[k]
bib_entries.append(EntryWrapper(entry))
try:
bib_cache.set(bib_entries)
fmt_entries = bib_cache.get()
entries.extend(fmt_entries)
except:
traceback.print_exc()
finally:
try:
bibf.close()
except:
pass
print("Found %d total bib entries" % (len(entries),))
return entries
| [
"nikhil.hassija@gmail.com"
] | nikhil.hassija@gmail.com |
133f85e0376eb70eb1c5335387a08dd630b1e713 | 23c6f46572a13b4febbaec8854745ec6cc99c996 | /onnx_tf/handlers/backend/q_linear_conv.py | 083d29e9426fe1feef54af14079b590b4cff356d | [
"Apache-2.0"
] | permissive | ashwinr64/onnx-tensorflow | ff732875747b42c7c67586b5d7df96e9bfa78cf6 | 1980caca5a128e2f7fb8b82e1dada5b3989140c0 | refs/heads/master | 2022-08-16T16:53:31.720678 | 2020-05-19T18:19:55 | 2020-05-19T18:19:55 | 266,281,693 | 1 | 0 | Apache-2.0 | 2020-05-23T07:04:23 | 2020-05-23T07:04:22 | null | UTF-8 | Python | false | false | 2,335 | py | import tensorflow as tf
from onnx_tf.handlers.backend_handler import BackendHandler
from onnx_tf.handlers.handler import onnx_op
from .conv_mixin import ConvMixin
@onnx_op("QLinearConv")
class QLinearConv(ConvMixin, BackendHandler):
@classmethod
def _dequantize_tensor(cls, base, zero_point, scale):
# Do computation in float32
base = tf.cast(base, tf.float32)
zero_point = tf.cast(zero_point, tf.float32)
return (base - zero_point) * scale
@classmethod
def _dequantize_w(cls, base, zero_point, scale):
tensor_list = [
cls._dequantize_tensor(base[i][j], zero_point[j], scale[j])
for i in range(base.shape.as_list()[0])
for j in range(zero_point.shape.as_list()[0])
]
out_tensor = tf.concat(tensor_list, 0)
return tf.reshape(out_tensor, base.shape)
@classmethod
def version_10(cls, node, **kwargs):
tensor_dict = kwargs["tensor_dict"]
x = tensor_dict[node.inputs[0]]
x_scale = tensor_dict[node.inputs[1]]
x_zero_point = tensor_dict[node.inputs[2]]
w = tensor_dict[node.inputs[3]]
w_scale = tensor_dict[node.inputs[4]]
w_zero_point = tensor_dict[node.inputs[5]]
y_scale = tensor_dict[node.inputs[6]]
y_zero_point = tensor_dict[node.inputs[7]]
output_dtype = x.dtype
# Convert w_zero_point and w_scale to 1-D if scalar
if len(w_zero_point.shape) == 0:
w_zero_point = tf.fill([x.shape[1]], w_zero_point)
elif len(w_zero_point.shape) > 1:
raise ValueError("Unsupported zero point: {}".format(w_zero_point))
if len(w_scale.shape) == 0:
w_scale = tf.fill([x.shape[1]], w_scale)
elif len(w_scale.shape) > 1:
raise ValueError("Unsupported scale: {}".format(w_scale))
# Dequantize variables to float32
x = cls._dequantize_tensor(x, x_zero_point, x_scale)
w = cls._dequantize_w(w, w_zero_point, w_scale)
y_zero_point = tf.cast(y_zero_point, tf.float32)
new_dict = tensor_dict.copy()
new_dict[node.inputs[0]] = x
new_dict[node.inputs[3]] = w
# Remove scales and zero-points from inputs
for i in [7, 6, 5, 4, 2, 1]:
node.inputs.remove(node.inputs[i])
# Use common conv handling
conv_node = cls.conv(node, new_dict)[0]
# Process output
y = tf.round(conv_node / y_scale) + y_zero_point
return [tf.cast(y, output_dtype)]
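A quick numeric sanity check of the dequantisation formula used above, `(base - zero_point) * scale`, independent of TensorFlow:

```python
def dequantize(q, zero_point, scale):
    # maps a quantized integer back to its approximate real value
    return (q - zero_point) * scale

print(dequantize(130, 128, 0.5))  # 1.0
```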
| [
"chhuang@us.ibm.com"
] | chhuang@us.ibm.com |
0ff7a49a3c962c569d98dc7b1e3d5efd3cbd7f2e | 8b89797b16820bd5c8b01bbb49bc0176d64e7bfc | /src/GUI/slider.py | 50a6077e0f2a2f978d247fba6544b67dee963ad1 | [] | no_license | basuluu/Equalizer | 1bda11c4f9aaaad24a659ea685d9eedad00e071c | 7cbe38c3a30c44261348996b8346f0435f50d835 | refs/heads/master | 2022-11-23T12:50:30.573065 | 2019-07-01T09:43:53 | 2019-07-01T09:43:53 | 194,638,833 | 0 | 1 | null | 2022-11-22T03:54:46 | 2019-07-01T09:08:31 | Python | UTF-8 | Python | false | false | 885 | py | from tkinter import *
class Slider:
def __init__(self, filter_, master):
self.scale = Scale(master, resolution=1, from_=67, to=0, width=60, activebackground="black", showvalue=0, command=self.onChange)
self.scale.set('33')
self.label = Label(master, text=0, font=("Helvetica", 12), height=5, justify="center", anchor="c", bg="#40E0D0")
self.name = Label(master, text=0, font=("Helvetica", 12), justify="center", anchor="c")
self.filter = filter_
def onChange(self, value):
step = 3
if int(value) > 33:
value = int(value) - 32
elif int(value) < 33:
value = (100 - (33 - int(value)) * step) / 100
else:
value = 1
self.label['text'] = str(value)
self.filter.set_gain(value)
def setName(self, name):
self.name['text'] = name
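The gain mapping in `onChange` above, extracted as a pure function for clarity (a sketch that mirrors the arithmetic; the function name is hypothetical):

```python
def slider_to_gain(value, centre=33, step=3):
    if value > centre:
        return value - (centre - 1)                   # 34..67 -> 2..35
    if value < centre:
        return (100 - (centre - value) * step) / 100  # 0..32 -> 0.01..0.97
    return 1                                          # centre position is unity gain

print(slider_to_gain(32))  # 0.97
```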
| [
"severok1077@mail.ru"
] | severok1077@mail.ru |
eee0365216c2e30319d84a343c944c9d404dca71 | 129f435c765b72ee377e0303f634fb05de92a479 | /scripts/test_login.py | 331ecc6bb1993337ae43b7adbfcff213436a9a08 | [] | no_license | cattree1/jenkins_project | 9538f6566f04919a589fe52a220e1e0d7639c481 | 7c278bb4d90ccee223f23bceac26227b2d5e5940 | refs/heads/master | 2020-07-31T12:50:04.811082 | 2019-09-26T14:58:05 | 2019-09-26T14:58:05 | 210,609,335 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 264 | py | from base.base_driver import init_driver
class TestLogin:
def setup(self):
self.driver = init_driver()
def teardown(self):
self.driver.quit()
def test_login(self):
print("hello")
def test_login1(self):
assert 0
| [
"807265791@qq.com"
] | 807265791@qq.com |
5fd9e18f04fb7261e3dbb249063971e1fff84091 | 0388b9940c1dd4db259c8ce5f5cd728932939a23 | /bubble_sort_main.py | 150e8bb37ba2949d589d76933bfebbc408de7e0b | [] | no_license | BloodyPhoenix/VIP_bubble_sort_tips | 8d8593df0132ecf413016d7ab98beacc2b66ab02 | 95340fe9aafef637a55bfede499f1c11654a4d92 | refs/heads/master | 2023-05-09T07:07:52.858913 | 2021-06-06T15:26:42 | 2021-06-06T15:26:42 | 374,392,537 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,246 | py | # -*- coding: utf-8 -*-
import copy
import bubble_sort_engine
import bubble_sort_setup
def bubbles_sort():
    print("""Welcome to the helper app for the game Bubble Sort and similar games!
To begin, press Enter""")
    input()
    print("""Now you will set up the level's initial state: the number of tubes, their layout and the ball colours.
The program is not case- or language-sensitive. You may enter ball colours in any way convenient for you,
including abbreviations. What matters is that you yourself know which label means which colour,
and that you always write the same colour the same way. If both "Yellow" and "yellow" appear,
the program will treat them as different colours and will not be able to start correctly.
Tubes are numbered from the top-left corner of the board; balls from top to bottom.""")
base_game_field = bubble_sort_setup.setup_game_field()
tries = 0
completed = False
while not completed:
tries += 1
        print("Attempt number", tries)
game_field = copy.deepcopy(base_game_field)
        simulator = bubble_sort_engine.GameSimulator(game_field)
has_moves = True
while has_moves:
if simulator.make_move():
if simulator.check_if_finished():
print_log(simulator)
completed = True
has_moves = False
else:
has_moves = False
input()
def print_log(simulator: bubble_sort_engine.GameSimulator):
for line in simulator.log:
print(line)
if __name__ == "__main__":
bubbles_sort()
| [
"mchashchina@gmail.com"
] | mchashchina@gmail.com |
bb1c1189e5ccf4de06ca9ace67b63ad4cdc1739a | f9c5260fbc31972e931a17a79b90959345305e6f | /webapp/website/podcast/urls.py | ebad2022f668cf2225b2402e0f0a806002da5bcf | [] | no_license | tosborne1215/ColdOnesPodcast | 9799a9d785ec9ac610c99db1fed4faf39f3a8bc4 | 34abe068d1be837ba16f472c614f4dcd12915fca | refs/heads/master | 2020-03-14T13:35:23.705366 | 2018-10-19T03:08:54 | 2018-10-19T03:08:54 | 131,636,317 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 999 | py | """website URL Configuration
The `urlpatterns` list routes URLs to views. For more information please see:
https://docs.djangoproject.com/en/1.11/topics/http/urls/
Examples:
Function views
1. Add an import: from my_app import views
2. Add a URL to urlpatterns: url(r'^$', views.home, name='home')
Class-based views
1. Add an import: from other_app.views import Home
2. Add a URL to urlpatterns: url(r'^$', Home.as_view(), name='home')
Including another URLconf
1. Import the include() function: from django.conf.urls import url, include
2. Add a URL to urlpatterns: url(r'^blog/', include('blog.urls'))
"""
from django.conf.urls import url, include
from . import views
from .podcast_feed import PodcastFeed
app_name = 'podcast'
urlpatterns = [
url(r'^podcast.rss$', PodcastFeed(), name='feed'),
url(r'^episodes/', views.EpisodesView.as_view(), name='episodes'),
url(r'^episode/(?P<pk>[0-9]+)$',
views.EpisodeView.as_view(), name='episode'),
]
| [
"tosborne1215@gmail.com"
] | tosborne1215@gmail.com |
074470dd0fa688eec27d55563a18b2e2ec5a6553 | ae29385e73c6fe7a1f8f3b0ff4f1c515cdcffb08 | /pywikibugs.py | 5e52678dd767caa977873c8755189e5aeaef74d0 | [] | no_license | pywikibot/pywikibugs | 078b53084d990c7e3beabe95ca55c1a81b08b884 | e86a51bde81985387c5612c07c90e6715ba08760 | refs/heads/master | 2020-05-18T20:49:36.047148 | 2013-09-25T03:03:48 | 2013-09-25T03:03:48 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,296 | py | #!/usr/bin/python
# (C) Legoktm, 2013
# (C) Pywikipediabot team, 2013
# Released under the MIT License
from __future__ import unicode_literals
import re
import requests
from mtirc import bot, hooks, settings
config = dict(settings.config)
config['nick'] = 'pywikibugs'
config['debug'] = False
config['disable_on_errors'] = None
config['connections']['card.freenode.net']['channels'] = ['#mediawiki', '#pywikipediabot', '##legoktm-bots-chatter']
COLOR_RE = re.compile(r'(?:\x02|\x03(?:\d{1,2}(?:,\d{1,2})?)?)')
THINGY = re.compile('bugzilla\.wikimedia\.org/(\d*?) ')
def on_msg(**kw):
if kw['sender'].nick.startswith('wikibugs'):
if 'Pywikibot' in kw['text']:
kw['bot'].queue_msg('#pywikipediabot', kw['text'])
else:
de_colored = COLOR_RE.sub('', kw['text'])
match = THINGY.search(de_colored)
if match:
bug_id = match.group(1)
url = 'http://bugzilla.wikimedia.org/show_bug.cgi?id=' + bug_id.strip()
r = requests.get(url)
if 'describecomponents.cgi?product=Pywikibot' in r.text:
kw['bot'].queue_msg('#pywikipediabot', kw['text'])
hooks.add_hook('on_msg', 'pywikibugs', on_msg)
if __name__ == '__main__':
b = bot.Bot(config)
b.run()
| [
"legoktm@gmail.com"
] | legoktm@gmail.com |
bc749c94a7e9f16ca9886f932249f24e0a270ea8 | 19687d24e83a67865ed90f31b830922d425ac5ab | /auto.py | 5f12e77afbd1ad4520cf3b7949e3ac785de5880a | [] | no_license | klapperys/Spotify-Python-Media-controls | 08fe62b21a4d295cb365af69008a490ca9cf8a61 | 72f08c58c2f3f4514a3d11e69041875d3d886461 | refs/heads/master | 2020-04-10T21:19:44.577905 | 2018-12-11T07:36:22 | 2018-12-11T07:36:22 | 161,293,774 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,272 | py | #!/usr/bin/python3
import pyautogui
import subprocess
import argparse
#get current workspace
workspace = subprocess.run(['./workspace.sh'], stdout=subprocess.PIPE).stdout.decode('utf-8').split('\n')[0].split(')')[0].split('(')[1]
print('Curent Workspace: ' + workspace)
mouse_pos = pyautogui.position()
# Instantiate the parser
parser = argparse.ArgumentParser(description='A gui media controler')
parser.add_argument('action', type=str,
help='what action must i perform')
args = parser.parse_args()
if args.action == 'pp':
#move work space
pyautogui.hotkey('winleft', '1')
pyautogui.moveTo(965, 50000)
pyautogui.moveRel(-10, -70)
pyautogui.click()
pyautogui.hotkey('winleft', workspace)
pyautogui.moveTo(mouse_pos)
if args.action == 'next':
#move work space
pyautogui.hotkey('winleft', '1')
pyautogui.moveTo(965, 50000)
pyautogui.moveRel(32, -70)
pyautogui.click()
pyautogui.hotkey('winleft', workspace)
pyautogui.moveTo(mouse_pos)
if args.action == 'prev':
#move work space
pyautogui.hotkey('winleft', '1')
pyautogui.moveTo(965, 50000)
pyautogui.moveRel(-52, -70)
pyautogui.click()
pyautogui.hotkey('winleft', workspace)
pyautogui.moveTo(mouse_pos)
| [
"noreply@github.com"
] | klapperys.noreply@github.com |
51f1039e6e06aaaeb2eebeec4efa0727b1cf99fb | c09a06ef8693c1cdd1af5211924ae38c8401c908 | /newbie.py | acf7fb6e2c9892620ca6d4183566aabcdfdf13ef | [] | no_license | Kebumen-Grey-Hat/ChatBot | befcc42c00630fac519413d7666c3bfa2e8f52f6 | 62f01b8a576ddff816239da3f48e2cdccd5a7d21 | refs/heads/master | 2022-09-03T13:04:51.544042 | 2020-05-25T16:27:34 | 2020-05-25T16:27:34 | 266,113,542 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,128 | py | from requests import Session
import re, sys
s = Session()
no = input("Number : ")
msg = input("Message : ")
headers = {
    'User-Agent': 'Mozilla/5.0 (Linux; Android 6.0.1; ASUS_A007 Build/MMB29P; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/55.0.2883.91 Mobile Safari/537.36',
    'Referer': 'https://alpha.payuterus.biz/'
}
bypas = s.get("https://alpha.payuterus.biz/?a=keluar", headers=headers).text
key = re.findall(r'value="(\d+)"', bypas)[0]
jml = re.findall(r'<span>(.*?) = </span>', bypas)[0]
captcha = eval(jml.replace("x", "*").replace(":", "/"))
data = {
    'nohp': no,
    'pesan': msg,
    'captcha': captcha,
    'key': key
}
send = s.post("http://sms.payuterus.biz/alpha/send.php", headers=headers, data=data).text
if 'SMS Gratis Telah Dikirim' in send:
    print(f"\nSent successfully!\n[{no}] => {msg}")
elif 'MAAF....!' in send:
    print("\n\t* Please wait another 15 minutes before sending again *")
elif 'Pesan yang dikirimkan minimal 10 karakter' in send:
    print('\n\t* The message must be at least 10 characters long *')
else:
    print("\n\t* Failed to send! *")
| [
"aldoraimas@gmail.com"
] | aldoraimas@gmail.com |
ecb011d695e08e7bb06c253526edc4831910d3c6 | 5e9cec01d4f1da811f5b585474b38c8a6dd2127b | /asl-workflow-engine/py/test/workers_blocking_asyncio.py | 173270bf56b7d51b8ea58f1916b78f7052f77400 | [
"Apache-2.0"
] | permissive | fadams/local-step-functions | e8ffe9830286412d32bcbfe440d5d1c14462099f | fde33568c2bd822c232b585f4eaf1028be20bf5d | refs/heads/master | 2023-08-04T08:15:17.659530 | 2023-07-31T08:06:13 | 2023-07-31T08:06:13 | 191,017,454 | 9 | 2 | null | null | null | null | UTF-8 | Python | false | false | 8,587 | py | #
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
# Run with:
# PYTHONPATH=.. python3 workers_blocking_asyncio.py
# PYTHONPATH=.. LOG_LEVEL=DEBUG python3 workers_blocking_asyncio.py
#
"""
This is an asyncio based version of the workers_blocking.py example.
The main loop uses asyncio, and a "handler_task" representing a long running
job that would block the event loop if called directly is launched as a Task
(Executor thread) by the main message_listener via a loop.run_in_executor call.
"""
import sys
assert sys.version_info >= (3, 6) # Bomb out if not running Python3.6
import asyncio, json, time, opentracing
import atexit, concurrent.futures, contextlib
from asl_workflow_engine.logger import init_logging
from asl_workflow_engine.open_tracing_factory import create_tracer, span_context, inject_span
from asl_workflow_engine.amqp_0_9_1_messaging_asyncio import Connection, Message
from asl_workflow_engine.messaging_exceptions import *
class Worker():
def __init__(self, name):
self.name = name
# Initialise logger
self.logger = init_logging(log_name=name)
# Note that although we will use asyncio await loop.run_in_executor
# to actually launch the tasks, the actual ThreadPoolExecutor itself
# is not asyncio so we use ExitStack, enter_context and close rather
# than AsyncExitStack, enter_async_context and aclose as we would for
# an asyncio context manager.
context_stack = contextlib.ExitStack()
# Create the ThreadPoolExecutor here because creating it in the
# message listener/handler is really expensive. Also running it as a
# context manager using a with statement there won't actually sort the
# issue as the context manager blocks until the threads exit so will
# block the event loop in the main thread which is exactly what we
# *don't* want to do.
self.executor = context_stack.enter_context(
concurrent.futures.ThreadPoolExecutor(max_workers=10)
)
# Note: exit_handler is a nested closure which "decouples" the lifetime
# of the exit_handler from the lifetimes of instances of the class.
# https://stackoverflow.com/questions/16333054/what-are-the-implications-of-registering-an-instance-method-with-atexit-in-pytho
@atexit.register
def exit_handler():
print("exit_handler called")
context_stack.close()
print("exit_handler exiting")
# Launched as an Executor Task from the message listener
def handler_task(self, message):
print(self.name + " handler_task started")
print(message)
with opentracing.tracer.start_active_span(
operation_name=self.name,
child_of=span_context("text_map", message.properties, self.logger),
tags={
"component": "workers",
"message_bus.destination": self.name,
"span.kind": "consumer",
"peer.address": "amqp://localhost:5672"
}
) as scope:
# Delay for a while to simulate work then create simple reply.
# In a real processor **** DO WORK HERE ****
time.sleep(30)
reply = {"reply": self.name + " reply"}
"""
Create the response message by reusing the request; note that this
approach retains the correlation_id, which is necessary. If a fresh
Message instance is created we would need to get the correlation_id
from the request Message and use that value in the response message.
"""
"""
Start an OpenTracing trace for the rpcmessage response.
https://opentracing.io/guides/python/tracers/ standard tags are from
https://opentracing.io/specification/conventions/
"""
with opentracing.tracer.start_active_span(
operation_name=self.name,
child_of=opentracing.tracer.active_span,
tags={
"component": "workers",
"message_bus.destination": message.reply_to,
"span.kind": "producer",
"peer.address": "amqp://localhost:5672"
}
) as scope:
message.properties=inject_span("text_map", scope.span, self.logger)
message.subject = message.reply_to
message.reply_to = None
message.body = json.dumps(reply)
#self.producer.send(message)
#message.acknowledge() # Acknowledges the original request
# These need to be thread safe if called from a thread
# other than the main event loop thread.
self.producer.send(message, threadsafe=True)
message.acknowledge(threadsafe=True) # Acknowledges the original request
print(self.name + " handler_task finished")
async def message_listener(self, message):
# Because we are basically launching the handler_task and doing nothing
# with the result here (because the handler_task is acknowledging and
# sending the result in the same way as the non-asyncio version) we
# could actually make this method a subroutine (the underlying listener
# autodetects if real listener is a subroutine or coroutine) and use the
# self.executor.submit(self.handler_task, message)
# below rather than loop.run_in_executor, but the approach here is
# probably more idiomatic for code that is primarily asyncio.
print(self.name + " message_listener called")
loop = asyncio.get_event_loop()
# N.B. whether to directly call the actual processing or use an
# Executor *should be configurable*. For IO bound processing it may
# make matters worse to launch in an Executor unless the processor
# is itself blocking on other IO. In general asyncio is a better way
# to enable high IO concurrency and Executors may be better for a
# small number of more CPU bound processors.
# Directly call the real delegated handler
#self.handler_task(message)
# Indirectly call the real delegated handler as an Executor Task
#self.executor.submit(self.handler_task, message)
await loop.run_in_executor(self.executor, self.handler_task, message)
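The comments above weigh calling the handler directly against dispatching it to an Executor. Stripped of the messaging classes, the core pattern this file relies on is just `loop.run_in_executor` plus `gather`; a minimal standalone sketch (names here are illustrative, not from the module):

```python
import asyncio
import concurrent.futures
import time

def blocking_job(n):
    # Simulates work that would stall the event loop if called directly.
    time.sleep(0.1)
    return n * 2

async def main():
    loop = asyncio.get_running_loop()
    with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
        # Each call runs on a pool thread; the event loop stays free for other tasks.
        results = await asyncio.gather(
            *(loop.run_in_executor(pool, blocking_job, i) for i in range(5))
        )
    return results

print(asyncio.run(main()))  # [0, 2, 4, 6, 8]
```

`gather` preserves submission order, so the results come back as `[0, 2, 4, 6, 8]` even though the jobs run concurrently on pool threads.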
async def run(self):
# Connect to worker queue and process data.
connection = Connection("amqp://localhost:5672?connection_attempts=20&retry_delay=10&heartbeat=10")
try:
await connection.open()
session = await connection.session()
self.consumer = await session.consumer(
self.name + '; {"node": {"auto-delete": true}}'
)
self.consumer.capacity = 100  # Enable consumer prefetch
await self.consumer.set_message_listener(self.message_listener)
self.producer = await session.producer()
await connection.start()  # Wait until connection closes.
except MessagingError as e: # ConnectionError, SessionError etc.
self.logger.error(e)
connection.close()
if __name__ == '__main__':
"""
Initialising OpenTracing here rather than in the Worker constructor as
opentracing.tracer is a per process object not per thread.
"""
create_tracer("workers", {"implementation": "Jaeger"})
workers = ["SimpleBlockingProcessor"]
loop = asyncio.get_event_loop()
# Create list of Worker tasks using list comprehension
tasks = [loop.create_task(Worker(name=w).run()) for w in workers]
# Run the tasks concurrently
loop.run_until_complete(asyncio.gather(*tasks))
loop.close()
| [
"fraser.adams@blueyonder.co.uk"
] | fraser.adams@blueyonder.co.uk |
83e7813ff8fcbda39b6202c6d8067e847050c658 | 670f38479e368c71712202a79f9c29810d17b6e6 | /blog/migrations/0004_auto__add_field_blog_photographer_name.py | 61de687eae38d6ab673efe339db1bf92888cf803 | [] | no_license | opendream/livestory | e1855097ba7d5c0430cb3f82678e16efdbe80fb7 | 675a57959ffc521702049ef8bf00463d4643b4fc | refs/heads/master | 2021-01-15T18:03:43.967435 | 2013-10-02T08:17:51 | 2013-10-02T08:17:51 | 3,446,520 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 8,722 | py | # -*- coding: utf-8 -*-
import datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
class Migration(SchemaMigration):
def forwards(self, orm):
# Adding field 'Blog.photographer_name'
db.add_column('blog_blog', 'photographer_name',
self.gf('django.db.models.fields.CharField')(default='', max_length=200, blank=True),
keep_default=False)
def backwards(self, orm):
# Deleting field 'Blog.photographer_name'
db.delete_column('blog_blog', 'photographer_name')
models = {
'auth.group': {
'Meta': {'object_name': 'Group'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '80'}),
'permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'})
},
'auth.permission': {
'Meta': {'ordering': "('content_type__app_label', 'content_type__model', 'codename')", 'unique_together': "(('content_type', 'codename'),)", 'object_name': 'Permission'},
'codename': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['contenttypes.ContentType']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
},
'auth.user': {
'Meta': {'object_name': 'User'},
'date_joined': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'blank': 'True'}),
'first_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'groups': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Group']", 'symmetrical': 'False', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'is_staff': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'is_superuser': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'last_login': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'last_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'password': ('django.db.models.fields.CharField', [], {'max_length': '128'}),
'user_permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'}),
'username': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '30'})
},
'blog.blog': {
'Meta': {'object_name': 'Blog'},
'allow_download': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'category': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['blog.Category']"}),
'description': ('django.db.models.fields.TextField', [], {'null': 'True', 'db_index': 'True'}),
'download_url': ('django.db.models.fields.URLField', [], {'max_length': '200', 'null': 'True', 'blank': 'True'}),
'draft': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'image': ('private_files.models.fields.PrivateFileField', [], {'max_length': '500', 'attachment': 'False'}),
'image_captured_date': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'}),
'image_captured_device': ('django.db.models.fields.CharField', [], {'max_length': '200'}),
'location': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['location.Location']"}),
'modified': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'}),
'mood': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'photographer_name': ('django.db.models.fields.CharField', [], {'max_length': '200', 'blank': 'True'}),
'private': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'published': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'}),
'related_url': ('django.db.models.fields.URLField', [], {'max_length': '200', 'null': 'True', 'blank': 'True'}),
'title': ('django.db.models.fields.CharField', [], {'max_length': '200', 'db_index': 'True'}),
'trash': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']"})
},
'blog.category': {
'Meta': {'object_name': 'Category'},
'code': ('django.db.models.fields.SlugField', [], {'max_length': '50'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '200', 'db_index': 'True'})
},
'blog.comment': {
'Meta': {'object_name': 'Comment'},
'blog': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['blog.Blog']"}),
'comment': ('django.db.models.fields.TextField', [], {'max_length': '500'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'post_date': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']"})
},
'blog.love': {
'Meta': {'object_name': 'Love'},
'blog': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['blog.Blog']"}),
'datetime': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']"})
},
'contenttypes.contenttype': {
'Meta': {'ordering': "('name',)", 'unique_together': "(('app_label', 'model'),)", 'object_name': 'ContentType', 'db_table': "'django_content_type'"},
'app_label': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'model': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'})
},
'location.location': {
'Meta': {'object_name': 'Location'},
'city': ('django.db.models.fields.CharField', [], {'max_length': '200', 'db_index': 'True'}),
'country': ('django.db.models.fields.CharField', [], {'max_length': '200', 'db_index': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'lat': ('django.db.models.fields.CharField', [], {'max_length': '50'}),
'lng': ('django.db.models.fields.CharField', [], {'max_length': '50'})
},
'taggit.tag': {
'Meta': {'object_name': 'Tag'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'slug': ('django.db.models.fields.SlugField', [], {'unique': 'True', 'max_length': '100'})
},
'taggit.taggeditem': {
'Meta': {'object_name': 'TaggedItem'},
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'taggit_taggeditem_tagged_items'", 'to': "orm['contenttypes.ContentType']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'object_id': ('django.db.models.fields.IntegerField', [], {'db_index': 'True'}),
'tag': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'taggit_taggeditem_items'", 'to': "orm['taggit.Tag']"})
}
}
complete_apps = ['blog'] | [
"panuta@gmail.com"
] | panuta@gmail.com |
3227e6ffd710be98752fe6ce6d61315d48df4aea | 80de3f5a7e29c0e350e0b43af17a71e4857bdfa0 | /main_pages/tests/tests_forms.py | dc022af32910dc0bd1a9fd685ba2cdbf9185f7f1 | [] | no_license | AlexiaDelorme/project-spacex | ca5ddcd81c5fdbc70719d850c26ba110160ad5ba | ffecac6fee28b9f9779ca682c64490f7e13a30f1 | refs/heads/master | 2023-05-25T12:04:31.938573 | 2023-02-05T14:31:14 | 2023-02-05T14:31:14 | 244,375,146 | 0 | 1 | null | 2023-05-22T23:27:35 | 2020-03-02T13:15:38 | Python | UTF-8 | Python | false | false | 727 | py | from django.test import TestCase
from main_pages.forms import ContactForm
class TestContactForm(TestCase):
def test_cannot_send_form_with_just_a_field(self):
form = ContactForm({'first': 'Test'})
self.assertFalse(form.is_valid())
def test_correct_message_for_missing_field(self):
form = ContactForm({'first': ''})
self.assertFalse(form.is_valid())
self.assertIn('first', form.errors.keys())
self.assertEqual(form.errors['first'][0], 'This field is required.')
def test_fields_are_explicit_in_form_metaclass(self):
form = ContactForm()
self.assertEqual(form.Meta.fields, [
'subject', 'first', 'last', 'email', 'message'])
| [
"alexia.delorme@gmail.com"
] | alexia.delorme@gmail.com |
dd94b63be58aec2f390e8640515372d3d15f61c9 | 1c64b0d2b762613ff1711c94f41aa8ced5f25029 | /selenium/InstaBot.py | cda4c8de0071ca7edf9dabe6758ad3ef98e6de95 | [] | no_license | Ang3lino/auto | c2aeb07d193818e95f6660d8abcde55a7cc0be91 | 0e7f1817666340ff81f00d87b30f86f6dcc7a787 | refs/heads/master | 2021-07-14T12:05:28.658223 | 2021-04-30T00:19:29 | 2021-04-30T00:19:29 | 246,199,980 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,910 | py | import random
import sys
import time
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.common.exceptions import NoSuchElementException
from tqdm import tqdm
def print_same_line(text):
sys.stdout.write('\r')
sys.stdout.flush()
sys.stdout.write(text)
sys.stdout.flush()
def login(driver, username, password):
driver.get("https://www.instagram.com/")
time.sleep(2)
# login_button = driver.find_element_by_xpath("//a[@href='/accounts/login/?source=auth_switcher']")
# login_button.click()
# time.sleep(2)
user_name_elem = driver.find_element_by_xpath("//input[@name='username']")
user_name_elem.clear()
user_name_elem.send_keys(username)
passworword_elem = driver.find_element_by_xpath("//input[@name='password']")
passworword_elem.clear()
passworword_elem.send_keys(password)
passworword_elem.send_keys(Keys.RETURN)
time.sleep(2)
def like_comments(driver, url):
driver.get(url)
time.sleep(2)
# click the "Watch more comments"
watch_count = 86
for _ in tqdm(range(watch_count), total=watch_count): # while True:
try:
btn = driver.find_element_by_xpath("//button[@class='dCJp8 afkep']")
except NoSuchElementException as e:
print("[OK] All comments in the screen")
break
btn.click()
time.sleep(2)
comments = driver.find_elements_by_class_name('Mr508')
love_count, already_loved = 0, 0
for comment in comments:
if comment.tag_name == 'ul':
print(f'{love_count} {already_loved}: {comment.text} \n')
svgs = comment.find_elements_by_tag_name('svg')
for svg in svgs:
attr = svg.get_attribute('aria-label')
if attr == 'Unlike': # already "loved" by this account: the button now offers Unlike
already_loved += 1
break
elif attr == 'Like': # we haven't "loved" the button
try:
svg.click()
except Exception as e:
print("[!] IG got angry man, stop") # Instagram got pissed off (rate limit hit)
return False
time.sleep(random.randint(1, 4)) # exec NOP randomly to avoid anti-bot algorithms
love_count += 1
break
return True
# print (comment.text)
# print (comment.tag_name)
# print (comment.parent)
# print (comment.location)
# print (comment.size)
# print('\n\n')
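The random sleep in `like_comments` is the script's only throttle against Instagram's anti-bot checks. A small helper (a sketch; `jittered_sleep` is a hypothetical name, not in the original) makes the jittered-delay idea explicit and reusable:

```python
import random
import time

def jittered_sleep(base=1.0, spread=3.0):
    # Sleep between base and base+spread seconds so request timing
    # is not perfectly regular; returns the delay actually used.
    delay = base + random.uniform(0, spread)
    time.sleep(delay)
    return delay

# e.g. call jittered_sleep() between likes instead of time.sleep(random.randint(1, 4))
```

Returning the delay lets the caller log or tune the pacing without re-deriving it.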
driver = webdriver.Chrome('./chromedriver')
username = "__ang3lino"
password = ""
url = 'https://www.instagram.com/p/B-yNzwrlaII/'
login(driver, username, password)
success = like_comments(driver, url)
if not success:
print(driver.page_source)
with open('err_page.html', 'w') as f:
f.write(driver.page_source)
time.sleep(10)
driver.close()
print('[Ok] Work done')
| [
"sigma271@outlook.com"
] | sigma271@outlook.com |
8700468a4e3c98b55a0eef668ba0cbfe06acb303 | 2e9b974a8249d6deb1220ae426f9b42088be3d1a | /api/.~c9_invoke_iQvS0H.py | 130e88cdee90751a67ddfaa869d0a03bd17c7135 | [] | no_license | Freddec-Ingress/MyIngressMosaics | 2238ad0ecf4764de3a8990a2d8110de13d9c0b46 | 765fc47435e8640ee3416a2cf8a560fda4444acb | refs/heads/master | 2020-12-30T22:31:13.343570 | 2018-07-19T14:27:58 | 2018-07-19T14:27:58 | 95,421,447 | 1 | 1 | null | null | null | null | UTF-8 | Python | false | false | 20,889 | py | #!/usr/bin/env python
# coding: utf-8
import json
from rest_framework import status
from rest_framework import viewsets
from rest_framework.response import Response
from rest_framework.decorators import api_view, permission_classes
from rest_framework.permissions import IsAuthenticated, IsAuthenticatedOrReadOnly, AllowAny
from rest_framework.authtoken.models import Token
from rest_social_auth.serializers import UserTokenSerializer
from .models import *
from django.http import HttpResponse
from django.db.models import Q
from django.views.decorators.csrf import csrf_exempt
from django.contrib.auth import get_user_model, authenticate, logout
from operator import itemgetter, attrgetter, methodcaller
#---------------------------------------------------------------------------------------------------
@csrf_exempt
def ext_isMissionRegistered(request):
data = []
obj = json.loads(request.body)
for item in obj:
result = Mission.objects.filter(ref = item['mid'])
if result.count() > 0:
data.append({'mid':item['mid'], 'status': 'registered'})
else:
data.append({'mid':item['mid'], 'status': 'notregistered'})
from django.http import JsonResponse
return JsonResponse({'data': data})
#---------------------------------------------------------------------------------------------------
@api_view(['POST'])
@permission_classes((AllowAny, ))
def ext_registerMission(request):
obj = json.loads(request.body)
results = Mission.objects.filter(ref=obj[0])
if (results.count() < 1):
mission = Mission(data=request.body)
mission.save()
mission.computeInternalData()
return Response('Registered', status=status.HTTP_200_OK)
else:
mission = results[0]
if mission.data != request.body:
mission.data = request.body
mission.save()
mission.computeInternalData()
return Response('Updated', status=status.HTTP_200_OK)
#---------------------------------------------------------------------------------------------------
@api_view(['POST'])
@permission_classes((AllowAny, ))
def ext_checkBounds(request):
data = None
results = Mission.objects.filter(startLat__gte=request.data['sLat'], startLng__gte=request.data['sLng']).filter(startLat__lte=request.data['nLat'], startLng__lte=request.data['nLng'])
if (results.count() > 0):
data = []
for item in results:
mission = item.mapSerialize()
data.append(mission)
return Response(data, status=status.HTTP_200_OK)
#---------------------------------------------------------------------------------------------------
@api_view(['POST'])
@permission_classes((AllowAny, ))
def user_login(request):
user = authenticate(username=request.data['username'], password=request.data['password'])
if user is None:
return Response('error_USER_UNKNOWN', status=status.HTTP_400_BAD_REQUEST)
return Response(UserTokenSerializer(user).data, status=status.HTTP_200_OK)
#---------------------------------------------------------------------------------------------------
@api_view(['POST'])
@permission_classes((AllowAny, ))
def user_register(request):
if request.data['password1'] != request.data['password2']:
return Response('error_PASSWORDS_NOT_EQUAL', status=status.HTTP_400_BAD_REQUEST)
try:
user = get_user_model().objects.create_user(request.data['username'], request.data['email'], request.data['password1'])
except IntegrityError:
results = get_user_model().objects.filter(username=request.data['username'])
if results.count() > 0:
return Response('error_USERNAME_ALREADY_EXISTS', status=status.HTTP_400_BAD_REQUEST)
return Response('error_INTEGRITY_ERROR', status=status.HTTP_400_BAD_REQUEST)
token = Token.objects.create(user=user)
authenticate(username=request.data['username'], password=request.data['password1'])
return Response(UserTokenSerializer(user).data, status=status.HTTP_201_CREATED)
#---------------------------------------------------------------------------------------------------
@api_view(['POST'])
@permission_classes((AllowAny, ))
def user_logout(request):
logout(request)
return Response(None, status=status.HTTP_200_OK)
#---------------------------------------------------------------------------------------------------
@api_view(['POST'])
@permission_classes((IsAuthenticated, ))
def user_getDetails(request):
data = {
'loved': [],
'completed': [],
}
results = request.user.mosaics_loved.all()
if results.count() > 0:
for item in results:
mosaic = item.overviewSerialize()
data['loved'].append(mosaic)
results = request.user.mosaics_completed.all()
if results.count() > 0:
for item in results:
mosaic = item.overviewSerialize()
data['completed'].append(mosaic)
return Response(data, status=status.HTTP_200_OK)
#---------------------------------------------------------------------------------------------------
@api_view(['GET'])
@permission_classes((AllowAny, ))
def user_getProfile(request):
data = {
'name': request.user.username,
'faction': None,
'superuser': request.user.is_superuser,
}
if request.user.profile:
data['faction'] = request.user.profile.faction
return Response(data, status=status.HTTP_200_OK)
#---------------------------------------------------------------------------------------------------
@api_view(['POST'])
@permission_classes((IsAuthenticated, ))
def user_edit(request):
request.user.username = request.data['name']
request.user.save()
request.user.profile.faction = request.data['faction']
request.user.profile.save()
return Response(None, status=status.HTTP_200_OK)
#---------------------------------------------------------------------------------------------------
@api_view(['POST'])
@permission_classes((IsAuthenticated, ))
def user_getRegisteredMissions(request):
missions = None
results = Mission.objects.filter(mosaic__isnull = True).order_by('title')
if results.count() > 0:
missions = []
for item in results:
temp = item.overviewSerialize()
missions.append(temp)
return Response(missions, status=status.HTTP_200_OK)
#---------------------------------------------------------------------------------------------------
@api_view(['POST'])
@permission_classes((IsAuthenticated, ))
def mosaic_create(request):
mosaic = Mosaic( registerer = request.user,
cols = int(request.data['columns']),
type = request.data['type'],
city = request.data['city'],
title = request.data['title'],
region = request.data['region'],
country = request.data['country']
)
mosaic.save()
for m in request.data['missions']:
result = Mission.objects.filter(ref=m['ref'])
if result.count() > 0:
item = result[0]
item.mosaic = mosaic
item.order = m['order']
item.save()
mosaic.computeInternalData()
return Response(mosaic.ref, status=status.HTTP_200_OK)
#---------------------------------------------------------------------------------------------------
@api_view(['GET'])
@permission_classes((AllowAny, ))
def mosaic_view(request, ref):
data = None
result = Mosaic.objects.filter(ref=ref)
if result.count() > 0:
mosaic = result[0]
data = mosaic.detailsSerialize()
if mosaic.lovers.filter(username=request.user.username).count() > 0:
data['is_loved'] = True
if mosaic.completers.filter(username=request.user.username).count() > 0:
data['is_completed'] = True
return Response(data, status=status.HTTP_200_OK)
#---------------------------------------------------------------------------------------------------
@api_view(['POST'])
@permission_classes((IsAuthenticated, ))
def mosaic_edit(request):
if request.user.is_superuser:
result = Mosaic.objects.filter(ref=request.data['ref'])
else:
result = Mosaic.objects.filter(ref=request.data['ref'], registerer=request.user)
if result.count() > 0:
mosaic = result[0]
mosaic.city = request.data['city']
mosaic.type = request.data['type']
mosaic.cols = request.data['cols']
mosaic.title = request.data['title']
mosaic.region = request.data['region']
mosaic.country = request.data['country']
mosaic.save()
data = mosaic.serialize()
return Response(data, status=status.HTTP_200_OK)
return Response(None, status=status.HTTP_404_NOT_FOUND)
#---------------------------------------------------------------------------------------------------
@api_view(['POST'])
@permission_classes((IsAuthenticated, ))
def mosaic_reorder(request):
if request.user.is_superuser:
result = Mosaic.objects.filter(ref=request.data['ref'])
else:
result = Mosaic.objects.filter(ref=request.data['ref'], registerer=request.user)
if result.count() > 0:
mosaic = result[0]
for item in request.data['missions']:
result = Mission.objects.filter(ref=item['ref'])
if result.count() > 0:
mission = result[0]
mission.order = item['order']
mission.save()
mosaic.save()
data = mosaic.detailsSerialize()
return Response(data, status=status.HTTP_200_OK)
return Response(None, status=status.HTTP_404_NOT_FOUND)
#---------------------------------------------------------------------------------------------------
@api_view(['POST'])
@permission_classes((IsAuthenticated, ))
def mosaic_delete(request):
if request.user.is_superuser:
result = Mosaic.objects.filter(ref=request.data['ref'])
else:
result = Mosaic.objects.filter(ref=request.data['ref'], registerer=request.user)
if result.count() > 0:
mosaic = result[0]
mosaic.delete()
return Response(None, status=status.HTTP_200_OK)
#---------------------------------------------------------------------------------------------------
@api_view(['POST'])
@permission_classes((IsAuthenticated, ))
def mosaic_removeMission(request):
if request.user.is_superuser:
result = Mosaic.objects.filter(ref=request.data['ref'])
else:
result = Mosaic.objects.filter(ref=request.data['ref'], registerer=request.user)
if result.count() > 0:
mosaic = result[0]
result = Mission.objects.filter(ref=request.data['mission'], mosaic=mosaic)
if result.count() > 0:
mission = result[0]
mission.mosaic = None
mission.save()
mosaic.save()
data = mosaic.serialize()
return Response(data, status=status.HTTP_200_OK)
return Response(None, status=status.HTTP_404_NOT_FOUND)
#---------------------------------------------------------------------------------------------------
@api_view(['POST'])
@permission_classes((IsAuthenticated, ))
def mosaic_addMission(request):
if request.user.is_superuser:
result = Mosaic.objects.filter(ref=request.data['ref'])
else:
result = Mosaic.objects.filter(ref=request.data['ref'], registerer=request.user)
if result.count() > 0:
mosaic = result[0]
result = Mission.objects.filter(ref=request.data['mission'], mosaic__isnull=True)
if result.count() > 0:
mission = result[0]
mission.mosaic = mosaic
mission.order = request.data['order']
mission.save()
mosaic.save()
data = mosaic.serialize()
return Response(data, status=status.HTTP_200_OK)
return Response(None, status=status.HTTP_404_NOT_FOUND)
#---------------------------------------------------------------------------------------------------
@api_view(['POST'])
@permission_classes((IsAuthenticated, ))
def mosaic_love(request):
result = Mosaic.objects.filter(ref=request.data['ref'])
if result.count() > 0:
mosaic = result[0]
result = mosaic.lovers.filter(username=request.user.username)
if result.count() < 1:
mosaic.lovers.add(request.user)
return Response(None, status=status.HTTP_200_OK)
#---------------------------------------------------------------------------------------------------
@api_view(['POST'])
@permission_classes((IsAuthenticated, ))
def mosaic_unlove(request):
result = Mosaic.objects.filter(ref=request.data['ref'])
if result.count() > 0:
mosaic = result[0]
result = mosaic.lovers.filter(username=request.user.username)
if result.count() > 0:
mosaic.lovers.remove(request.user)
return Response(None, status=status.HTTP_200_OK)
#---------------------------------------------------------------------------------------------------
@api_view(['POST'])
@permission_classes((IsAuthenticated, ))
def mosaic_complete(request):
result = Mosaic.objects.filter(ref=request.data['ref'])
if result.count() > 0:
mosaic = result[0]
result = mosaic.completers.filter(username=request.user.username)
if result.count() < 1:
mosaic.completers.add(request.user)
return Response(None, status=status.HTTP_200_OK)
#---------------------------------------------------------------------------------------------------
@api_view(['POST'])
@permission_classes((IsAuthenticated, ))
def mosaic_uncomplete(request):
result = Mosaic.objects.filter(ref=request.data['ref'])
if result.count() > 0:
mosaic = result[0]
result = mosaic.completers.filter(username=request.user.username)
if result.count() > 0:
mosaic.completers.remove(request.user)
return Response(None, status=status.HTTP_200_OK)
#---------------------------------------------------------------------------------------------------
@api_view(['POST'])
@permission_classes((IsAuthenticated, ))
def mission_delete(request):
result = Mission.objects.filter(ref=request.data['ref'])
if result.count() > 0:
        result[0].delete()
return Response(None, status=status.HTTP_200_OK)
#---------------------------------------------------------------------------------------------------
@api_view(['POST'])
@permission_classes((IsAuthenticated, ))
def mission_order(request):
result = Mission.objects.filter(ref=request.data['ref'])
if result.count() > 0:
mission = result[0]
mission.order = request.data['order']
mission.save()
return Response(None, status=status.HTTP_200_OK)
#---------------------------------------------------------------------------------------------------
@api_view(['GET'])
@permission_classes((AllowAny, ))
def data_getLastestMosaics(request):
results = Mosaic.objects.order_by('-pk')[:12]
if results.count() > 0:
data = []
for item in results:
mosaic = item.overviewSerialize()
data.append(mosaic)
return Response(data, status=status.HTTP_200_OK)
return Response(None, status=status.HTTP_404_NOT_FOUND)
#---------------------------------------------------------------------------------------------------
@api_view(['GET'])
@permission_classes((AllowAny, ))
def data_getMosaicsByCountry(request):
data = None
results = Mosaic.objects.values('country').distinct()
if (results.count() > 0):
data = []
for item in results:
country = {
'mosaics': Mosaic.objects.filter(country=item['country']).count(),
'name': item['country'],
}
data.append(country)
return Response(data, status=status.HTTP_200_OK)
#---------------------------------------------------------------------------------------------------
@api_view(['POST'])
@permission_classes((AllowAny, ))
def data_searchForMissions(request):
results = Mission.objects.filter(mosaic__isnull=True).filter(Q(title__icontains=request.data['text']) | Q(creator__icontains=request.data['text'])).order_by('creator', 'title')
if (results.count() > 0):
data = { 'missions': [], }
for item in results:
data['missions'].append(item.overviewSerialize())
else:
data = { 'missions': None, }
return Response(data, status=status.HTTP_200_OK)
#---------------------------------------------------------------------------------------------------
@api_view(['POST'])
@permission_classes((AllowAny, ))
def data_searchForMosaics(request):
array = []
# Creator search
results = Mosaic.objects.filter(creators__icontains=request.data['text'])
if (results.count() > 0):
for mosaic in results:
array.append(mosaic)
# Title search
results = Mosaic.objects.filter(title__icontains=request.data['text'])
if (results.count() > 0):
for mosaic in results:
array.append(mosaic)
# Country search
results = Mosaic.objects.filter(country__icontains=request.data['text'])
if (results.count() > 0):
for mosaic in results:
array.append(mosaic)
# Region search
results = Mosaic.objects.filter(region__icontains=request.data['text'])
if (results.count() > 0):
for mosaic in results:
array.append(mosaic)
# City search
results = Mosaic.objects.filter(city__icontains=request.data['text'])
if (results.count() > 0):
for mosaic in results:
array.append(mosaic)
if (len(array) > 0):
temp = list(set(array))
data = { 'mosaics': [], }
for item in temp:
data['mosaics'].append(item.overviewSerialize())
else:
data = { 'mosaics': [], }
return Response(data, status=status.HTTP_200_OK)
#---------------------------------------------------------------------------------------------------
@api_view(['POST'])
@permission_classes((AllowAny, ))
def map_getMosaics(request):
data = None
results = Mosaic.objects.filter(startLat__gte=request.data['sLat'], startLng__gte=request.data['sLng']).filter(startLat__lte=request.data['nLat'], startLng__lte=request.data['nLng'])
if (results.count() > 0):
data = []
for item in results:
mosaic = item.mapSerialize()
data.append(mosaic)
return Response(data, status=status.HTTP_200_OK)
#---------------------------------------------------------------------------------------------------
@api_view(['POST'])
@permission_classes((AllowAny, ))
def map_getMosaicOverview(request):
data = None
results = Mosaic.objects.filter(ref=request.data['ref'])
if (results.count() > 0):
data = []
item = results[0]
mosaic = item.overviewSerialize()
data.append(mosaic)
return Response(data, status=status.HTTP_200_OK)
#---------------------------------------------------------------------------------------------------
@api_view(['POST'])
@permission_classes((IsAuthenticated, ))
def comment_add(request):
data = None
return Response(data, status=status.HTTP_200_OK)
#---------------------------------------------------------------------------------------------------
@api_view(['POST'])
@permission_classes((IsAuthenticated, ))
def comment_update(request):
data = None
return Response(data, status=status.HTTP_200_OK)
#---------------------------------------------------------------------------------------------------
@api_view(['POST'])
@permission_classes((IsAuthenticated, ))
def comment_delete(request):
data = None
return Response(data, status=status.HTTP_200_OK)
#---------------------------------------------------------------------------------------------------
@api_view(['POST'])
@permission_classes((IsAuthenticated, ))
def adm_getCountries(request):
data = None
results = Mosaic.objects.values('country').order_by('country').distinct()
if (results.count() > 0):
data = { 'countries': [], }
for item in results:
country = {
'name': item['country'],
}
data['countries'].append(country)
return Response(data, status=status.HTTP_200_OK)
#---------------------------------------------------------------------------------------------------
@api_view(['POST'])
@permission_classes((IsAuthenticated, ))
def adm_getRegions(request):
data = None
results = Mosaic.objects.filter(country=request.data['country']).values('region').order_by('region').distinct()
if (results.count() > 0):
data = { 'regions': [], }
for item in results:
region = {
'name': item['region'],
}
data['regions'].append(region)
return Response(data, status=status.HTTP_200_OK)
#---------------------------------------------------------------------------------------------------
@api_view(['POST'])
@permission_classes((IsAuthenticated, ))
def adm_renameRegion(request):
data = None
results = Mosaic.objects.filter(country=request.data['country'], region=request.data['region'])
if (results.count() > 0):
for item in results:
item.region = request.data['new_region']
item.save()
return Response(data, status=status.HTTP_200_OK)
#---------------------------------------------------------------------------------------------------
@api_view(['POST'])
@permission_classes((AllowAny, ))
def adm_getMosaics(request):
data = None
from django.db.models import Count
fieldname = 'name'
results = Mission.objects.filter(mosaic__isnull=True).values(fieldname).order_by(fieldname).annotate(count=Count(fieldname)).order_by('-count')
if (results.count() > 0):
data = []
for item in results:
if item['count'] >= 6:
obj = {
'name': item[fieldname],
'count': item['count'],
}
data.append(obj)
return Response(data, status=status.HTTP_200_OK)
| [
"deconinck.frederic@yahoo.fr"
] | deconinck.frederic@yahoo.fr |
d4eb77db9b7726504b090591c1dfc46b5efe988e | ea49dd7d31d2e0b65ce6aadf1274f3bb70abfaf9 | /problems/0264_ugly-number-ii/solution.py | 91e0790f61309372e271b7a2ff6231de0cd245d9 | [] | no_license | yychuyu/LeetCode | 907a3d7d67ada9714e86103ac96422381e75d683 | 48384483a55e120caf5d8d353e9aa287fce3cf4a | refs/heads/master | 2020-03-30T15:02:12.492378 | 2019-06-19T01:52:45 | 2019-06-19T01:52:45 | 151,345,944 | 134 | 331 | null | 2019-08-01T02:56:10 | 2018-10-03T01:26:28 | C++ | UTF-8 | Python | false | false | 340 | py | class Solution:
def nthUglyNumber(self, n: int) -> int:
ids = [0, 0, 0]
uglys = [1,]
for i in range(1, n):
a = uglys[ids[0]] * 2
b = uglys[ids[1]] * 3
c = uglys[ids[2]] * 5
min_ = min(a, b, c)
if a == min_: ids[0] += 1
if b == min_: ids[1] += 1
if c == min_: ids[2] += 1
uglys.append(min_)
return uglys[-1]
| [
"xuzuhai163@163.com"
] | xuzuhai163@163.com |
d32b24c6e808c080f9232d7be4d87ba552ad7c93 | 8a18517d6aa8464e1fac75d7801b85b682f42c6b | /tests/__mocks__/pika.py | dc56896081e56148d83b633cc59f748658e4e7b0 | [
"MIT"
] | permissive | rasyadhs/rabbitmq-python-adapter | 68f24da78c9c1e723fba65c7482ec95ff9977795 | 0a01a982040526a867a936586055abaa9e529f8b | refs/heads/master | 2022-01-13T03:42:00.751577 | 2019-06-29T20:06:31 | 2019-06-29T20:06:31 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 475 | py | class Channel:
def __init__(self): pass
def exchange_declare(self): pass
def queue_declare(self, queue, durable): pass
def queue_bind(self, queue, exchange): pass
def basic_qos(self, prefetch_count): pass
def basic_consume(self, queue, on_message_callback): pass
def start_consuming(self): pass
def basic_publish(self): pass
def basic_ack(self): pass
class Connection:
def __init__(self): pass
def channel(self): return Channel()
| [
"delucca@pm.me"
] | delucca@pm.me |
b68cf92089929f2110352f2f4eb1280d4458603c | 64825bc67c5b110cf5431af85bb17e080f549722 | /rate_recorder/rate_scraper/settings.py | 780f56a6075ff103518f0014cb2347940badaa12 | [] | no_license | quratulain25/RateRecorder | 08a7062863a8963e19e132f649d7626a9335e08a | 477477805ddcf1a81e19d228229abc483eceec2d | refs/heads/master | 2023-06-28T08:18:16.592263 | 2021-08-04T07:29:00 | 2021-08-04T07:29:00 | 382,623,761 | 0 | 0 | null | 2021-07-06T18:50:24 | 2021-07-03T13:34:54 | null | UTF-8 | Python | false | false | 3,518 | py | # -*- coding: utf-8 -*-
# Scrapy settings for rate_scraper project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# http://doc.scrapy.org/en/latest/topics/settings.html
# http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
# http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
BOT_NAME = 'rate_scraper'
SPIDER_MODULES = ['rate_scraper.spiders']
NEWSPIDER_MODULE = 'rate_scraper.spiders'
# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'rate_scraper (+http://www.yourdomain.com)'
# Obey robots.txt rules
ROBOTSTXT_OBEY = True
# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32
# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16
# Disable cookies (enabled by default)
#COOKIES_ENABLED = False
# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False
# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
# 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
# 'Accept-Language': 'en',
#}
# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
# 'rate_scraper.middlewares.RaterecorderSpiderMiddleware': 543,
#}
# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
# 'rate_scraper.middlewares.MyCustomDownloaderMiddleware': 543,
#}
# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
#EXTENSIONS = {
# 'scrapy.extensions.telnet.TelnetConsole': None,
#}
# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
# 'rate_scraper.custom_pipelines.xlsx_exporter_pipeline.XlxsExporterPipeline': 300
'rate_scraper.custom_pipelines.pipeline.RateRecorderPipeline': 300
}
# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False
# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
import sys
sys.path.append('/')
import os
os.environ['DJANGO_SETTINGS_MODULE'] = 'rate_scraper.settings'
# If you use django outside of manage.py context, you
# need to explicitly setup the django
import django
django.setup()
| [
"quratulain@arbisoft.com"
] | quratulain@arbisoft.com |
0ae814ba961f86c656ad2740c22ee6395fab08a0 | a689ed08848b9720d6cc5247380cb8198ba884f9 | /rules/php/CVI_1011.py | 0b04bf075c61d5b1a49fbf0f9c71b0a870f4d322 | [
"MIT"
] | permissive | DictionaryHouse/Cobra-White | 72ec08973ca38b69dec77be30660d75beba81b96 | 3bc1a50d300c64cc53a3f79c767e2dc3b05efe7d | refs/heads/master | 2021-08-18T16:34:56.130546 | 2017-11-23T09:00:23 | 2017-11-23T09:00:23 | 111,886,667 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,044 | py | # -*- coding: utf-8 -*-
"""
CVI-1011
~~~~
Remote command execute
:author: LoRexxar <LoRexxar@gmail.com>
:homepage: https://github.com/LoRexxar/cobra
:license: MIT, see LICENSE for more details.
:copyright: Copyright (c) 2017 LoRexxar. All rights reserved
"""
from cobra.file import file_grep
class CVI_1011():
"""
rule class
"""
def __init__(self):
self.svid = 1011
self.language = "PHP"
self.author = "LoRexxar/wufeifei"
self.vulnerability = "RCE"
self.description = "Remote command execute"
# status
self.status = True
        # rule match configuration
self.match_mode = "function-param-regex"
self.match = "(system|passthru|exec|pcntl_exec|shell_exec|popen|proc_open|ob_start|expect_popen|mb_send_mail|w32api_register_function|w32api_invoke_function|ssh2_exec)"
def main(self, regex_string):
"""
regex string input
:regex_string: regex match string
:return:
"""
pass
| [
"lorexxar@gmail.com"
] | lorexxar@gmail.com |
ba2eeea330b4ea20383b5a962bc8097161c2bada | 86f725b8c2d5be364c70ee959215d7dd7a904767 | /chapter4/copy.py | 3fa278cac1758e78ea9e3426b3b9711a013ad9ba | [] | no_license | whsasf/python | 676a917835d94eb577e6313a56594e761e79f84f | 98344ee5afc4ef92022c36943e08361a241b2c9e | refs/heads/master | 2021-01-24T23:24:05.120031 | 2018-07-18T05:33:45 | 2018-07-18T05:33:45 | 123,283,171 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 162 | py | #!/usr/bin/env python
import copy
a = {'user':'ram','num':[1,2,3]}
print id (a)
b = a
print id (b)
c = a.copy ()
print id (c)
d = deepcopy (a)
print id (d)
| [
"whsasf@126.com"
] | whsasf@126.com |
92adf06443b5aeab357aede04f2016bbca16aa45 | 320b8c9920d00b3ad714543e53993d9313f8a11f | /FIND ALL FOUR SUM NUMBERS/main.py | fb5fbf9aabd6be5fffbc0ff997e5cd8c34b00998 | [] | no_license | s0409/DATA-STRUCTURES-AND-ALGORITHMS | 6c6e7e81d7b0b92399645f32536c66440f43748e | fd0762878859aeda5f0dc08332edecb5e80ed2ff | refs/heads/main | 2023-07-06T21:51:55.582015 | 2021-08-15T11:16:35 | 2021-08-15T11:16:35 | 391,643,159 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,221 | py | def foursum(arr,n,k):
n=len(arr)
ans=[]
if n<4:
return ans
arr.sort()
for i in range(0,n-3):
        # if the current element is positive and already greater than k,
        # no further quadruplet can be found
if arr[i]>0 and arr[i]>k:
break
#removing duplicates
if i>0 and arr[i]==arr[i-1]:
continue
for j in range(i+1,n-2):
if j>i+1 and arr[j]==arr[j-1]:
continue
left=j+1
right=n-1
while left<right:
old_l=left
old_r=right
#calculate current sum
sum=arr[i]+arr[j]+arr[left]+arr[right]
if sum==k:
ans.append([arr[i],arr[j],arr[left],arr[right]])
#removing duplicates
while left<right and arr[left]==arr[old_l]:
left+=1
while left<right and arr[right]==arr[old_r]:
right-=1
elif sum>k:
right-=1
else:
left+=1
return ans
arr=[10,2,3,4,5,7,8]
n=len(arr)
k=23
print(foursum(arr,n,k))
| [
"noreply@github.com"
] | s0409.noreply@github.com |
bf02edfd8725f08f0c3cb77c29cd37d4d369fe10 | d1b80acd75e9138adf6ee070ddd5d02981544dc7 | /rigl/sparse_utils_test.py | aa5d6704e32971da13b277d33d3b72ab1c96f27a | [] | no_license | kaileymonn/RigL-Experiments | e27d84c649059d78d6e0f01f5d0749969b66c6ff | b162653d8c7fe886fdcdd3abb9fd857c94366ac2 | refs/heads/master | 2020-12-05T06:59:51.526153 | 2020-07-08T18:26:30 | 2020-07-08T18:26:30 | 232,041,429 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 6,088 | py | # coding=utf-8
# Copyright 2019 RigL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for the data_helper input pipeline and the training process.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import absl.testing.parameterized as parameterized
import numpy as np
from rigl import sparse_utils
import tensorflow as tf
class GetMaskRandomTest(tf.test.TestCase, parameterized.TestCase):
def _setup_session(self):
"""Resets the graph and returns a fresh session."""
tf.reset_default_graph()
sess = tf.Session()
return sess
@parameterized.parameters(((30, 40), 0.5), ((1, 2, 1, 4), 0.8), ((3,), 0.1))
def testMaskConnectionDeterminism(self, shape, sparsity):
sess = self._setup_session()
mask = tf.ones(shape)
mask1 = sparse_utils.get_mask_random(mask, sparsity, tf.int32)
mask2 = sparse_utils.get_mask_random(mask, sparsity, tf.int32)
mask1_array, = sess.run([mask1])
mask2_array, = sess.run([mask2])
self.assertEqual(np.sum(mask1_array), np.sum(mask2_array))
@parameterized.parameters(((30, 4), 0.5, 60), ((1, 2, 1, 4), 0.8, 1),
((30,), 0.1, 27))
def testMaskFraction(self, shape, sparsity, expected_ones):
sess = self._setup_session()
mask = tf.ones(shape)
mask1 = sparse_utils.get_mask_random(mask, sparsity, tf.int32)
mask1_array, = sess.run([mask1])
self.assertEqual(np.sum(mask1_array), expected_ones)
@parameterized.parameters(tf.int32, tf.float32, tf.int64, tf.float64)
def testMaskDtype(self, dtype):
_ = self._setup_session()
mask = tf.ones((3, 2))
mask1 = sparse_utils.get_mask_random(mask, 0.5, dtype)
self.assertEqual(mask1.dtype, dtype)
class GetSparsitiesTest(tf.test.TestCase, parameterized.TestCase):
def _setup_session(self):
"""Resets the graph and returns a fresh session."""
tf.reset_default_graph()
sess = tf.Session()
return sess
@parameterized.parameters(0., 0.4, 0.9)
def testSparsityDictRandom(self, default_sparsity):
_ = self._setup_session()
all_masks = [tf.get_variable(shape=(2, 3), name='var1/mask'),
tf.get_variable(shape=(2, 3), name='var2/mask'),
tf.get_variable(shape=(1, 1, 3), name='var3/mask')]
custom_sparsity = {'var1': 0.8}
sparsities = sparse_utils.get_sparsities(
all_masks, 'random', default_sparsity, custom_sparsity)
self.assertEqual(sparsities[all_masks[0].name], 0.8)
self.assertEqual(sparsities[all_masks[1].name], default_sparsity)
self.assertEqual(sparsities[all_masks[2].name], default_sparsity)
@parameterized.parameters(0.1, 0.4, 0.9)
def testSparsityDictErdosRenyiCustom(self, default_sparsity):
_ = self._setup_session()
all_masks = [tf.get_variable(shape=(2, 4), name='var1/mask'),
tf.get_variable(shape=(2, 3), name='var2/mask'),
tf.get_variable(shape=(1, 1, 3), name='var3/mask')]
custom_sparsity = {'var3': 0.8}
sparsities = sparse_utils.get_sparsities(
all_masks, 'erdos_renyi', default_sparsity, custom_sparsity)
self.assertEqual(sparsities[all_masks[2].name], 0.8)
@parameterized.parameters(0.1, 0.4, 0.9)
def testSparsityDictErdosRenyiError(self, default_sparsity):
_ = self._setup_session()
all_masks = [tf.get_variable(shape=(2, 4), name='var1/mask'),
tf.get_variable(shape=(2, 3), name='var2/mask'),
tf.get_variable(shape=(1, 1, 3), name='var3/mask')]
custom_sparsity = {'var3': 0.8}
sparsities = sparse_utils.get_sparsities(
all_masks, 'erdos_renyi', default_sparsity, custom_sparsity)
self.assertEqual(sparsities[all_masks[2].name], 0.8)
@parameterized.parameters(((2, 3), (2, 3), 0.5),
((1, 1, 2, 3), (1, 1, 2, 3), 0.3),
((8, 6), (4, 3), 0.7),
((80, 4), (20, 20), 0.8),
((2, 6), (2, 3), 0.8))
def testSparsityDictErdosRenyiSparsitiesScale(
self, shape1, shape2, default_sparsity):
_ = self._setup_session()
all_masks = [tf.get_variable(shape=shape1, name='var1/mask'),
tf.get_variable(shape=shape2, name='var2/mask')]
custom_sparsity = {}
sparsities = sparse_utils.get_sparsities(
all_masks, 'erdos_renyi', default_sparsity, custom_sparsity)
sparsity1 = sparsities[all_masks[0].name]
size1 = np.prod(shape1)
sparsity2 = sparsities[all_masks[1].name]
size2 = np.prod(shape2)
# Ensure that total number of connections are similar.
expected_zeros_uniform = (
sparse_utils.get_n_zeros(size1, default_sparsity) +
sparse_utils.get_n_zeros(size2, default_sparsity))
# Ensure that total number of connections are similar.
expected_zeros_current = (
sparse_utils.get_n_zeros(size1, sparsity1) +
sparse_utils.get_n_zeros(size2, sparsity2))
# Due to rounding we can have some difference. This is expected but should
# be less than number of rounding operations we make.
diff = abs(expected_zeros_uniform - expected_zeros_current)
tolerance = 2
self.assertLessEqual(diff, tolerance)
# Ensure that ErdosRenyi proportions are preserved.
factor1 = (shape1[-1] + shape1[-2]) / float(shape1[-1] * shape1[-2])
factor2 = (shape2[-1] + shape2[-2]) / float(shape2[-1] * shape2[-2])
self.assertAlmostEqual((1 - sparsity1) / factor1,
(1 - sparsity2) / factor2)
if __name__ == '__main__':
tf.test.main()
| [
"kaileymon@gmail.com"
] | kaileymon@gmail.com |
0ea6cd5922f4f8e8516149615e7c04ede7c224bf | 15dd0a0b4a44481259d937af425591bb7c5a5109 | /manage.py | d5e9a9828cfca196a026ddc2406099413c1f39f2 | [] | no_license | Drew81/ForoMarbleCalculator | 7cd13c228ce66428b52835ff323afdd09d3ea151 | 803e8956cbc9ed3b4c3bd54b16812dced490d86c | refs/heads/master | 2020-04-11T23:17:16.386484 | 2019-04-18T15:00:55 | 2019-04-18T15:00:55 | 162,069,810 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 538 | py | #!/usr/bin/env python
import os
import sys
if __name__ == "__main__":
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "matcal.settings")
try:
from django.core.management import execute_from_command_line
except ImportError as exc:
raise ImportError(
"Couldn't import Django. Are you sure it's installed and "
"available on your PYTHONPATH environment variable? Did you "
"forget to activate a virtual environment?"
) from exc
execute_from_command_line(sys.argv)
| [
"andrewgilbert81@gmail.com"
] | andrewgilbert81@gmail.com |
a74e5dccff64ec1c98ba218b71323dc71cefd602 | 2a26a285bdb730f7d915800802baa5490c64692b | /bigml/generators/boosted_tree.py | 96fb4a2805bb615010f7a83487da3bc6f92f33f6 | [
"Apache-2.0",
"LicenseRef-scancode-public-domain"
] | permissive | jaor/python | 87ee7aca2a21d6097f7dac7c3ad0eba74bb4c60b | 22698904f9b54234272fe2fea91a0eed692ca48a | refs/heads/master | 2023-08-08T01:15:27.878709 | 2023-08-02T20:58:39 | 2023-08-02T20:58:39 | 4,218,987 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 4,269 | py | # -*- coding: utf-8 -*-
#
# Copyright 2020-2023 BigML
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Tree level output for python
This module defines functions that generate python code to make local
predictions
"""
from bigml.tree_utils import COMPOSED_FIELDS, INDENT
from bigml.predict_utils.common import missing_branch, \
none_value, get_node, get_predicate, mintree_split
from bigml.generators.tree_common import value_to_print, map_data, \
missing_prefix_code, filter_nodes, split_condition_code
from bigml.util import NUMERIC
MISSING_OPERATOR = {
"=": "is",
"!=": "is not"
}
def missing_check_code(tree, offsets, fields,
field, depth, input_map, cmv):
"""Builds the code to predict when the field is missing
"""
node = get_node(tree)
code = "%sif (%s is None):\n" % \
(INDENT * depth,
map_data(fields[field]['slug'], input_map, True))
value = value_to_print(node[offsets["output"]], NUMERIC)
code += "%sreturn {\"prediction\":%s" % (INDENT * (depth + 1),
value)
code += "}\n"
cmv.append(fields[field]['slug'])
return code
def boosted_plug_in_body(tree, offsets, fields, objective_id, regression,
depth=1, cmv=None, input_map=False,
ids_path=None, subtree=True):
"""Translate the model into a set of "if" python statements.
`depth` controls the size of indentation. As soon as a value is missing
that node is returned without further evaluation.
"""
if cmv is None:
cmv = []
body = ""
term_analysis_fields = []
item_analysis_fields = []
node = get_node(tree)
children = [] if node[offsets["children#"]] == 0 else \
node[offsets["children"]]
children = filter_nodes(children, offsets, ids=ids_path, subtree=subtree)
if children:
# field used in the split
field = mintree_split(children)
has_missing_branch = (missing_branch(children) or
none_value(children))
# the missing is singled out as a special case only when there's
# no missing branch in the children list
one_branch = not has_missing_branch or \
fields[field]['optype'] in COMPOSED_FIELDS
if (one_branch and not fields[field]['slug'] in cmv):
body += missing_check_code(tree, offsets, fields,
field, depth, input_map, cmv)
for child in children:
[_, field, value, _, _] = get_predicate(child)
pre_condition = ""
# code when missing_splits has been used
if has_missing_branch and value is not None:
pre_condition = missing_prefix_code(child, fields, field,
input_map, cmv)
# complete split condition code
body += split_condition_code( \
child, fields,
depth, input_map, pre_condition,
term_analysis_fields, item_analysis_fields, cmv)
# value to be determined in next node
next_level = boosted_plug_in_body( \
child, offsets, fields, objective_id, regression, depth + 1,
cmv=cmv[:], input_map=input_map, ids_path=ids_path,
subtree=subtree)
body += next_level[0]
term_analysis_fields.extend(next_level[1])
item_analysis_fields.extend(next_level[2])
else:
value = value_to_print(node[offsets["output"]], NUMERIC)
body = "%sreturn {\"prediction\":%s" % (INDENT * depth, value)
body += "}\n"
return body, term_analysis_fields, item_analysis_fields
| [
"merce@bigml.com"
] | merce@bigml.com |
8005600941ed22738c2e03eff23041ffaf03ca1f | 59080f5116b9e8f625b5cc849eb14b7ff9d19f3d | /练习/练习题/小绿本 p17 1- 30.py | 11af9a69c97046adc15811681dea0f1d284c4d0e | [] | no_license | yyq1609/Python_road | eda2bcd946b480a05ec31cdcb65e35b3f3e739d1 | e9ba2f47c8dd2d00a6e5ddff03c546152efd8f49 | refs/heads/master | 2020-09-11T11:51:35.903284 | 2019-11-11T13:02:21 | 2019-11-11T13:02:21 | 222,054,462 | 1 | 0 | null | 2019-11-16T05:58:13 | 2019-11-16T05:58:12 | null | UTF-8 | Python | false | false | 10,427 | py | #!/usr/bin/env python
# -*- coding:utf-8 -*-
"""p3-17"""
s = "hello {}".format('henry')
"""
43. 将list按照下列规则排序,补全代码
"""
# li = [7, -8, 5, 4, 0, -2, -5]
# print(sorted(li, key=lambda x: [x < 0, abs(x)]))
"""
50. Given the dictionary d={"a":26,"g":20,"e":20,"c":24,"d":23,"f":21,"b":25}, sort it by value
"""
# d = {"a": 26, "g": 20, "e": 20, "c": 24, "d": 23, "f": 21, "b": 25}
# print(sorted(d.items(), key=lambda x: x[1]))
"""
56. Randomly pick 10 of the 100 numbers 0-99, with no repeats; you may design your own data structure
"""
# import random
# li = random.sample(range(100), k=10)
# print(li)
"""
57. In Python, check whether a dict contains these keys: "AAA", 'BB', 'C', "DD", 'EEE' (without using for/while)
"""
dic = {'AAA': 1, 'BB': 2, 'CC': 3, 'DD': 4}
li = ["AAA", 'BB', 'C', "DD", 'EEE']
s1 = set(dic.keys())
s2 = set(li)
# print(s1 & s2)
"""
58. Given a list ["This","is","a","Boy","!"] whose elements are all strings, sort it case-insensitively
"""
li = ['This', 'is', 'a', 'boy']
li.sort(key=str.lower)  # case-insensitive sort that keeps the original casing
# print(li)
"""
70. Implement the 9x9 multiplication table in Python (in two different ways)
"""
# [print(f'{i}*{j}={i*j}', end='\t') if i != j else print(f'{i}*{j}={i*j}', '\n') for i in range(1, 10) for j in range(1, i+1)]
# for i in range(1, 10):
# for j in range(1, i+1):
# if j == i:
# print(f'{i}*{j}={i*j}', '\n')
# else:
# print(f'{i}*{j}={i*j}', end='\t')
"""
76. a = dict(zip(('a', 'b', 'c', 'd', 'e'), (1,2,3,4,5))). What is a?
"""
a = dict(zip(('a', 'b', 'c', 'd', 'e'), (1, 2, 3, 4, 5)))
# print(a)
"""
108. What is the output?
"""
import math
# print(math.floor(5.5))
"""
1. Base conversion
"""
# print(int('0b1111011', base=2))
# print(bin(18))
# print(int('011', base=8))
# print(int('0x12', base=16))
# print(hex(87))
"""
2. Python's maximum recursion depth defaults to 1000
"""
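The claim above can be checked directly; a quick sketch (the 1000 limit is a common CPython default and may differ between builds):

```python
import sys

limit = sys.getrecursionlimit()  # CPython's default is typically 1000
print(limit)

# the limit can be raised when deep recursion is genuinely needed
sys.setrecursionlimit(3000)
print(sys.getrecursionlimit())
```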
"""
3. List common built-in functions
"""
# Type conversion: int, bool, str, list, tuple, dict, set
# Input/output: print, input
# Base conversion: bin, oct, int, hex
# Math-related: abs, max, min, float, round, divmod, sum
# map/filter/reduce/zip
# Encoding-related: chr, ord
# Others: len, type, id, range, open
"""
4. What do filter, map, and reduce do?
"""
# filter: selects items from an iterable according to a given predicate
# map: applies a change to every item of an iterable in bulk
# reduce: performs a given operation cumulatively over an iterable
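A small demonstration of all three on made-up data (note that in Python 3, reduce lives in functools):

```python
from functools import reduce

nums = [1, 2, 3, 4, 5, 6]

evens = list(filter(lambda x: x % 2 == 0, nums))  # keep items matching a predicate
squares = list(map(lambda x: x * x, nums))        # transform every item
total = reduce(lambda a, b: a + b, nums)          # fold the iterable to one value

print(evens)    # [2, 4, 6]
print(squares)  # [1, 4, 9, 16, 25, 36]
print(total)    # 21
```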
"""
5. The 9x9 multiplication table in one line
"""
# [print('%s*%s=%s' % (i, j, i * j)) if i == j else print('%s*%s=%s ' % (i, j, i * j), end='') \
#     for i in range(1, 10) for j in range(1, i+1)]
"""
6. What is a closure?
"""
# A closure is a function that can read variables from another function's local scope.
# In essence, a closure is a bridge connecting a function's inside with its outside.
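A minimal closure sketch (the `make_counter` name is invented for illustration): the inner function keeps access to `count` even after the outer function has returned.

```python
def make_counter():
    count = 0  # free variable captured by the inner function

    def counter():
        nonlocal count
        count += 1
        return count

    return counter  # the returned function still "remembers" count


c = make_counter()
print(c(), c(), c())  # 1 2 3
```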
"""
7. Briefly describe generators, iterators, and decorators, and their use cases
"""
# Generator: mainly used when building large amounts of data; to save memory, a generator can produce the items one by one as a for loop runs
# Iterator: the for loop is implemented internally through iterator operations
# Decorator: used to run custom operations before/after calling other functions or modules
"""
8. Use a generator to write a fib function, declared as fib(max); given a max value the function can be called like:
for i in range(0, 100):
print(fib(1000))
"""
li = [1, 1]
def fib(num=1000):
    a, b = 1, 1
    while a + b < num:
        b = li[-1] + li[-2]
        a = li[-2]
        li.append(b)
        yield b
for i in fib():
    print(i)
#
# for i in range(100):
# print(fib(1000))
"""
9. In one line, use filter and a lambda to output the elements at odd indexes of the following list
"""
# list_a = [12, 213, 22, 2, 2, 2, 22, 2, 2, 32]
# print([x for i, x in filter(lambda t: t[0] % 2 == 1, enumerate(list_a))])
"""
10. Write a base62encode function that
"""
# result = []
# li = [str(i) for i in range(10)]
#
#
# def check_list():
# i = 65
# while i <= 90:
# li.append(chr(i))
# i += 1
# i = 97
# while 97 <= i <= 122:
# li.append(chr(i))
# i += 1
#
#
# def run():
# while 1:
# num = input('please input a num: ')
# if not num.isdecimal():
# print('your num is wrong')
# continue
# num = int(num)
# return num
#
#
# def fun(count):
# a, b = divmod(count, 62)
# result.append(li[b])
#     if a >= 62:
#         fun(a)
#     else:
#         if a:
#             result.append(li[a])
# return ''.join(result[::-1])
#
#
# check_list()
# v = fun(run())
# print(v)
"""
11. Implement a decorator that limits how often the function may be called, e.g. once every 10s
"""
# import time
#
#
# def wrapper(func):
# start = 0
#
# def inner():
# nonlocal start
# if time.time() - start >= 10:
# start = time.time()
# v = func()
# return v
# else:
#             print('rate limited')
#
#
# return inner
#
#
# @wrapper
# def function():
# print('hello')
#
#
# while 1:
# function()
# time.sleep(1)
"""
12. Implement a decorator that makes a single call execute the function 5 times
"""
# def outside(num):
# def wrapper(func):
# def inner():
# v = []
# for i in range(num):
# v.append(func())
# return v
# return inner
# return wrapper
#
# @outside(5)
# def func():
# print('hello, Python')
#
# func()
"""
13. In one line of Python, print a list of the even numbers from 1 to 100 (a list comprehension or filter both work)
"""
# print([i for i in range(1, 101) if i % 2 == 0])
# print(list(filter(lambda i: i, range(2, 101, 2))))
# print(list(filter(lambda i: i if i % 2==0 else None, range(1, 101))))
"""
14. Explain how a generator differs from a function, then implement and briefly use a generator
"""
# 1. Syntactically similar to a function: a generator function looks almost identical to a regular function; both are defined with a def statement.
# \The difference is that a generator returns values with a yield statement, while a regular function returns a value with a return statement.
# 2. The iterator protocol is implemented automatically: for a generator, Python implements the iterator protocol automatically, so we can call its next
# \method, and when there are no more values to return, the generator automatically raises a StopIteration exception.
# 3. State suspension: a generator returns a value with a yield statement; yield suspends the generator function's state, keeping enough information
# \to resume execution later from where it left off.
# def g():
# print('one')
# yield 'two'
#
#
# g1 = g()
# print(g1.__next__())
# for i in [1, 2, 3, 4].__iter__():
# print(i)
"""
15. [i % 2 for i in range(10)] vs (i % 2 for i in range(10))
"""
# print([i % 2 for i in range(10)])
# print(list(i % 2 for i in range(10)))
"""
16. map(str, [1, 2, 3, 4, 5, 6, 7, 8, 9])
"""
# print(list(map(str, [1, 2, 3, 4, 5, 6, 7, 8, 9])))
"""
17. When defining a function in Python, how do you write variadic positional and keyword parameters?
"""
# def func(*args, k=5, **kwargs):
# pass
"""
18. What does enumerate mean in Python 3.5?
"""
# Enumeration: enumerate takes at least one iterable; it yields the items one by one while
# \attaching a sequence number to each element, starting from 0 by default (a different start can be passed in)
# def enumerate(sequence, start=0):
#     n = start
#     for elem in sequence:
#         yield n, elem
#         n += 1
# equivalent to
# for i in enumerate(range(100)):
# print(i)
"""
19. Describe the use of decorators and iterators in Python, and the difference between dict's items method and iteritems method
"""
# Decorator: used when you want custom operations run before and after calling certain modules or functions
# Iterator: mainly used with iterables; an iterator lets you fetch the elements of an iterable one by one
# dict.items is a list (Python 2)
# dict.iteritems is an iterator of type <type 'dictionary-itemiterator'> (Python 2)
# info = {'1': 1, '2': 2, '3': 3}
# val = info.iteritems()
# help (val)
# print(type(val))
# for i, j in val:
# print i, j
"""
20. Have you used functions from functools? What do they do?
"""
# functools.partial
# functools.reduce
# functools.wraps(func)  # usually needed when writing decorators, to fully wrap a function
"""
21. How do you tell whether a value is a function or a method?
"""
# 1. By their defining parameters: a method has a self parameter, a function does not
# 2. By how they are called: a function is called as fun(), while a method is generally called on an object
# from types import MethodType, FunctionType
# def f():
# pass
# print(isinstance(f, FunctionType))
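Extending that check to a bound method (the class below is just an illustration):

```python
from types import FunctionType, MethodType


class Demo:
    def method(self):
        pass


obj = Demo()
print(isinstance(Demo.method, FunctionType))  # True: looked up on the class, a plain function
print(isinstance(obj.method, MethodType))     # True: looked up on an instance, a bound method
```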
"""
22. Write a function that converts an IP address into an integer
"""
# ip = '192.168.12.87'
# res = int(''.join([bin(int(i)).replace('0b', '').zfill(8) for i in ip.split('.')]), base=2)
# print(res)
# def ip_transfer(ip):
# print(int(''.join([bin(int(i)).replace('0b', '').zfill(8) for i in ip.split('.')]), base=2))
#
#
# ip_transfer('192.168.12.87')
"""
23. Lambda expressions and their use cases
"""
# A lambda expression, also called an anonymous function, is mainly used in place of simple functions to reduce the amount of code
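A typical use is a throwaway key function passed to a higher-order call, for example:

```python
pairs = [(3, 'three'), (1, 'one'), (2, 'two')]
ordered = sorted(pairs, key=lambda p: p[0])   # sort by the first element of each tuple
print(ordered)
```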
"""
24. What is pass for?
"""
# Required by Python's syntax; used when no operation needs to be performed
"""
25. What are *args and **kwargs for?
"""
# These two parameters are generally used when defining functions
# *args collects any number of positional arguments into a tuple
# **kwargs collects any number of keyword arguments into a dict
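A two-line demonstration of what each parameter collects:

```python
def capture(*args, **kwargs):
    return args, kwargs


positional, keyword = capture(1, 2, x=3)
print(positional, keyword)  # positional args become a tuple, keyword args a dict
```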
"""
26. How do you set a global variable inside a function?
"""
# Use the global keyword: first bind the name to the module scope, then perform the assignment
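A tiny sketch of the pattern:

```python
count = 0


def bump():
    global count      # refer to the module-level name, then assign to it
    count += 1


bump()
print(count)
```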
"""
27. Read the code and write down the result
"""
# # Example 1
# def func(a, b=[]):
# b.append(a)
# print(b)
#
#
# func(1)
# func(1)
# func(1)
# func(1)
#
# # [1] [1, 1] [1, 1, 1] [1, 1, 1, 1]
#
# # Example 2
# def func(a, b={}):
# b[a] = 'v'
# print(b)
#
#
# func(1)
# func(2)
#
# # {1: 'v'}
# # {1: 'v', 2: 'v'}
"""
28. Read the code and write down the result: lambda
"""
# def num():
# return [lambda x: i * x for i in range(4)]
#
#
# print([m(2) for m in num()])
# [6, 6, 6, 6]
"""
29. Briefly describe the yield and yield from keywords
"""
# yield: used to define a generator function; the value after yield is produced as the generator is iterated, e.g. in a for loop
# yield from: used inside a generator function to delegate to another function or generator function; Python 3.3 and later
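A short sketch of delegation with yield from:

```python
def inner():
    yield 1
    yield 2


def outer():
    yield 0
    yield from inner()    # delegate to a sub-generator (Python 3.3+)


collected = list(outer())
print(collected)
```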
"""
30. There is a processFunc variable, initialized as processFunc = collapse and (lambda s: ''.join(s.split())) or (lambda s: s)
"""
# collapse = True
# processFunc = collapse and (lambda s: " ".join(s.split())) or (lambda s:s)
# print(processFunc('i\tam\ntest\tobject !'))
#
# collapse = False
# processFunc = collapse and (lambda s: " ".join(s.split())) or (lambda s:s)
# print(processFunc('i\tam\ntest\tobject !'))
| [
"958976577@qq.com"
] | 958976577@qq.com |
05cd662e53f25510e1345d688a990d79b9d89010 | 32aafc0c131aa7cdc6d238fa0653bc64c12df70a | /Aula07 - Operações aritmeticas.py | c5d8d6af14911203ed145d098712f78b0abede8d | [] | no_license | Vieiraork/Python-Learn | 2c2fd747e1fa3e8a28477809f77e29feffaa223e | 2af6c1b807f30d7ec543a8c7dd8f5761a6173b4f | refs/heads/master | 2022-12-30T05:57:41.423915 | 2020-10-20T22:08:55 | 2020-10-20T22:08:55 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 678 | py | n1 = int(input('Digite o primeiro número: '))
n2 = int(input('Enter the second number: '))
s = n1+n2
su = n1-n2
m = n1*n2
d = n1/n2
di = n1//n2
re = n1%n2
print('The sum is {}, the subtraction is {} and the multiplication is {}'.format(s, su, m))
print('The division is {:.2f}, the integer division is {} and the remainder is {}'.format(d, di, re))
"""
For integer division use // (two slashes)
For exponentiation use **
For modulo, i.e. the remainder of a division, use %
Order of precedence:
1st come parentheses ()
2nd comes exponentiation **
3rd come multiplication, division, integer division and modulo *, /, //, %
4th come addition and subtraction +, -
""" | [
"vieiraork@gmail.com"
] | vieiraork@gmail.com |
195283028cc1feebf1833b77d2c0d8fd2e5836f0 | dc86f54bf5f514dcdfed8580a8f59c82d6a2e6d9 | /Data Scientist with Python- Track/Writing Functions in Python/3. Decorators/3_closures.py | ac63611800d13c4cec594beb06307a1654ddf802 | [] | no_license | JPCLima/DataCamp-Python-2020 | cff4380979f9cb2476338fe70472654e400730b1 | b6b1ebd3d2af0f9a306988fa5374effb96e982eb | refs/heads/master | 2023-01-24T06:50:43.792883 | 2020-11-27T11:35:05 | 2020-11-27T11:35:05 | 295,113,104 | 6 | 0 | null | null | null | null | UTF-8 | Python | false | false | 785 | py | # Checking for closure
def return_a_func(arg1, arg2):
def new_func():
print('arg1 was {}'.format(arg1))
print('arg2 was {}'.format(arg2))
return new_func
my_func = return_a_func(2, 17)
print(my_func.__closure__ is not None)
print(len(my_func.__closure__) == 2)
# Get the values of the variables in the closure
closure_values = [
my_func.__closure__[i].cell_contents for i in range(2)
]
print(closure_values == [2, 17])
# Closures keep your values safe
def my_special_function():
print('You are running my_special_function()')
def get_new_func(func):
def call_func():
func()
return call_func
# Overwrite `my_special_function` with the new function
my_special_function = get_new_func(my_special_function)
my_special_function()
| [
"joaoplima@ua.pt"
] | joaoplima@ua.pt |
d88c8868f3af0e74585647dbfe75f28e66c69035 | c2c8915d745411a0268ee5ce18d8bf7532a09e1a | /cybox-2.1.0.5/cybox/objects/uri_object.py | 1c48683aa3013e98712b3e7bf3aafb554f2f1671 | [
"BSD-3-Clause"
] | permissive | asealey/crits_dependencies | 581d44e77f297af7edb78d08f0bf11ad6712b3ab | a8049c214c4570188f6101cedbacf669168f5e52 | refs/heads/master | 2021-01-17T11:50:10.020346 | 2014-12-28T06:53:01 | 2014-12-28T06:53:01 | 28,555,464 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 955 | py | # Copyright (c) 2014, The MITRE Corporation. All rights reserved.
# See LICENSE.txt for complete terms.
import cybox
import cybox.bindings.uri_object as uri_binding
from cybox.common import ObjectProperties, AnyURI
class URI(ObjectProperties):
_binding = uri_binding
_binding_class = uri_binding.URIObjectType
_namespace = 'http://cybox.mitre.org/objects#URIObject-2'
_XSI_NS = 'URIObj'
_XSI_TYPE = "URIObjectType"
TYPE_URL = "URL"
TYPE_GENERAL = "General URN"
TYPE_DOMAIN = "Domain Name"
TYPES = (TYPE_URL, TYPE_GENERAL, TYPE_DOMAIN)
value = cybox.TypedField("Value", AnyURI)
type_ = cybox.TypedField("type_", key_name="type")
def __init__(self, value=None, type_=None):
super(URI, self).__init__()
self.value = value
self.type_ = type_
def __str__(self):
return self.__unicode__().encode("utf-8")
def __unicode__(self):
return unicode(self.value)
| [
"ssnow@mitre.org"
] | ssnow@mitre.org |
87ad3b2fe843b77af5fbc56c4dd5274d06291bfa | c338108b98baa268ddab98d3373aaab9de0990b8 | /opendelta/utils/common_structures/t5.py | 8150fe2322c1bc5ec7f641f19b21adfe4c1b53c8 | [
"Apache-2.0"
] | permissive | thunlp/OpenDelta | 491973555f40db5f2678c5260b40640b783b80f2 | 067eed2304cb1bdfe462094e42a37de4de98edff | refs/heads/main | 2023-08-17T10:51:35.731082 | 2023-08-16T09:50:10 | 2023-08-16T09:50:10 | 459,158,400 | 800 | 66 | Apache-2.0 | 2023-07-21T08:50:43 | 2022-02-14T12:45:00 | Python | UTF-8 | Python | false | false | 2,602 | py | Mappings = {}
t5encoder = {"__name__":"encoder",
"embed_tokens": {"__name__":"embeddings"},
"block": {"__name__":"block",
"$": {"__name__":"$",
"layer.0": {"__name__":"attn",
"SelfAttention.q": {"__name__":"q"},
"SelfAttention.k": {"__name__":"k"},
"SelfAttention.v": {"__name__":"v"},
"SelfAttention.o": {"__name__":"proj"},
"SelfAttention.relative_attention_bias": {"__name__":""},
"layer_norm": {"__name__":"layer_norm"},
},
"layer.1": {"__name__":"ff",
"DenseReluDense.wi": {"__name__":"w1"},
"layer_norm": {"__name__":"layer_norm"},
"DenseReluDense.wo": {"__name__":"w2"},
}
}
},
"final_layer_norm": {"__name__":"layer_norm"},
}
t5decoder = {"__name__":"decoder",
"embed_tokens": {"__name__":"embeddings"},
"block": {"__name__":"block",
"$": {"__name__":"$",
"layer.0": {"__name__":"attn",
"SelfAttention.q": {"__name__":"q"},
"SelfAttention.k": {"__name__":"k"},
"SelfAttention.v": {"__name__":"v"},
"SelfAttention.o": {"__name__":"proj"},
"SelfAttention.relative_attention_bias": {"__name__":""},
"layer_norm": {"__name__":"layer_norm"},
},
"layer.1": {"__name__":"crossattn",
"EncDecAttention.q": {"__name__":"q"},
"EncDecAttention.k": {"__name__":"k"},
"EncDecAttention.v": {"__name__":"v"},
"EncDecAttention.o": {"__name__":"proj"},
"layer_norm": {"__name__":"layer_norm"},
},
"layer.2": {"__name__":"ff",
"DenseReluDense.wi": {"__name__":"w1"},
"layer_norm": {"__name__":"layer_norm"},
"DenseReluDense.wo": {"__name__":"w2"},
}
}
},
"final_layer_norm": {"__name__":"layer_norm"},
}
Mappings['T5Model'] = {
"shared": {"__name__":"embeddings"},
"encoder": t5encoder,
"decoder": t5decoder,
}
Mappings['T5ForConditionalGeneration'] = {
"shared": {"__name__":"embeddings"},
"encoder": t5encoder,
"decoder": t5decoder,
}
Mappings['T5EncoderModel'] = {
"shared": {"__name__":"embeddings"},
"encoder": t5encoder,
} | [
"shengdinghu@gmail.com"
] | shengdinghu@gmail.com |
85169b2322a3604f9f3ffbdbf3143108ed486497 | ae0852611a5946258fcd85865b43313ec4cfc554 | /binding.gyp | d36761c061ee729b0ccf30d736af96dedfb0dbfe | [
"MIT"
] | permissive | rhtpandeyIN/node-informixdb | 061aadd80dae8c90d98484e9c2bc3a795ad88086 | 96a380886d85013b246f9ce56dfb250647bace65 | refs/heads/master | 2021-07-18T11:31:03.222030 | 2019-05-09T13:39:58 | 2019-05-09T13:39:58 | 185,806,210 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,671 | gyp | {
'targets' : [
{
'target_name' : 'odbc_bindings',
'sources' : [
'src/odbc.cpp',
'src/odbc_connection.cpp',
'src/odbc_statement.cpp',
'src/odbc_result.cpp',
],
'include_dirs' : [
"<!(node -e \"require('nan')\")"
],
'defines' :
[
#'UNICODE',
'ODBC64'
],
'conditions' : [
[ 'OS != "zos"',
{ 'defines' : [ 'UNICODE'], }
]
],
"variables" : {
# Set the linker location
"ORIGIN_LIB_PATH%" : "$(CSDK_HOME)/lib/cli",
},
'conditions' : [
[ '(OS == "linux" and (target_arch =="ia32" or target_arch == "s390" or target_arch == "ppc32" or target_arch == "arm")) or (OS == "aix" and target_arch == "ppc")',
{
'conditions' : [],
'libraries' :
[
'-L$(CSDK_HOME)/lib/cli',
'-lthcli'
],
'include_dirs' :
[
'$(CSDK_HOME)/incl/cli'
],
'cflags' : ['-g'],
}
],
[ '(OS == "linux" or OS == "aix") and (target_arch =="x64" or target_arch == "s390x" or target_arch == "ppc64")',
{
'conditions' : [],
'libraries' :
[
'-L$(CSDK_HOME)/lib/cli ',
'-lthcli'
],
'include_dirs' :
[
'$(CSDK_HOME)/incl/cli'
],
'cflags' : ['-g -m64'],
}
],
[ 'OS == "zos" ',
{ 'libraries' : ['dsnao64c.x'],
'include_dirs' : ['build/include'],
'cflags' : ['-g']
}
],
[ 'OS == "mac" and target_arch =="x64" ',
{ 'xcode_settings' : {'GCC_ENABLE_CPP_EXCEPTIONS': 'YES' },
'libraries' :
[
'-L$(CSDK_HOME)/lib/cli',
'-lthcli'
],
'include_dirs' :
[
'$(CSDK_HOME)/incl/cli'
],
'cflags' : ['-g']
}
],
[ 'OS=="win" and target_arch =="ia32"',
{ 'sources' : ['src/strptime.c', 'src/odbc.cpp'],
'libraries' :
[
'$(CSDK_HOME)/lib/iclit09b.lib'
],
'include_dirs' :
[
'$(CSDK_HOME)/incl/cli',
'$(NODE_SRC)/test/gc/node_modules/nan'
]
}
],
[ 'OS=="win" and target_arch =="x64"',
{ 'sources' : ['src/strptime.c', 'src/odbc.cpp'],
'libraries' :
[
'$(CSDK_HOME)/lib/iclit09b.lib'
],
'include_dirs' :
[
'$(CSDK_HOME)/incl/cli',
'$(NODE_SRC)/test/gc/node_modules/nan'
]
}
],
[ 'OS != "linux" and OS!="win" and OS!="darwin" and target_arch =="ia32" ',
{ 'conditions' : [],
'libraries' :
[
'-L$(CSDK_HOME)/lib/cli',
'-lthcli'
],
'include_dirs' :
[
'$(CSDK_HOME)/incl/cli'
],
'cflags' : ['-g']
}
],
[ 'OS != "linux" and OS != "win" and OS != "mac" and target_arch == "x64" ',
{ 'conditions' : [],
'libraries' :
[
'-L$(CSDK_HOME)/lib/cli',
'-lthcli'
],
'include_dirs' :
[
'$(CSDK_HOME)/incl/cli'
],
'cflags' : ['-g']
}
]
]
}
]
} | [
"RohitPa@PROD.HCLPNP.COM"
] | RohitPa@PROD.HCLPNP.COM |
4b75f6e418af282330fc60e65238b5a0599af764 | 4da58b65fd3094c3b0556c7a3108d4cd1ffea0f3 | /policy_gradients/reinforce/run_episode.py | d1031d91e64322e5a75a00a3945bb04b6f608d33 | [] | no_license | willclarktech/policy-gradient-implementations | b7d6d55910cf6bc25e86368365f58c51b843df24 | 311276053322272319ffac8206f1e41960495ad7 | refs/heads/main | 2023-07-25T22:00:18.445628 | 2023-07-07T13:03:56 | 2023-07-07T13:03:56 | 252,439,207 | 1 | 0 | null | 2023-08-19T11:33:14 | 2020-04-02T11:42:47 | Jupyter Notebook | UTF-8 | Python | false | false | 940 | py | from policy_gradients.core import Hyperparameters, TrainOptions
from policy_gradients.reinforce.agent import Agent
def run_episode(
agent: Agent,
hyperparameters: Hyperparameters,
options: TrainOptions,
) -> float:
env = hyperparameters.env
should_eval = options.should_eval
should_render = options.should_render
# Necessary for pybullet envs
if should_render:
env.render()
observation = env.reset()
agent.reset()
done = False
ret = 0.0
if should_render:
env.render()
while not done:
action, log_probability = agent.choose_action(observation)
observation_, reward, done, _ = env.step(action)
ret += reward
if not should_eval:
agent.remember(log_probability, reward)
observation = observation_
if should_render:
env.render()
if not should_eval:
agent.learn()
return ret
| [
"willclarktech@users.noreply.github.com"
] | willclarktech@users.noreply.github.com |
cebf0716bc73ae81ca648778249c5ac1ff29c732 | 2367d735dd36ba26570d40cbdfaf0b9efcccd238 | /backend/fresh_26242/settings.py | 13d9b117e2b78a3d5be9ff3b51d2a49d850e9277 | [] | no_license | crowdbotics-apps/fresh-26242 | 65e895d0167b64d2946cc6a205c1631e3bd71f76 | f0b2306cb670db68a0e00bb5567552de62f2e313 | refs/heads/master | 2023-04-19T19:57:51.620879 | 2021-05-06T09:45:42 | 2021-05-06T09:45:42 | 364,860,528 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 4,734 | py | """
Django settings for fresh_26242 project.
Generated by 'django-admin startproject' using Django 1.11.16.
For more information on this file, see
https://docs.djangoproject.com/en/1.11/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/1.11/ref/settings/
"""
import os
import environ
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.11/howto/deployment/checklist/
env = environ.Env()
environ.Env.read_env(os.path.join(BASE_DIR, '.env'))
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = env.str('SECRET_KEY')
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = env.bool('DEBUG', default=True)
ALLOWED_HOSTS = ['*']
SITE_ID = 1
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.sites'
]
LOCAL_APPS = [
'home',
]
THIRD_PARTY_APPS = [
'rest_framework',
'rest_framework.authtoken',
'bootstrap4',
'allauth',
'allauth.account',
'allauth.socialaccount',
'allauth.socialaccount.providers.google',
]
INSTALLED_APPS += LOCAL_APPS + THIRD_PARTY_APPS
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'whitenoise.middleware.WhiteNoiseMiddleware',
]
ROOT_URLCONF = 'fresh_26242.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [os.path.join(BASE_DIR, 'templates'), ],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'fresh_26242.wsgi.application'
# Database
# https://docs.djangoproject.com/en/1.11/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql_psycopg2',
'NAME': 'fresh_26242',
'USER': 'fresh_26242',
'PASSWORD': 'fresh_26242',
'HOST': 'localhost',
'PORT': '5432',
}
}
if env.str('DATABASE_URL', default=None):
DATABASES = {
'default': env.db()
}
# Password validation
# https://docs.djangoproject.com/en/1.11/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/1.11/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/1.11/howto/static-files/
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')
STATICFILES_DIRS = [
os.path.join(BASE_DIR, 'static')
]
STATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage'
AUTHENTICATION_BACKENDS = (
'django.contrib.auth.backends.ModelBackend',
'allauth.account.auth_backends.AuthenticationBackend'
)
# allauth
ACCOUNT_EMAIL_REQUIRED = True
ACCOUNT_AUTHENTICATION_METHOD = 'email'
ACCOUNT_USERNAME_REQUIRED = False
ACCOUNT_EMAIL_VERIFICATION = None
LOGIN_REDIRECT_URL = '/'
if DEBUG:
# output email to console instead of sending
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
EMAIL_HOST = 'smtp.sendgrid.net'
EMAIL_HOST_USER = env.str('SENDGRID_USERNAME', '')
EMAIL_HOST_PASSWORD = env.str('SENDGRID_PASSWORD', '')
EMAIL_PORT = 587
EMAIL_USE_TLS = True
# Import local settings
try:
from .local_settings import *
INSTALLED_APPS += DEBUG_APPS
except:
pass
| [
"team@crowdbotics.com"
] | team@crowdbotics.com |
5b1879bb6fe6bb8554ee412c5912957f6f4b5efb | e7db91361b481433ad66b9ae8b272631adc90ebe | /feedbackPro/feeedbackApp/forms.py | 2f5eeb7551722c2652cb7abb764e618ac09b044f | [] | no_license | mohangirie/feedback | 74b3362e1309ee0f358a32500d6166578bf2c2f5 | bd9551c468a090cf843e53a987b1c48c14301db7 | refs/heads/master | 2020-12-02T18:27:56.095952 | 2020-01-02T09:15:37 | 2020-01-02T09:15:37 | 231,079,516 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,050 | py | from django import forms
class FeedbackForm(forms.Form):
name= forms.CharField(
label='Enter your Name:',
widget=forms.TextInput(
attrs={
'class':'form-control',
'placeholder':'Enter Your Name:'
}
)
)
rating= forms.IntegerField(
label= 'Enter your rating',
widget= forms.NumberInput(
attrs={
'class':'form-control',
'placeholder':'Your rating'
}
)
)
loc= forms.CharField(
label= 'Enter your location',
widget= forms.TextInput(
attrs={
'class': 'form-control',
'placeholder':'Your location'
}
)
)
feedback= forms.CharField(
help_text="Thanks for your valuble feedback",
label='Enter your feedback',
widget=forms.Textarea(
attrs={
'class':'form-control',
'placeholder':'Your feedback'
}
)
)
| [
"mohangirie@gmail.com"
] | mohangirie@gmail.com |
a35ddd9fa845d85d4f58348086ef76de61dadfae | 2b2031f068adee8454841c3ea7f07b60b97f5c74 | /main.py | b07c1b3dedf5ea3a2d7144cfa87028c38eba41e5 | [] | no_license | coristus/aight | 8baeb47d4b1f3efd8761bf3249584dcb8822f60f | b845bc950d5fe6de779eadc6e3b8a120facc75b7 | refs/heads/master | 2021-01-25T05:02:36.344222 | 2017-06-22T12:16:02 | 2017-06-22T12:16:02 | 93,506,167 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 34 | py | # Test file
print "Hello World!!" | [
"jelle4life@gmail.com"
] | jelle4life@gmail.com |
534199a4b5489aea921ce31ea9edc14a46a502b9 | 7bd72524d3746cfd0379a19452d2a0c859d34ec0 | /starting-python/dict.py | 67c28ecf39486c6ffc53b0cc4cda9903e0d0e0d9 | [] | no_license | davidpetro88/python-playground | 2da77fdd4408939b8c71dc143b4712718285bfed | 1c20fb997d0a4c89818cb1d549aec30938b02741 | refs/heads/master | 2021-01-12T20:58:52.400967 | 2017-03-18T01:30:01 | 2017-03-18T01:30:01 | 68,545,351 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 464 | py | cars = {}
cars['corola'] = "red"
cars['fit'] = "green"
cars['320'] = "black"
print(cars.keys())
print("Car Corola -> " + cars['corola'])
for key, value in cars.items():
print(key + " = " + value)
user = dict(name='David', lastName='Petro', city='Porto Alegre')
print (user)
print ("user name : " + user['name'])
user2= {
'name' : 'David',
'lastName' : 'Petro',
'city' : 'Porto Alegre'
}
print (user2)
print ("user2 city : " + user2['city']) | [
"david.abraao.petro@gmail.com"
] | david.abraao.petro@gmail.com |
b3d403b20ecf6d9a750d94bc05a0c287a1ee850b | fa5713863cada0177d15e56f5327b79d907a119f | /test/plotobspred_subtractmc.py | 13110badf3a296a8dd802f8ee8d406173880dec4 | [] | no_license | rappoccio/EXOVV | 1500c126d8053b47fbc425d1c2f9e76f14cb75c5 | db96edf661398b5bab131bbeba36d331b180d12d | refs/heads/master | 2020-04-03T20:12:57.959191 | 2018-08-24T01:30:03 | 2018-08-24T01:30:03 | 39,910,319 | 4 | 2 | null | null | null | null | UTF-8 | Python | false | false | 6,178 | py | #! /usr/bin/env python
##################
# Finding the mistag rate plots
##################
from optparse import OptionParser
parser = OptionParser()
parser.add_option('--hist', type='string', action='store',
dest='hist',
default = "pred_mvv",
help='Input ttbar MC file, without the .root')
parser.add_option('--blind', action = 'store_true',
dest='blind',
default = False,
help='Blind data?')
parser.add_option('--isZ', action = 'store_true',
dest='isZ',
default = False,
help='Is this the Z channel?')
parser.add_option('--outstr', type='string', action='store',
dest='outstr',
default = None,
help='Output string')
parser.add_option('--fileData', type='string', action='store',
dest='fileData',
default = None,
help='Input data file, without the .root')
parser.add_option('--fileTTbar', type='string', action='store',
dest='fileTTbar',
default = None,
help='Input ttbar MC file, without the .root')
parser.add_option('--fileWJets', type='string', action='store',
dest='fileWJets',
default = None,
help='Input ttbar MC file, without the .root')
(options, args) = parser.parse_args()
argv = []
#FWLITE STUFF
import math
import ROOT
import sys
ROOT.gROOT.Macro("rootlogon.C")
ROOT.gStyle.SetOptStat(000000)
tlx = ROOT.TLatex()
tlx.SetNDC()
tlx.SetTextFont(42)
tlx.SetTextSize(0.057)
ROOT.gStyle.SetOptStat(000000)
ROOT.gStyle.SetOptFit(0000)
#ROOT.gROOT.Macro("rootlogon.C")
#ROOT.gStyle.SetPadRightMargin(0.15)
ROOT.gStyle.SetOptStat(000000)
ROOT.gStyle.SetTitleFont(43)
#ROOT.gStyle.SetTitleFontSize(0.05)
ROOT.gStyle.SetTitleFont(43, "XYZ")
ROOT.gStyle.SetTitleSize(30, "XYZ")
#ROOT.gStyle.SetTitleOffset(3.5, "X")
ROOT.gStyle.SetLabelFont(43, "XYZ")
ROOT.gStyle.SetLabelSize(24, "XYZ")
isMC = False
if options.fileData == None :
isMC = True
f = ROOT.TFile(options.fileWJets + '.root')
else :
isMC = False
f = ROOT.TFile(options.fileData + '.root')
fttbar = ROOT.TFile(options.fileTTbar + '.root')
hobs = f.Get(options.hist)
hpred= f.Get(options.hist + "_pred")
hobs_ttbar = fttbar.Get(options.hist)
hpred_ttbar= fttbar.Get(options.hist + "_pred")
xaxes = {
"pred_mvv":[0.,6000.],
"pred_mvvmod":[0.,6000.],
"pred_jet_pt":[0.,5000.],
"pred_sdmass":[0.,500.],
"pred_jetmass":[0.,500.],
"pred_jetmassmod":[0.,500.],
"pred_sdrho":[0.,1.],
}
xaxis = xaxes[options.hist]
lumi = 2110
ttbar_norm = 861.57 * lumi / 96834559.
hobs_ttbar.Scale( ttbar_norm )
hpred_ttbar.Scale( ttbar_norm )
if isMC :
# hobs.Scale( 31.2749 * lumi / 1229879. )
# hpred.Scale( 31.2749 * lumi / 1229879. )
hobs.Scale( lumi * 1.21 )
hpred.Scale( lumi * 1.21 )
# subtract off weighted ttbar pretags, add in observed ttbar
hpred.Add( hpred_ttbar, -1.0 )
hpred.Add( hobs_ttbar )
canv = ROOT.TCanvas('c1','c1', 800, 700)
canv.cd()
pad1 = ROOT.TPad('p1', 'p1', 0.,2./7.,1.0,1.0)
pad1.SetBottomMargin(0.)
pad2 = ROOT.TPad('p1', 'p1', 0.,0.,1.0,2./7.)
pad2.SetTopMargin(0.)
pad2.SetBottomMargin(0.4)
pad1.Draw()
pad2.Draw()
pad1.cd()
hobs.SetMarkerStyle(20)
hobs_ttbar.SetFillColor(ROOT.kGreen)
if options.isZ == False :
hpred.SetFillColor(ROOT.kRed)
else :
hpred.SetFillColor(ROOT.kBlue-7)
#hpred.SetLineColor(2)
#hpred.SetMarkerColor(2)
#hpred.SetMarkerStyle(24)
hpredclone = hpred.Clone()
hpredclone.SetName("hpredclone")
hpredclone.SetFillColor(1)
hpredclone.SetFillStyle(3004)
hpredclone.SetMarkerStyle(0)
hobs_ttbar.SetMarkerStyle(0)
hs = ROOT.THStack('hs', ';m_{VV} (GeV);Number')
hs.Add( hobs_ttbar )
hs.Add( hpred )
hserrs = ROOT.THStack('hserrs', ';m_{VV} (GeV);Number')
hserrs.Add( hobs_ttbar, "hist")
hserrs.Add( hpredclone, "e3")
hs.Draw("hist")
hserrs.Draw("same")
if not options.blind :
hobs.Draw("same")
hs.Draw("axis same")
hs.GetYaxis().SetTitleOffset(1.0)
hs.GetXaxis().SetRangeUser( xaxis[0], xaxis[1])
hs.SetMinimum(1e-3)
eobs_1500 = ROOT.Double(0.)
nobs_1500 = hobs.IntegralAndError( hobs.GetXaxis().FindBin(1500.), hobs.GetXaxis().FindBin(2000.), eobs_1500 )
ebkg_1500 = ROOT.Double(0.)
nbkg_1500 = hs.GetStack().Last().IntegralAndError( hs.GetStack().Last().GetXaxis().FindBin(1500.), hs.GetStack().Last().GetXaxis().FindBin(2000.), ebkg_1500 )
print 'Expected background 1500-2000 : %6.2f +- %6.2f' % ( nbkg_1500, ebkg_1500)
print 'Observed 1500-2000 : %6.2f +- %6.2f' % ( nobs_1500, eobs_1500 )
leg = ROOT.TLegend(0.6,0.6,0.8,0.8)
if not isMC :
leg.AddEntry( hobs, 'Data', 'p')
else :
leg.AddEntry( hobs, 'Observed MC', 'p')
if options.isZ == False :
leg.AddEntry( hpred, 'Predicted W+Jets', 'f')
else :
leg.AddEntry( hpred, 'Predicted Z+Jets', 'f')
leg.AddEntry( hobs_ttbar, 't#bar{t} MC', 'f')
leg.SetFillColor(0)
leg.SetBorderSize(0)
leg.Draw()
ROOT.gPad.SetLogy()
tlx.DrawLatex(0.131, 0.905, "CMS Preliminary #sqrt{s}=13 TeV, " + str(lumi) + " pb^{-1}")
ratio = hobs.Clone()
ratio.SetName('ratio')
ratio.Divide( hs.GetHists().Last() )
ratio.GetXaxis().SetRangeUser( xaxis[0], xaxis[1])
ratio.UseCurrentStyle()
ratio.SetFillStyle(3004)
ratio.SetFillColor(1)
pad2.cd()
ratio.SetMarkerStyle(1)
ratio.SetMarkerSize(0)
ratio.SetTitle(';' + hs.GetXaxis().GetTitle() + ';Ratio')
ratio.SetMinimum(0.)
ratio.SetMaximum(2.)
ratio.GetYaxis().SetNdivisions(2,4,0,False)
#fit = ROOT.TF1("fit", "pol1", 500, 3000)
if not options.blind :
ratio.Draw('e3')
# if isMC :
# ratio.Fit("fit", "LRM")
else :
ratio.Draw("axis")
ratio.GetYaxis().SetTitleOffset(1.0)
ratio.GetXaxis().SetTitleOffset(3.0)
#ratio.SetTitleSize(20, "XYZ")
canv.cd()
canv.Update()
outstr = ''
if options.outstr != None :
outstr = options.outstr
else :
if not isMC :
outstr = options.fileData
else :
outstr = options.fileWJets
canv.Print(outstr + '_obspred.pdf', 'pdf')
canv.Print(outstr + '_obspred.png', 'png')
| [
"rappoccio@gmail.com"
] | rappoccio@gmail.com |
8b734325b692a09963b62a2877da0533267399e5 | 8ed61980185397f8a11ad5851e3ffff09682c501 | /ecto_rbo_dbg/setup.py | 431ec17d0d00fdceb2390ca823195994ade7f21e | [
"BSD-2-Clause-Views"
] | permissive | SoMa-Project/vision | 8975a2b368f69538a05bd57b0c3eda553b783b55 | ea8199d98edc363b2be79baa7c691da3a5a6cc86 | refs/heads/melodic | 2023-04-12T22:49:13.125788 | 2021-01-11T15:28:30 | 2021-01-11T15:28:30 | 80,823,825 | 1 | 0 | NOASSERTION | 2021-04-20T21:27:03 | 2017-02-03T11:36:44 | C++ | UTF-8 | Python | false | false | 313 | py | ## ! DO NOT MANUALLY INVOKE THIS setup.py, USE CATKIN INSTEAD
from distutils.core import setup
from catkin_pkg.python_setup import generate_distutils_setup
# fetch values from package.xml
setup_args = generate_distutils_setup(
packages=['ecto_rbo_dbg_py'],
package_dir={'': 'src'})
setup(**setup_args)
| [
"canerdogan.89@gmail.com"
] | canerdogan.89@gmail.com |
d022de22576eeddd7059fd2a8a9f2a92b8ad80d8 | f291b689f881f85d700fda5937073d644d5ad348 | /matchingAuthors.py | 9b90f2f34e61b78075de46b1f225e5c83075d908 | [] | no_license | lis123kr/cereb_generator | 0337d857436af7191fe826fc9df34eaf6106bff6 | 812367ba193dcd78a3e07cf34b444ec3328d8ab3 | refs/heads/master | 2020-04-01T16:48:08.670803 | 2018-10-26T18:41:48 | 2018-10-26T18:41:48 | 151,278,555 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,279 | py | # Copyright 2018 Cerebro Scholar
# generated by IS Lee
from cleansingAuthors import *
from printUtils import *
from authors import Author
import re, ast
def matching_authors(paperclean, cerebauthor_dict):
print(blue("matching authors start"))
paperclean['authors'] = paperclean['authors'].apply(lambda x : norm_authors(x, cerebauthor_dict))
print(blue("Done matching authors !"))
return paperclean
def get_cerebid_(au, authkeys):
if au.email and authkeys['email'].get(au.email):
return authkeys['email'][au.email]['cereb_auid']
if au.wos_auid and authkeys['wos_auid'].get(au.wos_auid):
return authkeys['wos_auid'][au.wos_auid]['cereb_auid']
if au.scp_auid and authkeys['scp_auid'].get(au.scp_auid):
return authkeys['scp_auid'][au.scp_auid]['cereb_auid']
if au.name_chk_key and authkeys['name_chk_key'].get(au.name_chk_key):
return authkeys['name_chk_key'][au.name_chk_key]['cereb_auid']
print(yellow("{} author isn't matched to cereb_auid...".format(au.fullname)))
return None
def norm_authors(x, cerebauthor_dict):
if str(x) == 'nan' or str(x)=='None': return
axvauthor, scpauthor, wosauthor, ieeeauthor = get_authors_list(x)
ids = []
for a, s, w, i in zip(axvauthor, scpauthor, wosauthor, ieeeauthor):
full, _, src_ = get_fullnames(a,s,w,i)
match = re.compile(r'( and )').search(full)
if match:
full_ = re.compile(r'( and )').sub("&", full)
fullsplit = full_.split('&')
if len(fullsplit) == 2 and (len(fullsplit[0].strip()) <= 3 or len(fullsplit[0].split(' ')) == 1):
# author 한명으로
full = full_.replace('&', ' ').strip()
else:
for idf, f in enumerate(fullsplit):
if f.strip() == '': continue
f = cleansing_fullname(f.strip())[1]
if f.strip() == '' or f.strip() == '.': continue
au = Author(None, None, f, src_)
au.update_au(a,s,w,i)
if au.fullname.strip() != '':
cerebau = get_cerebid_(au, cerebauthor_dict)
if cerebau != None:
ids.append(cerebau)
continue
if full.strip() != '' and full.strip() != '.':
au = Author(None, None, full, src_)
au.update_au(a,s,w,i)
if au.fullname.strip() != '':
cerebau = get_cerebid_(au, cerebauthor_dict)
if cerebau != None:
ids.append(cerebau)
if len(ids) > 0:
return ids
else:
return None | [
"ls123kr@naver.com"
] | ls123kr@naver.com |
a3f84fcd0ac4a5938e3aca469fea90685d9d32fd | 6aaadb46a89877b094f9c8cf827682b747185eed | /18-machine-learning-with-the-experts-school-budgets/01-exploring-the-raw-data/6-K-Nearest-neighbors-fit.py | 843566876b75afd04b7ab1f59e4e9b920b60fd29 | [] | no_license | JTM152769/DataScience-With-Python_Track | 7942980558de743d73c8a7bf7d1dbc8c7e504b58 | 670344e31ef644764e1f3fad50e6295a8dedf5d5 | refs/heads/master | 2020-03-27T21:42:37.764555 | 2018-10-21T10:02:45 | 2018-10-21T10:02:45 | 147,169,595 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,106 | py | '''
k-Nearest Neighbors: Fit
Having explored the Congressional voting records dataset, it is time now to build your first classifier. In this exercise,
you will fit a k-Nearest Neighbors classifier to the voting dataset, which has once again been pre-loaded for you into a
DataFrame df.
In the video, Hugo discussed the importance of ensuring your data adheres to the format required by the scikit-learn API.
The features need to be in an array where each column is a feature and each row a different observation or data point - in
this case, a Congressman's voting record. The target needs to be a single column with the same number of observations as the
feature data. We have done this for you in this exercise. Notice we named the feature array X and response variable y: This
is in accordance with the common scikit-learn practice.
Your job is to create an instance of a k-NN classifier with 6 neighbors (by specifying the n_neighbors parameter) and then fit
it to the data. The data has been pre-loaded into a DataFrame called df.
Instructions
100 XP
Import KNeighborsClassifier from sklearn.neighbors.
Create arrays X and y for the features and the target variable. Here this has been done for you. Note the use of .drop() to
drop the target variable 'party' from the feature array X as well as the use of the .values attribute to ensure X and y are
NumPy arrays. Without using .values, X and y are a DataFrame and Series respectively; the scikit-learn API will accept them
in this form also as long as they are of the right shape.
Instantiate a KNeighborsClassifier called knn with 6 neighbors by specifying the n_neighbors parameter.
Fit the classifier to the data using the .fit() method.
'''
# Import KNeighborsClassifier from sklearn.neighbors
from sklearn.neighbors import KNeighborsClassifier
# Create arrays for the features and the response variable
y = df['party'].values
X = df.drop('party', axis=1).values
# Create a k-NN classifier with 6 neighbors
knn = KNeighborsClassifier(n_neighbors=6)
# Fit the classifier to the data
knn.fit(X,y)
| [
"noreply@github.com"
] | JTM152769.noreply@github.com |