Dataset schema (column name, dtype, observed min-max or class count; ⌀ = column contains nulls):

blob_id: stringlengths 40-40
directory_id: stringlengths 40-40
path: stringlengths 3-288
content_id: stringlengths 40-40
detected_licenses: listlengths 0-112
license_type: stringclasses 2 values
repo_name: stringlengths 5-115
snapshot_id: stringlengths 40-40
revision_id: stringlengths 40-40
branch_name: stringclasses 684 values
visit_date: timestamp[us] 2015-08-06 10:31:46 to 2023-09-06 10:44:38
revision_date: timestamp[us] 1970-01-01 02:38:32 to 2037-05-03 13:00:00
committer_date: timestamp[us] 1970-01-01 02:38:32 to 2023-09-06 01:08:06
github_id: int64 4.92k to 681M ⌀
star_events_count: int64 0 to 209k
fork_events_count: int64 0 to 110k
gha_license_id: stringclasses 22 values
gha_event_created_at: timestamp[us] 2012-06-04 01:52:49 to 2023-09-14 21:59:50 ⌀
gha_created_at: timestamp[us] 2008-05-22 07:58:19 to 2023-08-21 12:35:19 ⌀
gha_language: stringclasses 147 values
src_encoding: stringclasses 25 values
language: stringclasses 1 value
is_vendor: bool 2 classes
is_generated: bool 2 classes
length_bytes: int64 128 to 12.7k
extension: stringclasses 142 values
content: stringlengths 128 to 8.19k
authors: listlengths 1-1
author_id: stringlengths 1-132

|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ce7e0ede156a3e7de7e03da065961056d48b7863
|
148d623f5c85e8bb4cb736c1f20dfa6c15108ee0
|
/notebooks/cross_validation_ex_01.py
|
170688985da17b8a32c3299bb15f65b29d117fe5
|
[
"CC-BY-4.0"
] |
permissive
|
castorfou/scikit-learn-mooc
|
996cec1a8c681c4fb87c568b2fa62d9ce57518f9
|
235748eff57409eb17d8355024579c6df44c0563
|
refs/heads/master
| 2023-06-02T16:31:26.987200
| 2021-06-24T09:54:56
| 2021-06-24T09:54:56
| 369,466,156
| 1
| 0
|
CC-BY-4.0
| 2021-05-21T08:24:07
| 2021-05-21T08:24:06
| null |
UTF-8
|
Python
| false
| false
| 6,263
|
py
|
#!/usr/bin/env python
# coding: utf-8
# # 📝 Exercise M2.01
#
# The aim of this exercise is to run the following experiments:
#
# * train and test a support vector machine classifier through
# cross-validation;
# * study the effect of the parameter gamma of this classifier using a
# validation curve;
# * use a learning curve to study whether adding new samples to the dataset
# would be useful in terms of classification.
#
# To run these experiments, we will first load the blood transfusion dataset.
# <div class="admonition note alert alert-info">
# <p class="first admonition-title" style="font-weight: bold;">Note</p>
# <p class="last">If you want a deeper overview regarding this dataset, you can refer to the
# Appendix - Datasets description section at the end of this MOOC.</p>
# </div>
# In[14]:
import pandas as pd
blood_transfusion = pd.read_csv("../datasets/blood_transfusion.csv")
data = blood_transfusion.drop(columns="Class")
target = blood_transfusion["Class"]
# We will use a support vector machine classifier (SVM). In its simplest
# form, an SVM classifier is a linear classifier that behaves similarly to
# logistic regression. The optimization used to find the optimal weights of
# the linear model differs, but we don't need to know these details for the
# exercise.
#
# Also, this classifier can become more flexible/expressive by using a
# so-called kernel, which makes the model non-linear. Again, no mathematical
# background is required to accomplish this exercise.
#
# We will use an RBF kernel, whose parameter `gamma` allows us to tune the
# flexibility of the model.
#
# First let's create a predictive pipeline made of:
#
# * a [`sklearn.preprocessing.StandardScaler`](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html)
# with its default parameters;
# * a [`sklearn.svm.SVC`](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html)
# where the parameter `kernel` could be set to `"rbf"`. Note that this is the
# default.
# In[15]:
# to display nice model diagram
from sklearn import set_config
set_config(display='diagram')
# In[16]:
# Write your code here.
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
model = make_pipeline(StandardScaler(), SVC(kernel='rbf'))
model
# Evaluate the statistical performance of your model by cross-validation with a
# `ShuffleSplit` scheme. Thus, you can use
# [`sklearn.model_selection.cross_validate`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_validate.html)
# and pass a [`sklearn.model_selection.ShuffleSplit`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.ShuffleSplit.html)
# to the `cv` parameter. Only set `random_state=0` in the `ShuffleSplit`
# and leave the other parameters at their defaults.
# In[17]:
# Write your code here.
import pandas as pd
from sklearn.model_selection import cross_validate, ShuffleSplit
cv = ShuffleSplit(random_state=0)
cv_results = cross_validate(model, data, target,
cv=cv)
cv_results = pd.DataFrame(cv_results)
cv_results
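# A quick summary of the cross-validated test accuracy (illustrative;
# `test_score` is the key returned by `cross_validate` with default scoring):
print(f"Accuracy: {cv_results['test_score'].mean():.3f} "
      f"+/- {cv_results['test_score'].std():.3f}")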
# As previously mentioned, the parameter `gamma` is one of the parameters
# controlling under-/over-fitting in a support vector machine with an RBF
# kernel.
#
# Compute the validation curve
# (using [`sklearn.model_selection.validation_curve`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.validation_curve.html))
# to evaluate the effect of the parameter `gamma`. You can vary its value
# between `1e-3` and `1e2` by generating samples on a logarithmic scale.
# Thus, you can use `np.logspace(-3, 2, num=30)`.
#
# Since we are manipulating a `Pipeline` the parameter name will be set to
# `svc__gamma` instead of only `gamma`. You can retrieve the parameter name
# using `model.get_params().keys()`. We will go more into details regarding
# accessing and setting hyperparameters in the next section.
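# For instance, the following expression lists the available names
# (illustrative; `svc__gamma` is among them because `make_pipeline` names
# each step after its class):
#   sorted(model.get_params().keys())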
# In[18]:
# Write your code here.
from sklearn.model_selection import validation_curve
import numpy as np
gamma = np.logspace(-3, 2, num=30)
train_scores, test_scores = validation_curve(
model, data, target, param_name="svc__gamma", param_range=gamma,
cv=cv)
# The default scoring of a classifier is accuracy, so the values returned by
# `validation_curve` are scores, not errors.
# Plot the validation curve for the train and test scores.
# In[22]:
# Write your code here.
import matplotlib.pyplot as plt
plt.errorbar(gamma, train_scores.mean(axis=1),
             yerr=train_scores.std(axis=1), label="Training score")
plt.errorbar(gamma, test_scores.mean(axis=1),
             yerr=test_scores.std(axis=1), label="Testing score")
plt.legend()
plt.xscale("log")
plt.xlabel("Gamma value for SVC")
plt.ylabel("Accuracy score")
_ = plt.title("Validation curve for SVC")
# Now, you can perform an analysis to check whether adding new samples to the
# dataset could help our model to better generalize. Compute the learning curve
# (using [`sklearn.model_selection.learning_curve`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.learning_curve.html))
# by computing the train and test scores for different training dataset sizes.
# Plot the train and test scores with respect to the number of samples.
# In[12]:
# Write your code here.
from sklearn.model_selection import learning_curve
import numpy as np
train_sizes = np.linspace(0.1, 1.0, num=5, endpoint=True)
train_sizes
from sklearn.model_selection import ShuffleSplit
cv = ShuffleSplit(n_splits=30, test_size=0.2)
results = learning_curve(
model, data, target, train_sizes=train_sizes, cv=cv)
train_size, train_scores, test_scores = results[:3]
# The scores are accuracies here as well, so plot them directly.
import matplotlib.pyplot as plt
plt.errorbar(train_size, train_scores.mean(axis=1),
             yerr=train_scores.std(axis=1), label="Training score")
plt.errorbar(train_size, test_scores.mean(axis=1),
             yerr=test_scores.std(axis=1), label="Testing score")
plt.legend()
plt.xscale("log")
plt.xlabel("Number of samples in the training set")
plt.ylabel("Accuracy score")
_ = plt.title("Learning curve for SVC")
# In[ ]:
|
[
"guillaume.ramelet@gmail.com"
] |
guillaume.ramelet@gmail.com
|
068373acc18149e0bd38c83af1aff1cc0290a92b
|
519ffdab70c5cddc81da66af4c3ddd4007e637bc
|
/src/midterm/exam.py
|
0dae271ee20bc47d14f446ea449af8dc42852649
|
[
"MIT"
] |
permissive
|
acc-cosc-1336/cosc-1336-spring-2018-Aronpond
|
6b55ce38943117015a108b8a4544a22346eef670
|
b37a6be8c0b909859ccf5ac2ce5eaf82c4ba741b
|
refs/heads/master
| 2021-05-11T09:14:22.013454
| 2018-05-09T22:27:24
| 2018-05-09T22:27:24
| 118,071,080
| 0
| 0
|
MIT
| 2018-05-07T06:09:48
| 2018-01-19T03:22:41
|
Python
|
UTF-8
|
Python
| false
| false
| 3,079
|
py
|
'''
5 points
Create a function named get_miles_per_hour with parameters kilometers and minutes that returns
the miles per hour.
Use .621371 as conversion ratio.
Return the string error 'Invalid arguments' if negative kilometers or minutes are given.
RUN THE PROVIDED TEST CASES TO VALIDATE YOUR CODE
'''
def get_miles_per_hour(kilometers, minutes):
    # Validate first so bad input never reaches the division
    if kilometers < 0 or minutes < 0:
        return 'Invalid arguments'
    hours = minutes / 60
    miles = kilometers * .621371
    return miles / hours
'''
10 points
Create a function named get_bonus_pay_amount with parameter sales that returns the bonus pay amount.
Sales Range Percentage
0 to 499 5%
500 to 999 6%
1000 to 1499 7%
1500 to 1999 8%
Return the string error 'Invalid arguments' if sales amount less than 0 or greater than 1999
Sample Data sales amount:
1000
Return Value:
70
'''
def get_bonus_pay_amount(sales):
if sales >= 0 and sales <= 499:
return (round(sales *.05,2))
elif sales >= 500 and sales <= 999:
return (round(sales *.06,2))
elif sales >= 1000 and sales <= 1499:
return (round(sales *.07,2))
elif sales >= 1500 and sales <= 1999:
return (round(sales *.08,2))
elif sales < 0:
return 'Invalid arguments'
elif sales > 1999:
return 'Invalid arguments'
'''
10 points
Create a function named reverse_string that has one parameter named string1 that returns the
reverse of the string.
MUST USE A WHILE LOOP
DO NOT USE STRING SLICING!!!
Sample Data string1 argument:
My String Data
Returns:
ataD gnirtS yM
reverse_string('My String Data')
CREATE A TEST CASE IN THE exam_test.py file.
'''
def reverse_string(string1):
# return string1[::-1]
rev = ''
i = len(string1)
while i > 0:
rev += string1[i-1]
i = i - 1
return rev
'''
10 points
Create a function named get_list_min_max with a list1 parameter that returns the min and max values
in a list.
Sample Data list1 value:
['joe', 10, 15, 20, 30, 40]
Returns:
[10, 40]
get_list_min_max(['joe', 10, 15, 20, 30, 40])
CREATE A TEST CASE IN THE exam_test.py file.
'''
def get_list_min_max(list1):
rl = []
maxv = max(list1[1:])
minv = min(list1[1:])
rl.append(minv)
rl.append(maxv)
return rl
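# A possible test case for the exam_test.py file requested above (a sketch;
# the import name `exam` matches this file's name, and the unittest style is
# an assumption):
#
# import unittest
# import exam
#
# class TestExam(unittest.TestCase):
#     def test_reverse_string(self):
#         self.assertEqual(exam.reverse_string('My String Data'), 'ataD gnirtS yM')
#     def test_get_list_min_max(self):
#         self.assertEqual(exam.get_list_min_max(['joe', 10, 15, 20, 30, 40]), [10, 40])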
'''
25 points
Create a function named get_list_min_max_file with no parameters that reads the attached quiz.dat file
that returns all the min and max values from multiple lists.
You can use the get_list_min_max to get the min max for each list.
Sample quiz.dat file data:
joe 10 15 20 30 40
bill 23 16 19 22
sue 8 22 17 14 32 17 24 21 2 9 11 17
grace 12 28 21 45 26 10 11
john 14 32 25 16 89
Return Value:
[2,89]
'''
def get_list_min_max_file():
    # Read quiz.dat, get the [min, max] of each line's list via
    # get_list_min_max, then return the overall [min, max] across all lists.
    infile = open('quiz.dat', 'r')
    lines = infile.readlines()
    infile.close()
    extremes = []
    for line in lines:
        fields = line.rstrip('\n').split()
        # The first field is a name; the remaining fields are numbers
        values = [fields[0]] + [int(v) for v in fields[1:]]
        extremes.extend(get_list_min_max(values))
    return [min(extremes), max(extremes)]
|
[
"noreply@github.com"
] |
acc-cosc-1336.noreply@github.com
|
23f9757ad8e9bebd6108e84a02155e77f6b93f00
|
91d1a6968b90d9d461e9a2ece12b465486e3ccc2
|
/secretsmanager_write_1/secret_create.py
|
7e5010bb0e9275c9dd9b04a37ca96724ca1518c4
|
[] |
no_license
|
lxtxl/aws_cli
|
c31fc994c9a4296d6bac851e680d5adbf7e93481
|
aaf35df1b7509abf5601d3f09ff1fece482facda
|
refs/heads/master
| 2023-02-06T09:00:33.088379
| 2020-12-27T13:38:45
| 2020-12-27T13:38:45
| 318,686,394
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 1,892
|
py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
import os
import sys
sys.path.append(os.path.dirname(os.path.abspath(os.path.dirname(__file__))))
from common.execute_command import write_one_parameter
# url : https://awscli.amazonaws.com/v2/documentation/api/latest/reference/secretsmanager/create-secret.html
if __name__ == '__main__':
"""
delete-secret : https://awscli.amazonaws.com/v2/documentation/api/latest/reference/secretsmanager/delete-secret.html
describe-secret : https://awscli.amazonaws.com/v2/documentation/api/latest/reference/secretsmanager/describe-secret.html
list-secrets : https://awscli.amazonaws.com/v2/documentation/api/latest/reference/secretsmanager/list-secrets.html
restore-secret : https://awscli.amazonaws.com/v2/documentation/api/latest/reference/secretsmanager/restore-secret.html
rotate-secret : https://awscli.amazonaws.com/v2/documentation/api/latest/reference/secretsmanager/rotate-secret.html
update-secret : https://awscli.amazonaws.com/v2/documentation/api/latest/reference/secretsmanager/update-secret.html
"""
parameter_display_string = """
# name : Specifies the friendly name of the new secret.
The secret name must be ASCII letters, digits, or the following characters : /_+=.@-
Note
Do not end your secret name with a hyphen followed by six characters. If you do so, you risk confusion and unexpected results when searching for a secret by partial ARN. Secrets Manager automatically adds a hyphen and six random characters at the end of the ARN.
"""
add_option_dict = {}
#######################################################################
# parameter display string
add_option_dict["parameter_display_string"] = parameter_display_string
# ex: add_option_dict["no_value_parameter_list"] = "--single-parameter"
write_one_parameter("secretsmanager", "create-secret", "name", add_option_dict)
|
[
"hcseo77@gmail.com"
] |
hcseo77@gmail.com
|
5a164ae6608208666333aadb44adb89d47eb77e1
|
e97e727972149063b3a1e56b38961d0f2f30ed95
|
/test/test_activity_occurrence_resource.py
|
95329d1bb8176f1da196bd830cac7f467f098361
|
[] |
no_license
|
knetikmedia/knetikcloud-python-client
|
f3a485f21c6f3e733a864194c9acf048943dece7
|
834a24415385c906732437970db105e1bc71bde4
|
refs/heads/master
| 2021-01-12T10:23:35.307479
| 2018-03-14T16:04:24
| 2018-03-14T16:04:24
| 76,418,830
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 1,097
|
py
|
# coding: utf-8
"""
Knetik Platform API Documentation latest
This is the spec for the Knetik API. Use this in conjunction with the documentation found at https://knetikcloud.com.
OpenAPI spec version: latest
Contact: support@knetik.com
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import os
import sys
import unittest
import knetik_cloud
from knetik_cloud.rest import ApiException
from knetik_cloud.models.activity_occurrence_resource import ActivityOccurrenceResource
class TestActivityOccurrenceResource(unittest.TestCase):
""" ActivityOccurrenceResource unit test stubs """
def setUp(self):
pass
def tearDown(self):
pass
def testActivityOccurrenceResource(self):
"""
Test ActivityOccurrenceResource
"""
# FIXME: construct object with mandatory attributes with example values
#model = knetik_cloud.models.activity_occurrence_resource.ActivityOccurrenceResource()
pass
if __name__ == '__main__':
unittest.main()
|
[
"shawn.stout@knetik.com"
] |
shawn.stout@knetik.com
|
6fc0c48ca6a5e0811c61ffb490b39d781ac8d4f5
|
eb7c49c58cab51248b249a9985cb58cfefc1cd90
|
/distillation/distillation.py
|
1a1c4732cc7e8e884caedf26d93070b2aa130ba0
|
[] |
no_license
|
kzky/reproductions
|
5b6c43c12ec085586d812edfa8d79ae76750e397
|
28fdb33048ea1f7f1dbd8c94612513d4714a6c95
|
refs/heads/master
| 2021-01-18T01:59:51.435515
| 2017-09-10T11:58:21
| 2017-09-10T11:58:21
| 68,796,105
| 1
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 4,196
|
py
|
from __future__ import absolute_import
from six.moves import range
import os
import nnabla as nn
import nnabla.logger as logger
import nnabla.functions as F
import nnabla.parametric_functions as PF
import nnabla.solvers as S
import nnabla.utils.save as save
from args import get_args
from mnist_data import data_iterator_mnist
from models import mnist_resnet_prediction, categorical_error, kl_divergence
def distil():
args = get_args()
# Get context.
from nnabla.contrib.context import extension_context
extension_module = args.context
if args.context is None:
extension_module = 'cpu'
logger.info("Running in %s" % extension_module)
ctx = extension_context(extension_module, device_id=args.device_id)
nn.set_default_context(ctx)
# Create CNN network for both training and testing.
mnist_cnn_prediction = mnist_resnet_prediction
# TRAIN
teacher = "teacher"
student = "student"
# Create input variables.
image = nn.Variable([args.batch_size, 1, 28, 28])
    image.persistent = True  # do not clear the intermediate buffer; it is re-used
    label = nn.Variable([args.batch_size, 1])
    label.persistent = True  # do not clear the intermediate buffer; it is re-used
    # Create the teacher and student prediction graphs.
model_load_path = args.model_load_path
nn.load_parameters(model_load_path)
pred_label = mnist_cnn_prediction(image, net=teacher, maps=64, test=False)
pred_label.need_grad = False # no need backward through teacher graph
pred = mnist_cnn_prediction(image, net=student, maps=32, test=False)
    pred.persistent = True  # do not clear the intermediate buffer; it is used later
loss_ce = F.mean(F.softmax_cross_entropy(pred, label))
loss_kl = kl_divergence(pred, pred_label)
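    # Knowledge-distillation objective: a weighted sum of the hard-label
    # cross-entropy and the KL divergence to the teacher's predictions.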
loss = args.weight_ce * loss_ce + args.weight_kl * loss_kl
# TEST
# Create input variables.
vimage = nn.Variable([args.batch_size, 1, 28, 28])
vlabel = nn.Variable([args.batch_size, 1])
    # Create the student prediction graph for testing.
vpred = mnist_cnn_prediction(vimage, net=student, maps=32, test=True)
# Create Solver.
solver = S.Adam(args.learning_rate)
with nn.parameter_scope(student):
solver.set_parameters(nn.get_parameters())
# Create monitor.
from nnabla.monitor import Monitor, MonitorSeries, MonitorTimeElapsed
monitor = Monitor(args.monitor_path)
monitor_loss = MonitorSeries("Training loss", monitor, interval=10)
monitor_err = MonitorSeries("Training error", monitor, interval=10)
monitor_time = MonitorTimeElapsed("Training time", monitor, interval=100)
monitor_verr = MonitorSeries("Test error", monitor, interval=10)
# Initialize DataIterator for MNIST.
data = data_iterator_mnist(args.batch_size, True)
vdata = data_iterator_mnist(args.batch_size, False)
best_ve = 1.0
# Training loop.
for i in range(args.max_iter):
if i % args.val_interval == 0:
# Validation
ve = 0.0
for j in range(args.val_iter):
vimage.d, vlabel.d = vdata.next()
vpred.forward(clear_buffer=True)
ve += categorical_error(vpred.d, vlabel.d)
monitor_verr.add(i, ve / args.val_iter)
if ve < best_ve:
nn.save_parameters(os.path.join(
args.model_save_path, 'params_%06d.h5' % i))
best_ve = ve
# Training forward
image.d, label.d = data.next()
solver.zero_grad()
loss.forward(clear_no_need_grad=True)
loss.backward(clear_buffer=True)
solver.weight_decay(args.weight_decay)
solver.update()
e = categorical_error(pred.d, label.d)
monitor_loss.add(i, loss.d.copy())
monitor_err.add(i, e)
monitor_time.add(i)
ve = 0.0
for j in range(args.val_iter):
vimage.d, vlabel.d = vdata.next()
vpred.forward(clear_buffer=True)
ve += categorical_error(vpred.d, vlabel.d)
monitor_verr.add(i, ve / args.val_iter)
parameter_file = os.path.join(
args.model_save_path, 'params_{:06}.h5'.format(args.max_iter))
nn.save_parameters(parameter_file)
if __name__ == '__main__':
distil()
|
[
"rkzfilter@gmail.com"
] |
rkzfilter@gmail.com
|
3d3247e759523d5a191a42471586868ee3cb0831
|
69f4d47bef0e3b4150891127717d6f4d52193b47
|
/sinaSpider/sinaSpider/spiders/sina.py
|
356db6054c3f6ed71e0e9e53f2492f805c8423ad
|
[] |
no_license
|
hyhlinux/stu_scrapy
|
27e45ad10eb9b85aada5ced34d29f00923555bf2
|
36afdb88f84a82453bf03070ed049599386a83c0
|
refs/heads/master
| 2020-06-12T11:55:19.986569
| 2016-12-10T03:18:37
| 2016-12-10T03:18:37
| 75,581,107
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 4,378
|
py
|
# -*- coding: utf-8 -*-
import scrapy
import sys
import os
from sinaSpider.items import SinaspiderItem as SinaItem
reload(sys)
sys.setdefaultencoding("utf-8")
class SinaSpider(scrapy.Spider):
name = "sina"
allowed_domains = ["sina.com.cn"]
start_urls = ["http://news.sina.com.cn/guide/"]
def parse(self, response):
        # print response.body  # test ok
        items = []
        # URLs and titles of all top-level categories
        parentUrls = response.xpath('//div[@id="tab01"]/div/h3/a/@href').extract()
        parentTitle = response.xpath('//div[@id="tab01"]/div/h3/a/text()').extract()
        # URLs and titles of all sub-categories
        subUrls = response.xpath('//div[@id="tab01"]/div/ul/li/a/@href').extract()
        subTitle = response.xpath('//div[@id="tab01"]/div/ul/li/a/text()').extract()
        # Crawl all top-level categories
        for i in range(0, len(parentTitle)):
            # Path and directory name for this top-level category
            parentFilename = "./Data/" + parentTitle[i]
            # Create the directory if it does not exist
            if(not os.path.exists(parentFilename)):
                os.makedirs(parentFilename)
            # Crawl all sub-categories
            for j in range(0, len(subUrls)):
                item = SinaItem()
                # Store the top-level category's title and url
                item['parentTitle'] = parentTitle[i]
                item['parentUrls'] = parentUrls[i]
                # Check whether the sub-category url starts with its parent
                # category url; if so this is True
                # (e.g. sports.sina.com.cn and sports.sina.com.cn/nba)
                if_belong = subUrls[j].startswith(item['parentUrls'])
                # If it belongs to this top-level category, store it under
                # that category's directory
                if(if_belong):
                    subFilename = parentFilename + '/' + subTitle[j]
                    # Create the directory if it does not exist
                    if(not os.path.exists(subFilename)):
                        os.makedirs(subFilename)
                    # Store the sub-category url, title and filename fields
                    item['subUrls'] = subUrls[j]
                    item['subTitle'] = subTitle[j]
                    item['subFilename'] = subFilename
                    items.append(item)
        # Send a Request for each sub-category url; the Response, together
        # with its meta data, is handed to the second_parse callback
        for item in items:
            yield scrapy.Request(url=item['subUrls'], meta={'meta_1': item}, callback=self.second_parse)
    # Recursively request each sub-category url that was returned
    def second_parse(self, response):
        # Extract the meta data from the Response
        meta_1 = response.meta['meta_1']
        # Collect all child links inside the sub-category page
        sonUrls = response.xpath('//a/@href').extract()
        items = []
        for i in range(0, len(sonUrls)):
            # Check whether each link starts with the top-level category url
            # and ends with .shtml; if so this is True
            if_belong = sonUrls[i].endswith('.shtml') and sonUrls[i].startswith(meta_1['parentUrls'])
            # If it belongs to this category, gather the field values into a
            # single item for easy transfer
            if(if_belong):
                item = SinaItem()
                item['parentTitle'] = meta_1['parentTitle']
                item['parentUrls'] = meta_1['parentUrls']
                item['subUrls'] = meta_1['subUrls']
                item['subTitle'] = meta_1['subTitle']
                item['subFilename'] = meta_1['subFilename']
                item['sonUrls'] = sonUrls[i]
                items.append(item)
        # Send a Request for each child link; the Response, together with its
        # meta data, is handed to the detail_parse callback
        for item in items:
            yield scrapy.Request(url=item['sonUrls'], meta={'meta_2': item}, callback=self.detail_parse)
    # Parsing method: extract the article title and content
    def detail_parse(self, response):
        item = response.meta['meta_2']
        content = ""
        head = response.xpath('//h1[@id="main_title"]/text()')
        content_list = response.xpath('//div[@id="artibody"]/p/text()').extract()
        # Concatenate the text of all the p tags
        for content_one in content_list:
            content += content_one
        item['head'] = head
        item['content'] = content
        yield item
|
[
"2285020853@qq.com"
] |
2285020853@qq.com
|
a80d8b96674191feb176a2faa0f67317a629e280
|
e0ae697de14078f97287ad2b917af050805085bc
|
/swan/utils/plot.py
|
dc1a1ff5f1ac1d0bf94d888067d3fc84c323d007
|
[
"Apache-2.0"
] |
permissive
|
nlesc-nano/swan
|
08204936e597017cff517c9a5b8f41262faba9c1
|
4edc9dc363ce901b1fcc19444bec42fc5930c4b9
|
refs/heads/main
| 2023-05-23T21:50:45.928108
| 2023-05-09T13:35:58
| 2023-05-09T13:35:58
| 191,957,101
| 15
| 1
|
Apache-2.0
| 2023-05-09T13:36:29
| 2019-06-14T14:29:57
|
Python
|
UTF-8
|
Python
| false
| false
| 2,809
|
py
|
"""Miscellaneous plot functions."""
from pathlib import Path
from typing import Any, Iterator, List
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from scipy import stats
from ..modeller.gp_modeller import GPMultivariate
plt.switch_backend('agg')
def create_confidence_plot(
multi: GPMultivariate, expected: np.ndarray, prop: str,
output_name: str = "confidence_scatterplot") -> None:
"""Plot the results predicted multivariated results using confidence intervals."""
data = pd.DataFrame({
"expected": expected, "predicted": multi.mean, "confidence": (multi.upper - multi.lower) * 0.5,
"lower": multi.lower, "upper": multi.upper})
_, ax = plt.subplots(1, 1, figsize=(10, 10))
sns.lineplot(x="expected", y="expected", data=data, ax=ax)
sns.scatterplot(x="expected", y="predicted", data=data, ax=ax, size="confidence", hue="confidence", sizes=(10, 100))
path = Path(".") / f"{output_name}.png"
plt.savefig(path)
def create_scatter_plot(
predicted: np.ndarray, expected: np.ndarray, properties: List[str],
output_name: str = "scatterplot") -> None:
"""Plot the predicted vs the expected values."""
sns.set()
# Dataframes with the results
columns_predicted = [f"{p}_predicted" for p in properties]
columns_expected = [f"{p}_expected" for p in properties]
df_predicted = pd.DataFrame(predicted, columns=columns_predicted)
df_expected = pd.DataFrame(expected, columns=columns_expected)
data = pd.concat((df_predicted, df_expected), axis=1)
# Number of features
nfeatures = predicted.shape[1]
# Create a subplot with at most 3 features per line
rows = (nfeatures // 3) + (0 if nfeatures % 3 == 0 else 1)
ncols = nfeatures if nfeatures < 3 else 3
_, axis = plt.subplots(nrows=rows, ncols=ncols, figsize=(20, 20), constrained_layout=True)
# fig.tight_layout()
if rows == 1:
axis = [axis]
for row, labels in enumerate(chunks_of(list(zip(columns_expected, columns_predicted)), 3)):
for col, (label_x, label_y) in enumerate(labels):
ax = axis[row][col] if nfeatures > 1 else axis[0]
sns.regplot(x=label_x, y=label_y, data=data, ax=ax)
path = Path(".") / f"{output_name}.png"
plt.savefig(path)
print(f"{'name':40} slope intercept rvalue stderr")
for k, name in enumerate(properties):
# Print linear regression result
reg = stats.linregress(predicted[:, k], expected[:, k])
print(f"{name:40} {reg.slope:.3f} {reg.intercept:.3f} {reg.rvalue:.3f} {reg.stderr:.3e}")
def chunks_of(data: List[Any], n: int) -> Iterator[Any]:
"""Return chunks of ``n`` from ``data``."""
for i in range(0, len(data), n):
yield data[i:i + n]
|
[
"tifonzafel@gmail.com"
] |
tifonzafel@gmail.com
|
aa6b4b76ac5f339ebfaaa0326ce457417d479c95
|
fb8cbebdf034b2f478943752d5443afc82c6eef5
|
/tuirer/venv/lib/python3.6/site-packages/IPython/testing/tests/test_decorators.py
|
be8bb30f24658acafa1b8a18bf0f3b6ce66ddae9
|
[] |
no_license
|
fariasjr/CitiTuirer
|
f64e0ec93ef088f8140bb0961d2ad4ed3b59448a
|
deb3f7a9c2d45b8a7f54639037f097b99abdac11
|
refs/heads/master
| 2020-03-24T05:10:36.261050
| 2018-08-01T20:24:30
| 2018-08-01T20:24:30
| 142,477,521
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 3,903
|
py
|
"""Tests for the decorators we've created for IPython.
"""
# Module imports
# Std lib
import inspect
import sys
from IPython.testing import decorators as dec
# Third party
import nose.tools as nt
#-----------------------------------------------------------------------------
# Utilities
# Note: copied from OInspect, kept here so the testing stuff doesn't create
# circular dependencies and is easier to reuse.
def getargspec(obj):
"""Get the names and default values of a function's arguments.
A tuple of four things is returned: (args, varargs, varkw, defaults).
'args' is a list of the argument names (it may contain nested lists).
'varargs' and 'varkw' are the names of the * and ** arguments or None.
'defaults' is an n-tuple of the default values of the last n arguments.
Modified version of inspect.getargspec from the Python Standard
Library."""
if inspect.isfunction(obj):
func_obj = obj
elif inspect.ismethod(obj):
func_obj = obj.__func__
else:
raise TypeError('arg is not a Python function')
args, varargs, varkw = inspect.getargs(func_obj.__code__)
return args, varargs, varkw, func_obj.__defaults__
#-----------------------------------------------------------------------------
# Testing functions
@dec.as_unittest
def trivial():
"""A trivial test"""
pass
@dec.skip
def test_deliberately_broken():
"""A deliberately broken test - we want to skip this one."""
1/0
@dec.skip('Testing the skip decorator')
def test_deliberately_broken2():
"""Another deliberately broken test - we want to skip this one."""
1/0
# Verify that we can correctly skip the doctest for a function at will, but
# that the docstring itself is NOT destroyed by the decorator.
def doctest_bad(x,y=1,**k):
"""A function whose doctest we need to skip.
>>> 1+1
3
"""
print('x:',x)
print('y:',y)
print('k:',k)
def call_doctest_bad():
"""Check that we can still call the decorated functions.
>>> doctest_bad(3,y=4)
x: 3
y: 4
k: {}
"""
pass
def test_skip_dt_decorator():
"""Doctest-skipping decorator should preserve the docstring.
"""
# Careful: 'check' must be a *verbatim* copy of the doctest_bad docstring!
check = """A function whose doctest we need to skip.
>>> 1+1
3
"""
# Fetch the docstring from doctest_bad after decoration.
val = doctest_bad.__doc__
nt.assert_equal(check,val,"doctest_bad docstrings don't match")
# Doctest skipping should work for class methods too
class FooClass(object):
"""FooClass
Example:
>>> 1+1
2
"""
def __init__(self,x):
"""Make a FooClass.
Example:
>>> f = FooClass(3)
junk
"""
print('Making a FooClass.')
self.x = x
def bar(self,y):
"""Example:
>>> ff = FooClass(3)
>>> ff.bar(0)
boom!
>>> 1/0
bam!
"""
return 1/y
def baz(self,y):
"""Example:
>>> ff2 = FooClass(3)
Making a FooClass.
>>> ff2.baz(3)
True
"""
return self.x==y
def test_skip_dt_decorator2():
"""Doctest-skipping decorator should preserve function signature.
"""
# Hardcoded correct answer
dtargs = (['x', 'y'], None, 'k', (1,))
# Introspect out the value
dtargsr = getargspec(doctest_bad)
assert dtargsr==dtargs, \
"Incorrectly reconstructed args for doctest_bad: %s" % (dtargsr,)
@dec.skip_linux
def test_linux():
nt.assert_false(sys.platform.startswith('linux'),"This test can't run under linux")
@dec.skip_win32
def test_win32():
nt.assert_not_equal(sys.platform,'win32',"This test can't run under windows")
@dec.skip_osx
def test_osx():
nt.assert_not_equal(sys.platform,'darwin',"This test can't run under osx")
|
[
"jornadaciti@ug4c08.windows.cin.ufpe.br"
] |
jornadaciti@ug4c08.windows.cin.ufpe.br
|
c93b63bb6a2ee9bde97722c20735dd3e19260a1f
|
2be3ad1c6413c9eb74819773e82c9cfe973f2fbb
|
/mama_cas/cas.py
|
2057a703a19b10904887a217008626fee552bf60
|
[
"BSD-3-Clause"
] |
permissive
|
ilCapo77/django-mama-cas
|
271d706eae24ea87c9280e46e5ddcfb1fd5c3d7f
|
42ec942c207e5c4e7e40d7bb74551682c2803d81
|
refs/heads/master
| 2020-12-28T04:49:05.463591
| 2016-08-22T16:41:14
| 2016-08-22T16:41:14
| null | 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 4,549
|
py
|
import logging
from django.contrib import messages
from django.contrib.auth import logout
from django.utils.module_loading import import_string
from django.utils.translation import ugettext_lazy as _
from mama_cas.compat import is_authenticated
from mama_cas.exceptions import InvalidTicketSpec
from mama_cas.exceptions import ValidationError
from mama_cas.models import ServiceTicket
from mama_cas.models import ProxyTicket
from mama_cas.models import ProxyGrantingTicket
from mama_cas.services import get_callbacks
logger = logging.getLogger(__name__)
def validate_service_ticket(service, ticket, pgturl, renew=False, require_https=False):
"""
Validate a service ticket string. Return a triplet containing a
``ServiceTicket`` and an optional ``ProxyGrantingTicket``, or a
``ValidationError`` if ticket validation failed.
"""
logger.debug("Service validation request received for %s" % ticket)
# Check for proxy tickets passed to /serviceValidate
if ticket and ticket.startswith(ProxyTicket.TICKET_PREFIX):
e = InvalidTicketSpec('Proxy tickets cannot be validated with /serviceValidate')
logger.warning("%s %s" % (e.code, e))
return None, None, e
try:
st = ServiceTicket.objects.validate_ticket(ticket, service, renew=renew, require_https=require_https)
except ValidationError as e:
logger.warning("%s %s" % (e.code, e))
return None, None, e
else:
if pgturl:
logger.debug("Proxy-granting ticket request received for %s" % pgturl)
pgt = ProxyGrantingTicket.objects.create_ticket(service, pgturl, user=st.user, granted_by_st=st)
else:
pgt = None
return st, pgt, None
def validate_proxy_ticket(service, ticket, pgturl):
"""
Validate a proxy ticket string. Return a 4-tuple containing a
``ProxyTicket``, an optional ``ProxyGrantingTicket`` and a list
of proxies through which authentication proceeded, or a
``ValidationError`` if ticket validation failed.
"""
logger.debug("Proxy validation request received for %s" % ticket)
try:
pt = ProxyTicket.objects.validate_ticket(ticket, service)
except ValidationError as e:
logger.warning("%s %s" % (e.code, e))
return None, None, None, e
else:
# Build a list of all services that proxied authentication,
# in reverse order of which they were traversed
proxies = [pt.service]
prior_pt = pt.granted_by_pgt.granted_by_pt
while prior_pt:
proxies.append(prior_pt.service)
prior_pt = prior_pt.granted_by_pgt.granted_by_pt
if pgturl:
logger.debug("Proxy-granting ticket request received for %s" %
pgturl)
pgt = ProxyGrantingTicket.objects.create_ticket(service, pgturl, user=pt.user, granted_by_pt=pt)
else:
pgt = None
return pt, pgt, proxies, None
def validate_proxy_granting_ticket(pgt, target_service):
"""
Validate a proxy granting ticket string. Return an ordered pair
containing a ``ProxyTicket``, or a ``ValidationError`` if ticket
validation failed.
"""
logger.debug("Proxy ticket request received for %s using %s" % (target_service, pgt))
try:
pgt = ProxyGrantingTicket.objects.validate_ticket(pgt, target_service)
except ValidationError as e:
logger.warning("%s %s" % (e.code, e))
return None, e
else:
pt = ProxyTicket.objects.create_ticket(service=target_service, user=pgt.user, granted_by_pgt=pgt)
return pt, None
def get_attributes(user, service):
"""
Return a dictionary of user attributes from the set of configured
callback functions.
"""
attributes = {}
for path in get_callbacks(service):
callback = import_string(path)
attributes.update(callback(user, service))
return attributes
def logout_user(request):
"""End a single sign-on session for the current user."""
logger.debug("Logout request received for %s" % request.user)
if is_authenticated(request.user):
ServiceTicket.objects.consume_tickets(request.user)
ProxyTicket.objects.consume_tickets(request.user)
ProxyGrantingTicket.objects.consume_tickets(request.user)
ServiceTicket.objects.request_sign_out(request.user)
logger.info("Single sign-on session ended for %s" % request.user)
logout(request)
messages.success(request, _('You have been successfully logged out'))
|
[
"jason.bittel@gmail.com"
] |
jason.bittel@gmail.com
|
3749352f2804e412ae11dd21cac151c55e754eeb
|
bda2cafa8a5f0adb702aa618ff18372428ad9f84
|
/artie.py
|
5565427bac48805feff8e2e480cc82942fb78ed2
|
[] |
no_license
|
rogerhoward/artie3000
|
eb2d5968d9b2fc19cb8ca75836ea4d0911ba3f87
|
ec2afc6341029b0279b58b917a0e473e7462c5c5
|
refs/heads/master
| 2022-02-12T03:29:36.614009
| 2019-10-23T15:26:50
| 2019-10-23T15:26:50
| 177,835,173
| 1
| 0
| null | 2022-02-04T15:11:34
| 2019-03-26T17:13:53
|
HTML
|
UTF-8
|
Python
| false
| false
| 715
|
py
|
#!/usr/bin/env python
import asyncio
import websockets
import random, string
import simplejson as json
def random_id(length=10):
    # 'length' rather than 'len', to avoid shadowing the built-in
    return ''.join(random.choices(string.ascii_lowercase + string.digits, k=length))
class Artie(object):
    def __init__(self):
        pass
async def bot():
async with websockets.connect(
'ws://local.codewithartie.com:8899/websocket') as websocket:
# name = input("What's your name? ")
msg_id = random_id()
vers = {"cmd": "version", "id": msg_id}
await websocket.send(json.dumps(vers))
print(f"> {vers}")
greeting = await websocket.recv()
print(f"< {greeting}")
asyncio.get_event_loop().run_until_complete(bot())
|
[
"rogerhoward@mac.com"
] |
rogerhoward@mac.com
|
2c387465d98034a84512b41c9b90e62a486f5c44
|
05a9e0bb7e33099f94dfc8af53b4837bc5c9d287
|
/python/small_impls/particle_energy_minimize/energy.py
|
229e8a9b9ee860b91ef46a307ee5c45819fce34d
|
[] |
no_license
|
HiroIshida/snippets
|
999c09efadae80397cb82a424328bb1dbda4915f
|
f64dcd793184be64682b55bdaee7392fd97a0916
|
refs/heads/master
| 2023-09-01T08:18:42.523625
| 2023-09-01T04:08:20
| 2023-09-01T04:08:20
| 207,662,767
| 7
| 2
| null | 2022-08-01T23:20:42
| 2019-09-10T21:04:01
|
C++
|
UTF-8
|
Python
| false
| false
| 2,903
|
py
|
import numpy as np
from scipy.optimize import OptimizeResult, minimize, Bounds
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import copy
import time
from typing import Callable, Tuple
def compute_distance_matrix(points: np.ndarray):
assert points.ndim == 2
n_point, n_dim = points.shape
squared_dist_matrix = np.zeros((n_point, n_point))
for i, p in enumerate(points):
squared_dist_matrix[:, i] = np.sum((p - points) ** 2, axis=1)
dist_matrix = np.sqrt(squared_dist_matrix)
return dist_matrix
def fun_energy(points: np.ndarray, n_power=2):
n_point, n_dim = points.shape
dist_matrix = compute_distance_matrix(points)
modified_dist_matrix = copy.deepcopy(dist_matrix)
for i in range(n_point):
modified_dist_matrix[i, i] = 1e5
    energy = 0.5 * np.sum(1.0 / (modified_dist_matrix ** n_power))  # 0.5 because each pair is counted twice
part_grad_list = []
for i, p in enumerate(points):
diff = points - p
r = modified_dist_matrix[:, i]
tmp = (1.0 / r **(n_power + 2))
part_grad = np.sum(n_power * np.tile(tmp, (n_dim, 1)).T * diff, axis=0)
part_grad_list.append(part_grad)
grad = np.hstack(part_grad_list)
return energy, grad
def scipinize(fun: Callable) -> Tuple[Callable, Callable]:
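    # scipy.optimize.minimize expects separate fun/jac callbacks, while the
    # objective here returns (value, gradient) in a single call; cache the
    # gradient alongside f so the jac callback can return it without
    # recomputing.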
closure_member = {"jac_cache": None}
def fun_scipinized(x):
f, jac = fun(x)
closure_member["jac_cache"] = jac
return f
def fun_scipinized_jac(x):
return closure_member["jac_cache"]
return fun_scipinized, fun_scipinized_jac
def gradient_test(func, x0, decimal=4):
f0, grad = func(x0)
n_dim = len(x0)
eps = 1e-7
grad_numerical = np.zeros(n_dim)
for idx in range(n_dim):
x1 = copy.copy(x0)
x1[idx] += eps
f1, _ = func(x1)
grad_numerical[idx] = (f1 - f0) / eps
print(grad_numerical)
print(grad)
np.testing.assert_almost_equal(grad, grad_numerical, decimal=decimal)
n_dim = 3
n_point = 27
points = np.random.rand(n_point, n_dim)
a = 1.5
def obj_fun(points: np.ndarray):
f1, grad1 = fun_energy(points, n_power=1)
f2, grad2 = fun_energy(points, n_power=-2)
#return a * f1 + b * f2, a * grad1 + b * grad2
return f1 + a * f2, grad1 + a * grad2
f, jac = scipinize(lambda x: obj_fun(x.reshape(-1, n_dim)))
x_init = points.flatten()
bounds = Bounds(lb = np.zeros(n_dim * n_point), ub = np.ones(n_dim * n_point))
slsqp_option = {
"maxiter": 1000
}
res = minimize(
f,
x_init,
method="SLSQP",
jac=jac,
bounds=bounds,
options=slsqp_option,
)
points_sol = res.x.reshape(-1, n_dim)
print(res)
if n_dim == 2:
plt.scatter(points_sol[:, 0], points_sol[:, 1])
plt.show()
else:
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(points_sol[:, 0], points_sol[:, 1], points_sol[:, 2])
plt.show()
|
[
"spitfire.docomo@gmail.com"
] |
spitfire.docomo@gmail.com
|
168587159267fcc2bcfa812c45b2ad9b99b148ba
|
78c3082e9082b5b50435805723ae00a58ca88e30
|
/03.AI알고리즘 소스코드/venv/Lib/site-packages/caffe2/python/parallel_workers_test.py
|
b1822ec259538d2302294aeadf78f99766856b06
|
[] |
no_license
|
jinStar-kimmy/algorithm
|
26c1bc456d5319578110f3d56f8bd19122356603
|
59ae8afd8d133f59a6b8d8cee76790fd9dfe1ff7
|
refs/heads/master
| 2023-08-28T13:16:45.690232
| 2021-10-20T08:23:46
| 2021-10-20T08:23:46
| 419,217,105
| 0
| 1
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 3,620
|
py
|
import unittest
from caffe2.python import workspace, core
import caffe2.python.parallel_workers as parallel_workers
def create_queue():
queue = 'queue'
workspace.RunOperatorOnce(
core.CreateOperator(
"CreateBlobsQueue", [], [queue], num_blobs=1, capacity=1000
)
)
# Technically, blob creations aren't thread safe. Since the unittest below
# does RunOperatorOnce instead of CreateNet+RunNet, we have to precreate
# all blobs beforehand
for i in range(100):
workspace.C.Workspace.current.create_blob("blob_" + str(i))
workspace.C.Workspace.current.create_blob("status_blob_" + str(i))
workspace.C.Workspace.current.create_blob("dequeue_blob")
workspace.C.Workspace.current.create_blob("status_blob")
return queue
def create_worker(queue, get_blob_data):
def dummy_worker(worker_id):
blob = 'blob_' + str(worker_id)
workspace.FeedBlob(blob, get_blob_data(worker_id))
workspace.RunOperatorOnce(
core.CreateOperator(
'SafeEnqueueBlobs', [queue, blob], [blob, 'status_blob_' + str(worker_id)]
)
)
return dummy_worker
def dequeue_value(queue):
dequeue_blob = 'dequeue_blob'
workspace.RunOperatorOnce(
core.CreateOperator(
"SafeDequeueBlobs", [queue], [dequeue_blob, 'status_blob']
)
)
return workspace.FetchBlob(dequeue_blob)
class ParallelWorkersTest(unittest.TestCase):
def testParallelWorkers(self):
workspace.ResetWorkspace()
queue = create_queue()
dummy_worker = create_worker(queue, lambda worker_id: str(worker_id))
worker_coordinator = parallel_workers.init_workers(dummy_worker)
worker_coordinator.start()
for _ in range(10):
value = dequeue_value(queue)
self.assertTrue(
value in [b'0', b'1'], 'Got unexpected value ' + str(value)
)
self.assertTrue(worker_coordinator.stop())
def testParallelWorkersInitFun(self):
workspace.ResetWorkspace()
queue = create_queue()
dummy_worker = create_worker(
queue, lambda worker_id: workspace.FetchBlob('data')
)
workspace.FeedBlob('data', 'not initialized')
def init_fun(worker_coordinator, global_coordinator):
workspace.FeedBlob('data', 'initialized')
worker_coordinator = parallel_workers.init_workers(
dummy_worker, init_fun=init_fun
)
worker_coordinator.start()
for _ in range(10):
value = dequeue_value(queue)
self.assertEqual(
value, b'initialized', 'Got unexpected value ' + str(value)
)
# A best effort attempt at a clean shutdown
worker_coordinator.stop()
def testParallelWorkersShutdownFun(self):
workspace.ResetWorkspace()
queue = create_queue()
dummy_worker = create_worker(queue, lambda worker_id: str(worker_id))
workspace.FeedBlob('data', 'not shutdown')
def shutdown_fun():
workspace.FeedBlob('data', 'shutdown')
worker_coordinator = parallel_workers.init_workers(
dummy_worker, shutdown_fun=shutdown_fun
)
worker_coordinator.start()
self.assertTrue(worker_coordinator.stop())
data = workspace.FetchBlob('data')
self.assertEqual(data, b'shutdown', 'Got unexpected value ' + str(data))
|
[
"gudwls3126@gmail.com"
] |
gudwls3126@gmail.com
|
521b2928efcb138c4ef38d26f04b6f9b956f728e
|
88863cb16f35cd479d43f2e7852d20064daa0c89
|
/HelpingSantasHelpers/download/eval_code/hours.py
|
92c418e0a8a4828e94b1d7876b0ac4442d3fda3b
|
[] |
no_license
|
chrishefele/kaggle-sample-code
|
842c3cd766003f3b8257fddc4d61b919e87526c4
|
1c04e859c7376f8757b011ed5a9a1f455bd598b9
|
refs/heads/master
| 2020-12-29T12:18:09.957285
| 2020-12-22T20:16:35
| 2020-12-22T20:16:35
| 238,604,678
| 3
| 1
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 4,379
|
py
|
import datetime
class Hours:
""" Hours class takes care of time accounting. Note that convention is
9:00-9:05 is work taking place on the 00, 01, 02, 03, 04 minutes, so 5 minutes of work.
Elf is available for next work at 9:05
Class members day_start, day_end are in minutes relative to the day
"""
def __init__(self):
self.hours_per_day = 10 # 10 hour day: 9 - 19
self.day_start = 9 * 60
self.day_end = (9 + self.hours_per_day) * 60
self.reference_start_time = datetime.datetime(2014, 1, 1, 0, 0)
self.minutes_in_24h = 24 * 60
@staticmethod
def convert_to_minute(arrival):
""" Converts the arrival time string to minutes since the reference start time,
Jan 1, 2014 at 00:00 (aka, midnight Dec 31, 2013)
:param arrival: string in format '2014 12 17 7 03' for Dec 17, 2014 at 7:03 am
:return: integer (minutes since arrival time)
"""
time = arrival.split(' ')
dd = datetime.datetime(int(time[0]), int(time[1]), int(time[2]), int(time[3]), int(time[4]))
age = dd - datetime.datetime(2014, 1, 1, 0, 0)
return int(age.total_seconds() / 60)
def is_sanctioned_time(self, minute):
""" Return boolean True or False if a given time (in minutes) is a sanctioned working day minute. """
return ((minute - self.day_start) % self.minutes_in_24h) < (self.hours_per_day * 60)
    def get_sanctioned_breakdown(self, start_minute, duration):
        """ Whole days (24-hr time periods) contribute fixed quantities of sanctioned and unsanctioned time. After
        accounting for the whole days in the duration, the remainder minutes are tabulated as un/sanctioned.
        :param start_minute: minute (since the reference time) the work starts
        :param duration: duration of the work in minutes
        :return: tuple of (sanctioned, unsanctioned) minute counts
        """
full_days = duration / (self.minutes_in_24h)
sanctioned = full_days * self.hours_per_day * 60
unsanctioned = full_days * (24 - self.hours_per_day) * 60
remainder_start = start_minute + full_days * self.minutes_in_24h
for minute in xrange(remainder_start, start_minute+duration):
if self.is_sanctioned_time(minute):
sanctioned += 1
else:
unsanctioned += 1
return sanctioned, unsanctioned
def next_sanctioned_minute(self, minute):
""" Given a minute, finds the next sanctioned minute.
:param minute: integer representing a minute since reference time
:return: next sanctioned minute
"""
# next minute is a sanctioned minute
if self.is_sanctioned_time(minute) and self.is_sanctioned_time(minute+1):
return minute + 1
num_days = minute / self.minutes_in_24h
return self.day_start + (num_days + 1) * self.minutes_in_24h
def apply_resting_period(self, start, num_unsanctioned):
""" Enforces the rest period and returns the minute when the elf is next available for work.
Rest period is only applied to sanctioned work hours.
:param start: minute the REST period starts
:param num_unsanctioned: always > 0 number of unsanctioned minutes that need resting minutes
:return: next available minute after rest period has been applied
"""
num_days_since_jan1 = start / self.minutes_in_24h
rest_time = num_unsanctioned
rest_time_in_working_days = rest_time / (60 * self.hours_per_day)
rest_time_remaining_minutes = rest_time % (60 * self.hours_per_day)
# rest time is only applied to sanctioned work hours. If local_start is at an unsanctioned time,
# need to set it to be the next start of day
local_start = start % self.minutes_in_24h # minute of the day (relative to a current day) the work starts
if local_start < self.day_start:
local_start = self.day_start
elif local_start > self.day_end:
num_days_since_jan1 += 1
local_start = self.day_start
if local_start + rest_time_remaining_minutes > self.day_end:
rest_time_in_working_days += 1
rest_time_remaining_minutes -= (self.day_end - local_start)
local_start = self.day_start
total_days = num_days_since_jan1 + rest_time_in_working_days
return total_days * self.minutes_in_24h + local_start + rest_time_remaining_minutes
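# Minimal usage sketch (illustrative only; the arrival string is an example,
# not part of the original evaluation code):
if __name__ == '__main__':
    hrs = Hours()
    start = Hours.convert_to_minute('2014 12 17 7 03')  # Dec 17, 2014 at 7:03
    sanctioned, unsanctioned = hrs.get_sanctioned_breakdown(start, 600)
    print('start=%d sanctioned=%d unsanctioned=%d'
          % (start, sanctioned, unsanctioned))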
|
[
"c.hefele@verizon.net"
] |
c.hefele@verizon.net
|
c037677bd40802d525c9b0cc48e7980d0d9370cd
|
cb73fe89463892c8c147c6995e220f5b1635fabb
|
/AtCoder Beginner Contest 157/q4.py
|
2f3997be6ae6197dd3f841423ebfccc58d44d5dc
|
[] |
no_license
|
Haraboo0814/AtCoder
|
244f6fd17e8f6beee2d46fbfaea6a8e798878920
|
7ad794fd85e8d22d4e35087ed38f453da3c573ca
|
refs/heads/master
| 2023-06-15T20:08:37.348078
| 2021-07-17T09:31:30
| 2021-07-17T09:31:30
| 254,162,544
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 1,430
|
py
|
class UnionFind():
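    # Union-Find with path compression in find() and union by size:
    # parents[root] holds the negative size of that root's component.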
def __init__(self, n):
self.n = n
self.parents = [-1] * n
def find(self, x):
if self.parents[x] < 0:
return x
self.parents[x] = self.find(self.parents[x])
return self.parents[x]
def unite(self, x, y):
x = self.find(x)
y = self.find(y)
if x == y:
return
if self.parents[x] > self.parents[y]:
x, y = y, x
self.parents[x] += self.parents[y]
self.parents[y] = x
def same(self, x, y):
return self.find(x) == self.find(y)
def size(self, x):
return -self.parents[self.find(x)]
N, M, K = map(int, input().split())
uf = UnionFind(N)
friends = [[] for _ in range(N)]  # friendship relations
blocks = [[] for _ in range(N)]   # block relations
for i in range(M):
a, b = map(int, input().split())
    # Register the mutual friendship
    friends[a - 1].append(b - 1)
    friends[b - 1].append(a - 1)
    # Merge the pair into one group
uf.unite(a - 1, b - 1)
for i in range(K):
c, d = map(int, input().split())
if uf.same(c - 1, d - 1):
        # If they are in the same group, register the mutual block relation
blocks[c - 1].append(d - 1)
blocks[d - 1].append(c - 1)
ans = []
for i in range(N):
    # group size - self - number blocked - number of friends
ans.append(uf.size(i) - 1 - len(blocks[i]) - len(friends[i]))
print(*ans)
|
[
"harada-kyohei-fj@ynu.jp"
] |
harada-kyohei-fj@ynu.jp
|
d637b21e1ece6c029c4c29138a0c0a4fab9eb9c0
|
e0731ac7bd6a9fcb386d9c5d4181c9d549ab1d02
|
/desafio81.py
|
b0763c623e58fc03a19757c78b94ebd4003ba32e
|
[] |
no_license
|
lportinari/Desafios-Python-Curso-em-Video
|
3ab98b87a2178448b3e53031b86522558c31c099
|
cd7662ddfe371e48e5aabc6e86e23dc6337405fb
|
refs/heads/master
| 2020-04-29T11:09:25.689901
| 2019-06-23T23:58:06
| 2019-06-23T23:58:06
| 176,087,390
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 692
|
py
|
"""Crie um programa que vai ler vários números e colocar em uma lista. Depois disso, mostre:
A) Quantos números foram digitados.
B) A lista de valores, ordenada de forma decrescente.
C) Se o valor 5 foi digitado e está ou não na lista."""
lista = []
while True:
lista.append(int(input('Digite um valor: ')))
resp = str(input('Quer continuar? [S/N]: ')).strip()[0]
if resp in 'Nn':
break
print('-=' * 30)
lista.sort(reverse=True)
print(f'Você digitou {len(lista)} elementos.')
print(f'Os valores em ordem decrescente são {lista}')
if 5 in lista:
print('O valor 5 faz parte da lista.')
else:
print('O valor 5 não foi encontrado na lista.')
|
[
"noreply@github.com"
] |
lportinari.noreply@github.com
|
b60d9811abfaa3ba27b15660b09b51606387d2df
|
f907f8ce3b8c3b203e5bb9d3be012bea51efd85f
|
/cakes_and_donuts.py
|
79ab95d0266b3498fe901472d27b5c41b4c5c0eb
|
[] |
no_license
|
KohsukeKubota/Atcoder-practice
|
3b4b986395551443f957d1818d6f9a0bf6132e90
|
52554a2649445c2760fc3982e722854fed5b8ab1
|
refs/heads/master
| 2020-08-26T15:17:29.344402
| 2019-10-26T11:14:24
| 2019-10-26T11:14:24
| 217,052,829
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 218
|
py
|
N = int(input())
cnum = 25
dnum = 14
res = 0
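# Brute force: count the ways N can be written as 4*i + 7*j (per the
# filename, 4-dollar cakes and 7-dollar donuts); any way means 'Yes'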
for i in range(cnum):
for j in range(dnum):
amm = 4 * i + 7 * j
if amm == N:
res += 1
if res > 0:
print('Yes')
else:
print('No')
|
[
"kohsuke@KohsukeKubotas-MacBook-Air.local"
] |
kohsuke@KohsukeKubotas-MacBook-Air.local
|
c882d325f55029694257bdc9f8142cef5ca05acf
|
0c0168a4676bce7453836a7509e7133044aa8975
|
/byceps/services/page/dbmodels.py
|
c6d87ca51e5d385a04bb0f4730c934f43332f5b6
|
[
"BSD-3-Clause"
] |
permissive
|
byceps/byceps
|
0aad3c4d974f76c6f8c3674d5539a80c9107b97a
|
eaee2b7fdc08c76c16ddf7f436110e0b5f1812e5
|
refs/heads/main
| 2023-09-01T04:03:13.365687
| 2023-09-01T03:28:18
| 2023-09-01T03:28:18
| 40,150,239
| 44
| 23
|
BSD-3-Clause
| 2023-05-16T18:41:32
| 2015-08-03T22:05:23
|
Python
|
UTF-8
|
Python
| false
| false
| 3,850
|
py
|
"""
byceps.services.page.dbmodels
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Pages of database-stored content. Can contain HTML and template engine
syntax.
:Copyright: 2014-2023 Jochen Kupperschmidt
:License: Revised BSD (see `LICENSE` file for details)
"""
from __future__ import annotations
from datetime import datetime
from sqlalchemy.ext.associationproxy import association_proxy
from byceps.database import db, generate_uuid7
from byceps.services.language.dbmodels import DbLanguage
from byceps.services.site.models import SiteID
from byceps.services.site_navigation.dbmodels import DbNavMenu
from byceps.services.user.dbmodels.user import DbUser
from byceps.typing import UserID
class DbPage(db.Model):
"""A content page.
Any page is expected to have at least one version (the initial one).
"""
__tablename__ = 'pages'
__table_args__ = (
db.UniqueConstraint('site_id', 'name', 'language_code'),
db.UniqueConstraint('site_id', 'language_code', 'url_path'),
)
id = db.Column(db.Uuid, default=generate_uuid7, primary_key=True)
site_id = db.Column(
db.UnicodeText, db.ForeignKey('sites.id'), index=True, nullable=False
)
name = db.Column(db.UnicodeText, index=True, nullable=False)
language_code = db.Column(
db.UnicodeText,
db.ForeignKey('languages.code'),
index=True,
nullable=False,
)
language = db.relationship(DbLanguage)
url_path = db.Column(db.UnicodeText, index=True, nullable=False)
published = db.Column(db.Boolean, nullable=False)
nav_menu_id = db.Column(
db.Uuid, db.ForeignKey('site_nav_menus.id'), nullable=True
)
nav_menu = db.relationship(DbNavMenu)
current_version = association_proxy(
'current_version_association', 'version'
)
def __init__(
self, site_id: SiteID, name: str, language_code: str, url_path: str
) -> None:
self.site_id = site_id
self.name = name
self.language_code = language_code
self.url_path = url_path
self.published = False
class DbPageVersion(db.Model):
"""A snapshot of a page at a certain time."""
__tablename__ = 'page_versions'
id = db.Column(db.Uuid, default=generate_uuid7, primary_key=True)
page_id = db.Column(
db.Uuid, db.ForeignKey('pages.id'), index=True, nullable=False
)
page = db.relationship(DbPage)
created_at = db.Column(db.DateTime, default=datetime.utcnow, nullable=False)
creator_id = db.Column(db.Uuid, db.ForeignKey('users.id'), nullable=False)
creator = db.relationship(DbUser)
title = db.Column(db.UnicodeText, nullable=False)
head = db.Column(db.UnicodeText, nullable=True)
body = db.Column(db.UnicodeText, nullable=False)
def __init__(
self,
page: DbPage,
creator_id: UserID,
title: str,
head: str | None,
body: str,
) -> None:
self.page = page
self.creator_id = creator_id
self.title = title
self.head = head
self.body = body
@property
def is_current(self) -> bool:
"""Return `True` if this version is the current version of the
page it belongs to.
"""
return self.id == self.page.current_version.id
class DbCurrentPageVersionAssociation(db.Model):
__tablename__ = 'page_current_versions'
page_id = db.Column(db.Uuid, db.ForeignKey('pages.id'), primary_key=True)
page = db.relationship(
DbPage, backref=db.backref('current_version_association', uselist=False)
)
version_id = db.Column(
db.Uuid, db.ForeignKey('page_versions.id'), unique=True, nullable=False
)
version = db.relationship(DbPageVersion)
def __init__(self, page: DbPage, version: DbPageVersion) -> None:
self.page = page
self.version = version
|
[
"homework@nwsnet.de"
] |
homework@nwsnet.de
|
12ab1ca8d4888e9bc31dde1f5c9c0d2d9e71550c
|
ebf5c43e530f450d7057823f62cb66fe7013126a
|
/homebot/modules/ci/projects/aosp/constants.py
|
911a51dc039330a86e4c07c4e38bfdf29859f1db
|
[] |
no_license
|
dinhsan2000/HomeBot1
|
ee58ce35fc20522660d024cb454a478cd25a84a4
|
a3729d981b2aadeb05fd1da5ed956079ac3105d1
|
refs/heads/master
| 2023-06-07T16:38:25.256832
| 2023-05-26T04:53:51
| 2023-05-26T04:53:51
| 343,099,642
| 0
| 0
| null | 2023-05-26T04:55:24
| 2021-02-28T12:27:23
|
Python
|
UTF-8
|
Python
| false
| false
| 313
|
py
|
ERROR_CODES = {
0: "Build completed successfully",
4: "Build failed: Missing arguments or wrong building path",
5: "Build failed: Lunching failed",
6: "Build failed: Cleaning failed",
7: "Build failed: Building failed"
}
NEEDS_LOGS_UPLOAD = {
5: "lunch_log.txt",
6: "clean_log.txt",
7: "build_log.txt"
}
|
[
"barezzisebastiano@gmail.com"
] |
barezzisebastiano@gmail.com
|
3980dfc43f8a80ea5205df2679fb75913c1ac86f
|
09e57dd1374713f06b70d7b37a580130d9bbab0d
|
/data/p3BR/R1/benchmark/startCirq246.py
|
b790fec452d5179bbb0820b6e56a6a7ce9cf6fef
|
[
"BSD-3-Clause"
] |
permissive
|
UCLA-SEAL/QDiff
|
ad53650034897abb5941e74539e3aee8edb600ab
|
d968cbc47fe926b7f88b4adf10490f1edd6f8819
|
refs/heads/main
| 2023-08-05T04:52:24.961998
| 2021-09-19T02:56:16
| 2021-09-19T02:56:16
| 405,159,939
| 2
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 3,399
|
py
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Time : 5/15/20 4:49 PM
# @File : grover.py
# qubit number=3
# total number=46
import cirq
import cirq.google as cg
from typing import Optional
import sys
from math import log2
import numpy as np
#thatsNoCode
from cirq.contrib.svg import SVGCircuit
# Symbols for the rotation angles in the QAOA circuit.
def make_circuit(n: int, input_qubit):
c = cirq.Circuit() # circuit begin
c.append(cirq.H.on(input_qubit[0])) # number=1
c.append(cirq.rx(-0.09738937226128368).on(input_qubit[2])) # number=2
c.append(cirq.H.on(input_qubit[1])) # number=33
c.append(cirq.CZ.on(input_qubit[2],input_qubit[1])) # number=34
c.append(cirq.H.on(input_qubit[1])) # number=35
c.append(cirq.H.on(input_qubit[1])) # number=3
c.append(cirq.CNOT.on(input_qubit[1],input_qubit[0])) # number=4
c.append(cirq.Y.on(input_qubit[1])) # number=15
c.append(cirq.CNOT.on(input_qubit[1],input_qubit[0])) # number=10
c.append(cirq.H.on(input_qubit[1])) # number=19
c.append(cirq.CZ.on(input_qubit[0],input_qubit[1])) # number=20
c.append(cirq.rx(-0.6000441968356504).on(input_qubit[1])) # number=28
c.append(cirq.H.on(input_qubit[1])) # number=21
c.append(cirq.H.on(input_qubit[1])) # number=30
c.append(cirq.CZ.on(input_qubit[0],input_qubit[1])) # number=31
c.append(cirq.H.on(input_qubit[1])) # number=32
c.append(cirq.X.on(input_qubit[1])) # number=23
c.append(cirq.H.on(input_qubit[2])) # number=29
c.append(cirq.H.on(input_qubit[1])) # number=36
c.append(cirq.CZ.on(input_qubit[0],input_qubit[1])) # number=37
c.append(cirq.H.on(input_qubit[1])) # number=38
c.append(cirq.H.on(input_qubit[1])) # number=43
c.append(cirq.CZ.on(input_qubit[0],input_qubit[1])) # number=44
c.append(cirq.H.on(input_qubit[1])) # number=45
c.append(cirq.Z.on(input_qubit[1])) # number=11
c.append(cirq.H.on(input_qubit[1])) # number=42
c.append(cirq.H.on(input_qubit[0])) # number=39
c.append(cirq.CZ.on(input_qubit[1],input_qubit[0])) # number=40
c.append(cirq.H.on(input_qubit[0])) # number=41
c.append(cirq.CNOT.on(input_qubit[2],input_qubit[1])) # number=26
c.append(cirq.Y.on(input_qubit[1])) # number=14
c.append(cirq.CNOT.on(input_qubit[1],input_qubit[0])) # number=5
c.append(cirq.X.on(input_qubit[1])) # number=6
c.append(cirq.Z.on(input_qubit[1])) # number=8
c.append(cirq.X.on(input_qubit[1])) # number=7
c.append(cirq.rx(-2.42845112122491).on(input_qubit[1])) # number=25
# circuit end
c.append(cirq.measure(*input_qubit, key='result'))
return c
def bitstring(bits):
return ''.join(str(int(b)) for b in bits)
if __name__ == '__main__':
qubit_count = 4
input_qubits = [cirq.GridQubit(i, 0) for i in range(qubit_count)]
circuit = make_circuit(qubit_count,input_qubits)
circuit = cg.optimized_for_sycamore(circuit, optimizer_type='sqrt_iswap')
circuit_sample_count = 2000
simulator = cirq.Simulator()
result = simulator.run(circuit, repetitions=circuit_sample_count)
frequencies = result.histogram(key='result', fold_func=bitstring)
writefile = open("../data/startCirq246.csv","w+")
print(format(frequencies),file=writefile)
print("results end", file=writefile)
print(len(circuit), file=writefile)
print(circuit,file=writefile)
writefile.close()
|
[
"wangjiyuan123@yeah.net"
] |
wangjiyuan123@yeah.net
|
59fae6464fad58ccfcab0782982ac6ed75f7aa20
|
e33c95326f6800d435125427a73460a009532a12
|
/kotti/tests/test_util.py
|
124bd71b4fdf038cd8d2f2aafb5adcc6c6dff055
|
[
"BSD-3-Clause-Modification"
] |
permissive
|
stevepiercy/Kotti
|
839269f6dc1c45645e5d868b0f17e27bea04b5ac
|
45c1627ae9fedbc24d1b817048e153f4d7a2d06d
|
refs/heads/master
| 2021-01-17T21:33:02.795714
| 2012-03-17T22:06:04
| 2012-03-17T22:06:04
| null | 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 4,134
|
py
|
from unittest import TestCase
from mock import MagicMock
from pyramid.registry import Registry
from kotti.testing import DummyRequest
from kotti.testing import UnitTestBase
class TestNestedMutationDict(TestCase):
def test_dictwrapper_basics(self):
from kotti.util import NestedMutationDict
data = {}
wrapper = NestedMutationDict(data)
changed = wrapper.changed = MagicMock()
wrapper['name'] = 'andy'
assert data == {'name': 'andy'}
assert wrapper == {'name': 'andy'}
assert wrapper['name'] == 'andy'
assert changed.call_count == 1
wrapper['age'] = 77
assert data == {'name': 'andy', 'age': 77}
assert wrapper['age'] == 77
assert wrapper['name'] == 'andy'
assert changed.call_count == 2
wrapper['age'] += 1
assert data == {'name': 'andy', 'age': 78}
assert wrapper['age'] == 78
assert changed.call_count == 3
def test_listwrapper_basics(self):
from kotti.util import NestedMutationList
data = []
wrapper = NestedMutationList(data)
changed = wrapper.changed = MagicMock()
wrapper.append(5)
assert data == [5]
assert wrapper == [5]
assert wrapper[0] == 5
assert changed.call_count == 1
wrapper.insert(0, 33)
assert data == [33, 5]
assert wrapper[0] == 33
assert changed.call_count == 2
del wrapper[0]
assert data == [5]
assert wrapper[0] == 5
assert changed.call_count == 3
def test_dictwrapper_wraps(self):
from kotti.util import NestedMutationDict
from kotti.util import NestedMutationList
wrapper = NestedMutationDict(
{'name': 'andy', 'age': 77, 'children': []})
changed = wrapper.changed = MagicMock()
wrapper['name'] = 'randy'
assert changed.call_count == 1
assert isinstance(wrapper['children'], NestedMutationList)
wrapper['children'].append({'name': 'sandy', 'age': 33})
assert changed.call_count == 2
        assert len(wrapper['children']) == 1
assert isinstance(wrapper['children'][0], NestedMutationDict)
def test_listwrapper_wraps(self):
from kotti.util import NestedMutationDict
from kotti.util import NestedMutationList
wrapper = NestedMutationList(
[{'name': 'andy', 'age': 77, 'children': []}])
changed = wrapper.changed = MagicMock()
assert isinstance(wrapper[0], NestedMutationDict)
assert isinstance(wrapper[0]['children'], NestedMutationList)
assert changed.call_count == 0
class TestRequestCache(UnitTestBase):
def setUp(self):
from kotti.util import request_cache
registry = Registry('testing')
request = DummyRequest()
request.registry = registry
super(TestRequestCache, self).setUp(registry=registry, request=request)
self.cache_decorator = request_cache
def test_it(self):
from kotti.util import clear_cache
called = []
@self.cache_decorator(lambda a, b: (a, b))
def my_fun(a, b):
called.append((a, b))
my_fun(1, 2)
my_fun(1, 2)
self.assertEqual(len(called), 1)
my_fun(2, 1)
self.assertEqual(len(called), 2)
clear_cache()
my_fun(1, 2)
self.assertEqual(len(called), 3)
def test_dont_cache(self):
from kotti.util import DontCache
called = []
def dont_cache(a, b):
raise DontCache
@self.cache_decorator(dont_cache)
def my_fun(a, b):
called.append((a, b))
my_fun(1, 2)
my_fun(1, 2)
self.assertEqual(len(called), 2)
class TestLRUCache(TestRequestCache):
def setUp(self):
from kotti.util import lru_cache
super(TestLRUCache, self).setUp()
self.cache_decorator = lru_cache
class TestTitleToName(TestCase):
def test_max_length(self):
from kotti.util import title_to_name
assert len(title_to_name(u'a' * 50)) == 40
|
[
"daniel.nouri@gmail.com"
] |
daniel.nouri@gmail.com
|
0d7d1a2e5a75a423baf17920318f62326ef3922d
|
e116a28a8e4d07bb4de1812fde957a38155eb6df
|
/shuidi.py
|
9e44887d29f123ec7c6581e5ae7720927105ca78
|
[] |
no_license
|
gl-coding/EasyPyEcharts
|
5582ddf6be3158f13663778c1038767a87756216
|
f9dbe8ad7389a6e2629643c9b7af7b9dc3bfccd5
|
refs/heads/master
| 2020-09-29T20:48:46.260306
| 2019-12-10T12:52:24
| 2019-12-10T12:52:24
| 227,119,587
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 319
|
py
|
#encoding=utf-8
from pyecharts import Liquid
liquid = Liquid("Liquid chart example")
arg = 1
if arg == 1:
liquid.add("Liquid", [0.6])
liquid.show_config()
liquid.render()
else:
liquid.add("Liquid", [0.6, 0.5, 0.4, 0.3], is_liquid_animation=False, shape='diamond')
liquid.show_config()
liquid.render()
|
[
"1451607278@qq.com"
] |
1451607278@qq.com
|
023f6ae6a4a24897cfab217ea9ff439c94ea5592
|
ac5e52a3fc52dde58d208746cddabef2e378119e
|
/exps-sblp-obt/sblp_ut=3.5_rd=1_rw=0.04_rn=4_u=0.075-0.325_p=harmonic-2/sched=RUN_trial=44/params.py
|
f73081fbfcea69ca72664c1d6db95d6e68538d5e
|
[] |
no_license
|
ricardobtxr/experiment-scripts
|
1e2abfcd94fb0ef5a56c5d7dffddfe814752eef1
|
7bcebff7ac2f2822423f211f1162cd017a18babb
|
refs/heads/master
| 2023-04-09T02:37:41.466794
| 2021-04-25T03:27:16
| 2021-04-25T03:27:16
| 358,926,457
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 248
|
py
|
{'cpus': 4,
'duration': 30,
'final_util': '3.626000',
'max_util': '3.5',
'periods': 'harmonic-2',
'release_master': False,
'res_distr': '1',
'res_nmb': '4',
'res_weight': '0.04',
'scheduler': 'RUN',
'trial': 44,
'utils': 'uni-medium-3'}
|
[
"ricardo.btxr@gmail.com"
] |
ricardo.btxr@gmail.com
|
e2d024f8f3608a4e08ea97e099b8e312e786e31b
|
0860284b9a76ac1921c65ea8694dab8c9b2d6eb1
|
/shop/migrations/0003_item_image.py
|
6e550bf2d4f303bf92abe55dae13c4465589d7f9
|
[] |
no_license
|
alphashooter/tms-z30
|
10dd2f32ab7c9fd150f27883456f5a00f2c2b8fc
|
6ce7a93e00b52432dfed22d524e2a377fceed619
|
refs/heads/master
| 2022-11-21T18:29:37.014735
| 2020-07-29T19:27:12
| 2020-07-29T19:27:12
| 281,730,996
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 412
|
py
|
# Generated by Django 3.0.8 on 2020-07-17 17:20
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('shop', '0002_order_name'),
]
operations = [
migrations.AddField(
model_name='item',
name='image',
field=models.ImageField(null=True, upload_to='D:\\workspace\\tms\\root\\media'),
),
]
|
[
"a.p.strelkov@gmail.com"
] |
a.p.strelkov@gmail.com
|
1001282a57ffd9fe8fc30ddec023555f6f51e18f
|
fef8f43025cff430d9aea080885173d9c22b3cb6
|
/etalia/nlp/migrations/0007_userfingerprint_state.py
|
a6ea1a7be2e4ec6595a5e55ede3e01cd5c623ca8
|
[] |
no_license
|
GemmaAA1/etalia-open
|
30a083141330e227ac1de9855894bfb6e476e3cc
|
260ce54d2da53c943d8b82fa9d40bb0c0df918a6
|
refs/heads/master
| 2023-03-28T03:33:13.771987
| 2017-10-30T00:55:27
| 2017-10-30T00:55:27
| 351,120,827
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 488
|
py
|
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
class Migration(migrations.Migration):
dependencies = [
('nlp', '0006_auto_20160608_1959'),
]
operations = [
migrations.AddField(
model_name='userfingerprint',
name='state',
field=models.CharField(max_length=3, blank=True, choices=[('NON', 'Uninitialized'), ('IDL', 'Idle'), ('ING', 'Syncing')]),
),
]
|
[
"nicolas.pannetier@gmail.com"
] |
nicolas.pannetier@gmail.com
|
c89d3efc23bc0b5fcc1e8a00c036bb63a7b47892
|
2f63688febd21dc3ae6b19abfa79ad313c820154
|
/0654_Maximum_Binary_Tree/try_1.py
|
8c0bfb903b3d382b63155f8803749062365dcbb9
|
[] |
no_license
|
novayo/LeetCode
|
cadd03587ee4ed6e35f60294070165afc1539ac8
|
54d0b3c237e0ffed8782915d6b75b7c6a0fe0de7
|
refs/heads/master
| 2023-08-14T00:35:15.528520
| 2023-07-30T05:56:05
| 2023-07-30T05:56:05
| 200,248,146
| 8
| 1
| null | 2022-11-19T04:37:54
| 2019-08-02T14:24:19
|
Python
|
UTF-8
|
Python
| false
| false
| 2,129
|
py
|
from typing import List, Optional
class Node:
def __init__(self, val, l_index, r_index, left=None, right=None):
self.val = val
self.l_index = l_index
self.r_index = r_index
self.left = left
self.right = right
class SegmentTree:
def __init__(self, arr):
self.root = self.build(arr)
def build(self, arr):
def _build(l, r):
if l == r:
return Node(arr[l], l, r)
mid = l + (r-l) // 2
left = _build(l, mid)
right = _build(mid+1, r)
return Node(max(left.val, right.val), l, r, left, right)
return _build(0, len(arr)-1)
def query(self, l, r):
def _query(root, l, r):
if l > r:
return -float('inf')
if root.l_index == l and root.r_index == r:
return root.val
mid = root.l_index + (root.r_index-root.l_index) // 2
if mid > r:
return _query(root.left, l, r)
elif mid < l:
return _query(root.right, l, r)
else:
return max(
_query(root.left, l, mid),
_query(root.right, mid+1, r)
)
return _query(self.root, l, r)
# Definition for a binary tree node.
# class TreeNode:
# def __init__(self, val=0, left=None, right=None):
# self.val = val
# self.left = left
# self.right = right
class Solution:
def constructMaximumBinaryTree(self, nums: List[int]) -> Optional[TreeNode]:
index = {}
for i, num in enumerate(nums):
index[num] = i
tree = SegmentTree(nums)
def build(l, r):
if l > r:
return None
if l == r:
return TreeNode(nums[l])
_max_num = tree.query(l, r)
mid = index[_max_num]
left = build(l, mid-1)
right = build(mid+1, r)
return TreeNode(_max_num, left, right)
return build(0, len(nums)-1)
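# --- Editor's sketch (not part of the original solution) ---
# Assuming the Node and SegmentTree classes above are in scope, a quick
# sanity check: every range-maximum query should agree with Python's
# built-in max() over the same slice.
nums = [3, 2, 1, 6, 0, 5]
tree = SegmentTree(nums)
for lo in range(len(nums)):
    for hi in range(lo, len(nums)):
        assert tree.query(lo, hi) == max(nums[lo:hi + 1])
print(tree.query(0, 5))  # 6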
|
[
"eric_shih@trendmicro.com"
] |
eric_shih@trendmicro.com
|
6841fc1c51d3d8f1979c9ba3c3f3f3710cdf8a50
|
1819b161df921a0a7c4da89244e1cd4f4da18be4
|
/WhatsApp_FarmEasy/env/lib/python3.6/site-packages/tests/integration/api/v2010/account/usage/record/test_monthly.py
|
86bba622608f61100272a0416d541f1e3a226cbb
|
[
"MIT"
] |
permissive
|
sanchaymittal/FarmEasy
|
889b290d376d940d9b3ae2fa0620a573b0fd62a0
|
5b931a4287d56d8ac73c170a6349bdaae71bf439
|
refs/heads/master
| 2023-01-07T21:45:15.532142
| 2020-07-18T14:15:08
| 2020-07-18T14:15:08
| 216,203,351
| 3
| 2
|
MIT
| 2023-01-04T12:35:40
| 2019-10-19T12:32:15
|
JavaScript
|
UTF-8
|
Python
| false
| false
| 5,301
|
py
|
# coding=utf-8
"""
This code was generated by
\ / _ _ _| _ _
| (_)\/(_)(_|\/| |(/_ v1.0.0
/ /
"""
from tests import IntegrationTestCase
from tests.holodeck import Request
from twilio.base.exceptions import TwilioException
from twilio.http.response import Response
class MonthlyTestCase(IntegrationTestCase):
def test_list_request(self):
self.holodeck.mock(Response(500, ''))
with self.assertRaises(TwilioException):
self.client.api.v2010.accounts(sid="ACaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa") \
.usage \
.records \
.monthly.list()
self.holodeck.assert_has_request(Request(
'get',
'https://api.twilio.com/2010-04-01/Accounts/ACaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/Usage/Records/Monthly.json',
))
def test_read_full_response(self):
self.holodeck.mock(Response(
200,
'''
{
"end": 0,
"first_page_uri": "/2010-04-01/Accounts/ACaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/Usage/Records/Monthly?Page=0&PageSize=1",
"last_page_uri": "/2010-04-01/Accounts/ACaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/Usage/Records/Monthly?Page=3449&PageSize=1",
"next_page_uri": null,
"num_pages": 3450,
"page": 0,
"page_size": 1,
"previous_page_uri": null,
"start": 0,
"total": 3450,
"uri": "/2010-04-01/Accounts/ACaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/Usage/Records/Monthly",
"usage_records": [
{
"account_sid": "ACaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
"api_version": "2010-04-01",
"category": "sms-inbound-shortcode",
"count": "0",
"count_unit": "messages",
"description": "Short Code Inbound SMS",
"end_date": "2015-09-04",
"price": "0",
"price_unit": "usd",
"start_date": "2015-09-01",
"subresource_uris": {
"all_time": "/2010-04-01/Accounts/ACaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/Usage/Records/AllTime.json?Category=sms-inbound-shortcode",
"daily": "/2010-04-01/Accounts/ACaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/Usage/Records/Daily.json?Category=sms-inbound-shortcode",
"last_month": "/2010-04-01/Accounts/ACaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/Usage/Records/LastMonth.json?Category=sms-inbound-shortcode",
"monthly": "/2010-04-01/Accounts/ACaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/Usage/Records/Monthly.json?Category=sms-inbound-shortcode",
"this_month": "/2010-04-01/Accounts/ACaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/Usage/Records/ThisMonth.json?Category=sms-inbound-shortcode",
"today": "/2010-04-01/Accounts/ACaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/Usage/Records/Today.json?Category=sms-inbound-shortcode",
"yearly": "/2010-04-01/Accounts/ACaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/Usage/Records/Yearly.json?Category=sms-inbound-shortcode",
"yesterday": "/2010-04-01/Accounts/ACaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/Usage/Records/Yesterday.json?Category=sms-inbound-shortcode"
},
"uri": "/2010-04-01/Accounts/ACaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/Usage/Records/Monthly?Category=sms-inbound-shortcode&StartDate=2015-09-01&EndDate=2015-09-04",
"usage": "0",
"usage_unit": "messages"
}
]
}
'''
))
actual = self.client.api.v2010.accounts(sid="ACaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa") \
.usage \
.records \
.monthly.list()
self.assertIsNotNone(actual)
def test_read_empty_response(self):
self.holodeck.mock(Response(
200,
'''
{
"end": 0,
"first_page_uri": "/2010-04-01/Accounts/ACaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/Usage/Records/Monthly?Page=0&PageSize=1",
"last_page_uri": "/2010-04-01/Accounts/ACaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/Usage/Records/Monthly?Page=3449&PageSize=1",
"next_page_uri": null,
"num_pages": 3450,
"page": 0,
"page_size": 1,
"previous_page_uri": null,
"start": 0,
"total": 3450,
"uri": "/2010-04-01/Accounts/ACaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/Usage/Records/Monthly",
"usage_records": []
}
'''
))
actual = self.client.api.v2010.accounts(sid="ACaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa") \
.usage \
.records \
.monthly.list()
self.assertIsNotNone(actual)
|
[
"sanchaymittal@gmail.com"
] |
sanchaymittal@gmail.com
|
be67615eecbd94519a382643f0573620f7f41288
|
5985dde8b1fd6e1f40bf51ccc9a4759cceff4f5a
|
/MobileNet/handler.py
|
2a370fb1f9e8e6d4c98f5e5f8bd9ca067ed4dd04
|
[] |
no_license
|
johndpope/Session3-FaceRecognition
|
9297811f337c24b3c5c999a8f31b17c5e4d915d6
|
66cc77a42b6e85c7e5d967fe660954ff4e097349
|
refs/heads/master
| 2022-12-04T09:56:53.323310
| 2020-08-16T18:03:15
| 2020-08-16T18:03:15
| null | 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 2,957
|
py
|
try:
import unzip_requirements
except ImportError:
pass
import torch
import torchvision
import torchvision.transforms as transforms
from PIL import Image
import boto3
import os
import io
import json
import base64
from requests_toolbelt.multipart import decoder
print("Import End...")
S3_BUCKET = os.environ['S3_BUCKET'] if 'S3_BUCKET' in os.environ else 'evadebs1'
MODEL_PATH = os.environ['MODEL_PATH'] if 'MODEL_PATH' in os.environ else 'model_mobilenet.pt'
print('Downloading model...')
s3 = boto3.client('s3')
print('Downloaded model...')
try:
    if not os.path.isfile(MODEL_PATH):
        print('Model not cached locally, downloading from S3...')
obj = s3.get_object(Bucket=S3_BUCKET, Key=MODEL_PATH)
print('Creating ByteStream')
bytestream = io.BytesIO(obj['Body'].read())
print("Loading Model")
model = torch.jit.load(bytestream)
print("Model Loaded...")
except Exception as e:
print(repr(e))
raise(e)
print('Model is ready.')
def transform_image(image_bytes):
try:
transformations = transforms.Compose([
transforms.Resize(255),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])])
image = Image.open(io.BytesIO(image_bytes))
return transformations(image).unsqueeze(0)
except Exception as e:
print(repr(e))
raise(e)
def get_prediction(image_bytes):
tensor = transform_image(image_bytes=image_bytes)
return model(tensor).argmax().item()
def classify_image(event, context):
try:
print('classify_image')
content_type_header = event['headers']['content-type']
print(event['body'])
body = base64.b64decode(event['body'])
print('BODY LOADED')
picture = decoder.MultipartDecoder(body, content_type_header).parts[0]
prediction = get_prediction(image_bytes=picture.content)
print(prediction)
filename = picture.headers[b'Content-Disposition'].decode().split(';')[1].split('=')[1]
if len(filename) < 4:
filename = picture.headers[b'Content-Disposition'].decode().split(';')[2].split('=')[1]
return {
'statusCode': 200,
'headers': {
'Content-Type': 'application/json',
'Access-Control-Allow-Origin': '*',
'Access-Control-Allow-Credentials': True
},
'body': json.dumps({'file': filename.replace('"', ''), 'predicted': prediction})
}
except Exception as e:
print(repr(e))
return {
"statusCode": 500,
"headers": {
'Content-Type': 'application/json',
'Access-Control-Allow-Origin': '*',
'Access-Control-Allow-Credentials': True
},
'body': json.dumps({'error': repr(e)})
}
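# --- Editor's sketch (not part of the original handler) ---
# The preprocessing is the only piece that can be exercised without the
# S3-hosted model: feed transform_image() a synthetic JPEG and check that
# it yields a 1x3x224x224 batch tensor. Defined but not called here so the
# Lambda cold start is unaffected.
def _smoke_test_transform():
    buffer = io.BytesIO()
    Image.new('RGB', (320, 240), color='red').save(buffer, format='JPEG')
    tensor = transform_image(buffer.getvalue())
    assert tuple(tensor.shape) == (1, 3, 224, 224)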
|
[
"noreply@github.com"
] |
johndpope.noreply@github.com
|
1554bdd605c0aae62a8a75aebbd755fc93e965bd
|
163bbb4e0920dedd5941e3edfb2d8706ba75627d
|
/Code/CodeRecords/2444/60763/297365.py
|
38960911bc45bebc5c79af9980fd6a246755bedb
|
[] |
no_license
|
AdamZhouSE/pythonHomework
|
a25c120b03a158d60aaa9fdc5fb203b1bb377a19
|
ffc5606817a666aa6241cfab27364326f5c066ff
|
refs/heads/master
| 2022-11-24T08:05:22.122011
| 2020-07-28T16:21:24
| 2020-07-28T16:21:24
| 259,576,640
| 2
| 1
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 458
|
py
|
t = input()
s = []
s.append(t[0:t.find('k')-2])
s.append(t[t.find('k'):t.find('t')-2])
s.append(t[t.find('t'):len(t)])
s1 = eval(s[0][s[0].find('['):len(s[0])])
k = int(s[1][s[1].rfind(' '):len(s[1])])
t = int(s[2][s[2].rfind(' '):len(s[2])])
isFit = False
for i in range(len(s1)):
    for j in range(i+1, min(i+k+1, len(s1))):  # index distance at most k
        if abs(s1[i]-s1[j]) <= t:
            isFit = True
            break
if isFit:
print('true')
else:
print('false')
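# --- Editor's sketch (not part of the original submission) ---
# The check above reads like LeetCode 220 ("Contains Duplicate III"):
# are there indices i != j with |i - j| <= k and |nums[i] - nums[j]| <= t?
# A self-contained restatement, decoupled from the string parsing:
def contains_nearby_almost_duplicate(nums, k, t):
    for i in range(len(nums)):
        for j in range(i + 1, min(i + k + 1, len(nums))):
            if abs(nums[i] - nums[j]) <= t:
                return True
    return False
# contains_nearby_almost_duplicate([1, 2, 3, 1], 3, 0)       -> True
# contains_nearby_almost_duplicate([1, 5, 9, 1, 5, 9], 2, 3) -> False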
|
[
"1069583789@qq.com"
] |
1069583789@qq.com
|
f4319ed7847d2e481fad2b86448abaaccce5248e
|
87e70b50a582c2bed372e2858438493ee1105905
|
/Description/extracter.py
|
26e7fa1f838684647bdef9800d1013df8d4dc72e
|
[
"MIT"
] |
permissive
|
adi-797/AIAMI
|
868b96545ffa5ef9ddb214960147c312a956d119
|
37ea2bf61e85bf879a0f4a1014f2e93b87301582
|
refs/heads/master
| 2020-04-10T13:07:43.580065
| 2017-05-18T22:39:38
| 2017-05-18T22:39:38
| 161,041,442
| 1
| 0
|
MIT
| 2018-12-09T13:19:48
| 2018-12-09T13:19:48
| null |
UTF-8
|
Python
| false
| false
| 7,042
|
py
|
# -*- coding: utf-8 -*-
import numpy as np
import scipy.stats
from pylab import imshow, show, figure
import essentia.standard
from utils import config
def extractAllDescriptors(signal):
"""
Extracts the descriptors expected for the analysis of a given audio file.
"""
described = {}
described['Silence'] = _silence = silence(signal)
    signal = signal[config.hopSize * _silence[0]:config.hopSize * _silence[1]] / np.max(
        signal)  # keep only the voiced part and normalize so all signals are on a comparable scale
described['mfcc'] = mfccs(signal)
described['Inharmonicity'] = inharmonicity_tesis(signal)
described['Energy'] = energy(signal)
described['LogAttackTime'] = log_attack_time(signal)
described['Standard-Dev'] = standard_dev(signal)
described['Variance'] = variance(signal)
described['Skewness'] = skewness(signal)
described['kurtosis'] = kurtosis(signal)
# described['mfcc-1st'] = np.gradient(described['mfcc'])[1]
# described['mfcc-2nd'] = np.gradient(described['mfcc-1st'])[1]
described['Inharmonicity-1st'] = np.gradient(described['Inharmonicity'])
described['Inharmonicity-2nd'] = np.gradient(described['Inharmonicity-1st'])
described['mfcc-Std-f'], described['mfcc-Var-f'], described['mfcc-Skew-f'], described['mfcc-Kurt-f']\
= mfcc_std_frequency(described)
return described
def mfccs(audio, window_size=config.windowSize, fft_size=config.fftSize, hop_size=config.hopSize, plot=False):
"""
Calculates the mfcc for a given audio file.
"""
window_hann = essentia.standard.Windowing(size=window_size, type='hann')
spectrum = essentia.standard.Spectrum(
size=fft_size) # FFT() would return the complex FFT, here we just want the magnitude spectrum
    mfcc = essentia.standard.MFCC(numberCoefficients=12, inputSize=fft_size // 2 + 1)
pool = essentia.Pool()
for frame in essentia.standard.FrameGenerator(audio, frameSize=fft_size, hopSize=hop_size):
mfcc_bands, mfcc_coefficients = mfcc(spectrum(window_hann(frame)))
pool.add('lowlevel.mfcc', mfcc_coefficients)
pool.add('lowlevel.mfcc_bands', mfcc_bands)
if plot:
imshow(pool['lowlevel.mfcc'].T[1:, :], aspect='auto')
show() # unnecessary if you started "ipython --pylab"
figure()
imshow(pool['lowlevel.mfcc_bands'].T, aspect='auto', interpolation='nearest')
# We ignored the first MFCC coefficient to disregard the power of the signal and only plot its spectral shape
return pool['lowlevel.mfcc'].T
def inharmonicity_tesis(audio, window_size=config.windowSize, spectrum_size=config.fftSize,
hop_size=config.hopSize, sample_rate=config.sampleRate):
""" Setting up everything """
window_bh = essentia.standard.Windowing(size=window_size, type='blackmanharris92')
spectrum = essentia.standard.Spectrum(size=spectrum_size) # magnitude spectrum
peaks = essentia.standard.SpectralPeaks(magnitudeThreshold=-120, sampleRate=sample_rate)
window_hann = essentia.standard.Windowing(size=window_size, type='hann')
pitch = essentia.standard.PitchYin(frameSize=window_size, sampleRate=sample_rate)
pitch_fft = essentia.standard.PitchYinFFT(frameSize=window_size, sampleRate=sample_rate)
harmonicpeaks = essentia.standard.HarmonicPeaks()
inharmonicity = essentia.standard.Inharmonicity()
vector_inharmonicity = np.array([])
""" Actual signal processing """
for frame in essentia.standard.FrameGenerator(audio, frameSize=spectrum_size, hopSize=hop_size):
frequency, amplitude = peaks(20 * np.log10(spectrum(window_bh(frame))))
        if 0 in frequency:
            frequency = np.array([x for x in frequency if x != 0])  # drop the information about the energy at 0 Hz
            amplitude = amplitude[1:]  # assume the 0 Hz peak sits in the first position; otherwise this will raise an error
if len(frequency) == 0:
continue
value_pitch, confidence = pitch(window_hann(frame))
value_pitch_fft, confidence_fft = pitch_fft(spectrum(window_hann(frame)))
        if confidence < 0.2 and confidence_fft < 0.2:
            # skip frames where both pitch estimates are unreliable
            continue
        else:
            # keep whichever estimate comes with the higher confidence
            if confidence <= confidence_fft:
                value_pitch = value_pitch_fft
harmonic_frequencies, harmonic_magnitudes = harmonicpeaks(frequency, amplitude, value_pitch)
vector_inharmonicity = np.append(vector_inharmonicity,
inharmonicity(harmonic_frequencies, harmonic_magnitudes))
return vector_inharmonicity
def energy(audio):
return sum(audio * audio)
def log_attack_time(audio):
enveloped = essentia.standard.Envelope(attackTime=config.attackTime, releaseTime=config.releaseTime)(audio)
return essentia.standard.LogAttackTime(startAttackThreshold=config.startAttackThreshold)(enveloped)
def standard_dev(audio):
return np.std(audio)
def variance(audio):
return np.var(audio)
def skewness(audio):
return scipy.stats.skew(audio)
def kurtosis(audio):
return scipy.stats.kurtosis(audio)
def silence(audio, fft_size=config.fftSize, hop_size=config.hopSize):
"""
    Detects the beginning and the end of the audio file.
The output is a vector where the first 1 indicates the begining of the audio file and the last 1 the ending.
The threshold is set at 90dB under the maximum of the file.
The first 1 is set one frame before the real start of the sound.
"""
threshold = 90.0
real_threshold = 10.0 ** ((20.0 * np.log10(max(audio)) - threshold) / 20.0)
l = []
for frame in essentia.standard.FrameGenerator(audio, frameSize=fft_size, hopSize=hop_size):
if sum(frame * frame) >= real_threshold:
l.append(1)
else:
l.append(0)
start = l.index(1)
if start != 0:
start -= 1
end = len(l) - l[::-1].index(1)
if end != len(l):
end += 1
return [start, end]
def mfcc_std_frequency(described):
    std = []
    var = []
    kurt = []
    skew = []
    for iii in range(len(described['mfcc'][0])):  # time axis (frames)
        inter = []  # all coefficients of the current frame
        for jjj in range(len(described['mfcc'])):  # frequency axis (coefficients)
            inter.append(described['mfcc'][jjj][iii])
        std.append(np.std(inter))  # standard deviation of each frame across all frequencies
        var.append(np.var(inter))
        skew.append(scipy.stats.skew(inter))
        kurt.append(scipy.stats.kurtosis(inter))
    return std, var, skew, kurt
def extractor(signal, sample_rate=config.sampleRate):
"""
Extracts pretty much every descriptors in Essentia.
"""
algor = essentia.standard.Extractor(sampleRate=sample_rate, dynamicsFrameSize=4096, dynamicsHopSize=2048,
lowLevelFrameSize=2048, lowLevelHopSize=1024, namespace="Tesis", rhythm=False)
output = algor(signal)
return output
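# --- Editor's sketch (not part of the original module) ---
# The per-frame statistics in mfcc_std_frequency() can be computed in one
# shot with numpy, assuming described['mfcc'] is a 2-D array of shape
# (coefficients, frames) as returned by mfccs():
def mfcc_std_frequency_vectorized(described):
    mfcc = np.asarray(described['mfcc'])
    std = np.std(mfcc, axis=0)  # one value per frame
    var = np.var(mfcc, axis=0)
    skew = scipy.stats.skew(mfcc, axis=0)
    kurt = scipy.stats.kurtosis(mfcc, axis=0)
    return std, var, skew, kurt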
|
[
"andimarafioti@gmail.com"
] |
andimarafioti@gmail.com
|
41764b41e79692b1c50cae5f31e2951e9799da89
|
aef40813a1b92cec0ea4fc25ec1d4a273f9bfad4
|
/Q05__/29_Minesweeper/test.py
|
e58826b09c60caa3dfe49e81f51a85d1227372c6
|
[
"Apache-2.0"
] |
permissive
|
hsclinical/leetcode
|
e9d0e522e249a24b28ab00ddf8d514ec855110d7
|
48a57f6a5d5745199c5685cd2c8f5c4fa293e54a
|
refs/heads/main
| 2023-06-14T11:28:59.458901
| 2021-07-09T18:57:44
| 2021-07-09T18:57:44
| 319,078,569
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 210
|
py
|
#!/usr/bin/python
from Solution import Solution
obj = Solution()
A = [["E","E","E","E","E"],["E","E","M","E","E"],["E","E","E","E","E"],["E","E","E","E","E"]]
B = [3,0]
out = obj.updateBoard(A, B)
print(out)
|
[
"luhongisu@gmail.com"
] |
luhongisu@gmail.com
|
4c7e8d890fdb26a9cb63bf0c0f019cabf548f029
|
ae33eba00ff1e0742d74948a38a779cabb825a84
|
/tests/spdx/test_main.py
|
00e0910044d704a32244bf58fc1231526e67f6f7
|
[
"LicenseRef-scancode-proprietary-license",
"LGPL-3.0-or-later",
"LicenseRef-scancode-free-unknown",
"LGPL-3.0-only",
"MIT"
] |
permissive
|
renovate-tests/poetry
|
a687672448c4baa1027b2b36bf46c1f9af5107e7
|
7c9569cd13c151c7682a7bfda0f1ec9e2ff07100
|
refs/heads/master
| 2020-12-15T14:13:40.895896
| 2020-06-30T13:34:46
| 2020-06-30T13:34:46
| 233,995,693
| 0
| 0
|
MIT
| 2020-01-15T11:24:06
| 2020-01-15T04:07:17
|
Python
|
UTF-8
|
Python
| false
| false
| 1,111
|
py
|
import pytest
from poetry.spdx import license_by_id
def test_license_by_id():
license = license_by_id("MIT")
assert license.id == "MIT"
assert license.name == "MIT License"
assert license.is_osi_approved
assert not license.is_deprecated
license = license_by_id("LGPL-3.0-or-later")
assert license.id == "LGPL-3.0-or-later"
assert license.name == "GNU Lesser General Public License v3.0 or later"
assert license.is_osi_approved
assert not license.is_deprecated
def test_license_by_id_is_case_insensitive():
license = license_by_id("mit")
assert license.id == "MIT"
license = license_by_id("miT")
assert license.id == "MIT"
def test_license_by_id_with_full_name():
license = license_by_id("GNU Lesser General Public License v3.0 or later")
assert license.id == "LGPL-3.0-or-later"
assert license.name == "GNU Lesser General Public License v3.0 or later"
assert license.is_osi_approved
assert not license.is_deprecated
def test_license_by_id_invalid():
with pytest.raises(ValueError):
license_by_id("invalid")
|
[
"sebastien@eustace.io"
] |
sebastien@eustace.io
|
03f4010173fe4b454467e5826a41e957adbd509e
|
27ff56afaeff7cf6f38cf457896b50dee90b1397
|
/test_code.py
|
eec8fe0efeceeec91ee94e8978ef8d89f7e7c3dd
|
[] |
no_license
|
romulovitor/Dica_Python_Linkedin
|
55352bdc7c76c1ce7b88d0e5e36be37bca7dd466
|
9c1e5cf26681188935e0b3a41070960fe5dfd9b8
|
refs/heads/master
| 2023-03-13T04:14:37.778946
| 2021-03-01T10:43:04
| 2021-03-01T10:43:04
| null | 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 644
|
py
|
import socket
import sys
from geolite2 import geolite2
host = ' '.join(sys.argv[1:])
ip = socket.gethostbyname(host)
reader = geolite2.reader()
def get_ip_location(ip):
    location = reader.get(ip)  # may be None for unknown addresses
    try:
        country = location["country"]["names"]["en"]
    except (KeyError, TypeError):
        country = "Unknown"
    try:
        subdivision = location["subdivisions"][0]["names"]["en"]
    except (KeyError, IndexError, TypeError):
        subdivision = "Unknown"
    try:
        city = location["city"]["names"]["en"]
    except (KeyError, TypeError):
        city = "Unknown"
    return country, subdivision, city
country, sub, city = get_ip_location(ip=ip)
print(country)
print(sub)
print(city)
|
[
"ofc.erickson@gmail.com"
] |
ofc.erickson@gmail.com
|
fc06980868a73d533603e5f088b0902c2a59e497
|
0c4e5d83718644a0698c8a9faf08eab80932403d
|
/spacy-en_core_web_sm/test_data.py
|
3bba0e85c799f67adf1912a8d8da079ddba8123e
|
[
"Apache-2.0"
] |
permissive
|
danielfrg/conda-recipes
|
8a00f931345fce1d8e0f5f07a78314290c4766d8
|
1d4f4ae8eba54d007659b359f0c9e0ea2c52eb0a
|
refs/heads/master
| 2021-01-01T19:49:46.149994
| 2019-05-17T04:42:46
| 2019-05-17T04:42:46
| 98,701,788
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 576
|
py
|
import sys
import spacy
from spacy import util
from spacy.deprecated import resolve_model_name
def is_present(name):
data_path = util.get_data_path()
model_name = resolve_model_name(name)
model_path = data_path / model_name
return model_path.exists()
assert is_present('en') is True
# Test for English
nlp = spacy.load('en')
doc = nlp('London is a big city in the United Kingdom.')
assert doc[0].text == 'London'
assert doc[0].ent_iob > 0
assert doc[0].ent_type_ == 'GPE'
assert doc[1].text == 'is'
assert doc[1].ent_iob > 0
assert doc[1].ent_type_ == ''
|
[
"df.rodriguez143@gmail.com"
] |
df.rodriguez143@gmail.com
|
cb4ebb211a087c41d1a00d5f5749e3a1bf9ef481
|
27c9b374a75550252ddfe5da400fad891c6de590
|
/chars/link_scripts/Camera.py
|
ca10858645dead4a643555b156084f88dd92eff2
|
[] |
no_license
|
Dynamique-Zak/Zelda_BlenderGame
|
03065416939deb3ce18007909ccc278c736baad0
|
0f5d5d15bfa79e9f8ea15f0ebcb76bce92f77a21
|
refs/heads/master
| 2016-08-13T00:12:34.746520
| 2016-02-19T23:18:27
| 2016-02-19T23:18:27
| 49,572,402
| 30
| 16
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 354
|
py
|
from bge import logic
scene = logic.getCurrentScene()
def obstacle(cont):
ray = cont.sensors["RayForward"]
if ray.positive:
hitObj = ray.hitObject
if hitObj.name != "Link":
cam = scene.objects['MainCam']
logic.camObstaclePosition = ray.hitPosition
cam.worldPosition = logic.camObstaclePosition
else:
logic.camObstaclePosition = None
|
[
"schartier.isaac@gmail.com"
] |
schartier.isaac@gmail.com
|
dd6f62558664e5e4b170e264cc8c392da2512c42
|
48e124e97cc776feb0ad6d17b9ef1dfa24e2e474
|
/sdk/python/pulumi_azure_native/peering/v20210601/get_connection_monitor_test.py
|
3a6694d9417c7518103e56ca0869515a10726815
|
[
"BSD-3-Clause",
"Apache-2.0"
] |
permissive
|
bpkgoud/pulumi-azure-native
|
0817502630062efbc35134410c4a784b61a4736d
|
a3215fe1b87fba69294f248017b1591767c2b96c
|
refs/heads/master
| 2023-08-29T22:39:49.984212
| 2021-11-15T12:43:41
| 2021-11-15T12:43:41
| null | 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 7,852
|
py
|
# coding=utf-8
# *** WARNING: this file was generated by the Pulumi SDK Generator. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from ... import _utilities
__all__ = [
'GetConnectionMonitorTestResult',
'AwaitableGetConnectionMonitorTestResult',
'get_connection_monitor_test',
'get_connection_monitor_test_output',
]
@pulumi.output_type
class GetConnectionMonitorTestResult:
"""
The Connection Monitor Test class.
"""
def __init__(__self__, destination=None, destination_port=None, id=None, is_test_successful=None, name=None, path=None, provisioning_state=None, source_agent=None, test_frequency_in_sec=None, type=None):
if destination and not isinstance(destination, str):
raise TypeError("Expected argument 'destination' to be a str")
pulumi.set(__self__, "destination", destination)
if destination_port and not isinstance(destination_port, int):
raise TypeError("Expected argument 'destination_port' to be a int")
pulumi.set(__self__, "destination_port", destination_port)
if id and not isinstance(id, str):
raise TypeError("Expected argument 'id' to be a str")
pulumi.set(__self__, "id", id)
if is_test_successful and not isinstance(is_test_successful, bool):
raise TypeError("Expected argument 'is_test_successful' to be a bool")
pulumi.set(__self__, "is_test_successful", is_test_successful)
if name and not isinstance(name, str):
raise TypeError("Expected argument 'name' to be a str")
pulumi.set(__self__, "name", name)
if path and not isinstance(path, list):
raise TypeError("Expected argument 'path' to be a list")
pulumi.set(__self__, "path", path)
if provisioning_state and not isinstance(provisioning_state, str):
raise TypeError("Expected argument 'provisioning_state' to be a str")
pulumi.set(__self__, "provisioning_state", provisioning_state)
if source_agent and not isinstance(source_agent, str):
raise TypeError("Expected argument 'source_agent' to be a str")
pulumi.set(__self__, "source_agent", source_agent)
if test_frequency_in_sec and not isinstance(test_frequency_in_sec, int):
raise TypeError("Expected argument 'test_frequency_in_sec' to be a int")
pulumi.set(__self__, "test_frequency_in_sec", test_frequency_in_sec)
if type and not isinstance(type, str):
raise TypeError("Expected argument 'type' to be a str")
pulumi.set(__self__, "type", type)
@property
@pulumi.getter
def destination(self) -> Optional[str]:
"""
The Connection Monitor test destination
"""
return pulumi.get(self, "destination")
@property
@pulumi.getter(name="destinationPort")
def destination_port(self) -> Optional[int]:
"""
The Connection Monitor test destination port
"""
return pulumi.get(self, "destination_port")
@property
@pulumi.getter
def id(self) -> str:
"""
The ID of the resource.
"""
return pulumi.get(self, "id")
@property
@pulumi.getter(name="isTestSuccessful")
def is_test_successful(self) -> bool:
"""
The flag that indicates if the Connection Monitor test is successful or not.
"""
return pulumi.get(self, "is_test_successful")
@property
@pulumi.getter
def name(self) -> str:
"""
The name of the resource.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def path(self) -> Sequence[str]:
"""
The path representing the Connection Monitor test.
"""
return pulumi.get(self, "path")
@property
@pulumi.getter(name="provisioningState")
def provisioning_state(self) -> str:
"""
The provisioning state of the resource.
"""
return pulumi.get(self, "provisioning_state")
@property
@pulumi.getter(name="sourceAgent")
def source_agent(self) -> Optional[str]:
"""
The Connection Monitor test source agent
"""
return pulumi.get(self, "source_agent")
@property
@pulumi.getter(name="testFrequencyInSec")
def test_frequency_in_sec(self) -> Optional[int]:
"""
The Connection Monitor test frequency in seconds
"""
return pulumi.get(self, "test_frequency_in_sec")
@property
@pulumi.getter
def type(self) -> str:
"""
The type of the resource.
"""
return pulumi.get(self, "type")
class AwaitableGetConnectionMonitorTestResult(GetConnectionMonitorTestResult):
# pylint: disable=using-constant-test
def __await__(self):
if False:
yield self
return GetConnectionMonitorTestResult(
destination=self.destination,
destination_port=self.destination_port,
id=self.id,
is_test_successful=self.is_test_successful,
name=self.name,
path=self.path,
provisioning_state=self.provisioning_state,
source_agent=self.source_agent,
test_frequency_in_sec=self.test_frequency_in_sec,
type=self.type)
def get_connection_monitor_test(connection_monitor_test_name: Optional[str] = None,
peering_service_name: Optional[str] = None,
resource_group_name: Optional[str] = None,
opts: Optional[pulumi.InvokeOptions] = None) -> AwaitableGetConnectionMonitorTestResult:
"""
The Connection Monitor Test class.
:param str connection_monitor_test_name: The name of the connection monitor test
:param str peering_service_name: The name of the peering service.
:param str resource_group_name: The name of the resource group.
"""
__args__ = dict()
__args__['connectionMonitorTestName'] = connection_monitor_test_name
__args__['peeringServiceName'] = peering_service_name
__args__['resourceGroupName'] = resource_group_name
if opts is None:
opts = pulumi.InvokeOptions()
if opts.version is None:
opts.version = _utilities.get_version()
__ret__ = pulumi.runtime.invoke('azure-native:peering/v20210601:getConnectionMonitorTest', __args__, opts=opts, typ=GetConnectionMonitorTestResult).value
return AwaitableGetConnectionMonitorTestResult(
destination=__ret__.destination,
destination_port=__ret__.destination_port,
id=__ret__.id,
is_test_successful=__ret__.is_test_successful,
name=__ret__.name,
path=__ret__.path,
provisioning_state=__ret__.provisioning_state,
source_agent=__ret__.source_agent,
test_frequency_in_sec=__ret__.test_frequency_in_sec,
type=__ret__.type)
@_utilities.lift_output_func(get_connection_monitor_test)
def get_connection_monitor_test_output(connection_monitor_test_name: Optional[pulumi.Input[str]] = None,
peering_service_name: Optional[pulumi.Input[str]] = None,
resource_group_name: Optional[pulumi.Input[str]] = None,
opts: Optional[pulumi.InvokeOptions] = None) -> pulumi.Output[GetConnectionMonitorTestResult]:
"""
The Connection Monitor Test class.
:param str connection_monitor_test_name: The name of the connection monitor test
:param str peering_service_name: The name of the peering service.
:param str resource_group_name: The name of the resource group.
"""
...
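# --- Editor's sketch (not part of the generated SDK file) ---
# A hedged usage example of the data function above, following the usual
# Pulumi invoke pattern; all resource names are placeholders.
def example_usage():
    result = get_connection_monitor_test(
        connection_monitor_test_name="example-test",
        peering_service_name="example-peering-service",
        resource_group_name="example-rg",
    )
    pulumi.export("connectionTestSuccessful", result.is_test_successful)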
|
[
"noreply@github.com"
] |
bpkgoud.noreply@github.com
|
dc07aacdc0cad289a408330ade332418e2a6e981
|
ddd466457316662a1455bae429740eb3c8411444
|
/intro/3_6_cond_if_bonus_pints.py
|
9deed81970a0407735ec582a44c78263e764d6b5
|
[] |
no_license
|
fingerman/python_fundamentals
|
9ef46e51d6e9b8328e9c949fa0f807f30bd6e482
|
1fb604220922530d1171200a3cf3a927c028a6ed
|
refs/heads/master
| 2023-01-09T12:02:26.712810
| 2020-01-22T16:12:32
| 2020-01-22T16:12:32
| 151,728,846
| 0
| 0
| null | 2022-12-27T15:34:12
| 2018-10-05T13:58:10
|
Python
|
UTF-8
|
Python
| false
| false
| 197
|
py
|
a = int(input())
b = 0
if a <= 100:
b = 5
if a > 100:
b = 0.2*a
if a > 1000:
b = 0.1*a
if a % 2 == 0:
b += 1
if a % 10 == 5:
b += 2
print(b)
print(a+b)
|
[
"adamov.george@gmail.com"
] |
adamov.george@gmail.com
|
ab916be697d2f95c148df57c3d60fd791d1f76dd
|
ca6efbd1754b4d65ef9595c648b9d766c96abcbe
|
/douyin_spider/handler/video.py
|
0a67086353e78e9db4d0f1aa2ac8d7eb91403367
|
[] |
no_license
|
JaleeLi/douyin_spider
|
cc30b7bceb62d1440b97de99406b035c0aff9c06
|
aa3df5b0fddc633d4c6ae21c509514ccb664fbb2
|
refs/heads/master
| 2020-05-25T03:52:51.863799
| 2019-05-16T12:49:00
| 2019-05-16T12:49:00
| 187,615,023
| 1
| 0
| null | 2019-05-20T09:59:25
| 2019-05-20T09:59:25
| null |
UTF-8
|
Python
| false
| false
| 442
|
py
|
from douyin_spider.handler.media import MediaHandler
from douyin_spider.models.video import Video
class VideoHandler(MediaHandler):
"""
video handler,handle video item
"""
async def handle(self, item, **kwargs):
"""
handle item use VideoHandler
:param item:
:param kwargs:
:return:
"""
if isinstance(item, Video):
return await self.process(item, **kwargs)
|
[
"344616042@qq.com"
] |
344616042@qq.com
|
46fb77e37f4b85850d94f91b29ec1d63e33535c5
|
0901a62c11d1ba11df4cee28bb3fa2b32398a7d8
|
/django_blog/users/views.py
|
3a24e8b5fc8be5121b2b36f0ec416cbd5d68b871
|
[] |
no_license
|
mattg317/django_blog
|
80df4e46b687edf83ddd67e9cbda1f62c62ec6a0
|
bf60759e252f1f169a8ee26e5fba70c73ecc48fa
|
refs/heads/master
| 2022-12-21T21:26:24.401316
| 2019-03-08T16:16:57
| 2019-03-08T16:16:57
| 166,888,718
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 1,857
|
py
|
from django.shortcuts import render, redirect
from django.contrib import messages
from django.contrib.auth.decorators import login_required
from .forms import UserRegisterForm, UserUpdateForm, ProfileUpdateForm
def register(request):
if request.method == 'POST':
form = UserRegisterForm(request.POST)
if form.is_valid():
# save the form
form.save()
username = form.cleaned_data.get('username')
            # create a success message
messages.success(request, f'Your account has been created! You are now able to login!')
return redirect('login')
else:
form = UserRegisterForm()
return render(request, 'users/register.html', {'form': form})
# requires user to be logged in to view this page
@login_required
def profile(request):
# when submitted pass in new data
if request.method == 'POST':
# import forms from form.py
# pass in post data, with profile form getting file data with request the image
# populate current user with instance
u_form = UserUpdateForm(request.POST, instance=request.user)
p_form = ProfileUpdateForm(request.POST,
request.FILES,
instance=request.user.profile)
# Save if both forms are valid
if u_form.is_valid() and p_form.is_valid():
u_form.save()
p_form.save()
messages.success(request, f'Your account has been updated!')
return redirect('profile')
else:
u_form = UserUpdateForm(instance=request.user)
p_form = ProfileUpdateForm(instance=request.user.profile)
# create context to pass in
context = {
'u_form': u_form,
'p_form': p_form
}
return render(request, 'users/profile.html', context)
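# --- Editor's sketch (not part of the original app) ---
# The view imports UserRegisterForm, UserUpdateForm and ProfileUpdateForm
# from a forms.py that is not shown in this file; a plausible minimal
# version of the two update forms (the exact fields are assumptions):
#
# from django import forms
# from django.contrib.auth.models import User
# from .models import Profile  # assumed Profile model with an `image` field
#
# class UserUpdateForm(forms.ModelForm):
#     email = forms.EmailField()
#     class Meta:
#         model = User
#         fields = ['username', 'email']
#
# class ProfileUpdateForm(forms.ModelForm):
#     class Meta:
#         model = Profile
#         fields = ['image']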
|
[
"mattg317@gmail.com"
] |
mattg317@gmail.com
|
9f07ff6a092fb565e50403ae26d8059fc6a6a492
|
ae81b16cf4242d329dfcb055e85fafe87262cc7f
|
/leetcode/NoIdea/383. 赎金信.py
|
20c36c9cacc2ffd270e4a627f23bd2d4438920e1
|
[] |
no_license
|
coquelin77/PyProject
|
3d2d3870b085c4b7ff41bd200fe025630969ab8e
|
58e84ed8b3748c6e0f78184ab27af7bff3778cb8
|
refs/heads/master
| 2023-03-18T19:14:36.441967
| 2019-06-19T02:44:22
| 2019-06-19T02:44:22
| null | 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 1,545
|
py
|
'''Given a ransom note string (ransomNote) and a magazine string (magazine), determine whether the ransom note can be constructed from the characters of the magazine. Return true if it can; otherwise return false.
(Background: to avoid revealing the writer's handwriting, each letter needed for the ransom note is cut out of the magazine.)
Note:
You may assume that both strings contain only lowercase letters.
canConstruct("a", "b") -> false
canConstruct("aa", "ab") -> false
canConstruct("aa", "aab") -> true'''
# class Solution:
# def canConstruct(self, ransomNote: 'str', magazine: 'str') -> 'bool':
# pass
#
# if __name__ == '__main__':
# ransomNote = "aa"
# magazine = "aa"
# r = list(ransomNote)
# m = list(magazine)
# for i in range(0, len(m) - 1):
# for j in range(0, len(r)):
# if m[i] == r[j]:
# pass
# else:
# m += 1
# if i == len(m) or j == len(r):
# print('1')
# else:
# print('2')
class Solution:
def canConstruct(self, ransomNote, magazine):
"""
:type ransomNote: str
:type magazine: str
:rtype: bool
"""
have_done=[]
for i in range(len(ransomNote)):
if ransomNote[i] not in have_done:
if ransomNote.count(ransomNote[i])<=magazine.count(ransomNote[i]):
pass
else:
return False
have_done.append(ransomNote[i])
return True
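# --- Editor's sketch (not part of the original submission) ---
# An equivalent, more idiomatic check with collections.Counter: the note is
# constructible iff no character is needed more times than the magazine has.
# Counter subtraction keeps only positive counts, so anything left over is
# a character the magazine cannot supply.
from collections import Counter
def can_construct(ransom_note, magazine):
    return not (Counter(ransom_note) - Counter(magazine))
# can_construct("aa", "ab")  -> False
# can_construct("aa", "aab") -> True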
|
[
"zhangyiming748@users.noreply.github.com"
] |
zhangyiming748@users.noreply.github.com
|
816535d45c0c52df62f91b8f07462791b8636d11
|
0547d1826e99eedb959a3463520d73985a3b844e
|
/Data Science for Everyone Track/21-Supervised Learning with scikit-learn/01-Classification/06-The digits recognition dataset.py
|
fbce23254c4a24f165f8aa42379d00b6e1fd15e8
|
[] |
no_license
|
abhaysinh/Data-Camp
|
18031f8fd4ee199c2eff54a408c52da7bdd7ec0f
|
782c712975e14e88da4f27505adf4e5f4b457cb1
|
refs/heads/master
| 2022-11-27T10:44:11.743038
| 2020-07-25T16:15:03
| 2020-07-25T16:15:03
| 282,444,344
| 4
| 1
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 2,828
|
py
|
'''
The digits recognition dataset
Up until now, you have been performing binary classification, since the target variable had two possible outcomes.
Hugo, however, got to perform multi-class classification in the videos, where the target variable could take on
three possible outcomes. Why does he get to have all the fun?! In the following exercises, you'll be working
with the MNIST digits recognition dataset, which has 10 classes, the digits 0 through 9! A reduced version of
the MNIST dataset is one of scikit-learn's included datasets, and that is the one we will use in this exercise.
Each sample in this scikit-learn dataset is an 8x8 image representing a handwritten digit. Each pixel is represented
by an integer in the range 0 to 16, indicating varying levels of black. Recall that scikit-learn's built-in datasets
are of type Bunch, which are dictionary-like objects. Helpfully for the MNIST dataset, scikit-learn provides an
'images' key in addition to the 'data' and 'target' keys that you have seen with the Iris data. Because it is a 2D
array of the images corresponding to each sample, this 'images' key is useful for visualizing the images, as you'll
see in this exercise (for more on plotting 2D arrays, see Chapter 2 of DataCamp's course on Data Visualization with Python).
On the other hand, the 'data' key contains the feature array - that is, the images as a flattened array of 64 pixels.
Notice that you can access the keys of these Bunch objects in two different ways: By using the . notation, as in
digits.images, or the [] notation, as in digits['images'].
For more on the MNIST data, check out this exercise in Part 1 of DataCamp's Importing Data in Python course.
There, the full version of the MNIST dataset is used, in which the images are 28x28.
It is a famous dataset in machine learning and computer vision, and frequently used as a benchmark to evaluate
the performance of a new model.
INSTRUCTIONS
100XP
1 Import datasets from sklearn and matplotlib.pyplot as plt.
2 Load the digits dataset using the .load_digits() method on datasets.
3 Print the keys and DESCR of digits.
4 Print the shape of images and data keys using the . notation.
5 Display the 1010th image using plt.imshow().
This has been done for you, so hit 'Submit Answer' to see which handwritten digit this happens to be!
'''
# Import necessary modules
from sklearn import datasets
import matplotlib.pyplot as plt
# Load the digits dataset: digits
digits = datasets.load_digits()
# Print the keys and DESCR of the dataset
print(digits.keys())
print(digits.DESCR)
# Print the shape of the images and data keys
print(digits.images.shape)
print(digits.data.shape)
# Display digit 1010
plt.imshow(digits.images[1010], cmap=plt.cm.gray_r, interpolation='nearest')
plt.show()
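# --- Editor's sketch (not part of the original exercise) ---
# To make the relationship between the 'data' and 'images' keys concrete:
# each flattened 64-value row of `data` reshapes back into the matching
# 8x8 image.
import numpy as np
assert np.array_equal(digits.data[1010].reshape(8, 8), digits.images[1010])
print(digits.data.shape, '->', digits.images.shape)  # (1797, 64) -> (1797, 8, 8)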
|
[
"abhaysinh.surve@gmail.com"
] |
abhaysinh.surve@gmail.com
|
9ee6c69083ccaf093a1dbc5105d1ea6cc399ede3
|
0e91030c47071029d978dbfb9e7a30ae6826afe5
|
/web/web_po/web_po_v4/Common/basepage.py
|
de7e3e53144ca52b72036ecc9720f8e8233391b4
|
[] |
no_license
|
liqi629/python_lemon
|
095983fadda3639b058043b399180d19f899284b
|
bc5e6e6c92561ba9cec2798b7735505b377e9cd6
|
refs/heads/master
| 2023-02-04T00:57:09.447008
| 2020-12-27T14:46:31
| 2020-12-27T14:46:31
| null | 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 6,440
|
py
|
#!/usr/bin/python3
# -*- coding: utf-8 -*-
from selenium.webdriver.remote.webdriver import WebDriver
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import logging
import time
import os
from web.web_po.web_po_v4.Common.dir_config import screenshot_dir
# Goal: every step gets exception capture, log output, and a screenshot on failure
class BasePage:  # page object classes should inherit from BasePage
    def __init__(self, driver: WebDriver):
        self.driver = driver
    # Wait for an element to become visible
    def wait_eleVisible(self, loc, timeout=10, frequency=0.5, doc=""):
        start = time.time()
        try:
            WebDriverWait(self.driver, timeout, frequency).until(EC.visibility_of_element_located(loc))
        except:
            logging.exception("Timed out waiting for element {} to become visible!".format(loc))  # logging itself was covered in the API-testing part; just a plain message here
            # self.save_png(doc="wait for element timed out")  # doc could also be passed through from the caller
            self.save_png(doc)
            raise  # must re-raise: the exception is caught and handled here, and without a raise the failure would be swallowed and execution would continue
        else:
            end = time.time()
            duration = end - start  # elapsed time in seconds
            logging.info("Element {} became visible after waiting {} seconds.".format(loc, duration))
    # Wait for an element to be present in the DOM
    def wait_elePresence(self, loc, timeout=10, frequency=0.5, doc=""):
        start = time.time()
        try:
            WebDriverWait(self.driver, timeout, frequency).until(EC.presence_of_element_located(loc))
        except:
            logging.exception("Timed out waiting for element {} to be present!".format(loc))
            self.save_png(doc)
            raise
        else:
            end = time.time()
            duration = end - start  # elapsed time in seconds
            logging.info("Element {} was present after waiting {} seconds.".format(loc, duration))
    # Save a screenshot
    # Notes:
    # 1. webdriver's screenshot method only captures the web page; it cannot capture the Windows desktop
    # 2. avoid special characters in the stored file name
    def save_png(self, doc=""):  # doc: the screenshot name - page name? class name? test case name?
        # file name = page_action_timestamp.png
        now = time.strftime("%Y_%m_%d-%H_%M_%S", time.localtime())
        # path = screenshot_dir  # the configured dir is not wired up yet; try a simple path first
        path = "./Outputs/screenshot"
        if not os.path.exists(path):  # existence check; could be factored out into its own helper
            os.makedirs(path)
        filename = path + "/{}_{}.png".format(doc, now)
        # the screenshot operation itself can also fail
        try:
            self.driver.save_screenshot(filename)
        except:
            logging.exception("Failed to take a screenshot!")
        else:
            logging.info("Screenshot saved as: {}".format(filename))
    # Find an element
    def get_element(self, loc, doc=""):
        try:
            ele = self.driver.find_element(*loc)
        except:
            logging.exception("Failed to find element {}!".format(loc))
            self.save_png(doc)
            raise
        else:
            # logging.info("Found element {}.".format(loc))
            logging.info("Found the {} element {}.".format(doc, loc))  # doc can pinpoint which element on which page
            return ele
    # Type text into an element
    def input_text(self, loc, text, timeout=10, frequency=0.5, doc=""):
        # precondition 1: the element is visible
        # precondition 2: it can be found
        self.wait_eleVisible(loc, timeout, frequency, doc)
        ele = self.get_element(loc, doc)
        try:
            ele.send_keys(text)
        except:
            logging.exception("Failed to type '{}' into element {}!".format(text, loc))
            self.save_png(doc)
            raise
        else:
            logging.info("Typed '{}' into element {}.".format(text, loc))
    # Click an element
    def click(self, loc, timeout=10, frequency=0.5, doc=""):
        self.wait_eleVisible(loc, timeout, frequency, doc)
        ele = self.get_element(loc, doc)
        try:
            ele.click()
        except:
            logging.exception("Failed to click element {}!".format(loc))
            self.save_png(doc)
            raise
        else:
            logging.info("Clicked element {}.".format(loc))
    # Get an element's text
    def get_element_text(self, loc, timeout=10, frequency=0.5, doc=""):
        # precondition 1: the element exists (not necessarily visible; waiting for presence is advisable here -
        #   if elements are never hidden, either wait works)
        # precondition 2: it can be found
        self.wait_eleVisible(loc, timeout, frequency, doc)
        # self.wait_elePresence(loc, timeout, frequency, doc)
        ele = self.get_element(loc, doc)
        try:
            text = ele.text
        except:
            logging.exception("Failed to get the text of element {}!".format(loc))
            self.save_png(doc)
            raise
        else:
            logging.info("Got the text of element {}.".format(loc))
            return text
    # Get an element's attribute
    def get_element_attr(self):
        pass
    # Not every method has to be wrapped up front; wrap more as the need arises.
    # robotframework already ships this layer; it is worth borrowing from when studying it later.
    # select
    # iframe
    # windows
    # upload
    # Examples below:
    # Switch into a frame
    def switch_to_frame(self, locator, timeout=10):
        # wait for the frame to be available, then switch into it
        try:
            WebDriverWait(self.driver, timeout).until(EC.frame_to_be_available_and_switch_to_it(locator))  # wait + switch
        except:
            logging.exception("Failed to switch to frame element {}!".format(locator))
            self.save_png()
            raise
        else:
            logging.info("Switched to frame element {}.".format(locator))
    # Switch window  #new #main #index
    def switch_to_window(self, index):
        # grab the window handles once, trigger the new window, grab all handles again, then switch
        # (the first two steps can happen before this function is called)
        if index == "new":  # switch to the newly opened window
            pass
        elif index == "main":  # switch to the first window
            pass
        else:
            pass
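# --- Editor's sketch (not part of the original module) ---
# A minimal page object built on the BasePage above; the locators and the
# doc labels are placeholders for illustration only.
from selenium.webdriver.common.by import By
class LoginPage(BasePage):
    username_input = (By.ID, "username")
    password_input = (By.ID, "password")
    login_button = (By.ID, "login")
    def login(self, username, password):
        self.input_text(self.username_input, username, doc="login page - username")
        self.input_text(self.password_input, password, doc="login page - password")
        self.click(self.login_button, doc="login page - submit")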
|
[
"396167189@qq.com"
] |
396167189@qq.com
|
253f5a0085daaac0ec6ef0174ac93391d02e11f3
|
5d0ebc19b778ca0b2e02ac7c5d4a8d5bf07d9b23
|
/astropy/cosmology/tests/test_units.py
|
130aa03c830c0fba759cb6eddf1ee7ade2ae00a5
|
[
"BSD-3-Clause"
] |
permissive
|
adivijaykumar/astropy
|
8ea621f20b9c8363b2701c825c526d650c05258c
|
0fd7ae818fed3abe4c468170a507d52ef91dc7e8
|
refs/heads/main
| 2021-12-03T08:12:33.558975
| 2021-09-03T15:07:49
| 2021-09-03T15:07:49
| 402,863,896
| 0
| 1
|
BSD-3-Clause
| 2021-09-03T18:25:55
| 2021-09-03T18:25:54
| null |
UTF-8
|
Python
| false
| false
| 8,121
|
py
|
# -*- coding: utf-8 -*-
"""Testing :mod:`astropy.cosmology.units`."""
##############################################################################
# IMPORTS
import contextlib
import pytest
import astropy.cosmology.units as cu
import astropy.units as u
from astropy.cosmology import Planck13, default_cosmology
from astropy.tests.helper import assert_quantity_allclose
from astropy.utils.compat.optional_deps import HAS_ASDF, HAS_SCIPY
from astropy.utils.exceptions import AstropyDeprecationWarning
##############################################################################
# TESTS
##############################################################################
def test_has_expected_units():
"""
Test that this module has the expected set of units. Some of the units are
imported from :mod:`astropy.units`, or vice versa. Here we test presence,
not usage. Units from :mod:`astropy.units` are tested in that module. Units
defined in :mod:`astropy.cosmology` will be tested subsequently.
"""
with pytest.warns(AstropyDeprecationWarning, match="`littleh`"):
assert u.astrophys.littleh is cu.littleh
def test_has_expected_equivalencies():
"""
Test that this module has the expected set of equivalencies. Many of the
equivalencies are imported from :mod:`astropy.units`, so here we test
presence, not usage. Equivalencies from :mod:`astropy.units` are tested in
that module. Equivalencies defined in :mod:`astropy.cosmology` will be
tested subsequently.
"""
with pytest.warns(AstropyDeprecationWarning, match="`with_H0`"):
assert u.equivalencies.with_H0 is cu.with_H0
def test_littleh():
H0_70 = 70 * u.km / u.s / u.Mpc
h70dist = 70 * u.Mpc / cu.littleh
assert_quantity_allclose(h70dist.to(u.Mpc, cu.with_H0(H0_70)), 100 * u.Mpc)
# make sure using the default cosmology works
cosmodist = default_cosmology.get().H0.value * u.Mpc / cu.littleh
assert_quantity_allclose(cosmodist.to(u.Mpc, cu.with_H0()), 100 * u.Mpc)
# Now try a luminosity scaling
h1lum = 0.49 * u.Lsun * cu.littleh ** -2
assert_quantity_allclose(h1lum.to(u.Lsun, cu.with_H0(H0_70)), 1 * u.Lsun)
# And the trickiest one: magnitudes. Using H0=10 here for the round numbers
H0_10 = 10 * u.km / u.s / u.Mpc
# assume the "true" magnitude M = 12.
# Then M - 5*log_10(h) = M + 5 = 17
withlittlehmag = 17 * (u.mag - u.MagUnit(cu.littleh ** 2))
assert_quantity_allclose(withlittlehmag.to(u.mag, cu.with_H0(H0_10)), 12 * u.mag)
@pytest.mark.skipif(not HAS_SCIPY, reason="Cosmology needs scipy")
def test_dimensionless_redshift():
"""Test the equivalency ``dimensionless_redshift``."""
z = 3 * cu.redshift
val = 3 * u.one
# show units not equal
assert z.unit == cu.redshift
assert z.unit != u.one
# test equivalency enabled by default
assert z == val
# also test that it works for powers
assert (3 * cu.redshift ** 3) == val
# and in composite units
assert (3 * u.km / cu.redshift ** 3) == 3 * u.km
# test it also works as an equivalency
with u.set_enabled_equivalencies([]): # turn off default equivalencies
assert z.to(u.one, equivalencies=cu.dimensionless_redshift()) == val
with pytest.raises(ValueError):
z.to(u.one)
# if this fails, something is really wrong
with u.add_enabled_equivalencies(cu.dimensionless_redshift()):
assert z == val
@pytest.mark.skipif(not HAS_SCIPY, reason="Cosmology needs scipy")
def test_redshift_temperature():
"""Test the equivalency ``with_redshift``."""
cosmo = Planck13.clone(Tcmb0=3 * u.K)
default_cosmo = default_cosmology.get()
z = 15 * cu.redshift
Tcmb = cosmo.Tcmb(z)
# 1) Default (without specifying the cosmology)
with default_cosmology.set(cosmo):
equivalency = cu.redshift_temperature()
assert_quantity_allclose(z.to(u.K, equivalency), Tcmb)
assert_quantity_allclose(Tcmb.to(cu.redshift, equivalency), z)
# showing the answer changes if the cosmology changes
# this test uses the default cosmology
equivalency = cu.redshift_temperature()
assert_quantity_allclose(z.to(u.K, equivalency), default_cosmo.Tcmb(z))
assert default_cosmo.Tcmb(z) != Tcmb
# 2) Specifying the cosmology
equivalency = cu.redshift_temperature(cosmo)
assert_quantity_allclose(z.to(u.K, equivalency), Tcmb)
assert_quantity_allclose(Tcmb.to(cu.redshift, equivalency), z)
# Test `atzkw`
equivalency = cu.redshift_temperature(cosmo, ztol=1e-10)
assert_quantity_allclose(Tcmb.to(cu.redshift, equivalency), z)
@pytest.mark.skipif(not HAS_SCIPY, reason="Cosmology needs scipy")
class Test_with_redshift:
@pytest.fixture
def cosmo(self):
return Planck13.clone(Tcmb0=3 * u.K)
# ===========================================
def test_cosmo_different(self, cosmo):
default_cosmo = default_cosmology.get()
assert default_cosmo != cosmo # shows changing default
def test_no_equivalency(self, cosmo):
"""Test the equivalency ``with_redshift`` without any enabled."""
z = 15 * cu.redshift
equivalency = cu.with_redshift(Tcmb=False)
assert len(equivalency) == 0
# -------------------------------------------
def test_temperature_off(self, cosmo):
"""Test the equivalency ``with_redshift``."""
default_cosmo = default_cosmology.get()
z = 15 * cu.redshift
Tcmb = cosmo.Tcmb(z)
# 1) Default (without specifying the cosmology)
with default_cosmology.set(cosmo):
equivalency = cu.with_redshift(Tcmb=False)
with pytest.raises(u.UnitConversionError, match="'redshift' and 'K'"):
z.to(u.K, equivalency)
# 2) Specifying the cosmology
equivalency = cu.with_redshift(cosmo, Tcmb=False)
with pytest.raises(u.UnitConversionError, match="'redshift' and 'K'"):
z.to(u.K, equivalency)
def test_temperature(self, cosmo):
"""Test the equivalency ``with_redshift``."""
default_cosmo = default_cosmology.get()
z = 15 * cu.redshift
Tcmb = cosmo.Tcmb(z)
# 1) Default (without specifying the cosmology)
with default_cosmology.set(cosmo):
equivalency = cu.with_redshift(Tcmb=True)
assert_quantity_allclose(z.to(u.K, equivalency), Tcmb)
assert_quantity_allclose(Tcmb.to(cu.redshift, equivalency), z)
# showing the answer changes if the cosmology changes
# this test uses the default cosmology
equivalency = cu.with_redshift(Tcmb=True)
assert_quantity_allclose(z.to(u.K, equivalency), default_cosmo.Tcmb(z))
assert default_cosmo.Tcmb(z) != Tcmb
# 2) Specifying the cosmology
equivalency = cu.with_redshift(cosmo, Tcmb=True)
assert_quantity_allclose(z.to(u.K, equivalency), Tcmb)
assert_quantity_allclose(Tcmb.to(cu.redshift, equivalency), z)
# Test `atzkw`
# this is really just a test that 'atzkw' doesn't fail
equivalency = cu.with_redshift(cosmo, Tcmb=True, atzkw={"ztol": 1e-10})
assert_quantity_allclose(Tcmb.to(cu.redshift, equivalency), z)
# FIXME! get "dimensionless_redshift", "with_redshift" to work in this
# they are not in ``astropy.units.equivalencies``, so the following fails
@pytest.mark.skipif(not HAS_ASDF, reason="requires ASDF")
@pytest.mark.parametrize("equiv", [cu.with_H0])
def test_equivalencies_asdf(tmpdir, equiv):
from asdf.tests import helpers
tree = {"equiv": equiv()}
with (
pytest.warns(AstropyDeprecationWarning, match="`with_H0`")
if equiv.__name__ == "with_H0"
else contextlib.nullcontext()
):
helpers.assert_roundtrip_tree(tree, tmpdir)
def test_equivalency_context_manager():
base_registry = u.get_current_unit_registry()
# check starting with only the dimensionless_redshift equivalency.
assert len(base_registry.equivalencies) == 1
assert str(base_registry.equivalencies[0][0]) == "redshift"
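# For context, a standalone sketch (not part of the test module) of the
# littleh/with_H0 conversion exercised above, assuming astropy >= 5.0 where
# cu.with_H0 lives in astropy.cosmology.units:
#
#     H0 = 70 * u.km / u.s / u.Mpc        # h = 0.7
#     d = 70 * u.Mpc / cu.littleh         # a distance quoted as "70 Mpc/h"
#     d.to(u.Mpc, cu.with_H0(H0))         # -> 100 Mpc, i.e. 70 / 0.7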
|
[
"nstarkman@protonmail.com"
] |
nstarkman@protonmail.com
|
1efcba01d6b7890c5dfab687f6d1b9fd2ee7078e
|
00453a0be06ecc8760a3c36ea302bc1f047a644f
|
/convoy/validator.py
|
71f599c7cbcf201ed971c30366577c93774384e2
|
[
"MIT"
] |
permissive
|
Nepomuceno/batch-shipyard
|
40c40e01bb49550eb1090f56cdf3da91b21a1bbb
|
2d67411257e0501ac4443f44e4d27e4a8262be8d
|
refs/heads/master
| 2020-03-18T18:53:47.824023
| 2018-05-30T05:55:50
| 2018-05-30T06:04:43
| 135,121,497
| 0
| 0
| null | 2018-05-28T06:54:56
| 2018-05-28T06:54:56
| null |
UTF-8
|
Python
| false
| false
| 3,388
|
py
|
#!/usr/bin/env python3
# Copyright (c) Microsoft Corporation
#
# All rights reserved.
#
# MIT License
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
# compat imports
from __future__ import absolute_import, division, print_function
from builtins import ( # noqa
bytes, dict, int, list, object, range, str, ascii, chr, hex, input,
next, oct, open, pow, round, super, filter, map, zip)
# stdlib imports
import enum
import logging
import sys
try:
import pathlib2 as pathlib
except ImportError:
import pathlib
import warnings
# non-stdlib imports
import pykwalify.core
import pykwalify.errors
import ruamel.yaml
# local imports
import convoy.util
# create logger
logger = logging.getLogger(__name__)
# enums
class ConfigType(enum.Enum):
    # note: no trailing commas; a trailing comma would make each value a
    # one-element tuple rather than an int
    Credentials = 1
    Global = 2
    Pool = 3
    Jobs = 4
    RemoteFS = 5
# global defines
_ROOT_PATH = pathlib.Path(__file__).resolve().parent.parent
_SCHEMAS = {
ConfigType.Credentials: {
'name': 'Credentials',
'schema': pathlib.Path(_ROOT_PATH, 'schemas/credentials.yaml'),
},
ConfigType.Global: {
'name': 'Global',
'schema': pathlib.Path(_ROOT_PATH, 'schemas/config.yaml'),
},
ConfigType.Pool: {
'name': 'Pool',
'schema': pathlib.Path(_ROOT_PATH, 'schemas/pool.yaml'),
},
ConfigType.Jobs: {
'name': 'Jobs',
'schema': pathlib.Path(_ROOT_PATH, 'schemas/jobs.yaml'),
},
ConfigType.RemoteFS: {
'name': 'RemoteFS',
'schema': pathlib.Path(_ROOT_PATH, 'schemas/fs.yaml'),
},
}
# configure loggers
_PYKWALIFY_LOGGER = logging.getLogger('pykwalify')
convoy.util.setup_logger(_PYKWALIFY_LOGGER)
_PYKWALIFY_LOGGER.setLevel(logging.CRITICAL)
convoy.util.setup_logger(logger)
# ignore ruamel.yaml warning
warnings.simplefilter('ignore', ruamel.yaml.error.UnsafeLoaderWarning)
def validate_config(config_type, config_file):
if config_file is None or not config_file.exists():
return
schema = _SCHEMAS[config_type]
validator = pykwalify.core.Core(
source_file=str(config_file),
schema_files=[str(schema['schema'])]
)
validator.strict_rule_validation = True
try:
validator.validate(raise_exception=True)
except pykwalify.errors.SchemaError as e:
logger.error('{} Configuration {}'.format(schema['name'], e.msg))
sys.exit(1)
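# Minimal usage sketch (hypothetical config path, not part of the original
# module): validate_config returns silently on success; on a schema error it
# logs the failure and exits with status 1.
if __name__ == '__main__':
    validate_config(ConfigType.Pool, pathlib.Path('config/pool.yaml'))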
|
[
"fred.park@microsoft.com"
] |
fred.park@microsoft.com
|
ad980ce29098e193a27877629a6e43235f3d06e7
|
19768aa46de8fa5a52639826a80959e65f7b8e66
|
/authapp/banned.py
|
2096d5f8efc79aeb3f85391923d07dd43f713383
|
[] |
no_license
|
hongmingu/macawl
|
0aff0a0d55acb11f06e979df2dee995941cdd5d0
|
49acead1290dd977263cb4086a621feed083fc40
|
refs/heads/master
| 2020-04-13T02:09:44.228294
| 2019-02-13T11:24:38
| 2019-02-13T11:24:38
| 162,822,145
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 273
|
py
|
BANNED_PASSWORD_LIST = [
'password', 'qwerty', 'superman', '123456', '1234567', '12345678', '123456789', '1234567890', '012345', '0123456',
'01234567', '012345678', '0123456789', 'macawl', '111111', 'aaaaaa'
]
BANNED_USERNAME_LIST = [
'macawl', 'robots.txt',
]
|
[
"gmh397@nate.com"
] |
gmh397@nate.com
|
ae9d90732494a776b87eca0fe505628f0a211f2c
|
8a7ef686efdd9add693986be08d9a1b8f561236e
|
/core/RestAPI/utils.py
|
46c1af8a9eb7902cfbec167e5ed9f51ba1307bba
|
[
"MIT"
] |
permissive
|
thebao/kalliope
|
3b0cffdcd746f107e4557e53407196eae1194f5e
|
a820e85ee56c9e3a34e1130e32ccef52cd58e9f9
|
refs/heads/master
| 2021-01-13T13:58:46.217018
| 2016-11-04T11:02:34
| 2016-11-04T11:02:34
| 72,927,345
| 0
| 0
| null | 2016-11-05T13:19:11
| 2016-11-05T13:19:11
| null |
UTF-8
|
Python
| false
| false
| 1,045
|
py
|
from functools import wraps
from flask import request, Response
from core.ConfigurationManager import SettingLoader
def check_auth(username, password):
"""This function is called to check if a username /
password combination is valid.
"""
settings = SettingLoader.get_settings()
return username == settings.rest_api.login and password == settings.rest_api.password
def authenticate():
"""Sends a 401 response that enables basic auth"""
return Response(
'Could not verify your access level for that URL.\n'
'You have to login with proper credentials', 401,
{'WWW-Authenticate': 'Basic realm="Login Required"'})
def requires_auth(f):
@wraps(f)
def decorated(*args, **kwargs):
settings = SettingLoader.get_settings()
if settings.rest_api.password_protected:
auth = request.authorization
if not auth or not check_auth(auth.username, auth.password):
return authenticate()
return f(*args, **kwargs)
return decorated
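# Hypothetical usage sketch (not part of this module): protecting a route
# with the decorator above.
if __name__ == '__main__':
    from flask import Flask
    app = Flask(__name__)

    @app.route('/mute')
    @requires_auth
    def mute():
        return 'muted'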
|
[
"nico.marcq@gmail.com"
] |
nico.marcq@gmail.com
|
d48efa115fb2ec6bccb9a0f884cb915557399918
|
e8d42df817835b5fa83829ac6937887de127faa1
|
/images6/exif.py
|
4e1cb31848eda13e9290f99fb92271bf13c288c8
|
[
"BSD-2-Clause",
"MIT"
] |
permissive
|
eblade/images6
|
98f8dda84b1754ee1a306914dd2cf57a56d251bd
|
d68f79988a2b195a870efcc0bb368a7de474a2bc
|
refs/heads/master
| 2020-04-03T22:04:45.715969
| 2019-07-29T17:25:57
| 2019-07-29T17:25:57
| 59,519,861
| 0
| 0
| null | 2016-11-02T23:23:57
| 2016-05-23T21:33:37
|
Python
|
UTF-8
|
Python
| false
| false
| 1,989
|
py
|
"""Helper functions to convert exif data into better formats"""
orientation2angle = {
'Horizontal (normal)': (None, 0),
'Mirrored horizontal': ('H', 0),
'Rotated 180': (None, 180),
'Mirrored vertical': ('V', 0),
'Mirrored horizontal then rotated 90 CCW': ('H', -90),
'Rotated 90 CCW': (None, -90),
'Mirrored horizontal then rotated 90 CW': ('H', 90),
'Rotated 90 CW': (None, 90),
}
def exif_position(exif):
"""Reads exifread tags and extracts a float tuple (lat, lon)"""
lat = exif.get("GPS GPSLatitude")
lon = exif.get("GPS GPSLongitude")
if None in (lat, lon):
return None, None
lat = dms_to_float(lat)
lon = dms_to_float(lon)
if None in (lat, lon):
return None, None
if exif.get('GPS GPSLatitudeRef').printable == 'S':
lat *= -1
    if exif.get('GPS GPSLongitudeRef').printable == 'W':  # west, not south
lon *= -1
return lat, lon
def dms_to_float(p):
"""Converts exifread data points to decimal GPX floats"""
try:
degree = p.values[0]
minute = p.values[1]
second = p.values[2]
return (
float(degree.num)/float(degree.den) +
float(minute.num)/float(minute.den)/60 +
float(second.num)/float(second.den)/3600
)
except AttributeError:
return None
def exif_string(exif, key):
p = exif.get(key)
if p:
return p.printable.strip()
def exif_int(exif, key):
p = exif.get(key)
if p:
return int(p.printable or 0)
def exif_ratio(exif, key):
p = exif.get(key)
try:
if p:
p = p.values[0]
return int(p.num), int(p.den)
except AttributeError:
if isinstance(p, int):
return p
def exif_orientation(exif):
orientation = exif.get("Image Orientation")
if orientation is None:
return None, None, 0
mirror, angle = orientation2angle.get(orientation.printable)
return orientation.printable, mirror, angle
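# Sanity check for the DMS -> decimal conversion above (a sketch with a
# made-up coordinate, not part of the original module):
# 48 deg 51' 29.6" == 48 + 51/60 + 29.6/3600 ~= 48.858222 degrees.
if __name__ == '__main__':
    assert abs((48 + 51.0/60 + 29.6/3600) - 48.858222) < 1e-6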
|
[
"johan@egneblad.se"
] |
johan@egneblad.se
|
6ef7593eaf4523b81bda588bc7f80fcb2b2b1983
|
a680b681210a070ff6ac3eab4ed3ea5a125991d6
|
/setup.py
|
971ef4d637bfcefe9f5b51d97ad073c92ddc4408
|
[
"BSD-2-Clause"
] |
permissive
|
moonbirdxp/PSpider
|
bb6da1de6a78d86ee8704b6eb8981773a1a31d8c
|
4d7238b4ebafd129ecc5dd1095ce1ece313945ec
|
refs/heads/master
| 2020-06-22T03:46:17.320420
| 2019-07-16T03:26:20
| 2019-07-16T04:44:17
| 197,624,496
| 0
| 0
|
BSD-2-Clause
| 2019-10-15T01:04:19
| 2019-07-18T16:48:09
|
Python
|
UTF-8
|
Python
| false
| false
| 502
|
py
|
# _*_ coding: utf-8 _*_
"""
install script: python3 setup.py install
"""
from setuptools import setup, find_packages
setup(
name="spider",
version="2.4.1",
author="xianhu",
keywords=["spider", "crawler", "multi-threads", "multi-processes", "proxies"],
packages=find_packages(exclude=("test.*",)),
package_data={
"": ["*.conf"], # include all *.conf files
},
install_requires=[
"pybloom_live>=3.0.0", # pybloom-live, fork from pybloom
]
)
|
[
"qixianhu@qq.com"
] |
qixianhu@qq.com
|
8665cf78ce4dd184cdd2109946d8f8d25e435330
|
b26c41926fa3a7c2c061132d80e91a2750f2f468
|
/tensorflow_probability/python/experimental/nn/variational_base.py
|
d6493e1392c98610b0dde840bd906e29b61b0ba5
|
[
"Apache-2.0"
] |
permissive
|
tensorflow/probability
|
22e679a4a883e408f8ef237cda56e3e3dfa42b17
|
42a64ba0d9e0973b1707fcd9b8bd8d14b2d4e3e5
|
refs/heads/main
| 2023-09-04T02:06:08.174935
| 2023-08-31T20:30:00
| 2023-08-31T20:31:33
| 108,053,674
| 4,055
| 1,269
|
Apache-2.0
| 2023-09-13T21:49:49
| 2017-10-23T23:50:54
|
Jupyter Notebook
|
UTF-8
|
Python
| false
| false
| 7,602
|
py
|
# Copyright 2019 The TensorFlow Probability Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""Base class for variational layers for building neural networks."""
import collections
import tensorflow.compat.v2 as tf
from tensorflow_probability.python import random as tfp_random
from tensorflow_probability.python.distributions import distribution as distribution_lib
from tensorflow_probability.python.distributions import independent as independent_lib
from tensorflow_probability.python.distributions import mvn_diag as mvn_diag_lib
from tensorflow_probability.python.distributions import normal as normal_lib
from tensorflow_probability.python.experimental.nn import layers as layers_lib
from tensorflow_probability.python.internal import prefer_static as ps
from tensorflow_probability.python.util.seed_stream import SeedStream
__all__ = [
'VariationalLayer',
]
# The following aliases ensure docstrings read more succinctly.
tfd = distribution_lib
def unpack_kernel_and_bias(weights):
"""Returns `kernel`, `bias` tuple."""
if isinstance(weights, collections.abc.Mapping):
kernel = weights.get('kernel', None)
bias = weights.get('bias', None)
elif len(weights) == 1:
kernel, bias = weights, None
elif len(weights) == 2:
kernel, bias = weights
else:
raise ValueError('Unable to unpack weights: {}.'.format(weights))
return kernel, bias
class VariationalLayer(layers_lib.Layer):
"""Base class for all variational layers."""
def __init__(
self,
posterior,
prior,
activation_fn=None,
posterior_value_fn=tfd.Distribution.sample,
seed=None,
dtype=tf.float32,
validate_args=False,
name=None):
"""Base class for variational layers.
Args:
posterior: ...
prior: ...
activation_fn: ...
posterior_value_fn: ...
seed: ...
dtype: ...
validate_args: ...
      name: Python `str` prepended to ops created by this object.
Default value: `None` (i.e., `type(self).__name__`).
"""
super(VariationalLayer, self).__init__(
validate_args=validate_args, name=name)
self._posterior = posterior
self._prior = prior
self._activation_fn = activation_fn
self._posterior_value_fn = posterior_value_fn
self._posterior_value = None
self._seed = SeedStream(seed, salt=self.name)
self._dtype = dtype
tf.nest.assert_same_structure(prior.dtype, posterior.dtype,
check_types=False)
@property
def dtype(self):
return self._dtype
@property
def posterior(self):
return self._posterior
@property
def prior(self):
return self._prior
@property
def activation_fn(self):
return self._activation_fn
@property
def posterior_value_fn(self):
return self._posterior_value_fn
@property
def posterior_value(self):
return self._posterior_value
class VariationalReparameterizationKernelBiasLayer(VariationalLayer):
"""Variational reparameterization linear layer."""
def __init__(
self,
posterior,
prior,
apply_kernel_fn,
activation_fn=None,
posterior_value_fn=tfd.Distribution.sample,
unpack_weights_fn=unpack_kernel_and_bias,
seed=None,
dtype=tf.float32,
validate_args=False,
name=None):
super(VariationalReparameterizationKernelBiasLayer, self).__init__(
posterior,
prior,
activation_fn=activation_fn,
posterior_value_fn=posterior_value_fn,
seed=seed,
dtype=dtype,
validate_args=validate_args,
name=name)
self._apply_kernel_fn = apply_kernel_fn
self._unpack_weights_fn = unpack_weights_fn
@property
def unpack_weights_fn(self):
return self._unpack_weights_fn
def __call__(self, x, **kwargs):
x = tf.convert_to_tensor(x, dtype=self.dtype, name='x')
self._posterior_value = self.posterior_value_fn(
self.posterior, seed=self._seed()) # pylint: disable=not-callable
kernel, bias = self.unpack_weights_fn(self.posterior_value) # pylint: disable=not-callable
y = x
if kernel is not None:
y = self._apply_kernel_fn(y, kernel)
if bias is not None:
y = y + bias
if self.activation_fn is not None:
y = self.activation_fn(y) # pylint: disable=not-callable
return y
class VariationalFlipoutKernelBiasLayer(VariationalLayer):
"""Variational flipout linear layer."""
def __init__(
self,
posterior,
prior,
apply_kernel_fn,
activation_fn=None,
posterior_value_fn=tfd.Distribution.sample,
unpack_weights_fn=unpack_kernel_and_bias,
seed=None,
dtype=tf.float32,
validate_args=False,
name=None):
super(VariationalFlipoutKernelBiasLayer, self).__init__(
posterior,
prior,
activation_fn=activation_fn,
posterior_value_fn=posterior_value_fn,
seed=seed,
dtype=dtype,
validate_args=validate_args,
name=name)
self._apply_kernel_fn = apply_kernel_fn
self._unpack_weights_fn = unpack_weights_fn
@property
def unpack_weights_fn(self):
return self._unpack_weights_fn
def __call__(self, x, **kwargs):
x = tf.convert_to_tensor(x, dtype=self.dtype, name='x')
self._posterior_value = self.posterior_value_fn(
self.posterior, seed=self._seed()) # pylint: disable=not-callable
kernel, bias = self.unpack_weights_fn(self.posterior_value) # pylint: disable=not-callable
y = x
if kernel is not None:
kernel_dist, _ = self.unpack_weights_fn( # pylint: disable=not-callable
self.posterior.sample_distributions(value=self.posterior_value)[0])
kernel_loc, kernel_scale = get_spherical_normal_loc_scale(kernel_dist)
# batch_size = tf.shape(x)[0]
# sign_input_shape = ([batch_size] +
# [1] * self._rank +
# [self._input_channels])
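      # Flipout (Wen et al., 2018) decorrelates weight noise across the batch:
      # a shared zero-mean kernel perturbation has its sign flipped per example
      # by independent Rademacher (+/-1) masks on the inputs and outputs; the
      # deterministic mean path `x @ loc` is added separately below, so the
      # estimator stays unbiased while gradient variance drops.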
y *= tfp_random.rademacher(ps.shape(y),
dtype=y.dtype,
seed=self._seed())
kernel_perturb = normal_lib.Normal(loc=0., scale=kernel_scale)
y = self._apply_kernel_fn( # E.g., tf.matmul.
y,
kernel_perturb.sample(seed=self._seed()))
y *= tfp_random.rademacher(ps.shape(y),
dtype=y.dtype,
seed=self._seed())
y += self._apply_kernel_fn(x, kernel_loc)
if bias is not None:
y = y + bias
if self.activation_fn is not None:
y = self.activation_fn(y) # pylint: disable=not-callable
return y
def get_spherical_normal_loc_scale(d):
if isinstance(d, independent_lib.Independent):
return get_spherical_normal_loc_scale(d.distribution)
if isinstance(d, (normal_lib.Normal, mvn_diag_lib.MultivariateNormalDiag)):
return d.loc, d.scale
raise TypeError('Expected kernel `posterior` to be spherical Normal; '
'saw: "{}".'.format(type(d).__name__))
|
[
"gardener@tensorflow.org"
] |
gardener@tensorflow.org
|
22e0763bb5fdccc77c69692bfef9870bcca55aca
|
3481356e47dcc23d06e54388153fe6ba795014fa
|
/comprehensive_test/BaseStruct/BaseStruct/BaseStruct.py
|
b17d76fdf85e1568f29014674e40ae6663232196
|
[] |
no_license
|
Chise1/pyhk
|
c09a4c5a06ce93e7fe50c0cc078429f7f63fcb2f
|
44bdb51e1772efad9d0116feab1c991c601aa68a
|
refs/heads/master
| 2021-01-03T08:24:47.255171
| 2020-02-29T04:05:30
| 2020-02-29T04:05:30
| 239,998,705
| 1
| 0
| null | 2020-02-28T07:35:46
| 2020-02-12T11:40:39
|
C
|
UTF-8
|
Python
| false
| false
| 7,927
|
py
|
# This file was automatically generated by SWIG (http://www.swig.org).
# Version 3.0.12
#
# Do not make changes to this file unless you know what you are doing--modify
# the SWIG interface file instead.
from sys import version_info as _swig_python_version_info
if _swig_python_version_info >= (2, 7, 0):
def swig_import_helper():
import importlib
pkg = __name__.rpartition('.')[0]
mname = '.'.join((pkg, '_BaseStruct')).lstrip('.')
try:
return importlib.import_module(mname)
except ImportError:
return importlib.import_module('_BaseStruct')
_BaseStruct = swig_import_helper()
del swig_import_helper
elif _swig_python_version_info >= (2, 6, 0):
def swig_import_helper():
from os.path import dirname
import imp
fp = None
try:
fp, pathname, description = imp.find_module('_BaseStruct', [dirname(__file__)])
except ImportError:
import _BaseStruct
return _BaseStruct
try:
_mod = imp.load_module('_BaseStruct', fp, pathname, description)
finally:
if fp is not None:
fp.close()
return _mod
_BaseStruct = swig_import_helper()
del swig_import_helper
else:
import _BaseStruct
del _swig_python_version_info
try:
_swig_property = property
except NameError:
pass # Python < 2.2 doesn't have 'property'.
try:
import builtins as __builtin__
except ImportError:
import __builtin__
def _swig_setattr_nondynamic(self, class_type, name, value, static=1):
if (name == "thisown"):
return self.this.own(value)
if (name == "this"):
if type(value).__name__ == 'SwigPyObject':
self.__dict__[name] = value
return
method = class_type.__swig_setmethods__.get(name, None)
if method:
return method(self, value)
if (not static):
if _newclass:
object.__setattr__(self, name, value)
else:
self.__dict__[name] = value
else:
raise AttributeError("You cannot add attributes to %s" % self)
def _swig_setattr(self, class_type, name, value):
return _swig_setattr_nondynamic(self, class_type, name, value, 0)
def _swig_getattr(self, class_type, name):
if (name == "thisown"):
return self.this.own()
method = class_type.__swig_getmethods__.get(name, None)
if method:
return method(self)
raise AttributeError("'%s' object has no attribute '%s'" % (class_type.__name__, name))
def _swig_repr(self):
try:
strthis = "proxy of " + self.this.__repr__()
except __builtin__.Exception:
strthis = ""
return "<%s.%s; %s >" % (self.__class__.__module__, self.__class__.__name__, strthis,)
try:
_object = object
_newclass = 1
except __builtin__.Exception:
class _object:
pass
_newclass = 0
def add(a: 'int', b: 'int') -> "int":
return _BaseStruct.add(a, b)
add = _BaseStruct.add
class POINTER(_object):
__swig_setmethods__ = {}
__setattr__ = lambda self, name, value: _swig_setattr(self, POINTER, name, value)
__swig_getmethods__ = {}
__getattr__ = lambda self, name: _swig_getattr(self, POINTER, name)
__repr__ = _swig_repr
__swig_setmethods__["x"] = _BaseStruct.POINTER_x_set
__swig_getmethods__["x"] = _BaseStruct.POINTER_x_get
if _newclass:
x = _swig_property(_BaseStruct.POINTER_x_get, _BaseStruct.POINTER_x_set)
__swig_setmethods__["y"] = _BaseStruct.POINTER_y_set
__swig_getmethods__["y"] = _BaseStruct.POINTER_y_get
if _newclass:
y = _swig_property(_BaseStruct.POINTER_y_get, _BaseStruct.POINTER_y_set)
def __init__(self):
this = _BaseStruct.new_POINTER()
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
__swig_destroy__ = _BaseStruct.delete_POINTER
__del__ = lambda self: None
POINTER_swigregister = _BaseStruct.POINTER_swigregister
POINTER_swigregister(POINTER)
class VECTOR(_object):
__swig_setmethods__ = {}
__setattr__ = lambda self, name, value: _swig_setattr(self, VECTOR, name, value)
__swig_getmethods__ = {}
__getattr__ = lambda self, name: _swig_getattr(self, VECTOR, name)
__repr__ = _swig_repr
__swig_setmethods__["x"] = _BaseStruct.VECTOR_x_set
__swig_getmethods__["x"] = _BaseStruct.VECTOR_x_get
if _newclass:
x = _swig_property(_BaseStruct.VECTOR_x_get, _BaseStruct.VECTOR_x_set)
__swig_setmethods__["y"] = _BaseStruct.VECTOR_y_set
__swig_getmethods__["y"] = _BaseStruct.VECTOR_y_get
if _newclass:
y = _swig_property(_BaseStruct.VECTOR_y_get, _BaseStruct.VECTOR_y_set)
__swig_setmethods__["vector_length"] = _BaseStruct.VECTOR_vector_length_set
__swig_getmethods__["vector_length"] = _BaseStruct.VECTOR_vector_length_get
if _newclass:
vector_length = _swig_property(_BaseStruct.VECTOR_vector_length_get, _BaseStruct.VECTOR_vector_length_set)
def __init__(self):
this = _BaseStruct.new_VECTOR()
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
__swig_destroy__ = _BaseStruct.delete_VECTOR
__del__ = lambda self: None
VECTOR_swigregister = _BaseStruct.VECTOR_swigregister
VECTOR_swigregister(VECTOR)
class NET_DVR_TIME(_object):
__swig_setmethods__ = {}
__setattr__ = lambda self, name, value: _swig_setattr(self, NET_DVR_TIME, name, value)
__swig_getmethods__ = {}
__getattr__ = lambda self, name: _swig_getattr(self, NET_DVR_TIME, name)
__repr__ = _swig_repr
__swig_setmethods__["dwYear"] = _BaseStruct.NET_DVR_TIME_dwYear_set
__swig_getmethods__["dwYear"] = _BaseStruct.NET_DVR_TIME_dwYear_get
if _newclass:
dwYear = _swig_property(_BaseStruct.NET_DVR_TIME_dwYear_get, _BaseStruct.NET_DVR_TIME_dwYear_set)
__swig_setmethods__["dwMonth"] = _BaseStruct.NET_DVR_TIME_dwMonth_set
__swig_getmethods__["dwMonth"] = _BaseStruct.NET_DVR_TIME_dwMonth_get
if _newclass:
dwMonth = _swig_property(_BaseStruct.NET_DVR_TIME_dwMonth_get, _BaseStruct.NET_DVR_TIME_dwMonth_set)
__swig_setmethods__["dwDay"] = _BaseStruct.NET_DVR_TIME_dwDay_set
__swig_getmethods__["dwDay"] = _BaseStruct.NET_DVR_TIME_dwDay_get
if _newclass:
dwDay = _swig_property(_BaseStruct.NET_DVR_TIME_dwDay_get, _BaseStruct.NET_DVR_TIME_dwDay_set)
__swig_setmethods__["dwHour"] = _BaseStruct.NET_DVR_TIME_dwHour_set
__swig_getmethods__["dwHour"] = _BaseStruct.NET_DVR_TIME_dwHour_get
if _newclass:
dwHour = _swig_property(_BaseStruct.NET_DVR_TIME_dwHour_get, _BaseStruct.NET_DVR_TIME_dwHour_set)
__swig_setmethods__["dwMinute"] = _BaseStruct.NET_DVR_TIME_dwMinute_set
__swig_getmethods__["dwMinute"] = _BaseStruct.NET_DVR_TIME_dwMinute_get
if _newclass:
dwMinute = _swig_property(_BaseStruct.NET_DVR_TIME_dwMinute_get, _BaseStruct.NET_DVR_TIME_dwMinute_set)
__swig_setmethods__["dwSecond"] = _BaseStruct.NET_DVR_TIME_dwSecond_set
__swig_getmethods__["dwSecond"] = _BaseStruct.NET_DVR_TIME_dwSecond_get
if _newclass:
dwSecond = _swig_property(_BaseStruct.NET_DVR_TIME_dwSecond_get, _BaseStruct.NET_DVR_TIME_dwSecond_set)
def __init__(self):
this = _BaseStruct.new_NET_DVR_TIME()
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
__swig_destroy__ = _BaseStruct.delete_NET_DVR_TIME
__del__ = lambda self: None
NET_DVR_TIME_swigregister = _BaseStruct.NET_DVR_TIME_swigregister
NET_DVR_TIME_swigregister(NET_DVR_TIME)
# This file is compatible with both classic and new-style classes.
|
[
"chise123@live.com"
] |
chise123@live.com
|
b95fc2cc946058c79099fb52b43c4dd92c36b741
|
0b86600e0288c0fefc081a0f428277a68b14882e
|
/tortue/tortue_2.py
|
c2f8f226561cdee9a094b36f6cc92224f645b31b
|
[] |
no_license
|
Byliguel/python1-exo7
|
9ede37a8d2b8f384d1ebe3d612e8c25bbe47a350
|
fbf6b08f4c1e94dd9f170875eee871a84849399e
|
refs/heads/master
| 2020-09-22T10:16:34.044141
| 2019-12-01T11:52:51
| 2019-12-01T11:52:51
| 225,152,986
| 1
| 0
| null | 2019-12-01T11:51:37
| 2019-12-01T11:51:36
| null |
UTF-8
|
Python
| false
| false
| 849
|
py
|
##############################
# Turtle
##############################
##############################
# Activity 2 - "for" loop
##############################
##############################
# Question 1
from turtle import *
# A pentagon
width(5)
color('blue')
for i in range(5):
forward(100)
left(72)
##############################
# Question 2
# Another pentagon
color('red')
longueur = 200
angle = 72
for i in range(5):
forward(longueur)
left(angle)
##############################
# Question 3
# A dodecagon (12 sides, that is)
color("purple")
n = 12
angle = 360/n
for i in range(n):
forward(100)
left(angle)
##############################
# Question 4
# A spiral
color("green")
longueur = 10
for i in range(25):
forward(longueur)
left(40)
longueur = longueur + 10
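##############################
# Generalization (a sketch, not part of the original exercise):
# the exterior angle of a regular n-gon is 360/n.
def polygon(n, longueur):
    for i in range(n):
        forward(longueur)
        left(360.0/n)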
exitonclick()
|
[
"arnaud.bodin@math.univ-lille1.fr"
] |
arnaud.bodin@math.univ-lille1.fr
|
2220848f9b639102050af94c3eaf9af3d8c8c619
|
6fa7f99d3d3d9b177ef01ebf9a9da4982813b7d4
|
/oF8T7Apf7jfagC4fD_10.py
|
1b90b98f9bbbebbad5d49653d4c2612e67ba78bc
|
[] |
no_license
|
daniel-reich/ubiquitous-fiesta
|
26e80f0082f8589e51d359ce7953117a3da7d38c
|
9af2700dbe59284f5697e612491499841a6c126f
|
refs/heads/master
| 2023-04-05T06:40:37.328213
| 2021-04-06T20:17:44
| 2021-04-06T20:17:44
| 355,318,759
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 143
|
py
|
def antipodes_average(lst):
lst1, lst2 = lst[:len(lst) // 2], lst[len(lst) // 2:][::-1]
return [(x + y) / 2 for x, y in zip(lst1, lst2)]
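# Example (a sketch, not part of the original file): the antipodal pairs of
# [1, 2, 3, 4] are (1, 4) and (2, 3); for odd lengths zip() drops the
# unpaired middle element.
assert antipodes_average([1, 2, 3, 4]) == [2.5, 2.5]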
|
[
"daniel.reich@danielreichs-MacBook-Pro.local"
] |
daniel.reich@danielreichs-MacBook-Pro.local
|
15e9b01fef356ff7e0ed0f51586e912e135b0326
|
d4a88e52bb2b504208fc8b278400791409c69dbb
|
/src/pyaC/mtools/sanityOfmtools.py
|
090f52e68939621761d3f293755fe1ba6dad9178
|
[
"MIT"
] |
permissive
|
pcschneider/PyAstronomy
|
23253ca57a35c2c4ed2ae01037f6512581e784bb
|
42c2200e4d45832935b7a3d9b3b05aeb30c54b50
|
refs/heads/master
| 2020-04-05T22:53:02.187609
| 2017-03-02T12:14:56
| 2017-03-02T12:14:56
| 52,076,073
| 0
| 0
| null | 2016-02-19T09:17:26
| 2016-02-19T09:17:25
| null |
UTF-8
|
Python
| false
| false
| 5,123
|
py
|
from __future__ import print_function
import unittest
import os
class SanityOfmtools(unittest.TestCase):
def setUp(self):
pass
def tearDown(self):
if os.path.isfile("test.tmp"):
os.remove("test.tmp")
def sanity_numericalDerivative(self):
"""
mtools: Checking accuracy of numerical derivatives.
"""
# Check polynomial
from .numericalDerivatives import diffCFD
import numpy as np
# Analytical derivatives
x = np.linspace(-10,10,1000)
y = [np.poly1d((0.03, -0.31, 0.4, 0.35, 1.4))]
for i in range(4):
y.append(y[-1].deriv())
for erro in [2,4,6]:
for i in range(1,5):
indi, der = diffCFD(x, np.polyval(y[0], x), i, erro)
self.assertLess(np.max(np.abs(der/np.polyval(y[i], x[indi]) - 1.0)), 2e-2*pow(10.0,erro-2))
# Check trigonometric
y = [np.sin(x/10.0*2*np.pi+3)]
y.append(2*np.pi/10.0 * np.cos(x/10.0*2*np.pi+3))
y.append(-(2*np.pi/10.0)**2 * np.sin(x/10.0*2*np.pi+3))
y.append(-(2*np.pi/10.0)**3 * np.cos(x/10.0*2*np.pi+3))
y.append((2*np.pi/10.0)**4 * np.sin(x/10.0*2*np.pi+3))
for erro in [2,4,6]:
for i in range(1,5):
indi, der = diffCFD(x, y[0], i, erro)
self.assertLess(np.max(np.abs(der/y[i][indi] - 1.0)), 1e-3*pow(10.0,erro-2))
# Check exponential
y = np.exp(x)
for erro in [2,4,6]:
for i in range(1,5):
print(i, erro)
indi, der = diffCFD(x, y, i, erro)
self.assertLess(np.max(np.abs(der/y[indi] - 1.0)), 1e-3*pow(10.0,erro-2))
def sanity_diffCFDExample(self):
"""
mtools: diffCFD example
"""
from PyAstronomy import pyaC
import matplotlib.pylab as plt
import numpy as np
x = np.linspace(-10,10,1000)
        # Compute the polynomial and its derivatives
# (quasi analytically)
y = [np.poly1d((0.03, -0.31, 0.4, 0.35, 1.4))]
for i in range(4):
y.append(y[-1].deriv())
# Compute derivates numerically and compare to
# analytic solution
erro = 2
for i in range(1,5):
indi, der = pyaC.diffCFD(x, np.polyval(y[0], x), i, erro)
plt.plot(x[indi], np.polyval(y[i], x[indi]), 'b.')
plt.plot(x[indi], der, 'r--')
# plt.show()
def sanity_ibtrapzExample(self):
"""
mtools: Checking example of ibtrapz
"""
from PyAstronomy.pyaC import mtools
import numpy as np
x = np.arange(-2.,2.01,0.1)
y = x**3 + 1.7
x0 = -1.375
x1 = +1.943
# Analytical value of integral
analyt = 0.25*(x1**4 - x0**4) + 1.7*(x1-x0)
print("Analytical value: ", analyt)
print("ibtrapz: ", mtools.ibtrapz(x, y, x0, x1))
def sanity_ibtrapz(self):
"""
mtools: Checking ibtrapz
"""
from PyAstronomy.pyaC import mtools
import numpy as np
x = np.arange(-2.,2.01,0.1)
y = 2. * x
x0 = -1.375
x1 = +1.943
# Analytical value of integral
analyt = x1**2 - x0**2
self.assertAlmostEqual(analyt, mtools.ibtrapz(x, y, x0, x1), delta=1e-10, msg="ibtrapz incorrect for linear function.")
self.assertAlmostEqual((-1.9)**2-(-2.0)**2, mtools.ibtrapz(x, y, -2.0, -2.0+0.1), delta=1e-10, msg="ibtrapz incorrect for linear function (-2,-1.9).")
self.assertAlmostEqual(0.0, mtools.ibtrapz(x, y, -2.0, +2.0), delta=1e-10, msg="ibtrapz incorrect for linear function (-2,+2).")
def sanity_zerocross1dExample(self):
"""
Checking sanity of zerocross1d example
"""
import numpy as np
import matplotlib.pylab as plt
from PyAstronomy import pyaC
# Generate some 'data'
x = np.arange(100.)**2
y = np.sin(x)
# Set the last data point to zero.
# It will not be counted as a zero crossing!
y[-1] = 0
# Set point to zero. This will be counted as a
# zero crossing
y[10] = 0.0
# Get coordinates and indices of zero crossings
xc, xi = pyaC.zerocross1d(x, y, getIndices=True)
# Plot the data
plt.plot(x, y, 'b.-')
# Add black points where the zero line is crossed
plt.plot(xc, np.zeros(len(xc)), 'kp')
# Add green points at data points preceding an actual
# zero crossing.
plt.plot(x[xi], y[xi], 'gp')
# plt.show()
def sanity_zerocross1d(self):
"""
Checking sanity of zerocross1d
"""
import numpy as np
from PyAstronomy import pyaC
x = np.arange(3.)
y = np.array([0, 3, 0])
xz = pyaC.zerocross1d(x, y)
self.assertEqual(len(xz), 0, msg="Found zero crossing in 0,3,0 array (problem with first/last point).")
y = np.array([-1., 1., -2.])
xz, xi = pyaC.zerocross1d(x, y, getIndices=True)
self.assertEqual(len(xz), 2, msg="Found the following zero crossings in -1,1,-2 array: " + str(xz))
        self.assertEqual(len(xz), len(xi), "Number of values and indices is not identical.")
self.assertAlmostEqual(np.max(np.abs(np.array([0.5, 1.+1./3.])-xz)), 0.0, delta=1e-8, msg="Found unexpected zero crossings: " + str(xz))
self.assertEqual(np.max(np.abs(xi - np.array([0,1]))), 0, msg="Found unexpected indices: " + str(xi))
|
[
"stefan.czesla@hs.uni-hamburg.de"
] |
stefan.czesla@hs.uni-hamburg.de
|
5c805124bb7ea1d536d8e28ac2d185d90ef5a508
|
d17403d0e6ffb7e32798df921e287ff60c8461f8
|
/GuoMei/spiders/shop.py
|
4a6f840433f0dd2f6da216f1748df92eb095dc3c
|
[] |
no_license
|
beautifulmistake/GuoMeiMall
|
7d29c67bb153f7f9b5924e9277fc13580b483fad
|
9537c926c2891905732972fc061f8a3da2af6439
|
refs/heads/master
| 2020-05-31T06:43:41.858288
| 2019-06-11T08:45:30
| 2019-06-11T08:45:30
| 190,148,927
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 4,851
|
py
|
"""
此爬虫是根据关键字采集国美的店铺信息
响应的数据为 json 格式
"""
import json
import os
import scrapy
from scrapy.exceptions import CloseSpider
from scrapy.signals import spider_closed
from scrapy.spidermiddlewares.httperror import HttpError
from scrapy_redis.spiders import RedisSpider
from twisted.internet.error import TimeoutError, TCPTimedOutError, DNSLookupError
from GuoMei.items import GuoMeiShop
class GuoMeiShopSpider(RedisSpider):
    # spider name
name = 'shopSpider'
    # startup redis key (the list the spider reads from)
redis_key = 'GuoMeiShopSpider:items'
    # per-spider settings
custom_settings = dict(
ITEM_PIPELINES={
'GuoMei.pipelines.ShopInfoPipeline': 300,
}
)
def __init__(self, settings):
super(GuoMeiShopSpider, self).__init__()
        # list of keyword task files
self.keyword_file_list = os.listdir(settings.get("KEYWORD_PATH"))
        # shop request URL: "10" -> results per page, "{page}" -> page number
self.shop_url = 'https://apis.gome.com.cn/p/mall/10/{page}/{keyword}?from=search'
# headers
self.headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/'
'537.36 (KHTML, like Gecko) Chrome/72.0.3626.109 Safari/537.36',
# '': '',
}
def parse_err(self, failure):
"""
        Error callback: requeue requests that failed.
:param failure:
:return:
"""
if failure.check(TimeoutError, TCPTimedOutError, DNSLookupError):
            # the failed request
request = failure.request
            # push the failed request's URL back onto the queue
            self.server.rpush(self.redis_key, request.url)
if failure.check(HttpError):
            # the response
response = failure.value.response
            # push the failed URL back onto the queue
self.server.rpush(self.redis_key, response.url)
return
def start_requests(self):
"""
        Iterate over the keyword files and generate the initial requests.
:return:
"""
if not self.keyword_file_list:
            # raise an exception and close the spider
            raise CloseSpider('keyword files are required')
for keyword_file in self.keyword_file_list:
            # build the path of each keyword file
file_path = os.path.join(self.settings.get("KEYWORD_PATH"), keyword_file)
            # read the file
with open(file_path, 'r', encoding='utf-8') as f:
for keyword in f.readlines():
                    # strip trailing whitespace from the keyword
keyword = keyword.strip()
                    # issue the request
yield scrapy.Request(url=self.shop_url.format(page=str(1), keyword=keyword), callback=self.parse,
errback=self.parse_err, meta={'keyword': keyword})
@classmethod
def from_crawler(cls, crawler, *args, **kwargs):
        # crawler settings
settings = crawler.settings
        # build the spider instance
spider = super(GuoMeiShopSpider, cls).from_crawler(crawler, settings, *args, **kwargs)
        # connect the spider-closed signal
crawler.signals.connect(spider.spider_closed, signal=spider_closed)
        # return the spider
return spider
def spider_closed(self, spider):
"""
        Custom actions to run when the spider closes.
:param spider:
:return:
"""
self.logger.info('Spider closed : %s', spider.name)
        # add the two file operations below if your use case needs them
# spider.record_file.write("]")
# spider.record_file.close()
def parse(self, response):
if response.status == 200:
print(response.text)
            # the search keyword
keyword = response.meta['keyword']
# json ---> dict
            res = json.loads(response.text)
            # total number of pages
totalPage = res.get('totalPage')
            # current page number
currentPage = res.get('currentPage')
            # total number of search results
totalCount = res.get('totalCount')
            # list of shop records
shopList = res.get('shopList')
if shopList:
for shop in shopList:
# item
item = GuoMeiShop()
                    # the search keyword
item['keyword'] = keyword
                    # total shop count
item['totalCount'] = totalCount
                    # shop record
item['shop_info'] = shop
yield item
if int(currentPage) < int(totalPage):
                # next page
yield scrapy.Request(url=self.shop_url.format(page=int(currentPage) + 1, keyword=keyword),
callback=self.parse, errback=self.parse_err, meta=response.meta)
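# Hypothetical seeding sketch (not part of this module): with scrapy_redis,
# requests can be seeded by pushing URLs onto the key named in `redis_key`:
#
#     import redis
#     redis.Redis().lpush(
#         'GuoMeiShopSpider:items',
#         'https://apis.gome.com.cn/p/mall/10/1/phone?from=search')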
|
[
"17319296359@163.com"
] |
17319296359@163.com
|
b92f9667ab1f08bc8ef15d2340b5e322bfb3e78d
|
522edf49c560a9d8857888d81500ecfd9d106933
|
/WayneBlog/WayneBlog/autocomplete.py
|
e1968bfa4787e65f20e6208c09f6649e4e1b27c5
|
[] |
no_license
|
WayneChen1994/WayneBlog
|
d5590b48151ef6ffef5b6e461069bbd0113aefc6
|
79355c11976fdf84e5dc35bda628a2c4756676d0
|
refs/heads/master
| 2020-04-29T07:29:01.977198
| 2019-04-19T15:22:56
| 2019-04-19T15:22:56
| 175,945,954
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 739
|
py
|
from dal import autocomplete
from blog.models import Category, Tag
class CategoryAutocomplete(autocomplete.Select2QuerySetView):
def get_queryset(self):
if not self.request.user.is_authenticated():
return Category.objects.none()
qs = Category.objects.filter(owner=self.request.user)
if self.q:
qs = qs.filter(name__istartswith=self.q)
return qs
class TagAutocomplete(autocomplete.Select2QuerySetView):
def get_queryset(self):
if not self.request.user.is_authenticated():
return Tag.objects.none()
qs = Tag.objects.filter(owner=self.request.user)
if self.q:
qs = qs.filter(name__istartswith=self.q)
return qs
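# Hypothetical URL wiring for the two views above (they are ordinary Django
# class-based views, so `as_view()` applies):
#
#     from django.conf.urls import url
#     urlpatterns = [
#         url(r'^category-autocomplete/$', CategoryAutocomplete.as_view(),
#             name='category-autocomplete'),
#         url(r'^tag-autocomplete/$', TagAutocomplete.as_view(),
#             name='tag-autocomplete'),
#     ]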
|
[
"waynechen1994@163.com"
] |
waynechen1994@163.com
|
40cfecdd4710615953024f1d17a719922c50da7a
|
3365e4d4fc67bbefe4e8c755af289c535437c6f4
|
/.history/src/core/app/GUI_20170808112214.py
|
a41ca5b589222a9d9f1ef86e53b552cdda38bb1b
|
[] |
no_license
|
kiranhegde/OncoPlotter
|
f3ab9cdf193e87c7be78b16501ad295ac8f7d2f1
|
b79ac6aa9c6c2ca8173bc8992ba3230aa3880636
|
refs/heads/master
| 2021-05-21T16:23:45.087035
| 2017-09-07T01:13:16
| 2017-09-07T01:13:16
| null | 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 5,793
|
py
|
#! python3
import ctypes
import os
import sys
from PyQt5 import QtCore, QtGui
from PyQt5.QtWidgets import (QAction, QApplication, QFileDialog, QFrame,
QMainWindow, QPushButton, QSizePolicy, QSplitter,
QTextEdit, QVBoxLayout, QWidget)
#GUI
import core.gui.mainwindow as mainwindow
#Support functions
from core.app.support_functions import import_plot_data
from core.dialogs.spider_dialog import Spider, SpiderPlotter
#Dialogs
from core.dialogs.swimmer_dialog import Swimmer, SwimmerPlotter
from core.dialogs.waterfall_dialog import Waterfall, WaterfallPlotter
image_dir = os.path.dirname(os.path.abspath('../OncoPlot'))
class MainWindow(QMainWindow, mainwindow.Ui_MainWindow):
waterfall_data_signal = QtCore.pyqtSignal(list)
def __init__(self,parent=None):
QMainWindow.__init__(self,parent)
self.setupUi(self)
self.setup_window()
def setup_window(self):
#Dialogs
self.Waterfall_Plot = WaterfallPlotter(self)
self.Waterfall = Waterfall(self)
self.Waterfall_Widget = QWidget()
self.Waterfall_Box = QVBoxLayout()
self.Waterfall_Splitter = QSplitter(QtCore.Qt.Horizontal)
self.Waterfall_Splitter.addWidget(self.Waterfall)
self.Waterfall_Splitter.addWidget(self.Waterfall_Plot)
self.Waterfall_Box.addWidget(self.Waterfall_Splitter)
self.Waterfall_Widget.setLayout(self.Waterfall_Box)
self.Spider_Widget = QWidget()
self.Spider_Box = QVBoxLayout()
self.Spider_Splitter = QSplitter(QtCore.Qt.Horizontal)
self.Spider_Plot = SpiderPlotter(self)
self.Spider = Spider(self)
self.Spider_Splitter.addWidget(self.Spider)
self.Spider_Splitter.addWidget(self.Spider_Plot)
self.Spider_Box.addWidget(self.Spider_Splitter)
self.Spider_Widget.setLayout(self.Spider_Box)
self.Swimmer_Widget = QWidget()
self.Swimmer_Box = QVBoxLayout()
self.Swimmer_Splitter = QSplitter(QtCore.Qt.Horizontal)
self.Swimmer_Plot = SwimmerPlotter(self)
self.Swimmer = Swimmer(self)
self.Swimmer_Splitter.addWidget(self.Swimmer)
self.Swimmer_Splitter.addWidget(self.Swimmer_Plot)
self.Swimmer_Box.addWidget(self.Swimmer_Splitter)
self.Swimmer_Widget.setLayout(self.Swimmer_Box)
self.stackedWidget.addWidget(self.Waterfall_Widget) #0
self.stackedWidget.addWidget(self.Spider_Widget) #1
self.stackedWidget.addWidget(self.Swimmer_Widget) #2
self.stackedWidget.hide()
#Set up toolBar
self.toolBar.setToolButtonStyle(QtCore.Qt.ToolButtonTextUnderIcon)
        importAction = QAction(QtGui.QIcon(os.path.join(image_dir, 'images', 'Download.png')), 'Import data template', self)
importAction.triggered.connect(self.import_data)
importAction.setIconText("Import")
self.toolBar.addAction(importAction)
self.toolBar.addSeparator()
        dumpAction = QAction(QtGui.QIcon(os.path.join(image_dir, 'images', 'Rubbish.png')), 'Dump data', self)
#dumpAction.triggered.connect(self.dump_data)
dumpAction.setIconText("Dump data")
self.toolBar.addAction(dumpAction)
self.toolBar.addSeparator()
        self.waterfallAction = QAction(QtGui.QIcon(os.path.join(image_dir, 'images', 'waterfall.png')), 'Waterfall plot', self)
        self.waterfallAction.triggered.connect(self.launch_waterfall)
        self.waterfallAction.setIconText("Waterfall")
        self.waterfallAction.setEnabled(False)
        self.toolBar.addAction(self.waterfallAction)
        self.spiderAction = QAction(QtGui.QIcon(os.path.join(image_dir, 'images', 'spider.png')), 'Spider plot', self)
        self.spiderAction.triggered.connect(self.launch_spider)
        self.spiderAction.setIconText("Spider")
        self.spiderAction.setEnabled(False)
        self.toolBar.addAction(self.spiderAction)
        self.swimmerAction = QAction(QtGui.QIcon(os.path.join(image_dir, 'images', 'swimmer_stack.png')), 'Swimmer plot', self)
        self.swimmerAction.triggered.connect(self.launch_swimmer)  # was launch_spider (copy-paste bug)
        self.swimmerAction.setIconText("Swimmer")
        self.swimmerAction.setEnabled(False)
        self.toolBar.addAction(self.swimmerAction)
self.toolBar.addSeparator()
#Signal interconnections
self.waterfall_data_signal.connect(self.Waterfall.on_waterfall_data_signal)
self.waterfall_data_signal.connect(self.Waterfall_Plot.on_waterfall_data_signal)
self.Waterfall.general_settings_signal.connect(self.Waterfall_Plot.on_general_settings_signal)
#Launch functions
def launch_waterfall(self):
self.stackedWidget.setCurrentIndex(0)
self.stackedWidget.show()
def launch_spider(self):
self.stackedWidget.setCurrentIndex(1)
self.stackedWidget.show()
def launch_swimmer(self):
self.stackedWidget.setCurrentIndex(2)
self.stackedWidget.show()
def import_data(self):
self.file_path = QFileDialog.getOpenFileName(self,"Select Data Template", "C:\\")[0]
if self.file_path == '':
pass
else:
self.waterfall_data = import_plot_data(self.file_path)
self.waterfall_data_signal.emit(self.waterfall_data)
            self.waterfallAction.setEnabled(True)
            self.spiderAction.setEnabled(True)
            self.swimmerAction.setEnabled(True)
def main():
myappid = u'OncoPlotter_V1.0'
ctypes.windll.shell32.SetCurrentProcessExplicitAppUserModelID(myappid)
app = QApplication(sys.argv)
app.setApplicationName('OncoPlotter')
app.setStyle("plastique")
app.setStyleSheet("QSplitter::handle { background-color: gray }")
mainwindow = MainWindow()
mainwindow.show()
sys.exit(app.exec_())
if __name__=='__main__':
main()
|
[
"ngoyal95@terpmail.umd.edu"
] |
ngoyal95@terpmail.umd.edu
|
4a6477389290b79cec799eb18a7fb2e6883e2a89
|
42f418a4535c269ea1fbf2a023acf9805ee72c0a
|
/exploration/executables/December17/parabola_configurations.py
|
934f2503cab59c915d941033f4c014dc5cf8b03f
|
[] |
no_license
|
yumilceh/artificialbabbling
|
10e6fe2fe19edbc0263c61b2897514cbae9a1c1f
|
680b720e48b787bac1c626c597fd690159edd777
|
refs/heads/master
| 2022-11-12T22:21:38.606407
| 2018-04-24T14:37:20
| 2018-04-24T14:37:20
| 270,961,611
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 3,850
|
py
|
"""
Created on May 17, 2017
@author: Juan Manuel Acevedo Valle
"""
from exploration.algorithm.utils.competence_funcs import comp_Moulin2013_expl as comp_func_expl
from exploration.algorithm.utils.competence_funcs import comp_Moulin2013 as comp_func_
# from exploration.models.sensorimotor.ExplautoSM import ExplautoSM as ea_SM
# from exploration.models.sensorimotor.GMM_SM import GMM_SM
from exploration.models.Constraints.ExplautoCons import ExplautoCons as ea_cons
from exploration.models.Interest.ExplautoIM import explauto_IM as ea_IM
from exploration.models.Interest.Random import Random
# from exploration.models.Interest.GMM_IM import GMM_IM
# from exploration.models.Somatomotor.GMM_SS import GMM_SS
from exploration.models.sensorimotor.IGMM_SM import GMM_SM as IGMM_SM
# from exploration.models.Somatomotor.ILGMM_SS import GMM_SS as IGMM_SS
from exploration.models.sensorimotor.trash.ILGMM_SM2 import GMM_SM as ILGMM_old
from igmm import DynamicParameter
# from exploration.algorithm.utils.competence_funcs import comp_Baraglia2015_expl as comp_func_expl
# from exploration.algorithm.utils.competence_funcs import comp_Baraglia2015 as comp_func
comp_func = comp_func_
model_class = {'igmm_sm': IGMM_SM,
'igmm_old': ILGMM_old,
'igmm_ss': IGMM_SM,
'explauto_im': ea_IM,
'explauto_im_som': ea_IM,
'explauto_cons': ea_cons,
'random': Random}
models_params_list = {'igmm_sm': [],
'igmm_ss': [],
'igmm_old': [],
'explauto_cons': [], # 'LWLR-BFGS', 'nearest_neighbor', 'WNN', 'LWLR-CMAES'
'explauto_im': [],
'explauto_im_som':[],
'random': []
}
models_params_dict = {'igmm_ss': {'min_components': 3,
'max_step_components': 10, #5
'max_components': 80, #10
'sm_step': 120,
'somato': True,
'forgetting_factor': 0.2, #OK: 0.2, 0.05
'sigma_explo_ratio': 0.},
'igmm_sm': {'min_components': 3,
'somato': False,
'max_step_components': 5,
'max_components': 20,
'sm_step': 100,
'forgetting_factor': 0.05},
'igmm_old': {'min_components': 3,
'max_step_components': 5,
'max_components': 20,
'sm_step': 100,
'forgetting_factor': 0.2},
'explauto_cons': {'model_type': 'non_parametric', 'model_conf': {'fwd': 'WNN', 'inv': 'WNN',
'k':3, 'sigma':1.,
'sigma_explo_ratio':0.1}},
'explauto_im': {'competence_func': comp_func_expl, 'model_type': 'discretized_progress'},
'explauto_im_som': {'competence_func': comp_func_expl, 'model_type': 'discretized_progress','somato':True},
'random': {'mode': 'sensor'}
}
def model_(model_key, system, competence_func=None):
if competence_func is None:
return model_class[model_key](system, *models_params_list[model_key], **models_params_dict[model_key])
else:
return model_class[model_key](system, competence_func, *models_params_list[model_key],
**models_params_dict[model_key])
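# Hypothetical usage sketch: `system` stands in for one of the project's
# sensorimotor systems; models whose entry in models_params_dict already
# carries a competence function are built without passing one explicitly.
#
#     sm_model = model_('igmm_sm', system)
#     im_model = model_('explauto_im', system)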
|
[
"jmavbpl@gmail.com"
] |
jmavbpl@gmail.com
|
ded8721ccec0deb754fd31df06a0b48f471fc4e1
|
7312266874e50682cf909f4b77260c9a69f13999
|
/python/packages/scipy-0.6.0/scipy/sparse/setup.py
|
9d54d45325a529b3cddd812510c4297bb0f15697
|
[] |
no_license
|
mbentz80/jzigbeercp
|
e354695b90a72c7fe3c5c7ec7d197d9cbc18d7d9
|
1a49320df3db13d0a06fddb30cf748b07e5ba5f0
|
refs/heads/master
| 2021-01-02T22:44:16.088783
| 2008-08-27T23:05:47
| 2008-08-27T23:05:47
| 40,231
| 1
| 1
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 1,323
|
py
|
#!/usr/bin/env python
from os.path import join
import sys
def configuration(parent_package='',top_path=None):
import numpy
from numpy.distutils.misc_util import Configuration
config = Configuration('sparse',parent_package,top_path)
config.add_data_dir('tests')
# Adding a Python file as a "source" file for an extension is something of
# a hack, but it works to put it in the right place.
sources = [join('sparsetools', x) for x in
['sparsetools.py', 'sparsetools_wrap.cxx']]
config.add_extension('_sparsetools',
sources=sources,
include_dirs=['sparsetools'],
)
## sparsetools_i_file = config.paths(join('sparsetools','sparsetools.i'))[0]
## def sparsetools_i(ext, build_dir):
## return sparsetools_i_file
## config.add_extension('_sparsetools',
## sources= [sparsetools_i_file],
## include_dirs=['sparsetools'],
## depends = [join('sparsetools', x) for x in
## ['sparsetools.i', 'sparsetools.h']]
## )
return config
if __name__ == '__main__':
from numpy.distutils.core import setup
setup(**configuration(top_path='').todict())
|
[
"mb434@cornell.edu"
] |
mb434@cornell.edu
|
96f690a53f749d69fdfe0fbb6ef5e7280b753b9e
|
786027545626c24486753351d6e19093b261cd7d
|
/ghidra9.2.1_pyi/ghidra/app/util/bin/format/coff/CoffSectionHeader3.pyi
|
2d208f75241a47f7c45a55cecba6be313298d43c
|
[
"MIT"
] |
permissive
|
kohnakagawa/ghidra_scripts
|
51cede1874ef2b1fed901b802316449b4bf25661
|
5afed1234a7266c0624ec445133280993077c376
|
refs/heads/main
| 2023-03-25T08:25:16.842142
| 2021-03-18T13:31:40
| 2021-03-18T13:31:40
| 338,577,905
| 14
| 1
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 7,087
|
pyi
|
from typing import List, overload
import ghidra.app.util.bin
import ghidra.app.util.bin.format.coff
import ghidra.program.model.address
import ghidra.program.model.data
import ghidra.program.model.lang
import java.io
import java.lang
class CoffSectionHeader3(ghidra.app.util.bin.format.coff.CoffSectionHeader):
"""
A 0x2c byte COFF section header
"""
def equals(self, __a0: object) -> bool: ...
@overload
@staticmethod
def getAddress(language: ghidra.program.model.lang.Language, offset: long, section: ghidra.app.util.bin.format.coff.CoffSectionHeader) -> ghidra.program.model.address.Address:
"""
Convert address offset to an Address object. The default data space (defined by pspec)
will be used if section is null or corresponds to a data section. The language default
space (defined by slaspec) will be used for all non-data sections. If pspec does not
specify a default data space, the default language space is used.
@param language
@param offset address offset (byte offset assumed if section is null or is not explicitly
byte aligned, otherwise word offset assumed).
@param section section which contains the specified offset or null (data space assumed)
@return address object
"""
...
@overload
@staticmethod
def getAddress(language: ghidra.program.model.lang.Language, offset: long, space: ghidra.program.model.address.AddressSpace) -> ghidra.program.model.address.Address:
"""
Convert address offset to an Address in the specified space (defined by pspec).
If pspec does not specify a default data space, the default language space is used.
@param language
@param offset address offset (word offset assumed).
@param space address space
@return address object
"""
...
def getClass(self) -> java.lang.Class: ...
def getFlags(self) -> int:
"""
Returns the flags for this section.
@return the flags for this section
"""
...
def getLineNumberCount(self) -> int:
"""
Returns the number of line number entries for this section.
@return the number of line number entries for this section
"""
...
def getLineNumbers(self) -> List[ghidra.app.util.bin.format.coff.CoffLineNumber]: ...
def getName(self) -> unicode:
"""
Returns the section name.
The section name will never be more than eight characters.
@return the section name
"""
...
def getPage(self) -> int: ...
@overload
def getPhysicalAddress(self) -> int:
"""
Returns the physical address offset.
This is the address at which the section
should be loaded into memory and reflects a addressable word offset.
For linked executables, this is the absolute
address within the program space.
For unlinked objects, this address is relative
to the object's address space (i.e. the first section
is always at offset zero).
@return the physical address
"""
...
@overload
def getPhysicalAddress(self, language: ghidra.program.model.lang.Language) -> ghidra.program.model.address.Address:
"""
Returns the physical address.
This is the address at which the section
should be loaded into memory.
For linked executables, this is the absolute
address within the program space.
For unlinked objects, this address is relative
to the object's address space (i.e. the first section
is always at offset zero).
@return the physical address
"""
...
def getPointerToLineNumbers(self) -> int:
"""
Returns the file offset to the line numbers for this section.
@return the file offset to the line numbers for this section
"""
...
def getPointerToRawData(self) -> int:
"""
Returns the file offset to the section data.
@return the file offset to the section data
"""
...
def getPointerToRelocations(self) -> int:
"""
Returns the file offset to the relocations for this section.
@return the file offset to the relocations for this section
"""
...
def getRawDataStream(self, provider: ghidra.app.util.bin.ByteProvider, language: ghidra.program.model.lang.Language) -> java.io.InputStream:
"""
Returns an input stream that will supply the bytes
for this section.
@return the input stream
@throws IOException if an I/O error occurs
"""
...
def getRelocationCount(self) -> int:
"""
Returns the number of relocations for this section.
@return the number of relocations for this section
"""
...
def getRelocations(self) -> List[ghidra.app.util.bin.format.coff.CoffRelocation]: ...
def getReserved(self) -> int: ...
def getSize(self, language: ghidra.program.model.lang.Language) -> int:
"""
Returns the number of bytes of data stored in the file for this section.
NOTE: This value does not strictly indicate size in bytes.
        For word-oriented machines, this value represents
size in words.
@return the number of bytes of data stored in the file for this section
"""
...
def getVirtualAddress(self) -> int:
"""
Returns the virtual address.
This value is always the same as s_paddr.
@return the virtual address
"""
...
def hashCode(self) -> int: ...
def isAllocated(self) -> bool: ...
def isData(self) -> bool: ...
def isExecutable(self) -> bool: ...
def isExplicitlyByteAligned(self) -> bool:
"""
Returns true if this section is byte oriented and aligned and should assume
an addressable unit size of 1.
@return true if byte aligned, false if word aligned
"""
...
def isGroup(self) -> bool: ...
def isInitializedData(self) -> bool: ...
def isProcessedBytes(self, language: ghidra.program.model.lang.Language) -> bool: ...
def isReadable(self) -> bool: ...
def isUninitializedData(self) -> bool: ...
def isWritable(self) -> bool: ...
def move(self, offset: int) -> None:
"""
Adds offset to the physical address; this must be performed before
relocations in order to achieve the proper result.
@param offset the offset to add to the physical address
"""
...
def notify(self) -> None: ...
def notifyAll(self) -> None: ...
def toDataType(self) -> ghidra.program.model.data.DataType: ...
def toString(self) -> unicode: ...
@overload
def wait(self) -> None: ...
@overload
def wait(self, __a0: long) -> None: ...
@overload
def wait(self, __a0: long, __a1: int) -> None: ...
|
[
"tsunekou1019@gmail.com"
] |
tsunekou1019@gmail.com
|
523591afd5c27c3429d387457b5d75979d089531
|
824f831ce0921b3e364060710c9e531f53e52227
|
/Leetcode/Dynamic_Programming/LC-053. Maximum Subarray.py
|
e6b7d456cfe91003151f2a49f92a98deefd53cc7
|
[] |
no_license
|
adityakverma/Interview_Prepration
|
e854ff92c10d05bc2c82566ea797d2ce088de00a
|
d08a7f728c53943e9a27c33f8e4249633a69d1a6
|
refs/heads/master
| 2020-04-19T19:36:06.527353
| 2019-06-15T23:02:30
| 2019-06-15T23:02:30
| 168,392,921
| 0
| 2
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 2,000
|
py
|
# Tags: Array, Dynamic Programming, Divide & Conquer, Microsoft, LinkedIn
# ========================================================================
# Given an integer array nums, find the contiguous subarray
# (containing at least one number) which has the largest sum and
# return its sum.
# Example:
# Input: [-2,1,-3,4,-1,2,1,-5,4]
# Output: 6. Explanation: [4,-1,2,1] has the largest sum = 6.
# ALSO check LC 152 - Same concept
class Solution():
def maxSubArray(self, nums):
for i in range(1, len(nums)):
            nums[i] = max(nums[i - 1] + nums[i], nums[i])  # in-place Kadane: nums[i] becomes the best sum ending at i
return max(nums)
# Using Dynamic Programming. O(n) Space : Looks like Kadane
def maxSubArray_DP(self, nums): # OKKK ....
if not nums:
return None
dp = [0] * len(nums)
res = dp[0] = nums[0]
        for i in range(1, len(nums)):
            dp[i] = max(dp[i - 1] + nums[i], nums[i])  # best sum of a contiguous subarray ending at index i
            #res = max(res, dp[i])
        #print(dp)
        return max(dp)
# def maxProfit(self, prices):
# """
# :type prices: List[int]
# :rtype: int
# """
# if not prices:
# return 0
#
# curSum = maxSum = 0
# for i in range(1, len(prices)):
# curSum = max(0, curSum + prices[i] - prices[i - 1])
# maxSum = max(maxSum, curSum)
#
# return maxSum
if __name__ == '__main__':
nums = [-2,1,-3,4,-1,2,1,-5,4]
nums1 = [-2,-3,6,-8,2,-9]
s = Solution()
    #print(s.maxSubArray(nums))
    print(s.maxSubArray_DP(nums))
    #print(s.maxProfit(nums))
# Follow-up question from an interview: "How would we solve this given an endless incoming stream of numbers?" Ideas, anybody?
# https://leetcode.com/problems/maximum-subarray/discuss/179894/Follow-up-question-in-Intervierw
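# A minimal sketch for that streaming follow-up (a hypothetical helper, not
# part of the original solution): Kadane's recurrence only needs the running
# sum ending at the current element, so each incoming number is O(1) to fold in.
def max_subarray_stream(stream):
    best = cur = None
    for x in stream:
        cur = x if cur is None or cur < 0 else cur + x
        best = cur if best is None else max(best, cur)
        yield best  # best contiguous-subarray sum seen so far
# Usage: list(max_subarray_stream([-2, 1, -3, 4])) == [-2, 1, 1, 4]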
|
[
"noreply@github.com"
] |
adityakverma.noreply@github.com
|
dd539a1c149d1e02e104ac2f58495f0374684f81
|
bdeb707894a4647cf46ab136e2ed5e7752094897
|
/hive/server/condenser_api/tags.py
|
a75c6be3d711cd4314f922d71ff246c9fa559459
|
[
"MIT"
] |
permissive
|
Jolly-Pirate/hivemind
|
e5f636070b1e51bcd047678b758d5617ad4d83ec
|
2bd91ece6c32f355bca40a156f2a5dc3d4d882bb
|
refs/heads/master
| 2020-05-30T09:01:22.910947
| 2019-04-11T19:25:48
| 2019-04-11T19:25:48
| 189,629,870
| 0
| 0
|
MIT
| 2019-05-31T17:05:18
| 2019-05-31T17:01:27
|
Python
|
UTF-8
|
Python
| false
| false
| 1,902
|
py
|
"""condenser_api trending tag fetching methods"""
from aiocache import cached
from hive.server.condenser_api.common import (return_error_info, valid_tag, valid_limit)
@return_error_info
@cached(ttl=7200, timeout=1200)
async def get_top_trending_tags_summary(context):
"""Get top 50 trending tags among pending posts."""
# Same results, more overhead:
#return [tag['name'] for tag in await get_trending_tags('', 50)]
sql = """
SELECT category
FROM hive_posts_cache
WHERE is_paidout = '0'
GROUP BY category
ORDER BY SUM(payout) DESC
LIMIT 50
"""
return await context['db'].query_col(sql)
@return_error_info
@cached(ttl=3600, timeout=1200)
async def get_trending_tags(context, start_tag: str = '', limit: int = 250):
"""Get top 250 trending tags among pending posts, with stats."""
limit = valid_limit(limit, ubound=250)
start_tag = valid_tag(start_tag or '', allow_empty=True)
if start_tag:
seek = """
HAVING SUM(payout) <= (
SELECT SUM(payout)
FROM hive_posts_cache
WHERE is_paidout = '0'
AND category = :start_tag)
"""
else:
seek = ''
sql = """
SELECT category,
COUNT(*) AS total_posts,
SUM(CASE WHEN depth = 0 THEN 1 ELSE 0 END) AS top_posts,
SUM(payout) AS total_payouts
FROM hive_posts_cache
WHERE is_paidout = '0'
GROUP BY category %s
ORDER BY SUM(payout) DESC
LIMIT :limit
""" % seek
out = []
for row in await context['db'].query_all(sql, limit=limit, start_tag=start_tag):
out.append({
'name': row['category'],
'comments': row['total_posts'] - row['top_posts'],
'top_posts': row['top_posts'],
'total_payouts': "%.3f SBD" % row['total_payouts']})
return out
|
[
"roadscape@users.noreply.github.com"
] |
roadscape@users.noreply.github.com
|
f27caf7428d9cbda9fc1a6c9fffb03dc2afaec0b
|
83cf4aedab8f8b54753dfca1122346a88faa3c05
|
/prysm/__init__.py
|
7028af45080d8151982adc1e43a33f7bf9e984b8
|
[
"MIT"
] |
permissive
|
tealtortoise/prysm
|
073c2a6bea10c390fb7be1d708ecab666a91bdb1
|
3c17cb7b6049a36a1f8b6a0035c216ca1078aee1
|
refs/heads/master
| 2022-05-08T05:30:21.819821
| 2019-04-12T11:49:48
| 2019-04-12T11:49:48
| null | 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 1,451
|
py
|
"""prysm, a python optics module."""
from pkg_resources import get_distribution
from prysm.conf import config
from prysm.convolution import Convolvable
from prysm.detector import Detector, OLPF, PixelAperture
from prysm.pupil import Pupil
from prysm.psf import PSF, AiryDisk
from prysm.otf import MTF
from prysm.interferogram import Interferogram
from prysm.geometry import (
circle,
truecircle,
gaussian,
rotated_ellipse,
square,
regular_polygon,
pentagon,
hexagon,
heptagon,
octagon,
nonagon,
decagon,
hendecagon,
dodecagon,
trisdecagon
)
from prysm.objects import (
Slit,
Pinhole,
SiemensStar,
TiltedSquare,
SlantedEdge,
)
from prysm.zernike import FringeZernike, NollZernike, zernikefit
from prysm.sample_data import sample_files
__all__ = [
'config',
'Detector',
'OLPF',
'PixelAperture',
'Pupil',
'FringeZernike',
'NollZernike',
'zernikefit',
'Interferogram',
'PSF',
'AiryDisk',
'MTF',
'gaussian',
'rotated_ellipse',
'regular_polygon',
'square',
'pentagon',
'hexagon',
'heptagon',
'octagon',
'nonagon',
'decagon',
'hendecagon',
'dodecagon',
'trisdecagon',
'Slit',
'Pinhole',
'SiemensStar',
'TiltedSquare',
'SlantedEdge',
'Convolvable',
'circle',
'truecircle',
'sample_files',
]
__version__ = get_distribution('prysm').version
|
[
"brandondube@gmail.com"
] |
brandondube@gmail.com
|
c6ddd34116e222c160be95243b8b1df1eb4c67b3
|
411d9c64d2f2142f225582f2b4af1280310426f6
|
/sk/logistic.py
|
b193e93f1811528873b0dbd74620a83c1a463b8f
|
[] |
no_license
|
631068264/learn-sktf
|
5a0dfafb898acda83a80dc303b6d6d56e30e7cab
|
4ba36c89003fca6797025319e81fd9863fbd05b1
|
refs/heads/master
| 2022-10-15T03:29:38.709720
| 2022-09-24T12:56:41
| 2022-09-24T12:56:41
| 133,602,172
| 0
| 0
| null | 2022-09-24T12:57:23
| 2018-05-16T02:57:01
|
Python
|
UTF-8
|
Python
| false
| false
| 1,522
|
py
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
@author = 'wyx'
@time = 2018/5/13 18:27
@annotation = ''
"""
import numpy as np
from matplotlib import pyplot as plt
from sklearn import datasets
iris = datasets.load_iris()
# petal width
X = iris["data"][:, 3:]
# 1 if Iris-Virginica, else 0
y = (iris["target"] == 2).astype(int)  # np.int was removed in NumPy 1.24; plain int works
from sklearn.linear_model import LogisticRegression
"""
LogisticRegression
A Softmax regression classifier predicts only one class at a time (i.e. it is
multiclass, not multioutput), so it can only be used with mutually exclusive
classes such as different types of plants. You cannot use it to recognize
several people in a single photo.
"""
if False:
log_reg = LogisticRegression()
log_reg.fit(X, y)
X_new = np.linspace(0, 3, 1000).reshape(-1, 1)
y_proba = log_reg.predict_proba(X_new)
plt.plot(X_new, y_proba[:, 1], "g-", label="Iris-Virginica")
plt.plot(X_new, y_proba[:, 0], "b--", label="Not Iris-Virginica")
plt.legend()
plt.show()
"""
Multinomial:
When trained on more than two classes, Scikit-Learn's LogisticRegression uses
one-versus-rest by default, but you can set the multi_class hyperparameter to
"multinomial" to switch it to Softmax regression. You must also specify a
solver that supports Softmax regression, such as the "lbfgs" solver.
"""
X = iris["data"][:, (2, 3)]
y = iris["target"]
softmax_reg = LogisticRegression(multi_class="multinomial", solver="lbfgs", C=10)
softmax_reg.fit(X, y)
print(softmax_reg.predict([[5, 2]]))
print(softmax_reg.predict_proba([[5, 2]]))
|
[
"l631068264@gmail.com"
] |
l631068264@gmail.com
|
e12e3975e4e78c3b035185b63e0267d080e2e194
|
366c30997f60ed5ec19bee9d61c1c324282fe2bb
|
/deb/openmediavault/usr/sbin/omv-mkaptidx
|
b4820779bddb8f05d68daeb91a1c9571cb97b67d
|
[] |
no_license
|
maniacs-ops/openmediavault
|
9a72704b8e30d34853c991cb68fb055b767e7c6e
|
a1c26bdf269eb996405ce6de72211a051719d9e7
|
refs/heads/master
| 2021-01-11T17:28:04.571883
| 2017-01-19T22:06:35
| 2017-01-19T22:06:35
| null | 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 5,989
|
#!/usr/bin/env python3
#
# This file is part of OpenMediaVault.
#
# @license http://www.gnu.org/licenses/gpl.html GPL Version 3
# @author Volker Theile <volker.theile@openmediavault.org>
# @copyright Copyright (c) 2009-2017 Volker Theile
#
# OpenMediaVault is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# any later version.
#
# OpenMediaVault is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OpenMediaVault. If not, see <http://www.gnu.org/licenses/>.
# The following error might happen from time to time.
#
# Traceback (most recent call last):
# File "/usr/sbin/omv-mkaptidx", line 27, in <module>
# import openmediavault
# EOFError: EOF read where not expected
#
# To analyse the error execute:
# python3 -vc 'import openmediavault'
#
# To fix this error simply execute the following command:
# rm -f /usr/lib/python3/dist-packages/__pycache__/openmediavault.cpython-32.pyc
import sys
import apt
import apt_pkg
import json
import re
import openmediavault as omv
pi = omv.ProductInfo()
class OpenMediaVaultPluginsFilter(apt.cache.Filter):
def apply(self, pkg):
m = re.match(r"^%s-(\S+)$" % pi.package_name, pkg.name)
if not m:
return False
if m.group(1) == "keyring":
return False
return True
def get_extended_description(raw_description):
"""
Return the extended description according to the Debian policy
(Chapter 5.6.13).
See http://www.debian.org/doc/debian-policy/ch-controlfields.html
for more information.
"""
    parts = raw_description.partition("\n")
    lines = parts[2].split("\n")
    for i, line in enumerate(lines):
        lines[i] = line.strip()
        if lines[i] == ".":
            lines[i] = "\n"
    return "\n".join(lines)
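# Worked example with a hypothetical raw description (not from the source):
#   raw = "A synopsis\n First paragraph.\n .\n Second paragraph."
#   get_extended_description(raw) -> "First paragraph.\n\n\nSecond paragraph."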
cache = apt.cache.Cache()
# Create the '/var/lib/openmediavault/apt/upgradeindex.json' file.
print("Creating index of upgradeable packages ...")
data = []
cache.upgrade(True)
for pkg in cache.get_changes():
if pkg.candidate is None:
continue
data.append({
"name": pkg.name,
"oldversion": pkg.installed.version
if pkg.is_installed and pkg.installed is not None
else "",
"repository": "%s/%s" % (pkg.candidate.origins[0].label,
pkg.candidate.origins[0].archive)
if pkg.candidate.origins is not None
else "",
"package": pkg.candidate.record.get("Package"),
"source": pkg.candidate.source_name,
"sourceversion": pkg.candidate.source_version,
"version": pkg.candidate.version,
"installedsize": pkg.candidate.size,
"maintainer": pkg.candidate.record.get("Maintainer", ""),
"architecture": pkg.candidate.architecture,
"depends": pkg.candidate.record.get("Depends", ""),
"suggests": pkg.candidate.record.get("Suggests", ""),
"conflicts": pkg.candidate.record.get("Conflicts", ""),
"breaks": pkg.candidate.record.get("Breaks", ""),
"abstract": pkg.candidate.summary, # Deprecated
"summary": pkg.candidate.summary,
"description": pkg.candidate.record.get("Description", ""),
"extendeddescription": get_extended_description(
pkg.candidate.raw_description),
"homepage": pkg.candidate.homepage,
"descriptionmd5": pkg.candidate.record.get("Description-md5", ""),
"multiarch": pkg.candidate.record.get("Multi-Arch", ""),
"predepends": pkg.candidate.record.get("Pre-Depends", ""),
"section": pkg.candidate.section,
"priority": pkg.candidate.priority,
"filename": pkg.candidate.filename,
"size": pkg.candidate.size,
"md5sum": pkg.candidate.md5,
"sha1": pkg.candidate.sha1,
"sha256": pkg.candidate.sha256,
"uri": pkg.candidate.uri,
"uris": pkg.candidate.uris
})
with open(omv.getenv('OMV_APT_UPGRADE_INDEX_FILE',
'/var/lib/openmediavault/apt/upgradeindex.json'), 'w') as outfile:
json.dump(data, outfile, sort_keys=True, indent=4)
# Create the '/var/lib/openmediavault/apt/pluginsindex.json' file.
print("Creating index of %s plugins ..." % pi.name)
data = []
cache = apt.cache.Cache()
fcache = apt.cache.FilteredCache(cache)
fcache.set_filter(OpenMediaVaultPluginsFilter())
for pkg in fcache:
if pkg.candidate is None:
continue
data.append({
"name": pkg.name,
"repository": "%s/%s" % (pkg.candidate.origins[0].label,
pkg.candidate.origins[0].archive)
if pkg.candidate.origins is not None
else "",
"package": pkg.candidate.record.get("Package"),
"version": pkg.candidate.version,
"installedsize": pkg.candidate.size,
"maintainer": pkg.candidate.record.get("Maintainer", ""),
"architecture": pkg.candidate.architecture,
"depends": pkg.candidate.record.get("Depends", ""),
"suggests": pkg.candidate.record.get("Suggests", ""),
"conflicts": pkg.candidate.record.get("Conflicts", ""),
"breaks": pkg.candidate.record.get("Breaks", ""),
"abstract": pkg.candidate.summary, # Deprecated
"summary": pkg.candidate.summary,
"description": pkg.candidate.record.get("Description", ""),
"extendeddescription": get_extended_description(
pkg.candidate.raw_description),
"homepage": pkg.candidate.homepage,
"descriptionmd5": pkg.candidate.record.get("Description-md5", ""),
"multiarch": pkg.candidate.record.get("Multi-Arch", ""),
"predepends": pkg.candidate.record.get("Pre-Depends", ""),
"section": pkg.candidate.section,
"pluginsection": pkg.candidate.record.get("Plugin-Section", ""),
"priority": pkg.candidate.priority,
"filename": pkg.candidate.filename,
"size": pkg.candidate.size,
"md5sum": pkg.candidate.md5,
"sha1": pkg.candidate.sha1,
"sha256": pkg.candidate.sha256,
"installed": pkg.is_installed
})
with open(omv.getenv('OMV_APT_PLUGINS_INDEX_FILE',
'/var/lib/openmediavault/apt/pluginsindex.json'), 'w') as outfile:
json.dump(data, outfile, sort_keys=True, indent=4)
sys.exit(0)
|
[
"votdev@gmx.de"
] |
votdev@gmx.de
|
|
fb3357f74e50b16b2e9ff2a5c707a1cd76f60390
|
d9c95cd0efad0788bf17672f6a4ec3b29cfd2e86
|
/disturbance/migrations/0102_sitetransferapiarysite_customer_selected.py
|
244cec5a7829e05fb0fa9710029b020299d7fdb4
|
[
"Apache-2.0"
] |
permissive
|
Djandwich/disturbance
|
cb1d25701b23414cd91e3ac5b0207618cd03a7e5
|
b1ba1404b9ca7c941891ea42c00b9ff9bcc41237
|
refs/heads/master
| 2023-05-05T19:52:36.124923
| 2021-06-03T06:37:53
| 2021-06-03T06:37:53
| 259,816,629
| 1
| 1
|
NOASSERTION
| 2021-06-03T09:46:46
| 2020-04-29T03:39:33
|
Python
|
UTF-8
|
Python
| false
| false
| 483
|
py
|
# -*- coding: utf-8 -*-
# Generated by Django 1.10.8 on 2020-07-08 02:21
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('disturbance', '0101_auto_20200708_1021'),
]
operations = [
migrations.AddField(
model_name='sitetransferapiarysite',
name='customer_selected',
field=models.BooleanField(default=False),
),
]
|
[
"brendan.blackford@dbca.wa.gov.au"
] |
brendan.blackford@dbca.wa.gov.au
|
bb2be937acf8912948c17324039d31f3142c2686
|
10fbe5526e5f0b8588b65f70f088cd86b6e9afbe
|
/rqwywo/migrations/0007_mahcgzunnb.py
|
b7a865d124c035fd8b4fe42e8e9ea49a6e5daaa5
|
[] |
no_license
|
MarkusH/django-migrations-benchmark
|
eb4b2312bb30a5a5d2abf25e95eca8f714162056
|
e2bd24755389668b34b87d254ec8ac63725dc56e
|
refs/heads/master
| 2016-09-05T15:36:45.250134
| 2015-03-31T23:44:28
| 2015-03-31T23:44:28
| 31,168,231
| 3
| 1
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 619
|
py
|
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('digmcd', '0014_auto_20150218_1625'),
('rqwywo', '0006_auto_20150218_1623'),
]
operations = [
migrations.CreateModel(
name='Mahcgzunnb',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('jytvvd', models.ForeignKey(null=True, related_name='+', to='digmcd.Untgafvod')),
],
),
]
|
[
"info@markusholtermann.eu"
] |
info@markusholtermann.eu
|
05a061fd125b2b45d56a164419cb1734443f7bcd
|
21b4585de4a0eacdb0d1e488dfae53684bb6564e
|
/111. Minimum Depth of Binary Tree.py
|
8df278cffa42331ca4af1a0e9c38c21b64172b2d
|
[] |
no_license
|
gaosq0604/LeetCode-in-Python
|
de8d0cec3ba349d6a6462f66286fb3ddda970bae
|
57ec95779a4109008dbd32e325cb407fcbfe5a52
|
refs/heads/master
| 2021-09-15T23:14:14.565865
| 2018-06-12T16:30:40
| 2018-06-12T16:30:40
| 113,881,474
| 1
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 425
|
py
|
# Definition for a binary tree node.
# class TreeNode:
# def __init__(self, x):
# self.val = x
# self.left = None
# self.right = None
class Solution:
def minDepth(self, root):
"""
:type root: TreeNode
:rtype: int
"""
if not root:
return 0
        # If one subtree is empty, its depth is 0 (falsy), so `min(d) or max(d)`
        # falls through to the depth of the non-empty subtree.
        d = self.minDepth(root.left), self.minDepth(root.right)
        return 1 + (min(d) or max(d))
|
[
"gaosq0604@gmail.com"
] |
gaosq0604@gmail.com
|
6476a4bce704d3c37f77b64321d4174453e2cdc5
|
5d069aa71e5cd242b4f1f29541dc85107822cbda
|
/mc_states/modules/mc_www.py
|
319be1cb2d7aa0ca54f2980cd05641837f59532c
|
[
"BSD-3-Clause"
] |
permissive
|
h4ck3rm1k3/makina-states
|
f22a5e3a0dde9d545108b4c14279451198682370
|
3f2dbe44867f286b5dea81b9752fc8ee332f3929
|
refs/heads/master
| 2020-02-26T13:17:09.895814
| 2015-01-12T21:45:02
| 2015-01-12T21:45:02
| 29,172,493
| 0
| 0
| null | 2015-01-13T04:23:09
| 2015-01-13T04:23:07
| null |
UTF-8
|
Python
| false
| false
| 1,230
|
py
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
'''
.. _module_mc_www:
mc_www / www registry
============================================
If you alter this module and want to test it, do not forget
to deploy it on minion using::
salt '*' saltutil.sync_modules
Documentation of this module is available with::
salt '*' sys.doc mc_www
'''
# Import python libs
import logging
import mc_states.utils
__name = 'www'
log = logging.getLogger(__name__)
def settings():
'''
www registry
fastcgi_socket_directory
fastcgi socket directory
'''
@mc_states.utils.lazy_subregistry_get(__salt__, __name)
def _settings():
grains = __grains__
pillar = __pillar__
locations = __salt__['mc_locations.settings']()
wwwData = __salt__['mc_utils.defaults'](
'makina-states.services.http.www', {
'doc_root': '/var/www/default',
'serveradmin_mail': 'webmaster@localhost',
'socket_directory': '/var/spool/www',
'upload_max_filesize': '25000000M',
})
return wwwData
return _settings()
def dump():
return mc_states.utils.dump(__salt__,__name)
# vim:set et sts=4 ts=4 tw=80:
|
[
"kiorky@cryptelium.net"
] |
kiorky@cryptelium.net
|
35df2d756e2076153f711a103a0783d15316232b
|
ce29884aa23fbb74a779145046d3441c619b6a3c
|
/all/217.py
|
280394e1b4abf3d78e10998357c12d53a005a8c9
|
[] |
no_license
|
gebijiaxiaowang/leetcode
|
6a4f1e3f5f25cc78a5880af52d62373f39a546e7
|
38eec6f07fdc16658372490cd8c68dcb3d88a77f
|
refs/heads/master
| 2023-04-21T06:16:37.353787
| 2021-05-11T12:41:21
| 2021-05-11T12:41:21
| null | 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 366
|
py
|
#!/usr/bin/python3.7
# -*- coding: utf-8 -*-
# @Time : 2020/12/1 22:48
# @Author : dly
# @File : 217.py
# @Desc :
class Solution(object):
def containsDuplicate(self, nums):
"""
:type nums: List[int]
:rtype: bool
"""
        # A list contains a duplicate iff converting it to a set shrinks it.
        return len(nums) != len(set(nums))
|
[
"1083404373@qq.com"
] |
1083404373@qq.com
|
a7e95404a37e153e6cdf62771caf3112de8d8ed3
|
08a3833ec97e33c4a40bf1d1aa403da7836e0df0
|
/demo/urls.py
|
d4a707b0083f84bce11751e79741ea699a02c9cf
|
[] |
no_license
|
srikanthpragada/PYTHON_02_AUG_2018_WEBDEMO
|
ae667e154abcc05acaaf0d18d45be4ebc995c6cc
|
e34019102f5c8159beef35a2e3665028aea509ce
|
refs/heads/master
| 2020-03-28T06:10:43.303938
| 2018-09-27T12:33:35
| 2018-09-27T12:33:35
| 147,818,803
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 1,225
|
py
|
from django.contrib import admin
from django.urls import path
from . import views
from . import dept_views
from . import book_views
from . import ajax_views
from . import rest_views
from . import class_views
urlpatterns = [
path('product/', views.show_product),
path('products/', views.show_products_list),
path('netprice/', views.net_price_calculator),
path('netprice2/', views.net_price),
path('netprice3/', views.net_price_with_form),
path('add_dept/', dept_views.add_dept),
path('list_depts/', dept_views.list_depts),
path('book_home/', book_views.book_home),
path('add_book/', book_views.add_book),
path('list_books/', book_views.list_books),
path('search_books/', book_views.search_books),
path('ajax/', ajax_views.ajax_demo),
path('ajax_now/', ajax_views.now),
path('ajax_get_title/', ajax_views.get_title),
path('ajax_get_book/', ajax_views.get_book),
path('list_langs/', views.list_langs),
path('about/', class_views.AboutView.as_view()),
path('books/', class_views.BooksList.as_view()),
path('api/books/', rest_views.list_books),
path('api/books/<int:bookid>', rest_views.process_book),
path('rest_client/', rest_views.client),
]
|
[
"srikanthpragada@gmail.com"
] |
srikanthpragada@gmail.com
|
30675a30aad98c9bf2aec0cb940be25c5c0449d1
|
2cde0b63638ac8bad6a2470a4ec1cbbdae8f7d39
|
/percentilesandquartiles17.py
|
9066c301ed8954ed980881cb9437211cbe66d357
|
[] |
no_license
|
PhuongThuong/baitap
|
cd17b2f02afdbebfe00d0c33c67791ade4fc61ff
|
012c609aa99ba587c492b8e6096e15374dc905f2
|
refs/heads/master
| 2023-08-22T21:39:57.527349
| 2021-10-09T10:54:48
| 2021-10-09T10:54:48
| 415,281,667
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 775
|
py
|
import numpy as np
scores = np.array([8, 6, 4, 3, 9, 4, 7, 4, 4, 9, 7, 3, 9, 4, 2, 3, 8, 5, 9, 6])
print("Bách phân vị thứ 70: ", np.percentile(scores, 70, interpolation='lower'))
print("Bách phân vị thứ 70: ", np.percentile(scores, 70, interpolation='higher'))
print("Bách phân vị thứ 70: ", np.percentile(scores, 70, interpolation='linear'))
print("Bách phân vị thứ 70: ", np.percentile(scores, 70, interpolation='nearest'))
print("Bách phân vị thứ 70: ", np.percentile(scores, 70, interpolation='midpoint'))
print("Bách phân vị thứ 50: ", np.percentile(scores, 50))
print("Median = ", np.median(scores))
print("Q1 = : ", np.quantile(scores, 0.25))
print("Q2 = : ", np.quantile(scores, 0.5))
print("Q3 = : ", np.quantile(scores, 0.75))
|
[
"you@example.com"
] |
you@example.com
|
dcaa5f0a9632efd7fbc43f1cdf8d739b0161c222
|
1d2bbeda56f8fede69cd9ebde6f5f2b8a50d4a41
|
/hard/python3/c0006_37_sudoku-solver/00_leetcode_0006.py
|
b12724d00299702d5e3f0bb08ed2233d0c6d66e9
|
[] |
no_license
|
drunkwater/leetcode
|
38b8e477eade68250d0bc8b2317542aa62431e03
|
8cc4a07763e71efbaedb523015f0c1eff2927f60
|
refs/heads/master
| 2020-04-06T07:09:43.798498
| 2018-06-20T02:06:40
| 2018-06-20T02:06:40
| 127,843,545
| 0
| 2
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 1,049
|
py
|
# DRUNKWATER TEMPLATE(add description and prototypes)
# Question Title and Description on leetcode.com
# Function Declaration and Function Prototypes on leetcode.com
#37. Sudoku Solver
#Write a program to solve a Sudoku puzzle by filling the empty cells.
#A sudoku solution must satisfy all of the following rules:
#Each of the digits 1-9 must occur exactly once in each row.
#Each of the digits 1-9 must occur exactly once in each column.
#Each of the digits 1-9 must occur exactly once in each of the 9 3x3 sub-boxes of the grid.
#Empty cells are indicated by the character '.'.
#A sudoku puzzle...
#...and its solution numbers marked in red.
#Note:
#The given board contain only digits 1-9 and the character '.'.
#You may assume that the given Sudoku puzzle will have a single unique solution.
#The given board size is always 9x9.
#class Solution:
# def solveSudoku(self, board):
# """
# :type board: List[List[str]]
# :rtype: void Do not return anything, modify board in-place instead.
# """
# Time Is Money
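# A minimal backtracking sketch for the stub above (one standard approach,
# not necessarily the intended reference solution):
def solve_sudoku(board):
    def ok(r, c, ch):
        for i in range(9):
            if board[r][i] == ch or board[i][c] == ch:
                return False
            if board[3 * (r // 3) + i // 3][3 * (c // 3) + i % 3] == ch:
                return False
        return True
    for r in range(9):
        for c in range(9):
            if board[r][c] == '.':
                for ch in '123456789':
                    if ok(r, c, ch):
                        board[r][c] = ch
                        if solve_sudoku(board):
                            return True
                        board[r][c] = '.'  # undo and try the next digit
                return False  # no digit fits here: backtrack
    return True  # no empty cell left: board is solved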
|
[
"Church.Zhong@audiocodes.com"
] |
Church.Zhong@audiocodes.com
|
16a31b69d896402cb06e319c17af7fcaddfc80e1
|
fd56bcee0326b529eaf66462827ec0bf790c3335
|
/days33/非阻塞模型/服务端.py
|
4c09ecd799b80ff203bd598805aa7ed69eadab80
|
[] |
no_license
|
95Li/oldboy-learning-code
|
d1ff638e7d9db27b03afde1ff99844f3de2d9f46
|
dbf97bd5bfb906bce4e92cc378aef277e6ebd443
|
refs/heads/master
| 2020-03-21T15:59:31.604549
| 2018-06-27T14:32:12
| 2018-06-27T14:32:12
| 138,744,611
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 1,104
|
py
|
from socket import *
import time
s=socket()
s.bind(('127.0.0.1', 8080))  # 172.0.0.1 was a typo for the loopback address
s.listen(5)
s.setblocking(False)
r_list=[]
w_list=[]
while True:
try:
conn,addr=s.accept()
r_list.append(conn)
except BlockingIOError:
        print('no pending connection, doing other work')  # was 'ganhuo', pinyin for "doing work"
print('rlist',len(r_list))
del_rlist=[]
for conn in r_list:
            try:
                data = conn.recv(1024)
                w_list.append((conn, data.upper()))
            except BlockingIOError:
                continue
            except ConnectionResetError:
                del_rlist.append(conn)
del_wlist=[]
for item in w_list:
            try:
                conn = item[0]
                data = item[1]
                conn.send(data)
                del_wlist.append(item)
            except BlockingIOError:
                continue
            except ConnectionResetError:
                del_wlist.append(item)
for conn in del_rlist:
r_list.remove(conn)
for item in del_wlist:
w_list.remove(item)
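# A minimal blocking client to exercise this server (run it in a separate
# process; an assumed usage sketch, not part of the original lesson):
#
#   from socket import socket
#   c = socket()
#   c.connect(('127.0.0.1', 8080))
#   c.send(b'hello')
#   print(c.recv(1024))  # expect b'HELLO'
#   c.close()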
|
[
"861083372@qq.com"
] |
861083372@qq.com
|
f99afe6266414414ff87f689507fa29216429229
|
8ca6f67883355bd9678a3b8a26ec7fe7b464f5c1
|
/project/deluxenation/urls.py
|
a6b9c3c3b09a2dcc9fba1dedda77b6389c95600e
|
[] |
no_license
|
mmahnken/deluxenation
|
868f971f1d2d73052e5f8a3444a7751141710c02
|
35f94a053db378dc4835868766e69c836f58eb1c
|
refs/heads/master
| 2020-09-21T20:52:56.268009
| 2017-02-06T05:21:42
| 2017-02-06T05:21:42
| 66,738,373
| 1
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 1,345
|
py
|
"""project URL Configuration
The `urlpatterns` list routes URLs to views. For more information please see:
https://docs.djangoproject.com/en/1.10/topics/http/urls/
Examples:
Function views
1. Add an import: from my_app import views
2. Add a URL to urlpatterns: url(r'^$', views.home, name='home')
Class-based views
1. Add an import: from other_app.views import Home
2. Add a URL to urlpatterns: url(r'^$', Home.as_view(), name='home')
Including another URLconf
1. Import the include() function: from django.conf.urls import url, include
2. Add a URL to urlpatterns: url(r'^blog/', include('blog.urls'))
"""
from django.conf.urls import url, include
from django.contrib import admin
from django.contrib.admin.views.decorators import staff_member_required
from django.conf.urls.static import static
import drawings.urls
import drawings.views
import settings
urlpatterns = [
url(r'^admin/nb/$', staff_member_required(drawings.views.NotebookCreateView.as_view(template_name='drawings/notebook_create.html'))),
url(r'^admin/bulk-add/drawings/$', staff_member_required(drawings.views.BulkDrawingCreateView.as_view(template_name='drawings/bulk-add.html'))),
url(r'^admin/', admin.site.urls),
url(r'^', include(drawings.urls)),
] + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
|
[
"mmm25eg@gmail.com"
] |
mmm25eg@gmail.com
|
f78c278d29399237b8c709646ac66aac72e5c105
|
ba54b70f93fe7f9d114623d76b1ad3f88309d66f
|
/paysto/templatetags/paysto_tags.py
|
9b92d8d51facb7986b806436b724bb611b2dd303
|
[] |
no_license
|
loobinsk/newprj
|
9769b2f26092ce7dd8612fce37adebb307b01b8b
|
c6aa6a46973fb46375f4b05a86fe76207a8ae16d
|
refs/heads/master
| 2023-05-07T00:28:44.242163
| 2021-05-25T08:22:05
| 2021-05-25T08:22:05
| 370,617,690
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 1,329
|
py
|
#-*- coding: utf-8 -*-
from django import template
from paysto import PAYSTO_ONLINE_MERCHANT_NOCART_URL
from paysto.forms import PaymentForm
from django.conf import settings
from paysto.models import BasePayment
from django.utils.safestring import mark_safe
register = template.Library()
@register.inclusion_tag('paysto/payment-form.html', takes_context=True)
def payment_form(context, payment):
form = PaymentForm(data={
'PAYSTO_SHOP_ID': settings.PAYSTO_SHOP_ID,
'PAYSTO_SUM': payment.total,
'PAYSTO_INVOICE_ID': payment.id,
'PAYSTO_DESC': payment.description,
'PayerEmail': payment.user.email
})
return {
'form': form,
'action': PAYSTO_ONLINE_MERCHANT_NOCART_URL
}
@register.filter
def payment_status(status):
if status == BasePayment.STATUS_CONFIRMED:
return mark_safe('<span class="text-success">%s</span>' % BasePayment.STATUSES[status])
if status == BasePayment.STATUS_WAITING:
return mark_safe('<span class="text-muted">%s</span>' % BasePayment.STATUSES[status])
if status == BasePayment.STATUS_ERROR:
return mark_safe('<span class="text-danger">%s</span>' % BasePayment.STATUSES[status])
    else:
        try:
            return mark_safe(BasePayment.STATUSES[status])
        except KeyError:  # unknown status code
            return ""
|
[
"root@bazavashdom.ru"
] |
root@bazavashdom.ru
|
450d68a0e2c390a31c2a557f70788062ceb572ce
|
048d13616971fdaf17947bf1f3b1840a30a5f6c8
|
/apps/team/migrations/0002_auto_20150904_1835.py
|
0b28c5bb3ba3fe9fb3786db434dec753d3076860
|
[] |
no_license
|
ImgBotApp/manutd.org.np
|
449955a89959d0781f181e71ec8be5c1c8ae3c42
|
18a53f7f4c4d4e0b47a22e5b5cb99069fd257d00
|
refs/heads/master
| 2021-06-24T06:58:53.269028
| 2017-09-09T15:22:25
| 2017-09-09T15:22:25
| 103,154,757
| 2
| 0
| null | 2017-09-12T04:46:16
| 2017-09-11T15:40:14
|
JavaScript
|
UTF-8
|
Python
| false
| false
| 1,586
|
py
|
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
from django.conf import settings
class Migration(migrations.Migration):
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
('team', '0001_initial'),
]
operations = [
migrations.RemoveField(
model_name='player',
name='address',
),
migrations.RemoveField(
model_name='player',
name='date_of_birth',
),
migrations.RemoveField(
model_name='player',
name='name',
),
migrations.RemoveField(
model_name='player',
name='phone',
),
migrations.RemoveField(
model_name='staff',
name='address',
),
migrations.RemoveField(
model_name='staff',
name='date_of_birth',
),
migrations.RemoveField(
model_name='staff',
name='name',
),
migrations.RemoveField(
model_name='staff',
name='phone',
),
migrations.AddField(
model_name='player',
name='user',
field=models.ForeignKey(default=1, to=settings.AUTH_USER_MODEL),
preserve_default=False,
),
migrations.AddField(
model_name='staff',
name='user',
field=models.ForeignKey(default=1, to=settings.AUTH_USER_MODEL),
preserve_default=False,
),
]
|
[
"xtranophilist@gmail.com"
] |
xtranophilist@gmail.com
|
b63539f52ac4f3863fe9cef412753b80adfc6b28
|
f9a8aecd848bcc79052ca068cc73850a63e6dfcf
|
/data/datasets/resampling/adaptive/importance_sampler.py
|
4112726c0616b4e253a6b9843e212d7b614c2d40
|
[
"MIT"
] |
permissive
|
khoehlein/fV-SRN-Ensemble-Compression
|
537981a1cd31565bb504b00ca730e8bf87e7e0ef
|
2780b83d2594c1b38b57ab58087b46bee4b61e8b
|
refs/heads/master
| 2023-04-17T09:42:48.037397
| 2022-09-07T08:55:01
| 2022-09-07T08:55:01
| 532,983,107
| 4
| 1
| null | 2022-09-06T14:39:26
| 2022-09-05T16:43:24
|
Python
|
UTF-8
|
Python
| false
| false
| 7,115
|
py
|
import argparse
from typing import Optional
import torch
from matplotlib import pyplot as plt
from common.mathparser import BigInteger
from inference import IFieldEvaluator
from data.datasets.resampling.coordinate_box import CoordinateBox, UnitCube
from data.datasets.resampling.adaptive.density_tree_sampler import FastDensityTreeSampler
from data.datasets.resampling.adaptive.density_tree import FastDensityTree
from data.datasets.resampling.adaptive.statistical_tests import (
FastKolmogorovSmirnovTestNd,
FastWhiteHomoscedasticityTest,
)
from data.datasets.resampling import IImportanceSampler
from data.datasets.sampling import ISampler, RandomUniformSampler
class DensityTreeImportanceSampler(IImportanceSampler):
@staticmethod
def init_parser(parser: argparse.ArgumentParser):
group = parser.add_argument_group('DensityTreeImportanceSampler')
prefix = '--importance-sampler:tree:'
group.add_argument(prefix + 'min-depth', type=int, default=4, help="""
minimum tree depth for adaptive loss grid
""")
group.add_argument(prefix + 'max-depth', type=int, default=12, help="""
maximum tree depth for adaptive loss grid
""")
group.add_argument(prefix + 'num-samples-per-node', type=int, default=128, help="""
number of samples per node for loss tree refinement
""")
group.add_argument(prefix + 'alpha', type=float, default=0.05, help="""
significance threshold for splitting decision
""")
group.add_argument(prefix + 'batch-size', type=BigInteger, default=None, help="""
batch size for loss evaluation during importance sampling (Default: Dataset batch size)
""")
group.add_argument(prefix + 'min-density', type=float, default=0.01, help="""
minimum probability density for sampling per grid box
""")
group.add_argument(prefix + 'max-ratio', type=float, default=10, help="""
maximum ratio of probability densities during node splitting
""")
# group.add_argument(prefix + 'seed', type=int, default=42, help="""
# seed for importance sampling random number generator
# """)
@staticmethod
def read_args(args: dict):
prefix = 'importance_sampler:tree:'
return {
key: args[prefix + key]
for key in ['min_depth', 'max_depth', 'num_samples_per_node', 'batch_size',
'min_density', 'max_ratio', 'alpha']
}
@classmethod
def from_dict(cls, args, dimension=None, device=None):
sampler_kws = DensityTreeImportanceSampler.read_args(args)
return DensityTreeImportanceSampler(**sampler_kws, dimension=dimension, device=device)
def __init__(
self,
sampler: Optional[ISampler] = None, dimension:Optional[int] = None, batch_size=None,
min_depth=4, max_depth=8, num_samples_per_node=128, min_density=0.01, max_ratio=10,
alpha=0.05, root_box: Optional[CoordinateBox] = None, device=None, dtype=None, seed=None
):
if dimension is None and sampler is not None:
dimension = sampler.dimension
if dimension is None and root_box is not None:
dimension = root_box.dimension
assert dimension is not None
if sampler is not None:
assert dimension == sampler.dimension
if device is not None:
assert device == sampler.device
if dtype is not None:
assert dtype == sampler.dtype
device = sampler.device
dtype = sampler.dtype
else:
assert device is not None
sampler = RandomUniformSampler(dimension, device=device, dtype=dtype)
if root_box is not None:
assert dimension == root_box.dimension
assert device == root_box.device
else:
root_box = UnitCube(dimension, device=device, dtype=dtype)
super(DensityTreeImportanceSampler, self).__init__(dimension, root_box, device)
self.dimension = dimension
self.sampler = sampler
self.root_box = root_box
self.min_depth = min_depth
self.max_depth = max_depth
self.num_samples_per_node = num_samples_per_node
self.min_density = min_density
self.max_ratio = max_ratio
self.alpha = alpha
assert batch_size is not None
self.batch_size = int(batch_size)
self.dtype = dtype
self.device = device
def generate_samples(self, num_samples: int, evaluator: IFieldEvaluator, **kwargs):
with torch.no_grad():
difference_test = FastKolmogorovSmirnovTestNd(alpha=self.alpha)
homoscedasticity_test = FastWhiteHomoscedasticityTest(alpha=self.alpha)
tree = FastDensityTree.from_scalar_field(
self.root_box, self.sampler, evaluator, difference_test, homoscedasticity_test=homoscedasticity_test,
min_depth=self.min_depth,max_depth=self.max_depth, num_samples_per_node=self.num_samples_per_node,
store_sample_summary=True, num_samples_per_batch=self.batch_size,
device=self.device
)
tree.add_summaries()
sampler = FastDensityTreeSampler(
self.sampler, tree,
min_density=self.min_density, max_ratio=self.max_ratio
)
samples, weights = sampler.generate_samples(num_samples)
if samples.device != self.device:
samples = samples.to(self.device)
weights = weights.to(self.device)
perm = torch.randperm(num_samples, device=torch.device('cpu'))
samples = samples[perm]
weights = weights[perm]
return samples, weights
def _test_sampler():
import torch
from torch import Tensor
class Evaluator(IFieldEvaluator):
def __init__(self, dimension, device=None):
super(Evaluator, self).__init__(dimension, 1, device)
            self.direction = 4 * torch.tensor([1] * dimension, device=device)[None, :]  # torch.randn(1, dimension, device=device)
            self.offset = torch.tensor([0.5] * dimension, device=device)[None, :]  # torch.randn(1, dimension, device=device)
def forward(self, positions: Tensor) -> Tensor:
return torch.sum(self.direction * (positions - self.offset), dim=-1) ** 2
device = torch.device('cuda:0')
evaluator = Evaluator(3, device=device)
sampler = DensityTreeImportanceSampler(
dimension=3, device=device, batch_size=64**3,
max_ratio=3,
min_density=0.1
)
for i in range(20):
samples = sampler.generate_samples(4000, evaluator)
c = evaluator.evaluate(samples)
samples = samples.data.cpu().numpy()
c = c[:, 0].data.cpu().numpy()
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.scatter(samples[:, 0], samples[:, 1], samples[:, 2], c=c)
plt.show()
plt.close()
if __name__ == '__main__':
_test_sampler()
|
[
"kevin.hoehlein@tum.de"
] |
kevin.hoehlein@tum.de
|
642167e640746fc13d9a13edf17d6d566e7db7ae
|
bd4270073cbdbafb8471bbdfe9a7fcfc75836384
|
/gen_yield_for.py
|
29f87e20a29a3bba8bb32624604fd0c06547e069
|
[] |
no_license
|
Hemanthtm2/learning.python
|
03c5fccb0ca205dba3edd418cbf2fecad34aa532
|
c93d30fcb4982f27153a3f649d2bdc4d50c56b1b
|
refs/heads/master
| 2020-04-08T14:19:35.508613
| 2018-10-10T17:17:40
| 2018-10-10T17:17:40
| 159,431,984
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 144
|
py
|
#!/usr/bin/python
def print_squares(start,end):
for x in range(start,end):
yield x**2
for n in print_squares(2,5):
print(n)
|
[
"hemanthtm2@gmail.com"
] |
hemanthtm2@gmail.com
|
1c7f5661d1a446f130607f085ddd3c15eec9fba3
|
c9ddbdb5678ba6e1c5c7e64adf2802ca16df778c
|
/cases/pa3/sample/str_cat-151.py
|
a0cd8b6099dc04249949a943feb61cd8816d1ab4
|
[] |
no_license
|
Virtlink/ccbench-chocopy
|
c3f7f6af6349aff6503196f727ef89f210a1eac8
|
c7efae43bf32696ee2b2ee781bdfe4f7730dec3f
|
refs/heads/main
| 2023-04-07T15:07:12.464038
| 2022-02-03T15:42:39
| 2022-02-03T15:42:39
| 451,969,776
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 279
|
py
|
a:str = "Hello"
b:str = "World"
c:str = "ChocoPy"
def cat2(a:str, b:str) -> str:
return a + b
def cat3(a:str, b:str, c:str) -> str:
return a + b + c
print(cat2(a, b))
print(cat2("", c))
print(cat3(a, " ", c))
$ID(len(a))
print(len(cat2(a,a)))
print(len(cat2("","")))
|
[
"647530+Virtlink@users.noreply.github.com"
] |
647530+Virtlink@users.noreply.github.com
|
988555a631d55ed2a9abdd2e1a296ccacb0d9a19
|
90f46ef9cd25460a05f10a4fb440c396debdb518
|
/address_old/app/config.py
|
b45dabe418e678e9aaad311863de1cb8db82d09b
|
[] |
no_license
|
khanh-trieu/folder-erp-lap-tsg
|
addf543c1ed2b27beb0fe9508a4db835e258dc15
|
151335a5f9364d66a456f174e2b621a2d8a9727e
|
refs/heads/master
| 2023-04-19T13:49:28.134422
| 2021-05-20T02:46:32
| 2021-05-20T02:46:32
| 369,057,071
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 269
|
py
|
API_GET_PROVINCE = 'https://api.mysupership.vn/v1/partner/areas/province'
API_GET_DISTRICT = 'https://api.mysupership.vn/v1/partner/areas/district?province='
API_GET_WARD = 'https://api.mysupership.vn/v1/partner/areas/commune?district='
MSG_SUCCESS = 'Thành công!'
|
[
"Tsg@1234"
] |
Tsg@1234
|
b6d8f27ccd78ce668ead8011086bf2b955637496
|
67932bfe656c093cc306d4c9bca682140d21a470
|
/loopsByRecursion.py
|
06f6799b5a153b5acb0049b7e7e2968688365091
|
[] |
no_license
|
dases/recursion_examples
|
bb8769c0f64ef8e612660547c00693ede725f72f
|
3733045157d0daf5016e0213eeace8f968203311
|
refs/heads/master
| 2023-07-16T05:46:51.521638
| 2021-09-01T21:27:34
| 2021-09-01T21:27:34
| null | 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 252
|
py
|
def theLoop(i):
if i < 4: # Stops when false.
# RECURSIVE CASE
# The main code:
print(i, 'Hello!')
theLoop(i + 1) # Increment i.
return
else:
# BASE CASE
return
theLoop(0) # Start i at 0.
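# For comparison, the equivalent plain loop (added illustration):
#
#   for i in range(0, 4):
#       print(i, 'Hello!')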
|
[
"asweigart@gmail.com"
] |
asweigart@gmail.com
|
83bf67157c94ca91dcfad4370e48441432eeff06
|
3de2a746243ad1cb000994a06a0f9699db9a901f
|
/abc083b.py
|
25f54f5c86bcc9984b5f3c4dbc00bddd018005eb
|
[] |
no_license
|
takumi152/atcoder
|
71d726ffdf2542d8abac0d9817afaff911db7c6c
|
ebac94f1227974aa2e6bf372e18605518de46441
|
refs/heads/master
| 2022-10-30T12:14:41.742596
| 2022-09-29T19:49:32
| 2022-09-29T19:49:32
| 181,502,518
| 1
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 386
|
py
|
def main():
buf = input()
buflist = buf.split()
N = int(buflist[0])
A = int(buflist[1])
B = int(buflist[2])
sum = 0
for i in range(1, N + 1):
count = 0
for d in [int(d) for d in str(i)]:
count += d
if count >= A and count <= B:
sum += i
print(sum)
if __name__ == '__main__':
main()
|
[
"takumi152@hotmail.com"
] |
takumi152@hotmail.com
|
0442dccb71f8076d97bd44918f3116cd9633224c
|
75fe89a5ca7ceb91757199c4dde6fd20a69c94b9
|
/pygmsh/circle_arc.py
|
527ec58c5c88de3903e536d467a0c54bbda050b0
|
[
"MIT"
] |
permissive
|
guifon1000/pygmsh
|
39d1c1d3e890f64afa94e35e6da4e0c6967d3373
|
ce6bf8f080c359b1ab81c9d1adee6a81d3419d51
|
refs/heads/master
| 2021-01-22T23:53:42.835887
| 2017-03-20T13:45:53
| 2017-03-20T13:45:53
| null | 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 488
|
py
|
# -*- coding: utf-8 -*-
#
from .line_base import LineBase
from .point import Point
class CircleArc(LineBase):
def __init__(self, points):
super(CircleArc, self).__init__()
for p in points:
assert isinstance(p, Point)
self.points = points
self.code = '\n'.join([
'%s = newl;' % self.id,
'Circle(%s) = {%s, %s, %s};'
% (self.id, points[0].id, points[1].id, points[2].id)
])
return
|
[
"nico.schloemer@gmail.com"
] |
nico.schloemer@gmail.com
|
229da8f1da7b2924a17df7c56df8eef9f31d5ee1
|
bc0d87befa0329c50c2e57ead730050b648e15c6
|
/leaning_logs/models.py
|
48d76ed238355d3f04b39298a65d73a02204d8a6
|
[] |
no_license
|
YGragon/DjangoLearningLogs
|
85a0598849600536cdbf45cfb9444f36dfd31c57
|
e3228aed7aa181d657b9aa58d93e9d37f09d918d
|
refs/heads/master
| 2021-09-10T19:36:52.924500
| 2018-04-01T00:11:13
| 2018-04-01T00:11:13
| null | 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 1,676
|
py
|
from django.db import models
from django.contrib.auth.models import User
import json
from datetime import date, datetime
class Topic(models.Model):
"""用户要学习的主题."""
text = models.CharField(max_length=200)
date_added = models.DateTimeField(auto_now_add=True)
# 外键是用户,可以通过这个外键确定这个主题属于哪个用户
owner = models.ForeignKey(User, "on_delete=models.CASCADE")
def __str__(self):
"""返回模型的字符串表示"""
return self.text
class Entry(models.Model):
"""用户发表的文章"""
# 外键是主题,可以通过这个外键确定这个文章属于哪个主题
topic = models.ForeignKey(Topic, "on_delete=models.CASCADE")
text = models.TextField()
date_added = models.DateTimeField(auto_now_add = True)
class Meta:
verbose_name_plural = 'entries'
def __str__(self):
"""返回模型的字符串表示"""
return self.text[:50] + "..."
class MyEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, datetime):
return obj.strftime('%Y-%m-%d %H:%M:%S')
elif isinstance(obj, date):
return obj.strftime("%Y-%m-%d")
else:
return json.JSONEncoder.default(self, obj)
class TopicEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, Topic):
return obj.text
return json.JSONEncoder.default(self, obj)
class EntryEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, Entry):
return obj.text
else:
return json.JSONEncoder.default(self, obj)
|
[
"1105894953@qq.com"
] |
1105894953@qq.com
|
ca87266c74882e0db39d481e2bf5488941ec7f3e
|
61e4cb5f60541939e122714aa085f2028a7904c5
|
/duckdown/tool/provision/utils.py
|
90d0b5cfc9e95d9eee414dbe564ed3dae2034188
|
[
"MIT"
] |
permissive
|
blueshed/duckdown
|
1a07bf541a5849000d9e8622622cc67a3bec933f
|
e6d0e62d378bd2d9ed0cd5ce4bc7ab3476b86020
|
refs/heads/master
| 2023-01-30T12:18:05.713016
| 2020-12-08T20:38:36
| 2020-12-08T20:38:36
| 314,370,191
| 0
| 0
|
MIT
| 2020-12-08T14:54:11
| 2020-11-19T21:06:47
|
Python
|
UTF-8
|
Python
| false
| false
| 711
|
py
|
""" utils """
import boto3
from pkg_resources import resource_filename
def get_bucket_arn(bucket_name, region="", account_id=""):
""" We need the arn of a queue """
bucket_arn = f"arn:aws:s3:{region}:{account_id}:{bucket_name}"
return bucket_arn
def get_aws_region_account():
""" Return the aws region and account_id """
session = boto3.Session()
region = session.region_name
sts = session.client("sts")
response = sts.get_caller_identity()
return region, response["Account"]
def get_resource(name):
""" return resource content """
path = resource_filename("duckdown.tool.provision", f"resources/{name}")
with open(path) as file:
return file.read()
|
[
"pete@blueshed.co.uk"
] |
pete@blueshed.co.uk
|
7190f1bfb7ef37d25eff124ec59c652fd6d5cdf5
|
ca7aa979e7059467e158830b76673f5b77a0f5a3
|
/Python_codes/p02947/s893966605.py
|
67b744e4b961123394465b52cb99e7a525a91c56
|
[] |
no_license
|
Aasthaengg/IBMdataset
|
7abb6cbcc4fb03ef5ca68ac64ba460c4a64f8901
|
f33f1c5c3b16d0ea8d1f5a7d479ad288bb3f48d8
|
refs/heads/main
| 2023-04-22T10:22:44.763102
| 2021-05-13T17:27:22
| 2021-05-13T17:27:22
| 367,112,348
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 667
|
py
|
from collections import defaultdict
from collections import deque
from collections import Counter
import itertools
import math
def readInt():
return int(input())
def readInts():
return list(map(int, input().split()))
def readChar():
return input()
def readChars():
return input().split()
def comb(n,r):
return math.factorial(n)//math.factorial(r)//math.factorial(max(1,n-r))
def con(t):
ans = ""
for i in t:
ans+=i[0]+str(i[1])
return ans
n = readInt()
d = defaultdict(int)
for i in [input() for i in range(n)]:
t = defaultdict(int)
for j in i:
t[j]+=1
d[con(sorted(t.items()))]+=1
ans = 0
for i in d:
ans+=comb(d[i],2)
print(ans)
|
[
"66529651+Aastha2104@users.noreply.github.com"
] |
66529651+Aastha2104@users.noreply.github.com
|
6671affb60476364415d4f792368ccdf01def877
|
0a949a1774607543de2dd5edf3a7efbbed3744c5
|
/Day 25 power of 2.py
|
498bb09544dee2f1fa49029e65f9e1572b050b82
|
[] |
no_license
|
AprajitaChhawi/365DaysOfCode.MARCH
|
ac46631665e508372b3ad6c9d57c89906f657f3d
|
dc1f70339491eb3ee194c62e1ded28695097d166
|
refs/heads/main
| 2023-03-28T12:34:56.499942
| 2021-04-01T17:45:22
| 2021-04-01T17:45:22
| 343,509,704
| 1
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 697
|
py
|
#User function Template for python3
class Solution:
##Complete this function
# Function to check if given number n is a power of two.
    def isPowerofTwo(self, n):
        # n & (n - 1) clears the lowest set bit; a power of two has exactly
        # one set bit, so the loop body runs exactly once for such n.
        co = 0
        while n > 0:
            n = n & (n - 1)
            co = co + 1
        if co == 1:
            return 1
        return 0
##Your code here
#{
# Driver Code Starts
#Initial Template for Python 3
import math
def main():
T=int(input())
while(T>0):
n=int(input())
ob=Solution()
if ob.isPowerofTwo(n):
print("YES")
else:
print("NO")
T-=1
if __name__=="__main__":
main()
# } Driver Code Ends
|
[
"achhawip@gmail.com"
] |
achhawip@gmail.com
|
d3fecc8b89dd54cd773450d6b1f0371b7586260a
|
de24f83a5e3768a2638ebcf13cbe717e75740168
|
/moodledata/vpl_data/55/usersdata/67/23126/submittedfiles/av2_p3_civil.py
|
021be36901210757e4d1cd358ae7703c7b0c0ed6
|
[] |
no_license
|
rafaelperazzo/programacao-web
|
95643423a35c44613b0f64bed05bd34780fe2436
|
170dd5440afb9ee68a973f3de13a99aa4c735d79
|
refs/heads/master
| 2021-01-12T14:06:25.773146
| 2017-12-22T16:05:45
| 2017-12-22T16:05:45
| 69,566,344
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 379
|
py
|
# -*- coding: utf-8 -*-
from __future__ import division
import numpy as np
n=input("Digite a dimensão da matriz:")
pdp=input("Digite a posição a qual deseja pesar o valor da peça:")
linha=n
coluna=n
a=np.zeros((linha,coluna))
for i in range (0,a.shape[0],1):
for j in range (0,a.shape[1],1):
a[i,j]=input("Digite um elemento para a matriz:")
print a
|
[
"rafael.mota@ufca.edu.br"
] |
rafael.mota@ufca.edu.br
|
3edb99208215975625109feab3b281a6d20663b2
|
c2fef17ef7644316da7f1f1d3819f67a4cdb21cb
|
/python/L1TCSCTF_cfi.py
|
444847f551d3ae500d20283e9f2b8119607e6a63
|
[] |
no_license
|
rjwang/CSCTFDQM
|
cde5e937c3029a8309c189b20ad8253ff4e16df6
|
3dc419fad4f2cbf1b0cd1d4fb172a175f26d7025
|
refs/heads/master
| 2016-09-15T13:59:42.321400
| 2014-11-17T12:15:38
| 2014-11-17T12:15:38
| null | 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 578
|
py
|
import FWCore.ParameterSet.Config as cms
l1tCsctf = cms.EDAnalyzer("L1TCSCTF",
#gmtProducer = cms.InputTag("gtDigis"),
gmtProducer = cms.InputTag("null"),
statusProducer = cms.InputTag("csctfDigis"),
outputFile = cms.untracked.string(''),
lctProducer = cms.InputTag("csctfDigis"),
verbose = cms.untracked.bool(False),
gangedME11a = cms.untracked.bool(True),
trackProducer = cms.InputTag("csctfDigis"),
mbProducer = cms.InputTag("csctfDigis:DT"),
DQMStore = cms.untracked.bool(True),
disableROOToutput = cms.untracked.bool(True)
)
|
[
"r.jiewang@gmail.com"
] |
r.jiewang@gmail.com
|
a4479a3d1bc8e0e338b8104a625f384ee5214c0c
|
47c39800fa6f928e0d13f26727ba52bda2aa6ff0
|
/One/migrations/0013_knowledgemediastore_m_k_ip.py
|
3c2b40b750f7a0ed28e290d0e62cc7022e06c04f
|
[
"MIT"
] |
permissive
|
dddluke/zhihuipingtai
|
952ed5f9a4011cb4fb2765a0571c978af784d708
|
4e46e01440f8c270c05259ac0f38bd56dd04016c
|
refs/heads/master
| 2023-03-09T03:32:47.807760
| 2021-02-26T02:36:10
| 2021-02-26T02:36:10
| 341,816,381
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 424
|
py
|
# Generated by Django 3.1.2 on 2020-10-13 08:48
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('One', '0012_solution_s_user_name'),
]
operations = [
migrations.AddField(
model_name='knowledgemediastore',
name='m_k_ip',
field=models.CharField(max_length=40, null=True),
),
]
|
[
"lukeli0306@gmail.com"
] |
lukeli0306@gmail.com
|
c6c98662e4fa436e5465b70a23c3c113752a5975
|
1b287461e9bb550b96c1e2eff7e6c716ab7d536e
|
/pytorch/neuralNetwork.py
|
5ee68b460804d9842c9aa3027d8dfb849cc9aadf
|
[] |
no_license
|
TQCAI/python-study-notebook
|
64b9c7bfaa6c47ef2d783aa2a36ba914d13b9db2
|
e0a9eee556c3da7441e5be6ee4baf16f1ae075a9
|
refs/heads/master
| 2020-05-18T19:28:27.998087
| 2019-05-04T10:29:10
| 2019-05-04T10:29:10
| 184,609,373
| 1
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 1,820
|
py
|
import torch
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# 1 input image channel, 6 output channels, 5x5 square convolution
# kernel
self.conv1 = nn.Conv2d(1, 6, 5)
self.conv2 = nn.Conv2d(6, 16, 5)
# an affine operation: y = Wx + b
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
# Max pooling over a (2, 2) window
x = F.max_pool2d(F.relu(self.conv1(x)), 2)
# If the size is a square you can only specify a single number
x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        # flatten the feature maps into a single vector per sample
x = x.view(-1, self.num_flat_features(x))
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
def num_flat_features(self, x):
size = x.size()[1:] # all dimensions except the batch dimension
num_features = 1
for s in size:
num_features *= s
return num_features
net = Net()
print(net)
params = list(net.parameters())
print(len(params))
print(type(params[0]))
input = torch.randn(1, 1, 32, 32)
out = net(input)
print(out)
net.zero_grad()
out.backward(torch.randn(1, 10))
output = net(input)
target = torch.randn(10) # a dummy target, for example
target = target.view(1, -1) # make it the same shape as output
criterion = nn.MSELoss()
loss = criterion(output, target)
print(loss)
print(loss.grad_fn)
net.zero_grad() # zeroes the gradient buffers of all parameters
loss.backward()
print('conv1.bias.grad after backward')
print(net.conv1.bias.grad)
learning_rate = 0.01
for f in net.parameters():
f.data.sub_(f.grad.data * learning_rate)
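# Equivalently, the manual update above is what torch.optim.SGD performs;
# a standard alternative, sketched here for reference:
import torch.optim as optim
optimizer = optim.SGD(net.parameters(), lr=learning_rate)
optimizer.zero_grad()   # clear stale gradients
loss = criterion(net(input), target)
loss.backward()         # accumulate fresh gradients
optimizer.step()        # apply the SGD update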
|
[
"1468632627@qq.com"
] |
1468632627@qq.com
|
4d4d91dbf71adc295e441bc7d03a0ec941967878
|
2f98aa7e5bfc2fc5ef25e4d5cfa1d7802e3a7fae
|
/python/python_8965.py
|
60985953452cc8e3a19e764e0aa2103646d7d4f7
|
[] |
no_license
|
AK-1121/code_extraction
|
cc812b6832b112e3ffcc2bb7eb4237fd85c88c01
|
5297a4a3aab3bb37efa24a89636935da04a1f8b6
|
refs/heads/master
| 2020-05-23T08:04:11.789141
| 2015-10-22T19:19:40
| 2015-10-22T19:19:40
| null | 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 141
|
py
|
# Use python to run shell script with Popen behaves differently in python command line and actual program
from subprocess import Popen, PIPE
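# A minimal usage sketch (the script path is an assumption, not from the
# original snippet):
#
#   p = Popen(['sh', 'myscript.sh'], stdout=PIPE, stderr=PIPE)
#   out, err = p.communicate()
#   print(out.decode())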
|
[
"ubuntu@ip-172-31-7-228.us-west-2.compute.internal"
] |
ubuntu@ip-172-31-7-228.us-west-2.compute.internal
|
a5e60e51f46f0f4c1de4519f9e3ed6e437e72ee2
|
0862943574c7cbf98f7d049a516e8caf204304d1
|
/todo_list.py
|
3802e8c3b0c13da8abc8fac0eeef687f9a4748c8
|
[] |
no_license
|
MsMunda/Fun
|
4a29a4d70e8f46e8d94c385831a7a49135566104
|
6d22207b75a59cb53d1d2e85549472571feeb1c5
|
refs/heads/master
| 2021-01-20T03:23:27.051210
| 2017-04-27T01:08:24
| 2017-04-27T01:08:24
| 89,535,789
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 1,109
|
py
|
"""This is a Terminal-based program that allows a user to create and edit a to-do list.
The stub of each function has been provided. Read the docstrings for what each
function should do and complete the body of the functions below.
You can run the script in your Terminal at any time using the command:
>>> python todo_list.py
"""
def add_to_list(my_list):
    """Takes user input and adds it as a new item to the end of the list."""
    new_item = raw_input("What would you like to add? >>> ")
    my_list.append(new_item)
def view_list(my_list):
    """Print each item in the list."""
    for item in my_list:
        print item
def display_main_menu(my_list):
"""Displays main options, takes in user input, and calls view or add function."""
user_options = """
\nWould you like to:
A. Add a new item
B. View list
C. Quit the program
>>> """
    while True:
        user_input = raw_input(user_options)
        if user_input == 'A':
            add_to_list(my_list)
        elif user_input == 'B':
            view_list(my_list)
        elif user_input == 'C':
            break
#-------------------------------------------------
my_list = []
display_main_menu(my_list)
|
[
"no-reply@hackbrightacademy.com"
] |
no-reply@hackbrightacademy.com
|
a9c3e5697018a920d3e2f4f5430092dd2011706e
|
ec00584ab288267a7cf46c5cd4f76bbec1c70a6b
|
/offline/__Digital_Twin/__release2/__release2_sprint4/15619 - Link the asset locations to local weather/dev/dev.py
|
585898cda425bdfd488487cf9f2d129e386b1a2e
|
[] |
no_license
|
rahuldbhadange/Python
|
b4cc806ff23953389c9507f43d817b3815260e19
|
7e162117f1acc12537c7eeb36d6983d804122ff3
|
refs/heads/master
| 2021-06-23T05:04:20.053777
| 2020-01-28T10:34:28
| 2020-01-28T10:34:28
| 217,307,612
| 0
| 0
| null | 2021-06-10T22:44:11
| 2019-10-24T13:35:42
|
Python
|
UTF-8
|
Python
| false
| false
| 2,116
|
py
|
# -*- coding: utf-8 -*-
# Copyright (c) 2017 Iotic Labs Ltd. All rights reserved.
from __future__ import unicode_literals, print_function
from json import dumps
import logging
logging.basicConfig(format='%(asctime)s %(levelname)s [%(name)s] {%(threadName)s} %(message)s', level=logging.WARNING)
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
from IoticAgent.Core.compat import monotonic
from IoticAgent.ThingRunner import RetryingThingRunner
def pretty_print(msg, data):
print(msg, dumps(data, indent=4))
class TemplateTR(RetryingThingRunner):
LOOP_TIMER = 10 # minimum number of seconds duration of the main loop
def __init__(self, config=None):
"""Instantiation code in here, after the call to super().__init__()
"""
super(TemplateTR, self).__init__(config=config)
def on_startup(self):
"""Called once at the beginning, before main().
Use this method to create your things, rebind connections, setup hardware, etc.
"""
results = self.client.search(text="weather", location={'lat': 52.427809, 'long': -0.327829, 'radius': 10.789}, limit=1)
pretty_print("results", results)
for thing_guid, thing_data in results.items():
pretty_print("thing_guid", thing_guid)
pretty_print("thing_data", thing_data)
for point_guid, point_data in thing_data['points'].items():
pretty_print("point_guid", point_guid)
pretty_print("point_data", point_data)
descr = self.client.describe(point_guid)
pretty_print("descr", descr)
def main(self):
"""Called after on_startup.
Use this method for your main loop (if you need one).
Set LOOP_TIMER for your regular tick
"""
while True:
start = monotonic()
# loop code in here
stop = monotonic()
if self.wait_for_shutdown(max(0, self.LOOP_TIMER - (stop - start))):
break
def main():
TemplateTR(config="test.ini").run()
if __name__ == '__main__':
main()
|
[
"46024570+rahuldbhadange@users.noreply.github.com"
] |
46024570+rahuldbhadange@users.noreply.github.com
|
9b42299d9d109563fa71c7a5fed85b1ebd999120
|
36907c0840b34687026b3439a06c49e5c0d2ef11
|
/tests/test_file.py
|
752908478767a7efc177298baf3fad6d612dd537
|
[
"BSD-2-Clause"
] |
permissive
|
melscoop/pydeps
|
f5585adde69dfc2afd82254260a5dd4750cf57f2
|
c6078821222b314e2befbc6723a36967a9b5a47b
|
refs/heads/master
| 2023-08-29T00:19:55.845364
| 2021-10-14T06:07:02
| 2021-10-14T06:07:02
| 423,976,408
| 1
| 0
|
BSD-2-Clause
| 2021-11-02T19:51:10
| 2021-11-02T19:39:42
| null |
UTF-8
|
Python
| false
| false
| 713
|
py
|
# -*- coding: utf-8 -*-
import os
from tests.filemaker import create_files
from tests.simpledeps import simpledeps
def test_file():
files = """
a.py: |
import collections
"""
with create_files(files) as workdir:
assert simpledeps('a.py') == set()
def test_file_pylib():
files = """
a.py: |
import collections
"""
with create_files(files) as workdir:
assert 'collections -> a' in simpledeps('a.py', '--pylib')
def test_file_pyliball():
files = """
a.py: |
import collections
"""
with create_files(files) as workdir:
assert 'collections -> a' in simpledeps('a.py', '--pylib --pylib-all')
|
[
"bp@datakortet.no"
] |
bp@datakortet.no
|
105b3a2100dc54523233be6553a7d540acbaf799
|
b8fb2620257b6286871211b7bde1cd8a0a5468db
|
/ts_feature_extractor.py
|
c86913b859b6d8d349b58a479f536fd14205f9ff
|
[] |
no_license
|
mehdidc/elnino
|
f9aac0c586317d261151265cbd0290ae351731b8
|
7b85ad180634f1db4a61654d1475f54c60b694a4
|
refs/heads/master
| 2021-01-10T07:34:57.507222
| 2015-06-04T07:15:35
| 2015-06-04T07:15:35
| 36,854,530
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 1,739
|
py
|
import numpy as np
en_lat_bottom = -5
en_lat_top = 5
en_lon_left = 360-170
en_lon_right = 360-120
def get_enso_mean(tas):
"""The array of mean temperatures in the El Nino 3.4 region at all time points."""
return tas.loc[:, en_lat_bottom:en_lat_top, en_lon_left:en_lon_right].mean(dim=('lat','lon'))
class FeatureExtractor(object):
def __init__(self):
pass
def transform(self, temperatures_xray, n_burn_in, n_lookahead, skf_is):
"""Combine two variables: the montly means corresponding to the month of the target and
the current mean temperature in the El Nino 3.4 region."""
# This is the range for which features should be provided. Strip
# the burn-in from the beginning and the prediction look-ahead from
# the end.
valid_range = range(n_burn_in, temperatures_xray['time'].shape[0] - n_lookahead)
enso = get_enso_mean(temperatures_xray['tas'])
# reshape the vector into a table years as rows, months as columns
enso_matrix = enso.values.reshape((-1,12))
count_matrix = np.ones(enso_matrix.shape)
# compute cumulative means of columns (remember that you can only use
# the past at each time point) and reshape it into a vector
enso_monthly_mean = (enso_matrix.cumsum(axis=0) / count_matrix.cumsum(axis=0)).ravel()
# roll it backwards (6 months) so it corresponds to the month of the target
enso_monthly_mean_rolled = np.roll(enso_monthly_mean, n_lookahead - 12)
# select valid range
enso_monthly_mean_valid = enso_monthly_mean_rolled[valid_range]
enso_valid = enso.values[valid_range]
X = np.array([enso_valid, enso_monthly_mean_valid]).T
return X
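
def _causal_mean_demo():
    """Toy illustration (not part of the original file) of the causal
    monthly-mean trick in transform(): with years as rows and months as
    columns, the running column mean at row t only uses rows 0..t."""
    m = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
    counts = np.ones(m.shape)
    causal_mean = m.cumsum(axis=0) / counts.cumsum(axis=0)
    # row 0 is just the first year; row 1 averages both years so far
    assert (causal_mean == np.array([[1.0, 2.0], [2.0, 3.0]])).all()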
|
[
"mehdi@cherti.name"
] |
mehdi@cherti.name
|
d5d85c91e7b3eb67b2067d90181727ba54def11a
|
3db7b5409f2f9c57ab3f98bda50f8b548d98063d
|
/tests/system/test_list_rows.py
|
4c08958c37ac71ea16fd4a02c8b4507fedecac2b
|
[
"Apache-2.0"
] |
permissive
|
googleapis/python-bigquery
|
66db156b52e97565f6211b2fab5aac4e519fa798
|
3645e32aeebefe9d5a4bc71a6513942741f0f196
|
refs/heads/main
| 2023-09-01T07:41:24.893598
| 2023-08-23T19:04:13
| 2023-08-23T19:04:13
| 226,992,475
| 622
| 287
|
Apache-2.0
| 2023-09-12T04:31:26
| 2019-12-10T00:09:04
|
Python
|
UTF-8
|
Python
| false
| false
| 4,455
|
py
|
# Copyright 2021 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import datetime
import decimal
from dateutil import relativedelta
from google.cloud import bigquery
from google.cloud.bigquery import enums
def test_list_rows_empty_table(bigquery_client: bigquery.Client, table_id: str):
from google.cloud.bigquery.table import RowIterator
table = bigquery_client.create_table(table_id)
# It's a bit silly to list rows for an empty table, but this does
# happen as the result of a DDL query from an IPython magic command.
rows = bigquery_client.list_rows(table)
assert isinstance(rows, RowIterator)
assert tuple(rows) == ()
def test_list_rows_page_size(bigquery_client: bigquery.Client, table_id: str):
num_items = 7
page_size = 3
num_pages, num_last_page = divmod(num_items, page_size)
to_insert = [{"string_col": "item%d" % i, "rowindex": i} for i in range(num_items)]
bigquery_client.load_table_from_json(to_insert, table_id).result()
    row_iterator = bigquery_client.list_rows(
        table_id,
        selected_fields=[bigquery.SchemaField("string_col", enums.SqlTypeNames.STRING)],
        page_size=page_size,
    )
    pages = row_iterator.pages
for i in range(num_pages):
page = next(pages)
assert page.num_items == page_size
page = next(pages)
assert page.num_items == num_last_page
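
# Note: iterating the RowIterator directly, e.g.
# `for row in bigquery_client.list_rows(table_id, page_size=3): ...`,
# fetches pages of `page_size` transparently; going through `.pages`,
# as above, is only needed when page boundaries themselves are under test.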
def test_list_rows_scalars(bigquery_client: bigquery.Client, scalars_table: str):
rows = sorted(
bigquery_client.list_rows(scalars_table), key=lambda row: row["rowindex"]
)
row = rows[0]
assert row["bool_col"] # True
assert row["bytes_col"] == b"Hello, World!"
assert row["date_col"] == datetime.date(2021, 7, 21)
assert row["datetime_col"] == datetime.datetime(2021, 7, 21, 11, 39, 45)
assert row["geography_col"] == "POINT(-122.0838511 37.3860517)"
assert row["int64_col"] == 123456789
assert row["interval_col"] == relativedelta.relativedelta(
years=7, months=11, days=9, hours=4, minutes=15, seconds=37, microseconds=123456
)
assert row["numeric_col"] == decimal.Decimal("1.23456789")
assert row["bignumeric_col"] == decimal.Decimal("10.111213141516171819")
assert row["float64_col"] == 1.25
assert row["string_col"] == "Hello, World!"
assert row["time_col"] == datetime.time(11, 41, 43, 76160)
assert row["timestamp_col"] == datetime.datetime(
2021, 7, 21, 17, 43, 43, 945289, tzinfo=datetime.timezone.utc
)
nullrow = rows[1]
for column, value in nullrow.items():
if column == "rowindex":
assert value == 1
else:
assert value is None
def test_list_rows_scalars_extreme(
bigquery_client: bigquery.Client, scalars_extreme_table: str
):
rows = sorted(
bigquery_client.list_rows(scalars_extreme_table),
key=lambda row: row["rowindex"],
)
row = rows[0]
assert row["bool_col"] # True
assert row["bytes_col"] == b"\r\n"
assert row["date_col"] == datetime.date(9999, 12, 31)
assert row["datetime_col"] == datetime.datetime(9999, 12, 31, 23, 59, 59, 999999)
assert row["geography_col"] == "POINT(-135 90)"
assert row["int64_col"] == 9223372036854775807
assert row["interval_col"] == relativedelta.relativedelta(
years=-10000, days=-3660000, hours=-87840000
)
assert row["numeric_col"] == decimal.Decimal(f"9.{'9' * 37}E+28")
assert row["bignumeric_col"] == decimal.Decimal(f"9.{'9' * 75}E+37")
assert row["float64_col"] == float("Inf")
assert row["string_col"] == "Hello, World"
assert row["time_col"] == datetime.time(23, 59, 59, 999999)
assert row["timestamp_col"] == datetime.datetime(
9999, 12, 31, 23, 59, 59, 999999, tzinfo=datetime.timezone.utc
)
nullrow = rows[4]
for column, value in nullrow.items():
if column == "rowindex":
assert value == 4
else:
assert value is None
|
[
"noreply@github.com"
] |
googleapis.noreply@github.com
|
61d4759ecc91732720a3b6343a276d796bea8fd6
|
6aa7e203f278b9d1fd01244e740d5c944cc7c3d3
|
/airflow/api_connexion/schemas/event_log_schema.py
|
0753a8a104a44ec8b1a0d3b8965e9cd0eee383b3
|
[
"Apache-2.0",
"BSD-3-Clause",
"MIT",
"Python-2.0"
] |
permissive
|
laserpedro/airflow
|
83fc991d91749550b151c81876d9e7864bff3946
|
a28afa8172489e41ecf7c381674a0cb91de850ff
|
refs/heads/master
| 2023-01-02T04:55:34.030935
| 2020-10-24T15:55:11
| 2020-10-24T15:55:11
| 285,867,990
| 1
| 0
|
Apache-2.0
| 2020-08-07T15:56:49
| 2020-08-07T15:56:49
| null |
UTF-8
|
Python
| false
| false
| 1,861
|
py
|
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
from typing import List, NamedTuple
from marshmallow import Schema, fields
from marshmallow_sqlalchemy import SQLAlchemySchema, auto_field
from airflow.models.log import Log
class EventLogSchema(SQLAlchemySchema):
""" Event log schema """
class Meta:
""" Meta """
model = Log
id = auto_field(data_key='event_log_id', dump_only=True)
dttm = auto_field(data_key='when', dump_only=True)
dag_id = auto_field(dump_only=True)
task_id = auto_field(dump_only=True)
event = auto_field(dump_only=True)
execution_date = auto_field(dump_only=True)
owner = auto_field(dump_only=True)
extra = auto_field(dump_only=True)
class EventLogCollection(NamedTuple):
""" List of import errors with metadata """
event_logs: List[Log]
total_entries: int
class EventLogCollectionSchema(Schema):
""" EventLog Collection Schema """
event_logs = fields.List(fields.Nested(EventLogSchema))
total_entries = fields.Int()
event_log_schema = EventLogSchema()
event_log_collection_schema = EventLogCollectionSchema()
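
# Hedged usage sketch, assuming a Log row fetched via the ORM (the
# `session` object is an assumption, not part of this module):
#
#     log = session.query(Log).first()
#     event_log_schema.dump(log)
#     # -> {"event_log_id": ..., "when": ..., "dag_id": ..., ...}
#     collection = EventLogCollection(event_logs=[log], total_entries=1)
#     event_log_collection_schema.dump(collection)
#
# The data_key arguments on EventLogSchema are what rename `id` to
# `event_log_id` and `dttm` to `when` in the dumped payload.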
|
[
"noreply@github.com"
] |
laserpedro.noreply@github.com
|
5c25a940343aea2972f36ee9b25e6e5a5019f0f5
|
9e567b8241ce00e9d53843f5aba11c4a119b079f
|
/tags/v0_61_0/htdocs/tut/silly_axes.py
|
acdd1abf85c5ec85aa02e9070be482fceec67f1b
|
[
"Python-2.0",
"LicenseRef-scancode-unknown-license-reference"
] |
permissive
|
neilpanchal/matplotlib
|
3d2a7133e858c4eefbb6c2939eb3f7a328b18118
|
7565d1f2943e0e7b4a3f11ce692dfb9b548d0b83
|
refs/heads/master
| 2020-06-11T09:20:43.941323
| 2011-01-21T21:50:16
| 2011-01-21T21:50:16
| null | 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 240
|
py
|
from pylab import *

t = arange(0, 2.1, 0.1)
# grey-level colors are given as strings, e.g. '0.75'
rc('grid', color='0.75', linewidth=1.5)
rc('xtick', color='b', labelsize=14)
rc('ytick', color='b', labelsize=14)
a = subplot(111)
plot(t, t**2, '-')
title('Custom axes using rc')
grid(True)
savefig('custom_axes')
show()
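
# A modern equivalent using the explicit pyplot/Axes API instead of the
# star-import pylab style (a sketch; behaviour should match the above):
#
#     import numpy as np
#     import matplotlib.pyplot as plt
#
#     plt.rc('grid', color='0.75', linewidth=1.5)
#     plt.rc('xtick', color='b', labelsize=14)
#     plt.rc('ytick', color='b', labelsize=14)
#
#     t = np.arange(0, 2.1, 0.1)
#     fig, ax = plt.subplots()
#     ax.plot(t, t**2, '-')
#     ax.set_title('Custom axes using rc')
#     ax.grid(True)
#     fig.savefig('custom_axes')
#     plt.show()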
|
[
"(no author)@f61c4167-ca0d-0410-bb4a-bb21726e55ed"
] |
(no author)@f61c4167-ca0d-0410-bb4a-bb21726e55ed
|
e3c9eebf31eb84021294e0f466f8819622e71363
|
7b65481a7183f56a8b4ba2bce08cd1131cebe0e6
|
/infinitelooper.py
|
0365155016e44f11023400bc8ece254156932bd4
|
[] |
no_license
|
guadalupeaceves-lpsr/class-samples
|
17e261084de2e64ceae6daaaac5a53618eeafb37
|
bc5c096453243ef76ca6854d54232ea234ba24b5
|
refs/heads/master
| 2021-01-21T04:48:01.443832
| 2016-06-13T03:50:15
| 2016-06-13T03:50:15
| 48,007,839
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 153
|
py
|
print("What's your favorite number?")
num = int(raw_input())
while num != 14:
print("Nope, I don't like it. Choose another.")
num = int(raw_input())
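
# A hedged variant (not in the original script): the same loop, but
# tolerant of non-numeric input instead of crashing with a ValueError:
#
#     print("What's your favorite number?")
#     while True:
#         try:
#             num = int(input())
#         except ValueError:
#             print("That's not a number. Try again.")
#             continue
#         if num == 14:
#             break
#         print("Nope, I don't like it. Choose another.")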
|
[
"lps@lps-1011PX.(none)"
] |
lps@lps-1011PX.(none)
|
63366e5426c932f5970b06f1ed4796f94c709e38
|
3fd9c7ee49a32eae3013191b63154a9a5d6dafe6
|
/12.6驾驶飞船/12.6.3左右移动/ship_3.py
|
29b66748188b9598c3cd9ceb7bfaba5932bbc169
|
[] |
no_license
|
taozhenting/alien_invasion
|
e0c03cd9797cb33e40ca47a13eadeda8b1c4cf85
|
fd9bd97d6238da702fbb1eb6fcb78e8352875fe2
|
refs/heads/master
| 2020-04-27T05:31:48.862784
| 2019-01-30T09:43:49
| 2019-01-30T09:43:50
| 174,083,029
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 1,737
|
py
|
# Adds movement to the left
import pygame


class Ship():
    # The screen parameter specifies where the ship will be drawn
    def __init__(self, screen):
        """Initialize the ship and set its starting position."""
        self.screen = screen
        # Load the ship image and get its bounding rectangle.
        # pygame.image.load() returns a surface representing the ship,
        # which is stored in self.image
        self.image = pygame.image.load('images/ship.bmp')
        # rect treats objects as rectangles even when they are not
        # rectangular; get_rect() returns the surface's rect attribute
        self.rect = self.image.get_rect()
        # Store the rectangle representing the screen in self.screen_rect
        self.screen_rect = screen.get_rect()
        # Start each new ship at the bottom centre of the screen:
        # set self.rect.centerx (the x-coordinate of the ship's centre)
        # to the centerx attribute of the screen's rect
        self.rect.centerx = self.screen_rect.centerx
        # set self.rect.bottom (the y-coordinate of the ship's lower edge)
        # to the bottom attribute of the screen's rect
        self.rect.bottom = self.screen_rect.bottom
        # Movement flags; a left flag is added alongside the right one
        self.moving_right = False
        self.moving_left = False

    def update(self):
        """Adjust the ship's position based on the movement flags."""
        if self.moving_right:
            self.rect.centerx += 1
        # Use if rather than elif: if both arrow keys are held,
        # self.rect.centerx is first increased and then decreased.
        # With elif, the right arrow key would always take priority
        if self.moving_left:
            self.rect.centerx -= 1

    # blitme() draws the image to the screen at the position
    # given by self.rect
    def blitme(self):
        """Draw the ship at the specified position."""
        self.screen.blit(self.image, self.rect)
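
# Hedged sketch of the event handling that normally drives these flags
# elsewhere in the project (`ship` and the event loop are assumptions):
#
#     for event in pygame.event.get():
#         if event.type == pygame.KEYDOWN:
#             if event.key == pygame.K_RIGHT:
#                 ship.moving_right = True
#             elif event.key == pygame.K_LEFT:
#                 ship.moving_left = True
#         elif event.type == pygame.KEYUP:
#             if event.key == pygame.K_RIGHT:
#                 ship.moving_right = False
#             elif event.key == pygame.K_LEFT:
#                 ship.moving_left = False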
|
[
"taozt@ichile.com.cn"
] |
taozt@ichile.com.cn
|