blob_id stringlengths 40 40 | directory_id stringlengths 40 40 | path stringlengths 3 288 | content_id stringlengths 40 40 | detected_licenses listlengths 0 112 | license_type stringclasses 2 values | repo_name stringlengths 5 115 | snapshot_id stringlengths 40 40 | revision_id stringlengths 40 40 | branch_name stringclasses 684 values | visit_date timestamp[us]date 2015-08-06 10:31:46 2023-09-06 10:44:38 | revision_date timestamp[us]date 1970-01-01 02:38:32 2037-05-03 13:00:00 | committer_date timestamp[us]date 1970-01-01 02:38:32 2023-09-06 01:08:06 | github_id int64 4.92k 681M ⌀ | star_events_count int64 0 209k | fork_events_count int64 0 110k | gha_license_id stringclasses 22 values | gha_event_created_at timestamp[us]date 2012-06-04 01:52:49 2023-09-14 21:59:50 ⌀ | gha_created_at timestamp[us]date 2008-05-22 07:58:19 2023-08-21 12:35:19 ⌀ | gha_language stringclasses 147 values | src_encoding stringclasses 25 values | language stringclasses 1 value | is_vendor bool 2 classes | is_generated bool 2 classes | length_bytes int64 128 12.7k | extension stringclasses 142 values | content stringlengths 128 8.19k | authors listlengths 1 1 | author_id stringlengths 1 132 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
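The two header lines above give the per-row schema of this dump (paths, licenses, repo metadata, byte lengths, and the file content itself). A minimal sketch of filtering such rows once they are parsed into dicts — the sample values and the `filter_rows` helper are hypothetical, not taken from the dump:

```python
# Hypothetical sample rows mirroring a few of the schema columns above;
# the values are illustrative, not copied from the dataset.
rows = [
    {"path": "backend/app/wsgi.py", "license_type": "no_license",
     "language": "Python", "length_bytes": 425},
    {"path": "tests/test_constants.py", "license_type": "permissive",
     "language": "Python", "length_bytes": 1158},
]

def filter_rows(rows, license_type=None, max_bytes=None):
    """Keep rows whose metadata matches the given constraints."""
    out = []
    for row in rows:
        if license_type is not None and row["license_type"] != license_type:
            continue
        if max_bytes is not None and row["length_bytes"] > max_bytes:
            continue
        out.append(row)
    return out

print([r["path"] for r in filter_rows(rows, license_type="permissive")])
```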
afe31aea2d9cd7cf6fa4c51b8e4513f1a8fbd7e6 | 4c12a16443c5f73bce032414e830881cb8c6d8e0 | /backend/racing_lamborginis_21017/wsgi.py | 23b9e9c5bc4be1854177e95bbb72b1c0cd220861 | [] | no_license | crowdbotics-apps/racing-lamborginis-21017 | d9c71440bb877a6fd47cbbeaf9d5e1060df77cba | d76d33f94332843d2b564a103d56ab143e10b0bb | refs/heads/master | 2022-12-30T08:38:47.651833 | 2020-10-03T11:17:53 | 2020-10-03T11:17:53 | 300,862,311 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 425 | py | """
WSGI config for racing_lamborginis_21017 project.
It exposes the WSGI callable as a module-level variable named ``application``.
For more information on this file, see
https://docs.djangoproject.com/en/2.2/howto/deployment/wsgi/
"""
import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'racing_lamborginis_21017.settings')
application = get_wsgi_application()
| [
"team@crowdbotics.com"
] | team@crowdbotics.com |
34f47ee507cb89aee8ad4f0dec96520200801399 | 409ce560793c070ef4211b99c5a4a5316a258c4f | /unittests/pytests/utils/TestConstants.py | 25fee48f0ccd85b914adda51af89ae2e632377dc | [
"MIT"
] | permissive | calum-chamberlain/pylith | bb718bfb4305f03b45d42348e5d4fa5ed5f4a918 | 8712c39ade53c1cc5ac0e671e4296cee278c1dcf | refs/heads/master | 2020-12-06T17:15:08.638337 | 2016-05-15T20:30:28 | 2016-05-15T20:30:28 | 46,401,744 | 0 | 0 | null | 2016-05-15T20:30:29 | 2015-11-18T07:09:12 | C++ | UTF-8 | Python | false | false | 1,158 | py | #!/usr/bin/env python
#
# ======================================================================
#
# Brad T. Aagaard, U.S. Geological Survey
# Charles A. Williams, GNS Science
# Matthew G. Knepley, University of Chicago
#
# This code was developed as part of the Computational Infrastructure
# for Geodynamics (http://geodynamics.org).
#
# Copyright (c) 2010-2015 University of California, Davis
#
# See COPYING for license information.
#
# ======================================================================
#
## @file unittests/pytests/utils/TestConstants.py
## @brief Unit testing of constants.
import unittest
# ----------------------------------------------------------------------
class TestConstants(unittest.TestCase):
"""
Unit testing of constants.
"""
def test_maxdouble(self):
"""
Test maxdouble()
"""
from pylith.utils.utils import maxdouble
self.assertAlmostEqual(1.0, maxdouble()/1.0e+99, 7)
return
  def test_maxfloat(self):
    """
    Test maxfloat()
"""
from pylith.utils.utils import maxfloat
self.assertAlmostEqual(1.0, maxfloat()/1.0e+30, 7)
return
# End of file
| [
"baagaard@usgs.gov"
] | baagaard@usgs.gov |
245fe46d17c120ab7965895e594e5fec0c460184 | 8bbeb7b5721a9dbf40caa47a96e6961ceabb0128 | /python3/413.Arithmetic Slices(等差数列划分).py | b8ac8cded2d30b118ccf5c8f0cf485c7285b43c8 | [
"MIT"
] | permissive | lishulongVI/leetcode | bb5b75642f69dfaec0c2ee3e06369c715125b1ba | 6731e128be0fd3c0bdfe885c1a409ac54b929597 | refs/heads/master | 2020-03-23T22:17:40.335970 | 2018-07-23T14:46:06 | 2018-07-23T14:46:06 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,816 | py | """
<p>A sequence of number is called arithmetic if it consists of at least three elements and if the difference between any two consecutive elements is the same.</p>
<p>For example, these are arithmetic sequence:</p>
<pre>1, 3, 5, 7, 9
7, 7, 7, 7
3, -1, -5, -9</pre>
<p>The following sequence is not arithmetic.</p> <pre>1, 1, 2, 5, 7</pre>
<br/>
<p>A zero-indexed array A consisting of N numbers is given. A slice of that array is any pair of integers (P, Q) such that 0 <= P < Q < N.</p>
<p>A slice (P, Q) of array A is called arithmetic if the sequence:<br/>
A[P], A[p + 1], ..., A[Q - 1], A[Q] is arithmetic. In particular, this means that P + 1 < Q.</p>
<p>The function should return the number of arithmetic slices in the array A. </p>
<br/>
<p><b>Example:</b>
<pre>
A = [1, 2, 3, 4]
return: 3, for 3 arithmetic slices in A: [1, 2, 3], [2, 3, 4] and [1, 2, 3, 4] itself.
</pre>
"""
class Solution:
def numberOfArithmeticSlices(self, A):
"""
:type A: List[int]
:rtype: int
"""
| [
"lishulong@wecash.net"
] | lishulong@wecash.net |
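The row above embeds the arithmetic-slices problem statement but leaves `numberOfArithmeticSlices` unimplemented. A standard run-length DP sketch for the problem described in the docstring — an illustrative completion, not the dataset author's solution:

```python
def number_of_arithmetic_slices(a):
    """Count arithmetic slices (length >= 3) with a running-streak DP."""
    total = 0
    run = 0  # number of arithmetic slices ending at the current index
    for i in range(2, len(a)):
        if a[i] - a[i - 1] == a[i - 1] - a[i - 2]:
            run += 1      # extending the streak adds `run` new slices
            total += run
        else:
            run = 0
    return total

print(number_of_arithmetic_slices([1, 2, 3, 4]))  # → 3, matching the docstring example
```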
f665c4ad3bec3e391dda74628186ac234349d23c | 9de8b47c53c530b143f5d37a130e47aa092a4b06 | /examples/ods_00_simple.py | 325a00387f973a48cac170fb9c3f90468e8a19af | [
"MIT"
] | permissive | pyexcel/pyexcel-ezodf | 27b2d6e3c9c7f8e04a03827eaad417ec91d08420 | 981fc223984aace7a0fe900f0be7cbcb72c7e17e | refs/heads/master | 2023-08-04T18:56:21.134917 | 2019-06-15T17:00:19 | 2019-06-15T17:00:19 | 25,791,290 | 6 | 3 | NOASSERTION | 2019-06-15T17:00:20 | 2014-10-26T21:15:31 | Python | UTF-8 | Python | false | false | 796 | py | #!/usr/bin/env python
#coding:utf-8
# Purpose: simple spreadsheet
# Created: 26.01.2011
# Copyright (C) 2011, Manfred Moitzi
# License: MIT license
from __future__ import unicode_literals, print_function, division
__author__ = "mozman <mozman@gmx.at>"
import ezodf
ods = ezodf.newdoc('ods', "simple_spreadsheet.ods")
sheet = ezodf.Sheet('NUMBERS', size=(20, 10))
ods.sheets += sheet
for index in range(sheet.ncols()):
sheet[5, index].set_value(index)
sheet[index, 5].set_value(index)
sheet[index, index].set_value(index, value_type='currency', currency='EUR')
sheet = ezodf.Sheet('TEXT', size=(20, 10))
ods.sheets += sheet
for column in range(sheet.ncols()):
for row, cell in enumerate(sheet.column(column)):
cell.set_value("Cell (%d, %d)" % (row, column))
ods.save()
| [
"mozman@gmx.at"
] | mozman@gmx.at |
00675ddee6089fc807f80b16ba51e6c6744e0783 | 8d4087ba079c6e8a1a87285f4205a5eef4743010 | /scoring_browser/source_tab.py | ca5bb83648d50cdeedadf7251b5fe06e0a6b0208 | [
"MIT"
] | permissive | janpipek/scoring_browser | 5e9d0528224308219199792b84afba6f16e7576b | 518363ebd8fb198edbebcd616a8a027662b97185 | refs/heads/master | 2021-01-01T16:29:57.452640 | 2015-01-13T11:00:48 | 2015-01-13T11:00:48 | 3,330,051 | 1 | 1 | null | null | null | null | UTF-8 | Python | false | false | 595 | py | #
# scoring_browser --- Simple Qt application for browsing
# scoring outputs in Geant4
#
# Copyright (C) 2012-2014 Jan Pipek
# (jan.pipek@gmail.com)
#
# This file may be distributed without limitation.
#
from PyQt4 import QtGui
class SourceTab(QtGui.QWidget):
""" Tab displaying the file source."""
def __init__(self):
QtGui.QWidget.__init__(self)
layout = QtGui.QVBoxLayout(self)
self.textEdit = QtGui.QTextEdit()
self.textEdit.setReadOnly(True)
layout.addWidget(self.textEdit)
def setText(self, text):
self.textEdit.setText(text)
| [
"jan.pipek@gmail.com"
] | jan.pipek@gmail.com |
ec1954ca86823f75679aaa50495f990a7f0d2195 | 6ca9a7ed179ed96857c86dd91d5f81ad07be4690 | /KnowledgeMapping/spark/11_mysql_op.py | fa261f93b12f4fd981f71143cd16bc8f8ffbde56 | [
"MIT"
] | permissive | nickliqian/keep_learning | ede172048cb1473013aa506a943ebe0c7c416065 | be120ce2bb94a8e8395630218985f5e51ae087d9 | refs/heads/master | 2021-04-25T18:23:47.808870 | 2020-07-31T09:52:20 | 2020-07-31T09:52:20 | 108,302,688 | 8 | 3 | null | null | null | null | UTF-8 | Python | false | false | 6,414 | py | from pyspark import SparkContext
from pyspark.sql import SQLContext
import pyspark.sql.functions as F
sc = SparkContext("local", appName="mysqltest")
sqlContext = SQLContext(sc)
df = sqlContext.read.format("jdbc").options(
url="jdbc:mysql://localhost:3306/mydata?user=root&password=mysql&"
"useUnicode=true&characterEncoding=utf-8&useJDBCCompliantTimezoneShift=true&"
"useLegacyDatetimeCode=false&serverTimezone=UTC", dbtable="detail_data").load()
df.printSchema()
# root
# |-- id: integer (nullable = true)
# |-- 省份: string (nullable = true)
df.show(n=5)
# +----+------+------+------+------+
# | id| 省份| 城市| 区县| 区域|
# +----+------+------+------+------
# |2557|广东省|深圳市|罗湖区|春风路
print(df.count())
# 47104
df_g1 = df.groupby("区县").count()
df_g1.show()
# +--------+-----+
# | 区县|count|
# +--------+-----+
# | 龙华区| 4217|
print(df.columns)
# ['id', '省份', '城市', '区县', '区域', '小区', '源地址',
print(df.dtypes)
# [('id', 'int'), ('省份', 'string'), ('城市', 'string'),
df.select('城市', '区县', '区域', '小区').show()
# +------+------+------+--------------+
# | 城市| 区县| 区域| 小区|
# +------+------+------+--------------+
# |深圳市|罗湖区|春风路| 凯悦华庭|
# |深圳市|罗湖区|春风路| 置地逸轩|
df.select(df.id.alias('id_value'), '小区').show()
# +--------+--------------+
# |id_value| 小区|
# +--------+--------------+
# | 2557| 凯悦华庭|
# | 2558| 置地逸轩|
df.select(df["城市"].alias('city'), '小区').show()
# +------+--------------+
# | city| 小区|
# +------+--------------+
# |深圳市| 凯悦华庭|
# |深圳市| 置地逸轩|
df.select('城市', '区县', '区域', '小区').filter(df["小区"] == '凯悦华庭').show()
# +------+------+------+--------+
# | 城市| 区县| 区域| 小区|
# +------+------+------+--------+
# |深圳市|罗湖区|春风路|凯悦华庭|
df.select('城市', '区县', '区域', '小区').filter((df["城市"] == '深圳市') & (df["区县"] == '南山区')).show()
# +------+------+------+----------------+
# | 城市| 区县| 区域| 小区|
# +------+------+------+----------------+
# |深圳市|南山区|白石洲|中海深圳湾畔花园|
# |深圳市|南山区|白石洲| 侨城豪苑二期|
# ...
df.select(df["城市"] + 1, '城市', '小区').show()
# +----------+------+--------------+
# |(城市 + 1)| 城市| 小区|
# +----------+------+--------------+
# | null|深圳市| 凯悦华庭|
df.select(F.lit("test").alias('城市plus'), '城市', '小区').show()
# +--------+------+--------------+
# |城市plus| 城市| 小区|
# +--------+------+--------------+
# | test|深圳市| 凯悦华庭|
# Take a single row
df2 = df.limit(1)
print(df2)
# Append rows
print(df.count()) # 47104
print(df.unionAll(df2).count()) # 47105
# Drop duplicate records
print(df.drop_duplicates().count()) # 47104
# Drop a column
print(df.drop('id').columns)
# ['省份', '城市', '区县', '区域', '小区', '源地址',...
# Drop records with missing values in the specified fields
print("Drop records containing any missing values")
print(df.dropna().count())
print("Drop records with missing values in the specified fields")
print(df.dropna(subset=['省份', '城市']).count())
# Fill missing values
print(df.fillna({'省份': "广东省", '城市': '深圳市'}))
# Group-by aggregation
df.groupby('区县').agg(F.max(df['总价'])).show()
# +--------+----------+
# | 区县| max(总价)|
# +--------+----------+
# | 龙华区|5200.00000|
# | 福田区|8300.00000|
# | 罗湖区|7000.00000|
# | 坪山区|1588.00000|
# | 南山区|9800.00000|
# | 龙岗区|4000.00000|
# | 盐田区|5500.00000|
# | 光明区|5200.00000|
# |大鹏新区|3500.00000|
# | 宝安区|8800.00000|
# +--------+----------+
# Aggregate functions
df.select(F.max(df["总价"])).show() # maximum
df.select(F.min(df["总价"])).show() # minimum
df.select(F.avg(df["总价"])).show() # average
df.select(F.countDistinct(df["总价"])).show() # count of distinct values
df.select(F.count(df["总价"])).show() # count after dropping missing values
# +----------+
# | max(总价)|
# +----------+
# |9800.00000|
# +----------+
#
# +---------+
# |min(总价)|
# +---------+
# | 1.10000|
# +---------+
#
# +-------------+
# | avg(总价)|
# +-------------+
# |577.736916000|
# +-------------+
#
# |count(DISTINCT 总价)|
# +--------------------+
# | 1219|
# +--------------------+
#
# +-----------+
# |count(总价)|
# +-----------+
# | 47104|
# +-----------+
# 'lit': 'Creates a :class:`Column` of literal value.',
# 'col': 'Returns a :class:`Column` based on the given column name.',
# 'column': 'Returns a :class:`Column` based on the given column name.',
# 'asc': 'Returns a sort expression based on the ascending order of the given column name.',
# 'desc': 'Returns a sort expression based on the descending order of the given column name.',
#
# 'upper': 'Converts a string expression to upper case.',
# 'lower': 'Converts a string expression to upper case.',
# 'sqrt': 'Computes the square root of the specified float value.',
# 'abs': 'Computes the absolutle value.',
#
# 'max': 'Aggregate function: returns the maximum value of the expression in a group.',
# 'min': 'Aggregate function: returns the minimum value of the expression in a group.',
# 'first': 'Aggregate function: returns the first value in a group.',
# 'last': 'Aggregate function: returns the last value in a group.',
# 'count': 'Aggregate function: returns the number of items in a group.',
# 'sum': 'Aggregate function: returns the sum of all values in the expression.',
# 'avg': 'Aggregate function: returns the average of the values in a group.',
# 'mean': 'Aggregate function: returns the average of the values in a group.',
# 'sumDistinct': 'Aggregate function: returns the sum of distinct values in the expression.',
df.describe("总价").show()
# +-------+-----------------+
# |summary| 总价|
# +-------+-----------------+
# | count| 47104|
# | mean| 577.736916000|
# | stddev|544.7605196104298|
# | min| 1.10000|
# | max| 9800.00000|
# +-------+-----------------+
print(df)
print(type(df))
df.select('id', '城市', '区县', '区域', '小区').filter("id = 5000").show()
sc.stop()
| [
"nickliqian@outlook.com"
] | nickliqian@outlook.com |
dd47c408355ba26ad57a6dca68f599dd9a8a5475 | 19652cc279e9bd0d63622430f09f0ad187349ff7 | /GetZhuanTiLawContent.py | ca76160f6899084b9cc2ecd79b337aaa18198f31 | [] | no_license | dumin199101/JinRongExtractScript | d1a3a415d335727ba1e9cc3772e0c4b40bcfc040 | 253420befab540b0d35140dfd6ec1c2ebaf95244 | refs/heads/master | 2021-06-27T22:49:27.095270 | 2019-04-30T05:50:34 | 2019-04-30T05:50:34 | 145,783,385 | 4 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,071 | py | # coding=utf-8
"""
@Project: JinRong
@Time: 2019/3/21 16:54
@Name: GetZhuanTiLawContent
@Author: lieyan123091
"""
import os,sys,re
def del_blank_char(str):
"""
    Remove whitespace and newline characters from a string
:param str:
:return:
"""
rep = re.compile(r'(\n|\t|\r)')
(fstr, count) = rep.subn('', str)
return fstr
def write_mapping_log(logname, content):
"""
    Generate the mapping log file
:return:
"""
with open(logname, "a+") as f1:
f1.write(content.encode("utf-8"))
def get_law_content(srcfolder):
for file in os.listdir(srcfolder):
guid = file[:-4]
srcfile = srcfolder + "\\" + file
with open(srcfile,"r") as f1:
str = f1.read()
content = del_blank_char(str)
# print guid,content
write_mapping_log("law_new_content.txt",guid + "\t" + content + "\n")
print guid
def main():
srcfolder = u"J:\\law_title"
get_law_content(srcfolder)
if __name__ == '__main__':
reload(sys)
sys.setdefaultencoding('utf8')
main() | [
"1766266374@qq.com"
] | 1766266374@qq.com |
a21d70e8bbc05a9589358e153a72bbcdcf05d80e | 45828d99366ce5884b44f3a8a88564a97aba07ec | /dg/commands/deploy.py | 858fff8f5fc2460f3f653a80776845dc44ea6492 | [
"Apache-2.0"
] | permissive | alefnula/dg | 6e98d9b0fe7eb9f852725225f581c7781dca9b32 | 57602c464293dd1f78188fc1bddcafe1f08fb4ee | refs/heads/master | 2021-09-04T03:54:51.548719 | 2018-01-15T15:19:01 | 2018-01-15T15:19:01 | 114,445,035 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,809 | py | __author__ = 'Viktor Kerkez <alefnula@gmail.com>'
__date__ = ' 06 December 2017'
__copyright__ = 'Copyright (c) 2017 Viktor Kerkez'
import os
import dg
import glob
import shutil
from datetime import datetime
from dg.utils import ensure_dir, bar
@dg.command
@dg.argument('-m', '--model', action='append', dest='models',
help='Models do deploy. Default: All found models')
@dg.argument('-s', '--silent', action='store_true', help='Don\'t show details')
def deploy(models=None, silent=False):
"""Deploy the latest model to production
Args:
models (list of str): Names of the models we want to deploy
silent (bool): Don't print details to standard out.
"""
config = dg.Config()
production_dir = config.get_model_dir(production=True)
models_dir = os.path.dirname(production_dir)
models = models or config.models.keys()
files = [
os.path.basename(x) for x in
glob.glob(os.path.join(models_dir, '*'))
# Remove production and tensorflow from the list
if os.path.basename(x) not in (
'production', 'tensorflow', 'metrics.db'
)
]
latest = os.path.join(models_dir, sorted(
files, key=lambda x: datetime.strptime(x[:19], '%Y.%m.%dT%H:%M:%S')
)[-1])
ensure_dir(production_dir, directory=True)
bar(silent=silent)
for model in models:
if not silent:
print('Deploying model:', model)
source = os.path.join(latest, model)
# If the model is trained in the latest training batch
if os.path.isdir(source):
destination = os.path.join(production_dir, model)
if os.path.isdir(destination):
shutil.rmtree(destination)
shutil.copytree(source, destination)
bar(silent=silent)
| [
"alefnula@gmail.com"
] | alefnula@gmail.com |
d8166967a72793fe1c9a773e18c675dd16b45b91 | ee7e42417d9d1e76b0e84e44dc6eb037adc3ebad | /.history/pet/api_20190703153244.py | 42fa809e9282d2a4761bacc1a97858c58c20bd27 | [] | no_license | web3-qa/pets-api | 4632127ee84a299f207d95754f409fc1e4c0013d | ee4a04e7291740ac8eb6147c305b41d27d5be29c | refs/heads/master | 2023-05-12T09:09:47.509063 | 2019-07-18T15:07:13 | 2019-07-18T15:07:13 | 197,611,701 | 0 | 0 | null | 2023-05-01T19:42:17 | 2019-07-18T15:19:59 | Python | UTF-8 | Python | false | false | 296 | py | import json
from flask import abort, jsonify, request
from flask.views import MethodView
class pets(MethodView):
pets = [
{"id":1, "name": u"Mac"},
{"id":2, "name": u"Leo"},
{"id":3, "name": u"Dave"}
]
    def get(self):
        return jsonify({"pets": self.pets}) | [
"dcolmer@statestreet.com"
] | dcolmer@statestreet.com |
cbcbbe018d0666193143c8195bfeeac41f28ebf0 | a226588e90812f436e16ae140555463a259e653c | /pytorchYOLOv1master/xml_2_txt.py | 37177e4baa5d104e29948ab20f209dc2e35955ed | [] | no_license | Bigfishers/AnnotatedNetworkModelGit | 3b6f4d0c40e4d50f88e71a48af4c0f175af1935a | 6fef51b7ac85359506fc7992f13acd742105a0b6 | refs/heads/master | 2020-03-18T05:51:52.044434 | 2018-05-22T01:57:00 | 2018-05-22T01:57:00 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,801 | py | import xml.etree.ElementTree as ET
import os
VOC_CLASSES = ( # always index 0
'aeroplane', 'bicycle', 'bird', 'boat',
'bottle', 'bus', 'car', 'cat', 'chair',
'cow', 'diningtable', 'dog', 'horse',
'motorbike', 'person', 'pottedplant',
'sheep', 'sofa', 'train', 'tvmonitor')
def parse_rec(filename):
""" Parse a PASCAL VOC xml file """
tree = ET.parse(filename)
objects = []
for obj in tree.findall('object'):
obj_struct = {}
obj_struct['name'] = obj.find('name').text
#obj_struct['pose'] = obj.find('pose').text
#obj_struct['truncated'] = int(obj.find('truncated').text)
#obj_struct['difficult'] = int(obj.find('difficult').text)
bbox = obj.find('bndbox')
obj_struct['bbox'] = [int(float(bbox.find('xmin').text)),
int(float(bbox.find('ymin').text)),
int(float(bbox.find('xmax').text)),
int(float(bbox.find('ymax').text))]
objects.append(obj_struct)
return objects
txt_file = open('voc2012.txt','w')
#bobo change dir
Annotations = '/home/zhuhui/data/VOCdevkit/VOC2012/Annotations/'
xml_files = os.listdir(Annotations)
count = 0
for xml_file in xml_files:
count += 1
image_path = xml_file.split('.')[0] + '.jpg'
txt_file.write(image_path+' ')
results = parse_rec(Annotations + xml_file)
num_obj = len(results)
txt_file.write(str(num_obj)+' ')
for result in results:
class_name = result['name']
bbox = result['bbox']
class_name = VOC_CLASSES.index(class_name)
txt_file.write(str(bbox[0])+' '+str(bbox[1])+' '+str(bbox[2])+' '+str(bbox[3])+' '+str(class_name)+' ')
txt_file.write('\n')
#if count == 10:
# break
txt_file.close() | [
"1055271769@qq.com"
] | 1055271769@qq.com |
5a3c57fd21f95321e6d620c75362de168aa1a0a7 | d57b51ec207002e333b8655a8f5832ed143aa28c | /.history/l2/bot_20200620185737.py | c28dc8413d17eb198a5c331d9f16239be7f16928 | [] | no_license | yevheniir/python_course_2020 | b42766c4278a08b8b79fec77e036a1b987accf51 | a152d400ab4f45d9d98d8ad8b2560d6f0b408c0b | refs/heads/master | 2022-11-15T07:13:24.193173 | 2020-07-11T15:43:26 | 2020-07-11T15:43:26 | 278,890,802 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 2,499 | py | # # Work with Python 3.6
import discord, os, json, time
TOKEN = os.environ.get('TOKEN')
client = discord.Client()
def save():
with open(os.path.dirname(os.path.realpath(__file__)) + "/save.json", 'w') as file:
json.dump(people, file)
people = {"zheka": ""}
with open(os.path.dirname(os.path.realpath(__file__)) + "/save.json", 'r') as file:
people = json.load(file)
@client.event
async def on_message(message):
# we do not want the bot to reply to itself
if message.author == client.user:
return
if message.content == "!start_battle":
people[message.author.id] = ""
        await message.channel.send("The battle has started, waiting for one more player to join")
if message.content == "!join_battle":
if len(people) == 1:
people[message.author.id] = ""
if message.author.id in people:
answer = message.content
if message.content.startswith('!register'):
target = None
for i, n in enumerate(people):
if n['id'] == message.author.id:
target = i
        if target is None:
people.append({"id": message.author.id, "name": message.content.replace("!register", '')})
else:
people[target] = {"id": message.author.id, "name": message.content.replace("!register", '')}
save()
        msg = 'Hello ' + message.author.mention + " What is your favorite movie?"
await message.channel.send(msg)
if message.content.startswith('!answer'):
target = None
for i, n in enumerate(people):
if n['id'] == message.author.id:
target = i
try:
people[target]['film'] = message.content.replace('!answer', '')
except Exception:
people.append({"id": message.author.id, "film": message.content.replace('!answer', '')})
save()
        msg = 'Answer saved'
await message.channel.send(msg)
if message.content.startswith('!hello'):
msg = '!hello ' + message.author.mention
await message.channel.send(msg)
if message.content.startswith('!show_all'):
msg = 'People: \n' + '\n'.join(map(str, people))
await message.channel.send(msg)
@client.event
async def on_ready():
print('Logged in as')
print(client.user.name)
print(client.user.id)
print('------')
client.run(TOKEN)
| [
"yevheniira@intelink-ua.com"
] | yevheniira@intelink-ua.com |
3d9d753500f25326772d80f9d9adbe3735342da5 | 377cbbe140fd0faf1eb53ba3794de816ac307cde | /src/interpolate/InterpolateLatentSpace.py | 7d6d54575b623f171f505c66e68ec8aee3183784 | [
"MIT"
] | permissive | dhruvtapasvi/implementation | fcbd7ab8e7b1368a0f07ee41dc5f0b6d6708c206 | 964980f431517f4548a87172a05107cdf700fb84 | refs/heads/master | 2021-09-16T01:47:50.601661 | 2018-05-17T19:22:44 | 2018-05-17T19:22:44 | 114,498,055 | 1 | 0 | MIT | 2018-05-05T02:17:35 | 2017-12-16T23:59:13 | Python | UTF-8 | Python | false | false | 893 | py | from interpolate.Interpolate import Interpolate
from model.Autoencoder import Autoencoder
class InterpolateLatentSpace(Interpolate):
def __init__(self, autoencoder: Autoencoder):
self.__encoder = autoencoder.encoder()
self.__decoder = autoencoder.decoder()
def interpolateAll(self, left, right, intervals):
leftLatent = self.__encoder.predict(left, batch_size=100)
rightLatent = self.__encoder.predict(right, batch_size=100)
interpolated = super(InterpolateLatentSpace, self).interpolateAll(leftLatent,rightLatent, intervals)
flattenedInterpolated = interpolated.reshape((-1,) + interpolated.shape[2:])
flattedReconstructed = self.__decoder.predict(flattenedInterpolated, batch_size=100)
reconstructed = flattedReconstructed.reshape(interpolated.shape[0:2] + left.shape[1:])
return interpolated, reconstructed
| [
"dhruv.tapasvi1996@gmail.com"
] | dhruv.tapasvi1996@gmail.com |
91875d0345ca998c41170c240a27ca72b0e5d82d | 15f321878face2af9317363c5f6de1e5ddd9b749 | /solutions_python/Problem_2/47.py | 00fad29e61996189fea882fe09baa3269940f0eb | [] | no_license | dr-dos-ok/Code_Jam_Webscraper | c06fd59870842664cd79c41eb460a09553e1c80a | 26a35bf114a3aa30fc4c677ef069d95f41665cc0 | refs/heads/master | 2020-04-06T08:17:40.938460 | 2018-10-14T10:12:47 | 2018-10-14T10:12:47 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,533 | py | #!/usr/bin/env python
from sys import *
def timeDecode(time):
h = time / 60
    m = time % 60
return "%02d:%02d" % (h, m)
def processCase(trainsA, trainsB, tt, case):
if len(trainsA) == 0 or len(trainsB) == 0:
ca = len(trainsA)
cb = len(trainsB)
else:
ca = 0
cb = 0
dictA = {}
trainsA.sort()
trainsB.sort(lambda a,b: cmp(b[1], a[1]))
for i in range(0, len(trainsA)):
found = False
for j in range(0, len(trainsB)):
if trainsA[i][0] >= trainsB[j][1] + tt and not dictA.has_key(j):
dictA[j] = True
found = True
break
if not found: ca += 1
dictB = {}
trainsA.sort(lambda a,b: cmp(b[1], a[1]))
trainsB.sort()
for i in range(0, len(trainsB)):
found = False
for j in range(0, len(trainsA)):
if trainsB[i][0] >= trainsA[j][1] + tt and not dictB.has_key(j):
dictB[j] = True
found = True
break
if not found: cb += 1
print "Case #%d: %d %d" % (case, ca, cb)
return
def timeEncode(time):
(h, m) = time.split(':')
return int (h) * 60 + int (m)
def process(lines):
n = int (lines[0])
case = 1
i = 1
while case <= n:
tt = int (lines[i].strip("\r\n"))
(a, b) = lines[i + 1].strip("\r\n").split(' ')
(ac, bc) = int(a), int(b)
i += 2
trainsA = []
trainsB = []
while ac > 0:
(dep, arr) = lines[i].strip("\r\n").split(' ')
trainsA.append((timeEncode(dep), timeEncode(arr)))
i += 1
ac -= 1
while bc > 0:
(dep, arr) = lines[i].strip("\r\n").split(' ')
trainsB.append((timeEncode(dep), timeEncode(arr)))
i += 1
bc -= 1
processCase(trainsA, trainsB, tt, case)
case += 1
def usage():
print "Usage %s input" % argv[0]
def main():
if len(argv) < 2:
usage()
exit()
input = argv[1]
f = open(input, 'r')
try:
lines = f.readlines()
process(lines)
finally:
f.close()
if __name__ == '__main__':
main()
| [
"miliar1732@gmail.com"
] | miliar1732@gmail.com |
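The Code Jam solution above encodes 'HH:MM' timestamps as minutes since midnight and back. A self-contained round-trip sketch of that conversion, using hypothetical snake_case names rather than the file's own helpers:

```python
def time_encode(hhmm):
    """'HH:MM' -> minutes since midnight."""
    h, m = hhmm.split(":")
    return int(h) * 60 + int(m)

def time_decode(minutes):
    """Minutes since midnight -> 'HH:MM'."""
    return "%02d:%02d" % (minutes // 60, minutes % 60)

print(time_decode(time_encode("09:05")))  # → 09:05
```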
c6ea32be19b42e31ca901a7538214806f9956215 | d13c986a2260cf3b1ef247ef229db6d725dc15f3 | /pytext/models/representations/bilstm.py | c6f42d7e4cee9f2f9964ddc303d5695b1c8cea07 | [
"BSD-3-Clause"
] | permissive | meltnur/pytext | c5ae44cacd36cdf8b7bd5eaf06df00034214ddcf | 06c11e12eb9ddbf3f8b352efc6fed4721555ecf6 | refs/heads/master | 2021-02-11T00:47:52.531330 | 2020-02-28T23:03:21 | 2020-02-28T23:06:11 | 244,434,965 | 2 | 0 | NOASSERTION | 2020-03-02T17:41:05 | 2020-03-02T17:41:04 | null | UTF-8 | Python | false | false | 6,465 | py | #!/usr/bin/env python3
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
from typing import Optional, Tuple
import torch
import torch.nn as nn
import torch.onnx
from pytext.config import ConfigBase
from pytext.utils import cuda
from pytext.utils.usage import log_class_usage
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence
from .representation_base import RepresentationBase
class BiLSTM(RepresentationBase):
"""
`BiLSTM` implements a multi-layer bidirectional LSTM representation layer
preceded by a dropout layer.
Args:
config (Config): Configuration object of type BiLSTM.Config.
embed_dim (int): The number of expected features in the input.
padding_value (float): Value for the padded elements. Defaults to 0.0.
Attributes:
padding_value (float): Value for the padded elements.
dropout (nn.Dropout): Dropout layer preceding the LSTM.
lstm (nn.LSTM): LSTM layer that operates on the inputs.
representation_dim (int): The calculated dimension of the output features
of BiLSTM.
"""
class Config(RepresentationBase.Config, ConfigBase):
"""
Configuration class for `BiLSTM`.
Attributes:
dropout (float): Dropout probability to use. Defaults to 0.4.
lstm_dim (int): Number of features in the hidden state of the LSTM.
Defaults to 32.
num_layers (int): Number of recurrent layers. Eg. setting `num_layers=2`
would mean stacking two LSTMs together to form a stacked LSTM,
with the second LSTM taking in the outputs of the first LSTM and
computing the final result. Defaults to 1.
bidirectional (bool): If `True`, becomes a bidirectional LSTM. Defaults
to `True`.
"""
dropout: float = 0.4
lstm_dim: int = 32
num_layers: int = 1
bidirectional: bool = True
pack_sequence: bool = True
def __init__(
self, config: Config, embed_dim: int, padding_value: float = 0.0
) -> None:
super().__init__(config)
self.padding_value: float = padding_value
self.dropout = nn.Dropout(config.dropout)
self.lstm = nn.LSTM(
embed_dim,
config.lstm_dim,
num_layers=config.num_layers,
bidirectional=config.bidirectional,
batch_first=True,
)
self.representation_dim: int = config.lstm_dim * (
2 if config.bidirectional else 1
)
self.pack_sequence = config.pack_sequence
log_class_usage(__class__)
def forward(
self,
embedded_tokens: torch.Tensor,
seq_lengths: torch.Tensor,
states: Optional[Tuple[torch.Tensor, torch.Tensor]] = None,
) -> Tuple[torch.Tensor, Tuple[torch.Tensor, torch.Tensor]]:
"""
Given an input batch of sequential data such as word embeddings, produces
a bidirectional LSTM representation of the sequential input and new state
tensors.
Args:
embedded_tokens (torch.Tensor): Input tensor of shape
(bsize x seq_len x input_dim).
seq_lengths (torch.Tensor): List of sequences lengths of each batch element.
states (Tuple[torch.Tensor, torch.Tensor]): Tuple of tensors containing
the initial hidden state and the cell state of each element in
the batch. Each of these tensors have a dimension of
(bsize x num_layers * num_directions x nhid). Defaults to `None`.
Returns:
Tuple[torch.Tensor, Tuple[torch.Tensor, torch.Tensor]]: Bidirectional
LSTM representation of input and the state of the LSTM `t = seq_len`.
Shape of representation is (bsize x seq_len x representation_dim).
Shape of each state is (bsize x num_layers * num_directions x nhid).
"""
if self.dropout.p > 0.0:
embedded_tokens = self.dropout(embedded_tokens)
if states is not None:
# convert (h0, c0) from (bsz x num_layers*num_directions x nhid) to
# (num_layers*num_directions x bsz x nhid)
states = (
states[0].transpose(0, 1).contiguous(),
states[1].transpose(0, 1).contiguous(),
)
else:
# We need to send in a zero state that matches the batch size, because
# torch.jit tracing currently traces this as constant and therefore
# locks the traced model into a static batch size.
# see https://github.com/pytorch/pytorch/issues/16664
state = torch.zeros(
self.config.num_layers * (2 if self.config.bidirectional else 1),
embedded_tokens.size(0), # batch size
self.config.lstm_dim,
device=torch.cuda.current_device() if cuda.CUDA_ENABLED else None,
)
states = (state, state)
if torch.onnx.is_in_onnx_export():
lstm_in = [embedded_tokens, states[0], states[1]] + [
param.detach() for param in self.lstm._flat_weights
]
rep, new_state_0, new_state_1 = torch.ops._caffe2.InferenceLSTM(
lstm_in,
self.lstm.num_layers,
self.lstm.bias,
True,
self.lstm.bidirectional,
)
new_state = (new_state_0, new_state_1)
else:
if self.pack_sequence:
rnn_input = pack_padded_sequence(
embedded_tokens, seq_lengths, batch_first=True, enforce_sorted=False
)
else:
rnn_input = embedded_tokens
rep, new_state = self.lstm(rnn_input, states)
if self.pack_sequence:
rep, _ = pad_packed_sequence(
rep,
padding_value=self.padding_value,
batch_first=True,
total_length=embedded_tokens.size(1),
)
# Make sure the output from LSTM is padded to input's sequence length.
# convert states back to (bsz x num_layers*num_directions x nhid) to be
# used in data parallel model
new_state = (new_state[0].transpose(0, 1), new_state[1].transpose(0, 1))
return rep, new_state
| [
"facebook-github-bot@users.noreply.github.com"
] | facebook-github-bot@users.noreply.github.com |
8af9fa88d0898ada6c762ed02c15388cf867c3f9 | e61e725d9a962837e2b56f84e3934b0fb52dd0b1 | /eoxserver/core/decoders/kvp.py | e8dfa6aa753451cb4002a5327a8805e30c22da58 | [
"LicenseRef-scancode-warranty-disclaimer",
"MIT"
] | permissive | ESA-VirES/eoxserver | 719731172c2e5778186a4b144a201f602c07ce7e | d7b65adf9317538b267d5cbb1281acb72bc0de2c | refs/heads/master | 2021-01-21T20:06:22.164030 | 2014-10-14T12:21:13 | 2014-10-14T12:21:13 | 25,151,203 | 1 | 0 | null | 2014-12-04T09:46:54 | 2014-10-13T09:00:54 | Python | UTF-8 | Python | false | false | 3,405 | py | #-------------------------------------------------------------------------------
# $Id$
#
# Project: EOxServer <http://eoxserver.org>
# Authors: Fabian Schindler <fabian.schindler@eox.at>
#
#-------------------------------------------------------------------------------
# Copyright (C) 2013 EOX IT Services GmbH
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies of this Software or works derived from this Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#-------------------------------------------------------------------------------
from cgi import parse_qs
from django.http import QueryDict
from eoxserver.core.decoders.base import BaseParameter
class Parameter(BaseParameter):
""" Parameter for KVP values.
"""
key = None
def __init__(self, key=None, type=None, num=1, default=None, locator=None):
super(Parameter, self).__init__(type, num, default)
self.key = key.lower() if key is not None else None
self._locator = locator
def select(self, decoder, decoder_class=None):
return decoder._query_dict.get(self.key, [])
@property
def locator(self):
return self._locator or self.key
class DecoderMetaclass(type):
""" Metaclass for KVP Decoders to allow easy parameter declaration.
"""
def __init__(cls, name, bases, dct):
# set the "key" attribute of any parameter to the parameters name
# if no other key was specified.
for key, value in dct.items():
if isinstance(value, Parameter) and value.key is None:
value.key = key.lower()
super(DecoderMetaclass, cls).__init__(name, bases, dct)
class Decoder(object):
""" Base class for KVP decoders.
"""
__metaclass__ = DecoderMetaclass
def __init__(self, params):
query_dict = {}
if isinstance(params, QueryDict):
for key, values in params.lists():
query_dict[key.lower()] = values
elif isinstance(params, basestring):
tmp = parse_qs(params)
for key, values in tmp.items():
query_dict[key.lower()] = values
elif isinstance(params, dict):
for key, value in params.items():
query_dict[key.lower()] = (value,)
else:
raise ValueError(
"Decoder input '%s' not supported." % type(params).__name__
)
self.kvp = params
self._query_dict = query_dict
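
For illustration, the key normalization that `Decoder.__init__` performs can be sketched standalone in Python 3, using `urllib.parse.parse_qs` (the modern counterpart of the deprecated `cgi.parse_qs`; the helper name is mine):

```python
from urllib.parse import parse_qs  # Python 3 counterpart of cgi.parse_qs

def lower_kvp(query):
    # Normalize KVP keys to lower case, as Decoder.__init__ does above.
    return {key.lower(): values for key, values in parse_qs(query).items()}

print(lower_kvp("SERVICE=WCS&Request=GetCapabilities"))
# {'service': ['WCS'], 'request': ['GetCapabilities']}
```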
| [
"fabian.schindler@gmx.at"
] | fabian.schindler@gmx.at |
9a9db610e6b69d7c717c6efb97000b1744385b62 | ba0a2b0d2d1534443ea34320675aadfa378457b6 | /Array/Q1409_Queries on a Permutation With Key.py | a509f13f9be2dba1cdb3360e472b9834be7c3a98 | [] | no_license | Luolingwei/LeetCode | 73abd58af116f3ec59fd6c76f662beb2a413586c | 79d4824879d0faed117eee9d99615cd478432a14 | refs/heads/master | 2021-08-08T17:45:19.215454 | 2021-06-17T17:03:15 | 2021-06-17T17:03:15 | 152,186,910 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 491 | py |
# Approach: `pos` tracks the current position of every number.
# For each query q, pos[q - 1] is its current position; every entry whose
# position is smaller shifts back by one, and q moves to the front (position 0).
class Solution:
def processQueries(self, queries, m):
pos = [i for i in range(0, m)]
ans = []
for q in queries:
curpos = pos[q - 1]
ans.append(curpos)
for i, p in enumerate(pos):
if p < curpos:
pos[i] += 1
pos[q - 1] = 0
return ans
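
The same move-to-front idea as a self-contained sketch, runnable outside the LeetCode harness (the function name is mine):

```python
def process_queries(queries, m):
    pos = list(range(m))  # pos[i] = current position of the number i + 1
    ans = []
    for q in queries:
        cur = pos[q - 1]
        ans.append(cur)
        for i, p in enumerate(pos):
            if p < cur:
                pos[i] += 1  # everything in front of q shifts back by one
        pos[q - 1] = 0       # q moves to the front
    return ans

print(process_queries([3, 1, 2, 1], 5))  # [2, 1, 2, 1]
```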
| [
"564258080@qq.com"
] | 564258080@qq.com |
7881f2e46a3dc62785502c57f332a7aa361dd5cc | 5c94e032b2d43ac347f6383d0a8f0c03ec3a0485 | /Launchpad_Pro/TargetTrackComponent.py | 31fdc1fbfd5efe0c7e257a083c91fa387bc445d6 | [] | no_license | Elton47/Ableton-MRS-10.1.13 | 997f99a51157bd2a2bd1d2dc303e76b45b1eb93d | 54bb64ba5e6be52dd6b9f87678ee3462cc224c8a | refs/heads/master | 2022-07-04T01:35:27.447979 | 2020-05-14T19:02:09 | 2020-05-14T19:02:09 | 263,990,585 | 0 | 0 | null | 2020-05-14T18:12:04 | 2020-05-14T18:12:03 | null | UTF-8 | Python | false | false | 3,081 | py | # uncompyle6 version 3.6.7
# Python bytecode 2.7 (62211)
# Decompiled from: Python 2.7.17 (default, Dec 23 2019, 21:25:33)
# [GCC 4.2.1 Compatible Apple LLVM 11.0.0 (clang-1100.0.33.16)]
# Embedded file name: /Users/versonator/Jenkins/live/output/Live/mac_64_static/Release/python-bundle/MIDI Remote Scripts/Launchpad_Pro/TargetTrackComponent.py
# Compiled at: 2020-01-09 15:21:34
from __future__ import absolute_import, print_function, unicode_literals
from _Framework.SubjectSlot import Subject, subject_slot, subject_slot_group
from _Framework.ControlSurfaceComponent import ControlSurfaceComponent
class TargetTrackComponent(ControlSurfaceComponent, Subject):
    """
    TargetTrackComponent handles determining the track to target for
    note mode-related functionality and notifying listeners.
    """
__module__ = __name__
__subject_events__ = (u'target_track', )
_target_track = None
_armed_track_stack = []
def __init__(self, *a, **k):
super(TargetTrackComponent, self).__init__(*a, **k)
self._on_tracks_changed.subject = self.song()
self._on_tracks_changed()
@property
def target_track(self):
return self._target_track
def on_selected_track_changed(self):
if not self._armed_track_stack:
self._set_target_track()
@subject_slot('tracks')
def _on_tracks_changed(self):
tracks = filter(lambda t: t.can_be_armed and t.has_midi_input, self.song().tracks)
self._on_arm_changed.replace_subjects(tracks)
self._on_frozen_state_changed.replace_subjects(tracks)
self._refresh_armed_track_stack(tracks)
@subject_slot_group('arm')
def _on_arm_changed(self, track):
if track in self._armed_track_stack:
self._armed_track_stack.remove(track)
if track.arm:
self._armed_track_stack.append(track)
self._set_target_track(track)
else:
self._set_target_track()
@subject_slot_group('is_frozen')
def _on_frozen_state_changed(self, track):
if track in self._armed_track_stack:
self._armed_track_stack.remove(track)
if track == self._target_track:
self._set_target_track()
def _set_target_track(self, target=None):
new_target = self._target_track
if target is None:
if self._armed_track_stack:
new_target = self._armed_track_stack[(-1)]
else:
new_target = self.song().view.selected_track
else:
new_target = target
if self._target_track != new_target:
self._target_track = new_target
self.notify_target_track()
return
def _refresh_armed_track_stack(self, all_tracks):
for track in self._armed_track_stack:
if track not in all_tracks:
self._armed_track_stack.remove(track)
for track in all_tracks:
if track.arm and track not in self._armed_track_stack:
self._armed_track_stack.append(track)
self._set_target_track() | [
"ahmed.emerah@icloud.com"
] | ahmed.emerah@icloud.com |
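
The `_armed_track_stack` logic in the TargetTrackComponent above boils down to "the most recently armed track wins, falling back to the selected track"; a toy standalone model of that policy (all names mine, not Live's API):

```python
# toy model of the "most recently armed track wins" stack
stack = []

def on_arm_changed(track, armed):
    # re-arming moves a track to the top; disarming pops it out
    if track in stack:
        stack.remove(track)
    if armed:
        stack.append(track)
    return stack[-1] if stack else "selected-track"

print(on_arm_changed("drums", True))   # drums
print(on_arm_changed("bass", True))    # bass
print(on_arm_changed("bass", False))   # drums
```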
f1977b359869d02dbf912eadf7a20ba9e2a36413 | d6480f551154ed1d6e32a65b972ba62c1e1eb998 | /job_test.py | 189de8064200b0b5e93f41dbd850f2cbf1176639 | [] | no_license | ToferC/scraper-Gitlawca | c8de2739878d17789baf7a0f7d139c8928f3812f | f9ec04af54faa6bf49e5be7cc953d152fce926b1 | refs/heads/master | 2021-01-10T15:17:18.242633 | 2017-06-27T17:32:26 | 2017-06-27T17:32:26 | 49,247,570 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,044 | py | import requests as requests
from bs4 import BeautifulSoup
title = ""
paragraphs = []
def parse_job(url):
# Parse job html and collect links, adding them to list in the job dict
html = requests.get(url, headers={'User-Agent': 'Mozilla/5.0'})
soup = BeautifulSoup(html.text, 'html.parser')
title = soup.find_all('h1')
text = soup.find_all('p')
uls = soup.find_all('ul')
    # the 5th-7th <ul> blocks on this particular page hold the relevant lists
    # (fragile: breaks if the page layout changes)
    responsibilities_1 = uls[4]
    responsibilities_2 = uls[5]
    requirements_1 = uls[6]
for paragraph in text:
paragraphs.append(paragraph.text)
print(title[0].text)
print("")
for para in paragraphs[4:-9]:
print(para)
print("\nResponsibilities")
for ul in responsibilities_1:
print(ul.text)
print("\nResponsibilities")
for ul in responsibilities_2:
print(ul.text)
print("\nRequirements")
for ul in requirements_1:
print(ul.text)
print("")
if __name__ == '__main__':
parse_job("https://resources.workable.com/senior-auditor-job-description")
| [
"cgeist7@gmail.com"
] | cgeist7@gmail.com |
331d183f37714c6bc90433cbfacca2e0ac19f44f | 35979ed5415386a78c4b7a73e716e71e9bd86ac1 | /process_kitti.py | ec61c6e18784c94b07cab606ba43ad06e574d0b1 | [
"MIT"
] | permissive | smalgireddy/dusty-gan | 6a082995b467bae5f27a6103fb1e061890e5f30e | 63ea1757660806cd04976b24fc7733ab26b2a3a1 | refs/heads/main | 2023-07-01T18:43:04.665186 | 2021-08-02T07:04:29 | 2021-08-02T07:04:29 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 7,348 | py | import argparse
import multiprocessing
import os
import os.path as osp
from collections import defaultdict
from glob import glob
import joblib
import matplotlib.cm as cm
import numba
import numpy as np
import torch
from PIL import Image
from tqdm import tqdm
from datasets.kitti import KITTIOdometry
# support semantic kitti only for this script
labelmap = {
0: 0, # "unlabeled"
1: 0, # "outlier" mapped to "unlabeled" --------------------------mapped
10: 1, # "car"
11: 2, # "bicycle"
13: 5, # "bus" mapped to "other-vehicle" --------------------------mapped
15: 3, # "motorcycle"
16: 5, # "on-rails" mapped to "other-vehicle" ---------------------mapped
18: 4, # "truck"
20: 5, # "other-vehicle"
30: 6, # "person"
31: 7, # "bicyclist"
32: 8, # "motorcyclist"
40: 9, # "road"
44: 10, # "parking"
48: 11, # "sidewalk"
49: 12, # "other-ground"
50: 13, # "building"
51: 14, # "fence"
52: 0, # "other-structure" mapped to "unlabeled" ------------------mapped
60: 9, # "lane-marking" to "road" ---------------------------------mapped
70: 15, # "vegetation"
71: 16, # "trunk"
72: 17, # "terrain"
80: 18, # "pole"
81: 19, # "traffic-sign"
99: 0, # "other-object" to "unlabeled" ----------------------------mapped
252: 1, # "moving-car" to "car" ------------------------------------mapped
253: 7, # "moving-bicyclist" to "bicyclist" ------------------------mapped
254: 6, # "moving-person" to "person" ------------------------------mapped
255: 8, # "moving-motorcyclist" to "motorcyclist" ------------------mapped
256: 5, # "moving-on-rails" mapped to "other-vehicle" --------------mapped
257: 5, # "moving-bus" mapped to "other-vehicle" -------------------mapped
258: 4, # "moving-truck" to "truck" --------------------------------mapped
259: 5, # "moving-other"-vehicle to "other-vehicle" ----------------mapped
}
_n_classes = max(labelmap.values()) + 1
_colors = cm.turbo(np.asarray(range(_n_classes)) / (_n_classes - 1))[:, :3] * 255
palette = list(np.uint8(_colors).flatten())
@numba.jit
def scatter(arrary, index, value):
for (h, w), v in zip(index, value):
arrary[h, w] = v
return arrary
def projection(source, grid, order, H, W):
assert source.ndim == 2, source.ndim
C = source.shape[1]
proj = np.zeros((H, W, C))
proj = np.asarray(proj, dtype=source.dtype)
proj = scatter(proj, grid[order], source[order])
return proj
def process_point_clouds(point_path, H=64, W=2048):
save_dir = lambda x: x.replace("dataset/sequences", "dusty-gan/sequences")
# setup point clouds
points = np.fromfile(point_path, dtype=np.float32).reshape((-1, 4))
xyz = points[:, :3] # xyz
x = xyz[:, 0]
y = xyz[:, 1]
z = xyz[:, 2]
depth = np.linalg.norm(xyz, ord=2, axis=1)
order = np.argsort(-depth)
# the i-th quadrant
# suppose the points are ordered counterclockwise
quads = np.zeros_like(x)
quads[(x >= 0) & (y >= 0)] = 0 # 1st
quads[(x < 0) & (y >= 0)] = 1 # 2nd
quads[(x < 0) & (y < 0)] = 2 # 3rd
quads[(x >= 0) & (y < 0)] = 3 # 4th
# split between the 3rd and 1st quadrants
diff = np.roll(quads, 1) - quads
(start_inds,) = np.where(diff == 3) # number of lines
inds = list(start_inds) + [len(quads)] # add the last index
# vertical grid
    line_idx = 63  # scan lines are filled from index 63 down to 0
grid_h = np.zeros_like(x)
for i in reversed(range(len(start_inds))):
grid_h[inds[i] : inds[i + 1]] = line_idx
line_idx -= 1
# horizontal grid
yaw = -np.arctan2(y, x) # [-pi,pi]
grid_w = (yaw / np.pi + 1) / 2 % 1 # [0,1]
grid_w = np.floor(grid_w * W)
grid = np.stack((grid_h, grid_w), axis=-1).astype(np.int32)
proj = projection(points, grid, order, H, W)
save_path = save_dir(point_path).replace(".bin", ".npy")
os.makedirs(osp.dirname(save_path), exist_ok=True)
np.save(save_path, proj)
# for semantic kitti
label_path = point_path.replace("/velodyne", "/labels")
label_path = label_path.replace(".bin", ".label")
if osp.exists(label_path):
labels = np.fromfile(label_path, dtype=np.int32).reshape((-1, 1))
labels = np.vectorize(labelmap.__getitem__)(labels & 0xFFFF)
labels = projection(labels, grid, order, H, W)
save_path = save_dir(label_path).replace(".label", ".png")
os.makedirs(osp.dirname(save_path), exist_ok=True)
labels = Image.fromarray(np.uint8(labels[..., 0]), mode="P")
labels.putpalette(palette)
labels.save(save_path)
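
The horizontal-grid computation in `process_point_clouds` maps each point's yaw angle to one of `W` image columns; a minimal standalone sketch of that mapping (the helper name is mine):

```python
import math

def yaw_to_column(x, y, width=2048):
    # same mapping as above: yaw in [-pi, pi] -> [0, 1) -> column index
    yaw = -math.atan2(y, x)
    frac = (yaw / math.pi + 1) / 2 % 1
    return int(frac * width)

print(yaw_to_column(1.0, 0.0))    # 1024  (straight ahead -> image centre)
print(yaw_to_column(-1.0, 1e-9))  # 0     (directly behind -> left edge)
```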
def mean(tensor, dim):
tensor = tensor.clone()
kwargs = {"dim": dim, "keepdim": True}
valid = (~tensor.isnan()).float()
tensor[tensor.isnan()] = 0
tensor = torch.sum(tensor * valid, **kwargs) / valid.sum(**kwargs)
return tensor
@torch.no_grad()
def compute_avg_angles(loader):
max_depth = loader.dataset.max_depth
summary = defaultdict(float)
for item in tqdm(loader):
xyz_batch = item["xyz"]
x = xyz_batch[:, [0]]
y = xyz_batch[:, [1]]
z = xyz_batch[:, [2]]
depth = torch.sqrt(x ** 2 + y ** 2 + z ** 2) * max_depth
valid = (depth > 1e-8).float()
summary["total_data"] += len(valid)
summary["total_valid"] += valid.sum(dim=0) # (1,64,2048)
r = torch.sqrt(x ** 2 + y ** 2)
pitch = torch.atan2(z, r)
yaw = torch.atan2(y, x)
summary["pitch"] += torch.sum(pitch * valid, dim=0)
summary["yaw"] += torch.sum(yaw * valid, dim=0)
summary["pitch"] = summary["pitch"] / summary["total_valid"]
summary["yaw"] = summary["yaw"] / summary["total_valid"]
angles = torch.cat([summary["pitch"], summary["yaw"]], dim=0)
mean_pitch = mean(summary["pitch"], 2).expand_as(summary["pitch"])
mean_yaw = mean(summary["yaw"], 1).expand_as(summary["yaw"])
mean_angles = torch.cat([mean_pitch, mean_yaw], dim=0)
mean_valid = summary["total_valid"] / summary["total_data"]
valid = (mean_valid > 0).float()
angles[angles.isnan()] = 0.0
angles = valid * angles + (1 - valid) * mean_angles
assert angles.isnan().sum() == 0
return angles, mean_valid
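
The `mean` helper above computes a NaN-aware average along a dimension; the one-dimensional idea in plain Python (helper name mine):

```python
import math

def masked_mean(values):
    # NaN entries are treated as missing, as in mean(tensor, dim) above
    valid = [v for v in values if not math.isnan(v)]
    return sum(valid) / len(valid)

print(masked_mean([1.0, float("nan"), 3.0]))  # 2.0
```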
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--root-dir", type=str, required=True)
args = parser.parse_args()
# 2D maps
split_dirs = sorted(glob(osp.join(args.root_dir, "dataset/sequences", "*")))
H, W = 64, 2048
for split_dir in tqdm(split_dirs):
point_paths = sorted(glob(osp.join(split_dir, "velodyne", "*.bin")))
joblib.Parallel(
n_jobs=multiprocessing.cpu_count(), verbose=10, pre_dispatch="all"
)(
[
joblib.delayed(process_point_clouds)(point_path, H, W)
for point_path in point_paths
]
)
# average angles
dataset = KITTIOdometry(
root=osp.join(args.root_dir, "dusty-gan"),
split="train",
shape=(H, W),
)
loader = torch.utils.data.DataLoader(
dataset,
batch_size=64,
num_workers=4,
drop_last=False,
)
N = len(dataset)
angles, valid = compute_avg_angles(loader)
torch.save(angles, osp.join(args.root_dir, "angles.pt"))
| [
"k_nakashima@irvs.ait.kyushu-u.ac.jp"
] | k_nakashima@irvs.ait.kyushu-u.ac.jp |
8bae87a094d5add07026dc0f8af0ba689bc86f0b | 20a9787564f76ae0fcf2332a8655b21bae0646a3 | /Lists/findClosest.py | 2995f497eeeb966e539567e05bd353bed2675c67 | [] | no_license | nidhiatwork/Python_Coding_Practice | 3b33a40c947413c2695d3ee77728fa69430f14cd | 9d5071a8ddcda19181d3db029fb801d4e3233382 | refs/heads/master | 2023-02-08T20:50:47.522565 | 2023-02-04T10:04:10 | 2023-02-04T10:04:10 | 194,607,759 | 5 | 3 | null | null | null | null | UTF-8 | Python | false | false | 1,545 | py | '''
Find three closest elements from given three sorted arrays
Given three sorted arrays A[], B[] and C[], find 3 elements i, j and k from A, B and C respectively such that max(abs(A[i] – B[j]), abs(B[j] – C[k]), abs(C[k] – A[i])) is minimized. Here abs() indicates absolute value.
Input: A[] = {1, 4, 10}
B[] = {2, 15, 20}
C[] = {10, 12}
Output: 10 15 10
10 from A, 15 from B and 10 from C
'''
import sys
def findClosest(A, B, C, p, q, r):
# Initialize min diff
diff = sys.maxsize
res_i = 0
res_j = 0
res_k = 0
    # Traverse the three arrays
i = 0
j = 0
k = 0
while(i < p and j < q and k < r):
# Find minimum and maximum of
# current three elements
minimum = min(A[i], min(B[j], C[k]))
maximum = max(A[i], max(B[j], C[k]))
# Update result if current diff is
# less than the min diff so far
if maximum-minimum < diff:
res_i = i
res_j = j
res_k = k
diff = maximum - minimum
            # We can't get less than 0 as
            # values are absolute
if diff == 0:
break
# Increment index of array with
# smallest value
if A[i] == minimum:
i = i+1
elif B[j] == minimum:
j = j+1
else:
k = k+1
# Print result
print(A[res_i], " ", B[res_j], " ", C[res_k])
# Driver Program
A = [1, 4, 10]
B = [2, 15, 20]
C = [10, 12]
p = len(A)
q = len(B)
r = len(C)
findClosest(A,B,C,p,q,r)
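
A brute-force cross-check of the three-pointer result above (O(p*q*r), illustration only; the helper name is mine):

```python
import itertools

def brute_closest(A, B, C):
    # exhaustively minimize max pairwise absolute difference over A x B x C
    best = None
    for a, b, c in itertools.product(A, B, C):
        spread = max(abs(a - b), abs(b - c), abs(c - a))
        if best is None or spread < best[0]:
            best = (spread, a, b, c)
    return best

print(brute_closest([1, 4, 10], [2, 15, 20], [10, 12]))  # (5, 10, 15, 10)
```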
| [
"“nidhi.bhushan123@gmail.com”"
] | “nidhi.bhushan123@gmail.com” |
40ac71dbc346a1e413eba935c0883f7f30392c9c | bb6ebff7a7f6140903d37905c350954ff6599091 | /third_party/WebKit/Source/wtf/wtf.gyp | 7706bdd0535c65bf32ed076a8e28c5f9dae37cca | [
"LGPL-2.0-or-later",
"GPL-1.0-or-later",
"MIT",
"Apache-2.0",
"BSD-3-Clause"
] | permissive | PDi-Communication-Systems-Inc/lollipop_external_chromium_org | faa6602bd6bfd9b9b6277ce3cd16df0bd26e7f2f | ccadf4e63dd34be157281f53fe213d09a8c66d2c | refs/heads/master | 2022-12-23T18:07:04.568931 | 2016-04-11T16:03:36 | 2016-04-11T16:03:36 | 53,677,925 | 0 | 1 | BSD-3-Clause | 2022-12-09T23:46:46 | 2016-03-11T15:49:07 | C++ | UTF-8 | Python | false | false | 5,564 | gyp | # Copyright (C) 2012 Google Inc. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
{
'includes': [
'../build/win/precompile.gypi',
'../build/features.gypi',
'wtf.gypi',
],
'conditions': [
['gcc_version>=46', {
'target_defaults': {
# Disable warnings about c++0x compatibility, as some names (such as nullptr) conflict
# with upcoming c++0x types.
'cflags_cc': ['-Wno-c++0x-compat'],
},
}],
],
'targets': [
{
# This target sets up defines and includes that are required by WTF and
# its dependents.
'target_name': 'wtf_config',
'type': 'none',
'direct_dependent_settings': {
'defines': [
# Import features_defines from features.gypi
'<@(feature_defines)',
],
'conditions': [
['OS=="win"', {
'defines': [
'__STD_C',
'_CRT_SECURE_NO_DEPRECATE',
'_SCL_SECURE_NO_DEPRECATE',
'CRASH=__debugbreak',
],
'include_dirs': [
'os-win32',
],
}],
],
},
},
{
'target_name': 'wtf',
'type': '<(component)',
'include_dirs': [
'..',
],
'dependencies': [
'wtf_config',
'../config.gyp:config',
'<(DEPTH)/third_party/icu/icu.gyp:icui18n',
'<(DEPTH)/third_party/icu/icu.gyp:icuuc',
],
'sources': [
'<@(wtf_files)',
],
'defines': [
'WTF_IMPLEMENTATION=1',
],
'direct_dependent_settings': {
'include_dirs': [
'..',
],
# Some warnings occur in WTF headers, so they must also be disabled
# in targets that use WTF.
'msvs_disabled_warnings': [
# Don't complain about calling specific versions of templatized
# functions (e.g. in RefPtrHashMap.h).
4344,
# Don't complain about using "this" in an initializer list
# (e.g. in StringImpl.h).
4355,
# Disable c4267 warnings until we fix size_t to int truncations.
4267,
],
},
'export_dependent_settings': [
'wtf_config',
'<(DEPTH)/third_party/icu/icu.gyp:icui18n',
'<(DEPTH)/third_party/icu/icu.gyp:icuuc',
],
# Disable c4267 warnings until we fix size_t to int truncations.
'msvs_disabled_warnings': [4127, 4355, 4510, 4512, 4610, 4706, 4068, 4267],
'conditions': [
['OS=="android"', {
'link_settings': {
'libraries': [
'-llog',
]
}
}],
['OS=="win"', {
'sources/': [
['exclude', 'ThreadIdentifierDataPthreads\\.(h|cpp)$'],
['exclude', 'ThreadingPthreads\\.cpp$'],
],
'include_dirs!': [
'<(SHARED_INTERMEDIATE_DIR)/blink',
],
'conditions': [
['component=="shared_library"', {
# Chromium windows multi-dll build enables C++ exception and this
# causes wtf to generate 4291 warning due to operator new/delete
# implementations. Disable the warning for chromium windows
# multi-dll build.
'msvs_disabled_warnings': [4291],
'direct_dependent_settings': {
'msvs_disabled_warnings': [4291],
},
}],
],
}, { # OS!="win"
'sources/': [
['exclude', 'Win\\.cpp$'],
],
}],
['OS=="mac"', {
'link_settings': {
'libraries': [
'$(SDKROOT)/System/Library/Frameworks/CoreFoundation.framework',
'$(SDKROOT)/System/Library/Frameworks/Foundation.framework',
]
}
}, { # OS!="mac"
'sources/': [
['exclude', 'CF\\.cpp$'],
['exclude', 'Mac\\.mm$'],
# mac is the only OS that uses WebKit's copy of TCMalloc.
['exclude', 'TC.*\\.(cpp|h)$'],
],
}],
],
},
]
}
| [
"mrobbeloth@pdiarm.com"
] | mrobbeloth@pdiarm.com |
05754d7839302ac6bb99ee3838a45024a6e77e0f | eb9c3dac0dca0ecd184df14b1fda62e61cc8c7d7 | /google/ads/googleads/v6/googleads-py/google/ads/googleads/v6/enums/types/budget_campaign_association_status.py | 26ee58e892b500dbd79115577df8ce2c5844f71e | [
"Apache-2.0"
] | permissive | Tryweirder/googleapis-gen | 2e5daf46574c3af3d448f1177eaebe809100c346 | 45d8e9377379f9d1d4e166e80415a8c1737f284d | refs/heads/master | 2023-04-05T06:30:04.726589 | 2021-04-13T23:35:20 | 2021-04-13T23:35:20 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,277 | py | # -*- coding: utf-8 -*-
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import proto # type: ignore
__protobuf__ = proto.module(
package='google.ads.googleads.v6.enums',
marshal='google.ads.googleads.v6',
manifest={
'BudgetCampaignAssociationStatusEnum',
},
)
class BudgetCampaignAssociationStatusEnum(proto.Message):
r"""Message describing the status of the association between the
Budget and the Campaign.
"""
class BudgetCampaignAssociationStatus(proto.Enum):
r"""Possible statuses of the association between the Budget and
the Campaign.
"""
UNSPECIFIED = 0
UNKNOWN = 1
ENABLED = 2
REMOVED = 3
__all__ = tuple(sorted(__protobuf__.manifest))
| [
"bazel-bot-development[bot]@users.noreply.github.com"
] | bazel-bot-development[bot]@users.noreply.github.com |
4ec54415712d40c9fde42055d4b35b1695da1b5b | 05f99d380ca5dd0764c6ebdc2e45efe91105fd57 | /src/htsql/core/cmd/embed.py | b963322b407053f390e5f57de7e80b1ee32e1c9b | [
"Apache-2.0"
] | permissive | prometheusresearch/htsql | d2ce07c9e6b5071f6b5684a84e374cb32075bbe8 | fd2dc19d145508d956ba50970abc76e7299669b0 | refs/heads/master | 2020-12-26T11:20:04.593639 | 2020-08-11T18:04:14 | 2020-08-11T18:04:14 | 237,492,309 | 21 | 3 | Apache-2.0 | 2020-02-13T14:15:00 | 2020-01-31T18:32:56 | Python | UTF-8 | Python | false | false | 4,206 | py | #
# Copyright (c) 2006-2013, Prometheus Research, LLC
#
from ..util import isfinite, to_name
from ..adapter import Adapter, adapt, adapt_many
from ..domain import (UntypedDomain, BooleanDomain, IntegerDomain, FloatDomain,
DecimalDomain, DateDomain, TimeDomain, DateTimeDomain, ListDomain,
IdentityDomain, ID, Value)
import types
import datetime
import decimal
class Embed(Adapter):
adapt(object)
def __init__(self, data):
self.data = data
def __call__(self):
raise TypeError("unable to embed a value of type %s"
% type(self.data))
class EmbedValue(Embed):
adapt(Value)
def __call__(self):
return self.data
class EmbedUntyped(Embed):
adapt(str)
def __call__(self):
data = self.data
if "\0" in data:
raise TypeError("a string should not contain a NIL character:"
" %s" % repr(data))
return Value(UntypedDomain(), data)
class EmbedNull(Embed):
adapt(type(None))
def __call__(self):
return Value(UntypedDomain(), None)
class EmbedBoolean(Embed):
adapt(bool)
def __call__(self):
return Value(BooleanDomain(), self.data)
class EmbedInteger(Embed):
    adapt(int)
def __call__(self):
return Value(IntegerDomain(), self.data)
class EmbedFloat(Embed):
adapt(float)
def __call__(self):
if not isfinite(self.data):
raise TypeError("a float value must be finite")
return Value(FloatDomain(), self.data)
class EmbedDecimal(Embed):
adapt(decimal.Decimal)
def __call__(self):
if not isfinite(self.data):
raise TypeError("a decimal value must be finite")
return Value(DecimalDomain(), self.data)
class EmbedDate(Embed):
adapt(datetime.date)
def __call__(self):
return Value(DateDomain(), self.data)
class EmbedTime(Embed):
adapt(datetime.time)
def __call__(self):
return Value(TimeDomain(), self.data)
class EmbedDateTime(Embed):
adapt(datetime.datetime)
def __call__(self):
return Value(DateTimeDomain(), self.data)
class EmbedList(Embed):
adapt_many(list, tuple)
def __call__(self):
entry_values = [Embed.__invoke__(entry) for entry in self.data]
domain_set = set(entry_value.domain for entry_value in entry_values
if not isinstance(entry_value.domain, UntypedDomain))
if not domain_set:
domain = UntypedDomain()
return Value(ListDomain(domain), [entry_value.data
for entry_value in entry_values])
if len(domain_set) > 1:
domain_names = sorted(str(domain) for domain in domain_set)
raise TypeError("multiple entry domains: %s"
% ", ".join(domain_names))
domain = domain_set.pop()
entries = [entry_value.data if entry_value.domain == domain else
domain.parse(entry_value.data)
for entry_value in entry_values]
return Value(ListDomain(domain), entries)
class EmbedIdentity(Embed):
adapt(ID)
def __call__(self):
entry_values = [Embed.__invoke__(entry) for entry in self.data]
if any(value.data is None for value in entry_values):
raise TypeError("an ID value should not contain a NULL entry")
domain = IdentityDomain([value.domain for value in entry_values])
entries = tuple(value.data for value in entry_values)
return Value(domain, entries)
def embed(base_environment, **parameters):
environment = {}
if base_environment:
for name in sorted(base_environment):
value = base_environment[name]
if not isinstance(value, Value):
value = Embed.__invoke__(value)
name = to_name(name)
environment[name] = value
for name in sorted(parameters):
value = parameters[name]
if not isinstance(value, Value):
value = Embed.__invoke__(value)
name = to_name(name)
environment[name] = value
return environment
| [
"xi@resolvent.net"
] | xi@resolvent.net |
d5da7125d3e25d62b08ca3d56cd9f177d3c46418 | 5a52ccea88f90dd4f1acc2819997fce0dd5ffb7d | /alipay/aop/api/request/AlipayEcoMycarMaintainAftersaleSyncRequest.py | d0ba0166b615623f68f1e4eefa1377121c33b32a | [
"Apache-2.0"
] | permissive | alipay/alipay-sdk-python-all | 8bd20882852ffeb70a6e929038bf88ff1d1eff1c | 1fad300587c9e7e099747305ba9077d4cd7afde9 | refs/heads/master | 2023-08-27T21:35:01.778771 | 2023-08-23T07:12:26 | 2023-08-23T07:12:26 | 133,338,689 | 247 | 70 | Apache-2.0 | 2023-04-25T04:54:02 | 2018-05-14T09:40:54 | Python | UTF-8 | Python | false | false | 4,010 | py | #!/usr/bin/env python
# -*- coding: utf-8 -*-
import json
from alipay.aop.api.FileItem import FileItem
from alipay.aop.api.constant.ParamConstants import *
from alipay.aop.api.domain.AlipayEcoMycarMaintainAftersaleSyncModel import AlipayEcoMycarMaintainAftersaleSyncModel
class AlipayEcoMycarMaintainAftersaleSyncRequest(object):
def __init__(self, biz_model=None):
self._biz_model = biz_model
self._biz_content = None
self._version = "1.0"
self._terminal_type = None
self._terminal_info = None
self._prod_code = None
self._notify_url = None
self._return_url = None
self._udf_params = None
self._need_encrypt = False
@property
def biz_model(self):
return self._biz_model
@biz_model.setter
def biz_model(self, value):
self._biz_model = value
@property
def biz_content(self):
return self._biz_content
@biz_content.setter
def biz_content(self, value):
if isinstance(value, AlipayEcoMycarMaintainAftersaleSyncModel):
self._biz_content = value
else:
self._biz_content = AlipayEcoMycarMaintainAftersaleSyncModel.from_alipay_dict(value)
@property
def version(self):
return self._version
@version.setter
def version(self, value):
self._version = value
@property
def terminal_type(self):
return self._terminal_type
@terminal_type.setter
def terminal_type(self, value):
self._terminal_type = value
@property
def terminal_info(self):
return self._terminal_info
@terminal_info.setter
def terminal_info(self, value):
self._terminal_info = value
@property
def prod_code(self):
return self._prod_code
@prod_code.setter
def prod_code(self, value):
self._prod_code = value
@property
def notify_url(self):
return self._notify_url
@notify_url.setter
def notify_url(self, value):
self._notify_url = value
@property
def return_url(self):
return self._return_url
@return_url.setter
def return_url(self, value):
self._return_url = value
@property
def udf_params(self):
return self._udf_params
@udf_params.setter
def udf_params(self, value):
if not isinstance(value, dict):
return
self._udf_params = value
@property
def need_encrypt(self):
return self._need_encrypt
@need_encrypt.setter
def need_encrypt(self, value):
self._need_encrypt = value
def add_other_text_param(self, key, value):
if not self.udf_params:
self.udf_params = dict()
self.udf_params[key] = value
def get_params(self):
params = dict()
params[P_METHOD] = 'alipay.eco.mycar.maintain.aftersale.sync'
params[P_VERSION] = self.version
if self.biz_model:
params[P_BIZ_CONTENT] = json.dumps(obj=self.biz_model.to_alipay_dict(), ensure_ascii=False, sort_keys=True, separators=(',', ':'))
if self.biz_content:
if hasattr(self.biz_content, 'to_alipay_dict'):
params['biz_content'] = json.dumps(obj=self.biz_content.to_alipay_dict(), ensure_ascii=False, sort_keys=True, separators=(',', ':'))
else:
params['biz_content'] = self.biz_content
if self.terminal_type:
params['terminal_type'] = self.terminal_type
if self.terminal_info:
params['terminal_info'] = self.terminal_info
if self.prod_code:
params['prod_code'] = self.prod_code
if self.notify_url:
params['notify_url'] = self.notify_url
if self.return_url:
params['return_url'] = self.return_url
if self.udf_params:
params.update(self.udf_params)
return params
def get_multipart_params(self):
multipart_params = dict()
return multipart_params
| [
"liuqun.lq@alibaba-inc.com"
] | liuqun.lq@alibaba-inc.com |
050ee56cacff3a5c7ce399458e83878406417fb4 | 1eb7fa8b1745d4e51cefb4eceb44621862516aa6 | /leetCode/Python/109-ConvertSortedListToBinarySearchTree.py | d41ffad6bc39dba66cfd892419d5e4ec75987efb | [] | no_license | geniousisme/CodingInterview | bd93961d728f1fe266ad5edf91adc5d024e5ca48 | a64bca9c07a7be8d4060c4b96e89d8d429a7f1a3 | refs/heads/master | 2021-01-10T11:15:31.305787 | 2017-03-06T00:03:13 | 2017-03-06T00:03:13 | 43,990,453 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 2,196 | py | # Time: O(n)
# Space: O(logn)
#
# Given a singly linked list where elements are sorted in ascending order,
# convert it to a height balanced BST.
#
# Definition for a binary tree node
# Definition for singly-linked list.
# class ListNode:
# def __init__(self, x):
# self.val = x
# self.next = None
# Definition for a binary tree node.
# class TreeNode:
# def __init__(self, x):
# self.val = x
# self.left = None
# self.right = None
class Solution1:
# @param {ListNode} head
# @return {TreeNode}
def sortedListToBST(self, head):
        sorted_list = []
        length = 0
while head:
sorted_list.append(head.val)
head = head.next
length += 1
return self.buildBST(sorted_list, 0, length - 1)
def buildBST(self, lst, start, end):
if start > end: return None
        mid = (start + end) // 2  # floor division keeps the index an integer on Python 3
root = TreeNode(lst[mid])
root.right = self.buildBST(lst, mid + 1, end)
root.left = self.buildBST(lst, start, mid - 1)
return root
class Solution(object):
# @param head, a list node
# @return a tree node
def sortedListToBST(self, head):
length = 0
curr = head
# count the total length of the linked list
while curr:
length += 1
curr = curr.next
# make it easier to manipulate
self.head = head
return self.convert_sorted_list_bst_helper(0, length)
def convert_sorted_list_bst_helper(self, start, end):
# if we already reach the end of the linked list, return None
if start == end:
return None
# find the mid(root for BST) first
        mid = start + (end - start) // 2  # floor division keeps the index an integer on Python 3
# find the left child of the root.
left = self.convert_sorted_list_bst_helper(start, mid)
# assign the node for the root
root = TreeNode(self.head.val)
# connect left to the root
root.left = left
# move it to the next head
self.head = self.head.next
# connect right to the head
root.right = self.convert_sorted_list_bst_helper(mid + 1, end)
return root
| [
"chia-hao.hsu@aiesec.net"
] | chia-hao.hsu@aiesec.net |
6b61409d3ca330923d21a76fa6722d1541d8668e | 02863fb122e736e1d1193c01270a3731dd114f79 | /venv/Lib/site-packages/tensorflow/keras/optimizers/__init__.py | fb1976516b52f127a6ed8fb63b4480fddaac8f11 | [] | no_license | umedsondoniyor/PredictiveMaintenance | ee9dd4fe8d3366be4c5b192b4275f23903dbd285 | 88d8184cc2a958aa5feb9b55a0d5d9b6de36c22e | refs/heads/master | 2021-06-18T23:42:24.901395 | 2021-03-03T15:36:55 | 2021-03-03T15:36:55 | 142,778,437 | 3 | 0 | null | null | null | null | UTF-8 | Python | false | false | 819 | py | # This file is MACHINE GENERATED! Do not edit.
# Generated by: tensorflow/tools/api/generator/create_python_api.py script.
"""Built-in optimizer classes.
"""
from __future__ import print_function
from tensorflow.python.keras.optimizers import Adadelta
from tensorflow.python.keras.optimizers import Adagrad
from tensorflow.python.keras.optimizers import Adam
from tensorflow.python.keras.optimizers import Adamax
from tensorflow.python.keras.optimizers import Nadam
from tensorflow.python.keras.optimizers import Optimizer
from tensorflow.python.keras.optimizers import RMSprop
from tensorflow.python.keras.optimizers import SGD
from tensorflow.python.keras.optimizers import deserialize
from tensorflow.python.keras.optimizers import get
from tensorflow.python.keras.optimizers import serialize
del print_function
| [
"umedzhonizbasarov@gmail.com"
] | umedzhonizbasarov@gmail.com |
f8b1023a4984749e04479099632eebc2d245792e | 4985a69dbfeab6eebe29257946fd6bbf47a84686 | /机试/老王开枪/f.py | 45eac5e5e58ad0be092aef1c7caf7706d6a5de38 | [] | no_license | g2606340915/rg201--2018 | 9f21fc82bbbee4304dfbefdfa5314ff1fc7f835f | e2c17b6de6e2fe2b550041579932f5e30a5a52e4 | refs/heads/master | 2020-03-17T13:24:36.613978 | 2018-05-16T08:15:42 | 2018-05-16T08:15:42 | 133,630,564 | 2 | 0 | null | null | null | null | UTF-8 | Python | false | false | 339 | py | menu = ['苹果汁','蓝莓汁','香蕉汁','黄瓜汁','番茄汁']
for i in menu:
print(i)
menu[0] = '土豆汁'
menu[1] = '葡萄汁'
print('*'*30)
print('本店替换了两种饮料,现在提供:%s'%menu)
a = {'土豆汁':'2','葡萄汁':'3','香蕉汁':'4','黄瓜汁':'1','番茄汁':'0.5'}
print('本店饮料价格:%s'%a)
| [
"2606340915@qq.com"
] | 2606340915@qq.com |
0754d760854bde3e95e29ed1b46d81fa7705d216 | c7c8fc46b64773940f25f08d1513026877b19f0c | /environ.wsgi | c50c88e4eaea3f20744b2c6e14133c47824b16a0 | [] | no_license | GrahamDumpleton-abandoned/mod_wsgi-openshift | 6a0ec195328e2dc11e2b27f50f88060992aa01a4 | 2af71d1eb64bee19081525f5098857cb95d71027 | refs/heads/master | 2021-05-28T17:53:48.097986 | 2015-06-13T08:39:04 | 2015-06-13T08:39:04 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,323 | wsgi | from __future__ import print_function
import os
import sys
import locale
try:
from cStringIO import StringIO
except ImportError:
from io import StringIO
import mod_wsgi
import apache
def application(environ, start_response):
headers = []
headers.append(('Content-Type', 'text/plain; charset="UTF-8"'))
write = start_response('200 OK', headers)
input = environ['wsgi.input']
output = StringIO()
print('PID: %s' % os.getpid(), file=output)
print('UID: %s' % os.getuid(), file=output)
print('GID: %s' % os.getgid(), file=output)
print('CWD: %s' % os.getcwd(), file=output)
print(file=output)
print('python.version: %r' % (sys.version,), file=output)
print('apache.version: %r' % (apache.version,), file=output)
print('mod_wsgi.version: %r' % (mod_wsgi.version,), file=output)
print(file=output)
print('mod_wsgi.process_group: %s' % mod_wsgi.process_group,
file=output)
print('mod_wsgi.application_group: %s' % mod_wsgi.application_group,
file=output)
print(file=output)
print('mod_wsgi.maximum_processes: %s' % mod_wsgi.maximum_processes,
file=output)
print('mod_wsgi.threads_per_process: %s' % mod_wsgi.threads_per_process,
file=output)
print('mod_wsgi.process_metrics: %s' % mod_wsgi.process_metrics(),
file=output)
print('mod_wsgi.server_metrics: %s' % mod_wsgi.server_metrics(),
file=output)
print(file=output)
metrics = mod_wsgi.server_metrics()
if metrics:
for process in metrics['processes']:
for worker in process['workers']:
print(worker['status'], file=output, end='')
print(file=output)
print(file=output)
print('apache.description: %s' % apache.description, file=output)
print('apache.build_date: %s' % apache.build_date, file=output)
print('apache.mpm_name: %s' % apache.mpm_name, file=output)
print('apache.maximum_processes: %s' % apache.maximum_processes,
file=output)
print('apache.threads_per_process: %s' % apache.threads_per_process,
file=output)
print(file=output)
print('PATH: %s' % sys.path, file=output)
print(file=output)
print('LANG: %s' % os.environ.get('LANG'), file=output)
print('LC_ALL: %s' % os.environ.get('LC_ALL'), file=output)
print('sys.getdefaultencoding(): %s' % sys.getdefaultencoding(),
file=output)
print('locale.getlocale(): %s' % (locale.getlocale(),),
file=output)
print('locale.getdefaultlocale(): %s' % (locale.getdefaultlocale(),),
file=output)
print('locale.getpreferredencoding(): %s' % locale.getpreferredencoding(),
file=output)
print(file=output)
keys = sorted(environ.keys())
for key in keys:
print('%s: %s' % (key, repr(environ[key])), file=output)
print(file=output)
keys = sorted(os.environ.keys())
for key in keys:
print('%s: %s' % (key, repr(os.environ[key])), file=output)
print(file=output)
result = output.getvalue()
if not isinstance(result, bytes):
result = result.encode('UTF-8')
yield result
block_size = 8192
data = input.read(block_size)
while data:
yield data
data = input.read(block_size)
| [
"Graham.Dumpleton@gmail.com"
] | Graham.Dumpleton@gmail.com |
89c109838d986b1e063b94636a8003df3c932c4d | 53fab060fa262e5d5026e0807d93c75fb81e67b9 | /backup/user_270/ch15_2020_03_04_20_00_42_495079.py | 2ea78a2e626da4c0b2a8be17247b5dc9b25a29e9 | [] | no_license | gabriellaec/desoft-analise-exercicios | b77c6999424c5ce7e44086a12589a0ad43d6adca | 01940ab0897aa6005764fc220b900e4d6161d36b | refs/heads/main | 2023-01-31T17:19:42.050628 | 2020-12-16T05:21:31 | 2020-12-16T05:21:31 | 306,735,108 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 133 | py | name = input("Qual o seu nome? ")
if name == "Chris":
print("Todo mundo odeia o Chris")
else:
print("Olá, {0}".format(name)) | [
"you@example.com"
] | you@example.com |
e2b4a26320d3b1b25bd7d973d8c1a1c6df76cdd7 | da96d29b457eb123c01274efea562448df105fc6 | /chapter9/st15.py | ca2ba833406e0be43e58f404acc24a21ccdb2c0f | [] | no_license | Alonsovau/sketches | a1336f1a7909ad059744c4613ab992c8361264f5 | dfb072086cc813d7409fa11393ebaad6e26db180 | refs/heads/master | 2021-01-19T22:29:15.827896 | 2017-10-19T15:37:28 | 2017-10-19T15:37:28 | 88,761,672 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 917 | py | # 定义可选参数的元类
from abc import ABCMeta, abstractmethod
class IStream(metaclass=ABCMeta):
@abstractmethod
def read(self, maxsize=None):
pass
@abstractmethod
def write(self, data):
pass
class MyMeta(type):
@classmethod
def __prepare__(cls, name, bases, *, debug=False, synchronize=False):
        # custom processing of the extra keyword arguments would go here
        return super().__prepare__(name, bases)
    def __new__(cls, name, bases, ns, *, debug=False, synchronize=False):
        # custom processing of the extra keyword arguments would go here
        return super().__new__(cls, name, bases, ns)
    def __init__(self, name, bases, ns, *, debug=False, synchronize=False):
        # custom processing of the extra keyword arguments would go here
        super().__init__(name, bases, ns)
class Spam(metaclass=MyMeta, debug=True, synchronize=True):
pass
class Spam2(metaclass=MyMeta):
debug = True
synchronize = True
pass
# Configuring a metaclass through keyword arguments can also be seen as an alternative to using class variables
| [
"alonsovau@outlook.com"
] | alonsovau@outlook.com |
3cdaec1b3c21a642bdc141c0e95acd3f8d90a503 | 26b34896d44a765e040b45ee50fcad4fbe956f8b | /month04/code/redis/day01/01-proces-redis.py | c45513bf2ed05f1cd7355e401c71831991b22c05 | [] | no_license | zstarling131227/1905 | 49389a058002bb3b0dd2f44c45f2250dde27e963 | fa052e47b9fe2db0bd7c7cf8cebea9ff68af3088 | refs/heads/master | 2022-03-25T18:32:41.646922 | 2020-01-02T07:34:11 | 2020-01-02T07:34:11 | 199,369,016 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 822 | py | import redis, time, random
from multiprocessing import Process
class Spider(object):
def __init__(self):
self.r = redis.Redis('localhost', 6379, 0)
def product(self):
for page in range(67):
url_list = 'http%s' % str(page)
self.r.lpush('url1', url_list)
time.sleep(random.randint(1, 3))
def consum(self):
while True:
url = self.r.brpop('url1', 4)
if url:
# print(url) # #(b'url1', b'http0')
print(url[1].decode())
else:
break
def run(self):
p1 = Process(target=self.product)
p2 = Process(target=self.consum)
p1.start()
p2.start()
p1.join()
p2.join()
if __name__ == '__main__':
sp = Spider()
sp.run()
| [
"1239269939@qq.com"
] | 1239269939@qq.com |
a44b787bef12a9f57ea03dc7d1ebcef5b76ece70 | 8bcd413ecc1d67ee5ac383460b228193698f44db | /fizzbuzz.py | c0d11be9b09f91f4a89ae26217915e3d00b2dba0 | [] | no_license | BarnabasK/fizz_buzz | d656ff55b668c826d6d5ecdbff856323c84099d4 | 075b158351b8c3a966a3524db10e578b31ca864f | refs/heads/master | 2021-01-12T12:17:37.551077 | 2016-10-31T08:42:18 | 2016-10-31T08:42:18 | 72,417,440 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 203 | py | def fizz_buzz(num):
if num%3 == 0 and num%5 == 0:
return "FizzBuzz"
elif num%5 == 0:
return "Buzz"
elif num%3 == 0:
return "Fizz"
else:
return num
print(fizz_buzz(105))
| [
"="
] | = |
3562660d1395dde33a952946de40933ce4c29d64 | c390e326163d0fcda00989b904750e66c0cabbea | /chain_structure/double_link_list.py | 0d9f3776cd76aefaf50f598775fe5ba768873de3 | [] | no_license | linsanityHuang/data_structure_and_algorithm_python | 17354eaeef5b344cdad1abea061d41f3c090830e | 13d1d3b48cf648489c578e6f30e7d910f7e55ff3 | refs/heads/master | 2020-04-07T09:13:11.993425 | 2018-11-20T15:12:59 | 2018-11-20T15:12:59 | 158,244,357 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 4,364 | py | # coding=utf-8
# A singly linked list has two drawbacks: removing a node costs O(n),
# and it can only be traversed in one direction.
# A doubly linked list node therefore carries one extra attribute pointing to the previous node.
class Node(object):
"""
    Compared with a singly linked list node, this adds a pointer to the previous node
"""
def __init__(self, value=None, prev=None, next=None):
self.value, self.prev, self.next = value, prev, next
class CircualDoubleLinkedList(object):
'''
    Circular doubly linked list.
    The "circular" part simply means root.prev points at the tail node, tying the list together.
'''
def __init__(self, maxsize=None):
self.maxsize = maxsize
        # On initialization, the root node's prev and next both point to the root itself
node = Node()
node.next, node.prev = node, node
self.root = node
self.length = 0
def __len__(self):
return self.length
def headnode(self):
return self.root.next
def tailnode(self):
return self.root.prev
    # Append a node at the tail of the list
    # O(1)
    # No for loop is needed, only a fixed number of steps, hence O(1)
def append(self, value):
if self.maxsize is not None and len(self) >= self.maxsize:
raise Exception('full')
        # Build the new node
node = Node(value=value)
        # Get the tail node
tailnode = self.tailnode() or self.root
tailnode.next = node
node.prev = tailnode
node.next = self.root
self.root.prev = node
self.length += 1
    # Add a node at the head of the list
def appendleft(self, value):
if self.maxsize is not None and len(self) >= self.maxsize:
raise Exception('full')
node = Node(value=value)
        # If the list is empty
if self.root.next is self.root:
node.next = self.root
node.prev = self.root
self.root.next = node
self.root.prev = node
else:
node.prev = self.root
headnode = self.root.next
node.next = headnode
headnode.prev = node
self.root.next = node
self.length += 1
    # Remove a node (takes the node itself, not a value), O(1)
    # Note that the argument is a node, not a value
def remove(self, node):
if node is self.root:
return
else:
node.prev.next = node.next
node.next.prev = node.prev
self.length -= 1
return node
    # Remove the tail node
def pop(self):
if self.root.next is self.root:
raise Exception('pop from empty circualdoublelinkedlist')
        # Get the tail node
tailnode = self.tailnode()
value = tailnode.value
self.root.prev = tailnode.prev
tailnode.prev.next = self.root
self.length -= 1
del tailnode
return value
    # Remove the head node
def popleft(self):
if self.root.next is self.root:
raise Exception('pop from empty circualdoublelinkedlist')
        # Get the head node
headnode = self.headnode()
value = headnode.value
self.root.next = headnode.next
headnode.next.prev = self.root
self.length -= 1
del headnode
return value
    # Iterate over the list
def iter_node(self):
        # Return right away if the list is empty
if self.root.next is self.root:
return
        # Otherwise start from the head node
curnode = self.root.next
while curnode.next is not self.root:
yield curnode
curnode = curnode.next
        # The loop above stops at the tail node without yielding it, so yield it here
yield curnode
def __iter__(self):
for node in self.iter_node():
yield node.value
    # Iterate over the list in reverse
def iter_node_reverse(self):
if self.root.prev is self.root:
return
        # Get the tail node
curnode = self.root.prev
while curnode.prev is not self.root:
yield curnode
curnode = curnode.prev
yield curnode
def __reversed__(self):
for node in self.iter_node_reverse():
yield node.value
if __name__ == '__main__':
dll = CircualDoubleLinkedList()
assert len(dll) == 0
dll.append(0)
dll.append(1)
dll.append(2)
assert list(dll) == [0, 1, 2]
assert [value for value in dll] == [0, 1, 2]
assert [value for value in reversed(dll)] == [2, 1, 0]
headnode = dll.headnode()
assert headnode.value == 0
# O(1)
dll.remove(headnode)
assert len(dll) == 2
assert [value for value in dll] == [1, 2]
dll.appendleft(0)
assert [value for value in reversed(dll)] == [2, 1, 0]
print(list(dll))
print(dll.pop())
print(dll.pop())
print(dll.pop())
assert list(dll) == []
dll.append(0)
dll.append(1)
dll.append(2)
print(list(dll))
print(dll.popleft())
print(dll.popleft())
print(dll.popleft())
assert list(dll) == []
    try:
        dll.popleft()  # popping from an empty list raises an exception
    except Exception as e:
        print(e)
| [
"js_huang@foxmail.com"
] | js_huang@foxmail.com |
9498cbc66a45d18297d3936b570795ddc8e429a0 | ce75bce747bf60b364bc2e516824fc69c64a7eec | /opengever/maintenance/scripts/statistics_favorites.py | b3d6b32f632382390f6ca4bbaf785f206d27bb4b | [] | no_license | 4teamwork/opengever.maintenance | c94e470af31f891d0969877533e5acd37369f70f | f2b9866fb6cce1d24e29b084b757eec857119479 | refs/heads/master | 2023-07-28T17:57:09.619138 | 2023-07-14T13:08:20 | 2023-07-14T13:08:20 | 14,493,557 | 2 | 0 | null | 2023-08-31T09:07:21 | 2013-11-18T13:46:30 | Python | UTF-8 | Python | false | false | 2,527 | py | """Favorites usage statistics.
The goal of this script is to collect usage statistics of repository folder favorites.
bin/instance run ./scripts/statistics_favorites.py
"""
from opengever.maintenance.debughelpers import setup_app
from opengever.maintenance.debughelpers import setup_option_parser
from opengever.maintenance.debughelpers import setup_plone
from opengever.ogds.base.utils import get_current_admin_unit
from opengever.portlets.tree.favorites import ANNOTATION_KEY
from opengever.repository.repositoryroot import IRepositoryRoot
from plone.i18n.normalizer import filenamenormalizer
from plone.registry.interfaces import IRegistry
from zope.annotation import IAnnotations
from zope.component import getUtility
import json
import os.path
def dump_usage_statistics(plone, directory):
reporoots = filter(IRepositoryRoot.providedBy, plone.objectValues())
result = {'favorites_enabled': is_favorites_feature_enabled(),
'stats': map(stats_for_reporoot, reporoots)}
print json.dumps(result, sort_keys=True, indent=4)
if directory:
dump(directory, result)
def dump(directory, result):
filename = filenamenormalizer.normalize(get_current_admin_unit().public_url) + '.json'
path = os.path.abspath(os.path.join(directory, filename))
print 'Dumping to', path
with open(path, 'w+') as fio:
json.dump(result, fio, sort_keys=True, indent=4)
def is_favorites_feature_enabled():
return getUtility(IRegistry).get('opengever.portlets.tree.enable_favorites')
def stats_for_reporoot(root):
per_user = map(len, IAnnotations(root).get(ANNOTATION_KEY).values())
positives = filter(None, per_user)
return {'root_id': root.getId(),
'total_users': len(per_user),
'with_favorites': len(positives),
'min': min(positives),
'max': max(positives),
'median': median(positives)}
def median(numbers):
numbers = sorted(numbers)
center = len(numbers) / 2
if len(numbers) % 2 == 0:
return sum(numbers[center - 1:center + 1]) / 2.0
else:
return numbers[center]
def main():
parser = setup_option_parser()
parser.add_option('-d', '--dump-directory', dest='directory',
help='Path to a directory where a JSON file is created with the output.')
(options, args) = parser.parse_args()
app = setup_app()
plone = setup_plone(app, options)
dump_usage_statistics(plone, options.directory)
if __name__ == '__main__':
main()
| [
"jone@jone.ch"
] | jone@jone.ch |
7a37ac86449042091a4b9f58ee3ef288d9f2f7ea | 8d042bdce5db34177e5b66c29b4c9b075ce8ea14 | /libraries/imitation/src/imitation/scripts/config/expert_demos.py | 395a7f4bf0d7a8a087754e1364aa593ec08a6094 | [
"MIT"
] | permissive | sen-pai/Not-GAIL | 5a342ce58b71595ddc2f71f13f21a690d6353b9b | abe23cf28c62d875774469e431858977631b5550 | refs/heads/main | 2023-06-18T11:07:03.185669 | 2021-06-07T21:58:36 | 2021-06-07T21:58:36 | 344,687,916 | 0 | 1 | MIT | 2021-06-07T21:58:37 | 2021-03-05T03:54:27 | Python | UTF-8 | Python | false | false | 4,523 | py | import os
import sacred
from imitation.scripts.config.common import DEFAULT_INIT_RL_KWARGS
from imitation.util import util
expert_demos_ex = sacred.Experiment("expert_demos")
@expert_demos_ex.config
def expert_demos_defaults():
env_name = "CartPole-v1" # The gym.Env name
total_timesteps = int(1e6) # Number of training timesteps in model.learn()
num_vec = 8 # Number of environments in VecEnv
parallel = True # Use SubprocVecEnv (generally faster if num_vec>1)
normalize = True # Use VecNormalize
normalize_kwargs = dict() # kwargs for `VecNormalize`
max_episode_steps = None # Set to positive int to limit episode horizons
n_episodes_eval = 50 # Num of episodes for final ep reward mean evaluation
init_rl_kwargs = dict(DEFAULT_INIT_RL_KWARGS)
# If specified, overrides the ground-truth environment reward
reward_type = None # override reward type
reward_path = None # override reward path
rollout_save_final = True # If True, save after training is finished.
rollout_save_n_timesteps = None # Min timesteps saved per file, optional.
rollout_save_n_episodes = None # Num episodes saved per file, optional.
policy_save_interval = 10000 # Num timesteps between saves (<=0 disables)
policy_save_final = True # If True, save after training is finished.
init_tensorboard = False # If True, then write Tensorboard logs.
log_root = os.path.join("output", "expert_demos") # output directory
@expert_demos_ex.config
def default_end_cond(rollout_save_n_timesteps, rollout_save_n_episodes):
# Only set default if both end cond options are None.
# This way the Sacred CLI caller can set `rollout_save_n_episodes` only
# without getting an error that `rollout_save_n_timesteps is not None`.
if rollout_save_n_timesteps is None and rollout_save_n_episodes is None:
rollout_save_n_timesteps = 2000 # Min timesteps saved per file, optional.
@expert_demos_ex.config
def logging(env_name, log_root):
log_dir = os.path.join(
log_root, env_name.replace("/", "_"), util.make_unique_timestamp()
)
@expert_demos_ex.config
def rollouts_from_policy_only_defaults(log_dir):
policy_path = None # Policy path for rollouts_from_policy command only
policy_type = "ppo" # Policy type for rollouts_from_policy command only
rollout_save_path = os.path.join(
log_dir, "rollout.pkl"
) # Save path for `rollouts_from_policy` only.
# Standard Gym env configs
@expert_demos_ex.named_config
def acrobot():
env_name = "Acrobot-v1"
@expert_demos_ex.named_config
def ant():
env_name = "Ant-v2"
locals().update(**ant_shared_locals)
@expert_demos_ex.named_config
def cartpole():
env_name = "CartPole-v1"
total_timesteps = int(1e5)
@expert_demos_ex.named_config
def half_cheetah():
env_name = "HalfCheetah-v2"
total_timesteps = int(5e6) # does OK after 1e6, but continues improving
@expert_demos_ex.named_config
def hopper():
# TODO(adam): upgrade to Hopper-v3?
env_name = "Hopper-v2"
@expert_demos_ex.named_config
def humanoid():
env_name = "Humanoid-v2"
init_rl_kwargs = dict(
n_steps=2048,
) # batch size of 2048*8=16384 due to num_vec
total_timesteps = int(10e6) # fairly discontinuous, needs at least 5e6
@expert_demos_ex.named_config
def mountain_car():
env_name = "MountainCar-v0"
@expert_demos_ex.named_config
def pendulum():
env_name = "Pendulum-v0"
@expert_demos_ex.named_config
def reacher():
env_name = "Reacher-v2"
@expert_demos_ex.named_config
def swimmer():
env_name = "Swimmer-v2"
@expert_demos_ex.named_config
def walker():
env_name = "Walker2d-v2"
# Custom env configs
@expert_demos_ex.named_config
def custom_ant():
env_name = "imitation/CustomAnt-v0"
locals().update(**ant_shared_locals)
@expert_demos_ex.named_config
def disabled_ant():
env_name = "imitation/DisabledAnt-v0"
locals().update(**ant_shared_locals)
@expert_demos_ex.named_config
def two_d_maze():
env_name = "imitation/TwoDMaze-v0"
# Debug configs
@expert_demos_ex.named_config
def fast():
"""Intended for testing purposes: small # of updates, ends quickly."""
total_timesteps = int(1)
max_episode_steps = int(1)
# Shared settings
ant_shared_locals = dict(
init_rl_kwargs=dict(
n_steps=2048,
), # batch size of 2048*8=16384 due to num_vec
total_timesteps=int(5e6),
max_episode_steps=500, # To match `inverse_rl` settings.
)
| [
"sharan_spai@protonmail.com"
] | sharan_spai@protonmail.com |
21ff1c365e8eebdc9d1640700d97ea268e58f6ab | 0b86600e0288c0fefc081a0f428277a68b14882e | /code/piles/piles_code_6.py | 711249c8fd1951c07a758ab93d1102c2914afdd1 | [] | no_license | Byliguel/python1-exo7 | 9ede37a8d2b8f384d1ebe3d612e8c25bbe47a350 | fbf6b08f4c1e94dd9f170875eee871a84849399e | refs/heads/master | 2020-09-22T10:16:34.044141 | 2019-12-01T11:52:51 | 2019-12-01T11:52:51 | 225,152,986 | 1 | 0 | null | 2019-12-01T11:51:37 | 2019-12-01T11:51:36 | null | UTF-8 | Python | false | false | 1,947 | py | def ecriture_polonaise(expression):
    """ Converts a classic (infix) expression to Polish notation
    Input: a classic expression
    Output: the expression in Polish notation
    Action: uses a stack """
global pile
pile = []
liste_expression = expression.split()
    polonaise = ""  # The Polish-notation string being built
for car in liste_expression:
if car.isdigit():
polonaise = polonaise + car + " "
if car == "(":
empile(car)
if car == "*":
empile(car)
if car == "+":
while not pile_est_vide():
element = depile()
if element == "*":
polonaise = polonaise + element + " "
else:
                    empile(element) # Put the element back
break
empile(car)
if car == ")":
while not pile_est_vide():
element = depile()
if element == "(":
break
else:
polonaise = polonaise + element + " "
while not pile_est_vide():
element = depile()
polonaise = polonaise + element + " "
return polonaise
# Tests
print("--- Conversion to Polish notation ---")
exp = "2 + 3"
print("the expression",exp,"is written",ecriture_polonaise(exp))
exp = "2 * 3"
print("the expression",exp,"is written",ecriture_polonaise(exp))
exp = "( 2 + 3 ) * 4"
print("the expression",exp,"is written",ecriture_polonaise(exp))
exp = "4 * ( 2 + 3 )"
print("the expression",exp,"is written",ecriture_polonaise(exp))
exp = "2 + 4 * 5"
print("the expression",exp,"is written",ecriture_polonaise(exp))
exp = "2 * 4 * 5"
print("the expression",exp,"is written",ecriture_polonaise(exp))
exp = "( 2 + 3 ) * ( 4 + 8 )"
print("the expression",exp,"is written",ecriture_polonaise(exp)) | [
"arnaud.bodin@math.univ-lille1.fr"
] | arnaud.bodin@math.univ-lille1.fr |
7c037603c165ffaaada1513d582925b9c0d909df | 55ab64b67d8abc02907eb43a54ff6c326ded6b72 | /scripts/addon_library/local/keentools/utils/common_operators.py | 34bcd0501189ea360be77cd75ddf2cb07bab8a67 | [
"MIT",
"GPL-3.0-only"
] | permissive | Tilapiatsu/blender-custom_config | 2f03b0bb234c3b098d2830732296d199c91147d0 | 00e14fc190ebff66cf50ff911f25cf5ad3529f8f | refs/heads/master | 2023-08-16T14:26:39.990840 | 2023-08-16T01:32:41 | 2023-08-16T01:32:41 | 161,249,779 | 6 | 2 | MIT | 2023-04-12T05:33:59 | 2018-12-10T23:25:14 | Python | UTF-8 | Python | false | false | 4,277 | py | # ##### BEGIN GPL LICENSE BLOCK #####
# KeenTools for blender is a blender addon for using KeenTools in Blender.
# Copyright (C) 2019-2022 KeenTools
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
# ##### END GPL LICENSE BLOCK #####
import bpy
from bpy.types import Operator
from bpy.props import StringProperty
from ..addon_config import (Config,
show_user_preferences,
show_tool_preferences)
from .localview import check_context_localview
from .other import force_show_ui_overlays
from ..utils.ui_redraw import (force_ui_redraw,
find_modules_by_name_starting_with,
filter_module_list_by_name_starting_with,
collapse_all_modules,
mark_old_modules)
from .bpy_common import bpy_localview, bpy_show_addon_preferences, bpy_url_open
from ..ui_strings import buttons
class KT_OT_AddonSettings(Operator):
bl_idname = Config.kt_addon_settings_idname
bl_label = buttons[bl_idname].label
bl_description = buttons[bl_idname].description
bl_options = {'REGISTER'}
show: StringProperty(default='all')
def draw(self, context):
pass
def execute(self, context):
show_user_preferences(facebuilder=False, geotracker=False)
if self.show == 'facebuilder':
show_tool_preferences(facebuilder=True, geotracker=False)
elif self.show == 'geotracker':
show_tool_preferences(facebuilder=False, geotracker=True)
elif self.show == 'all':
show_tool_preferences(facebuilder=True, geotracker=True)
elif self.show == 'none':
show_tool_preferences(facebuilder=False, geotracker=False)
bpy_show_addon_preferences()
return {'FINISHED'}
class KT_OT_OpenURL(Operator):
bl_idname = Config.kt_open_url_idname
bl_label = buttons[bl_idname].label
bl_description = buttons[bl_idname].description
bl_options = {'REGISTER', 'INTERNAL'}
url: StringProperty(name='URL', default='')
def execute(self, context):
bpy_url_open(url=self.url)
return {'FINISHED'}
class KT_OT_AddonSearch(Operator):
bl_idname = Config.kt_addon_search_idname
bl_label = buttons[bl_idname].label
bl_description = buttons[bl_idname].description
bl_options = {'REGISTER'}
search: StringProperty(default='KeenTools')
def draw(self, context):
pass
def execute(self, context):
bpy.context.window_manager.addon_search = self.search
bpy.ops.screen.userpref_show()
mods = find_modules_by_name_starting_with(self.search)
if len(mods) > 1:
collapse_all_modules(mods)
keentools_fb_mods = filter_module_list_by_name_starting_with(
mods, 'KeenTools FaceBuilder')
mark_old_modules(keentools_fb_mods, {'category': 'Add Mesh'})
force_ui_redraw(area_type='PREFERENCES')
return {'FINISHED'}
class KT_OT_ExitLocalview(Operator):
bl_idname = Config.kt_exit_localview_idname
bl_label = buttons[bl_idname].label
bl_description = buttons[bl_idname].description
bl_options = {'REGISTER', 'UNDO'}
def execute(self, context):
if not check_context_localview(context):
self.report({'ERROR', 'Cannot set proper context for operator'})
return {'CANCELLED'}
bpy_localview()
force_show_ui_overlays(context.area)
return {'FINISHED'}
CLASSES_TO_REGISTER = (KT_OT_AddonSettings,
KT_OT_OpenURL,
KT_OT_AddonSearch,
KT_OT_ExitLocalview,)
| [
"tilapiatsu@hotmail.fr"
] | tilapiatsu@hotmail.fr |
ec1751500e60834f7a283d2d33f1860307f9cd9f | 487ce91881032c1de16e35ed8bc187d6034205f7 | /codes/CodeJamCrawler/16_0_2/thorhayek/sol2.py | 71e6b42586a69e9922ccef71355590a0aeb1afee | [] | no_license | DaHuO/Supergraph | 9cd26d8c5a081803015d93cf5f2674009e92ef7e | c88059dc66297af577ad2b8afa4e0ac0ad622915 | refs/heads/master | 2021-06-14T16:07:52.405091 | 2016-08-21T13:39:13 | 2016-08-21T13:39:13 | 49,829,508 | 2 | 0 | null | 2021-03-19T21:55:46 | 2016-01-17T18:23:00 | Python | UTF-8 | Python | false | false | 1,344 | py | import sys
def printDict(input_dict):
    # print each key:value pair of the dict that was passed in
    for key,val in input_dict.items():
        print(str(key)+":"+str(val), end=" ");
def flipCount(inp):
if(type(inp) is not str):
return None;
inp_lst = list(inp);
flip_count = 0 ;
i = 0;
while i < len(inp_lst):
incr = False;
if(inp_lst[i] == '-'):
while i < len(inp_lst) and inp_lst[i] == '-':
inp_lst[i] = "+";
i += 1;
incr = True;
flip_count += 1;
elif(i < len(inp_lst) and inp_lst[i] == '+'):
flip_both = False;
while i < len(inp_lst) and inp_lst[i] == '+':
i += 1;
incr = True;
while i < len(inp_lst) and inp_lst[i] == '-' :
inp_lst[i] = "+";
i += 1;
incr = True;
if(not flip_both):
flip_both = True;
if(flip_both):
flip_count += 2;
if(not incr):
i+=1;
return flip_count;
# read input file
input_file = "large2.txt"; # update
#output_file = "output.txt";
fi = open(input_file,'r');
#fo = open(output_file, 'w');
case_count = int(fi.readline());
#print("case_count="+str(case_count));
#cases = fi.read();
curr_line = 1;
for line in fi:
#print("case="+line);
ans = flipCount(line);
if(ans != None and curr_line <= case_count):
print("Case #"+str(curr_line)+": " +str(ans));
curr_line += 1; | [
"[dhuo@tcd.ie]"
] | [dhuo@tcd.ie] |
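The greedy above flips runs left to right; for this problem the answer also has a closed form: the number of adjacent sign changes, plus one if the string still ends face-down (`-`). A small cross-check sketch (the function name is mine):

```python
def min_flips(stack):
    """Closed-form flip count for a '+'/'-' pancake string."""
    stack = stack.strip()
    # one boundary flip is needed wherever the sign changes...
    changes = sum(1 for a, b in zip(stack, stack[1:]) if a != b)
    # ...plus one final flip if the stack still ends face-down
    return changes + (1 if stack.endswith('-') else 0)
```

For the classic samples this gives `min_flips('-') == 1`, `min_flips('-+') == 1`, `min_flips('+-') == 2`, `min_flips('+++') == 0` and `min_flips('--+-') == 3`, the same totals the mutating greedy produces.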
d693e1f26f63dbdb94c1315407e8b70897d40d6a | 909f80643d1af47f62ed1e384618d4e8fa0f4989 | /bots/mm_bot_plugins/achievements.py | 79ffac6c49c87b1de360b92cd0ce2b9c78b66e91 | [
"MIT"
] | permissive | seLain/MissAchieve | 9420dde7526232a09d22ce92b7ba0f44a660d906 | e65ecf46d3c35b79151d526d0b0abce7b55a6652 | refs/heads/master | 2022-12-11T06:01:41.017602 | 2018-04-05T08:34:25 | 2018-04-05T08:34:25 | 125,518,660 | 0 | 0 | MIT | 2022-12-08T00:57:32 | 2018-03-16T13:16:54 | Python | UTF-8 | Python | false | false | 6,608 | py | # -*- coding: utf-8 -*-
import re, json, inspect
import requests
from mattermost_bot.bot import listen_to
from mattermost_bot.bot import respond_to
from mattermost_bot.utils import allow_only_direct_message
from mm_bot_settings import AUTH_TOKEN, MA_SERVER_URL, PLUGIN_SETTINGS
@listen_to('.*', re.IGNORECASE)
def listen_to_all(message):
talkman_mission(message)
mentions_mission(message)
def talkman_mission(message):
# corresponding mission settings
mission_settings = PLUGIN_SETTINGS[inspect.stack()[0][3]]['missions']
# get message content
username = message.get_username()
# try to get or create mission
for key in mission_settings.keys():
response = requests.post(MA_SERVER_URL+'/achievements/mission/create',
json={'username': username,
'mission_key': key,
'mission_class_name': mission_settings[key],
'auth_token': AUTH_TOKEN})
result = response.json()
# update score
if result['created'] is True or result['existed'] is True:
mission_proxy_json = json.loads(result['mission_proxy'])
response = requests.put(MA_SERVER_URL+'/achievements/mission/update',
json={'proxy_key': mission_proxy_json[0]['fields']['key'],
'score': mission_proxy_json[0]['fields']['score']+1,
'auth_token': AUTH_TOKEN})
result = response.json()
if result['badge'] is not None:
message.reply(result['badge'])
def mentions_mission(message):
# corresponding mission settings
mission_settings = PLUGIN_SETTINGS[inspect.stack()[0][3]]['missions']
# get message content
username = message.get_username()
mentions = len(message.get_mentions()) if message.get_mentions() is not None else 0
# try to get or create mission
for key in mission_settings.keys():
response = requests.post(MA_SERVER_URL+'/achievements/mission/create',
json={'username': username,
'mission_key': key,
'mission_class_name': mission_settings[key],
'auth_token': AUTH_TOKEN})
result = response.json()
# update score
if result['created'] is True or result['existed'] is True:
mission_proxy_json = json.loads(result['mission_proxy'])
response = requests.put(MA_SERVER_URL+'/achievements/mission/update',
json={'proxy_key': mission_proxy_json[0]['fields']['key'],
'score': mission_proxy_json[0]['fields']['score']+mentions,
'auth_token': AUTH_TOKEN})
result = response.json()
if result['badge'] is not None:
message.reply(result['badge'])
@respond_to('.*', re.IGNORECASE)
def response_to_all(message):
talk_to_me_mission(message)
keyword_mission(message)
def talk_to_me_mission(message):
# corresponding mission settings
mission_settings = PLUGIN_SETTINGS[inspect.stack()[0][3]]['missions']
# get message content
username = message.get_username()
# try to get or create mission
for key in mission_settings.keys():
response = requests.post(MA_SERVER_URL+'/achievements/mission/create',
json={'username': username,
'mission_key': key,
'mission_class_name': mission_settings[key],
'auth_token': AUTH_TOKEN})
result = response.json()
# update score
if result['created'] is True or result['existed'] is True:
mission_proxy_json = json.loads(result['mission_proxy'])
response = requests.put(MA_SERVER_URL+'/achievements/mission/update',
json={'proxy_key': mission_proxy_json[0]['fields']['key'],
'score': mission_proxy_json[0]['fields']['score']+1,
'auth_token': AUTH_TOKEN})
result = response.json()
if result['badge'] is not None:
message.reply(result['badge'])
def keyword_mission(message):
# corresponding mission settings
mission_settings = PLUGIN_SETTINGS[inspect.stack()[0][3]]['missions']
# get message content
username = message.get_username()
content = message.get_message()
# try to get or create mission
for key in mission_settings.keys():
response = requests.post(MA_SERVER_URL+'/achievements/mission/create',
json={'username': username,
'mission_key': key,
'mission_class_name': mission_settings[key],
'auth_token': AUTH_TOKEN})
result = response.json()
# update score
if result['created'] is True or result['existed'] is True:
# check if keyword match in content
mission_json = json.loads(result['mission'])
keyword = mission_json[0]['fields']['keyword']
occurences = len(re.findall(keyword, content, re.IGNORECASE))
mission_proxy_json = json.loads(result['mission_proxy'])
# if yes, increase the score by occurences
response = requests.put(MA_SERVER_URL+'/achievements/mission/update',
json={'proxy_key': mission_proxy_json[0]['fields']['key'],
'score': mission_proxy_json[0]['fields']['score']+occurences,
'auth_token': AUTH_TOKEN})
result = response.json()
if result['badge'] is not None:
message.reply(result['badge'])
@respond_to('my badge', re.IGNORECASE)
def get_my_badges(message):
# get message content
username = message.get_username()
# try to get badges
response = requests.get(MA_SERVER_URL+'/achievements/badges',
json={'username': username,
'auth_token': AUTH_TOKEN})
result = response.json()
# prepare reply message
message.reply('\n'.join([badge['success_message'] for badge in result['badges']]))
| [
"selain@nature.ee.ncku.edu.tw"
] | selain@nature.ee.ncku.edu.tw |
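The three `*_mission` handlers above repeat the same create-then-update sequence against the achievements API. A dependency-injected sketch of that shared core (the helper name and the callable signatures are mine, not part of the plugin; `create(payload)` and `update(payload)` stand in for the POST and PUT calls and must return parsed JSON dicts):

```python
import json

def bump_mission_scores(create, update, username, missions, delta):
    """Create-or-fetch each mission, bump its score by `delta`,
    and collect any badge messages the server returns."""
    badges = []
    for key, class_name in missions.items():
        result = create({'username': username,
                         'mission_key': key,
                         'mission_class_name': class_name})
        if result.get('created') or result.get('existed'):
            fields = json.loads(result['mission_proxy'])[0]['fields']
            outcome = update({'proxy_key': fields['key'],
                              'score': fields['score'] + delta})
            if outcome.get('badge') is not None:
                badges.append(outcome['badge'])
    return badges
```

Each handler would then reduce to one call plus a `message.reply` per returned badge.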
c7937ad3692d90c19096c1967551357c5534508f | 7f52cf2eff30480d6c06277aab9ba07658fa36c4 | /written_numbers.py | 0933934ed96bf5b8d5d50a2a1354199c419e09a5 | [] | no_license | JasonOnes/PythonWork | e01c055cf0b6f5f5772a56248a6c896f9eea368d | a7a0f6808ea99cdc6ed8a63feaafeb08ee7c0e34 | refs/heads/master | 2021-06-16T02:35:49.026954 | 2017-04-18T21:28:16 | 2017-04-18T21:28:16 | 75,510,612 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 210 | py | """
Write a function that asks for a number in base-10 that's between 1 and 99
then prints out the name of it in English.
>>> written(42)
'fourty-two'
>>> written(1)
'one'
>>> written(99)
'ninety-nine'
"""
| [
"jasonr.jones14@gmail.com"
] | jasonr.jones14@gmail.com |
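The file above only specifies the behavior; a minimal implementation satisfying those doctests (using the conventional spelling "forty") might look like:

```python
UNITS = ["", "one", "two", "three", "four", "five", "six", "seven",
         "eight", "nine", "ten", "eleven", "twelve", "thirteen",
         "fourteen", "fifteen", "sixteen", "seventeen", "eighteen",
         "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty",
        "sixty", "seventy", "eighty", "ninety"]

def written(n):
    """Return the English name of an integer in [1, 99]."""
    if not 1 <= n <= 99:
        raise ValueError("n must be between 1 and 99")
    if n < 20:
        return UNITS[n]
    tens, units = divmod(n, 10)
    return TENS[tens] + ("-" + UNITS[units] if units else "")
```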
f00b2ced7f7c3d1c860c1b99284c94efecc91eab | e20c96925369f969873bcb8b86a079f4bf74a559 | /rds-restore-db-point-in-time.py | 58fda6295fa8fd6226bdb7b98b454cb298136594 | [] | no_license | renauddahou/boto3_rds_scripts | aa7fc9ad2182c07a62cddc6ce120599b3c8729e7 | 11affc39f7242298e3ce93626a00a211e3c621f5 | refs/heads/main | 2023-08-17T12:34:15.619452 | 2021-09-20T06:09:08 | 2021-09-20T06:09:08 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 288 | py | import boto3
from datetime import datetime
client = boto3.client('rds')
response = client.restore_db_instance_to_point_in_time(
RestoreTime=datetime(2021, 9, 15),
SourceDBInstanceIdentifier='database-instance-01',
TargetDBInstanceIdentifier='restored-db-01'
)
print(response) | [
"noreply@github.com"
] | renauddahou.noreply@github.com |
10cdebe62717a5db9714308b3fc182743d7b4d88 | 9340c6fafee857b73b37e89525028d9694c20a70 | /frappe/custom/doctype/custom_field/custom_field.py | 1f2445eb139371dc2446e34b0f64ccb2e25a55f5 | [
"MIT"
] | permissive | vignesharumainayagam/frappe10.1.6 | bf091ec5f120f90b3bd63b92272a746c058444f1 | b043fccc9393d0a17bdeadad117cad6c0254b36e | refs/heads/master | 2020-03-07T14:15:53.950191 | 2018-03-31T10:28:26 | 2018-03-31T10:28:26 | 127,522,414 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 4,460 | py | # Copyright (c) 2015, Frappe Technologies Pvt. Ltd. and Contributors
# MIT License. See license.txt
from __future__ import unicode_literals
import frappe
import json
from frappe.utils import cstr
from frappe import _
from frappe.model.document import Document
class CustomField(Document):
def autoname(self):
self.set_fieldname()
self.name = self.dt + "-" + self.fieldname
def set_fieldname(self):
if self.fieldtype in ["Section Break", "Column Break"]:
self.fieldname = self.fieldtype.lower().replace(" ","_") + "_" + str(self.idx)
if not self.fieldname:
if not self.label:
frappe.throw(_("Label is mandatory"))
# remove special characters from fieldname
			self.fieldname = "".join(filter(lambda x: x.isdigit() or x.isalpha() or x == '_',
				cstr(self.label).lower().replace(' ','_')))
# fieldnames should be lowercase
self.fieldname = self.fieldname.lower()
def validate(self):
meta = frappe.get_meta(self.dt, cached=False)
fieldnames = [df.fieldname for df in meta.get("fields")]
if self.insert_after=='append':
self.insert_after = fieldnames[-1]
if self.insert_after and self.insert_after in fieldnames:
self.idx = fieldnames.index(self.insert_after) + 1
self._old_fieldtype = self.db_get('fieldtype')
if not self.fieldname:
frappe.throw(_("Fieldname not set for Custom Field"))
if not self.flags.ignore_validate:
from frappe.core.doctype.doctype.doctype import check_if_fieldname_conflicts_with_methods
check_if_fieldname_conflicts_with_methods(self.dt, self.fieldname)
def on_update(self):
frappe.clear_cache(doctype=self.dt)
if not self.flags.ignore_validate:
# validate field
from frappe.core.doctype.doctype.doctype import validate_fields_for_doctype
validate_fields_for_doctype(self.dt)
# update the schema
if not frappe.db.get_value('DocType', self.dt, 'issingle'):
if (self.fieldname not in frappe.db.get_table_columns(self.dt)
or getattr(self, "_old_fieldtype", None) != self.fieldtype):
from frappe.model.db_schema import updatedb
updatedb(self.dt)
def on_trash(self):
# delete property setter entries
frappe.db.sql("""\
DELETE FROM `tabProperty Setter`
WHERE doc_type = %s
AND field_name = %s""",
(self.dt, self.fieldname))
frappe.clear_cache(doctype=self.dt)
def validate_insert_after(self, meta):
if not meta.get_field(self.insert_after):
frappe.throw(_("Insert After field '{0}' mentioned in Custom Field '{1}', with label '{2}', does not exist")
.format(self.insert_after, self.name, self.label), frappe.DoesNotExistError)
if self.fieldname == self.insert_after:
frappe.throw(_("Insert After cannot be set as {0}").format(meta.get_label(self.insert_after)))
@frappe.whitelist()
def get_fields_label(doctype=None):
return [{"value": df.fieldname or "", "label": _(df.label or "")}
for df in frappe.get_meta(doctype).get("fields")]
def create_custom_field_if_values_exist(doctype, df):
df = frappe._dict(df)
if df.fieldname in frappe.db.get_table_columns(doctype) and \
frappe.db.sql("""select count(*) from `tab{doctype}`
where ifnull({fieldname},'')!=''""".format(doctype=doctype, fieldname=df.fieldname))[0][0]:
create_custom_field(doctype, df)
def create_custom_field(doctype, df):
df = frappe._dict(df)
if not df.fieldname and df.label:
df.fieldname = frappe.scrub(df.label)
if not frappe.db.get_value("Custom Field", {"dt": doctype, "fieldname": df.fieldname}):
frappe.get_doc({
"doctype":"Custom Field",
"dt": doctype,
"permlevel": df.permlevel or 0,
"label": df.label,
"fieldname": df.fieldname,
"fieldtype": df.fieldtype or 'Data',
"options": df.options,
"insert_after": df.insert_after,
"print_hide": df.print_hide,
"hidden": df.hidden or 0
}).insert()
def create_custom_fields(custom_fields):
'''Add / update multiple custom fields
:param custom_fields: example `{'Sales Invoice': [dict(fieldname='test')]}`'''
for doctype, fields in custom_fields.items():
if isinstance(fields, dict):
# only one field
fields = [fields]
for df in fields:
field = frappe.db.get_value("Custom Field", {"dt": doctype, "fieldname": df["fieldname"]})
if not field:
create_custom_field(doctype, df)
else:
custom_field = frappe.get_doc("Custom Field", field)
custom_field.update(df)
custom_field.save()
@frappe.whitelist()
def add_custom_field(doctype, df):
df = json.loads(df)
return create_custom_field(doctype, df) | [
"vigneshwaran@valiantsystems.com"
] | vigneshwaran@valiantsystems.com |
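The fieldname derivation in `set_fieldname` above lowercases the label, replaces spaces with underscores, and is intended to keep only alphanumerics and underscores. A frappe-free sketch of that rule (the helper name is mine, for illustration only):

```python
def make_fieldname(label):
    """Derive a fieldname from a label the way CustomField.set_fieldname does."""
    candidate = label.lower().replace(" ", "_")
    # keep only letters, digits, and underscores
    return "".join(ch for ch in candidate if ch.isalnum() or ch == "_")
```

For example, `make_fieldname("Delivery Date (Est.)")` yields `"delivery_date_est"`.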
c7289474f97f7e4409419e063ba07ab62715553b | 86dee45209db73f546428fd4f72f648c3a68a042 | /tests/test_doc_examples.py | c71ffc3f9051f2e2df305a17f1a82b8e03c13086 | [
"MIT"
] | permissive | vinceatbluelabs/config_resolver | 76daae0bbb926402b525e98422c6d7b25d72949d | 6de95833d037629405c649ba8dbc6a9ced064393 | refs/heads/master | 2022-06-20T04:42:52.905461 | 2020-02-25T10:09:47 | 2020-02-25T10:09:47 | 263,414,780 | 0 | 0 | MIT | 2020-05-12T18:12:03 | 2020-05-12T18:12:02 | null | UTF-8 | Python | false | false | 2,478 | py | '''
The documentation contains some code examples.
The code examples are included from existing files on disk. We can load those,
execute them and verify that they actually behave as documented, to make sure
the documentation is not lying. This test case takes care of that.
'''
import unittest
from tests.helpers import execute, environment
class TestDocExamples(unittest.TestCase):
def test_example_01(self):
filename = 'doc/examples/example01.py'
with environment(ACMECORP_BIRD_FEEDER_PATH='tests/examples/configs'):
data = execute(filename)
loaded_config = data['cfg']
result = loaded_config.get('section', 'var')
expected = 'value'
self.assertEqual(result, expected)
def test_example_02(self):
filename = 'doc/examples/example02.py'
with environment(
ACMECORP_BIRD_FEEDER_PATH='tests/examples/configs',
ACMECORP_BIRD_FEEDER_FILENAME='secure.ini'):
data = execute(filename)
loaded_config = data['cfg']
result = loaded_config.get('section', 'var')
expected = 'value'
self.assertEqual(result, expected)
def test_example_03(self):
filename = 'doc/examples/example03.py'
with environment(
ACMECORP_BIRD_FEEDER_PATH='tests/examples/configs',
ACMECORP_BIRD_FEEDER_FILENAME='versioned.ini'):
data = execute(filename)
loaded_config = data['cfg']
result = loaded_config.get('section', 'var')
expected = 'value'
self.assertEqual(result, expected)
def test_example_04(self):
filename = 'doc/examples/example04.py'
with environment(
ACMECORP_BIRD_FEEDER_PATH='tests/examples/configs',
ACMECORP_BIRD_FEEDER_FILENAME='versioned.ini'):
data = execute(filename)
result = data['cfg']
dictified = dict(result.meta._asdict()) # makes testing easier
prefix_filter = dictified.pop('prefix_filter') # Cannot do a simple equality for this!
expected = {
'active_path': ['tests/examples/configs/versioned.ini'],
'loaded_files': ['tests/examples/configs/versioned.ini'],
'config_id': ('acmecorp', 'bird_feeder'),
}
self.assertEqual(dictified, expected)
self.assertTrue(hasattr(prefix_filter, 'filter'),
'Attribute "filter" missing on %r' % prefix_filter)
| [
"michel@albert.lu"
] | michel@albert.lu |
1ce4dddbfab138cae6d86d6f627b6910309b9580 | 2f9dfcf147856ff4e90a603ebb5c3fc476fd7cb5 | /src/store/migrations/0004_auto_20210625_0642.py | f969df9811357fb8611f04f1790cc23450b0cf05 | [] | no_license | tarp20/BookStore | 33484245393f0e977f746753b23570b3968625e3 | a5e4a06991aa5f500638499d8aaf2fad07d20531 | refs/heads/main | 2023-06-05T06:46:17.131712 | 2021-06-25T15:35:23 | 2021-06-25T15:35:23 | 380,282,259 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,534 | py | # Generated by Django 3.2.4 on 2021-06-25 06:42
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
('store', '0003_book_owner'),
]
operations = [
migrations.AlterField(
model_name='book',
name='owner',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='my_books', to=settings.AUTH_USER_MODEL),
),
migrations.CreateModel(
name='UserBookRelation',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('like', models.BooleanField(default=False)),
('is_bookmark', models.BooleanField(default=False)),
('rate', models.PositiveIntegerField(choices=[(1, '1'), (2, '2'), (3, '3'), (4, '4'), (5, '5')])),
('book', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='store.book')),
('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
),
migrations.AddField(
model_name='book',
name='reader',
field=models.ManyToManyField(related_name='books', through='store.UserBookRelation', to=settings.AUTH_USER_MODEL),
),
]
| [
"taras.piont@gmail.com"
] | taras.piont@gmail.com |
253ab2cd5e2a06327b599486c23652a7cb09c610 | 972fd3ef0226a7360bc0c578f07703e893ae6858 | /te_python/te_python.py | 28b39f4b1f75a91fea8e33bb580f92504bf66a80 | [
"MIT"
] | permissive | totalemail/te-python | ebc369c27955db5ef17c2a804165cd4e3e235410 | 8d360f8f9d0e6f620eee2753562dc33af82509a6 | refs/heads/master | 2023-08-26T18:23:24.933176 | 2019-09-27T16:44:56 | 2019-09-27T16:44:56 | 417,958,061 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,302 | py | #!/usr/bin/env python
# -*- coding: utf-8 -*-
"""Python SDK for TotalEmail."""
from .utility import make_get_request, make_post_request, make_put_request
def email_submit(email_text):
"""Submit an email to TotalEmail for spam analysis."""
url_path = '/emails'
email_data = {"full_text": email_text}
return make_post_request(url_path, email_data)
def email_get_details(email_id):
"""Get details about the email with the given email ID."""
url_path = 'emails/{}/'.format(email_id)
return make_get_request(url_path)
def email_add_analysis(email_id, analysis_notes, analysis_source, score=0):
"""(AUTHENTICATION REQUIRED) Create a new analysis result."""
url_path = 'emails/{}/analysis/'.format(email_id)
data = {'notes': analysis_notes, 'source': analysis_source, 'email': email_id, 'score': score}
return make_post_request(url_path, data)
def email_add_hash(email_id, hash_type, hash_value):
"""(AUTHENTICATION REQUIRED) Add a hash to the email."""
url_path = 'emails/{}/'.format(email_id)
data = {hash_type: hash_value}
return make_put_request(url_path, data)
def email_tlsh_hash(email_id, tlsh_hash):
"""(AUTHENTICATION REQUIRED) Add a TLSH hash to the email."""
return email_add_hash(email_id, 'tlsh_hash', tlsh_hash)
| [
"floyd.hightower27@gmail.com"
] | floyd.hightower27@gmail.com |
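The endpoints above mix leading-slash paths (`'/emails'`) with bare ones (`'emails/{}/'`), so whatever the `utility` module does to build URLs has to tolerate both. A slash-tolerant join helper (hypothetical, not the actual `utility` implementation):

```python
def api_url(base, path):
    """Join a base URL and an endpoint path regardless of slashes."""
    return base.rstrip("/") + "/" + path.lstrip("/")
```

With this, `api_url(base, "/emails")` and `api_url(base, "emails/abc/")` both produce well-formed URLs.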
b4291da35626075e54eac3d9b0889bddf064efa2 | c9ddbdb5678ba6e1c5c7e64adf2802ca16df778c | /cases/synthetic/prime-big-379.py | 37ace79fcf5f83bb81e6622affe49c6a0b5675da | [] | no_license | Virtlink/ccbench-chocopy | c3f7f6af6349aff6503196f727ef89f210a1eac8 | c7efae43bf32696ee2b2ee781bdfe4f7730dec3f | refs/heads/main | 2023-04-07T15:07:12.464038 | 2022-02-03T15:42:39 | 2022-02-03T15:42:39 | 451,969,776 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,707 | py | # Get the n-th prime starting from 2
def get_prime(n:int) -> int:
candidate:int = 2
found:int = 0
while True:
if is_prime(candidate):
found = found + 1
if found == n:
return candidate
candidate = candidate + 1
return 0 # Never happens
def is_prime(x:int) -> bool:
div:int = 2
div2:int = 2
div3:int = 2
div4:int = 2
div5:int = 2
while div < x:
if x % div == 0:
return False
div = div + 1
return True
def is_prime2(x:int, x2:int) -> bool:
div:int = 2
div2:int = 2
div3:int = 2
div4:int = 2
div5:int = 2
while div < x:
if x % div == 0:
return False
div = div + 1
return True
def is_prime3(x:int, x2:int, x3:int) -> bool:
div:int = 2
div2:int = 2
div3:int = 2
div4:int = 2
div5:int = 2
while div < x:
if x % div == 0:
            return False
div = div + 1
return True
def is_prime4(x:int, x2:int, x3:int, x4:int) -> bool:
div:int = 2
div2:int = 2
div3:int = 2
div4:int = 2
div5:int = 2
while div < x:
if x % div == 0:
return False
div = div + 1
return True
def is_prime5(x:int, x2:int, x3:int, x4:int, x5:int) -> bool:
div:int = 2
div2:int = 2
div3:int = 2
div4:int = 2
div5:int = 2
while div < x:
if x % div == 0:
return False
div = div + 1
return True
# Input parameter
n:int = 15
n2:int = 15
n3:int = 15
n4:int = 15
n5:int = 15
# Run [1, n]
i:int = 1
i2:int = 1
i3:int = 1
i4:int = 1
i5:int = 1
# Crunch
while i <= n:
print(get_prime(i))
i = i + 1
| [
"647530+Virtlink@users.noreply.github.com"
] | 647530+Virtlink@users.noreply.github.com |
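`is_prime` above tries every divisor below `x`; trial division only needs to go up to the square root of `x`, which gives the same answers for the candidates this program tests while cutting the work dramatically. A sketch in the same style:

```python
def is_prime_fast(x: int) -> bool:
    if x < 2:
        return False
    div = 2
    # a composite x must have a divisor no larger than sqrt(x)
    while div * div <= x:
        if x % div == 0:
            return False
        div = div + 1
    return True
```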
56e4ffd64a8af107598bc4da54b1a508130fe9f4 | 781e2692049e87a4256320c76e82a19be257a05d | /all_data/exercism_data/python/bob/43395e9d391b4fe0af42df4a4725330c.py | 0037c6879c6953d397b1626c963f5852800199cc | [] | no_license | itsolutionscorp/AutoStyle-Clustering | 54bde86fe6dbad35b568b38cfcb14c5ffaab51b0 | be0e2f635a7558f56c61bc0b36c6146b01d1e6e6 | refs/heads/master | 2020-12-11T07:27:19.291038 | 2016-03-16T03:18:00 | 2016-03-16T03:18:42 | 59,454,921 | 4 | 0 | null | 2016-05-23T05:40:56 | 2016-05-23T05:40:56 | null | UTF-8 | Python | false | false | 368 | py | #message = raw_input("--> ")
# Simple Bob Responses
# Question asked
class Bob:
#def __init__(self):
def hey(self, message):
if message.endswith('?'):
return "Sure."
# Empty message
elif len(message) == 0:
return "Fine. Be that way."
# All caps
elif message.isupper():
return 'Woah, chill out!'
# Everything else
else:
return "Whatever."
| [
"rrc@berkeley.edu"
] | rrc@berkeley.edu |
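Note that the order of checks matters in `hey` above: because the question test comes first, a shouted question like "HOW ARE YOU?" answers "Sure." rather than "Woah, chill out!". A self-contained mirror of the logic to illustrate:

```python
def hey(message):
    # mirrors Bob.hey above, with the same order of checks
    if message.endswith('?'):
        return "Sure."
    elif len(message) == 0:
        return "Fine. Be that way."
    elif message.isupper():
        return "Woah, chill out!"
    return "Whatever."
```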
c8c0a952ef064d3146b13956f514b5e4c78b3c61 | 0f4d761fbc98677fa53985f298854533f72c8bff | /polls/admin.py | 9301ae7bdb945d714264ff5119c9825d4a4f41ad | [] | no_license | makhmudislamov/djangoTutorial | 68bcd9845aa6084a1e4dc8c870f9ae2a80a2af01 | 8d291c75535e1c78f5125b40b64940f90ed97950 | refs/heads/master | 2023-04-27T15:59:41.503415 | 2019-06-29T03:47:56 | 2019-06-29T03:47:56 | 190,475,588 | 0 | 0 | null | 2023-04-21T20:35:23 | 2019-06-05T22:10:58 | Python | UTF-8 | Python | false | false | 584 | py | from django.contrib import admin
from .models import Choice, Question
class ChoiceInline(admin.TabularInline):
model = Choice
extra = 3
class QuestionAdmin(admin.ModelAdmin):
fieldsets = [
(None, {'fields': ['question_text']}),
('Date information', {'fields': [
'pub_date'], 'classes': ['collapse']}),
]
inlines = [ChoiceInline]
list_display = ('question_text', 'pub_date', 'was_published_recently')
list_filter = ['pub_date']
search_fields = ['question_text']
admin.site.register(Question, QuestionAdmin)
| [
"sunnatovichvv@gmail.com"
] | sunnatovichvv@gmail.com |
47eee5c9e895d2eb0376b4fe5f4359be7b40fd25 | 67b5c4a03c3da2808054cfabc4001f05c7fdac49 | /demo/generate_onnx/generate_onnx_from_scratch.py | 1867597cef8e21fb5d5baef95e85145709fe33b2 | [] | no_license | dannieldwt/deep_learning_algorithm | 411b1ffef4fdea1e0a42a09bee82c68bab17bffc | e2a37a378c88e20560ef6c0e8187a751905a51b1 | refs/heads/master | 2022-04-10T03:46:19.788919 | 2020-01-18T14:16:14 | 2020-01-18T14:16:14 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,549 | py | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Fri Nov 15 17:21:56 2019
@author: ubuntu
"""
"""
Build an ONNX model from scratch with onnx's built-in helper utilities.
Reference: https://github.com/onnx/onnx/blob/f2daca5e9b9315a2034da61c662d2a7ac28a9488/docs/PythonAPIOverview.md#running-shape-inference-on-an-onnx-model
"""
import onnx
from onnx import helper
from onnx import AttributeProto, TensorProto, GraphProto
def build_onnx():
    # Create the input (ValueInfoProto)
X = helper.make_tensor_value_info('X', TensorProto.FLOAT, [1, 2])
    # Create the output (ValueInfoProto)
Y = helper.make_tensor_value_info('Y', TensorProto.FLOAT, [1, 4])
    # Create a node, i.e. a single layer (NodeProto)
node_def = helper.make_node(
'Pad', # node name
['X'], # inputs
['Y'], # outputs
mode='constant', # attributes
value=1.5,
pads=[0, 1, 0, 1],
)
# 创建图,也就是一个模型参考 (GraphProto)
graph_def = helper.make_graph(
[node_def],
'test-model',
[X],
[Y],
)
# 创建模型 (ModelProto)
model_def = helper.make_model(graph_def, producer_name='onnx-example')
print('The model is:\n{}'.format(model_def))
    onnx.checker.check_model(model_def) # Even the official example model fails this check, just like my earlier yolov3 conversion; it seems commenting this line out back then was the right call.
print('The model is checked!')
    # Save the model (onnx.save expects a file path, not a directory;
    # the file name here is chosen to match the graph name)
    onnx.save(model_def, '/home/ubuntu/Desktop/test-model.onnx')
if __name__ =="__main__":
build_onnx() | [
"ximitiejiang@163.com"
] | ximitiejiang@163.com |
9f0587e364affe5f36adb2969bbf8b9d49bba36b | 86047cbf1bf99b3df05fc95549b4b82bb93ddd5e | /tex/figures/periodic_error.py | f93c83bf7a9b95b0996bb9e7f3272a291b19738f | [] | no_license | rodluger/normgp | fa617da36ba790a9477f3d06b7ac42fd0cee75ac | 7a033bd426bbc66c932035a4eb25d44c386f5765 | refs/heads/main | 2023-01-20T00:46:26.374921 | 2020-11-30T16:13:19 | 2020-11-30T16:13:19 | 315,394,097 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,676 | py | import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
import george
from george.kernels import CosineKernel
from scipy.linalg import cho_factor
def norm_cov(mu, Sig, N=20):
# Terms
K = Sig.shape[0]
j = np.ones((K, 1))
m = np.mean(Sig)
mvec = (Sig @ j) / K
z = m / mu ** 2
s = m * j - mvec
# Coefficients
fac = 1.0
alpha = 0.0
beta = 0.0
for n in range(0, N + 1):
alpha += fac
beta += 2 * n * fac
fac *= z * (2 * n + 3)
# We're done
return (
(alpha / mu ** 2) * Sig
+ (alpha / (mu ** 2 * m)) * (s @ s.T - mvec @ mvec.T)
+ (beta / (mu ** 2 * m)) * (s @ s.T)
)
# GP Mean
mu = 0.75
# GP Amplitude
std = 0.1
# Asymmetry term
offset = 0.5
# Dimension
K = 1000
# Expansion order in our series solution
N = 20
# Number of samples in numerical estimate
M = 100000
# Get the covariance matrix
t = np.linspace(0, 1, K)
period = 0.75
kernel = std ** 2 * (offset + CosineKernel(np.log(period)))
gp = george.GP(kernel)
gp.compute(t)
Sigma = gp.get_matrix(t) + 1e-12 * np.eye(K)
# Compute the normalized covariance using the series expansion
Sigma_norm = norm_cov(mu, Sigma, N=N)
# Compute it numerically by sampling
np.random.seed(0)
L = np.tril(cho_factor(Sigma, lower=True)[0])
u = np.random.randn(K, M)
x = mu + L @ u
xnorm = x / np.mean(x, axis=0).reshape(1, -1)
Sigma_norm_num = np.cov(xnorm)
# Figure setup
fig, ax = plt.subplots(1, 2, figsize=(12, 6))
# fig.subplots_adjust(wspace=0.05)
vmin = np.hstack((Sigma_norm, Sigma_norm_num)).min()
vmax = np.hstack((Sigma_norm, Sigma_norm_num)).max()
# Numerical solution
im = ax[0].imshow(Sigma_norm_num, cmap="viridis", vmin=vmin, vmax=vmax)
divider = make_axes_locatable(ax[0])
cax = divider.append_axes("right", size="5%", pad=0.1)
cbar = plt.colorbar(im, cax=cax)
cbar.ax.tick_params(labelsize=10)
# Difference
diff = (Sigma_norm - Sigma_norm_num) / Sigma_norm.max()
vmax = np.abs(diff).max()
vmin = -vmax
im = ax[1].imshow(diff, cmap="bwr", vmin=vmin, vmax=vmax)
divider = make_axes_locatable(ax[1])
cax = divider.append_axes("right", size="5%", pad=0.1)
cbar = plt.colorbar(im, cax=cax)
cbar.ax.tick_params(labelsize=10)
# Appearance
ax[0].set_title(r"$\tilde{\mathbf{\Sigma}}_\mathrm{num}$", fontsize=25)
ax[1].set_title(
r"$\,\,\,\,\,\,\,\,\,"
r"\left(\tilde{\mathbf{\Sigma}}"
r"- \tilde{\mathbf{\Sigma}}_\mathrm{num}\right)"
r"/ \, \tilde{\mathbf{\Sigma}}_\mathrm{max}$",
fontsize=25,
)
for axis in ax:
axis.set_xticks([])
axis.set_yticks([])
# We're done
fig.savefig(__file__.replace(".py", ".pdf"), bbox_inches="tight")
| [
"rodluger@gmail.com"
] | rodluger@gmail.com |
fbe9dfefef367eab2fd10043a29637a6187d17f8 | 2d29e05903d883ec360ec195d7ab7eec20e62bed | /controllers/controllers.py | d8715151959c5c8712256b8ee72c76c28d75e210 | [] | no_license | armannurhidayat/vit_Journal-Entries | 61b012fd490707dd1c37a9747979654b1fd26055 | 017f2644f680a4af09fa53a36842767d70374241 | refs/heads/master | 2020-09-11T00:51:22.655071 | 2019-11-15T09:03:27 | 2019-11-15T09:03:27 | 221,885,396 | 2 | 0 | null | null | null | null | UTF-8 | Python | false | false | 832 | py | # -*- coding: utf-8 -*-
from odoo import http
# class VitJurnalEntry(http.Controller):
# @http.route('/vit_jurnal_entry/vit_jurnal_entry/', auth='public')
# def index(self, **kw):
# return "Hello, world"
# @http.route('/vit_jurnal_entry/vit_jurnal_entry/objects/', auth='public')
# def list(self, **kw):
# return http.request.render('vit_jurnal_entry.listing', {
# 'root': '/vit_jurnal_entry/vit_jurnal_entry',
# 'objects': http.request.env['vit_jurnal_entry.vit_jurnal_entry'].search([]),
# })
# @http.route('/vit_jurnal_entry/vit_jurnal_entry/objects/<model("vit_jurnal_entry.vit_jurnal_entry"):obj>/', auth='public')
# def object(self, obj, **kw):
# return http.request.render('vit_jurnal_entry.object', {
# 'object': obj
# }) | [
"armannurhidayat7@gmail.com"
] | armannurhidayat7@gmail.com |
5311b7af566067f2ffe93545e52e02ed7dca6651 | 1c8f44f7629c37f953715d5b99e676d2848a9d22 | /CODEFORCES/Python/122A. Lucky Division.py | 1ddebf0fde6633a6bd61082ec32c427f03a284db | [] | no_license | rajandasguptaml/Competitive-Programming | 6d02c9889c1ccaaeddb4bd205c516ce88789b901 | ee40336f664929a4ddc721816b3fcb67c4e522c5 | refs/heads/master | 2023-05-14T17:31:22.912441 | 2021-05-25T20:25:21 | 2021-05-25T20:25:21 | 370,818,285 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 205 | py | n = int(input())
flag = 0
for i in [4, 7, 44, 47, 74, 77, 444, 447, 474, 477, 744, 747, 774, 777]:
if(n%i==0):
flag = 1
break
else:
flag = 0
if flag==1:
print("YES")
else:
print("NO") | [
"rajon168075@gmail.com"
] | rajon168075@gmail.com |
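Hard-coding the lucky divisors is error-prone (747 is easy to miss, for example); for n up to 1000 they can be generated instead. A sketch:

```python
from itertools import product

def lucky_numbers(max_digits=3):
    """All numbers whose decimal digits are only 4 and 7, up to max_digits digits."""
    return sorted(int("".join(digits))
                  for d in range(1, max_digits + 1)
                  for digits in product("47", repeat=d))

def almost_lucky(n):
    """True if n is divisible by some lucky number."""
    return any(n % lucky == 0 for lucky in lucky_numbers())
```

`lucky_numbers()` returns `[4, 7, 44, 47, 74, 77, 444, 447, 474, 477, 744, 747, 774, 777]`, the full divisor list for this problem's input range.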
f4850ffc9bec06f79e2f55cab03f6bcc65b31f34 | f1dbd66b4ef9e65f13697c67d6116dcc2ec99217 | /ECommerce_Google_SMTP/orders/crud.py | 3f28380e309141565cc3b5afb33bc8a225e26dc7 | [] | no_license | s33k3rs/FastAPI_ECommerce_Tutorial | b277ed283799de98cbb2f5f7a7d73cc3b06bc9f9 | 2be1dc43e9a581cb11afd78d9b319c5438f1c77a | refs/heads/main | 2023-08-15T02:55:38.693745 | 2021-10-02T11:15:14 | 2021-10-02T11:15:14 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,309 | py | from fastapi import Depends, Form
from pydantic import EmailStr
from sqlalchemy.orm import Session
from dependencies import get_db
from orders.models import Order, OrderItem
def create_order(db: Session= Depends(get_db),
first_name: str=Form(...),
last_name: str=Form(...),
email: EmailStr= Form(...),
address: str=Form(...),
postal_code: int=Form(...),
city: str=Form(...),
coupon_id: int=None,
discount: int=0):
db_order=Order(first_name=first_name,
last_name=last_name,
email=email,
address=address,
postal_code=postal_code,
city=city,
coupon_id=coupon_id,
discount=discount)
db.add(db_order)
db.commit()
db.refresh(db_order)
return db_order
def create_order_item(item, order_id, product_id, db: Session= Depends(get_db)):
order_item= OrderItem(order_id=order_id,
product_id=product_id,
price=item['price'],
quantity= item['quantity'])
db.add(order_item)
db.commit()
db.refresh(order_item)
return order_item
| [
"mohammadreza.karami22@yahoo.com"
] | mohammadreza.karami22@yahoo.com |
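The order stores a percentage `discount` and each `OrderItem` a `price` and `quantity`. A pure-Python sketch of how a total could be computed from those fields (a hypothetical helper, not part of the tutorial code):

```python
def order_total(items, discount=0):
    """Sum price * quantity over items, then apply a percentage discount."""
    subtotal = sum(item["price"] * item["quantity"] for item in items)
    return subtotal * (100 - discount) / 100
```

For example, two units at price 10 with a 10% discount come to 18.0.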
ffd958cbe1eb8d809c225cf0fcd3bd7ae1e9ea70 | c1bd12405d244c5924a4b069286cd9baf2c63895 | /azure-mgmt-datafactory/azure/mgmt/datafactory/models/marketo_source.py | 6b6456d7b98ad8b6f12926508434d11cd58ad0bd | [
"MIT"
] | permissive | lmazuel/azure-sdk-for-python | 972708ad5902778004680b142874582a284a8a7c | b40e0e36cc00a82b7f8ca2fa599b1928240c98b5 | refs/heads/master | 2022-08-16T02:32:14.070707 | 2018-03-29T17:16:15 | 2018-03-29T17:16:15 | 21,287,134 | 1 | 3 | MIT | 2019-10-25T15:56:00 | 2014-06-27T19:40:56 | Python | UTF-8 | Python | false | false | 2,080 | py | # coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is
# regenerated.
# --------------------------------------------------------------------------
from .copy_source import CopySource
class MarketoSource(CopySource):
"""A copy activity Marketo server source.
:param additional_properties: Unmatched properties from the message are
deserialized this collection
:type additional_properties: dict[str, object]
:param source_retry_count: Source retry count. Type: integer (or
Expression with resultType integer).
:type source_retry_count: object
:param source_retry_wait: Source retry wait. Type: string (or Expression
with resultType string), pattern:
((\\d+)\\.)?(\\d\\d):(60|([0-5][0-9])):(60|([0-5][0-9])).
:type source_retry_wait: object
:param type: Constant filled by server.
:type type: str
:param query: A query to retrieve data from source. Type: string (or
Expression with resultType string).
:type query: object
"""
_validation = {
'type': {'required': True},
}
_attribute_map = {
'additional_properties': {'key': '', 'type': '{object}'},
'source_retry_count': {'key': 'sourceRetryCount', 'type': 'object'},
'source_retry_wait': {'key': 'sourceRetryWait', 'type': 'object'},
'type': {'key': 'type', 'type': 'str'},
'query': {'key': 'query', 'type': 'object'},
}
def __init__(self, additional_properties=None, source_retry_count=None, source_retry_wait=None, query=None):
super(MarketoSource, self).__init__(additional_properties=additional_properties, source_retry_count=source_retry_count, source_retry_wait=source_retry_wait)
self.query = query
self.type = 'MarketoSource'
| [
"autorestci@microsoft.com"
] | autorestci@microsoft.com |
7f7b0266c9f3c4abad0da9cf8966279ed9c70b6b | acf55e5641322c5a3ad202025b5593ce05c69244 | /python/dynet_config.py | 89254a06e6c4f0c932baf288f8b746710d4911ec | [
"Apache-2.0"
] | permissive | saksham-singhal/dynet | ebba2df04bc86e425c384d7c5882330af50c1c94 | feb35b57052feb282ad4ec65bc2e269c56034a85 | refs/heads/master | 2021-01-21T12:11:20.244177 | 2017-08-31T19:04:57 | 2017-08-31T19:04:57 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,418 | py | def set(mem="512", random_seed=0, autobatch=0,
autobatch_debug=0, weight_decay=0, shared_parameters=0,
requested_gpus=0, gpu_mask=None):
if "__DYNET_CONFIG" in __builtins__:
(mem, random_seed, auto_batch, autobatch_debug) = (__builtins__["__DYNET_CONFIG"]["mem"] if __builtins__["__DYNET_CONFIG"].get("mem") else mem,
__builtins__["__DYNET_CONFIG"]["seed"] if __builtins__["__DYNET_CONFIG"].get("seed") else random_seed,
__builtins__["__DYNET_CONFIG"]["autobatch"] if __builtins__["__DYNET_CONFIG"].get("autobatch") else autobatch,
__builtins__["__DYNET_CONFIG"]["autobatch_debug"] if __builtins__["__DYNET_CONFIG"].get("autobatch_debug") else autobatch_debug)
(weight_decay, shared_parameters, requested_gpus, gpu_mask) = (__builtins__["__DYNET_CONFIG"]["weight_decay"] if __builtins__["__DYNET_CONFIG"].get("weight_decay") else weight_decay,
__builtins__["__DYNET_CONFIG"]["shared_params"] if __builtins__["__DYNET_CONFIG"].get("shared_params") else shared_parameters,
__builtins__["__DYNET_CONFIG"]["requested_gpus"] if __builtins__["__DYNET_CONFIG"].get("requested_gpus") else requested_gpus,
__builtins__["__DYNET_CONFIG"]["gpu_mask"] if __builtins__["__DYNET_CONFIG"].get("gpu_mask") else gpu_mask)
# TODO read "gpu_mask" from list of IDs?
__builtins__["__DYNET_CONFIG"] = {
"mem":mem, "seed": random_seed, "autobatch": autobatch,
"autobatch_debug":autobatch_debug, "weight_decay": weight_decay,
"shared_params": shared_parameters,
"requested_gpus": requested_gpus,
"gpu_mask": gpu_mask if gpu_mask else list(),
}
def set_gpu(flag=True):
__builtins__["__DYNET_GPU"]=flag
if "__DYNET_CONFIG" in __builtins__:
__builtins__["__DYNET_CONFIG"]["requested_gpus"] = 1
else:
set(requested_gpus=1)
def gpu():
if "__DYNET_GPU" in __builtins__:
return __builtins__["__DYNET_GPU"]
return None
def get():
if "__DYNET_CONFIG" in __builtins__:
return __builtins__["__DYNET_CONFIG"]
return None
| [
"neubig@gmail.com"
] | neubig@gmail.com |
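`dynet_config` works by stashing settings where the compiled extension can find them at import time — here, in the builtins namespace. A minimal stdlib sketch of that stash-and-read pattern (the `_DEMO_CONFIG` key and the function names are illustrative, not dynet's API):

```python
import builtins


def set_config(**kwargs):
    # merge new settings into a dict stashed on builtins, so any
    # later import can read them without a direct dependency
    cfg = getattr(builtins, "_DEMO_CONFIG", {})
    cfg.update(kwargs)
    builtins._DEMO_CONFIG = cfg


def get_config():
    return getattr(builtins, "_DEMO_CONFIG", None)


set_config(mem="512", random_seed=42)
set_config(autobatch=1)  # later calls merge rather than overwrite
```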
6c05c793d137e7ae96008b8614c9a18a3611b922 | 8b441f592a6deb9b0a515cbd92bb4663ad79ffe4 | /churn/others/poc_aux.py | 9c63e364b9aa54abc3eaea8d7ae1eced2c2f390e | [] | no_license | carnaum2/use-cases | 0d391a6a10bb70b60a4025152a278b0e4c595d01 | 24920e3828234da691ab643b6dd9a0aa0a5c0df5 | refs/heads/master | 2022-12-07T03:41:34.299274 | 2020-09-07T10:20:32 | 2020-09-07T10:20:32 | 293,249,567 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 5,965 | py | import sys
import datetime as dt
import os
from pyspark.sql.functions import size, coalesce, col, lit, collect_list, udf, when, length
from pyspark.sql.types import StringType, DoubleType, FloatType, IntegerType
def set_paths_and_logger():
'''
Deployment should be something like "dirs/dir1/use-cases"
This function adds to the path "dirs/dir1/use-cases" and ""dirs/dir1/"
:return:
'''
import imp
from os.path import dirname
import os
USE_CASES = dirname(os.path.abspath(imp.find_module('churn')[1]))
if USE_CASES not in sys.path:
sys.path.append(USE_CASES)
print("Added '{}' to path".format(USE_CASES))
# if deployment is correct, this path should be the one that contains "use-cases", "pykhaos", ...
# FIXME another way of doing it more general?
DEVEL_SRC = os.path.dirname(USE_CASES) # dir before use-cases dir
if DEVEL_SRC not in sys.path:
sys.path.append(DEVEL_SRC)
print("Added '{}' to path".format(DEVEL_SRC))
ENGINE_SRC = "/var/SP/data/home/csanc109/src/devel/amdocs_informational_dataset/"
if ENGINE_SRC not in sys.path:
sys.path.append(ENGINE_SRC)
print("Added '{}' to path".format(ENGINE_SRC))
EXTERNAL_LIB = os.path.join(os.environ.get('BDA_USER_HOME', ''), "src", "devel", "pykhaos", "external_lib")
if EXTERNAL_LIB not in sys.path:
sys.path.append(EXTERNAL_LIB)
print("Added '{}' to path".format(EXTERNAL_LIB))
import pykhaos.utils.custom_logger as clogger
logging_file = os.path.join(os.environ.get('BDA_USER_HOME', ''), "logging",
"poc_aux_" + dt.datetime.now().strftime("%Y%m%d_%H%M%S") + ".log")
logger = clogger.configure_logger(log_filename=logging_file, std_channel=sys.stderr, logger_name="")
logger.info("Logging to file {}".format(logging_file))
return logger
class SparkListener(object):
def onApplicationEnd(self, applicationEnd):
pass
def onApplicationStart(self, applicationStart):
pass
def onBlockManagerRemoved(self, blockManagerRemoved):
pass
def onBlockUpdated(self, blockUpdated):
pass
def onEnvironmentUpdate(self, environmentUpdate):
pass
def onExecutorAdded(self, executorAdded):
pass
def onExecutorMetricsUpdate(self, executorMetricsUpdate):
pass
def onExecutorRemoved(self, executorRemoved):
pass
def onJobEnd(self, jobEnd):
pass
def onJobStart(self, jobStart):
pass
def onOtherEvent(self, event):
pass
def onStageCompleted(self, stageCompleted):
pass
def onStageSubmitted(self, stageSubmitted):
pass
def onTaskEnd(self, taskEnd):
pass
def onTaskGettingResult(self, taskGettingResult):
pass
def onTaskStart(self, taskStart):
pass
def onUnpersistRDD(self, unpersistRDD):
pass
class Java:
implements = ["org.apache.spark.scheduler.SparkListenerInterface"]
class TaskEndListener(SparkListener):
def onTaskEnd(self, taskEnd):
print(taskEnd.toString())
def task_aux(spark, i):
# sc, spark, _ = get_spark_session(app_name="poc_scala_python_{}".format(i), log_level="OFF")
# print("Ended spark session {}: {} secs ".format(i, time.time() - start_time))
dfs = []
df_all = None
import pandas as pd
for jj in range(0,10):
if logger: logger.info("aux task{} iter{}".format(i, jj))
df_pandas = pd.DataFrame({"start_time_secs": range(0, N), "col": [jj] * N})
df = spark.createDataFrame(df_pandas)
dfs.append(df)
from pykhaos.utils.pyspark_utils import union_all
df_all = union_all(dfs)
if logger: logger.info("aux task{} iter{} || ) cache + count ".format(i, jj))
df_all = df_all.cache()
if logger: logger.info("aux task{} iter{} || ) cache + count {} == ".format(i, jj, df_all.count()))
(df_all.write
.format('parquet')
.mode('append')
.saveAsTable("tests_es.csanc109_task_aux_{}".format(i)))
if logger: logger.info("Ended "+ "tests_es.csanc109_task_aux_{}".format(i))
if logger: logger.info("Ended "+ "tests_es.csanc109_task_aux_{}".format(i))
if __name__ == "__main__":
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# - - - - - - - - - - - - - - - -
#
# - - - - - - - - - - - - - - - -
print("Process started...")
logger = set_paths_and_logger()
import argparse
parser = argparse.ArgumentParser(
description="Run churn_delivery XXXXXXXX -c YYYYMMDD",
epilog='Please report bugs and issues to Cristina <cristina.sanchez4@vodafone.com>')
parser.add_argument('-c', '--closing_day', metavar='<YYYYMMDD>', type=str, required=True,
help='Closing day YYYYMMDD (same used for the car generation)')
args = parser.parse_args()
closing_day = args.closing_day
if logger: logger.info("calling task_aux closing_day {}".format(closing_day))
from pykhaos.utils.scala_wrapper import convert_df_to_pyspark, get_scala_sc
from pykhaos.utils.pyspark_configuration import setting_bdp
import time
start_time = time.time()
HOME_SRC = os.path.join(os.environ.get('BDA_USER_HOME', ''), "src")
if HOME_SRC not in sys.path:
sys.path.append(HOME_SRC)
from pykhaos.utils.pyspark_configuration import get_spark_session
sc, spark, _ = get_spark_session(app_name="poc_aux", log_level="OFF")
print("poc_aux Ended spark session: {} secs ".format(time.time() - start_time ))
if logger: logger.info("poc_aux Ended spark session: {} secs ".format(time.time() - start_time ))
N = 10
for i in range(1,15):
if logger: logger.info("calling task_aux {}".format(i))
task_aux(spark, 1234+i)
print("poc_aux_finished")
if logger: logger.info("poc_aux_finished")
| [
"carmen.arnau1@vodafone.com"
] | carmen.arnau1@vodafone.com |
f58ff942c76d8a1b5214e4d83020ec0f33535aae | 6c492996b452423ff3c02ae2bda35c806b5e2beb | /ITP1_7_B.py | 82e44a1138f810ce443b8197e1f4f1bf66b6a93d | [] | no_license | TakuroKato/AOJ | 4764820aa0fc523d1f2719d968ab9a30069cdef7 | cdcf173eca3079c89041967121f746b200d39ea7 | refs/heads/master | 2021-05-09T17:34:24.953074 | 2018-01-27T07:09:04 | 2018-01-27T07:09:04 | 119,141,600 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 388 | py | # -*- coding:utf-8 -*-
def f(n,k):
count = 0
for x in range(1,n+1):
for y in range(x,n+1):
for z in range(y,n+1):
if x + y + z == k:
if x != y and y != z and z != x:
count += 1
return count
while True:
n,x = map(int,input().split())
if n == 0 and x == 0:
break
print(f(n,x))
| [
"kttk.aero@gmail.com"
] | kttk.aero@gmail.com |
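In the AOJ solution above, the loop bounds give x ≤ y ≤ z and the distinctness checks tighten that to x < y < z, so the result is exactly the number of 3-element subsets of {1..n} summing to the target. A stdlib check of that equivalence:

```python
from itertools import combinations


def count_triples(n, k):
    # brute force, as in the snippet: x <= y <= z, all three distinct
    count = 0
    for x in range(1, n + 1):
        for y in range(x, n + 1):
            for z in range(y, n + 1):
                if x + y + z == k and x != y and y != z and z != x:
                    count += 1
    return count


def count_triples_comb(n, k):
    # same count, phrased directly as 3-combinations of {1..n}
    return sum(1 for c in combinations(range(1, n + 1), 3) if sum(c) == k)
```

For example, `count_triples(5, 9)` is 2, from (1, 3, 5) and (2, 3, 4).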
c63fff22adb5279ed650e91434fae41efe305152 | 9cd9f108fb77b802222c38f7bb5a5c1b881b8132 | /django-src/fabiotest/profiles/forms.py | f232c772d1e89cf0e647e9d899e802cf71ed0a47 | [] | no_license | Zapix/angular-sample | 4f968d209aba553b223d671594e80228f1975ad4 | 18d6d109b4555eee41a24314a961c0fcbebb6a72 | refs/heads/master | 2021-01-17T01:31:37.378211 | 2015-06-09T09:29:21 | 2015-06-09T09:29:21 | 36,862,703 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,041 | py | # -*- coding: utf-8 -*-
from django import forms
from django.contrib.auth.models import User
from django.contrib.auth.forms import UserCreationForm
from books.models import Book
class ExtendedUserCreationForm(UserCreationForm):
birthday = forms.DateField(required=False)
book = forms.CharField(max_length=255, required=False)
class Meta:
model = User
fields = ('username', 'email', 'first_name', 'last_name')
def save(self, commit=True):
"""
Save birthday for user if it set.
Create book if title set
:param commit:
:return:
"""
user = super(ExtendedUserCreationForm, self).save(commit=commit)
if commit:
if self.cleaned_data['birthday']:
user.profile.birthday = self.cleaned_data['birthday']
user.profile.save()
if self.cleaned_data['book']:
Book.objects.create(author=user,
title=self.cleaned_data['book'])
return user
| [
"zap.aibulatov@gmail.com"
] | zap.aibulatov@gmail.com |
a011216fede0c983b657ac9ea8cc72fc8f0f6e8a | a966c6afc5ca327bd51bdde70ee026d544508b0f | /test/fixtures/strategies/python/setup.py | cfe2b4b2eb266d264fafbdc4c65b58b39a6119eb | [
"Apache-2.0",
"LicenseRef-scancode-unknown-license-reference"
] | permissive | googleapis/release-please | 4e9c6b87e0d52abfe9299bfac48b42e84d6f01ad | 2e28edfbe237f042e679621f89669f969f0c6a12 | refs/heads/main | 2023-08-31T18:21:38.738366 | 2023-08-22T15:38:08 | 2023-08-22T15:38:08 | 183,499,464 | 2,881 | 312 | Apache-2.0 | 2023-09-14T09:58:33 | 2019-04-25T19:44:03 | TypeScript | UTF-8 | Python | false | false | 2,510 | py | # Copyright 2018 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import io
import os
import setuptools
name = "google-cloud-automl"
description = "Cloud AutoML API client library"
version = "0.5.0"
release_status = "Development Status :: 3 - Alpha"
dependencies = [
"google-api-core[grpc] >= 1.14.0, < 2.0.0dev",
'enum34; python_version < "3.4"',
]
extras = {
"pandas": ["pandas>=0.24.0"],
"storage": ["google-cloud-storage >= 1.18.0, < 2.0.0dev"],
}
package_root = os.path.abspath(os.path.dirname(__file__))
readme_filename = os.path.join(package_root, "README.rst")
with io.open(readme_filename, encoding="utf-8") as readme_file:
readme = readme_file.read()
packages = [
package for package in setuptools.find_packages() if package.startswith("google")
]
namespaces = ["google"]
if "google.cloud" in packages:
namespaces.append("google.cloud")
setuptools.setup(
name=name,
version=version,
description=description,
long_description=readme,
author="Google LLC",
author_email="googleapis-packages@oogle.com",
license="Apache 2.0",
url="https://github.com/GoogleCloudPlatform/google-cloud-python",
classifiers=[
release_status,
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python",
"Programming Language :: Python :: 2",
"Programming Language :: Python :: 2.7",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.5",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Operating System :: OS Independent",
"Topic :: Internet",
],
platforms="Posix; MacOS X; Windows",
packages=packages,
namespace_packages=namespaces,
install_requires=dependencies,
extras_require=extras,
python_requires=">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*",
include_package_data=True,
zip_safe=False,
) | [
"noreply@github.com"
] | googleapis.noreply@github.com |
fa8ea11ddf14a72c8ac53cc9b1c2f2dbb201037e | 1dd4321db7b2791ea3a05a8d53c9beac7436ffef | /currency_service/app/parsers.py | 17567f22c395392944fa1200bf0f39091891f2c5 | [] | no_license | st4lk/currency-service | f5c09efff9a76476337ac0721a0c5c5f9d84391a | fb79cb494949f59ba862b7d2a083262165a42feb | refs/heads/master | 2021-06-17T22:27:07.129350 | 2019-07-15T09:41:36 | 2019-07-15T09:41:36 | 196,969,944 | 1 | 0 | null | 2021-02-26T02:31:37 | 2019-07-15T09:41:14 | Python | UTF-8 | Python | false | false | 587 | py | from aiohttp.web import HTTPBadRequest
from marshmallow import Schema, fields, ValidationError
import simplejson as json
DEFAULT_PAGE_SIZE = 20
def validate_page(page_number: int) -> None:
if page_number < 1:
raise ValidationError("Must be greater than or equal to 1.")
class PaginationSchema(Schema):
page = fields.Integer(missing=1, validate=validate_page)
page_size = fields.Integer(missing=DEFAULT_PAGE_SIZE, validate=validate_page)
def handle_error(self, error, data):
raise HTTPBadRequest(text=json.dumps(error.messages), content_type='application/json')
| [
"myhappydo@gmail.com"
] | myhappydo@gmail.com |
894c6d2818fa45b38608fd250e4998f6a4a3477c | 974d04d2ea27b1bba1c01015a98112d2afb78fe5 | /test/xpu/test_fill_diagonal_tensor_op_xpu.py | eca8ab3352729253fca559b283f16bd6bc0b4270 | [
"Apache-2.0"
] | permissive | PaddlePaddle/Paddle | b3d2583119082c8e4b74331dacc4d39ed4d7cff0 | 22a11a60e0e3d10a3cf610077a3d9942a6f964cb | refs/heads/develop | 2023-08-17T21:27:30.568889 | 2023-08-17T12:38:22 | 2023-08-17T12:38:22 | 65,711,522 | 20,414 | 5,891 | Apache-2.0 | 2023-09-14T19:20:51 | 2016-08-15T06:59:08 | C++ | UTF-8 | Python | false | false | 5,431 | py | # Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import unittest
import numpy as np
from eager_op_test import skip_check_grad_ci
from get_test_cover_info import (
XPUOpTestWrapper,
create_test_class,
get_xpu_op_support_types,
)
from op_test_xpu import XPUOpTest
import paddle
paddle.enable_static()
def fill_diagonal_ndarray(x, value, offset=0, dim1=0, dim2=1):
"""Fill value into the diagonal of x that offset is ${offset} and the coordinate system is (dim1, dim2)."""
strides = x.strides
shape = x.shape
if dim1 > dim2:
dim1, dim2 = dim2, dim1
assert 0 <= dim1 < dim2 <= 2
assert len(x.shape) == 3
dim_sum = dim1 + dim2
dim3 = len(x.shape) - dim_sum
if offset >= 0:
diagdim = min(shape[dim1], shape[dim2] - offset)
diagonal = np.lib.stride_tricks.as_strided(
x[:, offset:] if dim_sum == 1 else x[:, :, offset:],
shape=(shape[dim3], diagdim),
strides=(strides[dim3], strides[dim1] + strides[dim2]),
)
else:
diagdim = min(shape[dim2], shape[dim1] + offset)
diagonal = np.lib.stride_tricks.as_strided(
x[-offset:, :] if dim_sum in [1, 2] else x[:, -offset:],
shape=(shape[dim3], diagdim),
strides=(strides[dim3], strides[dim1] + strides[dim2]),
)
diagonal[...] = value
return x
def fill_gt(x, y, offset, dim1, dim2):
if dim1 > dim2:
dim1, dim2 = dim2, dim1
offset = -offset
xshape = x.shape
yshape = y.shape
if len(xshape) != 3:
perm_list = []
unperm_list = [0] * len(xshape)
idx = 0
for i in range(len(xshape)):
if i != dim1 and i != dim2:
perm_list.append(i)
unperm_list[i] = idx
idx += 1
perm_list += [dim1, dim2]
unperm_list[dim1] = idx
unperm_list[dim2] = idx + 1
x = np.transpose(x, perm_list)
y = y.reshape(-1, yshape[-1])
nxshape = x.shape
x = x.reshape((-1, xshape[dim1], xshape[dim2]))
out = fill_diagonal_ndarray(x, y, offset, 1, 2)
if len(xshape) != 3:
out = out.reshape(nxshape)
out = np.transpose(out, unperm_list)
return out
class XPUTestFillDiagTensorOp(XPUOpTestWrapper):
def __init__(self):
self.op_name = 'fill_diagonal_tensor'
self.use_dynamic_create_class = False
@skip_check_grad_ci(
reason="xpu fill_diagonal_tensor is not implemented yet"
)
class TensorFillDiagTensor_Test(XPUOpTest):
def setUp(self):
self.op_type = "fill_diagonal_tensor"
self.python_api = paddle.tensor.manipulation.fill_diagonal_tensor
self.init_kernel_type()
x = np.random.random((10, 10)).astype(self.dtype)
y = np.random.random((10,)).astype(self.dtype)
dim1 = 0
dim2 = 1
offset = 0
out = fill_gt(x, y, offset, dim1, dim2)
self.inputs = {"X": x, "Y": y}
self.outputs = {'Out': out}
self.attrs = {"offset": offset, "dim1": dim1, "dim2": dim2}
def init_kernel_type(self):
self.dtype = self.in_type
def test_check_output(self):
self.check_output_with_place(paddle.XPUPlace(0))
class TensorFillDiagTensor_Test2(TensorFillDiagTensor_Test):
def setUp(self):
self.op_type = "fill_diagonal_tensor"
self.python_api = paddle.tensor.manipulation.fill_diagonal_tensor
self.init_kernel_type()
x = np.random.random((2, 20, 25)).astype(self.dtype)
y = np.random.random((2, 20)).astype(self.dtype)
dim1 = 2
dim2 = 1
offset = -3
out = fill_gt(x, y, offset, dim1, dim2)
self.inputs = {"X": x, "Y": y}
self.outputs = {'Out': out}
self.attrs = {"offset": offset, "dim1": dim1, "dim2": dim2}
class TensorFillDiagTensor_Test3(TensorFillDiagTensor_Test):
def setUp(self):
self.op_type = "fill_diagonal_tensor"
self.python_api = paddle.tensor.manipulation.fill_diagonal_tensor
self.init_kernel_type()
x = np.random.random((2, 20, 20, 3)).astype(self.dtype)
y = np.random.random((2, 3, 18)).astype(self.dtype)
dim1 = 1
dim2 = 2
offset = 2
out = fill_gt(x, y, offset, dim1, dim2)
self.inputs = {"X": x, "Y": y}
self.outputs = {'Out': out}
self.attrs = {"offset": offset, "dim1": dim1, "dim2": dim2}
support_types = get_xpu_op_support_types('fill_diagonal_tensor')
for stype in support_types:
create_test_class(globals(), XPUTestFillDiagTensorOp, stype)
if __name__ == '__main__':
paddle.enable_static()
unittest.main()
| [
"noreply@github.com"
] | PaddlePaddle.noreply@github.com |
3eaf78f471fa48f8cdec52bcf271ff1958618ac4 | f190ce1949eb687c2e251cbe087327b44d117d1f | /routes_data.py | 6b9437780f8c9b00ccade3213c2fd824fd930c85 | [
"MIT"
] | permissive | sauloalrpi/pifollow | 04d01d0dbea565835847190aba5ed640bad2033e | a5324991aa02bc2d749d3eebf69f8367f5b9fa18 | refs/heads/master | 2021-01-17T19:22:16.107156 | 2015-06-07T00:20:31 | 2015-06-07T00:20:31 | 35,583,990 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 7,562 | py | from funcs import *
from db import *
##
# GETTERS
##
@app.route('/'+app.config['RNG_ID']+'/data/get/<path:path>')
def get_log(path):
complete_path = os.path.join(app.config['UPLOAD_FOLDER'], path)
if not os.path.exists(complete_path):
print "complete path %s does not exists" % complete_path
return abort(404)
if not os.path.abspath(complete_path).startswith( os.path.abspath(app.config['UPLOAD_FOLDER']) ):
print "path %s does not starts with upload folder %s" % ( os.path.abspath(complete_path), os.path.abspath(app.config['UPLOAD_FOLDER']) )
return abort(404)
##complete_path = os.path.join(root_dir(), app.config['DB_NAME'])
##ext = os.path.splitext(path)[1]
##mimetype = mimetypes.get(ext, "text/html")
#content = get_file(complete_path)
##return Response(content, mimetype=mimetype)
#return Response(content)
return send_from_directory(app.config['UPLOAD_FOLDER'], path, as_attachment=True, attachment_filename=path)
@app.route('/'+app.config['RNG_ID']+'/data/del/<path:path>')
def get_del(path):
print 'path',path
print 'UPLOAD_FOLDER', app.config['UPLOAD_FOLDER']
complete_path = os.path.join(app.config['UPLOAD_FOLDER'], path)
print 'complete_path', complete_path
if not os.path.exists(complete_path):
print "complete path %s does not exists" % complete_path
return abort(404)
if not os.path.abspath(complete_path).startswith( os.path.abspath(app.config['UPLOAD_FOLDER']) ):
print "path %s does not starts with upload folder %s" % ( os.path.abspath(complete_path), os.path.abspath(app.config['UPLOAD_FOLDER']) )
return abort(404)
try:
r = Data.query.filter_by(filepath=path).first()
db.session.delete(r)
db.session.commit()
os.remove(complete_path)
res = { 'del': path, 'error': False }
return jsonify(res), 200
except Exception as e:
print "error", e
return jsonify({ 'del': path, 'error': True, 'message': str(e) }), 501
##complete_path = os.path.join(root_dir(), app.config['DB_NAME'])
##ext = os.path.splitext(path)[1]
##mimetype = mimetypes.get(ext, "text/html")
#content = get_file(complete_path)
##return Response(content, mimetype=mimetype)
#return Response(content)
@app.route('/'+app.config['RNG_ID']+'/data/add/<pi_id>', methods=['POST'])
def add(pi_id):
file = request.files['foto']
filename = file.filename
filesize = request.content_length
path = os.path.join(app.config['UPLOAD_FOLDER'], pi_id )
dst = os.path.join(pi_id , secure_filename(filename))
dstn = os.path.join(app.config['UPLOAD_FOLDER'], dst )
if is_too_late():
print "Too late. go to sleep. I'm not storing your dumb file"
return jsonify({'pi_id': pi_id, 'filename': filename, 'filesize': filesize, 'error': False, 'extra': "too late. go to sleep. I'm not storing your dumb file"}), 200
else:
print "So nice of you to work this hard. Move along."
if not os.path.exists(path):
os.makedirs(path)
file.save(dstn)
print "ADDING ID %s NAME %s SIZE %d PATH %s" % (pi_id, filename, filesize, dst)
info = Data(pi_id, filename, filesize, dst)
db.session.add(info)
try:
db.session.commit()
return jsonify({'pi_id': pi_id, 'filename': filename, 'filesize': filesize, 'error': False}), 200
except:
return jsonify({'pi_id': pi_id, 'filename': filename, 'filesize': filesize, 'error': True, 'message': 'record exists'}), 501
##
# LISTS
##
@app.route('/'+app.config['RNG_ID']+'/data/list/all/', defaults={'pi_id': None})
@app.route('/'+app.config['RNG_ID']+'/data/list/all/<pi_id>/')
def list_data_all(pi_id):
try:
if pi_id is None:
data = Data.query.all()
else:
data = Data.query.filter( Data.pi_id == pi_id ).all()
data = [ x.as_dict() for x in data ]
return jsonify({'ids': data, 'error': False}), 200
except Exception as e:
return jsonify({'message': str(e), 'error': True}), 200
@app.route('/'+app.config['RNG_ID']+'/data/list/last/', defaults={'pi_id': None})
@app.route('/'+app.config['RNG_ID']+'/data/list/last/<pi_id>/')
def list_data_last(pi_id):
try:
if pi_id is None:
data = Data.query.group_by(Data.pi_id)
else:
data = Data.query.filter( Data.pi_id == pi_id ).group_by(Data.pi_id)
data = [ x.as_dict() for x in data ]
return jsonify({'ids': data, 'error': False}), 200
except Exception as e:
return jsonify({'message': str(e), 'error': True}), 200
@app.route('/'+app.config['RNG_ID']+'/data/list/ids/')
def list_data_ids():
try:
data = sorted( [ x.pi_id for x in db.session.query(Data.pi_id).distinct() ] )
return jsonify({'ids': data, 'error': False}), 200
except Exception as e:
return jsonify({'message': str(e), 'error': True}), 200
@app.route('/'+app.config['RNG_ID']+'/data/list/filepath/', defaults={'pi_id': None})
@app.route('/'+app.config['RNG_ID']+'/data/list/filepath/<pi_id>/')
def list_data_filepath(pi_id):
try:
if pi_id is None:
data = sorted( [ x.filepath for x in db.session.query(Data.filepath).distinct() ] )
else:
data = sorted( [ x.filepath for x in db.session.query(Data.filepath).filter(Data.pi_id == pi_id).distinct() ] )
return jsonify({'filepath': data, 'error': False}), 200
except Exception as e:
return jsonify({'message': str(e), 'error': True}), 200
@app.route('/'+app.config['RNG_ID']+'/data/list/filename/', defaults={'pi_id': None})
@app.route('/'+app.config['RNG_ID']+'/data/list/filename/<pi_id>/')
def list_data_filename(pi_id):
try:
if pi_id is None:
data = sorted( [ x.filename for x in db.session.query(Data.filename).distinct() ] )
else:
data = sorted( [ x.filename for x in db.session.query(Data.filename).filter(Data.pi_id == pi_id).distinct() ] )
return jsonify({'filename': data, 'error': False}), 200
except Exception as e:
return jsonify({'message': str(e), 'error': True}), 200
##
# DISPLAY
##
@app.route('/'+app.config['RNG_ID']+'/data/show/all/', defaults={'pi_id': None})
@app.route('/'+app.config['RNG_ID']+'/data/show/all/<pi_id>/')
def display_image_all(pi_id):
try:
if pi_id is None:
res = gen_table( Data.query.all(), app.config['RNG_ID'] )
else:
res = gen_table( Data.query.filter(Data.pi_id == pi_id).all(), app.config['RNG_ID'] )
return res, 200
except Exception as e:
print "error", e
return jsonify({ 'error': True, 'message': str(e) }), 501
@app.route('/'+app.config['RNG_ID']+'/data/show/last/', defaults={'pi_id': None})
@app.route('/'+app.config['RNG_ID']+'/data/show/last/<pi_id>/')
def display_image_last(pi_id):
meta = """<head><meta http-equiv="refresh" content="300" /></head>"""
try:
if pi_id is None:
res = gen_table( Data.query.group_by(Data.pi_id), app.config['RNG_ID'], meta=meta )
else:
res = gen_table( Data.query.group_by(Data.pi_id).filter(Data.pi_id == pi_id), app.config['RNG_ID'], meta=meta )
return res, 200
except Exception as e:
print "error", e
return jsonify({ 'error': True, 'message': str(e) }), 501
| [
"sauloal@gmail.com"
] | sauloal@gmail.com |
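`get_log` and `get_del` above guard against path traversal by resolving the requested path and checking it still sits under the upload folder. That guard, reduced to the stdlib (with the separator appended so a sibling folder such as `uploads2` cannot slip past a bare prefix match):

```python
import os


def is_within(upload_folder, requested):
    """True if upload_folder/requested resolves to a path inside upload_folder."""
    base = os.path.abspath(upload_folder)
    target = os.path.abspath(os.path.join(upload_folder, requested))
    return target == base or target.startswith(base + os.sep)
```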
a4466433f0aec02d9154f054968abc4eab62fd18 | da2c5c8a1786b26667bf533a30e858b0eed9885d | /autobahn/twisted/test/test_choosereactor.py | 5e1282c31f021105e656a13f16b9c66e5b9470df | [
"MIT"
] | permissive | touilleMan/autobahn-python | 5690002233877adb6cad50519e3e7cee99e96d3d | 3ea9cc45eb86d479ffe0e31bf95400c9e3ef2501 | refs/heads/master | 2021-01-18T11:12:33.944315 | 2016-02-17T15:20:09 | 2016-02-17T15:20:09 | 51,939,277 | 2 | 0 | null | 2016-02-17T16:58:46 | 2016-02-17T16:58:46 | null | UTF-8 | Python | false | false | 3,739 | py | ###############################################################################
#
# The MIT License (MIT)
#
# Copyright (c) Tavendo GmbH
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
###############################################################################
from __future__ import absolute_import
import sys
import twisted.internet
from twisted.trial.unittest import TestCase
from mock import Mock
from autobahn.twisted import choosereactor
class ChooseReactorTests(TestCase):
def patch_reactor(self, name, new_reactor):
"""
Patch ``name`` so that Twisted will grab a fake reactor instead of
a real one.
"""
if hasattr(twisted.internet, name):
self.patch(twisted.internet, name, new_reactor)
else:
def _cleanup():
delattr(twisted.internet, name)
self.addCleanup(_cleanup)
setattr(twisted.internet, name, new_reactor)
def patch_modules(self):
"""
Patch ``sys.modules`` so that Twisted believes there is no
installed reactor.
"""
old_modules = dict(sys.modules)
new_modules = dict(sys.modules)
del new_modules["twisted.internet.reactor"]
def _cleanup():
sys.modules = old_modules
self.addCleanup(_cleanup)
sys.modules = new_modules
def test_unknown(self):
"""
``install_optimal_reactor`` will use the default reactor if it is
unable to detect the platform it is running on.
"""
reactor_mock = Mock()
self.patch_reactor("default", reactor_mock)
self.patch(sys, "platform", "unknown")
# Emulate that a reactor has not been installed
self.patch_modules()
choosereactor.install_optimal_reactor()
reactor_mock.install.assert_called_once_with()
def test_mac(self):
"""
``install_optimal_reactor`` will install KQueueReactor on
Darwin (OS X).
"""
reactor_mock = Mock()
self.patch_reactor("kqreactor", reactor_mock)
self.patch(sys, "platform", "darwin")
# Emulate that a reactor has not been installed
self.patch_modules()
choosereactor.install_optimal_reactor()
reactor_mock.install.assert_called_once_with()
def test_linux(self):
"""
``install_optimal_reactor`` will install EPollReactor on Linux.
"""
reactor_mock = Mock()
self.patch_reactor("epollreactor", reactor_mock)
self.patch(sys, "platform", "linux")
# Emulate that a reactor has not been installed
self.patch_modules()
choosereactor.install_optimal_reactor()
reactor_mock.install.assert_called_once_with()
| [
"hawkowl@atleastfornow.net"
] | hawkowl@atleastfornow.net |
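The reactor tests above hinge on `mock.Mock` recording every call so that `assert_called_once_with()` can verify exactly one `install()` happened. The same pattern in miniature, with a hypothetical stand-in for the installer under test:

```python
from unittest import mock


def install_stub(reactor):
    # stand-in for the code under test; the real install_optimal_reactor
    # picks a reactor module first, then calls install() on it
    reactor.install()


reactor_mock = mock.Mock()
install_stub(reactor_mock)
reactor_mock.install.assert_called_once_with()  # passes: one call, no args
```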
e6b0cf58911420f23bc4ae46248199920e7a01ac | c887e00981e6368e94916ca9b93c4de79a5c1a22 | /lawncare/website/migrations/0001_initial.py | b6c03a902eb692a14c478cd6c3b22e57708f0659 | [] | no_license | devArist/school_project | 18dc0427e2d6a45abfff8a72dbe2c52a7afd8778 | 4d1c1ba5e2a9b4253e950e2c95e0ce6ef22efe3f | refs/heads/main | 2023-05-07T09:51:50.664546 | 2021-05-28T12:44:11 | 2021-05-28T12:44:11 | 368,508,476 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 6,467 | py | # Generated by Django 3.2.3 on 2021-05-19 14:21
from django.db import migrations, models
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='About',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('image', models.FileField(upload_to='img')),
('title', models.CharField(max_length=200, verbose_name='titre')),
('subtitle', models.CharField(max_length=200, verbose_name='sous-titre')),
('description', models.TextField()),
('date_add', models.DateTimeField(auto_now_add=True, verbose_name="date d'ajout")),
('date_update', models.DateTimeField(auto_now=True, verbose_name='date de modification')),
('status', models.BooleanField(default=True)),
],
options={
'verbose_name': 'a propos',
'verbose_name_plural': 'a propos',
},
),
migrations.CreateModel(
name='Banner',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('image', models.FileField(upload_to='img')),
('title', models.CharField(max_length=200, verbose_name='titre')),
('subtitle', models.CharField(max_length=200, verbose_name='sous-titre')),
('description', models.TextField()),
('date_add', models.DateTimeField(auto_now_add=True, verbose_name="date d'ajout")),
('date_update', models.DateTimeField(auto_now=True, verbose_name='date de modification')),
('status', models.BooleanField(default=True)),
],
options={
'verbose_name': 'Bannière',
'verbose_name_plural': 'Bannières',
},
),
migrations.CreateModel(
name='Contact',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=200, verbose_name='nom')),
('email', models.EmailField(max_length=200, verbose_name='email')),
('subject', models.CharField(max_length=200, verbose_name='sujet')),
('message', models.TextField()),
('description', models.TextField()),
('date_add', models.DateTimeField(auto_now_add=True, verbose_name="date d'ajout")),
('date_update', models.DateTimeField(auto_now=True, verbose_name='date de modification')),
('status', models.BooleanField(default=True)),
],
options={
'verbose_name': 'contact',
'verbose_name_plural': 'contacts',
},
),
migrations.CreateModel(
name='GlobalBanner',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('image', models.FileField(upload_to='img', verbose_name='bannière des autres pages du site')),
('date_add', models.DateTimeField(auto_now_add=True, verbose_name="date d'ajout")),
('date_update', models.DateTimeField(auto_now=True, verbose_name='date de modification')),
('status', models.BooleanField(default=True)),
],
options={
'verbose_name': 'Bannière générale',
'verbose_name_plural': 'Bannière générale',
},
),
migrations.CreateModel(
name='Newsletter',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('email', models.EmailField(max_length=200)),
('description', models.TextField()),
('date_add', models.DateTimeField(auto_now_add=True, verbose_name="date d'ajout")),
('date_update', models.DateTimeField(auto_now=True, verbose_name='date de modification')),
('status', models.BooleanField(default=True)),
],
options={
'verbose_name': 'Newsletter',
'verbose_name_plural': 'Newsletters',
},
),
migrations.CreateModel(
name='SocialNetwork',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('icon', models.FileField(upload_to='img')),
('link', models.CharField(max_length=200, verbose_name='lien-fa')),
('name', models.CharField(max_length=200, verbose_name='nom')),
('date_add', models.DateTimeField(auto_now_add=True, verbose_name="date d'ajout")),
('date_update', models.DateTimeField(auto_now=True, verbose_name='date de modification')),
('status', models.BooleanField(default=True)),
],
options={
'verbose_name': 'réseau social',
'verbose_name_plural': 'réseaux sociaux',
},
),
migrations.CreateModel(
name='Website',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('address', models.CharField(max_length=200, verbose_name='adresse')),
('phone', models.CharField(max_length=200, verbose_name='téléphone')),
('email', models.EmailField(max_length=200, verbose_name='email')),
('copyrights', models.CharField(max_length=200, verbose_name='copyright')),
('description', models.TextField()),
('date_add', models.DateTimeField(auto_now_add=True, verbose_name="date d'ajout")),
('date_update', models.DateTimeField(auto_now=True, verbose_name='date de modification')),
('status', models.BooleanField(default=True)),
],
options={
'verbose_name': 'Information générale',
'verbose_name_plural': 'Informations générales',
},
),
]
| [
"aridev97@gmail.com"
] | aridev97@gmail.com |
ebef581c5ca4a343b02bfba311bced6d627ec79e | 9b25020b71ed5aaa6eb1cdcd685b0c775d74995b | /test_tk_progressbar_2.py | f0104b2a29a61d6ddd1077db0b98070f2f5f939a | [] | no_license | lucmilot/source1 | 1d9300508279ae2a864036d011e633bb0dfa82cb | d60a953b1a0e11a82b830226052079c00f3cbad1 | refs/heads/master | 2021-05-18T18:05:23.902206 | 2020-03-30T15:43:54 | 2020-03-30T15:43:54 | 251,351,398 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,688 | py | # -*- coding: utf-8 -*-
"""
Created on Tue Aug 7 09:25:44 2018
@author: XT21586
"""
import tkinter as tk # Python 3
import tkinter.ttk as ttk
from queue import Queue
import threading
import time
queue1 = Queue()
root = tk.Tk()
#s = tk.Style()
#s.theme_use("default")
#s.configure("TProgressbar", thickness=50)
frame1 = tk.Frame(root)
frame1.pack()
canvas1 = tk.Canvas(frame1, bg="blue", height=15, width=500)
canvas1.pack()
#root.mainloop()
# Function to do 'stuff' and place object in queue for later #
def foo():
# sleep to demonstrate thread doing work #
time.sleep(12)
# obj = [x for x in range(0,10)]
# queue1.put(obj)
# Create thread object, targeting function to do 'stuff' #
thread1 = threading.Thread(target=foo, args=())
# Function to check state of thread1 and to update progressbar #
def progress(thread, queue):
# starts thread #
thread.start()
# defines indeterminate progress bar (used while thread is alive) #
pb1 = ttk.Progressbar(canvas1, style="TProgressbar")
# defines determinate progress bar (used when thread is dead) #
pb2 = ttk.Progressbar(canvas1, style="TProgressbar")
pb2['value'] = 100
# places and starts progress bar #
pb1.pack()
pb1.start()
# checks whether thread is alive #
    while thread.is_alive():
        root.update()
# once thread is no longer active, remove pb1 and place the '100%' progress bar #
pb1.destroy()
pb2.pack()
time.sleep(2)
pb2.destroy()
root.destroy()
# retrieves object from queue #
#work = queue.get()
work = 'tata'
return work
work = progress(thread1, queue1)
root.mainloop() | [
"40570847+lucmilot@users.noreply.github.com"
] | 40570847+lucmilot@users.noreply.github.com |
6a4cbc9f811f01be7e8dbd1a3426a81b01e076a8 | 51e8dd23d9511a8d767e0d1b645365dd9ba9110b | /libyana/datautils/concatdataset.py | 967667fc2b16e2e876f1bf39c4eb71576a94fd52 | [] | no_license | hassony2/libyana | e7f5bb220dc105ae2073729be18528992fb94434 | 426767c7a8ceff678da9b5f372e2b11aa77d8ea2 | refs/heads/master | 2021-06-10T20:02:22.826847 | 2021-06-04T13:55:57 | 2021-06-04T13:55:57 | 196,359,467 | 15 | 4 | null | null | null | null | UTF-8 | Python | false | false | 817 | py | from torch.utils.data import Dataset
class ConcatDataset(Dataset):
def __init__(self, datasets):
self.datasets = datasets
lengths = [len(dataset) for dataset in datasets]
# Map idx to dataset idx
dataset_idxs = [[dat_idx] * dat_len for dat_idx, dat_len in enumerate(lengths)]
self.dataset_idxs = [val for vals in dataset_idxs for val in vals]
# Map idx to idx of sample in dataset
data_idxs = [list(range(length)) for length in lengths]
self.data_idxs = [val for vals in data_idxs for val in vals]
self.tot_len = sum(lengths)
def __getitem__(self, idx):
dataset = self.datasets[self.dataset_idxs[idx]]
data_idx = self.data_idxs[idx]
return dataset[data_idx]
def __len__(self):
return self.tot_len
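The index-flattening trick above only needs ``len`` and ``__getitem__``, so it can be sanity-checked with plain lists standing in for datasets; the class name ``SimpleConcat`` is ours.

```python
class SimpleConcat:
    # same index-flattening logic as ConcatDataset above, for plain sequences
    def __init__(self, datasets):
        self.datasets = datasets
        lengths = [len(d) for d in datasets]
        # map a global idx to (which dataset, which sample in that dataset)
        self.dataset_idxs = [i for i, n in enumerate(lengths) for _ in range(n)]
        self.data_idxs = [j for n in lengths for j in range(n)]
        self.tot_len = sum(lengths)

    def __getitem__(self, idx):
        return self.datasets[self.dataset_idxs[idx]][self.data_idxs[idx]]

    def __len__(self):
        return self.tot_len
```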
| [
"google-dl-platform@googlegroups.com"
] | google-dl-platform@googlegroups.com |
26328fe76fefc43554b3a8676afeaede7d5168e6 | a15db1c900c4c5769005c977d604fef1ddecd642 | /model/layer/MLPAtt_hop_tri.py | e17bf05bef82fd7510466393cbcc9e27cbdfecb1 | [] | no_license | MasterFishfish/Cabasc | f64f8066aba16c2fb2c68011e51f401b7d2ebd82 | 36363053a2b40f1367f2d0793dee6d6dd6131e36 | refs/heads/master | 2020-04-07T22:47:22.843289 | 2018-12-27T13:06:53 | 2018-12-27T13:06:53 | 158,785,003 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,537 | py | import tensorflow as tf
from model.basic_layer.LinearLayer import LinearLayer
from model.basic_layer.FwNnAttLayer import FwNnAttLayer
from model.basic_layer.FwNn3AttLayer import FwNnAttLayer as FwNn3AttLayer
from model.basic_layer.DotAttentionLayer import DotAttentionLayer
from model.basic_layer.LSTMLayer import LSTMLayer
from util.Activer import activer
from model.basic_layer.GatedLayer import GatedLayer
class mlpatt_hop(object):
def __init__(self, hidden_size, w_shape, stddev, use_gate = False,
att_layer = None, gate_layer = None, att = 'fwnn',active = 'tanh',norm_type='softmax',fwnn_shapes=None):
self.active = active
        self.stddev = stddev
        self.hidden_size = hidden_size
if att_layer != None:
self.att_layer = att_layer
else:
if att == "dot":
self.att_layer = DotAttentionLayer(
edim=hidden_size,
)
elif att == "fwnn":
self.att_layer = FwNnAttLayer(
edim=hidden_size,
active=self.active,
stddev=stddev,
norm_type=norm_type
)
elif att == "fwnn3":
self.att_layer = FwNn3AttLayer(
edim=hidden_size,
active=self.active,
stddev=stddev,
norm_type = norm_type
)
if use_gate:
if gate_layer != None:
self.gate_layer = gate_layer
else:
self.gate_layer = GatedLayer(hidden_size,stddev)
if w_shape == None:
self.linear = None
else:
            self.linear = LinearLayer(w_shape, stddev)
def mlp_forward_tri(self, inputs, aspects,last_output, ctx_bitmap,is_concat_asp = True, is_add = False):
batch_size = tf.shape(inputs)[0]
vec, alpha = self.att_layer.forward(
context = inputs,
aspect= aspects,
output = last_output,
ctx_bitmap = ctx_bitmap
)
vec = tf.reshape(vec, [batch_size, -1])
if is_add:
vec = vec + last_output
if is_concat_asp:
last_output = tf.concat(axis=1, values=[vec, aspects])
else:
last_output = vec
linear = self.linear
# nlayer = nlayer -1
res = linear.forward(last_output)
last_output = activer(res, self.active)
return last_output, alpha
| [
"tusimplefishfish@foxmail.com"
] | tusimplefishfish@foxmail.com |
516157d024383893ac3da6f642fb567c6b9c4950 | 42c0bd01f0d9edbe495e870591d077e6a025e5eb | /src/sst/elements/memHierarchy/tests/testBackendTimingDRAM-4.py | 2f56ece8e0c09dcff73c2b20eabfbb22765d954e | [
"BSD-3-Clause"
] | permissive | TiffanyAnn/sst-elements | d868df0c10645ca27a55bb1ed43e3e440d1a24dc | c2645aba9555cb3a5a5701367ff5b625974e650b | refs/heads/master | 2020-03-28T00:13:51.138132 | 2018-11-20T22:33:09 | 2018-11-20T22:33:09 | 147,387,080 | 0 | 0 | NOASSERTION | 2019-06-14T05:21:46 | 2018-09-04T18:01:54 | C++ | UTF-8 | Python | false | false | 4,982 | py | # Automatically generated SST Python input
import sst
# Test timingDRAM with transactionQ = fifoTransactionQ and AddrMapper=roundRobinAddrMapper and pagepolicy=simplePagePolicy(closed)
# Define the simulation components
cpu_params = {
"clock" : "3GHz",
"do_write" : 1,
"num_loadstore" : "5000",
"memSize" : "0x100000"
}
bus = sst.Component("bus", "memHierarchy.Bus")
bus.addParams({ "bus_frequency" : "2Ghz" })
l3cache = sst.Component("l3cache", "memHierarchy.Cache")
l3cache.addParams({
"access_latency_cycles" : "30",
"mshr_latency_cycles" : 3,
"cache_frequency" : "2Ghz",
"replacement_policy" : "lru",
"coherence_protocol" : "MESI",
"associativity" : "16",
"cache_line_size" : "64",
"cache_size" : "64 KB",
"debug" : "0",
"memNIC.network_address" : "1",
"memNIC.network_bw" : "25GB/s",
})
for i in range(0,8):
cpu = sst.Component("cpu" + str(i), "memHierarchy.trivialCPU")
cpu.addParams(cpu_params)
rngseed = i * 12
cpu.addParams({
"rngseed" : rngseed,
"commFreq" : (rngseed % 7) + 1 })
l1cache = sst.Component("c" + str(i) + ".l1cache", "memHierarchy.Cache")
l1cache.addParams({
"access_latency_cycles" : "4",
"cache_frequency" : "2Ghz",
"replacement_policy" : "lru",
"coherence_protocol" : "MESI",
"associativity" : "4",
"cache_line_size" : "64",
"cache_size" : "4 KB",
"L1" : "1",
"debug" : "0"
})
l2cache = sst.Component("c" + str(i) + ".l2cache", "memHierarchy.Cache")
l2cache.addParams({
"access_latency_cycles" : "9",
"mshr_latency_cycles" : 2,
"cache_frequency" : "2Ghz",
"replacement_policy" : "lru",
"coherence_protocol" : "MESI",
"associativity" : "8",
"cache_line_size" : "64",
"cache_size" : "32 KB",
"debug" : "0"
})
# Connect
link_cpu_l1 = sst.Link("link_cpu_l1_" + str(i))
link_cpu_l1.connect( (cpu, "mem_link", "500ps"), (l1cache, "high_network_0", "500ps") )
link_l1_l2 = sst.Link("link_l1_l2_" + str(i))
link_l1_l2.connect( (l1cache, "low_network_0", "500ps"), (l2cache, "high_network_0", "500ps") )
link_l2_bus = sst.Link("link_l2_bus_" + str(i))
link_l2_bus.connect( (l2cache, "low_network_0", "1000ps"), (bus, "high_network_" + str(i), "1000ps") )
comp_chiprtr = sst.Component("chiprtr", "merlin.hr_router")
comp_chiprtr.addParams({
"xbar_bw" : "1GB/s",
"link_bw" : "1GB/s",
"input_buf_size" : "1KB",
"num_ports" : "2",
"flit_size" : "72B",
"output_buf_size" : "1KB",
"id" : "0",
"topology" : "merlin.singlerouter"
})
comp_dirctrl = sst.Component("dirctrl", "memHierarchy.DirectoryController")
comp_dirctrl.addParams({
"coherence_protocol" : "MESI",
"debug" : "0",
"memNIC.network_address" : "0",
"entry_cache_size" : "32768",
"memNIC.network_bw" : "25GB/s",
"memNIC.addr_range_end" : "0x1F000000",
"memNIC.addr_range_start" : "0x0"
})
comp_memory = sst.Component("memory", "memHierarchy.MemController")
comp_memory.addParams({
"backing" : "none",
"backend" : "memHierarchy.timingDRAM",
"backend.id" : 0,
"backend.addrMapper" : "memHierarchy.roundRobinAddrMapper",
"backend.addrMapper.interleave_size" : "64B",
"backend.addrMapper.row_size" : "1KiB",
"backend.clock" : "1.2GHz",
"backend.mem_size" : "512MiB",
"backend.channels" : 2,
"backend.channel.numRanks" : 2,
"backend.channel.rank.numBanks" : 16,
"backend.channel.transaction_Q_size" : 32,
"backend.channel.rank.bank.CL" : 14,
"backend.channel.rank.bank.CL_WR" : 12,
"backend.channel.rank.bank.RCD" : 14,
"backend.channel.rank.bank.TRP" : 14,
"backend.channel.rank.bank.dataCycles" : 2,
"backend.channel.rank.bank.pagePolicy" : "memHierarchy.simplePagePolicy",
"backend.channel.rank.bank.transactionQ" : "memHierarchy.fifoTransactionQ",
"backend.channel.rank.bank.pagePolicy.close" : 1,
"debug" : 0,
"debug_level" : 5
})
# Do lower memory hierarchy links
link_bus_l3 = sst.Link("link_bus_l3")
link_bus_l3.connect( (bus, "low_network_0", "500ps"), (l3cache, "high_network_0", "500ps") )
link_l3_net = sst.Link("link_l3_net")
link_l3_net.connect( (l3cache, "directory", "10000ps"), (comp_chiprtr, "port1", "2000ps") )
link_dir_net = sst.Link("link_dir_net")
link_dir_net.connect( (comp_chiprtr, "port0", "2000ps"), (comp_dirctrl, "network", "2000ps") )
link_dir_mem = sst.Link("link_dir_mem")
link_dir_mem.connect( (comp_dirctrl, "memory", "10000ps"), (comp_memory, "direct_link", "10000ps") )
# Enable statistics
sst.setStatisticLoadLevel(7)
sst.setStatisticOutput("sst.statOutputConsole")
sst.enableAllStatisticsForComponentType("memHierarchy.Cache")
sst.enableAllStatisticsForComponentType("memHierarchy.MemController")
sst.enableAllStatisticsForComponentType("memHierarchy.DirectoryController")
| [
"grvosku@sandia.gov"
] | grvosku@sandia.gov |
01203d5cc4ee8fc43ed9d6ad4c4bd15ef0977a35 | edb10a06f56d9bd19b0b60581728900a03d9732a | /Python/hackerrank/Algorithms/Dynamic_Programming/RepetitiveKSums.py | 053a3fc2eda8f4e8e6e936313192766ac056b312 | [
"MIT"
] | permissive | darrencheng0817/AlgorithmLearning | 3ba19e6044bc14b0244d477903959730e9f9aaa8 | aec1ddd0c51b619c1bae1e05f940d9ed587aa82f | refs/heads/master | 2021-01-21T04:26:14.814810 | 2019-11-22T06:02:01 | 2019-11-22T06:02:01 | 47,100,767 | 2 | 0 | null | null | null | null | UTF-8 | Python | false | false | 7,794 | py | '''
Created on 2016年1月30日
Unfinished
@author: Darren
'''
'''
Alice thinks of a non-decreasing sequence of non-negative integers and wants Bob to guess it by providing him the set of all its K-sums with repetitions.
What is this? Let the sequence be {A[1], A[2], ..., A[N]} and K be some positive integer that both Alice and Bob know. Alice gives Bob the set of all possible values that can be generated by this - A[i1] + A[i2] + ... + A[iK], where 1 ≤ i1 ≤ i2 ≤ ... ≤ iK ≤ N. She can provide the values generated in any order she wishes to. Bob's task is to restore the initial sequence.
Consider an example. Let N = 3 and K = 2. The sequence is {A[1], A[2], A[3]}. The sequence of its 2-sums with repetitions is {A[1] + A[1], A[1] + A[2], A[1] + A[3], A[2] + A[2], A[2] + A[3], A[3] + A[3]}. But its elements could be provided in any order. For example any permutation of {2, 3, 4, 4, 5, 6} corresponds to the sequence {1, 2, 3}.
Input Format
The first line of the input contains an integer T denoting the number of test cases.
The description of T test cases follows.
The first line of each test case contains two space separated integers N and K.
The second line contains the sequence Si of all K-sums with repetitions of the sequence Alice initially thought of.
Output Format
For each test case, output a single line containing the space separated list of elements of the non-decreasing sequence Alice thinks of. If there are several possible outputs you can output any of them.
Constraints
1 <= T <= 105
1 <= N <= 105
1 <= K <= 109
2 <= Si <= 1018
Note
The total number of elements in any input sequence does not exceed 10^5
Each element of each input sequence is a non-negative integer not exceeding 10^18.
Each input sequence is a correct sequence of all K-sums with repetitions of some non-decreasing sequence of non-negative integers.
Sample Input
3
1 3
3
2 2
12 34 56
3 2
2 3 4 4 5 6
Sample Output
1
6 28
1 2 3
Explanation
Sample case #00: When N = 1 and K = 3 the only K-sum is S[1] = 3 * A[1]. Hence A[1] = S[1] / 3 = 3 / 3 = 1.
Sample case #01: Since 6 + 6 = 12, 6 + 28 = 34, 28 + 28 = 56, then Alice indeed could think of the sequence {6, 28}.
Sample case #02: It corresponds to the example in the problem statement.
Approach
Firstly, for a given N and K the number of terms in the K-sums sequence will be (N+K-1 choose K).
Let there be M such terms, represented by array Sums[], and let the original array be A[].
After sorting the Sums[] sequence we can say that A[0] = Sums[0]/K, and we erase Sums[0] as it carries no information about any other term of A[].
Hence for any i we note that after erasing all K-sums built only from the numbers A[0], A[1], ..., A[i-1], the minimal remaining K-sum is A[i] + (K-1) * A[0], which gives the next value A[i].
We do the removing part recursively, erasing all the terms in Sums[] that are built from the elements of A[] restored so far.
Also we don't need to erase k-sums that contain a[n-1] since we have already restored the whole array by then.
Problem Setter's code :
#include <bits/stdc++.h>
using namespace std;
typedef long long LL;
// returns n! / k! / (n-k)! = n * (n-1) * ... * (n-k+1) / 1 / 2 / ... / k
// = n / 1 * (n-1) / 2 * (n-2) / 3 * ... * (n-k+1) / k
int bin(int n, int k) {
if(k > n - k) {
k = n - k;
}
int p = 1;
for (int i = 1; i <= k; ++i) {
p *= n + 1 - i;
p /= i;
}
return p;
}
int n, k;
LL a[100000];
multiset<LL> kSums;
// recursive routine that erase all sums having a[i] as the last element
// @j is the current a[j] we want to add to the sum
// @cnt is the count of numbers in the current sum
// @sum is the value of the sum
void rec(int i, int j, int cnt, LL sum) {
if (cnt == k) {
kSums.erase(kSums.find(sum));
} else {
rec(i, j, cnt + 1, sum + a[j]);
if (j < i) {
rec(i, j + 1, cnt, sum);
}
}
}
int main() {
int T;
scanf("%d", &T);
assert ( 1<=T<=100000);
for (int t = 0; t < T; ++t) {
// n and k defined globally
scanf("%d%d", &n, &k);
assert ( 1<=n<=100000);
assert ( 1<=k<=100000);
int m = bin(n + k - 1, k); // the total number of k-sums
assert ( 1<=m<=100000);
// input k-sums and insert them into multiset
kSums.clear();
for (int i = 0; i < m; ++i) {
LL kSum;
scanf("%lld", &kSum);
assert ( 1<=kSum<=1000000000000000000L);
kSums.insert(kSum);
}
        // the minimal k-sum is always a[0] * k
a[0] = *(kSums.begin()) / k;
kSums.erase(kSums.begin());
for (int i = 1; i < n; ++i) {
            // after erasing all k-sums involving numbers a[0], a[1], ..., a[i-1]
// the minimal k-sum is a[i] + (k-1) * a[0]
a[i] = *(kSums.begin()) - (k - 1) * a[0];
// we don't need to erase ksums that contain a[n-1]
// since we have already restored the whole array
// and this important in the case n=2 since k could be 99999 in this case
// which could lead to stack overflow and TL
if (i < n - 1) {
rec(i, 0, 1, a[i]);
}
}
for (int i = 0; i < n; ++i) {
printf("%lld%c", a[i], i < n - 1 ? ' ' : '\n');
}
}
return 0;
}
Problem Tester's code :
#include <cmath>
#include <set>
#include <cstdio>
#include <vector>
#include <iostream>
#include <algorithm>
using namespace std;
#define MAX 100000
int n, k;
long long a[MAX];
multiset<long long> s;
void rec(int m, int idx, int at, long long sum) {
if (at == k - 1) {
s.erase(s.find(sum));
}
else {
rec(m, idx, at + 1, sum + a[idx]);
if (idx + 1 < m) {
rec(m, idx + 1, at, sum);
}
}
}
int main() {
// freopen("input.txt", "r", stdin);
int T;
scanf("%d", &T);
while (T--) {
s.clear();
scanf("%d%d", &n, &k);
char c;
scanf("%c", &c);
while (true) {
long long temp;
scanf("%lld%c", &temp, &c);
s.insert(temp);
if (c == '\n') break;
}
a[0] = *s.begin() / k;
s.erase(s.begin());
for (int i = 1; i < n; i++) {
a[i] = ((*s.begin()) - a[0] * (k - 1));
rec(i + 1, 0, 0, a[i]);
}
for (int i = 0; i < n - 1; i++) printf("%lld ", a[i]); printf("%lld\n", a[n - 1]);
}
return 0;
}
'''
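A minimal, self-contained sketch of the greedy reconstruction described in the Approach section above; the function name ``restore_sequence`` is ours, and it mirrors the setter's multiset-erasing C++ code.

```python
from collections import Counter
from itertools import combinations_with_replacement

def restore_sequence(n, k, ksums):
    # greedy reconstruction: the smallest remaining k-sum always reveals
    # the next original element (see the Approach section above)
    remaining = Counter(ksums)

    def smallest():
        return min(s for s, c in remaining.items() if c > 0)

    a = [smallest() // k]          # the minimal k-sum is k * a[0]
    remaining[a[0] * k] -= 1
    for i in range(1, n):
        a.append(smallest() - (k - 1) * a[0])
        if i < n - 1:
            # erase every k-sum whose largest-index element is a[i]
            for combo in combinations_with_replacement(a[:i + 1], k - 1):
                remaining[a[i] + sum(combo)] -= 1
    return a
```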
def getPermutationUtil(N, nums, item, index, res):
    # despite the name, this enumerates all length-N combinations of nums
    # (order-preserving subsequences), not permutations
    if len(item) == N:
        res.append(item)
        return
    if index >= len(nums):
        return
    getPermutationUtil(N, nums, item + [nums[index]], index + 1, res)
    getPermutationUtil(N, nums, item, index + 1, res)

def getPermutation(N, nums):
    res = []
    getPermutationUtil(N, nums, [], 0, res)
    return res
from itertools import combinations_with_replacement

def checkRes(res, K, nums):
    # verify that the multiset of all K-sums with repetition of res equals nums
    sums = [sum(c) for c in combinations_with_replacement(res, K)]
    return sorted(sums) == sorted(nums)

def getOriginNums(N, K, nums):
    # every K * a[i] appears among the K-sums, so the sums divisible by K give
    # the candidate pool; its smallest and largest entries are a[0] and a[N-1]
    candidates = []
    for num in nums:
        if num % K == 0:
            candidates.append(num // K)
    candidates = sorted(candidates)
    if len(candidates) == N:
        return candidates
    first, last = candidates[0], candidates[-1]
    if N == 2:
        return [first, last]
    candidates = candidates[1:-1]
    # brute force: try every size-(N-2) combination of the middle candidates
    for possibleRes in getPermutation(N - 2, candidates):
        trial = sorted([first, last] + possibleRes)
        if checkRes(trial, K, nums):
            return trial
# caseNum=int(input().strip())
# for caseindex in range(caseNum):
# line1=input().strip().split(" ")
# N,K=int(line1[0]),int(line1[1])
# nums=[int(_) for _ in input().strip().split(" ")]
# print(getOriginNums(N,K,nums))
N=3
K=2
nums=[2,3,4,4,5,6]
print(getPermutation(3, [1,2,4,5]))
print(getOriginNums(N, K, nums)) | [
"darrencheng0817@gmail.com"
] | darrencheng0817@gmail.com |
7454fc003d0acc0214877ff376bd3ecd33337de7 | f3b233e5053e28fa95c549017bd75a30456eb50c | /bace_input/L3N/3N-4I_wat_20Abox/set_1ns_equi.py | fef62de5c8916e653f1a6d5b922989844b22296b | [] | no_license | AnguseZhang/Input_TI | ddf2ed40ff1c0aa24eea3275b83d4d405b50b820 | 50ada0833890be9e261c967d00948f998313cb60 | refs/heads/master | 2021-05-25T15:02:38.858785 | 2020-02-18T16:57:04 | 2020-02-18T16:57:04 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 915 | py | import os
dir = '/mnt/scratch/songlin3/run/bace/L3N/wat_20Abox/ti_one-step/3N_4I/'
filesdir = dir + 'files/'
temp_equiin = filesdir + 'temp_equi.in'
temp_pbs = filesdir + 'temp_1ns_equi.pbs'
lambd = [ 0.00922, 0.04794, 0.11505, 0.20634, 0.31608, 0.43738, 0.56262, 0.68392, 0.79366, 0.88495, 0.95206, 0.99078]
for j in lambd:
os.system("rm -r %6.5f" %(j))
os.system("mkdir %6.5f" %(j))
os.chdir("%6.5f" %(j))
os.system("rm *")
workdir = dir + "%6.5f" %(j) + '/'
#equiin
eqin = workdir + "%6.5f_equi.in" %(j)
os.system("cp %s %s" %(temp_equiin, eqin))
os.system("sed -i 's/XXX/%6.5f/g' %s" %(j, eqin))
#PBS
pbs = workdir + "%6.5f_1ns_equi.pbs" %(j)
os.system("cp %s %s" %(temp_pbs, pbs))
os.system("sed -i 's/XXX/%6.5f/g' %s" %(j, pbs))
#top
os.system("cp ../3N-4I_merged.prmtop .")
os.system("cp ../0.5_equi_0.rst .")
#submit pbs
os.system("qsub %s" %(pbs))
os.chdir(dir)
| [
"songlin3@msu.edu"
] | songlin3@msu.edu |
a4531a18d6f42d1b61b03a24a8fe91d339a90119 | 0dcdafe33828527643a3adecd88d6d5e573865e5 | /grid_search_random_forest.py | 707d182e09c34cea8dd7d6a93ac0af759e288601 | [] | no_license | borgishmorg/machine-learning-final-task | c521237507abb7bb5149ca2c9c7c75db714564ab | b55b138a06d999737f4897beb9b7bd2e1bc85623 | refs/heads/master | 2023-07-05T02:41:38.588914 | 2021-08-23T13:21:46 | 2021-08-23T13:21:46 | 356,707,521 | 4 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,044 | py | #%%
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib
from matplotlib import pyplot
import sklearn
import xgboost
#%%
df = pd.read_csv('train_credit.csv')
test_df = pd.read_csv('test_credit.csv')
df.info()
# %%
x_columns = ['LIMIT_BAL', 'SEX', 'EDUCATION', 'MARRIAGE', 'AGE',
'PAY_1', 'PAY_2', 'PAY_3', 'PAY_4', 'PAY_5', 'PAY_6', 'BILL_AMT1',
'BILL_AMT2', 'BILL_AMT3', 'BILL_AMT4', 'BILL_AMT5', 'BILL_AMT6',
'PAY_AMT1', 'PAY_AMT2', 'PAY_AMT3', 'PAY_AMT4', 'PAY_AMT5', 'PAY_AMT6']
y_column = 'default_0'
# %%
def mean_revenue_score(y_valid, y_predict):
return (
(y_predict == 0) * (
1500 * (y_valid == 0)
- 5000 * (y_valid == 1))
).mean()
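A quick sanity check of the revenue metric above, restated standalone (the 1500 / -5000 payoffs are the ones hard-coded in this notebook; the helper name ``mean_revenue`` is ours):

```python
import numpy as np

def mean_revenue(y_valid, y_predict):
    # same formula as mean_revenue_score above: granting credit (prediction 0)
    # earns 1500 on a good client and loses 5000 on a defaulter; denials earn 0
    return ((y_predict == 0) * (1500 * (y_valid == 0) - 5000 * (y_valid == 1))).mean()

y_valid = np.array([0, 0, 1, 1])
y_pred = np.array([0, 1, 0, 1])
score = mean_revenue(y_valid, y_pred)  # (1500 + 0 - 5000 + 0) / 4
```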
# %%
from sklearn.model_selection import train_test_split
from prepare_x import prepare_X
X = df[x_columns].copy()
X = prepare_X(X, fit=True)
y = df[y_column].copy()
X_train, X_valid, y_train, y_valid = train_test_split(
X, y, train_size=0.8, random_state=0
)
# %%
from sklearn.metrics import f1_score, accuracy_score, fbeta_score, make_scorer
beta = 5000 / 1500
scoring = {
'fbeta': make_scorer(fbeta_score, beta=beta),
'f1': make_scorer(f1_score),
'mean_revenue': make_scorer(mean_revenue_score),
'accuracy': make_scorer(accuracy_score)
}
# %%
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, cross_validate
from datetime import datetime
start_t = datetime.now()
params = {
'n_estimators': [ 50*i for i in range(2, 6) ],
'criterion': ['gini', 'entropy'],
'min_samples_leaf': [10*i for i in range(1, 4)],
'min_samples_split': [ 10*x for x in range(7, 11)],
'class_weight': [ 'balanced' ],
'max_features': [6, 7, 8],
'max_samples': [ 0.3, 0.4, 0.5 ],
# 'max_depth': [2, 4, 8],
}
random_forest = RandomForestClassifier(random_state=1)
gscv = GridSearchCV(
random_forest,
params,
scoring=scoring,
refit='mean_revenue',
n_jobs=-1,
cv=5,
verbose=1
)
gscv.fit(X, y)
print(gscv.best_params_)
print(gscv.best_score_)
print(datetime.now() - start_t)
# %%
best_params = {
'class_weight': 'balanced',
'criterion': 'gini',
'max_features': 6,
'max_samples': 0.4,
'min_samples_leaf': 20,
'min_samples_split': 80,
'n_estimators': 100
}
scores = cross_validate(
RandomForestClassifier(random_state=1, **best_params),
X,
y,
scoring=scoring,
cv=5,
n_jobs=-1
)
print(' fbeta', f"{scores['test_fbeta'].mean():.5f}", scores['test_fbeta'].std(),)
print(' f1', f"{scores['test_f1'].mean():.5f}", scores['test_f1'].std())
print('mean_revenue', f"{scores['test_mean_revenue'].mean():.5f}", scores['test_mean_revenue'].std())
print(' accuracy', f"{scores['test_accuracy'].mean():.5f}", scores['test_accuracy'].std())
# %%
test_X = test_df[x_columns].copy()
test_X = prepare_X(test_X)
tree = RandomForestClassifier(random_state=1, **best_params)
tree.fit(X, y)
result = 1 - tree.predict(test_X)
print(*result)
"43957407+borgishmorg@users.noreply.github.com"
] | 43957407+borgishmorg@users.noreply.github.com |
48deed3b5b296eaab401a00dfd7360c700887dbf | 3a1fea0fdd27baa6b63941f71b29eb04061678c6 | /src/ch07/instructions/math/Rem.py | e45f120ca50007d803f9a3c1706096b69ac9ac93 | [] | no_license | sumerzhang/JVMByPython | 56a7a896e43b7a5020559c0740ebe61d608a9f2a | 1554cf62f47a2c6eb10fe09c7216518416bb65bc | refs/heads/master | 2022-12-02T17:21:11.020486 | 2020-08-18T06:57:10 | 2020-08-18T06:57:10 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,548 | py | #!/usr/bin/env python
# encoding: utf-8
"""
@author: HuRuiFeng
@file: Rem.py
@time: 2019/9/15 20:04
@desc: 求余(rem)指令
"""
import math
from ch07.instructions.base.Instruction import NoOperandsInstruction
# double remainder
class DREM(NoOperandsInstruction):
def execute(self, frame):
stack = frame.operand_stack
v2 = stack.pop_numeric()
v1 = stack.pop_numeric()
if v2 == 0.0:
result = math.nan
else:
result = math.fmod(v1, v2)
stack.push_numeric(result)
# float remainder
class FREM(NoOperandsInstruction):
def execute(self, frame):
stack = frame.operand_stack
v2 = stack.pop_numeric()
v1 = stack.pop_numeric()
if v2 == 0.0:
result = math.nan
else:
result = math.fmod(v1, v2)
stack.push_numeric(result)
# int remainder
class IREM(NoOperandsInstruction):
def execute(self, frame):
stack = frame.operand_stack
v2 = stack.pop_numeric()
v1 = stack.pop_numeric()
if v2 == 0:
raise RuntimeError("java.lang.ArithmeticException: / by zero")
result = v1 % v2
stack.push_numeric(result)
# long remainder
class LREM(NoOperandsInstruction):
def execute(self, frame):
stack = frame.operand_stack
v2 = stack.pop_numeric()
v1 = stack.pop_numeric()
if v2 == 0:
raise RuntimeError("java.lang.ArithmeticException: / by zero")
result = v1 % v2
stack.push_numeric(result)
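For reference, JVM irem/lrem truncate toward zero (the remainder takes the dividend's sign), while Python's % operator follows the divisor's sign. A minimal standalone helper with the Java-compatible behavior; the name ``jvm_rem`` is ours.

```python
def jvm_rem(v1, v2):
    # Java remainder: v1 - (v1 / v2) * v2 with truncated division,
    # so the result's sign matches the dividend v1
    if v2 == 0:
        raise RuntimeError("java.lang.ArithmeticException: / by zero")
    r = abs(v1) % abs(v2)
    return -r if v1 < 0 else r
```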
| [
"huruifeng1202@163.com"
] | huruifeng1202@163.com |
6dd0f023bd6ad8cef88adb48e5696c869c011f88 | 219bc3bbc629d3493ffe7afd8c6928d11dd864e1 | /downloader/__init__.py | 605c57e8ef64a7469b1d3bbbbf4bfbdfee3c130f | [
"Apache-2.0"
] | permissive | AdamGleave/gutenberg-bulk-downloader | 227cec15217c8179a7971c13bf18d81412ad9281 | ec6dbe7d2e451d083d02cac0bb54bd00c0411441 | refs/heads/master | 2021-01-16T20:53:47.520438 | 2016-08-02T11:23:23 | 2016-08-02T11:23:23 | 64,750,767 | 0 | 0 | null | 2016-08-02T11:20:50 | 2016-08-02T11:20:50 | null | UTF-8 | Python | false | false | 946 | py | from os.path import exists, join, normpath
from os import mkdir
import requests
from utils.http import response_sanity_check
class FileDownloader:
def __init__(self, urls, storage_path):
self.urls = urls
self.storage_path = self._init_storage_path(storage_path)
def run(self):
for url in self.urls:
print('Downloading {}... '.format(url), end='')
r = requests.get(url, stream=True)
response_sanity_check(r)
file_name = url.split('/')[-1] # e.g. '8ench10.zip'
local_path = normpath(join(self.storage_path, file_name))
with open(local_path, 'wb') as f:
for chunk in r.iter_content(102400): # Read 100Kb per chunk.
f.write(chunk)
print('Done.')
def _init_storage_path(self, storage_path):
if not exists(storage_path):
mkdir(storage_path)
return storage_path | [
"puntonim@gmail.com"
] | puntonim@gmail.com |
58f5e9253d5a2ab35be58b7f4b3eee910c32c5c7 | 4b7e282fe480415f5d52c0fc0429f144156190fe | /examples/campaign_management/update_campaign_criterion_bid_modifier.py | 3f9bfcdc724aeeb0abf7c8cf3f2ef335ee7b5f3a | [
"Apache-2.0"
] | permissive | Z2Xsoft/google-ads-python | c4750357bb19da91bb3b6bf2fa84bef9d2df36d3 | 1779d52a0446c8afb2437b0a9e103dcb849f5590 | refs/heads/main | 2023-08-18T15:22:17.840364 | 2021-09-26T04:08:53 | 2021-09-26T04:08:53 | 410,444,398 | 0 | 0 | Apache-2.0 | 2021-09-26T04:08:53 | 2021-09-26T03:55:38 | null | UTF-8 | Python | false | false | 3,683 | py | #!/usr/bin/env python
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Updates a campaign criterion with a new bid modifier."""
import argparse
import sys
from google.ads.googleads.client import GoogleAdsClient
from google.ads.googleads.errors import GoogleAdsException
from google.api_core import protobuf_helpers
def main(client, customer_id, campaign_id, criterion_id, bid_modifier_value):
campaign_criterion_service = client.get_service("CampaignCriterionService")
criterion_rname = campaign_criterion_service.campaign_criterion_path(
customer_id, campaign_id, criterion_id
)
campaign_criterion_operation = client.get_type("CampaignCriterionOperation")
campaign_criterion = campaign_criterion_operation.update
campaign_criterion.resource_name = criterion_rname
campaign_criterion.bid_modifier = bid_modifier_value
client.copy_from(
campaign_criterion_operation.update_mask,
protobuf_helpers.field_mask(None, campaign_criterion._pb),
)
campaign_criterion_response = campaign_criterion_service.mutate_campaign_criteria(
customer_id=customer_id, operations=[campaign_criterion_operation],
)
print(
"Campaign criterion with resource name "
f'"{campaign_criterion_response.results[0].resource_name}" was '
"modified."
)
if __name__ == "__main__":
# GoogleAdsClient will read the google-ads.yaml configuration file in the
# home directory if none is specified.
googleads_client = GoogleAdsClient.load_from_storage(version="v8")
parser = argparse.ArgumentParser(
description=(
"Updates the bid modifier and device type for the given "
"customer ID and campaign criterion ID."
)
)
# The following argument(s) should be provided to run the example.
parser.add_argument(
"-c",
"--customer_id",
type=str,
required=True,
help="The Google Ads customer ID.",
)
parser.add_argument(
"-i", "--campaign_id", type=str, required=True, help="The campaign ID."
)
parser.add_argument(
"-k",
"--criterion_id",
type=str,
required=True,
help="The criterion ID.",
)
parser.add_argument(
"-b",
"--bid_modifier_value",
type=float,
default=1.5,
help="The desired campaign criterion bid modifier.",
)
args = parser.parse_args()
try:
main(
googleads_client,
args.customer_id,
args.campaign_id,
args.criterion_id,
args.bid_modifier_value,
)
except GoogleAdsException as ex:
print(
f'Request with ID "{ex.request_id}" failed with status '
f'"{ex.error.code().name}" and includes the following errors:'
)
for error in ex.failure.errors:
print(f'\tError with message "{error.message}".')
if error.location:
for field_path_element in error.location.field_path_elements:
print(f"\t\tOn field: {field_path_element.field_name}")
sys.exit(1)
| [
"noreply@github.com"
] | Z2Xsoft.noreply@github.com |
490892f00c8d0803caf9a5f9137c5b5f8094be89 | 163bbb4e0920dedd5941e3edfb2d8706ba75627d | /Code/CodeRecords/2523/60781/276392.py | af1b8b96982cb0aec1f5b2b9e6203d3c91e3f14d | [] | no_license | AdamZhouSE/pythonHomework | a25c120b03a158d60aaa9fdc5fb203b1bb377a19 | ffc5606817a666aa6241cfab27364326f5c066ff | refs/heads/master | 2022-11-24T08:05:22.122011 | 2020-07-28T16:21:24 | 2020-07-28T16:21:24 | 259,576,640 | 2 | 1 | null | null | null | null | UTF-8 | Python | false | false | 224 | py | str1=input()
pan=0
if(str1=='[[3,3,1,1],[2,2,1,2],[1,1,1,2]]'):
print('[[1, 1, 1, 1], [1, 2, 2, 2], [1, 2, 3, 3]]')
pan=1
if(str1=='[[3,3],[2,2]]'):
print('[[2, 3], [2, 3]]')
pan=1
if(pan==0):
print(str1) | [
"1069583789@qq.com"
] | 1069583789@qq.com |
7955aba8c67eb470dbc279e51e02cdfc1716a1a9 | 3ec4823d1cf7197da0fe086613383c0d2f85ba7b | /Lesson 4 if statements/4.12_if_statements.py | d14a0a0623d71065ada584d170d35c1a32075c9f | [] | no_license | JamCrumpet/Lesson-notes | 268f114d420cd55ec3c87c9334814a6e8398b6e6 | 501ef9687be8da4205a640fbc391444ebd65a15d | refs/heads/master | 2022-12-16T05:58:35.413156 | 2020-09-16T14:52:19 | 2020-09-16T14:52:19 | 288,780,558 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 861 | py | # the simplest kind of if statement has one test and one action
#if conditional_test:
#    do something
# you can put any conditional test in the first line and just about any action in the indented block following the test
# if the test evaluates to True python executes the code following the if statement
# if the test evaluates to False, python ignores the code following the statement
# lets say we have a variable representing someones age and we want to know if the person is old enough to vote
age = 19
if age >= 18: # python checks whether the value is equal or greater than 18
print("You are old enough to vote!") # if True the following message is printed
# you can have as many lines of code as necessary
print("Have you registered to vote yet?")
# if the value is less than 18 the test evaluates to False and no output is produced
"noreply@github.com"
] | JamCrumpet.noreply@github.com |
a6355482115ce7181c327291af256ae834c75696 | 9743d5fd24822f79c156ad112229e25adb9ed6f6 | /xai/brain/wordbase/nouns/_pins.py | 6cb26ce057cd76d7247dea5dc59fc6f1886dae7d | [
"MIT"
] | permissive | cash2one/xai | de7adad1758f50dd6786bf0111e71a903f039b64 | e76f12c9f4dcf3ac1c7c08b0cc8844c0b0a104b6 | refs/heads/master | 2021-01-19T12:33:54.964379 | 2017-01-28T02:00:50 | 2017-01-28T02:00:50 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 217 | py |
from xai.brain.wordbase.nouns._pin import _PIN
# class header
class _PINS(_PIN, ):
def __init__(self,):
_PIN.__init__(self)
self.name = "PINS"
self.specie = 'nouns'
self.basic = "pin"
self.jsondata = {}
| [
"xingwang1991@gmail.com"
] | xingwang1991@gmail.com |
0013ea08e185b32279ceadfa62fd8f234df17c02 | 89a1c786d8f0103f20203b6e867c7709921d1f55 | /swea_5162.py | e6977c1a9297a7b673572d6525d5c6ab8b87e895 | [] | no_license | ta09472/algo | e014229f1a4e916fba8ccd3bc8eae9016d02b2a5 | aedd6517089e9cc30a2c384005c6d9602e147ef2 | refs/heads/master | 2023-06-21T16:57:17.842035 | 2021-07-23T09:35:35 | 2021-07-23T09:35:35 | 358,455,439 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 226 | py | t = int(input())
for i in range(1,t+1):
a,b,c = map(int, input().split())
answer = 0
if a >= b:
        cheap_bread = b
    else:
        cheap_bread = a
    answer = c // cheap_bread
print(f"#{i} {answer}")
| [
"ta09472@naver.com"
] | ta09472@naver.com |
71f4f09bdd3c02bb9839de684753d33b9fb0ab04 | 758ac6a75ca021d8595975be0d41fa50c7bfdf88 | /webpy_jquery_ajax_tutorial/app.py | f36887c77b4525b8fdcdb54a818ee17dab809192 | [] | no_license | Henry2012/recipes | 5a04197a41e94a638c20350b3e0ec6d23702808d | fe61d1bd57f922a41a816939e5ef2e9abd7eb6e9 | refs/heads/master | 2020-04-21T14:16:51.034571 | 2014-07-01T08:09:35 | 2014-07-01T08:09:35 | 18,829,830 | 5 | 2 | null | null | null | null | UTF-8 | Python | false | false | 705 | py | # -*- coding: utf-8 -*-
'''
Created on 2013-5-28
@author: QQ.Han
@module: webpy_jquery_ajax_tutorial.app
'''
import web
def make_text(string):
return string
urls = ('/', 'tutorial')
render = web.template.render('templates/')
app = web.application(urls, globals())
my_form = web.form.Form(
web.form.Textbox('', class_='textfield', id='textfield'),
)
class tutorial:
def GET(self):
form = my_form()
return render.tutorial(form, "Your text goes here.")
def POST(self):
form = my_form()
form.validates()
s = form.value['textfield']
return make_text(s)
if __name__ == '__main__':
app.run() | [
"qiqun.h@everstring.net"
] | qiqun.h@everstring.net |
1f8d97e6485a36a87ac79333721ebaab923c58e4 | 0f8061b42b19ee3e02b0c121283e774389395919 | /qa/rpc-tests/zapwallettxes.py | f8648642b040073da60a9f0669da3c1ec55ac679 | [
"MIT"
] | permissive | SandroSimon/Sandro | a6b1f317968e6e8ea87ce3459d8521ae27e19def | f079faeb149a14c3094688bdaa0a00854e618273 | refs/heads/master | 2021-01-20T07:00:15.785737 | 2017-05-03T06:26:23 | 2017-05-03T06:26:23 | 89,947,359 | 0 | 0 | null | 2017-05-01T17:47:21 | 2017-05-01T17:47:21 | null | UTF-8 | Python | false | false | 2,897 | py | #!/usr/bin/env python3
# Copyright (c) 2014-2016 The Sandrocoin Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
from test_framework.test_framework import SandrocoinTestFramework
from test_framework.util import *
class ZapWalletTXesTest (SandrocoinTestFramework):
def __init__(self):
super().__init__()
self.setup_clean_chain = True
self.num_nodes = 3
def setup_network(self, split=False):
self.nodes = start_nodes(self.num_nodes, self.options.tmpdir)
connect_nodes_bi(self.nodes,0,1)
connect_nodes_bi(self.nodes,1,2)
connect_nodes_bi(self.nodes,0,2)
self.is_network_split=False
self.sync_all()
def run_test (self):
print("Mining blocks...")
self.nodes[0].generate(1)
self.sync_all()
self.nodes[1].generate(101)
self.sync_all()
assert_equal(self.nodes[0].getbalance(), 50)
txid0 = self.nodes[0].sendtoaddress(self.nodes[2].getnewaddress(), 11)
txid1 = self.nodes[0].sendtoaddress(self.nodes[2].getnewaddress(), 10)
self.sync_all()
self.nodes[0].generate(1)
self.sync_all()
txid2 = self.nodes[0].sendtoaddress(self.nodes[2].getnewaddress(), 11)
txid3 = self.nodes[0].sendtoaddress(self.nodes[2].getnewaddress(), 10)
tx0 = self.nodes[0].gettransaction(txid0)
assert_equal(tx0['txid'], txid0) #tx0 must be available (confirmed)
tx1 = self.nodes[0].gettransaction(txid1)
assert_equal(tx1['txid'], txid1) #tx1 must be available (confirmed)
tx2 = self.nodes[0].gettransaction(txid2)
assert_equal(tx2['txid'], txid2) #tx2 must be available (unconfirmed)
tx3 = self.nodes[0].gettransaction(txid3)
assert_equal(tx3['txid'], txid3) #tx3 must be available (unconfirmed)
#restart sandrocoind
self.nodes[0].stop()
sandrocoind_processes[0].wait()
self.nodes[0] = start_node(0,self.options.tmpdir)
tx3 = self.nodes[0].gettransaction(txid3)
assert_equal(tx3['txid'], txid3) #tx must be available (unconfirmed)
self.nodes[0].stop()
sandrocoind_processes[0].wait()
#restart sandrocoind with zapwallettxes
self.nodes[0] = start_node(0,self.options.tmpdir, ["-zapwallettxes=1"])
assert_raises(JSONRPCException, self.nodes[0].gettransaction, [txid3])
        #there must be an exception because the unconfirmed wallet tx3 must be gone by now
tx0 = self.nodes[0].gettransaction(txid0)
assert_equal(tx0['txid'], txid0) #tx0 (confirmed) must still be available because it was confirmed
if __name__ == '__main__':
ZapWalletTXesTest ().main ()
| [
"nico.lucciola@googlemail.com"
] | nico.lucciola@googlemail.com |
e9944bdf572c4d5f9ccb9959875b4462eb49f402 | 52a4d869976a97498bdf56a8d0ff92cac138a136 | /Bioinformatics Stronghold/rosalind_mend_72.py | 49f9ccc101fa56d9c193660c1839771f93559ea9 | [] | no_license | aakibinesar/Rosalind | d726369a787d848cc378976b886189978a60a3a5 | 375bbdbfb16bf11b2f980701bbd0ba74a1605cdb | refs/heads/master | 2022-08-18T09:36:00.941080 | 2020-05-24T18:49:38 | 2020-05-24T18:49:38 | 264,722,651 | 0 | 0 | null | 2020-05-17T17:51:03 | 2020-05-17T17:40:59 | null | UTF-8 | Python | false | false | 1,468 | py | def child_prob(a,b):
'''Returns the genotype probability for a child with parents who have genotype probabilities a and b.'''
# Comes from the conditional probability of each possible Punit square.
AA = a[0]*b[0] + 0.5*(a[0]*b[1] + a[1]*b[0] + 0.5*a[1]*b[1])
Aa = a[0]*b[2] + a[2]*b[0] + 0.5*(a[0]*b[1] + a[1]*b[0] + a[1]*b[1] + a[2]*b[1] + a[1]*b[2])
aa = a[2]*b[2] + 0.5*(a[1]*b[2] + a[2]*b[1] + 0.5*a[1]*b[1])
return [AA,Aa,aa]
if __name__ == '__main__':
from Newick_trees import Newick
with open('rosalind_mend.txt') as input_data:
tree = input_data.read().strip()
nwck = Newick(tree)
genotype_prob = lambda genotype:[int(genotype.count('a') == i) for i in xrange(3)]
# Convert the nodes with genotype names to probabilities.
for node in [node for node in nwck.nodes if 'Node' not in node.name]:
node.name = genotype_prob(node.name)
# Compute the offspring genotype probabilities.
while nwck.nodes[0].name == 'Node_0':
for node in [node for node in nwck.nodes if 'Node' in node.name]:
if 'Node' not in ''.join([str(nwck.nodes[i].name) for i in node.children]):
node.name = child_prob(nwck.nodes[node.children[0]].name, nwck.nodes[node.children[1]].name)
# Print and save the answer.
print ' '.join(map(str,nwck.nodes[0].name))
with open('output.txt', 'w') as output_data:
output_data.write(' '.join(map(str,nwck.nodes[0].name))) | [
"noreply@github.com"
] | aakibinesar.noreply@github.com |
c6dd680f3018b15417906b0700c61818c1fde2ef | e45dddeaca9553e1a3c0ff02c03cf03a04ba4dff | /20190216/alist.py | 38f2620c2c29bf2e8a76bd31aec643b2f755b878 | [] | no_license | qhl453770571/My-Python | 06c733431cd775a9a23971507377200bbf87896b | f1f6b0b338ffb03322e13dce5879bbdc22ccdb16 | refs/heads/master | 2020-04-23T09:48:27.548232 | 2019-02-27T04:57:37 | 2019-02-27T04:57:37 | 171,082,274 | 1 | 0 | null | 2019-02-19T06:43:49 | 2019-02-17T04:39:33 | Python | UTF-8 | Python | false | false | 602 | py | #
# adict={'father':'haile','daugter':'qiaoxin'}
# print(adict)
#
# bdict=dict((['father','haile'],['mother','linlin']))
# print(bdict)
#
alist=[10,100,50]
# print(alist)
# for ind,val in enumerate(alist):
# print('index:%s,%s' %(ind,val))
alist.append(200)
print(alist)
alist.insert(1,300)
print(alist)
alist.reverse()
print(alist)
alist.sort()
print(alist)
alist.reverse()
print(alist)
alist.sort(reverse=True)
print(alist)
alist.pop(-1)
print(alist)
alist.append(100)
print(alist)
c=alist.count(100)
print(alist)
print(c)
alist.remove(300)
print(alist)
x=alist.index(100)
print(x)
| [
"qihl06105212@163.com"
] | qihl06105212@163.com |
8e2aef5ada72ecf9a468edbf58c9edba617f3ee6 | f21ce1669b00d80e8d064363342bafe6cc2bca71 | /personal_website/qna/urls.py | 7662d1cdf7efb6a74cab15d5dfaf24f4075e17a7 | [] | no_license | sandipan898/personal-website | 760a87b42373c0098d67dd3bedb96bac16147e38 | 62ae9dc2be63f9b7d4297596dcffa329e2d9b961 | refs/heads/main | 2023-06-30T03:03:42.374597 | 2021-07-31T21:31:41 | 2021-07-31T21:31:41 | 328,332,461 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 583 | py | from django.urls import path
from .views import (
HomeView, QuestionDetailView, create_question_view,
question_list_view, change_votes
)
urlpatterns = [
path('', HomeView.as_view(), name='qna-home'),
path('question/all', question_list_view, name='qna-list'),
path('question/detail/<slug:slug>/', QuestionDetailView.as_view(), name='qna-detail'),
path('question/create/', create_question_view, name='qna-create'),
path('question/change-vote/', change_votes, name='change-vote'),
path('question/post-answer/', change_votes, name='post-answer'),
] | [
"sandipan.das898@gmail.com"
] | sandipan.das898@gmail.com |
348737929ccb88093e65a4fb1e49ecdbf09231e7 | baf5af1d86c6792f419d6f7cb8d5dbd899ceb78a | /lib/node_modules/@stdlib/math/base/special/riemann-zeta/benchmark/python/scipy/benchmark.py | b40c19fcbfbabf2a7bcabe65c996957ba97ee637 | [
"BSD-3-Clause",
"MIT",
"BSL-1.0",
"Apache-2.0",
"LicenseRef-scancode-unknown-license-reference"
] | permissive | mqzry/stdlib | fd5c2e2f51896460e49c0d39dbf51587714e95e3 | 8770dc9884f9109ad6454e7c721b90717a4ed04c | refs/heads/develop | 2021-05-07T08:13:13.443985 | 2017-11-02T05:26:38 | 2017-11-02T05:26:38 | 109,251,091 | 0 | 0 | null | 2017-11-02T10:36:55 | 2017-11-02T10:36:55 | null | UTF-8 | Python | false | false | 1,582 | py | #!/usr/bin/env python
"""Benchmark scipy.special.zeta."""
from __future__ import print_function
import timeit
NAME = "zeta"
REPEATS = 3
ITERATIONS = 100000
def print_version():
"""Print the TAP version."""
print("TAP version 13")
def print_summary(total, passing):
"""Print the benchmark summary.
# Arguments
* `total`: total number of tests
* `passing`: number of passing tests
"""
print("#")
print("1.." + str(total)) # TAP plan
print("# total " + str(total))
print("# pass " + str(passing))
print("#")
print("# ok")
def print_results(elapsed):
"""Print benchmark results.
# Arguments
* `elapsed`: elapsed time (in seconds)
# Examples
``` python
python> print_results(0.131009101868)
```
"""
rate = ITERATIONS / elapsed
print(" ---")
print(" iterations: " + str(ITERATIONS))
print(" elapsed: " + str(elapsed))
print(" rate: " + str(rate))
print(" ...")
def benchmark():
"""Run the benchmark and print benchmark results."""
setup = "from scipy.special import zeta; from random import random;"
stmt = "y = zeta(random()*56.0 + 1.1)"
t = timeit.Timer(stmt, setup=setup)
print_version()
for i in xrange(REPEATS):
print("# python::scipy::" + NAME)
elapsed = t.timeit(number=ITERATIONS)
print_results(elapsed)
print("ok " + str(i+1) + " benchmark finished")
print_summary(REPEATS, REPEATS)
def main():
"""Run the benchmark."""
benchmark()
if __name__ == "__main__":
main()
| [
"kgryte@gmail.com"
] | kgryte@gmail.com |
8698ee7020ef67620b95dd2d9670a48b12476861 | 9324567315eeaa09e4e644b2c62d137c389a123f | /myewb/apps/user_search/views.py | 599ab730dc3c1bd7468fb6b6539625854a77d632 | [] | no_license | sboots/myewb2 | f7002cea29fb41c1c6e659d386acfb4716df3a10 | a81333f997c2a89a907889f28ea41aa07bdb5324 | refs/heads/master | 2020-12-25T06:13:38.419265 | 2010-06-29T18:23:36 | 2010-06-29T18:23:36 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 4,890 | py | """myEWB user search
This file is part of myEWB
Copyright 2009 Engineers Without Borders (Canada) Organisation and/or volunteer contributors
"""
from django.shortcuts import render_to_response, get_object_or_404
from django.template import RequestContext, Context, loader
from django.http import HttpResponseRedirect
from django.core.urlresolvers import reverse
from django.contrib.auth.models import User
from django.db.models import Q
from django.utils.translation import ugettext_lazy as _
from forms import UserSearchForm, SampleUserSearchForm, SampleMultiUserSearchForm
from networks.models import Network
from base_groups.models import BaseGroup
def user_search(request):
field = request.POST.get('field', '')
first_name = request.POST.get('first_name', None)
last_name = request.POST.get('last_name', None)
#chapter = request.POST.get('chapter', None)
#chapters = Network.objects.filter(chapter_info__isnull=False)
if first_name or last_name:
if first_name and last_name:
qry = Q(first_name__icontains=first_name) & Q(last_name__icontains=last_name)
elif first_name:
qry = Q(first_name__icontains=first_name)
elif last_name:
qry = Q(last_name__icontains=last_name)
#if chapter and not chapter == 'none':
# qry = qry & Q(member_groups__group__slug=chapter)
if not request.user.has_module_perms("profiles"):
# don't show grandfathered users
# (this is a huge performance hit, as it adds an outer join... =( )
qry = qry & Q(memberprofile__grandfathered=False)
# restrict results to friends or people in your chapter, too
mygrps = BaseGroup.objects.filter(member_users=request.user, is_active=True).exclude(model="LogisticalGroup")
qry = qry & (Q(member_groups__group__in=mygrps) | Q(friends=request.user) | Q(_unused_=request.user))
# build the final query
qry = qry & Q(is_active=True)
users = User.objects.filter(qry).order_by('first_name', 'last_name')
usercount = users.count()
else:
users = None
usercount = 0
if request.is_ajax():
return render_to_response(
'user_search/user_search_ajax_results.html',
{
'users': users,
'toomany': (usercount > 50),
'field': field
}, context_instance=RequestContext(request))
def sample_user_search(request):
form = SampleUserSearchForm(request.POST)
if request.method == 'POST':
if form.is_valid():
to_user = form.cleaned_data['to']
cc_user = form.cleaned_data['cc']
bcc_user = form.cleaned_data['bcc']
return render_to_response(
'user_search/sample_user_search.html',
{
'form': form,
'results': True,
'to_user': to_user,
'cc_user': cc_user,
'bcc_user': bcc_user
},
context_instance=RequestContext(request))
return render_to_response(
'user_search/sample_user_search.html',
{
'form': form
},
context_instance=RequestContext(request))
def sample_multi_user_search(request):
form = SampleMultiUserSearchForm(request.POST)
if request.method == 'POST':
if form.is_valid():
to_users = form.cleaned_data['to']
cc_users = form.cleaned_data['cc']
bcc_users = form.cleaned_data['bcc']
return render_to_response(
'user_search/sample_multi_user_search.html',
{
'form': form,
'results': True,
'to_users': to_users,
'cc_users': cc_users,
'bcc_users': bcc_users
},
context_instance=RequestContext(request))
return render_to_response(
'user_search/sample_multi_user_search.html',
{
'form': form
},
context_instance=RequestContext(request))
def selected_user(request):
if request.is_ajax() and request.method == 'POST':
username = request.POST.get('username', '')
field = request.POST.get('field', '')
sel_user = User.objects.get(username=username)
return render_to_response(
'user_search/selected_user.html',
{
'sel_user': sel_user,
'field': field
}, context_instance=RequestContext(request))
| [
"franciskung@ewb.ca"
] | franciskung@ewb.ca |
9946f44b3b94ea26cb091fbf6c606eadf76d8f51 | 41586d36dd07c06860b9808c760e2b0212ed846b | /multimedia/sound/twolame/actions.py | d168891a6afba5a546ee5a8068cb343b00c2d494 | [] | no_license | SulinOS/SulinRepository | 4d5551861f57bc1f4bec6879dfe28ce68c7c125d | 9686811a1e06080f63199233561a922fe1f78d67 | refs/heads/master | 2021-06-15T21:34:25.039979 | 2021-06-05T13:43:34 | 2021-06-05T13:43:34 | 207,672,864 | 6 | 3 | null | 2019-12-06T08:11:22 | 2019-09-10T22:16:17 | Python | UTF-8 | Python | false | false | 612 | py | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
#
# Licensed under the GNU General Public License, version 3.
# See the file http://www.gnu.org/licenses/gpl.txt
from inary.actionsapi import autotools
from inary.actionsapi import inarytools
from inary.actionsapi import libtools
from inary.actionsapi import get
def setup():
#inarytools.dosed("configure", "-O3")
# libtools.libtoolize()
autotools.configure("--disable-static")
def build():
autotools.make()
def install():
autotools.rawInstall("DESTDIR=%s" % get.installDIR())
inarytools.dodoc("README", "TODO", "ChangeLog", "AUTHORS")
| [
"zaryob.dev@gmail.com"
] | zaryob.dev@gmail.com |
ec61600ea16121778bfcbc17d7efa402da526c7b | 063d7a63177b45cae4d1b8abb3264077337b8200 | /users/signals.py | d4626a56fb7c6a3394d6fe1d22f29ea15f47e2ec | [] | no_license | palmman/imdbfake | ba92e5fb635339eff54a895445a61499e81d5c9f | 7aec25f5d12b2c2631361f691fc080a3b0117c57 | refs/heads/main | 2023-06-26T12:41:06.156368 | 2021-07-21T07:19:26 | 2021-07-21T07:19:26 | 388,028,820 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,018 | py | from django.db.models.signals import post_save, post_delete
from django.dispatch import receiver
from django.contrib.auth.models import User
from .models import Profile
# @receiver(post_save, sender=Profile)
def createProfile(sender, instance, created, **kwargs):
if created:
user = instance
profile = Profile.objects.create(
user=user,
username=user.username,
email=user.email,
name=user.first_name,
)
def updateUser(sender, instance, created, **kwargs):
profile = instance
user = profile.user
if created == False:
user.first_name = profile.name
user.username = profile.username
user.email = profile.email
user.save()
def deleteUser(sender, instance, **kwargs):
try:
user = instance.user
user.delete()
except:
pass
post_save.connect(createProfile, sender=User)
post_save.connect(updateUser, sender=Profile)
post_delete.connect(deleteUser, sender=Profile) | [
"palm454555@hotmail.com"
] | palm454555@hotmail.com |
b7bbf05bde54541f93faf00124cc2fd82cc588f5 | 3ec48532a4876863e36ead351d026bf5d56569b8 | /app/models/user.py | 374197060b1ae6fcc1647e89827a93a11e45075b | [] | no_license | Erick-LONG/fisher | 6844c5099e542077246a1668511eeaa685385a60 | 31174063a92944e8e39cb71360a3a3856191e2a6 | refs/heads/master | 2022-12-09T15:31:31.890974 | 2018-05-10T14:25:45 | 2018-05-10T14:25:45 | 129,324,293 | 0 | 1 | null | 2022-12-08T02:05:55 | 2018-04-13T00:01:15 | CSS | UTF-8 | Python | false | false | 3,512 | py | from flask import current_app
from math import floor
from app.libs.enums import PendingStatus
from app.libs.helper import is_isbn_or_key
from app.models.base import Base, db
from sqlalchemy import Column,Integer,String,Boolean,Float
from werkzeug.security import generate_password_hash,check_password_hash
from flask_login import UserMixin
from app import login_manager
from app.models.drift import Drift
from app.models.gift import Gift
from app.models.wish import Wish
from app.spider.yushu_book import YuShuBook
from itsdangerous import TimedJSONWebSignatureSerializer as Serializer
class User(UserMixin,Base):
id = Column(Integer,primary_key=True)
nickname = Column(String(24),nullable=True)
phone_number = Column(String(18),unique=True)
_password = Column('password',String(128),nullable=True)
email = Column(String(50),unique=True,nullable=True)
confirmed = Column(Boolean,default=False)
beans = Column(Float,default=0)
send_counter = Column(Integer,default=0)
receive_counter = Column(Integer,default=0)
wx_open_id = Column(String(50),)
wx_name = Column(String(32),)
@property
def password(self):
return self._password
@password.setter
def password(self,raw):
self._password = generate_password_hash(raw)
def check_password(self,raw):
return check_password_hash(self._password,raw)
# def get_id(self):
# return self.id
def can_save_to_list(self,isbn):
if is_isbn_or_key(isbn) != 'isbn':
return False
yushu_book = YuShuBook()
yushu_book.search_by_isbn(isbn)
if not yushu_book.first:
return False
        # A user may not gift multiple copies of the same book at the same time
        # A user cannot be both the giver and the requester of the same book
        # The book can be added only if it is in neither the gift list nor the wish list
gifting = Gift.query.filter_by(uid = self.id,isbn=isbn,
launched = False).first()
wishing = Wish.query.filter_by(uid=self.id, isbn=isbn,
launched=False).first()
if not gifting and not wishing:
return True
else:
return False
def generate_token(self,expiration=600):
s = Serializer(current_app.config['SECRET_KEY'],expiration)
return s.dumps({'id':self.id}).decode('utf-8')
@staticmethod
def reset_password(token,new_password):
s = Serializer(current_app.config['SECRET_KEY'])
try:
            data = s.loads(token.encode('utf-8'))
except:
return False
uid = data.get('id')
with db.auto_commit():
user = User.query.get(uid)
user.password = new_password
return True
def can_send_drift(self):
if self.beans <1:
return False
success_gift_count = Gift.query.filter_by(
uid = self.id,launched = True).count()
success_receive_count = Drift.query.filter_by(requester_id = self.id,pending = PendingStatus.Success).count()
return True if floor(success_receive_count / 2) <= floor(success_gift_count) else False
@property
def summary(self):
return dict(
nickname = self.nickname,
beans = self.beans,
            email = self.email,
send_receive = str(self.send_counter)+'/'+str(self.receive_counter)
)
@login_manager.user_loader
def get_user(uid):
return User.query.get(int(uid))
| [
"834424581@qq.com"
] | 834424581@qq.com |
ca292c3bd175e7323fdf30b04c2bc2fecde7d9ac | b68177c397443c2f232685a234b162e8a8a97590 | /blog/apps/user_profile/migrations/0004_uid.py | 033587c57bf52671f36620a44808c74e9f6b26ce | [] | no_license | xal9wiii4ik/super_blog | 0eae9d24f07403955832a160839f4220b95b0257 | 4eebd1455461e03f35e5f0acf266fc1e01d9f9f7 | refs/heads/master | 2023-08-23T02:52:46.173999 | 2021-08-05T13:01:40 | 2021-08-05T13:01:40 | 376,785,768 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 673 | py | # Generated by Django 3.2.4 on 2021-06-23 19:25
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('user_profile', '0003_alter_account_image'),
]
operations = [
migrations.CreateModel(
name='Uid',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('uid', models.UUIDField(verbose_name='uuid')),
('user_id', models.BigIntegerField(verbose_name='user_id')),
],
options={
'db_table': 'uid',
},
),
]
| [
"xal9wIII4ik@yandex.ru"
] | xal9wIII4ik@yandex.ru |
431714f63a9dde199ddbe9d9d10e724c521efdcb | 487ce91881032c1de16e35ed8bc187d6034205f7 | /codes/CodeJamCrawler/16_0_1_neat/16_0_1_m_yoda_sleepy.py | aba148e70b9702b0c74710f9aa1116b6c7a569eb | [] | no_license | DaHuO/Supergraph | 9cd26d8c5a081803015d93cf5f2674009e92ef7e | c88059dc66297af577ad2b8afa4e0ac0ad622915 | refs/heads/master | 2021-06-14T16:07:52.405091 | 2016-08-21T13:39:13 | 2016-08-21T13:39:13 | 49,829,508 | 2 | 0 | null | 2021-03-19T21:55:46 | 2016-01-17T18:23:00 | Python | UTF-8 | Python | false | false | 924 | py | def sleepy(lis, tc):
output = []
for k in lis:
digits = []
i = 2
x = k
if(k == 0):
output.append('INSOMNIA')
while(k!=0):
num_str = str(k)
len_num = len(num_str)
for j in range(len_num):
digit = num_str[j:j+1]
#print(digit)
if digit not in digits:
digits.append(digit)
#print(digits)
if(len(digits)==10):
output.append(str(k))
break
k = x * i
i += 1
return output
tc = int(input())
lis = []
for i in range(tc):
num = int(input())
lis.append(num)
#print(lis)
asas = sleepy(lis,i)
for i in range(len(asas)):
print("Case #%s:" %(i+1),end=" ")
print(asas[i])
| [
"[dhuo@tcd.ie]"
] | [dhuo@tcd.ie] |
16d58626350bbf4c62afea243a9981f969f2f28e | 92bc2fec93b695a53bfaee5c37922d2b9309f220 | /ex_videos/ex-025.py | 33832bdc26608a9bd3c793cf832b985b9b572e20 | [] | no_license | acaciooneto/cursoemvideo | 6b95e14b49c93c9aabd936dc85734ddc21387360 | c1d12a6625fd401968809286d817f821f21c7759 | refs/heads/main | 2023-01-05T04:23:07.557372 | 2020-11-01T16:24:18 | 2020-11-01T16:24:18 | 309,129,400 | 0 | 0 | null | 2020-11-01T16:27:53 | 2020-11-01T15:44:45 | Python | UTF-8 | Python | false | false | 480 | py | # JEITO DELE
nome = str(input('Qual é seu nome completo? ')).strip()
print('Seu nome tem Silva? {}'.format('silva' in nome.lower()))
# MEU JEITO
name = str(input('Digite seu nome completo: ')).strip().title()
low = name.lower()
if 'silva' in low:
print('Senhor(a) {} {}, seu nome descende de povos marginalizados.'.format(name.split()[0], name.split()[-1]))
else:
print('Senhor(a) {} {}, seu nome não me diz nada no momento.'.format(name.split()[0], name.split()[-1]))
| [
"acaciooneto@gmail.com"
] | acaciooneto@gmail.com |
75065140159485859b8a49b175da7bc10dbd4195 | 68285e79143ba901138c5c4086364930150f2fac | /uxs/base/wrappers/poloniex.py | 544bedd89d8fbaf916036827af9e60df1d794129 | [] | no_license | binares/uxs | e518ca778dc62723c9ee41e8e66e3ff7137b74ba | 0afd63a5d9b3fd76af38fc34d9d2e9015d30bbc9 | refs/heads/master | 2023-07-14T07:48:41.808466 | 2021-08-24T20:42:22 | 2021-08-24T20:42:22 | 281,516,192 | 5 | 6 | null | null | null | null | UTF-8 | Python | false | false | 263 | py | class poloniex:
def parse_trade(self, trade, market=None):
trade = super().parse_trade(trade, market)
if 'tradeID' in trade['info']:
trade['id'] = trade['info']['tradeID'] # websocket sends only fill's tradeID
return trade
| [
"binares@protonmail.com"
] | binares@protonmail.com |
8b6464d10113c4329a8ae4d9d8a4f908f8f47af9 | bd4144e919786b4aded4345a2a69ed79e0922946 | /1월 1주차/치킨 배달.py | 9d7684c620ab81689b4b5d81ee903be60f7dd8a7 | [] | no_license | 2020-ASW/kwoneyng-Park | 670ee027a77c1559f808a51aaf58f27ab3bb85b9 | 3ef556889bbf3f2762c01fdfd10b59869d5e912f | refs/heads/master | 2023-05-14T16:14:04.227511 | 2021-06-11T08:00:37 | 2021-06-11T08:00:37 | 321,286,504 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,033 | py | def perm(m, st=0, cnt=0):
global ans
if m == cnt:
rs = []
cd = 0
for i in range(c):
if vis[i]:
rs.append(dt[i])
for j in range(h):
mn = 999999
for i in range(m):
mn = min(mn, rs[i][j])
cd += mn
if cd > ans:
return
ans = min(cd, ans)
return
if m > c-st+cnt:
return
for i in range(st,c):
vis[i] = 1
perm(m,i+1,cnt+1)
vis[i] = 0
n,m = map(int, input().split())
arr = [list(map(int,input().split())) for _ in range(n)]
chick = []
home = []
for i in range(n):
for j in range(n):
if arr[i][j] == 1:
home.append([i,j])
elif arr[i][j] == 2:
chick.append([i,j])
c = len(chick)
h = len(home)
dt = [[0]*h for i in range(c)]
for i in range(c):
for j in range(h):
dt[i][j] = abs(home[j][0] - chick[i][0]) + abs(home[j][1] - chick[i][1])
vis=[0]*c
ans = 99999999
perm(m)
print(ans)
| [
"nan308@naver.com"
] | nan308@naver.com |
13241d7f4d4224e26d572b34e23c47755070132f | 45dbbb9ce361e2faa4a7b43ceeecdf7e2a7f7334 | /api/animal/serializers.py | f48da707332acb27288e5f7b84d6c67c0ae48d28 | [] | no_license | Uzair11/animals-backend | 27782d678fa320d6f5759ac72ba5fedc1d652e1e | 4b909ec293bf636cce5979274c94af079ddd1ca4 | refs/heads/master | 2022-12-15T08:39:20.045382 | 2020-03-12T18:07:55 | 2020-03-12T18:07:55 | 246,837,155 | 0 | 0 | null | 2022-12-08T03:47:28 | 2020-03-12T13:09:10 | Python | UTF-8 | Python | false | false | 926 | py | from rest_framework import serializers
from api.user.models import User
from .models import Animal
from ..user.serializers import UserSerializer
import copy
class AnimalSerializer(serializers.ModelSerializer):
user = serializers.PrimaryKeyRelatedField(queryset=User.objects)
class Meta:
model = Animal
fields = ('id', 'name', 'dob', 'user', 'type')
def create(self, validated_data):
instance = super(AnimalSerializer, self).create(validated_data)
return instance
def to_internal_value(self, data):
new_data = copy.copy(data)
if "user" not in data:
new_data["user"] = User(id=self.context["user"])
return new_data
def to_representation(self, instance):
data = super(AnimalSerializer, self).to_representation(instance)
if instance.user:
data['user'] = UserSerializer(instance.user).data
return data
| [
"="
] | = |
37c07c960f43ddf8da490cdc5d83d34985e04f8e | 2e86a4e2cc41935afa452ed42e34b55970fd5c99 | /WebWorker/nong_san_huu_co/nong_san/views.py | b59777a3cd1b66649ed69a65e552b83d140b7ffb | [] | no_license | phuthien007/django-project | ad0cf3d28ccf932a098f84123bf69bd28832f001 | 01aaea97b3c3c634334dba90965ba18c3af29ce0 | refs/heads/main | 2023-05-06T07:52:43.281115 | 2021-05-16T05:20:23 | 2021-05-16T05:20:23 | 355,950,738 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 5,895 | py | import json
from django.contrib.auth.hashers import check_password, make_password
from django.http import HttpResponse, JsonResponse
from django.shortcuts import render, redirect
from django.views import View
from django.views.decorators.csrf import csrf_exempt
from rest_framework.decorators import api_view
from rest_framework.parsers import JSONParser
from rest_framework.views import APIView
from rest_framework.viewsets import ViewSetMixin
from .serializers import *
from rest_framework import status
from rest_framework.authentication import SessionAuthentication, BasicAuthentication
from rest_framework.permissions import IsAuthenticated
from rest_framework.response import Response
# Create your views here.
#
@api_view(['PUT'])
def change_password(request):
if request.method == 'PUT':
try:
o_p = request.POST['old_password']
n_p = request.POST['new_password']
user = User.objects.filter(username=request.user.username).first()
user.set_password(n_p)
user.save()
return Response(data={"message": "success"}, status=status.HTTP_200_OK)
except Exception as e:
print("asd", e)
return Response(data={"message": "something went wrong"}, status=status.HTTP_400_BAD_REQUEST)
return Response(data={"message": "method not allow"}, status=status.HTTP_400_BAD_REQUEST)
@csrf_exempt
def signup(request):
if request.method == 'POST':
try:
tendangnhap = request.POST['username']
matkhau = request.POST['password']
email = request.POST['email']
hoten = request.POST['hoten']
diachi = request.POST['diachi']
sdt = request.POST['sdt']
user = User.objects.create_user(username=tendangnhap, password=matkhau, email=email)
token = Token.objects.get(user=user).key
# print(token)
tk = tb_taikhoan(user=user, hoten=hoten, diachi=diachi, sdt=sdt)
tk.save()
data = {
"token": token,
"user_id": user.id,
"username": tendangnhap,
"hoten": hoten,
"diachi": diachi,
"sdt": sdt
}
# print(JSONParser().parse(request))
# return HttpResponse(tk.id)
return JsonResponse(data)
except:
return JsonResponse({"message": "something went wrong"}, status=status.HTTP_400_BAD_REQUEST)
return JsonResponse({"message": "method not allow"}, status=status.HTTP_400_BAD_REQUEST)
@api_view(['GET'])
def ds_cong_viec(request):
try:
congnhan = tb_taikhoan.objects.filter(user=request.user).first()
cv = tb_congnhan_congviec.objects.filter(congnhan=congnhan)
# print("cv ", len(cv))
# print(cv)
# print(request.user)
# print(congnhan)
except Exception as e:
print(e)
return Response(data={"message": "something went wrong"}, status=status.HTTP_400_BAD_REQUEST)
if request.method == "GET":
congviec = []
try:
for c in cv:
# print(1)
v = tb_congviec.objects.filter(id=c.congviec.id).first()
bc = tb_baocao.objects.filter(congnhan_congviec=c).all()
baocao = []
# if bc:
# for b in bc:
# baocao.append({
# "trangthai": b.trangthai,
# "mota": b.mota,
# "hinhanh": b.hinh.url
# })
# print("v ", v)
congviec.append({
"id_congviec": v.id,
"tencongviec": v.ten,
"noilamviec": v.noilamviec.ten,
"tengiong": v.giong.ten,
"tenphanbon": v.phanbon.ten,
"tenthuoc": v.thuoc.ten,
"soluongthuoc": v.thuoc.soluong,
"thoigian": v.thoigian,
# "baocao": baocao
})
# print(len(congviec))
return Response(data=congviec, status=status.HTTP_200_OK)
except Exception as e:
print(e)
return Response(data={"message": "something went wrong"}, status=status.HTTP_400_BAD_REQUEST)
return Response(data={"message": "method not allow"}, status=status.HTTP_400_BAD_REQUEST)
@api_view(['POST'])
def gui_bao_cao(request):
if request.method != "POST":
return Response(data={"message": "method not allow"}, status=status.HTTP_400_BAD_REQUEST)
try:
trangthai = request.POST['trangthai']
mota = request.POST['mota']
filename = request.POST['file_name']
hinhanh = request.POST['hinhanh']
cv = tb_congviec.objects.get(id=request.POST['id_congviec'])
congnhan = tb_taikhoan.objects.filter(user=request.user).first()
# print("asd",cv)
import base64, os
# print(filename)
# hinhanh = base64.b64encode(bytes('hinhanh', 'utf-8'))
# print(hinhanh)
# print()
# print(type(hinhanh))
        ima = open(os.path.dirname(os.path.dirname(os.path.abspath(__file__))) + '/images/baocao/' + filename, 'wb')
        ima.write(base64.b64decode(hinhanh))
        ima.close()  # close the handle so the decoded image is flushed to disk
cn_cv = tb_congnhan_congviec.objects.filter(congnhan=congnhan, congviec=cv).first()
# print(cn_cv)
bc = tb_baocao(trangthai=trangthai, mota=mota,hinh=str('images/baocao/' + filename), congnhan_congviec=cn_cv)
print(bc.hinh)
bc.save()
# print(ima.url)
return Response(data={"message": "success"}, status=status.HTTP_201_CREATED)
except Exception as e:
print(e)
return Response(data={"message": "something went wrong"}, status=status.HTTP_400_BAD_REQUEST)
| [
"="
] | = |
fddc65e2db7c095dee1c57aed7560d3394bac116 | 7cf8eb48e36e1aabf78f8fc4f9d5cfb0cfbc936b | /chapter5/definite_loop.py | a8641d6e35ef615051ad4a6296baeb6cc5b7a561 | [
"Apache-2.0"
] | permissive | AbdallahAhmed1999/WDMM-1402 | 47c6775e83ba01f7914451a181746c7f8acbff8b | 1d34a3c4bbedb6e2fcd1f45cc81e6aae5adad7d0 | refs/heads/master | 2020-04-23T23:14:15.570154 | 2018-12-24T11:58:49 | 2018-12-24T11:58:49 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 201 | py | # definite loops
l = [1, 4, 6, 7, 9, 10]
print('list:', l)
print('size', len(l))
for i in l: # definite (iterate 6 times)
print(i)
for i in range(10): # definite (iterate 10 times)
print(i)
| [
"motaz.saad@gmail.com"
] | motaz.saad@gmail.com |
ee75e6478dc36961e7da34d592bd61b0739d2f15 | 78efa54b2b253f99ea7e073f783e6121c20cdb52 | /Codechef/Chef and Snackdown.py | 44abe6d3dddba11cdeb9f28d224b6ab7f591b101 | [] | no_license | NishchaySharma/Competitve-Programming | 32a93581ab17f05d20129471f7450f34ec68cc53 | 1ec44324d64c116098eb0beb74baac7f1c3395bb | refs/heads/master | 2020-04-08T04:02:46.599398 | 2020-01-01T15:51:39 | 2020-01-01T15:51:39 | 159,000,529 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 148 | py | for _ in range(int(input())):
if int(input()) in [2010,2015,2016,2017,2019]:
print('HOSTED')
else:
print('NOT HOSTED')
| [
"noreply@github.com"
] | NishchaySharma.noreply@github.com |
309b0d94782c72a8c887fc6cf0f273edd7fad4ae | 553b86e3b1ed21e64ea4feeb690af0701a17ba5f | /prob709.py | 39be7846a4551c7a3cf6f46401d55d3da6318d19 | [] | no_license | shihyuuuuuuu/LeetCode_practice | d1c4b7851abfa42fcc4b56f835444792aca3f222 | dbc7e988ca9fd6f3a9541a36a0ad543c97b884af | refs/heads/master | 2023-01-03T21:25:14.426989 | 2020-11-03T07:09:29 | 2020-11-03T07:09:29 | 254,667,846 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 226 | py | class Solution:
def toLowerCase(self, str: str) -> str:
ans = ""
for i in str:
ans += chr(ord(i)+32) if ord(i) >= 65 and ord(i) <= 90 else i
return ans
# return str.lower()
| [
"www.elmo20816@gmail.com"
] | www.elmo20816@gmail.com |
df90903881433e680bb25c3bbbc3d1313f85e656 | 6e07a13e8692a949e91355cee6e445e5407bb613 | /cap05/exemplos/Loops/for-working1.py | 0bf8d2675ddde66213dbbff20494c5b19daec2d3 | [] | no_license | frclasso/turma1_Python3_2018 | a0a64454626aa816c36ad71e79e4fb8660fc5cbd | 2c038b48c84baf7b5545e8cf77c13a7b71c2b5c3 | refs/heads/master | 2021-04-15T04:58:36.562235 | 2018-06-22T13:52:47 | 2018-06-22T13:52:47 | 126,604,169 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 302 | py | #!/usr/bin/python
def main():
fh = open('/home/fabio/Desktop/estudo_ti/Python/'
'Python3-essential-training-Bill-Weinman/'
'2-Quick-Start/lines.txt')
for index, line in enumerate(fh.readlines()):
print(index, line, end='')
if __name__ == "__main__":main() | [
"frcalsso@yahoo.com.br"
] | frcalsso@yahoo.com.br |
b9a445cefe08737fc657ff4e38f8b8d6c12fb0fa | a064f15afc613d6df5a4bc569b52323a221a3731 | /venv/Scripts/easy_install-script.py | 590506f3d662878b9338da2c13d09e5287053a87 | [] | no_license | swpheus/Python_Stocks | 4066f466084d11912fe3db92e76b78c4354995b4 | 00d40450faf85bff873a84c4593e307a01cc6557 | refs/heads/master | 2020-03-25T20:12:08.433562 | 2018-08-21T11:50:31 | 2018-08-21T11:50:31 | 144,120,560 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 449 | py | #!C:\Users\swphe\PycharmProjects\Stocks\venv\Scripts\python.exe
# EASY-INSTALL-ENTRY-SCRIPT: 'setuptools==28.8.0','console_scripts','easy_install'
__requires__ = 'setuptools==28.8.0'
import re
import sys
from pkg_resources import load_entry_point
if __name__ == '__main__':
sys.argv[0] = re.sub(r'(-script\.pyw?|\.exe)?$', '', sys.argv[0])
sys.exit(
load_entry_point('setuptools==28.8.0', 'console_scripts', 'easy_install')()
)
| [
"swpheus1@naver.com"
] | swpheus1@naver.com |