Schema — one record per source file:

| column | type | range / classes |
|---|---|---|
| blob_id | string | length 40 |
| directory_id | string | length 40 |
| path | string | length 3–288 |
| content_id | string | length 40 |
| detected_licenses | list | length 0–112 |
| license_type | string | 2 classes |
| repo_name | string | length 5–115 |
| snapshot_id | string | length 40 |
| revision_id | string | length 40 |
| branch_name | string | 684 classes |
| visit_date | timestamp[us] | 2015-08-06 10:31:46 to 2023-09-06 10:44:38 |
| revision_date | timestamp[us] | 1970-01-01 02:38:32 to 2037-05-03 13:00:00 |
| committer_date | timestamp[us] | 1970-01-01 02:38:32 to 2023-09-06 01:08:06 |
| github_id | int64 (nullable) | 4.92k to 681M |
| star_events_count | int64 | 0 to 209k |
| fork_events_count | int64 | 0 to 110k |
| gha_license_id | string | 22 classes |
| gha_event_created_at | timestamp[us] (nullable) | 2012-06-04 01:52:49 to 2023-09-14 21:59:50 |
| gha_created_at | timestamp[us] (nullable) | 2008-05-22 07:58:19 to 2023-08-21 12:35:19 |
| gha_language | string | 147 classes |
| src_encoding | string | 25 classes |
| language | string | 1 value |
| is_vendor | bool | 2 classes |
| is_generated | bool | 2 classes |
| length_bytes | int64 | 128 to 12.7k |
| extension | string | 142 classes |
| content | string | length 128 to 8.19k |
| authors | list | length 1 |
| author_id | string | length 1–132 |
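Nothing in the dump names the dataset, so the following is only an orientation sketch: assuming the records are materialized in a file with the schema above (the file name `records.parquet` is made up), they can be sliced with pandas.

```python
import pandas as pd

df = pd.read_parquet("records.parquet")  # hypothetical serialization of the records below

# Example: small, hand-written, permissively licensed files.
mask = (
    (df["license_type"] == "permissive")
    & ~df["is_vendor"]
    & ~df["is_generated"]
    & (df["length_bytes"] < 2048)
)
print(df.loc[mask, ["repo_name", "path", "star_events_count"]])
```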
---
repo_name: mcceTest/leetcode_py · path: /test/X13_TestRomantoInteger.py · branch_name: refs/heads/master
blob_id: 34ff077858f4ae33064ecedb7125229e30d88e37 · directory_id: 2d8ec841d75acb7ca3c3d1117c06d951e9be0169 · content_id: 3d16e1d47bbcdcf1ab7b8cd7deb848d1167d0c76 · snapshot_id: 040aee95ed23674b7e2fea899d22945b12f85981 · revision_id: eb25b3e5866b51fbac10d4686966f2c546c4696f
detected_licenses: [] · license_type: no_license · src_encoding: UTF-8 · language: Python · extension: py · length_bytes: 461 · is_vendor: false · is_generated: false
visit_date: 2021-06-27T02:04:56.856659 · revision_date: 2021-01-08T03:14:56 · committer_date: 2021-01-08T03:14:56 · github_id: 205,760,975 · star_events_count: 0 · fork_events_count: 0
gha_license_id: null · gha_event_created_at: null · gha_created_at: null · gha_language: null
content:

```python
import unittest
from X13_RomantoInteger import Solution


class TestSum(unittest.TestCase):
    def test1(self):
        sol = Solution()
        self.assertEqual(sol.romanToInt("III"), 3)
        self.assertEqual(sol.romanToInt("IV"), 4)
        self.assertEqual(sol.romanToInt("IX"), 9)
        self.assertEqual(sol.romanToInt("LVIII"), 58)
        self.assertEqual(sol.romanToInt("MCMXCIV"), 1994)


if __name__ == "__main__":
    unittest.main()
```

authors: ["zhuxuyu@gmail.com"] · author_id: zhuxuyu@gmail.com
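The test imports `Solution` from `X13_RomantoInteger`, which is not part of this record. A minimal sketch that would satisfy the test (my illustration; the repository's actual file may differ):

```python
# Hypothetical X13_RomantoInteger.py
class Solution:
    def romanToInt(self, s: str) -> int:
        values = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}
        total = 0
        for cur, nxt in zip(s, s[1:] + " "):
            # A symbol smaller than its right neighbour is subtracted (IV, IX, ...).
            if values[cur] < values.get(nxt, 0):
                total -= values[cur]
            else:
                total += values[cur]
        return total
```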
---
repo_name: pace-noge/datacamp · path: /Supervised-Learning-with-scikit-learn/04-Preprocessing-and-pipelines/04-Dropping-missing-data.py · branch_name: refs/heads/master
blob_id: 7f71b8025b0da4af24e6f067a0fed3393f846eb2 · directory_id: af259acdd0acd341370c9d5386c444da6a7a28a6 · content_id: 438c80f1a8b57e4ed94a02048b0011d9457d637e · snapshot_id: fcd544f6478040660f7149b1a37bfd957eef9747 · revision_id: eeffb8af233e7304c0f122a48e6b4f78ee7c650e
detected_licenses: [] · license_type: no_license · src_encoding: UTF-8 · language: Python · extension: py · length_bytes: 1,804 · is_vendor: false · is_generated: false
visit_date: 2020-07-04T12:41:29.635167 · revision_date: 2019-09-17T10:11:39 · committer_date: 2019-09-17T10:11:39 · github_id: 202,289,041 · star_events_count: 0 · fork_events_count: 0
gha_license_id: null · gha_event_created_at: null · gha_created_at: null · gha_language: null
content:

```python
"""
Dropping missing data
The voting dataset from Chapter 1 contained a bunch of missing values that we dealt with for you behind the scenes. Now, it's time for you to take care of these yourself!
The unprocessed dataset has been loaded into a DataFrame df. Explore it in the IPython Shell with the .head() method. You will see that there are certain data points labeled with a '?'. These denote missing values. As you saw in the video, different datasets encode missing values in different ways. Sometimes it may be a '9999', other times a 0 - real-world data can be very messy! If you're lucky, the missing values will already be encoded as NaN. We use NaN because it is an efficient and simplified way of internally representing missing data, and it lets us take advantage of pandas methods such as .dropna() and .fillna(), as well as scikit-learn's Imputation transformer Imputer().
In this exercise, your job is to convert the '?'s to NaNs, and then drop the rows that contain them from the DataFrame.
INSTRUCTION
-----------
Explore the DataFrame df in the IPython Shell. Notice how the missing value is represented.
Convert all '?' data points to np.nan.
Count the total number of NaNs using the .isnull() and .sum() methods. This has been done for you.
Drop the rows with missing values from df using .dropna().
Hit 'Submit Answer' to see how many rows were lost by dropping the missing values.
"""
# Convert '?' to NaN
df[df == '?'] = np.nan
# Print the number of NaNs
print(df.isnull().sum())
# Print shape of original DataFrame
print("Shape of Original DataFrame: {}".format(df.shape))
# Drop missing values and print shape of new DataFrame
df = df.dropna()
# Print shape of new DataFrame
print("Shape of DataFrame After Dropping All Rows with Missing Values: {}".format(df.shape))
```

authors: ["noreply@github.com"] · author_id: pace-noge.noreply@github.com
---
repo_name: cash2one/xai · path: /xai/brain/wordbase/otherforms/_schoolteachers.py · branch_name: refs/heads/master
blob_id: 6757338d0b65931a2f906bdc7f2b1f72184ecadb · directory_id: 9743d5fd24822f79c156ad112229e25adb9ed6f6 · content_id: a97e1eb48efc47de30e4d91ff8c1c06736a2b31a · snapshot_id: de7adad1758f50dd6786bf0111e71a903f039b64 · revision_id: e76f12c9f4dcf3ac1c7c08b0cc8844c0b0a104b6
detected_licenses: ["MIT"] · license_type: permissive · src_encoding: UTF-8 · language: Python · extension: py · length_bytes: 250 · is_vendor: false · is_generated: false
visit_date: 2021-01-19T12:33:54.964379 · revision_date: 2017-01-28T02:00:50 · committer_date: 2017-01-28T02:00:50 · github_id: null · star_events_count: 0 · fork_events_count: 0
gha_license_id: null · gha_event_created_at: null · gha_created_at: null · gha_language: null
content:

```python
# class header
class _SCHOOLTEACHERS():
    def __init__(self,):
        self.name = "SCHOOLTEACHERS"
        self.definitions = "schoolteacher"  # the source had a bare, undefined name here; quoted so the module imports
        self.parents = []
        self.childen = []
        self.properties = []
        self.jsondata = {}

        self.basic = ['schoolteacher']
```

authors: ["xingwang1991@gmail.com"] · author_id: xingwang1991@gmail.com
---
repo_name: ziyuan-shen/leetcode_algorithm_python_solution · path: /medium/ex1381.py · branch_name: refs/heads/master
blob_id: 94a2a1fa20e97c51852243a2a81a4149bdffabba · directory_id: fb54704d4a6f9475f42b85d8c470e3425b37dcae · content_id: 3b090451fda3eb43df141d4f0235c64721da852a · snapshot_id: b2784071a94b04e687fd536b57e8d5a9ec1a4c05 · revision_id: 920b65db80031fad45d495431eda8d3fb4ef06e5
detected_licenses: [] · license_type: no_license · src_encoding: UTF-8 · language: Python · extension: py · length_bytes: 638 · is_vendor: false · is_generated: false
visit_date: 2021-06-27T05:19:47.774044 · revision_date: 2021-02-04T09:47:30 · committer_date: 2021-02-04T09:47:30 · github_id: 210,991,299 · star_events_count: 2 · fork_events_count: 0
gha_license_id: null · gha_event_created_at: null · gha_created_at: null · gha_language: null
content:

```python
class CustomStack:

    def __init__(self, maxSize: int):
        self.stack = []
        self.maxSize = maxSize

    def push(self, x: int) -> None:
        if len(self.stack) < self.maxSize:
            self.stack.append(x)

    def pop(self) -> int:
        if self.stack:
            return self.stack.pop()
        else:
            return -1

    def increment(self, k: int, val: int) -> None:
        for i in range(min(k, len(self.stack))):
            self.stack[i] += val


# Your CustomStack object will be instantiated and called as such:
# obj = CustomStack(maxSize)
# obj.push(x)
# param_2 = obj.pop()
# obj.increment(k,val)
```

authors: ["ziyuan.shen@duke.edu"] · author_id: ziyuan.shen@duke.edu
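A short usage sketch of the class above; the expected values in the comments follow directly from the implementation:

```python
stack = CustomStack(3)
stack.push(1)
stack.push(2)
print(stack.pop())       # 2
stack.push(2)
stack.push(3)
stack.push(4)            # ignored: the stack is already at maxSize
stack.increment(5, 100)  # increments the bottom min(5, 3) elements
print(stack.stack)       # [101, 102, 103]
```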
---
repo_name: playbar/pylearn · path: /py3standardlib/algorithms/contextlib/contextlib_exitstack_pop_all.py · branch_name: refs/heads/master
blob_id: 95323488f1a2f39dd31806aae172ae8687c22cab · directory_id: 39fe41a33c00ea6dc8e04c61842c3764fdd07ff1 · content_id: 68ab294ea41ed5242c5100523e6e1a684725e4f4 · snapshot_id: f9639ffa1848a9db2aba52977de6c7167828b317 · revision_id: 8bcd1b5a043cb19cde1631947eb128d9c05c259d
detected_licenses: [] · license_type: no_license · src_encoding: UTF-8 · language: Python · extension: py · length_bytes: 1,251 · is_vendor: false · is_generated: false
visit_date: 2021-06-12T01:51:33.480049 · revision_date: 2021-03-31T12:16:14 · committer_date: 2021-03-31T12:16:14 · github_id: 147,980,595 · star_events_count: 1 · fork_events_count: 0
gha_license_id: null · gha_event_created_at: null · gha_created_at: null · gha_language: null
content:

```python
# contextlib_exitstack_pop_all.py
import contextlib

from contextlib_context_managers import *


def variable_stack(contexts):
    with contextlib.ExitStack() as stack:
        for c in contexts:
            stack.enter_context(c)
        # Return the close() method of a new stack as a clean-up
        # function.
        return stack.pop_all().close
    # Explicitly return None, indicating that the ExitStack could
    # not be initialized cleanly but that cleanup has already
    # occurred.
    return None


print('No errors:')
cleaner = variable_stack([
    HandleError(1),
    HandleError(2),
])
cleaner()

print('\nHandled error building context manager stack:')
try:
    cleaner = variable_stack([
        HandleError(1),
        ErrorOnEnter(2),
    ])
except RuntimeError as err:
    print('caught error {}'.format(err))
else:
    if cleaner is not None:
        cleaner()
    else:
        print('no cleaner returned')

print('\nUnhandled error building context manager stack:')
try:
    cleaner = variable_stack([
        PassError(1),
        ErrorOnEnter(2),
    ])
except RuntimeError as err:
    print('caught error {}'.format(err))
else:
    if cleaner is not None:
        cleaner()
    else:
        print('no cleaner returned')
```

authors: ["hgl868@126.com"] · author_id: hgl868@126.com
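The example imports `HandleError`, `ErrorOnEnter`, and `PassError` from `contextlib_context_managers`, which is not included in this record. A minimal sketch with the behaviour the example relies on (handling vs. propagating errors during unwinding, and failing inside `__enter__`); the real module may differ:

```python
# Hypothetical contextlib_context_managers.py, inferred from how the example uses it.
class ContextManager:
    def __init__(self, i):
        self.i = i

    def __enter__(self):
        print('entering', self.i)
        return self


class HandleError(ContextManager):
    def __exit__(self, exc_type, exc, tb):
        print('exiting', self.i, 'handled:', exc)
        return True  # swallow any error raised while the stack unwinds


class PassError(ContextManager):
    def __exit__(self, exc_type, exc, tb):
        print('exiting', self.i, 'passing:', exc)
        return False  # let the error propagate


class ErrorOnEnter(ContextManager):
    def __enter__(self):
        raise RuntimeError('from {}'.format(self.i))

    def __exit__(self, exc_type, exc, tb):
        print('never reached', self.i)
        return False
```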
---
repo_name: jinmin527/learning-cuda-trt · path: /tensorrt-basic-1.10-3rd-plugin/TensorRT-main/tools/Polygraphy/examples/cli/run/05_comparing_with_custom_input_data/data_loader.py · branch_name: refs/heads/main
blob_id: cbe4b5b0a93c62e34a4f64d8d65fcb3619111147 · directory_id: 1b5802806cdf2c3b6f57a7b826c3e064aac51d98 · content_id: 4284ddc1e5d6dbe661a164e636b3c38257bcee12 · snapshot_id: def70b3b1b23b421ab7844237ce39ca1f176b297 · revision_id: 81438d602344c977ef3cab71bd04995c1834e51c
detected_licenses: ["Apache-2.0", "MIT", "BSD-3-Clause", "ISC", "BSD-2-Clause"] · license_type: permissive · src_encoding: UTF-8 · language: Python · extension: py · length_bytes: 1,709 · is_vendor: false · is_generated: false
visit_date: 2023-05-23T08:56:09.205628 · revision_date: 2022-07-24T02:48:24 · committer_date: 2022-07-24T02:48:24 · github_id: 517,213,903 · star_events_count: 36 · fork_events_count: 18
gha_license_id: null · gha_event_created_at: 2022-07-24T03:05:05 · gha_created_at: 2022-07-24T03:05:05 · gha_language: null
content:

```python
#!/usr/bin/env python3
#
# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""
Demonstrates two methods of loading custom input data in Polygraphy:
Option 1: Defines a `load_data` function that returns a generator yielding
feed_dicts so that this script can be used as the argument for
the --data-loader-script command-line parameter.
Option 2: Writes input data to a JSON file that can be used as the argument for
the --load-inputs command-line parameter.
"""
import numpy as np
from polygraphy.json import save_json
INPUT_SHAPE = (1, 2, 28, 28)
# Option 1: Define a function that will yield feed_dicts (i.e. Dict[str, np.ndarray])
def load_data():
    for _ in range(5):
        yield {"x": np.ones(shape=INPUT_SHAPE, dtype=np.float32)}  # Still totally real data
# Option 2: Create a JSON file containing the input data using the `save_json()` helper.
# The input to `save_json()` should have type: List[Dict[str, np.ndarray]].
# For convenience, we'll reuse our `load_data()` implementation to generate the list.
input_data = list(load_data())
save_json(input_data, "custom_inputs.json", description="custom input data")
```

authors: ["dujw@deepblueai.com"] · author_id: dujw@deepblueai.com
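Per its own docstring, this script is meant to be used with the Polygraphy CLI: either passed directly as `--data-loader-script data_loader.py` (Option 1), or run once to produce `custom_inputs.json` and then supplied via `--load-inputs custom_inputs.json` (Option 2).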
---
repo_name: reductus/reductus · path: /provisioning/ubuntu-trusty/config.py · branch_name: refs/heads/main
blob_id: 4e746f96da01b3c31cde8ca5fd5703e427fb4f2d · directory_id: e2a7f0ac4e5369e7e924029c1650a986716e78fc · content_id: 698b06100dd24a2e25e9602b22025650824fecf2 · snapshot_id: f89566de60cda387fc20b1aba4210528c3bd535b · revision_id: 07e865a08396b42fa7ae035de97628bc995506bc
detected_licenses: ["Unlicense"] · license_type: permissive · src_encoding: UTF-8 · language: Python · extension: py · length_bytes: 863 · is_vendor: false · is_generated: false
visit_date: 2023-05-22T15:08:10.730577 · revision_date: 2023-05-12T16:08:49 · committer_date: 2023-05-12T16:08:49 · github_id: 1,320,973 · star_events_count: 7 · fork_events_count: 12
gha_license_id: Unlicense · gha_event_created_at: 2022-09-30T03:23:50 · gha_created_at: 2011-02-02T16:46:54 · gha_language: Python
content:

```python
#############################################################
# rename or copy this file to config.py if you make changes #
#############################################################
# change this to your fully-qualified domain name to run a
# remote server. The default value of localhost will
# only allow connections from the same computer.
#jsonrpc_servername = "h3.umd.edu"
jsonrpc_servername = "localhost"
jsonrpc_port = 8001
http_port = 8000
serve_staticfiles = False
#use_redis = True
use_diskcache = True
diskcache_params = {"size_limit": int(4*2**30), "shards": 5}
use_msgpack = True
data_sources = [
    {
        "name": "ncnr",
        "url": "https://www.ncnr.nist.gov/pub/",
        "start_path": "ncnrdata"
    },
]
file_helper_url = {
    "ncnr": "https://www.ncnr.nist.gov/ipeek/listftpfiles.php"
}
instruments = ["refl", "ospec", "sans"]
```

authors: ["brian.maranville@nist.gov"] · author_id: brian.maranville@nist.gov
---
repo_name: Ngahu/Making-web-calls · path: /calls/bin/registry-read.py · branch_name: refs/heads/master
blob_id: 8105b5e240ed50ebab8d4237de4287212a077d45 · directory_id: 8c55d93116982758740665fdf93a57d7668d62f3 · content_id: 077e111f7c9656f2119fe7a8ed3124acc0c3e36b · snapshot_id: 42971fbb5835a46237854d45702f7feb50dd9314 · revision_id: df7e0d9032db914b73a9f19a73be18453e524f6e
detected_licenses: [] · license_type: no_license · src_encoding: UTF-8 · language: Python · extension: py · length_bytes: 4,984 · is_vendor: false · is_generated: false
visit_date: 2021-07-11T06:20:36.953011 · revision_date: 2016-09-22T09:22:24 · committer_date: 2016-09-22T09:22:24 · github_id: 68,893,415 · star_events_count: 0 · fork_events_count: 1
gha_license_id: null · gha_event_created_at: 2020-07-26T08:34:38 · gha_created_at: 2016-09-22T06:55:32 · gha_language: Python
content:

```python
#!/SHARED-THINGS/ONGOING/making calls/calls/bin/python
# Copyright (c) 2003-2015 CORE Security Technologies)
#
# This software is provided under under a slightly modified version
# of the Apache Software License. See the accompanying LICENSE file
# for more information.
#
# Author: Alberto Solino (@agsolino)
#
# Description: A Windows Registry Reader Example
#
# Reference for:
# winregistry.py
#
import impacket
from impacket.examples import logger
from impacket import version
from impacket import winregistry
import sys
import argparse
import ntpath
def bootKey(reg):
    baseClass = 'ControlSet001\\Control\\Lsa\\'
    keys = ['JD','Skew1','GBG','Data']
    tmpKey = ''
    for key in keys:
        tmpKey = tmpKey + reg.getClass(baseClass + key).decode('utf-16le')[:8].decode('hex')
    transforms = [ 8, 5, 4, 2, 11, 9, 13, 3, 0, 6, 1, 12, 14, 10, 15, 7 ]
    syskey = ''
    for i in xrange(len(tmpKey)):
        syskey += tmpKey[transforms[i]]
    print syskey.encode('hex')

def getClass(reg, className):
    regKey = ntpath.dirname(className)
    regClass = ntpath.basename(className)
    value = reg.getClass(className)
    if value is None:
        return
    print "[%s]" % regKey
    print "Value for Class %s: \n" % regClass,
    winregistry.hexdump(value,'   ')

def getValue(reg, keyValue):
    regKey = ntpath.dirname(keyValue)
    regValue = ntpath.basename(keyValue)
    value = reg.getValue(keyValue)
    print "[%s]\n" % regKey
    if value is None:
        return
    print "Value for %s:\n " % regValue,
    reg.printValue(value[0],value[1])

def enumValues(reg, searchKey):
    key = reg.findKey(searchKey)
    if key is None:
        return
    print "[%s]\n" % searchKey
    values = reg.enumValues(key)
    for value in values:
        print "  %-30s: " % (value),
        data = reg.getValue('%s\\%s'%(searchKey,value))
        # Special case for binary string.. so it looks better formatted
        if data[0] == winregistry.REG_BINARY:
            print ''
            reg.printValue(data[0],data[1])
            print ''
        else:
            reg.printValue(data[0],data[1])

def enumKey(reg, searchKey, isRecursive, indent='  '):
    parentKey = reg.findKey(searchKey)
    if parentKey is None:
        return
    keys = reg.enumKey(parentKey)
    for key in keys:
        print "%s%s" %(indent, key)
        if isRecursive is True:
            if searchKey == '\\':
                enumKey(reg, '\\%s'%(key), isRecursive, indent+'  ')
            else:
                enumKey(reg, '%s\\%s'%(searchKey,key), isRecursive, indent+'  ')

def walk(reg, keyName):
    return reg.walk(keyName)

def main():
    print version.BANNER
    parser = argparse.ArgumentParser(add_help = True, description = "Reads data from registry hives.")
    parser.add_argument('hive', action='store', help='registry hive to open')
    subparsers = parser.add_subparsers(help='actions', dest='action')

    # A enum_key command
    enumkey_parser = subparsers.add_parser('enum_key', help='enumerates the subkeys of the specified open registry key')
    enumkey_parser.add_argument('-name', action='store', required=True, help='registry key')
    enumkey_parser.add_argument('-recursive', dest='recursive', action='store_true', required=False, help='recursive search (default False)')

    # A enum_values command
    enumvalues_parser = subparsers.add_parser('enum_values', help='enumerates the values for the specified open registry key')
    enumvalues_parser.add_argument('-name', action='store', required=True, help='registry key')

    # A get_value command
    getvalue_parser = subparsers.add_parser('get_value', help='retrieves the data for the specified registry value')
    getvalue_parser.add_argument('-name', action='store', required=True, help='registry value')

    # A get_class command
    getclass_parser = subparsers.add_parser('get_class', help='retrieves the data for the specified registry class')
    getclass_parser.add_argument('-name', action='store', required=True, help='registry class name')

    # A walk command
    walk_parser = subparsers.add_parser('walk', help='walks the registry from the name node down')
    walk_parser.add_argument('-name', action='store', required=True, help='registry class name to start walking down from')

    if len(sys.argv)==1:
        parser.print_help()
        sys.exit(1)

    options = parser.parse_args()
    reg = winregistry.Registry(options.hive)
    if options.action.upper() == 'ENUM_KEY':
        print "[%s]" % options.name
        enumKey(reg, options.name, options.recursive)
    elif options.action.upper() == 'ENUM_VALUES':
        enumValues(reg, options.name)
    elif options.action.upper() == 'GET_VALUE':
        getValue(reg, options.name)
    elif options.action.upper() == 'GET_CLASS':
        getClass(reg, options.name)
    elif options.action.upper() == 'WALK':
        walk(reg, options.name)
    reg.close()

if __name__ == "__main__":
    main()
```

authors: ["jamaalaraheem@gmail.com"] · author_id: jamaalaraheem@gmail.com
---
repo_name: jmabry/pyaf · path: /tests/artificial/transf_None/trend_PolyTrend/cycle_0/ar_/test_artificial_32_None_PolyTrend_0__20.py · branch_name: refs/heads/master
blob_id: f632da1a09f68ac9dc98d92c87a45c3d48be3d42 · directory_id: d2c4934325f5ddd567963e7bd2bdc0673f92bc40 · content_id: a4acd452378510898a95c0d7c7399843cf84e0e1 · snapshot_id: 797acdd585842474ff4ae1d9db5606877252d9b8 · revision_id: afbc15a851a2445a7824bf255af612dc429265af
detected_licenses: ["BSD-3-Clause", "LicenseRef-scancode-unknown-license-reference"] · license_type: permissive · src_encoding: UTF-8 · language: Python · extension: py · length_bytes: 263 · is_vendor: false · is_generated: false
visit_date: 2020-03-20T02:14:12.597970 · revision_date: 2018-12-17T22:08:11 · committer_date: 2018-12-17T22:08:11 · github_id: 137,104,552 · star_events_count: 0 · fork_events_count: 0
gha_license_id: BSD-3-Clause · gha_event_created_at: 2018-12-17T22:08:12 · gha_created_at: 2018-06-12T17:15:43 · gha_language: Python
content:

```python
import pyaf.Bench.TS_datasets as tsds
import pyaf.tests.artificial.process_artificial_dataset as art
art.process_dataset(N = 32 , FREQ = 'D', seed = 0, trendtype = "PolyTrend", cycle_length = 0, transform = "None", sigma = 0.0, exog_count = 20, ar_order = 0);
```

authors: ["antoine.carme@laposte.net"] · author_id: antoine.carme@laposte.net
---
repo_name: diegoami/bankdomain_PY · path: /old_scripts/show_corpus.py · branch_name: refs/heads/master
blob_id: 2c5fb9bc6be3248ac3b35d7d12190b2ea8d205a5 · directory_id: b9efe70d12c2cbd55065d02e974f5725534583ee · content_id: b687f9489733eacbea05242368defb71cc58c4e7 · snapshot_id: 5089581ea7b7db6233243dff305488ff27dc8e90 · revision_id: 83816e1beb96d3e9e0f746bec7f9db9521f32ee7
detected_licenses: [] · license_type: no_license · src_encoding: UTF-8 · language: Python · extension: py · length_bytes: 996 · is_vendor: false · is_generated: false
visit_date: 2022-12-17T05:05:13.557911 · revision_date: 2020-06-03T22:19:44 · committer_date: 2020-06-03T22:19:44 · github_id: 131,530,574 · star_events_count: 0 · fork_events_count: 0
gha_license_id: null · gha_event_created_at: 2022-12-08T01:30:27 · gha_created_at: 2018-04-29T21:12:25 · gha_language: HTML
content:

```python
import yaml
from repository.mongo_ops import copy_into_qa_documents, split_qa_documents_into_questions, print_all_questions, iterate_questions_in_mongo
from preprocess.preprocessor import create_corpus, load_corpus, print_corpus
from language.custom_lemmas import my_component
from textacy.corpus import Corpus
import spacy
if __name__ == '__main__':
    config = yaml.safe_load(open("config.yml"))
    data_dir = config['data_dir']
    mongo_connection = config['mongo_connection']
    corpus_out_dir = config['corpus_dir']
    corpus_filename = config['corpus_filename']
    corpus_proc_filename = config['corpus_proc_filename']

    corpus = load_corpus(corpus_out_dir+'/'+corpus_proc_filename)
    new_corpus = Corpus('de')
    new_corpus.spacy_lang.add_pipe(my_component, name='print_length', last=True)
    new_corpus.add_texts([doc.text for doc in corpus])
    #print_corpus(corpus)
    # corpus.spacy_vocab
    print(new_corpus.word_doc_freqs(normalize=u'lemma', as_strings=True))
```

authors: ["diego.amicabile@gmail.com"] · author_id: diego.amicabile@gmail.com
---
repo_name: tabletenniser/leetcode · path: /720_longest_word_in_dictionary.py · branch_name: refs/heads/master
blob_id: 08da355ed5009788d673daf96c0f5f8075c62524 · directory_id: 77ab53380f74c33bb3aacee8effc0e186b63c3d6 · content_id: b1ef8b4f98d24725eeb93e621ed887835df90cb5 · snapshot_id: 8e3aa1b4df1b79364eb5ca3a97db57e0371250b6 · revision_id: d3ebbfe2e4ab87d5b44bc534984dfa453e34efbd
detected_licenses: [] · license_type: no_license · src_encoding: UTF-8 · language: Python · extension: py · length_bytes: 1,631 · is_vendor: false · is_generated: false
visit_date: 2023-02-23T18:14:31.577455 · revision_date: 2023-02-06T07:09:54 · committer_date: 2023-02-06T07:09:54 · github_id: 94,496,986 · star_events_count: 2 · fork_events_count: 0
gha_license_id: null · gha_event_created_at: null · gha_created_at: null · gha_language: null
content:

```python
'''
Given a list of strings words representing an English Dictionary, find the longest word in words that can be built one character at a time by other words in words. If there is more than one possible answer, return the longest word with the smallest lexicographical order.
If there is no answer, return the empty string.
Example 1:
Input:
words = ["w","wo","wor","worl", "world"]
Output: "world"
Explanation:
The word "world" can be built one character at a time by "w", "wo", "wor", and "worl".
Example 2:
Input:
words = ["a", "banana", "app", "appl", "ap", "apply", "apple"]
Output: "apple"
Explanation:
Both "apply" and "apple" can be built from other words in the dictionary. However, "apple" is lexicographically smaller than "apply".
Note:
All the strings in the input will only contain lowercase letters.
The length of words will be in the range [1, 1000].
The length of words[i] will be in the range [1, 30].
'''
class Solution(object):
    def longestWord(self, words):
        """
        :type words: List[str]
        :rtype: str
        """
        words.sort()
        w_dict = set()
        result = ''
        for w in words:
            w_dict.add(w)
        for w in words:
            can_be_built = True
            for i in xrange(1, len(w)):
                if w[:i] not in w_dict:
                    can_be_built = False
                    break
            if can_be_built and len(w) > len(result):
                result = w
        return result

s = Solution()
print s.longestWord(["w","wo","wor","worl", "world"])
print s.longestWord(["a", "banana", "app", "appl", "ap", "apply", "apple"])
```

authors: ["tabletenniser@gmail.com"] · author_id: tabletenniser@gmail.com
---
repo_name: Fa67/saleor-shop · path: /myvenv/Lib/site-packages/phonenumbers/data/region_SJ.py · branch_name: refs/heads/master
blob_id: ae998783f6f09edee5eb0409239e0811735c2f57 · directory_id: 141b42d9d72636c869ff2ce7a2a9f7b9b24f508b · content_id: 30448b9dce8f8518c9cc53db0649a80ffccfe27c · snapshot_id: 105e1147e60396ddab6f006337436dcbf18e8fe1 · revision_id: 76110349162c54c8bfcae61983bb59ba8fb0f778
detected_licenses: ["BSD-3-Clause"] · license_type: permissive · src_encoding: UTF-8 · language: Python · extension: py · length_bytes: 1,464 · is_vendor: false · is_generated: false
visit_date: 2021-06-08T23:51:12.251457 · revision_date: 2018-07-24T08:14:33 · committer_date: 2018-07-24T08:14:33 · github_id: 168,561,915 · star_events_count: 1 · fork_events_count: 0
gha_license_id: BSD-3-Clause · gha_event_created_at: 2021-04-18T07:59:12 · gha_created_at: 2019-01-31T17:00:39 · gha_language: Python
content:

```python
"""Auto-generated file, do not edit by hand. SJ metadata"""
from ..phonemetadata import NumberFormat, PhoneNumberDesc, PhoneMetadata

PHONE_METADATA_SJ = PhoneMetadata(id='SJ', country_code=47, international_prefix='00',
    general_desc=PhoneNumberDesc(national_number_pattern='0\\d{4}|[45789]\\d{7}', possible_length=(5, 8)),
    fixed_line=PhoneNumberDesc(national_number_pattern='79\\d{6}', example_number='79123456', possible_length=(8,)),
    mobile=PhoneNumberDesc(national_number_pattern='(?:4[015-8]|5[89]|9\\d)\\d{6}', example_number='41234567', possible_length=(8,)),
    toll_free=PhoneNumberDesc(national_number_pattern='80[01]\\d{5}', example_number='80012345', possible_length=(8,)),
    premium_rate=PhoneNumberDesc(national_number_pattern='82[09]\\d{5}', example_number='82012345', possible_length=(8,)),
    shared_cost=PhoneNumberDesc(national_number_pattern='810(?:0[0-6]|[2-8]\\d)\\d{3}', example_number='81021234', possible_length=(8,)),
    personal_number=PhoneNumberDesc(national_number_pattern='880\\d{5}', example_number='88012345', possible_length=(8,)),
    voip=PhoneNumberDesc(national_number_pattern='85[0-5]\\d{5}', example_number='85012345', possible_length=(8,)),
    uan=PhoneNumberDesc(national_number_pattern='0\\d{4}|81(?:0(?:0[7-9]|1\\d)|5\\d{2})\\d{3}', example_number='01234', possible_length=(5, 8)),
    voicemail=PhoneNumberDesc(national_number_pattern='81[23]\\d{5}', example_number='81212345', possible_length=(8,)))
```

authors: ["gruzdevasch@gmail.com"] · author_id: gruzdevasch@gmail.com
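This is auto-generated metadata for the `phonenumbers` package (region SJ: Svalbard and Jan Mayen, country code +47). As an illustrative sketch of how such metadata is consumed through the package's public API, using the fixed-line example number from the data above (the expected outputs are my reading of the patterns, not something verified here):

```python
import phonenumbers

# '79123456' is the fixed_line example_number in the metadata above; +47 is the country code.
n = phonenumbers.parse("+47 79 12 34 56")
print(phonenumbers.is_valid_number(n))         # expected True: matches '79\d{6}'
print(phonenumbers.region_code_for_number(n))  # expected 'SJ' for 79-prefixed +47 numbers
```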
---
repo_name: fwparkercode/IntroProgrammingNotes · path: /Notes/Fall2019/Ch4C.py · branch_name: refs/heads/master
blob_id: 4b62a941576cc59defc792dd09df58d2eb7e386b · directory_id: ef11a06c906f37fa98c3de38aa1307110269b2f4 · content_id: 6d229ca716a445032f72e9b2a630d709bd76b422 · snapshot_id: 0d389d2d281122303da48ab2c1648750e594c04f · revision_id: ad64777208d2f84f87e4ab45695adbfe073eae18
detected_licenses: [] · license_type: no_license · src_encoding: UTF-8 · language: Python · extension: py · length_bytes: 2,842 · is_vendor: false · is_generated: false
visit_date: 2021-07-16T07:13:55.665243 · revision_date: 2020-06-09T12:45:38 · committer_date: 2020-06-09T12:45:38 · github_id: 170,581,913 · star_events_count: 3 · fork_events_count: 1
gha_license_id: null · gha_event_created_at: null · gha_created_at: null · gha_language: null
content:

```python
# Chapter 4 - Loops and Random Numbers

# Random numbers
import random

# Randrange function - random.randrange(start, end, count_by)
print(random.randrange(10))          # generates a random int from 0 to 9
print(random.randrange(5, 10))       # random int from 5 to 9
print(random.randrange(50, 100, 5))  # random int from 50 to 99 counting by 5s

# make a random number between -10 and - 5
print(random.randrange(-10, -4))

# random even number from 28 to 36
print(random.randrange(28, 37, 2))

# random function - random.random()
# generates a random float from 0 to 1
print(random.random())

# to generate any other float use random.random() * spread + offset
# random float from 0 to 10
print(random.random() * 10)

# random float from 10 to 15
print(random.random() * 5 + 10)

# random float from -5 to 0
print(random.random() * 5 - 5)

# FOR LOOPS
# for the example below: i is the index, range is from 0 to 9
for i in range(10):
    print("Taco Tuesday")
    print("and Quesadillas")
print("Pulled Pork Wednesday")

# print twenty random integers from 0 to 100
for i in range(20):
    print(random.randrange(101))

# Range function - range(start, end, count_by)
# works like random.randrange()
for i in range(10):
    print(i)

for i in range(1, 11):
    print(i)

for i in range(0, 101, 2):
    print(i)

for i in range(50, 10, -5):
    print(i)

# Nested loops
for i in range(3):
    print("a")
for j in range(3):
    print("b")

print("\n\n")

for i in range(3):
    print("a")
    for j in range(3):
        print("b")

'''
for hours in range(24):
    for minutes in range(60):
        for seconds in range(60):
            print(hours, minutes, seconds)
'''

for row in range(1, 21):
    for seat in range(1, 21):
        print("row", row, "seat", seat)

# Add all the numbers from 1 to 100
total = 0
for i in range(1, 101):
    total += i
print(total)

# WHILE Loops
# use a FOR loop if you can.
# use a WHILE loop when you want to keep going until a condition exists

# count from 1 to 10
for i in range(1, 11):
    print(i)

i = 1
while i <= 10:
    print(i)
    i += 1

# print multiples of 7 from 21 to 42
for i in range(21, 43, 7):
    print(i)

i = 21
while i <= 42:
    print(i)
    i += 7

# what are all of the squared numbers under 100000
n = 1
while n ** 2 < 100000:
    print(n, "squared is", n ** 2)
    n += 1

# Beware the infinite loop
'''
n = 10
while n == 10:
    print("TEN")
'''
'''
n = 10
while n > 0:
    print(n)
    n *= 2
'''
'''
while 4:
    print("AHHHH")
'''

# GAME LOOP
done = False
print("Welcome to Dragon Quest 2!")
while not done:
    answer = input("A dragon is blocking the exit. Do you want to wake it? ")
    if answer.lower() == "yes" or answer.lower() == "y":
        print("The dragon eats you!")
        done = True
print("Thank you for playing")
```

authors: ["alee@fwparker.org"] · author_id: alee@fwparker.org
---
repo_name: EastAgile/PT-tweaks · path: /pttweaks/activity/tests/test_models.py · branch_name: refs/heads/master
blob_id: bfec5c587e199e8661352e09e76eb119ef9d4709 · directory_id: 7d1d30be1995f2780cbf8999f1891e967936c090 · content_id: f91ed5f80aebe191788ab9c3ad566ab2ce0f26ee · snapshot_id: 118274f70c198fb8885f4a42136a5a1bdefc4e51 · revision_id: 7d5742862e42672eb77441ef7a7250d7a3a9359e
detected_licenses: [] · license_type: no_license · src_encoding: UTF-8 · language: Python · extension: py · length_bytes: 319 · is_vendor: false · is_generated: false
visit_date: 2022-12-10T20:07:04.859288 · revision_date: 2019-08-08T05:29:41 · committer_date: 2019-08-08T05:29:41 · github_id: 164,597,129 · star_events_count: 0 · fork_events_count: 1
gha_license_id: null · gha_event_created_at: 2022-12-08T05:49:40 · gha_created_at: 2019-01-08T08:06:59 · gha_language: Python
content:

```python
from django.test import SimpleTestCase
from robber import expect

from activity.factories import ActivityChangeLogFactory


class ActivityChangeLogTestCase(SimpleTestCase):
    def test_model_str(self):
        activity = ActivityChangeLogFactory.build(story_id='123123')
        expect(str(activity)).to.eq('123123')
```

authors: ["open-source@eastagile.com"] · author_id: open-source@eastagile.com
---
repo_name: memobijou/erpghost · path: /mission/migrations/0121_picklist_pick_order.py · branch_name: refs/heads/master
blob_id: d2d30ed9ef98512ff7d30f5c9754fa535c698414 · directory_id: c838c53ec5de94af57696f11db08f332ff2a65d8 · content_id: 042a0e8203c7fedb1905237711b5f99b55874583 · snapshot_id: 4a9af80b3c948a4d7bb20d26e5afb01b40efbab5 · revision_id: c0ee90718778bc2b771b8078d9c08e038ae59284
detected_licenses: [] · license_type: no_license · src_encoding: UTF-8 · language: Python · extension: py · length_bytes: 557 · is_vendor: false · is_generated: false
visit_date: 2022-12-11T14:47:59.048889 · revision_date: 2019-01-28T02:30:40 · committer_date: 2019-01-28T02:30:40 · github_id: 113,774,918 · star_events_count: 1 · fork_events_count: 1
gha_license_id: null · gha_event_created_at: 2022-11-22T02:02:41 · gha_created_at: 2017-12-10T18:53:41 · gha_language: Python
content:

```python
# -*- coding: utf-8 -*-
# Generated by Django 1.11.2 on 2018-08-18 14:37
from __future__ import unicode_literals

from django.db import migrations, models
import django.db.models.deletion


class Migration(migrations.Migration):

    dependencies = [
        ('mission', '0120_pickorder'),
    ]

    operations = [
        migrations.AddField(
            model_name='picklist',
            name='pick_order',
            field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='mission.PickOrder'),
        ),
    ]
```

authors: ["mbijou@live.de"] · author_id: mbijou@live.de
---
repo_name: colshag/ANW · path: /anwmisc/anw-pyui/Packages/anwp/gui/configinfo.py · branch_name: refs/heads/master
blob_id: 16cbf16dc6b7fe17bb9032ee14ac7d326eeaced8 · directory_id: 3a570384a3fa9c4c7979d33b182556e1c637e9eb · content_id: 5c7bf27ee02e7b8c9821e1b50aacc8cfcb5713f6 · snapshot_id: 56a028af5042db92b5ead641dc542fcb4533344e · revision_id: 46948d8d18a0639185dd4ffcffde126914991553
detected_licenses: [] · license_type: no_license · src_encoding: UTF-8 · language: Python · extension: py · length_bytes: 7,603 · is_vendor: false · is_generated: false
visit_date: 2020-03-27T00:22:49.409109 · revision_date: 2018-10-27T06:37:04 · committer_date: 2018-10-27T06:37:04 · github_id: 145,618,125 · star_events_count: 2 · fork_events_count: 0
gha_license_id: null · gha_event_created_at: null · gha_created_at: null · gha_language: null
content:

```python
# ---------------------------------------------------------------------------
# Armada Net Wars (ANW)
# configinfo.py
# Written by Chris Lewis
# ---------------------------------------------------------------------------
# This panel Displays User Config information
# ---------------------------------------------------------------------------
import pyui
import guibase
import anwp.func.globals
class ConfigInfoFrame(guibase.BaseFrame):
    """Displays User Config Information"""
    def __init__(self, mode, app, title='User Configuration'):
        self.app = app
        self.width = 1024
        try:
            self.height = (app.height - mode.mainMenu.height - mode.mainFooter.height - 40)
        except:
            self.height = (app.height - 120)
        try:
            y = (mode.mainMenu.height)
        except:
            y = 40
        x = 0
        guibase.BaseFrame.__init__(self, mode, x, y, self.width, self.height, title)
        self.setPanel(ConfigInfoPanel(self))

class ConfigInfoPanel(guibase.BasePanel):
    """Panel for User Config information"""
    def __init__(self, frame):
        guibase.BasePanel.__init__(self, frame)
        numExtend = 1
        x = (self.frame.app.height - 768) / (22 * numExtend)
        cells = 28 + (numExtend * x)
        self.setLayout(pyui.layouts.TableLayoutManager(8, cells))
        # subject title
        self.pctEmpire = pyui.widgets.Picture('')
        self.addChild(self.pctEmpire, (0, 0, 1, 3))
        self.lblTitle = pyui.widgets.Label(text='', type=1)
        self.addChild(self.lblTitle, (1, 1, 3, 1))
        self.btnSurrender = pyui.widgets.Button('Surrender Game', self.onSurrender)
        self.addChild(self.btnSurrender, (6, 1, 2, 1))
        n = 4
        self.lbl = pyui.widgets.Label(text='CHANGE EMPIRE INFO:', type=1)
        self.addChild(self.lbl, (0, n, 4, 1))
        self.lbl = pyui.widgets.Label(text='Email Address:', type=2)
        self.addChild(self.lbl, (0, n+1, 2, 1))
        self.txtEmail = pyui.widgets.Edit('',50)
        self.addChild(self.txtEmail, (2, n+1, 4, 1))
        self.btnEmail = pyui.widgets.Button('Change Email', self.onChangeEmail)
        self.addChild(self.btnEmail, (6, n+1, 2, 1))
        self.lbl = pyui.widgets.Label(text='Login Password:', type=2)
        self.addChild(self.lbl, (0, n+2, 2, 1))
        self.txtPassword = pyui.widgets.Edit('',20)
        self.addChild(self.txtPassword, (2, n+2, 2, 1))
        self.btnEmail = pyui.widgets.Button('Change Password', self.onChangePassword)
        self.addChild(self.btnEmail, (6, n+2, 2, 1))
        # starship captains
        n = n+4
        self.lbl = pyui.widgets.Label(text='SELECT STARSHIP CAPTAIN:', type=1)
        self.addChild(self.lbl, (0, n, 4, 1))
        self.lstCaptains = pyui.widgets.ListBox(self.onCaptainSelected,None,100,100,0)
        self.addChild(self.lstCaptains, (0, n+1, 8, 16+x))
        n = n+18+x
        self.lbl = pyui.widgets.Label(text='Selected Captain Name:', type=2)
        self.addChild(self.lbl, (0, n, 2, 1))
        self.txtName = pyui.widgets.Edit('',20)
        self.addChild(self.txtName, (2, n, 2, 1))
        self.btnName = pyui.widgets.Button('Change Captain Name', self.onChangeName)
        self.addChild(self.btnName, (4, n, 2, 1))
        self.pack
        self.populate()

    def buildCaptainsData(self):
        """Display all Captains in Empire"""
        d = {}
        # sort captains by experience level
        ##captains = anwp.func.funcs.sortDictByChildObjValue(self.frame.mode.game.myCaptains, 'experience', True, {})
        for captainID, myCaptainDict in self.frame.mode.game.myCaptains.iteritems():
            d[captainID] = '%s - RANK:%s' % (myCaptainDict['name'], myCaptainDict['rank'])
        return d

    def onCaptainSelected(self, item):
        """Select item from List"""
        if not item:
            self.btnName.disable()
        else:
            if self.lstCaptains.selected <> -1:
                self.btnName.enable()
                self.txtName.setText(self.frame.mode.game.myCaptains[item.data]['name'])

    def onChangeEmail(self, item):
        """Change Email Address"""
        try:
            d = {}
            d['emailAddress'] = self.txtEmail.text
            serverResult = self.frame.mode.game.server.setEmpire(self.frame.mode.game.authKey, d)
            if serverResult == 1:
                self.frame.mode.game.myEmpire['emailAddress'] = self.txtEmail.text
                self.frame.mode.modeMsgBox('Empire Email Address Changed')
            else:
                self.frame.mode.modeMsgBox(serverResult)
        except:
            self.frame.mode.modeMsgBox('onChangeEmail->Connection to Server Lost, Login Again')

    def onSurrender(self, item):
        """Surrender Game"""
        self.frame.mode.modeYesNoBox('Do you really want to surrender the game?', 'surrenderYes', 'surrenderNo')

    def onChangeName(self, item):
        """Change Selected Captain Name"""
        try:
            id = self.lstCaptains.getSelectedItem().data
            serverResult = self.frame.mode.game.server.setCaptainName(self.frame.mode.game.authKey, id, self.txtName.text)
            if serverResult == 1:
                self.frame.mode.game.myCaptains[id]['name'] = self.txtName.text
                self.frame.mode.modeMsgBox('Captain name Changed')
                self.populate()
            else:
                self.frame.mode.modeMsgBox(serverResult)
        except:
            self.frame.mode.modeMsgBox('onChangeName->Connection to Server Lost, Login Again')

    def onChangePassword(self, item):
        """Change Password"""
        try:
            d = {}
            d['password'] = self.txtPassword.text
            serverResult = self.frame.mode.game.server.setEmpire(self.frame.mode.game.authKey, d)
            if serverResult == 1:
                self.frame.mode.game.empirePass = self.txtPassword.text
                self.frame.mode.modeMsgBox('Empire Password Changed')
            else:
                self.frame.mode.modeMsgBox(serverResult)
        except:
            self.frame.mode.modeMsgBox('onChangePassword->Connection to Server Lost, Login Again')

    def populate(self):
        """Populate frame with new data"""
        self.btnName.disable()
        try:
            myEmpireDict = self.frame.mode.game.myEmpire
            myEmpirePict = '%s%s.png' % (self.frame.app.simImagePath, myEmpireDict['imageFile'])
            self.lblTitle.setText('CONFIGURATION FOR: %s' % myEmpireDict['name'])
            self.lblTitle.setColor(anwp.func.globals.colors[myEmpireDict['color1']])
            myCaptains = self.buildCaptainsData()
            self.txtEmail.setText(myEmpireDict['emailAddress'])
            self.txtPassword.setText(self.frame.mode.game.empirePass)
        except:
            # this allows for testing panel outside game
            myEmpirePict = self.testImagePath + 'empire1.png'
            self.lblTitle.setText('CONFIGURATION FOR: Test')
            myCaptains = self.testDict
        self.pctEmpire.setFilename(myEmpirePict)
        self.populateListbox(self.lstCaptains, myCaptains)

def main():
    """Run gui for testing"""
    import run
    width = 1024
    height = 768
    pyui.init(width, height, 'p3d', 0, 'Testing Config Info Panel')
    app = run.TestApplication(width, height)
    frame = ConfigInfoFrame(None, app)
    app.addGui(frame)
    app.run()
    pyui.quit()

if __name__ == '__main__':
    main()
```

authors: ["colshag@gmail.com"] · author_id: colshag@gmail.com
---
repo_name: YangXinNewlife/MachineLearning · path: /src/knn/digital_recognition.py · branch_name: refs/heads/master
blob_id: 4b0e4be71217535a3b023b8cd57fa3de00fa5b98 · directory_id: a4440a990b86a239a30b4295661ca588db3f5928 · content_id: 5411be618eb5bc4fae3f1b70b129b3cbdb7ead0f · snapshot_id: fdaa1f75b90c143165d457b645d3c13fee7ea9a1 · revision_id: 196ebdc881b74c746f63768b7ba31fec65e462d5
detected_licenses: [] · license_type: no_license · src_encoding: UTF-8 · language: Python · extension: py · length_bytes: 4,377 · is_vendor: false · is_generated: false
visit_date: 2020-04-05T00:10:25.050507 · revision_date: 2019-06-10T03:44:33 · committer_date: 2019-06-10T03:44:33 · github_id: 156,386,186 · star_events_count: 0 · fork_events_count: 0
gha_license_id: null · gha_event_created_at: null · gha_created_at: null · gha_language: null
content (comments translated from Chinese):

```python
# -*- coding:utf-8 -*-
__author__ = 'yangxin_ryan'
from numpy import *
import numpy as np  # needed by classify_2; the star import above does not define "np"
from os import listdir
from collections import Counter
import operator

"""
The input image is 32 * 32 and is converted to a 1 * 1024 vector.
"""


class DigitalRecognition(object):

    def __init__(self):
        print("Welcome, handwritten digit recognition algorithm!")

    """
    1. Distance calculation:
       tile() builds a matrix matching the training samples and subtracts the training set,
       square the differences,
       sum each matrix row,
       take the square root,
       sort the distances ascending and return the corresponding index positions.
    2. Select the k smallest distances.
    3. Sort and return the most frequent class.
    """
    def classify_1(self, in_x, data_set, labels, k):
        data_set_size = data_set.shape[0]
        diff_mat = tile(in_x, (data_set_size, 1)) - data_set
        sq_diff_mat = diff_mat ** 2
        sq_distances = sq_diff_mat.sum(axis=1)
        distances = sq_distances ** 0.5
        sorted_dist_indicies = distances.argsort()
        class_count = {}
        for i in range(k):
            vote_i_label = labels[sorted_dist_indicies[i]]
            class_count[vote_i_label] = class_count.get(vote_i_label, 0) + 1
        sorted_class_count = sorted(class_count.items(), key=operator.itemgetter(1), reverse=True)
        return sorted_class_count

    """
    1. Compute the distances.
    2. Take the k nearest labels.
    3. The most frequent label is the final class.
    """
    def classify_2(self, in_x, data_set, labels, k):
        dist = np.sum((in_x - data_set) ** 2, axis=1) ** 0.5
        k_labels = [labels[index] for index in dist.argsort()[0:k]]
        label = Counter(k_labels).most_common(1)[0][0]
        return label

    def file_to_matrix(self, file_name):
        fr = open(file_name)
        number_of_lines = len(fr.readlines())
        return_mat = zeros((number_of_lines, 3))
        class_label_vector = []
        fr = open(file_name)
        index = 0
        for line in fr.readlines():
            line = line.strip()
            list_from_line = line.split("\t")
            return_mat[index, :] = list_from_line[0:3]
            class_label_vector.append(int(list_from_line[-1]))
            index += 1
        return return_mat, class_label_vector

    """
    Convert an image to a vector:
    the input image is 32 * 32; this function creates a 1 * 1024 NumPy array.
    """
    def img_to_vector(self, file_name):
        return_vector = zeros((1, 1024))
        fr = open(file_name, 'r')
        for i in range(32):
            line_str = fr.readline()
            for j in range(32):
                return_vector[0, 32 * i + j] = int(line_str[j])
        return return_vector

    def run(self, train_file_path, test_file_path, k):
        labels = []
        training_file_list = listdir(train_file_path)
        train_len = len(training_file_list)
        training_mat = zeros((train_len, 1024))
        for i in range(train_len):
            file_name_str = training_file_list[i]
            file_str = file_name_str.split(".")[0]
            class_num_str = int(file_str.split("_")[0])
            labels.append(class_num_str)
            img_file = train_file_path + file_name_str
            print(img_file)
            training_mat[i] = self.img_to_vector(img_file)
        test_file_list = listdir(test_file_path)
        error_count = 0.0
        test_len = len(test_file_list)
        for i in range(test_len):
            file_name_str = test_file_list[i]
            file_str = file_name_str.split(".")[0]
            class_num_str = int(file_str.split("_")[0])
            test_file_img = test_file_path + file_name_str
            vector_under_test = self.img_to_vector(test_file_img)
            # classify_1 returns (label, votes) pairs sorted by votes; take the top label
            # (the source compared the whole list to an int, which always counted as an error).
            classifier_result = self.classify_1(vector_under_test, training_mat, labels, k)[0][0]
            if classifier_result != class_num_str:
                print(file_name_str)
                error_count += 1.0
        print("\nthe total number of errors is: %d" % error_count)
        print("\nthe total error rate is: %f" % (error_count / float(test_len)))


if __name__ == '__main__':
    digital_recognition = DigitalRecognition()
    digital_recognition.run("/Users/yangxin_ryan/PycharmProjects/MachineLearning/data/knn/trainingDigits/",
                            "/Users/yangxin_ryan/PycharmProjects/MachineLearning/data/knn/testDigits/",
                            6)
```
# -*- coding:utf-8 -*-
__author__ = 'yangxin_ryan'
from numpy import *
from os import listdir
from collections import Counter
import operator
"""
图片的输入为 32 * 32的转换为 1 * 1024的向量
"""
class DigitalRecognition(object):
def __init__(self):
print("Welcome, 手写数字识别算法!")
"""
1.距离计算
tile生成和训练样本对应的矩阵,并与训练样本求差
取平方
将矩阵的每一行相加
开方
根据距离从小到大的排序,并返回对应的索引位置
2.选择距离最小的k个值
3.排序并返回出现最多的那个类型
"""
def classify_1(self, in_x, data_set, labels, k):
data_set_size = data_set.shape[0]
diff_mat = tile(in_x, (data_set_size, 1)) - data_set
sq_diff_mat = diff_mat ** 2
sq_distances = sq_diff_mat.sum(axis=1)
distances = sq_distances ** 0.5
sorted_dist_indicies = distances.argsort()
class_count = {}
for i in range(k):
vote_i_label = labels[sorted_dist_indicies[i]]
class_count[vote_i_label] = class_count.get(vote_i_label, 0) + 1
sorted_class_count = sorted(class_count.items(), key=operator.itemgetter(1), reverse=True)
return sorted_class_count
"""
1.计算距离
2.k个最近的标签
3.出现次数最多的标签即为最终类别
"""
def classify_2(self, in_x, data_set, labels, k):
dist = np.sum((in_x - data_set) ** 2, axis=1) ** 0.5
k_labels = [labels[index] for index in dist.argsort()[0:k]]
label = Counter(k_labels).most_common(1)[0][0]
return label
def file_to_matrix(self, file_name):
fr = open(file_name)
number_of_lines = len(fr.readlines())
return_mat = zeros((number_of_lines, 3))
class_label_vector = []
fr = open(file_name)
index = 0
for line in fr.readlines():
line = line.strip()
list_from_line = line.split("\t")
return_mat[index, :] = list_from_line[0:3]
class_label_vector.append(int(list_from_line[-1]))
index += 1
return return_mat, class_label_vector
"""
将图片转换为向量
图片的输入为 32 * 32的,将图像转换为向量,该函数创建 1 * 1024 的Numpy数组
"""
def img_to_vector(self, file_name):
return_vector = zeros((1, 1024))
fr = open(file_name, 'r')
for i in range(32):
line_str = fr.readline()
for j in range(32):
return_vector[0, 32 * i + j] = int(line_str[j])
return return_vector
def run(self, train_file_path, test_file_path, k):
labels = []
training_file_list = listdir(train_file_path)
train_len = len(training_file_list)
training_mat = zeros((train_len, 1024))
for i in range(train_len):
file_name_str = training_file_list[i]
file_str = file_name_str.split(".")[0]
class_num_str = int(file_str.split("_")[0])
labels.append(class_num_str)
img_file = train_file_path + file_name_str
print(img_file)
training_mat[i] = self.img_to_vector(img_file)
test_file_list = listdir(test_file_path)
error_count = 0.0
test_len = len(test_file_list)
for i in range(test_len):
file_name_str = test_file_list[i]
file_str = file_name_str.split(".")[0]
class_num_str = int(file_str.split("_")[0])
test_file_img = test_file_path + file_name_str
vector_under_test = self.img_to_vector(test_file_img)
classifier_result = self.classify_1(vector_under_test, training_mat, labels, k)
if classifier_result != class_num_str:
print(file_name_str)
error_count += 1.0
print("\nthe total number of errors is: %d" % error_count)
print("\nthe total error rate is: %f" % (error_count / float(test_len)))
if __name__ == '__main__':
digital_recognition = DigitalRecognition()
digital_recognition.run("/Users/yangxin_ryan/PycharmProjects/MachineLearning/data/knn/trainingDigits/",
"/Users/yangxin_ryan/PycharmProjects/MachineLearning/data/knn/testDigits/",
6)
authors: ["yangxin03@youxin.com"] · author_id: yangxin03@youxin.com
---
repo_name: gemicn/flask-navigation · path: /app/base/routes.py · branch_name: refs/heads/master
blob_id: c6beb75082885391bc95b1891a36e80e937a4666 · directory_id: 642151dff23fff48310139ddc9b89c8bf6a670e3 · content_id: 14e706ea47230919be667a314346e7ccf0af73e5 · snapshot_id: 7ad371e0ac8220c14687f02b130707bf89c81553 · revision_id: 382940492ca6cae41da44d30fb78c9535e8de955
detected_licenses: [] · license_type: no_license · src_encoding: UTF-8 · language: Python · extension: py · length_bytes: 226 · is_vendor: false · is_generated: false
visit_date: 2022-09-03T22:51:41.920901 · revision_date: 2020-05-26T01:37:26 · committer_date: 2020-05-26T01:37:26 · github_id: null · star_events_count: 0 · fork_events_count: 0
gha_license_id: null · gha_event_created_at: null · gha_created_at: null · gha_language: null
content:

```python
from bcrypt import checkpw
from flask import jsonify, render_template, redirect, request, url_for
from app.base import blueprint
@blueprint.route('/')
def route_default():
    return redirect(url_for('nav_blueprint.index'))
```

authors: ["ntuwang@126.com"] · author_id: ntuwang@126.com
---
repo_name: AlanFermat/leetcode · path: /dfs + tree/100 sameTree.py · branch_name: refs/heads/master
blob_id: 22673e5c1820b8646b55bf630652d58b49177ef8 · directory_id: bbf025a5f8596e5513bd723dc78aa36c46e2c51b · content_id: 41ecd7912ef4878879d74277903c001435c1f6a2 · snapshot_id: 6209bb5cf2d1b19e3fe7b619e1230f75bb0152ab · revision_id: cacba4abaca9c4bad8e8d12526336115067dc6a0
detected_licenses: [] · license_type: no_license · src_encoding: UTF-8 · language: Python · extension: py · length_bytes: 533 · is_vendor: false · is_generated: false
visit_date: 2021-07-11T04:00:00.594820 · revision_date: 2020-06-22T21:31:02 · committer_date: 2020-06-22T21:31:02 · github_id: 142,341,558 · star_events_count: 0 · fork_events_count: 0
gha_license_id: null · gha_event_created_at: null · gha_created_at: null · gha_language: null
content:

```python
from binaryTree import Node

t1 = Node(1)
t1.left = Node(2)
t1.right = Node(3)

t2 = Node(1)
t2.left = Node(2)
t2.right = Node(3)

def treeToList(t):
    if t == None:
        return []
    result = [t.val]
    return result + treeToList(t.left) + treeToList(t.right)

# Caution: comparing bare preorder lists cannot distinguish left from right
# children (a root with only a left child and a root with only a right child
# flatten to the same list), so this version can report false positives.
def isSameTree(t,s):
    t_list = treeToList(t)
    s_list = treeToList(s)
    return s_list == t_list

def isSame(t,s):
    if not t and not s:
        return True
    if t and s:
        if t.val == s.val:
            return isSame(t.left, s.left) and isSame(t.right, s.right)
    return False

print(isSame(t1,t2))
```

authors: ["zy19@rice.edu"] · author_id: zy19@rice.edu
---
repo_name: seoul-ssafy-class-2-studyclub/hyeonhwa · path: /swtest/1952.py · branch_name: refs/heads/master
blob_id: 336c31fca80c5b1edd2c0ae1909608af55e7d349 · directory_id: 61d22eef5483a046b418a295d2ffa22857a296e1 · content_id: 0a3cb502106980c8bed236da21232f446e6783fd · snapshot_id: 7ad680a67ba253eece07a9605a3b983f98a8cca3 · revision_id: e51163b3135cf529d295bc0d527c98b642f8c367
detected_licenses: [] · license_type: no_license · src_encoding: UTF-8 · language: Python · extension: py · length_bytes: 465 · is_vendor: false · is_generated: false
visit_date: 2021-10-06T12:57:44.046963 · revision_date: 2021-10-02T09:42:55 · committer_date: 2021-10-02T09:42:55 · github_id: 198,594,633 · star_events_count: 1 · fork_events_count: 0
gha_license_id: null · gha_event_created_at: null · gha_created_at: null · gha_language: null
content:

```python
import sys
sys.stdin = open('input4.txt', 'r')

T = int(input())
for t in range(T):
    money = list(map(int, input().split()))  # pass prices: 1 day, 1 month, 3 months, 1 year
    plan = list(map(int, input().split()))   # planned usage days for each of the 12 months
    dp = [0] * 12
    dp[0] = min(money[0]*plan[0], money[1])
    for i in range(1, 12):
        dp[i] = min(dp[i-1]+money[0]*plan[i], dp[i-1]+money[1])
        if i >= 2:
            # dp[i-3] is dp[-1] == 0 when i == 2, which also covers a
            # 3-month pass starting at month 0.
            dp[i] = min(dp[i-3]+money[2], dp[i])
    res = min(dp[11], money[3])
    print('#{} {}'.format(t+1, res))
```

authors: ["h3652k@gmail.com"] · author_id: h3652k@gmail.com
---
repo_name: alhamdubello/evpn-ipsec-dci-ansible · path: /collections/ansible_collections/fortinet/fortios/plugins/modules/fortios_test_autod.py · branch_name: refs/heads/main
blob_id: f79dd2dab6a9e5e06232020fc812c26b78740da4 · directory_id: a50e906945260351f43d57e014081bcdef5b65a4 · content_id: ba72e9dc091a5fba49d6174bde73f232cf0ec22c · snapshot_id: 210cb31f4710bb55dc6d2443a590f3eb65545cf5 · revision_id: 2dcc7c915167cd3b25ef3651f2119d54a18efdff
detected_licenses: [] · license_type: no_license · src_encoding: UTF-8 · language: Python · extension: py · length_bytes: 7,724 · is_vendor: false · is_generated: false
visit_date: 2023-06-08T10:42:35.939341 · revision_date: 2021-06-28T09:52:45 · committer_date: 2021-06-28T09:52:45 · github_id: 380,860,067 · star_events_count: 0 · fork_events_count: 0
gha_license_id: null · gha_event_created_at: null · gha_created_at: null · gha_language: null
content:

```python
#!/usr/bin/python
from __future__ import (absolute_import, division, print_function)
# Copyright 2019-2020 Fortinet, Inc.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
__metaclass__ = type
ANSIBLE_METADATA = {'status': ['preview'],
'supported_by': 'community',
'metadata_version': '1.1'}
DOCUMENTATION = '''
---
module: fortios_test_autod
short_description: Automation daemon in Fortinet's FortiOS and FortiGate.
description:
    - This module is able to configure a FortiGate or FortiOS (FOS) device by allowing the
      user to set and modify test feature and autod category.
      Examples include all parameters and values need to be adjusted to datasources before usage.
      Tested with FOS v6.4.0
version_added: "2.10"
author:
    - Link Zheng (@chillancezen)
    - Jie Xue (@JieX19)
    - Hongbin Lu (@fgtdev-hblu)
    - Frank Shen (@frankshen01)
    - Miguel Angel Munoz (@mamunozgonzalez)
    - Nicolas Thomas (@thomnico)
notes:
    - Legacy fortiosapi has been deprecated, httpapi is the preferred way to run playbooks
requirements:
    - ansible>=2.9.0
options:
    access_token:
        description:
            - Token-based authentication.
              Generated from GUI of Fortigate.
        type: str
        required: false
    vdom:
        description:
            - Virtual domain, among those defined previously. A vdom is a
              virtual instance of the FortiGate that can be configured and
              used as a different unit.
        type: str
        default: root
    test_autod:
        description:
            - Automation daemon.
        default: null
        type: dict
        suboptions:
            <Integer>:
                description:
                    - Test level.
                type: str
'''

EXAMPLES = '''
- hosts: fortigates
  collections:
    - fortinet.fortios
  connection: httpapi
  vars:
   vdom: "root"
   ansible_httpapi_use_ssl: yes
   ansible_httpapi_validate_certs: no
   ansible_httpapi_port: 443
  tasks:
  - name: Automation daemon.
    fortios_test_autod:
      vdom: "{{ vdom }}"
      test_autod:
        <Integer>: "<your_own_value>"
'''

RETURN = '''
build:
  description: Build number of the fortigate image
  returned: always
  type: str
  sample: '1547'
http_method:
  description: Last method used to provision the content into FortiGate
  returned: always
  type: str
  sample: 'PUT'
http_status:
  description: Last result given by FortiGate on last operation applied
  returned: always
  type: str
  sample: "200"
mkey:
  description: Master key (id) used in the last call to FortiGate
  returned: success
  type: str
  sample: "id"
name:
  description: Name of the table used to fulfill the request
  returned: always
  type: str
  sample: "urlfilter"
path:
  description: Path of the table used to fulfill the request
  returned: always
  type: str
  sample: "webfilter"
revision:
  description: Internal revision number
  returned: always
  type: str
  sample: "17.0.2.10658"
serial:
  description: Serial number of the unit
  returned: always
  type: str
  sample: "FGVMEVYYQT3AB5352"
status:
  description: Indication of the operation's result
  returned: always
  type: str
  sample: "success"
vdom:
  description: Virtual domain used
  returned: always
  type: str
  sample: "root"
version:
  description: Version of the FortiGate
  returned: always
  type: str
  sample: "v5.6.3"
'''
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.connection import Connection
from ansible_collections.fortinet.fortios.plugins.module_utils.fortios.fortios import FortiOSHandler
from ansible_collections.fortinet.fortios.plugins.module_utils.fortios.fortios import check_legacy_fortiosapi
from ansible_collections.fortinet.fortios.plugins.module_utils.fortimanager.common import FAIL_SOCKET_MSG
def filter_test_autod_data(json):
    option_list = ['<Integer>']
    dictionary = {}

    for attribute in option_list:
        if attribute in json and json[attribute] is not None:
            dictionary[attribute] = json[attribute]

    return dictionary

def underscore_to_hyphen(data):
    if isinstance(data, list):
        for i, elem in enumerate(data):
            data[i] = underscore_to_hyphen(elem)
    elif isinstance(data, dict):
        new_data = {}
        for k, v in data.items():
            new_data[k.replace('_', '-')] = underscore_to_hyphen(v)
        data = new_data

    return data

def test_autod(data, fos):
    vdom = data['vdom']
    test_autod_data = data['test_autod']
    filtered_data = underscore_to_hyphen(filter_test_autod_data(test_autod_data))

    return fos.set('test',
                   'autod',
                   data=filtered_data,
                   vdom=vdom)

def is_successful_status(status):
    return status['status'] == "success" or \
        status['http_method'] == "DELETE" and status['http_status'] == 404

def fortios_test(data, fos):
    if data['test_autod']:
        resp = test_autod(data, fos)
    else:
        fos._module.fail_json(msg='missing task body: %s' % ('test_autod'))

    return not is_successful_status(resp), \
        resp['status'] == "success" and \
        (resp['revision_changed'] if 'revision_changed' in resp else True), \
        resp

def main():
    mkeyname = None
    fields = {
        "access_token": {"required": False, "type": "str", "no_log": True},
        "vdom": {"required": False, "type": "str", "default": "root"},
        "test_autod": {
            "required": False, "type": "dict", "default": None,
            "options": {
                "<Integer>": {"required": False, "type": "str"}
            }
        }
    }

    check_legacy_fortiosapi()
    module = AnsibleModule(argument_spec=fields,
                           supports_check_mode=False)

    versions_check_result = None
    if module._socket_path:
        connection = Connection(module._socket_path)
        if 'access_token' in module.params:
            connection.set_option('access_token', module.params['access_token'])

        fos = FortiOSHandler(connection, module, mkeyname)

        is_error, has_changed, result = fortios_test(module.params, fos)
        versions_check_result = connection.get_system_version()
    else:
        module.fail_json(**FAIL_SOCKET_MSG)

    if versions_check_result and versions_check_result['matched'] is False:
        module.warn("Ansible has detected version mismatch between FortOS system and galaxy, see more details by specifying option -vvv")

    if not is_error:
        if versions_check_result and versions_check_result['matched'] is False:
            module.exit_json(changed=has_changed, version_check_warning=versions_check_result, meta=result)
        else:
            module.exit_json(changed=has_changed, meta=result)
    else:
        if versions_check_result and versions_check_result['matched'] is False:
            module.fail_json(msg="Error in repo", version_check_warning=versions_check_result, meta=result)
        else:
            module.fail_json(msg="Error in repo", meta=result)

if __name__ == '__main__':
    main()
```

authors: ["a.u.bello@bham.ac.uk"] · author_id: a.u.bello@bham.ac.uk
---
repo_name: Tiierr/LeetCode-Python · path: /Array/range-addition.py · branch_name: refs/heads/master
blob_id: 6ec54bc427e38fff5352b9d4a525b5c7b1bbc069 · directory_id: 214ea3873f451940c73c4fb02981b08c8161b23c · content_id: 8617f93a5c2256d727dd8742f68e760606d4d616 · snapshot_id: 4a086a76a6d3780140e47246304d11c520548396 · revision_id: e8532b63fc5bb6ceebe30a9c53ab3a2b4b2a75a3
detected_licenses: [] · license_type: no_license · src_encoding: UTF-8 · language: Python · extension: py · length_bytes: 1,589 · is_vendor: false · is_generated: false
visit_date: 2021-06-14T04:36:57.394115 · revision_date: 2017-03-07T06:46:39 · committer_date: 2017-03-07T06:46:39 · github_id: null · star_events_count: 0 · fork_events_count: 0
gha_license_id: null · gha_event_created_at: null · gha_created_at: null · gha_language: null
content:

```python
# Time: O(k + n)
# Space: O(1)
#
# Assume you have an array of length n initialized with
# all 0's and are given k update operations.
#
# Each operation is represented as a triplet:
# [startIndex, endIndex, inc] which increments each element of subarray
# A[startIndex ... endIndex] (startIndex and endIndex inclusive) with inc.
#
# Return the modified array after all k operations were executed.
#
# Example:
#
# Given:
#
# length = 5,
# updates = [
# [1, 3, 2],
# [2, 4, 3],
# [0, 2, -2]
# ]
#
# Output:
#
# [-2, 0, 3, 5, 3]
#
# Explanation:
#
# Initial state:
# [ 0, 0, 0, 0, 0 ]
#
# After applying operation [1, 3, 2]:
# [ 0, 2, 2, 2, 0 ]
#
# After applying operation [2, 4, 3]:
# [ 0, 2, 5, 5, 3 ]
#
# After applying operation [0, 2, -2]:
# [-2, 0, 3, 5, 3 ]
#
# Hint:
#
# Thinking of using advanced data structures? You are thinking it too complicated.
# For each update operation, do you really need to update all elements between i and j?
# Update only the first and end element is sufficient.
# The optimal time complexity is O(k + n) and uses O(1) extra space.
class Solution(object):
    def getModifiedArray(self, length, updates):
        """
        :type length: int
        :type updates: List[List[int]]
        :rtype: List[int]
        """
        result = [0] * length
        for update in updates:
            result[update[0]] += update[2]
            if update[1]+1 < length:
                result[update[1]+1] -= update[2]
        for i in xrange(1, length):
            result[i] += result[i - 1]
        return result
```

authors: ["rayyu03@163.com"] · author_id: rayyu03@163.com
---
repo_name: DLPerf/RotationDetection · path: /libs/configs_old/MLT/gwd/cfgs_res101_mlt_v1.py · branch_name: refs/heads/main
blob_id: a0641d0c62f8c6e39c81e1f9266c4710026b35aa · directory_id: f3a7b2b71af1ca16e87fcc2c6063670d056f59c6 · content_id: d7bff74ce9ce7235a49aac2edc2032e4000f6281 · snapshot_id: 3af165ab00ea6d034774a7289a375b90e4079df4 · revision_id: c5d3e604ace76d7996bc461920854b2c79d8c023
detected_licenses: ["Apache-2.0"] · license_type: permissive · src_encoding: UTF-8 · language: Python · extension: py · length_bytes: 3,449 · is_vendor: false · is_generated: false
visit_date: 2023-07-16T06:01:42.496723 · revision_date: 2021-08-28T03:17:39 · committer_date: 2021-08-28T03:17:39 · github_id: 400,690,285 · star_events_count: 0 · fork_events_count: 0
gha_license_id: Apache-2.0 · gha_event_created_at: 2021-08-28T03:16:55 · gha_created_at: 2021-08-28T03:16:55 · gha_language: null
content:

```python
# -*- coding: utf-8 -*-
from __future__ import division, print_function, absolute_import
import os
import tensorflow as tf
import math
from dataloader.pretrained_weights.pretrain_zoo import PretrainModelZoo
"""
FLOPs: 737874381; Trainable params: 51265150
trainval/test + sqrt tau=2
"""
# ------------------------------------------------
VERSION = 'RetinaNet_MLT_GWD_1x_20201222'
NET_NAME = 'resnet101_v1d' # 'MobilenetV2'
# ---------------------------------------- System
ROOT_PATH = os.path.abspath('../../')
print(20*"++--")
print(ROOT_PATH)
GPU_GROUP = "0"
NUM_GPU = len(GPU_GROUP.strip().split(','))
SHOW_TRAIN_INFO_INTE = 20
SMRY_ITER = 200
SAVE_WEIGHTS_INTE = 10000 * 2
SUMMARY_PATH = os.path.join(ROOT_PATH, 'output/summary')
TEST_SAVE_PATH = os.path.join(ROOT_PATH, 'tools/test_result')
pretrain_zoo = PretrainModelZoo()
PRETRAINED_CKPT = pretrain_zoo.pretrain_weight_path(NET_NAME, ROOT_PATH)
TRAINED_CKPT = os.path.join(ROOT_PATH, 'output/trained_weights')
EVALUATE_R_DIR = os.path.join(ROOT_PATH, 'output/evaluate_result_pickle/')
# ------------------------------------------ Train and test
RESTORE_FROM_RPN = False
FIXED_BLOCKS = 1 # allow 0~3
FREEZE_BLOCKS = [True, False, False, False, False] # for gluoncv backbone
USE_07_METRIC = True
ADD_BOX_IN_TENSORBOARD = True
MUTILPY_BIAS_GRADIENT = 2.0 # if None, will not multipy
GRADIENT_CLIPPING_BY_NORM = 10.0 # if None, will not clip
CLS_WEIGHT = 1.0
REG_WEIGHT = 1.0
ANGLE_WEIGHT = 0.5
REG_LOSS_MODE = 2
ALPHA = 1.0
BETA = 1.0
BATCH_SIZE = 1
EPSILON = 1e-5
MOMENTUM = 0.9
LR = 1e-3
DECAY_STEP = [SAVE_WEIGHTS_INTE*12, SAVE_WEIGHTS_INTE*16, SAVE_WEIGHTS_INTE*20]
MAX_ITERATION = SAVE_WEIGHTS_INTE*20
WARM_SETP = int(1.0 / 4.0 * SAVE_WEIGHTS_INTE)
# -------------------------------------------- Dataset
DATASET_NAME = 'MLT' # 'pascal', 'coco'
PIXEL_MEAN = [123.68, 116.779, 103.939] # R, G, B. In tf, channel is RGB. In openCV, channel is BGR
PIXEL_MEAN_ = [0.485, 0.456, 0.406]
PIXEL_STD = [0.229, 0.224, 0.225] # R, G, B. In tf, channel is RGB. In openCV, channel is BGR
IMG_SHORT_SIDE_LEN = [800, 600, 1000, 1200]
IMG_MAX_LENGTH = 1500
CLASS_NUM = 1
IMG_ROTATE = True
RGB2GRAY = True
VERTICAL_FLIP = True
HORIZONTAL_FLIP = True
IMAGE_PYRAMID = True
# --------------------------------------------- Network
SUBNETS_WEIGHTS_INITIALIZER = tf.random_normal_initializer(mean=0.0, stddev=0.01, seed=None)
SUBNETS_BIAS_INITIALIZER = tf.constant_initializer(value=0.0)
PROBABILITY = 0.01
FINAL_CONV_BIAS_INITIALIZER = tf.constant_initializer(value=-math.log((1.0 - PROBABILITY) / PROBABILITY))
WEIGHT_DECAY = 1e-4
USE_GN = False
FPN_CHANNEL = 256
NUM_SUBNET_CONV = 4
FPN_MODE = 'fpn'
# --------------------------------------------- Anchor
LEVEL = ['P3', 'P4', 'P5', 'P6', 'P7']
BASE_ANCHOR_SIZE_LIST = [32, 64, 128, 256, 512]
ANCHOR_STRIDE = [8, 16, 32, 64, 128]
ANCHOR_SCALES = [2 ** 0, 2 ** (1.0 / 3.0), 2 ** (2.0 / 3.0)]
ANCHOR_RATIOS = [1, 1 / 2, 2., 1 / 3., 3., 5., 1 / 5.]
ANCHOR_ANGLES = [-90, -75, -60, -45, -30, -15]
ANCHOR_SCALE_FACTORS = None
USE_CENTER_OFFSET = True
METHOD = 'H'
USE_ANGLE_COND = False
ANGLE_RANGE = 90 # or 180
# -------------------------------------------- Head
SHARE_NET = True
USE_P5 = True
IOU_POSITIVE_THRESHOLD = 0.5
IOU_NEGATIVE_THRESHOLD = 0.4
NMS = True
NMS_IOU_THRESHOLD = 0.1
MAXIMUM_DETECTIONS = 100
FILTERED_SCORE = 0.05
VIS_SCORE = 0.2
# -------------------------------------------- GWD
GWD_TAU = 2.0
GWD_FUNC = tf.sqrt
```

authors: ["yangxue0827@126.com"] · author_id: yangxue0827@126.com
---
repo_name: shivam1111/jjuice · path: /catalog/migrations/0003_saleorder.py · branch_name: refs/heads/master
blob_id: f942e1aeb559fdac152b1e65d28e59acc2f85863 · directory_id: 7d172bc83bc61768a09cc97746715b8ec0e13ced · content_id: 65f1ca2bf38a92aa7ef0c747114f6b61e4a61de3 · snapshot_id: a3bcd7ee0ae6647056bdc62ff000ce6e6af27594 · revision_id: 6a2669795ed4bb4495fda7869eeb221ed6535582
detected_licenses: [] · license_type: no_license · src_encoding: UTF-8 · language: Python · extension: py · length_bytes: 668 · is_vendor: false · is_generated: false
visit_date: 2020-04-12T05:01:27.981792 · revision_date: 2018-11-08T13:00:49 · committer_date: 2018-11-08T13:00:49 · github_id: 81,114,622 · star_events_count: 2 · fork_events_count: 0
gha_license_id: null · gha_event_created_at: null · gha_created_at: null · gha_language: null
content:

```python
# -*- coding: utf-8 -*-
# Generated by Django 1.10.1 on 2017-03-30 07:24
from __future__ import unicode_literals

from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('catalog', '0002_s3object'),
    ]

    operations = [
        migrations.CreateModel(
            name='SaleOrder',
            fields=[
                ('id', models.AutoField(primary_key=True, serialize=False)),
                ('name', models.CharField(max_length=100, verbose_name='Name')),
            ],
            options={
                'db_table': 'sale_order',
                'managed': False,
            },
        ),
    ]
```

authors: ["shivam1111@gmail.com"] · author_id: shivam1111@gmail.com
---
repo_name: xiaochuanjiejie/python_exercise · path: /EG/reduce_str_float.py · branch_name: refs/heads/master
blob_id: 370bdedfa4af55d99c1d4c1db116c26d97c39037 · directory_id: aea96aa406250c3a2a8f2799e6cbbad256c262c3 · content_id: f08662bdfd724cd99ca78245308d53339a3013ec · snapshot_id: cb0ffaa4b7c961c8ca9847526c84ee6ba261620c · revision_id: 710fa85fd2d7a17994081bdc5f8b5ff66b77416e
detected_licenses: [] · license_type: no_license · src_encoding: UTF-8 · language: Python · extension: py · length_bytes: 447 · is_vendor: false · is_generated: false
visit_date: 2021-01-21T16:18:04.640093 · revision_date: 2017-08-11T10:02:49 · committer_date: 2017-08-11T10:02:49 · github_id: 95,403,650 · star_events_count: 0 · fork_events_count: 0
gha_license_id: null · gha_event_created_at: null · gha_created_at: null · gha_language: null
content:

```python
#coding: utf-8
import math
from functools import reduce
s = '1234.5678'
index = s.index('.')
n = len(s) - 1 - index
s = s.replace('.','')
print s
def chr2num(s):
    return {'0':0,'1':1,'2':2,'3':3,'4':4,'5':5,'6':6,'7':7,'8':8,'9':9}[s]
print map(chr2num,s)
lst = map(chr2num,s)
# lst = list(map(chr2num,s))
# print lst
def cal(x,y):
    return x * 10 + y
number = reduce(cal,lst)
print number
floatx = number / math.pow(10,n)
print floatx
```

authors: ["xiaochuanjiejie@gmail.com"] · author_id: xiaochuanjiejie@gmail.com
---
repo_name: gerritholl/Harmonisation · path: /src/main/ODR_MC/branches/v3/upFun.py · branch_name: refs/heads/master
blob_id: 50fa25dc930404d0d7198078e8b171b750953f5c · directory_id: ca5c5becd8b57b4d77af3aa776b3c478ca962bf0 · content_id: 28e5d2a45a215fc146e7f361050eb44ad40d186e · snapshot_id: 42118c46d093115ddd87fca094a9ac8881aede71 · revision_id: 31b8bd5a0da8c6fc4a31453cf7801fcca25d4951
detected_licenses: [] · license_type: no_license · src_encoding: UTF-8 · language: Python · extension: py · length_bytes: 4,966 · is_vendor: false · is_generated: false
visit_date: 2021-09-07T03:07:03.843117 · revision_date: 2017-12-18T13:57:38 · committer_date: 2017-12-18T13:57:38 · github_id: 110,711,229 · star_events_count: 0 · fork_events_count: 0
gha_license_id: null · gha_event_created_at: 2017-11-14T15:53:19 · gha_created_at: 2017-11-14T15:53:19 · gha_language: null
content (truncated in this section):

```python
""" FIDUCEO FCDR harmonisation
Author: Arta Dilo / NPL MM
Date created: 09-01-2017
Last update: 17-01-2017
Functions for propagating uncertainty to the calibrated radiance:
- function to calculate first derivatives to measurement eq. variables,
- and first derivatives to calibration coefficients;
- function for uncertainty propagation using GUM.
"""
import numpy as np
class avhrr(object):
''' The class contains a function for the measurement equation and functions
for calculating sensitivity coefficients to variables and parameters in the
measurement equation. '''
def __init__(self, nop, nos):
self.slabel = 'avhrr' # series label
self.nopairs = nop # number of sensor pairs in the series
self.nosensors = nos # number of sensors in the series
# set manually number of meas. eq. parameters; will change if needed
self.nocoefs = 4 # number of calibration coefficients
self.novars = 5 # number of meas. eq. variables
# AVHRR measurement equation
def measEq(self, X, a):
# add checks for number of calib. coefficients and variables
a0 = a[0] # AVHRR model coefficients
a1 = a[1]
a2 = a[2]
a3 = a[3]
CE = X[:,2] # Earth counts
Cs = X[:,0] # space counts
Cict = X[:,1] # ICT counts
Lict = X[:,3] # ICT radiance
To = X[:,4] # orbit temperature
# Earth radiance from Earth counts and calibration data
LE = a0 + (0.98514+a1)*Lict*(Cs-CE)/(Cs-Cict) + a2*(Cict-CE)*(Cs-CE)
LE += a3*To
return LE # return Earth radiance
''' Partial derivatives to measurement equation variables and coefficients;
these form the Jacobian row(s) for the LS in a pair sensor-reference. '''
def sensCoeff(self, X, a):
p = self.nocoefs # number of calibration coefficients
m = self.novars # number of harmonisation variables
a1 = a[1] # AVHRR model coefficients
a2 = a[2]
a3 = a[3]
CE = X[:,2] # Earth counts
Cs = X[:,0] # space counts
Cict = X[:,1] # ICT counts
Lict = X[:,3] # ICT radiance
To = X[:,4] # orbit temperature
# initialize array of sensitivity coefficients per data row
sens = np.zeros((CE.shape[0], p+m)) # should check it is 9
# partial derivatives to calibration coefficients
sens[:,0] = 1. # dLE / da0
sens[:,1] = Lict * (Cs - CE) / (Cs - Cict) # dLE / da1
sens[:,2] = (Cict - CE) * (Cs - CE) # dLE / da2
sens[:,3] = To # dLE / da3
# partial derivatives to meas.eq. variables
sens[:,4] = (0.98514+a1)*Lict*(CE-Cict)/(Cs-Cict)**2 + a2*(Cict-CE) # dLE/dCs
sens[:,5] = (0.98514+a1)*Lict*(Cs-CE)/(Cs-Cict)**2 + a2*(Cs-CE) # dLE/dCict
sens[:,6] = (0.98514+a1)*Lict/(Cict-Cs) + a2*(2*CE-Cs-Cict) # dLE/dCE
sens[:,7] = (0.98514+a1) * (Cs-CE) / (Cs-Cict) # dLE/dLict
sens[:,8] = a3 # dLE/dTo
return sens
''' Evaluate Earth radiance uncertainty from coefficients uncertainty '''
def va2ULE(self, X, a, Va):
p = self.nocoefs # number of calibration coefficients
sens = self.sensCoeff(X, a) # sensitivity coeffs for matchup obs.
# compute uncertainty from calibration coefficients
u2La = np.dot(sens[:, 0:p]**2, np.diag(Va)) # coeffs. variance component
corU = np.zeros((X[:,0].shape[0]))
for i in range(p-1):
for j in range(i+1,p):
corU[:] += 2 * sens[:,i] * sens[:,j] * Va[i,j]
u2La += corU # add coeffs' correlation component
return np.sqrt(u2La) # return radiance uncert. from coeffs uncertainty
''' Evaluate Earth radiance uncertainty via GUM law of propagation '''
def uncLE(self, X, a, uX, Va):
# assumes no correlation between X variables
p = self.nocoefs # number of calibration coefficients
m = self.novars # number of harmonisation variables
sens = self.sensCoeff(X, a) # sensitivity coeffs for matchup obs.
u2La = self.va2ULE(X, a, Va)**2 # uncertainty from calib. coefficients
# evaluate uncertainty from harmonisation data variables
u2LX = np.einsum('ij,ij->i', sens[:, p:p+m]**2, uX**2)
u2L = u2La + u2LX # total squared uncertainty of radiance
print "Ratio of coeffs' uncertainty component to total radiance uncertainty:"
print min(np.sqrt(u2La/u2L)), '-', max(np.sqrt(u2La/u2L))
return np.sqrt(u2L) # return uncertainty of Earth radiance
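# Minimal usage sketch (illustrative only; shapes inferred from the method
# bodies above: X is (N, 5) with columns [Cs, Cict, CE, Lict, To], a holds
# the 4 calibration coefficients, Va is their 4x4 covariance, and uX matches
# X in shape; all values below are made up).
if __name__ == '__main__':
    N = 100
    sensor = avhrr(1, 2)
    X = np.column_stack([np.full(N, 990.), np.full(N, 800.), np.full(N, 500.),
                         np.full(N, 100.), np.full(N, 288.)])
    a = np.array([0.1, 0.01, 1e-5, 1e-3])
    Va = np.diag([1e-4, 1e-6, 1e-12, 1e-8])
    uX = np.full((N, 5), 0.5)
    LE = sensor.measEq(X, a)          # Earth radiance per matchup
    uLE = sensor.uncLE(X, a, uX, Va)  # propagated radiance uncertainty (GUM)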
|
[
"seh2@eoserver.npl.co.uk"
] |
seh2@eoserver.npl.co.uk
|
e3715d7cbdd7977bd57b89bffe7e1c7374827eb2
|
40fc1d38f2d4b643bc99df347c4ff3a763ba65e3
|
/arcade/space_shooter/setup.py
|
899a87123f997e3c1122f83ff3f7a77fa541a2a7
|
[
"LicenseRef-scancode-public-domain",
"MIT",
"CC-BY-4.0",
"CC-BY-3.0"
] |
permissive
|
alecordev/pygaming
|
0be4b7a1c9e7922c63ce4cc369cd893bfef7b03c
|
35e479b703acf038f47c2151b3759ad852781e4c
|
refs/heads/master
| 2023-05-14T05:03:28.484678
| 2021-06-03T10:11:08
| 2021-06-03T10:11:08
| 372,768,733
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 736
|
py
|
import sys
from cx_Freeze import setup, Executable
import os
# Dependencies are automatically detected, but it might need fine tuning.
build_exe_options = {"packages": ["os", "pygame"]}
# GUI applications require a different base on Windows (the default is for a
# console application).
base = None
if sys.platform == "win32":
base = "Win32GUI"
pygame_py_file = os.path.join("spaceshooter", "spaceShooter.py")
## The image and sound files are added manually into the zip file;
## a fix for this will be released later.
setup(
name="Space Shooter",
version="0.0.2",
description="classic retro game made using pygame",
options={"build_exe": build_exe_options},
executables=[Executable(pygame_py_file, base=base)],
)
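# Typical cx_Freeze invocation to produce the frozen executable:
#   python setup.py build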
|
[
"alecor.dev@gmail.com"
] |
alecor.dev@gmail.com
|
8904beb072f5f0d6c02deb340ad9e1bde96aa958
|
6509c398816baffafa4a1fcfb2855e1bc9d1609b
|
/sistema-operacional/diretorios/pathlib/exemplos/pathlib-4.py
|
7986086bea5a528b646fbaa9b9c5e9fc10c68789
|
[] |
no_license
|
marcoswebermw/learning-python
|
6b0dfa81a0d085f4275865dce089d9b53b494aa5
|
931ed2985b8a3fec1a48c660c089e290aaac123d
|
refs/heads/master
| 2021-10-27T21:19:46.013020
| 2019-04-19T23:25:46
| 2019-04-19T23:25:46
| 87,670,464
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 148
|
py
|
# Listing only the files in a directory.
from pathlib import Path
diretorio = Path('.')
[print(x) for x in diretorio.iterdir() if x.is_file()]
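# The same pattern with is_dir() would list only the subdirectories
# (illustrative variant):
# [print(x) for x in diretorio.iterdir() if x.is_dir()]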
|
[
"marcoswebermw@gmail.com"
] |
marcoswebermw@gmail.com
|
ce3eb9f306532e3d901fc2acb81877bb8a80fbde
|
b70f00927b9ed862252ad7345ca39f9d44ae87a2
|
/exec -l /bin/bash/google-cloud-sdk/.install/.backup/lib/googlecloudsdk/command_lib/filestore/operations/flags.py
|
a2422ffca4165e41c34c82b6d85667ae3393cac0
|
[
"LicenseRef-scancode-unknown-license-reference",
"Apache-2.0"
] |
permissive
|
sparramore/Art-Roulette
|
7654dedad6e9423dfc31bd0f807570b07a17a8fc
|
c897c9ec66c27ccab16f1a12213d09fe982d4a95
|
refs/heads/master
| 2021-07-06T13:04:22.141681
| 2018-07-12T23:30:13
| 2018-07-12T23:30:13
| 139,061,941
| 0
| 2
| null | 2020-07-25T11:32:11
| 2018-06-28T19:49:24
|
Python
|
UTF-8
|
Python
| false
| false
| 1,156
|
py
|
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Flags and helpers for the Cloud Filestore operations commands."""
from __future__ import unicode_literals
OPERATIONS_LIST_FORMAT = """\
table(
name.basename():label=OPERATION_NAME,
name.segment(3):label=LOCATION,
metadata.verb:label=TYPE,
metadata.target.basename(),
done.yesno(yes='DONE', no='RUNNING'):label=STATUS,
metadata.createTime.date():sort=1,
duration(start=metadata.createTime,end=metadata.endTime,precision=0,calendar=false).slice(2:).join("").yesno(no="<1S"):label=DURATION
)"""
|
[
"djinnie24@gmail.com"
] |
djinnie24@gmail.com
|
7df867c895807b675e26661a7c94fcedf8969c23
|
82b946da326148a3c1c1f687f96c0da165bb2c15
|
/sdk/python/pulumi_azure_native/operationalinsights/v20200301preview/get_linked_storage_account.py
|
c615084e822e9a1b8d05805b45a7c7b14a8e0d07
|
[
"BSD-3-Clause",
"Apache-2.0"
] |
permissive
|
morrell/pulumi-azure-native
|
3916e978382366607f3df0a669f24cb16293ff5e
|
cd3ba4b9cb08c5e1df7674c1c71695b80e443f08
|
refs/heads/master
| 2023-06-20T19:37:05.414924
| 2021-07-19T20:57:53
| 2021-07-19T20:57:53
| 387,815,163
| 0
| 0
|
Apache-2.0
| 2021-07-20T14:18:29
| 2021-07-20T14:18:28
| null |
UTF-8
|
Python
| false
| false
| 4,552
|
py
|
# coding=utf-8
# *** WARNING: this file was generated by the Pulumi SDK Generator. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from ... import _utilities
__all__ = [
'GetLinkedStorageAccountResult',
'AwaitableGetLinkedStorageAccountResult',
'get_linked_storage_account',
]
@pulumi.output_type
class GetLinkedStorageAccountResult:
"""
Linked storage accounts top level resource container.
"""
def __init__(__self__, data_source_type=None, id=None, name=None, storage_account_ids=None, type=None):
if data_source_type and not isinstance(data_source_type, str):
raise TypeError("Expected argument 'data_source_type' to be a str")
pulumi.set(__self__, "data_source_type", data_source_type)
if id and not isinstance(id, str):
raise TypeError("Expected argument 'id' to be a str")
pulumi.set(__self__, "id", id)
if name and not isinstance(name, str):
raise TypeError("Expected argument 'name' to be a str")
pulumi.set(__self__, "name", name)
if storage_account_ids and not isinstance(storage_account_ids, list):
raise TypeError("Expected argument 'storage_account_ids' to be a list")
pulumi.set(__self__, "storage_account_ids", storage_account_ids)
if type and not isinstance(type, str):
raise TypeError("Expected argument 'type' to be a str")
pulumi.set(__self__, "type", type)
@property
@pulumi.getter(name="dataSourceType")
def data_source_type(self) -> str:
"""
Linked storage accounts type.
"""
return pulumi.get(self, "data_source_type")
@property
@pulumi.getter
def id(self) -> str:
"""
Fully qualified resource ID for the resource. Ex - /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}
"""
return pulumi.get(self, "id")
@property
@pulumi.getter
def name(self) -> str:
"""
The name of the resource
"""
return pulumi.get(self, "name")
@property
@pulumi.getter(name="storageAccountIds")
def storage_account_ids(self) -> Optional[Sequence[str]]:
"""
Linked storage accounts resources ids.
"""
return pulumi.get(self, "storage_account_ids")
@property
@pulumi.getter
def type(self) -> str:
"""
The type of the resource. E.g. "Microsoft.Compute/virtualMachines" or "Microsoft.Storage/storageAccounts"
"""
return pulumi.get(self, "type")
class AwaitableGetLinkedStorageAccountResult(GetLinkedStorageAccountResult):
# pylint: disable=using-constant-test
def __await__(self):
if False:
yield self
return GetLinkedStorageAccountResult(
data_source_type=self.data_source_type,
id=self.id,
name=self.name,
storage_account_ids=self.storage_account_ids,
type=self.type)
def get_linked_storage_account(data_source_type: Optional[str] = None,
resource_group_name: Optional[str] = None,
workspace_name: Optional[str] = None,
opts: Optional[pulumi.InvokeOptions] = None) -> AwaitableGetLinkedStorageAccountResult:
"""
Linked storage accounts top level resource container.
:param str data_source_type: Linked storage accounts type.
:param str resource_group_name: The name of the resource group. The name is case insensitive.
:param str workspace_name: The name of the workspace.
"""
__args__ = dict()
__args__['dataSourceType'] = data_source_type
__args__['resourceGroupName'] = resource_group_name
__args__['workspaceName'] = workspace_name
if opts is None:
opts = pulumi.InvokeOptions()
if opts.version is None:
opts.version = _utilities.get_version()
__ret__ = pulumi.runtime.invoke('azure-native:operationalinsights/v20200301preview:getLinkedStorageAccount', __args__, opts=opts, typ=GetLinkedStorageAccountResult).value
return AwaitableGetLinkedStorageAccountResult(
data_source_type=__ret__.data_source_type,
id=__ret__.id,
name=__ret__.name,
storage_account_ids=__ret__.storage_account_ids,
type=__ret__.type)
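# Illustrative call (resource names below are made-up placeholders):
# result = get_linked_storage_account(data_source_type="CustomLogs",
#                                     resource_group_name="my-rg",
#                                     workspace_name="my-workspace")
# pulumi.export("storageAccountIds", result.storage_account_ids)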
|
[
"noreply@github.com"
] |
morrell.noreply@github.com
|
4365938a5db92558c7c18ea93d358dfe9ffed5bd
|
0b0d3246d39974cb8faff7d269da2d539415afab
|
/problem_python/p49.py
|
88e5b7863a391d3d438604eab2bbd0cc41c6c173
|
[] |
no_license
|
xionghhcs/leetcode
|
972e7ae4ca56b7100223630b294b5a97ba5dd7e8
|
8bd43dcd995a9de0270b8cea2d9a48df17ffc08b
|
refs/heads/master
| 2020-03-07T17:18:08.465559
| 2019-09-29T11:11:26
| 2019-09-29T11:11:26
| 127,607,564
| 1
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 535
|
py
|
class Solution:
    def groupAnagrams(self, strs):
        import copy
        # Normalize each word to its sorted-character form ('eat' -> 'aet'),
        # so that all anagrams share the same key.
        strs_cp = copy.deepcopy(strs)
        for i, item in enumerate(strs_cp):
            item = list(item)
            item.sort()
            item = ''.join(item)
            strs_cp[i] = item
        # Bucket the original words by their normalized key.
        table = dict()
        for i, item in enumerate(strs_cp):
            if item not in table:
                table[item] = []
            table[item].append(strs[i])
        # Each bucket is one group of anagrams.
        ans = []
        for k in table:
            ans.append(table[k])
        return ans
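# Example:
#   Solution().groupAnagrams(["eat", "tea", "tan", "ate", "nat", "bat"])
#   -> [['eat', 'tea', 'ate'], ['tan', 'nat'], ['bat']]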
|
[
"xionghhcs@163.com"
] |
xionghhcs@163.com
|
6036b4e9fe5bce86b786985656d485851ebc000e
|
78918441c6735b75adcdf20380e5b6431891b21f
|
/api/views.py
|
7db0c4896ac2f5194e3116b1611b1bf43493ac47
|
[] |
no_license
|
dede-20191130/PracticeDjango_2
|
eba40532d5ce8bd4fd13fbd15d94f31942111cfa
|
23593c0fa4c4dff04bd76583e8176e600ca69014
|
refs/heads/master
| 2020-12-23T14:23:08.039363
| 2020-02-27T12:16:00
| 2020-02-27T12:16:00
| 237,176,619
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 1,502
|
py
|
import json
from collections import OrderedDict
from django.http import HttpResponse
from mybook.models import Book
def render_json_response(request, data, status=None):
"""response を JSON で返却"""
json_str = json.dumps(data, ensure_ascii=False, indent=2)
callback = request.GET.get('callback')
if not callback:
        callback = request.POST.get('callback')  # for JSONP sent via POST
if callback:
json_str = "%s(%s)" % (callback, json_str)
response = HttpResponse(json_str, content_type='application/javascript; charset=UTF-8', status=status)
else:
response = HttpResponse(json_str, content_type='application/json; charset=UTF-8', status=status)
return response
def book_list(request):
"""書籍と感想のJSONを返す"""
books = []
for book in Book.objects.all().order_by('id'):
impressions = []
for impression in book.impressions.order_by('id'):
impression_dict = OrderedDict([
('impression_id', impression.id),
('comment', impression.comment),
])
impressions.append(impression_dict)
book_dict = OrderedDict([
('book_id', book.id),
('name', book.name),
('publisher', book.publisher),
('page', book.page),
('impressions', impressions)
])
books.append(book_dict)
data = OrderedDict([('books', books)])
return render_json_response(request, data)
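# Illustrative response shape (field names follow the OrderedDicts above;
# the values are made up):
# {"books": [{"book_id": 1, "name": "...", "publisher": "...", "page": 120,
#             "impressions": [{"impression_id": 1, "comment": "..."}]}]}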
|
[
"1044adad@gmail.com"
] |
1044adad@gmail.com
|
3dba9cc472654cbd43ec5366ccd01fa7bd6f03de
|
9edaf93c833ba90ae9a903aa3c44c407a7e55198
|
/travelport/models/type_start_end_time.py
|
9a4b3e1df4ef31fe8155d79f3f8f6c17d3a73f86
|
[] |
no_license
|
tefra/xsdata-samples
|
c50aab4828b8c7c4448dbdab9c67d1ebc519e292
|
ef027fe02e6a075d8ed676c86a80e9647d944571
|
refs/heads/main
| 2023-08-14T10:31:12.152696
| 2023-07-25T18:01:22
| 2023-07-25T18:01:22
| 222,543,692
| 6
| 1
| null | 2023-06-25T07:21:04
| 2019-11-18T21:00:37
|
Python
|
UTF-8
|
Python
| false
| false
| 1,920
|
py
|
from __future__ import annotations
from dataclasses import dataclass, field
__NAMESPACE__ = "http://www.travelport.com/schema/vehicle_v52_0"
@dataclass
class TypeStartEndTime:
"""
Used to specify earliest and latest pickup/dropoff times for a vehicle.
Parameters
----------
time
The time in 24 hour clock format.
requirement_passed
When true, the time requirement has been met.
mon
tue
wed
thu
fri
sat
sun
"""
class Meta:
name = "typeStartEndTime"
time: None | str = field(
default=None,
metadata={
"name": "Time",
"type": "Attribute",
"required": True,
}
)
requirement_passed: None | bool = field(
default=None,
metadata={
"name": "RequirementPassed",
"type": "Attribute",
}
)
mon: None | bool = field(
default=None,
metadata={
"name": "Mon",
"type": "Attribute",
}
)
tue: None | bool = field(
default=None,
metadata={
"name": "Tue",
"type": "Attribute",
}
)
wed: None | bool = field(
default=None,
metadata={
"name": "Wed",
"type": "Attribute",
}
)
thu: None | bool = field(
default=None,
metadata={
"name": "Thu",
"type": "Attribute",
}
)
fri: None | bool = field(
default=None,
metadata={
"name": "Fri",
"type": "Attribute",
}
)
sat: None | bool = field(
default=None,
metadata={
"name": "Sat",
"type": "Attribute",
}
)
sun: None | bool = field(
default=None,
metadata={
"name": "Sun",
"type": "Attribute",
}
)
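# Illustrative construction (values are made up):
#   t = TypeStartEndTime(time="09:00", requirement_passed=True, mon=True, fri=True)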
|
[
"chris@komposta.net"
] |
chris@komposta.net
|
7a57b9d8fc4353b0116d5eb59291d529fd673296
|
91e98f30ab87f13cbd533c276e24690912690b35
|
/BlaineFry/Phys_707_Model_Selection_v2.py
|
a908f6a9aff841b250489f5e5527751582a51e48
|
[] |
no_license
|
ladosamushia/PHYS707
|
a5a3f4954746722a3c7e530730a7cbd01caeb5f4
|
968e143022d49bfe477590b38e40184e3affed02
|
refs/heads/master
| 2020-07-20T06:38:42.914658
| 2019-12-23T12:27:43
| 2019-12-23T12:27:43
| 206,591,395
| 1
| 4
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 3,413
|
py
|
# -*- coding: utf-8 -*-
"""
Created on Tue Oct 22 09:52:28 2019
@author: Blaine Fry
"""
# import packages
import numpy as np
from numpy import random as rand
from matplotlib import pyplot as plt
#%% generate some data
Npoints = 50
mu_0 = 1.0
sigma_0 = 1.0
data = rand.normal(loc=mu_0,scale=sigma_0,size=Npoints)
# and make a histogram of it
xbounds = [-5,5]
Nbins = 20
Bins = np.linspace(xbounds[0],xbounds[1],num=Nbins+1)
plt.figure(1)
plt.xlim(xbounds[0],xbounds[1])
plt.xlabel('value')
plt.ylabel('Normalized Frequency')
plt.title('Data Histogram')
plt.grid(alpha=0.5)
plt.hist(data,bins=Bins,alpha=0.8,color='m',normed=True,label='Data') # may need to switch normed to density if this line raises an error
plt.legend()
#%% define the models to test out
# first, a general gaussian
def gauss(x,mu,sigma):
return (1/(sigma*np.sqrt(2*np.pi)))*np.exp(-((x-mu)*(x-mu))/(2*sigma*sigma))
# then the models
def Gauss_A(x):
return gauss(x,1.0,1.0)
def Gauss_B(x):
return gauss(x,1.2,1.0)
x = np.linspace(xbounds[0],xbounds[1],num=1000)
plt.plot(x,Gauss_A(x),'c-',label='Gauss A')
plt.plot(x,Gauss_B(x),'b-',label='Gauss B')
plt.legend()
#%% start comparing models
# P({x_i}) = P(x_1)*P(x_2)*P(x_3)*...
# logs would be better... consider revising
Ntrials = 1000
def compare_models(actual_dist,mu_ex,sigma_ex,model_1,model_2): # actual_dist = 'Gauss' or 'Cauchy'
log_ratios = []
for i in range(Ntrials):
        if actual_dist == 'Gauss':
data = rand.normal(loc=mu_ex,scale=sigma_ex,size=Npoints)
else:
data = rand.standard_cauchy(size=Npoints)
# find the probability of the data set given model 1
prob1 = 1
for i in range(Npoints):
prob1 *= model_1(data[i])
# find the probability of the data set given model 2
prob2 = 1
for i in range(Npoints):
prob2 *= model_2(data[i])
log_ratios.append(np.log10(prob1/prob2))
return log_ratios
ratios_A = compare_models('Gauss',1.0,1.0,Gauss_A,Gauss_B) # compare the models if A is true
ratios_B = compare_models('Gauss',1.2,1.0,Gauss_A,Gauss_B) # compare the models if B is true
plt.figure(2)
plt.title('Model Comparison')
plt.ylabel('Normalized Frequency')
plt.xlabel(r'$\log_{10} \left(\frac{f_A}{f_B}\right)$')
plt.hist(ratios_A,bins=Ntrials//10,alpha=0.7,normed=True,label='A is True')
plt.hist(ratios_B,bins=Ntrials//10,alpha=0.7,normed=True,label='B is True')
plt.legend()
#%% Now we want to do the same, but with Cauchy vs Gauss
mu_star = 0
sigma_star = 1
def GAUSS(x):
return gauss(x,mu_star,sigma_star)
def CAUCHY(x):
return 1.0/((np.pi*sigma_star)*(1.0+(((x-mu_star)/sigma_star)**2)))
plt.figure(3)
plt.title('Example Distributions')
x = np.linspace(-5,5,100)
plt.plot(x,GAUSS(x),'b',label='Gauss')
plt.plot(x,CAUCHY(x),'r-',label='Cauchy')
plt.legend()
ratios_Gauss = compare_models('Gauss',0.0,1.0,GAUSS,CAUCHY)
ratios_Cauchy = compare_models('Cauchy',0.0,1.0,GAUSS,CAUCHY)
plt.figure(4)
plt.title('Gauss vs Cauchy')
plt.ylabel('Normalized Frequency')
plt.xlabel(r'$\log_{10} \left(\frac{f_{Gauss}}{f_{Cauchy}}\right)$')
plt.hist(ratios_Gauss,bins=Ntrials//10,alpha=0.7,normed=True,label='Gauss is True')
plt.hist(ratios_Cauchy,bins=Ntrials//10,alpha=0.7,normed=True,label='Cauchy is True')
plt.legend()
|
[
"noreply@github.com"
] |
ladosamushia.noreply@github.com
|
4fed593d5f025735e0ad7e586d3fa993077381f3
|
5e5252812e67393a75830b313cd0d746c912123b
|
/python/Calculating with Functions.py
|
5da5ee3fd6e14cf8e4c65e409919b2cbc840f9a6
|
[] |
no_license
|
Konohayui/Codewars
|
20dfc6b147d2afd68172d5f5824b6c8c8dfa05f1
|
97291462e7b2e42e437355fb676e9152013a5e3a
|
refs/heads/master
| 2021-10-19T18:07:26.973873
| 2019-02-22T22:52:33
| 2019-02-22T22:52:33
| null | 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 915
|
py
|
'''
Modified from Xueyimei's solution
for better understanding.
'''
def zero(f = None):
if not f:
print("first 0")
return 0
else:
print("second 0")
return f(0)
def one(f = None):
if not f:
print("first 1")
return 1
else:
print("second 1")
return f(1)
def two(f = None):
return 2 if not f else f(2)
def three(f = None):
return 3 if not f else f(3)
def four(f = None):
return 4 if not f else f(4)
def five(f = None):
return 5 if not f else f(5)
def six(f = None):
return 6 if not f else f(6)
def seven(f = None):
return 7 if not f else f(7)
def eight(f = None):
return 8 if not f else f(8)
def nine(f = None):
return 9 if not f else f(9)
def plus(y): return lambda x: int(x+y)
def minus(y): return lambda x: int(x-y)
def times(y): return lambda x: int(x*y)
def divided_by(y): return lambda x: int(x/y)
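# Example calls from the kata (a digit word takes an operation, and an
# operation takes a digit word):
#   seven(times(five()))    # -> 35
#   four(plus(nine()))      # -> 13
#   eight(minus(three()))   # -> 5
#   six(divided_by(two()))  # -> 3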
|
[
"noreply@github.com"
] |
Konohayui.noreply@github.com
|
b55a5033285fe85e350014f77edb19070e901038
|
947acace352c4b2e719e94600f7447d1382adfe2
|
/env/Scripts/painter.py
|
3b388efab78cf5cdae0a57d979b2692a59b44692
|
[] |
no_license
|
skconan/autoPlayAtariBreakout
|
ac9de1ef3342e81b57519fe588eb88e9bb6c6695
|
3a7167f31d810951b099c30bfceed0da6dcdf12f
|
refs/heads/master
| 2022-12-21T15:21:44.552250
| 2017-10-10T08:54:18
| 2017-10-10T08:54:18
| 106,190,698
| 2
| 2
| null | 2022-12-11T06:28:01
| 2017-10-08T16:18:04
|
Python
|
UTF-8
|
Python
| false
| false
| 2,215
|
py
|
#!c:\users\skconan\desktop\พี่สอนน้อง\env\scripts\python.exe
#
# The Python Imaging Library
# $Id$
#
# this demo script illustrates pasting into an already displayed
# photoimage. note that the current version of Tk updates the whole
# image every time we paste, so to get decent performance, we split
# the image into a set of tiles.
#
import sys
if sys.version_info[0] > 2:
import tkinter
else:
import Tkinter as tkinter
from PIL import Image, ImageTk
#
# painter widget
class PaintCanvas(tkinter.Canvas):
def __init__(self, master, image):
tkinter.Canvas.__init__(self, master,
width=image.size[0], height=image.size[1])
# fill the canvas
self.tile = {}
self.tilesize = tilesize = 32
xsize, ysize = image.size
for x in range(0, xsize, tilesize):
for y in range(0, ysize, tilesize):
box = x, y, min(xsize, x+tilesize), min(ysize, y+tilesize)
tile = ImageTk.PhotoImage(image.crop(box))
self.create_image(x, y, image=tile, anchor=tkinter.NW)
self.tile[(x, y)] = box, tile
self.image = image
self.bind("<B1-Motion>", self.paint)
def paint(self, event):
xy = event.x - 10, event.y - 10, event.x + 10, event.y + 10
im = self.image.crop(xy)
# process the image in some fashion
im = im.convert("L")
self.image.paste(im, xy)
self.repair(xy)
def repair(self, box):
# update canvas
dx = box[0] % self.tilesize
dy = box[1] % self.tilesize
for x in range(box[0]-dx, box[2]+1, self.tilesize):
for y in range(box[1]-dy, box[3]+1, self.tilesize):
try:
xy, tile = self.tile[(x, y)]
tile.paste(self.image.crop(xy))
except KeyError:
pass # outside the image
self.update_idletasks()
#
# main
if len(sys.argv) != 2:
print("Usage: painter file")
sys.exit(1)
root = tkinter.Tk()
im = Image.open(sys.argv[1])
if im.mode != "RGB":
im = im.convert("RGB")
PaintCanvas(root, im).pack()
root.mainloop()
|
[
"supakit.kr@gmail.com"
] |
supakit.kr@gmail.com
|
b8057bfd90277d7f954e3713e2198773a6ce19d8
|
78ade3f3f334593e601ea78c1e6fd8575f0fe86b
|
/tfx/examples/chicago_taxi_pipeline/taxi_utils_test.py
|
466676a43ebd5b80bfdecd3f72a58490953f907b
|
[
"Apache-2.0"
] |
permissive
|
rmothukuru/tfx
|
82725e20a7d71265f791122ec3ec5d7708443761
|
f46de4be29e96c123e33f90245dc5021d18f8294
|
refs/heads/master
| 2023-01-11T08:50:20.552722
| 2020-11-06T11:11:47
| 2020-11-06T11:11:47
| 279,754,672
| 1
| 1
|
Apache-2.0
| 2020-07-15T03:37:39
| 2020-07-15T03:37:39
| null |
UTF-8
|
Python
| false
| false
| 7,554
|
py
|
# Lint as: python2, python3
# Copyright 2019 Google LLC. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for tfx.examples.chicago_taxi_pipeline.taxi_utils."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import types
import apache_beam as beam
import tensorflow as tf
import tensorflow_model_analysis as tfma
import tensorflow_transform as tft
from tensorflow_transform import beam as tft_beam
from tensorflow_transform.tf_metadata import dataset_metadata
from tensorflow_transform.tf_metadata import dataset_schema
from tensorflow_metadata.proto.v0 import schema_pb2
from tfx.components.trainer import executor as trainer_executor
from tfx.examples.chicago_taxi_pipeline import taxi_utils
from tfx.utils import io_utils
from tfx.utils import path_utils
class TaxiUtilsTest(tf.test.TestCase):
def setUp(self):
super(TaxiUtilsTest, self).setUp()
self._testdata_path = os.path.join(
os.path.dirname(os.path.dirname(os.path.dirname(__file__))),
'components/testdata')
def testUtils(self):
key = 'fare'
xfm_key = taxi_utils._transformed_name(key)
self.assertEqual(xfm_key, 'fare_xf')
def testPreprocessingFn(self):
schema_file = os.path.join(self._testdata_path, 'schema_gen/schema.pbtxt')
schema = io_utils.parse_pbtxt_file(schema_file, schema_pb2.Schema())
feature_spec = taxi_utils._get_raw_feature_spec(schema)
working_dir = self.get_temp_dir()
transform_graph_path = os.path.join(working_dir, 'transform_graph')
transformed_examples_path = os.path.join(
working_dir, 'transformed_examples')
# Run very simplified version of executor logic.
# TODO(kestert): Replace with tft_unit.assertAnalyzeAndTransformResults.
# Generate legacy `DatasetMetadata` object. Future version of Transform
# will accept the `Schema` proto directly.
legacy_metadata = dataset_metadata.DatasetMetadata(
dataset_schema.from_feature_spec(feature_spec))
decoder = tft.coders.ExampleProtoCoder(legacy_metadata.schema)
with beam.Pipeline() as p:
with tft_beam.Context(temp_dir=os.path.join(working_dir, 'tmp')):
examples = (
p
| 'ReadTrainData' >> beam.io.ReadFromTFRecord(
os.path.join(self._testdata_path, 'csv_example_gen/train/*'),
coder=beam.coders.BytesCoder(),
# TODO(b/114938612): Eventually remove this override.
validate=False)
| 'DecodeTrainData' >> beam.Map(decoder.decode))
(transformed_examples, transformed_metadata), transform_fn = (
(examples, legacy_metadata)
| 'AnalyzeAndTransform' >> tft_beam.AnalyzeAndTransformDataset(
taxi_utils.preprocessing_fn))
# WriteTransformFn writes transform_fn and metadata to subdirectories
# tensorflow_transform.SAVED_MODEL_DIR and
# tensorflow_transform.TRANSFORMED_METADATA_DIR respectively.
# pylint: disable=expression-not-assigned
(transform_fn
|
'WriteTransformFn' >> tft_beam.WriteTransformFn(transform_graph_path))
encoder = tft.coders.ExampleProtoCoder(transformed_metadata.schema)
(transformed_examples
| 'EncodeTrainData' >> beam.Map(encoder.encode)
| 'WriteTrainData' >> beam.io.WriteToTFRecord(
os.path.join(transformed_examples_path,
'train/transformed_examples.gz'),
coder=beam.coders.BytesCoder()))
# pylint: enable=expression-not-assigned
# Verify the output matches golden output.
# NOTE: we don't verify that transformed examples match golden output.
expected_transformed_schema = io_utils.parse_pbtxt_file(
os.path.join(
self._testdata_path,
'transform/transform_graph/transformed_metadata/schema.pbtxt'),
schema_pb2.Schema())
transformed_schema = io_utils.parse_pbtxt_file(
os.path.join(transform_graph_path, 'transformed_metadata/schema.pbtxt'),
schema_pb2.Schema())
# Clear annotations so we only have to test main schema.
transformed_schema.ClearField('annotation')
for feature in transformed_schema.feature:
feature.ClearField('annotation')
self.assertEqual(transformed_schema, expected_transformed_schema)
def testTrainerFn(self):
temp_dir = os.path.join(
os.environ.get('TEST_UNDECLARED_OUTPUTS_DIR', self.get_temp_dir()),
self._testMethodName)
schema_file = os.path.join(self._testdata_path, 'schema_gen/schema.pbtxt')
output_dir = os.path.join(temp_dir, 'output_dir')
trainer_fn_args = trainer_executor.TrainerFnArgs(
train_files=os.path.join(self._testdata_path,
'transform/transformed_examples/train/*.gz'),
transform_output=os.path.join(self._testdata_path,
'transform/transform_graph'),
output_dir=output_dir,
serving_model_dir=os.path.join(temp_dir, 'serving_model_dir'),
eval_files=os.path.join(self._testdata_path,
'transform/transformed_examples/eval/*.gz'),
schema_file=schema_file,
train_steps=1,
eval_steps=1,
verbosity='INFO',
base_model=None)
schema = io_utils.parse_pbtxt_file(schema_file, schema_pb2.Schema())
training_spec = taxi_utils.trainer_fn(trainer_fn_args, schema)
estimator = training_spec['estimator']
train_spec = training_spec['train_spec']
eval_spec = training_spec['eval_spec']
eval_input_receiver_fn = training_spec['eval_input_receiver_fn']
self.assertIsInstance(estimator,
tf.estimator.DNNLinearCombinedClassifier)
self.assertIsInstance(train_spec, tf.estimator.TrainSpec)
self.assertIsInstance(eval_spec, tf.estimator.EvalSpec)
self.assertIsInstance(eval_input_receiver_fn, types.FunctionType)
# Test keep_max_checkpoint in RunConfig
self.assertGreater(estimator._config.keep_checkpoint_max, 1)
# Train for one step, then eval for one step.
eval_result, exports = tf.estimator.train_and_evaluate(
estimator, train_spec, eval_spec)
self.assertGreater(eval_result['loss'], 0.0)
self.assertEqual(len(exports), 1)
self.assertGreaterEqual(len(tf.io.gfile.listdir(exports[0])), 1)
# Export the eval saved model.
eval_savedmodel_path = tfma.export.export_eval_savedmodel(
estimator=estimator,
export_dir_base=path_utils.eval_model_dir(output_dir),
eval_input_receiver_fn=eval_input_receiver_fn)
self.assertGreaterEqual(len(tf.io.gfile.listdir(eval_savedmodel_path)), 1)
# Test exported serving graph.
with tf.compat.v1.Session() as sess:
metagraph_def = tf.compat.v1.saved_model.loader.load(
sess, [tf.saved_model.SERVING], exports[0])
self.assertIsInstance(metagraph_def, tf.compat.v1.MetaGraphDef)
if __name__ == '__main__':
tf.test.main()
|
[
"tensorflow-extended-team@google.com"
] |
tensorflow-extended-team@google.com
|
d3f7e5a38010e610526dfe18104e43a8f58375e6
|
c4ecc70400f3c4375dd4b2335673137dd36b72b4
|
/venv/lib/python3.6/site-packages/xero_python/accounting/models/contact_groups.py
|
44c5b00b3e94d4523d3baf225c292a9d849de367
|
[
"MIT"
] |
permissive
|
TippyFlitsUK/FarmXero
|
1bb3496d164d66c940bd3012e36e1763990ff30d
|
881b1e6648e927631b276e66a4c5287e4de2cbc1
|
refs/heads/main
| 2023-07-05T14:49:57.186130
| 2021-08-19T19:33:48
| 2021-08-19T19:33:48
| null | 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 1,766
|
py
|
# coding: utf-8
"""
Accounting API
No description provided (generated by Openapi Generator https://github.com/openapitools/openapi-generator) # noqa: E501
Contact: api@xero.com
Generated by: https://openapi-generator.tech
"""
import re # noqa: F401
from xero_python.models import BaseModel
class ContactGroups(BaseModel):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
"""
Attributes:
openapi_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
openapi_types = {"contact_groups": "list[ContactGroup]"}
attribute_map = {"contact_groups": "ContactGroups"}
def __init__(self, contact_groups=None): # noqa: E501
"""ContactGroups - a model defined in OpenAPI""" # noqa: E501
self._contact_groups = None
self.discriminator = None
if contact_groups is not None:
self.contact_groups = contact_groups
@property
def contact_groups(self):
"""Gets the contact_groups of this ContactGroups. # noqa: E501
:return: The contact_groups of this ContactGroups. # noqa: E501
:rtype: list[ContactGroup]
"""
return self._contact_groups
@contact_groups.setter
def contact_groups(self, contact_groups):
"""Sets the contact_groups of this ContactGroups.
:param contact_groups: The contact_groups of this ContactGroups. # noqa: E501
:type: list[ContactGroup]
"""
self._contact_groups = contact_groups
|
[
"ben.norquay@gmail.com"
] |
ben.norquay@gmail.com
|
d1a3312fd06cdd1c33319651970db66ccf6feaff
|
844501294ca37f1859b9aa0a258e6dd6b1bf2349
|
/snipe/__init__.py
|
ed31be10c2f86531161797372795b4dd3a2ba4bb
|
[
"MIT",
"BSD-2-Clause"
] |
permissive
|
1ts-org/snipe
|
2ac1719bc8f6b3b158c04536464f866c34051253
|
ad84a629e9084f161e0fcf811dc86ba54aaf9e2b
|
refs/heads/master
| 2021-06-04T22:32:36.038607
| 2020-03-27T05:18:36
| 2020-04-05T21:50:42
| 18,642,653
| 6
| 3
|
NOASSERTION
| 2019-10-08T02:02:50
| 2014-04-10T16:01:32
|
Python
|
UTF-8
|
Python
| false
| false
| 1,377
|
py
|
# -*- encoding: utf-8 -*-
# Copyright © 2014 the Snipe contributors
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#
# 1. Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following
# disclaimer in the documentation and/or other materials provided
# with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
# CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
# INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
# MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS
# BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
# TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
# ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
# TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
# THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
# SUCH DAMAGE.
|
[
"kcr@1ts.org"
] |
kcr@1ts.org
|
7b47974d7c6dff9d2d526ea771620b522c940bca
|
5f4da925312f9ad4b4de36e7d1861031d3f03731
|
/app.py
|
964943b9a931d3e43f46b67109b0c953a4cb9dad
|
[] |
no_license
|
geofferyj/PROJECT1
|
1b1c0cad5c3766589af8291b0c2635d15cfd599d
|
89cdfe42e27c3176dbdce79654d1161013e041cf
|
refs/heads/master
| 2021-01-01T12:28:51.167516
| 2020-03-02T15:59:41
| 2020-03-02T15:59:41
| 239,279,791
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 6,679
|
py
|
import os, requests
from functools import wraps
from flask import Flask, session, redirect, render_template, url_for, request, flash, jsonify, make_response, abort
from flask_session import Session
from sqlalchemy import create_engine, exc
from sqlalchemy.orm import scoped_session, sessionmaker
app = Flask(__name__)
dbstring = "postgres://fabeidpsjarnlm:080cd5f8a5a7ce8dd8d6c71863c76924e7a26ebcab39588e6dc637a1741bf496@ec2-3-234-109-123.compute-1.amazonaws.com:5432/de693jkmt9rih3"
# Configure session to use filesystem
app.config['SECRET_KEY'] = "efd432e0aca715610c505c533037b95d6fb22f5692a0d33820ab7b19ef06f513"
app.config["SESSION_PERMANENT"] = False
app.config["SESSION_TYPE"] = "filesystem"
Session(app)
# Set up database
engine = create_engine(dbstring)
db = scoped_session(sessionmaker(bind=engine))
db.execute("""CREATE TABLE IF NOT EXISTS users(uid SERIAL PRIMARY KEY,
name VARCHAR NOT NULL,
username VARCHAR NOT NULL UNIQUE,
email VARCHAR NOT NULL UNIQUE,
password VARCHAR NOT NULL)""")
db.execute("""CREATE TABLE IF NOT EXISTS books(isbn VARCHAR PRIMARY KEY,
title VARCHAR NOT NULL,
author VARCHAR NOT NULL,
year INTEGER NOT NULL)""")
db.execute("""CREATE TABLE IF NOT EXISTS reviews(id SERIAL PRIMARY KEY,
uid INTEGER NOT NULL REFERENCES users(uid),
isbn VARCHAR NOT NULL REFERENCES books(isbn),
review VARCHAR NOT NULL,
rating INTEGER CHECK(rating > 0 AND rating <= 5) NOT NULL,
review_date TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT unique_uid_isbn UNIQUE(uid,isbn)
)""")
db.commit()
def login_required(func):
@wraps(func)
def wrapper(*args, **kwargs):
if 'uid' not in session:
return redirect(url_for('login', next=request.url))
return func(*args, **kwargs)
return wrapper
@app.route("/", methods = ["POST", "GET"])
@login_required
def index():
if request.method == "POST":
search = request.form.get("search")
data = db.execute("SELECT * FROM books WHERE title ILIKE :search OR author ILIKE :search OR isbn ILIKE :search", {"search": '%' + search + '%'}).fetchall()
if data:
return render_template('index.html', data=data)
else:
flash("Sorry No match was found for your search")
return render_template('index.html', data=data)
return render_template('index.html')
@app.route("/login/", methods = ["POST", "GET"])
def login():
if request.method == "POST":
form = request.form
email = form["email"]
password = form["password"]
next_url = form["next"]
user = db.execute("SELECT uid FROM users WHERE email = :email", {"email": email}).fetchone()
if user:
session["uid"] = user.uid
if next_url:
flash("Login successful")
return redirect(next_url)
return redirect(url_for("index"))
else:
flash("user not found")
return redirect(url_for("login"))
return render_template("login.html")
@app.route("/logout/")
def logout():
session.pop("uid", None)
return redirect(url_for("login"))
@app.route("/signup/", methods = ["POST", "GET"])
def signup():
if request.method == "POST":
form = request.form
username = form["username"]
name = form["name"]
email = form["email"]
password = form["password"]
try:
db.execute("INSERT INTO users(name, username, email, password) VALUES(:name, :username, :email, :password)", {
"name": name, "username": username, "email": email, "password": password})
db.commit()
return redirect(url_for('login'))
except exc.IntegrityError:
flash('Username Already exists')
return redirect(url_for('signup'))
return render_template('signup.html')
@app.route("/book/<isbn>/", methods = ["GET", "POST"])
@login_required
def book_details(isbn):
if request.method == "POST":
review = request.form.get("review")
rating = request.form.get("rating")
uid = session["uid"]
try:
db.execute("INSERT INTO reviews (uid, isbn, review, rating) VALUES(:uid, :isbn, :review, :rating)", {"uid": uid, "isbn": isbn, "review": review, "rating": rating})
db.commit()
except exc.IntegrityError:
            flash('You have already reviewed this book')
return redirect(url_for('book_details', isbn=isbn))
reviews = db.execute("SELECT name, review, rating FROM users, reviews WHERE users.uid = reviews.uid AND reviews.isbn = :isbn ORDER BY reviews.review_date", {"isbn":isbn})
details = db.execute("SELECT * FROM books WHERE isbn = :isbn", {"isbn":isbn}).fetchone()
res = requests.get("https://www.goodreads.com/book/review_counts.json", params={"key": "e9hh8mpJf995M7SzMfst5A", "isbns": isbn}).json()
    gr_data = None  # guard against an empty 'books' list in the API response
    for i in res['books']:
        gr_data = i
return render_template("book_details.html", details=details, reviews=reviews, gr_data=gr_data)
@app.route("/api/<isbn>/", methods=['GET'])
def api(isbn):
if request.method == 'GET':
book = db.execute('SELECT * FROM books WHERE isbn = :isbn', {'isbn': isbn}).fetchone()
if book:
rating = db.execute("SELECT ROUND( AVG(rating), 2) FROM reviews WHERE isbn = :isbn", {'isbn':isbn}).fetchone()
review = db.execute("SELECT COUNT(review) FROM reviews WHERE isbn = :isbn", {'isbn':isbn}).fetchone()
for i in rating:
if i:
avg_rating = float(i)
else:
avg_rating = 0
for i in review:
if i:
review_count = int(i)
else:
review_count = 0
return make_response(jsonify({
"title": book.title,
"author": book.author,
"year": book.year,
"isbn": book.isbn,
"review_count": review_count,
"average_score": avg_rating,
}))
else:
return abort(404)
@app.shell_context_processor
def make_shell_context():
return {'db': db}
if __name__ == "__main__":
app.debug = True
app.run()
|
[
"geofferyjoseph1@gmail.com"
] |
geofferyjoseph1@gmail.com
|
38c7b8ae0a1fe7c519e2cb5f2fca8b9894080414
|
bcc00e164c3d20b3c0ac1099741a71491af0e302
|
/.history/neotropical_datasetAPI_20191014144558.py
|
7ff867bf62e30070273816b13537d6b29785d50f
|
[] |
no_license
|
manasa151/Toshokan
|
cff2af75c480bd629b49ce39c17857b316102e45
|
192c7eaf8523e38fa5821affdec91eb60ae5b7ce
|
refs/heads/master
| 2020-08-05T14:56:10.285024
| 2019-10-15T17:07:09
| 2019-10-15T17:07:09
| 212,586,868
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 4,422
|
py
|
import csv
import datetime
import imghdr
import os
import time
from io import BytesIO
from os import makedirs
from os.path import getsize, join
import requests
from PIL import Image
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import Select
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException
from selenium.common.exceptions import NoSuchElementException
def search_bySpecies():
species = 'Semioptera wallacii'
with open('request_species.csv', 'r') as csv_file:
csv_reader = csv.reader(csv_file)
next(csv_reader)
for row in csv_reader:
if species == row[2]:
# makedirs(f'testing/{row[1]}', exist_ok=True)
makedirs(f'{row[0]}/{row[1]}/{row[2]}', exist_ok=True)
# download_species_byOrder(row[0], row[1], row[2])
print(row)
def NEW_download_from_CSV():
with open('request_species.csv', 'r') as csv_file:
csv_reader = csv.reader(csv_file)
next(csv_reader)
for row in csv_reader:
makedirs(f'{row[0]}/{row[1]}/{row[2]}', exist_ok=True)
download_species_byOrder(row[0], row[1], row[2], row[3])
time.sleep(10)
def download_species_byOrder(bird_family, bird_order, bird_species, tax_code):
# initate web driver
ebird_url = f'https://ebird.org/species/{tax_code}'
chromeDriver = 'C:\\Users\\jmentore\\Documents\\Selenium Chrome Driver\\chromedriver.exe'
driver = webdriver.Chrome(executable_path=chromeDriver)
driver.get(ebird_url)
driver.maximize_window()
time.sleep(3)
# Clicks the view all link
view_all = driver.find_element(
By.XPATH, '/html/body/div/div[7]/div/div/div[2]/div[1]/a')
time.sleep(5)
view_all.click()
ids = driver.find_elements_by_tag_name('img')
sci_name = bird_species
family = bird_family
order = bird_order
ebird_counter = 0
file_ext = '.jpg'
    show_more = driver.find_element_by_id('show_more')
    messages = []  # collect errors so one failed download does not abort the loop
    while show_more.is_displayed():
try:
for ii in ids:
download_link = ii.get_attribute('src')
r = requests.get(download_link)
img = Image.open(BytesIO(r.content))
ebird_counter = ebird_counter + 1
img.save(
f'{family}/{order}/{sci_name}/{sci_name}-{ebird_counter}{file_ext}')
time.sleep(5)
print(download_link)
time.sleep(5)
driver.find_element_by_xpath('//*[@id="show_more"]').click()
except Exception as e:
messages.append(e)
time.sleep(1)
if not show_more.is_displayed():
print(f'Total url extracted: {ebird_counter}')
driver.quit()
def post_safe(url, params):
done = False
tries_left = 3
messages = []
while tries_left and not done:
tries_left -= 1
try:
response = requests.post(url, data=params)
done = True
except Exception as e:
messages.append(e)
time.sleep(1)
if not done:
output = "%s\n" % (datetime.now().strftime('%Y-%m-%d %H:%M'),)
output += "requests() failed 3 times:\n"
for m in messages:
output += m+"\n"
print(output)
return done
def test(tax_code):
ebird_url = f'https://ebird.org/species/{tax_code}'
chromeDriver = 'C:\\Users\\jmentore\\Documents\\Selenium Chrome Driver\\chromedriver.exe'
driver = webdriver.Chrome(executable_path=chromeDriver)
driver.get(ebird_url)
driver.maximize_window()
time.sleep(3)
# Clicks the view all link
view_all = driver.find_element(
By.XPATH, '/html/body/div/div[7]/div/div/div[2]/div[1]/a')
time.sleep(5)
view_all.click()
NEW_download_from_CSV()
# search_bySpecies()
test('walsta2')
# search_byTaxcode('zimant1')
|
[
"cornerstoneconnections@gmail.com"
] |
cornerstoneconnections@gmail.com
|
fc9c235e3d4f8607eaf02246e0cb7385120abb75
|
17c280ade4159d4d8d5a48d16ba3989470eb3f46
|
/18/mc/ExoDiBosonResonances/EDBRTreeMaker/test/crab3_analysisM4500_R_0-7.py
|
ae645a54658b7cd536c87077e75805da8681f2d2
|
[] |
no_license
|
chengchen1993/run2_ntuple
|
798ff18489ff5185dadf3d1456a4462e1dbff429
|
c16c2b203c05a3eb77c769f63a0bcdf8b583708d
|
refs/heads/master
| 2021-06-25T18:27:08.534795
| 2021-03-15T06:08:01
| 2021-03-15T06:08:01
| 212,079,804
| 0
| 2
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 2,201
|
py
|
from WMCore.Configuration import Configuration
name = 'WWW'
steam_dir = 'xulyu'
config = Configuration()
config.section_("General")
config.General.requestName = 'M4500_R0-7_off'
config.General.transferLogs = True
config.section_("JobType")
config.JobType.pluginName = 'Analysis'
config.JobType.inputFiles = ['Autumn18_V19_MC_L1FastJet_AK4PFchs.txt','Autumn18_V19_MC_L2Relative_AK4PFchs.txt','Autumn18_V19_MC_L3Absolute_AK4PFchs.txt','Autumn18_V19_MC_L1FastJet_AK8PFchs.txt','Autumn18_V19_MC_L2Relative_AK8PFchs.txt','Autumn18_V19_MC_L3Absolute_AK8PFchs.txt','Autumn18_V19_MC_L1FastJet_AK8PFPuppi.txt','Autumn18_V19_MC_L2Relative_AK8PFPuppi.txt','Autumn18_V19_MC_L3Absolute_AK8PFPuppi.txt','Autumn18_V19_MC_L1FastJet_AK4PFPuppi.txt','Autumn18_V19_MC_L2Relative_AK4PFPuppi.txt','Autumn18_V19_MC_L3Absolute_AK4PFPuppi.txt' ]
#config.JobType.inputFiles = ['PHYS14_25_V2_All_L1FastJet_AK4PFchs.txt','PHYS14_25_V2_All_L2Relative_AK4PFchs.txt','PHYS14_25_V2_All_L3Absolute_AK4PFchs.txt','PHYS14_25_V2_All_L1FastJet_AK8PFchs.txt','PHYS14_25_V2_All_L2Relative_AK8PFchs.txt','PHYS14_25_V2_All_L3Absolute_AK8PFchs.txt']
# Name of the CMSSW configuration file
#config.JobType.psetName = 'bkg_ana.py'
config.JobType.psetName = 'analysis.py'
#config.JobType.allowUndistributedCMSSW = True
config.JobType.allowUndistributedCMSSW = True
config.section_("Data")
#config.Data.inputDataset = '/WJetsToLNu_13TeV-madgraph-pythia8-tauola/Phys14DR-PU20bx25_PHYS14_25_V1-v1/MINIAODSIM'
config.Data.inputDataset = '/WkkToWRadionToWWW_M4500-R0-7_TuneCP5_13TeV-madgraph/RunIIAutumn18MiniAOD-102X_upgrade2018_realistic_v15-v1/MINIAODSIM'
#config.Data.inputDBS = 'global'
config.Data.inputDBS = 'global'
config.Data.splitting = 'LumiBased'
config.Data.unitsPerJob =5
config.Data.totalUnits = -1
config.Data.publication = False
config.Data.outLFNDirBase='/store/group/phys_b2g/chench/cc/'#chench/'# = '/store/group/dpg_trigger/comm_trigger/TriggerStudiesGroup/STEAM/' + steam_dir + '/' + name + '/'
# This string is used to construct the output dataset name
config.Data.outputDatasetTag = 'M4500_R0-7_off'
config.section_("Site")
# Where the output files will be transmitted to
config.Site.storageSite = 'T2_CH_CERN'
|
[
"c.chen@cern.ch"
] |
c.chen@cern.ch
|
10d919ed0109a6401f4dd3ac01502930a7d4097e
|
80383bd5f39fd7eacff50f4b0fcc3c5e7c8329e0
|
/reddwarf/tests/api/instances_delete.py
|
9bef213d56cfc68c4ac1598aaffd3bb0d1ab7020
|
[] |
no_license
|
imsplitbit/reddwarf
|
646409a2365459515b37f70445c0acb22610898d
|
2f50d9a12a390c6016aad6a612a14bd6c34b66fd
|
refs/heads/master
| 2020-05-19T15:45:26.733102
| 2013-01-08T21:37:10
| 2013-01-08T21:37:10
| 2,270,590
| 0
| 1
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 4,257
|
py
|
import time
from proboscis import after_class
from proboscis import before_class
from proboscis import test
from proboscis.asserts import *
from proboscis.decorators import time_out
from reddwarfclient import exceptions
from reddwarf.tests.util import create_dbaas_client
from reddwarf.tests.util import poll_until
from reddwarf.tests.util import test_config
from reddwarf.tests.util.users import Requirements
class TestBase(object):
def set_up(self):
reqs = Requirements(is_admin=True)
self.user = test_config.users.find_user(reqs)
self.dbaas = create_dbaas_client(self.user)
def create_instance(self, name, size=1):
result = self.dbaas.instances.create(name, 1, {'size': size}, [], [])
return result.id
def wait_for_instance_status(self, instance_id, status="ACTIVE"):
poll_until(lambda: self.dbaas.instances.get(instance_id),
lambda instance: instance.status == status,
time_out=10)
def wait_for_instance_task_status(self, instance_id, description):
poll_until(lambda: self.dbaas.management.show(instance_id),
lambda instance: instance.task_description == description,
time_out=10)
def is_instance_deleted(self, instance_id):
while True:
try:
instance = self.dbaas.instances.get(instance_id)
except exceptions.NotFound:
return True
time.sleep(.5)
def get_task_info(self, instance_id):
instance = self.dbaas.management.show(instance_id)
return instance.status, instance.task_description
def delete_instance(self, instance_id, assert_deleted=True):
instance = self.dbaas.instances.get(instance_id)
instance.delete()
if assert_deleted:
assert_true(self.is_instance_deleted(instance_id))
def delete_errored_instance(self, instance_id):
self.wait_for_instance_status(instance_id, 'ERROR')
status, desc = self.get_task_info(instance_id)
assert_equal(status, "ERROR")
self.delete_instance(instance_id)
@test(runs_after_groups=["services.initialize"],
groups=['dbaas.api.instances.delete'])
class ErroredInstanceDelete(TestBase):
"""
Test that an instance in an ERROR state is actually deleted when delete
is called.
"""
@before_class
def set_up(self):
"""Create some flawed instances."""
super(ErroredInstanceDelete, self).set_up()
# Create an instance that fails during server prov.
self.server_error = self.create_instance('test_SERVER_ERROR')
# Create an instance that fails during volume prov.
self.volume_error = self.create_instance('test_VOLUME_ERROR', size=9)
# Create an instance that fails during DNS prov.
#self.dns_error = self.create_instance('test_DNS_ERROR')
        # Create an instance that fails while it's being deleted the first time.
self.delete_error = self.create_instance('test_ERROR_ON_DELETE')
@test
@time_out(20)
def delete_server_error(self):
self.delete_errored_instance(self.server_error)
@test
@time_out(20)
def delete_volume_error(self):
self.delete_errored_instance(self.volume_error)
@test(enabled=False)
@time_out(20)
def delete_dns_error(self):
self.delete_errored_instance(self.dns_error)
@test
@time_out(20)
def delete_error_on_delete_instance(self):
id = self.delete_error
self.wait_for_instance_status(id, 'ACTIVE')
self.wait_for_instance_task_status(id, 'No tasks for the instance.')
instance = self.dbaas.management.show(id)
assert_equal(instance.status, "ACTIVE")
assert_equal(instance.task_description, 'No tasks for the instance.')
# Try to delete the instance. This fails the first time due to how
# the test fake is setup.
self.delete_instance(id, assert_deleted=False)
instance = self.dbaas.management.show(id)
assert_equal(instance.status, "SHUTDOWN")
assert_equal(instance.task_description, "Deleting the instance.")
# Try a second time. This will succeed.
self.delete_instance(id)
|
[
"tim.simpson@rackspace.com"
] |
tim.simpson@rackspace.com
|
d6f9aae369f645e06dd5a81e0da92deb03d22e25
|
350d6b7246d6ef8161bdfccfb565b8671cc4d701
|
/Insert Interval.py
|
fdec8d8c9a46850b59d3f652b43f6c85e069796d
|
[] |
no_license
|
YihaoGuo2018/leetcode_python_2
|
145d5fbe7711c51752b2ab47a057b37071d2fbf7
|
2065355198fd882ab90bac6041c1d92d1aff5c65
|
refs/heads/main
| 2023-02-14T14:25:58.457991
| 2021-01-14T15:57:10
| 2021-01-14T15:57:10
| 329,661,893
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 1,191
|
py
|
class Solution:
def insert(self, intervals: 'List[Interval]', newInterval: 'Interval') -> 'List[Interval]':
# init data
new_start, new_end = newInterval
idx, n = 0, len(intervals)
output = []
# add all intervals starting before newInterval
while idx < n and new_start > intervals[idx][0]:
output.append(intervals[idx])
idx += 1
# add newInterval
# if there is no overlap, just add the interval
if not output or output[-1][1] < new_start:
output.append(newInterval)
# if there is an overlap, merge with the last interval
else:
output[-1][1] = max(output[-1][1], new_end)
# add next intervals, merge with newInterval if needed
while idx < n:
interval = intervals[idx]
start, end = interval
idx += 1
# if there is no overlap, just add an interval
if output[-1][1] < start:
output.append(interval)
# if there is an overlap, merge with the last interval
else:
output[-1][1] = max(output[-1][1], end)
return output
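# Example: Solution().insert([[1, 3], [6, 9]], [2, 5]) -> [[1, 5], [6, 9]]
# ([2, 5] overlaps [1, 3], so the two are merged before [6, 9] is appended).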
|
[
"yihao_guo@gwmail.gwu.edu"
] |
yihao_guo@gwmail.gwu.edu
|
139a215386fd73c93520824d0d8e1a4d7e908698
|
7dbbde919349fdc3651eff1a7be744aed25eea30
|
/scripts/multiprocessing_example.py
|
13f78ecf32de52baab2847baa7991b1bf9d173e0
|
[] |
no_license
|
adrn/scicoder-notebooks
|
06ca10a12c4f89a5c2e4062c70b6e4eb3bc0b1b0
|
7c8a5850200c3fb78aca1c336af7ed47ad52c52a
|
refs/heads/master
| 2021-03-12T21:37:39.597404
| 2013-07-08T04:37:23
| 2013-07-08T04:37:23
| 11,226,565
| 1
| 1
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 662
|
py
|
# coding: utf-8
""" Demonstration of the built-in multiprocessing package """
from __future__ import division, print_function
__author__ = "adrn <adrn@astro.columbia.edu>"
# Standard library
import multiprocessing
# Define a 'task' or 'worker' function -- something function that you
# need to call over and over and over
def task(x):
return x**2
if __name__ == "__main__":
# a pool is like a magical box that knows how to execute things on
# multiple CPUs
pool = multiprocessing.Pool(processes=4)
# this will run the function task() on all values in range(10000)
result = pool.map(task, range(10000))
print(result)
|
[
"adrian.prw@gmail.com"
] |
adrian.prw@gmail.com
|
5a41960a55928dd63bb70c8a7008554e17a3496e
|
fa04e703556632fb6f513181070a496294b4f0dd
|
/patchnotifyer.py
|
e9804345e8e3f1936beb087a831e11a4efd27754
|
[] |
no_license
|
mhagander/patchnotifyer
|
a377d741c3837cbe6e5c8026ceced9a0a4b4c056
|
14c9b1d14780460645807227176db01aeef18267
|
refs/heads/master
| 2021-01-11T15:01:35.217795
| 2017-01-31T10:03:56
| 2017-01-31T10:03:56
| 80,282,835
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 2,583
|
py
|
#!/usr/bin/env python3
import argparse
from io import StringIO
import socket
import smtplib
from email.mime.text import MIMEText
import apt_pkg
class _DevNullProgress(object):
# Need this class to make the apt output not go to the console
def update(self, percent = None):
pass
def done(self, item = None):
pass
def stop(self):
pass
def pulse(self, owner = None):
pass
def update_status(self, a, b, c, d):
pass
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Patch status notifyer")
parser.add_argument('--fromaddr', type=str, help='From email address')
parser.add_argument('--toaddr', type=str, help='To email address')
parser.add_argument('--subject', type=str, help='Subject', default="Patch status on {0}".format(socket.gethostname()))
    parser.add_argument('--ignorepkg', type=str, nargs='+', default=[],
                        help='Ignore packages by exact name')
args = parser.parse_args()
if args.fromaddr and not args.toaddr:
parser.error("Can't specify from without to")
if args.toaddr and not args.fromaddr:
parser.error("Can't specify to without from")
status = StringIO()
apt_pkg.init()
# Turn off cache to avoid concurrency issues
apt_pkg.config.set("Dir::Cache::pkgcache","")
# "apt-get update"
sl = apt_pkg.SourceList()
sl.read_main_list()
tmpcache = apt_pkg.Cache(_DevNullProgress())
tmpcache.update(_DevNullProgress(), sl)
# Now do the actual check
cache = apt_pkg.Cache(_DevNullProgress())
depcache = apt_pkg.DepCache(cache)
depcache.read_pinfile()
depcache.init()
if depcache.broken_count > 0:
status.write("Depcache broken count is {0}\n\n".format(depcache.broken_count))
depcache.upgrade(True)
if depcache.del_count > 0:
status.write("Dist-upgrade generated {0} pending package removals!\n\n".format(depcache.del_count))
for pkg in cache.packages:
if depcache.marked_install(pkg) or depcache.marked_upgrade(pkg):
if pkg.name in args.ignorepkg:
continue
status.write("Package {0} requires an update\n".format(pkg.name))
if status.tell() > 0:
if args.fromaddr:
# Send email!
msg = MIMEText(status.getvalue())
msg['Subject'] = args.subject
msg['From'] = args.fromaddr
msg['To'] = args.toaddr
s = smtplib.SMTP('localhost')
s.send_message(msg)
s.quit()
else:
print(status.getvalue())
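# Example invocation (illustrative; the addresses and package names below are
# placeholders, not values from the original script):
#   ./patchnotifyer.py --fromaddr root@example.com --toaddr admin@example.com \
#       --ignorepkg linux-image-amd64 linux-headers-amd64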
|
[
"magnus@hagander.net"
] |
magnus@hagander.net
|
eb86c5d2bcdc85721b23e67fb5747812f0c969e5
|
a6e4a6f0a73d24a6ba957277899adbd9b84bd594
|
/sdk/python/pulumi_azure_native/providerhub/v20201120/get_skus_nested_resource_type_first.py
|
48f43b29f82376eb8e0e141895212a4bf923acac
|
[
"BSD-3-Clause",
"Apache-2.0"
] |
permissive
|
MisinformedDNA/pulumi-azure-native
|
9cbd75306e9c8f92abc25be3f73c113cb93865e9
|
de974fd984f7e98649951dbe80b4fc0603d03356
|
refs/heads/master
| 2023-03-24T22:02:03.842935
| 2021-03-08T21:16:19
| 2021-03-08T21:16:19
| null | 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 4,018
|
py
|
# coding=utf-8
# *** WARNING: this file was generated by the Pulumi SDK Generator. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union
from ... import _utilities, _tables
from . import outputs
__all__ = [
'GetSkusNestedResourceTypeFirstResult',
'AwaitableGetSkusNestedResourceTypeFirstResult',
'get_skus_nested_resource_type_first',
]
@pulumi.output_type
class GetSkusNestedResourceTypeFirstResult:
def __init__(__self__, id=None, name=None, properties=None, type=None):
if id and not isinstance(id, str):
raise TypeError("Expected argument 'id' to be a str")
pulumi.set(__self__, "id", id)
if name and not isinstance(name, str):
raise TypeError("Expected argument 'name' to be a str")
pulumi.set(__self__, "name", name)
if properties and not isinstance(properties, dict):
raise TypeError("Expected argument 'properties' to be a dict")
pulumi.set(__self__, "properties", properties)
if type and not isinstance(type, str):
raise TypeError("Expected argument 'type' to be a str")
pulumi.set(__self__, "type", type)
@property
@pulumi.getter
def id(self) -> str:
"""
Fully qualified resource ID for the resource. Ex - /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}
"""
return pulumi.get(self, "id")
@property
@pulumi.getter
def name(self) -> str:
"""
The name of the resource
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def properties(self) -> 'outputs.SkuResourceResponseProperties':
return pulumi.get(self, "properties")
@property
@pulumi.getter
def type(self) -> str:
"""
The type of the resource. E.g. "Microsoft.Compute/virtualMachines" or "Microsoft.Storage/storageAccounts"
"""
return pulumi.get(self, "type")
class AwaitableGetSkusNestedResourceTypeFirstResult(GetSkusNestedResourceTypeFirstResult):
# pylint: disable=using-constant-test
def __await__(self):
if False:
yield self
return GetSkusNestedResourceTypeFirstResult(
id=self.id,
name=self.name,
properties=self.properties,
type=self.type)
def get_skus_nested_resource_type_first(nested_resource_type_first: Optional[str] = None,
provider_namespace: Optional[str] = None,
resource_type: Optional[str] = None,
sku: Optional[str] = None,
opts: Optional[pulumi.InvokeOptions] = None) -> AwaitableGetSkusNestedResourceTypeFirstResult:
"""
Use this data source to access information about an existing resource.
:param str nested_resource_type_first: The first child resource type.
:param str provider_namespace: The name of the resource provider hosted within ProviderHub.
:param str resource_type: The resource type.
:param str sku: The SKU.
"""
__args__ = dict()
__args__['nestedResourceTypeFirst'] = nested_resource_type_first
__args__['providerNamespace'] = provider_namespace
__args__['resourceType'] = resource_type
__args__['sku'] = sku
if opts is None:
opts = pulumi.InvokeOptions()
if opts.version is None:
opts.version = _utilities.get_version()
__ret__ = pulumi.runtime.invoke('azure-native:providerhub/v20201120:getSkusNestedResourceTypeFirst', __args__, opts=opts, typ=GetSkusNestedResourceTypeFirstResult).value
return AwaitableGetSkusNestedResourceTypeFirstResult(
id=__ret__.id,
name=__ret__.name,
properties=__ret__.properties,
type=__ret__.type)
|
[
"noreply@github.com"
] |
MisinformedDNA.noreply@github.com
|
147eabb86c23fd8281b4ba09190388f7a3989371
|
549afd4c4c5c9b401a2643210d6a4d75b7aaa308
|
/src/optlang_operations.py
|
17e0c0f49974128cd21393d536cc7587fb94db62
|
[] |
no_license
|
OGalOz/FBA_Learn_Python
|
ff6c4ab5335b8f0cbfead5dc8da7392429503235
|
2df9b6fd128db8af1f97f6d12e9ab34ec5268a49
|
refs/heads/master
| 2023-05-27T03:06:27.671342
| 2019-10-07T18:19:34
| 2019-10-07T18:19:34
| 210,938,075
| 0
| 2
| null | 2019-10-03T21:48:04
| 2019-09-25T20:49:44
|
Python
|
UTF-8
|
Python
| false
| false
| 2,749
|
py
|
# In this file we make use of optlang
# More info here: https://optlang.readthedocs.io/en/latest/
from optlang import Model, Variable, Constraint, Objective
# You can declare the symbolic variables here with upper and lower bounds:
'''
x1 = Variable('x1', lb=0, ub = 100)
'''
# S is the stoichiometric matrix, passed in as a numpy array
# the objective function is the last variable (v)
# the upper bound needs to be an int
# there is one constraint per compound (one per row of S)
# flux_bounds has one entry per reaction; it is a 2-D list:
# flux_bounds = [[lower bounds], [upper bounds]]
def stoichiomatrix_solution(S, flux_bounds, objective_index, objective_direction):
#We make a variable 'v-(index)' for each reaction (column) in the matrix:
variables = make_variables(S, flux_bounds)
constraints = make_constraints(S, variables)
obj = make_objective(objective_index, objective_direction, variables)
    model = Model(name='Stoichiomatrix')
model.objective = obj
model.add(constraints)
status = model.optimize()
return [status, model]
# This function makes the variables
def make_variables(S, flux_bounds):
variables = []
row_1 = S[0]
for i in range(len(row_1)):
v = Variable('v-' + str(i+1), lb = flux_bounds[0][i], ub = flux_bounds[1][i])
variables.append(v)
print(variables)
return variables
def make_constraints(S, variables):
#Creating the constraints, one per compound:
constraints = []
for row in S:
constraint_sum = 0
for i in range(len(row)):
constraint_sum += row[i]*variables[i]
c = Constraint(constraint_sum, lb=0, ub =0)
constraints.append(c)
return constraints
def make_objective(objective_index, objective_direction, variables):
#The objective is just to either Maximize or Minimize a Variable.
obj_var = variables[objective_index]
print("Objective variable name: " + obj_var.name)
obj = Objective(variables[objective_index], direction = objective_direction)
return obj
def model_print(model):
print("status:", model.status)
#print("objective variable name: " + model.objective.name)
print("objective value:", model.objective.value)
print("----------")
print(model.variables.items())
for var_name, var in model.variables.items():
print(var_name, "=", var.primal)
def make_fluxes(model):
#fluxes holds the names and their values, then we sort by that and make the fluxes array
fluxes = []
for var_name, var in model.variables.items():
fluxes.append([int(var_name[2:]),var.primal])
    fluxes.sort(key=lambda flux: flux[0])
flux_array = []
for flux in fluxes:
flux_array.append(flux[1])
return flux_array
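# Illustrative usage sketch (assumes numpy is installed alongside optlang):
# a toy 2-compound, 3-reaction chain (in -> A -> B -> out), maximizing the
# flux of the last reaction. The toy matrix and bounds are assumptions for
# demonstration, not values from the original project.
if __name__ == "__main__":
    import numpy as np
    S = np.array([[1, -1, 0],
                  [0, 1, -1]])
    flux_bounds = [[0, 0, 0], [10, 10, 10]]
    status, model = stoichiomatrix_solution(S, flux_bounds, 2, 'max')
    model_print(model)
    print(make_fluxes(model))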
|
[
"ogaloz@lbl.gov"
] |
ogaloz@lbl.gov
|
1f72205dad514935455fd3be194807f4ebba7730
|
7a10bf8748c7ce9c24c5461c21b5ebf420f18109
|
/ml_training/PythonCode/P4_Pandas_Basics.py
|
76770dbb25d18994fa84bd5163e320d499c538b4
|
[] |
no_license
|
VishalChak/machine_learning
|
aced4b4bf65bbbd08c966a2f028f217a918186d5
|
c6e29abe0509a43713f35ebf53da29cd1f0314c1
|
refs/heads/master
| 2021-06-15T07:13:56.583097
| 2019-10-05T06:01:58
| 2019-10-05T06:01:58
| 133,164,656
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 3,051
|
py
|
# Import Library
import pandas as pd
# Read data from a url
url = "https://vincentarelbundock.github.io/Rdatasets/csv/datasets/HairEyeColor.csv"
df = pd.read_csv(url)
# Type of the df object
type(df)
# Column names
list(df)
# Show first few rows
df.head()
# Show last few rows
df.tail()
# Data type of each column
df.dtypes
# Return number of columns and rows of dataframe
df.shape
# Number of rows
len(df.index)
# Number of columns
len(df.columns)
# Basic statistics
df.describe()
# Extract first three rows
df[0:3]
# or
#df.iloc[:3]
# Filter for black hair
#df[df['Hair']=="Black"]
# or
df.query("Hair =='Black'")
# Filter for males who have black hair
#df[(df['Hair']=="Black") & (df["Sex"]=="Male")]
# or
df.query("Hair == 'Black' & Sex =='Male'")
# Exercise: write a program to filter for those who have brown eyes or black hair
# Answer:
z = df[(df['Hair']=="Black") | (df["Eye"]=="Brown")]
# or
z = df.query("Hair == 'Black' | Eye =='Brown'")
z.head(6)
# Filter for eye color of blue, hazel and green
df[df.Eye.isin(['Blue','Hazel','Green'])].head()
# Select one column
df[["Eye"]].head()
# or
df.Eye.head()
# Select two columns
df[["Eye","Sex"]].head()
# Unique Eye colors
df["Eye"].unique()
# Maximum of the "Freq" column
df.Freq.max()
# Call functions on multiple columns
import numpy as np
pd.DataFrame({'Max_freq': [df.Freq.max()], 'Min_freq': [df.Freq.min()], 'Std_freq': [np.std(df.Freq)]})
# Maximum Frequency by Sex
df.groupby("Sex").agg({"Freq":"max"})
#Display max Freq by color
df.groupby("Eye").agg({"Freq":"max"})
# Count by Eye color and Sex
df.groupby(["Eye","Sex"]).agg({"Freq":"count"}).rename(columns={"Freq":"Count"})
# Call functions for grouping
df.assign(Gt50 = (df.Freq > 50)).groupby("Gt50").agg({"Gt50":"count"}).rename(columns ={"Gt50":"Count"})
# Do the analysis on selected rows only
pd.DataFrame({'Max_freq': [df[0:10].Freq.max()], 'Min_freq': [df[0:10].Freq.min()], 'Std_freq': [np.std(df[0:10].Freq)]})
# Remove a column
df.drop('Unnamed: 0', 1).head()
# Return the first occurance
df.query("Eye == 'Blue'")[:1]
# Return the last occurance
df.query("Eye == 'Blue'")[-1:]
# Return a count
df[df.Eye.isin(['Blue','Hazel']) & (df.Sex=="Male")].shape[0]
# Count for each group
df[df.Eye.isin(['Blue','Hazel']) & (df.Sex=="Male")].groupby(["Eye","Sex"]).agg({"Freq":"count"}).rename(columns={"Freq":"Count"})
# Order in ascending order
df.sort_values(by='Freq').tail(6)
# Order in descending order
df.sort_values(by='Freq', ascending = False).tail(6)
# "Freq" in descending and "Eye" in ascending
df.sort_values(by=['Freq','Eye'], ascending = [False,True]).tail(6)
# Rename columns
df.rename(columns = {"Freq":"Frequency","Eye":"Eye_Color"}).tail()
# Unique rows
df[["Eye","Sex"]].drop_duplicates()
# Create new column
df.assign(Eye_Hair =df.Eye + df.Hair)[["Eye","Hair","Eye_Hair"]].head()
|
[
"vishalbabu.in@gmail.com"
] |
vishalbabu.in@gmail.com
|
d332b3dd09b91c5e952ba6af93587d2050fea535
|
f20d9ff8aafb8ef2d3e4a14b1d055be7c1a1e0db
|
/create_database.py
|
1c4848520b8d61043baad8f24786a792f0988323
|
[] |
no_license
|
HopeCheung/menu_api
|
25fee2d807e86245bc547c753a8bc156d99b9962
|
bfb410bfe5cd686e237f937f64bac198e178c75e
|
refs/heads/master
| 2020-05-09T20:59:49.467719
| 2019-04-15T17:42:24
| 2019-04-15T17:42:24
| 181,426,747
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 683
|
py
|
import os
import sqlite3
conn = sqlite3.connect("menu.db")
conn.execute('create table menu (id int, name varchar(20), item varchar(20))')
cur = conn.cursor()
cur.execute('insert into menu values(1, "Lunch Specials", "Chicken")')
cur.execute('insert into menu values(2, "Dinner Specials", "Pork")')
cur.execute('insert into menu values(3, "Specials of the day", "Salad")')
cur.execute('insert into menu values(1, "Lunch Specials", "Beef")')
cur.execute('insert into menu values(2, "Dinner Specials", "Sheep")')
cur.execute('insert into menu values(3, "Specials of the day", "Vegetables")')
conn.commit()
cur = conn.cursor()
cur.execute("select * from menu")
print(cur.fetchall())
|
[
"568038810@qq.com"
] |
568038810@qq.com
|
8543373d98c1f04b791fbc898524b98731cd31c2
|
490fad8eb8856c16b3d1d2e1ac3d00f5bd1280ba
|
/langsea/managers/category_manager.py
|
5e1600baac8d6797c904a7ef17ec7107403b641f
|
[
"MIT"
] |
permissive
|
blancheta/langsea
|
ebd12b16ff1b36d4292f527ec58f23b93deecbe7
|
e268b43fb94e3234ac161f2e5d9600d51360e4b3
|
refs/heads/master
| 2020-12-25T14:14:49.029568
| 2016-08-20T16:31:00
| 2016-08-20T16:31:00
| 66,143,438
| 1
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 591
|
py
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import requests
from langsea.models.category import Category
class CategoryManager:
categories_api_url = 'http://www.langsea.org/api/categories/'
def all(self):
response = requests.get(self.categories_api_url)
if response.ok:
categories = []
for category_json in response.json():
categories.append(Category(*category_json))
return categories
def get(self, name):
response = requests.get(self.categories_api_url + name)
category = None
if response.ok:
category = Category(*response.json())
return category
|
[
"alexandreblanchet44@gmail.com"
] |
alexandreblanchet44@gmail.com
|
f8fb78b34913903cdd4e7dbecf2b63afad70b866
|
b19a1baf69d1f7ba05a02ace7dfcba15c8d47cfb
|
/my_random.py
|
1a36aadaabd3e30ae66d3858d940a5fa861897f8
|
[] |
no_license
|
MarkHofstetter/20191018-wifi-python
|
20ed5de1cf28996902cecf7cd681d054e0d06739
|
7427b896783059a77c541e95df851a492ef5ebb9
|
refs/heads/master
| 2020-08-15T03:43:42.964992
| 2019-10-28T14:39:17
| 2019-10-28T14:39:17
| 215,275,139
| 2
| 2
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 991
|
py
|
# mental arithmetic
# the user is shown 2 different random numbers, each in the range 1 - 10
# the user has to multiply them
# and the program checks whether the task was solved correctly
# extra:
# pose 10 different tasks and keep track of
# how many were answered right and wrong
import random
from util import user_input_positive_number
# import util
wrong = 0
right = 0
rounds = user_input_positive_number(question='How many rounds')
for i in range(0, rounds):
    print(i)
    m1 = random.randint(1, 10)
    m2 = random.randint(1, 10)
    print(str(m1) + ' times ' + str(m2) + ' equals?')
    product = m1 * m2
    user_input = user_input_positive_number('Please enter a solution: ')
    if product == user_input:
        print("Correct!")
        right += 1
    else:
        print("Wrong!")
        wrong += 1
print('Correct: ' + str(right))
print('Wrong: ' + str(wrong))
if rounds > 0:  # guard against division by zero when no rounds were played
    print('Correct {:0.2f} %'.format(right / rounds * 100))
|
[
"mark@hofstetter.at"
] |
mark@hofstetter.at
|
4befe135006f88eaa43f75a4a79d805a6d066eaa
|
6188f8ef474da80c9e407e8040de877273f6ce20
|
/examples/docs_snippets/docs_snippets/guides/dagster/asset_tutorial/non_argument_deps.py
|
9d15421d4ee2978a194a43bb4d650ad0f3abb1eb
|
[
"Apache-2.0"
] |
permissive
|
iKintosh/dagster
|
99f2a1211de1f3b52f8bcf895dafaf832b999de2
|
932a5ba35263deb7d223750f211c2ddfa71e6f48
|
refs/heads/master
| 2023-01-24T15:58:28.497042
| 2023-01-20T21:51:35
| 2023-01-20T21:51:35
| 276,410,978
| 1
| 0
|
Apache-2.0
| 2020-07-01T15:19:47
| 2020-07-01T15:13:56
| null |
UTF-8
|
Python
| false
| false
| 2,104
|
py
|
"""isort:skip_file"""
import csv
import requests
from dagster import asset
@asset
def cereals():
response = requests.get("https://docs.dagster.io/assets/cereal.csv")
lines = response.text.split("\n")
return [row for row in csv.DictReader(lines)]
@asset
def nabisco_cereals(cereals):
"""Cereals manufactured by Nabisco"""
return [row for row in cereals if row["mfr"] == "N"]
@asset
def cereal_protein_fractions(cereals):
"""
For each cereal, records its protein content as a fraction of its total mass.
"""
result = {}
for cereal in cereals:
total_grams = float(cereal["weight"]) * 28.35
result[cereal["name"]] = float(cereal["protein"]) / total_grams
return result
@asset
def highest_protein_nabisco_cereal(nabisco_cereals, cereal_protein_fractions):
"""
The name of the nabisco cereal that has the highest protein content.
"""
sorted_by_protein = sorted(
nabisco_cereals, key=lambda cereal: cereal_protein_fractions[cereal["name"]]
)
return sorted_by_protein[-1]["name"]
# cereal_ratings_zip_start
import urllib.request
@asset
def cereal_ratings_zip() -> None:
urllib.request.urlretrieve(
"https://dagster-git-tutorial-nothing-elementl.vercel.app/assets/cereal-ratings.csv.zip",
"cereal-ratings.csv.zip",
)
# cereal_ratings_zip_end
# cereal_ratings_csv_start
import zipfile
@asset(non_argument_deps={"cereal_ratings_zip"})
def cereal_ratings_csv() -> None:
with zipfile.ZipFile("cereal-ratings.csv.zip", "r") as zip_ref:
zip_ref.extractall(".")
# cereal_ratings_csv_end
# nabisco_cereal_ratings_start
@asset(non_argument_deps={"cereal_ratings_csv"})
def nabisco_cereal_ratings(nabisco_cereals):
with open("cereal-ratings.csv", "r") as f:
cereal_ratings = {
row["name"]: row["rating"] for row in csv.DictReader(f.readlines())
}
result = {}
for nabisco_cereal in nabisco_cereals:
name = nabisco_cereal["name"]
result[name] = cereal_ratings[name]
return result
# nabisco_cereal_ratings_end
|
[
"noreply@github.com"
] |
iKintosh.noreply@github.com
|
36d309841dbe245ef49c789e87285f004a3dd0c7
|
169e75df163bb311198562d286d37aad14677101
|
/tensorflow/python/keras/_impl/keras/layers/gru_test.py
|
48e7e14f5ab73b534ab0d1c765ad2572b2930b2b
|
[
"Apache-2.0"
] |
permissive
|
zylo117/tensorflow-gpu-macosx
|
e553d17b769c67dfda0440df8ac1314405e4a10a
|
181bc2b37aa8a3eeb11a942d8f330b04abc804b3
|
refs/heads/master
| 2022-10-19T21:35:18.148271
| 2020-10-15T02:33:20
| 2020-10-15T02:33:20
| 134,240,831
| 116
| 26
|
Apache-2.0
| 2022-10-04T23:36:22
| 2018-05-21T08:29:12
|
C++
|
UTF-8
|
Python
| false
| false
| 7,280
|
py
|
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for GRU layer."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
from tensorflow.python.framework import test_util as tf_test_util
from tensorflow.python.keras._impl import keras
from tensorflow.python.keras._impl.keras import testing_utils
from tensorflow.python.platform import test
from tensorflow.python.training.rmsprop import RMSPropOptimizer
class GRULayerTest(test.TestCase):
@tf_test_util.run_in_graph_and_eager_modes()
def test_return_sequences_GRU(self):
num_samples = 2
timesteps = 3
embedding_dim = 4
units = 2
testing_utils.layer_test(
keras.layers.GRU,
kwargs={'units': units,
'return_sequences': True},
input_shape=(num_samples, timesteps, embedding_dim))
@tf_test_util.run_in_graph_and_eager_modes()
def test_dynamic_behavior_GRU(self):
num_samples = 2
timesteps = 3
embedding_dim = 4
units = 2
layer = keras.layers.GRU(units, input_shape=(None, embedding_dim))
model = keras.models.Sequential()
model.add(layer)
model.compile(RMSPropOptimizer(0.01), 'mse')
x = np.random.random((num_samples, timesteps, embedding_dim))
y = np.random.random((num_samples, units))
model.train_on_batch(x, y)
@tf_test_util.run_in_graph_and_eager_modes()
def test_dropout_GRU(self):
num_samples = 2
timesteps = 3
embedding_dim = 4
units = 2
testing_utils.layer_test(
keras.layers.GRU,
kwargs={'units': units,
'dropout': 0.1,
'recurrent_dropout': 0.1},
input_shape=(num_samples, timesteps, embedding_dim))
@tf_test_util.run_in_graph_and_eager_modes()
def test_implementation_mode_GRU(self):
num_samples = 2
timesteps = 3
embedding_dim = 4
units = 2
for mode in [0, 1, 2]:
testing_utils.layer_test(
keras.layers.GRU,
kwargs={'units': units,
'implementation': mode},
input_shape=(num_samples, timesteps, embedding_dim))
def test_statefulness_GRU(self):
num_samples = 2
timesteps = 3
embedding_dim = 4
units = 2
layer_class = keras.layers.GRU
with self.test_session():
model = keras.models.Sequential()
model.add(
keras.layers.Embedding(
4,
embedding_dim,
mask_zero=True,
input_length=timesteps,
batch_input_shape=(num_samples, timesteps)))
layer = layer_class(
units, return_sequences=False, stateful=True, weights=None)
model.add(layer)
model.compile(optimizer='sgd', loss='mse')
out1 = model.predict(np.ones((num_samples, timesteps)))
self.assertEqual(out1.shape, (num_samples, units))
# train once so that the states change
model.train_on_batch(
np.ones((num_samples, timesteps)), np.ones((num_samples, units)))
out2 = model.predict(np.ones((num_samples, timesteps)))
# if the state is not reset, output should be different
self.assertNotEqual(out1.max(), out2.max())
# check that output changes after states are reset
# (even though the model itself didn't change)
layer.reset_states()
out3 = model.predict(np.ones((num_samples, timesteps)))
self.assertNotEqual(out2.max(), out3.max())
# check that container-level reset_states() works
model.reset_states()
out4 = model.predict(np.ones((num_samples, timesteps)))
np.testing.assert_allclose(out3, out4, atol=1e-5)
# check that the call to `predict` updated the states
out5 = model.predict(np.ones((num_samples, timesteps)))
self.assertNotEqual(out4.max(), out5.max())
# Check masking
layer.reset_states()
left_padded_input = np.ones((num_samples, timesteps))
left_padded_input[0, :1] = 0
left_padded_input[1, :2] = 0
out6 = model.predict(left_padded_input)
layer.reset_states()
right_padded_input = np.ones((num_samples, timesteps))
right_padded_input[0, -1:] = 0
right_padded_input[1, -2:] = 0
out7 = model.predict(right_padded_input)
np.testing.assert_allclose(out7, out6, atol=1e-5)
def test_regularizers_GRU(self):
embedding_dim = 4
layer_class = keras.layers.GRU
with self.test_session():
layer = layer_class(
5,
return_sequences=False,
weights=None,
input_shape=(None, embedding_dim),
kernel_regularizer=keras.regularizers.l1(0.01),
recurrent_regularizer=keras.regularizers.l1(0.01),
bias_regularizer='l2',
activity_regularizer='l1')
layer.build((None, None, 2))
self.assertEqual(len(layer.losses), 3)
x = keras.backend.variable(np.ones((2, 3, 2)))
layer(x)
self.assertEqual(len(layer.get_losses_for(x)), 1)
def test_constraints_GRU(self):
embedding_dim = 4
layer_class = keras.layers.GRU
with self.test_session():
k_constraint = keras.constraints.max_norm(0.01)
r_constraint = keras.constraints.max_norm(0.01)
b_constraint = keras.constraints.max_norm(0.01)
layer = layer_class(
5,
return_sequences=False,
weights=None,
input_shape=(None, embedding_dim),
kernel_constraint=k_constraint,
recurrent_constraint=r_constraint,
bias_constraint=b_constraint)
layer.build((None, None, embedding_dim))
self.assertEqual(layer.cell.kernel.constraint, k_constraint)
self.assertEqual(layer.cell.recurrent_kernel.constraint, r_constraint)
self.assertEqual(layer.cell.bias.constraint, b_constraint)
def test_with_masking_layer_GRU(self):
layer_class = keras.layers.GRU
with self.test_session():
inputs = np.random.random((2, 3, 4))
targets = np.abs(np.random.random((2, 3, 5)))
targets /= targets.sum(axis=-1, keepdims=True)
model = keras.models.Sequential()
model.add(keras.layers.Masking(input_shape=(3, 4)))
model.add(layer_class(units=5, return_sequences=True, unroll=False))
model.compile(loss='categorical_crossentropy', optimizer='adam')
model.fit(inputs, targets, epochs=1, batch_size=2, verbose=1)
def test_from_config_GRU(self):
layer_class = keras.layers.GRU
for stateful in (False, True):
l1 = layer_class(units=1, stateful=stateful)
l2 = layer_class.from_config(l1.get_config())
assert l1.get_config() == l2.get_config()
if __name__ == '__main__':
test.main()
|
[
"zylo117@hotmail.com"
] |
zylo117@hotmail.com
|
fe6afb0a5ceacf91383ce734fe45b592f58f00f9
|
d05a59feee839a4af352b7ed2fd6cf10a288a3cb
|
/xlsxwriter/test/comparison/test_chart_axis30.py
|
9e4ec252d029acea26ddf0d4218712e6c3c78c56
|
[
"BSD-2-Clause-Views"
] |
permissive
|
elessarelfstone/XlsxWriter
|
0d958afd593643f990373bd4d8a32bafc0966534
|
bb7b7881c7a93c89d6eaac25f12dda08d58d3046
|
refs/heads/master
| 2020-09-24T06:17:20.840848
| 2019-11-24T23:43:01
| 2019-11-24T23:43:01
| 225,685,272
| 1
| 0
|
NOASSERTION
| 2019-12-03T18:09:06
| 2019-12-03T18:09:05
| null |
UTF-8
|
Python
| false
| false
| 1,350
|
py
|
###############################################################################
#
# Tests for XlsxWriter.
#
# Copyright (c), 2013-2019, John McNamara, jmcnamara@cpan.org
#
from ..excel_comparsion_test import ExcelComparisonTest
from ...workbook import Workbook
class TestCompareXLSXFiles(ExcelComparisonTest):
"""
Test file created by XlsxWriter against a file created by Excel.
"""
def setUp(self):
self.set_filename('chart_axis30.xlsx')
def test_create_file(self):
"""Test the creation of a simple XlsxWriter file."""
workbook = Workbook(self.got_filename)
worksheet = workbook.add_worksheet()
chart = workbook.add_chart({'type': 'line'})
chart.axis_ids = [69200896, 69215360]
data = [
[1, 2, 3, 4, 5],
[2, 4, 6, 8, 10],
[3, 6, 9, 12, 15],
]
chart.set_x_axis({'position_axis': 'on_tick'})
worksheet.write_column('A1', data[0])
worksheet.write_column('B1', data[1])
worksheet.write_column('C1', data[2])
chart.add_series({'values': '=Sheet1!$A$1:$A$5'})
chart.add_series({'values': '=Sheet1!$B$1:$B$5'})
chart.add_series({'values': '=Sheet1!$C$1:$C$5'})
worksheet.insert_chart('E9', chart)
workbook.close()
self.assertExcelEqual()
|
[
"jmcnamara@cpan.org"
] |
jmcnamara@cpan.org
|
17155a2faf01fd4d1b8ef2bd64c48e450adac8c7
|
8aa04db29bae5e0391543349eb2c0f778c56ffae
|
/tensorflow/python/trackable/asset.py
|
c218f7240e4f29d6e95140050581981776c3b287
|
[
"Apache-2.0",
"LicenseRef-scancode-generic-cla",
"BSD-2-Clause"
] |
permissive
|
mansnils/tensorflow
|
ec1a840f8fca6742d6e54dcf7b00eae0180f4023
|
b0164f014fd4f1b5af2c7b578aa7687198c5d92e
|
refs/heads/master
| 2023-01-30T00:13:07.772844
| 2023-01-09T09:45:45
| 2023-01-09T09:49:49
| 226,075,754
| 1
| 0
|
Apache-2.0
| 2019-12-05T10:27:38
| 2019-12-05T10:27:37
| null |
UTF-8
|
Python
| false
| false
| 4,278
|
py
|
# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Asset-type Trackable object."""
import os
from tensorflow.python.eager import context
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import ops
from tensorflow.python.lib.io import file_io
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import resource_variable_ops
from tensorflow.python.saved_model import path_helpers
from tensorflow.python.trackable import base
from tensorflow.python.util.tf_export import tf_export
@tf_export("saved_model.Asset")
class Asset(base.Trackable):
"""Represents a file asset to hermetically include in a SavedModel.
A SavedModel can include arbitrary files, called assets, that are needed
  for its use. For example a vocabulary file used to initialize a lookup table.
When a trackable object is exported via `tf.saved_model.save()`, all the
`Asset`s reachable from it are copied into the SavedModel assets directory.
Upon loading, the assets and the serialized functions that depend on them
will refer to the correct filepaths inside the SavedModel directory.
Example:
```
filename = tf.saved_model.Asset("file.txt")
@tf.function(input_signature=[])
def func():
return tf.io.read_file(filename)
trackable_obj = tf.train.Checkpoint()
trackable_obj.func = func
trackable_obj.filename = filename
tf.saved_model.save(trackable_obj, "/tmp/saved_model")
# The created SavedModel is hermetic, it does not depend on
# the original file and can be moved to another path.
tf.io.gfile.remove("file.txt")
tf.io.gfile.rename("/tmp/saved_model", "/tmp/new_location")
reloaded_obj = tf.saved_model.load("/tmp/new_location")
print(reloaded_obj.func())
```
Attributes:
asset_path: A path, or a 0-D `tf.string` tensor with path to the asset.
"""
def __init__(self, path):
"""Record the full path to the asset."""
if isinstance(path, os.PathLike):
path = os.fspath(path)
# The init_scope prevents functions from capturing `path` in an
# initialization graph, since it is transient and should not end up in a
# serialized function body.
with ops.init_scope(), ops.device("CPU"):
self._path = ops.convert_to_tensor(
path, dtype=dtypes.string, name="asset_path")
@property
def asset_path(self):
"""Fetch the current asset path."""
return self._path
@classmethod
def _deserialize_from_proto(cls, object_proto, export_dir, asset_file_def,
**unused_kwargs):
proto = object_proto.asset
filename = file_io.join(
path_helpers.get_assets_dir(export_dir),
asset_file_def[proto.asset_file_def_index].filename)
asset = cls(filename)
if not context.executing_eagerly():
ops.add_to_collection(ops.GraphKeys.ASSET_FILEPATHS, asset.asset_path)
return asset
def _add_trackable_child(self, name, value):
setattr(self, name, value)
def _export_to_saved_model_graph(self, tensor_map, **unused_kwargs):
# TODO(b/205008097): Instead of mapping 1-1 between trackable asset
# and asset in the graph def consider deduping the assets that
# point to the same file.
asset_path_initializer = array_ops.placeholder(
shape=self.asset_path.shape,
dtype=dtypes.string,
name="asset_path_initializer")
asset_variable = resource_variable_ops.ResourceVariable(
asset_path_initializer)
tensor_map[self.asset_path] = asset_variable
return [self.asset_path]
ops.register_tensor_conversion_function(
Asset, lambda asset, **kw: ops.convert_to_tensor(asset.asset_path, **kw))
|
[
"gardener@tensorflow.org"
] |
gardener@tensorflow.org
|
e86d67f32b9eade3829748ae16ebc5608042241f
|
f791462fb1286607d16459c1602d133f8d8c8b59
|
/test/test_distributions_mixture.py
|
a1ab093e65ea5c73f333d6fcd898c35cb3340e73
|
[
"Apache-2.0"
] |
permissive
|
pyro-ppl/numpyro
|
b071ed2bd93be41bafc3da8764c9f5617f996d92
|
ca96eca8e8e1531e71ba559ef7a8ad3b4b68cbc2
|
refs/heads/master
| 2023-09-03T15:56:13.252692
| 2023-08-28T14:32:25
| 2023-08-28T14:32:25
| 170,580,540
| 1,941
| 219
|
Apache-2.0
| 2023-09-04T11:26:11
| 2019-02-13T21:13:59
|
Python
|
UTF-8
|
Python
| false
| false
| 5,161
|
py
|
# Copyright Contributors to the Pyro project.
# SPDX-License-Identifier: Apache-2.0
import pytest
import jax
import jax.numpy as jnp
import numpyro.distributions as dist
rng_key = jax.random.PRNGKey(42)
def get_normal(batch_shape):
"""Get parameterized Normal with given batch shape."""
loc = jnp.zeros(batch_shape)
scale = jnp.ones(batch_shape)
normal = dist.Normal(loc=loc, scale=scale)
return normal
def get_mvn(batch_shape):
"""Get parameterized MultivariateNormal with given batch shape."""
dimensions = 2
loc = jnp.zeros((*batch_shape, dimensions))
cov_matrix = jnp.eye(dimensions, dimensions)
for i, s in enumerate(batch_shape):
loc = jnp.repeat(jnp.expand_dims(loc, i), s, axis=i)
cov_matrix = jnp.repeat(jnp.expand_dims(cov_matrix, i), s, axis=i)
mvn = dist.MultivariateNormal(loc=loc, covariance_matrix=cov_matrix)
return mvn
@pytest.mark.parametrize("jax_dist_getter", [get_normal, get_mvn])
@pytest.mark.parametrize("nb_mixtures", [1, 3])
@pytest.mark.parametrize("batch_shape", [(), (1,), (7,), (2, 5)])
@pytest.mark.parametrize("same_family", [True, False])
def test_mixture_same_batch_shape(
jax_dist_getter, nb_mixtures, batch_shape, same_family
):
mixing_probabilities = jnp.ones(nb_mixtures) / nb_mixtures
for i, s in enumerate(batch_shape):
mixing_probabilities = jnp.repeat(
jnp.expand_dims(mixing_probabilities, i), s, axis=i
)
assert jnp.allclose(mixing_probabilities.sum(axis=-1), 1.0)
mixing_distribution = dist.Categorical(probs=mixing_probabilities)
if same_family:
component_distribution = jax_dist_getter((*batch_shape, nb_mixtures))
else:
component_distribution = [
jax_dist_getter(batch_shape) for _ in range(nb_mixtures)
]
_test_mixture(mixing_distribution, component_distribution)
@pytest.mark.parametrize("jax_dist_getter", [get_normal, get_mvn])
@pytest.mark.parametrize("nb_mixtures", [3])
@pytest.mark.parametrize("mixing_batch_shape, component_batch_shape", [[(2,), (7, 2)]])
@pytest.mark.parametrize("same_family", [True, False])
def test_mixture_broadcast_batch_shape(
jax_dist_getter, nb_mixtures, mixing_batch_shape, component_batch_shape, same_family
):
# Create mixture
mixing_probabilities = jnp.ones(nb_mixtures) / nb_mixtures
for i, s in enumerate(mixing_batch_shape):
mixing_probabilities = jnp.repeat(
jnp.expand_dims(mixing_probabilities, i), s, axis=i
)
assert jnp.allclose(mixing_probabilities.sum(axis=-1), 1.0)
mixing_distribution = dist.Categorical(probs=mixing_probabilities)
if same_family:
component_distribution = jax_dist_getter((*component_batch_shape, nb_mixtures))
else:
component_distribution = [
jax_dist_getter(component_batch_shape) for _ in range(nb_mixtures)
]
_test_mixture(mixing_distribution, component_distribution)
def _test_mixture(mixing_distribution, component_distribution):
# Create mixture
mixture = dist.Mixture(
mixing_distribution=mixing_distribution,
component_distributions=component_distribution,
)
assert (
mixture.mixture_size == mixing_distribution.probs.shape[-1]
), "Mixture size needs to be the size of the probability vector"
if isinstance(component_distribution, dist.Distribution):
assert (
mixture.batch_shape == component_distribution.batch_shape[:-1]
), "Mixture batch shape needs to be the component batch shape without the mixture dimension."
else:
assert (
mixture.batch_shape == component_distribution[0].batch_shape
), "Mixture batch shape needs to be the component batch shape."
# Test samples
sample_shape = (11,)
# Samples from component distribution(s)
component_samples = mixture.component_sample(rng_key, sample_shape)
assert component_samples.shape == (
*sample_shape,
*mixture.batch_shape,
mixture.mixture_size,
*mixture.event_shape,
)
# Samples from mixture
samples = mixture.sample(rng_key, sample_shape=sample_shape)
assert samples.shape == (*sample_shape, *mixture.batch_shape, *mixture.event_shape)
# Check log_prob
lp = mixture.log_prob(samples)
nb_value_dims = len(samples.shape) - mixture.event_dim
expected_shape = samples.shape[:nb_value_dims]
assert lp.shape == expected_shape
# Samples with indices
samples_, [indices] = mixture.sample_with_intermediates(
rng_key, sample_shape=sample_shape
)
assert samples_.shape == samples.shape
assert indices.shape == (*sample_shape, *mixture.batch_shape)
assert jnp.issubdtype(indices.dtype, jnp.integer)
assert (indices >= 0).all() and (indices < mixture.mixture_size).all()
# Check mean
mean = mixture.mean
assert mean.shape == mixture.shape()
# Check variance
var = mixture.variance
assert var.shape == mixture.shape()
# Check cdf
if mixture.event_shape == ():
cdf = mixture.cdf(samples)
assert cdf.shape == (*sample_shape, *mixture.shape())
|
[
"noreply@github.com"
] |
pyro-ppl.noreply@github.com
|
815c29c7ac315b39685f4cb97cfe0129b2f4b029
|
b2c0517a0421c32f6782d76e4df842875d6ffce5
|
/Algorithms/Dynamic Programming/121. Best Time to Buy and Sell Stock.py
|
ebf0f3692c9e7ba74f468b736b65e900ba63d3d1
|
[] |
no_license
|
SuYuxi/yuxi
|
e875b1536dc4b363194d0bef7f9a5aecb5d6199a
|
45ad23a47592172101072a80a90de17772491e04
|
refs/heads/master
| 2022-10-04T21:29:42.017462
| 2022-09-30T04:00:48
| 2022-09-30T04:00:48
| 66,703,247
| 1
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 1,053
|
py
|
#Forward
class Solution(object):
def maxProfit(self, prices):
if(not prices):
return 0
minP = prices[0]
maxPro = 0
for i in prices:
if(i <= minP):
minP = i
else:
maxPro = max(maxPro, i-minP)
return maxPro
#Backward
class Solution(object):
def maxProfit(self, prices):
maxP = 0
maxPro = 0
for i in prices[::-1]:
if(i > maxP):
maxP = i
else:
maxPro = max(maxPro, maxP - i)
return maxPro
#Kadane's Algorithm
#Max sum Contiguous subarray search
class Solution(object):
def maxProfit(self, prices):
L = []
for i in range(1, len(prices)):
L.append(prices[i] - prices[i-1])
maxCur = 0
maxSofar = 0
for i in L:
maxCur = max(0, maxCur + i)
maxSofar = max(maxSofar, maxCur)
return maxSofar
#Lite version
class Solution(object):
def maxProfit(self, prices):
maxCur = 0
maxSofar = 0
for i in range(1, len(prices)):
maxCur = max(0, maxCur + prices[i] - prices[i-1])
maxSofar = max(maxSofar, maxCur)
return maxSofar
|
[
"soration2099@gmail.com"
] |
soration2099@gmail.com
|
c174eeaece6b1b311b305f2b8e6aae548566a5fb
|
b314518eb3e33c872f880c4f80a0f3d0856cf9ee
|
/12_marks.py
|
bd31d77188a83adcb12a63ab0e53a8fd0675250c
|
[] |
no_license
|
namntran/2021_python_principles
|
0ba48d2cb6ff32a4fefd0b13ae24d2376e17740e
|
bf33210f9b0e02dfefe7a9a008936e8f47d25149
|
refs/heads/main
| 2023-03-10T15:47:48.930202
| 2021-02-25T07:27:53
| 2021-02-25T07:27:53
| 330,814,436
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 964
|
py
|
# 12_marks.py
# Prompt for and read marks for a test until a negative number is entered.
# Print the number of marks entered and the average (arithmetic mean) of marks
# Print the highest and lowest marks
# use indefinite loop - while loop
n = 0
total = 0.0
mark = float(input("Enter a mark: ")) #initialise variables to the first value it should have, don't make up numbers
highest = mark #initialise variables to the first value it should have, don't make up numbers
lowest = mark #initialise variables to the first value it should have, don't make up numbers
while mark >= 0.0:
n += 1
total += mark
if mark > highest:
highest = mark
if mark < lowest:
lowest = mark
mark = float(input("Enter a mark: "))
print("The number of marks: ", n)
if n > 0: # only print average if n > 0
print("The average mark is: ", total/ n)
print("The lowest mark is: ", lowest)
print("The highest mark is: ", highest)
|
[
"namtran78@gmail.com"
] |
namtran78@gmail.com
|
25a56b9668be160cc2d3f1113f3f44564b46c9fe
|
356151747d2a6c65429e48592385166ab48c334c
|
/backend/manager/threads/manage_chef/th_remove_chef_query.py
|
ea0d3f08d872c0edeeb0b8a88499869306d0296d
|
[] |
no_license
|
therealrahulsahu/se_project
|
c82b2d9d467decd30a24388f66427c7805c23252
|
c9f9fd5594191ab7dce0504ca0ab3025aa26a0c1
|
refs/heads/master
| 2020-06-25T02:51:30.355677
| 2020-04-20T13:01:36
| 2020-04-20T13:01:36
| 199,175,627
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 1,371
|
py
|
from PyQt5.QtCore import QThread, pyqtSignal
class ThreadRemoveChefQuery(QThread):
signal = pyqtSignal('PyQt_PyObject')
def __init__(self, parent_class):
super().__init__()
self.output_list = []
self.parent_class = parent_class
def run(self):
in_name = r'(?i){}'.format(self.parent_class.curr_wid.le_rm_chef.text().strip())
self.output_list = []
self.output_itmes = []
from errors import ChefNotFoundError
from pymongo.errors import AutoReconnect
try:
myc = self.parent_class.MW.DB.chef
data_list = list(myc.find({'name': {'$regex': in_name}},
{'password': 0, 'phone': 0}).limit(10))
if data_list:
self.output_itmes = data_list
self.output_list = [x['_id'] for x in data_list]
self.parent_class.MW.mess('List Fetched')
self.signal.emit(True)
else:
self.parent_class.curr_wid.bt_rm_confirm.setEnabled(False)
raise ChefNotFoundError
except ChefNotFoundError as ob:
self.parent_class.MW.mess(str(ob))
except AutoReconnect:
self.parent_class.MW.mess('-->> Network Error <<--')
finally:
self.parent_class.curr_wid.bt_get_rm_chef.setEnabled(True)
|
[
"43601158+therealrahulsahu@users.noreply.github.com"
] |
43601158+therealrahulsahu@users.noreply.github.com
|
858a53123632c2341a8d43156ec562807a7a9d52
|
53fab060fa262e5d5026e0807d93c75fb81e67b9
|
/backup/user_205/ch139_2020_04_01_19_45_57_332549.py
|
850ba88067b364259c8a558e3cceaab56293684b
|
[] |
no_license
|
gabriellaec/desoft-analise-exercicios
|
b77c6999424c5ce7e44086a12589a0ad43d6adca
|
01940ab0897aa6005764fc220b900e4d6161d36b
|
refs/heads/main
| 2023-01-31T17:19:42.050628
| 2020-12-16T05:21:31
| 2020-12-16T05:21:31
| 306,735,108
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 158
|
py
|
def arcotangente(x,n):
m = 3
i = 3
u = -1
z = x
while (m<n):
z += u*(x**m/i)
u*=-1
m+=2
i+=2
return z
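# Worked example (illustrative): the loop above accumulates the Maclaurin
# series arctan(x) = x - x**3/3 + x**5/5 - ..., using odd powers m < n.
if __name__ == "__main__":
    import math
    approx = 4 * arcotangente(1, 2000)  # converges slowly at x = 1
    print(approx, math.pi)              # approx should be close to pi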
|
[
"you@example.com"
] |
you@example.com
|
3216fe50f659f9555182cd6e9010327a99bc736c
|
50c2bf03543eff23ec2e88f086e33848b50b5c4f
|
/docs/links.py
|
7fb1ce92193eab8aaee889f6876ac192227aa78d
|
[] |
no_license
|
CiscoTestAutomation/geniefiletransferutilslib
|
d06967476d78eafe1984a9991a57def25523ade7
|
9c32f121816d7d8f4a1fc4fc1b7c2fe0cf4e9449
|
refs/heads/master
| 2021-06-03T21:04:24.922438
| 2020-01-20T19:36:53
| 2020-01-20T19:36:53
| 131,624,514
| 3
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 1,016
|
py
|
internal_links = {'pyats': ('%s://wwwin-pyats.cisco.com', 'pyATS'),
'devnet': ('%s://developer.cisco.com/', 'Cisco DevNet'),
'multiprotocolfileutilities': ('%s://wwwin-pyats.cisco.com/documentation/html/utilities/file_transfer_utilities.html', 'Multiprotocol File Transfer'),
'mailto': ('mailto:asg-genie-support@%s','mailto'),
'communityforum': ('%s://piestack.cisco.com', 'community forum'),
}
external_links = {'pyats': ('%ss://developer.cisco.com/site/pyats/', 'pyATS'),
'devnet': ('%ss://developer.cisco.com/', 'Cisco DevNet'),
'multiprotocolfileutilities': ('%ss://pubhub.devnetcloud.com/media/pyats/docs/utilities/file_transfer_utilities.html', 'Multiprotocol File Transfer'),
'mailto': ('mailto:pyats-support-ext@%s','mailto'),
'communityforum': ('%ss://communities.cisco.com/community/developer/pyats', 'community forum'),
}
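# Usage sketch (an assumption about intent, not documented in the original
# file): the '%s' placeholders look like protocol slots, so internal templates
# expand with 'http' while the external ones (note the extra 's' in '%ss')
# yield 'https':
#
#     internal_links['pyats'][0] % 'http'  # -> 'http://wwwin-pyats.cisco.com'
#     external_links['pyats'][0] % 'http'  # -> 'https://developer.cisco.com/site/pyats/'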
|
[
"karmoham@cisco.com"
] |
karmoham@cisco.com
|
221b36b5f091132763c293d1bd0373aa8ab7f2c8
|
7f80ea25908ce2eba6f6a72689f88c142319fe56
|
/backtracking/baekjoon/2580.py
|
6451fe1e80ba3457eba728de64bdc892abf909aa
|
[] |
no_license
|
JUNGEEYOU/Algorithm-Problems
|
1b242ae3aec3005d4e449f8b6170a63d1acac60b
|
5e4a8a37254120c7c572b545d99006ebb512e151
|
refs/heads/main
| 2023-04-06T11:45:47.867171
| 2021-04-22T13:49:36
| 2021-04-22T13:49:36
| 353,240,130
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 633
|
py
|
import sys
zero_list = []
arr = []
for i in range(9):
x = list(map(int, sys.stdin.readline().split()))
zero_list.extend([(i, j) for j in range(len(x)) if x[j] == 0])
arr.append(x)
zero = len(zero_list)
result = []
def dfs():
if len(result) == zero:
return
    # my row, my column, and my 3x3 square must all sum to 45
flag_x = False
flag_y = False
for x, y in zero_list:
for i in arr[x]:
if i == 0:
flag_x = True
break
for i in range(9):
if arr[i][y] == 0:
flag_y = True
break
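# The snippet above is cut off mid-search. A complete standard backtracking
# sketch for this task (an assumption about the intended finish, not the
# original author's code): place each candidate digit only if it is absent
# from the row, the column, and the 3x3 square, recurse, and print the board
# at the first full assignment.
def solve(idx):
    if idx == zero:
        for row in arr:
            print(' '.join(map(str, row)))
        sys.exit()
    x, y = zero_list[idx]
    square = [arr[3 * (x // 3) + dx][3 * (y // 3) + dy]
              for dx in range(3) for dy in range(3)]
    for v in range(1, 10):
        if v not in arr[x] and all(arr[r][y] != v for r in range(9)) and v not in square:
            arr[x][y] = v
            solve(idx + 1)
            arr[x][y] = 0
solve(0)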
|
[
"junge2u@naver.com"
] |
junge2u@naver.com
|
6b3e19b3c633b7ce0aa72c220770ab72ab12a828
|
6a0589aa1a5f9071cbcee3f84452c880bf96c12d
|
/tests/conftest.py
|
1b5dcb8b25e52d3f3937e03f61d604e1bf155437
|
[
"MIT"
] |
permissive
|
UWPCE-PythonCert/py220_extras
|
d3203e2fd44ee840d008fac9597a5b0c165e8cc7
|
57336429fb782c4901e7709c0275242e6af4264a
|
refs/heads/master
| 2020-12-01T23:42:58.660565
| 2020-03-11T02:44:18
| 2020-03-11T02:44:18
| 230,816,756
| 0
| 1
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 236
|
py
|
# -*- coding: utf-8 -*-
"""
Dummy conftest.py for uw_py220_extras.
If you don't know what this is for, just leave it empty.
Read more about conftest.py under:
https://pytest.org/latest/plugins.html
"""
# import pytest
|
[
"akmiles@icloud.com"
] |
akmiles@icloud.com
|
72f1f59bbbd15bb91ff2e22139d375f363c4fe26
|
e228abda54dc7ab992ba634997b0d21b7200d091
|
/runtests.py
|
9efa69f957de2f6c612397fa92b4ccd7e605a565
|
[
"MIT",
"LicenseRef-scancode-unknown-license-reference"
] |
permissive
|
AltSchool/dynamic-rest
|
3fd2456d72fdf84e75556bc0fea7303e496b7ec7
|
ed69e5af4ddf153e6eb304b7db80cc6adbf4d654
|
refs/heads/master
| 2023-09-06T01:57:47.555537
| 2023-03-27T16:15:22
| 2023-03-27T16:15:22
| 31,736,312
| 812
| 131
|
MIT
| 2023-05-28T09:15:45
| 2015-03-05T21:05:17
|
Python
|
UTF-8
|
Python
| false
| false
| 3,117
|
py
|
#! /usr/bin/env python
# Adopted from Django REST Framework:
# https://github.com/tomchristie/django-rest-framework/blob/master/runtests.py
from __future__ import print_function
import os
import subprocess
import sys
import pytest
APP_NAME = 'dynamic_rest'
TESTS = 'tests'
BENCHMARKS = 'benchmarks'
PYTEST_ARGS = {
'default': [
TESTS, '--tb=short', '-s', '-rw'
],
'fast': [
TESTS, '--tb=short', '-q', '-s', '-rw'
],
}
FLAKE8_ARGS = [APP_NAME, TESTS]
sys.path.append(os.path.dirname(__file__))
def exit_on_failure(ret, message=None):
if ret:
sys.exit(ret)
def flake8_main(args):
print('Running flake8 code linting')
ret = subprocess.call(['flake8'] + args)
print('flake8 failed' if ret else 'flake8 passed')
return ret
def split_class_and_function(string):
class_string, function_string = string.split('.', 1)
return "%s and %s" % (class_string, function_string)
def is_function(string):
# `True` if it looks like a test function is included in the string.
return string.startswith('test_') or '.test_' in string
def is_class(string):
# `True` if first character is uppercase - assume it's a class name.
return string[0] == string[0].upper()
if __name__ == "__main__":
try:
sys.argv.remove('--nolint')
except ValueError:
run_flake8 = True
else:
run_flake8 = False
try:
sys.argv.remove('--lintonly')
except ValueError:
run_tests = True
else:
run_tests = False
try:
sys.argv.remove('--benchmarks')
except ValueError:
run_benchmarks = False
else:
run_benchmarks = True
try:
sys.argv.remove('--fast')
except ValueError:
style = 'default'
else:
style = 'fast'
run_flake8 = False
if len(sys.argv) > 1:
pytest_args = sys.argv[1:]
first_arg = pytest_args[0]
try:
pytest_args.remove('--coverage')
except ValueError:
pass
else:
pytest_args = [
'--cov-report',
'xml',
'--cov',
APP_NAME
] + pytest_args
if first_arg.startswith('-'):
# `runtests.py [flags]`
pytest_args = [TESTS] + pytest_args
elif is_class(first_arg) and is_function(first_arg):
# `runtests.py TestCase.test_function [flags]`
expression = split_class_and_function(first_arg)
pytest_args = [TESTS, '-k', expression] + pytest_args[1:]
elif is_class(first_arg) or is_function(first_arg):
# `runtests.py TestCase [flags]`
# `runtests.py test_function [flags]`
pytest_args = [TESTS, '-k', pytest_args[0]] + pytest_args[1:]
else:
pytest_args = PYTEST_ARGS[style]
if run_benchmarks:
pytest_args[0] = BENCHMARKS
pytest_args.append('--ds=%s.settings' % BENCHMARKS)
if run_tests:
exit_on_failure(pytest.main(pytest_args))
if run_flake8:
exit_on_failure(flake8_main(FLAKE8_ARGS))
|
[
"alonetiev@gmail.com"
] |
alonetiev@gmail.com
|
b8a648e695ffd41107411a2a06894c584e2e6f86
|
82b946da326148a3c1c1f687f96c0da165bb2c15
|
/sdk/python/pulumi_azure_native/securityinsights/v20210301preview/get_dynamics365_data_connector.py
|
aab4cce733e30b9d124ff6383db6269c8390a7b0
|
[
"BSD-3-Clause",
"Apache-2.0"
] |
permissive
|
morrell/pulumi-azure-native
|
3916e978382366607f3df0a669f24cb16293ff5e
|
cd3ba4b9cb08c5e1df7674c1c71695b80e443f08
|
refs/heads/master
| 2023-06-20T19:37:05.414924
| 2021-07-19T20:57:53
| 2021-07-19T20:57:53
| 387,815,163
| 0
| 0
|
Apache-2.0
| 2021-07-20T14:18:29
| 2021-07-20T14:18:28
| null |
UTF-8
|
Python
| false
| false
| 5,892
|
py
|
# coding=utf-8
# *** WARNING: this file was generated by the Pulumi SDK Generator. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from ... import _utilities
from . import outputs
__all__ = [
'GetDynamics365DataConnectorResult',
'AwaitableGetDynamics365DataConnectorResult',
'get_dynamics365_data_connector',
]
@pulumi.output_type
class GetDynamics365DataConnectorResult:
"""
Represents Dynamics365 data connector.
"""
def __init__(__self__, data_types=None, etag=None, id=None, kind=None, name=None, system_data=None, tenant_id=None, type=None):
if data_types and not isinstance(data_types, dict):
raise TypeError("Expected argument 'data_types' to be a dict")
pulumi.set(__self__, "data_types", data_types)
if etag and not isinstance(etag, str):
raise TypeError("Expected argument 'etag' to be a str")
pulumi.set(__self__, "etag", etag)
if id and not isinstance(id, str):
raise TypeError("Expected argument 'id' to be a str")
pulumi.set(__self__, "id", id)
if kind and not isinstance(kind, str):
raise TypeError("Expected argument 'kind' to be a str")
pulumi.set(__self__, "kind", kind)
if name and not isinstance(name, str):
raise TypeError("Expected argument 'name' to be a str")
pulumi.set(__self__, "name", name)
if system_data and not isinstance(system_data, dict):
raise TypeError("Expected argument 'system_data' to be a dict")
pulumi.set(__self__, "system_data", system_data)
if tenant_id and not isinstance(tenant_id, str):
raise TypeError("Expected argument 'tenant_id' to be a str")
pulumi.set(__self__, "tenant_id", tenant_id)
if type and not isinstance(type, str):
raise TypeError("Expected argument 'type' to be a str")
pulumi.set(__self__, "type", type)
@property
@pulumi.getter(name="dataTypes")
def data_types(self) -> 'outputs.Dynamics365DataConnectorDataTypesResponse':
"""
The available data types for the connector.
"""
return pulumi.get(self, "data_types")
@property
@pulumi.getter
def etag(self) -> Optional[str]:
"""
Etag of the azure resource
"""
return pulumi.get(self, "etag")
@property
@pulumi.getter
def id(self) -> str:
"""
Azure resource Id
"""
return pulumi.get(self, "id")
@property
@pulumi.getter
def kind(self) -> str:
"""
The kind of the data connector
Expected value is 'Dynamics365'.
"""
return pulumi.get(self, "kind")
@property
@pulumi.getter
def name(self) -> str:
"""
Azure resource name
"""
return pulumi.get(self, "name")
@property
@pulumi.getter(name="systemData")
def system_data(self) -> 'outputs.SystemDataResponse':
"""
Azure Resource Manager metadata containing createdBy and modifiedBy information.
"""
return pulumi.get(self, "system_data")
@property
@pulumi.getter(name="tenantId")
def tenant_id(self) -> str:
"""
The tenant id to connect to, and get the data from.
"""
return pulumi.get(self, "tenant_id")
@property
@pulumi.getter
def type(self) -> str:
"""
Azure resource type
"""
return pulumi.get(self, "type")
class AwaitableGetDynamics365DataConnectorResult(GetDynamics365DataConnectorResult):
# pylint: disable=using-constant-test
def __await__(self):
if False:
yield self
return GetDynamics365DataConnectorResult(
data_types=self.data_types,
etag=self.etag,
id=self.id,
kind=self.kind,
name=self.name,
system_data=self.system_data,
tenant_id=self.tenant_id,
type=self.type)
def get_dynamics365_data_connector(data_connector_id: Optional[str] = None,
operational_insights_resource_provider: Optional[str] = None,
resource_group_name: Optional[str] = None,
workspace_name: Optional[str] = None,
opts: Optional[pulumi.InvokeOptions] = None) -> AwaitableGetDynamics365DataConnectorResult:
"""
Represents Dynamics365 data connector.
:param str data_connector_id: Connector ID
:param str operational_insights_resource_provider: The namespace of workspaces resource provider- Microsoft.OperationalInsights.
:param str resource_group_name: The name of the resource group. The name is case insensitive.
:param str workspace_name: The name of the workspace.
"""
__args__ = dict()
__args__['dataConnectorId'] = data_connector_id
__args__['operationalInsightsResourceProvider'] = operational_insights_resource_provider
__args__['resourceGroupName'] = resource_group_name
__args__['workspaceName'] = workspace_name
if opts is None:
opts = pulumi.InvokeOptions()
if opts.version is None:
opts.version = _utilities.get_version()
__ret__ = pulumi.runtime.invoke('azure-native:securityinsights/v20210301preview:getDynamics365DataConnector', __args__, opts=opts, typ=GetDynamics365DataConnectorResult).value
return AwaitableGetDynamics365DataConnectorResult(
data_types=__ret__.data_types,
etag=__ret__.etag,
id=__ret__.id,
kind=__ret__.kind,
name=__ret__.name,
system_data=__ret__.system_data,
tenant_id=__ret__.tenant_id,
type=__ret__.type)
|
[
"noreply@github.com"
] |
morrell.noreply@github.com
|
286036647230c1f20766d06e3e4a66ddc5f011b7
|
2a1f4c4900693c093b2fcf4f84efa60650ef1424
|
/py/probe/functions/usb.py
|
01655565f4c286d2a11fe60aa67c5066b1325d29
|
[
"BSD-3-Clause"
] |
permissive
|
bridder/factory
|
b925f494303728fa95017d1ba3ff40ac5cf6a2fd
|
a1b0fccd68987d8cd9c89710adc3c04b868347ec
|
refs/heads/master
| 2023-08-10T18:51:08.988858
| 2021-09-21T03:25:28
| 2021-09-21T03:25:28
| null | 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 3,148
|
py
|
# Copyright 2018 The Chromium OS Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
import os
import re
from cros.factory.probe.functions import sysfs
from cros.factory.probe.lib import cached_probe_function
REQUIRED_FIELDS = ['idVendor', 'idProduct']
OPTIONAL_FIELDS = ['manufacturer', 'product', 'bcdDevice']
def ReadUSBSysfs(dir_path):
result = sysfs.ReadSysfs(
dir_path, REQUIRED_FIELDS, optional_keys=OPTIONAL_FIELDS)
if result:
result['bus_type'] = 'usb'
return result
class USBFunction(cached_probe_function.GlobPathCachedProbeFunction):
"""Probes all usb devices listed in the sysfs ``/sys/bus/usb/devices/``.
Description
-----------
This function goes through ``/sys/bus/usb/devices/`` to read attributes of
each usb device (also includes usb root hub) listed there. Each result
should contain these fields:
- ``device_path``: Pathname of the sysfs directory.
- ``idVendor``
- ``idProduct``
The result might also contain these optional fields if they are exported in
the sysfs entry:
- ``manufacturer``
- ``product``
- ``bcdDevice``
Examples
--------
Let's say the Chromebook has two usb devices. One of which
(at ``/sys/bus/usb/devices/1-1``) has the attributes:
- ``idVendor=0x0123``
- ``idProduct=0x4567``
- ``manufacturer=Google``
- ``product=Google Fancy Camera``
- ``bcdDevice=0x8901``
And the other one (at ``/sys/bus/usb/devices/1-2``) has the attributes:
- ``idVendor=0x0246``
- ``idProduct=0x1357``
- ``product=Goofy Bluetooth``
Then the probe statement::
{
"eval": "usb"
}
will have the corresponding probed result::
[
{
"bus_type": "usb",
"idVendor": "0123",
"idProduct": "4567",
"manufacturer": "Google",
"product": "Google Fancy Camera",
"bcdDevice": "8901"
},
{
"bus_type": "usb",
"idVendor": "0246",
"idProduct": "1357",
"product": "Goofy Bluetooth"
}
]
To verify if the Chromebook has Google Fancy Camera or not, you can write
a probe statement like::
{
"eval": "usb",
"expect": {
"idVendor": "0123",
"idProduct": "4567"
}
}
and verify if the ``camera`` field of the probed result dict contains
elements or not.
You can also specify ``dir_path`` argument directly to ask the function
to probe that sysfs USB entry. For example, the probe statement ::
{
"eval": "usb:/sys/bus/usb/devices/1-1"
}
will have the corresponding probed results::
[
{
"bus_type": "usb",
"idVendor": "0123",
...
}
]
"""
GLOB_PATH = '/sys/bus/usb/devices/*'
@classmethod
def ProbeDevice(cls, dir_path):
# A valid usb device name is <roothub_num>-<addr>[.<addr2>[.<addr3>...]] or
# usb[0-9]+ for usb root hub.
name = os.path.basename(dir_path)
if (not re.match(r'^[0-9]+-[0-9]+(\.[0-9]+)*$', name) and
not re.match(r'^usb[0-9]+$', name)):
return None
return ReadUSBSysfs(dir_path)
|
[
"chrome-bot@chromium.org"
] |
chrome-bot@chromium.org
|
202a88655b5c4915d28f86f89d310486eed37aa5
|
5667b69eee4b384e09625c1c65799a9785336b5b
|
/ivi/tektronix/tektronixMDO4104.py
|
4255a76b58b35d946e378bc805e59ee55b55848d
|
[
"MIT"
] |
permissive
|
Diti24/python-ivi
|
ffae0aa38e7340fa142929541ded2148f41e8a9a
|
4bf570eeb370789404d5bae8a439b6bbdb57647e
|
refs/heads/master
| 2020-04-08T04:07:06.326253
| 2019-08-05T16:52:00
| 2019-08-05T16:52:00
| 60,081,649
| 0
| 1
| null | 2016-05-31T13:40:19
| 2016-05-31T10:51:47
|
Python
|
UTF-8
|
Python
| false
| false
| 1,640
|
py
|
"""
Python Interchangeable Virtual Instrument Library
Copyright (c) 2016 Alex Forencich
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
"""
from .tektronixMDO4000 import *
class tektronixMDO4104(tektronixMDO4000):
"Tektronix MDO4104 IVI oscilloscope driver"
def __init__(self, *args, **kwargs):
self.__dict__.setdefault('_instrument_id', 'MDO4104')
super(tektronixMDO4104, self).__init__(*args, **kwargs)
self._analog_channel_count = 4
self._digital_channel_count = 16
self._channel_count = self._analog_channel_count + self._digital_channel_count
self._bandwidth = 1e9
self._init_channels()
|
[
"alex@alexforencich.com"
] |
alex@alexforencich.com
|
dfeb29a64581f84d9e2ab512576acb3bf5fbf769
|
51aa2894c317f60726fe9a778999eb7851b6be3e
|
/140_gui/pyqt_pyside/examples/PyQt_PySide_book/002_Processing_of_signals_and_events/+21_Handling signal and slot/21_9_Using class QTimer.py
|
d3b27afda8a2731f5c7749a149ae85dd10462344
|
[] |
no_license
|
pranaymate/Python_Topics
|
dd7b288ab0f5bbee71d57080179d6481aae17304
|
33d29e0a5bf4cde104f9c7f0693cf9897f3f2101
|
refs/heads/master
| 2022-04-25T19:04:31.337737
| 2020-04-26T00:36:03
| 2020-04-26T00:36:03
| null | 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 1,608
|
py
|
# -*- coding: utf-8 -*-
from PyQt4 import QtCore, QtGui
import time
class MyWindow(QtGui.QWidget):
def __init__(self, parent=None):
QtGui.QWidget.__init__(self, parent)
        self.setWindowTitle("Использование класса QTimer")  # "Using the QTimer class"
self.resize(200, 100)
self.label = QtGui.QLabel("")
self.label.setAlignment(QtCore.Qt.AlignCenter)
        self.button1 = QtGui.QPushButton("Запустить")   # "Start"
        self.button2 = QtGui.QPushButton("Остановить")  # "Stop"
self.button2.setEnabled(False)
vbox = QtGui.QVBoxLayout()
vbox.addWidget(self.label)
vbox.addWidget(self.button1)
vbox.addWidget(self.button2)
self.setLayout(vbox)
self.connect(self.button1, QtCore.SIGNAL("clicked()"),
self.on_clicked_button1)
self.connect(self.button2, QtCore.SIGNAL("clicked()"),
self.on_clicked_button2)
self.timer = QtCore.QTimer()
self.connect(self.timer, QtCore.SIGNAL("timeout()"),
self.on_timeout);
def on_clicked_button1(self):
        self.timer.start(1000) # 1 second
self.button1.setEnabled(False)
self.button2.setEnabled(True)
def on_clicked_button2(self):
self.timer.stop()
self.button1.setEnabled(True)
self.button2.setEnabled(False)
def on_timeout(self):
self.label.setText(time.strftime("%H:%M:%S"))
if __name__ == "__main__":
import sys
app = QtGui.QApplication(sys.argv)
window = MyWindow()
window.show()
sys.exit(app.exec_())
|
[
"sergejyurskyj@yahoo.com"
] |
sergejyurskyj@yahoo.com
|
f020b777a70bdb831c3655afbe4fb727539df1c7
|
96d31b21fbc196fe83d22ee0fdeb63ba2e58ac4e
|
/hdf.py
|
cddbf2eb30743a6d615a5e500d608a8854ec5b2a
|
[] |
no_license
|
Sandy4321/analysis
|
7e0a392b9a9ac79fcefc5504e77303d4baa1b93a
|
ec2751eddbb5dd64c12d4386a86cda4515302419
|
refs/heads/master
| 2021-01-21T07:31:25.391005
| 2013-06-22T13:16:38
| 2013-06-22T13:16:38
| null | 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 3,266
|
py
|
import h5py
import pandas
import os
import numpy as np
def add_col(hdf, name, data, compression = 'lzf'):
parts = name.split('/')
dirpath = '/'.join(parts[:-1])
if len(dirpath) > 0 and dirpath not in hdf:
hdf.create_group(dirpath)
hdf.create_dataset(name,
data=data,
dtype=data.dtype,
compression=compression,
chunks=True)
def dict_to_hdf(data, path, header, feature_names = None):
try:
os.remove(path)
except OSError:
pass
hdf = h5py.File(path, 'w')
if feature_names is None:
print "[dict_to_hdf] No feature names given, using dict keys"
feature_names = data.keys()
hdf.attrs['features'] = feature_names
ccy1 = header['ccy'][0]
ccy2 = header['ccy'][1]
hdf.attrs['ccy1'] = ccy1.encode('ascii')
hdf.attrs['ccy2'] = ccy2.encode('ascii')
hdf.attrs['ccy'] = (ccy1 + "/" + ccy2).encode('ascii')
hdf.attrs['year'] = header['year']
hdf.attrs['month'] = header['month']
hdf.attrs['day'] = header['day']
hdf.attrs['venue'] = header['venue'].encode('ascii')
hdf.attrs['start_time'] = data['t'][0]
hdf.attrs['end_time'] = data['t'][-1]
for name, vec in data.items():
add_col(hdf, name, vec)
# if program quits before this flag is added, ok to overwrite
# file in the future
hdf.attrs['finished'] = True
hdf.close()
def header_from_hdf(f):
a = f.attrs
assert 'ccy1' in a
assert 'ccy2' in a
assert 'year' in a
assert 'month' in a
assert 'day' in a
assert 'venue' in a
assert 'start_time' in a
assert 'end_time' in a
assert 'features' in a
header = {
'ccy': (a['ccy1'], a['ccy2']),
'year' : a['year'],
'month' : a['month'],
'day' : a['day'],
'venue' : a['venue'],
'start_time' : a['start_time'],
'end_time' : a['end_time'],
'features': a['features'],
}
return header
def header_from_hdf_filename(filename):
f = h5py.File(filename)
header = header_from_hdf(f)
f.close()
return header
def same_features(f1, f2):
s1 = set(f1)
s2 = set(f2)
same = s1 == s2
if not same:
print "Different features:", \
s1.symmetric_difference(s2)
return same
# file exists and 'finished' flag is true
def complete_hdf_exists(filename, feature_names):
if not os.path.exists(filename):
print "Doesn't exist"
return False
try:
f = h5py.File(filename, 'r')
attrs = f.attrs
finished = 'finished' in attrs and attrs['finished']
has_ccy = 'ccy1' in attrs and 'ccy2' in attrs
has_date = 'year' in attrs and 'month' in attrs and 'day' in f.attrs
has_venue = 'venue' in attrs
has_features = 'features' in attrs
if has_features:
have_same_features = same_features(attrs['features'], feature_names)
else:
have_same_features = False
f.close()
return finished and has_ccy and has_date and has_venue and \
has_features and have_same_features
except:
import sys
print sys.exc_info()
return False
def dataframe_from_hdf(f):
cols = dict([(k,np.array(v[:])) for k, v in f.items()])
return pandas.DataFrame(data=cols, index=f['t'], dtype='float')
def dataframe_from_hdf_filename(path):
f = h5py.File(path)
df = dataframe_from_hdf(f)
f.close()
return df
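# Hedged usage sketch (the file name and feature list below are
# illustrative assumptions, not taken from this repository):
#
#   features = ['t', 'bid', 'ask']
#   if complete_hdf_exists('eurusd.hdf', features):
#       df = dataframe_from_hdf_filename('eurusd.hdf')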
|
[
"alex.rubinsteyn@gmail.com"
] |
alex.rubinsteyn@gmail.com
|
febab4b2955536ed556e53abca2fbc70e4387f08
|
f0604a3a32177e6baa0fad2c01766c3e99df3fe6
|
/courator/config.py
|
91678558386039d8d473ee9a055c685c75b9b02c
|
[
"MIT"
] |
permissive
|
Courator/courator-backend
|
354390902ae6bc8faa17e47ef2c3596162423f52
|
726845a06c1be7693fd107bdf571ea40b7d398ec
|
refs/heads/master
| 2021-04-22T02:07:09.563157
| 2020-05-07T08:09:30
| 2020-05-07T08:09:30
| 249,842,057
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 585
|
py
|
import logging
from databases import DatabaseURL
from starlette.config import Config
from starlette.datastructures import Secret
from .logging import setup_logging
config = Config(".env")
DEBUG = config("DEBUG", cast=bool, default=False)
DATABASE_URL: DatabaseURL = config("DB_CONNECTION", cast=DatabaseURL)
SECRET_KEY: Secret = config("SECRET_KEY", cast=Secret)
TOKEN_EXPIRATION_DAYS: float = config("TOKEN_EXPIRATION_DAYS", cast=float, default=60.0)
TOKEN_ALGORITHM = "HS256"
setup_logging(
("uvicorn.asgi", "uvicorn.access"),
logging.DEBUG if DEBUG else logging.INFO
)
|
[
"matthew331199@gmail.com"
] |
matthew331199@gmail.com
|
3a196c69b9f2abbd039544758aa0e5f4ffeb1fc0
|
7a1b88d06ea18772b065b43d775cec6dd2acdf80
|
/1620.py
|
af8268a37b2479d7b8eb091095dcf56bd0c39388
|
[] |
no_license
|
skaurl/baekjoon-online-judge
|
28144cca45168e79b1ae0baa9a351f498f8d19ab
|
1620d298c2f429e03c5f9387d8aca13763f5c731
|
refs/heads/master
| 2023-07-26T10:07:29.724066
| 2021-09-07T09:21:02
| 2021-09-07T09:21:02
| 299,019,978
| 1
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 326
|
py
|
import sys
a,b = map(int,sys.stdin.readline().strip().split())
dict_1 = {}
dict_2 = {}
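# dict_1 maps name -> number and dict_2 maps number -> name; a query that
# parses as an int is answered from dict_2, any other query from dict_1.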
for i in range(a):
name = sys.stdin.readline().strip()
dict_1[name] = i+1
dict_2[i+1] = name
for i in range(b):
x = sys.stdin.readline().strip()
try:
print(dict_2[int(x)])
except:
print(dict_1[x])
|
[
"dr_lunars@naver.com"
] |
dr_lunars@naver.com
|
06df8306b20a7459428d2bec87c1891edbec67bc
|
15f321878face2af9317363c5f6de1e5ddd9b749
|
/solutions_python/Problem_95/1593.py
|
e3d0777f40977183a794b3eb8007eeb8c64e1509
|
[] |
no_license
|
dr-dos-ok/Code_Jam_Webscraper
|
c06fd59870842664cd79c41eb460a09553e1c80a
|
26a35bf114a3aa30fc4c677ef069d95f41665cc0
|
refs/heads/master
| 2020-04-06T08:17:40.938460
| 2018-10-14T10:12:47
| 2018-10-14T10:12:47
| null | 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 618
|
py
|
#!/usr/bin/env python
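# The fixed substitution table below looks like the decoding key for
# Code Jam 2012 Qualification Round A ("Speaking in Tongues"); the
# 'q' <-> 'z' pair is the one mapping deduced rather than read off the
# provided samples.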
mm = {'a': 'y', 'c': 'e', 'b': 'h', 'e': 'o', 'd': 's', 'g': 'v', 'f': 'c', 'i': 'd', 'h': 'x', 'k': 'i', 'j': 'u', 'm': 'l', 'l': 'g', 'o': 'k', 'n': 'b', 'p': 'r', 's': 'n', 'r': 't', 'u': 'j', 't': 'w', 'w': 'f', 'v': 'p', 'y': 'a', 'x': 'm', 'q': 'z','z':'q',' ': ' ','\n': ''}
def str_tran(inp):
string = ''
for i in inp:
string += mm[i]
return string
if __name__ == '__main__':
f = open('input')
a = int( f.readline() )
for case in range(a):
line = f.readline()
result = str_tran(line)
print "Case #"+str(case+1)+":", result
|
[
"miliar1732@gmail.com"
] |
miliar1732@gmail.com
|
f7b756861161b2a1d93f5522f0606c0fa0e8c1a9
|
09cd370cdae12eb45090033a00e9aae45ee26638
|
/STUDY/Graph Theory/18-43 어두운 길.py
|
ec19f71096e072352c9493057e1ec9be94080fd2
|
[] |
no_license
|
KWONILCHEOL/Python
|
ee340f6328945651eb29d2b23c425a92c84a4adb
|
1ea5f5f74894a5929e0e894c5c12f049b8eb9fb4
|
refs/heads/main
| 2023-04-11T09:36:54.874638
| 2021-04-24T04:29:12
| 2021-04-24T04:29:12
| 328,658,511
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 764
|
py
|
import sys
input = sys.stdin.readline
def find_parent(parent, x):
while parent[x] != x:
parent[x], x = parent[parent[x]], parent[x]
return x
def union_parent(parent, a, b):
a = find_parent(parent, a)
b = find_parent(parent, b)
if a < b:
parent[b] = a
else:
parent[a] = b
n, m = map(int, input().split())
parent = [i for i in range(n)]
edges = []
total = 0
for _ in range(m):
a, b, c = map(int, input().split())
edges.append((c,a,b))
total += c
edges.sort()
for c,a,b in edges:
if find_parent(parent, a) != find_parent(parent, b):
union_parent(parent, a, b)
total -= c
print(total)
# 7 11
# 0 1 7
# 0 3 5
# 1 2 8
# 1 3 9
# 1 4 7
# 2 4 5
# 3 4 15
# 3 5 6
# 4 5 8
# 4 6 9
# 5 6 11
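# Hand-checking Kruskal on the sample input above: the MST keeps edges of
# weight 5+5+6+7+7+9 = 39 out of the 90 total, so the program prints 51.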
|
[
"kwon6460@gmail.com"
] |
kwon6460@gmail.com
|
87952ccd9a2eb59e81b8c92ef355b23f757f7304
|
f445450ac693b466ca20b42f1ac82071d32dd991
|
/generated_tempdir_2019_09_15_163300/generated_part004399.py
|
ecb633d07fd632fb4d5e0042ffa3c812f780ceff
|
[] |
no_license
|
Upabjojr/rubi_generated
|
76e43cbafe70b4e1516fb761cabd9e5257691374
|
cd35e9e51722b04fb159ada3d5811d62a423e429
|
refs/heads/master
| 2020-07-25T17:26:19.227918
| 2019-09-15T15:41:48
| 2019-09-15T15:41:48
| 208,357,412
| 4
| 1
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 2,364
|
py
|
from sympy.abc import *
from matchpy.matching.many_to_one import CommutativeMatcher
from matchpy import *
from matchpy.utils import VariableWithCount
from collections import deque
from multiset import Multiset
from sympy.integrals.rubi.constraints import *
from sympy.integrals.rubi.utility_function import *
from sympy.integrals.rubi.rules.miscellaneous_integration import *
from sympy import *
class CommutativeMatcher141684(CommutativeMatcher):
_instance = None
patterns = {
0: (0, Multiset({0: 1}), [
(VariableWithCount('i2.2.1.0', 1, 1, S(1)), Mul)
])
}
subjects = {}
subjects_by_id = {}
bipartite = BipartiteGraph()
associative = Mul
max_optional_count = 1
anonymous_patterns = set()
def __init__(self):
self.add_subject(None)
@staticmethod
def get():
if CommutativeMatcher141684._instance is None:
CommutativeMatcher141684._instance = CommutativeMatcher141684()
return CommutativeMatcher141684._instance
@staticmethod
def get_match_iter(subject):
subjects = deque([subject]) if subject is not None else deque()
subst0 = Substitution()
# State 141683
if len(subjects) >= 1 and isinstance(subjects[0], Pow):
tmp1 = subjects.popleft()
subjects2 = deque(tmp1._args)
# State 141685
if len(subjects2) >= 1:
tmp3 = subjects2.popleft()
subst1 = Substitution(subst0)
try:
subst1.try_add_variable('i2.2.1.1', tmp3)
except ValueError:
pass
else:
pass
# State 141686
if len(subjects2) >= 1 and subjects2[0] == Integer(2):
tmp5 = subjects2.popleft()
# State 141687
if len(subjects2) == 0:
pass
# State 141688
if len(subjects) == 0:
pass
# 0: x**2
yield 0, subst1
subjects2.appendleft(tmp5)
subjects2.appendleft(tmp3)
subjects.appendleft(tmp1)
return
yield
from collections import deque
|
[
"franz.bonazzi@gmail.com"
] |
franz.bonazzi@gmail.com
|
41704a03c9c525e5742496757d48362c163126ef
|
26ae248d7f1ca16c51c4f34c1f67ef19be162a4e
|
/targAssign.py
|
5ac00997505f5b61d411830f332333dfd23ee9a2
|
[] |
no_license
|
csayres/astro598
|
0d87373904da8419b90665fb84d747cf49830ef6
|
676b7ae9ae08fbeca48ded0c6f980892e907972f
|
refs/heads/master
| 2020-11-24T05:42:13.775053
| 2019-12-14T09:22:10
| 2019-12-14T09:22:10
| 227,990,176
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 5,555
|
py
|
import time
from multiprocessing import Pool, cpu_count
import pickle
import numpy
import matplotlib.pyplot as plt
from keras.models import Sequential, load_model
from keras.layers import Dense
from kaiju import RobotGrid, utils
nHexDia = 7
xCoords, yCoords = utils.hexFromDia(nHexDia)
nPositioners = len(xCoords)
print("using %i positioners"%nPositioners)
nTargets = nPositioners
nProcs = 10 # processors to use for multiprocessing
batchSize = 100 # The number of samples to run through the network before the weights / gradient are updated
epochs = 5 # The number of times to iterate through the complete sample of training data
trainingRatio = 0.9
def getValidAssignments(seed):
"""seed is the random seed with which to initialze the RobotGrid
return dictionary keyed by positioner id with the coordinates of the
metrology fiber. These represent valid (non-collided) xy Fiber positions
for each robot
"""
rg = RobotGrid(seed=seed)
for ii, (xp, yp) in enumerate(zip(xCoords, yCoords)):
rg.addRobot(robotID=ii, xPos=xp, yPos=yp)
rg.initGrid()
# give all robots an initial (radom) target configuration
for robot in rg.robotDict.values():
# assigns a robot a target picked uniformly in xy
# from its patrol annulus
robot.setXYUniform()
# decollide any colliding robots so that we have a completely
# non-colliding target configuration
rg.decollideGrid()
targetPos = {}
for robot in rg.robotDict.values():
targetPos[robot.id] = robot.metFiberPos[:-1] # xy coord, drop the z
return targetPos
def generateAssignments(nSeeds):
p = Pool(nProcs)
tstart = time.time()
validAssignments = p.map(getValidAssignments, range(nSeeds))
tend = time.time() - tstart
print("took %.2f seconds"%tend)
p.close()
pickle.dump(validAssignments, open("validAssign_%i.p"%nSeeds, "wb"))
def target2NN(targetDict, shuffle=True):
y = numpy.zeros((nPositioners, nTargets)) #n x 2, robot x target
# shuffle targets
x = []
shuffledInds = numpy.arange(nPositioners)
if shuffle:
numpy.random.shuffle(shuffledInds)
for targetInd, robotInd in enumerate(shuffledInds):
target = targetDict[robotInd]
x.append(target[0]) # xvalue
x.append(target[1]) # yvalue
# x is flattened!
y[robotInd, targetInd] = 1
x = numpy.array(x)
# rows and columns of y sum to 1, total sums to nPositioners
# print("sum of y", numpy.sum(y, axis=0), numpy.sum(y, axis=1), numpy.sum(y))
y = y.flatten() # consider normalizing by something? sum of the array will be 547
return x, y
def form2NN(assignFile):
"""Take valid assignments from assignFile
Format for use with the NN. Shuffle input targets
"""
numpy.random.seed(547)
with open(assignFile, "rb") as f:
validAssignments = pickle.load(f)
# generate a big array
# use reshape (nTargs, 2) to get original array
X = [] # input n x [x1, y1, x2, y2, ... xn, yn]
Y = [] # output
for targetDict in validAssignments:
x, y = target2NN(targetDict)
X.append(x)
Y.append(y)
X = numpy.array(X)
Y = numpy.array(Y)
return X, Y
def runNN(X, Y):
"""X is input array xy coords
Y is output array, flattened nRobots x nTargets array indexing the answers
"""
# truncate?
# X = X[:10000,:]
# Y = Y[:10000,:]
# normalize
nTrials = X.shape[0]
nInputs = X.shape[1]
nHidden = int(nInputs*1.5)
nOutputs = Y.shape[1]
model = Sequential()
model.add(
Dense(nHidden,
activation="relu",
input_dim = nInputs,
))
model.add(
Dense(nOutputs, activation="softmax"))
model.summary()
model.compile(loss='categorical_crossentropy', # See: https://keras.io/losses/
optimizer='rmsprop', # See: https://keras.io/optimizers/
metrics=['accuracy']
)
    # split the data into training and testing; trainingRatio (0.9 here) goes towards training
split = int(numpy.floor(nTrials*trainingRatio))
X_train = X[:split, :]
Y_train = Y[:split, :]
X_test = X[split:, :]
Y_test = Y[split:, :]
history = model.fit(X_train, Y_train,
batch_size=batchSize, epochs=epochs,
verbose=1, validation_data=(X_test, Y_test))
model.save("targAssign.h5")
def compareModeled():
model = load_model('targAssign.h5')
newSeed = 2000000 # never used
ii = 0
for seed in range(newSeed, newSeed+10):
targDict = getValidAssignments(seed)
x, yTrue = target2NN(targDict, shuffle=False)
# import pdb; pdb.set_trace()
print("xhape", x.shape)
yFit = model.predict(numpy.array([x]), verbose=1)
yFit = yFit.reshape(nPositioners, nTargets)
yTrue = yTrue.reshape(nPositioners, nTargets)
plt.figure()
plt.imshow(yFit/numpy.sum(yFit))
plt.title("NN Model Fit %i"%ii)
plt.ylabel("Positioner Index")
plt.xlabel("Target Index")
plt.savefig("model_%i.png"%ii)
plt.close()
ii += 1
plt.figure()
plt.imshow(yTrue/numpy.sum(yTrue))
plt.title("True Assignment")
plt.ylabel("Positioner Index")
plt.xlabel("Target Index")
plt.savefig("true.png")
plt.close()
if __name__ == "__main__":
nSeeds = 1000000
generateAssignments(nSeeds)
X, Y = form2NN("validAssign_%i.p"%nSeeds)
runNN(X, Y)
compareModeled()
|
[
"csayres@uw.edu"
] |
csayres@uw.edu
|
63da4abf9140ef6028f7be93dad6d9462a3652ae
|
25ebc03b92df764ff0a6c70c14c2848a49fe1b0b
|
/daily/20200414/codes/output/code081.py
|
b7717f8dce6a21d9f95ef23b3b3ed26b31bdeef3
|
[] |
no_license
|
podhmo/individual-sandbox
|
18db414fafd061568d0d5e993b8f8069867dfcfb
|
cafee43b4cf51a321f4e2c3f9949ac53eece4b15
|
refs/heads/master
| 2023-07-23T07:06:57.944539
| 2023-07-09T11:45:53
| 2023-07-09T11:45:53
| 61,940,197
| 6
| 0
| null | 2022-10-19T05:01:17
| 2016-06-25T11:27:04
|
Python
|
UTF-8
|
Python
| false
| false
| 200
|
py
|
import pygal
chart = pygal.Line(stroke_style={'width': 5, 'dasharray': '3, 6', 'linecap': 'round', 'linejoin': 'round'})
chart.add('line', [.0002, .0005, .00035])
print(chart.render(is_unicode=True))
|
[
"ababjam61+github@gmail.com"
] |
ababjam61+github@gmail.com
|
403f0e4f49753a0aec4176cc3333a60bd7a59334
|
55ab64b67d8abc02907eb43a54ff6c326ded6b72
|
/scripts/addon_library/local/uvpackmaster3/overlay.py
|
9f6fb0255008525ae59e25abc9636f43e1684ffc
|
[
"MIT"
] |
permissive
|
Tilapiatsu/blender-custom_config
|
2f03b0bb234c3b098d2830732296d199c91147d0
|
00e14fc190ebff66cf50ff911f25cf5ad3529f8f
|
refs/heads/master
| 2023-08-16T14:26:39.990840
| 2023-08-16T01:32:41
| 2023-08-16T01:32:41
| 161,249,779
| 6
| 2
|
MIT
| 2023-04-12T05:33:59
| 2018-12-10T23:25:14
|
Python
|
UTF-8
|
Python
| false
| false
| 6,194
|
py
|
# ##### BEGIN GPL LICENSE BLOCK #####
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# ##### END GPL LICENSE BLOCK #####
import bpy
import blf
from .enums import OperationStatus, UvpmLogType
from .utils import in_debug_mode, print_backtrace, get_prefs
class TextOverlay:
def __init__(self, text, color):
self.coords = None
self.text = text
self.color = color
def set_coords(self, coords):
self.coords = coords
def draw(self, ov_manager):
if self.coords is None:
return
blf.color(ov_manager.font_id, *self.color)
region_coord = ov_manager.context.region.view2d.view_to_region(self.coords[0], self.coords[1])
blf.position(ov_manager.font_id, region_coord[0], region_coord[1], 0)
blf.draw(ov_manager.font_id, self.text)
class OverlayManager:
LINE_X_COORD = 10
LINE_Y_COORD = 35
LINE_TEXT_COLOR = (1, 1, 1, 1)
def __init__(self, context, callback):
prefs = get_prefs()
self.font_size = prefs.font_size_text_output
self.font_size_uv_overlay = prefs.font_size_uv_overlay
self.line_distance = int(float(25) / 15 * self.font_size)
self.font_id = 0
self.context = context
handler_args = (self, context)
self.__draw_handler = bpy.types.SpaceImageEditor.draw_handler_add(callback, handler_args, 'WINDOW', 'POST_PIXEL')
def finish(self):
if self.__draw_handler is not None:
bpy.types.SpaceImageEditor.draw_handler_remove(self.__draw_handler, 'WINDOW')
def print_text(self, coords, text, color, z_coord=0.0):
blf.size(self.font_id, self.font_size, 72)
blf.color(self.font_id, *color)
blf.position(self.font_id, coords[0], coords[1], z_coord)
blf.draw(self.font_id, text)
blf.color(self.font_id, *(0, 0, 0, 1))
def __print_text_inline(self, line_num, text, color):
x_coord = self.LINE_X_COORD
y_coord = self.LINE_Y_COORD + line_num * self.line_distance
self.print_text((x_coord, y_coord), text, color)
def print_text_inline(self, text, color=LINE_TEXT_COLOR):
self.__print_text_inline(self.next_line_num, text, color)
self.next_line_num += 1
def callback_begin(self):
self.next_line_num = 0
class EngineOverlayManager(OverlayManager):
WARNING_COLOR = (1, 0.4, 0, 1)
ERROR_COLOR = (1, 0, 0, 1)
DISABLED_DEVICE_COLOR_MULTIPLIER = 0.7
INTEND_STR = ' '
OPSTATUS_TO_COLOR = {
OperationStatus.ERROR : ERROR_COLOR,
OperationStatus.WARNING : WARNING_COLOR,
OperationStatus.CORRECT : OverlayManager.LINE_TEXT_COLOR
}
def __init__(self, op, dev_array):
super().__init__(op.p_context.context, engine_overlay_manager_draw_callback)
self.op = op
self.dev_array = dev_array
self.print_dev_progress = True
self.p_context = op.p_context
self.log_manager = op.log_manager
self.font_id = 0
def print_dev_array(self):
if self.dev_array is None:
return
for dev in reversed(self.dev_array):
dev_color = self.LINE_TEXT_COLOR
if self.print_dev_progress:
progress_str = "{}% ".format(dev.bench_entry.progress)
else:
progress_str = ''
if dev.settings.enabled:
dev_status = "{}(iterations: {})".format(progress_str, dev.bench_entry.iter_count)
else:
dev_status = 'disabled'
dev_color = tuple(self.DISABLED_DEVICE_COLOR_MULTIPLIER * c for c in dev_color)
self.print_text_inline("{}{}: {}".format(self.INTEND_STR, dev.name, dev_status), color=dev_color)
self.print_text_inline("[PACKING DEVICES]:")
def print_list(self, header, list, color):
for elem in reversed(list):
self.print_text_inline("{}* {}".format(self.INTEND_STR, elem), color=color)
self.print_text_inline("[{}]:".format(header), color=color)
def engine_overlay_manager_draw_callback(self, context):
try:
self.callback_begin()
status_str = self.log_manager.last_log(UvpmLogType.STATUS)
if status_str is None:
status_str = ''
status_color = self.OPSTATUS_TO_COLOR[self.log_manager.operation_status()]
hint_str = self.log_manager.last_log(UvpmLogType.HINT)
if hint_str:
status_str = "{} ({})".format(status_str, hint_str)
self.print_text_inline('[STATUS]: ' + status_str, color=status_color)
self.print_dev_array()
log_print_metadata = (\
(UvpmLogType.INFO, 'INFO'),
(UvpmLogType.WARNING,'WARNINGS'),
(UvpmLogType.ERROR, 'ERRORS')
)
for log_type, header in log_print_metadata:
op_status = self.log_manager.LOGTYPE_TO_OPSTATUS[log_type]
color = self.OPSTATUS_TO_COLOR[op_status]
log_list = self.log_manager.log_list(log_type)
if len(log_list) > 0:
self.print_list(header, log_list, color)
blf.size(self.font_id, self.font_size_uv_overlay, 72)
if self.p_context.p_islands is not None:
for p_island in self.p_context.p_islands:
overlay = p_island.overlay()
if overlay is not None:
overlay.draw(self)
except Exception as ex:
if in_debug_mode():
print_backtrace(ex)
|
[
"tilapiatsu@hotmail.fr"
] |
tilapiatsu@hotmail.fr
|
7a08037735251f82bbeb0a141dc986cc6be5b018
|
db9cc680a60997412eae035b257cc77efbcdcb06
|
/py3/leetcodeCN/tree/111. Minimum Depth of Binary Tree.py
|
32a79a0af7a51bccd7310bf7ea62463e9dd2d775
|
[] |
no_license
|
Azson/machineLearning
|
9630b62c73b2388a57c630644dae3ffa8e4db236
|
35662ddf39d322009f074ce8981e5f5d27786819
|
refs/heads/master
| 2022-05-06T07:03:23.543355
| 2021-08-20T14:57:25
| 2021-08-20T14:57:25
| 179,935,258
| 3
| 3
| null | 2019-11-04T14:26:51
| 2019-04-07T08:07:08
|
Python
|
UTF-8
|
Python
| false
| false
| 482
|
py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
def minDepth(root):
    # BFS level by level; the first leaf reached gives the minimum depth.
    if not root:
        return 0
    ls = [root]
    ans = 0
    while len(ls) > 0:
        ans += 1
        la = len(ls)
        for i in range(la):
            root = ls[i]
            # a leaf has neither child; a node with only one child is not
            # a leaf, so keep descending instead of returning early
            if not root.left and not root.right:
                return ans
            if root.left:
                ls.append(root.left)
            if root.right:
                ls.append(root.right)
        ls = ls[la:]
    return ans
if __name__ == '__main__':
pass
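    # Minimal local check; LeetCode normally supplies TreeNode, so this
    # tiny stand-in class is only an assumption for running the file alone.
    class TreeNode(object):
        def __init__(self, val):
            self.val = val
            self.left = None
            self.right = None
    root = TreeNode(1)
    root.left = TreeNode(2)
    root.left.left = TreeNode(3)
    print(minDepth(root))  # 3: the only leaf sits at depth 3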
|
[
"240326315@qq.com"
] |
240326315@qq.com
|
58efa655d1487cd5d3f26dbe92f3eb954bf9e877
|
ca42e62ce157095ace5fbaec0bf261a4fb13aa6a
|
/pyenv/lib/python3.6/site-packages/rest_framework/utils/serializer_helpers.py
|
4734332af888219ce4589dd089dcda9352ed0871
|
[
"Apache-2.0"
] |
permissive
|
ronald-rgr/ai-chatbot-smartguide
|
58f1e7c76b00248923f5fe85f87c318b45e38836
|
c9c830feb6b66c2e362f8fb5d147ef0c4f4a08cf
|
refs/heads/master
| 2021-04-18T03:15:23.720397
| 2020-03-23T17:55:47
| 2020-03-23T17:55:47
| 249,500,344
| 0
| 0
|
Apache-2.0
| 2021-04-16T20:45:28
| 2020-03-23T17:35:37
|
Python
|
UTF-8
|
Python
| false
| false
| 4,617
|
py
|
from __future__ import unicode_literals
import collections
from collections import OrderedDict
from django.utils.encoding import force_text
from rest_framework.compat import unicode_to_repr
class ReturnDict(OrderedDict):
"""
Return object from `serializer.data` for the `Serializer` class.
Includes a backlink to the serializer instance for renderers
to use if they need richer field information.
"""
def __init__(self, *args, **kwargs):
self.serializer = kwargs.pop('serializer')
super(ReturnDict, self).__init__(*args, **kwargs)
def copy(self):
return ReturnDict(self, serializer=self.serializer)
def __repr__(self):
return dict.__repr__(self)
def __reduce__(self):
# Pickling these objects will drop the .serializer backlink,
# but preserve the raw data.
return (dict, (dict(self),))
class ReturnList(list):
"""
Return object from `serializer.data` for the `SerializerList` class.
Includes a backlink to the serializer instance for renderers
to use if they need richer field information.
"""
def __init__(self, *args, **kwargs):
self.serializer = kwargs.pop('serializer')
super(ReturnList, self).__init__(*args, **kwargs)
def __repr__(self):
return list.__repr__(self)
def __reduce__(self):
# Pickling these objects will drop the .serializer backlink,
# but preserve the raw data.
return (list, (list(self),))
class BoundField(object):
"""
A field object that also includes `.value` and `.error` properties.
Returned when iterating over a serializer instance,
providing an API similar to Django forms and form fields.
"""
def __init__(self, field, value, errors, prefix=''):
self._field = field
self._prefix = prefix
self.value = value
self.errors = errors
self.name = prefix + self.field_name
def __getattr__(self, attr_name):
return getattr(self._field, attr_name)
@property
def _proxy_class(self):
return self._field.__class__
def __repr__(self):
return unicode_to_repr('<%s value=%s errors=%s>' % (
self.__class__.__name__, self.value, self.errors
))
def as_form_field(self):
value = '' if (self.value is None or self.value is False) else self.value
return self.__class__(self._field, value, self.errors, self._prefix)
class NestedBoundField(BoundField):
"""
This `BoundField` additionally implements __iter__ and __getitem__
in order to support nested bound fields. This class is the type of
`BoundField` that is used for serializer fields.
"""
def __init__(self, field, value, errors, prefix=''):
        if value is None or value == '':  # equality, not identity, for ''
value = {}
super(NestedBoundField, self).__init__(field, value, errors, prefix)
def __iter__(self):
for field in self.fields.values():
yield self[field.field_name]
def __getitem__(self, key):
field = self.fields[key]
value = self.value.get(key) if self.value else None
error = self.errors.get(key) if self.errors else None
if hasattr(field, 'fields'):
return NestedBoundField(field, value, error, prefix=self.name + '.')
return BoundField(field, value, error, prefix=self.name + '.')
def as_form_field(self):
values = {}
for key, value in self.value.items():
if isinstance(value, (list, dict)):
values[key] = value
else:
values[key] = '' if (value is None or value is False) else force_text(value)
return self.__class__(self._field, values, self.errors, self._prefix)
class BindingDict(collections.MutableMapping):
"""
This dict-like object is used to store fields on a serializer.
This ensures that whenever fields are added to the serializer we call
`field.bind()` so that the `field_name` and `parent` attributes
can be set correctly.
"""
def __init__(self, serializer):
self.serializer = serializer
self.fields = OrderedDict()
def __setitem__(self, key, field):
self.fields[key] = field
field.bind(field_name=key, parent=self.serializer)
def __getitem__(self, key):
return self.fields[key]
def __delitem__(self, key):
del self.fields[key]
def __iter__(self):
return iter(self.fields)
def __len__(self):
return len(self.fields)
def __repr__(self):
return dict.__repr__(self.fields)
|
[
"ronald.garcia@gmail.com"
] |
ronald.garcia@gmail.com
|
febdaae751915967a6fef3b5f718c6b4c230ab89
|
e23a4f57ce5474d468258e5e63b9e23fb6011188
|
/125_algorithms/008_graph_algorithms/_exercises/templates/Cracking Coding Interviews - Mastering Algorithms/clone-graph-bfs.py
|
08c345f3dc7c721c8efea80633a287a2e51fb103
|
[] |
no_license
|
syurskyi/Python_Topics
|
52851ecce000cb751a3b986408efe32f0b4c0835
|
be331826b490b73f0a176e6abed86ef68ff2dd2b
|
refs/heads/master
| 2023-06-08T19:29:16.214395
| 2023-05-29T17:09:11
| 2023-05-29T17:09:11
| 220,583,118
| 3
| 2
| null | 2023-02-16T03:08:10
| 2019-11-09T02:58:47
|
Python
|
UTF-8
|
Python
| false
| false
| 1,084
|
py
|
class Node:
    def __init__(self, val, neighbors):
        self.val = val
        self.neighbors = neighbors


# Example Input:
#
# 1 <---> 2
# ^       ^
# |       |
# v       v
# 4 <---> 3
#
# Example Output: a deep copy of the same graph.


def clone(node):
    """Clone an undirected graph with BFS; `visited` maps each original
    node to its copy so shared neighbors are reused, not duplicated."""
    queue = []      # LIST of original nodes still to process
    visited = {}    # DICT: original node -> cloned node

    queue.append(node)

    while len(queue) > 0:
        cur = queue.pop(0)

        new_node = None
        if cur in visited.keys():
            new_node = visited[cur]
        else:
            new_node = Node(cur.val, [])

        neighbors = new_node.neighbors
        visited[cur] = new_node

        for i in range(len(cur.neighbors)):
            if cur.neighbors[i] in visited.keys():
                neighbors.append(visited[cur.neighbors[i]])
            else:
                queue.append(cur.neighbors[i])
                new_neighbor_node = Node(cur.neighbors[i].val, [])
                neighbors.append(new_neighbor_node)
                visited[cur.neighbors[i]] = new_neighbor_node

    return visited[node]


node = Node(1, [])
node2 = Node(2, [])
node3 = Node(3, [])
node4 = Node(4, [])

node.neighbors.append(node2)
node.neighbors.append(node4)

node2.neighbors.append(node)
node2.neighbors.append(node3)

node3.neighbors.append(node2)
node3.neighbors.append(node4)

node4.neighbors.append(node)
node4.neighbors.append(node3)
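# A quick sanity check of clone(); this harness is illustrative and was
# not part of the original fill-in template.
cloned = clone(node)
assert cloned is not node
assert [n.val for n in cloned.neighbors] == [n.val for n in node.neighbors]
print("clone ok:", cloned.val, [n.val for n in cloned.neighbors])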
|
[
"sergejyurskyj@yahoo.com"
] |
sergejyurskyj@yahoo.com
|
9e484b15c88d0bde693c3cc2535cf40b53bbc640
|
5a0122509b4e7e15e556460d261d9d8a1cee76ad
|
/repository/base/docattachments_pb.py
|
ddad9c7ef5a420750f8e46c2777173bcc3458696
|
[] |
no_license
|
cash2one/BHWGoogleProject
|
cec4d5353f6ea83ecec0d0325747bed812283304
|
18ecee580e284705b642b88c8e9594535993fead
|
refs/heads/master
| 2020-12-25T20:42:08.612393
| 2013-04-13T14:01:37
| 2013-04-13T14:01:37
| null | 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 4,677
|
py
|
# This file automatically generated by protocol-compiler from repository/base/docattachments.proto
# DO NOT EDIT!
from google3.net.proto import ProtocolBuffer
import array
import thread
from google3.net.proto import _net_proto___parse__python
__pychecker__ = """maxreturns=0 maxbranches=0 no-callinit
unusednames=printElemNumber,debug_strs no-special"""
from google3.net.proto.message_set import MessageSet
class DocAttachments(ProtocolBuffer.ProtocolMessage):
def __init__(self, contents=None):
self.attachments_ = MessageSet()
self.has_attachments_ = 0
if contents is not None: self.MergeFromString(contents)
def attachments(self): return self.attachments_
def mutable_attachments(self): self.has_attachments_ = 1; return self.attachments_
def clear_attachments(self):self.has_attachments_ = 0; self.attachments_.Clear()
def has_attachments(self): return self.has_attachments_
def MergeFrom(self, x):
assert x is not self
if (x.has_attachments()): self.mutable_attachments().MergeFrom(x.attachments())
def _CMergeFromString(self, s):
_net_proto___parse__python.MergeFromString(self, 'DocAttachments', s)
def _CEncode(self):
return _net_proto___parse__python.Encode(self, 'DocAttachments')
def _CToASCII(self, output_format):
return _net_proto___parse__python.ToASCII(self, 'DocAttachments', output_format)
def ParseASCII(self, s):
_net_proto___parse__python.ParseASCII(self, 'DocAttachments', s)
def ParseASCIIIgnoreUnknown(self, s):
_net_proto___parse__python.ParseASCIIIgnoreUnknown(self, 'DocAttachments', s)
def Equals(self, x):
if x is self: return 1
if self.has_attachments_ != x.has_attachments_: return 0
if self.has_attachments_ and self.attachments_ != x.attachments_: return 0
return 1
def __eq__(self, other):
return (other is not None) and (other.__class__ == self.__class__) and self.Equals(other)
def __ne__(self, other):
return not (self == other)
def IsInitialized(self, debug_strs=None):
initialized = 1
if (not self.has_attachments_):
initialized = 0
if debug_strs is not None:
debug_strs.append('Required field: attachments not set.')
elif not self.attachments_.IsInitialized(debug_strs): initialized = 0
return initialized
def ByteSize(self):
n = 0
n += self.lengthString(self.attachments_.ByteSize())
return n + 1
def Clear(self):
self.clear_attachments()
def OutputUnchecked(self, out):
out.putVarInt32(10)
out.putVarInt32(self.attachments_.ByteSize())
self.attachments_.OutputUnchecked(out)
def TryMerge(self, d):
while d.avail() > 0:
tt = d.getVarInt32()
if tt == 10:
length = d.getVarInt32()
tmp = ProtocolBuffer.Decoder(d.buffer(), d.pos(), d.pos() + length)
d.skip(length)
self.mutable_attachments().TryMerge(tmp)
continue
# tag 0 is special: it's used to indicate an error.
# so if we see it we raise an exception.
if (tt == 0): raise ProtocolBuffer.ProtocolBufferDecodeError
d.skipData(tt)
def __str__(self, prefix="", printElemNumber=0):
res=""
if self.has_attachments_:
res+=prefix+"attachments <\n"
res+=self.attachments_.__str__(prefix + " ", printElemNumber)
res+=prefix+">\n"
return res
kattachments = 1
_TEXT = (
"ErrorCode", # 0
"attachments", # 1
)
_TYPES = (
ProtocolBuffer.Encoder.NUMERIC, # 0
ProtocolBuffer.Encoder.STRING, # 1
)
# stylesheet for XML output
_STYLE = \
""""""
_STYLE_CONTENT_TYPE = \
""""""
_SERIALIZED_DESCRIPTOR = array.array('B', [
0x5a,
0x24,
0x72,
0x65,
0x70,
0x6f,
0x73,
0x69,
0x74,
0x6f,
0x72,
0x79,
0x2f,
0x62,
0x61,
0x73,
0x65,
0x2f,
0x64,
0x6f,
0x63,
0x61,
0x74,
0x74,
0x61,
0x63,
0x68,
0x6d,
0x65,
0x6e,
0x74,
0x73,
0x2e,
0x70,
0x72,
0x6f,
0x74,
0x6f,
0x0a,
0x0e,
0x44,
0x6f,
0x63,
0x41,
0x74,
0x74,
0x61,
0x63,
0x68,
0x6d,
0x65,
0x6e,
0x74,
0x73,
0x13,
0x1a,
0x0b,
0x61,
0x74,
0x74,
0x61,
0x63,
0x68,
0x6d,
0x65,
0x6e,
0x74,
0x73,
0x20,
0x01,
0x28,
0x02,
0x30,
0x0b,
0x38,
0x02,
0x4a,
0x0a,
0x4d,
0x65,
0x73,
0x73,
0x61,
0x67,
0x65,
0x53,
0x65,
0x74,
0x14,
])
_net_proto___parse__python.RegisterType(_SERIALIZED_DESCRIPTOR.tostring())
__all__ = ['DocAttachments']
|
[
"nojfouldshere@gmail.com"
] |
nojfouldshere@gmail.com
|
6581711d6a4c030829ec7b03eb6558cac005f100
|
6bb4291e34598a83d1cd4631abd04ae00df5290b
|
/api/test/utils/test_utils.py
|
9b440e5c4479bc56943a720ec1d824546f1bb28c
|
[] |
no_license
|
Frost-Lee/order_scheduling
|
f1f2e69bc142a81869f70697b79cb7f2664d6b2e
|
825456c1b9a95011fe3530a2fb449dffd40f5246
|
refs/heads/main
| 2023-01-12T12:27:45.182521
| 2020-11-16T06:06:13
| 2020-11-16T06:06:13
| 312,620,540
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 659
|
py
|
import unittest
from scheduler.utils import utils
class TestUtils(unittest.TestCase):
def test_aggregate_tuples(self):
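        # Inferred from the cases below: aggregate_tuples groups tuples by
        # the key indices in its second argument and sums the values at the
        # aggregation index given by its third argument.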
with self.assertRaisesRegex(AssertionError, 'index out of range'):
utils.aggregate_tuples([('a', 'b', 1)], [2, 3], 0)
utils.aggregate_tuples([('a', 'b', 1)], [1, 2], 3)
        self.assertEqual(
            set(utils.aggregate_tuples([('a', 'b', 1), ('a', 'b', 2)], [0, 1], 2)),
            {('a', 'b', 3)}  # set(('a', 'b', 3)) would be {'a', 'b', 3}, not a set of one tuple
        )
        self.assertEqual(
            set(utils.aggregate_tuples([('a', 'b', 1), ('a', 'b', 2), ('b', 'c', 3)], [0, 1], 2)),
            {('a', 'b', 3), ('b', 'c', 3)}  # set() takes one iterable; two positional args raise TypeError
        )
|
[
"canchen.lee@gmail.com"
] |
canchen.lee@gmail.com
|
33c449dc19c9effe47b503bea32359f5b42fb142
|
f652cf4e0fa6fbfcca8d94cec5f942fd8bd021a0
|
/mbuild/__init__.py
|
bac5509ac8b9754a7fb64e88b372b76f045f4dc3
|
[
"MIT",
"LicenseRef-scancode-unknown-license-reference"
] |
permissive
|
Jonestj1/mbuild
|
83317ab3a53f40ff6c9c69f6be542b8562602eee
|
411cc60d3ef496fa26541bb0b7ea8dcf8c7449e4
|
refs/heads/master
| 2021-01-20T19:45:11.563610
| 2017-02-13T18:16:12
| 2017-02-13T18:16:12
| 32,886,030
| 0
| 0
| null | 2015-03-25T19:24:50
| 2015-03-25T19:24:50
| null |
UTF-8
|
Python
| false
| false
| 339
|
py
|
from mbuild.box import Box
from mbuild.coarse_graining import coarse_grain
from mbuild.coordinate_transform import *
from mbuild.compound import *
from mbuild.pattern import *
from mbuild.packing import *
from mbuild.port import Port
from mbuild.recipes import *
from mbuild.formats import *
from mbuild.version import version
|
[
"christoph.t.klein@me.com"
] |
christoph.t.klein@me.com
|
ea8f72a37588594d3881ecbe62617f136c7c6869
|
a0c782c69420f513bd2d0c0fcea896f732b05cb2
|
/account_bank_statement_advanced/res_partner_bank.py
|
5f1063e39f9b4a98c855f959b4764797ab92eb80
|
[] |
no_license
|
SVQTQ/noviat-apps
|
8b5116287681fabcefc5d456786c16c717de54ab
|
57ec751ccd4a3e32798ec8851c3501e809c09f91
|
refs/heads/8.0
| 2020-04-08T19:55:06.699350
| 2015-07-29T14:34:22
| 2015-07-29T14:34:22
| 32,148,042
| 0
| 0
| null | 2015-07-29T14:34:22
| 2015-03-13T09:45:48
|
Gettext Catalog
|
UTF-8
|
Python
| false
| false
| 3,843
|
py
|
# -*- encoding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
#
# Copyright (c) 2014-2015 Noviat nv/sa (www.noviat.com).
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
from openerp import models, api
import logging
_logger = logging.getLogger(__name__)
class res_partner_bank(models.Model):
_inherit = 'res.partner.bank'
def _acc_number_select(self, operator, number):
"""
The code below could be simplified if the Odoo standard
accounting modules would store bank account numbers
in the database in 'normalised' format (without spaces or
other formatting characters (such as '-').
"""
if operator in ['=', '=like', '=ilike']:
op = '='
else: # operator in ['like', 'ilike']
op = 'LIKE'
if len(number) == 12:
"""
Belgium BBAN is always 12 chars and subset of IBAN.
Hence we can retrieve the IBAN from a BBAN lookup.
TODO: extend logic to other countries
"""
select = \
"SELECT id FROM res_partner_bank WHERE " \
"(state='iban' AND SUBSTRING(acc_number FOR 2) = 'BE' AND " \
"REPLACE(acc_number, ' ', '') LIKE '%%'|| '%s' ||'%%' ) " \
% number
# other countries
if op == '=':
select += "OR " \
"REPLACE(REPLACE(acc_number, ' ', ''), '-','') = '%s'" \
% number
else:
select += "OR " \
"REPLACE(REPLACE(acc_number, ' ', ''), '-','') " \
"LIKE '%%'|| '%s' ||'%%' " \
% number
else:
if op == '=':
select = \
"SELECT id FROM res_partner_bank WHERE " \
"REPLACE(REPLACE(acc_number, ' ', ''), '-','') = '%s'" \
% number
else:
select = \
"SELECT id FROM res_partner_bank WHERE " \
"REPLACE(REPLACE(acc_number, ' ', ''), '-','') " \
"LIKE '%%'|| '%s' ||'%%' " \
% number
return select
@api.model
def search(self, args, offset=0, limit=None, order=None, count=False):
# _logger.warn('%s, search, args=%s', self._name, args)
for i, arg in enumerate(args):
if arg[0] == 'acc_number' and \
arg[1] in ['=', '=like', '=ilike', 'like', 'ilike']:
number = arg[2].replace(' ', '').replace('-', '').upper()
select = self._acc_number_select(arg[1], number)
self._cr.execute(select)
res = self._cr.fetchall()
if res:
rpb_ids = [x[0] for x in res]
args[i] = ['id', 'in', rpb_ids]
# _logger.warn('%s, search, args=%s', self._name, args)
return super(res_partner_bank, self).search(
args, offset, limit, order, count=count)
|
[
"luc.demeyer@noviat.com"
] |
luc.demeyer@noviat.com
|
a80777a871d9539635a4e13543e35fe55d86461d
|
7f9a73533b3678f0e83dc559dee8a37474e2a289
|
/aws-serverless-for-deep-learning-first-steps-workshop/notebooks/deep-learning-inference/PIL/PixarImagePlugin.py
|
5ea32ba89be5253e3ad0e8349be1bdcb42bb2494
|
[
"MIT"
] |
permissive
|
ryfeus/stepfunctions2processing
|
04a5e83ee9b74e029b79a3f19381ba6d9265fc48
|
0b74797402d39f4966cab278d9718bfaec3386c2
|
refs/heads/master
| 2022-10-08T16:20:55.459175
| 2022-09-09T05:54:47
| 2022-09-09T05:54:47
| 147,448,024
| 128
| 34
|
MIT
| 2022-01-04T18:56:47
| 2018-09-05T02:26:31
|
Python
|
UTF-8
|
Python
| false
| false
| 1,657
|
py
|
#
# The Python Imaging Library.
# $Id$
#
# PIXAR raster support for PIL
#
# history:
# 97-01-29 fl Created
#
# notes:
# This is incomplete; it is based on a few samples created with
# Photoshop 2.5 and 3.0, and a summary description provided by
# Greg Coats <gcoats@labiris.er.usgs.gov>. Hopefully, "L" and
# "RGBA" support will be added in future versions.
#
# Copyright (c) Secret Labs AB 1997.
# Copyright (c) Fredrik Lundh 1997.
#
# See the README file for information on usage and redistribution.
#
from . import Image, ImageFile
from ._binary import i16le as i16
#
# helpers
def _accept(prefix):
return prefix[:4] == b"\200\350\000\000"
##
# Image plugin for PIXAR raster images.
class PixarImageFile(ImageFile.ImageFile):
format = "PIXAR"
format_description = "PIXAR raster image"
def _open(self):
# assuming a 4-byte magic label
s = self.fp.read(4)
if s != b"\200\350\000\000":
raise SyntaxError("not a PIXAR file")
# read rest of header
s = s + self.fp.read(508)
self._size = i16(s[418:420]), i16(s[416:418])
# get channel/depth descriptions
mode = i16(s[424:426]), i16(s[426:428])
if mode == (14, 2):
self.mode = "RGB"
# FIXME: to be continued...
# create tile descriptor (assuming "dumped")
self.tile = [("raw", (0, 0) + self.size, 1024, (self.mode, 0, 1))]
#
# --------------------------------------------------------------------
Image.register_open(PixarImageFile.format, PixarImageFile, _accept)
Image.register_extension(PixarImageFile.format, ".pxr")
|
[
"ryfeus@gmail.com"
] |
ryfeus@gmail.com
|
cbb69a91ee6be1562c57094de5515967c74944d9
|
123d26781801473dc59d8be847dbac79d4b555df
|
/configs/swin/mask_rcnn_swin_tiny_patch4_window7_mstrain_480-800_adamw_3x_coco_fashion.py
|
8c91f897f95d7b08f2b9d121d6a71c791be11601
|
[
"Apache-2.0"
] |
permissive
|
jireh-father/CBNetV2
|
c2ed5358a81dde7dff3b50614371afe6045553c0
|
0b62f8107d72691a02efb7c92fc6dfcf5d0d0262
|
refs/heads/main
| 2023-07-17T16:42:55.126767
| 2021-08-31T10:34:15
| 2021-08-31T10:34:15
| 398,468,051
| 0
| 0
|
Apache-2.0
| 2021-08-21T04:47:14
| 2021-08-21T04:47:14
| null |
UTF-8
|
Python
| false
| false
| 3,037
|
py
|
_base_ = [
'../_base_/models/mask_rcnn_swin_fpn_fashion.py',
'../_base_/datasets/coco_instance_fashion.py',
'../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
]
model = dict(
backbone=dict(
embed_dim=96,
depths=[2, 2, 6, 2],
num_heads=[3, 6, 12, 24],
window_size=7,
ape=False,
drop_path_rate=0.2,
patch_norm=True,
use_checkpoint=False
),
neck=dict(in_channels=[96, 192, 384, 768]))
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
# augmentation strategy originates from DETR / Sparse RCNN
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
dict(type='RandomFlip', flip_ratio=0.5),
dict(type='AutoAugment',
policies=[
[
dict(type='Resize',
img_scale=[(480, 1333), (512, 1333), (544, 1333), (576, 1333),
(608, 1333), (640, 1333), (672, 1333), (704, 1333),
(736, 1333), (768, 1333), (800, 1333)],
multiscale_mode='value',
keep_ratio=True)
],
[
dict(type='Resize',
img_scale=[(400, 1333), (500, 1333), (600, 1333)],
multiscale_mode='value',
keep_ratio=True),
dict(type='RandomCrop',
crop_type='absolute_range',
crop_size=(384, 600),
allow_negative_crop=True),
dict(type='Resize',
img_scale=[(480, 1333), (512, 1333), (544, 1333),
(576, 1333), (608, 1333), (640, 1333),
(672, 1333), (704, 1333), (736, 1333),
(768, 1333), (800, 1333)],
multiscale_mode='value',
override=True,
keep_ratio=True)
]
]),
dict(type='Normalize', **img_norm_cfg),
dict(type='Pad', size_divisor=32),
dict(type='DefaultFormatBundle'),
dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']),
]
data = dict(train=dict(pipeline=train_pipeline))
optimizer = dict(_delete_=True, type='AdamW', lr=0.0001, betas=(0.9, 0.999), weight_decay=0.05,
paramwise_cfg=dict(custom_keys={'absolute_pos_embed': dict(decay_mult=0.),
'relative_position_bias_table': dict(decay_mult=0.),
'norm': dict(decay_mult=0.)}))
lr_config = dict(step=[27, 33])
runner = dict(type='EpochBasedRunnerAmp', max_epochs=36)
# do not use mmdet version fp16
fp16 = None
optimizer_config = dict(
type="DistOptimizerHook",
update_interval=1,
grad_clip=None,
coalesce=True,
bucket_size_mb=-1,
use_fp16=True,
)
|
[
"seoilgun@gmail.com"
] |
seoilgun@gmail.com
|
4b3aa7a1ee58238dd8a25b2a149447be16633036
|
64267b1f7ca193b0fab949089b86bc7a60e5b859
|
/slehome/manage.py
|
1cb83a303a492fa808560a2831d6104bd01a8931
|
[] |
no_license
|
hongdangodori/slehome
|
6a9f2b4526c2783932627b982df0540762570bff
|
3e558c78c3943dadf0ec485738a0cc98dea64353
|
refs/heads/master
| 2021-01-17T12:00:34.221088
| 2015-02-06T13:44:00
| 2015-02-06T13:44:00
| 28,847,585
| 1
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 250
|
py
|
#!/usr/bin/env python
import os
import sys
if __name__ == "__main__":
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "slehome.settings")
from django.core.management import execute_from_command_line
execute_from_command_line(sys.argv)
|
[
"chungdangogo@gmail.com"
] |
chungdangogo@gmail.com
|
76ac2d60a69c2c9463da4ae6c4547c5b867dd6e8
|
50948d4cb10dcb1cc9bc0355918478fb2841322a
|
/azure-mgmt-network/azure/mgmt/network/v2018_12_01/models/topology_association.py
|
7832c4a1904ba784c50b3cf5aae97796eb260dd0
|
[
"MIT"
] |
permissive
|
xiafu-msft/azure-sdk-for-python
|
de9cd680b39962702b629a8e94726bb4ab261594
|
4d9560cfd519ee60667f3cc2f5295a58c18625db
|
refs/heads/master
| 2023-08-12T20:36:24.284497
| 2019-05-22T00:55:16
| 2019-05-22T00:55:16
| 187,986,993
| 1
| 0
|
MIT
| 2020-10-02T01:17:02
| 2019-05-22T07:33:46
|
Python
|
UTF-8
|
Python
| false
| false
| 1,586
|
py
|
# coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is
# regenerated.
# --------------------------------------------------------------------------
from msrest.serialization import Model
class TopologyAssociation(Model):
"""Resources that have an association with the parent resource.
:param name: The name of the resource that is associated with the parent
resource.
:type name: str
:param resource_id: The ID of the resource that is associated with the
parent resource.
:type resource_id: str
:param association_type: The association type of the child resource to the
parent resource. Possible values include: 'Associated', 'Contains'
:type association_type: str or
~azure.mgmt.network.v2018_12_01.models.AssociationType
"""
_attribute_map = {
'name': {'key': 'name', 'type': 'str'},
'resource_id': {'key': 'resourceId', 'type': 'str'},
'association_type': {'key': 'associationType', 'type': 'str'},
}
def __init__(self, **kwargs):
super(TopologyAssociation, self).__init__(**kwargs)
self.name = kwargs.get('name', None)
self.resource_id = kwargs.get('resource_id', None)
self.association_type = kwargs.get('association_type', None)
|
[
"lmazuel@microsoft.com"
] |
lmazuel@microsoft.com
|
e5c53766c994f5d150cd47187531f1339035c92b
|
065acd70109d206c4021954e68c960a631a6c5e3
|
/shot_detector/utils/collections/sliding_windows/__init__.py
|
dbbbfbf2e08e76dc001cf31e3d17b64c790ba048
|
[] |
permissive
|
w495/python-video-shot-detector
|
bf2e3cc8175687c73cd01cf89441efc349f58d4d
|
617ff45c9c3c96bbd9a975aef15f1b2697282b9c
|
refs/heads/master
| 2022-12-12T02:29:24.771610
| 2017-05-15T00:38:22
| 2017-05-15T00:38:22
| 37,352,923
| 20
| 3
|
BSD-3-Clause
| 2022-11-22T01:15:45
| 2015-06-13T01:33:27
|
Python
|
UTF-8
|
Python
| false
| false
| 347
|
py
|
# -*- coding: utf8 -*-
"""
Different kinds of sliding windows
"""
from __future__ import absolute_import, division, print_function
from .base_sliding_window import BaseSlidingWindow
from .delayed_sliding_window import DelayedSlidingWindow
from .repeated_sliding_window import RepeatedSlidingWindow
from .sliding_window import SlidingWindow
|
[
"w@w-495.ru"
] |
w@w-495.ru
|
494209f5626eff8613f8403f2084829f49a30c87
|
1554150a9720ebf35cd11c746f69169b595dca10
|
/package_package/package/model/fuzzy_number.py
|
b64b535dfc851ec40ee6a38917dddbbf78b72a3a
|
[] |
no_license
|
andrewili/shape-grammar-engine
|
37a809f8cf78b133f8f1c3f9cf13a7fbbb564713
|
2859d8021442542561bdd1387deebc85e26f2d03
|
refs/heads/master
| 2021-01-18T22:46:51.221257
| 2016-05-31T21:15:28
| 2016-05-31T21:15:28
| 14,129,359
| 1
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 2,214
|
py
|
import numpy as np
almost_equal = np.allclose
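# np.allclose uses rtol=1e-05 and atol=1e-08 by default, so values that
# differ only by floating-point round-off compare as equal below.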
class FuzzyNumber(object):
def __init__(self, number_in):
"""Receives:
number_in num
"""
method_name = '__init__'
try:
if not self._is_a_number(number_in):
raise TypeError
except TypeError:
message = "The argument must be a number"
self.__class__._print_error_message(method_name, message)
else:
self.value = number_in
def _is_a_number(self, x):
"""Receives:
x object
Returns:
value boolean. True if x is an int, a float, an
np.int64, or an np.float64. False otherwise
"""
value = False
if (type(x) == int or
type(x) == float or
type(x) == np.int64 or
type(x) == np.float64
):
value = True
return value
def __eq__(self, other):
return almost_equal(self.value, other.value)
def __ge__(self, other):
return (
almost_equal(self.value, other.value) or
self.value > other.value)
def __gt__(self, other):
if almost_equal(self.value, other.value):
value = False
elif self.value > other.value:
value = True
else:
value = False
return value
def __le__(self, other):
return(
almost_equal(self.value, other.value) or
self.value < other.value)
def __lt__(self, other):
if almost_equal(self.value, other.value):
value = False
elif self.value < other.value:
value = True
else:
value = False
return value
def __ne__(self, other):
return not almost_equal(self.value, other.value)
### utility
@classmethod
def _print_error_message(cls, method_name, message):
print '%s.%s:\n %s' % (cls.__name__, method_name, message)
### represent
def __str__(self):
return str(self.value)
if __name__ == '__main__':
import doctest
doctest.testfile('tests/fuzzy_number_test.txt')
|
[
"i@andrew.li"
] |
i@andrew.li
|
a9297cfbbfe53a5bdce5b575f72cd5880abbafce
|
2154d0221e29a86850a1b83e4302f6e3e3f7fa5d
|
/thread_example/simple_thread_example.py
|
6f5917fe97dff2fcaaacbf683f71542762a6a5f6
|
[] |
no_license
|
aaqqxx/simple_for_life
|
3b8805c6791da6a3a7f42c069dc1ee7d2b8d3649
|
9ad6d61a56216d04250cd89aeaeda63c11942d0a
|
refs/heads/master
| 2020-04-04T09:18:59.396540
| 2015-04-28T11:22:55
| 2015-04-28T11:22:55
| 20,906,518
| 1
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 268
|
py
|
#!/usr/bin/env python
# coding:utf-8
__author__ = 'XingHua'
"""
"""
import time, thread
def timer():
print('hello')
def test():
for i in range(0, 10):
thread.start_new_thread(timer, ())
if __name__ == '__main__':
test()
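    # keep the main thread alive: threads started with thread.start_new_thread
    # are killed when the main thread exits, so the sleep gives them time to run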
time.sleep(10)
|
[
"aaqqxx1910@gmail.com"
] |
aaqqxx1910@gmail.com
|
3922747c24aeae6863311beb748f65358b035f73
|
62e58c051128baef9452e7e0eb0b5a83367add26
|
/edifact/D95A/DIRDEBD95AUN.py
|
5415a9c13741232f73c37ae3b49aa4c18660d498
|
[] |
no_license
|
dougvanhorn/bots-grammars
|
2eb6c0a6b5231c14a6faf194b932aa614809076c
|
09db18d9d9bd9d92cefbf00f1c0de1c590fe3d0d
|
refs/heads/master
| 2021-05-16T12:55:58.022904
| 2019-05-17T15:22:23
| 2019-05-17T15:22:23
| 105,274,633
| 0
| 0
| null | 2017-09-29T13:21:21
| 2017-09-29T13:21:21
| null |
UTF-8
|
Python
| false
| false
| 4,507
|
py
|
#Generated by bots open source edi translator from UN-docs.
from bots.botsconfig import *
from edifact import syntax
from recordsD95AUN import recorddefs
structure = [
{ID: 'UNH', MIN: 1, MAX: 1, LEVEL: [
{ID: 'BGM', MIN: 1, MAX: 1},
{ID: 'DTM', MIN: 1, MAX: 1},
{ID: 'BUS', MIN: 0, MAX: 1},
{ID: 'RFF', MIN: 0, MAX: 2, LEVEL: [
{ID: 'DTM', MIN: 0, MAX: 1},
]},
{ID: 'FII', MIN: 0, MAX: 5, LEVEL: [
{ID: 'CTA', MIN: 0, MAX: 1},
{ID: 'COM', MIN: 0, MAX: 5},
]},
{ID: 'NAD', MIN: 0, MAX: 3, LEVEL: [
{ID: 'CTA', MIN: 0, MAX: 1},
{ID: 'COM', MIN: 0, MAX: 5},
]},
{ID: 'LIN', MIN: 1, MAX: 9999, LEVEL: [
{ID: 'DTM', MIN: 0, MAX: 1},
{ID: 'RFF', MIN: 0, MAX: 2},
{ID: 'BUS', MIN: 0, MAX: 1},
{ID: 'FCA', MIN: 0, MAX: 1},
{ID: 'MOA', MIN: 0, MAX: 1, LEVEL: [
{ID: 'CUX', MIN: 0, MAX: 1},
{ID: 'DTM', MIN: 0, MAX: 2},
{ID: 'RFF', MIN: 0, MAX: 1},
]},
{ID: 'FII', MIN: 1, MAX: 1, LEVEL: [
{ID: 'CTA', MIN: 0, MAX: 1},
{ID: 'COM', MIN: 0, MAX: 5},
]},
{ID: 'NAD', MIN: 0, MAX: 3, LEVEL: [
{ID: 'CTA', MIN: 0, MAX: 1},
{ID: 'COM', MIN: 0, MAX: 5},
]},
{ID: 'INP', MIN: 0, MAX: 1, LEVEL: [
{ID: 'FTX', MIN: 0, MAX: 1},
{ID: 'DTM', MIN: 0, MAX: 2},
]},
{ID: 'GIS', MIN: 0, MAX: 10, LEVEL: [
{ID: 'MOA', MIN: 0, MAX: 1},
{ID: 'LOC', MIN: 0, MAX: 2},
{ID: 'NAD', MIN: 0, MAX: 1},
{ID: 'RCS', MIN: 0, MAX: 1},
{ID: 'FTX', MIN: 0, MAX: 10},
]},
{ID: 'PRC', MIN: 0, MAX: 1, LEVEL: [
{ID: 'FTX', MIN: 1, MAX: 1},
]},
{ID: 'SEQ', MIN: 1, MAX: 9999, LEVEL: [
{ID: 'MOA', MIN: 1, MAX: 1},
{ID: 'DTM', MIN: 0, MAX: 1},
{ID: 'RFF', MIN: 0, MAX: 3},
{ID: 'PAI', MIN: 0, MAX: 1},
{ID: 'FCA', MIN: 0, MAX: 1},
{ID: 'FII', MIN: 0, MAX: 3, LEVEL: [
{ID: 'CTA', MIN: 0, MAX: 1},
{ID: 'COM', MIN: 0, MAX: 5},
]},
{ID: 'NAD', MIN: 0, MAX: 3, LEVEL: [
{ID: 'CTA', MIN: 0, MAX: 1},
{ID: 'COM', MIN: 0, MAX: 5},
]},
{ID: 'INP', MIN: 0, MAX: 3, LEVEL: [
{ID: 'FTX', MIN: 0, MAX: 1},
{ID: 'DTM', MIN: 0, MAX: 2},
]},
{ID: 'GIS', MIN: 0, MAX: 10, LEVEL: [
{ID: 'MOA', MIN: 0, MAX: 1},
{ID: 'LOC', MIN: 0, MAX: 2},
{ID: 'NAD', MIN: 0, MAX: 1},
{ID: 'RCS', MIN: 0, MAX: 1},
{ID: 'FTX', MIN: 0, MAX: 10},
]},
{ID: 'PRC', MIN: 0, MAX: 1, LEVEL: [
{ID: 'FTX', MIN: 0, MAX: 5},
{ID: 'DOC', MIN: 0, MAX: 9999, LEVEL: [
{ID: 'MOA', MIN: 0, MAX: 5},
{ID: 'DTM', MIN: 0, MAX: 5},
{ID: 'RFF', MIN: 0, MAX: 5},
{ID: 'NAD', MIN: 0, MAX: 2},
{ID: 'CUX', MIN: 0, MAX: 5, LEVEL: [
{ID: 'DTM', MIN: 0, MAX: 1},
]},
{ID: 'AJT', MIN: 0, MAX: 100, LEVEL: [
{ID: 'MOA', MIN: 0, MAX: 1},
{ID: 'RFF', MIN: 0, MAX: 1},
{ID: 'FTX', MIN: 0, MAX: 5},
]},
{ID: 'DLI', MIN: 0, MAX: 1000, LEVEL: [
{ID: 'MOA', MIN: 1, MAX: 5},
{ID: 'PIA', MIN: 0, MAX: 5},
{ID: 'DTM', MIN: 0, MAX: 5},
{ID: 'CUX', MIN: 0, MAX: 5, LEVEL: [
{ID: 'DTM', MIN: 0, MAX: 1},
]},
{ID: 'AJT', MIN: 0, MAX: 10, LEVEL: [
{ID: 'MOA', MIN: 1, MAX: 1},
{ID: 'RFF', MIN: 0, MAX: 1},
{ID: 'FTX', MIN: 0, MAX: 5},
]},
]},
]},
{ID: 'GIS', MIN: 0, MAX: 1, LEVEL: [
{ID: 'MOA', MIN: 0, MAX: 5},
]},
]},
]},
]},
{ID: 'CNT', MIN: 0, MAX: 5},
{ID: 'AUT', MIN: 0, MAX: 5, LEVEL: [
{ID: 'DTM', MIN: 0, MAX: 1},
]},
{ID: 'UNT', MIN: 1, MAX: 1},
]},
]
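# Hedged addition, not part of the generated grammar: a minimal sketch that
# walks the nested ``structure`` above and prints each segment with its
# nesting depth, assuming the botsconfig constants (ID, MIN, MAX, LEVEL)
# are plain dict keys, as the literals above suggest.
def _print_segments(segments, depth=0):
    for seg in segments:
        print('%s%s (min=%s, max=%s)' % ('    ' * depth, seg[ID], seg[MIN], seg[MAX]))
        if LEVEL in seg:
            _print_segments(seg[LEVEL], depth + 1)
if __name__ == '__main__':
    _print_segments(structure)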
|
[
"jason.capriotti@gmail.com"
] |
jason.capriotti@gmail.com
|
9f5061630659beed761f1e56fb5a1083b3bb3c3d
|
234c46d1249c9209f268417a19018afc12e378b4
|
/tests/modules/transformer/activation_layer_test.py
|
2af0338a92e9723143c9b963856628980b4971bc
|
[
"Apache-2.0"
] |
permissive
|
allenai/allennlp
|
1f4bcddcb6f5ce60c7ef03a9a3cd6a38bdb987cf
|
80fb6061e568cb9d6ab5d45b661e86eb61b92c82
|
refs/heads/main
| 2023-07-07T11:43:33.781690
| 2022-11-22T00:42:46
| 2022-11-22T00:42:46
| 91,356,408
| 12,257
| 2,712
|
Apache-2.0
| 2022-11-22T00:42:47
| 2017-05-15T15:52:41
|
Python
|
UTF-8
|
Python
| false
| false
| 804
|
py
|
import torch
import pytest
from allennlp.common import Params
from allennlp.modules.transformer import ActivationLayer
@pytest.fixture
def params_dict():
return {
"hidden_size": 5,
"intermediate_size": 3,
"activation": "relu",
}
@pytest.fixture
def params(params_dict):
return Params(params_dict)
@pytest.fixture
def activation_layer(params):
return ActivationLayer.from_params(params.duplicate())
def test_can_construct_from_params(activation_layer, params_dict):
assert activation_layer.dense.in_features == params_dict["hidden_size"]
assert activation_layer.dense.out_features == params_dict["intermediate_size"]
def test_forward_runs(activation_layer):
activation_layer.forward(torch.randn(7, 5))
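# Hedged addition, not in the upstream test file: since dense maps
# hidden_size -> intermediate_size (asserted above), a (7, 5) input should
# plausibly yield a (7, 3) output. A sketch of that check:
def test_forward_output_shape(activation_layer, params_dict):
    output = activation_layer.forward(torch.randn(7, params_dict["hidden_size"]))
    assert output.shape == (7, params_dict["intermediate_size"])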
|
[
"noreply@github.com"
] |
allenai.noreply@github.com
|
074aeca3d97502ed60c27a33d1803a45293f210c
|
c1ea75db1da4eaa485d39e9d8de480b6ed0ef40f
|
/app/api/app.py
|
bafb5b35fd82a3d3b5865aa651d5ecb12186e978
|
[
"Apache-2.0"
] |
permissive
|
gasbarroni8/VideoCrawlerEngine
|
a4f092b0a851dc0487e4dcf4c98b62d6282a6180
|
994933d91d85bb87ae8dfba1295f7a69f6d50097
|
refs/heads/master
| 2023-04-06T07:59:29.269894
| 2021-02-10T16:09:15
| 2021-02-10T16:09:15
| null | 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 1,259
|
py
|
from fastapi import FastAPI
from fastapi.staticfiles import StaticFiles
from fastapi.responses import HTMLResponse
from .routers import include_routers
from app.helper.middleware import include_exception_handler
from helper.conf import get_conf
from .helper import read_html_file
from app.helper.middleware.proxy import ReverseProxyMiddleware
from ..helper.middleware import include_middleware
from urllib.parse import urljoin
import os
app = FastAPI()
conf = get_conf('app')
htmldist = {
'static': os.path.join(conf.html['dist'], 'static'),
'index': os.path.join(conf.html['dist'], 'index.html')
}
app.mount(
'/static',
StaticFiles(directory=htmldist['static']),
name='dist'
)
include_routers(app)
include_exception_handler(app)
proxy_pass_configures = [
{
'source': '/api/task/',
'pass': urljoin(
conf.taskflow['gateway'].geturl(),
'/api/v1/task/'
),
}, {
'source': '/api/script/',
'pass': urljoin(
conf.script['gateway'].geturl(),
'/api/v1/script/'
),
}
]
include_middleware(app, ReverseProxyMiddleware(proxy_pass_configures))
@app.get('/')
async def index():
return HTMLResponse(read_html_file(htmldist['index']))
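# Hedged usage note, not part of the original file: a FastAPI app defined in
# app/api/app.py is typically served with uvicorn (module path assumed):
#
#   uvicorn app.api.app:app --host 0.0.0.0 --port 8000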
|
[
"zzsaim@163.com"
] |
zzsaim@163.com
|
13a95e0835eddd0fa3db784494dd57177d13927b
|
8fcdcec1bf0f194d23bba4acd664166a04dc128f
|
/packages/grid_control_update.py
|
d21546c84a97777c2b4b0811e15519a89314cefb
|
[] |
no_license
|
grid-control/grid-control
|
e51337dd7e5d158644a8da35923443fb0d232bfb
|
1f5295cd6114f3f18958be0e0618ff6b35aa16d7
|
refs/heads/master
| 2022-11-13T13:29:13.226512
| 2021-10-01T14:37:59
| 2021-10-01T14:37:59
| 13,805,261
| 32
| 30
| null | 2023-02-19T16:22:47
| 2013-10-23T14:39:28
|
Python
|
UTF-8
|
Python
| false
| false
| 1,227
|
py
|
#!/usr/bin/env python
# | Copyright 2014-2017 Karlsruhe Institute of Technology
# |
# | Licensed under the Apache License, Version 2.0 (the "License");
# | you may not use this file except in compliance with the License.
# | You may obtain a copy of the License at
# |
# | http://www.apache.org/licenses/LICENSE-2.0
# |
# | Unless required by applicable law or agreed to in writing, software
# | distributed under the License is distributed on an "AS IS" BASIS,
# | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# | See the License for the specific language governing permissions and
# | limitations under the License.
import os, sys
def update_plugin_files():
base_dir = os.path.abspath(os.path.dirname(__file__))
sys.path.append(base_dir)
from hpfwk.hpf_plugin import create_plugin_file
def _select(path):
for pat in ['/share', '_compat_', '/requests', '/xmpp']:
if pat in path:
return False
return True
package_list = os.listdir(base_dir)
package_list.sort()
for package in package_list:
package = os.path.abspath(os.path.join(base_dir, package))
if os.path.isdir(package):
create_plugin_file(package, _select)
if __name__ == '__main__':
update_plugin_files()
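# Hedged usage note, not part of the original script: run it from the
# checkout root as ``python grid_control_update.py``; the _select() filter
# above skips /share, _compat_, /requests and /xmpp paths while regenerating
# each package's plugin file.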
|
[
"stober@cern.ch"
] |
stober@cern.ch
|
0817e8833e06cdbb3dc7357bbdcedcc83fb04a46
|
73fcadae6177ab973f1aa3ffe874ac3fadb52312
|
/server/fta/utils/i18n.py
|
4f91cd88e9509cabeab6ce284564a7f4a93d9ea7
|
[
"MIT",
"BSD-3-Clause",
"LicenseRef-scancode-unknown-license-reference",
"BSL-1.0",
"Apache-2.0"
] |
permissive
|
huang1125677925/fta
|
352cd587aaca3d3149516345559d420c41d1caf4
|
a50a3c498c39b14e7df4a0a960c2a1499b1ec6bb
|
refs/heads/master
| 2023-03-18T16:08:40.904716
| 2019-02-22T09:35:23
| 2019-02-22T09:35:23
| null | 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 5,745
|
py
|
# -*- coding: utf-8 -*-
"""
Tencent is pleased to support the open source community by making 蓝鲸智云PaaS平台社区版 (BlueKing PaaS Community Edition) available.
Copyright (C) 2017-2018 THL A29 Limited, a Tencent company. All rights reserved.
Licensed under the MIT License (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://opensource.org/licenses/MIT
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
""" # noqa
import logging
import os.path
import arrow
import pytz as tz
from babel import support
from fta.utils.lazy import LazyString
logger = logging.getLogger(__name__)
class Singleton(type):
_instances = {}
def __call__(cls, *args, **kwargs):
if cls not in cls._instances:
cls._instances[cls] = super(Singleton, cls).__call__(*args, **kwargs)
return cls._instances[cls]
class I18N(object):
__metaclass__ = Singleton
def __init__(self):
        # Globally unique; changing these switches the language and timezone
self.cc_biz_id = None
from fta import settings
self.default_locale = settings.DEFAULT_LOCALE
self.default_timezone = settings.DEFAULT_TIMEZONE
self.translations = {}
self.domain = None
def set_biz(self, cc_biz_id):
"""change biz method
"""
self.cc_biz_id = cc_biz_id
@property
def translation_directories(self):
"""翻译文件夹
"""
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
yield os.path.join(BASE_DIR, 'locale')
def locale_best_match(self, locale):
"""兼容不同编码
"""
if locale.lower() in ['zh', 'zh_cn', 'zh-cn']:
return 'zh_Hans_CN'
return 'en'
def get_locale(self):
"""
        Resolve the locale from the business ID.
"""
if not self.cc_biz_id:
return self.default_locale
try:
from project.utils import query_cc
locale = query_cc.get_app_by_id(self.cc_biz_id).get('Language')
if locale:
return self.locale_best_match(locale)
else:
return self.default_locale
except Exception:
return self.default_locale
def get_timezone(self):
try:
timezone = self._get_timezone()
except Exception:
timezone = tz.timezone(self.default_timezone)
return timezone
def _get_timezone(self):
"""
        Resolve the timezone from the business ID.
"""
if not self.cc_biz_id:
return self.default_timezone
try:
from project.utils import query_cc
timezone = query_cc.get_app_by_id(self.cc_biz_id).get('TimeZone')
if timezone:
return timezone
else:
return self.default_timezone
except Exception:
return self.default_timezone
def get_translations(self):
"""get translation on the fly
"""
locale = self.get_locale()
if locale not in self.translations:
translations = support.Translations()
for dirname in self.translation_directories:
catalog = support.Translations.load(
dirname,
[locale],
self.domain,
)
translations.merge(catalog)
if hasattr(catalog, 'plural'):
translations.plural = catalog.plural
logger.info('load translations, %s=%s', locale, translations)
self.translations[locale] = translations
return self.translations[locale]
i18n = I18N()
def gettext(string, **variables):
"""replace stdlib
"""
t = i18n.get_translations()
if t is None:
return string if not variables else string % variables
s = t.ugettext(string)
return s if not variables else s % variables
def ngettext(singular, plural, n):
t = i18n.get_translations()
if t is None:
return singular
s = t.ngettext(singular, plural, n)
return s
def lazy_gettext(string, **variables):
"""Like :func:`gettext` but the string returned is lazy which means
it will be translated when it is used as an actual string.
Example::
hello = lazy_gettext(u'Hello World')
@app.route('/')
def index():
return unicode(hello)
"""
return LazyString(gettext, string, **variables)
_ = gettext
def arrow_localtime(value, timezone=None):
"""value必须是UTC时间, arrow转换成本地时间
"""
value = arrow.get(value).replace(tzinfo="utc")
if not timezone:
timezone = i18n.get_timezone()
value = value.to(timezone)
return value
def localtime(value, timezone=None):
"""value必须是UTC时间, datetime格式
"""
value = arrow_localtime(value, timezone)
value = value.datetime
return value
def arrow_now():
"""当前时区时间, arrow格式
"""
utcnow = arrow.utcnow()
timezone = i18n.get_timezone()
return utcnow.to(timezone)
def now():
"""当前时间, datetime格式
"""
return arrow_now().datetime
def lazy_join(iterable, word):
value = ''
is_first = True
for i in iterable:
if is_first:
value = value + i
is_first = False
else:
value = value + word + i
return value
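# Hedged usage sketch, not part of the original module: switch the singleton
# to a business ID, after which gettext and the time helpers follow that
# business's locale and timezone (the ID below is illustrative).
if __name__ == '__main__':
    i18n.set_biz(100)        # hypothetical business ID
    print(_('Hello World'))  # translated via the loaded catalogs
    print(now())             # current time in that business's timezone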
|
[
"mycyzs@163.com"
] |
mycyzs@163.com
|
cef06aa93427891f9e1de15f76de7e4aa063276f
|
48ba8d0788e4ac7d4cacd7e7a2e2cf4f391c85ad
|
/Apple/rectangle_overlap.py
|
2fe9ccc6190a3b2c9e19a0c9399b0cd7700fb388
|
[] |
no_license
|
rahulvshinde/Python_Playground
|
c28ac2dc0865e254caa5360c3bb97b4ff5f23b3a
|
7a03b765dd440654caba1e06af5b149f584e9f08
|
refs/heads/master
| 2023-04-19T17:25:55.993837
| 2021-05-17T01:15:30
| 2021-05-17T01:15:30
| 280,736,898
| 2
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 886
|
py
|
"""
A rectangle is represented as a list [x1, y1, x2, y2], where (x1, y1) are the coordinates of its bottom-left corner,
and (x2, y2) are the coordinates of its top-right corner.
Two rectangles overlap if the area of their intersection is positive. To be clear, two rectangles that only touch at
the corner or edges do not overlap.
Given two (axis-aligned) rectangles, return whether they overlap.
Example 1:
Input: rec1 = [0,0,2,2], rec2 = [1,1,3,3]
Output: true
Example 2:
Input: rec1 = [0,0,1,1], rec2 = [1,0,2,1]
Output: false
Notes:
Both rectangles rec1 and rec2 are lists of 4 integers.
All coordinates in rectangles will be between -10^9 and 10^9.
"""
# rec1 = [0,0,2,2]
# rec2 = [1,1,3,3]
rec1 = [0,0,1,1]
rec2 = [1,0,2,1]
def rectOverlap(rec1, rec2):
    # Strict inequalities: rectangles that only touch at an edge or corner do not overlap
    return rec1[0] < rec2[2] and rec2[0] < rec1[2] and rec1[1] < rec2[3] and rec2[1] < rec1[3]
print(rectOverlap(rec1, rec2))
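# Hedged addition, not in the original file: sanity checks covering both
# examples from the problem statement (edge contact must not count as overlap).
assert rectOverlap([0, 0, 2, 2], [1, 1, 3, 3])      # Example 1: True
assert not rectOverlap([0, 0, 1, 1], [1, 0, 2, 1])  # Example 2: False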
|
[
"r.shinde2007@gmail.com"
] |
r.shinde2007@gmail.com
|
fdb935308c84e6e8df3718a147bb41f284314a06
|
5be8b0f2ee392abeee6970e7a6364ac9a5b8ceaa
|
/xiaojian/forth_phase/Django./day03/exersice/exersice/wsgi.py
|
3d56e7a2957ad8662abfa9118725486dff7fda08
|
[] |
no_license
|
Wellsjian/20180826
|
424b65f828f0174e4d568131da01dafc2a36050a
|
0156ad4db891a2c4b06711748d2624080578620c
|
refs/heads/master
| 2021-06-18T12:16:08.466177
| 2019-09-01T10:06:44
| 2019-09-01T10:06:44
| 204,462,572
| 0
| 1
| null | 2021-04-20T18:26:03
| 2019-08-26T11:38:09
|
JavaScript
|
UTF-8
|
Python
| false
| false
| 394
|
py
|
"""
WSGI config for exersice project.
It exposes the WSGI callable as a module-level variable named ``application``.
For more information on this file, see
https://docs.djangoproject.com/en/1.11/howto/deployment/wsgi/
"""
import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "exersice.settings")
application = get_wsgi_application()
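# Hedged usage note, not part of the original file: in production this module
# is typically served by a WSGI server, e.g. (module path assumed):
#
#   gunicorn exersice.wsgi:application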
|
[
"1149158963@qq.com"
] |
1149158963@qq.com
|
8bbb7896a9faa5e12fd9ed8815e374e5c0f9b90b
|
61afe17201589a61c39429602ca11e3fdacf47a9
|
/Chapter3/Day19/12.异常细分(了解).py
|
53a37128647faf14441789776924fc9aa2b738f8
|
[] |
no_license
|
Liunrestrained/Python-
|
ec09315c50b395497dd9b0f83219fef6355e9b21
|
6b2cb4ae74c59820c6eabc4b0e98961ef3b941b2
|
refs/heads/main
| 2023-07-17T14:16:12.084304
| 2021-08-28T14:05:12
| 2021-08-28T14:05:12
| 399,408,426
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 734
|
py
|
import requests
from requests import exceptions
while True:
    url = input("Download URL: ")
    try:
        res = requests.get(url=url)
        print(res)
    except exceptions.MissingSchema as e:  # fine-grained handling
        print("The URL is missing a scheme")
    except exceptions.InvalidSchema as e:  # fine-grained handling
        print("The URL scheme is invalid")
    except exceptions.InvalidURL as e:  # fine-grained handling
        print("The URL format is invalid")
    except exceptions.ConnectionError as e:  # fine-grained handling
        print("Network connection error")
    except Exception as e:  # catch-all fallback
        print("The code raised an error", e)
# Tip: to keep things simple, a single Exception handler is actually enough.
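# Hedged sketch of that simpler variant (not in the original file): one
# catch-all handler instead of the per-exception branches above.
#
#   try:
#       res = requests.get(url=url)
#       print(res)
#   except Exception as e:
#       print("The code raised an error", e)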
|
[
"noreply@github.com"
] |
Liunrestrained.noreply@github.com
|